
Gabriel Huang

PhD candidate
Mila & University of Montreal
Part-time research intern at Element AI


I am a PhD candidate at Mila & University of Montreal under the supervision of Simon Lacoste-Julien. I am interested in generative learning, latent-variable models, structured prediction, optimal transport, weakly-supervised learning, reinforcement learning, convex optimization, music generation, and fundamental questions of optimization and statistical learning. Previously, I completed the MVA master's degree in machine learning at École Normale Supérieure in Paris, in parallel with an engineering degree at CentraleSupélec, one of the top engineering schools in France. During my master's, I also worked part-time as a research apprentice on human activity recognition with RGB-D cameras and on recommender systems.

Negative Momentum

Below is an interactive visualization of our paper Negative Momentum for Improved Game Dynamics:
(a) Learning rate (lr) and momentum (beta) hyperparameters.
(b) Resulting eigenvalues in the complex plane for SGD and SGD+momentum.

The iterates converge if and only if all eigenvalues lie inside the convergence ball (green).
Try to find hyperparameters that lead to convergence.

[Interactive demo: panel (a) shows the hyperparameter controls, panel (b) the eigenvalues in the complex plane. Example settings compare SGD without momentum to SGD with momentum and indicate whether the eigenvalues land inside the convergence ball.]
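The convergence criterion behind the demo can also be checked numerically. Below is a minimal sketch in Python with NumPy (not the paper's code): it forms the linearized update operator of simultaneous gradient descent with heavy-ball momentum around an equilibrium and reports its spectral radius; the iterates converge locally if and only if that radius is below 1. The 2x2 Jacobian, learning rate, and momentum values are illustrative choices, not taken from the paper.

import numpy as np

def update_operator(J, lr, beta):
    """Matrix M such that [d_{t+1}; d_t] = M [d_t; d_{t-1}] for the update
    x_{t+1} = x_t - lr * v(x_t) + beta * (x_t - x_{t-1}),
    where J is the Jacobian of the game vector field v at the equilibrium."""
    n = J.shape[0]
    I = np.eye(n)
    top = np.hstack([(1 + beta) * I - lr * J, -beta * I])
    bottom = np.hstack([I, np.zeros((n, n))])
    return np.vstack([top, bottom])

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Toy Jacobian with eigenvalues 0.1 +/- 1i (mixed cooperative/adversarial game).
J = np.array([[0.1, 1.0],
              [-1.0, 0.1]])

for lr, beta in [(0.3, 0.0), (0.3, -0.5)]:  # plain SGD vs. negative momentum
    rho = spectral_radius(update_operator(J, lr, beta))
    print(f"lr={lr}, beta={beta}: spectral radius = {rho:.3f} "
          f"({'converges' if rho < 1 else 'diverges'})")

For this particular toy Jacobian, beta = 0 gives a spectral radius slightly above 1 while beta = -0.5 brings it below 1, mirroring the kind of comparison the interactive demo illustrates.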

Publications and preprints

[paper] Multimodal Pretraining for Dense Video Captioning
Gabriel Huang, Bo Pang, Zhenhai Zhu, Clara Rivera, Radu Soricut
Introduces the Video Timeline Tags (ViTT) dataset.
AACL-IJCNLP 2020.
[arXiv] Are Few-shot Learning Benchmarks too Simple?
Gabriel Huang, Hugo Larochelle, Simon Lacoste-Julien
ICLR 2019 Workshop.
[paper] Negative Momentum for Improved Game Dynamics
Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Gabriel Huang, Rémi Lepriol, Simon Lacoste-Julien, Ioannis Mitliagkas
AISTATS 2019.
[arXiv] Parametric Adversarial Divergences are Good Task Losses for Generative Modeling
Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien
ICML 2017 Workshop, ICLR 2018 Workshop, Montreal AI Symposium 2018.
[paper] Scattering Networks for Hybrid Representation Learning
Edouard Oyallon, Sergey Zagoruyko, Gabriel Huang, Nikos Komodakis, Simon Lacoste-Julien, Matthew Blaschko, Eugene Belilovsky
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2018.

Thin-8 dataset

The Thin-8 dataset consists of 1585 grayscale images of the handwritten digit 8, at a resolution of 512x512 pixels.
Sixteen people were asked to draw the digit 8 about 100 times each, using a pen on a tablet PC running Microsoft Windows.
The data was collected in October 2017 at the University of Montreal.
Download the Thin-8 dataset here
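As a starting point, here is a minimal loading sketch in Python; the directory name, the per-file PNG format, and the *.png pattern are assumptions about how the archive unpacks, so adjust them to the actual layout.

from pathlib import Path

import numpy as np
from PIL import Image

def load_thin8(root="thin8"):
    """Load every image found under `root` into a (N, 512, 512) uint8 array."""
    paths = sorted(Path(root).rglob("*.png"))
    return np.stack([np.asarray(Image.open(p).convert("L")) for p in paths])

# images = load_thin8()
# print(images.shape)  # expected: (1585, 512, 512)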

If you use the Thin-8 dataset, please cite our paper:

@article{huang2018parametric,
  title={Parametric Adversarial Divergences are Good Task Losses for Generative Modeling},
  author={Huang, Gabriel and Berard, Hugo and Touati, Ahmed and Gidel, Gauthier and Vincent, Pascal and Lacoste-Julien, Simon},
  journal={arXiv preprint arXiv:1708.02511},
  year={2017}
}

Thanks to Alex, Akram, Aristide, David, Dendi, Eugene, Jae, Joao, Liam, Rémi, Rosemary, Shawn, Sina, and Xing for scribbling all those samples!

Contact

Email: gabriel.huang@umontreal.ca
In person: Mila, 6666 St-Urbain, #200, Montreal, QC, H2S 3H1, Canada