Yamini Bansal
Verified email at g.harvard.edu
Cited by
On the information bottleneck theory of deep learning
AM Saxe, Y Bansal, J Dapello, M Advani, A Kolchinsky, BD Tracey, ...
Journal of Statistical Mechanics: Theory and Experiment 2019 (12), 124020, 2019
Deep Double Descent: Where Bigger Models and More Data Hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
arXiv preprint arXiv:1912.02292, 2019
Minnorm training: an algorithm for training over-parameterized deep neural networks
Y Bansal, M Advani, DD Cox, AM Saxe
arXiv preprint arXiv:1806.00730, 2018
For self-supervised learning, Rationality implies generalization, provably
Y Bansal, G Kaplun, B Barak
arXiv preprint arXiv:2010.08508, 2020
Distributional Generalization: A New Kind of Generalization
P Nakkiran, Y Bansal
arXiv preprint arXiv:2009.08092, 2020
Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling
A Srivastava, Y Bansal, Y Ding, C Hurwitz, K Xu, B Egger, P Sattigeri, ...
arXiv preprint arXiv:2010.13187, 2020