Yamini Bansal
Verified email at g.harvard.edu
Title / Cited by / Year
On the information bottleneck theory of deep learning
AM Saxe, Y Bansal, J Dapello, M Advani, A Kolchinsky, BD Tracey, ...
Journal of Statistical Mechanics: Theory and Experiment 2019 (12), 124020, 2019
Cited by 212 · 2019
Deep Double Descent: Where Bigger Models and More Data Hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
arXiv preprint arXiv:1912.02292, 2019
Cited by 157 · 2019
Minnorm training: an algorithm for training over-parameterized deep neural networks
Y Bansal, M Advani, DD Cox, AM Saxe
arXiv preprint arXiv:1806.00730, 2018
Cited by 15* · 2018
For self-supervised learning, Rationality implies generalization, provably
Y Bansal, G Kaplun, B Barak
arXiv preprint arXiv:2010.08508, 2020
Cited by 2 · 2020
Distributional Generalization: A New Kind of Generalization
P Nakkiran, Y Bansal
arXiv preprint arXiv:2009.08092, 2020
Cited by 1 · 2020
Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling
A Srivastava, Y Bansal, Y Ding, C Hurwitz, K Xu, B Egger, P Sattigeri, ...
arXiv preprint arXiv:2010.13187, 2020
2020