Yi Zhang
Senior Researcher at Microsoft Research Redmond
Verified email at microsoft.com - Homepage
Title · Cited by · Year
Sparks of artificial general intelligence: Early experiments with gpt-4
S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ...
arXiv preprint arXiv:2303.12712, 2023
Cited by 1733 · 2023
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
arXiv preprint arXiv:1703.00573, 2017
Cited by 764 · 2017
Stronger generalization bounds for deep nets via a compression approach
S Arora, R Ge, B Neyshabur, Y Zhang
International Conference on Machine Learning, 254-263, 2018
Cited by 654 · 2018
Convolutional neural networks with low-rank regularization
C Tai, T Xiao, Y Zhang, X Wang
arXiv preprint arXiv:1511.06067, 2015
Cited by 509 · 2015
Deep visual analogy-making
SE Reed, Y Zhang, Y Zhang, H Lee
Advances in neural information processing systems 28, 2015
Cited by 342 · 2015
Do GANs actually learn the distribution? An empirical study
S Arora, Y Zhang
arXiv preprint arXiv:1706.08224, 2017
Cited by 187 · 2017
Do GANs learn the distribution? some theory and empirics
S Arora, A Risteski, Y Zhang
International Conference on Learning Representations, 2018
Cited by 170 · 2018
Textbooks are all you need
S Gunasekar, Y Zhang, J Aneja, CCT Mendes, A Del Giorno, S Gopi, ...
arXiv preprint arXiv:2306.11644, 2023
Cited by 164 · 2023
Spectral filtering for general linear dynamical systems
E Hazan, H Lee, K Singh, C Zhang, Y Zhang
Advances in Neural Information Processing Systems 31, 2018
Cited by 94 · 2018
Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
R Kuditipudi, X Wang, H Lee, Y Zhang, Z Li, W Hu, S Arora, R Ge
arXiv preprint arXiv:1906.06247, 2019
Cited by 84 · 2019
Towards Understanding the Invertibility of Convolutional Neural Networks
AC Gilbert, Y Zhang, K Lee, Y Zhang, H Lee
arXiv preprint arXiv:1705.08664, 2017
Cited by 75 · 2017
Efficient full-matrix adaptive regularization
N Agarwal, B Bullins, X Chen, E Hazan, K Singh, C Zhang, Y Zhang
International Conference on Machine Learning, 102-110, 2019
Cited by 58 · 2019
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Y Zhang, O Plevrakis, SS Du, X Li, Z Song, S Arora
arXiv preprint arXiv:2002.06668, 2020
Cited by 50 · 2020
What makes convolutional models great on long sequence modeling?
Y Li, T Cai, Y Zhang, D Chen, D Dey
arXiv preprint arXiv:2210.09298, 2022
Cited by 49 · 2022
Why are convolutional nets more sample-efficient than fully-connected nets?
Z Li, Y Zhang, S Arora
arXiv preprint arXiv:2010.08515, 2020
Cited by 48 · 2020
Unveiling transformers with lego: a synthetic reasoning task
Y Zhang, A Backurs, S Bubeck, R Eldan, S Gunasekar, T Wagner
arXiv preprint arXiv:2206.04301, 2022
Cited by 45 · 2022
Calibration, Entropy Rates, and Memory in Language Models
M Braverman, X Chen, SM Kakade, K Narasimhan, C Zhang, Y Zhang
arXiv preprint arXiv:1906.05664, 2019
Cited by 33 · 2019
Towards provable control for unknown linear dynamical systems
S Arora, E Hazan, H Lee, K Singh, C Zhang, Y Zhang
Cited by 26 · 2018
Not-So-Random Features
B Bullins, C Zhang, Y Zhang
arXiv preprint arXiv:1710.10230, 2017
Cited by 25* · 2017
Phi-2: The surprising power of small language models
M Javaheripi, S Bubeck, M Abdin, J Aneja, S Bubeck, CCT Mendes, ...
Microsoft Research Blog, 2023
Cited by 24 · 2023
Articles 1–20