Praneeth Netrapalli
Microsoft Research
Verified email at microsoft.com
Title · Cited by · Year
Low-rank matrix completion using alternating minimization
P Jain, P Netrapalli, S Sanghavi
Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013
985 · 2013
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
International Conference on Machine Learning, 1724-1732, 2017
569 · 2017
Phase retrieval using alternating minimization
P Netrapalli, P Jain, S Sanghavi
IEEE Transactions on Signal Processing 63 (18), 4814-4826, 2015
560 · 2015
Non-convex robust PCA
P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain
arXiv preprint arXiv:1410.7660, 2014
279 · 2014
Learning the graph of epidemic cascades
P Netrapalli, S Sanghavi
ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012
198 · 2012
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
173 · 2018
Learning sparsely used overcomplete dictionaries via alternating minimization
A Agarwal, A Anandkumar, P Jain, P Netrapalli
SIAM Journal on Optimization 26 (4), 2775-2799, 2016
162 · 2016
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, M Jordan
International Conference on Machine Learning, 4880-4889, 2020
147* · 2020
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on Learning Theory, 1147-1164, 2016
121 · 2016
Information-theoretic thresholds for community detection in sparse networks
J Banks, C Moore, J Neeman, P Netrapalli
Conference on Learning Theory, 383-416, 2016
114* · 2016
Learning sparsely used overcomplete dictionaries
A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon
Conference on Learning Theory, 123-137, 2014
112 · 2014
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
P Jain, S Kakade, R Kidambi, P Netrapalli, A Sidford
Journal of Machine Learning Research 18, 2018
107* · 2018
Faster eigenvector computation via shift-and-invert preconditioning
D Garber, E Hazan, C Jin, C Musco, P Netrapalli, A Sidford
International Conference on Machine Learning, 2626-2634, 2016
107* · 2016
Accelerating stochastic gradient descent for least squares regression
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Conference On Learning Theory, 545-604, 2018
99 · 2018
MOReL: Model-based offline reinforcement learning
R Kidambi, A Rajeswaran, P Netrapalli, T Joachims
arXiv preprint arXiv:2005.05951, 2020
95 · 2020
Fast exact matrix completion with finite samples
P Jain, P Netrapalli
Conference on Learning Theory, 1007-1034, 2015
90 · 2015
A clustering approach to learning sparsely used overcomplete dictionaries
A Agarwal, A Anandkumar, P Netrapalli
IEEE Transactions on Information Theory 63 (1), 575-592, 2016
86* · 2016
Provable efficient online matrix completion via non-convex stochastic gradient descent
C Jin, SM Kakade, P Netrapalli
Advances in Neural Information Processing Systems 29, 4520-4528, 2016
86 · 2016
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
Journal of the ACM (JACM) 68 (2), 1-29, 2021
80* · 2021
Efficient algorithms for smooth minimax optimization
KK Thekumparampil, P Jain, P Netrapalli, S Oh
arXiv preprint arXiv:1907.01543, 2019
79 · 2019
Articles 1–20