Title | Authors | Venue | Cited by | Year
SEISMIC: A self-exciting point process model for predicting tweet popularity | Q Zhao, MA Erdogdu, HY He, A Rajaraman, J Leskovec | Proceedings of the 21st ACM SIGKDD international conference on knowledge … | 776 | 2015
Convergence rates of sub-sampled Newton methods | MA Erdogdu, A Montanari | Advances in Neural Information Processing Systems, 1090-1098 | 183 | 2015
High-dimensional asymptotics of feature learning: How one gradient step improves the representation | J Ba, MA Erdogdu, T Suzuki, Z Wang, D Wu, G Yang | Advances in Neural Information Processing Systems 35, 37932-37946 | 134 | 2022
Global non-convex optimization with discretized diffusions | MA Erdogdu, L Mackey, O Shamir | Advances in Neural Information Processing Systems 31 | 124 | 2018
Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev | S Chewi, MA Erdogdu, MB Li, R Shen, M Zhang | Foundations of Computational Mathematics | 116 | 2024
Generalization of two-layer neural networks: An asymptotic viewpoint | J Ba, M Erdogdu, T Suzuki, D Wu, T Zhang | International Conference on Learning Representations | 90 | 2020
Manipulating SGD with data ordering attacks | I Shumailov, Z Shumaylov, D Kazhdan, Y Zhao, N Papernot, MA Erdogdu, ... | Advances in Neural Information Processing Systems 34, 18021-18032 | 87 | 2021
Convergence rates of active learning for maximum likelihood estimation | K Chaudhuri, SM Kakade, P Netrapalli, S Sanghavi | Advances in Neural Information Processing Systems 28 | 85 | 2015
Stochastic Runge-Kutta accelerates Langevin Monte Carlo and beyond | X Li, Y Wu, L Mackey, MA Erdogdu | Advances in Neural Information Processing Systems 32 | 78 | 2019
On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness | MA Erdogdu, R Hosseinzadeh | Conference on Learning Theory, 1776-1822 | 75 | 2021
Towards a theory of non-log-concave sampling: First-order stationarity guarantees for Langevin Monte Carlo | K Balasubramanian, S Chewi, MA Erdogdu, A Salim, S Zhang | Conference on Learning Theory, 2896-2923 | 74 | 2022
Estimating lasso risk and noise level | M Bayati, MA Erdogdu, A Montanari | Advances in Neural Information Processing Systems 26 | 71 | 2013
Hausdorff dimension, heavy tails, and generalization in neural networks | U Simsekli, O Sener, G Deligiannidis, MA Erdogdu | Advances in Neural Information Processing Systems | 66* | 2020
Neural networks efficiently learn low-dimensional representations with SGD | A Mousavi-Hosseini, S Park, M Girotti, I Mitliagkas, MA Erdogdu | International Conference on Learning Representations | 57 | 2022
An analysis of constant step size SGD in the non-convex regime: Asymptotic normality and bias | L Yu, K Balasubramanian, S Volgushev, MA Erdogdu | Advances in Neural Information Processing Systems | 50 | 2021
Convergence rates of stochastic gradient descent under infinite noise variance | H Wang, M Gurbuzbalaban, L Zhu, U Simsekli, MA Erdogdu | Advances in Neural Information Processing Systems 34, 18866-18877 | 44 | 2021
Normal approximation for stochastic gradient descent via non-asymptotic rates of martingale CLT | A Anastasiou, K Balasubramanian, MA Erdogdu | Conference on Learning Theory, 115-137 | 43 | 2019
Convergence of Langevin Monte Carlo in chi-squared and Rényi divergence | MA Erdogdu, R Hosseinzadeh, MS Zhang | International Conference on Artificial Intelligence and Statistics | 41 | 2021
Convergence rate of block-coordinate maximization Burer–Monteiro method for solving large SDPs | MA Erdogdu, A Ozdaglar, PA Parrilo, ND Vanli | Mathematical Programming 195 (1), 243-281 | 40 | 2022
Understanding the variance collapse of SVGD in high dimensions | J Ba, MA Erdogdu, M Ghassemi, S Sun, T Suzuki, D Wu, T Zhang | International Conference on Learning Representations | 40* | 2021