Blake Woodworth
Title
Cited by
Year
Implicit regularization in matrix factorization
S Gunasekar, B Woodworth, S Bhojanapalli, B Neyshabur, N Srebro
2018 Information Theory and Applications Workshop (ITA), 1-10, 2018
Cited by 190 · 2018
Learning non-discriminatory predictors
B Woodworth, S Gunasekar, MI Ohannessian, N Srebro
Conference on Learning Theory, 1920-1953, 2017
Cited by 182 · 2017
Tight complexity bounds for optimizing composite objectives
B Woodworth, N Srebro
arXiv preprint arXiv:1605.08003, 2016
Cited by 157 · 2016
Graph oracle models, lower bounds, and gaps for parallel stochastic optimization
B Woodworth, J Wang, A Smith, B McMahan, N Srebro
arXiv preprint arXiv:1805.10222, 2018
Cited by 65 · 2018
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Cited by 64* · 2020
Lower bounds for non-convex stochastic optimization
Y Arjevani, Y Carmon, JC Duchi, DJ Foster, N Srebro, B Woodworth
arXiv preprint arXiv:1912.02365, 2019
Cited by 62 · 2019
Training well-generalizing classifiers for fairness metrics and other data-dependent constraints
A Cotter, M Gupta, H Jiang, N Srebro, K Sridharan, S Wang, B Woodworth, ...
International Conference on Machine Learning, 1397-1405, 2019
Cited by 45 · 2019
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
Cited by 38 · 2020
Minibatch vs local SGD for heterogeneous distributed learning
B Woodworth, KK Patel, N Srebro
arXiv preprint arXiv:2006.04735, 2020
Cited by 20 · 2020
The complexity of making the gradient small in stochastic convex optimization
DJ Foster, A Sekhari, O Shamir, N Srebro, K Sridharan, B Woodworth
Conference on Learning Theory, 1319-1345, 2019
Cited by 19 · 2019
Lower bound for randomized first order convex optimization
B Woodworth, N Srebro
arXiv preprint arXiv:1709.03594, 2017
Cited by 17 · 2017
Implicit bias in deep linear classification: Initialization scale vs training accuracy
E Moroshko, S Gunasekar, B Woodworth, JD Lee, N Srebro, D Soudry
arXiv preprint arXiv:2007.06738, 2020
Cited by 12 · 2020
The gradient complexity of linear regression
M Braverman, E Hazan, M Simchowitz, B Woodworth
Conference on Learning Theory, 627-647, 2020
Cited by 5 · 2020
Mirrorless mirror descent: A more natural discretization of Riemannian gradient flow
S Gunasekar, B Woodworth, N Srebro
arXiv preprint arXiv:2004.01025, 2020
Cited by 5 · 2020
Guaranteed validity for empirical approaches to adaptive data analysis
R Rogers, A Roth, A Smith, N Srebro, O Thakkar, B Woodworth
International Conference on Artificial Intelligence and Statistics, 2830-2840, 2020
Cited by 3 · 2020
The everlasting database: Statistical validity at a fair price
B Woodworth, V Feldman, S Rosset, N Srebro
arXiv preprint arXiv:1803.04307, 2018
Cited by 2 · 2018
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
B Woodworth, B Bullins, O Shamir, N Srebro
arXiv preprint arXiv:2102.01583, 2021
Cited by 1 · 2021
Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory
B Woodworth, N Srebro
Conference on Learning Theory, 3202-3210, 2019
Cited by 1 · 2019
Training Fairness-Constrained Classifiers to Generalize
A Cotter, M Gupta, H Jiang, N Srebro, K Sridharan, S Wang, B Woodworth, ...
FATML, 2018
Cited by 1 · 2018
Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent
S Gunasekar, B Woodworth, N Srebro
International Conference on Artificial Intelligence and Statistics, 2305-2313, 2021
2021