Kaifeng Lyu
Unknown affiliation
No verified email
Title · Cited by · Year
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
K Lyu, J Li
ICLR 2020, 2020
Cited by 59 · 2020
Theoretical analysis of auto rate-tuning by batch normalization
S Arora, Z Li, K Lyu
ICLR 2019, 2019
Cited by 48 · 2019
Learning gradient descent: Better generalization and longer horizons
K Lv, S Jiang, J Li
Proceedings of the 34th International Conference on Machine Learning 70 …, 2017
Cited by 41 · 2017
Fine-grained complexity meets IP= PSPACE
L Chen, S Goldwasser, K Lyu, GN Rothblum, A Rubinstein
Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete …, 2019
Cited by 18 · 2019
Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate
Z Li, K Lyu, S Arora
34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Cited by 3 · 2020
Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs
R Duan, K Lyu, H Wu, Y Xie
45th International Colloquium on Automata, Languages, and Programming …, 2018
Cited by 3 · 2018
Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
Z Li, Y Luo, K Lyu
ICLR 2021, 2021
Cited by 2 · 2021
Articles 1–7