Xun Qian
Researcher, Shanghai AI Lab
Verified email at pjlab.org.cn
Title · Cited by · Year
SGD: General analysis and improved rates
RM Gower, N Loizou, X Qian, A Sailanbayev, E Shulgin, P Richtárik
International Conference on Machine Learning (ICML 2019), 5200-5209, 2019
393 · 2019
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Z Li, D Kovalev, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2020), 2020
143 · 2020
FedNL: Making Newton-type methods applicable to federated learning
M Safaryan, R Islamov, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2022), 2021
64 · 2021
Distributed second order methods with fast rates and compressed communication
R Islamov, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2021), 4617-4628, 2021
48 · 2021
Error compensated distributed SGD can be accelerated
X Qian, P Richtárik, T Zhang
Advances in Neural Information Processing Systems (NeurIPS 2021) 34, 2021
38 · 2021
L-SVRG and L-Katyusha with arbitrary sampling
X Qian, Z Qu, P Richtárik
Journal of Machine Learning Research 22, 1-49, 2021
35 · 2021
A model of distributionally robust two-stage stochastic convex programming with linear recourse
B Li, X Qian, J Sun, KL Teo, C Yu
Applied Mathematical Modelling 58, 86-97, 2018
34 · 2018
SAGA with arbitrary sampling
X Qian, Z Qu, P Richtárik
International Conference on Machine Learning (ICML 2019), 5190-5199, 2019
27 · 2019
MISO is making a comeback with better proofs and rates
X Qian, A Sailanbayev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1906.01474, 2019
18 · 2019
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
X Qian, R Islamov, M Safaryan, P Richtárik
International Conference on Artificial Intelligence and Statistics (AISTATS 2022), 2022
17 · 2022
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik
arXiv preprint arXiv:2206.03588, 2022
7 · 2022
Error compensated loopless SVRG, Quartz, and SDCA for distributed optimization
X Qian, H Dong, P Richtárik, T Zhang
arXiv preprint arXiv:2109.10049, 2021
5 · 2021
Error compensated loopless SVRG for distributed optimization
X Qian, H Dong, P Richtárik, T Zhang
OPT2020: 12th Annual Workshop on Optimization for Machine Learning (NeurIPS …, 2020
3 · 2020
The convergent generalized central paths for linearly constrained convex programming
X Qian, LZ Liao, J Sun, H Zhu
SIAM Journal on Optimization 28 (2), 1183-1204, 2018
3 · 2018
Analysis of some interior point continuous trajectories for convex programming
X Qian, LZ Liao, J Sun
Optimization 66 (4), 589-608, 2017
3 · 2017
Analysis of the primal affine scaling continuous trajectory for convex programming
X Qian, LZ Liao
Pacific Journal of Optimization 14 (2), 261-272, 2018
2 · 2018
A strategy of global convergence for the affine scaling algorithm for convex semidefinite programming
X Qian, LZ Liao, J Sun
Mathematical Programming 179 (1), 1-19, 2020
1 · 2020
Error compensated proximal SGD and RDA
X Qian, H Dong, P Richtárik, T Zhang
12th Annual Workshop on Optimization for Machine Learning, 2020
1 · 2020
Generalized Affine Scaling Trajectory Analysis for Linearly Constrained Convex Programming
X Qian, LZ Liao
International Symposium on Neural Networks, 139-147, 2018
1 · 2018
An Interior Point Parameterized Central Path Following Algorithm for Linearly Constrained Convex Programming
L Hou, X Qian, LZ Liao, J Sun
Journal of Scientific Computing, 2022
· 2022