Interface of PDEs and Machine Learning
- Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients. [pdf]
  Jing An, Lexing Ying, Yuhua Zhu*.
  International Conference on Learning Representations (ICLR), 2021.
- A Sharp Convergence Rate for a Model Equation of the Asynchronous Stochastic Gradient Descent. [pdf]
  Yuhua Zhu, Lexing Ying.
  Communications in Mathematical Sciences, 19(3), 851-863, 2020.
- On large batch training and sharp minima: A Fokker-Planck perspective. [pdf]
  Xiaowu Dai and Yuhua Zhu*.
  Journal of Statistical Theory and Practice (JSTP), special issue on "Advances in Deep Learning", 2020.
- A consensus-based global optimization method for high dimensional machine learning problems. [pdf]
  Jose Carrillo, Shi Jin, Lei Li, and Yuhua Zhu*.
  ESAIM: Control, Optimisation and Calculus of Variations, 27, S5, 2020.
- Borrowing From the Future: An Attempt to Address Double Sampling. [pdf]
  Yuhua Zhu and Lexing Ying.
  Mathematical and Scientific Machine Learning, PMLR 107:246-268, 2020.