
Publications and Preprints


  • PhiBE: A PDE-based Bellman Equation for Continuous Time Policy Evaluation [pdf]

Yuhua Zhu.

Preprint, 2024.

  • An Interacting Particle Consensus Method for Constrained Global Optimization [pdf]

Jose Carrillo, Shi Jin, Haoyu Zhang, Yuhua Zhu*.

Preprint, 2024.

  • FedCBO: Reaching Group Consensus in Clustered Federated Learning through Consensus-based Optimization. [pdf]

Jose A. Carrillo, Nicolas Garcia Trillos, Sixu Li, Yuhua Zhu*.

Journal of Machine Learning Research (accepted with minor revisions), 2024.

  • Continuous-in-time Limit for Bayesian Bandits. [pdf]

Yuhua Zhu, Zachary Izzo and Lexing Ying.  

Journal of Machine Learning Research, 2023.

MATLAB code

  • Operator Augmentation for Model-based Policy Evaluation. [pdf]

Xun Tang, Lexing Ying and Yuhua Zhu*.

Communications in Mathematical Sciences, 2023.

  • Variational Actor-Critic Algorithms. [pdf]

Yuhua Zhu and Lexing Ying.

ESAIM: Control, Optimisation and Calculus of Variations, 2023.

  • A Note on Optimization Formulations of Markov Decision Processes. [pdf]

Lexing Ying and Yuhua Zhu.

Communications in Mathematical Sciences, 20(3):727–745, 2022.

  • The Vlasov-Fokker-Planck Equation with High Dimensional Parametric Forcing Term. [pdf]

Shi Jin, Yuhua Zhu*, and Enrique Zuazua. 

Numerische Mathematik, 150(2):479–519, 2022.

  • Borrowing From the Future: Addressing Double Sampling in Model-free Control. [pdf]

Yuhua Zhu, Zachary Izzo and Lexing Ying.

Mathematical and Scientific Machine Learning, pages 1099–1136, PMLR, 2022.

  • Why Resampling Outperforms Reweighting for Correcting Sampling Bias with Stochastic Gradients. [pdf]

Jing An, Lexing Ying, Yuhua Zhu*.

International Conference on Learning Representations (ICLR), 2021.

  • A Sharp Convergence Rate for a Model Equation of the Asynchronous Stochastic Gradient Descent. [pdf]

Yuhua Zhu and Lexing Ying.

Communications in Mathematical Sciences, 19(3), 851-863, 2020.

  • Borrowing From the Future: An Attempt to Address Double Sampling. [pdf]

Yuhua Zhu and Lexing Ying.

Mathematical and Scientific Machine Learning, PMLR 107:246-268, 2020. 

  • On Large Batch Training and Sharp Minima: A Fokker-Planck Perspective. [pdf]

Xiaowu Dai and Yuhua Zhu*.

Journal of Statistical Theory and Practice (JSTP), special issue on "Advances in Deep Learning", 2020. 

  • A Consensus-Based Global Optimization Method for High Dimensional Machine Learning Problems. [pdf]

Jose Carrillo, Shi Jin, Lei Li and Yuhua Zhu*.

ESAIM: Control, Optimisation and Calculus of Variations 27, S5, 2020.

  • A Local Sensitivity and Regularity Analysis for the Vlasov-Poisson-Fokker-Planck System with Multi-dimensional Uncertainty and the Spectral Convergence of the Stochastic Galerkin Method. [pdf]

Yuhua Zhu. 

Networks and Heterogeneous Media, 14(4), 677-707, 2019.

  • An Uncertainty Quantification Approach to the Study of Gene Expression Robustness. [pdf]

Pierre Degond, Shi Jin and Yuhua Zhu*.

Methods and Applications of Analysis (A special issue in honor of the 80th birthday of Prof. Ling Hsiao), 2019.

  • Hypocoercivity and Uniform Regularity for the Vlasov-Poisson-Fokker-Planck System with Uncertainty and Multiple Scales. [pdf]

Shi Jin and Yuhua Zhu*. 

SIAM Journal on Mathematical Analysis, 50, 1790-1816, 2018.

  • The Vlasov-Poisson-Fokker-Planck System with Uncertainty and a One-Dimensional Asymptotic-Preserving Method. [pdf]

Yuhua Zhu and Shi Jin.

Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal, 15, 1502-1529, 2018.

*: Alphabetical authorship.

Ph.D. Thesis

  • Uncertainty Quantification for Fokker Planck Type Equations and Related Problems in Machine Learning. [pdf]

Yuhua Zhu.

Ph.D. Thesis, Department of Mathematics, UW-Madison, 2019.
