
DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method

Posted: 2023-02-28

Time: 2023-02-28, 15:00–16:00

Venue: Tencent Meeting (online)
Speaker: Zhiwen Zhang
Affiliation: The University of Hong Kong
Host: Min Tang
Notes: Tencent Meeting ID: 221 852 676; password: 230228
Abstract:


High-dimensional partial differential equations (PDEs) are challenging to compute by traditional mesh-based methods, especially when their solutions have large gradients or concentrations at unknown locations. Mesh-free methods are more appealing; however, they remain slow and expensive when long-time, well-resolved computations are necessary. In this talk, we present DeepParticle, an approach integrating deep learning (DL), optimal transport (OT), and interacting particle (IP) methods, through a case study of Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) front speeds in incompressible flows. PDE analysis reduces the problem to computing the principal eigenvalue of an advection-diffusion operator. A stochastic representation via the Feynman-Kac formula makes possible a genetic interacting particle algorithm that evolves the particle distribution to a large-time invariant measure, from which the front speed is extracted. The invariant measure is parameterized by a physical parameter (the Péclet number). We learn this family of invariant measures by training a physically parameterized deep neural network on affordable data from IP computations at moderate Péclet numbers, and then predict at a larger Péclet number, where IP computation is expensive. Our methodology extends to the more general context of learning stochastic particle dynamics with deep networks; for instance, we can learn and generate aggregation patterns in Keller-Segel chemotaxis systems.
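The methodology described above amounts to training a generator network, conditioned on the physical parameter, to push a reference distribution onto samples of the target invariant measure by minimizing a Wasserstein-type distance. Below is a minimal PyTorch sketch of such a training loop, assuming an entropic (Sinkhorn) approximation of the transport cost and a synthetic ring-shaped target in place of interacting-particle data; the architecture, the `synthetic_target` helper, and all hyperparameters are illustrative assumptions, not the speaker's implementation.

```python
import math
import torch

def sinkhorn_cost(x, y, eps=0.05, n_iter=100):
    """Entropic-regularized optimal transport cost between two empirical
    point clouds with uniform weights (log-domain Sinkhorn iterations)."""
    nx, ny = x.shape[0], y.shape[0]
    C = torch.cdist(x, y) ** 2                      # pairwise squared Euclidean costs
    log_a = torch.full((nx,), -math.log(nx))        # uniform source weights (log scale)
    log_b = torch.full((ny,), -math.log(ny))        # uniform target weights (log scale)
    f = torch.zeros(nx)
    g = torch.zeros(ny)
    for _ in range(n_iter):
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_a[:, None], dim=0)
    # transport plan induced by the dual potentials f, g
    P = torch.exp((f[:, None] + g[None, :] - C) / eps + log_a[:, None] + log_b[None, :])
    return (P * C).sum()

class Generator(torch.nn.Module):
    """Maps latent noise plus a physical parameter (e.g. a Peclet number)
    to particle positions approximating the invariant measure."""
    def __init__(self, latent_dim=2, out_dim=2, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim + 1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, out_dim),
        )

    def forward(self, z, peclet):
        pe = torch.full((z.shape[0], 1), float(peclet))
        return self.net(torch.cat([z, pe], dim=1))

def synthetic_target(n, peclet):
    """Stand-in for interacting-particle output: a noisy ring whose radius
    grows with the parameter (purely illustrative, not the FKPP measure)."""
    theta = 2.0 * math.pi * torch.rand(n)
    r = 1.0 + 0.1 * peclet + 0.05 * torch.randn(n)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(2000):
    peclet = float(torch.randint(1, 5, (1,)))       # train only at moderate parameter values
    target = synthetic_target(256, peclet)          # here the IP-generated samples would be used
    z = torch.randn(256, 2)
    loss = sinkhorn_cost(gen(z, peclet), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# generate samples at a larger parameter value, where a long, resolved
# particle simulation would be expensive
with torch.no_grad():
    samples = gen(torch.randn(1000, 2), peclet=8.0)
```

Conditioning the generator on the physical parameter is what allows sampling at a value larger than any seen during training, mirroring the extrapolation to large Péclet numbers described in the abstract.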