Bigger PINNs Are Needed for Noisy PDE Training

Activity: Talk or presentation › Oral presentation › Research

Description

Physics-Informed Neural Networks (PINNs) are increasingly used to solve various
partial differential equations (PDEs), especially in high dimensions. In
real-world applications, data samples are noisy, making it essential to
understand the conditions under which a predictor can achieve a small empirical
risk. In this work, we present a first-of-its-kind lower bound on the size of
neural networks required for the supervised PINN empirical risk to fall below
the variance of noisy supervision labels. Specifically, we show that to achieve
low training error, the number of parameters must be bounded below by slightly
less than one trainable parameter per training sample. Consequently, using more
noisy training data alone does not provide a “free lunch” in reducing empirical
risk. We investigate PINNs applied to the Hamilton–Jacobi–Bellman (HJB) PDE as a
case study. Our findings lay the groundwork for a program on rigorously
quantifying parameter requirements for effective PINN training under noisy
conditions.
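
Schematically (with notation assumed here for illustration, not taken verbatim from the paper), if a PINN $f_\theta$ with $p$ trainable parameters is fit to $n$ noisy labels $y_i = u(x_i) + \varepsilon_i$, where the noise $\varepsilon_i$ has variance $\sigma^2$, the result reads

\[
\frac{1}{n}\sum_{i=1}^{n}\bigl(f_\theta(x_i) - y_i\bigr)^2 < \sigma^2
\quad\Longrightarrow\quad
p \gtrsim n \quad \text{(up to lower-order terms)},
\]

that is, driving the supervised empirical risk below the noise floor requires nearly one parameter per training sample.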
Period: Mar 2025
Event title: 18th INFORMS Computing Society Conference
Event type: Conference
Location: Toronto, Ontario, Canada

Keywords

  • PDE
  • lower bounds
  • deep-learning theory