Towards Size-Independent Generalization Bounds for Deep Operator Nets

Pulkit Gopalani, Sayar Karmakar, Dibyakanti Kumar, Anirbit Mukherjee*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent times, machine learning methods have made significant strides in becoming useful tools for analyzing physical systems. A particularly active area in this theme has been "physics-informed machine learning", which focuses on using neural nets to numerically solve differential equations. In this work, we aim to advance the theory of measuring out-of-sample error while training DeepONets, which are among the most versatile ways to solve PDE systems in one shot. Firstly, for a class of DeepONets, we prove a bound on their Rademacher complexity which does not explicitly scale with the width of the nets involved. Secondly, we use this to show how the Huber loss can be chosen so that, for these DeepONet classes, generalization error bounds can be obtained that have no explicit dependence on the size of the nets. The effective capacity measure for DeepONets that we thus derive is also shown to correlate with the behavior of generalization error in experiments.
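
As a concrete picture of the objects the abstract refers to, below is a minimal sketch (not the authors' code) of a DeepONet in the standard branch-net/trunk-net form, trained against a Huber loss. All names, layer sizes, and the threshold delta = 1.0 are illustrative assumptions, not the paper's settings; the Huber loss is written out explicitly because, unlike the squared loss, it is Lipschitz, which is the kind of property such generalization arguments typically need.

```python
# Minimal DeepONet + Huber loss sketch; shapes and widths are illustrative.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors: int, p: int = 32, width: int = 64):
        super().__init__()
        # Branch net: encodes the input function sampled at n_sensors points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.ReLU(),
            nn.Linear(width, p),
        )
        # Trunk net: encodes the query location y in the output domain.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, p),
        )

    def forward(self, u: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # DeepONet output: inner product of branch and trunk features.
        return (self.branch(u) * self.trunk(y)).sum(dim=-1)

def huber(pred: torch.Tensor, target: torch.Tensor, delta: float) -> torch.Tensor:
    # Huber loss: quadratic for small residuals, linear beyond `delta`,
    # hence Lipschitz in the prediction with constant `delta`.
    r = (pred - target).abs()
    return torch.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)).mean()

# Usage on random tensors (shapes only; no real PDE data here):
model = DeepONet(n_sensors=100)
u = torch.randn(8, 100)   # 8 input functions sampled at 100 sensor locations
y = torch.randn(8, 1)     # one query point per function
loss = huber(model(u, y), torch.randn(8), delta=1.0)
loss.backward()
```
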
Original language: English
Number of pages: 33
Journal: Transactions on Machine Learning Research
Publication status: Published - 2 Dec 2024

Research Beacons, Institutes and Platforms

  • Christabel Pankhurst Institute
  • Institute for Data Science and AI
