Large Data and Zero Noise Limits of Graph-Based Semi-Supervised Learning Algorithms

Matthew Dunlop, Dejan Slepcev, Andrew M. Stuart, Matthew Thorpe

Research output: Contribution to journal › Article (peer-reviewed)



Scalings in which the graph Laplacian approaches a differential operator in the large graph limit are used to develop understanding of a number of algorithms for semi-supervised learning; in particular, the probit algorithm, level set and kriging methods. Both optimization and Bayesian approaches are considered, based around a regularizing quadratic form found from an affine transformation of the Laplacian, raised to a possibly fractional exponent. Conditions on the parameters defining this quadratic form are identified under which well-defined limiting continuum analogues of the optimization and Bayesian semi-supervised learning problems may be found, thereby shedding light on the design of algorithms in the large graph setting. The large graph limits of the optimization formulations are tackled through Γ-convergence, using the recently introduced TLp metric. The small labeling noise limits of the Bayesian formulations are also identified, and contrasted with pre-existing harmonic function approaches to the problem.
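The regularizing quadratic form described in the abstract can be illustrated concretely. The sketch below (not the paper's exact construction; the bandwidth, shift τ, and exponent α are illustrative choices) builds an unnormalized graph Laplacian on random sample points and evaluates the quadratic form ⟨u, (L + τ²I)^α u⟩, with the fractional power computed via the eigendecomposition of the symmetric matrix L.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(size=(n, 2))               # sample points in [0, 1]^2

# Gaussian edge weights with an illustrative bandwidth eps; no self-loops
eps = 0.3
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / eps**2)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W (symmetric, positive semi-definite)
L = np.diag(W.sum(axis=1)) - W

# Affine shift and (possibly fractional) exponent -- illustrative values
tau, alpha = 1.0, 1.5
lam, V = np.linalg.eigh(L)
P = V @ np.diag((lam + tau**2) ** alpha) @ V.T   # (L + tau^2 I)^alpha

# The quadratic form J(u) = <u, P u> penalizes roughness of u on the graph
u = rng.standard_normal(n)
J = u @ P @ u
print(J > 0)
```

Because the shift τ² makes every eigenvalue of L + τ²I strictly positive, the fractional power is well defined for any real α and the resulting form is positive definite; this is the kind of parameter condition the paper analyzes in the large graph limit.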
Original language: English
Pages (from-to): 655-697
Number of pages: 43
Journal: Applied and Computational Harmonic Analysis
Issue number: 2
Early online date: 4 Apr 2019
Publication status: Published - Sept 2020


Keywords:
  • Asymptotic consistency
  • Bayesian inference
  • Higher-order fractional Laplacian
  • Kriging
  • Semi-supervised learning


