Successive preconditioned conjugate gradient method for the minimization of quadratic and nonlinear functions

K. Davey, M.J. Ward

Research output: Contribution to journal › Article › peer-review


This paper is concerned with a conjugate gradient method that utilizes a successive set of preconditioner matrices. The method is developed for the solution of min{f(x): x ∈ Rⁿ}, where f is a nonlinear function that is sufficiently smooth to possess a continuous Hessian matrix. The theory underpinning the method is developed for quadratic polynomials of the form p(x) = ½(xᵀAx − 2xᵀb), where A is an arbitrary n × n matrix that is symmetric and positive definite. The minimization of p is achieved by the successive minimization of hᵢ(uᵢ) = (p ∘ gᵢ)(uᵢ), where gᵢ(uᵢ) = Kᵢuᵢ, i = 0, 1, ..., n − 1, and where the Kᵢ are invertible matrices designed to accelerate the convergence of the conjugate gradient method. It is shown that quadratic termination is achievable for an arbitrary set {Kᵢ: i = 0, 1, ..., n − 1} of nonsingular matrices. This remarkable property provides for a very general formulation, in which the preconditioned conjugate gradient method (PCGM) and the quasi-Newton method are just special cases. The method is extended to nonlinear functions by means of an inexact Newton method. The method is tested on some simple functions to demonstrate the flexibility and computational effectiveness of the new approach. Systems of nonlinear equations generated from a finite element flow formulation are tested to illustrate the practical usefulness of the method.
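The paper's general successive-preconditioner formulation is not reproduced here, but the abstract notes that the standard preconditioned conjugate gradient method (PCGM) is one of its special cases. The following is a minimal NumPy sketch of that special case, applied to the quadratic p(x) = ½(xᵀAx − 2xᵀb) (equivalently, solving Ax = b for symmetric positive definite A), assuming a fixed Jacobi preconditioner; the function name `pcg` and its parameters are illustrative and not taken from the paper:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-12, max_iter=None):
    """Standard PCGM for min p(x) = 0.5*(x^T A x - 2 x^T b), i.e. A x = b,
    with A symmetric positive definite and preconditioner inverse M_inv.
    In exact arithmetic CG terminates in at most n iterations."""
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b - A @ x              # residual (negative gradient of p)
    z = M_inv @ r              # preconditioned residual
    d = z.copy()               # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ z) / (d @ Ad)      # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)  # conjugacy correction
        d = z_new + beta * d
        r, z = r_new, z_new
    return x

# Demo: small SPD system with a Jacobi preconditioner.
rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
M_inv = np.diag(1.0 / np.diag(A))    # Jacobi preconditioner
x = pcg(A, b, M_inv)
```

In this sketch the same preconditioner is used at every iteration; the paper's contribution, per the abstract, is that quadratic termination survives when a different nonsingular Kᵢ is used at each step.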
Original language: English
Pages (from-to): 129-156
Number of pages: 28
Journal: Applied Numerical Mathematics
Issue number: 2
Publication status: Published - Oct 2000


  • Preconditioners
  • Conjugate gradients
  • Optimization
  • Quasi-Newton

