A secant-based Nesterov method for convex functions

Razak Alli-Oke, William Heath

    Research output: Contribution to journal › Article › peer-review



    A simple secant-based fast gradient method is developed for problems whose objective function is convex and well-defined. The proposed algorithm extends the classical Nesterov gradient method by updating the estimate-sequence parameter with secant information whenever possible. This is achieved by imposing a secant condition on the choice of search point. Furthermore, the proposed algorithm embodies an "update rule with reset" that parallels the restart rule recently suggested in O'Donoghue and Candes (2013). The proposed algorithm applies to a large class of problems, including the logistic and least-squares losses commonly found in the machine learning literature. Numerical results demonstrating the efficiency of the proposed algorithm are analyzed with the aid of performance profiles.
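    The restart idea the abstract refers to can be illustrated with a short NumPy sketch. Note this shows the classical Nesterov accelerated gradient method with a function-value restart in the spirit of O'Donoghue and Candes (2013), not the paper's secant-based estimate-sequence update; the function names, step rule, and test problem below are illustrative assumptions.

    ```python
    import numpy as np

    def nesterov_with_restart(grad, f, x0, L, iters=500):
        """Nesterov's accelerated gradient with a function-value restart.
        Illustrative sketch only; the paper's secant-based update of the
        estimate-sequence parameter is not reproduced here."""
        x = y = np.asarray(x0, dtype=float)
        t = 1.0
        f_prev = f(x)
        for _ in range(iters):
            x_new = y - grad(y) / L                    # gradient step from the search point
            t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2    # momentum parameter update
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            f_new = f(x_new)
            if f_new > f_prev:                         # objective increased: reset momentum
                t_new = 1.0
                y = x_new
            x, t, f_prev = x_new, t_new, f_new
        return x

    # Example: a least-squares loss f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    x_star = nesterov_with_restart(grad, f, np.zeros(5), L)
    ```

    On this small least-squares instance the iterate converges to the ordinary least-squares solution; the reset keeps the momentum term from overshooting once the objective stops decreasing.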
    Original language: English
    Journal: Optimization Letters
    Early online date: 7 Mar 2016
    Publication status: Published - 2016


