Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions

Ke Chen, Shihai Wang

    Research output: Contribution to journal › Article › peer-review


    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results on benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work. © 2006 IEEE.
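    The abstract describes a cost functional that combines a margin cost on labeled data with a regularization penalty on unlabeled data, minimized by a greedy stagewise procedure. As a rough illustration of this general idea (not the paper's actual functional — the loss, penalty, and `lam` weight below are assumptions for the sketch), a toy version might look like:

```python
import numpy as np

def semi_supervised_cost(F, y_labeled, idx_labeled, W, lam=0.1):
    """Toy cost: exponential margin loss on labeled points plus a
    graph-smoothness penalty that encourages similar predictions on
    points connected in the similarity graph W (illustrative only)."""
    margin_cost = np.sum(np.exp(-y_labeled * F[idx_labeled]))
    diffs = F[:, None] - F[None, :]
    smoothness = 0.5 * np.sum(W * diffs ** 2)
    return margin_cost + lam * smoothness

def stagewise_step(F, h, alphas, cost_fn):
    """One greedy stage: given a weak learner's outputs h on all points,
    pick the step size alpha that most reduces the cost (coarse search)."""
    best_alpha, best_cost = 0.0, cost_fn(F)
    for a in alphas:
        c = cost_fn(F + a * h)
        if c < best_cost:
            best_alpha, best_cost = a, c
    return F + best_alpha * h, best_cost

# Toy data: 4 points, the first two labeled, a small similarity graph.
F = np.zeros(4)
y = np.array([1.0, -1.0])
idx = np.array([0, 1])
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
cost_fn = lambda G: semi_supervised_cost(G, y, idx, W)
h = np.array([1.0, -1.0, 0.0, 0.0])   # a hypothetical weak learner's output
F_new, new_cost = stagewise_step(F, h, [0.1 * i for i in range(1, 11)], cost_fn)
```

Each stage adds one weighted weak learner to the ensemble `F`, so the cost is non-increasing over stages; the paper's actual framework derives the penalty from all three semi-supervised assumptions rather than the single smoothness term used here.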
    Original language: English
    Article number: 5444873
    Pages (from-to): 129-143
    Number of pages: 14
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Issue number: 1
    Publication status: Published - 2011


    • boosting framework
    • cluster assumption
    • manifold assumption
    • regularization
    • semi-supervised learning
    • smoothness assumption


