Simplified neural networks algorithm for function approximation on discrete input spaces in high dimension-limited sample applications

Syed Shabbir Haider, Xiao Jun Zeng

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Unlike the conventional fully connected feedforward multilayer neural networks used for approximating functions on continuous input spaces, this paper investigates simplified neural networks (which use a common linear function in the hidden layer) for approximating functions on discrete input spaces. By developing the corresponding learning algorithms and testing them on different data sets, it is shown that, compared with conventional multilayer neural networks for approximating functions on discrete input spaces, the proposed simplified neural network architecture and algorithms can achieve similar or better approximation accuracy, especially in high-dimensional, low-sample cases, while using a much simpler architecture and fewer parameters. © 2008 Elsevier B.V. All rights reserved.
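
    The abstract does not spell out the exact parameterisation, so the following is only a minimal sketch of one plausible reading of a "common linear function in the hidden layer": every hidden unit shares a single linear map w·x of the discrete input and differs only by its own scale, bias, and output weight. All names (simplified_nn_forward, a, b, c, c0) and the sigmoid activation are illustrative assumptions, not the paper's notation.

    import numpy as np

    rng = np.random.default_rng(0)

    def simplified_nn_forward(x, w, a, b, c, c0):
        """Sketch of a simplified forward pass: y = c0 + sum_j c_j * sigmoid(a_j * (w.x) + b_j)."""
        z = x @ w                                            # common linear function shared by all hidden units
        h = 1.0 / (1.0 + np.exp(-(a * z[:, None] + b)))      # per-unit scale a_j and bias b_j
        return c0 + h @ c                                    # linear output layer

    # Toy discrete input space: 10-dimensional binary vectors with few samples,
    # mimicking the high-dimension, limited-sample setting the paper targets.
    n_samples, n_inputs, n_hidden = 20, 10, 5
    X = rng.integers(0, 2, size=(n_samples, n_inputs)).astype(float)

    w = rng.normal(size=n_inputs)    # shared input weights (n_inputs parameters in total)
    a = rng.normal(size=n_hidden)    # per-unit scales
    b = rng.normal(size=n_hidden)    # per-unit biases
    c = rng.normal(size=n_hidden)    # output weights
    c0 = 0.0                         # output bias

    y = simplified_nn_forward(X, w, a, b, c, c0)
    print(y.shape)  # (20,)

    Under this assumed form the hidden layer needs n_inputs + 3*n_hidden + 1 parameters, versus roughly n_inputs*n_hidden + 2*n_hidden + 1 for a conventional single-hidden-layer MLP, which illustrates where the claimed parameter saving would come from as the input dimension grows.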
    Original language: English
    Pages (from-to): 1078-1083
    Number of pages: 5
    Journal: Neurocomputing
    Volume: 72
    Issue number: 4-6
    DOIs
    Publication status: Published - Jan 2009

    Keywords

    • Discrete input spaces
    • Function approximation
    • Neural networks
    • Simplified NN architecture
    • Universal approximation
