Transformations of sigma-pi nets: Obtaining reflected functions by reflecting weight matrices

R. S. Neville, S. Eldridge

    Research output: Contribution to journal › Article › peer-review

    Abstract

    This paper presents a methodology that reflects the functions computed by an artificial neural network by reflecting its weight matrices. One of the major problems with the connectionist approach is that a trained neural network can only associate a fixed set of input-output mappings. We provide a methodology which allows a post-trained net to associate different input-output mappings. The different mappings are a reflection of the initial mapping in a horizontal axis, a reflection in a vertical axis, and a scaling of the initial mapping. The methodology does not train the net on the different mappings; instead, it transforms the weight matrix of the neural network. This paper describes a novel way of utilising sigma-pi neural networks. Our new methodology manipulates the sigma-pi units' weight matrices, which transforms the units' outputs. The weights are cast in a matrix formulation, and transformations can then be performed on the weight matrix of the sigma-pi net. To test the new methodology, the following three steps were carried out on a neural network: (1) the network was trained to perform a mapping function, f; (2) the weights of the network were transformed; and (3) the network was tested to evaluate whether it performs the reflection in the vertical axis, f_ref-vert(x) = a - f(x). This reflects the function in one dimension. A reflection transformation was used to manipulate the network's weight matrices to obtain a reflection in the vertical axis. Note that the network was not trained to perform the reflection in the vertical axis; transforming the weight matrix transforms the function the network's output performs. This article explains the theory which enables us to perform transformations of sigma-pi networks and obtain reflections of the output by reflecting the weight matrices. These transforms enable the network to perform related mapping tasks once one mapping task has been learnt. The article explains how each transformation is performed and considers whether a set of 'standard' transformations can indeed be derived. © 2002 Elsevier Science Ltd. All rights reserved.
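
    The vertical-axis reflection described above can be illustrated with a minimal sketch. The Python snippet below is not the paper's implementation; it assumes a toy sigma-pi/RAM-style unit whose output is a combination of stored site values weighted by address-selection terms that sum to one. Under that assumption, replacing every stored value w_mu with a - w_mu reflects the unit's output, giving a - f(x). All names here (sigma_pi_output, n_sites, the selection distribution p) are hypothetical illustration choices, not symbols from the paper.

        # Sketch: complementing a sigma-pi unit's stored site values
        # reflects its output in the vertical axis, assuming the
        # address-selection terms form a probability distribution.
        import numpy as np

        rng = np.random.default_rng(0)

        n_sites = 8                            # addressable sites of the toy unit
        w = rng.uniform(0.0, 1.0, n_sites)     # "trained" site values (hypothetical)
        a = 1.0                                # reflection constant in f_ref-vert(x) = a - f(x)

        def sigma_pi_output(weights, selection):
            # Output = sum_mu P_mu(x) * w_mu, with the P_mu summing to 1.
            return float(np.dot(selection, weights))

        # Random selection probabilities standing in for the product-of-inputs
        # terms of a sigma-pi unit (assumption: they sum to one).
        p = rng.uniform(size=n_sites)
        p /= p.sum()

        y = sigma_pi_output(w, p)                  # original mapping f(x)
        y_reflected = sigma_pi_output(a - w, p)    # output after transforming the weights

        print(y, y_reflected, a - y)
        assert np.isclose(y_reflected, a - y)      # reflected output equals a - f(x)

    Because the selection terms sum to one, np.dot(p, a - w) expands to a - np.dot(p, w), which is exactly the vertical-axis reflection; no retraining of the unit is involved.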
    Original language: English
    Pages (from-to): 375-393
    Number of pages: 18
    Journal: Neural Networks
    Volume: 15
    Issue number: 3
    DOIs
    Publication status: Published - 2002

    Keywords

    • Backpropagation
    • Neural networks
    • RAM nets
    • Reflection
    • Sigma-pi
    • Transformation
