Abstract
Neural network models of categorical perception (compression of within-category similarity and dilation of between-category differences) are applied to the symbol-grounding problem (of how to connect symbols with meanings) by connecting analogue sensorimotor projections to arbitrary symbolic representations via learned category-invariance detectors in a hybrid symbolic/non-symbolic system. Our nets are trained to categorize and name 50 x 50 pixel images (e.g. circles, ellipses, squares and rectangles) projected on to the receptive field of a 7 x 7 retina. They first learn to do prototype matching and then entry-level naming for the four kinds of stimuli, grounding their names directly in the input patterns via hidden-unit representations ('sensorimotor toil'). We show that a higher-level categorization (e.g. 'symmetric' versus 'asymmetric') can be learned in two very different ways: either (1) directly from the input, just as with the entry-level categories (i.e. by toil); or (2) indirectly, from Boolean combinations of the grounded category names in the form of propositions describing the higher-order category ('symbolic theft'). We analyse the architectures and input conditions that allow grounding (in the form of compression/separation in internal similarity space) to be 'transferred' in this second way from directly grounded entry-level category names to higher-order category names. Such hybrid models have implications for the evolution and learning of language.
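The two learning routes contrasted in the abstract can be sketched in a few dozen lines of code. The following Python/NumPy sketch is an illustrative assumption, not the authors' simulation: the 7 x 7 prototype patterns, the network sizes, the training schedule and the particular Boolean rule defining the higher-order category are all hypothetical stand-ins. It first grounds the four entry-level names in toy retinal input by backpropagation ('sensorimotor toil'), then derives a higher-order category purely from a Boolean combination of the grounded names ('symbolic theft'), with no further sensory training.

```python
# Minimal sketch (assumption-laden, not the authors' implementation) of the two
# learning routes described in the abstract: grounding entry-level names in
# retinal input by backprop ("sensorimotor toil"), then obtaining a higher-order
# category from a Boolean combination of the grounded names ("symbolic theft").
# Pattern prototypes, net sizes and the Boolean rule below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
RETINA = 7 * 7                                   # 7 x 7 receptive field
NAMES = ["circle", "ellipse", "square", "rectangle"]

def make_pattern(cat):
    """Hypothetical 7x7 binary prototypes plus pixel noise (stand-ins for the
    projected 50x50 shapes used in the paper)."""
    proto = np.zeros((7, 7))
    if cat == 0:   proto[2:5, 2:5] = 1           # "circle"-like blob
    elif cat == 1: proto[3, 1:6] = 1             # "ellipse"-like bar
    elif cat == 2: proto[1:6, 1:6] = 1           # "square"-like block
    else:          proto[1:6, 2:5] = 1           # "rectangle"-like block
    return proto.flatten() + 0.1 * rng.standard_normal(RETINA)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# --- Route 1, "toil": retina -> hidden -> entry-level names, by backprop ----
W1 = 0.1 * rng.standard_normal((RETINA, 8))
W2 = 0.1 * rng.standard_normal((8, 4))
lr = 0.2
for step in range(4000):
    cat = rng.integers(4)
    x = make_pattern(cat)
    t = np.eye(4)[cat]
    h = sigmoid(x @ W1)
    y = softmax(h @ W2)
    dy = y - t                                   # softmax + cross-entropy grad
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, dy)
    W1 -= lr * np.outer(x, dh)

# --- Route 2, "theft": higher-order category from names alone ---------------
def higher_order(name_probs):
    """Hypothetical proposition over grounded names: member of the
    higher-order category iff named 'circle' OR 'square'."""
    return (name_probs[NAMES.index("circle")]
            + name_probs[NAMES.index("square")]) > 0.5

for cat in range(4):
    x = make_pattern(cat)
    names_out = softmax(sigmoid(x @ W1) @ W2)    # grounded name activations
    print(f"{NAMES[cat]:9s} -> higher-order category: {higher_order(names_out)}")
```

The point of the sketch is the asymmetry between the two routes: once the entry-level names are grounded in the retinal input, a higher-order category can be acquired from propositions over those names alone, without any return to sensorimotor training.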
| Original language | English |
| --- | --- |
| Pages | 143-162 |
| Number of pages | 20 |
| DOIs | |
| Publication status | Published - Jun 2000 |