Abstract
In the traditional approach to learning from examples, classifiers are built in a feature space. Alternatively, decision rules can be constructed on dissimilarity (distance) representations. In such a recognition process, a new object is described by its distances to (a subset of) the training samples. This paper investigates a number of methods for this type of classification problem: feature-based decision rules (which interpret the distance representation as a feature space) and rank-based ones (which use only the given distance relations). The experiments demonstrate that the feature-based (especially normal-based) classifiers often outperform the rank-based ones. This is to be expected, since summation-based distances are, under general conditions, approximately normally distributed. In addition, the support vector classifier also achieves high accuracy. © 2000 IEEE.
Original language | English |
---|---|
Title of host publication | Proceedings - International Conference on Pattern Recognition |
Pages | 12-16 |
Number of pages | 4 |
Volume | 15 |
Publication status | Published - 2000 |
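The feature-based approach described in the abstract can be sketched in a few lines: represent each object by its vector of distances to the training samples, then apply an ordinary classifier in that dissimilarity space. The sketch below uses a simple nearest-mean rule on Euclidean distances; the toy data, prototype choice, and classifier are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

# Toy data (illustrative): two well-separated Gaussian classes in 2-D.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)

def dissim(X, prototypes):
    """Dissimilarity representation: each row describes an object by its
    Euclidean distances to the prototype set."""
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

# Here every training sample serves as a prototype, so D_train is 40 x 40.
D_train = dissim(X_train, X_train)

# Feature-based decision rule: treat the distance vectors as features and
# classify by the nearest class mean in the dissimilarity space.
means = np.stack([D_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    D = dissim(X, X_train)
    return np.argmin(np.linalg.norm(D[:, None, :] - means[None, :, :], axis=2),
                     axis=1)

X_test = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(3, 1, (10, 2))])
y_test = np.array([0] * 10 + [1] * 10)
accuracy = (predict(X_test) == y_test).mean()
```

Rank-based rules, by contrast, would use only the ordering of the distances (e.g. a nearest-neighbor vote), discarding the metric information that normal-based classifiers exploit.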