Abstract
A new algorithm is described for estimating the change in orientation and position of an object between two sets of images. The images within each set are calibrated, but the exact geometrical relationship between the two sets of views is unknown. Variations in the two-dimensional silhouette of a fixed, rigid three-dimensional object as the viewpoint changes are analysed to estimate the relative position and orientation of the object in the two image sets. The main advantage of this method is that no explicit point or line correspondences need be identified; the only requirement is reliable segmentation of the object from the background. It is shown that an incorrect estimate of the relative object pose gives rise to silhouettes that are inconsistent, in that they violate a geometrical constraint. The extent to which the images are consistent is quantified using a consistency metric, and standard minimisation techniques are then used to obtain accurate estimates of both the rotational and translational parameters. Results are presented for the registration of synthetic images with added noise and for the registration of real image data. For small test objects, the relative orientation estimates are consistent to within ±6 degrees and the relative translation estimates to within ±1.8 mm.
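The abstract describes an optimisation over the six relative-pose parameters (three rotation angles and three translation components) driven by a silhouette-consistency measure. The sketch below illustrates that loop under stated assumptions: `silhouette_inconsistency` is a hypothetical placeholder for the paper's (unspecified) geometric consistency metric, and a generic derivative-free SciPy optimiser stands in for the "standard minimisation techniques" mentioned; none of these names or signatures come from the paper itself.

```python
# Minimal sketch of the pose-refinement loop implied by the abstract.
# silhouette_inconsistency() is a hypothetical placeholder cost, not the
# paper's actual metric; the optimiser is a generic derivative-free routine.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def silhouette_inconsistency(pose_params, silhouettes_a, silhouettes_b,
                             calib_a, calib_b):
    """Placeholder consistency metric: should return a scalar that grows as
    the hypothesised relative pose makes the two silhouette sets violate the
    geometric constraint (zero for a perfectly consistent pose)."""
    rx, ry, rz, tx, ty, tz = pose_params
    R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
    t = np.array([tx, ty, tz])
    # ... evaluate the silhouette constraint under (R, t); the details depend
    # on the paper's metric and are not reproduced here.
    return 0.0  # placeholder value


def estimate_relative_pose(silhouettes_a, silhouettes_b, calib_a, calib_b,
                           initial_guess=np.zeros(6)):
    """Refine the 6-DOF relative pose (three rotation angles in degrees,
    three translation components) by minimising the inconsistency measure."""
    result = minimize(
        silhouette_inconsistency,
        initial_guess,
        args=(silhouettes_a, silhouettes_b, calib_a, calib_b),
        method="Nelder-Mead",  # derivative-free; no analytic gradient needed
    )
    return result.x  # estimated [rx, ry, rz, tx, ty, tz]
```

A derivative-free method is used here only because the abstract does not state that the consistency metric is differentiable; any standard minimiser could be substituted if gradients were available.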
| Original language | English |
|---|---|
| Pages (from-to) | 1-8 |
| Number of pages | 7 |
| Journal | IEE Proceedings: Vision, Image and Signal Processing |
| Volume | 147 |
| Issue number | 1 |
| Publication status | Published - 2000 |