Towards an architecture for cognitive vision using qualitative spatio-temporal representations and abduction

Anthony G. Cohn, Derek R. Magee, Aphrodite Galata, David C. Hogg, Shyamanta M. Hazarika

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


    In recent years there has been increasing interest in constructing cognitive vision systems capable of interpreting the high-level semantics of dynamic scenes. Purely quantitative approaches to constructing such systems have met with some success. However, qualitative analysis of dynamic scenes has the advantage of allowing easier generalisation over classes of different behaviours and of guarding against the propagation of errors caused by uncertainty and noise in the quantitative data. Our aim is to integrate quantitative and qualitative modes of representation and reasoning for the analysis of dynamic scenes. In particular, in this paper we outline an approach to constructing cognitive vision systems using qualitative spatio-temporal representations, including prototypical spatial relations and spatio-temporal event descriptors automatically inferred from input data. The overall architecture relies on abduction: the system searches for explanations, phrased in terms of the learned spatio-temporal event descriptors, to account for the video data.
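    The abductive step described in the abstract can be illustrated with a minimal sketch: hypothesise a smallest set of events whose predicted observations account for everything observed. The event names, observation labels, and the naive set-cover search below are illustrative assumptions, not the paper's actual machinery.

```python
from itertools import combinations

# Hypothetical learned event descriptors: each candidate event is mapped
# to the set of observations it would explain (labels are illustrative).
EVENT_DESCRIPTORS = {
    "pick_up":  {"hand_near_object", "object_moves"},
    "put_down": {"object_moves", "object_at_rest"},
    "approach": {"hand_near_object"},
}

def abduce(observations, descriptors):
    """Return a smallest set of events whose combined predicted
    observations cover all observed data (exhaustive search)."""
    events = list(descriptors)
    for k in range(1, len(events) + 1):
        for combo in combinations(events, k):
            covered = set().union(*(descriptors[e] for e in combo))
            if observations <= covered:  # every observation is explained
                return set(combo)
    return None  # no explanation exists under these descriptors

obs = {"hand_near_object", "object_moves", "object_at_rest"}
explanation = abduce(obs, EVENT_DESCRIPTORS)  # {"pick_up", "put_down"}
```

    A real system would of course rank competing explanations (e.g. by prior likelihood of events) rather than take the first minimal cover, but the sketch captures the inference direction: from video-derived observations back to the events that would produce them.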
    Original language: English
    Title of host publication: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
    Editors: C. Freksa, W. Brauer, C. Habel, K.F. Wender
    Publisher: Springer Nature
    Number of pages: 16
    Publication status: Published - 2003
    Event: Spatial Cognition 2002 - Tutzing
    Duration: 1 Jul 2003 → …

    Publication series

    Name: Lecture Notes in Computer Science


    Conference: Spatial Cognition 2002
    Period: 1/07/03 → …


