Ontological representation of vision-based 3D spatio-temporal context for mobile robot applications

G.H. Lim, J. Chung, G.G. Ryu, J.B. Kim, S.H. Lee, S. Lee, I.H. Suh, J.H. Choi, Y.T. Park

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, we propose an ontology-based context model that consists of high-level context as well as primitive spatial and temporal context. Moreover, reasoning tools are used to infer not only simple contextual information, such as object location, movement, and distance, but also hidden contextual information, such as objects that have disappeared by moving behind larger objects. We also use axiomatic rules to resolve uncertainties that might be caused by mismatches of 3D SIFT key points. Practical examples are provided to show the validity of our proposed ontology-based context model.
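The occlusion-reasoning idea in the abstract, inferring that an object "disappeared" because it moved behind a larger object, can be sketched as a single rule over tracked objects. Everything below (the class, the fields, the line-of-sight test) is a hypothetical illustration of that kind of rule, not the authors' actual ontology or reasoning toolchain:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    x: float       # horizontal offset in the camera frame, metres (assumed representation)
    depth: float   # distance from the camera, metres
    size: float    # approximate bounding radius, metres
    visible: bool  # whether the object was detected in the current frame

def infer_occluder(lost, visibles):
    """Return the name of a plausible occluder for a vanished object:
    a visible, larger object that sits closer to the camera on roughly
    the same line of sight. A crude stand-in for a 'hidden-behind' rule."""
    for other in visibles:
        if (other.visible
                and other.size > lost.size
                and other.depth < lost.depth
                and abs(other.x - lost.x) < other.size):
            return other.name
    return None  # no occluder found; the object may truly be gone

# A cup vanishes; a larger, nearer box on the same bearing explains it.
cup = TrackedObject("cup", x=0.10, depth=2.0, size=0.05, visible=False)
box = TrackedObject("box", x=0.12, depth=1.5, size=0.30, visible=True)
print(infer_occluder(cup, [box]))  # -> box
```

In the paper this kind of inference is expressed as axiomatic rules over an ontology rather than plain code; the sketch only shows the shape of the spatial test involved.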
Original language: English
Title of host publication: Proceedings of the 12th International Symposium on Artificial Life and Robotics, AROB 12th '07
Pages: 478-483
Number of pages: 6
Publication status: Published - 2007
Event: 12th International Symposium on Artificial Life and Robotics - Oita, Japan
Duration: 25 Jan 2007 - 27 Jan 2007
Conference number: 82299

Conference

Conference: 12th International Symposium on Artificial Life and Robotics
Country/Territory: Japan
City: Oita
Period: 25/01/07 - 27/01/07

