Abstract
Socially aware navigation systems have evolved to adeptly avoid various obstacles while performing multiple tasks, such as point-to-point navigation, human-following, and human-guiding. However, a prominent gap persists: in Human-Robot Interaction (HRI), communicating commands to robots still demands intricate mathematical formulations. Furthermore, transitions between tasks lack the intuitive control and user-centric interactivity that users desire. In this work, we propose an LLM-driven interactive multimodal multitask robot navigation framework, termed LIM2N, to address this new challenge in the navigation field. We achieve this by first introducing a multimodal interaction framework in which language and hand-drawn inputs can serve as navigation constraints and control objectives. Next, a reinforcement learning agent is built to handle multiple tasks with the received information. Crucially, LIM2N enables smooth cooperation among multimodal input reasoning, multitask planning, and the adaptation and processing of the intelligent sensing modules in this complex system. Detailed experiments in both simulation and the real world demonstrate that LIM2N understands user needs robustly and delivers an enhanced interactive experience.
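To make the pipeline the abstract describes more concrete, below is a minimal, illustrative Python sketch; it is not the authors' implementation. All names (`parse_language`, `parse_sketch`, `dispatch`, `NavConstraints`) are hypothetical: the LLM reasoning module is stubbed with keyword matching, and the hand-drawn sketch input is modeled as a rasterized 0/1 grid of user-drawn restricted regions.

```python
# Illustrative sketch of the two-stage idea in the abstract: multimodal
# inputs (language + hand-drawn sketch) are fused into a task label and a
# set of navigation constraints for a downstream RL policy. Hypothetical
# names throughout; the LLM call is replaced by keyword matching.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class Task(Enum):
    POINT_TO_POINT = "point_to_point"
    HUMAN_FOLLOWING = "human_following"
    HUMAN_GUIDING = "human_guiding"

@dataclass
class NavConstraints:
    goal: Optional[Tuple[float, float]] = None  # target pose, if any
    restricted_cells: List[Tuple[int, int]] = field(default_factory=list)

def parse_language(command: str) -> Task:
    """Stand-in for the LLM reasoning module: map a free-form command
    onto one of the supported navigation tasks."""
    text = command.lower()
    if "follow" in text:
        return Task.HUMAN_FOLLOWING
    if "guide" in text or "lead" in text:
        return Task.HUMAN_GUIDING
    return Task.POINT_TO_POINT

def parse_sketch(mask: List[List[int]]) -> List[Tuple[int, int]]:
    """Convert a hand-drawn sketch, rasterized to a 0/1 grid, into the
    map cells the planner must treat as off-limits."""
    return [(r, c) for r, row in enumerate(mask)
            for c, v in enumerate(row) if v == 1]

def dispatch(command: str, sketch: List[List[int]],
             goal: Optional[Tuple[float, float]] = None
             ) -> Tuple[Task, NavConstraints]:
    """Fuse both modalities into the (task, constraints) pair a
    downstream RL navigation policy would consume."""
    task = parse_language(command)
    return task, NavConstraints(goal=goal,
                                restricted_cells=parse_sketch(sketch))

if __name__ == "__main__":
    sketch = [[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]]  # user shaded the top-right corner as off-limits
    task, cons = dispatch("please follow me, but avoid the marked area", sketch)
    print(task, cons.restricted_cells)
```

In the actual framework, the language parser would be an LLM call rather than keyword matching, and the RL agent would consume these constraints continuously while switching among tasks.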
Original language | English |
---|---|
Title of host publication | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 |
Publisher | IEEE |
Pages | 1019-1025 |
Number of pages | 7 |
ISBN (Electronic) | 9798350384574 |
DOIs | |
Publication status | Published - 8 Aug 2024 |
Event | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 - Yokohama, Japan, 13 May 2024 → 17 May 2024 |
Publication series
Name | Proceedings - IEEE International Conference on Robotics and Automation |
---|---|
ISSN (Print) | 1050-4729 |
Conference
Conference | 2024 IEEE International Conference on Robotics and Automation, ICRA 2024 |
---|---|
Country/Territory | Japan |
City | Yokohama |
Period | 13/05/24 → 17/05/24 |
Fingerprint

Dive into the research topics of 'Language and Sketching: An LLM-driven Interactive Multimodal Multitask Robot Navigation Framework'.

Projects
1 Active

- MCAIF: Centre for AI Fundamentals
Kaski, S. (PI), Alvarez, M. (Researcher), Pan, W. (Researcher), Mu, T. (Researcher), Rivasplata, O. (PI), Sun, M. (PI), Mukherjee, A. (PI), Caprio, M. (PI), Sonee, A. (Researcher), Leroy, A. (Researcher), Wang, J. (Researcher), Lee, J. (Researcher), Parakkal Unni, M. (Researcher), Sloman, S. (Researcher), Menary, S. (Researcher), Quilter, T. (Researcher), Hosseinzadeh, A. (PGR student), Mousa, A. (PGR student), Glover, E. (PGR student), Das, A. (PGR student), DURSUN, F. (PGR student), Zhu, H. (PGR student), Abdi, H. (PGR student), Dandago, K. (PGR student), Piriyajitakonkij, M. (PGR student), Rachman, R. (PGR student), Shi, X. (PGR student), Keany, T. (PGR student), Liu, X. (PGR student), Jiang, Y. (PGR student), Wan, Z. (PGR student), Harrison, M. (Support team), Machado, M. (Support team), Hartford, J. (PI), Kangin, D. (Researcher), Harikumar, H. (PI), Dubey, M. (PI), Parakkal Unni, M. (PI), Dash, S. P. (PGR student), Mi, X. (PGR student) & Barlas, Y. (PGR student)
1/10/21 → 30/09/26
Project: Research