Visual Interaction for Real-Time Navigation of Autonomous Mobile Robots

Marco Luigi Della Vedova, Tullio Facchinetti, Antonella Ferrara, Alessandro Martinelli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Visual feedback is one of the most widely adopted solutions for driving the navigation of autonomous robots in unknown environments. This paper presents the structure of a visual interaction system suitable for real-time robotics applications. Thanks to a dedicated modeling approach, the visual system allows a team of mobile robots to perform the relevant visual tasks in a timely fashion. Guaranteeing real-time constraints on the processing tasks related to the visual feedback is, in fact, crucial for accurate and robust control of mobile robots. The proposed visual infrastructure is based on a single camera that provides a global view of the robots' workspace. A degenerated camera model is developed to handle planar motion in R^3. The model simplifies the calibration of the visual system while reducing the cost of coordinate transforms between the real world and the image space during operation. To illustrate the behaviour and assess the performance of the visual interaction system, experimental results on the real-time navigation of autonomous mobile robots are reported.
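As an illustrative aside (not taken from the paper itself), the sketch below shows one common way a plane-restricted camera model of this kind can be realized in Python: since the robots move on the ground plane, the full projective camera reduces to a 3x3 homography between image pixels and workspace coordinates, so calibration only needs a few image/world point correspondences and each coordinate transform is a single matrix-vector product. All point values and function names here are hypothetical.

```python
# Minimal sketch of a plane-restricted ("degenerated") camera model:
# a 3x3 homography H maps image pixels to ground-plane coordinates.
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Estimate H such that world ~ H * image, via DLT (needs >= 4 correspondences)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    # The homography is the null-space vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, u, v):
    """Map an image pixel (u, v) to coordinates on the workspace plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical calibration: four image points paired with known floor positions (meters).
img_pts = [(10, 20), (620, 25), (615, 460), (15, 455)]
world_pts = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.0), (0.0, 2.0)]
H = estimate_homography(img_pts, world_pts)
print(image_to_world(H, 320, 240))  # detected robot pixel -> floor coordinates
```

Once H is estimated offline, the per-frame cost of mapping detected robot positions to the floor plane is a constant, small number of operations, which is consistent with the real-time requirement discussed in the abstract.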
Original language: English
Title of host publication: CyberWorlds, 2009. CW '09. International Conference on
Pages: 211-218
Number of pages: 8
DOIs
Publication status: Published - 2009
Event: CyberWorlds, 2009. CW '09. International Conference on - Bradford
Duration: 7 Sep 2009 - 11 Sep 2009

Conference

Conference: CyberWorlds, 2009. CW '09. International Conference on
City: Bradford
Period: 7/9/09 - 11/9/09

Keywords

  • cameras
  • coordinate transforms
  • degenerated camera model
  • mobile robots
