Visually guided cooperative robot actions based on information quality

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

In field environments it is not usually possible to provide robots in advance with valid geometric models of their environments and task-element locations. The robot or robot team must create and use these models to locate critical task elements by performing appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm plans the robot's visual exploration strategy, enabling it to efficiently build 3D models of its environment and task elements. The algorithm uses the measured scene information to choose the next camera position based on the expected new information content of that pose. This is achieved by utilizing a metric derived from Shannon's information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, after an appropriate environment model has been built, the quality of the information content in the model is used to determine the constraint-based optimum view for task execution. The algorithm is applicable both to an individual agent and to multiple cooperating agents. Simulation and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of this algorithm.
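The next-best-view idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the occupancy-grid representation, the `visible` sensing model, and all function names here are assumptions introduced for exposition. Each candidate camera pose is scored by the summed Shannon entropy of the map cells it would observe, and the pose with the highest expected new information is selected.

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of a cell with occupancy probability p.

    Fully known cells (p = 0 or p = 1) carry no new information."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def expected_information(pose, grid, visible):
    """Sum the entropies of the cells a camera at `pose` would observe.

    `grid` maps cell ids to occupancy probabilities; `visible(pose, grid)`
    is an assumed sensing model returning the cell ids seen from `pose`."""
    return sum(cell_entropy(grid[c]) for c in visible(pose, grid))

def next_best_view(candidate_poses, grid, visible):
    """Pick the candidate pose with the highest expected new information."""
    return max(candidate_poses,
               key=lambda q: expected_information(q, grid, visible))
```

For example, a pose viewing two completely uncertain cells (p = 0.5, 1 bit each) scores 2 bits and is preferred over a pose viewing only cells already known to be occupied, which scores 0.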

Original language: English
Pages (from-to): 89-110
Number of pages: 22
Journal: Autonomous Robots
Volume: 19
Issue number: 1
DOIs
State: Published - Jul 2005
Externally published: Yes

Funding

The authors would like to acknowledge the support of the Jet Propulsion Laboratory, NASA in this work (in particular Dr. Paul Schenker and Dr. Terry Huntsberger). The authors also acknowledge the help of Grant Kristofek, MIT, for the fabrication of the experimental system used in this work.

Keywords

  • Cooperative robots
  • Information theory
  • Unstructured environments
