Manipulation
This robotic workstation demonstrator showcased the Centre’s capability in vision-enabled robotic grasping in a clear and compelling way to a general audience. The tasks it replicated included picking up everyday objects, placing objects, and exchanging objects with a person.
Team Members
Steve Martin
Queensland University of Technology
Steve graduated with a Bachelor of Mechatronics from QUT in 2009 and was a PhD student at the original QUT Cyphy Lab with Gordon Wyeth and Peter Corke. He rejoined QUT and the Centre in February 2016 as a research engineer to assist with engineering requirements as the Centre grew. Within the group, Steve works on a wide range of projects, from general day-to-day robot maintenance to software development and electrical design work.
Gavin Suddrey
Queensland University of Technology
Gavin graduated from QUT with a Bachelor of Games and Interactive Entertainment (Software Technology) in 2011, and a Bachelor of Information Technology (Honours I) in 2014. Since 2014 Gavin has worked in numerous roles at QUT, including as an Associate Lecturer within the School of Electrical Engineering and Computer Science; a Software Engineer with the Humanoid Robotics Project; and most recently as a Research Engineer with the Australian Centre for Robotic Vision. Gavin is also a PhD student studying part-time under Frederic Maire within the School of Electrical Engineering and Robotics.
Project Aim
For robots, grasping and manipulation are hard. One focus of the Centre’s research was enabling robots to master the manipulation of everyday objects in realistic, unstructured and dynamic settings. To achieve this, a robot must be able to integrate what it sees with how it moves. The result will be a new generation of robots that can operate effectively in “messy” human environments and are versatile enough to handle new tasks, new objects and new environments.
In 2020, the aim of this robotic workstation demonstrator project was to train the robot to receive an item from a person, to hand an item to a person, and to implement these demonstrations on a mobile manipulation platform.
Key Results
In 2020, the project team implemented the infrastructure for a table-top manipulation demonstrator intended to be installed in a public space and to run safely and largely unsupervised, by ensuring that the system is robust to failure. Developments included: a Linux system service to control the life-cycle of the demonstrator software and the required services and hardware; watchdog services and automatic error recovery to ensure safety; a behaviour tree for overall system control; and an AR-tag based calibration process to allow users to change the workspace layout.
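The calibration pipeline itself is not detailed in this report; as an illustration only, the sketch below shows one common way an AR-tag detection can be turned into a camera-to-workspace transform using homogeneous transforms in NumPy. The frame names, poses and the source of the tag detection are assumptions, not the demonstrator’s actual code.

```python
# Hypothetical sketch of AR-tag based workspace calibration.
# Frame names and the tag pose source are assumptions, not the
# demonstrator's actual implementation.
import numpy as np


def pose_to_matrix(xyz, rotation):
    """Build a 4x4 homogeneous transform from a translation and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = xyz
    return T


# Pose of the AR tag as detected in the camera frame (in practice this
# would come from a tag detector such as AprilTag or ar_track_alvar).
T_cam_tag = pose_to_matrix([0.10, -0.05, 0.80], np.eye(3))

# Known pose of the same tag in the workspace frame, measured once when
# the tag is fixed to the table.
T_ws_tag = pose_to_matrix([0.30, 0.20, 0.00], np.eye(3))

# Camera pose in the workspace frame: T_ws_cam = T_ws_tag * inv(T_cam_tag).
T_ws_cam = T_ws_tag @ np.linalg.inv(T_cam_tag)

# Any detection made in the camera frame can now be expressed in workspace
# coordinates, so grasp targets remain valid after the layout is changed.
point_cam = np.array([0.0, 0.0, 0.60, 1.0])
point_ws = T_ws_cam @ point_cam
print(point_ws[:3])
```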
In addition to these robustness aspects, the team also created an interactive demonstrator experience that showcases Centre research results, including: a spoken call to action, with feedback varying according to the proximity of detected faces; a pick-and-place demo; a compliant-control demo in which the robot opens and closes valves; and a hand-over demo in which the robot passes an object to the user.
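The report does not specify how proximity is estimated; purely as an illustration, the snippet below selects a spoken prompt from the apparent size of a detected face bounding box, on the assumption that a larger box means a closer visitor. The thresholds and prompts are placeholders.

```python
# Hypothetical illustration of proximity-dependent spoken prompts.
# The pixel-height thresholds and prompt wording are placeholders,
# not the demonstrator's actual values.

def choose_prompt(face_height_px):
    """Pick a call to action based on apparent face size (a proxy for distance)."""
    if face_height_px is None:
        return None                      # nobody in view, stay quiet
    if face_height_px < 60:
        return "Hi there! Come closer to try the robot."
    if face_height_px < 150:
        return "Welcome! Pick a demo on the touchscreen."
    return "Ready when you are: place an object on the table."


# Example: an 80-pixel-tall detected face triggers the 'Welcome' prompt,
# which would then be sent to a text-to-speech engine.
print(choose_prompt(80))
```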
The demonstrator was implemented as a behaviour tree with demo switching and robust error recovery capabilities. User interaction, both for selecting demos and for driving them, is handled through a simple web interface.
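The tree itself is not published in this report; the self-contained sketch below illustrates the general pattern of demo switching with fallback to error recovery, written in plain Python rather than the team’s actual behaviour-tree framework.

```python
# Minimal, self-contained behaviour-tree sketch showing demo switching
# with fallback to error recovery. This illustrates the pattern only;
# it is not the demonstrator's actual tree or framework.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"


class Action:
    """Leaf node wrapping a callable that returns SUCCESS or FAILURE."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return self.fn()


class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS


class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE


# Placeholder leaves: the real system would move the arm, speak, etc.
run_selected_demo = Action("run_demo", lambda: FAILURE)   # pretend the demo faulted
recover = Action("recover", lambda: SUCCESS)              # e.g. re-home the arm

# Root: try the demo chosen via the web interface; if it fails,
# fall back to error recovery so the workstation keeps running.
root = Selector([Sequence([run_selected_demo]), recover])
print(root.tick())   # -> SUCCESS (recovery handled the fault)
```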
The demonstrator spent six weeks off-site at three different locations: the ARM Hub, The Cube at QUT, and the World of Drones & Robotics Congress 2020. During these installations the demonstrator operated reliably without constant human supervision and interacted with many hundreds of visitors.
In the final months of 2020, the team continued to add capabilities to the robot, including: a more powerful object recognition system (based on the Centre’s RefineNet-lite), integration of the Centre’s Vision & Language capability, and a new language-to-action interface.
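The language-to-action interface is only named here; as a rough illustration of the idea, the snippet below maps a recognised spoken command onto one of the demonstrator’s actions. The verbs, objects and handler functions are hypothetical, not the Centre’s implementation.

```python
# Hypothetical language-to-action dispatch: map a recognised command to
# a robot behaviour. Verbs, objects and handlers are illustrative only.

def pick_up(obj):    return f"picking up the {obj}"
def hand_over(obj):  return f"handing over the {obj}"
def place(obj):      return f"placing the {obj}"

ACTIONS = {"pick up": pick_up, "hand me": hand_over, "put down": place}


def command_to_action(utterance):
    """Return the action invoked by an utterance like 'pick up the mug'."""
    for verb, handler in ACTIONS.items():
        if utterance.startswith(verb):
            obj = utterance[len(verb):].strip()
            if obj.startswith("the "):
                obj = obj[4:]
            return handler(obj)
    return "Sorry, I don't know that command."


print(command_to_action("hand me the banana"))   # -> handing over the banana
```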
The project team also deployed a version of the demonstrator at the Monash node and open-sourced a software package for interfacing with the Franka-Emika Panda robot.
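The open-sourced package is not named in this report; for context only, the sketch below drives a Panda arm through MoveIt’s standard Python interface (moveit_commander), which is one common baseline way to command the robot from ROS. It is not the Centre’s package, and the node and planning-group names are the stock defaults from the public Panda MoveIt configuration.

```python
# Illustrative only: commanding a Franka-Emika Panda through MoveIt's
# standard Python interface (moveit_commander). This is not the
# Centre's open-sourced package, just a common way to talk to the arm
# from ROS.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("panda_demo", anonymous=True)

# "panda_arm" is the planning group name in the stock Panda MoveIt config.
arm = moveit_commander.MoveGroupCommander("panda_arm")

# Move to the pre-defined 'ready' pose, then shut down cleanly.
arm.set_named_target("ready")
arm.go(wait=True)
arm.stop()
moveit_commander.roscpp_shutdown()
```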