Manipulation
Team Members
Steve Martin
Queensland University of Technology (QUT), Australia
I graduated with a Bachelor of Mechatronics from QUT in 2009 and am a former PhD student at the original QUT Cyphy Lab with Gordon Wyeth and Peter Corke. I rejoined QUT and the Centre in February 2016 as a research engineer to assist with the engineering requirements as the Centre grew. Within the group I work on a wide range of projects, from general day-to-day robot maintenance to software development and electrical design work.
Gavin Suddrey
Queensland University of Technology (QUT), Australia
Gavin graduated from QUT with a Bachelor of Games and Interactive Entertainment (Software Technology) in 2011, and a Bachelor of Information Technology (Honours I) in 2014. He was previously a PhD student within the Robotics and Autonomous Systems group at QUT with Frederic Maire. Gavin returned to QUT in 2017 as a software engineer on the humanoid robotics project. In this role he worked on expanding the general capabilities of the Pepper robot, primarily through the use of vision, whilst also assisting researchers and students within the Centre and the wider QUT community in utilising Pepper within their own research.
In August 2019, Gavin joined the Centre as a research engineer and is based in the Centre headquarters at QUT.
Project Aim
Creating robots that see is core to the Centre’s mission. The best way to demonstrate that a robot can see is for it to perform a useful everyday task that requires hand-eye coordination.
This demonstrator takes the form of a robotic workstation where we can showcase the Centre’s capability in vision-enabled robotic grasping in a clear and compelling way to a general audience. The everyday tasks it aims to replicate are picking up everyday objects, placing objects, and exchanging objects with a person. A key focus for 2020 is enabling the robot to receive an item from a person and hand an item to a person, and implementing the demonstrations on a mobile manipulation platform.
For robots, grasping and manipulation is hard. The focus of our research is for robots to master manipulation in unstructured and dynamic environments that reflect the unpredictability of the real world. To achieve this, they need to be able to integrate what they see with how they move. The result will be robots that can operate more effectively and be robust enough to handle new tasks, new objects and new environments.
Key Results
The project team implemented a generalised interface to the Franka-Emika Panda robot that allows for position control and velocity control with compliance and joint limits. The implementation has enabled our research students to more easily access the advanced capabilities of this robot. The code has been open sourced to the global research community and has also been installed on Panda robots at the Centre’s Monash node.
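To make the shape of such an interface concrete, the sketch below shows a minimal joint-velocity layer that clamps commands against velocity and soft position limits before forwarding them to the arm. The topic names, message types and limit values are illustrative assumptions only, not the Centre’s released interface.

```python
#!/usr/bin/env python
# Illustrative sketch only: a thin joint-velocity interface with limit clamping.
# Topic names and limit values are placeholders, not the Centre's released API.
import rospy
from std_msgs.msg import Float64MultiArray
from sensor_msgs.msg import JointState

# Hypothetical per-joint velocity limits (rad/s) for a 7-DoF arm.
VEL_LIMITS = [2.0, 2.0, 2.0, 2.0, 2.5, 2.5, 2.5]
# Hypothetical soft position limits (rad) used to stop motion near a joint end-stop.
POS_LIMITS = [(-2.8, 2.8)] * 7


class VelocityInterface(object):
    def __init__(self):
        self.positions = [0.0] * 7
        self.pub = rospy.Publisher('/panda/joint_velocity_command',
                                   Float64MultiArray, queue_size=1)
        rospy.Subscriber('/panda/joint_states', JointState, self.on_state)
        rospy.Subscriber('/panda/joint_velocity_request', Float64MultiArray,
                         self.on_request)

    def on_state(self, msg):
        # Cache the latest joint positions for the soft-limit check.
        self.positions = list(msg.position[:7])

    def on_request(self, msg):
        safe = []
        for i, v in enumerate(msg.data[:7]):
            # Clamp to the joint's velocity limit.
            v = max(-VEL_LIMITS[i], min(VEL_LIMITS[i], v))
            # Zero the command if it would drive the joint further past a soft limit.
            lo, hi = POS_LIMITS[i]
            if (self.positions[i] <= lo and v < 0) or (self.positions[i] >= hi and v > 0):
                v = 0.0
            safe.append(v)
        self.pub.publish(Float64MultiArray(data=safe))


if __name__ == '__main__':
    rospy.init_node('panda_velocity_interface')
    VelocityInterface()
    rospy.spin()
```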
Within the Centre’s Manipulation & Vision project, PhD Researcher Doug Morrison has developed a generative grasping convolutional neural network (GG-CNN) which predicts a pixel-wise grasp quality and can be deployed in closed-loop grasping scenarios. The network has achieved excellent results in grasping, particularly in cluttered scenes, with an 84% grasp success rate on a set of previously unseen objects and 94% on household items. His paper with co-authors Centre Director Peter Corke and Research Fellow Juxi Leitner, “Learning robust, real-time, reactive robotic grasping”, was published in The International Journal of Robotics Research. The project team ported GG-CNN to the Centre’s CloudVis service, our cloud computer vision platform, which makes some of the most recent developments in computer vision available for general use. This allows the grasp planner to run on a low-end computer without a GPU, which is important in making the demonstrator easy to run on any computer.
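For readers unfamiliar with the pixel-wise formulation, the sketch below shows how a grasp can be read off GG-CNN-style output maps: a grasp-quality map, an angle map and a gripper-width map aligned with the input depth image. The array names, shapes and toy data are assumptions for illustration; the IJRR paper describes the actual network and post-processing.

```python
import numpy as np


def select_grasp(quality, angle, width, depth):
    """Pick the best grasp from pixel-wise GG-CNN-style output maps.

    quality, angle, width: HxW arrays assumed to come from the network;
    depth: the HxW input depth image, used to recover the grasp height.
    Returns (row, col, grasp_angle, gripper_width, grasp_depth).
    """
    # The highest-quality pixel gives the grasp centre in image coordinates.
    row, col = np.unravel_index(np.argmax(quality), quality.shape)
    return row, col, angle[row, col], width[row, col], depth[row, col]


# Toy usage with random maps standing in for network output.
H, W = 300, 300
q, a, w, d = (np.random.rand(H, W) for _ in range(4))
print(select_grasp(q, a, w, d))
```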
A tabletop demonstrator for GG-CNN was created that allows an unskilled user to command the robot to pick up and bin everyday objects placed on the tabletop. This demonstrator uses a Franka-Emika Panda arm with visual input from an end-effector-mounted RGB-D camera. This demonstration is an exemplar of many real-world applications that require grasping complex objects in cluttered environments.
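At a high level the demonstrator is a sense-plan-act loop. The sketch below captures that structure only; the camera, planner and arm objects are hypothetical stand-ins injected as parameters, not the demonstrator’s real interfaces.

```python
# Structural sketch of the pick-and-bin loop; every component here is a
# placeholder stand-in, not the demonstrator's real code.

def run_pick_and_bin(camera, planner, arm, bin_pose, cycles=10):
    """Repeatedly grasp an object from the tabletop and drop it in the bin."""
    for _ in range(cycles):
        depth = camera.capture_depth()      # eye-in-hand RGB-D capture
        grasp = planner.plan(depth)         # e.g. GG-CNN served via CloudVis
        if grasp is None:                   # nothing left to pick
            break
        arm.move_to(grasp.approach_pose())  # pre-grasp pose above the object
        arm.move_to(grasp.grasp_pose())     # descend to the grasp
        arm.close_gripper()
        arm.move_to(grasp.approach_pose())  # lift clear of the table
        arm.move_to(bin_pose)               # carry to the bin
        arm.open_gripper()
```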
Research Engineer Gavin Suddrey developed a touchscreen-driven demonstrator front end that manages a complete Robot Operating System (ROS) startup and shutdown and provides a touch-based Graphical User Interface (GUI) for individual demonstrator applications. This means that the demonstrations can be run anywhere and anytime without requiring detailed knowledge of robotic software. The applications are modular and can be installed into and updated within the demonstrator framework by end-users across the Centre.
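As a sketch of the launcher pattern only (not the actual front end), each application could be registered with a ROS launch file, and the GUI’s buttons would start and stop roslaunch processes behind the scenes. The package and launch-file names below are invented for illustration.

```python
# Minimal sketch of a demonstrator launcher: a GUI button would call
# start()/stop() on one of these. App names and launch files are invented.
import signal
import subprocess
import time

APPS = {
    'Pick and Bin': ('acrv_demos', 'pick_and_bin.launch'),    # hypothetical
    'Valve Turning': ('acrv_demos', 'valve_turning.launch'),  # hypothetical
}


class DemoApp(object):
    def __init__(self, package, launch_file):
        self.args = ['roslaunch', package, launch_file]
        self.proc = None

    def start(self):
        # Launch the app only if it is not already running.
        if self.proc is None or self.proc.poll() is not None:
            self.proc = subprocess.Popen(self.args)

    def stop(self):
        if self.proc is not None and self.proc.poll() is None:
            # SIGINT lets roslaunch shut its nodes down cleanly.
            self.proc.send_signal(signal.SIGINT)
            self.proc.wait()


if __name__ == '__main__':
    app = DemoApp(*APPS['Pick and Bin'])
    app.start()
    time.sleep(60)  # run the demo for a minute, then shut it down
    app.stop()
```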
A prototype of a tabletop demonstrator for valve turning was created; it changes the state of a user-selected valve (open to closed, or closed to open) using compliant motion, based on input from an end-effector-mounted RGB-D camera.
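As a heavily hedged sketch of the compliant turning step: command a constant wrist rotation and stop when the torque measured about the tool axis indicates the valve has reached the end of its travel. The topic names, message types and thresholds below are assumptions, not the prototype’s actual implementation.

```python
# Illustrative only: turn a valve by commanding a constant wrist rotation and
# stopping when the sensed torque spikes (valve at end of travel). All topic
# names and thresholds are placeholders.
import rospy
from geometry_msgs.msg import TwistStamped, WrenchStamped

TORQUE_LIMIT = 2.0  # Nm about the tool z-axis; placeholder threshold
TURN_RATE = 0.3     # rad/s wrist rotation; placeholder


class ValveTurner(object):
    def __init__(self):
        self.torque_z = 0.0
        self.pub = rospy.Publisher('/panda/cartesian_velocity',
                                   TwistStamped, queue_size=1)
        rospy.Subscriber('/panda/wrench', WrenchStamped, self.on_wrench)

    def on_wrench(self, msg):
        # Track the torque about the tool z-axis from the arm's wrench estimate.
        self.torque_z = msg.wrench.torque.z

    def turn(self):
        rate = rospy.Rate(100)
        cmd = TwistStamped()
        cmd.twist.angular.z = TURN_RATE
        # Rotate until the torque indicates the valve has hit its end stop.
        while not rospy.is_shutdown() and abs(self.torque_z) < TORQUE_LIMIT:
            cmd.header.stamp = rospy.Time.now()
            self.pub.publish(cmd)
            rate.sleep()
        self.pub.publish(TwistStamped())  # zero the command to stop the arm


if __name__ == '__main__':
    rospy.init_node('valve_turn_sketch')
    ValveTurner().turn()
```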
Activity Plan for 2020
- Open source the motion control software developed for the Franka-Emika Panda Robot.
- Integrate a comprehensive table-top manipulation demonstrator that incorporates technologies from across the Centre, from vision-based grasp planning to interaction with people through gesture and language.
- Deploy the demonstrator interface on the mobile manipulation platform and demonstrate the ability to move objects between shelves and tables.