RVSS 2017 — Robotic Vision Summer School


Program Overview

Sunday, 12 March
  12:00  Travel
  16:00  Welcome & Registration
  18:00  Social Event (Scavenger Hunt)
  19:00  Dinner

Monday, 13 March
  8:00   Breakfast
  9:00   Research Trends A (Prof. Davide Scaramuzza) [slides]
  10:30  Tea Break
  11:00  Research Trends B (Prof. Javier Civera) [slides]
  12:30  Lunch
  14:00  Special Topic A (Prof. Hongdong Li) [slides]
  15:00  Special Topic B (Dr. Anders Eriksson) [slides]
  16:00  Coffee; Workshop intro
  18:00  Social Event (Trivia Quiz)
  19:00  Dinner

Tuesday, 14 March
  8:00   Breakfast
  9:00   Tutorial A: SLAM from A-Z [slides-1, slides-2, slides-3]
  10:30  Tea Break
  12:30  Lunch
  14:00  Special Topic C (Dr. Chris McCool) [slides]
  15:00  Round Table (Chair: Prof. Peter Corke)
  16:00  Coffee; Workshop
  18:00  Workshop
  19:00  Dinner

Wednesday, 15 March
  8:00   Breakfast
  9:00   Tutorial B: Semantic Vision [slides-1, slides-2, slides-3, slides-4]
  10:30  Tea Break
  12:30  Lunch
  14:00  Free time
  16:00  Coffee; Workshop
  18:00  Social Event (Bonfire)
  19:00  Dinner

Thursday, 16 March
  8:00   Breakfast
  9:00   Tutorial C: New Approaches to Deal with the Real World [slides-1, slides-2, slides-3, slides-4]
  10:30  Tea Break
  12:30  Lunch
  14:00  Industry Talk A (Dr. Sebastien Rougeaux, Seeing Machines)
  14:30  Industry Talk B (Dr. David Wood, Ocular Robotics) [slides]
  15:00  Special Topic D (Prof. Ian Reid) [slides]
  18:00  Social Event (Movies)
  19:00  Dinner

Friday, 17 March
  8:00   Breakfast
  9:00   Workshop Demos
  10:30  Tea Break
  11:00  Research Trends C (Dr. Simon Lucey) [slides]
  12:30  Lunch
  14:00  Travel


Detailed Program Schedule

Research Trends A

9:00 — 10:30, Monday, 13th March, 2017
Speaker:- Prof. Davide Scaramuzza, University of Zurich
Talk Title:- “Robust Visual-Inertial State Estimation: From Frame-based to Event-based Vision”

Abstract:- I will present the main algorithms to achieve robust, 6-DOF state estimation for mobile robots using passive sensing. Since cameras alone are not robust enough to high-speed motion and high-dynamic range scenes, I will describe how IMUs and event-based cameras can be fused with visual information to achieve higher accuracy and robustness. I will therefore dig into the topic of event-based cameras, which are revolutionary sensors with a latency of microseconds, a very high dynamic range, and a measurement update rate that is almost a million times faster than standard cameras. Finally, I will show concrete applications of these methods in autonomous navigation of vision-controlled drones.
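The fusion idea in this abstract can be illustrated with a toy example (not the speaker's actual method): a one-dimensional complementary filter that integrates high-rate gyro readings and corrects the accumulated drift with occasional absolute orientation fixes from vision. All names and constants here are hypothetical.

```python
import numpy as np

def complementary_filter(gyro_rates, vision_yaw, dt, alpha=0.98):
    """Toy 1-D sensor fusion: integrate high-rate gyro rate readings
    and correct the resulting drift with absolute yaw measurements
    from vision. `vision_yaw[k]` is NaN when no vision fix arrived."""
    yaw = 0.0
    estimates = []
    for rate, vis in zip(gyro_rates, vision_yaw):
        yaw += rate * dt                   # dead-reckoning (drifts with bias)
        if not np.isnan(vis):              # occasional vision fix
            yaw = alpha * yaw + (1 - alpha) * vis
        estimates.append(yaw)
    return np.array(estimates)
```

With a biased gyro, the fused estimate stays close to the true yaw while pure integration drifts away, which is the basic motivation for visual-inertial fusion.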


Research Trends B

11:00 — 12:30, Monday, 13th March, 2017
Speaker:- Prof. Javier Civera, University of Zaragoza
Talk Title:- “Monocular Mapping”

Abstract:- Estimating an accurate and dense 3D map of a scene from a monocular sequence, in real time at frame rate, is still a research challenge. Most of the more solid results are based on the correspondence of salient points across different views, which has two main limitations. Firstly, as low-texture points cannot be reconstructed, maps are incomplete and of limited use for robots. Secondly, low-parallax views produce maps of high uncertainty; this can be a problem for mobile robots, as high-parallax motions might not be feasible for certain configurations.

In this lecture, we will review the state of the art in RGB mapping, starting from the most traditional sparse and semi-dense representations, detailing the most successful dense algorithms and highlighting their main weaknesses and strengths. We will reach the most advanced state of the art and sketch the most promising directions for future research.
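The second limitation above, high uncertainty under low parallax, can be made concrete with the standard pinhole-stereo depth relation Z = fB/d: for a fixed disparity error, depth error grows with the square of the depth and shrinks with the baseline (the parallax). A minimal sketch with illustrative numbers (the focal length and baseline values are hypothetical):

```python
import numpy as np

def stereo_depth(f, baseline, disparity):
    """Pinhole stereo: depth Z = f * B / d (f in pixels, d = disparity)."""
    return f * baseline / disparity

def depth_error(f, baseline, depth, disp_noise=1.0):
    """First-order depth uncertainty for a disparity error of
    `disp_noise` pixels: dZ = Z**2 / (f * B) * dd."""
    return depth ** 2 / (f * baseline) * disp_noise
```

Doubling the baseline halves the depth error, which is why wide-parallax motion yields better-conditioned maps.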


Special Topic A

14:00 — 15:00, Monday, 13th March, 2017
Speaker:- Prof. Hongdong Li, Australian National University
Talk Title:- “Novel approaches for 3D reconstruction of non-rigid deformable shapes from multiple view correspondences”

Abstract:- In this tutorial, I will start by introducing the conventional factorisation-based multiple-view non-rigid structure-from-motion method for 3D reconstruction of deformable shapes and structures, and then move on to describing our recent work on novel approaches for solving this non-rigid 3D recovery problem. One of these approaches is based on the idea of relaxing the rigidity constraint to an “as rigid as possible” (ARAP) constraint; the other is based on the analogy of solving a 3D jigsaw puzzle involving many piecewise-rigid surflets. My objective is that, through this one-hour tutorial session, students will not only learn particular state-of-the-art techniques for solving the non-rigid SfM problem per se, but also gain insights and skills in how to initiate and carry out a novel, original research idea from concept formation to algorithmic implementation. Some of the work presented in this tutorial was reported at top conferences in the field.


Special Topic B

15:00 — 16:00, Monday, 13th March, 2017
Speaker:- Dr. Anders Eriksson, Queensland University of Technology
Talk Title:- “Proximal Splitting Methods in Computer Vision”

Abstract:- This talk will be a gentle introduction to proximal splitting methods. First proposed in the 1960s, these methods have recently become increasingly popular in machine learning, signal processing and computer vision as a tool for solving large-scale optimization problems. We will review the basic properties of this very general class of algorithms and present some specific optimization methods based on proximal splitting. The aim is to illustrate the simplicity, applicability as well as potential of these approaches. A number of examples from our own recent work on applications in multi-view geometry will be presented and discussed.
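As a taste of the material, here is a minimal sketch (not the speaker's code) of one classic proximal splitting method, ISTA, which solves the lasso problem by alternating a gradient step on the smooth least-squares term with the proximal operator of the l1 term:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal-gradient (ISTA) iteration for the lasso problem
       min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)        # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The appeal the abstract mentions is visible here: the whole algorithm is a gradient step plus a cheap closed-form prox, yet it handles a nonsmooth objective.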


Tutorial A

9:00 — 12:30, Tuesday, 14th March, 2017
Speakers:- Dr. Viorela Ila, Dr. Yasir Latif, Dr. Vincent Lui
Talk Title:- “SLAM from A-Z”

Abstract:- This three-part tutorial will cover aspects of localizing a robot in an unknown environment while simultaneously mapping that environment from noisy sensor measurements, a problem known in robotics as simultaneous localization and mapping (SLAM). The first part of the tutorial will cover sensor data processing and registration. The second part will introduce a probabilistic framework for integrating the information from all the on-board sensors to produce an estimate of the robot pose and the map of the environment; this part will detail the core SLAM algorithms. The third part of the tutorial will show the importance of data association and describe several existing techniques.

Besides covering the theory, this tutorial also provides some hands-on work, which consists of putting together all the parts needed to implement and run a SLAM algorithm on board a mobile robotic platform. We will use Turtlebot platforms; the code is implemented in Python and runs under the Robot Operating System (ROS). The exercise will be of the “fill in the blanks” type, with most of the functionality already implemented.
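The tutorial's actual Turtlebot/ROS exercise is not reproduced here, but the probabilistic predict/update cycle at the heart of filtering-based SLAM can be sketched in a few lines for a hypothetical 1-D robot:

```python
def kf_predict(mean, var, u, motion_noise):
    """Motion update: the robot moves by u (1-D); uncertainty grows."""
    return mean + u, var + motion_noise

def kf_update(mean, var, z, meas_noise):
    """Measurement update: fuse an absolute position observation z;
    uncertainty shrinks by the Kalman gain."""
    k = var / (var + meas_noise)        # Kalman gain
    return mean + k * (z - mean), (1 - k) * var
```

Alternating these two steps, prediction from odometry and correction from sensor observations, is the basic estimation loop that the tutorial's probabilistic framework generalizes to full poses and maps.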


Special Topic C

14:00 — 15:00, Tuesday, 14th March, 2017
Speaker:- Dr. Chris McCool, Queensland University of Technology
Talk Title:- “Vision for Agricultural Robotics”

Abstract:- This presentation will give an overview of the development of vision systems to detect and segment crops, an essential capability for a robotic (capsicum) harvester, Harvey. The horticulture industry remains heavily reliant on manual labour and as such is strongly affected by labour costs; in Australia, harvesting labour costs accounted for 20% to 30% of total production costs in 2013-14.

This talk will discuss the adaptation of the Faster R-CNN object detection approach to exploit multi-modal (RGB and NIR) information, and describe methods to derive efficient deep convolutional neural network (DCNN) models that can be deployed on robotic platforms such as Harvey.


Round Table Discussion

15:00 — 16:00, Tuesday, 14th March, 2017
Chair:- Prof. Peter Corke, Queensland University of Technology
Topic:- “Future of Robotics and Computer Vision and How to Make a Career in this Area”


Tutorial B

9:00 — 12:30, Wednesday, 15th March, 2017
Speakers:- Dr. Basura Fernando, Dr. Sareh Shirazi, Dr. Markus Eich, Dr. Trung Pham
Talk Title:- “Human Action Recognition and Semantics”

Abstract:- In this tutorial, we cover human action recognition from videos. The tutorial starts by motivating applications of human action recognition in the real world. A typical human action recognition pipeline consists of temporal feature extraction, temporal modelling, and action classification; this tutorial covers all aspects of that pipeline, and by the end the student should understand how to design a human action recognition system. First, the tutorial will cover some basics of temporal feature extraction, such as 3D space-time interest point detection, optical flow features, temporal templates, motion energy images, motion history images, dense trajectories, and motion boundary histograms. Next, the tutorial will cover temporal modelling using several approaches. We start by discussing temporal pooling; specifically, max pooling, average pooling, and rank pooling will be covered. Then we discuss temporal modelling techniques such as temporal pyramids, linear dynamical systems, and Hidden Markov Models, and close this section with some basics of neural temporal encoding techniques. Finally, we discuss classification techniques and semantic representations for objects and actions.
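The three pooling operators named above can be sketched as follows; the rank-pooling variant here is a simplified least-squares stand-in for the ordering-based formulation, not the tutorial's exact algorithm:

```python
import numpy as np

def max_pool(frames):
    """frames: (T, D) per-frame features -> (D,) video descriptor."""
    return frames.max(axis=0)

def avg_pool(frames):
    """Average pooling over the temporal axis."""
    return frames.mean(axis=0)

def rank_pool(frames):
    """Rank-pooling sketch: fit a linear model that recovers the
    temporal order of the frames, and use its parameter vector as
    the video descriptor (it encodes the direction of change)."""
    T = frames.shape[0]
    times = np.arange(1, T + 1, dtype=float)
    w, *_ = np.linalg.lstsq(frames, times, rcond=None)
    return w
```

Max and average pooling discard temporal order entirely; rank pooling keeps a summary of how the features evolve over time, which is why it is discussed as a separate family.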

Semantic object representations and the semantics of the scene are an essential part of any human action recognition system: correct identification of human-object interactions and good scene understanding help to improve action recognition. We discuss Convolutional Neural Networks and semantic segmentation for extracting semantics and objects, and then show how to use these models for human action recognition, using either neural classifiers or support vector machine based action inference.


Tutorial C

9:00 — 12:30, Thursday, 16th March, 2017
Speakers:- Dr. Juxi Leitner, Dr. Niko Suenderhauf, Dr. Feras Dayoub, Dr. Chuong Nguyen
Talk Title:- “New Approaches to Deal with the Real World”

Abstract:- An important aspect of visual perception is the recognition and detection of objects. We will be starting off with some basic concepts on how to perform these tasks, before going into the problems that arise when applying these in real-world scenarios. Robotic platforms need to deal with a variety of environments and adverse conditions, e.g. change in lighting, which increases the difficulty of these tasks. Recent years have seen a significant performance improvement by applying deep learning strategies to object detection and recognition. This trend has shown promise also in robotic vision applications and we will present current limits and potentials of this approach.

This tutorial will include some examples and lessons learned from deploying robotic vision systems “in the wild”, from warehouse automation/competition settings, such as the Amazon Picking Challenge, all the way to underwater applications on the CotsBot.


Industry Talk A

14:00 — 14:30, Thursday, 16th March, 2017
Speaker:- Dr. Sebastien Rougeaux, Seeing Machines
Talk Title:- “Making Seeing Machines”

Abstract:- The talk will present a brief history of the Australian computer vision company ‘Seeing Machines’ and will focus on the current technical challenges and opportunities being encountered in the context of commercial driver monitoring applications.


Industry Talk B

14:30 — 15:00, Thursday, 16th March, 2017
Speaker:- Dr. David Wood, Ocular Robotics
Talk Title:- “Make Your Life Easier With Hardware – Solving perception challenges with hardware solutions”

Abstract:- The design of any perception system inherently requires trade-offs. What field do I need to be observing? What resolution do I need? What’s my weight, power and financial budget? In almost every case, these goals will require compromise. Managing this compromise is a significant challenge in developing a system in both the private and research sectors, where top-level performance is required with limited resources. There are a wide array of systems which could benefit from directed perception. Existing offerings are still almost all based on either pan-tilt-zoom systems (PTZ), where the entire sensor is moved to examine the area of interest, or on Virtual PTZ, where a high-resolution wide-field image is sub-sampled to generate a virtual low-resolution steerable camera.

Recent technologies have presented some alternative approaches. In particular, the use of mirror-based directed-perception systems in which the sensor remains stationary allows a much greater degree of flexibility: parameters such as size and weight are decoupled from the motion dynamics of the sensor's field of view. These systems also allow ultra-responsive precision pointing, enabling intelligent systems to rapidly acquire directed high-resolution data.

In this presentation, we will go into detail on some of the trade-offs that are typically made in mobile robotic platforms. We will then discuss how more modern approaches can improve performance, and examine the additional advantages that moving to a modern sensor-directing approach can deliver, specifically examining Ocular Robotics’ suite of sensor-pointing solutions.


Special Topic D

15:00 — 16:00, Thursday, 16th March, 2017
Speaker:- Prof. Ian Reid, University of Adelaide
Talk Title:- TBA

Abstract:- TBA


Research Trends C

11:00 — 12:30, Friday, 17th March, 2017
Speaker:- Dr. Simon Lucey, Carnegie Mellon University
Talk Title:- “The Fast & the Compressible – Reconstructing the 3D World through Mobile Devices”

Abstract:- Mobile devices are shifting from being a tool for communication to one that is used increasingly for perception. In this talk we will discuss my group’s work in the rapidly emerging space of using mobile devices to visually sense the 3D world. First, we will discuss the employment of high-speed (240+ FPS) cameras, now found on most consumer mobile devices. In particular, we will discuss how these high frame rates afford the application of direct photometric methods that allow for – previously unattainable – accurate, dense, and computationally efficient camera tracking & 3D reconstruction. Second, we will discuss how the problem of object category specific dense 3D reconstruction (e.g. “chair”, “bike”, “table”, etc.) can be posed as a Non-Rigid Structure from Motion (NRSfM) problem. We will discuss some theoretical advancements we have made recently surrounding this problem – in particular when one assumes the 3D shape being reconstructed is compressible. We will then relate these theoretical advancements to practical algorithms that can be applied to most modern mobile devices.
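The notion of a “compressible” 3D shape mentioned in the abstract can be illustrated with a truncated SVD: if a deforming shape sequence is well explained by a few basis shapes, its measurement matrix is approximately low-rank. A minimal sketch (illustrative only, not the speaker's NRSfM formulation):

```python
import numpy as np

def low_rank_approx(M, r):
    """Best rank-r approximation of M via truncated SVD — the sense
    in which a deforming shape can be 'compressible': a few basis
    shapes explain most of the observed motion."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

When the data truly has rank r, the rank-r approximation is exact, while a smaller rank leaves a residual; NRSfM methods exploit exactly this kind of redundancy.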


Name | Location | Role
Zongyuan Ge | QUT | PhD Candidate
Zhibin Liao | University of Adelaide | PhD Candidate
Zetao “Jason” Chen | QUT | PhD Candidate
Yuchao Jiang | University of Adelaide | PhD Candidate
Yi “Joey” Zhou | ANU | PhD Candidate
Yasir Latif | University of Adelaide | Research Fellow
Yan Zou | Monash | PhD Candidate
Xiaoqin Wang | Monash | PhD Candidate
Will Chamberlain | QUT | PhD Candidate
Viorela Ila | ANU | Research Fellow
Vincent Lui | Monash | PhD Candidate
Vijay Kumar | University of Adelaide | Research Fellow
Trung Than Pham | University of Adelaide | Research Fellow
Tristan Perez | QUT | Associate Investigator
Tracy Kelly | QUT | Finance & Administration Officer
Tong Shen | University of Adelaide | PhD Candidate
Tom Drummond | Monash University | Chief Investigator
Tim Macuga | QUT | Communications and Media Officer
Thuy Mai | University of Adelaide | Node Administration Officer
Thanuja Dharmasiri | Monash | PhD Candidate
Tat-Jun Chin | University of Adelaide | Associate Investigator
Sue Keay | QUT | Chief Operating Officer
Stephen Gould | ANU | Chief Investigator
Sourav Garg | QUT | PhD Candidate
Sean McMahon | QUT | PhD Candidate
Sareh Shirazi | QUT | Research Fellow
Sarah Allen | QUT | Node Administration Officer, PA to Centre Director Professor Peter Corke
Ruth Schulz | QUT | Research Fellow
Ross Crawford | QUT | Associate Investigator
Rodrigo Santa Cruz | ANU | PhD Candidate
Rob Mahony | ANU | Chief Investigator
Richard Hartley | ANU | Chief Investigator
Riccardo Spica | ANU | PhD Candidate
Qinfeng Shi | University of Adelaide | Associate Investigator
Philip Torr | Oxford | Partner Investigator
Peter Kujala | QUT | PhD Candidate
Peter Corke | QUT | Centre Director
Peter Anderson | ANU | PhD Candidate
Paul Newman | Oxford | Partner Investigator
Niko Suenderhauf | QUT | Research Fellow
Mike Brady | Oxford | Centre Advisory Committee
Michelle Simmons | UNSW | Centre Advisory Committee
Michael Milford | QUT | Chief Investigator
Matt Dunbabin | QUT | Associate Investigator
Markus Eich | QUT | Research Fellow
Marc Pollefeys | ETH Zurich | Partner Investigator
Mandyam Srinivasan | University of Queensland | Centre Advisory Committee
Luis Mejias Alvarez | QUT | Associate Investigator
Lin Wu | University of Adelaide | Research Fellow
Laurent Kneip | ANU | Associate Investigator
Khurrum Aftab | Monash University | Research Fellow
Kate Aldridge | QUT | Centre Administrative Coordinator
Juxi Leitner | QUT | Research Fellow
Juan Adarve | ANU | PhD Candidate
Jonghyuk Kim | ANU | Associate Investigator
Jonathan Roberts | QUT | Chief Investigator
John Skinner | QUT | PhD Candidate
Jochen Trumpf | ANU | Associate Investigator
Jeffrey Devaraj | QUT | PhD Candidate
Jason Ford | QUT | Associate Investigator
Jae-Hak Kim | University of Adelaide | Research Fellow
Inkyu Sa | QUT | Research Fellow
Ian Reid | University of Adelaide | Deputy Director
Hui Li | University of Adelaide | PhD Candidate
Hugh Durrant-Whyte | University of Sydney | Centre Advisory Committee
Hongdong Li | ANU | Chief Investigator
Gustavo Carneiro | University of Adelaide | Chief Investigator
Guosheng Lin | University of Adelaide | Research Fellow
Greg Lee | QUT | External Engagement Coordinator
Gordon Wyeth | QUT | Chief Investigator
Frank Dellaert | Oxford | Partner Investigator
Francois Chaumette | Inria | Partner Investigator
Feras Dayoub | QUT | Research Fellow
Fatih Porikli | ANU | Associate Investigator
Fangyi Zhang | QUT | PhD Candidate
Fahimeh Rezazadegan | QUT | PhD Candidate
Edison Guo | ANU | PhD Candidate
Donald Dansereau | QUT | Research Fellow
Dinesh Gamage | Monash | Research Fellow
David Suter | University of Adelaide | Associate Investigator
David Hall | QUT | PhD Candidate
David Ball | QUT | Research Fellow
Dan Richards | QUT | PhD Candidate
Clinton Fookes | QUT | Associate Investigator
Chuong Nguyen | ANU | Research Fellow
Chunhua Shen | University of Adelaide | Chief Investigator
Chris McCool | QUT | Research Fellow
Chris Lehnert | QUT | Research Fellow
Chris Jeffery | QUT | PhD Candidate
Bohan Zhuang | University of Adelaide | PhD Candidate
Ben Upcroft | QUT | Chief Investigator
Ben Talbot | QUT | PhD Candidate
Ben Meyer | Monash | PhD Candidate
Ben Harwood | Monash | PhD Candidate
Basura Fernando | ANU | Research Fellow
Anton Van Den Hengel | University of Adelaide | Chief Investigator
Anthony Dick | University of Adelaide | Associate Investigator
Anoop Cherian | ANU | Research Fellow
Anjali Jaiprakash | QUT | Research Fellow
Andrew Spek | Monash | PhD Candidate
Andrew English | QUT | PhD Candidate
Andrew Davison | Imperial College London | Partner Investigator
Andres Felipe Marmol Velez | QUT | PhD Candidate
Anders Eriksson | QUT | Research Fellow
Alex Zelinsky | Defence Science and Technology Organisation | Centre Advisory Committee
Ajay Pandey | QUT | Research Fellow
Ahmet Sekercioglu | Monash University | Associate Investigator
Adam Tow | QUT | PhD Candidate
Adam Jacobson | QUT | PhD Candidate