ACRV Challenges

ACRV Scene Understanding Challenge 

This novel challenge tasked competitors with creating systems that can understand the semantic and geometric aspects of an environment through two distinct tasks: Object-based Semantic SLAM and Scene Change Detection. The challenge provided high-fidelity simulated environments for testing, three difficulty levels, a simple OpenAI-Gym-style API enabling sim-to-real transfer, and a new measure for evaluating semantic object maps. All of this was enabled by the newly created BenchBot framework, also developed here at the ACRV.
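
As a loose illustration of the Gym-style pattern (the class and method names below are hypothetical, not BenchBot's actual interface), the same observe-act loop can drive either a simulated or a real robot:

    # Hypothetical sketch of an OpenAI-Gym-style agent loop; the names
    # (robot, agent, reset, step, choose_action) are illustrative and
    # are not BenchBot's actual API.
    def run_agent(robot, agent):
        observations = robot.reset()  # first batch of sensor readings
        done = False
        while not done:
            action = agent.choose_action(observations)
            observations, done = robot.step(action)

    # Because the agent only consumes observations and emits actions,
    # the robot backend can be swapped from simulator to real hardware
    # without touching agent code (the sim-to-real idea).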


Probabilistic Object Detection (PrOD) Challenge

If a robot moves with overconfidence about its surroundings … it is going to break things.

In our probabilistic object detection (PrOD) challenge, we need object detection systems to move beyond a simple bounding box, class label, and confidence score. The challenge requires participants to detect objects in video data produced from high-fidelity simulations.

The novelty of this challenge is that participants are rewarded for providing accurate estimates of both spatial and semantic uncertainty for every detection, using probabilistic bounding boxes.
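
Concretely, this means a detection pairs a full label distribution with Gaussian uncertainty over the box corners, rather than one label, one score, and one box. A minimal sketch (the field names here are assumptions for illustration, not necessarily the challenge's exact submission format):

    import numpy as np

    # One probabilistic detection; field names are illustrative
    # assumptions, not necessarily the exact submission format.
    detection = {
        # Semantic uncertainty: a probability for every class, not
        # just the winner (a hypothetical 3-class problem here).
        "label_probs": [0.7, 0.2, 0.1],
        # Mean box corners: [x1, y1, x2, y2] in pixels.
        "bbox": [120.0, 45.0, 310.0, 220.0],
        # Spatial uncertainty: a 2x2 pixel covariance per corner.
        "corner_covariances": [
            np.array([[25.0, 0.0], [0.0, 16.0]]),  # top-left
            np.array([[30.0, 0.0], [0.0, 20.0]]),  # bottom-right
        ],
    }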


To find out more about the ACRV robotic vision challenges, check out our website: roboticvisionchallenge.org



7 Comments
David H.
August 20, 2020 2:49 pm

Sooooo much work has gone into both of these challenges, but it has been very rewarding.

It was such a big team effort, one I am proud to have been a part of, and it needed so many different areas of expertise: object detection, Unreal simulation, robotics, ROS, Isaac, evaluation measure design, and semantic SLAM all had to be understood to get these challenges working.

David H.
August 20, 2020 2:50 pm

By the numbers, the probabilistic object detection (PrOD) dataset alone had:

  • 3 datasets (develop, test-dev, test)
  • 8 simulated base environments
  • 3 simulated robot heights
  • day and night lighting variations
  • 40 video sequences (4 develop, 18 test-dev, 18 test)
  • 201,708 images (21,491 develop, 123,704 test-dev, 56,513 test)
  • 796,676 objects (56,578 develop, 552,611 test-dev, 187,487 test)

David H.
August 20, 2020 2:52 pm

My proudest achievement from the challenges was the creation of the new evaluation measure, probability-based detection quality (PDQ), and finally seeing it published at WACV 2020. I even got to be in the USA in March of 2020 … just before the world exploded.
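
(For anyone curious how PDQ works: it optimally matches detections to ground-truth objects and scores each matched pair as the geometric mean of a spatial quality and a label quality. The snippet below is a simplified sketch of that pairwise term, not the full published measure.)

    import numpy as np

    def pairwise_pdq(label_quality, fg_loss, bg_loss):
        # label_quality: probability the detection assigned to the
        #   ground-truth class (the semantic term).
        # fg_loss / bg_loss: average negative-log-probability penalties
        #   for foreground pixels the detection misses and probability
        #   mass it spills onto the background (the spatial terms).
        spatial_quality = np.exp(-(fg_loss + bg_loss))
        # Geometric mean: the detection must be good both spatially
        # and semantically to score well.
        return np.sqrt(spatial_quality * label_quality)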

David H.
August 20, 2020 2:54 pm

One of the most amusing memories from the challenge creation process was Niko’s reaction to the original name for probabilistic bounding boxes … fuzzy boxes 🙂

Dim
August 21, 2020 1:56 pm

These challenges are so important, and I can't wait to see the advances they trigger in future years. As my PhD is on uncertainty for object detection, the Probabilistic Object Detection Challenge is one of the most exciting outputs of the Centre in my opinion! It's great to finally see someone address how to assess the quality of uncertainty in object detection with one clear metric – I hope this triggers some more research from the community on this topic.

Ben Talbot
August 25, 2020 6:40 pm

Creating BenchBot has been a challenge that has taken us on all sorts of twists and turns; something I'm sure would've been a bridge too far if it weren't for the wonderful environment and support the centre provided us.

From an engineering perspective, I've never worked on something that's required such a diverse set of skills to get done. Hardware GPU rendering & passthrough, finding (& then battling against) the limitations of Docker, ROS inside containers, the Bazel build system, extending 3rd-party software to build environments, RESTful APIs, 1000s of lines of Bash, busting out of the Docker firewall, supporting a challenge running on contestants' machines, a real-world ground-truthing system, & real robots… all in the one project.

All for a 120GB blob that can be installed, run, & used for research with a couple of scripts.

David H.
Reply to Ben Talbot
August 28, 2020 3:17 pm

Ben may be downplaying, though, just how important that ease of use is. It is a stumbling block for so much research in robotics, and I would think in computer science in general. If people can't use it, that may stop them from trying out ideas they might otherwise have been interested in.

The fact that there is such functionality and that it does work so well is down to Ben’s drive and willingness to learn so many things to get it all working.