The workshop takes place on 19 June 2023 as a full-day event in Room East 9 and on Zoom, featuring nine invited talks and the ACDC challenge (https://acdc.vision.ee.ethz.ch/news#challenge2023).
| Starting Time (PDT, CEST, Beijing) | Room | Program |
| --- | --- | --- |
| 08:50 17:50 23:50 | East 9 | Opening |
| 09:00 18:00 00:00 | East 9 | Invited Talk 1: Judy Hoffman, Georgia Tech, “Reliable vision for a changing world” |
| 09:30 18:30 00:30 | Zoom | Invited Talk 2: Tim Barfoot, University of Toronto, “Hard Miles: Expanding the Operational Domain for Localization and Mapping” Bad weather, extreme lighting, and tunnels are just some examples of situations that can challenge our ability to accurately position an autonomous vehicle. I will provide a progress update on our long-term efforts to produce localization and mapping able to handle such difficult conditions. In particular, we have been using deep-learned features to improve camera-based path following for long-term off-road navigation. On the road, we are testing both lidar- and radar-based localization in harsh weather conditions to understand the advantages of each. We are also exploring the use of Doppler lidar to carry out egomotion estimation in geometrically degenerate situations, including long tunnels. Finally, on the theory side, we have been investigating so-called certifiably optimal algorithms to verify that our backend optimization algorithms converge to correct solutions despite poor initial guesses. We hope this work, alongside the contributions of many others, will help move the field down the long tail of edge and corner cases standing in the way of real-world autonomous vehicles. |
| 10:00 19:00 01:00 | Zoom | Invited Talk 3: Eren Erdal Aksoy, Halmstad University, “Horizon Europe Project ROADVIEW: Robust Automated Driving in Extreme Weather: Overview and Early Results” In this talk, I will introduce our new EU-funded Horizon Europe Innovation Action project ROADVIEW. The project aims to develop robust and cost-efficient in-vehicle perception and decision-making systems capable of working under extreme weather conditions, such as fog, rain, and snow. After a brief overview of the project, I will share our early results related to cleaning snowy LiDAR point clouds, boosting LiDAR-only object detection, sensor fusion for slipperiness detection, and closing the sim2real gap for sensor noise modeling. |
| 10:30 19:30 01:30 | | Break |
| 11:00 20:00 02:00 | East 9 | Invited Talk 4: Daniel Cremers, TU Munich, “Dynamic 3D Scene Understanding for Autonomous Vehicles” While autonomous vehicles are becoming a reality, arguably the biggest open challenge is to achieve a camera-based understanding of the dynamic 3D world. In my presentation, I will briefly sketch developments and open problems on the road to full autonomy. I will present recent advances in Simultaneous Localization and Mapping (SLAM) and 3D scene capture using monocular and stereo cameras, inertial sensors and deep networks. In particular, I will highlight efforts to cope with variations in weather and illumination, introducing datasets like the 4Seasons dataset for multi-weather SLAM. In the second part, I will show ongoing efforts to recover the 3D dynamics of human driving as observed from the air or from surveillance cameras. |
| 11:30 20:30 02:30 | East 9 | Invited Talk 5: Robby T. Tan, NUS, “Night Images From the Perspective of Visibility Enhancement and Object Detection” Nighttime conditions are largely associated with low light, which gives rise to related but distinct degradation problems: low intensity values, high noise levels (a low signal-to-noise ratio), low contrast, low colour saturation, edges blurred by noise and low intensity, motion blur, etc. Noise is significantly present in low-light images because the actual signals emitted by the scene are weak, causing the random noise generated by the camera to dominate the pixel intensity values. In severe cases, noise levels exceed the scene pixel intensities, making recovery intractable. While the low-light problem is dominant in nighttime conditions, there are other significant problems. One of them is the imbalanced distribution of light, particularly when man-made lights are present. In areas near the light sources, the light can be strong, yet in regions distant from the sources it is considerably weak. This imbalance of light distribution usually manifests in visual effects like flare, glare and glow. In nighttime conditions, the presence of glow can be prominent, particularly when there is a considerable amount of atmospheric particles, as in hazy or foggy scenes. The combination of glow and haze/fog can also degrade visibility significantly, since the glow partially occludes the scene behind it. |
| 12:00 21:00 03:00 | | Lunch Break |
| 13:30 22:30 04:30 | East 9 | Invited Talk 6: Werner Ritter, Mercedes-Benz AG, “European research project AI-SEE: Artificial intelligence to improve vehicle vision for automated driving in poor visibility conditions – Latest results.” |
| 14:00 23:00 05:00 | East 9 | ACDC Challenge. 14:00: Quansheng Liu, “Bag of Tricks for Domain Adaptive Semantic Segmentation in Adverse Conditions” (Winner of normal-to-adverse domain adaptation of semantic segmentation on Cityscapes→ACDC); 14:20: Jinming Su, “Boosting Semantic Segmentation in Adverse Conditions with Transformer-Based Segmenter and Simple Pseudo-Labeling” (Winner of supervised semantic segmentation in adverse conditions); 14:40: Jinming Su, “Enhancing Adverse Panoptic Segmentation through Multi-Task Learning” (Winner of supervised panoptic segmentation in adverse conditions) |
| 15:00 00:00 06:00 | | Break |
| 15:30 00:30 06:30 | East 9 | Invited Talk 7: Patrick Pérez, Valeo.ai, “Reliable Driving Perception” High-level driving automation is impossible without reliable perception, that is, perception that is accurate in-domain, robust to perturbations, robust to distribution/domain shifts, and validated accordingly. In this presentation, we shall discuss several tools to assess and improve this reliability with multiple sensors, novel architectures and various forms of model inspection. |
| 16:00 01:00 07:00 | East 9 | Invited Talk 8: Felix Heide, Princeton University, “Designing Sensors to Detect the Invisible: Imaging and Vision in Harsh Conditions” |
| 16:30 01:30 07:30 | East 9 | Invited Talk 9: Adam Kortylewski, University of Freiburg & MPI for Informatics, “Robust Vision through Analysis by Synthesis” |
| 17:00 02:00 08:00 | East 9 | Closing |
Invited Speakers for V4AS@CVPR’23
- Daniel Cremers, TU Munich
- Judy Hoffman, Georgia Tech
- Werner Ritter, Mercedes-Benz AG
- Adam Kortylewski, University of Freiburg & MPI for Informatics
- Patrick Pérez, Valeo.ai
- Robby T. Tan, NUS
- Eren Erdal Aksoy, Halmstad University
- Felix Heide, Princeton University
- Tim Barfoot, University of Toronto
Organizers
- Dengxin Dai, Huawei Zurich
- Christos Sakaridis, ETH Zurich
- Lukas Hoyer, ETH Zurich
- Haoran Wang, MPI for Informatics
- Wim Abbeloos, Toyota Motor Europe
- Daniel Olmeda Reino, Toyota Motor Europe
- Jiri Matas, CTU in Prague
- Bernt Schiele, MPI for Informatics
- Luc Van Gool, ETH Zurich & KU Leuven