One of the biggest challenges in popularizing self-driving vehicles is ensuring safety and reliability. To keep occupants safe, an autonomous vehicle must accurately and efficiently monitor its environment and recognize safety hazards.
While Tesla declines to release the disengagement data that other companies developing autonomous driving systems provide, a group of Tesla FSD Beta testers has been reporting such data independently for some time.
Based on this limited data set, Tesla's FSD Beta can drive only a few miles between disengagements, while other autonomous driving programs such as Waymo and Cruise report tens of thousands of miles between disengagements on average.
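The comparison above rests on a simple metric: total miles driven divided by the number of disengagements. A minimal sketch of that calculation, using made-up trip figures (the log below is hypothetical, not actual Tesla or Waymo data):

```python
# Illustrative sketch of the miles-per-disengagement metric.
# Each trip is recorded as (miles_driven, disengagement_count).

def miles_per_disengagement(trips):
    """Average miles driven between disengagements across all trips."""
    total_miles = sum(miles for miles, _ in trips)
    total_events = sum(events for _, events in trips)
    # A fleet with zero disengagements has no finite rate to report.
    return total_miles / total_events if total_events else float("inf")

# Hypothetical test-drive log for illustration only.
trips = [(12.5, 3), (8.0, 1), (20.0, 4)]
print(round(miles_per_disengagement(trips), 2))  # → 5.06
```

The same formula scales from a handful of beta-tester drives to a fleet-wide annual report; the difference between the programs lies in the data, not the arithmetic.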
At Waymo, one of the methods used to assess driving safety is scenario-based testing, a combination of virtual, test-track, and real-world driving.
To identify appropriate test scenarios, Waymo draws on driving data from its years of on-road experience, crash data such as police accident databases and crashes captured by dash cams, and expertise in its operational design domain, including geographic areas, driving conditions, and road types. Over time, Waymo continues to add new, representative scenarios it encounters on public roads and in simulation, or as it expands into new territories.
Waymo’s scenario database, built since 2016, draws on millions of miles driven on public roads and thousands of real-world crashes, providing broad coverage of dangerous situations. Because the most common crash types are similar no matter where you drive, the database can serve as a baseline for any city, allowing faster scaling. It covers a wide range of common situations that can happen almost anywhere, such as a pedestrian crossing against a signal or a car pulling out of a driveway.
In a recent study published in IEEE Transactions on Intelligent Transportation Systems, a group of international researchers led by Professor Gwanggil Jeon of Incheon National University, Korea, developed an IoT-enabled, intelligent end-to-end system for real-time 3D object detection, based on deep learning and specialized for self-driving scenarios.
“We devised a detection model based on YOLOv3, a well-known object detection algorithm. The model was first used for 2D object detection and then modified for 3D objects,” explains Prof. Jeon.
The team fed the collected RGB images and point cloud data into YOLOv3, which, in turn, output classification labels and bounding boxes with confidence scores. They then tested its performance on the Lyft dataset. Initial results showed that YOLOv3 achieved very high detection accuracy (>96%) for both 2D and 3D objects, outperforming other current detection models.
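A detector of this kind emits many candidate boxes per frame, so a standard post-processing step filters out low-confidence predictions and suppresses overlapping duplicates. The sketch below is not the authors' code; it is a minimal, generic illustration of that filtering step, assuming each detection is a tuple of an axis-aligned 2D box, a class label, and a confidence score:

```python
# Hypothetical sketch of detector post-processing: drop low-confidence
# boxes, then apply per-class non-maximum suppression (NMS) so that
# each object is reported once.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, conf_thresh=0.5, iou_thresh=0.45):
    """Keep confident detections, suppressing same-class duplicates.

    dets: list of (box, label, score), where box = (x1, y1, x2, y2).
    """
    dets = [d for d in dets if d[2] >= conf_thresh]
    dets.sort(key=lambda d: d[2], reverse=True)  # strongest first
    kept = []
    for box, label, score in dets:
        # Suppress this box if a stronger box of the same class overlaps it.
        if all(l != label or iou(box, b) < iou_thresh for b, l, _ in kept):
            kept.append((box, label, score))
    return kept

# Made-up detections for illustration.
detections = [
    ((10, 10, 50, 50), "car", 0.92),
    ((12, 11, 52, 49), "car", 0.85),        # near-duplicate, suppressed
    ((60, 60, 90, 90), "pedestrian", 0.30),  # below confidence threshold
]
print(filter_detections(detections))  # → [((10, 10, 50, 50), 'car', 0.92)]
```

Extending this idea to 3D typically means computing overlap between oriented 3D boxes instead, but the filter-then-suppress structure stays the same.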
This method can be applied to self-driving cars, autonomous parking, autonomous delivery, and future autonomous robots, as well as other applications requiring object and obstacle detection, tracking, and visual localization.