Bolstering the safety of self-driving cars with a deep learning-based object detection system
Self-driving cars, or autonomous vehicles, have long been earmarked as the next-generation mode of transport. Enabling such vehicles to navigate autonomously in different environments requires many technologies, spanning signal processing, image processing, artificial intelligence, deep learning, edge computing, and the Internet of Things (IoT), to work in concert.
One of the biggest concerns surrounding the popularization of autonomous vehicles is safety and reliability. To ensure a safe driving experience for the user, an autonomous vehicle must accurately, effectively, and efficiently monitor its surroundings and distinguish potential threats to passenger safety.
To this end, autonomous vehicles employ high-tech sensors such as Light Detection and Ranging (LiDAR), radar, and RGB cameras, which produce large amounts of data in the form of RGB images and sets of 3D measurement points known as "point clouds."
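As a rough, hypothetical illustration of what these two sensor streams look like to the onboard software (the array shapes below are placeholders, not values from the study), an RGB frame is a dense pixel grid, while a LiDAR sweep is a long list of 3D points:

```python
# Minimal sketch of how the two sensor outputs are commonly represented.
# The sizes are hypothetical; real sensors produce much larger data.
import numpy as np

rgb_image = np.zeros((720, 1280, 3), dtype=np.uint8)        # camera frame: height x width x RGB
point_cloud = np.random.rand(50_000, 4).astype(np.float32)  # LiDAR returns: (x, y, z, intensity) per point

print("image:", rgb_image.shape, "point cloud:", point_cloud.shape)
```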
The quick and accurate processing and interpretation of this collected information is critical for identifying pedestrians and other vehicles. This can be realized by integrating advanced computing methods and IoT technology into these vehicles, allowing for fast, on-site data processing and more efficient navigation around various environments and obstacles.
In a recent study published in IEEE Transactions on Intelligent Transportation Systems, a group of international researchers led by Professor Gwanggil Jeon from Incheon National University, Korea, has developed a smart, IoT-enabled, end-to-end system for real-time 3D object detection based on deep learning and specialized for autonomous driving situations.
"For autonomous vehicles, environment perception is critical to answer a core question, 'What is around me?' It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action," explains Prof. Jeon.
"We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects," he elaborates.
The team fed the collected RGB images and point cloud data as input to YOLOv3, which, in turn, output classification labels and bounding boxes with confidence scores. They then tested its performance with the Lyft dataset. Early results revealed that YOLOv3 achieved extremely high detection accuracy (>96%) for both 2D and 3D objects, outperforming other state-of-the-art detection models.
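One common way such camera-plus-LiDAR pipelines relate a 2D detection to the point cloud is to project the LiDAR points into the image and keep those that land inside the predicted box. The sketch below illustrates that general idea with made-up camera intrinsics and a made-up detection box; it is not taken from the paper and is not a description of the authors' exact method.

```python
# Hypothetical sketch: associate LiDAR points with a 2D camera detection
# by projecting the points into the image with a pinhole camera model.
import numpy as np

def points_in_box(points_cam, K, box_xyxy):
    """Return the points (already in the camera frame, z pointing forward)
    whose image projection falls inside a 2D detection box."""
    pts = points_cam[points_cam[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T                              # project with the intrinsic matrix
    uv = uv[:, :2] / uv[:, 2:3]                     # normalise by depth to get pixel coords
    x1, y1, x2, y2 = box_xyxy
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return pts[inside]

# Made-up intrinsics, point cloud, and detection box, purely for illustration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
cloud = np.random.uniform([-10.0, -2.0, 1.0], [10.0, 2.0, 40.0], size=(5000, 3))
print(points_in_box(cloud, K, (600, 300, 700, 420)).shape[0], "points inside the box")
```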
The method can be applied to autonomous vehicles, autonomous parking, autonomous delivery, and future autonomous robots, as well as to any application where object and obstacle detection, tracking, and visual localization are required.
"At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront," says Prof. Jeon. "Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years."
Provided by Incheon National University