
Under The Hood Of Autonomous Vehicles
Access to robotaxis is steadily expanding. Just this month, Waymo and Uber opened an interest list in the Uber app for riders in Austin. Despite their growing presence, the majority of people don’t fully understand the different levels of automation in autonomous vehicles (AVs), and most see them as a black box that, for all intents and purposes, works through magic and witchcraft. For AVs to continue building public trust, more must be shared to educate consumers about what lies under the hood of autonomous vehicles. This article introduces a high-level architecture explaining the inner workings of AVs.

High-level architecture for autonomous vehicles. Items in green are inputs and those in blue are major subsystems. Gustavo Castillo
When we drive, we use our senses to observe our surroundings. We combine inputs from what we see and hear to paint a picture of where we are and what is happening around us. Using the current state of the road and all its users, we create a plan to arrive at our destination in a safe and efficient way. Finally, we act on that plan through input on our steering wheel and pedals. Autonomous vehicles, like many other robotics applications, follow a similar process, which can be described in four main subsystems: Perception, State Estimation, Planning and Prediction, and Control.
Perception...When we drive, we use our senses to observe our surroundings. Every autonomous vehicle company has its own preference for sensors and where to place them on the car; however, three sensors are most frequently used in the industry: camera, radar, and LiDAR. By combining the inputs of these sensors, a vehicle can paint a full picture of the driving environment.
Cameras provide some of the richest information about a vehicle’s surroundings. For example, when observing a pedestrian at a crosswalk, a camera can tell what direction they are facing, their facial expressions, and their body language; details that are important in determining where the pedestrian might walk next. Despite all this information, cameras struggle to operate in environments with low light or glare. Radar can detect objects in the dark and give the precise distance to them and the speed at which they are moving. However, radar alone might have a hard time differentiating two pedestrians walking next to each other. This could affect pedestrian tracking, which may lead to inaccurate decision-making in navigation and safety responses.
LiDAR emits lasers and measures the time it takes for each laser pulse to reflect back to the vehicle. Its output is a detailed 3D map of the world where each point is a measure of how far away that object is. These maps paint an accurate picture of the world, which helps the car detect and avoid objects around it. Despite its accuracy, LiDAR can struggle in rain or snow, as the lasers can reflect off individual raindrops, distorting the image the vehicle sees. Additionally, LiDAR is the most expensive of the three sensors; however, there is reason to be hopeful that the cost of these sensors will continue to fall as autonomous vehicles become more common.
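To make the time-of-flight idea concrete, here is a minimal sketch in Python of how a single LiDAR return converts into a distance. The function name and numbers are purely illustrative and are not taken from any real sensor’s interface.

```python
# Minimal sketch of LiDAR time-of-flight ranging (illustrative values only).
SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to an object from the round-trip time of a laser pulse.

    The pulse travels to the object and back, so we divide by two.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# Example: a pulse that returns after roughly 200 nanoseconds
print(f"{lidar_distance_m(200e-9):.2f} m")  # ~29.98 m
```

Millions of such returns per second, each converted to a distance and a direction, form the 3D point cloud described above.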
Despite the varying strengths and weaknesses of camera, radar, and LiDAR, combining them creates a more robust and reliable AV. The inputs from all three sensors are fed into a machine learning algorithm that labels objects into important categories such as vehicles, pedestrians, cyclists, motorcycles, construction sites, and lane closures.
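As a rough illustration of what a fused, labeled detection could look like, the sketch below combines a camera-derived label, a LiDAR-derived position, and a radar-derived speed into one object. The class names and fields are hypothetical and do not reflect any company’s actual data format.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Categories a perception stack might label (names are illustrative).
class ObjectClass(Enum):
    VEHICLE = auto()
    PEDESTRIAN = auto()
    CYCLIST = auto()
    MOTORCYCLE = auto()
    CONSTRUCTION = auto()

@dataclass
class FusedDetection:
    """One object after fusing camera, radar, and LiDAR measurements."""
    label: ObjectClass               # typically from a camera-based classifier
    position_m: tuple[float, float]  # precise position, typically from LiDAR
    speed_m_s: float                 # relative speed, typically from radar

def fuse(camera_label: ObjectClass,
         lidar_position_m: tuple[float, float],
         radar_speed_m_s: float) -> FusedDetection:
    # Each sensor contributes the attribute it measures best.
    return FusedDetection(camera_label, lidar_position_m, radar_speed_m_s)

obj = fuse(ObjectClass.PEDESTRIAN, (12.5, -3.1), 1.4)
print(obj)
```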
State Estimation...When we drive, we combine inputs from what we see, hear, and feel to paint a picture of where we are and what is happening around us. State Estimation serves the same purpose. One major step in State Estimation is localization, the process by which the vehicle determines exactly where it is. This uses GPS to find a rough location on the map, camera data to find what lane the vehicle is currently in, and, if necessary, LiDAR data to compare its current position to a previously recorded high-definition map to find a known location down to centimeter accuracy. The lack of state estimation for AVs would be the equivalent of us trying to drive with no idea where we are while also experiencing vertigo. State Estimation also uses the information gathered from Perception to determine where all the other road users are in relation to the vehicle’s position and where they are moving. All of this is needed to plan what the vehicle should do.
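As a small, hedged example of the idea behind localization, below is the one-dimensional form of a Kalman-style update that blends a coarse GPS fix with a precise LiDAR map match, weighting each by its uncertainty. The numbers are made up; real systems fuse many more signals, in three dimensions, at a high rate.

```python
def fuse_position(gps_m: float, gps_var: float,
                  lidar_m: float, lidar_var: float) -> tuple[float, float]:
    """Combine two noisy position estimates, weighting each by its confidence.

    This is the one-dimensional form of the update used in Kalman-style
    localization: the measurement with the smaller variance pulls the
    fused estimate toward itself.
    """
    k = gps_var / (gps_var + lidar_var)      # how much to trust the LiDAR match
    fused = gps_m + k * (lidar_m - gps_m)    # blended position
    fused_var = (1 - k) * gps_var            # fused estimate is more certain
    return fused, fused_var

# Example: GPS says 105.0 m (±2 m), a LiDAR map match says 103.6 m (±0.1 m)
pos, var = fuse_position(105.0, 2.0**2, 103.6, 0.1**2)
print(f"fused position ~ {pos:.2f} m, variance ~ {var:.4f}")
```

The fused estimate ends up hugging the centimeter-accurate LiDAR match, which is exactly why comparing against a prerecorded high-definition map is so valuable.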
Planning and Prediction...When we drive, we observe the current state of the road and all its users to create a plan to arrive at our destination in the safest and most efficient way. State Estimation helps the vehicle understand its position and the surrounding environment. Planning and Prediction uses that knowledge to estimate how the environment will change in the next few seconds and find what path the vehicle should take in response. Planning works at both a high and a low level. At a high level, a vehicle must determine what route it should take to get to its destination. This works similarly to how we use a map to get around. This plan can include traffic data and information about road closures to find the most efficient route at that time. At a low level, the Planner makes shorter-term decisions about which lane the vehicle should be in and how fast it should be going based on the vehicles around it, and makes split-second decisions to avoid obstacles. The Planner works as the brain that decides what the vehicle should do.
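To illustrate the high-level side of planning, here is a small sketch that finds the fastest route through a toy road network using Dijkstra’s algorithm. The intersections and travel times are invented; real route planners work over far larger maps and fold in live traffic and closure data.

```python
import heapq

def fastest_route(graph: dict[str, dict[str, float]],
                  start: str, goal: str) -> list[str]:
    """Dijkstra's algorithm over a road graph weighted by travel time (minutes).

    High-level route planning searches a graph like this; edge weights can be
    updated with live traffic or road-closure data before planning.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return []

# Toy road network: intersections A-D with travel times in minutes.
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(fastest_route(roads, "A", "D"))  # ['A', 'C', 'B', 'D'] -- 8 minutes total
```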
Controls...When we drive, we act on our plan through input on our steering wheel and pedals. Control is responsible for translating the Planner’s decisions into reality. Human control while driving is a subconscious process refined over years of experience. When we first learn to drive, we may have jerky acceleration and braking until we get familiar with the right level of input on our pedals. The Planner for an autonomous vehicle may specify that the vehicle should speed up to 45 mph, but Control turns that into exact pedal input that keeps the ride smooth without accelerating so slowly that it disturbs other drivers. The same is true for steering. The vehicle may want to switch lanes, but if it turns too quickly it could overshoot into another lane or lose control altogether. If you get into a vehicle with a friend and end up carsick by the end of the ride, your friend isn’t necessarily a bad driver; they’re just bad at control.
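As a simple picture of what Control does, the sketch below uses a proportional-integral loop to turn a target speed into a throttle or brake command. The gains and the toy vehicle response are assumptions for illustration, not a real vehicle controller.

```python
class SpeedController:
    """A proportional-integral (PI) loop that turns a target speed into a
    throttle/brake command, the kind of low-level loop the Control layer runs.

    Gains and limits here are made up for illustration; real vehicles tune
    these carefully so acceleration stays smooth.
    """
    def __init__(self, kp: float = 0.08, ki: float = 0.01):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def step(self, target_mph: float, current_mph: float, dt_s: float) -> float:
        error = target_mph - current_mph
        self.integral += error * dt_s
        command = self.kp * error + self.ki * self.integral
        # Clamp to [-1, 1]: positive = throttle, negative = brake.
        return max(-1.0, min(1.0, command))

controller = SpeedController()
speed = 30.0
for _ in range(5):                      # five 0.1-second control steps
    pedal = controller.step(45.0, speed, dt_s=0.1)
    speed += pedal * 2.0                # toy vehicle response, not real physics
    print(f"pedal={pedal:.2f}  speed={speed:.1f} mph")
```

Tuning those gains is the difference between a smooth ride and the jerky acceleration of a new driver.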
Final Thoughts...Autonomous vehicles are highly complex systems. AV companies may classify tasks under different subsystems or call them other names altogether. The performance of each subsystem makes a huge difference in safety and in an AV’s ability to navigate more complex scenarios. Additionally, there are many other systems that are equally important, like fault handling and teleoperation, that are highly complex and merit their own discussion. This overview provides a high-level description with the goal of helping the public understand how autonomous vehicles operate and be better informed about the safety of the technology. While companies do share great research explaining their systems and their performance, I encourage AV companies to share more descriptions of the inner workings of AVs to continue building public trust.
Gustavo Castillo(https://linkedin.com/in/gcastil)
