Self‑driving cars struggle to see at night or in fog—but imitating the human brain can make them safe
Picture this: you're driving on a mountain road when you suddenly hit a thick patch of fog. You respond instinctively. Your vision sharpens, and you narrow your eyes to make out the shape of any oncoming cars.
Human beings handle these quick changes very well. But for a self-driving car—at least one with a current artificial intelligence (AI) system behind the wheel—things could easily end in disaster.
Today's AI vision systems are extremely accurate when visibility is good: on a clear, sunny day a self-driving car can recognize pedestrians, road signs and other vehicles with precision. However, these systems are highly vulnerable to environmental changes. If it rains, or gets dark or foggy, standard AI systems go blind, incapable of detecting obstacles that a human driver would spot with ease.
Our research at the University of Valencia proposes a possible solution: instead of exposing AI models to millions of images of every possible road condition, we decided to imitate biology. After all, biologically speaking, why can humans see so well under such a wide range of conditions?
The brain's 'volume control'

In our brains, neurons do not work alone. They use a truly fascinating form of adaptation that neuroscientists call divisive normalization.
To understand this (without getting into mathematics), we can picture it as an automated "volume control" system, with neurons working in a team. Let's say one neuron is looking at a very dark area of the field of vision, such as a black car at night. The neighboring neurons turn up the "volume" of this weak signal, amplifying the small details to make them more visible.
If we look at a bright light, the same thing happens in reverse. The brain turns down the volume to prevent us from being dazzled.
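For readers who want to see the operation concretely: in its textbook form, divisive normalization divides each neuron's raw response by the pooled activity of its neighbors. Here is a minimal Python sketch of that general idea (the uniform pooling window, the constant sigma and all parameter values are illustrative choices, not the formulation used in our study):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def divisive_normalization(image, size=7, sigma=0.1):
        # Pool the average activity in each pixel's neighborhood.
        local_activity = uniform_filter(image, size=size)
        # Dark regions (small pool) get boosted; bright regions
        # (large pool) get damped: the "volume control" effect.
        return image / (sigma + local_activity)

    # A very dark night scene: weak values are lifted into a usable range.
    night_scene = np.random.rand(64, 64) * 0.05
    enhanced = divisive_normalization(night_scene)

Because the output depends on the input's own local statistics, the same operation brightens a black car at night and tames a dazzling headlight, with nothing to hand-tune per scene.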
This mechanism is what allows us to adapt and see clearly in a very wide range of conditions. But in the search for speed and accuracy, modern AI systems have neglected this biological inspiration.
AI in the driving simulator

In our study, we processed images using some of the most widely used AI models, adding layers to simulate the brain's "volume control" mechanism. In basic terms, we forced their neurons to communicate with one another and adapt to their environment, just as our own brains do.
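To make "adding layers" concrete, here is a minimal PyTorch-style sketch of what a divisive normalization layer slotted between the convolutional stages of a vision model could look like. The class name, pooling choice and parameter values are inventions for this illustration; the formulation in our study differs in its details:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DivisiveNorm2d(nn.Module):
        # Illustrative layer: each activation is divided by the pooled
        # energy of its spatial neighborhood, so units adapt to local
        # context instead of responding in isolation.
        def __init__(self, kernel_size=5, sigma=0.1):
            super().__init__()
            self.kernel_size = kernel_size
            self.sigma = sigma

        def forward(self, x):
            pooled = F.avg_pool2d(x.pow(2), self.kernel_size,
                                  stride=1,
                                  padding=self.kernel_size // 2).sqrt()
            return x / (self.sigma + pooled)

    # Example: adapt a batch of feature maps from a convolutional layer.
    feats = torch.randn(1, 16, 32, 32)
    adapted = DivisiveNorm2d()(feats)

Because the divisor is computed from the activations themselves, the layer adapts on the fly: there is no separate mode to switch on when fog rolls in.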
We wanted to see if imitating biology would make cars safer. To do this, we submitted both standard AI models and our brain-inspired modification to a series of tests. Using databases from real driving in European cities, night-driving images from Switzerland, and several different virtual driving simulators, we were able to compare responses to different levels of fog, darkness and light variation.
The results showed that imitating our own brains worked. After being trained, both types of AI model could drive perfectly well, but once fog and darkness came into the equation, the unmodified one began to fail. It lost the ability to distinguish cars from buildings, and even from the road itself.
The AI system that was equipped with our brain-inspired mechanism, on the other hand, was robust. Even in fog or complete darkness, it performed more than 20% better than its unaltered counterpart.
We analyzed, from the inside, how this new system perceived the world and found that it was doing exactly what we expected. It was capturing and enhancing the details of vehicles hidden in the fog that would otherwise be invisible. As a result, its performance became more stable in the face of changing weather conditions.
Learning from nature

Getting society as a whole to trust AI poses major challenges, and the safety of passengers and pedestrians in self-driving cars is a central part of this. It is not enough for smart systems to work under ideal conditions. We need them to be completely safe in the real world, and to safeguard the lives of all road users in all weather conditions.
Our research shows that the key to making artificial intelligence safer, more robust and more adaptable may be closer than it seems. There is no need for more powerful computers or vastly greater amounts of data. Sometimes, all we need is to look at the millions of years of evolution that have shaped our own brains.
In many cases, nature has already solved some of the problems that artificial intelligence faces today. We just need to learn from it.
Why do some sensors in autonomous cars fail in certain conditions, like fog or low light, and what's being done to improve them?
To an autonomous car's laser sensors, heavy fog acts like millions of tiny prisms, creating terrifying "ghost" obstacles that can effectively blind the vehicle.
The core issue lies in the physics of how different sensors gather data. Here is why the three primary autonomous vehicle sensors struggle in certain conditions:
Cameras function much like the human eye, relying entirely on the visible light spectrum. In low light, there simply are not enough photons to create a clear image. In fog or heavy snow, suspended water droplets scatter the incoming light before it reaches the lens, causing a whiteout effect that drastically reduces visibility.
LiDAR (Light Detection and Ranging) creates a high-resolution 3D map of the world by bouncing rapid pulses of near-infrared laser light off objects. However, dense fog, heavy rain, or falling snowflakes act like millions of tiny prisms. The laser beams hit these water particles and scatter before reaching their target. This creates false positive "ghost" obstacles in the vehicle's software and severely limits the sensor's functional range.
Radar relies on radio waves, which easily pass through fog, rain, and pitch darkness without scattering. While highly reliable in bad weather, traditional automotive radar has notoriously low resolution. It can detect that a solid object is ahead, but it struggles to classify what it is—making it difficult to distinguish a stopped fire truck from a harmless overhead street sign.
To overcome these physical limitations, engineers are developing a combination of advanced hardware and sophisticated software:
Sensor Fusion: Modern autonomous systems cross-reference data from all three sensor types simultaneously. If the camera and LiDAR are blinded by fog, but the radar detects a dense, stationary mass ahead, the vehicle's computer knows to prioritize the radar data and initiate braking (a toy sketch of this priority logic follows this list).
4D Imaging Radar: This next-generation radar uses multiple antennas to provide a high-resolution point cloud. It can measure the height, width, depth, and relative speed of objects with enough clarity to distinguish a pedestrian from a parked car, all while remaining largely unaffected by weather and lighting conditions.
Thermal Imaging (FIR): Far-infrared cameras are increasingly being integrated into sensor suites. Instead of relying on visible light, they detect heat signatures. This allows the vehicle to clearly highlight a warm pedestrian or an animal crossing the road in absolute darkness or thick fog.
Algorithmic De-noising: Machine learning models are being trained specifically on bad-weather data. These algorithms learn the specific geometric patterns of LiDAR scattering caused by rain and snow, allowing the software to digitally filter out the precipitation and reveal the true obstacles hidden behind it (a classical geometric baseline is sketched after this list as well).
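As promised above, here is a deliberately simplified Python sketch of the sensor-fusion priority logic. Real systems fuse far more signals with probabilistic filters; every threshold, field and function name here is invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        detected: bool     # did this sensor report an obstacle?
        confidence: float  # 0.0-1.0, degraded by weather

    def should_brake(camera: SensorReading, lidar: SensorReading,
                     radar: SensorReading) -> bool:
        # When the optical sensors are confident, trust them.
        if max(camera.confidence, lidar.confidence) > 0.5:
            return camera.detected or lidar.detected
        # Camera and LiDAR blinded by fog: radio waves still get
        # through, so a confident radar return takes priority.
        return radar.detected and radar.confidence > 0.5

    # Thick fog: optics whited out, radar reports a dense mass ahead.
    brake = should_brake(SensorReading(False, 0.1),
                         SensorReading(False, 0.2),
                         SensorReading(True, 0.9))   # -> True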
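And to give the de-noising idea some shape: production systems use learned models trained on bad-weather data, but the geometric intuition they exploit can be shown with a classical baseline. Rain and snow returns tend to be isolated points, while real obstacles form dense clusters, so a simple radius-outlier filter (all parameter values below are illustrative) already strips much of the clutter:

    import numpy as np

    def remove_weather_noise(points, radius=0.5, min_neighbors=3):
        # points: (N, 3) array of LiDAR returns, in meters.
        # Keep a point only if enough other returns fall within
        # `radius` of it; isolated droplet and snow hits are dropped.
        keep = []
        for i in range(len(points)):
            dists = np.linalg.norm(points - points[i], axis=1)
            if np.count_nonzero(dists < radius) - 1 >= min_neighbors:
                keep.append(i)
        return points[keep]

    # A tight cluster of car returns plus two isolated snow hits.
    cloud = np.array([[1.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                      [1.2, 0.1, 0.0], [1.0, 0.1, 0.1],
                      [5.0, 5.0, 2.0], [8.0, 1.0, 3.0]])
    cleaned = remove_weather_noise(cloud)  # the two isolated hits are gone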
Provided by The Conversation