AUTONEWS
Neuromorphic chip lets autonomous vehicles dodge hazards 4x faster
On a dark night, a person suddenly steps in front of a moving car. It is a familiar scene from programs that replay crash moments captured on dashcams. Even an autonomous vehicle equipped with an advanced computer system can hardly avoid such an accident, but that could change: a technology that mimics how humans detect moving objects has been developed to dodge obstacles far faster than before.
Shuo Gao, a professor at Beihang University in China, and his research team reported in Nature Communications on the 11th that they had "succeeded in capturing moving objects four times faster than conventional computer vision systems with a neuromorphic chip that mimics nerves." If autonomous vehicles, unmanned aerial vehicles (drones), and robots are equipped with this chip, they are expected to operate safely even in rapidly changing environments.
Mimicking the neural principle that detects moving objects

Autonomous vehicles and drones have computer vision systems that serve as their eyes. These systems look for moving objects in camera footage and estimate where they will move next. This is the process of detecting optical flow, the pattern of motion of objects within the video.
The problem is that processing optical flow makes the amount of information the software must compute surge, because every pixel of every video frame has to be processed. As a result, even an autonomous vehicle traveling at 80 kph can take up to 0.5 seconds to react to a hazard ahead. In that time, the vehicle travels about 13 more meters before coming to a complete stop.
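The scale of that per-pixel workload is easy to see with a back-of-the-envelope calculation (the resolution and frame rate below are illustrative assumptions, not figures from the study):

```python
# Back-of-the-envelope: how much data dense optical flow must touch.
# Every pixel of every frame is processed, so the workload scales with
# resolution x frame rate.
width, height = 1920, 1080   # an assumed common camera resolution
fps = 30                     # assumed frames per second

pixels_per_second = width * height * fps
print(f"{pixels_per_second / 1e6:.0f} million pixel values per second")
# → 62 million pixel values per second
```

At tens of millions of pixel values per second, even small per-pixel costs add up to the fraction-of-a-second delays the article describes.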
For artificial systems to operate safely at home, on the road, or in the operating room, their vision must be upgraded. Rather than improving the software, the research team developed hardware that mimics the principles of human vision. The human brain can react to a danger in front of the eyes in just 0.15 seconds.
The brain reacts to motion faster than a cutting-edge computer because it selects what to process and concentrates on it. A computer processes all the information captured by the camera, but the brain does not attend to everything it sees. Visual information from the retina travels to the thalamus, which integrates sensory input. There, a structure called the lateral geniculate nucleus passes on to the cerebrum only the signals from regions of interest where something is moving. Because visual signals are processed faster, it becomes easier to catch or dodge an incoming ball.
The research team developed a synaptic transistor, a neuromorphic chip that stores and processes information the way a neuron's synapse does. Instead of sending an entire scene to the main computer, the chip identifies the key changes within it. If the brightness of an area changes over time, it is treated as a region of interest where an object is moving. Because the computer examines only this region instead of the entire image, the whole vision system runs faster. It is similar to tracking only a friend's movements on a crowded street.
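The selection principle described here can be sketched in software (a toy illustration of the idea, not the chip's actual circuitry): flag pixels whose brightness changed between frames, then hand only that region of interest on for further processing.

```python
import numpy as np

# Toy sketch of brightness-change detection (illustrative, not the chip design):
# pixels that changed beyond a threshold form the "region of interest".
prev_frame = np.zeros((8, 8), dtype=np.int16)
next_frame = prev_frame.copy()
next_frame[2:5, 3:6] = 200          # a bright object appears in this patch

changed = np.abs(next_frame - prev_frame) > 50   # event-like change mask
rows, cols = np.nonzero(changed)
roi = (int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max()))
print("region of interest (row/col bounds):", roi)           # → (2, 4, 3, 5)

fraction = changed.sum() / changed.size
print(f"pixels needing further processing: {fraction:.0%}")  # → 14%
```

In this toy frame, downstream processing touches only 14% of the pixels; that fraction, not the full image, is what the article's speedup comes from.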
Proven effective in autonomous vehicles, drones, and robots

The neuromorphic chip processes visual information quickly. By detecting image changes within one ten-thousandth of a second, it is expected to dramatically improve the safety of autonomous vehicles and drones. Because it computes only regions of interest, it also consumes less power, which means longer battery life.
The research team said that systems equipped with the neuromorphic chip processed video on average four times (400%) faster than conventional computer vision systems, quadrupling reaction speed. The team verified the effectiveness of the chip-equipped vision system in a variety of real-world settings.
In one experiment, for example, the time it took an autonomous vehicle to detect a pedestrian, predict their movement, and react fell from 0.23 seconds to 0.035 seconds. At 80 kph, shaving 0.2 seconds off the reaction time shortens the braking distance by about 4.4 meters. Accuracy improved along with speed: in autonomous driving scenarios, accuracy more than doubled (a 213.5% improvement).
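The braking figure follows from simple speed-times-time arithmetic (a rough model that ignores deceleration during the reaction window):

```python
# Checking the article's figure: distance covered during the saved
# reaction time at 80 kph (speed x time, deceleration ignored).
speed_kph = 80
speed_mps = speed_kph / 3.6          # ≈ 22.2 m/s

saved_time = 0.2                     # seconds of reaction time saved
distance_saved = speed_mps * saved_time
print(f"{distance_saved:.1f} m shorter stopping distance")   # → 4.4 m
```

This matches the 4.4-meter reduction the article reports.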
A drone likewise detected and avoided obstacles in the air, and a robot hand immediately worked out how to grasp an object even as it moved. The success rate at catching fast-moving objects improved by as much as 740.9%. The team said that with the neuromorphic chip, even small, fast objects such as a ping-pong ball can be captured, which could make it highly useful in sports.
Researchers from Tsinghua University and the Beijing Institute of Technology in China, the University of Hong Kong, the University of Cambridge in the United Kingdom, Northeastern University in the United States, and King Abdullah University of Science and Technology in Saudi Arabia also took part in the study. The team plans to move beyond the laboratory and manufacture the chips at scale for use in autonomous vehicles and industrial robots.
Professor Gao said, "If real-time video processing is possible, autonomous systems can efficiently perform complex tasks such as collision avoidance and object tracking," adding, "Further research evaluating this vision system in a wider range of environments is needed before commercialization."
Recent breakthroughs in neuromorphic (brain-inspired) chips have enabled autonomous vehicles (AVs) to detect and react to hazards up to four times faster than conventional computer vision systems. Unlike traditional processors that analyze every pixel in a video frame, these chips are event-driven, activating only when they detect motion or changes in light.
Key performance benefits:
- Reduced reaction latency: In recent tests, detection and reaction times for pedestrians dropped from 0.23 seconds to just 0.035 seconds.
- Improved accuracy: Motion-related tasks and perception accuracy in self-driving scenarios improved by over 213.5%.
- Energy efficiency: These chips can reduce the energy required for data processing by up to 90% compared to traditional GPU-heavy systems, extending the range of electric autonomous vehicles.
- High-speed tracking: The technology allows vehicles to capture and track small, fast-moving objects (like a ping-pong ball) that standard vision systems might miss.
How the technology works:
Inspired by the human lateral geniculate nucleus (LGN), these chips prioritize rapidly changing visual elements:
- Event-based cameras: Instead of scanning full frames, they deliver data only for individual pixels that change, drastically reducing data volume.
- Spiking neural networks (SNNs): Information is processed via discrete electrical "spikes," mimicking how biological neurons communicate.
- Local processing: By handling data "at the edge" (directly on the vehicle), they eliminate the need for cloud-based analysis, which is too slow for high-speed driving.
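The spiking behavior mentioned above can be illustrated with a minimal leaky integrate-and-fire neuron, the textbook building block of spiking neural networks (an illustrative model, not the published chip design): the membrane potential leaks over time and emits a discrete spike only when input pushes it past a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Potential decays each step ("leak"), integrates new input, and fires
# a discrete spike (then resets) only when a threshold is crossed.
def lif_run(inputs, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # leaky integration of input
        if potential >= threshold:
            spikes.append(1)               # fire a spike...
            potential = 0.0                # ...and reset
        else:
            spikes.append(0)
    return spikes

# Sustained input (e.g. a moving object) produces spikes; quiet input does not.
print(lif_run([0.6, 0.6, 0.0, 0.0, 0.1, 0.9, 0.9]))
# → [0, 1, 0, 0, 0, 0, 1]
```

Because downstream hardware reacts only to the sparse spikes rather than to every sample, this event-driven style is what gives such chips their speed and power advantages.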
Industry adoption:
Major manufacturers and research teams are actively integrating this technology:
- Mercedes-Benz has partnered with Intel to use the Loihi 2 neuromorphic chip for faster traffic sign and lane recognition.
- Honda is collaborating with Mythic to develop analog neuromorphic systems-on-a-chip (SoCs) for next-generation safety.
- Tsinghua University and international researchers recently demonstrated these chips in various real-world autonomous scenarios, including drones and robotic arms.
by: Lee Young-wan