Chinese scientists have developed a new neuromorphic chip, inspired by the visual-processing structures of the human brain, that helps robots detect movement four times faster than the human eye.
The technology is expected to be a major step forward for self-driving cars, service robots, and other automation systems that need real-time responses.
The chip was developed by a research group at Beihang University and the Beijing Institute of Technology, based on the operating principle of the lateral geniculate nucleus (LGN), a structure located between the retina and the visual cortex.
In the human brain, the LGN acts as a relay station and information filter, helping the visual system focus on moving or rapidly changing objects.
According to the research group, this mechanism inspired the design of artificial neural modules on semiconductor chips.
Instead of processing static frames one by one like a traditional camera system, the new chip detects changes in light over time directly, identifying movement the moment it occurs.
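For a rough picture of what "detecting changes in light over time" means in practice, here is a minimal Python sketch of event-style change detection. It assumes plain grayscale intensity arrays and a hand-picked threshold; it illustrates the general principle, not the team's published circuit:

```python
import numpy as np

def detect_events(prev_intensity, curr_intensity, threshold=0.15):
    """Emit (row, col, polarity) events wherever the temporal change in
    log intensity exceeds a threshold. Working in log space makes the
    detector respond to relative brightness change, roughly as
    biological photoreceptors do."""
    delta = (np.log1p(curr_intensity.astype(float))
             - np.log1p(prev_intensity.astype(float)))
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))
```

Because events are emitted pixel by pixel as soon as a change occurs, motion can be reported without waiting for a complete frame to be captured.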
In a conventional robot vision system, cameras record a series of images, and the system then compares brightness changes between frames to recognize motion.
This method is reasonably accurate but carries a significant lag, often taking more than half a second to process a frame.
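That conventional pipeline can be sketched in a few lines with OpenCV. The threshold value here is illustrative, and the function assumes standard BGR frames as delivered by cv2.VideoCapture:

```python
import cv2

def frame_diff_motion(prev_frame, curr_frame, thresh=25):
    """Classic frame differencing: convert two consecutive BGR frames to
    grayscale, take the absolute per-pixel difference, and threshold it
    into a binary motion mask. The whole frame must be captured and
    compared before any motion is known, which is where the latency
    comes from."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```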
In high-speed applications such as self-driving cars, that delay can become dangerous, raising the risk of accidents.
The new neuromorphic chip solves this problem by letting the system concentrate its processing power on the regions where movement is occurring, rather than on the entire frame.
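One simple way to picture this region-of-interest strategy: divide the frame into tiles and spend compute only on tiles where events occurred. The sketch below is a plain-Python illustration of the idea, not the chip's actual scheduling logic, and the tile size is arbitrary:

```python
import numpy as np

def active_tiles(event_mask, tile=32):
    """Return the top-left coordinates of tiles that contain at least
    one motion event, so downstream processing can skip static
    regions entirely."""
    h, w = event_mask.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            if event_mask[y:y + tile, x:x + tile].any():
                tiles.append((y, x))
    return tiles
```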
In simulations of driving and robot-arm control, processing latency dropped by about 75%, while motion-tracking accuracy doubled on complex tasks.
Notably, the chip's motion detection is four times faster than previous methods, in some situations even exceeding the reaction speed of the human eye.
The research team believes that moving the brain's image-processing principles onto semiconductor hardware was the key to this advance.
The technology has broad application potential, from collision-avoidance systems in self-driving cars and real-time target tracking on unmanned aerial vehicles to robots that respond instantly to human gestures.
In a home environment, the chip could help robots recognize subtle changes such as facial expressions or hand movements, making human-machine interaction more natural.
The researchers acknowledge, however, that the chip still depends on optical-flow algorithms to interpret the final image and may struggle in environments with many simultaneous movements.
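Optical flow here refers to standard algorithms that estimate a per-pixel motion vector between two images. As a point of reference, OpenCV's Farneback method (not necessarily the variant the chip uses) looks like this, with typical default parameter values:

```python
import cv2

def dense_flow(prev_gray, curr_gray):
    """Estimate dense optical flow between two grayscale frames using
    the Farneback algorithm. The result is an HxWx2 array of (dx, dy)
    motion vectors per pixel."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        0.5,  # pyramid scale
        3,    # pyramid levels
        15,   # averaging window size
        3,    # iterations per level
        5,    # pixel neighborhood for polynomial expansion
        1.2,  # Gaussian sigma for the expansion
        0)    # flags
```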
Even so, the work is considered an important step forward for machine vision and hardware-based artificial intelligence, opening a path toward robots that respond almost instantly to the world around them.