KAIST research: more accurate robot arm control using only brain waves


Introduction: a brain-machine interface system that controls a robot arm only by thinking, in three-dimensional space.

KAIST announced on the 23rd that a research team led by Professor Jaeseung Jeong of the Department of Bio and Brain Engineering has developed a “brain-machine interface system” that controls a robot arm with high accuracy (90.9–92.6%) purely by thinking about movements in three-dimensional space.

Professor Jeong's research team used artificial intelligence and genetic algorithms to develop a new type of brain-machine interface system that controls a robot arm by detecting the intended arm movement solely from EEG measured deep in the human brain. 'Brain-machine interface' technology, in which a robot or machine detects a person's intention from brain activity alone and acts on it, has been developing rapidly in recent years. However, beyond recognizing the intention to move a hand, technology that moves a robot arm precisely by discerning the intended direction of arm movement has not yet reached high accuracy.

In this study, however, the research team developed an artificial intelligence (AI) model that recognizes the intended direction of movement from brain activity alone.

In addition, whereas existing machine-learning techniques such as deep learning require high-spec GPU hardware, this study used the reservoir computing technique, which makes artificial intelligence (AI) training possible even on low-spec hardware and thus allows broad application to smart mobile devices. The system is expected to be widely applied to the metaverse and smart devices in the future.
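To make the hardware claim concrete, here is a minimal reservoir computing sketch in Python (an echo state network built with NumPy). All sizes, scalings, and the random stand-in data are illustrative assumptions, not the paper's values; the point is that the recurrent weights stay fixed and only a linear readout is fit, so training reduces to a single closed-form ridge regression that runs comfortably on a CPU, with no GPU or backpropagation.

```python
import numpy as np

# Minimal echo state network (reservoir computing) sketch.
# Only the linear readout is trained; the random recurrent weights are fixed.
# Shapes and sizes below are illustrative assumptions, not the paper's values.

rng = np.random.default_rng(0)

N_IN, N_RES, N_CLASSES = 32, 300, 24   # EEG channels, reservoir units, directions

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))            # fixed input weights
W = rng.normal(0, 1, (N_RES, N_RES))                    # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))         # spectral radius < 1 (echo state property)

def reservoir_states(eeg):
    """Run one trial of EEG, shape (T, N_IN), through the fixed reservoir."""
    x = np.zeros(N_RES)
    for u in eeg:
        x = np.tanh(W_in @ u + W @ x)
    return x                                            # final state summarizes the trial

def fit_readout(trials, labels, ridge=1e-2):
    """The only 'learning' in the whole system: one closed-form ridge regression."""
    X = np.stack([reservoir_states(t) for t in trials])          # (n_trials, N_RES)
    Y = np.eye(N_CLASSES)[labels]                                # one-hot targets
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ Y)

def predict(W_out, eeg):
    return int(np.argmax(reservoir_states(eeg) @ W_out))

# Toy usage with random stand-in data (real inputs would be preprocessed EEG).
trials = [rng.normal(size=(100, N_IN)) for _ in range(50)]
labels = rng.integers(0, N_CLASSES, 50)
W_out = fit_readout(trials, labels)
print(predict(W_out, trials[0]))
```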

Brain-machine interface conceptual diagram



Brain-machine interface conceptual diagram. The user imagines the direction they want to move in 3D space (purple). EEG measured during this directional imagery is fed as input to a recurrent neural network designed under the reservoir computing paradigm (blue). The recurrent neural network automatically extracts and decodes the important features of the EEG (red), mimicking the complex computations performed by the actual frontal lobe. The decoding result is then passed to a readout that mimics the visual cortex, and this direction-selective readout expresses the direction of the user's movement intention (green).
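As a rough illustration of the direction-selective readout described in the caption, the sketch below gives each readout unit a Gaussian tuning curve centered on a preferred direction, loosely mimicking direction-tuned cortical neurons, and decodes the intended direction with a population vector. The eight preferred directions, the tuning width, and the population-vector decode are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Sketch of a direction-selective Gaussian readout (illustrative assumptions:
# 8 preferred directions on a circle, tuning width SIGMA, population-vector decode).

PREFERRED = np.deg2rad(np.arange(0, 360, 45))     # 8 preferred directions, 45 deg apart
SIGMA = np.deg2rad(30)                            # tuning-curve width (assumption)

def angular_diff(a, b):
    """Smallest signed angle between two directions."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def readout_activations(decoded_angle):
    """Gaussian tuning curve: each unit fires most for its preferred direction."""
    d = angular_diff(decoded_angle, PREFERRED)
    return np.exp(-0.5 * (d / SIGMA) ** 2)

def population_vector(acts):
    """Decode direction as the activation-weighted average of preferred directions."""
    x = np.sum(acts * np.cos(PREFERRED))
    y = np.sum(acts * np.sin(PREFERRED))
    return np.arctan2(y, x)

acts = readout_activations(np.deg2rad(100))       # pretend the network decoded ~100 deg
print(np.rad2deg(population_vector(acts)))        # ~100 deg, recovered from the population
```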

The brain-machine interface is a technology that reads the user's intentions from brain activity and transmits them to a robot or machine.

In particular, the brain-machine interface is considered the most advanced interface technology in that it transmits commands directly from the brain, whereas existing interfaces must transmit commands indirectly through external body parts (buttons, touches, gestures, and so on).

However, EEG has the limitation that individual differences are very large and the signals are noisy, because it must interpret the electrical characteristics of large groups of neurons over a wide area rather than reading precise signals from single neurons.

To solve this problem, the research team used reservoir computing, one of the most advanced artificial intelligence (AI) techniques, to implement an artificial neural network that automatically learns and finds the important characteristics of each individual's EEG signals needed for the brain-machine interface.

In addition, the system was designed so that the artificial intelligence (AI) neural network can efficiently find the optimal EEG characteristics using a genetic algorithm. The research team also developed an artificial neural network that mimics how visual cortex neurons encode direction, by designing the readout that finally interprets the deep-brain EEG as a Gaussian model. Because this readout can be trained quickly even on simple, general-specification hardware using the linear learning algorithm of reservoir computing, it can be applied in everyday settings such as the metaverse and smart devices.
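The article does not detail the genetic algorithm, so the following is only a generic sketch of the idea: a population of candidate settings (here, two hypothetical reservoir hyperparameters) is scored by a fitness function, the best candidates are kept, and mutated copies replace the rest. In the actual system the fitness would presumably be validation decoding accuracy; the smooth surrogate below is a stand-in.

```python
import numpy as np

# Minimal genetic-algorithm loop (mutation-only, no crossover, for brevity).
# The search space here, two hypothetical reservoir hyperparameters
# (spectral radius, input scaling), is an illustrative assumption.

rng = np.random.default_rng(1)

def fitness(genome):
    """Placeholder: a real run would train a reservoir with these hyperparameters
    and return validation decoding accuracy. Here, a made-up smooth surrogate."""
    rho, scale = genome
    return -((rho - 0.9) ** 2 + (scale - 0.5) ** 2)

pop = rng.uniform([0.1, 0.1], [1.5, 2.0], size=(20, 2))   # initial population

for gen in range(30):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                # selection: keep top half
    children = parents[rng.integers(0, 10, 10)].copy()     # clone random parents
    children += rng.normal(0, 0.05, children.shape)        # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print(best)    # should land near (0.9, 0.5) for this surrogate fitness
```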

In particular, the brain-machine interface AI model created in this study can decode 24 directions in three-dimensional space, that is, eight directions in each dimension, with an average accuracy of over 90% (90.9–92.6%) across all directions. The research team also analyzed the brain waves recorded while subjects imagined moving a robot arm in three-dimensional space, and showed in simulation that the robot arm moved successfully.
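The phrase “eight directions in each dimension” is ambiguous; one plausible reading is eight equally spaced directions in each of the three orthogonal planes, giving 24 classes. The sketch below maps a decoded class index to a 3D unit vector for the robot arm under that purely illustrative assumption.

```python
import numpy as np

# One plausible layout of the 24 decoded classes as 3D unit vectors: eight equally
# spaced directions in each of the three orthogonal planes (xy, yz, xz). This is an
# illustrative reading of "eight directions in each dimension", not the paper's spec.
# Note: axis-aligned vectors recur across planes; the 24 classes stay distinct per plane.

angles = np.deg2rad(np.arange(0, 360, 45))            # 8 directions per plane
c, s = np.cos(angles), np.sin(angles)
z = np.zeros(8)

DIRECTIONS = np.vstack([
    np.column_stack([c, s, z]),                       # xy plane
    np.column_stack([z, c, s]),                       # yz plane
    np.column_stack([c, z, s]),                       # xz plane
])                                                    # shape (24, 3), unit vectors

def class_to_direction(k):
    """Map a decoded class index 0..23 to a 3D unit vector for the robot arm."""
    return DIRECTIONS[k]

print(class_to_direction(3))                          # 135 deg in the xy plane
```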

Dr. Kim Hoon-hee, the first author who built the artificial intelligence (AI) system, said, “Unlike existing EEG decoding methods, which have relied on engineering signal-processing techniques, we developed an artificial neural network that mimics the actual working structure of the human brain. I am happy to have developed a more advanced brain-machine interface system with it.”

Professor Jaeseung Jeong, who led the study, said, “Most 'brain-machine interface systems' that drive a robot arm with thoughts read from brain waves require high-end hardware, which makes real-time operation difficult and limits their use in smart devices. This system, however, achieves intention recognition with a high accuracy of 90–92% and can be widely used in smart devices, for example to move an avatar in the metaverse or to control apps by thought alone.”

The results of this study are expected to open up the possibility of applying the brain-machine interface to a wide variety of systems, from robot-arm mounting and control technology for quadriplegic patients or people who have lost an arm in an accident, to applications in the metaverse, smart devices, games, and entertainment.

This research was carried out with support from the Brain Source Technology Development Project of the National Research Foundation of Korea.
