Human-machine interaction (HMI), present and future

Human-Machine Interaction (HMI)

HMI covers how humans and automated systems interact and communicate with each other. For a long time, HMI was limited to traditional machines, but today it also extends to computers, digital systems, and devices for the Internet of Things (IoT). More and more devices are connected and carry out tasks automatically. Operating all of these machines, systems, and devices should be intuitive and should not place an excessive burden on users.

How does HMI work?

Smooth communication between humans and machines requires interfaces: the points at which a user operates a machine, or the actions they take to do so. Simple examples are a light switch or the pedals and steering wheel of a car: the user flips the switch, turns the wheel, or presses a pedal. Computer systems, in turn, can be controlled by entering text or by using a mouse, touch screen, voice, or gestures.

The user can either control the device directly (touching the smartphone screen or giving a voice command), or the system detects on its own what the person wants (when a vehicle passes over an inductive loop detector embedded in the road surface, the traffic light automatically changes color). Other technologies complement our senses rather than being used to control devices; virtual reality glasses are one example. Chatbots and digital assistants, in turn, respond to customer requests automatically and learn continuously.

HMI: Chatbots and digital assistants

Artificial intelligence and chatbots in HMI

The first chatbot, Eliza, was developed in the 1960s but quickly ran into its limits: it could not handle follow-up questions. Much has changed since then. Today, chatbots "work" in customer service, providing information about departure times or services in text or speech. To do this, a chatbot scans user input for keywords and responds based on pre-programmed rules and routines, as in the sketch below. Modern chatbots are powered by artificial intelligence, and digital assistants such as Amazon Alexa, Google Home, or Google Assistant are chatbots as well.
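
A minimal sketch of how such a rule-based chatbot might look, assuming a simple keyword-to-answer table; the keywords, answers, and fallback text here are invented for illustration and are not taken from any real product:

```python
# Minimal sketch of a rule-based chatbot as described above: it scans the
# user input for known keywords and replies from pre-programmed rules.
# All rules and answers below are illustrative placeholders.

RULES = {
    "departure": "The next departure is at 14:30 from platform 2.",
    "opening hours": "We are open Monday to Friday, 9 am to 6 pm.",
    "price": "A standard ticket costs 3.50 euros.",
}

FALLBACK = "Sorry, I did not understand that. Could you rephrase your question?"


def reply(user_input: str) -> str:
    """Return the first answer whose keyword appears in the user input."""
    text = user_input.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK


if __name__ == "__main__":
    print(reply("When is the next departure to Munich?"))
    print(reply("Tell me a joke."))  # hits the fallback, much like early Eliza
```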

Chatbots learn from requests and expand their repertoire without direct human intervention. They can also remember previous conversations, make connections between them, and expand their vocabulary. Google's voice-activated digital assistant, for example, can infer questions from context with the help of artificial intelligence. The more a chatbot understands, the better it can respond and the closer it gets to human-to-human communication. Big data also plays an important role here: the more information a chatbot can draw on, the more appropriate its answers become.

The importance of chatbots and digital assistants will increase significantly. According to market researcher IHS, digital assistants such as Amazon's smart speaker Echo alone are projected to grow by 46 percent over the next few years.

HMI: The road to sophisticated voice control

Users control systems such as Alexa, Google Assistant, Google Home, or Microsoft's Cortana with their voice; they no longer need to touch a display. It is enough to say a wake word like "Alexa" to activate the digital assistant and then give a command such as "Turn down the volume" or "Cool the room temperature." This saves effort and is more intuitive. Microsoft CEO Satya Nadella predicted as early as 2014 that "the human voice is the new interface."

However, speech recognition is not yet mature. Digital assistants do not understand every request because background noise interferes, and they often cannot tell the difference between a human voice and sound from a TV. According to the US Consumer Technology Association (CTA), the speech recognition error rate in 2013 was 23 percent. In 2016, researchers at Microsoft lowered it to less than 6 percent for the first time, but that is still not good enough.

Infineon is working with British semiconductor manufacturer XMOS to significantly improve voice control. XMOS supplies voice processing modules for IoT devices. A new solution introduced by Infineon and XMOS in early 2017 uses a smart microphone that allows digital assistants to distinguish the human voice from other noises. Infineon's XENSIV™ radar and silicon microphone sensors combine to identify the position of the speaker and their distance from the microphone, and XMOS's far-field voice processing technology is used to capture the voice itself.
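
To illustrate the underlying idea only (the actual Infineon/XMOS implementation is not detailed in this article), the sketch below assumes a radar-derived estimate of the speaker's direction and uses it to steer a simple delay-and-sum beamformer over a small microphone array, so speech from that direction is reinforced; the array geometry and signals are made up:

```python
# Hedged sketch of a "smart microphone" pipeline: a radar sensor estimates
# where the speaker is, and that estimate steers a delay-and-sum beamformer
# over a microphone array so speech from that direction adds up coherently
# while diffuse noise partially cancels. Purely illustrative values.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 16_000    # Hz


def steering_delays(mic_positions_m: np.ndarray, angle_rad: float) -> np.ndarray:
    """Per-microphone delays (in samples) for a plane wave from the given angle."""
    direction = np.array([np.cos(angle_rad), np.sin(angle_rad)])
    # Projecting each mic position onto the arrival direction gives the extra
    # path length, hence the relative time delay of that channel.
    path_diff_m = mic_positions_m @ direction
    delays_s = path_diff_m / SPEED_OF_SOUND
    return np.round(delays_s * SAMPLE_RATE).astype(int)


def delay_and_sum(mic_signals: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Align each channel by its delay and average the channels."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)


if __name__ == "__main__":
    # Two mics 5 cm apart; assume the radar says the speaker is at 30 degrees.
    mics = np.array([[0.0, 0.0], [0.05, 0.0]])
    delays = steering_delays(mics, np.deg2rad(30.0))
    noisy = np.random.randn(2, SAMPLE_RATE)  # stand-in for one second of audio
    enhanced = delay_and_sum(noisy, delays)
    print(enhanced.shape)
```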

Andreas Urschitz, president of Infineon's Power Management and Multimarket business unit, said: "We have improved speech recognition rates and accuracy by better suppressing ambient noise," taking voice control to a "new level." Urschitz is confident that voice control of smart TVs will become increasingly important; by 2022, the number of TV sets with built-in voice control is expected to grow fivefold to 60 million units.

Urschitz also expects many changes in smart home appliances. Robot vacuum cleaners, for example, are currently operated via a touch screen. This is impractical, because the user has to walk over to the device just to stop it. "In the future, devices like this will be equipped with voice control systems," he adds. From his point of view, however, operating devices by voice command is itself only an intermediate step. In the long run, we will control devices through gestures, and a gesture will be enough to stop the robot. "But that is still a few steps away; our first challenge is to make voice control more efficient with XMOS," he says.

HMI trends

The road to gesture control

Gesture control has many advantages over touch screens. Users do not have to touch the device, so they can give commands from a distance. In public places especially, gesture control can be an alternative to voice control: talking to a smart wearable on the subway can feel awkward and attract unwanted attention. Gesture control also opens up three dimensions, going beyond the two-dimensional user interface.

With "Soli", Google and Infineon have developed a new kind of gesture control. It is based on radar: Infineon's radar chip receives the waves reflected from the user's fingers, so that when someone moves their hand, the motion is registered by the chip. A Google algorithm then processes these signals, as sketched below. The technology works in the dark, at a distance, and even if the user's fingers are not clean, and the same hand motions apply across all Soli devices. Soli chips can be built into almost any device, such as a loudspeaker or a smartwatch. "An algorithm that tracks patterns of motion and touch, combined with an ultra-compact and highly integrated radar chip, enables a wide range of applications," says Urschitz. In the future, this technology could eliminate the need for buttons and switches altogether.
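
To illustrate the general principle only (Google's actual Soli algorithm is not described in this article), the following sketch assumes the radar delivers a short trace of the hand's radial velocity per frame and matches it against a few hand-written gesture templates; all names and values are invented:

```python
# Toy illustration of radar-based gesture recognition: the radar yields a
# short time series of hand motion (here, radial velocity per frame), and a
# nearest-template classifier maps that pattern to a gesture. This stands in
# for, and is not, the real Soli processing pipeline.

import numpy as np

# Reference motion patterns: per-frame radial velocity of the hand (m/s).
GESTURE_TEMPLATES = {
    "swipe_towards": np.array([0.0, -0.4, -0.8, -0.4, 0.0]),
    "swipe_away":    np.array([0.0,  0.4,  0.8,  0.4, 0.0]),
    "tap":           np.array([0.0, -0.6,  0.6,  0.0, 0.0]),
}


def classify_gesture(velocity_trace: np.ndarray) -> str:
    """Return the template whose motion pattern is closest to the measured one."""
    distances = {
        name: float(np.linalg.norm(velocity_trace - template))
        for name, template in GESTURE_TEMPLATES.items()
    }
    return min(distances, key=distances.get)


if __name__ == "__main__":
    measured = np.array([0.05, -0.35, -0.75, -0.45, 0.02])  # fake radar frames
    print(classify_gesture(measured))  # -> "swipe_towards"
```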

HMI: Augmented, virtual and mixed reality

Modern human-machine interaction has long taken forms that go beyond moving a lever or pressing a button. Augmented reality, too, can serve as an interface connecting people and machines.

Virtual, augmented, and mixed reality are not only for fun and gaming but also for Industry 4.0. Apps for the Microsoft HoloLens, for example, provide virtual training courses for technicians. The Fraunhofer Institute for Factory Operation and Automation (IFF) offers businesses the mixed reality laboratory Elbedome, where six laser projectors display machines, factories, or entire cities on a 360-degree surface, giving developers or customers the feeling of standing inside the facility they are planning.

HMI: Opportunities and challenges

Even complex systems will become easier to use thanks to modern HMI. To make this possible, machines will increasingly adapt to human habits and needs, and virtual, augmented, and mixed reality will allow machines to be controlled remotely. As a result, humans will expand their realm of experience and field of activity.

Furthermore, signal interpretation in machines must continue to evolve, and this is essential: a fully autonomous vehicle, for example, must be able to respond correctly to the hand signals of a police officer at an intersection. Similarly, robots used in medicine must be able to "assess" the needs of people who cannot express them themselves.

The more complex the tasks machines take on, the more important efficient communication between machine and user becomes. Does the technology understand commands as intended? If not, misunderstandings can arise and the system may not work properly: a machine might produce parts that do not fit, or a connected car might make a wrong maneuver on the road.

Infineon's REAL3™ 3D image sensor chip reproduces the environment in high-quality three dimensions. The chip is already built into mobile devices such as some smartphones from Asus and Lenovo. It uses time-of-flight (ToF) technology: the image sensor chip measures the time it takes for an infrared signal to travel from the camera to the object and back. Motion tracking detects changes in position and depth sensing measures the distance to objects, giving direct access to augmented reality. Spatial learning lets the device recognize places it has already captured.
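
The ToF principle itself is easy to state: distance equals the speed of light times the round-trip time, divided by two. The sketch below computes a toy depth map from assumed round-trip times; the numbers are invented and say nothing about how the REAL3 chip measures internally:

```python
# Minimal sketch of the time-of-flight (ToF) principle: each pixel's depth
# follows from the round-trip time of the infrared signal,
#   distance = (speed of light * round-trip time) / 2.
# The timing values below are invented for illustration.

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s


def depth_from_round_trip(round_trip_time_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times into distances in metres."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0


if __name__ == "__main__":
    # A tiny 2x2 "depth image": round-trip times in nanoseconds.
    round_trips_ns = np.array([[6.67, 13.34],
                               [20.0, 26.68]])
    depth_m = depth_from_round_trip(round_trips_ns * 1e-9)
    print(np.round(depth_m, 2))  # roughly 1 m, 2 m, 3 m, 4 m
```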

Humans have specific abilities and limitations, and these must always be taken into account when developing interfaces and sensors. Operating a machine should not be overly complicated or require lengthy familiarization. For smooth human-machine communication, the response time between command and action should be as short as possible; otherwise users will perceive the interaction as unnatural.

One danger arises from the fact that machines rely heavily on sensors for automatic control and response: if hackers gain access to this data, they can learn detailed information about a user's behavior and interests. Some critics even fear that machines could take on a life of their own and dominate humans. Another open question is who is at fault, and who is liable, for accidents caused by HMI errors.

Where will HMI go?

Human-machine interaction will go far beyond voice and gesture control, virtual and augmented reality, and mixed reality. In the future, data from more and more sensors will be combined to capture and control complex processes (sensor fusion), as in the sketch below.
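
A minimal sketch of the sensor fusion idea, using a standard inverse-variance weighted average to combine two hypothetical distance readings (say, from a radar sensor and a ToF camera); this is a textbook approach for illustration, not a specific Infineon method:

```python
# Sensor fusion sketch: readings from different sensors are combined into a
# single, more reliable estimate. Each (value, variance) pair is weighted by
# the inverse of its variance, so the more precise sensor counts for more.
# The sensor values below are invented.


def fuse_measurements(readings: list[tuple[float, float]]) -> float:
    """Inverse-variance weighted average of (value, variance) pairs."""
    weights = [1.0 / var for _, var in readings]
    weighted_sum = sum(value * w for (value, _), w in zip(readings, weights))
    return weighted_sum / sum(weights)


if __name__ == "__main__":
    radar_m = (2.10, 0.04)  # noisier distance estimate
    tof_m = (2.02, 0.01)    # more precise depth-camera estimate
    print(round(fuse_measurements([radar_m, tof_m]), 3))  # closer to the ToF value
```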

Input devices that are common today, such as remote controls, computer keyboards, and on/off switches, will be used less and less. As computer systems, devices, and equipment continue to learn and gain access to more data, they will become increasingly human-like, and sensors will take over the work of the sense organs. Machines will see through cameras, hear through microphones, and sensor-equipped clothing will convey the sensation of touch. Infineon is working to replicate the human senses more precisely with the help of sensors. "Gas sensors will be able to 'smell', pressure sensors will interpret barometric pressure, and 3D cameras will improve the 'vision' of devices," explains Urschitz.

With the help of sensors, machines will be able to analyze what is happening around them, resulting in an entirely new form of interaction. Urschitz gives the following example: a mobile phone with a gas sensor "smells" a burger nearby, and the digital assistant suggests taking a look at the menu because certain burgers are currently on sale. At the same time, the device can interpret and react to the user's body language thanks to its sensors.

Machines will become smarter with artificial intelligence. With machine learning, a computer draws its own conclusions from the data it encounters. As digital assistants such as Amazon's Alexa demonstrate, this is already possible today to a degree. As technology becomes able to process more data in less time, machines' ability to "think" for themselves will continue to grow.
