Artificial intelligence robots in everyday life (2022)
With the rapid development of artificial intelligence technology, we now encounter AI easily in many areas of life. Machines listen and respond to us (language interpretation and translation, speech recognition, speech synthesis), identify who a person is (face recognition, person recognition), distinguish what an object is (object recognition), and recognize human emotions (emotion recognition). They can read or retrieve relevant information and make recommendations personalized to you.
It has even become possible to compose music or imitate rare and famous works, and in some areas AI has surpassed human abilities, defeating humans in contests such as Go (AlphaGo) and StarCraft (AlphaStar). This shows that, given sufficient data and suitable learning algorithms, artificial intelligence that exceeds human intelligence is possible in certain domains.
If so, how far has the technology of artificial intelligence robots that perform physical movements advanced? Along with the rapid development of AI, automated machines that used to repeat simple tasks on the production floor, and entertainment robots presented as spectacles, have recently evolved into smarter robots equipped with artificial intelligence. The AI robots we once only dreamed of are appearing around us one by one.
In factories, demand for automation is increasing due to labor shortages on the production floor and rising labor costs. Robots automatically assemble glass and tires, repeatedly transport heavy and dangerous objects, and sort and pick fast-moving products on conveyor belts. AI robots that assist and cooperate with humans in production work have begun to replace humans for specific tasks, and collaborative robots that work alongside workers in the same space are also proving their applicability.
In addition, as demand for and interest in contactless physical services has grown with the COVID-19 pandemic, delivery robots, guide robots in restaurants, and disinfection robots have entered real life and begun to coexist with people.
The times now demand that robots become a necessity that constantly helps people in their daily lives. Robots are expected to become as indispensable as a smartphone or a car and, furthermore, to provide practical help by thinking, predicting, planning, and acting like a person. However, contrary to our expectations, the reality is that robot intelligence is progressing slowly compared to artificial intelligence such as AlphaGo.
In this article, we first define what robot artificial intelligence is and examine what kinds of AI robots appear according to that definition. We then look at which intelligent service robots are already with us. Finally, we summarize the technical requirements for future AI robots and introduce the challenges to consider when developing them.
Definition of artificial intelligence robot
In order to survive in environments such as land, sea, forest, and underground, living things in nature have sensory organs for cognition, bodies suited to their environment, and physical abilities. They evolved by adapting to their environments with different forms: animals with feet, fish with fins and gills, birds with wings. Their intelligence, too, was specialized into various natural intelligences to survive in each environment. Artificial intelligence robots, like living things, exist in physical space, so they can be defined from a similar point of view.
An artificial intelligence robot can be defined as "a robot that, in the space (environment) where it works, extracts the information necessary to achieve a desired task from the environmental information obtained by its onboard sensors and, based on this, appropriately learns, selects, and generates optimized actions to perform the task without error." According to this definition, the size and shape of the robot body change according to the environment and the target task, and the sensors, actuators, and robot intelligence also take different forms. Like natural organisms, AI robots will be specialized according to the space in which they live and the tasks they perform, and will appear in various forms with different bodies and different artificial intelligences.
The space in which robots live, and the robots and intelligence that change in each space
Let's look at the spaces (environments) where robots live and the work they do there. The environments in which AI robots live can be broadly divided into indoor and outdoor, and each can be further subdivided into spaces with various characteristics. Outdoors, there are road spaces for autonomous vehicles, large residential complexes and urban spaces, agricultural spaces such as rice paddies, fields, and barns, construction sites, sea and deep-sea spaces, drone flight zones, and disaster sites. Indoors, there are factory spaces for producing, assembling, and manufacturing goods, warehouse spaces for logistics, commercial spaces such as offices, restaurants, and building interiors used by the public, and home spaces. Each space has its own shape, structure, size, and environmental factors; the work required differs by space, and even the specifications for the same type of work differ.
For example, in the assembly process of a manufacturing line, picking up and assembling a centimeter-scale connector and picking up and assembling a car windshield or tire require different robot sizes, shapes, sensors, precision, and working times. A robot that transports objects weighing tons with millimeter precision and a robot that must safely carry food without spilling it, arriving within 10 cm of a dining table, will differ in size, force, shape, sensors, and every other specification.
Artificial intelligence robots are therefore inevitably diverse. Nevertheless, we expect a single type of robot (a humanoid) built with one technology to operate like a human with all functions. That expectation is a huge challenge. In reality, specialized robots, with intelligence and shapes suited to their use and learned from natural intelligence, are likely to be commercialized first.
Artificial intelligence robots approaching everyday life
1) Mobile service robot
Autonomous transport and delivery service robots also require different robot intelligence for each space. In the factory space, delivery and logistics work that supplies the various materials required by the assembly line is needed, and continuous, repetitive logistics flows occur from the loading dock to the assembly line. Various robots are being deployed in the field to automate this. In particular, since the main purpose of a factory is production efficiency, dedicated paths can be made for robots to travel, and there is little objection to embedding devices in the floor or attaching additional devices around the factory.
Currently, the most common method is to guide the robot with magnetic tape attached to the floor. The tape is laid along the path the robot should follow, and the robot moves by responding to the magnetic field. The robot's "eye" responds to a stimulus, and its intelligence amounts to generating a motion corresponding to that stimulus.
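The intelligence of such a tape-following robot is essentially a reflex from sensed offset to wheel speeds. As a minimal sketch (not any vendor's actual controller; the gain, speeds, and sign convention are assumptions), a proportional steering law for a differential-drive robot might look like:

```python
def steer_from_offset(offset_m: float, base_speed: float = 0.3, kp: float = 1.5):
    """Map the sensed lateral offset from the magnetic tape (meters,
    positive = tape is to the robot's left) to (left, right) wheel
    speeds in m/s using a proportional controller."""
    correction = kp * offset_m
    left = base_speed - correction   # slow the inner wheel
    right = base_speed + correction  # speed the outer wheel
    return left, right

# Centered on the tape: both wheels run at base speed.
print(steer_from_offset(0.0))   # (0.3, 0.3)
# Tape drifted 2 cm to the left: turn left by speeding up the right wheel.
print(steer_from_offset(0.02))
```

The robot has no map and no notion of position; it simply reacts to the field under its sensor, which is why the article compares it to a stimulus-response reflex.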
Another method is to attach specific marks (QR codes) to the floor at close intervals (less than 1 m apart); the robot reads the QR code information, determines where it is, and generates a motion toward its destination. In this case, the robot's eye is a camera that reads the codes on the floor. In addition, there is a method that determines position by attaching reflectors around the robot's main routes and measuring the distances to multiple reflectors with an onboard distance sensor, using triangulation. This can serve as a kind of indoor GPS.
This technology is used effectively in mass-production assembly lines and in large distribution centers such as Amazon's. However, as demand for high-mix, low-volume production grows, assembly lines will change frequently, and so will factory layouts. Each of these changes costs money and time to rebuild all the infrastructure and attachments for the robot.
To replace these methods, SLAM (Simultaneous Localization and Mapping) robots are being tried as an alternative. In this approach, the robot extracts and stores the structures, patterns, and features of the factory by itself from its sensors (a technique that creates and memorizes the map the robot needs in order to move), and at the same time uses those features to determine its own position within the created map (a localization technique). As the "eye," a distance scan sensor such as LiDAR, or a camera sensor imitating the human eye, is mainly used. This approach has become mainstream over the last 10 years, a great deal of research and development has been done, and many mobile-robot startups have appeared. Recently, it is shifting toward sensor-fusion methods that maximize each sensor's advantages and compensate for its disadvantages.
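Full SLAM interleaves mapping and localization and is far more involved, but the mapping half, turning a range reading taken from a known pose into map cells, can be illustrated with a toy occupancy grid. This is a simplified sketch with assumed parameters (1 m cells, a dictionary grid), not a production SLAM implementation:

```python
import math

def update_grid(grid, pose, angle, dist, cell=1.0):
    """Mapping half of SLAM (sketch): given a known robot pose and one
    LiDAR-style range reading, mark the cells along the beam as free and
    the cell at the beam endpoint as occupied.
    grid is a dict mapping (ix, iy) -> 'free' or 'occupied'."""
    x0, y0 = pose
    steps = int(dist / cell)
    for i in range(steps):
        r = i * cell
        ix = int((x0 + r * math.cos(angle)) // cell)
        iy = int((y0 + r * math.sin(angle)) // cell)
        grid[(ix, iy)] = 'free'
    hx = int((x0 + dist * math.cos(angle)) // cell)
    hy = int((y0 + dist * math.sin(angle)) // cell)
    grid[(hx, hy)] = 'occupied'  # the beam hit something here
    return grid

grid = {}
# One beam straight ahead (angle 0) hits a wall 4 m away.
update_grid(grid, pose=(0.0, 0.0), angle=0.0, dist=4.0)
print(grid[(4, 0)])  # occupied
```

The hard part of SLAM, which this sketch omits, is that the pose itself is unknown and must be estimated from the very same readings, typically with scan matching or a particle filter, at the same time as the map is built.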
Unlike factory spaces, in commercial spaces such as restaurants, large shopping malls, and large buildings, it is difficult to install additional devices around the robot and difficult to secure paths reserved for robots alone. The robot must be able to move autonomously among people.
Some restaurant robots use ceiling markers, applying a technology similar to the factory floor markers, but ceiling height differs from store to store and the markers are hard to recognize in direct sunlight, so the method is not easy to apply across many restaurants.
Most robots operated in commercial spaces adopt SLAM centered on a LiDAR sensor. However, commercial spaces undergo severe structural and environmental changes, and it is difficult to operate for a long time with a map memorized only once.
Recently, autonomous driving and deep-learning-based environment recognition (recognizing objects rather than raw feature points) are being studied to respond to these changes and operate robustly. Research is actively pursued in two directions: transforming or updating the memorized map to adapt to changes in time and space, and finding and remembering features that are unaffected by change. Meanwhile, research on robot intelligence that uses reinforcement learning on driving experience in various environments, to drive more safely and prevent collisions with humans, is also active.
In the home space, autonomous driving is possible based on the technology applied in commercial spaces, but the problem is the many variables. Wheeled robots have limitations in overcoming toys, carpets, steps, and stairs on the floor. In addition, since robots for the home must be far cheaper than robots for factories or commercial spaces, emotional-companion robots that communicate with people, rather than robots aimed at a specific physical task, are currently the mainstream.
2) Collaborative service robot
Even robots (robot arms) that cooperatively perform tasks such as assembly, manipulation, and gripping in the same space as humans may have different robot intelligence in each space. In the factory space, the industrial robot arm is fixed in place, and the work needed for assembly is performed repeatedly on products presented in exactly the same position and posture on the assembly line. When a specific signal occurs, the robot responds to it and repeats the same task; that is the extent of its intelligence. Such a robot is essentially an automated machine without eyes.
In recent factories, assembly is carried out with partial autonomy by attaching "eyes" that can recognize two- and three-dimensional postures even when the position or posture of the product changes. A robot that assembles automobile glass or tires automatically estimates the position and distortion of the arriving vehicle body and precisely inserts the glass or wheel with millimeter accuracy. Though partially autonomous, the robot still works fixed inside a safety fence, and robot and human do not share a workspace.
On the other hand, cobots, in which robots and humans share roles and collaborate, have emerged, and cases in which they raise human productivity are increasing. These robots must share a working space with people; to assist safely, various sensors take the place of fences, observing the robot and the operator from outside. The robot detects external contact by measuring the current of its internal motors, or carries tactile or proximity sensors to prevent collisions.
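The motor-current approach can be sketched as a simple residual check: compare each joint's measured current with the current predicted by the robot's dynamics model, and flag a probable contact when the deviation exceeds a threshold. Real cobots use far richer models and per-joint tuning; the threshold and values here are purely illustrative:

```python
def detect_collision(measured_amps, expected_amps, threshold=0.8):
    """Flag a possible external contact when any joint's measured motor
    current deviates from the model-predicted current by more than the
    threshold (amperes). A collision adds unexpected load torque, which
    shows up as excess current."""
    return any(abs(m - e) > threshold
               for m, e in zip(measured_amps, expected_amps))

# Normal motion: measured currents track the model prediction.
print(detect_collision([1.0, 2.1, 0.5], [1.1, 2.0, 0.5]))  # False
# Joint 2 draws far more current than predicted -> probable contact.
print(detect_collision([1.0, 3.5, 0.5], [1.1, 2.0, 0.5]))  # True
```

On a real arm the response to a detected contact (stop, retract, or switch to compliant control) matters as much as the detection itself.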
Looking at the latest trends in the factory space, companies that supply fast and precise industrial robots are working to make them coexist with humans, while collaborative-robot makers are working to take over tasks once done by industrial robots. As robot intelligence develops, the boundary between industrial and collaborative robots is blurring, and task flexibility is increasing. In addition, installing an industrial robot requires jigs and fixtures costing 3 to 4 times the price of the robot itself. To build an autonomous, jig-free robot without such peripherals, smart eyes must be installed, and the robot will need to evolve into an AI robot combined with a mobile base.
In commercial spaces, as food tech becomes a reality, automated cafes and kitchens where AI robots cook or serve food are expanding. For now, the work is limited to repeating constrained motions, such as picking up a coffee cup and handing it to a delivery robot or frying chicken in oil, but it is expected to spread to more diverse fields. Research on home-appliance robots that organize things in the home or automate cooking and laundry is also on the rise.
Development direction and challenges of artificial intelligence robots
1) Technical challenge
Although artificial intelligence technology is approaching human level in some areas, can we say that robot intelligence is mature enough to provide practical help when applied to robots? It is hard to say it is mature yet. OpenAI's recent language model, GPT-3 (Generative Pre-trained Transformer 3), shows an intuitive ability to write text and even code. It is evaluated as a big step toward the goal of artificial general intelligence (AGI). GPT-3 is rated one of the most likely candidates to pass the Turing test, with a whopping 175 billion parameters learned and used for inference.
Despite these reviews, it does not do everything well. It understands physics only to the extent it learned from text, and it does not seem to grasp the intuitive physics of time and space. As an input-to-output method that continually predicts the next word from a large corpus of text, it is hard to expect it to understand a robot's state and goal (the environment it lives in, its task, and so on) amid the many variables of the real world and to generate motion from underlying principles. It seems that AI robots will need to experience the world fully in real life, using visual information and various cognitive sensors.
For example, consider a situation in which an AI robot uses its arms to make coffee, pour it into a cup, and deliver the cup to a customer with a mobile base. The robot must account for many variables: how heavy the cup is, how much coffee it contains, whether its surface is slippery, how much force may be applied, and whether it can be lifted at all. Work becomes possible only when each variable and its possibilities are modeled from empirical data.
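One of those variables, how hard to squeeze the cup, can be modeled with elementary friction physics once the cup's mass and the contact friction coefficient are known from data. A toy calculation (the friction coefficient, safety factor, and two-finger gripper geometry are assumptions for illustration):

```python
G = 9.81  # gravitational acceleration, m/s^2

def required_grip_force(mass_kg, mu, safety=2.0):
    """Minimum normal (squeeze) force, in newtons, for a two-finger
    gripper to hold a cup against gravity: friction at both contacts
    must support the weight, 2 * mu * F >= m * g, scaled by a safety
    factor. mu must come from measured data for the cup's surface."""
    return safety * mass_kg * G / (2 * mu)

# 0.35 kg cup of coffee, rubber pad on ceramic (mu ~ 0.6, assumed).
print(round(required_grip_force(0.35, 0.6), 2))  # 5.72
```

The point of the example is that even this one number depends on quantities (mass, fill level, surface friction) the robot cannot know in advance; it must estimate them from sensing and experience, which is exactly the experiential data the article argues is scarce.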
Humans carry thousands of years of experience, recorded in DNA, in today's bodies and cognitive senses. AlphaStar, the AI that defeated humans in the strategy game StarCraft II, acquired through simulation an amount of game experience that a human would need 10,000 years of play to match. It seems difficult for a robot to obtain such a vast amount of experience data directly, and data from coexisting with humans is especially scarce. This may be why an "Alpha Robot" corresponding to AlphaGo or AlphaStar has yet to appear.
In addition to the intelligence to operate well on its own, an AI robot needs social intelligence to coexist and cooperate with humans. Human-robot interaction requires intelligence that understands human intent: understanding the content of a conversation to grasp a person's explicit intention, or continuing the conversation to reduce uncertainty when the intention is unclear. The robot should also understand and empathize with people's emotions, including complex and subtle ones.
Cooperation between humans and AI robots will become possible only when robots can accurately grasp human intentions and respond with appropriate social expressions. In other words, for AI robots to coexist and collaborate with humans in real life, they need intuitive physics and folk psychology. In addition, robot intelligence, sensor solutions, mechanisms, motors, and control technologies are all important for AI robots to become smarter and do their jobs better.
2) Social and ethical challenges
As artificial intelligence develops, digital inequality will arise between countries and between individuals, depending on the level of technology and the gaps in access to it. Those who use AI technology and AI robots well can maximize convenience and efficiency, but for those who cannot, the result may go beyond inconvenience to inequality. As the AI and robot industries change rapidly, these social problems will arrive quickly, and we need to supplement institutions and prepare policy measures.
We must also respond to problems arising from the misuse and abuse of AI technology. Recently, deepfake technology that synthesizes human faces has become a major problem, as it has been used for socially and ethically harmful purposes. We must respond so that people are not harmed by the misuse of technology.
On the other hand, as the freedom and autonomy of AI robots increase, situations will arise in which the robot decides and acts on its own. In such cases, a robot must not retaliate against humans out of emotion, and must not injure people in order to avoid damage to itself. Robots should be equipped with artificial intelligence that can say it does not know what it does not know. To this end, explainable AI and AI that can handle uncertainty are needed. A black box that stores relevant data must be included so that why the AI robot made a given decision can be investigated and evaluated.
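Such a black box can be sketched as an append-only decision log that records the sensor inputs, the chosen action, and the model's confidence for later review. The field names and structure below are illustrative, not any standard's format:

```python
import json
import time

class DecisionLog:
    """Minimal 'black box' sketch: append each decision together with its
    inputs, chosen action, and confidence, so a post-incident review can
    trace why the robot acted as it did."""

    def __init__(self):
        self.records = []

    def log(self, sensors, action, confidence):
        self.records.append({
            "t": time.time(),          # wall-clock timestamp
            "sensors": sensors,        # inputs the decision was based on
            "action": action,          # what the robot chose to do
            "confidence": confidence,  # how sure the model was
        })

    def dump(self):
        """Serialize the whole log for offline investigation."""
        return json.dumps(self.records, indent=2)

box = DecisionLog()
box.log({"front_lidar_m": 0.4}, action="stop", confidence=0.95)
# Low-confidence decisions are recorded too: the robot "knows it doesn't know".
box.log({"front_lidar_m": 2.0}, action="proceed", confidence=0.42)
print(len(box.records))  # 2
```

Recording confidence alongside each action is what connects the black box to the article's point about uncertainty: a reviewer can see not just what the robot did, but how sure it was when it did it.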
Concluding remarks
The emergence of AI robots in real life is an unavoidable path, driven by public demand, changes in demographic structure, the rapid development of AI technology, and the semi-compulsory transition to a contactless society brought on by COVID-19. If policies and institutions support overcoming the technological challenges step by step and solving the social and ethical problems, the transition to a future society in which humans and AI robots coexist will take place in the near future.
In preparation, we all need to work to strengthen international technological competitiveness and to maximize the positive impact of technology through steady discussion of, and response to, the technological, social, and ethical impacts of AI robots.