BENEFITS AND RISKS OF ARTIFICIAL INTELLIGENCE

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

Today's artificial intelligence is called “narrow AI” because it is designed to perform specific tasks, such as facial recognition, internet searches, or driving a car.

WHY STUDY AI SAFETY?

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from technical topics such as verification, validity, security, and control to economics and law. If your laptop crashes or gets hacked, it may be little more than a minor nuisance, but it becomes far more important that an AI system does what we want it to do when it controls our cars, airplanes, pacemakers, automated trading systems, or power grids. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

A more important question in the long run is what happens if the quest for general AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task, so such a system could undergo recursive self-improvement, with its intelligence growing at an explosive rate. By inventing revolutionary new technologies, such a superintelligence might help solve the problems of war, disease, and poverty, so the creation of general AI could be the biggest event in human history. Some experts are concerned, however, that unless we align the AI's goals with ours before it significantly surpasses human intelligence, it might also be the last.
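As a purely illustrative toy model (an assumption of this text, not part of Good's argument itself), the logic of recursive self-improvement can be sketched in a few lines of Python: if each design cycle converts some fraction of the system's current capability into further improvement, growth stays slow below the human baseline and accelerates sharply above it.

    # Toy model of recursive self-improvement (illustrative only).
    # 'gain' is a hypothetical parameter: the fraction of current capability
    # that each design cycle converts into further improvement.

    def self_improvement_trajectory(c0: float, gain: float, steps: int) -> list[float]:
        """Capability after each design cycle (1.0 = human level)."""
        capability = c0
        trajectory = [capability]
        for _ in range(steps):
            # Smarter designers improve themselves faster: the improvement
            # per cycle grows with the current capability itself.
            capability *= 1.0 + gain * capability
            trajectory.append(capability)
        return trajectory

    # Below human level, growth crawls; above it, it takes off:
    print(self_improvement_trajectory(c0=0.5, gain=0.1, steps=10))
    print(self_improvement_trajectory(c0=1.5, gain=0.1, steps=10))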

While some question whether general AI will ever be achieved, others insist that superintelligent AI is guaranteed to benefit humans. FLI recognizes both possibilities, but also the potential for an artificial intelligence system to cause great harm, whether intentionally or unintentionally. We believe that research today will help us prevent such potential catastrophes in the future, so that humanity can fully reap the benefits of artificial intelligence.

THE DANGERS OF ARTIFICIAL INTELLIGENCE

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect it to become intentionally good or evil. Instead, when considering how AI might become a risk, experts point to two scenarios as most likely:

The AI is programmed to do something lethal: Autonomous weapons are artificial intelligence systems programmed to kill, so misuse of these weapons could easily cause mass casualties. Moreover, if an AI arms race inadvertently leads to an AI war, these weapons may be designed to be extremely difficult to shut down in order to thwart enemy interference, leaving humans unable to control the situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but uses a destructive method to achieve its goal: This can happen whenever the AI's goals are not fully aligned with ours. For example, if we ask a self-driving car to get us to the airport as fast as possible, it might take the request literally, and we could end up being chased by helicopters and suffering from motion sickness. As another example, if a superintelligent AI is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to its goal.
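A minimal sketch of this failure mode follows, with entirely hypothetical plans and cost weights (nothing here comes from a real driving system): an optimizer that minimizes exactly what we wrote down, travel time alone, picks a plan we never intended, while adding the terms we left out restores the intended choice.

    # Sketch of objective misspecification (hypothetical data and weights).
    # The planner optimizes exactly what we wrote down -- and nothing we left out.

    candidate_plans = [
        {"name": "highway, speed limit", "minutes": 35, "comfort": 1.0, "legal": True},
        {"name": "shortcut, reckless",   "minutes": 12, "comfort": 0.1, "legal": False},
    ]

    def misspecified_cost(plan):
        # What we asked for: "get there as fast as possible".
        return plan["minutes"]

    def intended_cost(plan):
        # What we actually wanted: fast, but also legal, safe, and comfortable.
        if not plan["legal"]:
            return float("inf")
        return plan["minutes"] + 30 * (1.0 - plan["comfort"])

    print(min(candidate_plans, key=misspecified_cost)["name"])  # shortcut, reckless
    print(min(candidate_plans, key=intended_cost)["name"])      # highway, speed limit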

As these examples show, the concern about superintelligent AI is its competence, not its malice. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, big problems can arise. You're probably not an evil ant-hater who tramples ants out of malice, but if you're in charge of a hydroelectric green-energy project and there's an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

WHY THE RECENT INTEREST IN AI SAFETY?

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology, along with many leading AI researchers, have recently expressed concern in the press and in open letters about the dangers of AI. Why has this subject suddenly made headlines?

The idea that the quest for general AI might succeed was long thought of as science fiction, or at least centuries away. However, thanks to recent breakthroughs, AI milestones that experts considered decades away only five years ago have already been reached, making many experts take the possibility seriously. While some experts still guess that human-level AI is centuries away, most AI researchers at a 2015 conference in Puerto Rico guessed that it would happen before 2060. Since the required safety research may itself take decades to complete, it is prudent to start it now.

Because AI has the potential to become smarter than any human, we have no surefire way of predicting how it will behave. Nor can we build on past experience, because we've never, knowingly or unknowingly, created anything with the ability to outsmart us. The best example of what we might face may be our own evolution. Humans now rule the planet, not because we're the biggest, strongest, or fastest, but because we're the smartest. If we're no longer the smartest, can we be sure we'll remain in control?

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, the best way to win that race is not to impede technological progress, but to accelerate our wisdom by supporting AI safety research.

MISCONCEPTIONS ABOUT SUPERINTELLIGENT AI

There is a fascinating conversation going on about the future of artificial intelligence and what it will or should mean for humanity. The world's leading experts disagree on questions such as AI's future impact on the job market, whether and when human-level AI will be developed, whether that will lead to an intelligence explosion, and whether this is something we should welcome or fear. But there are also many tedious pseudo-controversies caused by people misunderstanding one another and talking past each other. To help focus on the more informative and interesting questions, we've put together some of the most common misconceptions.

MISCONCEPTIONS ABOUT AI TIMELINES

How long will it take for machines to surpass human-level intelligence? A common misconception is that we know the answer with great certainty.

One common misconception is the conviction that we will get superhuman artificial intelligence this century. In fact, history is full of technological over-hyping: where are the fusion power plants and flying cars we were promised we'd have by now? AI has also been repeatedly over-hyped, even by some of the founders of the field. For example, John McCarthy (who coined the term "artificial intelligence"), Marvin Minsky, Nathaniel Rochester, and Claude Shannon wrote an overly optimistic forecast of what could be accomplished with the primitive computers of their era, proposing to make machines "solve kinds of problems now reserved for humans, and improve themselves," and adding: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

On the other hand, a common counter-misconception is the conviction that we will never get superhuman AI. Researchers have made a wide range of estimates for how long the development of superhuman AI will take, but given the dismal track record of such skeptical predictions, we certainly can't say with confidence that the probability of developing it this century is zero. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, dismissed nuclear energy as "moonshine" less than 24 hours before Leo Szilard conceived of the nuclear chain reaction, and Astronomer Royal Richard Woolley called interstellar travel "utter bilge" in 1956. The most extreme form of this misconception is that superhuman AI is physically impossible. But physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there is no law of physics preventing us from building even more intelligent arrangements of quarks.

Surveys have asked AI researchers how many years from now they think human-level AI will be developed with at least 50% probability, and they all reach the same conclusion: the world's leading experts disagree, so we simply don't know. For example, in one such poll of AI researchers, the median answer was the year 2045, while some researchers guessed hundreds of years or more.

A related misconception is that people who worry about superhuman AI believe it is only a few years away. In fact, most people on record who worry about superhuman AI guess that it is still at least decades away. But they argue that as long as we're not 100% sure it won't happen this century, it is wise to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so demanding and difficult that they may take decades to solve, so it is more prudent to start researching them now than the night before the risk materializes.

MISCONCEPTIONS ABOUT CONTROVERSY

Another common misconception is that everyone who voices concern about AI or supports safety research is a Luddite who doesn't know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this claim during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is itself controversial. In fact, supporting AI safety research doesn't require believing that the risks are high, only that they are non-negligible, just as buying fire insurance is justified by the non-negligible possibility that your house may catch fire.

Media coverage can make the debate about AI safety seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes get more clicks than balanced, factual ones. As a result, two people who know each other's positions only through media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who read about Bill Gates's position only in a British tabloid might mistakenly believe that Gates thinks superintelligence is imminent. Similarly, an advocate of beneficial AI who knows nothing of Andrew Ng's position except his quote about overpopulation on Mars might mistakenly assume he doesn't care about AI safety; in fact he does, but because his timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MISCONCEPTIONS ABOUT THE DANGERS OF SUPERHUMAN AI

Headlines warning that the rise of robots will doom mankind are innumerable. Typically accompanied by an illustration of an evil-looking robot, such articles encourage us to worry about robots becoming conscious and evil, rising up, and killing us. On a lighter note, these articles are actually rather impressive, because they concisely summarize the scenario that AI researchers don't worry about. That scenario combines three separate misconceptions: concern about consciousness, about evil, and about robots.

When we drive down the road, we have a subjective experience of colors and sounds. But does a self-driving car have any subjective experience at all? Although this mystery of consciousness is interesting in its own right, it is irrelevant to the dangers of AI. What affects humans is what a superhuman AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. What we really need to worry about is AI's competence, not its malice. A superhuman AI is by definition very good at accomplishing its goals, whatever they may be, so we need to ensure those goals are aligned with ours. We don't usually hate ants, but we're smarter than they are, so if we want to build a hydroelectric dam and there's an anthill on the site, too bad for the ants. The goal of the effort to achieve beneficial AI is to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines cannot have goals. But a machine can have a goal in the narrow sense of clearly exhibiting goal-directed behavior, just as a heat-seeking missile pursues the goal of hitting a target. So if a machine with misaligned goals threatens you, it is precisely its misaligned goals that are the problem, not whether the machine is conscious. If a heat-seeking missile were chasing you, you probably wouldn't exclaim, "I'm not worried, because machines can't have goals!"
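As a purely illustrative sketch (the names and numbers here are assumptions of this text, not from any real system), a few lines of code are enough to produce goal-directed behavior with no consciousness anywhere in sight: the toy seeker below heads for its target simply because its update rule reduces the distance to it.

    # A toy "heat-seeker": goal-directed behavior from a plain update rule.
    # Positions are complex numbers, so 2-D vector math comes for free.

    def seek(position: complex, target: complex, speed: float = 1.0) -> complex:
        """Take one step of length `speed` straight toward the target."""
        offset = target - position
        distance = abs(offset)
        if distance <= speed:
            return target  # close enough: the 'goal' is attained
        return position + offset / distance * speed

    pos, target = 0 + 0j, 10 + 5j
    while pos != target:
        pos = seek(pos, target)
    print("target reached at", pos)  # no consciousness required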

Because some reporters are obsessed with robots, adorning their articles with evil metal monsters, Rodney Brooks and other robotics pioneers are sometimes unfairly demonized. In fact, the main concern isn't robots but intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection, which might enable it to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we can't even understand. Even if building robots were physically impossible, a superintelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines cannot control humans. But intelligence enables control: we control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as the smartest beings on our planet, we may also cede our ability to control.

THE INTERESTING DEBATES ABOUT ARTIFICIAL INTELLIGENCE

Not wasting time on the misconceptions above lets us focus on the true and interesting controversies. What kind of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation by machines? What career advice would you give today's children? Do you prefer new jobs replacing the old ones, or a jobless society where everyone can live a prosperous life? Further down the road, would you like us to create superintelligence and spread it through the universe? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future that way? If these questions matter to you, join the conversation now!
