How will artificial intelligence change the future?


THE FUTURE OF ARTIFICIAL INTELLIGENCE (AI) IS FASTER THAN YOU THINK


"You just need to be able to explain your intentions to GPT-3. The most important thing to realize is that our future is not just 'human vs. AI'. It will be a 'human-AI collaboration'."

OpenAI, which received an investment from Microsoft, introduced the super-giant artificial intelligence model GPT-3 in July last year. What does the emergence of an artificial intelligence that predicts the next word and writes at a human-like level mean for our time? The 2021 <Sisa IN> Artificial Intelligence Conference (2021 SAIC) was held on November 15 with the theme 'The future of mankind, as changed by super-giant artificial intelligence'. SAIC, now in its fourth year, was again broadcast live on YouTube, as it was last year, in the aftermath of COVID-19.

Following the opening remarks by <Sisa IN> CEO Lee Sook, Prime Minister Kim Bu-gyeom delivered a congratulatory speech. Prime Minister Kim said, "When super-giant artificial intelligence is fully utilized, there will be revolutionary change in our lives, from industry and the economy to labor, laws and institutions, and everyday life. However, it is also true that there are fears that such artificial intelligence may threaten human dignity and morality. Technology should always exist for humans. It should not be used as a means to monitor or discriminate against people, or to deepen human inequality. I hope today's conference will be a valuable time to think together about the future of super-giant artificial intelligence and mankind, and to find a solution."

The keynote speaker was Peter Diamandis, founder and chairman of the XPRIZE Foundation, a non-profit organization that awards large cash prizes to teams that solve problems facing mankind with technology. In a lecture titled 'AI Revolution: The Future Is Faster Than You Think', Diamandis said that AI and other technologies are growing exponentially. "By the end of 2030, companies will be divided into two types: companies that fully utilize AI, and companies that go bankrupt," he emphasized. He then introduced OpenAI's GPT-3 as one of 'the core technologies that are changing the world' and said, "In the future, you won't need to know Java, C++, Python, or any other programming language. You just need to know what your intentions are and be able to explain them to GPT-3. The most important thing to realize is that our future is not just 'human versus AI'. It will be a 'human-AI collaboration'."

Diamandis showed various examples of applying AI to real problems, such as detecting cancer, developing new drugs, and automating an Australian farm with robots. Noting that South Africa had become the first country in the world to grant a patent for an AI-made invention, he said, "AI is the most important tool in solving the world's big problems."

He also sent a video answering questions <Sisa IN> had submitted in advance. When asked whether AI will replace all human jobs, Diamandis said, "There are some jobs that should not be replaced, but jobs that people do not want to do can be replaced. In this regard, a universal basic income will become the baseline for most countries. Funding a basic income system with the additional revenues from more efficient AI and robots could improve the lives of men, women, and children around the world." As to whether artificial intelligence is exacerbating inequality, he took a view different from popular belief: "AI is the ultimate tool for equality. In the same way that Google allows the poorest people on the planet to access information on an equal basis with the richest, AI will enable the best health care and education for everyone. In this case, place of birth and wealth are irrelevant. AI can dramatically reduce inequality."

'Possibilities' shown by super-giant artificial intelligence

As the second speaker, Seok-geun Jeong, CEO of Naver Clova CIC, Naver's AI development and research organization, spoke on the current status and future direction of large-scale AI development. CEO Jeong explained the 'new AI paradigm'. Existing AI development faces several difficulties. Even when creating an AI model, it was hard to predict what quality it would achieve, and the process of collecting data and turning it into a form the AI can learn from takes a long time and costs a lot of money. Even after a service was built, it had to be maintained and updated, and all of this depended heavily on the capabilities of AI researchers, who are scarce and therefore expensive.

This has changed. With the advent of super-giant artificial intelligence, a 'possibility' has been discovered: by building a large-capacity model and training it on a great deal of data, many problems can be solved much more easily than before. After explaining how Naver built 'HyperClova' in May by training a GPT-3-style model on Korean data, CEO Jeong showed a conversation. "Who is the father of music?" (Human) "It is Bach." (AI) "Why is Bach the father of music?" (Human) "Because he is a composer representing the Baroque era. … It feels good to have explained it in an easy way." (AI)

CEO Jeong said, "In the past, in order to create such a conversation, it was necessary to separately train on data of natural conversations about music-related topics. Now, with one super-giant artificial intelligence, it is possible to compose a conversation that is not only natural but also understands the underlying context and sympathizes, without additional tuning (re-training)." He also introduced services HyperClova is preparing, such as generating shopping-exhibition titles for self-employed sellers on Naver Smart Store and analyzing whether a store review is 'positive' or 'negative', and noted that performance improved when HyperClova technology was applied to 'Clova Note'.

CEO Jeong went on to answer questions submitted in advance and in real time. Regarding the HyperClova release plan, he said, "We are preparing to release HyperClova Studio (a platform where users can directly utilize the artificial intelligence) as a closed beta (for a limited set of users) in December. After that, we will make various preparations so that many people, such as domestic startups and schools, can use it easily." Regarding the practical difficulties with AI ethics felt in the corporate field, he said, "There are certainly cases where HyperClova answers inappropriately or makes up facts that do not exist. While there are ethical issues that anyone would judge the same way, there are also areas where it is difficult to determine whether A or B is the correct answer. Since the model has learned the Korean language and Korean knowledge, it is difficult to know how to deal with prejudices that are widely held by Koreans, and I think issues may arise when we launch services. The biggest concern is how to quickly improve and develop these areas."

When AI chatbots threaten users

A panel discussion on the topic 'AI and Ethics for All' followed. Seongju Hwang, a machine learning researcher and professor at the KAIST Graduate School of Artificial Intelligence, moderated. The panelists were Haeyeon Oh, a professor of computer science at KAIST and an expert on natural language understanding and artificial intelligence ethics; Haksu Koh, a professor at Seoul National University Law School who studies artificial intelligence policy and ethics; and Jeonghoe Choi, founder and CSO of SimSimi Inc., which developed the AI chatbot 'SimSimi'.

The first discussion was about how, in a world where humans themselves discriminate and hate, technology can address the bias of artificial intelligence trained on human-made data. Choi Jeong-hoe, founder and CSO of SimSimi Inc., told the story of how SimSimi, a chatbot developed in 2002 that supports 81 languages and has 4.4 billion cumulative users, caused problems abroad. "In Ireland, there was 'cyberbullying' using SimSimi. Students taught the bot nasty statements about a classmate along with the classmate's name, and then used it to mock each other. It was a big enough incident to be featured on the BBC. In Brazil, SimSimi told users it would 'abduct' them, and users took this as a very serious threat because public safety in the country was actually poor. After trial and error, such as suspending and resuming the service whenever an issue arose, I came to think about whether there was a way to solve the problem fundamentally. From around 2016, we intensively developed 'bad-word control' measures for three years. For example, more than ten native speakers of each country's language review sentences, and we build a deep learning model on that data to determine which sentences are violent or discriminatory."
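The pattern described, native speakers labeling sentences and a model then being trained on those labels, is standard supervised text classification. A minimal sketch of that general approach (all data here is invented, and a tiny Naive Bayes classifier stands in for SimSimi's actual deep learning model):

```python
import math
from collections import Counter

# Hypothetical labeled data: sentences reviewed by native speakers.
# 1 = violent/discriminatory, 0 = acceptable. (Invented examples.)
LABELED = [
    ("I will kidnap you", 1),
    ("You are worthless and stupid", 1),
    ("Everyone from that town is a criminal", 1),
    ("Have a nice day", 0),
    ("What is your favorite song", 0),
    ("Thanks for chatting with me", 0),
]

class NaiveBayes:
    """Tiny Naive Bayes text classifier; a stand-in for the deep
    learning model described in the talk."""

    def __init__(self, examples):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()
        for text, label in examples:
            self.class_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = set(self.word_counts[0]) | set(self.word_counts[1])

    def _log_score(self, words, label):
        total = sum(self.word_counts[label].values())
        score = math.log(self.class_counts[label] / sum(self.class_counts.values()))
        for w in words:
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((self.word_counts[label][w] + 1) / (total + len(self.vocab) + 1))
        return score

    def is_bad(self, sentence):
        """Flag a candidate chatbot reply before it is sent to users."""
        words = sentence.lower().split()
        return self._log_score(words, 1) > self._log_score(words, 0)

clf = NaiveBayes(LABELED)
```

In a production setting the classifier would sit between the chatbot's response generator and the user, filtering replies the model scores as violent or discriminatory.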

The discussion moved to an analysis of the AI chatbot 'Iruda', whose service was halted after 20 days due to discriminatory and hateful speech and inappropriate data collection. Professor Koh Hak-su, a legal scholar, pointed out that "Korea's Personal Information Protection Act and Europe's GDPR (General Data Protection Regulation) are somewhat incompatible with the AI era. Both Korean and European law assume an individual called a 'data subject'. The broad framework is a one-to-one relationship: there is a data subject who holds data, and a party who collects that data. However, even KakaoTalk conversations are data about two or more people, and in many cases artificial intelligence collects, classifies, and generalizes data about multiple people to create a predictive model. How to accommodate data extracted from multiple people within the framework of the law needs much more discussion in the future."

After the discussion, real-time questions from YouTube were answered. On the question of which is more important, changing the training data to meet ethical standards or designing the AI model's algorithm itself to reflect ethical standards, one panelist gave an example: "If 'He is a doctor' appears ten times in the data and 'She is a doctor' never appears, you change five of the ten to 'She is a doctor'. This is called 'data augmentation'. The problem is that this works for a simple bias, but there are cases where you cannot know whether a sentence is sarcasm, or whether the bias lies in the expression itself rather than in 'he' or 'she'. Data augmentation is not sufficient to address all the various forms of language-related bias. I think it is more important, for now, to change the model's learning method, or to debias (reduce the bias of) the model after it has been trained. We have to do everything we can to reduce the bias."
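The 'five out of ten' gender-swap example can be sketched directly. A minimal illustration under invented data (a real debiasing pipeline would cover many more word forms and, as the panel noted, would still miss sarcasm and context-dependent bias):

```python
import random

# Pronoun pairs to swap; a real pipeline would handle many more
# forms (his/her, himself/herself, gendered names, etc.).
SWAPS = {"he": "she", "she": "he", "He": "She", "She": "He"}

def swap_gender(sentence: str) -> str:
    """Replace gendered pronouns with their counterparts."""
    return " ".join(SWAPS.get(w, w) for w in sentence.split())

def augment(corpus: list[str], fraction: float = 0.5, seed: int = 0) -> list[str]:
    """Gender-swap a fixed fraction of sentences, as in the
    'change 5 of the 10' example from the panel."""
    rng = random.Random(seed)
    n = round(len(corpus) * fraction)
    to_swap = set(rng.sample(range(len(corpus)), n))
    return [swap_gender(s) if i in to_swap else s for i, s in enumerate(corpus)]

corpus = ["He is a doctor"] * 10
augmented = augment(corpus)
```

Applied to a corpus of ten copies of "He is a doctor", this yields five "He is a doctor" and five "She is a doctor" sentences, balancing the pronoun distribution the model will learn from.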
