Limitations and Future of Current-Generation AI

The heyday of AI

Interest in artificial intelligence (AI) has not subsided since AlphaGo shocked the world in 2016 by defeating Lee Sedol 9-dan at Go, four wins to one loss.

AI startups whose business models are not yet clear attract huge investments, and some people predict that human jobs will disappear once AI develops further and strong AI, or artificial general intelligence (AGI), emerges.

In fact, the event that convinced researchers in this field that AI might even outperform humans was probably the 2012 ILSVRC (ImageNet Large Scale Visual Recognition Challenge) rather than AlphaGo.

This contest asks AI models to classify more than a million images from the ImageNet dataset into 1,000 categories. The 2012 edition is the well-known case: the winning model, AlexNet, used deep learning to cut the error rate far below that of its competitors.

Moreover, SENet, the winner of the final competition in 2017, achieved an error rate of 2.3%, comfortably surpassing human-level performance (an error rate of about 5%). Around this period, many of the performance and optimization techniques still in use today, such as matrix operations on GPUs, Dropout, and ReLU, had already been proposed.

It was also from this period that the CNN (Convolutional Neural Network) architecture adopted by AlexNet came into wide use across many fields; the first version of AlphaGo used CNNs for its policy network and value network. In that sense, AlexNet unintentionally contributed to the birth of AlphaGo.
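As a rough illustration of the convolution operation at the heart of a CNN, the sketch below applies a single hand-written 3×3 edge-detection kernel to a tiny synthetic image. This is a minimal sketch of the operation only; a real CNN such as AlexNet learns many such kernels from data rather than having them written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is a weighted sum of one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a vertical edge: left half dark, right half bright.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge kernel (Sobel-like); a CNN would *learn* these weights.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, kernel)
# The feature map responds most strongly where the edge lies.
```

Stacking many learned kernels, with nonlinearities and pooling between them, is what lets a CNN build up from edges to textures to whole objects.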

Since then, very large models such as GPT-3 have been tried in natural language processing (NLP), and AI is rapidly expanding into tasks once done entirely by hand. Nevertheless, there still seem to be many misconceptions about AI.

AI is not a magic box that solves everything

I recently consulted for a company looking to introduce AI, and I explained that the first priority is to obtain sufficient, well-refined data for modeling.

However, the person in charge asked whether AI could not simply perform the desired task without such data, or whether unsupervised learning or AutoML could be used, since the technology has advanced so much recently.

Of course, there are areas where methods that learn without labeled data, such as reinforcement learning with DQN (Deep Q-Network), can produce results, but it was difficult to apply such methods to this domain.

AutoML is not a methodology that can learn without data; rather, it automates the ML pipeline and searches for hyperparameters to improve performance. The conversation moved on, but when we reached model selection, another explanation was needed.
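To make concrete what "searching for hyperparameters" means, the toy sketch below grid-searches the learning rate of a one-parameter gradient-descent fit and keeps the setting with the lowest final loss. The objective and candidate values are hypothetical; AutoML tools automate this kind of loop, plus model and feature selection, at a much larger scale.

```python
import itertools

def train(learning_rate, steps=50):
    """Fit w to minimize (w - 3)^2 with plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= learning_rate * grad
    return (w - 3) ** 2  # final loss after training

# AutoML-style search: try every candidate, keep the best performer.
grid = {"learning_rate": [0.001, 0.01, 0.1, 0.5]}
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: train(**params),
)
```

Note that no step of this loop conjures data out of nothing: every candidate is evaluated by training on whatever data (here, a fixed toy objective) is available.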

I suggested using GBM (Gradient Boosting Machine), a classic machine learning model, because the benefit of deep learning for their dataset was unclear. The person in charge then asked why I was not using deep learning, since it is supposedly the most powerful of all machine learning models.
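For intuition about what a GBM does: gradient boosting fits a sequence of weak learners, each one trained on the residual errors of the ensemble so far. The sketch below boosts depth-1 regression stumps on a hypothetical toy dataset; it is a minimal illustration of the idea, not the tuned library implementations (such as scikit-learn, XGBoost, or LightGBM) one would use in practice.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split (depth-1) regression tree on 1-D input."""
    best = None
    for threshold in x:
        left, right = residual[x <= threshold], residual[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= threshold, left.mean(), right.mean())
        err = np.sum((residual - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, threshold, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def gbm_fit(x, y, n_rounds=20, lr=0.3):
    """Boosting for squared loss: each stump fits the current residuals."""
    stumps, pred = [], np.zeros_like(y)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)  # residual = negative gradient
        pred = pred + lr * stump(x)
        stumps.append(stump)
    return lambda q: lr * sum(s(q) for s in stumps)

x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([0., 0., 0., 1., 1., 1.])  # a simple step function
model = gbm_fit(x, y)
```

On small or medium tabular datasets like this, a well-tuned boosted-tree model is often hard for a deep network to beat, which is why "deep learning is always superior" is a misconception.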

This example highlights the misconceptions about AI that permeate the industry today. Even people in charge who have never applied AI or ML directly have heard a great deal about headline systems such as AlphaGo and GPT-3, and sometimes pick up fragments of AI knowledge from internal study groups or external training.

Also, since AI is so popular, most startups and companies include AI or ML in their business plans. With large-scale NLP models in the spotlight recently, NLP often finds its way into presentation materials as well.

They seem to hold the unrealistic expectation that introducing AI will make every business problem disappear, as if they had opened a magic box. Unfortunately, the current generation of AI is far from such a magic box.

It is clear that super-large models such as GPT-3 have achieved great results, but they are still a considerable distance from strong AI, or artificial general intelligence, that could replace humans.

What is the current generation of AI?

In fact, AI as represented by deep learning owes its current flowering to advances in hardware. Most of the important theory of artificial neural networks had already been published by the 1980s, but the hardware of the time could not produce meaningful results, and researchers had to be satisfied with simulating the neural networks of living things.

After the 2000s, as cloud computing became mainstream and the idea of using GPUs for matrix operations was adopted, the computing power available to researchers surged, and approaches that had been impossible before began to be attempted. As this research advanced, AI applications spread into areas previously considered impossible to automate, such as video, audio, and translation.

However, looking at the essentials, there is not much difference between today's deep neural networks and classic machine learning. A single layer of a neural network is the linear regression model that has been studied for a long time (a DNN is simply a stack of three or more such layers), and it learns by treating the difference between the target value and the estimate as a loss and reducing that loss: the same gradient descent algorithm used in linear regression.

In other words, computing power has increased, but the underlying learning method, estimating a dependent variable (y) from independent variables (X, the features), has not changed at all. Reinforcement learning defines its loss differently, but the goal of reducing a loss is the same. And although super-large NLP models appear one after another, opinions differ on whether they really understand the context of the speaker's language.
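The shared core of the classic and deep approaches fits in a few lines. The sketch below fits y ≈ wx + b by gradient descent on the squared loss; stacked many layers deep with nonlinearities in between, this same update rule is what trains a neural network. The data here is synthetic, generated from a known line so the fit can be checked.

```python
import numpy as np

# Synthetic data from y = 2x + 1 (the model should recover w≈2, b≈1).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    y_hat = w * X + b                      # forward pass (one linear "layer")
    loss = np.mean((y_hat - y) ** 2)       # squared loss vs. the target
    grad_w = np.mean(2 * (y_hat - y) * X)  # dL/dw
    grad_b = np.mean(2 * (y_hat - y))      # dL/db
    w -= lr * grad_w                       # gradient descent step
    b -= lr * grad_b
```

Whether the model has two parameters or billions, the loop is the same: predict, measure the loss, follow the gradient downhill.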

For example, there is a famous anecdote in which GPT-3, asked how many eyes the sun has, answered "one." Humans know, as common sense, that the sun is not a living organism and of course has no eyes; GPT-3 has no such common sense, so it returned "one," the answer it judged most probable within the range of what it had learned.

Therefore, to put it a little harshly, GPT-3, however powerful, is just a probability machine that returns the most plausible answer within the bounds of the data it was trained on. On the other hand, since human intelligence is also built on experience and learning, one can debate how different GPT-3 really is from human intelligence.

The long debate over John Searle's Chinese Room argument shows that there is still no clear conclusion on this topic. But even if one grants that human-like consciousness could, in principle, be implemented along the lines of GPT-3, it is clear that a huge model, an enormous amount of data, and a long training time would be required.

In summary, the development of AI in recent decades has been remarkable, but AGI remains a long way off, and much remains to be studied. After all, we still do not really know how our own intelligence works.

What AI can do

However, it is hard to claim that the current AI craze is all exaggeration with no real effect. Even if AGI remains out of reach for the time being, AI well trained in a specific field can significantly reduce repetitive human work and deliver real business benefits.

AI has already surpassed humans at image recognition, and machine translation is much smoother than before. Even without AGI, AI that performs only a specific task in a specific area can work far more intelligently than earlier systems and greatly reduce human workload.

Speech-to-text conversion, sentence summarization, document sentiment analysis, and translation are such tasks: some errors can be tolerated, and the work is repetitive for humans. In these areas, specialized AI can produce results and boost productivity, but human intervention is still inevitable where content needs final polishing or decisions must be made from the results. The role of AI here, then, is to assist people and increase efficiency.

Also, short of AGI, AI cannot do anything humans cannot do. However, when human work is entrusted to AI, performance can improve further, because AI does the work faster and more consistently, without tiring.

It is certainly surprising that AlphaGo beat top professional Go players. But simplifying the mechanism a bit, AlphaGo repeats the task of playing the move with the highest probability of winning, based on the experience gained from playing an enormous number of games.

If a person could accumulate a similar amount of experience at Go, they might reach a playing strength comparable to AlphaGo's. But people cannot play that many games in a short period, so AlphaGo's style of learning is impossible for humans.

Therefore, just as it is more efficient to use Excel or a calculator than to do arithmetic in one's head, it is right to leave repeatable tasks to AI. We have only glimpsed the possibilities so far; the scope of AI's use will keep growing.

The future of AI

In conclusion, AGI is still far from our reach, and it will be difficult for AI to completely replace human work for the time being. Today's AI exists because data scientists gained access to computing power that was unavailable before and developed the field through countless experiments.

Of course, as new structures such as CNNs, RNNs, LSTMs, GRUs, and Transformers have been proposed, AI can handle far more tasks than in the era of learning with a single-layer perceptron.

Fundamentally, however, the approach of learning toward a goal by minimizing a loss, inherited from linear regression, has not changed, so we still use AI only within the scope of a specific task, and retraining is inevitable whenever a new task appears.

This can be called a limitation of the current generation of AI. Because of these limitations, some forecast that AI will enter a third winter; the difference from the previous two winters, though, is that applications of AI are now actually benefiting people.

In that respect, Professor Andrew Ng's remark that AI has entered an eternal spring after two winters also rings true. Many companies will look for opportunities in AI, and there are countless areas where it can be applied.

However, neither the attitude that AI can do everything nor the attitude that AI is useless holds up. AI can clearly do many things and has developed far beyond what came before, but it is reasonable to approach it pragmatically, as a tool that helps humans, rather than with unrealistic expectations.

Rather than adopting AI aimlessly to follow trends, it is best to first determine where AI can be applied in one's own domain, and to approach it with the goal of reducing human work until AGI arrives.