Everyone is excited about artificial intelligence, and machine learning technology has made great progress. But because the technology is still in the early stages of its development, we may need to temper our enthusiasm.
The value of artificial intelligence can already be seen across a wide range of industries, including marketing, sales, business operations, insurance, banking, and finance. In short, it is well suited to all kinds of business activities, from managing human capital and analyzing staff performance to recruitment. Its potential runs through the core of the entire business ecosystem, and its value to the global economy could reach trillions of dollars.
Sometimes we forget that AI is still a work in progress. Because the technology is in its infancy, there are limitations we must overcome before we enter the brave new world of AI.
In a recent podcast from the McKinsey Global Institute, which analyzes the global economy, partner Michael Chui and chairman James Manyika discussed the limitations of artificial intelligence and how those limitations might be mitigated.
Factors limiting AI’s potential
Manyika noted that some of AI’s limitations are “purely technical,” such as how to explain what an algorithm is doing and why it makes the choices, produces the results, and offers the predictions it does. Then there are practical limitations involving data and how it is used.
He explained that during the learning process we provide the computer with data; we are not just programming it, but training it. “We are teaching them,” he said. We train machines by feeding them labeled data. To teach a machine to identify an object in a photo, or to spot a variance in a data stream that may signal an impending breakdown, we supply large amounts of labeled data that says, in effect, “in this batch of data the machine is about to break down, and in this batch it is not,” and the computer learns to determine when a breakdown is coming.
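The labeled-data idea can be illustrated with a toy sketch. Everything here is hypothetical: made-up vibration readings stand in for real sensor data, and a simple nearest-mean rule stands in for a real learning algorithm.

```python
# Hypothetical labeled sensor readings: (vibration_level, label).
# "fail" marks batches recorded shortly before a breakdown.
labeled = [(0.20, "ok"), (0.25, "ok"), (0.30, "ok"),
           (0.90, "fail"), (1.00, "fail"), (1.10, "fail")]

def train(data):
    # "Training" here just learns the mean reading for each label.
    means = {}
    for label in ("ok", "fail"):
        values = [x for x, y in data if y == label]
        means[label] = sum(values) / len(values)
    return means

def predict(means, reading):
    # Assign the label whose learned mean is closest to the reading.
    return min(means, key=lambda label: abs(means[label] - reading))

model = train(labeled)
print(predict(model, 0.95))  # a high reading classifies as "fail"
print(predict(model, 0.22))  # a low reading classifies as "ok"
```

The point is that the machine is never told a rule like "high vibration means failure"; it infers the boundary entirely from the labels humans attached to the examples.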
Chui identified five AI limitations that must be overcome. The first is data labeling. Today, he explained, much of it is done by people: for example, people browse traffic photos and mark the cars and lane markings, creating the labeled data that self-driving systems use to build the algorithms needed to drive a car.
Manyika noted that he had met students who went to public libraries to label artworks so that algorithms could be created for computers to make predictions. In the United Kingdom, for example, a group of people labeled images of dogs of different breeds, creating the data behind algorithms that let a computer recognize a breed from a picture.
He noted that the same process is being used in medicine. People are labeling images of different kinds of tumors so that when a computer scans a new image, it can recognize what is a tumor and what type of tumor it is.
The problem is that teaching a computer requires enormous amounts of data. The challenge is to find ways for computers to work through labeled data faster.
One set of tools now in use is generative adversarial networks (GANs). A GAN pits two networks against each other: one generates candidate output, and the other judges whether that output looks like the real thing. The two networks compete, pushing the generator to produce ever more convincing results. This technique allows a computer to generate artwork in the style of a particular artist, or architecture in the style of buildings it has observed.
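The adversarial idea can be caricatured in one dimension. This is emphatically not a real GAN (no neural networks, no gradients); it is just a "generator" with a single parameter and a "discriminator" that keeps a running estimate of what real samples look like, each updating in response to the other.

```python
import random

random.seed(42)

REAL_MEAN = 5.0   # the "real data" the generator never sees directly
gen_mean = 0.0    # generator's single parameter, starts far from the truth
real_estimate = 0.0  # discriminator's evolving picture of "real"

for step in range(500):
    real_sample = random.gauss(REAL_MEAN, 0.5)
    fake_sample = random.gauss(gen_mean, 0.5)
    # Discriminator update: refine its estimate of real samples.
    real_estimate += 0.05 * (real_sample - real_estimate)
    # Generator update: it only sees the discriminator's verdict on its
    # sample, and nudges its parameter in the direction judged "more real".
    gen_mean += 0.05 * (1.0 if real_estimate > fake_sample else -1.0)

print(round(gen_mean, 1))  # ends up close to REAL_MEAN
```

The contest drives the generator toward producing samples the discriminator can no longer tell apart from real ones, which is the essence of the GAN setup (real GANs do this with two neural networks trained by backpropagation).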
Manyika noted that people are also experimenting with other machine learning techniques. For example, researchers at Microsoft Research Labs are developing in-stream labeling, a process in which data is labeled through how it is used; in other words, the computer tries to interpret data based on the context of its use. Although in-stream labeling has been around for a while, it has made great strides recently. Nonetheless, Manyika believes data labeling remains a limitation that needs further work.
Another limitation of AI is insufficient data. To address it, companies developing AI systems have had to spend years collecting data. To shorten that time, companies are turning to simulated environments. Building a simulation inside a computer lets you run far more experiments, so the system can learn much more, much faster.
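A tiny sketch shows why simulation helps with data scarcity. The failure model below is entirely made up (vibration drifting upward until a threshold is crossed), but it generates thousands of labeled samples in milliseconds instead of waiting for real machines to wear out.

```python
import random

random.seed(7)

def simulate_run(hours=100):
    """One synthetic machine run: vibration drifts upward over time,
    and readings above a failure threshold are labeled 'fail'."""
    readings = []
    vibration = 0.2
    for _ in range(hours):
        vibration += random.uniform(0.0, 0.02)  # gradual simulated wear
        label = "fail" if vibration > 1.0 else "ok"
        readings.append((round(vibration, 3), label))
    return readings

# 50 simulated runs yield a labeled dataset immediately, rather than
# after years of monitoring 50 real machines.
dataset = [sample for run in (simulate_run() for _ in range(50))
           for sample in run]
print(len(dataset))  # 5000 labeled samples
```

The obvious caveat, which applies to real simulated environments too, is that the model is only as useful as the simulator is faithful to reality.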
Another problem is explaining why a computer decided what it did. This is called interpretability, and it matters to regulators who may scrutinize algorithmic decisions. For example, if an algorithm recommends that one person be released from prison on bail while another is not, the people affected will want to know why. One can try to explain such a decision, but it is genuinely difficult.
Chui explained that a technique is being developed to provide such explanations. Called LIME, for Local Interpretable Model-agnostic Explanations, it involves perturbing parts of a model’s input and observing whether the output changes. For example, suppose a model is looking at a photo and deciding whether the object is a pickup truck or an ordinary car. By altering different regions of the image, say the truck’s rear bed or the car’s windshield, and checking which alterations flip the prediction, you can see which parts of the image the model relies on to make its decision. In effect, the model is experimented upon to determine which factors make a difference.
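The perturb-and-observe intuition can be sketched on a made-up model. This is not the actual LIME algorithm (which perturbs raw inputs and fits a local linear surrogate model to the results); it is a hypothetical rule-based classifier over hand-named features, used only to show how flipping one input at a time exposes what drives a prediction.

```python
# Toy stand-in for an image model: a rule over hand-named features.
def model(features):
    return "pickup" if features["open_cargo_bed"] else "car"

baseline = {"open_cargo_bed": True, "windshield": "standard", "wheels": 4}
base_prediction = model(baseline)

influential = []
for name, value in baseline.items():
    perturbed = dict(baseline)
    # Perturb one feature at a time and re-query the model.
    perturbed[name] = (not value) if isinstance(value, bool) else "altered"
    if model(perturbed) != base_prediction:
        influential.append(name)

print(influential)  # only the cargo bed actually flips the decision
```

Changing the windshield or the wheels leaves the prediction untouched, so the probe reveals that the model’s “pickup” verdict rests entirely on the cargo bed, exactly the kind of insight an explanation technique is meant to surface.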
Finally, biased data also limits AI. If the data going into the computer is biased, the results coming out will be biased. For example, we know that some neighborhoods are policed more heavily than others. If a computer is asked to determine where a police presence would best limit crime, and its data comes from one heavily policed neighborhood and one lightly policed neighborhood, it will recommend sending police where it has the most data and overlook neighborhoods for which it has little or none. Oversampled neighborhoods can thus lead to incorrect conclusions, and relying on AI can mean relying on the biases inherent in its data. The challenge, therefore, is to find ways to “de-bias” the data.
So just as we recognize the potential of AI, we must also recognize its limitations. Fortunately, AI researchers are working hard on these problems, and because the field moves so quickly, some things considered hard limits a few years ago no longer are. That is why it pays to keep checking with AI researchers on where things stand today.