Humans Needed

AI is smarter than humans only in the very specific tasks it was designed to do (by humans)

We have all heard the scary AI stories: new technologies capable of doing things faster, better and cheaper than we can will soon take over and render our jobs useless. And the truth is we are surrounded by AI technology; more and more, we depend on it. Just imagine your inbox without a spam filter. Now THAT is scary. But most AI is still at a very early stage of development. It makes mistakes and is still learning. That is... humans are still learning: how to code it, how to maintain it and, ultimately, how to improve it.

The giraffe dilemma

You don't see a giraffe every day. So the moment you do, at the zoo or on a safari, chances are you will snap a picture. This means there are far more pictures of giraffes in the world than there are giraffes.

When you train an AI algorithm, you provide it with training data: the information you want it to "learn" before it performs the task you expect it to do. For an image recognition algorithm, the training data consists of large numbers of images. The more giraffes it sees while studying, the more likely it is to believe that giraffes are everywhere.

The moment you put it to the test, this algorithm will probably "see" giraffes even in images where a human can clearly see there are none, because it learned that giraffes are a common occurrence.
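To make that intuition concrete, here is a minimal, hypothetical sketch in Python (toy numbers and made-up labels, not a real image model): it weights a classifier's evidence for each label by the class frequencies learned from a giraffe-heavy training set, so an ambiguous photo ends up labeled "giraffe".

```python
# Minimal sketch with hypothetical toy data (not a real image model):
# if "giraffe" dominates the training set, a model that leans on class
# frequencies will answer "giraffe" for ambiguous inputs.
from collections import Counter

training_labels = ["giraffe"] * 900 + ["horse"] * 60 + ["camel"] * 40

priors = {label: count / len(training_labels)
          for label, count in Counter(training_labels).items()}
print(priors)  # {'giraffe': 0.9, 'horse': 0.06, 'camel': 0.04}

def classify(evidence):
    """Weight the model's (weak) per-image evidence by the learned priors."""
    return max(evidence, key=lambda label: evidence[label] * priors[label])

# An ambiguous photo: the raw evidence slightly favors "horse"...
ambiguous_photo = {"giraffe": 0.35, "horse": 0.40, "camel": 0.25}
print(classify(ambiguous_photo))  # ...but the giraffe-heavy prior wins: giraffe
```

The algorithm is doing exactly what it was taught: the skew comes from the pictures it studied, not from the code.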

As a result, we see many examples of AI bias, like programs assuming people from certain areas are more likely to commit crimes simply because, historically, more crimes have been detected there.

Spoiler 1: An algorithm will simply give back whatever a human puts in. Monkey see, monkey do.

Jesús, take the wheel

A self-driving car includes algorithms that detect a tree or a moving truck, but if a capuchin monkey escapes from a house in Mexico City (true story) and jumps in front of the car, the algorithm will not know what to do and the human will have to step in. This is because nobody ever prepared the car for such a situation.

This is called conditional automation. It is the highest level of automation currently available in cars, and it is a partnership between human and machine: the driver must be ready to take over. Which is why you cannot fall asleep in your Tesla. At least, not yet.

The power of AI is still very narrow and limited. These systems have very short memories and are highly specialized. While an algorithm can beat the best chess player in the world, that same program certainly cannot tell a fingerprint from a banana.

Spoiler 2: AI is smarter than humans only in the very specific tasks it was designed to do (by humans).

AI cannot exist without people

The only way we can tell that an AI algorithm has made a mistake is by looking at the results. AI doesn't know when it has made one. The Amazon algorithm that discriminated against women's resumes had no idea it was being unfair. The problem in this case was similar to the giraffe one: what was biased was the training data, not the algorithm.
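To illustrate (with made-up numbers, not Amazon's actual data or system), a simple audit of the training data itself is often where the bias shows up, long before any algorithm is trained:

```python
# Minimal sketch with hypothetical toy data: audit the historical decisions
# you plan to train on, because a skewed label distribution is where the
# bias usually hides.
from collections import Counter

# Each record: (group the resume belongs to, label assigned in the past)
historical_decisions = (
    [("men", "hired")] * 80 + [("men", "rejected")] * 120 +
    [("women", "hired")] * 10 + [("women", "rejected")] * 90
)

hire_rate = {}
for group in ("men", "women"):
    outcomes = Counter(label for g, label in historical_decisions if g == group)
    hire_rate[group] = outcomes["hired"] / (outcomes["hired"] + outcomes["rejected"])

print(hire_rate)  # {'men': 0.4, 'women': 0.1} -- the data, not the code, carries the bias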

The fact is that AI is replacing repetitive tasks, like separating rotten tomatoes on a conveyor belt at a ketchup factory. So these are the types of jobs at risk: narrow, specific and monotonous.

But humans are involved in every part of the process behind AI: from selecting the right data, to coding and training the algorithm, to asking it the right questions, to making sure the results make sense. This will continue to be true for quite a long time.

Whenever AI makes mistakes, it is typically because either the problem was not clearly stated or the training data was messy, biased... or contained too many giraffes. And a human has to be there to catch it.

The author is professor of Business Intelligence at EGADE Business School.

Article published in Forbes México.
