Can We Trust Business Algorithms?

The good, the bad and the great of algorithms that are changing our lives and businesses

Algorithms, programs whose instructions use data to predict our behavior, are increasingly present in our lives and in business. There are algorithms behind practically everything we do on the internet: from a product suggested on Amazon or a destination on Despegar.com, to the price of our car insurance (based on our history, the type of car we want to buy and our zip code). Algorithms decide for us so frequently that it is natural to wonder how they really work and who assures us that they do not make mistakes.

An algorithm is a list of instructions similar to those of a cooking recipe. These instructions must be recognized by a computer that is capable of receiving data (ingredients), applying a process (recipe) and obtaining a result (our favorite dish).
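The recipe analogy can be sketched in a few lines of code. This is a toy illustration, not any real recommendation system: the "ingredients" are a customer's purchase history, the "recipe" is a simple counting rule, and the "dish" is a suggested category.

```python
# A minimal sketch of the "recipe" idea: data in, process applied, result out.
# The categories and the scoring rule are illustrative, not from any real system.

def recommend(purchase_history: list[str]) -> str:
    """Suggest a product category from past purchases (toy logic)."""
    counts: dict[str, int] = {}
    for item in purchase_history:          # ingredients: the customer's data
        counts[item] = counts.get(item, 0) + 1
    # recipe: pick the most frequent category
    return max(counts, key=counts.get)     # result: the suggested "dish"

print(recommend(["books", "books", "garden", "books"]))  # -> books
```

Change the data and the same recipe produces a different dish, which is exactly why the quality of the input data matters so much in the examples that follow.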

We have all felt as if we were living in Big Brother at one time or another, having our behavior "guessed" by these algorithms. The last time I felt like that was when I was about to board a Delta flight (in the good old days, when people could travel). At my boarding gate, I was asked to look at a camera that scanned my face, and then I saw a green light announcing that I could enter the plane. I didn't need to show my passport or boarding pass. After a few seconds of disbelief and a certain sense of vulnerability, I started to connect the dots about the data they had used and exactly when I had authorized their use.

So, this is how it went. When I arrived at the airport to check in my suitcase, a self-service kiosk took my photo and turned it into a biometric model (sort of like a scale map of my face); it then linked that model to my boarding pass, basically telling the system: "this face goes with this boarding pass." For boarding, the camera compared my face (from the gate area) with the registered face and verified that they were the same, and that the face was associated with a boarding pass on that flight. Et voilà, I was ready to fly!
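The two checks described above, same face and right flight, can be sketched as follows. This is a hedged illustration under my own assumptions, not Delta's actual system: I represent the biometric model as a short list of numbers, and the made-up 0.6 distance threshold stands in for whatever matching rule a real system uses.

```python
import math

# Hypothetical sketch of the boarding check: a face captured at the gate is
# compared with the biometric model registered at check-in, and the linked
# boarding pass must belong to this flight. All numbers are illustrative.

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two face 'models' (toy embeddings)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def may_board(gate_face, registered_face, pass_flight, this_flight,
              threshold=0.6):
    same_person = distance(gate_face, registered_face) < threshold
    right_flight = pass_flight == this_flight
    return same_person and right_flight

# model registered at check-in vs. face captured at the gate
print(may_board([0.1, 0.9, 0.3], [0.12, 0.88, 0.31], "DL123", "DL123"))  # True
```

Note that the threshold encodes a trade-off: set it too loosely and strangers match; set it too tightly and legitimate passengers are rejected, which is where the bias problems discussed next come in.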

Now that we understand the example, it seems familiar to us, but it is not always so easy to determine what data the algorithm used, how it was used to produce a result, or whether that result carried some level of bias. Two very specific examples explain where potential bias comes from.

A first example has to do precisely with face recognition algorithms, which have been trained largely on images of white people and therefore struggle to recognize the faces of people of color. This is not a big problem at an airport, where you can always show your boarding pass as an alternative, but it is when the system stops you from entering your bank account or your office. Here, the source of the problem is the data, and the consequence is that companies such as IBM and Microsoft have stopped developing and selling such technologies.

A second example of bias is the case of the admission test that the UK government applied this year to decide which school graduates were eligible for which universities. Universities offer admission by combining two important factors: the ranking of students within their schools and the historical ranking of the schools in the system. A slight adjustment in the algorithm markedly favored students from private schools to the detriment of those from state-run schools, provoking a massive reaction throughout the country. Here, the problem was the process, that is, how the algorithm weighted one factor more than the other to reach a decision. The pressure was such that the government decided to scrap the test.

Can these errors and biases occur in our businesses? The answer is yes. An algorithm can assume the wrong age for a client and send inappropriate material, it can suggest an additional line of credit to clients whose credit rating is not within the required parameters, or it can generate an email campaign that includes customers who have already left the system. In all these cases, it is a human, a person who knows the business, who defines the data to be used and the results to be obtained.

If such a situation is analyzed, it is possible to identify whether the error was in the data being used or in the way it was processed. Was the data clean? Was there a good definition of the target population? Were smaller populations tested or A/B tested to see if the action worked in one group versus the other? These are some of the questions that need to be answered before the programming code is revisited or before going back to the whiteboard to review the initial question(s). The good news is that any model can be adjusted as many times as necessary until the prediction improves.
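The A/B check mentioned above can be sketched in a few lines: run the action on a small test group, keep a control group, and compare outcomes before a full rollout. This is a simulation under my own assumptions; the group sizes and the ~10% vs. ~12% conversion rates are made up, not real campaign data.

```python
import random

# Hedged sketch of an A/B test: simulate a control group (current algorithm)
# and a test group (adjusted algorithm), then compare conversion rates.
# All figures are simulated for illustration only.

random.seed(42)  # fixed seed so the simulation is reproducible
group_a = [random.random() < 0.10 for _ in range(1000)]  # control: ~10% convert
group_b = [random.random() < 0.12 for _ in range(1000)]  # adjusted: ~12% convert

rate_a = sum(group_a) / len(group_a)
rate_b = sum(group_b) / len(group_b)
print(f"control: {rate_a:.1%}  vs  test: {rate_b:.1%}")
```

In practice one would also ask whether the difference is statistically significant before rolling the change out to everyone, but even this simple comparison catches many of the errors listed above before they reach all customers.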

It is true that algorithms will not be very precise at first but, with more and more users and data, their predictions will become increasingly accurate, allowing companies to save time and money and dedicate more resources to strategic decisions.

When someone asks me about the potential errors of algorithms in certain processes, I always ask what they think is the percentage risk of human error in that process. Nobody would think today of giving a person the responsibility of recognizing people among millions of photos or of looking for a cheap plane ticket. But, can all decisions be made by algorithms? The answer is no. Fortunately, there are countless decisions and processes that require our judgment, experience and intuition. The human will always have the last word... even if the algorithm says otherwise.

Article originally published in Alto Nivel.
