When companies turn to artificial intelligence, they do so in the hopes of optimizing business decision making. But what happens when you’ve grown accustomed to the convenience of the technology, rely blindly on an AI tool, and it starts making biased decisions? You open yourself up to making terrible choices and can even lose valuable customers.
An example of this is Microsoft’s Twitter bot Tay.ai. The bot was intended to learn from online conversations and decide on new content to tweet to her followers. Although the use case sounds relatively simple, within a day the bot started spouting racist and sexist phrases. Tay didn’t serve as a great advertisement for AI decision making.
So, being a responsible, cautious business person, you may decide to stick to human decision making. It’s a slower process and you may miss some opportunities, but it is also comforting to believe that your decision making is unbiased.
If we take a step back, in the scenario above, have you actually succeeded in reducing the level of bias in your decision making? How can we determine whether humans or AI tools are making better and less biased decisions?
The dictionary defines bias as “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment.” If we dig a bit deeper, we can break bias down into multiple sub-categories, including statistical bias, cognitive bias, and prejudice.
Statistical bias is a technical term. Simply put, it is the difference between your estimate of something—for example, how much you will sell of a particular product—and the actual value. This difference can have many causes, but essentially they all boil down to problems with the information you use to make your estimate—as the saying goes, “garbage in, garbage out.”
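The idea of statistical bias can be made concrete with a small simulation. This is a minimal sketch with invented numbers: the true average daily sales are assumed to be 100 units, and estimating that average from only busy days systematically overshoots.

```python
import random

random.seed(0)

# Hypothetical setup: true average daily sales are 100 units.
TRUE_MEAN = 100.0

def estimate(sample):
    """Estimate the mean from a sample of observed days."""
    return sum(sample) / len(sample)

# A representative sample of days vs. a sample drawn only from
# busier-than-average days ("garbage in").
representative = [random.gauss(TRUE_MEAN, 10) for _ in range(10_000)]
busy_days_only = [random.gauss(TRUE_MEAN + 30, 10) for _ in range(10_000)]

# Statistical bias = expected estimate minus the true value
# (approximated here by averaging over many observations).
bias_fair = estimate(representative) - TRUE_MEAN
bias_skewed = estimate(busy_days_only) - TRUE_MEAN

print(f"bias with representative data: {bias_fair:+.1f}")
print(f"bias with skewed data:         {bias_skewed:+.1f}")
```

The first estimate hovers near zero bias; the second is off by roughly the 30-unit distortion baked into its input data.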
Cognitive biases are repeated errors in human thinking. They may arise due to limitations of the human brain, e.g., memory lapses. Or they may be mental shortcuts that we have developed to help us make quicker decisions.
Finally, as described in the dictionary, prejudices are unfair and unreasonable opinions or feelings, especially when formed without enough thought or knowledge. Examples include racism, sexism, and ageism.
Examples of bias in AI
Following are some examples of the different types of bias in a business context:
You base your stock purchasing decisions on a machine learning model for predicting daily store sales, but you only use data from Black Friday to build the AI model. The resulting predictions will wildly overestimate sales. This is known as selection bias: using an unrepresentative sub-sample of data.
You base your maintenance schedule on an AI model built to predict how long until a crucial piece of machinery fails, but you don’t include the hours per day the machine is run for. The resulting AI model is unlikely to be accurate. This is known as omitted variable bias: missing important factors that impact your target.
You predict how long it will take to complete projects at work so that you can decide how to allocate resources. Despite your best efforts, your projects always overrun. This is an example of optimism bias or planning fallacy: overestimating favorable outcomes and underestimating task completion times.
You hold a meeting of your marketing team to decide on a new strategy. Your team is very experienced and everyone agrees with the plan but within a few days, major issues emerge. This could be an example of groupthink: people try to avoid conflict and achieve consensus which can mean that ideas are not properly scrutinized.
You believe that your hiring process is fair, but your team ends up being composed mostly of people who are very similar. This limits the different viewpoints on offer and harms decision making. This may be down to conscious or unconscious bias towards certain races, genders, or ages.
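The first example above, selection bias, can be sketched in a few lines of Python. The sales figures are invented for illustration, and the “model” is deliberately naive (it just averages its training data), but the effect is the same one a real machine learning model would suffer:

```python
import random

random.seed(42)

# Hypothetical daily sales: ordinary days vs. Black Friday.
normal_days = [random.gauss(100, 15) for _ in range(360)]
black_fridays = [random.gauss(500, 50) for _ in range(5)]

def naive_model(training_data):
    """Predict tomorrow's sales as the mean of the training sample."""
    return sum(training_data) / len(training_data)

# Selection bias: training only on the unrepresentative sub-sample.
biased_prediction = naive_model(black_fridays)
# A representative sample mixes both kinds of day.
fair_prediction = naive_model(normal_days + black_fridays)

print(f"trained on Black Friday only: {biased_prediction:.0f} units")
print(f"trained on all days:          {fair_prediction:.0f} units")
```

Training only on Black Friday data predicts roughly 500 units for an ordinary day, a wild overestimate compared with the representative sample’s prediction of just over 100.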
Human bias vs machine bias
Now that we have an understanding of bias, it is clear that it is, in general, harmful in a business setting. So this brings us to the important question: do humans or AI machines make better, less biased decisions?
One theme that runs through all of the AI-related examples above is that decisions made by machine learning models are only as good as the data they are based on. The reason Microsoft’s Tay.ai turned into a racist was not that her programmers wrote racism into their code; it was that the data (tweets) she learned from contained racist content.
Most AI decision-making tools use a technique called supervised machine learning. In simple terms, this means that they take in historical data consisting of a target variable and predictor variables, and use this data to build a model that can predict the target for a new set of data. This approach does not require a human to create a set of rules for the model. The AI tool does this itself. The great benefit of using this sort of self-learning approach is that the AI tool can process large amounts of data and find patterns that would be very difficult (if not impossible) for a human to spot.
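A minimal sketch of supervised learning, using invented (predictor, target) pairs such as advertising spend versus sales: the model fits an ordinary least-squares line to the historical data, then predicts the target for a new predictor value. The “rules” (slope and intercept) are learned from the data rather than written by hand.

```python
# Historical (predictor, target) pairs -- invented for illustration:
# advertising spend vs. resulting sales.
history = [(1.0, 12.0), (2.0, 19.5), (3.0, 31.0), (4.0, 38.5), (5.0, 52.0)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least-squares fit: the model's "rules" come from the data.
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
        sum((x - mean_x) ** 2 for x, _ in history)
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict the target for a new predictor value."""
    return intercept + slope * x

print(f"predicted sales at spend 6.0: {predict(6.0):.1f}")
```

Real AI tools fit far richer models over many predictors, but the principle is the same: historical data in, learned prediction rule out.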
Where bias comes into the picture is in the historical data you use. If the historical data used is based on prejudiced past human decisions, this can influence the resulting models negatively. A classic example of this is using machine learning models to predict which job candidates will succeed in a role. If past hiring and promotion decisions were biased, this will lead to a biased model.
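The hiring example above can be illustrated with a toy dataset (the records and group labels are entirely hypothetical). A naive model that learns hire probabilities straight from historical outcomes faithfully reproduces the prejudice in those outcomes, even though nothing discriminatory was written into the code:

```python
# Hypothetical hiring records: (skill_score, group, hired). Group "B"
# candidates were historically rejected even with high scores -- a
# prejudiced labelling that the model then inherits.
records = [
    (9, "A", True), (7, "A", True), (4, "A", False), (8, "A", True),
    (9, "B", False), (7, "B", False), (4, "B", False), (8, "B", False),
]

def hire_rate(group):
    """A naive learned model: predicted hire probability per group,
    taken straight from the historical outcomes."""
    outcomes = [hired for _, g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

print(f"predicted hire rate, group A: {hire_rate('A'):.2f}")
print(f"predicted hire rate, group B: {hire_rate('B'):.2f}")
```

Despite identical skill scores across the two groups, the model predicts a 75% hire rate for group A and 0% for group B, because that is what the historical labels taught it.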
So we have a dilemma. On the one hand, humans are terrible at decision making because we are naturally biased. On the other hand, since machine learning–based models can inherit bias from the historical data used to build them, they can also end up making bad decisions. These bad decisions generate more biased data, so rebuilding the AI models with new data only further entrenches the problem. So, should we rely on humans or trust AI?
The answer is somewhere in the middle. Blindly following the advice of an AI algorithm (or letting one loose unsupervised on Twitter) is a bad idea. But missing out on all of the advantages AI can bring is not an option for most businesses today. AI allows you to explore a much wider range of data patterns than a human ever could, and at far greater speeds. If you are not making the most of the opportunities AI brings, your competitors definitely will be!
How to mitigate bias in business decision making
To solve this conundrum, we need to achieve a few key things:
- Build AI solutions that provide explainable predictions/decisions – so-called “glass boxes” rather than “black boxes.”
- Integrate these solutions into human processes that provide a suitable level of oversight.
- Ensure that AI solutions are properly benchmarked and frequently updated.
This way, we can leverage the power of AI while applying one of the greatest attributes of the human brain: common sense.
There is currently a big push in the world of AI to find ways to explain the predictions made by machine learning models. For example, Google has created the What-If Tool to allow users to easily probe model results. Other companies focus on building glass box solutions: models with human-understandable features whose predictions come with explanations of what’s driving the result.
Bias has always been a problem for business decision-makers. There is no way to completely eliminate bias, regardless of whether we use humans or machines to make decisions. Looking out for potential bias in all decision making and acknowledging when there is a problem can go a long way. Bias can be mitigated through greater transparency in both human- and machine-led decision making and with appropriate oversight and safeguards.