It’s not just humans who are biased. Apparently, computer algorithms are too. That’s why IBM recently unveiled AI Fairness 360 (AIF360) – an artificial intelligence tool aimed at detecting bias in algorithms.

AI Fairness 360 analyzes algorithms’ decisions and – more importantly – the reasoning behind them. It checks for unwanted bias in machine learning models, datasets and algorithms. The tool, which is open source, also works in real time, so customers can see which factors are being used to make a decision.
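
To give a sense of how that checking looks in practice, here is a minimal sketch using the open-source aif360 Python package. The toy data, the column names ("sex", "age", "label") and the privileged/unprivileged group definitions are illustrative assumptions, not part of IBM's announcement.

```python
# A minimal bias-detection sketch with the aif360 toolkit (pip install aif360).
# The toy data and group definitions below are hypothetical; real use would
# start from a production dataset.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical loan decisions: label=1 means "approved", sex=1 is the privileged group.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [25, 40, 35, 50, 30, 45, 28, 60],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Difference in favorable-outcome rates between groups (0 means no measured disparity).
print("Mean difference:", metric.mean_difference())
# Ratio of favorable-outcome rates (values well below 1 suggest disparate impact).
print("Disparate impact:", metric.disparate_impact())
```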

The best part about AI Fairness 360? It doesn’t just highlight bias; it also suggests adjustments to eliminate bias as much as possible.
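
As an illustration of what such an adjustment can look like, the toolkit ships pre-processing algorithms such as Reweighing. The sketch below, which reuses the hypothetical dataset and group definitions from the example above, shows one possible way to apply it; it is not IBM's prescribed workflow.

```python
# Reweighing assigns instance weights so that favorable outcomes are balanced
# across privileged and unprivileged groups. 'dataset' and the groups reuse
# the hypothetical example above.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

rw = Reweighing(
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
dataset_transformed = rw.fit_transform(dataset)

# Measured disparity on the reweighted data should move toward 0.
metric_after = BinaryLabelDatasetMetric(
    dataset_transformed,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print("Mean difference after reweighing:", metric_after.mean_difference())
```

A downstream classifier would then be trained with the resulting instance weights; that step is outside the scope of this sketch.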

IBM is not the only company working on tools to fight AI bias. Google, Microsoft, and even Facebook have recently launched, or are planning to launch, bias detection tools. Google’s What-If Tool shares a similar goal with AI Fairness 360. It does not, however, operate in real time. We have yet to see what Microsoft and Facebook are working on.

To whom is AI Fairness 360 geared?

Any company that uses algorithms to make critical decisions can benefit from it. That includes insurance, billing, medicine and healthcare, and banking and finance – as well as law enforcement and nearly every technology company.

But is it really necessary?

Let’s put it this way. In 2017, the smart reply feature in Google Allo suggested an emoji of a man wearing a turban as one of three responses to a gun emoji. Bias detection tools can prevent an AI suggestion of this nature by ensuring that AI algorithms are fair, impartial and non-discriminatory. They can also, for example, help ensure that two 35-year-olds with similar medical histories receive the same life insurance policy options.

When it comes to AI, it’s really all about trust. As companies increasingly rely on machine learning and AI to make a wide range of critical decisions – many of which directly impact real people in real life – it becomes ever more important to ensure that specific groups of people are not short-changed. It’s easy for AI to be biased because it’s difficult to test algorithms in enough diverse situations.

In its initial form, AI Fairness 360 includes nine bias mitigation algorithms and 30 fairness metrics. It promises to be an interactive experience, with three tutorials on credit scoring, predicting medical expenditures, and classifying face images by gender.

AI Fairness 360 is setting the stage for the next level of AI adoption. With bias detection tools, executives and their customers can have greater confidence in the decisions of companies’ AI systems. That being said, getting these tools “just right” is a process, and we can – and should – expect many adjustments in the coming months.

Bracha Halperin is a business consultant based in New York City. To comment on her Jewish Press-exclusive tech columns -- or to reach her for any other purpose -- e-mail her at [email protected]. You can also follow her on Instagram or Twitter at: @brachahalperin.