
Explainable AI: Bringing trust to business AI adoption

September 24, 2019

Via: CIO

When it comes to making use of AI and machine learning, trust in the results is key. Many organizations, particularly those in regulated industries, are hesitant to adopt AI systems because of what is known as AI's "black box" problem: the algorithms reach their decisions opaquely, offering no explanation of the reasoning behind them.

This is an obvious problem. How can we trust AI with life-or-death decisions in areas such as medical diagnostics or self-driving cars if we don't know how these systems work?

Read More on CIO