
Enhancing trust in artificial intelligence: Audits and explanations can help

August 13, 2019

Via: CIO

There is a lively debate around the world about AI's perceived "black box" problem. Most fundamentally, if a machine can be taught to learn on its own, how can it explain its conclusions? The issue comes up most frequently in discussions of how to address possible algorithmic bias. One response is to mandate a right to a human decision, as Article 22 of the General Data Protection Regulation (GDPR) does. Here in the United States, Senators Wyden and Booker have proposed the Algorithmic Accountability Act, which would compel companies to conduct impact assessments.

Read More on CIO