Episode 3: When AI gets it Wrong


About this content

When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


We explore real-world case studies, including:


The wrongful arrest of Robert Williams in Detroit due to facial recognition bias

The racially biased predictions of COMPAS, a recidivism risk-assessment algorithm used in U.S. courts (a toy version of the disparity analysis behind that finding appears after this list)

How predictive policing tools reinforce historical over-policing in marginalized communities

We also tackle AI hallucinations: false but believable outputs from tools like ChatGPT and Bing's Sydney, and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.
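
Here is a minimal sketch of the kind of false-positive-rate comparison that the COMPAS reporting centered on. The records and field names below are invented for illustration; this is not the real COMPAS data or methodology, just the shape of the disparity check.

```python
# Compare false positive rates across groups: how often people who did NOT
# reoffend were still flagged high-risk. All records below are invented.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high-risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives)

records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
# A tool can look "accurate" overall while one group absorbs far more
# wrongful high-risk labels; that disparity is the core of the finding.
```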


Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.
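
To see why "no clarity, no appeal" matters, here is a hedged sketch contrasting a bare black-box score with the same score itemized feature by feature, as a transparent linear model allows. The weights and feature names are made up for illustration, not drawn from any real system.

```python
# Hypothetical linear risk model: weights and features are invented.
weights = {"prior_arrests": 0.8, "age": -0.05, "employment_years": -0.3}
person  = {"prior_arrests": 2,   "age": 30,    "employment_years": 1}

score = sum(weights[f] * person[f] for f in weights)

# Black-box presentation: a bare number, with no basis for appeal.
print(f"risk score: {score:.2f}")

# Transparent presentation: itemize each feature's contribution
# (weight * value), so the decision can be inspected and contested.
for feature, w in weights.items():
    print(f"  {feature}: {w * person[feature]:+.2f}")
```

An opaque system hands the affected person only the first printout; contesting a decision requires something closer to the second.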


📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


🔍 Sources & Further Reading:


Facial recognition misidentification

Machine bias

Predictive policing

AI hallucinations


🎧 Tune in to learn why we need more than innovation—we need accountability.
