The Tinker Table

Author: Hannah Lloyd

About this content

Welcome to The Tinker Table—a podcast where big ideas meet everyday questions. Hosted by engineering educator, researcher, and systems thinker Hannah Lloyd, this show invites curious minds to pull up a seat and explore the intersection of technology, ethics, design, and humanity. From AI ethics and digital literacy to intentional innovation and creative problem-solving, each episode breaks down complex topics into thoughtful, accessible conversations. Whether you’re a teacher, a parent, a healthcare worker, or just someone trying to keep up with a rapidly changing world—you belong here.
Episodes
  • Episode 3: When AI Gets It Wrong
    2025/07/08

    When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


    We explore real-world case studies, including:

    • The wrongful arrest of Robert Williams in Detroit, driven by facial recognition bias
    • The racially biased risk predictions of COMPAS, an algorithm used in U.S. court sentencing
    • How predictive policing tools reinforce historical over-policing in marginalized communities

    We also tackle AI hallucinations—false but believable outputs from tools like ChatGPT and Bing’s Sydney—and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


    Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


    📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


    🔍 Sources & Further Reading:


    Facial recognition misidentification

    Machine bias

    Predictive policing

    AI hallucinations


    🎧 Tune in to learn why we need more than innovation—we need accountability.

    9 min
  • Episode 2: Who is at the AI table?
    2025/07/01
    If AI is shaping our future, we have to ask: Who’s shaping AI? In this episode of The Tinker Table, Hannah digs into the essential question of representation in technology—and why it matters who gets invited to build the tools we all use. We explore how a lack of diversity in engineering and data science has led to real-world consequences: from facial recognition tools that misidentify women of color (Buolamwini & Gebru, MIT Media Lab, 2018) to healthcare algorithms that underestimated Black patients’ needs by nearly 50% (Obermeyer et al., Science, 2019).

    This episode blends Hannah’s own research on belonging in engineering education with broader examples across healthcare, education, and AI development. You’ll hear why representation isn’t just about race or gender—it’s about perspective, lived experience, and systemic change. And most importantly, we talk about what it means to build tech that truly works for everyone. Whether you’re a developer, educator, team leader, or thoughtful user—pull up a seat.

    🔍 Sources & Further Reading:

    Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
    9 min
  • Episode 1: Is AI Good?
    2025/07/01
    Is AI good? Is it bad? Or is it something more complicated—and more human—than we tend to admit? In this first episode of The Tinker Table, Hannah breaks down the foundations of AI ethics—what it is, why it matters, and where it shows up in our lives. From biased hiring algorithms that penalized women (Winick, 2022) to predictive systems shaped by decades-old redlining data (The Markup, 2021), and even soap dispensers that don’t detect darker skin tones (Fussell, 2017)—this episode explores how AI isn’t just about what we can build, but what we should.

    We ask: Who gets to shape the tools shaping our world? What values are embedded in our algorithms? And what happens when human bias becomes digital infrastructure? Whether you’re a teacher, parent, technologist, or simply AI-curious—this is the conversation to start with.

    🔍 Sources & Further Reading:

    Winick, E. (2022, June 17). Amazon ditched AI recruitment software because it was biased against women. MIT Technology Review.

    The Markup. (2021, August 25). The secret bias hidden in mortgage-approval algorithms.

    Fussell, S. (2017, August 17). Why can’t this soap dispenser identify dark skin? Gizmodo.
    10 min

Listener reviews for The Tinker Table
