Episodes

  • "Does social ethical AI exist?" with Emmanuel Matthews, Product Leader (ex-Google, DeepMind)
    2024/09/12

    In this episode, Alex explores the intersections of social media, mental health products, consumer trust, and building responsible AI products with Emmanuel Matthews, Product Leader, previously at Spring Health, Course Hero, Google, and DeepMind.

    Emmanuel discusses building trust in AI-powered mental health solutions, bringing AI products to market and focusing on user experience, and shares his outlook for the future of AI with connected devices and personal AI assistants.

    55 min
  • "How can AI augment civic spaces?" with Sarah Lawrence, Design Director and Founder
    2024/08/29

    In this episode, Alex explores the intersections of design, civic engagement, and AI, with Sarah Lawrence, Design Director at Design Emporium and Founder of Tallymade.

    Sarah discusses driving community engagement through the creation of AI-powered collective art, preserving creativity in the age of AI art, testing usability with her very own "mom test", and leveraging AI for everyday utility, such as generating recipes to cut food waste or mapping errands efficiently.

    48 min
  • "Is it enough for AI 'to do no harm'?" with Nana Nwachukwu, AI Ethics and Governance Consultant
    2024/05/28

    In this episode, Alex explores AI governance, transparency, and consent in healthcare and the creator economy with Nana Nwachukwu, AI Ethics and Governance Consultant at Saidot. Nana shares her three pillars of AI governance, discusses the challenges of transparency in AI systems, and reveals the most critical question consumers should be asking about AI.

    Nana is an accomplished lawyer, knowledge manager, and policy consultant currently specializing in digital governance and AI, with a decade of experience spanning multiple sectors and countries.

    36 min
  • "Is trust the currency of AI?" with Josh Schwartz, CEO and Co-founder of Phaselab
    2024/03/19

    In this episode, Alex explores the intersections of ethics, privacy, and AI with Josh Schwartz, CEO and Co-founder of Phaselab. Josh shares the opportunities and risks of operationalizing ethics in a for-profit business, the importance of prioritizing responsible AI in early-stage companies, and what responsibility means in the context of AI and privacy.

    Josh founded Phaselab in 2023 to help companies automate their data privacy programs. Prior to Phaselab, he served as CTO at Chartbeat, leading one of the largest and most influential analytics companies in the world from its early days through its exit. Josh is a machine learning expert by training and has researched computer vision and AI at MIT CSAIL, Cornell, and the University of Chicago.

    31 min
  • "Does data governance and responsible AI intersect?" with Sabrina Palme, CEO and Co-founder of Palqee
    2024/03/12

    In this episode, Alex chats with Sabrina Palme, CEO and Co-founder of Palqee, about language learning, culture, transparency and explainability in AI systems, the power of a diverse founding team, and the future of AI morality. Sabrina also explains her unique hierarchy for data governance.

    Sabrina is a certified Data Compliance Officer with broad experience implementing privacy-by-design and security-by-design principles in line with international data protection laws and information security frameworks. Palqee is a UK-based company specializing in Governance, Risk, Compliance (GRC), and AI Governance automation software, operating across Europe, LatAm, and the US. As its CEO, Sabrina is driving the development of a solution designed to detect contextual bias in AI systems and enhance the trustworthiness, transparency, and fairness of AI technology.

    42 min
  • "Is there a human cost to AI bias?" with Gerald Carter, Founder and CEO, Destined AI
    2024/03/05

    In this episode, Alex chats with Gerald Carter, CEO and Founder of Destined AI, about the importance of de-biasing data, the power of language, founder perseverance, and what it means to generate diverse, consented data at scale.

    Gerald Carter founded Destined AI to help companies detect and mitigate unwanted bias in AI. He started the company after seeing AI systems mislabel diverse people with derogatory terms, fail to decipher speech by people with southern U.S. accents, and struggle to detect brown skin tones. Through accurate, balanced, and ethically sourced data, Destined AI aims to create a safe and equitable world where AI represents the best that humankind has to offer, not the worst. To date, the company has worked with over 800 contributors to source its data ethically.

    32 min
  • "Can AI be built to benefit society and the individual?" with Sharon Zhang, Co-founder and CTO, Personal.ai
    2024/02/27

    In the inaugural episode, Alex and Sharon Zhang, CTO and Co-Founder of Personal.ai, discuss building a consumer AI company, explainability, AI applications for good, and AI's impact on society and individuals.

    Sharon Zhang (she/her) is passionate about building AI and NLP products that impact the lives of patients, employees, and everyday people. She is the Co-founder and CTO of Personal.ai, a consumer AI startup on a mission to build private, personal, and trainable AI models for people and brands. Sharon's perspective is informed by well over a decade of experience in AI and NLP and by leading AI development teams at Nuance, Glint, Kaiser Permanente, and others.

    41 min
  • Welcome to The Culture of Machines
    2024/02/19

    Welcome to The Culture of Machines, hosted by Alex de Aranzeta (she/her), the show where responsible AI meets culture.

    Together with AI founders, ethicists, and experts, Alex will uncover and explore key questions about responsible AI and its intersections with culture, people, and society.

    Follow us on X/Twitter and LinkedIn.

    1 min