Responsible AI Report

Author: Responsible AI Institute
  • Summary

  • Welcome to the RAI Report from the Responsible AI Institute. Each week we bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly.

    Support the show
    Visit our website at responsible.ai.

    © 2024 Responsible AI Report

Episodes
  • The Complexities of State and Federal AI Regulation with Soribel Feliz, Former Senior AI & Tech Policy Advisor at the US Senate | EP 07
    2025/01/02

    For this episode of the Responsible AI Report, Soribel Feliz discusses the complexities of AI regulation, emphasizing the need for a balanced approach that considers both innovation and the rights of creators. She highlights the challenges faced by startups in complying with regulations and the differing impacts of state versus federal policies. The discussion also touches on the evolving landscape of intellectual property rights in the context of AI development.

    Takeaways

    • Big tech has valid arguments against regulations.
    • Overly burdensome regulations could hinder innovation for startups.
    • Effective regulation must be tailored to different contexts.
    • State-level regulations can be more responsive than federal ones.
    • A patchwork of regulations can complicate compliance for startups.
    • Balancing AI development with creator rights is complex.
    • Policymakers need to collaborate with AI developers and creators.
    • Fair use and data licensing frameworks are evolving areas of policy.
    • Smaller creators often lack resources to defend their rights.
    • Ongoing dialogue is essential for effective AI governance.

    Learn more at:
    https://www.linkedin.com/in/soribel-f-b5242b14/
    https://www.linkedin.com/newsletters/responsible-ai-=-inclusive-ai-7046134543027724288/

    Soribel Feliz is a thought leader in Responsible AI and AI governance. She started her career as a U.S. diplomat with the Department of State. She also worked for Big Tech companies, Meta and Microsoft, and most recently, worked as a Senior AI and Tech Policy Advisor in the U.S. Senate.





    Support the show

    Visit our website at responsible.ai


    17 min
  • Mitigating the Risks of AI Chatbots with Vyoma Gajjar, IBM | EP 06
    2024/12/19

    In this episode of the Responsible AI Report, Patrick speaks with Vyoma Gajjar about the critical issues surrounding responsible AI, particularly in the context of generative AI and chatbots. They discuss the balance between creating engaging AI interactions and maintaining transparency, the importance of implementing safety measures from the ground up, and the necessity of developing emotional intelligence frameworks to better understand and respond to user emotions. The conversation emphasizes the need for robust safety protocols and regulations to ensure that AI technologies are developed ethically and responsibly.

    Takeaways

    • Responsible AI is crucial in today's technology landscape.
    • Transparency in AI interactions is essential for user trust.
    • AI companies must implement safety measures at the foundation.
    • Emotional intelligence frameworks can enhance AI responsiveness.
    • Robust safety measures are necessary to protect vulnerable users.
    • AI should be validated by AI to ensure effectiveness.
    • There is a need for regulations around AI technologies.
    • Explainable AI will be vital for future developments.
    • Monitoring and parental controls are important for AI interactions with minors.

    Learn more at:
    https://www.linkedin.com/in/vyomagajjar/

    Vyoma Gajjar is an AI Technical Solution Architect at IBM, specializing in generative AI, AI governance, and machine learning. With over a decade of experience, she has developed innovative solutions that emphasize ethical AI practices and responsible innovation across various global industries. Vyoma is a passionate advocate for AI governance and has contributed her expertise as a speaker and mentor in numerous academic and professional settings. She is dedicated to fostering a deeper understanding of AI's impact on society, promoting transparency, and enhancing trust in AI technologies. Vyoma holds a patent in AI and is actively involved in initiatives that drive positive change in the tech industry.





    Support the show

    Visit our website at responsible.ai


    20 min
  • Navigating the AI Deepfake Dilemma with Sophie Compton, Director of ANOTHER BODY | EP 05
    2024/12/05

    Director/Producer, Sophie Compton, joins us for episode 05 of the Responsible AI Report! In this conversation, Sophie Compton discusses the implications of AI and deepfakes, particularly in the context of misinformation during elections and the broader societal impacts of deepfake abuse. She emphasizes the importance of consent, the structural issues surrounding deepfake technology, and the need for accountability from tech companies. The discussion also highlights the potential positive uses of AI in storytelling and the cultural implications of deepfake technology on gender equality.

    Takeaways

    • Deepfakes pose significant risks in elections and misinformation.
    • The majority of deepfake abuse targets women, often in non-consensual contexts.
    • AI technology can be reclaimed for positive storytelling.
    • Cultural messaging around deepfakes is crucial for prevention.
    • Accountability from tech companies is lacking and necessary.
    • Public awareness and education are essential in combating deepfakes.
    • AI's potential for good exists but requires responsible use.
    • Engagement with campaigns like #MyImageMyChoice is vital for change.

    Learn more at:
    https://myimagemychoice.org/
    @myimagemychoice

    Sophie Compton is a documentary director and producer who tells women's stories of injustice and healing. Her work is impact-driven and she runs impact projects alongside each creative piece, amplifying survivor voices. Her projects have been supported by Sundance Institute, International Documentary Association, Impact Partners, Hot Docs, Arts Council England and others. Her debut feature ANOTHER BODY follows a student’s search for justice after discovering deepfake pornography of herself online. It premiered at SXSW 2023, winning the Special Jury Award for Innovation in Storytelling, and played at Hot Docs, Doc Edge, Champs Elysées, Munich, Aegean, DMZ, Woodstock, Mill Valley and New/Next Film Festivals among others, winning multiple Audience Awards. She is the co-founder of #MyImageMyChoice, a cultural movement tackling intimate image abuse. Her second feature HOLLOWAY (in post-production) follows six women returning to the abandoned prison where they were once incarcerated, produced by Grierson and BIFA-winning Beehive Films. Previously, she was Artistic Director of theatre company Power Play, producing/directing six plays including the Fringe First winning FUNERAL FLOWERS, and work at Tate Modern, V&A, Pleasance, Copeland Gallery. As an impact producer she has worked with grassroots organisations, NGOs, governments and press including The White House, World Economic Forum, and NOW THIS on viral content and new legislation and policy.

    Support the show

    Visit our website at responsible.ai


    22 min
