Episodes

  • The Complexities of State and Federal AI Regulation with Soribel Feliz, Former Senior AI & Tech Policy Advisor at the US Senate | EP 07
    2025/01/02

    For this episode of the Responsible AI Report, Soribel Feliz discusses the complexities of AI regulation, emphasizing the need for a balanced approach that considers both innovation and the rights of creators. She highlights the challenges faced by startups in complying with regulations and the differing impacts of state versus federal policies. The discussion also touches on the evolving landscape of intellectual property rights in the context of AI development.

    Takeaways

    • Big tech has valid arguments against regulations.
    • Overly burdensome regulations could hinder innovation for startups.
    • Effective regulation must be tailored to different contexts.
    • State-level regulations can be more responsive than federal ones.
    • A patchwork of regulations can complicate compliance for startups.
    • Balancing AI development with creator rights is complex.
    • Policymakers need to collaborate with AI developers and creators.
    • Fair use and data licensing frameworks are evolving areas of policy.
    • Smaller creators often lack resources to defend their rights.
    • Ongoing dialogue is essential for effective AI governance.

    Learn more at:
    https://www.linkedin.com/in/soribel-f-b5242b14/
    https://www.linkedin.com/newsletters/responsible-ai-=-inclusive-ai-7046134543027724288/

    Soribel Feliz is a thought leader in Responsible AI and AI governance. She started her career as a U.S. diplomat with the Department of State. She also worked for Big Tech companies, Meta and Microsoft, and most recently, worked as a Senior AI and Tech Policy Advisor in the U.S. Senate.





    Support the show

    Visit our website at responsible.ai


    17 min
  • Mitigating the Risks of AI Chatbots with Vyoma Gajjar, IBM | EP 06
    2024/12/19

    In this episode of the Responsible AI Report, Patrick speaks with Vyoma Gajjar about the critical issues surrounding responsible AI, particularly in the context of generative AI and chatbots. They discuss the balance between creating engaging AI interactions and maintaining transparency, the importance of implementing safety measures from the ground up, and the necessity of developing emotional intelligence frameworks to better understand and respond to user emotions. The conversation emphasizes the need for robust safety protocols and regulations to ensure that AI technologies are developed ethically and responsibly.

    Takeaways

    • Responsible AI is crucial in today's technology landscape.
    • Transparency in AI interactions is essential for user trust.
    • AI companies must implement safety measures at the foundation.
    • Emotional intelligence frameworks can enhance AI responsiveness.
    • Robust safety measures are necessary to protect vulnerable users.
    • AI should be validated by AI to ensure effectiveness.
    • There is a need for regulations around AI technologies.
    • Explainable AI will be vital for future developments.
    • Monitoring and parental controls are important for AI interactions with minors.

    Learn more at:
    https://www.linkedin.com/in/vyomagajjar/

    Vyoma Gajjar is an AI Technical Solution Architect at IBM, specializing in generative AI, AI governance, and machine learning. With over a decade of experience, she has developed innovative solutions that emphasize ethical AI practices and responsible innovation across various global industries. Vyoma is a passionate advocate for AI governance and has contributed her expertise as a speaker and mentor in numerous academic and professional settings. She is dedicated to fostering a deeper understanding of AI's impact on society, promoting transparency, and enhancing trust in AI technologies. Vyoma holds a patent in AI and is actively involved in initiatives that drive positive change in the tech industry.







    20 min
  • Navigating the AI Deepfake Dilemma with Sophie Compton, Director of ANOTHER BODY | EP 05
    2024/12/05

    Director/Producer, Sophie Compton, joins us for episode 05 of the Responsible AI Report! In this conversation, Sophie Compton discusses the implications of AI and deepfakes, particularly in the context of misinformation during elections and the broader societal impacts of deepfake abuse. She emphasizes the importance of consent, the structural issues surrounding deepfake technology, and the need for accountability from tech companies. The discussion also highlights the potential positive uses of AI in storytelling and the cultural implications of deepfake technology on gender equality.

    Takeaways

    • Deepfakes pose significant risks in elections and misinformation.
    • The majority of deepfake abuse targets women, often in non-consensual contexts.
    • AI technology can be reclaimed for positive storytelling.
    • Cultural messaging around deepfakes is crucial for prevention.
    • Accountability from tech companies is lacking and necessary.
    • Public awareness and education are essential in combating deepfakes.
    • AI's potential for good exists but requires responsible use.
    • Engagement with campaigns like #MyImageMyChoice is vital for change.

    Learn more at:
    https://myimagemychoice.org/
    @myimagemychoice

    Sophie Compton is a documentary director and producer who tells women's stories of injustice and healing. Her work is impact-driven and she runs impact projects alongside each creative piece, amplifying survivor voices. Her projects have been supported by Sundance Institute, International Documentary Association, Impact Partners, Hot Docs, Arts Council England and others. Her debut feature ANOTHER BODY follows a student’s search for justice after discovering deepfake pornography of herself online. It premiered at SXSW 2023, winning the Special Jury Award for Innovation in Storytelling, and played at Hot Docs, Doc Edge, Champs Elysées, Munich, Aegean, DMZ, Woodstock, Mill Valley and New/Next Film Festivals among others, winning multiple Audience Awards. She is the co-founder of #MyImageMyChoice, a cultural movement tackling intimate image abuse. Her second feature HOLLOWAY (in post-production) follows six women returning to the abandoned prison where they were once incarcerated, produced by Grierson and BIFA-winning Beehive Films. Previously, she was Artistic Director of theatre company Power Play, producing/directing six plays including the Fringe First winning FUNERAL FLOWERS, and work at Tate Modern, V&A, Pleasance, Copeland Gallery. As an impact producer she has worked with grassroots organisations, NGOs, governments and press including The White House, World Economic Forum, and NOW THIS on viral content and new legislation and policy.



    22 min
  • Understanding AI Governance Structures with Megha Sinha, VP of AI/ML Practice at Genpact | EP 04
    2024/11/21

    In this episode of the Responsible AI Report, Patrick and Megha Sinha discuss the essential components of responsible AI governance. They explore the significant gap between AI ambitions and the resources available for implementing governance frameworks, emphasizing the need for organizations to establish clear ethical guidelines, accountability mechanisms, and cross-functional teams. Megha outlines an eight-step approach to building a responsible AI framework, highlighting the importance of transparency, bias mitigation, and continuous monitoring. The conversation also delves into the critical role of governance structures in ensuring accountability as global AI regulations evolve, and the necessity of incorporating responsible AI thinking from the design phase to prevent ethical and legal violations.

    Takeaways

    • 97% of organizations have set responsible AI goals, but 48% lack resources.
    • Establishing a code of conduct is critical for responsible AI.
    • Transparency is essential for building trust in AI systems.
    • Governance structures are vital for ensuring accountability.
    • Incorporate responsible AI thinking from the start of development.
    • Prevent ethical and legal violations by embedding responsible AI early.
    • Designing for explainability enhances accountability in AI.
    • Continuous monitoring is necessary for responsible AI frameworks.
    • Fostering a culture of responsible AI is crucial for success.
    • AI governance must adapt to evolving regulations.

    Learn more by visiting:
    https://www.genpact.com/
    https://www.linkedin.com/in/megha-sinha/

    Megha Sinha is an AI/ML leader with 15 years of expertise in shaping technology strategy and spearheading AI-driven transformations, and a Certified AI Governance Professional (IAPP). As the leader of the AI/ML & Responsible AI Platform competency in the Global AI Practice, Megha has built high-performing teams across ML Engineering, ML Ops, LLM Ops, and Responsible AI to architect and scale robust platforms. Her leadership drives the strategic integration of AI technologies, ensuring the delivery of impactful, ethical solutions that align with enterprise goals and industry standards. She successfully spearheaded the end-to-end launch of an enterprise-grade Generative AI Knowledge Management product, driving product strategy, enabling go-to-market (GTM) execution, and establishing competitive pricing models. As a trusted advisor to client CXOs, she is known for her strategic foresight, her ability to realize strategy through sound implementation, and her leadership in technology strategy and AI/ML solution design. Her ability to navigate the complex AI landscape and guide organizations toward measurable business outcomes instills confidence in her clients. Her leadership has enabled successful partnerships with industry bodies like NASSCOM, fostering joint solutions with Dataiku and driving Responsible AI initiatives and partnerships that benefit clients. She has been recognized with the Women in Tech Leadership Award and is a thought leader in AI strategy and responsible AI. With numerous technical publications in IEEE journals, she shapes the conversation around AI scale using ML Ops, LLM Ops, ethics, governance, and the future of technology leadership, positioning her at the forefront of AI-driven business transformation.




    24 min
  • The Role of Experts in AI Regulation with Dr. Richard Saldanha, Founding Member of IST's AI Special Interest Group, UK | EP 03
    2024/11/07

    In this episode of the Responsible AI Report, Patrick and Dr. Richard Saldanha discuss the EU's AI Code of Conduct and its collaborative approach to AI governance. They explore the importance of adaptability in regulations, the balance between innovation and safety, and the need for qualified personnel in regulatory bodies. Richard emphasizes the significance of a principles-based approach and the role of collaboration among stakeholders in shaping effective AI regulations.

    Takeaways

    • The EU AI Act aims to create a global model for AI regulations.
    • Collaboration between academia, industry, and civil society is crucial for effective AI governance.
    • A principles-based approach allows for flexibility in AI regulation.
    • Regulators should hire individuals with a strong understanding of technology.
    • Balancing regulation and innovation requires pragmatism from all parties involved.
    • A supportive regulatory environment can enhance technological development.
    • Finding consensus among diverse stakeholders can be challenging.
    • The UK aims to align with the EU AI Act while maintaining flexibility.
    • Professional accreditation in AI skills is essential for industry growth.


    Learn more by visiting:
    1. Referenced article: https://www.ainews.com/p/eu-gathers-experts-to-draft-ai-code-of-practice-for-general-ai-models

    2. EU AI Act 2024/1689: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

    3. UK Automated Vehicles Act 2024: https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted

    4. Richard's Queen Mary University of London profile: https://www.qmul.ac.uk/sef/staff/richardsaldanha.html

    5. Richard's Academic Speakers Bureau profile: https://www.academicspeakersbureau.com/speakers/richard-saldanha

    6. The UK Institute of Science and Technology (IST) website: https://istonline.org.uk/

    7. IST AI professional accreditation:
    https://istonline.org.uk/professional-registration/registered-artificial-intelligence-practitioners/

    8. IST AI training: https://istonline.org.uk/ist-artificial-intelligence-training/

    Dr. Richard Saldanha is one of the founder members of the Institute of Science and Technology's Artificial Intelligence Special Interest Group in the UK. He is actively involved in the development of the Institute's AI professional accreditation as well as host of its online AI Seminar Series. Richard is a Visiting Lecturer at Queen Mary University of London where he teaches Machine Learning in Finance on the Master’s Degree Programme in the School of Economics and Finance. He is also an Industrial Collaborator in the AI for Control Problems Project at The Alan Turing Institute. Richard's earlier career was in quantitative finance (risk, trading and investments) gaining over two decades of experience working for institutions in the City of London. He is still actively engaged in quantitative finance via Oxquant, a consulting firm he co-heads with Dr Drago Indjic. Richard attended Oriel College, University of Oxford, and holds a doctorate (DPhil) in graph theory and multivariate analysis. He is a Fellow and Chartered Statistician of the Royal Statistical Society; a Science Council Chartered Scientist; a Fellow and Advanced Practitioner in Artificial Intelligence of the Institute of Science and Technology; a Member of the Institution of Engineering and Technology; and has recently joined the Responsibl



    21 min
  • The Intersection of AI and Healthcare with Dr. Jolley-Paige and Caraline Bruzinski, mpathic | EP 02
    2024/10/24

    In this episode of the Responsible AI Report, Patrick speaks with Caraline Bruzinski and Dr. Amber Jolley-Paige from mpathic about the intersection of AI and healthcare. They discuss the importance of measuring AI accuracy, the need for standardized testing, acceptable error rates in medical AI, and current trends in AI adoption within the healthcare sector. The conversation emphasizes the critical role of human oversight and expert involvement in ensuring the safety and efficacy of AI tools in medical applications.

    Takeaways

    • AI in healthcare requires domain-specific validation.
    • Human oversight is essential for AI accuracy.
    • Standardized testing for medical AI is crucial.
    • Acceptable error rates depend on potential harm.
    • Different healthcare sectors adopt AI at varying rates.
    • Generative AI is just one aspect of healthcare AI.
    • AI tools must be tailored to specific medical needs.
    • Experts should guide AI development and deployment.
    • The healthcare industry is still figuring out best practices.
    • AI advancements necessitate ongoing regulatory discussions.

    Learn more by visiting:
    https://mpathic.ai/
    https://www.linkedin.com/in/amber-jolley-paige-ph-d-72041b46/
    https://www.linkedin.com/in/caraline-7b22588b/

    Dr. Jolley is a licensed professional counselor, researcher, and educator with over a decade of experience in the mental health field. As the Vice President of Clinical Product and a founding team member at mpathic, she leads a team that utilizes an evidence-based labeling system to advance natural language processing technologies. Dr. Jolley leverages her extensive clinical, research, and teaching background to develop a conversation and insights engine, providing individuals and organizations with actionable insights for enhanced understanding.

    Caraline Bruzinski is a Senior Machine Learning Engineer at mpathic, where she models clinical trial data from therapist-client sessions with a focus on measuring empathy and therapist-patient conversational outcomes. Caraline specializes in refining models to achieve higher accuracy and reliability, developing custom ML models tailored to address specific clinical setting challenges, and conducting statistical analysis to enhance the accuracy and fairness of machine learning outcomes. With a Master’s degree in Computer Science, specifically focusing on AI/ML, from New York University and a background in data engineering, she brings extensive experience from her previous roles, including as Tech Lead at Glossier Inc. There, she developed a recommendation system that boosted sales by over $2M annually.

    The Responsible AI Report is produced by the Responsible AI Institute.



    18 min
  • Towards a Global & Cooperative AI Future with Renata Dwan, Special Advisor to the UN Secretary General's Envoy on Technology | EP 01
    2024/10/10

    In episode 01 of the Responsible AI Report, Renata Dwan discusses the critical need for global governance of artificial intelligence (AI) and the challenges that arise from differing national perspectives. She emphasizes the importance of collaboration, equity, and transparency in developing effective AI governance frameworks. Dwan outlines strategies for achieving consensus among nations and highlights the role of the UN in facilitating dialogue and cooperation. The discussion also touches on the implications of AI for society and the necessity of addressing market failures and ensuring equitable distribution of AI benefits.

    Takeaways

    • Global AI governance is essential due to the borderless nature of technology.
    • There is a significant asymmetry in AI development and access.
    • A global dialogue is necessary for effective AI governance.
    • Equity in AI benefits distribution is crucial for stability.
    • Trust can be built through collaborative efforts in AI governance.
    • The UN's role is to maintain international order amidst AI advancements.
    • Market failures in AI need to be addressed proactively.
    • Representation in AI governance discussions is lacking and needs improvement.
    • The lessons from climate change governance can inform AI governance.
    • Capacity building is vital for equitable participation in AI development.

    Learn more by visiting:
    https://www.un.org/en/
    https://www.un.org/techenvoy/
    https://www.un.org/techenvoy/global-digital-compact

    Renata Dwan is Special Adviser to the UN Secretary-General’s Envoy on Technology, where she led support for the elaboration of the Global Digital Compact approved by heads of state at the UN Summit of the Future. Renata has driven multilateral cooperation initiatives for over 25 years within and outside the UN. As Director of the United Nations Institute for Disarmament Research (UNIDIR), she led initiatives on digital technology governance and arms control. She drove major UN-wide initiatives on UN reform and partnerships, and dialogue on the responsible use of technologies in UN peace operations. Previously, Renata was Deputy Director of Chatham House, the Royal Institute of International Affairs, where she oversaw the Institute’s research agenda and digital initiatives. She was Programme Director at the Stockholm International Peace Research Institute (SIPRI) and a visiting fellow at the EU Institute for Security Studies. She received her B.A., M.Phil., and D.Phil. in International Relations from Oxford University, UK. Renata has published widely on international policy and security issues. She is an Irish national.

    The Responsible AI Report is produced by the Responsible AI Institute.



    25 min
  • Trailer: Welcome to the Responsible AI Report!
    2024/09/25

    The Responsible AI Report is a brand new podcast, brought to you by the Responsible AI Institute. Each week we will bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping to implement and advance AI responsibly. We are excited to have you join us!




    1 min