
Should AI Agents Be Trusted? The Problem and Solution, w/ Billions.Network CEO Evin McMullen
About this content
What happens when an AI agent says something harmful, or makes a costly mistake? Who’s responsible—and how can we even know who the agent belongs to in the first place?
In this episode of AI-Curious, we talk with Evin McMullen, CEO and co-founder of Billions.Network, a startup building cryptographic trust infrastructure to verify the identity and accountability of AI agents and digital content.
We explore the unsettling rise of synthetic media and deepfakes, why identity verification is foundational to AI safety, and how platforms—not users—should be responsible for determining what’s real. Evin explains how Billions uses zero-knowledge proofs to establish trust without compromising privacy, and offers a vision of a future where billions of AI agents operate transparently, under clear reputational and legal frameworks.
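For listeners curious what a zero-knowledge proof looks like in practice, here is a minimal illustrative sketch of the classic Schnorr identification protocol in Python. It is not Billions’ actual protocol, and it uses deliberately tiny, insecure demo parameters; it only shows the core idea raised in the episode: proving you hold a secret credential without ever revealing it.

# Toy Schnorr identification protocol (illustrative only, NOT Billions' scheme).
# A prover shows knowledge of secret key x for public key y = g^x mod p
# without revealing x. Parameters are far too small for real use.
import secrets

# Demo group: p prime, q = 11 divides p - 1, g = 4 generates the order-q subgroup.
p, q, g = 23, 11, 4

# Prover's long-term identity: secret key x, public key y.
x = secrets.randbelow(q - 1) + 1      # secret, never shared
y = pow(g, x, p)                      # public, acts as the agent's identity

# 1. Commit: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# 2. Challenge: verifier sends a random challenge c.
c = secrets.randbelow(q)

# 3. Response: prover answers s = r + c*x mod q, which leaks nothing about x on its own.
s = (r + c * x) % q

# 4. Verify: g^s must equal t * y^c (mod p); this holds only if the prover knew x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: knowledge of x demonstrated without revealing it")

Real systems use much larger groups (or elliptic curves) and make the proof non-interactive, but the privacy-preserving principle is the same.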
Along the way, we cover:
- The problem with unverified AI agents (2:00)
- Why 50% of online traffic is now bots—and why that matters (2:45)
- What “identity” means in an AI-first internet (10:00)
- The difference between chatbots and agentic AI (13:00)
- The Air Canada chatbot legal fiasco (15:00)
- Deepfakes, misinformation, and the limits of user responsibility (22:00)
- Billions’ “deep trust” framework, explained (29:00)
- How platforms can earn trust by verifying content authenticity (34:00)
- Breaking news: Billions’ work with the European Commission (38:20)
This one dives deep into the infrastructure of digital trust—and why the future of AI may depend on getting this right.
Learn more: https://billions.network
🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser