Stopping AI Oversharing with Knostic

About this content

Large language models are most useful to your business when they have access to your data. By default, however, these models overshare: lacking sophisticated access controls, they surface information that should be restricted on a need-to-know basis. Yet organizations that clamp down on the data an LLM can access risk undersharing instead, withholding the information users need to do their jobs efficiently.

In this episode, Sounil Yu, CTO at Knostic, explains how Knostic addresses internal knowledge segmentation, offers continuous assessments, and helps prevent oversharing while also identifying undersharing opportunities. Joining him are our panelists, Ross Young, CISO-in-residence at Team8, and David Cross, CISO at Atlassian.

Huge thanks to our sponsor, Knostic


Knostic protects enterprises from LLM oversharing by applying need-to-know access controls to AI tools like Microsoft 365 Copilot. Get visibility into overshared data, fix risky exposures, and deploy AI confidently, without data leakage. If you're rolling out Copilot or Glean, you need Knostic.
