
Stopping AI Oversharing with Knostic
About this content
Large language models are most useful to your business when they have access to your data. But these models also overshare by default, surfacing information beyond a user's need to know because they lack fine-grained access controls. Organizations that try to limit the data an LLM can access risk the opposite problem, undersharing: withholding information users need to do their jobs efficiently.
In this episode, Sounil Yu, CTO at Knostic, explains how Knostic addresses internal knowledge segmentation, offers continuous assessments, and helps prevent oversharing while also identifying where undersharing holds users back. Joining him are our panelists, Ross Young, CISO-in-residence at Team8, and David Cross, CISO at Atlassian.
Huge thanks to our sponsor, Knostic
Knostic protects enterprises from LLM oversharing by applying need-to-know access controls to AI tools like Microsoft 365 Copilot. Get visibility into overshared data, fix risky exposures, and deploy AI confidently—without data leakage. If you’re rolling out Copilot or Glean, you need Knostic.