AI risk

How to know whether your RAG is exposing internal data

A RAG system does not expose data only because of the model. It exposes data through inherited permissions, overly broad data sources, accidental logging, weak prompts, connected tools, and missing human review.

Baseline question: can a user receive information they are not authorized to see, just because the document exists somewhere in a connected source?
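One concrete control for this question is to re-check permissions at query time instead of trusting the ACL captured when the document was indexed. A minimal Python sketch; the `Chunk` shape and the `acl_lookup` callable are illustrative, not a specific library's API:

```python
import dataclasses


@dataclasses.dataclass
class Chunk:
    doc_id: str
    text: str
    allowed_groups: frozenset  # ACL snapshot stored at indexing time


def filter_by_effective_permissions(chunks, user_groups, acl_lookup):
    """Keep only chunks the user may read *right now*.

    acl_lookup(doc_id) must query the live permission system, not the
    ACL captured at indexing time -- indexed permissions go stale.
    """
    visible = []
    for chunk in chunks:
        current_acl = acl_lookup(chunk.doc_id)
        if current_acl & user_groups:  # any shared group grants access
            visible.append(chunk)
    return visible
```

Everything retrieved but filtered out here never reaches the prompt, so the model cannot paraphrase content the user was never entitled to.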

Risks to map

  • Connected data sources and sensitivity classification.
  • Effective permissions, not just expected permissions.
  • Indexed content that should no longer be available.
  • Logs storing prompts, answers or documents.
  • Tools the agent can execute.
  • Providers retaining or processing data.
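The second and third points above can be checked with a periodic audit that diffs the index against the live source: entries whose source document is gone should be purged, and entries whose permissions have drifted should be re-indexed. A sketch under the assumption of a hypothetical `live_lookup` interface:

```python
def audit_index(indexed_docs, live_lookup):
    """Flag index entries that should no longer be served as-is.

    indexed_docs: dict of doc_id -> group set captured at index time.
    live_lookup(doc_id): current group set, or None if the source
    document was deleted. (Both interfaces are illustrative.)
    """
    to_remove, to_reindex = [], []
    for doc_id, indexed_acl in indexed_docs.items():
        live_acl = live_lookup(doc_id)
        if live_acl is None:
            to_remove.append(doc_id)    # source gone: purge from index
        elif live_acl != indexed_acl:
            to_reindex.append(doc_id)   # permissions drifted: refresh
    return to_remove, to_reindex
```

Running this on a schedule turns "indexed content that should no longer be available" from a hidden risk into a measurable queue.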

Minimum guardrails

Before scaling, define:

  • Data scope.
  • Permission controls.
  • Redaction.
  • Safe logging.
  • Abuse monitoring.
  • Human review for critical actions.
  • A process to remove compromised sources.
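As one example of the redaction and safe-logging guardrails, prompts and answers can be passed through a redaction pass before anything is written to logs. The patterns below are illustrative placeholders, not production-grade PII detection:

```python
import re

# Minimal redaction pass applied before prompts/answers hit logs.
# Two sample patterns only; real systems need far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(text):
    """Replace matched sensitive values with a type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Logging the redacted text keeps traces useful for debugging while keeping raw personal data out of log storage.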



Need to turn this into evidence for a real decision?

We can define a focused sprint to review scope, evidence, and red flags, and deliver a 30/60/90 plan.

Talk to Kronixial