How PacketProof uses AI

Plain-language summary of what gets sent to the AI provider, what does not, and how a workspace owner controls it. The matching toggle lives in Settings → AI Data Processing.

What is sent

  • The text of the question being answered.
  • Each shortlisted evidence item’s id, title, and a description of up to 600 characters.
  • For text-format evidence (plain text, Markdown, CSV, JSON), a single excerpt of up to 1,500 characters of the extracted text. Binary formats (PDF, images) never have any body content sent.
  • The standard AI provider request envelope (model, temperature, system prompt).
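The truncation rules above can be sketched in code. This is an illustrative sketch only, not the actual implementation; the type and function names (EvidenceItem, buildEvidencePayload) and the format values are assumptions.

```typescript
// Hypothetical shape of a shortlisted evidence item (names are illustrative).
type EvidenceItem = {
  id: string;
  title: string;
  description: string;
  format: "text" | "markdown" | "csv" | "json" | "pdf" | "image";
  extractedText?: string; // present only when text extraction succeeded
};

const TEXT_FORMATS = new Set(["text", "markdown", "csv", "json"]);
const DESCRIPTION_LIMIT = 600;  // characters of description sent per item
const EXCERPT_LIMIT = 1500;     // characters of extracted text sent per item

// Build the per-item payload that accompanies the question text.
// Binary formats (PDF, images) never contribute body content.
function buildEvidencePayload(item: EvidenceItem) {
  const payload: { id: string; title: string; description: string; excerpt?: string } = {
    id: item.id,
    title: item.title,
    description: item.description.slice(0, DESCRIPTION_LIMIT),
  };
  if (TEXT_FORMATS.has(item.format) && item.extractedText) {
    payload.excerpt = item.extractedText.slice(0, EXCERPT_LIMIT);
  }
  return payload;
}
```

Note that URLs, storage paths, and file names are simply never read by this step, which is how the exclusions in the next section are enforced by construction.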

What is never sent

  • Raw evidence files (PDFs, images, attachments).
  • Stored evidence URLs or external source URLs you paste into an evidence row.
  • Storage paths, file names, or anything else derived from how files are saved.
  • Member email addresses, names, or roles.
  • Stripe customer ids, billing data, or audit log rows.
  • Data from any other workspace.

Provider + region

Default provider: OpenAI. Default region label: EU. EU-hosted inference is available by pointing the AI client at an Azure OpenAI EU endpoint via configuration. Owners can see the active provider and region in Settings → AI Data Processing.
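A minimal sketch of how that configuration might resolve, assuming hypothetical environment variable names (AI_PROVIDER, AI_REGION, AI_BASE_URL) and an illustrative Azure endpoint URL:

```typescript
type AiClientConfig = { provider: string; region: string; baseUrl: string };

// Resolve the AI client configuration from the environment,
// falling back to the documented defaults (OpenAI, EU label).
function resolveAiConfig(env: Record<string, string | undefined>): AiClientConfig {
  return {
    provider: env.AI_PROVIDER ?? "openai",
    region: env.AI_REGION ?? "eu",
    // EU-hosted inference: point this at an Azure OpenAI EU endpoint.
    baseUrl: env.AI_BASE_URL ?? "https://api.openai.com/v1",
  };
}
```

Because the base URL is plain configuration, switching to EU-hosted inference requires no code change, only a different endpoint value.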

Consent gate

AI generation is gated behind an explicit acceptance step in Settings. Until a workspace owner or admin accepts the AI Data Processing addendum, no question text leaves the platform.
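The gate described above amounts to a single check before any request is built. A sketch, with assumed field names (aiAddendumAcceptedAt, aiAddendumAcceptedBy):

```typescript
type Workspace = {
  aiAddendumAcceptedAt?: Date;   // set when an owner/admin accepts the addendum
  aiAddendumAcceptedBy?: string; // who accepted it
};

// Throws before any question text is assembled or sent,
// so unconsented workspaces never reach the AI client.
function assertAiConsent(ws: Workspace): void {
  if (!ws.aiAddendumAcceptedAt) {
    throw new Error("AI Data Processing addendum not accepted for this workspace");
  }
}
```

Placing the check ahead of payload construction, rather than ahead of the network call, means nothing is even serialized for an unconsented workspace.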

Hallucination guard

Drafts must cite at least one of the shortlisted evidence ids. When the model proposes an answer that does not cite a real evidence row, our validator rejects it and writes a documented gap row instead; a fabricated answer is never recorded.
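The validator's decision can be sketched as follows. This is an illustrative sketch, not the production code; the names (DraftResult, validateDraft) and the exact rejection rule (every cited id must be a real shortlisted row, and there must be at least one) are assumptions consistent with the description above.

```typescript
type DraftResult =
  | { kind: "draft"; text: string; citedIds: string[] }
  | { kind: "gap"; reason: string };

// Accept a draft only if it cites at least one id and every
// cited id matches a real shortlisted evidence row; otherwise
// record a documented gap instead of a fabricated answer.
function validateDraft(
  text: string,
  citedIds: string[],
  shortlistedIds: Set<string>,
): DraftResult {
  const citesRealEvidence =
    citedIds.length > 0 && citedIds.every((id) => shortlistedIds.has(id));
  if (!citesRealEvidence) {
    return { kind: "gap", reason: "draft did not cite a shortlisted evidence id" };
  }
  return { kind: "draft", text, citedIds };
}
```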

Human review required

Drafts always start as generated. An editor approves, edits, or rejects each draft before export. The product surfaces this review state but does not, on its own, block export of unapproved drafts.
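One way to picture the review workflow is as a small state machine. The statuses beyond "generated" and the allowed transitions below are an illustrative guess, not a documented contract:

```typescript
type DraftStatus = "generated" | "edited" | "approved" | "rejected";

// Drafts always start as "generated"; an editor moves them forward.
// The transition table here is hypothetical.
const allowedTransitions: Record<DraftStatus, DraftStatus[]> = {
  generated: ["edited", "approved", "rejected"],
  edited: ["approved", "rejected"],
  approved: [],
  rejected: [],
};

function canTransition(from: DraftStatus, to: DraftStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```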

Questions? Mail support@packetproof.io.