Privacy Model

Daneel AI is designed with a privacy gradient — you choose how much (or how little) of your data leaves your machine. This page explains the data flow for each provider and feature.

Every AI provider in Daneel has a data residency classification:

| Level | Meaning | Providers |
| --- | --- | --- |
| On-device | Data never leaves your browser process | WebGPU, Gemini Nano |
| Local network | Data goes to a server on your LAN, never to the internet | Ollama |
| Your cloud | Data goes to infrastructure you control | Azure OpenAI |
| Third-party cloud | Data goes to an external API provider | Claude (Anthropic) |

You can filter models by privacy level in Settings > AI Models to find models that match your requirements.
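The privacy levels above lend themselves to a simple filter. A minimal sketch in TypeScript (the type names and function are hypothetical, not Daneel's actual model registry):

```typescript
// Hypothetical sketch of filtering models by data-residency level,
// as the Settings > AI Models filter does. Types and data are illustrative.
type PrivacyLevel = 'on-device' | 'local-network' | 'your-cloud' | 'third-party-cloud';

interface ModelInfo {
  name: string;
  level: PrivacyLevel;
}

// Keep only models whose residency level is in the allowed set.
function filterByPrivacy(models: ModelInfo[], allowed: PrivacyLevel[]): ModelInfo[] {
  return models.filter((m) => allowed.includes(m.level));
}
```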

Regardless of which LLM provider you use, these operations never leave your browser:

  • Embedding — All vector embeddings are generated locally by the BGE Small model running on WebGPU (or WASM fallback). Your text is chunked and embedded on-device.
  • Vector search — Cosine similarity search runs in IndexedDB or GPU-accelerated memory. Search queries never leave the browser.
  • Document storage — Vault documents, site indexes, and knowledge graphs are stored in IndexedDB in your browser profile.
  • Settings and credentials — All configuration data, including encrypted API keys, stays in Chrome’s local storage.
  • Content extraction — Page text extraction (Readability.js, Turndown) runs in the content script or service worker.
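The local vector search boils down to cosine similarity over stored embeddings. A minimal sketch with plain arrays (Daneel's actual implementation runs over IndexedDB or GPU-accelerated memory):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored document chunks against a query embedding, entirely in memory;
// nothing here touches the network.
function topK(query: number[], docs: { id: string; vec: number[] }[], k = 3) {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```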

When you select a cloud LLM provider, the following data is sent to that provider’s API:

  • The assembled prompt (page content or RAG context + your question + conversation history)
  • The AI’s response, which streams back to your browser

This is the standard flow for any AI chat application. The difference is that with Daneel, you can avoid it entirely by using WebGPU or Ollama.
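To make the boundary concrete, here is a hypothetical shape for the assembled prompt (illustrative names, not Daneel's actual types); this is the only payload a cloud provider ever sees:

```typescript
// Hypothetical shape of the data that leaves the browser when a cloud
// LLM provider is selected. Everything else (embeddings, indexes, settings)
// stays local.
interface AssembledPrompt {
  context: string; // page content or retrieved RAG chunks
  question: string; // the user's current message
  history: { role: 'user' | 'assistant'; content: string }[];
}

// Flatten into the chat-message list a provider API expects.
function toMessages(p: AssembledPrompt) {
  return [
    ...p.history,
    { role: 'user' as const, content: `${p.context}\n\n${p.question}` },
  ];
}
```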

Claude (Anthropic)

  • Data sent to Anthropic’s API servers
  • API key is encrypted with AES-256-GCM before storage; transmitted via HTTPS
  • Anthropic’s data usage policy applies
  • The anthropic-dangerous-direct-browser-access: true header is set (required for browser-based API calls)
Ollama

  • Data sent to your Ollama server (default: localhost:11434)
  • Stays on your local network — nothing reaches the internet
  • You control the server and its data retention
Azure OpenAI

  • Data sent to your Azure OpenAI deployment in your tenant
  • Your Azure data residency and compliance policies apply
  • Authentication via API key or Microsoft Entra ID (formerly Azure AD)
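The Claude bullets above mention the anthropic-dangerous-direct-browser-access header. A sketch of the request setup (the endpoint and header names follow Anthropic's public API; the helper itself is hypothetical):

```typescript
// Build the fetch options for a browser-side Messages API call.
// Anthropic rejects browser-origin requests unless this opt-in header is set.
function claudeRequestInit(apiKey: string, payload: object) {
  return {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'anthropic-dangerous-direct-browser-access': 'true',
    },
    body: JSON.stringify(payload),
  };
}

// Usage (in the extension):
// fetch('https://api.anthropic.com/v1/messages', claudeRequestInit(key, body));
```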

When using MCP servers, tool call parameters and results are exchanged with the remote server. Each MCP server has its own data handling policy. OAuth-connected servers (Stripe, Notion, etc.) operate under their respective privacy policies.
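What crosses the wire to an MCP server is the tool name and its arguments, framed as JSON-RPC (the framing follows the MCP specification; the tool and values here are made up):

```typescript
// A tools/call request as sent to a remote MCP server. Only these
// parameters, and the result coming back, are shared with the server.
const toolCallRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'search_notes', // hypothetical tool exposed by the server
    arguments: { query: 'quarterly report' },
  },
};
```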

When enabled, Daneel injects your approximate location (city level) and current datetime into agent system prompts. This data is:

  • Location — resolved once per session via browser geolocation + OpenStreetMap reverse geocoding. Stored only in memory (never persisted to disk). Sent to your LLM provider as part of the prompt.
  • Datetime — computed locally from Date and Intl.DateTimeFormat. No network calls.
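The datetime half is purely local; a minimal sketch (hypothetical helper, not Daneel's actual code):

```typescript
// Format the current datetime for prompt injection using only built-in
// APIs (Date and Intl.DateTimeFormat); no network call is involved.
function datetimeContext(locale = 'en-US', timeZone = 'UTC'): string {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: 'full',
    timeStyle: 'short',
    timeZone,
  }).format(new Date());
}
```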

Both are gated by toggles in Settings > Privacy: location injection is off by default, datetime injection is on by default. The telemetry geolocation system (below) is completely separate and does not share data with context injection.

See Environment Context for the full architecture.

Daneel includes optional analytics (GA4 Measurement Protocol). When enabled:

Collected: feature usage counters (chat, search, crawl, model load), provider and model name, OS, Chrome version, language, country/region.

Never collected: page content, URLs you visit, chat messages, documents, API keys, or any personally identifiable information.
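A sketch of what such an event payload looks like (the endpoint and payload shape come from the public GA4 Measurement Protocol; the helper and field values are illustrative, not Daneel's exact events):

```typescript
// Build a GA4 Measurement Protocol event: coarse counters and environment
// fields only, never page content, URLs, or chat text.
function telemetryEvent(clientId: string, name: string, params: Record<string, string>) {
  return {
    endpoint: 'https://www.google-analytics.com/mp/collect',
    body: {
      client_id: clientId, // random ID, not tied to identity
      events: [{ name, params }], // e.g. { provider: 'webgpu' }
    },
  };
}
```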

Telemetry is controlled by a toggle in Settings > Privacy. Disabling it stops all analytics collection immediately.

Credential storage

  • Claude API keys: AES-256-GCM encryption at rest in Chrome storage
  • MCP OAuth tokens: stored in Chrome’s local storage with auto-migration from legacy formats
  • S3 credentials: stored in Chrome storage, excluded from data exports
  • Azure SAS URLs: stored in Chrome storage, excluded from data exports
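The AES-256-GCM encryption at rest can be sketched with the Web Crypto API (illustrative, not Daneel's actual code; Node's webcrypto is used here so the sketch runs outside a browser, where the same API is available as crypto.subtle):

```typescript
import { webcrypto } from 'node:crypto';

// Encrypt an API key with AES-256-GCM under a previously derived key.
// A fresh 96-bit IV is generated per encryption, as GCM requires.
async function encryptApiKey(apiKey: string, key: CryptoKey) {
  const iv = webcrypto.getRandomValues(new Uint8Array(12));
  const ct = await webcrypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(apiKey),
  );
  return { iv: Array.from(iv), data: Array.from(new Uint8Array(ct)) };
}

// Decrypt a stored blob back to the plaintext key.
async function decryptApiKey(blob: { iv: number[]; data: number[] }, key: CryptoKey) {
  const pt = await webcrypto.subtle.decrypt(
    { name: 'AES-GCM', iv: new Uint8Array(blob.iv) },
    key,
    new Uint8Array(blob.data),
  );
  return new TextDecoder().decode(pt);
}
```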
Recommended configurations

  • Maximum privacy: Use WebGPU for LLM + default local embedding. Zero data leaves your machine.
  • Privacy with power: Use Ollama on localhost. Data stays on your machine but you get access to larger models.
  • Enterprise compliance: Use Azure OpenAI. Data stays in your Azure tenant under your compliance umbrella.
  • Best quality: Use Claude. Prompts are sent to Anthropic’s API, but embedding and search remain local.

To see this in action, follow Your First Page Chat with the WebGPU provider — everything runs locally.