
This tutorial walks you through connecting an alternative AI provider. By the end, you'll have switched from the default local model to a backend that better fits your quality, privacy, or cost needs.

Daneel AI works out of the box with WebGPU (local inference). But if you want higher-quality responses, stricter privacy guarantees, or an enterprise deployment, you can connect a different provider. This tutorial covers the four options.

## Option A: Claude (Anthropic API)

Claude is Anthropic's flagship model family. It offers the highest quality responses and supports native tool calling with MCP servers.

1. Open Daneel's settings (gear icon on the launcher).
2. Navigate to **Claude** in the sidebar.
3. Paste your Anthropic API key. The key is encrypted with AES-256-GCM and stored locally — it never leaves your browser unencrypted.
4. Select a model:
   - **Claude Opus 4.7** — most capable, hybrid reasoning for coding and vision
   - **Claude Opus 4.6** — previous flagship, same pricing as 4.7
   - **Claude Sonnet 4.6** — balanced quality and speed
   - **Claude Haiku 4.5** — fastest, lowest cost
5. Close settings. In the chat panel, switch the provider dropdown to **Claude**.

You're now chatting with Claude. You'll see a cost annotation next to each response showing token usage.
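The cost annotation is derived from that token usage. As a sketch, assuming hypothetical per-million-token prices (Daneel's actual price table may differ):

```typescript
// Token usage as reported by the provider (Anthropic's Messages API
// reports input_tokens and output_tokens in its `usage` field).
interface Usage {
  input_tokens: number;
  output_tokens: number;
}

// Hypothetical prices in USD per million tokens -- not real price data.
interface Pricing {
  inputPerMTok: number;
  outputPerMTok: number;
}

// Compute the dollar cost of one response from its token usage.
function responseCost(usage: Usage, pricing: Pricing): number {
  return (
    (usage.input_tokens / 1_000_000) * pricing.inputPerMTok +
    (usage.output_tokens / 1_000_000) * pricing.outputPerMTok
  );
}
```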

:::note
Claude requires an API key from [Anthropic's console](https://console.anthropic.com/). Usage is billed per token.
:::
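If you're curious what happens when you send a message, the provider ultimately issues a request to Anthropic's Messages API. A minimal sketch of building that request (the model id is a placeholder; check Anthropic's docs for current ids):

```typescript
// Shape of one chat turn in the Anthropic Messages API.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Build the fetch() arguments for a Messages API call.
// The API key travels in the x-api-key header; anthropic-version is required.
function buildClaudeRequest(
  apiKey: string,
  model: string,
  messages: ChatMessage[]
) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({ model, max_tokens: 1024, messages }),
    },
  };
}
```

Daneel handles this for you; the sketch just shows why the API key is needed and where it goes (a request header, sent over HTTPS).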

## Option B: Ollama (local server)

Ollama runs open-source models on your machine. Responses stay on your local network — nothing reaches the internet.

1. [Install Ollama](https://ollama.com/) on your computer.
2. Pull a model: `ollama pull llama3.2` (or any model you prefer).
3. In Daneel's settings, navigate to **Ollama**.
4. Set the base URL (default: `http://localhost:11434`). Daneel auto-probes the connection.
5. Select a model from the dropdown — Daneel lists all models installed on your Ollama server.
6. Close settings and switch the provider dropdown to **Ollama**.

Ollama supports tool calling with MCP servers, model management (pull, delete), and think-block streaming.
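The steps above map onto Ollama's HTTP API: installed models are listed via `GET /api/tags`, and chat turns go to `POST /api/chat`. A sketch of building those requests against a configurable base URL:

```typescript
// Build the two Ollama endpoints a client like Daneel typically uses.
// The base URL defaults to Ollama's standard local port.
function ollamaEndpoints(baseUrl: string = "http://localhost:11434") {
  const root = baseUrl.replace(/\/+$/, ""); // tolerate a trailing slash
  return {
    tags: `${root}/api/tags`, // GET: lists installed models
    chat: `${root}/api/chat`, // POST: chat completion (supports streaming)
  };
}

// Build a request body for /api/chat.
function ollamaChatBody(model: string, prompt: string, stream = true) {
  return { model, messages: [{ role: "user", content: prompt }], stream };
}
```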

## Option C: Azure OpenAI (enterprise)

For enterprise environments with Azure OpenAI Service deployments.

1. In Daneel's settings, navigate to **Azure OpenAI**.
2. Enter your Azure endpoint URL and deployment name.
3. Choose an authentication method:
   - **API Key** — paste your Azure API key
   - **Entra ID (OAuth2)** — authenticate via Microsoft identity
4. Select your deployed model.
5. Close settings and switch the provider to **Azure OpenAI**.
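Behind step 2, the endpoint URL and deployment name combine into the request URL Azure OpenAI expects: requests are routed per *deployment*, not per model name. A sketch (the `api-version` value here is one example; use whichever version your deployment supports):

```typescript
// Azure OpenAI chat completions URL format:
// {endpoint}/openai/deployments/{deployment}/chat/completions?api-version=...
function azureChatUrl(
  endpoint: string,
  deployment: string,
  apiVersion = "2024-06-01" // example version, not necessarily current
): string {
  const root = endpoint.replace(/\/+$/, ""); // tolerate a trailing slash
  return (
    `${root}/openai/deployments/${encodeURIComponent(deployment)}` +
    `/chat/completions?api-version=${encodeURIComponent(apiVersion)}`
  );
}
```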

See [How to Set Up Azure OpenAI](/how-to/azure-openai/) for the detailed guide.

## Option D: Gemini Nano (Chrome built-in)

Gemini Nano is a small model built into Chrome. No downloads, no API keys.

1. Make sure you're on Chrome 120+ with the Gemini Nano flag enabled.
2. In Daneel's settings, navigate to **Gemini Nano**.
3. Daneel detects availability automatically. If available, select a language.
4. Switch the provider dropdown to **Gemini Nano**.

Gemini Nano runs on-device with no internet required, but it's a small model — expect lower quality than Claude or Ollama with larger models.
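The availability check in step 3 boils down to feature detection in the page. Chrome has shipped its built-in Prompt API under a few different globals across releases, so the names checked below are assumptions; consult current Chrome documentation for the exact surface. A sketch:

```typescript
// Feature-detect Chrome's built-in model. The globals checked here
// ("LanguageModel", "ai") are assumptions -- the experimental Prompt API
// surface has changed between Chrome releases.
function geminiNanoLikelyAvailable(
  g: Record<string, unknown> = globalThis as unknown as Record<string, unknown>
): boolean {
  return "LanguageModel" in g || "ai" in g;
}
```

Detection only tells you the API exists; the model itself may still need a one-time on-device download before it can answer.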

## Comparing providers

For a deeper comparison of trade-offs between local and cloud providers, see [The Provider Spectrum](/concepts/providers/).

## Next steps

- [Connect an MCP server](/how-to/mcp-server/) to give your AI access to external tools
- [Create a custom agent](/how-to/agents/) with a specialized prompt
- Read about the [privacy model](/concepts/privacy/) to understand the data flow for each provider
