Remote Providers

Remote Providers let you connect Osaurus to external inference APIs (OpenAI, Anthropic, Open Responses, and compatible endpoints), giving you cloud models alongside your local MLX models — all behind the same Osaurus URL.

Why this matters

  • One client connection (your script's OpenAI SDK pointed at Osaurus) gets access to every model — local and cloud — by name
  • API keys are stored in the macOS Keychain, never in plain-text config files
  • Switch backends without touching client code; same memory and agent context follows you across providers

Adding a provider

Via the UI

  1. Open the Management window (⌘ ⇧ M)
  2. Click Providers in the sidebar
  3. Click Add Provider
  4. Select a preset or Custom
  5. Configure connection settings
  6. Click Save

Provider presets

Preset       Host                Port   Base path   API format   Auth
Anthropic    api.anthropic.com   443    /v1         Anthropic    API key required
OpenAI       api.openai.com      443    /v1         OpenAI       API key required
xAI          api.x.ai            443    /v1         OpenAI       API key required
OpenRouter   openrouter.ai       443    /api/v1     OpenAI       API key required
Custom       (you specify)       —      /v1         OpenAI       Optional

For Ollama, LM Studio, Venice AI, or any other OpenAI-compatible endpoint, use Custom and configure host/port manually. See Provider-specific notes below.

API format types

Format           Endpoint            Description
OpenAI           /chat/completions   OpenAI Chat Completions
Anthropic        /messages           Anthropic Messages
Open Responses   /responses          Open Responses

Configuration options

Basic settings

Setting     Description
Name        Display name for the provider
Host        Hostname or IP (e.g. api.openai.com)
Protocol    HTTP or HTTPS
Port        Server port (optional; defaults to the protocol's standard port)
Base path   API path prefix (usually /v1)

Authentication

Setting     Description
Auth type   None or API Key
API key     Stored in the Keychain, never in plain text

Advanced

Setting          Description                                 Default
Enabled          Whether the provider is active              true
Auto-connect     Connect automatically when Osaurus starts   true
Timeout          Request timeout in seconds                  60
Custom headers   Additional HTTP headers                     —

Custom headers

You can add custom HTTP headers for specialized authentication or configuration:

X-Custom-Header: value
Authorization: Bearer token

For headers containing secrets, mark them as "secret" to store values in the Keychain rather than in plain-text configuration.
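
Conceptually, that split looks like the following hypothetical sketch (Osaurus's actual storage code is Swift and differs; this only illustrates the secret/plain separation):

```python
def split_headers(headers: dict[str, str], secret_names: set[str]):
    """Separate headers destined for the Keychain from those that can
    safely live in plain-text configuration."""
    secret = {k: v for k, v in headers.items() if k in secret_names}
    plain = {k: v for k, v in headers.items() if k not in secret_names}
    return plain, secret

plain, secret = split_headers(
    {"X-Custom-Header": "value", "Authorization": "Bearer token"},
    secret_names={"Authorization"},
)
```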

Using remote models

Once a provider is connected, its models appear alongside local models.

In the Chat UI

  • Click the model selector dropdown
  • Remote models are grouped under their provider name
  • Select one and chat

Via the OpenAI SDK

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1337/v1", api_key="osaurus")

# Use a remote model — name matches what the upstream provider expects
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

Via curl

curl http://127.0.0.1:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

The model name should match what the remote provider expects.
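
Because Osaurus forwards the name as-is, a small lookup table can keep client code provider-agnostic. A hypothetical sketch (the aliases and mapping below are illustrative, not part of Osaurus):

```python
# Hypothetical mapping from a friendly alias to each provider's model ID.
MODEL_ALIASES = {
    "fast": {"openai": "gpt-4o-mini", "openrouter": "openai/gpt-4o-mini"},
    "smart": {"openai": "gpt-4o", "openrouter": "openai/gpt-4o"},
}

def resolve_model(alias: str, provider: str) -> str:
    """Return the model ID the given upstream provider expects."""
    try:
        return MODEL_ALIASES[alias][provider]
    except KeyError:
        raise ValueError(f"no mapping for {alias!r} on {provider!r}") from None

print(resolve_model("smart", "openrouter"))  # openai/gpt-4o
```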

Connection states

State          Indicator         Description
Connected      Green             Active connection, models available
Connecting     Blue (animated)   Establishing connection
Disconnected   Gray              Not connected
Disabled       Gray              Manually disabled
Error          Red               Connection failed (see error message)

Troubleshooting

  1. Verify the endpoint — host, port, base path
  2. Check credentials — API key is correct
  3. Test directly — curl the upstream endpoint to confirm it's reachable
  4. Check network — no firewall blocking the connection
  5. Review error message — the provider card shows detailed error info
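
Steps 1 and 4 can be narrowed down with a plain TCP check before looking at credentials or request formats. A minimal sketch:

```python
import socket

def endpoint_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A True result means the network path and port are fine; an auth or
    API-format problem would still fail at the HTTP layer.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. endpoint_reachable("api.openai.com", 443)
```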

Provider-specific notes

OpenAI

Host: api.openai.com
Protocol: HTTPS
Base: /v1
Auth: API key (platform.openai.com)

All ChatGPT models via the OpenAI API.

OpenRouter

Host: openrouter.ai
Protocol: HTTPS
Base: /api/v1
Auth: API key (openrouter.ai)

OpenRouter aggregates many providers. Use IDs like:

  • openai/gpt-4o
  • anthropic/claude-3.5-sonnet
  • google/gemini-pro
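
These IDs are vendor/model pairs separated by the first slash, which client code can rely on when grouping or filtering models. A small sketch:

```python
def split_openrouter_id(model_id: str) -> tuple[str, str]:
    """Split an OpenRouter model ID like 'openai/gpt-4o' into (vendor, model)."""
    vendor, sep, model = model_id.partition("/")
    if not sep or not vendor or not model:
        raise ValueError(f"not a vendor/model ID: {model_id!r}")
    return vendor, model

print(split_openrouter_id("anthropic/claude-3.5-sonnet"))  # ('anthropic', 'claude-3.5-sonnet')
```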

Anthropic

Host: api.anthropic.com
Protocol: HTTPS
Base: /v1
Auth: API key (console.anthropic.com)
Format: Anthropic

xAI

Host: api.x.ai
Protocol: HTTPS
Base: /v1
Auth: API key (x.ai)

Venice AI

Use the Custom preset:

Host: api.venice.ai
Protocol: HTTPS
Base: /v1
Auth: API key (venice.ai)
Format: OpenAI

Venice provides uncensored, privacy-focused inference with no data retention.

Ollama

Use the Custom preset:

Host: localhost (or remote Ollama IP)
Protocol: HTTP
Port: 11434
Base: /v1
Auth: None (unless you've configured Ollama auth)

To expose Ollama on the network:

OLLAMA_HOST=0.0.0.0:11434 ollama serve

LM Studio

Use the Custom preset:

Host: localhost
Protocol: HTTP
Port: 1234
Base: /v1
Auth: None

Make sure "Start Server" is enabled in LM Studio.

Security

API key storage

API keys are stored in the macOS Keychain, not in plain-text configuration files:

  • Encrypted at rest
  • Protected by your macOS login
  • Never exposed in config files or logs

Secret headers

Custom headers marked as "secret" are also stored in the Keychain.

Configuration files

Non-secret provider configuration is stored at:

~/.osaurus/providers/remote.json

This file contains connection settings but not API keys or secret headers.
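
If you want to inspect that file programmatically, a hedged sketch (the list-of-objects layout and the "name" field below are assumptions about the schema, not documented — check your own file first):

```python
import json
from pathlib import Path

def list_provider_names(path: Path) -> list[str]:
    """Return the display names of configured providers.

    Assumes remote.json holds a JSON array of provider objects, each
    with a "name" field.
    """
    data = json.loads(path.read_text())
    return [provider["name"] for provider in data]

# e.g. list_provider_names(Path.home() / ".osaurus/providers/remote.json")
```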

Managing providers

Action           How
Edit             Click the pencil icon on the provider card → modify → Save. The connection re-establishes with the new settings.
Delete           Click the trash icon → confirm. Removes the provider and its credentials from the Keychain.
Enable/disable   Toggle the switch on the provider card

Related:

  • Models — how cloud and local models share the same picker
  • HTTP API — what callers see once a provider is connected
  • Remote MCP Providers — connecting Osaurus to remote tool providers (different feature)