Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉
- OpenAI API key 🔑
- Google AI Studio (Gemini) API key (if using the default provider) 🔑
- uv installed.
1. Clone this repository:

   `git clone https://github.com/1rgs/claude-code-openai.git`

   `cd claude-code-openai`

2. Install uv (if you haven't already):

   `curl -LsSf https://astral.sh/uv/install.sh | sh`

   (`uv` will handle dependencies based on `pyproject.toml` when you run the server.)
3. Configure Environment Variables: copy the example environment file:

   `cp .env.example .env`

   Edit `.env` and fill in your API keys and model configurations:

   - `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
   - `OPENAI_API_KEY`: Your OpenAI API key (required if using OpenAI models as a fallback or primary).
   - `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if using the default Gemini preference).
   - `PREFERRED_PROVIDER` (Optional): Set to `google` (default) or `openai`. This determines the primary backend for mapping `haiku`/`sonnet`.
   - `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gemini-2.5-pro-preview-03-25` (if `PREFERRED_PROVIDER=google` and the model is known) or `gpt-4o`.
   - `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gemini-2.0-flash` (if `PREFERRED_PROVIDER=google` and the model is known) or `gpt-4o-mini`.

   Mapping Logic:

   - If `PREFERRED_PROVIDER=google` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/`, provided those models are in the server's known `GEMINI_MODELS` list.
   - Otherwise (if `PREFERRED_PROVIDER=openai` or the specified Google model isn't known), they map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
4. Run the server:

   `uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload`

   (`--reload` is optional, for development. A quick way to verify the proxy is responding is sketched after the Claude Code steps below.)
5. Install Claude Code (if you haven't already):

   `npm install -g @anthropic-ai/claude-code`

6. Connect to your proxy:

   `ANTHROPIC_BASE_URL=http://localhost:8082 claude`
That's it! Your Claude Code client will now use the configured backend models (defaulting to Gemini) through the proxy. 🎯
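If you want to sanity-check the proxy before (or without) Claude Code, you can POST a request in Anthropic's Messages API format to it directly. This is only an illustrative sketch, not part of the repository: it assumes the default port from the run command above and uses the `requests` library, and the `x-api-key` value is a placeholder since the proxy uses the backend keys from your `.env`.

```python
# Hypothetical quick check: send one Anthropic-format request to the proxy.
import requests

resp = requests.post(
    "http://localhost:8082/v1/messages",
    headers={
        "x-api-key": "placeholder",          # the proxy relies on its own backend keys
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-haiku-20240307",  # mapped to SMALL_MODEL by the proxy
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())
```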
The proxy automatically maps Claude models to either OpenAI or Gemini models based on the configured provider and models:
| Claude Model | Default Mapping | When BIG_MODEL/SMALL_MODEL is a Gemini model |
|---|---|---|
| haiku | openai/gpt-4o-mini | gemini/[model-name] |
| sonnet | openai/gpt-4o | gemini/[model-name] |
The following OpenAI models are supported with automatic `openai/` prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
The following Gemini models are supported with automatic `gemini/` prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet will map to `gemini/[model-name]`
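As a rough illustration of the mapping and prefixing rules above (function and variable names here are illustrative, not necessarily those used in `server.py`):

```python
# Illustrative sketch of the alias-to-backend mapping described above.
# GEMINI_MODELS / OPENAI_MODELS are abbreviated; see the lists earlier.
import os

GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}
OPENAI_MODELS = {"gpt-4o", "gpt-4o-mini", "o1", "o1-mini", "o3-mini"}

def map_model(requested: str) -> str:
    provider = os.environ.get("PREFERRED_PROVIDER", "google")
    big = os.environ.get("BIG_MODEL", "gemini-2.5-pro-preview-03-25" if provider == "google" else "gpt-4o")
    small = os.environ.get("SMALL_MODEL", "gemini-2.0-flash" if provider == "google" else "gpt-4o-mini")

    # Claude aliases: sonnet -> BIG_MODEL, haiku -> SMALL_MODEL
    if "sonnet" in requested:
        target = big
    elif "haiku" in requested:
        target = small
    else:
        target = requested

    # Add the prefix LiteLLM expects for the chosen backend
    if target in GEMINI_MODELS:
        return f"gemini/{target}"
    if target in OPENAI_MODELS:
        return f"openai/{target}"
    return target  # already prefixed, or a custom model

print(map_model("claude-3-5-sonnet-20241022"))  # -> gemini/gemini-2.5-pro-preview-03-25 by default
```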
You can customize which models are used via environment variables:
- `BIG_MODEL`: The model to use for Claude Sonnet requests (default: `gpt-4o`)
- `SMALL_MODEL`: The model to use for Claude Haiku requests (default: `gpt-4o-mini`)
- `PREFERRED_PROVIDER`: Set to `google` (default), `openai`, or `custom` to choose the primary backend

Add these to your `.env` file to customize:
OPENAI_API_KEY=your-openai-key
# For OpenAI models
PREFERRED_PROVIDER=openai
BIG_MODEL=gpt-4o
SMALL_MODEL=gpt-4o-mini
# For Gemini models
PREFERRED_PROVIDER=google
BIG_MODEL=gemini-2.5-pro-preview-03-25
SMALL_MODEL=gemini-2.0-flash
# For custom OpenAI-compatible models
PREFERRED_PROVIDER=custom
BIG_MODEL=my-large-model
SMALL_MODEL=my-small-model
You can add support for custom OpenAI-compatible models by creating a `custom_models.yaml` file:

```yaml
- model_id: "my-model"
  api_base: "https://api.example.com"
  api_key_name: "MY_API_KEY"
  can_stream: true
  max_tokens: 8192
  model_name: "actual-model-name"  # Optional - if different from model_id
```

Key features of custom models:
- Supports any OpenAI-compatible API endpoint
- Handles streaming responses if `can_stream: true`
- Automatically adds the `custom/` prefix to model names
- Uses the specified API key from environment variables
- Supports tool use (function calling) if the backend supports it
- Maintains Anthropic API compatibility while using custom backends
- Automatically loads from `custom_models.yaml` on server startup
- Supports multiple custom models in the same configuration file
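As a rough sketch of what loading `custom_models.yaml` at startup might look like (the function name and field handling here are illustrative; the server's actual loading code may differ):

```python
# Illustrative sketch only: read custom model definitions and resolve
# their API keys from the environment. Not the repository's actual code.
import os
import yaml  # PyYAML

def load_custom_models(path: str = "custom_models.yaml") -> dict:
    """Return a mapping of "custom/<model_id>" -> model configuration."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        entries = yaml.safe_load(f) or []
    models = {}
    for entry in entries:
        model_id = entry["model_id"]
        models[f"custom/{model_id}"] = {
            "api_base": entry["api_base"],
            # The key itself comes from the environment variable named in api_key_name
            "api_key": os.environ.get(entry["api_key_name"], ""),
            "model_name": entry.get("model_name", model_id),
            "can_stream": entry.get("can_stream", True),
            "max_tokens": entry.get("max_tokens", 8192),
        }
    return models
```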
Then set these environment variables:
PREFERRED_PROVIDER=custom
BIG_MODEL=my-model
MY_API_KEY=your-api-key
Or set them directly when running the server:
# Using OpenAI models (with uv)
BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
# Using custom models (with uv)
PREFERRED_PROVIDER=custom BIG_MODEL=my-large-model SMALL_MODEL=my-small-model uv run uvicorn server:app --host 0.0.0.0 --port 8082
# Using Gemini models (with uv)
BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gemini-2.0-flash uv run uvicorn server:app --host 0.0.0.0 --port 8082
This proxy works by:
- Receiving requests in Anthropic's API format 📥
- Translating the requests to OpenAI format via LiteLLM 🔄
- Sending the translated request to the selected backend (OpenAI, Gemini, or a custom endpoint) 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
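A heavily simplified sketch of that flow (the real translation in `server.py` also handles system prompts, content blocks, tool use, and streaming; the function and chosen model here are illustrative):

```python
# Simplified, illustrative request flow: Anthropic-format body in,
# LiteLLM call to the mapped backend, Anthropic-format body out.
import litellm

def handle_anthropic_request(body: dict) -> dict:
    # 1. body arrives in Anthropic's Messages API format, e.g.
    #    {"model": "claude-3-haiku-...", "max_tokens": 256,
    #     "messages": [{"role": "user", "content": "Hello"}]}
    backend_model = "gemini/gemini-2.0-flash"  # in practice chosen by the mapping logic above

    # 2./3. Translate to OpenAI-style arguments and send via LiteLLM
    response = litellm.completion(
        model=backend_model,
        messages=body["messages"],
        max_tokens=body.get("max_tokens", 1024),
    )

    # 4./5. Convert the OpenAI-style response back into Anthropic's shape
    return {
        "id": response.id,
        "type": "message",
        "role": "assistant",
        "model": body["model"],
        "content": [{"type": "text", "text": response.choices[0].message.content}],
        "stop_reason": "end_turn",
    }
```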
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
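For streaming, the same idea applies chunk by chunk: backend deltas are re-emitted as Anthropic-style SSE events. A compact, illustrative sketch (event names follow Anthropic's streaming format; the server's actual implementation is more complete):

```python
# Illustrative streaming sketch: forward LiteLLM deltas as Anthropic SSE events.
import json
import litellm

def sse(event: str, data: dict) -> str:
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_anthropic_events(backend_model: str, messages: list):
    yield sse("message_start", {"type": "message_start",
                                "message": {"role": "assistant", "content": []}})
    yield sse("content_block_start", {"type": "content_block_start", "index": 0,
                                      "content_block": {"type": "text", "text": ""}})
    for chunk in litellm.completion(model=backend_model, messages=messages, stream=True):
        delta = chunk.choices[0].delta.content or ""
        if delta:
            yield sse("content_block_delta", {"type": "content_block_delta", "index": 0,
                                              "delta": {"type": "text_delta", "text": delta}})
    yield sse("content_block_stop", {"type": "content_block_stop", "index": 0})
    yield sse("message_stop", {"type": "message_stop"})
```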
Contributions are welcome! Please feel free to submit a Pull Request. 🎁
