
Anthropic API Proxy for Gemini & OpenAI Models 🔄

Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝

A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉


Quick Start ⚡

Prerequisites

  • OpenAI API key (required if using OpenAI models) 🔑
  • Google AI Studio (Gemini) API key (required if using the default Google provider) 🔑
  • uv installed.

Setup 🛠️

  1. Clone this repository:

    git clone https://github.com/1rgs/claude-code-openai.git
    cd claude-code-openai
  2. Install uv (if you haven't already):

    curl -LsSf https://astral.sh/uv/install.sh | sh

    (uv will handle dependencies based on pyproject.toml when you run the server)

  3. Configure Environment Variables: Copy the example environment file:

    cp .env.example .env

    Edit .env and fill in your API keys and model configurations:

    • ANTHROPIC_API_KEY: (Optional) Needed only if proxying to Anthropic models.
    • OPENAI_API_KEY: Your OpenAI API key (Required if using OpenAI models as fallback or primary).
    • GEMINI_API_KEY: Your Google AI Studio (Gemini) API key (Required if using the default Gemini preference).
    • PREFERRED_PROVIDER (Optional): Set to google (default) or openai. This determines the primary backend for mapping haiku/sonnet.
    • BIG_MODEL (Optional): The model to map sonnet requests to. Defaults to gemini-2.5-pro-preview-03-25 (if PREFERRED_PROVIDER=google and model is known) or gpt-4o.
    • SMALL_MODEL (Optional): The model to map haiku requests to. Defaults to gemini-2.0-flash (if PREFERRED_PROVIDER=google and model is known) or gpt-4o-mini.

    Mapping Logic:

    • If PREFERRED_PROVIDER=google (default), haiku/sonnet map to SMALL_MODEL/BIG_MODEL prefixed with gemini/ if those models are in the server's known GEMINI_MODELS list.
    • Otherwise (if PREFERRED_PROVIDER=openai or the specified Google model isn't known), they map to SMALL_MODEL/BIG_MODEL prefixed with openai/.
  4. Run the server:

    uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload

    (--reload is optional, for development)

Using with Claude Code 🎮

  1. Install Claude Code (if you haven't already):

    npm install -g @anthropic-ai/claude-code
  2. Connect to your proxy:

    ANTHROPIC_BASE_URL=http://localhost:8082 claude
  3. That's it! Your Claude Code client will now use the configured backend models (defaulting to Gemini) through the proxy. 🎯
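
Any Anthropic client pointed at the proxy should work, not just Claude Code. As a sanity check, here is a minimal Python sketch (assumes the anthropic package is installed; a placeholder API key should suffice, since the proxy uses the backend keys configured in .env):

import anthropic

# Point the official Anthropic SDK at the local proxy instead of api.anthropic.com.
client = anthropic.Anthropic(
    base_url="http://localhost:8082",
    api_key="placeholder",  # the proxy relies on the backend keys from .env
)

message = client.messages.create(
    model="claude-3-haiku-20240307",  # remapped to SMALL_MODEL by the proxy
    max_tokens=100,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(message.content[0].text)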

Model Mapping 🗺️

The proxy automatically maps Claude models to either OpenAI or Gemini models based on the configured provider and model settings:

Claude Model | Default Mapping    | When BIG_MODEL/SMALL_MODEL is a Gemini model
------------ | ------------------ | --------------------------------------------
haiku        | openai/gpt-4o-mini | gemini/[model-name]
sonnet       | openai/gpt-4o      | gemini/[model-name]
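
In code, the mapping logic described above amounts to roughly the following (a hedged sketch with illustrative names, not the server's actual identifiers; the custom provider path is omitted):

GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def map_model(requested: str, preferred: str, big: str, small: str) -> str:
    # Resolve the Claude alias to the configured target model.
    if "sonnet" in requested:
        target = big
    elif "haiku" in requested:
        target = small
    else:
        target = requested
    # Prefix with the provider that LiteLLM should route to.
    if preferred == "google" and target in GEMINI_MODELS:
        return f"gemini/{target}"
    return f"openai/{target}"

# e.g. map_model("claude-3-sonnet", "google",
#                "gemini-2.5-pro-preview-03-25", "gemini-2.0-flash")
# -> "gemini/gemini-2.5-pro-preview-03-25"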

Supported Models

OpenAI Models

The following OpenAI models are supported with automatic openai/ prefix handling:

  • o3-mini
  • o1
  • o1-mini
  • o1-pro
  • gpt-4.5-preview
  • gpt-4o
  • gpt-4o-audio-preview
  • chatgpt-4o-latest
  • gpt-4o-mini
  • gpt-4o-mini-audio-preview

Gemini Models

The following Gemini models are supported with automatic gemini/ prefix handling:

  • gemini-2.5-pro-preview-03-25
  • gemini-2.0-flash

Model Prefix Handling

The proxy automatically adds the appropriate prefix to model names:

  • OpenAI models get the openai/ prefix
  • Gemini models get the gemini/ prefix
  • The BIG_MODEL and SMALL_MODEL will get the appropriate prefix based on whether they're in the OpenAI or Gemini model lists

For example:

  • gpt-4o becomes openai/gpt-4o
  • gemini-2.5-pro-preview-03-25 becomes gemini/gemini-2.5-pro-preview-03-25
  • When BIG_MODEL is set to a Gemini model, Claude Sonnet will map to gemini/[model-name]
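
The prefix is what tells LiteLLM which provider to route the call to. For illustration, a direct LiteLLM call with a prefixed model name looks like this (assumes the litellm package and a GEMINI_API_KEY in the environment):

import litellm

# The "gemini/" prefix routes the request to Google AI Studio.
response = litellm.completion(
    model="gemini/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)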

Customizing Model Mapping

You can customize which models are used via environment variables:

  • BIG_MODEL: The model to map Claude Sonnet requests to (default: gemini-2.5-pro-preview-03-25 with the google provider, otherwise gpt-4o)
  • SMALL_MODEL: The model to map Claude Haiku requests to (default: gemini-2.0-flash with the google provider, otherwise gpt-4o-mini)
  • PREFERRED_PROVIDER: Set to "google" (default), "openai", or "custom" to choose the primary backend

Add these to your .env file to customize:

# For OpenAI models
OPENAI_API_KEY=your-openai-key
PREFERRED_PROVIDER=openai
BIG_MODEL=gpt-4o
SMALL_MODEL=gpt-4o-mini

# For Gemini models
PREFERRED_PROVIDER=google
BIG_MODEL=gemini-2.5-pro-preview-03-25
SMALL_MODEL=gemini-2.0-flash

# For custom OpenAI-compatible models
PREFERRED_PROVIDER=custom
BIG_MODEL=my-large-model
SMALL_MODEL=my-small-model

Custom OpenAI-Compatible Models

You can add support for custom OpenAI-compatible models by creating a custom_models.yaml file:

- model_id: "my-model"
  api_base: "https://api.example.com"
  api_key_name: "MY_API_KEY"
  can_stream: true
  max_tokens: 8192
  model_name: "actual-model-name"  # Optional - if different from model_id

Key features of custom models:

  • Supports any OpenAI-compatible API endpoint
  • Handles streaming responses if can_stream: true
  • Automatically adds custom/ prefix to model names
  • Uses specified API key from environment variables
  • Supports tool use (function calling) if the backend supports it
  • Maintains Anthropic API compatibility while using custom backends
  • Automatically loads from custom_models.yaml on server startup
  • Supports multiple custom models in the same configuration file
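
For illustration, loading such a file at startup could look roughly like the sketch below (a hypothetical helper, not necessarily the server's actual loader; requires PyYAML):

import os
import yaml

def load_custom_models(path: str = "custom_models.yaml") -> dict:
    # Each entry becomes a "custom/"-prefixed model the proxy can route to.
    with open(path) as f:
        entries = yaml.safe_load(f) or []
    models = {}
    for entry in entries:
        models[f"custom/{entry['model_id']}"] = {
            "api_base": entry["api_base"],
            "api_key": os.environ.get(entry["api_key_name"], ""),
            "model": entry.get("model_name", entry["model_id"]),
            "can_stream": entry.get("can_stream", False),
            "max_tokens": entry.get("max_tokens"),
        }
    return models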

Then set these environment variables:

PREFERRED_PROVIDER=custom
BIG_MODEL=my-model
MY_API_KEY=your-api-key

Or set them directly when running the server:

# Using OpenAI models (with uv)
BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082

# Using custom models (with uv)
PREFERRED_PROVIDER=custom BIG_MODEL=my-large-model SMALL_MODEL=my-small-model uv run uvicorn server:app --host 0.0.0.0 --port 8082

# Using Gemini models (with uv)
BIG_MODEL=gemini-2.5-pro-preview-03-25 SMALL_MODEL=gemini-2.0-flash uv run uvicorn server:app --host 0.0.0.0 --port 8082

How It Works 🧩

This proxy works by:

  1. Receiving requests in Anthropic's API format 📥
  2. Translating the requests to OpenAI format via LiteLLM 🔄
  3. Sending the translated request to the configured backend (OpenAI, Gemini, or a custom endpoint) 📤
  4. Converting the response back to Anthropic format 🔄
  5. Returning the formatted response to the client ✅

The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
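
A heavily condensed sketch of that flow (illustrative only; the real server also handles system prompts, tool use, model mapping, and streaming):

from fastapi import FastAPI, Request
import litellm

app = FastAPI()

@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()            # 1. Anthropic-format request in
    response = litellm.completion(         # 2./3. translate and forward via LiteLLM
        model="openai/gpt-4o",             # model mapping omitted for brevity
        messages=body["messages"],
        max_tokens=body.get("max_tokens", 1024),
    )
    choice = response.choices[0]
    return {                               # 4./5. convert back to Anthropic format
        "id": response.id,
        "type": "message",
        "role": "assistant",
        "model": body["model"],
        "content": [{"type": "text", "text": choice.message.content}],
        "stop_reason": "end_turn",
        "usage": {
            "input_tokens": response.usage.prompt_tokens,
            "output_tokens": response.usage.completion_tokens,
        },
    }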

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request. 🎁
