fix: show provider-specific rate limit message instead of hardcoded OpenAI text #1730
In my fork I just display such messages as they come from the API, including any markdown or HTML formatting if it exists: https://github.com/endolith/open-interpreter
Thanks! That's a nice approach in your fork — passing through the raw provider message preserves whatever formatting/details the API returns. This PR is a more conservative change: it just removes the misleading hardcoded "Check OpenAI API key" text when the actual provider isn't OpenAI, and shows the underlying error message instead. It's a smaller surface change but addresses the same underlying issue users hit when seeing OpenAI guidance for non-OpenAI rate limit errors. A more comprehensive rewrite (like yours) could be a good follow-up.
Fixes #1706
Problem
When users hit a rate limit or quota error while using non-OpenAI providers (e.g., Groq, Anthropic, Mistral), the error handler always displays a message that says "You ran out of current quota for OpenAI's API" and links to the OpenAI billing page. This is confusing and misleading for users who aren't using OpenAI at all.
Solution
Detect the API provider from the model prefix (LiteLLM convention: `provider/model-name`) and display an appropriate message:

- OpenAI models (`openai/` prefix): show the existing OpenAI-specific message with the OpenAI billing page link
- Other providers (e.g., `groq/llama3-8b-8192`): show a generic message naming the actual provider (e.g., "You have exceeded your quota for Groq's API") without incorrect OpenAI-specific links

The extraction uses `model.split("/")[0].title()`, which correctly maps:

- `groq/llama3-8b-8192` → `Groq`
- `anthropic/claude-3` → `Anthropic`
- `mistral/mistral-7b` → `Mistral`
- `gpt-4`, `o1-mini` (no prefix) → `OpenAI`

Testing
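The mapping described above can be checked with a small standalone script. This is a sketch only: `provider_from_model` is a hypothetical helper mirroring the described logic (including an assumed no-prefix fallback to OpenAI), not the PR's actual code.

```python
def provider_from_model(model: str) -> str:
    """Derive a display name for the API provider from a LiteLLM-style
    model string ("provider/model-name").

    Assumption: models without a "/" prefix (e.g. "gpt-4") are treated
    as OpenAI models, matching the mapping in the PR description.
    """
    if "/" in model:
        # "groq/llama3-8b-8192" -> "groq" -> "Groq"
        return model.split("/")[0].title()
    return "OpenAI"


# Mapping examples from the PR description:
assert provider_from_model("groq/llama3-8b-8192") == "Groq"
assert provider_from_model("anthropic/claude-3") == "Anthropic"
assert provider_from_model("mistral/mistral-7b") == "Mistral"
assert provider_from_model("gpt-4") == "OpenAI"
assert provider_from_model("o1-mini") == "OpenAI"
```

Note that the no-prefix fallback matters: `"gpt-4".split("/")[0].title()` alone would yield `"Gpt-4"`, so the code must special-case unprefixed model names.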