The Feature
LiteLLM has fallbacks, including a default fallback. However, I am finding that in order to fall back for a given model, that model needs to be configured in the router (in model_list or fallbacks). Otherwise, you get a BadRequestError.
For example, the guide tells you how to fall back for bad-model, but I want to send a model ID that is not configured in the router at all, like bad-modelll. This is important because it allows callers to send new models over time without breaking the flow.
Consider this script:
from litellm import Router
import os

model_list = [
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "gpt-4o",
            "api_key": os.environ["LITELLM_OPENAI_KEY"],
        },
    }
]

router = Router(
    model_list=model_list,
    num_retries=0,
    fallbacks=[{"bad-model": ["gpt-4o"]}],
    default_fallbacks=["gpt-4o"])

# This model is good.
response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hey, how's it going?"}])
print(response)

# This model is bad, but it's in our fallbacks.
response = router.completion(
    model="bad-model",
    messages=[{"role": "user", "content": "Hey, how's it going?"}])
print(response)

# This doesn't work because the model isn't in the fallbacks mapping :(.
# That seems odd, because default_fallbacks is set.
response = router.completion(
    model="bad-modelll",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    default_fallbacks=["gpt-4o"])
print(response)
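In the meantime, here is a caller-side workaround sketch, not part of the router API: catch the BadRequestError named above and retry with a known deployment. It assumes litellm exposes BadRequestError at the top level; the DEFAULT_MODEL constant and safe_completion helper are hypothetical names used only for illustration.

import litellm

# Hypothetical catch-all deployment; assumed to exist in the router's model_list.
DEFAULT_MODEL = "gpt-4o"

def safe_completion(router, model, messages, **kwargs):
    """Call router.completion, retrying with DEFAULT_MODEL for unknown model strings.

    This is a workaround sketch; the feature request is for default_fallbacks
    to cover this case inside the router itself.
    """
    try:
        return router.completion(model=model, messages=messages, **kwargs)
    except litellm.BadRequestError:
        # The router didn't recognize `model`, so retry with the default deployment.
        return router.completion(model=DEFAULT_MODEL, messages=messages, **kwargs)

# Unknown model string no longer breaks the flow.
response = safe_completion(
    router,
    model="bad-modelll",
    messages=[{"role": "user", "content": "Hey, how's it going?"}])
print(response)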
Motivation, pitch
If you are sharing a router across a large codebase, it seems reasonable for it to be robust to unknown model strings, as long as some default_fallbacks are set.
LiteLLM is hiring a founding backend engineer, are you interested in joining us and shipping to all our users?
No
Twitter / LinkedIn details
No response