Day 0 Support: GPT-5.4
LiteLLM now fully supports GPT-5.4!
Docker Image

```shell
docker pull ghcr.io/berriai/litellm:v1.81.14-stable.gpt-5.4_patch
```
Usage

You can use GPT-5.4 through either the LiteLLM Proxy or the LiteLLM SDK.
1. Setup config.yaml

```yaml
model_list:
  - model_name: gpt-5.4
    litellm_params:
      model: openai/gpt-5.4
      api_key: os.environ/OPENAI_API_KEY
```
2. Start the proxy

```shell
docker run -d \
  -p 4000:4000 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v $(pwd)/config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:v1.81.14-stable.gpt-5.4_patch \
  --config /app/config.yaml
```
3. Test it

```shell
curl -X POST "http://0.0.0.0:4000/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {"role": "user", "content": "Write a Python function to check if a number is prime."}
    ]
  }'
```
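If you prefer Python over raw curl, the same proxy request can be made with only the standard library — a minimal sketch mirroring the curl example above, assuming the proxy from step 2 is running locally and `LITELLM_KEY` is set in your environment:

```python
import json
import os
import urllib.request

# Proxy endpoint from step 2 (adjust host/port if you mapped them differently).
PROXY_URL = "http://0.0.0.0:4000/chat/completions"


def build_request(prompt: str) -> dict:
    """Assemble the same JSON body as the curl example."""
    return {
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": prompt}],
    }


def call_proxy(prompt: str) -> str:
    """POST to the LiteLLM proxy and return the assistant's reply text."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('LITELLM_KEY', '')}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires the proxy container to be running.
    print(call_proxy("Write a Python function to check if a number is prime."))
```

Because the proxy is OpenAI-compatible, any OpenAI-style client pointed at `http://0.0.0.0:4000` works the same way.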
With the LiteLLM SDK:

```python
from litellm import completion

response = completion(
    model="openai/gpt-5.4",
    messages=[
        {"role": "user", "content": "Write a Python function to check if a number is prime."}
    ],
)
print(response.choices[0].message.content)
```
Notes

- Restart your container to pick up cost tracking for this model.
- Use `/responses` for better model performance.
- GPT-5.4 supports reasoning, function calling, vision, and tool use; see the OpenAI provider docs for advanced usage.
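The `/responses` note above refers to the proxy's Responses API route. A minimal stdlib sketch of calling it, assuming the proxy from step 2 is running locally; the request shape (`input` instead of `messages`) follows the OpenAI Responses API, so treat the exact fields as assumptions and check the provider docs:

```python
import json
import os
import urllib.request

# LiteLLM's Responses API route on the proxy (assumed host/port from step 2).
RESPONSES_URL = "http://0.0.0.0:4000/responses"


def build_responses_request(prompt: str) -> dict:
    """Responses API body: a single `input` field replaces `messages`
    (shape assumed from the OpenAI Responses API)."""
    return {"model": "gpt-5.4", "input": prompt}


def call_responses(prompt: str) -> dict:
    """POST to /responses on the proxy and return the parsed JSON reply."""
    req = urllib.request.Request(
        RESPONSES_URL,
        data=json.dumps(build_responses_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('LITELLM_KEY', '')}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Requires the proxy container to be running.
    print(call_responses("Write a Python function to check if a number is prime."))
```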