Conversation

@anmolg1997 (Contributor) commented Feb 10, 2026

Link to Issue or Description of Change

Problem:

supported_models() in lite_llm.py only returns three provider patterns (openai/.*, groq/.*, anthropic/.*). When the model registry tries to resolve a model like azure/gpt-4o, no pattern matches and it raises "Model azure/gpt-4o not found". This breaks LLM-as-a-Judge eval and any other code path that relies on LLMRegistry.resolve() for provider-prefixed models outside those three.
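
A minimal reproduction sketch of the failure (the import path below is assumed from the package layout under src/google/adk/models/ and may differ):

# Reproduction sketch; the import path is an assumption, not copied
# from the repo.
from google.adk.models.registry import LLMRegistry

# Before this fix, no registered pattern matched the "azure/" prefix,
# so resolution raised: Model azure/gpt-4o not found.
LLMRegistry.resolve("azure/gpt-4o")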

Solution:

Added 13 additional well-known LiteLLM provider prefixes to supported_models(): azure, azure_ai, bedrock, ollama, ollama_chat, together_ai, vertex_ai, mistral, deepseek, fireworks_ai, cohere, databricks, ai21. All patterns sourced from https://docs.litellm.ai/docs/providers.

Testing Plan

Unit Tests:

$ pytest tests/unittests/models/test_litellm.py
187 passed in 3.68s

Also verified end-to-end that LLMRegistry.resolve("azure/gpt-4o") now correctly resolves to LiteLlm, and likewise for bedrock, ollama, vertex_ai, and the other new prefixes.
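
A sketch of that end-to-end check (assuming resolve() returns the registered model class rather than an instance):

# Sketch of the end-to-end verification; assumes resolve() returns the
# BaseLlm subclass whose pattern matches the model string.
from google.adk.models.lite_llm import LiteLlm
from google.adk.models.registry import LLMRegistry

for model in (
    "azure/gpt-4o",
    "bedrock/anthropic.claude-3-sonnet",
    "ollama/llama3",
    "vertex_ai/claude-3-sonnet",
):
  assert LLMRegistry.resolve(model) is LiteLlm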

@google-cla (bot) commented Feb 10, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @anmolg1997, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the LiteLLM integration by expanding the list of supported model patterns. By registering a wider array of provider prefixes, it resolves critical model resolution failures, ensuring that various LLM providers can be correctly identified and utilized within the system. This change improves the robustness and flexibility of the LLM framework, particularly for evaluation and registry-dependent functionalities.

Highlights

  • Expanded LiteLLM Provider Support: Added 'azure/.*' and 12 other common LiteLLM provider prefixes to the 'supported_models()' function, enabling broader compatibility with various LLM services.
  • Resolved Model Resolution Issues: Fixed model resolution failures for providers such as Azure OpenAI, AWS Bedrock, Ollama, Vertex AI, Mistral, DeepSeek, and more, which previously occurred due to unregistered patterns.
  • Impact on LLM Evaluation: Corrected an issue that previously broke LLM-as-a-Judge evaluation and other features relying on the model registry for provider-prefixed models.

Changelog
  • src/google/adk/models/lite_llm.py
    • Added regular expression patterns for 13 new LiteLLM providers to the 'supported_models' list, including Azure, AWS Bedrock, Ollama, Vertex AI, Mistral, DeepSeek, Fireworks AI, Cohere, Databricks, and AI21.
Activity
  • Verified all 187 existing LiteLLM unit tests pass.
  • Verified 'LLMRegistry.resolve("azure/gpt-4o")' resolves to 'LiteLlm'.
  • Verified all new provider patterns match their respective model strings via regex (a test sketch follows this list).
  • Confirmed adherence to the Contributor License Agreement and contributing guidelines.
  • Ensured tests cover changes, all tests pass, code follows style, linting/formatting checks passed, branch rebased, and self-reviewed.
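
A parametrized check along these lines would pin those pattern/model pairs down; the test name and pairs here are illustrative, not copied from tests/unittests/models/test_litellm.py, and supported_models() is assumed to be callable on the class:

import re

import pytest

from google.adk.models.lite_llm import LiteLlm


@pytest.mark.parametrize(
    "model",
    [
        "azure/gpt-4o",
        "bedrock/anthropic.claude-3-sonnet",
        "ollama_chat/llama3",
        "together_ai/meta-llama/Llama-3-70b",
        "fireworks_ai/llama-v3-70b",
    ],
)
def test_provider_pattern_matches(model):
  # Each provider-prefixed model string should match at least one
  # registered pattern.
  assert any(
      re.fullmatch(pattern, model)
      for pattern in LiteLlm.supported_models()
  )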

@gemini-code-assist bot left a comment

Code Review

This pull request adds support for several new LiteLLM providers by registering their model-name patterns in the supported_models method of the LiteLlm class. This fixes model resolution failures for providers like Azure OpenAI and AWS Bedrock, since LLMRegistry.resolve() can now match and resolve models from these providers.
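
For context, pattern-based resolution of this kind typically reduces to a regex scan over the registered patterns; a simplified sketch, not the actual LLMRegistry implementation:

# Simplified sketch of pattern-based model resolution; illustrative only.
import re


def resolve(model: str, registered: dict[str, type]) -> type:
  """Returns the first registered class whose pattern matches the model."""
  for pattern, llm_cls in registered.items():
    if re.fullmatch(pattern, model):
      return llm_cls
  raise ValueError(f"Model {model} not found.")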

Comment on lines +2041 to +2070
# For Azure OpenAI models (e.g., "azure/gpt-4o")
r"azure/.*",
# For Azure AI models (e.g., "azure_ai/command-r-plus")
r"azure_ai/.*",
# For Groq models via Groq API (e.g., "groq/llama3-70b-8192")
r"groq/.*",
# For Anthropic models (e.g., "anthropic/claude-3-opus-20240229")
r"anthropic/.*",
# For AWS Bedrock models (e.g., "bedrock/anthropic.claude-3-sonnet")
r"bedrock/.*",
# For Ollama models (e.g., "ollama/llama3")
r"ollama/.*",
# For Ollama chat models (e.g., "ollama_chat/llama3")
r"ollama_chat/.*",
# For Together AI models (e.g., "together_ai/meta-llama/Llama-3-70b")
r"together_ai/.*",
# For Vertex AI non-Gemini models (e.g., "vertex_ai/claude-3-sonnet")
r"vertex_ai/.*",
# For Mistral AI models (e.g., "mistral/mistral-large-latest")
r"mistral/.*",
# For DeepSeek models (e.g., "deepseek/deepseek-chat")
r"deepseek/.*",
# For Fireworks AI models (e.g., "fireworks_ai/llama-v3-70b")
r"fireworks_ai/.*",
# For Cohere models (e.g., "cohere/command-r-plus")
r"cohere/.*",
# For Databricks models (e.g., "databricks/dbrx-instruct")
r"databricks/.*",
# For AI21 models (e.g., "ai21/jamba-1.5-large")
r"ai21/.*",

Severity: medium

The regular expression patterns for the supported models are added sequentially. Consider grouping them by category (e.g., Azure, AWS, etc.) to improve readability and maintainability.
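
One way to apply that suggestion, keeping the same patterns but grouping them by vendor; a sketch only, and the SUPPORTED_MODEL_PATTERNS name is illustrative rather than the repo's:

# Grouping sketch per the review comment; same patterns, reordered.
SUPPORTED_MODEL_PATTERNS = [
    # Azure
    r"azure/.*",
    r"azure_ai/.*",
    # AWS
    r"bedrock/.*",
    # Google Cloud
    r"vertex_ai/.*",
    # Local / self-hosted
    r"ollama/.*",
    r"ollama_chat/.*",
    # Hosted inference platforms
    r"groq/.*",
    r"together_ai/.*",
    r"fireworks_ai/.*",
    r"databricks/.*",
    # Model vendors
    r"anthropic/.*",
    r"mistral/.*",
    r"deepseek/.*",
    r"cohere/.*",
    r"ai21/.*",
]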

…supported_models

The LiteLLM integration only registered three provider prefixes
(openai, groq, anthropic) in supported_models(), causing the model
registry to fail resolution for widely used providers like Azure
OpenAI, AWS Bedrock, Ollama, Vertex AI, and others.

This meant that `registry.resolve("azure/gpt-4o")` raised
"Model azure/gpt-4o not found", breaking features like
LLM-as-a-Judge evaluation for Azure users.

Added the following provider patterns:
- azure, azure_ai (Azure OpenAI / Azure AI)
- bedrock (AWS Bedrock)
- ollama, ollama_chat (Ollama)
- together_ai (Together AI)
- vertex_ai (Vertex AI non-Gemini models)
- mistral (Mistral AI)
- deepseek (DeepSeek)
- fireworks_ai (Fireworks AI)
- cohere (Cohere)
- databricks (Databricks)
- ai21 (AI21)

Fixes google#4325
@anmolg1997 force-pushed the fix/add-azure-litellm-supported-models branch from 8e4a3d0 to 55b825b on February 10, 2026 at 14:20
@adk-bot added the models [Component] Issues related to model support label on Feb 10, 2026
@adk-bot (Collaborator) commented Feb 10, 2026

Response from ADK Triaging Agent

Hello @anmolg1997, thank you for your contribution!

Before we can merge this pull request, you'll need to sign the Contributor License Agreement (CLA). You can do so here: https://cla.developers.google.com/

Thank you!

@anmolg1997 (Contributor, Author) commented

I have signed the CLA - @googlebot please verify.

@anmolg1997 force-pushed the fix/add-azure-litellm-supported-models branch from 55b825b to 533733b on February 10, 2026 at 14:36

Labels

models [Component] Issues related to model support

Development

Successfully merging this pull request may close these issues.

Unable to use LLM-as-a-Judge with Azure model. (#4325)
