
Conversation

@garyzhang99
Collaborator

Description

As the title says: this PR adds support for tool calls in our vLLM model integration, using an OpenAI-compatible format.
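
For reviewers unfamiliar with the format, here is a minimal sketch of the round trip that an OpenAI-compatible tool call implies, assuming a locally served OpenAI-compatible endpoint. The base URL, model name, and get_weather tool are illustrative examples, not code from this PR.

```python
# Minimal sketch of an OpenAI-compatible tool-call round trip (illustrative;
# the endpoint, model name, and get_weather tool are not part of this PR).
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "Qwen/Qwen2.5-1.5B-Instruct"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
messages = [{"role": "user", "content": "What's the weather in Hangzhou?"}]

# 1. The model decides to call the tool and returns a tool_calls entry.
first = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# 2. The caller runs the tool and feeds the result back as a "tool" message.
messages.append({"role": "assistant", "tool_calls": [
    {"id": call.id, "type": "function",
     "function": {"name": call.function.name,
                  "arguments": call.function.arguments}}]})
messages.append({"role": "tool", "tool_call_id": call.id,
                 "content": json.dumps({"city": args["city"], "temp_c": 26})})

# 3. The model turns the tool output into a natural-language answer.
final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
print(final.choices[0].message.content)
```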

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @garyzhang99, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for OpenAI-compatible tool calls within our vLLM model integration. It enables the model to understand and respond to function calls, allowing for more dynamic and interactive AI capabilities by leveraging external tools.

Highlights

  • Tool Call Configuration: Introduced new configuration options (enable_auto_tool_choice and tool_call_parser) to control the behavior of OpenAI-compatible tool calls for vLLM models (see the configuration sketch after this list).
  • vLLM API Server Integration: Extended the vLLM API server to accept and utilize these new tool call configuration parameters, allowing the underlying vLLM engine to process tool calls and function definitions.
  • Comprehensive Tool Call Testing: Added a new end-to-end test case (test_api_tool_calls) that simulates a full OpenAI API tool call workflow, including tool definition, model decision to call a tool, simulated tool execution, and the model's final natural language response based on the tool's output.
  • UI Exposure: Integrated the new tool call configuration options into the configuration management system, making them accessible and configurable via the user interface.
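
The option names mirror the flags of vLLM's OpenAI-compatible server, so the wiring plausibly amounts to forwarding two extra CLI arguments when the server is launched. Below is a minimal sketch under that assumption; the ModelConfig class, its field names, and the helper are illustrative stand-ins, not the PR's actual code, which works through the model wrapper and API patching noted in the review further down.

```python
# Hypothetical sketch of how the two new options could be forwarded to vLLM's
# OpenAI-compatible server; ModelConfig and its fields are illustrative
# stand-ins, not this PR's actual classes.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    model_path: str
    port: int = 8000
    enable_auto_tool_choice: bool = False
    tool_call_parser: str | None = None

def build_vllm_serve_args(cfg: ModelConfig) -> list[str]:
    args = ["vllm", "serve", cfg.model_path, "--port", str(cfg.port)]
    # --enable-auto-tool-choice and --tool-call-parser are vLLM's flags for
    # OpenAI-compatible tool calls; the parser should match the model family
    # (e.g. "hermes" for Qwen2.5-style chat templates).
    if cfg.enable_auto_tool_choice:
        args.append("--enable-auto-tool-choice")
    if cfg.tool_call_parser:
        args.extend(["--tool-call-parser", cfg.tool_call_parser])
    return args

# Example: the argument list for a Qwen2.5 model with tool calls enabled.
print(build_vllm_serve_args(ModelConfig("Qwen/Qwen2.5-1.5B-Instruct",
                                        enable_auto_tool_choice=True,
                                        tool_call_parser="hermes")))
```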

@garyzhang99
Collaborator Author

[screenshot] Test passed when Qwen2.5-1.5B-Instruct serves as the model.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for tool calls using an OpenAI-compatible format with the vLLM model. The changes are well-structured, touching upon configuration, the model wrapper, API patching, and testing. My review has identified a critical issue in the UI configuration logic, a high-severity correctness issue in a test assertion, and several medium-severity items to improve code maintainability and quality. Addressing these points will strengthen the implementation and ensure the new feature is robust.

问昊 and others added 3 commits August 4, 2025 14:13
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@garyzhang99
Collaborator Author

/unittest-module-common

@garyzhang99
Collaborator Author

/unittest-module-common

@garyzhang99
Collaborator Author

/unittest-module-common

@garyzhang99
Collaborator Author

/unittest-module-common

@github-actions

github-actions bot commented Aug 4, 2025

Summary

Tests: 31 | Passed: 31 | Failed: 0 | Skipped: 0 | Other: 0 | Flaky: 0 | Duration: 866ms

Tests

Test Name  Duration
tests/common/config_test.py::TestConfig::test_all_examples_are_valid 2ms
tests/common/config_test.py::TestConfig::test_continue_from_checkpoint_is_valid 1ms
tests/common/config_test.py::TestConfig::test_load_default_config 4ms
tests/common/experience_test.py::TestEID::test_eid_properties 1ms
tests/common/experience_test.py::TestExperience::test_action_mask_and_logprobs_type 1ms
tests/common/experience_test.py::TestExperience::test_assertions 1ms
tests/common/experience_test.py::TestExperience::test_dpo_experience 1ms
tests/common/experience_test.py::TestExperience::test_gather 1ms
tests/common/experience_test.py::TestExperience::test_multi_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_serialize_deserialize 1ms
tests/common/experience_test.py::TestExperience::test_single_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_to_dict 1ms
tests/common/experience_test.py::TestExperienceConversion::test_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_dpo_experience_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_experience_model_experience_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_multiturn_experience_batch_converstion 1ms
tests/common/synchronizer_test.py::TestStateDictBasedSynchronizer_0::test_synchronizer 82ms
tests/common/synchronizer_test.py::TestStateDictBasedSynchronizer_1::test_synchronizer 75ms
tests/common/synchronizer_test.py::TestStateDictBasedSynchronizer_2::test_synchronizer 114ms
tests/common/synchronizer_test.py::TestStateDictBasedSynchronizer_3::test_synchronizer 150ms
tests/common/synchronizer_test.py::TestNCCLBasedSynchronizer_0::test_synchronizer 61ms
tests/common/synchronizer_test.py::TestNCCLBasedSynchronizer_1::test_synchronizer 61ms
tests/common/vllm_test.py::ModelWrapperTest_0::test_generate 39ms
tests/common/vllm_test.py::ModelWrapperTest_1::test_generate 50ms
tests/common/vllm_test.py::ModelWrapperTest_2::test_generate 50ms
tests/common/vllm_test.py::ModelWrapperTest_3::test_generate 38ms
tests/common/vllm_test.py::ModelWrapperTest_4::test_generate 50ms
tests/common/vllm_test.py::TestAPIServer::test_api 30ms
tests/common/vllm_test.py::TestTokenizer::test_assistant_token_mask 1ms
tests/common/vllm_test.py::TestAPIServerToolCall_0_deepseek_r1::test_api_tool_calls 23ms
tests/common/vllm_test.py::TestAPIServerToolCall_1::test_api_tool_calls 22ms

Github Test Reporter by CTRF 💚

@pan-x-c pan-x-c merged commit d3224e1 into modelscope:main Aug 4, 2025
1 check passed