
Conversation

@hiyuchang
Collaborator

Description

This PR aims to reduce redundancy and improve clarity of math-related scoring utilities across the codebase.

  1. Merge the duplicate implementations of compute_score in benchmark/plugins/guru_math/naive_dapo.py and examples/bots/workflow/bots_reward.py into a single, unified implementation located at trinity/utils/math_utils.py (see the sketch after this list).

  2. Rename the old compute_score function in trinity/utils/eval_utils.py to compute_score_v0.

  3. Rename the (toolcall-related) compute_score_v0 function in trinity/common/workflows/customized_toolcall_workflows.py to compute_score_with_toolN1.

  4. Rename trinity/utils/math_eval_utils.py to trinity/utils/qwen25_eval_utils.py to clearly indicate its origin and scope.
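
A minimal sketch of the call-site change implied by item 1, assuming the unified function keeps a compute_score(solution_str, ground_truth) shape; the actual signature in trinity/utils/math_utils.py may differ:

```python
# Before this PR, each caller used its own local copy:
#   from benchmark.plugins.guru_math.naive_dapo import compute_score
#   from examples.bots.workflow.bots_reward import compute_score

# After this PR, all callers share one implementation.
# Hypothetical signature; check trinity/utils/math_utils.py for the real one.
from trinity.utils.math_utils import compute_score

score = compute_score("The answer is \\boxed{42}.", "42")
```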

Checklist

Please check the following items before the code is ready for review.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist
Contributor

Summary of Changes

Hello @hiyuchang, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the codebase by centralizing and clarifying math-related scoring utilities. It addresses redundancy by merging duplicate functions, renames existing functions for better semantic distinction, and reorganizes files to improve modularity and maintainability. The changes aim to streamline the evaluation logic across the project, making it easier to manage and extend in the future.

Highlights

  • Centralized Math Scoring: Consolidated duplicate compute_score implementations from benchmark/plugins/guru_math/naive_dapo.py and examples/bots/workflow/bots_reward.py into a single, unified compute_score function located at trinity/utils/math_utils.py.
  • Function Renaming for Clarity: Renamed compute_score in trinity/utils/eval_utils.py to compute_score_v0 and the toolcall-related compute_score_v0 in trinity/common/workflows/customized_toolcall_workflows.py to compute_score_v0_toolN1 to avoid naming conflicts and improve semantic clarity.
  • File Renaming for Scope: Renamed trinity/utils/math_eval_utils.py to trinity/utils/qwen25_eval_utils.py to better reflect its specific origin and scope.
  • Codebase Cleanup: Removed redundant math utility functions from examples/bots/workflow/bots_reward.py and updated all relevant files to import these utilities from their new, centralized locations.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively refactors and consolidates several compute_score functions, which significantly improves code clarity and reduces redundancy. The renaming of functions and files also makes their purpose clearer. My review includes one suggestion to refactor a complex block of code in examples/bots/workflow/bots_reward.py to improve its readability and efficiency.

@HYLcool
Collaborator

HYLcool commented Dec 8, 2025

I ran the BOTS example for tens of steps and the training curves align with those from before this PR, so the changes look good to me.

@pan-x-c
Collaborator

pan-x-c commented Dec 9, 2025

/unittest-diff

@github-actions

github-actions bot commented Dec 9, 2025

Summary

| Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️ |
|---|---|---|---|---|---|---|
| 62 | 62 | 0 | 0 | 0 | 0 | 8m 29s |

Tests

| Test Name | Status | Flaky | Duration |
|---|---|---|---|
| tests/common/config_test.py::TestConfig::test_all_examples_are_valid | ✅ | | 34.4s |
| tests/common/config_test.py::TestConfig::test_chat_template_path | ✅ | | 96ms |
| tests/common/config_test.py::TestConfig::test_config_flatten | ✅ | | 41ms |
| tests/common/config_test.py::TestConfig::test_continue_from_checkpoint_is_valid | ✅ | | 193ms |
| tests/common/config_test.py::TestConfig::test_default_workflow | ✅ | | 94ms |
| tests/common/config_test.py::TestConfig::test_load_default_config | ✅ | | 3.3s |
| tests/common/config_test.py::TestConfig::test_max_token_len_per_gpu_set_correctly | ✅ | | 94ms |
| tests/common/config_test.py::TestConfig::test_optimizer_config_propagation | ✅ | | 95ms |
| tests/common/config_test.py::TestConfig::test_update_config_from_ray_cluster | ✅ | | 1.7s |
| tests/common/experience_test.py::TestEID::test_eid_properties | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_action_mask_and_logprobs_type | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_assertions | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_dpo_experience | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_gather | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_gather_with_token_level_reward | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_hf_datasets_conversion | ✅ | | 15ms |
| tests/common/experience_test.py::TestExperience::test_multi_turn_experience | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_serialize_deserialize | ✅ | | 2ms |
| tests/common/experience_test.py::TestExperience::test_single_turn_experience | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperience::test_to_dict | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperienceConversion::test_batch_conversion | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperienceConversion::test_dpo_experience_batch_conversion | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperienceConversion::test_experience_model_experience_conversion | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperienceConversion::test_gather_experiences_with_custom_fields | ✅ | | 1ms |
| tests/common/experience_test.py::TestExperienceConversion::test_multiturn_experience_batch_converstion | ✅ | | 1ms |
| tests/common/vllm_test.py::ModelWrapperTest_0::test_generate | ✅ | | 52.3s |
| tests/common/vllm_test.py::ModelWrapperTest_1::test_generate | ✅ | | 32.4s |
| tests/common/vllm_test.py::ModelWrapperTest_2::test_generate | ✅ | | 43.4s |
| tests/common/vllm_test.py::TestModelLen_0::test_model_len | ✅ | | 17.9s |
| tests/common/vllm_test.py::TestModelLen_1::test_model_len | ✅ | | 17.3s |
| tests/common/vllm_test.py::TestModelLen_2::test_model_len | ✅ | | 16.9s |
| tests/common/vllm_test.py::TestModelLenWithoutPromptTruncation::test_model_len | ✅ | | 17.3s |
| tests/common/vllm_test.py::TestAPIServer::test_api | ✅ | | 22.7s |
| tests/common/vllm_test.py::TestLogprobs::test_logprobs | ✅ | | 19.3s |
| tests/common/vllm_test.py::TestAsyncAPIServer::test_api_async | ✅ | | 22.9s |
| tests/common/vllm_test.py::TestTokenizer::test_action_mask | ✅ | | 258ms |
| tests/common/vllm_test.py::TestTokenizer::test_action_mask_with_tools | ✅ | | 253ms |
| tests/common/vllm_test.py::TestAPIServerToolCall_0_deepseek_r1::test_api_tool_calls | ✅ | | 19.6s |
| tests/common/vllm_test.py::TestAPIServerToolCall_1::test_api_tool_calls | ✅ | | 17.9s |
| tests/common/vllm_test.py::TestSuperLongGeneration::test_generate | ✅ | | 1m 9s |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent | ✅ | | 17ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer | ✅ | | 4ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer | ✅ | | 540ms |
| tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv | ✅ | | 5ms |
| tests/utils/log_test.py::LogTest::test_actor_log | ✅ | | 4.9s |
| tests/utils/log_test.py::LogTest::test_group_by_node | ✅ | | 4.7s |
| tests/utils/log_test.py::LogTest::test_no_actor_log | ✅ | | 912ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins | ✅ | | 100ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins | ✅ | | 96ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins | ✅ | | 21.9s |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins | ✅ | | 21.7s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins | ✅ | | 11.7s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins | ✅ | | 11.4s |
| tests/utils/registry_test.py::TestRegistry::test_dynamic_import | ✅ | | 3.9s |

Github Test Reporter by CTRF 💚

@hiyuchang
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request primarily refactors and consolidates math evaluation utilities across the codebase. Key changes include moving compute_score and related helper functions from trinity.utils.eval_utils to trinity.common.rewards.eval_utils and trinity.common.rewards.naive_dapo_score, with compute_score being renamed to compute_score_v0 in some contexts and compute_score_bots in others. Many internal math normalization and parsing functions were removed from examples/bots/workflow/bots_reward.py, indicating a shift towards using shared utilities like verl_math_equal and functions from trinity.common.rewards.naive_dapo_score and trinity.common.rewards.qwen25_eval. Test files were updated to reflect these new import paths and function names. Review comments highlighted an issue where verl_math_equal was called without a pi argument within a loop designed to test different pi values, and suggested improving the docstring for last_boxed_only_string in trinity/common/rewards/eval_utils.py for better clarity.
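
A minimal hypothetical sketch of the pi-argument issue the review flagged; verl_math_equal's real signature and the candidate values are assumptions, with a stub standing in for the shared helper:

```python
def verl_math_equal(pred: str, truth: str, pi=None) -> bool:
    """Stub standing in for the shared helper; the signature is an assumption."""
    return pred == truth  # placeholder logic only

pi_candidates = ["\\pi", "3.14"]  # hypothetical values the loop was meant to exercise

for pi in pi_candidates:
    ok = verl_math_equal("3.14", "3.14")         # flagged bug: loop variable `pi` is never passed
    ok = verl_math_equal("3.14", "3.14", pi=pi)  # fix: forward `pi` explicitly
```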

@pan-x-c
Collaborator

pan-x-c commented Dec 9, 2025

/unittest-module-trainer

@github-actions

github-actions bot commented Dec 9, 2025

Summary

| Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️ |
|---|---|---|---|---|---|---|
| 22 | 20 | 0 | 2 | 0 | 0 | 43m 37s |

Skipped

| Tests | Status |
|---|---|
| tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer | skipped ⏭️ |
| tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer | skipped ⏭️ |

Tests

| Test Name | Status | Flaky | Duration |
|---|---|---|---|
| tests/trainer/trainer_test.py::TestTrainerCountdown_0_fsdp::test_trainer | ✅ | | 3m 15s |
| tests/trainer/trainer_test.py::TestTrainerCountdown_1_megatron::test_trainer | ✅ | | 4m 49s |
| tests/trainer/trainer_test.py::TestStepAheadAsyncRL::test_trainer | ✅ | | 1m 29s |
| tests/trainer/trainer_test.py::TestTrainerGSM8K_0_fsdp::test_trainer | ✅ | | 1m 19s |
| tests/trainer/trainer_test.py::TestTrainerGSM8K_1_fsdp2::test_trainer | ✅ | | 1m 23s |
| tests/trainer/trainer_test.py::TestTrainerGSM8K_2_fsdp::test_trainer | ✅ | | 1m 22s |
| tests/trainer/trainer_test.py::TestTrainerGSM8K_3_fsdp2::test_trainer | ✅ | | 1m 34s |
| tests/trainer/trainer_test.py::TestTrainerSFTWarmupGSM8K::test_trainer | ✅ | | 2m 31s |
| tests/trainer/trainer_test.py::TestTrainerDPO::test_trainer | ✅ | | 1m |
| tests/trainer/trainer_test.py::TestTrainerSFT::test_trainer | ✅ | | 57.8s |
| tests/trainer/trainer_test.py::TestTrainerToolsSFT::test_trainer_tools | ✅ | | 58.8s |
| tests/trainer/trainer_test.py::TestFullyAsyncMode_0_fsdp::test_fully_async_mode | ✅ | | 2m |
| tests/trainer/trainer_test.py::TestFullyAsyncMode_1_fsdp::test_fully_async_mode | ✅ | | 1m 54s |
| tests/trainer/trainer_test.py::TestFullyAsyncMode_2_megatron::test_fully_async_mode | ✅ | | 2m 41s |
| tests/trainer/trainer_test.py::TestTrainerCheckpointSave_0_fsdp::test_trainer | ✅ | | 2m 20s |
| tests/trainer/trainer_test.py::TestTrainerCheckpointSave_1_megatron::test_trainer | ✅ | | 4m 24s |
| tests/trainer/trainer_test.py::TestTrainerMIX::test_trainer | ✅ | | 2m 34s |
| tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer | ⏭️ | | 810ms |
| tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer | ⏭️ | | 808ms |
| tests/trainer/trainer_test.py::TestTrainerLoRA::test_trainer | ✅ | | 3m 56s |
| tests/trainer/trainer_test.py::TestOverRollout::test_trainer | ✅ | | 1m 29s |
| tests/trainer/trainer_test.py::TestTrainerPromptTruncation::test_trainer | ✅ | | 1m 11s |

Github Test Reporter by CTRF 💚

@pan-x-c pan-x-c merged commit cecbaed into modelscope:main Dec 9, 2025
1 check passed
