
[doc] [en]Add docs for judge evaluate and gedit extend benchmark #170

Merged
GaoHuaZhang merged 2 commits into AISBench:master from SJTUyh:gedit_doc_en_pr
Mar 5, 2026
Conversation

@SJTUyh (Collaborator) commented Mar 5, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation of this PR and the goal you want to achieve through it.
Add English docs for the Judge Model.

📝 Modification

Please briefly describe the modifications made in this PR.
Add English docs for the Judge Model.

📐 Associated Test Results

Please provide links to the related test results, such as CI pipelines, test reports, etc.

⚠️ BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed by all committers in this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

| Command | Introduction |
| --- | --- |
| /gemini review | Performs a code review of the current pull request in its current state by Gemini. |
| /gemini summary | Provides a summary of the current pull request in its current state by Gemini. |
| /gemini help | Displays a list of available Gemini commands. |
| /readthedocs build | Triggers a Read the Docs build of the documentation for the current pull request in its current state. |

@github-actions github-actions bot added the docs label Mar 5, 2026
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the English documentation for the AISBench tool by introducing detailed guides for advanced evaluation methodologies. It provides a new tutorial on utilizing 'judge models' for scenarios where traditional ground truth comparisons are insufficient, and integrates documentation for the GEdit-Bench, a benchmark for real-world image editing. These additions improve the clarity and scope of the evaluation framework, making it more accessible for users to understand and implement complex benchmarking tasks.

Highlights

  • New Judge Model Evaluation Documentation: A comprehensive English documentation file docs/source_en/advanced_tutorials/judge_model_evaluate.md has been added, detailing the purpose, setup, execution, and result interpretation for evaluations using a judge model.
  • GEdit-Bench Benchmark Documentation: New English documentation docs/source_en/extended_benchmark/lmm_generate/gedit_bench.md has been introduced, providing an overview of the GEdit-Bench benchmark for real-world instruction-based image editing and a practical guide for its evaluation within AISBench.
  • Documentation Navigation Enhancements: The main English documentation index (docs/source_en/index.rst) has been updated to seamlessly integrate the new judge model evaluation tutorial and a dedicated section for extended multimodal generation benchmarks.
  • AIME2025 Dataset Task Expansion: The README_en.md for the AIME2025 dataset now includes a new entry for aime2025_gen_0_shot_llmjudge, indicating support for evaluations leveraging a judge model.


Changelog
  • ais_bench/benchmark/configs/datasets/aime2025/README_en.md
    • Added a new entry for the 'aime2025_gen_0_shot_llmjudge' task, detailing its use with a judge model for accuracy evaluation.
  • docs/source_en/advanced_tutorials/judge_model_evaluate.md
    • Added a new document explaining the concept, setup, execution, and result interpretation for judge model-based evaluations.
  • docs/source_en/extended_benchmark/lmm_generate/gedit_bench.md
    • Added a new document introducing the GEdit-Bench benchmark and providing a step-by-step guide for evaluating the Qwen-Image-Edit model using AISBench.
  • docs/source_en/extended_benchmark/lmm_generate/index.rst
    • Added a new index file to organize extended multimodal generation benchmarks, including a link to the GEdit-Bench documentation.
  • docs/source_en/index.rst
    • Updated the main documentation index to link to the new judge model evaluation tutorial and the extended multimodal generation benchmarks section.
Activity
  • The pull request was created by SJTUyh.
  • The PR is categorized as a 'Docs' type.
  • The primary motivation for this PR is to 'add en docs for Judge Model'.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@SJTUyh SJTUyh changed the title [doc] [en]add en docs for Judge Model [doc] [en]Add docs for judge evaluate and gedit extend benchmark Mar 5, 2026
@gemini-code-assist bot left a comment

Code Review

The pull request introduces comprehensive documentation for the Judge Model evaluation and GEdit-Bench, enhancing the clarity and usability of the AISBench tool. The new documentation covers the motivation, quick start, configuration, execution details, and various evaluation scenarios for the Judge Model, along with a detailed guide for GEdit-Bench evaluation. The changes also correctly integrate these new documentation pages into the main English documentation structure. Some minor improvements in clarity and consistency within the new documentation have been identified.

| --- | --- | --- | --- | --- | --- |
| aime2025_gen | Generative task for the AIME2025 dataset | Accuracy | 0-shot | Chat format | aime2025_gen_0_shot_chat_prompt.py |
| aime2025_gen_0_shot_llmjudge | AIME2025 | Generative task for the AIME2025 dataset | Accuracy evaluated by judge model | 0-shot | Chat format | aime2025_gen_0_shot_llmjudge.py |
gemini-code-assist bot (medium):

The 'Introduction' for aime2025_gen_0_shot_llmjudge is currently AIME2025, which appears to be the dataset name rather than an introduction to the task itself. For consistency with the previous entry, consider providing a brief description of the task.

Suggested change
| aime2025_gen_0_shot_llmjudge | AIME2025 | Generative task for the AIME2025 dataset | Accuracy evaluated by judge model | 0-shot | Chat format | aime2025_gen_0_shot_llmjudge.py |
| aime2025_gen_0_shot_llmjudge | Generative task for the AIME2025 dataset with judge model evaluation | Accuracy evaluated by judge model | 0-shot | Chat format | aime2025_gen_0_shot_llmjudge.py |

```bash
ais_bench --models vllm_api_general_chat --datasets aime2025_gen_0_shot_llmjudge
```

> Note: Judge model dataset tasks differ from regular dataset tasks in configuration, but both types of dataset tasks can be mixed in a single dataset task.
gemini-code-assist bot (medium):

This note could be more specific. Clarify what aspects of the configuration differ and what 'mixed in a single dataset task' implies (e.g., running both types in one command or within one config file).
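As an illustration of what "mixed" could mean here, both a regular dataset task and a judge-model dataset task can be listed in one invocation; the first dataset name below is taken from the README table above, and the exact combination is an assumption, not a command from this PR:

```shell
# Hypothetical sketch: a regular task and a judge-model task in a single run
ais_bench --models vllm_api_general_chat --datasets aime2025_gen aime2025_gen_0_shot_llmjudge
```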


### Pre-run Preparation

- `--models`: Using `vllm_api_general_chat` model task requires preparing an inference service that supports `v1/chat/completions` sub-service. You can refer to 🔗 [VLLM Launch OpenAI Compatible Server](https://docs.vllm.com.cn/en/latest/getting_started/quickstart.html#openai-compatible-server) to start the inference service (the tested model is one inference service, and the judge model is another inference service; for quick start, you can also share one service if convenient).
gemini-code-assist bot (medium):

The sentence is a bit long and could be rephrased for better readability. Consider splitting it or simplifying the phrasing.

Suggested change
- `--models`: Using `vllm_api_general_chat` model task requires preparing an inference service that supports `v1/chat/completions` sub-service. You can refer to 🔗 [VLLM Launch OpenAI Compatible Server](https://docs.vllm.com.cn/en/latest/getting_started/quickstart.html#openai-compatible-server) to start the inference service (the tested model is one inference service, and the judge model is another inference service; for quick start, you can also share one service if convenient).
to start the inference service (typically, the tested model and the judge model each require a separate inference service; however, for quick start, you can share one service if convenient).
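For reference, an OpenAI-compatible endpoint is typically started with vLLM's `serve` command; the model paths and ports below are placeholders, and the exact flags depend on your vLLM version:

```shell
# One service for the tested model and, optionally, a second one for the judge model;
# each service then exposes a v1/chat/completions endpoint.
vllm serve /path/to/tested-model --port 8000
vllm serve /path/to/judge-model --port 8001
```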


- The configuration method for `vllm_api_general_chat` corresponding to the tested model task configuration file is the same as in Quick Start, not repeated here.
- In the `aime2025_gen_0_shot_llmjudge` corresponding judge model dataset task configuration file, you need to modify the judge model configuration:
gemini-code-assist bot (medium):

The phrasing "In the ... corresponding judge model dataset task configuration file" is a bit awkward. Consider rephrasing for better flow.

Suggested change
- In the `aime2025_gen_0_shot_llmjudge` corresponding judge model dataset task configuration file, you need to modify the judge model configuration:
For the `aime2025_gen_0_shot_llmjudge` judge model dataset task configuration file, you need to modify the judge model configuration:

```python
    pred_postprocessor=dict(type=extract_non_reasoning_content),
),
```
The meaning of judge model task configuration is exactly the same as the tested model task configuration.
gemini-code-assist bot (medium):

This statement is a bit redundant after providing the detailed configuration block. If kept, it could be rephrased to emphasize that the parameters within the judge_model dictionary have similar meanings to those in the tested model's configuration.
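To make the parallel between the two configurations concrete: judge-model evaluation conceptually replaces exact-match scoring with a graded verdict from another model. The following is a minimal, hypothetical sketch of that idea, not AISBench's actual implementation:

```python
# Hypothetical sketch: judge-model scoring replaces string matching with a
# model-issued verdict per sample.
def build_judge_prompt(question, prediction, reference):
    # The judge sees the question, the reference answer, and the model's answer.
    return (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {prediction}\n"
        "Reply with CORRECT or INCORRECT."
    )

def accuracy_from_verdicts(verdicts):
    # verdicts are the judge's raw replies, e.g. ["CORRECT", "INCORRECT"].
    correct = sum(1 for v in verdicts if v.strip().upper().startswith("CORRECT"))
    return correct / len(verdicts)
```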

# ====== User configuration parameters =========
qwen_image_edit_models[0]["path"] = "/path/to/Qwen-Image-Edit-2509/" # Modify to actual model weight path
qwen_image_edit_models[0]["infer_kwargs"]["num_inference_steps"] = 50 # Modify to the required inference steps
device_list = [0] # [0, 1, 2, 3] Modify to the actual available NPU device ID list, not necessarily in order, each device will separately load a weight
gemini-code-assist bot (medium):

The comment [0, 1, 2, 3] is an example, but the current value is [0]. It might be clearer to show a more representative example or explicitly state that [0] is just a placeholder for a single device.

Suggested change
device_list = [0] # [0, 1, 2, 3] Modify to the actual available NPU device ID list, not necessarily in order, each device will separately load a weight
device_list = [0] # Example: [0, 1, 2, 3]. Modify to the actual available NPU device ID list; each device will separately load a weight.
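As context for how such a device list is commonly consumed, a round-robin split of samples across the configured devices looks like the hypothetical sketch below (an illustration, not AISBench's code):

```python
# Hypothetical illustration: distribute samples across the configured NPU devices,
# matching "each device will separately load a weight" and process its own share.
def assign_to_devices(samples, device_list):
    # Device i gets every len(device_list)-th sample, starting at offset i.
    return {d: samples[i::len(device_list)] for i, d in enumerate(device_list)}
```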

judge_model=dict(
attr="service",
type=VLLMCustomAPIChat,
abbr=f"{metric}_judge", # Be added after dataset abbr
gemini-code-assist bot (medium):

The comment "Be added after dataset abbr" is a bit vague. Clarify what abbr represents in this context (e.g., "This abbreviation is appended to the dataset abbreviation to form a unique identifier for the judge model.").
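Putting the quoted fields together, a judge-model entry might look like the config-fragment sketch below; `attr`, `type`, and `abbr` are quoted from this PR, while the commented-out fields are assumptions for illustration and the exact keys depend on the AISBench version:

```python
judge_model = dict(
    attr="service",
    type=VLLMCustomAPIChat,   # judge reached over an OpenAI-compatible API
    abbr=f"{metric}_judge",   # appended after the dataset abbr, e.g. gedit-3-SC_judge
    # Illustrative assumptions, not quoted from this PR:
    # path="...",             # judge model name on the serving endpoint
    # openai_api_base="http://localhost:8001/v1/chat/completions",
)
```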

| gedit-3-SC_judge | 16dd59 | SC | gen | - | - | - | 7.31 |
| gedit-3-PQ_judge | 16dd59 | PQ | gen | - | - | - | 7.24 |

[2026-03-04 15:40:45,583] [ais_bench] [INFO] write markdown summary to /workplace/benchmark/ais_bench/configs/lmm_exmaple/outputs/default/20260213_150110/summary/summary_20260304_152835.md
gemini-code-assist bot (medium):

There is a typo in the path: lmm_exmaple should be lmm_example.

Suggested change
[2026-03-04 15:40:45,583] [ais_bench] [INFO] write markdown summary to /workplace/benchmark/ais_bench/configs/lmm_exmaple/outputs/default/20260213_150110/summary/summary_20260304_152835.md
[2026-03-04 15:40:45,583] [ais_bench] [INFO] write markdown summary to /workplace/benchmark/ais_bench/configs/lmm_example/outputs/default/20260213_150110/summary/summary_20260304_152835.md

```bash
# python3 -m ais_bench.tools.dataset_processors.gedit.display_results --config_path {CONFIG_PATH} --timestamp_path {TIMESTAMP_PATH}
python3 -m ais_bench.tools.dataset_processors.gedit.display_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/20260213_150110/
```
gemini-code-assist bot (medium):

The timestamp 20260213_150110 is hardcoded in the example. It would be beneficial to add a note indicating that this is an example timestamp and the actual value will vary based on execution.

Suggested change
python3 -m ais_bench.tools.dataset_processors.gedit.display_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/20260213_150110/
python3 -m ais_bench.tools.dataset_processors.gedit.display_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/{ACTUAL_TIMESTAMP}/


```bash
# python3 -m ais_bench.tools.dataset_processors.gedit.convert_results --config_path {CONFIG_PATH} --timestamp_path {TIMESTAMP_PATH}
python3 -m ais_bench.tools.dataset_processors.gedit.convert_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/20260213_150110/
```
gemini-code-assist bot (medium):

Similar to the previous comment, the timestamp 20260213_150110 is hardcoded. A note about it being an example would improve clarity.

Suggested change
python3 -m ais_bench.tools.dataset_processors.gedit.convert_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/20260213_150110/
python3 -m ais_bench.tools.dataset_processors.gedit.convert_results --config_path ./multi_device_run_qwen_image_edit.py --timestamp_path outputs/default/{ACTUAL_TIMESTAMP}/
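Since the timestamp directory differs per run, a small helper can resolve the most recent one; this is a hypothetical illustration, not part of ais_bench:

```python
from pathlib import Path

def latest_timestamp_dir(outputs_root):
    # Timestamped names like 20260213_150110 sort lexicographically by time,
    # so the lexicographic maximum is the most recent run.
    names = sorted(p.name for p in Path(outputs_root).iterdir() if p.is_dir())
    return names[-1] if names else None
```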

@GaoHuaZhang GaoHuaZhang merged commit 48ba345 into AISBench:master Mar 5, 2026
3 checks passed