
[For merge][part 2] Support Gedit Evaluate #161

Merged
GaoHuaZhang merged 4 commits into AISBench:master from SJTUyh:gedit_2
Mar 4, 2026

Conversation

@SJTUyh (Collaborator) commented Mar 4, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation behind this PR and the goal you want to achieve with it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does this modification break backward compatibility for downstream repositories? If so, please describe how compatibility is broken and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If this modification introduces a performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, please list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools have been used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests, and the case that caused the bug has been added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification potentially affects downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed, and all committers on this PR have signed it.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

  • /gemini review — Performs a Gemini code review of the pull request in its current state.
  • /gemini summary — Provides a Gemini summary of the pull request in its current state.
  • /gemini help — Displays the list of available Gemini commands.
  • /readthedocs build — Triggers a Read the Docs documentation build for the pull request in its current state.

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the ais_bench benchmark by integrating comprehensive support for evaluating generative image editing models (GEdit) and improving the LLM-based judging framework. It introduces new dataset configurations, a foundational base class for judge datasets, and optimized file handling, enabling more robust and efficient evaluation of multimodal models.

Highlights

  • GEdit Evaluation Support: New configurations and datasets were added to support the evaluation of Generative Image Editing (GEdit) models, including specific grader templates for Scene Consistency (SC) and Perceptual Quality (PQ) metrics.
  • AIME2025 LLM Judge Integration: The AIME2025 dataset now includes configurations for LLM-based judging, defining a detailed grading template and an evaluator for correctness.
  • BaseJDGDataset Introduction: A new BaseJDGDataset class was introduced, providing a generic framework for datasets that involve LLM-based judging, including prediction loading, processing, and task state management.
  • Image Processing Utilities: New utilities for converting PIL images to Base64 strings were added, which are crucial for handling image data in Large Multimodal Model (LMM) judging.
  • JSONL File Handling Optimization: load_jsonl and dump_jsonl functions were introduced using mmap and orjson for potentially faster and more efficient JSONL file operations.


Changelog
  • ais_bench/benchmark/configs/datasets/aime2025/aime2025_gen_0_shot_llmjudge.py
    • Added new configuration for AIME2025 dataset with LLM judging.
  • ais_bench/benchmark/configs/datasets/gedit/gedit_gen_0_shot_llmjudge.py
    • Added new configuration for GEdit dataset with LMM judging, including SC and PQ metrics.
  • ais_bench/benchmark/configs/models/lmm_models/qwen_image_edit.py
    • Added configuration for Qwen-Image-Edit model.
  • ais_bench/benchmark/datasets/aime2025.py
    • Modified to include Aime2025JDGDataset for LLM judging.
  • ais_bench/benchmark/datasets/base.py
    • Updated BaseDataset to include task_state_manager.
    • Introduced BaseJDGDataset for judge datasets.
  • ais_bench/benchmark/datasets/g_edit.py
    • Added new dataset GEditDataset and its judge dataset variants (GEditSCJDGDataset, GEditPQJDGDataset) for image editing evaluation.
  • ais_bench/benchmark/datasets/utils/datasets.py
    • Updated comments and removed trailing whitespace.
  • ais_bench/benchmark/datasets/utils/llm_judge.py
    • Added new file with LLMJudgeDataset and LLMJudgeCorrectEvaluator for text-based LLM judging.
  • ais_bench/benchmark/datasets/utils/lmm_judge.py
    • Added new file with LMMImgJDGDataset, ImgSCJDGDataset, ImgPQJDGDataset, and LMMJudgeImageEditEvaluator for image-based LMM judging.
  • ais_bench/benchmark/utils/config/build.py
    • Modified build_dataset_from_cfg to accept task_state_manager.
  • ais_bench/benchmark/utils/file/file.py
    • Added mmap and orjson imports.
    • Introduced load_jsonl and dump_jsonl functions for optimized JSONL handling.
  • ais_bench/benchmark/utils/image_process.py
    • Added new file with pil_to_base64 utility function.
  • ais_bench/benchmark/utils/prompt/prompt.py
    • Modified format_mm to use deepcopy instead of copy for prompt components to prevent unintended modifications.
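The last changelog entry (copy → deepcopy in format_mm) is easy to motivate with a minimal demo. The prompt structure below is hypothetical; the real format_mm components may differ.

```python
import copy

# Hypothetical prompt component with a nested list, as multimodal
# templates typically have.
template = {"role": "user", "content": [{"type": "text", "text": "{q}"}]}

# copy.copy shares the nested "content" list, so editing the copy
# silently mutates the template itself:
shallow = copy.copy(template)
shallow["content"][0]["text"] = "What is 2+2?"
assert template["content"][0]["text"] == "What is 2+2?"  # template corrupted

# copy.deepcopy duplicates the nested structures, so the template
# survives per-example formatting:
template["content"][0]["text"] = "{q}"  # reset
deep = copy.deepcopy(template)
deep["content"][0]["text"] = "What is 2+2?"
assert template["content"][0]["text"] == "{q}"  # template untouched
```

This is exactly the "unintended modifications" failure mode the changelog entry describes: with a shallow copy, formatting one example would leak into every later example built from the same template.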
Activity
  • The pull request was created by SJTUyh and is categorized as a 'Feature' type, indicating new functionality.
  • The pull request description includes a detailed template for contributions, suggesting a structured approach to development and review.


@gemini-code-assist bot left a comment:

Code Review

This pull request introduces support for evaluating the G-Edit and AIME2025 datasets, including new configurations, dataset loaders, and LLM-based evaluation logic. The changes are extensive, adding base classes for judge datasets and utilities for handling multimodal data and efficient file I/O. A critical security concern has been identified: a path traversal vulnerability in the image loading logic for multimodal judge datasets, which could allow arbitrary file reads. Additionally, potential prompt injection instances were found where untrusted model outputs or dataset instructions are directly embedded into prompts without proper sanitization. Beyond security, areas for improvement include fixing contradictory instructions in prompts, addressing potential runtime errors, optimizing dataset creation for better performance, and cleaning up code style.

Comment on lines +49 to +53:

    image_path = os.path.join(base_path, pred_item.get('prediction', ''))
    if image_path and os.path.exists(image_path):
        try:
            # Load the image
            with Image.open(image_path) as img:
security-high

The _load_from_predictions method constructs an image path by joining a base directory with a prediction string obtained from a model's output. If the model's output is manipulated (e.g., via prompt injection), an attacker can specify a path like ../../../../etc/passwd. While Image.open() might fail on non-image files, this still allows for file existence probing and potential arbitrary file read of any image-compatible files on the system. The content of the file is then base64-encoded and potentially exposed to the judge model.

Comment on lines +67 to +68:

    example['content'] = AIS_IMAGE_START + image_url + AIS_CONTENT_TAG \
        + AIS_TEXT_START + example['instruction'] + AIS_CONTENT_TAG
security-medium

A prompt injection vulnerability exists here: the dataset instruction is directly concatenated into the content field using special tags like AIS_CONTENT_TAG. If the instruction itself contains these tags (e.g., <AIS_CONTENT_TAG>, <AIS_IMAGE_START>), it can break the prompt structure and inject arbitrary segments (text or images) into the final prompt processed by the format_mm method. This allows for prompt injection attacks via the dataset content.

Separately, the current approach of creating a new Dataset object for each example and merging them with concatenate_datasets is inefficient; consider processing examples into a list of dictionaries and creating a single Dataset at the end using Dataset.from_list() for better performance.


<Original Question Begin>: \n{question}\n<Original Question End>\n\n
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{model_answer}\n<Predicted End>\n\n
security-medium

The model_answer is directly embedded into the GRADER_TEMPLATE. Since model_answer is the output of the model being evaluated, it is untrusted. A malicious or manipulated model could output content that breaks out of the intended block (e.g., by including <Predicted End>) and injects new instructions to the judge model, potentially biasing the evaluation results.

Comment on lines +48 to +49:

    Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
medium

The instruction on line 48, "Simply reply with either CORRECT, INCORRECT", contradicts the instructions on lines 46 and 55, which ask for "A" or "B". This inconsistency could confuse the LLM judge. Since the post-processor get_a_or_b expects 'A' or 'B', these lines should be removed for consistency.
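For reference, a post-processor with the contract described above could be as simple as the following. This is a hypothetical sketch; the PR's actual get_a_or_b implementation is not shown in this thread.

```python
import re


def get_a_or_b(judge_output: str) -> str:
    """Extract the judge's final standalone 'A' or 'B' verdict from free-form
    output, or return '' if no verdict is found (hypothetical sketch)."""
    matches = re.findall(r"\b([AB])\b", judge_output.upper())
    # Take the last standalone A/B, since judges often reason first and
    # state the verdict at the end.
    return matches[-1] if matches else ""
```

A post-processor this strict is exactly why the contradictory "CORRECT/INCORRECT" instruction matters: a judge that follows it would produce output this function maps to the empty string.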

Comment on lines +24 to +25:

    A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 10 indicates that the scene in the edited image follow the editing instruction text perfectly.)
    A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimal edited yet effective version of original.)
medium

There are a couple of grammatical errors in the prompt which could be corrected for clarity:

  • On line 24, "follow" should be "follows".
  • On line 25, "version of original" should be "version of the original".
Suggested change:

  Before:
    A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 10 indicates that the scene in the edited image follow the editing instruction text perfectly.)
    A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimal edited yet effective version of original.)
  After:
    A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 10 indicates that the scene in the edited image follows the editing instruction text perfectly.)
    A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimal edited yet effective version of the original.)

    From scale 0 to 10:
    A score from 0 to 10 will be given based on image naturalness.
    (
    0 indicates that the scene in the image does not look natural at all or give a unnatural feeling such as wrong sense of distance, or wrong shadow, or wrong lighting.
medium

There is a grammatical error here: "give a unnatural" should be "gives an unnatural".

Suggested change:

  Before:
    0 indicates that the scene in the image does not look natural at all or give a unnatural feeling such as wrong sense of distance, or wrong shadow, or wrong lighting.
  After:
    0 indicates that the scene in the image does not look natural at all or gives an unnatural feeling such as wrong sense of distance, or wrong shadow, or wrong lighting.

    if len(dataset_batches) == 1:
        return dataset_batches[0]
    else:
        from datasets import concatenate_datasets
medium

It's a best practice to place all imports at the top of the file. This improves readability and helps avoid potential issues like circular dependencies. Please move from datasets import concatenate_datasets to the top of the file.

from PIL import Image
from tqdm import tqdm

from ais_bench.benchmark.datasets.needlebench_v2 import origin
medium

The import from ais_bench.benchmark.datasets.needlebench_v2 import origin appears to be unused in this file. It should be removed to maintain code cleanliness.

    Convert PIL Image to base64 string
    """
    if not isinstance(image, Image.Image):
        raise ValueError("Input must be a PIL Image object")
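For context, the full helper that the fragment above appears to come from can be sketched as follows. This is a hypothetical reconstruction; the PR's actual signature, default format, and error handling may differ.

```python
import base64
import io

from PIL import Image


def pil_to_base64(image, fmt: str = "PNG") -> str:
    """Encode a PIL image as a base64 string (hypothetical reconstruction
    of the utility described in the changelog)."""
    if not isinstance(image, Image.Image):
        raise ValueError("Input must be a PIL Image object")
    buf = io.BytesIO()
    image.save(buf, format=fmt)  # serialize to an in-memory buffer
    return base64.b64encode(buf.getvalue()).decode("ascii")
```

The resulting string is what gets embedded in the judge prompt as an image URL payload, which is why the earlier path-traversal finding matters: whatever file Image.open manages to load ends up base64-encoded and shipped to the judge model.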
A collaborator commented:

[review] Maintain this using the AISBench error codes (i.e., replace the raw ValueError with an AISBench error code).

SJTUyh and others added 2 commits on March 4, 2026 at 17:23.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
GaoHuaZhang merged commit e32ceeb into AISBench:master on Mar 4, 2026.
2 of 3 checks passed.