
[For merge][part 3] Support Gedit Evaluate #162

Merged
GaoHuaZhang merged 2 commits into AISBench:master from SJTUyh:gedit_3
Mar 4, 2026

Conversation

@SJTUyh
Collaborator

@SJTUyh SJTUyh commented Mar 4, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

📝 Modification

Please briefly describe what modification is made in this PR.

📐 Associated Test Results

Please provide links to the related test results, such as CI pipelines or test reports.

⚠️ BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed, and all committers in this PR have signed it.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

Command Description
/gemini review Performs a Gemini code review of the pull request in its current state.
/gemini summary Provides a Gemini summary of the pull request in its current state.
/gemini help Displays the list of available Gemini commands.
/readthedocs build Triggers a Read the Docs documentation build for the pull request in its current state.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates Gedit evaluation capabilities, particularly for the Qwen-Image-Edit model, by adding new configuration and data processing tools. Concurrently, it significantly strengthens the testing framework by resolving issues in worker configurations and output handling, leading to a more reliable and consistent evaluation pipeline.

Highlights

  • Gedit Evaluation Support: Introduced new configuration and processing scripts to support evaluation of the Qwen-Image-Edit model on Gedit datasets, including multi-device inference capabilities.
  • Improved Test Robustness: Enhanced unit tests for worker configurations, output handlers, and evaluation tasks to ensure more robust and accurate benchmark execution, addressing several edge cases and data structure inconsistencies.
  • Refined Output Handling: Modified output handlers to consistently include data_abbr in results and improved UUID generation logic for string outputs, ensuring better traceability and data management.


Changelog
  • ais_bench/configs/lmm_exmaple/multi_device_run_qwen_image_edit.py
    • Added a new configuration file for multi-device inference of the Qwen-Image-Edit model on Gedit datasets.
  • ais_bench/tools/dataset_processors/gedit/convert_preds.py
    • Added a new script to parse and convert Gedit predictions into a standardized format.
  • ais_bench/tools/dataset_processors/gedit/display_results.py
    • Added a new script to display and dump evaluation results for Gedit datasets, including SC and PQ points.
  • tests/UT/cli/test_workers.py
    • Updated test cases for MockConfigDict and worker classes to fix formatting and ensure proper handling of configurations and task execution.
    • Added datasets field to MockConfigDict in eval worker tests to support cfg.datasets access.
  • tests/UT/openicl/icl_inferencer/output_handler/test_bfcl_v3_output_handler.py
    • Modified get_result calls to include data_abbr parameter for consistency and improved test coverage.
  • tests/UT/openicl/icl_inferencer/output_handler/test_gen_inferencer_output_handler.py
    • Modified get_result calls to include data_abbr parameter.
    • Improved UUID generation comments and added mocks for get_prediction_result in failure scenarios.
  • tests/UT/openicl/icl_inferencer/output_handler/test_ppl_inferencer_output_handler.py
    • Modified get_result calls to include data_abbr parameter for consistency.
  • tests/UT/tasks/test_openicl_api_infer.py
    • Added task_state_manager mock to _create_task method to support _get_data_list functionality.
  • tests/UT/tasks/test_openicl_eval.py
    • Added task_state_manager mock to _create_task and _score methods to ensure proper state management.
    • Fixed various test setups for dataset_cfgs, model_cfg, eval_cfg, and cfg["datasets"] to ensure correct data structures and method calls.
    • Updated run method call to accept TaskStateManager mock for improved test isolation.
  • tests/pytest.ini
    • Added qwen_image_edit_mindie_sd.py to the omit list for coverage reports.
Activity
  • The pull request was created by SJTUyh, indicating a new feature to support Gedit evaluation.
  • The PR description includes a comprehensive checklist for quality assurance and maintainer guidance, suggesting a structured development process.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for Gedit evaluation, including a new configuration file and scripts for processing and displaying results. A security audit identified a potential Path Traversal and Arbitrary File Read/Write vulnerability in the convert_preds.py tool script, as it constructs file paths from dataset and prediction file content without proper sanitization. It is recommended to validate and sanitize all file-system-related inputs, especially those from external data files. Additionally, the code review suggests improvements for robustness, clarity, and adherence to Python best practices, including using deep copies for configurations, refactoring repetitive code, using more specific exception handling, and removing unused code.

Comment on lines +19 to +21
model_config = {k: v for k, v in qwen_image_edit_models[0].items()}
model_config['abbr'] = f"{model_config['abbr']}-{i}"
model_config['device_kwargs'] = dict(model_config['device_kwargs'])

high

Using a shallow copy with {k: v for ...} can lead to unexpected side effects if qwen_image_edit_models[0] contains nested mutable objects (like dictionaries or lists). It's safer to use copy.deepcopy to ensure each model_config is a completely independent copy. This also makes the explicit copy of device_kwargs on line 21 redundant.

Please also add import copy at the top of the file.

Suggested change
model_config = {k: v for k, v in qwen_image_edit_models[0].items()}
model_config['abbr'] = f"{model_config['abbr']}-{i}"
model_config['device_kwargs'] = dict(model_config['device_kwargs'])
model_config = copy.deepcopy(qwen_image_edit_models[0])
model_config['abbr'] = f"{model_config['abbr']}-{i}"
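As a hedged illustration of why the suggestion matters (the config values below are made up, not taken from the PR), a dict comprehension copies only the top level, so nested dicts remain shared between the copies:

```python
import copy

# Hypothetical nested model config mirroring the structure under review.
base = {"abbr": "qwen-image-edit", "device_kwargs": {"device_id": 0}}

shallow = {k: v for k, v in base.items()}   # nested dict is still shared
shallow["device_kwargs"]["device_id"] = 7   # this also mutates base

deep = copy.deepcopy(base)                  # fully independent copy
deep["device_kwargs"]["device_id"] = 3      # base is unaffected
```

After this runs, `base["device_kwargs"]["device_id"]` is 7 (mutated through the shallow copy), while the deep copy holds 3 without touching `base`.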


dataset_configs = []
for dataset in gedit_datasets:
dataset_config = {k: v for k, v in dataset.items()}

high

Similar to the model configuration, using a shallow copy for the dataset configuration can lead to unintended sharing of nested mutable objects. Using copy.deepcopy ensures that each dataset configuration is fully independent.

Please ensure import copy is added at the top of the file.

Suggested change
dataset_config = {k: v for k, v in dataset.items()}
dataset_config = copy.deepcopy(dataset)

Comment on lines +91 to +119
evaluate_result_list = []

lang_count = {"zh": 0, "en": 0}
for id in self.all_data_results.keys():
lang_count[self.all_data_results[id]["language"]] += 1

for lang in ["zh", "en"]:
sc_point_sum = 0
pq_point_sum = 0
o_point_sum = 0
count = 0
for id in self.all_data_results.keys():
if self.all_data_results[id]["language"] == lang:
sc_point_sum += self.all_data_results[id]["SC_point"]
pq_point_sum += self.all_data_results[id]["PQ_point"]
o_point_sum += self.all_data_results[id]["O_point"]
count += 1
if count > 0:
evaluate_result_list.append(copy.deepcopy([lang, sc_point_sum / count, pq_point_sum / count, o_point_sum / count]))

sc_point_sum = 0
pq_point_sum = 0
o_point_sum = 0
count = len(self.all_data_results)
for id in self.all_data_results.keys():
sc_point_sum += self.all_data_results[id]["SC_point"]
pq_point_sum += self.all_data_results[id]["PQ_point"]
o_point_sum += self.all_data_results[id]["O_point"]
evaluate_result_list.append(copy.deepcopy(["all case", sc_point_sum / count, pq_point_sum / count, o_point_sum / count]))

high

The logic for calculating points for each language and then for all cases is repetitive. This can be refactored into a single loop over the results to improve readability and maintainability. You can use a defaultdict to accumulate sums and counts for each language.

Remember to add from collections import defaultdict at the top of the file.

        evaluate_result_list = []
        lang_sums = defaultdict(lambda: {'sc': 0, 'pq': 0, 'o': 0, 'count': 0})

        for res in self.all_data_results.values():
            lang = res['language']
            lang_sums[lang]['sc'] += res['SC_point']
            lang_sums[lang]['pq'] += res['PQ_point']
            lang_sums[lang]['o'] += res['O_point']
            lang_sums[lang]['count'] += 1

        total_sc, total_pq, total_o, total_count = 0, 0, 0, 0
        for lang, sums in lang_sums.items():
            count = sums['count']
            if count > 0:
                evaluate_result_list.append([lang, sums['sc'] / count, sums['pq'] / count, sums['o'] / count])
                total_sc += sums['sc']
                total_pq += sums['pq']
                total_o += sums['o']
                total_count += count

        if total_count > 0:
            evaluate_result_list.append(["all case", total_sc / total_count, total_pq / total_count, total_o / total_count])

return 0
try:
point_list = json.loads(results_list)
except BaseException as e:

high

Catching BaseException is generally a bad practice as it can suppress exceptions like KeyboardInterrupt and SystemExit, making the program harder to terminate. It's better to catch a more specific exception. In this case, since you are parsing JSON, json.JSONDecodeError would be more appropriate.

Suggested change
except BaseException as e:
except json.JSONDecodeError as e:
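Sketching the narrower handler in isolation (`parse_points` is a hypothetical stand-in for the surrounding method, not code from this PR):

```python
import json

def parse_points(raw):
    """Parse a JSON string of points; return None on malformed input.

    Catching json.JSONDecodeError (rather than BaseException) leaves
    KeyboardInterrupt and SystemExit free to propagate, so the program
    can still be terminated normally.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        print(f"Failed to parse results: {e}")
        return None
```

With this handler, well-formed input such as `'[8, 9]'` parses normally, while malformed input is reported and yields `None` instead of crashing or swallowing interrupts.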

Comment on lines +80 to +83
dump_dir = os.path.join(save_path, item["task_type"], item["instruction_language"])
os.makedirs(dump_dir, exist_ok=True)
# copy output_img_path into dump_dir
shutil.copy(item['output_img_path'], os.path.join(dump_dir, item['key'] + '.png'))

security-medium

The dump_gedit_format_result method uses several fields from the dataset (item["task_type"], item["instruction_language"], item['key']) and the prediction file (item['output_img_path']) to construct file system paths without any sanitization or validation.

Specifically:

  • dump_dir is constructed using item["task_type"] and item["instruction_language"] (line 80).
  • shutil.copy is called with item['output_img_path'] as the source and a path constructed with item['key'] as the destination (line 83).

If an attacker provides a malicious dataset or a malicious prediction JSONL file, they can perform an Arbitrary File Read by setting item['output_img_path'] to a sensitive file path (e.g., /etc/passwd), or a Path Traversal / Arbitrary File Write by including traversal sequences (e.g., ../) in the dataset fields. This could allow reading or overwriting files outside of the intended output directory.

Comment on lines +44 to +48
self.dataset = {}
for i in tqdm(range(len(dataset)), desc="Converting dataset to dictionary"):
item = dataset[i]
# use the index as the id, since the Dataset may not have an 'id' key
self.dataset[i] = item

medium

This loop can be simplified into a more concise and Pythonic dictionary comprehension.

Suggested change
self.dataset = {}
for i in tqdm(range(len(dataset)), desc="Converting dataset to dictionary"):
item = dataset[i]
# use the index as the id, since the Dataset may not have an 'id' key
self.dataset[i] = item
self.dataset = {i: dataset[i] for i in tqdm(range(len(dataset)), desc="Converting dataset to dictionary")}

self.all_data_results = {}

for uuid in tqdm(org_pred_data_dict.keys(), desc="Parsing results"):
id = org_pred_data_dict[uuid]["id"]

medium

The variable name id shadows the built-in function id(). It's a good practice to avoid this to prevent potential confusion and bugs. Consider renaming it to something like item_id. This applies to other occurrences of id as a variable in this file (e.g., line 79).

merged_data = []
start_index = 0
for path in self.paths_map[path_kind]:
offset_index = copy.deepcopy(start_index)

medium

Using copy.deepcopy() on an integer is unnecessary as integers are immutable. A direct assignment offset_index = start_index is sufficient and more efficient.

Suggested change
offset_index = copy.deepcopy(start_index)
offset_index = start_index

import csv
import tabulate

from ais_bench.benchmark.configs.datasets.needlebench_v2.needlebench_v2_4k.needlebench_v2_multi_reasoning_4k import language

medium

The import language from ais_bench.benchmark.configs.datasets.needlebench_v2.needlebench_v2_4k.needlebench_v2_multi_reasoning_4k appears to be unused and can be removed.

merged_data = []
start_index = 0
for path in self.paths_map[path_kind]:
offset_index = copy.deepcopy(start_index)

medium

Using copy.deepcopy() on an integer is unnecessary as integers are immutable. A direct assignment offset_index = start_index is sufficient and more efficient.

Suggested change
offset_index = copy.deepcopy(start_index)
offset_index = start_index


def main():
"""主函数"""
parser = argparse.ArgumentParser(description="显示gedit数据集的推理结果")

[review] Please use English for descriptions consistently.


def main():
"""主函数"""
parser = argparse.ArgumentParser(description="显示gedit数据集的推理结果")

[review] Please use English for descriptions consistently.

@GaoHuaZhang GaoHuaZhang merged commit 51fa3d7 into AISBench:master Mar 4, 2026
2 of 3 checks passed