
[For merge][part 1] Support Gedit Evaluate #159

Merged

GaoHuaZhang merged 2 commits into AISBench:master from SJTUyh:gedit_1 on Mar 4, 2026

Conversation

@SJTUyh (Collaborator) commented Mar 4, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry; just open the pull request and ask the maintainers for help.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation for this PR and the goal you want to achieve with it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does the modification break backward compatibility of downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests, and the case that caused the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has a potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed by all committers in this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

  • /gemini review — Performs a code review of the current pull request in its current state by Gemini.
  • /gemini summary — Provides a summary of the current pull request in its current state by Gemini.
  • /gemini help — Displays a list of Gemini's available commands.
  • /readthedocs build — Triggers a documentation build for the current pull request in its current state by Read the Docs.

@gemini-code-assist (Contributor, bot)

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the benchmark's capabilities by introducing robust support for Large Multimodal Models (LMMs), particularly for image-editing tasks. It establishes new operational modes for judging and combined inference-judging, alongside a dedicated worker to orchestrate these processes. The changes also involve foundational refactoring of model output handling to correctly manage and store multimodal results, ensuring a more comprehensive and flexible evaluation framework.

Highlights

  • New Evaluation Modes: Introduced 'judge' and 'infer_judge' running modes to support evaluation of judge models and combined inference-judging workflows.
  • JudgeInfer Worker Implementation: Added a dedicated JudgeInfer worker class to manage the inference process for judge models, including configuration, task partitioning, dataset merging, and result post-processing.
  • Large Multimodal Model (LMM) Support: Integrated support for LMMs, exemplified by the new QwenImageEditModel for image editing, and refactored base model classes to accommodate multimodal capabilities.
  • Enhanced Output Handling for LMMs: Developed specialized output handling for LMMs, allowing for the saving of generated images and improved display of base64 image URLs in results.
  • Improved Task State Management: Updated OpenICLEvalTask, OpenICLApiInferTask, and OpenICLInferTask to pass task_state_manager for better context and state handling during dataset building and task execution.


Changelog
  • ais_bench/benchmark/cli/argument_parser.py
    • Added 'judge' and 'infer_judge' options to the running mode choices.
  • ais_bench/benchmark/cli/config_manager.py
    • Removed an unnecessary empty line at the beginning of the file.
    • Adjusted spacing for a method definition.
  • ais_bench/benchmark/cli/workers.py
    • Imported os, shutil, PredictionInvalidException, TMAN_CODES, load_jsonl, and dump_jsonl.
    • Introduced the JudgeInfer class, which handles configuration updates, task partitioning, dataset merging, and result post-processing for judge model inference.
    • Modified the Eval class to include a _cfg_pre_process method and updated its _update_tasks_cfg to handle judge_infer_cfg.
    • Added a _cfg_pre_process method to the AccViz class.
    • Updated the WORK_FLOW dictionary to incorporate JudgeInfer into 'all', 'eval', 'judge', and 'infer_judge' modes.
    • Changed WorkflowExecutor.execute to use a deep copy of the configuration for each worker to prevent unintended modifications (see the sketch after this changelog).
  • ais_bench/benchmark/models/local_models/base.py
    • Renamed the abstract method _generate to generate.
    • Removed the generate_from_template method.
    • Introduced BaseLMModel as a new base class for language models, defining a generate method.
  • ais_bench/benchmark/models/local_models/qwen_image_edit_mindie_sd.py
    • Added a new file implementing the QwenImageEditModel wrapper for Qwen-Image-Edit-2509, supporting image editing tasks.
    • Included custom torch.library decorators to address compatibility issues with diffusers 0.35.1 and torch 2.1.
    • Implemented model loading, device configuration, quantization, and inference logic for image generation.
  • ais_bench/benchmark/models/output.py
    • Added imports for os, Image, and Union.
    • Introduced the LMMOutput class to manage multimodal outputs, including methods for saving images and handling text content.
  • ais_bench/benchmark/openicl/icl_inferencer/icl_lmm_gen_inferencer.py
    • Added a new file implementing LMMGenInferencer, a specialized inferencer for LMMs that uses LMMOutput and LMMGenInferencerOutputHandler.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/base_handler.py
    • Added BASE64_MAX_DISPLAY_LEN constant for truncating long base64 strings.
    • Modified get_prediction_result and get_result methods to accept a data_abbr argument.
    • Adjusted run_cache_consumer to correctly pass arguments to get_result.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/bfcl_v3_output_handler.py
    • Modified get_prediction_result to accept a data_abbr argument.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/gen_inferencer_output_handler.py
    • Imported BASE64_MAX_DISPLAY_LEN.
    • Modified get_prediction_result to accept a data_abbr argument and added logic to truncate long base64 image URLs for display purposes.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/lmm_gen_inferencer_output_handler.py
    • Added a new file implementing LMMGenInferencerOutputHandler, which handles LMM outputs, including saving generated images and truncating base64 URLs.
  • ais_bench/benchmark/openicl/icl_inferencer/output_handler/ppl_inferencer_output_handler.py
    • Modified get_prediction_result to accept a data_abbr argument.
  • ais_bench/benchmark/openicl/icl_prompt_template/icl_prompt_template_mm.py
    • Applied a minor formatting fix by removing an empty line.
    • Added a newline after template = template.format_mm(**entry) for consistent formatting.
  • ais_bench/benchmark/tasks/openicl_api_infer.py
    • Modified _get_data_list to pass task_state_manager to build_dataset_from_cfg.
    • Set task_state_manager as an instance attribute within the run method.
  • ais_bench/benchmark/tasks/openicl_eval.py
    • Modified the run method to accept task_state_manager and set it as an instance attribute.
    • Updated _score to pass task_state_manager to build_dataset_from_cfg.
    • Modified parse_args to pass task_state_manager to evaluator.run().
  • ais_bench/benchmark/tasks/openicl_infer.py
    • Modified _inference to pass task_state_manager to build_dataset_from_cfg.
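
A minimal sketch of that per-worker deep-copy pattern referenced in the WorkflowExecutor item above (the executor's real signature may differ; all names here are illustrative):

import copy

class WorkflowExecutor:
    def __init__(self, workers):
        self.workers = workers  # e.g. [Infer, JudgeInfer, Eval]

    def execute(self, cfg):
        for worker_cls in self.workers:
            # Each worker receives its own copy, so in-place mutations made
            # by one worker cannot leak into the next worker's configuration.
            worker_cls(copy.deepcopy(cfg)).run()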

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces significant new functionality, including a "judge" evaluation workflow and support for a multimodal image editing model. The changes are extensive, adding new workers, handlers, and model wrappers. While the core logic appears sound, I've identified several areas for improvement regarding code duplication, robustness, and maintainability. My review includes a critical issue with a hardcoded file path, several high-severity bugs related to configuration and file handling, and some medium-severity issues concerning code quality and logging.

Note: Security Review did not run due to the size of the PR.

PromptType = Union[PromptList, str]

# Model inference configuration constants
DEFAULT_MODEL_PATH = "/home/yanhe/models/Qwen-Image-Edit-2509/"
Contributor comment (critical):

The DEFAULT_MODEL_PATH is hardcoded to an absolute user-specific path (/home/yanhe/...). This will cause the code to fail on any other machine. This path should be made configurable, for example, through an environment variable or a configuration file parameter.
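
A minimal sketch of one way to make this configurable (the environment-variable name is illustrative, not part of this PR):

import os

def resolve_model_path(path: str = "") -> str:
    """Prefer an explicit config value; fall back to an environment variable."""
    resolved = path or os.environ.get("AIS_QWEN_IMAGE_EDIT_MODEL_PATH", "")
    if not resolved:
        raise ValueError(
            "Model path must be set via the model config or the "
            "AIS_QWEN_IMAGE_EDIT_MODEL_PATH environment variable."
        )
    return resolved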

#self.device_id = device_kwargs.get('device_id', DEFAULT_DEVICE_ID)
# Declare the environment variable here
self.logger.debug(f"device id from kwargs: {device_kwargs.get('device_id', DEFAULT_DEVICE_ID)}")
os.environ["ASCEND_RT_VISIBLE_DEVICES"] = f"{device_kwargs.get('device_id', DEFAULT_DEVICE_ID)}"
Contributor comment (high):

Modifying environment variables like ASCEND_RT_VISIBLE_DEVICES within a library function is generally considered unsafe, as it can have unintended side effects on other parts of the application, especially in multi-threaded or multi-process environments. It's better to manage device selection using library-specific functions like torch.npu.set_device() if possible.
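
A sketch of that scoped alternative, assuming the Ascend torch_npu extension is installed (default_id stands in for the snippet's DEFAULT_DEVICE_ID):

import torch
import torch_npu  # Ascend extension; provides the torch.npu namespace

def select_npu_device(device_kwargs: dict, default_id: int = 0) -> None:
    """Bind the NPU device for this process without mutating os.environ."""
    # Unlike setting ASCEND_RT_VISIBLE_DEVICES, this does not change
    # device visibility for other threads or child processes.
    torch.npu.set_device(int(device_kwargs.get("device_id", default_id)))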

Comment on lines +239 to +240
os.remove(judge_org_prediction_path)
dump_jsonl(judge_preds, judge_org_prediction_path)
Contributor comment (high):

The current implementation removes the original prediction file before writing the new one. If the dump_jsonl operation fails for any reason (e.g., disk full, permission error), the original data will be lost. It's safer to write to a temporary file first and then atomically rename it to the final destination.

Suggested change
- os.remove(judge_org_prediction_path)
- dump_jsonl(judge_preds, judge_org_prediction_path)
+ tmp_path = f"{judge_org_prediction_path}.tmp"
+ dump_jsonl(judge_preds, tmp_path)
+ shutil.move(tmp_path, judge_org_prediction_path)

class BaseLMModel(BaseModel):
    """Base class for language models."""

    def generate(self, inputs, outputs, **kwargs) -> List[str]:
Contributor comment (high):

The generate method is type-hinted to return List[str], but its implementation in subclasses (e.g., QwenImageEditModel) modifies the outputs object in-place and does not return a value. This mismatch is misleading and can cause issues with static analysis and developer understanding. The return type should be None.

Suggested change
- def generate(self, inputs, outputs, **kwargs) -> List[str]:
+ def generate(self, inputs, outputs, **kwargs) -> None:

Comment on lines +177 to +183
key = (
    task["models"][0]["abbr"]  # same model
    + "_"
    + str(task['datasets'][0][0]['type'])  # same dataset type
    + "_"
    + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
)
Contributor comment (high):

The key for grouping tasks is generated by converting the inferencer dictionary to a string using str(). The string representation of a dictionary is not guaranteed to be consistent across different Python versions or even runs, which could lead to tasks not being merged correctly. A more robust approach would be to create a canonical representation, for example by serializing to a sorted JSON string. You will need to import json for this suggestion to work.

Suggested change
- key = (
-     task["models"][0]["abbr"]  # same model
-     + "_"
-     + str(task['datasets'][0][0]['type'])  # same dataset type
-     + "_"
-     + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
- )
+ key = (
+     task["models"][0]["abbr"]  # same model
+     + "_"
+     + str(task['datasets'][0][0]['type'])  # same dataset type
+     + "_"
+     + json.dumps(task["datasets"][0][0]["infer_cfg"]["inferencer"], sort_keys=True)  # same inferencer with the same args
+ )

Comment on lines +173 to +192
def _merge_datasets(self, tasks):
    # merge datasets with the same model, dataset type and inferencer
    task_groups = defaultdict(list)
    for task in tasks:
        key = (
            task["models"][0]["abbr"]  # same model
            + "_"
            + str(task['datasets'][0][0]['type'])  # same dataset type
            + "_"
            + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
        )
        task_groups[key].append(task)
    new_tasks = []
    for key, task_group in task_groups.items():
        new_task = copy.deepcopy(task_group[0])
        if len(task_group) > 1:
            for t in task_group[1:]:
                new_task["datasets"][0].extend(t["datasets"][0])
        new_tasks.append(new_task)
    return new_tasks
Contributor comment (medium):

This _merge_datasets method is identical to the one in the Infer class. To improve maintainability and reduce redundancy, consider moving this method to the BaseWorker class or a shared utility module.
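
A sketch of the suggested extraction (only the names taken from the diff are real; the rest is illustrative):

import copy
from collections import defaultdict

class BaseWorker:
    def _merge_datasets(self, tasks):
        # Group tasks by (model, dataset type, inferencer); a tuple key
        # groups the same way as the original string concatenation.
        task_groups = defaultdict(list)
        for task in tasks:
            key = (
                task["models"][0]["abbr"],
                str(task["datasets"][0][0]["type"]),
                str(task["datasets"][0][0]["infer_cfg"]["inferencer"]),
            )
            task_groups[key].append(task)
        # Merge each group's dataset lists into a single task.
        new_tasks = []
        for task_group in task_groups.values():
            new_task = copy.deepcopy(task_group[0])
            for t in task_group[1:]:
                new_task["datasets"][0].extend(t["datasets"][0])
            new_tasks.append(new_task)
        return new_tasks

Both Infer and JudgeInfer would then inherit _merge_datasets instead of each defining its own copy.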

Comment on lines +231 to +258
print(f"in _generate")
#self.logger.info(f"输入: {input}")
if isinstance(input, str):
prompts.append(input)
neg_prompts.append("")
elif isinstance(input, list):
# 处理包含图像的输入
for item in input[0]["prompt"]:
if item["type"] == "image_url":
base64_url = item["image_url"]["url"].split(",")[1]
img = Image.open(io.BytesIO(base64.b64decode(base64_url))).convert("RGB")
images.append(img)
elif item["type"] == "text":
prompts.append(item["text"])
neg_prompts.append("")
else:
prompts.append("")
neg_prompts.append("")

# 如果没有图像输入,使用默认图像
if not images:
raise AISBenchRuntimeError(MODEL_CODES.UNKNOWN_ERROR, "QwenImageEditModel requires image input, but can't get image info from input.")

# 执行推理
results = []
for prompt, neg_prompt in zip(prompts, neg_prompts):
# 准备输入参数
print("in _generate loop")
Contributor comment (medium):

There are leftover print statements on lines 231 and 258 that seem to be for debugging. These should be removed or replaced with proper logging before merging.
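
A minimal sketch of the replacement, using the standard logging module (the model class's self.logger would serve the same role):

import logging

logger = logging.getLogger(__name__)

def _generate_step():
    # Instead of print(f"in _generate"), use debug-level logging that can be
    # silenced or routed by the benchmark's logging configuration:
    logger.debug("entering _generate")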

raise AISBenchRuntimeError(MODEL_CODES.UNKNOWN_ERROR, "QwenImageEditModel requires image input, but can't get image info from input.")

# Run inference
results = []
Contributor comment (medium):

The variable results is initialized but never used. It should be removed.

Comment on lines +44 to +58
if (
isinstance(input, list)
and len(input) > 0
and isinstance(input[0], dict)
and isinstance(input[0].get("prompt"), list)
):
for item in input[0]["prompt"]:
if not isinstance(item, dict):
continue
image_url = item.get("image_url")
if not isinstance(image_url, dict):
continue
url = image_url.get("url")
if isinstance(url, str) and len(url) > BASE64_MAX_DISPLAY_LEN:
image_url["url"] = url[:BASE64_MAX_DISPLAY_LEN] + " ..."
Contributor comment (medium):

This logic for truncating long base64 image URLs is also present in LMMGenInferencerOutputHandler. To improve maintainability, this logic should be extracted into a shared utility function.
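
A sketch of such a shared helper, lifted from the logic above (the module location and function name are illustrative; the constant's value here is a placeholder for the BASE64_MAX_DISPLAY_LEN this PR adds to base_handler.py):

from typing import Any

BASE64_MAX_DISPLAY_LEN = 512  # placeholder; reuse the constant from base_handler.py

def truncate_base64_urls(input_: Any, max_len: int = BASE64_MAX_DISPLAY_LEN) -> None:
    """Truncate long base64 image URLs in-place for display purposes."""
    if not (isinstance(input_, list) and input_
            and isinstance(input_[0], dict)
            and isinstance(input_[0].get("prompt"), list)):
        return
    for item in input_[0]["prompt"]:
        if not isinstance(item, dict):
            continue
        image_url = item.get("image_url")
        if not isinstance(image_url, dict):
            continue
        url = image_url.get("url")
        if isinstance(url, str) and len(url) > max_len:
            image_url["url"] = url[:max_len] + " ..."

Both handlers would then call truncate_base64_urls(...) instead of duplicating the loop.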

Comment on lines +80 to +82
import traceback
print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
print(f"[ERROR] Traceback: {traceback.format_exc()}")
Contributor comment (medium):

The exception handler uses print to output error information. It's better to use the class's logger (self.logger) for consistent and configurable logging.

Suggested change
- import traceback
- print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
- print(f"[ERROR] Traceback: {traceback.format_exc()}")
+ import traceback
+ self.logger.error(f"LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
+ self.logger.error(f"Traceback: {traceback.format_exc()}")

PromptType = Union[PromptList, str]

# Model inference configuration constants
DEFAULT_MODEL_PATH = "/home/yanhe/models/Qwen-Image-Edit-2509/"
Collaborator comment:

[review] A default path should not be kept here; the user must specify the path explicitly.
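
A sketch of the requested check (AISBenchRuntimeError and MODEL_CODES come from the project's error utilities, as used elsewhere in this diff; the constructor signature is illustrative):

def __init__(self, path: str = "", **kwargs):
    # No built-in default: the user must supply the model path explicitly.
    if not path:
        raise AISBenchRuntimeError(
            MODEL_CODES.UNKNOWN_ERROR,
            "QwenImageEditModel requires a user-specified model path.",
        )
    self.path = path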


def _load_model(self):
    """加载模型"""
    self.logger.info(f"从 {self.path} 加载模型...")
Collaborator comment:

[review] Use English for code comments consistently.

# 量化配置
if self.quant_desc_path:
    from mindiesd import quantize
    self.logger.info("Quantizing Transformer (单独量化核心组件)...")
Collaborator comment:

[review] Use English for log messages consistently.

prompts = []
neg_prompts = []
print(f"in _generate")
#self.logger.info(f"Input: {input}")
Collaborator comment:

[review] Remove the redundant code and the print debug statements.

torch.npu.synchronize()
end_time = time.time()
infer_time = end_time - start_time
self.logger.info(f"Current image finish generated, cost: {infer_time:.2f} second.")
Collaborator comment:

[review] The progress bar already reports inference time; printing it here conflicts with the progress-bar logging.

    result_data["gold"] = gold
except Exception as e:
    import traceback
    print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
Collaborator comment:

[review] Remove the print calls; use the logger instead, and keep error messages concise and clear.


self.output_handler = LMMGenInferencerOutputHandler(perf_mode=self.perf_mode,
                                                    save_every=self.save_every)
def inference(self, retriever: BaseRetriever, output_json_filepath: Optional[str] = None) -> List:
Collaborator comment:

[review] Keep the code formatting clean; add a blank line here.

@GaoHuaZhang GaoHuaZhang merged commit 8eecf48 into AISBench:master Mar 4, 2026
2 of 3 checks passed