
[UT] [part 1] Add ut for gedit evaluate #165

Merged
GaoHuaZhang merged 1 commit into AISBench:master from SJTUyh:gedit_ut_0 on Mar 5, 2026

Conversation

@SJTUyh (Collaborator) commented Mar 5, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry; just open the pull request and ask the maintainers for help.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation behind this PR and the goal you want to achieve with it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to related test results, such as CI pipelines or test reports.

Related scripts coverage: [4 coverage report screenshots]

Coverage sum: [1 coverage report screenshot]

⚠️ BC-breaking (Optional)

Does this modification break backward compatibility for downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed, and all committers on this PR have signed it.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

  • /gemini review: Performs a code review of the current pull request in its current state by Gemini.
  • /gemini summary: Provides a summary of the current pull request in its current state by Gemini.
  • /gemini help: Displays a list of available Gemini commands.
  • /readthedocs build: Triggers a documentation build for the current pull request in its current state by Read the Docs.

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances test coverage for the benchmark system by introducing unit tests for the JudgeInfer worker and extending tests for base dataset functionality. It also adds entirely new test suites for the image-editing (GEdit) datasets and for both the LLM and LMM judging utilities, ensuring the robustness and correctness of these critical components. The added tests validate everything from configuration handling and data processing to prediction loading and evaluation logic, improving the overall reliability of the benchmark framework.

Highlights

  • New JudgeInfer Worker Tests: Added comprehensive unit tests for the newly introduced JudgeInfer worker, covering its configuration update, preprocessing, task merging, execution, and result post-processing logic.
  • Base Dataset Functionality Tests: Expanded unit tests for BaseDataset and BaseJDGDataset to cover initialization with task_state_manager and abbr, error handling for invalid k and n parameters, and prediction loading mechanisms.
  • GEdit Dataset and Evaluator Tests: Introduced new unit tests for GEditDataset, GEditSCJDGDataset, GEditPQJDGDataset, and GEditEvaluator, ensuring the correct loading, splitting, and scoring of image editing datasets.
  • LLM Judge Utility Tests: Added new unit tests for LLM judge utilities, including get_a_or_b for response parsing, LLMJudgeDataset for prediction loading, and LLMJudgeCorrectEvaluator for scoring LLM judge results.
  • LMM Judge Utility Tests: Implemented new unit tests for LMM judge utilities, covering get_lmm_point_list for extracting scores, LMMImgJDGDataset for handling image predictions (including base64 conversion), and LMMJudgeImageEditEvaluator for scoring image editing tasks with SC and PQ metrics.


Changelog
  • tests/UT/cli/test_workers.py
    • Imported mock_open for enhanced mocking capabilities.
    • Imported JudgeInfer worker to enable testing of its functionalities.
    • Updated the WORK_FLOW dictionary to include new 'judge' and 'infer_judge' keys, integrating the JudgeInfer worker into the system's workflow definitions.
    • Added a new test class TestJudgeInfer with comprehensive tests for update_cfg, _cfg_pre_process, _merge_datasets, do_work, _result_post_process, and _update_tasks_cfg methods of the JudgeInfer worker.
  • tests/UT/datasets/test_base.py
    • Imported pytest for advanced testing features.
    • Imported BaseJDGDataset to enable testing of judge dataset functionalities.
    • Corrected minor whitespace inconsistencies in existing DummyDataset tests.
    • Added new tests for DummyDataset to verify proper initialization with task_state_manager and abbr parameters.
    • Included tests for DummyDataset to ensure ParameterValueError is raised when k is greater than n (or max k in a list is greater than n).
    • Added tests for DummyDatasetDict to verify correct handling of DatasetDict types in repeated_dataset processing.
    • Introduced TestBaseJDGDataset class with tests for _init_org_datasets_instance, _process_single_item, _modify_dataset_item, and load method when processing predictions.
  • tests/UT/datasets/test_g_edit.py
    • Added a new file to house unit tests for GEdit-related components.
    • Included TestGEditEvaluator with tests for the score method, covering basic scoring and handling of empty predictions.
    • Added TestGEditDataset with tests for the load method, verifying basic loading, split handling, and raw image usage.
    • Implemented TestGEditSCJDGDataset and TestGEditPQJDGDataset with tests for the _get_dataset_class method, ensuring they correctly return GEditDataset.
  • tests/UT/datasets/utils/test_llm_judge.py
    • Added a new file for unit tests related to LLM judge utilities.
    • Included TestGetAOrB with tests for the get_a_or_b function, verifying correct extraction of 'A' or 'B' from text.
    • Added TestLLMJudgeDataset with tests for _load_from_predictions, covering scenarios where the prediction file does not exist and successful loading.
    • Implemented TestLLMJudgeCorrectEvaluator with tests for the score method, validating correct, wrong, and mixed predictions, as well as handling length mismatches and empty inputs.
  • tests/UT/datasets/utils/test_lmm_judge.py
    • Added a new file for unit tests related to LMM judge utilities.
    • Included TestGetLmmPointList with tests for the get_lmm_point_list function, verifying correct extraction of list-like strings.
    • Added TestLMMImgJDGDataset with tests for _load_from_predictions, covering file existence, successful loading with image base64 conversion, and handling of nonexistent image files.
    • Implemented TestImgSCJDGDataset and TestImgPQJDGDataset with tests for the _modify_dataset_item method, verifying correct content manipulation for single-choice and pairwise quality judging.
    • Introduced TestLMMJudgeImageEditEvaluator with tests for its initialization (default and PQ metrics) and the score method, including success cases, PQ metric scoring, length mismatches, non-string predictions, invalid prediction formats, empty predictions, and detail structure validation.
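To make the A/B-extraction tests described in the changelog concrete, here is a minimal, self-contained sketch. The body of get_a_or_b below is a stand-in guess (a word-boundary regex), since the repository's actual implementation is not shown here:

```python
import re

def get_a_or_b(text: str) -> str:
    """Stand-in for the repository's get_a_or_b (assumption: it pulls
    the first standalone 'A' or 'B' verdict out of a judge response)."""
    match = re.search(r"\b([AB])\b", text)
    return match.group(1) if match else ""

# Tests in the style the changelog describes:
assert get_a_or_b("The better answer is A.") == "A"
assert get_a_or_b("Verdict: B") == "B"
assert get_a_or_b("no clear verdict") == ""
```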
Activity
  • The pull request is categorized as 'Test-Cases', indicating its primary focus on improving the testing infrastructure.
  • The author, SJTUyh, has provided several images showing coverage reports, suggesting a thorough approach to ensuring the new tests contribute to higher code coverage.

Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub. Gemini can make mistakes, so double-check it and use code with caution.

@SJTUyh changed the title from "[UT] [part 1] Add" to "[UT] [part 1] Add ut for gedit evaluate" on Mar 5, 2026
@gemini-code-assist bot left a comment:

Code Review

This pull request adds a comprehensive suite of unit tests for new and existing components, significantly improving test coverage. The new tests for JudgeInfer, BaseJDGDataset, and various dataset utilities are well-structured. I've identified a few opportunities to enhance the new tests by addressing code duplication and correcting a minor issue with logger mocking. Overall, this is a valuable contribution to the project's stability.

assert result["accuracy"] == 0.0


class TestGEditDataset:

gemini-code-assist bot commented (severity: medium):

The setup logic for creating a GEditDataset instance (ds = GEditDataset.__new__(...) and mocking its attributes) is duplicated across test_load_basic, test_load_with_split, and test_load_use_raw. To improve maintainability and reduce code duplication, consider extracting this common setup into a setup_method for the TestGEditDataset class.
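The reviewer's suggestion can be sketched as follows. This is a minimal illustration with a stand-in class, since GEditDataset and its mocked attributes are assumptions here, not the repository's code:

```python
from unittest.mock import MagicMock

class FakeGEditDataset:
    """Stand-in for GEditDataset so this sketch runs without the repo."""
    def load(self):
        # Pretend `load` just returns the mocked backing data.
        return self.data

class TestGEditDataset:
    def setup_method(self, method):
        # pytest calls this before every test method, so the duplicated
        # __new__-plus-mocked-attributes setup now lives in one place.
        self.ds = FakeGEditDataset.__new__(FakeGEditDataset)
        self.ds.logger = MagicMock()
        self.ds.data = [{"key": "edit_001"}]

    def test_load_basic(self):
        assert self.ds.load() == [{"key": "edit_001"}]
```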

Comment on lines +47 to +54
def test_load_from_predictions_file_not_exists(self):
    """Test the case where the prediction file does not exist."""
    ds = LLMJudgeDataset.__new__(LLMJudgeDataset)
    ds.logger = MagicMock()

    with patch('os.path.exists', return_value=False):
        result = ds._load_from_predictions('/test/nonexistent.jsonl')
        assert result == []

gemini-code-assist bot commented (severity: medium):

The _load_from_predictions method uses a module-level logger, but this test mocks an instance-level logger (ds.logger), which is not used. To correctly test the logging behavior, you should patch the module-level logger ais_bench.benchmark.datasets.utils.llm_judge.logger and assert that its warning method is called.

Suggested change

Current:
    def test_load_from_predictions_file_not_exists(self):
        """Test the case where the prediction file does not exist."""
        ds = LLMJudgeDataset.__new__(LLMJudgeDataset)
        ds.logger = MagicMock()
        with patch('os.path.exists', return_value=False):
            result = ds._load_from_predictions('/test/nonexistent.jsonl')
            assert result == []

Suggested:
    @patch('ais_bench.benchmark.datasets.utils.llm_judge.logger')
    def test_load_from_predictions_file_not_exists(self, mock_logger):
        """Test the case where the prediction file does not exist."""
        ds = LLMJudgeDataset.__new__(LLMJudgeDataset)
        with patch('os.path.exists', return_value=False):
            result = ds._load_from_predictions('/test/nonexistent.jsonl')
            assert result == []
        mock_logger.warning.assert_called_once_with('Prediction file does not exist: /test/nonexistent.jsonl')
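The distinction the reviewer draws (an instance-level mock versus the module-level logger actually used) can be demonstrated with a self-contained stand-in module; judge_mod and load_predictions below are hypothetical, not the repository's code:

```python
import sys
import types
from unittest.mock import MagicMock, patch

# Hypothetical stand-in for a module like
# ais_bench.benchmark.datasets.utils.llm_judge, which (per the review
# comment) logs through a module-level `logger`.
judge_mod = types.ModuleType("judge_mod")
judge_mod.logger = MagicMock()

def load_predictions(path):
    # Looks up the logger on the module at call time, so patching
    # "judge_mod.logger" is what intercepts this call; mocking an
    # instance attribute such as ds.logger would not.
    judge_mod.logger.warning(f"Prediction file does not exist: {path}")
    return []

judge_mod.load_predictions = load_predictions
sys.modules["judge_mod"] = judge_mod  # register so patch() can resolve the target

with patch("judge_mod.logger") as mock_logger:
    result = judge_mod.load_predictions("/tmp/missing.jsonl")

assert result == []
mock_logger.warning.assert_called_once_with(
    "Prediction file does not exist: /tmp/missing.jsonl"
)
```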

assert result == "[10, 20, 30, 40]"


class TestLMMImgJDGDataset:

gemini-code-assist bot commented (severity: medium):

The setup logic for creating an LMMImgJDGDataset instance is duplicated across the tests in this class (test_load_from_predictions_file_not_exists, test_load_from_predictions_success, test_load_from_predictions_with_nonexistent_image). To improve code clarity and reduce duplication, consider extracting this common setup into a setup_method.

Copilot AI (Contributor) left a comment:

Pull request overview

This PR adds unit tests for the GEdit evaluate functionality, covering LMM judge utilities, LLM judge utilities, the GEdit dataset and evaluator classes, the BaseJDGDataset class, and the JudgeInfer worker class. It is part 1 of a test coverage improvement effort for the evaluate pipeline.

Changes:

  • Added new test files for lmm_judge (image editing evaluator, dataset classes, point list extraction) and llm_judge (A/B extraction, dataset loading, correctness evaluation)
  • Added new test file for g_edit (GEdit dataset, evaluator, SC/PQ judge dataset classes) and extended test_base.py with tests for BaseJDGDataset, task_state_manager, parameter validation, and DatasetDict handling
  • Extended test_workers.py with tests for JudgeInfer worker (config updates, preprocessing, dataset merging, result post-processing, task config updates) and WORK_FLOW dict assertions
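For the point-list extraction mentioned above, here is a self-contained sketch; this regex-based body is a stand-in assumption, not the repository's get_lmm_point_list:

```python
import re

def get_lmm_point_list(text: str) -> str:
    """Stand-in for the repository's get_lmm_point_list (assumption:
    it extracts the first bracketed list of numbers from a judge reply)."""
    match = re.search(r"\[[\d\s,.]+\]", text)
    return match.group(0) if match else ""

assert get_lmm_point_list("Scores: [10, 20, 30, 40], overall good") == "[10, 20, 30, 40]"
assert get_lmm_point_list("no scores") == ""
```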

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Summary per file:

  • tests/UT/datasets/utils/test_lmm_judge.py: New tests for get_lmm_point_list, LMMImgJDGDataset, ImgSCJDGDataset, ImgPQJDGDataset, and LMMJudgeImageEditEvaluator
  • tests/UT/datasets/utils/test_llm_judge.py: New tests for get_a_or_b, LLMJudgeDataset, and LLMJudgeCorrectEvaluator
  • tests/UT/datasets/test_g_edit.py: New tests for GEditEvaluator, GEditDataset, GEditSCJDGDataset, and GEditPQJDGDataset
  • tests/UT/datasets/test_base.py: Extended with tests for BaseJDGDataset, task state management, parameter validation, and the DatasetDict path
  • tests/UT/cli/test_workers.py: Added the JudgeInfer import, WORK_FLOW assertions for judge workflows, and a comprehensive TestJudgeInfer class


import os
import pytest
from unittest.mock import patch, MagicMock, call
from unittest.mock import patch, MagicMock, call, mock_open
Copilot AI commented on Mar 5, 2026:

mock_open is imported but never used in this file.

Suggested change

Current:
from unittest.mock import patch, MagicMock, call, mock_open

Suggested:
from unittest.mock import patch, MagicMock, call

Copilot uses AI. Check for mistakes.
@@ -1,9 +1,10 @@
import unittest
from unittest.mock import patch, MagicMock
import pytest
Copilot AI commented on Mar 5, 2026:

pytest is imported but never used in this file.

Suggested change

Current:
import pytest

Suggested:
(remove the line)
import base64

# Add the project root to the Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../../')))
Copilot AI commented on Mar 5, 2026:

The sys.path.insert path traverses one level too many. This file is at tests/UT/datasets/test_g_edit.py (3 directories deep), so '../../../../' goes one level above the project root. It should be '../../../' to reach the project root, consistent with other test files at the same directory depth (e.g., tests/UT/datasets/test_base.py doesn't need this at all, and files at depth 4 like tests/UT/datasets/utils/test_llm_judge.py correctly use '../../../../').

Suggested change

Current:
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../../')))

Suggested:
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
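A depth-independent alternative to hard-coded '../..' chains is to search upward for a known marker directory. find_project_root below is a hypothetical helper (assuming the repository root contains a tests/ directory), shown here on a throwaway directory tree:

```python
import sys
import tempfile
from pathlib import Path

def find_project_root(start: Path, marker: str = "tests") -> Path:
    """Walk upward from `start` until a directory containing `marker` is found."""
    for candidate in (start, *start.parents):
        if (candidate / marker).is_dir():
            return candidate
    raise FileNotFoundError(f"no ancestor of {start} contains {marker!r}")

# Demonstrate on a throwaway tree shaped like root/tests/UT/datasets:
with tempfile.TemporaryDirectory() as tmp:
    deep = Path(tmp) / "tests" / "UT" / "datasets"
    deep.mkdir(parents=True)
    root = find_project_root(deep)
    assert root == Path(tmp)

# In a real test file one might then write:
# sys.path.insert(0, str(find_project_root(Path(__file__).resolve().parent)))
```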

Comment on lines +3 to +7
import pytest
from unittest.mock import patch, MagicMock, mock_open
from io import BytesIO
from PIL import Image
import base64
Copilot AI commented on Mar 5, 2026:

Several imports are unused: pytest, mock_open, BytesIO, and base64 are imported but never used in this file. Remove unused imports to keep the code clean.

Suggested change

Current:
import pytest
from unittest.mock import patch, MagicMock, mock_open
from io import BytesIO
from PIL import Image
import base64

Suggested:
from unittest.mock import patch, MagicMock
from PIL import Image

Comment on lines +4 to +6
from unittest.mock import patch, MagicMock, mock_open
import tempfile
import json
Copilot AI commented on Mar 5, 2026:

Unused imports: mock_open, tempfile, and json are imported but never used in this file.

Suggested change

Current:
from unittest.mock import patch, MagicMock, mock_open
import tempfile
import json

Suggested:
from unittest.mock import patch, MagicMock

Comment on lines +4 to +6
from unittest.mock import patch, MagicMock, mock_open
import tempfile
import json
Copilot AI commented on Mar 5, 2026:

Unused imports: mock_open and json are imported but never used in this file.

Suggested change

Current:
from unittest.mock import patch, MagicMock, mock_open
import tempfile
import json

Suggested:
from unittest.mock import patch, MagicMock
import tempfile

@GaoHuaZhang GaoHuaZhang merged commit 1c52560 into AISBench:master Mar 5, 2026
9 checks passed
