
[UT] Add new UT for Gedit feature #163

Open
SJTUyh wants to merge 9 commits into AISBench:master from SJTUyh:gedit_new_ut

Conversation

SJTUyh (Collaborator) commented Mar 5, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation for this PR and the goal you want to achieve through it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to the related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does the modification break backward compatibility for downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed by all committers on this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

Command             Description
/gemini review      Performs a Gemini code review of the current pull request in its current state.
/gemini summary     Provides a Gemini summary of the current pull request in its current state.
/gemini help        Displays a list of available Gemini commands.
/readthedocs build  Triggers a Read the Docs documentation build for the current pull request in its current state.

gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request significantly expands the testing infrastructure for the benchmark system, particularly focusing on new 'Judge Infer' workflows and the handling of Large Multimodal Model (LMM) outputs. It introduces new worker classes and base dataset functionalities to support judging tasks, alongside dedicated test suites for GEdit, LLM, and LMM-specific datasets and evaluators. The changes also refine the output model to better accommodate complex LMM generation results, ensuring robust processing and storage of diverse content types, including images.

Highlights

  • New Judge Infer Worker: Introduced a new JudgeInfer worker in the CLI, expanding the benchmark's capabilities to include judge-based inference workflows. This worker is integrated into the WORK_FLOW dictionary, enabling its use in 'all', 'judge', and 'infer_judge' scenarios.
  • Enhanced Base Dataset for Judging: Added BaseJDGDataset to datasets/base.py, providing a foundational class for datasets used in judging tasks. This includes new methods for handling task state management, abbreviation, and robust error checking for k and n parameters.
  • GEdit Feature Unit Tests: New unit tests were added for the GEdit feature, covering GEditDataset, GEditSCJDGDataset, GEditPQJDGDataset, and GEditEvaluator. These tests validate the loading, splitting, and evaluation logic specific to GEdit datasets.
  • LLM and LMM Judge Utility Tests: Comprehensive unit tests were introduced for LLM and LMM Judge utilities, including get_a_or_b, LLMJudgeDataset, LLMJudgeCorrectEvaluator, get_lmm_point_list, LMMImgJDGDataset, ImgSCJDGDataset, ImgPQJDGDataset, and LMMJudgeImageEditEvaluator. These tests ensure the correct parsing of judge responses, loading of predictions (including image handling), and evaluation of results for both text and image-based judging.
  • LMM Output Handling and Inferencer Tests: New FunctionCallOutput and LMMOutput classes were added to models/output.py to specifically handle outputs from Large Multimodal Models (LMMs), including image content. Corresponding unit tests for LMMGenInferencerOutputHandler and LMMGenInferencer were added, validating the initialization, prediction result processing (text and image), and batch inference logic for LMMs.


Changelog
  • tests/UT/cli/test_workers.py
    • Imported mock_open for enhanced mocking capabilities.
    • Imported the new JudgeInfer class.
    • Extended the WORK_FLOW dictionary to include 'judge' and 'infer_judge' keys.
    • Added JudgeInfer to the 'all', 'judge', and 'infer_judge' lists within WORK_FLOW.
    • Introduced TestJudgeInfer class with multiple test methods covering configuration updates, preprocessing, dataset merging, and work execution for the JudgeInfer worker.
  • tests/UT/datasets/test_base.py
    • Imported pytest for testing utilities.
    • Imported the new BaseJDGDataset class.
    • Corrected minor whitespace inconsistencies.
    • Added new test cases to DummyDataset for task_state_manager integration, abbr parameter handling, validation of k and n parameters, and processing of DatasetDict types.
    • Added TestBaseJDGDataset class with tests for initializing original dataset instances, processing single items, modifying dataset items, and loading with predictions.
  • tests/UT/datasets/test_g_edit.py
    • Added a new test file to cover GEdit related datasets and evaluators.
    • Included imports for necessary modules such as sys, os, pytest, unittest.mock components, io.BytesIO, PIL.Image, base64, datasets.Dataset, and specific GEdit classes.
    • Defined TestGEditEvaluator with tests for the score method, including scenarios with empty predictions.
    • Implemented TestGEditDataset with tests for basic dataset loading, loading with split configurations, and loading using raw image data.
    • Created TestGEditSCJDGDataset and TestGEditPQJDGDataset classes, each testing their respective _get_dataset_class method.
  • tests/UT/datasets/utils/test_llm_judge.py
    • Added a new test file for LLM Judge utility functions.
    • Included imports for sys, os, pytest, unittest.mock components, tempfile, json, and specific llm_judge functions and classes.
    • Defined TestGetAOrB with tests for extracting 'A' or 'B' from strings, covering various cases including no match and empty strings.
    • Implemented TestLLMJudgeDataset with tests for the _load_from_predictions method, addressing file existence and successful data loading.
    • Created TestLLMJudgeCorrectEvaluator with tests for the score method, covering correct, incorrect, mixed, length mismatch, and empty prediction/reference scenarios.
  • tests/UT/datasets/utils/test_lmm_judge.py
    • Added a new test file for LMM Judge utility functions.
    • Included imports for sys, os, pytest, unittest.mock components, tempfile, json, base64, io.BytesIO, PIL.Image, and specific lmm_judge functions and classes.
    • Defined TestGetLmmPointList with tests for extracting point lists from strings, including various formats and edge cases.
    • Implemented TestLMMImgJDGDataset with tests for the _load_from_predictions method, covering file non-existence, successful loading (including base64 image conversion), and scenarios with nonexistent image files.
    • Created TestImgSCJDGDataset and TestImgPQJDGDataset classes, each testing their _modify_dataset_item method for image editing content modification.
    • Defined TestLMMJudgeImageEditEvaluator with tests for initialization (default and PQ metric), the score method (success, PQ metric, length mismatch, non-string predictions, invalid format, empty predictions), and the structure of detailed results.
  • tests/UT/models/test_output.py
    • Imported tempfile, os, and PIL.Image for image-related testing.
    • Imported new FunctionCallOutput and LMMOutput classes.
    • Refined comments and assertions in existing tests for Output and RequestOutput classes to improve clarity and robustness.
    • Added TestFunctionCallOutput class with tests for initialization, performance mode, and updating extra details data from text responses (including edge cases like empty or missing choices).
    • Added TestLMMOutput class with tests for initialization, performance mode, handling text and image content, and get_prediction for various content types (single text, single image, multiple items).
    • Included tests for the default implementations of update_extra_perf_data_from_stream_response, update_extra_perf_data_from_text_response, update_extra_details_data_from_stream_response, and update_extra_details_data_from_text_response in the Output class.
  • tests/UT/openicl/icl_inferencer/output_handler/test_lmm_gen_inferencer_output_handler.py
    • Added a new test file for LMMGenInferencerOutputHandler.
    • Included imports for sys, os, pytest, unittest.mock components, tempfile, uuid, pathlib.Path, LMMGenInferencerOutputHandler, LMMOutput, and AISBenchRuntimeError.
    • Defined TestLMMGenInferencerOutputHandler with tests for initialization, setting output paths, and get_prediction_result for various output types (string, LMMOutput, image output), directory creation, handling long base64 inputs, dictionary inputs, empty inputs, cases without gold data, failed outputs, and non-dictionary/non-string prompt items.
  • tests/UT/openicl/icl_inferencer/test_icl_lmm_gen_inferencer.py
    • Added a new test file for LMMGenInferencer.
    • Included imports for sys, os, pytest, unittest.mock components, uuid, and LMMOutput.
    • Defined TestLMMGenInferencer with tests for initialization (default and custom parameters), the inference method, and the batch_inference method (including scenarios with and without gold data, and UUID generation).
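Several of the changelog entries above mention base64 image handling (e.g. the _load_from_predictions tests converting base64-encoded images back into image objects). The round-trip those tests rely on can be sketched in isolation with the standard library; the fake byte payload below is a stand-in for real image data, and PIL usage is only mentioned in a comment since the repository's exact code is not shown here.

```python
import base64
from io import BytesIO

# Stand-in for real image bytes; the tests use PIL images, but the
# base64 round-trip is identical for any binary payload.
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16  # fake PNG header + padding

# Encode: binary image -> base64 text, as stored in prediction JSON files.
encoded = base64.b64encode(image_bytes).decode("ascii")

# Decode: base64 text -> binary image, ready for e.g. PIL.Image.open(BytesIO(...)).
decoded = base64.b64decode(encoded)
buffer = BytesIO(decoded)

assert buffer.getvalue() == image_bytes  # lossless round-trip
```

Because base64 is lossless, tests can assert byte-for-byte equality between the original image and the decoded result, which makes this a convenient property to verify in unit tests.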
Activity
  • The pull request is titled "[UT] Add new UT for Gedit feature", indicating the primary purpose is to add new unit tests.
  • The PR type is explicitly marked as "Test-Cases", confirming the focus on testing.
  • The description includes standard sections for motivation, modification, test results, BC-breaking changes, performance degradation, use cases, and a checklist, which are currently empty. This suggests the PR is either an initial draft or awaiting further details from the author, SJTUyh.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist (bot) left a comment

Code Review

This pull request adds a comprehensive set of unit tests for the new Gedit and JudgeInfer features. The tests cover various scenarios, including different model types, configurations, and edge cases, which significantly improves the test coverage and reliability of these new components. The use of mocking is appropriate for isolating units under test.

I've identified a few minor areas for improvement in the new tests in tests/UT/datasets/utils/test_lmm_judge.py to enhance consistency and robustness. Specifically, I've suggested mocking the logger attribute in tests where the class __init__ is bypassed to prevent potential future test failures.

Overall, this is a great contribution to improving the project's test suite.

Comment on lines +59 to +60:

    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = None
Severity: medium

For consistency and future-proofing, it's good practice to mock the logger attribute here, similar to how it's done in other tests in this file (e.g., TestImgSCJDGDataset.test_modify_dataset_item). Although _load_from_predictions doesn't currently use self.logger, the base class defines it, and future changes might introduce logging.

Suggested change:

Before:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = None

After:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.logger = MagicMock()
    ds.task_state_manager = None

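The pattern the reviewer is referring to — constructing an instance with `__new__` to skip an expensive or side-effecting `__init__`, then attaching mocks for exactly the attributes the method under test touches — can be illustrated in isolation. Widget below is a hypothetical class invented for this sketch, not code from the repository.

```python
from unittest.mock import MagicMock

class Widget:
    def __init__(self):
        # Stands in for expensive setup we want to skip in tests.
        raise RuntimeError("heavy initialization")

    def describe(self):
        self.logger.info("describing widget")
        return "state=%s" % self.state

# Bypass __init__ entirely, then attach only what describe() touches.
w = Widget.__new__(Widget)
w.logger = MagicMock()   # mocked explicitly, as the review suggests
w.state = "ready"

print(w.describe())      # -> state=ready
```

The reviewer's point is that when `__init__` is bypassed this way, no base-class attribute exists unless the test sets it, so mocking `logger` up front keeps the test from breaking if the method under test later starts logging.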
Comment on lines +79 to +81:

    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()
Severity: medium

For consistency and future-proofing, it's good practice to mock the logger attribute here. The base class BaseDataset initializes a logger instance, but since __init__ is being bypassed, it's better to mock it explicitly. This makes the test more robust to future changes.

Suggested change:

Before:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()

After:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.logger = MagicMock()
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()

Comment on lines +109 to +111:

    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()
Severity: medium

For consistency with other tests and to make the test more robust, please mock the logger attribute here. Since __init__ is bypassed, attributes from the base class like logger are not initialized. Explicitly mocking it prevents potential issues if logging is added to the tested method in the future.

Suggested change:

Before:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()

After:
    ds = LMMImgJDGDataset.__new__(LMMImgJDGDataset)
    ds.logger = MagicMock()
    ds.task_state_manager = MagicMock()
    ds.update_task_state = MagicMock()
