[For merge][part 2] Support Gedit Evaluate #161
Conversation
Code Review
This pull request introduces support for evaluating the G-Edit and AIME2025 datasets, including new configurations, dataset loaders, and LLM-based evaluation logic. The changes are extensive, adding base classes for judge datasets and utilities for handling multimodal data and efficient file I/O. A critical security concern has been identified: a path traversal vulnerability in the image loading logic for multimodal judge datasets, which could allow arbitrary file reads. Additionally, potential prompt injection instances were found where untrusted model outputs or dataset instructions are directly embedded into prompts without proper sanitization. Beyond security, areas for improvement include fixing contradictory instructions in prompts, addressing potential runtime errors, optimizing dataset creation for better performance, and cleaning up code style.
```python
image_path = os.path.join(base_path, pred_item.get('prediction', ''))
if image_path and os.path.exists(image_path):
    try:
        # Load the image
        with Image.open(image_path) as img:
```
The _load_from_predictions method constructs an image path by joining a base directory with a prediction string obtained from a model's output. If the model's output is manipulated (e.g., via prompt injection), an attacker can specify a path like ../../../../etc/passwd. While Image.open() might fail on non-image files, this still allows for file existence probing and potential arbitrary file read of any image-compatible files on the system. The content of the file is then base64-encoded and potentially exposed to the judge model.
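One common mitigation is to resolve the joined path and reject anything that escapes the configured base directory. The sketch below illustrates this under the assumption that predictions must always resolve inside `base_path`; the helper name and exact behavior are illustrative, not the PR's actual code.

```python
import os


def safe_join(base_dir: str, untrusted_name: str) -> str:
    """Join an untrusted relative path onto base_dir, rejecting escapes.

    Hypothetical helper: os.path.realpath resolves '..', symlinks, etc.,
    so the resolved candidate must still lie inside the resolved base.
    """
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, untrusted_name))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError(f"path escapes base directory: {untrusted_name!r}")
    return candidate
```

With this guard, a prediction like `../../../../etc/passwd` raises before any file I/O happens, instead of being probed or opened.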
```python
example['content'] = AIS_IMAGE_START + image_url + AIS_CONTENT_TAG \
    + AIS_TEXT_START + example['instruction'] + AIS_CONTENT_TAG
```
A prompt injection vulnerability exists here: the dataset instruction is concatenated directly into the content field using special tags such as AIS_CONTENT_TAG. If the instruction itself contains these tags (e.g., <AIS_CONTENT_TAG>, <AIS_IMAGE_START>), it can break the prompt structure and inject arbitrary segments (text or images) into the final prompt processed by the format_mm method, enabling prompt injection attacks via the dataset content.

Separately, creating a new Dataset object for each example and merging them with concatenate_datasets is inefficient; consider accumulating the examples in a list of dictionaries and building a single Dataset at the end with Dataset.from_list() for better performance.
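One simple defense against the tag-injection issue is to neutralize the structural markers inside untrusted dataset text before concatenation. The sketch below is hypothetical: the tag constants mirror the AIS_* markers named in this review, but their actual values in the codebase may differ.

```python
# Hypothetical: the real AIS_* constants in the codebase may hold
# different marker strings.
AIS_TAGS = ("AIS_IMAGE_START", "AIS_TEXT_START", "AIS_CONTENT_TAG")


def sanitize_instruction(text: str) -> str:
    """Neutralize prompt-structure tags found inside untrusted text.

    The escaped form stays human-readable for the judge model, but the
    prompt parser no longer treats it as a structural tag.
    """
    for tag in AIS_TAGS:
        text = text.replace(tag, f"[escaped:{tag}]")
    return text
```

Applying this to `example['instruction']` before building `example['content']` keeps the prompt structure intact even for adversarial dataset rows.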
```
<Original Question Begin>: \n{question}\n<Original Question End>\n\n
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{model_answer}\n<Predicted End>\n\n
```
The model_answer is directly embedded into the GRADER_TEMPLATE. Since model_answer is the output of the model being evaluated, it is untrusted. A malicious or manipulated model could output content that breaks out of the intended block (e.g., by including <Predicted End>) and injects new instructions to the judge model, potentially biasing the evaluation results.
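A lightweight hardening step is to defang the template's delimiter strings inside the untrusted answer before substituting it into the judge prompt. This is a hypothetical sketch: the delimiter list follows the template fragment quoted above, and the real GRADER_TEMPLATE may use different markers.

```python
# Hypothetical delimiter list, taken from the template fragment under review.
_DELIMITERS = (
    "<Original Question Begin>", "<Original Question End>",
    "<Gold Target Begin>", "<Gold Target End>",
    "<Predicted Answer Begin>", "<Predicted End>",
)


def escape_answer(model_answer: str) -> str:
    """Defang template delimiters in an untrusted model answer so it
    cannot close its own block and inject instructions to the judge."""
    for delim in _DELIMITERS:
        model_answer = model_answer.replace(delim, delim.replace("<", "&lt;"))
    return model_answer
```

The escaped text remains visible to the judge, but a payload such as `<Predicted End>` can no longer terminate the predicted-answer block early.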
```
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
```
The instruction on line 48, "Simply reply with either CORRECT, INCORRECT", contradicts the instructions on lines 46 and 55, which ask for "A" or "B". This inconsistency could confuse the LLM judge. Since the post-processor get_a_or_b expects 'A' or 'B', these lines should be removed for consistency.
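For reference, a post-processor in the spirit of the get_a_or_b mentioned above might look like the following minimal sketch; the PR's actual implementation may differ.

```python
import re


def get_a_or_b(judge_output):
    """Extract a standalone 'A' or 'B' verdict from the judge's reply.

    Hypothetical sketch of the post-processor named in the review.
    Returns None when no bare A/B token is found, e.g. when the judge
    answers 'CORRECT' because of the contradictory instruction.
    """
    match = re.search(r"\b([AB])\b", judge_output)
    return match.group(1) if match else None
```

This makes the failure mode concrete: a judge that obeys the "CORRECT/INCORRECT" line produces output that the A/B extractor cannot parse.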
```
A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 10 indicates that the scene in the edited image follow the editing instruction text perfectly.)
A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimal edited yet effective version of original.)
```
There are a couple of grammatical errors in the prompt which could be corrected for clarity:
- On line 24, "follow" should be "follows".
- On line 25, "version of original" should be "version of the original".
Suggested change:

```
A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 10 indicates that the scene in the edited image follows the editing instruction text perfectly.)
A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimal edited yet effective version of the original.)
```
```
From scale 0 to 10:
A score from 0 to 10 will be given based on image naturalness.
(
0 indicates that the scene in the image does not look natural at all or give a unnatural feeling such as wrong sense of distance, or wrong shadow, or wrong lighting.
```
There is a grammatical error here: "give a unnatural" should be "gives an unnatural".
Suggested change:

```
0 indicates that the scene in the image does not look natural at all or gives an unnatural feeling such as wrong sense of distance, or wrong shadow, or wrong lighting.
```
```python
if len(dataset_batches) == 1:
    return dataset_batches[0]
else:
    from datasets import concatenate_datasets
```
```python
from PIL import Image
from tqdm import tqdm
```

```python
from ais_bench.benchmark.datasets.needlebench_v2 import origin
```
```python
    Convert PIL Image to base64 string
    """
    if not isinstance(image, Image.Image):
        raise ValueError("Input must be a PIL Image object")
```
[review] Use the aisbench error codes here (rather than a bare ValueError).
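For context, a complete version of such a helper might look like the sketch below. It assumes Pillow; the signature and error handling are illustrative, and per the review note the bare ValueError would be replaced with the project's aisbench error codes.

```python
import base64
import io

from PIL import Image


def image_to_base64(image, fmt="PNG"):
    """Encode a PIL Image as a base64 string.

    Minimal sketch: serialize to an in-memory buffer in the given
    format, then base64-encode the raw bytes. The real helper in the
    PR may differ (e.g., it should raise an aisbench error code).
    """
    if not isinstance(image, Image.Image):
        raise ValueError("Input must be a PIL Image object")
    buf = io.BytesIO()
    image.save(buf, format=fmt)
    return base64.b64encode(buf.getvalue()).decode("ascii")
```

Decoding the returned string yields a valid image file of the requested format, which is what the judge-model payload builder needs.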
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
PR Type
Related Issue
Fixes #(issue ID) / Relates to #(issue ID)
🔍 Motivation
Please describe the motivation of this PR and the goal you want to achieve through it.
📝 Modification
Please briefly describe what modification is made in this PR.
📐 Associated Test Results
Please provide links to the related test results, such as CI pipelines, test reports, etc.
Does the modification introduce changes that break the backward compatibility of the downstream repositories? If so, please describe how it breaks compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
If the modification introduces performance degradation, please describe the impact of the degradation and the expected performance improvement.
🌟 Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
✅ Checklist
Before PR:
After PR:
👥 Collaboration Info
🌟 Useful CI Commands
/gemini review
/gemini summary
/gemini help
/readthedocs build