[For merge][part 1] Support Gedit Evaluate #159
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the benchmark's capabilities by introducing robust support for Large Multimodal Models (LMMs), particularly for image-editing tasks. It establishes new operational modes for judging and combined inference-judging, alongside a dedicated worker to orchestrate these processes. The changes also involve foundational refactoring of model output handling to correctly manage and store multimodal results, ensuring a more comprehensive and flexible evaluation framework.
Code Review
This pull request introduces significant new functionality, including a "judge" evaluation workflow and support for a multimodal image editing model. The changes are extensive, adding new workers, handlers, and model wrappers. While the core logic appears sound, I've identified several areas for improvement regarding code duplication, robustness, and maintainability. My review includes a critical issue with a hardcoded file path, several high-severity bugs related to configuration and file handling, and some medium-severity issues concerning code quality and logging.
Note: Security Review did not run due to the size of the PR.
```python
#self.device_id = device_kwargs.get('device_id', DEFAULT_DEVICE_ID)
# Declare the environment variable here
self.logger.debug(f"device id from kwargs: {device_kwargs.get('device_id', DEFAULT_DEVICE_ID)}")
os.environ["ASCEND_RT_VISIBLE_DEVICES"] = f"{device_kwargs.get('device_id', DEFAULT_DEVICE_ID)}"
```
Modifying environment variables like ASCEND_RT_VISIBLE_DEVICES within a library function is generally considered unsafe, as it can have unintended side effects on other parts of the application, especially in multi-threaded or multi-process environments. It's better to manage device selection using library-specific functions like torch.npu.set_device() if possible.
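As a rough sketch (assuming the Ascend torch_npu adapter is installed, which registers the torch.npu namespace), device selection can be scoped to the current process without mutating the environment:

```python
import torch
import torch_npu  # Ascend adapter; registers the torch.npu namespace (assumed available)

# Bind this process to the requested NPU instead of exporting
# ASCEND_RT_VISIBLE_DEVICES, so other threads and processes are unaffected.
# `device_kwargs` and DEFAULT_DEVICE_ID are names taken from the PR's code.
device_id = device_kwargs.get('device_id', DEFAULT_DEVICE_ID)
torch.npu.set_device(f"npu:{device_id}")
```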
```python
os.remove(judge_org_prediction_path)
dump_jsonl(judge_preds, judge_org_prediction_path)
```
The current implementation removes the original prediction file before writing the new one. If the dump_jsonl operation fails for any reason (e.g., disk full, permission error), the original data will be lost. It's safer to write to a temporary file first and then atomically rename it to the final destination.
Suggested change:

```diff
- os.remove(judge_org_prediction_path)
- dump_jsonl(judge_preds, judge_org_prediction_path)
+ tmp_path = f"{judge_org_prediction_path}.tmp"
+ dump_jsonl(judge_preds, tmp_path)
+ shutil.move(tmp_path, judge_org_prediction_path)
```
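Note that shutil must be imported for this suggestion to work. shutil.move also handles the case where the temporary file and the destination live on different filesystems, where a bare os.rename would fail.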
```python
class BaseLMModel(BaseModel):
    """Base class for language models.
    """
    def generate(self, inputs, outputs, **kwargs) -> List[str]:
```
The generate method is type-hinted to return List[str], but its implementation in subclasses (e.g., QwenImageEditModel) modifies the outputs object in-place and does not return a value. This mismatch is misleading and can cause issues with static analysis and developer understanding. The return type should be None.
Suggested change:

```diff
- def generate(self, inputs, outputs, **kwargs) -> List[str]:
+ def generate(self, inputs, outputs, **kwargs) -> None:
```
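With the corrected annotation, static analyzers can catch accidental use of the nonexistent return value; a hypothetical misuse for illustration:

```python
# Hypothetical caller: with `-> None`, a type checker such as mypy flags this
# assignment, since generate() mutates `outputs` in place and returns nothing.
preds = model.generate(inputs, outputs)  # error: "generate" does not return a value
```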
```python
key = (
    task["models"][0]["abbr"]  # same model
    + "_"
    + str(task['datasets'][0][0]['type'])  # same dataset type
    + "_"
    + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
)
```
The key for grouping tasks is generated by converting the inferencer dictionary to a string using str(). The string representation of a dictionary depends on key insertion order, so two logically identical configurations can produce different keys, which could lead to tasks not being merged correctly. A more robust approach is to build a canonical representation, for example by serializing to a sorted JSON string. You will need to import json for this suggestion to work.
Suggested change:

```diff
  key = (
      task["models"][0]["abbr"]  # same model
      + "_"
      + str(task['datasets'][0][0]['type'])  # same dataset type
      + "_"
-     + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
+     + json.dumps(task["datasets"][0][0]["infer_cfg"]["inferencer"], sort_keys=True)  # same inferencer with the same args
  )
```
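A quick illustration of why str() is a fragile grouping key (the config values here are hypothetical):

```python
import json

# Two logically equal inferencer configs built in different key orders
a = {"type": "GenInferencer", "max_out_len": 512}
b = {"max_out_len": 512, "type": "GenInferencer"}

assert a == b                    # equal as dicts
assert str(a) != str(b)          # str() reflects insertion order, so the keys differ
assert json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True)  # canonical form
```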
```python
def _merge_datasets(self, tasks):
    # merge datasets with the same model, dataset type and inferencer
    task_groups = defaultdict(list)
    for task in tasks:
        key = (
            task["models"][0]["abbr"]  # same model
            + "_"
            + str(task['datasets'][0][0]['type'])  # same dataset type
            + "_"
            + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
        )
        task_groups[key].append(task)
    new_tasks = []
    for key, task_group in task_groups.items():
        new_task = copy.deepcopy(task_group[0])
        if len(task_group) > 1:
            for t in task_group[1:]:
                new_task["datasets"][0].extend(t["datasets"][0])
        new_tasks.append(new_task)
    return new_tasks
```
| print(f"in _generate") | ||
| #self.logger.info(f"输入: {input}") | ||
| if isinstance(input, str): | ||
| prompts.append(input) | ||
| neg_prompts.append("") | ||
| elif isinstance(input, list): | ||
| # 处理包含图像的输入 | ||
| for item in input[0]["prompt"]: | ||
| if item["type"] == "image_url": | ||
| base64_url = item["image_url"]["url"].split(",")[1] | ||
| img = Image.open(io.BytesIO(base64.b64decode(base64_url))).convert("RGB") | ||
| images.append(img) | ||
| elif item["type"] == "text": | ||
| prompts.append(item["text"]) | ||
| neg_prompts.append("") | ||
| else: | ||
| prompts.append("") | ||
| neg_prompts.append("") | ||
|
|
||
| # 如果没有图像输入,使用默认图像 | ||
| if not images: | ||
| raise AISBenchRuntimeError(MODEL_CODES.UNKNOWN_ERROR, "QwenImageEditModel requires image input, but can't get image info from input.") | ||
|
|
||
| # 执行推理 | ||
| results = [] | ||
| for prompt, neg_prompt in zip(prompts, neg_prompts): | ||
| # 准备输入参数 | ||
| print("in _generate loop") |
```python
if (
    isinstance(input, list)
    and len(input) > 0
    and isinstance(input[0], dict)
    and isinstance(input[0].get("prompt"), list)
):
    for item in input[0]["prompt"]:
        if not isinstance(item, dict):
            continue
        image_url = item.get("image_url")
        if not isinstance(image_url, dict):
            continue
        url = image_url.get("url")
        if isinstance(url, str) and len(url) > BASE64_MAX_DISPLAY_LEN:
            image_url["url"] = url[:BASE64_MAX_DISPLAY_LEN] + " ..."
```
```python
import traceback
print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
print(f"[ERROR] Traceback: {traceback.format_exc()}")
```
The exception handler uses print to output error information. It's better to use the class's logger (self.logger) for consistent and configurable logging.
Suggested change:

```diff
  import traceback
- print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
- print(f"[ERROR] Traceback: {traceback.format_exc()}")
+ self.logger.error(f"LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}")
+ self.logger.error(f"Traceback: {traceback.format_exc()}")
```
```python
PromptType = Union[PromptList, str]

# Constants for model inference configuration
DEFAULT_MODEL_PATH = "/home/yanhe/models/Qwen-Image-Edit-2509/"
```
[review] A default path should not be kept here; the user must specify the model path explicitly.
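A minimal sketch of the requested change, assuming the wrapper's constructor receives the model path as a path keyword (the exact signature is an assumption):

```python
# Fail fast when no model path is supplied, instead of silently falling back
# to a developer-specific default such as "/home/yanhe/models/...".
def __init__(self, path: str = "", **kwargs):
    if not path:
        raise ValueError("QwenImageEditModel requires an explicit model 'path'; no default is provided.")
    self.path = path
```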
```python
def _load_model(self):
    """Load the model."""
    self.logger.info(f"Loading model from {self.path}...")
```
```python
# Quantization configuration
if self.quant_desc_path:
    from mindiesd import quantize
    self.logger.info("Quantizing Transformer (quantizing the core component separately)...")
```
```python
prompts = []
neg_prompts = []
print(f"in _generate")
#self.logger.info(f"Input: {input}")
```
[review] Remove the redundant code and the print debug statements.
```python
torch.npu.synchronize()
end_time = time.time()
infer_time = end_time - start_time
self.logger.info(f"Current image finish generated, cost: {infer_time:.2f} second.")
```
[review] The progress bar already reports inference time; logging it here will conflict with the progress bar output.
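One way to keep the measurement without polluting the progress bar (a sketch, assuming self.logger is a standard logging.Logger whose INFO stream feeds the console):

```python
# Demote the per-image timing to DEBUG so it stays out of the INFO-level
# console output that the progress bar writes to.
self.logger.debug(f"Image generated in {infer_time:.2f}s")
```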
| result_data["gold"] = gold | ||
| except Exception as e: | ||
| import traceback | ||
| print(f"[ERROR] LMMGenInferencerOutputHandler.get_prediction_result failed: {type(e).__name__}: {e}") |
[review] Remove the print calls; use the logger instead, and keep the error message concise and clear.
```python
self.output_handler = LMMGenInferencerOutputHandler(perf_mode=self.perf_mode,
                                                    save_every=self.save_every)

def inference(self, retriever: BaseRetriever, output_json_filepath: Optional[str] = None) -> List:
```
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
PR Type
Related Issue
Fixes #(issue ID) / Relates to #(issue ID)
🔍 Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.
📝 Modification
Please briefly describe what modification is made in this PR.
📐 Associated Test Results
Please provide links to the related test results, such as CI pipelines, test reports, etc.
Does the modification introduce changes that break the backward compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
If the modification introduces performance degradation, please describe the impact of the performance degradation and the expected performance improvement.
🌟 Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
✅ Checklist
Before PR:
After PR:
👥 Collaboration Info
🌟 Useful CI Commands
/gemini review
/gemini summary
/gemini help
/readthedocs build