
Conversation

@missflash

🔧 Changes

  • Refactored the latent-space realignment check to be more robust and clearer. 
  • Improved device handling (CPU / GPU / HF_device) for attention masks in the latent-batch generation function. 
  • Overall a small cleanup: one file changed (models.py), with 5 lines added and 3 removed.

🎯 Why this may matter

  • The refactored latent-space realignment check makes the code more defensive: the model no longer crashes or misbehaves if latent_space_realign isn't defined (see the sketch after this list). 
  • The device-handling improvement ensures that attention masks live on the correct device, which reduces errors and unnecessary device transfers across hardware configurations (for example, when part of the model runs on a different device than the rest). 
  • Because only a small portion of the code changed, this is a minor but useful maintenance and stability improvement rather than a new feature or major change.
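
A minimal sketch of the defensive attribute access described above. The class and method names here are placeholders and not the actual models.py code; only the latent_space_realign attribute name comes from this PR description.

```python
import torch


class LatentWrapperSketch:
    """Illustrative stand-in for the real model wrapper; names follow the PR text."""

    def __init__(self, device: str = "cpu"):
        self.device = device
        # latent_space_realign is intentionally left unset to exercise the fallback path.

    def maybe_realign(self, latents: torch.Tensor) -> torch.Tensor:
        # Before: accessing self.latent_space_realign directly raised AttributeError
        # whenever the attribute was never configured.
        # After: getattr falls back to False, so generation proceeds unchanged.
        if getattr(self, "latent_space_realign", False):
            # Placeholder for the actual realignment step.
            latents = latents - latents.mean(dim=-1, keepdim=True)
        return latents


if __name__ == "__main__":
    wrapper = LatentWrapperSketch()
    out = wrapper.maybe_realign(torch.randn(2, 4))
    print(out.shape)  # works even though latent_space_realign was never set
```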

Copilot AI left a comment

Pull request overview

This PR introduces minor but important robustness and device-handling improvements to the model wrapper code. The changes make the latent-space realignment logic more defensive against missing attributes and ensure attention masks are consistently placed on the appropriate device (HF_device when available, otherwise the main device).

  • Added defensive attribute access for latent_space_realign using getattr with a default fallback
  • Improved device handling for attention masks by introducing a device selection pattern that prioritizes HF_device when available (sketched below)
  • Enhanced code robustness with minimal changes (5 additions, 3 deletions)
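
A minimal sketch of the device-selection pattern summarized above, assuming a wrapper object that exposes a main `device` attribute and an optional `HF_device` attribute; the helper function and dummy class are hypothetical illustrations, not the actual diff.

```python
import torch


def place_attention_mask(wrapper, attention_mask: torch.Tensor) -> torch.Tensor:
    """Move the attention mask to HF_device when it is set, else to the main device.

    `wrapper` is any object with a required `device` attribute and an optional
    `HF_device` attribute; this helper is illustrative, not the code in models.py.
    """
    hf_device = getattr(wrapper, "HF_device", None)
    target_device = hf_device if hf_device is not None else wrapper.device
    return attention_mask.to(target_device)


class _Dummy:
    device = "cpu"     # main device for the wrapper
    HF_device = None   # would be e.g. "cuda:1" under a sharded or offloaded setup


if __name__ == "__main__":
    mask = torch.ones(1, 8, dtype=torch.long)
    print(place_attention_mask(_Dummy(), mask).device)  # falls back to the main device
```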


@missflash
Author

@jiaruzouu Thank you for sharing the excellent code—it was very helpful in understanding the LatentMAS algorithm.
I would appreciate it if you could review the simple modifications I’ve submitted. Thank you.

@jiaruzouu
Member

jiaruzouu commented Dec 21, 2025

Hi @missflash, thank you so much for your efforts in improving our code! We are currently planning to update our codebase and will make sure to review your changes with detailed feedback shortly!

Thanks again for your efforts!
