Extend base Trainer with LoRATrainer using PEFT #51
Open
SaviNimz wants to merge 1 commit into pyfenn:main
Conversation
blkdmr (Collaborator) requested changes on Feb 11, 2026
Hi, the code is well written, but there are a few issues that make this class unusable as it stands. For example, at least a base LLM model, a tokenizer, and a PEFT config object should be passed to the trainer. Also, the training procedure needs to be reworked, since we have attention masks, tokenizers, and other components that aren't supported by the base Trainer class. Do you think you could address these issues?
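For illustration, a minimal sketch of the constructor shape this review suggests. The pyfenn import path, the base `Trainer` signature, and the argument names here are all assumptions, not the project's actual API:

```python
from peft import get_peft_model

from pyfenn import Trainer  # hypothetical import path for the base Trainer


class LoRATrainer(Trainer):
    # Hypothetical constructor; argument names are illustrative.
    def __init__(self, model, tokenizer, lora_config=None, **kwargs):
        if lora_config is not None:
            # Wrap the base LLM so only the LoRA adapter weights are trainable.
            model = get_peft_model(model, lora_config)
        self.tokenizer = tokenizer
        super().__init__(model=model, **kwargs)
```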
SaviNimz (Author)
Hi, I've been looking into possible implementations, and there are a few issues I've come across. It would be helpful to have some insights. I'm also not very familiar with LoRA, but from what I can see, even if I override the existing fit function it won't be generalizable, right?
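For reference, a sketch of the kind of training step such an override would need, assuming batches are dicts produced by a Hugging Face tokenizer and that the trainer holds `self.optimizer`. None of these names come from the existing pyfenn code; this is only meant to show why the generic base loop doesn't fit:

```python
import torch


# Hypothetical override; the base Trainer's fit() signature is assumed.
def fit(self, dataloader, epochs=1):
    self.model.train()
    for _ in range(epochs):
        for batch in dataloader:
            # Causal-LM batches carry input_ids, attention_mask, and labels,
            # which a generic (features, targets) training loop doesn't handle.
            outputs = self.model(
                input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["labels"],
            )
            outputs.loss.backward()
            self.optimizer.step()
            self.optimizer.zero_grad()
```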
This PR introduces a LoRATrainer class that extends the base Trainer to support Parameter-Efficient Fine-Tuning (PEFT) using LoRA via the peft library. If a LoraConfig is provided, the model is wrapped using get_peft_model; otherwise, it behaves exactly like the standard Trainer.
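A hedged usage sketch of that behavior. The model and tokenizer choices, the `target_modules` value, and the `LoRATrainer` keyword arguments are illustrative assumptions, not the project's actual API:

```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Standard LoRA hyperparameters; target_modules depends on the base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = LoRATrainer(model=model, tokenizer=tokenizer, lora_config=lora_config)
# With lora_config=None, the class would fall back to plain Trainer behavior.
```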