How to train a LoRA with a distilled FLUX model such as flux-schnell? #12945
Replies: 2 comments
You generally don't. There is a training adapter here: https://huggingface.co/ostris/FLUX.1-schnell-training-adapter, but I'd consider this expert use and not something that diffusers, as a generic library, will want to provide a script for (correct me if I'm wrong; I'm not associated with diffusers).
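For reference, a minimal sketch of pulling that adapter into a diffusers pipeline, assuming its weights are published in a LoRA layout that `load_lora_weights` understands (see the adapter's model card for the intended training workflow):

```python
import torch
from diffusers import FluxPipeline

# Load the distilled FLUX.1-schnell base model.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)

# Assumption: the adapter ships LoRA-format weights that diffusers can load.
# It is intended to be active while training a LoRA on top of schnell and
# removed again at inference time.
pipe.load_lora_weights(
    "ostris/FLUX.1-schnell-training-adapter", adapter_name="training_adapter"
)
```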
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Is your feature request related to a problem? Please describe.
I can use FLUX as the base model to train a LoRA, but inference then needs around 20 steps, which takes a lot of time. I want to train a LoRA on a distilled model so that fewer steps still produce a good image; for example, a LoRA trained on flux-schnell that generates a good image in only 4 steps. I would also like to train many LoRAs like this, all usable with 4-step generation.
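For context, the requested workflow would look roughly like the sketch below at inference time; `path/to/my_schnell_lora` is a placeholder for a hypothetical LoRA trained on the distilled model:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder path for a LoRA trained on top of the distilled model.
pipe.load_lora_weights("path/to/my_schnell_lora")

# FLUX.1-schnell is step-distilled, so it is normally sampled with
# very few steps and no classifier-free guidance.
image = pipe(
    "a photo of sks dog",
    num_inference_steps=4,
    guidance_scale=0.0,
    max_sequence_length=256,
).images[0]
image.save("schnell_lora_4step.png")
```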
Describe the solution you'd like.
I need a script, perhaps located at examples/dreambooth/train_dreambooth_lora_flux_schnell.py.
I want to know how to train a LoRA based on a distilled model and get a good result.
Describe alternatives you've considered.
I want to train many LoRAs for a base model (flux or flux-schnell), not just one, and I want to generate with fewer steps. So I want to train LoRAs on the distilled model. How can this be implemented? I tested the script train_dreambooth_lora_flux.py, changing the base model from flux to flux-schnell, but the result is bad.
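For reference, swapping the base model in the existing script amounts to something like the command below. The flag names are taken from the current diffusers example script and may differ between versions; as noted in the reply above, results tend to be poor without a training adapter because schnell is distilled:

```bash
accelerate launch examples/dreambooth/train_dreambooth_lora_flux.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-schnell" \
  --instance_data_dir="path/to/instance_images" \
  --instance_prompt="a photo of sks dog" \
  --output_dir="flux-schnell-dreambooth-lora" \
  --mixed_precision="bf16" \
  --resolution=512 \
  --train_batch_size=1 \
  --rank=16 \
  --learning_rate=1e-4 \
  --max_train_steps=500
```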
Additional context.
Any other implementation method is also OK.