Force-pushed 3a26e3e to 743091e
Fix non-diffusers LoRA key handling for flux2
* update (×8)
…gface#13122) Improve docstrings for the EDM DPMSolver multistep scheduler
…ly in `GlmImagePipeline` (huggingface#13092)
* allow loose input
* add tests
* format test_glm_image
---------
Signed-off-by: JaredforReal <w13431838023@gmail.com>
Force-pushed ddec8fb to 4bbedfb
Force-pushed 743091e to 55834ed
…ingface#13127) Improve docstrings for the flow match Euler discrete scheduler
…e} (huggingface#13066)
* initial conversion script
* cosmos control net block
* CosmosAttention
* base model conversion
* wip
* pipeline updates
* convert controlnet
* pipeline: working without controls
* wip
* debugging
* Almost working
* temp
* control working
* cleanup + detail on neg_encoder_hidden_states
* convert edge
* pos emb for control latents
* convert all chkpts
* resolve TODOs
* remove prints
* Docs
* add siglip image reference encoder
* Add unit tests
* controlnet: add duplicate layers
* Additional tests
* skip less
* skip less
* remove image_ref
* minor
* docs
* remove skipped test in transfer
* Don't crash process
* formatting
* revert some changes
* remove skipped test
* make style
* Address comment + fix example
* CosmosAttnProcessor2_0 revert + CosmosAttnProcessor2_5 changes
* make style
* make fix-copies
* add tests for robust model loading. * apply review feedback.
…ed (huggingface#13121) Fix LTX-2 inference when num_videos_per_prompt > 1 and CFG is enabled
Try to fix setuptools pkg_resources issue on CI
Force-pushed 0d0eeae to bff6af9
…ngface#13130) Improve docstrings for the flow match Heun discrete scheduler
…ce#13132) Try to fix setuptools pkg_resources error for PR GPU test workflow
…le (huggingface#12524)
* drop python 3.8
* remove list, tuple, dict from typing
* fold Unions into |
* up
* fix a bunch and please me.
* up (×6)
* enforce 3.10.0.
* up (×8)
* Update setup.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up.
* python 3.10.
* fix
* up (×4)
* final
* up
* fix typing utils.
* up (×6)
* fix
* up (×6)
* handle modern types.
* up (×2)
* fix ip adapter type checking.
* up (×7)
* revert docstring changes.
* keep deleted files deleted. (×2)
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
…12994) * feat: implement apply_lora_scale to remove boilerplate. * apply to the rest. * up * remove more. * remove. * fix * apply feedback.
* fix ltx2 i2v docstring. * up
* up * style + copies * fix --------- Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
* update create pipeline section
* update more
* update more
* more
* add a section on running the pipeline modularly
* refactor update_components, remove support for spec
* style
* bullet points
* update the pipeline block
* small fix in state doc
* update sequential doc
* fix link
* small update on quickstart
* add a note on how to run the pipeline without the components manager
* Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* remove the supported models mention
* update more
* up
* revert type hint changes
---------
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* up
* up up
* update outputs
* style
* add modular_auto_docstring!
* more auto docstring
* style
* up up up
* more more
* up
* address feedback
* add TODO in the description for empty docstring
* refactor based on dhruv's feedback: remove the class method
* add template method
* up
* up up up
* apply auto docstring
* make style
* remove space in make docstring
* Apply suggestions from code review
* revert change in z
* fix
* Apply style fixes
* include auto-docstring check in the modular ci. (huggingface#13004)
* initial support: workflow
* up up
* treat loop sequential pipeline blocks as leaf
* update qwen image docstring note
* add workflow support for sdxl
* add a test suite
* add test for qwen-image
* refactor flux a bit, separate modular_blocks into modular_blocks_flux and modular_blocks_flux_kontext + support workflow
* refactor flux2: separate blocks for klein_base + workflow
* qwen: remove import support for stuff other than the default blocks
* add workflow support for wan
* sdxl: remove some imports
* refactor z
* update flux2 auto core denoise
* add workflow test for z and flux2
* Apply suggestions from code review (×2)
* add test for flux
* add workflow test for flux
* add test for flux-klein
* sdxl: modular_blocks.py -> modular_blocks_stable_diffusion_xl.py
* style
* up
* add auto docstring
* workflow_names -> available_workflows
* fix workflow test for klein base
* Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* fix workflow tests
* qwen: edit -> image_conditioned to be consistent with flux kontext/2
* remove Optional
* update type hints
* update guider update_components
* fix more
* update docstring auto again
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
change lora mixin Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* remove k-diffusion * fix copies
…ingface#12811) * support device type device_maps to work with offloading. * add tests. * fix tests * skip tests where it's not supported. * empty * up * up * fix allegro.
* [Bug Fix][Qwen-Image-Edit] Fix Qwen-Image-Edit series on NPU
* Enhance NPU attention handling by converting attention mask to boolean and refining mask checks.
* Refine attention mask handling in NPU attention function to improve validation and conversion logic.
* Clean Code
* Refine attention mask processing in NPU attention functions to enhance performance and validation.
* Remove item() ops on npu fa backend.
* Reuse NPU attention mask by `_maybe_modify_attn_mask_npu`
* Apply style fixes
* Update src/diffusers/models/attention_dispatch.py
---------
Co-authored-by: zhangtao <zhangtao529@huawei.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
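The mask-to-boolean conversion mentioned in that commit reflects a common constraint: fused attention kernels on some backends expect a boolean mask (True = attend) rather than the additive float mask (0.0 = attend, -inf = masked) used by SDPA. A minimal, backend-agnostic sketch of the conversion; the helper name is hypothetical and conventions vary between kernels:

```python
import math

def to_bool_attn_mask(mask):
    """Convert an additive float attention mask into a boolean mask.

    Assumes the -inf additive convention: finite entries (typically 0.0)
    mean "attend", -inf means "masked out". Returns True where attention
    is allowed, which is what boolean-mask kernels expect.
    """
    return [[math.isfinite(v) for v in row] for row in mask]

mask = [[0.0, -math.inf], [0.0, 0.0]]
print(to_bool_attn_mask(mask))  # [[True, False], [True, True]]
```

In practice this would be done with tensor ops on-device (e.g. `torch.isfinite`), and only when the incoming mask is not already boolean, to avoid redundant conversions.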
* update (×6)
Improve docstrings for the flow match LCM scheduler
* add example * feedback
…12777) * split tensors inside the transformer blocks to avoid checkpointing issues * clean up, fix type hints * fix merge error * Apply style fixes --------- Co-authored-by: s <you@example.com> Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
…ed (huggingface#13149) * Pin setuptools version for dependencies which explicitly depend on pkg_resources * Revert setuptools pin as k-diffusion pipelines are now deprecated --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Guard ftfy import with is_ftfy_available * Remove xfail for PRX pipeline tests as they appear to work on transformers>4.57.1 * make style and make quality
Force-pushed 55b09c0 to 16602ad
update Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
…ize_gguf_tensor (huggingface#13166) [gguf] Convert to plain tensor earlier in dequantize_gguf_tensor. Once dequantize_gguf_tensor fetches the quant_type attribute from the GGUFParameter tensor subclass, there is no further need to run the actual dequantize operations on the tensor subclass; we can convert to a plain tensor right away. This not only makes PyTorch eager mode faster, but also reduces torch.compile tracer compile time from 36 seconds to 10 seconds, because there is a lot less code to trace.
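The pattern behind that speedup is general: read the metadata off the wrapper subclass once, then drop to the plain base type before the heavy numeric work, so downstream code (and any tracer) only ever sees the simple type. A toy sketch of the idea with a hypothetical `QuantArray` wrapper, not the actual GGUFParameter implementation:

```python
class QuantArray(list):
    """Hypothetical stand-in for a tensor subclass carrying quant metadata."""
    def __init__(self, data, quant_type):
        super().__init__(data)
        self.quant_type = quant_type

def dequantize(arr):
    # 1) Fetch the metadata from the subclass first...
    quant_type = getattr(arr, "quant_type", None)
    # 2) ...then immediately convert to the plain base type, so the
    #    arithmetic below never touches the subclass's overridden methods.
    plain = list(arr)
    scale = {"Q4": 0.25, "Q8": 0.5}.get(quant_type, 1.0)  # hypothetical scales
    return [x * scale for x in plain]

print(dequantize(QuantArray([4, 8], "Q4")))  # [1.0, 2.0]
```

With real tensor subclasses the same effect comes from detaching to a plain `torch.Tensor` early; every op performed on the subclass would otherwise go through `__torch_function__` dispatch, which is exactly the extra code torch.compile had to trace.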
PEFT (fal) LoRA format
…3123) * update * update
Fix typing import by converting to Python 3.9+ style type hint
* switch to transformers main again.
* more
* up (×2)
* fix group offloading.
* attributes
* up (×2)
* tie embedding issue.
* fix t5 stuff for more.
* matrix configuration to see differences between 4.57.3 and main failures.
* change qwen expected slice because of how init is handled in v5.
* same stuff.
* up (×2)
* Revert "up" This reverts commit 515dd06.
* Revert "up" This reverts commit 5274ffd.
* up (×2)
* fix with peft_format.
* just keep main for easier debugging.
* remove torchvision.
* empty
* up
* up with skyreelsv2 fixes.
* fix skyreels type annotation.
* up (×2)
* fix variant loading issues.
* more fixes.
* fix dduf
* fix (×3)
* more fixes
* fixes
* up (×2)
* fix dduf test
* up
* more
* update
* hopefully, final?
* one last breath
* always install from main
* up
* audioldm tests
* up
* fix PRX tests.
* up
* kandinsky fixes
* qwen fixes.
* prx
* hidream
…gface#13060)
* fix: graceful fallback when attention backends fail to import

## Problem
External attention backends (flash_attn, xformers, sageattention, etc.) may be installed but fail to import at runtime due to ABI mismatches. For example, when `flash_attn` is compiled against PyTorch 2.4 but used with PyTorch 2.8, the import fails with:
```
OSError: .../flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEab
```
The current code uses `importlib.util.find_spec()` to check if packages exist, but this only verifies the package is installed, not that it can actually be imported. When the import fails, diffusers crashes instead of falling back to native PyTorch attention.

## Solution
Wrap all external attention backend imports in try-except blocks that catch `ImportError` and `OSError`. On failure:
1. Log a warning message explaining the issue
2. Set the corresponding `_CAN_USE_*` flag to `False`
3. Set the imported functions to `None`

This allows diffusers to gracefully degrade to PyTorch's native SDPA (scaled_dot_product_attention) instead of crashing.

## Affected backends
- flash_attn (Flash Attention)
- flash_attn_3 (Flash Attention 3)
- aiter (AMD Instinct)
- sageattention (SageAttention)
- flex_attention (PyTorch Flex Attention)
- torch_npu (Huawei NPU)
- torch_xla (TPU/XLA)
- xformers (Meta xFormers)

## Testing
Tested with PyTorch 2.8.0 and flash_attn 2.7.4.post1 (compiled for PyTorch 2.4). Before: crashes on import. After: logs warning and uses native attention.

* address review: use single logger and catch RuntimeError
- Move logger to module level instead of creating per-backend loggers
- Add RuntimeError to exception list alongside ImportError and OSError
* Apply style fixes
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Fix torchrun command argument order in docs
Force-pushed 16602ad to 03b666a