NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 49140, 24, 128) (torch.float32)
key : shape=(1, 49140, 24, 128) (torch.float32)
value : shape=(1, 49140, 24, 128) (torch.float32)
attn_bias : <class 'xformers.ops.fmha.attn_bias.BlockDiagonalPaddedKeysMask'>
p : 0.0
`fa3F@0.0.0` is not supported because:
requires device with capability > (9, 0) but your GPU has capability (7, 5) (too old)
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
operator wasn't built - see `python -m xformers.info` for more info
`fa2F@v2.7.2.post1` is not supported because:
requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`cutlassF-pt` is not supported because:
attn_bias type is <class 'xformers.ops.fmha.attn_bias.BlockDiagonalPaddedKeysMask'>
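The dispatch rules the traceback lists can be condensed into a small sketch. The function name and the `"torch-sdpa"` fallback label are hypothetical; the capability thresholds and supported dtypes are taken from the messages above (the real dispatcher lives inside `xformers.ops.fmha`):

```python
def choose_attention_backend(capability, dtype):
    """Mirror the operator-selection rules from the error message.

    capability: (major, minor), as returned by torch.cuda.get_device_capability()
    dtype: "float16", "bfloat16", or "float32"
    """
    half_precision = dtype in ("float16", "bfloat16")
    if capability >= (9, 0) and half_precision:
        return "fa3"        # FlashAttention-3: Hopper-class GPUs, fp16/bf16 only
    if capability >= (8, 0) and half_precision:
        return "fa2"        # FlashAttention-2: Ampere or newer, fp16/bf16 only
    # Turing (7, 5) and/or fp32 inputs: no xformers fast path applies,
    # so a plain PyTorch attention fallback is the remaining option.
    return "torch-sdpa"

print(choose_attention_backend((7, 5), "float32"))  # → torch-sdpa
```

Note that on a (7, 5) card both the capability check *and* the fp32 dtype fail, so fixing only the precision would not be enough for the FlashAttention paths.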
I'm using these flags:
python gradio_server.py --i2v --profile 5 --attention xformers --precision fp16 --server-name 127.0.0.1 --open-browser
But it still returns the error above.
This is clearly an old-GPU issue. Can you suggest configs to use, or add code to support older GPUs? Or should I take this to the Hyvideo team's repo if it's out of scope for your optimizations?
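For what it's worth, one common workaround on pre-Ampere cards (a sketch only, not the Hyvideo or xformers API; assumes PyTorch ≥ 2.0, and the function name is hypothetical) is to replace `xformers.ops.memory_efficient_attention` with PyTorch's built-in `scaled_dot_product_attention`, which runs on Turing and accepts fp32. One layout caveat: xformers takes `(batch, seq, heads, dim)` while SDPA expects `(batch, heads, seq, dim)`, so the tensors must be transposed on the way in and out. The `BlockDiagonalPaddedKeysMask` from the traceback would additionally have to be converted to an additive `attn_mask` tensor, which this sketch omits:

```python
import torch
import torch.nn.functional as F

def sdpa_fallback(q, k, v):
    """Sketch of a fallback for xformers' memory_efficient_attention
    on GPUs older than Ampere (name is hypothetical).

    q, k, v: (batch, seq_len, num_heads, head_dim) — the xformers layout.
    """
    # SDPA wants (batch, num_heads, seq_len, head_dim)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    # Transpose back to the xformers layout
    return out.transpose(1, 2)

q = torch.randn(1, 8, 2, 16)
print(sdpa_fallback(q, q, q).shape)  # → torch.Size([1, 8, 2, 16])
```

On CPU or an sm75 GPU this falls through to SDPA's math/memory-efficient kernels, so it is slower than FlashAttention but at least runs.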