
Actually unloading VRAM when the logger claims to do so. #2

Open

Erquint wants to merge 1 commit into IceFog72:main from Erquint:main

Conversation


Erquint commented Mar 9, 2026

The bug this fixes is that VRAM remains occupied when switching to CPU.

The fix calls torch.cuda.empty_cache() whenever "🔄 Moving model back to CPU" is printed.
Conditionals were rearranged accordingly for call safety.
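
A minimal sketch of the pattern this PR describes, assuming a hypothetical helper (the actual function and variable names in the repository are not shown here):

```python
import torch

def move_model_to_cpu(model):
    """Hypothetical helper illustrating the fix: release cached VRAM
    after the model has been moved back to the CPU."""
    print("🔄 Moving model back to CPU")
    model.to("cpu")
    # Guard the CUDA call so it is safe on CPU-only hosts,
    # mirroring the "conditionals rearranged for call safety" note.
    if torch.cuda.is_available():
        # empty_cache() releases blocks held by the CUDA caching
        # allocator so the VRAM becomes available to other processes.
        torch.cuda.empty_cache()
```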

It could be argued that preserving the model cached in VRAM for some time is a benefit for a multi-user environment, but that would need to be handled and parametrized in the config.

