
Refactor SWF processing to fix memory exhaustion and improve compilation speed #2060

Open

Tutez64 wants to merge 3 commits into openfl:develop from Tutez64:bugfix/swf-handler-memory-growth
Conversation


Tutez64 commented May 12, 2026

Lime currently sends all libraries handled by the same asset handler to a single `haxelib run <handler>` invocation. For SWF projects with many libraries, this means one Neko process processes every SWF in sequence.

In large real-world projects with hundreds of SWFs, memory keeps growing throughout preprocessing until it reaches several GB and can eventually fail with a Neko GC/heap error. When that happens after some SWF caches have already been written, later rebuilds may reuse the partial cache state and fail with missing generated Haxe classes.

This change processes SWF libraries in separate handler runs, one library per invocation, while preserving the existing merge behavior for each handler output. This bounds memory usage to a single SWF processing run instead of accumulating across the entire SWF set.
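In sketch form, the split looks like the loop below. `runLibraryHandler` and `mergeHandlerOutput` are illustrative names, not the exact Lime internals:

```haxe
// Hypothetical sketch of the per-library split described above.
for (library in swfLibraries) {
    // Each iteration spawns its own "haxelib run <handler>" Neko process,
    // so peak memory is bounded by the largest single SWF rather than
    // accumulating across the whole set.
    var handlerOutput = runLibraryHandler(handler, [library]);
    // Merge behavior per handler output is unchanged.
    mergeHandlerOutput(project, handlerOutput);
}
```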

Attached: swf-memory-repro.zip. It generates SWF files locally and demonstrates memory growth in the current batched mode.

Tutez64 commented May 12, 2026

This commit fixes a performance regression introduced by the previous change: spawning a new Neko process for each individual SWF library adds significant overhead, making repeat builds (where assets are already cached and up to date) excessively slow.

To resolve this, I introduced a "fast-check" pass before calling `runLibraryHandler`. The logic now checks whether the generated cache files (`.zip` and `.classes.txt`) are newer than both the source `.swf` file and the `swf` tool itself.

If a valid cache is found, Lime now bypasses the Neko process entirely and manually merges the cached assets, generated source paths (`haxe/_generated`), haxelibs (`swf`), and required haxeflags into the project. This restores near-instant performance for cached SWF assets while leaving stale ones to be processed normally.
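A minimal sketch of the fast-check, using only `sys.FileSystem` timestamps. The function name and parameters are illustrative, not the exact Lime implementation:

```haxe
import sys.FileSystem;

class CacheCheck {
    // Hypothetical sketch: returns true when both generated cache files
    // are newer than the source SWF and the swf tool itself.
    public static function isCacheFresh(swfPath:String, cacheZip:String,
            classesTxt:String, toolPath:String):Bool {
        if (!FileSystem.exists(cacheZip) || !FileSystem.exists(classesTxt)) {
            return false;
        }
        function mtime(path:String):Float {
            return FileSystem.stat(path).mtime.getTime();
        }
        var newestInput = Math.max(mtime(swfPath), mtime(toolPath));
        // Both outputs must postdate both inputs for the cache to be valid.
        return mtime(cacheZip) > newestInput && mtime(classesTxt) > newestInput;
    }
}
```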

Tutez64 commented May 12, 2026

Since the memory-limit fix isolated each SWF into its own Neko process, it opened up the opportunity to process them concurrently. This commit introduces a multi-threaded worker pool using `sys.thread` to take advantage of the new architecture.

How it works:

- Uncached SWF libraries are pushed into an array of `LibraryHandlerJob` jobs.
- `runLibraryHandlers` processes these jobs in parallel, using all available CPU cores (`System.processorCores`).
- A `Mutex` coordinates the queue, and a `Lock` ensures the main thread waits for all workers to complete before merging the resulting `HXProject` fragments.
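The steps above can be sketched as follows. The `LibraryHandlerJob` shape, `processJob`, and the worker count are placeholders; the merge of the resulting `HXProject` fragments is elided:

```haxe
import sys.thread.Thread;
import sys.thread.Mutex;
import sys.thread.Lock;

// Placeholder job shape for illustration.
typedef LibraryHandlerJob = { library:String }

class HandlerPool {
    public static function runLibraryHandlers(jobs:Array<LibraryHandlerJob>,
            workerCount:Int):Void {
        var queue = jobs.copy();
        var queueMutex = new Mutex();
        var done = new Lock();

        for (_ in 0...workerCount) {
            Thread.create(function() {
                while (true) {
                    // The mutex serializes access to the shared job queue.
                    queueMutex.acquire();
                    var job = queue.shift();
                    queueMutex.release();
                    if (job == null) break; // queue drained
                    processJob(job); // one handler process per SWF
                }
                done.release(); // signal this worker is finished
            });
        }

        // Block until every worker has released the lock once,
        // then the caller can merge the per-job results.
        for (_ in 0...workerCount) done.wait();
    }

    static function processJob(job:LibraryHandlerJob):Void {
        // Placeholder for spawning the per-library handler run,
        // e.g. "haxelib run swf process <library>".
    }
}
```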

Result:
Because SWF generation is heavily CPU-bound, parallelizing these independent Neko instances drastically reduces cold-build times (e.g., dropping from ~9 minutes to under 2 minutes on my project), while still avoiding the unbounded memory growth of the original single-process architecture.

Tutez64 changed the title from "Process SWF libraries in separate handler runs to avoid unbounded memory growth" to "Refactor SWF processing to fix memory exhaustion and improve compilation speed" on May 12, 2026.