Conversation
Thanks for handing that in! The timing is not great, though. The next glTFast version will feature an animation API that refactors the base you've worked on. I'll see if I can merge your efforts. Why not use
I tried this. Turns out the pooling has an immense impact on performance. While GC allocation goes down big time (unsurprisingly, as it is moved to malloc in native land), the time is back up 60-70% (tested on BrainStem.gltf). I think the best long-term solution is to combine array pooling with NativeArrays. The question now becomes: is the increase in garbage allocation worth the speed-up? Even the relatively small BrainStem model now produces 200kB of additional garbage (at a ~50% speed-up). I gravitate towards no, but I'm open to taking a look at more profiling results and discussing.
I've rebased your work onto the current development branch. If you continue working on it, please do so from the updated branch:
Another remark: after the transition to NativeArray for Keyframes, every structure is thread-safe and Burst-compatible, so things could be sped up even further!
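To make that concrete, here is a minimal, hypothetical sketch (not code from the branch) of what Burst-compatible keyframe processing could look like once the data lives in NativeArrays: a parallel job that transforms keyframe values without touching managed memory.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Hypothetical example: with keyframe data in NativeArrays, per-key work
// can run as a Burst-compiled parallel job across worker threads.
[BurstCompile]
struct ScaleKeyframeValuesJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> input;
    public NativeArray<float3> output;
    public float factor;

    public void Execute(int index)
    {
        // Pure value-type math; no managed allocations, safe for Burst.
        output[index] = input[index] * factor;
    }
}
```

Scheduled via `job.Schedule(input.Length, 64)`, this would move the per-keyframe loop off the main thread entirely.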
Here's a PoC with (amateurishly) pooled NativeArrays in the branch user/joverral/animationUtil_opts-native. It's comparably fast (sometimes faster) without the garbage allocations (150kB vs. 4kB for vec3 on BrainStem). What's unresolved is reliably allocating and freeing those pools (across glTFs). It would be great if we could polish that.
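For discussion, a minimal sketch of what such a pool could look like (this is an illustration, not the code in the branch). The open question above, reliably freeing the pool across glTFs, maps to the explicit `Dispose()` here: someone has to own the pool's lifetime, e.g. disposing it once a glTF instance is done loading.

```csharp
using System;
using System.Collections.Generic;
using Unity.Collections;
using UnityEngine;

// Hypothetical NativeArray<Keyframe> pool sketch. Arrays are rented by
// minimum length and returned after use; Dispose() releases all native
// memory, which must happen deterministically (e.g. per glTF load).
class KeyframeArrayPool : IDisposable
{
    readonly List<NativeArray<Keyframe>> m_Free = new List<NativeArray<Keyframe>>();

    public NativeArray<Keyframe> Rent(int minLength)
    {
        for (var i = 0; i < m_Free.Count; i++)
        {
            if (m_Free[i].Length >= minLength)
            {
                var array = m_Free[i];
                m_Free.RemoveAt(i);
                return array;
            }
        }
        // Allocator.Persistent so the buffer can outlive a single frame.
        return new NativeArray<Keyframe>(minLength, Allocator.Persistent);
    }

    public void Return(NativeArray<Keyframe> array) => m_Free.Add(array);

    public void Dispose()
    {
        foreach (var array in m_Free) array.Dispose();
        m_Free.Clear();
    }
}
```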
AddKey is slow, as it sorts the keys on every call. It is much faster to use the newer SetKeys method and pass in a span over the key array rented from a shared ArrayPool. In addition, we're seeing some glTF models with hundreds of curves, so paying for string formatting 900 times seems unnecessary.
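A sketch of the pattern being described, assuming a SetKeys overload that accepts a span of keys (as stated above): fill a rented buffer, then hand all keys to the curve in one call, paying for the sort/validation once instead of per key.

```csharp
using System;
using System.Buffers;
using UnityEngine;

static class CurveBuilder
{
    // Illustrative helper (hypothetical name): builds a curve from parallel
    // time/value arrays using one pooled buffer and a single SetKeys call.
    public static AnimationCurve CreateCurve(ReadOnlySpan<float> times, ReadOnlySpan<float> values)
    {
        var buffer = ArrayPool<Keyframe>.Shared.Rent(times.Length);
        try
        {
            for (var i = 0; i < times.Length; i++)
            {
                buffer[i] = new Keyframe(times[i], values[i]);
            }
            var curve = new AnimationCurve();
            // One call, one sort, instead of a re-sort per AddKey.
            curve.SetKeys(buffer.AsSpan(0, times.Length));
            return curve;
        }
        finally
        {
            // Safe to return: SetKeys is assumed to copy the keys.
            ArrayPool<Keyframe>.Shared.Return(buffer);
        }
    }
}
```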

Before:
After:
