Currently EOS VM OC's code cache is implemented via a boost interprocess allocator that is mapped from a file (much like how chainbase works).
This is an over-complication.
It should be refactored to simply de/serialize the code from/to disk on start/stop. This will incur a small performance impact at start/stop, but if the serialized file is compressed I expect the impact to be minimal even when a slow disk is used. The benefits are substantial:
- No longer subject to boost version breakage
- Can eliminate the multi-process view of the code cache
- Can eliminate the multi-process architecture's need to track the lifecycle of the main process' OC code_cache instances
- nodeos data-dir can be on a noexec filesystem
- OC's code cache can be in huge pages
- It'll be easier to accommodate in-place upgrades
- Probably more
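The proposed start/stop de/serialization could be sketched roughly as below. This is a minimal, hypothetical model, not EOS VM OC's actual interface: the `code_cache_map` type, the entry layout (length-prefixed key and blob), and the function names are all assumptions for illustration; compression (mentioned above) is omitted but would wrap the same stream.

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include <vector>

// Hypothetical minimal model of the cache: code hash -> compiled native code bytes.
using code_cache_map = std::map<std::string, std::vector<uint8_t>>;

// On shutdown: write each entry as [key length][key][blob length][blob].
void save_cache(const code_cache_map& cache, const std::string& path) {
   std::ofstream out(path, std::ios::binary | std::ios::trunc);
   for (const auto& [key, blob] : cache) {
      uint64_t klen = key.size(), blen = blob.size();
      out.write(reinterpret_cast<const char*>(&klen), sizeof(klen));
      out.write(key.data(), klen);
      out.write(reinterpret_cast<const char*>(&blen), sizeof(blen));
      out.write(reinterpret_cast<const char*>(blob.data()), blen);
   }
}

// On startup: read entries back until EOF, rebuilding the in-memory cache.
code_cache_map load_cache(const std::string& path) {
   code_cache_map cache;
   std::ifstream in(path, std::ios::binary);
   uint64_t klen = 0;
   while (in.read(reinterpret_cast<char*>(&klen), sizeof(klen))) {
      std::string key(klen, '\0');
      in.read(key.data(), klen);
      uint64_t blen = 0;
      in.read(reinterpret_cast<char*>(&blen), sizeof(blen));
      std::vector<uint8_t> blob(blen);
      in.read(reinterpret_cast<char*>(blob.data()), blen);
      cache.emplace(std::move(key), std::move(blob));
   }
   return cache;
}
```

Because the file is only ever read into freshly allocated memory (rather than mmap'd as a live allocator arena), the in-memory cache can live in huge pages and the on-disk copy can sit on a noexec filesystem, which is what enables several of the benefits listed above.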