[SharedCache] Improvements and fixes for MMappedFileAccessor stuff #6318
WeiN76LQh wants to merge 1 commit into
`MMappedFileAccessor::Open` wasn't really behaving as I think it was intended. It tried to use `fileAccessorSemaphore` to limit the number of files Binary Ninja could have open concurrently, but it never confirmed that it had actually acquired a reference on `fileAccessorSemaphore`, while every deallocation of an `MMappedFileAccessor` would release one back. This could inflate the count of `fileAccessorSemaphore`, leaving it with a higher file limit than originally set. Also, using a worker to deallocate an `MMappedFileAccessor` on another thread felt like a bit of a gross hack and made synchronizing dropping a reference and acquiring one on `fileAccessorSemaphore` harder than it needed to be. This resulted in a bit of a rewrite of `MMappedFileAccessor::Open` to mitigate these issues. It should now respect the file limit set at startup, though it can temporarily exceed it in extreme circumstances that I don't expect to occur in real-world use.

Additionally, some accesses to certain data structures were insufficiently locked, causing very intermittent crashes. I believe this commit has cleaned those up.
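The acquire/release mismatch described above can be avoided by having the accessor record whether it actually holds a permit, and only releasing on destruction when it does. A minimal sketch of that idea, with hypothetical names (`FileSemaphore`, `MMappedFileAccessorSketch`) that are illustrative only and not Binary Ninja's actual `fileAccessorSemaphore` implementation:

```cpp
#include <condition_variable>
#include <mutex>

// Minimal counting semaphore (hypothetical; the real
// fileAccessorSemaphore may differ).
class FileSemaphore
{
	std::mutex m;
	std::condition_variable cv;
	int count;

public:
	explicit FileSemaphore(int initial) : count(initial) {}

	// Blocks until a permit is actually held, so the caller knows a
	// later Release() is balanced by a real acquisition.
	void Acquire()
	{
		std::unique_lock<std::mutex> lk(m);
		cv.wait(lk, [this] { return count > 0; });
		--count;
	}

	// Non-blocking variant: reports whether a permit was taken.
	bool TryAcquire()
	{
		std::lock_guard<std::mutex> lk(m);
		if (count == 0)
			return false;
		--count;
		return true;
	}

	void Release()
	{
		{
			std::lock_guard<std::mutex> lk(m);
			++count;
		}
		cv.notify_one();
	}
};

// Accessor sketch: remembers whether it holds a permit and only returns
// one on destruction if it does. Unconditionally releasing here is what
// could inflate the semaphore's count in the original code.
class MMappedFileAccessorSketch
{
	FileSemaphore& sem;
	bool holdsPermit;

public:
	MMappedFileAccessorSketch(FileSemaphore& s, bool acquired) : sem(s), holdsPermit(acquired) {}
	~MMappedFileAccessorSketch()
	{
		if (holdsPermit)
			sem.Release();
	}
};
```

With this shape, destroying an accessor that never acquired a permit leaves the limit unchanged, so the count can never drift above its initial value.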
As an FYI, #6316 also changes …
Yes, I hadn't seen your PRs until after I pushed this one. As you state, it doesn't really conflict with any of your changes. If an updated PR is required in the future, I can do that.
I've identified another bug in this area of code, to do with the … I would provide a fix in this PR via an additional commit, but I'm unsure how to proceed as I don't fully understand the mechanics of everything in this context and the levers that can be worked with.
There's another issue with how … In practice, the file descriptor limit seems high enough that the files for a single shared cache tend to stay open indefinitely, avoiding this problem. I guess if you open several iOS shared caches with macOS's default low file descriptor limit you might hit it. I was able to trigger it by hard-coding a file descriptor limit of 8 and opening an iOS shared cache: I can see files being closed and then reopened, with no slide being applied. Perhaps the approach to file accessor caching should be revisited? A lot of the hoops that the existing code jumps through to try to limit the number of open files could be avoided by using …
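The comment's concrete suggestion is truncated, but one common way to revisit accessor caching along these lines is a small LRU cache that bounds the number of simultaneously open accessors, closing the least recently used one on eviction (the reopen path would still need to reapply the slide, which is the bug described above). This is purely an illustrative sketch with hypothetical names, not the approach any of the linked PRs actually takes:

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical LRU cache bounding the number of open file accessors.
// Names are illustrative, not Binary Ninja's API. Evicting an entry
// destroys the accessor, which is assumed to close its file.
template <typename Accessor>
class AccessorLRU
{
	using Entry = std::pair<std::string, Accessor>;

	size_t capacity;
	std::list<Entry> order;  // front = most recently used
	std::unordered_map<std::string, typename std::list<Entry>::iterator> index;

public:
	explicit AccessorLRU(size_t cap) : capacity(cap) {}

	// Returns the cached accessor and marks it most recently used,
	// or nullptr if the path has been evicted (caller must reopen
	// and, crucially, reapply any slide).
	Accessor* Get(const std::string& path)
	{
		auto it = index.find(path);
		if (it == index.end())
			return nullptr;
		order.splice(order.begin(), order, it->second);
		return &it->second->second;
	}

	void Put(const std::string& path, Accessor acc)
	{
		if (Accessor* existing = Get(path))
		{
			*existing = std::move(acc);
			return;
		}
		order.emplace_front(path, std::move(acc));
		index[path] = order.begin();
		if (order.size() > capacity)
		{
			index.erase(order.back().first);  // eviction closes the file
			order.pop_back();
		}
	}

	size_t Size() const { return order.size(); }
};
```

A structure like this makes the open-file bound a property of the cache itself, rather than something each open/close site has to coordinate through a semaphore.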
I sent #6392, which removes the complex synchronization in …
Closing because this is no longer relevant after #6392 was merged.