Slow createChild when limit of opened descriptors is set to high value #38
trnila wants to merge 3 commits into vstinner:main
Conversation
ptrace/debugger/child.py
Outdated
def _closeFds(ignore_fds):
    for path in ['/proc/self/fd', '/dev/fd']:
/dev/fd is a symlink to /proc/self/fd. Is it really useful to test it?
/dev/fd is a symlink on Linux.
Other systems like BSD and macOS have a real /dev/fd.
On FreeBSD, until fdescfs is mounted, /dev/fd contains only the static descriptors 0, 1 and 2, so another check would be needed; otherwise it won't close any descriptors.
I will look at Python's subprocess.
If you want to support BSD, see this test for FreeBSD:
https://github.com/python/cpython/blob/6c3d5274687c97f9c13800ad50e73e15b54f629d/Modules/_posixsubprocess.c#L87
Ah yes, Python 3 has similar code in subprocess. My PEP 446 (http://www.python.org/dev/peps/pep-0446/) might make it possible to avoid closing all FDs entirely, but I didn't try.
I don't maintain this project anymore; I'm looking for a new maintainer.
When the system has a large limit on open file descriptors, createChild becomes slow because it tries to close every descriptor up to that limit; on my system it currently takes about 1-2 seconds.
If the system has /proc/self/fd or /dev/fd, we can iterate over only the currently open descriptors and close those. Otherwise we fall back to the current solution, which calls close on every possible descriptor.