Port of test scripts to unit tests #164
Conversation
Signed-off-by: Oleg Höfling <oleg.hoefling@gmail.com>
```python
@functools.wraps(f)
def wrapper(*args, **kwargs):
    result = f(*args, **kwargs)
    testclient = args[0]
```
better make this the first wrapper argument explicitly
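A minimal sketch of that suggestion, assuming a hypothetical `reraise` decorator and `get` method (names taken from the surrounding diff, the body is illustrative only):

```python
import functools

def reraise(f):
    """Sketch: take the test client as an explicit first parameter
    instead of reaching into args[0]."""
    @functools.wraps(f)
    def wrapper(testclient, *args, **kwargs):
        result = f(testclient, *args, **kwargs)
        # ... exception forwarding elided ...
        return result
    return wrapper

@reraise
def get(testclient, path=None, **kwargs):
    # Illustrative body only; the real method issues an HTTP request.
    return (testclient, path)
```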
```python
exc_info = sys.exc_info()
exc_type = exc_info[0]
tb = exc_info[2]
self.pipe.send((exc_type, e, tb))
```
```python
def free_port():
    with contextlib.closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
```
I guess this might break on systems that don't support reuseaddr?
Indeed, socket.SO_REUSEADDR should be handled the same way as socket.SO_REUSEPORT.
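One way to treat both options the same, sketched here as a guarded lookup (the `try/except` and the option list are assumptions, not the PR's actual code):

```python
import contextlib
import socket

def free_port():
    """Find a free TCP port; set reuse options only where the platform has them.

    Sketch only: SO_REUSEPORT is absent on some platforms, and the review
    suggests guarding SO_REUSEADDR the same way.
    """
    with contextlib.closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        for name in ('SO_REUSEADDR', 'SO_REUSEPORT'):
            opt = getattr(socket, name, None)
            if opt is not None:
                try:
                    s.setsockopt(socket.SOL_SOCKET, opt, 1)
                except OSError:
                    pass  # constant exists but the kernel rejects the option
        s.bind(('', 0))
        return s.getsockname()[1]
```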
```python
raise_(*exc_info)  # python2 compat
```
```python
@reraise
def get(self, path=None, **kwargs):
```
Maybe refactor these into a single method and use functools.partial? But that's nitpicking
I agree, good suggestion. I mainly added the method copies for code completion in IDEs, but typing stubs would be a better fit here.
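The suggested refactor could look roughly like this; `functools.partialmethod` is a natural fit for binding the HTTP verb (the `Client` class and `request` body below are hypothetical, not the PR's code):

```python
import functools

class Client:
    """Sketch: one request() implementation, with the verb helpers
    derived via functools.partialmethod instead of copied bodies."""

    def request(self, method, path=None, **kwargs):
        # Real implementation would issue the HTTP request.
        return method, path, kwargs

    get = functools.partialmethod(request, 'GET')
    post = functools.partialmethod(request, 'POST')
    head = functools.partialmethod(request, 'HEAD')
```

This keeps a single code path while still giving each verb a named attribute; adding `.pyi` typing stubs on top would restore IDE completion.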
```python
    * also, filenames longer than 64 chars cause urllib hiccups, see https://bugs.python.org/issue32958
    * another thing: tempfile.gettempdir() on macOS resorts to $TMPDIR,
      which is autogenerated into /var/folders and is also longer than 64 chars
    """
```
There was a problem hiding this comment.
By the way, there are also anonymous sockets with @..., but they are not supported on all platforms. Maybe we can test these as well.
Good idea! Unfortunately, it looks like urllib can't parse the URL, so requests_unixsocket can't connect, although it claims support for abstract unix sockets. I have left a comment with the error message and am waiting for the author's response. Maybe extract this into a separate task?
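For reference, binding such a socket directly (Linux only) is straightforward; it is the URL-encoding layer that breaks. A sketch with a made-up socket name:

```python
import socket

# Abstract-namespace unix sockets (Linux only) are named with a leading NUL
# byte and never touch the filesystem, which would sidestep the long-$TMPDIR
# path problem described in the docstring above. Shown as '@name' by ss/netstat.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind('\0bjoern-test-abstract')  # hypothetical name
server.listen(1)
```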
```python
    return app
```
```python
@pytest.mark.parametrize('file', ('README.rst',), ids=('README.rst',))
```
Better use lists here, (foo,) is difficult to parse
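The reviewer's point, sketched with a made-up test function (only the list literals are the suggestion; the test body is illustrative):

```python
import pytest

# A list literal avoids the easy-to-miss trailing comma of a
# one-element tuple like ('README.rst',).
@pytest.mark.parametrize('file', ['README.rst'], ids=['README.rst'])
def test_file_exists(file):
    assert file.endswith('.rst')
```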
```python
)
@pytest.mark.parametrize(
    'wrapper',
    (
        lambda f, env: env['wsgi.file_wrapper'](f),
        lambda f, env: env['wsgi.file_wrapper'](f, 1),
    ),
    ids=('callable-iterator', 'xreadlines', 'filewrapper', 'filewrapper2'),
```
And here and in any place where an actual list of things is provided ;)
```python
@pytest.mark.parametrize(
    'expect_header_value,content_length,body,response',
```
This is also difficult to parse
```python
    expect_header_value, content_length, body
)
raw_client.send(request.encode('utf-8'))
resp = raw_client.recv(1 << 10)
```
This is due to code copying from tests/expect100.py. Shouldn't it even be bjoern.DEFAULT_LISTEN_BACKLOG?
The reason I did that in expect100.py (I'm the author of that specific test) is that I consider it more legible when dealing with byte counts between KB -> MB -> GB. E.g.,

```python
KB = 1024
MB = 1048576
GB = 1073741824
```

vs.

```python
KB = 1 << 10
MB = 1 << 20
GB = 1 << 30
```

or if your test grows the content-length exponentially with each iteration, as that test does here.
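The exponential-growth case mentioned above can be sketched in one line; the variable name is made up:

```python
# Growing the body size exponentially reads naturally in shift notation:
# each step is exactly ten more doublings.
sizes = [1 << exp for exp in range(10, 31, 10)]  # 1 KiB, 1 MiB, 1 GiB
```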
This is really great, thanks a million! Minor suggestions in the review. Maybe you can also add some proper test description docstrings. One thing we have to think about is that some of the tests are useful only when called thousands of times, for example the exception tests, which are used to check for memory leaks. Not sure if that's easily automated; otherwise there should be a way to run a list of WSGI applications without the actual test code, so we can use the "old" way of bombarding bjoern.
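One way the "call it thousands of times" leak check might be automated, sketched here under assumptions (the function name, iteration count, and 4 MiB threshold are all made up, and `ru_maxrss` units differ between Linux and macOS):

```python
import gc
import resource

def looks_leak_free(func, iterations=10000, allowed_growth=4 << 20):
    """Sketch: call func repeatedly and compare peak RSS before and after.

    Crude heuristic only; ru_maxrss is kilobytes on Linux (bytes on macOS),
    so normalise for your platform before trusting the threshold.
    """
    func()          # warm-up so allocator pools are already grown
    gc.collect()
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    for _ in range(iterations):
        func()
    gc.collect()
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return (after - before) * 1024 <= allowed_growth  # Linux: KB -> bytes
```

For bjoern itself, `func` would wrap a request against a running instance rather than a plain callable.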
Signed-off-by: Oleg Höfling <oleg.hoefling@gmail.com>
I will look into it; however, I'm not sure whether it is a task for unit tests.
Force-pushed from 3252372 to c0006eb.
…ded where applicable Signed-off-by: Oleg Höfling <oleg.hoefling@gmail.com>
Force-pushed from c0006eb to 5f4d11d.
Sorry for the delay! Locust is fine for me, or pretty much any other tool. Yes, it's more like load testing. Best case, we can load-test a bjoern instance and monitor memory use.
But some of the tests I also used for manual load/performance testing, for example when comparing requests/s for two revisions of bjoern. So if we could have a way to start an instance with a single test-case WSGI app, that would be awesome.
Are there any plans to merge this PR? It's great for helping development.
Yes, when the things I mentioned above are implemented.
I think one idea could be to merge this PR and open an issue (improvement) for the missing tests; if @hoefling is currently busy, other contributors could help with it. @jonashaag, what do you think?
No. That does not solve the problem that nobody is working on this PR at the moment. The current version is strictly less valuable than the one in the repo.
This PR (and the other one I have started working on for #121) was part of a bigger effort. However, I don't see any issues with merging the current state: IIRC the existing test scripts are not removed, so they are still usable for manual load tests.
You have a policy that 3rd-party application test suites must be automated? 😀
Closes #52.