perf: JSON-RPC: faster eventLoop: request buffer
#7795
Conversation
Before this commit we would not start another `getNextEvent()`
request before we had received a response to the previous one.
This commit creates a pool of `getNextEvent()` requests
so that the server always has a pending request
that it can readily respond to.
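The idea can be sketched like this (a minimal sketch; `rpc`, `handleEvent`, and the null shutdown signal are assumptions for illustration, not the actual `@deltachat/jsonrpc-client` code):

```typescript
// Sketch of a getNextEvent() request pool. `rpc` and `handleEvent`
// are illustrative stand-ins; getNextEvent() resolving to null
// signals shutdown here so the sketch can terminate.
type Event = { kind: string; [key: string]: unknown };

async function runEventLoop(
  rpc: { getNextEvent(): Promise<Event | null> },
  handleEvent: (event: Event) => void,
  poolSize = 20,
): Promise<void> {
  // Each worker keeps exactly one getNextEvent() request in flight,
  // so up to `poolSize` requests are always pending on the server
  // and it never has to wait for the client during an event burst.
  const worker = async (): Promise<void> => {
    for (;;) {
      const event = await rpc.getNextEvent();
      if (event === null) return; // shutdown
      handleEvent(event);
    }
  };
  await Promise.all(Array.from({ length: poolSize }, () => worker()));
}
```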
Measurements
I have tested this on Delta Chat desktop.
This seems to provide a measurable speedup in handling event bursts.
I did `console.time()` when we start
a `BackendRemote.rpc.maybeNetwork()` request
(which we do every time the main window is focused),
and a `console.timeEnd()` when we receive the first
"IDLE entering wait-on-remote state" entry, inside of
[`ipcBackend.on('json-rpc-message'`](https://github.com/deltachat/deltachat-desktop/blob/3846aef67c1644d8ec8b57b229329855a378ee4e/packages/target-electron/runtime-electron/runtime.ts#L52).
For these measurements I also disabled request-response logging
with `config['log-debug'] = false`.
Each such `maybeNetwork()` resulted in ~1000 `getNextEvent()` responses.
With the original event loop (without a pool) the average time
based on 150 measurements was 1152.28 ms.
With the new event loop with a pool of 20
based on 150 measurements it was 774.58 ms.
That is 67.22% of the original average duration.
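The measurement described above boils down to timing the gap between a trigger and the first matching log line. A self-contained sketch, with the IPC bridge stubbed by a plain `EventEmitter` (the real `ipcBackend` and `BackendRemote.rpc.maybeNetwork()` are Delta Chat desktop internals):

```typescript
import { EventEmitter } from "node:events";

// Stand-in for desktop's ipcBackend bridge; only used for this sketch.
const ipcBackend = new EventEmitter();

// Time from firing a trigger (e.g. BackendRemote.rpc.maybeNetwork())
// until the first "IDLE entering wait-on-remote state" log line
// arrives over the bridge, mirroring the console.time() measurement.
function measure(trigger: () => void): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();
    const onMessage = (message: string) => {
      if (message.includes("IDLE entering wait-on-remote state")) {
        ipcBackend.off("message", onMessage);
        resolve(performance.now() - start);
      }
    };
    ipcBackend.on("message", onMessage);
    trigger();
  });
}
```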
Related:
- deltachat/deltachat-desktop#5282
I'm not sure what exactly the problem is or how this pooling improves performance. We already have concurrent calls working at the JSON-RPC level, tested with a Python test. Maybe there are some large responses and just reading them blocks reception of events?

The other option is adding a batched API that fetches multiple events at once (up to some limit) and returns them all together. That way, as soon as you get a response to the event call, you get all the events that were in the queue at the moment the call was processed, and don't have to wait for every other response to be read. Getting all events at once also makes it easier to deduplicate them, e.g. if you have 10 events in the queue about some chat being modified, you are only interested in the last one.
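The dedup part of the batching idea could look roughly like this on the client side; the batched call itself is proposed rather than existing, and the event kinds and dedup key below are illustrative assumptions:

```typescript
type CoreEvent = { kind: string; chatId?: number };

// Deduplicate a batch returned by a hypothetical batched event call:
// for "chat modified"-style events, only the last one per chat
// matters, so earlier duplicates are dropped. Other event kinds are
// passed through untouched, preserving their order.
function dedupe(events: CoreEvent[]): CoreEvent[] {
  const lastIndex = new Map<string, number>(); // dedup key -> last position
  events.forEach((e, i) => {
    if (e.kind === "ChatModified" && e.chatId !== undefined) {
      lastIndex.set(`${e.kind}:${e.chatId}`, i);
    }
  });
  return events.filter((e, i) => {
    if (e.kind !== "ChatModified" || e.chatId === undefined) return true;
    return lastIndex.get(`${e.kind}:${e.chatId}`) === i;
  });
}
```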
IDK, but it just makes sense that there is a non-zero delay between the
I doubt there is any significant delay in passing an event from one process to another via stdio, even with JSON serialization. More likely the difference comes from different scheduling: if you get events faster than heavy API responses and have some debouncing logic in desktop, you may do less work.
Don't want to go off-topic, but I had an idea of removing the event queue from the Core, and instead only sending an event when there is a pending `getNextEvent()` request.
I implemented batching at #7825, it would be great if you could measure it in the same way. It also applies to Python.
The problem with this approach is that it may introduce reordering:
so far the event loop has always delivered events in the order they are produced, and tests depend on this. There are not many JS tests, but in general this kind of reordering may make tests flaky, and bots may assume that events are always generated in the same order, e.g. if you receive two messages, the event about the first message should be delivered before the event about the second message.
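For illustration, the hazard with multiple in-flight requests is ordinary promise-completion reordering: the order in which responses are handled need not match the order in which events were produced. A toy example, not the actual transport:

```typescript
// Two in-flight "requests" whose responses arrive in swapped order:
// event 1 is produced first but its response is delayed (e.g. stuck
// behind a large API response), so the handler observes event 2 first.
const respond = (event: number, delayMs: number): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(event), delayMs));

async function demo(): Promise<number[]> {
  const observed: number[] = [];
  const a = respond(1, 20).then((e) => void observed.push(e));
  const b = respond(2, 5).then((e) => void observed.push(e));
  await Promise.all([a, b]);
  return observed; // [2, 1]: delivery order differs from production order
}
```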
Ah, good catch. Our `core/deltachat-jsonrpc/src/api.rs` (lines 189 to 195 in 4ccd3cb; lines 72 to 82 in 53ebf2c).
OK, so this MR is not ready then, I'd say. I will try your MR.
Edit: request-response logging is now disabled with `logJsonrpcConnection = false` instead of `config['log-debug'] = false`.

TODO:
Please check that this doesn't break stuff.
I heard that there was some issue with Delta Chat bots
trying not to miss a single event.
If that's the case, or if in doubt, we can set
the default pool size to 1 instead of 20,
and only set it to 20 in Delta Chat Desktop.