Implementing a custom worker implies you still want the Dask scheduler to handle task scheduling. If you already have a distributed codebase, do you really need Dask for scheduling? You haven't described how your workers communicate with each other or coordinate, but assuming they can connect to one another and perform work, you could use Dask purely for bootstrapping: submit your legacy workers as long-running tasks to the Dask workers.
Hi all,
I have a complex use case where I need to integrate a Dask cluster with an existing legacy cluster of workers. These legacy workers consist of a large C++ codebase with their own protocol. My current idea is to create a custom Dask worker that would act as a proxy between the legacy workers and Dask.
I’ve been looking through worker.py and was surprised by the amount of code and the number of components involved. Am I approaching this the wrong way, or is implementing a custom Worker genuinely a high-effort task?
If there is a recommended pattern, extension point, or higher-level abstraction for this kind of integration, I’d really appreciate any pointers or examples.
Thank you.