Releases: dstackai/dstack
0.18.0
RunPod
The update adds the long-awaited integration with RunPod, a distributed GPU cloud that offers GPUs at affordable prices.
To use RunPod, specify your RunPod API key in `~/.dstack/server/config.yml`:

```yaml
projects:
- name: main
  backends:
  - type: runpod
    creds:
      type: api_key
      api_key: US9XTPDIV8AR42MMINY8TCKRB8S4E7LNRQ6CAUQ9
```

Once the server is restarted, go ahead and run workloads.
Clusters
Another major change with the update is the ability to run multi-node tasks over an interconnected cluster of instances.
```yaml
type: task
nodes: 2
commands:
  - git clone https://github.com/r4victor/pytorch-distributed-resnet.git
  - cd pytorch-distributed-resnet
  - mkdir -p data
  - cd data
  - wget -c --quiet https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
  - tar -xvzf cifar-10-python.tar.gz
  - cd ..
  - pip3 install -r requirements.txt torch
  - mkdir -p saved_models
  - torchrun --nproc_per_node=$DSTACK_GPUS_PER_NODE
    --node_rank=$DSTACK_NODE_RANK
    --nnodes=$DSTACK_NODES_NUM
    --master_addr=$DSTACK_MASTER_NODE_IP
    --master_port=8008 resnet_ddp.py
    --num_epochs 20
resources:
  gpu: 1
```

Currently supported providers for this feature include AWS, GCP, and Azure.
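The `DSTACK_*` environment variables shown in the configuration are set by dstack on every node of the cluster, which makes it possible to assemble the `torchrun` invocation programmatically. A minimal sketch (the example values below are illustrative, not produced by dstack):

```python
import os

# Example values dstack would set on the first of two nodes (illustrative):
os.environ.update({
    "DSTACK_GPUS_PER_NODE": "1",
    "DSTACK_NODE_RANK": "0",
    "DSTACK_NODES_NUM": "2",
    "DSTACK_MASTER_NODE_IP": "10.0.0.1",
})

def build_torchrun_args(script="resnet_ddp.py"):
    """Assemble the torchrun invocation from the DSTACK_* variables
    that dstack sets on every node of a multi-node task."""
    env = os.environ
    return [
        "torchrun",
        "--nproc_per_node=" + env["DSTACK_GPUS_PER_NODE"],
        "--node_rank=" + env["DSTACK_NODE_RANK"],
        "--nnodes=" + env["DSTACK_NODES_NUM"],
        "--master_addr=" + env["DSTACK_MASTER_NODE_IP"],
        "--master_port=8008",
        script,
        "--num_epochs", "20",
    ]

print(" ".join(build_torchrun_args()))
```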
Other
- The `commands` property is now not required for tasks and services if you use an `image` that has a default entrypoint configured.
- The permissions required for using `dstack` with GCP are more granular.
What's changed
- Add `username` filter to `/api/runs/list` by @r4victor in #1068
- Inherit core models from DualBaseModel by @r4victor in #967
- Fixed the YAML schema validation for `replicas` by @peterschmidt85 in #1055
- Improve the `server/config.yml` reference documentation by @peterschmidt85 in #1077
- Add the `runpod` backend by @Bihan in #1063
- Support JSON log handler by @TheBits in #1085
- Added lock to the `terminate_idle_instance` by @TheBits in #1081
- `dstack init` doesn't work with a remote Git repo by @peterschmidt85 in #1090
- Minor improvements of `dstack server` output by @peterschmidt85 in #1088
- Return an error information from `dstack-shim` by @TheBits in #1061
- Replace `RetryPolicy.limit` to `RetryPolicy.duration` by @TheBits in #1074
- Make `dstack version` configurable when deploying docs by @peterschmidt85 in #1095
- `dstack init` doesn't work with a local Git repo by @peterschmidt85 in #1096
- Fix infinite `create_instance()` on the `cudo` provider by @r4victor in #1082
- Do not update the `latest` Docker image and YAML scheme for pre-release builds by @peterschmidt85 in #1099
- Support multi-node tasks by @r4victor in #1103
- Make `commands` optional in run configurations by @jvstme in #1104
- Allow the `cudo` backend use non-gpu instances by @Bihan in #1092
- Make GCP permissions more granular by @r4victor in #1107
Full changelog: 0.17.0...0.18.0
0.17.0
Service auto-scaling
Previously, dstack always served services as single replicas. While a single replica is suitable for development, production services must scale automatically based on the load.
That's why in 0.17.0, we extended dstack with the capability to configure replicas (the number of replicas) as well as scaling (the auto-scaling policy).
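A service configured with both options might look like this. The `scaling` field names below are a sketch based on this release's feature description; check the dstack documentation for the authoritative schema:

```yaml
type: service
port: 8000
replicas: 1..4        # keep between 1 and 4 replicas
scaling:
  metric: rps         # scale on requests per second (illustrative)
  target: 10
```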
Regions and instance types
The update brings support for specifying regions and instance types (in dstack run and .dstack/profiles.yml)
Environment variables
Firstly, it's now possible to configure an environment variable in the configuration without hardcoding its value. Secondly, dstack run now inherits environment variables from the current process.
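For example, an environment variable can now be declared without a value, in which case `dstack run` passes it through from the current process. A sketch (`HF_TOKEN` is just an example name):

```yaml
type: task
env:
  - HF_TOKEN                                 # no value: inherited from the dstack run process
  - MODEL=mistralai/Mistral-7B-Instruct-v0.1
commands:
  - echo $MODEL
```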
For more details on these new features, check the changelog.
What's changed
- Support running multiple replicas for a service by @Egor-S in #986 and #1015
- Allow to specify `instance_type` via CLI and profiles by @r4victor in #1023
- Allow to specify regions via CLI and profiles by @r4victor in #947
- Allow specifying required env variables by @spott in #1003
- Allow configuring CA for gateways by @jvstme in #1022
- Support Python 3.12 by @peterschmidt85 in #1031
- The `shm_size` property in resources doesn't take effect by @peterschmidt85 in #1007
- Sometimes, runs get stuck at pulling by @TheBits in #1035
- `vastai` doesn't show any offers since `0.16.0` by @iRohith in #959
- It's not possible to configure projects other than `main` by @peterschmidt85 in #992
- Spot instances don't work on GCP by @peterschmidt85 in #996
New contributors
Full changelog: 0.16.5...0.17.0
0.16.5
0.16.4
CUDO Compute
The 0.16.4 update introduces the cudo backend, which allows running workloads with CUDO Compute, a cloud GPU marketplace.
To configure the `cudo` backend, you simply need to specify your CUDO Compute project ID and API key:

```yaml
projects:
- name: main
  backends:
  - type: cudo
    project_id: my-cudo-project
    creds:
      type: api_key
      api_key: 7487240a466624b48de22865589
```

Once it's done, you can restart the dstack server and use the dstack CLI or API to run workloads.
Limitations

- The `dstack gateway` feature is not yet compatible with `cudo`, but it is expected to be supported in version `0.17.0`, planned for release within a week.
- The `cudo` backend cannot yet be used with dstack Sky, but it will also be enabled within a week.
Full changelog: 0.16.3...0.16.4
0.16.3
Bug-fixes
- [Bug] The `shm_size` property in `resources` doesn't take effect #1006
- [Bug] It's not possible to configure projects other than `main` via `~/.dstack/server/config.yml` #991
- [Bug] Spot instances don't work on GCP if the username has upper case letters #975
Full changelog: 0.16.2...0.16.3
0.16.1
Improvements to dstack pool
- Change default idle duration for `dstack pool add` to `72h` #964
- Set the default spot policy in `dstack pool add` to `on-demand` #962
- Add pool support for `lambda`, `azure`, and `tensordock` #923
- Allow to pass idle duration and spot policy in `dstack pool add` #918
- `dstack run` does not respect pool-related `profiles.yml` parameters #949
Bug-fixes
- Runs submitted via Python API have no termination policy #955
- The `vastai` backend doesn't show any offers since `0.16.0` #958
- Handle permission error when adding Include to `~/.ssh/config` #937
- The SSH tunnel fails because of a messy `~/.ssh/config` #933
- The `PATH` is overridden when logging via SSH #930
- The SSH tunnel fails with `Too many authentication failures` #927
We've also updated our guide on how to add new backends. It's now available here.
New contributors
- @iRohith made their first contribution in #959
- @spott made their first contribution in #934
- @KevKibe made their first contribution in #917
Full Changelog: 0.16.0...0.16.1
0.16.0
Pools
The 0.16.0 release is the next major update. In addition to many bug fixes, it introduces pools: a new feature that enables a more efficient way to manage instance lifecycles and reuse instances across runs.
dstack run
Previously, when running a dev environment, task, or service, dstack provisioned an instance in a configured
backend, and upon completion of the run, deleted the instance.
Now, when using the dstack run command, it tries to reuse an instance from a pool. If no ready instance meets the
requirements, dstack automatically provisions a new one and adds it to the pool.
Once the workload finishes, the instance is marked as idle.
If the instance remains idle for the configured duration, dstack tears it down.
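The selection policy described above can be sketched in a few lines. All names here are illustrative, not dstack's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    gpu_memory_gb: int
    idle: bool = True

def pick_instance(pool, gpu_memory_gb):
    """Reuse an idle instance that meets the requirements;
    otherwise provision a new one and add it to the pool."""
    for inst in pool:
        if inst.idle and inst.gpu_memory_gb >= gpu_memory_gb:
            inst.idle = False                    # reuse: mark as busy
            return inst
    inst = Instance(gpu_memory_gb, idle=False)   # provision a new instance
    pool.append(inst)
    return inst

pool = [Instance(24), Instance(80)]
first = pick_instance(pool, 40)    # reuses the idle 80 GB instance
second = pick_instance(pool, 40)   # no idle match left: provisions a new one
```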
dstack pool
The dstack pool command allows for managing instances within pools.
To manually add an instance to a pool, use dstack pool add:
```shell
dstack pool add --gpu 80GB --idle-duration 1d
```

The `dstack pool add` command allows specifying resource requirements, along with the spot policy, idle duration, max price, retry policy, and other policies.
If no idle duration is configured, by default, dstack sets it to 72h.
To override it, use the --idle-duration DURATION argument.
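Duration strings like `72h` and `1d` can be parsed with a small helper. This is an illustrative sketch, not dstack's parser; the accepted units are an assumption based on the examples above:

```python
def parse_duration(value):
    """Parse durations like '72h' or '1d' into seconds.
    Supported units (assumed): s, m, h, d."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    return int(value[:-1]) * units[value[-1]]

assert parse_duration("72h") == parse_duration("3d")  # the 72h default equals 3 days
```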
To learn more about pools, refer to the official documentation. To learn more about 0.16.0, refer to the changelog.
What's changed
- Add dstack pool by @TheBits in #880
- Pools: fix failed instance status by @Egor-S in #889
- Add columns to
dstack pool showby @TheBits in #898 - Add submit stop by @TheBits in #895
- Add kubernetes logo by @plutov in #900
- Handle exceptions from backend.compute().get_offers by @r4victor in #904
- Fix process_finished_jobs parsing None job_model.job_provisioning_data by @r4victor in #905
- Validate run_name by @r4victor in #906
- Filter out private subnets when provisioning in custom aws vpc by @r4victor in #909
- Issue 894 rework failed instance status by @TheBits in #899
- Handle unexpected exceptions from run_job by @r4victor in #911
- Request GPU in docker with --gpus=all by @Egor-S in #913
- Issue 918 fix CLI arguments for `dstack pool add` by @TheBits in #919
- Added router tests for pools by @TheBits in #916
- Fix #921 by @TheBits in #922
New contributors
Full changelog: 0.15.1...0.16.0
0.15.2rc2
Bug-fixes
- Exclude private subnets when provisioning in AWS #908
- Ollama doesn't detect the GPU (requires `--gpus=all` instead of `--runtime=nvidia`) #910
Full changelog: 0.15.1...0.15.2rc2
0.15.1
Kubernetes
With the latest update, it's now possible to configure a Kubernetes backend. In this case, if you run a workload, dstack will provision infrastructure within your Kubernetes cluster. This may work with both self-managed and managed clusters.
Specifying a custom VPC for AWS
If you're using dstack with AWS, it's now possible to configure a vpc_name via ~/.dstack/server/config.yml.
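A minimal sketch of such a backend entry. `my-vpc` is a placeholder, and the `creds` block is an assumption following the other backend examples in these notes:

```yaml
projects:
- name: main
  backends:
  - type: aws
    vpc_name: my-vpc
    creds:
      type: default
```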
**Learn more about the new features in detail on the changelog page.**
What's changed
- Print total offers count in run plan by @Egor-S in #862
- Add OpenAPI reference to the docs by @Egor-S in #863
- Fixes #864 by pinning the APScheduler dep to < 4 by @tleyden in #867
- Support gateway creation for Kubernetes by @r4victor in #870
- Improve `get_latest_runner_build` by @Egor-S in #871
- Added ruff by @TheBits in #850
- Handle ResourceNotExistsError instead of 404 by @r4victor in #875
- Simplify Kubernetes backend config by @r4victor in #879
- Add SSH keys to GCP metadata by @Egor-S in #881
- Allow to configure VPC for an AWS backend by @r4victor in #883
New contributors
Full Changelog: 0.15.0...0.15.1
0.15.0
Resources
It is now possible to configure resources in the YAML configuration file:
```yaml
type: dev-environment
python: 3.11
ide: vscode
# (Optional) Configure `gpu`, `memory`, `disk`, etc
resources:
  gpu: 24GB
```

Supported properties include: `gpu`, `cpu`, `memory`, `disk`, and `shm_size`.

If you specify memory size, you can either specify an explicit size (e.g. `24GB`) or a range (e.g. `24GB..`, `24GB..80GB`, or `..80GB`).
The `gpu` property allows specifying not only memory size but also GPU names and their quantity. Examples: `A100` (one A100), `A10G,A100` (either A10G or A100), `A100:80GB` (one A100 of 80GB), `A100:2` (two A100), `24GB..40GB:2` (two GPUs between 24GB and 40GB), etc.
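The spec grammar above (colon-separated name, memory, and count segments) can be parsed with a short helper. This is an illustrative sketch, not dstack's actual parser:

```python
import re

def parse_gpu_spec(spec):
    """Parse GPU specs such as 'A100', 'A10G,A100', 'A100:80GB',
    'A100:2', or '24GB..40GB:2' into names, a memory range, and a count."""
    result = {"names": [], "memory": None, "count": 1}
    for part in spec.split(":"):
        if re.fullmatch(r"\d+", part):       # a bare number is the GPU count
            result["count"] = int(part)
        elif "GB" in part:                   # a memory size or range
            if ".." in part:
                lo, hi = part.split("..")
                result["memory"] = (lo or None, hi or None)
            else:
                result["memory"] = (part, part)
        else:                                # comma-separated GPU names
            result["names"] = part.split(",")
    return result

parse_gpu_spec("24GB..40GB:2")
# → {'names': [], 'memory': ('24GB', '40GB'), 'count': 2}
```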
Authorization in services
Service endpoints now require the `Authorization` header with `Bearer <dstack token>`. This also applies to the OpenAI-compatible endpoints.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com",
    api_key="<dstack token>"
)

completion = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    messages=[
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
    ]
)

print(completion.choices[0].message)
```

Authentication can be disabled by setting `auth` to `false` in the service configuration file.
OpenAI format in model mapping
Model mapping (required to enable the OpenAI-compatible endpoint) now supports `format: openai`.
For example, if you run vLLM using the OpenAI mode, it's possible to configure model mapping for it.
```yaml
type: service
python: "3.11"
env:
  - MODEL=NousResearch/Llama-2-7b-chat-hf
commands:
  - pip install vllm
  - python -m vllm.entrypoints.openai.api_server --model $MODEL --port 8000
port: 8000
resources:
  gpu: 24GB
model:
  format: openai
  type: chat
  name: NousResearch/Llama-2-7b-chat-hf
```

What's changed
- Configuration resources & ranges by @Egor-S in #844
- Range.str always returns a string by @Egor-S in #845
- Add infinity example by @deep-diver in #847
- error in documentation: use --url instead of --server by @promsoft in #852
- Support authorization on the gateway by @Egor-S in #851
- Implement Kubernetes backend by @r4victor in #853
- Add gpu support for kubernetes by @r4victor in #856
- Resources parse and store by @Egor-S in #857
- Use python3.11 in generate-json-schema by @r4victor in #859
- Implement OpenAI to OpenAI adapter for gateway by @Egor-S in #860
New contributors
- @deep-diver made their first contribution in #847
- @promsoft made their first contribution in #852
Full Changelog: 0.14.0...0.15.0