Commit bcc9339

Merge remote-tracking branch 'origin/master' into issue_3265_fix_files_in_place_update

2 parents 09c59ed + a172672

49 files changed: 1320 additions & 869 deletions

.github/workflows/build-docs.yml

Lines changed: 1 addition & 13 deletions

```diff
@@ -2,10 +2,6 @@ name: Build Docs
 
 on:
   workflow_call:
-    inputs:
-      release-tag:
-        type: string
-        required: false
 
 jobs:
   build-docs:
@@ -17,18 +13,10 @@ jobs:
           python-version: 3.11
       - name: Install dstack
         run: |
-          uv pip install examples/plugins/example_plugin_server
-          if [ -n "${{ inputs.release-tag }}" ]; then
-            uv pip install "dstack[server]==${{ inputs.release-tag }}"
-          else
-            uv pip install -e '.[server]'
-          fi
+          uv sync --extra server
      - name: Build
        run: |
-          uv pip install pillow cairosvg
           sudo apt-get update && sudo apt-get install -y libcairo2-dev libfreetype6-dev libffi-dev libjpeg-dev libpng-dev libz-dev
-          uv pip install mkdocs-material "mkdocs-material[imaging]" mkdocs-material-extensions mkdocs-redirects mkdocs-gen-files "mkdocstrings[python]" mkdocs-render-swagger-plugin --upgrade
-          uv pip install git+https://${{ secrets.GH_TOKEN }}@github.com/squidfunk/mkdocs-material-insiders.git
           uv run mkdocs build -s
      - uses: actions/upload-artifact@v4
        with:
```
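The key change above replaces several ad-hoc `uv pip install` invocations with a single `uv sync --extra server`, which installs the project plus its optional `server` dependency group into the project's `.venv`. As a rough sketch of how such an extra is declared (the group name matches the workflow, but the entries below are illustrative, not dstack's actual dependency list):

```toml
# Illustrative pyproject.toml fragment -- not dstack's real dependency list.
# `uv sync --extra server` resolves the base dependencies plus the
# "server" optional group below into the project's .venv.
[project]
name = "dstack"
dependencies = ["pyyaml", "requests"]

[project.optional-dependencies]
server = ["fastapi", "uvicorn"]
```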

.github/workflows/docs.yaml

Lines changed: 0 additions & 5 deletions

```diff
@@ -2,15 +2,10 @@ name: Build & Deploy Docs
 
 on:
   workflow_dispatch:
-    inputs:
-      release-tag:
-        description: "dstack version"
 
 jobs:
   build-docs:
     uses: ./.github/workflows/build-docs.yml
-    with:
-      release-tag: ${{ inputs.release-tag }}
     secrets: inherit
 
   deploy-docs:
```

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -52,7 +52,7 @@ Backends can be set up in `~/.dstack/server/config.yml` or through the [project
 
 For more details, see [Backends](https://dstack.ai/docs/concepts/backends).
 
-> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh) once the server is up.
+> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up.
 
 ##### Start the server
```

contributing/DEVELOPMENT.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -48,3 +48,7 @@ pyright -p .
 ## 6. Frontend
 
 See [FRONTEND.md](FRONTEND.md) for the details on how to build and develop the frontend.
+
+## 7. Documentation
+
+See [DOCS.md](DOCS.md) for the details on how to preview or build the documentation.
```

contributing/DOCS.md

Lines changed: 49 additions & 0 deletions

````diff
@@ -0,0 +1,49 @@
+# Documentation setup
+
+## 1. Clone the repo:
+
+```shell
+git clone https://github.com/dstackai/dstack
+cd dstack
+```
+
+## 2. Install uv:
+
+https://docs.astral.sh/uv/getting-started/installation
+
+```shell
+curl -LsSf https://astral.sh/uv/install.sh | sh
+```
+
+## 3. Install `dstack` with all extras and dev dependencies:
+
+> [!WARNING]
+> Building documentation requires `python_version >= 3.11`.
+
+```shell
+uv sync --all-extras
+```
+
+`dstack` will be installed into the project's `.venv` in editable mode.
+
+## 4. (Recommended) Install pre-commit hooks:
+
+Code formatting and linting can be done automatically on each commit with `pre-commit` hooks:
+
+```shell
+uv run pre-commit install
+```
+
+## 5. Preview documentation
+
+To preview the documentation, run the following command:
+
+```shell
+uv run mkdocs serve -w examples -s
+```
+
+If you want to build static files, you can use the following command:
+
+```shell
+uv run mkdocs build -s
+```
````
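The new DOCS.md warns that building the documentation requires `python_version >= 3.11`. A small, hypothetical pre-flight check (not part of the repo, purely illustrative) could verify this before attempting a build:

```python
# Hypothetical helper -- not part of dstack. Checks that the running
# interpreter satisfies the documented minimum for building the docs.
import sys


def meets_docs_requirement(version_info=sys.version_info, minimum=(3, 11)):
    """Return True if the interpreter's (major, minor) meets the minimum."""
    return tuple(version_info[:2]) >= minimum


if __name__ == "__main__":
    if not meets_docs_requirement():
        raise SystemExit("Building the docs requires Python >= 3.11")
```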

docker/server/entrypoint.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -10,7 +10,7 @@ fi
 DB_PATH="${HOME}/.dstack/server/data/sqlite.db"
 mkdir -p "$(dirname "$DB_PATH")"
 if [[ -z "${LITESTREAM_REPLICA_URL}" ]]; then
-    exec dstack server --host 0.0.0.0
+    dstack server --host 0.0.0.0
 else
     if [[ ! -f "$DB_PATH" ]]; then
         echo "Attempting Litestream restore..."
@@ -23,5 +23,5 @@ else
             fi
         fi
     fi
-    exec litestream replicate -exec "dstack server --host 0.0.0.0" "$DB_PATH" "$LITESTREAM_REPLICA_URL"
+    litestream replicate -exec "dstack server --host 0.0.0.0" "$DB_PATH" "$LITESTREAM_REPLICA_URL"
 fi
```
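This change drops `exec`, so the shell no longer replaces itself with the server process but stays around as its parent, presumably so the script can perform follow-up work after the server exits (one side effect worth noting: without `exec`, signals like SIGTERM are delivered to the shell rather than directly to the server). A minimal sketch of the behavioral difference, using subshells and `echo` rather than the actual entrypoint:

```shell
#!/usr/bin/env bash
# Minimal sketch of `exec` vs. plain invocation (not the real entrypoint).

with_exec() {
    # Inside the subshell, `exec` replaces the shell with `echo`,
    # so the second command never runs.
    ( exec echo "child ran"; echo "never reached" )
}

without_exec() {
    # Without `exec`, the shell waits for the child and then continues.
    ( echo "child ran"; echo "parent resumed after child" )
}

with_exec
without_exec
```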

docs/blog/posts/amd-on-tensorwave.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -15,7 +15,7 @@ to orchestrate AI containers with any AI cloud vendor, whether they provide on-d
 
 In this tutorial, we’ll walk you through how `dstack` can be used with
 [TensorWave :material-arrow-top-right-thin:{ .external }](https://tensorwave.com/){:target="_blank"} using
-[SSH fleets](../../docs/concepts/fleets.md#ssh).
+[SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).
 
 <img src="https://dstack.ai/static-assets/static-assets/images/dstack-tensorwave-v2.png" width="630"/>
 
@@ -235,6 +235,6 @@ Want to see how it works? Check out the video below:
 <iframe width="750" height="520" src="https://www.youtube.com/embed/b1vAgm5fCfE?si=qw2gYHkMjERohdad&rel=0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
 !!! info "What's next?"
-    1. See [SSH fleets](../../docs/concepts/fleets.md#ssh)
+    1. See [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets)
     2. Read about [dev environments](../../docs/concepts/dev-environments.md), [tasks](../../docs/concepts/tasks.md), and [services](../../docs/concepts/services.md)
     3. Join [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd)
```

docs/blog/posts/benchmark-amd-containers-and-partitions.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -122,7 +122,7 @@ The full, reproducible steps are available in our GitHub repository. Below is a
 
 #### Creating a fleet
 
-We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh) to manage the two-node cluster.
+We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets) to manage the two-node cluster.
 
 ```yaml
 type: fleet
````

docs/blog/posts/gh200-on-lambda.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -11,7 +11,7 @@ categories:
 # Supporting ARM and NVIDIA GH200 on Lambda
 
 The latest update to `dstack` introduces support for NVIDIA GH200 instances on [Lambda](../../docs/concepts/backends.md#lambda)
-and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh).
+and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).
 
 <img src="https://dstack.ai/static-assets/static-assets/images/dstack-arm--gh200-lambda-min.png" width="630"/>
 
@@ -78,7 +78,7 @@ $ dstack apply -f .dstack.yml
 !!! info "Retry policy"
     Note, if GH200s are not available at the moment, you can specify the [retry policy](../../docs/concepts/dev-environments.md#retry-policy) in your run configuration so that `dstack` can run the configuration once the GPU becomes available.
 
-> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh).
+> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets).
 
 !!! info "What's next?"
     1. Sign up with [Lambda :material-arrow-top-right-thin:{ .external }](https://cloud.lambda.ai/sign-up?_gl=1*1qovk06*_gcl_au*MTg2MDc3OTAyOS4xNzQyOTA3Nzc0LjE3NDkwNTYzNTYuMTc0NTQxOTE2MS4xNzQ1NDE5MTYw*_ga*MTE2NDM5MzI0My4xNzQyOTA3Nzc0*_ga_43EZT1FM6Q*czE3NDY3MTczOTYkbzM0JGcxJHQxNzQ2NzE4MDU2JGo1NyRsMCRoMTU0Mzg1NTU1OQ..){:target="_blank"}
```

docs/blog/posts/gpu-health-checks.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -55,7 +55,7 @@ For active checks today, you can run [NCCL tests](../../examples/clusters/nccl-t
 
 ## Supported backends
 
-Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh) where DCGM is installed and configured for background checks.
+Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) where DCGM is installed and configured for background checks.
 
 > Fleets created before version 0.19.22 need to be recreated to enable this feature.
```