Merged
7 changes: 3 additions & 4 deletions README.md
@@ -14,8 +14,7 @@

</div>

`dstack` is an open-source alternative to Kubernetes and Slurm, designed to simplify GPU allocation and AI workload
orchestration for ML teams across top clouds and on-prem clusters.
`dstack` is an open-source container orchestrator that streamlines AI workload orchestration and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.

#### Accelerators

@@ -32,8 +31,8 @@ orchestration for ML teams across top clouds and on-prem clusters.
## How does it work?

<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v8-dark.svg"/>
<img src="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v8.svg" width="750" />
<source media="(prefers-color-scheme: dark)" srcset="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v10-dark.svg"/>
<img src="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v10.svg" width="750" />
</picture>

### Installation
70 changes: 49 additions & 21 deletions docs/assets/stylesheets/landing.css
@@ -182,6 +182,24 @@
background: white;
}


.md-typeset .md-button--primary.shell span {
font-family: var(--md-code-font-family) !important;
font-size: 16px;
}

.md-header__buttons .md-button--primary.shell:before,
.md-typeset .md-button--primary.shell:before {
color: #e37cff;
position: relative;
top: 1px;
content: '$';
display: inline-block;
margin-right: 10px;
font-size: 16px;
font-family: var(--md-code-font-family) !important;
}

.md-header__buttons .md-button-secondary.github:before,
.md-typeset .md-button-secondary.github:before {
position: relative;
@@ -754,8 +772,8 @@
.tx-landing__major_feature h2 {
font-size: 1.7em;
max-width: 500px;
margin-top: 0.75em;
margin-bottom: 0.75em;
margin-top: 0;
margin-bottom: 1.5em;
background: black;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
@@ -777,12 +795,16 @@
width: 100%;
}

.tx-landing__major_feature .block.margin {
.tx-landing__major_feature .block.margin.right {
margin-right: 50px;
}

.tx-landing__major_feature .block.margin.left {
margin-left: 50px;
}

.tx-landing__major_feature .block.large {
width: 650px;
width: 700px;
max-width: 100%;
flex: 0 0 auto;
}
@@ -796,7 +818,7 @@
}

/*.tx-landing__trusted_by*/ [data-termynal] {
font-size: 14px;
font-size: 15px;
}

.tx-landing__trusted_by .termy {
@@ -911,7 +933,8 @@

@media screen and (min-width: 44.984375em) {
.supported_clouds {
grid-template-columns: repeat(4, 1fr) !important;
grid-template-columns: repeat(6, 1fr) !important;
row-gap: 20px;
}
}

@@ -924,18 +947,22 @@
.supported_clouds_item {
display: flex;
/* align-items: center; */
gap: 15px;
padding: 20px 26px;
gap: 10px;
padding: 21px;
/* border-radius: 2px; */
border: 0.5px solid black;
/* font-size: .85em; */
color: #2A292D !important;
line-height: 1.44;
aspect-ratio: 1;
/* aspect-ratio: 1; */
flex-direction: column;
font-weight: 300;
font-size: 85%;

align-items: center;
justify-content: center;


&:hover {
background: -webkit-linear-gradient(45deg, rgba(0, 42, 255, 0.05), rgba(0, 42, 255, 0.05), rgba(225, 101, 254, 0.08));
}
@@ -948,39 +975,40 @@

@media screen and (min-width: 44.984375em) {
.supported_clouds_item {
&:nth-child(1) {
border-right: none;
border-left: none;

&:nth-child(1), &:nth-child(7) {
border-top-left-radius: 3px;
border-bottom-left-radius: 3px;
}

&:nth-child(4) {
&:nth-child(6), &:nth-child(12) {
border-top-right-radius: 3px;
border-bottom-right-radius: 3px;
}

&:nth-child(9) {
border-bottom-left-radius: 3px;
}

&:nth-child(12) {
border-bottom-right-radius: 3px;
&:nth-child(1), &:nth-child(7) {
border-left: 0.5px solid black;
}

&:nth-child(4), &:nth-child(8), &:nth-child(11) {
&:nth-child(6), &:nth-child(12) {
border-right: 0.5px solid black;
}

&:nth-child(n+8) {
&:nth-child(n+0) {
border-bottom: 0.5px solid black;
}
}
}

@media screen and (max-width: 44.984375em) {
.supported_clouds_item {
&:nth-child(3), &:nth-child(6), &:nth-child(9), &:nth-child(11) {
&:nth-child(3), &:nth-child(6), &:nth-child(9), &:nth-child(12) {
border-right: 0.5px solid black;
}

&:nth-child(n+9) {
&:nth-child(n+10) {
border-bottom: 0.5px solid black;
}
}
60 changes: 25 additions & 35 deletions docs/docs/concepts/backends.md
@@ -1,19 +1,15 @@
# Backends

To use `dstack` with cloud providers, configure backends
via the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) file.
The server loads this file on startup.
Backends allow `dstack` to manage compute across various providers.
They can be configured via `~/.dstack/server/config.yml` (or through the [project settings page](../concepts/projects.md#backends) in the UI).

Alternatively, you can configure backends on the [project settings page](../concepts/projects.md#backends) via UI.
See below for examples of backend configurations.

> For using `dstack` with on-prem servers, no backend configuration is required.
> Use [SSH fleets](../concepts/fleets.md#ssh) instead once the server is up.
??? info "SSH fleets"
    No backend configuration is required to use `dstack` with on-prem servers.
    Once the server is up, use [SSH fleets](../concepts/fleets.md#ssh) instead.

Below examples of how to configure backends via `~/.dstack/server/config.yml`.

## Cloud providers

### AWS
## AWS

There are two ways to configure AWS: using an access key or using the default credentials.

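As a rough sketch (the resource identifiers below are placeholders, and the field names are assumed from typical `dstack` backend configurations — check the `config.yml` reference for the exact schema), an access-key setup might look like:

```yaml
# Hypothetical sketch of ~/.dstack/server/config.yml for the aws backend.
# access_key/secret_key values are placeholders.
projects:
- name: main
  backends:
  - type: aws
    creds:
      type: access_key
      access_key: AIZKISCVKUK...
      secret_key: QSbmpqJIUBn1...
```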
@@ -248,7 +244,7 @@ There are two ways to configure AWS: using an access key or using the default cr
* Docker is installed
* (For NVIDIA instances) NVIDIA/CUDA drivers and NVIDIA Container Toolkit are installed

### Azure
## Azure

There are two ways to configure Azure: using a client secret or using the default credentials.

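As a rough sketch (all IDs below are placeholders, and the field names are assumed from typical `dstack` backend configurations — check the `config.yml` reference for the exact schema), a client-secret setup might look like:

```yaml
# Hypothetical sketch of ~/.dstack/server/config.yml for the azure backend.
# subscription_id, tenant_id, and client credentials are placeholders.
projects:
- name: main
  backends:
  - type: azure
    subscription_id: 06c82ce3-...
    tenant_id: f84ca8df-...
    creds:
      type: client
      client_id: acf3f73a-...
      client_secret: 1Kb8Q~...
```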
@@ -399,7 +395,7 @@ There are two ways to configure Azure: using a client secret or using the defaul
Using private subnets assumes that both the `dstack` server and users can access the configured VPC's private subnets.
Additionally, private subnets must have outbound internet connectivity provided by [NAT Gateway or other mechanism](https://learn.microsoft.com/en-us/azure/nat-gateway/nat-overview).

### GCP
## GCP

There are two ways to configure GCP: using a service account or using the default credentials.

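As a rough sketch (the project ID and key-file path below are placeholders, and the field names are assumed from typical `dstack` backend configurations — check the `config.yml` reference for the exact schema), a service-account setup might look like:

```yaml
# Hypothetical sketch of ~/.dstack/server/config.yml for the gcp backend.
# project_id and the service-account key filename are placeholders.
projects:
- name: main
  backends:
  - type: gcp
    project_id: my-gcp-project
    creds:
      type: service_account
      filename: ~/.dstack/server/gcp-key.json
```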
@@ -583,7 +579,7 @@ gcloud projects list --format="json(projectId)"
Using private subnets assumes that both the `dstack` server and users can access the configured VPC's private subnets.
Additionally, [Cloud NAT](https://cloud.google.com/nat/docs/overview) must be configured to provide access to external resources for provisioned instances.

### Lambda
## Lambda

Log into your [Lambda Cloud :material-arrow-top-right-thin:{ .external }](https://lambdalabs.com/service/gpu-cloud) account, click API keys in the sidebar, and then click the `Generate API key`
button to create a new API key.
@@ -604,7 +600,7 @@ projects:

</div>

### Nebius
## Nebius

Log into your [Nebius AI Cloud :material-arrow-top-right-thin:{ .external }](https://console.eu.nebius.com/) account, navigate to Access, and select Service Accounts. Create a service account, add it to the editors group, and upload its authorized key.

@@ -673,7 +669,7 @@ projects:



### RunPod
## RunPod

Log into your [RunPod :material-arrow-top-right-thin:{ .external }](https://www.runpod.io/console/) console, click Settings in the sidebar, expand the `API Keys` section, and click
the button to create a Read & Write key.
@@ -731,7 +727,7 @@ projects:

</div>

### Vultr
## Vultr

Log into your [Vultr :material-arrow-top-right-thin:{ .external }](https://www.vultr.com/) account, click `Account` in the sidebar, select `API`, find the `Personal Access Token` panel and click the `Enable API` button. In the `Access Control` panel, allow API requests from all addresses or from the subnet where your `dstack` server is deployed.

@@ -751,7 +747,7 @@ projects:

</div>

### Vast.ai
## Vast.ai

Log into your [Vast.ai :material-arrow-top-right-thin:{ .external }](https://cloud.vast.ai/) account, click Account in the sidebar, and copy your
API Key.
@@ -774,7 +770,7 @@ projects:

Also, the `vastai` backend supports on-demand instances only; spot instance support is coming soon.

<!-- ### TensorDock
<!-- ## TensorDock

Log into your [TensorDock :material-arrow-top-right-thin:{ .external }](https://dashboard.tensordock.com/) account, click Developers in the sidebar, and use the `Create an Authorization` section to create a new authorization key.

@@ -797,7 +793,7 @@ projects:

The `tensordock` backend supports on-demand instances only. Spot instance support coming soon. -->

### CUDO
## CUDO

Log into your [CUDO Compute :material-arrow-top-right-thin:{ .external }](https://compute.cudo.org/) account, click API keys in the sidebar, and click the `Create an API key` button.

@@ -818,7 +814,7 @@ projects:

</div>

### OCI
## OCI

There are two ways to configure OCI: using client credentials or using the default credentials.

@@ -892,7 +888,7 @@ There are two ways to configure OCI: using client credentials or using the defau
compartment_id: ocid1.compartment.oc1..aaaaaaaa
```

### DataCrunch
## DataCrunch

Log into your [DataCrunch :material-arrow-top-right-thin:{ .external }](https://cloud.datacrunch.io/) account, click Keys in the sidebar, find `REST API Credentials` area and then click the `Generate Credentials` button.

@@ -913,7 +909,7 @@ projects:

</div>

### CloudRift
## CloudRift

Log into your [CloudRift :material-arrow-top-right-thin:{ .external }](https://console.cloudrift.ai/) console, click `API Keys` in the sidebar and click the button to create a new API key.

@@ -935,14 +931,7 @@ projects:

</div>

## On-prem servers

### SSH fleets

> For using `dstack` with on-prem servers, no backend configuration is required.
> See [SSH fleets](fleets.md#ssh) for more details.

### Kubernetes
## Kubernetes

To configure a Kubernetes backend, specify the path to the kubeconfig file,
and the port that `dstack` can use for proxying SSH traffic.
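As a rough sketch (the kubeconfig path and port below are placeholders, and the field names are assumed from typical `dstack` backend configurations — check the `config.yml` reference for the exact schema), a Kubernetes setup might look like:

```yaml
# Hypothetical sketch of ~/.dstack/server/config.yml for the kubernetes backend.
# The kubeconfig filename and the SSH proxy port are placeholders.
projects:
- name: main
  backends:
  - type: kubernetes
    kubeconfig:
      filename: ~/.kube/config
    networking:
      ssh_port: 32000  # any port dstack may use to proxy SSH traffic
```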
@@ -1035,8 +1024,9 @@ In case of a self-managed cluster, also specify the IP address of any node in th
## dstack Sky

If you're using [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"},
backends are pre-configured to use compute from `dstack`'s marketplace.

You can reconfigure backends via the UI, to use your own cloud accounts instead.
backends come pre-configured to use compute from the `dstack` marketplace. However, you can update the configuration via the UI
to use your own cloud accounts instead.

[//]: # (TODO: Add link to the server config reference page)
!!! info "What's next"
1. See the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) reference
2. Check [Projects](../concepts/projects.md)
6 changes: 2 additions & 4 deletions docs/docs/index.md
@@ -1,9 +1,7 @@
# What is dstack?

`dstack` is a streamlined alternative to Kubernetes and Slurm, specifically designed for AI. It simplifies container orchestration
for AI workloads both in the cloud and on-prem, speeding up the development, training, and deployment of AI models.

`dstack` is easy to use with any cloud providers as well as on-prem servers.
`dstack` is an open-source container orchestrator that streamlines AI workload orchestration
and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.

#### Accelerators

15 changes: 5 additions & 10 deletions docs/docs/installation/index.md
@@ -1,25 +1,20 @@
# Installation

[//]: # (??? info "dstack Sky")
[//]: # ( If you don't want to host the `dstack` server yourself or would like to access GPU from the `dstack` marketplace, you can use)
[//]: # ( `dstack`'s hosted version, proceed to [dstack Sky]&#40;#dstack-sky&#41;.)

To use the open-source version of `dstack` with your own cloud accounts or on-prem clusters, follow this guide.

> If you don't want to host the `dstack` server (or want to access GPU marketplace),
> skip installation and proceed to [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}.

## Set up the server

### (Optional) Configure backends

To use `dstack` with cloud providers, configure backends
via the `~/.dstack/server/config.yml` file.
Backends allow `dstack` to manage compute across various providers.
They can be configured via `~/.dstack/server/config.yml` (or through the [project settings page](../concepts/projects.md#backends) in the UI).

For more details on how to configure backends, check [Backends](../concepts/backends.md).

> For using `dstack` with on-prem servers, create [SSH fleets](../concepts/fleets.md#ssh)
> once the server is up.
??? info "SSH fleets"
    No backend configuration is required to use `dstack` with on-prem servers; create [SSH fleets](../concepts/fleets.md#ssh)
    once the server is up.

### Start the server

4 changes: 2 additions & 2 deletions docs/examples.md
@@ -230,7 +230,7 @@ hide:
</a>
</div>

## Misc
<!-- ## Misc

<div class="tx-landing__highlights_grid">
<a href="/examples/misc/docker-compose"
@@ -243,4 +243,4 @@ hide:
Use Docker and Docker Compose inside runs
</p>
</a>
</div>
</div> -->