diff --git a/README.md b/README.md
index 53a4a899a0..e483922690 100644
--- a/README.md
+++ b/README.md
@@ -14,8 +14,7 @@
-`dstack` is an open-source alternative to Kubernetes and Slurm, designed to simplify GPU allocation and AI workload
-orchestration for ML teams across top clouds and on-prem clusters.
+`dstack` is an open-source container orchestrator that streamlines AI workloads and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.
#### Accelerators
@@ -32,8 +31,8 @@ orchestration for ML teams across top clouds and on-prem clusters.
## How does it work?
-
-
+
+
### Installation
diff --git a/docs/assets/stylesheets/landing.css b/docs/assets/stylesheets/landing.css
index fd16dfa33b..9845343928 100644
--- a/docs/assets/stylesheets/landing.css
+++ b/docs/assets/stylesheets/landing.css
@@ -182,6 +182,24 @@
background: white;
}
+
+.md-typeset .md-button--primary.shell span {
+ font-family: var(--md-code-font-family) !important;
+ font-size: 16px;
+}
+
+.md-header__buttons .md-button--primary.shell:before,
+.md-typeset .md-button--primary.shell:before {
+ color: #e37cff;
+ position: relative;
+ top: 1px;
+ content: '$';
+ display: inline-block;
+ margin-right: 10px;
+ font-size: 16px;
+ font-family: var(--md-code-font-family) !important;
+}
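+/* Assumed usage — this markup is illustrative, not taken from the templates.
+   The :before rules above prepend a purple "$" prompt to a button like:
+   <a class="md-button md-button--primary shell"><span>pip install dstack</span></a> */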
+
.md-header__buttons .md-button-secondary.github:before,
.md-typeset .md-button-secondary.github:before {
position: relative;
@@ -754,8 +772,8 @@
.tx-landing__major_feature h2 {
font-size: 1.7em;
max-width: 500px;
- margin-top: 0.75em;
- margin-bottom: 0.75em;
+ margin-top: 0;
+ margin-bottom: 1.5em;
background: black;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
@@ -777,12 +795,16 @@
width: 100%;
}
-.tx-landing__major_feature .block.margin {
+.tx-landing__major_feature .block.margin.right {
margin-right: 50px;
}
+.tx-landing__major_feature .block.margin.left {
+ margin-left: 50px;
+}
+
.tx-landing__major_feature .block.large {
- width: 650px;
+ width: 700px;
max-width: 100%;
flex: 0 0 auto;
}
@@ -796,7 +818,7 @@
}
/*.tx-landing__trusted_by*/ [data-termynal] {
- font-size: 14px;
+ font-size: 15px;
}
.tx-landing__trusted_by .termy {
@@ -911,7 +933,8 @@
@media screen and (min-width: 44.984375em) {
.supported_clouds {
- grid-template-columns: repeat(4, 1fr) !important;
+ grid-template-columns: repeat(6, 1fr) !important;
+ row-gap: 20px;
}
}
@@ -924,18 +947,22 @@
.supported_clouds_item {
display: flex;
/* align-items: center; */
- gap: 15px;
- padding: 20px 26px;
+ gap: 10px;
+ padding: 21px;
/* border-radius: 2px; */
border: 0.5px solid black;
/* font-size: .85em; */
color: #2A292D !important;
line-height: 1.44;
- aspect-ratio: 1;
+ /* aspect-ratio: 1; */
flex-direction: column;
font-weight: 300;
font-size: 85%;
+ align-items: center;
+ justify-content: center;
+
&:hover {
background: -webkit-linear-gradient(45deg, rgba(0, 42, 255, 0.05), rgba(0, 42, 255, 0.05), rgba(225, 101, 254, 0.08));
}
@@ -948,27 +975,28 @@
@media screen and (min-width: 44.984375em) {
.supported_clouds_item {
- &:nth-child(1) {
+ border-right: none;
+ border-left: none;
+
+ &:nth-child(1), &:nth-child(7) {
border-top-left-radius: 3px;
+ border-bottom-left-radius: 3px;
}
- &:nth-child(4) {
+ &:nth-child(6), &:nth-child(12) {
border-top-right-radius: 3px;
+ border-bottom-right-radius: 3px;
}
- &:nth-child(9) {
- border-bottom-left-radius: 3px;
- }
-
- &:nth-child(12) {
- border-bottom-right-radius: 3px;
+ &:nth-child(1), &:nth-child(7) {
+ border-left: 0.5px solid black;
}
- &:nth-child(4), &:nth-child(8), &:nth-child(11) {
+ &:nth-child(6), &:nth-child(12) {
border-right: 0.5px solid black;
}
- &:nth-child(n+8) {
+ &:nth-child(n) {
border-bottom: 0.5px solid black;
}
}
@@ -976,11 +1004,11 @@
@media screen and (max-width: 44.984375em) {
.supported_clouds_item {
- &:nth-child(3), &:nth-child(6), &:nth-child(9), &:nth-child(11) {
+ &:nth-child(3), &:nth-child(6), &:nth-child(9), &:nth-child(12) {
border-right: 0.5px solid black;
}
- &:nth-child(n+9) {
+ &:nth-child(n+10) {
border-bottom: 0.5px solid black;
}
}
diff --git a/docs/docs/concepts/backends.md b/docs/docs/concepts/backends.md
index 21208d74b9..fe9d7df74e 100644
--- a/docs/docs/concepts/backends.md
+++ b/docs/docs/concepts/backends.md
@@ -1,19 +1,15 @@
# Backends
-To use `dstack` with cloud providers, configure backends
-via the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) file.
-The server loads this file on startup.
+Backends allow `dstack` to manage compute across various providers.
+They can be configured via `~/.dstack/server/config.yml` (or through the [project settings page](../concepts/projects.md#backends) in the UI).
-Alternatively, you can configure backends on the [project settings page](../concepts/projects.md#backends) via UI.
+See below for examples of backend configurations.
-> For using `dstack` with on-prem servers, no backend configuration is required.
-> Use [SSH fleets](../concepts/fleets.md#ssh) instead once the server is up.
+??? info "SSH fleets"
+ To use `dstack` with on-prem servers, no backend configuration is required.
+ Use [SSH fleets](../concepts/fleets.md#ssh) instead once the server is up.
-Below examples of how to configure backends via `~/.dstack/server/config.yml`.
-
-## Cloud providers
-
-### AWS
+## AWS
There are two ways to configure AWS: using an access key or using the default credentials.
@@ -248,7 +244,7 @@ There are two ways to configure AWS: using an access key or using the default cr
* Docker is installed
* (For NVIDIA instances) NVIDIA/CUDA drivers and NVIDIA Container Toolkit are installed
-### Azure
+## Azure
There are two ways to configure Azure: using a client secret or using the default credentials.
@@ -399,7 +395,7 @@ There are two ways to configure Azure: using a client secret or using the defaul
Using private subnets assumes that both the `dstack` server and users can access the configured VPC's private subnets.
Additionally, private subnets must have outbound internet connectivity provided by [NAT Gateway or other mechanism](https://learn.microsoft.com/en-us/azure/nat-gateway/nat-overview).
-### GCP
+## GCP
There are two ways to configure GCP: using a service account or using the default credentials.
@@ -583,7 +579,7 @@ gcloud projects list --format="json(projectId)"
Using private subnets assumes that both the `dstack` server and users can access the configured VPC's private subnets.
Additionally, [Cloud NAT](https://cloud.google.com/nat/docs/overview) must be configured to provide access to external resources for provisioned instances.
-### Lambda
+## Lambda
Log into your [Lambda Cloud :material-arrow-top-right-thin:{ .external }](https://lambdalabs.com/service/gpu-cloud) account, click API keys in the sidebar, and then click the `Generate API key`
button to create a new API key.
@@ -604,7 +600,7 @@ projects:
-### Nebius
+## Nebius
Log into your [Nebius AI Cloud :material-arrow-top-right-thin:{ .external }](https://console.eu.nebius.com/) account, navigate to Access, and select Service Accounts. Create a service account, add it to the editors group, and upload its authorized key.
@@ -673,7 +669,7 @@ projects:
-### RunPod
+## RunPod
Log into your [RunPod :material-arrow-top-right-thin:{ .external }](https://www.runpod.io/console/) console, click Settings in the sidebar, expand the `API Keys` section, and click
the button to create a Read & Write key.
@@ -731,7 +727,7 @@ projects:
-### Vultr
+## Vultr
Log into your [Vultr :material-arrow-top-right-thin:{ .external }](https://www.vultr.com/) account, click `Account` in the sidebar, select `API`, find the `Personal Access Token` panel and click the `Enable API` button. In the `Access Control` panel, allow API requests from all addresses or from the subnet where your `dstack` server is deployed.
@@ -751,7 +747,7 @@ projects:
-### Vast.ai
+## Vast.ai
Log into your [Vast.ai :material-arrow-top-right-thin:{ .external }](https://cloud.vast.ai/) account, click Account in the sidebar, and copy your
API Key.
@@ -774,7 +770,7 @@ projects:
Also, the `vastai` backend supports on-demand instances only. Spot instance support coming soon.
-
-### CUDO
+## CUDO
Log into your [CUDO Compute :material-arrow-top-right-thin:{ .external }](https://compute.cudo.org/) account, click API keys in the sidebar, and click the `Create an API key` button.
@@ -818,7 +814,7 @@ projects:
-### OCI
+## OCI
There are two ways to configure OCI: using client credentials or using the default credentials.
@@ -892,7 +888,7 @@ There are two ways to configure OCI: using client credentials or using the defau
compartment_id: ocid1.compartment.oc1..aaaaaaaa
```
-### DataCrunch
+## DataCrunch
Log into your [DataCrunch :material-arrow-top-right-thin:{ .external }](https://cloud.datacrunch.io/) account, click Keys in the sidebar, find `REST API Credentials` area and then click the `Generate Credentials` button.
@@ -913,7 +909,7 @@ projects:
-### CloudRift
+## CloudRift
Log into your [CloudRift :material-arrow-top-right-thin:{ .external }](https://console.cloudrift.ai/) console, click `API Keys` in the sidebar and click the button to create a new API key.
@@ -935,14 +931,7 @@ projects:
-## On-prem servers
-
-### SSH fleets
-
-> For using `dstack` with on-prem servers, no backend configuration is required.
-> See [SSH fleets](fleets.md#ssh) for more details.
-
-### Kubernetes
+## Kubernetes
To configure a Kubernetes backend, specify the path to the kubeconfig file,
and the port that `dstack` can use for proxying SSH traffic.
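+
+A minimal sketch of such an entry in `~/.dstack/server/config.yml` (field names are assumptions based on this description — consult the server config reference for the exact schema):
+
+```yaml
+projects:
+  - name: main
+    backends:
+      - type: kubernetes
+        kubeconfig:
+          filename: ~/.kube/config
+        networking:
+          ssh_port: 32000  # port dstack can use for proxying SSH traffic
+```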
@@ -1035,8 +1024,9 @@ In case of a self-managed cluster, also specify the IP address of any node in th
## dstack Sky
If you're using [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"},
-backends are pre-configured to use compute from `dstack`'s marketplace.
-
-You can reconfigure backends via the UI, to use your own cloud accounts instead.
+backends come pre-configured to use compute from the dstack marketplace. However, you can update the configuration via the UI
+to use your own cloud accounts instead.
-[//]: # (TODO: Add link to the server config reference page)
+!!! info "What's next"
+ 1. See the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) reference
+ 2. Check [Projects](../concepts/projects.md)
diff --git a/docs/docs/index.md b/docs/docs/index.md
index cbc3d87f21..c32a2c3264 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,9 +1,7 @@
# What is dstack?
-`dstack` is a streamlined alternative to Kubernetes and Slurm, specifically designed for AI. It simplifies container orchestration
-for AI workloads both in the cloud and on-prem, speeding up the development, training, and deployment of AI models.
-
-`dstack` is easy to use with any cloud providers as well as on-prem servers.
+`dstack` is an open-source container orchestrator that streamlines AI workloads
+and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.
#### Accelerators
diff --git a/docs/docs/installation/index.md b/docs/docs/installation/index.md
index 72dea0b0b9..85819ba9d3 100644
--- a/docs/docs/installation/index.md
+++ b/docs/docs/installation/index.md
@@ -1,11 +1,5 @@
# Installation
-[//]: # (??? info "dstack Sky")
-[//]: # ( If you don't want to host the `dstack` server yourself or would like to access GPU from the `dstack` marketplace, you can use)
-[//]: # ( `dstack`'s hosted version, proceed to [dstack Sky](#dstack-sky).)
-
-To use the open-source version of `dstack` with your own cloud accounts or on-prem clusters, follow this guide.
-
> If you don't want to host the `dstack` server (or want to access GPU marketplace),
> skip installation and proceed to [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}.
@@ -13,13 +7,14 @@ To use the open-source version of `dstack` with your own cloud accounts or on-pr
### (Optional) Configure backends
-To use `dstack` with cloud providers, configure backends
-via the `~/.dstack/server/config.yml` file.
+Backends allow `dstack` to manage compute across various providers.
+They can be configured via `~/.dstack/server/config.yml` (or through the [project settings page](../concepts/projects.md#backends) in the UI).
For more details on how to configure backends, check [Backends](../concepts/backends.md).
-> For using `dstack` with on-prem servers, create [SSH fleets](../concepts/fleets.md#ssh)
-> once the server is up.
+??? info "SSH fleets"
+ To use `dstack` with on-prem servers, create [SSH fleets](../concepts/fleets.md#ssh)
+ once the server is up.
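+
+For example, a minimal `~/.dstack/server/config.yml` with a single AWS backend might look like this (the credential values are placeholders):
+
+```yaml
+projects:
+  - name: main
+    backends:
+      - type: aws
+        creds:
+          type: access_key
+          access_key: AKIA...
+          secret_key: ...
+```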
### Start the server
diff --git a/docs/examples.md b/docs/examples.md
index cb2bd9e558..dd93af7532 100644
--- a/docs/examples.md
+++ b/docs/examples.md
@@ -230,7 +230,7 @@ hide:
-## Misc
+
diff --git a/docs/overrides/header-2.html b/docs/overrides/header-2.html
index ae581b969f..2c7e676796 100644
--- a/docs/overrides/header-2.html
+++ b/docs/overrides/header-2.html
@@ -61,8 +61,8 @@
{% endif %}-->
- dstack is an open-source alternative to
- Kubernetes and Slurm, designed
- to simplify GPU allocation and AI workload orchestration
- for ML teams across top clouds, on-prem clusters, and accelerators.
+
+ dstack is an open-source container orchestrator that streamlines AI workloads
+ and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.
+
- dstack natively integrates with top GPU clouds, streamlining the
- provisioning, allocation, and utilization of cloud GPUs and high-performance interconnected
- clusters.
+ dstack redefines the container orchestration layer for AI workloads,
+ tailoring the developer experience for ML teams while keeping it open and vendor-agnostic.
- dstack provides a unified interface on top of GPU
- clouds, simplifying development, training, and deployment for ML teams.
+ dstack natively integrates with leading GPU clouds
+ and on-prem clusters, regardless of hardware vendor.
+ dstack manages instances and clusters through native GPU cloud integrations, provisioning them efficiently with its built-in scheduler.
+ It also runs smoothly on Kubernetes if that fits your environment.
+
+
+
+ In both cases, dstack acts as an enhanced control plane, making AI compute orchestration more efficient.
+
- Whether you have an on-prem cluster of GPU-equipped bare-metal machines or a pre-provisioned
- cluster of GPU-enabled VMs, you just need to list the hostnames and SSH credentials of the hosts
- to add the cluster as a fleet for running any AI workload.
+
+ Bare-metal and manually provisioned clusters can be connected to dstack using SSH fleets.
+
Before running training jobs or deploying model endpoints, ML engineers often experiment with
their code in a desktop IDE while using cloud or on-prem GPU machines.
@@ -246,28 +222,18 @@