diff --git a/README.md b/README.md
index 766c35c2bc..1f8ec4296f 100644
--- a/README.md
+++ b/README.md
@@ -14,26 +14,28 @@
-`dstack` is an open-source container orchestrator that simplifies workload orchestration and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.
+`dstack` provides a unified control plane for running development, training, and inference on GPUs — across cloud VMs, Kubernetes, or on-prem clusters. It helps your team avoid vendor lock-in and reduce GPU costs.
#### Accelerators
`dstack` supports `NVIDIA`, `AMD`, `Google TPU`, `Intel Gaudi`, and `Tenstorrent` accelerators out of the box.
## Latest news ✨
+- [2025/09] [dstack 0.19.27: Offers UI, Digital Ocean and AMD Developer Cloud](https://github.com/dstackai/dstack/releases/tag/0.19.27)
+- [2025/08] [dstack 0.19.26: Repos – explicit repo configuration via YAML](https://github.com/dstackai/dstack/releases/tag/0.19.26)
+- [2025/08] [dstack 0.19.25: `dstack offer` CLI command](https://github.com/dstackai/dstack/releases/tag/0.19.25)
+- [2025/08] [dstack 0.19.22: Service probes, GPU health-checks, Tenstorrent Galaxy, Secrets UI](https://github.com/dstackai/dstack/releases/tag/0.19.22)
+- [2025/07] [dstack 0.19.21: Scheduled tasks](https://github.com/dstackai/dstack/releases/tag/0.19.21)
- [2025/07] [dstack 0.19.17: Secrets, Files, Rolling deployment](https://github.com/dstackai/dstack/releases/tag/0.19.17)
- [2025/06] [dstack 0.19.16: Docker in Docker, CloudRift](https://github.com/dstackai/dstack/releases/tag/0.19.16)
- [2025/06] [dstack 0.19.13: InfiniBand support in default images](https://github.com/dstackai/dstack/releases/tag/0.19.13)
- [2025/06] [dstack 0.19.12: Simplified use of MPI](https://github.com/dstackai/dstack/releases/tag/0.19.12)
-- [2025/05] [dstack 0.19.10: Priorities](https://github.com/dstackai/dstack/releases/tag/0.19.10)
-- [2025/05] [dstack 0.19.8: Nebius clusters, GH200 on Lambda](https://github.com/dstackai/dstack/releases/tag/0.19.8)
-- [2025/04] [dstack 0.19.6: Tenstorrent, Plugins](https://github.com/dstackai/dstack/releases/tag/0.19.6)
## How does it work?
+
- dstack gives your team a single control plane to run development, training, and inference
- jobs on GPU—whether on hyperscalers, neoclouds, or your on-prem hardware. Avoid vendor lock-in and minimize GPU spend.
+ dstack provides a unified control plane for running development, training, and inference
+ on GPUs — across cloud VMs, Kubernetes, or on-prem clusters. It helps your team avoid vendor lock-in and reduce GPU
+ costs.
@@ -74,8 +75,8 @@
- dstack natively provisions GPU instances and clusters across your preferred cloud
- providers—maximizing efficiency and minimizing overhead.
+ dstack natively integrates with top GPU clouds—automating cluster provisioning and
+ workload orchestration to maximize efficiency and minimize overhead.
- Connect GPU clouds directly to dstack natively, or run dstack on top of Kubernetes if needed.
+ It can provision and manage VM clusters through native integrations or via Kubernetes.
- Have bare-metal servers or on-prem GPU clusters? dstack makes it easy to integrate them
- and manage everything alongside your cloud resources.
+ For provisioned Kubernetes clusters, connect them to dstack using the Kubernetes backend.
+ If you run vanilla bare-metal servers or VMs without Kubernetes, use SSH fleets
+ instead.
-With SSH fleets, you can connect any existing cluster in minutes. Once added to dstack,
-it's a first-class resource — available for dev environments, tasks, and
-services.
+
+ Either way, connecting existing on-prem clusters to dstack takes just minutes.
+
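As a hedged illustration of what connecting on-prem machines might look like, the sketch below follows dstack's documented fleet YAML schema; the fleet name, user, key path, and host addresses are placeholders:

```yaml
# Illustrative SSH fleet config — name, user, and hosts are placeholders.
type: fleet
name: on-prem-fleet
# Treat the hosts as one interconnected cluster
placement: cluster
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

Applying such a file (e.g. with `dstack apply`) would register the machines with the server, after which they can back dev environments, tasks, and services like any other fleet.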
Before training or deployment, ML engineers explore and debug their code.
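For exploration and debugging, dstack uses dev environment configurations. A minimal sketch, assuming dstack's documented YAML schema (the name, Python version, and GPU size are illustrative):

```yaml
# Illustrative dev environment config — values are placeholders.
type: dev-environment
name: vscode-dev
python: "3.12"
# Open the environment in the desktop IDE
ide: vscode
resources:
  gpu: 24GB
```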
@@ -247,7 +273,7 @@
 Move from single-instance experiments to multi-node distributed training without friction.
 Run complex training with simple config.
-Tasks
+Clusters
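A multi-node training run can be expressed as a task with `nodes` set. This is a hedged sketch using dstack's documented task schema and the `DSTACK_*` environment variables it injects; the script name and resource sizes are placeholders:

```yaml
# Illustrative multi-node task config — script and sizes are placeholders.
type: task
name: train-distrib
# Number of nodes in the distributed run
nodes: 2
commands:
  - pip install -r requirements.txt
  - torchrun
    --nnodes=$DSTACK_NODES_NUM
    --node-rank=$DSTACK_NODE_RANK
    --nproc-per-node=$DSTACK_GPUS_PER_NODE
    --master-addr=$DSTACK_MASTER_NODE_IP
    train.py
resources:
  gpu: 80GB:8
```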
With dstack, you can easily deploy any model as a secure,
@@ -518,17 +550,17 @@
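Deployment uses a service configuration. The sketch below assumes dstack's documented service schema; the model, image, and GPU size are illustrative, not prescribed by the README:

```yaml
# Illustrative service config — model, image, and port are placeholders.
type: service
name: llama-serve
image: vllm/vllm-openai:latest
commands:
  - vllm serve meta-llama/Llama-3.1-8B-Instruct --max-model-len 4096
# Port the model server listens on
port: 8000
resources:
  gpu: 24GB
```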
Set up backends or SSH fleets, then add your team.
- Installation
+ Quickstart
- Quickstart
-->