diff --git a/docs/blog/posts/hotaisle.md b/docs/blog/posts/hotaisle.md
new file mode 100644
index 0000000000..4d2e761c00
--- /dev/null
+++ b/docs/blog/posts/hotaisle.md
@@ -0,0 +1,114 @@
+---
+title: Supporting Hot Aisle AMD AI Developer Cloud
+date: 2025-08-11
+description: "dstack now integrates with Hot Aisle, enabling automated provisioning of AMD MI300X VMs."
+slug: hotaisle
+image: https://dstack.ai/static-assets/static-assets/images/dstack-hotaisle.png
+categories:
+ - Changelog
+---
+
+# Supporting Hot Aisle AMD AI Developer Cloud
+
+As the ecosystem around AMD GPUs matures, developers are looking for easier ways to experiment with ROCm, benchmark new architectures, and run cost-effective workloads—without manual infrastructure setup.
+
+`dstack` is an open-source orchestrator designed for AI workloads, providing a lightweight, container-native alternative to Kubernetes and Slurm.
+
+
+
+Today, we’re excited to announce native integration with [Hot Aisle :material-arrow-top-right-thin:{ .external }](https://www.hotaisle.io/){:target="_blank"}, an AMD-only GPU neocloud offering VMs and clusters at highly competitive on-demand pricing.
+
+
+
+## About Hot Aisle
+
+Hot Aisle is a next-generation GPU cloud built around AMD’s flagship AI accelerators.
+
+Highlights:
+
+- AMD’s flagship AI-optimized accelerators (MI300X)
+- On-demand pricing: $1.99/hour for 1-GPU VMs
+- No commitment – start and stop when you want
+- First AMD-only GPU backend in `dstack`
+
+While it has already been possible to use Hot Aisle’s 8-GPU MI300X bare-metal clusters via [`SSH fleets`](../../docs/concepts/fleets.md#ssh-fleets), this integration now enables automated provisioning of VMs—made possible by Hot Aisle’s newly added API for MI300X instances.
+
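+For reference, describing an existing bare-metal cluster as an SSH fleet only requires listing the hosts and SSH credentials. A minimal sketch (the host address, user, and key path below are placeholders, not real Hot Aisle values):
+
+```yaml
+type: fleet
+name: mi300x-fleet
+
+# SSH access to the pre-provisioned bare-metal hosts
+ssh_config:
+  user: ubuntu
+  identity_file: ~/.ssh/id_rsa
+  hosts:
+    - 203.0.113.10
+```
+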
+## Why dstack
+
+`dstack` is a new open-source container orchestrator built specifically for GPU workloads.
+It fills the gaps left by Kubernetes and Slurm when it comes to GPU provisioning and orchestration:
+
+- Unlike Kubernetes, `dstack` offers a high-level, AI-engineer-friendly interface, and GPUs work out of the box, with no need to wrangle custom operators, device plugins, or other low-level setup.
+- Unlike Slurm, it’s use-case agnostic — equally suited for training, inference, benchmarking, or even setting up long-running dev environments.
+- It works across clouds and on-prem without vendor lock-in.
+
+With the new Hot Aisle backend, you can automatically provision MI300X VMs for any workload — from experiments to production — with a single `dstack` CLI command.
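+
+As an illustration, a simple task that verifies GPU visibility could be defined like this (the image tag and command are illustrative only):
+
+```yaml
+type: task
+name: rocm-check
+
+# Any ROCm-enabled image works here
+image: rocm/pytorch:latest
+commands:
+  - rocm-smi
+resources:
+  gpu: MI300X:1
+```
+
+Running `dstack apply -f task.dstack.yml` then provisions a matching MI300X VM and executes the task.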
+
+## Getting started
+
+Before configuring `dstack` to use Hot Aisle’s VMs, complete these steps:
+
+1. Create a project via `ssh admin.hotaisle.app`
+2. Get credits or approve a payment method
+3. Create an API key
+
+Then, configure the backend in `~/.dstack/server/config.yml`:
+
+
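+A minimal configuration, assuming the API-key `creds` layout used by other `dstack` backends (the `team_handle` field and placeholder values are assumptions—consult the backend reference for the authoritative schema):
+
+```yaml
+projects:
+- name: main
+  backends:
+  - type: hotaisle
+    team_handle: your_team_handle
+    creds:
+      type: api_key
+      api_key: your_api_key
+```
+
+Restart the `dstack` server so it picks up the new backend.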