Releases: dstackai/dstack
0.19.27
Run configurations
Repo directory
It's now possible to specify the directory in the container where the repo is mounted:
```yaml
type: dev-environment
ide: vscode

repos:
  - local_path: .
    path: my_repo
  # or using the short syntax:
  # - .:my_repo
```

The `path` property can be an absolute path or a path relative to `working_dir`. It's available inside the run as the `$DSTACK_REPO_DIR` environment variable. If `path` is not set, the `/workflow` path is used.
Working directory
Previously, the `working_dir` property had complicated semantics: it defaulted to the repo path (`/workflow`), but for tasks and services without `commands`, the image working directory was used. You could also specify a custom `working_dir` relative to the repo directory. This is now reversed: you specify `working_dir` as an absolute path, and the repo path can be specified relative to it.
Note
During the transition period, the legacy behavior of using `/workflow` is preserved if `working_dir` is not set. In future releases, this will be simplified, and `working_dir` will always default to the image working directory.
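The path resolution described above can be sketched as follows. This is an illustrative helper, not dstack's actual code; the function name is made up:

```python
from pathlib import PurePosixPath
from typing import Optional

def resolve_repo_dir(working_dir: str, repo_path: Optional[str]) -> str:
    # Illustrative sketch of the semantics above: the repo `path` may be
    # absolute or relative to `working_dir`; if unset, the legacy
    # `/workflow` default applies.
    if repo_path is None:
        return "/workflow"
    p = PurePosixPath(repo_path)
    return str(p) if p.is_absolute() else str(PurePosixPath(working_dir) / p)

print(resolve_repo_dir("/app", "my_repo"))     # /app/my_repo
print(resolve_repo_dir("/app", "/data/repo"))  # /data/repo
print(resolve_repo_dir("/app", None))          # /workflow
```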
Fleet configuration
Nodes, retry, and target
dstack now indefinitely maintains nodes.min specified for cloud fleets. If instances get terminated for any reason and there are fewer instances than nodes.min, dstack will provision new fleet instances in the background.
There is also a new `nodes.target` property that specifies the number of instances to provision on `fleet apply`. Since `nodes.min` is now always maintained, you can set `nodes.target` higher than `nodes.min` to provision more instances than need to be maintained.
Example:
```yaml
type: fleet
name: default-fleet

nodes:
  min: 1    # Maintain one instance
  target: 2 # Provision two instances initially
  max: 3
```

With this configuration, dstack provisions two instances. After deleting one instance, one instance is left. Deleting the last instance triggers dstack to re-create it.
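The `min`/`target` semantics can be sketched as a small decision function. This is a hypothetical helper for illustration, not dstack internals:

```python
def instances_to_provision(current: int, min_nodes: int, target: int, on_apply: bool) -> int:
    # Sketch of the semantics above: `fleet apply` provisions up to `target`,
    # while background maintenance only tops the fleet back up to `min_nodes`.
    desired = target if on_apply else min_nodes
    return max(0, desired - current)

print(instances_to_provision(0, min_nodes=1, target=2, on_apply=True))   # 2
print(instances_to_provision(1, min_nodes=1, target=2, on_apply=False))  # 0
print(instances_to_provision(0, min_nodes=1, target=2, on_apply=False))  # 1
```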
Offers
The UI now has a dedicated page showing GPU offers available across all configured backends.
Digital Ocean and AMD Developer Cloud
The release adds native integration with DigitalOcean and
AMD Developer Cloud.
A backend configuration example:
```yaml
projects:
  - name: main
    backends:
      - type: amddevcloud
        project_name: TestProject
        creds:
          type: api_key
          api_key: ...
```

For DigitalOcean, set `type` to `digitalocean`.
The digitalocean and amddevcloud backends support NVIDIA and AMD GPU VMs, respectively, and allow you to run
dev environments (interactive development), tasks
(training, fine-tuning, or other batch jobs), and services (inference).
Security
Important
This update fixes a vulnerability in the cloudrift, cudo, and datacrunch backends. Instances created with earlier dstack versions lack proper firewall rules, potentially exposing internal APIs and allowing unauthorized access.
Users of these backends are advised to update to the latest version and re-create any running instances.
What's changed
- Minor Hot Aisle Cleanup by @Bihan in #2978
- UI for offers #3004 by @olgenn in #3042
- Add `repos[].path` property by @un-def in #3041
- style(frontend): Add missing final newline by @un-def in #3044
- Implement fleet state-spec consolidation to maintain `nodes.min` by @r4victor in #3047
- Add digital ocean and amd dev backend by @Bihan in #3030
- test: include amddevcloud and digitalocean in backend types by @Bihan in #3053
- Fix missing digitaloceanbase configurator methods by @Bihan in #3055
- Expose job working dir via environment variable by @un-def in #3049
- [runner] Ensure `working_dir` exists by @un-def in #3052
- Fix server compatibility with pre-0.19.27 runners by @un-def in #3054
- Bind shim and exposed container ports to localhost by @jvstme in #3057
- Fix client compatibility with pre-0.19.27 servers by @un-def in #3063
- [Docs] Reflect the repo and working directory changes (#3041) by @peterschmidt85 in #3064
- Show a CLI warning when using autocreated fleets by @r4victor in #3060
- Improve UX with private repos by @un-def in #3065
- Set up instance-level firewall on all backends by @jvstme in #3058
- Exclude target when equal to min for responses by @r4victor in #3070
- [Docs] Shorten the default `working_dir` warning by @peterschmidt85 in #3072
- Do not issue empty update for deleted_fleets_placement_groups by @r4victor in #3071
- Exclude target when equal to min for responses (attempt 2) by @r4victor in #3074
Full changelog: 0.19.26...0.19.27
0.19.26
Repos
Previously, dstack always required running the dstack init command before use. This also meant that dstack would always mount the current folder as a repo.
With this update, repo configuration is now explicit and declarative. If you want to use a repo in your run, you must specify it with the new repos property. The dstack init command is now only used to provide custom Git credentials when working with private repos.
For example, imagine you have a cloned Git repo with an examples subdirectory containing a .dstack.yml file:
```yaml
type: dev-environment
name: vscode

repos:
  # Mounts the parent directory of `examples` (must be a Git repo)
  # to `/workflow` (the default working directory)
  - ..

ide: vscode
```

When you run this configuration, dstack fetches the repo on the instance, applies your local changes, and mounts it, so the container always matches your local repo.
Sometimes you may want to mount a Git repo without cloning it locally. In that case, simply provide a URL in repos:
```yaml
type: dev-environment
name: vscode

repos:
  # Clones the specified repo to `/workflow` (the default working directory)
  - https://github.com/dstackai/dstack

ide: vscode
```

If the repo is private, dstack will automatically try to use your default Git credentials (from `~/.ssh/config` or `~/.config/gh/hosts.yml`).
To configure custom Git credentials, use dstack init.
Note
If you previously initialized a repo via dstack init, it will still be mounted. Be sure to migrate to repos, as implicitly configured repos are deprecated and will stop working in future releases.
If you no longer want to use the implicitly configured repo, run dstack init --remove.
Note
Currently, you can configure only one repo per run configuration.
Fleets
Previously, when dstack added new instances to existing fleets, it ignored the fleet configuration and used only the run configuration for which the instance was created. This could result in fleets containing instances that didn’t match their configuration.
This has now been fixed: fleet configurations and run configurations are intersected so that provisioned instances respect both. For example, given a fleet configuration:
```yaml
type: fleet
name: cloud-fleet

placement: any
nodes: 0..2
backends:
  - runpod
```

and a run configuration:

```yaml
type: dev-environment
ide: vscode

spot_policy: spot
fleets:
  - cloud-fleet
```

dstack will provision a RunPod spot instance in `cloud-fleet`.
This change lets you define main provisioning parameters in fleet configurations, while adjusting them in run configurations as needed.
Note
Currently, the run plan does not take fleet configuration into account when showing offers, since the target fleet may not be known beforehand. We plan to improve this by showing offers for all candidate fleets.
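The intersection of fleet and run configurations can be sketched per property. This is a hypothetical helper illustrating the idea, not dstack's implementation:

```python
from typing import List, Optional

def intersect(fleet_value: Optional[List[str]], run_value: Optional[List[str]]) -> Optional[List[str]]:
    # Sketch of the intersection idea: a provisioned instance must satisfy
    # both the fleet and the run configuration; None means "no restriction".
    if fleet_value is None:
        return run_value
    if run_value is None:
        return fleet_value
    return [v for v in fleet_value if v in run_value]

print(intersect(["runpod"], None))            # ['runpod']
print(intersect(["runpod", "aws"], ["aws"]))  # ['aws']
```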
Examples
Wan2.2
We've added a new example demonstrating how to use Wan2.2, the new open-source SOTA text-to-video model, to generate videos.
Internals
Pyright integration
We now use pyright for type checking dstack Python code in CI. If you contribute to dstack, we recommend you configure your IDE to use pyright/pylance with standard type checking mode.
What's changed
- Fix typing issues and add pyright to CI by @r4victor in #3011
- [Internal] Update Ask AI integration ID by @olgenn in #3009
- Make Configurator generic by @r4victor in #3013
- Type check cli.commands by @r4victor in #3014
- [Docs] Improve the docs regarding `dstack init` and repos to reflect the recent changes by @peterschmidt85 in #3015
- Respect fleet spec when provisioning on run apply by @r4victor in #3022
- Consider elastic busy fleets for provisioning by @r4victor in #3024
- Fix duplicate instance_num by @r4victor in #3025
- Add declarative repo configuration by @un-def in #3023
- Allow gpu.name as string in json schema by @r4victor in #3027
- [Bug]: nebius.aio.service_error.RequestError: Request error DEADLINE_EXCEEDED: Deadline Exceeded #2962 by @peterschmidt85 in #3028
- Fix DataCrunchCompute exception when terminating already removed instance by @r4victor in #3032
- [DataCrunch] Ensure dstack is using fixed pricing #3033 by @peterschmidt85 in #3034
- Document `repos` by @peterschmidt85 in #3026
- Add Wan2.2 example by @r4victor in #3029
- Automatically remove dangling tasks from shim by @jvstme in #3036
- `dstack offer` fixes by @peterschmidt85 in #3038
- Remove dstack init from help by @r4victor in #3039
Full changelog: 0.19.25...0.19.26
0.19.25
CLI
dstack offer --group-by
The dstack offer command can now display aggregated information about available offers. For example, to see what GPUs are available in different clouds, use --group-by gpu.
```shell
> dstack offer --group-by gpu

 #   GPU              SPOT             $/GPU           BACKENDS
 1   T4:16GB:1..8     spot, on-demand  0.1037..1.3797  gcp, aws
 2   L4:24GB:1..8     spot, on-demand  0.1829..2.1183  gcp, aws
 3   P100:16GB:1..4   spot, on-demand  0.2115..2.4043  gcp, oci
 4   V100:16GB:1..8   spot, on-demand  0.3152..4.234   gcp, aws, oci, lambda
 5   A10G:22GB:1..8   spot, on-demand  0.3623..2.5845  aws
 6   L40S:44GB:1..8   spot, on-demand  0.6392..4.7095  aws
 7   A100:40GB:1..16  spot, on-demand  0.6441..4.0496  gcp, aws, oci, lambda
 8   A10:24GB:1..4    on-demand        0.75..2         oci, lambda
 9   H100:80GB:1..8   spot, on-demand  1.079..15.7236  gcp, aws, lambda
 10  A100:80GB:1..8   spot, on-demand  1.2942..5.7077  gcp, aws, lambda
```

Refer to the docs for information about the available aggregations.
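The `--group-by gpu` aggregation shown above can be sketched as follows. The data, field names, and function are made up for illustration; this is not dstack's implementation:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_by_gpu(offers: List[dict]) -> Dict[str, Tuple[float, float, List[str]]]:
    # For each GPU, compute the per-GPU price range and the set of backends,
    # mirroring the $/GPU and BACKENDS columns above.
    grouped = defaultdict(list)
    for offer in offers:
        grouped[offer["gpu"]].append(offer)
    return {
        gpu: (
            min(o["price_per_gpu"] for o in items),
            max(o["price_per_gpu"] for o in items),
            sorted({o["backend"] for o in items}),
        )
        for gpu, items in grouped.items()
    }

offers = [
    {"gpu": "T4:16GB", "backend": "gcp", "price_per_gpu": 0.1037},
    {"gpu": "T4:16GB", "backend": "aws", "price_per_gpu": 1.3797},
    {"gpu": "L4:24GB", "backend": "gcp", "price_per_gpu": 0.1829},
]
print(group_by_gpu(offers)["T4:16GB"])  # (0.1037, 1.3797, ['aws', 'gcp'])
```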
Deprecations
- Local repos are now deprecated. If you need to deliver a local directory or file to a run, use `files` instead. If the run doesn't require a repo, use `dstack apply --no-repo`. Remote repos remain the recommended way to deliver Git repos to runs.
What's changed
- Document Deployment-compatible migrations by @r4victor in #2987
- [Bug]: Server Docker image fails because of Unable to locate package … by @peterschmidt85 in #2983
- Only register service replicas after probes pass by @jvstme in #2986
- [Changelog] Introducing service probes by @peterschmidt85 in #2988
- Deprecate local repos by @un-def in #2984
- Support elastic fleets by @r4victor in #2967
- fix typo config.yml.md by @jspablo in #2991
- Check if kapa.ai can also be integrated into dstack Sky #296 by @olgenn in #2990
- Typo in URLs by @mashcroft3 in #2995
- [shim] Fix `DCGMWrapperInterface` nil check (bis) by @un-def in #3001
- The logs section is too short in the UI by @olgenn in #2989
- [Feature]: Allow `dstack offer` to aggregate GPU information by @peterschmidt85 in #2992
- [Internal]: CI refactoring by @jvstme in #3006
- Update examples by @un-def in #3007
- Minor CLI fixes by @peterschmidt85 in #3008
New Contributors
- @mashcroft3 made their first contribution in #2995
Full Changelog: 0.19.24...0.19.25
0.19.24
Migration guide
Warning
This update requires stopping all dstack server replicas before deploying, due to database schema changes.
Make sure no replicas from the previous version and the new version run at the same time.
What's changed
- [Internal] Replace enums with strings in the DB, `JobSubmission.termination_reason`, and `Run.termination_reason` by @r4victor in #2949
- [Internal] Fix macOS build for shim by @un-def in #2958
- [Bug] Increase the secrets max character length by @james-boydell in #2971
- [Internal] Introduce `InstanceAvailability.NO_BALANCE` (for external integrations) by @peterschmidt85 in #2975
- [Bug]: Cannot manage secrets in UI as project admin by @olgenn in #2972
- [Bug] Fix `DCGMWrapperInterface` nil check in shim by @un-def in #2980
Full changelog: 0.19.23...0.19.24
0.19.23
Major bug-fixes
- This release resolves an issue introduced in 0.19.22 that caused instance provisioning to fail consistently for certain instance types.
Backends
Nebius
The nebius backend now supports spot instances and the NVIDIA B200 GPU.
```shell
> dstack offer -b nebius --spot

 #  BACKEND               RESOURCES                                            PRICE
 1  nebius (eu-north1)    cpu=16 mem=200GB disk=100GB H100:80GB:1 (spot)       $1.25
 2  nebius (eu-north1)    cpu=16 mem=200GB disk=100GB H200:141GB:1 (spot)      $1.45
 3  nebius (eu-west1)     cpu=16 mem=200GB disk=100GB H200:141GB:1 (spot)      $1.45
 4  nebius (us-central1)  cpu=16 mem=200GB disk=100GB H200:141GB:1 (spot)      $1.45
 5  nebius (eu-north1)    cpu=128 mem=1600GB disk=100GB H100:80GB:8 (spot)     $10
 6  nebius (eu-north1)    cpu=128 mem=1600GB disk=100GB H200:141GB:8 (spot)    $11.6
 7  nebius (eu-west1)     cpu=128 mem=1600GB disk=100GB H200:141GB:8 (spot)    $11.6
 8  nebius (us-central1)  cpu=128 mem=1600GB disk=100GB H200:141GB:8 (spot)    $11.6
```

```shell
> dstack offer -b nebius --gpu 8:b200

 #  BACKEND               RESOURCES                                    PRICE
 1  nebius (us-central1)  cpu=160 mem=1792GB disk=100GB B200:180GB:8   $44
```

What's changed
- Fix `dstack-shim` release build by @jvstme in #2964
- [Nebius] Support spot instances and B200 by @peterschmidt85 in #2965
Full Changelog: 0.19.22...0.19.23
0.19.22
Warning
When updating, make sure to install 0.19.23, the latest bug-fix release.
Services
Probes
You can now configure HTTP probes to check the health of your service.
```yaml
type: service
name: my-service

port: 80
image: my-app:latest

probes:
  - type: http
    url: /health
    interval: 15s
```

Probe statuses are displayed in `dstack ps --verbose` and are considered during rolling deployments. This enables you to deploy new versions of your service with zero downtime.
```shell
> dstack ps --verbose

 NAME                           BACKEND          STATUS   PROBES  SUBMITTED
 my-service deployment=1                         running          11 mins ago
   replica=0 job=0 deployment=0 aws (us-west-2)  running  ✓       11 mins ago
   replica=1 job=0 deployment=1 aws (us-west-2)  running  ×       1 min ago
```

Learn more about probes in the docs.
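Conceptually, an HTTP probe like the one configured above passes when the endpoint returns a 2xx status. Here is a minimal sketch using only the standard library; it is illustrative, not dstack's implementation:

```python
import urllib.request

def http_probe(base_url: str, path: str = "/health", timeout: float = 5.0) -> bool:
    # Sketch of an HTTP probe: success means the endpoint answered with 2xx
    # within the timeout; any error or non-2xx status counts as a failure.
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False
```

A probe runner would call this every `interval` and mark the replica healthy only after the configured number of consecutive successes.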
Accelerators
NVIDIA GPU health checks
dstack now monitors NVIDIA GPU health using DCGM background health checks:
> dstack fleet
FLEET INSTANCE BACKEND RESOURCES PRICE STATUS CREATED
my-fleet 0 aws (us-east-1) T4:16GB:1 $0.526 idle 11 mins ago
1 aws (us-east-1) T4:16GB:1 $0.526 idle (warning) 11 mins ago
2 aws (us-east-1) T4:16GB:1 $0.526 idle (failure) 11 mins agoIn this example, the first instance is healthy, the second has a non-fatal issue and can still be used, and the last has a fatal error that makes it inoperable.
Note
GPU health checks are supported on AWS (except with custom os_images), Azure (except for A10 GPUs), GCP, and OCI, as well as SSH fleet instances with DCGM installed and configured for background health checks. To use GPU health checks, re-create the fleets that were created before 0.19.22.
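The status rendering shown above can be sketched as a tiny mapping. This is an assumed, illustrative helper, not dstack internals:

```python
def render_status(base: str, gpu_health: str) -> str:
    # Assumed mapping: a healthy GPU ("ok") leaves the status unchanged;
    # non-fatal ("warning") and fatal ("failure") results are appended.
    return base if gpu_health == "ok" else f"{base} ({gpu_health})"

print(render_status("idle", "ok"))       # idle
print(render_status("idle", "warning"))  # idle (warning)
print(render_status("idle", "failure"))  # idle (failure)
```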
Tenstorrent Galaxy
dstack now supports Tenstorrent Galaxy cards via SSH fleets.
Backends
Hot Aisle
This release features an integration with Hot Aisle, a cloud provider that offers on-demand access to AMD MI300x GPUs at competitive prices.
```shell
> dstack offer -b hotaisle

 #  BACKEND                   RESOURCES                                       INSTANCE TYPE                      PRICE
 1  hotaisle (us-michigan-1)  cpu=13 mem=224GB disk=12288GB MI300X:192GB:1    1x MI300X 13x Xeon Platinum 8470   $1.99
 2  hotaisle (us-michigan-1)  cpu=8 mem=224GB disk=12288GB MI300X:192GB:1     1x MI300X 8x Xeon Platinum 8470    $1.99
```
Refer to the docs for instructions on configuring the hotaisle backend in your dstack project.
CLI
Reading configurations from stdin
dstack apply can now read configurations from stdin using the -y -f - flags. This allows configuration files to be parameterized in arbitrary ways:
```shell
> cat .dstack/volume.dstack.yml
type: volume
name: my-vol
backend: aws
region: us-east-1
size: $VOL_SIZE

> export VOL_SIZE=50
> envsubst '$VOL_SIZE' < .dstack/volume.dstack.yml | dstack apply -y -f -
```

Debug logs
The dstack CLI now saves debug logs to the ~/.dstack/logs/cli/ directory. These logs can be useful for troubleshooting failed commands or submitting bug reports.
UI
Secrets
The project settings page now has a section to manage secrets.
Logs improvements
The UI can now optionally display timestamps in front of each message in run logs. This can be a lifesaver when debugging runs that write log messages without built-in timestamps.
Additionally, if the dstack server is configured to use external log storage, such as AWS CloudWatch or GCP Logging, a button will appear in the UI to view the logs in that storage system.
What's changed
- [Feature]: Add UI for managing Secrets #2882 by @olgenn in #2911
- [Blog]: Benchmarking AMD GPUs: bare-metal, VMs by @peterschmidt85 in #2924
- [Feature]: Implement reading apply configuration from stdin by @r4victor in #2938
- Fix precommit by @olgenn in #2936
- Fix gateway docs URL by @jspablo in #2941
- [Feature]: Service probes by @jvstme in #2927
- Return logs `external_url` for AWS and GCP by @r4victor in #2944
- [Feature]: Default CLI log level is DEBUG; WARNING and above go to STDOUT, DEBUG logs to a file by @peterschmidt85 in #2940
- [Feature]: Support for Tenstorrent Galaxy by @peterschmidt85 in #2943
- Disallow duplicate project members by @r4victor in #2945
- [Feature]: If GCP logging or AWS Cloudwatch logging is configured, show link in the UI to the log stream by @olgenn in #2948
- Specify `sentry-sdk[fastapi]>=2.27.0` to fix missing `SamplingContext` by @r4victor in #2950
- [Feature]: Showing timestamp for logs by @olgenn in #2937
- [Landing]: Highlight dstack Sky + CTA improvements by @peterschmidt85 in #2947
- Fix Lambda backend instance unreachable after dstack server restart by @Bihan in #2946
- Fix configuring CLI logging on Python 3.9/3.10 by @jvstme in #2953
- [Feature]: Add NVIDIA GPU passive health checks by @un-def in #2952
- Fix `_check_instance` log spam by @un-def in #2956
- Add more probe request configuration options by @jvstme in #2955
- [Feature]: Add Hot Aisle backend by @Bihan in #2935
- [Internal]: Fix release workflow by @jvstme in #2959
New Contributors
Full Changelog: 0.19.21...0.19.22
0.19.21
Runs
Scheduled runs
Runs get a new schedule property that allows starting runs periodically by specifying a cron expression:
```yaml
type: task
nodes: 1

schedule:
  cron: "*/15 * * * *"

commands:
  - ...
```

dstack will start a scheduled run at cron times unless the run is already running. It can then be stopped manually to prevent it from starting again. Learn more about scheduled runs in the docs.
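To see how a `*/15 * * * *` expression fires, here is a tiny sketch of matching just the minute field. Real cron expressions have five fields and richer syntax (ranges, names); this helper is illustrative only:

```python
def minute_field_matches(field: str, minute: int) -> bool:
    # Match a cron minute field: "*" matches everything, "*/N" matches
    # every N minutes, and "a,b,c" matches the listed minutes.
    if field == "*":
        return True
    if field.startswith("*/"):
        return minute % int(field[2:]) == 0
    return minute in {int(v) for v in field.split(",")}

print(minute_field_matches("*/15", 30))  # True
print(minute_field_matches("*/15", 20))  # False
print(minute_field_matches("0,30", 30))  # True
```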
CLI
Startup time
The CLI startup time was improved by up to 4x by optimizing Python imports.
Server
Optimized DB queries
We optimized DB queries issued by the dstack server. This improves API response times and decreases the load on the DB, which was previously noticeable on small Postgres instances.
What's Changed
- Support scheduled runs by @r4victor in #2914
- Autoset UTC timezone for datetimes loaded from the db by @r4victor in #2922
- Refactor backends module to avoid importing deps on models import by @r4victor in #2923
- Optimize db queries by @r4victor in #2928
- Optimize db queries (part 2) by @r4victor in #2929
- [UI] Add justfile to build frontend by @peterschmidt85 in #2897
- Fix project loading in _check_instance() by @r4victor in #2931
- Set up background tasks Sentry tracing by @r4victor in #2932
Full Changelog: 0.19.20...0.19.21
0.19.20
User interface
Logs
This is a hotfix release addressing three major issues related to the UI:
- The UI didn’t display newer AWS CloudWatch logs if there was a long gap between old and new logs.
- Logs received before the 0.19.19 update appeared as base64-encoded in the UI. The UI now includes a button to decode them automatically.
- Logs were loaded from start to end, which made viewing very slow for long runs.
Note
The dstack logs CLI command may still be affected by the issues above. However, it’s less critical and will be addressed separately.
What's changed
- [chore]: Drop duplicate utility `split_chunks` by @jvstme in #2912
- [backends/CloudRift] Fixed issue with terminating inactive instance by @6erun in #2918
- Expose GPU metrics collected by runner as Prometheus metrics by @un-def in #2916
- [UI] Query logs using descending by @peterschmidt85 in #2915
- [UI] Fix logs loading #2892 by @olgenn in #2920
Full changelog: 0.19.19...0.19.20
0.19.19
Fleets
SSH fleets in-place updates
You can now add and remove instances in SSH fleets without recreating the entire fleet.
```yaml
type: fleet
name: ssh-fleet

ssh_config:
  user: dstack
  identity_file: ~/.ssh/dstack
  hosts:
    - 10.0.0.1
    - 10.0.0.2
```

```shell
$ dstack apply -f fleet.dstack.yml
...
Fleet ssh-fleet does not exist yet.
Create the fleet? [y/n]: y
...
 FLEET      INSTANCE  BACKEND       RESOURCES                PRICE  STATUS  CREATED
 ssh-fleet  0         ssh (remote)  cpu=4 mem=4GB disk=30GB  $0     idle    09:08
            1         ssh (remote)  cpu=2 mem=4GB disk=30GB  $0     idle    09:08
```
Then, if you update the `hosts` configuration property:

```yaml
  hosts:
    #- 10.0.0.1 # removed
    - 10.0.0.2
    - 10.0.0.3 # added
```

and apply the same configuration again, the fleet will be updated in place, meaning you don't need to stop runs on fleet instances that aren't affected by the changes (in this example, it's okay if instance 1 is currently busy; you can still apply the configuration).
```shell
$ dstack apply -f fleet.dstack.yml
...
Found fleet ssh-fleet. Configuration changes detected.
Update the fleet in-place? [y/n]: y
...
 FLEET      INSTANCE  BACKEND       RESOURCES                PRICE  STATUS  CREATED
 ssh-fleet  1         ssh (remote)  cpu=2 mem=4GB disk=30GB  $0     idle    09:08
            2         ssh (remote)  cpu=8 mem=4GB disk=30GB  $0     idle    09:12
```
Note
In-place updates only allow adding and/or removing instances; the root configuration and the configurations of unchanged hosts must stay the same. Otherwise, a full fleet re-creation is triggered, as before. This restriction may be lifted in the future.
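Computing which hosts were added and removed is a simple set difference. This hypothetical helper sketches the idea behind the in-place update:

```python
from typing import List, Tuple

def diff_hosts(old: List[str], new: List[str]) -> Tuple[List[str], List[str]]:
    # Return (added, removed) host lists; everything else in the fleet
    # configuration is assumed to be unchanged.
    old_set, new_set = set(old), set(new)
    return sorted(new_set - old_set), sorted(old_set - new_set)

added, removed = diff_hosts(["10.0.0.1", "10.0.0.2"], ["10.0.0.2", "10.0.0.3"])
print(added)    # ['10.0.0.3']
print(removed)  # ['10.0.0.1']
```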
Volumes
Automatic cleanup of unused volumes
The volume configuration gets a new auto_cleanup_duration property:
```yaml
type: volume
name: my-volume

backend: aws
region: eu-west-1
availability_zone: eu-west-1a

auto_cleanup_duration: 1h
```

The volume will be automatically deleted once it has not been used for the specified duration.
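The cleanup rule boils down to a simple time comparison. This is a hypothetical helper illustrating the semantics, not dstack's code:

```python
from datetime import datetime, timedelta, timezone

def should_auto_cleanup(last_used: datetime, duration: timedelta, now: datetime) -> bool:
    # A volume qualifies for deletion once it has been unused for longer
    # than `auto_cleanup_duration`.
    return now - last_used > duration

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
print(should_auto_cleanup(now - timedelta(hours=2), timedelta(hours=1), now))     # True
print(should_auto_cleanup(now - timedelta(minutes=30), timedelta(hours=1), now))  # False
```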
Logs
Browsable, queryable, and searchable logs
dstack now stores run logs in plaintext; previously they were base64-encoded. This allows you to use the configured log storage, be it AWS CloudWatch or GCP Logging, to browse and query dstack run logs.
Note
Logs generated before this release will be shown as base64-encoded in the UI and CLI after the update.
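Pre-0.19.19 messages can be decoded by hand if needed. A minimal sketch, assuming the payload is UTF-8 text (as with the decode button in the UI):

```python
import base64

def decode_legacy_log(message: str) -> str:
    # Decode a base64-encoded log message generated before this release.
    return base64.b64decode(message).decode("utf-8", errors="replace")

encoded = base64.b64encode(b"hello from dstack").decode()
print(decode_legacy_log(encoded))  # hello from dstack
```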
Server
Faster API response times
The dstack server API has been optimized to serialize JSON responses faster. API endpoints are now up to 2x faster than before.
Benchmarks
Benchmarking AMD GPUs: bare-metal, containers, partitions
Our new benchmark explores two important areas for optimizing AI workloads on AMD GPUs: First, do containers introduce a performance penalty for network-intensive tasks compared to a bare-metal setup? Second, how does partitioning a powerful GPU like the MI300X affect its real-world performance for different types of AI workloads?
What's Changed
- [Internal] Some runner tests fail on macOS by @peterschmidt85 in #2879
- Introduce job_submissions_limit for /api/runs/list by @r4victor in #2883
- Speed up json serialization with orjson and custom FastAPI responses by @r4victor in #2880
- [Docs]: Service rolling deployments by @jvstme in #2870
- Do not lose `provisioning` gateways on restart by @jvstme in #2887
- Add/remove SSH instances via in-place update by @un-def in #2884
- [Docs]: Add example of setting a PostgreSQL URL by @jvstme in #2888
- [Blog] Added new changelog by @peterschmidt85 in #2891
- Fix job_submissions_limit backward compatibility by @r4victor in #2894
- Fix run and job status_message calculation by @r4victor in #2889
- Fix 500 errors when requesting file logs by @r4victor in #2896
- Rolling deployments for `port` by @jvstme in #2893
- [Feature] Strip ANSI codes from run logs and store them as plain text instead of bytes by @peterschmidt85 in #2876
- [Feature]: Add ability to disable background processing and only run Web UI and API server #2901 by @james-boydell in #2902
- [shim] Don't check image downloaded size by @un-def in #2903
- Fix rolling deployment migration locking by @r4victor in #2904
- feat: add volume idle duration cleanup feature (#2497) by @haydnli-shopify in #2842
- [Blog] Benchmarking AMD GPUs: bare-metal, containers, partitions by @peterschmidt85 in #2905
- Fix /users/list by @r4victor in #2908
- Return logs in base64 for backward compatibility by @r4victor in #2910
Full Changelog: 0.19.18...0.19.19
0.19.18
Server
Optimized resources processing
This release includes major improvements that allow the dstack server to process more resources more quickly. It also allows scaling the processing rates of a single server replica to take advantage of large Postgres instances by setting the `DSTACK_SERVER_BACKGROUND_PROCESSING_FACTOR` environment variable.
The result is:
- Faster processing rates: provisioning 100 runs on SQLite with default settings went from ~5m to ~2m.
- Better scaling: provisioning an additional 100 runs is even quicker thanks to a warm cache. Previously, it was slower than the first 100 runs.
- Ability to process more runs per server replica: provisioning 300 runs on Postgres with `DSTACK_SERVER_BACKGROUND_PROCESSING_FACTOR=4` takes ~4m.

For more details on scaling background processing rates, see the Server deployment guide.
Backends
Private GCP gateways
It's now possible to create GCP gateways without public IPs:
```yaml
type: gateway
name: example

domain: gateway.example.com
backend: gcp
region: europe-west9
public_ip: false
certificate: null
```

Note that configuring HTTPS certificates for private GCP gateways is not yet supported, so you need to specify `certificate: null`.
What's Changed
- Ignore SSH keys when calculating fleet conf diff by @un-def in #2869
- [Blog] Refactoring by @peterschmidt85 in #2873
- Implemented fronted precommit linting by @olgenn in #2868
- Support processing more resources per replica by @r4victor in #2871
- Use uvloop by default by @r4victor in #2874
- Add server profiling by @r4victor in #2875
- Fix NVIDIA container toolkit bug in all backends by @jvstme in #2877
- Private GCP gateways by @jvstme in #2881
- Switch to `e2-medium` for GCP gateways by @jvstme in #2886
Full Changelog: 0.19.17...0.19.18