From 98b1d2ecabf49023caa3400029482f9b20886733 Mon Sep 17 00:00:00 2001
From: Felipe Mello
Date: Thu, 23 Apr 2026 09:47:14 -0700
Subject: [PATCH 1/2] docs: note development pause in README

---
 README.md | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/README.md b/README.md
index bcf0e181d..ada565e59 100644
--- a/README.md
+++ b/README.md
@@ -13,12 +13,7 @@ Key features:
 - Hackability for power users (all parts of the RL loop can be easily modified without interacting with infrastructure)
 - Scalability (ability to shift between async and synchronous training and across thousands of GPUs)
 
-> ⚠️ **Early Development Warning** torchforge is currently in an experimental
-> stage. You should expect bugs, incomplete features, and APIs that may change
-> in future versions. The project welcomes bugfixes, but to make sure things are
-> well coordinated you should discuss any significant change before starting the
-> work. It's recommended that you signal your intention to contribute in the
-> issue tracker, either by filing a new issue or by claiming an existing one.
+> ⚠️ **Development paused:** Development in Forge has paused. LLM training at PyTorch is being consolidated in [torchtitan](https://github.com/pytorch/torchtitan).
 
 ## 📖 Documentation
 

From c9c51b217df8ae09a09c27e9827e1ba2e7834877 Mon Sep 17 00:00:00 2001
From: Felipe Mello
Date: Thu, 23 Apr 2026 09:54:22 -0700
Subject: [PATCH 2/2] docs: move pause banner to top of README

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ada565e59..30c3e39fd 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,5 @@
+> ⚠️ **Development paused:** Development in Forge has paused. LLM training at PyTorch is being consolidated in [torchtitan](https://github.com/pytorch/torchtitan).
+
 # image torchforge
 #### A PyTorch-native agentic RL library that lets you focus on algorithms—not infra.
 
@@ -13,8 +15,6 @@ Key features:
 - Hackability for power users (all parts of the RL loop can be easily modified without interacting with infrastructure)
 - Scalability (ability to shift between async and synchronous training and across thousands of GPUs)
 
-> ⚠️ **Development paused:** Development in Forge has paused. LLM training at PyTorch is being consolidated in [torchtitan](https://github.com/pytorch/torchtitan).
-
 ## 📖 Documentation
 
 View torchforge's hosted documentation: https://meta-pytorch.org/torchforge.