This repository is the official code for **A-FloPS: Accelerating Diffusion Sampling with Adaptive Flow Path Sampler**. It contains:
- `utils.py`: Implementation of A-FloPS for DiT and A-Euler for SDv3.5.
- `vis.ipynb`: Code for generating Figure 1 in the paper.
- `vis_sd3.ipynb`: Code for generating Figure 2 in the paper.
- `generate_distributed.py`: Generates results for baseline samplers.
- `generate_distributed_flops.py`: Generates results for FloPS and A-FloPS.
- `gen_sd3.py`: Generates results for SDv3.5.
- `eval.py`: Modified from openai/guided-diffusion to support folder inputs; used for evaluating DiT results.
- `eval_clip_ir.py`: Evaluates SDv3.5 results using CLIP and ImageReward (IR) metrics.
- `gpt.py`: LLM-assisted evaluation script (requires an API key).
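As a quick sanity check, a clone can be compared against the file list above (a minimal sketch; the filenames are taken from the list, and the script simply reports anything missing from the current directory):

```python
from pathlib import Path

# Filenames from the repository overview above.
expected = [
    "utils.py", "vis.ipynb", "vis_sd3.ipynb",
    "generate_distributed.py", "generate_distributed_flops.py",
    "gen_sd3.py", "eval.py", "eval_clip_ir.py", "gpt.py",
]
missing = [name for name in expected if not Path(name).exists()]
print("missing:", ", ".join(missing) if missing else "none")
```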
```shell
# Create a virtual environment
conda create -n aflops python=3.10
conda activate aflops

# Install PyTorch and torchvision matching your system (CUDA 11.8 shown)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# Install the remaining dependencies
pip install -r requirement.txt
```
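After installation, a quick import check can confirm that the core packages resolved (a minimal sketch; `numpy` is an assumption about what requirement.txt pulls in, the first two come from the PyTorch install step):

```python
import importlib.util

# torch/torchvision are installed explicitly above; numpy is assumed
# to be among the dependencies in requirement.txt.
packages = ["torch", "torchvision", "numpy"]
missing = [p for p in packages if importlib.util.find_spec(p) is None]
print("missing packages:", ", ".join(missing) if missing else "none")
```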
- Figure 1: run `vis.ipynb`
- Figure 2: run `vis_sd3.ipynb`
```shell
# DiT sampling with A-FloPS / FloPS (mode: 1 = A-FloPS, 0 = FloPS)
torchrun --nproc_per_node=8 generate_distributed_flops.py \
  --sampler=aflops \
  --num_inference_steps=5 \
  --guidance_scale=1.5 \
  --mode=1 \
  --c_max=1 \
  --save_dir=./images

# DiT sampling with baseline samplers
torchrun --nproc_per_node=8 generate_distributed.py \
  --sampler=[unipc/ddim/dpmsolver] \
  --num_inference_steps=5 \
  --guidance_scale=1.5 \
  --save_dir=./images

# SDv3.5 sampling with A-Euler
python gen_sd3.py \
  --prompt_dir=/coco_path/coco/annotations/captions_val2017.json \
  --device=cuda:0 \
  --sampler=aeuler \
  --num_inference_steps=5 \
  --save_dir=./images
```

- Download the ImageNet reference batch (see guided-diffusion/evaluations).
- Run:

```shell
python eval.py \
  --ref_batch /path/to/reference/VIRTUAL_imagenet256_labeled.npz \
  --sample_batch /path/to/sample \
  --save_path /path/to/save/results
```

To evaluate SDv3.5 results with CLIP and ImageReward:

```shell
python eval_clip_ir.py \
  --caption_json /coco_path/coco/annotations/captions_val2017.json \
  --image_folder /generate_samples_path \
  --save_csv
```

- Edit the following variables in `gpt.py`:
```python
API_KEY = "your_api_key_here"
base_url = "YOUR_OPENAI_BASE_URL"
BASE_PATH1 = "/path/to/aeuler"
BASE_PATH2 = "/path/to/euler"
COCO_FILE = "/coco_path/coco/annotations/captions_val2017.json"
```

- Run:

```shell
python gpt.py
```

If you find this project useful in your research, please consider citing:
```bibtex
@misc{jin2025aflopsacceleratingdiffusionsampling,
  title={A-FloPS: Accelerating Diffusion Sampling with Adaptive Flow Path Sampler},
  author={Cheng Jin and Zhenyu Xiao and Yuantao Gu},
  year={2025},
  eprint={2509.00036},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2509.00036},
}
```
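As a side note, the `captions_val2017.json` file passed via `--prompt_dir`, `--caption_json`, and `COCO_FILE` above is a standard COCO captions annotation file. A minimal sketch of pulling prompt strings out of it (the helper name is illustrative, not part of this repo):

```python
import json

def load_coco_prompts(path):
    """Return the caption strings from a COCO captions JSON file."""
    with open(path) as f:
        data = json.load(f)
    # Standard COCO captions format: each entry in "annotations"
    # carries an "image_id" and a "caption" string.
    return [ann["caption"] for ann in data["annotations"]]
```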