A web application for generating images with the Z-Image-Turbo model on a DGX Spark with a GB10 GPU (ARM).
- 🎨 Web interface for image generation
- ⚡ Fast generation with Z-Image-Turbo
- 🎛️ Customizable parameters (width, height, steps, seed)
- 💾 Automatic image saving
- 🐳 Docker & GPU support
- Build and start the service:
docker-compose up -d --build
- Access the web interface:
http://localhost:5000
- View logs:
docker-compose logs -f
- Stop the service:
docker-compose down
You can also generate images directly from the command line:
python image_generation.py --prompt "Your prompt here" --output output.png
Options:
- --prompt, -p: Text prompt for image generation (required)
- --output, -o: Output filename (default: /tim/data/z-image-turbo/output.png)
- --width, -w: Image width (default: 1024)
- --height: Image height (default: 1024)
- --steps, -s: Number of inference steps (default: 9)
- --seed: Random seed for reproducibility (default: 42)
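For example, a run that overrides the defaults might look like the following (the prompt and output filename are placeholders):

```bash
# Generate a 768x768 image with a fixed seed; prompt and output path are placeholders
python image_generation.py \
  --prompt "A watercolor painting of a lighthouse at dawn" \
  --output /tim/data/z-image-turbo/lighthouse.png \
  --width 768 --height 768 \
  --steps 9 \
  --seed 123
```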
z_image/
├── webapp/
│ ├── app.py # Flask application
│ ├── requirements.txt # Python dependencies
│ └── templates/
│ └── index.html # Web interface
├── image_generation.py # CLI script
├── Dockerfile # Container definition
├── docker-compose.yml # Service orchestration
└── README.md # This file
The docker-compose.yml is configured for:
- NVIDIA GB10 GPU (ARM architecture)
- Device ID 0
- 8GB shared memory
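As a rough sketch, the GPU-related portion of docker-compose.yml typically looks like this (the service name and exact keys below are assumptions, not copied from the file):

```yaml
services:
  z-image-webapp:              # assumed service name
    build: .
    shm_size: "8gb"            # 8GB shared memory
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]    # Device ID 0
              capabilities: [gpu]
```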
Generated images are saved to /tim/data/z-image-turbo/
HuggingFace models are cached in a Docker volume to avoid re-downloading.
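The storage wiring would typically be a bind mount for the output directory plus a named volume for the model cache, roughly like this (the in-container cache path is an assumption):

```yaml
services:
  z-image-webapp:
    volumes:
      - /tim/data/z-image-turbo:/tim/data/z-image-turbo   # generated images
      - hf_cache:/root/.cache/huggingface                 # HuggingFace model cache (assumed mount point)

volumes:
  hf_cache:
```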
- GET / - Web interface
- POST /generate - Generate image from prompt
- GET /image/<filename> - Serve generated images
- GET /health - Health check
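Example calls (the JSON field names for /generate are assumptions based on the CLI parameters above; see webapp/app.py for the exact schema):

```bash
# Health check
curl http://localhost:5000/health

# Generate an image (field names assumed from the CLI parameters)
curl -X POST http://localhost:5000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A red fox in the snow", "width": 1024, "height": 1024, "steps": 9, "seed": 42}'
```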
- Docker & Docker Compose
- NVIDIA GPU with CUDA support
- NVIDIA Container Toolkit
nvidia-smi # Verify GPU is available
docker run --rm --gpus all nvidia/cuda:12.1.0-runtime-ubuntu22.04 nvidia-smi
Change the port mapping in docker-compose.yml:
ports:
- "5001:5000" # Use port 5001 insteadReduce image size or batch size in the web interface.