Web application for generating images using AI models with automatic translation of prompts into English.
- Clone the repository:

  ```bash
  git clone https://github.com/zdenek-stursa/replicator.git
  cd replicator
  ```

- Create and activate a virtual environment:

  ```bash
  sudo ./app.sh --install
  ```

- Create a `.env` file based on the `.env.example` template and fill in the necessary API keys:

  ```bash
  cp .env.example .env
  ```

- Important: In the `.env` file, set the `REPLICATE_MODELS` variable with a list of models from Replicate that you want to use. Separate models with a comma (e.g., `owner/model-name:version,another/model:version`). See `.env.example` for the format.
- Note: The application uses liteLLM for flexible LLM provider support. Configure your API key for your preferred provider (OpenAI, Anthropic, xAI, etc.) and set the `LLM_MODEL` variable accordingly.
- Clone the repository:

  ```bash
  git clone https://github.com/zdenek-stursa/replicator.git
  cd replicator
  ```

- Create and activate a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: .\venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file based on the `.env.example` template and fill in the necessary API keys:

  ```bash
  cp .env.example .env
  ```

- Important: In the `.env` file, set the `REPLICATE_MODELS` variable with a list of models from Replicate that you want to use. Separate models with a comma (e.g., `owner/model-name:version,another/model:version`). See `.env.example` for the format.
- Note: The application uses liteLLM for flexible LLM provider support. Configure your API key for your preferred provider (OpenAI, Anthropic, xAI, etc.) and set the `LLM_MODEL` variable accordingly.
The application loads the list of available models for generation from the REPLICATE_MODELS environment variable defined in the .env file.
The format is a comma-separated list of model identifiers:

```bash
REPLICATE_MODELS=model_identifier_1,model_identifier_2,...
```

Each identifier should be in the format `owner/model-name:version` (the version is optional; if omitted, the latest version is used).
The user interface then dynamically loads the parameters for each selected model directly from the Replicate API and displays the corresponding form.
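As an illustration, splitting the comma-separated value into per-model descriptors can be sketched like this (a minimal Python sketch; the `parse_replicate_models` helper is hypothetical, not the application's actual code):

```python
import os

def parse_replicate_models(raw):
    """Split a comma-separated REPLICATE_MODELS value into model descriptors.

    Each entry is expected as "owner/model-name" or "owner/model-name:version";
    a missing version is returned as None, meaning "use the latest version".
    """
    models = []
    for entry in raw.split(","):
        entry = entry.strip()
        if not entry:
            continue  # tolerate trailing commas and stray whitespace
        name, _, version = entry.partition(":")
        models.append({"model": name, "version": version or None})
    return models

# One model pinned to a version, one unpinned
raw = os.environ.get("REPLICATE_MODELS", "owner/model-name:abc123,another/model")
print(parse_replicate_models(raw))
```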
Run the application in debug mode:

```bash
./app.sh --debug  # Or: FLASK_ENV=development python app.py
```

Run the application using Gunicorn:

```bash
./app.sh --production
```

Set production variables in `.env`:

```bash
FLASK_ENV=production
SECRET_KEY=your-secure-secret-key
RATELIMIT_STORAGE_URL=redis://localhost:6379/0
```

Run the application using Gunicorn:

```bash
gunicorn --bind 0.0.0.0:5000 --workers 4 app:app
```

- Image Generation: Generate images using various AI models from Replicate (configurable via `.env`)
- Model Selection: Choose from multiple pre-configured models with dynamic parameter loading
- Multi-LLM Support: Flexible LLM integration via liteLLM library
- Automatic translation of prompts into English
- Prompt enhancement for better image generation
- Support for multiple providers: OpenAI, Anthropic, xAI, Groq, Mistral, and more
- Easy model switching via environment configuration
- Image Gallery: View previously generated images with metadata and PhotoSwipe lightbox
- Image Format Conversion: Download images in multiple formats (WebP original, JPG 90% quality, PNG without transparency) with mobile-optimized UI
- Aspect Ratio Support: Select from predefined aspect ratios or set custom dimensions
- Rate Limiting: Built-in API protection with configurable limits
- Security: Security headers, CORS protection, and secure cookie settings
- Responsive Design: Works on desktop and mobile devices
- Comprehensive Logging: File logging with rotation for debugging and monitoring
- Modular Frontend Architecture: ES6 modules for better code organization and maintainability
The application uses liteLLM for flexible LLM provider support. Configure your preferred model via the LLM_MODEL environment variable:
```bash
# OpenAI models (default)
LLM_MODEL=gpt-4
LLM_MODEL=gpt-4.1
LLM_MODEL=gpt-4.1-mini
LLM_MODEL=gpt-4.1-nano
LLM_MODEL=gpt-4-turbo
LLM_MODEL=gpt-4o
LLM_MODEL=gpt-4o-mini
LLM_MODEL=gpt-3.5-turbo
LLM_MODEL=o3
LLM_MODEL=o3-mini
LLM_MODEL=o4-mini
LLM_MODEL=o1-mini
LLM_MODEL=o1-preview

# Anthropic Claude
LLM_MODEL=claude-3-opus-20240229
LLM_MODEL=claude-3-sonnet-20240229

# xAI Grok
LLM_MODEL=xai/grok-beta

# Groq
LLM_MODEL=groq/llama3-8b-8192

# Mistral
LLM_MODEL=mistral/mistral-7b-instruct
```

Note: Special model versions (e.g., `gpt-4-1-2025-04-14`) are automatically mapped to standard names for compatibility.
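The mapping mentioned in the note can be pictured as a small lookup table (an illustrative sketch only; the alias table, the mapping target, and the `normalize_llm_model` name are assumptions, not the application's actual code):

```python
# Hypothetical alias table: dated snapshot names -> standard model names.
# The real application may map more (or different) identifiers.
MODEL_ALIASES = {
    "gpt-4-1-2025-04-14": "gpt-4.1",
}

def normalize_llm_model(model):
    """Return the standard name for a dated snapshot; pass unknown names through."""
    return MODEL_ALIASES.get(model, model)

print(normalize_llm_model("gpt-4-1-2025-04-14"))  # prints "gpt-4.1"
print(normalize_llm_model("gpt-4o"))              # prints "gpt-4o"
```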
The application supports on-demand image format conversion with automatic cleanup:
- WebP (Original): Direct download of the original image from Replicate
- JPEG: Converted with 90% quality compression, transparency removed
- PNG: Converted without transparency support
- On-demand conversion: Images are converted only when requested
- Temporary file management: Converted files are automatically cleaned up after 2 hours
- Rate limiting: 30 conversion requests per minute
- Error handling: Comprehensive error handling with user feedback
- Mobile-optimized UI: Touch-friendly dropdown menu with backdrop and improved positioning
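The 2-hour cleanup policy above can be sketched as a periodic sweep over a temporary directory (illustrative only; the directory name, the `cleanup_temp_files` helper, and the exact scheduling are assumptions):

```python
import time
from pathlib import Path

MAX_AGE_SECONDS = 2 * 60 * 60  # converted files expire after 2 hours

def cleanup_temp_files(temp_dir, max_age=MAX_AGE_SECONDS, now=None):
    """Delete converted files older than max_age; return how many were removed."""
    now = time.time() if now is None else now
    removed = 0
    for path in Path(temp_dir).glob("*"):
        # mtime is set when the converted file is written, so age = now - mtime
        if path.is_file() and now - path.stat().st_mtime > max_age:
            path.unlink()
            removed += 1
    return removed
```

In the application such a sweep would typically run in the background or before serving a new conversion request.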
- `GET /api/convert/<image_id>/jpg` - Convert and download as JPEG
- `GET /api/convert/<image_id>/png` - Convert and download as PNG
- `GET /api/temp-files-info` - Get temporary files statistics (debugging)
The application uses PhotoSwipe v5.4.4 for professional image viewing experience:
- Keyboard Navigation: Arrow keys for navigation, Escape to close
- Touch/Swipe Support: Full touch gesture support on mobile devices
- Zoom Functionality: Zoom up to 3x including 1:1 pixel viewing
- Proper Aspect Ratios: Automatic detection of image dimensions to prevent distortion
- Responsive Design: Optimized for both desktop and mobile viewing
- Smooth Animations: Professional fade transitions and zoom effects
- Uses PhotoSwipe UMD version loaded from CDN
- Automatic image dimension detection using `Image` objects
- Asynchronous loading for optimal performance
- Fallback to new tab if PhotoSwipe fails to load
- Image generation: 5 requests/minute
- Prompt enhancement: 10 requests/minute
- Gallery listing: 30 requests/minute
- Image download: 60 requests/minute
- Image conversion: 30 requests/minute
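The idea behind these per-minute quotas can be sketched with a small sliding-window counter (illustrative only; the application enforces its limits server-side, with Redis-backed storage in production, and may use a different algorithm):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most max_requests within any rolling window_seconds span."""

    def __init__(self, max_requests, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the rolling window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

# e.g. image generation: 5 requests/minute
generate_limiter = SlidingWindowLimiter(max_requests=5)
```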
The frontend uses a modular ES6 architecture for better maintainability and code organization:
- `constants.js` - Application constants and configuration
- `storage.js` - localStorage management and form state persistence
- `ui.js` - UI utilities (error handling, loading states, notifications)
- `gallery.js` - Image gallery management and pagination
- `form-generator.js` - Dynamic form generation and aspect ratio handling
- `api-client.js` - API communication and data fetching
- `photoswipe-gallery.js` - PhotoSwipe lightbox integration with automatic image dimension detection
- `mobile-dropdown.js` - Mobile-optimized dropdown menu enhancements for touch devices
- `main.js` - Application initialization and event handling
- Separation of Concerns: Each module has a specific responsibility
- Maintainability: Smaller, focused files are easier to understand and modify
- Reusability: Modules can be imported and used independently
- Testing: Individual modules can be tested in isolation
- Modern Standards: Uses ES6 import/export syntax
- Rate limiting
- Security headers
- CORS protection
- Secure cookie settings
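The kind of headers involved can be sketched as follows (a minimal sketch; the exact header set and values the application sends may differ):

```python
def security_headers():
    """Return a representative set of security headers for HTTP responses."""
    return {
        "X-Content-Type-Options": "nosniff",                          # block MIME sniffing
        "X-Frame-Options": "DENY",                                    # forbid framing (clickjacking)
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        "Referrer-Policy": "same-origin",
    }

# In a Flask app these would typically be attached in an after_request hook:
# @app.after_request
# def set_headers(response):
#     response.headers.update(security_headers())
#     return response
```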