A web-based application for processing and viewing Multiple Sclerosis (MS) brain MRI scans with automated lesion detection and reporting.
- v1.0: First approach: generated the report from MSReport-Provider results, with a handcrafted viewer showing the axial plane with the lesion map overlaid
- v2.0: Incorporation of the MSReport Provider Pipeline (Preprocessing, MSXplain Report Provider, Report generator). A new upload page allows users to upload folders containing multiple patients following the required structure: Folder/Patients/Session/Images (T1 and FLAIR).
- v3.0: Complete incorporation of the Report Provider with skull stripping, an updated model, conversion of the lesion_map to DICOM-SEG, and registration to FLAIR space. Began integrating the OHIF Viewer + Orthanc.
- v4.0: Fully dockerized application with complete integration of OHIF Viewer and Orthanc PACS. All components run in Docker containers with proper user permissions. Image registration now uses ANTs tools instead of elastix for improved performance and consistency.
- v5.0: Replaced FreeSurfer/SAMSEG parcellation with WMH-SynthSeg for faster and lighter brain structure segmentation. Added SwinUNETR ensemble inference (5 models) with voxel-level, lesion-level (LLU), and patient-level (PSU) uncertainty quantification. Uncertainty-filtered DICOM-SEG exports (LLU < 0.25 threshold). Upgraded to Python 3.11, PyTorch 2.7 (CUDA 12.8), pytorch-lightning 2.6, and MONAI 1.4. Built dcm2niix from source with JPEG 2000 and JPEG-LS support. False Positive lesions are now excluded from DICOM-SEG exports while remaining visible in the web report.
```
.
├── backend/                # FastAPI server with MSXplain processing pipeline
│   ├── app.py              # Main FastAPI application
│   ├── msxplain/           # MSXplain report provider modules
│   ├── files/              # Data directory (uploads, processed)
│   └── hd_bet_models/      # Brain extraction models
├── frontend/               # React web application
│   ├── src/                # React components and logic
│   └── public/             # Static assets
├── config_ohif_orthnac/    # OHIF Viewer and Orthanc PACS configuration
├── docker-compose.yml      # Docker orchestration configuration
└── .env                    # Environment configuration (user-specific)
```
- Docker Engine installed
- For GPU Support (Optional but Recommended):
- NVIDIA GPU with compatible drivers
- NVIDIA Docker runtime installed (see section 4.3 below for installation)
- At least 8GB RAM (16GB recommended)
- 50GB+ free disk space for processing data
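A quick pre-flight check along these lines can confirm the host meets the requirements above (a sketch only; it assumes a Linux host with `/proc/meminfo`, and the commands are illustrative rather than part of this project):

```shell
# Pre-flight check: Docker present, total RAM, free disk space (Linux host assumed)
if command -v docker >/dev/null 2>&1; then echo "docker: found"; else echo "docker: missing"; fi
awk '/MemTotal/ {printf "ram_gb: %d\n", $2 / 1024 / 1024}' /proc/meminfo
df -Pk . | awk 'NR == 2 {printf "disk_free_gb: %d\n", $4 / 1024 / 1024}'
```

Compare the reported values against the 16GB RAM and 50GB disk recommendations before starting a build.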
The backend Docker image bundles the following neuroimaging tools, taken from official Docker images where available for optimal performance:
- FSL 6.0.7.4 - FMRIB Software Library for brain imaging analysis
- WMH-SynthSeg - White matter hyperintensity and brain structure segmentation (replaces FreeSurfer/SAMSEG)
- ANTs 2.5.3 - Advanced Normalization Tools for image registration and transformation (from the official antsx/ants:2.5.3 image)
- dcm2niix - DICOM to NIfTI conversion (built from source with JPEG 2000 and JPEG-LS support)
- HD-BET 1.1 - Brain extraction tool
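To verify the tools are on the `PATH` (e.g. inside the backend container via `docker exec -it msxplain_backend bash`), a small loop like this can help. It is a sketch: the binary names below are the usual upstream executables (`flirt` for FSL, `antsRegistration` for ANTs) and may differ in your image.

```shell
# Report which of the bundled CLI tools are visible on the current PATH
# (binary names are the common upstream ones; adjust if your image differs)
for tool in dcm2niix antsRegistration hd-bet flirt; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```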
Important: The application runs as the user specified in the .env file. This ensures that all files created by the Docker containers have the correct permissions and can be accessed by your host user.
First Time Setup:
- Create your `.env` file from the example:
```
cp .env.example .env
```
- Edit `.env` with your user configuration:
```
# Set your user ID and group ID (important for file permissions!)
# Run these commands to get your values:
id -u   # Your user ID
id -g   # Your group ID

# Edit .env and set:
UID=1000   # Replace with your user ID
GID=1000   # Replace with your group ID
```
Or create it automatically:
```
echo "UID=$(id -u)" > .env
echo "GID=$(id -g)" >> .env
```
- Configure additional options in `.env` (optional):
```
# Port Configuration
FRONTEND_PORT=3001        # Web interface port
BACKEND_PORT=5000         # API server port
ORTHANC_HTTP_PORT=8042    # PACS web interface
ORTHANC_DICOM_PORT=4242   # DICOM protocol port

# CORS Configuration
CORS_ORIGINS=*   # For development: allow all origins
# CORS_ORIGINS=http://your-server:3001   # For production: specify exact origins
```
Before building, ensure these files are in place:
Backend:
- `backend/msxplain/model/*` - MSXplain UNet model weights
- `backend/msxplain/ensemble_models/*` - SwinUNETR ensemble checkpoints (5 models)
- `backend/hd_bet_models/*` - Brain extraction models

Frontend:
- `frontend/public/*` - Static assets
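A short check before building can catch missing or empty weight/asset directories early. This is a sketch using the paths listed above; run it from the repository root:

```shell
# Sketch: flag missing or empty model/asset directories before building
for path in backend/msxplain/model backend/msxplain/ensemble_models \
            backend/hd_bet_models frontend/public; do
  if [ -d "$path" ] && [ -n "$(ls -A "$path" 2>/dev/null)" ]; then
    echo "ok: $path"
  else
    echo "missing or empty: $path"
  fi
done
```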
Create necessary data directories:
```
mkdir -p backend/files/uploads
mkdir -p backend/files/processed
```
Important: These directories will be owned by the user specified in `.env` (UID:GID), ensuring proper permissions.
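To confirm the ownership is correct before the containers write into these directories, a check like the following can be run from the repository root (a sketch; `stat -c` is the GNU coreutils form, macOS uses `stat -f %u`):

```shell
# Create the data directories and confirm they are owned by the current user
mkdir -p backend/files/uploads backend/files/processed
owner=$(stat -c '%u' backend/files/uploads)   # GNU stat; on macOS use: stat -f %u
if [ "$owner" = "$(id -u)" ]; then
  echo "ownership OK (uid $owner)"
else
  echo "ownership mismatch: directory owned by uid $owner, you are uid $(id -u)"
fi
```

A mismatch here usually means the UID/GID in `.env` do not match your host user.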
```
# Build and start all services
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down
```
Once running, access these services:
- Frontend Web UI: http://localhost:3001 (or http://YOUR_SERVER_IP:3001)
- Backend API: http://localhost:5000
- Orthanc PACS: http://localhost:8042
- Orthanc DICOM: port 4242
Default Orthanc credentials: `orthanc` / `orthanc` (configure in `config_ohif_orthnac/config/orthanc.json`)
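A quick way to confirm all four services came up is to probe their ports. This sketch uses bash's `/dev/tcp` redirection (a bash feature, not POSIX sh) against localhost with the default ports above:

```shell
# Probe the default service ports on localhost (bash /dev/tcp; adjust ports to your .env)
for port in 3001 5000 8042 4242; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```

All four ports should report `open` once `docker compose up -d` has finished.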
The entire MSXplain application runs in Docker containers, providing:
- Isolation: Each component runs in its own container with specific dependencies
- Reproducibility: Same environment across development and production
- Portability: Easy deployment on any system with Docker
- User Permissions: Containers run as your host user (UID/GID from `.env`) to ensure:
  - Files created by containers are owned by your user
  - No permission issues when accessing files from host
  - Safe file system operations without root privileges
Container Services:
- msxplain_backend (FastAPI + Python)
  - Runs as user UID:GID specified in `.env`
  - Processes MRI scans using neuroimaging tools
  - Includes: FSL, WMH-SynthSeg, ANTs, HD-BET, PyTorch
  - GPU-enabled (optional) for faster processing
  - Data persisted in the `./backend/files` volume
- msxplain_frontend (React + Nginx)
  - Serves web interface on port 3001
  - Communicates with backend API
  - Static files built and served by Nginx
- orthanc (PACS Server)
  - DICOM image storage and retrieval
  - Web viewer on port 8042
  - DICOM protocol on port 4242
  - Data persisted in `./config_ohif_orthnac/volumes/orthanc-db`
Volume Mounts:
```
Host → Container
./backend/files              → /app/files           (Data persistence)
./config_ohif_orthnac/volumes → /var/lib/orthanc/db/ (PACS data)
```
All volumes maintain your host user ownership, preventing permission conflicts.
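In docker-compose syntax, the two bind mounts above correspond to service entries along these lines. This is a sketch for orientation only: the `user:` line and exact layout are assumptions, and the authoritative definitions live in `docker-compose.yml`:

```yaml
services:
  msxplain_backend:
    user: "${UID}:${GID}"                 # run as the host user from .env (assumed)
    volumes:
      - ./backend/files:/app/files        # uploads and processing results
  orthanc:
    volumes:
      - ./config_ohif_orthnac/volumes:/var/lib/orthanc/db/   # PACS data
```

Bind mounts (host paths on the left) are what keep data on the host and under your user's ownership, rather than inside the container filesystem.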
Build all images:
```
docker compose build
```
Build a specific service:
```
docker compose build msxplain_backend
docker compose build msxplain_frontend
```
Rebuild without cache (if needed):
```
docker compose build --no-cache msxplain_frontend
```
Note: The Docker images are already GPU-ready with CUDA libraries. You only need to install the NVIDIA Docker runtime on your host to enable GPU passthrough to containers.
What's already in the Docker images:
- ✅ CUDA 12.8 runtime libraries (via PyTorch)
- ✅ GPU-enabled PyTorch 2.7 and MONAI 1.4
- ✅ All necessary CUDA dependencies
Requirements on your host computer:
- NVIDIA GPU with recent drivers
- NVIDIA Docker runtime
Installation steps for your host machine:
- Install NVIDIA Docker runtime:
```
# Ubuntu/Debian
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```
- Test GPU access:
```
# This should show your GPU info
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```
- Enable GPU in docker-compose.yml: The GPU configuration is already present in `docker-compose.yml` and enabled by default:
```
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```
The dockerized application uses environment variables instead of configuration files. All neuroimaging tool paths are automatically configured in the Docker container:
Environment variables set in backend container:
- `FSLDIR=/usr/share/fsl` - FSL installation directory
- `FSLOUTPUTTYPE=NIFTI_GZ` - FSL output format
- `WMHSYNTHSEG_HOME=/app/wmh_synthseg` - WMH-SynthSeg installation directory
- `ANTSPATH=/opt/ants/bin` - ANTs binaries directory
- `ORTHANC_URL=http://orthanc:8042` - Orthanc PACS server URL
- `PYTHONPATH=/app` - Python module path
- `CORS_ORIGINS` - Set via the `.env` file for API access control
Note: After dockerization (v4.0), the application no longer uses config.yml. All paths are hardcoded in the Docker container for consistency and reproducibility.
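Since all host-side configuration now flows through `.env`, a small check script can catch missing keys before `docker compose up`. This sketch runs against a throwaway file so a real `.env` is never touched; the key list is illustrative, so point it at your actual `.env` and the keys your compose file references:

```shell
# Demo on a throwaway file so a real .env is never modified (key list is illustrative)
envfile=$(mktemp)
printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" > "$envfile"
for key in UID GID FRONTEND_PORT BACKEND_PORT CORS_ORIGINS; do
  if grep -q "^${key}=" "$envfile"; then echo "ok: $key"; else echo "missing: $key"; fi
done
rm -f "$envfile"
```

Keys missing from `.env` fall back to the defaults baked into `docker-compose.yml`, which is easy to miss when a port or CORS setting silently reverts.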
If you encounter permission denied errors:
- Check your `.env` file:
```
cat .env   # Should show your actual UID and GID
```
- Fix ownership of existing files:
```
sudo chown -R $(id -u):$(id -g) backend/files
sudo chown -R $(id -u):$(id -g) config_ohif_orthnac/volumes
```
- Restart containers:
```
docker compose down
docker compose up -d
```
Problem: Browser shows ERR_CONNECTION_REFUSED when loading JavaScript bundle.
Solution: Rebuild the frontend image from scratch:
```
docker compose build msxplain_frontend --no-cache
docker compose up -d msxplain_frontend
```
This ensures:
- Node.js dependencies are properly installed
- Webpack builds the React application
- `bundle.js` and `index.html` are copied to Nginx
- Check logs:
```
docker compose logs -f msxplain_backend
```
- Verify required files exist:
  - `backend/msxplain/model/*`
  - `backend/msxplain/ensemble_models/*`
  - `backend/hd_bet_models/*`
- Check GPU access (if using GPU):
```
docker exec msxplain_backend nvidia-smi
```
- Check container status:
```
docker compose ps
docker compose logs [service_name]
```
- Verify ports are not in use:
```
netstat -tuln | grep -E '3001|5000|8042|4242'
```
- Check Docker resources:
```
docker system df
docker system prune   # Clean up unused resources
```
The neuroimaging tools require significant RAM. Ensure Docker has at least 8GB RAM allocated.
```
# Check Docker memory limit
docker info | grep Memory

# Increase memory in Docker Desktop settings if needed
```
If GPU processing fails:
- Verify NVIDIA Docker runtime is installed:
```
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```
- Check the GPU configuration in docker-compose.yml (should be uncommented)
- Test GPU inside the container:
```
docker exec msxplain_backend nvidia-smi
```
- Check NVIDIA drivers on the host:
```
nvidia-smi
```
The initial build may take 30-60 minutes due to downloading large neuroimaging tools:
```
# Use BuildKit for better progress output
DOCKER_BUILDKIT=1 docker compose build --progress=plain
```
Backend requirements:
- `msxplain/model/*` - UNet model weights and configurations
- `msxplain/ensemble_models/*` - SwinUNETR ensemble checkpoints (5 seeds)
- `hd_bet_models/*` - Pre-trained brain extraction models
Frontend requirements:
- `public/*` - HTML templates and static assets
Data directories (created automatically, owned by UID:GID from .env):
- `backend/files/uploads/` - Uploaded patient data
- `backend/files/processed/` - Processing results
- `backend/files/DICOMS/` - DICOM files
```
# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f msxplain_backend
docker compose logs -f msxplain_frontend
docker compose logs -f orthanc

# Check container resource usage
docker stats
```
After changing configuration files:
- Orthanc config (`config_ohif_orthnac/config/orthanc.json`):
```
docker compose restart orthanc
```
- Environment variables (`.env`):
```
docker compose down
docker compose up -d
```
- Report_template_USB.pdf - Report template documentation
- docker-compose.yml - Docker services configuration
- .env.example - Environment configuration template
Note: Since v4.0, all configuration is done via Docker environment variables and the .env file. The config.yml file is no longer used. Since v5.0, FreeSurfer is no longer required — parcellation is handled by WMH-SynthSeg.
For issues or questions:
- Check the troubleshooting section above
- Review logs: `docker compose logs -f`
- Verify `.env` configuration matches your user
- Ensure all required files are in place before building