This guide provides comprehensive installation and setup instructions for the IPFS Accelerate Python framework. It covers:
- System Requirements
- Installation Methods
- Hardware Setup
- IPFS Setup
- Configuration
- Verification
- Troubleshooting
- Development Setup
Supported platforms:
- Linux: Ubuntu 18.04+, CentOS 7+, Debian 10+, Fedora 30+
- macOS: macOS 10.15+ (Intel and Apple Silicon)
- Windows: Windows 10+ (64-bit)
- Python Version: 3.8 or higher
- Architecture: x86_64, ARM64 (Apple Silicon), ARM (Raspberry Pi)
Minimum requirements:
- CPU: 2 cores, 2.0 GHz
- RAM: 4 GB
- Storage: 10 GB free space
- Network: Broadband internet connection
Recommended:
- CPU: 4+ cores, 3.0+ GHz
- RAM: 8+ GB
- Storage: 50+ GB SSD
- GPU: NVIDIA GPU with CUDA support (optional)
Optional hardware acceleration:
- NVIDIA CUDA: CUDA 11.0+ with compatible GPU
- AMD ROCm: ROCm 5.0+ with compatible GPU
- Intel OpenVINO: OpenVINO 2023.1+ runtime
- Apple Silicon: macOS with M1/M2/M3 processor
- Qualcomm: Snapdragon devices with NPU support
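If you are unsure whether a machine meets these requirements, a quick check from Python can help before installing anything. The following is a minimal sketch using only the standard library plus an optional PyTorch import; it simply prints the details to compare against the figures above.

```python
import platform
import sys

# Report Python version, OS, and CPU architecture for comparison with the requirements above.
print(f"Python:       {sys.version.split()[0]} (3.8+ required)")
print(f"OS:           {platform.system()} {platform.release()}")
print(f"Architecture: {platform.machine()}")

# Optionally report GPU backends if PyTorch happens to be installed already.
try:
    import torch
    print(f"CUDA available: {torch.cuda.is_available()}")
    print(f"MPS available:  {torch.backends.mps.is_available()}")
except ImportError:
    print("PyTorch not installed; skipping GPU backend check")
```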
# Install from PyPI
pip install ipfs_accelerate_py
# Verify installation
python -c "import ipfs_accelerate_py; print('Installation successful')"# For WebNN/WebGPU browser support
pip install ipfs_accelerate_py[webnn]
# For visualization tools
pip install ipfs_accelerate_py[viz]
# For complete installation with all features
pip install ipfs_accelerate_py[all]

# For NVIDIA CUDA support (PyTorch)
# Note: CUDA-enabled PyTorch wheels are hosted on PyTorch's wheel index.
python -m pip install --upgrade --force-reinstall \
torch torchvision torchaudio \
--index-url https://download.pytorch.org/whl/cu124
# For Intel OpenVINO support
pip install ipfs_accelerate_py[openvino]
# For AMD ROCm support
pip install ipfs_accelerate_py[rocm]
# For development tools
pip install ipfs_accelerate_py[dev]

# Clone the repository
git clone https://github.com/endomorphosis/ipfs_accelerate_py.git
cd ipfs_accelerate_py
# Install in development mode
pip install -e .
# Or install with all features
pip install -e ".[all]"# Install build dependencies
pip install build wheel
# Build the package
python -m build
# Install the built package
pip install dist/ipfs_accelerate_py-*.whl

# Dockerfile example
FROM python:3.10-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
# Install IPFS Accelerate Python
RUN pip install ipfs_accelerate_py[all]
# Set working directory
WORKDIR /app
# Copy your application
COPY . /app/
# Run your application
CMD ["python", "your_app.py"]# Build and run Docker container
docker build -t ipfs-accelerate-app .
docker run -it ipfs-accelerate-app

# Create conda environment
conda create -n ipfs-accelerate python=3.10
conda activate ipfs-accelerate
# Install via pip (conda package coming soon)
pip install ipfs_accelerate_py[all]

# Install NVIDIA drivers
sudo apt update
sudo apt install nvidia-driver-525
# Install CUDA toolkit
wget https://developer.download.nvidia.com/compute/cuda/12.0.0/local_installers/cuda_12.0.0_525.60.13_linux.run
sudo sh cuda_12.0.0_525.60.13_linux.run
# Add to PATH
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
# Verify CUDA installation
nvidia-smi
nvcc --version

- Install NVIDIA GPU drivers from NVIDIA website
- Install CUDA Toolkit from NVIDIA CUDA website
- Add CUDA to PATH in environment variables
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA devices: {torch.cuda.device_count()}")# Download OpenVINO
wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.1/linux/l_openvino_toolkit_ubuntu20_2023.1.0.12185.47b736f28c6_x86_64.tgz
# Extract and install
tar -xzf l_openvino_toolkit_ubuntu20_2023.1.0.12185.47b736f28c6_x86_64.tgz
cd l_openvino_toolkit_ubuntu20_2023.1.0.12185.47b736f28c6_x86_64
sudo ./install.sh
# Setup environment
source /opt/intel/openvino_2023/setupvars.sh

- Download OpenVINO from Intel website
- Run the installer
- Add OpenVINO to PATH
try:
    from openvino.runtime import Core
    ie = Core()
    print("OpenVINO available")
    print(f"Available devices: {ie.available_devices}")
except ImportError:
    print("OpenVINO not available")

# Add ROCm repository
wget -qO - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.4.3/ ubuntu main' | sudo tee /etc/apt/sources.list.d/rocm.list
# Install ROCm
sudo apt update
sudo apt install rocm-dkms rocm-libs miopen-hip
# Add user to render group
sudo usermod -a -G render $USER
sudo usermod -a -G video $USER
# Reboot required
sudo reboot

# Check ROCm installation
rocm-smi
rocminfo

Apple Silicon support is built into macOS and PyTorch:
import torch
print(f"MPS available: {torch.backends.mps.is_available()}")
print(f"MPS built: {torch.backends.mps.is_built()}")No additional setup required for Apple Silicon Macs.
# Download and install IPFS
wget https://dist.ipfs.tech/kubo/v0.21.0/kubo_v0.21.0_linux-amd64.tar.gz
tar -xzf kubo_v0.21.0_linux-amd64.tar.gz
cd kubo
sudo bash install.sh
# Initialize IPFS
ipfs init
# Start IPFS daemon
ipfs daemon

- Download IPFS from IPFS Downloads
- Extract to a folder (e.g., C:\ipfs)
- Add to PATH environment variable
- Initialize: ipfs init
- Start daemon: ipfs daemon
# Run IPFS in Docker
docker run -d --name ipfs-node \
-p 4001:4001 -p 5001:5001 -p 8080:8080 \
ipfs/kubo:latest
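Whichever way you run the node, you can also confirm that its HTTP API is reachable from Python before wiring it into the framework. This is a minimal sketch using only the standard library; it assumes the default API port 5001 and a Kubo version that accepts POST requests on the RPC API (recent releases require POST):

```python
import json
import urllib.request

# Ask the local Kubo node for its version via the RPC API (POST is required on recent releases).
request = urllib.request.Request("http://localhost:5001/api/v0/version", method="POST")
try:
    with urllib.request.urlopen(request, timeout=5) as response:
        info = json.loads(response.read())
        print(f"IPFS node reachable, version {info.get('Version')}")
except Exception as error:
    print(f"IPFS API not reachable: {error}")
```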
# Check IPFS status
curl http://localhost:5001/api/v0/version

# Configure IPFS for better performance
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST"]'
# Enable experimental features
ipfs config --json Experimental.FilestoreEnabled true
ipfs config --json Experimental.UrlstoreEnabled true
# Restart daemon
ipfs shutdown
ipfs daemon &# Test IPFS connectivity
curl http://localhost:5001/api/v0/version
# Test gateway
curl http://localhost:8080/ipfs/QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn

Create a configuration file in your home directory:
# Create configuration directory
mkdir -p ~/.ipfs_accelerate
# Create configuration file
cat > ~/.ipfs_accelerate/config.json << 'EOF'
{
  "hardware": {
    "prefer_cuda": true,
    "allow_openvino": true,
    "allow_mps": true,
    "precision": "fp16",
    "mixed_precision": true,
    "max_memory": "8GB"
  },
  "ipfs": {
    "gateway": "http://localhost:8080/ipfs/",
    "local_node": "http://localhost:5001",
    "timeout": 30,
    "retry_count": 3
  },
  "performance": {
    "cache_size": "2GB",
    "parallel_requests": 4,
    "enable_prefetch": true
  },
  "logging": {
    "level": "INFO",
    "file": "~/.ipfs_accelerate/ipfs_accelerate.log"
  }
}
EOF

Set up environment variables for automatic configuration:
# Add to ~/.bashrc or ~/.zshrc
export IPFS_ACCELERATE_HARDWARE_PREFER_CUDA=true
export IPFS_ACCELERATE_IPFS_GATEWAY="http://localhost:8080/ipfs/"
export IPFS_ACCELERATE_IPFS_LOCAL_NODE="http://localhost:5001"
export IPFS_ACCELERATE_LOG_LEVEL=INFO
# Apply changes
source ~/.bashrc

Create a project-specific configuration file:
# In your project directory
cat > ipfs_accelerate.json << 'EOF'
{
  "project": {
    "name": "My ML Project",
    "version": "1.0.0",
    "description": "Machine learning inference project"
  },
  "models": {
    "bert-base-uncased": {
      "cache_dir": "./models/bert",
      "precision": "fp16"
    }
  },
  "hardware": {
    "batch_size": 8,
    "precision": "fp16"
  }
}
EOF

#!/usr/bin/env python3
"""
Basic functionality test for IPFS Accelerate Python
"""
import anyio
from ipfs_accelerate_py import ipfs_accelerate_py
def test_basic_functionality():
    """Test basic framework functionality."""
    print("Testing IPFS Accelerate Python...")

    # Initialize framework
    try:
        accelerator = ipfs_accelerate_py({}, {})
        print("✓ Framework initialization successful")
    except Exception as e:
        print(f"✗ Framework initialization failed: {e}")
        return False

    # Test hardware detection
    try:
        if hasattr(accelerator, 'hardware_detection'):
            hardware_info = accelerator.hardware_detection.detect_all_hardware()
            print("✓ Hardware detection successful")
            print(f"  Available hardware: {list(hardware_info.keys())}")
        else:
            print("⚠ Hardware detection not available")
    except Exception as e:
        print(f"✗ Hardware detection failed: {e}")

    # Test basic inference
    try:
        result = accelerator.process(
            model="bert-base-uncased",
            input_data={"input_ids": [101, 2054, 2003, 102]},
            endpoint_type="text_embedding"
        )
        print("✓ Basic inference successful")
    except Exception as e:
        print(f"✗ Basic inference failed: {e}")

    print("Basic functionality test completed!")
    return True
async def test_async_functionality():
    """Test asynchronous functionality."""
    print("\nTesting async functionality...")

    try:
        accelerator = ipfs_accelerate_py({}, {})
        result = await accelerator.process_async(
            model="bert-base-uncased",
            input_data={"input_ids": [101, 2054, 2003, 102]},
            endpoint_type="text_embedding"
        )
        print("✓ Async inference successful")
    except Exception as e:
        print(f"✗ Async inference failed: {e}")


if __name__ == "__main__":
    # Run basic tests
    test_basic_functionality()
    # Run async tests
    anyio.run(test_async_functionality)

Save this as test_installation.py and run:
python test_installation.py

#!/usr/bin/env python3
"""
Hardware verification test
"""
from ipfs_accelerate_py import ipfs_accelerate_py
def test_hardware():
    """Test hardware acceleration capabilities."""
    print("Hardware Verification Test")
    print("=" * 30)

    accelerator = ipfs_accelerate_py({}, {})

    if hasattr(accelerator, 'hardware_detection'):
        hardware_info = accelerator.hardware_detection.detect_all_hardware()
        for hardware_type, info in hardware_info.items():
            status = "✓ Available" if info.get("available", False) else "✗ Not available"
            print(f"{hardware_type.upper()}: {status}")
            if info.get("available", False):
                for key, value in info.items():
                    if key != "available":
                        print(f"  {key}: {value}")
    else:
        print("Hardware detection not available")


if __name__ == "__main__":
    test_hardware()

#!/usr/bin/env python3
"""
IPFS connectivity test
"""
import anyio
from ipfs_accelerate_py import ipfs_accelerate_py
async def test_ipfs():
    """Test IPFS connectivity."""
    print("IPFS Connectivity Test")
    print("=" * 25)

    config = {
        "ipfs": {
            "gateway": "http://localhost:8080/ipfs/",
            "local_node": "http://localhost:5001"
        }
    }
    accelerator = ipfs_accelerate_py(config, {})

    try:
        # Test basic IPFS operations
        test_data = b"Hello, IPFS!"
        cid = await accelerator.store_to_ipfs(test_data)
        print(f"✓ Stored data to IPFS: {cid}")

        retrieved_data = await accelerator.query_ipfs(cid)
        if retrieved_data == test_data:
            print("✓ Data retrieval successful")
        else:
            print("✗ Data retrieval failed: content mismatch")
    except Exception as e:
        print(f"✗ IPFS test failed: {e}")
        print("  Make sure IPFS daemon is running:")
        print("  ipfs daemon")


if __name__ == "__main__":
    anyio.run(test_ipfs)

# Check if package is installed
pip list | grep ipfs-accelerate
# If not installed, install it
pip install ipfs_accelerate_py
# Check Python path
python -c "import sys; print(sys.path)"# Check NVIDIA driver
nvidia-smi
# Check CUDA installation
nvcc --version
# Install PyTorch with CUDA support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# If you're on a very new GPU (e.g. NVIDIA GB10 / compute capability 12.1) and see
# a warning that your GPU is unsupported, install nightly CUDA 13.0 wheels instead:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130

# Check if IPFS daemon is running
ps aux | grep ipfs
# Start IPFS daemon
ipfs daemon
# Check IPFS API
curl http://localhost:5001/api/v0/version

# Fix permissions
sudo chown -R $USER:$USER ~/.ipfs
sudo chown -R $USER:$USER ~/.ipfs_accelerate
# Add user to appropriate groups (Linux)
sudo usermod -a -G docker $USER
sudo usermod -a -G render $USER  # For ROCm

# Reduce batch size or model size
export IPFS_ACCELERATE_HARDWARE_MAX_MEMORY="4GB"
# Enable memory optimization
export IPFS_ACCELERATE_PERFORMANCE_ENABLE_MEMORY_OPTIMIZATION=true

Enable debug logging for troubleshooting:
import logging
logging.basicConfig(level=logging.DEBUG)
from ipfs_accelerate_py import ipfs_accelerate_py
# Enable debug mode in configuration
config = {
    "logging": {
        "level": "DEBUG",
        "enable_performance_logging": True
    }
}
accelerator = ipfs_accelerate_py(config, {})

If you encounter issues:
- Check the GitHub Issues
- Review the documentation
- Join our community discussions
- Contact support with debug logs and system information
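When contacting support or filing an issue, it helps to attach basic environment details alongside the debug logs. The snippet below is a minimal sketch for collecting them; the package names queried are assumptions, so adjust them to your environment:

```python
import platform
import sys
from importlib.metadata import PackageNotFoundError, version

# Collect basic environment details to attach to a bug report.
print(f"OS:     {platform.platform()}")
print(f"Python: {sys.version.split()[0]}")
for package in ("ipfs_accelerate_py", "torch"):
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: not installed")
```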
# Clone the repository
git clone https://github.com/endomorphosis/ipfs_accelerate_py.git
cd ipfs_accelerate_py
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Run tests
python -m pytest tests/
# Run linting
flake8 ipfs_accelerate_py/
black ipfs_accelerate_py/

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
For detailed contribution guidelines, see CONTRIBUTING.md.
After successful installation and setup:
- Read the Usage Guide for detailed usage instructions
- Check out the Examples for practical examples
- Review the API Reference for complete API documentation
- Learn about Hardware Optimization for your specific setup
- Explore IPFS Integration for distributed inference
Welcome to IPFS Accelerate Python! 🚀