This guide provides step-by-step instructions to test the DTO to TypeScript Converter in both Manual (Local) and Docker modes.
## Manual (Local) Mode

Run the application directly on your machine. This is best for development and quick testing.

Prerequisites:

- Python 3.8+ installed.
- (Optional) Ollama running locally for AI features.
- Setup Environment:

  ```bash
  # Create a virtual environment
  python3 -m venv venv

  # Activate it
  source venv/bin/activate  # Linux/Mac
  # venv\Scripts\activate   # Windows

  # Install dependencies
  pip install -r requirements.txt
  ```
- Run Application:

  ```bash
  python app.py
  ```

  The output should indicate the server is running on http://127.0.0.1:5000.
- Verify:

  - Open a browser: http://127.0.0.1:5000
  - Or use curl:

    ```bash
    curl -I http://127.0.0.1:5000
    ```

  - Expected result: HTTP 200 OK.
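The curl check above can also be scripted, which is handy if you want to repeat it after every change. A minimal sketch in Python (the `check_health` helper is illustrative, not part of the project):

```python
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> int:
    """Send a HEAD request (like `curl -I`) and return the HTTP status code."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

# Usage (assumes the app is running locally):
#   check_health("http://127.0.0.1:5000")  # expect 200
```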
## Docker Mode

Run the application in a containerized environment. This ensures consistency across different machines.

Prerequisites:

- Docker and Docker Compose installed.
- Build and Run:

  ```bash
  docker compose up --build
  ```

  Wait for "Entered start loop" or similar logs indicating the services are up.
- Verify:

  - Open a browser: http://localhost:5000
  - Or use curl:

    ```bash
    curl -I http://localhost:5000
    ```

  - Expected result: HTTP 200 OK.
- Test AI Model (Optional): If you want to test the Ollama integration inside Docker:

  ```bash
  # Pull the model inside the running container
  docker compose exec ollama ollama pull deepseek-coder
  ```
- Stop: Press `Ctrl+C` or run `docker compose down`.
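After pulling the model, you can confirm Ollama actually has it by querying its `/api/tags` endpoint, which lists locally available models. A hedged sketch (the helper names are ours, and the host port 11435 follows the compose mapping noted under Port Conflicts):

```python
import json
import urllib.request

def model_listed(tags_body: str, name: str) -> bool:
    """Parse an Ollama /api/tags response body and look for a model named `name`."""
    models = json.loads(tags_body).get("models", [])
    # Model names carry a tag suffix, e.g. "deepseek-coder:latest"
    return any(m.get("name", "").startswith(name) for m in models)

def ollama_has_model(name: str, host: str = "http://localhost:11435") -> bool:
    """Ask the running Ollama instance whether `name` has been pulled."""
    with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
        return model_listed(resp.read().decode(), name)

# Usage (assumes the compose stack is up and the pull has finished):
#   ollama_has_model("deepseek-coder")
```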
## Troubleshooting

- Port Conflicts:
  - If port `5000` is busy, change the mapping in `docker-compose.yml` (e.g., `"5001:5000"`).
  - If port `11434` (Ollama) is busy, note that `docker-compose.yml` is already configured to use `11435` on the host to avoid conflicts with a local Ollama.
- Connection Refused:
  - Ensure the container is running (`docker compose ps`).
  - Check the logs (`docker compose logs`).
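Before editing the compose file, it can help to confirm which port is actually busy. A small sketch (the `port_in_use` helper is illustrative):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) otherwise
        return s.connect_ex((host, port)) == 0

# Usage:
#   port_in_use(5000)   # the app's port
#   port_in_use(11434)  # a local Ollama
```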