Analyze architecture diagrams with AI-assisted STRIDE threat modeling.
Info Security Analyzer helps security engineers, architects, and developers turn uploaded diagrams and PDFs into structured STRIDE-oriented findings, risk summaries, and exportable reports.
Best for: fast first-pass threat modeling, security review prep, and stakeholder-friendly report exports.
Not for: replacing human judgment, security sign-off, or guaranteeing completeness.
- Upload a diagram or PDF and turn it into structured STRIDE-oriented findings
- Review risks by component, flow, and summary recommendations
- Export a stakeholder-friendly report instead of stitching together notes by hand
- Preview the UX with the built-in demo report before adding any provider credentials
Quick links: Quick start · SECURITY.md · CONTRIBUTING.md · Demo assets
If you want to evaluate the project before wiring up LLM credentials:
- Clone the repo
- Run `npm install`
- Run `npm run dev`
- Open http://localhost:5173
- Click Load Demo Report
That path lets GitHub visitors preview the report UX immediately, without API keys or backend setup.
Threat modeling is useful but often skipped because it is slow, inconsistent, or hard to start. This project aims to lower the friction:
- upload a diagram or PDF
- identify components and data flows
- generate STRIDE-oriented findings
- review recommendations and export a report
This is an assistive tool, not a certification or guarantee of completeness. Human review is still required.
- Diagram and document analysis for PNG, JPG, and PDF inputs
- STRIDE threat modeling across components and data flows
- Multi-LLM support for Azure OpenAI, OpenAI, Anthropic, and Google Gemini
- Interactive report output with risk breakdowns and relationship views
- PDF export for sharing and documentation
- Built-in demo report so you can preview the UI without API keys
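To make the shape of the output concrete, a STRIDE-oriented finding can be sketched as a small data structure. This is illustrative only; the field names below are hypothetical and not the project's actual report schema:

```python
from dataclasses import dataclass
from enum import Enum

class StrideCategory(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information Disclosure"
    DENIAL_OF_SERVICE = "Denial of Service"
    ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"

@dataclass
class Finding:
    component: str            # component or data flow the finding applies to
    category: StrideCategory  # one of the six STRIDE categories
    risk: str                 # e.g. "High", "Medium", "Low"
    description: str
    recommendation: str

# Example finding as it might appear in an exported report
finding = Finding(
    component="API Gateway -> Auth Service",
    category=StrideCategory.TAMPERING,
    risk="High",
    description="Unauthenticated internal traffic could be modified in transit.",
    recommendation="Enforce mutual TLS between internal services.",
)
print(finding.category.value)  # -> Tampering
```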
- Node.js 18+
- Python 3.10+
- Docker Desktop or Docker Engine + Docker Compose (for containerized runs)
- One API key for Azure OpenAI, OpenAI, Anthropic, or Google Gemini if you want live analysis
| Goal | Best option |
|---|---|
| Preview the UI with no API key | `npm install && npm run dev`, then click Load Demo Report |
| Run the full stack locally for development | Local development setup below |
| Start with containers | Docker quickstart or Docker Compose |
```bash
git clone https://github.com/Aveerayy/info_security_analyzer.git
cd info_security_analyzer

# Backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r backend/requirements.txt
uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000
```

In a second terminal:

```bash
cd info_security_analyzer
npm install
npm run dev
```

Open:
- Frontend: http://localhost:5173
- Backend health: http://localhost:8000/health
Notes:
- You can click Load Demo Report immediately to validate the UI without backend credentials.
- For live analysis, configure a provider in the UI or set environment variables before starting the backend.
```bash
cp .env.example .env
# edit .env only if you want server-side provider defaults
./docker-run.sh full
```

This starts:

- Backend: http://localhost:8000
- Frontend: http://localhost:5173
On first run, the script can still create .env from .env.example if needed, but copying and reviewing it yourself is clearer for deployments.
```bash
docker compose up --build backend frontend-dev
```

If you want a production-style frontend image instead:

```bash
docker compose up --build backend frontend-prod
```

The production frontend is served on http://localhost.
A sample environment file is included as .env.example.
Configure one of the following providers via environment variables or the UI settings:
| Provider | Environment variables |
|---|---|
| Azure OpenAI | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`, optionally `AZURE_OPENAI_API_VERSION` |
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| Google Gemini | `GOOGLE_API_KEY` |
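A quick way to sanity-check a provider configuration before starting the backend is to verify that every required variable from the table above is set. This is a minimal sketch; the function name and structure are illustrative, not part of the project's code:

```python
import os

# Required environment variables per provider, mirroring the table above.
REQUIRED_VARS = {
    "azure": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_DEPLOYMENT"],
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GOOGLE_API_KEY"],
}

def missing_vars(provider: str, env=os.environ) -> list[str]:
    """Return the required variables that are unset or empty for a provider."""
    return [v for v in REQUIRED_VARS[provider] if not env.get(v)]

# Example: an empty environment is missing the OpenAI key
print(missing_vars("openai", env={}))  # -> ['OPENAI_API_KEY']
```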
This repository no longer ships a usable Azure endpoint, deployment, or placeholder key. For self-hosting, set your own environment variables explicitly. For local evaluation, entering credentials in the UI is usually the safest starting point.
- Configure an LLM provider
- Upload an architecture diagram or PDF
- Run analysis
- Review findings across components, data flows, and summary recommendations
- Export the report if needed
Before using this on sensitive material, understand the basic trust boundaries:
- uploaded diagrams/PDFs are processed by the backend
- analysis content may be sent to the configured LLM provider
- API keys entered in the UI are kept only in memory for the current browser tab session and are cleared on refresh/close
- self-hosted users can instead provide provider credentials through environment variables
- if no provider is selected, the backend only falls back to Azure OpenAI when a complete Azure environment configuration is present
- generated findings can be helpful, but they still require human validation
If you are evaluating the tool for real environments, prefer sanitized diagrams first.
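The no-provider fallback rule above amounts to a completeness check on the Azure configuration. A minimal sketch of that check, assuming the three core Azure variables from the configuration table (the function name is hypothetical, not the backend's actual code):

```python
import os

AZURE_REQUIRED = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_DEPLOYMENT")

def azure_fallback_available(env=os.environ) -> bool:
    """True only when every required Azure variable is present and non-empty."""
    return all(env.get(v) for v in AZURE_REQUIRED)

# A partial configuration does not trigger the fallback:
print(azure_fallback_available({"AZURE_OPENAI_API_KEY": "sk-..."}))  # -> False
```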
See also: SECURITY.md
```
├── backend/            # FastAPI backend and provider integrations
├── src/                # React frontend
├── docker-compose.yml  # Multi-service local orchestration
├── Dockerfile          # Production frontend image
├── Dockerfile.dev      # Development frontend image
├── docker-run.sh       # Convenience wrapper for compose workflows
└── README.md
```
Contributions are welcome.
- See CONTRIBUTING.md for setup and PR expectations
- Run `./scripts/smoke-test.sh` before opening a PR to verify the main local path (frontend build, lint, backend import, health endpoint, and expected no-provider API error)
- Run `python -m unittest discover -s tests -p 'test_*.py' -v` for a fast backend smoke check without starting the full local stack
- Please report security issues privately as described in SECURITY.md
Repo-local launch planning lives in docs/growth/.
Fastest live publish path: telegram-github-publish-checklist.md
Start here for this week’s broader launch: launch-operator-packet.md
Supporting docs:
- day-of-launch-scorecard.md
- 30-day-launch-plan.md
- 7-day-execution-plan.md
- channel-strategy.md
- channel-execution-sequencing.md
- messaging-and-assets.md
- launch-assets.md
The operator packet is the single execution doc for launch week; the supporting docs hold the deeper planning, messaging rationale, and reusable templates behind it.
MIT