This is the recommended way to run PacketQL.
For GitHub users and SOC teams, the preferred workflow is:
- pull the published Docker image from Docker Hub
- mount one host directory to /data
- open the UI
- upload a PCAP
Users should not need to manually install:
- Zeek
- Kafka
- Zeek Kafka plugin
- Django
- frontend runtime
The single-container image bundles:
- Zeek runtime
- Kafka in KRaft mode
- Go normalization and enrichment pipeline
- Django API served by Gunicorn
- static frontend served by nginx
```shell
docker pull jobish/packetql:beta
mkdir -p /opt/packetql-data
docker run -d \
  --name packetql \
  -p 3000:3000 \
  -v /opt/packetql-data:/data \
  -e APP_MODE=demo \
  jobish/packetql:beta
```

Open: http://localhost:3000
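The container needs a moment to start Zeek, Kafka, and the API before the UI answers. A small hedged helper for scripting around that (the retry count and URL are illustrative defaults, not part of PacketQL):

```shell
# wait_for_url URL MAX_TRIES — poll a URL until it responds or retries run out.
wait_for_url() {
  local url="$1" tries="${2:-30}"
  local i
  for ((i = 1; i <= tries; i++)); do
    # -s silences progress output, -f fails on HTTP errors, -o discards the body
    if curl -sf -o /dev/null "$url"; then
      echo "up after ${i} attempt(s)"
      return 0
    fi
    sleep 2
  done
  echo "still down after ${tries} attempts" >&2
  return 1
}
```

Usage: `wait_for_url http://localhost:3000 30` before opening the browser.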
If port 3000 is already in use, map a different host port:

```shell
docker run -d \
  --name packetql \
  -p 8088:3000 \
  -v /opt/packetql-data:/data \
  -e APP_MODE=demo \
  jobish/packetql:beta
```

Open: http://localhost:8088
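Rather than guessing a free host port, you can probe for one first. A sketch using bash's built-in /dev/tcp redirection (the port range is arbitrary; this helper is not part of PacketQL):

```shell
# pick_free_port START END — print the first TCP port in [START, END] that
# nothing on 127.0.0.1 is listening on. Pure bash, no extra tools needed.
pick_free_port() {
  local p
  for ((p = $1; p <= $2; p++)); do
    # the connect attempt runs in a subshell; a failed connect means the port is free
    if ! (exec 3<>"/dev/tcp/127.0.0.1/${p}") 2>/dev/null; then
      echo "$p"
      return 0
    fi
  done
  return 1
}
```

Usage: `docker run -d -p "$(pick_free_port 8080 8100)":3000 ...`.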
Mount a host path to /data.
That external directory stores:
- uploaded PCAP files
- generated parquet files
- source metadata
- Kafka runtime data used inside the container
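Because everything the container persists lives under that one mount, backing it up reduces to archiving a single directory. A minimal sketch (the helper and its names are mine, not a PacketQL tool):

```shell
# snapshot_data DIR OUT — archive the host directory mounted at /data.
# Stop or pause the container first if you need a fully consistent Kafka state.
snapshot_data() {
  local dir="$1" out="$2"
  tar -czf "$out" -C "$dir" .
}
```

Usage: `snapshot_data /opt/packetql-data /backups/packetql-$(date +%F).tar.gz`.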
For the smoothest analyst experience:
- use PCAP files below 50 MB
Larger PCAPs can work, but beta-stage throughput tuning is still improving.
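A quick pre-upload check against that guideline can save a slow round trip. A hedged sketch (the function name and warning text are mine; 50 MB is the limit suggested above):

```shell
# check_pcap_size FILE — warn when a capture exceeds the suggested 50 MB limit.
check_pcap_size() {
  local file="$1" limit=$((50 * 1024 * 1024))
  local size
  size=$(wc -c < "$file")   # wc -c is portable across GNU and BSD userlands
  if [ "$size" -gt "$limit" ]; then
    echo "warn: ${file} is ${size} bytes (> 50 MB); expect slower processing"
    return 1
  fi
  echo "ok: ${file} is ${size} bytes"
}
```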
This section is mainly for maintainers and contributors.
```shell
cd /opt/tools/pcapql
./docker/build-image.sh
```

Then run:
```shell
docker run -d \
  --name packetql \
  -p 3000:3000 \
  -v /opt/packetql-data:/data \
  packetql:single-optimized
```

Important:
- local image builds currently expect a usable Zeek runtime on the build host
- end users pulling the published image do not need that local Zeek setup
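Given that build-host requirement, a preflight check before running the build script can fail fast with a clear message. A sketch, assuming the Zeek binary is on PATH as `zeek` (the binary names checked here are assumptions about a typical build host):

```shell
# preflight — confirm the tools a local image build is assumed to need.
# Returns 0 when everything is present, 1 otherwise.
preflight() {
  local missing=0 cmd
  for cmd in docker zeek; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}
```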
```shell
docker logs -f packetql
curl -sS http://127.0.0.1:3000 >/dev/null && echo "UI OK"
curl -sS http://127.0.0.1:3000/api/v1/system/health
```

For maintainers testing a validation container on alternate ports:
```shell
BASE_URL=http://127.0.0.1:8094 \
API_URL=http://127.0.0.1:18014 \
CONTAINER_NAME=packetql-validation-prod \
bash ./docker/validate-production.sh
```

Manual deployment should be considered an advanced path.
If a user deploys outside the bundled container, they must configure and operate:
- Zeek
- Kafka
- streaming pipeline wiring
- API runtime
- frontend runtime
- persistence paths
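To give a feel for how much wiring that list implies, here is a rough environment sketch a manual operator would have to fill in. Every variable name, path, and address below is a hypothetical illustration, not PacketQL's actual configuration surface:

```shell
# Hypothetical wiring sketch for a manual (non-container) deployment.
# All names and values here are illustrative assumptions.
ZEEK_BIN=/usr/local/zeek/bin/zeek        # Zeek runtime
KAFKA_BOOTSTRAP=127.0.0.1:9092           # Kafka broker the pipeline reads from
PIPELINE_TOPICS="zeek-conn zeek-dns"     # topics the Go pipeline would consume
API_BIND=127.0.0.1:8000                  # Gunicorn bind for the Django API
FRONTEND_ROOT=/var/www/packetql          # static assets served by nginx
DATA_ROOT=/opt/packetql-data             # persistence path (PCAPs, parquet)
```

The bundled container resolves all of this internally, which is exactly what makes it the easier path.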
That is why the public documentation should always lead with the Docker-based deployment model.