FoxMQ is a decentralized MQTT 5.0 broker that acts as a high-level abstraction over Tashi Vertex. Where Vertex requires you to reason about DAG events and consensus rounds, FoxMQ exposes a familiar MQTT Pub/Sub interface while Vertex handles all ordering and fault tolerance underneath. Any standard MQTT client library in any language can publish messages to FoxMQ and receive them in the same agreed-upon order across every node in the cluster.
FoxMQ is NOT a drop-in replacement for RabbitMQ or HiveMQ. The key difference:
| Property | Centralized Broker (RabbitMQ) | FoxMQ |
|---|---|---|
| Ordering | First-come, first-served per node | BFT fair ordering via Vertex consensus |
| Failure | Single point of failure | Tolerates f = floor((n-1)/3) failed nodes |
| Bottleneck | Yes — central broker | No — every node is both producer and consumer |
| Message state | Inconsistent across replicas | Identical on every node |
| Protocol | AMQP / MQTT | MQTT 5.0 (and 3.1.1) |
| Transport | TCP/TLS | TCP/TLS + QUIC/TLS 1.3 for cluster comms |
When a client publishes a message to a local FoxMQ node, the flow is:
- Message submitted to the local node's Vertex instance.
- Vertex gossips the event to cluster peers and runs virtual voting.
- Once consensus is reached (sub-100ms), the message is dispatched to subscribers on all nodes in the same order.
- All consumers across the cluster see identical, mathematically fair message streams.
FoxMQ reaches consensus with a supermajority rule: ⌊2n/3⌋ + 1 of the n nodes must agree. Size the cluster at 3f + 1 nodes, where f is the number of simultaneous failures to tolerate.
| Cluster Size | Supermajority | Failures Tolerated | Use Case |
|---|---|---|---|
| 1 | 1 | 0 | Local dev only |
| 3 | 3 | 0 | Testing |
| 4 | 3 | 1 | Minimum HA production |
| 7 | 5 | 2 | Mission critical |
| 10 | 7 | 3 | High resilience |
If more nodes fail than tolerated, consensus stalls but local dispatch on still-functioning nodes continues. The cluster recovers automatically when failed nodes reconnect.
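As a quick check of these sizing rules, a small helper (the function name is ours, not part of the FoxMQ CLI) reproduces the table above:

```python
def cluster_params(n: int) -> tuple[int, int]:
    """For an n-node cluster: (supermajority threshold, failures tolerated)."""
    supermajority = (2 * n) // 3 + 1  # floor(2n/3) + 1 rule
    tolerated = (n - 1) // 3          # f = floor((n-1)/3)
    return supermajority, tolerated

for n in (1, 3, 4, 7, 10):
    print(n, cluster_params(n))
```

Note that a 3-node cluster tolerates zero failures, which is why 4 nodes is the minimum for HA production.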
Only IPv4 addresses are currently supported.
| Port | Protocol | Purpose |
|---|---|---|
| 1883 | TCP | MQTT client connections (plaintext) |
| 8883 | TCP | MQTT-over-TLS (MQTTS) |
| 8080 | TCP | MQTT-over-WebSockets |
| 19793 | UDP | Inter-broker cluster comms (Vertex/QUIC/TLS 1.3) |
When running multiple nodes on one machine, offset ports per node:
- Node 0: 1883/19793
- Node 1: 1884/19794
- Node 2: 1885/19795
All config lives in a single directory (foxmq.d/ by default). Files use TOML format.
The network map (`address-book.toml`) must be identical on every node. It contains the P-256 ECDSA public key and UDP cluster address of each broker.
```toml
# Generated by: foxmq address-book from-range / from-list
# Each [[addresses]] entry is one broker node.

# key_0.pem
[[addresses]]
key = """
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE48VTK97faW815TQKbU/OY/ENsLmw
9M4D46bz8T/MfSlfErjKW6aMmB+armT2Fg5/5+T53ezUUlwP1ELP/GpUog==
-----END PUBLIC KEY-----
"""
addr = "172.18.0.2:19793"

# key_1.pem
[[addresses]]
key = """
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEdJH/jhJEJ8zJfnYbu1gXZnSJ7a+g
K6BC6pGyzxP6tCQ75PX13XDaV9jdhAWvHt+gKOg+0u2nCikv8dfTOe6aTw==
-----END PUBLIC KEY-----
"""
addr = "172.18.0.3:19793"
```

If `address-book.toml` differs between nodes, brokers will fail to synchronise. Debug with:

```shell
RUST_LOG=warn ./foxmq run --secret-key-file=foxmq.d/key_0.pem
```

Each `key_N.pem` file holds the private P-256 key for node N. Copy only `key_0.pem` to node 0, `key_1.pem` to node 1, and so on; never share a private key across nodes. All nodes share `address-book.toml`.
`users.toml` holds Argon2id-hashed MQTT credentials, generated by `foxmq user add`. It is safe to copy across nodes in HA setups. Passwords cannot be recovered; delete the entry and re-run `foxmq user add` to reset.
```shell
# From a port range (single-machine testing)
./foxmq address-book from-range 127.0.0.1 19793 19795
# Generates keys for: 127.0.0.1:19793, :19794, :19795

# From an explicit list (multi-machine production)
./foxmq address-book from-list 10.0.0.1:19793 10.0.0.2:19793 10.0.0.3:19793 10.0.0.4:19793

# Custom output directory
./foxmq address-book from-range 127.0.0.1 19793 19796 --output-dir /etc/foxmq/
```

```shell
./foxmq user add
# Flags:
#   --output-file   path to users.toml (default: foxmq.d/users.toml)
#   --write-mode    append | truncate (default: append)
```

```shell
./foxmq run --secret-key-file=foxmq.d/key_0.pem
```

Complete flag reference:
| Flag | Default | Description |
|---|---|---|
| `--secret-key-file, -f` | — | PEM-encoded P-256 private key (required, or use `-k`) |
| `--secret-key, -k` | — | Hex DER key (env: `SECRET_KEY`) |
| `--mqtt-addr, -L` | `0.0.0.0:1883` | TCP address for MQTT client connections |
| `--cluster-addr, -C` | `0.0.0.0:19793` | UDP address for inter-broker traffic |
| `--mqtts` | off | Enable MQTT-over-TLS |
| `--mqtts-addr` | `0.0.0.0:8883` | TLS MQTT address |
| `--tls-cert-file` | auto-generated | X.509 certificate for TLS |
| `--tls-key-file` | same as `--secret-key` | Override TLS private key |
| `--server-name` | `foxmq.local` | SNI domain name for TLS |
| `--websockets` | off | Enable MQTT-over-WebSockets |
| `--websockets-addr` | `0.0.0.0:8080` | WebSocket listen address |
| `--allow-anonymous-login` | false | Allow MQTT clients without credentials |
| `--silent-connect-errors` | false | Silently drop bad connections (stealth mode) |
| `--log, -l` | `full` | Log format: `full` \| `compact` \| `pretty` \| `json` |
JSON structured log output (for Datadog, Loki, etc.):
```shell
./foxmq run --secret-key-file=foxmq.d/key_0.pem --log=json
# {"timestamp":"2024-04-12T23:14:00Z","level":"INFO","fields":{"message":"listening for connections","listen_addr":"0.0.0.0:1883"},"target":"foxmq::mqtt::broker"}
```

Linux (amd64)

```shell
curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_linux-amd64.zip
unzip foxmq_0.3.0_linux-amd64.zip && chmod +x foxmq
```

macOS (Universal — Apple Silicon + Intel)

```shell
curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_macos-universal.zip
unzip foxmq_0.3.0_macos-universal.zip && chmod +x foxmq
```

Windows (PowerShell)

```powershell
curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_windows-amd64.zip
Expand-Archive foxmq_0.3.0_windows-amd64.zip .
```

Docker

```shell
docker pull ghcr.io/tashigit/foxmq:latest
```

```shell
./foxmq address-book from-range 127.0.0.1 19793 19793
./foxmq user add
./foxmq run --secret-key-file=foxmq.d/key_0.pem
```

```shell
# 1. Create network
docker network create --subnet=172.18.0.0/16 foxmq_net

# 2. Generate address book + keys
docker run --rm -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest \
  address-book from-list \
  172.18.0.2:19793 172.18.0.3:19793 172.18.0.4:19793 172.18.0.5:19793

# 3. Add users
docker run --rm -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest user add

# 4. Start all 4 brokers
for i in 0 1 2 3; do
  docker run -d --name foxmq-$i --network foxmq_net -p $((1883+i)):1883 \
    -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest \
    run --secret-key-file=/foxmq.d/key_${i}.pem
done
```

Clients can connect to any of the four MQTT ports — a message published to port 1883 will arrive on subscribers connected to port 1886.
```shell
pip install paho-mqtt
```

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, reason_code, properties):
    if reason_code == 0:
        client.subscribe("swarm/#", qos=1)  # subscribe to all swarm topics

def on_message(client, userdata, msg):
    # All subscribers on all FoxMQ nodes receive this in the same BFT order
    print(f"[{msg.topic}] {msg.payload.decode()}")

client = mqtt.Client(protocol=mqtt.MQTTv5)
client.username_pw_set("myagent", "mypassword")
client.on_connect = on_connect
client.on_message = on_message
client.connect("127.0.0.1", 1883, keepalive=60)
client.loop_forever()
```

```python
import json, time

def publish_state(client, agent_id, role, status):
    payload = json.dumps({
        "agent_id": agent_id,
        "timestamp_ms": int(time.time() * 1000),
        "role": role,
        "status": status,
    })
    client.publish(f"swarm/state/{agent_id}", payload, qos=1)
```

| QoS | Guarantee | Use For |
|---|---|---|
| 0 | At most once | High-frequency telemetry where loss is acceptable |
| 1 | At least once | Heartbeats, state updates |
| 2 | Exactly once | Bids, task assignments, financial events |
QoS 1 and 2 messages are persisted and replayed to clients that reconnect after downtime. QoS 0 messages may be lost if the subscriber was offline.
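One way an agent might encode this table is a topic-prefix policy. The mapping and helper below are illustrative, not part of FoxMQ or MQTT, and `swarm/telemetry` is a hypothetical prefix:

```python
# Illustrative QoS policy derived from the table above
QOS_BY_PREFIX = {
    "swarm/telemetry": 0,  # at most once: loss acceptable
    "swarm/state": 1,      # at least once: heartbeats, state updates
    "swarm/bid": 2,        # exactly once: bids, financial events
}

def qos_for(topic: str) -> int:
    """Pick QoS from the first two topic levels; default to at-least-once."""
    prefix = "/".join(topic.split("/")[:2])
    return QOS_BY_PREFIX.get(prefix, 1)
```

The helper can then be used at every publish site, e.g. `client.publish(topic, payload, qos=qos_for(topic))`, so the QoS choice lives in one place.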
Retained messages are stored per-topic and delivered immediately to new subscribers.
```python
# Broadcast current state to all future subscribers
client.publish("swarm/state/agent_a", json.dumps(state), qos=1, retain=True)

# Clear a retained message
client.publish("swarm/state/agent_a", b"", qos=1, retain=True)
```

```python
client.subscribe("swarm/+/status", qos=1)  # single level: swarm/agent_a/status
client.subscribe("swarm/#", qos=1)         # multi level: everything under swarm/
```

```python
client = mqtt.Client(protocol=mqtt.MQTTv5)
client.username_pw_set("myagent", "mypassword")
client.tls_set(ca_certs="foxmq.d/key_0.crt")  # self-signed cert auto-generated by FoxMQ
client.connect("127.0.0.1", 8883)
```

```javascript
const mqtt = require("mqtt");

const client = await mqtt.connectAsync("ws://127.0.0.1:8080", {
  username: "myagent",
  password: "mypassword",
  protocolVersion: 5,
});

await client.subscribeAsync("swarm/#");
await client.publishAsync("swarm/hello", JSON.stringify({ agent: "js_agent" }));

client.on("message", (topic, payload) => {
  console.log(topic, payload.toString());
});
```

```
swarm/hello/<agent_id>   → initial handshake payload on connect
swarm/state/<agent_id>   → { role, status, last_seen_ms } — heartbeat
swarm/task/<task_id>     → task broadcast from coordinator
swarm/bid/<task_id>      → agent bid on a task
swarm/result/<task_id>   → completed task result
```
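Agents that consume several of these topics can route messages with a small parser. The regex and function name below are our own sketch of the scheme, not a FoxMQ API:

```python
import re

# Matches the swarm topic scheme above: swarm/<kind>/<id>
TOPIC_RE = re.compile(r"^swarm/(hello|state|task|bid|result)/([^/+#]+)$")

def parse_swarm_topic(topic: str):
    """Return (kind, entity_id), or None for topics outside the scheme."""
    m = TOPIC_RE.match(topic)
    return (m.group(1), m.group(2)) if m else None
```

For example, `parse_swarm_topic("swarm/bid/task-42")` returns `("bid", "task-42")`, which can drive a dispatch table in `on_message`.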
FoxMQ's ordering is mathematically fair — the first bid to reach Vertex consensus wins, with no possibility of front-running.
```python
# Publish bid with QoS 2 (exactly once) for critical correctness
client.publish(f"swarm/bid/{task_id}", json.dumps({
    "agent_id": my_id,
    "price": 0.05,
    "timestamp_ms": int(time.time() * 1000),
}), qos=2)

# All agents apply the same deterministic rule on the same ordered stream
def on_message(client, userdata, msg):
    if msg.topic.startswith("swarm/bid/"):
        bid = json.loads(msg.payload)
        task_id = msg.topic.split("/")[-1]
        # First message IS the consensus winner — no further coordination needed
        if task_id not in decided_bids:
            decided_bids[task_id] = bid["agent_id"]
            print(f"Task {task_id} won by {bid['agent_id']}")
```

```python
STALE_THRESHOLD_MS = 10_000
peers = {}  # agent_id → { last_seen_ms, role, status }

def on_message(client, userdata, msg):
    if msg.topic.startswith("swarm/state/"):
        data = json.loads(msg.payload)
        peers[data["agent_id"]] = data

def check_stale():
    now = int(time.time() * 1000)
    for agent_id, state in peers.items():
        age = now - state["last_seen_ms"]
        if age > STALE_THRESHOLD_MS:
            state["status"] = "stale"
            print(f"[STALE] {agent_id} — {age}ms since last heartbeat")
```

MQTT Last Will messages are published automatically by the broker if a client disconnects ungracefully — useful for marking an agent offline.
```python
client = mqtt.Client(protocol=mqtt.MQTTv5)
client.will_set(
    topic=f"swarm/state/{my_id}",
    payload=json.dumps({"agent_id": my_id, "status": "offline"}),
    qos=1,
    retain=True,
)
client.connect("127.0.0.1", 1883)
```

- Fair Ordering: Vertex consensus orders messages before dispatch — no single client can manipulate order.
- Identical State: All subscribers on all FoxMQ nodes receive the same sequence. Deterministic processing logic = convergent state across all agents.
- High Availability: A 4-node cluster survives 1 node failure without operator action.
- MQTT 5.0 Native: User Properties, Topic Aliases, Session Expiry, Last Will, Flow Control all supported.
- No Additional SDK: Any MQTT library (`paho-mqtt`, `mqtt.js`, Go `paho.mqtt.golang`, Rust `rumqttc`, etc.) works without modification.
- Stealth Mode: Pass `--silent-connect-errors` to silently reject invalid connections — useful in adversarial environments.
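The Identical State property can be illustrated with a deterministic reducer: because every node delivers the same ordered stream, every replica that folds that stream with the same logic ends up with the same state. This is a sketch with names of our choosing, not FoxMQ code:

```python
import json

def reduce_stream(messages):
    """Fold an ordered (topic, payload) stream into a status map.

    FoxMQ delivers the same message order on every node, so any
    replica running this reducer converges to an identical dict.
    """
    state = {}
    for topic, payload in messages:
        if topic.startswith("swarm/state/"):
            data = json.loads(payload)
            state[data["agent_id"]] = data["status"]
    return state

stream = [
    ("swarm/state/a", json.dumps({"agent_id": "a", "status": "busy"})),
    ("swarm/state/a", json.dumps({"agent_id": "a", "status": "idle"})),
    ("swarm/state/b", json.dumps({"agent_id": "b", "status": "busy"})),
]
```

Running `reduce_stream(stream)` on any replica yields `{"a": "idle", "b": "busy"}`; last-write-wins is safe precisely because "last" means the same thing everywhere.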
Reference: FoxMQ Documentation