FoxMQ

FoxMQ is a decentralized MQTT 5.0 broker that acts as a high-level abstraction over Tashi Vertex. Where Vertex requires you to reason about DAG events and consensus rounds, FoxMQ exposes a familiar MQTT Pub/Sub interface while Vertex handles all ordering and fault tolerance underneath. Any standard MQTT client library in any language can publish messages to FoxMQ and receive them in the same agreed-upon order across every node in the cluster.


1. What FoxMQ Is (and Isn't)

FoxMQ is NOT a drop-in replacement for RabbitMQ or HiveMQ. The key differences:

| Property | Centralized Broker (RabbitMQ) | FoxMQ |
|---|---|---|
| Ordering | First-come, first-served per node | BFT fair ordering via Vertex consensus |
| Failure | Single point of failure | Tolerates f = floor((n-1)/3) failed nodes |
| Bottleneck | Yes (central broker) | No (every node is both producer and consumer) |
| Message state | Inconsistent across replicas | Identical on every node |
| Protocol | AMQP / MQTT | MQTT 5.0 (and 3.1.1) |
| Transport | TCP/TLS | TCP/TLS + QUIC/TLS 1.3 for cluster comms |

When a client publishes a message to a local FoxMQ node, the flow is:

  1. Message submitted to the local node's Vertex instance.
  2. Vertex gossips the event to cluster peers and runs virtual voting.
  3. Once consensus is reached (sub-100ms), the message is dispatched to subscribers on all nodes in the same order.
  4. All consumers across the cluster see identical, mathematically fair message streams.

2. Cluster Sizing & Fault Tolerance

FoxMQ reaches consensus with a supermajority of floor(2n/3) + 1 out of n nodes. To tolerate f simultaneous failures, use a cluster of 3f + 1 nodes.

| Cluster Size | Supermajority | Failures Tolerated | Use Case |
|---|---|---|---|
| 1 | 1 | 0 | Local dev only |
| 3 | 3 | 0 | Testing |
| 4 | 3 | 1 | Minimum HA production |
| 7 | 5 | 2 | Mission critical |
| 10 | 7 | 3 | High resilience |
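The figures in the sizing table follow directly from the supermajority rule; a quick Python sanity check:

```python
def supermajority(n: int) -> int:
    """Votes needed for consensus in an n-node cluster: floor(2n/3) + 1."""
    return (2 * n) // 3 + 1

def failures_tolerated(n: int) -> int:
    """Simultaneous node failures an n-node cluster survives: floor((n-1)/3)."""
    return (n - 1) // 3

for n in (1, 3, 4, 7, 10):
    print(n, supermajority(n), failures_tolerated(n))
```

Note that a 3-node cluster tolerates zero failures, which is why 4 nodes is the minimum for HA production.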

If more nodes fail than tolerated, consensus stalls but local dispatch on still-functioning nodes continues. The cluster recovers automatically when failed nodes reconnect.

Only IPv4 addresses are currently supported.


3. Network Ports

| Port | Protocol | Purpose |
|---|---|---|
| 1883 | TCP | MQTT client connections (plaintext) |
| 8883 | TCP | MQTT-over-TLS (MQTTS) |
| 8080 | TCP | MQTT-over-WebSockets |
| 19793 | UDP | Inter-broker cluster comms (Vertex/QUIC/TLS 1.3) |

When running multiple nodes on one machine, offset ports per node:

  • Node 0: 1883 / 19793
  • Node 1: 1884 / 19794
  • Node 2: 1885 / 19795
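The offsets above follow a simple pattern (base port plus node index), sketched here for scripting multi-node test setups:

```python
# Per-node port assignment for co-located brokers: offset both the MQTT
# and the cluster port by the node index.
BASE_MQTT, BASE_CLUSTER = 1883, 19793

def node_ports(i: int) -> tuple[int, int]:
    """(MQTT port, cluster port) for node i on a shared machine."""
    return BASE_MQTT + i, BASE_CLUSTER + i

for i in range(3):
    mqtt_port, cluster_port = node_ports(i)
    print(f"Node {i}: {mqtt_port} / {cluster_port}")
```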

4. Configuration Files (foxmq.d/)

All config lives in a single directory (foxmq.d/ by default). Files use TOML format.

address-book.toml

The network map — must be identical on every node. Contains the P-256 ECDSA public key and UDP cluster address of each broker.

# Generated by: foxmq address-book from-range / from-list
# Each [[addresses]] entry is one broker node.

# key_0.pem
[[addresses]]
key = """
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE48VTK97faW815TQKbU/OY/ENsLmw
9M4D46bz8T/MfSlfErjKW6aMmB+armT2Fg5/5+T53ezUUlwP1ELP/GpUog==
-----END PUBLIC KEY-----
"""
addr = "172.18.0.2:19793"

# key_1.pem
[[addresses]]
key = """
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEdJH/jhJEJ8zJfnYbu1gXZnSJ7a+g
K6BC6pGyzxP6tCQ75PX13XDaV9jdhAWvHt+gKOg+0u2nCikv8dfTOe6aTw==
-----END PUBLIC KEY-----
"""
addr = "172.18.0.3:19793"

If address-book.toml differs between nodes, brokers will fail to synchronise. Debug with:

RUST_LOG=warn ./foxmq run --secret-key-file=foxmq.d/key_0.pem
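Before reaching for logs, a quick way to confirm that every node has byte-identical address books is to compare file digests. An ops convenience, not a FoxMQ feature:

```python
# Print the SHA-256 of the address book; run on each node and compare.
import hashlib
from pathlib import Path

def address_book_digest(path: str = "foxmq.d/address-book.toml") -> str:
    """SHA-256 hex digest of the address book; must match on every node."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()
```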

key_N.pem

The private P-256 key for node N. Copy only key_0.pem to node 0, key_1.pem to node 1, etc. Never share across nodes. All nodes share address-book.toml.

users.toml

Argon2id-hashed MQTT credentials. Generated by foxmq user add. Safe to copy across nodes in HA setups. Passwords cannot be recovered — delete the entry and re-run user add to reset.


5. CLI Reference

foxmq address-book

# From a port range (single-machine testing)
./foxmq address-book from-range 127.0.0.1 19793 19795
# Generates keys for: 127.0.0.1:19793, :19794, :19795

# From an explicit list (multi-machine production)
./foxmq address-book from-list 10.0.0.1:19793 10.0.0.2:19793 10.0.0.3:19793 10.0.0.4:19793

# Custom output directory
./foxmq address-book from-range 127.0.0.1 19793 19796 --output-dir /etc/foxmq/

foxmq user add

./foxmq user add
# Flags:
#   --output-file    path to users.toml  (default: foxmq.d/users.toml)
#   --write-mode     append | truncate   (default: append)

foxmq run

./foxmq run --secret-key-file=foxmq.d/key_0.pem

Complete flag reference:

| Flag | Default | Description |
|---|---|---|
| --secret-key-file, -f | (none) | PEM-encoded P-256 private key (required unless -k is used) |
| --secret-key, -k | (none) | Hex-encoded DER key (env: SECRET_KEY) |
| --mqtt-addr, -L | 0.0.0.0:1883 | TCP address for MQTT client connections |
| --cluster-addr, -C | 0.0.0.0:19793 | UDP address for inter-broker traffic |
| --mqtts | off | Enable MQTT-over-TLS |
| --mqtts-addr | 0.0.0.0:8883 | TLS MQTT address |
| --tls-cert-file | auto-generated | X.509 certificate for TLS |
| --tls-key-file | same as --secret-key | Override TLS private key |
| --server-name | foxmq.local | SNI domain name for TLS |
| --websockets | off | Enable MQTT-over-WebSockets |
| --websockets-addr | 0.0.0.0:8080 | WebSocket listen address |
| --allow-anonymous-login | false | Allow MQTT clients without credentials |
| --silent-connect-errors | false | Silently drop bad connections (stealth mode) |
| --log, -l | full | Log format: full \| compact \| pretty \| json |

JSON structured log output (for Datadog, Loki, etc.):

./foxmq run --secret-key-file=foxmq.d/key_0.pem --log=json
# {"timestamp":"2024-04-12T23:14:00Z","level":"INFO","fields":{"message":"listening for connections","listen_addr":"0.0.0.0:1883"},"target":"foxmq::mqtt::broker"}
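Each `--log=json` line is a single JSON object, so any JSON-aware pipeline can consume it. A minimal Python consumer matching the sample line above:

```python
# Extract (level, message) from one FoxMQ JSON log line.
import json

def parse_log_line(line: str) -> tuple[str, str]:
    """Pull the level and human-readable message out of a log record."""
    record = json.loads(line)
    return record["level"], record["fields"]["message"]
```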

6. Installation

Linux (amd64)

curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_linux-amd64.zip
unzip foxmq_0.3.0_linux-amd64.zip && chmod +x foxmq

macOS (Universal — Apple Silicon + Intel)

curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_macos-universal.zip
unzip foxmq_0.3.0_macos-universal.zip && chmod +x foxmq

Windows (PowerShell)

curl -LO https://github.com/tashigit/foxmq/releases/download/v0.3.0/foxmq_0.3.0_windows-amd64.zip
Expand-Archive foxmq_0.3.0_windows-amd64.zip .

Docker

docker pull ghcr.io/tashigit/foxmq:latest

7. Local Dev Setup (Single Node)

./foxmq address-book from-range 127.0.0.1 19793 19793
./foxmq user add
./foxmq run --secret-key-file=foxmq.d/key_0.pem

8. Production HA Setup (4 Nodes, Docker)

# 1. Create network
docker network create --subnet=172.18.0.0/16 foxmq_net

# 2. Generate address book + keys
docker run --rm -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest \
  address-book from-list \
  172.18.0.2:19793 172.18.0.3:19793 172.18.0.4:19793 172.18.0.5:19793

# 3. Add users
docker run --rm -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest user add

# 4. Start all 4 brokers
for i in 0 1 2 3; do
  docker run -d --name foxmq-$i --network foxmq_net -p $((1883+i)):1883 \
    -v ./foxmq.d/:/foxmq.d/ ghcr.io/tashigit/foxmq:latest \
    run --secret-key-file=/foxmq.d/key_${i}.pem
done

Clients can connect to any of the four mapped MQTT ports (1883-1886): a message published via port 1883 is delivered, in the same consensus order, to subscribers connected to port 1886.
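Because every node accepts clients, a simple failover strategy is to cycle through the mapped ports whenever a connect attempt fails. A minimal sketch; the round-robin policy is illustrative, not part of FoxMQ:

```python
# Endless round-robin of broker endpoints for client-side failover.
from itertools import cycle

BROKER_PORTS = [1883, 1884, 1885, 1886]  # the four host ports mapped above

def broker_candidates(host: str = "127.0.0.1"):
    """Yield (host, port) pairs forever, wrapping around the port list."""
    return cycle((host, p) for p in BROKER_PORTS)

# Usage sketch: pull the next candidate each time connect() raises, e.g.
#   endpoints = broker_candidates()
#   host, port = next(endpoints)   # retry with the next node on failure
```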


9. MQTT Client Patterns (Python)

pip install paho-mqtt

9.1 Connect and Subscribe

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, reason_code, properties):
    if reason_code == 0:
        client.subscribe("swarm/#", qos=1)  # subscribe to all swarm topics

def on_message(client, userdata, msg):
    # All subscribers on all FoxMQ nodes receive this in the same BFT order
    print(f"[{msg.topic}] {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)  # paho-mqtt >= 2.0
client.username_pw_set("myagent", "mypassword")
client.on_connect = on_connect
client.on_message = on_message
client.connect("127.0.0.1", 1883, keepalive=60)
client.loop_forever()

9.2 Publish JSON State

import json, time

def publish_state(client, agent_id, role, status):
    payload = json.dumps({
        "agent_id":     agent_id,
        "timestamp_ms": int(time.time() * 1000),
        "role":         role,
        "status":       status,
    })
    client.publish(f"swarm/state/{agent_id}", payload, qos=1)

9.3 QoS Levels

| QoS | Guarantee | Use For |
|---|---|---|
| 0 | At most once | High-frequency telemetry where loss is acceptable |
| 1 | At least once | Heartbeats, state updates |
| 2 | Exactly once | Bids, task assignments, financial events |

QoS 1 and 2 messages are persisted and replayed to clients that reconnect after downtime. QoS 0 messages may be lost if the subscriber was offline.
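Because QoS 1 is at-least-once, a subscriber may see the same message twice after a reconnect, so consumers should process idempotently. A minimal sketch, assuming publishers embed a unique `msg_id` field in the payload (an application convention, not something FoxMQ requires):

```python
# Consumer-side deduplication for at-least-once delivery.
import json

seen_ids: set[str] = set()

def handle_once(payload: bytes, process) -> bool:
    """Run `process` on the decoded payload only the first time its
    msg_id is seen; return False for duplicate deliveries."""
    data = json.loads(payload)
    if data["msg_id"] in seen_ids:
        return False          # duplicate delivery, skip
    seen_ids.add(data["msg_id"])
    process(data)
    return True
```

In production the seen-set should be bounded (e.g. evict ids older than the broker's replay window) to avoid unbounded growth.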

9.4 Retained Messages

Retained messages are stored per-topic and delivered immediately to new subscribers.

# Broadcast current state to all future subscribers
client.publish("swarm/state/agent_a", json.dumps(state), qos=1, retain=True)

# Clear a retained message
client.publish("swarm/state/agent_a", b"", qos=1, retain=True)

9.5 Wildcard Subscriptions

client.subscribe("swarm/+/status", qos=1)   # single level: swarm/agent_a/status
client.subscribe("swarm/#", qos=1)           # multi level: everything under swarm/

9.6 TLS Connection (MQTTS)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)  # paho-mqtt >= 2.0
client.username_pw_set("myagent", "mypassword")
client.tls_set(ca_certs="foxmq.d/key_0.crt")  # self-signed cert auto-generated by FoxMQ
client.connect("127.0.0.1", 8883)

9.7 WebSocket Connection (JS/Browser)

import mqtt from "mqtt";  // ESM import: top-level await requires a module context

const client = await mqtt.connectAsync("ws://127.0.0.1:8080", {
    username: "myagent",
    password: "mypassword",
    protocolVersion: 5,
});
await client.subscribeAsync("swarm/#");
await client.publishAsync("swarm/hello", JSON.stringify({ agent: "js_agent" }));
client.on("message", (topic, payload) => {
    console.log(topic, payload.toString());
});

10. Agentic Design Patterns

10.1 Recommended Topic Schema

swarm/hello/<agent_id>      → initial handshake payload on connect
swarm/state/<agent_id>      → { role, status, last_seen_ms } — heartbeat
swarm/task/<task_id>        → task broadcast from coordinator
swarm/bid/<task_id>         → agent bid on a task
swarm/result/<task_id>      → completed task result
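Small helpers that build and parse these topics keep every agent consistent with the schema; a sketch (function names are illustrative):

```python
# Build and split topics following the swarm/<kind>/<id> schema above.

def make_topic(kind: str, ident: str) -> str:
    """e.g. make_topic('state', 'agent_a') -> 'swarm/state/agent_a'."""
    return f"swarm/{kind}/{ident}"

def parse_topic(topic: str) -> tuple[str, str]:
    """Split a schema topic into (kind, id), e.g. ('bid', 'task_42')."""
    _, kind, ident = topic.split("/", 2)
    return kind, ident
```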

10.2 Fair-Ordered Task Bidding

FoxMQ's ordering is mathematically fair — the first bid to reach Vertex consensus wins, with no possibility of front-running.

# Publish bid with QoS 2 (exactly once) for critical correctness
client.publish(f"swarm/bid/{task_id}", json.dumps({
    "agent_id":     my_id,
    "price":        0.05,
    "timestamp_ms": int(time.time() * 1000),
}), qos=2)

# All agents apply the same deterministic rule on the same ordered stream
def on_message(client, userdata, msg):
    if msg.topic.startswith("swarm/bid/"):
        bid = json.loads(msg.payload)
        task_id = msg.topic.split("/")[-1]
        # First message IS the consensus winner — no further coordination needed
        if task_id not in decided_bids:
            decided_bids[task_id] = bid["agent_id"]
            print(f"Task {task_id} won by {bid['agent_id']}")

10.3 Stale Peer Detection

STALE_THRESHOLD_MS = 10_000
peers = {}  # agent_id → { last_seen_ms, role, status }

def on_message(client, userdata, msg):
    if msg.topic.startswith("swarm/state/"):
        data = json.loads(msg.payload)
        peers[data["agent_id"]] = data

def check_stale():
    now = int(time.time() * 1000)
    for agent_id, state in peers.items():
        age = now - state["last_seen_ms"]
        if age > STALE_THRESHOLD_MS:
            state["status"] = "stale"
            print(f"[STALE] {agent_id}: {age}ms since last heartbeat")

10.4 Last Will (Publish on Ungraceful Disconnect)

MQTT Last Will messages are published automatically by the broker if a client disconnects ungracefully — useful for marking an agent offline.

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)  # paho-mqtt >= 2.0
client.will_set(
    topic=f"swarm/state/{my_id}",
    payload=json.dumps({"agent_id": my_id, "status": "offline"}),
    qos=1,
    retain=True,
)
client.connect("127.0.0.1", 1883)

11. Key Properties for Agent Builders

  • Fair Ordering: Vertex consensus orders messages before dispatch — no single client can manipulate order.
  • Identical State: All subscribers on all FoxMQ nodes receive the same sequence. Deterministic processing logic = convergent state across all agents.
  • High Availability: A 4-node cluster survives 1 node failure without operator action.
  • MQTT 5.0 Native: User Properties, Topic Aliases, Session Expiry, Last Will, Flow Control all supported.
  • No Additional SDK: Any MQTT library (paho-mqtt, mqtt.js, Go paho.mqtt.golang, Rust rumqttc, etc.) works without modification.
  • Stealth Mode: Pass --silent-connect-errors to silently reject invalid connections — useful in adversarial environments.

Reference: FoxMQ Documentation

About

A distributed, Byzantine fault tolerant MQTT broker powered by Tashi Consensus Engine (TCE).
