Indigo Mesh: A Decentralized Information Network for the Post-Cloud Era

A framework for connecting on-premises AI systems into secure, self-governing information networks without centralized storage.


1. The Thesis

Three forces are converging to create a genuine market shift:

AI makes local compute valuable again. When your machine runs autonomous AI workers that read your files, write your code, analyze your data, and learn from your patterns, the machine itself becomes the center of gravity. Your local environment is no longer a thin client calling the cloud — it is the operating system for your work. HQ already embodies this: a city of workers, knowledge, and projects living on your machine.

Regulation and economics are pulling data home. The EU AI Act (August 2026 deadline) imposes data governance requirements with penalties up to 7% of worldwide turnover. The US CLOUD Act creates irreconcilable tension with GDPR. Meanwhile, 37signals saved $1.9M in one year by leaving AWS. 86% of CIOs plan cloud repatriation. The economic and legal incentives are aligning toward data sovereignty.

But distributed teams still need to share. This is the unsolved problem. Everyone talks about cost savings and sovereignty. Almost nobody is building the software layer that makes distributed, data-local architectures work for teams. Git solved this for code. Nothing has solved it for knowledge, project state, and organizational context.

Indigo Mesh is the answer to the third problem. Not by creating another cloud, but by connecting on-premises systems into a self-governing information network — a mesh that never stores data centrally, traces every piece of information to its source, and puts control in the hands of individuals and groups.


2. The Vision

What Indigo Mesh Is

Indigo Mesh is a protocol and software layer that connects HQ instances into overlapping, self-governing information networks. Each participant keeps their data on their own machine. The mesh provides the pathways for that data to flow — selectively, securely, and with attribution — between participants who have agreed to share.

Think of it as the postal system for a world of sovereign city-states.

The Metaphor Stack

Level            Metaphor     What It Is
Node             City         An HQ instance on someone's machine
Connection       Trade Route  A peered relationship between two HQ instances
Group Mesh       Alliance     A set of connected HQs with shared governance rules
Information      Trade Goods  Knowledge, worker patterns, project context, signals
Protocol         Treaty       The rules governing how trade happens
Constitution     Charter      Group-level governance defining what can be shared and how
Personal Policy  Sovereignty  Individual control over what leaves your city

What It Is Not

  • Not a blockchain. No tokens, no mining, no consensus mechanism. We use hash chains for integrity verification, not for consensus.
  • Not a cloud with extra steps. Data never rests on any server. Relay nodes are blind conduits — encrypted pass-through only.
  • Not federated social media. Mastodon-style federation creates admin burden and consistency problems. This is point-to-point with group overlays.
  • Not IPFS. We use content-addressing concepts (CIDs, Merkle DAGs) but not the full IPFS stack. Content lives on nodes, not on a public DHT.

3. Connecting the Pieces

Indigo already has most of the building blocks. The mesh is the connective tissue.

┌──────────────────────────────────────────────────────────────┐
│                         INDIGO MESH                          │
│                                                              │
│   ┌───────────┐     ┌───────────┐     ┌───────────┐          │
│   │  HQ (You) │◄───►│ HQ (Peer) │◄───►│ HQ (Peer) │          │
│   │           │     │           │     │           │          │
│   │ Workers   │     │ Workers   │     │ Workers   │          │
│   │ Knowledge │     │ Knowledge │     │ Knowledge │          │
│   │ Projects  │     │ Projects  │     │ Projects  │          │
│   └─────┬─────┘     └─────┬─────┘     └─────┬─────┘          │
│         │                 │                 │                │
│         └─────────────────┼─────────────────┘                │
│                           │                                  │
│       ┌───────────────────┴─────────────────────────┐        │
│       │                Mesh Protocol                │        │
│       │ ┌──────────────────┐                        │        │
│       │ │ HQ World         │ Federation & Peering   │        │
│       │ │ (exists)         │                        │        │
│       │ ├──────────────────┤                        │        │
│       │ │ HIAMP            │ Worker Messaging       │        │
│       │ │ (exists)         │                        │        │
│       │ ├──────────────────┤                        │        │
│       │ │ Mesh Sync        │ Store-and-Forward      │        │
│       │ │ (NEW)            │ Information Flow       │        │
│       │ ├──────────────────┤                        │        │
│       │ │ Trust Layer      │ Signing, Capabilities, │        │
│       │ │ (NEW)            │ Provenance             │        │
│       │ ├──────────────────┤                        │        │
│       │ │ Group Gov.       │ Constitutions,         │        │
│       │ │ (NEW)            │ Permissions            │        │
│       │ └──────────────────┘                        │        │
│       └─────────────────────────────────────────────┘        │
└──────────────────────────────────────────────────────────────┘

What Already Exists

HQ — The local city. Workers, knowledge, projects, learning system, Ralph loop. Fully functional on a single machine. This is the node.

HQ World — The federation protocol. Peering ceremony (human-gated connections), manifest schema (HQ "business cards"), transfer protocol (envelopes for knowledge, worker patterns, context), file-based export/import with integrity verification. 9/9 user stories complete, 50 tests passing. This is the foundation for node-to-node connections.

HIAMP — Inter-agent messaging protocol. Workers in different HQs can communicate through Slack. Transport-agnostic envelope format. This is how workers coordinate across cities.

What's Missing: The Mesh Layer

HQ World currently uses manual, file-based transfers — you export a bundle, share it however you want (email, Slack, USB), and the receiver imports it. This is like hand-carrying letters between cities. It works, but it doesn't scale.

The mesh layer automates this exchange. It creates:

  1. Persistent connections between peered HQs (like the phone lines in FidoNet)
  2. Store-and-forward routing so information flows even when nodes are offline (like BBS message packets)
  3. Topic-based channels so nodes subscribe to what they care about (like FidoNet echomail)
  4. Group scoping so information flows within defined boundaries
  5. Automated sync on configurable schedules (like Zone Mail Hour, but modern)
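
The five capabilities above can be sketched as a store-and-forward queue with topic subscriptions. This is an illustrative sketch, not the HQ World schema — the `MeshEnvelope` and `Outbox` names are hypothetical, and a real queue would track per-peer delivery state, retries, and encryption:

```typescript
// Hypothetical sketch of the mesh layer's store-and-forward queue.
// Single-peer drain shown for brevity; a real outbox tracks delivery
// state per peer instead of removing envelopes on first flush.

interface MeshEnvelope {
  id: string;       // content identifier of the payload
  topic: string;    // echomail-style channel, e.g. "knowledge/api-patterns"
  group: string;    // scoping boundary, e.g. "indigo-engineering"
  payload: string;  // encrypted content (opaque here)
  queuedAt: number; // epoch ms, used for scheduled flushes
}

class Outbox {
  private queue: MeshEnvelope[] = [];
  private subscriptions = new Map<string, Set<string>>(); // peerId -> topics

  subscribe(peerId: string, topic: string): void {
    const topics = this.subscriptions.get(peerId) ?? new Set<string>();
    topics.add(topic);
    this.subscriptions.set(peerId, topics);
  }

  publish(envelope: MeshEnvelope): void {
    this.queue.push(envelope); // held locally until a peer is reachable
  }

  // Store-and-forward: on connect (or on schedule), drain only the
  // envelopes the peer has subscribed to — the "night call".
  flushFor(peerId: string): MeshEnvelope[] {
    const topics = this.subscriptions.get(peerId) ?? new Set<string>();
    const deliverable = this.queue.filter((e) => topics.has(e.topic));
    this.queue = this.queue.filter((e) => !topics.has(e.topic));
    return deliverable;
  }
}
```

The point of the sketch is the shape: content waits in the outbox, and topic subscriptions decide what each peer receives when a connection next exists.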

4. Architecture

Layer 1: Transport — How Nodes Talk

Model: Tailscale-inspired coordination + direct P2P

The research is clear: Tailscale's architecture is the gold standard for mesh connectivity. Separate the coordination plane (lightweight, replaceable) from the data plane (direct, encrypted).

Coordination Server (Lightweight)          Data Flow (P2P)
┌──────────────────────────────┐
│ - Node public keys           │
│ - Network topology           │         Node A ◄──────────► Node B
│ - Group membership           │              (WireGuard/Noise)
│ - Relay server locations     │
│ - NO user data ever          │         Node A ◄──► Relay ◄──► Node C
│ - Self-hostable (Headscale)  │              (Encrypted pass-through)
└──────────────────────────────┘

Implementation choices:

Component          Technology                        Why
Node identity      Ed25519 keypair (DID:key)         Self-certifying, no infrastructure needed
Encrypted tunnels  Noise Protocol Framework          Same as WireGuard, proven
NAT traversal      STUN + relay fallback             "Always connect via relay, upgrade to direct"
Relay servers      DERP-style regional relays        Encrypted pass-through, stateless, self-hostable
Peer discovery     Coordination server + mDNS (LAN)  Lightweight, replaceable, works behind NATs

Critical design decision: The coordination server holds only public keys and topology metadata. It never sees content. It can be self-hosted (like Headscale for Tailscale). If it goes down, existing connections continue to work — you just can't establish new ones until it's back. This is the minimum viable centralization.
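
A minimal sketch of what this split looks like in code, using Node's built-in Ed25519 support. The `NodeRecord` shape is hypothetical, and the DID encoding here is a hex placeholder — a spec-compliant did:key uses a multicodec prefix with base58btc multibase encoding:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Sketch of a self-certifying node identity. NOT spec-compliant did:key —
// a real implementation would multibase-encode the multicodec-prefixed key.
function createNodeIdentity() {
  const { publicKey, privateKey } = generateKeyPairSync("ed25519");
  const rawPub = publicKey.export({ type: "spki", format: "der" });
  const did = `did:key:hex:${rawPub.toString("hex")}`; // hex placeholder
  return { did, publicKey, privateKey };
}

// The coordination server stores only records like this — public key and
// topology metadata, never content. (Hypothetical shape.)
interface NodeRecord {
  did: string;
  endpoints: string[]; // candidate addresses for direct connection
  relayHint?: string;  // nearest relay, used until hole punching succeeds
  groups: string[];    // group membership, for topology only
}

const node = createNodeIdentity();
const record: NodeRecord = {
  did: node.did,
  endpoints: ["192.0.2.10:41641"],
  relayHint: "relay-eu-1",
  groups: ["indigo-engineering"],
};

// Self-certifying: anyone can check a signature against the public key
// embedded in the DID itself — no registry lookup required.
const challenge = Buffer.from("prove-you-are-" + record.did);
const sig = sign(null, challenge, node.privateKey);
const ok = verify(null, challenge, node.publicKey, sig);
```

Because the identity is the key, losing the coordination server costs you peer discovery, not identity or data.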

Layer 2: Sync — How Information Flows

Model: FidoNet store-and-forward + Syncthing block sync + CRDTs

This is the heart of the mesh — the "night call" mechanism that makes information flow between nodes asynchronously.

Node A                          Node B
┌──────────────────┐            ┌──────────────────┐
│ Outbox           │            │ Inbox            │
│ ┌──────────────┐ │  sync      │ ┌──────────────┐ │
│ │ Envelope 1   │─┼────────────┼─│ Envelope 1   │ │
│ │ Envelope 2   │─┼────────────┼─│ Envelope 2   │ │
│ │ Envelope 3   │ │            │ │ (pending)    │ │
│ └──────────────┘ │            │ └──────────────┘ │
│                  │            │                  │
│ Knowledge Store  │            │ Knowledge Store  │
│ (content-addr.)  │            │ (content-addr.)  │
└──────────────────┘            └──────────────────┘

Sync modes:

Mode          Trigger                                      Use Case
Push          Node publishes new content                   Broadcasting a knowledge update
Pull          Node requests missing content                Catching up after being offline
Scheduled     Configurable interval (e.g., every 6 hours)  Low-priority background sync
Event-driven  On checkpoint, handoff, or commit            Tied to HQ lifecycle events

Sync protocol:

  1. Exchange manifests — Each node shares a compact manifest of what it has (bloom filter of content CIDs)
  2. Diff — Compare manifests to identify what's missing on each side
  3. Transfer — Send only missing content, using block-level sync (Syncthing pattern)
  4. Verify — Recipient verifies content integrity via hash chain
  5. Apply — Content enters the node's knowledge store, pending user approval if configured
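
The five steps can be sketched as follows, with plain sets standing in for bloom-filter manifests and a truncated SHA-256 standing in for a real CID — both simplifications for readability:

```typescript
import { createHash } from "node:crypto";

// Simplified content identifier: plain SHA-256 prefix, not a real
// multiformat CID.
const cid = (content: string) =>
  "bafy-" + createHash("sha256").update(content).digest("hex").slice(0, 16);

type Store = Map<string, string>; // CID -> content

// Step 1: each node shares a manifest of what it holds.
function manifest(store: Store): Set<string> {
  return new Set(store.keys());
}

// Step 2: diff — which CIDs does the remote have that we are missing?
function missingFrom(local: Set<string>, remote: Set<string>): string[] {
  return Array.from(remote).filter((id) => !local.has(id));
}

// Steps 3-5: transfer only the missing content, verify integrity by
// recomputing the hash, then apply (the approval gate is omitted here).
function pull(local: Store, remote: Store): string[] {
  const wanted = missingFrom(manifest(local), manifest(remote));
  for (const id of wanted) {
    const content = remote.get(id)!;
    if (cid(content) !== id) throw new Error(`integrity failure for ${id}`);
    local.set(id, content);
  }
  return wanted;
}
```

Running `pull` twice is a no-op the second time — content-addressing makes the sync idempotent, which is what lets nodes reconnect after arbitrary downtime and converge.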

Conflict resolution: Pragmatic, like Syncthing. For structured data (knowledge files, worker patterns), use CRDTs where possible (Automerge for JSON-like documents). For binary files and things that can't be merged, keep both versions and flag for human resolution. No clever auto-merging that might silently corrupt — honesty over cleverness.

Layer 3: Trust — How Information Is Validated

Model: Git's hash chains + Sigstore-style attribution + UCAN capabilities

Every piece of information in the mesh has a provenance chain — you can trace it from the current holder back to the original creator, with cryptographic proof at every step.

Original Source                  Derivation                    Current Holder
┌──────────────────┐         ┌──────────────────┐         ┌──────────────────┐
│ Content: "..."   │         │ Content: "..."   │         │ Content: "..."   │
│ CID: bafy...abc  │◄────────│ CID: bafy...def  │◄────────│ CID: bafy...ghi  │
│ Author: did:A    │  from   │ Author: did:B    │  from   │ Author: did:C    │
│ Sig: <Ed25519>   │         │ Sig: <Ed25519>   │         │ Sig: <Ed25519>   │
│ Time: 2026-02-15 │         │ Derived: [abc]   │         │ Derived: [def]   │
└──────────────────┘         └──────────────────┘         └──────────────────┘

Information integrity stack:

Concern                                Solution                                           How
Content hasn't been tampered with      Content-addressed storage (CID = SHA-256 hash)     If the content changes, the CID changes — instantly detectable
I know who created this                Ed25519 signatures on every piece of content       Every node signs what it publishes; signature is permanently attached
I can trace it to its origin           Merkle DAG of derivations                          Each piece of content links to its sources by CID
This was published at a specific time  Append-only transparency log                       Like Sigstore's Rekor — distributed across mesh nodes, tamper-evident
This hasn't been revoked               Signed revocation entries in the transparency log  Creator can revoke; revocation propagates through mesh

What this means practically: When you receive a piece of knowledge through the mesh, you can:

  1. Verify it hasn't been modified (hash check)
  2. See who created it (signature verification)
  3. Trace the chain of custody (Merkle DAG traversal)
  4. Confirm when it was published (transparency log)
  5. Check if it's still considered valid (revocation check)

All of this happens locally — no network request needed for verification.
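
A sketch of that local verification, using Node's crypto module. The envelope fields are illustrative; a hex SHA-256 stands in for a multiformat CID, and the embedded public key stands in for a resolved DID:

```typescript
import {
  createHash,
  createPublicKey,
  generateKeyPairSync,
  sign,
  verify,
  KeyObject,
} from "node:crypto";

// Illustrative signed envelope — simplified stand-in for the mesh format.
interface SignedContent {
  content: string;
  cid: string;           // SHA-256 of content (simplified CID)
  authorPub: string;     // hex SPKI public key, stands in for a DID
  signature: string;     // Ed25519 signature over the CID, hex
  derivedFrom: string[]; // CIDs of sources — the Merkle DAG edges
}

function publish(
  content: string,
  priv: KeyObject,
  pub: KeyObject,
  derivedFrom: string[] = []
): SignedContent {
  const cid = createHash("sha256").update(content).digest("hex");
  return {
    content,
    cid,
    derivedFrom,
    authorPub: pub.export({ type: "spki", format: "der" }).toString("hex"),
    signature: sign(null, Buffer.from(cid), priv).toString("hex"),
  };
}

// Entirely local: recompute the hash, then check the signature against
// the public key carried in the envelope. No network round trip.
function verifyContent(item: SignedContent): boolean {
  const hashOk =
    createHash("sha256").update(item.content).digest("hex") === item.cid;
  const pub = createPublicKey({
    key: Buffer.from(item.authorPub, "hex"),
    format: "der",
    type: "spki",
  });
  const sigOk = verify(
    null,
    Buffer.from(item.cid),
    pub,
    Buffer.from(item.signature, "hex")
  );
  return hashOk && sigOk;
}
```

Tracing custody is then a walk over `derivedFrom`, verifying each ancestor envelope the same way.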

Layer 4: Access Control — Who Sees What

Model: UCAN (User Controlled Authorization Networks) + group constitutions

Access control operates at two levels: personal (what I choose to share) and group (what the group's rules allow).

Group Constitution (e.g., "Indigo Engineering Team")
┌──────────────────────────────────────────────────┐
│ Group ID: did:key:zIndigo...                      │
│ Members: [did:A, did:B, did:C, did:D]             │
│                                                   │
│ Default Sharing Rules:                            │
│   - Knowledge: share by default                   │
│   - Worker Patterns: share by default             │
│   - Project Context: share by default             │
│   - Personal Notes: never share                   │
│   - Client Data: never share                      │
│                                                   │
│ Content Classification:                           │
│   - public: visible to all group members          │
│   - restricted: requires explicit grant           │
│   - confidential: specific named recipients only  │
│                                                   │
│ Amendments: require 2/3 member approval           │
└──────────────────────────────────────────────────┘

Personal Policy (Node A's overrides)
┌──────────────────────────────────────────────────┐
│ My Rules (override group defaults):               │
│   - Share my knowledge: yes (default)             │
│   - Share my worker patterns: selective           │
│   - Share my project context: with approval       │
│   - Never share: companies/*, .env, credentials   │
│                                                   │
│ Per-Peer Overrides:                               │
│   - did:B (Alex): full trust, share everything    │
│   - did:C (Sam): read-only, knowledge only        │
│   - did:D (New): minimal, explicit approval only  │
└──────────────────────────────────────────────────┘

How UCAN enables this without a central server:

  1. When you share information, you create a UCAN token specifying:

    • Who can access it (audience DID)
    • What they can do (read, derive, reshare)
    • What content it covers (CID or path pattern)
    • When it expires
    • What they can delegate to others (attenuation)
  2. The recipient can prove their access by presenting the UCAN chain — verified entirely offline

  3. Delegation works: Alice shares with Bob (UCAN), Bob can create a restricted UCAN for Carol (read-only, no reshare). Carol can verify the chain back to Alice

  4. Revocation: Alice publishes a revocation to the transparency log. When Carol's node next syncs, it learns Alice revoked Bob's UCAN, invalidating Carol's derived access

This is the elegant solution to "how do I control where my information goes" — capabilities, not permissions. You don't maintain an access control list on a server. You issue cryptographic tokens that travel with the data. Anyone can verify them. No one needs to call home.
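
A toy model of the capability chain makes the attenuation rule concrete. Real UCANs are signed JWTs; signatures are omitted here, and all names are illustrative — the sketch shows only the delegation invariant (a child token may narrow, never widen, its parent's grant):

```typescript
// Toy UCAN-style capabilities. Unsigned, for illustration only.
type Action = "read" | "derive" | "reshare";

interface Capability {
  issuer: string;     // DID of who grants
  audience: string;   // DID of who receives
  cid: string;        // content covered
  actions: Action[];
  expires: number;    // epoch ms
  proof?: Capability; // parent token in the delegation chain
}

// Attenuation: the new grant must be a subset of the parent's actions
// and may not outlive it.
function delegate(
  parent: Capability,
  audience: string,
  actions: Action[],
  expires: number
): Capability {
  const narrowed = actions.every((a) => parent.actions.includes(a));
  if (!narrowed || expires > parent.expires) {
    throw new Error("attenuation violated");
  }
  return { issuer: parent.audience, audience, cid: parent.cid, actions, expires, proof: parent };
}

// Offline verification: walk the proof chain toward the root and confirm
// every link connects and only attenuates.
function authorized(token: Capability, who: string, action: Action, now: number): boolean {
  if (token.audience !== who || now > token.expires) return false;
  if (!token.actions.includes(action)) return false;
  let t = token;
  while (t.proof !== undefined) {
    const parent = t.proof;
    if (t.issuer !== parent.audience) return false; // broken chain
    if (!t.actions.every((a) => parent.actions.includes(a))) return false;
    t = parent;
  }
  return true;
}
```

Alice-to-Bob-to-Carol falls out directly: Carol presents her token, and any node can walk `proof` back to Alice without contacting anyone.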


5. How People Actually Use This

Scenario 1: Viewing Information

The Signal Feed

Each node in the mesh has a signal feed — a personalized stream of information flowing in from connected peers and groups. This is where Indigo's "signals from channels" concept becomes the user-facing experience.

┌─────────────────────────────────────────────────┐
│ Indigo Mesh — Signal Feed                        │
├─────────────────────────────────────────────────┤
│                                                  │
│ 📡 From: Indigo Engineering (group)              │
│ ┌───────────────────────────────────────────────┐│
│ │ New Knowledge: "API Rate Limiting Patterns"   ││
│ │ Author: alex/architect  ·  2h ago             ││
│ │ Provenance: Original (3 endorsements)         ││
│ │ [View] [Import to Knowledge] [Endorse]        ││
│ └───────────────────────────────────────────────┘│
│                                                  │
│ 📡 From: stefan/architect (direct peer)          │
│ ┌───────────────────────────────────────────────┐│
│ │ Worker Pattern: "e2e-test-writer" v2.1        ││
│ │ Adapted from: qa-tester (Indigo Eng.)         ││
│ │ Changes: Added Playwright support, removed    ││
│ │ Cypress commands, updated knowledge base      ││
│ │ Provenance: Derived from bafy...abc (v2.0)    ││
│ │ [View Diff] [Import Worker] [Ignore]          ││
│ └───────────────────────────────────────────────┘│
│                                                  │
│ 📡 From: Open Source AI Workers (community mesh) │
│ ┌───────────────────────────────────────────────┐│
│ │ Context Update: "Local LLM Benchmarks Feb 26" ││
│ │ Author: community/analyst · 5h ago            ││
│ │ Endorsed by: 12 nodes · Verified: ✓           ││
│ │ [View] [Save] [Share to Indigo Eng.]          ││
│ └───────────────────────────────────────────────┘│
│                                                  │
└─────────────────────────────────────────────────┘

How viewing works technically:

  1. Your node's sync daemon receives envelopes from connected peers
  2. Each envelope is verified (signature, hash, provenance chain)
  3. Content matching your subscriptions enters your signal feed
  4. Content is stored locally in your HQ's content-addressed store
  5. You can view, import, endorse, derive, or ignore
  6. Viewing is entirely local — no network request needed once synced

The key UX principle: Information arrives at your city. You decide what to do with it. Nothing is forced into your knowledge base — it sits in your signal feed until you act on it. This is the "postal service" model: mail arrives, you open it.

Scenario 2: Tracing Source

When you view any piece of information in the mesh, you can inspect its full provenance:

Provenance Chain for: "API Rate Limiting Patterns"
─────────────────────────────────────────────────────

[Original]
  Author: alex/architect (did:key:z6Mk...)
  Created: 2026-02-15 14:30:00 UTC
  CID: bafy...abc123
  Signature: ✓ Verified (Ed25519)
  Context: Written during hq-cloud API development

      │
      ▼ derived-from

[Revision]
  Author: alex/architect (did:key:z6Mk...)
  Created: 2026-02-16 09:15:00 UTC
  CID: bafy...def456
  Signature: ✓ Verified
  Changes: Added token bucket algorithm section

      │
      ▼ endorsed-by

[Endorsement]
  Endorser: stefan/code-reviewer (did:key:z6Mn...)
  Endorsed: 2026-02-16 11:00:00 UTC
  Comment: "Verified against our production rate limiter"
  Signature: ✓ Verified

      │
      ▼ reshared-via

[Distribution]
  Shared to: Indigo Engineering (group)
  Shared by: alex/architect
  UCAN: Read + Derive (no reshare outside group)
  Expiry: 2026-08-15

Why this matters: In a world of AI-generated content, provenance becomes critical. You need to know: Did a human write this? Was it AI-assisted? Has it been reviewed? Who vouches for it? The mesh's provenance chain answers all of these questions with cryptographic proof, not trust.

Scenario 3: Creator Attribution

Every piece of content in the mesh is permanently tied to its creator through Ed25519 signatures. This is not a metadata field that can be edited — it is a cryptographic assertion.

Attribution rules:

  1. Original work — The creator's DID and signature are embedded in the content's CID envelope. This cannot be separated from the content without invalidating the hash.

  2. Derived work — When you build on someone else's knowledge, your content links to the original via its CID. The provenance DAG preserves the creative lineage.

  3. AI-assisted work — Content created with AI assistance can be tagged with an ai_assisted: true field in the envelope metadata. The signing author is still the human who reviewed and published it.

  4. Collaborative work — Multiple signatures can co-sign a piece of content. All contributors are attributed.

  5. Endorsements — Other nodes can endorse content without claiming authorship. Endorsements are separate signed records that link to the content CID.

The practical outcome: If your knowledge travels through three hops in the mesh — from your HQ to a colleague's, from there to a community mesh, from there to someone you've never met — your authorship is provably intact at every step. No one can claim your work without your signature being present in the chain.

Scenario 4: Keeping Information Secure

Defense in depth — four layers of security:

Layer 1: Transport Security
  └─ All connections encrypted (Noise protocol)
  └─ No plaintext ever leaves a node
  └─ Relay nodes see only encrypted blobs

Layer 2: Content Security
  └─ Content-addressed storage (CID = hash)
  └─ Tamper detection is automatic
  └─ Integrity verification is offline

Layer 3: Access Security
  └─ UCAN capability tokens
  └─ Per-content, per-recipient, time-limited
  └─ Delegation with attenuation (can only restrict, never expand)

Layer 4: Governance Security
  └─ Group constitutions define boundaries
  └─ Personal policies override defaults
  └─ Transparency log for auditing
  └─ Revocation propagates through mesh

What an attacker would need to do:

Attack                       What Stops It
Intercept traffic            Noise protocol encryption (same as WireGuard)
Forge content                Ed25519 signatures — computationally infeasible to fake
Modify content in transit    Content-addressing — any change invalidates the CID
Access unauthorized content  UCAN tokens — cryptographically enforced, not policy-based
Inject false identity        DID:key — identity IS the public key, no spoofing possible
Compromise a relay           Relays see only encrypted blobs, cannot read or modify content
Compromise a node            Only that node's data is exposed; other nodes' data requires their keys
Rewrite history              Append-only transparency log — insertions are detectable

Scenario 5: Controlling Information at Group and Personal Levels

The Constitution Model

Groups in the mesh are governed by constitutions — signed documents that define the rules for information sharing within the group.

# Example: Indigo Engineering Team Constitution
constitution:
  id: "indigo-engineering-v1"
  name: "Indigo Engineering Team"
  created: "2026-03-01"

  governance:
    amendment_process: "2/3 member approval via signed vote"
    membership: "invitation only, requires 1 existing member sponsorship"
    removal: "unanimous minus-one vote, or voluntary departure"

  information_classes:
    public:
      description: "Available to all group members automatically"
      examples: ["best practices", "tool configurations", "coding patterns"]
      retention: "indefinite"

    restricted:
      description: "Available only with explicit author grant"
      examples: ["project-specific knowledge", "client-adjacent work"]
      retention: "until author revokes"
      requires: "UCAN with specific CID reference"

    confidential:
      description: "Named recipients only, no forwarding"
      examples: ["security vulnerabilities", "personnel discussions"]
      retention: "time-limited (max 90 days)"
      requires: "UCAN with no-delegate flag"

  sharing_defaults:
    knowledge: "public"
    worker_patterns: "public"
    project_context: "restricted"
    personal_notes: "never shared"

  external_sharing:
    policy: "members may share 'public' content outside the group"
    restrictions: "restricted and confidential content requires author approval"
    attribution: "original authorship must be preserved"

Personal sovereignty always wins. A group constitution sets defaults and boundaries, but individuals can always be more restrictive. You can choose not to share something even if the constitution would allow it. You can never be forced to share.
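
How a node might resolve the effective sharing class is easy to state in code: take the group default, apply the personal override, and let the stricter of the two win. The API below is an illustrative sketch (names mirror the example constitution above, but none of this is the implemented schema):

```typescript
// Sketch of policy resolution. "Personal sovereignty always wins" means
// wins in the restrictive direction: an override can tighten a group
// default but never loosen it.
type Sharing = "public" | "restricted" | "confidential" | "never";

const STRICTNESS: Record<Sharing, number> = {
  public: 0,
  restricted: 1,
  confidential: 2,
  never: 3,
};

interface Constitution {
  sharing_defaults: Record<string, Sharing>;
}

interface PersonalPolicy {
  overrides: Record<string, Sharing>;
}

function effectiveSharing(
  contentType: string,
  group: Constitution,
  personal: PersonalPolicy
): Sharing {
  // Unknown content types default to "never" — fail closed.
  const groupDefault = group.sharing_defaults[contentType] ?? "never";
  const mine = personal.overrides[contentType] ?? groupDefault;
  return STRICTNESS[mine] >= STRICTNESS[groupDefault] ? mine : groupDefault;
}
```

With overlapping memberships, a node would run this per mesh, since each mesh carries its own constitution.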

Overlapping memberships. A person can belong to multiple meshes simultaneously:

┌─────────────────────────────────────────────┐
│                YOUR HQ                      │
│                                             │
│   ┌─────────────────────────────────────┐   │
│   │ Mesh: Indigo Engineering Team       │   │
│   │ (company-internal, 12 members)      │   │
│   │                                     │   │
│   │   ┌─────────────────────────────┐   │   │
│   │   │ Mesh: Cloud Infra Squad     │   │   │
│   │   │ (sub-team, 4 members)       │   │   │
│   │   └─────────────────────────────┘   │   │
│   └─────────────────────────────────────┘   │
│                                             │
│   ┌─────────────────────────────────────┐   │
│   │ Mesh: Open Source AI Workers        │   │
│   │ (community, 200+ members)           │   │
│   └─────────────────────────────────────┘   │
│                                             │
│   ┌─────────────────────────────────────┐   │
│   │ Mesh: Stefan + Alex (personal)      │   │
│   │ (direct peer, 2 members)            │   │
│   └─────────────────────────────────────┘   │
│                                             │
└─────────────────────────────────────────────┘

Each mesh has its own constitution. Your node enforces all of them simultaneously. Content you receive in one mesh is not automatically available in another — you must explicitly reshare (if the source constitution allows it).


6. The Technology Choices (And Why)

Based on thorough research of what works and what fails in the real world, here are the specific technology choices and the evidence behind each.

What Succeeded (and what we learn from it)

System          Why It Worked                                                                 What We Take
Tailscale       Separated coordination from data; "relay first, direct later"                 Coordination server architecture, DERP relay pattern
                eliminates cold-start
Syncthing       Block-level sync with hashing; pragmatic conflict resolution                  Sync protocol, "keep both versions" honesty
BitTorrent DHT  Self-interested participation; 4 operations; 28M nodes                        DHT for content discovery at scale; simplicity
Nostr           Simple protocol (1 event type); client-side verification;                     Signed events, relay architecture
                key-based identity
Git             Content-addressed DAG; integrity without consensus; proven at massive scale   Merkle DAG for provenance, hash chains for integrity
FidoNet         Store-and-forward at scale; hierarchical addressing; batch processing         Async message routing, topic subscriptions

What Failed (and what we avoid)

System                Why It Failed                                                           What We Avoid
Textile/ThreadDB      "Centralized on-ramp" trap; too many layers; data deleted on shutdown   Never make the mesh dependent on a hosted service
Gun.js                Browser as database node; unreliable consistency; missing fundamentals  No browser-based storage of mesh data
SSB (Scuttlebutt)     Append-only forever; no deletion; pub dependency; onboarding pain       Allow content expiry and cleanup
Mastodon/ActivityPub  Admin burden; inconsistent visibility; no economic model                No "run your own server" requirement
IPFS (full stack)     Content disappears when no one pins it; centralized in practice         Use content-addressing concepts, not the full network

The Production-Ready Stack

Layer                 Technology              Maturity        Evidence
Identity              Ed25519 + DID:key       Standard (W3C)  Used by SSB, Nostr, UCAN, all mesh systems
Transport encryption  Noise Protocol          Production      WireGuard (Linux kernel), Tailscale
Content addressing    CIDs (multiformats)     Production      IPFS, Git, Hypercore
Provenance            Merkle DAG (IPLD)       Production      IPFS, Git
Data sync             Block Exchange + CRDTs  Production      Syncthing (blocks), Automerge (CRDTs)
Access control        UCAN                    Production      Fission, WNFS
Content signing       Ed25519 signatures      Production      Git commits, Nostr events
Integrity log         Append-only hash chain  Production      Git, Certificate Transparency, Hypercore
NAT traversal         STUN + relay fallback   Production      Tailscale, Syncthing, libp2p
Group encryption      MLS (RFC 9420)          Standard        OpenMLS library

Every component in this stack is production-ready. No research-only technologies. No unproven cryptography. No speculative protocols. This is assembly from proven parts.


7. Implementation Path

Phase 0: What Already Exists (Today)

  • HQ as local city — workers, knowledge, projects, learning system
  • HQ World protocol — peering ceremony, manifest schema, transfer envelopes, file-based export/import
  • HIAMP — inter-agent messaging
  • Content integrity basics — HQ World uses SHA-256 hashes in transfer bundles
  • TypeScript implementation of World protocol at packages/hq-world/

Phase 1: Persistent Connections (The Wire)

Goal: Replace manual file-based transfers with persistent, encrypted connections between peered HQs.

Build:

  • Node identity service (Ed25519 keypair generation, DID:key creation, stored locally)
  • Coordination server (Node.js/Fastify — holds public keys and topology, nothing else)
  • Direct peer connections (Noise protocol over TCP/QUIC)
  • Relay server for NAT traversal (encrypted pass-through, stateless)
  • LAN discovery (mDNS for same-network peers)

Integrates with: HQ World's existing peer registry (config/world.yaml), manifest exchange, and trust levels.

Measure of success: Two HQ instances on different networks can establish an encrypted, persistent connection that survives NAT and reconnects automatically.

Phase 2: Automated Sync (The Night Call)

Goal: Information flows between connected HQs automatically, on schedule or on events.

Build:

  • Outbox/inbox system — content queued for sync, received content staged for review
  • Manifest exchange protocol — bloom filters of content CIDs for efficient diffing
  • Block-level transfer — only send what's missing (Syncthing pattern)
  • Sync schedules — configurable intervals (real-time, hourly, daily)
  • Event-driven sync — trigger on /checkpoint, /handoff, git commit
  • Content-addressed local store — CID-keyed storage for mesh content

Integrates with: HQ World's transfer envelopes (same format, automated delivery), HQ's checkpoint/handoff lifecycle.

Measure of success: Knowledge published on Node A appears in Node B's signal feed within the configured sync interval, with zero manual steps.

Phase 3: Trust and Provenance (The Chain)

Goal: Every piece of content has cryptographic attribution, integrity verification, and a traceable origin.

Build:

  • Content signing — all published content signed with node's Ed25519 key
  • Provenance DAG — Merkle DAG linking derived content to sources
  • Transparency log — distributed append-only log of publications (shared across mesh)
  • Endorsement protocol — nodes can co-sign content they've verified
  • Revocation — signed revocation entries, propagated through mesh

Integrates with: HQ World's existing integrity hashes (extends them with full provenance), HQ's learning system (learnings become signed, attributed content).

Measure of success: Any node can verify the full chain of custody for any piece of content, entirely offline.

Phase 4: Access Control (The Gate)

Goal: Granular, decentralized control over who can access what, with delegation.

Build:

  • UCAN token generation and verification
  • Per-content access policies (attached to content at publication time)
  • Group constitutions — YAML documents defining group sharing rules
  • Personal policy engine — node-level overrides for group defaults
  • Delegation chains — UCAN attenuation for sub-sharing
  • Revocation list — synced across mesh, checked on access

Integrates with: HQ World's trust levels (open, verified, trusted map to UCAN permission levels), peering ceremony (constitution exchange during connection).

Measure of success: Content shared in a group mesh is accessible only to authorized members, even after passing through multiple intermediate nodes. Personal overrides prevent sharing of content the user marks as private.

Phase 5: Group Governance (The Constitution)

Goal: Self-governing groups with formal rules, membership management, and information classification.

Build:

  • Constitution schema — YAML format for group governance rules
  • Membership protocol — invitation, sponsorship, voting, removal
  • Information classification — public/restricted/confidential with enforcement
  • Constitutional amendments — signed votes, threshold-based approval
  • Cross-mesh sharing rules — how content flows between overlapping meshes
  • Signal feed UI — the user-facing experience for viewing mesh content

Integrates with: Everything above. This is the user-facing layer that makes the mesh feel like a living network rather than a sync tool.

Measure of success: A team of 5-10 people can form a mesh group, define their constitution, share knowledge and worker patterns, control what stays internal vs. what can be shared externally, and maintain full attribution and provenance — all without any cloud storage.
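
To make the constitution idea concrete, a hypothetical YAML constitution for a small group might look like the following. All field names are illustrative assumptions, not a defined schema:

```yaml
# Hypothetical mesh constitution (field names illustrative, not a spec)
group: indigo-core
version: 3
membership:
  invite: sponsor_plus_majority     # new members need a sponsor and >50% vote
  remove: two_thirds_vote
classification:
  default: restricted               # public | restricted | confidential
  confidential_leaves_mesh: never
amendments:
  threshold: two_thirds
  signatures_required: true
cross_mesh:
  share_public: auto                # flows to overlapping meshes automatically
  share_restricted: per_item_approval
```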


8. What's Real and What's Hard

What's Real

The technology stack is proven. Every component in the proposed architecture is production-ready. Ed25519, Noise protocol, CIDs, Merkle DAGs, UCANs, CRDTs, block-level sync — all of these have production deployments at scale.

The user need is genuine. The AI-driven shift toward local compute is real. The regulatory pressure is real. The collaboration gap for distributed teams with sovereign data is real. Nobody is solving this well.

HQ already has the foundation. The city metaphor, the World protocol, HIAMP, the worker system, the knowledge learning loop — these aren't hypothetical. They exist and work.

The Indigo Mesh is a natural evolution. HQ World was designed as file-based "trade between cities." The mesh automates and enriches that trade. Every design decision in World (transport-agnostic envelopes, human-gated peering, trust levels) was made to support exactly this evolution.

What's Hard

NAT traversal is engineering-hard. Getting two machines behind different NATs to talk directly is a solved problem in principle (Tailscale proved it), but the implementation is substantial. The relay fallback is essential — don't rely on hole punching alone.

Key management for non-technical users. Every system that ties identity to cryptographic keys hits the "lost key = lost identity" wall. Nostr, SSB, and blockchain wallets all have this problem. Potential mitigation: derive keys from a master seed phrase with backup to a user-controlled location (encrypted USB, printed QR code). This is UX-hard, not crypto-hard.
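
A minimal sketch of the seed-phrase approach, using the standard library's PBKDF2 (BIP-39 uses the same construction): the same phrase always yields the same 32-byte seed, from which the node's Ed25519 keypair could be deterministically re-derived after a loss.

```python
import hashlib

def seed_from_mnemonic(mnemonic: str, passphrase: str = "") -> bytes:
    """Stretch a human-backed phrase into a 32-byte key seed
    (BIP-39-style PBKDF2-HMAC-SHA512, 2048 iterations)."""
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(),
                               ("mnemonic" + passphrase).encode(),
                               2048, dklen=32)

# Backing up the phrase (printed QR, encrypted USB) backs up the identity:
# the derivation is deterministic, so the seed is fully recoverable.
seed = seed_from_mnemonic("orbit canvas lumber velvet")
print(len(seed))  # 32
```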

Revocation in decentralized systems. When Alice revokes Bob's access, how does the rest of the mesh learn about it? The revocation list must propagate through the mesh, but propagation takes time. During that window, Bob still has access. Mitigation: short-lived UCANs (hours, not months) with refresh, so revocation happens naturally through non-renewal.
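
The non-renewal idea fits in a few lines (illustrative Python, not a specified API): access hinges on a short expiry timestamp, so a revoked peer simply stops receiving refreshed tokens and their access lapses within hours regardless of propagation delays.

```python
import time

def still_valid(token_exp: float, now: float = None) -> bool:
    """A token grants access only until its expiry; revocation happens
    naturally when the issuer declines to refresh it."""
    return (now if now is not None else time.time()) < token_exp

issued = time.time()
short_lived = issued + 4 * 3600  # expires in hours, not months
print(still_valid(short_lived, now=issued + 3600))      # True within the window
print(still_valid(short_lived, now=issued + 5 * 3600))  # False: lapsed, no refresh
```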

Group governance at scale. A constitution for 5 people is straightforward. For 500 people, how do you handle amendments, disputes, and enforcement? This is more of a social problem than a technical one. Mitigation: start with small groups, design the protocol to support hierarchy (sub-groups within groups), and let governance patterns evolve through use.

The initial adoption curve. The mesh is only valuable with multiple participants. The first user gets zero value from the mesh layer (though HQ itself is already valuable standalone). Mitigation: start with Indigo's own team as the first mesh. The product validates on the builders first.

What's Honestly Uncertain

Will people actually set up persistent connections? The ease of "just email the file" is hard to beat. The mesh needs to be dramatically easier, not just marginally better.

Can CRDTs handle the diversity of content types in a knowledge base? CRDTs work well for text and JSON. For complex file types (images, PDFs, binary formats), you're back to "keep both versions." This may be fine in practice.

What's the right balance of automation vs. human approval? HQ World was designed with "human approval at every step." A fully manual mesh is tedious. A fully automatic mesh is scary. The answer is probably configurable — some content flows automatically, some requires approval, based on the constitution.


9. The Bigger Picture

Where This Goes

Phases 1-3 (6-12 months): Indigo's internal team uses the mesh. Small group, high trust, rapid iteration. The mesh connects 3-5 HQ instances. Proves the sync, trust, and access control layers work in practice.

Phases 4-5 (12-18 months): Open the mesh to early adopters. HQ users can form groups, create constitutions, and share knowledge. The coordination server is self-hostable. Worker pattern "pollination" becomes the killer use case — AI workers that learn in one city can spread their patterns to others, adapting to each new environment.

Beyond (18+ months): Community meshes emerge. Open-source knowledge repositories. Companies form private meshes for internal knowledge. Cross-company meshes for supply chain collaboration. The "Central Directory" from HQ World's design (US-008) becomes relevant — a discovery service for finding meshes and peers by capability, domain, or interest.

The Narrative

The internet was born decentralized. Email is federated. The web is distributed. Then everything centralized — because centralization was easier, and the tools for decentralization weren't ready.

AI is the catalyst for the next swing back. Not because of ideology, but because of economics and physics. When your machine runs autonomous workers that read your data, the data must be where the workers are. The cloud becomes the exception, not the rule.

But local-only doesn't work for teams. You need a way to share. Not by uploading to someone else's server, but by creating direct connections between sovereign systems — your machines, your keys, your rules.

Indigo Mesh is that connection layer. Built on proven technology. Designed for the world that's emerging. Grounded in a product (HQ) that already works.

Not a protocol looking for a problem. A solution emerging from the problem we're already solving.

What Makes This Different

Every decentralized system we researched succeeded or failed for the same handful of reasons:

Succeeded: Tailscale (minimal centralization, direct P2P, "just works"), Syncthing (pragmatic, honest about limitations), BitTorrent DHT (self-interested participation, dead simple protocol)

Failed: Textile (centralized on-ramp trap), Mastodon (admin burden), SSB (append-only forever, onboarding pain), Gun.js (tried to do everything in the browser)

The pattern is clear: succeed by being simple, honest, and self-serving. Fail by being complex, ideological, and altruism-dependent.

Indigo Mesh is designed to be:

  • Simple — Proven components assembled, not novel cryptography invented
  • Honest — Conflict resolution is "keep both," not "magic merge"
  • Self-serving — You participate because YOUR workers get better from the mesh, not because you're a good citizen
  • Pragmatic — Coordination server is minimal centralization that's acceptable and replaceable
  • Human-gated — No automatic anything without explicit approval

The One-Line Summary

Indigo Mesh connects sovereign AI workstations into self-governing information networks — no cloud, full provenance, you control the rules.


Appendix A: Glossary

  • Node — An HQ instance participating in the mesh
  • Peer — Another node you have a direct connection with
  • Mesh — A group of connected nodes sharing information under a constitution
  • Constitution — A signed document defining a mesh's governance rules
  • Signal Feed — Your personalized stream of incoming information from the mesh
  • Envelope — A signed, content-addressed wrapper for information in transit
  • CID — Content Identifier, a hash that uniquely identifies content
  • UCAN — User Controlled Authorization Network, a capability token for access control
  • DID — Decentralized Identifier, a globally unique identity tied to a keypair
  • Provenance DAG — The chain of derivations linking content back to its original source
  • Endorsement — A co-signature from a node that has verified a piece of content
  • Relay — A server that forwards encrypted data when direct connection fails
  • Coordination Server — Lightweight server that holds public keys and topology (never content)
  • Transparency Log — Append-only record of publications, endorsements, and revocations

Appendix B: Technology References

  • Ed25519 — Elliptic curve signature scheme (libsodium, tweetnacl)
  • DID:key — W3C standard for key-based decentralized identifiers (w3c-ccg.github.io/did-method-key)
  • UCAN — User Controlled Authorization Networks (ucan.xyz)
  • CID — Content Identifiers, IPFS/IPLD (docs.ipfs.tech/concepts/content-addressing)
  • Noise Protocol — Cryptographic handshake framework (noiseprotocol.org)
  • Automerge — CRDT library for conflict-free data sync (automerge.org)
  • MLS — Messaging Layer Security, RFC 9420 (messaginglayersecurity.rocks)
  • Merkle DAG — Directed acyclic graph with hash-linked nodes (used by Git, IPFS)
  • Hypercore — Append-only signed log (docs.holepunch.to)
  • OpenMLS — Open-source MLS implementation (openmls.tech)

Appendix C: Relationship to Existing HQ Components

  • config/world.yaml — Node identity + peer registry (Phase 1: add mesh connection config)
  • workers/registry.yaml — Capability catalog for mesh manifest (Phase 1: auto-derive from existing)
  • HQ World transfer envelopes — Wire format for mesh content (Phase 2: same format, automated delivery)
  • HQ World peering ceremony — Mesh group onboarding (Phase 1: extend for mesh membership)
  • HIAMP — Worker-to-worker messaging across mesh (Phase 3+: transport option for mesh sync)
  • /checkpoint, /handoff — Sync triggers (Phase 2: event-driven mesh sync)
  • Learning system — Signed, attributed knowledge (Phase 3: learnings enter provenance DAG)
  • Knowledge repos (symlinked) — Shareable knowledge units (Phase 2: export/import via mesh)
  • workspace/world/transfers/ — Transfer audit log (Phase 3: becomes transparency log)
