Append-only storage with prefix-based driver mounting. Everything has a record in the DB. Nothing is deleted — cancellation is by antiparticle (a record that negates a prior record). The from key is signed so every write carries identity and a signature.
Two backends: SQLite for the structured index, files for blobs. The DB always has the record. Files hold the payload when it's large. Small payloads (fences, sections, pointers, structured overlays) live inline in the DB record's payload column.
Descended from the crossings architecture — same schema shape, but no walk, no query planner. Simplified to append-and-read-by-prefix. The query planner comes later if it comes at all. What we need now is a place to put things and a way to get them back, with every write identified and signed.
Everything is an address. Addresses are colon-delimited paths. The first segments determine which driver handles the read/write. The registry maps prefixes to drivers, longest match wins.
mounts:
":files:local": { driver: file, root: "./data/files" }
":files:ssh": { driver: ssh }
":streams:mdast": { driver: sqlite, db: "./data/streams.db" }
":db:crossings": { driver: sqlite, db: "./data/crossings.db" }
":cache:snapshots": { driver: sqlite, db: "./data/cache.db" }
":traces": { driver: sqlite, db: "./data/traces.db" }
":types": { driver: sqlite, db: "./data/meta.db" }
":sessions": { driver: sqlite, db: "./data/sessions.db" }
Longest prefix match: :files:ssh:some-host:path/to/file matches :files:ssh; the driver gets some-host:path/to/file as the remainder.
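A minimal standalone sketch of the matching rule, with an abbreviated mount list (the prefixes here are a subset of the table above, and the separator colon is stripped so the driver sees the bare remainder):

```ruby
# Longest-prefix mount resolution, sketched with string prefixes only.
MOUNTS = [":files:ssh", ":files:local", ":streams:mdast", ":traces"]

def resolve(addr)
  match = MOUNTS.select { |prefix| addr.start_with?(prefix) }.max_by(&:length)
  raise "No driver mounted for: #{addr}" unless match
  remainder = addr.delete_prefix(match).delete_prefix(":") # strip separator colon
  [match, remainder]
end

resolve(":files:ssh:some-host:path/to/file")
# => [":files:ssh", "some-host:path/to/file"]
```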
One shape. Every write produces one of these.
to_addr — the address this record belongs to. Read-by-prefix returns all records at or under this address.
from_addr — who wrote it. Another address: :sessions:claude, :boundaries:yaml_loader, :services:lantern. The identity behind this address signs the record.
type_addr — what kind of record. Also an address: :types:crossing, :types:poke, :types:snapshot, :types:antiparticle. Types are themselves records — you can read :types:poke to get metadata about what a poke is.
payload — the data. JSON. Large leaves are extracted as IOUs (see below) — the payload stays lean, the blobs go to files.
at — ISO8601 timestamp.
sig — signature over from_addr + canonical payload. Proves identity. Uses the same PKI as boundary crossings.
# Before extraction:
payload:
boundary: "yaml_loader"
result:
content: "# Very Long Markdown Document\n\n..." # 12KB
# After extraction:
payload:
boundary: "yaml_loader"
result:
content: { "_iou": "a1b2c3d4-...", "_size": 12482 }
require "securerandom"

module Wanderland
  module Storage
    class IOU
      DEFAULT_THRESHOLD = 4096 # bytes

      # Replace large string leaves with {_iou, _size} markers; return the
      # lean payload plus a uuid => blob map for the file driver to write.
      def self.extract(payload, threshold: DEFAULT_THRESHOLD)
        ious = {}
        cleaned = walk(payload) do |value|
          if value.is_a?(String) && value.bytesize > threshold
            uuid = SecureRandom.uuid
            ious[uuid] = value
            { "_iou" => uuid, "_size" => value.bytesize }
          else
            value
          end
        end
        [cleaned, ious]
      end

      # walk only yields leaves, so hydrate recurses on its own: an IOU marker
      # is a Hash, and it must be intercepted before recursion descends into it.
      def self.hydrate(payload, &reader)
        case payload
        when Hash
          if payload["_iou"]
            reader.call(payload["_iou"])
          else
            payload.transform_values { |v| hydrate(v, &reader) }
          end
        when Array then payload.map { |v| hydrate(v, &reader) }
        else payload
        end
      end

      def self.walk(obj, &block)
        case obj
        when Hash then obj.transform_values { |v| walk(v, &block) }
        when Array then obj.map { |v| walk(v, &block) }
        else block.call(obj)
        end
      end
    end
  end
end
to_addr: ":streams:mdast:wanderland-core" # where this record lives
from_addr: ":sessions:claude" # who wrote it (signed)
type_addr: ":types:poke" # what kind of record
payload: { path: "requirements", value: "...", operation: "append" }
at: "2026-04-06T03:15:00Z" # when
sig: "abc123..." # signature over from_addr + payload
ref: "data/streams/wanderland-core/05.json" # file pointer (optional, for blobs)
Nothing is deleted. To cancel a record, write an antiparticle — a record with type_addr: ":types:antiparticle" whose payload references the original record's address and timestamp.
to_addr: ":streams:mdast:wanderland-core"
from_addr: ":sessions:claude"
type_addr: ":types:antiparticle"
payload: { cancels: { at: "2026-04-06T03:15:00Z", type: ":types:poke" } }
at: "2026-04-06T04:00:00Z"
sig: "def456..."
When reading, the overlay skips cancelled records. The antiparticle and the original both remain in the append log — the cancellation is itself a fact, signed and timestamped.
An antiparticle must be signed by an identity with authority to cancel records at that address. The storage engine checks: can from_addr cancel records at to_addr? This is the same capability check as boundary authorization.
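The overlay's cancellation pass is not spelled out in this section; a minimal sketch, assuming records are hashes shaped like the examples here and that cancellation matches on the original record's at timestamp (as the antiparticle payload above references it):

```ruby
# Sketch: drop records cancelled by antiparticles. The antiparticle itself is
# also dropped from the overlay view; both stay in the append log.
def apply_antiparticles(records)
  cancelled = records
    .select { |r| r[:type_addr] == ":types:antiparticle" }
    .map    { |r| r.dig(:payload, :cancels, :at) }
  records.reject do |r|
    r[:type_addr] == ":types:antiparticle" || cancelled.include?(r[:at])
  end
end

records = [
  { type_addr: ":types:poke", at: "2026-04-06T03:15:00Z",
    payload: { path: "requirements" } },
  { type_addr: ":types:antiparticle", at: "2026-04-06T04:00:00Z",
    payload: { cancels: { at: "2026-04-06T03:15:00Z", type: ":types:poke" } } },
  { type_addr: ":types:poke", at: "2026-04-06T05:00:00Z",
    payload: { path: "notes" } }
]
apply_antiparticles(records).map { |r| r[:at] }
# => ["2026-04-06T05:00:00Z"]
```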
Every record's from_addr is backed by a signature. The sig field covers from_addr + canonical payload (keys sorted, JSON-serialized). This means tampering with the payload or spoofing the from_addr invalidates the signature.
The PKI is the same as boundary crossings. Wanderland.pki.sign(identity_id, payload) produces the signature. Wanderland.pki.verify(identity_id, payload, signature) validates it. The storage engine calls verify on read if the caller asks for verified records.
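The canonicalization half of this is concrete enough to sketch. Here HMAC-SHA256 stands in for the actual PKI (Wanderland.pki.sign/verify is not specified in this section); what the sketch shows is that signing covers from_addr plus the payload with keys sorted, so key order in the source hash doesn't change the signature:

```ruby
require "json"
require "openssl"

# Recursively sort hash keys so serialization is deterministic.
def canonical(payload)
  case payload
  when Hash  then payload.sort.to_h.transform_values { |v| canonical(v) }
  when Array then payload.map { |v| canonical(v) }
  else payload
  end
end

# HMAC is an illustrative stand-in for the real PKI's sign operation.
def sign(key, from_addr, payload)
  OpenSSL::HMAC.hexdigest("SHA256", key, from_addr + JSON.generate(canonical(payload)))
end

key = "demo-secret"
a = sign(key, ":sessions:claude", { value: "...", path: "requirements" })
b = sign(key, ":sessions:claude", { path: "requirements", value: "..." })
a == b  # => true, canonicalization makes key order irrelevant
```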
The workhorse. One table per mounted database:
CREATE TABLE records (
id INTEGER PRIMARY KEY AUTOINCREMENT,
to_addr TEXT NOT NULL,
from_addr TEXT NOT NULL,
type_addr TEXT NOT NULL,
payload TEXT, -- JSON, IOUs replace large leaves
at TEXT NOT NULL, -- ISO8601
sig TEXT -- signature over from_addr + payload
);
CREATE INDEX idx_to_addr ON records(to_addr);
CREATE INDEX idx_from_addr ON records(from_addr);
CREATE INDEX idx_type_addr ON records(type_addr);
CREATE INDEX idx_at ON records(at);
Operations:
append(record) — Extract IOUs from payload, write IOU files, INSERT lean record. That's it. Append-only.
at(addr) — SELECT WHERE to_addr = addr ORDER BY at. Exact match.
under(prefix) — SELECT WHERE to_addr LIKE 'prefix%' ORDER BY at. Prefix scan.
from(addr) — SELECT WHERE from_addr = addr ORDER BY at. Who wrote what.
typed(type_addr) — SELECT WHERE type_addr = addr. All records of a type.
overlay(addr) — Read all records at addr, apply antiparticles, return current state.
hydrate(record) — Walk payload, replace IOUs with file contents. Explicit, on-demand.
No UPDATE. No DELETE. Append and read.
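The query shapes are simple enough to state as an in-memory reference sketch (the real driver is SQLite; this pure-Ruby MemoryDriver is illustrative and only shows the semantics of each operation):

```ruby
# In-memory stand-in for the SQLite driver's read operations.
class MemoryDriver
  def initialize
    @records = []
  end

  def append(record)        # append-only: no UPDATE, no DELETE
    @records << record
  end

  def at(addr)              # exact match on to_addr, ordered by at
    @records.select { |r| r[:to_addr] == addr }.sort_by { |r| r[:at] }
  end

  def under(prefix)         # prefix scan, ordered by at
    @records.select { |r| r[:to_addr].start_with?(prefix) }.sort_by { |r| r[:at] }
  end

  def from(addr)            # who wrote what
    @records.select { |r| r[:from_addr] == addr }.sort_by { |r| r[:at] }
  end

  def typed(type_addr)      # all records of a type
    @records.select { |r| r[:type_addr] == type_addr }
  end
end

d = MemoryDriver.new
d.append(to_addr: ":streams:mdast:core", from_addr: ":sessions:claude",
         type_addr: ":types:poke", at: "2026-04-06T03:15:00Z")
d.append(to_addr: ":streams:mdast:docs", from_addr: ":sessions:claude",
         type_addr: ":types:poke", at: "2026-04-06T03:16:00Z")
d.under(":streams:mdast").length   # => 2
d.at(":streams:mdast:core").length # => 1
```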
Blob storage. Directory structure mirrors the address space:
data/files/
local/
streams/
mdast/
wanderland-core/
00-source.md # original markdown
01-ast.json # parsed mdast
traces/
2026-04-06/
span-uuid.json # trace file
Operations:
write(addr, content) — Write file at path derived from address. Return the path.
read(addr) — Read file content.
exists?(addr) — Check existence.
The file driver doesn't do records — it just stores blobs. The SQLite driver holds the record that points at the file via ref.
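A minimal sketch of the file driver, assuming the path is derived from the address by swapping colons for directory separators (the exact mapping is not specified in this section):

```ruby
require "fileutils"
require "tmpdir"

# Blobs only, no records: the SQLite side holds the pointer via ref.
class FileDriver
  def initialize(root)
    @root = root
  end

  def path_for(addr)
    # ":streams:mdast:core/00-source.md" -> "<root>/streams/mdast/core/00-source.md"
    File.join(@root, addr.delete_prefix(":").tr(":", "/"))
  end

  def write(addr, content)
    path = path_for(addr)
    FileUtils.mkdir_p(File.dirname(path))
    File.write(path, content)
    path
  end

  def read(addr)
    File.read(path_for(addr))
  end

  def exists?(addr)
    File.exist?(path_for(addr))
  end
end

driver = FileDriver.new(Dir.mktmpdir)
driver.write("streams:mdast:core/00-source.md", "# Source")
driver.read("streams:mdast:core/00-source.md")  # => "# Source"
driver.exists?("streams:mdast:missing")         # => false
```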
Same as file driver but over SSH. Address remainder after the mount prefix includes the host: :files:ssh:some-host:path/to/file. The driver parses out the host and delegates to an SSH connection. Future work — the mount table and driver interface are ready for it.
Same pattern as ResolverRegistry, DataProvider, ComponentRegistry. Mount at boot from config.
module Wanderland
  module Storage
    @mounts = {} # prefix => driver instance

    def self.mount(prefix, driver)
      @mounts[prefix] = driver
    end

    def self.resolve(addr)
      # Longest prefix match over the mounted prefixes
      match = @mounts.keys
        .select { |prefix| addr.start_with?(prefix) }
        .max_by(&:length)
      raise "No driver mounted for: #{addr}" unless match
      remainder = addr.delete_prefix(match).delete_prefix(":") # strip separator colon
      [@mounts[match], remainder]
    end

    def self.append(record)
      driver, _ = resolve(record[:to_addr])
      driver.append(record)
    end

    def self.at(addr)
      driver, _ = resolve(addr)
      driver.at(addr)
    end

    def self.under(prefix)
      driver, _ = resolve(prefix)
      driver.under(prefix)
    end

    def self.overlay(addr)
      records = at(addr)
      apply_antiparticles(records)
    end
  end
end
| What | Address pattern | Type | Payload | Ref |
|---|---|---|---|---|
| Context crossings | :db:crossings:{request-id} | :types:crossing | The crossing record | — |
| Trace spans | :traces:{date}:{span-id} | :types:trace | { boundary, duration, ... } | traces/{date}/{id}.json |
| Markdown source | :streams:mdast:{slug} | :types:source | { size, checksum } | files/{slug}/source.md |
| Pokes | :streams:mdast:{slug} | :types:poke | { path, value, operation } | — (inline) |
| Fence extractions | :streams:mdast:{slug} | :types:fence | { index, type, content } | — (inline) |
| Section index | :streams:mdast:{slug} | :types:sections | { sections: [...] } | — (inline) |
| Snapshots | :cache:snapshots:{slug} | :types:snapshot | { checksum } | cache/{slug}/{at}.json |
| Scenario results | :db:scenarios:{path} | :types:verification | { passed, failures, trace_ref } | — |
| Session state | :sessions:{id} | :types:session | { identity, scopes, ... } | — |
To get the current content of a markdown node:
under(":streams:mdast:wanderland-core") returns everything for the document:
the :types:source record — that's the base
the :types:poke records in order (skip any cancelled by antiparticles)
This is the same model as git (base + patches) but simpler — no branching, no merge. Just append and overlay. Snapshots shortcut the replay: if there's a recent :types:snapshot, start from there instead of the source.
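The replay loop can be sketched with toy records. This is a simplification in two ways: the real snapshot payload holds a checksum with the content behind a ref, while here the content sits inline, and the real poke application depends on path and operation, while here a poke just appends its value:

```ruby
# Replay: start from the newest snapshot if any, else the source, then apply
# pokes written after that base, in timestamp order.
def overlay_state(records)
  snap = records.select { |r| r[:type_addr] == ":types:snapshot" }
                .max_by { |r| r[:at] }
  base = snap || records.find { |r| r[:type_addr] == ":types:source" }
  state = base[:payload][:content].dup
  records
    .select  { |r| r[:type_addr] == ":types:poke" && r[:at] > base[:at] }
    .sort_by { |r| r[:at] }
    .each    { |poke| state << poke[:payload][:value] } # toy apply: append the value
  state
end

records = [
  { type_addr: ":types:source", at: "2026-04-06T00:00:00Z",
    payload: { content: "# Doc\n" } },
  { type_addr: ":types:poke", at: "2026-04-06T03:15:00Z",
    payload: { value: "requirements\n" } },
  { type_addr: ":types:poke", at: "2026-04-06T05:00:00Z",
    payload: { value: "notes\n" } }
]
overlay_state(records)  # => "# Doc\nrequirements\nnotes\n"
```

Adding a snapshot dated between the two pokes makes the replay skip the first poke, because the snapshot already contains its effect.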
Storage mounts declared in site config, initialized during Wanderland.boot:
storage:
mounts:
":db:crossings": { driver: sqlite, path: data/crossings.db }
":streams:mdast": { driver: sqlite, path: data/streams.db }
":traces": { driver: sqlite, path: data/traces.db }
":cache:snapshots": { driver: sqlite, path: data/cache.db }
":files:local": { driver: file, root: data/files }
Boot reads the storage config, instantiates drivers, mounts them. Same idempotent boot as everything else.
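A sketch of that boot step, with the driver classes replaced by illustrative stand-in factories (the real boot would instantiate the actual SQLite and file drivers):

```ruby
require "yaml"

config = YAML.safe_load(<<~YAML)
  storage:
    mounts:
      ":files:local": { driver: file, root: data/files }
      ":traces": { driver: sqlite, path: data/traces.db }
YAML

# Stand-in factories; real boot would build driver instances here.
DRIVERS = {
  "file"   => ->(opts) { { kind: :file,   root: opts["root"] } },
  "sqlite" => ->(opts) { { kind: :sqlite, path: opts["path"] } }
}

# One driver per mount, keyed by prefix, ready for longest-prefix resolve.
mounts = config.dig("storage", "mounts").to_h do |prefix, opts|
  [prefix, DRIVERS.fetch(opts["driver"]).call(opts)]
end
mounts[":traces"]  # => { kind: :sqlite, path: "data/traces.db" }
```

Re-running this against the same config rebuilds the same mount table, which is what makes the boot idempotent.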