Wanderland's append-only event log that happens to look a lot like a jellyfish.
Crossings is a hierarchical datastore that composes itself into a system from simple rules, like an org chart for your data.
Crossings is an append-only, single-table log of events that have occurred, including the configuration of the system itself. Each record contains the provenance of how it was created and the values it contains as a set of composition instructions. An action targets a location in the graph and requests one of its capabilities be executed. That query triggers a reductive walk through the graph to compile a plan which can be previewed before execution. Execution of the plan is performed by chaining the compiled series of boundary crossings (think stateless middleware that produces event streams) and seeding the chain with the composed initial context.
Shape-based denials constrain the entire process, and a single input in the form of type_addr, capability, and request_context can be shared by HTTP, CLI, and JSON-RPC clients. A live REPL can be exposed to human or agentic users, with a fully constrained environment for live knowledge work.
As an example, take the story of Aloysious Stanley Thibideau, MD, our beloved child of parents with questionable naming decisions and Crossings' first intern. On his first day at work he was issued a name badge with Employee ID :stream:mdast:room-6:astmd and told to report to his to_addr for duty.
He thought it a bit odd to see his name repeated twice, but whatever, money was money, so he wandered down the hall and checked in with the nearest supervisor.
"Hi, I'm Aloysious Stanley Thibideau, MD", he said, reaching in to give the mildly disinterested, and otherwise uneventful, man a friendly handshake.
His supervisor, unaware that people with names longer than Bub or Guy could possibly exist, said "Nope, you're AST. Report to room #6", and handed AST,MD a stream of tokens to place in a conveniently located sleeve, a pad of paper, and a mostly sharpened pencil.
A few hours later, still unsure of what he was doing holding a stack of bingo chips, a man with a small cart came by and said "show me your tokens", wrote his name on our friend AST,MD's pad, and affixed a googly eye to the token holder's forehead. "Thanks", he mumbled, pushing his cart on, "nice to peek you".
On his way out that night, he stopped by his supervisor's office to inquire about the strange interaction. "Meh, that's just Q. He's a bit eerie, but otherwise harmless. He comes by once in a while, asks to see your tokens and generally moves on. Sometimes he'll try to leave you a present. Just be careful if he does, he'll try to tickle you if you take it. 'Poking' he calls it, it seems harmless so we just let it go. He'll try to tickle you and take a coin too. If he does take anything, or if he leaves anything too, just write it down on your inventory sheet so we can reimburse you."
The next morning, he came in to find that his previously lifeless workspace was now a beehive of activity. In one corner, a stack of fences had been piled up around what looked to be some serious computing horsepower. In another, a dozen terminals had been set up, and a dozen men with small carts were all madly typing away. As he walked over, AST was blinded by virtual copies of his stack of tokens, but they didn't look anything like the ones he was carrying nestled against sore, poked ribs.
The terminals had transformed his simple stack into a rich interactive workspace; each eerie Q was able to shape and form the tokens into whatever output they needed for the weird little errands they were on. Or at least that's what our Dr friend, Mr. AST, could work out.
As he reported for duty he asked his supervisor what had happened to his formerly quiet abode. He was suddenly feeling a lot more enthusiastic about his role here. His supervisor looked like a changed man too, the way one does when one's entire reporting structure has grown geometrically.
"It was a busy day yesterday. People heard about us, more people stopped by. Since they were here they decided to lend a hand. Now we've got a whole team setting up and more are stopping by each day".
"How do they know what to do when they get here?" AST, MD asked, oddly confused for a doctor, who is stereotypically known for his clear and concise communication. It was a fair question to ask in this case, since he could see no real reporting structure in the room other than his supervisor, and they all seemed to be wearing the same Employee ID on their name badges. When he looked even closer he suddenly blurted, "and why are they all wearing my namebadge?", as he had spotted that each and every one of the people scurrying around room 6 was wearing :stream:mdast:room-6:astmd on their lapels.
"You mean 'our' namebadge", his supervisor said, his own :stream:mdast:room-6 suddenly and uncomfortably appearing in AST's view. "If you look closely at their badges, they're all colour coded by the department that authorized their work. It controls what they have access to. They bring their own equipment with them, and it only does one thing and that thing only while they're in this room. It's pretty airtight", he said, his chest slightly larger than AST, and the man's buttons, remembered it.
And with that our friend AST, MD settled into his second day at Crossings, suddenly feeling brighter about his future. He'd found a family here, simple folk. Everyone did one thing and one thing well. They brought their tools with them and all it took was one call down the chain to revoke their access permanently if they did anything naughty.
He'd heard rumours too about the room in the back where all the work was purely automated. He wasn't too worried about the automation; it was just the low level stuff that fed into his team up here in the main office. Rumour was, though, that those guys couldn't do anything without head office approval. They could slave away all day cooking up new terminals, but without approval from their boss they couldn't even plug a terminal in, let alone grant a namebadge with execute permissions. He really hoped the guys over in InfoSec didn't get any ideas.
"Ow! Q, not so hard!", he complained as Q trundled on by with his wee cart.
"Poke", the eerie Q giggled in return.
A crossing is a record.
A crossing is a record that knows where it is, what it knows, why it's there, what it's for, who put it there, what caused it, and when. Four addresses define every crossing:
Crossing = {
to_addr: ":documents:foo" # where it lives (spatial)
from_addr: ":users:alice" # who put it there (identity)
type_addr: ":edits" # what kind of thing (capability)
trace_addr: ":plans:a1b2c3d4" # what caused this (temporal/causal)
payload: { content: "..." } # the data
}
Two trees live in this structure. Walk up to_addr segments and you traverse spatial hierarchy — where things are. Walk up trace_addr and you traverse causal history — what caused what. Address is spatial. Trace is temporal. Two readings of the same system.
When you query an address, the system walks up the hierarchy collecting layers:
Query: :documents:foo:intro
Walk collects:
:documents:foo:intro ← start here
:documents:foo ← parent
:documents ← parent
: ← root
At each level, it gathers all the crossings that match. The result is a stack of layers, ordered root to leaf, representing the payloads collected on the walk. When viewed bottom up, a simple merge language controls ordering for sets and lists. The hierarchy otherwise controls precedence.
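The walk itself can be sketched in a few lines. This is a toy, assuming colon-delimited string addresses as shown above; walk_addresses is an invented name, not a real store method:

```ruby
# Collect every ancestor address from root to leaf, mirroring the
# walk shown above for :documents:foo:intro.
def walk_addresses(addr)
  segments = addr.split(":").reject(&:empty?)
  layers = []
  until segments.empty?
    layers << ":" + segments.join(":")
    segments.pop
  end
  layers << ":"   # root
  layers.reverse  # ordered root to leaf, as the stack is
end

walk_addresses(":documents:foo:intro")
# => [":", ":documents", ":documents:foo", ":documents:foo:intro"]
```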
:documents:foo ← query here
└── content (from: alice) ← base layer
└── edit (from: bob) ← overlay
└── edit (from: alice) ← overlay
└── version:1.0 (from: system) ← snapshot
A query for :documents:foo will return all layers merged.
A query with type = :edits will return only edit layers.
A query with from = :bob will return only Bob's contributions.
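The three queries above can be illustrated with a toy stack. Layer and these records are invented stand-ins, not the real store API; the merge here is a plain last-wins hash merge:

```ruby
# A stack of layers like the :documents:foo example above.
Layer = Struct.new(:type, :from, :payload)

layers = [
  Layer.new(":content", ":users:alice", { body: "base" }),
  Layer.new(":edits",   ":users:bob",   { body: "overlay 1" }),
  Layer.new(":edits",   ":users:alice", { body: "overlay 2" })
]

edits  = layers.select { |l| l.type == ":edits" }     # type filter
bobs   = layers.select { |l| l.from == ":users:bob" } # from filter
merged = layers.map(&:payload).reduce({}, :merge)     # all layers merged
merged  # the later overlay wins: { body: "overlay 2" }
```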
A bridge is an external computation endpoint. When a crossing needs to do something — parse markdown, call an API, authenticate a user — it routes through a bridge.
Bridge = {
transport: "json-rpc" # how to talk to it
host: "localhost"
port: 7779
}
Bridges are declared in type crossings. When you walk to build a stack, you collect the bridge configuration along with everything else. The bridge tells the executor where to send the call. What to do when it gets there comes from the action field on the instruction — stringify, parse, authenticate, etc.
The remark bridge parses and stringifies markdown. The keycloak bridge handles auth. Each is just an address in the type hierarchy with a transport configuration.
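A dispatch built from a bridge config might look like the sketch below. The transport, host, and port fields mirror the Bridge record above; build_call and the JSON-RPC envelope shape are illustrative assumptions, not a wire-accurate client:

```ruby
require "json"

# Build a call envelope from bridge config collected on the walk.
def build_call(bridge, action, payload)
  {
    url:  "http://#{bridge[:host]}:#{bridge[:port]}",
    body: JSON.generate(jsonrpc: "2.0", method: action,
                        params: payload, id: 1)
  }
end

call = build_call({ transport: "json-rpc", host: "localhost", port: 7779 },
                  "parse", { content: "# Title" })
call[:url]  # "http://localhost:7779"
```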
Denials and conditions are the grammar of the system — what's allowed to cross and what isn't.
A denial is a condition spec that aborts execution if the current context shape matches it. A condition is a spec that gates whether a layer merges or a step executes. Same evaluator, different consequence: abort vs skip.
Condition specs are data, not code:
# Simple value check
type: contextValue
args:
key: status
value: archived
# Existence check
type: contextValue
args:
key: mdast
exists: true
# Shape matching via operators
type: contextShape
args:
match:
role:
in: [admin, editor]
# Address pattern
type: addressMatch
args:
field: from_addr
prefix: ":sessions:admin"
Combinators compose them with short-circuit evaluation:
# AND — short-circuits on first false
all:
- type: contextValue
args:
key: status
value: draft
- type: contextValue
args:
key: role
value: admin
# OR — short-circuits on first true
any:
- type: contextValue
args:
key: format
value: markdown
- type: contextValue
args:
key: format
value: html
# NOT
not:
type: contextValue
args:
key: status
value: archived
Any condition can carry negate: true to flip its result.
Denials are checked at two points: against the context shape before a step executes, and against the updated shape after its result merges.

Conditions are checked when layers merge and when steps execute:

- a layer carrying a conditions key is only merged if the condition passes
- a step carrying a condition key is only executed if it passes

The handler registry is open — new condition types are registered without touching existing code.
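A sketch of such an evaluator follows. The handler name matches the contextValue spec examples above; HANDLERS, register, and evaluate are invented names, and the registry shown holds only the one handler:

```ruby
# Open registry: new condition types register without touching this code.
HANDLERS = {}

def register(type, &handler)
  HANDLERS[type] = handler
end

register("contextValue") do |args, ctx|
  if args.key?("exists")
    ctx.key?(args["key"]) == args["exists"]  # existence check
  else
    ctx[args["key"]] == args["value"]        # simple value check
  end
end

# Combinators with short-circuit evaluation, plus negate: true support.
def evaluate(spec, ctx)
  result =
    if spec["all"]    then spec["all"].all? { |s| evaluate(s, ctx) }
    elsif spec["any"] then spec["any"].any? { |s| evaluate(s, ctx) }
    elsif spec["not"] then !evaluate(spec["not"], ctx)
    else HANDLERS.fetch(spec["type"]).call(spec["args"], ctx)
    end
  spec["negate"] ? !result : result
end

spec = { "all" => [
  { "type" => "contextValue", "args" => { "key" => "status", "value" => "draft" } },
  { "type" => "contextValue", "args" => { "key" => "mdast", "exists" => true } }
] }
evaluate(spec, { "status" => "draft", "mdast" => {} })  # true
```

Whether that result aborts (denial) or skips (condition) is the caller's choice, which is the "same evaluator, different consequence" split described above.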
Every plan organizes its instructions into phases: Pause → Fetch → Splice.
| Phase | What Happens |
|---|---|
| Pause | Auth checks, rate limits. Are you allowed? |
| Fetch | Query external data, call bridges. Gather what's needed. |
| Splice | Transform, merge, compute. The actual work. |
The compiler reads handlers from the collapsed context and emits instructions grouped by phase. Each instruction is either an action (terminal bridge call) or a capability (runtime fork to another type's handlers).
The executor processes instructions in phase order. The core loop for each step:
1. Check denials against current context shape
2. Check step condition — skip if not met
3. Call bridge (action) or fork sub-plan (capability)
4. Write step crossing as checkpoint
5. Merge result into context
6. Check denials against updated context shape
This is the fundamental algorithm — Pause (deny), Fetch (bridge), Splice (merge), Continue (next step). The phases group instructions by intent, but the loop is the same at every step.
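The six numbered steps can be sketched as one self-contained function. The denial matcher, condition check, bridge lambda, and STORE below are stand-ins, not the real executor API:

```ruby
STORE = []  # step crossings written as checkpoints

# A denial here is a toy shape match: deny when a context key has a value.
def denied?(denials, ctx)
  denials.any? { |d| ctx[d[:key]] == d[:value] }
end

def run_step(step, context, denials)
  raise "denied" if denied?(denials, context)                      # 1. denials
  return context if step[:condition] && !context[step[:condition]] # 2. skip
  result = step[:bridge].call(context)                             # 3. bridge
  STORE << { step: step[:name], payload: result }                  # 4. checkpoint
  context = context.merge(result)                                  # 5. merge
  raise "denied" if denied?(denials, context)                      # 6. recheck
  context
end

ctx = run_step(
  { name: "parse", bridge: ->(_c) { { mdast: true } } },
  { status: "draft" },
  [{ key: :status, value: "archived" }]
)
ctx  # => { status: "draft", mdast: true }
```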
The compiler transforms a capability request into an executable plan. Three-way walk, collapse, flatten to instructions.
The compiler no longer concatenates type_addr:capability into a synthetic address. Instead, all three walks take an optional capability parameter. At each level of the walk, after collecting at the current address, the walk forks one level deep into {current}:{capability}.
Common config (bridge transport, host, port) lives on the main hierarchy. Capability-specific handlers (what actions to run for peek vs poke) live at the fork address. Both contribute to the same stack. Collapse handles precedence.
This applies uniformly to all three walks:
- the target walk (forking into, e.g., :streams:my-doc:peek)
- the session walk (e.g. :sessions:admin:peek for audit logging)
- the capability walk (e.g. :bridges:json-rpc:remark, :streams:mdast:peek)

Universal capabilities (:repl, :introspect) live at root and are found by every walk. Type-specific capabilities override via normal precedence.
See crossing-forking-walk for the full mechanism.
The wise man on the hill has all the knowledge — every crossing, the full merged context. He doesn't lack knowledge. He lacks your query.
"What is the meaning of life?" returns noise. The walk returns everything. Full context window, no filter.
"Did my crossing happen?" returns one bit. Yes or no. A walk to a specific address that returns a specific layer.
The wisdom isn't in the answer. It's in knowing which address to walk to. The right question is the right to_addr.
The compiler performs three walks:
| Walk | Address | Collects |
|---|---|---|
| Target | `to_addr` → up to `:` | payload, context, denials |
| Session | `from_addr` → up to `:` | permissions, identity, session denials |
| Capability | `type_addr:capability` → up to `:` | handlers, bridge config, capability denials |
The three stacks are passed to Collapse, which merges them in precedence order (type → to → from) using operators. Every leaf value in the merged context is wrapped in a ContextVariable that carries the layer it came from — two projections of the same tree (values and provenance).
Each instruction in the compiled plan is one of two types:
| Type | Meaning | What Happens |
|---|---|---|
| `action` | Terminal | Send to bridge at runtime, get result |
| `capability` | Fork | Executor compiles and runs a full sub-plan at runtime |
An action is a boundary crossing. One bit — it happened or it didn't.
A capability is horizontal gene transfer. The executor forks to another type's handlers, runs the full pause/fetch/splice loop, and returns control to the parent plan. The compiler doesn't expand capabilities — it just records the reference. All expansion happens at runtime.
The compiler writes the plan as a crossing:
to_addr: ":plans:a1b2c3d4"
from_addr: ":sessions:alice"
type_addr: ":plans:plan"
trace_addr: null # null for root plans
payload:
plan_id: "a1b2c3d4"
plan_address: ":plans:a1b2c3d4"
address: ":streams:my-doc"
capability: "peek"
capability_path: ":bridges:remark:peek"
type_addr: ":bridges:remark"
context: { ... } # merged from collapse
denials: [ ... ] # accumulated constraints
phases:
pause:
- { phase: pause, seq: 0, type: action, action: authenticate, ... }
fetch:
- { phase: fetch, seq: 1, type: action, action: load_document, ... }
- { phase: fetch, seq: 2, type: capability, capability: ":transforms:frontmatter" }
splice:
- { phase: splice, seq: 3, type: action, action: render, ... }
status: "compiled"
The compiler resolves template variables in produces addresses using a shared Template class. Static vars are resolved at compile time:
| Variable | Resolves to |
|---|---|
| `$parent` | Target address (`to_addr`) |
| `$plan.address` | `:plans:<plan_id>` |
| `$session.identifier` | Session address (`from_addr`) |
| `$capability.path` | Full capability path |
| `$type.addr` | Type address from target |
The executor uses the same Template class with an additional live context reference for $context.* path resolution at runtime.
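Static resolution can be sketched as plain string substitution. STATIC_VARS and resolve are invented names; the plan hash fields mirror the plan crossing payload shown earlier:

```ruby
# Each static variable maps to a lambda over the plan being compiled.
STATIC_VARS = {
  "$parent"             => ->(p) { p[:address] },
  "$plan.address"       => ->(p) { ":plans:#{p[:plan_id]}" },
  "$session.identifier" => ->(p) { p[:from_addr] }
}

def resolve(template, plan)
  STATIC_VARS.reduce(template) do |str, (var, fn)|
    str.gsub(var, fn.call(plan))  # string patterns, so "." is literal
  end
end

plan = { plan_id: "a1b2c3d4", address: ":streams:my-doc",
         from_addr: ":sessions:alice" }
resolve("$plan.address:fetch:0", plan)  # ":plans:a1b2c3d4:fetch:0"
```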
The system splits execution into two components: a Scheduler that manages the queue, and an Executor that processes one step at a time.
The store IS the queue. Plans without :plans:completed crossings are pending work. The scheduler polls, reconstructs state, and dispatches.
One tick of the scheduler:
1. Query for :plans:plan crossings without a matching :plans:completed
2. Reconstruct each plan's state from its step crossings
3. Pick the oldest tip and dispatch one step

Normal operation and crash recovery are the same code path. Restart is just step 1 — the query finds all incomplete plans, reconstruction rebuilds their state from step crossings, and execution continues.
Coordination across workers: shard on plan ID modulus. Worker N takes plans where id % shard_count == N. No locking, no contention — each worker owns a deterministic slice.
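The ownership rule is one modulus check. This assumes the numeric id derives from the hex plan id as in :plans:a1b2c3d4; owns? is an illustrative name:

```ruby
# Worker N owns plans where id % shard_count == N. Deterministic,
# no locking: each plan lands in exactly one worker's slice.
def owns?(plan_id, worker_index, shard_count)
  plan_id.to_i(16) % shard_count == worker_index
end

(0...4).map { |w| owns?("a1b2c3d4", w, 4) }  # exactly one true
```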
The executor processes exactly one step. No loop, no state between calls. The scheduler reconstructs context and hands it here.
For an action step: check denials, check the step condition, call the bridge, write the step crossing, and merge the result into context.

For a capability step: compile the sub-plan, write it to the store as a new pending plan crossing, and leave the parent waiting on its completion.
The executor never mutates the plan crossing in the store. Step crossings ARE the execution log. Context is reconstructed from them on each tick.
Every step writes a crossing:
Plan: :plans:a1b2c3d4 type: :plans:plan
pause step 0: :plans:a1b2c3d4:pause:0 type: :plans:step
trace_addr → :plans:a1b2c3d4
fetch step 1: :plans:a1b2c3d4:fetch:1 type: :plans:step
trace_addr → :plans:a1b2c3d4:pause:0
splice step 2: :plans:a1b2c3d4:splice:2 type: :plans:step
trace_addr → :plans:a1b2c3d4:fetch:1
completion: :plans:a1b2c3d4 type: :plans:completed
trace_addr → :plans:a1b2c3d4:splice:2
Each step's trace_addr points to the previous step — a causal chain through execution. Walk up trace_addr from any step and you get the exact execution sequence back to the plan.
Context reconstruction: load the plan's original context, then merge each step crossing's payload in order. The result is the context at the current point of execution. No mutable state — the store is the state.
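Reconstruction is a fold over the step crossings. Field names here are illustrative; the real payload shapes come from the step crossing examples above:

```ruby
# Original context first, then each step payload merged in seq order.
def reconstruct(plan, steps)
  steps.sort_by { |s| s[:seq] }
       .reduce(plan[:context]) { |ctx, s| ctx.merge(s[:payload]) }
end

reconstruct(
  { context: { status: "draft" } },
  [{ seq: 1, payload: { mdast: true } },
   { seq: 0, payload: { authed: true } }]
)
# => { status: "draft", authed: true, mdast: true }
```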
Plans live in the store as crossings. There is no in-memory queue. The scheduler polls the store, reconstructs state, picks the oldest tip, and dispatches one step.
| Property | How It's Achieved |
|---|---|
| No in-memory state | Context reconstructed from step crossings each tick |
| Crash recovery | Same as normal operation — poll finds incomplete plans |
| Horizontal scaling | Shard on plan ID modulus, more workers = more shards |
| Backpressure | Query bridge load before dispatch — deny if over budget |
| Fair scheduling | Oldest tip first — no plan starves while others run |
| Causal isolation | Each plan is its own timeline, forks create new plans |
When the executor hits a capability step, it forks:
The sub-plan appears as a new pending plan. The scheduler picks it up on a future tick and runs it independently. When the sub-plan completes (its :plans:completed crossing exists), the parent plan's next reconstruction merges the sub-plan's results into context and continues.
Parent plans with pending sub-plans are skipped during scheduling — their capability step can't resolve until the sub-plan completes. No polling loop, no yield counter — the scheduler simply doesn't pick them.
The system divides into three layers:
Request (address + capability)
│
▼
┌──────────┐
│ Compiler │ → Plan crossing written to store
└────┬─────┘
│
▼
┌──────────┐
│ Executor │ → Step crossings written per action
└────┬─────┘
│
▼
┌──────────┐
│ Bridges │ → External computation via registered transports
└──────────┘
| Layer | Input | Output | Side Effects |
|---|---|---|---|
| Compiler | address + capability | Plan crossing | Plan written to store |
| Executor | Plan crossing | Step + completion crossings | Crossings written per step |
| Bridges | Action + payload | Result | External calls (JSON-RPC, WebSocket, etc.) |
The Compiler resolves what to do. The Executor does it. Bridges are the boundary where crossings touch the outside world.
The HTTP API and CLI expose three operations that map directly to the compiler and executor pipeline. No service layer — the API calls the Compiler and Store directly.
| Endpoint | What It Does | Side Effects |
|---|---|---|
| `POST /preview` | Compile without persisting. Three-way walk, collapse, flatten to instructions. | None — pure what-if |
| `POST /plan` | Compile and persist. Same as preview, then writes the plan crossing to the store. | Plan crossing at `:plans:<id>` |
| `POST /execute` | Load a persisted plan by address, run it through the Executor. | Step crossings, completion crossing, bridge calls |
| `GET /crossings` | Raw address browser. Filter by `address`, `type`, `from`, `under`, `trace`. | None |
| `POST /crossings` | Insert a crossing directly. | Crossing written to store |
The CLI mirrors the API exactly:
cross preview --address :streams:my-doc --capability peek
cross plan --address :streams:my-doc --capability peek --context "testing"
cross execute --plan-address :plans:a1b2c3d4
cross crossings list --type :fence
cross crossings insert --to-addr :test --type-addr :type --context "creating"
preview → "what's the plan?" (walks + collapse + flatten)
plan → "save the plan" (+ persist to store)
execute → "run it" (load plan, call bridges, write step crossings)
Each step adds one thing on top of the previous. Preview is compile without side effects. Plan is compile with persistence. Execute is the runtime.
The /crossings browser is orthogonal — it's raw SQL-level access to the crossing store, returning records with JSON payloads as-is. No walking, no collapsing, no compilation. Just the records.
The scheduler runs as a pool of worker processes, each owning a shard of the plan space.
| Setting | ENV | Default | Purpose |
|---|---|---|---|
| `workers` | `CROSSING_WORKERS` | 1 | Number of worker processes |
| `poll_interval` | `CROSSING_POLL_INTERVAL` | 100 | Milliseconds between scheduler ticks |
| `shard_count` | `CROSSING_SHARD_COUNT` | 0 (auto) | Shard count, 0 = same as workers |
cross-scheduler --workers 4 --poll 200
Or via the main CLI:
cross scheduler --workers 4
At startup, each worker runs Scheduler.tick in a loop, sleeping on idle. Single worker mode (default) runs inline without forking.
In a containerized deployment, the scheduler process forks children — the container needs an init system like tini to handle PID 1 responsibilities (signal forwarding, zombie reaping). Without it, orphaned workers won't get cleaned up on container stop.
# Example
FROM ruby:3.3-slim
RUN apt-get update && apt-get install -y tini
ENTRYPOINT ["tini", "--"]
CMD ["cross-scheduler", "--workers", "4"]
The scheduler and the API server are separate processes. The API handles /preview, /plan, and /crossings — writing plans to the store. The scheduler picks them up and executes them. They share the database, nothing else.
There is no in-memory queue. The store is the queue.
Plans are crossings of type :plans:plan. Step results are crossings of type :plans:step. Completion markers are crossings of type :plans:completed. Workers query the store, pick up plans, process them, and write results back.
Each bridge is its own queue within the store. The bridge's type hierarchy determines routing. A plan's instructions carry bridge config (transport, host, port, handler) from the collapsed context — the walk-up already resolved which bridge handles this capability.
Before dispatching to a bridge, the executor can query the bridge's current load:
open_crossings = store.query_under(":bridges:remark")
.reject { |c| completed?(c) }
.count
If load exceeds budget, deny the step. The budget isn't a fixed constant — it's system capacity divided by current demand. Bridge tree topology controls load coupling: bridges under the same parent (e.g., :bridges:remark:peek and :bridges:remark:stringify) share aggregate load at the :bridges:remark level.
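The capacity-over-demand rule can be stated in two lines. over_budget? and its inputs are illustrative names, not the real scheduler API:

```ruby
# Budget = system capacity divided by current demand (active bridges).
# Deny dispatch when this bridge's open crossings meet or exceed it.
def over_budget?(open_for_bridge, capacity, active_bridges)
  budget = capacity / [active_bridges, 1].max
  open_for_bridge >= budget
end

over_budget?(30, 100, 4)  # budget is 25, so this dispatch is denied
```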
Workers are stateless functions:
loop do
  plan = pick_next_plan   # query store for unprocessed plans
  executor.process(plan)  # core loop: deny → bridge → write → merge
end
No shared mutable state between workers. All coordination happens through crossings appearing in the store. A worker picks up a plan, processes it until it backgrounds (capability fork) or completes, and moves on.
Horizontal scaling is adding more workers querying the same store. The store handles contention — append-only means no write conflicts.
Queue depth is gravitational potential. Too many plans in-flight means the store grows faster than workers can drain it.
The event horizon is the point where workers can't keep up. Before it, the system stabilizes — workers drain plans, forks complete, load decreases. After it, queue depth grows without bound. Processing time approaches infinity.
Deadlock detection: if a plan yields waiting for sub-plans more than 1000 times without progress, something is stuck. Either the sub-plan failed, there's a circular dependency, or the addresses are wrong.
Every leaf value in the collapsed context is a ContextVariable — a value that knows where it came from. The wrapper is transparent: operators, comparisons, and method calls pass through to the underlying value via method_missing. Nothing in the system needs to know it's there.
cv = ContextVariable.new("json-rpc", layer)
cv == "json-rpc" # true — delegated
cv.start_with?(":") # false — delegated
cv.value # "json-rpc"
cv.layer # the layer that set it
cv.layer.to_addr # ":bridges:remark"
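A minimal ContextVariable along these lines is sketched below: method_missing delegation to the wrapped value, with the layer kept alongside. The hash layer here is a stand-in for the real layer object (which, as shown above, responds to to_addr):

```ruby
class ContextVariable
  attr_reader :value, :layer

  def initialize(value, layer)
    @value = value
    @layer = layer
  end

  # Comparisons pass through to the underlying value.
  def ==(other)
    @value == other
  end

  # Method calls delegate transparently via method_missing.
  def method_missing(name, *args, &blk)
    @value.public_send(name, *args, &blk)
  end

  def respond_to_missing?(name, include_private = false)
    @value.respond_to?(name, include_private) || super
  end
end

cv = ContextVariable.new("json-rpc", { to_addr: ":bridges:remark" })
cv == "json-rpc"      # true, delegated
cv.start_with?(":")   # false, delegated to the String
```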
The CollapsedStack exposes two readings of the same tree:
| Projection | Method | Returns |
|---|---|---|
| Values | `collapsed.values` | Plain data — unwrapped, ready for use |
| Provenance | `collapsed.provenance` | Layer info at every leaf — same shape, different reading |
The compiler consumes values — it needs plain data for the plan. The introspect endpoint returns both — values for the data and provenance for debugging where each value came from.
During collapse, as each layer merges in, its payload values are wrapped:
# Layer from :bridges:remark with payload { transport: "json-rpc" }
# becomes:
{ transport: ContextVariable.new("json-rpc", layer) }
Operators handle ContextVariables transparently:
- Default merge: the new ContextVariable replaces the old one. The winning layer is attached.
- $append: arrays of ContextVariables concatenate. Each element knows its source layer.
- $set: a new ContextVariable wraps the replacement value.

| Item | Description | Status |
|---|---|---|
| Bridge controller | Implement the autodiscovery controller from crossing-bridge-system design | Open |
| Bridge load budgets | Scheduler should query bridge queue depth before dispatch — deny if over budget | Open |
| Trace compaction | Crossings should compact over time like RRD databases — recent full resolution, old compressed | Open |
| Ontology schema | Base type definition for universal capabilities (repl, introspect, describe) — needed for REPL design | Open |
| Remark daemon | Node.js remark service serving parse/stringify/peek over JSON-RPC | Open |
| Fence execution bridges | Python and Ruby runtime hosts for fence execution | Open |