Trigger-driven execution model for wanderland-core. The engine is a write handler. Every mutation is a crossing written to storage. Triggers fire boundaries in response to those writes. The system chases triggers in waves until quiescence, then returns the collected chain.
All crossings are hash-chain linked — each crossing's sig is the trace field on the next. The full request lifecycle from user JWT through every boundary is a merkle chain in storage.
Routing, triggers, and game conditions are the same thing: when context matches shape X in state S, do Y.
Each is a trigger with four fields:
when: ShapeMatcher against the incoming crossing
state: ShapeMatcher against the current context (optional — omit = always eligible)
then: boundary (or chain) to execute
async: boolean (default false — sync blocks, async forks)
ShapeMatcher handles both the crossing match and the state match. The full vocabulary is available — exact values, patterns, deep shapes, capability presence, count, any, all, not.
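The matching semantics can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the real Wanderland::ShapeMatcher API: a matcher is a nested hash where "*" matches anything, a Regexp matches by pattern, a sub-hash recurses, and anything else must match exactly.

```ruby
# Hypothetical illustration of shape-matching semantics (not the real
# Wanderland::ShapeMatcher): "*" = wildcard, Regexp = pattern, Hash = deep shape.
def shape_match?(matcher, value)
  case matcher
  when "*"    then true                      # wildcard: matches any value
  when Regexp then matcher.match?(value.to_s)
  when Hash   then matcher.all? { |k, v| value.is_a?(Hash) && shape_match?(v, value[k.to_s]) }
  else matcher == value                      # exact value
  end
end

shape_match?({ type: "run" }, { "type" => "run" })            # exact match
shape_match?({ type: "*" }, { "type" => "disconnect" })       # wildcard
shape_match?({ position: /^C\d$/ }, { "position" => "C3" })   # pattern
```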
Triggers come from three sources. All go into the same registry.
Patches declare their own triggers. The boundary knows what it responds to.
class GameEngine
  include Wanderland::Boundary

  boundary :game_engine,
    capabilities: [:execute]

  trigger when: { type: "run" },
          state: { _state: "placing" }
  trigger when: { type: "step" },
          state: { _state: "placing" }

  def call(input)
    # input["trigger"] = the crossing that triggered us
    # input["context"] = current context (all crossings at this prefix)
  end
end
Levels add triggers as game mechanics. The puzzle designer programs in triggers.
name: "The Hidden Path"
triggers:
  - when: { position: "C3", shape: "star" }
    state: running
    then: unlock_gate
    async: false
  - when: { type: "finish", success: true }
    then: confetti
    async: true
  - when: { position: "E5", color: "blue" }
    state: running
    then: teleport_to_A1
This means the game teaches people how triggers work. Which means the game teaches people how the platform works.
System-level triggers for routing, observability, lifecycle.
triggers:
  - when: { type: "session" }
    then: session_init
  - when: { type: "start" }
    then: level_loader
  - when: { type: "*" }
    then: audit_writer
    async: true
  - when: { type: "disconnect" }
    then: session_cleanup
    async: true
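However they are declared, all three sources reduce to the same record in one registry. A minimal sketch of that shared shape and of an eligibility check like the collect_triggers used by the engine loop — the Trigger struct and the flat equality matching here are illustrative assumptions, not the real registry:

```ruby
# Hypothetical shared registry: patch, level, and system triggers all
# reduce to the same { when, state, then, async } record.
Trigger = Struct.new(:when_shape, :state_shape, :then_addr, :async, keyword_init: true)

REGISTRY = [
  Trigger.new(when_shape: { "type" => "run" }, state_shape: { "_state" => "placing" },
              then_addr: :game_engine, async: false),
  Trigger.new(when_shape: { "type" => "*" }, state_shape: nil,
              then_addr: :audit_writer, async: true)
]

# Eligible when the `when` shape matches the crossing and the `state`
# shape (if present) matches the derived context state.
def collect_triggers(crossing, state, registry: REGISTRY)
  registry.select do |t|
    t.when_shape.all? { |k, v| v == "*" || crossing[k] == v } &&
      (t.state_shape.nil? || t.state_shape.all? { |k, v| state[k] == v })
  end
end

collect_triggers({ "type" => "run" }, { "_state" => "placing" })
# -> game_engine (exact match) and audit_writer (wildcard)
```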
The engine fires triggers in waves, up to a configurable thread cap.
write the initial crossing to storage
collect all triggers that match (crossing shape + context state)
while triggers remain:
  partition into sync and async
  fire async wave (fork, do not wait)
  fire sync wave (up to max_threads in parallel, block until wave completes)
  collect all new crossings produced by the wave
  for each new crossing:
    check for _halt or _fin — if found, stop after this wave
    collect triggers that match the new crossing + updated context
  repeat with new trigger set
return the full chain of sync crossings
max_threads (default 5) controls how many sync triggers fire in parallel per wave. If 8 triggers match, fire 5 in the first wave, 3 in the second. Each wave blocks until all its threads complete.
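The 8-into-5-then-3 split is plain Enumerable#each_slice:

```ruby
# The wave partition described above: 8 matched sync triggers,
# max_threads = 5, so one wave of 5 then one wave of 3.
max_threads = 5
triggers = Array.new(8) { |i| "trigger_#{i}" }   # stand-ins for matched triggers
waves = triggers.each_slice(max_threads).to_a

waves.map(&:size)   # => [5, 3]
```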
def chase(crossing, context, storage_addr, trace_addr)
  sync_chain = [crossing]
  pending = collect_triggers(crossing, context)

  while pending.any?
    sync_batch, async_batch = pending.partition { |t| !t.async? }

    # Async: fire and forget — never joined, never rejoin the sync return
    async_batch.each do |trigger|
      Thread.new { fire(trigger, context, storage_addr, trace_addr) }
    end

    # Sync: fire in waves of max_threads, block on each wave
    new_crossings = []
    sync_batch.each_slice(max_threads) do |wave|
      threads = wave.map do |trigger|
        Thread.new { fire(trigger, context, storage_addr, trace_addr) }
      end
      new_crossings.concat(threads.map(&:value).compact)
    end

    # Append results and collect the next wave's triggers;
    # _halt/_fin stops trigger collection after this wave
    halted = false
    pending = []
    new_crossings.each do |c|
      context.append(c)
      sync_chain << c
      halted = true if halt_or_fin?(c)
      pending.concat(collect_triggers(c, context)) unless halted
    end
    break if halted
  end

  sync_chain
end
Grid execution is inherently parallel. When 5 signals are at 5 different positions, their boundary executions are independent. Fire them all at once.
When signals converge, the wave boundary naturally serializes — the next wave sees the results of the previous one.
This is a cellular automaton. Each wave is a generation. Independent cells compute in parallel. The wave boundary is the synchronization point. Fan-out is stateless parallelism. Convergence is CA synchronization.
Once a chain goes async, it stays async. Async children never rejoin the sync return.
State is derived from context, never stored. Engine and frontend derive identically.
def derive_state(context)
  control_types = %w[session start placement run finish next]
  last = context.events.reverse.find { |e|
    control_types.include?(e.dig("payload", "type"))
  }
  return "no_game" unless last

  case last.dig("payload", "type")
  when "session", "finish"          then "no_game"
  when "start", "placement", "next" then "placing"
  when "run"                        then "running"
  else "no_game"
  end
end
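Exercised end-to-end with a minimal stand-in context (the Context struct here is hypothetical — the real class is richer — and the derivation is repeated in condensed, table-driven form so the snippet runs standalone):

```ruby
# Standalone check of state derivation. Context here is a bare stand-in;
# STATE_FOR is a condensed, table-driven copy of the case statement above.
Context = Struct.new(:events)

CONTROL = %w[session start placement run finish next]
STATE_FOR = {
  "session" => "no_game", "finish" => "no_game",
  "start" => "placing", "placement" => "placing", "next" => "placing",
  "run" => "running"
}

def derive_state(context)
  last = context.events.reverse.find { |e| CONTROL.include?(e.dig("payload", "type")) }
  last ? STATE_FOR.fetch(last.dig("payload", "type"), "no_game") : "no_game"
end

session = [{ "payload" => { "type" => "session" } }]
derive_state(Context.new(session))                                           # => "no_game"
derive_state(Context.new(session + [{ "payload" => { "type" => "run" } }]))  # => "running"
```

Because the derivation is a pure function of the event list, the engine and the frontend can run the same table over the same crossings and agree.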
The trace is a hash chain. Each crossing's sig becomes the trace field on the next crossing. No external trace ID needed.
crossing 0: sig = "aaa" <- seed
crossing 1: trace = "aaa", sig = "bbb" <- linked to 0
crossing 2: trace = "bbb", sig = "ccc" <- linked to 1
When a crossing triggers multiple boundaries, each child gets parent_sig:child_index:
crossing 2: sig = "ccc" <- fork point
crossing 3a: trace = "ccc:0", sig = "ddd" <- first child
crossing 3b: trace = "ccc:1", sig = "eee" <- second child
crossing 4a: trace = "ddd", sig = "fff" <- continues from 3a
Any trace containing : is a fork child. The part before : is the parent sig.
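Decoding the convention takes two one-liners. These helpers are hypothetical, and they assume sigs themselves never contain a colon:

```ruby
# Hypothetical helpers for the fork convention: "parent_sig:child_index".
# Assumes sigs never contain ":" themselves.
def fork_child?(trace)
  trace.include?(":")
end

def parent_sig(trace)
  trace.split(":", 2).first   # works for both plain and fork-child traces
end

fork_child?("ccc:0")   # => true
parent_sig("ccc:1")    # => "ccc"
fork_child?("ddd")     # => false
```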
The user's JWT signs the initial crossing. Each boundary signs its own crossing. The full request lifecycle is a merkle chain.
user JWT signs -> crossing 0 (from_addr: :sessions:user-xyz)
boundary signs -> crossing 1 (from_addr: :boundaries:auth_gate, trace: sig_0)
boundary signs -> crossing 2 (from_addr: :boundaries:game_engine, trace: sig_1)
boundary signs -> crossing 3 (from_addr: :boundaries:vine_right, trace: sig_2)
Every link signed by a different identity. The chain proves: who initiated (JWT), what executed (boundary sigs), in what order (trace links), nothing tampered (merkle verification).
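Verifying the link structure reduces to checking that every crossing's trace points at its predecessor's sig. A sketch, with the actual signature verification elided:

```ruby
# Sketch of trace-chain verification (cryptographic signature checks
# elided): each crossing's trace must equal the previous crossing's sig,
# with fork children referencing "parent_sig:index".
def chain_linked?(crossings)
  crossings.each_cons(2).all? do |parent, child|
    child["trace"].to_s.split(":").first == parent["sig"]
  end
end

chain = [
  { "sig" => "aaa" },                    # seed, no trace
  { "trace" => "aaa", "sig" => "bbb" },
  { "trace" => "bbb", "sig" => "ccc" }
]
chain_linked?(chain)   # => true
```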
Every crossing uses the storage record shape: to_addr, from_addr, type_addr, payload, trace, at, sig.
:games:abc123 <- session prefix
:games:abc123:session <- user identity crossing
:games:abc123:level <- level definition crossing(s)
:games:abc123:placements:A2 <- user placed vine_right at A2
:games:abc123:runs:001 <- run seed crossing
:games:abc123:runs:001:spring:A1 <- boundary output at A1
:games:abc123:runs:001:vine_right:A2 <- boundary output at A2
:games:abc123:runs:001:harvest:A5 <- _fin, run complete
:games:abc123:runs:002 <- second run (after edit)
to_addr: ":games:abc123:runs:001:vine_right:A2"
from_addr: ":boundaries:vine_right"
type_addr: ":types:tick"
trace: "bbb" # sig of the crossing that caused this one
payload:
  state: { shape: "circle", color: "red" }
  direction: "right"
at: "2026-04-08T22:15:03Z"
sig: "ccc" # signature over this crossing
Every leaf value in payload is a ContextVariable — carries the value and the layer that set it.
New run = new trace chain. Old runs preserved. Sharing = reading someone else's trace prefix. The solution IS the log. Leaderboard = query across :games:*:runs:* for _fin crossings.
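A leaderboard query over that layout might look like the following sketch, with a plain hash of addr => crossing standing in for the real prefix store (Storage::Registry's actual query API is not shown in this document):

```ruby
# Hypothetical leaderboard scan: find _fin crossings under :games:*:runs:*.
# A plain hash stands in for the real prefix-addressed store.
store = {
  ":games:abc123:runs:001:harvest:A5" => { "payload" => { "_fin" => true } },
  ":games:abc123:runs:002:spring:A1"  => { "payload" => {} },
  ":games:def456:runs:001:harvest:B2" => { "payload" => { "_fin" => true } }
}

finished = store.select do |addr, crossing|
  addr.match?(%r{\A:games:[^:]+:runs:}) && crossing.dig("payload", "_fin")
end

finished.keys
# => [":games:abc123:runs:001:harvest:A5", ":games:def456:runs:001:harvest:B2"]
```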
The API loads crossings by prefix into a Context and returns it. The frontend projects whatever views it needs from that single payload: one response, multiple readings.
def write(crossing, storage_addr)
  crossing["trace"] ||= nil   # root crossing has no trace
  crossing["sig"] = sign(crossing)
  Storage::Registry.append(storage_addr, crossing)

  context = load_context(storage_addr)
  context.append(crossing)

  # The new crossing's sig seeds the trace chain for everything it
  # triggers (fourth arg matches chase's trace_addr parameter)
  sync_chain = chase(crossing, context, storage_addr, crossing["sig"])

  {
    "chain" => sync_chain,
    "state" => derive_state(context),
    "context" => context_projection(context)
  }
end
The wave loop terminates when:
- _halt appears (error stop)
- _fin appears (clean stop)
In-flight parallel work still finishes; the stop only prevents new waves from starting.
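The halt check referenced by the chase loop might look like this sketch — where _halt and _fin live in the crossing is an assumption, not confirmed by the engine code:

```ruby
# Sketch of the wave-loop halt predicate. Assumes _halt/_fin are flags
# on the crossing payload; the real location is not specified here.
def halt_or_fin?(crossing)
  payload = crossing["payload"] || {}
  payload.key?("_halt") || payload.key?("_fin")
end

halt_or_fin?({ "payload" => { "_fin" => true } })    # => true
halt_or_fin?({ "payload" => { "type" => "tick" } })  # => false
```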