The discipline of building AI systems that sense, decide, and act — with purpose, evidence, and human trust.
Agents that sense, decide, and act — with human approval gates at every critical juncture.
Multi-agent coordination through typed contracts, not brittle prompts. Each agent owns one job.
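As a sketch, a contract of this kind could look like the following. Every name here (AgentContract, TriageInput, and so on) is illustrative, not the framework's actual API.

```ts
// Hypothetical sketch only: the contract is the sole coupling between agents.
interface TriageInput {
  ticketId: string;
  body: string;
}

interface TriageOutput {
  severity: "low" | "medium" | "high";
  route: "bot" | "human";
}

// One agent, one job, typed at both ends.
interface AgentContract<I, O> {
  name: string;
  job: string;
  handle(input: I): Promise<O>;
}

const triage: AgentContract<TriageInput, TriageOutput> = {
  name: "triage",
  job: "classify incoming tickets by severity",
  async handle({ body }) {
    const severity = body.includes("outage") ? "high" : "low";
    return { severity, route: severity === "high" ? "human" : "bot" };
  },
};
```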
Long-term memory via .yawn files. Short-term context via state graphs. Both are auditable.
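The .yawn schema isn't reproduced here, so the shapes below are assumptions: an append-only long-term record plus a short-term state-graph node, both serializable so they stay auditable.

```ts
// Hypothetical shapes only; the real .yawn schema may differ.
// Long-term memory: append-only records persisted to a .yawn file.
interface MemoryRecord {
  id: string;
  timestamp: string;      // ISO 8601, so the log is audit-friendly
  agent: string;
  fact: string;
  evidenceRefs: string[]; // links back to the evidence that produced it
}

// Short-term context: a state graph the agent walks during one task.
interface StateNode {
  id: string;
  kind: "sense" | "decide" | "act";
  payload: unknown;
  next: string[];         // edges to successor states
}
```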
Policy Decision Points gate every action. deny > escalate > allow. No silent failures.
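One way to read the precedence rule, as a hedged sketch (the Verdict type and decide helper are invented for illustration):

```ts
// "deny" beats "escalate" beats "allow"; an empty policy set escalates
// rather than failing silently.
type Verdict = "deny" | "escalate" | "allow";

const rank: Record<Verdict, number> = { deny: 0, escalate: 1, allow: 2 };

function decide(verdicts: Verdict[]): Verdict {
  if (verdicts.length === 0) return "escalate"; // never fail silently
  return verdicts.reduce((worst, v) => (rank[v] < rank[worst] ? v : worst));
}

decide(["allow", "escalate", "allow"]); // => "escalate"
decide(["allow", "deny"]);              // => "deny"
```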
Every agent action produces evidence. Experiments track hypotheses. Nothing is assumed.
SENSE → MAP → PREDICT → EXPLORE → DECIDE → ACT → PROVE → LEARN. Then repeat.
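A minimal skeleton of that loop might look like this. Only the eight phase names come from the text; the types and signatures are assumptions.

```ts
type Phase =
  | "SENSE" | "MAP" | "PREDICT" | "EXPLORE"
  | "DECIDE" | "ACT" | "PROVE" | "LEARN";

const phases: Phase[] = [
  "SENSE", "MAP", "PREDICT", "EXPLORE", "DECIDE", "ACT", "PROVE", "LEARN",
];

interface Evidence {
  phase: Phase;
  observed: unknown;
  at: string; // ISO 8601 timestamp
}

async function runLoop(
  step: (phase: Phase, ctx: unknown) => Promise<unknown>,
  log: (e: Evidence) => void,
) {
  let ctx: unknown = {};
  for (;;) {
    for (const phase of phases) {
      ctx = await step(phase, ctx);
      if (phase === "PROVE") {
        // every pass through PROVE leaves an evidence record behind
        log({ phase, observed: ctx, at: new Date().toISOString() });
      }
    }
  }
}
```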
Agents should be autonomous, not uncontrolled.
Human-in-the-loop for high-risk actions. Fully autonomous for low-risk. The kernel decides which.
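A rough sketch of that routing decision, with invented names and a simple risk scale; the real policy would live in the kernel.

```ts
type Risk = "low" | "medium" | "high";

interface Action {
  name: string;
  risk: Risk;
  run: () => Promise<void>;
}

async function dispatch(
  action: Action,
  requestApproval: (a: Action) => Promise<boolean>,
): Promise<void> {
  if (action.risk === "low") {
    return action.run(); // low risk: fully autonomous
  }
  // anything riskier waits at a human approval gate
  if (await requestApproval(action)) {
    return action.run();
  }
}
```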
Every part is also a whole.
Agents compose into teams. Teams compose into organizations. Rules inherit down the tree.
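As an illustration (the Holon and Rule types are hypothetical), rule inheritance down the tree could be modeled like this:

```ts
// A recursive node type: every part is also a whole.
interface Rule {
  id: string;
  policy: "deny" | "escalate" | "allow";
}

interface Holon {
  name: string;    // an agent, a team, or an organization
  rules: Rule[];   // rules declared at this level
  children: Holon[];
}

// Effective rules for a node = its own rules plus everything inherited.
function effectiveRules(
  node: Holon,
  inherited: Rule[] = [],
): Map<string, Rule[]> {
  const merged = [...inherited, ...node.rules];
  const out = new Map([[node.name, merged]]);
  for (const child of node.children) {
    for (const [name, rules] of effectiveRules(child, merged)) {
      out.set(name, rules);
    }
  }
  return out;
}
```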
Don't guess. Experiment.
Define a hypothesis, set success criteria, run the experiment, collect evidence, then decide.
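That lifecycle could be captured in a small record type, sketched here with invented names; only the steps themselves come from the text.

```ts
interface Experiment<T> {
  hypothesis: string;
  successCriteria: (observed: T) => boolean;
  run: () => Promise<T>;
}

async function conduct<T>(exp: Experiment<T>) {
  const observed = await exp.run();          // run the experiment
  const success = exp.successCriteria(observed);
  return {
    hypothesis: exp.hypothesis,
    evidence: observed,                      // collected, not assumed
    decision: success ? "adopt" : "reject",  // then decide
  } as const;
}
```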
If it isn't typed, it doesn't exist.
Agent inputs, outputs, and capabilities are TypeScript interfaces. No magic strings.
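For example, a closed capability union lets the compiler reject a typo that a magic string would let through; the capability names below are made up.

```ts
// Capabilities are a checked union, not free-form strings.
type Capability = "read:tickets" | "write:tickets" | "send:email";

interface AgentSpec {
  capabilities: readonly Capability[];
}

function can(agent: AgentSpec, cap: Capability): boolean {
  return agent.capabilities.includes(cap);
}

const mailer: AgentSpec = { capabilities: ["send:email"] };

can(mailer, "send:email");   // ok
// can(mailer, "send:emial"); // compile error: typo caught by the compiler
```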
The system must make sense as a whole.
Individual agents can be wrong. The system catches errors through coherence checks and feedback loops.
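One plausible shape for such a coherence check, with invented invariants: flag any two agents asserting different values for the same key, so a single wrong agent is caught at the system boundary.

```ts
interface AgentClaim {
  agent: string;
  key: string;   // what the claim is about
  value: unknown;
}

function coherenceViolations(claims: AgentClaim[]): string[] {
  const seen = new Map<string, AgentClaim>();
  const violations: string[] = [];
  for (const claim of claims) {
    const prior = seen.get(claim.key);
    if (prior && prior.value !== claim.value) {
      violations.push(
        `${prior.agent} and ${claim.agent} disagree on "${claim.key}"`,
      );
    }
    seen.set(claim.key, claim);
  }
  return violations;
}
```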
Start with a yawn. Define the job. Let agents sense, decide, act, and prove — while you stay in control.