methodology

knowledge patterns

a systematic approach to building with ai — world models, scenarios, skills, and quality gates that compound across every project.

the premise

most ai-assisted development fails the same way: context gets lost between sessions, decisions live in someone's head instead of in the system, and each new conversation starts from scratch. the work doesn't compound.

knowledge patterns is the methodology i use to solve this. it's not a framework you install — it's a set of architectural patterns for structuring how ai systems understand, decide, and build. every project i run uses these patterns. the result is work that gets better over time, not just work that gets done.

world model

every system needs to know what it's working with. the world model is a structured representation of the problem domain — the primitives, the relationships, the constraints that don't change.

in practice, this means documenting the entities, decisions, and boundaries that matter before writing any code. not a spec in the traditional sense — more like teaching the system to think about the domain the way the operator does.

a good world model means the system can make judgment calls. a bad one (or none) means every decision requires human intervention. the goal is to front-load understanding so execution runs long without interruption.

what goes in

  • primitives: the nouns of the domain. what exists, how things relate, what's mutable vs. invariant.
  • internal model: how the system itself works. architecture, data flow, integration points.
  • external model: the environment the system operates in. apis, dependencies, constraints from the outside world.
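the three layers above can be sketched as plain data. a minimal sketch only: every name and field here is invented for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

# illustrative sketch of a world model as plain data structures.
# all names and fields are hypothetical, not a prescribed format.

@dataclass(frozen=True)
class Primitive:
    """a noun of the domain: what exists, how it relates, whether it can change."""
    name: str
    relations: tuple[str, ...] = ()   # names of related primitives
    mutable: bool = True              # mutable vs. invariant

@dataclass
class WorldModel:
    primitives: dict[str, Primitive] = field(default_factory=dict)
    internal: dict[str, str] = field(default_factory=dict)   # architecture, data flow
    external: dict[str, str] = field(default_factory=dict)   # apis, outside constraints

    def add(self, p: Primitive) -> None:
        self.primitives[p.name] = p

# toy domain entries
wm = WorldModel()
wm.add(Primitive("invoice", relations=("customer",), mutable=False))
wm.internal["data flow"] = "events append to a log; views are derived"
wm.external["payments api"] = "rate limit: 10 req/s"
```

the point is not the schema, it's that the model is explicit and queryable rather than living in someone's head.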

why it matters

the world model is the single most important artifact in an ai-assisted build. without it, every session starts cold. with it, the system has institutional memory — it knows what's been decided, what's in play, and what the boundaries are.

scenarios

scenarios are the behavioral layer. they describe what the system should do in specific situations — not as code, but as structured specifications that inform implementation.

think of them as the bridge between "what we want" and "what we build." they capture intent in a form that's precise enough to verify but flexible enough to accommodate changing requirements.

architecture scenarios

high-level descriptions of how components interact. these define the shape of the system before any code is written. they answer: when X happens, what should the system do? what are the failure modes? what are the edge cases?
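a scenario can live as structured data rather than prose, which is what makes it verifiable. a sketch under invented names; the fields and the example trigger are illustrative, not a fixed format.

```python
from dataclasses import dataclass, field

# hypothetical shape for an architecture scenario: a trigger, the
# expected behavior, failure modes, edge cases. names are illustrative.

@dataclass
class Scenario:
    trigger: str                       # when X happens...
    expected: str                      # ...what should the system do
    failure_modes: list[str] = field(default_factory=list)
    edge_cases: list[str] = field(default_factory=list)

checkout = Scenario(
    trigger="payment provider times out during checkout",
    expected="retry once, then queue the order and notify the operator",
    failure_modes=["double charge on a blind retry"],
    edge_cases=["timeout after the charge actually succeeded"],
)
```

precise enough to check an implementation against, loose enough to survive a requirements change.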

reference implementations

working examples that demonstrate how a scenario should be realized. not production code — reference points that establish the pattern. when the ai builds something new, it has concrete examples of what "good" looks like in this codebase.

skills

skills are reusable workflows — structured processes that can be invoked by name and executed consistently. they encode how work gets done, not just what work needs doing.

each skill has a defined trigger, a process, and a quality gate. they're composable — a build skill might invoke a discovery skill which invokes an audit skill. the system knows when to use each one and what "done" looks like.
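the trigger/process/gate structure and the composition can be sketched in a few lines. everything here (the registry, the toy skills) is a hypothetical stand-in, not how any particular runtime implements it.

```python
from dataclasses import dataclass
from typing import Callable

# illustrative sketch: a skill bundles a process with a quality gate,
# and skills compose by invoking each other by name via a registry.

@dataclass
class Skill:
    name: str
    process: Callable[[dict], dict]   # does the work
    gate: Callable[[dict], bool]      # defines "done"

    def run(self, ctx: dict) -> dict:
        out = self.process(ctx)
        if not self.gate(out):
            raise ValueError(f"{self.name}: quality gate failed")
        return out

REGISTRY: dict[str, Skill] = {}

def invoke(name: str, ctx: dict) -> dict:
    return REGISTRY[name].run(ctx)

# a build skill invoking a discovery skill, as described above
REGISTRY["discovery"] = Skill(
    "discovery",
    process=lambda ctx: {**ctx, "spec": f"spec for {ctx['goal']}"},
    gate=lambda out: "spec" in out,
)
REGISTRY["build"] = Skill(
    "build",
    process=lambda ctx: {**invoke("discovery", ctx), "artifact": "built"},
    gate=lambda out: "artifact" in out and "spec" in out,
)

result = invoke("build", {"goal": "billing page"})
```

the gate is part of the skill, not an afterthought: a skill that runs without its gate passing is a skill that didn't run.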

examples

  • discovery: a structured interview that extracts intent from the operator and produces a spec with acceptance tests; execution then runs against that spec. three passes: north star, taste, acceptance tests.
  • audit: a review whose categories are generated from the system itself, evaluating it against its own standards. produces actionable findings, not just observations.
  • test: two-pass validation — structural checks first, then end-to-end scenarios. detects spec drift automatically.
  • build: prompt-driven task queue with gatekeeper execution. each task is verified before the operator sees it.

the pattern is the same across all skills: define the process, gate the quality, compound the learnings.

quality gates

every piece of work passes through automated verification before the operator sees it. this isn't ci in the traditional sense — it's the system checking its own work against the acceptance criteria defined during discovery.

the gatekeeper model works in two phases:

  1. automated verification: the system runs checks, fixes failures, re-runs until clean. the operator never sees broken work.
  2. operator review: the operator evaluates finished work against their taste and standards. approvals, deferrals, and fixes are tracked. the system learns from every review cycle.
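phase one can be sketched as a simple loop. the checks, the fixer, and the work shape here are toy stand-ins, invented purely to show the mechanic.

```python
# sketch of the gatekeeper's first phase: run checks, fix failures,
# re-run until clean. only clean work reaches operator review.
# all names (checks, fix, the work dict) are hypothetical.

def gatekeeper(work: dict, checks: dict, fix, max_rounds: int = 5) -> dict:
    """automated verification: loop until every check passes."""
    for _ in range(max_rounds):
        failures = [name for name, check in checks.items() if not check(work)]
        if not failures:
            return work               # clean: ready for operator review
        for name in failures:
            work = fix(name, work)    # the system repairs its own work
    raise RuntimeError("could not reach a clean state")

# toy checks and a toy fixer for demonstration
checks = {
    "has_tests": lambda w: w.get("tests", 0) > 0,
    "lint_clean": lambda w: not w.get("lint_errors"),
}

def fix(name: str, w: dict) -> dict:
    if name == "has_tests":
        return {**w, "tests": 1}
    return {**w, "lint_errors": []}

clean = gatekeeper({"tests": 0, "lint_errors": ["unused import"]}, checks, fix)
```

phase two, operator review, is deliberately outside this loop: it is human judgment applied to work that already passed its mechanical checks.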

this means the operator's time is spent on judgment, not troubleshooting. the system handles the mechanics; the operator provides the taste.

compounding

the real value isn't any single pattern — it's how they interact. the world model informs the scenarios. the scenarios drive the skills. the skills enforce the quality gates. the quality gates feed learnings back into the world model.

each project that uses these patterns makes the next project better. decisions are captured, not forgotten. patterns that work get reinforced. patterns that fail get corrected. the system develops institutional memory across projects, not just within them.
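the cycle can be made concrete with a toy sketch: learnings produced at the gates are written back into the world model, so each project starts with more context than the last. the structures here are invented purely to show the loop.

```python
# toy sketch of the compounding loop: each project's learnings feed
# back into the world model that the next project starts from.
# the list-of-strings "model" is a hypothetical stand-in.

def run_project(world_model: list[str], goal: str) -> list[str]:
    # scenarios and skills draw on everything captured so far;
    # the quality gates produce a learning that feeds back in
    return world_model + [f"learning from {goal}"]

model: list[str] = []
for goal in ["project a", "project b", "project c"]:
    model = run_project(model, goal)
# by project c, the model carries the learnings from a and b
```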

this is what separates systematic ai engineering from ad-hoc prompting. the work compounds.