Lab · Architecture & delivery economics

Governed projection from domain to code

Most teams treat code as the source of truth while the domain model quietly dies in a Miro board. The DDD Software Factory inverts that: a canonical model drives schemas, types, and structure; diagnostics catch mistakes before implementation; conformance and drift turn alignment into something you can measure, not debate.

  • Visualizer: strategic, tactical, and process perspectives with full DDD vocabulary end to end.
  • Diagnostics: 140+ structural signals on the model itself, an X-ray before a single line of code.
  • Projection & conformance: generated scaffolds plus tests that report how deeply the codebase adopted the model.
  • Evolution: roadmap, adoption drift, and deltas between model versions so change impact is visible early.

140+

Structural signals

Executable rules encoding what typically breaks in domain models, surfaced at authoring time.

€61K–€99K

Illustrative 3-year savings

On a €150K build + €75K maintenance baseline; scenario range from internal agency impact modeling (see caveats).

3–5×

Modeler leverage (directional)

One specialist working the loop can cover ground that traditionally consumes a small committee's worth of architect, BA, and reviewer hours.

Why a factory

Coherence is a systems problem — not a heroics problem

Individual brilliance cannot keep a model and a multi-repository codebase aligned forever. The factory encodes alignment as pipelines and tests so teams get leverage without gambling on perfect discipline every sprint.

The cost of invisible drift

Two educational lenses — translation during the build, and semantic drift across the lifecycle — explain why tooling must span modeling and code, not one or the other.

The translation tax

Discovery produces alignment in a room. Then the model stops being maintained because nothing in the toolchain keeps it alive. Architecture becomes a manual rewrite into schemas and boundaries. Implementation becomes N developers each interpreting that rewrite. QA pays the invoice for gaps that were introduced months earlier as “tiny” translation decisions.

Industry studies cited in delivery research (Standish, IBM, Carnegie Mellon, PMI) converge on a blunt pattern: a large double-digit share of effort is rework rooted in requirements and domain misalignment. On a representative €150K agency build, that single failure mode can rival the cost of an entire phase.

Semantic drift is the compounding tax

After go-live, maintenance dominates lifetime cost. Teams pick up a codebase six months later and spend weeks re-discovering what the system is supposed to be. The running code has become the only authoritative artifact, and it already diverged from the original domain intent.

The factory targets that drift directly: conformance tells you where the code stands relative to the canonical model; drift and delta reports make the impact of model changes legible before implementation resumes.

This is not “a modeling tool with export.” It is a governed projection pipeline: the canonical model is authoritative; code, schemas, and tests are derived artifacts.

Synthesis from product brief · April 2026

System map

Six interlocking capabilities

Each layer does one job well; together they form a pipeline that traditional modeling tools and codegen tools usually leave disconnected.

DDD Visualizer

One modeling space, three lenses on the same truth.

  • Strategic: bounded contexts, context maps, relationships.
  • Tactical: aggregates, entities, value objects, domain events, commands, policies.
  • Process: how work and information cut across contexts.

Diagnostic engine

Structural validation on the model, not on hand-waved diagrams.

  • 140+ signals flag anti-patterns and incomplete modeling while the model is still cheap to fix.
  • Examples you can demo: unclear context ownership, overloaded aggregates, unconsumed domain events.
  • Replaces weeks of senior review for a class of issues that are purely structural.
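One of these signals can be sketched in a few lines. The shapes and the signal code below are illustrative assumptions, not the product's real schema; the point is that "unconsumed domain event" is a purely structural check that needs no senior reviewer:

```typescript
// Minimal sketch of one structural signal: "unconsumed domain event".
// All type shapes and the signal code are invented for illustration.
interface DomainEvent { name: string; context: string }
interface Policy { name: string; consumes: string[] }
interface Signal { code: string; target: string; message: string }

function unconsumedEvents(events: DomainEvent[], policies: Policy[]): Signal[] {
  // Every event some policy consumes, in one set.
  const consumed = new Set(policies.flatMap(p => p.consumes));
  return events
    .filter(e => !consumed.has(e.name))
    .map(e => ({
      code: "DDD-EVT-001", // hypothetical signal code
      target: `${e.context}/${e.name}`,
      message: `Domain event '${e.name}' is published but never consumed`,
    }));
}

// OrderShipped has a consumer; InvoiceVoided does not and gets flagged.
const signals = unconsumedEvents(
  [
    { name: "OrderShipped", context: "Fulfillment" },
    { name: "InvoiceVoided", context: "Billing" },
  ],
  [{ name: "NotifyCustomer", consumes: ["OrderShipped"] }],
);
```

Each of the 140+ signals is a rule of roughly this character: cheap to run on every authoring pass, and deterministic where review opinions are not.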

Canonical export

Machine-consumable source of truth for downstream tooling.

  • One artifact feeds projector, conformance checks, drift analysis, and documentation generators.
  • Eliminates the “which Miro frame is official?” problem.

Projector

Deterministic compiler from canonical domain data to contracts, docs, and governance signals.

  • Four-layer coupling: grammar of domain objects, canonical records, projector, application bridge. Each layer owns different failures and different fix velocity.
  • One merge pipeline: resolved graph → coordinated renderers → unified projection result feeding generation, drift, delta, and adoption views.
  • Bridge lists and backlog tests route pressure to the layer that can absorb it (model, mapper, app, or meta-model) instead of smearing blame across teams.

Conformance testing

Make “did we build what we modeled?” a metric, not a meeting.

  • Reports adoption depth: where the codebase matches the canonical model, and where it does not.
  • Surfaces gaps before QA spends cycles on symptoms of structural mismatch.

Drift & delta

Close the loop when the domain evolves.

  • Roadmap projections show how adoption should progress.
  • Evolutionary drift tracks how reality diverges over time.
  • Deltas between successive canonical models expose change impact early.

Contract compiler

Projector — boundaries first, then mechanics

The projector is the narrow waist between “domain as agreed data” and “application as behavior.” Treating it as a fancy template misses the point: it is a boundary object that must be deterministic, inspectable, and able to carry feedback back to the model, the compiler, or the grammar without collapsing those concerns into one mushy layer.

The valuable technical insight is separation of concerns with explicit back-pressure: each kind of mismatch has a named home, so teams stop debugging organizational ambiguity with ad-hoc diffs.

Four layers, four edit surfaces

If these collapse, you get either “code is truth” or “diagrams are theater.” Keeping them separate is what makes automation trustworthy: each failure mode routes to a different fix.

Layer 1

Meta-model (grammar)

Boundary

What counts as a valid domain object and property shape.

Responsibility

Defines the discriminated vocabulary of domain constructs and property kinds: the rules of composition every canonical file must obey.

Evolves when

A recurring pattern cannot be expressed without extending the grammar (new property kind, new structural variant).
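As a sketch of what such a grammar can look like (every kind name and property name here is an assumption, not the factory's actual meta-model), a discriminated union makes "what counts as a valid domain object" a checkable question rather than a convention:

```typescript
// Illustrative grammar layer: a discriminated vocabulary of constructs
// and property kinds. Names are invented for this sketch.
type PropertyKind = "scalar" | "reference" | "collection";

interface PropertyDecl {
  name: string;
  kind: PropertyKind;
  target?: string; // what a reference/collection points at
}

type DomainConstruct =
  | { kind: "aggregate"; name: string; root: string; properties: PropertyDecl[] }
  | { kind: "valueObject"; name: string; properties: PropertyDecl[] }
  | { kind: "domainEvent"; name: string; payload: PropertyDecl[] };

// Grammar-level rule: non-scalar properties must declare a target.
function grammarViolations(c: DomainConstruct): string[] {
  const props = c.kind === "domainEvent" ? c.payload : c.properties;
  return props
    .filter(p => p.kind !== "scalar" && !p.target)
    .map(p => `${c.name}.${p.name}: a ${p.kind} property must declare a target`);
}

const order: DomainConstruct = {
  kind: "aggregate",
  name: "Order",
  root: "Order",
  properties: [
    { name: "total", kind: "scalar" },
    { name: "lines", kind: "collection" }, // missing target: a grammar violation
  ],
};
```

Extending `PropertyKind` or adding a union member is exactly the "evolves when" moment: a recurring pattern gets promoted into the grammar once instead of hacked around per project.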

Layer 2

Canonical model

Boundary

The single domain-as-data artifact teams negotiate.

Responsibility

Plain records for aggregates, events, commands, relationships, glossary, invariants: the authoritative description of the domain under discussion.

Evolves when

Understanding deepens: new fields, new relationships, refined boundaries, retired concepts.
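A canonical model in this spirit is plain data, no behavior. The record below is a made-up miniature, but it shows the property that matters: projector, conformance checks, and documentation all read the same artifact instead of three diverging copies:

```typescript
// Illustrative "domain as data" artifact. Field names are assumptions.
const canonicalModel = {
  version: "2026-04-01",
  contexts: [{ name: "Billing", classification: "core" }],
  aggregates: [
    {
      name: "Invoice",
      context: "Billing",
      invariants: ["total equals the sum of line amounts"],
    },
  ],
  events: [{ name: "InvoicePaid", context: "Billing", emittedBy: "Invoice" }],
  glossary: [{ term: "Invoice", definition: "A bill issued to a customer" }],
} as const;

// Any downstream tool can share lookups like this, keyed off one artifact.
function eventsEmittedBy(aggregate: string): string[] {
  return canonicalModel.events
    .filter(e => e.emittedBy === aggregate)
    .map(e => e.name);
}
```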

Layer 3

Projector (compiler)

Boundary

Syntax of domain records → filesystem contracts + manifests + docs.

Responsibility

Builds an in-memory resolved graph, runs pluggable renderers (e.g. typed schemas and reference prose), merges file maps and coverage gaps, and feeds drift/delta/adoption machinery off one coherent result.

Evolves when

New output shape, unmappable property pattern, or new diagnostic required from production experience.

Layer 4

Application bridge

Boundary

How runtime code intentionally diverges from or adopts generated contracts.

Responsibility

Imports projected artifacts, then refines (omit, extend, tighten) with explicitly listed field decisions so every bridge choice is reviewable and testable, not tribal knowledge in wrappers.

Evolves when

UI constraints, performance, staged adoption, or legitimate local representation that should stay app-side.
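A hedged sketch of bridge semantics, using TypeScript's `Omit` and a hypothetical `ProjectedInvoice` standing in for a generated contract: because omissions and extensions are listed as data, every divergence is visible in review rather than buried in a wrapper:

```typescript
// Stand-in for a generated contract (not the factory's real output).
interface ProjectedInvoice {
  id: string;
  total: number;
  internalAuditRef: string; // model-side field this app deliberately omits
}

// Every bridge decision is listed, hence reviewable and testable.
const OMITTED = ["internalAuditRef"] as const;

type AppInvoice = Omit<ProjectedInvoice, (typeof OMITTED)[number]> & {
  displayTotal: string; // app-led extension, pending model absorption
};

function toAppInvoice(p: ProjectedInvoice): AppInvoice {
  const { internalAuditRef: _omitted, ...rest } = p; // omit
  return { ...rest, displayTotal: `€${p.total.toFixed(2)}` }; // extend
}

const invoice = toAppInvoice({ id: "inv-1", total: 12.5, internalAuditRef: "x" });
```

The `OMITTED` list and the `displayTotal` extension are exactly the kind of entries the field-level gauges below would track.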

Value flow is intentionally three-way

Forward-only codegen assumes the model is always ahead of reality. Mature systems need backward pressure from production and upward pressure into shared tooling.

Forward: model enriches the app

Canonical truth grows; regenerated contracts widen; the app adopts fields and shrinks duplication. Adoption and governance lists are the speedometer, not the steering wheel.

Backward: app pressures the model

When the runtime must carry a field the model does not yet acknowledge, that pressure is captured deliberately. Once the model absorbs it, canary-style checks force removal of temporary scaffolding so the loop closes instead of ossifying “temporary” forever.

Upward: compiler feeds the grammar

When the projector repeatedly cannot map a property pattern, the fix may belong in mapping logic, or expose a missing concept in the meta-model. Upward propagation makes the next project cheaper for everyone, not just one codebase.
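The canary idea from the backward flow can be shown mechanically. `staleAppLedKeys` is a hypothetical helper, not a real API: an app-led entry is tolerated only while the projection lacks the field, and the same entry becomes a failing check the moment regeneration declares it:

```typescript
// Sketch of a canary check on app-led keys. Names are illustrative.
// An app-led entry that the projection now declares is stale scaffolding
// that a test should fail on, forcing cleanup.
function staleAppLedKeys(projectedKeys: string[], appLedKeys: string[]): string[] {
  const projected = new Set(projectedKeys);
  return appLedKeys.filter(k => projected.has(k)); // should trend empty
}

// Before absorption: "dueDate" is legitimate backward pressure.
const before = staleAppLedKeys(["id", "total"], ["dueDate"]);
// After the model absorbs "dueDate": the entry must now be removed.
const after = staleAppLedKeys(["id", "total", "dueDate"], ["dueDate"]);
```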

The absorbing boundary (field- and object-level)

Three explicit field-level lists behave as pressure gauges between projected shape and app schema; together with object-level governance, they describe convergence instead of hoping for it.

App-led keys

Fields the application uses that the canonical projection does not yet declare.

Target trend: empty. Shrinking means the model is catching up to validated runtime reality.

Unadopted projection

Fields the model offers that the application has not yet taken.

Target trend: empty. Shrinking means adoption is catching up, or the model pruned unused surface.

Typed overrides

Fields adopted by name but with intentional or accidental type differences, tagged by corrective intent.

Target trend: empty. Shrinking means mapper accuracy, model truth, or deliberate divergence is converging, not silent drift.

Above the field level, an object-level backlog forces every generated domain object to be either governed (tests exist) or explicitly deferred. New projection without acknowledgement fails fast: the same philosophy as typed keys, one tier higher.
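Set arithmetic over field names is enough to sketch the three gauges. The schema shapes and the "deliberate" intent set below are illustrative assumptions, not the product's conformance API:

```typescript
// Three pressure gauges between projected shape and app schema.
type Schema = Record<string, string>; // field name -> type name

function gauges(projected: Schema, app: Schema, deliberate: Set<string>) {
  // App-led keys: the app uses them, the projection does not declare them yet.
  const appLed = Object.keys(app).filter(k => !(k in projected));
  // Unadopted projection: the model offers them, the app has not taken them.
  const unadopted = Object.keys(projected).filter(k => !(k in app));
  // Typed overrides: adopted by name, but with a different type, tagged by intent.
  const typedOverrides = Object.keys(app)
    .filter(k => k in projected && app[k] !== projected[k])
    .map(k => ({ field: k, intent: deliberate.has(k) ? "deliberate" : "mismatch" }));
  return { appLed, unadopted, typedOverrides };
}

const g = gauges(
  { id: "string", total: "Money", auditRef: "string" }, // projected shape
  { id: "string", total: "number", dueDate: "string" }, // app schema
  new Set(["total"]), // declared intent for the total override
);
```

Run on every build, the three lists become trend lines: each one shrinking toward empty is the "converging, not hoping" property the section describes.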

Compiler spine (conceptual)

The implementation detail changes; the contract does not: one merged projection result is the hub for write, preview, diff, and observability commands.

  1. Ingest: load canonical exports and treat them as the only structural input set for this run.

  2. Resolve: build an indexed graph (by identity, parentage, cross-links) so renderers query structure, not re-parse raw text.

  3. Render in parallel: each registered renderer consumes the same RendererInput and returns projected files plus its own coverage gaps.

  4. Merge: the coordinator fuses partial maps into one ProjectionResult, one truth for downstream commands and diagnostics.

  5. Emit & snapshot: writers apply the merged map; persisted snapshots enable structured diff (what changed since the last generation) and filesystem impact, not just “files differ.”

  6. Observe adoption: separate pipelines scan the import graph and conformance bridges to classify entities as unadopted, imported-but-ungoverned, governed, or tracked-by-convention.
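Steps 3 and 4 of the spine, in miniature. `Renderer` and `RendererOutput` below are simplified stand-ins for the real interfaces; the collision check is what makes the merged result a single trustworthy hub rather than a pile of partial outputs:

```typescript
// Simplified spine: many renderers, one merged projection result.
interface RendererOutput { files: Record<string, string>; gaps: string[] }
type Renderer = (graph: { objects: string[] }) => RendererOutput;

function project(graph: { objects: string[] }, renderers: Renderer[]): RendererOutput {
  const merged: RendererOutput = { files: {}, gaps: [] };
  for (const render of renderers) {
    const out = render(graph);
    for (const [path, body] of Object.entries(out.files)) {
      // Two renderers claiming one path is a pipeline bug, not a merge choice.
      if (path in merged.files) throw new Error(`renderer collision on ${path}`);
      merged.files[path] = body;
    }
    merged.gaps.push(...out.gaps); // coverage gaps travel with the result
  }
  return merged; // the single hub that write, preview, diff, and adoption read
}

const schemaRenderer: Renderer = g => ({
  files: Object.fromEntries(g.objects.map(o => [`schemas/${o}.ts`, `// schema for ${o}`])),
  gaps: [],
});
const docsRenderer: Renderer = g => ({
  files: Object.fromEntries(g.objects.map(o => [`docs/${o}.md`, `# ${o}`])),
  gaps: ["Payment: unmapped property pattern"],
});

const result = project({ objects: ["Order", "Invoice"] }, [schemaRenderer, docsRenderer]);
```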

Signal families, not one alarm

Operational maturity is distinguishing enrichment backlog from adoption backlog from mapper debt — without conflating them in a single “tech debt” bucket.

Bridge and backlog tests

Schema-, error-, port-, and wiring-level factories collapse boilerplate while keeping lists disjoint and meaningful. They encode the policy: “nothing implicit crosses the package boundary.”

Drift, delta, and propagation

Drift answers “are generated files stale relative to the model?” Delta answers “what changed between two model commits?” Propagation ties those answers to likely consumer files so triage becomes spatial instead of existential.
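The drift-versus-delta distinction is easiest to see as two different comparisons over snapshots. The hash-map snapshot shape here is an assumption for illustration, not the tool's storage format:

```typescript
// Two comparisons, two questions.
type Snapshot = Record<string, string>; // path (or object id) -> content hash

// Drift: generated files on disk vs what the current model would generate now.
function drift(onDisk: Snapshot, regenerated: Snapshot): string[] {
  return Object.keys(regenerated).filter(p => onDisk[p] !== regenerated[p]);
}

// Delta: canonical model commit A vs commit B, as change impact.
function delta(a: Snapshot, b: Snapshot) {
  return {
    added: Object.keys(b).filter(k => !(k in a)),
    removed: Object.keys(a).filter(k => !(k in b)),
    changed: Object.keys(a).filter(k => k in b && a[k] !== b[k]),
  };
}

const stale = drift({ "types/Order.ts": "h1" }, { "types/Order.ts": "h2" });
const d = delta(
  { Order: "h1", Invoice: "h2" },
  { Order: "h1", Invoice: "h3", Payment: "h4" },
);
```

Propagation is then a join from `d.changed` and `d.added` onto the consumer file graph, which is what turns "something changed" into a concrete triage list.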

Bidirectional visibility

Model-anchored adoption asks what the app did with each projected field; reverse diagnostics ask which exported app types never anchor to a projection. Together they close the visibility gap that single-direction codegen always leaves.

Boundary enforcement as data

Strategic context relationships can be projected into edges consumed by import rules; allowed dependencies become data driven by the same canonical graph that generates types, reducing “forgotten” architecture violations.
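A minimal sketch of boundary enforcement as data, with invented context names: in the real pipeline the allowed-edge set would be generated from the same canonical graph that produces the types, so the rule never falls out of date by hand:

```typescript
// Allowed dependencies derived (here: hard-coded for the sketch) from the
// strategic context map. Context names are illustrative.
const allowedEdges = new Set(["Ordering->Billing", "Billing->Shared"]);

interface ImportEdge { from: string; to: string }

// An import rule consumes the edges as data; no architect memory required.
function boundaryViolations(imports: ImportEdge[]): string[] {
  return imports
    .filter(i => i.from !== i.to && !allowedEdges.has(`${i.from}->${i.to}`))
    .map(i => `${i.from} must not depend on ${i.to}`);
}

const found = boundaryViolations([
  { from: "Ordering", to: "Billing" }, // allowed by the context map
  { from: "Billing", to: "Ordering" }, // not an allowed edge: flagged
]);
```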

Route signals before you route blame

When several symptoms appear together, read them as a chain: upstream gaps collapse downstream noise. The table is a teaching aid — not exhaustive.

Signal | Primary layer | Takeaway
App-led field list grows | Canonical model | Runtime already validated a concept the model should absorb.
Unadopted projection persists | Application bridge | Either staged adoption or an honest prune; silence is not a third option.
Typed override tagged as projection mismatch | Projector | Mapper or template gap: fix once, every consumer inherits the correction.
Typed override tagged as model mismatch | Canonical model | The declared domain type does not match how experts use the word: fix semantics, not only code.
Coverage gap on generation | Projector or canonical model | Unsupported pattern vs invalid declaration: split the diagnosis before coding.
Ungoverned new projection | Governance backlog | Acknowledge objects explicitly; prevents shadow IT inside generated surfaces.
Grammar cannot express pattern | Meta-model | Promote recurring structure to a first-class construct instead of N local hacks.

Layering, bridge semantics, and signal routing follow the DomainModel projector reference architecture: deterministic merge pipeline, explicit conformance bridges, and bidirectional adoption diagnostics.

Deep dives

Beyond throughput — why the loop changes how organizations think

Second-order effect: decision quality compounds

Cheaper modeling is only the first-order win. The deeper effect is organizational: as the model sharpens, the diagnostic surface changes. Early on, many signals fire; that is the team learning where the domain is fuzzy. Over successive passes through the loop, the set of relevant signals contracts toward high-altitude, pattern-level questions, the ones that actually steer investment.

In other words, the factory is not only printing structure; it is training the shared mental model the leadership team uses when it decides what to build next.

Why the integrated loop is the moat

Individual pieces (whiteboarding, architecture diagrams, OpenAPI generators, linters) each address a slice. None of them, alone, continuously proves that running code still means what the domain experts think it means.

Encoding 140+ structural failure modes as executable rules is a cumulative asset: every rule is a lesson someone already paid for in production. The defensible combination is model authorship, automated projection, and conformance evidence tied to evolution, not a prettier diagram export.

Operational loop

Model → measure → materialize → prove → evolve

Two rails separate negotiating truth from proving it: bridges at export and projection are where derived artifacts become evidence, not theater.

Design-time

Negotiate truth

  1. Model: author and explore in the visualizer (strategic, tactical, process).

  2. Diagnose: run structural signals; fix the model while it is still cheap.

  3. Export: freeze the canonical artifact downstream tools agree on.

  Canonical export crosses into measurement: conformance, drift, and delta all anchor to the same frozen artifact.

  4. Project: compile domain-as-data into a resolved graph, run registered renderers, merge outputs and coverage gaps. One projection result powers apply, plan, status, and change triage.

  The merged projection result is what adoption lists, bridge tests, and evolution triage read — not hand-maintained shadows.

  5. Conform: measure adoption depth; prioritize gaps before they become defects.

  6. Observe & evolve: track drift; diff model versions to see impact before wide rework.

Delivery economics

Where leverage shows up in the lifecycle

Reference shape: a mid-complexity DACH agency project (B2B portal, SaaS, or domain-heavy internal system). Base build ~€150K with ~€25K/year maintenance in the internal model (ranges in research drafts vary; treat numbers as directional, not guarantees).

Illustrative 3-year TCO (build + 3× maintenance)

Baseline TCO in the research model (rounded): ~€225K

  • Conservative: ~€61K savings · build €34.7K · maint €26.3K · ~18% of TCO
  • Base: ~€80K savings · build €45.8K · maint €33.8K · ~24% of TCO
  • Optimistic: ~€99K savings · build €57.8K · maint €41.3K · ~30% of TCO

Phase | Share of build | Leverage | Savings range | Mechanism
Discovery & domain analysis | 8–15% · ~€15K | Strong | 30–50% | Live model + diagnostics compress workshop → spec → rework cycles.
Architecture & design | 10–20% · ~€22K | Very strong | 40–60% | Collapses separate “translate domain to architecture” work into the same artifact.
Implementation | 40–65% · ~€72K | Significant | 20–35% | Behavior stays human; structural scaffolding and ambiguous typing shrink.
QA & testing | 15–25% · ~€26K | Significant | 25–40% | Conformance replaces subjective “does code match the model?” audits for structure.
Delivery & deploy | 5–10% · ~€10K | Partial | 5–15% | Indirect: cleaner structural alignment tends to reduce last-mile surprises.
Maintenance & evolution | ongoing · ~€25K/yr | Strong | 35–55% | Drift + delta attack the largest hidden cost driver: semantic drift over years.

Who this is for

Same facts as a VC deck, different emphasis: here we teach mechanism and economics for builders and leaders.

Agency CTOs & tech directors

  • Delivery economics, margin, and predictable quality, not slide decks.
  • Pain they recognize: architecture bottlenecks, QA finding “old” decisions, maintenance archaeology.
  • Resonance: phase-mapped savings, leverage ratio, conformance as proof to clients.

Hands-on architects & senior engineers

  • A toolchain that rewards rigorous modeling instead of punishing it with busywork.
  • Diagnostics encode collective experience; every run educates the team on structural quality.
  • Projection removes the most lossy hop: human translation from model to types and schemas.

What stays emphatically human

The factory automates mechanical truth and measurement. It does not negotiate politics between domain experts, decide strategic bets on core vs generic subdomains, design UX, integrate opaque third-party systems, or write the actual business behavior inside a bounded context.

Those are judgment calls and craft. The goal is to stop spending senior attention on repeatable translation and drift detection so humans spend it where ambiguity is irreducible.

Landscape — without slogans

  • LLM-assisted coding accelerates text generation; it does not, by itself, guarantee structural coherence with a validated domain model. Model + projector + conformance is a complementary layer: structure from the factory, behavior from the team (and tools they choose).
  • Classic DDD tooling and workshops excel at collaborative sense-making but usually stop before a tight, testable link to production structure. The factory’s thesis is that the loop has to close; otherwise the model is always “almost true.”

Demo arc

How a live session typically flows — teach the room in one pass

Step 1 · Visualize

Walk a domain in three perspectives so executives, modelers, and implementers share one graph-shaped truth.

Step 2 · Diagnose

Run signals in seconds that would otherwise consume senior review before budget is committed to the wrong aggregates or boundaries.

Step 3 · Export & project

Promote the model to canonical data, then materialize schemas and types into the codebase the team bridges into.

Step 4 · Conform

Replace opinion with measurement: depth-of-adoption per context, aggregate, and relation.

Step 5 · Drift & delta

Teach maintenance teams where reality diverged and what the last model change implies before sprint planning turns into archaeology.

Caveats on numbers and TAM

Figures and TAM sketches in internal research drafts carry wide uncertainty bands (often ±40–50% on market estimates). They illustrate where leverage concentrates (maintenance and early structural correction), not promises for any specific engagement.

Interested in a diagnostic pass or a deeper walkthrough?