Deterministic vs. Generative: How Should Specifications Be Visualized?


Author’s note (April 5, 2026): This article framed its argument around the opposition between AI’s non-deterministic nature and the deterministic nature of traditional tools. Over the past few months of working with AI-driven development, my view has changed substantially. I have come to see AI as heuristic in nature — much like humans. In practice, development has always involved building up specifications incrementally while working to minimize errors along the way. Working with LLMs turned out to be no different from doing this with humans. Some aspects of this article, such as parts of the modeling and the argument that specifications must admit only one interpretation, remain valid, but I would ask readers to weigh the rest accordingly.

This spring, I started working on a mission to radically transform how payment systems — one of the most mission-critical domains in software — are built using generative AI.

My foundational model for AI-driven system development is y = f(x):

  • x: specification
  • y: artifact (code, configuration, documentation, etc.)
  • f: generative AI

If specification x admits only one interpretation, generative AI f should in principle be able to generate artifact y reproducibly. How to define such an x remains an open question — one I explored in a previous post.

Why Visualization Is Necessary

I have been researching contract programming languages and formal specification languages as candidates for specification x. These languages are structured and precise, making them well-suited for AI and software engineers. For domain experts, however, they are not.

In mission-critical domains like payment systems, review by a domain expert is non-negotiable. I understand the argument that domain experts should learn to read specification languages, but the cost of doing so is too high in practice.

This led me to believe that every specification x should be convertible into d — a diagram that domain experts can also read. The model for this is d = v(x):

  • x: specification
  • d: diagram
  • v: visualization

Throughout this article, I want to emphasize that diagram d is an instrument for verification. For a domain expert, verifying d must mean verifying specification x — nothing more, nothing less. However, if d does not accurately represent x, verification cannot hold. This premise places a high demand on the reliability of visualization v. In other words, v must produce the same output for the same input — that is, it must be deterministic.

Determinism is a necessary condition for reliable verification, but not a sufficient one — a deterministic v that transforms x incorrectly is still wrong.

This article focuses on why non-determinism is fundamentally incompatible with verification, not on the separate challenge of building a correct v.
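To make the determinism requirement concrete, here is a minimal sketch in Python. The spec is modeled as an ordered list of (step_id, label) pairs, a toy stand-in for a real formal language, and the renderer is a pure function of that input, so the same x always yields byte-identical d:

```python
def v(spec_steps):
    """A minimal deterministic visualization v: render an ordered list of
    (step_id, label) pairs into Mermaid flowchart source. The output depends
    only on the input, so the same x always yields the same d."""
    lines = ["flowchart TD"]
    for step_id, label in spec_steps:
        lines.append(f'    {step_id}(["{label}"])')
    # Connect consecutive steps in spec order.
    for (a, _), (b, _) in zip(spec_steps, spec_steps[1:]):
        lines.append(f"    {a} --> {b}")
    return "\n".join(lines)

x = [("auth", "authorize card"), ("capture", "capture amount")]
d1, d2 = v(x), v(x)
assert d1 == d2  # determinism: identical input, identical diagram
```

This is only the determinism half of the story; as noted above, a deterministic v that renders x incorrectly is still wrong.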

Why Generative AI Is Not Enough

Generative AI has made producing documentation dramatically easier. For visualization specifically, it takes nothing more than a prompt to generate diagrams of any kind and layout, add annotations, or even fill in parts of the specification that were left ambiguous. It is genuinely convenient.

But there are fundamental problems lurking beneath that convenience.

No Two Visualizations Are the Same

There is no guarantee that diagrams d1 and d2 — both generated from the same specification x — will be identical.

In practice, specifications grow through multiple rounds of review and iteration. What matters in each round is confirming what has changed since the last one. A non-deterministic visualization v destroys this. When comparing d1 generated from x_v1 with d2 generated from x_v2, there is no way to tell whether the difference comes from a change in x or from the non-deterministic behavior of v. Domain experts are left with no choice but to review d in its entirety every time.
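The review benefit can be sketched with a toy deterministic render function (a stand-in for v, with hypothetical clause strings): because rendering is a pure function of the spec, a textual diff of two diagrams surfaces exactly the changes in x and nothing else.

```python
import difflib

def render(spec_clauses):
    # Toy deterministic stand-in for v: one diagram line per spec clause,
    # in a fixed (sorted) order so output depends only on the spec.
    return [f"node: {clause}" for clause in sorted(spec_clauses)]

x_v1 = {"authorize card", "capture amount"}
x_v2 = {"authorize card", "capture amount", "refund on dispute"}

# Every line in this diff reflects a change in x, never noise from v.
diff = list(difflib.unified_diff(render(x_v1), render(x_v2), lineterm=""))
added = [l for l in diff if l.startswith("+") and not l.startswith("+++")]
# added contains only the clause introduced in x_v2
```

With a non-deterministic v, the same diff would mix genuine spec changes with rendering noise, which is exactly why the expert would be forced back to reviewing d in full.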

Beyond that, the record of agreement disappears. When a domain expert says “this looks right,” that statement should serve as a record that x is correct. But when v is non-deterministic, it is nothing more than agreement with a snapshot — d at a particular moment in time. Whether that agreement was made against the specification or against a snapshot must never be left ambiguous.

What I want to point out here is that verification is an act that unfolds over time. Specifications evolve continuously, and supporting that iteration requires d to be a faithful mirror of x. A non-deterministic v may look plausible at any given moment, but it cannot sustain trust across the full arc of iteration.

What You See Is Not What the Specification States

When a non-deterministic visualization v produces diagram d, the output contains not only information derived from specification x, but also information the AI has inferred or added. Domain experts have no way to distinguish between the two. In the worst case, constraints or behaviors that do not exist in x appear in d.

A domain expert believes they are reviewing the contents of the specification. But when a non-deterministic v is involved, they are no longer verifying x — they are confirming that the output of v(x) looks reasonable. The subject of verification has been substituted without anyone noticing.

Consider a credit card authorization flow as an example. Suppose x states the following:

If a connection error occurs in an authorization step, terminate the process.

There is no mention of retry here. But when a non-deterministic v generates d, it may reason that “retry logic is standard practice for error handling” and render a flow showing “retry 3 times → failure → terminate.”

The domain expert looks at the diagram and says: “Yes, that’s fine.”

But what they confirmed was whether retrying three times is reasonable from a business perspective — not whether retry logic is actually defined in the specification. Retry does not exist in x. Yet because it appeared in d, it now looks as if it was reviewed and approved.
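One way to catch such substitution mechanically, sketched under the assumption that both the spec and the diagram can be reduced to lists of clauses (the names and clause strings here are hypothetical): require that every element of d trace back to x, and flag anything that does not.

```python
def untraceable_nodes(spec_clauses, diagram_nodes):
    """Return diagram content with no counterpart in the specification.
    A node like 'retry 3 times' that v invented would be flagged here."""
    return [n for n in diagram_nodes if n not in spec_clauses]

x = {"connection error in authorization", "terminate the process"}
d = ["connection error in authorization", "retry 3 times",
     "terminate the process"]

flagged = untraceable_nodes(x, d)  # the invented retry step is caught
```

A real traceability check would operate on parsed structures rather than strings, but the principle is the same: nothing may appear in d that cannot be pointed to in x.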

Visualization as a Verification Gate

In AI-driven system development, I believe the development process — from an initial concept c to a working artifact y — involves two distinct phases of iteration:

  • Phase 1: concept c → specification x
    • Iterating toward a well-defined specification
  • Phase 2: specification x → artifact y
    • Iterating toward a correct implementation

Visualization acts as a verification gate that bridges these two phases:

flowchart LR
    c([concept c])
    x([specification x])
    d([diagram d])
    y([artifact y])

    c -->|"g (non-deterministic)"| x
    x -->|"v (deterministic)"| d
    d -->|feedback loop| x
    x -->|"f (non-deterministic)"| y

    style d fill:#f0f0f0,stroke:#999

In this model, deterministic visualization v serves as the guardian of the phase boundary. Generative processes are fluid by nature — they explore, drift, and diverge. That is why, at each phase boundary, someone must “land” on a specification that fixes meaning. A deterministic tool is what makes that landing verifiable.
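The gate itself can be sketched as follows. All names here (v, approve, gate) are hypothetical stand-ins; the point is only the check at the phase boundary: non-deterministic generation f runs only while the current spec still renders to exactly the diagram the expert approved.

```python
import hashlib

def v(x):
    # Deterministic visualization stub: fixed rendering of spec clauses.
    return "flowchart TD\n" + "\n".join(sorted(x))

def approve(x):
    # Record what the domain expert signed off on: a digest of v(x).
    return hashlib.sha256(v(x).encode()).hexdigest()

def gate(x, approved_digest, f):
    """Verification gate: allow non-deterministic generation f only when
    the spec still renders to exactly the approved diagram."""
    if hashlib.sha256(v(x).encode()).hexdigest() != approved_digest:
        raise RuntimeError("spec changed since approval; re-review d")
    return f(x)

x = ["auth --> capture", "capture --> settle"]
token = approve(x)
artifact = gate(x, token, lambda spec: f"generated from {len(spec)} clauses")
```

Note that this only works because v is deterministic; with a non-deterministic v, the digest of d would change between approval and generation even when x did not, and the gate could never close.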

Note: When Visualization Becomes Reversible

One point worth addressing: should it be possible to reconstruct specification x from diagram d?

If visualization v is deterministic and d faithfully represents everything in x, an interesting possibility opens up: changes made to d by a domain expert can be traced back to x. When a reviewer modifies a diagram, the corresponding change in the specification is unambiguous. This makes the review cycle bidirectional — domain experts can work in d, and their feedback flows back into x without guesswork.

This holds, of course, only when the changes made to d stay within what specification language x can express. A diagram is a freer medium than a formal language, and not every edit on d has a counterpart in x.
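Under those assumptions (a deterministic v whose output preserves every fact in x, and diagram edits confined to what x can express), the round trip can be sketched directly with a toy renderer and parser; the edge format here is invented for illustration:

```python
def render(spec):
    # Deterministic v: each (condition, action) pair becomes one edge,
    # in a fixed order so output depends only on the spec.
    return [f"{cond} -->|then| {action}" for cond, action in sorted(spec.items())]

def parse(diagram_lines):
    # Inverse of render: recover the specification from the diagram.
    spec = {}
    for line in diagram_lines:
        cond, action = line.split(" -->|then| ")
        spec[cond] = action
    return spec

x = {"connection_error": "terminate"}
d = render(x)
assert parse(d) == x  # round trip: an edit to d maps back to x unambiguously
```

An expert who changes an action label in d changes exactly one clause in x, which is what makes the bidirectional review cycle workable.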

A non-deterministic v makes this significantly harder. If the same specification can produce different diagrams, mapping a change in d back to a specific change in x requires distinguishing what came from x and what came from the behavior of v — a burden that falls on the reviewer.

Conclusion

The argument in this article comes down to one principle: if the purpose of a transformation is verification, it must be deterministic. Introducing non-determinism into verification does not just produce occasional errors. It silently undermines the entire process.

A future where AI handles the full journey from concept to release may eventually arrive. Until then, determinism in verification is not optional.