About Plot

Built on the belief that accurate knowledge
is not a convenience.

Plot exists because the integrity of the information behind every AI response is not a feature consideration. For the teams building products that people depend on, it is a precondition.

01 The origin

The problem did not announce itself
as a product opportunity.

It announced itself as two lost days.

The first time, Stephen was deep in a GCP project — building an on-demand caller for semantic similarity computations, a well-defined problem for someone with a background in data and software engineering. He searched Google. He queried ChatGPT, Claude, and Gemini. Every source returned detailed, confident guidance. He followed it carefully. None of it worked.

It took sustained investigation across two standups to surface a fact that should have been immediately available: Looker Studio does not support what the ticket required. The answer existed somewhere on the internet, buried under layers of documentation describing a product that no longer behaved the way it was described. The time lost was not the result of insufficient effort. It was the result of knowledge that had aged out of accuracy — silently, without any signal that it had done so.

The problem was not the AI. Every model performed exactly as designed — it retrieved the most plausible response from the knowledge available to it. The knowledge was wrong. That distinction matters.

The second time was while building Plot itself.

Configuring Firebase Storage through the Firebase console, Stephen used the embedded Gemini assistant for guidance. The assistant directed him to a URL that Firebase had changed in 2022. He was working in 2026. The error was not obvious. It became visible only through sustained prompting from someone navigating territory he did not yet fully know, until the model finally surfaced what was actually true. The tool designed to help him was drawing from knowledge that had aged out of accuracy, and it had no mechanism to know that, or to say so.

Two incidents. The same root cause. The knowledge behind the tools had not kept pace with the products those tools described. And nobody — not the platforms, not the developers, not the AI models — had been responsible for closing that gap.

02 The decision

Not built in response to frustration.
Built in response to responsibility.

The decision to build Plot was not made in the aftermath of a difficult week. It was made in anticipation of what comes next.

Stephen is building a healthcare solution — a domain where wrong information does not produce a lost standup or a wasted afternoon. It produces harm. The complexity of the system being built — the business rules it enforces, the conditions it evaluates, the outcomes it produces — demands a level of documented precision that cannot be held in memory, cannot be assumed to survive team changes, and cannot be reconstructed from a codebase alone.

If the ambition is to create something that saves billions for mankind — and people are expected to trust it — then no context can remain buried in the head of the person who built it.

Plot will accompany that build. Every decision documented. Every rule captured. Every change governed. Not because auditors will eventually ask for it — though they will — but because the integrity of the work requires it from the first line of code.

This is the realisation that turned a frustrating experience into a commitment: that for anyone building something whose failure has real consequences, the question of whether the knowledge behind their AI is accurate is not academic. It is foundational. And the infrastructure to make that knowledge trustworthy did not exist. So it had to be built.

Plot is not a side project or an opportunistic fix for a problem someone happened to notice. It is infrastructure — for the founders who understand that context is an asset, that accuracy is a responsibility, and that the knowledge behind a product deserves the same care as the product itself.

03 The mission

The quality of an AI answer should not depend on the size of the organisation asking.

Today it does. Organisations with the resources to negotiate enterprise agreements have long enjoyed something others have not: dedicated expertise. A specialist who knows the product deeply, keeps pace with every change, and ensures that every question receives an accurate, current response.

For everyone outside that arrangement — the independent developer, the growing team, the founder building in a regulated sector at midnight — the available knowledge has been whatever happened to survive in a documentation system that nobody was specifically responsible for maintaining.

Plot was built to close that gap. Not by replacing the expertise, but by governing the knowledge that should have been governed all along. When every section of a product's documentation is owned, reviewed on schedule, and connected directly to the AI systems that represent it — the quality of the answer no longer depends on who is asking.

This is infrastructure for anyone serious about building something valuable for society: the founders who understand that getting it right is not optional.

The team

Stephen
Founder

Michael
Co-founder

Get in touch

We are reachable.
Find us directly.

If you want to discuss the problem, the product, or what we're building — we're not behind a contact form. Find Stephen directly on LinkedIn or follow the journey on YouTube.