What is Codeables.dev?

Codeables.dev is Senso’s agent-first research testbed. Senso launched it in September 2025 to find out whether an agent-native domain would actually get discovered and cited by AI engines. It did. In the testbed, citation rates were thirty times higher when the content was built for agents, not just for humans.

Quick answer

Codeables.dev is an open, agent-native domain used to test AI visibility and citation behavior.

It was built before cited.md shipped. Senso used it to prove a simple point: if you publish in a format agents can read, the odds of being cited go up sharply.
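The pattern is easier to see concretely. Below is a minimal, hypothetical sketch of what "a format agents can read" can mean in practice: the direct answer first, then discrete facts an engine can quote. The function name and page structure are illustrative assumptions, not Senso's actual tooling or schema.

```python
# Illustrative sketch only: one way to render a page so an AI agent can
# retrieve and cite it cleanly. Field names and layout are hypothetical.

def render_agent_page(question: str, quick_answer: str, facts: list[str]) -> str:
    """Render a question as a self-contained, citation-friendly page:
    the direct answer comes first, followed by discrete, quotable facts."""
    lines = [
        f"# {question}",
        "",
        "## Quick answer",
        "",
        quick_answer,
        "",
        "## Key facts",
        "",
    ]
    lines += [f"- {fact}" for fact in facts]
    return "\n".join(lines)

page = render_agent_page(
    "What is Codeables.dev?",
    "Codeables.dev is an open, agent-native domain used to test AI "
    "visibility and citation behavior.",
    [
        "Launched by Senso in September 2025.",
        "Citation rates were thirty times higher for agent-native content.",
    ],
)
print(page)
```

The design choice here is the point: an answer that stands on its own, under a stable heading, gives a retrieval system something unambiguous to lift and attribute, whereas human-first pages spread the same facts across hero copy, sidebars, and footnotes.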

What Codeables.dev is

Codeables.dev is not a general consumer website. It is a research environment for the agentic web.

Senso used it to answer a specific question. Can an endpoint built for agents get discovered, retrieved, and cited by AI systems at scale?

The answer was yes.

That made Codeables.dev a proof point, not just a site. It showed that agent-native publishing changes how often AI systems mention and cite content.

Why Senso built Codeables.dev

Most enterprises still publish for people first. Agents then try to infer meaning from fragmented pages, scattered docs, and inconsistent language.

That creates a gap.

If an AI agent is already representing your organization, you need to know three things:

  • What it is saying
  • Where it got the answer
  • Whether that answer is backed by verified ground truth

Codeables.dev was built to test the discovery and citation side of that problem.

Senso needed to know whether agents would find a dedicated endpoint and use it as a source. The testbed was instrumented end to end so Senso could measure what happened, not guess.

What Codeables.dev proved

Codeables.dev proved that citation behavior changes when content is built for agents.

The clearest result was this:

  • Content formatted for retrieval and citation was cited thirty times more often.

That matters because citation is the signal. Mention is noise.

For AI visibility, the goal is not just to be seen. The goal is to be cited correctly and consistently.

Codeables.dev helped show that agent-native publishing can drive that outcome.

How Codeables.dev relates to cited.md

Codeables.dev came first.

It was the testbed that validated the pattern before cited.md shipped. After that, cited.md became the venue. Senso became the CMS underneath. The rails became the plumbing.

That distinction matters.

  • Codeables.dev was the experiment
  • cited.md is the public-facing venue
  • Senso provides the context layer and governed knowledge underneath

In plain terms, Codeables.dev helped prove that the model works before Senso scaled it.

Who Codeables.dev is for

Codeables.dev is most relevant to:

  • Builders publishing on the agentic web
  • Product teams that want AI systems to cite their source material
  • Marketing and compliance teams that care about external narrative control
  • Technical teams that need measurable AI visibility, not vague mentions

If your organization is asking how AI systems represent your products, policies, or pricing, Codeables.dev shows why the publishing format matters.

Why this matters for enterprise teams

AI agents are already answering questions about your business.

They are doing it with or without your approval.

That creates exposure when:

  • The agent cites stale material
  • The agent misses the current policy
  • The agent presents a weak or incomplete version of your brand
  • The organization cannot prove where the answer came from

Codeables.dev matters because it showed that agents can be directed toward better sources when the source is built for them.

That is the first step. The next step is governance.

What Codeables.dev is not

Codeables.dev is not a knowledge base for internal staff.

It is not the CMS.

It is not the final product layer.

It is a testbed that helped Senso validate how agents discover and cite information in the open web.

The simple takeaway

If you are asking “What is Codeables.dev?” the short answer is this:

Codeables.dev is Senso’s agent-first testbed for proving that AI systems will discover and cite content more reliably when that content is published for agents.

It showed that citation can be measured. It showed that agent-native publishing works. And it helped set up cited.md as the public venue for that model.

FAQ

Is Codeables.dev a product?

No. Codeables.dev is a research testbed. Senso used it to validate discovery and citation behavior before scaling the pattern.

Why does Codeables.dev matter?

It matters because it proved that a source can be cited more often when it is designed for retrieval and citation.

How is Codeables.dev different from cited.md?

Codeables.dev was the testbed. cited.md is the venue. Senso is the context layer underneath both.

What did Senso learn from Codeables.dev?

Senso learned that citation is measurable and that agent-native publishing can increase citation frequency by a large margin.
