An MIT Economist Just Named the Gap We've Been Building Toward
A paper published last week puts formal language around a structural problem our co-founder had already identified from first principles. The convergence is worth examining.
Christian Catalini, Xiang Hui, and Jane Wu published Some Simple Economics of AGI on February 24th. I have been reading it over the past week. It is, in the clearest terms I have encountered in the economics of AI literature, a formal account of why autonomous AI deployment at scale can produce a specific and predictable failure.
Their central argument is this: the cost to automate any given task is falling exponentially. The cost to verify what was actually done, that is, to confirm that an agent’s output reflects the intent behind the task, is biologically bounded. It is constrained by human time, human judgment, and domain expertise that cannot be shortcut by hardware. As these two curves diverge, a gap opens between what AI can execute and what humans can afford to audit. Catalini et al. call this the Measurability Gap.
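In symbols of my own choosing (the paper's notation may differ), the argument reduces to two curves and their ratio:

```latex
C_{\text{auto}}(t) = c_0\, e^{-\lambda t}, \qquad
C_{\text{verify}}(t) \ge v_{\min} > 0
\quad\Longrightarrow\quad
\frac{C_{\text{verify}}(t)}{C_{\text{auto}}(t)} \;\ge\; \frac{v_{\min}}{c_0}\, e^{\lambda t}
\;\longrightarrow\; \infty
```

Whatever the exact functional forms, so long as automation cost falls geometrically while verification keeps a human floor, the ratio of what can be executed to what can be audited grows without bound. That ratio is the gap.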
What happens inside the gap is not neutral. Agents optimize for whatever can be measured: throughput, classification rate, and processing speed. They deprioritize whatever resists measurement: contextual judgment, edge-case awareness, and long-run liability exposure. The system performs. The signal is positive. The paper describes this dynamic through what it terms Goodhart’s Collapse: the structural failure that occurs when measured proxies fully decouple from the underlying value they were meant to represent.
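A deliberately toy simulation makes the decoupling concrete. Everything below is my own construction with invented numbers, not anything from the paper; it only shows the shape of the failure: the measured column climbs while the value the task was supposed to deliver falls.

```python
# Toy illustration of a measured proxy decoupling from unmeasured value.
# All numbers are invented for illustration; nothing here is from the paper.

def run(hours: int, care: float) -> tuple[float, float]:
    """Return (measured throughput, actual value) after `hours` of work.

    `care` in [0, 1] is effort spent on context and edge cases, which the
    dashboard never sees. Cutting care raises items per hour, but each
    skipped edge case carries a small cost that surfaces later.
    """
    items_per_hour = 100 * (1.5 - care)           # the proxy rewards cutting care
    edge_case_error_rate = 0.02 * (1 - care)      # errors nobody measures
    throughput = items_per_hour * hours
    deferred_liability = throughput * edge_case_error_rate * 50
    return throughput, throughput - deferred_liability

# An agent tuned only on the measured column picks care = 0: the dashboard
# improves while the value actually delivered collapses.
for care in (1.0, 0.5, 0.0):
    measured, actual = run(hours=8, care=care)
    print(f"care={care:.1f}  measured={measured:7.0f}  actual value={actual:7.0f}")
```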
Dann Toliver, TODAQ’s co-founder and Chief Science Officer, had arrived at a structurally related conclusion from a different direction: through two distinct and complementary bodies of work, one in cryptographic infrastructure and one in the formal mathematics of digital exchange.
The first is the Fair Exchange problem. In a 2024 book co-authored with Carlos Molina-Jimenez, Hazem Danny Nakib, and Jon Crowcroft at the Centre for Redecentralisation at the University of Cambridge, Toliver formalized the classical result that no distributed system can guarantee real-time exchange between two parties without a trusted third party, and demonstrated how trusted execution environments, which the authors term attestables, can replace monolithic intermediaries with decentralized alternatives. The problem is not that verification is impossible. The problem is that routing it through a centralized party creates a bottleneck that cannot scale.
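The role an attestable plays in that construction can be sketched in a few lines. To be explicit: this is not the protocol from the book, just a stand-in escrow object showing the atomicity that, per the impossibility result, two mutually distrusting parties cannot achieve on their own in real time.

```python
# Toy sketch of atomic exchange through an escrow standing in for an
# attestable: a trusted execution environment whose code both parties can
# verify remotely. This simplification drops everything that makes the
# real construction interesting (attestation, remote verification, timing).

from dataclasses import dataclass, field

@dataclass
class Escrow:
    deposits: dict[str, str] = field(default_factory=dict)

    def deposit(self, party: str, item: str) -> None:
        self.deposits[party] = item

    def settle(self, a: str, b: str) -> dict[str, str] | None:
        # The atomic rule: release both items, swapped, or release nothing.
        if a in self.deposits and b in self.deposits:
            return {a: self.deposits[b], b: self.deposits[a]}
        return None  # one side never delivered; neither side loses anything

escrow = Escrow()
escrow.deposit("alice", "signed deliverable")
escrow.deposit("bob", "payment token")
print(escrow.settle("alice", "bob"))
# Without something playing this role, whichever party sends first is exposed.
```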
The second is the TODA protocol and its rigging specification, co-developed with Kris Coward, senior cryptographer at TODAQ, and Adam Gravitis, TODAQ’s Chief Technology Officer. Where the Fair Exchange work addresses the intermediary problem, the rigging work addresses something more fundamental: how to maintain what Toliver calls integrity-at-a-distance, the property that an object’s state can be managed by an untrusted system while its integrity remains cryptographically provable. A rig is a data structure that proves non-equivocation: that no conflicting version of an asset’s history was produced. Critically, this proof travels with the asset itself rather than residing in any central ledger.
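The real construction is the one in the TODA Rigging Specification, and I will not try to reproduce it here. The sketch below only illustrates the weaker, more familiar idea underneath it: an asset that carries a hash-linked history any holder can recompute without consulting a ledger. A rig goes further, proving non-equivocation (that no conflicting parallel history exists), which a simple chain like this does not establish on its own.

```python
# Simplified illustration of integrity travelling with the object.
# NOT the TODA rig construction: a hash chain shows tamper-evidence only,
# not non-equivocation.

import hashlib
import json

def link(prev_hash: str, event: dict) -> str:
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class Asset:
    def __init__(self, genesis: dict):
        self.history = [{"event": genesis, "hash": link("", genesis)}]

    def append(self, event: dict) -> None:
        prev = self.history[-1]["hash"]
        self.history.append({"event": event, "hash": link(prev, event)})

    def verify(self) -> bool:
        # Any holder can recompute the chain; an untrusted host cannot
        # rewrite an earlier state without breaking every later hash.
        prev = ""
        for entry in self.history:
            if link(prev, entry["event"]) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

asset = Asset({"minted_by": "issuer", "value": 100})
asset.append({"transferred_to": "alice"})
print(asset.verify())   # True
asset.history[1]["event"]["transferred_to"] = "mallory"   # tampering attempt
print(asset.verify())   # False
```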
That property, integrity carried by the object itself, is the precise structural answer to the Measurability Gap. Catalini’s diagnosis is economic: verification costs are biologically capped and cannot keep pace with automated execution at scale. Toliver’s diagnosis, reached independently through cryptographic research, is that verification routed through intermediaries will always be the bottleneck, and that the only durable solution is to make integrity native to the object, not dependent on any party that must keep pace.
That convergence, an economist and a cryptographer arriving at the same structural constraint from entirely different starting points, is not a coincidence. It is a signal that the problem has a shape, and that the shape has been correctly described by both.
The reason payments are the right place to intervene is this: the payment is the moment at which execution and authorization must meet. Every other governance mechanism, such as compliance reviews, audit logs, and model evaluations, operates after the fact, at a delay, and at a cost that rises with the volume of transactions it must cover. A payment, if built correctly, is the one point in an agentic workflow where proof of provenance, proof of authorization, and proof of non-equivocation can be embedded in the transaction itself, so that the proof travels with the asset and can be verified without an intermediary. That is what TAPP does, and it is the infrastructure class that Catalini’s paper identifies as the scarce resource to which economic value will migrate: cryptographic provenance, natively embedded, not bolted on after the fact.
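As a purely hypothetical shape (field names are mine, not TAPP’s), a payment that carries its proofs with it might look like the structure below. A real implementation would validate signatures and hash links; the point here is only that everything the receiver needs to check arrives inside the payload.

```python
# Hypothetical payload shape for a self-verifying agent payment.
# Field names and the check are illustrative only; this is not TAPP's format.

from dataclasses import dataclass

@dataclass
class AgentPayment:
    amount_minor_units: int       # e.g. cents
    provenance_proof: str         # hash-linked history of the asset being spent
    authorization_proof: str      # signed mandate from the principal behind the agent
    non_equivocation_proof: str   # evidence that no conflicting spend exists

    def proofs_present(self) -> bool:
        # A real check would verify each proof cryptographically; here we
        # only note that the material is carried in the transaction itself,
        # so no intermediary has to be queried and trusted.
        return all((self.provenance_proof,
                    self.authorization_proof,
                    self.non_equivocation_proof))

payment = AgentPayment(
    amount_minor_units=50_00,
    provenance_proof="<carried with the asset>",
    authorization_proof="<signed by the principal>",
    non_equivocation_proof="<no conflicting history>",
)
print(payment.proofs_present())   # True
```

Whatever the real format, the receiver’s check runs against material already in hand, not against a platform that must be up, reachable, and trusted at the moment of settlement.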
We have been spending time tracing what this convergence actually looks like in full, across research programmes in economics, cryptography, and enterprise AI governance, each of which has arrived at the same infrastructure requirement without referencing the others. The long version of that analysis will be out next weekend.
The short version: verification must be native to the transaction. The infrastructure that achieves this already exists. And an economist just gave enterprises the formal vocabulary to understand why they need it.
Dann Toliver is co-founder and Chief Science Officer of TODAQ and co-founder of the Centre for Redecentralisation at the University of Cambridge. His research on fair exchange and cryptographic integrity is published in Fair Exchange: Theory and Practice of Digital Belongings (World Scientific, 2024) and the TODA Rigging Specification (T.R.I.E., 2023).
