The AI-Driven Contract Economy
AI agents are starting to negotiate contracts with each other across organisational boundaries — procurement agents talking to sales agents, agreeing on terms, executing legally binding agreements without human intervention. The efficiency gain is obvious: mid-sized companies processing hundreds of routine contracts per month can now handle them in seconds rather than days. But the deployment is outpacing the infrastructure: these agents are negotiating with no reliable way to prove who they are or who sent them.
The Identity Problem
When a procurement agent from Company X negotiates with a sales agent from Company Y, both agents claim authority to bind their respective organisations. The question is: how does either side verify that claim? And when the deal is done, how do both organisations prove — six months later, in an audit or a dispute — exactly what was agreed and by whom?
Most agent-to-agent systems today rely on API keys or OAuth tokens. Those prove that a request came from a specific system, but they don't prove organisational authority or create a legally meaningful audit trail. If Company X's API key leaks, anyone can impersonate their procurement agent. If the key is rotated, the historical record disconnects. In a dispute, there's no non-repudiable proof of what the agent actually committed to.
The agents need to verify who they're talking to, prove they have authority to negotiate on behalf of their organisation, and create an audit trail that holds up under legal scrutiny. Current systems don't deliver any of those.
The Legal Gap
Agency law — the legal framework governing when one party can act on behalf of another — wasn't designed for AI agents. Pınar Çağlayan Aksoy, a researcher at Bilkent University and the UZH Blockchain Center, has been writing about this gap for years. Her work on AI as Agents: Agency Law, Artificial Intelligence and Private Law examines whether AI agents can be legal agents under current law.
The short answer is no, not cleanly. Agency law assumes agents are people carrying explicit or implicit authority — power of attorney, employment contracts, delegation by a board. Courts assess intent, knowledge, and scope of authority. But an AI agent is software following instructions that might be probabilistic, learned from examples, or dynamically adjusted. When an AI agent signs a contract, liability becomes ambiguous: the software vendor? The company that deployed it? The person who configured it?
This isn't academic speculation. The CZS Institute for Artificial Intelligence and Law at the University of Tübingen, directed by Prof. Michèle Finck, runs the leading European research programme on these questions. Aksoy presented "Attributing Agency Laws to Machines: Legal Design for the AI-Driven Contract Economy" at the Tübingen Conference for AI and Law in November 2025, and the institute is working directly with policymakers on how agency law needs to evolve.
In the US, the government is beginning to engage. In February 2026, the NIST National Cybersecurity Center of Excellence published a concept paper on Software and AI Agent Identity and Authorization, scoping out how agent identity should work at both technical and policy levels — who issues identities, how they're verified, what standards apply. NIST's involvement signals this is becoming a real infrastructure problem.
Meanwhile, protocols like Google's A2A are defining how agents communicate — message formats, handshakes, task delegation. But A2A doesn't solve identity. It tells agents how to talk; it doesn't tell them how to prove who they are.
What's Actually Needed
The solution requires four components working together:
Cryptographic identity verification. Agents must sign their messages with private keys tied to their organisation's identity. Ed25519 signing is fast, produces small signatures, and the keys can be managed in hardware security modules or key management services. When Agent A receives a message from Agent B, it verifies the signature against a public key registered to Company Y, proving the message came from an authorised agent, not an impersonator.
Human-in-the-loop escalation. High-stakes contracts, unusual terms, and deals outside normal parameters should escalate to a human for approval before the agent commits. The agent negotiates, but a person makes the final call. That preserves human accountability and gives organisations a clear control point.
Audit trails with non-repudiation. Every message, negotiation step, and agreement needs logging with cryptographic proof. Not just "Agent A sent message X at time T" but "Agent A, acting on behalf of Company X, signed message X with key K at time T, and Agent B verified it." That creates a tamper-evident record that holds up in disputes and audits.
Trust frameworks that work across organisational boundaries. Even with cryptographic signatures, Agent A needs to know that Agent B's key is actually tied to Company Y and that Company Y authorised Agent B to negotiate. That requires shared trust infrastructure — a registry of agent identities, federated across industry groups or managed by trusted third parties. It can't be centralised (no single company will accept one vendor as the authority), but it can't be completely decentralised either (organisations need accountability and recourse).
The Path Forward
Aksoy's research and Tübingen's work make clear that law needs to evolve alongside technology. Courts and regulators can't write rules for agent-to-agent contracts until reliable mechanisms exist to verify agent identity and create audit trails. The technology must exist first.
Organisations deploying procurement and sales agents today are operating in a gap — technically functional but legally ambiguous. As agent-to-agent commerce scales from routine purchase orders to complex service agreements, that ambiguity becomes material risk. The identity and trust infrastructure needs to be built now, before the volume makes retrofit impossible.
Full disclosure: this is the problem AgentTrust was built to address.