Fraser Edwards | Co-founder & CEO of cheqd
When agents act on our behalf and replace the interface, it's time for identity to step forward as the gatekeeper of trust, consent, and control.
The Vanishing Interface
We are entering a new era where digital agents will soon no longer wait for explicit commands. Tasks that once required deliberate input will be performed automatically, using real-time data, contextual cues, and embedded identity to interpret our intent and act quietly on our behalf. As AI agents become more capable and autonomous, the way we interact with them is evolving too, and what is replacing the interface isn't a better screen but the absence of one.
Globetrender, a hotel planning website, deployed AI agents to manage booking enquiry calls. Six months later, these AI agents handle over 45,000 calls per day, with a single agent processing up to 1,000 calls at once, and the company projects its revenue will double to £2.4bn as a result. The agents recommend hotels, check availability, quote prices, and complete bookings, while 'most customers do not realise they are speaking to AI', says the platform's CEO.
The traditional consumer interface we are used to when interacting with a service or company is starting to disappear. An AI-handled call is still an interface, but it moves the interaction away from screens and manual steps. People now ask large language models for itinerary recommendations, reducing searches and clicks while giving anyone access to a once-premium experience: a knowledgeable travel agent. A simple conversation can trigger and complete a chain of complex autonomous actions with no swipes, logins or clicks.
Agents as Proxies for Human Intent
The familiar ways we interact with digital services, tapping through menus, swiping between apps, or logging in with passwords, still dominate most of our digital lives. That is starting to change: interaction is shifting from something we operate to something systems manage on our behalf and at our direction.
Agents are already taking over routine tasks that benefit more from speed and accuracy than from human involvement: completing forms, executing procurement contracts, and making purchases while applying every available coupon. Unlike chatbots that rely on pre-trained knowledge or simple search, these agents break goals down into actionable steps, navigate APIs, collect data, and synthesise results without waiting for manual input at every turn.
But giving agents the ability to act is only half the equation. The more decisions they make on our behalf, the more they expose the trust gap between humans and AI. Even if an agent can perform a task, it must be trusted not only to do so correctly but also to operate within its authorised boundaries.
As AI agents begin operating on our behalf, trust becomes the new interface. Closing that trust gap must start with identity: an agent must be able to prove what it is authorised to do, who it represents, and under what conditions it holds that authority.
Identity: The Foundation of Permission
Verifiable credentials close this gap. They provide proofs, a set of permissions that lives within the agent itself, enabling agents to prove their roles, permissions, and delegations. These proofs can be checked without central databases, scoped to specific tasks or contexts, and revoked when no longer valid, turning identity from a static attribute into a dynamic permissioning layer that moves with the agent. In practice, a travel assistant might hold credentials for your preferences, spending limits, and loyalty memberships, or a procurement agent could carry conditional authority to negotiate and sign contracts.
What Needs to Shift to Enable This
If these agents are expected to execute actions without direct oversight, then trust must be engineered with the same precision once reserved for the interface itself. That means designing systems that do more than let an agent present a credential. They must determine if that credential is valid, who issued it, and under what rules.
Verifiable credentials and decentralised identifiers provide the technical foundation. They allow agents to carry cryptographic proofs of roles, permissions, and delegations that are portable, inspectable, and bound to specific conditions such as time of use, task scope, or issuing authority.
Trust registries define these parameters and make codified trust machine-readable, determining which parties are authorised to issue certain types of credentials, the rules under which they do so, and the constraints that apply. This allows agents and relying parties to instantly confirm whether a credential came from a recognised and legitimate source.
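At its simplest, a trust registry is a machine-readable mapping from issuer identifiers to the credential types they may issue. The registry contents and DIDs below are invented for illustration; production registries add governance rules, constraints, and their own verification layers.

```python
# Hypothetical registry: issuer DID -> credential types it may issue.
TRUST_REGISTRY: dict[str, set[str]] = {
    "did:example:airline-alliance": {"loyalty_membership"},
    "did:example:acme-hr": {"procurement_authority", "employee_role"},
}

def issuer_recognised(issuer_did: str, credential_type: str) -> bool:
    """A relying party accepts a credential only if the registry lists
    its issuer as authorised for that credential type."""
    return credential_type in TRUST_REGISTRY.get(issuer_did, set())

# A procurement-authority credential from acme-hr is accepted...
print(issuer_recognised("did:example:acme-hr", "procurement_authority"))  # True
# ...but the same issuer cannot mint loyalty memberships.
print(issuer_recognised("did:example:acme-hr", "loyalty_membership"))     # False
```

The lookup is deliberately cheap: a relying party can run it on every credential presentation, so recognising a legitimate issuer never becomes a bottleneck for the agent's workflow.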
To codify trust at scale, delegation must be conditional, consent reversible, and accountability traceable. This creates a system where authority can be granted for a specific task or time period, withdrawn if conditions change, and audited when disputes arise. In practice, this means an agent that books travel, approves an invoice, or manages access can prove exactly what it was allowed to do, who authorised it, and the limits of its authority.
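The three properties above (conditional delegation, reversible consent, traceable accountability) can be sketched together in one small ledger. All names and the data layout are illustrative assumptions; the point is that every authorisation decision is both conditional and recorded.

```python
from datetime import datetime, timezone

class DelegationLedger:
    """Illustrative ledger: grants are scoped and time-bound, revocable,
    and every authorisation decision is appended to an audit log."""

    def __init__(self) -> None:
        self.grants: dict[str, dict] = {}
        self.audit_log: list[tuple] = []

    def grant(self, grant_id: str, agent: str, scope: str,
              expires_at: datetime) -> None:
        self.grants[grant_id] = {"agent": agent, "scope": scope,
                                 "expires_at": expires_at, "active": True}

    def revoke(self, grant_id: str) -> None:
        # Consent is reversible: withdrawing it takes effect immediately.
        self.grants[grant_id]["active"] = False

    def authorise(self, grant_id: str, agent: str, scope: str,
                  now: datetime) -> bool:
        g = self.grants.get(grant_id)
        allowed = bool(g and g["active"] and g["agent"] == agent
                       and g["scope"] == scope and now < g["expires_at"])
        # Accountability is traceable: log every decision, allowed or not.
        self.audit_log.append((now, grant_id, agent, scope, allowed))
        return allowed

ledger = DelegationLedger()
ledger.grant("g1", "did:example:invoice-agent", "approve_invoice",
             expires_at=datetime(2025, 12, 31, tzinfo=timezone.utc))
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(ledger.authorise("g1", "did:example:invoice-agent",
                       "approve_invoice", now))  # True: within grant
ledger.revoke("g1")
print(ledger.authorise("g1", "did:example:invoice-agent",
                       "approve_invoice", now))  # False: revoked
print(len(ledger.audit_log))                     # 2: both decisions traceable
```

When a dispute arises, the audit log answers exactly the questions the paragraph poses: what the agent was allowed to do, who authorised it, and where its authority ended.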
The interface may be disappearing, but the burden of trust is only growing. As agents act without direct oversight, systems must prove who acted, under what authority, and within what limits. Trust must be verifiable, enforceable and machine-readable, or delegation becomes a liability.