How often do you Google anything anymore?
Users are definitely not there yet with AI. My mum doesn’t know how to use ChatGPT — and many of those who do still treat it as a kind of glorified autocomplete, just as they did when they first encountered it in 2022/23.
But the direction of travel seems clear to me. I’ve been banging on about this since GPT-3 came out in 2020, to anyone willing to listen, and kept going even when they weren’t. In the last couple of months something has shifted, and we’ve reached a full-circle moment: people in my life now come to me asking how Claude works, how they can use it to optimise their work, and what comes next. It happened later than I expected, given that I ran my first Lunch & Learn on the topic in 2023, but it’s here, and it’s accelerating.
There’s been a joke floating around the tech industry for a while:
A software QA engineer walks into a bar. He orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ‘ueicbksjdhd’.

The first real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone.
The thing I find so brilliant about the modality of LLM interfaces is that people just… ask for what they want. When designing a product, there will always be an edge case you miss, creating a barrier between a user and their goal. Even with brilliant software, the interface itself always creates some level of friction.
With an LLM, this friction is almost entirely removed. You state what you need — you don’t have to figure out where to find it, you don’t need to understand the navigation, or taxonomy, or quirks of a platform’s UI, or whether something is even possible to find at all.
You could be the smartest user alive, or the least technical. Everyone is capable of starting in the same place: with natural language, asking for what you need. And as models expand their context windows and can reason over larger bodies of information, this might actually prove to be the most scalable UX pattern available for large, complex, and messy systems, compared with traditional, rigid navigational structures.
Right now, it seems that every notable company is modifying its interface to be AI-first, which is almost certainly the right thing to do at this stage — but what if the existence of the platform itself is what’s slowing adoption of the service?
Anthropic, who are heavily focused on the Enterprise market, are practically shipping a new feature every few days, eliminating dozens of startups each time they do. I almost never use Google anymore; I mainly use ChatGPT (5.4 at the time of writing) and Codex — for everything. A large part of my design process has now been replaced by live code rather than Figma prototypes, and my colleagues use these systems for large swathes of their research. I’m yet to see a compelling argument for why these AI labs wouldn’t effectively absorb the function of all enterprise-grade software and consolidate it into a single interface. Open a new thread, prompt, and you’re off.
If people’s work is largely consolidated into a single platform like Claude, why would they ever log in to your SaaS tool? If the shift continues in this direction, the role of most tech companies becomes much clearer. Their role is no longer to offer the most beautiful dashboard, or the most ‘actionable insights’. The job of these companies will be to maintain well-structured, usable, and trustworthy industry-specific datasets that currently sit beneath an interface.
The most valuable IP a typical startup can now produce is the algorithmic layer built on top of the data — calculating additional industry-specific values, or making connections within their datasets that won’t be natively captured by an LLM in the near term.
If we assume that everything converges into a single LLM interface, then the data layer will be your startup’s only route to survival. Every company will be asking the question: “We have all of our data, so why can’t we just build this dashboard ourselves?” The new moat is being the team who can effectively custody, clean, and surface the data back to the model in a more reliable way.
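To make that "data layer" role concrete, here is a minimal sketch of what surfacing a curated dataset back to a model could look like: a function plus a function-calling schema in the JSON style most LLM APIs accept for tool definitions. All names, the example domain (freight rates), and the schema fields are illustrative assumptions, not any vendor’s real API.

```python
# Hypothetical tool a data-layer company might expose to an LLM platform.
# Names and data are illustrative only; a real product would query a
# proprietary, continuously cleaned dataset rather than a static dict.

def lookup_freight_rate(origin: str, destination: str) -> dict:
    """Return a validated industry data point for the model to cite."""
    rates = {
        ("LHR", "JFK"): {"rate_usd_per_kg": 4.20, "verified": True},
    }
    key = (origin.upper(), destination.upper())
    if key not in rates:
        # Saying "not covered" honestly is part of being trustworthy data.
        return {"error": "route not covered", "verified": False}
    return rates[key]

# Schema describing the tool, in the generic JSON-schema style used for
# LLM function calling. The model, not a dashboard, becomes the consumer.
TOOL_SCHEMA = {
    "name": "lookup_freight_rate",
    "description": "Trusted freight rates from a curated industry dataset",
    "input_schema": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
        },
        "required": ["origin", "destination"],
    },
}
```

The point of the sketch is where the value sits: not in the lookup itself, but in the verification, cleaning, and coverage guarantees behind it — the parts a general-purpose model cannot reproduce on its own.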
So, will everything become an API?
The answer is no, not yet — but we need to seriously consider a very near-term future where Claude Code is fully integrated with Microsoft Office, email, calendars, search, people’s local machines… everything. Why would users ever want to leave that single tool? It’d be unnecessary friction. The value of a lot of these companies does not disappear; it moves down the stack. The businesses that win will be the ones best at supporting Anthropic’s world domination.
We shouldn’t reason from where the industry is today, but from where it might be in 12–18 months. Right now, everyone is building chatbots, because that is the pattern the current generation of LLMs makes most obvious. But that may turn out to be the least durable layer in the stack. As the AI companies continue to expand what their platforms can search, retrieve, generate, and action, more of the conventional enterprise interface layer will be compressed into the model itself. The winners will not be the platforms that best mimic the current pattern, but the ones that position themselves around the layers AI is least likely to commoditise: trusted data, domain insight, workflow infrastructure, and proprietary systems of intelligence.
The writing’s on the wall. The winners will no longer own the interface. The winners will own the truth beneath it.