Many of the fastest-growing startups are design-led. Think Notion, Linear, and Superhuman. The user experience feels magical.
For founders building AI-native application software, this is more important than ever.
From Chat…
The history of technology is littered with examples of mapping new capabilities to familiar interfaces – we designed early GUIs to look like physical desktops with folders and files.
When large language models first emerged, we assumed that because they were good at chat, chat must be the right interface. Every product became a chatbot because that's what seemed natural. But we quickly discovered that making users type "please format this paragraph" is worse than simply clicking a "Format" button.
To Hybrid Interfaces
The best products today blend AI seamlessly into familiar workflows, creating what we might call hybrid interfaces. Cursor lets developers write code normally until they need help – then a simple keyboard shortcut summons AI assistance. Notion integrates AI capabilities directly into its standard formatting menu, making AI feel like just another text editing tool.
Building Trust
But enterprise adoption of AI faces a deeper challenge: trust. Organisations simultaneously want automation and control – a paradox that shapes how AI products must evolve. This isn't just about model accuracy (though that matters). It's about creating interfaces that make AI's decision-making process visible and controllable when needed while keeping it out of the way during routine tasks.
Progressive Automation
This paradox creates a natural progression in how companies adopt AI tools. They start by watching AI handle simple, low-risk tasks while maintaining full control – much like how organisations first adopted cloud computing for non-critical workloads. As confidence builds, they begin collaborating with AI on more complex work, gradually increasing the scope of what they'll delegate. Finally, they reach the stage of confident delegation, where AI handles entire workflows independently.
Consider how this plays out in sales operations. In the observation phase, a sales team might let AI analyse call transcripts to highlight key moments and customer sentiment but rely on humans to validate these insights before updating the CRM or adjusting strategy. The interface needs to show exactly why the AI flagged certain moments as significant and make it easy for salespeople to correct misinterpretations.
As trust builds, the team enters the collaboration phase. Now AI might proactively suggest follow-up strategies based on conversation patterns and deal history, while salespeople focus on relationship building and complex negotiations. The interface becomes a true partnership tool – AI handles pattern recognition and data analysis, while humans guide strategic decisions and maintain customer relationships.
In the final delegation phase, AI might autonomously manage lead qualification, route prospects to the right salespeople, and orchestrate follow-up sequences, with humans focusing only on high-value interactions and exception cases. The interface shifts to emphasise goals and outcomes rather than individual actions.
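The three phases above amount to a trust-gated routing decision. As a hypothetical sketch (not any particular product's design; the phase names, risk score, and threshold are all assumptions for illustration), the core logic might look like:

```python
from enum import Enum

class Phase(Enum):
    OBSERVATION = 1    # AI annotates; humans act on every insight
    COLLABORATION = 2  # AI suggests; humans approve before execution
    DELEGATION = 3     # AI acts; humans handle exceptions only

def route(phase: Phase, task_risk: float, risk_threshold: float = 0.7) -> str:
    """Decide how much autonomy the AI gets for a given task.

    Even in the delegation phase, high-risk tasks fall back to
    human review -- trust is scoped to task risk, not absolute.
    """
    if phase is Phase.OBSERVATION:
        return "annotate_only"
    if phase is Phase.COLLABORATION:
        return "suggest_for_approval"
    # Delegation phase: act autonomously unless the task is risky.
    if task_risk >= risk_threshold:
        return "escalate_to_human"
    return "execute_autonomously"
```

So a delegated-but-risky task (say, qualifying out a strategic account) still escalates, which is what lets organisations expand the AI's scope without losing the control they need.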
Or take product design teams. They begin by letting AI suggest simple UI improvements based on usage patterns – button placements or text clarity. Designers maintain full control, using AI suggestions as just another input in their process. The interface must make it clear why each suggestion was made and how it aligns with established design principles.
This evolves into collaborative design, where AI becomes an active partner in the creative process. It might generate multiple design variations based on brand guidelines and user behaviour, while designers focus on overall user experience and brand consistency. The interface needs to support fluid exploration of AI-generated options while maintaining designer control over final decisions. Galileo AI is already doing this, letting users generate and edit UIs using natural language – opening UI design to anyone, much as Canva democratised graphic design.
Eventually, some design tasks become fully automated. AI might autonomously optimise landing pages based on conversion data, or generate and test multiple variants of marketing assets, while design teams focus on strategy and brand innovation. The interface is focused on setting parameters and reviewing results rather than pixel-level adjustments.
In previous cycles these stages would have taken years, but the extraordinary pace of change in AI compresses them. In industries with high technology proficiency and low risk (e.g. website design), I'd expect to see full, high-quality automation as soon as next year; in higher-risk, lower-proficiency industries (e.g. law), it could take considerably longer.
The End Game
This progression points to where AI interfaces are headed. The next generation will be truly agentic – understanding user intent, proactively suggesting solutions, and handling complex tasks autonomously. But they'll do this while maintaining the transparency and control that enterprises require. The key characteristics will be:
Task-oriented rather than tool-oriented interactions (users express what they want done, not how to do it)
Proactive assistance based on learned patterns
Interfaces that adapt to user expertise and context
Clear visibility into AI decision-making when needed
For founders building AI-native products, this suggests a clear path forward. Start with familiar patterns that match users' existing mental models. Make AI decision-making visible when stakes are high, but invisible when they're low. Scale interface complexity with user expertise. And perhaps most importantly, build systematic ways to handle and learn from errors.
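That "visible when stakes are high, invisible when low" rule can itself be a simple interface heuristic. A minimal sketch, assuming a product can score both the stakes of an action and the model's confidence (the thresholds here are hypothetical and would be tuned per product):

```python
def should_explain(stakes: float, confidence: float) -> bool:
    """Surface the AI's reasoning only when it earns the user's attention.

    Heuristic: explain when the stakes are high or the model is unsure;
    stay out of the way for routine, high-confidence actions.
    """
    HIGH_STAKES = 0.6     # assumed threshold -- tune per product
    LOW_CONFIDENCE = 0.8  # assumed threshold -- tune per product
    return stakes >= HIGH_STAKES or confidence < LOW_CONFIDENCE
```

A routine, high-confidence autocomplete would pass silently; a low-confidence contract clause suggestion would ship with its reasoning attached.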
The implementation requires careful attention to both technical and psychological factors. Consider authentication software. Traditional tools focus on rules and permissions. AI-native authentication might watch for unusual patterns and proactively suggest security improvements, but it needs to do this without creating alert fatigue or reducing transparency.
Or consider data analysis tools. Rather than just offering a chat interface to query data, next-generation tools will understand the types of analysis users typically perform and proactively prepare relevant insights. But they'll need to make their reasoning clear and allow users to modify their assumptions.
The interfaces that enable this will be fundamentally different from what we're used to. They'll need to handle uncertainty gracefully, make AI's confidence levels clear when it matters, and provide multiple paths to accomplish tasks based on user preference and context. They'll need to learn from user behaviour without becoming unpredictable and handle errors in ways that build rather than erode trust.
Taste will be more important than ever. As barriers to software creation fall, a thousand variations of copycat software will bloom. The startups that stand out will hold strong, differentiated opinions on the ideal user experience at each level of AI proficiency, and iterate on design quickly as user expectations change.
The winners of this new era won't be the companies with the most advanced AI. They'll be the ones who make AI feel less extraordinary – who transform it from a magical but uncertain technology into a reliable tool that just works. The best AI interfaces will be the ones we don't think of as AI interfaces at all. They'll just be the natural way we interact with increasingly capable software.
Thanks to Joumana Elomar for helping me refine the ideas in this post.