Rewiring the Machine: How Fair Supply Built Their AI-First Development Process
Becoming AI Native #1: Product & Engineering
This is the first in a series of posts about how startups are rethinking how they operate by incorporating AI into every aspect of their workflow to increase team productivity and improve customer experience.
We're starting with Ben Henderson, Chief Product & Technology Officer at Fair Supply. Ben brings deep expertise in B2B SaaS product leadership, having led Product at Help Scout, A Cloud Guru, and later Pluralsight (following the ACG acquisition) before joining Fair Supply.
In this interview, Ben reveals how his team radically accelerated product velocity by incorporating AI into every part of their workflow. He shares detailed insights on rebuilding tech stacks for the AI era, implementing novel approaches to team collaboration, and maintaining defensibility when every competitor has access to the same AI tools.
From tactical machine-learning solutions to full organisational transformation, Ben offers a practical roadmap for teams moving beyond AI experiments to production-ready implementation.
Key insights include:
Why traditional tech stacks are becoming obsolete in the AI era
How developers and designers collaborate to create prototypes, accelerating product development
Novel approaches to quality control using LLM agents as verifiers
Practical strategies for maintaining flexibility as AI tools rapidly evolve
Insights into the tools Fair Supply uses daily, including Claude MCP, Cursor, Baseten, LangChain, and others.
On Evolving the Tech Stack for AI
“It was like trying to retrofit a car when we really needed to rethink the entire vehicle.”
How are you approaching AI implementation across the organisation?
We're thinking about it across three layers: the technical stack, integration into the product, and the tools we use on top. The stack piece has been particularly enlightening – and challenging.
We had what most would consider a fairly standard setup – tabular databases, Angular frontend, Django backend. As we started experimenting with AI tools, we quickly realised these systems weren't evolving at the same pace as AI-assisted development. It was like trying to retrofit a car when we really needed to rethink the entire vehicle. You can tell a lot about a company or a framework by its approach to basic AI enablers like documentation and code examples.
What specific changes have you made to the stack?
We're now moving to React and Next.js because there's so much more training data and better community support. Companies like Vercel are building forward-facing tools and SDKs that integrate well with AI coding agents. Even the documentation is evolving – there's a movement called llms.txt, where websites and tools publish an LLM-friendly version of their documentation in a plain text file, ensuring AI tools always have access to the most up-to-date information.
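For context, an llms.txt file is typically a small markdown document served at a site's root that gives language models a concise, current index of the documentation. The snippet below is a hypothetical example of the general shape, not any particular company's actual file:

```
# Example Project

> A hypothetical SaaS product. This file gives LLMs a concise,
> up-to-date index of our documentation.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first steps
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): recent releases
```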
We're also completely rethinking our approach to databases and knowledge storage. We've moved toward a knowledge graph approach with vector embeddings, which is particularly well-suited for the supply chain calculations and data connections we need. This shift would have happened eventually, but AI has accelerated the timeline and changed how we think about implementing it.
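As a rough illustration of that pattern (not Fair Supply's actual schema), here's a minimal sketch of storing supply-chain entities as graph nodes carrying vector embeddings in Neo4j, which Ben mentions later in the interview. It assumes a Neo4j 5.x instance with vector index support; the `embed()` helper, labels, and dimensions are placeholders, and the exact index syntax varies between versions:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def embed(text: str) -> list[float]:
    # Stand-in: replace with a call to a real embedding model.
    return [0.0] * 1536

with driver.session() as session:
    # One-off: a vector index over supplier embeddings (Neo4j 5.x syntax).
    session.run(
        "CREATE VECTOR INDEX supplier_embedding IF NOT EXISTS "
        "FOR (s:Supplier) ON (s.embedding) "
        "OPTIONS {indexConfig: {`vector.dimensions`: 1536, "
        "`vector.similarity_function`: 'cosine'}}"
    )

    # Store a supplier as a graph node, link it to what it supplies,
    # and attach an embedding of its description for semantic lookup.
    session.run(
        "MERGE (s:Supplier {name: $name}) "
        "SET s.embedding = $embedding "
        "MERGE (c:Commodity {name: $commodity}) "
        "MERGE (s)-[:SUPPLIES]->(c)",
        name="Acme Metals",
        commodity="aluminium",
        embedding=embed("Acme Metals, an aluminium smelter"),
    )

    # Later: combine graph traversal with vector similarity search.
    result = session.run(
        "CALL db.index.vector.queryNodes('supplier_embedding', 5, $query) "
        "YIELD node, score RETURN node.name AS name, score",
        query=embed("aluminium smelting"),
    )
    for record in result:
        print(record["name"], record["score"])

driver.close()
```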
How are you managing the transition process?
What's fascinating is how AI transforms the re-platforming process itself. Traditionally, this kind of transition would take 12-18 months. Now, we can do it in weeks because converting code from one framework to another is a relatively straightforward task for LLMs. They're particularly good at this kind of translation work, much more so than creative programming from scratch.
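To make that concrete, here's a minimal sketch of the kind of translation pass Ben is describing, using the OpenAI Python SDK to convert Angular components to React/Next.js. The model name, paths, and prompt are illustrative placeholders rather than Fair Supply's actual tooling:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def convert_component(angular_source: str) -> str:
    """Ask the model to translate one Angular component into a React component."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You convert Angular components into idiomatic React function "
                    "components for a Next.js app. Preserve behaviour exactly and "
                    "return only the TypeScript/TSX code."
                ),
            },
            {"role": "user", "content": angular_source},
        ],
    )
    return response.choices[0].message.content

for path in Path("src/app").rglob("*.component.ts"):
    tsx = convert_component(path.read_text())
    out = Path("migrated") / path.with_suffix(".tsx").name
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(tsx)
```

The output still needs review – and, as Ben describes below, can be checked by a verifier LLM – but it's this mechanical translation work that collapses the timeline from months to weeks.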
On Development Process and Team Collaboration
“Instead of designers and developers working separately and hoping everything aligns in a few sprints, we're doing rapid prototyping together from day one.”
How has your development process changed?
It's fundamentally changing how the team collaborates. Instead of designers and developers working separately and hoping everything aligns in a few sprints, we're doing rapid prototyping together from day one. Next week, for example, we're going into a two-week sprint where the pods will collaborate virtually – designers using tools like V0 to create UI components while developers build out the first iteration of tools using Cursor or similar.
This approach to prototyping means everyone understands the scope from the beginning. It can't just be a development workflow – it has to include product, design, data and engineering all leveraging these new tools together.
What specific tools and processes have you found most effective?
One of our biggest breakthroughs came from using LLM agents as verifiers and fact-checkers. Initially, we had humans manually reviewing every output, and we still do in areas where we need high confidence. However, we adopted the concept of feeding outputs from one LLM into another LLM whose sole job is to check the work. If it's not good enough, it gets sent back. This has dramatically increased output quality and proven to be an approach we can apply across multiple areas with agents.
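A minimal sketch of that generate-and-verify loop, kept deliberately framework-agnostic; `generate` and `verify` stand in for whatever LLM calls a team uses, and the pass/feedback protocol is an assumption for illustration:

```python
def generate(task: str, feedback: str | None = None) -> str:
    """Stand-in: call the 'writer' LLM, optionally including reviewer feedback."""
    raise NotImplementedError

def verify(task: str, draft: str) -> dict:
    """Stand-in: call a second LLM whose sole job is to check the draft.
    Assumed to return something like {"pass": bool, "feedback": str}."""
    raise NotImplementedError

def generate_with_verifier(task: str, max_rounds: int = 3) -> str:
    feedback = None
    for _ in range(max_rounds):
        draft = generate(task, feedback)
        review = verify(task, draft)
        if review.get("pass"):
            return draft
        # Not good enough: send it back with the reviewer's notes attached.
        feedback = review.get("feedback", "")
    # Didn't converge within the budget – escalate to human review.
    return draft
```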
We're also seeing huge benefits from advances in LLM memory and context management. Claude has improved significantly in this area with MCP (Model Context Protocol), which lets us load current documentation from various sources into vector stores – or even plain markdown documentation maintained in Obsidian – and then use natural language to update the codebase. This helps address one of the biggest challenges: keeping up with rapidly changing frameworks.
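As a sketch of the documentation half of that workflow, here's what loading markdown notes into a vector store might look like with LangChain, which the team mentions elsewhere. The Obsidian vault path, embedding model, and FAISS index are assumptions, and module paths shift between LangChain releases:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load every markdown note from the (hypothetical) Obsidian vault.
loader = DirectoryLoader("docs/obsidian-vault", glob="**/*.md", loader_cls=TextLoader)
docs = loader.load()

# Chunk the notes so retrieval returns focused passages rather than whole files.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
chunks = splitter.split_documents(docs)

# Embed and index; re-running this after editing notes keeps the store current.
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# A coding agent (or an MCP server wrapping this store) can then pull the
# latest framework notes on demand.
for doc in store.similarity_search("current Next.js routing conventions", k=3):
    print(doc.metadata.get("source"), doc.page_content[:120])
```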
On Infrastructure and DevOps
“DevOps isn't just about spinning up instances anymore – it's about managing complex pipelines and workloads that involve GPUs and real-time inference.”
How are you handling the infrastructure challenges?
We're primarily on AWS, and they've done a good job of providing self-hosted options for many of the newer tools. For instance, our knowledge graph is from Neo4j, but they offer a great self-hosted option on AWS that lets us keep everything within our secure infrastructure.
We haven't yet moved to using Amazon Bedrock for self-hosted LLMs, but I'm watching that space closely. As the business matures, having that option rather than relying solely on OpenAI's APIs will become increasingly important.
We’re also following companies tackling the GPU compute challenge, like Baseten. They've built both a hardware and a service platform, essentially creating their own version of Lambda functions optimised for LLMs. They even offer a self-hosted AWS option now. While AWS will probably build something similar eventually, these specialised solutions help solve immediate challenges around GPU compute costs and optimisation.
How do you see the role of DevOps evolving?
The infrastructure requirements are becoming much more sophisticated. DevOps isn't just about spinning up instances anymore – it's about managing complex pipelines and workloads that involve GPUs and real-time inference. We need to think about how to do this in a serverless, on-demand way so we're not unnecessarily burning through credits or GPU time. It's almost like returning to the early days of serverless architecture but with added complexity around AI workloads.
I think we'll see a resurgence in the importance of DevOps over the next couple of years. The job becomes less about configuring individual instances and more about orchestrating complex systems that can handle AI workloads efficiently. It's a hybrid role that combines traditional DevOps skills with AI infrastructure expertise.
On Tool Selection and Integration
“We're also taking a microservices approach to our AI architecture with a focus on internal APIs, making it easier to switch between different tools as the landscape evolves.”
What specific tools are you using in your AI implementation?
At the development level, some team members are using Cursor, while others prefer Cline. I personally prefer tools that feel more integrated with traditional coding workflows. But we're intentionally not being too rigid about tool choice – it's more about how we leverage them.
We're also taking a microservices approach to our AI architecture, with a focus on internal APIs, making it easier to switch between different tools as the landscape evolves. For example, we want to be able to swap between LangChain and AutoGen as needed. No one knows who the winners will be in this race, so we're building in flexibility from the start.
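A minimal sketch of what that internal API boundary could look like: service code depends on a small in-house contract, and each orchestration framework sits behind its own adapter, so swapping LangChain for AutoGen (or anything else) is contained to one class. The interface and adapter names here are hypothetical, not Fair Supply's actual code:

```python
from typing import Protocol

class AgentRunner(Protocol):
    """Internal contract our services depend on – not any vendor's API."""
    def run(self, task: str, context: dict) -> str: ...

class LangChainRunner:
    """Adapter: wraps a LangChain chain/agent behind the internal contract."""
    def __init__(self, chain):
        self._chain = chain

    def run(self, task: str, context: dict) -> str:
        return self._chain.invoke({"task": task, **context})

class AutoGenRunner:
    """Adapter: wraps an AutoGen agent behind the same contract."""
    def __init__(self, agent):
        self._agent = agent

    def run(self, task: str, context: dict) -> str:
        return self._agent.initiate(task, context)  # hypothetical wrapper call

def summarise_supply_chain(runner: AgentRunner, company: str) -> str:
    # Service code only ever sees the internal API; the framework behind it can change.
    return runner.run("Summarise supply-chain risk", {"company": company})
```

The point of the design is that nothing outside the adapters imports a framework directly, which is what keeps the switching cost low.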
I'm interested in combining different models to leverage their respective strengths. Claude is excellent for producing high-quality written reports, GPT-4 is great at analytical tasks and web search, and other models excel in different areas. It's like building a team of humans – you want to leverage each member's strengths.
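One lightweight way to express that "team of models" idea in code is a routing table keyed by task type; the model identifiers below are purely illustrative, not a recommendation:

```python
# Map each task type to the model that plays to its strengths (illustrative names).
MODEL_FOR_TASK = {
    "written_report": "claude-sonnet",    # strong long-form writing
    "data_analysis": "gpt-4",             # strong analytical reasoning
    "code_translation": "default-coder",  # whichever coding model the team prefers
}

def pick_model(task_type: str) -> str:
    # Fall back to a general-purpose model for anything unmapped.
    return MODEL_FOR_TASK.get(task_type, "general-purpose-model")
```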
Looking Ahead
What’s on the roadmap for 2025?
Fair Supply’s three core goals are:
Solidify our new collaborative workflow, ensuring every team member leverages AI tools effectively.
Apply agentic automation to internal data and methodology pipelines, transforming traditionally manual, time-intensive processes into real-time automated updates.
Expand beyond ESG/Sustainability by accelerating the development of new models and datasets for customers.
We’re also hiring!
We’re looking for:
Engineers with Dev/MLOps or Data Engineering expertise, with a focus on infrastructure and workflows.
PMs with experience leveraging AI tools in the growth/PLG space.