"In a world of software abundance, all that matters is taste"...
• 4 minute read
By Chris Kindt
…This was our favourite quote from last week’s round of conversations in Silicon Valley. Once again, we met with some of the leading VCs, foundation model labs, and frontline innovators. The headline: despite a noisy picture peppered with disappointment, it’s clear that AI momentum is accelerating.
Models are progressing at a steady pace, and real business impact is accelerating as agentic tooling continues to mature. This is creating more urgency to mobilise AI transformations, in contrast to last year’s more tactical AI optimisation plays.
We’re now seeing a gap emerge between the winners who are reprioritising and committing, and those hampered by missing data foundations and inaction.
Some core takeaways:
1. Foundation models: steady progress, real impact.
In previous months, sceptics pointed to failed model runs at OpenAI and Meta as evidence of plateauing research. But, if anything, progress seems to have picked up again. In recent weeks there’s evidence of: model techniques improving (e.g. Google’s DeepThink deploying parallel reasoning streams); Reinforcement Learning advancing and driving more precise tool use (OpenAI’s o3, Claude Opus 4); increases in model memory; and more widespread agentic frameworks emerging (A2A, MCP). Focus is now swinging back towards compute capacity and energy as future constraints on continued progress.
Since the first generation of reasoning models launched at the end of ’24, our view has been that models were already ‘smart enough’ for many transformative use cases in B2B SaaS – use cases that agentic frameworks are now unlocking for us. With continued technical progress, our challenge remains to formulate AI strategies that factor in further gains, building for what will be available in six months or more and ‘skating to where the puck is going’.
2. Foundation models are moving up the stack in 2025.
Pushing into the app layer remains a priority for the foundation model labs this year, as evidenced by recent hires and org changes. We should expect them to play a much more active and impactful role in our B2B application ecosystem.
At the Fortune 50 end, OpenAI have begun winning business from large SaaS incumbents by offering bespoke agentic solutions, with the system-of-record component bundled in. At the other end, they’re due to launch self-serve agentic platforms that let in-house teams and start-ups alike tailor OpenAI’s agent templates into their own solutions. And for particularly high-value and adjacent use cases, foundation model builders are launching their own products – such as Codex by OpenAI and Claude Code by Anthropic – in the agentic engineering space.
3. The next act for AI start-ups: task-specific agents and getting to meaningful scale ($100M+ ARR).
2024 was a very lucrative year for AI start-ups, fuelled by ample VC funding and easy revenues from CIOs’ experimental budgets. They’re no longer ‘just small start-ups’ – many are reaching $100M+ ARR, and they’ve reached this milestone fast.
They are not just getting bigger, but also better. Enabled by reasoning models and agentic frameworks, the latest breed of successful start-ups is deploying task-specific agents to automate specific B2B tasks (e.g. Norm.ai in Compliance).
As ever, there is a swamp of ‘AI demoware’ to navigate. GenAI delivers a compelling demo fast, but building GenAI Ops infrastructure that is ready for production remains a hard endeavour.
Hg businesses have stepped away from many AI start-ups where diligence revealed that their products could be rebuilt quickly by working AI-first. The most innovative Corp Dev strategies for incumbent software businesses use this ‘AI demoware’ as a helpful signal of customer demand – but then channel a share of deal funds into product R&D to deliver the value-add feature organically.
4. For B2B incumbents, the window is closing to capitalise on the ‘data and expertise advantage’.
Some B2B incumbents are pointing to AI fatigue among customers, and to the absence of AI revenues that could justify the ‘AI hype’… but OpenAI’s success deploying Vertical AI should act as a wake-up call. We see more urgency than ever to mobilise, as the window for incumbents to leverage their expertise, data and distribution advantage is closing.
The battleground is as before: Vertical AI start-ups are positioning themselves as the intelligent and agentic layer where workflow happens, relegating system-of-record incumbents to being commoditised database platforms. Meanwhile, incumbents are racing to bundle in sufficient AI value, playing on their customers’ CIOs’ desire for consolidated providers, and looking for an edge through their expertise and data.
Will the foundation models rule us all? No, due to the "fractal complexity in the final mile" of delivering specific vertical tasks for customers. This complexity requires deep domain expertise, data and sophisticated upstream and downstream integrations that only vertical specialists can deliver.
But the race against vertical AI start-ups is very much alive. The true differentiator will be brilliant products delivering a new level of tech value-add by harnessing these new technologies. "In a world of software abundance, all that matters is taste" – this was a great quote from a VC investor. In other words, a deep and intuitive understanding of how best to solve tasks for customers in a specific vertical is key.
5. The incumbents’ transformation imperative: from AI-extensions to AI-first
As the technology has progressed from incremental co-pilots to true agentic step-changes, incumbents’ path forward similarly requires bigger and bolder steps:
Commit AI-first, top-down: taking on incremental priorities yielded AI co-pilots in 2024. But the level and pace of change now requires committing to an AI-transformation, actively reprioritising to create the necessary space and focus. If every firm needed an AI strategy in 2024, they now need an AI transformation in 2025.
AI-first and from the ground up: the extreme operating leverage AI start-ups are getting is partially due to a carefully crafted, top-down, AI-first operating model, but probably primarily to an AI culture in which the right talent is challenged to always solve tasks AI-first, leading to a swarm of smart AI automation within these businesses. This requires us to engage and activate our entire organisations with the right inspiration, training and tooling, as well as to change our talent practices.
Incubate to innovate: Create protected spaces for AI-native innovation, free from organisational ‘antibodies’ that will kill off any nascent gains. The concept of an "AI-native twin" - creating AI-first challengers to co-exist as loosely coupled entities – is resonating strongly with our leading CEOs and Chairs.
Accelerate your expertise: Hg firms are massively amplifying their AI efforts by tapping into the collective Hg ecosystem – learning from their 50 peers and their 300 AI projects, and harnessing the ~100 AI experts that Hg is deploying from the centre for hands-on support.
Our advice remains: be bold, experiment fast and move decisively. Those leading the charge are already pulling ahead, and this will only accelerate as agentic gains, and the wider GenAI ecosystem, mature. Just as we witnessed in the SaaS transition, early movers are establishing advantages – and these will compound over time.
Join the journey
Ready to drive this next platform shift? We're investing heavily in our GenAI flywheel through exceptional talent, our Hg Catalyst product incubator, and pioneering AI partnerships. Join us on this transformative journey - get in touch: AI@hgcapital.com!