The AI debate right now centres almost entirely on models – which LLM is smarter, whether they’ll be commoditised, whether OpenAI or Anthropic or Google wins the arms race. These are real questions. But they’re not the most important ones. The most important question is what sits between the model and the outcome. And right now, that layer barely exists.
Call it the context engine.
Here’s the problem with a genius in a room. Sam Altman and Dario Amodei have both used some version of this analogy – imagine having a hundred brilliant minds working on your hardest problems. It’s a compelling image. But a genius without context is just a smart person operating in a vacuum. Hand them a legal brief with no background on the client, the jurisdiction, the negotiating history, the personalities involved – and their output is generic at best. The intelligence is real. The usefulness is limited.
What changes everything isn’t adding more geniuses. It’s the briefing before they walk into the room.
That briefing – the situational awareness, the organisational memory, the understanding of how a specific user or company operates in the world – is what a context engine provides. And it’s almost entirely missing from how most people are using AI today. We are essentially handing brilliant minds a task with no background and wondering why the outputs feel impressive but imprecise.
Lessons from Google’s history
Think about how Google evolved. In the early days, the metric everyone tracked was index size – how many websites had Google crawled. More pages meant better search. That was the commodity race, and Google won it. But analysts eventually realised that index size did not give Google a long-term sustainable advantage. That advantage came from the fact that Google knew you. It understood what you were actually looking for in the context of everything else you’d ever searched for. The index was replicable. The user relationship wasn’t.
We are in the index phase of AI right now. Everyone is measuring parameters, benchmarks, reasoning scores. These matter. But they are not where the lasting value will accumulate. The context layer is.
Consider what context unlocks in practice. A law firm’s AI doesn’t just need to know the law – it needs to know this client’s risk tolerance, this partner’s drafting style, twenty years of case history, and how the opposing firm tends to negotiate. A software team’s AI doesn’t just need to write clean code – it needs to understand the architecture decisions made three years ago, the technical debt the team has chosen to live with, and what “done” means in this organisation. The raw intelligence of the underlying model matters far less than whether it knows where it is.
Here’s why this is also a business story. LLMs, for all their impressiveness, are ultimately replicable. Given enough capital and talent, you can train a competitive model. That’s not a dismissal of what OpenAI, Anthropic, and Google have built – it’s an observation about the nature of the asset. The race between them is real, and the outcome matters. But it’s a race that anyone with sufficient resources can keep running.
Why context matters in AI
Context is different. Context requires users and organisations to actively choose to share information – their workflows, their history, their preferences, their institutional knowledge. That act of sharing creates switching costs. Once an organisation’s context lives inside a system, leaving that system means starting over. The context doesn’t transfer. That’s an advantage that compounds over time in a way that model performance alone does not.
This is also why organisational context is more valuable than individual context. An individual user can rebuild their relationship with a new tool relatively quickly. An organisation cannot. The switching cost is institutional – it lives across teams, processes, and years of accumulated data. Whoever captures that first, and earns the trust required to hold it, is sitting on something that looks less like software and more like infrastructure.
The LLM debate will continue. It’s not unimportant. But the next phase of AI value creation won’t be won by whoever builds the smartest model in isolation. It will be won by whoever figures out how to make these models truly situationally aware – equipped not just with what they’ve learned, but with where they are, who they’re serving, and what actually matters in this specific moment.
The context engine is coming. The question is who builds it, and who owns what it learns.
Judah Taub is the founder and managing partner of Hetz Ventures, an Israeli early-stage venture capital firm specialising in cybersecurity, data, and AI infrastructure.
