Every organisation today is measured by two things: its “exit velocity” and its “ability to pivot”.
Exit velocity is how quickly you can move away from a technology, platform or contract the moment it stops serving you. Ability to pivot is how easily you can shift direction, technologically or operationally, without destabilising the business.
Together, they define a company’s real digital resilience. And right now, most organisations don’t have either.
This is the backdrop to new research findings: 98% of IT leaders now prioritise digital sovereignty, yet half still lack a formal strategy. Meanwhile, 94% say open source is very or extremely important to resilience. The intent is there, but the ability to act is lagging. The gap between aspiration and execution reveals a deeper truth: knowing where your data sits is not the same as being in control of it.
If you look at recent headlines and analysis on digital sovereignty, the discussion is mostly framed in terms of risk and the need for nation-states to exert greater control over their data and digital infrastructure.
Commentators are heavily focused on the downsides of continued over-reliance on big tech, with the tone skewed towards “threats”, “battlegrounds”, “traps” and other significant concerns. Crucially, though, much of this commentary conflates two distinct dimensions of the problem: where data is held, and who actually controls the systems that process it. That conflation is itself a risk, because it allows jurisdictional measures to stand in for genuine technical independence.
Lack of control
So, what’s the problem? In a nutshell, organisations everywhere have built much of their critical infrastructure on platforms they don’t control. This is hardly surprising. The outsourced as-a-service model has delivered enormous performance and financial benefits wherever it has been adopted.
The numbers don’t lie. The global cloud computing market was valued at over $780 billion last year, with the sector continuing to trend upwards. And as we know, US-owned providers occupy a dominant position.
And it’s precisely this issue of control, or the lack of it, that has given rise to the digital sovereignty movement.
In Europe, the regulatory wheels have been in motion for some time. NIS2 and DORA, and in the UK the Cyber Security and Resilience Bill, have tightened expectations around resilience and supply chain accountability in critical sectors.
On an organisational level, many businesses believe they are addressing the underlying issues by moving to a national or regionally hosted cloud environment. The focus here is on ensuring data is stored under the governance of localised, relevant rules. After all, sovereignty is primarily about where data is stored, right?
Well, not necessarily. The issue is that data location does not equate to control. In reality, even when the infrastructure is in the appropriate geographic location, the systems, software and underlying platforms often remain owned and governed by external providers.
In these circumstances, legal jurisdiction and access rights can still sit outside the organisation, particularly as digital systems become more deeply embedded across operations and supply chains. The result is a growing mismatch between perceived sovereignty and actual control.
The hidden risks of outsourcing
These issues are nuanced. Organisations no longer simply store data in these environments. They run core operational systems on them.
The risk here is one of usage vs control, where heavy reliance on third-party platforms is accompanied by limited visibility into how the underlying infrastructure and software actually operate.
A good example is system updates and configuration, which typically sit with the provider, leaving customers dependent on decisions made outside their own governance structures. This introduces a dynamic in which critical systems are effectively governed externally, with vendor roadmaps or policy decisions having a direct, sometimes immediate, impact on operations.
The issue is not just dependency per se, but concentrated dependency, with a small number of providers underpinning a significant share of digital infrastructure across multiple sectors.
The problems often only become apparent when an organisation needs to respond to new risks, or when a change in regulation can’t be fully addressed because the organisation lacks the required level of control. The point is that what appears to be a technology decision (i.e. which cloud provider to use) actually adds to operational and regulatory risk.
Structural vulnerability
Is this anything more than a theoretical problem? The short answer is yes, because the implications of this model reach well beyond IT environments to mission-critical real-world systems in daily use.
Take sectors such as energy, manufacturing, logistics and aviation, for example, where digital platforms support practically every key process. When control over these platforms is limited, the risk is not just technical; it extends to potential disruption of services and outputs.
In these and many other environments, concentrated reliance on a small number of non-domestic providers introduces a structural vulnerability, where issues that affect a single platform can have wide-reaching consequences across multiple organisations and sectors.
This is particularly relevant in the context of unexpected or sudden shifts in policy or international relations that could affect access or service continuity. In these circumstances, organisations may find themselves exposed to risks beyond their direct control, despite meeting baseline compliance requirements. As we have all seen, government policies and ways of doing business can change rapidly and with little to no advance warning. Limiting exposure to such situations matters, and technology infrastructure choices are a key part of how organisations do it.
The underlying risk, therefore, is a form of hidden fragility, where systems appear resilient on paper but are constrained in practice by external dependencies to the extent that digital sovereignty becomes an illusion.
Reframing sovereignty
Sovereignty needs to be reframed so organisations can have complete confidence in how their outsourced systems and services are governed and changed.
In practical terms, this means having sufficient visibility into services and dependencies to understand how they function and where risks sit. A key requirement is flexibility, particularly the ability to move workloads and data without being constrained by proprietary formats or tightly coupled architectures.
Open standards, open source and containerisation are central to this approach because they decouple workloads from the underlying infrastructure, making it possible to move between providers or environments without being locked into a single vendor’s ecosystem. This is common knowledge among IT teams, and boardrooms and government offices are now starting to catch up. Without this kind of portability built in from the start, the freedom to act remains theoretical.
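As a minimal sketch of that decoupling principle at the application layer (all class and method names here are hypothetical, not drawn from any particular library): business logic is written against a neutral interface rather than a single provider’s SDK, so the storage backend can be swapped for an open, S3-compatible or sovereign alternative without touching the rest of the code.

```python
"""Illustrative sketch only: names are hypothetical, not a real library's API."""

from abc import ABC, abstractmethod
from pathlib import Path


class ObjectStore(ABC):
    """Provider-neutral contract that application code is written against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalStore(ObjectStore):
    """One interchangeable backend: plain files on local or on-prem disk.
    An S3-compatible or sovereign-cloud backend would implement the same
    two methods, leaving every caller untouched."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)  # allow nested keys
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


# Swapping providers becomes a one-line wiring change, not a rewrite:
store: ObjectStore = LocalStore("./data")
store.put("invoices/2024-01.json", b'{"total": 1200}')
print(store.get("invoices/2024-01.json").decode())
```

Containerisation applies the same idea one level down: an OCI-compliant image packages the workload so that the hosting environment, like the storage backend above, becomes a replaceable implementation detail rather than a structural dependency.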
Without this clarity and freedom of action, organisations remain dependent on external roadmaps and decisions that may not serve their own priorities. Sovereignty, ultimately, is not a legal status; it is a practical capability, measured by exit velocity and the ability to pivot.
