
Docker Isn’t Just About Containers Anymore

I’ve been working with Docker for over a decade. I containerized my first production service when Docker Compose was still Fig. Back then, people debated whether containers would replace VMs. Time settled that argument. Containers won. But something interesting happened along the way: Docker stopped being just a container company.

If you’ve been heads-down in code, you might have missed how much the landscape has shifted. Docker now runs local LLMs, orchestrates MCP servers, and spins up microVMs for AI agents. The container runtime that changed how we deploy software is quietly becoming the infrastructure layer for how we build with AI.

I want to discuss what this really means for development teams. Most coverage either hypes it up or completely dismisses it.

The pieces on the board

Docker Model Runner lets you pull and run AI models locally through an OpenAI-compatible API. You run docker model pull the same way you’d pull an image, and the model loads into memory at runtime. It supports llama.cpp, MLX on Apple Silicon, and Vulkan for GPU acceleration. For teams that want to experiment with local models without sending data to a cloud provider, this is genuinely useful. It’s not replacing your production inference stack. It’s giving developers a way to prototype against real models on their own machines.
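
Because the API is OpenAI-compatible, any OpenAI client can point at it. Here’s a minimal sketch in Python; the endpoint port and path and the model tag are assumptions for illustration, so check docker model ls and your Model Runner settings for the real values on your machine.

```python
# Minimal sketch: talking to Docker Model Runner via its OpenAI-compatible API.
# The base_url and model tag are assumptions; verify them against your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed local Model Runner endpoint
    api_key="unused",  # the local runner doesn't validate API keys
)

response = client.chat.completions.create(
    model="ai/llama3.2",  # whatever tag you pulled with docker model pull
    messages=[{"role": "user", "content": "Explain microVMs in two sentences."}],
)
print(response.choices[0].message.content)
```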

MCP Gateway is where things get more architecturally interesting. The Model Context Protocol has become the standard way AI systems connect to external tools and data. Docker’s gateway runs MCP servers in separate containers. It manages configuration in one place and takes care of credential injection. Rather than having every developer configure each AI tool individually, teams can set up the gateway once. For teams using several AI tools across their IDEs and workflows, this solves a real coordination problem.
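
To see what that buys you, here’s a rough sketch using the official MCP Python SDK to attach to the gateway over stdio and list every tool it aggregates. The docker mcp gateway run invocation comes from Docker’s MCP Toolkit; treat the exact arguments as an assumption that may vary by version.

```python
# Sketch: one client connection to the MCP Gateway surfaces every tool from
# every MCP server the gateway manages. Uses the official mcp Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed invocation: the gateway speaks MCP over stdio when run this way.
    params = StdioServerParameters(command="docker", args=["mcp", "gateway", "run"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)  # aggregated across all configured servers

asyncio.run(main())
```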

Docker Sandboxes are the piece I find most compelling. When you let an AI coding agent run autonomously, it needs to install packages, execute scripts, build containers, and modify files. Giving it that freedom inside a regular container means it shares your host kernel. One bad decision from the agent, and your machine pays for it. Sandboxes solve this by running each agent in a lightweight microVM with its own kernel, its own Docker daemon, and its own network stack. The agent can do whatever it wants. Your host doesn’t care. Docker built its own VMM instead of using Firecracker because Firecracker only targets Linux, and developers work on macOS and Windows too.

There’s a security detail worth calling out: credentials never enter the sandbox. The host-side proxy intercepts outbound requests and injects API keys on the way out, so the agent works with a placeholder while the real secret stays on the host. If someone compromises the sandbox, there’s nothing sensitive inside to steal.
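
The pattern is easy to picture with a toy version. This is not Docker’s implementation, just a minimal Python sketch of the idea: the agent in the sandbox holds a placeholder token, and a proxy on the host rewrites the Authorization header as the request leaves. The upstream host and environment variable are illustrative assumptions.

```python
# Toy host-side credential-injection proxy (illustrative, not Docker's code).
# The sandboxed agent sends requests here with a placeholder bearer token;
# the proxy swaps in the real key, which never enters the sandbox.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

PLACEHOLDER = "sandbox-placeholder-token"   # the only "secret" the agent sees
REAL_KEY = os.environ["UPSTREAM_API_KEY"]   # assumed env var; stays on the host
UPSTREAM = "https://api.example.com"        # assumed upstream API

class InjectingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        headers = {k: v for k, v in self.headers.items()
                   if k.lower() not in ("host", "content-length")}
        # The actual injection step: placeholder out, real credential in.
        if headers.get("Authorization") == f"Bearer {PLACEHOLDER}":
            headers["Authorization"] = f"Bearer {REAL_KEY}"
        req = Request(UPSTREAM + self.path, data=body, headers=headers, method="POST")
        with urlopen(req) as resp:  # error handling omitted for brevity
            payload = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```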

What’s the strategy here?

Docker joined the Linux Foundation’s Agentic AI Foundation as a Gold member alongside Anthropic, Google, Microsoft, and OpenAI. That’s not a casual move. Docker is betting that the infrastructure layer for AI agents will look a lot like the infrastructure layer for applications: isolated environments, standardized interfaces, centralized management, and portable configurations.

This is the same playbook Docker ran with containers a decade ago. Back then, the problem was “works on my machine.” Docker solved it with standardized packaging. Now the problem is “my AI agent trashed my environment” or “my agent can’t access the tools it needs safely.” Docker is positioning itself as the neutral platform that solves those problems without competing with the agents themselves.

There’s a pattern worth paying attention to: Docker keeps finding ways to be the layer between developers and whatever infrastructure complexity is currently making their lives difficult. In 2013, that complexity was deployment inconsistency. In 2020, it was Kubernetes configuration. In 2026, it’s AI agent isolation and tooling orchestration.

What should teams actually do?

If your team is using AI coding agents today, and most teams are whether they’ve formalized it or not, the isolation question is the first one to answer. Running agents with full permissions on your local machine was fine when they were autocompleting function names. It’s not acceptable when they’re autonomously executing multi-step workflows.

Beyond isolation, the MCP Gateway deserves a serious look from any team running more than two AI-assisted tools. The configuration sprawl is real, and it will only get worse as the ecosystem grows.

For everything else, wait and watch. Docker Model Runner is interesting for prototyping, not production. The recently launched Sandbox Kits are promising but still early; if your team wants standardized agent environments, keep an eye on how that feature matures.

The bigger takeaway is simpler: the company that taught us how to ship software in containers is now teaching us how to ship software with AI agents. The patterns rhyme. Whether Docker executes on this pivot as well as they did on the original one remains to be seen, but the technical foundation they’re building is sound.

 

