Fri. Apr 24th, 2026

Google introduces Agentic Data Cloud


Companies are shifting from gen AI that simply answers questions to autonomous agents that perceive, reason, and act on their behalf. Attempting to scale these agents on legacy stacks exposes structural failures that can lead to fractured governance, a persistent trust gap, and broken reasoning loops, all while causing costs to spiral.

To solve this, Google has introduced the Agentic Data Cloud: an AI-native architecture that evolves the enterprise data platform from a static repository into a dynamic reasoning engine. It closes the gap between thinking and doing, allowing AI agents to act on your business data and context. While last-generation systems of intelligence were built only for human scale, the Agentic Data Cloud is a System of Action, built for agent scale.

There are three new innovation areas powering the Agentic Data Cloud:

  • A universal context engine that provides agents with trusted business context to drive higher accuracy.
  • Agentic-first practitioner experiences to evolve the role of data practitioners and developers as orchestrators of agents.
  • An AI-native, cross-cloud lakehouse that eliminates data silos by connecting your entire data estate.

This new architecture shifts the data practitioner role from writing manual pipelines to orchestrating intent-driven engineering.

Google is accelerating this transition with the Google Cloud Data Agent Kit (Preview). Rather than introducing a new interface, the company is launching a portable suite of skills, tools, environment-specific extensions, and built-in plugins that drops into developer environments. By meeting practitioners where they already build — including VS Code, Gemini CLI, Codex, and Claude Code — the Data Agent Kit turns your IDE, notebook, or agentic terminal into a native data environment. This enables your environment to autonomously orchestrate a wide range of business outcomes, automatically selecting the right frameworks (e.g., dbt, Apache Spark, or Apache Airflow) and generating production-ready code based on Google’s gold standards.
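The framework-selection behavior described above can be pictured as a routing policy over a declared task intent. The sketch below is purely illustrative — the `DataTask` fields, the `choose_framework` function, and the thresholds are invented for this example and are not the kit's actual logic:

```python
# Hypothetical sketch of intent-driven framework selection, loosely
# mirroring how an agent might route a task to dbt, Apache Spark, or
# Apache Airflow. All names and thresholds here are illustrative only.
from dataclasses import dataclass

@dataclass
class DataTask:
    kind: str            # "transform", "batch_compute", ...
    data_size_gb: float  # approximate input size
    recurring: bool      # does the task run on a schedule?

def choose_framework(task: DataTask) -> str:
    """Pick a target framework for a declared task intent."""
    if task.recurring:
        return "apache-airflow"   # recurring work becomes a scheduled DAG
    if task.kind == "transform" and task.data_size_gb < 500:
        return "dbt"              # SQL-first, warehouse-native transforms
    return "apache-spark"         # large or general-purpose compute

print(choose_framework(DataTask("transform", 10, recurring=False)))  # dbt
```

The point is only the shape of the decision: the practitioner states an outcome, and the agent maps it to a framework and emits code, rather than the practitioner hand-writing the pipeline.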

This kit also injects high-performance capabilities directly into the developer’s flow, scaling to petabytes without moving data. Featuring the same skills and tools that power Google’s own out-of-the-box agents, the kit includes:

  • Data Engineering Agent (GA): Builds complex pipeline transformations from scratch and enforces governance rules to keep bad data out of production.
  • Data Science Agent (GA): Automates the model lifecycle — from wrangling to training — scaling across BigQuery DataFrames and Serverless Apache Spark.
  • Database Observability Agent (Preview): Acts as a 24/7 guardian for your infrastructure, diagnosing root causes and executing database remediations.

To help ensure the smooth execution of agents, Google Cloud has fully embraced Model Context Protocol (MCP), which provides a secure, universal interface that allows any agent to discover and use your data assets across its core engines, including BigQuery, Spanner (Preview), AlloyDB, Cloud SQL (GA), and Looker MCP (Preview). MCP for Google Cloud uses Google Cloud’s security stack, governing agent interactions based on your existing IAM policies, VPC Service Controls, and data residency requirements.
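For orientation, MCP itself is a JSON-RPC 2.0 protocol: an agent discovers available tools with a `tools/list` request and invokes one with `tools/call`. The sketch below shows that general message shape from the MCP specification; the tool name and arguments are invented placeholders, not actual Google Cloud MCP tool names:

```python
import json

# Shape of an MCP "tools/call" request (JSON-RPC 2.0), per the MCP spec.
# The tool name and its arguments below are hypothetical placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",               # invented tool name
        "arguments": {"query": "SELECT 1"},  # invented arguments
    },
}

print(json.dumps(request, indent=2))
```

Because every engine speaks this same request shape, an agent built against one MCP server can discover and call tools on another without engine-specific glue code — which is what makes the interface "universal."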

To learn more, read the blog announcement.

 

 

By uttu
