
AI has moved from experimentation to executive mandate. Across industries, competitive pressure and rising user expectations are pushing leaders to embed AI into core workflows, increase automation, improve efficiency and accelerate delivery. Technology leaders and practitioners are finding new ways to meet those demands. Enter: agentic AI systems that can reason, plan and act with autonomy.
However, these leaders also recognize that autonomy introduces new attack surfaces, operational risks and governance challenges. A certain level of caution is healthy, especially as Gartner predicts that, through 2029, 50% of successful attacks against AI agents will exploit access control issues via direct or indirect prompt injection.
Which leads to a fork in the road: Do organizations build walls around agentic AI or open the doors to broader collaboration?
As with earlier revolutionary technologies like Linux and Kubernetes, building the best, most secure AI agents requires community-driven innovation. A breadth of contributors across hyperscalers, startups, financial services, healthcare, government and beyond brings broader, more diverse peer review and faster vulnerability discovery. Open collaboration also distributes oversight across global engineering communities, rather than concentrating responsibility inside a single vendor.
As agents become embedded in critical systems, this collaborative model becomes essential. There is no doubt that AI agents will be powerful tools; the real question is how to make sure organizations can trust that technology.
Scrutiny over secrecy
Autonomous systems tend to amplify small flaws. Little problems can turn into big problems when an agent retrieves incomplete context, misinterprets permissions or interacts with unstable infrastructure. If the design, retrieval pipelines, and operational logic behind an agent are opaque, determining the source of those failures becomes significantly slower and more difficult.
When building agentic systems, always lead with the assumption that vulnerabilities will surface, data may not be agent-ready, and real-world implementation will differ from the theoretical. No technology is perfect, and there will be gaps. In a closed environment, however, visibility and remediation tend to lag, constrained by limited internal resources and review.
Open development removes some of these barriers. More contributors enable additional testing across environments, increased peer review of architectural decisions, and faster discovery of vulnerabilities. Organizations often assume that transparency increases exposure, but experience shows that widely reviewed systems surface issues sooner – before they become systemic. In open ecosystems, issues can be documented publicly, investigated collaboratively, and mitigated by contributors with varied domain expertise. That collective responsiveness strengthens resilience and reduces long-term operational risk.
Trust starts with the data layer
The conversation around agentic AI often centers on model capabilities like reasoning, planning, orchestration and tool use. But in production systems, trust depends more on the data and retrieval layer than the model itself.
Agents act on context, and if the search, analytics, and observability systems providing that context lack accuracy, recency, or traceability, agents can produce incorrect outputs, take incorrect actions, or create brittle workflows. Often, failures attributed to AI are actually rooted in gaps in retrieval quality, permissions visibility or system telemetry.
These challenges drive engineering teams to integrate agentic workflows directly into production search, observability, and analytics platforms. Logs, metrics, traces, structured data, and semantic search pipelines are increasingly functioning as a unified operational foundation for AI agents.
Modern agentic AI stacks increasingly treat retrieval, analytics, and observability as core control layers rather than supporting components. By combining semantic and keyword retrieval, leveraging a proven, integrated vector database, enforcing fine-grained access controls, and instrumenting agent workflows with logs, traces, and decision telemetry, teams can see not only what an agent produced, but why it produced it. This architectural visibility allows engineers to validate grounding data, detect permission drift, reproduce failures, and continuously refine orchestration logic as workloads scale. In practice, trustworthy agents emerge not from model sophistication alone, but from infrastructure that makes every context source, query path, and automated action inspectable and accountable.
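To make the pattern above concrete, here is a minimal, illustrative sketch of a retrieval step that blends keyword and semantic scoring, enforces role-based access control before ranking, and records a decision log so every result an agent sees is traceable. All names (`Document`, `retrieve`, the toy embeddings and roles) are hypothetical simplifications, not the API of any particular platform; a production system would use a real search engine and vector database.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    embedding: list      # toy embedding vector for illustration
    allowed_roles: set   # roles permitted to retrieve this document

def keyword_score(query: str, doc: Document) -> float:
    # Fraction of query terms that appear in the document text.
    terms = set(query.lower().split())
    words = set(doc.text.lower().split())
    return len(terms & words) / max(len(terms), 1)

def semantic_score(query_emb: list, doc_emb: list) -> float:
    # Cosine similarity between toy embeddings.
    dot = sum(a * b for a, b in zip(query_emb, doc_emb))
    norm = (sum(a * a for a in query_emb) ** 0.5) * \
           (sum(b * b for b in doc_emb) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(query, query_emb, docs, role, top_k=3, log=None):
    """Hybrid retrieval with access control enforced before scoring,
    plus a decision log capturing why each document was kept or denied."""
    results = []
    for doc in docs:
        if role not in doc.allowed_roles:
            # Filter first: a denied document never influences
            # the agent's context, and the denial is still auditable.
            if log is not None:
                log.append(("denied", doc.doc_id, role))
            continue
        score = 0.5 * keyword_score(query, doc) + \
                0.5 * semantic_score(query_emb, doc.embedding)
        results.append((score, doc))
        if log is not None:
            log.append(("scored", doc.doc_id, round(score, 3)))
    results.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in results[:top_k]]
```

The key design choice this sketch illustrates: permissions are applied before ranking, and the same log that powers debugging also answers the question "why did the agent see this document?"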
It’s clear that trustworthy agentic AI won’t come from hiding behind proprietary walls. It will come from building systems that are transparent, auditable and continuously improved by an expert community. Community-driven innovation ensures the infrastructure agents depend on, including retrieval pipelines, observability systems and more, can be tested widely and improved collaboratively, delivering truly trustworthy AI agents.
