Fri. Apr 10th, 2026

AI-assisted Development Multiplies Human Error: What’s Your AI Governance and Risk Management Strategy?

iStock 1495008087


Agentic artificial intelligence is becoming ingrained in enterprise operations at lightning speed. With the promise of delivering unprecedented productivity (and pushed by CEOs and CIOs who see AI as the key to staying competitive), AI agents have become "co-pilots" for practically every developer. As a result, AI-generated code is turning up everywhere.

But the hidden risks of the current use of agentic AI are piling up almost as quickly as the code. AI agents do an excellent job of predicting the next line of code, but they don't grasp the security implications of the code being created. In many cases, acting as a trusted co-pilot in the name of productivity, they amplify human error by suggesting insecure patterns that developers working at breakneck speed accept without a second thought. The ability of AI agents to work autonomously only accelerates the problem.
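As a concrete illustration (my example, not one from the article), the classic insecure pattern an assistant may autocomplete is building a SQL query by string interpolation. A hurried reviewer accepts it because it works; the parameterized form is the safe equivalent.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Pattern an assistant often suggests: interpolating user input
    # directly into SQL, which is vulnerable to injection.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_secure(conn, username):
    # The safe equivalent: a parameterized query, where the driver
    # handles escaping instead of the programmer.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The injection payload matches every row in the insecure version...
rows_insecure = find_user_insecure(conn, "x' OR '1'='1")
# ...but matches nothing in the parameterized one.
rows_secure = find_user_secure(conn, "x' OR '1'='1")
```

Both functions look almost identical in a diff, which is exactly why this class of suggestion slips past reviewers moving at speed.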

Adoption is moving even faster in operational technology such as home thermostats, cameras, and travel-booking assistants, Morey Haber, Chief Security Advisor at BeyondTrust, said recently. "In the next year, nearly every technology we operate will be connected to agentic AI," he said.

According to a recent report from Gartner, the rampant use of shadow AI and rogue automation is further fueling the proliferation of AI vulnerabilities. Gartner notes that 32% of IT workers using generative AI tools at work say they keep them hidden from cybersecurity teams. Combined with low-code/no-code platforms and vibe-coding practices, AI copilots are greatly expanding the enterprise attack surface.

AI Vulnerabilities Proliferate

As if high-velocity development practices weren't enough, agentic AI use is also being pushed from the top, where executives seem to have strong faith in what AI agents can do; Gartner finds that 79% of IT leaders expect significant benefits. They readily convert custom-built AI chatbots into AI agents by linking them with APIs and tools. This increases risk because only 14% of IT leaders say they are confident that their data and content are ready for human and AI interactions. CISOs are often powerless to deter these initiatives.

Another survey, by PagerDuty, found that 81% of executives are willing to let autonomous systems take action during a security breach, system outage, or other crisis. That finding underscores a disconnect between the hopes for agentic AI and the reality: 96% of executives say they're confident they can detect and mitigate AI failures before they impact operations, even though 84% have already experienced AI-related outages. Meanwhile, research by Capgemini found that only 27% of organizations now say they trust fully autonomous agents, down from 43% a year ago.

The reality is that AI doesn't create new vulnerabilities; it replicates the bad habits found in the vast datasets it was trained on. Essentially, it amplifies human error. If organizations don't change their approach to AI development, they risk flooding their repositories with AI-generated code that is fundamentally insecure and continues to feed the expansion of the enterprise attack surface.

How CISOs Can Stem the Tide

CISOs aren’t completely helpless in bringing autonomous AI use under control. But they must act quickly to implement a layered oversight program that reduces vulnerabilities in line with their risk tolerances.

Prioritize Developer Risk Management: AI agents may be introducing risks into the environment, but the risk begins with human developers. A comprehensive developer risk management program that addresses relevant learning pathways, AI guardrails, and tech stack observability and traceability is necessary to prepare developers for an expert security review of their work. Developer education and upskilling in security best practices, including the use of benchmarks to track progress in acquiring new skills, will be critical to ensuring the safety of both developer- and AI-generated code. It's a core element of developers ultimately reaping the benefits of AI coding tools and agents.

Inventory Shadow AI: Gaining control over AI agents begins with knowing what you have and where it is. Deep observability into AI-assisted development is essential, enabling you to identify which developers use which large language models (LLMs) and on which codebases.

Gaining deep visibility into AI agents also allows organizations to prioritize the associated risks, depending on the agent type (embedded, standalone) and the risk level of the projects they are working on. A comprehensive inventory is also important for implementing effective access controls, which are necessary for defense. Gartner predicts that by 2029, more than half of successful cybersecurity attacks against AI agents will exploit access control issues through direct or indirect prompt injection. 
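The inventory-and-prioritize step above can be sketched as a simple data model. The record fields and the scoring rule here are illustrative assumptions, not a standard schema: the only ideas taken from the article are the embedded/standalone distinction and risk-ranking agents by the projects they touch.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    owner: str          # developer or team responsible
    model: str          # which LLM backs the agent (alias, not a real product)
    agent_type: str     # "embedded" or "standalone", per the article
    project_risk: int   # 1 (low) .. 3 (high), assigned by the org

def priority(agent: AgentRecord) -> int:
    # Toy prioritization: standalone agents act with more autonomy,
    # so weight them higher than embedded ones.
    type_weight = 2 if agent.agent_type == "standalone" else 1
    return agent.project_risk * type_weight

inventory = [
    AgentRecord("team-payments", "llm-internal", "standalone", 3),
    AgentRecord("team-docs", "llm-internal", "embedded", 1),
]

# Review the riskiest agents first.
review_order = sorted(inventory, key=priority, reverse=True)
```

A real program would populate the inventory from network and endpoint telemetry rather than by hand; the point is that once agents are enumerated and typed, access-control reviews can be ordered by risk instead of discovery date.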

Focus on Governance: By automating policy enforcement, you can ensure that AI-assisted developers meet secure development standards before their work is accepted into critical repositories.
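A hedged sketch of what automated policy enforcement could look like: a pre-merge gate that refuses AI-assisted changes unless they carry the evidence the organization's standard requires. The field names and required checks are my assumptions for illustration, not part of any named tool.

```python
# Checks an AI-assisted change must pass before merge (illustrative).
REQUIRED_CHECKS = {"sast_scan_passed", "human_review_approved"}

def merge_allowed(change: dict) -> bool:
    # Changes not flagged as AI-assisted follow the normal process;
    # AI-assisted changes must show every required check passed.
    if not change.get("ai_assisted", False):
        return True
    return REQUIRED_CHECKS.issubset(change.get("checks_passed", set()))

ok = merge_allowed({
    "ai_assisted": True,
    "checks_passed": {"sast_scan_passed", "human_review_approved"},
})
blocked = merge_allowed({
    "ai_assisted": True,
    "checks_passed": {"sast_scan_passed"},  # missing human review
})
```

In practice this logic would live in a CI pipeline or a repository branch-protection rule; the design choice that matters is that the gate runs automatically, so the standard holds even when reviewers are rushed.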

A Secure Foundation Is the Key to Success

AI-assisted development is here to stay because the benefits to productivity are too great to ignore. But the unfettered use of AI agents has multiplied vulnerabilities in code, leading to much greater risk that many enterprise security programs are not yet adequately prepared to defend against. 

A thorough, modernized program based on visibility, observability, governance, and developer upskilling can reverse the trend and move organizations toward the successful use of automated, AI-assisted development. Gartner estimates that CIOs and CISOs who work with business leaders to implement structured security programs will see the best results. Those partnerships could, according to Gartner, lead to a 50% reduction in critical cybersecurity incidents by 2028, even as the number of high-level AI initiatives grows by 20% over the same period.

By uttu
