
A recent study by GitKraken, in partnership with GitClear, found that developers are seeking productivity gains from AI coding assistants but are struggling to quantify those gains, or even to evaluate whether AI output improves on what developers can produce themselves.
The report also found that the skill level of developers can create differences in AI code output, noting that variations in output support the notion that AI amplifies existing skills, but does not equalize them.
The findings are based on a three-month analysis of 2,172 developer-weeks of activity across multiple AI coding tools and real-world development environments.
Among the findings: AI users generate 4x to 14x more activity than low- or non-users; test code generation increases by 4% and provides greater coverage; and review efforts improve and scale with AI.
In the study’s analysis, GitKraken wrote, “Taken together, the results suggest that AI is not magically creating the mythical ‘10x engineers’—but it is making engineers who adopt and adapt much faster, more productive, and more iterative.”
Another significant finding is an increase in code churn, with output being revised and replaced more often.
Finally, the study saw changes in how organizations evaluate productivity: measuring quality alongside speed as code is created, tracking churn and rework as first-class signals, and moving to connect code output to business outcomes such as delivery time and reliability.
The goal, GitKraken concluded, is to help teams understand not just how fast they’re moving, but whether they’re moving in the right direction.
To learn more, join GitKraken Vice President of Developer Research Jeremy Castile as he discusses how developers hold the key to AI success.
