Sixty-three percent of developers now use AI coding tools weekly. Stack Overflow publishes it. GitHub publishes it. JetBrains publishes it. It has become the headline figure for AI’s penetration into software development, and by extension the number most engineering organisations use to judge whether their adoption programme is working.
Here is what that number actually measures: how many of your developers have the tool installed and are committing code with it. Not whether the code is better. Not whether velocity is meaningfully higher. Not whether the investment is returning anything.
Adoption rate became the default metric because it is easy to produce and it goes up. Tool vendors report it because it reflects well on them. IT departments track licences because that is what procurement requires. But “our developers are using AI” and “our code is improving because of AI” are different claims, and the first one tells you almost nothing about the second.
What it misses
An adoption rate of 70% is consistent with any of the following situations.
Your developers are generating code faster and the output is lasting well. Churn ratio is stable, velocity is up, the codebase is growing in a healthy direction. The tools are paying for themselves.
Your developers are generating code faster and a significant portion is being rewritten within weeks. Gross additions are up, but net lines added are flat, and churn has doubled. The team is busier. The codebase is not growing meaningfully.
Your developers are using AI for boilerplate and low-stakes code, and writing the complex parts by hand. The tool is saving real time on certain tasks but is not changing the architecture or the technical debt trajectory.
All three situations look identical if you are only reading the adoption rate.
The research on this gap
GitClear’s longitudinal analysis of AI-assisted commits found that code written with AI assistance churns at roughly twice the rate of human-authored code. Churn here means lines committed and then rewritten or deleted within a short window — code that did not survive contact with the codebase. Separately, duplicate code blocks have increased four-fold in repositories with high AI adoption.
Neither finding appears in an adoption rate.
A team running 80% AI adoption with a churn ratio of 3.2 is in worse shape than a team at 30% adoption and a churn ratio of 0.9. The first team is producing more code. It is not lasting.
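The piece does not spell out how that ratio is computed, so treat the following as one plausible reading rather than the definition: churned lines per surviving line, measured some weeks after the commit. The sketch below is a crude probe in that spirit. For a single commit in a local clone, it checks how many of the lines the commit added still exist verbatim in the same files one window later; the three-week window, the plain-text match, and the churned-to-surviving ratio are all assumptions, not GitClear's methodology.

    import subprocess
    from datetime import datetime, timedelta, timezone

    # Crude churn probe for one commit (an illustration, not GitClear's method):
    # which of the lines this commit added still exist verbatim in the same
    # files one window later? Window length and textual match are assumptions.
    WINDOW = timedelta(weeks=3)

    def git(*args: str) -> str:
        return subprocess.run(
            ["git", *args], capture_output=True, text=True, check=True
        ).stdout

    def added_lines(commit: str) -> dict[str, list[str]]:
        """Map each file the commit touched to the lines it added."""
        diff = git("show", commit, "--unified=0", "--pretty=format:")
        files: dict[str, list[str]] = {}
        current = None
        for line in diff.splitlines():
            if line.startswith("+++ b/"):
                current = line[len("+++ b/"):]
                files[current] = []
            elif current and line.startswith("+") and not line.startswith("+++"):
                files[current].append(line[1:])
        return files

    def churn_ratio(commit: str) -> float:
        """Churned lines per surviving line, one WINDOW after the commit."""
        committed_at = datetime.fromtimestamp(
            int(git("show", "-s", "--format=%at", commit)), tz=timezone.utc
        )
        later = git(
            "rev-list", "-1", f"--before={(committed_at + WINDOW).isoformat()}", "HEAD"
        ).strip()
        survived = churned = 0
        for path, lines in added_lines(commit).items():
            try:
                snapshot = set(git("show", f"{later}:{path}").splitlines())
            except subprocess.CalledProcessError:
                snapshot = set()  # the whole file is gone by then
            for line in lines:
                if line in snapshot:
                    survived += 1
                else:
                    churned += 1
        return churned / survived if survived else float("inf")

Averaged across a quarter's worth of commits, and split by whether a commit was AI-assisted if you can tag that, the gap described above becomes concrete: under this assumed definition, a ratio around 3 means roughly three added lines were rewritten or deleted for every one that stuck, while a ratio below 1 means most new code survived.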
The teams that are getting the most value from AI tools — the ones that show 40–60% velocity increases alongside stable or improving quality signals — are not the teams with the highest adoption rates. They are the teams that know which developers and which task types are a good fit for AI generation, and have adjusted their workflows accordingly. That knowledge requires measurement.
The question worth asking instead
Adoption rate answers “are we using AI?” The question worth answering is “what is AI doing to our output?”
Those require different instruments.
The first can be answered with a survey. The second requires reading your git history — the actual record of what shipped, what lasted, and what got rewritten. The data is already there. Most teams have not read it with this question in mind.
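If you want somewhere to start, here is a minimal sketch, assuming only a local clone and git on the path. It totals gross additions, deletions, and net growth over a recent period from git log --numstat. The three-month window is arbitrary, and the deleted-to-added ratio is only a rough repository-level churn proxy, not the per-line measure discussed above, but it answers the basic question of how much of what was added actually stuck.

    import subprocess

    def growth_summary(since: str = "3 months ago") -> dict[str, int]:
        """Gross additions, deletions, and net growth since a given date."""
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = deleted = 0
        for line in out.splitlines():
            parts = line.split("\t")
            # numstat lines look like "12\t4\tpath"; binary files show "-"
            if len(parts) == 3 and parts[0].isdigit():
                added += int(parts[0])
                deleted += int(parts[1])
        return {"gross_added": added, "deleted": deleted, "net_added": added - deleted}

    if __name__ == "__main__":
        summary = growth_summary()
        ratio = summary["deleted"] / max(summary["gross_added"], 1)
        print(summary, f"deleted-to-added ratio: {ratio:.2f}")

Run from the repository root, it takes seconds. A deleted-to-added ratio creeping upward while gross additions climb is the second scenario from earlier, visible in a dozen lines of scripting rather than a licence count.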