GitClear published longitudinal research on AI’s impact on code quality that’s worth reading carefully. Not because it’s alarming, but because it’s one of the few attempts to measure what’s actually happening in codebases rather than what developers report about their experiences.
The headline figures: churn rates for AI-assisted commits run roughly twice those of human-authored commits, and duplicate code blocks have increased roughly 4× in AI-assisted work. Net lines added as a proportion of gross lines have declined, suggesting more code is being written and rewritten without accumulating into net new capability.
What the methodology measures
GitClear’s research uses git history as its primary data source. This is the right call: git history records what actually happened, not what developers believed would happen or what they reported in retrospect.
The core metrics:
Churn rate measures lines of code that are rewritten or deleted within a short window after being written. High churn suggests code that didn’t hold up — it was written, proved insufficient, and had to be replaced.
Duplicate code measures blocks of identical or near-identical code appearing in multiple places in the codebase. Higher duplication increases maintenance burden and the surface area for inconsistent behaviour.
Net versus gross lines compares how much code is written with how much survives. A widening gap between gross and net suggests more rewriting activity without a corresponding accumulation of capability.
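The metrics above can be approximated directly from git data. Here is a minimal sketch, assuming `git log --numstat`-style input (the file names and counts are invented for illustration). Note that real churn measurement requires line-level history over a time window, which this deliberately omits:

```python
import re
from collections import Counter

# Illustrative `git log --numstat` output: added<TAB>deleted<TAB>path per file.
SAMPLE_NUMSTAT = """\
120\t30\tsrc/app.py
45\t0\tsrc/util.py
10\t95\tsrc/app.py
"""

def net_vs_gross(numstat_text):
    """Return gross lines added, net lines surviving, and the net/gross ratio."""
    added = deleted = 0
    for line in numstat_text.splitlines():
        m = re.match(r"(\d+)\t(\d+)\t", line)
        if m:
            added += int(m.group(1))
            deleted += int(m.group(2))
    gross, net = added, added - deleted
    return gross, net, (net / gross if gross else 0.0)

def duplicate_blocks(source, window=4):
    """Count repeated runs of `window` consecutive (stripped) lines,
    a crude stand-in for near-duplicate block detection."""
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    windows = Counter(tuple(lines[i:i + window])
                      for i in range(len(lines) - window + 1))
    return sum(count - 1 for count in windows.values() if count > 1)
```

In practice, something like `git log --since="30 days ago" --numstat --format=` would produce input in this shape; production tools use far more careful duplicate detection than a sliding line-window.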
The optimistic reading
The research is sometimes cited as evidence that AI coding tools are harmful. That’s not what it shows.
What it shows is that AI assistance changes the patterns of code production in ways that have quality implications if teams aren’t watching for them. That’s a different claim. It implies that outcomes depend on how teams use the tools and whether they have visibility into the results.
A 2× churn rate increase isn’t an inevitable consequence of AI adoption. It’s the average across the dataset. Some teams will be well above it. Some will be below it. The difference between those teams almost certainly comes down to how they review AI-assisted code, how developers use the tools, and whether anyone is watching the quality signals.
What the research doesn’t tell you
GitClear’s data covers patterns in aggregate. It doesn’t tell you where your team sits in that distribution. It doesn’t tell you whether your AI-assisted commits are churning faster than your human-authored ones, which developers’ AI assistance is producing the highest-quality output, or whether the patterns have changed since you adjusted how the team uses the tools.
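One way to locate your own team in that distribution, as a hedged sketch: label commits as AI-assisted or not (the labeling mechanism, such as commit trailers or tool telemetry, is an assumption here, not something the research prescribes), then compare rewrite rates per cohort. All figures below are invented for illustration:

```python
def churn_rate(commits):
    """commits: (lines_added, lines_rewritten_within_window) pairs.

    'Rewritten within window' means lines replaced shortly after being
    written, e.g. within two weeks; computing that for real requires
    line-level history, which this sketch assumes was done upstream.
    """
    added = sum(a for a, _ in commits)
    rewritten = sum(r for _, r in commits)
    return rewritten / added if added else 0.0

# Hypothetical per-commit figures for each cohort.
ai_assisted = [(200, 60), (150, 50)]
human_authored = [(180, 20), (220, 25)]

ratio = churn_rate(ai_assisted) / churn_rate(human_authored)
```

A ratio near 1.0 would suggest AI-assisted code is holding up as well as human-authored code on your team; a ratio approaching the industry 2× figure is the signal worth investigating.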
Industry data establishes that the risk is real and measurable. What it can’t do is substitute for measurement of your own codebase. Teams that treat the GitClear findings as a warning and act on them by measuring their own data are in a different position from teams that either dismiss the findings or accept them as a fixed outcome.
Why this matters for how you talk about AI with leadership
Engineering managers often find themselves explaining AI adoption to leadership with limited data. The GitClear research is useful context, but it’s most valuable when it can be compared against your own numbers.
“Industry research shows 2× churn rates for AI-assisted code; our team is running at 1.4×, which we’re monitoring” is a more defensible position than either “AI is working great” or “AI is causing quality problems.” It’s also a more honest one.
The data exists to have that conversation. Most teams just haven’t connected it yet.
Scryable surfaces your team’s churn rates, duplication patterns, and pre/post AI comparisons directly from your git history. Get early access.