There is a specific pattern that appears consistently in engineering teams after significant AI adoption. Gross additions go up: sometimes 30%, sometimes 50%, sometimes more. The commit velocity chart looks healthy. The dashboards show the team producing more than it was six months ago.

Net lines added does not move.

This is not a contradiction, and it is not an error in the data. It is telling you something specific about what the productivity increase is actually made of.

Gross and net are different numbers

Gross additions counts everything written: every line committed to the repository, regardless of whether it survives past the next sprint. Net lines added is what remains: gross additions minus the lines that were overwritten or deleted in the same period.
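Both numbers fall straight out of the git history. Here is a minimal sketch that sums them from `git log --numstat` over a date range; the function name and dates are placeholders, and merge commits and binary files are glossed over rather than handled rigorously:

```python
import subprocess

def gross_and_net(repo_path: str, since: str, until: str) -> tuple[int, int, int]:
    """Sum added and deleted lines over a date range; net = added - deleted."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines look like "12\t4\tpath"; binary files show "-" and are skipped
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted, added - deleted

gross, deletions, net = gross_and_net(".", "2024-01-01", "2024-03-31")
print(f"gross: {gross}  deletions: {deletions}  net: {net}")
```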

In a team producing durable code, the gap between gross and net is relatively small. Code gets written. It mostly stays written. The codebase grows roughly in proportion to the commit volume.

After AI adoption, the most common shift in the data is this: gross additions increase significantly, deletions increase at a faster rate, and net lines added stays roughly where it was before the tools arrived. More code is flowing through the repository. Less of it is accumulating.
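To make the arithmetic concrete, with invented numbers:

```python
# Invented quarter-over-quarter figures, purely illustrative.
gross_before, deletions_before = 10_000, 4_000   # pre-adoption
gross_after,  deletions_after  = 14_000, 8_000   # gross +40%, deletions +100%

print(gross_before - deletions_before)  # net before: 6000
print(gross_after - deletions_after)    # net after:  6000, unchanged
```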

Why it feels like velocity

From the individual developer’s perspective, the experience is real. Code is coming together faster. The friction of writing boilerplate is lower. The first pass of a feature is quicker.

The metrics that capture that experience are gross additions per active day and commits per sprint. Those numbers go up. The developer is not wrong to feel more productive: the short loop of writing code is genuinely faster.

What changes more slowly, or not at all, is the amount of code that survives. More gets written. More gets rewritten. The codebase at the end of the quarter is not proportionally larger than it was at the start.

Both observations can be true simultaneously. The important question is which one the team is managing to.

When the pattern matters

Not all churn is a problem. A team in a major refactor will show high deletions by design. A team exploring a new architecture will prototype and discard deliberately. Those are healthy patterns, and in the raw data they look just like the problematic one.

The pattern worth investigating is sustained gross/net divergence during normal feature development — where the team is not in an unusual phase, the codebase is not under active architectural revision, but the gap between what is written and what is lasting has opened up and is staying open.

The simplest diagnostic: compare net lines added per active day now against the same figure from the pre-AI period. If gross additions are up 40% and net lines per day is unchanged, the AI contribution to durable output is close to zero, regardless of what the velocity dashboard shows.
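Building on the `gross_and_net` sketch above, the diagnostic might look like the following. "Active day" is approximated here as a calendar day with at least one commit, and both windows are placeholders:

```python
import subprocess

def active_days(repo_path: str, since: str, until: str) -> int:
    """Count distinct calendar days with at least one commit in the range."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ad", "--date=short",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(set(out.split()))

def net_per_active_day(repo_path: str, since: str, until: str) -> float:
    _, _, net = gross_and_net(repo_path, since, until)  # from the sketch above
    days = active_days(repo_path, since, until)
    return net / days if days else 0.0

pre_ai = net_per_active_day(".", "2023-01-01", "2023-06-30")
post_ai = net_per_active_day(".", "2024-01-01", "2024-06-30")
print(f"net/active day, pre-AI: {pre_ai:.1f}  post-adoption: {post_ai:.1f}")
```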

Why this matters for how you’re evaluating the tools

Most evaluation of AI coding tools is happening against velocity — commits per sprint, story points, features shipped. Those are gross metrics. They measure output before it gets tested against the codebase.

Net lines added is a different kind of number. It measures what stayed. A team that ships 40% more code but retains the same amount of durable output has not experienced a productivity gain; it has experienced a churn increase. The extra work is real. The extra value is not.

This is not a reason to conclude the tools are not working. Some teams show genuine net growth alongside AI adoption — higher gross additions, somewhat higher churn, but a meaningful increase in durable output as well. That is the scenario where the productivity claim holds. The gross/net relationship tells you which situation you are in.

How to locate the pattern if it is there

If you find the divergence, the next step is to locate it — because it is rarely uniform across the team.

AI-assisted code in exploratory or scaffolding work will naturally show higher churn than AI-assisted code in well-defined, bounded tasks. A developer using AI heavily for boilerplate generation in one area and carefully for logic in another may show divergent churn rates across those contexts. The team average obscures that distinction.

The granularity is available in the git history. Commit-level data, broken down by author, by repository, and by branch, gives you the resolution to see where the pattern is concentrated. The team-level number tells you the pattern exists. The breakdown tells you where to focus; a sketch of one way to pull it follows.
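As a rough sketch in the same vein, the per-author cut can be read from `git log --numstat` with an author marker interleaved between commits; the `@` prefix and the function name are just this example’s conventions:

```python
import subprocess
from collections import defaultdict

def churn_by_author(repo_path: str, since: str, until: str) -> dict:
    """Per-author gross additions, deletions, and net over a date range."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=@%an",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])  # author -> [added, deleted]
    author = None
    for line in out.splitlines():
        if line.startswith("@"):          # commit header carrying the author name
            author = line[1:]
        else:
            parts = line.split("\t")
            if author and len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                totals[author][0] += int(parts[0])
                totals[author][1] += int(parts[1])
    return {a: {"gross": g, "deletions": d, "net": g - d}
            for a, (g, d) in totals.items()}
```

Sorting that result by deletions relative to gross surfaces where the churn is concentrated; the same loop extends to per-repository or per-branch views by pointing it at a different checkout or adding branch arguments to the git call.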