Most conversations about AI coding tools in engineering teams happen at two extremes. Either the manager is advocating adoption and developers are sceptical, or developers are enthusiastic and the manager is worried about code quality. Both conversations stall in the same place: nobody has data.
The default is anecdote. One developer had a good experience with Copilot on a difficult refactor. Another thinks it’s introducing more problems than it solves. The manager has read the industry reports about velocity gains and the other reports about quality degradation and isn’t sure which applies to their team. The conversation ends with a vague agreement to keep using the tools and see how it goes.
This isn’t a failure of management or a failure of the team. It’s a structural problem: the conversation is trying to resolve a factual question with opinions.
What the conversation needs to be about
The question worth asking isn’t whether AI coding tools are good or bad. It’s whether they’re working for this team, on this codebase, in the way the team is currently using them.
That question has a measurable answer. Churn rates for AI-assisted commits compared to human-authored ones. Duplication rates. Net lines shipped relative to gross lines written. Velocity before and after AI adoption, with quality signals alongside it. These data points turn a vague feeling into a specific observation.
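All of these signals can be pulled from the commit history itself. As a minimal sketch, and not Scryable’s actual implementation: the per-commit added/deleted line counts from `git log --numstat` are enough to compute gross and net lines. The `@`-prefixed hash marker is just a parsing convenience chosen here, not a git convention.

```python
import subprocess

def parse_numstat(text):
    """Parse `git log --numstat --pretty=format:@%H` output into
    {commit_hash: [lines_added, lines_deleted]}."""
    stats, current = {}, None
    for line in text.splitlines():
        if line.startswith("@"):          # our chosen commit marker
            current = line[1:]
            stats[current] = [0, 0]
        elif line.strip():
            added, deleted, _path = line.split("\t", 2)
            if added != "-":              # "-" marks binary files
                stats[current][0] += int(added)
                stats[current][1] += int(deleted)
    return stats

def net_vs_gross(stats):
    """Gross = all lines touched; net = lines added minus lines deleted.
    A large gap between the two is one signal of rework."""
    gross = sum(a + d for a, d in stats.values())
    net = sum(a - d for a, d in stats.values())
    return net, gross

def repo_stats(rev_range="HEAD"):
    """Convenience wrapper: run git against the current repository."""
    out = subprocess.run(
        ["git", "log", "--numstat", "--pretty=format:@%H", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_numstat(out)
```

A real tool would also need to attribute commits to AI-assisted versus human-authored work, which git alone doesn’t record; that attribution is the hard part, and the sketch above deliberately leaves it out.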
Why vague conversations don’t resolve
When a developer says “I think the AI is helping me move faster,” and another says “I think it’s introducing more bugs,” both can be right. Both can also be wrong. The two claims don’t actually contradict each other, because they’re about different things: the first is about subjective experience, the second about downstream quality.
A conversation that stays at the level of experience can never settle the quality question. It can surface useful anecdotes about where AI assistance feels productive and where it doesn’t, and those anecdotes are worth having — but they’re not a substitute for knowing what the commit history actually shows.
What changes when you have data
Conversations backed by data have a different character. A manager who can say “your AI-assisted commits are churning at 1.8× the rate of your human-authored commits, which is close to the industry average, and your duplicate code rate has stayed flat” is having a specific conversation. The developer can engage with that specifically: they can explain what they’ve been working on, whether the high-churn commits were on exploratory work or production code, and what they’re doing differently when AI assistance produces better results.
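A figure like “1.8×” is nothing more exotic than a ratio of average churn between two groups of commits. A toy sketch, where the per-commit churn fractions and the AI/human split are hypothetical inputs from whatever tagging scheme a team uses:

```python
from statistics import mean

def churn_multiple(ai_churn, human_churn):
    """Ratio of mean churn between AI-assisted and human-authored commits.
    Each input is a list of per-commit churn fractions: the share of a
    commit's lines rewritten or deleted within some window. The window
    length and the AI/human grouping are assumptions, not a standard."""
    return mean(ai_churn) / mean(human_churn)

# Hypothetical numbers, chosen to illustrate a 1.8x multiple:
churn_multiple([0.40, 0.50, 0.45], [0.25, 0.25])  # ≈ 1.8
```

The point of reducing it to one number isn’t precision; it’s that a single, agreed-upon ratio gives the manager and the developer the same starting fact to discuss.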
This kind of conversation is useful for both people. The manager gets insight they can’t get from a standup. The developer gets to understand their own patterns from the outside, rather than having them inferred by someone else.
The framing matters as much as the data
Data about developer output is sensitive. The same numbers that help a manager have a more specific conversation can make a developer feel scrutinised if the framing is wrong.
The framing that works: this data is for understanding the team’s patterns and figuring out how to get more value from the tools. The framing that doesn’t: this data is a performance metric and it reflects on you as a developer.
Scryable’s view is that data should feel like a mirror rather than a report card. A developer looking at their own churn rates and commit patterns should see something useful about their own work, not a verdict from a system designed to evaluate them. Conversations built on the mirror framing tend to be productive. Conversations built on the report-card framing are where the surveillance concern comes from, and it’s a legitimate concern.
Scryable surfaces churn rates, duplication patterns, and before/after AI comparisons from your git history. Get early access.