“Engineering intelligence” as a category label has been attached to enough different products that it risks meaning nothing specific. Tools that track developer activity, tools that surface DORA metrics, tools that overlay project management data onto engineering output, and tools that run static code analysis have all been described, at various points, as engineering intelligence platforms.
This isn’t a terminology dispute. The definition of engineering intelligence determines what you measure, which determines what you optimise for, which determines what kind of engineering team you end up with.
What the term shouldn’t mean
Engineering intelligence built around activity metrics produces a picture of how busy a team is. Commits per day, PRs per sprint, story points closed, lines added: this data is available, it’s easy to present on a dashboard, and it tells you almost nothing about whether the engineering team is building something good.
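To make the point concrete, here is roughly how little effort activity data takes to produce. The sketch below is a minimal illustration, assuming a local clone and the git CLI on the PATH; the repository path and date window are placeholders. It counts commits per author over the last 30 days: exactly the kind of number that charts nicely and says nothing about quality.

```python
# Minimal sketch: commits per author over a recent window.
# Trivial to compute, easy to put on a dashboard, silent on quality.
import subprocess
from collections import Counter

def commits_per_author(repo_path: str, since: str = "30 days ago") -> Counter:
    """Count commits per author since a given date (illustrative helper)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line)

if __name__ == "__main__":
    for author, count in commits_per_author(".").most_common():
        print(f"{count:5d}  {author}")
```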
Engineering intelligence built around surveillance of developer behaviour produces something different: a system that tracks time in the IDE, context-switching patterns, and individual productivity rankings. This data is also available. It creates the conditions for the worst outcomes in engineering management: developers who optimise for looking busy, managers who mistake activity for output, and a relationship between engineering leadership and the people on their team that is fundamentally one of monitoring rather than understanding.
What it should mean instead
Engineering intelligence, as a category that’s useful to engineering managers, should mean the ability to understand what a team is building and whether the code being produced is accumulating into lasting capability.
That requires measuring output quality, not output volume. It requires a baseline to compare against, so that trends are visible. It requires honesty about what the data shows, including when the data shows that a tool or practice that management has invested in is producing worse outcomes than what it replaced.
The specific version of this question that matters now is whether AI adoption is improving or degrading the code. Two years of widespread AI assistance in engineering teams has not produced a clear answer to that question at the industry level, because most teams don’t have the measurement infrastructure to answer it for their own codebase. The industry-level data from GitClear’s research suggests reasons to be watchful. What any individual team should do depends on what their own git history shows.
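What such a team-level measurement could look like, using nothing but git history: the sketch below is a rough illustration, not GitClear's methodology and not a description of Scryable's implementation; the repository path, date range, and helper name are assumptions made for the example. It tallies lines added and deleted per month. The resulting delete-to-add ratio is a crude churn proxy, but its trend before and after an AI-tooling rollout is the kind of baseline comparison described above.

```python
# Minimal sketch: monthly lines added vs. deleted as a crude churn trend.
# Illustrative only; real quality signals need far more careful definitions.
import subprocess
from collections import defaultdict

def monthly_churn(repo_path: str, since: str = "24 months ago") -> dict:
    """Return {YYYY-MM: (lines_added, lines_deleted)} (illustrative helper)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--numstat", "--pretty=%x01%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])
    month = None
    for line in log.splitlines():
        if line.startswith("\x01"):
            month = line[1:]          # commit header carries the month
        elif line and month:
            parts = line.split("\t")  # numstat: added, deleted, path
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                totals[month][0] += int(parts[0])
                totals[month][1] += int(parts[1])
    return {m: tuple(v) for m, v in sorted(totals.items())}

if __name__ == "__main__":
    for month, (added, deleted) in monthly_churn(".").items():
        ratio = deleted / added if added else 0.0
        print(f"{month}  +{added:7d}  -{deleted:7d}  delete/add ratio {ratio:.2f}")
```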
The definition Scryable is building toward
Engineering intelligence, in Scryable’s use of the term, means legibility: a manager who can look at their team’s git history and understand what the code is doing, how quality signals are trending, and whether the tools the team is using are making the codebase better or worse. A developer who can look at their own patterns and understand where their work holds up and where it generates future rework.
Engineering managers have access to plenty of data. What's been missing is a specific set of measurements that answer the questions currently answered by gut feel, and the recognition that gut feel is an expensive substitute for measurement when the data to do better has been sitting in your git history all along.
Scryable. Engineering intelligence from your git history. Get early access.