Analytics Has a Durability Problem, Not a Productivity Problem
You might have seen this before.
An analyst on the finance team builds a revenue report. She spends two days on it: clarifying definitions with the CFO, finding the right tables, deciding how to handle refunds, choosing the correct exchange-rate logic, filtering out internal test accounts. The report ships, leadership reviews it, everyone moves on.
Three months later, a different analyst on the operations team needs a revenue breakdown by region. He starts from scratch. He picks different tables. He handles refunds differently. He does not know about the exchange-rate decision. He produces a number that disagrees with the first report by 12%.
A meeting gets scheduled to reconcile the two numbers. The original analyst is pulled in. She explains her choices. The operations analyst adjusts his query. Two more days pass. The correct number is now understood by three people and written down nowhere.
This story repeats in every data team. The details change, but the structure stays the same: valuable analytical work gets done, then evaporates. The next person who needs similar analysis starts over. The work does not last.
Yet most conversations about data team productivity focus on speed. How fast can analysts write SQL? How quickly can dashboards be built? How many requests can the team close per sprint?
Those questions miss the deeper issue. The problem is not speed but durability: the work does not last.
What Is Compounding Analytics?
Software engineers take compounding for granted. A function written today gets called by other functions tomorrow. A library built this quarter saves effort next quarter. Code reviews, version control, and shared abstractions mean that each contribution makes the next one cheaper.
Analytics rarely works this way. A revenue calculation built for one dashboard does not automatically become available for the next dashboard, the next ad hoc query, or the next AI-generated answer. Each analysis is a standalone artifact. The logic lives in a SQL file, a notebook, a dashboard formula, or worse, in the analyst's head.
At Holistics, our data team came to believe that the difference between a data team supporting 10 stakeholders and one supporting 50 is not hiring more analysts. It is whether each analysis compounds into shared system intelligence, or whether every important follow-up question resets the work and pulls analysts back in. When work compounds, the marginal cost of the next question drops. When it does not, the cost stays flat or rises.
Four hidden costs of knowledge trapped in people
Most organizations have learned to live with analytics that do not compound. The system appears functional from the outside, but underneath, the operating model depends on a small number of people who absorb semantic gaps through experience, judgment, and institutional memory. That dependency creates four costs that rarely show up in any planning document.
1. Response cost
Every new question consumes analyst time. A stakeholder asks what revenue looked like last quarter under a specific set of conditions. An analyst writes the query, validates it, delivers the result. That interaction takes hours, sometimes days. The answer is correct but disposable. It helps one person, once.
Scale this across a growing organization, and the data team becomes a permanent service desk. The backlog grows, turnaround times increase, and stakeholders learn to ask fewer questions or find their own answers in spreadsheets (which is the exact problem you hired analysts to solve in the first place).
2. Coordination cost
When two teams produce different numbers for the same metric, the disagreement has to get resolved in a meeting. Someone pulls up both queries, someone else checks the filters, a third person remembers that the definition changed last quarter, and the back-and-forth continues until the group reaches consensus. That consensus lives in the meeting notes, if anyone wrote them down.
This is an expensive way to maintain semantic consistency. Every reconciliation meeting consumes time from multiple people, and the resolution is social rather than structural. The next time the same ambiguity surfaces (and it will), the organization pays the coordination cost again.
3. Change cost
Business logic changes constantly. A new pricing tier launches. The definition of "active user" gets updated. A product line gets reorganized. Finance changes how they allocate costs.
When metric definitions live in multiple places (dashboards, SQL scripts, notebook cells, dbt models, spreadsheet formulas), each change requires hunting down every instance and updating it. Some instances inevitably get missed and the organization develops a growing layer of stale logic that nobody fully trusts but nobody has time to clean up.
The change cost is proportional to how many places the same logic has been duplicated. In organizations with no central semantic system, that number tends to grow without bound.
4. Continuity risk
When an analyst who built most of the core reporting leaves, the team does not just lose a person. It loses the reasoning behind dozens of metric decisions: why refunds are handled this way, why that particular join path was chosen, why certain accounts are excluded, what the CFO actually meant when she said "net revenue."
New analysts inherit dashboards and queries but not the context behind them. They make reasonable-looking changes that introduce subtle errors. Months pass before anyone notices. The organization's analytical memory degrades silently.
This risk is often invisible until it materializes. And by then, reconstruction costs far more than encoding the logic properly would have.
Why productivity tools do not solve a durability problem
The instinctive response to an overwhelmed data team is to make analysts faster. Better SQL editors, AI copilots that autocomplete queries, drag-and-drop dashboard builders. These tools reduce the time to produce a single output, but they do not change whether that output becomes reusable infrastructure or a one-time deliverable. An analyst who writes SQL twice as fast still produces work that evaporates at the same rate. A team that closes tickets faster still resets the clock on every new request. Speed improvements are real, but they operate on the wrong variable: they optimize for throughput when the problem is retention.
Think of a factory that produces parts faster but throws away the tooling after each run. That factory has a tooling problem, not a speed problem. Making the factory run faster just means it wastes tooling faster.
What compounding analytics actually looks like
The alternative is a system where analytical work accrues. Where each metric defined, each relationship mapped, each business rule encoded becomes a durable asset that benefits every future consumer, whether that consumer is a person building a dashboard, a stakeholder exploring data, or an AI agent answering a natural-language question.
That system is a semantic layer.
A semantic layer sits between the raw data warehouse and everyone who asks questions of the data. It encodes business meaning: what "revenue" means (and which version), how "active user" is defined, which join paths are valid, what time-grain logic applies. It captures the kind of decisions that currently live in people's heads and makes them explicit, reusable, and governed.
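To make "explicit, reusable, and governed" concrete, here is a minimal sketch of what encoding a metric might look like. All names here (`Metric`, `define`, the `net_revenue` fields) are hypothetical illustrations, not the syntax of any particular semantic layer; real tools (Holistics, dbt, and others) have richer modeling languages, but the idea is the same: decisions that normally live in an analyst's head become named, written-down assets.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A hypothetical encoded metric: computation, rules, and reasoning."""
    name: str
    expression: str        # how the number is computed
    filters: tuple         # business rules baked into the definition
    description: str       # the reasoning behind the decisions, written down

# A shared registry: every future consumer reads from one place.
REGISTRY: dict[str, Metric] = {}

def define(metric: Metric) -> Metric:
    REGISTRY[metric.name] = metric
    return metric

# The finance analyst's two days of decisions, made durable.
net_revenue = define(Metric(
    name="net_revenue",
    expression="SUM(amount_usd) - SUM(refund_amount_usd)",
    filters=("is_test_account = FALSE", "fx_rate_source = 'daily_close'"),
    description="Revenue net of refunds, daily-close FX, test accounts excluded.",
))
```

The point is not the code itself but what it captures: the refund handling, the exchange-rate choice, and the test-account exclusion from the story above now exist outside anyone's head.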
When a semantic layer works well, the four hidden costs shrink:
- Response cost drops because governed metrics are available for self-service exploration. Stakeholders can answer many of their own follow-up questions without filing a request. The analyst's past work serves future users automatically.
- Coordination cost drops because metric definitions are structural, not social. Two teams using the same metric get the same number by construction. Disagreements surface at definition time, not at reporting time.
- Change cost drops because logic is centralized. When the definition of "active user" changes, you update it in one place. Every dashboard, every report, every AI query that references that metric picks up the new logic.
- Continuity risk drops because the reasoning is encoded in the system, so when an analyst leaves, their metric definitions, relationship models, and business rules remain.
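The "same number by construction" and "update it in one place" properties above can be illustrated with a toy query compiler. This is a deliberately simplified sketch with a hypothetical schema, not how any real semantic layer compiles queries: every consumer derives its SQL from one shared definition, so two teams cannot silently diverge, and a definition change propagates everywhere.

```python
# One shared definition of "active_user" (hypothetical schema).
DEFINITIONS = {
    "active_user": {
        "table": "events",
        "expression": "COUNT(DISTINCT user_id)",
        "where": "event_type IN ('login', 'purchase')",
    }
}

def compile_query(metric: str, group_by: str) -> str:
    """Compile a consumer's query from the shared definition."""
    d = DEFINITIONS[metric]
    return (f"SELECT {group_by}, {d['expression']} AS {metric} "
            f"FROM {d['table']} WHERE {d['where']} GROUP BY {group_by}")

# Two teams, one definition: the business rule is identical by construction.
finance_query = compile_query("active_user", "month")
ops_query = compile_query("active_user", "region")

# When the definition of "active user" changes, update it once;
# every consumer picks up the new logic on the next compile.
DEFINITIONS["active_user"]["where"] = (
    "event_type IN ('login', 'purchase', 'api_call')"
)
```

Disagreements now have to happen at definition time, inside `DEFINITIONS`, where they are visible and reviewable, rather than at reporting time across two divergent queries.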
The compounding test
Here is a simple test for whether your analytics setup has a durability problem. When an analyst defines a metric today, does that definition automatically make every future use of that metric cheaper, faster, and more consistent? If the answer is yes, your analytics compounds. Each piece of work makes the system smarter. The team's capacity grows faster than headcount.
If the answer is no (each new dashboard, each new report, each new AI query reconstructs logic from scratch), then you have a durability problem. You are paying the full cost of analytical reasoning on every request, and the only thing scaling your capacity is hiring.
The data teams that support large organizations without burning out are not the ones with the fastest analysts. They are the ones that built systems where past work compounds into shared, governed, reusable definitions. The semantic layer is what makes that compounding possible.
The productivity question asks: how do we answer questions faster? The durability question asks: how do we make sure each answer makes the next one easier? The second question matters more, and far fewer organizations are asking it.
Read more: The Best BI Tools with Semantic Layer | A Fact-based Comparison