
Data Team: You Don't Know What You're Building
25.03.2026 | 7 min Read | Category: Data Team | Tags: #Data Team, #Data Engineering, #Data Governance
Most data teams aren't held back by bad decisions; they're held back by decisions nobody said out loud. Using Kahneman's System 1 and System 2 as a lens, we look at why silent assumptions are what actually kill data quality.
Through the lens of Daniel Kahneman
It often starts innocuously: a quick data delivery to solve a specific need. It ships fast, works well, and suddenly gets used as though it were built for reuse, long-term reliability, and shared truth. That gap between intention and expectation is where many data teams find themselves in trouble.
Data teams are often described as caught between two competing concerns: fast delivery and good quality. In practice, that framing oversimplifies things. The real tension is rarely about what you choose to do; it's about whether you are clear which choice you are actually making, and what expectations follow from it.
Daniel Kahneman’s distinction between System 1 and System 2 offers a useful vocabulary for this. System 1 is fast, intuitive thinking — the autopilot that resolves things without expending too much energy. System 2 is slower, more analytical, and more resource-intensive. Kahneman uses this distinction to explain how people make decisions, but it maps surprisingly well onto how data teams work. Both systems are necessary. The problem arises when we confuse them — or pretend we can operate at System 1 speed and still produce System 2 results.
The case: we were supposed to decommission a legacy data warehouse
In one engagement we know well, the starting point was sensible: establish a team to decommission a legacy data warehouse, step by step, replacing it with new data products built on a modern platform.
It sounded clean. And in theory, it was.
But it quickly became clear that we were in a System 2 ambition project without System 2 prerequisites. Users weren’t mature enough to articulate what they actually needed from new data products — they were used to consuming, not owning. The data platform wasn’t set up for a data engineering team working at full pace. A data governance framework barely existed, and what did exist was too complex to actually use day-to-day. Source data was insufficiently mapped and prepared.
And yet, the expectation was to “get this done quickly”. The ambition was to work at System 1 speed and deliver System 2 results. That’s a combination that rarely ends well.
What happened wasn’t that the team failed. What happened was that nobody had articulated the gap between what they wanted to achieve and what prerequisites were actually in place. And in the absence of that conversation, everyone filled in their own assumptions.
System 1 is not the problem
It’s worth being clear: System 1 solutions are entirely legitimate. Not every data delivery needs to be broad enough for the whole organisation, fully documented, and designed for perpetual reuse. In many cases it’s right to deliver fast, test whether something actually creates value, and invest in longevity only if the solution proves its worth.
The problem doesn’t arise because someone chose a fast solution. It arises because the choice was made without saying so — and because nobody said out loud what that choice actually means.
An analyst who needs a number for a Friday meeting doesn’t need a pipeline built to last ten years, with full historical tracking and six layers of tests. But she does need to know that what she’s looking at is “good enough for now” — not “the definitive answer, forever”. If she doesn’t know that, and her manager doesn’t know it, and the data product owner doesn’t know it, the foundation has been laid for a problem that will be much harder to resolve than the numbers themselves.
System 1 can actually be a sign of maturity
In mature data teams, a lot of what happens is System 1 — and that’s a good thing.
When a team has worked with the same patterns long enough, they become automatic. That doesn’t mean they’re unconsidered. It means they were considered once, systematised, and now just work. Standardised onboarding for new data sources. Modelling and layering conventions that nobody needs to debate from scratch. A fixed standard for data quality tests that sit in pipelines without anyone having to ask for them. An operational rhythm where everyone knows who owns what, and what happens when something breaks at three in the morning.
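The "fixed standard for data quality tests" mentioned above can be made concrete. A minimal sketch, in Python, of what such a default test standard might look like; the function and field names here are illustrative, not a prescribed framework:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Outcome of one standard data-quality check."""
    name: str
    passed: bool
    detail: str = ""

def check_not_null(rows, column):
    """Standard check: no nulls in a key column."""
    bad = sum(1 for r in rows if r.get(column) is None)
    return CheckResult(f"not_null:{column}", bad == 0, f"{bad} null values")

def check_unique(rows, column):
    """Standard check: key column values are unique."""
    values = [r.get(column) for r in rows]
    dupes = len(values) - len(set(values))
    return CheckResult(f"unique:{column}", dupes == 0, f"{dupes} duplicates")

def run_standard_checks(rows, key_column):
    """The fixed standard every new source gets, without anyone asking."""
    return [check_not_null(rows, key_column), check_unique(rows, key_column)]
```

The point is not the specific checks but that they run by default on every source; that is what turns a one-time System 2 decision into System 1 routine.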
That’s the kind of System 1 you want. And the path there isn’t to skimp on System 2 work — it’s to do the right things thoroughly enough, for long enough, that they become intuitive.
The goal is not to “work in System 2”. The goal is to do the right things so many times that they become System 1.
Where the problems actually arise
The real problems occur in the space between the choice made and the expectations set. To make it concrete: you build a quick data delivery for one purpose, yet expect it to be reused by other teams. You avoid defining terms because it takes time, yet expect consistent numbers across reports. You skip historical tracking to save time, yet expect stable time series when someone asks about a trend six months later.
Each of these examples is, in isolation, not dramatic. It’s the combination that causes the problem. And what makes it difficult is that none of those choices is necessarily wrong at the moment it’s made — they’re just not clearly communicated. So expectations accumulate. And one day they’ve become a debt that’s hard to repay.
Conscious choices are the quality indicator
What distinguishes mature data teams from immature ones isn’t necessarily that they make better decisions in a technical sense. It’s that they know which decisions they’re making — and why.
“This is a fast solution, and that’s completely fine — but we know what it isn’t.”
“This is a foundation for what we’ll build next. That’s why it takes time, and that’s the right call.”
Those sentences sound simple. They’re not always easy to say. Especially when there’s pressure to deliver and it’s tempting to promise more than you should. But data teams that consistently communicate which mode they’re operating in build something that’s hard to buy: trust that what they deliver is what it claims to be.
Making conscious choices doesn't require more time. It requires clarity. Clarity within the team about what you're actually building. Clarity towards the business side about what is a temporary solution and what is a foundation. And enough clarity to speak up, early, when expectations don't align with the level of ambition.
Speed doesn’t kill data quality. Silent assumptions that both are possible at once — without anyone saying so out loud — do.
So what do you actually do?
There is no one-size-fits-all answer, but a few practices recur in the teams that handle this tension well.
The first is to make the choice explicit. Not in a policy document nobody reads, but in the way you work. When a team starts a new data delivery, it’s worth taking a few minutes to clarify: is this a System 1 delivery or a System 2 delivery? What are the consequences of that choice? And who needs to know?
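One lightweight way to make that clarification stick is to record the choice alongside the delivery itself, rather than in a document nobody reads. A sketch of what that record might contain; the structure and names are illustrative assumptions, not a standard from the article:

```python
from dataclasses import dataclass
from enum import Enum

class DeliveryMode(Enum):
    """Which mode this delivery was consciously built in."""
    SYSTEM_1 = "fast, fit-for-purpose, not built for reuse"
    SYSTEM_2 = "foundation, documented, built for reuse"

@dataclass
class DataDelivery:
    name: str
    mode: DeliveryMode
    consequences: list[str]  # what this choice does and does not promise
    inform: list[str]        # who needs to know

# Hypothetical example: the analyst's Friday number from earlier.
friday_number = DataDelivery(
    name="revenue_snapshot_for_friday_meeting",
    mode=DeliveryMode.SYSTEM_1,
    consequences=["good enough for now", "no historical tracking", "not for reuse"],
    inform=["analyst", "analyst's manager", "data product owner"],
)
```

Whether this lives in code, in a catalog, or in a pipeline's metadata matters less than the fact that the mode, its consequences, and the audience are written down where the delivery lives.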
The second is to protect System 2 work. Maturity isn’t built in spare capacity. Standards, frameworks, and good technical hygiene always compete for space against whatever is urgent. Data teams that actually manage to raise their maturity level over time are usually the ones who have made it an explicit part of the plan — not something they hope to get to when things are quiet.
The third, and perhaps most important, is to talk about it. Not as self-criticism, but as professionalism. “We made a deliberate choice here, and this is what it means” is a sentence that builds trust — both within the team and with those who depend on what you deliver.
Does this tension sound familiar? At Glitni, we help data teams articulate which mode they’re operating in — and what it takes to move towards where they want to be. Get in touch if you’d like to discuss what this looks like in your organisation.
