Don’t make “data-driven product decisions”—build a data-driven semantic environment | by Pavel Samsonov



Learning organizations can use data to determine whether they were wrong, while others use it only to prove that they were right. The true value of UX lies in transforming the latter into the former.

Like “growth mindset” or “agile development,” data-driven decision making has snuck its way into how almost every product org describes itself. The trouble is that it doesn’t mean anything — “data” is a suitcase word, and every company making this claim means something different by it.

In the formal sense, “data-driven” is not even possible. Data has no meaning on its own; it must be transformed into meaningful information through a process.

And this is where the two kinds of “data-driven” behaviors diverge.

In learning organizations, information is produced via the synthesis of research data, and preserved within organizational knowledge. But in non-learning organizations, that process flows the other way: incoming data is used solely to create evidence that validates existing knowledge.

Practitioners interested in making decisions that are actually informed by data need to do more than check a dashboard or produce a research report. They need to build a local semantic environment — a mental model shared by the people they want to carry out those decisions.

“In order to influence decisions with evidence you have to work with existing beliefs, not against them.” Erika Hall

It is a truth universally acknowledged that any critique of Agile will be accompanied by a comment that Waterfall was worse. So too it is with data-driven decisions. And just like comparisons between Agile and Waterfall, we must start this conversation by asking: what exactly did the new way of doing things save us from?

A sketch of an organization that isn’t driven by data might look something like this: an executive sets a goal (we need to sell 20 million widgets) and then a manager defines a strategy to meet it (we’d sell more widgets if we made blue ones).

But where did that assumption come from in the first place? If you asked the manager, they wouldn’t say “because I say so.” Managers (ought to) balance a lot of different factors; in addition to the classic “viability, feasibility, desirability” product triad, the right thing to do is also informed by political concerns. Will I be able to get my executive’s buy-in? Will others go along with it? Will doing it make me and my team look good?

Conway’s Law may govern product decisions, but this is something more far-reaching: a semantic environment defined by the accumulated beliefs, experiences, and preferences of the organization. In other words: the organization’s knowledge. The manager might not be able to tell you why, but they know that if we make blue widgets, sales will go up — and they know that others in the company know it too, and will be willing to go along.

[Image: a screenshot from Wild Arms (1996), a vintage role-playing video game, showing a girl in a library and a text box: “Knowledge can be a guide or an obstacle depending on how you handle it.”]

The saying “nobody ever got fired for buying IBM” is a well-known manifestation of this principle. “IBM is good” was internalized as organizational knowledge; buying IBM was the path of least resistance, and perhaps more importantly, you could not be blamed if the solution ended up failing. Following organizational knowledge means you can’t be blamed when things go sour: everyone knew that making the widgets blue was the right thing to do!

When I say everyone, I of course mean mostly executives. Organizational knowledge is shaped by power; those who wield outsize influence on decision-making and blame-assigning will end up defining the boundaries of “common sense” beyond which it is dangerous to veer.

And the consequences can be catastrophic.

Setting a bunch of people loose to talk to users ad hoc means the broader organization doesn’t grow its body of knowledge.

It’s extremely tempting to draw a clean line between this approach and a “data-driven” one where everything is examined from first principles. Tempting, but naive. The word “data” creates an illusion of objectivity, but the entire process, from deciding which questions to ask to drawing conclusions from the answers, is as subjective as it gets.

The incumbent facing a data point is not another data point, but an entire mental model forged through many iterations of assimilating new information. What you’re up against is a meme; information packaged up into an idea that rose to prominence because it thrives in its semantic environment.

“Users want a dashboard” is an example of such a meme. It doesn’t matter that it’s not true — asserting to the contrary will raise a chorus of “prove it” followed by “let’s make one anyway, just in case.”

[Comic: birds burning a book. The vulture says, “It didn’t agree with our observations, did it, men?” The two crows agree: “No sir, it’s out of date. And on fire.”]

It’s easy to pin the blame on “lazy managers” — but none of this is happening maliciously. Most people want to do good work, but the definition of “good” within their accustomed semantic environment is going to vary, as will the tools at their disposal. Managers giving “data-driven” an honest go are likely to try to skip straight from data to actionable insights — and therefore unconsciously mix that data with the most salient memes. When they examine data that’s coming in, the natural thing to do is pattern match — in other words, to seek out the evidence that those memes promise you will find.

If data is defeated by memes, then it stands no chance against organizational power. In an environment where reward and punishment are not linked to success and failure, everyone is incentivized to hedge their bets and filter new data through designated decision-makers who are organizationally empowered to decide what the data says. Even a single data point that backs up the established narrative can drown out hundreds that contradict it; people who say “we were wrong” can be quickly shut down with conclusions like “no, you asked the wrong people the wrong questions, and misinterpreted their answers.”

It’s really easy to be data-driven if all you want to do is justify what you wanted to do in the first place. But data alone cannot drive change. You have to strike at the heart of organizational knowledge by embracing a parallel, rival approach to knowledge creation.

“When people talk about an ‘evidence-based’ intervention, what they really want is an evidence-determined intervention. They want the evidence to tell them what to do, and remove all risk. Most research, however, can only say what NOT to do, or what to TRY.” Kenny Temowo

There are only two ways to fight memes: by creating other memes that are more fit for the semantic environment, or by changing that environment. The best strategy, of course, is to do both: create a flywheel that evolves the semantic environment to fit your preferred memes, and evolves the memes to fit their environment.

When people ask the silly question “what is the ROI of UX?”, this is it.

After establishing falsifiable hypotheses and gathering the data, the real work of UX begins: transforming the data into something useful. This is called synthesis; researchers use it to filter out noise and to enrich quantitative data with qualitative context, turning it into actionable information.

[Diagram: six steps separated by a dotted line. The bottom three — metrics analysis, usability testing, validation — are labeled “low impact tasks.” The top three — gathering data, synthesizing information, curating knowledge — are labeled “meaningful research.”]
Analogous to the “design maturity” ladder, a research maturity ladder might look something like this

Synthesis is distinct from the natural process by which data becomes subsumed into organizational knowledge because it is intentional; rather than taking in data piecemeal, we observe all of it at once. This is the only way to capture warm data — the insights that exist only within the relationships between data points.

The other distinction is the scale at which these processes work. Organizational knowledge accretes slowly, and changes slowly too. But getting your small team together to look at holistic raw data and have everyone draw their own conclusions makes an impact much more quickly. Rather than trying to fight top-down, synthesis with teammates creates extremely effective bottom-up opposition to established narratives.

Synthesis prevents research from collapsing into “validation” because it is a far more rigorous lens for exploring the data than “can I find evidence my idea was good?” It protects teams from being strung along by hearing “people are saying they want blue widgets” because it forces them to confront not only the people who are saying it, but also everyone who is saying something else.

In other words, synthesis takes the decision-making power away from organizational memes, and gives it to you. Good synthesis enables your semantic environment to evolve from “this idea is good/bad” to “this idea works in some situations, but not others” and empowers you to decide whether that trade-off will lead to a viable product.

“When you over-rule designers’ judgment about users’ needs, you are telling them to spend less energy solving problems and more energy selling you on solutions.” Jonathan Korman

Knowledge isn’t the only thing that’s governed by institutional power. Synthesis is a process, and in a semantic environment where process is seen as a burden, introducing more process is not likely to succeed. As with any part of the process they don’t understand, managers will demand skips and shortcuts. And we live in a time rich with shortcuts-as-a-service.

However, approaching synthesis as a box we can check — whether with a traditional word cloud or a Gen AI powered word cloud — is a mistake that undermines the entire purpose of the exercise. The purpose of synthesis is not to be able to say that we did it — it’s to have the conversations that allow us to inhabit the data and update our beliefs.

Using an algorithm, no matter how sophisticated, to output a list of bullet points erases the benefits of being “data-driven” in the first place and brings us back to where we started: requirements issued by a black box and indistinguishable from managerial fiat. Rather than creating new competitive memes, we go right back to allowing established memes to dictate what our data means.

But proper, participatory synthesis produces mental models as its output. Teams who go through this process internalize not only the conclusions, but the entire thread of logic that led up to them. They are implicitly empowered to extend that logic even when there are no researchers or decision-makers in the room — and to push back when the requirements they are given don’t make sense within that mental model.

If your management is pushing back on synthesis, automation is not the answer. Automation will keep you stuck in the same semantic environment where UX process is seen as a burden. The only way to design a semantic environment where design makes sense is to do it properly; start small and it will scale as people learn to value it.

As dismissive as Cagan is of UX, his empowered teams need the ability to synthesize knowledge as part of their toolkit in order to actually be empowered to make their decisions based on data, rather than on the organization’s extant memes.

Thus Agile — as framed within the original manifesto — can only exist within UX.
