Stop Debating, Start Testing
Ten executives. Six meetings. A thousand-pound broadband decision. An essay on why debate, at every altitude of the business, is cost — and why small tests are the fastest way to move any idea, in any function, toward any kind of value.
A theory is just a guess until it meets reality. Every hour spent in discussion without evidence is cost, accumulating against value that hasn't arrived yet — and the pattern plays out at every scale, in every function, for every organisation that's trying to move ideas toward something valuable.
Six meetings, ten executives, one thousand pounds
The decision was whether to upgrade the office broadband.
The upgrade cost £1,000 for the year. The question was genuinely open — the existing connection worked, the new one would be faster, the business case was plausible but not overwhelming. A reasonable thing to decide on.
What was less reasonable was what happened next. A meeting was convened. Ten people attended, all executives. The conversation was thorough. The decision was not made. A follow-up meeting was scheduled, because the discussion had surfaced some implications worth considering.
That meeting produced a decision to consult two more stakeholders and gather additional evidence. The third meeting re-opened the question of whether the business case had been adequately challenged. The fourth introduced a concern about vendor risk. The fifth returned to the original question, now surrounded by six weeks of context. The sixth reached a decision.
Ten executives. Six meetings. At their salary costs, somewhere between fifteen and twenty thousand pounds of collective executive time, deliberating a one-thousand-pound spend.
You would be forgiven for thinking this is a story about bad judgement, or poor meeting discipline, or a governance model that lets small decisions consume large amounts of senior attention. All of those are true. But they're not what's actually interesting here.
What's actually interesting is that the exact same pattern is happening in every function of every large organisation, every day, about every kind of question — and most of the people involved don't notice it because their version doesn't have the comic clarity of a thousand-pound broadband question.
What's actually being debated, everywhere, all the time
Here's the move.
If you look closely at what organisations actually spend their time discussing, you find that "what product to build" or "which marketing campaign to prioritise" is a tiny fraction of the total.
Far more organisational time is consumed by debates about everything else.
Ways of working. Tools. Vendors. Operating models. Reporting lines. Approval thresholds. Governance frameworks. Performance management approaches. Career frameworks. Compensation design. Financial reporting standards. Team structures. Communication norms. Documentation standards. Meeting cadences. Hiring processes.
Every one of these can sit in extended debate for months. Every one of them can consume attention disproportionately to the value at stake. Every one of them can stall while the people downstream wait for a resolution that never quite arrives.
This is the pattern. It isn't specific to any function or to strategic decision-making. It runs across every domain of organisational life. An HR team can spend six months debating a new performance management system without piloting any of it with a single team. A finance team can spend a quarter deliberating a new reporting standard without trialling it with a single business unit. A leadership team can spend two years discussing a new operating model without running a single experiment at the scale of a department.
And all of it — every meeting, every deck, every proposal, every round of stakeholder feedback — is cost, accumulating against value that hasn't arrived and may never arrive.
Every debate is an Idea to Value cycle, stalled
Here's what's actually happening in the broadband meeting, and in every meeting like it.
The facilities team had an idea: upgrade the broadband to improve internal network performance. That idea was meant to produce a specific kind of value — faster connections, fewer complaints from staff, marginal productivity improvement across the organisation. In the taxonomy of the four value types, this is enablement (or possibly marginal cost reduction, depending on how you frame it). Not financial value. Not revenue. But a legitimate internal value type, worth the £1,000.
To move from that idea to that value, someone needs to invest. Time. Attention. Decision-capacity. A bit of procurement effort. The upgrade itself.
That's an Idea to Value cycle. Small one. Small value. Small investment. Should take a week.
What actually happened was that the investment phase consumed six meetings of ten executives. That meeting time is cost — and because the executives would otherwise have been making other decisions or moving other work forward, the cost isn't just the direct salary expenditure. It's also the opportunity cost of everything those ten people were not doing during those six meetings.
The cycle stalled in discussion. Not because the question was hard. Because the structure of the organisation made extended discussion the default response to any open question, regardless of the stakes involved.
And this — this exact pattern — is what's happening in every function of every organisation, simultaneously, for every open question. Finance is running their own Idea to Value cycles (new reporting standard, new budgeting approach, new tooling) where the intended value is cost reduction or enablement.
HR is running theirs (new framework, new process, new policy) for enablement. Operations is running theirs (new supplier, new location, new workflow) for cost reduction or enablement. Product is running theirs (new feature, new platform, new campaign) for financial value. Leadership is running strategic Idea to Value cycles (new operating model, new market entry, new acquisition).
Every one of these is an Idea to Value cycle. Every one of them is trying to move an idea to a specific kind of value. And every one of them can stall in discussion just like the broadband meeting stalled — accumulating cost while producing no value, often for months or years at a time.
Debate in one cycle slows activity in every other
There's a second cost here that most organisations never count.
The ten executives in the broadband meeting weren't only failing to decide about broadband. During those six meetings, every other decision that needed their attention was waiting. The product launch that needed sign-off. The hiring decision that needed approval. The strategy question that needed a steer. The supplier relationship that needed a call.
Organisations are networks of parallel Idea to Value cycles, happening simultaneously, interdependent in ways that aren't always visible. When debate dominates one cycle, it doesn't just add cost to that cycle — it creates friction that slows every other cycle downstream of the people stuck in the debate.
The product team can't ship the platform because the tooling decision hasn't been made. The tooling decision hasn't been made because the finance team is debating the new procurement policy. The finance team is debating the new procurement policy because leadership hasn't resolved the new operating model. Every stalled cycle somewhere upstream produces waiting, rework, and lost momentum somewhere downstream.
This is why "just a bit more discussion" is so much more expensive than it looks. The direct cost of the meeting is visible and bounded. The indirect cost — the waiting, the re-planning, the context-switching, the decisions other teams defer because their dependencies haven't resolved — is often multiples of the direct cost, and almost always invisible on any of the dashboards that the organisation actually looks at.
Which is why, in aggregate, an organisation that tolerates extended debate across many of its simultaneous cycles is producing an enormous amount of drag on itself — drag that nobody is measuring, and drag that gets blamed on everything except its actual cause.
Guessing feels safe. Testing feels exposed.
There's a reason organisations default to discussion. It isn't laziness. It's a specific psychological trade.
When you're discussing, you're exploring possibilities in the abstract. Nobody's theory has been proven wrong, because nobody has put anything into contact with reality. Every position in the room remains defensible. Every person in the room can still be right. The social cost of being wrong hasn't been triggered yet, because nothing concrete has happened.
When you're testing, the structure changes completely. A test produces a result, and the result tells you something you didn't want to know as often as it tells you something you did.
Someone's theory survives. Someone's theory doesn't. The ambiguity that kept everyone safe in the meeting collapses, and the organisation is forced to deal with the insight.
Guessing feels safe. Testing feels exposed.
But only one of them produces knowledge. And the cost of the exposure — a week's worth of effort, say, on a small pilot — is almost always tiny compared to the cost of the extended discussion that testing replaces. The discomfort of finding out you were wrong early is radically cheaper than the cost of finding out you were wrong after committing the organisation at scale.
This trade is the same at every altitude of the business. The exposure of piloting a new finance reporting approach with one business unit feels identical to the exposure of prototyping a new product feature with a small user group. The HR team running a small trial of a new promotion framework feels the same vulnerability as the product team shipping an MVP. The sensation is universal because the underlying structure is the same — testing means putting a theory into contact with reality, and reality doesn't care what anyone in the room argued for.
Small. Cheap. Quick. Careful. Reversible. Real.
This is what a proper test looks like, regardless of what's being tested. Six adjectives, each doing specific work.
- Small — a test is not a full implementation. It's the minimum viable piece of evidence. One team. One workflow. One quarter. One business unit.
- Cheap — the investment is a fraction of what the full idea would cost. If the test is expensive, it isn't a test — it's a soft launch with a different name.
- Quick — measured in days, weeks, or at most a single quarter. Tests that run for years are not tests. They are unfunded projects pretending to be experiments.
- Careful — thoughtfully designed, with a clear hypothesis and a clear definition of what "learning" looks like. An undesigned test is just an accident that you're calling an experiment.
- Reversible — if the test produces a bad result, you can stop, back out, and the organisation is not damaged. Irreversible tests are not tests. They are bets dressed as pilots.
- Real — deployed against actual reality, not simulated. A proposal circulated for feedback is not a test. A thing that actually encounters the world and produces evidence is.
What the test is of doesn't change these criteria. Testing a new product feature with one customer segment for a month meets them. Testing a new approval threshold across one department for a quarter meets them. Testing a new tool by running it alongside the old tool for six weeks meets them. Testing a new meeting cadence with one team for eight weeks meets them. Testing a new financial reporting approach with one business unit for a quarter meets them.
Most "tests" in organisations fail at least two of these criteria. They're big, slow, expensive, poorly designed, unrecoverable, or never actually deployed — and when they produce ambiguous results, the organisation decides that testing doesn't work. The problem isn't testing. The problem is that what got run wasn't a test.
A test is itself a miniature Idea to Value cycle
Here's the recursive insight that makes this more than generic advice: it's the observation that quietly unifies the whole Idea to Value system.
When a team runs a small, careful test — of anything — they are not doing something separate from the Idea to Value system. They are doing the Idea to Value system, at the tightest possible loop, compressed into days or weeks.
The test itself is the idea. The time and attention spent designing it is the investment. The hypothesis being tested defines the activity. Running the test is the creative action. The result — the data that comes back — is the output. And the value produced, at this scale, is learning — a change in the organisation's understanding of its own situation, produced quickly and cheaply enough that it can actually be acted upon before conditions change again.
Teams that test often are running the Idea to Value loop many times, at small scale, rather than attempting it once at large scale. Every loop compounds. The team that has run twenty small tests in a quarter has accumulated twenty times the organisational learning of the team that spent the same quarter building one big thing, and usually at a fraction of the cost.
This is the quiet insight that reframes the whole system. The organisation isn't just running Idea to Value cycles at the level of products, campaigns, and initiatives. It's running them at every scale, in every function, for every intended value type. And testing — proper testing — is itself a micro-version of the same cycle, with learning as its value.
Which means an organisation practising testing frequently, across many domains, isn't just getting smarter about individual decisions. It's becoming fluent in the Idea to Value system itself, at tight loop, hundreds of times — building the core organisational muscle of turning ideas into value, one small cycle at a time.
The model is universal
This is the point worth stating plainly, because it's the foundation everything else rests on.
The Idea to Value system isn't a management methodology for technology companies or large enterprises. It's the structure of productive human activity itself.
A solo creator sitting at a desk, deciding whether to publish the blog post or edit it for another week, is running an Idea to Value cycle — idea (the post), investment (their time and attention), activity (the writing), output (the published piece), value (financial, if they're selling something; learning, if they're testing an audience response; enablement, if they're building toward something later). They face the same choice as the broadband committee. Polish the article indefinitely, or put it in contact with reality and see what happens.
A startup debating whether to add a feature or ship what they have is running an Idea to Value cycle. An author deciding whether to submit the manuscript or re-edit chapter four for the fifth time is running one.
A podcaster wondering whether to record the episode or noodle the topic further is running one. An enterprise finance team deliberating a new reporting standard is running one. A leadership team debating a five-year strategy is running one.
Every one of these people is trying to move an idea toward some form of value, and every one of them faces the same underlying choice.
Continue discussing, polishing, theorising — or put it in contact with reality and generate learning. The scale is different. The value types differ. The stakes differ. The vocabulary differs. But the pattern is identical, and the costs of stalling are identical, and the methods for moving forward cheaply are identical.
This is why the model works for solo creators, enterprise teams, authors, podcasters, startups, and every scale between them. It isn't describing a particular industry's workflow. It's describing the structure of what productive work is.
Every productive act, at every scale, in every domain, is idea → investment → action → output → value. And every unproductive act is the same structure, stalled somewhere between idea and output, accumulating cost.
A counterweight
Not every question deserves a test.
Some decisions are genuinely irreversible. Some investments require commitment at scale to be meaningful at all. Some questions can be answered by looking at evidence that already exists, without needing a new experiment. And some organisational problems aren't actually unknowns — they're decisions that someone needs to make, and the discussion is a way of avoiding the making.
Testing is the right default for questions where the answer genuinely isn't known, where a small experiment could reduce that uncertainty, and where the cost of learning is small relative to the cost of being wrong at scale. It's the wrong response to questions where the answer is already known but nobody wants to say it, or where the real issue is a decision dressed up as an analysis problem.
The discipline is in knowing which is which. An organisation that tests everything is as dysfunctional as one that tests nothing — just expensive in a different direction.
The skill is recognising the moment where discussion has done what it can do, and further discussion is just accumulating cost without reducing uncertainty. In that moment, at any altitude of the business, the correct move is to stop talking and run the smallest test that would settle the question.
The cost of certainty without evidence
The real danger is not being wrong. It is being certain without evidence — and then acting on that certainty at scale.
Most organisational disasters are not the product of testing. They are the product of skipping testing, because someone important was confident, or because nobody wanted to pay the small price of finding out early. The cost of being wrong at scale is always higher than the cost of finding out early and adjusting.
Testing is humility in motion. It is the quiet acknowledgement that reality is wiser than theory, and that the fastest path from any idea to any value runs through the evidence that the idea either does or doesn't work.
Not every test succeeds. But every test teaches. And learning early is always cheaper than guessing wrong at scale.
This applies to the solo creator agonising over a blog post, to the startup debating a feature, to the enterprise deliberating an operating model, and to the ten executives debating a thousand-pound broadband upgrade. The altitude changes. The pattern doesn't. The cost of standing still is always real. The cost of moving cheaply is almost always smaller than it looks.
So the question isn't "who is right?"
It's "what did we learn?"
Test, and the future becomes visible.
Guess, and it remains imagined.
Related reading
→ Every Mistake Is an Opportunity — the companion piece on what happens when tests fail, and how organisations convert failure into learning instead of into blame.
→ The Best Plan Is Not the Best — on the pathology of planning as a substitute for action, and why the best plans are revealed by contact with reality rather than by further refinement.
→ Creativity Is a Climate Problem — on the conditions that make testing possible in the first place. Teams test in climates that make testing safe; in climates that don't, they discuss.
Go deeper
This principle is one of 26 in the full Idea to Value deep dive. Here's where to continue.
Watch the full Studio session below
4.5 hours of practitioner-level video across all 26 principles — separate from the course, and going significantly deeper. Built for practitioners who want to apply the system with a rich understanding.
Get the Idea to Value course
The complete field guide and video series — all 26 principles explained clearly, with practical examples and a way of seeing your work you won't be able to unsee. The most comprehensive way to get the full system in one place.
Start with the Orientation Session
A free 21-minute overview of how ideas move from concept to value — the clearest place to begin. Available on signup.