
Dissent Mode: Eliminate Mid-Bar Work

Alignment Through Disequilibrium

In the 1990s the West congratulated itself for “protecting the environment” by shuttering mines and offshoring processing. The pollution didn’t disappear; it migrated to China, where the environmental impact grew worse but the industrial capability deepened. We optimized for virtue signals — clean hands and local optics — while China optimized for capability, aligning its industrial policy, foreign investment, and scientific research around control of critical materials. What looked like environmental responsibility was, in systemic terms, a transfer of learning. The result was dependence, not progress. Misalignment on intent—mistaking looking good for doing good—always compounds into fragility.

We won’t increment our way to the hundred‑fold power the next decade demands, whether in compute or in materials. Iteration without intent is safe, and it is a slow death.

Unaligned iteration preserves motion without momentum: a pattern of competent repetition that looks like progress until you compare it to what the world requires. The rare‑earth story is an example of what happens when systems optimize for stability instead of evolution.

That macro failure mirrors what happens inside most organizations. Platform and data teams often behave like miniature versions of the West’s rare-earth policy: outsourcing discomfort, polishing the surface, and measuring the wrong things. The language changes—tickets instead of mines, refactors instead of environmental protections—but the pattern is the same. Work becomes about equilibrium — busy, stable, and stagnant instead of learning. True leverage comes from the opposite: from alignment sustained through deliberate, managed instability.


Diagnosing Misalignment

Most lasting impact comes from alignment, not scale. When progress stalls, leaders add more—people, automation, dashboards—but effort without shared direction just accelerates entropy. The teams that break through don’t move faster; they move together. Alignment converts effort into compounding force.

Breakthroughs require a controlled loss of balance. Systems evolve when they leave equilibrium long enough to discover a better configuration, then re-align around what they’ve learned. This is the essence of what I call safe disequilibrium: creating bounded instability that stretches a system without breaking it.

The leader’s job is to define a clear, ambitious intent—something just beyond current capability—and then guide the organization through a sequence of small, recoverable experiments toward it.

Your job isn’t to balance perfectly; it’s to let the system wobble safely.

The power curve flips when every iteration aims at the same horizon.

I used to tell teams, “swing for the fences,” meaning we needed bigger bets. What I’ve learned since is that the size of the bet matters less than the size of the intent. The real trap is not timidity; it’s unaligned iteration. A thousand micro-optimizations don’t sum to a breakthrough. You cannot compound what is not coherent. Setting a revolutionary aim (the thing we cannot yet do) and then iterating toward it is how large systems find new ground.

The two are not opposites but complements. Together, they let systems learn faster than they decay. That same logic plays out in every layer of execution. What blocks most initiatives isn’t ignorance of the goal—it’s the friction of moving together.


Classes of Alignment Constraint

Alignment, coordination, and humility about what can be known: these are three sides of the same constraint. Most organizations don’t die of hunger; they choke on mid-bar work and chaotic motion that goes nowhere. Everyone knows roughly what’s wrong but can’t afford to move first. Costs are individual; benefits are collective. Healthy organizations engineer tension—safe pulses of shared disequilibrium where movement is synchronized instead of solitary. Without that pulse, even obvious truths stall.

Saying alignment is the constraint is not an abstraction; it’s a diagnosis. Drag inside a system — technical, social, or strategic — often traces back to some form of misalignment. Coordination failures wear many masks, but they share the same physics: when the cost of moving together is higher than the cost of staying still, equilibrium wins. The symptoms differ; the constraint is the same.

Alignment failures come in several recognizable forms:

  • Epistemic Centralization (the “Anointed” perspective). Decisions rely on a small, self‑confident center that believes it can see the truth from above; weak signals and dissent at the edges are filtered out. The remedy is distributed sensing: red‑team reviews, minority reports attached to decisions, rotating ownership of mechanisms, and dashboards where field evidence can contradict leadership narratives in public.
  • Incentive Drift. When rewards favor local throughput over shared outcomes. The “feature factory” is the classic case: teams measured by how much they ship, not whether it mattered. Disequilibrium starts by changing what’s measured, tying success to feedback instead of motion.
  • Context and Information Debt. Decisions made with stale or siloed context. Teams refactor in parallel, solving yesterday’s problems beautifully. Visible, shared feedback loops—decision registers, outcome dashboards—are the antidote.
  • Coordination Friction. When the transaction cost of collaboration is too high. Multi-team releases stall, approvals bottleneck, and no one dares to stop the line. Guardrails lower the cost of synchronized change.
  • Social Stasis. Fear and habit masquerading as professionalism. Everyone agrees but no one moves first; the equilibrium holds because risk is social, not technical. Managed disequilibrium breaks the freeze.
  • Temporal Myopia. When incentives shrink to the next sprint or quarter. “Tech debt weeks” and “stability sprints” feel virtuous but change nothing. Long-horizon feedback keeps direction alive.
  • Structural Equilibrium. The architecture or org chart enforces its own inertia. Platforms become fortresses; silos fossilize behavior. Small, autonomous units with clear interfaces (Amazon’s model) reintroduce flexibility.
  • Cognitive Local Maxima. Experimentation without theory. Endless A/B tests tune the hill you’re already on. Hypothesis-driven loops, grounded in causal reasoning, are how teams leap to new terrain.
  • Moral Optics Alignment. Optimizing for appearances instead of outcomes. We close the mine to look responsible, or rewrite the platform to look modern. Keeping feedback inside the system—where we must face it—is the only cure.

These aren’t separate problems but facets of the same constraint. Each form of misalignment is a local equilibrium the organization defends because it feels safe. Alignment, in this sense, is not harmony: it’s the ongoing act of reducing the friction that keeps us from learning together. In each case, a leader’s work is to identify which constraint dominates right now and introduce just enough disequilibrium to loosen it.

Elevating Feedback

Design for Feedback and Dissent

There is no privileged perch from which the full truth is visible. Alignment collapses when leaders mistake intent for insight and suppress the signals that don’t fit. The fix is architectural: Build systems that mine for incongruence and amplify it safely.

Require hypotheses before work begins and dissent memos when it ends; log decisions with revisit dates and publish the disconfirming evidence; rotate stewards so no team can overfit to its own story; and give anyone an “andon for assumptions”, a pull cord for bad reasoning, to pull when the mechanism of action drifts.

In healthy organizations, feedback can contradict leadership in daylight — and the system thanks it for the correction. Treat dissent and anomaly as a first‑class signal, not a social problem to be managed. Encode it:

  • Contradiction Budgets: Allocate time each cycle to hunt for evidence that we are wrong (backtests, counter‑metrics, user narratives that don’t match dashboards).
  • Dissent Memos: Every major decision includes an attached short brief from the strongest opposing view; it travels with the decision in the log.
  • Red‑Team Rotations: Assign a rotating pair to challenge assumptions on high‑leverage work; reward the best falsifications.
  • Shadow Metrics: Track at least one counter‑indicator (e.g., reliability vs. speed; trust vs. conversion) to catch negative‑impact work early.
  • Anonymous Andon: A light‑weight path for anyone to flag assumption drift without career risk; review weekly and close the loop publicly.

These practices don’t slow organizations; they keep them honest. The point is not to admire disagreement—it’s to operationalize it so systems can change course before reality forces them to.
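Several of these mechanisms are concrete enough to encode. Below is a minimal Python sketch of a decision register whose entries carry hypotheses, revisit dates, and attached dissent memos; the class and field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DissentMemo:
    author: str
    claim_challenged: str
    alternative_mechanism: str
    proposed_probe: str

@dataclass
class Decision:
    title: str
    hypothesis: str                     # required before work begins
    revisit_on: date                    # every decision carries a revisit date
    disconfirming_evidence: list = field(default_factory=list)
    dissent_memos: list = field(default_factory=list)  # travel with the decision

def due_for_revisit(register, today):
    """Surface decisions whose revisit date has passed."""
    return [d for d in register if d.revisit_on <= today]

register = [Decision("Global replication",
                     "Replication cuts p95 latency for overseas users",
                     revisit_on=date(2025, 3, 1))]
register[0].dissent_memos.append(DissentMemo(
    author="edge team",
    claim_challenged="Latency is network-bound",
    alternative_mechanism="Pages are render-bound; fix render time instead",
    proposed_probe="Profile the top five pages before funding replication"))

print([d.title for d in due_for_revisit(register, date(2025, 6, 1))])
# → ['Global replication']
```

The useful property is that the strongest objection travels with the decision instead of dying in a meeting, and the revisit date makes every decision interrogable later.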

Practicing Safe Tension

Complex systems learn through exposure, not insulation. In Cynefin terms, run safe-to-fail probes: small, bounded experiments that reveal how the system behaves. The goal isn’t to avoid failure but to learn safely from it. Netflix operationalized this with chaos experiments; elite DevOps teams generalize it via rapid releases and quick recovery. DORA’s research links fast feedback to business outcomes—elite performers are roughly twice as likely to exceed profitability and customer-satisfaction goals (DORA 2024). Frequent change isn’t instability; it’s calibration.

Safe-to-fail practice institutionalizes disequilibrium. A steady cadence of reversible shocks prevents drift into comfort and keeps alignment alive.

Leadership in the age of AI and accelerating automation is less about control and more about sustaining discomfort with care. Set a long-horizon intent (an Amazon-style single-threaded mission) and hold the team in the productive zone long enough to learn. Psychological safety isn’t the absence of stress; it’s the confidence that tension will be used for growth, not punishment.

Make alignment tangible:

  • Keep a one-page articulation of aim → mechanism → feedback for every major initiative.
  • Replace status updates with evidence reviews.
  • Admit we rarely have proof—only credibility earned through pattern recognition and intellectual honesty.

The work of leadership isn’t to know better; it’s to build systems that show us sooner when we’re wrong. Alignment isn’t consensus; it’s coherence earned through continuous contradiction.

What is Impact?

Impact isn’t volume. It’s the compounding of small corrections that add up to direction. It’s the willingness to bear local pain for systemic progress. The rare-earth crisis, the feature-factory treadmill, the rewrite that solves nothing—these are all forms of the same disease: equilibrium mistaken for health. Healthy systems breathe. They stretch, wobble, and recover stronger.

Alignment without tension is bureaucracy; tension without alignment is chaos. Held together, they form the learning loop every adaptive system depends on. The operating system is simple but requires tenacity and continuous effort: articulate intent, apply safe tension, and make sure the feedback can contradict you. Impact does not come from doing ten times more work; it comes from staying aligned while the ground keeps moving.

Barbell strategy is how you keep that learning loop honest: probes on the left, bets on the right, nothing in the middle.

Barbell Strategy for Alignment

If alignment is the constraint, portfolio shape matters as much as individual bets. A practical pattern is the barbell strategy: put most of your capacity into many small, safe-to-fail experiments (cheap, reversible probes) while reserving a smaller slice for a few high-conviction, long-horizon moves that express the north-star mechanism. Avoid the mushy middle: medium-sized, medium-risk projects that are too big to learn from and too small to change the game. Run two cadences in parallel: weekly probe‑and‑prune loops and quarterly pushes with clear stop criteria and revisit dates, so the left side learns and the right side compounds. This isn’t risk-seeking; it’s risk shaping so disequilibrium produces fast learning on the left side and durable capability on the right. Details and worksheet: see Appendix B — Barbell Cadence Worksheet.
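The shape is checkable. Here is a sketch that assumes each initiative has already been tagged with its side of the bar and a rough capacity estimate; the record fields are my invention, not a prescribed schema:

```python
def barbell_shares(initiatives):
    """Sum capacity by side and return each side's share of the total."""
    total = sum(i["capacity"] for i in initiatives)
    shares = {"probe": 0.0, "bet": 0.0, "middle": 0.0}
    for i in initiatives:
        shares[i["side"]] += i["capacity"] / total
    return shares

def shape_ok(shares):
    """Targets from the text: 70-90% probes, 10-30% bets, nothing in the middle."""
    return (0.70 <= shares["probe"] <= 0.90
            and 0.10 <= shares["bet"] <= 0.30
            and shares["middle"] == 0.0)

portfolio = [
    {"side": "probe", "capacity": 8},   # many small, reversible probes
    {"side": "bet", "capacity": 2},     # a few long-horizon bets
]
print(shape_ok(barbell_shares(portfolio)))  # → True for an 80/20 split
```

Running a check like this each planning cycle turns “avoid the mushy middle” from a slogan into a failing test.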

The founder’s dilemma is the purest form of this constraint. Let’s look at how misalignment scales when tension is person-dependent rather than systemic.

Beyond Founder Mode

Founder‑led companies often get something important right: a strong voice that can dissent in public and make the system react. That permission to contradict keeps learning alive even when the organization is tempted to play it safe. As they grow, the founder tries to institutionalize that tension with a barbell — fast, reversible experiments on one side and bold, long‑horizon bets on the other.

Then they bring on hires to scale: seasoned operators who understandably reach for the right side of the bar, the big, long‑horizon bets. Without shared intent, those plans collide with founder control and organizational muscle memory; the reins tighten and the barbell collapses into the middle, into initiatives too large to learn fast and too diluted to change the game. I’ve seen this anti‑pattern at many successful founder‑led companies that outgrew their early disequilibrium: the founders built momentum through bold bets and improvisation, stumbled into a niche where their instincts fit the market, and then, when scaling demanded new muscles (planning, process, delegation), brought in operators they admired. Nobody gets what they want. Operators can’t play out their long game; founders don’t get boldness; and the company drifts into the most expensive place possible, mid‑bar mediocrity, where work is too polished to kill and too small to matter.

Structural moves

The goal isn’t “founder‑led forever.” It’s dissent privilege for everyone: institutionalized dissent that doesn’t depend on a personality. Dissent is a capability, not a trait. Make contradiction cheap, socially safe, and operationally routine so teams can name what the founder would have named—without needing the founder in the room.

  • Dissent Permits: Every team has explicit license (and airtime) to challenge the mechanism of action on its highest‑leverage work once per sprint.
  • Disagree‑and‑Show: An objection must ship a probe; disagreement comes with the smallest safe test attached.
  • Narrative Checks: Quarterly, rewrite the “founder story” of why this initiative exists; if the story can’t be told crisply, stop the line.
  • Two‑Key Overrides: Major reversals require two independent leaders to co‑sign—it prevents both heroics and deference.
  • Skip-Level “Show Me Where We’re Wrong”: Leaders host a recurring forum where field evidence can contradict the plan in daylight; unresolved contradictions enter the decision log with a dissent memo.

Call this approach Dissent Mode: preserve the behavior we liked about founder mode by making it a system property rather than a personality perk.

The Ticket Factory Mindset

If alignment is how systems learn, the surest way to stall that learning is how we measure progress. Teams measure throughput (tickets closed, sprints completed, uptime maintained) and congratulate themselves on discipline. It feels responsible; it’s actually evasive. As John Cutler observed in his “feature factory” critique, many companies ship more and learn less. The best engineering organizations outperform peers not because they produce more artifacts but because they shorten the feedback loop between intent and result (DORA 2024). They align measurement with learning.

Ticket thinking survives because it is emotionally safe. You can’t be wrong for closing a ticket. But you can be wrong for naming a hypothesis and discovering it false. Impact work exposes judgment; ticket work hides behind process. Tickets are how we coordinate; outcomes are what we want to achieve. Stop confusing one for the other.

Avoiding False Virtue

Cosmetic improvement feels virtuous—tidy code, elegant rewrites, clean optics—but without shared intent and visible feedback, it’s avoidance in disguise. The tech‑debt caution and the feel‑good motion critique reduce to one standard: favor coherence and outcomes over appearances.

Most teams treat “tech debt” as moral high ground. Cleaning code feels like virtue. Yet the true drag on progress is not messy syntax; it’s decision debt—the slow accretion of choices made without shared context. You can refactor endlessly and still build the wrong thing, beautifully. Leaders who document decisions and revisit them regularly discover that alignment debt, not technical debt, is the silent killer of velocity.

A global platform team once proposed a massive replication initiative: app servers and databases deployed across continents. On paper it promised performance; in practice it added complexity without benefit. Targeted render‑time fixes beat global replication with none of the operational burden.

Platform teams fall into the same trap when they chase elegance over efficacy—rewriting systems to feel modern while business outcomes stagnate. Feel-good motion is misalignment disguised as progress. Real alignment keeps the mess inside the system, where the feedback is visible. If you hide the pain, you also hide the learning.

Itamar Gilad’s Total Impact Matrix quantifies how pervasive this drift can be. Across large public experimentation programs (e.g., Booking.com, Microsoft), fewer than ten percent of ideas consistently show positive impact, and a nontrivial share produce negative impact—extra complexity, user friction, or reputational harm (Booking.com & Microsoft base rates; Gilad 2025). The default state of large systems isn’t neutral; it’s subtractive. Progress begins when leaders stop trying to prescribe truth and start designing systems that surface it—where pruning, dissent, and signal amplification are built in, not punished.

The antidote to mid-bar mediocrity isn’t more planning—it’s smaller, safer shots of disequilibrium.

Controlled Instability: How Systems Learn

Toyota’s andon cord is the purest expression of safe disequilibrium: anyone can stop the production line when something seems off. Software teams need the same reflex, but at the level of mechanism of action, not syntax. When the strategy itself no longer works, the courageous act is to pause the machine and question the premise. That pause — momentary disequilibrium — creates space for learning.

Facts don’t move systems on their own. It took decades of institutional denial before medicine accepted hand-washing between patients. The data was clear; the coordination wasn’t. Change required systemic disequilibrium: new intent, visible feedback, and social guardrails strong enough to override habit.

Case Study: Hand-Washing and the Coordination Trap

What happened / why it mattered

In the mid‑1800s, Semmelweis saw strikingly higher mortality among women treated by doctors than midwives and traced it to contamination from cadaver work. A simple chlorine wash cut deaths from ~18% to <2%, yet peers resisted because the social and institutional costs were high. Progress required disequilibrium: reframing the aim (prevent infection), making infection data visible, and installing hygiene protocols as default guardrails.

Why it matters for platform and data work

Disequilibrium isn’t chaos; it’s a coordination tool. Create enough shared tension that the old equilibrium is intolerable, and the system can move together. For the full mapping from the Semmelweis case to org practice, see Appendix D — Coordination Trap Case.

Ron Heifetz calls this the “productive zone of disequilibrium,” the narrow range of tension that keeps people alert and adaptive without tipping into chaos. Governance’s role is to maintain that zone. Policy-as-code, service-level objectives, and paved roads are not bureaucratic constraints; they are the safety rails that make experimentation survivable (policy‑as‑code → 35% faster, 20% fewer rollbacks — Pulumi 2025; 20× release frequency with no increase in incidents — Capital One DevOps case). The organizations that never allow tension eventually become the ones least able to handle it.
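Policy-as-code simply means the guardrail is executable rather than argued. Here is a toy error-budget gate in Python for illustration; real systems typically use engines such as OPA or Pulumi CrossGuard, and the SLO numbers below are made up:

```python
def deploy_allowed(slo_target: float, observed_availability: float) -> bool:
    """Allow a release only while error budget remains; block once it's spent."""
    error_budget = 1.0 - slo_target            # e.g. 0.001 for a 99.9% SLO
    budget_spent = 1.0 - observed_availability
    return budget_spent < error_budget

# Budget remaining: the release proceeds.
print(deploy_allowed(0.999, 0.9995))   # → True
# Budget exhausted: the line stops, without a meeting.
print(deploy_allowed(0.999, 0.9980))   # → False
```

The point of the rail is that nobody has to argue about it in the moment; the tension stays in the productive zone because the boundary was agreed in advance.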

Two ideas meet here: experiments only teach when they test a causal mechanism, and organizations evolve in bursts when they engineer safe, bounded shocks. Together they argue for deliberate, small instabilities with guardrails so you learn on purpose, not by accident.

It’s easy to mistake endless experimentation for progress. Many teams run A/B tests in a perpetual loop, swapping variables and collecting metrics without ever forming a hypothesis about why a change might matter. This is motion masquerading as learning: refining what exists instead of exploring what might. Without causal reasoning, experimentation becomes a ritual, an exercise in comfort, not discovery. Learning comes from reasoned disequilibrium: forming causal hypotheses, testing them with intention, and realigning based on what’s revealed. I’m not anti-experiment; I’m anti experimentation without hypotheses. Run more tests with better causal claims.

For a concrete example of experimentation turning into learning only through disequilibrium, see the Semmelweis hand‑washing case under “Case Study: Hand-Washing and the Coordination Trap” above.

Biology teaches us that evolution rarely proceeds by smooth, incremental change. Instead, species endure long periods of stasis, punctuated by sudden bursts of adaptation—what Stephen Jay Gould called punctuated equilibrium. Organizations must design for these cycles deliberately, building in mechanisms that create safe disequilibrium and allow for rapid, coordinated leaps. Amazon’s proliferation of small autonomous teams and Toyota’s andon cord are engineered punctuation marks—intentional disruptions that force the system to stretch, learn, and realign.

These interventions demand change on terms that preserve coherence. Alignment is the DNA that holds the species together; disequilibrium is the mutation pressure that lets it evolve. Alignment through disequilibrium is how we practice evolution before it’s mandatory. These moves presume observability, progressive delivery, and practiced rollback. Without those, you’re shaking a system that can’t catch itself.

Appendix A — Impact & Optics Diagnostic

Use these worksheets to expose value detractors — work that feels virtuous but erodes real progress.

Negative Impact Audit (portfolio scan)

  1. List all active/planned initiatives (include “maintenance” work).
  2. Rate Expected Benefit (None/Low/Med/High) and Potential Harm (Complexity/Cost/User friction/Trust). Add a quick “why.”
  3. Tag Value Detractors (low benefit, high harm) and Unsustainable items (short‑term wins that erode the system).

Default actions

  • Cut or pause detractors; salvage lessons.
  • Re‑scope “unsustainable” items to preserve long‑term health.
  • Publish the cuts.

Optics vs Outcomes Diagnostic (where feedback lands)

Map where the feedback lives. If it’s outside your system, consider whether you’re optimizing for optics and vanity metrics.

Goal (stated) | Mechanism (current) | Where Feedback Lands | Real Outcome | Adjustment
Environmental protection | Outsource extraction | Offshore | Global harm ↑; capability ↓ | Bring feedback inside; invest in cleaner local process
Platform quality | Rewrite for elegance | Code review | Customer outcomes unchanged | Tie to Impact Chain checkpoints

Run this table for your top five initiatives. If you can’t point to where the pain is felt, you can’t steer.

Appendix B — Barbell Cadence Worksheet

Design a portfolio that learns fast and builds durability.

Left side (70–90% capacity): Safe-to-Fail

  • List 5–10 probes for this sprint (1–2 week horizon).
  • Each includes: hypothesis, owner, guardrail, rollback, next decision.
  • Default action: prune aggressively; promote only with evidence.

Right side (10–30% capacity): High-Conviction Bets

  • 1–3 initiatives tied directly to the north-star mechanism.
  • Each includes: mechanism of action, leading indicators, stop criteria, revisit date (≤ 90 days).
  • Fund continuity, not heroics; pause if the mechanism is falsified.

Anti-Middle Rule

Mid-bar work is the enemy: if it’s 4–10 weeks and causally fuzzy, kill it, shrink it, or make it real.
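The rule is crisp enough to encode. In this sketch, “causally fuzzy” is approximated as the absence of a stated mechanism of action, which is my assumption rather than part of the worksheet:

```python
def is_mid_bar(item: dict) -> bool:
    """Flag work that is medium-sized and lacks a causal story."""
    medium_sized = 4 <= item["horizon_weeks"] <= 10
    causally_fuzzy = not item.get("mechanism_of_action")
    return medium_sized and causally_fuzzy

print(is_mid_bar({"horizon_weeks": 6}))   # → True: kill it, shrink it, or make it real
print(is_mid_bar({"horizon_weeks": 6,
                  "mechanism_of_action": "Cache cuts p95 by 30%"}))  # → False
print(is_mid_bar({"horizon_weeks": 1}))   # → False: short enough to be a probe
```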

Appendix C — Stop‑the‑Line & Dissent Protocol

Some ways to make contradiction and pauses cheap and visible.

Stop‑the‑Line Triggers (any one is enough)

  • Outcome trend moved opposite to intent for two consecutive reviews.
  • Primary assumption falsified by new evidence.
  • Risk/side‑effects exceed agreed guardrails (error budgets, SLOs).
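Since any single trigger is sufficient, the check is a plain disjunction. A sketch with assumed field names:

```python
def should_stop_the_line(review: dict) -> bool:
    """Any one trigger justifies a pull; indecision is the true outage."""
    return (review["opposite_trend_reviews"] >= 2   # outcome vs intent, twice running
            or review["assumption_falsified"]       # primary assumption dead
            or review["guardrail_breaches"] > 0)    # SLOs / error budgets exceeded

print(should_stop_the_line({"opposite_trend_reviews": 2,
                            "assumption_falsified": False,
                            "guardrail_breaches": 0}))   # → True
```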

Stop‑the‑Line Procedure

  1. Announce the pull (who/why); freeze the affected scope.
  2. Convene a 30–60 min evidence review (aim, mechanism, feedback).
  3. Decide: continue as‑is, alter mechanism, or retire the work.
  4. Log decision + revisit date; communicate broadly.

Norms

  • No penalty for pulling early; penalties for ignoring signals.
  • The pause is time‑boxed; indecision is the true outage.

Dissent Signals to Watch

  • Outcome moves opposite to leading indicator.
  • Qualitative user/operator stories diverge from dashboards.
  • Repeated exceptions to guardrails; “temporary” overrides become defaults.

Dissent Cadence (bi‑weekly)

1) Review the contradiction queue (5–10 minutes). 2) Select two items for rapid falsification. 3) Log outcomes and update decision register (include a dissent memo if disagreement remains).
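The cadence reduces to a triage over the contradiction queue. In this sketch, items are ranked by a “leverage” score, which is my assumption about how the two items would be chosen:

```python
def triage(queue, k=2):
    """Pick the k highest-leverage open contradictions; flag ownerless items."""
    unowned = [c["claim"] for c in queue if not c.get("owner")]
    open_items = [c for c in queue if not c.get("resolved")]
    to_test = sorted(open_items, key=lambda c: c["leverage"], reverse=True)[:k]
    return [c["claim"] for c in to_test], unowned

queue = [
    {"claim": "Dashboards overstate retention", "leverage": 9, "owner": "ana"},
    {"claim": "Rollout speed hurts reliability", "leverage": 7, "owner": "li"},
    {"claim": "Docs churn is noise", "leverage": 2, "owner": "sam"},
    {"claim": "Quota change caused the dip", "leverage": 8},  # no owner: fails the success bar
]
to_test, unowned = triage(queue)
print(to_test)   # → ['Dashboards overstate retention', 'Quota change caused the dip']
print(unowned)   # → ['Quota change caused the dip']
```

Anything returned in the ownerless list violates the success criterion below and should leave the review with an owner and a revisit date.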

Practices

  • Dissent Memo (≤ 1 page): Specifies the claim being challenged, alternative mechanism of action, evidence, proposed probe.
  • Assumption Andon: Who/what/why, affected mechanism, proposed stop‑the‑line scope, owner for next step.

Success = surfaced contradictions resolved or re‑tested within one cadence; no outstanding items without an owner or revisit date.

Appendix D — Coordination Trap Case

Hand‑washing (Semmelweis) mapped to organizational practice.

Medical case | Organizational analogue
Doctors “knew” but didn’t act | Engineers know the friction but don’t stop the line
Evidence ignored for comfort | Metrics ignored for optics
New hygiene protocols | Guardrails and stop‑the‑line rules
Germ theory reframed intent | Business‑outcome framing reframes platform work

Appendix E — Sources & Further Reading

These works underpin the claims about alignment, disequilibrium, feedback, and governance.

  • Forsgren, Humble, Kim. Accelerate / DORA Research Program (2018–2024). Supports feedback loops → org outcomes; defines the Four Key Metrics.
  • Gede & Huluka. Strategic Alignment and Organizational Performance (Cogent Business & Management, 2023). Supports the alignment → performance claim.
  • John Cutler. Feature Factory essays/interviews (2016–2017). Supports the output vs outcome critique.
  • Itamar Gilad. The Total Impact Matrix – Beyond Blind Bets (2025). Supports negative-impact prevalence and the pruning argument.
  • Liz Keogh. Cynefin Safe‑to‑Fail Probes / InfoQ (2017). Supports the safe-to-fail experimentation framing.
  • Heifetz & Linsky. Adaptive Leadership (2002 and follow-on). Supports the “productive zone of disequilibrium.”
  • Netflix Engineering / SEI. Chaos Engineering / Chaos Monkey case (2015). Supports engineered punctuation marks for resilience.
  • Pulumi. State of Policy‑as‑Code (2025). Supports guardrails-not-gates; speed with fewer rollbacks.
  • Postrel, Virginia. The Future and Its Enemies: The Growing Conflict Over Creativity, Enterprise, and Progress (Free Press, 1998). Introduces the “dynamist vs. stasist” distinction; supports bottom-up discovery and feedback-driven progress.
  • Capital One. DevOps / DORA Case Study (2018+). Supports 20× release frequency with a constant incident rate.
  • AWS Executive Insights. Single‑Threaded Leadership / Two‑Pizza Teams (2021). Supports small autonomous teams and the STL pattern.
  • Vaclav Smil. Energy and Civilization: A History (MIT Press, 2017). Supports the energy throughput ↔ human welfare argument.
  • Taleb, Nassim Nicholas. Antifragile (Random House, 2012). Introduces the barbell strategy; supports portfolio shape for learning under uncertainty.
  • Semmelweis / hand-washing. Summaries and historical analyses (mid‑1800s onward). Supports the coordination trap and evidence vs adoption.
This post is licensed under CC BY 4.0 by the author.
