    Experimentation Culture
    January 8, 2025 · 8 min read

    Why Teams Don't Run Experiments (And How to Fix It)

    Sarah Chen

    Head of Product


    Your company invested in experimentation infrastructure. You have A/B testing tools, analytics dashboards, and data engineers who can crunch the numbers.

    So why is your experiment velocity stuck at one or two tests per quarter?

    This is one of the most common problems in product organizations. The tools exist. The desire exists. But the experiments don't happen.

    Here's what's really going on and how to fix it.

    The obvious reasons (that aren't the real problem)

    When you ask teams why they don't run more experiments, you'll hear:

    • "We don't have time"
    • "Our sample size is too small"
    • "We don't have the right tools"
    • "We need to ship features, not run tests"

    These answers are real. But they're symptoms, not causes. Plenty of teams face the same constraints and still run dozens of experiments per month.

    The difference is culture. Let's dig into what's actually blocking your team.

    Reason 1: Nobody owns experimentation

    When experimentation is "everyone's job," it's nobody's job.

    Product managers are measured on features shipped. Engineers are measured on code quality and velocity. Designers are measured on user satisfaction scores.

    Who's measured on experiments run? Usually nobody.

    The fix: Assign ownership. Someone (or a small group) needs to be responsible for experiment velocity as a metric. This doesn't mean they run every experiment. It means they're accountable for the team's overall testing culture.

    Some companies call this person an "experimentation lead" or "testing champion." The title matters less than the accountability.

    Reason 2: Experiments feel risky

    Running an experiment means admitting you don't know the answer. For many people, this feels like professional vulnerability.

    What if the experiment fails? What if the variant you championed loses? What if leadership asks why you "wasted time" on a test that didn't work?

    This fear is often unspoken but powerful. People would rather ship confidently than test uncertainly.

    The fix: Reframe what "failure" means. A losing experiment isn't a failure. It's information that prevents you from building the wrong thing.

    Leaders need to model this. When executives share experiments that didn't work and explain what they learned, it gives everyone permission to test without fear.

    Reason 3: The feedback loop is too long

    Imagine this: You spend two weeks setting up an experiment. You run it for three weeks to get statistical significance. Then you wait another week for the data team to analyze results.

    Six weeks later, you finally know if your hypothesis was right.

    That's too long. By the time results come in, the team has moved on to other priorities. The learning doesn't stick.

    The fix: Shorten every stage of the cycle.

    • Make experiment setup faster with better tooling and templates
    • Set minimum viable sample sizes, not maximum confidence levels
    • Automate result reporting so it happens instantly when significance is reached
    • Announce results immediately to the whole team, not just the people who ran the test

    Fast feedback creates momentum. Slow feedback kills it.
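To make "automate result reporting" concrete, here's a minimal sketch of the significance check a reporting job could poll, so results post the moment a test becomes conclusive. The counts and 95% threshold are illustrative assumptions, not data from a real experiment:

```python
import math

def reached_significance(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test; True when |z| >= z_crit (95%, two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False  # no variance yet; keep collecting data
    z = (p_b - p_a) / se
    return abs(z) >= z_crit

# A scheduled job could run this hourly and announce as soon as it flips True.
print(reached_significance(120, 2400, 156, 2400))
```

Wire this to your announcement channel and the week of waiting on the data team disappears for simple two-variant tests.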

    Reason 4: Results disappear into a black hole

    Many teams run experiments that get conclusive results. Then nothing happens.

    The winning variant doesn't get shipped. The learning doesn't get documented. The rest of the organization never hears about it.

    When people see their experiments ignored, they stop running them. Why bother testing if the results don't matter?

    The fix: Create visibility and accountability around results.

    • Share experiment outcomes in a public channel, not a private report
    • Make it clear who's responsible for implementing winning variants
    • Track "insights acted on" as a metric, not just "experiments run"
    • Celebrate experiments that changed minds, not just experiments that confirmed hypotheses

    When experiments visibly influence decisions, people run more of them.

    Reason 5: It's not interesting

    Let's be honest: reading experiment results can be boring.

    Most experiment announcements look like this: "Experiment #847 concluded. Variant B showed a 2.3% lift in click-through rate with 95% confidence."

    Nobody outside the analytics team gets excited about that.

    The fix: Make experiments engaging.

    This is where gamification helps. When team members can predict outcomes, compare their intuition to colleagues, and compete on accuracy, experiments become interesting.

    ExperimentBets exists specifically to solve this problem. When experiments have stakes (even virtual ones), people pay attention.

    Reason 6: The team doesn't know experiments are happening

    In many organizations, experiments run in the background. The product team knows about them. Maybe the data team. But engineers, designers, customer success, marketing? They have no idea what's being tested.

    This isolation has two effects. First, it limits the diversity of perspectives on what to test. Second, it means most of the organization never builds experimentation muscle.

    The fix: Broadcast experiments to the whole company.

    Every experiment should be announced somewhere visible. Slack is ideal for this. When everyone can see what's being tested and why, experimentation becomes a team sport.
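As a sketch of how lightweight this broadcast can be, the snippet below formats an experiment announcement and posts it through a Slack incoming webhook. The experiment name, hypothesis, and owner are hypothetical examples:

```python
import json
import urllib.request

def format_announcement(name, hypothesis, owner):
    """Build the Slack message payload for a new experiment."""
    text = (f":test_tube: *New experiment: {name}*\n"
            f"Hypothesis: {hypothesis}\n"
            f"Owner: {owner}. Reply with your prediction!")
    return {"text": text}

def announce(webhook_url, payload):
    """POST the payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # Slack replies "ok" on success

payload = format_announcement(
    "Checkout button copy",
    "Changing 'Buy' to 'Get started' lifts click-through",
    "Sarah")
print(payload["text"])
# announce("https://hooks.slack.com/services/...", payload)  # your webhook URL
```

Asking for predictions in the announcement itself is what turns a status update into participation.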

    The meta-problem: Starting is the hardest part

    Most teams don't need more process or more tools. They need momentum.

    Once you run a few experiments and see results, it gets easier. People start suggesting tests. The feedback loop tightens. Culture shifts.

    But getting that first wave of momentum is hard.

    Here's a practical approach:

    • Pick one person to own experiment velocity for 90 days
    • Set a target like "run 4 experiments this month" (achievable, but a stretch)
    • Make it visible by announcing every experiment to a public channel
    • Share every result whether it wins, loses, or is inconclusive
    • Add engagement through predictions or lightweight competition

    Small wins compound. Run four experiments this month, and you'll run eight next month. Run eight, and testing becomes how your team works.

    What high-performing teams do differently

    Teams that run lots of experiments share common traits:

    They celebrate curiosity. Asking "what if we tested this?" is rewarded, not dismissed.

    They tolerate inconclusive results. Not every test will show a clear winner. That's okay.

    They separate learning from launching. Running an experiment is valuable even if the variant doesn't ship.

    They make it social. Experiments are team activities, not solo projects in a spreadsheet.

    They iterate on the process. Their experimentation workflow improves over time based on what works.

    Start this week

    Pick one blocker from this list that resonates with your team. Focus on fixing that one thing.

    If nobody owns experimentation, assign someone.

    If results disappear, create a public announcement channel.

    If experiments feel boring, try predictions and see what happens.

    The path to high experiment velocity starts with the first step. And the first step is usually smaller than you think.

    experimentation-culture
    experiment-velocity
    team-culture
    leadership

    Sarah spent 8 years in product roles at growth-stage startups, most recently leading experimentation at a Series C e-commerce company. She writes about finding the right metrics and building a culture of testing.
