The Experimentation Engagement Pyramid is a model for understanding how deeply your team engages with experiments. It describes four progressive levels, from basic awareness to full ownership.
Most teams operate at Level 1 or 2. High-performing experimentation cultures reach Level 3 or 4. The pyramid helps you diagnose where you are and what it takes to move up.
Unlike maturity models that focus on process and infrastructure, the Engagement Pyramid focuses specifically on how people interact with experiments. You can have sophisticated tooling but still be stuck at Level 1 if nobody pays attention to results.
Level 1: Awareness
At the base of the pyramid, teams know that experiments exist. This is the minimum viable level. People have heard experiments are running, but they don't actively seek out information or engage with results.
Experiments are announced
New experiments are shared in team channels or meetings. People see notifications but may not read them closely.
Results are published
When experiments conclude, results are documented somewhere. Most people never look at them.
Passive knowledge
If asked, team members could name a recent experiment. But they couldn't tell you the results or what was learned.
Signs you're here
Low open rates on experiment announcements. Empty comment sections. Results shared but not discussed.
Level 2: Participation
At Level 2, team members actively engage with experiments before they conclude. They make predictions about outcomes, ask questions about methodology, and check in on progress.
Predictions and betting
People guess which variant will win. They have skin in the game, even if it's just bragging rights.
Questions and discussion
Team members ask about experiment design, metrics, and timeline. Experiments spark conversation.
Anticipation for results
People check back to see what happened. They remember the experiments they made predictions on.
Signs you're here
Active prediction threads. People asking 'Did we ship that variant?' Discussion of surprising results.
Level 3: Advocacy
At Level 3, team members actively request experiments. They propose hypotheses, ask for tests on their features, and champion experimentation as a way of working.
Experiment requests
Product managers, designers, and engineers ask to test their assumptions. 'Can we A/B test this?' becomes common.
Hypothesis generation
Team members propose what to test and why. Ideas come from across the organization, not just the data team.
Experimentation champions
Certain individuals become known for pushing experimentation. They convince skeptics and share success stories.
Signs you're here
Backlog of experiment requests. Non-data people writing hypotheses. Experiments referenced in product specs.
Level 4: Ownership
At the top of the pyramid, teams run their own experiments. Experimentation is not something done to them by the data team. It's a capability they own and operate.
Self-service experimentation
Teams can launch, monitor, and conclude experiments without waiting for the central data team.
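As a rough illustration, self-service can be as simple as a thin wrapper a product team calls directly. This is a minimal sketch under assumptions: the ExperimentsClient class, its method names, and the metric and experiment names are all hypothetical, not a real platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """A self-service experiment definition a product team owns end to end."""
    name: str
    hypothesis: str
    variants: list[str]
    primary_metric: str
    traffic_pct: float = 10.0      # start small, ramp later
    status: str = "draft"          # draft -> running -> concluded
    results: dict = field(default_factory=dict)

class ExperimentsClient:
    """Hypothetical wrapper around whatever experimentation platform you use."""
    def __init__(self) -> None:
        self._experiments: dict[str, Experiment] = {}

    def launch(self, exp: Experiment) -> None:
        exp.status = "running"
        self._experiments[exp.name] = exp

    def monitor(self, name: str) -> str:
        return self._experiments[name].status

    def conclude(self, name: str, results: dict) -> None:
        exp = self._experiments[name]
        exp.status, exp.results = "concluded", results

# A product team launches and checks its own test, no central queue required.
client = ExperimentsClient()
client.launch(Experiment(
    name="checkout_copy_v2",
    hypothesis="Clearer shipping copy reduces checkout abandonment",
    variants=["control", "new_copy"],
    primary_metric="checkout_completion_rate",
))
print(client.monitor("checkout_copy_v2"))  # "running"
```

The point of the sketch is the ownership boundary: the team defines the hypothesis, launches, and concludes without a hand-off, while the platform underneath stays centrally maintained.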
Embedded expertise
Each product team has someone who understands statistical methods and can interpret results correctly.
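To make "interpret results correctly" concrete, here is a hedged sketch of the kind of check an embedded analyst might run: a standard two-proportion z-test on conversion counts. The numbers are invented for illustration; the test itself is textbook statistics.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative numbers: control converts 520/10_000, variant 590/10_000.
p_value = two_proportion_z_test(520, 10_000, 590, 10_000)
print(f"p = {p_value:.3f}")  # compare against the alpha you pre-registered
```

The embedded expert's job is as much about the surrounding judgment (was the sample size planned, did we peek early, is the metric the right one) as about running the calculation.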
Experimentation as default
New features launch as experiments. The question isn't 'should we test?' but 'why wouldn't we test?'
Signs you're here
Product teams run their own experiments. Experiment velocity is measured per team. Testing is built into the product development process.
How to Apply This Framework
Diagnose your current level
Use the signs for each level to honestly assess where your team sits. Most teams overestimate by one level. Look at behaviors, not intentions.
Identify level-up blockers
What's stopping you from reaching the next level? Common blockers: lack of visibility (1→2), no process for requests (2→3), tooling or skills gap (3→4).
Implement one intervention
Choose a single change that addresses your biggest blocker. For 1→2: add predictions. For 2→3: create a request process. For 3→4: enable self-service.
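The 1→2 intervention, for instance, can be as light as a shared prediction log. A minimal sketch, with illustrative experiment and people names:

```python
from collections import Counter, defaultdict

# experiment name -> {person: predicted winning variant}
predictions: dict[str, dict[str, str]] = defaultdict(dict)

def record_prediction(experiment: str, person: str, variant: str) -> None:
    predictions[experiment][person] = variant

def tally(experiment: str) -> Counter:
    """How the team is split before results come in."""
    return Counter(predictions[experiment].values())

record_prediction("checkout_copy_v2", "maria", "new_copy")
record_prediction("checkout_copy_v2", "dev", "control")
print(tally("checkout_copy_v2"))  # Counter({'new_copy': 1, 'control': 1})
```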
Measure engagement
Track leading indicators: prediction rates, request volume, experiments per team. Review monthly and adjust your approach based on what's working.
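A hedged sketch of how those leading indicators could be computed from a simple engagement event log. The event schema here is an assumption for illustration, not a prescribed format.

```python
from collections import Counter

# Assumed monthly event log; each event records its type, team, and experiment.
events = [
    {"type": "prediction", "team": "checkout", "experiment": "copy_v2"},
    {"type": "request",    "team": "search",   "experiment": "ranking_v3"},
    {"type": "launch",     "team": "checkout", "experiment": "copy_v2"},
    {"type": "launch",     "team": "search",   "experiment": "ranking_v3"},
]

def monthly_engagement(events: list[dict]) -> dict:
    launched = {e["experiment"] for e in events if e["type"] == "launch"}
    predicted = {e["experiment"] for e in events if e["type"] == "prediction"}
    return {
        # Share of launched experiments that attracted at least one prediction.
        "prediction_rate": len(launched & predicted) / len(launched) if launched else 0.0,
        "request_volume": sum(1 for e in events if e["type"] == "request"),
        "experiments_per_team": dict(Counter(
            e["team"] for e in events if e["type"] == "launch"
        )),
    }

print(monthly_engagement(events))
# {'prediction_rate': 0.5, 'request_volume': 1, 'experiments_per_team': {'checkout': 1, 'search': 1}}
```

Reviewing these numbers monthly shows whether an intervention is actually moving you up the pyramid or just adding process.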