The Growth Experiment Management System that Tripled Our Testing Velocity


Joey DeBruin started his career as a Research Technologist at the Johns Hopkins School of Medicine. His time in neuroscience labs taught him the fundamentals of experimentation, from hypothesis to analysis. Now he leverages those fundamentals to run rapid experimentation as the Head of Growth at Feastly, an online marketplace connecting adventurous diners with talented chefs outside the traditional restaurant setting. Feastly has grown 10x since he joined two years ago.


All the fastest-growing companies move at light speed.

Uber, Airbnb, Amazon, and Facebook are some of the fastest-growing companies in history, in part because they built high-velocity testing into their cultures from day one. Lindsay Pettingill, a data science manager on Airbnb’s growth team, reports that her team has increased their experiment cadence from 100 to 700 experiments per week over the past two years.

Airbnb isn’t the only rocketship that prioritizes speed. For years Facebook’s motto was, “Move fast and break things.”


In 2014, they updated that motto to, “Move fast with stable infrastructure.”


Slightly different, but the message was the same: speed matters. A lot.

Despite knowing that speed is crucial, most teams move at a snail’s pace. There are two key reasons for this:

  1. They struggle to get organizational buy-in for growth initiatives across the team.
  2. They fail to build airtight, repeatable experimentation systems.

Just as Jira has improved efficiency and collaboration for many development teams, a management system built specifically for experimentation can improve speed for a growth team.

Larger companies can devote engineers to building powerful systems in-house, but smaller teams often view these systems as too expensive, too difficult to build, or both.

Google Sheets, though clunky and increasingly ill-suited as test pace increases, is the default for most startups. This creates a negative feedback loop: the clunkier the process becomes, the slower the testing cadence.

In this post, I will reveal how I built a lightweight, inexpensive experimentation system for Feastly that:

  • Facilitated collaboration and buy-in across the entire team.
  • Doubled idea contribution from across the company.
  • Tripled experimentation speed.


Using Pipefy and a few simple integrations, we’ve built an experimentation system that costs virtually nothing and lets us accelerate learning.

The system I describe in the following post can be adapted to fit the needs and goals of any team. Building it requires no advanced programming, takes only a few hours to create, and can be broken down into 4 key steps:

  1. Identify priorities based on your team’s strengths and weaknesses.
  2. Build the steps of your pipeline to establish a repeatable experiment flow.
  3. Create powerful reporting to help you learn and optimize the process.
  4. Set up simple automations and integrations to facilitate key actions.

By walking through each step individually with examples, my goal is to share a blueprint you can use to build a lightweight experiment system for your own growth team.

Step 1: Before You Build, Identify Your Priorities

Many of the challenges of running an effective growth team are shared across organizations; there’s a reason the Reforge Growth Series event on “Building a Growth Team” is always standing room only. Even so, the existing pace of tests, approval process, and other unique aspects of your team may affect the pipeline you build, and may differ from the one described here.

The beauty of this system is that it can easily be adapted to address different needs, and it will undoubtedly evolve over time. For example, positive feedback loops via Slack and email weren’t built into our system initially; we added them later to further encourage idea contribution across the company, which increased the volume of ideas added to our prioritization queue by 2x.

We use the same growth principles that apply to building product to improve our process: we test against a few main KPIs, including experiment speed, team adoption, and success rate, to measure how changes to the system affect those metrics.

We had a few main goals when we started: speed, collaboration, reporting, and price, all detailed below.

Speed

We wanted a system that empowered us to be fast and agile. Our previous system in Google Sheets had become cumbersome as the number of experiments we ran increased. We wanted a system that could do most of the busywork and free us up to build more tests.

Collaboration

Like many companies, we have a talented and diverse group of people focused on very different areas of the product, from venue operation to performance marketing to sales, and the team members from each area have their own expertise. Our team is too small to have area-specific growth teams, so without a simple way of gathering experiment ideas, we relied on weekly meetings to generate ideas. To facilitate idea collaboration without adding more meetings, we wanted the company to be able to create an experiment in our queue directly from Slack (which they always have at their fingertips).

Reporting

Moving experiments through a series of Google Sheets made it cumbersome to create reports on testing pace and accuracy, which meant reporting happened far less than we wanted. Compounding the issue, we also weren’t spending enough time on post-mortem analysis of our projects. We wanted to create a pipeline that would force us to follow best practices and require retrospection before any test could be marked as “complete.”

Price

We took a few calls with SaaS experimentation management platforms, and the price seemed unjustified given our stage and the efficiency gains we expected. Happily, we were wrong about the latter, but given how easy this system was to build, I’m glad we decided to create something on our own.

As a team, we discussed the features that would help us accomplish the goals outlined above. From there, we created a spec sheet before investigating tools to create the system. Even with this guide, I’d recommend following a similar process before beginning the build steps.

Step 2: Building the Experiment Pipeline

The experiment pipeline is similar to any board system, such as Trello or Jira, in which you move cards from left to right as the task progresses from start to finish. Along the way we add different fields (some of which are required before a card can move forward), which triggers automations and reports.

Our pipeline has the following steps:

  1. Backlog
  2. Queue
  3. Scheduled to Launch
  4. In Progress
  5. Analyze
  6. Winners
  7. Losers
  8. Archived

Overview of the pipeline

Depending on your goals you could opt for a more streamlined or more detailed flow. Let’s walk through each step in detail.

Backlog

Ideas flow into this bucket from Slack (integration described in Step 4) or directly in Pipefy via the growth team. Our goal was to create the lowest possible barrier to entry for a new idea, so the only required field to get into this phase is a title.

Queue

From here forward, movement of cards in the pipe is controlled by the growth team. In order to move an idea from the backlog to the queue, several required fields must be fulfilled:

  • Goal
  • Tracking metric
  • Hypothesis of the expected result
  • Area of focus
  • Expected impact
  • Confidence of success
  • Ease

The area of focus for the experiment, such as retention, monetization, activation, etc., helps us see how the pace and performance of our tests map to different company goals over time.

Additionally, we use the ICE prioritization framework (impact, confidence, ease), to rank queued ideas in order of priority.
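In code, the gate and the ranking boil down to something like this minimal sketch. The field names and the 1-to-10 scoring scale are illustrative assumptions; in practice Pipefy enforces this through required fields and reports, not custom code.

```python
# Illustrative sketch of the backlog-to-queue gate and ICE ranking.
# Field names and the 1-10 scoring scale are assumptions, not Pipefy's schema.

REQUIRED_FIELDS = ["goal", "tracking_metric", "hypothesis",
                   "area_of_focus", "impact", "confidence", "ease"]

def ready_for_queue(card: dict) -> bool:
    """A card may leave the backlog only when every required field is set."""
    return all(card.get(field) not in (None, "") for field in REQUIRED_FIELDS)

def ice_score(card: dict) -> float:
    """ICE score: the average of impact, confidence, and ease."""
    return (card["impact"] + card["confidence"] + card["ease"]) / 3

def prioritized_queue(cards: list) -> list:
    """Cards that pass the gate, sorted by ICE score, highest priority first."""
    return sorted((c for c in cards if ready_for_queue(c)),
                  key=ice_score, reverse=True)
```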

Scheduled to Launch

During our weekly meeting, we move ideas from the queue into our schedule and assign management of any necessary tasks or reminders to team members directly on the card.

In Progress

Once an experiment has launched, it moves into this phase, along with any relevant tracking links, reminders, attachments, and tasks.

Analyze

When an experiment has ended, it moves into this phase where it must be analyzed against its original goal and hypothesis before it can move forward. We record the result, any observations, and whether the experiment significantly underperformed, met, or significantly overperformed the original hypothesis.

For example, if we hypothesized a test would lift conversion by 10%, and it ends up lifting it by 20%, we would categorize it as a success that overperformed against the original hypothesis. This allows us to understand not only whether our experiments are successful, but how accurately we are estimating outcomes over time in different types of experiments.
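That grading step can be made mechanical. In this sketch, the 50% relative-deviation cutoff for “significantly” is an illustrative threshold, not our exact rule:

```python
# Illustrative grading of a result against its hypothesis. The 50% relative
# deviation used for "significantly" is an assumed threshold, not our exact rule.

def grade_result(hypothesized_lift: float, actual_lift: float,
                 threshold: float = 0.5) -> str:
    """Classify a result as having significantly underperformed, met, or
    significantly overperformed the hypothesized lift."""
    deviation = (actual_lift - hypothesized_lift) / abs(hypothesized_lift)
    if deviation >= threshold:
        return "overperformed"
    if deviation <= -threshold:
        return "underperformed"
    return "met"
```

With the example above, a hypothesized 10% lift that lands at 20% grades as overperformed.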


Example of an experiment card in the Analyze phase

Winners

Experiments that achieve their goal move into this phase. If follow-up experiments are needed, we create them directly in the card via Pipefy’s “connection” field type. This connects the cards so previous experiments and learnings are easily accessible in further projects.

Losers

Experiments that did not achieve their goal move into this phase (note: like “Winners,” cards will flow directly into this phase from “Analyze”). Follow-up experiments can be created directly in the card, as described above.

Archived

Experiments removed from the pipeline move into this phase, with a reminder to revisit them if necessary (note: cards may flow directly into “Archived” from any other phase).


Step 3: Building Prioritization and Analysis Reports

One of the main motivations for building a sequential pipeline is that it makes prioritization and reporting a breeze, which opens up a path to greater efficiency, impact, and buy-in from company leaders. Some reports can be built directly in Pipefy, while more advanced reports must be exported to Excel or another data analysis and visualization platform.

The reports you can easily create from this flow are nearly endless, but here are a few reporting ideas with which to start.

Prioritization

This report pulls any cards in the “Queue” phase, lists their methods and goals, and sorts them by ICE score. It lives directly in Pipefy, where we can easily go through and move high-priority tasks to the schedule.

Prioritization report with experiment ideas

Cadence

Pipefy has built-in cadence reports, though we decided to build a simple dashboard to drill down further. To do so we set up a report in Pipefy that logs when experiments are launched and completed along with what area of the company they address, and a few other key fields. Then we set up the dashboard in Chartio (a simple data visualization platform), and can easily refresh the data from the pre-built Pipefy report, a process that takes less than a minute.
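The core of that rollup is a simple count of launches per week, which you can compute straight from the export. In this sketch the `launched_at` field name is an assumption about the export, not Pipefy’s actual schema:

```python
# Illustrative cadence rollup computed from an exported list of experiments.
# The "launched_at" field name is an assumption about the export format.
from collections import Counter
from datetime import date

def launches_per_week(experiments: list) -> Counter:
    """Count experiments launched per (ISO year, ISO week)."""
    weeks = Counter()
    for exp in experiments:
        launched: date = exp["launched_at"]
        year, week, _ = launched.isocalendar()
        weeks[(year, week)] += 1
    return weeks
```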


Team Adoption

Using the same report as we do for cadence, the dashboard (or pre-built Excel pivot tables, if you prefer) calculates how many different team members (growth and non-growth) submit ideas over time. Since one of our main priorities is to instill a data-driven culture, this gives us a metric to let us know how we’re doing, motivate further collaboration, and inspire focus on increasing adoption. Our efforts to improve the process have increased the number of experiment idea contributors by 2x since the pipeline was built.
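The adoption metric itself is just a distinct count over time. A sketch, assuming each exported card carries a submitter email and a creation date (field names are illustrative):

```python
# Illustrative team-adoption rollup: distinct idea contributors per month.
# Field names ("created_at", "submitter_email") are assumptions about the export.
from collections import defaultdict
from datetime import date

def contributors_by_month(cards: list) -> dict:
    """Map (year, month) to the number of distinct idea submitters."""
    people = defaultdict(set)
    for card in cards:
        created: date = card["created_at"]
        people[(created.year, created.month)].add(card["submitter_email"])
    return {month: len(emails) for month, emails in people.items()}
```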

Hypothesis Improvement and Success Rate

Again using the single Pipefy data export detailed above, the dashboard calculates how the results of our experiments are tracking against our hypotheses over time. This is a crucial step - if we aren’t getting better at predicting results, we can drill into the problem areas and decide whether to adjust our priorities. For example, we learned we typically overestimate the results of full-funnel projects and underestimate the results of simple copy and design changes. This reminded us to keep up the pace of small, low-hanging-fruit tests.
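A sketch of that rollup, assuming each exported card records an area of focus and a grade for how it tracked against its hypothesis (both field names are illustrative):

```python
# Illustrative accuracy rollup by area of focus. Assumes each exported card
# records an "area_of_focus" and a "grade" against its original hypothesis.
from collections import defaultdict

def accuracy_by_area(cards: list) -> dict:
    """Map each area of focus to the fraction of its experiments graded 'met'."""
    totals = defaultdict(int)
    met = defaultdict(int)
    for card in cards:
        area = card["area_of_focus"]
        totals[area] += 1
        if card["grade"] == "met":
            met[area] += 1
    return {area: met[area] / totals[area] for area in totals}
```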

Step 4 (optional): Automations and Integrations

The above details everything you need to know to create a functional system to use within your growth team. Adding cards, setting alerts and reminders, and facilitating teamwork between users in Pipefy is easy, so integrations and automations are mostly useful to facilitate interaction between the growth team and other areas of the organization. They can create a system that is incredibly lightweight for collaborators, and automates the communication necessary to keep everyone in the loop.

Below are the two key automations we’ve built into the system so far.

Create Pipefy Cards in Slack with Zapier

I would highly recommend implementing this integration. Reducing the friction for team members to surface ideas is a key lever for increasing cadence and adoption. Zapier offers a pre-built Zap for this, but it doesn’t allow you to create experiments from any channel using a hashtag, so we built a simple one of our own. Any time someone starts a post with #experiment, the post text creates a card in the backlog, and the email of the submitting user is added to the card.
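The parsing step of that flow is only a few lines of logic. In this sketch the card field names are assumptions, and the actual card creation happens through Zapier’s Pipefy action rather than this code:

```python
# Illustrative parsing step behind the Zap: a post starting with "#experiment"
# becomes a minimal backlog card. Field names are assumptions; card creation
# itself happens through Zapier's Pipefy action, not this code.

def message_to_card(text: str, sender_email: str):
    """Return a backlog card dict for a '#experiment ...' post, else None."""
    parts = text.split(maxsplit=1)
    if len(parts) != 2 or parts[0].lower() != "#experiment":
        return None
    return {"title": parts[1].strip(), "submitter_email": sender_email}
```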

View of a Slack message

Trigger Email Notifications as Cards Move to Certain Phases

Using the Pipefy email template system, you can trigger emails as cards move into new phases, pulling in dynamic info based on card values. We use this to alert team members when the experiment is launched for an idea they submitted, detailing the hypothesis, goals, and method that the growth team has applied to their idea, as well as the expected launch and completion dates.

By passing the owner of the experiment into the reply-to field, the email recipient is encouraged to simply reply to the email to add any comments or additional info to the card. This allows for any necessary back-and-forth between the growth team and other team members, without the need for everyone to have Pipefy accounts and become familiar with another tool.

Email to the experiment idea creator

Make it Your Own

The best growth practitioners are results-oriented and find satisfaction and excitement in learning and experimenting to make the best possible product. Yet they often manage their processes with systems that run counter to that ethos: inflexible, clunky, and hard to optimize quantitatively.

If you’ve designed a solid experiment management system, you will find yourself using it daily, or at least weekly. The system you build should feel rewarding to use and help you be a more effective growth practitioner - if it doesn’t serve both of those purposes, make some tweaks until it does.

For us, this system has evolved quickly. Since everyone is actively using and contributing to the flow, opinions for how to improve the system are constantly being added to the backlog in addition to product ideas. Start with something simple that addresses your main KPIs as a team and a company, and pretty soon you’ll have the kind of living, breathing experimentation culture that is often mentioned among the principal drivers behind some of the biggest and most nimble companies in tech.  
