Thoughts on Growth — April 13, 2018

Thoughts on Growth is Reforge's weekly newsletter of must-know updates and perspectives in growth. By subscribing, you'll join a few thousand PMs, marketers, UX folks, engineers and analysts at today's top tech companies. Check out today's edition below.

1. What's hurting your email deliverability (hint: it's not what you think)



20% of opt-in emails never make it to the inbox. A lot of us have email deliverability issues, but don't always know why, or how to solve them (building opt-in lists isn't enough).

People think that sending too many emails is what damages their sender reputation, but it's not. Many senders send low volumes of email and still end up in spam.

What, then, is the key to getting into your leads' inboxes?

It's all about list hygiene.


We spoke with Boris Savoie Doyer, Email/CRM Lead @ Aspiration, and former Senior Campaigner @, about a few tactical things companies can do to improve their email deliverability.

BORIS SAVOIE DOYER, Email/CRM Lead @ Aspiration, and former Senior Campaigner @

"Send to only half the subscribers on your list, but email twice as often as you currently do. Your real number of mail-able users is half of what you think it is.

Look at it this way: if you don’t choose which of your recipients are going to be excluded, then Gmail / Hotmail / Outlook / AOL will do it for you. And they’re not going to pick based on your best interest!

Instead of sending to your entire list:

1) Remove all hard bounces.

If your email marketing tool isn't automatically cleaning out hard bounces after every send that generates them, you need to do it yourself. Continuing to send to hard bounces is one of the most damaging things you can do to your deliverability.

2) Remove all people who haven't engaged with your emails in the past 9-12 months.

The cost of sending to people who haven't engaged in 12 months far outweighs the benefit: you'll just end up blacklisted and in the spam folder.

3) Reputation is tied to domains now, not IPs, so you can't switch IPs to reset your sender score anymore. If you're landing in spam a lot, send only to your best, most engaged recipients for a while.

The key to improving your deliverability over time is to segment out and focus on recipients who've opened an email in the last 3 months. These recipients are your real list.

When you send to that list only, sending “too many emails” is no longer a relevant worry. Your engaged recipients are the people on your list who can’t get enough of your emails, so you don't want to deprive them of email volume because of the dormant recipients on your list (who abandoned their email addresses long ago and aren't seeing any of your emails anyway)."
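Boris's three hygiene rules can be sketched as a simple filter. This is an illustrative sketch, not Aspiration's actual tooling; the field names (`hard_bounced`, `last_open`) and the 12-month/3-month cutoffs as day counts are assumptions:

```python
from datetime import datetime, timedelta

def clean_list(subscribers, now=None):
    """Apply the three hygiene rules above: drop hard bounces, drop
    recipients inactive for ~12 months, and flag the 3-month engaged
    segment -- the 'real list'. Field names are hypothetical."""
    now = now or datetime.utcnow()
    mailable, real_list = [], []
    for sub in subscribers:
        if sub["hard_bounced"]:
            continue  # rule 1: never mail a hard bounce again
        if now - sub["last_open"] > timedelta(days=365):
            continue  # rule 2: dormant for 12+ months
        mailable.append(sub)
        if now - sub["last_open"] <= timedelta(days=90):
            real_list.append(sub)  # rule 3: most engaged recipients
    return mailable, real_list
```

The `real_list` segment is the one Boris suggests leaning on when your sender reputation needs repair.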



2. How cognitive biases are skewing your growth experiments



One variable we often overlook in our experiments is ourselves, and the biases we bring.

We work hard to remove excess variables to make our results more meaningful. We track statistical significance and make sure to run experiments long enough, so our sample reflects the general audience. But if we allow our biases to infiltrate the process, it doesn't matter how rigorous we are in other parts of our experiment design.
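The statistical-significance part of that rigor is mechanical enough to sketch. A minimal two-proportion z-test for comparing conversion rates between two variants (the numbers and the 1.96 threshold are illustrative, not from the newsletter):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart are the
    conversion rates of variants A and B? |z| > 1.96 corresponds to
    roughly the 95% confidence level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(200, 4000, 260, 4000)  # 5.0% vs 6.5% conversion
# z is about 2.88 -> significant at the 95% level
```

Passing a significance test is necessary but not sufficient; the biases below can invalidate a "significant" result just as easily.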

To prevent making important business decisions based on faulty inputs or conclusions, we must evaluate the biases that we, ourselves, bring into our experiments.

Biases like:

  • Self-serving bias - our tendency to see ourselves, and the work that we do, in the most positive light possible.
  • Anchoring - our over-reliance on initial data points as a reference point. In a growth experiment this may occur when we measure our results against an old and inaccurate benchmark.
  • Confirmation bias - our tendency to view the results of our experiment from the perspective that most backs up our initial assumptions about how the experiment would unfold and what we would learn.


We spoke with Joey DeBruin about how he and his team work to make sure experiment results don't get skewed.

JOEY DEBRUIN, Head of Growth and Product @ Feastly and former neuroscience researcher at Johns Hopkins and UCSF:

"Avoiding dangerous false positives due to bias in experimental design is just as important as driving quick wins.

My team uses a checklist I adapted from my days in the lab - we ask ourselves the following questions to help minimize bias:

1) How much will a false positive cost us? For example, if we're changing the payment flow, we must be absolutely sure that the change we make is having a positive effect. Whereas if we're A/B testing email subject lines, we don't have a baseline. We're optimizing for speed of learning to drive more traffic to the “winning” test group.

2) Does our experiment design answer our hypothesis clearly? The most common mistake I see teams make in their experiment design is failing to have a positive control. For example, let's say your hypothesis is that using machine learning to recommend restaurants will increase reservations over time. If you decide to test it by sending a new email that serves people smart restaurant recommendations, you also need to introduce a new email that serves people recommendations via a manual mechanism. Without this positive control you can't split out the effect between the algorithm and the simple fact that you're sending more emails.

3) Who can we ask for “peer review” on our experiment design? Ideally, ask someone who works in laboratory science. Science's peer review process forces lab scientists to be methodical about eliminating potential bias, because papers rejected over design flaws are costly and embarrassing.

4) Are we celebrating our failures as much as our successes? The worst thing you can do is build a culture that sweeps "failed" experiments under the rug and only celebrates "winning" ones. Punishing failed experiments decreases the team's incentive to suggest ideas where they don't know the answer, which is exactly what they need to get big wins long-term."
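Joey's positive-control point (2) boils down to splitting users into three arms rather than two. A sketch of deterministic arm assignment, with hypothetical arm names for the restaurant-recommendation example:

```python
import random

# "holdout" gets no new email, "manual_recs" is the positive control
# (hand-picked recommendations), "ml_recs" is the treatment.
# Arm names are illustrative, not Feastly's actual setup.
ARMS = ["holdout", "manual_recs", "ml_recs"]

def assign_arm(user_id, seed=42):
    """Deterministically bucket a user into one of the three arms so the
    algorithm's effect can be separated from the 'one more email' effect."""
    rng = random.Random(f"{seed}:{user_id}")
    return rng.choice(ARMS)
```

Comparing `ml_recs` against `manual_recs` isolates the algorithm; comparing either against `holdout` measures the effect of simply sending more email.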



3. How to quantify your qualitative data with a single chart


We've previously shared growth thoughts on collecting user feedback, but it's important to remember that the qualitative data you gather from users is only as useful as the insights you pull from it. Softer data usually needs some structure before you can quantify it and find those insights.

It's easy to fall back on our biases and miss the real insight in qualitative data if we don't quantify the results. With user feedback, this means bucketing opinions into themes, allowing you to evaluate the feedback as a whole and identify the right things to focus on.

The Pareto chart is a tool for organizing, quantifying, and visualizing user feedback (it's one of the seven basic tools of quality control). By grouping and ordering feedback in a Pareto chart, product teams can visualize the distribution of user issues and where and how they should prioritize the action they'll take.
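The two ingredients of a Pareto chart are theme frequencies sorted in descending order plus a cumulative-percentage line. A minimal sketch, assuming feedback has already been bucketed into themes (the theme names below are invented):

```python
from collections import Counter

def pareto(feedback_themes):
    """Order themes by frequency and compute each theme's cumulative
    share of all feedback -- the data behind a Pareto chart."""
    counts = Counter(feedback_themes).most_common()  # sorted descending
    total = sum(c for _, c in counts)
    rows, cum = [], 0
    for theme, count in counts:
        cum += count
        rows.append((theme, count, round(100 * cum / total, 1)))
    return rows

themes = ["checkout bug"] * 5 + ["slow search"] * 3 + ["pricing confusion"] * 2
for theme, count, cum_pct in pareto(themes):
    print(f"{theme:20s} {count:3d} {cum_pct:5.1f}%")
```

Reading down the cumulative column shows where the "vital few" issues end and the "trivial many" begin, which is exactly the prioritization signal the chart is built for.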


BRIAN BALFOUR, CEO @ Reforge & former VP Growth @ Hubspot:

"Qualitative data like user feedback can be like an inkblot test — a dozen people can review it and each can walk away thinking they saw something different.

That's because we can’t help but bring our own preconceived notions to qualitative data. The piece of feedback that is most compelling is often the one that most aligns with our own views and opinions. We all walk away having seen what we want to see.

By quantifying the qualitative, we take a step back. We aren't grabbing a single anecdote and holding it up as the answer. Instead, we're working to identify the trends and correlations in the data - whether that's topics that come up repeatedly, or patterns we're seeing in how different segments (i.e., free vs. paid users) behave within the product.

By itself, purely quantitative data is too rigid, and it requires us to find the story in it in order to uncover real insights. In the same way, purely qualitative data can be too much “story,” and the best way to rein that in is to quantify it and give it structure."



4. Rethinking Paul Graham's “Do Things That Don't Scale”

Effort-Return Matrix & Company Stage



Paul Graham famously extolled the virtues of “doing things that don't scale” in his 2013 blog post.

It was advice Y Combinator gave to all of their companies looking to find early traction. But as companies and products grow and mature, those tactics need to be reviewed with fresh eyes and put to the ROI test.


Kieran Flanagan, VP Marketing/Growth @ Hubspot offered some thoughts to put this into perspective with an example.

KIERAN FLANAGAN, VP Marketing/Growth @ Hubspot:

"The best way to evaluate opportunities is to consider the effort you need to invest vs. the return you'll get.

Instead of "do things that don't scale", you could reframe it to "do things that have a meaningful return."

But, what you consider “meaningful” changes as you grow.

When you're a start-up with 10 customers, you may be spending effort on hosting meet-ups, calling up potential customers to do demos via Skype or a range of other tactics we consider "non-scalable."

Maybe that's how you grow to 20 customers, and it has a real impact on your business (it's 100% growth). That's a good return for the effort invested. It's high effort but high return, based on the current state of the business.

But let's say you grow to 200 customers: expending all that effort for another 10 customers is probably a bad return on the investment you've made. It's high effort for low return.

I measure everything we do through that lens (I map all of our current tactics on an actual 2x2 effort/return matrix).

As much as possible we try to avoid high effort work that results in a low return on the metrics we value. Continually getting sucked into high effort for low return opportunities can grind your growth to a halt."
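Kieran's 2x2 can be sketched as a simple quadrant lookup. The 1-10 scoring scale, the midpoint threshold, and the example tactics are assumptions for illustration, not his actual matrix:

```python
def quadrant(effort, ret, threshold=5):
    """Place a tactic on a 2x2 effort/return matrix.
    Scores are assumed to run 1-10 with 5 as the midpoint."""
    e = "high effort" if effort > threshold else "low effort"
    r = "high return" if ret > threshold else "low return"
    return f"{e} / {r}"

# Hypothetical (effort, return) scores for a company at ~200 customers:
tactics = {
    "host meetups": (8, 2),   # great at 10 customers, poor ROI now
    "SEO content": (6, 8),
    "1:1 demo calls": (9, 3),
}
for name, (e, r) in tactics.items():
    print(f"{name}: {quadrant(e, r)}")
```

The point of the exercise is the bottom-right cell: anything landing in "high effort / low return" is a candidate to cut before it grinds growth to a halt.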


