Kieran Flanagan is the VP of Growth at HubSpot, a marketing and sales software company. He is responsible for the growth of HubSpot's freemium products, monetization of their freemium funnels and optimization of their global web strategy.
The future of growth belongs to product-driven companies. At HubSpot, we realized this a few years ago, which is why we disrupted our own business model before anyone else could.
At the time, HubSpot was still growing 30-40% per year on the shoulders of our original marketing- and sales-driven inbound model. Despite that success, we consciously chose to upend what had been working by launching our first freemium products in 2014.
Market dynamics and consumer behavior have been changing: increasingly, consumers expect to use software and extract value from it before buying. To stay relevant over the long term, we needed to adapt or risk “getting our lunch eaten.”
We entered the world of freemium in 2014 with Sidekick, a sales automation product, and HubSpot CRM. In 2016, we rebranded Sidekick as HubSpot Sales and deepened our commitment to becoming a product-first company, launching Customer Hub, a freemium product for customer success, in 2017.
Product is the Future of Growth
Over the past 24 months, I’ve been dedicated to building out product-driven growth at HubSpot - acquiring users into our freemium products and working with product and engineering to upgrade them to paying users.
What do I mean when I say product-driven growth?
I’m talking about using in-product levers to grow, in place of or in conjunction with external marketing and sales channels. When people can try your product for free, they experience the value of your product before making the decision to pay. This turns more people into happy users, creating more opportunity for them to tell their friends, who in turn tell their friends. This can trigger virality and widen the top of your funnel.
In a new reality where Google and Facebook are the only two platforms that offer user acquisition at scale, product-driven growth allows you to decrease customer acquisition cost by reducing dependence on paid marketing - and, for B2B products, on sales.
Because it’s scalable and cheaper, product-driven growth is how the biggest products have grown so large so quickly. It’s also how new products will win in the future.
How HubSpot Experimented Its Way Into Freemium Growth
The first step in adding freemium to our go-to-market strategy was setting the overarching vision of where we wanted to go. From there, our goal was to run experiments that either iterated toward the vision or informed how the vision needed to evolve.
We set our sights on providing companies from big to small with the right tools to grow. We wanted customers to be able to get started with our marketing, sales, and customer success products for free, and upgrade to different packages as their needs grew. Navigating the associated shift to product driven growth (while still growing 30-40% a year!), hasn’t been easy. But it has brought valuable learnings, which I’ll share with you in this post.
I’ll walk you through the process our growth team used to experiment our way into higher and higher impact growth opportunities for HubSpot’s freemium products. I’ll detail the initiatives that drove step-change improvements to our funnel, rather than just small percentage gains, and the principles we used to arrive at those initiatives.
Here’s the high level process that worked for our growth team:
- Get wins on the board to build trust with leadership and other teams, such as product and engineering.
- Prioritize growth experiments you can execute quickly to demonstrate results.
- Once you start to see a high rate of test failures or non-results, move on to tackle more complex growth opportunities (take big swings).
- Eventually, tell your CEO you want to test pricing ;-) (take even bigger swings)
If you already work in growth, this process of getting quick wins and laddering up should be familiar. Where I’ll add value is by transparently sharing how the growth team at a public company like HubSpot actually executed this process and applied it to build a freemium business, along with the learnings and results that came out of it all.
Building a Product Qualified Lead Playbook to Get Early Wins
Not all tests are created equal in terms of the time and resources needed for execution. To start, we categorized all our experiment ideas as easy, medium, or hard. Then, we prioritized and executed on the easier experiments to get ourselves our first quick wins.
Once we added freemium to our go-to-market, product qualified leads (PQLs) - leads from free users who show interest in paid features - became a key source of both touchless sales and leads for the sales team.
Initially, we noticed two important things about our PQLs:
- Tests to optimize capturing them generally fell in the easy or medium category for resources and time required.
- They converted 3-4x higher than our marketing qualified leads.
This was great news - PQL optimization was a high impact, lower effort area for experimentation. So we doubled down here and built out a playbook for optimizing PQLs.
Our first objective was to identify which PQLs offered the biggest growth opportunities. To help us figure this out, we segmented our PQLs into 3 buckets:
- Hand Raise PQLs: We showed free users calls to action within the product for paid-only features, or the opportunity to get assistance with a particular task; users would interact with the CTA to reach out to us. We used these sparingly.
- Usage PQLs: We triggered a call to action based on product usage - for example, using up all of your free call minutes or email templates would trigger an option to upgrade or talk with sales.
- Upgrade PQLs: These were CTAs on features only available to paid users; they sent users to an upgrade page.
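To make the buckets concrete, here’s how that classification might look in code. This is a purely illustrative sketch: the event fields, trigger names, and mapping are my inventions, not HubSpot’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical PQL event - the field names are illustrative, not HubSpot's schema.
@dataclass
class PQLEvent:
    user_id: str
    trigger: str  # what the user interacted with in the product

# Assumed mapping from in-product triggers to the three PQL buckets.
BUCKETS = {
    "assist_cta_clicked": "hand_raise",     # user asked for help with a task
    "call_minutes_exhausted": "usage",      # free usage limit reached
    "email_templates_exhausted": "usage",
    "paid_feature_clicked": "upgrade",      # feature gated behind a paid tier
}

def classify_pql(event: PQLEvent) -> str:
    """Return the PQL bucket ('hand_raise', 'usage', or 'upgrade') for an event."""
    return BUCKETS.get(event.trigger, "unclassified")

print(classify_pql(PQLEvent("u1", "call_minutes_exhausted")))  # usage
```

The useful property of an explicit mapping like this is that every PQL interaction lands in exactly one bucket, which is what makes the bucket-level comparisons below possible.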
Then we built a dashboard to show the funnel for each PQL (thanks to Scott Tousley & Sam Awezec). We needed to get the correct data infrastructure in place to track each, otherwise we wouldn’t have known which growth experiments were driving results.
Snapshot of our PQL Dashboard (with dummy data)
The PQL dashboard showed us the performance of each PQL point within our freemium products: how many times people interacted with it, how many upgrades it generated, and the average selling price (ASP) of each one.
We could use the dashboard to look at our PQL data in many different ways. For example, by bucketing the PQLs into their different types (hand raise, usage, upgrade), we could see that usage PQLs were our best-performing category. This aligned with our fundamental hypothesis for why product-driven companies do so well: when users can use your product and get value out of it before being asked to pay, they’re more likely to convert when you do show a paywall.
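The aggregation behind a dashboard like this is straightforward. Here’s a rough sketch with dummy data; the PQL point names, counts, and deal values are all invented for illustration.

```python
from collections import defaultdict

# Dummy interaction log: (pql_point, upgraded?, deal_value). Invented data.
events = [
    ("usage:call_minutes", True, 600), ("usage:call_minutes", False, 0),
    ("usage:email_templates", True, 400),
    ("hand_raise:import_help", True, 800), ("hand_raise:import_help", False, 0),
    ("upgrade:reporting", False, 0),
]

# Roll up interactions, upgrades, and revenue per PQL point.
stats = defaultdict(lambda: {"interactions": 0, "upgrades": 0, "revenue": 0})
for point, upgraded, value in events:
    s = stats[point]
    s["interactions"] += 1
    if upgraded:
        s["upgrades"] += 1
        s["revenue"] += value

# Report each point's funnel numbers plus its average selling price (ASP).
for point, s in stats.items():
    asp = s["revenue"] / s["upgrades"] if s["upgrades"] else 0
    print(point, s["interactions"], s["upgrades"], asp)
```

With the per-point stats in hand, bucket-level comparisons (hand raise vs. usage vs. upgrade) are just a matter of grouping on the prefix.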
We also used the dashboard to find opportunities to improve the conversion rate of hand raise PQLs, or areas of the product where we could test adding new ones.
Given we scored all tests on both their potential and ease of implementation, we decided to focus our initial efforts on hand raises. They were easy for us to iterate on, had good potential and had the added benefit of showing us points within the product where free users were highly engaged.
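One simple way to operationalize that kind of scoring is to rank the backlog by potential weighted by ease. This is a hypothetical sketch, not our actual prioritization model; the experiment names and scores are made up.

```python
# Hypothetical experiment backlog: (name, potential 1-10, ease 1-10).
backlog = [
    ("new hand-raise CTA on import", 7, 9),
    ("redesign pricing page", 9, 3),
    ("tweak upgrade-modal copy", 4, 10),
]

# Rank by potential * ease, so quick high-upside tests surface first.
ranked = sorted(backlog, key=lambda t: t[1] * t[2], reverse=True)
for name, potential, ease in ranked:
    print(f"{name}: score={potential * ease}")
```

Under this scoring, the high-potential but hard-to-ship pricing redesign falls to the bottom of the queue early on, which matches the "quick wins first, big swings later" sequencing described above.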
Below is an example of the type of experiments we ran for hand raise PQLs. From our analysis, we knew that once people import their data into our CRM, the probability of them both upgrading and retaining is high, because it’s easier for them to see how valuable our product is. But we could also see that people who indicated their previous CRM was a spreadsheet struggled with this step. Our hypothesis was that by adding in some human assistance, we could increase both the activation and upgrade rates of this cohort of users, and also gather information on how we could make it easier for people to import their data within the product.
A hand raise PQL experiment for users who previously used a spreadsheet as their CRM
Not only did this PQL point become one of our most popular upgrade points, it also gave us information on how we could improve our onboarding around key actions, and got us thinking about how we could provide more support for free users within the product.
Today our PQL model has evolved toward usage PQLs, as they continued to convert at higher rates. You can start using most of our features for free within certain limits (e.g., sending a certain number of email templates, booking meetings on your calendar, or using a certain amount of call minutes). We have few gated features, and hand raises have transitioned to live chat, which I’ll speak to later in the post.
After 12 months of building out and optimizing the PQL playbook, we started to see diminishing returns on our PQL optimization efforts. But, having experimented our way into so many measurable funnel improvements, our growth team had earned the trust and support of the leadership and product teams. This gave us the opportunity to take on some big bet experiments that produced substantial payouts.
What You Need to Know About Taking Big Bets
Since big bets require a lot more resources and take longer to execute, drawing on your political capital and credibility in the process, it’s important to make the right bet. But how?
We identified two key ways to home in on the right big bets:
- Speak with other growth teams experimenting with the same types of things you’re considering so you don’t have to reinvent the wheel - gather learnings to help you produce results faster.
- Do the math to make sure that, if successful, the big bet will significantly increase a key metric.
Once we had picked the low-hanging fruit by optimizing most of the PQL points, we moved on to the bigger fruit higher up the tree: optimizing the PQL funnel itself. That ended up requiring more resources to test, but it also brought big long-term improvements.
Knowing where to make your big bets isn’t easy, but one of the most important things our growth team did was talk to other companies that already had successful freemium models. One of those companies was Dropbox. From talking to their growth team, we learned that they were seeing good results from using live chat to support upgrade opportunities in the product.
We realized that a similar strategy could be very helpful during our onboarding. Because our products had such a broad set of use cases, it wasn’t easy to identify user intent. It also wasn’t easy to create the optimal onboarding experiences to help users solve the problem they came to solve. We thought live chat, though resource intensive, could help us identify intent. We also saw it as an opportunity to give users a nudge in the right direction so they would experience the value they came for as fast as possible.
But before jumping in, we did the math to make sure the potential outcome justified the investment. We determined the baseline conversion rate, did some sensitivity analysis on the impact of potential conversion improvements from adding live chat, and then projected the impact on revenue.
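The back-of-the-envelope math looked something like this. Every number here is invented for illustration; the point is the shape of the sensitivity analysis, not the figures.

```python
# Invented baseline numbers - for illustration only.
monthly_pql_interactions = 10_000
baseline_conversion = 0.04       # 4% of PQL interactions upgrade
avg_selling_price = 500          # $ per upgrade

baseline_revenue = monthly_pql_interactions * baseline_conversion * avg_selling_price

# Sensitivity: what does each relative lift in conversion add per month?
for lift in (0.10, 0.25, 0.50):
    new_revenue = (monthly_pql_interactions * baseline_conversion * (1 + lift)
                   * avg_selling_price)
    print(f"{lift:.0%} lift -> +${new_revenue - baseline_revenue:,.0f}/month")
```

Running a few lift scenarios like this tells you whether even an optimistic outcome moves revenue enough to justify the build cost, before you commit engineering and services time.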
When you’re analyzing the potential impact of a big bet in your own product, don’t just ask what the upside is. Make sure the experiment will impact a meaningful metric, not a vanity metric. In this instance, our analysis indicated live chat could create a significant increase in revenue, so we went forward with testing in an iterative way.
Increasing Conversions by 50%
In HubSpot Sales, when a user triggered a PQL, they would receive a modal that invited them to upgrade or talk to sales. Clicking on “Talk to sales” brought them to a form where they could submit their details for someone to call them.
To start, we tested if our conversion rate would improve by giving the user more options to interact with us.
The resources needed to run this experiment were considerably higher than for optimizing PQL points. Although we didn’t code live chat into the app, we did take users to a web page where they could start chatting with a support agent about the product. That meant we needed someone from the services team to be our live chat resource for this test.
We also allowed the user to schedule a meeting instead of waiting for someone to call them. If they filled out their email address to schedule a meeting they would receive a kickback email from one of our reps, along with a link to the rep’s calendar (via our Meetings app), that allowed the user to book time.
In this experiment, ‘Call Now’ just provided our phone number. If it proved successful, we planned to integrate this CTA with our “Calling” feature so users could reach a sales rep immediately.
The experiment proved successful. People wanted options, and we increased our overall conversions by 50%.
So we doubled down. We optimized the experience further by allowing users to book time with a rep within the app instead of getting a kickback email from a sales rep with a link to their calendar, increasing the conversion rate by a further 20%.
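Note that the two improvements compound. Assuming the 20% lift was measured against the post-50% baseline (which is how we tracked it), the combined effect works out as follows:

```python
# Illustrative compounding of the two lifts (assuming the 20% was
# measured against the post-50% baseline).
baseline = 1.00
after_options = baseline * 1.50       # +50% from offering more contact options
after_in_app = after_options * 1.20   # +20% from in-app meeting booking
print(f"combined lift: {after_in_app - 1:.0%}")  # 80%
```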
This resulted in significant gains when rolled out across all PQL points.
The Double Impact of Live Chat in Onboarding
After launching our CRM and sales acceleration tools, we learned that a large percentage of the people signing up for both had no prior experience using similar software.
And because our products had a broad set of use cases, it was easy for users to become overwhelmed by choice during onboarding. Attempting to alleviate the onboarding confusion, we ran multiple experiments highlighting different use cases to find out what the majority of users wanted to do.
Multiple use case onboarding to determine intent
But given the variety of use cases available, deciding on an onboarding sequence to activate users into weekly active teams (one of our key metrics) was complicated.
From our earlier experiments, we had seen positive signals that users wanted to engage via live chat. So, at one of our monthly performance meetings I showed this chart:
The chart was a crudely drawn version of how live chat would facilitate onboarding. Users who signed up for the product and encountered friction during onboarding could reach out to a user success coach via live chat. The coach could then help the user navigate the friction points and become activated, while also relaying user feedback to the product team. My goal with this chart was to demonstrate how adding live chat to onboarding could have a double impact, and to get buy-in from leadership and the product team that a larger experiment was worth the investment.
We pulled data to identify which usage actions correlated strongly with active teams and upgrades. The data showed that the highest leverage usage actions for new users were sending the first email template, importing their data, or getting the first meeting booked on their calendar.
We would empower the success coaches with this information. This would allow us to focus the live chat experience on the specific usage actions that demonstrated the value of our products to users and drove growth for the business. At the same time, the success coaches would be able to collect data on common friction points and relay it back to the product team, creating a feedback loop for improving onboarding.
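A toy version of that analysis: for each usage action, compute the upgrade rate among users who took it relative to the overall base rate. The cohort data and action names below are dummy values for illustration.

```python
# Dummy cohort: for each user, the set of actions taken and whether they upgraded.
users = [
    ({"sent_template", "imported_data"}, True),
    ({"sent_template"}, True),
    ({"imported_data"}, False),
    (set(), False),
    ({"booked_meeting"}, True),
    (set(), False),
]

# Overall upgrade rate across the whole cohort.
base_rate = sum(upgraded for _, upgraded in users) / len(users)

def upgrade_lift(action: str) -> float:
    """Upgrade rate among users who took the action, relative to the base rate."""
    cohort = [upgraded for actions, upgraded in users if action in actions]
    return (sum(cohort) / len(cohort)) / base_rate if cohort else 0.0

for action in ("sent_template", "imported_data", "booked_meeting"):
    print(action, round(upgrade_lift(action), 2))
```

Actions with a lift well above 1.0 are the ones worth steering users toward, both in touchless onboarding and in the prompts given to success coaches. (A real analysis would also control for confounders; correlation with upgrading doesn’t prove the action causes it.)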
Over time, we wanted our chart for onboarding to look like the image below: the more feedback we get from coaches, the better our touchless onboarding gets, and the less we rely on live chat.
Naturally, we wanted to run a minimum viable test (MVT) to make sure we were on track before jumping in, right? (MVTs are the lean-startup way to run growth experiments.)
The Problem with MVTs
The problem with MVTs is that you don’t know just how ‘minimal’ the test can be and still produce accurate results. To get some data back fast, we hacked together what we thought reached the minimum bar for an MVT. It turned out to be a sub-par experience for the user, and it didn’t give us clear feedback on whether live chat could drive the results we thought it could.
Our initial results were terrible. When we dug into the reasons why, we saw from the quantitative and qualitative data that too often when the user tried the live chat, a success coach wasn’t available. In this case, we suspected that we got the 'minimal' (not the hypothesis) wrong.
The risk of building an MVT that’s ‘too minimal’ is that it gives you a false negative, and you may give up rather than iterate. When running MVTs, you need to be clear on what the bar is for ‘minimal’ to get accurate results - did the test fail? Or was the bar too low to draw a solid conclusion?
For our live chat test, we needed to improve the experience for users. Although it required a lot more developer time upfront, we had enough confidence in the hypothesis to swing big and accept the wasted efforts if wrong.
In the next round of tests, we sent the user to a dedicated page where they were matched with a coach who was immediately available. We also gave them details about the coach so they would know who they were speaking to and how they could help them. This time the results were positive.
We saw a big spike in team activation rate and upgrades for users who live chatted with a coach. Since we had already mapped out a long-term vision for this test if it showed positive signals, we immediately began scaling this process, hiring user success coaches to onboard people onto our products. These coaches are now a core part of our freemium go-to-market and we are using their feedback to improve onboarding.
Now that you’ve heard why and how HubSpot made the shift from a marketing- and sales-driven model to a freemium model, I hope you’ve picked up some useful insights into whether product-driven growth may be right for your product.