The 3x Conversion Playbook
Step-by-step instructions to increase customers and fuel your business.
I run the Growth Product Team at Rev.com: 1 Designer, 7 Engineers, and 2 Product Managers. We are responsible for building products to acquire new customers and increase revenue across Rev’s 6 businesses. The team has acquired over 100,000 customers and we’re just getting started.
In the past 18 months, we have tripled the conversion rate for 3 separate Rev-owned services — Rev (On-Demand Human Audio Transcription), Temi (AI Transcription Service), and MathElf (On-Demand Math Tutoring) — across three different mediums — Mobile Web, Desktop Web, and Mobile Apps.
—
I started the Growth Team at Rev and focused on one service line and one channel. At first, my work (landing page design and graphics, service information, copy, and checkout flow) tanked all relevant metrics. There was doubt brewing. By focusing on Growth as a process rather than a series of growth hacks (hat tip to Reforge), we saw the metrics turn and then skyrocket within a couple months.
After seeing success, I petitioned for more resources. I was turned down.
To prove the growth process I was building was not just random luck, I set out to do it again. This time, I called my shot, starting with writing out a playbook that anyone could run with.
At first, it was ugly.
I cut excess steps.
Added necessary ones.
Honed the process.
Applied the Feynman technique.
Then, I applied the playbook to a new service line and saw another 3x conversion improvement! It was gold, and I felt confident we could replicate our success.
Overview
What I am sharing here is the conversion playbook that you can apply to websites, mobile apps, or any business you’re trying to grow. The steps listed will help you work on the right projects. By being as user-centric as possible, you will see the best results.
This is an open book process. Please ask others for help—no work is done in a silo.
Step 1: Information Gathering (Understand the Problem)
Key Questions
- Where is the biggest opportunity in improving conversion?
- Which pages or steps have the highest bounce and exit rates?
- What information is lacking?
- What are people not engaging with?
Gather Your Data. Look at all sources of funnel information. If you're not tracking your funnel, you won't be able to accurately measure changes.
Observe customer behavior, don’t assume it
Understand your customers
- Who are they? What positions do they hold and at what sort of company?
- What is important to them? What are they trying to accomplish?
- Look at NPS results and past orders, and talk to customers directly
Step 2: Develop a hypothesis. Seek to understand the problem. Get ideas for improvements.
Talk to customers
- Intercom
- Telephone calls
- Emails
- NPS Survey
- Other surveys
Watch customers
- FullStory
- Talk to strangers
- Craigslist — recruit strangers to try your product. Great for general exploration
- Validately — remote user testing. Use for one-off questions.
Talk to experts
- Wherever you can find them
- Past examples: Online Geniuses Slack Group, Conversion Experts — Chris Neumann @ CRO Metrics, Joshua Bretag @ Blueprint Solutions
Copy Ideas from Successful Websites
- Dropbox, Box, Stripe, Square, Grammarly, Airbnb, Asana, Drift, etc.
- Depends on what you’re designing for — should always be relevant to the problem
- No need to start from scratch
Develop a hypothesis on what will change and why. The hypothesis should be tied directly to a metric and take the following form: The current state is ___. If we do ___, then ___ will happen to our key metric because ___.
Create a list of ideas based around your hypothesis and learnings.
Step 3: Prioritize Tests Using the ICE Framework (or a Related Framework)
Score each test on 3 areas
- Impact, Confidence of Success, Ease of Implementation
- High = 3, Medium = 2, Low = 1
Rank the tests by score
Ignore anything below 7
- Before working on a 6, rinse and repeat Steps 1 & 2 to find better tests
- Small tests will likely not show statistically significant results in a short timeframe
- If necessary, batch a few 6s together to create a bigger overall test
Focus on High impact wins
Prioritize small, quick wins over huge “big bang” improvements that may take a month to get in front of customers.
Bring the top tests to your peers and get feedback on the best ones. Pick one and proceed to the next step.
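The ICE scoring and ranking above can be sketched in a few lines of Python. The test names and scores here are hypothetical examples, not from Rev's actual backlog:

```python
# Minimal ICE prioritization sketch. Test names and scores are
# hypothetical examples for illustration.

def ice_score(impact, confidence, ease):
    """Each input: 3 = High, 2 = Medium, 1 = Low."""
    return impact + confidence + ease

ideas = [
    ("Simplify checkout copy", 3, 2, 3),
    ("Redesign pricing page", 3, 2, 1),
    ("Change button color", 1, 1, 3),
]

# Rank the tests by total score, highest first.
scored = sorted(
    ((name, ice_score(i, c, e)) for name, i, c, e in ideas),
    key=lambda x: x[1],
    reverse=True,
)

# Ignore anything below 7, per the playbook.
shortlist = [(name, s) for name, s in scored if s >= 7]
print(shortlist)
```

With these example scores, only "Simplify checkout copy" (3 + 2 + 3 = 8) clears the bar of 7.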
Step 4: Design Your Visual and Content Changes
Visual Design
- Create rough mock-ups showing what you are changing and how (I use Skitch)
Nail the copy
- Appeal to the customer and their sense of value
- Simplify. Take the words you want to say, cut them in half, then cut them in half again
- Make it honest and believable
Ask yourself, is this valuable for the user? Will they benefit from this information?
Get v1
Iterate until completed
- Show to other people (product/design/management/strangers) and get feedback
- Incorporate feedback with all of your other inputs
- Look at more best-in-class examples
- Repeat
Get sign-off on the test
Step 5: Final Checklist Questions
- With the resources available, is this the best test we can be running right now?
- Is this the most efficient way to test the hypothesis?
- Is our test instrumented correctly? Are we collecting the right data?
- What is the right amount of data we need to gather for this test to be valid?
- What do you expect the results to be and why?
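For the question about the right amount of data, a standard two-proportion sample-size formula gives a rough answer. This is a generic statistical sketch, not Rev's internal tooling; the baseline and target conversion rates are hypothetical:

```python
# Rough sample-size sketch for "how much data do we need?".
# Assumes a standard two-proportion test at 95% significance
# and 80% power; the conversion rates below are hypothetical.
from statistics import NormalDist
from math import ceil, sqrt

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in each arm to detect a lift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha=0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

# e.g. detecting a lift from a 5% to a 6% conversion rate
print(sample_size_per_arm(0.05, 0.06))
```

The takeaway matches the note in Step 3: small effects need thousands of visitors per arm, which is why small tests rarely reach significance quickly.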
Step 6: Run Your Test
- Confirm test with your team/key stakeholders
- Prioritize the test in the engineering backlog
- Run the test until you meet the appropriate (predetermined) threshold of data
- Look at the results
Step 7: Results
Measure whether your test (challenger) had a higher conversion rate than the current design (champion).
Declaring a winner
- 1st method: challenger > champion with >=95% statistical significance
- 2nd method: challenger > champion and provides value for users
- Value = information, context, etc
Losers
- Challenger <= Champion
- Challenger slightly > Champion, but not statistically significant and not clearly better for users (e.g., a purely aesthetic change or small copy tweak)
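The first method of declaring a winner (challenger beats champion at >=95% statistical significance) can be checked with a standard one-sided two-proportion z-test. This is a generic sketch, not Rev's actual analysis code; the conversion counts are hypothetical:

```python
# Minimal significance check for champion vs. challenger, assuming
# a one-sided two-proportion z-test. Counts below are hypothetical.
from statistics import NormalDist
from math import sqrt

def challenger_wins(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """True if challenger (b) beats champion (a) at >=95% significance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return NormalDist().cdf(z) >= 1 - alpha

# e.g. champion: 500/10,000 conversions; challenger: 600/10,000
print(challenger_wins(500, 10000, 600, 10000))
```

With these hypothetical numbers, a 5% to 6% lift on 10,000 visitors per arm clears the 95% bar; a lift to only 5.05% would not.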
Step 8: Analysis
- Was our hypothesis correct? Why or why not?
- What is the major takeaway?
- Are there any common learnings with our other tests?
- Does this influence the next test we should run?
ALWAYS communicate results and learnings from the tests. Even the losers.
You lose credibility if you only talk about the wins and gloss over losses.
Final Notes about Rev’s Growth Team
- The Growth team is hypothesis and data-driven
- We believe in constant experimentation. Always Be Testing
- We expect most tests to fail. The test failed, you didn’t fail
- We don’t need to test everything (e.g. button colors)
If you are working on Growth or have questions about the above, please reach out: Barron.Caster@gmail.com.