In recent years, the idea of “pirate metrics” has gained wide adoption. Pirate metrics stands for AARRR: Acquisition, Activation, Retention, Referral, and Revenue. The precise definition of each metric may differ based on your business. For some e-commerce businesses, for example, acquisition could mean getting a visitor to sign up for your newsletter; activation (or reactivation) could be measured as the first purchase in the last 6 months or since joining; retention as repeat purchases; revenue as total sales; and referral as the number of friends invited by a user who signed up for the newsletter or made a purchase. For a subscription commerce business, retention could instead be measured as the churn rate (the percentage of customers who cancel their subscription).
The work of lifecycle marketing or CRM typically begins after the initial acquisition, and is about optimizing the activation, retention, repeat revenue and referral rates. How can you achieve marketing success and improve these metrics?
Here is a 5-step guide to taming these metrics:
- Construct a “state diagram” of the lifecycle stages for your business: The pirate metrics map to changes in the lifecycle state of the user; for example, the activation rate measures the transition between the new-user state and the active-customer state. As a first step to optimizing the lifecycle, draw a state diagram of how users can transition between these states. Define “active customer” based on activity within a time window (e.g. at least one purchase in the last 3 months). Customize the diagram with the states that make sense for your business; you might define additional states like “core users”, who are not only active but making frequent purchases.
- Calculate the percentages along the “edges”: Every month, look at the new/core/active/lapsed users from last month, and understand what new states they have transitioned to. Calculate the percentages of these transitions. The following 2 tables illustrate this.
|           | Count_t0 | Active_t1 | Core_t1 | Lapsed_t1 |
|-----------|----------|-----------|---------|-----------|
| New_t0    | 100      | 30        | 5       | 65        |
| Active_t0 | 50       | 30        | 5       | 15        |
| Core_t0   | 10       | 7         | 1       | 2         |
| Lapsed_t0 | 40       | 6         | 0       | 35        |

|           | Active_t1 | Core_t1 | Lapsed_t1 |
|-----------|-----------|---------|-----------|
| New_t0    | 30%       | 5%      | 65%       |
| Active_t0 | 60%       | 10%     | 30%       |
| Core_t0   | 70%       | 10%     | 20%       |
| Lapsed_t0 | 15%       | 0%      | 85%       |
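The conversion from counts to edge percentages can be sketched in a few lines of Python. The state names and counts below mirror the example tables; your own pipeline would pull them from your user-activity data.

```python
# Transition counts from last month's state (rows) to this month's state
# (columns), taken from the example tables above.
counts = {
    "New":    {"Active": 30, "Core": 5, "Lapsed": 65},
    "Active": {"Active": 30, "Core": 5, "Lapsed": 15},
    "Core":   {"Active": 7,  "Core": 1, "Lapsed": 2},
    "Lapsed": {"Active": 6,  "Core": 0, "Lapsed": 35},
}

def transition_percentages(counts):
    """Convert per-state transition counts into row-wise percentages."""
    percentages = {}
    for state, targets in counts.items():
        total = sum(targets.values())
        percentages[state] = {
            target: round(100 * n / total) for target, n in targets.items()
        }
    return percentages

print(transition_percentages(counts))
# e.g. the Active row comes out as 60% / 10% / 30%, matching the second table
```

Each row is normalized by its own total, so the percentages along the outgoing edges of any state always sum to roughly 100%.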
- Assess opportunities by benchmarking and monitoring over time: By looking at the percentages along the edges, you discover where your opportunities and challenges lie. For example, you may discover that only 60% of previous month’s active users stay active in the current month, and that might be a good metric to try and improve through a targeted effort.
- Construct targeted experiments for each step: Once you have assessed the opportunities, you can create experiments that might improve the metrics. For instance, in a subscription commerce environment, you might have a hypothesis that you could increase the retention rate by focusing on the edge between the following two states: customers who have subscribed for 3 or more months, and lapsed customers. To improve this metric, you might come up with multiple experiments; one example would be to give users who have subscribed for 3 months a heavy discount to sign on to an annual plan. You could communicate this discount over email, and measure whether the email improved the metrics on the relevant edge.
- Measure, and iterate: Once you start experimenting, you need to measure how well the experiments are working, and iterate. Successful experiments can be rolled out more broadly, while unsuccessful ones feed into your next round of hypotheses.
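Steps 4 and 5 can be sketched as a simple comparison of one edge's rate between a control group and a treatment group. All counts below are hypothetical; in practice they would come from your own tracking of the discount-email experiment.

```python
# A hedged sketch: did the discount email reduce the rate on the
# "3-month subscriber -> lapsed" edge? The numbers are made up.
def edge_rate(transitioned, total):
    """Share of users in a group who made a given state transition."""
    return transitioned / total

# Hypothetical split: 400 three-month subscribers in each group,
# where the treatment group received the annual-plan discount email.
control_lapse_rate   = edge_rate(transitioned=120, total=400)  # 30% lapsed
treatment_lapse_rate = edge_rate(transitioned=96,  total=400)  # 24% lapsed

lift = control_lapse_rate - treatment_lapse_rate
print(f"Lapse rate dropped by {lift:.0%}")
```

A real measurement loop would also check whether the difference is statistically significant before declaring the experiment a success.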
How does this approach compare with the analytical approach known as cohort analysis? Cohort analysis is a great tool for a couple of analytical use cases:
- Calculating the lifetime value of a user
- Understanding if more recent cohorts are performing better than older cohorts
However, while cohort analysis is a great analytical tool, it doesn’t by itself provide the actionable insights you need to improve your lifecycle metrics. The key difference is that cohort analysis primarily classifies users by when they first signed up, rather than by their current activity level. The state transition model outlined above groups all active users together, irrespective of when they joined, making it easier to focus on a few metrics of interest that line up well with experiments you can create.
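The contrast can be made concrete with a tiny example: the same user records grouped two ways. Cohort analysis keys on signup month, while the state-transition view keys on the current lifecycle state. The records below are hypothetical.

```python
from collections import Counter

# Hypothetical user records: signup month plus current lifecycle state.
users = [
    {"signup_month": "2014-01", "state": "Active"},
    {"signup_month": "2014-01", "state": "Lapsed"},
    {"signup_month": "2014-03", "state": "Active"},
    {"signup_month": "2014-03", "state": "Core"},
]

by_cohort = Counter(u["signup_month"] for u in users)  # cohort view
by_state  = Counter(u["state"] for u in users)         # lifecycle-state view

print(dict(by_cohort))  # groups users by when they joined
print(dict(by_state))   # groups users by what they are doing now
```

The second grouping is the one that maps directly onto edges you can target with an experiment, regardless of when each user first signed up.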
Once you get started on this approach, there is no limit to the number of experiments you can run to optimize the metrics along the edges of your state diagram, other than the limits imposed by your tools for measurement and for running the experiments. Creating an experiment typically involves some form of messaging: stitching together content and offers, delivering the messages, and measuring their impact.
At Blueshift, we are building tools that will enable marketers to monitor user states and create the right experiments easily. Stay tuned for updates from us; in the meantime, we would love to hear how you think about driving your lifecycle metrics.