In the previous post of the series, we explored some of the challenges of “seeing” product usage data as a series of events. At Amplitude we’ve noticed that some teams are far more effective at deciding what to measure. What do they do differently in addition to getting familiar with event-based data?
Most use what we (and others) call a “t-shaped” instrumentation approach—they instrument select events across the user journey, and then go deep where they have the most questions. They also intuitively understand that the relationship between work spent instrumenting events and the insights you get is non-linear. Let me explain.
Say we instrument a single event that fires when customers use our product, with a basic set of user properties (e.g. Browser, Country, Device Type, Referring URL). Even with a single event we can understand counts (e.g. # of users using Chrome 83.X), trends (e.g. browser usage over time), comparisons of users by property values, basic retention, retention by referring URL, daily/weekly/monthly activity, and more. As with many analytics products, Amplitude is configured to capture these user properties automatically (though that can be turned off).
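To make this concrete, here is a toy sketch of that single-event world. The event records and property names below are illustrative, not Amplitude's actual schema or SDK, and the "analysis" is just a distinct-user count by browser, one of the simplest questions a single event can answer:

```python
# Toy event stream: one "product_used" event per interaction, each carrying
# a few user properties (names are hypothetical, not Amplitude's schema).
events = [
    {"user_id": "u1", "event": "product_used", "browser": "Chrome", "country": "US"},
    {"user_id": "u2", "event": "product_used", "browser": "Firefox", "country": "DE"},
    {"user_id": "u1", "event": "product_used", "browser": "Chrome", "country": "US"},
    {"user_id": "u3", "event": "product_used", "browser": "Chrome", "country": "US"},
]

# Distinct users per browser -- a "counts" question answerable with one event.
users_by_browser = {}
for e in events:
    users_by_browser.setdefault(e["browser"], set()).add(e["user_id"])

counts = {browser: len(users) for browser, users in users_by_browser.items()}
print(counts)  # {'Chrome': 2, 'Firefox': 1}
```

Trends over time and retention work the same way: group the same records by timestamp or by first-seen date instead of by browser.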
Now, let’s add 1) another event (just one!) that captures a key value exchange like making a purchase, upgrading to the paid plan, or favoriting an album, and 2) a small number of event properties like purchase amount, paid plan options selected, and album genre.
Consider the types of questions we can answer:
- Do fans who favorite albums retain better? Are they more likely to upgrade to paid?
- What % of fans favorite an album? How long does that take?
- Does referring URL influence total monthly purchase amounts?
- What are the most popular paid plan options?
- Do mobile users spend more or less money?
- And more…
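A sketch of how the second event unlocks these questions, again with purely illustrative event and property names rather than Amplitude's real data model. With a presence event plus a value-exchange event, cross-event questions become simple set operations, and event properties give you breakdowns:

```python
from collections import Counter

# Toy two-event stream: a presence event plus a value-exchange event
# ("favorite_album") carrying an event property (names are hypothetical).
events = [
    {"user_id": "u1", "event": "product_used"},
    {"user_id": "u2", "event": "product_used"},
    {"user_id": "u3", "event": "product_used"},
    {"user_id": "u1", "event": "favorite_album", "genre": "jazz"},
    {"user_id": "u3", "event": "favorite_album", "genre": "rock"},
    {"user_id": "u3", "event": "favorite_album", "genre": "jazz"},
]

all_users = {e["user_id"] for e in events if e["event"] == "product_used"}
favoriters = {e["user_id"] for e in events if e["event"] == "favorite_album"}

# "What % of fans favorite an album?" -- a cross-event question.
pct_favoriting = 100 * len(favoriters) / len(all_users)

# "What are the most popular genres?" -- an event-property breakdown.
genres = Counter(e["genre"] for e in events if e["event"] == "favorite_album")

print(round(pct_favoriting, 1))  # 66.7
print(genres.most_common(1))     # [('jazz', 2)]
```

Questions like "do favoriters retain better?" follow the same shape: compare retention within the `favoriters` set against the rest of `all_users`.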
You probably see what is happening here. With just two events—one that indicates presence (with some properties), and one that indicates a key value exchange (with some properties)—we unlock lots of insights! A large Fortune 50 customer of Amplitude recently walked in with a mess of data. Hundreds of events. We talked them back to six events—yes, six—and they discovered insights that have had a transformative impact on their product.
How did we encourage that particular customer to take that leap? Trust me, to the customer it initially sounded far-fetched. They had invested so much time and money setting up that firehose of events. Well, we used the methods described above: using multiple patterns and paying attention to the customer domain.
In our workshops at Amplitude, we do the following sequence of exercises:
- Name personas (the “actors”)
- Explore the product promise, growth assumptions, and competitive landscape
- Optional: North Star Workshop
- Customer journey mapping, customer narrative
- Identify key narratives
- Review the interface/product
- Map key narratives to key events
- Question-storming/decision-storming activity
- Event brainstorming in areas of interest
The end result is captured in a canvas of sorts.
Note how we are mixing patterns. We are establishing context, taking different perspectives, and casting/re-casting the net a couple times. Each successive activity helps us understand the customer domain more clearly and integrate the various perspectives in the room. The activities take a couple hours, but it is time extremely well spent (especially if completed with a diverse group of participants).
The output of this sequence of activities is typically around thirty events (culled from lots of ideas). If two events can help you answer the questions we mentioned above, imagine what thirty well-instrumented events can answer (provided you have a product like Amplitude, or know how to write SQL and do the analyses). Instrumenting thirty events, including testing, might take a day or so for a single developer, which is a small, small price to pay for those insights.
The key point here is that you don’t need to know all of the questions you’ll need to ask in advance. I say this as someone who was formerly obsessed with getting questions perfectly framed. However, by brainstorming a bunch of questions we are better able to understand the patterns in the questions we are pondering, which can guide instrumentation. This idea is key. Our goal is sense-making and pattern-discovering, not specification. For example:
- “Interesting, we seem to be asking a lot of questions about that workflow. We should make sure to capture the start and end, and the likely places the customer will drop!”
- “Interesting, we’ve talked about wanting to stay ahead of Competitor X by beating them at Y. I guess Y is important! What is the minimally viable way to measure the efficacy of Y?”
- “Interesting, a bunch of questions seem to reference the plan the customer is on. That’s a good user property to capture!”
Notice how we use questions and decisions to focus our instrumentation efforts. Noah Rosenberg, a principal at Amplitude partner WWT, describes this as “T-shaped” instrumentation. The general idea is to capture enough events across the user journey to measure key value exchanges and cover your bases, and then “go deep” in an area of interest. Teams get paranoid that they’ll have a new question and then not have historical data. This rarely plays out when you take a T-shaped approach.
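One way to picture a T-shaped event plan is as a small data structure: a thin horizontal bar of journey-spanning events, plus depth in the one area where the questions cluster. All event names below are hypothetical, sketched for a music-subscription product like the one in the examples above:

```python
# A sketch of a "T-shaped" event plan (event names are illustrative).
t_shaped_plan = {
    # The horizontal bar: select events across the whole user journey,
    # including the key value exchanges.
    "horizontal": [
        "signed_up",
        "completed_onboarding",
        "favorited_album",    # key value exchange
        "upgraded_to_paid",   # key value exchange
    ],
    # The vertical bar: go deep where the question-storming clustered.
    "deep_dive": {
        "upgrade_flow": [
            "upgrade_started",
            "plan_option_viewed",
            "payment_error_shown",
            "upgrade_completed",
        ],
    },
}

total_events = len(t_shaped_plan["horizontal"]) + sum(
    len(v) for v in t_shaped_plan["deep_dive"].values()
)
print(total_events)  # 8
```

The point of the shape: the horizontal events keep historical coverage of the journey, so when a new question appears you usually already have the baseline data; the deep-dive events answer the questions you actually have today.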
When you work this way, you increase the probability that what you instrument will be helpful. You can always add properties for more context. And frankly, the worst that can happen is that you don’t use an event and choose to sunset it at a later point.
The full series:
Part 1: Measurement vs. Metrics
Part 4: Learning How to “See” Data
Part 6: Asking Better Questions