Collecting event data enables us to answer vital analytical questions: understanding marketing attribution, drop-off rates throughout a customer journey, pain points within the app, and the success of recommendation algorithms and search engines, as well as gathering insights for product improvements.
At the same time, as the importance of web and mobile apps has grown, countries and regions around the world have introduced laws to protect user privacy and regulate the way we collect data (in Europe, most notably the GDPR). It has become more challenging to connect the dots for an unknown user who visits your site, leaves, and then returns to purchase something a few weeks later. Most ad platforms, such as Google Ads and LinkedIn Ads, have switched to first-party cookies and click ID/user ID tracking as they move away from third-party and cross-site tracking.
Because of all these constraints, the tracking tool you implement will heavily influence the quality, sessionisation, user identification, structure, and enrichment of the data available to you for analytics.
So what does this mean?
- The tool you choose and its implementation are critical: picking the right tool to enable a good tracking plan, and implementing it properly, is key.
- Designing a tracking plan that captures the necessary information (and nothing else) in a consistent and reliable format enables us to drive key insights and analytics.
- Lastly, we need to consider the downstream data model — to understand how we might make the most of our event data.
A Tracking Plan
We approach tracking planning using three tiers of event types:
- We have UI/UX events (e.g. the page_view, button_click, link_click) — these are quite straightforward and often come out-of-the-box;
- Product events (specific events for a set period of time to capture a new product feature or behaviour, e.g. events used in A/B testing) — these are more complex and we will talk about them next week;
- And business-critical events — we focus on those today.
Take this example of a property letting agency. We have a landlord and a tenant — a landlord owns one or many properties which each have their own listing. A tenant can place a booking request against a listing, which may or may not lead to a successful booking.
For each core entity, we review the actions we can perform on it. We identify this in a few ways:
- firstly, we think of classic CRUD (Create, Read, Update, Delete) actions,
- and then we make sure we understand the business domain very well to see what other actions make sense and need to be covered.
In our case, taking the Listing we can identify a set of different ways to interact with it. CRUD provides the basis: create, update, and delete, plus read, which here means a user viewing the listing. On top of that, the landlord can publish it on the website. That’s five identified actions (see illustration).
Going through this exercise, you’ll discover that not all actions need an associated event! Some actions overlap (e.g. a transaction is an action done by both a buyer and a seller) and so can be handled by a single event. Other actions aren’t critical, and some are better covered by backend data. For instance, the viewing of a Listing on the website can be captured through a property of a page view event. Try to keep the events you capture to a minimum so that they’re easy to maintain.
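The exercise above can be sketched as a simple inventory per entity. The names and decisions below are illustrative, not part of the original plan:

```python
# Hypothetical action inventory for the Listing entity.
# Each action is mapped to a decision: track it as its own event,
# fold it into another event's properties, or skip it entirely.
listing_actions = {
    "create":  "event",           # becomes listing_created
    "read":    "page_view prop",  # captured via a property on the page_view event
    "update":  "event",           # becomes listing_updated
    "delete":  "event",           # becomes listing_deleted
    "publish": "event",           # becomes listing_published
}

# Only the actions marked "event" make it into the tracking plan.
tracked = [action for action, decision in listing_actions.items()
           if decision == "event"]
```

Filtering like this keeps the plan small: five candidate actions become four tracked events, with the fifth riding along on an existing page view.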
Next, identify which information and properties are important to capture. In the case of the Listing, we have a backend source describing the listing, so we don’t need to capture much additional information from the related actions beyond the right listing_id and the default event properties (timestamp, event_name and device information). We can look up all the other data from our backend data sources (this is why a good centralised data model is important as well!)
We then name the events using the taxonomy entity_action, so in our example listing_created, listing_updated, listing_published.
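A minimal sketch of the entity_action naming convention and the lean payload described above. The helper name and the exact property set are our assumptions; the real schema will depend on your tracking tool:

```python
from datetime import datetime, timezone

def build_event(entity: str, action: str, **properties) -> dict:
    """Build an event payload following the entity_action taxonomy.

    Hypothetical helper for illustration: the event carries only its
    name, a timestamp, and the minimal properties needed to join back
    to backend data (e.g. listing_id).
    """
    return {
        "event_name": f"{entity}_{action}",  # e.g. listing_published
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **properties,
    }

event = build_event("listing", "published", listing_id="lst_123")
```

Because the payload stays lean, everything else about the listing is resolved downstream by joining on listing_id against the backend source.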
Once we’ve done this for all the entities, we add them to Avo, a governance tool that lets us manage tracking plans across different teams and functions. Snowplow has a great solution here as well. Governance tools like these are an integral part of our analytics offering.
First and foremost, how do you know what events are implemented in your business? In most cases, if one of our clients has already implemented tracking, they will have used a spreadsheet or a table on a Notion page. This is of course better than nothing, but a static document incentivises a “build once, never use again” scenario. We see a lot of situations where, for instance, the wrong event is being used for analytics, where the naming chosen by the development team is unclear, or where event definitions are clearly duplicated. This unclear event provenance creates major issues: data output looks right initially, but accuracy falls apart under closer scrutiny. You do not want to trawl through your code base to find out why the revenue numbers are off.
We want to avoid that!
When we finished December’s #TrackingThursday, we’d come up with a nifty tracking plan to capture key business events about a listing for a letting agency. But how do I, as a product owner, communicate what I want to track to my development team? How do I know whether what I want to capture is already in the data?
Avo is a great tool that lets us do all of this. It enters the tech stack at the point where we have an idea of the entities we wish to track. We typically move a tracking plan from a Miro board to a branch in Avo and get set up. We define our events and their contexts/properties, and we can collaborate with colleagues on what the actions are and where they occur. Avo becomes our source of truth for event governance.
To help us design our tracking plan iterations, we have started to use metrics. Where we can, we define a client’s KPIs and assign the events that will enable them. That way we capture, for example, the time it takes a landlord to publish their first listing, or the time from publishing a listing to its first booking. This ensures our client can always measure their core North Star metrics while using the smallest number of events!
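As a sketch, a KPI like time-to-first-booking can be derived directly from two of the events in the plan. The event stream, field names, and helper below are illustrative assumptions:

```python
from datetime import datetime

# Illustrative event stream; timestamps are ISO 8601 strings.
events = [
    {"event_name": "listing_published", "listing_id": "lst_1",
     "timestamp": "2023-01-01T09:00:00"},
    {"event_name": "booking_requested", "listing_id": "lst_1",
     "timestamp": "2023-01-08T09:00:00"},
]

def time_to_first_booking(events: list, listing_id: str) -> float:
    """Hours between a listing being published and its first booking request."""
    first_seen = {}
    # Walk the stream in time order, keeping the first occurrence of each event.
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["listing_id"] == listing_id and e["event_name"] not in first_seen:
            first_seen[e["event_name"]] = datetime.fromisoformat(e["timestamp"])
    delta = first_seen["booking_requested"] - first_seen["listing_published"]
    return delta.total_seconds() / 3600
```

For the sample stream above this yields 168 hours (seven days), computed from just two well-defined events rather than a sprawling set of ad hoc ones.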
At this point, we start to really consider which tracking tool our clients are going to use. If it is indeed Snowplow, we will configure contexts as a nifty way to group properties at the entity level! For each tracking tool we can consider what is already being captured, and which generic, easy-to-implement UI/UX events can be put in place (remember page view, button click, etc.).
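For Snowplow specifically, grouping properties at entity level maps naturally onto its self-describing events with attached entity contexts. The sketch below shows the shape of such a payload as plain data; the iglu schema URIs are placeholders for illustration, not real schemas:

```python
# A Snowplow-style self-describing event with an entity context attached.
# The "iglu:com.example/..." URIs are hypothetical placeholders.
event = {
    "event": {
        "schema": "iglu:com.example/listing_published/jsonschema/1-0-0",
        "data": {},  # the action itself may carry no extra properties
    },
    "context": [
        {
            # The Listing entity travels as a reusable context, so the same
            # properties can be attached to ANY event involving a listing
            # (listing_created, listing_published, booking_requested, ...).
            "schema": "iglu:com.example/listing/jsonschema/1-0-0",
            "data": {"listing_id": "lst_123", "bedrooms": 2},
        }
    ],
}
```

The design benefit is reuse: the entity context is defined once and attached wherever relevant, instead of duplicating listing properties in every event definition.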
Product owners become critical in event design. They can create the event in Avo and define the right trigger. Avo can then provide a consistency check on naming, and ensure these events get technical sign-off before they’re passed to the development teams for implementation. We can also provide the implementation team with screenshots showing where and how we want each event to be triggered.
Last but not least, Avo then helps you implement the event tracking. This can speed up development and enable easier debugging. For those of you with existing tracking, Avo just released a new feature we can’t wait to try out: their Inspector lets you capture tracking that already exists and pull it into your first tracking plan, making it easier to review what’s already there!
Overall, governance helps you know why you’re tracking. Aligning on clear terminology and maintaining best practice and consistency across the business sounds like a no-brainer to us.