How to measure impact in Product Operations


As product managers, when we release a new feature, we typically measure success using built-in product analytics. We often have a baseline number and we measure improvement on that baseline. For example: To what extent did our conversion rate increase after the new onboarding tool was released?

However, in Product Ops, the problems we work on and the forms our solutions take are highly varied, so it's not easy to identify a one-size-fits-all way of measuring impact.

Before jumping into how we measure impact, it's worth recapping the varied types of work we do in Product Operations. The diagram briefly summarises this for the FreeAgent Product Ops team:

We have various measurement challenges we need to solve:

All of this means we can’t measure everything in an automated and quantitative way. It’s going to require a bit of flexibility and a bit of manual input. So where do we start?

Firstly, we need to define impact. What do we consider a successful outcome for our projects? At FreeAgent, we’ve defined impact as:

A positive change, action or decision that happens as a result of our work.

To be more specific, we consider our work as having had impact if it results in any of the following six impact categories (with an example for each):

We also acknowledge that not all projects will have impact as defined above, often for good reason. For example, we may have done a valid exploratory market analysis project, only to find that we're already ahead of our competitors in a certain area and no further action is needed.

It's also important to play the long game sometimes: just because you don't see an impact by the end of the measurement period doesn't mean it won't come later.

To track this, and to generate a quantitative measurement, we use four impact classifications:

At the beginning of every project, we decide what we want success to look like, and how we’ll measure it.

At the end of each project, we document the impact it had, which we measure in different ways. The table below shows example projects completed in a 2-month cycle, documenting the impact of each project and how it was measured.

Our North Star metric is the number of projects that have measurable impact in a given time period (at FreeAgent we use 2-month product cycles). For this cycle, four of the seven projects had measurable impact, giving a 57% impact rate.

Of course, some of those four impact projects will have more impact than others (which we evaluate qualitatively each cycle), but this method gives us a single metric to help assess a) whether we're picking the right projects and b) how good we are at delivering material value on them.
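To make the arithmetic concrete, here is a minimal sketch of how a cycle's impact rate could be computed from a list of completed projects. The project names and classification labels are illustrative placeholders, not FreeAgent's actual data or terminology:

```python
# Hypothetical project log for one 2-month cycle.
# "measurable impact" stands in for whichever classification
# a team counts towards its North Star metric.
projects = [
    {"name": "Project A", "classification": "measurable impact"},
    {"name": "Project B", "classification": "measurable impact"},
    {"name": "Project C", "classification": "no impact yet"},
    {"name": "Project D", "classification": "measurable impact"},
    {"name": "Project E", "classification": "exploratory, no action needed"},
    {"name": "Project F", "classification": "measurable impact"},
    {"name": "Project G", "classification": "no impact yet"},
]

# Count the projects classified as having measurable impact.
impact_projects = [p for p in projects if p["classification"] == "measurable impact"]

# Impact rate = impact projects / all completed projects in the cycle.
impact_rate = len(impact_projects) / len(projects)

print(f"{len(impact_projects)} impact projects, {impact_rate:.0%} impact rate")
# → 4 impact projects, 57% impact rate
```

In practice this would just be a column in the cycle's project tracker, but the calculation is the same: completed projects with measurable impact divided by all completed projects in the cycle.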

At the end of each cycle, we summarise all this in a single-page Key Performance Indicator (KPI) scorecard which looks like this diagram:

In addition to impact, we also care a lot about user feedback, so we send out short project evaluation forms to stakeholders after each project.

Most of the feedback is qualitative, but the form also includes some quantitative questions. Their results can be seen in the ‘Feedback’ row of the scorecard image.

It’s important to note that many of the above are leading metrics; i.e., we consider them predictors of more value being delivered further down the line. Lagging metrics take longer to measure, but we want to remain aware of them. Some examples of lagging indicators we aim to monitor:

At FreeAgent, we don't use velocity as a measure of success. Because Product Ops projects vary so much in nature, and because they often widen in scope and opportunity as we learn more, we would rather focus on producing high-quality, high-impact outputs than on speed.

That said, we do have some Team Fitness metrics which we monitor, such as the number of projects in progress and the average start-to-finish duration of projects. These help us understand how efficiently we're working and identify removable blockers we can work on.

We’ve been using this impact tracking system recently, and it’s working well for us. However, we expect it will evolve and we’re always interested in learning and improving this process. 

If you have thoughts on other ways of measuring Product Ops impact, it would be great to hear about them in the comments (or message me on LinkedIn).