Last week we published the 7th episode of the Systemic Insight podcast (get it from Libsyn or Apple Podcasts, or find it on Spotify). It features Dr Toby Lowe of Northumbria University and his work on why outcome-based performance management doesn’t work – and what to do instead. In this blog post, I’m sharing some quotes from Toby and some insights I took from the conversation.
The discussion in the podcast touches upon why outcome targets distort rather than enhance performance, why they make gaming a rational strategy, and what the alternatives are for people who work in complex contexts. As outcome-based performance management is still the prevalent method for managing performance in many fields, this discussion is highly relevant.
This episode is packed with ideas and quite a few challenging thoughts!
Toby is Senior Lecturer in Public Leadership and Management at Newcastle Business School at Northumbria University here in Newcastle. Toby describes his purpose as an academic as helping to improve the funding, commissioning and performance management of social interventions across the public, private and voluntary sectors. His research team has used complexity theory to create a critique of New Public Management approaches, particularly highlighting the problems created by attempts to use outcome-based performance management (e.g. Payment by Results) in complex environments.
The quotes that follow are not in chronological order; I have tried to arrange them in a sequence that makes sense here.
Outcomes are attempts to collapse complex realities into simple measures
Toby has two main arguments for why outcome-based performance management does not work.
The first is that, assuming the aim of a change initiative is to improve people’s lives, it is almost impossible to collapse what ‘having a better life’ means into a measurable outcome. For different people, improvements in their lives look different. They depend on the context, on timing, and on personal preference. This is not only true for initiatives to improve human lives, but for all initiatives with complex objectives, including things like more competitive businesses or a better culture in an organisation. The needs of businesses, from the small-scale farmer or trader to the corporation that produces car parts and provides jobs for thousands, are not generic but depend on the specific context. Similarly, corporate culture is multi-faceted and is continuously generated by the people who work in the company. Collapsing these complex aspects into a single measure, or a basket of measures, that will in the end decide whether an intervention was successful is impossible. You will inevitably lose context and nuance.
Performance management is a control perspective where someone who is in charge, someone who has some control or authority is saying to someone who they have control over: ‘I want you to do it like this and I will check up on you to make sure that you were doing it like this.’ And so, in order for outcome-based performance management to work you need to collapse this sense of ‘I want you to create good outcomes in someone’s life’ into what is measurable and specifiable in advance.
So, you have to pre-determine an outcome, and then you have to find what is measurable about that outcome. And that is two pieces of abstraction that take away from the inherent subjectivity of how an outcome is experienced in someone’s life.
In a complex environment, detail matters, context matters … the detail of particular people’s lives matters. And the whole business of trying to … performance measure outcomes strips away all of that context and says: ‘we are going to try and produce standard measures.’
Summarising point 1: simple measures cannot reflect the necessary nuance of complex outcomes in human lives, economies, cultures, and other complex spaces. Furthermore, we cannot choose just any measure; we need to rely on indicators that we can actually measure with reasonable accuracy, with reasonable effort, and in a reasonable time frame – which turns us into the man searching for his lost keys under the streetlight, not because he lost them there, but because that is where the light is. These are two levels of abstraction that, according to Toby, massively reduce the usefulness of outcome measures. But there is a second argument for why outcome-based performance management doesn’t work.
Setting quantitative targets inevitably leads to gaming behaviour
Toby’s second argument against outcome-based performance management is that there is ample evidence that such ways of managing performance are subject to gaming. Toby quotes Campbell’s law, which I have blogged about before.
Whatever measure is chosen ends up distorting the performance and the activities of the people working towards that measure. This is Campbell’s law … any quantitative indicator used for the purpose of social decision-making tends to distort and corrupt the processes it is intending to monitor. Any indicator that is used as a performance measure distorts the practices it is intending to monitor because doing good in the world is too complex to collapse into a measure or a basket of measures. It just is. And the most frustrating thing is, it’s not necessary.
Toby makes the point that, if a project is not able to achieve a target but the people are rewarded or punished based on this achievement, then people learn to game the system. There are four ways of gaming a system:
- “Teach to the test” – only do the thing that is measured, whether it makes sense or not.
- Cherry-pick – achieve the numbers by picking the easiest way to get there, “help the easiest to help” and leave many behind.
- Reclassify – get things to count against the target that did not count before, e.g. count people who are in precarious work with zero-hour contracts as having a job.
- If everything else fails, make stuff up.
There seems to be solid evidence about this gaming behaviour, according to Toby (also see Lowe and Wilson (2015) where they pull together some of the evidence):
There was a meta-study recently about gaming in performance management. I think about 83% of the studies that looked at target-based performance management found evidence of gaming. 83%! It is so ubiquitous. … It is kind of reckless to manage using target-based performance management in a complex environment because we know exactly what will happen.
However, it is important to say here that this is not about pointing fingers at people and saying how bad they are for gaming the performance management system. No, it is that they often don’t have a choice if they want to function in that context.
The frustrating thing about an outcome-based performance management is that it creates a logic in which the only rational thing to do is to game if you are a manager.
This is really the killer argument against outcome-based performance management: you should only be performance managed for things over which you have control, right? Because otherwise you are being held accountable for things you don’t control. And if you are held accountable for things you don’t control, that is bound to drive weird behaviours. And people undertaking social interventions are not in control of the outcomes they are being held accountable for. Those outcomes are products of hundreds of different factors in the world, all operating interdependently. And if you try and hold people accountable for outcomes, you are holding them accountable for things they don’t control and so they learn to manage what they do control. And so an outcome-based performance management environment is one where the only sensible thing to do is to game.
This last point strikes a particular chord. I have seen many projects whose results frameworks (their logframes) are set up as if they were in control of delivering systemic change (systemic change is defined as a project output). How can a single project control something as complex and multi-faceted as systemic change, for which many actors on many levels need to change their behaviours in different ways, which in turn are shaped by many interdependent factors? I have criticised this practice many times. A project cannot be in control of delivering systemic change; it can at most contribute to it.
Summary of point 2: setting quantitative outcome targets inevitably leads to gaming, i.e. not being honest about how the targets are achieved. This is not the result of individual choices by evil managers, but of an incentive system that is set up in a way that strongly favours that behaviour. People are asked to deliver changes they are not in control of, measured by simplified indicators. But what makes things worse is that even if we can see changes in the chosen measures, we generally cannot attribute them to the interventions.
Attribution is impossible in complex contexts
Toby and I also touched upon the thorny issue of attribution. I have written about this many times before and even wrote a paper on it for the BEAM Exchange a while back.
For Toby it is clear, as it is for me, that attribution is not possible in complex contexts because too many interdependent factors are at play that determine a specific behaviour.
The complexity perspective clearly identifies that it is impossible to attribute outcomes to interventions in a complex environment. It’s not hard, it’s not difficult. It’s not the case of ‘oh if we do the work and if we crunch these numbers hard enough and if we do the right kind of regression analysis, we will find the way to attribute …’ no, it’s impossible! And we have wasted so much human ingenuity and time and effort trying to do the impossible!
Many projects I work with have stopped talking about attribution and instead use the term contribution. Toby also welcomes the move from attribution to contribution, but makes a rather important point with regard to performance management:
The move to contribution from attribution is really important and it is fantastic that people are recognising that. But as soon as they say that, they need to recognise that it kills the potential to performance manage. Because you cannot performance manage if you can’t have attribution.
The point that is important for me and that I tried to make during the chat with Toby is that if you use ‘contribution’ but then try to define the exact share a project contributed to a change, you again, conceptually, go back to attribution. You want to attribute a specific, definable share of the change to the project. So what is happening in reality is that while people change their language by using contribution instead of attribution, they still try to attribute change – consciously or unconsciously. That is why, in the BEAM Exchange paper linked above, we made the point that there is no clear benefit in using ‘contribution’ instead of ‘attribution’ if they are used in a performance management framework.
If contribution is used in a complexity-sensitive way, describing how an intervention influenced some behaviours in a non-quantitative way, as is done in Outcome Harvesting for example, the concept is useful. But it cannot be used to manage performance.
After having established why outcome-based performance management does not work, we shifted towards the alternative Toby and colleagues have started to describe.
What we need now is a paradigm shift
… complexity demonstrates beyond any reasonable doubt: it’s not just that outcome-based performance management doesn’t work, it’s that it cannot be made to work reliably.
This is a pretty fundamental point made by Toby. We cannot just try harder to make outcome-based performance management work. Initiatives like the DCED Standard for Results Measurement, while laudable and fuelled by the good intentions of smart people, are in my view exactly such attempts to try harder at something that is conceptually limited and often not useful in our work. I am not saying that what many of these initiatives propose is wrong – the Standard and its guidance for results measurement is very well thought through and useful in clearly defined circumstances – but it does not support improvement of our work in the complex, uncertain and ambiguous realities that we often face.
What is needed instead is a fundamental rethink: finding ways to improve our work other than managing individuals’ and organisations’ performance.
… actually, it was a paradigm shift that was needed around this. Because outcome-based performance management is actually just one part of the broader New Public Management paradigm for how this stuff works, for how to plan, implement and fund any form of social intervention. When you’re locked into that New Public Management paradigm then it is impossible to see an alternative.
Outcome-based performance management and the theories it is built on are not laws of nature but social constructs. Which is why we can find alternative ways. Outcome-based performance management is based on a particular world view that was emerging at the time of the industrial revolution and was still dominant in the 20th century. In the second half of the 20th and definitely in the 21st century, a new way of understanding how the world works has been emerging. An understanding that is inspired by complexity sciences. Hence, the way we improve our performance should also be inspired by this new world view.
It is Toby’s aim to support this move to a new paradigm. It is not a paradigm that Toby or other people in business schools are inventing on their desks. Rather, Toby sees it as his mission to be more of a kind of midwife (not a word he used) for that new paradigm.
But [New Public Management] is a particular ideology. The world can be different. And the change that is happening now is that it is now incumbent on managers to realise that there is an alternative because a set of other managers have created another paradigm.
So Toby and colleagues went out and talked to a large number of people to capture the new, emergent way of doing things differently. This work has resulted in the report “A Whole New World: Funding and Commissioning in Complexity”. It is briefly described in the following section.
A new paradigm built on intrinsic motivation, learning, trust and relationships
The quotes below give some notion of what the new paradigm looks like. Generally, important aspects of the new paradigm are:
- We can trust people to want to do the best work they can; we do not have to threaten them with punishment should they fail or offer them rewards if they do well (indeed, it is well accepted that extrinsic rewards destroy intrinsic motivation).
- Improvement in how people do their work happens when they learn (together).
- In order to generally improve how systems work, we need to enable learning across organisations, which requires trust and relationships to be strengthened in a system.
The public managers who were doing this differently assumed that the people they were managing or funding were intrinsically motivated to do their work. They didn’t assume that those people needed reward and punishment to do so. That was the big conceptual shift that these people were making. And that meant that they could unlock a significant second shift which was that … they didn’t improve performance in the organisations they were part of by trying to use vertical accountability, by trying to control the people they were responsible for, they used learning as the engine for performance improvement and created learning environments so people could get better at what they did. And finally, they said because outcomes are produced by systems and not by particular interventions, they said we think we have a responsibility to produce more effective systems. So, systems in which the actors in the system could coordinate and collaborate more effectively together. So, we think if we do those three shifts that will enable us to produce better outcomes in complex environments.
… we think that a healthy system in a place is a learning system. So, how is it that actors in a system are able to learn?
It is important that the actors within that system are able to learn effectively. … If you just bring in learning capacity into that system and say ‘we will help you learn by bringing a bunch of smart people with a set of learning resources and they’ll do all the learning and pass that on to you’ that’s gonna fail, we think.
Toby and colleagues have been compiling what they found when talking to people who are exploring the new paradigm, so that this new way of doing things can become a real, viable alternative. They call this new way of managing complex change processes ‘Human Learning Systems’ and describe it in their report Exploring the new world: Practical insights for funding, commissioning and managing in complexity, which is definitely worth checking out.
On measurement and accountability
Toby makes an important point that while setting and controlling the achievement of quantitative targets doesn’t work to improve performance, measurement is still important in the new paradigm.
Measurement is really important in any form of improvement. Because those seeking to improve their practice need good information on which to base those reflections. And measurement and doing measurement well is a really significant part of that. … but we need to change the purpose of measurement. And this is what the people who are operating in this alternative paradigm are doing. They are measuring to learn and improve, not to hold ourselves accountable to others.
Yet here is the catch:
We need to stop measuring to demonstrate impact because it is impossible and wastes everyone’s time. The evidence is really, really clear: when people measure to demonstrate their impact, they consciously or unconsciously game these measures to make themselves look good. This means you can’t use that data for learning, because it’s been distorted.
Accountability, too, will still play an important role, according to Toby. But it will take a very different and much more diverse shape.
The current version of accountability does not really produce accountability. People don’t have control, it’s the illusion of control. But again, it’s not about saying accountability is not important, because accountability can be a helpful way to think about the relationships between different actors in the system. But accountability is a type of relationship between actors in the system. … What we think is going to be important in accountability in a complex environment is that there are multiple dimensions of accountability.
That form of accountability as ‘providing an account’ is kind of what we have lost in the illusion of accountability that we have at the moment. And that sense of that accountability is one person in the system asking for an account of why a particular decision was made that is I think at the heart of what this revised and expanded version of accountability will look like.
To summarise this point: measurement is still going to be very important, but measurement for learning, not for demonstrating impact. Accountability, too, has a natural role to play, but the concept itself is more multi-faceted than the way it is used now and needs to take into account the different relationships between the actors in the system. This means, for example, that we cannot just be accountable to the funder – while this remains important, we also need to be accountable to our peers and to the people whose lives we want to improve.
On competition and innovation
Toby and I also briefly talked about competition (linking back to the previous podcast episode on competitiveness) and innovation. Here is what he had to say:
What we’ve seen is that competition [between organisations for funds] discourages learning between organisations, and given that this alternative paradigm focuses on learning as the key driver for improvement, anything that is preventing learning across organisations is not great. … And also if we’re looking for effective systemic responses then, one of the things that we see is that it’s very rare for an organisation to meet all the individuals’ or communities’ strengths and needs, and that means that they are going to need to collaborate effectively with others in order to meet those strengths and needs. And again, if competition is getting in the way of that form of collaboration, then why are we promoting competition?
In our work, the most significant driver for innovation that we’ve seen is the releasing of control. … When you stop trying to control what workers are doing and give them the specific job to learn about the detail of the context of each and every person they are working with and respond effectively to whatever it is that they hear, that’s what creates innovation.
I still think there is a role for competition to play, but I also agree with Toby that competition cannot be allowed to stand in the way of learning. As we saw in the discussion in the last episode of the podcast, competition, too, has many nuances. It is very good, for example, at finding solutions to clearly defined problems. It is perhaps less good at fuelling a process of continuous learning and exploration in complexity, for which collaboration is much more conducive.
I thoroughly enjoyed the talk with Toby, as he has a real talent for getting to the point and vast experience in exploring the new paradigm of delivering social interventions. And if you have made it all the way here: I asked Toby whether he would be willing to have a follow-up conversation to analyse the effect of the Covid-19 pandemic on the shift from a control paradigm to a human learning systems paradigm, to which he agreed.
Below are links to some of the resources mentioned during the episode:
- LOWE, T. & WILSON, R. 2015. Playing the Game of Outcomes‐based Performance Management. Is Gamesmanship Inevitable? Evidence from Theory and Practice. Social Policy & Administration, Vol. 51(7):981-1001. (Behind a paywall unfortunately)
- KNIGHT DAVIDSON, A., LOWE, T., BROSSARD, M. & WILSON, J. 2017. A Whole New World: Funding and Commissioning in Complexity. collaborate for social change.
- LOWE, T. & PLIMMER, D. 2019. Exploring the new world: Practical insights for funding, commissioning and managing in complexity. collaborate for social change, Northumbria University Newcastle.
- Next Stage Radicals website
- Human Learning Systems website
Title photo by Charles Deluvio on Unsplash
This is good as well: https://www.youtube.com/watch?v=hRswTZg4Lc4