
Complexity, Adaptation, and Results

September 07, 2012

In the last of a series of three blog posts looking at the implications of complexity theory for development, Owen Barder and Ben Ramalingam look at what complexity means for the trend towards results-based management in development cooperation. They argue that it is a common mistake to see a contradiction between recognising complexity and focusing on results: on the contrary, complexity provides a powerful reason for pursuing the results agenda, but it has to be done in ways which reflect the context.

In the 2012 Kapuscinski lecture Owen argued that economic and political systems can best be thought of as complex adaptive systems, and that development should be understood as an emergent property of those systems. As explained in detail in Ben’s forthcoming book, these interactive systems are made up of adaptive actors, whose actions are a self-organised search for fitness on a shifting landscape. Systems like this undergo change in dynamic, non-linear ways, characterised by explosive surprises and tipping points as well as periods of relative stability.

If development arises from the interactions of a dynamic and unpredictable system, you might draw the conclusion that it makes no sense to try to assess or measure the results of particular development interventions. That would be the wrong conclusion to reach. While the complexity of development implies a different way of thinking about evaluation, accountability and results, it also means that the ‘results agenda’ is more important than ever.

Embrace experimentation

There is a growing movement in development which rejects the common view that there is a simple, replicable prescription for development. Dani Rodrik talks of ‘one economics, many recipes’. David Booth talks of the move from best practice to best fit. Merilee Grindle talks of ‘good enough governance’. Bill Easterly has talked of moving ‘from planners to searchers’. Owen Barder has called for us to design not a better world, but better feedback loops. Sue Unsworth talks of an upside down view of governance. Matt Andrews, Lant Pritchett and Michael Woolcock aim to synthesize all this into their proposal for Problem Driven Iterative Adaptation.

These ideas are indispensable in the search for solutions in complex adaptive systems. In his 2011 book Adapt, Tim Harford showed that adaptation is the way to deal with problems in unpredictable, complex systems. Adaptation works by making small changes, observing the results, and then adjusting. This is the exact opposite of the planning approach, widely used in development, which involves designing complicated programmes and then tracking milestones as they are implemented. We know a lot about how adaptation works, especially from evolution theory. There are three essential characteristics of any successful mechanism for adaptation (a toy sketch of this loop follows the list below):

  1. Variation – any process of adaptation and evolution must include sources of innovation and diversity, and the system must be able to fail safely
  2. An appropriate fitness function which distinguishes good changes from bad on some implicit path to desirable outcomes
  3. Effective selection which causes good changes to succeed and reproduce, but which suppresses bad changes.
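
The same three ingredients can be written down as a toy search loop. The short Python sketch below is purely illustrative and not part of the original argument: the parameters, the numerical ‘fitness’ target and the function names are all invented here, and in real development programmes the fitness function is contested, partly implicit and hard to measure.

    import random

    def fitness(params):
        # Hypothetical measure of success: in practice, whatever society agrees
        # counts as a useful change, assessed as transparently as possible.
        target = [0.3, 0.7, 0.5]  # invented numbers, purely for illustration
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    def vary(params, step=0.1):
        # Variation: a small, safe-to-fail change to one element of the programme.
        candidate = list(params)
        i = random.randrange(len(candidate))
        candidate[i] += random.uniform(-step, step)
        return candidate

    def adapt(initial, iterations=200):
        # Selection: changes that improve measured results are kept and built on;
        # changes that do not are dropped (brought safely to a close).
        current, best = initial, fitness(initial)
        for _ in range(iterations):
            candidate = vary(current)
            score = fitness(candidate)
            if score > best:
                current, best = candidate, score
        return current

    print(adapt([0.5, 0.5, 0.5]))

The sketch only shows how the three ingredients fit together: without variation there is nothing to choose between, without a fitness function there is no way to tell good changes from bad, and without selection good changes are never reproduced.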

These principles are reflected in the six principles for working in complex systems which Ben set out in a Santa Fe Institute working paper with the former head of USAID Afghanistan, Bill Frej. They also run through the ideas in the must-read recent paper by Andrews, Pritchett and Woolcock which sets out four steps for ‘iterative adaptation’ in the case of state-building and governance reforms:

  1. focus on solving locally nominated and defined problems in performance (as opposed to transplanting pre-conceived and packaged best-practice solutions);
  2. create an ‘authorizing environment’ for decision-making that encourages ‘positive deviance’ and experimentation, as opposed to designing projects and programs and then requiring agents to implement them;
  3. embed this experimentation in tight feedback loops that facilitate rapid experiential learning (as opposed to enduring long lag times in learning from evaluation);
  4. engage broad sets of agents to ensure that reforms are viable, legitimate, relevant and supportable.

So there is now some convergence around these ideas, all of which focus on the importance of experimentation, feedback and adaptation as ways of coping with uncertainty and complexity.

The role of results in adaptation

Andrew Natsios, a former Administrator of USAID, fired a celebrated shot over the bows of what he calls the ‘counter-bureaucracy’ (the compliance side of the US aid system). He says:

Let me summarize the problems with the compliance system now in place:
  • Excessive focus on compliance requirements to the exclusion of other work, such as program implementation, with enormous opportunity costs
  • Perverse incentives against program innovation, risk taking, and funding for new partners and approaches to development
  • The Obsessive Measurement Disorder for judging programs that limits funding for the most transformational development sectors
  • The focus on the short term over the long term
  • The subtle but insidious redefinition of development to de-emphasize good development practice, policy reform, institution building, and sustainability.
The reason for most of these process and measurement requirements is the suspicion by Washington policy makers and the counter-bureaucracy that foreign aid does not work, wastes taxpayer money, or is mismanaged and misdirected by field missions. These suspicions have been the impetus behind the ongoing focus among development theorists on results.

These arguments – made with particular authority by Natsios – resonate strongly with the views of the growing movement for more experimentation, adaptation and learning. But does that mean – as is often implied – that it is inappropriate or impossible to pay attention to results? If anything, the opposite is true. All three steps in the adaptive process – variation, a fitness function and effective selection – depend on an appropriate framework for monitoring and reacting to results. Natsios himself calls for ‘a new measurement system’. But – as Ben argued last year – we must ensure that the results agenda is applied in a way which is relevant to the complex, ambiguous world in which we live.

Results 2.0: thinking through a complexity-aware approach

A meaningful results agenda needs to take account of the diversity of development programmes, and the need for a more experimental approach in the face of complex problems. A good place to start is to borrow some approaches from academia, civil society and business strategy. This work suggests that a complexity-aware approach to results needs to be based on:

  (a) the nature of the problem we are working on;
  (b) the interventions we are implementing; and
  (c) the context in which these interventions are being delivered.

This gives us three dimensions – ranging from simple problems and interventions in stable contexts through to complex interventions in diverse and dynamic contexts.

Between a rock and a hard place

Down in the bottom left-hand corner are simple problems and stable settings. This is where ‘Plan and Control’ makes most sense. Traditional results-based management approaches, the more conventional unit-cost based value for money analyses and randomised control trials work especially well here. (Classicists among you will recognise the hard rock of Scylla.)

At the top right we have complex problems and complex interventions in diverse and dynamic settings. (A lot of donor work in fragile states and post-conflict societies is in this corner.) Here the goal is ‘Managing Turbulence’. In this space, everything is so unpredictable and fluid that planning, action and assessment are effectively fused together. To deliver results in this zone, we need to learn from the work of professional crisis managers, the military and others working in highly chaotic contexts. (This is the whirlpool of Charybdis.)

In between is what we have called the zone of ‘Adaptive Management’. Here we may find ourselves managing a variety of combinations of our three axes. In our view, the vast majority of development interventions sit in this middle ground. In this messy, non-linear world the challenge is to tread a careful path, avoiding narrowly reductionist approaches to results without surrendering to excessive pessimism about our ability to learn and adapt. In practice this means a more adaptive, experimental approach: trying out multiple parallel experiments, monitoring emergent progress and rates of success, and adapting to context. Real-time learning is essential to check the relative effectiveness of different approaches, scaling up those that work and scaling down those that don’t. It is a learning process which is essential for donors and – more importantly – for the governments and institutions of the developing world. Adaptive management must engage the three drivers of evolution:

  1. Variation – which means participants must be given space to experiment and engage in ‘positive deviance’. The key is to liberate people implementing programmes from the conventional requirements to follow a preconceived plan, while retaining the accountability of donors to their domestic constituencies. Development agencies and their partners can be given room for manoeuvre and experimentation if they are held to account not for their activities and spending according to a plan, but for the results they achieve or fail to achieve.
  2. An appropriate fitness function – which means that socially-useful changes are distinguished from ineffective or harmful changes. This in turn requires society to agree – either in advance or at least in retrospect – what constitute useful changes, and to assess whether those changes are coming about. For five decades the development industry has been inconsistent about what constitutes success, has failed to measure overall progress, and has eschewed opportunities to learn more about the effects of different interventions through various kinds of rigorous impact evaluation.
  3. Selection – which means that changes that bring about improvements according to the fitness function are reproduced and further adapted, while bad changes, policies or institutions are either reformed or brought to an end. This requires a greater focus on evidence-based policy making, and that decisions about programmes and interventions must be more strongly linked to the results they produce. The development industry has traditionally been insufficiently effective at taking success to scale, and insufficiently ruthless about failure.

Getting REAL with Results-Enabled Adaptive Leadership

Tracking results (and linking money to results) is often considered most appropriate for the simple, stable situations in the bottom left-hand corner of the cube. This is where it is easiest to attribute impact to the intervention. It is in this corner that we find ‘piece rate’ systems: the manufacturer knows full well what the production function looks like for sewing machines and machinists, and uses the piece rate system to motivate greater effort from staff.

But in the complex world of development, we do not know the ‘production function’ and we cannot readily attribute progress to any particular intervention. Furthermore, we often do not know where we are in the cube. We sometimes have reliable evidence about the value of a particular technology (say, a nutritional supplement or a bednet) which suggests we are down in the bottom left-hand corner of predictable and attributable results. But when we introduce the messy reality of informing people about the product, overcoming resistance to change, managing production and distribution, and creating incentives for effective delivery, we rapidly find ourselves in a much more complex world.

So most of what we do to promote development is not in the bottom left-hand corner: our interventions operate in the world of adaptive management and complexity. The main value of a results focus in development is not squeezing greater efficiency out of current service providers: rather, it is in enabling people to innovate, experiment, test, and adapt. The challenge here is to ensure that we have a focus on results which supports, rather than inhibits, effective feedback loops which promote experimentation and adaptation. This requires a new and more innovative toolkit of methods, and most importantly an institutional and relational framework which uses that information to drive improvement. We call this results-enabled adaptive leadership (because it has a nice acronym: REAL).

What might results-enabled adaptive leadership look like in practice? The Center for Global Development is currently exploring two specific ideas which we believe fit well with an adaptive, iterative and experimental approach to development: Cash on Delivery Aid and Development Impact Bonds. If you believe that development is a characteristic of a complex adaptive system then both of these ideas are attractive because:

  • They explicitly focus on independently verified, transparently reported outcomes and impact – that is, appropriate measures of what society is trying to achieve – rather than inputs and outputs which are thought to be correlated with progress (but may not be, especially in a complex system).
  • They avoid the need for an ex ante top-down plan, log-frame, budget or activities prescribed by donors. Because payment is made only when results are achieved, developing countries are free to experiment, learn and adapt.
  • There is no attempt to follow money through the system to show which particular inputs and activities have been financed; it is important for governments to learn whether certain activities are working, but it is futile for donors to speculate about the extent to which those changes would happen without them.
  • They automatically build in a mechanism for selection by shifting funding to successful approaches and bringing failed approaches safely to a close (something which development cooperation has traditionally found it difficult to do).

In a recent talk at USAID, Nancy Birdsall issued the following rallying cry: “It’s time to stop worrying about getting what we’re paying for, and start paying for what we get”. This principle also underpins another initiative with which CGD is associated, TrAiD+, which calls for the creation of a “market of global results” in which investors could choose what type of projects to fund, based on results achieved. Given the growing role of business and philanthropy in development, this approach may well prove to be attractive to many funders.

These are examples of how a focus on results could help, rather than hinder, the process of adaptation and experimentation in development. That does not mean that these are the only or even the best approaches (though CGD’s Arvind Subramanian teases his colleagues for offering cash on delivery as a solution to every problem).

Conclusion

The growing movement towards experimentation and iteration is driven by a combination of theory and experience. Though these arguments have rarely been explicitly framed as a response to complexity, as a whole they are entirely consistent with the view that development is an emergent property of a complex system. We in the development community have much to learn from other fields in which thinking about complexity is further advanced.

Many development interventions operate in the space between certainty and chaos: the complexity zone in which we believe that adaptive approaches are not only effective but essential. This is often presented as a decisive argument against results-based approaches to development. We argue that, on the contrary, a focus on results is an indispensable feature of successful adaptive management. The challenge is to do this in a way which avoids simplistic reductionism and promotes an approach which focuses on outcomes rather than process, monitors progress, and scales up success.

We are conscious that this falls well short of a detailed blueprint for how this might work in practice. As they say in the world of tech: that is a feature, not a bug. As Alnoor Ebrahim of Harvard University, one of the leading authorities on development accountability, puts it: “there are no panaceas to results measurement in complex social contexts.” A nuanced approach to results must be based on a thorough assessment of the problems, interventions and contexts. Our point is that there is no contradiction between an iterative, experimental approach and a central place for results in decision-making: on the contrary, a rigorous and energetic focus on results is at the heart of effective adaptation.

Consistent with our view that success is the product of adaptation and evolution – of ideas as well as institutions and networks – we look forward to comments, improvements and corrections to these ideas so that we can get past simplistic extremes on either side and build a shared understanding of how to make this work.

This is the last in a series of three blog posts based on Owen Barder’s presentation on complexity and development. The first blog post asked ‘What is Development?’. The second blog post looked at the UK government’s ‘golden thread’ approach to development through the lens of complexity. Ben Ramalingam’s book, Aid on the Edge of Chaos, will be published by Oxford University Press in 2013.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.