If development is complex, is the results agenda bunk?

[Figure: red rectangles depicting a fitness landscape]

This is the last of a series of three blog posts on Views from the Center looking at the implications of complexity theory for development. In this joint post, Owen Barder and Ben Ramalingam look at the implications of complexity for the trend towards results-based management in development cooperation. They argue that it is a common mistake to see a contradiction between recognising complexity and focusing on results: on the contrary, complexity provides a powerful reason for pursuing the results agenda, but it has to be done in ways which reflect the context.

In the 2012 Kapuscinski lecture Owen argued that economic and political systems can best be thought of as complex adaptive systems, and that development should be understood as an emergent property of those systems. As explained in detail in Ben’s forthcoming book, these interactive systems are made up of adaptive actors, whose actions are a self-organised search for fitness on a shifting landscape. Systems like this undergo change in dynamic, non-linear ways, characterised by explosive surprises and tipping points as well as periods of relative stability.

If development arises from the interactions of a dynamic and unpredictable system, you might draw the conclusion that it makes no sense to try to assess or measure the results of particular development interventions. That would be the wrong conclusion to reach. While the complexity of development implies a different way of thinking about evaluation, accountability and results, it also means that the ‘results agenda’ is more important than ever.

Embrace experimentation

There is a growing movement in development which rejects the common view that there is a simple, replicable prescription for development. Dani Rodrik talks of ‘one economics, many recipes’. David Booth talks of the move from best practice to best fit. Merilee Grindle talks of ‘good enough governance’. Bill Easterly has talked of moving ‘from planners to searchers’. Owen Barder has called for us to design not a better world, but better feedback loops. Sue Unsworth talks of an upside down view of governance. Matt Andrews, Lant Pritchett and Michael Woolcock aim to synthesize all this into their proposal for Problem Driven Iterative Adaptation.

These ideas are indispensable in the search for solutions in complex adaptive systems. In his 2011 book Adapt, Tim Harford showed that adaptation is the way to deal with problems in unpredictable, complex systems.  Adaptation works by making small changes, observing the results, and then adjusting.  This is the exact opposite of the planning approach, widely used in development, which involves designing complicated programmes and then tracking milestones as they are implemented.

We know a lot about how adaptation works, especially from evolution theory. There are three essential characteristics of any successful mechanism for adaptation:

  1. Variation – any process of adaptation and evolution must include sources of innovation and diversity, and the system must be able to fail safely;
  2. An appropriate fitness function – which distinguishes good changes from bad on some implicit path to desirable outcomes;
  3. Effective selection – which causes good changes to succeed and reproduce, but which suppresses bad changes.
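To make these three ingredients concrete, here is a deliberately toy sketch, in Python, of a variation–selection loop. Everything in it – the fitness function, the population size, the mutation step – is invented for illustration; a real development ‘fitness function’ is contested, noisy and shifting in ways no short script captures:

```python
import random

def fitness(x):
    # Toy stand-in for a fitness function. In reality this is the hard part:
    # the signal is noisy, contested and changes over time.
    return -(x - 3.0) ** 2 + random.gauss(0, 0.1)

def adapt(generations=50, pop_size=20, keep=5):
    # Variation: start from a diverse population of candidate approaches.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness function: score every variant against observed results.
        ranked = sorted(population, key=fitness, reverse=True)
        # Selection: reproduce the good changes and suppress the bad ones.
        survivors = ranked[:keep]
        # Variation again: mutate the survivors to keep exploring nearby
        # options; no single bad mutation can sink the whole search.
        population = [s + random.gauss(0, 0.5)
                      for s in survivors
                      for _ in range(pop_size // keep)]
    return max(population, key=fitness)

print(adapt())  # converges near the optimum at 3.0, unknown to the algorithm
```

Remove any one of the three ingredients – the initial diversity, the fitness signal, or the selection step – and the loop stops improving. That is the sense in which all three are essential.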

These principles are reflected in the six principles for working in complex systems which Ben set out in a Santa Fe Institute working paper with the former head of USAID Afghanistan, Bill Frej. They also run through the ideas in the must-read recent paper by Andrews, Pritchett and Woolcock, which sets out four steps for ‘iterative adaptation’ in the case of state-building and governance reforms:

  1. focus on solving locally nominated and defined problems in performance (as opposed to transplanting pre-conceived and packaged best-practice solutions);
  2. create an ‘authorizing environment’ for decision-making that encourages ‘positive deviance’ and experimentation, as opposed to designing projects and programs and then requiring agents to implement them;
  3. embed this experimentation in tight feedback loops that facilitate rapid experiential learning (as opposed to enduring long lag times in learning from evaluation);
  4. engage broad sets of agents to ensure that reforms are viable, legitimate, relevant and supportable.

So there is now some convergence around these ideas, all of which focus on the importance of experimentation, feedback and adaptation as ways of coping with uncertainty and complexity.

The role of results in adaptation

Andrew Natsios, a former Administrator of USAID, fired a celebrated shot over the bows of what he calls the ‘counter-bureaucracy’ (the compliance side of the US aid system).  He says:

Let me summarize the problems with the compliance system now in place:

  • Excessive focus on compliance requirements to the exclusion of other work, such as program implementation, with enormous opportunity costs
  • Perverse incentives against program innovation, risk taking, and funding for new partners and approaches to development
  • The Obsessive Measurement Disorder for judging programs that limits funding for the most transformational development sectors
  • The focus on the short term over the long term
  • The subtle but insidious redefinition of development to de-emphasize good development practice, policy reform, institution building, and sustainability.

The reason for most of these process and measurement requirements is the suspicion by Washington policy makers and the counter-bureaucracy that foreign aid does not work, wastes taxpayer money, or is mismanaged and misdirected by field missions. These suspicions have been the impetus behind the ongoing focus among development theorists on results.

These arguments – made with particular authority by Natsios – resonate strongly with the views of the growing movement for more experimentation, adaptation and learning.  But does that mean – as is often implied – that it is inappropriate or impossible to pay attention to results?

If anything, the opposite is true. All three steps in the adaptive process – variation, a fitness function and effective selection – depend on an appropriate framework for monitoring and reacting to results.  Natsios himself calls for ‘a new measurement system’. But – as Ben argued  last year – we must ensure that the results agenda is applied in a way which is relevant to the complex, ambiguous world in which we live.

Results 2.0: thinking through a complexity-aware approach 

A meaningful results agenda needs to take account of the diversity of development programmes, and the need for a more experimental approach in the face of complex problems. A good place to start is to borrow some approaches from academia, civil society and business strategy. This work suggests that a complexity-aware approach to results needs to be based on:

(a) the nature of the problem we are working on;

(b) the interventions we are implementing; and

(c) the context in which these interventions are being delivered.

This gives us three dimensions – ranging from simple problems and interventions in stable contexts through to complex interventions in diverse and dynamic contexts.

Between a rock and a hard place

Down in the bottom left-hand corner are simple problems and stable settings. This is where ‘Plan and Control’ makes most sense. Traditional results-based management approaches, the more conventional unit-cost based value-for-money analyses, and randomised controlled trials work especially well here. (Classicists among you will recognise the hard rock of Scylla.)

At the top right we have complex problems and complex interventions in diverse and dynamic settings. (A lot of donor work in fragile states and post-conflict societies is in this corner.) Here the goal is ‘Managing Turbulence’. In this space, everything is so unpredictable and fluid that planning, action and assessment are effectively fused together. To deliver results in this zone, we need to learn from the work of professional crisis managers, the military and others working in highly chaotic contexts. (This is the whirlpool of Charybdis.)

In between is what we have called the zone of ‘Adaptive Management’. Here we may find ourselves managing a variety of combinations of our three axes. In our view, the vast majority of development interventions sit in this middle ground.

In this messy, non-linear world the challenge is to tread a careful path: avoiding narrowly reductionist approaches to results without surrendering to excessive pessimism about our ability to learn and adapt. In practice this means a more adaptive, experimental approach: trying out multiple parallel experiments, monitoring emergent progress and rates of success, and adapting to context. Real-time learning is essential to check the relative effectiveness of different approaches, scaling up those that work and scaling down those that don’t. It is a learning process which is essential for donors and – more importantly – for the governments and institutions of the developing world.
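By way of illustration of what ‘scaling up those that work and scaling down those that don’t’ can mean in practice, here is a toy portfolio allocator in Python – an ‘epsilon-greedy’ multi-armed bandit. The three approaches and their success rates are invented, and no real funding decision is this clean; the point is only that observed results, not the ex ante plan, drive the allocation:

```python
import random

def run_portfolio(true_success_rates, rounds=1000, explore=0.1):
    """Toy epsilon-greedy allocation across parallel experiments: mostly back
    the best-performing approach observed so far (selection), but keep
    trying the alternatives (variation)."""
    n = len(true_success_rates)
    successes, trials = [0] * n, [0] * n
    for _ in range(rounds):
        if random.random() < explore or 0 in trials:
            arm = random.randrange(n)  # experiment with a random approach
        else:
            # Back the approach with the best observed success rate so far.
            arm = max(range(n), key=lambda i: successes[i] / trials[i])
        trials[arm] += 1
        # The fitness signal: an independently observed success or failure.
        if random.random() < true_success_rates[arm]:
            successes[arm] += 1
    return trials  # funding gravitates towards the most effective approach

# Three hypothetical approaches; their true success rates are unknown
# to the allocator and only revealed through results.
print(run_portfolio([0.2, 0.5, 0.35]))
```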

Adaptive management must engage the three drivers of evolution:

  1. Variation – which means participants must be given space to experiment and engage in ‘positive deviance’.  The key is to liberate people implementing programmes from the conventional requirements to follow a preconceived plan, while retaining accountability of donors to their domestic constituencies. Development agencies and their partners can be given room for manoeuvre and experimentation if they are held to account not for their activities and spending according to a plan, but for the results they achieve or fail to achieve.
  2. An appropriate fitness function – which means that socially-useful changes are distinguished from ineffective or harmful changes. This in turn requires society to agree – either in advance or at least in retrospect – what constitute useful changes, and to assess whether those changes are coming about. For five decades the development industry has been inconsistent about what constitutes success, has failed to measure overall progress, and has eschewed opportunities to learn more about the effects of different interventions through various kinds of rigorous impact evaluation.
  3. Selection – which means that changes that bring about improvements according to the fitness function are reproduced and further adapted, while bad changes, policies or institutions are either reformed or brought to an end. This requires a greater focus on evidence-based policy making, and that decisions about programmes and interventions must be more strongly linked to the results they produce. The development industry has traditionally been insufficiently effective at taking success to scale, and insufficiently ruthless about failure.

Getting REAL with Results-Enabled Adaptive Leadership

Tracking results (and linking money to results) is often considered most appropriate for the simple, stable situations in the bottom left-hand corner of the cube. This is where it is easiest to attribute impact to the intervention. It is in this corner that we find ‘piece rate’ systems: the manufacturer knows full well what the production function looks like for sewing machines and machinists, and uses the piece rate system to motivate greater effort from staff.

But in the complex world of development, we do not know the ‘production function’ and we cannot readily attribute progress to any particular intervention. Furthermore, we often do not know where we are in the cube. We sometimes have reliable evidence about the value of a particular technology (say, a nutritional supplement or a bednet) which suggests we are down in the bottom left-hand corner of predictable and attributable results. But when we introduce the messy reality of needing to inform people about the product, of overcoming resistance to change, of managing production and distribution, and of creating incentives for effective delivery, we rapidly find ourselves in a much more complex world.

So most of what we do to promote development is not in the bottom left-hand corner: our interventions operate in the world of adaptive management and complexity. The main value of a results focus in development is not squeezing greater efficiency out of current service providers: rather it is in enabling people to innovate, experiment, test, and adapt. The challenge here is to ensure that we have a focus on results which supports, rather than inhibits, effective feedback loops which promote experimentation and adaptation. This requires a new and more innovative toolkit of methods, and most importantly an institutional and relational framework which uses that information to drive improvement. We call this results-enabled adaptive leadership (because it has a nice acronym: REAL).

What might results-enabled adaptive leadership look like in practice? The Center for Global Development is currently exploring two specific ideas which we believe fit well with an adaptive, iterative and experimental approach to development: Cash on Delivery Aid and Development Impact Bonds.

If you believe that development is a characteristic of a complex adaptive system then both of these ideas are attractive because:

  • They explicitly focus on independently verified, transparently reported outcomes and impact – that is, appropriate measures of what society is trying to achieve – rather than inputs and outputs which are thought to be correlated with progress (but may not be, especially in a complex system).
  • They avoid the need for an ex ante top-down plan, log-frame, budget or activities prescribed by donors.  Because payment is linked only to results when they are achieved, developing countries are free to experiment, learn and adapt.
  • There is no attempt to follow money through the system to show which particular inputs and activities have been financed; it is important for governments to learn about whether certain activities are working, but it is futile for donors to speculate about the extent to which those changes would happen without them.
  • They automatically build in a mechanism for selection by shifting funding to successful approaches and bringing failed approaches safely to a close (something which development cooperation has traditionally found difficult to do).

In a recent talk at USAID, Nancy Birdsall issued the following rallying cry: “It’s time to stop worrying about getting what we’re paying for, and start paying for what we get”.  This principle also underpins another initiative with which CGD is associated, TrAiD+, which calls for the creation of a “market of global results” in which investors could choose what type of projects to fund, based on results achieved. Given the growing role of business and philanthropy in development, this approach may well prove to be attractive to many funders.

These are examples of how a focus on results could help, rather than hinder, the process of adaptation and experimentation in development.  That does not mean that these are the only or even the best approaches (though CGD’s Arvind Subramanian teases his colleagues for offering cash on delivery as a solution to every problem).

Conclusion

The growing movement towards experimentation and iteration is driven by a combination of theory and experience. Though these arguments have rarely been explicitly framed as a response to complexity, as a whole they are entirely consistent with the view that development is an emergent property of a complex system. We in the development community have much to learn from other fields in which thinking about complexity is further advanced.

Many development interventions operate in the space between certainty and chaos: the complexity zone in which we believe that adaptive approaches are not only effective but essential. This is often presented as a decisive argument against results-based approaches to development. We argue that, on the contrary, a focus on results is an indispensable feature of successful adaptive management. The challenge is to do this in a way which avoids simplistic reductionism and promotes an approach which focuses on outcomes rather than process, monitors progress, and scales up success.

We are conscious that this falls well short of a detailed blueprint for how this might work in practice. As they say in the world of tech: that is a feature, not a bug. As Alnoor Ebrahim of Harvard University, one of the leading authorities on development accountability, puts it: “there are no panaceas to results measurement in complex social contexts.” A nuanced approach to results must be based on a thorough assessment of the problems, interventions and contexts. Our point is that there is no contradiction between an iterative, experimental approach and a central place for results in decision-making: on the contrary, a rigorous and energetic focus on results is at the heart of effective adaptation.

Consistent with our view that success is the product of adaptation and evolution – of ideas as well as institutions and networks – we look forward to comments, improvements and corrections to these ideas so that we can get past simplistic extremes on either side and build a shared understanding of how to make this work.

This is the last in a series of three blog posts based on Owen Barder’s presentation on complexity and development. The first blog post asked ‘What is Development?’. The second blog post looked at the UK government’s ‘golden thread’ approach to development through the lens of complexity.

Ben Ramalingam’s book, Aid on the Edge of Chaos, will be published by Oxford University Press in 2013.


11 responses to “If development is complex, is the results agenda bunk?”

  1. Alan Hudson

    Enabling local learning – or who’s/whose learning?

    Complexity – the fact that systems, including social systems, are more than the sum of their parts and that context matters a lot – seems to me to have one, primary, fairly straightforward implication for “the results agenda”.

    That is, those of us working in the aid/development industry should rebalance our focus and give more emphasis to enabling people to innovate, learn and adapt to deal with the challenges that they face in the contexts in which they live, and perhaps a little less to donor efforts to establish “what works” – a phrase that always made me cringe when I was a “Governance Adviser” at DFID – from an outsider’s perspective. Or, less strongly, there should be more linkage between those two angles on results and learning.

    This point is woven into Owen and Ben’s post – getting REAL, and Cash-on-Delivery Aid and its promise of “accountability without attribution” – and is absolutely central to the excellent Andrews/Pritchett/Woolcock paper that is cited. But I think it’s worth emphasising.

    Clearly, outsiders need better information to make their decisions too, but next time I’m in a meeting and people are talking about setting up a database about “what works”, I’ll be asking “how is this exercise going to help the people we’re trying to help, to innovate, learn and help themselves?”

  2. Susan Stout

    Owen,  
    This is your best blog yet on the results agenda. And I completely agree with Alan that the first major implication is that the donor community should be focusing a great deal more effort on setting up and strengthening institutional structures that enable learning and experimentation tightly focused on results at the country level. Think what would happen if even, say, 10% of what the World Bank (just for example) spends every year on its ‘knowledge products’ were to go instead to setting up ‘results learning’ capacities at the regional and country level.
    Thanks for another excellent contribution 

  3. Marcus Leroy

    One can only applaud Owen’s emphasis on complexity and his plea for allowing experimentation. The three-dimensional cube gives us an excellent image of the complex world the aid industry is facing.
    Reading to the end, however, my enthusiasm turned to disappointment where “getting REAL” is seized as an opportunity for singing – once again – the praises of Cash on Delivery, COD. I believe COD is anything but an attractive answer to the complexity development actors are facing. Let me, to make my point, raise a few questions related to Owen’s 4 bullets that explain why COD is so attractive. I do so by quoting as much as possible from Owen’s text.
    1. COD is based on “independently verified, transparently reported outcomes”. It may be easy enough to get such outcomes as long as we are talking about “nutritional supplements or bed nets” or other simple interventions. But what if, as Owen rightly says, “most of what we do to promote development is not in the bottom left hand corner [of the cube]”? That’s where the most transformational development sectors are (see Natsios). But that’s also where the measurement of outcomes is (a) fiendishly difficult and (b) never independent from ideology and power relations.
    2. Where will developing countries find the financial means “to experiment, learn and adapt” if funding is not forthcoming (because of lack of results)? How long will it take before partners yield to the temptation of trying to get results at any price?
    3. How can COD be a means to hold development agencies “to account […] for their results” if at the same time “it is futile for donors to speculate about the extent to which those changes would happen without them”? It’s evident: the idea of using COD as a way to justify aid to the taxpayer bumps into the attribution problem. Or is the idea to take the taxpayer for a ride by concealing the impossibility of attribution?
    4. COD may possibly be a reliable “mechanism for selection by shifting funding to successful approaches” in the simple world of “the bottom left hand corner [of the cube]”. That however is not where “the vast majority of development interventions sit”. Is there any evidence that COD is a reliable “mechanism for selection” “in the complex, ambiguous world in which we live”? 

  4. Mark Moran

    I am so desperate to be with you Owen, but how do we get to the how? You suggest that private finance through development impact bonds may be the way to go, and I agree that private finance will be more disciplined than public finance in procuring outcomes. How, then, when the reality of adaptive management is that it’s often the intermediate or emergent outcomes that become more important than what you were financed to achieve at the outset? How do we wrap a robust accountability framework around the emergent realities that you and the authors you cite so eloquently describe, and which I passionately agree with? And how do we convince financiers, whether private or public, who have laid out money to buy A but get handed something with bits of A and lots of B, C and D? Any tips?

  5. rick davies

    The application of an evolutionary approach to learning and knowledge may in fact be easier than it seems on first reading of Owen’s blog. I have two propositions for consideration.
     
    1. Re Variation: New types of development projects may not be needed. From 2006 to 2010 I led annual reviews of four different maternal and infant health projects in Indonesia. All of these projects were being implemented in multiple districts. In Indonesia district authorities have considerable autonomy. Not surprisingly, the ways the project was being implemented in each district varied, both intentionally and unintentionally. So did the results. But this diversity of contexts, interventions and outcomes was not exploited by the LogFrame-based monitoring systems associated with each project. The LogFrames presented a singular view of “the project”, one where aggregated judgements were needed about the whole set of districts that were involved. Diversity existed but was not being recognised and fully exploited. In my experience this phenomenon is widespread. Development projects are frequently implemented in multiple locations in parallel. In practice implementation often varies across locations, by accident and intention. There is often no shortage of variation. There is however a shortage of attention to such variations. The problem is not so much in project design as in M&E approaches that fail to demand attention to variation – to ranges and exceptions as well as central tendencies and aggregate numbers.
     
    2. Re Selection: Fitness tests are not that difficult to set up, once you recognise and make use of internal diversity. Locations within a project can be rank ordered by expected success, then rank ordered by observed success, using participatory and/or other methods. The rank order correlation of these two measures is a measure of fitness, of design to context. Outliers are the important learning opportunities (high expected & low actual success, low expected & high actual success) that warrant detailed case studies. The other extremes (most expected & actual success, least expected & actual success) also need investigation to make sure the internal causal mechanisms are as per the prior Theory of Change that informed the ranking.
    For more on this issue and Owen’s blog, see my posting on Evolutionary strategies for complex environments.
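    A toy illustration of the rank-order fitness test in point 2, in Python. The districts and scores below are invented for the example, not data from the Indonesian projects:

```python
# Hypothetical districts, with expected success taken from the Theory of
# Change that informed the ranking, and observed success from monitoring.
districts = ["A", "B", "C", "D", "E", "F"]
expected = [0.90, 0.80, 0.60, 0.50, 0.30, 0.20]
observed = [0.85, 0.40, 0.65, 0.55, 0.70, 0.15]

def ranks(xs):
    # Rank 1 = most successful; assumes no ties, to keep the toy simple.
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

exp_rank, obs_rank = ranks(expected), ranks(observed)
n = len(districts)
# Spearman rank correlation: a measure of the fitness of design to context.
rho = 1 - 6 * sum((a - b) ** 2
                  for a, b in zip(exp_rank, obs_rank)) / (n * (n ** 2 - 1))
print(f"fitness of design to context: {rho:.2f}")

# The outliers - the largest gaps between expected and observed rank -
# are the learning opportunities that warrant detailed case studies.
gaps = sorted(zip(districts, (abs(a - b) for a, b in zip(exp_rank, obs_rank))),
              key=lambda t: -t[1])
print("case-study candidates:", gaps[:2])
```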
     

  6. Gyuri Fritsche

    Very nice blog on the importance of systems thinking in managing complexity in complex adaptive systems. As an actor designing and implementing innovative results-based financing programs, I would like to challenge you on presenting Cash on Delivery Aid as a good application of your arguments. Perhaps in sectors other than health (in which I work) it could work; however, all experience to date shows that to set up results-based financing programs that work, you need to change many parameters. Planning and financing, and measuring results rigorously, are all part of such programs. Setting up such programs demands a lot of detail in their design (adapted to the local context). Poorly designed and wrongly implemented results-based financing programs are the main cause of lesser results…

  7. Alex Jacobs

    Great blog. I also just watched your excellent on-line lecture on Development & Complexity. Thank you for both.

    I had the same response as I did to a critique of World Bank policy. Why did we (/policy makers) keep getting it wrong? Why did we keep adopting over-simple models that fly in the face of common sense (and, in NGOs, years of hard won experience about participation)? I think this has major implications for what we do in response to your excellent analysis.

    (Another way of posing the same question: Why did Lawrence Salmen’s work on Beneficiary Assessment in the World Bank stay resolutely sidelined? Alnoor Ebrahim gave an answer in his testimony to the US House of Representatives about internal management practices, p. 5+.)

    Broadly, it seems that more and more money is pushed through bureaucratic systems that are heavily influenced by demands for results. Policy makers and senior managers need simple narratives they can sell to those providing funds and demanding results (whether the public, ministers, MPs or grant makers). And they need bureaucratic systems that at least appear credible as ways of delivering those results.

    As you point out, complexity has been bulldozed in those circumstances – exacerbated by spiralling claims made by development agencies as they compete for funds. Maybe we can only bring it back in by offering practical alternatives to (a) the simple narrative, and (b) the bureaucratic systems.

    I’m not sure that ‘development as an emergent property of complex systems’ is simple enough for (a)! And it’s striking to see the uncertainty about Cash on Delivery Aid in the comments above.

    Any solutions have to be able to survive the realities of managing aid, including the ambition of leaders and the role of middle managers – which is tough. I think there’s mileage in focusing on outsiders’ role in ‘providing assistance’ rather than ‘doing development’ and I particularly like your emphasis on feedback systems. I am looking forward to experimenting further with both at Plan – and to staying in touch on these critical questions.

  8. Bart Doorneweert

    Dear Owen,

    Thank you for this post! I am working on a project management method myself for private sector development projects, using lean startup and customer development approaches to business model innovation or generation. I really value the insights you provide here on the importance of linking experimentation and accountability.

    I have written a post on my approach a couple of months ago here: 
    http://valuechaingeneration.wordpress.com/2012/06/04/private-sector-development-projects-and-the-pivot/

    I would plead for tailoring private sector development project performance metrics to validate business model hypotheses. That would give one the ability to judge a development project like an investor would judge an entrepreneur’s business model: not on the results of execution, but on the results of search.

    I would be keen to have your thoughts on my idea. If you know of any projects which are currently applying more agile and iterative project design methods, then I’d really like to learn of them. Hope my thoughts merit a response

    Bart 

  9. Jake Allen

    Very interesting. I was only last week reading a separate blog on the idea of ‘failing forward’, which links very closely to this post. I think that it’s perfectly possible for an approach like this to be relatively easily rolled out on quite a wide scale. It just won’t be, at least not for a while.

    The rather wearying realpolitik of development is that donors, especially government donors, tend to have two priorities: spending money and then showing what this bought. The political cycle and the nature of external attention and scrutiny work to keep ideas that encourage higher risk and less certainty at the fringes.

    But that’s not to say there isn’t interest within donors for such ideas – there certainly is – though I would also caution against even thinking about one donor as a single entity, when often there are slightly groovier head office people who have time to toy around with ideas like this (without committing to them), whereas country office staff are under huge pressure to spend and account. The latter is not a happy environment for experimentation, and valuable knowledge can be overlooked, as Rick’s post shows.

    My feeling at this stage is that it’s about finding small ways to test ideas like this – be that a small part of a more ‘traditional’ project, or using more flexible non-donor funds to do some experimenting. I’m looking at a small ‘venture capital’ element of a programme at the moment specifically along these lines. This will then hopefully start to turn the tanker of the status quo in terms of intervention design, management and evaluation, hand-in-hand with a constant flow of less formal pressure via blogs like this, meetings, workshops etc.

    There’s also the role of INGOs. Too often we can take on a project which is to all intents and purposes ‘designed’, and we run with it. We need to be better at saying ‘hold on’, and really pulling the initial idea apart; not submitting so easily to ridiculously short inception periods; being clear that THINGS WILL CHANGE; and so setting up the interventions on the right footing.

    Looking forward to keeping in touch with this ongoing debate.

  10. […] all play their part. In the complex world of international development (Owen Barders' recent work on complexity theory) developing the skills to diagnose and analyse failure is a real […]

  11. […] Barder, O. (2012) ‘If Development Is Complex, Is the Results Agenda Bunk?’. Owen Abroad Blog, 7 September, https://owen.org/blog/5872 […]