Causality & Assumptions
This section of the workshops is based on theory of change, backcasting and innovation management frameworks, as well as action-learning/research principles.
As we learnt from the Complexity section, causality isn’t as clear as we used to think. Often we can only see a clear causal path in retrospect; when we plan forward, we assume chains of causality which may not hold, even if they did in the past (because of emergence).
This part of the process aims to expose assumptions in our thinking and surface new insights. It can be used retrospectively, to draw conclusions about a project which has already happened, or as a future-thinking tool to design for certain forms of impact and identify important areas to test.

Example

This structure was used in the workshop to surface the phases of the research project, and to highlight a range of outcomes and outputs, as well as the activities which contributed to them.
It can be used to map an existing project retrospectively, but also to backcast from 'desired outcomes' through to the activities we think will generate them. This then enables us, as a team, to highlight assumptions in the 'if this then that' scenarios, exposing weaknesses which should be tested through experimentation.

Process

FOR RETROSPECTING PAST PROJECTS

1. Using the refined insights from the Capitals Framework, we work with a small number of these, and lay them onto a backcasting framework: Activities, Outputs, Outcomes/Impact.
2. The goal is to build a picture of the project from input activities (what actions the Lab took), outputs (what happened because of this), and outcomes (what change happened because of these outputs). Generally we will work backwards from right (outcomes) to left (activities).
3. As we build a picture using some of the insights from the capitals framework, and write new activities and perhaps outputs and outcomes, we are trying to draw links between the columns, to build a picture of causality.
4. Once we have a picture, we need to interrogate whether the causality is actually true, or whether other factors may have played a role. This can be somewhat subjective, but can also be backed up with evaluation data if it’s available. List the assumptions in a new column on the far left.
5. Group reflection: what have we learnt through this process? Is there anything surprising, or that we previously were unaware of? Where were the weakest links of causality? Did we think they were a big leap of faith before we started?
6. Optional: save these insights, assumptions and their context, so they can feed into future work.
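If a team wants to save these insights digitally (the optional final step), the map itself is just three columns plus the causal links between them. The sketch below is one hypothetical way to record it; all names and example entries are illustrative, not part of the workshop materials.

```python
# A minimal sketch (all names and entries hypothetical) of recording a
# retrospective backcasting map: activities, outputs, outcomes, and the
# causal links between them, each link carrying its underlying assumption.
from dataclasses import dataclass, field

@dataclass
class Link:
    cause: str          # an activity or output
    effect: str         # the output or outcome it is believed to produce
    assumption: str     # the 'if this then that' belief behind the link
    evidence: str = ""  # evaluation data backing the link, if available

@dataclass
class BackcastMap:
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    links: list = field(default_factory=list)

    def assumptions(self):
        """Collect every link's assumption into the 'far-left column'."""
        return [link.assumption for link in self.links]

# Usage: map one outcome back through an output to the activity
# believed to have caused it.
m = BackcastMap(
    activities=["community workshop"],
    outputs=["shared vocabulary"],
    outcomes=["stronger local partnerships"],
)
m.links.append(Link("community workshop", "shared vocabulary",
                    "participants kept using the terms afterwards"))
m.links.append(Link("shared vocabulary", "stronger local partnerships",
                    "a common language lowered barriers to collaboration"))
print(m.assumptions())
```

Keeping assumptions attached to their links, rather than in a separate list, preserves the context the group reflection needs when the insights feed into future work.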

FOR DESIGNING A PROJECT

1. Using the refined insights from the Capitals Framework, we work with a small number of these, and lay them onto a backcasting framework: Activities, Outputs, Outcomes/Impact.
2. The goal is to build a picture of a project with input activities (what actions the Lab will need to / plans to take), outputs (what is expected to happen because of these), and outcomes (what change we expect to happen because of these outputs). Generally we will work backwards from right (outcomes) to left (activities).
3. As we build a picture using some of the insights from the capitals framework, and write new activities and perhaps outputs and outcomes, we are trying to draw links between the columns, to build a picture of the causality / logic we’re using to justify the project’s work.
4. Once we have a picture, we need to interrogate whether the causality is actually true, or just an assumption. This needs a solid foundation of trust and a culture of generative critique, as sometimes you will be questioning your colleagues’ perspectives. List the assumptions in a new column on the far left, and rank them in priority of ‘riskiness’ to the project - the Lab needs to decide what risk looks like for themselves.
5. Group reflection: what have we learnt through this process? Is there anything surprising, or that we previously were unaware of? Where were the weakest links of causality? Did we identify any assumptions which may break the project if they’re not correct?
6. Optional: write scenarios for specific assumptions - what would happen if they were right / wrong / somewhere in between. Writing these scenarios helps to envision alternative paths for the project if things do not work out, breaks down the ‘program logic’ mentality of theory of change, and supports Labs to see the future as fluid and able to be shaped.
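The riskiness ranking in step 4 can be kept as simply as a scored list. This sketch is hypothetical throughout (the assumptions and scores are invented examples); the Lab decides for itself what risk means and how to score it.

```python
# A minimal sketch (hypothetical assumptions and scores) of ranking
# assumptions by 'riskiness', so the most project-breaking ones rise
# to the top of the testing queue.
assumptions = [
    {"assumption": "funders will renew support", "riskiness": 3},
    {"assumption": "residents attend the sessions", "riskiness": 5},
    {"assumption": "data-sharing agreement is signed", "riskiness": 4},
]

# Sort descending: test the riskiest assumptions first.
by_risk = sorted(assumptions, key=lambda a: a["riskiness"], reverse=True)
for a in by_risk:
    print(f"risk {a['riskiness']}: {a['assumption']}")
```

A simple descending sort is enough here; the value comes from the team agreeing on the scores, not from the tooling.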
