
      The contribution of a ‘synergic theory of change’ approach to democratising evaluation

      Research for All
      UCL Press
      theory of change, democratisation, co-production, evaluation


            This paper focuses on an evaluation of three projects working with young people in innovative ways to tackle societal alcohol misuse. Rather than presenting the findings of the evaluation per se, the paper presents learning from using theory-based approaches in a collaborative way to evaluate these complex, multi-strand initiatives. Traditional evaluations conducted by academics without collaboration with stakeholders can fail to meet the needs of those delivering interventions. Drawing on interviews with practitioners involved in delivering the projects, the paper adds new evidence to epistemological debates by introducing the notion of a ‘synergic theory of change’, whereby academic expertise and the skills, knowledge and experiences of stakeholders are subject to dialogue, and a theory of change becomes the result of collaborative consensus building. This way of using theory of change in evaluation requires researchers to work in a spirit of co-production and dialogue, and it can move evaluation away from being an exercise that seeks to judge interventions and, by extension, practitioners, to one which prioritises a shared learning journey. Using a synergic theory of change approach has the potential to change the nature of evaluation and lead to a different kind of relationship between researchers and practitioners than traditional methods-based approaches allow.


            Key messages
            Theory of change approaches are growing in popularity, but they are often developed by evaluators. It is possible to create a theory of change in a collaborative way which enhances the relevance and utility of evaluation for project leaders and workers, and enables evaluation to become an embedded tool in their project development.
            Investing time at the beginning of a project for developing a shared theory of change through dialogue can enable project workers to understand how they can be a partner in the evaluation process, and can enable evaluators to better understand how to assist project development for a shared learning journey.
            Using a synergic theory of change approach has the potential to change the nature of evaluation and lead to a different kind of relationship between researchers/evaluators and practitioners than traditional methods-based approaches allow.


            Much attention has concentrated in recent years on the need for professionals to work together in order to deliver the most effective services and to achieve the best outcomes for children, families and communities, in the belief that the holistic solutions offered by a multi- or inter-agency response are more effective at tackling complex social problems (Melville et al., 2015). While evidence is mounting that collaborative working can achieve outcomes that are over and above what can be achieved by any one organisation or sector working alone (Cummings et al., 2011; Dyson and Kerr, 2013), academic researchers have not traditionally been seen as key actors in this collaboration, but rather as sitting somewhat distant and distinct from professional service delivery, particularly when evaluating it.

            Existing models of academic evaluation of service delivery vary, but they usually have at their heart an ambition to understand interventions or initiatives, and/or to assess the extent to which they meet their objectives in terms of what they are intended to achieve. The way in which an evaluation is designed will influence the ways in which data are collected and analysed. Experimental designs such as randomised controlled trials will assign control groups and place their emphasis on exploring statistical relationships between cause and effect, focusing on ‘what works’, but they can struggle to explain why (Deaton and Cartwright, 2018). In circumstances where establishing control groups is not possible, designs such as case studies can be effective in dealing with complexity, but they struggle to generalise from findings, limiting the extent to which they can inform future service development (Yin, 2013).

            Indeed, for some in the academy, maintaining complete academic independence is seen to be the best way to ensure scientific rigour and validity when researching social life, whichever approach to evaluation is taken. This can, however, lead to a variety of problems, including a lack of understanding, on the part of both academics and practitioners, of the ways and contexts in which the other works, and domains of expertise that are little understood and not crossed or shared effectively (Clark et al., 2017a). Consequently, evaluation that is undertaken by academics in order to understand the outcomes and impacts of community interventions without collaboration with stakeholders can fail to meet the needs of those tasked with delivering such interventions.

            Some researchers have made attempts to overcome these challenges by developing a participatory research ethos, and developing new methods to encourage inclusive research which change the emphasis of research from doing ‘on’ to doing ‘with’ (see, for example, Bourke, 2009; Clark et al., 2013; Kellett, 2005; Nind, 2014; Facer and Enright, 2016; Banks et al., 2019). Nevertheless, despite the growing popularity of participatory research, the academy (and, indeed, researchers of all kinds) has been slow to shift in respect of evaluative work, and levels of collaboration vary depending on the ethos of the researcher and the scope of the research being conducted (Clark and Laing, 2012).

            Opportunities to encourage closer relationships between academics and others have been stimulated by the introduction of the notion of ‘societal impact’ into regulatory frameworks (Laing et al., 2018). Many universities in the UK are beginning to position themselves as civic institutions, involved in, and contributing to, the communities in which they are situated (Goddard, 2016; Shucksmith, 2016). My own university seeks to work with communities to address societal challenges (see, for example, Goddard and Tewdwr-Jones, 2016). Nevertheless, the infrastructure in place for academic regulation struggles to take account of this new relationship for, and with, society (Campbell and Vanderhoven, 2016). Ethics committee procedures still require researchers to position those being researched as subjects (Nind et al., 2013), reinforcing notions of power and status. There remains a power dynamic inherent in the relationship between researcher and service providers, particularly during evaluative research. Target-driven management cultures (for example, performance by results) have led evaluation to be seen as a test which providers must pass, rather than a process of learning, and researchers have often been perceived as ‘judge’ (Ensminger, 2015). Academic researchers have reported barriers to undertaking research in terms of ‘deficits’ among the practice community, such as a lack of receptivity based on previous poor experiences of research, or a lack of knowledge of research and poor communication (Heubner, 2000). Evaluation can be seen as the task or domain of the researcher, and practitioners can be perceived to lack an understanding of their role in the process of evaluation. Yet, as researchers, we have a responsibility not just to produce good science, but also to produce findings that are useful for policy and practice.
            This paper provides a counter to those deficit notions of practitioners and evaluation. It explores how a collaborative approach to theory-based evaluation can, in the right circumstances, harness and value the expertise of all involved, although researchers must change their own approach for it to be effective.

            Previous studies using theory-based approaches have found that these approaches can increase stakeholder engagement in evaluation (De Silva et al., 2014), and can enhance participation (Jackson, 2013), having the potential to produce better data. Both De Silva et al. (2014) and Jackson (2013) advocate more dialogue and knowledge exchange between academics and practitioners, and the involvement of beneficiaries in the development of a theory of change. Both practitioners and evaluators often ultimately share the same goal – to produce good outcomes for beneficiaries and social value – but this may not be articulated as a shared goal, particularly when researchers are preoccupied by measurement and scientific method, and by what Dura et al. (2014) term their ‘trained incapacities’, and practitioners are under pressure to report outputs to funders. Key questions become salient: How can evaluators and practitioners work together effectively? How can they be supported to understand each other’s perspectives? How can knowledge be shared and used effectively?

            I suggest that the answers to these questions lie in the co-production and democratisation of evaluation, and that one way to achieve this lies in the use of a collaborative theory-based methodology. This paper presents data from a three-year evaluation conducted by the author and others, and explores how using a theory of change approach changed the nature of the evaluation and led to a different kind of relationship between researchers and practitioners than traditional methods-based approaches allow. The paper begins by describing a theory of change approach to evaluation, and outlines the context in which it was used in three youth projects designed to address alcohol misuse. This paper does not describe in detail the results of the evaluation, which can be found in the final report (Clark et al., 2017b), but rather, it draws on a series of interviews with project staff about their experiences of collaborating with researchers to develop and utilise what I have termed a ‘synergic theory of change’. The paper finishes by analysing the potential of this approach to democratise evaluation.

            Theory-based methods for evaluation

            Theory-based methods for evaluation have been growing in popularity in recent times, both in the developed world and in developing countries, as a response to the need for frameworks that can take account of the complexity and change that underlie interventions in the social world. Theory-based approaches embody a set of research strategies that rest on the assumption that knowledge is socially constructed, and attempt to provide a conceptual framework for analysis by situating research as a process of learning. These methods provide a way of conceptualising programmes from inception through to implementation and the evaluation of outcomes, in order to develop an understanding of how they work, for whom and in what circumstances (Dyson and Todd, 2010).

            Theory-based methods are referred to in a variety of ways (for example, programme theory, implementation theory, realistic evaluation) and are enacted differently, but they usually incorporate a theory of change in varying degrees of complexity from the simplest logic model through to tracking changes arising from multiple strands of action. Champions of this approach have emerged throughout the world, most notably in the US (for example, Weiss, 2000; Chen, 2015), in the UK (for example, Pawson, 2013) and in Australia (for example, Rogers, 2008), and theory of change methodology is increasingly being used in international development to evaluate interventions in the developing world. Theory-based approaches have arisen as an alternative to methods-driven approaches, which often do not take into account stakeholders’ needs and are driven by the demands of method rather than theory, thus rendering them inflexible and inadequate for addressing complex, multi-strand initiatives (Chen, 2015).

            A theory of change approach is a theory-based framework for planning, implementing, evaluating or reviewing change. It normally takes on a diagrammatic format to articulate a theory of how an initiative is intended to work (Laing and Todd, 2015). It can be utilised in different ways in order to describe how an intended outcome is expected to achieve change for beneficiaries, within the context in which it is being enacted, and given the constraints under which it is working. Table 1 demonstrates how a simple theory of change works.

            Table 1.

            Establishing a theory of change (Source: Author, 2022)

            The starting situation: establishing the rationale for intervention; clarifying the problem it is addressing; identifying the context in which it is being implemented; deciding what needs to change; exposing assumptions that exist.
            Strands of action: establishing what actions are being undertaken in order to change the starting situation and achieve different outcomes for beneficiaries; identifying how changes will be made; identifying what will be different to usual practice.
            Steps of change: establishing a chain to indicate how things will change for beneficiaries; identifying the desired effect of the actions, and for whom; making explicit the order in which changes will be made.
            Intended outcomes: identifying how the starting situation will change.
            Risks and opportunities: predicting what might prevent change from happening, or indeed assist the desired change.
            Evidence: working out how the articulated change can be measured, and how change is identified as happening.
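            The framework in Table 1 can also be read as a simple data structure. The sketch below is purely illustrative (the class and field names are hypothetical, not drawn from the paper or any published evaluation toolkit); it simply shows how the six elements of the table fit together as one object:

```python
from dataclasses import dataclass, field

# Purely illustrative sketch: hypothetical names, not an actual evaluation tool.
@dataclass
class TheoryOfChange:
    starting_situation: str                   # rationale, problem, context, assumptions
    strands_of_action: list[str]              # actions taken to change the starting situation
    steps_of_change: list[str]                # ordered chain of changes for beneficiaries
    intended_outcomes: list[str]              # how the starting situation will change
    risks_and_opportunities: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)  # how change will be measured

# A hypothetical, much-simplified example, loosely echoing the kind of
# youth project described later in the paper.
toc = TheoryOfChange(
    starting_situation="Alcohol misuse among young people in the region",
    strands_of_action=["Long-term peer group work", "Youth-led film project"],
    steps_of_change=[
        "Young people gain knowledge of alcohol-related harm",
        "Attitudes to risky drinking shift",
        "Peer norms around alcohol change",
    ],
    intended_outcomes=["Protective peer networks that support harm reduction"],
    evidence=["Interviews", "Before-and-after survey data"],
)
```

            Holding the elements in one structure mirrors the logic of the table: each strand of action should ultimately be traceable through the steps of change to an intended outcome and a source of evidence.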

            Once the theory has been established, it can be put into visual form in order to represent it simply and, in some cases, to act as a check on different stakeholders’ understandings of it. The theory of change approach has sometimes operated in a top-down manner, privileging academic knowledge and research to impose an evaluation structure on an initiative. In other words, academics build a theory based on their existing knowledge from the literature about how initiatives work, without involving stakeholders in the process. However, this approach often cannot take into account the intricacies of programmes, especially complex ones, and researchers lack the insider knowledge that could strengthen theory building and the production of new knowledge. This is particularly the case when a theory of change is developed late in the process, rather than as part of the planning of an intervention. Such a deductive approach takes a reductionist attitude to programme development, implementation and evaluation, and fails to challenge traditional relationships between researchers and stakeholders.

            Nevertheless, developing a theory of change that can take account of many stakeholder perspectives, and incorporate research evidence, experiential knowledge and practice insights, is difficult. There can be disagreement about which theories should be prioritised. But where there are complex or complicated interventions, with little previous research evidence (although perhaps elements of previous theories that can be built upon), something different is needed. As Chen (2015: 386) has advocated, ‘Perhaps collaborative efforts could help to develop a new kind of middle-range intervention theory to which both researchers and stakeholders could relate, thereby narrowing the gap between science and service.’

            I therefore approached the evaluation with a collaborative ethos, determined to find a middle ground, by positioning myself as a critical friend. I have named this approach a ‘synergic theory of change’.

            The role of the researcher

            Positivist traditions maintain that researchers should be almost invisible in the research process. More constructivist approaches view the role of the researcher as being to mediate and interpret data (whether quantitative or qualitative). This can be done with varying levels of involvement or insider-outsider participation (Punch, 1998). A theory of change approach can be developed within either paradigm. Some theories of change are developed entirely from empirical research findings by researchers, while others are developed entirely from the practice knowledge of stakeholders (the mental model). Different stakeholders can hold different theories, drawing on different bodies of knowledge and using evidence in different ways. Developing a theory of change using both sets of knowledge requires collaboration, and thus has the potential to shift the relationship between researchers and researched towards one of co-production, drawing on the individual skills and experience of everyone involved, and positioning the researcher as a critical friend who can help negotiate conflict, challenge and consensus. This can be uncomfortable ground for the researcher to walk. Critics of this approach might point to the risk of influencing programme design and implementation, and so producing less scientific or robust research in the process. However, in order to make evaluation relevant and useful for those delivering policy and practice, it needs to take account of stakeholder views. Equally, using a completely mental model of theory of change (Funnell and Rogers, 2011) means privileging the knowledge and experience of practitioners, and not utilising the expertise and evidence that researchers can bring: the researcher simply acts as an interpreter and guide.

            What is needed, therefore, is a hybrid model, or a ‘synergic theory of change’. The word ‘synergic’ implies a working together that is collaborative and cooperative, and that brings benefits over and above those that can be achieved by working alone. I suggest that this manifests itself in a theory of change as a model of mutual input, with the researcher acting as interpreter and guide, but also presenting challenge based on academic expertise, and being open to challenge from tacit expertise. This can be made possible by situating the evaluation as a learning journey, a journey of exploration for both evaluator and practitioner, for which the theory of change acts as a framework for action. This approach then sets the researcher back on a comfortable, more familiar, path. This journey needs to be made explicit, however, as Evans (2014: 356) points out: ‘Unless we embrace and make clear our critical stance, we might risk being co-opted into helping reproduce ameliorative practices (focused on alleviating symptoms) rather than initiating transformative practices.’

            Evans (2014: 356) sees the role of critical friend as a moral responsibility, given the magnitude and complexity of social problems, and the need to take action to ameliorate them, and describes the role thus: ‘Based in an enduring relationship of trust and mutual respect, the critical friend joins with research and action partners to subject community practice to deliberate and continuing critique in order to illuminate relations of power and shape action to better achieve mutually agreed social justice objectives.’ Figure 1 depicts a visual representation of a synergic theory of change.

            Figure 1.

            The framework for a synergic theory of change (Source: Author, 2022)

            The projects

            Three projects were commissioned by a consortium of funding agencies that aimed to find new ways to tackle alcohol misuse in the UK. Working in partnership with local schools, health practitioners and other local organisations, the project teams had already come up with their ideas for action, in consultation with young people, and they were about to implement their projects as the evaluation started. The projects were trying out innovative ideas designed to tackle the causes of alcohol misuse rather than its effects, and to situate young people as agents in the change process, rather than as the cause of problems associated with alcohol misuse.

            The funders specified two long-term outcomes, but also set each project its own specific outcomes in relation to peers, parents or communities. This meant that each project was different and needed a bespoke evaluation framework, but one that could also provide an overview across all three. The projects were multi-strand and complex, working in areas with different needs and cultures of alcohol misuse, and with different histories of multi-agency working. The task was to evaluate which projects were successful in meeting their outcomes, and how. The projects were encouraged to ensure the participation of young people (although this was not defined), and to enable capacity building so that communities could effect change themselves. Given the complexity of the actions proposed, and outcomes that were not easily measurable, a theory of change framework seemed suitable and appropriate in this context. Table 2 describes the models the projects adopted, and the outcomes towards which the funders asked them to work.

            Table 2.

            The projects and their intended outcomes (Source: Author, 2022)

            Project 1
            Model: Long-term group work with identified friendship groups from across the region, centred on addressing risky behaviour and strengthening peer relationships; and a film project with young people designed to produce materials to share with peers about alcohol.
            Outcomes expected:
            To understand how a young person develops skills and awareness over time to address alcohol-related harm.
            To improve young people’s decision-making capacity in relation to the choices they make around alcohol.
            To examine changes in attitude and behaviour towards alcohol consumption and associated risky behaviour at both the individual and peer group levels.
            To develop protective peer networks which support harm reduction, moderate behaviour around alcohol, support positive decision making and build resilience.

            Project 2
            Model: A variety of short-term group-work projects with young people to identify key messages about alcohol to share with parents and other adults, using a variety of media, including film, art, animation and radio; and a young educators’ group to provide information about alcohol to adults and other young people.
            Outcomes expected:
            To improve family understanding of alcohol-related harm, and to enhance how families address and support their children with issues and concerns in relation to alcohol.
            To develop positive approaches to influence young people’s drinking habits, attitudes and beliefs about alcohol.
            To promote and improve parents’/carers’ understanding of how their own alcohol use and associated risks may impact on their children.

            Project 3
            Model: A community development approach in two neighbourhoods, enabling and empowering young people to find and implement solutions to address problems identified by the community. Specific methods included a photography project, a radio project, community events and outreach.
            Outcomes expected:
            To understand how an intervention or approach may engage with a local community on matters relating to alcohol-related harm.
            To increase the contribution that young people can make within their local community.
            To reduce the number of alcohol-related harm incidents across a community or local population.

            Developing theories of change and evidencing them

            Developing synergic theories of change (one for each project) involved several stages. The first was a series of face-to-face meetings with strategic and operational staff from all projects to establish relationships, explain the evaluation strategy, and discuss roles and expectations. The second was a series of interviews during which an initial theory of change was developed, incorporating the knowledge and experience of practitioners and drawing on existing research evidence. This was then depicted in a diagram, and discussed and modified until consensus was reached with the projects on a coherent, workable and measurable theory of change. This included a situational analysis, an outline of the actions to be taken, an explicit statement of any risks to the projects, and a clear chain of steps of change for beneficiaries leading to intermediate and longer-term outcomes. Through the use of language (‘your’ theory of change) and through being invited to annotate and redesign the diagrams, projects were encouraged to see the theories of change as tools they could use themselves for planning and reviewing their work, not as something imposed on them purely for evidencing purposes. The theories of change developed with the projects incorporated elements of existing theory (for example, theories of community development), but they also drew on practitioner knowledge and experience of the context in which they were working.

            The third stage involved drawing up an evaluation plan in collaboration with the projects, utilising the data collection strategies the projects had already devised, advising on methods they could incorporate into their practice, and identifying a specific data collection role for the evaluator. These three stages happened in the early months of the evaluation, and they took some time to complete: project staff have only limited time and capacity to give to evaluation, and time for reflection and consensus building needed to be built in. The fourth stage happened approximately halfway through the evaluation, when projects were invited to revisit their theory of change in the light of the evidence collected to that point, and to re-evaluate whether the theory still seemed to hold true, or whether changes needed to be made, perhaps because actions had been modified or because steps of change were not happening in the way that had been predicted. Regular learning set events took place throughout the evaluation for practitioners and evaluators to share learning and talk about their theories of change.
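            The mid-point review in the fourth stage amounts to a simple comparison: which predicted steps of change does the evidence so far support, and which need revisiting with the project? As a purely illustrative sketch (the step descriptions and variable names below are hypothetical, not data from the evaluation), the check looks like this:

```python
# Hypothetical predicted steps of change from a project's theory of change.
predicted_steps = [
    "young people gain knowledge about alcohol-related harm",
    "attitudes to risky drinking shift",
    "peer norms around alcohol change",
]

# Steps that the evidence collected by the mid-point appears to support
# (again hypothetical, purely for illustration).
supported_steps = {
    "young people gain knowledge about alcohol-related harm",
    "attitudes to risky drinking shift",
}

# Steps to revisit with the project: either the theory or the actions
# may need revising before the second half of the evaluation.
to_revisit = [step for step in predicted_steps if step not in supported_steps]
# to_revisit == ["peer norms around alcohol change"]
```

            In the evaluation itself this comparison was, of course, a matter of dialogue rather than computation; the point is only that the theory of change makes the predicted steps explicit enough for such a comparison to be made at all.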

            The data collection for this evaluation used a pragmatic paradigm, utilising mixed methods as appropriate (Burke Johnson et al., 2007). Established, validated tools were used alongside more qualitative exploration of users’ and beneficiaries’ experiences. Examples of data collected included: interviews with staff, young people and community members; diaries; videos; visual art; Youth Star assessment and follow-up data; before-and-after AUDIT-C data; photographs; evaluation forms; and observations. The evaluators, project staff and young people were constantly on the lookout for ways to capture evidence of what they were doing, and the theory of change gave them a framework with which to focus on the key items for discovery.

            The evaluators maintained close involvement with the projects throughout. The following data come from a thematic analysis of interviews conducted with eight management and operational staff from across the three projects, together with fieldwork observations made since the start of the evaluation. The next section presents an analysis of how theory of change worked in this evaluation context, and of the barriers and facilitators to democratising evaluation. The interviews were analysed using an inductive method which drew out key themes in relation to the evaluation methodology (Braun and Clarke, 2006). The research was undertaken in accordance with the British Educational Research Association guidelines, and it received ethical approval from Newcastle University’s ethics committee.

            Democratising evaluation processes

            The theories of change were developed dialogically and collaboratively, with the intention of enabling both evaluators and practitioners to demonstrate and value the domains of expertise each brought, and yet to start to understand the contribution of the other in terms of evaluation practice. Evaluation was not seen solely as the job of the evaluator, but as a shared endeavour which developed from the varied ontologies presented by both the researchers and the practitioners.

            A key issue for the project teams was their previous experiences and understandings of what evaluation meant. They told me that, in previous projects, evaluation had tended to concentrate on measuring end outcomes, and whether they had delivered the outputs they promised. Evaluation was thus seen as an additional and external system of inspection and judgement which bore no relation to their daily practice, and which was not embedded within the project. Their previous experience of reporting to funders from self-evaluations was often based on the evidence of delivery and some assessment of end outcomes, but did not require the level of self-reflection or criticality that was involved in developing, evidencing and reviewing a synergic theory of change, and implementing changes in practice as a result. Through the process of developing a theory of change, some practitioners were able to see how working collaboratively in this way could give them a more useful perspective on their work:

            I think the world is turning, because everybody is much more focused on outcomes, and I think the [previous] model isn’t necessarily going to work for the future because it is possible to do… centred on outputs and outcomes and a project can be successful, even if neither of the outcomes or the outputs have been achieved, but the methodology has been gone through. So you can say money was spent, and we had 27 classes and 550 people turned up, because that’s the emphasis, rather than – do people have behavioural change as a result of it, or attitudinal change? What are the outcomes? What did you see people doing? (Director, Project 2)

            Part of developing a theory of change was then to develop a workable evaluation plan based on that theory, in order to build a portfolio of evidence to support, or reject, that theory. This was a new concept for the project teams, who were used to evaluators privileging certain kinds of data (such as numerical data from management information systems, performance data, or a survey designed and imposed on them by evaluators), and discounting the other kinds of evidence they collected themselves. It meant that project staff needed to look at the data they were collecting with a much more critical eye, and to develop new ways of evidencing their theories. They were then able to suggest ways in which the evaluators could work with them to collect new kinds of data:

            What’s definitely highlighted to me using the theory of change… is, I suppose, the value or the validity of the evidence that you’ve got, because a lot of what was acceptable… was that a photograph was enough to suggest whatever you wanted it to suggest really, and now I look at things and think, well what does that tell you? What does that show? Is this evidence? But in a way, the theory of change goes into a whole – it’s a questioning tool. (Director, Project 2)

            This growing criticality by project staff about the data they were collecting was of enormous value to the evaluators, as open and interesting discussions could take place about evidence without the researcher having to assume an authoritative position as the person expert in research. Projects 2 and 3 understood early on that the theory of change is designed to look at the mechanisms by which outcomes are reached, not just those outcomes themselves, and this was welcomed by them, as it positioned the evaluation as a learning journey, not as a ‘pass or fail’ inspection:

            I think for evaluation, it clicked for me because I think we could actually start to demonstrate impact, both on the different strands of activity that were going on in the project, but then also the project as a whole, because you can use that as a process, so it feels like evaluation is starting to become a bit more tangible. (Project manager, Project 3)

            This positioning of evaluation as a learning journey for all involved meant that the idea of ‘failure’ was re-conceptualised:

            At the same time, it [the theory of change] also allows for failure. I’ve never felt, all the way along, if something has failed or gone wrong [with the project] or we haven’t had anybody turning up, or where there are no outcomes, that that has been seen as a failure, or that’s something that somehow we shouldn’t report – if anything, yes, we have to report it because from that platform you then get guidance and reworking the theory of change model to know how to go forwards. In a way, it’s more of a perfect tool because it allows failure as well as success, and the culture isn’t geared towards failure really. (Director, Project 2)

            There was also a realisation that their theories of change were flexible enough to incorporate change and evidence in a way that other evaluation frameworks could not. This fitted much better with what they needed in practice than more traditional methods-based approaches:

            It mirrors much more the human experience, which is about adapting as you go along, and this model I think allows for flexibility and adaptation more than others, so in other words, as your outcomes are being discovered and the project is changing, you can actually change it in the theory of change model without having to go back and reinvent it. (Director, Project 2)

            The learning there is how projects shift and change and the need to keep it alive and active, so I suppose we are looking at a framework that can perhaps encompass that. (Project manager, Project 3)

            Project 3 planned to repeat their work in a different geographical area, and they found that the theory of change could provide a framework for implementing this, and could help them stay focused on doing so in a way that would enable cross-comparison. Through collaboration between evaluators and practitioners, using the theory of change as a common tool to frame dialogue, evidence change and assist in project planning, Projects 2 and 3 felt that evaluation had become much more embedded in their practice, rather than being an add-on unconnected to their delivery:

            It’s a planning tool with a bit of evaluation stuck on the end… And that’s the key point – it’s the stuck on the end bit of it, isn’t it? Being able to go through it with you throughout the three years, refreshes it, but also keeps us on track, so the evaluation isn’t a stuck-on in year three and you are going through all your evidence saying, well, I’ve got a photograph which says this, it’s much more comprehensive, and much more evidence based than some of the other tools that are out there. (Director and project worker, Project 2)

            This framework starts to crystallise evaluation, it will enable us to look at impact in a much clearer and stronger way. We can look at the best way to start to embed evaluation into what we are doing. (Project manager, Project 3)

            The theory of change enabled them to see not just where they themselves could contribute to evaluation, but where project users and beneficiaries could be involved. Work was conducted with young people to contribute to the theories of change, and, later in the project, young people were recruited and trained as peer researchers. This fitted closely with the projects’ ethos of empowering young people and, at the same time, enabled them to make a valued contribution to the evaluation (Clark and Laing, 2018).

            Enhancing understanding, exposing assumptions

            One of the underlying purposes of articulating a theory of change is to clarify assumptions about the way in which projects work, and to expose the theoretical, ideological and political positions of all involved. During early meetings, it was made clear by the funding partnership that the projects were intended to be innovative and to try out things that had never been done before. This included tackling alcohol misuse in communities by empowering young people to become agents of change. This inevitably led the evaluators to think in a certain way about how the projects were acting, and to ask questions accordingly. Project 3 could easily theorise about the ways in which they hoped this could be achieved:

            You encourage them to think differently, then you encourage them to speak out, and then you encourage them to change, which is part of what the thinking differently is all about, it’s about thinking, speaking and acting. (Project worker, Project 3)

            Nevertheless, it became clear after the production of the first interim evaluation report that not all projects saw themselves as acting in this way. Project 1 challenged the evaluators’ assumptions, and modified their existing theory of change to explain that ‘for our project, it will not have that sort of impact’ (Project worker, Project 1). Part of the problem had been the use of terminology. For example, developing ‘positive friendships’ had been a key step of change described for young people, and yet further discussion revealed that while the evaluators had interpreted this to mean friendships that were a positive peer influence for change, the project defined positive friendships as a resilience factor for young people, and had not seen any implications from this for peer influence (even though this was one of the outcomes requested of them by the funders). Articulating this enabled the evaluators to see that the project was offering a therapeutic service to those young people participating, but was not expecting those young people to become agents of change for their peers. Although the synergic theory of change was not able to prevent such misunderstandings, it became a way of exposing them and of talking about them in a way that may not otherwise have been possible had the evaluators simply used a methods-based approach. Indeed, it was Project 1’s expectation of an evaluation based on performance data and measurement of end outcomes that had, in part, led to this misunderstanding:

            [Our service] is very outcomes driven, and that’s just the way my brain works, I think, so I find it very hard when we are saying, hang on a minute, and why are we writing that down, how does that relate to the outcomes, I find it quite difficult because I’m not used to it… That’s why I struggle with the steps of change because I think, well, why do you need to know what happens in between there, it has no relevance to that. (Project worker, Project 1)

            Changing roles and relationships

            The synergic theory of change approach is dependent upon dialogue and collaboration between stakeholders. This was new and unusual for all three project teams, who had expected an external evaluation team based in a different geographical area to be remote, requesting data from time to time according to a set reporting schedule, in the way they were used to:

            Certainly, I’ve never had an external evaluator involved like you guys are – it is a different process. I think, to me, we’ve had higher-up people come in and do quality assurance, but we’ve never had an external. The only experience I’ve had is maybe in the council, but that’s not someone coming in to help you, it is a very report-based type of inspection. Yeah, it’s more like an inspection than an evaluation. (Project worker, Project 1)

            The evaluators positioned themselves as ‘critical friend’, in that they helped the project staff to articulate their own theories of change, but challenged staff assumptions based on knowledge of existing research evidence and theories, in what they hoped was a supportive and questioning way. The evaluators also advised on data collection techniques and protocols, and encouraged criticality of practice. All projects seemed to respond well to this, although for some, engaging with the evaluators to a greater degree stretched their capacity in a way they had not prepared for, and which had not been accounted for in the original project plans:

            Sometimes, when you guys come up, it’s the only time me and [the other project worker] will sit down and talk about the processes and be reflective. We don’t have the time. (Project worker, Project 1)

            Indeed, all projects were surprised by the level of involvement and support that the evaluators offered, particularly when they had existing self-evaluation strategies:

            I think the critical friend role is an interesting and new one for me. There is often a hostility to evaluation within projects because everyone is target focused and thinking ‘why do I have to do this?’, but I think it is helpful to have someone who’s coming in from the outside who you don’t feel is like an inspector and has got that helping aspect, but at the same time because I think we’re so used to developing monitoring and evaluation ourselves, it can feel a bit like, ‘oh, there’s someone there to help us, this is weird!’ (Project worker, Project 1)

            This perception of the evaluators as ‘helping’ the projects persisted, and projects felt that, rather than taking evidence away for external purposes, the evaluators added value to delivery, and that the synergic theory of change could provide a framework that produced more robust, valid evidence:

            Everybody wants to tell their funders ‘the project’s working great’ and ‘here’s what we’ve achieved’, however, you guys are really the judge of that, and in this process, we see what works and what could be improved, but I think having that external, but also that external tool, so it’s not our report mechanism, it’s a report mechanism that’s recognised, evidence based and planned together, rather than at the end of it, us saying ‘this is us, we’ve done a great job’. (Director, Project 2)

            That’s how I feel about the evaluation, it has been [that critical friend], and it has been really useful to discuss things and do things like we’ve done today, and talk through our processes. (Project worker, Project 1)

            As projects took ownership of their theories of change, actively reviewed and amended them, and became comfortable with data collection, the helping role of the evaluator became reconceptualised as keeping them ‘on track’ with their delivery and advising on ways forward for practice. This was a difficult position for the evaluators to find themselves in. Decisions about the direction of the projects’ activity could only be made by the funders. As evaluators, we could point out what the evidence was saying in respect of the theories of change, but we could not advise the projects on how to amend their practice. Furthermore, the funders did not have the same level of knowledge and experience of the theories of change that had developed between the projects and the evaluation team. Midway through the project, a meeting was held between the evaluation team and the funders to reassess roles and responsibilities in respect of the projects. This was a useful exercise that reaffirmed the purpose of the evaluation and clarified expectations of it and of the evaluators’ role.

            Reciprocity in evaluation

            Project 1 struggled throughout the project to see the relevance of the theory of change, both to their practice and to evaluation. They were the project that had already put its evaluation strategy in place before the evaluators arrived, and they were used to reporting on outputs and outcomes in a very predictable, mechanistic style. Their funder, although part of the funding partnership, had been unable to compromise on its funding mechanisms, and so funded that particular project outright, with its own reporting requirements, which were quite different to those expected of the other two, consortium-funded projects. The capacity to engage with evaluation was limited, and indeed the need for it had not been anticipated, so we experienced resistance to engaging with the process in the early days. The project worker felt that it would have helped her to know from the start that the project and the evaluation would work in a very different way: employed after the project began, she had expected to work in a traditional intervention, with responsibility for delivery. Focusing on the mechanisms by which the project would achieve its outcomes was seen as irrelevant and time-consuming, something she could not benefit from, as the expectation was on her to deliver her outcomes and report them to her funder:

            Whatever goes into the learning aspect, we get no brownie points for. We were given, right, this is the set of outcomes you’ve got to meet, and yes there’s a process in there, but you’ve still got to meet outcomes, there’s still got to be outputs, although now the outputs seem to be very muddied. (Project worker, Project 1)

            Projects 2 and 3 very quickly saw that their theory of change could become embedded in their practice in terms of planning, implementing and reviewing, and that evaluation was a major part of that. Project 3, for example, articulated that they would consider themselves successful if, by the end of the three years, there were sustainable activities happening for young people in their area that were supported and encouraged by the local community. They were clear that they would not have reached the end outcomes during the lifetime of the project, but they were able to situate their view of success clearly within the theory of change, thus managing expectations of what could realistically be achieved in the timescale they had. They also saw the theory of change as an important planning tool that kept them focused on the needs of beneficiaries, rather than becoming lost in a cycle of delivery:

            It also highlights an aspiration that we’ve got… we want the steering group to be seen as community activists, and we want to develop that steering group to be a youth activist body who play a strong role within [the project], but then there’s other projects and other forms of activism we want to explore as well and develop that group into… by even having that framework [the theory of change] in place, it starts to set the structure in to drive forward that aspiration. (Project manager, Project 3)

            Project 1 were slower to find ways in which the theory of change could help their practice, and they found that they did indeed become lost in that cycle of delivery:

            When we were running the groups, we were just, heads down, getting on with it, doing what we thought was the right thing… we were not looking at the bigger societal goals. (Project worker, Project 1)

            According to what staff told us during the interviews, the theory of change seemed to enable them to look at the impact their projects were having in addition to the end outcomes, and at how actions fitted together as part of a process, which they found helpful. Project 2 told us that they were able to use their theory of change to show potential multi-agency partners, who, they felt, were then able to understand how the project was intended to work and what it hoped to achieve, and thus to work out the best way to contribute.

            Theorising the synergic theory of change

            In the synergic theory of change, meaning is made by collaborating, and by engaging in dialogue with others. This dialogic approach necessitates negotiating a shared understanding in order to reach a consensus on theory, and on how the theory can be measured. This is assisted by the visual nature of the depiction of theory, which mediates dialogue and facilitates understanding, ‘bridging the gap between the worlds of the researcher and the researched’ (Harper, 2002: 20). Gomez (2015) argues that if we create meaning by dialogic interaction, then our research should seek meaning and knowledge in the same way; in other words, it should privilege dialogic processes in order to transform social problems. In some cases, this worked well (for example, in Project 2). In other cases, project staff struggled to see its relevance (for example, in Project 1). Staff in the projects had different understandings of what constituted valid knowledge. For Project 1, valid knowledge related to the creation of numerical output data and measurable end outcomes, such as activities provided and changes in baseline assessments for young people. The adoption of this ontological stance resulted in a rigid and positivist approach to learning processes, and thus, for them, dialogue was not seen as necessary. The theory of change did not engender a collaborative approach to evaluation in this project, although small shifts were seen over its course (such as a developing interest in peer research processes, described in Clark and Laing, 2018).

            Project 2 was able to accept the notion of evaluation as a learning journey, and the staff embraced an approach that saw knowledge construction as a joint endeavour between the researchers and themselves, grounded in dialogue and resting on the assumption that contributions were equally valid. Gomez (2015: 302) describes this as ‘the disappearance of the premise of an interpretative hierarchy’, situating participants on an equal epistemological level. The staff had been seeking alternative approaches to evaluation, having been unhappy with the performativity culture engendered by their previous experiences of evaluation. This difference in epistemological stance from the staff in Project 1 enabled them to embrace a synergic theory of change, and indeed to find ways to integrate it into their own processes and practices. This shift from positivism to interpretivism facilitates collaboration and democratisation because it actively seeks to engage stakeholders to understand how change is enacted for them and how projects impact on people’s lives. The theoretical underpinnings of interpretivism and dialogue that the synergic theory of change embraces place value on learning processes, rather than on the discovery of knowledge per se, and this can facilitate transformative practices. Freirian theory posits that transformative action is shaped by both dialogue and critical reflection, which empowers people to enact change (Freire, 1970). The synergic theory of change has shown its potential as a tool for scaffolding dialogue, critical thinking and reflection, so that the process of evaluation becomes a vital part of stimulating change, and all stakeholders have a part to play in that.

            Nevertheless, the synergic theory of change approach is challenging. Because of the emphasis on relationship building, time needs to be built into evaluation processes for this to develop. Both project staff and evaluators also need to spend time working together to produce the theory of change framework, to collect data about the steps of change that can be qualitative and quantitative, and to concentrate on more than evidencing the final outcomes for beneficiaries. This is demanding for project staff, who in some cases may not have been allocated sufficient leeway in their day-to-day workload to accommodate this. It is also demanding for evaluators in terms of the level of involvement needed with projects in order to provide the ‘critical friend’ role described here. This is especially resource intensive when the fieldwork takes place on geographically dispersed sites, as these projects were. The roles of funders, project staff and evaluators can become blurred, and they need constant (re)negotiation in order to avoid confusion and conflict. Performative demands (for example, from funders, trustees or managers) that require projects to demonstrate their success in relation to their objectives can provide a disincentive to collect evidence that can highlight less effective practice and stimulate change.

            In conclusion, it is time to ask whether theory-based approaches can lead to the democratisation of evaluation. We believe that they can, provided that certain conditions are met. Funders, projects and researchers need to work together to see projects as learning journeys, in pursuit of knowledge about what works, for whom and in what circumstances, in order to inform future policymaking and service delivery. Practitioners need to be able to see evaluation as integral to practice and useful for their own delivery needs. All this is possible. But why is the democratisation of evaluation important?

            During the evaluation, the evaluators found that they were able to obtain a much deeper understanding of the projects, and of the context in which they worked, by using a synergic theory of change. They were able to make sense of the complexity that the projects presented and to encourage critical thinking by those involved. Theory of change demands a commitment to dialogue and trust between practitioner and academic, and where trust is developed, change is more likely to happen (Nelson et al., 2015). Different kinds of data were accessed to inform the evaluation, such as those produced by young people themselves. Practitioners were able to see the value in external evaluation, and they appreciated the time and space for reflective practice. They benefited from the knowledge of research methods that the evaluators contributed, and were thus able to collect relevant, robust data. They were able to use these data within the framework of a theory of change to review their practice and find ways of reaching their outcomes.

            Using a synergic theory of change approach draws on the skills, knowledge and experience of all parties, which, it can be argued, enhances the validity of evaluation. There may well be critics who argue that the democratising of evaluation reduces the scientific basis on which research can make claims to knowledge, but, as Pawson (2013: 105) states, ‘The only available tactic is to trust a sizable proportion of the programme theory whilst putting certain of its facets to the test in the expectation that knowledge of them can be revised or improved.’

            By using a synergic theory of change that incorporates both praxis and research evidence, projects have the ability to recognise the parts of their work that are likely to lead to change, and those that are not. This can then lead them to concentrate on delivering more effectively with the well-being of beneficiaries in mind, and reduce the pressure of achieving outcomes at any cost. ‘Evaluative knowledge is always partial knowledge’ (Pawson, 2013: 104), but it is useful knowledge nonetheless.

            By using this approach, the problem of ‘which theory to evaluate’ (Weiss, 2000) is reduced, as many theories are introduced, debated, and accepted or discarded on the basis of specialist practitioner knowledge, in a process of ‘decisional balance’ (Funnell and Rogers, 2011), where appropriate theory is decided collaboratively. This can help evaluators concentrate on what is important in the innovation under scrutiny, rather than on what has been judged to be important in the past, and it offers new ways of asking questions. This approach becomes particularly salient when tackling deep-seated social issues such as substance misuse, which are little understood, complex, complicated and culturally bound.

            Chen (2015: 14) states that ‘External evaluators are not constrained by organisational management and relationships with staff members and are less invested in the program’s success.’ I would challenge that: we must by necessity become ‘part of the team’ in order to access the insider knowledge and ‘cultural beacons’ (Dura et al., 2014: 99) that would otherwise pass unquestioned. Social research and evaluation are often publicly funded, and it is difficult to argue that research should not be concerned with promoting societal good. Notions of societal impact and learning for change are gaining traction, and the role of the researcher can adapt in response. This requires a change in mindset: from researcher as ‘social engineer’ to researcher as ‘critical interpreter’ (Määttä and Rantala, 2007); from methods-based approaches to one encompassing dialogue and a two-way exchange of knowledge and expertise; and towards the researcher as active in the change process, utilising ‘everyday ethics’ (Banks, 2016; Banks and Westoby, 2019) to guide evaluation practice.

            Declarations and conflicts of interest

            Research ethics statement

            The author declares that research ethics approval for this article was provided by Newcastle University ethics board. The author conducted the research reported in this article in accordance with British Educational Research Association standards.

            Consent for publication statement

            The author declares that research participants’ informed consent to publication of findings – including photos, videos and any personal or identifiable information – was secured prior to publication.

            Conflicts of interest statement

            The author declares no conflicts of interest with this work. All efforts to sufficiently anonymise the author during peer review of this article have been made. The author declares no further conflicts with this article.


            1. Banks S. 2016. Everyday ethics in professional life: Social work as ethics work. Ethics and Social Welfare. Vol. 10(1):35–52. [Cross Ref]

            2. Banks S, Westoby P. 2019. Ethics, Equity and Community Development. Bristol: Policy Press.

            3. Banks S, Hart A, Pahl K, Ward P. 2019. Co-producing Research: A community development approach. Bristol: Policy Press.

            4. Bourke L. 2009. Reflections on doing participatory research in health: Participation, method and power. International Journal of Social Research Methodology. Vol. 12(5):457–74. [Cross Ref]

            5. Braun V, Clarke V. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology. Vol. 3(2):77–101. [Cross Ref]

            6. Burke Johnson R, Onwuegbuzie AJ, Turner LA. 2007. Toward a definition of mixed methods research. Journal of Mixed Methods Research. Vol. 1(2):112–33. [Cross Ref]

            7. Campbell H, Vanderhoven D. 2016. Knowledge That Matters: Realising the potential of co-production. Manchester: N8 Research Partnership.

            8. Chen HT. 2015. Practical Program Evaluation: Theory-driven evaluation and the integrated evaluation perspective. Thousand Oaks, CA: Sage.

            9. Clark J, Laing K. 2012. The Involvement of Children and Young People in Research Within the Criminal Justice Area. Newcastle upon Tyne: Newcastle University. Accessed 25 January 2022 https://eprints.ncl.ac.uk/file_store/production/180127/EEEEEA14-1C4D-4C29-BA2F-FFFAA022E44C.pdf

            10. Clark J, Laing K. 2018. Co-production with young people to tackle alcohol misuse. Drugs and Alcohol Today. Vol. 18(1):17–27. [Cross Ref]

            11. Clark J, Laing K, Tiplady L, Woolner P. 2013. Making Connections: Theory and practice of using visual methods to aid participation in research. Newcastle upon Tyne: Research Centre for Learning and Teaching, Newcastle University.

            12. Clark J, Laing K, Leat D, Lofthouse R, Thomas U, Tiplady L, Woolner P. 2017a. Transformation in interdisciplinary research methodology: The importance of shared experiences in landscapes of practice. International Journal of Research and Method in Education. Vol. 40(3):243–56. [Cross Ref]

            13. Clark J, Laing K, Newbury-Birch D, Papps I, Todd L. 2017b. ‘Thinking Differently’ About Young People and Alcohol: An evaluation of preventative trial interventions in Scotland. Newcastle upon Tyne: Newcastle University.

            14. Cummings C, Dyson A, Todd L. 2011. Beyond the School Gates: Can full service and extended schools overcome disadvantage? London: Routledge.

            15. Deaton A, Cartwright N. 2018. Understanding and misunderstanding randomized control trials. Social Science and Medicine. Vol. 210:2–21. [Cross Ref]

            16. De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, Patel V. 2014. Theory of change: A theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials. Vol. 15:267–79. [Cross Ref]

            17. Dura L, Felt LJ, Singhal A. 2014. What counts? For whom?: Cultural beacons and unexpected areas of programmatic impact. Evaluation and Program Planning. Vol. 44:98–109. [Cross Ref]

            18. Dyson A, Kerr K. 2013. Developing Children’s Zones for England: What’s the evidence? London: Save the Children.

            19. Dyson A, Todd L. 2010. Dealing with complexity: Theory of change evaluation and the full service extended schools initiative. International Journal of Research and Method in Education. Vol. 33(2):119–34. [Cross Ref]

            20. Ensminger DC. 2015. Case study of an evaluation coaching model: Exploring the role of the evaluator. Evaluation and Program Planning. Vol. 49:124–36. [Cross Ref]

            21. Evans SD. 2014. The community psychologist as critical friend: Promoting critical community praxis. Journal of Community and Applied Social Psychology. Vol. 25(4):355–68. [Cross Ref]

            22. Facer K, Enright B. 2016. Creating Living Knowledge: The Connected Communities Programme, community–university relationships and the participatory turn in the production of knowledge. Bristol: University of Bristol/AHRC Connected Communities.

            23. Freire P. 1970. Pedagogy of the Oppressed. London: Continuum.

            24. Funnell SC, Rogers PJ. 2011. Purposeful Program Theory: Effective use of theories of change and logic models. San Francisco: Wiley.

            25. Goddard J. 2016. National higher education systems and civic universitiesGoddard J, Hazelkorn E, Kempton L, Vallance P. The Civic University: The policy and leadership challenges. Cheltenham: Edward Elgar. p. 94–113

            26. Goddard J, Tewdwr-Jones M. 2016. City Futures and the Civic University. Newcastle upon Tyne: Newcastle University.

            27. Gomez A. 2015. Communicative methodology of research and evaluation: A success storyGragonas T, Gergen KJ, McNamee S, Tseliou E. Education as Social Construction: Contributions to theory, research and practice. Chagrin Falls, OH: Taos Institute Publications/Worldshare Books. p. 297–314

            28. Harper D. 2002. Talking about pictures: A case for photo elicitation. Visual Studies. Vol. 17(1):13–26. [Cross Ref]

            29. Heubner TA. 2000. Theory-based evaluation: Gaining a shared understanding between school staff and evaluators. New Directions for Evaluation. Vol. 87:79–89. [Cross Ref]

            30. Jackson ET. 2013. Interrogating the theory of change: Evaluating impact investing where it matters most. Journal of Sustainable Finance & Investment. Vol. 3(2):95–110. [Cross Ref]

            31. Kellett M. 2005. How to Develop Children as Researchers. London: Sage.

            32. Laing K, Todd L. 2015. Theory-Based Methodology: Using theories of change in educational development, research and evaluation. Newcastle upon Tyne: Research Centre for Learning and Teaching.

            33. Laing K, Mazzoli Smith L, Todd L. 2018. The impact agenda and critical social research in education: Hitting the target but missing the spot? Policy Futures in Education. Vol. 16(2):169–84. [Cross Ref]

            34. Määttä M, Rantala K. 2007. The evaluator as a critical interpreter: Comparing evaluations of multi-actor drug prevention policy. Evaluation. Vol. 13(4):457–76. [Cross Ref]

            35. Melville A, Laing K, Stephen F. 2015. Family lawyers and multi-agency approaches. In: Maclean M, Eekelaar J, Bastard B (eds). Delivering Family Justice in the 21st Century. Oxford: Hart. p. 163–74.

            36. Nelson IA, London RA, Strobel KR. 2015. Reinventing the role of the university researcher. Educational Researcher. Vol. 44(1):17–26. [Cross Ref]

            37. Nind M. 2014. What is Inclusive Research? London: Bloomsbury.

            38. Nind M, Wiles R, Bengry-Howell A, Crow G. 2013. Methodological innovation and research ethics: Forces in tension or forces in harmony? Qualitative Research. Vol. 13(6):650–67. [Cross Ref]

            39. Pawson R. 2013. The Science of Evaluation: A realist manifesto. London: Sage.

            40. Punch KF. 1998. Introduction to Social Research: Qualitative and quantitative approaches. London: Sage.

            41. Rogers PJ. 2008. Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation. Vol. 14(1):29–48. [Cross Ref]

            42. Shucksmith M. 2016. InterAction: How can academics and the third sector work together to influence policy and practice? Dunfermline: Carnegie UK Trust.

            43. Weiss CH. 2000. Which links in which theories shall we evaluate? New Directions for Evaluation. Vol. 87:35–45. [Cross Ref]

            44. Yin RK. 2013. Validity and generalization in future case study evaluations. Evaluation. Vol. 19(3):321–32. [Cross Ref]

            Author and article information

            Research for All
            UCL Press (UK)
            01 March 2022
            Volume 6, Issue 1: e06108
            [1] Senior Research Associate, Newcastle University, UK
            Author information
            Copyright 2022, Karen Laing

            This is an open-access article distributed under the terms of the Creative Commons Attribution Licence (CC BY) 4.0 https://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            Received: 12 January 2021
            Accepted: 12 December 2021
            Page count
            Figures: 1, Tables: 2, References: 44, Pages: 17

            Assessment, Evaluation & Research methods, Education & Public policy, Educational research & Statistics
            co-production, democratisation, theory of change, evaluation
