Policy brief

Governing Evaluations

Internationally Shared Challenges in Evaluating Preventive Measures Against Extremism

A protest in London, United Kingdom. Source: Yeo Khee /​Unsplash
08 Jul 2021, published by Peace Research Institute Frankfurt (PRIF/HSFK)

Government agencies and civil society organizations in different countries face similar challenges in evaluating their efforts to prevent and counter violent extremism (P/CVE). Many of these hurdles relate to core organizational choices: who decides how evaluations are commissioned, funded and implemented; who has the power to influence these decisions; and how do actors implement the results of evaluations? If the answers to these questions reflect institutional path dependencies rather than deliberate policy choices, the resulting structures can have largely unintended effects on evaluations – and therefore on P/CVE practices themselves. These effects are often unrelated to the evidence on what works or how to improve prevention programs. This is why policy makers need to make deliberate choices about the organizational structures for P/CVE evaluations. Well-designed structures that govern evaluations at various levels of government balance the needs and interests of a diverse set of actors in P/CVE and related fields, such as crime prevention and civic education. Ideally, such structures can help promote shared objectives and balance the twin goals of any evaluation: learning and accountability.

P/CVE is a relatively young policy field, spurred forward by public and political pressure to prevent extremist attacks. While P/CVE practices combine decades of experience across multiple professions, they do not operate from a set of established guidelines or clear metrics for success. Many policy makers are still working out how best to structure the evaluation of P/CVE. Analyzing the evaluation challenges that different countries face provides valuable insights that can inform recommendations for organizing quality assurance and evaluation in Germany. Based on early insights into the governance structures of P/CVE evaluations in Canada, the Netherlands and the United Kingdom (UK), we discuss three challenges of evaluation. The first and most fundamental, conceptual challenge lies in balancing the dual goals of learning and accountability. At the operational level, two further challenges concern how best to fund and time evaluations to support accountability and learning.

Balancing Accountability and Learning

Evaluations are intended to ensure that the money invested in projects and policies is well spent in pursuit of the chosen objectives (accountability), and to help strengthen positive and minimize negative impacts (learning). While both goals are necessary, legitimate and compatible in principle, P/CVE evaluations in practice reveal tensions between accountability and learning. Should evaluations prioritize assigning responsibility for failure, even at the price of fear and defensiveness? Or is it more important to prioritize learning as a way to identify improvements, even if it remains unclear who needs to make the change?

Across country cases, we observed that actors struggle to find a good balance between the two goals. Governments are often under pressure to justify public spending on prevention measures against extremism. We found that this can impede learning when policy makers pass this pressure on to implementers, who must demonstrate their added value in a field where impact is difficult to assess – or even to define. In Canada, for instance, small civil society organizations worry that evaluations pose a significant risk to their work when their existence depends on a positive result. This deters some actors from voluntarily conducting impact evaluations, despite their desire to improve their P/CVE efforts. The same holds true in the UK, where small organizations often lack the staff and skills to conduct standardized evaluations. In many cases, these organizations are also unable to present their results and challenges in a way that secures follow-up funding. At the same time, implementers can also benefit from accountability, as the Dutch case shows. As elsewhere in Europe at the onset of P/CVE efforts in the mid-to-late 2000s, the Netherlands experienced a surge in the number of actors involved in prevention, some of whom sought to sell easy solutions with questionable effects. Since then, an open culture of evaluation has helped to establish a field of experienced practitioners and evaluators.

To make P/CVE efforts as effective as possible, organizational structures should balance different stakeholders’ interests – which may include confidentiality, client trust, future funding, certainty about the efforts’ impacts, and democratic control over public spending. Early results from our case studies underline the need to make learning an explicit priority of evaluation practice. Especially when structures and cultures of evaluation are still evolving, a focus on learning is crucial to building the mutual trust needed for high-quality evaluations.

Making Deliberate Funding Choices

The question of who funds evaluations, and according to what criteria, is a further challenge affecting the relationships between actors involved in P/CVE. Decisions about funding structures and mandates strongly shape the incentives facing implementers and evaluators, as well as the knowledge that is produced. In addition, funding choices determine which actors and approaches gain (or lose) financial support.

Federal politics and institutional mandates can have unintended and dysfunctional structural consequences. In Canada, for example, evaluation requirements and funding structures vary between provinces, as well as between provincial and federal administrations. Because of differing priorities at the federal and provincial levels, provincial projects often lack funding for external evaluations, so implementers must resort to internal evaluations and reporting. Small civil society organizations and academics worry that this favors larger, established implementers, who can afford to commission external evaluations and to invest in extensive self-evaluation to demonstrate their successes and lessons learned.

Beyond the availability of funding, budget decisions determine who has the authority to choose the evaluator and evaluation method, and to decide how the results are used. Where governments make these decisions in a top-down manner without consulting project implementers, constructive learning is difficult. However, implementers’ preference for autonomy over the use of evaluation funding can impede the application of standards. In the UK, for instance, some evaluators have argued that the government increasingly relies on a few major implementers who have an outsized say in setting the terms of their own evaluations. While implementers taking ownership of the evaluation process is a welcome development, it should not come at the expense of the quality and impartiality of evaluations. Transparent evaluation criteria and standards can help assure high-quality results, but should allow for both standardization and flexible adaptation to a particular project and its context. To strike this balance, governments can allocate dedicated funds for external evaluations, which helps strengthen the independence of evaluations. Importantly, they should also invest in capacity-building and additional resources for (self-)evaluation, such as toolkits and training on evidence-based project work for implementers, as is done in the Netherlands.

Timing and Targeting Evaluations

Another challenge shared across the country cases is how to target limited evaluation resources to maximize learning and accountability. Evaluations can relate to different levels of P/CVE efforts – project, program, thematic, portfolio, and strategy evaluations – and may vary in their main focus. In terms of timing, they can serve different goals: to find out how best to set up a new project (ex-ante), to learn how to improve it (along the way), or to assess the project’s implementation and its lessons learned (ex-post). For high-quality results, the timing, goal and type of an evaluation should correspond to one another.

Interviewed experts agree that it is key to plan evaluations in the conception phase of a project; otherwise it is often too late to adjust course, and project staff will likely have already rotated into other jobs. By encouraging evaluations that accompany a project’s lifecycle and thus allow for learning-based adaptations, policy makers can help ensure the quality of P/CVE programs. An example from the Netherlands shows that formative evaluations during the project cycle can help determine whether a project’s design corresponds with its goals and how the project could be improved in the following funding period. Another best practice is to establish an approachable help-desk with expert staff to support implementers throughout the evaluation cycle.

However, evaluation and project cycles do not always have to align: some evaluations assess broader programs or strategies that do not adhere to a single project’s timeline. In such cases, fostering knowledge sharing through meta-reviews and comparative studies is crucial. In all three countries considered, evaluation experts and implementers emphasized the importance of long-term evaluations for measuring outcomes, which short funding cycles can make difficult. With more flexible budgeting cycles, policy makers could help ensure meaningful evaluation results and the application of lessons learned.

Conclusion

Making deliberate choices about how P/CVE evaluations are organized and funded is important: if done well, evaluation can help all involved parties better prevent extremism and radicalization. While there are conflicts between the goals of learning and accountability, all actors in a democratic system have a vested interest in upholding both principles. When establishing evaluation structures, however, actors should prioritize a culture of innovation and learning-based improvement in order to build trust. In addition, funders should carefully consider how they allocate funds for evaluations, keeping in mind the incentives they create for implementers, since these shape the areas in which the P/CVE field is able to learn. In any case, actors should consider the right timing for evaluations early in the project cycle, plan ahead, and ensure capacity support for evaluations at every stage.

Particularities aside, many countries currently face similar challenges in governing evaluations. Our insights indicate that considering other countries’ organizational structures and experiences in addressing these challenges can help overcome barriers and avoid repeating mistakes. The P/CVE policy sphere will benefit greatly from exchange between a variety of perspectives and actors across different states.


This policy brief was originally published as a ‘PRIF Spotlight’ on the PRIF Blog in English and German on July 8, 2021. It is part of the PrEval project on evaluation designs for the prevention of violent extremism.