https://bristoluniversitypressdigital.com/view/journals/pp/48/1/article-p89.xml

Introduction

The article explores the potential benefits to public policy of combining traditional evaluative inquiry with insights developed dynamically in policy labs.

What synergies between evaluation practice and policy labs could help address three challenges of public policy: establishing which interventions work, explaining their mechanisms of change, and making use of research findings?

Twenty leading labs from five continents are critically analysed, drawing on a review of the literature and of policy and programme evaluation practice, to assess the extent to which the purpose, structures and processes of policy labs address three challenges: (1) establishing the causality and value of public interventions, (2) explaining mechanisms of change, and (3) utilising research findings in public policy.

Summary findings:

In general, policy labs produce programmes and projects that seek to explore ideas, solve problems, train leaders and deliver tools to improve public services through innovation. This includes creating spaces for entrepreneurs, SMEs, students, academics, citizens and NGOs to use their talent, ideas and experiences related to a particular public service in order to find solutions to existing challenges.

Labs aim to create impact in two major ways: by engaging citizens, and by changing the culture of government through increasing the adoption of suggested changes and improvements by the relevant agencies and departments. However, the study found little evidence of the effects of policy labs’ activity. Labs’ ambition to create meaningful change in these two areas has not been matched by a clear idea of the exact impact they are aiming for, nor by a way to measure it. For example, when reporting on ‘impact’, labs do not discuss long-term, structural change in the situation of those their work addresses. Instead, they present short-term, mainly process-related outcomes (meetings, reports, and so on). Likewise, their success stories have a highly processual character: they report on connections, networks, actors’ involvement and the scale of dissemination.

This mislabelling of process indicators as impact indicators can be partly explained by how recently some labs were established, but in other cases it may reflect a reluctance to make outcomes measurable and public. In other words, the level of labs’ transparency may be shaped by their partners’ expectations, as well as by the political sensitivity of some of the matters addressed.

We can conclude that what seems to be lacking is a systematic effort to measure impact after a project has been fully implemented. Some labs are satisfied with general feedback from the community, partner institutions or local governments (‘Looks like the mayor is happy’), often collected during workshops. However, little research appears to measure real change in the behaviours of citizens, public servants or organisational actors. The absence of this feedback loop may limit the positive impact of labs’ work, as it hinders learning from experience and developing better tools in the future. There is therefore a need for greater reflexivity about the short- and long-term effects of labs’ work.