As we noted above, these topics are drawn from questions proposed and discussed by an interdisciplinary group of scholars, practitioners, funders and other stakeholders. It became clear during this process that many were unaware of relevant research which had already been undertaken under these headings. These topics reflect our own networks and knowledge of the field, so cannot be regarded as definitive. We need and welcome partnership with others working in this space to broaden the conversation as much as possible. We have selected a subset of these topics to illustrate a number of points.

First, no one discipline or researcher could possibly have the skills or knowledge to answer all of these questions. Interdisciplinary teams can be difficult to assemble, but they are clearly required. We need leadership in this space to help spot opportunities to foster interdisciplinary research and learning.

Second, all of these topics could be framed and addressed in multiple ways, and many have been. Many are discussed, but there is little consensus; or there is consensus within disciplines but not between them. Some topics have been funded and others have not. We feel there is an urgent need to identify where research investment is required, where conversations need to be supported, and where and how to draw out the value of existing knowledge. Again, we need leadership to help us generate collaborative research agendas.

Third, while we all have our own interests, the overall picture is far more diverse, and there is a need for all working in this area to define clearly what their contributions are in relation to the existing evidence and communities. A shared space to convene and learn from one another would help us all understand the huge and exciting space within which we are working.

Finally, this is an illustrative set of topics, not an exhaustive one. We would not claim to be setting the definitive research agenda in this paper.
Rather, we are setting out the need to learn from one another and to work together in the future. Below, we describe some examples of the type of initial discussions which might help us to move forward, using our three themes of knowledge production, knowledge mobilisation, and decision-making. We have cited relevant studies which set out research questions or provide insights. By doing so, we hope to demonstrate the breadth of disciplines and approaches which are being used to explore these questions; and the potential value of bringing these insights together.

Transforming knowledge production

Firstly, we must understand who is involved in shaping and producing the evidence base. Much has been written about the need to produce more robust, meaningful research which minimises research waste through improving quality and reporting (Chalmers et al., 2014; Glasziou and Chalmers, 2018; Ioannidis, 2005), and the infrastructure, funding and training which surround knowledge production and evaluation have attracted critical perspectives (Bayley and Phipps, 2017; Gonzalez Hernando and Williams, 2018; Katherine Smith and Stewart, 2017). Current discourses around ‘improving’ research focus on making evidence more rigorous, certain, and relevant; but how are these terms interpreted locally in different policy and practice contexts? How are different forms of knowledge and evidence assessed, and how do these criteria shape the activities of researchers?

Enabling researchers to reflect on their own role in the ‘knowledge economy’—that is, the production and services attached to knowledge-intensive activities (usually but not exclusively referring to technological innovation (Powell and Snellman, 2004))—requires engagement with this history. This might mean asking questions about who is able to participate in the practice and evaluation of research. Who is able to ask and answer questions? What questions are asked and why? Who gets to influence research agendas?
We know that there are barriers to participation in research for minority groups, and for many research users (Chrisler, 2015; Duncan and Oliver, 2017; Scott et al., 2009). At a global level, how are research priorities set by, for example, international funders and philanthropists? How can we ensure that local and indigenous interests and priorities are not ignored by predominantly Western research practices? How are knowledge ‘gaps’ or areas of ‘non-knowledge’ constructed, and what are the power relationships underpinning that process (Nielsen and Sørensen, 2017)? There are important questions about what it means to do ethical research in the global society, with honesty about normative stances and values (Callard and Fitzgerald, 2015), which apply to the practices we engage in as much as the substantive topics we focus on (Prainsack et al., 2010; Shefner et al., 2014).

It might also mean asking about how we do research. Many argue that research (particularly that funded through responsive-mode arrangements) progresses incrementally, with questions often driven by ease rather than public need (Parkhurst, 2017). Is this the most efficient way to generate new knowledge? How does it compare with, for example, random research funding (Shepherd et al., 2018)? Stakeholder engagement is said to be required for impact, yet we know it is costly and time-consuming (Oliver et al., 2019, 2019a). How can universities and funders support researchers and users to work together long-term, with career progression and performance management untethered from simplistic (or perhaps any) metrics of impact? Is coproduced research truly more holistic, useful, and relevant? Or does inviting different interests to deliberate on research findings, even processes, distort agendas and politicise research (Parkhurst and Abeysinghe, 2016)? What are the costs and benefits of these different systems and practices?
We know little about whether (and if so, how well) each of these modes of evidence production leads to novel, useful, meaningful knowledge; nor how these modes influence the practice or outputs of research.

Transforming evidence translation and mobilisation

Significant resources are put into increasing ‘use’ of evidence, through interventions (Boaz et al., 2011) or research partnerships (Farrell et al., 2019; Tseng et al., 2018). Yet ‘use’ is not a straightforward concept. Using research well implies the existence of a diverse and robust evidence base; a range of pathways for evidence to reach decision-makers; both users and producers of knowledge having the capacity and willingness to engage in relationship-building and deliberation about policy and practice issues; and research systems supporting individuals and teams to develop and share expertise.

More attention should be paid to how evidence is discussed, made sense of, negotiated and communicated—and the consequences of different approaches. This includes examining the roles of people involved in the funding of research, through to the ways in which decision-makers access and discuss evidence of different kinds. How can funders and universities create infrastructure and incentives to support researchers to do impactful research, and to inhabit boundary spaces between knowledge production and use? We know that potential users of research may sit within or outside government, with different levels and types of agency, making different types of decisions in different contexts (Cairney, 2018; Sanderson, 2000). Yet beyond ‘tailoring your messages’, existing advice to academics does not help them navigate this complex system (Cairney and Oliver, 2018).
To take this lesson seriously, we might want to think about the emergence of boundary-spanning organisations and individuals which help to interface between research producers (primarily universities, but also civil society) and users (Bednarek et al., 2016; Cvitanovic et al., 2016; Stevenson, 2019). What types of interfacing are effective, and how—and how do interactions between evidence producers and users shape both evidence and policy? How might policies on data sharing and open science influence innovation and knowledge mobilisation practices?

Should individual academics engage in advocacy for policy issues (Cairney, 2016a; Smith et al., 2015), using emotive stories or messaging to best communicate (Jones and Crow, 2017; Yanovitzky and Weber, 2018), or rather be ‘honest brokers’ representing a body of work without favour (Pielke, 2007)? Or should this type of dissemination work be undertaken by boundary organisations or individuals who develop specific skills and networks? There is little empirical evidence about how best to make these choices (Oliver and Cairney, 2019), or how these choices affect the impact or credibility of evidence (Smith and Stewart, 2017); nor is there good-quality evidence about the most effective strategies and interventions to increase engagement or research uptake by decision-makers, or between researchers and their audiences (Boaz et al., 2011). It seems likely that some researchers will get involved and others stay in the hinterlands (Locock and Boaz, 2004), depending on skills and preference. However, it is not clear how existing studies can help individuals navigate these complex and normative choices.

Communities (of practice, within policy, amongst diverse networks) develop their own languages and rationalities. These will affect how evidence is perceived and discussed (Smallman, 2018).
Russell and Greenhalgh have shown how competing rationalities affect the reasoning and argumentation deployed in decision-making contexts (Greenhalgh and Russell, 2006; Russell and Greenhalgh, 2014); how can we interpret local meanings and sense-making in order to communicate better about evidence? Much has been written about the different formats and tailored outputs which can be used to ‘increase uptake’ by decision-makers (Lavis et al., 2003; Makkar et al., 2016; Traynor et al., 2014)—although not with conclusive findings—yet we know very little about how these messages are received. Researchers may be communicating particular messages, but how can we be sure that decision-makers are comprehending and interpreting those messages in the same way? Theories of communication (e.g., Levinson, 2000; Neale, 1992) must be applied to this problem.

Similarly, drawing on psychological theories of behaviour change, commentators have argued for greater use of emotion, narrative and story-telling by researchers in an attempt to influence decision-making (Cairney, 2016b; Davidson, 2017; Jones and Crow, 2017). Are these effective at persuading people, and if so, how do they work? What are the ethical questions surrounding such activities, and how does this affect researcher identity? Should researchers be aiming to communicate simple messages about which there is broad consensus?

Discussions of consensus often ask whether agreement is a laudable aim for researchers, or how far consensus is achievable (De Kerckhove et al., 2015; Lidskog and Sundqvist, 2004; Rescher, 1993). We are also interested in the tension between scientific and political consensus, and how differences in interpretations of knowledge can be leveraged to influence political consensus (Beem, 2012; Montana, 2017; Pearce et al., 2017). What tools can be used to generate credibility?
Is evidence persuasive in itself; can it survive the translation process; and is it reasonable to expect individual researchers to broadcast simple messages about which there is broad consensus, if that is in tension with their own ethical practices and knowledge (even if it is the most effective way to influence policy)? Is consensus required for the credibility of science and scientists, or can an emphasis on similarity in fact reduce the value of research and the esteem of the sector? Is it the task of scientists to surface conflicts and disagreements, and how far does this duty extend into the political sphere (Smith and Stewart, 2017)?

Transforming decision-making, and the role of evidence within it

Finally, we need to understand how research and researchers can support decision-making, given what we know about the decision-making context or culture and how this influences evidence use (Lin, 2008). This means better understanding the roles of professional and local cultures of evidence use, governance arrangements, and public dialogues, so that we can start to investigate empirically informed strategies to increase impact (Locock and Boaz, 2004; Oliver et al., 2014). This would include empirical examination of individual strategies to influence decision-making, as well as more institutional infrastructures and roles; case studies of different types of policymaking and the evidence diets consumed in these contexts; and how different people embody different imperatives of the evidence/policy nexus. We need to bring together examples of the policy and practice lifecycles, and examine the roles of different types of evidence throughout those processes (Boaz et al., 2011, 2016).

We want to know what shapes the credibility afforded to different experts and forms of expertise, and how to cultivate credibility to enable better decision-making (Grundmann, 2017; Jacobson and Goering, 2006; Mullen, 2016; Williams, 2018).
What does credibility enable (greater attention or influence; greater participation by researchers in policy processes; a more diverse debate)? What is the purpose of increasing credibility? What is the ultimate aim of attempting to become credible actors in policy spaces? How far should universities and researchers go—should we always be aiming for more influence? Or should we recognise and explore the diversity of roles research and researchers can play in decision-making spaces?

Ultimately, methods must be found to evaluate the impact of evidence on policy and practice change, and on populations—including unintended or unwanted consequences (Lorenc and Oliver, 2013; Oliver et al., 2019, 2019a). Some have argued that the primary role for researchers is to demonstrate the consequences of decisions and to enable debate. This requires the development and application of methods to evaluate changes, understand mechanisms, and develop theory and substantive and normative debates, as well as engagement in the translation and mobilisation of evidence. It also requires increased transparency to enable researchers to understand evidence use (Nesta, 2012), while also allowing others, such as Sense about Science, to check the validity of evidence claims on behalf of the public (Sense about Science, 2016).
