Context and previous research: the academic debate on trust
In the academic literature, the subject of trust spans multiple research fields: the social sciences, in the domain of politics and international affairs, have produced measures of aggregate levels of social trust (Justwan et al., 2017); communication studies have analysed trust in information in online environments (Metzger and Flanagin, 2013); and political studies have related the level of trust in public institutions expressed by journalists to the social environment in their country (Hanitzsch and Berganza, 2012).
In our societies, the concept of ‘trust in science’ is no longer a paradox. The type of trust needed to benefit from scientific knowledge is not a blind ‘leap of faith’ clashing with the Royal Society’s motto ‘Nullius in verba’ (take nobody’s word for it). Trust is not only required for non-scientific audiences to grasp complex phenomena without mastering the underlying theory and research. It has also become a fundamental skill within the scientific community, where knowledge advancement implies some trust in other people’s outcomes. Trust in scientific knowledge produced, verified and analysed by others is becoming part of many scientific projects: research teams may be large and geographically distributed, so that ‘even within the same research team, trust in the knowledge of others is essential for everyday scientific practice’ (Hendriks et al., 2016a: 145), because ‘the cooperation of researchers from different specializations and the resulting division of cognitive labor are, consequently, often unavoidable if an experiment is to be done at all’ (Hardwig, 1991: 695).
Trust is also ‘a mechanism for the reduction of complexity … it enables people to maintain their capacity to act in a complex environment’ (Siegrist, 2021: 481). This is a fundamental function in our societies, because ‘all social arrangements rely on trust, and many involve expertise … if trust in experts were to come to a halt, society would come to a halt, too’ (Oreskes, 2019: 247). The level of public trust granted to scientists and science is still higher than that granted to other social actors and fields (Krause et al., 2019), a steady trend confirmed by reports such as the General Social Survey conducted by the National Opinion Research Center (NORC) of the University of Chicago, the Public Attitudes to Science reports published by the market research company Ipsos MORI, the Global Monitor of the Wellcome Trust, the Science & Engineering Indicators compiled by the National Science Board in the US, the British Social Attitudes survey produced by the social research organisation NatCen, the Public and Scientists’ Views on Science and Society issued by the Pew Research Center, and the ‘Eurobarometer’ surveys collected by Eurostat (the statistical office of the European Union) about ‘European citizens’ knowledge and attitudes towards science and technology’ (see Eurostat, 1993, 2001, 2005, 2010, 2013, 2021; Ipsos MORI, 2011, 2014, 2018, 2019; NORC, 2013; Curtice et al., 2019; National Science Board, 2018, 2020a, 2020b; Pew Research Center, 2015; Funk et al., 2020; Wellcome Trust, 2018, 2020).
Even in this general climate of trust in science, scholars have reported various critical issues. For example, polarisation around specific cultural, political or religious identities may generate mistrust and social controversy about certain scientific issues (Kahan, 2017; Hendriks et al., 2016a). The ‘chain of trust’ linking scientists to citizens, and scientific research to public health measures, also involves the political and industrial sphere, where the levels of trust in business leaders and in the governments managing our public health systems are far from the trustworthiness accorded to scientists and scientific research (Larson et al., 2018; Ipsos MORI, 2019). Our scientific institutional culture shows:
a lack of recognition of the increasing strains on public credulity and trust in which science itself has been an agent, [with an] apparent institutional lack of ability to imagine that public concerns may be based on reasonable questions that are not being recognised and addressed, rather than being rooted in ignorance and misunderstanding. (Wynne, 2006: 219)
Scholarly debate around the concept of trust in science has also explored the drawbacks of ‘uncritical trust in science’. This is considered a potential risk when citizens are asked to trust partial and provisional scientific outcomes concerning topics still under scrutiny (such as the ongoing pandemic), coming from ‘zones of uncertainties’ where the scientific community is still struggling to find a clear consensus grounded in a strong base of scientific evidence. Meanwhile, science is used to legitimate public policies, fostering ‘the idea that support for the policy stance is determined by scientific fact, and that no alternative is left’ (Wynne, 2006: 214).
According to Krause et al. (2021: 230), ‘insisting that citizens simply trust the science on any given study is not only disingenuous, it is likely unethical’ and ‘uncritical trust in science would be democratically undesirable’ as a goal per se, because certain levels of mistrust are linked to legitimate concerns coming from inequities in our public health systems, and ‘efforts to force scientific trust on society could make the worst fear a reality: that trust in science will become politicized’. From this perspective, uncritical trust in science as a social compliance requirement is a risk that may contaminate the democratic sphere with a politicised and controversial social conversation around science, resulting in a polluted ‘science communication environment’ (Kahan, 2017: 45). To some writers, this risk seems not to be merely a scholarly hypothesis, because it is increasingly tangible today, given that ‘in the context of COVID-19 crisis, science was invoked by politicians, or scientific legitimacy was claimed by advisers to governments, to support measures that sought total compliance and thus limited conversation’ (Bucchi and Trench, 2021: 10).
The trust needed in complex societies does not exempt us from critical thinking and duties of vigilance: epistemic trust in knowledge that scientists have produced or provided ‘rests not only on the assumption that one is dependent on the knowledge of others who are more knowledgeable; it also entails a vigilance toward the risk to be misinformed’ (Hendriks et al., 2016a: 143). Far from being a passive acceptance of scientific claims, actual trust in science comes from personal evaluations affected by successful replication of studies (Hendriks et al., 2020), open discussion about ethical implications of preliminary scientific results (Hendriks et al., 2016c), and perceived expertise, integrity and benevolence of sources (Hendriks et al., 2015, 2016a).
Trust in scicomm as a research topic
In this context, research on the role of science communication (scicomm) as ‘the social conversation around science’ (Bucchi and Trench, 2021: 8) is crucial to understanding how this conversation can counteract the tendency for scientific communities to be perceived as structurally monolithic and inaccessible to lay audiences. It is necessary to facilitate the process of sense-making around scientific topics, support an informed and critical trust in science among non-specialised publics, and extend scientific debate from the academic community to a wide range of communities, practices and initiatives, including through social media (Davies and Horst, 2016).
A further reason for researching trust in science communication is that credibility and trust in connection with science may be ‘even more important than in any other area of social life’ (Weingart and Guenther, 2016: 9), and this topic is also linked to the controversial role played in recent years by social media and personal blogs. These digital communication environments have been used to spread misinformation, jeopardising trust in scicomm and legitimating pseudoscience and anti-science attitudes on established channels, ranging from popular blogs to the aggressive use of Twitter by the White House (Chan et al., 2017). However, they have also been tools within the mechanisms of public scrutiny that proved fundamental in cases of correction (Hendriks et al., 2016b) or even retraction of scientific papers (Yeo and Brossard, 2017). Recent studies (Battiston et al., 2021) also scrutinise the role of scicomm in fostering citizens’ compliance with public health policies during the pandemic.
Trust in scicomm is an important research topic for the social sciences also because of the increased availability of scientific information through those same digital channels today. This exposes online audiences to more direct interactions with experts, and to a larger quantity of science news, than is possible through traditional news outlets. This access to authorities in the scientific community gives the public an enhanced sense of trust, rooted also in the social recommendations accompanying such news (Huber et al., 2019).
The role of scicomm as a connector between the best available science and lay audiences makes it relevant to question how trust in scicomm itself is formed, shaped and lost, especially for politicised, polarised and controversial topics where there is the tendency to regard controversy as something ‘that should be kept within the scientific community’ (Miller, 2001: 118). The changing nature of the trust relationship between lay audiences and scicomm initiatives has led scholars, scicomm practitioners and journalists specialised in scientific issues to work to keep up with changes in technology, media and culture, adapting their communication activities to an environment where contents, formats, habits and communication channels have radically evolved over the years (Davies and Horst, 2016).
In 1985, the ‘need for an overall awareness of the nature of science and, more particularly, of the way that science and technology pervade modern life’ shaped the well-known Public Understanding of Science report released in London by the Royal Society, which stated that ‘improving the general level of public understanding of science is now an urgent task for the well-being of the country’ and ‘scientific literacy is becoming an essential requirement for everyday life’ (Bodmer, 1985: 10). Besides these efforts towards ‘public understanding’, numerous scicomm activities have adopted the ‘deficit model’ based on the assumption that ‘the public has a “knowledge deficit” that affects perception of science and scientists’, and ‘science communicators can change attitudes towards science by providing more information’ (Short, 2013: 40).
This model is still in use after more than three decades, as the idea of a ‘public deficit’ has never left the scientific debate (Ko, 2016; Cortassa, 2016; Raps, 2016; Meyer, 2016; Suldovsky, 2016). Indeed, the scientific community regularly reinvents the ‘public deficit’ explanation for public alienation from institutional science, producing ‘a repertoire of possible alibis which prevent honest institutional-scientific self-reflective questioning’ (Wynne, 2006: 216). This persists even though the deficit assumption has been strongly questioned by studies showing that factual scientific information and individual scientific literacy can become irrelevant for changing attitudes towards science because of prevailing (or coexisting) social, ethical, religious and cultural beliefs (Short, 2013), or other psychological phenomena such as cognitive polyphasia (Li and Tsai, 2019) and various forms of cognitive bias, confirming that ‘human cognition appears organized to resist belief modification’ (Bronstein and Vinogradov, 2021: 1).
Meanwhile, alternative models and practices based on ‘dialogic’ (or ‘consultative’) and ‘participatory’ (or ‘deliberative’) approaches have been discussed and practised over the years (Davies and Horst, 2016). They address cases and contexts where the need for an exchange of inputs and concerns between scientists and citizens, or for the active engagement of citizens in open debates over scientific issues to shape public policies, has become more prominent than the educational and social concerns addressed by scicomm activities based on the ‘deficit model’.
The multifaceted nature of the activities that fall under the wide category of ‘science communication’ has prompted science communication scholars to put in context the traditional narrative depicting the evolution of scicomm as linear historical progress from ‘deficit to dialogue’. According to Trench (2008: 123), ‘the supposed shift from deficit to dialogue has not been comprehensive, nor is it irreversible’. Davies and Horst (2016: 5) propose a more complex perspective on the evolution of science communication, conceiving a ‘scicomm ecosystem’ where multiple models are coexisting. In this complex ecosystem, we do not have ‘a narrative of progress, but one of multiplication of discourses’ (Bauer, 2009: 222) where different (and sometimes conflicting) forms of science communication are entangled with the diversity of models, cultures, contexts, practices and practitioners contributing to the public discourse about science.
In the recent scientific debate around trust in scicomm, new models of science communication have been proposed that move beyond a naive view of science as ‘value-free’, rejecting the assumption that the only value shared by the scientific community is a pure interest in the progress of knowledge. Building on previous research showing that ‘we tend to trust and to believe the arguments of a person whose values are similar to our own’ (Siegrist and Hartmann, 2017: 449), critics of the ‘pure science model’ have argued that the trustworthiness of science is better communicated by sharing non-scientific values, to find a common ground between science and society (Oreskes, 2019).
Considering the relevance of trust in scicomm as a research topic, the changing context for science communication and the specific challenges posed by the COVID-19 pandemic, we reached out to experts in scicomm (researchers, science journalists and scicomm professionals), asking them to share their experience regarding the trust of lay audiences in science communication. The key questions under scrutiny in our analysis are:
Q1. According to the pool of experts who took part in this study, what are the critical topics, the key factors, the possible risks and the good practices that can affect the bond of trust between lay audience and science communication?
Q2. On which of these issues, and on which specific items, did the experts’ individual feedback converge towards a shared consensus before and during the COVID-19 pandemic?
To explore trust in science communication from different perspectives, our exploratory, qualitative research submitted a series of iterative online questionnaires to a multiple-stakeholder pool of experts comprising researchers/academics, journalists and scicomm practitioners, based in two countries (Italy and Belgium, chosen for their cultural and physical proximity to the research team).
The feedback provided by the pool of experts was collected, organised and analysed using the Delphi method. Developed in the 1950s, this method is recognised as a flexible technique to ‘obtain the most reliable consensus of a group of experts’ (Okoli and Pawlowski, 2004: 16) in situations where there is ‘incomplete knowledge about a problem or phenomena’ that may benefit from subjective judgements of experts (Skulmoski et al., 2007: 12), and for cases where other statistical methods ‘are not practical or possible because of the lack of appropriate historical/economic/technical data and thus where some form of human judgmental input is necessary’ (Marchais-Roubelat and Roubelat, 2011: 1496).
The Delphi method is based on iteration cycles (Figure 1), starting from an initial researcher-defined questionnaire and the subsequent collection of responses from the experts, each of which shapes the questionnaire that follows. The goal is to conduct a series of one-to-many controlled interactions between the experts and the researchers, reducing the complexity of the communication flow of an open discussion to facilitate the detection of a majority consensus, or the lack of such consensus, over a specific set of topics. This iterative process also allows the participants to refine their views with controlled feedback from the group outcomes (Skulmoski et al., 2007). In a Delphi panel, the validity and value of the result rely on the qualifications of the experts involved, not on the size of the sample: the recommended size for a Delphi panel of experts varies from 10 to 18 (Okoli and Pawlowski, 2004). Figure 1 summarises the workflow of our Delphi research process.
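The round structure described above can be sketched in a few lines. The following is purely illustrative (the questionnaire, the experts’ answering behaviour and the consensus test are hypothetical stand-ins, not part of our study design); it only shows the iteration logic of a Delphi process, in which each questionnaire is derived from the aggregated feedback of the previous round:

```python
def run_delphi(experts, initial_questionnaire, build_next, has_consensus, max_rounds=4):
    """Iterate Delphi rounds: send a questionnaire, collect the answers,
    and derive the next questionnaire from the aggregated feedback,
    stopping when consensus emerges or the round limit is reached."""
    questionnaire = initial_questionnaire
    history = []
    for round_no in range(1, max_rounds + 1):
        # One-to-many controlled interaction: every expert answers the same questionnaire.
        answers = [expert(questionnaire) for expert in experts]
        history.append((round_no, answers))
        if has_consensus(answers):
            break
        # Controlled feedback: the next questionnaire reflects the group outcome.
        questionnaire = build_next(questionnaire, answers)
    return history

# Hypothetical toy run: three "experts" whose numeric answers converge over rounds.
experts = [lambda q: min(q, 5), lambda q: min(q, 5), lambda q: min(q, 6)]
rounds = run_delphi(
    experts,
    initial_questionnaire=10,
    build_next=lambda q, a: sorted(a)[len(a) // 2],  # next round anchored to the median answer
    has_consensus=lambda a: len(set(a)) == 1,        # consensus = identical answers
)
```

In this toy run the panel reaches identical answers in the second round, so the loop stops early; a real study would of course replace the numeric stand-ins with questionnaires and qualitative aggregation.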
In order to select the target groups for our research, we adopted the procedure described by Okoli and Pawlowski (2004: 20) to ‘categorize the experts before identifying them’, using a Knowledge Resources Nomination Worksheet (KRNW) which lists relevant disciplines or skills, organisations and related literature (Table 1).
Table 1. Knowledge Resources Nomination Worksheet (KRNW).

| Group | Discipline or skills | Organisations | Related literature |
| --- | --- | --- | --- |
| Group 1: Academics/Researchers | Researchers and scientists with scicomm experience inferred from received grants, or linked to their networking activities | ERC beneficiaries in Italy/Belgium; Marie Curie beneficiaries in Italy/Belgium; members of the PCST network in Italy/Belgium | Jamieson et al. (2017), The Oxford Handbook of the Science of Science Communication; Davies and Horst (2016), Science Communication: Culture, Identity and Citizenship |
| Group 2: Journalists | Journalists listed in national associations of science journalism practitioners | Associations of scientific journalists: ABJSC members (Belgium); AJP members (Belgium); AGJPB members (Belgium); UGIS members (Italy) | |
| Group 3: Media practitioners of scicomm | Press officers of academic institutions, social media managers of scientific institutions, members of advocacy groups dedicated to promoting scientific culture, organisers of events and science fairs | ECSITE members in Italy/Belgium | |
We then populated the defined categories with names of experts who could be involved in the Delphi process, picking from our direct contacts and from the list of organisations to be contacted according to the KRNW. When the first list was completed, we contacted the experts on the list, asking them to nominate other recognised experts in their fields with a ‘snowball technique’, aiming at achieving a sample size that provided a diversity of voices from each of the categories.
‘Questionnaire Zero’, asking for availability to take part in the research and names of other scicomm experts, was submitted to an initial list of 395 experts in four languages: English/Italian for contacts based in Italy, and French/Dutch for contacts based in Belgium. After the first group of experts was contacted, other people identified by them as peers with similar expertise were contacted as well, checking their availability with the same questionnaire, raising the number of contacted experts to 457. At the end of this process, we had a list of 49 experts confirming their willingness to contribute to our research (Table 2).
[Table 2: for each group (academics/researchers, journalists and practitioners of scicomm), the percentage of experts contacted and the percentage available.]
To represent the diversity of the panel, the participants were categorised in three groups. These do not correspond to the concept of ‘cohort’ used in statistical methods for the social sciences. Rather, in line with the Delphi method, the pool of experts was considered as a single entity, providing qualitative results based on the opinions of experts coming from different fields of knowledge and social groups. The only requirement in this methodology is to guarantee inclusiveness and diversity of qualified voices by drawing on a maximally extended panel of experts (rather than prioritising balance), using the ‘snowball technique’: the experts identified in the KRNW table were asked to name peers to invite to the pool, and the invitees were asked to provide further names of relevant experts, until the iterative process yielded no new names. The result is a list of relevant knowledge brokers meeting the four ‘expertise requirements’ cited in Skulmoski et al. (2007): knowledge and experience of the issues under investigation; capacity and willingness to participate; sufficient time to take part in the Delphi panel; and effective communication skills.
To the best of our knowledge, there are no publicly available lists of people registered as academics with expertise in science communication, so we gathered names of academics and researchers who received public grants for research which requires science communication activities, or who belonged to the international Public Communication of Science and Technology (PCST) network. As Table 2 reveals, this resulted in a relatively low number of scientists and researchers compared with the number of practitioners or journalists specialised in scientific topics and engaged with scicomm, because we could gather the latter from national lists that are publicly available.
Table 2 shows that this imbalance was subsequently reduced by the different ‘availability’ of each group (expressed as the share of available experts among those contacted). Even though fewer than half as many scicomm practitioners as journalists were contacted, in the end they joined the expert pool in almost equal numbers, because their availability was almost double. Academics also showed a higher level of availability than journalists, contributing to the extension and diversity of the pool of experts required for the application of the Delphi method.
After completing the participant list, we started the iterative submission of questionnaires to all the experts who accepted the invitation, and the collection of their responses. Although some participants dropped out at each step of the research, the number of participants at each stage ranged from 17 to 46, always above the minimum of 10 recommended in the general guidelines of the Delphi method (Okoli and Pawlowski, 2004). This allows us to consider the feedback provided by the pool of experts as meaningful from a qualitative point of view.
We considered the risk of bias resulting from ‘strategic answering’ from experts who could theoretically have a potential conflict of interest to be negligible, because of the general nature of the questions posed, which focused only on the concept of trust in science communication in the public sphere and the nature of such trust.
In the first questionnaire, sent in April 2019, we asked for open answers to the following questions:
Positive factors: Could you please mention some key factors (like social, cultural, political or environmental factors) that can increase and promote trust in scientific communication among the general public?
Concerned domains: Could you please mention some critical topics or scientific domains where the bond of trust in science communication plays a key role according to your experience?
Risks and threats: Could you please mention some potential risks and threats that can undermine the trust in scientific communication for lay audiences?
Good practices: Could you please mention some good practices (like private activities, public initiatives or social regulations of any kind) that could promote trust in scientific communication?
The answers provided to the first questionnaire were organised, aggregated and rephrased to avoid duplicates and clarify concepts, and we submitted the overall list of answers to the participants for validation, to confirm that there was no loss of concepts and meaning introduced by the summarisation process.
After this validation step, in the second questionnaire, launched in June 2019, we asked participants to choose exactly 10 items from each of the aggregated lists produced with the previous questionnaire concerning positive factors, concerned domains, risks/threats and good practices. The number of choices was fixed and mandatory to avoid the distortions that would have resulted from allowing different ‘weights’ for the answers, corresponding to a different number of choices made by each participant. The feedback provided for the second questionnaire allowed us to check whether the pool of experts expressed some consensus on items from the four lists that we had asked them to provide individually (positive factors, concerned domains, risks/threats and good practices).
Following the Delphi method, we marked a consensus over an item if more than 50 per cent of the experts included that item in their list. In the third questionnaire, launched in August 2019, we asked participants to ‘prioritise the consensus’, ranking in decreasing order the items of each list indicated by a majority of experts.
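As a concrete illustration of the consensus rule, a minimal Python sketch (the item names below are invented for the example and are not the instrument we used):

```python
from collections import Counter

def consensus_items(selections, threshold=0.5):
    """Return the items chosen by more than `threshold` of the experts.

    `selections` holds one set of chosen items per expert."""
    n_experts = len(selections)
    counts = Counter(item for chosen in selections for item in chosen)
    return {item for item, c in counts.items() if c / n_experts > threshold}

# Hypothetical example: three experts each pick items from an aggregated list.
votes = [
    {"critical thinking", "pseudoscience", "propaganda"},
    {"critical thinking", "pseudoscience"},
    {"critical thinking", "sensationalism"},
]
# Only 'critical thinking' (3/3) and 'pseudoscience' (2/3) exceed the 50 per cent mark.
majority = consensus_items(votes)
```

Note that the rule is a strict majority: an item chosen by exactly half of the respondents does not reach consensus.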
To measure the level of agreement between the priority-ordered lists provided by each participant, we used Kendall’s coefficient of concordance (W), defined as ‘a measure of the agreement between several judges who have rank ordered several entities’ (Field, 2005), where a small value corresponds to disagreement between judges, and ‘a W value of 0.7 or greater would indicate satisfactory agreement’ (Okoli and Pawlowski, 2004: 26).
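For reference, W can be computed directly from the rank sums of the items across judges. A minimal sketch, assuming complete rankings without ties (with tied ranks the formula needs a correction term not shown here):

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m judges ranking n items.

    `rankings` is a list of m lists; rankings[j][i] is the rank (1..n)
    that judge j assigns to item i. Assumes complete rankings, no ties.
    W = 1 means perfect agreement; values near 0 mean no agreement."""
    m = len(rankings)           # number of judges
    n = len(rankings[0])        # number of ranked items
    # Sum of the ranks received by each item across all judges.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2  # expected rank sum if agreement were absent
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three judges ranking four items: identical rankings give W = 1.0.
w_perfect = kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Against the 0.7 benchmark cited above, a panel producing near-random orderings yields a W well below the satisfactory-agreement line.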
In November 2020, after the COVID-19 pandemic had transformed the global scenario of science communication, we resubmitted the questionnaire with the four aggregated lists of items linked to each research question to the same pool of experts, asking them to reconsider their choices in order to check whether the crisis had changed the consensus expressed beforehand.
The use of the Delphi method enabled us to extract from a relevant pool of experts meaningful qualitative information about a complex, multifaceted issue. Despite the complexity, we found a ‘strong consensus’ (where ‘strong’ means confirmed before and during the COVID-19 pandemic) on two lists of items chosen as relevant by more than 50 per cent of the experts consulted, suggesting that behind the complexity we can outline a shared ‘common feeling’, representing a relevant and usable qualitative result.
For potential risks that can undermine trust in scicomm, before the COVID-19 pandemic the pool of experts expressed a consensus on a small set of items: lack of critical thinking, dissemination of false pseudoscientific information and ideological propaganda. This consensus was not confirmed in November 2020, when these factors seemed to become less relevant. More than half of the same group of experts, once the pandemic had begun, indicated only ‘sensationalism over possible scientific discoveries raising false expectations’ and ‘science-illiterate journalists covering scientific topics acritically’ as potential risk factors.
A similar uncertainty emerged about the key factors that increase trust in scicomm: a consensus was found in 2019 regarding only four items (the need to increase scientific awareness starting from school; communicate complexity in an open and transparent way; encourage the habit of critical thinking; and promote dialogue between people, experts and institutions), but in 2020, no consensus at all emerged after repeating the same questionnaire, once COVID-19 had spread.
There was no strong consensus regarding ‘potential risks’ or ‘key factors’ for trust in scicomm among the pool of experts, and the number of items where a limited consensus emerged is too low to draw any conclusions. A wide variety of risks and positive factors affecting trust in science communication was emphasised. The outcome of such diversity is shown in the aggregated lists of items in Table 3. The table shows in alphabetical order the list of items indicated by the experts, filtered to those chosen by at least 20 per cent of the respondents in 2019 or in 2020. None of the items was mentioned by more than 50 per cent of the panellists in both rounds of questionnaires (2019 and 2020).
| Positive factors promoting trust in science communication | Potential risks compromising trust in science communication |
| --- | --- |
| Avoiding hype and sensationalism | Absence of experts’ voices from media |
| Communicate complexity in an open and transparent way | Anti-scientific beliefs coming from culture, education or relationships |
| Cultural and social development | Bad perception of pharma companies |
| Develop skills to deal with pseudoscience and anti-science | Conflicts of interest |
| Education of the public, effective dissemination of science | Dissemination of false pseudoscientific information |
| Encourage the habit of critical thinking | Imposing scientific culture as an absolute truth |
| Firm answer to scientific nonsense | Inability to understand uncertainty of science |
| Guarantee the quality of science communication | Increased dependence of science on economic and private interests |
| Highlight science embedded in our everyday lives | Lack of critical thinking |
| Improve the understanding of the progress in science and medicine | Lack of transparency when dealing with research misconduct |
| Increase scientific awareness starting from school | Lobbying and conflicts of interest among science communicators |
| Integrate science in the political decision-making process | Politicians supporting opinions against scientific evidence |
| Make science communication enjoyable and fun | Premature publication of scientific results raising false expectations |
| Merging humanistic and scientific cultures | Science-illiterate journalists covering scientific topics acritically |
| Political support to science | Scientific fraud or misconduct |
| Promote dialogue between people, experts and institutions | Scientists hiding or minimising possible negative drawbacks of their results |
| Promoting scientific awareness among lay people | Scientists not considering values and concerns on the side of lay audience |
| Reliability of the scientific communication | Self-referential attitude of scientists in dealing with the public |
| Reliability of the sources | Sensationalism over possible scientific discoveries raising false expectations |
| Rely on the positive value associated to science as a cultural factor | Social media bubbles or ‘echo chambers’ |
| Understanding of scientific method | Wrong sources of information |
| Use of a clear language avoiding complexity | |
In contrast with these results, a ‘strong’ consensus (confirmed in 2019 and 2020) is associated with critical topics where trust in scicomm plays a key role, and good practices to promote such trust. For both lists, a relevant number of items were consistently indicated by more than half of the experts in June 2019 and November 2020 (Tables 4 and 5).
Table 4. Critical topics where trust in science communication plays a key role, chosen by a majority of the experts (consensus emerged in June 2019 and/or November 2020):

- Role of pharma companies
- Communication of health risks
- Public health issues
- Genetically modified organisms
- Topics related with an increased perception of risk

Note: Following the Delphi method, consensus is considered to be reached over an item if more than 50 per cent of the respondents choose it to be included in the list.
Table 5. Good practices to foster trust in scicomm
Consensus emerged in June 2019 | Consensus emerged in November 2020
- Activities in primary school to stimulate curiosity and passion for research
- Provide training about communication techniques to scientists and researchers
- Joint initiatives between scientific institutions and the media, especially at the local level
- Promote scientific literacy in school textbooks
- Public events about science
- Implement regulations and laws based on scientific evidence
- Understand society's concerns and engage the audience as stakeholders
- Organise meetings with researchers and patients to promote trust in medical science
- Make scientific role models more visible
- Promote public science-based debates before taking public health decisions
- Direct encounters with science communicators and scientists
- Science festivals targeted to lay audiences and young people
Note: Following the Delphi method, consensus is considered to be reached over an item if more than 50 per cent of the respondents choose it to be included in the list.
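The consensus rule stated in the notes to Tables 4 and 5 can be sketched in a few lines of code; the item names and vote counts below are invented for illustration, not the study's data:

```python
def consensus_items(votes: dict, n_respondents: int) -> list:
    """Delphi consensus rule: return the items chosen by more
    than 50 per cent of the respondents."""
    return [item for item, count in votes.items()
            if count / n_respondents > 0.5]

# Hypothetical votes from a panel of 20 experts
votes = {"Vaccines": 14, "Climate change": 12, "Nuclear energy": 8}
print(consensus_items(votes, n_respondents=20))
# → ['Vaccines', 'Climate change']  (8/20 = 40% falls short of consensus)
```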
No new topics with over 50 per cent of respondents emerged in the inquiry during the pandemic, and three topics that were chosen by a majority in 2019 lost relevance during the pandemic (Table 4). Other domains of concern where trust in science communication plays a key role (chosen by between 20 per cent and 50 per cent of the pool of experts in any of the questionnaires) included access to new therapies, animal experimentation, chemistry, economic issues, evolutionary biology, genetically modified organisms, genetics, industrial chemical accidents, nuclear energy, oncology and waste disposal.
Concerning good practices to foster trust in scicomm (Table 5), the experts expressed a strong consensus over five good practices, and two good practices that emerged during the pandemic achieved a consensus that they did not have in the first round of questionnaires. Both emerging practices are activities involving physical encounters with scientists and scientific activities, suggesting that trust in the scientific endeavour can come not only from learning but also from direct, in-person relations and experiences, even more so in times of 'social distancing'.
Other good practices fostering trust in science communication (chosen by between 20 per cent and 50 per cent of the participants in any of the questionnaires) included: a coherent approach for any type of message; extend the research process to include lay audiences; facilitate access to the best scientific evidence and expertise with 'science media centres'; increase public funds for research to avoid interference by private interests; increase regulations on lobbies to protect scientific institutions such as the World Health Organization (WHO); promote public participation in science within museums and science centres such as the Exploratorium (San Francisco) or the Science Gallery (Dublin); restrict the practice of science communication to journalists with a scientific background; make scientific conferences accessible to lay audiences; and 'open access' initiatives for visiting research laboratories.
With the third questionnaire, in August 2019, we asked the pool of experts to prioritise the lists of 10 good practices and 10 critical topics over which a consensus of more than half of the experts had been found before the pandemic. The outcome of this prioritisation phase indicated a clear lack of consensus regarding priorities, with low values of Kendall's coefficient of concordance (0.25 for key topics and 0.13 for good practices): very close to the value of 0 associated with total disagreement over priorities, far from the value of 1 described in the literature as an expression of perfect agreement, and well below the value of 0.7 representing the minimum threshold for partial agreement (Everitt and Howell, 2005).
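To illustrate how Kendall's coefficient of concordance (W) summarises agreement over priorities, here is a minimal, tie-free sketch; the rankings are invented for illustration and are not the study's data:

```python
def kendalls_w(rankings) -> float:
    """Kendall's coefficient of concordance W for m raters ranking
    n items without ties: W = 12*S / (m^2 * (n^3 - n)), where S is
    the sum of squared deviations of the items' rank sums from their
    mean. W ranges from 0 (total disagreement) to 1 (perfect agreement)."""
    m = len(rankings)        # number of raters
    n = len(rankings[0])     # number of ranked items
    # Sum of the ranks each item received across all raters
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((t - mean) ** 2 for t in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three raters in perfect agreement over four items
print(kendalls_w([[1, 2, 3, 4]] * 3))                      # → 1.0
# Divergent priorities push W towards 0
print(kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1], [2, 4, 1, 3]]))  # ≈ 0.11
```

Values such as the study's 0.25 and 0.13 thus sit near the 'divergent' end of the scale, well below the 0.7 threshold for partial agreement.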
If we consider prioritisation as a proxy for implementation, we could say that even when there is agreement about 'what' can be done to promote trust in scicomm (good practices) and 'where' this trust can be supported (concerned domains), the diversity of environments, perspectives and contexts represented by the experts prevented an agreement on the 'how' (which good practice should be implemented with the highest priority). As no consensus over priorities emerged before the COVID-19 pandemic, a 'strong consensus' (confirmed in two separate waves) was not possible in this case, and we therefore did not repeat the prioritisation step in 2020.
If we consider the diversity of opinions and perspectives that emerged about key factors promoting trust in scicomm and the risk factors jeopardising it, and compare this with the consensus found within the same panel of experts (about critical topics where trust in scicomm plays a key role, and good practices to foster such trust), we can say that this exploratory, qualitative research confirms previous critiques in the literature of the limits of one-size-fits-all scicomm activities. In other words, 'there's a thousand publics out there that one could address, any of whom has to be understood by the scientists in order to know how to deal with them, how to work with them, engage them, try to benefit them and be benefited by them' (Mooney, 2010: 10).
Our research therefore reinforces the need identified by scholars for scicomm practitioners and researchers to consider the specific context, community, target audience, culture and cultural history, biases, demographic composition, misinformation and social debate characterising any local science communication ecosystem. This opens several paths for further research focused on public segmentation (Mooney, 2010; Füchslin, 2019; Metag and Schäfer, 2018), strategic communication (Besley et al., 2019) or framing (Druckman and Lupia, 2017).
The outcome of our research can also be interpreted as a confirmation of the limits of the ‘diffusionist ideology’ of science communication, which ‘fundamentally rests on a notion of communication as transfer’, assuming that ‘the same knowledge in different contexts will result in the same attitudes and eventually in the same type of behavior’ (Bucchi, 2008: 66) and treats knowledge as ‘a fixed, context-independent phenomenon that ought to be taken from the scientific community and delivered, unchanged, to the public’ (Suldovsky, 2016: 419). In line with previous research work, the outcome of this Delphi analysis seems to challenge the diffusionist model, suggesting that each communication act, in order to be effective and fulfil its purpose, should be adapted when moving from one context to another.
The most relevant outcome of this work is the information collected from scicomm experts before the COVID-19 pandemic. Comparison of this information with responses collected during the ongoing pandemic from the same group of concerned stakeholders provides evidence suggesting that topics related to health and the environment were considered critical and controversial subjects for trust in scicomm even before the pandemic. The pandemic cannot, therefore, be considered a single 'triggering event' for the ongoing scientific controversies.
Within the limits and caveats of any exploratory and qualitative research, our findings identify a set of critical topics or scientific domains where the bond of trust in science communication plays a key role. Such topics include vaccines and the role of pharmaceutical companies, climate change and environmental issues, medical sciences, communication of health risks and public health issues. This result has operational value for scicomm practitioners and policy actors working to trigger constructive engagement, dialogue and participation around these critical topics. Our contribution could also be useful for scicomm scholars interested in further analysis of proactive and pre-emptive 'pre-bunking' initiatives focused on the same set of topics (Basol et al., 2021; Lewandowsky and Van der Linden, 2021).
The list of best practices to promote trust in scicomm revealed a shared perception of effectiveness for science communication activities based on direct interactions with targeted audiences, and the consensus around this list became even more meaningful after the same pool of experts confirmed it during the COVID-19 pandemic. The focus among best practices was on activities for schools, training for scientists and researchers, joint initiatives at the local level and public science events. During the COVID-19 pandemic, science-based law implementation, the visibility of scientific role models and public debates lost relevance among the experts' choices. At the same time, direct engagement activities such as 'direct encounters with science communicators and scientists' and 'science festivals targeted to lay audiences and young people' found a consensus in 2020 that had not been reached before the pandemic.
This orientation of the pool of experts towards 'hands-on' activities (where science is experienced and not just learned) is another relevant result for scicomm practitioners looking for best practices for their activities, and for researchers interested in undertaking further research on the effectiveness of the experiences highlighted by this exploratory work.
The consensus emerging on a defined set of topics considered critical for trust in scicomm reveals a complexity which does not contradict the high level of general trust in science and scientists recorded in polls collected over recent decades, confirming a consistent trend whereby in the United States 'confidence in the other highly ranked institutions has not been as stable as it has been for science' (Krause et al., 2019: 2) and 'nine in ten EU citizens think that the overall influence of science and technology is positive' (Eurostat, 2021: 90).
Within this complexity frame, where trust in science and controversies on mediated science coexist in the same ‘scicomm ecosystem’, we need further research to better understand perceptions of a ‘crisis of public mistrust of science’ (Wynne, 2006: 211), ‘crisis in science literacy and communication’ (Smol, 2018: 952) and an ‘anti-science crisis’ (Medvecky and Leach, 2019: 103) reported by scholars even before the pandemic.
Such perceptions may be reconsidered as a potential cognitive bias induced by the increased space given to misinformation, disinformation, anti-science and pseudoscience in traditional mainstream media (Zarocostas, 2020) and digital media (Xiao et al., 2021), resulting in what the WHO defined as an 'infodemic' (Tangcharoensathien et al., 2020). This hypothesis deserves more in-depth and specific research, with different methodologies such as discourse analysis of semi-structured interviews with concerned stakeholders, focused on the topics highlighted as 'critical' by our panel of experts.
The problematisation of the diversity expressed by experts for lists where a consensus was not found (positive factors and potential risks for trust in scicomm) may encourage scholars to develop the analysis of the trust relationship with scicomm in local contexts and with specific audiences, using the approach suggested by Scheufele and Krause (2019: 1), who envisioned 'more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences', where empirical work is scant.
The noted diversity of feedback, coming from the same pool of experts and consistent over time before and during the pandemic, also raises meaningful new research questions aimed at 'locating the differences'. Is such diversity a context-dependent variable leading different experts to multiple 'local certainties'? Is it an expression of uncertainty among experts sharing the same vision of a well-known problem? Or is it a symptom of a fuzzy understanding of a problem that is still out of focus, because of different assumptions and oversimplifications about what 'trust in scicomm' is, the nature of such trust and the way it is expressed at a social level?
If further research confirms the latter hypothesis, this fuzzy understanding of trust in scicomm (resulting in implementation problems for science policymakers and scicomm practitioners) will require an additional theoretical and conceptual effort. In the ongoing pandemic crisis, mistrust in scientific information communicated to non-specialised audiences was reported as a direct cause of 'a rampant increase in the number of coronavirus cases and deaths' (Nasr, 2021: 2). Reaching a common ground of 'understanding of trust – and doubt – as contextual, relational and fluctuating' (Irwin and Horst, 2016: 4) can therefore be a promising research path and a life-saving epistemological challenge.