      Critical topics and good practices for trust in science communication before and during the COVID-19 pandemic

      Research for All
      UCL Press
      science communication, scicomm, trust in science communication, Delphi method, COVID-19


            For a qualitative analysis of the factors affecting trust in science communication (scicomm), we used the Delphi method to consult a pool of experts based in Italy and Belgium (researchers/academics, journalists and scicomm practitioners) before and during the COVID-19 pandemic. The results revealed a ‘strong’ consensus (confirmed before and during the pandemic) about good practices promoting trust in scicomm (mainly based on direct interactions with targeted audiences), and about critical topics where trust plays a key role. Such topics include vaccines and the role of pharmaceutical companies, climate change and environmental issues, medical sciences, communication of health risks and public health issues. According to our results, issues related to health and the environment were perceived as critical and controversial subjects for trust in scicomm even before the pandemic. The same pool of experts also expressed very diverse views regarding risks and threats to trust in scicomm, and the social, cultural, political and environmental factors that can increase and promote trust in scientific communication among lay audiences. Such diversity reveals the need for further research to explore differences due to context, based on the individual views of experts or generated from a conceptualisation of trust in scicomm which may still be fuzzy and unclear.


            Key messages
            A qualified pool of experts in science communication, consulted using the Delphi method before and during the COVID-19 pandemic, identified a set of critical topics affecting trust in science communication: vaccines and the role of pharmaceutical companies, climate change and environmental issues, medical sciences, communication of health risks and public health issues.
            The same group of experts reached a consensus about a set of good practices promoting trust in science communication by direct interactions with targeted audiences where science is experienced, not just learned.
            Their views varied about the risks and threats to trust in science communication, and about the social, cultural, political and environmental factors that can promote trust in scientific communication among lay audiences.

            Context and previous research: the academic debate on trust

            In the academic literature, the subject of trust spans multiple research fields: social sciences in the domain of politics and international affairs have produced measures of aggregated levels of social trust (Justwan et al., 2017), communication studies have analysed trust in information in online environments (Metzger and Flanagin, 2013), and political studies have related the level of trust in public institutions expressed by journalists to the social environment in their country (Hanitzsch and Berganza, 2012).

            In our societies, the concept of ‘trust in science’ is no longer a paradox. The type of trust needed to benefit from scientific knowledge is not a blind ‘leap of faith’ clashing with the Royal Society’s motto ‘Nullius in verba’ (take nobody’s word for it). Trust is not only required for non-scientific audiences to grasp complex phenomena without mastering the underlying theory and research. It has also become a fundamental skill within the scientific community, where knowledge advancement implies some trust in other people’s outcomes. Trust in scientific knowledge produced, verified and analysed by others is becoming part of many scientific projects: research teams may be large and geographically distributed, so that ‘even within the same research team, trust in the knowledge of others is essential for everyday scientific practice’ (Hendriks et al., 2016a: 145), because ‘the cooperation of researchers from different specializations and the resulting division of cognitive labor are, consequently, often unavoidable if an experiment is to be done at all’ (Hardwig, 1991: 695).

            Trust is also ‘a mechanism for the reduction of complexity … it enables people to maintain their capacity to act in a complex environment’ (Siegrist, 2021: 481). This is a fundamental function in our societies, because ‘all social arrangements rely on trust, and many involve expertise … if trust in experts were to come to a halt, society would come to a halt, too’ (Oreskes, 2019: 247). The level of public trust granted to scientists and science is still higher than the one granted to other social actors and fields (Krause et al., 2019), with a steady trend confirmed by reports such as the General Social Survey realised by the National Opinion Research Center (NORC) of the University of Chicago, the Public Attitudes to Science published by the market research company Ipsos MORI, the Global Monitor of the Wellcome Trust, the Science & Engineering Indicators compiled by the National Science Board in the US, the British Social Attitudes survey produced by the social research organisation NatCen, the Public and Scientists’ Views on Science and Society issued by the Pew Research Center, and the ‘Eurobarometer’ surveys collected by Eurostat (the statistical office of the European Union) about ‘European citizens’ knowledge and attitudes towards science and technology’ (see Eurostat, 1993, 2001, 2005, 2010, 2013, 2021; Ipsos MORI, 2011, 2014, 2018, 2019; NORC, 2013; Curtice et al., 2019; National Science Board, 2018, 2020a, 2020b; Pew Research Center, 2015; Funk et al., 2020; Wellcome Trust, 2018, 2020).

            Even in this general climate of trust in science, scholars have reported various critical issues. For example, polarisation around specific cultural, political or religious identities may generate mistrust and social controversy about certain scientific issues (Kahan, 2017; Hendriks et al., 2016a). The ‘chain of trust’ linking scientists to citizens, and scientific research to public health measures, also involves the political and industrial sphere, where the levels of trust in business leaders and governments managing our public health systems are far from the trustworthiness accorded to scientists and scientific research (Larson et al., 2018; Ipsos MORI, 2019). Our scientific institutional culture shows:

            a lack of recognition of the increasing strains on public credulity and trust in which science itself has been an agent, [with an] apparent institutional lack of ability to imagine that public concerns may be based on reasonable questions that are not being recognised and addressed, rather than being rooted in ignorance and mis-understanding. (Wynne, 2006: 219)

            Scholarly debate around the concept of trust in science has also explored the drawbacks of ‘uncritical trust in science’. This is considered a potential risk when citizens are asked to trust partial and provisional scientific outcomes concerning topics still under scrutiny (such as the ongoing pandemic) and coming from ‘zones of uncertainties’ where the scientific community is still struggling to find a clear consensus, grounded on a strong base of scientific evidence. Meanwhile, science is used to legitimate public policies, fostering ‘the idea that support for the policy stance is determined by scientific fact, and that no alternative is left’ (Wynne, 2006: 214).

            According to Krause et al. (2021: 230), ‘insisting that citizens simply trust the science on any given study is not only disingenuous, it is likely unethical’ and ‘uncritical trust in science would be democratically undesirable’ as a goal per se, because certain levels of mistrust are linked to legitimate concerns coming from inequities in our public health systems, and ‘efforts to force scientific trust on society could make the worst fear a reality: that trust in science will become politicized’. From this perspective, uncritical trust in science as a social compliance requirement is a risk that may contaminate the democratic sphere with a politicised and controversial social conversation around science, resulting in a polluted ‘science communication environment’ (Kahan, 2017: 45). To some writers, this risk seems not to be merely a scholarly hypothesis, because it is increasingly tangible today, given that ‘in the context of COVID-19 crisis, science was invoked by politicians, or scientific legitimacy was claimed by advisers to governments, to support measures that sought total compliance and thus limited conversation’ (Bucchi and Trench, 2021: 10).

            The trust needed in complex societies does not exempt us from critical thinking and duties of vigilance: epistemic trust in knowledge that scientists have produced or provided ‘rests not only on the assumption that one is dependent on the knowledge of others who are more knowledgeable; it also entails a vigilance toward the risk to be misinformed’ (Hendriks et al., 2016a: 143). Far from being a passive acceptance of scientific claims, actual trust in science comes from personal evaluations affected by successful replication of studies (Hendriks et al., 2020), open discussion about ethical implications of preliminary scientific results (Hendriks et al., 2016c), and perceived expertise, integrity and benevolence of sources (Hendriks et al., 2015, 2016a).

            Trust in scicomm as a research topic

            In this context, research on the role of science communication (scicomm) as ‘the social conversation around science’ (Bucchi and Trench, 2021: 8) is crucial to understand how this conversation can counteract the tendency of scientific communities to be perceived as structurally monolithic and inaccessible to lay audiences. It is necessary to facilitate the process of sense making around scientific topics, support an informed and critical trust in science among non-specialised publics, and extend scientific debate from the academic community to a wide range of communities, practices and initiatives, also through social media (Davies and Horst, 2016).

            A further reason for researching trust in science communication is that credibility and trust in connection with science may be ‘even more important than in any other area of social life’ (Weingart and Guenther, 2016: 9), and this topic is also linked to the controversial role taken in recent years by social media and personal blogs. These digital communication environments are used to spread misinformation and pseudoscience, jeopardising trust in scicomm, and legitimating pseudoscience and anti-science attitudes on established channels, ranging from popular blogs to the aggressive use of Twitter made by the White House (Chan et al., 2017). However, they have also been tools within the mechanisms of public scrutiny which have been fundamental in cases of correction (Hendriks et al., 2016b) or even retraction of scientific papers (Yeo and Brossard, 2017). Recent studies (Battiston et al., 2021) also scrutinise the role of scicomm in fostering citizens’ compliance with public health policies during the pandemic.

            Trust in scicomm is an important research topic for the social sciences also because of the increased availability of scientific information through those same digital channels today. This exposes online audiences to more direct interactions with experts and a higher quantity of science news than is possible through traditional news outlets. This access to authorities in the scientific community gives the public an enhanced sense of trust, rooted also in the social recommendations accompanying such news (Huber et al., 2019).

            The role of scicomm as a connector between the best available science and lay audiences makes it relevant to question how trust in scicomm itself is formed, shaped and lost, especially for politicised, polarised and controversial topics where there is the tendency to regard controversy as something ‘that should be kept within the scientific community’ (Miller, 2001: 118). The changing nature of the trust relationship between lay audiences and scicomm initiatives has led scholars, scicomm practitioners and journalists specialised in scientific issues to work to keep up with changes in technology, media and culture, adapting their communication activities to an environment where contents, formats, habits and communication channels have radically evolved over the years (Davies and Horst, 2016).

            In 1985, the ‘need for an overall awareness of the nature of science and, more particularly, of the way that science and technology pervade modern life’ shaped the well-known Public Understanding of Science report released in London by the Royal Society, which stated that ‘improving the general level of public understanding of science is now an urgent task for the well-being of the country’ and ‘scientific literacy is becoming an essential requirement for everyday life’ (Bodmer, 1985: 10). Besides these efforts towards ‘public understanding’, numerous scicomm activities have adopted the ‘deficit model’ based on the assumption that ‘the public has a “knowledge deficit” that affects perception of science and scientists’, and ‘science communicators can change attitudes towards science by providing more information’ (Short, 2013: 40).

            This model is still in use after more than three decades, as the idea of a ‘public deficit’ never left the scientific debate (Ko, 2016; Cortassa, 2016; Raps, 2016; Meyer, 2016; Suldovsky, 2016). Indeed, the scientific community regularly reinvents the public deficit model explanation for public alienation from institutional science, producing ‘a repertoire of possible alibis which prevent honest institutional-scientific self-reflective questioning’ (Wynne, 2006: 216). This is happening even though the deficit assumption has been strongly questioned by studies showing that factual scientific information and individual scientific literacy can become irrelevant for changing attitudes towards science because of prevailing (or coexisting) social, ethical, religious and cultural beliefs (Short, 2013), or other psychological phenomena, such as cognitive polyphasia (Li and Tsai, 2019) and various forms of cognitive bias, confirming that ‘human cognition appears organized to resist belief modification’ (Bronstein and Vinogradov, 2021: 1).

            Meanwhile, alternative models and practices based on ‘dialogic’ (or ‘consultative’) and ‘participatory’ (or ‘deliberative’) approaches have been discussed and practised over the years (Davies and Horst, 2016) for cases and contexts where the need for an exchange of inputs and concerns between scientists and citizens (or the need for an active engagement of citizens in open debates over scientific issues for shaping public policies) has become more prominent than the educational and social concerns that were addressed with scicomm activities based on the ‘deficit model’.

            The multifaceted nature of the activities that fall under the wide category of ‘science communication’ has prompted science communication scholars to put in context the traditional narrative depicting the evolution of scicomm as linear historical progress from ‘deficit to dialogue’. According to Trench (2008: 123), ‘the supposed shift from deficit to dialogue has not been comprehensive, nor is it irreversible’. Davies and Horst (2016: 5) propose a more complex perspective on the evolution of science communication, conceiving a ‘scicomm ecosystem’ where multiple models are coexisting. In this complex ecosystem, we do not have ‘a narrative of progress, but one of multiplication of discourses’ (Bauer, 2009: 222) where different (and sometimes conflicting) forms of science communication are entangled with the diversity of models, cultures, contexts, practices and practitioners contributing to the public discourse about science.

            In the recent scientific debate around trust in scicomm, new models of science communication propose moving beyond a naive view of science as ‘value-free’, rejecting the assumption that the only value shared by the scientific community is a pure interest in the progress of knowledge. After previous research showing that ‘we tend to trust and to believe the arguments of a person whose values are similar to our own’ (Siegrist and Hartmann, 2017: 449), critics of the ‘pure science model’ have argued that the trustworthiness of science is better communicated by sharing non-scientific values, to find a common ground between science and society (Oreskes, 2019).

            Considering the relevance of trust in scicomm as a research topic, the changing context for science communication and the specific challenges posed by the COVID-19 pandemic, we reached out to experts in scicomm (researchers, science journalists and scicomm professionals), asking them to share their experience regarding lay audiences’ trust in science communication. The key questions under scrutiny in our analysis are:

            Q1. According to the pool of experts who took part in this study, what are the critical topics, the key factors, the possible risks and the good practices that can affect the bond of trust between lay audience and science communication?

            Q2. Before and during the COVID-19 pandemic, on which of these issues, and on which specific items, did the individual feedback of the experts converge on a shared consensus?


            To explore trust in science communication from different perspectives, our exploratory, qualitative research submitted a series of iterative online questionnaires to a multiple-stakeholder pool of experts comprising researchers/academics, journalists and scicomm practitioners, based in two countries (Italy and Belgium, chosen for their cultural and physical proximity to the research team).

            The feedback provided by the pool of experts was collected, organised and analysed using the Delphi method. Developed in the 1950s, this method is recognised as a flexible technique to ‘obtain the most reliable consensus of a group of experts’ (Okoli and Pawlowski, 2004: 16) in situations where there is ‘incomplete knowledge about a problem or phenomena’ that may benefit from subjective judgements of experts (Skulmoski et al., 2007: 12), and for cases where other statistical methods ‘are not practical or possible because of the lack of appropriate historical/economic/technical data and thus where some form of human judgmental input is necessary’ (Marchais-Roubelat and Roubelat, 2011: 1496).

            The Delphi method is based on iteration cycles (Figure 1), starting from an initial researcher-defined questionnaire and the subsequent collection of responses from the experts, each of which informs the questionnaire that follows. The goal is to realise a series of one-to-many controlled interactions between the experts and the researchers, reducing the complexity of the communication flow of an open discussion to facilitate the detection of a majority consensus, or the lack of such consensus, over a specific set of topics. This iteration process also allows the participants to refine their views with controlled feedback from the group outcomes (Skulmoski et al., 2007). In a Delphi panel, the validity and the value of the result rely on the qualifications of the experts involved, not on the size of the sample: the recommended size for a Delphi panel of experts varies from 10 to 18 (Okoli and Pawlowski, 2004). Figure 1 summarises the workflow of our Delphi research process.

            Figure 1.

            Summary workflow of the Delphi method adopted for this research (Source: Adapted from Skulmoski et al., 2007: 3)
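The iteration cycle summarised in Figure 1 can be sketched as a simple loop. This is an illustrative outline, not the authors' instrument: the `refine` and `has_consensus` callables are hypothetical stand-ins for the researchers' aggregation and consensus-checking steps.

```python
def delphi_rounds(questionnaire, experts, refine, has_consensus, max_rounds=4):
    """Sketch of a Delphi iteration cycle: send a questionnaire, collect one
    response per expert, and build the next questionnaire from that controlled
    feedback, until consensus is settled or the round budget is spent."""
    history = []
    for round_no in range(1, max_rounds + 1):
        # One-to-many controlled interaction: each expert answers independently.
        responses = [expert(questionnaire) for expert in experts]
        history.append((round_no, questionnaire, responses))
        if has_consensus(responses):
            break
        # The next questionnaire is derived from the previous round's feedback.
        questionnaire = refine(questionnaire, responses)
    return history
```

The key design point mirrored here is that experts never interact directly: all feedback flows through the researchers, who decide how each round reshapes the next.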

            In order to select the target groups for our research, we adopted the procedure described by Okoli and Pawlowski (2004: 20) to ‘categorize the experts before identifying them’, using a Knowledge Resources Nomination Worksheet (KRNW) which lists relevant discipline or skills, organisations and related literature (Table 1).

            Table 1.

            Universe for the selection of participants in the Delphi panel, based on the Knowledge Resources Nomination Worksheet (KRNW) (Source: Adapted from Okoli and Pawlowski, 2004: 8)

            Discipline or skills | Organisations | Related literature
            Group 1: Academics/Researchers
            Researchers and scientists with scicomm experience inferred by received grants, or linked to their networking activities.
            ERC beneficiaries in Italy/Belgium
            Marie Curie beneficiaries in Italy/Belgium
            Members of PCST network in Italy/Belgium
            Jamieson et al. (2017), The Oxford Handbook of the Science of Science Communication.
            Davies and Horst (2016), Science Communication: Culture, identity and citizenship.
            Group 2: Journalists
            Journalists listed in national associations of science journalism practitioners.
            Associations of scientific journalists
            ABJSC members (Belgium)
            AJP members (Belgium)
            AGJPB members (Belgium)
            UGIS members (Italy)
            Group 3: Media practitioners of scicomm
            Press officers of academic institutions, social media managers of scientific institutions, members of advocacy groups dedicated to promoting scientific culture, organisers of events and science fairs.
            Media practitioners of scicomm
            ECSITE members in Italy/Belgium
            Oggiscienza (Italy)

            We then populated the defined categories with names of experts who could be involved in the Delphi process, picking from our direct contacts and from the list of organisations to be contacted according to the KRNW. When the first list was completed, we contacted the experts on the list, asking them to nominate other recognised experts in their fields with a ‘snowball technique’, aiming at achieving a sample size that provided a diversity of voices from each of the categories.
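The ‘snowball’ expansion described above can be sketched as follows; `nominate` and `ask_availability` are hypothetical stand-ins for the email exchanges with the experts, not functions from the study.

```python
def snowball(seed_contacts, nominate, ask_availability, max_rounds=2):
    """Sketch of snowball recruitment: contact the seed list, ask each
    respondent to nominate peers, contact the new names, and repeat until
    no new nominations arrive or the round budget is spent."""
    contacted = set(seed_contacts)
    available = set()
    frontier = set(seed_contacts)
    for _ in range(max_rounds):
        nominations = set()
        for person in frontier:
            if ask_availability(person):
                available.add(person)
            nominations |= set(nominate(person))
        # Only people not yet contacted are approached in the next round.
        frontier = nominations - contacted
        contacted |= frontier
        if not frontier:
            break
    return contacted, available
```

In the study this process grew the contact list from 395 to 457 experts, of whom 49 confirmed their availability.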

            ‘Questionnaire Zero’, asking for availability to take part in the research and names of other scicomm experts, was submitted to an initial list of 395 experts in four languages: English/Italian for contacts based in Italy, and French/Dutch for contacts based in Belgium. After the first group of experts was contacted, other people identified by them as peers with similar expertise were contacted as well, checking their availability with the same questionnaire, raising the number of contacted experts to 457. At the end of this process, we had a list of 49 experts confirming their willingness to contribute to our research (Table 2).

            Table 2.

            Composition of participants in the Delphi panel (Source: Authors, 2022)

            Participants | Contacted | % of contacted | Available | % of available | Available/contacted ratio
            Practitioner of scicomm | 121 | 26.48 | 20 | 40.82 | 16.53
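The derived columns of Table 2 can be checked arithmetically against the totals reported in the text (457 experts contacted overall, 49 available overall); this is a verification sketch, not code from the study.

```python
# Totals reported in the text, and the scicomm-practitioner row of Table 2.
contacted_total, available_total = 457, 49
contacted, available = 121, 20

# Share of all contacted experts who were practitioners.
pct_of_contacted = round(100 * contacted / contacted_total, 2)
# Share of all available experts who were practitioners.
pct_of_available = round(100 * available / available_total, 2)
# Availability within the group: available practitioners / contacted practitioners.
availability_ratio = round(100 * available / contacted, 2)

print(pct_of_contacted, pct_of_available, availability_ratio)  # 26.48 40.82 16.53
```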

            To represent the diversity of the panel, the participants were categorised in three groups. These do not correspond to the concept of ‘cohort’ used in statistical methods for social sciences. Rather, and according to the Delphi method, the pool of experts was considered as a single entity, providing qualitative results based on the opinions of experts coming from different fields of knowledge and social groups. The only requirement in this methodology is to guarantee inclusiveness and diversity of qualified voices by drawing on a maximally extended panel of experts (rather than prioritising balance) with the use of the ‘snowball technique’, asking the experts identified in the KRNW table to name peers to be invited to join the pool, and asking the invitees to provide new names of other relevant experts, until this iterative process comes to an end. The result is a list of relevant knowledge brokers meeting the four ‘expertise requirements’ cited in Skulmoski et al. (2007): knowledge and experience with the issues under investigation; capacity and willingness to participate; sufficient time to participate in the Delphi panel; and effective communication skills.

            To the best of our knowledge, there are no publicly available lists of people registered as academics with expertise in science communication, so we gathered names of academics and researchers who received public grants for research which requires science communication activities, or who belonged to the international Public Communication of Science and Technology (PCST) network. As Table 2 reveals, this resulted in a relatively low number of scientists and researchers compared with the number of practitioners or journalists specialised in scientific topics and engaged with scicomm, because we could gather the latter from national lists that are publicly available.

            Table 2 shows that this imbalance was subsequently reduced by the different ‘availability’ of each group (expressed as the share of experts available among those contacted). Even though the scicomm practitioners contacted were fewer than half the number of journalists, in the end they joined the expert pool in almost the same numbers because their availability was almost double. Academics also showed a higher level of availability than journalists, contributing to the extension and diversity of the pool of experts required for the application of the Delphi method.

            After completing the participant list, we started the iterative submission of questionnaires and data collection for all the experts who accepted the invitation. Although there was some attrition at each step of the research, at each stage of the process the number of participants ranged from 17 to 46, always more than the minimum of 10 participants recommended by the general guidelines of the Delphi method (Okoli and Pawlowski, 2004). This allows us to consider the feedback provided by the pool of experts as meaningful from a qualitative point of view.

            We considered the risk of bias resulting from ‘strategic answering’ from experts who could theoretically have a potential conflict of interest to be negligible, because of the general nature of the questions posed, which focused only on the concept of trust in science communication in the public sphere and the nature of such trust.

            In the first questionnaire, sent in April 2019, we asked for open answers to the following questions:

            • Positive factors: Could you please mention some key factors (like social, cultural, political or environmental factors) that can increase and promote trust in scientific communication among the general public?

            • Concerned domains: Could you please mention some critical topics or scientific domains where the bond of trust in science communication plays a key role according to your experience?

            • Risks and threats: Could you please mention some potential risks and threats that can undermine the trust in scientific communication for lay audiences?

            • Good practices: Could you please mention some good practices (like private activities, public initiatives or social regulations of any kind) that could promote trust in scientific communication?

            The answers provided to the first questionnaire were organised, aggregated and rephrased to avoid duplicates and clarify concepts, and we submitted the overall list of answers to the participants for validation, to confirm that there was no loss of concepts and meaning introduced by the summarisation process.

            After this validation step, in the second questionnaire, launched in June 2019, we asked participants to choose exactly 10 items from each of the aggregated lists produced with the previous questionnaire concerning positive factors, concerned domains, risks/threats and good practices. The number of choices was fixed and mandatory to avoid the distortions that would have resulted from allowing different ‘weights’ for the answers, corresponding to a different number of choices made by each participant. The feedback provided for the second questionnaire allowed us to check whether the pool of experts expressed some consensus on items from the four lists that we had asked them to provide individually (positive factors, concerned domains, risks/threats and good practices).

            Following the Delphi method, we marked a consensus on an item if more than 50 per cent of the experts included that item in their list. In the third questionnaire, launched in August 2019, we asked participants to ‘prioritise the consensus’, ranking in decreasing order the items of each list indicated by a majority of experts.
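The 50 per cent consensus rule can be expressed compactly; the item names in the usage example below are illustrative, not items from the questionnaires.

```python
from collections import Counter

def consensus_items(selections, threshold=0.5):
    """Items chosen by strictly more than `threshold` of the experts.

    selections: one set per expert, holding the items that expert
    picked from the aggregated list.
    """
    counts = Counter(item for chosen in selections for item in chosen)
    n_experts = len(selections)
    return {item for item, c in counts.items() if c / n_experts > threshold}
```

For example, with three experts picking `{'a', 'b'}`, `{'a'}` and `{'a', 'c'}`, only `'a'` clears the majority bar.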

            To measure the level of agreement between the priority-ordered lists provided by each participant, we used Kendall’s coefficient of concordance (W), defined as ‘a measure of the agreement between several judges who have rank ordered several entities’ (Field, 2005), where a small value corresponds to disagreement between judges, and ‘a W value of 0.7 or greater would indicate satisfactory agreement’ (Okoli and Pawlowski, 2004: 26).
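For m judges ranking n items without ties, Kendall's W is W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the per-item rank sums from their mean. A minimal implementation (ignoring tie corrections, so a sketch rather than a full statistical routine) is:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m judges ranking n items.

    rankings: list of m lists, each a permutation of the ranks 1..n.
    Returns a value in [0, 1]: 1 means perfect agreement, 0 no agreement.
    """
    m = len(rankings)
    n = len(rankings[0])
    # Sum of the ranks each item received across all judges.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

By the threshold quoted in the text, a result of 0.7 or greater would indicate satisfactory agreement between the judges.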

            In November 2020, after the COVID-19 pandemic affected the global scenario of science communication, we submitted the questionnaire to the same pool of experts with the four aggregated lists of items linked to each research question, asking them to reconsider their choices of items for the proposed lists in order to check whether the crisis had brought change to the consensus expressed beforehand.


            The use of the Delphi method enabled us to extract from a relevant pool of experts meaningful qualitative information about a complex, multifaceted issue. Despite the complexity, we found a ‘strong consensus’ (where ‘strong’ means confirmed before and during the COVID-19 pandemic) on two lists of items chosen as relevant by more than 50 per cent of the experts consulted, suggesting that behind the complexity we can outline a shared ‘common feeling’, representing a relevant and usable qualitative result.

            For potential risks that can undermine trust in scicomm, before the COVID-19 pandemic the pool of experts expressed a consensus on a small set of items: lack of critical thinking, dissemination of false pseudoscientific information and ideological propaganda. This consensus was not confirmed in November 2020, when these factors seemed to become less relevant. More than half of the same group of experts, once the pandemic had begun, indicated only ‘sensationalism over possible scientific discoveries raising false expectations’ and ‘science-illiterate journalists covering scientific topics acritically’ as potential risk factors.

            A similar uncertainty emerged about the key factors that increase trust in scicomm: a consensus was found in 2019 regarding only four items (the need to increase scientific awareness starting from school; communicate complexity in an open and transparent way; encourage the habit of critical thinking; and promote dialogue between people, experts and institutions), but in 2020, no consensus at all emerged after repeating the same questionnaire, once COVID-19 had spread.

            There was no strong consensus regarding ‘potential risks’ or ‘key factors’ for trust in scicomm among the pool of experts, and the number of items where a limited consensus emerged is too low to draw any conclusions. A wide variety of risks and positive factors affecting trust in science communication was emphasised. The outcome of such diversity is shown in the aggregated lists of items in Table 3. The table shows in alphabetical order the list of items indicated by the experts, filtered to those chosen by at least 20 per cent of the respondents in 2019 or in 2020. None of the items was mentioned by more than 50 per cent of the panellists in both rounds of questionnaires (2019 and 2020).

            Table 3.

            Positive factors and potential risks mentioned by at least 20 per cent of experts contacted in the Delphi panel (items listed in alphabetical order for each list) (Source: Authors, 2022)

            Positive factors promoting trust in science communication:
            - Avoiding hype and sensationalism
            - Communicate complexity in an open and transparent way
            - Cultural and social development
            - Develop skills to deal with pseudoscience and anti-science
            - Education of the public, effective dissemination of science
            - Encourage the habit of critical thinking
            - Firm answer to scientific nonsense
            - Guarantee the quality of science communication
            - Highlight science embedded in our everyday lives
            - Improve the understanding of the progress in science and medicine
            - Increase scientific awareness starting from school
            - Integrate science in the political decision-making process
            - Make science communication enjoyable and fun
            - Merging humanistic and scientific cultures
            - Non-polarising behaviour
            - Political support to science
            - Promote dialogue between people, experts and institutions
            - Promoting scientific awareness among lay people
            - Reliability of the scientific communication
            - Reliability of the sources
            - Rely on the positive value associated to science as a cultural factor
            - Transparency
            - Understanding of scientific method
            - Use of a clear language avoiding complexity

            Potential risks compromising trust in science communication:
            - Absence of experts’ voices from media
            - Anti-scientific beliefs coming from culture, education or relationships
            - Bad perception of pharma companies
            - Conflicts of interest
            - Dissemination of false pseudoscientific information
            - Ideological propaganda
            - Imposing scientific culture as an absolute truth
            - Inability to understand uncertainty of science
            - Increased dependence of science on economic and private interests
            - Knowledge gap
            - Lack of critical thinking
            - Lack of transparency when dealing with research misconduct
            - Lobbying and conflicts of interest among science communicators
            - Oversimplification
            - Political bias
            - Politicians supporting opinions against scientific evidence
            - Premature publication of scientific results raising false expectations
            - Religious bias
            - Science-illiterate journalists covering scientific topics acritically
            - Scientific fraud or misconduct
            - Scientists hiding or minimising possible negative drawbacks of their results
            - Scientists not considering values and concerns on the side of lay audience
            - Self-referential attitude of scientists in dealing with the public
            - Sensationalism over possible scientific discoveries raising false expectations
            - Social media bubbles or ‘echo chambers’
            - Wrong sources of information

            In contrast with these results, a ‘strong’ consensus (confirmed in 2019 and 2020) is associated with critical topics where trust in scicomm plays a key role, and good practices to promote such trust. For both lists, a relevant number of items were consistently indicated by more than half of the experts in June 2019 and November 2020 (Tables 4 and 5).

            Table 4.

            Domains in which trust in scicomm is perceived as critical by the Delphi panel (Source: Authors, 2022)

            Concerned domains | Consensus emerged in June 2019 | Consensus emerged in November 2020
            Climate change | Yes | Yes
            Role of pharma companies | Yes | Yes
            Environmental issues | Yes | Yes
            Medical sciences | Yes | Yes
            Communication of health risks | Yes | Yes
            Public health issues | Yes | Yes
            Genetically modified organisms | Yes | No
            Topics related with an increased perception of risk | Yes | No

            Note: Following the Delphi method, consensus is considered to be reached over an item if more than 50 per cent of the respondents choose it to be included in the list.

            Table 5.

            Good practices to foster trust in scicomm proposed by the Delphi panel (Source: Authors, 2022)

            Good practices | Consensus emerged in June 2019 | Consensus emerged in November 2020
            Activities in primary school to stimulate curiosity and passion for research | Yes | Yes
            Provide training about communication techniques to scientists and researchers | Yes | Yes
            Joint initiatives between scientific institutions and the media, especially at the local level | Yes | Yes
            Promote scientific literacy in school textbooks | Yes | Yes
            Public events about science | Yes | Yes
            Implement regulations and laws based on scientific evidence | Yes | No
            Understand society concerns and engage the audience as stakeholders | Yes | No
            Organise meetings with researchers and patients to promote trust in medical science | Yes | No
            Make scientific role models more visible | Yes | No
            Promote public science-based debates before taking public health decisions | Yes | No
            Direct encounters with science communicators and scientists | No | Yes
            Science festivals targeted to lay audience and young people | No | Yes

            Note: Following the Delphi method, consensus is considered to be reached over an item if more than 50 per cent of the respondents choose it to be included in the list.

            No new topics reached the 50 per cent threshold in the inquiry conducted during the pandemic, and three topics that were chosen by a majority in 2019 lost relevance during the pandemic (Table 4). Other domains of concern where trust in science communication plays a key role (chosen by between 20 per cent and 50 per cent of the pool of experts in any of the questionnaires) included access to new therapies, animal experimentation, chemistry, economic issues, evolutionary biology, genetically modified organisms, genetics, industrial chemical accidents, nuclear energy, oncology and waste disposal.

            Concerning good practices to foster trust in scicomm (Table 5), the experts expressed a strong consensus over five good practices, and two further practices achieved a consensus during the pandemic that they did not have in the first round of questionnaires. Both emerging practices involve physical encounters with scientists and scientific activities, suggesting that trust in the scientific endeavour can come not only from learning, but also from direct, in-person relations and experiences, even more so in times of ‘social distancing’.

            Other good practices fostering trust in science communication (chosen by between 20 per cent and 50 per cent of the participants in any of the questionnaires) included: a coherent approach for any type of message; extend the research process to include lay audiences; facilitate access to the best scientific evidence and expertise with ‘science media centres’; increase public funds for research to avoid interference by private interests; increase regulations on lobbies to protect scientific institutions such as the World Health Organization (WHO); promote public participation in science within museums; restrict the practice of science communication to journalists with a scientific background; science centres such as Exploratorium (San Francisco) or Science Gallery (Dublin); make scientific conferences accessible to lay audiences; and ‘open access’ initiatives for visiting research laboratories.

            With the third questionnaire, in August 2019, we asked the pool of experts to prioritise the lists of 10 good practices and 10 critical topics over which a consensus of more than half of the experts had been found before the pandemic. The outcome of this prioritisation phase indicated a clear lack of consensus regarding priorities, with low values of Kendall’s coefficient of concordance (0.25 for key topics and 0.13 for good practices). These values are very close to 0, the value associated with total disagreement over priorities, and far from 1, described in the literature as an expression of perfect agreement, or even from 0.7, the minimum threshold for partial agreement (Everitt and Howell, 2005).
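For readers unfamiliar with the statistic, Kendall’s W can be computed directly from the rank sums. The sketch below is a minimal illustration (without correction for tied ranks), not the authors’ analysis code; the rankings shown are hypothetical.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m raters ranking n items.

    rankings: list of m lists; each inner list holds the ranks (1..n)
    that one rater assigned to the n items. W ranges from 0 (total
    disagreement) to 1 (perfect agreement). No tie correction applied.
    """
    m = len(rankings)           # number of raters (experts)
    n = len(rankings[0])        # number of items being ranked
    # Sum of ranks received by each item across all raters
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_rank_sum = m * (n + 1) / 2
    # S: sum of squared deviations of the rank sums from their mean
    s = sum((rs - mean_rank_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three raters in perfect agreement over four items:
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
# Two raters ranking three items in exactly opposite orders:
print(kendalls_w([[1, 2, 3], [3, 2, 1]]))  # 0.0
```

Values such as 0.25 and 0.13 therefore sit near the total-disagreement end of this scale, well below the 0.7 threshold cited for partial agreement.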

            If we consider prioritisation as a proxy for implementation, we could say that even when there is agreement about ‘what’ we can do to promote trust in scicomm (good practices) and ‘where’ this trust can be supported (concerned domains), the diversity of environments, perspectives and contexts represented by the experts prevented agreement on the ‘how’ (which good practice should be implemented with the highest priority). As no consensus over priorities emerged before the COVID-19 pandemic, we did not repeat the ‘prioritisation’ step in 2020, because no ‘strong consensus’ (confirmed in the two separate waves) was possible in this case.


            If we consider the diversity of opinions and perspectives that emerged about key factors promoting trust in scicomm and the risk factors jeopardising it, and compare it with the consensus found within the same panel of experts (about critical topics where trust in scicomm plays a key role, and good practices to foster such trust), we can say that this exploratory, qualitative research confirms earlier critiques in the literature of the limits of one-size-fits-all scicomm activities. In other words, ‘there’s a thousand publics out there that one could address, any of whom has to be understood by the scientists in order to know how to deal with them, how to work with them, engage them, try to benefit them and be benefited by them’ (Mooney, 2010: 10).

            Our research therefore reinforces the need identified by scholars to invite scicomm practitioners and researchers to consider the specific context, community, target audience, culture and cultural history, biases, demographic composition, misinformation and social debate characterising any local science communication ecosystem, which opens several paths for further research focused on public segmentation (Mooney, 2010; Füchslin, 2019; Metag and Schäfer, 2018), strategic communication (Besley et al., 2019) or framing (Druckman and Lupia, 2017).

            The outcome of our research can also be interpreted as a confirmation of the limits of the ‘diffusionist ideology’ of science communication, which ‘fundamentally rests on a notion of communication as transfer’, assuming that ‘the same knowledge in different contexts will result in the same attitudes and eventually in the same type of behavior’ (Bucchi, 2008: 66) and treats knowledge as ‘a fixed, context-independent phenomenon that ought to be taken from the scientific community and delivered, unchanged, to the public’ (Suldovsky, 2016: 419). In line with previous research work, the outcome of this Delphi analysis seems to challenge the diffusionist model, suggesting that each communication act, in order to be effective and fulfil its purpose, should be adapted when moving from one context to another.


            The most relevant outcome of this work is the information collected from scicomm experts before the COVID-19 pandemic. Comparing this information with the responses collected during the ongoing pandemic from the same group of concerned stakeholders provides evidence that topics related to health and the environment were considered critical and controversial subjects for trust in scicomm even before the pandemic. The pandemic cannot therefore be considered a single ‘triggering event’ for the ongoing scientific controversies.

            Within the limits and caveats of any exploratory and qualitative research, our findings identify a set of critical topics or scientific domains where the bond of trust in science communication plays a key role. Such topics include vaccines and the role of pharmaceutical companies, climate change and environmental issues, medical sciences, communication of health risks and public health issues. The result has an operational value for scicomm practitioners and/or policy actors working to trigger constructive engagement, dialogue and participation around these critical topics. Our contribution could also be useful for scicomm scholars interested in further analysis of proactive and pre-emptive ‘pre-bunking’ initiatives focused on the same set of topics (Basol et al., 2021; Lewandowsky and Van der Linden, 2021).

            The list of best practices to promote trust in scicomm revealed a shared perception of the effectiveness of science communication activities based on direct interactions with targeted audiences, and the consensus around this list became even more meaningful after the same pool of experts confirmed it during the COVID-19 pandemic. The focus among best practices was on activities for schools, training for scientists and researchers, joint initiatives at the local level and public science events. During the pandemic, science-based law implementation, the visibility of scientific role models and public debates lost relevance among the experts’ choices. At the same time, direct engagement activities such as ‘direct encounters with science communicators and scientists’ and ‘science festivals targeted to lay audiences and young people’ found a consensus in 2020 that had not been reached before the pandemic.

            This orientation of the pool of experts towards ‘hands on’ activities (where science is experienced and not just learned) is another relevant result for scicomm practitioners looking at best practices for their activities, and for researchers interested in undertaking further research on the effectiveness of the experiences highlighted by this exploratory work.


            The consensus that emerged on a defined set of topics considered critical for trust in scicomm reveals a complexity which does not contradict the high level of general trust in science and scientists recorded in polls over recent decades. Those polls confirm a consistent trend in which, in the United States, ‘confidence in the other highly ranked institutions has not been as stable as it has been for science’ (Krause et al., 2019: 2), and ‘nine in ten EU citizens think that the overall influence of science and technology is positive’ (Eurostat, 2021: 90).

            Within this complexity frame, where trust in science and controversies on mediated science coexist in the same ‘scicomm ecosystem’, we need further research to better understand perceptions of a ‘crisis of public mistrust of science’ (Wynne, 2006: 211), ‘crisis in science literacy and communication’ (Smol, 2018: 952) and an ‘anti-science crisis’ (Medvecky and Leach, 2019: 103) reported by scholars even before the pandemic.

            Such perceptions may be reconsidered as a potential cognitive bias effect induced by the increased space given to misinformation, disinformation, anti-science and pseudoscience in mainstream traditional media (Zarocostas, 2020) and digital media (Xiao et al., 2021), resulting in what the WHO defined as an ‘infodemic’ (Tangcharoensathien et al., 2020). This hypothesis deserves more in-depth and specific research, with different methodologies such as discourse analysis of semi-structured interviews with concerned stakeholders, focused on the topics highlighted as ‘critical’ by our panel of experts.

            The problematisation of the diversity expressed by experts for the lists where a consensus was not found (positive factors and potential risks for trust in scicomm) may encourage scholars to develop the analysis of the trust relationship with scicomm in local contexts and with specific audiences, using the approach suggested by Scheufele and Krause (2019: 1), who envisioned ‘more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences’, where empirical work is scant.

            The noted diversity of feedback, coming from the same pool of experts and consistent over time before and during the pandemic, can also raise meaningful new research questions aimed at ‘locating the differences’: whether such diversity is a context-dependent variable leading different experts to multiple ‘local certainties’; an expression of uncertainty among experts sharing the same vision of a well-known problem; or a symptom of a fuzzy understanding of a problem that is still out of focus, because of different assumptions and oversimplifications about what ‘trust in scicomm’ is, the nature of such trust and the way it is expressed on a social level.

            If further research confirms the latter hypothesis, this fuzzy understanding of trust in scicomm (resulting in implementation problems for science policymakers and scicomm practitioners) will require an additional theoretical and conceptual effort. In the ongoing pandemic crisis, mistrust in scientific information communicated to non-specialised audiences was reported as the direct cause of ‘a rampant increase in the number of coronavirus cases and deaths’ (Nasr, 2021: 2) and therefore reaching a common ground of ‘understanding of trust – and doubt – as contextual, relational and fluctuating’ (Irwin and Horst, 2016: 4) can be a promising research path and a life-saving epistemological challenge.

            Declarations and conflicts of interest

            Research ethics statement

            The authors conducted the research reported in this article in accordance with the Code of ethics and integrity in research, development and creation of spin-offs from the Université Libre de Bruxelles.

            Consent for publication statement

            The authors declare that research participants’ informed consent to publication of findings – including photos, videos and any personal or identifiable information – was secured prior to publication.

            Conflicts of interest statement

            The authors declare no conflicts of interest with this work. All efforts have been made to sufficiently anonymise the authors during the peer review of this article. The authors declare no further conflicts with this article.


            1. Basol M, Roozenbeek J, Berriche M, Uenal F, McClanahan WP, van der Linden S. 2021. Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation. Big Data and Society. Vol. 8(1):1–18. [Cross Ref]

            2. Battiston P, Kashyap R, Rotondi V. 2021. Reliance on scientists and experts during an epidemic: Evidence from the COVID-19 outbreak in Italy. SSM – Population Health. Vol. 13:100721. [Cross Ref]

            3. Bauer MW. 2009. The evolution of public understanding of science: Discourse and comparative evidence. Science, Technology and Society. Vol. 14(2):221–40. [Cross Ref]

            4. Besley JC, O’Hara K, Dudo A. 2019. Strategic science communication as planned behavior: Understanding scientists’ willingness to choose specific tactics. PLOS ONE. Vol. 14(10):e0224039. [Cross Ref]

            5. Bodmer WF. 1985. The Public Understanding of Science. The Royal Society. Accessed 26 February 2022 https://royalsociety.org/~/media/Royal_Society_Content/policy/publications/1985/10700.pdf

            6. Bronstein MV, Vinogradov S. 2021. Education alone is insufficient to combat online medical misinformation. EMBO Reports. Vol. 22(3):e52282. [Cross Ref]

            7. Bucchi M. 2008. Of deficits, deviations and dialogues: Theories of public communication of science. In: Bucchi M, Trench B. Handbook of Public Communication of Science and Technology. London: Routledge. p. 57–76

            8. Bucchi M, Trench B. 2021. Introduction: Science communication as the social conversation around science. In: Bucchi M, Trench B. Handbook of Public Communication of Science and Technology. London: Routledge. p. 1–13

            9. Chan MPS, Jones C, Albarracín D. 2017. Countering false beliefs: An analysis of the evidence and recommendations of best practices for the retraction and correction of scientific misinformation. In: Jamieson KH, Kahan DM, Scheufele DA. The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press. p. 341–50. [Cross Ref]

            10. Cortassa C. 2016. In science communication, why does the idea of a public deficit always return?: The eternal recurrence of the public deficit. Public Understanding of Science. Vol. 25(4):447–59. [Cross Ref]

            11. Curtice J, Clery E, Perry J, Phillips M, Rahim N. 2019. British Social Attitudes: The 36th report. London: The National Centre for Social Research. Accessed 26 February 2022 https://www.bsa.natcen.ac.uk/media/39363/bsa_36.pdf

            12. Davies SR, Horst M. 2016. Science Communication: Culture, identity and citizenship. London: Palgrave Macmillan.

            13. Druckman JN, Lupia A. 2017. Using frames to make scientific communication more effective. In: Jamieson KH, Kahan DM, Scheufele DA. The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press. p. 351–60. [Cross Ref]

            14. Eurostat. 1993. Europeans, Science and Technology: Public understanding and attitudes. EUR 15461 EN. European Commission. Accessed 26 February 2022 https://op.europa.eu/en/publication-detail/-/publication/634eb2b8-aaab-4a35-b2be-772d4abb7e01

            15. Eurostat. 2001. Special Eurobarometer 154: Europeans, science and technology. European Commission. Accessed 26 February 2022 https://data.europa.eu/data/datasets/s209_55_2_ebs154

            16. Eurostat. 2005. Special Eurobarometer 224: Europeans, science & technology. European Commission. Accessed 26 February 2022 https://data.europa.eu/data/datasets/s447_63_1_ebs224

            17. Eurostat. 2010. Special Eurobarometer 340: Science and technology. European Commission. Accessed 26 February 2022 https://data.europa.eu/data/datasets/s806_73_1_ebs340

            18. Eurostat. 2013. Special Eurobarometer 401: Responsible Research and Innovation (RRI), science and technology. European Commission. Accessed 26 February 2022 https://data.europa.eu/data/datasets/s1096_79_2_401

            19. Eurostat. 2021. Special Eurobarometer 516: European citizens’ knowledge and attitudes towards science and technology. European Commission. Accessed 26 February 2022 https://europa.eu/eurobarometer/surveys/detail/2237

            20. Everitt BS, Howell D. 2005. Encyclopedia of Statistics in Behavioral Science. Vol. 4. Chichester: Wiley. [Cross Ref]

            21. Field AP. 2005. Kendall’s Coefficient of Concordance. In: Everitt BS, Howell DC. Encyclopedia of Statistics in Behavioral Science. Chichester: Wiley. [Cross Ref]

            22. Füchslin T. 2019. Science communication scholars use more and more segmentation analyses: Can we take them to the next level? Public Understanding of Science. Vol. 28(7):854–64. [Cross Ref]

            23. Funk C, Tyson A, Kennedy B, Johnson C. 2020. Science and scientists held in high esteem across global publics. Pew Research Center. Accessed 26 February 2022 https://www.pewresearch.org/science/wp-content/uploads/sites/16/2020/09/PS_2020.09.29_global-science_REPORT.pdf

            24. Hanitzsch T, Berganza R. 2012. Explaining journalists’ trust in public institutions across 20 countries: Media freedom, corruption, and ownership matter most. Journal of Communication. Vol. 62(5):794–814. [Cross Ref]

            25. Hardwig J. 1991. The role of trust in knowledge. The Journal of Philosophy. Vol. 88(12):693–708. [Cross Ref]

            26. Hendriks F, Kienhues D, Bromme R. 2015. Measuring laypeople’s trust in experts in a digital age: The Muenster Epistemic Trustworthiness Inventory (METI). PLOS ONE. Vol. 10(10):1–20. [Cross Ref]

            27. Hendriks F, Kienhues D, Bromme R. 2016a. Trust in science and the science of trust. In: Blöbaum B. Trust and Communication in a Digitized World: Models and concepts of trust research. Cham: Springer. p. 143–59. [Cross Ref]

            28. Hendriks F, Kienhues D, Bromme R. 2016b. Disclose your flaws!: Admission positively affects the perceived trustworthiness of an expert science blogger. Studies in Communication Sciences. Vol. 16(2):124–31. [Cross Ref]

            29. Hendriks F, Kienhues D, Bromme R. 2016c. Evoking vigilance: Would you (dis)trust a scientist who discusses ethical implications of research in a science blog? Public Understanding of Science. Vol. 25(8):992–1008. [Cross Ref]

            30. Hendriks F, Kienhues D, Bromme R. 2020. Replication crisis = trust crisis?: The effect of successful vs failed replications on laypeople’s trust in researchers and research. Public Understanding of Science. Vol. 29(3):270–88. [Cross Ref]

            31. Huber B, Barnidge M, Gil de Zúñiga H, Liu J. 2019. Fostering public trust in science: The role of social media. Public Understanding of Science. Vol. 28(7):759–77. [Cross Ref]

            32. Ipsos MORI. 2011. Public Attitudes to Science 2011: Main report. Accessed 26 February 2022 https://www.ipsos.com/sites/default/files/migrations/en-uk/files/Assets/Docs/Polls/sri-pas-2011-main-report.pdf

            33. Ipsos MORI. 2014. Public Attitudes to Science 2014. Accessed 26 February 2022 https://www.ipsos.com/sites/default/files/migrations/en-uk/files/Assets/Docs/Polls/pas-2014-main-report.pdf

            34. Ipsos MORI. 2018. Ipsos MORI Veracity Index 2018. Accessed 26 February 2022 https://www.ipsos.com/sites/default/files/ct/news/documents/2018-11/veracity_index_2018_v1_161118_public.pdf

            35. Ipsos MORI. 2019. Global Trust in Professions: Who do global citizens trust? Accessed 26 February 2022 https://www.ipsos.com/sites/default/files/ct/news/documents/2019-09/global-trust-in-professions-trust-worthiness-index-2019.pdf

            36. Irwin A, Horst M. 2016. Communicating trust and trusting science communication – some critical remarks. Journal of Science Communication. Vol. 15(6):1–5. [Cross Ref]

            37. Jamieson KH, Kahan D, Scheufele DA. 2017. The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press.

            38. Justwan F, Bakker R, Berejikian JD. 2017. Measuring social trust and trusting the measure. The Social Science Journal. Vol. 55(2):149–59. [Cross Ref]

            39. Kahan DM. 2017. On the sources of ordinary science knowledge and extraordinary science ignorance. In: Jamieson KH, Kahan DM, Scheufele DA. The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press. p. 35–50. Accessed 1 March 2022 https://www.aau.edu/sites/default/files/AAU-Files/Constituent%20Meetings/SRO%20Meetings/SRO%202018/Dan%20Kahan.pdf

            40. Ko H. 2016. In science communication, why does the idea of a public deficit always return?: How do the shifting information flows in healthcare affect the deficit model of science communication? Public Understanding of Science. Vol. 25(4):427–32. [Cross Ref]

            41. Krause NM, Brossard D, Scheufele DA, Xenos MA, Franke K. 2019. Trends – Americans’ trust in science and scientists. Public Opinion Quarterly. Vol. 83(4):817–36. [Cross Ref]

            42. Krause NM, Scheufele DA, Freiling I, Brossard D. 2021. The trust fallacy: Scientists’ search for public pathologies is unhealthy, unhelpful, and ultimately unscientific. American Scientist. Vol. 109(4):226–32. Accessed 26 February 2022 https://link.gale.com/apps/doc/A669437356/AONE?u=anon~c4bde008&sid=bookmark-AONE&xid=1fca8211

            43. Larson HJ, Clarke RM, Jarrett C, Eckersberger E, Levine Z, Schulz WS, Paterson P. 2018. Measuring trust in vaccination: A systematic review. Human Vaccines and Immunotherapeutics. Vol. 14(7):1599–609. [Cross Ref]

            44. Lewandowsky S, Van der Linden S. 2021. Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology. Vol. 32(2):348–84. [Cross Ref]

            45. Li Y-Y, Tsai C-Y. 2019. The science, the paranormal, cohorts and cognitive polyphasia: The authority of science in Taiwan. In: Bauer MW, Pansegrau P, Shukla R. The Cultural Authority of Science: Comparing across Europe, Asia, Africa and the Americas. London: Routledge. p. 190–209

            46. Marchais-Roubelat A, Roubelat F. 2011. The Delphi method as a ritual: Inquiring the Delphic Oracle. Technological Forecasting and Social Change. Vol. 78(9):1491–9. [Cross Ref]

            47. Medvecky F, Leach J. 2019. Is science communication ethical?: A question of justice. In: Medvecky F, Leach J. An Ethics of Science Communication. Cham: Palgrave Pivot. p. 103–11

            48. Metag J, Schäfer MS. 2018. Audience segments in environmental and science communication: Recent findings and future perspectives. Environmental Communication. Vol. 12(8):995–1004. [Cross Ref]

            49. Metzger MJ, Flanagin AJ. 2013. Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics. Vol. 59:210–20. [Cross Ref]

            50. Meyer G. 2016. In science communication, why does the idea of a public deficit always return? Public Understanding of Science. Vol. 25(4):433–46. [Cross Ref]

            51. Miller S. 2001. Public understanding of science at the crossroads. Public Understanding of Science. Vol. 10(1):115–20. [Cross Ref]

            52. Mooney CC. 2010. Do Scientists Understand the Public? Cambridge, MA: American Academy of Arts and Sciences. Accessed 26 February 2022 https://www.amacad.org/sites/default/files/academy/pdfs/scientistsUnderstand.pdf

            53. Nasr N. 2021. Overcoming the discourse of science mistrust: How science education can be used to develop competent consumers and communicators of science information. Cultural Studies of Science Education. Vol. 16:345–56. [Cross Ref]

            54. National Science Board. 2018. 2018 Science & Engineering Indicators. National Science Foundation. Accessed 26 February 2022 https://www.nsf.gov/statistics/2018/nsb20181/assets/nsb20181.pdf

            55. National Science Board. 2020a. Science and Technology: Public attitudes, knowledge, and interest. National Science Foundation. Accessed 26 February 2022 https://files.eric.ed.gov/fulltext/ED612113.pdf

            56. National Science Board. 2020b. The State of U.S. Science and Engineering. National Science Foundation. Accessed 26 February 2022 https://ncses.nsf.gov/pubs/nsb20201/assets/nsb20201.pdf

            57. NORC (National Opinion Research Center). 2013. Trends in Public Attitudes about Confidence in Institutions. NORC General Social Survey 2012. Accessed 26 February 2022 https://www.norc.org/PDFs/GSS%20Reports/Trends%20in%20Confidence%20Institutions_Final.pdf

            58. Okoli C, Pawlowski SD. 2004. The Delphi method as a research tool: An example, design considerations and applications. Information & Management. Vol. 42(1):15–29. [Cross Ref]

            59. Oreskes N. 2019. Why Trust Science? Princeton, NJ: Princeton University Press.

            60. Pew Research Center. 2015. Public and Scientists’ Views on Science and Society. Accessed 26 February 2022 https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2015/01/PI_ScienceandSociety_Report_012915.pdf

            61. Raps BG. 2016. In science communication, why does the idea of a public deficit always return? Public Understanding of Science. Vol. 25(4):460–4. [Cross Ref]

            62. Scheufele DA, Krause NM. 2019. Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences. Vol. 116(16):7662–9. [Cross Ref]

            63. Short DB. 2013. The public understanding of science: 30 years of the Bodmer report. School Science Review. Vol. 95(350):39–44. Accessed 1 March 2022 https://www.researchgate.net/publication/255712425_The_public_understanding_of_science_30_years_of_the_Bodmer_report

            64. Skulmoski GJ, Hartman FT, Krahn J. 2007. The Delphi method for graduate research. Journal of Information Technology Education: Research. Vol. 6:1–21. [Cross Ref]

            65. Siegrist M. 2021. Trust and risk perception: A critical review of the literature. Risk Analysis. Vol. 41(3):480–90. [Cross Ref]

            66. Siegrist M, Hartmann C. 2017. Overcoming the challenges of communicating uncertainties across national contexts. In: Jamieson KH, Kahan DM, Scheufele DA (eds). The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press. p. 445–54. [Cross Ref]

            67. Smol JP. 2018. A crisis in science literacy and communication: Does reluctance to engage the public make academic scientists complicit? Facets. Vol. 3(1):952–7. [Cross Ref]

            68. Suldovsky B. 2016. In science communication, why does the idea of the public deficit always return?: Exploring key influences. Public Understanding of Science. Vol. 25(4):415–26. [Cross Ref]

            69. Tangcharoensathien V, Calleja N, Nguyen T, Purnat T, D’Agostino M, Garcia-Saiso S, Landry M, Rashidian A, Hamilton C, AbdAllah A, Ghiga I, Hill A, Hougendobler D, Van Andel J, Nunn M, Brooks I, Sacco PL, De Domenico M, Mai P, Gruzd A, Alaphilippe A, Briand S. 2020. Framework for managing the COVID-19 infodemic: Methods and results of an online, crowdsourced WHO technical consultation. Journal of Medical Internet Research. Vol. 22(6):e19659. [Cross Ref]

            70. Trench B. 2008. Towards an analytical framework of science communication models. In: Cheng D, Claessens M, Gascoigne NRJ, Metcalfe J, Schiele B, Shi S (eds). Communicating Science in Social Contexts. Dordrecht: Springer. p. 119–35.

            71. Weingart P, Guenther L. 2016. Science communication and the issue of trust. Journal of Science Communication. Vol. 15(5):1–11. [Cross Ref]

            72. Wellcome Trust. 2018. How does the world feel about science and health? Wellcome Global Monitor. Accessed 26 February 2022 https://wellcome.org/sites/default/files/wellcome-global-monitor-2018.pdf

            73. Wellcome Trust. 2020. How Covid-19 affected people’s lives and their views about science. Wellcome Global Monitor. Accessed 26 February 2022 https://cms.wellcome.org/sites/default/files/2021-11/Wellcome-Global-Monitor-Covid.pdf

            74. Wynne B. 2006. Public engagement as a means of restoring public trust in science – Hitting the notes, but missing the music? Community Genetics. Vol. 9(3):211–20. [Cross Ref]

            75. Xiao X, Borah P, Su Y. 2021. The dangers of blind trust: Examining the interplay among social media news use, misinformation identification, and news trust on conspiracy beliefs. Public Understanding of Science. Vol. 30(8):977–92. [Cross Ref]

            76. Yeo SK, Brossard D. 2017. The (changing) nature of scientist–media interactions. In: Jamieson KH, Kahan DM, Scheufele DA (eds). The Oxford Handbook of the Science of Science Communication. Oxford: Oxford University Press. p. 261–72.

            77. Zarocostas J. 2020. How to fight an infodemic. The Lancet. Vol. 395(10225):676. [Cross Ref]

            Author and article information

            Research for All
            UCL Press (UK)
            05 April 2022
            Volume 6, Issue 1: e06109
            [1] Université Libre de Bruxelles, Belgium
            Copyright 2022, Carlo Gubitosa and David Domingo

            This is an open-access article distributed under the terms of the Creative Commons Attribution Licence (CC BY) 4.0 https://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            Received: 15 June 2021
            Accepted: 26 January 2022
            Page count
            Figures: 1, Tables: 5, References: 77, Pages: 17

            Assessment, Evaluation & Research methods, Education & Public policy, Educational research & Statistics
            scicomm, COVID-19, Delphi method, trust in science communication, science communication