PLOS ONE. 2021 Feb 16;16(2):e0239247. doi: 10.1371/journal.pone.0239247

Trust and transparency in times of crisis: Results from an online survey during the first wave (April 2020) of the COVID-19 epidemic in the UK

Luisa Enria 1, Naomi Waterlow 2, Nina Trivedy Rogers 3, Hannah Brindle 4, Sham Lal 4, Rosalind M Eggo 2, Shelley Lees 1,#, Chrissy h Roberts 4,*,#
Editor: Adriano Gianmaria Duse
PMCID: PMC7886216  PMID: 33591985

Abstract

Background

The success of a government’s COVID-19 control strategy relies on public trust and broad acceptance of response measures. We investigated public perceptions of the UK government’s COVID-19 response, focusing on the relationship between trust and perceived transparency, during the first wave (April 2020) of the COVID-19 pandemic in the United Kingdom.

Methods

Anonymous survey data were collected (2020-04-06 to 2020-04-22) from 9,322 respondents aged 20+ using an online questionnaire shared primarily through Facebook. We took an embedded mixed-methods approach to data analysis. Missing data were imputed via multiple imputation. Binomial and multinomial logistic regression were used to detect associations between demographic characteristics and perceptions or opinions of the UK government’s response to COVID-19. Structural topic modelling (STM) and qualitative thematic coding of subsets of responses were then used to perform a thematic analysis of topics that were of interest to key demographic groups.

Results

Most respondents (95.1%) supported government enforcement of behaviour change. While 52.1% of respondents thought the government was making good decisions, differences were apparent across demographic groups; for example, respondents from Scotland had lower odds of responding positively than respondents in London. Higher educational levels were associated with decreasing odds of having a positive opinion of the government response, and decreasing household income was associated with decreasingly positive opinions. Of respondents who thought the government was not making good decisions, 60% believed the economy was being prioritised over people and their health. Positive views on government decision-making were associated with positive views on government transparency about the COVID-19 response. Qualitative analysis of perceptions of government transparency highlighted five key themes: (1) the justification of opacity by the condition of crisis, (2) generalised mistrust of politics, (3) concerns about the role of scientific evidence, (4) the quality of government communication and (5) questions about political decision-making processes.

Conclusion

Our study suggests that trust is not homogeneous across communities, and that generalised mistrust, concerns about the transparent use and communication of evidence, and insights into decision-making processes can all affect perceptions of the government’s pandemic response. We recommend targeted community engagement, tailored to the experiences of different groups, and a new focus on accountability and openness around how decisions are made in the UK’s response to the COVID-19 pandemic.

Introduction

In response to the pandemic spread of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), with cases first reported in Wuhan in China’s Hubei province in December 2019, governments across the world introduced a diverse range of control measures, varying in stringency and timing of implementation [1]. Interventions have spanned a spectrum from predominantly voluntary guidance (e.g. Sweden) to broad-ranging and near-complete societal lockdowns in some regions of China.

The relative efficacy of different policy decisions has been, and continues to be, debated amongst scientists, decision-makers and the public. Previous epidemics across the world have shown that a key component for the success of any outbreak response measure is the extent of public acceptance of its legitimacy [2–4]. Trust is crucial, but it is also contextual: citizens’ experiences of specific interventions and their perceptions of the institutions delivering them are shaped by social, political and economic structures and historical trajectories [5, 6]. At the same time, trust is not static: it can be built or lost over the course of the response. Research during the 2014–2016 West African Ebola outbreak, for example, showed how local responders employed varied “technologies of trust”, such as openness, accountability and reflexivity, to respond to on-the-ground realities and build confidence in the measures implemented to contain the epidemic [7]. As risk communication and community engagement become increasingly recognised as central to global epidemic response strategies, understanding the dynamics of (mis)trust and the factors that influence the legitimacy of various public health measures is key to developing effective interventions [8–10]. In the current Coronavirus disease 2019 (COVID-19) pandemic, as governments have requested, and in some instances strictly enforced, significant behavioural change and sacrifices in the midst of lockdowns and economic slowdown, building trust and buy-in from citizens has been highlighted as a particular challenge [11].

The UK registered its first case of COVID-19 on the 29th January 2020, and in the two months that followed the government implemented a number of increasingly stringent measures, initially delaying lockdown in favour of light-touch recommendations that the population should adopt social distancing and self-quarantine if experiencing symptoms. By 16th March, Prime Minister Boris Johnson advised against ‘non-essential travel’ and contact with others, whilst adults over the age of 70 and those with specific pre-existing conditions received recommendations to ‘shield’ for at least 12 weeks. The UK entered a nation-wide lockdown on 23rd March. Two days later the Coronavirus Act 2020 was passed, giving the government powers that prohibited gatherings and specified police powers to detain and fine people contravening the rules of lockdown. Our survey therefore captured the first period following the implementation of stringent measures (April 2020) and the ‘acute’ phase as numbers of infections and deaths rose steadily.

Whilst the importance of trust in an effective outbreak response is widely recognised, the determinants of (mis)trust in epidemic response measures are less well understood. In particular, qualitative research in recent epidemics has shown that we need to understand the dynamics of trust as they vary by socio-political context and the specific outbreak [6, 7, 10, 12].

In this paper we explore a particular aspect of the dynamics of public trust in the UK government’s response to COVID-19, namely the relationship between perspectives on the transparency of information made available to the public and participants’ evaluations of the government’s pandemic response. We expand existing qualitative work in this field by using an embedded mixed-methods approach to data analysis that combines statistical analysis, structural topic modelling (STM) and qualitative thematic coding. The paper explores how perceptions of UK government transparency (or lack thereof) influence broader narratives of trust in institutional responses to the COVID-19 pandemic.

Methods

Research design

The project was designed by a multidisciplinary team, including anthropologists [13]. This meant embedding qualitative research questions and analysis in a quantitative survey. This multidisciplinary and mixed-methods approach allowed us to combine an understanding of general trends in participants’ attitudes of trust and perceptions of transparency with qualitative questions about process, allowing participants to expand on their reasons. Quantitative and qualitative approaches were not used independently but rather designed to complement and build on each other. For example, decisions about the coding of qualitative responses, as discussed further below, were directed by a first round of statistical analysis and machine-learning-supported structural topic modelling (STM). This enabled us to supplement quantitative data with qualitative explanations whilst also triangulating between datasets.

Online survey

Anonymous survey data from UK residents were collected online between 2020-04-06 and 2020-04-22 using an ODK XLSForm (https://getodk.github.io/xforms-spec/) deployed on Enketo smart paper (https://enketo.org/) via ODK Aggregate v.2.0.3 (https://github.com/getodk/aggregate). Form level encryption and end-to-end encryption of data transfer were implemented on all submissions. The survey is included in the supporting information as both PDF (S1 File) and XLSForm (S2 File) formats.

The survey included 49 questions which covered a broad range of topics including (1) Demographics, (2) Health and health behaviours, (3) Adherence to COVID-19 control measures, (4) Information sources used to learn about COVID-19, (5) Trust in various information sources, government and government decision-making, (6) Rumours and misinformation, (7) Contact & communication during COVID-19 and (8) Fear and isolation.

The survey was distributed using Facebook’s premium “Boost Post” feature. A “boosted” post functions as an advert which can be targeted at specific demographics. We boosted details of the survey and its URL to a target audience of 113,280 Facebook users aged 13–65+ years and living in England, Wales, Scotland and Northern Ireland. The survey was further distributed using a ‘daisy-chaining’ approach in which respondents were asked to share and encourage onward sharing of the survey’s URL among friends & colleagues. A number of faith institutions, schools and special interest groups were also contacted directly for assistance in dissemination of the URL.

Trust and transparency

In this paper we focus on five survey questions that, taken together, allowed us to explore the relationship between trust in the UK government’s COVID-19 response and perceptions of transparency.

Of these questions, four broadly assessed self-reported trust quantitatively, including assessments of the response and perceptions about prioritisation and the acceptability of enforcement of pandemic measures. To gain insights into self-reported levels of trust in the UK government’s epidemic response, participants were asked Q1: “Do you think the government is making good decisions about how to control COVID-19?” (options “Yes” or “No”). To identify how they viewed the government’s prioritisation at the start of the pandemic, participants were asked Q2: “Do you think that the government cares more about people and their health or the economy?” (options “Don’t know”, “They care more about people and their health”, “They care more about the economy” and “About the same”). Respondents were asked Q3: “Do you think that it is acceptable for governments to force some people to change their behaviours in order to control COVID-19?” (options “Yes” or “No”).

In order to explore the interplay between trust in the response and perceptions of transparency, we asked Q4: “Do you think the government tells you the whole truth about coronavirus and COVID-19?” (options “Always”, “Mostly”, “Sometimes”, “Almost never”, “Never” and “I don’t know”). Any participant who did not reply “Always” to the latter question was then invited to answer Q5: “Briefly describe what it is that you think the government is not being fully truthful about” in an open-ended text response.

Ethics, confidentiality & participant wellbeing

The study was approved by the London School of Hygiene & Tropical Medicine observational research ethics committee (Ref: 21846). During the survey, participants were asked to provide (voluntarily) the first two letters of their UK postcode, thus allowing the study team to localise respondents to broad geographical “postcode areas”. These areas cover on average several hundred thousand individuals. All data were fully anonymous and the study team had no means by which they could identify individual respondents. All participants provided informed consent to participate in the study by ticking a box on the survey web-form after first having read a short passage of information about the study. The LSHTM ethics committee approved a minimal information form, but links were also provided to a project website which included more detailed background information and the study protocol. A copy of the informed consent text is included in the supporting information (S1 File). All questions in the survey were optional (except for age and the number of people in the household), meaning that participants could skip questions if they chose to.

Statistical analysis

All analyses were performed in R v4.0, and the R scripts required to reproduce the statistical methods are included in the supporting information (S3 File). Using the mice package in R, we imputed missing data by performing multiple imputation by chained equations, completing 20 imputed datasets for all relevant fields and pooling results of the 20 separate analyses using Rubin’s rules. All reported percentages were calculated from valid data of the non-imputed dataset. We used logistic regression (binomial glm) to test for associations between demographic factors (age, education, gender, geographical region & income) and data on participants’ opinions of the quality of UK government decision-making. Chi-squared analysis was used to test whether there was any significant association between participants’ perceptions of the government’s truthfulness and their opinions on the quality of government decision-making. To estimate the magnitude of these effects we then re-ran the above regression analysis, including the truthfulness variable as an additional explanatory factor. The ‘nnet’ R package was then used to apply a multinomial log-linear model via neural networks to detect factors associated with opinions on government response priority (which had three possible outcome classes). These analyses were corrected for all demographic covariates.
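
As an illustration only, the sketch below shows how such a pipeline could be run in R using the mice and nnet packages; the data frame and variable names (survey_data, good_decisions, truthfulness, priority and the demographic fields) are hypothetical placeholders, and the sketch is not the authors’ S3 File script.

    # Minimal sketch, assuming hypothetical column names; not the published S3 File script
    library(mice)   # multiple imputation by chained equations
    library(nnet)   # multinomial log-linear models

    # Create 20 imputed datasets for the analysis variables
    imp <- mice(survey_data, m = 20, seed = 2020)

    # Fit the binomial logistic regression in each imputed dataset and
    # pool the 20 sets of estimates using Rubin's rules
    fit_binom <- with(imp, glm(good_decisions ~ age + education + gender + region + income,
                               family = binomial))
    pooled <- pool(fit_binom)
    summary(pooled)                 # pooled log-odds, standard errors and p-values
    exp(summary(pooled)$estimate)   # convert pooled coefficients to odds ratios

    # Chi-squared test of association between perceived truthfulness and
    # opinion of government decision-making (non-imputed data)
    chisq.test(table(survey_data$truthfulness, survey_data$good_decisions))

    # Multinomial log-linear model for the 'response priority' outcome,
    # adjusted for the same demographic covariates
    fit_multi <- multinom(priority ~ age + education + gender + region + income,
                          data = survey_data)
    summary(fit_multi)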

Topic modelling and analysis of qualitative data

Participants who thought that the government was not being fully truthful about coronavirus and COVID-19 were asked to “Briefly describe what it is that [they thought] the government [was] not being fully truthful about”. We applied a structural topic modelling (STM) [14] approach to identify key topics in these open-ended text responses. STM employs machine learning (ML) to explore open-ended survey questions in a structured and reproducible way [14, 15], with the goal of identifying topics and perspectives in free-text data. Unlike more conventional topic modelling, STM makes it possible to link topic models to metadata [14, 15] and, by doing so, to identify groups of responses with similar topic content. This analysis was performed using the ‘stm’ package [14] for R. The text data were processed into a corpus, and numbers, common punctuation, capitalisation and stop-words (such as “I”, “me”, “that’s” and “because”) were removed. Only words which appeared in 15 or more responses to the survey were retained. The number of topics was then determined by looking for a balance between semantic coherence (clear and understandable topics) and exclusivity (lack of cross-over between topics). The topic modelling was then run and the resulting topics were labelled manually by assessing both key words used within topics and representative quotes. The number of topics and the topic labels were the main subjective parts of the STM. Expected text proportions (ETP) were defined as the proportion of the total corpus which related to each topic.
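
A minimal sketch of this STM workflow, using the stm package for R, is shown below; the column names (free_text and the metadata covariates), the prevalence formula and the exact lower.thresh value are illustrative assumptions rather than the settings used in the published analysis.

    # Illustrative STM workflow with assumed column names; not the exact published settings
    library(stm)

    # Pre-process open-text responses: lower-case, strip numbers, punctuation and stop-words
    processed <- textProcessor(documents = survey_data$free_text,
                               metadata  = survey_data)

    # Drop infrequent terms; lower.thresh = 14 keeps words appearing in 15+ documents
    out <- prepDocuments(processed$documents, processed$vocab, processed$meta,
                         lower.thresh = 14)

    # Compare candidate topic numbers on semantic coherence and exclusivity
    diagnostics <- searchK(out$documents, out$vocab, K = 5:10)
    plot(diagnostics)

    # Fit the chosen 7-topic model, linking topic prevalence to respondent metadata
    stm_fit <- stm(documents = out$documents, vocab = out$vocab, K = 7,
                   prevalence = ~ education + region, data = out$meta,
                   init.type = "Spectral")

    labelTopics(stm_fit)              # key words per topic, to support manual labelling
    plot(stm_fit, type = "summary")   # expected proportion of the corpus in each topic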

Survey submissions with no response to the open text question (mostly from those who felt that the government was fully truthful, a group who were not asked to comment in open-text) were excluded from this analysis.

Qualitative analysis

The qualitative data analysis focused on responses to the question analysed through STM, namely Q5: “Briefly describe what it is that [they thought] the government [was] not being fully truthful about”. As noted above, this question was included to understand perceptions of transparency and our analysis focused on articulations of (mis)trust within these responses. In order to do so, we chose three topics from the STM analysis that we felt would give qualitative insights into the relationship between trust and transparency and which also closely related to our other three quantitative questions that focused on trust as defined above (Q1, Q2, Q3). As such, we conducted in-depth thematic coding on topics for responses to Q5 that elaborated on perceptions of transparency itself (T1: extent of truth) and perceptions of the government’s implementation and prioritisation (T5: implementation and T7: rationale/politics). In order to further tease out the relationship between trust and transparency, we focused analysis on responses from the social groups that were found to have been statistically most and least likely to positively evaluate the government’s decisions on COVID-19.

Thematic coding was conducted and individual codes, as well as consistency between them, were triangulated with the results of the STM modelling. Our experienced research team conferred regularly to check, refine and agree on the final codes.

Results

Quantitative analysis

The analysis was based on data provided by 9,322 respondents aged 20 years and over. A post-hoc power calculation for logistic regression was used to determine that the sample size of 9,322 gave us 98% power to detect odds ratios greater than 1.1 at alpha = 0.05 for any explanatory classes of frequency 0.1 and above. No appropriate method for power calculation in multinomial logistic regression was available, but we expect that the large sample size and the limited number (three) of outcome classes in the multinomial analysis were sufficient to adequately power the study to detect small effects in all but the rarest explanatory classes (for instance, Black, Asian and Minority Ethnic (BAME) groups), a limitation which we discuss below.
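
The method used for the post-hoc power calculation is not specified here; one simple way to approximate such a calculation is by simulation, as in the sketch below. The baseline response probability and the number of replicates are assumptions, so the result will not necessarily reproduce the 98% figure reported above.

    # Simulation-based power approximation for a binomial logistic regression
    # (assumed baseline probability; illustrative only)
    set.seed(2020)
    n      <- 9322   # study sample size
    freq_x <- 0.10   # frequency of the explanatory class
    or     <- 1.1    # target odds ratio
    base_p <- 0.5    # assumed baseline probability of a positive response

    rejections <- replicate(1000, {
      x <- rbinom(n, 1, freq_x)
      p <- plogis(qlogis(base_p) + log(or) * x)
      y <- rbinom(n, 1, p)
      fit <- glm(y ~ x, family = binomial)
      coef(summary(fit))["x", "Pr(>|z|)"] < 0.05
    })
    mean(rejections)   # estimated power at alpha = 0.05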

Respondents of the study were predominantly female (78.5%) and aged between 35 and 69 years (81.6%) (Table 1). A substantial percentage of the participants (61.4%) had a university education. The majority of participants were members of a white ethnic group (95.4%) and there was under-representation of BAME participants (4.1%). There was almost universal agreement (96.5%, n = 8,863) amongst respondents that it would be “acceptable for governments to force some people to change their behaviours in order to control COVID-19”. When asked whether they thought the government was making good decisions about how to control COVID-19, 52.7% (n = 4,845) answered positively. Self-reported trust in the government’s response was not uniform across different demographic groups (Table 2). Compared to participants living in London, those in Scotland had lower odds (OR 0.71, 95% CI 0.51–0.91, p = 0.001) of thinking that the government was making good decisions. Meanwhile, participants from the East of England, the South East and the West Midlands all had higher odds than Londoners of thinking that the government was making good decisions (Table 2). Increasing educational levels were associated with decreasing odds of having a positive opinion of government decisions (Table 2). Similarly, decreasing household income correlated with less positive opinions. Males and younger adults had relatively lower odds of having a positive opinion of government decision-making than the reference groups (females and age 70+, respectively).

Table 1. Demographic characteristics of the study population.

Variable    Value    Freq (% of valid)    Missing (% of total)
Are government making good decisions about COVID-19? No 4352 (47.3%) 125 (1.34%)
Yes 4845 (52.7%)
What is the government’s response priority? Don’t know 757 (8.2%) 124 (1.33%)
Economy 3139 (34.1%)
People and their health 1777 (19.3%)
About the same 3525 (38.3%)
Extent to which government tells truth about COVID-19 Never 503 (5.5%) 95 (1.02%)
Almost never 1335 (14.5%)
Sometimes 3094 (33.5%)
Mostly 3367 (36.5%)
Always 535 (5.8%)
I don’t know 393 (4.3%)
Government trust free text Provided text response 7617 (81.7%)  0 (0%)
No text response. 1705 (18.3%)
Acceptable to use force to change people’s behaviours No 324 (3.5%) 135 (1.45%)
Yes 8863 (96.5%)
Age 20–34 618 (6.6%) 0 (0%)
35–54 3307 (35.5%)
55–69 4295 (46.1%)
70+ 1102 (11.8%)
Education Completed Primary School 64 (0.7%) 344 (3.69%)
GCSE/O-levels 873 (9.7%)
A level/Higher 591 (6.6%)
Further education 1945 (21.7%)
University (first) degree 2556 (28.5%)
Post-graduate degree 2949 (32.9%)
Gender Female 7244 (78.5%) 89 (0.95%)
Male 1938 (21.0%)
All other genders 51 (0.5%)
Income Less than £15,000 1046 (13.1%) 1319 (14.15%)
£15,000 - £24,999 1510 (18.9%)
£25,000 - £39,999 1830 (22.9%)
£40,000 - £59,999 1673 (20.9%)
£60,000 - £99,999 1328 (16.6%)
More than £100,000 616 (7.7%)
Region East Midlands 638 (7.1%) 401 (4.3%)
East of England 961 (10.8%)
North East 606 (6.8%)
North West 919 (10.3%)
Northern Ireland 87 (1.0%)
London 1331 (14.9%)
Scotland 591 (6.6%)
South East 1484 (16.6%)
South West 1052 (11.8%)
Wales 491 (5.5%)
West Midlands 761 (8.5%)
Ethnicity Arabic 8 (0.1%) 55 (0.59%)
Asian 105 (1.1%)
Black 20 (0.2%)
Mixed (Other) 72 (0.8%)
Mixed (White/Asian) 44 (0.5%)
Mixed (White/Black) 29 (0.3%)
Prefer not to say 78 (0.8%)
White 8840 (95.4%)
Another Ethnic Group 71 (0.8%)

Table 2. Relative odds of respondents having a positive opinion of UK government decision-making during the COVID-19 lockdown, by demographic group.

Variable    Group    OR (95% CI)    p
Region East Midlands 1.19 (0.99–1.39) 0.085
East of England 1.26 (1.09–1.43) 0.008
London Ref -
North East 1.15 (0.95–1.35) 0.157
North West 0.98 (0.80–1.16) 0.793
Northern Ireland 0.85 (0.41–1.29) 0.488
Scotland 0.71 (0.51–0.91) 0.001
South East 1.26 (1.11–1.41) 0.003
South West 1.08 (0.91–1.25) 0.369
Wales 0.89 (0.68–1.10) 0.262
West Midlands 1.36 (1.17–1.55) 0.001
Age 20–34 0.77 (0.56–0.98) 0.012
35–54 0.65 (0.50–0.80) <0.001
55–69 0.76 (0.62–0.90) <0.001
70+ Ref -
Education Completed Primary School 0.49 (-0.04–1.02) 0.008
GCSE/O-Levels (ref) Ref -
A level/Higher 0.62 (0.40–0.84) <0.001
Further education 0.63 (0.45–0.81) <0.001
University (first) degree 0.41 (0.24–0.58) <0.001
Post-graduate degree 0.32 (0.15–0.49) <0.001
Gender Female (ref) Ref -
Male 0.77 (0.67–0.87) <0.001
All other genders 0.73 (0.15–1.31) 0.284
Income Less than £15,000 0.51 (0.29–0.73) <0.001
£15,000 - £24,999 0.56 (0.36–0.76) <0.001
£25000 - £39,999 0.60 (0.41–0.79) <0.001
£40,000 - £59,999 0.72 (0.53–0.91) 0.001
£60,000 - £99,999 0.85 (0.65–1.05) 0.097
£100,000+ (ref) Ref -

Model is adjusted for all covariates.

There was diversity in the opinion of different demographic groups with respect to whether the UK government strategy prioritised the economy, people and their health, or a balance of both (Fig 1). People living in Scotland (OR 2.18, 95% CI 1.94–2.42, p < 0.001) and Northern Ireland (OR 1.69, 95% CI 1.18–2.20, p = 0.043) had higher odds of believing that the economy was the priority than those in other areas. The regions which had higher odds than Londoners of thinking that priorities were focussed on people and their health included the East Midlands (OR 1.32, 95% CI 1.06–1.58, p = 0.046), South East (OR 1.23, 95% CI 1.03–1.43, p = 0.046) and West Midlands (OR 1.28, 95% CI 1.04–1.52, p = 0.049). Groups under the age of 70 had higher odds of citing the economy as the priority (S1 Table). Education also played a role: compared to those whose highest educational achievement was O-Levels or GCSEs, participants who had A-levels (OR 1.53, 95% CI 1.27–1.79, p = 0.002) or further educational qualifications (OR 1.36, 95% CI 1.16–1.56, p = 0.003) showed a similar tendency to believe that the focus of the response was on the economy rather than on a balanced prioritisation. This effect was stronger still in the group with either a first (OR 2.1, 95% CI 1.9–2.3, p < 0.001) or higher degree (OR 2.22, 95% CI 2.02–2.42), and among the small number of participants who left school after primary education (S1 Table). There was a linear correlation between increasing household income and the odds of citing the economy as the government priority (Fig 1).

Fig 1. Participant opinions on UK government prioritisation of COVID-19 response to economy or people and their health.


The statistical model was adjusted for all covariates. Odds ratios compared to those who thought that the priority was a balance of both.

There was a strong relationship (Fig 2) between responses to the questions about government priority and the quality of decision-making (X-squared = 2999.4, df = 3, p-value < 2.2e-16). Around 60% of participants who thought that the government was not making good decisions also thought that the economy was the priority; 4.7% of this group thought that people and their health were being prioritised, while 25% thought that it was “about the same”. In the group who thought more positively of government decisions, 10.17% thought that the economy was the priority area, 32% thought that the focus was on people and their health, and 49% thought that the response took a balanced approach to the two areas.

Fig 2. Perspectives on government prioritisation of health and the economy, stratified by response to the question "Do you think the government is making good decisions about how to control COVID-19?" (Yes/No).


Missing answers were excluded (125 for ‘right decision’ question, 124 for ‘priority’ question). Whiskers show 95% confidence intervals.

Around one third (36.5%) of respondents answered that they believed that the government “mostly” told the truth; compared to this group, those who answered ‘always’ to this question (5.8%) were more likely (OR 2.84, 95% CI 2.47–3.21, p < 0.001) to believe that the government was making good decisions about COVID-19 control. Conversely, those who thought that the government ‘never’ (0.5%; OR 0.03, 95% CI -0.25–0.31, p < 0.001), ‘almost never’ (14.4%; OR 0.03, 95% CI -1.17–0.23, p < 0.001) or ‘sometimes’ (33.3%; OR 0.12, 95% CI 0.00–0.24, p < 0.001) told the truth were all less likely to think that the government was making good decisions.

Structural text modelling

STM analysis of the open-text responses resulted in a corpus of 7,589 documents and 786 terms. The model was run with 7 topics until convergence was reached. Seven was judged an adequate number of topics based on review of the survey responses and the consistency within topic outcomes. Through analysis of example quotes (S2 Table) and the keywords used, topics were manually labelled as (T1) Extent of Truth [ETP = 0.214], (T2) Equipment [ETP = 0.179], (T3) Settings [ETP = 0.181], (T4) Long-term [ETP = 0.124], (T5) Implementation [ETP = 0.105], (T6) Numbers [ETP = 0.095] and (T7) Rationale/Politics [ETP = 0.100].

Qualitative analysis

As discussed above, we then used the results of the STM analysis to focus our in-depth qualitative analysis on a subset of responses that (a) mapped to topics T1: Extent of Truth, T5: Implementation, or T7: Rationale/Politics and (b) came from social groups that were found to have been statistically most and least likely to positively evaluate the government’s decisions on COVID-19. These groups were (1) respondents from the UK’s devolved nations [Scotland, Wales & Northern Ireland, which have separate legislatures and executives and a range of legal powers that are autonomous of the UK central government at Westminster], (2) respondents who were resident in England with lower and higher levels of education and (3) respondents resident in England and with incomes either under £15,000 or over £100,000.

Responses were then free-coded in our qualitative analysis to identify sub-themes that emerged directly from the findings, allowing us to identify particular narratives, explore qualitative differences between the groups and build a more complex picture of the dynamics of trust within these groups.

Our manual qualitative coding highlighted five major sub-themes that linked together our three chosen topics from the STM analysis for Q5 to produce a coherent qualitative narrative about the relationship between trust and transparency. These sub-themes were similar across groups, though there were differences in the way responses were articulated and in the prominence of particular narratives for different groups.

Justifying a lack of transparency

A recurrent theme across all groups included explanations of why the government could not or should not divulge all information about the pandemic. There were four main types of justification in this sub-theme. Firstly, and most commonly, respondents argued that the government had to balance transparency with an avoidance of “panic”, “hysteria” or “civil unrest”. In these kinds of responses, participants emphasised that they did not feel that “untruthful” was a correct characterisation, pointing rather to a necessary withholding of information because “we need to keep a steady hand to come through this one [to] the other side”. In the university-educated group, some respondents argued that whilst they recognised a need for the government to control the narrative, this may also have “been detrimental to early efforts of containment.”

A second variation of the theme noted that withholding information was necessary to keep a very simple message and to ensure effective behaviour change in the population:

“It’s not necessarily untruthfulness, I think the government needs to withhold some information to make rules more general. I don’t think everyone can be trusted with having enough common sense to curtail their activities and keep social distance for example. So, generalised rules and possibly over-restrictive guidelines are necessary to maintain an average level of obedience.”

This was also expressed in a third variation, namely that government could not share all information because people would not be able to understand it:

“I think the information they give is what they think we should know. I think they have to cater for the common denominator. I think they must have sensitive information which the masses don’t need to know”.

Finally, respondents argued that the government could not tell “the whole truth” because they likely do not have all the information. Given that “scientifically, no one really knows what the ‘truth’ is yet”, it would be necessary to produce messaging that will ensure citizens abide by the rules:

“This is a new disease, so no-one knows the ‘whole truth’ about it, including the government […] They tell us what they think will lead us to follow their instructions. Truth is not arrived at by democratic vote.”

Generalised mistrust

A second set of responses focused on perceptions of a lack of transparency based on overall negative assessments of the government and politics more generally. This finding is important not only in itself, but also because it alerts us to the possibility that attitudes of (mis)trust towards the government’s COVID-19 response could be influenced not only by an assessment of pandemic management, but also by broader perceptions and past assessments of the government and public institutions.

Generalised mistrust was articulated for example through broad statements that the government was being untruthful about “everything” or: “I’m not sure [what they are untruthful about] but they have not been truthful about much in the past so it’s difficult to believe everything they say now”. For some respondents this reflected general perceptions that politicians are untruthful, self-interested and intent on prioritising economic interests.

Particularly amongst low-income respondents, this was linked to assessments of the government’s track record, with frequent mentions of austerity and underfunding of the NHS.

Mistrust of government was especially common amongst residents of the UK’s devolved nations, who expressed dissatisfaction with “Westminster” (the central UK government) as a reason for why they felt the government was not telling the whole truth on COVID-19. This was particularly pronounced for Scottish respondents, who contrasted the response of central government with that of the devolved Scottish government in their assessments of transparency and competence:

Westminster are not telling the truth, they clearly do not have a clue and they all need to be [held responsible] when this is under control. Scottish government [is] more transparent, faster to react and more all round supportive.

Welsh respondents linked this more explicitly with central government’s track record in their region:

“I don’t trust this government to fully tell the truth. In fact, given their track record over the last ten years, they lie, underfund vital services and appear not to care about the general population. They care about making money and their rich buddies”.

Few respondents (87, 1.0%) were from Northern Ireland, but amongst these there was a disproportionate number of mentions of “Brexit” as a factor, suggesting for example that the government’s contemporaneous focus on negotiations to leave the European Union may have distracted it from planning a pandemic response.

Role of evidence

Although, as noted above, some respondents accepted that existing knowledge about COVID-19 in the initial months of the pandemic was limited, there were significant concerns about what kind of evidence was being used to make decisions and how this evidence was conveyed. Respondents in all groups expressed concern about the balance of science against political or economic considerations. For example:

“It is not always clear what scientific advice is being given to the government and where this is adjusted by political priorities, in addition the lines between scientific advice, government guidance, opinions of individuals and actual regulations/ legislation are very blurred and not well understood by a lot of the general population.”

Concerns about the role of scientific expertise were particularly prominent amongst respondents with higher education (university degree and above). Comparative word searches between responses from residents in England with higher and lower education backgrounds for example showed that terms like ‘science’, ‘expertise’ and ‘data’ were more frequently cited amongst higher education respondents than lower education respondents (67, 23 and 87 versus 0, 3, 3 respectively).

For example, these concerns were articulated through suggestions that the government “don’t listen to experts”. University-educated respondents in England gave more specific comments about the kinds of expertise that were either not explained or not followed, including:

“Interpretation of the modelling. Statistics do not always tell the truth”

In these discussions of evidence, questions about “herd immunity” were prominent. Although the government repeatedly denied that it was following a strategy that would see the virus spreading through the population unfettered so as to increase immunity, respondents in the survey who mentioned the controversy believed that this was unofficially the “overriding policy aim”. As one respondent put it:

“I don’t believe the government is being transparent about their strategy. I believe they continue to follow their herd immunity strategy as they consider public loss of life acceptable.”

For some respondents this had been the main reason for a loss of confidence in the government: “After the herd immunity thing, I can’t trust them.”

Communication

Related to the role of evidence, respondents also expressed their concern with government communication of key information about the pandemic, guidance and strategy. A main strand of discussion was around a perceived lack of coherence and clarity in messaging. This was especially mentioned in relation to seeming contradictions and frequent changes in policy:

“…there is sometimes one piece of information one day which is contradicted the next but I think this is mostly scientific and medical experts who are advising the government and who tend to sometimes not agree with each other.”

“Mixed messages” and perceptions that risk communication involved “spin” or efforts to “manage” or “massage” the evidence were cited as sources of confusion and mistrust. These narratives envisioned information being “spun” to present the government in a positive light, or to obscure mistakes or a lack of knowledge about COVID-19. These comments centred especially on the press briefings, which respondents felt delivered “the agreed message”. The sense that the pandemic response was “run on slogans” with “no detailed information” meant that the reasoning behind policies and policy changes was not clear.

In contrast, some respondents argued that communication might be forgiven for vagueness and inconsistency, as long as government officials were more open and accountable, for instance by

“Admitting their mistakes and apologising. We do not expect them to have all the answers and understand if errors have been made but they need to be admitted”

This was seen to be essential to build confidence: “Ministers would be well advised to get some help from PR firms who have dealt with crises as to how to really start to build trust.”

Decision-making and implementation

A final group of themed responses centred on a wish for more transparency not only on key statistics, but also on how decisions are made. This was particularly pronounced in responses that argued that mistakes in implementation had been made. These mistakes were put down to the unclear role of evidence in defining strategies, the balance of priorities and especially “political decisions” and a focus on the economy.

The perception that the government’s “priorities are largely involved in keeping the economy alive and may not involve keeping the number of deaths at a minimum” was a concern for many respondents. For a smaller group, primarily respondents in the high-income bracket, this was cited as a genuine dilemma: “I think the government has a difficult job of balancing public health with long term fiscal security.”

The timing of implementation was a common concern in this sub-theme, with a particular focus on when the lockdown was implemented and future plans to lift restrictions. Initial “inaction” and delays in locking down were contrasted with the experience of other countries. Keeping the country open too long, some argued, might have been based on political calculations:

“[Prime Minister] Boris [Johnson] is the man who said that the real hero in Jaws was the guy who tried to keep the beach open. His own popularity and cabinet over-confidence has come at the expense of following best practice from other counties and medical experts.”

Other responses around the timing and nature of implementation focused on the preparedness of the NHS, levels of planning at the beginning of the pandemic and the availability of testing and PPE.

Discussion

Our survey results and mixed-methods analysis offer insights into respondents’ perspectives on the UK government’s COVID-19 response during the first wave of the pandemic, in April 2020. In summary, we found that amongst our respondents there was near unanimous support for government enforcement of behaviour change. Just over half of our respondents thought the government was doing a good job, but this varied across demographic categories, with lower odds among respondents in Scotland, those who were younger, those with higher education and those with lower income levels. Respondents who did not believe the government was doing a good job were also more likely to believe the economy was prioritised over people and their wellbeing. Around 36% of respondents thought the government mostly told the truth. Amongst those who expressed concerns about a lack of transparency, we found a number of common narratives that offer insights into the relationship between trust and transparency, including reflections on whether a lack of transparency is justified in a time of crisis, deep-seated mistrust in government and concerns about evidence, communication and the politicisation of decision-making.

Our first set of findings relate to overall levels of trust in government decision-making and their leadership in enforcing COVID-19 measures. Political trust, as a “basic evaluative orientation toward the government” [16] is widely recognised as key for the effective functioning of democratic institutions. This becomes ever more important in moments of crisis, including health emergencies, where high levels of uncertainty require confidence in the actors and organisations making decisions about emergency response measures. Before the COVID-19 pandemic, political trust was a major topic of debate amongst political scientists and the public alike against the backdrop of a political crisis triggered by the 2016 referendum to leave the European Union. Analyses of the ‘Brexit crisis’ highlighted that the referendum reflected long-standing social divisions in the UK and low levels of trust in politicians and institutions [17]. This is in line with trends across Europe and the United States where, in the aftermath of the 2008 financial crisis, confidence in political institutions has steadily declined, with populist ‘anti-establishment’ parties doing increasingly well electorally [18]. In the UK, the 2019 Eurobarometer survey showed that 21% of respondents said they “tend to trust [the] government to do the right thing”, 10 points lower than when the question was asked in 2001 [19]. Against this backdrop, our respondents’ evaluations of the UK government’s decisions over the first months of the COVID-19 pandemic, whereby 52.7% answered positively, would appear higher than expected. This may suggest that our respondents were more willing to back government decisions at the onset of the crisis. This potential “crisis effect” amongst our respondents is further supported by the fact that there was almost universal agreement that it would be “acceptable for governments to force some people to change their behaviours in order to control COVID-19”.

Public acceptance of strong-handed government leadership may increase during times of crisis, particularly in the acute phase of an emergency. Research on counter-terrorism legislation after the 9/11 attacks in the U.S. has shown that ‘states of emergency’ can affect the perceived legitimacy of measures that curtail civil liberties in a climate of fear and heightened sense of risk [20]. This work has also pointed to the long-term consequences of these “crisis effects” for democratic values. This literature provides useful parallels for understanding the very high support for government enforcement of behaviour change. This response does not necessarily tell us about respondents’ perspectives on whether the government should be able to forcibly change their own individual behaviour, but rather whether enforcement is justified in relation to others. Ignatieff [21] has argued (in the context of counter-terrorism) that majority support for restrictive measures relies on the assumption that these are going to be enforced against a minority who pose a threat to the community at large, and that majoritarian acceptance of restrictions on civil liberties plays a role in the securitisation of minorities. In previous epidemics, divisive narratives that distinguished “compliant” citizens from those who were “resistant” individualised responsibility and blame, justifying forcible containment measures that had considerable political consequences [22]. Higher willingness to back the government and acceptance of a need for collective behaviour change are undoubtedly crucial for the outbreak response. However, our participants’ responses also reinforce these questions about the broader implications for political rights of accepting strong-handed leadership during moments of crisis.

This is not, however, the full story, as positive evaluation of the government’s COVID-19 decisions was not the same across different groups of respondents. In particular, we found geographical differences, with participants from some of the devolved nations (Scotland in particular) being less likely to evaluate government decisions positively. Income was positively correlated with trust (i.e. wealthier participants were more positive), and education inversely correlated (i.e. more educated participants were less positive) (Table 2). These effects might suggest that pre-existing levels of mistrust are important, and previous research on institutional trust (prior to COVID-19) would suggest a positive relationship between income and institutional trust [23]. We might also consider the fact that COVID-19 measures such as lockdowns may have a higher impact on low-income respondents, for instance because of their different experiences of lockdown and differing choices and opportunities with regard to working from home and/or avoiding high-risk environments. Such factors could all in turn have an impact on trust. Conversely, the negative relationship between education and trust runs counter to data from OECD research on institutional trust [18]. The higher levels of concern with the role of scientific evidence and expertise cited by respondents in England with higher levels of education suggest that, at least for some in this sub-set, observations of the management of the COVID-19 response directly affected their perceptions of transparency and their trust in the government’s handling of the pandemic. Respondents who believed that the government was prioritising the economy were more likely to negatively evaluate government decisions (Fig 2). This offers insights into the qualitative dynamics of political trust.

We went on to explore the qualitative mechanisms of (mis)trust in the COVID-19 response, with a particular focus on its relationship to perceptions of transparency. Whilst it is well established that trust is important for both democracy and crisis management in general, how trust is achieved, maintained or lost during an emergency is less well understood. Political scientists expect transparency to be an important mechanism, with citizens’ ability to access information and to hold governments accountable representing a core pillar of “good governance” [24]. This is particularly pertinent for the COVID-19 pandemic given the attention that has been paid to the role of information and misinformation, with the World Health Organization (WHO) warning of the dangers of an unfolding “infodemic” [25] and political scandals in the UK having influenced popular debate on the topic of good governance during lockdown.

Our findings offer some initial insights on the complex role that transparency plays in citizens’ perspectives of the government’s response to COVID-19. Whilst 52.7% of respondents said the government was making the right decisions, only 42.3% thought the government tells the truth about COVID-19 most or all of the time. This appears counterintuitive if we consider common assumptions that transparency is a necessary condition for trustworthy governance. Our qualitative analysis of the free-text answers suggests that this gap could be partly explained by some respondents’ justification that a lack of government transparency during a crisis is legitimate. These responses argued that governments may have to withhold information in order to prevent panic, because people might not fully understand it, or because the complexity of the full truth would make it difficult for everyone to comply with guidance. This further supports the possibility of a “crisis effect” conditioning some of our participants’ assessments of the UK government’s response. It also adds to our previous question about the longer-term impact of emergencies on democratic values, including transparency and accountability, though we found that once again the picture was more complex than it might have initially seemed. Text mining and qualitative coding allowed us to develop a more nuanced analysis of the perspectives amongst our respondents, with a focus on groups who were most and least likely to have a positive perception of the government’s response. Generalised mistrust in politics was shared across all groups as a reason for questioning the truthfulness of official information on COVID-19. This suggests firstly that the relationship between trust and perceived transparency is not unidirectional; that is, pre-existing trust in institutions as well as observations of how a pandemic is being managed (in particular how response measures are communicated) can also affect perceptions of transparency. More generally, these expressions of generalised mistrust suggest that pre-existing institutional trust affects attitudes towards an emergency response. This is supported by the fact that responses reflecting low trust in central government, or focusing on the government’s track record of defunding public services, were particularly prominent amongst low-income, Welsh and Scottish respondents. This points towards both structural and historical determinants of confidence in the epidemic response.

Respondents highlighted a range of other factors that influenced their perceptions of the UK government’s response to COVID-19. Across both high-trust and low-trust groups, there were consistent concerns about the coherence, transparency and accountability of communications and decision-making, including uncertainty about the role of evidence and experts, as well as fears that the response was being politicised. This not only gives an insight into the reasons for a lack of trust in the response in low-trust groups, but also suggests that for high trust groups, a positive assessment of government decisions and support for enforcement in a time of crisis did not entirely eliminate concerns about transparency. In the context of debates about misinformation and the role of “fake news” circulating in unregulated communications platforms, our study shows that it is also important to consider trust in official information channels.

Limitations

Our sample was not population representative: respondents were predominantly white, female and with higher educational attainment. This means that, for example, higher levels of trust compared to pre-crisis levels could reflect higher levels of structural trust in the sample group. In addition, we expect some bias in recruitment towards demographic groups who use Facebook. Because of low uptake, our study was unable to elicit sufficient responses from Black, Asian and Minority Ethnic (BAME) communities and we could not draw any conclusions on the perceptions of a demographic group that has been shown to be disproportionately affected by the pandemic [26, 27]. In addition, ethnicity matters for understanding structural levels of political trust, as BAME communities are more likely to experience discrimination and institutional racism across a spectrum of interactions with government [28, 29]. Indeed, as the COVID-19 pandemic develops, ethnic minorities have been shown to be disproportionately targeted by the enforcement of COVID-19 regulations, including higher rates of fines and arrests. In London, black people were twice as likely to be arrested as white people [30]. This reiterates the importance, as noted above, of exploring the political consequences of epidemic control measures in contexts of structural inequality.

Recommendations

The extent to which a government may be able to foster public trust in its responses to pandemics appears to be closely linked to the coherence and transparency of its communication strategies. Concerns among UK-based respondents centred on the way the government had used scientific evidence and on how important decisions were made. Based on our findings, we recommend that in order to maintain public trust and acceptance, governments should invest in more transparent, honest governance during pandemics and provide justification for the decisions they make, including about the information they cannot share.

Further investigation is required to explore other factors that influence trust in the UK government’s response to the COVID-19 pandemic, including the role of personal experiences of disease, levels of trust in the health system, the economic and social impacts of the crisis and trust in different kinds of interventions. Comparative analysis across countries will also be able to highlight the relevance of different political structures, histories and relations for the effects of this health emergency on trust and political rights. In addition, our study has only looked at the first acute phase of the epidemic, in April 2020, and it will be important to continue to explore how perceptions of government performance change over the course of the emergency and beyond. This should include efforts to understand the long-term effects of the COVID-19 crisis on institutional confidence. Maintaining trust is ever more important as the UK transitions in and out of subsequent waves of COVID-19 with fast-moving changes to policy, lockdown and other restrictive measures.

Our participants’ assessments lead us to reflect on our key finding that there are significant differences in levels of trust across geographical, income and educational backgrounds. Whilst structural determinants of (mis)trust may be hard to act upon in the short-term, it will be important to develop measures such as targeted community engagement that tailor messaging and public deliberation to the realities faced by particular social groups. In contrast to centralised and top-down communication, this approach can directly address the diversity of experiences and perspectives that exist across the country.

Across all demographic groups and regardless of levels of trust, we found that some study participants felt that a lack of transparency was justifiable given the exigencies of crisis. For those respondents who were concerned about transparency, the reasons for those concerns were the same across all groups. Coherent communication, explanations about the sources and roles of different forms of evidence and priorities, a willingness to own up to mistakes and to explain what information cannot be shared could all be practical steps to increasing and maintaining trust across different groups. This would also strengthen accountability beyond the extraordinary times of the COVID-19 emergency.

We speculate that initially very high levels of public acceptance of more draconian control measures may relate to a ‘crisis effect’ that could be significant, but has the potential to be both acute and short-lived. As the pandemic progresses, governments may not be able to depend on such effects and instead may need to rely on deeper levels of public trust in their strategies to enable them to implement more extreme and restrictive control measures.

Supporting information

S1 File. A PDF copy of the survey questionnaire.

(PDF)

S2 File. ODK XLSForm design file for the survey questionnaire.

(XLSX)

S3 File. R script illustrating analysis.

(R)

S1 Table. Participant opinions on UK government prioritisation of COVID-19 response to economy or people & their health.

(PDF)

S2 Table. STM topics, expected topic proportions and summaries of thematic content.

(PDF)

Acknowledgments

The study team would like to thank the study participants for their contributions to the data and to acknowledge the invaluable contribution of Eleanor Martins’ & Esther Amon’s administrative support.

Data Availability

The data used in this study are available from LSHTM Data Compass at the following DOI https://doi.org/10.17037/DATA.00001860. The quantitative data (https://doi.org/10.17037/DATA.00001851) are available without restriction, whilst the qualitative data (https://doi.org/10.17037/DATA.00001859) contain sensitive individual level data and will require a data sharing agreement. This can be obtained by requesting access to the data through LSHTM Data Compass (https://datacompass.lshtm.ac.uk/1860/)

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1. Hale T, Angrist N, Kira B, Petherick A, Phillips T. Variation in government responses to COVID-19. Blavatnik School of Government, University of Oxford; 2020. Report No.: BSG-WP-2020/032 v6.0. Available: https://www.bsg.ox.ac.uk/research/publications/variation-government-responses-covid-19
  • 2. Abramowitz S. Epidemics (Especially Ebola). Annual Review of Anthropology. 2017;46: 421–445. doi:10.1146/annurev-anthro-102116-041616
  • 3. Larson HJ, Heymann DL. Public health response to influenza A(H1N1) as an opportunity to build public trust. JAMA. 2010;303: 271–272. doi:10.1001/jama.2009.2023
  • 4. Lau LS, Samari G, Moresky RT, Casey SE, Kachur SP, Roberts LF, et al. COVID-19 in humanitarian settings and lessons learned from past epidemics. Nature Medicine. 2020;26: 647–648. doi:10.1038/s41591-020-0851-2
  • 5. Dhillon RS, Kelly JD. Community Trust and the Ebola Endgame. New England Journal of Medicine. 2015;373: 787–789. doi:10.1056/NEJMp1508413
  • 6. Enria L, Lees S, Smout E, Mooney T, Tengbeh AF, Leigh B, et al. Power, fairness and trust: understanding and engaging with vaccine trial participants and communities in the setting up the EBOVAC-Salone vaccine trial in Sierra Leone. BMC Public Health. 2016;16: 1140. doi:10.1186/s12889-016-3799-x
  • 7. Ryan MJ, Giles-Vernick T, Graham JE. Technologies of trust in epidemic response: openness, reflexivity and accountability during the 2014–2016 Ebola outbreak in West Africa. BMJ Global Health. 2019;4: e001272. doi:10.1136/bmjgh-2018-001272
  • 8. Marcis FL, Enria L, Abramowitz S, Saez A-M, Faye SLB. Three Acts of Resistance during the 2014–16 West Africa Ebola Epidemic. Journal of Humanitarian Affairs. 2019;1: 23–31. doi:10.7227/JHA.014
  • 9. Nguyen V-K. An Epidemic of Suspicion—Ebola and Violence in the DRC. New England Journal of Medicine. 2019;380: 1298–1299. doi:10.1056/NEJMp1902682
  • 10. Wilkinson A, Parker M, Martineau F, Leach M. Engaging ‘communities’: anthropological insights from the West African Ebola epidemic. Philos Trans R Soc Lond B Biol Sci. 2017;372. doi:10.1098/rstb.2016.0305
  • 11. Boland S, McKay G, Black B, Mayhew S. Covid-19 has forced a reckoning—the UK has much to learn from low income settings. The BMJ Opinion. [cited 10 Aug 2020]. Available: https://blogs.bmj.com/bmj/2020/05/14/covid-19-has-forced-a-reckoning-the-uk-has-much-to-learn-from-low-income-settings/
  • 12. Bardosh K, de Vries D, Stellmach D, Abramowitz S, Thorlie A, Cremers L, et al. Towards People-Centred Epidemic Preparedness & Response: From Knowledge to Action. Amsterdam Institute for Global Health and Development (AIGHD); 2019 July. pp. 1–64. Available: https://www.glopid-r.org/wp-content/uploads/2019/07/towards-people-centered-epidemic-preparedness-and-response-report.pdf
  • 13. Creswell JW, Plano Clark VL, Gutmann ML, Hanson WE. Advanced Mixed Methods Research Designs. In: Tashakkori A, Teddlie C, editors. SAGE Handbook of Mixed Methods in Social & Behavioral Research. Thousand Oaks, California: SAGE; 2003. doi:10.4135/9781506335193
  • 14. Roberts ME, Stewart BM, Tingley D. stm: An R Package for Structural Topic Models. Journal of Statistical Software. 2019;91(2): 1–40.
  • 15. Roberts ME, Stewart BM, Tingley D, Lucas C, Leder‐Luis J, Gadarian SK, et al. Structural Topic Models for Open-Ended Survey Responses. American Journal of Political Science. 2014;58: 1064–1082. doi:10.1111/ajps.12103
  • 16. Ellinas AA, Lamprianou I. Political trust in extremis. Comparative Politics. 2014;46. Available: https://works.bepress.com/antonis_ellinas/14/
  • 17. Abrams D, Travaglino GA. Immigration, political trust, and Brexit–Testing an aversion amplification hypothesis. British Journal of Social Psychology. 2018;57: 310–326. doi:10.1111/bjso.12233
  • 18. Murtin F, Fleischer L, Siegerink V, Aassve A, Algan Y, Boarini R, et al. Trust and its determinants: Evidence from the Trustlab experiment. OECD Publishing; 2018.
  • 19. Eurobarometer Interactive Search System. [cited 10 Aug 2020]. Available: http://www.brin.ac.uk/eurobarometer-interactive-search-system/
  • 20. Agamben G. State of Exception (Stato di eccezione). Translated by Kevin Attell. University of Chicago Press; 2005.
  • 21. Ignatieff M. The Lesser Evil: Political Ethics in an Age of Terror. Princeton University Press; 2013. Available: https://www.hks.harvard.edu/publications/lesser-evil-political-ethics-age-terror
  • 22. Enria L. The Ebola Crisis in Sierra Leone: Mediating Containment and Engagement in Humanitarian Emergencies. Development and Change. 2019;50: 1602–1623. doi:10.1111/dech.12538
  • 23. Statistical Insights: Trust in the United Kingdom—OECD. [cited 22 Jan 2021]. Available: https://www.oecd.org/sdd/statistical-insights-trust-in-the-united-kingdom.htm
  • 24. Johnston M. Good Governance: Rule of Law, Transparency and Accountability. United Nations Public Administration Network; 2006. p. 34. Available: https://etico.iiep.unesco.org/sites/default/files/unpan010193.pdf
  • 25. Depoux A, Martin S, Karafillakis E, Preet R, Wilder-Smith A, Larson H. The pandemic of social media panic travels faster than the COVID-19 outbreak. J Travel Med. 2020;27. doi:10.1093/jtm/taaa031
  • 26. Kirby T. Evidence mounts on the disproportionate effect of COVID-19 on ethnic minorities. The Lancet Respiratory Medicine. 2020;8: 547–548. doi:10.1016/S2213-2600(20)30228-9
  • 27. Williamson E, Walker AJ, Bhaskaran KJ, Bacon S, Bates C, Morton CE, et al. OpenSAFELY: factors associated with COVID-19-related hospital death in the linked electronic health records of 17 million adult NHS patients. medRxiv. 2020; 2020.05.06.20092999. doi:10.1101/2020.05.06.20092999
  • 28. Lammy D. The Lammy Review: An independent review into the treatment of, and outcomes for, Black, Asian and Minority Ethnic individuals in the Criminal Justice System. The Government of the United Kingdom. Available: https://www.gov.uk/government/publications/lammy-review-final-report
  • 29. Sanders D, Fisher SD, Heath A, Sobolewska M. The democratic engagement of Britain’s ethnic minorities. Ethnic and Racial Studies. 2014;37: 120–139. doi:10.1080/01419870.2013.827795
  • 30. Dodd V. Met police twice as likely to fine black people over lockdown breaches–research. The Guardian. 3 Jun 2020. Available: https://www.theguardian.com/uk-news/2020/jun/03/met-police-twice-as-likely-to-fine-black-people-over-lockdown-breaches-research. Accessed 10 Aug 2020.

Decision Letter 0

Adriano Gianmaria Duse

6 Jan 2021

PONE-D-20-28524

Trust and Transparency in times of Crisis: Results from an Online Survey During the First Wave (April 2020) of the COVID-19 Epidemic in the UK

PLOS ONE

Dear Dr. Roberts

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Most importantly, we'd like you to address the comments and/or recommendations made by Reviewers (1, 3 and 4) pertaining to the design/methodology and discussion sections of your manuscript. These reviewers have additionally requested points of clarity and/or made recommendations that will enhance the discussion and conclusions. All the Reviewers' reports are included in this letter. 

Furthermore, there are a few minor grammatical/typographical changes that need to be made.

Structural changes recommended by Reviewers 3 & 4 are for your consideration.

Please submit your revised manuscript by Feb 20 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Adriano Gianmaria Duse, MD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified whether consent was informed.

3. Please provide a sample size and power calculation in the Methods, or discuss the reasons for not performing one before study initiation.

4. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

5. To comply with PLOS ONE submission guidelines, in your Methods section, please provide additional information regarding your statistical analyses. For more information on PLOS ONE's expectations for statistical reporting, please see https://journals.plos.org/plosone/s/submission-guidelines.#loc-statistical-reporting.

6. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

7. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: I Don't Know

Reviewer #4: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The research covers a topical discussion which is particularly relevant during this pandemic. The sample size is adequate but not representative of a broad spectrum of the population (acknowledged in the report). This is particularly important as the sample largely included educated, white females - whereas covid affected largely the BAME population - perhaps the study would have produced completely different results had this population group been sampled. The choice of a Facebook sampling method excludes many of the lower income BAME population as they are less likely to have access to internet and social media (due to costs). Relying on a sampled person to then forward the invitation to participate would then perpetuate this sample choice. Also females are more likely to respond than males to online social media requests.

Reviewer #2: The authors indicated that the quantitative data are available without restriction, but not the qualitative data, because they contain sensitive information. They provided details on how to request access to this data.

This is a well-written, clear and nicely structured paper. It focuses on very topical and important issues related to government control strategies in the Covid-19 response in the UK. The mixed methods study is well-described and executed, and the methods are appropriate. The results are expertly and clearly presented, and the discussion is enlightening, perceptive and valuable. The limitations are frankly and transparently addressed. Important and helpful recommendations are made.

All in all, it is an excellent paper, based on an important study that makes a very valuable novel contribution to the field.

I am quite comfortable recommending that it be published without any amendments.

Reviewer #3: Overall, a very well-written paper on an incredibly important topic in this pandemic year. I have only a few minor comments. I would ask the authors to better describe the design of the study so that the importance and rigour of mixed methods studies is highlighted. I further appreciate the authors' recognition of the limitations of online surveys.

General comments

• It lacks a section on the design of the study. You mention a quantitative survey and mixed method analysis separately, however this really sounds like an embedded mixed methods design (Creswell, Plano Clark et al 2003) to me. Can you explain this better.

• Would have preferred to see the results per theme rather than per method, integrating both quantitative and qualitative results under thematic headings (with a first paragraph describing Respondent Characteristics). However this is just a preference not an actual requirement. Some of the explanations that are now under Results (for example: the second paragraph under Structural Text Modelling) fits better in the Methodology section. It would have been a more enticing read for me if the complementarity of the qualitative and quantitative data could have been highlighted through such integrated thematic sections.

Minor comments results:

• “were members of a white ethnic group” – do you mean “reported to identify with…”? or how was this assessed?

• Explain “devolved nations” for non-UK residents

• Can you explain more about the rationale underlying these decisions? (in terms of choice of topics and choice of respondents): “We used the results of the STM analysis to focus our in-depth qualitative analysis on a subset of responses that (a) mapped to topics T1: Extent of Truth, T5: Implementation, or T7: Rationale/Politics and (b) came from social groups that were found to have been statistically most and least likely to positively evaluate the government’s decisions on COVID-19. These groups were (1) respondents from devolved nations, (2) respondents who were resident in England with lower and higher levels of education and (3) respondents resident in England and with incomes either under £15,000 or over £100,000.”

Minor comments discussion:

• Would be good to read about why you think respondents with higher education and lower income were less likely to trust the government, and how this relates to the pre-existing institutional trust that you talk about later in the discussion. It is hinted that the consistent underfunding of the NHS plays a role in pre-existing institutional mistrust, but not really how this relates to either low income or high education levels.

• Perhaps more ideas could be elicited from a comparative analysis of what highly educated people and lower income respondents said. Overall, a more elaborated description of the differences between the groups selected for the qualitative exploration seems to be lacking. Perhaps there was little difference, but then that qualifies as an interesting result to be explored further in the discussion.

Reviewer #4: A very interesting study! I have identified a few points that need revising.

Abstract:

Mentions trust and transparency. Perhaps clarify that it is perceived transparency.

Methods in abstract need more clarity. Online questionnaire shared through which channels? What type of statistical analyses were performed? Please add one sentence on how each methodological technique contributed towards addressing the main research question.

Conclusion: A typo in the first sentence "different groups experiences." The conclusion could highlight at least one specific interesting finding. For instance, could the authors recommend that the role of scientific evidence be highlighted? Or that such highlights be more obvious when targeting certain groups in society? What would be the one specific finding from this study that the authors hope that readers would retain? Recommend explicitly stating that finding in the conclusion.

Article:

Introduction: "...qualitative research in recent epidemics has shown that we need to understand the

dynamics of trust as they vary by socio-political context and the specific outbreak." Please add citations to support this claim.

How do the authors address the challenge that pre-existing (mis) trust may have shaped the respondents' evaluation of the government's pandemic response and perspectives on the transparency of information being made available to the public? If they acknowledge it, they need to discuss how this process might impact the interpretations of their quantitative results.

Introduction last paragraph, 1st sentence: Suggest re-ordering, or even better, stating in the form of "relationship of X with Y" instead of "between Y and X," where X is theorized as the cause and Y as the effect.

Need to include arguments for why the authors think that perceptions of transparency would influence trust in institutional responses to the pandemic and why not that deep trust in institutional responses to the pandemic (and in general) might influence perception of transparency.

The last sentence in the introduction section needs to be split into two.

Methods:

Please explicitly clarify the dimensions of trust that were investigated. The methods section description of the questions that were asked in order to assess trust and transparency does not make it clear which question is considered as measuring transparency. Need a clear description of which questions measure trust and which measure transparency.

Need more detail in qualitative coding. Any steps taken to strengthen the reliability of the coding? And to triangulate the findings?

Table 2: Title, typo. "respodents"

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 16;16(2):e0239247. doi: 10.1371/journal.pone.0239247.r002

Author response to Decision Letter 0


25 Jan 2021

Response to reviewers’ comments

Editor’s comments

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response : We have changed the manuscript structure so that it now adheres to these guidelines.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified whether consent was informed.

Response : Thanks for highlighting this omission. We have now provided a statement on this in the section on consent and have additionally included a citation to the new supporting information file S1. This file is a full text PDF copy of the survey that includes the study information page. Please also note that the exact text of the survey information was approved by the LSHTM research ethics committee as being sufficient for the purposes of informed consent in this study.

3. Please provide a sample size and power calculation in the Methods, or discuss the reasons for not performing one before study initiation.

Response : We have now added a power calculation for the logistic regression, which demonstrates that we have very good power to detect effects greater than odds ratio = 1.1. We used multinomial regression as part of the work and it is hard to conceptualise an appropriate power calculation for this method. We have therefore added a line to indicate this, but also stating that the very large sample size gives us confidence that we were powered for small effects in all but those variables where some classes were particularly rare (i.e. BAME classes, which are discussed in detail in the section on the study’s limitations).
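
As an illustration of this kind of calculation only (not the study's actual code or inputs, which are provided in S3 File), a simulation-based power check for a binomial logistic regression in R might look like the following; the sample size, baseline probability and odds ratio are placeholder values.

    # Illustrative simulation-based power check for a logistic regression.
    # All parameter values below are placeholders, not the study's inputs.
    set.seed(42)

    power_sim <- function(n, p0, or, n_sim) {
      beta0 <- qlogis(p0)   # intercept on the log-odds scale
      beta1 <- log(or)      # effect of a binary exposure on the log-odds scale
      hits  <- replicate(n_sim, {
        x <- rbinom(n, 1, 0.5)                        # binary covariate, 50% prevalence
        y <- rbinom(n, 1, plogis(beta0 + beta1 * x))  # outcome under the assumed model
        fit <- glm(y ~ x, family = binomial)
        summary(fit)$coefficients["x", "Pr(>|z|)"] < 0.05
      })
      mean(hits)  # proportion of simulated datasets in which the effect is detected
    }

    power_sim(n = 9000, p0 = 0.5, or = 1.1, n_sim = 200)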

4. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

Response : We have now included a copy of the survey in two formats. S1 File provides a PDF copy of the survey questions, along with possible answers for qualitative variables. S2 File provides the survey as an XLSForm design file (.xlsx). This format can be used directly in ODK to reproduce the survey and includes full detail of the form structure and logic controls used in the survey.

5. To comply with PLOS ONE submission guidelines, in your Methods section, please provide additional information regarding your statistical analyses. For more information on PLOS ONE's expectations for statistical reporting, please see https://journals.plos.org/plosone/s/submission-guidelines.#loc-statistical-reporting.

Response : We have added substantially more detail to the methods section on statistical analysis and for the purposes of replication by third parties we now provide our R analysis script as supporting information S3 File.
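
For orientation only, the sketch below shows the general form of the two regression types described; it uses synthetic data and hypothetical variable names, and is not the contents of S3 File.

    # Minimal sketch of a binomial and a multinomial logistic regression in R.
    # The data frame and its column names are synthetic illustrations.
    library(nnet)  # provides multinom()

    set.seed(1)
    dat <- data.frame(
      good_decisions   = rbinom(500, 1, 0.5),
      opinion_category = factor(sample(c("people", "economy", "unsure"), 500, replace = TRUE)),
      region           = sample(c("London", "Scotland", "Wales"), 500, replace = TRUE),
      education        = sample(c("school", "degree", "postgrad"), 500, replace = TRUE),
      income           = sample(c("low", "mid", "high"), 500, replace = TRUE)
    )

    # Binomial logistic regression of a binary opinion on demographic covariates
    binom_fit <- glm(good_decisions ~ region + education + income,
                     family = binomial, data = dat)
    exp(coef(binom_fit))   # odds ratios

    # Multinomial logistic regression for a categorical opinion variable
    multi_fit <- multinom(opinion_category ~ region + education + income, data = dat)
    summary(multi_fit)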

6. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

Response : After further discussion with our institutional data protection staff, we have been advised that there are no outstanding ethical or consent issues that restrict us from sharing the qualitative data. We have therefore made the entire dataset available through our institutional repository (datacompass.lshtm.ac.uk). The quantitative data can be accessed through https://doi.org/10.17037/DATA.00001851. Qualitative data are available at https://doi.org/10.17037/DATA.00001859.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

Response : This issue has been resolved and all data are now available using the links provided above.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide.

Response : This issue has been resolved and all data are now available using the links provided above. Thanks for updating the data availability statement on our behalf.

7. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Response : We have now added these captions.

Reviewers' comments:

Reviewer #1:

The research covers a topical discussion which is particularly relevant during this pandemic. The sample size is adequate but not representative of a broad spectrum of the population (acknowledged in the report). This is particularly important as the sample largely included educated, white females - whereas covid affected largely the BAME population - perhaps the study would have produced completely different results had this population group been sampled.

Response : We certainly agree that better representation of BAME communities in our survey would have potentially led to particularly interesting and somewhat different findings, but we do not agree that COVID-19 is a disease that affects mainly the BAME population. Whilst the diagnosis rates and risk of death among individuals from BAME backgrounds are disproportionately higher than that of, for instance, the white female demographic, whites still make up the majority of the population of the UK and in the period of the study, around 83% of all COVID deaths were in white patients.

The choice of a Facebook sampling method excludes many of the lower income BAME population as they are less likely to have access to internet and social media (due to costs).

Response : We appreciate this concern and agree that very low income groups may be disadvantaged with respect to internet access. We are however unaware of any specific evidence for substantial issues of inequity for BAME groups in internet access in the UK. Government data from 2019 suggested that more than 90% of all individuals in the UK had recently accessed the internet and that several BAME groups in fact had significantly higher levels of internet use; for instance among the Chinese community 98.6% of individuals had access, whilst among the white group, 90.5% had access. The true situation may be more complicated than the UK.gov data are able to describe, but in the UK context, we expect that engagement of BAME individuals in surveys such as this is a bigger barrier to participation than lack of internet access. Difficulties in the engagement, recruitment and retention of BAME participants to research studies are common throughout health sciences research and this is a problem that is of itself the subject of ongoing research.

Relying on a sampled person to then forward the invitation to participate would then perpetuate this sample choice. Also females are more likely to respond than males to online social media requests.

Response : This method of daisy-chained invitational dissemination accounted for only a small minority of responses, with the majority having come from the paid advertisement on Facebook. The advert targeting algorithms used by Facebook are certainly not random (for instance, they iteratively refine their target audience to maximise response, rather than to ensure demographic evenness) and we expect that this is a greater source of skewness in our demographic reach. The response to our survey is certainly biased towards females, but it should be noted that in numerical terms, we did in fact reach nearly 2000 males; providing a very substantial body of data.

Reviewer #2:

The authors indicated that the quantitative data are available without restriction, but not the qualitative data, because they contain sensitive information. They provided details on how to request access to this data.

Response : We have addressed this issue in our response to the editor (above)

This is a well-written, clear and nicely structured paper. It focuses on very topical and important issues related to government control strategies in the Covid-19 response in the UK. The mixed methods study is well-described and executed, and the methods are appropriate. The results are expertly and clearly presented, and the discussion is enlightening, perceptive and valuable. The limitations are frankly and transparently addressed. Important and helpful recommendations are made.

All in all, it is an excellent paper, based on an important study that makes a very valuable novel contribution to the field.

I am quite comfortable recommending that it be published without any amendments.

Reviewer #3:

Overall, a very well-written paper on an incredibly important topic in this pandemic year. I have only a few minor comments. I would ask the authors to better describe the design of the study so that the importance and rigour of mixed methods studies is highlighted. I further appreciate the authors' recognition of the limitations of online surveys.

• It lacks a section on the design of the study. You mention a quantitative survey and mixed method analysis separately, however this really sounds like an embedded mixed methods design (Creswell, Plano Clark et al 2003) to me. Can you explain this better.

Response : Thank you for this very helpful comment. We have now added a section to our methodology on research design, which includes a discussion of our multi-disciplinary team and how an embedded design allowed us to complement quantitative methods with a qualitative dataset on participants’ reasoning and narratives around the relationship between trust and transparency.

• Would have preferred to see the results per theme rather than per method, integrating both quantitative and qualitative results under thematic headings (with a first paragraph describing Respondent Characteristics). However this is just a preference not an actual requirement. Some of the explanations that are now under Results (for example: the second paragraph under Structural Text Modelling) fits better in the Methodology section. It would have been a more enticing read for me if the complementarity of the qualitative and quantitative data could have been highlighted through such integrated thematic sections.

Response : Thanks for this suggestion. When writing the original manuscript, we aimed to write a paper that followed a thematically organised structure such as the one described by the reviewer, but our initial attempts led to a rather messy narrative that somewhat jumped around. We feel that the way we currently present the data shows the work in a more intuitive way, leading from the quantitative statistical analysis, through the hybrid qual-quant STM, to the fully qualitative text coding and interpretation. In places (such as in the second paragraph of results on the STM analysis) we made decisions to include some passages in the results because the methodological step required reference to the topic names or other data that were derived from preceding steps in the analysis. By including these things in the results, we hoped to strengthen the clarity of the work and to prevent excessive need to keep referring back to the methods at key points such as this.

Minor comments results:

• “were members of a white ethnic group” – do you mean “reported to identify with…”? or how was this assessed?

Response : The data set is entirely self-reported and includes no variables that have been objectively verified. We have however no reason to doubt the veracity of the participants’ responses with respect to self-reported ethnic, cultural or gender identity; or indeed with respect to any other variable. Our feeling is that if we were to suggest that individuals ‘identified’ as members of a particular ethnic or gender grouping, then this could seem to imply that our team of researchers may not recognise the validity, or factuality, of the identities of our respondents. We prefer therefore to assume that misclassification is rare and to expect in a sample as large as this, that the impact of any incorrect or misclassified data will be very minor.

• Explain “devolved nations” for non-UK residents

Response : Thanks very much for highlighting how we had assumed a degree of knowledge on the UK’s governmental organisation that many readers would not have. We have now added a parenthesis to identify the UK’s devolved nations and to also explain their semi-autonomy from Westminster.

• Can you explain more about the rationale underlying these decisions? (in terms of choice of topics and choice of respondents): “We used the results of the STM analysis to focus our in-depth qualitative analysis on a subset of responses that (a) mapped to topics T1: Extent of Truth, T5: Implementation, or T7: Rationale/Politics and (b) came from social groups that were found to have been statistically most and least likely to positively evaluate the government’s decisions on COVID-19. These groups were (1) respondents from devolved nations, (2) respondents who were resident in England with lower and higher levels of education and (3) respondents resident in England and with incomes either under £15,000 or over £100,000.”

Response : Thank you for this comment. We have added the following text to our qualitative analysis section to highlight how we coded one particular open-ended question on transparency and then focused on those topics that allowed us to study directly how attitudes towards transparency were articulated through narratives about trust:

“The qualitative data analysis focused on responses to the question analysed through STM, namely: Q4: “Briefly describe what it is that [they thought] the government [was] not being fully truthful about”. As noted above, this question was included to understand perceptions of transparency and our analysis focused on articulations of (mis)trust within these responses. In order to do so, we chose three topics from the STM analysis that we felt would give qualitative insights into the relationship between trust and transparency and which also closely related to our other three quantitative questions that focused on trust as defined above (Q1, Q2, Q3 and Q5). As such, we conducted in depth thematic coding on topics for responses to Q4 that elaborated on perceptions of transparency itself (T1: extent of truth) and perceptions of the government’s implementation and prioritisation (T5 implementation and T7 rationale/politics).”

“In order to further tease out the relationship between trust and transparency we focused analysis on responses from the social groups that were found to have been statistically most and least likely to positively evaluate the government’s decisions on COVID-19.”

“Following the statistical analysis, these were identified as residents in devolved nations of Wales, Scotland and Northern Ireland (low trust) and high income/ low education (higher trust) and low income/ high education (lower trust), so we performed our qualitative coding on these 7 sub-sets of open-ended responses.”

Minor comments discussion:

• Would be good to read about why you think respondents with higher education and lower income were less likely to trust the government, and how this relates to the pre-existing institutional trust that you talk about later in the discussion. It is hinted that the consistent underfunding of the NHS plays a role in pre-existing institutional mistrust, but not really how this relates to either low income or high education levels.

Response : This is an excellent point that encouraged us to extend our analysis. We have made two key changes. In the section on the role of evidence we have emphasised that this was particularly a concern for English respondents with higher levels of education (qualitative responses by Scottish and Welsh participants were analysed all together and were not differentiated by education and income level). In the discussion we then proposed some interpretations. Whilst lower levels of income were already associated with lower levels of trust prior to COVID-19, education is normally expected to have a positive relation to institutional trust. This, combined with a greater concern for the transparent use of scientific evidence, suggests that for people with higher levels of education their observations of the management of the COVID-19 response affect their levels of trust regardless of pre-existing institutional trust.

• Perhaps more ideas could be elicited from a comparative analysis of what highly educated people and lower income respondents said. Overall, a more elaborated description of the differences between the groups selected for the qualitative exploration seems to be lacking. Perhaps there was little difference, but then that qualifies as an interesting result to be explored further in the discussion.

Response: As above, we have now included this particular distinction. In the introduction to the qualitative analysis section we also highlight that though overall responses were quite similar, under each theme we highlight where responses were more prominent for one or more sub-groups of respondents.

Reviewer #4:

A very interesting study! I have identified a few points that need revising.

Abstract:

Mentions trust and transparency. Perhaps clarify that it is perceived transparency.

Response : Thanks for making this important distinction. We have added text to the abstract to make it clear that we studied perceived and not objectively measured transparency.

Methods in abstract need more clarity. Online questionnaire shared through which channels? What type of statistical analyses were performed? Please add one sentence on how each methodological technique contributed towards addressing the main research question.

Response : We have now added some more detail to the abstract as requested.

Conclusion: A typo in the first sentence "different groups experiences."

Response : We could not find this error in our draft, but have revised the sentence to have a clearer grammatical structure and hopefully it is improved by the change.

The conclusion could highlight at least one specific interesting finding. For instance, could the authors recommend that the role of scientific evidence be highlighted? Or that such highlights be more obvious when targeting certain groups in society? What would be the one specific finding from this study that the authors hope that readers would retain? Recommend explicitly stating that finding in the conclusion.

Response : We have added a summary sentence to the section on recommendations, before extending the more nuanced discussion of findings.

Article:

Introduction: "...qualitative research in recent epidemics has shown that we need to understand the

dynamics of trust as they vary by socio-political context and the specific outbreak." Please add citations to support this claim.

Response : Thank you, these have now been added

How do the authors address the challenge that pre-existing (mis) trust may have shaped the respondents' evaluation of the government's pandemic response and perspectives on the transparency of information being made available to the public? If they acknowledge it, they need to discuss how this process might impact the interpretations of their quantitative results.

Response : In our qualitative analysis we discuss ‘generalised mistrust’ – identifying how some of the responses directly refer to this mistrust. We have added a further sentence to that section to highlight that this kind of pre-existing disposition towards government may also influence other types of responses—i.e. respondents may be more likely to be suspicious of the government if trust is low. In other words, we are not claiming that trust is simply related to how a response is managed, and our qualitative data show that pre-existing disposition towards government matters.

Introduction last paragraph, 1st sentence: Suggest re-ordering, or even better, stating in the form of "relationship of X with Y" instead of "between Y and X," where X is theorized as the cause and Y as the effect.

Response : We have made some changes to the paragraph that we hope address this point appropriately.

Need to include arguments for why the authors think that perceptions of transparency would influence trust in institutional responses to the pandemic and why not that deep trust in institutional responses to the pandemic (and in general) might influence perception of transparency.

Response: Thank you for this very important point. We have qualified this in the discussion to show that qualitative findings support that the relationship is not unidirectional, but rather that trust may also impact perceptions of transparency. This is further supported by the fact that we analysed perceptions of trust across groups with high and low levels of reported trust.

The last sentence in the introduction section needs to be split into two.

Response : This is now done.

Methods:

Please explicitly clarify the dimensions of trust that were investigated. The methods section description of the questions that were asked in order to assess trust and transparency does not make it clear which question is considered as measuring transparency. Need a clear description of which questions measure trust and which measure transparency.

Response : Thank you for this, we have now clarified this in the section on Trust and Transparency.

Need more detail in qualitative coding. Any steps taken to strengthen the reliability of the coding? And to triangulate the findings?

Response : This has now been added to the section on qualitative analysis. The STM process uses an iterative expectation maximisation algorithm to classify the topics. It works consistently across multiple runs. Because the coding is almost entirely performed without human intervention, it is also relatively resistant to bias, at least with regards to grouping subsets of responses together as belonging to a specific topic.
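
To illustrate the kind of workflow this refers to, the minimal sketch below shows how the stm R package is typically used to prepare open-ended responses and fit a structural topic model. It uses the package's bundled example data (gadarian) rather than the study's survey responses, and the number of topics and the prevalence covariate are illustrative, not the specification used in the paper.

    # Illustrative structural topic modelling workflow with the stm package.
    # Uses the package's example data, not the study dataset; K and the
    # prevalence covariate are illustrative choices.
    library(stm)

    processed <- textProcessor(documents = gadarian$open.ended.response,
                               metadata  = gadarian)
    prepped   <- prepDocuments(processed$documents, processed$vocab, processed$meta)

    fit <- stm(documents  = prepped$documents,
               vocab      = prepped$vocab,
               K          = 10,              # illustrative number of topics
               prevalence = ~ treatment,     # topic prevalence varies with a covariate
               data       = prepped$meta,
               seed       = 1234)            # fixed seed for reproducible runs

    labelTopics(fit)   # highest-probability words for each fitted topic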

Table 2: Title, typo. "respodents"

Response : Thanks, though we couldn’t find this one in our draft.

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 1

Adriano Gianmaria Duse

27 Jan 2021

Trust and Transparency in times of Crisis: Results from an Online Survey During the First Wave (April 2020) of the COVID-19 Epidemic in the UK

PONE-D-20-28524R1

Dear Dr. C Roberts

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Adriano Gianmaria Duse, MD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Adriano Gianmaria Duse

1 Feb 2021

PONE-D-20-28524R1

Trust and transparency in times of crisis: Results from an online survey during the first wave (April 2020) of the COVID-19 epidemic in the UK

Dear Dr. Roberts:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Adriano Gianmaria Duse

Academic Editor

PLOS ONE
