This piece is one in a new series we will be hosting on our blog, aiming to help make science more accessible to the public. It is a short but comprehensive summary of a recent paper, “Examining the Impact of Expert Voices: Communicating the Scientific Consensus on Genetically-modified Organisms”. Thank you to our volunteer Justin Marleau for contributing this post!
Showing scientific consensus may not impact individuals’ beliefs, but there is no consensus on that either
Picture this: a new technology stokes controversy and media coverage is sensational. Studies of high and low quality are bandied about by promoters and skeptics through the press and social media. Governments, worried about public perception and uncertain of what to do, call on a panel of prestigious, multidisciplinary experts to provide a definitive analysis of the technology and make recommendations. After months of research, interviews and evidence synthesis, the panel issues a three-hundred-page report declaring in unequivocal terms that there is no evidence the technology causes any harm. Surely this would end the controversy, would it not?
Recent work by Dr. Asheley Landrum and colleagues from the University of Pennsylvania’s Annenberg Public Policy Center suggests that such consensus-building exercises actually have little to no impact on public opinion. Dr. Landrum’s team created a survey that was distributed to workers in Amazon’s “Mechanical Turk” program (a service that crowdsources people to complete a variety of tasks for financial remuneration). The survey asked participants to answer questions about a statement on genetically modified organisms (GMOs). The statement asserted either that 1) the scientific consensus is that GMOs are just as safe as conventionally-bred organisms, 2) there is no scientific consensus on the safety of GMOs, or 3) there are conflict-of-interest problems with a report that examined GMOs. The study also had a control statement, which was about the history of baseball.1
The authors used perceptions of GMOs to test the “gateway belief model” of public opinion formation and change. A central idea of this model is that people care about being in agreement with their group and pay attention to what that group’s consensus is. In this model, receiving information that their beliefs are not in line with the group can lead individuals to revise their beliefs towards the consensus position. Following this logic, the authors expected that participants presented with the scientific consensus on GMOs would update their views in line with that consensus. The authors used several metrics to evaluate this expectation: participants answered several questions about their “concern about GMOs” and were asked to estimate the percentage of scientists who believe that GMOs are safe. According to the study’s hypothesis, participants who viewed the statement about scientific consensus on GMOs should show reduced concern and a higher estimated percentage of scientists in support of GMOs compared to those who saw the control statement; conversely, those exposed to the ‘no-consensus’ statement should show the opposite response. Of note, the researchers also evaluated baseline attitudes about GMOs by asking participants how often they buy non-GMO foods. To avoid biasing responses, this question was asked alongside a number of questions about general shopping habits.
What the authors saw was that the statements had little effect: overall, participants thought about 60% of the scientific community agreed that GMOs were safe, and they were ‘somewhat concerned’ regardless of which statement they had seen. In this study, what mattered were the prior beliefs of the participants. The more often participants indicated they specifically purchased non-GMO foods, the more likely they were to think there was no scientific consensus. Furthermore, the perceived persuasiveness of the statements was strongly related to prior beliefs: for example, those who indicated they normally avoid GMOs found the consensus statements less persuasive than those who did not. Based on these results, Dr. Landrum and colleagues conclude that exercises in developing and communicating a ‘scientific consensus’ on an issue will not lead to the desired outcome of lay people adopting the consensus position.
However, we should not be too quick to draw the same conclusions as the authors do from this study. One methodological issue is that the statements were not presented as arguments, and in many cases were about tangential issues. One statement described how Nobel prize winners wrote a letter to try to get Greenpeace to end its anti-GMO campaign. Another was primarily about conflict-of-interest issues for members of the National Academies of Sciences, Engineering, and Medicine (NASEM) panel that produced the report on GMOs. A participant who read only the text of either of these statements would have no idea that there is any scientific consensus on GMOs and their health effects. This problem was partially addressed in the ‘Nobel letter’ treatment with a graphic indicating that, according to the letter, there is a consensus, but it would be difficult to link the graphic to the text.
The conflict-of-interest statement, however, did not benefit from a graphic. Since the participants did not have the NASEM report or its conclusions when exposed to the conflict-of-interest statement, they could not know whether there was a scientific consensus. The unexpected patterns of response to this statement therefore make more sense: the readers had no idea what it had to do with GMOs and health risks, and had to guess at what it meant. The authors’ suggestion that the conflict-of-interest statement was equivalent to the European Network of Scientists for Social and Environmental Responsibility (ENSSER) anti-consensus statement, which explicitly says “There is NO scientific consensus”, is not supported by their results once the prior beliefs of participants are taken into account. This study would have benefited greatly from being more explicit about the major limitations of the conflict-of-interest treatment.
Furthermore, let’s consider the participants’ estimates of the percentage of scientists who agree with the consensus. In the data, there is a ten-percentage-point gap between the ENSSER anti-consensus treatment (54% of scientists agree) and the NASEM consensus treatment (64% of scientists agree). While this is not statistically significant with the current number of participants, it would be interesting to see whether a higher-powered study with more participants would show a sustained and significant gap between the two treatments and the control. If so, then such messaging does have an effect on the public, though a relatively small one. Another quirk in the results is that participants felt comfortable saying that fewer than 50% of scientists agreeing that GMOs are safe represents a ‘scientific consensus’. This suggests that many people have no idea what ‘consensus’ means, which is very troubling for studies of this type. Perhaps using ‘scientific majority view’ would have led to clearer results.
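As a back-of-the-envelope illustration of what a ‘higher-powered study’ would require, the sketch below estimates how many participants per treatment group would be needed to reliably detect a ten-point gap. This is a minimal sketch, not a reanalysis of the paper: the roughly 25-point standard deviation on the 0–100 estimate scale is an assumption made purely for illustration, as is the use of a simple two-sample t-test.

```python
# A rough power calculation for detecting a 10-point gap in the mean
# estimated percentage of scientists who agree that GMOs are safe.
# The ~25-point standard deviation on the 0-100 scale is assumed for
# illustration; the real study's variance may differ.
from statsmodels.stats.power import TTestIndPower

gap = 10          # percentage-point difference between treatments
assumed_sd = 25   # assumed spread of individual estimates (hypothetical)
effect_size = gap / assumed_sd  # Cohen's d of 0.4, a small-to-medium effect

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,        # conventional significance threshold
    power=0.80,        # 80% chance of detecting a true gap of this size
    alternative='two-sided',
)
print(f"Participants needed per treatment group: {n_per_group:.0f}")
# Under these assumptions, roughly 100 participants per group.
```

If the real spread of estimates were larger than assumed, the required sample would grow with the square of the ratio, which is why small between-treatment gaps like this one demand fairly large surveys to pin down.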
Lastly, the study did not measure the direct impact of the statements on ‘updating’ prior beliefs. Perhaps exposure to the consensus statements raised the consensus level in participants’ minds, and exposure to the anti-consensus statement lowered it; we cannot know, as there was no ‘pre-treatment’ measurement. While the authors worried that asking this question beforehand would contaminate the responses, its absence makes the results that much harder to interpret.
Overall, studies like this one provide scientists with an important reality check. We might think that with enough blue-ribbon panels and reports, we can convince our friends, families and neighbours that cellphones, GMOs and other technologies are relatively safe. However, we have to accept that many people are skeptical of consensus views, even those from the scientific community. Such difficulties do not mean it is not worthwhile for the scientific community to try to find consensus, which can have many other benefits, such as clearer messaging to policy makers. Nor does it mean scientists should stop communicating their research to the public; this is an important part of conducting meaningful research. Nevertheless, it does suggest we might want to shift resources towards other forms of science communication (social media, public talks, opinion pieces) rather than relying on blue-ribbon panel reports.
1A control treatment is used to establish baseline results against which the experimental treatments can be compared. When a treatment does not deviate from the baseline, we usually conclude that it has had no effect. In this study, the control treatment was a brief statement about the history of baseball from its origins to today. As luck would have it, the baseball statement contains a factual inaccuracy: a team has not needed to win its division to make the playoffs since the introduction of the wild card in 1994. This error highlights one of the weaknesses of the study: the authors did not take sufficient care in creating their treatments.
You can access the full, original publication here: https://www.tandfonline.com/doi/full/10.1080/17524032.2018.1502201
Justin N. Marleau is a post-doctoral fellow and course lecturer at McGill University. He currently works on building models of ecological communities and ecosystems to help predict their capacity to recover from lethal stressors like acid pollution through ecological and evolutionary mechanisms. He is also working on an illustrated textbook focusing on the history of science and is actively engaged in Canadian science policy through the Mitacs Canadian Science Policy Fellowship and collaborations with the Public Health Agency of Canada.