Volume 13 Supplement 3

Articles from the Eisenberg Center Conference Series 2012: Supporting informed decision making when clinical evidence and conventional wisdom collide

Open Access

Against conventional wisdom: when the public, the media, and medical practice collide

BMC Medical Informatics and Decision Making 2013, 13(Suppl 3):S4

DOI: 10.1186/1472-6947-13-S3-S4

Published: 6 December 2013



In 2009, the U.S. Preventive Services Task Force released new mammography screening guidelines that sparked a torrent of criticism. The subsequent conflict was significant and pitted the Task Force against other health organizations, advocacy groups, the media, and the public at large. We argue that this controversy was driven by the systematic removal of uncertainty from science communication. To increase comprehension and adherence, health information communicators remove caveats, limitations, and hedging so science appears simple and more certain. This streamlining process is, in many instances, initiated by researchers as they engage in dissemination of their findings, and it is facilitated by public relations professionals, journalists, public health practitioners, and others whose tasks involve using the results from research for specific purposes.


Uncertainty is removed from public communication because many communicators believe that it is difficult for people to process and/or that it is something the audience wants to avoid. Uncertainty management theory posits that people can find meaning and value in uncertainty. We define key terms relevant to uncertainty management, describe research on the processing of uncertainty, identify directions for future research, and offer recommendations for scientists, practitioners, and media professionals confronted with uncertain findings.


Science is routinely simplified as it is prepared for public consumption. In line with the model of information overload, this practice may increase short-term adherence to recommendations at the expense of long-term message consistency and trust in science.


In 2009, the U.S. Preventive Services Task Force (USPSTF) announced a change in mammography guidelines, a recommendation that sparked a torrent of criticism [1–3]. Previously, the USPSTF recommended a B grade for mammography screening every 1 to 2 years for women age 40 and older. A B grade means, “The USPSTF recommends the service. There is high certainty that the net benefit is moderate or there is moderate certainty that the net benefit is moderate to substantial” [4]. In 2009, the USPSTF altered its recommendations such that biennial screening for women from age 50 to 74 received a B grade and biennial screening for women from age 40 to 49 was downgraded to a C grade. A C grade means, “Clinicians may provide this service to selected patients depending on individual circumstances. However, for most individuals without signs or symptoms there is likely to be only a small benefit from this service” [4]. The new recommendation of the USPSTF was challenged by other organizations and patient advocacy groups as it was in direct conflict with the guidelines that had been communicated—by those groups and the USPSTF—for years [2].

One interpretation of this controversy is that the USPSTF encountered problems not because their message was perceived as unsubstantiated or inaccurate, but rather because it deviated from recommendations of the past in a fairly significant manner. Past communication about mammography had focused on a simple message: Women should have an annual mammogram starting at age 40 because screening saves lives. This message was a central component of health education efforts devoted to cancer, and advocacy groups were mobilized across the United States in support of retaining annual mammographic screening as recommended practice. However, the uncertainties of the benefits and harms associated with annual mammography were rarely included in these advocacy efforts. The USPSTF had acknowledged these uncertainties in its own reports, but changing the resulting recommendation based on these uncertainties created the appearance of a discrepant message or a flip-flop [3]. Changing the recommendation for women aged 40–49 from a B grade to a C grade suggested that the USPSTF had incorrectly categorized the certainty of the evidence on mammography in the past (i.e., it went from high/moderate certainty to uncertain).

It is clear in hindsight that, at the time the recommendation was published, the USPSTF members did not fully appreciate how contradictory their recommendation would appear or the potential backlash it would invoke [4]. The researchers and research-oriented practitioners who comprised the USPSTF believed that there were significant uncertainties concerning the value of mammography screening. Those uncertainties were known within the research community, and results from simulation research were starting to suggest the need for alterations in the screening guidelines [1]. Unfortunately, the USPSTF members and other contributors to the effort did not appreciate how the new recommendation would be received by members of the public (e.g., women at risk, community-based clinicians, public health officials) who had been steeped in messages about the value of annual screening mammography for decades. Nor did they appreciate how the public would perceive a downgrade in their recommendation.

The controversy over mammography guidelines raises significant questions about the communication of health recommendations, including: What went wrong in this situation? Who had responsibility for the significant misjudgment concerning the public reaction? How could it have been avoided? Based on these questions, we argue that this controversy is a predictable response to the systematic removal of uncertainty from the public communication of scientific content, a problem that undermines the credibility of science and confuses the public [5–10]. In other words, the 2009 mammography controversy was symptomatic of larger structural problems undermining the public dissemination of science rather than an isolated incident. The goal of this article is to articulate how conflicts of this type arise and to review possible means of redress.


The public learns about research primarily through media—television, the Internet, and newspapers. These channels are used to convey recommendations from official entities (e.g., the USPSTF), the results of single studies, and/or the accumulation of decades of research evidence [11]. This approach is necessary because few people have direct access to the research enterprise. Thus, the media disseminate research findings—a somewhat uncomfortable situation that often places media-focused enterprises and outlets at the center of scientific debates [12].

To understand the tension of this dissemination process, it is pivotal to know that scientists and journalists have distinct professional norms that often conflict [7]. Scientists value uncertainty, and this leads them to favor hedged discourse and longer, denser prose [13, 14]. Journalists value concise narratives that represent myriad perspectives to achieve balanced coverage [15, 16]. One can easily see this tension by comparing an academic journal article to its subsequent news coverage. Such comparisons reveal that news coverage of research often maximizes conflict by providing space to divergent voices—which are routinely manufactured or magnified to maximize conflict—and frequently omits the caveats, limitations, and uncertainties presented in the original journal reports [7, 9, 10, 12, 17–20]. For example, Lai and Lane [21] found that 43% of front-page newspaper stories about science were based on preliminary evidence. Of those stories, only 18% were described as preliminary or mentioned the limitations of the research.

Conflicting professional norms have led scientists to go through periods of media engagement and withdrawal. In the early 1900s, scientists embarked on a period of media engagement driven largely by the efforts of the Progressives—a term used to describe groups involved in a massive reform movement in the United States from approximately 1890 to 1920— who viewed research as the guiding force of reform [22]. During the Progressive Era, scientists were trained to streamline their messages when communicating with the public to ensure that science was the voice of authority in matters of policy [14, 15]. This period of media engagement was followed by a significant withdrawal near the middle of the 20th century. The rationale for withdrawing was articulated best by Popper, who argued that science needed to embrace uncertainty and abandon the desire “to be right” ([23] p. 280-281). Moreover, concerns about the relationship between science and the media ultimately led others to eschew publication of research that was prematurely disseminated to the public. For example, Franz Ingelfinger, editor of the New England Journal of Medicine from 1967–1976, decreed in 1969 that his journal would no longer publish research that had been released to the media in advance, as “premature publicity about medical research and publicity about work that has not yet been documented cause confusion among laymen and the profession alike” ([24] p.825). Ingelfinger’s policy still allowed for interactions between scientists and media professionals, but only after research had been sufficiently vetted by the peer review process. Thus, Ingelfinger advocated a divide between scientists and media professionals to protect scientific inquiry from the negative influences of hasty public dissemination. Both Popper and Ingelfinger seemed to appreciate that media professionals value a definitive claim, and that once that claim is made it could undermine or jeopardize scientific credibility.

More recently, science seems to be moving back toward engagement. Scientists are once again seeking training in how to interact with media professionals [25, 26]. Not surprisingly, this training often focuses on simplifying scientific statements so they appear more certain and (presumably) more lucid for nonscientists. The Progressives supported this approach to solidify scientists as key decision makers, but modern advocates of simplification are interested in increasing comprehension and adherence among members of larger audiences, including the general public. The health literacy and plain language movements, for example, both posit that crafting scientific information for public consumption is primarily a process of simplification [27]. The logic seems to be that many problems in the communication environment could be solved by the removal of scientific jargon and/or extraneous verbiage.

As science returns to media engagement, some have cautioned that effective communication of scientific information should be guided by the philosophy of science (e.g., Popper’s caution about avoiding the “need to be right”) and a growing body of literature evaluating the benefits and harms of removing uncertainty from scientific discourse [8, 27]. The basic assumption of this approach is that conflict among scientists, the media, and the public will occur, but the emergence of such conflict should be a secondary concern among those charged with the communication of scientific information. Their primary concern should be to foster a conversation that includes uncertainty, rather than streamlining messages to achieve (what are often) short-term objectives in conveying a specific point. In the analysis section, the logic, evidence, and future directions of this alternative strategy are outlined.


Streamlining and uncertainty

Two key terms need to be defined for this discussion: streamlining and uncertainty. Streamlining is the process of removing information as a message moves through communication channels. In science, Star [28] argued that streamlining often begins with researchers as they omit countless details from their research reports. Of course, streamlining is necessary as it is impossible to include all details in a message. For example, researchers might note that the temperature of their laboratory was kept at 72°F during the study but fail to mention relative humidity (as they view that as irrelevant). The streamlining process continues as research is moved forward by researchers positioning their work for publication and after publication as public relations professionals craft shorter press releases to drive media coverage. Journalists further streamline the material to fit the space requirements of their publication, and additional streamlining may occur when news coverage is reappropriated by bloggers, social media, or even in interpersonal conversation.

What information is streamlined in science communication? Two factors are systematically removed during the streamlining process: lexical complexity and uncertainty. Scientific discourse is more lexically complex than other forms of communication [29], and the removal of jargon or multisyllabic terms is standard practice when preparing a document for public consumption [27]. This practice often lowers the reading level of the message, which may benefit audiences with lower literacy [30], although the costs and benefits of reducing lexical complexity have yet to be fully investigated [27].

In an effort to reduce lexical complexity, communicators frequently cut uncertainty from the message as well. Uncertainty is both a perception and a message feature. A person can feel uncertain and a message can convey uncertainty. Brashers argued that uncertainty is a complex self-perception that a situation is “ambiguous, complex, unpredictable, or probabilistic,” and it occurs “when information is unavailable or inconsistent” ([31] p.478). As a message feature, uncertainty is cut to reduce lexical complexity and because many communicators believe that audiences want to reduce or avoid uncertainty [3234]. Uncertainty management theory, on the other hand, posits that people sometimes prefer uncertainty [31]. Identifying when and why people prefer uncertainty is the primary objective of uncertainty management research.

Uncertainty in science comes in at least two forms: lexical and discourse-based [35]. Lexical uncertainty occurs when a communicator uses hedging (e.g., may, if, perhaps, might) to suggest uncertainty. “Blueberries may prevent throat cancer” is a claim with lexical uncertainty whereas “blueberries prevent throat cancer” does not contain lexical uncertainty. Discourse-based uncertainty occurs when a communicator provides a reason that a claim is uncertain. For instance, if researchers note that a study used tomato powder instead of tomatoes, and thus the impact of tomatoes is still unknown, that would be an example of discourse-based uncertainty.
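The lexical form of uncertainty is simple enough to operationalize in content-analysis code. As a minimal sketch (the hedge list below is illustrative, not a validated coding dictionary), a claim can be flagged for lexical uncertainty by checking for hedge terms:

```python
# Minimal sketch: flagging lexical uncertainty (hedging) in a claim.
# HEDGES is an illustrative word list, not a validated coding dictionary.
HEDGES = {"may", "might", "could", "perhaps", "possibly", "if"}

def has_lexical_uncertainty(claim: str) -> bool:
    """Return True if the claim contains at least one hedge term."""
    words = {w.strip(".,;:!?").lower() for w in claim.split()}
    return not HEDGES.isdisjoint(words)

print(has_lexical_uncertainty("Blueberries may prevent throat cancer"))  # True
print(has_lexical_uncertainty("Blueberries prevent throat cancer"))      # False
```

Discourse-based uncertainty, by contrast, hinges on whether a reason for the uncertainty is articulated, and so resists this kind of simple keyword matching.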

Streamlining, uncertainty, and the public communication of science

Public communication of science moves fast, perhaps faster than scientists recognize. The desire for definitive information on pressing issues of the day can foster a culture of short, overly certain messages that seem to change over time. An infamous example of this tendency is news coverage of margarine and butter. For several decades, researchers have examined the relative health benefits of margarine and butter. Individual studies have yielded data supporting one or the other (and sometimes neither), which has led to a series of stories touting margarine over butter, then butter over margarine, then margarine over butter, and so on [36, 37]. Research on this topic is relatively uncertain, yet news coverage has often presented the issue as certain and in line with the findings of each new study. The margarine-versus-butter storyline is typical of news coverage in that science is often presented in brief stories that seem to flip-flop over time [38]. This flip-flopping is driven by journalistic norms that cut lexical and discourse-based uncertainty and favor conflict and newsworthiness [39].

Despite a renewed interest in simplification, the reality is that public communication of science typically is simple (in the short term), and this fact can produce confusion (in the long term). For example, the controversy about the USPSTF mammography guidelines was predicated on decades of simple, adherence-focused communication. Simple messages advocating annual mammography increased adherence among U.S. women over 40 years of age from 29% in 1987 to 70.4% in 2000 [40]. Streamlining communication may maximize behavioral response, but that same simplicity potentially triggers backlash if the recommendation needs to be changed. In other words, the controversy about the USPSTF mammography guidelines was, in many respects, a classic margarine-versus-butter situation.

Uncertainty and the public

Research suggests that many adults have limited health and science literacy [27]. In light of these skill deficiencies, communicating scientific uncertainty to the public may sound like a misguided idea. However, lexical complexity and uncertainty are distinct message features; there is nothing about uncertainty that requires lexical complexity. For example, the following sentences are written at a first-grade reading level (Flesch-Kincaid Grade Level = 0.6), contain no passive constructions, and have a Flesch Reading Ease score of 100: “This study used mice. Mice and people are not the same. We do not know if it will work in people.” In terms of science literacy, existing measures of this construct do not test comprehension of scientific uncertainty; therefore, the public’s ability to process uncertain scientific statements is still largely unknown [34].
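The readability figures cited above come from the standard Flesch formulas. A quick sketch follows, with word, sentence, and syllable counts for the example passage tallied by hand (automated syllable counting is itself only approximate):

```python
# Flesch-Kincaid Grade Level and Flesch Reading Ease formulas.
def fk_grade_level(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Scores above 100 are conventionally capped at 100.
    raw = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    return min(100.0, raw)

# Hand counts for: "This study used mice. Mice and people are not the same.
# We do not know if it will work in people."
words, sentences, syllables = 21, 3, 24

print(round(fk_grade_level(words, sentences, syllables), 1))  # 0.6
print(flesch_reading_ease(words, sentences, syllables))       # 100.0
```

The grade-level result (0.6) matches the first-grade figure reported in the text; uncertainty, in other words, can be expressed at a very low reading level.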

What is known is that, among the public, there is widespread fatalism and overload concerning health information. The Health Information National Trends Survey (HINTS) is a national survey of U.S. adults conducted approximately every other year. Focused primarily on cancer, the HINTS data have shown that: 27% of adults believe, “There's not much people can do to lower their chances of getting cancer”; 41% agree with the statement, “It seems like almost everything causes cancer”; and 71.5% express the view that, “There are so many recommendations about preventing cancer, it's hard to know which ones to follow” [41]. The first two beliefs are examples of cancer fatalism, whereas the last belief is cancer information overload [42]. Adults who embrace ideas associated with higher fatalism and overload are less likely to engage in cancer prevention and detection behaviors [42, 43].

Importantly, fatalism and information overload are negatively correlated with education [42, 44]. That is, people with less education are more likely to exhibit fatalism and overload. This relationship suggests that education provides something that allows adults to process cancer information without triggering negative reactions. It is possible that education provides the basic literacy and numeracy skills necessary to comprehend news coverage of research (news coverage is typically written at a ninth-grade reading level [45]). It is also possible that education provides a context for understanding news coverage. Research to date has shown that including discourse-based uncertainty in news coverage of cancer research decreases cancer fatalism and nutritional backlash [8]. No study has examined whether uncertainty is related to information overload.

The future of uncertainty

The first question that needs to be addressed is whether lay adults can meaningfully process scientific uncertainty. Studies examining the positive and negative effects of lexical and discourse-based uncertainty would be especially useful. Uncertainty management theory posits that people respond to uncertain information in complex ways (e.g., avoidance, engagement, anger, relief, confusion). How people respond to uncertainty will likely depend on the type of uncertainty, how it is communicated, and individual skill and dispositions [34, 46]. From a theoretical standpoint, the controversy about the USPSTF mammography guidelines raises questions about the concealment of uncertainty. Is uncertainty perceived differently if people believe it was initially downplayed or omitted from discourse? The perception that information was withheld may damage the credibility of the communicator and raise identity issues for those who advocated the original message. Indeed, the mammography controversy frustrated many screening advocates who had personally endorsed a course of action (and message) that was now being questioned. Understandably, many screening advocates felt betrayed, embarrassed, and angry. All of these reactions raise questions about logic and timing in communicating uncertainty. Of course, only the aggregation of numerous studies across different contexts, time intervals, and forms of uncertainty will yield generalizable knowledge to guide effective communication practice [7, 8, 33, 34].

The USPSTF was hindered by a suboptimal communication strategy about the mammography controversy, but they may be working in the right direction in a larger sense. For example, one USPSTF goal is to categorize the state of knowledge on particular public health issues. Part of this categorization process is an assessment of the level of certainty regarding the net benefit of a health behavior. Evidence is categorized as low, moderate, or high in terms of certainty (see Table 1 for the criteria of each category [47]). Categorizing certainty is a potentially useful idea, and research should investigate public comprehension of the categories. However, health researchers should also consider whether the number of categories—three at present—is sufficient. The goal is to provide a sufficiently nuanced categorization scheme to allow researchers to accurately describe the evolution of a research program. It could be argued that the categorization scheme currently used by the USPSTF failed the 2009 Task Force, as their desired course of action (i.e., a change in the recommendation based on growing uncertainties) could not be properly conveyed. An alternative categorization scheme could focus on the state of the research with categories like no studies, isolated, infancy, emerging, and established. The model of information overload posits that people need to categorize information to process scientific research meaningfully [8]; therefore, cultivating a widely recognized and sufficiently detailed categorization scheme could be a valuable addition to public communication of science.
Table 1

How the U.S. Preventive Services Task Force categorizes level of certainty

Level of Certainty



High

The available evidence usually includes consistent results from well-designed, well-conducted studies in representative primary care populations. These studies assess the effects of the preventive service on health outcomes. This conclusion is therefore unlikely to be strongly affected by the results of future studies.


Moderate

The available evidence is sufficient to determine the effects of the preventive service on health outcomes, but confidence in the estimate is constrained by such factors as:

• the number, size, or quality of individual studies

• inconsistency of findings across individual studies

• limited generalizability of findings to routine primary care practice

• lack of coherence in the chain of evidence

As more information becomes available, the magnitude or direction of the observed effect could change, and this change may be large enough to alter the conclusion.


Low

The available evidence is insufficient to assess effects on health outcomes. Evidence is insufficient because of:

• the limited number or size of studies

• important flaws in study design or methods

• inconsistency of findings across individual studies

• gaps in the chain of evidence

• findings that are not generalizable to routine primary care practice

• a lack of information on important health outcomes

More information may allow an estimation of effects on health outcomes.

Note: Evidence regarding the net benefit of health behaviors is categorized as low, moderate, or high using the above criteria.

One intriguing area of future research is the study of visual depictions of uncertainty [48]. Effective visuals may overcome literacy and numeracy deficits. Visuals could also convey complexity more efficiently and address the space constraints that often drive the streamlining process. The 2009 mammography controversy might have been avoided if communicators had possessed effective visual formats for contextualizing the magnitude of uncertainty about mammography, especially as that research unfolded over time. Concerning the latter, a timeline visual depicts key milestones/events in a way that establishes the duration of activities (years, decades, centuries), the evolution of ideas, and the level of uncertainty. In practice, timeline visuals should encourage both communicators and audiences to consider the totality of a situation rather than focusing only on the details of the latest event. Timelines have been in use for only approximately 250 years, and the public initially struggled to comprehend this new visual format [49]. Social scientific research on the use and efficacy of this format remains in its infancy: there is evidence that timelines enhance information recall [50], but research concerning comprehension is limited. This is an interesting omission given the tendency of visual researchers to use timeline-oriented visuals (e.g., Florence Nightingale’s depiction of deaths during the Crimean War [1853–1856]) as exemplars of quality [48]. Visuals may also be ideal for communicating the magnitude of the uncertainty in context (henceforth, context-magnitude visuals). Researchers often use context-magnitude visuals to demonstrate the relative size of an effect; for example, scholars have contextualized the relationship between exposure to media violence and aggression by visually depicting other (weaker, stronger, comparable) relationships at the same time [51].
Similar visuals could be constructed and evaluated for communicating the magnitude (or even form) of scientific uncertainty in a given situation. Of course, researchers should also be cautious because there is a risk that communicating uncertainty in this fashion could mislead or confuse target audiences.

Another promising area for future research is the communication of uncertainty via interactive media. Media are evolving in ways that challenge traditional journalism practices including the organization and form of content. Interactivity allows content to unfold at the discretion of the consumer, and this provides a vehicle for conveying complex information to diverse audiences in meaningful ways [27].


Competition between conflicting opinions is a healthy part of public discussion in situations defined by uncertainty. Silencing dissenting opinions when an optimal course of action is unclear creates a potentially hostile communication environment. That aside, conflict that derives from known biases in communication channels is a concern. Such conflict may stem from systematic efforts to simplify information for public consumption. Though well intentioned, simplification strategies may have unintentional negative impacts on certain individuals or population subsets such as the cultivation of fatalism, backlash, and overload [8]. Simplification strategies could also undermine or damage the credibility of the science [7].

Practitioners and media professionals will be interested in possible solutions. Continued research on uncertainty management, uncertainty categorization, and visual depictions of uncertainty may identify promising communication strategies for specific populations and situations. Until then, communicators should consider the long-term goals and consequences of their strategies in addition to the philosophical and ethical foundations of science. Even if future research suggests that adults with skill deficits struggle to process uncertainty, and that this contributes to problems they have in managing their own health care, communicators will be faced with determining whether it is ethical to conceal such information from populations with literacy deficits or from the population as a whole [34]. Cutting, removing, or simplifying information for public consumption is (once again) a popular strategy. Yet simplification, in and of itself, is not a virtue of communication, even if it may be effective at achieving some goals. Simplification is a message strategy or feature that can yield positive and negative effects. Rather than focusing solely on simplification as a goal, communicators should strive to be meaningful and to embrace strategies that achieve that goal regardless of their simplicity or complexity. Meaningful health recommendations may need to include indicators of uncertainty even if doing so sacrifices short-term adherence for long-term coherence.

List of abbreviations used


HINTS: Health Information National Trends Survey


USPSTF: U.S. Preventive Services Task Force



The authors would like to thank Rick Street for helpful guidance during this process.

Declarations and disclaimer

The Eisenberg Conference Series 2012, Supporting Informed Decision Making When Clinical Evidence and Conventional Wisdom Collide, was conducted in Rockville, Maryland, by the John M. Eisenberg Center for Clinical Decisions and Communications Science, Baylor College of Medicine, Houston, Texas, for the Agency for Healthcare Research and Quality under Contract No. HHSA 290-2008-10015-C. Publication costs for this supplement were funded by this contract. The author of this article is responsible for its content. No statement may be construed as the official position of the Agency for Healthcare Research and Quality and of the U.S. Department of Health and Human Services.

This article has been published as part of BMC Medical Informatics and Decision Making Volume 13 Supplement 3, 2013: Articles from the Eisenberg Center Conference Series 2012: Supporting informed decision making when clinical evidence and conventional wisdom collide. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcmedinformdecismak/supplements/13/S3.

Authors’ Affiliations

Department of Communication, University of Utah



© Jensen et al; licensee BioMed Central Ltd. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.