Introduction

In recent years, social media (SM) platforms have enabled individuals to search for, obtain, and share health information more readily. SM platforms are commonly used to disseminate health-related information and connect communities of users with similar interests or experiences [1, 2]. However, given the ubiquity of information online, the quality of health information on SM is mixed, generating concern over the legitimacy and reliability of health information found on these platforms [2, 3]. As many individuals are inundated with health information on SM, the burden of critically evaluating content and assessing its credibility largely falls on users [4]. Limited ability to discern the credibility of health information could have concerning real-world consequences if user-generated content on these platforms leads SM users to endorse or spread uncorroborated health claims [5]. For example, current research regarding the COVID-19 pandemic has demonstrated that widespread misinformation on SM could be associated with problematic real-world behaviors, such as non-compliance with mask wearing and social distancing guidelines [6].

In order to mitigate the negative effects of health misinformation, it is imperative to first understand the factors, such as user characteristics, associated with how people assess the quality and believability of a health message. First, a person’s health literacy, which broadly refers to the “ability to obtain, process, understand, and communicate about health-related information needed to make informed health decisions,” plays a critical role in information processing and evaluation [7]. When confronted with health information on SM platforms, users must utilize literacy skills to discern facts from falsehoods. Indeed, with increasingly equitable access to health information for people across health literacy levels [8], users with limited health literacy may be unable to accurately assess information credibility [4]. Depending on the health topic, source, message veracity, and message format, individuals’ health literacy may play different roles in how people process and determine the validity of a health message.

Second, message features, such as veracity, content, and format, all play a significant role in the way SM messages are evaluated. Research suggests much of the health information disseminated on SM platforms tends to be misinformation rather than evidence-based information [9, 10]. This requires individuals to engage their cognitive skills to determine the veracity or legitimacy of a health message on SM [11]. In addition, features like information complexity affect the believability of a message [12]. Information complexity includes how message content (i.e., specific elements of the body of the text) is created, which in turn influences message believability [13]. Further, the use of syntax, technical jargon, and general choice of terminology can alter perceptions of the credibility of a message [14]. Marketing research has shown that how a message is presented, in addition to the text of the message, can influence how it is perceived [15]. Lastly, message format, such as the use of narratives and storytelling to convey a message, has been found to influence perceived believability: the emotional engagement associated with a health topic can make people vulnerable to spreading misinformation [16]. Conversational and testimonial formats as well as personal experience narratives have been shown to increase perceived message authenticity compared to didactic formats [17,18,19].

As established, one’s health literacy skills and the features of a message play a role in an individual’s evaluation of message believability. Subsequently, it becomes important to investigate whether those factors also affect attention to a message. Assessing an individual’s actual cognitive processing provides an opportunity to uncover how one views certain aspects of a message and whether attention to different components of a message influences assessments of message believability. “Elaboration” on a message, in the terms of the elaboration likelihood model (ELM), entails an individual’s motivation to pay attention as well as the cognitive ability to engage with the information being presented [20, 21]. One’s ability to engage with and scrutinize a message directly relates to a person’s health literacy and whether he/she is persuaded by a message [22, 23]. ELM posits that individuals with varying health literacy levels process information through one of two “routes”: (1) the central route, which operates when one has high elaboration and the cognitive ability to process message quality, or (2) the peripheral route, which operates when one elaborates through low-effort processes and utilizes heuristic cues to understand a message [24].

Researchers can measure SM users’ visual attention within these “routes” of persuasion through eye-tracking methodology, which gives insight into users’ attention to various components of a message [25]. For example, one study measured attention through total fixation duration (a common eye-tracking metric, defined as a stable gaze lasting more than 80 ms) and highlighted that participants paid attention to the source when looking at a news post and used this information as a criterion for deciding whether to read or to skip the news post [26]. Although reliance on source cues has been conceptualized as a lower-effort mode of information processing when it comes to message persuasion [23], source credibility plays a prominent role in message assessment when other information that may be used to judge the actual quality of a text is limited [27].

Eye-tracking studies have suggested that SM users do not have unlimited cognitive capacity to process health messages, and therefore attend to only salient factors of a message [28, 29]. The source of a message is one such salient factor, commonly viewed as a heuristic cue, that people often use to evaluate information presented to them when the content is difficult to comprehend [30,31,32,33,34,35,36]. A previous eye-tracking study examined the time spent on simulated cancer-related Facebook messages and found that participants tended to spend more time on health messages from lay people compared to health organizations [28]. Research has also established that source trustworthiness (an information provider’s perceived intention to tell the truth or give unbiased information) is directly associated with message credibility [37]. An additional analysis from another eye-tracking study indicated that participants who report having a high level of trust in a message source tend to rate a health message from that source as more believable [38]. As highlighted, attention to the source of a message plays a vital role in message credibility and may help explain participants’ perceived message believability.

Expanding upon prior work using a combination of eye-tracking and survey methodologies on SM message processing [28, 38], we investigated how people with varying health literacy view and assess simulated Facebook health messages. Specifically, this mixed methods study sought to assess how message features and participants’ health literacy predict assessment of message believability and time spent looking at the messages. As illustrated in Fig. 1, three research aims guided our conceptual model.

Fig. 1

Conceptual model

First, we explored how message features and participants’ health literacy were associated with perceived message believability (Aim 1). Second, complementing Aim 1 and relying on eye-tracking metrics, we assessed the association between attention to specific message components and perceived message believability (Aim 2). Lastly, we explored how message features and health literacy were associated with attention to message components (Aim 3). The goal of Aim 3 was to examine whether attention to the text of a message was impacted by individual characteristics and message features. However, to fully understand the effects of health literacy and message features, we also explored how these predictors were associated with attention to the source and image of the message.

Methods

Recruitment and Participants

Eighty participants from the metropolitan Washington DC-Maryland-Virginia area were recruited by a research recruiting firm, which screened participants to determine eligibility. Eligible participants had to be at least 18 years old and use SM regularly—defined as logging into at least one SM account (e.g., Facebook, Twitter, Instagram, or Pinterest) daily. Efforts were made to recruit respondents from diverse demographic groups. A total of 27 participants were excluded from the final sample, for example, due to cancellations, technical issues, or poor eye calibration and low gaze samples. The final analytic sample included 53 participants (see Fig. 2 highlighting the participant flowchart). Data were collected June–October 2018.

Fig. 2

Participant flowchart

Study Stimuli

A total of 16 target posts were developed for the study. Each post focused on one of two prominent cancer prevention topics frequently discussed on SM: human papillomavirus (HPV) vaccination and sunscreen safety [39, 40]. Target posts varied based on the following manipulated features: (1) message format (whether the text of the message was a narrative/story or a factual statement without a narrative), (2) message source (lay individual or a health organization), and (3) message veracity (evidence-based or non-evidence-based information given current scientific evidence on the topic).
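Fully crossing the two health topics with the three manipulated features yields the 16 target posts. A minimal sketch of that factorial design (the condition labels are illustrative, not the authors’ exact wording):

```python
from itertools import product

# Manipulated features as described in the text; labels are illustrative.
topics = ["HPV vaccination", "sunscreen safety"]
formats = ["narrative", "non-narrative"]
sources = ["lay individual", "health organization"]
veracity = ["evidence-based", "non-evidence-based"]

# Crossing 2 topics x 2 formats x 2 sources x 2 veracity levels
# reproduces the 16 distinct target posts.
conditions = list(product(topics, formats, sources, veracity))
assert len(conditions) == 16
```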

Procedures

The study employed a mixed methods approach, integrating eye-tracking and survey questionnaires. After obtaining informed consent, participants were guided through a standardized onscreen calibration exercise to check the accuracy of a Tobii T120 Eye Tracker [41], which tracked eye movements as participants completed the study. A Dell computer with a 27-in. monitor and a 16:9 screen aspect ratio was used for the experiment. Participants sat approximately 24 in. from the monitor. After calibration was completed, participants were directed to view three simulated Facebook feeds. Each feed contained six messages formatted to look like standard Facebook posts: five “distractor” posts about non-health topics (e.g., weather, fashion) and one investigator-developed target post that always appeared as the second post in the Facebook feed. Each participant was randomized to view three of the 16 possible target posts in the simulated feeds (each participant viewed at least one post about the HPV vaccine and at least one about sunscreen). Although the format for target posts approximated the look of an authentic Facebook post, the posts themselves were not interactive (i.e., hyperlinks were not active and the “like” button could not be clicked) in order to facilitate interpretation of eye-tracking data. Distractor and target posts were generally comparable in size, length of text, and use of imagery. All participants received a random sequence of stimuli to limit order effects. After viewing the three feeds, participants completed a series of message assessment surveys asking them to evaluate each of the three target posts they viewed in the feeds, as well as three new target posts to which they had not previously been exposed. Each message was displayed onscreen while individuals answered the survey questions; participants continued to have their eye movements tracked while completing the surveys.
Eye-tracking data collected while participants were filling out the message assessment surveys were utilized for the purposes of this analysis. At the end of the session, participants were debriefed with scientifically accurate information about the HPV vaccine and sun safety and received $75 in compensation for their time. The protocol was reviewed and deemed exempt by the Ethics Committee and IRB at the authors’ institution.

Survey Measures

Perceived message believability was assessed with one survey item that asked, “Please tell us what you think about this post: This post is…” Response options ranged from 1 (Not Believable) to 7 (Believable).

Health literacy was assessed using the Newest Vital Sign assessment tool [42]. A score of 0–2 suggests an individual has limited health literacy, while a score of 3–4 suggests adequate health literacy.

Eye-Tracking Metrics

Areas of interest (AOIs) were created to capture the time spent on specific areas of the simulated post (namely, the text, source, and image portions of the post). Defining AOIs allows researchers to measure the distribution of attention between specific regions of a stimulus [43]. Each AOI was drawn as a specific shape on the stimulus to delineate the amount of time spent in that region. For the purposes of this study, attention was measured as relative time within an AOI (a commonly used eye-tracking metric), defined as the time spent on each AOI divided by the total duration of time spent on the post. To account for variability in the amount of visual information on the survey pages (e.g., text length, image size), the analyses were adjusted for pixel size. The primary eye-tracking outcome of interest was relative time on the text AOI, measuring time spent on the content of the message. In addition, as exploratory analyses, our secondary outcomes of interest included relative time on the source AOI and relative time on the image AOI. Because attention is defined as relative time spent on specific areas of the stimulus, the measures are dependent on each other (e.g., spending 50% of the time on the text leaves only 50% of the total time for the source and image); our primary analysis therefore focuses on the text only. However, we do report results on source for reference.
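The relative-time metric described above can be sketched as a simple normalization of per-AOI dwell times. This is an illustrative computation only; the dictionary keys and millisecond inputs are hypothetical, not the Tobii export schema or the authors’ pipeline:

```python
def relative_time(aoi_durations_ms):
    """Divide time spent on each AOI by the total time spent on the post."""
    total = sum(aoi_durations_ms.values())
    if total == 0:
        # No recorded gaze on the post: all shares are zero.
        return {aoi: 0.0 for aoi in aoi_durations_ms}
    return {aoi: t / total for aoi, t in aoi_durations_ms.items()}

# Hypothetical dwell times (ms) for one post's three AOIs.
shares = relative_time({"text": 3000, "source": 1000, "image": 1000})
# The shares sum to 1, so the three AOI measures are not independent:
# more relative time on the text necessarily means less on source and image.
```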

Analytical Approach

Perceived Message Believability Ratings

Participants’ believability ratings were measured on an interval scale (1–7) for each post. The data were entered into an ordered-probit Bayesian hierarchical model (BHM) [44]; an ordered-probit model is specifically designed for interval data. For interval data (e.g., Likert scales), a BHM provides more accurate estimates than models that assume the data (e.g., participant believability ratings) come from a continuous distribution [45]. Predictor variables for Aim 1 included health literacy at the participant level (Adequate, Limited), message format (Narrative, Non-narrative), and message veracity (Evidence-based, Non-evidence-based). Predictor variables for Aim 2, relative time on the AOIs (text, source, image), were input as continuous variables. Along with analyzing the main effects of each predictor variable, interaction effects were assessed to understand the joint effect of the predictors on perceived message believability. Posterior distributions were used to estimate mean differences between conditions for main effects and interactions of the predictor variables. For each trial, the model accounted for variance associated with each participant (1–53), the simulated Facebook post (1–16), and post topic (i.e., HPV vaccine, sunscreen safety). In Bayesian statistics, an effect (e.g., a mean difference or interaction effect) is considered “statistically credible” if the 95% highest density interval (HDI) of its posterior distribution does not contain zero; the HDI is analogous to a confidence interval in traditional statistical methods.
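The “statistically credible” criterion rests on computing the narrowest interval that contains 95% of the posterior mass. A minimal sketch of an HDI computed from posterior samples (a generic implementation, not the authors’ software; the normal samples stand in for an arbitrary posterior):

```python
import numpy as np

def hdi(samples, cred_mass=0.95):
    """Narrowest interval containing `cred_mass` of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n_in = int(np.ceil(cred_mass * len(s)))
    # Width of every candidate interval holding n_in consecutive samples.
    widths = s[n_in - 1:] - s[:len(s) - n_in + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + n_in - 1]

rng = np.random.default_rng(0)
lo, hi = hdi(rng.normal(0.0, 1.0, 100_000))
# For a symmetric posterior centered at zero, the HDI straddles zero,
# so this effect would NOT be deemed statistically credible.
```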

Eye-Tracking Metrics

Relative time spent on each AOI for each post was entered separately into a BHM. This model assumed that a given proportion comes from a beta distribution with a mean of theta (θ). The primary analysis included the text AOI as the main dependent variable, while secondary analyses included the source AOI and image AOI. Similar to the perceived believability analysis, for Aim 3, health literacy, message veracity, and message format were input as predictor variables separately, to assess main effects, and then combined to examine joint interaction effects. For each trial, the model controlled for pixel size, each participant (1–53), the simulated Facebook post (1–16), and post topic (i.e., HPV vaccine, sunscreen safety). The model estimates the mean differences between conditions in the experiment, which are reported here as percentages along with 95% HDIs.
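Modeling a proportion with a beta distribution whose mean is θ typically uses the mean–precision parameterization. The following is a sketch of that likelihood under stated assumptions (the precision parameter φ and the example values are hypothetical, not the authors’ fitted model):

```python
from scipy import stats

def beta_mean_precision(theta, phi):
    """Beta distribution parameterized by mean theta and precision phi,
    via the standard conversion a = theta*phi, b = (1 - theta)*phi."""
    a, b = theta * phi, (1.0 - theta) * phi
    return stats.beta(a=a, b=b)

# Hypothetical example: mean relative time on an AOI of 40%.
dist = beta_mean_precision(theta=0.4, phi=20.0)
# dist.mean() recovers theta, so regression effects on theta translate
# directly into mean differences in relative time between conditions.
```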

Results

The final sample of 53 participants (Table 1) included more females (n = 40, 76%) than males. Participants’ reported race was nearly equally divided between Black/African American and other races. Most of the participants (72%) reported having a college or graduate degree and most (71%) had adequate health literacy skills based on the Newest Vital Sign score. We describe the key findings by study aims below.

Table 1 Participant characteristics

Perceived Message Believability (Aim 1)

Aim 1 examined whether one’s health literacy level, message veracity, and message format were associated with perceived message believability. Results from Aim 1 highlighted a main effect of message veracity as well as interaction effects for health literacy and message veracity. The main effect showed evidence-based messages were deemed as more believable compared to non-evidence-based posts. There was no main effect of health literacy nor message format on perceived message believability. Mean comparisons are given in Table 2.

Table 2 Mean comparisons between main effects and the Health Literacy × Veracity interaction for believability of posts

Importantly, the main effect of message veracity was qualified by health literacy. Individuals with adequate health literacy rated evidence-based posts as being more believable than non-evidence-based posts (mean difference = 2.3%, 95% HDI [0.74, 3.81]). In contrast, individuals with limited health literacy did not rate evidence-based posts as being more believable than non-evidence-based posts. The first graph in the Appendix highlights the interaction between health literacy and message veracity. No other interactions were statistically credible at 95% HDI.

Eye-Tracking Metrics (Aim 2 and Aim 3)

Subsequently, Aim 2 examined the association between attention to the text of a message and message believability, based on ELM’s premise that individuals engage in high elaboration through central route processing. There was no statistical relationship between relative time spent on the text AOI and message believability (β = 0.42%, 95% HDI [− 0.85, 1.86]). After observing no relationship between attention to text and message believability, we explored whether the source AOI and image AOI were associated with message believability, to understand whether peripheral route processing of other aspects of a message differed; neither association was statistically credible.

The goal of the exploratory Aim 3 was to assess whether message features and one’s health literacy were associated with relative time on the text AOI. Results showed a main effect of message veracity: participants spent more time on non-evidence-based text compared to evidence-based text (mean difference = 4.94%, 95% HDI [0.47, 9.64]). There were no main effects of health literacy or message format on relative time spent on text. Table 3 highlights results for the mean comparisons.

Table 3 Mean comparisons between main effects and the Narrative × Veracity interaction for relative time (percent) participants spent on text

Interestingly, the main effect of message veracity was qualified by message format. Participants spent more relative time on the text of non-evidence-based narratives compared to evidence-based narratives (mean difference = 15.11%, 95% HDI [7.21, 21.76]). However, for non-narrative posts, there was no difference in the amount of time spent on the text of the post, regardless of whether it was evidence-based or non-evidence-based. The second graph in the Appendix shows the interaction effect of message veracity and message format.

To fully understand participants’ attention to all components of a message, additional exploratory analyses were conducted to assess relative time on the source AOI. Consistent with ELM, there was a main effect of health literacy on time spent on the source of the message. Participants with limited health literacy spent more relative time on the source compared to participants with adequate health literacy (mean difference = 2.86%, 95% HDI [0.38, 5.99]). There were no main effects of message veracity or message format on relative time spent on the source AOI. Table 4 highlights results for the mean comparisons, and the third graph in the Appendix shows interaction effects. There were no main effects of health literacy or message format on relative time spent on the image AOI. As noted, attention is defined as relative time on specific areas of the message summing to 100% of time spent on the stimulus; the mean comparisons and graph of the interactions for the image AOI can be found in the Appendix.

Table 4 Mean comparisons between main effects and the Narrative × Veracity interaction for relative time participants spent on source

Discussion

This study examined how message features and SM users’ health literacy predict their message believability assessments and how much attention they pay to various components of simulated Facebook posts. Several key findings emerged that have important implications for communication research methodologies, such as eye-tracking, and for public health efforts. First, those with adequate health literacy correctly rated evidence-based posts as being more believable than non-evidence-based posts; however, there was no statistically credible difference between believability ratings of evidence-based vs. non-evidence-based posts among participants with limited health literacy. This suggests that health literacy, as measured in this study, may play a critical role in the way people discern accurate information from false information. This finding provides additional support for prior research suggesting that online health information seekers who have the ability to obtain, process, and understand information are more likely to identify accurate information from the “noise” found on the internet and possess the ability to evaluate health information found online [46, 47]. More generally, people with adequate health literacy are observed to be more suspicious about online health information than those with limited health literacy [46, 48]. Consequently, they are more likely to search for and verify information presented online, and place greater significance on the reliability of that information once it has been verified [49], whereas individuals with limited health literacy may be susceptible to believing myths and misconceptions [50] and, therefore, less likely to believe evidence-based information.
While SM platforms facilitate access to health information generally, individuals with limited health literacy, who are at greater risk of being misled by inaccurate information, may be unable to discern information credibility and therefore may be disproportionately harmed by the effects of misinformation exposure. As a result, disparities may be exacerbated when individuals with limited health literacy cannot locate quality information and, when exposed to information of mixed quality, are unable to distinguish between evidence-based and non-evidence-based posts.

A second notable finding was that participants spent more time on the text of non-evidence-based narrative posts compared to evidence-based narrative posts. This may suggest that participants in this study were attempting to scrutinize those posts in an effort to evaluate the credibility of the message [51, 52]. It is possible that this occurred because the non-evidence-based narrative posts in our study were perceived as more vivid or shocking, as they graphically described stories about the harms of sunscreen and the HPV vaccine. Novel and emotional rhetoric, such as fear appeals, may push SM users into sharing such information; indeed, evocative falsehoods tend to spread faster than true information [53]. Additionally, participants may have spent more time on non-evidence-based narratives and anecdotal messages because the power of other individuals’ experiences, such as deleterious and questionable consequences of HPV immunization, becomes evidence for one’s own decision to refuse vaccines due to “warnings” from others [54]. In fact, the power of storytelling has been leveraged by anti-vaccine activists to spread fear and doubt among parents [55], and personal experience anecdotes have subsequently generated endorsement and sharing of false information about vaccine risks and harms. Future communication efforts could seek to leverage the power of storytelling towards science-based, accurate information for positive effects on public health issues.

Lastly, through our exploratory analysis, results showed individuals with limited health literacy spent more relative time on the source of the message compared to individuals with adequate health literacy. This finding may be in part explained by ELM. According to ELM, when someone lacks the ability and motivation to elaborate on a message, he/she may utilize peripheral cues to evaluate information [56]. This finding suggests that participants were engaging in peripheral processing and relying on the source as a heuristic cue to evaluate the credibility of the post. It is possible that those with limited literacy skills rely more heavily on the message source for assessing message credibility because they find the text of the message difficult to evaluate [57]. Source is often utilized as a heuristic for the evaluation of information credibility [31]. For example, SM users tend to believe false news if the message comes from trusted sources on SM, suggesting that reliance on heuristics and peripheral processing might make users more vulnerable to misinformation [58], and our study suggests this problem may be more acute among SM users with limited health literacy. It might be helpful for SM platforms to vet accounts, encourage fact-checking groups to debunk misleading health claims, support responsible verification efforts for health information, and potentially engage in correcting misinformation on SM platforms. These strategies may assist SM users (especially those with limited health literacy) in identifying expert sources and avoiding less reliable sources in situations where they find it difficult to distinguish accurate information from misleading information based on message content alone. Collaborative efforts between public health leaders and SM companies to verify sources of SM posts would help alleviate some of the burden placed on individuals to distinguish credible health messages from inaccurate messages online.

This study was not without limitations. First, the simulated Facebook posts for this study were static and did not allow for fully dynamic features (e.g., liking, sharing, commenting), which means the experiment did not perfectly recreate the experience participants usually have on Facebook. Future research utilizing non-laboratory settings and observing individuals interacting with their personal Facebook pages would be needed to confirm the findings of this study in more naturalistic contexts. Additionally, the measure used to assess health literacy in this study encompassed both literacy and numeracy, which may have hindered the accuracy of measuring health literacy alone. Future health communication studies focusing on SM messages should include a measure of eHealth literacy. Furthermore, in addition to message features, participants’ sociodemographic characteristics (e.g., age, gender) may potentially impact the time spent examining the believability of cancer messages on social media. Further research is needed to understand whether relative time on various message components is impacted by such participant characteristics. Lastly, this study focused on a single SM platform, Facebook. Whether our findings apply to other SM platforms is uncertain. Misinformation is also prevalent on other SM outlets; therefore, future studies should examine messages on a wider array of platforms.