DOI: 10.1145/3613904.3642640. CHI Conference Proceedings. Research article, free access.

Personalizing Privacy Protection With Individuals' Regulatory Focus: Would You Preserve or Enhance Your Information Privacy?

Published: 11 May 2024

Abstract

In this study, we explore the effectiveness of persuasive messages endorsing the adoption of a privacy protection technology (IoT Inspector) tailored to individuals’ regulatory focus (promotion or prevention). We explore if and how regulatory fit (i.e., tuning the goal-pursuit mechanism to individuals’ internal regulatory focus) can increase persuasion and adoption. We conducted a between-subject experiment (N = 236) presenting participants with the IoT Inspector in gain ("Privacy Enhancing Technology"—PET) or loss ("Privacy Preserving Technology"—PPT) framing. Results show that the effect of regulatory fit on adoption is mediated by trust and privacy calculus processes: prevention-focused users who read the PPT message trust the tool more. Furthermore, privacy calculus favors using the tool when promotion-focused individuals read the PET message. We discuss the contribution of understanding the cognitive mechanisms behind regulatory fit in privacy decision-making to support privacy protection.


1 INTRODUCTION

Studies show that people can be reluctant to manage their online privacy and security; for example, they may be unwilling to explore the privacy settings on a social media account to prevent an unfavorable disclosure [73], or be uninterested in adopting protective technologies, such as virtual private networks that can address security vulnerabilities [81]. Therefore, finding ways to motivate users to adopt protective technologies is an important challenge in information security and privacy management [71, 75, 79, 81, 96, 99, 100].

One way to increase individuals’ motivation to take action is to provide them with personalized messages that specifically appeal to them [26, 51, 83, 106]. For example, a message highlighting excitement and social rewards has been found to be more persuasive for extroverts considering the purchase of a new cell phone [51]. In this work, we study the efficacy of personalized messages in persuading individuals to adopt a privacy-protection technology. While previous privacy literature has shown that different message framings can lead to different privacy-protective behaviors [4, 7, 90], no work has examined whether such framing should be tailored to different users, i.e., users with different regulatory foci. To that end, we draw on regulatory focus theory to motivate our research questions and design [44, 49]. Regulatory focus theory suggests that people’s goal orientation is a trait variable (i.e., similar to personality traits) and can be promotion-focused or prevention-focused [44, 49]. Individuals with a high promotion focus desire growth [49]; those with a high prevention focus desire safety and direct their efforts toward preventing unfavorable outcomes [49]. This makes regulatory focus especially relevant to privacy management [20, 63], as privacy management can be viewed equally as a function of effective privacy risk prevention or as the promotion of privacy protection.

When one’s internal regulatory focus matches one’s goal-pursuit strategy, there is a regulatory fit [44]. Research shows that with regulatory fit, individuals show more positive attitudes towards the task [33] and are more likely to fulfill it [64]. In this paper, we explore whether regulatory fit can enhance the efficacy of personalized persuasive messages on individuals’ adoption of a privacy-protection technology, and seek to explore the underlying mechanisms through which regulatory fit may increase individuals’ pursuit of a goal:

RQ: Does tailoring a persuasive message encouraging individuals to adopt a privacy-protection technology to their internal regulatory focus (i.e., regulatory fit) increase adoption behavior? What are the mechanisms through which regulatory fit increases this adoption behavior?

Privacy literature depicts privacy calculus and trust as prominent mediators of privacy decisions [16, 27, 28, 36]. To answer our research question and understand the mechanisms through which regulatory fit may increase adoption behavior, we study both privacy calculus and trust as potential mediators of the effects of regulatory fit on privacy behavior. Consequently, we designed a between-subject experiment and studied the adoption of "IoT Inspector" as a privacy-protection technology. IoT Inspector is a tool that can help smart-device users monitor the network communications of their smart devices [53]. Using this tool, users can view the domains with which their devices communicate and the size of these communications in bits. We used IoT Inspector as it is a widely adopted open-source tool that helps non-expert smart-device users explore their privacy [53]. We introduced IoT Inspector either with a gain framing of Privacy Enhancing Technology (PET) or a loss framing of Privacy Preserving Technology (PPT) to 236 participants recruited through Prolific, a crowd-sourcing platform. After reading the introductory message (either PET or PPT), participants answered survey questions regarding their initial impressions of the tool and were given a chance to actually download and use the IoT Inspector.

Our findings show how the privacy calculus users perform when deciding whether to use the IoT Inspector, as well as their trust in the tool, can change based on the message framing (PET vs. PPT) and the user’s regulatory focus. Specifically, we found that regulatory fit for promotion-focused individuals can induce a more positive privacy calculus toward the product: when we present individuals who have a high promotion focus with the "privacy-enhancing technology" message framing, they list more thoughts in favor of using the tool. On the other hand, regulatory fit for prevention-focused users can induce trust: when we present individuals who have a high prevention focus with the "privacy-preserving technology" message framing, they trust the technology more and, in turn, are more likely to install the tool.

To the best of our knowledge, this is the first study to examine the persuasive effect of regulatory fit in the privacy decision-making domain. Our work contributes to the theoretical understanding of regulatory fit as we explore its underlying mechanisms by studying potential mediations by privacy calculus and trust. Furthermore, our findings have practical implications for marketing a product to potential customers. Finally, this work can inform policy design and help encourage individuals to adopt privacy-protection technologies.

Figure 1: Our conceptual model, through which we explore the effects of regulatory fit on evaluations of the product (privacy calculus), trust, and privacy behaviors (download behavior).


2 LITERATURE REVIEW AND HYPOTHESIS DEVELOPMENT

2.1 Persuading to Adopt Privacy Measures

Scholars have explored ways to persuade individuals to protect their privacy. For example, they have found that communicating the purpose of data collection can persuade individuals to disclose their data [62], and that password meters can increase the likelihood of users choosing stronger passwords [19]. However, these persuasion mechanisms do not consistently lead to the desired outcome. Huh et al. [54] surveyed LinkedIn users who received password reset appeals from LinkedIn. They found that users were reluctant to reset their passwords: after several weeks, only around 46% of email recipients had changed their passwords. This highlights the ineffectiveness of the persuasive mechanisms currently used by companies. In another study, Egelman et al. [30] studied the efficacy of messages encouraging users to choose stronger passwords. They found that while the messages led to stronger password choices in a hypothetical scenario, such interventions did not consistently lead to stronger passwords in real scenarios. In another field study, Ghaiumy Anaraky et al. [8] showed that persuasive messages encouraging people to automatically tag themselves in Facebook photos resulted in lower persuasion and tagging rates when the default setting was opt-in. These mixed results show that persuading individuals to take privacy-protection measures is a complex topic.

In order to uncover the efficacy of persuasive messages, it is crucial to consider two points. First, it is important to study if and how the effect of persuasive messages on behavior is mediated by the key relevant variables (e.g., privacy calculus and trust in the privacy domain). Second, personal characteristics such as regulatory focus are central to persuasion [20, 69, 115]: users with different regulatory foci likely react differently to persuasive messages. To date, no work has considered designing persuasive privacy messages with respect to individuals’ regulatory focus while testing the mediating roles of privacy calculus and trust.

2.2 The Role of Privacy Calculus and Trust in Privacy Decision Making

In this section, we briefly explain how privacy calculus and trust can influence privacy decisions. Then, we discuss how loss- and gain-framed messages can influence privacy calculus, trust, and behaviors.

2.2.1 Privacy Calculus.

Privacy calculus theory suggests that privacy decisions involve making a trade-off between the risks and benefits associated with a decision [24]. Many studies have adopted the privacy-calculus model to explore the dynamics behind privacy decisions [17, 43, 55, 94, 108]. For example, Li et al. [68] studied the adoption of wearable healthcare devices based on the privacy-calculus model. They showed that when users decide to adopt wearables, they consider the benefits they may gain from using the device as well as the risks involved with using the device (e.g., having to disclose personal information). Consequently, their findings suggest that if the benefits of using the wearable outweigh the privacy risks, people are more likely to opt for using the wearable. On the other hand, if the risks outweigh the benefits, they are less likely to use the wearable. In another study, Jalali et al. [57] explored the adoption of Internet of Things devices as a function of their risks and rewards. Their work suggests that while low levels of risk and high reward would result in the highest adoption rates, high levels of risk and low reward would result in no adoption. Based on the privacy-calculus literature, we hypothesize the following:

H1: Individuals whose privacy calculus is more strongly in favor of adopting the privacy-protection technology are more likely to download it.

While privacy calculus has made significant contributions to the privacy literature, it has known shortcomings. The underlying assumption of the privacy-calculus model is that humans make deliberate trade-offs between the risks and benefits of decisions. However, research shows that this assumption does not always hold, as individuals have limited cognitive resources when making privacy decisions [16, 28]. Consequently, many scholars provide evidence suggesting that privacy calculus is not the sole mechanism involved in privacy decision-making and that individuals use mental shortcuts to fast-track their privacy decisions [1, 2, 8, 36, 101]. In the following, we explain how trust acts as a mental shortcut for privacy decisions.

2.2.2 Trust.

Several studies highlight the important role of trust in privacy decisions [27, 28, 66]. Lewicki [66] presents trust as a decision shortcut, since trusting an entity reduces individuals’ sensitivity to information, which, in turn, reduces the complexity of decision-making [66, 67]. Consequently, a trusted brand name or a trusted authority plays a significant role in users’ decisions about revealing personal information online [103]. In another study, Dinev and Hart [27] showed that high trust can override the perceived risks associated with information disclosure to an e-commerce website. Furthermore, trust has been found to predict the adoption of virtual private networks [81]. In line with this literature, we pose the following hypothesis:

H2: Individuals who trust the privacy-protection technology more are more likely to adopt it.

2.3 Tailoring Loss- and Gain-Framed Messages to Regulatory Focus

Presenting the same information with different wordings can result in different behavioral outcomes [105]. This is called the framing effect, and it has been studied in disciplines such as psychology [13] and economics [21]. Several studies have found the framing effect to be an effective means of persuading individuals to take action [40, 41, 89]. These studies often present the scenario in terms of gains (presenting favorable consequences of taking an action) or losses (presenting unfavorable consequences of not taking an action). For example, Peng et al. [89] studied the efficacy of gain- and loss-framed messages in persuading individuals to take COVID-19 vaccines. They found a loss-framed message explaining the negative consequences of not being vaccinated to be more persuasive than a gain-framed message explaining the benefits of getting vaccinated. Tversky and Kahneman [105] suggest loss aversion as the reason behind the framing effect: losses loom larger than gains, so individuals who see a loss-framed message are more inclined to take action than those who see a gain-framed message [58].

Additionally, research highlights the effects of message framing on various attitudes [12, 35, 77, 112]. This effect is broadly explored in health communication literature. For example, when people read a loss-framed message—cautioning people about the unfavorable consequences of not taking a cancer preventative measure (vs. a gain-framed message—explaining the benefits of taking a cancer preventative measure), they show more positive attitudes about the preventative behavior [77]. Similarly, a loss-framed message about a cancer screening test can increase a reader’s perceived cancer susceptibility more than a gain-framed message [35].

The effect of message framing has also been studied in security and privacy research. Most studies have found more persuasive effects from loss-framed messages compared to gain-framed messages. For instance, Ma and Birrell found that users who saw the cookie banner with negative framing (i.e., “degrade your experience”) were more likely to accept cookies, compared to positive framing (i.e., “improve your experience”) [72]. Similarly, Qu et al. found that showing disadvantages in the message framing (i.e., “your account is at risk”) can be useful to nudge participants [90]. Acquisti et al. found that users tend to sacrifice privacy when asked to “pay $2 to protect their privacy” but tend to protect privacy when asked to “give away privacy for $2” [3, 39]. Adjerid et al. found that a privacy notice suggesting an "increase" in privacy protection also results in increased disclosures, while a privacy notice suggesting a "decrease" in privacy protection elicits decreased disclosures [4]. However, some studies did not find a significant effect of framed messages. For example, DeGiulio et al. did not find a difference between messages that emphasize the benefits of allowing tracking vs. messages that emphasize the potential negative consequences of opting out of tracking [25].

With one notable exception [12], previous studies have not examined the underlying mechanisms behind the effects of gain and loss framing. The change of framing is a heuristic manipulation [28] that can manifest differently in terms of cognitive and emotional appraisals. For example, the default effect (another prominent heuristic effect) suggests that people are more likely to proceed with the option that is pre-selected by default. Some have argued that this is a cognitively mediated effect (i.e., users may perceive a pre-selected option as an implicit endorsement and select it accordingly [29, 56]), while others have argued that the default effect is affect-based (i.e., the default influences users’ emotional appraisal of the proposed options [34]). Similarly, we explore the effects of the loss- and gain-framing heuristic on privacy calculus, which constitutes a cognitive evaluation of the input signal, and on trust, which encompasses an emotional appraisal. The next hypothesis explores the effect of framing on our dependent variables:

H3: A loss-framed (vs. gain-framed) message leads to a) a privacy calculus more strongly in favor of using the privacy-protection technology, b) higher trust in the privacy-protection technology, and c) higher adoption of the privacy-protection technology.

Additionally, studies have shown that the effectiveness of a message framing depends on the characteristics of the audience [91, 109]. Privacy management involves both the effective prevention of privacy risks and the promotion of privacy enhancement. As such, regulatory focus is a relevant trait in examining both approach- and avoidance-related goal orientations pertaining to privacy.

2.3.1 Regulatory Focus.

Regulatory focus theory [44, 49] suggests that individuals have two distinct motivational systems for pursuing goals: the promotion and prevention systems. These systems originate from different fundamental human needs and seek different outcomes (goals). The promotion system is derived from the desire for growth and nurturance; individuals with a high promotion focus are concerned with attaining positive outcomes rather than merely avoiding their absence (gains over non-gains). The prevention system is derived from the need for safety and security; individuals with a high prevention focus emphasize the absence of negative outcomes over their presence (non-losses over losses) [49]. In addition, the promotion and prevention systems are independent, such that an individual can be high in both promotion and prevention foci, or low in both [49].

To explain the promotion and prevention foci further, Higgins [46, 47] discusses the distinct ways these systems construe their end-goal state. Higgins considers "0" the status quo. People with a strong promotion focus consider the state of "+1" the gain or success state; therefore, not achieving this gain (i.e., maintaining the "0" status quo) looms as a loss for these individuals. On the contrary, individuals with a strong prevention focus consider maintaining the "0" status quo and not falling below it a success, and the "-1" state a failure.

Regulatory focus theory has been used in the Human-Computer Interaction (HCI) literature to improve user experience in human-robot interaction [5, 23, 31], privacy decision-making [20, 63], and human interaction with virtual agents [32]. Le et al. [63] used regulatory focus theory to study individuals’ privacy decision-making in a mobile payment application. They found that users with a high prevention focus are more cautious and have a lower intention to disclose personal information, whereas promotion-focused users are more likely to disclose information if the disclosure scenario serves their goals. Cho et al. [20] showed that people with a high promotion focus have a more positive attitude about managing their online privacy preferences on a social media platform and perceive this task as less effortful. In the following, we discuss our approach to leveraging regulatory focus in personalizing privacy interventions.

2.3.2 Regulatory Fit.

Regulatory fit happens when an individual’s motivational orientation (i.e., promotion or prevention foci) matches their goal-pursuit strategy [44]. When individuals experience regulatory fit, they are more likely to be persuaded to take an action [64] such as purchasing a product [11] or getting tested to see if they have a disease and need treatment [86]. For example, Werth and Foerster studied how regulatory fit affects consumers’ purchasing behavior [109]. They created two versions of a car advertisement. In one of the advertisements, they focused on safety aspects (aspects important for individuals with a high prevention focus), while in the other advertisement, they emphasized comfort (aspects important to individuals with a high promotion focus). They found that when the advertisement aligns with the consumer’s regulatory focus (i.e., when there is regulatory fit), the consumer expresses more positive opinions about the product than when the advertisement is incompatible with the consumer’s regulatory focus [109].

These behavioral effects of regulatory fit may stem from its capacity to promote positive attitudes about the task. For example, regulatory fit can promote perceived enjoyment [33] and performance [45, 65]. Overall, with a regulatory fit, individuals engage more strongly in the task and feel good about it [20].

Figure 2: The hypothesized model.

Based on the regulatory fit literature, we pose the following hypotheses for individuals with a high promotion focus (see Figure 2 for a summary of our hypotheses):

H4: For individuals with a high (vs. low) promotion focus, a gain-framed (vs. loss-framed) message leads to a) a privacy calculus more strongly in favor of using the privacy-protection technology, b) higher trust in the privacy-protection technology, and c) higher adoption of the privacy-protection technology.

Likewise, based on the regulatory fit literature, we hypothesize that a loss-framed message results in more positive attitudes and behaviors than a gain-framed message for those with a high prevention focus:

H5: For individuals with a high (vs. low) prevention focus, a loss-framed (vs. gain-framed) message leads to a) a privacy calculus more strongly in favor of using the privacy-protection technology, b) higher trust in the privacy-protection technology, and c) higher adoption of the privacy-protection technology.


3 METHODS

3.1 Study Overview

We designed a between-subject experiment in which we present a privacy-protection technology, the IoT Inspector, either in a gain frame of Privacy Enhancing Technology (PET) or in a loss frame of Privacy Preserving Technology (PPT). This study was reviewed and approved by our Institutional Review Board (IRB), as well as by the institution of the developer team behind IoT Inspector, since downloading that tool was the subject of our study. After consenting to participate, participants answered a brief survey measuring their privacy concerns as a control variable. Then, they read a short piece of information about the IoT Inspector (the framing manipulation) and answered post-survey questions, including their perception of the IoT Inspector. We used Qualtrics to administer the survey. Finally, participants were given a chance to download and use the IoT Inspector. Whether they downloaded it or not serves as a binary indicator of their adoption of the IoT Inspector, the outcome variable in the hypothesized model. We validated downloads by giving those who proceeded to download the tool a unique ID to enter into the survey. Figure 3 shows the study overview. After finishing the study, participants received $5 as an incentive and were debriefed about the PET and PPT conditions and the purpose of the study.

Figure 3: An overview of our study.

3.2 Stimuli

IoT Inspector is an open-source software designed by researchers across several universities to help IoT device users monitor and understand the data-sharing practices of their home IoT devices [53]. It monitors the network traffic of IoT devices and allows users to track the frequency at which their smart devices send out data, the domains to which their data goes (e.g., Google.com), and the geographical location of these domains (e.g., USA). In order to design IoT Inspector’s gain- and loss-framing introductory text, the authors had several meetings at which they discussed the text. The overall goal was to use the relevant gain (e.g., increase data security) or loss (e.g., decrease data breaches) terminology in each condition while keeping the text concise. One important criterion was that the framing manipulation should not have any semantic implications, such that the two PET and PPT versions should communicate the same information. This was especially important because if the two versions communicate different information, we would be unable to determine whether potential findings are due to using different gain vs. loss terminologies or due to the different information presented to the users. We present the full stimuli in the Appendices.

3.3 Measurement Instruments

We measured participants’ baseline privacy concerns in the pre-survey as a control variable before presenting the framed text. Privacy concerns are the most studied variable in the privacy literature [16, 28] and are considered an antecedent of adopting privacy and security technologies [10, 15]. We used Malhotra et al.’s Global Information Privacy Concerns scale [74], which includes five items (see Table 7). Responses were recorded on a 7-point Likert scale from "Strongly Disagree" to "Strongly Agree."

Then, we presented the framed text about the IoT Inspector and measured participants’ perceived trust in the tool, regulatory focus, the cognitive aspect of decision-making through privacy calculus, technology-use frequency, and demographics. Perceived trust in the tool serves as a privacy heuristic, i.e., a decision shortcut [66, 67]. McKnight’s work suggests that trust has several dimensions [76]. We used the benevolence dimension, which measures the moral dimension of trust [111] and aligns best with the conceptual definition of trust discussed in Section 2.2.2. We adapted this construct to our context (e.g., "The IoT Inspector puts my interests first"; see Table 7 for all statements). We used the Regulatory Focus Questionnaire (RFQ) developed by Higgins et al. [48] to measure participants’ promotion and prevention regulatory foci (see Section 7). Following Higgins et al.’s [50] guidelines, we conducted a median split to identify individuals with high and low promotion and prevention regulatory foci.1 As in the pre-survey, we recorded responses on a 7-point Likert scale from "Strongly Disagree" to "Strongly Agree."
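The median-split procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors’ analysis code; the function name and the tie rule (scores equal to the median count as low) are our assumptions:

```python
from statistics import median

def median_split(scores):
    """Classify each score as high (True) or low (False) relative to
    the sample median, as in a median-split analysis.
    Tie rule: scores equal to the median count as low (our assumption)."""
    m = median(scores)
    return [s > m for s in scores]

# Example: five promotion-focus scores; the median is 3.
median_split([1, 2, 3, 4, 5])  # [False, False, False, True, True]
```

A participant would be run through this twice, once with their promotion score against the sample's promotion median and once with their prevention score, yielding the four high/low cells reported in Table 1.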

To capture the cognitively mediated aspect of the decision to download the IoT Inspector (or not), we asked participants to list their reasons for or against using the tool. They had to type at least three and at most five reasons. Then, for each reason, they specified whether it was for or against using the tool. This method is a common means of process tracing in the psychology literature that helps scholars explore the cognitively mediated aspect of decisions [59, 78]. Reasons listed first are often more important to people, and if people favor a choice, they tend to list more reasons for it than against it [59]. We incorporated these findings when coding the outcome of privacy calculus by summing the inverse signed ranks of the listed reasons: (1) \( \sum_{i=1}^{5} \frac{\mathrm{valence}(Q_{i})}{i} \) Overall, a higher value means that participants listed more (and earlier) reasons in favor of using the IoT Inspector (i.e., the outcome of the privacy calculus more strongly favors using the IoT Inspector).
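The scoring rule in Equation (1) can be illustrated with a short sketch (our illustration; the function name and the +1/−1 valence encoding are assumptions consistent with the description above):

```python
def calculus_score(valences):
    """Equation (1): sum of valences weighted by inverse rank. The i-th
    listed reason (1-indexed) contributes valence/i, so earlier-listed
    reasons weigh more. Valence encoding (our assumption): +1 for a
    reason in favor of using the tool, -1 for a reason against it."""
    return sum(v / i for i, v in enumerate(valences, start=1))

# Two reasons for, then one against:
calculus_score([+1, +1, -1])  # 1 + 1/2 - 1/3, about 1.17
```

A participant who leads with reasons against the tool thus ends up with a negative score even if the counts of pro and con reasons are equal.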

In addition, participants answered questions about their ethnicity, age, and gender. We also measured the frequency of technology use with a question, “How frequently do you use technology (e.g., smartphone, internet)?" We recorded the responses on a 7-point Likert scale from “less often" to “almost constantly." Finally, participants were given a chance to download the tool. We clarified that the decision to download or not does not influence their incentives.

3.4 Participant Recruitment

Our power analysis showed that to detect a small effect (0.25) with a power of 0.95 and an α of 0.05, we needed 210 participants. We recruited 238 US-based participants via Prolific, a crowd-sourcing platform (see the Appendices for the recruitment script). All participants agreed to participate in the study and were paid $5 through the Prolific platform after completing it. We took several measures to ensure data quality. First, we recruited only participants with an approval rate of at least 90%, as these individuals are more likely to pay attention to the study. Furthermore, we included two attention-check questions in the survey to exclude those who may not have read the survey carefully. In addition, to improve the ecological validity of our study, we used Prolific’s built-in screening feature to recruit only those who use smart devices. This was important because the IoT Inspector would not be useful for those who do not use smart devices.
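As a rough cross-check of the reported sample size, a normal-approximation power calculation for two groups comes close to the reported N = 210. This sketch assumes the effect size of 0.25 is Cohen's f, which for two groups corresponds to d = 0.5; both that reading and the function itself are our assumptions, not the authors' procedure:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison at standardized effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Cohen's f = 0.25 corresponds to d = 0.5 for two groups (our assumption
# about the effect-size metric). The normal approximation gives 104 per
# group (~208 total); exact t-based tools such as G*Power give 210.
n = n_per_group(0.5)  # 104
```

The small gap between 208 and 210 reflects the normal approximation slightly undershooting the exact t-distribution calculation.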

Of the 238 participants, two missed one or both attention-check questions and were removed from the analysis, leaving 236 valid responses. Participants were randomly assigned to the loss (N = 120) and gain (N = 116) conditions. We recruited with the goal of inclusivity, covering a broad age range from 19 to 94 years (Mean = 42.86, SD = 19.03). One hundred twenty-five respondents identified as women, 104 as men, and seven as non-binary. One hundred eighty-one participants were White, 27 were Black or African American, eight were Asian, and 20 were multi-racial or people of varied ethnicities. Lastly, 21 participants had at least a master’s degree, 90 had a four-year college degree, 89 had an associate’s degree, 35 had a high school or equivalent degree, and one had an educational level below high school.

Table 1:

                    Loss Framing (PPT)                 Gain Framing (PET)
                    High Prevention  Low Prevention    High Prevention  Low Prevention
  High Promotion    35 (15)          26 (9)            41 (12)          27 (12)
  Low Promotion     27 (8)           32 (4)            18 (4)           30 (6)

Table 1: A breakdown of our participants’ promotion and prevention regulatory foci within each framing condition. The values in parentheses are the app downloads.

Figure 4: We conducted an SEM with all the hypotheses shown in Figure 2. To improve readability, non-significant paths are omitted from this figure; Table 2 reports all effects.

3.5 Data Analysis

3.5.1 Quantitative Analysis.

Although we borrowed measurement instruments from previously validated scales in the literature, we measured Cronbach’s alpha to assess the reliability and internal consistency of the measures in our context. The constructs we used in the survey showed high internal consistency, with all Cronbach’s alpha values exceeding the acceptable threshold of 0.7 [22, 82]. In addition, the measurement model showed good fit (χ2(20) = 45.178, p < 0.001, RMSEA = 0.073, p = 0.086, CFI = 0.964, TLI = 0.949) [37, 52]. Consequently, we conducted a Structural Equation Model (SEM) analysis to test our hypotheses. We used a robust maximum likelihood estimator, which is robust to non-normality [97, 113]. Besides the variables in the hypothesized model, we also included participants’ reported privacy concerns in the SEM as a control variable. We conducted the SEM analyses in Mplus and examined the explained variance (R-squared) in the outcome variable, download behavior. Additionally, we conducted difference testing using loglikelihoods [80] to study whether the model hypothesizing regulatory fit interactions is superior to the model without the two regulatory fit interaction effects. Finally, we analyzed a fully saturated model including all possible two-way interaction effects to identify the best possible model.
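Cronbach’s alpha, used above as the reliability criterion, can be computed directly from raw item responses. A minimal sketch (our illustration, not the authors’ analysis pipeline):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores). `items` holds one list of responses per scale item,
    with participants in the same order across items."""
    k = len(items)
    total_scores = [sum(resps) for resps in zip(*items)]
    item_variances = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / pvariance(total_scores))

# Two perfectly correlated items yield alpha = 1.0:
cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])  # 1.0
```

Values above the conventional 0.7 threshold, as reported for all constructs here, indicate that the items of a scale covary enough to be treated as one construct.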

3.5.2 Qualitative Analysis.

At the end of the study, participants were asked to indicate their reasons for using or not using the IoT Inspector via an open-ended question. These responses were subjected to qualitative analysis using the six-stage thematic analysis approach [98]. Two researchers independently performed open coding using an inductive approach on de-identified, unlabeled data (i.e., participants’ PET/PPT condition and regulatory foci were removed from the data) to mitigate potential biases. Both researchers then independently performed axial coding to develop initial categories. Next, both researchers reviewed the independently-developed categories and reached a consensus to define the final themes for the selective coding stage. Following selective coding, the frequencies of responses in each theme across the PET/PPT conditions and promotion and prevention regulatory focuses were reported.
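The final reporting step, tallying theme frequencies by condition and regulatory focus, amounts to a cross-tabulation of the coded reasons. A minimal sketch with hypothetical coded entries (not our actual corpus):

```python
from collections import Counter

# Hypothetical coded reasons after selective coding: (theme, condition, focus)
coded_reasons = [
    ("Data monitoring", "PET", "high promotion"),
    ("Data monitoring", "PET", "high promotion"),
    ("Data monitoring", "PPT", "high prevention"),
    ("Low trust", "PPT", "low promotion"),
]

# Frequency of each theme within each framing condition
theme_by_condition = Counter((theme, cond) for theme, cond, _focus in coded_reasons)
print(theme_by_condition[("Data monitoring", "PET")])  # → 2
```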

Table 2:

Variables                              b (OR)           SE      p-value
DV: Download Behavior (R-squared = 22.6%)
Privacy Concerns                       0.541 (1.718)    0.188   0.004
H1: Privacy Calculus                   0.319 (1.375)    0.128   0.013
H2: Trust                              0.472 (1.603)    0.201   0.019
H3c: Loss Framing (vs. Gain)           0.069 (1.072)    0.332   0.834
High Promotion Focus (vs. Low)         0.575 (1.777)    0.333   0.084
H4c: Loss Framing X High Promotion     0.054 (1.056)    0.654   0.934
High Prevention Focus (vs. Low)        -0.103 (0.902)   0.316   0.743
H5c: Loss Framing X High Prevention    0.664 (1.943)    0.634   0.295
DV: Privacy Calculus (R-squared = 5.3%)
Privacy Concerns                       -0.001           0.113   0.993
H3a: Loss Framing (vs. Gain)           0.113            0.194   0.558
High Promotion Focus (vs. Low)         0.226            0.188   0.229
H4a: Loss Framing X High Promotion     -0.958           0.378   0.011
High Prevention Focus (vs. Low)        0.351            0.184   0.057
H5a: Loss Framing X High Prevention    0.438            0.369   0.235
DV: Trust (R-squared = 10.0%)
Privacy Concerns                       -0.028           0.088   0.753
H3b: Loss Framing (vs. Gain)           -0.179           0.150   0.231
High Promotion Focus (vs. Low)         0.463            0.154   0.003
H4b: Loss Framing X High Promotion     -0.293           0.295   0.321
High Prevention Focus (vs. Low)        0.266            0.151   0.077
H5b: Loss Framing X High Prevention    0.652            0.287   0.023

Table 2: Results of the full SEM model. We used bold text to show the significant effects. Since download behavior is a binary variable, we include both beta coefficients and odds ratios.


4 RESULTS

In the following, we first report some descriptive statistics about participants’ technology use and their attitudes toward the IoT Inspector. Then, we report the results of hypothesis testing, followed by our qualitative findings.

4.1 Descriptive Statistics

On average, participants reported using smart devices at least once a day, with 107 participants using them several times a day and 51 using them almost constantly. This suggests that our participants are frequent technology users. In addition, they reported owning an average of five smart devices, making the context of this study, smart home privacy, relevant to them. Furthermore, participants had an average sum score of 22.800 for privacy concerns (min = 5, max = 35, SD = 5.982) and an average score of 15.444 for trust (min = 7, max = 21, SD = 2.257). Participants listed between three and five reasons for or against using the tool; on average, they entered 2.525 (SD = 1.497) reasons for and 1.182 (SD = 1.382) reasons against using the IoT Inspector, indicating that they were more inclined to list positive than negative reasons. Across all participants, we collected 596 reasons for and 279 reasons against using the IoT Inspector. Ultimately, 70 participants downloaded the IoT Inspector, an adoption rate of about 30%. In addition, 121 participants had a high prevention focus and 129 had a high promotion focus. Table 1 shows a breakdown of individuals' regulatory focuses and the framing of the message they viewed.

4.2 Hypothesis testing

We tested the hypothesized model (see Figure 2) using a Structural Equation Modeling (SEM) framework. Overall, this model accounted for 23.2% of the variance in download behavior. Table 2 reports all of the SEM results. Confirming H1, we found a significant positive association between the privacy calculus and download behavior: when participants had more positive reasons for using the IoT Inspector, they were more likely to download the tool (OR = 1.375, p = 0.026). Furthermore, we found support for H2: with a one-standard-deviation increase in trust, participants were 62.7% more likely to download the IoT Inspector (p < 0.001). However, we did not find a direct effect of framing on the download behavior (p = 0.834), privacy calculus (p = 0.558), or trust (p = 0.231); H3a-c were rejected.
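Note that the odds ratios in Table 2 are the exponentiated logistic coefficients (OR = e^b), so percentage statements follow from (OR − 1) × 100. A sketch using the reported coefficients (the hypothesis labels are our shorthand; tiny third-decimal deviations arise because the published coefficients are rounded):

```python
import math

# Logistic-regression coefficients (b) reported in Table 2 for download behavior
coefficients = {
    "Privacy Concerns": 0.541,
    "H1 (Privacy Calculus)": 0.319,
    "H2 (Trust)": 0.472,
}

for name, b in coefficients.items():
    odds_ratio = math.exp(b)             # OR = e^b
    pct_change = (odds_ratio - 1) * 100  # % change in odds per unit increase
    print(f"{name}: OR = {odds_ratio:.3f} ({pct_change:+.1f}% odds)")
```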

Figure 5:

Figure 5: Although only some of these effects reach statistical significance, regulatory fit conditions consistently lead to higher trust and a more positive privacy calculus.

To explore the regulatory fit hypothesis, we examined the interaction effect between individuals' regulatory foci and the framing conditions. A regulatory fit for participants with a high promotion focus significantly changed their privacy calculus: if they saw the gain-framed message, they were more likely to have positive reasons for using the IoT Inspector (b = 0.958, p = 0.011, H4a supported2). However, regulatory fit for participants with a high promotion focus did not significantly improve trust (p = 0.321, H4b rejected), nor did it directly increase the likelihood of the download behavior (p = 0.934, H4c rejected).

A regulatory fit for participants with a high prevention focus involves the loss-framed message. When individuals with a high prevention focus see the privacy-preserving framing, they do not show a significantly different privacy calculus (p = 0.235, H5a rejected), but their trust perceptions are significantly higher by 0.625 standard deviations (p = 0.023, H5b supported). Furthermore, while a regulatory fit for those with a high prevention regulatory focus increases the likelihood of the download behavior by 94.3%, this effect is not significant (p = 0.295; H5c rejected). Lastly, to study whether the significant regulatory fit interaction effects improve the model, we compared the model with these interactions against the model without them. The results show that adding the interaction effects significantly improves the model fit (χ2(2) = 10.225, p = 0.006). Table 3 summarizes the results of hypothesis testing.
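The model comparison above is a likelihood-ratio test: twice the loglikelihood difference between the nested models is compared against a chi-square distribution with as many degrees of freedom as the added terms (strictly, the robust estimator calls for a scaled difference test, which this sketch glosses over). The loglikelihood values below are hypothetical, chosen only so that the statistic reproduces the reported χ2(2) = 10.225:

```python
from math import exp

# Hypothetical loglikelihoods for the two nested models
ll_restricted = -250.0   # model without the two regulatory-fit interactions
ll_full = -244.8875     # model with the interactions

# Likelihood-ratio statistic: D ~ chi-square with df = number of added terms
D = -2 * (ll_restricted - ll_full)

# For df = 2, the chi-square survival function has the closed form exp(-D / 2)
p_value = exp(-D / 2)
print(f"chi2(2) = {D:.3f}, p = {p_value:.3f}")  # → chi2(2) = 10.225, p = 0.006
```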

Table 3:

Hypotheses                            Support
H1: Privacy Calculus                  Supported
H2: Trust                             Supported
H3: Framing                           Not Supported
H4: Regulatory fit (for promotion)    Partially Supported
H5: Regulatory fit (for prevention)   Partially Supported

Table 3: A summary of hypothesis testing results.

Table 4:

Regulatory fit    Trust                      Privacy Calculus            Download Behavior
Promotion fit     Direct: 0.169, p = 0.320   Direct: -0.163, p = 0.011   Direct: -0.011, p = 0.878
Prevention fit    Direct: 0.155, p = 0.021   Direct: 0.075, p = 0.234    Direct: 0.058, p = 0.065

Table 4: Direct and indirect effects of regulatory fit on dependent variables. "na" shows the paths that do not exist.

4.3 Fully Saturated Model

In order to explore other possible effects, we studied a fully saturated model by adding all possible two-way interaction effects to the hypothesized model. We iteratively trimmed the newly added non-significant interaction terms and found only one additional significant interaction effect, between privacy calculus and prevention regulatory focus, predicting download behavior (OR = 1.833, p = 0.038). Chi-square difference testing suggests that adding this interaction term significantly improves the hypothesized model (χ2(1) = 5.034, p = 0.024). Finally, we report the direct and indirect effects of regulatory fit on the dependent variables in Table 4.

4.4 Qualitative Insights

Thematic analysis of the reasons participants provided in favor or against using the IoT Inspector provides rich qualitative insights. Below, we present the major themes resulting from this analysis (see Table 5 for a summary).

Table 5:
Themes                  Examples                                                                       Frequency
Reasons for using IoT Inspector
Data monitoring         "protect yourself from a data breach"                                          423
Reduce efforts          "It could save me time on monitoring my privacy"                               57
Worry mitigation        "It makes you not worry much I feel it will look out for me"                   43
Improved transparency   "IoT Inspector helps you monitor and understand how your devices interact..."  23
Reasons against using IoT Inspector
Low trust               "It feels like swapping one evil for another. IoT is going to tell me what..." 136
Need information        "I need to better understand how IoT Inspector works"                          28
Unconcerned             "I don't disclose barely any of my personal info"                              19
Extra time/cost         "its just another thing I have to set up"                                      57
Relinquish              "There is nothing to entirely protect personal privacy data"                   14
Need to see reviews     "Need to see reviews from real people"                                         9
Compatibility           "have to make sure my stuff runs fine through it"                              35

Table 5: The results of our qualitative analysis with themes and examples of each theme. The numbers represent the frequency of each theme in the reasons participants listed.

4.4.1 Themes Supporting Use of the IoT Inspector.

The most frequent theme in favor of using the IoT Inspector (N = 423) was monitoring and protection. The second most prevalent theme was that the tool could curtail users' efforts in managing their privacy (N = 53). Several statements showed an appreciation of how IoT Inspector can address users' worries and provide peace of mind (N = 43). Lastly, several reasons reflected participants' appreciation of IoT Inspector as a tool that can improve transparency by helping users know what data is being collected (N = 23).

4.4.2 Themes Opposing Use of the IoT Inspector.

Thematic analysis revealed that low trust was the most prevalent reason against using the IoT Inspector (N = 136). Fifty-seven reasons cited the tool requiring extra time to set up or potentially requiring paid subscriptions. Several reasons suggested that some individuals were worried about compatibility issues with their existing tools (N = 35). Another theme that emerged was the need for more information about the tool (N = 28). Nineteen reasons suggested a lack of concern, as these individuals were not disclosing important information online (see Table 5 for examples); 14 reasons argued that nothing could really protect one's privacy; and nine reasons focused on the need to hear other users' inputs, experiences, and reviews.

Table 6:
                        High/Low Promotion Focuses          High/Low Prevention Focuses
                        PET              PPT                PET              PPT
Themes                  High    Low      High    Low        High    Low      High    Low
Reasons in favor of using IoT Inspector
Data monitoring         122     95       104     102        111     106      106     100
Reduce efforts          15      18       7       17         18      15       13      11
Worry mitigation        15      8        10      10         15      8        11      9
Improved transparency   6       4        7       6          4       6        8       5
Reasons against using IoT Inspector
Low trust               35      34       34      33         32      37       28      39
Need information        12      8        5       3          8       12       4       4
Unconcerned             4       12       1       2          5       11       2       1
Extra time/cost         19      19       6       13         20      18       9       10
Given up                1       3        3       7          2       2        5       5
Need to see reviews     4       3        2       0          4       3        0       2
Compatibility           4       10       9       12         6       8        7       14

Table 6: Representation of each theme across experimental conditions and regulatory focuses.

4.4.3 Reflecting on Regulatory Fit Findings in Qualitative Themes.

In this section, we synthesize the qualitative findings with the quantitative results centered on regulatory fit. We highlight only the themes with a substantial difference (i.e., not those differing by merely two or three occurrences). First, we explore patterns observed among individuals in the gain-framed condition who had a high promotion focus (regulatory fit with high promotion). Data monitoring and protection was the major listed reason in favor of using the IoT Inspector overall and was cited most often (at 56.22%) by these individuals. In addition, we observed another noteworthy difference within the worry-mitigation theme, suggesting that these individuals consider IoT Inspector a means of mitigating their privacy worries (65.22%). However, these individuals do not appear to trust the tool differently than others. These results are in line with our quantitative findings, suggesting that the privacy calculus is more pronounced under regulatory fit with high promotion.

Prevention regulatory fit applies to individuals who read the loss-framed message and have a high prevention regulatory focus. While themes such as worry mitigation and reduced effort were not substantially different for people with high and low prevention foci in the PPT condition, the group with a high prevention focus accounted for the smaller share of the low-trust reasons against using the IoT Inspector (41.79%). This is in line with our quantitative findings, as it suggests that in the case of a prevention regulatory fit, individuals have higher trust in the IoT Inspector.


5 DISCUSSION

Previous research explored the efficacy of persuasive messages based on regulatory fit in various domains (e.g., health [86], marketing [11]), but not in privacy. To the best of our knowledge, our work is the first to explore how regulatory fit can affect privacy-protection behaviors. Our findings shed light on the mechanisms through which regulatory fit influences persuasion in adopting a real-world privacy-protection technology.

We studied if and how privacy calculus and trust mediate the effects of regulatory fit on privacy decisions. We found that individuals with a high promotion focus who saw the gain-framed (PET) message found the IoT Inspector more beneficial (i.e., their privacy calculus leaned more positive). High promotion focus is associated with approach orientation and commission bias (i.e., a preference for action over inaction). With a gain-framed message, individuals with a high promotion focus are more likely to be "motivated" to think about reasons "for" the action. However, since the total effect of promotion fit on the download behavior was not significant (see Table 4), we cannot conclude that a promotion fit actually led to a behavioral outcome. On the other hand, the total effect of prevention fit on the download behavior was significant. We found that individuals with a high prevention focus who saw the loss-framed (PPT) message reported a higher level of trust in the IoT Inspector. Extant research on humans' "loss aversion" shows that losses trigger a heuristic mechanism in which individuals perceive losses as more significant than gains of similar size [58, 105]. It follows that loss-averse individuals tend to take more risks to avoid unfavorable outcomes [105]. Hence, a loss-framed message is more alarming to prevention-focused individuals, who inherently want to avoid losses. This leads them to heighten their trust in the IoT Inspector, through which they aim to avoid unfavorable outcomes.

These results demonstrate that the mechanisms through which a persuasive message influences users' adoption of privacy-protection tools are complex; the mediation analysis helped us scrutinize the adoption decision and gain deeper insights. This approach can inform research in other areas of persuasion. Researchers have examined the effectiveness of various persuasion strategies, such as explaining the purpose of data collection [62], informing users of the number of apps accessing their information [6], and indicating the level of security of different configuration options [116]. However, the evidence for such strategies is mixed: while some studies found the desired effects on persuasion [6, 19, 116], others did not find persuasive messages effective [8, 30, 54]. Studying the underlying mechanisms behind these effects and accounting for users' regulatory orientations can contribute to our understanding of the circumstances under which persuasive messages may or may not work (e.g., a persuasive message that does not align with an individual's regulatory focus may fail).

Our qualitative findings highlight the key reasons that potential users may weigh when deciding to adopt a new technology, with implications for product design and marketing. While the vast majority of our participants highlighted the major application of the IoT Inspector (i.e., data monitoring and protection), many appreciated how the tool could save them time or mitigate their worries about their smart devices. Therefore, it is important for product designers to think not only about the immediate application of their product (e.g., data monitoring and protection) but also to highlight and design for other relevant areas through which the product can benefit users (e.g., mitigating worries). Moreover, our results unveil several barriers to adopting IoT Inspector. We found trust to be the most prominent barrier. While earning users' trust may take time, there may be means of forming initial trust (e.g., through honest communication of the product's drawbacks [60]). Furthermore, while extensive information about a product may overwhelm some users [18], our results show that some users need more information before making the adoption decision; it is therefore important to make this information accessible to them. However, addressing trust and providing information does not necessarily lead to adoption, as some users specified that they were unconcerned and simply did not need the tool.

In addition, our findings have important implications for policymakers who seek to promote responsible informed consent in policy design [92, 102]. Policymakers can leverage regulatory fit to promote privacy-oriented informed consent. For instance, a loss-framed message such as "To protect your privacy, please read the policy statement" may be more effective in engaging people with a high prevention focus than a gain-framed message such as "To enhance your privacy, please read the policy statement," and the reverse may be true for people with a high promotion focus. Further research should examine the efficacy of gain vs. loss framing in developing policy and regulation and determine whether it succeeds in motivating individuals with different regulatory foci.

However, policymakers, companies, and designers should also be cautious about the potential misuse of personalized persuasion. For example, dark pattern designers who attempt to maximize users' data disclosure [9, 14, 38] might use regulatory fit to amplify their data-collection efforts. A dark pattern designer may frame a cookie consent request as a gain (e.g., “Allow cookies to make your shopping experience more convenient”) for users with a promotion focus and as a loss (e.g., “Allow cookies so that we don’t lose track of the items in your shopping cart”) for users with a prevention focus. To do so, they need to learn their target audience's regulatory orientation; such personalization may not be feasible without tracking some user data. While the 11-item RFQ is the main method currently used for inferring regulatory focus, there may be other means to discern regulatory orientation. For instance, IP addresses can unveil users' geographic location [104], and people living in highly individualistic cultures are more loss-averse than those in collectivistic cultures [107, 110] and may be more likely to be promotion-focused. If such relationships are validated, dark pattern designers could infer individuals' regulatory orientations (e.g., based on geographic location). Additionally, studies suggest that one can make assumptions about a user's regulatory focus based on demographics such as age (e.g., younger adults being more promotion-focused [70]) and gender (e.g., women being more prevention-focused than men [42]). Similarly, prior research [61, 114] has shown that people's social media behavior can be used to predict their personality traits. Using similar methods, users' regulatory focus could potentially be elicited from their social media behavior and digital traces, making them susceptible to dark pattern interventions.

Finally, there is a need for more research and debate within the CHI community to explore the ethical boundaries around persuasion. While there is consensus on some applications of personalized persuasion being unethical [14, 84, 95], in certain other cases, there can be a fine line between persuasive design and manipulative design [93]. The HCI community, as one of the major user advocates, should establish a framework that sets ethical guidelines for using persuasive mechanisms in various contexts and decide whether or not, and to what extent, users’ data can be used for personalization.


6 LIMITATIONS AND FUTURE WORK

We showed the efficacy of regulatory fit only in the limited context of adopting a privacy-protection technology. Future research can explore this effect in various privacy scenarios to study the generalizability of the findings (e.g., whether such personalized messages can motivate individuals to explore a new privacy feature in an existing app, read policy documents, or choose stronger passwords). In addition, we used a convenience sampling methodology, recruiting only US-based participants from Prolific. Prolific explicitly informs participants that they are recruited for research [85] and requires participants to be paid a minimum hourly wage. Although Prolific provides high data quality in terms of attention, comprehension, and honesty [88], and prior studies found that Prolific participants were less dishonest and more demographically diverse than MTurkers [87], our recruiting approach has limitations. For example, Prolific participants may not be interested in downloading an application, as their main task is completing surveys. We notified participants that their download decision would not influence their participation reward, and about 30% of them downloaded the app. This ratio could differ among non-Prolific users, and different cultural backgrounds or levels of digital literacy could also produce different effects. Overall, our results are not generalizable to the broader population, and our study is subject to other limitations, such as social desirability bias. Furthermore, we studied only participants' download behaviors; we did not explore their actual usage of the IoT Inspector, nor did we follow up with participants to deeply understand their motives. Future research can explore regulatory fit longitudinally and study whether it has longitudinal effects (e.g., influencing the duration and frequency of user interactions).


7 CONCLUSION

This study explored the effects of message personalization on the adoption of a privacy and security measure. More specifically, we communicated a privacy-protection technology using either a gain-framed message (Privacy Enhancing Technology) or a loss-framed message (Privacy Preserving Technology) to people with promotion and prevention regulatory foci. Our results suggest that individuals react to the same message differently depending on their regulatory focus. A regulatory fit (i.e., tailoring the persuasive message to one's regulatory focus) can increase users' trust and influence the outcome of their privacy calculus.


ACKNOWLEDGMENTS

We would like to express our sincere gratitude to Professor Burcu Bulgurcu for her insightful and valuable feedback during the initial phase of this project.


8 APPENDICES

Table 7:
Privacy Concerns
1- All things considered, Smart Devices cause serious privacy problems
2- Compared to others, I am more sensitive about the way Smart Devices handle my personal information
3- To me, it is the most important thing to keep my privacy intact from Smart Devices.
4- I believe other people are too concerned with Smart Devices’ privacy issues
5- I am concerned about threats to my personal privacy today.
Trust
1- This IoT Inspector puts my interests first.
2- This IoT Inspector keeps my interests in its mind.
3- This IoT Inspector wants to understand my needs and preferences.
Regulatory Focus Questionnaire
1- Compared to most people, I am typically unable to get what I want out of life.
2- Growing up, I “crossed the line” by doing things that my parents would not tolerate.
3- I accomplished things that got me "psyched" to work even harder.
4- I often got on my parents’ nerves when I was growing up.
5- I often obeyed the rules and regulations that were established by my parents.
6- Growing up, I acted in ways that my parents thought were objectionable.
7- I often do well at different things that I try.
8- Not being careful enough has gotten me into trouble at times.
9- When it comes to achieving things that are important to me, I find that I don’t perform as well as I ideally would like to do.
10- I feel like I have made progress toward being successful in my life.
11- I have found very few hobbies or activities in my life that capture my interest or motivate me to put effort into them.

Table 7: In the regulatory focus questionnaire, items 1 (reversed), 3, 7, 9 (reversed), 10, and 11 (reversed) measure promotion regulatory focus and items 2 (reversed), 4 (reversed), 5, 6 (reversed), and 8 (reversed) measure prevention regulatory focus.
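As an illustration of the scoring implied by this caption, the sketch below reverse-scores the flagged items and averages each subscale. The 5-point response scale (so that a reversed score is 6 − x) and the example responses are assumptions for illustration, not a prescription from the questionnaire itself.

```python
# Item numbers and whether they are reverse-scored, per the caption above
PROMOTION = [(1, True), (3, False), (7, False), (9, True), (10, False), (11, True)]
PREVENTION = [(2, True), (4, True), (5, False), (6, True), (8, True)]

def subscale_mean(responses, items, scale_max=5):
    """Average a subscale, reverse-scoring flagged items on a 1..scale_max scale."""
    scores = [(scale_max + 1 - responses[i]) if rev else responses[i]
              for i, rev in items]
    return sum(scores) / len(scores)

# Hypothetical responses keyed by item number (1 = never/seldom ... 5 = very often)
responses = {1: 2, 2: 1, 3: 4, 4: 2, 5: 5, 6: 1, 7: 4, 8: 2, 9: 2, 10: 4, 11: 2}
promotion = subscale_mean(responses, PROMOTION)
prevention = subscale_mean(responses, PREVENTION)
print(promotion, prevention)  # → 4.0 4.6
```

A median split over such subscale scores (as described in footnote 1) would then classify each participant as high or low on each focus.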

Below, we present the manipulation. There are several ’/’ symbols in the text. Participants in the PPT condition read the text before the ’/’, and participants in the PET condition read the text after the ’/’.

Preserve/Enhance Your Privacy

It is essential to preserve/enhance your privacy. Privacy-Preserving Technologies (PPT)/Privacy-Enhancing Technologies (PET) can help you defend yourself/increase your protection in the online world. PPTs/PETs are technologies that embody fundamental privacy-preserving/privacy-enhancing principles by minimizing the disclosure/maximizing the confidentiality of your personal data and decreasing data breaches/increasing data security. PPTs/PETs allow you to protect/increase the privacy of your personally identifiable information (PII) provided to and handled by services or applications.

The smart devices in your home may potentially gather data without your knowledge, sometimes with malevolent intent. For instance, certain apps on your phone could potentially expose your data to harmful third parties. In response to this challenge, researchers at New York University have developed a tool called IoT Inspector.

The IoT Inspector is a privacy preserving/enhancing software designed to monitor the types of data being transmitted from your devices, such as audio, video, or text, and identify to which domains this information is being sent. By understanding the nature of the data each domain collects and the purpose of that specific domain, IoT Inspector can assess whether the collected data aligns with the domain’s stated purpose. This allows IoT Inspector to flag any suspicious activity and preserve/enhance your privacy by rejecting/accepting network communications that are unsafe/safe. This way, IoT Inspector can limit/increase your SmartHome’s vulnerability/security. IoT Inspector helps you not lose/gain control over your smart devices. The overall goal of IoT Inspector is for you to use your Smart Devices and, at the same time, avoid anxiety/gain peace of mind.

Recruitment Script:

If you are a user of smart-home devices, please consider participating in our study. The study will take up to 15 minutes. You will receive a $5 compensation for participating in this study.

Footnotes

  1. While this is a common practice in regulatory focus literature and makes the results more comparable and easier to interpret, a median split may have statistical disadvantages. We analyzed the data after removing 30% of the sample around the medians and found that the effects of regulatory fit on trust and privacy calculus do not change. Therefore, we proceeded with the whole sample without removing data.

  2. In Table 2, this effect has a negative sign (-0.958) because it reflects the misfit situation of the Privacy Preserving framing for individuals with a high promotion regulatory focus.

Supplemental Material

Video Preview (mp4, 4.3 MB)

Video Presentation (mp4, 106.5 MB)

References

  1. Alessandro Acquisti, Laura Brandimarte, and George Loewenstein. 2015. Privacy and human behavior in the age of information. Science 347, 6221 (2015), 509–514.Google ScholarGoogle Scholar
  2. Alessandro Acquisti and Jens Grossklags. 2005. Privacy and rationality in individual decision making. IEEE security & privacy 3, 1 (2005), 26–33.Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. Alessandro Acquisti, Leslie K John, and George Loewenstein. 2013. What is privacy worth?The Journal of Legal Studies 42, 2 (2013), 249–274.Google ScholarGoogle Scholar
  4. Idris Adjerid, Alessandro Acquisti, Laura Brandimarte, and George Loewenstein. 2013. Sleights of privacy: Framing, disclosures, and the limits of transparency. In Proceedings of the ninth symposium on usable privacy and security. 1–11.Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. Roxana Agrigoroaie, Stefan-Dan Ciocirlan, and Adriana Tapus. 2020. In the wild hri scenario: influence of regulatory focus theory. Frontiers in Robotics and AI 7 (2020), 58.Google ScholarGoogle ScholarCross RefCross Ref
  6. Hazim Almuhimedi, Florian Schaub, Norman Sadeh, Idris Adjerid, Alessandro Acquisti, Joshua Gluck, Lorrie Faith Cranor, and Yuvraj Agarwal. 2015. Your location has been shared 5,398 times! A field study on mobile app privacy nudging. In Proceedings of the 33rd annual ACM conference on human factors in computing systems. 787–796.Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. Reza Anaraky, Tahereh Nabizadeh, Bart Knijnenburg, and Marten Risius. 2018. Reducing Default and Framing Effects in Privacy Decision-Making. SIGHCI 2018 Proceedings (Dec. 2018). https://aisel.aisnet.org/sighci2018/19Google ScholarGoogle Scholar
  8. Reza Ghaiumy Anaraky, Bart P Knijnenburg, and Marten Risius. 2020. Exacerbating mindless compliance: The danger of justifications during privacy decision making in the context of Facebook applications. AIS Transactions on Human-Computer Interaction 12, 2 (2020), 70–95.
  9. Reza Ghaiumy Anaraky, Byron Lowens, Yao Li, Kaileigh A Byrne, Marten Risius, Xinru Page, Pamela Wisniewski, Masoumeh Soleimani, Morteza Soltani, and Bart Knijnenburg. 2023. Older and younger adults are influenced differently by dark pattern designs. arXiv preprint arXiv:2310.03830 (2023).
  10. Corey M Angst and Ritu Agarwal. 2009. Adoption of electronic health records in the presence of privacy concerns: The elaboration likelihood model and individual persuasion. MIS Quarterly (2009), 339–370.
  11. Tamar Avnet and E Tory Higgins. 2003. Locomotion, assessment, and regulatory fit: Value transfer from “how” to “what”. Journal of Experimental Social Psychology 39, 5 (2003), 525–530.
  12. Paritosh Bahirat, Martijn Willemsen, Yangyang He, Qizhang Sun, and Bart Knijnenburg. 2021. Overlooking context: How do defaults and framing reduce deliberation in smart home privacy decision-making? In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
  13. Herbert Bless, Tilmann Betsch, and Axel Franzen. 1998. Framing the framing effect: The impact of context cues on solutions to the ‘Asian disease’ problem. European Journal of Social Psychology 28, 2 (1998), 287–291.
  14. Christoph Bösch, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfattheicher. 2016. Tales from the dark side: Privacy dark strategies and privacy dark patterns. Proceedings on Privacy Enhancing Technologies 2016, 4 (2016), 237–254.
  15. Aaron R Brough and Kelly D Martin. 2020. Critical roles of knowledge and motivation in privacy research. Current Opinion in Psychology 31 (2020), 11–15.
  16. Christoph Buck, Tamara Dinev, and Reza Ghaiumy Anaraky. 2022. Revisiting APCO. In Modern Socio-Technical Perspectives on Privacy. Springer International Publishing, Cham, 43–60.
  17. Daphne Chang, Erin L Krupka, Eytan Adar, and Alessandro Acquisti. 2016. Engineering information disclosure: Norm shaping designs. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 587–597.
  18. Peng Cheng, Zhe Ouyang, and Yang Liu. 2020. The effect of information overload on the intention of consumers to adopt electric vehicles. Transportation 47 (2020), 2067–2086.
  19. Eunyong Cheon, Jun Ho Huh, and Ian Oakley. 2023. GestureMeter: Design and Evaluation of a Gesture Password Strength Meter. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–19.
  20. Hichang Cho, Sungjong Roh, and Byungho Park. 2019. Of promoting networking and protecting privacy: Effects of defaults and regulatory focus on social media users’ preference settings. Computers in Human Behavior 101 (2019), 1–13.
  21. Richard Cookson. 2000. Framing effects in public goods experiments. Experimental Economics 3, 1 (2000), 55.
  22. Jose M Cortina. 1993. What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology 78, 1 (1993), 98.
  23. Arturo Cruz-Maya, Roxana Agrigoroaie, and Adriana Tapus. 2017. Improving user’s performance by motivation: Matching robot interaction strategy with user’s regulatory state. In Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22–24, 2017, Proceedings 9. Springer, 464–473.
  24. Mary J Culnan and Pamela K Armstrong. 1999. Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science 10, 1 (1999), 104–115.
  25. Anzo DeGiulio, Hanoom Lee, and Eleanor Birrell. 2021. “Ask App Not to Track”: The Effect of Opt-In Tracking Authorization on Mobile Privacy. In International Workshop on Emerging Technologies for Authorization and Authentication. Springer, 152–167.
  26. Arie Dijkstra. 2008. The psychology of tailoring-ingredients in computer-tailored persuasion. Social and Personality Psychology Compass 2, 2 (2008), 765–784.
  27. Tamara Dinev and Paul Hart. 2006. An extended privacy calculus model for e-commerce transactions. Information Systems Research 17, 1 (2006), 61–80.
  28. Tamara Dinev, Allen R McConnell, and H Jeff Smith. 2015. Research commentary—informing privacy research through information systems, psychology, and behavioral economics: Thinking outside the “APCO” box. Information Systems Research 26, 4 (2015), 639–655.
  29. Isaac Dinner, Eric J Johnson, Daniel G Goldstein, and Kaiya Liu. 2011. Partitioning default effects: Why people choose not to choose. Journal of Experimental Psychology: Applied 17, 4 (2011), 332.
  30. Serge Egelman, Andreas Sotirakopoulos, Ildar Muslukhov, Konstantin Beznosov, and Cormac Herley. 2013. Does my password go up to eleven? The impact of password meters on password selection. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2379–2388.
  31. Maha Elgarf, Natalia Calvo-Barajas, Ana Paiva, Ginevra Castellano, and Christopher Peters. 2021. Reward seeking or loss aversion? Impact of regulatory focus theory on emotional induction in children and their behavior towards a social robot. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–11.
  32. Caroline Faur, Jean-Claude Martin, and Celine Clavel. 2015. Matching artificial agents’ and users’ personalities: Designing agents with regulatory-focus and testing the regulatory fit effect. In CogSci.
  33. Antonio L Freitas and E Tory Higgins. 2002. Enjoying goal-directed action: The role of regulatory fit. Psychological Science 13, 1 (2002), 1–6.
  34. Jean-Francois Gajewski, Marco Heimann, and Luc Meunier. 2021. Nudges in SRI: The power of the default option. Journal of Business Ethics (2021), 1–20.
  35. Kristel M Gallagher, John A Updegraff, Alexander J Rothman, and Linda Sims. 2011. Perceived susceptibility to breast cancer moderates the effect of gain- and loss-framed messages on use of screening mammography. Health Psychology 30, 2 (2011), 145.
  36. Reza Ghaiumy Anaraky, Kaileigh Angela Byrne, Pamela J Wisniewski, Xinru Page, and Bart Knijnenburg. 2021. To disclose or not to disclose: Examining the privacy decision-making processes of older vs. younger adults. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
  37. Reza Ghaiumy Anaraky, Yao Li, and Bart Knijnenburg. 2021. Difficulties of measuring culture in privacy studies. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–26.
  38. Colin M Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L Toombs. 2018. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
  39. Jens Grossklags and Alessandro Acquisti. 2007. When 25 Cents is Too Much: An Experiment on Willingness-To-Sell and Willingness-To-Protect Personal Information. In WEIS. Citeseer.
  40. Junius Gunaratne and Oded Nov. 2015. Informing and improving retirement saving performance using behavioral economics theory-driven user interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 917–920.
  41. Junius Gunaratne, Lior Zalmanson, and Oded Nov. 2018. The persuasive power of algorithmic and crowdsourced advice. Journal of Management Information Systems 35, 4 (2018), 1092–1120.
  42. Dinah Gutermuth and Melvyn RW Hamstra. 2023. Are there gender differences in promotion–prevention self-regulatory focus? British Journal of Psychology (2023).
  43. Julia Hanson, Miranda Wei, Sophie Veys, Matthew Kugler, Lior Strahilevitz, and Blase Ur. 2020. Taking Data Out of Context to Hyper-Personalize Ads: Crowdworkers’ Privacy Perceptions and Decisions to Disclose Private Information. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
  44. E. T. Higgins. 1997. Beyond pleasure and pain. American Psychologist 52, 12 (1997), 1280–1300. https://doi.org/10.1037/0003-066x.52.12.1280
  45. E Tory Higgins. 2005. Value from regulatory fit. Current Directions in Psychological Science 14, 4 (2005), 209–213.
  46. E Tory Higgins. 2014. Promotion and prevention: How “0” can create dual motivational forces. Dual-Process Theories of the Social Mind 1 (2014), 423–436.
  47. E Tory Higgins. 2018. What distinguishes promotion and prevention? Attaining “+1” from “0” as non-gain versus maintaining “0” as non-loss. Polish Psychological Bulletin 49, 1 (2018), 40–49.
  48. E Tory Higgins, Ronald S Friedman, Robert E Harlow, Lorraine Chen Idson, Ozlem N Ayduk, and Amy Taylor. 2001. Achievement orientations from subjective histories of success: Promotion pride versus prevention pride. European Journal of Social Psychology 31, 1 (2001), 3–23.
  49. E Tory Higgins, Emily Nakkawita, and James FM Cornwell. 2020. Beyond outcomes: How regulatory focus motivates consumer goal pursuit processes. Consumer Psychology Review 3, 1 (2020), 76–90.
  50. E Tory Higgins, James Shah, and Ronald Friedman. 1997. Emotional responses to goal attainment: Strength of regulatory focus as moderator. Journal of Personality and Social Psychology 72, 3 (1997), 515.
  51. Jacob B Hirsh, Sonia K Kang, and Galen V Bodenhausen. 2012. Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits. Psychological Science 23, 6 (2012), 578–581.
  52. Li-tze Hu and Peter M Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6, 1 (1999), 1–55.
  53. Danny Yuxing Huang, Noah Apthorpe, Frank Li, Gunes Acar, and Nick Feamster. 2020. IoT Inspector: Crowdsourcing labeled network traffic from smart home devices at scale. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 2 (2020), 1–21.
  54. Jun Ho Huh, Hyoungshick Kim, Swathi SVP Rayala, Rakesh B Bobba, and Konstantin Beznosov. 2017. I’m too busy to reset my LinkedIn password: On the effectiveness of password reset emails. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 387–391.
  55. Hannah J Hutton and David A Ellis. 2023. Exploring User Motivations Behind iOS App Tracking Transparency Decisions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–12.
  56. Jon M Jachimowicz, Shannon Duncan, Elke U Weber, and Eric J Johnson. 2019. When and why defaults influence decisions: A meta-analysis of default effects. Behavioural Public Policy 3, 2 (2019), 159–186.
  57. Mohammad S Jalali, Jessica P Kaiser, Michael Siegel, and Stuart Madnick. 2019. The internet of things promises new benefits and risks: A systematic analysis of adoption dynamics of IoT products. IEEE Security & Privacy 17, 2 (2019), 39–48.
  58. Eric J Johnson, Steven Bellman, and Gerald L Lohse. 2002. Defaults, framing and privacy: Why opting in-opting out. Marketing Letters 13 (2002), 5–15.
  59. Eric J Johnson, Gerald Häubl, and Anat Keinan. 2007. Aspects of endowment: A query theory of value construction. Journal of Experimental Psychology: Learning, Memory, and Cognition 33, 3 (2007), 461.
  60. Gideon Keren. 2007. Framing, intentions, and trust–choice incompatibility. Organizational Behavior and Human Decision Processes 103, 2 (2007), 238–255.
  61. Margaret L Kern, Paul X McCarthy, Deepanjan Chakrabarty, and Marian-Andrei Rizoiu. 2019. Social media-predicted personality traits and values can help match people to their ideal jobs. Proceedings of the National Academy of Sciences 116, 52 (2019), 26459–26464.
  62. Bart Piet Knijnenburg, Alfred Kobsa, and Hongxia Jin. 2013. Counteracting the negative effect of form auto-completion on the privacy calculus. (2013).
  63. Mai Thi Thu Le, Hien Thu Pham, Minh Thi Nguyet Tran, and Thao Phuong Le. 2022. Intention of Personal Information Disclosure in Mobile Payment Apps. International Journal of E-Services and Mobile Applications (IJESMA) 14, 1 (2022), 1–14.
  64. Angela Y Lee and Jennifer L Aaker. 2004. Bringing the frame into focus: The influence of regulatory fit on processing fluency and persuasion. Journal of Personality and Social Psychology 86, 2 (2004), 205.
  65. Angela Y Lee, Punam Anand Keller, and Brian Sternthal. 2010. Value from regulatory construal fit: The persuasive impact of fit between consumer goals and message concreteness. Journal of Consumer Research 36, 5 (2010), 735–747.
  66. Roy J Lewicki and Chad Brinsfield. 2011. Framing trust: Trust as a heuristic. Framing Matters: Perspectives on Negotiation Research and Practice in Communication (2011), 110–135.
  67. J David Lewis and Andrew Weigert. 1985. Trust as a social reality. Social Forces 63, 4 (1985), 967–985.
  68. He Li, Jing Wu, Yiwen Gao, and Yao Shi. 2016. Examining individuals’ adoption of healthcare wearable devices: An empirical study from privacy calculus perspective. International Journal of Medical Informatics 88 (2016), 8–17.
  69. Ying-Ching Lin, Chiu-chi Angela Chang, and Yu-Fang Lin. 2012. Self-construal and regulatory focus influences on persuasion: The moderating role of perceived risk. Journal of Business Research 65, 8 (2012), 1152–1159.
  70. Penelope Lockwood, Alison L Chasteen, and Carol Wong. 2005. Age and regulatory focus determine preferences for health-related role models. Psychology and Aging 20, 3 (2005), 376.
  71. Chen-Chung Ma, Kuang-Ming Kuo, and Judith W Alexander. 2015. A survey-based study of factors that motivate nurses to protect the privacy of electronic medical records. BMC Medical Informatics and Decision Making 16, 1 (2015), 1–11.
  72. Eryn Ma and Eleanor Birrell. 2022. Prospective consent: The effect of framing on cookie consent decisions. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–6.
  73. Michelle Madejski, Maritza Lupe Johnson, and Steven Michael Bellovin. 2011. The failure of online social network privacy settings. Technical Report CUCS-010-11, Columbia University.
  74. Naresh K Malhotra, Sung S Kim, and James Agarwal. 2004. Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research 15, 4 (2004), 336–355.
  75. Mehdi Marani, Morteza Soltani, Mina Bahadori, Masoumeh Soleimani, and Atajahangir Moshayedi. 2023. The Role of Biometric in Banking: A Review. EAI Endorsed Transactions on AI and Robotics 2, 1 (2023).
  76. D Harrison McKnight, Vivek Choudhury, and Charles Kacmar. 2002. Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research 13, 3 (2002), 334–359.
  77. Beth E Meyerowitz and Shelly Chaiken. 1987. The effect of message framing on breast self-examination attitudes, intentions, and behavior. Journal of Personality and Social Psychology 52, 3 (1987), 500.
  78. Carey K Morewedge and Colleen E Giblin. 2015. Explanations of the endowment effect: An integrative review. Trends in Cognitive Sciences 19, 6 (2015), 339–348.
  79. Reza Mousavi, Rui Chen, Dan J Kim, and Kuanchin Chen. 2020. Effectiveness of privacy assurance mechanisms in users’ privacy protection on social networking sites from the perspective of protection motivation theory. Decision Support Systems 135 (2020), 113323.
  80. Mplus. 2023. Chi-Square Difference Testing Using the Satorra-Bentler Scaled Chi-Square. Accessed: 2023-11-18.
  81. Moses Namara, Daricia Wilkinson, Kelly Caine, and Bart P Knijnenburg. 2020. Emotional and practical considerations towards the adoption and abandonment of VPNs as a privacy-enhancing technology. Proceedings on Privacy Enhancing Technologies 2020, 1 (2020), 83–102.
  82. Eunjung No and Jin Ki Kim. 2014. Determinants of the adoption for travel information on smartphone. International Journal of Tourism Research 16, 6 (2014), 534–545.
  83. Oded Nov and Ofer Arazy. 2013. Personality-targeted design: Theory, experimental procedure, and preliminary results. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. 977–984.
  84. Harri Oinas-Kukkonen, Sami Pohjolainen, and Eunice Agyei. 2022. Mitigating Issues With/of/for True Personalization. Frontiers in Artificial Intelligence 5 (2022), 844817.
  85. Stefan Palan and Christian Schitter. 2018. Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance 17 (2018), 22–27.
  86. Sun-Young Park. 2012. The effects of message framing and risk perceptions for HPV vaccine campaigns: Focus on the role of regulatory fit. Health Marketing Quarterly 29, 4 (2012), 283–302.
  87. Eyal Peer, Laura Brandimarte, Sonam Samat, and Alessandro Acquisti. 2017. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology 70 (2017), 153–163.
  88. Eyal Peer, David Rothschild, Andrew Gordon, Zak Evernden, and Ekaterina Damer. 2022. Data quality of platforms and panels for online behavioral research. Behavior Research Methods (2022), 1.
  89. Lihong Peng, Yi Guo, and Dehua Hu. 2021. Information framing effect on public’s intention to receive the COVID-19 vaccination in China. Vaccines 9, 9 (2021), 995.
  90. Leilei Qu, Cheng Wang, Ruojin Xiao, Jianwei Hou, Wenchang Shi, and Bin Liang. 2019. Towards better security decisions: Applying prospect theory to cybersecurity. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–6.
  91. Alexander J Rothman and Peter Salovey. 1997. Shaping perceptions to motivate healthy behavior: The role of message framing. Psychological Bulletin 121, 1 (1997), 3.
  92. Sonam Samat and Alessandro Acquisti. 2017. Format vs. content: The impact of risk and presentation on disclosure decisions. In Proceedings of the 13th Symposium on Usable Privacy and Security (SOUPS 2017). 377–384.
  93. Lorena Sánchez Chamorro, Kerstin Bongard-Blanchy, and Vincent Koenig. 2023. Ethical Tensions in UX Design Practice: Exploring the Fine Line Between Persuasion and Manipulation in Online Interfaces. In Proceedings of the 2023 ACM Designing Interactive Systems Conference. 2408–2422.
  94. Shruti Sannon and Dan Cosley. 2019. Privacy, power, and invisible labor on Amazon Mechanical Turk. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
  95. Anastasia Sergeeva, Björn Rohles, Verena Distler, and Vincent Koenig. 2023. “We Need a Big Revolution in Email Advertising”: Users’ Perception of Persuasion in Permission-based Advertising Emails. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–21.
  96. Steve Sheng, Bryant Magnien, Ponnurangam Kumaraguru, Alessandro Acquisti, Lorrie Faith Cranor, Jason Hong, and Elizabeth Nunge. 2007. Anti-Phishing Phil: The design and evaluation of a game that teaches people not to fall for phish. In Proceedings of the 3rd Symposium on Usable Privacy and Security. 88–99.
  97. Dexin Shi, Christine DiStefano, Xiaying Zheng, Ren Liu, and Zhehan Jiang. 2021. Fitting latent growth models with small sample sizes and non-normal missing data. International Journal of Behavioral Development 45, 2 (2021), 179–192.
  98. Jonathan A Smith. 2015. Qualitative psychology: A practical guide to research methods. (2015), 1–312.
  99. Peter Story, Daniel Smullen, Rex Chen, Yaxing Yao, Alessandro Acquisti, Lorrie Faith Cranor, Norman Sadeh, and Florian Schaub. 2022. Increasing adoption of Tor Browser using informational and planning nudges. UMBC Faculty Collection (2022).
  100. Peter Story, Daniel Smullen, Yaxing Yao, Alessandro Acquisti, Lorrie Faith Cranor, Norman Sadeh, and Florian Schaub. 2021. Awareness, adoption, and misconceptions of web privacy tools. UMBC Faculty Collection (2021).
  101. Alina Stöver, Nina Gerber, Sushma Kaushik, Max Mühlhäuser, and Karola Marky. 2021. Investigating simple privacy indicators for supporting users when installing new mobile apps. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–7.
  102. Jeremy Sugarman, Douglas C McCrory, and Robert C Hubal. 1998. Getting meaningful informed consent from older adults: A structured literature review of empirical research. Journal of the American Geriatrics Society 46, 4 (1998), 517–524.
  103. S Shyam Sundar, Jinyoung Kim, Mary Beth Rosson, and Maria D Molina. 2020. Online privacy heuristics that predict information disclosure. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
  104. Jamie Taylor, Joseph Devlin, and Kevin Curran. 2012. Bringing location to IP addresses with IP geolocation. Journal of Emerging Technologies in Web Intelligence 4, 3 (2012).
  105. Amos Tversky and Daniel Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211, 4481 (1981), 453–458.
  106. John Paul Vargheese, Somayajulu Sripada, Judith Masthoff, Nir Oren, Patricia Schofield, and Vicki L Hanson. 2013. Persuasive dialogue for older adults: Promoting and encouraging social interaction. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems. 877–882.
  107. Mei Wang, Marc Oliver Rieger, and Thorsten Hens. 2017. The impact of culture on loss aversion. Journal of Behavioral Decision Making 30, 2 (2017), 270–281.
  108. Jeffrey Warshaw, Tara Matthews, Steve Whittaker, Chris Kau, Mateo Bengualid, and Barton A Smith. 2015. Can an Algorithm Know the “Real You”? Understanding People’s Reactions to Hyper-personal Analytics Systems. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 797–806.
  109. Lioba Werth and Jens Foerster. 2007. How regulatory focus influences consumer behavior. European Journal of Social Psychology 37, 1 (2007), 33–51.
  110. Yuxin Xie, Soosung Hwang, and Athanasios A Pantelous. 2018. Loss aversion around the world: Empirical evidence from pension funds. Journal of Banking & Finance 88 (2018), 52–62.
  111. Jingjun David Xu, Ronald T Cenfetelli, and Karl Aquino. 2016. Do different kinds of trust matter? An examination of the three trusting beliefs on satisfaction and purchase behavior in the buyer–seller context. The Journal of Strategic Information Systems 25, 1 (2016), 15–31.
  112. Changmin Yan. 2015. Persuading people to eat less junk food: A cognitive resource match between attitudinal ambivalence and health message framing. Health Communication 30, 3 (2015), 251–260.
  113. Funda Nalbantoğlu Yilmaz. 2019. Comparison of different estimation methods used in confirmatory factor analyses in non-normal data: A Monte Carlo study. International Online Journal of Educational Sciences 11, 4 (2019), 131–140.
  114. Brahim Zarouali, Tom Dobber, Guy De Pauw, and Claes de Vreese. 2022. Using a personality-profiling algorithm to investigate political microtargeting: Assessing the persuasion effects of personality-tailored ads on social media. Communication Research 49, 8 (2022), 1066–1091.
  115. Meng Zhang, Guang-yu Zhang, Dogan Gursoy, and Xiao-rong Fu. 2018. Message framing and regulatory focus effects on destination image formation. Tourism Management 69 (2018), 397–407.
  116. Verena Zimmermann and Karen Renaud. 2021. The nudge puzzle: Matching nudge interventions to cybersecurity decisions. ACM Transactions on Computer-Human Interaction (TOCHI) 28, 1 (2021), 1–45.

Published in

CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems
May 2024, 18961 pages
ISBN: 9798400703300
DOI: 10.1145/3613904
Copyright © 2024 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
Published: 11 May 2024
