Brief Report
Revised

Prediction of self-efficacy in recognizing deepfakes based on personality traits 

[version 2; peer review: 1 approved, 1 approved with reservations]
PUBLISHED 10 Jul 2023

Abstract

Background: Deepfake technology is still relatively new, but concerns are increasing as deepfakes become harder to spot. Deepfakes are realistic-looking videos or images, generated by artificial intelligence-based technology, that show people doing or saying things they never actually did or said; the first question we need to ask is how good humans are at recognizing them. Research has shown that an individual's self-reported efficacy correlates with their ability to detect deepfakes, and previous studies suggest that one of the most fundamental predictors of self-efficacy is personality. In this study, we ask: how do people's personality traits influence their efficacy in recognizing deepfakes?

Methods: A predictive correlational design with multiple linear regression analysis was used. The participants were 200 Indonesian young adults.

Results: Only the Honesty-humility and Agreeableness traits predicted the efficacy, in the negative and positive directions, respectively; Emotionality, Extraversion, Conscientiousness, and Openness did not.

Conclusion: Self-efficacy in spotting deepfakes can be predicted by certain personality traits.

Keywords

deepfake detection, deepfake recognition, self-efficacy, personality, traits

Revised Amendments from Version 1

The Introduction now includes the reasons for choosing the HEXACO personality traits as predictors, as well as the hypotheses proposed. The Methods section adds more credible references on the size of Generation Z in Indonesia and the sampling technique, the introductory text shown to participants, and the criteria used to test the validity and reliability of the instruments. The title of Figure 1 has been clarified. The Discussion adds ten paragraphs focusing on the broader implications of how certain personality traits help people avoid falling for deepfakes. The Extended data section now links to the analysis script.


Introduction

One of the biggest threats and disruptions to privacy and democracy in this digital age is deepfake technology. A 'deepfake', or synthetic media, is a video-editing technology that manipulates and mimics a person's facial expressions, mannerisms, voice, and inflections based on large amounts of data, creating a hyper-realistic video that depicts them doing or saying things that never happened (Westerlund, 2019).

The current consensus is that the average human's ability to recognize deepfakes is similar to that of machines (Vitak, 2022). However, the results seem to vary depending on individuals' confidence and belief in their own cognitive abilities. Some studies suggest that individual differences determine whether a person is good at recognizing deepfakes (Shahid et al., 2022). In this study, we examine the relationship between personality traits and people's self-reported efficacy in recognizing deepfakes.

The HEXACO personality model describes six dimensions of personality: Honesty-humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to experience (Lee & Ashton, 2009; unpublished report; Zettler et al., 2020). This model was selected for three reasons: (1) it covers a wider and more complex range of personality facets than the five-factor model (Ashton & Lee, 2007); (2) the Honesty-Humility factor measures traits such as sincerity, boastfulness, pretentiousness, and fair-mindedness that are associated with dishonest or inauthentic behaviors (Ashton & Lee, 2008) and thus relate to self-reported efficacy; and (3) the model is flexible enough to be applied to contextually unique situations (Oostrom et al., 2019; Pletzer et al., 2020). Advances in information technology, including AI, socially intelligent robots, and other autonomous systems, will have a profound impact on human life, and research on personality is needed to understand and address individual differences in adapting to these new challenges (Matthews et al., 2021), deepfakes included. Multiple studies in various contexts have shown that personality traits influence an individual's self-efficacy (Lodewyk, 2018).

The Honesty-humility dimension reflects an individual's fair-mindedness, modesty, and cooperativeness. A person high in Honesty-humility might not think they are good at recognizing deepfakes regardless of their true ability, while an individual low in Honesty-humility might be biased in their self-reported ability to recognize a deepfake.

Emotionality reflects an individual's degree of anxiousness, fearfulness, and sentimentality - the experience of anxiety in response to life's stressors. A sense of being able to recognize deepfakes can help reduce that anxiety. One way to become less anxious is to appreciate deepfakes as a "cultural technology" (Cover, 2022) that carries artistic and creative value. People with high Emotionality may therefore be more motivated to use deepfakes as an "antidote" to the pressures of everyday life and so report higher efficacy in detecting them - treating deepfakes not as something to be avoided but as something to be used according to their interests (technology appropriation; see Prayoga & Abraham, 2017).

Extraversion reflects an individual's degree of sociability. Individuals high in Extraversion might have higher self-efficacy owing to their higher social esteem, boldness, and familiarity. Van der Zee et al. (2002) found that extroverts are friendly and less formal in their interactions with others, which is closely connected with emotion recognition (part of emotional intelligence) and affects the success of negotiations. In the paradigm of the social construction of technology (Kwok & Koh, 2021), humans are parties who "negotiate" with technology: by getting to know a technology such as deepfakes better, they can avoid becoming its victims - or misappropriating it for malicious ends - and instead act as agents who use technology to improve humanity and prevent the harm it poses (such as deepfakes).

An individual's degree of cooperation, tolerance, flexibility, and patience is reflected in the Agreeableness dimension. More agreeable people are at greater security risk, and social engineers (such as deepfake designers) specifically target Agreeableness attributes like benevolence and compliance.

Conscientiousness reflects precision, cautiousness, and a degree of self-control. Individuals higher in the Conscientiousness trait might have higher self-efficacy in recognizing deepfakes. This is in line with the hypothesis of Köbis et al. (2021) that increasing Conscientiousness motivates people to invest cognitive resources in detecting deepfakes, thereby enhancing their capacity to recognize the truth and decreasing their desire to spread false information.

Openness reflects the willingness to experience new things and is associated with lower risk aversion. Research by Uebelacker and Quiel (2014) shows that open people fail to develop suitable coping mechanisms because they misjudge their vulnerability to becoming targets of social engineering (such as by deepfake designers).

This confirmatory study tested the hypotheses that the HEXACO personality dimensions, i.e. (1) Honesty-humility, (2) Emotionality, (3) Extraversion, (4) Agreeableness, (5) Conscientiousness, and (6) Openness, can predict self-reported efficacy in recognizing deepfakes.

Methods

Data were collected in a single stage. No exposure or intervention was involved because this was not an experimental study.

Ethical considerations

This present study was initially approved by the Bina Nusantara University Research Committee, vide Letter of Approval No. 042/VR.RTT/VI/2021, strengthened with Letter No. 127/VR.RTT/VI/2022. The ethical decree is stated in Article 1 Paragraph 2 of the Letter.

Written informed consent was obtained from all participants of this study, which included consent for the research procedure to be carried out and for the publication of this article containing anonymized, analyzed, and interpreted data.

Participants filled out an electronic questionnaire consisting of demographic data and two scales, namely HEXACO Personality Traits (the predictors) and Self-efficacy in recognizing deepfakes (the criterion variable). The design of this study was predictive correlational.

The eligibility criterion was young adults aged 18–25 years (Generation Z), which, according to a YouGov survey, is an age group concerned about a deepfake video of themselves going viral online (Help Net Security, 2022; unpublished report). In addition, Generation Z accounts for more than a quarter, precisely 26.47%, of Indonesia's total population (Badan Pusat Statistik, 2020a, 2020b). This group was less likely to fall victim to misinformation such as deepfakes compared with older generations (Caramancion, 2021), and the 18 to 24 age group was the most confident in detecting deepfakes (iProov, 2020). Understanding the self-efficacy of this age group in relation to their individual differences therefore offers great potential for deepfake detection strategies.

The participants of this study were 200 young adults (139 women, 61 men; M = 22.06 years; SD = 1.98 years) from a non-Western country, Indonesia, recruited using a convenience sampling technique. The sample size was determined using the Sample Size Calculator (Calculator.net, 2022) with the following parameters: a confidence level of 95%, a population size of 71,509,082 with a population proportion of 26.47% (the number and share of Generation Z in Indonesia), and a margin of error of 6.2%, which falls within the acceptable 3–7% range (National Institutes of Health, 2005; unpublished report).
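For readers who want to reproduce this figure, the sketch below applies the standard Cochran formula for a proportion with a finite-population correction to the parameters reported above; this is an assumption about what the online calculator computes, not its documented method, and the function name is ours.

```python
import math

def sample_size(z: float, proportion: float, margin_of_error: float, population: int) -> int:
    """Cochran's formula for a proportion, with finite-population correction."""
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Parameters reported in the text (z = 1.96 for a 95% confidence level)
print(sample_size(z=1.96, proportion=0.2647, margin_of_error=0.062,
                  population=71_509_082))  # about 195, below the 200 participants recruited
```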

The research took 6 months from planning through participant recruitment to data analysis. Data collection took place online in Indonesia over 3 months, from 1 May to 31 July 2022. The study was cross-sectional, so no follow-up procedure was applied.

To measure self-efficacy in recognizing deepfakes, the authors constructed a self-efficacy measuring tool based on Bandura's theory (1977), adapted with the recommended checklist of things to pay attention to when detecting deepfakes from the cyber-security company Norton, taken from its unpublished report (Johansen, 2020). The introductory question was: "How sure are you that you can recognize or detect the presence of non-original or unnatural elements (e.g. because it has been EDITED/MANIPULATED) in every image, photo, sound, and video you encounter?" Example items were: (1) I feel able to see abnormal eye movements; (2) I feel that I recognize awkward faces, e.g. if someone's face is pointing in one direction and the nose is pointing the other way; (3) I feel able to see any inappropriate skin tone in a video; (4) I am confident of being able to recognize when a person's face does not seem to convey the emotion that should be in line with what the person is supposed to say. There were six answer choices, ranging from "Feeling Very Incompetent" (scored 1) to "Feeling Very Capable" (scored 6).

To measure personality traits, this study used the short version of the HEXACO-PI-R (60 items) (Lee & Ashton, 2009) with its scoring key. Response options ranged from "Strongly Disagree" (scored 1) to "Strongly Agree" (scored 6). The author translated the instrument into Indonesian.
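As an illustration of how such a Likert-type instrument is typically scored, the sketch below averages item responses per dimension after reverse-coding negatively keyed items. The item numbers and reverse-keyed set shown are placeholders only, not the published HEXACO-60 scoring key.

```python
import pandas as pd

# Placeholder item assignments; replace with the published HEXACO-60 scoring key.
DIMENSIONS = {
    "Honesty-humility": [6, 12, 18, 24, 30, 36],  # hypothetical item numbers
    "Emotionality": [5, 11, 17, 23, 29, 35],      # hypothetical item numbers
}
REVERSE_KEYED = {12, 24, 35}  # hypothetical reverse-keyed items
REVERSAL_CONSTANT = 7         # 6-point scale: reversed score = 7 - raw score

def score_dimension(responses: pd.DataFrame, items: list) -> pd.Series:
    """Mean item score per participant, reverse-coding where required."""
    scored = []
    for item in items:
        raw = responses[f"item_{item}"]
        scored.append(REVERSAL_CONSTANT - raw if item in REVERSE_KEYED else raw)
    return pd.concat(scored, axis=1).mean(axis=1)

def score_all(responses: pd.DataFrame) -> pd.DataFrame:
    """One column of dimension scores per HEXACO dimension."""
    return pd.DataFrame({dim: score_dimension(responses, items)
                         for dim, items in DIMENSIONS.items()})
```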

All psychological scales in the questionnaire were tested for validity and reliability with the criteria of item validity (corrected item-total correlation) of at least 0.250 and internal consistency (Cronbach’s α) of at least 0.600. A number of HEXACO trait items were eliminated because they did not meet these criteria. The test results are listed in Table 1.
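A minimal sketch of these two criteria is shown below, assuming raw item responses are held in a pandas DataFrame with one column per item; in practice, item elimination is usually iterative, with the statistics re-checked after each drop.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

def retain_valid_items(items: pd.DataFrame, r_min: float = 0.250) -> pd.DataFrame:
    """Keep only items meeting the corrected item-total correlation cutoff used in this study."""
    citc = corrected_item_total(items)
    return items[citc[citc >= r_min].index]

# A scale is then considered reliable if cronbach_alpha(retained_items) >= 0.600.
```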

The underlying data (Abraham & Alamsyah, 2022a), complete questionnaire (Abraham & Alamsyah, 2022b), and analysis script (Abraham, 2023) are openly available.

Results

Demographically, 90 participants were residents of DKI Jakarta province, the capital of Indonesia; 86 were residents of Java Island outside DKI Jakarta; 21 were residents of Sumatera Island; and the remaining 3 came from East Kalimantan, North Maluku, and West Nusa Tenggara provinces.

The psychometric properties and descriptive statistics of the variables are shown in Table 1. The results indicate that the residuals of the multiple regression model are normally distributed (Figure 1) and that all HEXACO personality dimensions correlated negatively with self-efficacy in recognizing deepfakes, except Agreeableness, which correlated positively (see Table 2). However, the regression analysis, F(6, 193) = 13.295, p < 0.001, R2 = 0.292, showed that only Honesty-humility and Agreeableness predicted the efficacy (see Table 3). No difference was found between women and men in terms of self-efficacy, t(198) = -0.120, p = 0.904, Cohen's d = 0.018, SE of Cohen's d = 0.154.
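The sketch below reproduces this analysis pipeline with statsmodels and SciPy. The CSV file and column names are hypothetical; the authors' own analysis script is linked in the Extended data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("hexaco_deepfake.csv")  # hypothetical file, one row per participant

predictors = ["Honesty_humility", "Emotionality", "Extraversion",
              "Agreeableness", "Conscientiousness", "Openness"]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["Self_efficacy"], X).fit()
print(model.summary())  # F, R-squared, unstandardized B, t and p for each predictor

# Standardized coefficients (beta) can be obtained by z-scoring all variables before fitting.

# Normality of the residuals (the Q-Q plot shown in Figure 1)
sm.qqplot(model.resid, line="45", fit=True)

# Sex difference in self-efficacy (independent-samples t-test)
women = df.loc[df["sex"] == "woman", "Self_efficacy"]
men = df.loc[df["sex"] == "man", "Self_efficacy"]
print(stats.ttest_ind(women, men))
```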

Table 1. Descriptives (N=200).

| Variable | Cronbach's α | Corrected item-total correlations | n of items [before; after validation] | M | SD | SE |
|---|---|---|---|---|---|---|
| Honesty-humility | 0.851 | 0.534–0.723 | 10; 6 | 2.910 | 1.015 | 0.072 |
| Emotionality | 0.671 | 0.433–0.528 | 10; 3 | 2.680 | 0.952 | 0.067 |
| Extraversion | 0.760 | 0.363–0.811 | 10; 5 | 2.325 | 0.766 | 0.054 |
| Agreeableness | 0.698 | 0.305–0.533 | 10; 6 | 3.782 | 0.653 | 0.046 |
| Conscientiousness | 0.817 | 0.478–0.625 | 10; 6 | 2.664 | 0.894 | 0.063 |
| Openness | 0.729 | 0.472–0.619 | 10; 5 | 2.702 | 0.819 | 0.058 |
| Self-efficacy in recognizing deepfake | 0.935 | 0.483–0.696 | 23; 23 | 4.360 | 0.762 | 0.054 |

Figure 1. Normal probability (Q-Q) plot of multiple regression model’s standardized residuals.

Table 2. Pearson’s Correlations (N=200).

| Variable | 1 (H) | 2 (E) | 3 (X) | 4 (A) | 5 (C) | 6 (O) |
|---|---|---|---|---|---|---|
| 1. H: Pearson's r (p) | | | | | | |
| 2. E: Pearson's r (p) | 0.641*** (1.524e-24) | | | | | |
| 3. X: Pearson's r (p) | 0.510*** (1.178e-14) | 0.378*** (3.511e-8) | | | | |
| 4. A: Pearson's r (p) | -0.487*** (2.469e-13) | -0.548*** (4.219e-17) | -0.364*** (1.132e-7) | | | |
| 5. C: Pearson's r (p) | 0.740*** (6.084e-36) | 0.606*** (1.965e-21) | 0.554*** (1.668e-17) | -0.443*** (5.348e-11) | | |
| 6. O: Pearson's r (p) | 0.674*** (7.910e-28) | 0.591*** (3.221e-20) | 0.460*** (7.048e-12) | -0.483*** (4.106e-13) | 0.641*** (1.713e-24) | |
| 7. SE: Pearson's r (p) | -0.463*** (5.244e-12) | -0.367*** (9.018e-8) | -0.285*** (4.268e-5) | 0.465*** (4.229e-12) | -0.403*** (3.278e-9) | -0.381*** (2.591e-8) |

Note. H = Honesty-humility; E = Emotionality; X = Extraversion; A = Agreeableness; C = Conscientiousness; O = Openness; SE = Self-efficacy in recognizing deepfake.

* p < 0.05, ** p < 0.01, *** p < 0.001.

Table 3. Multiple linear regression predicting self-efficacy in recognizing deepfake (N=200).

| Model | Predictor | B | SE | β | t | p | Tolerance | VIF |
|---|---|---|---|---|---|---|---|---|
| H0 | (Intercept) | 4.360 | 0.054 | | 80.871 | 3.532e-154 | | |
| H1 | (Intercept) | 3.733 | 0.491 | | 7.603 | 1.234e-12 | | |
| H1 | Honesty-humility | -0.192 | 0.077 | -0.255 | -2.491 | 0.014 | 0.349 | 2.863 |
| H1 | Emotionality | 0.023 | 0.070 | 0.029 | 0.332 | 0.740 | 0.475 | 2.104 |
| H1 | Extraversion | 0.003 | 0.075 | 0.003 | 0.044 | 0.965 | 0.655 | 1.528 |
| H1 | Agreeableness | 0.361 | 0.088 | 0.309 | 4.090 | 6.326e-5 | 0.643 | 1.555 |
| H1 | Conscientiousness | -0.068 | 0.085 | -0.079 | -0.798 | 0.426 | 0.372 | 2.691 |
| H1 | Openness | -0.026 | 0.083 | -0.028 | -0.312 | 0.755 | 0.462 | 2.166 |

Note. Tolerance and VIF are collinearity statistics.

Table 3 shows the unstandardized (B) and standardized (β) estimates for each predictor, each adjusted for the remaining personality dimensions as potential confounders.
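The Tolerance and VIF columns can be computed directly from the same design matrix used in the regression sketch above; the helper below is an illustrative addition, not part of the authors' analysis script.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def collinearity_table(X: pd.DataFrame) -> pd.DataFrame:
    """VIF and Tolerance (1/VIF) for each predictor in a design matrix that includes a constant."""
    rows = []
    for i, name in enumerate(X.columns):
        if name == "const":
            continue  # skip the intercept column
        vif = variance_inflation_factor(X.values, i)
        rows.append({"predictor": name, "VIF": vif, "Tolerance": 1 / vif})
    return pd.DataFrame(rows)
```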

Discussion

Recognizing all deepfake elements requires a certain level of analytical capability and general intelligence (Ahmed, 2021). We need to look not just at people's cognitive abilities, but also at their belief that they can exercise those abilities to contextually recognize deepfake information - in other words, their self-efficacy.

This study found that, to a certain degree, individuals' personality traits do affect their self-efficacy in detecting deepfakes. Because the expression of self-efficacy varies from context to context, it is not surprising that some traits predict it better than others.

The Honesty-humility trait negatively predicted self-efficacy in recognizing deepfakes, β = -0.255, t(193) = -2.491, p < 0.05 (Table 3). "Persons with very high scores on the Honesty-Humility scale avoid manipulating others for personal gain, feel little temptation to break rules, are uninterested in lavish wealth and luxuries, and feel no special entitlement to elevated social status" (Lee & Ashton, 2009, para 1). People high in Honesty-humility do not want to manipulate others but, ironically, this trait makes them vulnerable to being manipulated by others (Ternovski et al., 2021), including by deepfakes. This can drive higher error rates in recognizing deepfakes, exposing weaknesses that could be exploited.

Thompson et al. (2016, p. 54) once stated that people high in "Honesty-Humility may not only be less likely to exploit others, they may also be strongly opposed to being the target of exploitation." Nevertheless, this study shows that, in the presence of deepfake technology with its high potential to manipulate someone, Generation Z members high in Honesty-Humility feel helpless, so the trait is less functional in detecting deepfakes.

That is a notable finding of this study, and it could be explained by the results of Weger et al. (2022) that Honesty-Humility correlates negatively with general (r = -0.168, p = 0.002) and specific (r = -0.270, p < 0.001) technology acceptance. This is reinforced by the findings of Sindermann et al. (2020) that Honesty-Humility correlates negatively with all aspects of technology acceptance, namely perceived usefulness (r = -0.25, p < 0.001), perceived ease of use (r = -0.16, p < 0.001), intention to use (r = -0.17, p < 0.001), and predicted usage (r = -0.18, p < 0.001). In fact, someone with high technology affinity perceives deepfakes less negatively (Kleine, 2022), presumably because they feel they have knowledge of and "mastery" over deepfakes.

Therefore, to avoid falling for deepfakes, Generation Z members high in Honesty-Humility need to reduce their conservative attitude towards technology so that they can detect potential harm and even use deepfakes effectively. Future studies can test this with an experimental design that measures this trait and attitudes towards technology, together with people's ability to detect malicious vs. non-malicious deepfake videos.

Emotionality did not predict self-efficacy in recognizing deepfakes, β = 0.029, t(193) = 0.332, p > 0.05 (Table 3). Austin and Vahle (2016) found that Emotionality - a trait positively correlated with empathy and social engagement - predicts the Enhance dimension (providing support and reassurance as interpersonal emotion-management strategies) and the Divert dimension (using humor and pleasure pursuits to lift the spirits of others) of the Managing the Emotions of Others Scale (MEOS). This means the Emotionality dimension is also positively correlated with the emotional intelligence needed to recognize deepfakes. Yang et al. (2022) emphasized the pivotal role of emotional intelligence in improving artificial intelligence technology so that it becomes a useful deepfake in the context of clinical encounters. Given that deepfakes themselves are increasingly engineered with elements of emotional intelligence, recognizing them also requires better emotional intelligence, which can be found in people higher in Emotionality. However, individuals high in Emotionality might be less confident in their own ability to accurately recognize deepfakes, as they might consider more factors and doubt themselves more (Thompson, 1998). With this uncertain direction, it is not surprising that Emotionality showed no predictive power on self-efficacy.

Extraversion also did not predict self-efficacy in recognizing deepfakes, β = 0.003, t(193) = 0.044, p > 0.05 (Table 3). Hosler et al. (2021) argue that detecting deepfakes is essentially recognizing unnatural displays of emotion in voices and faces. Emotion apparently plays a central role in recognizing deepfakes because emotion is a higher-level semantic construct - so far difficult to counterfeit - that could offer hints for detection. In an unpublished report, Kill (2021) states that emotion recognition is an ability honed in people with a high Extraversion trait. However, Extraversion is also positively correlated with excitement-seeking and a lower preference for consistency (Uebelacker & Quiel, 2014), whereas "pairwise self-consistency learning" (Zhao et al., 2021, p. 15023) is needed to recognize deepfakes. The effects of Extraversion therefore appear to cancel each other out, resulting in no predictive correlation with self-efficacy.

The Agreeableness trait predicted self-efficacy in recognizing deepfakes, although not as hypothesized: the direction was positive rather than negative, β = 0.309, t(193) = 4.090, p < 0.05 (Table 3). People with high Agreeableness are eager to cooperate and reach compromises with others (Lee & Ashton, 2009). One of the good "others" in the context of deepfake recognition or detection is the "wisdom of the crowds" (Groh et al., 2022), which Surowiecki (2004) defines as "the collective intelligence that arises when our imperfect judgments are aggregated". Agreeing with (being highly agreeable towards) this collective intelligence should reduce the chance of falsely recognizing deepfakes, including algorithmic attempts that present visual obstructions such as misalignment, partial occlusion, and inversion.

The Agreeableness trait also correlates positively with perceptions of forensic science (Sarki & Mat Saat, 2020), and deepfake detection can be seen as part of forensic science. People high in Agreeableness are known for their cooperativeness, and Agreeableness is often described as a safeguard against antisocial behavior (Frias Armenta & Corral-Frías, 2021), including - in the context of this study - deepfake creation and distribution. They esteem innovative forensic methods in their environment and hold a positive attitude towards them for the common good (Sarki & Mat Saat, 2020).

People who are more agreeable tend to make more accurate decisions about whether to believe information, which reduces their vulnerability to victimization (Cho et al., 2016). This is confirmed by the empirical findings of van Winsen (2020) that agreeable individuals exhibit more secure online behavior and are less likely to become victims of cybercrime.

This study found that Conscientiousness did not predict self-efficacy in recognizing deepfakes, β = -0.079, t(193) = -0.798, p > 0.05 (Table 3). Although deepfake recognition requires conscientious characteristics such as prudence and a sense of responsibility, Lawson and Kakkar (as cited in Sütterlin et al., 2022) recently found that Conscientiousness is partially correlated with belief in conspiracy and conservatism, making it less efficacious in recognizing deepfakes.

This study found that Openness did not predict self-efficacy in recognizing deepfakes, β = -0.028, t(193) = -0.312, p > 0.05 (Table 3). In an unpublished report, Jin (2020) found that valuing Openness to change does not correlate with the perceived ethical implications of deepfakes (e.g., "These videos can uncontrollably deceive and influence many people", p. 24). In addition, in contrast with the clear direction of the influence of Agreeableness and Honesty-humility on self-efficacy, the direction of the Openness prediction is ambiguous. On the one hand, Openness is related to a lower ability to recognize deepfakes: Openness correlates positively with cognitive ability (Curtis et al., 2015; Rammstedt et al., 2016), but cognitive ability encourages more protective online behavior, indicated by more interest in discussing how people who use deepfakes manipulate their audiences rather than developing the ability to apply scepticism to the authenticity of videos (Ahmed, 2021). On the other hand, there is logic in favor of Openness as a buffer against being manipulated by social engineering. For example, Eftimie et al. (2022) associated Openness with cognitive exploration tendencies which, based on their study, stimulate responsible behavior including security best practices - which, in the context of this study, means deepfake recognition.

Based on the study findings, there are two "optimal" personality traits worth exercising to avoid falling for deepfakes, i.e. Honesty-Humility and Agreeableness. First, the Honesty-Humility trait needs to be positioned strategically so that people with this trait cannot easily be trapped or "absorbed" by the counterfeits of deepfake technology, i.e. by reducing the conventionalism (Leone et al., 2012) towards technology that is allegedly inherent in this trait. Second, the Agreeableness trait should be directed towards the various deepfake detection methods and technologies that benefit community members.

A number of studies have shown that both general and technological self-efficacy can predict the actual ability associated with using a technology (Alnoor et al., 2020; Raghuram et al., 2003; Tetri & Juujärvi, 2022). This is because these efficacies shape the organization of actions, behavioral intentions and strategies, and preparedness for change, as well as reducing the emotional sensitivity that is a source of performance anxiety.

Of course, there is no denying the possibility of inflated or overestimated belief, or the Dunning-Kruger effect (Koc et al., 2022), which in the context of this study means that people with high self-efficacy in detecting deepfakes may actually have low actual ability. In their research on bullshit detection, Cavojová et al. (2022) explained that this overestimation is caused by metacognitive (un)awareness: "These highly overconfident people suffer from a double curse – not only they do not know, but they also do not know that they do not know ... [that] is the result of self-enhancement motivation" (pp. 1-2).

A limitation of this research is the use of non-probability sampling, which limits generalizability. Nevertheless, this study has implications for the development of psychoinformatics - a branch of psychology that explains attitudes, competencies, and behavior in using information technology. Further research should use random sampling and experimental methods to establish a causal - not only predictive - relationship between personality traits and deepfake detection self-efficacy.
