
Patient Acceptance of Self-Monitoring on a Smartwatch in a Routine Digital Therapy: A Mixed-Methods Study

Published: 29 November 2023


Abstract

Self-monitoring of mood and lifestyle habits is the cornerstone of many therapies, but it is still hindered by persistent issues including inaccurate records, gaps in monitoring, patient burden, and perceived stigma. Smartwatches have the potential to deliver enhanced self-reports, but their acceptance in clinical mental health settings is unexplored and rendered difficult by a complex theoretical landscape and the need for a longitudinal perspective. We present the Mood Monitor smartwatch application for the self-monitoring of mood and lifestyle habits. We investigated patient acceptance of the app within a routine 8-week digital therapy. We recruited 35 patients of the UK’s National Health Service and evaluated their acceptance through three online questionnaires and a post-study interview. We assessed the clinical feasibility of the Mood Monitor by comparing clinical, usage, and acceptance metrics obtained from these 35 patients with those from an additional 34 patients without a smartwatch (digital treatment as usual). Findings showed that the smartwatch app was highly accepted by patients, revealed which factors facilitated and impeded this acceptance, and supported clinical feasibility. We provide guidelines for the design of self-monitoring on a smartwatch and reflect on the conduct of human-computer interaction research evaluating user acceptance of mental health technologies.


1 INTRODUCTION

Mental health interventions habitually aim to help people develop self-awareness and adopt healthy behaviors. Self-monitoring one’s mood and lifestyle habits is often recommended as a means of identifying patterns and encouraging healthy habits. Technology opens up a multitude of opportunities for improving such interventions, and as a result, the area of digital mental health is rapidly growing within the Human-Computer Interaction (HCI) field. Previous work has shown that the digital delivery of mental health interventions in routine care, such as through platforms delivering Internet-based Cognitive Behavioral Therapy (iCBT) [80, 110], can improve patient engagement with self-monitoring. Yet, important issues persist, including inaccurate records, gaps in monitoring, patient burden [7, 35, 70], and perceived stigma [2, 9]. Smartwatches are wearable devices specifically designed to enable the passive collection of lifestyle information (through embedded sensors) and to provide a discreet, immediate way to interact with content; they are commonly used in the general population [28, 69] and show high potential as an acceptable means of engaging in self-report [26, 62].

We present the Mood Monitor, a smartwatch application for self-monitoring of mood and lifestyle habits within a routine iCBT intervention for depression offered by the UK’s National Health Service (NHS). The Mood Monitor enables in-the-moment mood logging and automated recording of lifestyle habits (including sleep and physical activity). It has the potential to help patients consistently record accurate self-report entries, with minimal burden. However, although smartwatches are commonly used to monitor physiological data (e.g., for fitness exercises), we explore here a new clinical context of use for these devices. Patient concerns regarding self-monitoring on a smartwatch in this sensitive context might result in a lack of acceptance of the technology, and previous research has shown that insufficient acceptance can impede individuals’ uptake and continued use of technology [92]. This might risk exacerbating patients’ lack of adherence or drop-out from the digital therapy. Thus, as we are introducing the Mood Monitor smartwatch app, it is crucial to investigate its acceptance by patients and the factors which facilitate or impede it, as well as to reflect on how they can be addressed in design [67]. Moreover, as we are exploring a novel use for smartwatches in a sensitive mental health context, it is important to check whether the introduction of this technology impacts on patient engagement or clinical outcomes.

We conducted a mixed-methods study with 69 patients receiving the iCBT intervention. Our study addressed the following questions:

What is the level of patient acceptance of mood and lifestyle self-monitoring on the smartwatch, as part of an established iCBT intervention?

What are the facilitators and barriers to patient acceptance of this new modality for self-monitoring?

What is the clinical feasibility of the smartwatch-delivered self-monitoring (i.e., does it impact patient clinical outcomes, usage of the therapy program, and acceptance of the self-report)?

The contributions of this work are fourfold. First, we introduce the Mood Monitor smartwatch app for self-monitoring in routine digital therapy. Next, we evaluate patient acceptance of the Mood Monitor in real clinical settings and identify facilitators and barriers. Then, we examine the clinical feasibility of integrating the Mood Monitor app into the digital therapy program. Finally, we propose guidelines for designing self-monitoring on a smartwatch, and we reflect on the conduct of HCI research to evaluate user acceptance of mental health technologies. The study protocol was published previously [66].


2 RELATED WORK

We first describe mood monitoring and existing types of delivery, before reviewing the theoretical frameworks of user acceptance and existing evaluation approaches.

2.1 Mood Monitoring in Mental Health Interventions

Self-report (or self-monitoring) is the cornerstone of many therapies. Because it helps individuals with mental health difficulties better understand their experiences, self-report is often encouraged by mental health professionals. For this reason, individuals undergoing therapy may be asked to keep track of different aspects of their life, such as their mood, sleep quality, and medication intake. Sharing this information during therapy sessions enables patients to reflect on past experiences and feelings, and informs therapists about behavioral patterns and possible triggers for their patients [102]. With a deeper insight into patients’ experience, therapists are better equipped to set up adapted interventions (e.g., behavioral change interventions) and provide personalized follow-up. Lane and Terry [53] define mood as “a set of feelings, ephemeral in nature, varying in intensity and duration, and usually involving more than one emotion” (p. 7). Although mood tracking is a core component of evidence-based mental health therapies such as cognitive behavioral therapy [60], it can also inform symptom monitoring (e.g., in bipolar disorder [91]) and the detection of mental health difficulties (e.g., for stress in student populations [123]). Mood self-report is often complemented by the tracking of lifestyle events, such as daily bedtime and amount of exercise, to support the identification of patterns [60] and help the person reflect on behaviors which might enhance their mood and those which might impair it.

2.1.1 Traditional Self-Monitoring.

The traditional way for patients to record their moods was to keep a log in a paper diary. However, this approach presents several issues. Studies have shown that patients were likely to forget to report in the moment and tended to complete their logs retrospectively [95, 101]. In addition, retrospective logging is dependent on the person’s ability to accurately reflect on their past moods. It is also subject to recall biases [37, 99], including mood-related bias, whereby mood recall is substantially influenced by the mood at the time of recall [12]; event salience, the tendency to recall (infrequent) significant events [102, 104]; and depressive symptoms, such as the tendency to recall negative information [12, 51, 98]. The task of self-report itself was also experienced as repetitive and constraining because of the need to carry a journal throughout the day [111].

2.1.2 Mobile Self-Monitoring.

The advent of smartphones has facilitated Ecological Momentary Assessment (EMA) [103], and mood monitoring has become a frequent activity for a number of people [29]. EMA on a smartphone enables individuals to log their mood in the moment, avoiding retrospective reflection and therefore minimizing recall biases, which is particularly relevant in depression given its associated memory impairments [78]. EMA also permits the collection of contextual information (the time of the day, activity level, etc.) to situate the mood in one’s daily experiences [124]. For these reasons, EMA is increasingly used for mood monitoring in individuals with mental health difficulties [24, 112]. Research has also started to move away from fully manual EMA, which is said to be “limited by human effort” [57, p. 473], toward automated monitoring via sensors embedded in smart technologies [41]. This was particularly motivated by the potential for automatic data collection to improve self-report accuracy, lessen the risk of missing records, and reduce the burden associated with manual data entry [7, 35, 70]. Finally, research has explored semi-automated monitoring, combining manual self-report with automatic data collection to support patient awareness, long-term engagement, and sense of agency [21, 125]. Initial findings from this exploratory work indicate that semi-automated monitoring can be successfully used to monitor sleep and physical activity in mental health interventions [1, 6, 123], and it has been leveraged in commercial apps [13, 79].

2.1.3 Self-Monitoring on a Smartwatch.

Wearable devices are increasingly used by the general population, particularly smartwatches for the monitoring of health-related behaviors (e.g., sleep and exercise [76]). Smartwatches have advantages over smartphones in this regard, affording the possibility of continuous monitoring of a wider range of physiological variables, the ability to provide biofeedback [116], and greater convenience [73]. In particular, most smartwatches are specifically designed for collecting and processing physiological and contextual data, giving insight into numerous facets of a person’s life. As a result, commercially available smartwatches have begun to be used in healthcare research [114] for applications such as the manual self-monitoring of symptoms (e.g., knee osteoarthritis [8]), automated self-monitoring of physiological irregularities (e.g., atrial fibrillation [75, 109, 113]), and detection of unhealthy behaviors (e.g., cigarette smoking [23, 97]). Studies have also explored the use of automated self-report on wrist-worn devices for mental health diagnosis (e.g., the detection of depression through sleep and heart rate monitoring [122]), interventions (e.g., understanding outcomes of therapy for social anxiety through heart rate and movements [11]), and symptom monitoring (e.g., physical activity monitoring to detect depressive symptoms [17, 72], stress [32], and schizophrenia relapse [85]). In addition, because mental health stigma has been a long-standing barrier to help-seeking and intervention compliance [2, 9], it is essential that self-report technologies maintain a high level of privacy. Smartwatches also enable EMA through microinteractions, potentially reducing the burden associated with self-report and supporting better user engagement [77]. Therefore, smartwatches’ proximity, enabling immediate, discreet, and private interaction, makes them good candidates for supporting mood self-monitoring [26, 62].

To conclude, although smartwatches are widely used by the general population, their use in clinical contexts is still at a very early stage. Patients’ willingness to engage with self-report on a smartwatch will be significantly influenced by their acceptance of the device and its sensing capabilities [92]. Therefore, research aiming to use smartwatches with patients should investigate their acceptance of the technology and do so in real clinical settings [87].

2.2 Understanding User Acceptance of Health Technologies

Understanding the reasons behind users’ acceptance or rejection of technology is particularly important in the context of digital mental health interventions. In the past three decades, HCI researchers have developed various models of user acceptance, most of them relying on the Technology Acceptance Model (TAM) of Davis [30]. With systems for the workplace as a primary focus, the TAM introduced three factors influencing technology acceptance: perceived usefulness, perceived ease of use, and attitude. Building on the TAM, other models were developed [117–120], introducing multiple antecedents to these factors (e.g., technology anxiety [52, 117, 118]). With technology becoming increasingly pervasive, researchers started looking into user acceptance in broader contexts, adapting existing models [25, 121]. In the healthcare context in particular, user acceptance theories gave rise to better-adapted models, such as the Health Information Technology Acceptance Model (HITAM) [52], and others [20, 33, 38, 47]. These models discarded antecedent factors highly specific to the use of technology for work (e.g., job relevance) and introduced antecedents more relevant to the patient journey with technology (e.g., health status and health beliefs and concerns [52]). Despite the progress of the field, there are still no models for the acceptance of mental health technologies [65]. Because mental health systems deal with particularly vulnerable populations, sensitive contexts, and associated issues (e.g., stigma), tailored acceptance models are needed to help understand people’s uptake and use of these technologies. Moreover, due to the large spectrum of mental health difficulties, there might be value in developing multiple models to investigate different settings. This theoretical gap has led researchers to modify existing models to address mental health contexts [68].

Another body of work has started to envisage user acceptance as a dynamic process instead of a single-point, static variable, introducing the temporal dimension into the equation [34, 42, 56, 68, 86, 92, 100, 107]. In an attempt to articulate the different stages of user acceptance, and address confusion resulting from inconsistent use of terminology, Nadal et al. [68] proposed the Technology Acceptance Lifecycle (TAL), a time scale laying out the three stages of user acceptance: pre-use acceptance (before the first use), initial use acceptance (first interactions with the system), and sustained use acceptance (long-term use). Considering the long-term and progressive nature of mental health difficulties [46], and the duration of supportive interventions such as cognitive behavioral therapy, it is important to adopt a longitudinal approach when assessing acceptance of mental health technologies.

2.3 Evaluating User Acceptance

A major strand of work on the measurement of user acceptance is formed by studies which validate acceptance models. This section outlines the measurement tools and timeline these studies adopted.

2.3.1 Measurement Tools.

A common approach to measuring acceptance involves the evaluation of each potential acceptance factor (e.g., perceived threat [52]) against self-reported usage behavior. The majority of studies employed questionnaires relying on Likert scales [38, 47, 52, 117–121]. Each factor was evaluated through a number of measurement items, ranging from a minimum of 2 to a maximum of 11. Davis et al. [31] described the development of these measurement items as follows: (1) generating 14 candidate items for each construct, based on their definition; (2) pre-testing these items to refine the wording; (3) narrowing down the set to 10 items per construct; (4) assessing the reliability and validity of this subset; (5) narrowing down to 6 items per construct; and (6) repeating the validity assessment and narrowing down to 4 items per construct. Some studies piloted the questionnaire with focus groups [33, 117, 119] or a sample of users [33]. Most studies used Cronbach’s alpha coefficients to assess the internal consistency of the measurement questionnaire.

2.3.2 Measurement Timeline.

Most validation studies evaluated technology acceptance at different time points in the user journey. For instance, the TAM study [31] looked at the stages of pre-use, asking participants to fill in the first acceptance survey after watching a demo of the system, and post-use, administering the second questionnaire after 14 weeks of use. Drawing on this methodology but going a step further, subsequent studies assessed user acceptance at three time points in the user journey: pre-use, after 1 month of use, and after 3 months of use [117–121]. This aligns with the body of work (published later) theorizing acceptance as a multi-stage process [34, 42, 56, 86, 100, 107]. Surprisingly, more recently published validation studies have rarely taken such a longitudinal approach, instead assessing user acceptance at one single point, post-uptake of the technology [33, 38, 47, 52].


3 DESIGNING SELF-REPORT ON A SMARTWATCH WITHIN A DIGITAL THERAPY FOR DEPRESSION

Responding to the areas for improvement in self-monitoring identified above, we strive to lower the barrier to self-report within digital therapy. We designed the Mood Monitor smartwatch app to empower patients to record moods and lifestyle data consistently and accurately, to help them reflect on the influence of their lifestyle choices on their mood, and to minimize burden.

3.1 Design Context: The Space from Depression Digital Therapy

The Space from Depression program is a widely used, validated iCBT intervention for depression [80, 83] offered by the UK’s NHS. The intervention is accessible through a website (desktop) and mobile app, with program completion estimated to be reached at around 8 weeks. The program’s structure and content, and the support provided to patients, are outlined in the study protocol published previously [66]. The Space from Depression program offers access to the Mood Monitor online tool, which is a core element of the therapy [81]. The tool allows patients to record their mood (as shown in Figure 1(a)) by selecting from five weather icons (sun, sun-cloud, cloud, cloud-rain, rain) the one that best reflects their current mood, as well as their lifestyle choices (see Figure 1(b)), which include hours of sleep, quality of exercise, diet, consumption of caffeinated drinks, units of alcohol, and level of medication. Patients’ moods can be displayed alongside lifestyle factors (see Figure 1(c)), encouraging them to reflect on the evolution of their mood and the influence of their lifestyle. Daily prompts can also be scheduled to remind patients to self-report.
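
As a rough illustration of the data this tool captures, the following Python sketch models a mood entry. The type names, the mapping of the weather icons onto a 1 to 5 scale, and the field representations are our own assumptions for illustration, not the platform’s actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class Mood(Enum):
    """The five weather icons, assumed here to map onto a 1-5 mood scale."""
    RAIN = 1
    CLOUD_RAIN = 2
    CLOUD = 3
    SUN_CLOUD = 4
    SUN = 5

@dataclass
class MoodEntry:
    """One self-report entry; fields mirror the tool's lifestyle factors."""
    mood: Mood
    hours_of_sleep: float | None = None
    quality_of_exercise: str | None = None
    diet: str | None = None
    caffeinated_drinks: int | None = None
    units_of_alcohol: float | None = None
    medication_level: str | None = None

# Example entry: a fairly good mood after 7.5 hours of sleep and two coffees.
entry = MoodEntry(mood=Mood.SUN_CLOUD, hours_of_sleep=7.5, caffeinated_drinks=2)
```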


Fig. 1. Mood Monitor tool in the usual therapy platform [94].

3.2 The Mood Monitor Smartwatch App

The Mood Monitor smartwatch application was collaboratively designed and implemented over 18 months by an interdisciplinary team of HCI researchers, clinical psychologists, and professional UX designers with extensive expertise in digital mental health, and with the substantial contribution of the first author. The Mood Monitor enables patients to manually self-report their mood and also offers automated monitoring of sleep and physical activity. The features of the app are summarized in Appendix A.

3.2.1 Supporting Consistent and Accurate Self-Report.

The core functionality of the Mood Monitor watch app is the mood self-report. Taking advantage of the ubiquity of the watch, we designed this as an EMA for the collection of mood data in daily life. The EMA was implemented through prompts on the watch screen, reminding the user to log their current mood at random times of the day (Figure 2(a)), to account for the variance of mood across the day [124]. Prompts are generally an effective way to get users to engage [10]. However, receiving prompts in this context (via a wearable device and as part of a clinical mental health intervention) might be perceived as intrusive and might be an obstacle to patient acceptance of the smartwatch app.
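
The prompt scheduling itself runs on watchOS and is not detailed here; as a minimal sketch of the underlying idea (random prompt times drawn within enabled time windows), the Python below uses a hypothetical function name and illustrative window bounds:

```python
import random
from datetime import date, datetime, time, timedelta

def schedule_prompts(day: date, windows: list[tuple[time, time]]) -> list[datetime]:
    """Draw one pseudo-random prompt time inside each enabled time range,
    so prompts land at unpredictable moments across the day."""
    prompts = []
    for start, end in windows:
        span = (datetime.combine(day, end) - datetime.combine(day, start)).total_seconds()
        offset = timedelta(seconds=random.uniform(0, span))
        prompts.append(datetime.combine(day, start) + offset)
    return sorted(prompts)

# Example: the study default of two prompts per day, assuming a morning
# and an evening window (window bounds are illustrative).
print(schedule_prompts(date.today(), [(time(9), time(12)), (time(17), time(21))]))
```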


Fig. 2. The Mood Monitor watch app prompts the patient to log their mood several times a day (a), lets them select influencing factors (b), and displays a daily and weekly summary of their mood alongside bedtime (bed icon), hours slept (moon icon), and step count (jogger icon) (c).

After logging their mood, patients are presented with an evidence-based list of lifestyle elements likely to influence their mood, and are asked which one(s) they think might have affected it (see Figure 2(b)). Reflecting on these lifestyle factors is key to identifying patterns in one’s behaviors and mood. Although interaction with this screen is optional (as in the desktop and mobile app), prompting patients with possible options is a step toward encouraging introspection and action. The Mood Monitor app also contains a menu enabling the independent logging of moods (Figure 3(a)).


Fig. 3. (a) Mood Monitor app menu. (b) Tips to stay well. (c) Settings.

In addition to the mood logging, the Mood Monitor smartwatch app integrates the monitoring of daily data related to sleep (bedtime and number of hours slept) and physical activity (step count), giving context to the patient’s moods. Whereas these lifestyle factors previously required manual logging, their monitoring is automated on the smartwatch, enabling a passive self-report that is less subject to bias, human error (e.g., when typing in entries), and inconsistencies (e.g., gaps in the data). This automatically captured lifestyle data is available to both the patient and their therapist, and therefore has the potential to inform therapy sessions toward better personalization.

To complete its integration with the digital therapy platform, the mood and lifestyle entries recorded on the Mood Monitor smartwatch app are automatically uploaded to the online platform and accessible in the patient’s personal space.

3.2.2 Encouraging Introspection and the Identification of Mood Patterns.

The Mood Monitor smartwatch app is also designed with the aim of encouraging patients’ reflection and supporting them in identifying patterns. Although awareness might be gained from simply recording one’s mood and lifestyle habits, encouraging deeper reflection is essential to support behavior changes. After logging their mood on the Mood Monitor app, the user is brought directly to the application’s home screen and immediately presented with a visualization of their mood, bedtime, hours of sleep, and step count throughout the past week (see Figure 2(c)). Providing a detailed view of how the patient has been doing, this visualization acts as a prompt to reflection.

The app further encourages behavior change through cues materializing the user’s progress. First, three arrows under the current day’s report compare the lifestyle variables to those of the previous day—for instance, showing an improvement in the bedtime (up arrow for earlier bedtime), a stable time asleep (right arrow), or a decline in the step count (down arrow). Second, this progress is made visible in the icons, as their appearance evolves depending on how the patient has been doing: the moon representing the hours of sleep fills up, and the step count jogger either walks, runs slowly, or runs fast. Finally, to support frequent and consistent logging of mood, encouraging prompts (validated by a clinical psychologist) were displayed when users reached goals regarding use of the mood logging and the maintenance of a sleep routine, acting as positive reinforcement (Figure 4).
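
A minimal Python sketch of the arrow logic described above (the function and value encodings are illustrative, not the app’s actual implementation; note that for bedtime, earlier is better, so the comparison is inverted):

```python
def trend_arrow(today: float, yesterday: float, higher_is_better: bool = True) -> str:
    """Return an up/right/down arrow comparing today's value to yesterday's."""
    if today == yesterday:
        return "→"  # stable
    improved = (today > yesterday) == higher_is_better
    return "↑" if improved else "↓"

# Bedtime expressed as hours after midnight: an earlier bedtime is an improvement.
print(trend_arrow(23.0, 23.5, higher_is_better=False))  # ↑ earlier bedtime
print(trend_arrow(7.0, 7.0))                            # → same hours asleep
print(trend_arrow(4200, 6500))                          # ↓ fewer steps
```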


Fig. 4. Encouragement displayed to the user after logging three moods in the Mood Monitor watch app.

Finally, we explored the addition of psycho-education snippets on the smartwatch (validated by a clinical psychologist) in the form of the ‘Tips to stay well’ feature accessible from the application’s menu (see Figure 3(b)). Patients can go through this list of 31 brief educational pieces on lifestyle choices that may influence depression symptoms by tapping the ‘pulsing’ lotus icon.

3.2.3 Minimizing Patient Burden and Perceived Stigma.

The mood logging on the smartwatch was intended to be as effortless as possible for patients, to support their engagement with the self-report activity. This was implemented through the immediate interactions enabled by watchOS notifications. Because the watch is always at hand, users also save time, as they do not have to reach for their phone and log into the program. In addition, with passive monitoring of sleep and physical activity, patients are not required to type in this information at the start and end of each day, thus eliminating a repetitive task that risked impeding patient engagement. The proximity of the watch, along with its small screen and discreet interactions (quick vibration on the wrist), enables the Mood Monitor app to deliver self-report more subtly and privately than via a smartphone or computer [63].

In line with the call for personalization in mobile health interventions [4, 50, 115], we chose to let users increase or reduce the reminder frequency (from the default of twice a day recommended by a clinical psychologist) by switching time ranges on or off at the first launch of the app (see Figure 3(c)).

Finally, creating consistency between the Mood Monitor watch app and the default iPhone and Apple Watch apps was important to support learnability and ease of use. The design of the weekly visualization of the moods and lifestyle habits required several iterations to find the best way to display a large amount of information in a format concise enough to fit the small smartwatch screen. After reviewing existing applications representing information over time on watchOS, we eventually decided on a visual similar to that of the Apple Watch Weather app.


4 METHODS

We recruited 69 patients who had signed up to receive an 8-week routine digital therapy for depression: the Space from Depression program. The study was conducted within Berkshire Healthcare NHS Foundation Trust in the UK.

4.1 Ethics

The study received approval from the NHS Wales Research Ethics Committee 5, through the Health Research Authority (ID 281255). We registered the study at ClinicalTrials.gov (NCT04568317) and conducted it in compliance with the General Data Protection Regulation (EU) 2016/679 (GDPR) and the Data Protection Act 2018 (Section 36(2)) (Health Research) Regulations.

4.2 Trial Procedure

When a patient was invited for initial assessment at the NHS site, a supporter would also assess their eligibility for the study. Eligibility required that they were older than 18 years, eligible for the Space from Depression program, and owned an iPhone 6 or newer. The supporter then described the research to them. All eligible patients willing to take part in the study received an e-mail with a link to an online survey, presenting the participant information sheet and the informed consent form. Upon providing consent via a digital signature, participants were asked to fill in contact details and socio-demographic information. Participants were then automatically randomized to either the group receiving digital therapy with access to the smartwatch app (experimental group) or the group receiving digital treatment as usual. The survey then asked all participants to complete the first Acceptance Questionnaire (AQ) (T1), capturing their pre-use acceptability of the self-report, on a smartwatch or on the mobile/desktop app, depending on their study group. In the following days, participants in the smartwatch group received a package containing an Apple Watch SE, instructions to get started with the Mood Monitor watch app and return the watch, and a return envelope. Participants could also take part with their own smartwatch; those participants were e-mailed instructions on how to use it for the study. During the intervention period, all participants used the Space from Depression program with support from a trained supporter, as per normal service procedures. All participants received the second AQ through an online survey at 3 weeks (T2) and the third one at 8 weeks (T3). To minimize non-compliance and drop-out from the study, participants received e-mails reminding them to complete each AQ. At T3, participants also completed the Satisfaction with Treatment questionnaire (see Section 4.3.4), and those in the smartwatch group who had indicated consent were invited for a follow-up interview on Zoom. Participants who had been lent a smartwatch were then asked to unpair it from their mobile phone and return it in the envelope provided. Participants were also given instructions on how to delete sleep and physical activity data stored on their own mobile phone, should they wish to do so. All returned smartwatches had a factory reset performed to erase all data. All participants received a $20 e-voucher upon completion of the final AQ (T3) and, for the smartwatch group, return of the watch. Those who took part in the interview received an additional $10 e-voucher.

4.3 Evaluating Patient Acceptance of the Mood Monitor

Our longitudinal approach to measuring patient acceptance of the self-report on a smartwatch is grounded in the technology acceptance literature and in the body of work arguing for considering user acceptance as a multi-stage process [34, 42, 56, 86, 92, 100, 107]. Because of the lack of standardized measurement methods to evaluate acceptance of mental health care technologies, the proposed methodology adopts a mixed-methods approach. To build a rich picture of user acceptance of the technology, we examine the question through different lenses supported by the literature, namely users’ demographics, acceptance mediators, and patient satisfaction. We complement these quantitative measures with qualitative insight into user acceptance, by means of additional open-ended questions and a post-study interview.

4.3.1 Measurement Timeline.

Our longitudinal approach to evaluating acceptance considered the three stages of the TAL (introduced by Nadal et al. [68]): pre-use, initial use, and sustained use. By conducting repeated measures of acceptance following this timeline, we aimed to identify the different facilitators and barriers to user acceptance at each stage, and we used this data to inform the design of the technology.

4.3.2 Patient Demographics.

Multiple validated acceptance models examine user demographics [25, 120, 121] to enable the detection of possible acceptability issues in specific user groups which would impede uptake of the technology. To examine possible associations between users’ demographics and pre-use acceptability of the technology, we asked participants, upon giving consent, to provide socio-demographic details online, including information on gender, age, ethnicity, employment status, marital status, and their experience with smartwatch technology. For each question, a “Prefer not to answer” option was present.

4.3.3 Acceptance Mediators.

Grounded in the research validating acceptance models discussed previously, we examined user acceptance through the lens of acceptance mediators. This involved selecting, among the existing validated acceptance models, the best fit for our population and study context, and measuring its acceptance mediators. Because there presently exists no acceptance model specific to the mental health care context, we decided to draw on the HITAM developed by Kim and Park [52]. The HITAM includes the following acceptance mediators: perceived threat, perceived usefulness, perceived ease of use, attitude, behavioral intention, and usage behavior. As the study validating the HITAM did not provide the measurement questionnaire used, we reused question wordings from other validation studies measuring the same mediators. This resulted in the proposed AQ, which contains 15 measurement items answered on a 5-point Likert scale from strongly agree to strongly disagree. We assessed the key outcome, usage behavior, by collecting the amount of mood, sleep, and activity data recorded by each participant. The AQ was sent to participants at pre-use (day 0, T1), initial use (3 weeks, T2), and sustained use (8 weeks, T3). These three versions can be found in Appendix B. We adjusted the wording of the AQ to these measurement time points by referring to expected, present, or past experiences. Additionally, the number of times each participant accessed the ‘Tips to stay well’ feature was recorded.
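
For illustration, a mediator score can be computed as the mean of that mediator’s Likert items. In the Python sketch below, the item-to-mediator grouping is hypothetical (the actual items are listed in Appendix B); responses are assumed to be coded 1 (strongly disagree) to 5 (strongly agree):

```python
import pandas as pd

# Hypothetical item-to-mediator mapping, one questionnaire column per item.
MEDIATORS = {
    "perceived_threat":      ["PT1", "PT2", "PT3"],
    "perceived_usefulness":  ["PU1", "PU2", "PU3"],
    "perceived_ease_of_use": ["PEOU1", "PEOU2", "PEOU3"],
    "attitude":              ["ATT1", "ATT2", "ATT3"],
    "behavioral_intention":  ["BI1", "BI2", "BI3"],
}

def mediator_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each mediator's Likert items, yielding scores in [1, 5]."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in MEDIATORS.items()}
    )
```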

4.3.4 Patient Satisfaction.

A review by Nadal et al. [68] showed that user satisfaction was often considered a factor in the acceptance of digital health technologies [15, 40, 59, 71, 89]. In the present study, we were interested in possible associations between long-term technology acceptance and patient satisfaction. At 8 weeks, along with the third AQ, we asked participants to complete the Satisfaction with Treatment measure [82], a 5-item questionnaire that had been used previously to evaluate patient satisfaction with the Space from Depression program [82].

4.4 Identifying the Facilitators and Barriers to Patient Acceptance

To uncover the specific facilitators and barriers to patient acceptance of the Mood Monitor, we collected qualitative data from participants in the smartwatch group.

4.4.1 Open-Ended Questions.

Each time participants received the AQ, they were given the opportunity to write about their experience with the Mood Monitor app. At T1, an open-ended question asked participants why they had decided to use the watch app and enroll in the study. Similarly, at T2, they were asked “How would you describe your use of the watch app?” and “What difficulties (if any) did you experience while installing or using the Mood Monitor app?” Finally, at T3, a series of open-ended questions (see Appendix C) explored supplementary acceptance factors which, according to previous research, might impact patient acceptance: satisfaction [15, 40, 59, 71, 89], engagement [84, 96], recommendation rate [16, 50, 89, 96], sharing [58], privacy protection [20], resistance to change [38], and match with expectations [89].

4.4.2 Post-Study Interviews.

Participants in the smartwatch group were invited to take part in a post-study interview to discuss their experience with the Mood Monitor. These semi-structured individual interviews were conducted by the first author, who informed each interviewee that she was a researcher and not a clinician and explained the risk management procedure in place. Eight of the 35 participants took part in the interview: five women and three men.

4.4.3 Thematic Analysis.

The data from both the open-ended questions and interviews was analyzed through a reflexive thematic analysis conducted by the first author (CN). In light of recent calls for more detailed reporting of thematic analysis practice in healthcare HCI [14], we have endeavored to provide a comprehensive methodological account of our analytic process. CN followed the approach of Braun and Clarke [22] and adopted a realist perspective, through which she analyzed participants’ words in a way that “treats language as though it is a direct conduit to the participants’ experience” [108]. CN took handwritten notes during the interviews. After talking to each participant, she would reflect on her notes, complete them, and type them up in a Word document. CN then transcribed the interviews and checked the transcripts against the audio recordings. She familiarized herself with participants’ answers to the open-ended questions of the AQ by reading through them and collating them into a single document. She then conducted a first round of coding in which she inductively coded patterns of meaning in the data. In a second round of coding, she selected those codes relevant to the research questions. The flexibility of this analysis method allowed for inductively coding the data and organizing the codes chronologically following the TAL. Finally, CN drew on the codes to generate themes and structured the analysis around the three stages of the TAL: pre-use acceptability, initial use acceptance, and sustained use acceptance.

4.4.4 Positionality Statement.

This statement reflects on CN’s positionality with regard to the collection and reflexive thematic analysis of participants’ qualitative data. Through her most recent works, CN has learned the importance of adopting a reflexive approach throughout the research process. Although such an approach was not explicitly followed during this study, CN chose to engage in reflexivity a posteriori, and this statement reports that reflection. The study presented in this article is part of CN’s doctoral research on user acceptance of health and mental health care technologies. CN is a white, middle-class, cisgender woman in her late 20s. CN emigrated from France to Ireland, where she completed her doctoral degree and now works as a postdoctoral researcher in healthcare technology. Her background includes a master’s degree in HCI and a bachelor’s and technical degree in computer science. CN’s education reinforced her will to help vulnerable populations through her work; this led her to step into the field of mental health care. CN’s own mental health journey includes experiences of talking therapy and mindfulness sessions. As a feminist, gay woman, and supporter of human rights, CN has a special interest in design justice [27]. While she designed and implemented the Mood Monitor smartwatch app, she was conscious of the biases attached to her own identity and those related to being extensively involved in the development of the technology investigated. CN sought to maintain a critical position during data collection and analysis, and the fact that this was her first experience developing an app for a smartwatch helped her keep a critical view of her work. She also encouraged interview participants to be honest in their comments, explicitly distancing herself from the “development team” and never mentioning her role in the technology design.

4.5 Clinical Feasibility of the Smartwatch App

We are investigating a novel use for smartwatches for patient self-monitoring in a digital mental health intervention. In this context, it is important to assess the clinical feasibility of the Mood Monitor smartwatch app to ensure that the introduction of this technology does not result in reduced patient engagement or clinical outcomes with the intervention. Previous work showed that digital health researchers define and measure feasibility in various ways [54]. In their review of feasibility studies promoting the use of mobile technologies in clinical research, Bakker et al. [5] defined a feasibility study as addressing one or more of the following components:

(a) Performance of an outcome of interest against a comparator, where the outcome of interest could be related to (i) measurement performance of the sensor and/or (ii) algorithm performance (clinical endpoints);

(b) Human factors considerations (acceptability, tolerability, and usability);

(c) Participant adherence;

(d) Completeness of data.

Drawing on the definition by Bakker et al. [5], we used a randomized controlled setting where participants were assigned to either receiving the digital treatment with access to the smartwatch app or receiving the digital treatment as usual. We evaluated the clinical feasibility of the smartwatch app by comparing metrics obtained in the two groups. Specifically, still drawing on the definition by Bakker et al. [5], we checked for differences in patient acceptance and usage of the self-report component, and we assessed the clinical safety of the intervention (i.e., differences in patient clinical outcomes). Finding no significant reductions in terms of acceptance, usage, and clinical outcomes would indicate the absence of a negative impact of using the smartwatch app as part of standard delivery of the intervention, which would support the clinical feasibility of the smartwatch app.

4.5.1 Patient Clinical Outcomes Before and After Therapy.

Patients involved in the trial received the usual procedures for Improving Access to Psychological Therapies (IAPT, [93]). This included routine clinical assessments of patients with the Patient Health Questionnaire 9 (PHQ-9), Generalized Anxiety Disorder 7 (GAD-7), and Work and Social Adjustment Scale (WSAS) [39, 64]. To evaluate how the introduction of the smartwatch app impacted on patient outcomes, we compared the clinical scores of participants in both groups, obtained before and after the 8-week therapy.

4.5.2 Patient Usage of the Therapy Program.

We compared usage metrics between groups, including the total time spent on the platform, number of sessions, number of tools used, percentage of the program viewed, and number of reviews (metrics are detailed in the study protocol [66]). We also looked at usage of the self-report in both groups to ensure that the addition of the smartwatch was not detrimental to patient self-monitoring (e.g., patients might perceive the smartwatch prompts as a nuisance and choose not to wear the device).

4.5.3 Patient Acceptance of Self-Report.

We compared patient acceptance of self-report via the smartwatch app (experimental group) with that of self-report via the usual therapy platform.

By introducing the smartwatch app, we enable self-monitoring via an additional device and interface, separate from the usual therapy platform. Therefore, the survey questions related to interactions with the interface (“My interaction with the watch app is clear and understandable” (PEOU3)) or with the application as a whole (“Overall, I think that the watch app is useful in managing my mental wellbeing” (PU3)) in the experimental group have no exact equivalent for the treatment as usual group. Rather than transposing these questions into approximate equivalents, we chose to compute acceptance scores with a subset of 12 questions identical across the two groups.


5 PARTICIPANTS

A total of 155 patients were assessed as eligible and invited to participate. Among them, 70 patients did not follow through with the invitation and 14 explicitly declined it. For those who declined, the supporter asked the reason. Eligibility assessment thus resulted in 71 patients recruited and randomized for the study, among whom 2 withdrew, bringing the total number to 69 participants. The composition of the two groups was as follows: 35 patients in the group with a smartwatch and 34 in the treatment as usual group. Participants’ demographics are described in Table 1. Figure 5 shows the flow of participants through each stage of the study.

Characteristic | Total sample (N = 69), n (%) | Smartwatch group (N = 35), n (%) | Treatment as usual group (N = 34), n (%)
Gender
  Female | 48 (69.6) | 23 (66) | 25 (74)
  Male | 20 (28.9) | 12 (34) | 8 (23)
  Non-binary | 1 (1.5) | 0 (0) | 1 (3)
Age group (years)
  18–24 | 26 (37.6) | 10 (29) | 16 (47)
  25–34 | 24 (34.8) | 12 (34) | 12 (35)
  35–44 | 10 (14.5) | 7 (20) | 3 (9)
  45–54 | 8 (11.6) | 5 (14) | 3 (9)
  Older than 55 | 1 (1.5) | 1 (3) | 0 (0)
Ethnicity
  Asian or Asian British | 10 (14.5) | 7 (20) | 3 (9)
  Black or Black British | 4 (5.8) | 1 (3) | 3 (9)
  Mixed | 3 (4.3) | 1 (3) | 2 (6)
  White | 52 (75.4) | 26 (74) | 26 (76)
Relationship status
  Cohabitant | 10 (14.5) | 6 (17) | 4 (12)
  Divorced/civil partnership dissolved | 5 (7.3) | 3 (9) | 2 (6)
  Married/in civil partnership | 20 (28.9) | 12 (34) | 8 (23)
  Single | 32 (46.4) | 12 (34) | 20 (59)
  Not disclosed | 2 (2.9) | 2 (6) | 0 (0)
Level of education
  A-levels or equivalent | 30 (43.5) | 17 (48) | 13 (38)
  GCSEs or equivalent | 6 (8.7) | 2 (6) | 4 (12)
  Other | 2 (2.9) | 1 (3) | 1 (3)
  University or college degree | 30 (43.5) | 15 (43) | 15 (44)
  Not disclosed | 1 (1.4) | 0 (0) | 1 (3)
Employment
  Employed, full time | 37 (53.6) | 20 (58) | 17 (50)
  Employed, part time | 11 (16) | 5 (14) | 6 (17)
  Not employed, looking for work | 9 (13) | 5 (14) | 4 (12)
  Not employed, not looking for work | 4 (5.8) | 0 (0) | 4 (12)
  Self-employed | 6 (8.7) | 5 (14) | 1 (3)
  Not disclosed | 2 (2.9) | 0 (0) | 2 (6)
Smartwatch ownership | 21 (30) | 11 (32) | 10 (29)

Table 1. Demographic Characteristics of Study Participants


Fig. 5. Participant flow diagram.


6 PATIENT ACCEPTANCE OF THE MOOD MONITOR

This section aims to build a rich picture of patient acceptance of self-report on the smartwatch. First, we provide insight into the evolution of patients’ overall acceptance of self-report on the smartwatch through the acceptance scores obtained at pre-use, initial use, and sustained use. Second, we look into possible associations between users’ demographics and pre-use acceptability of the technology (i.e., users’ level of acceptance before first use of the technology), as suggested by the literature. Next, we further examine the acceptance scores at pre-use, initial use, and sustained use through the lens of the set of acceptance mediators forming the theoretical basis of the AQ. Last, we investigate possible associations between patient satisfaction with the therapy and acceptance of the technology, as suggested by previous acceptance studies. Figure 6 details the different steps of this analysis.


Fig. 6. Steps of the analysis, each looking at an aspect of user acceptance suggested by the literature.

6.1 Overall Acceptance

Examining the evolution of patient acceptance over time, through the three AQ scores, has the potential to shed light on users’ trajectories with the technology. A significant improvement in the acceptance score over time might, for instance, indicate the appearance of an element in the user journey facilitating acceptance (e.g., users developing trust in the system) or the disappearance of a barrier to acceptance (e.g., users feeling less anxious toward using it), and conversely for a significant decline in the acceptance score. We looked at participants’ scores on the AQ given at pre-use (day 0), initial use (3 weeks), and sustained use (8 weeks) (Figure 7).


Fig. 7. Acceptance scores of the smartwatch group at pre-use, initial use, and sustained use. Mean scores are represented by an ‘x’ and the median score by a line.

This longitudinal measure revealed that patient acceptance of self-report on the smartwatch started and remained high throughout the 8 weeks. Pre-use acceptability scores (n = 35) ranged from 71% to 100% (M = 89.80, SD = 7.993). Initial use acceptance scores (n = 30) ranged from 73% to 100% (M = 88.38, SD = 7.956). Sustained use acceptance scores (n = 27) ranged from 71% to 100% (M = 88.10, SD = 9.636). An ANOVA test showed no evidence of a statistically significant difference between the scores across the three time points (F(2, 48) = .611, p = .547). These consistently high scores support the use of the Mood Monitor in this interventional context.
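
For reference, this kind of analysis can be sketched in Python with statsmodels’ AnovaRM; the toy data below are not the study data, and AnovaRM requires complete cases (participants with scores at all three time points):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy long-format data: one row per participant per time point.
scores = pd.DataFrame({
    "pid":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "time":  ["T1", "T2", "T3"] * 3,
    "score": [89, 88, 90, 95, 92, 91, 78, 80, 79],
})

# Within-subject effect of measurement time on the acceptance score.
print(AnovaRM(data=scores, depvar="score", subject="pid", within=["time"]).fit())
```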

6.2 Demographics and Pre-Use Acceptability

We explored the possible impact of the smartwatch group participants’ demographics (including gender, age, ethnicity, education level, employment status, Apple Watch ownership, and relationship status) on their pre-use acceptability scores (T1). The scores were non-normally distributed, and therefore non-parametric Kruskal-Wallis tests were used. There was no evidence of a statistically significant association between patients’ acceptability score and their gender, age, ethnicity, level of education, employment status, or Apple Watch ownership.

However, there was evidence of a statistically significant difference in the acceptability scores of participants who declared being single and those married/in a civil partnership (Kruskal-Wallis \(\chi ^2\) = 8.762, df = 3, p = .033). A Dunn’s pairwise comparisons test confirmed this significant difference (p = .021). Patients who declared being married or in a civil partnership were thus more likely to have a higher acceptability score than those who were single, as illustrated in Figure 8. This result suggests the presence of facilitators of acceptance in the former group or barriers in the latter. Although determining these factors would require further investigation, we can hypothesize that the antecedents social support and/or social pressure [20, 52, 117–121] might facilitate patients’ uptake of self-report on the smartwatch.
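
A sketch of this test sequence in Python with illustrative data: the omnibus test comes from scipy, and Dunn’s pairwise comparisons from the third-party scikit-posthocs package (the choice of p-value adjustment is left open here):

```python
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp  # third-party package providing Dunn's test

# Illustrative pre-use acceptability scores by relationship status.
data = pd.DataFrame({
    "score":  [92, 71, 88, 97, 85, 79, 100, 74, 90],
    "status": ["married", "single", "single", "married", "cohabitant",
               "single", "married", "divorced", "cohabitant"],
})

groups = [g["score"].to_numpy() for _, g in data.groupby("status")]
print(kruskal(*groups))  # omnibus test across relationship statuses

# Dunn's pairwise comparisons between statuses.
print(sp.posthoc_dunn(data, val_col="score", group_col="status"))
```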


Fig. 8. Pre-use acceptability score (T1) by relationship status.

6.3 Acceptance Mediators over Time

Examining the repeated scores for each of the acceptance mediators assessed in the questionnaire at T1, T2, and T3 has the potential to indicate where facilitators or barriers to acceptance might lie and how these evolve through the use of the Mood Monitor app. A specific acceptance score for each mediator was obtained by calculating the average of the participant’s scores on the Likert scale questions measuring that mediator. This resulted in an acceptance score, for each mediator, ranging from a minimum of 1 to a maximum of 5. Figure 9 gives an overview of the evolution of patients’ scores for each acceptance mediator, revealing that the average score for all mediators was high (above 4 out of 5) and quite stable across the user journey.


Fig. 9. Smartwatch group scores to the acceptance mediators at pre-use, initial use, and sustained use.

Next, we considered the acceptance mediators individually, assessing the evolution of the mean score over time. We checked for significant differences at pre-use (n = 35), initial use (n = 30), and sustained use (n = 27) by performing a repeated-measures ANCOVA (controlling for the effects of the relationship status variable).
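
One way to approximate such a repeated-measures ANCOVA in Python is a linear mixed model with a random intercept per participant; the sketch below uses toy data and is not necessarily the authors’ exact procedure:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: mediator score per participant per time point,
# with relationship status as a between-subject covariate.
df = pd.DataFrame({
    "pid":    [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "time":   ["T1", "T2", "T3"] * 3,
    "status": ["single"] * 3 + ["married"] * 3 + ["single"] * 3,
    "score":  [4.5, 4.2, 4.4, 4.8, 4.6, 4.7, 4.0, 4.1, 3.9],
})

# Fixed effect of time, adjusted for relationship status; random intercept per pid.
model = smf.mixedlm("score ~ C(time) + C(status)", df, groups=df["pid"])
print(model.fit().summary())
```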

6.3.1 Perceived Threat.

The perceived threat mediator reflects users’ concerns about their mental health and their willingness to take action to get better [52]. The repeated measures show that the perceived threat remained high throughout the intervention. Scores ranged from 3.0 to 5.0 (M = 4.600, SD = .4971) at pre-use, from 1.5 to 5.0 (M = 4.200, SD = .7611) at initial use, and from 2.5 to 5.0 (M = 4.370, SD = .6877) at sustained use. There was no evidence of a statistically significant difference between the scores across the three time points (F(2, 44) = 3.122, p = .054), indicating that patients’ mental health concerns, and their will to improve, persisted throughout therapy. Because a significant change in perceived threat (and therefore in overall technology acceptance) could be a result of improved/worsened clinical outcomes, examining patients’ clinical trajectories might help interpret this result if differences were observed.

6.3.2 Perceived Usefulness.

The perceived usefulness mediator reflects the degree to which users believe that the system will help improve their mental health [30]. Perceived usefulness of the Mood Monitor app was high overall throughout the measurement points. Scores ranged from 3.0 to 5.0 (M = 4.390, SD = .6022) at pre-use, from 3.0 to 5.0 (M = 4.222, SD = .7183) at initial use, and from 2.0 to 5.0 (M = 4.160, SD = .7919) at sustained use. There was no evidence of a statistically significant difference between the scores across the three time points (F(2, 44) = 1.303, p = .282). In the context of digital mental health, an interesting issue that emerged from discussions with clinical researchers is the potential for decreased symptoms to negatively impact perceived usefulness of the technology. As for the previous mediator, in cases where a significant change in these measures is noticed, researchers interpreting this data might find value in examining patients’ clinical trajectories.

6.3.3 Perceived Ease of Use.

The perceived ease of use mediator indicates the degree to which users believe that using a system will be free of physical and mental effort [30]. Findings reveal that participants rated self-report on the smartwatch as easy to use across the measurement points. Scores ranged from 3.0 to 5.0 (M = 4.293, SD = .6227) at pre-use, from 3.3 to 5.0 (M = 4.333, SD = .5545) at initial use, and from 3.0 to 5.0 (M = 4.407, SD = .6169) at sustained use. There was no evidence of a statistically significant difference between the scores across the three time points (F(2, 44) = .240, p = .787). Observing a significant improvement in perceived ease of use over time could be the marker of a learning curve in the use of technology; researchers in that case might find value in examining additional acceptance antecedents, such as computer self-efficacy [38, 52, 106, 117, 118].

6.3.4 Attitude.

The attitude mediator reflects the user’s overall affective reaction to using a technology [120]. The data show that users’ attitude toward using the self-report on the smartwatch remained positive over time. Scores ranged from 3.0 to 5.0 (M = 4.700, SD = .5402) at pre-use, from 3.3 to 5.0 (M = 4.708, SD = .4738) at initial use, and from 3.3 to 5.0 (M = 4.630, SD = .5606) at sustained use. There was no evidence of a statistically significant difference between the scores across the three time points (F(2, 44) = .195, p = .824). Although patients’ affective reaction to the technology remained positive in our study, changes in attitude should be investigated because they can signify that users are not comfortable with aspects of the technology (e.g., sharing of personal information).

6.3.5 Behavioral Intention.

The behavioral intention mediator represents the degree to which individuals are willing to try to use a technology [3]. Despite a slight decrease in participants’ intention to self-monitor on the smartwatch, the measure remained high throughout the intervention. The narrow spread at pre-use reveals patients’ strong willingness to take up the technology. Scores ranged from 4.0 to 5.0 (M = 4.829, SD = .3824) at pre-use, from 3.0 to 5.0 (M = 4.633, SD = .6149) at initial use, and from 1.5 to 5.0 (M = 4.352, SD = 1.0078) at sustained use. There was no evidence of a statistically significant difference between the scores across the three time points (F(2, 44) = .633, p = .536). As with perceived usefulness, a significant change in behavioral intention could be the result of an improvement or worsening in clinical symptoms. Moreover, such a change might also be induced by users’ satisfaction (or lack thereof) with the system. Thus, examining clinical trajectories and user satisfaction might help with interpreting a change in individuals’ behavioral intention.

6.3.6 Usage Behavior.

The study of acceptance aims to determine facilitators of and barriers to the use of technology. Models of technology acceptance represent usage behavior as the final stage toward which the influence of the mediators is oriented. Examining usage behavior therefore provides direct insight into how user acceptance translates (or not) into usage. We first considered patients’ use of mood self-monitoring; this revealed that the large majority of patients (30 out of 35) engaged with mood recording on the smartwatch (Figure 10). The number of moods recorded on the smartwatch ranged from 0 to 167 (M = 27.03, SD = 38.311). In the group of 30 participants who used the mood monitoring, we observe differences in usage behavior, ranging from sporadic use of mood logging in half the group to more consistent use in the other half. In comparison, the number of moods logged on the desktop/mobile app ranged from 0 to 13 (M = 3.14, SD = 10.244) (Figure 11). The data was non-normally distributed. A Wilcoxon signed-rank test showed evidence of a statistically significant difference between the number of moods logged on the smartwatch versus on the desktop/mobile app (Z = –3.529, p < .001).
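
A sketch of this paired comparison with scipy (the counts below are illustrative, not the study data):

```python
from scipy.stats import wilcoxon

# Illustrative paired counts: moods logged per participant on the
# smartwatch versus on the desktop/mobile app.
watch   = [12, 4, 45, 8, 67, 23, 3, 30]
desktop = [0, 2, 1, 0, 5, 0, 13, 1]

print(wilcoxon(watch, desktop))  # paired, non-parametric test
```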


Fig. 10. Distribution of the moods recorded via the smartwatch app.


Fig. 11. Mean number of moods recorded in the smartwatch group by platform (with error bars).

We also observed significantly more consistent use of the mood self-report in the smartwatch group compared to the treatment as usual group, as illustrated in Figure 12, and confirmed this using a Mann-Whitney U test (U = 336.0, p = .002). Although smartwatch participants tended to record their moods with the smartwatch rather than with the desktop/mobile app, it is likely that the prompts sent by the smartwatch app further supported consistent self-report. Overall, these findings indicate that the high levels of acceptance captured by the AQ scores translated into actual use of mood logging on the smartwatch for most patients. Among the five participants who did not engage with the mood monitoring on the smartwatch, only one completed the three AQs and none participated in the post-study interview. Analysis of the non-use of the smartwatch intervention is therefore not straightforward, and the result is open to multiple interpretations, such as difficulty engaging with digital therapy, technical difficulties, or non-receipt of the watch.
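
The corresponding between-group comparison can be sketched with scipy’s Mann-Whitney U implementation (illustrative data, not the study data):

```python
from scipy.stats import mannwhitneyu

# Illustrative per-participant counts of self-report entries in each group.
smartwatch = [30, 12, 45, 8, 60, 23, 17]
usual_care = [5, 2, 10, 0, 7, 3, 4]

print(mannwhitneyu(smartwatch, usual_care, alternative="two-sided"))
```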

Fig. 12. Use of mood self-report over the 8 weeks of therapy (smartwatch group vs. treatment as usual group).

Finally, we looked at patients’ use of the “Tips to stay well”, a feature delivering psycho-educational material through the Mood Monitor smartwatch app. Findings reveal that two-thirds of participants in the smartwatch group never accessed the feature (Figure 13). Among the third of participants who did access it, the majority (n = 8) used it only once and only a few patients (n = 3) opened it several times (ranging from 3 to 10 times) over the 8-week period. Although the ‘Tips to stay well’ feature constitutes an add-on to the self-report intervention, the observed differences in usage raise the question of whether and how to examine user acceptance of auxiliary features, particularly where the feature is a small component of a larger intervention.

Fig. 13. Number of times patients accessed the ‘Tips to stay well’ feature in the smartwatch app.

6.4 Satisfaction and Sustained Use Acceptance

At 8 weeks, participants in the smartwatch group answered the Satisfaction with Treatment measure (n = 27), a set of five Likert scale questions. The calculated satisfaction scores ranged from 50% to 100% (M = 80.56, SD = 15.590) and were non-normally distributed (Figure 14). A Pearson correlation coefficient was computed to assess the relationship between the smartwatch group participants’ acceptance score at sustained use and their satisfaction with therapy. The statistically significant positive correlation between the two variables (significant at the .01 level) showed a strong association between long-term acceptance and patient satisfaction with therapy (r = .514, p = .006). This result suggests that user acceptance of the self-report component is consistent with users’ experience of the broader iCBT intervention.
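The correlation analysis can be reproduced along the following lines; the two score vectors are hypothetical, and expressing both measures as percentages is an assumption made for the illustration.

```python
# Sketch of the Pearson correlation between sustained-use acceptance (T3)
# and satisfaction with treatment. Scores are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

acceptance_t3 = np.array([78, 85, 92, 60, 88, 70, 95, 81])
satisfaction = np.array([75, 80, 95, 55, 90, 65, 100, 85])

r, p = pearsonr(acceptance_t3, satisfaction)
print(f"r = {r:.3f}, p = {p:.3f}")  # the study reports r = .514, p = .006
```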

Fig. 14. Satisfaction with Treatment questionnaire scores versus sustained use acceptance scores (T3).

7 FACILITATORS AND BARRIERS TO ACCEPTANCE OF THE MOOD MONITOR

Qualitative data obtained from patients’ answers to the open-ended questions in the questionnaires and interviews enabled the identification of facilitators of and barriers to their acceptance of the Mood Monitor app across the user journey. We begin by giving an overview of the facilitators (elements that supported use of the smartwatch app) in Table 2 and the barriers (elements that negatively impacted use of the app) in Table 3, organized by the acceptance mediators of the HITAM [52] to which they related and the stage in the user acceptance journey at which they appeared.

Overarching themes | Themes | Pre-use | Initial use | Sustained use
Perceived threat | I want to get better | X | – | –
Perceived usefulness | It helps me check in with myself | X | X | X
 | It encourages me to adopt healthier habits | X | X | X
Perceived ease of use | Self-monitoring is easy and convenient | X | X | X
 | The smartwatch app is part of my routine | – | X | X
Attitude | I am familiar with smartwatches | X | – | –
 | I don’t fear the judgment of others | – | – | X
 | The therapy is better tailored to my needs | – | – | X

Table 2. Facilitators of Acceptance Experienced by Patients at the Different Stages of the User Journey

Overarching themes | Themes | Pre-use | Initial use | Sustained use
Perceived usefulness | The app doesn’t allow for enough personalization | – | X | X
Perceived ease of use | I don’t believe I can use the smartwatch | X | X | –
 | The app doesn’t behave reliably | – | X | X
 | It disrupts my routine | – | – | X
Attitude | I find it difficult to change my habits | – | X | –
 | I am concerned about sharing my self-report data | – | – | X

Table 3. Barriers to Acceptance Experienced by Patients at Different Stages of the User Journey

This overview reveals that some factors present before use ceased to be relevant later on, whereas others maintained a strong influence throughout the user journey, supporting the use of a longitudinal approach. Next, we report on each theme developed during our reflexive thematic analysis. Quotes are presented with the participant number for patients given access to the smartwatch app (e.g., P1) or the mention ‘Anonymous’ for patients who consented to share the reason they declined to take part in the research.

7.1 Pre-Use Acceptability Facilitators

Patients’ motivations for the uptake of the Mood Monitor included a desire to improve their mental health; the belief that the watch app would be an efficient means to self-monitor, would support the adoption of healthier behaviors, and would reduce their burden; and familiarity with smartwatches.

7.1.1 I Want to Get Better.

Participants shared their concerns about their mental health status, which was a powerful motivation for the uptake of the technology: “I am open to all ways to help my mood improve” (P28). Patients’ willingness to “try anything to help myself feel better” (P19) spoke of the hopelessness experienced when seeking help for mental health difficulties. P7 commented,

“I would like to give it a try as I have tried everything with my best abilities and I am scared when it [the depression] is going to strike me again” (P7).

7.1.2 It Helps Me Check in with Myself.

The perceived efficiency of self-report on the smartwatch was also a significant determinant of patient acceptability, with participants mentioning how the smartwatch could enhance self-report activity. First, self-report on the watch was perceived as a way to support consistent monitoring of mood and lifestyle habits:

“It will enforce me to keep a record of my mood & exercise” (P1).

Second, patients mentioned that they felt this approach could help them “better gauge my mood” (P5) and “keep a regular check on my moods” (P9), therefore supporting an increased self-awareness. The combination of tracking mood and lifestyle habits was seen as particularly helpful to monitor both “my mental and physical well being” (P7). In particular, patients were hoping to gain insight into sleep patterns as “this is something I am struggling with at the moment” (P26).

7.1.3 It Encourages Me to Adopt Healthier Habits.

Several patients mentioned that they would use self-report data to recognize mood patterns and, for example, “know how often [I] feel low and track what I do to stop feeling low” (P24). Specifically, they wrote about wanting to identify lifestyle habits influencing their mood to

“get a better understanding on situations that [are] contributing to depression with stuff like sleep deprivation and activity levels” (P27).

The use of the wearable device itself was seen as a good way to motivate behavioral change, as participant P4 comments, “I believe a smartwatch would encourage me to consistently exercise.”

Finally, one patient mentioned that being able to share the data recorded on the watch app might keep the supporter informed of their progress, for instance, “at times when I can’t see them through the app” (P6), and potentially give them more information to provide feedback on.

7.1.4 Self-Monitoring Is Easy and Convenient.

Being ‘better’ than paper-based self-report was a motivation for some patients to take up the technology, for reasons including greater ease of use and convenience (“[the app] can be used anywhere at any time” (P6)). The EMA was also seen as reducing the demands placed on patients, such as “recording things on paper or trying to remember [past events]” (P8).

7.1.5 I Am Familiar with Smartwatches.

Seeking a seamless integration of technology into their daily life, patients mentioned owning a smartwatch and health monitoring apps as incentives to use self-report on the smartwatch:

“I use my Apple Watch daily . . . I don’t feel that this will be difficult to implement into my routine” (P23).

Familiarity with the smartwatch technology also triggered a certain enthusiasm in participants, as participant P16 notes, “I already own an Apple smart watch & enjoy using it for exercise.” This enthusiasm was sometimes mixed with curiosity:

“I have found it [my Apple Watch] useful before in tracking sleep and steps etc. However, I have never used it to track mood” (P34).

Therefore, technology uptake was supported by a smooth integration of the technology not only into the patient’s daily routine but also into their technological habits.

7.2 Pre-Use Acceptability Barriers

Eligible patients who were offered the study and refused to take part were asked if they wanted to explain their choice, in line with the approved protocol for the study. Thirteen of these patients gave their verbal consent to the supporter and provided a reason for declining, which allowed us to identify anxiety toward using the smartwatch as a barrier to patient acceptability.

7.2.1 I Don’t Believe I Can Use the Smartwatch.

The smartwatch technology raised concerns in some patients regarding their ability to use the device: “I don’t think that I am the best for something like that as I’m not good with technology” (Anonymous). One patient also worried about physical discomfort, as they found wearing watches irritating to the skin.

7.3 Initial Use Acceptance Facilitators

At the initial use stage, participants discussed how the efficient approach to self-report and the reduced burden enabled by the Mood Monitor facilitated their acceptance of the watch app.

7.3.1 It Helps Me Check in with Myself.

The most discussed facilitating element at initial use was patients’ perceived usefulness of self-report, as participant P9 comments, “it’s helpful to record the moods,” and participant P31, “I have found its helped me keep track of things better.” Several participants praised the reminders, which supported consistent monitoring, explaining how they were able to “remember to log my mood data when it [the app] notifies me” (P23) and how they saw their engagement with self-report improve (P21). Another recurrent benefit was that wearing the smartwatch was “very useful for monitoring exercise” (P12) and motivated behavioral change, as participant P29 explains,

“Wearing [the] watch itself really makes me walk more and get fresh air” (P29).

By monitoring their mood and lifestyle habits with the watch app, patients reported an increased self-awareness, as the mood prompts were

“a good reminder to focus on myself & my feelings throughout the day” (P16).

Moreover, the automated tracking of sleep supported them in gaining insight into pre-existing issues, as participant P31 comments,

“[it’s] also interesting to me the data it gathers on my sleep as I know I don’t sleep much” (P31).

7.3.2 Self-Monitoring Is Easy and Convenient.

Participants’ remarks at the pre-use stage, highlighting the importance of a self-report approach that reduced their burden, were echoed at initial use. The perceived ease of use of the app was mentioned multiple times, particularly the quick interactions enabled by the smartwatch technology and how they supported patient engagement:

“I do think that it is quicker and easier to log my mood on the watch” (P1).

Finally, self-report via a wearable device was once again mentioned as a convenient support for EMA:

“It’s handy having the app on your person, so to speak, so you can log your mood easily” (P33).

7.4 Initial Use Acceptance Barriers

At this stage, participants discussed obstacles to their acceptance, including the unreliable behavior of the app, anxiety toward using the smartwatch, the need to change personal habits, and the desire for more tailoring to their needs.

7.4.1 I Don’t Believe I Can Use the Smartwatch.

Frustration could be felt in participant P7’s response to the second questionnaire. However, despite their struggles with smartphones, the patient was willing to give the smartwatch a try:

“I don’t have [a] laptop and only [use] this annoying small screen iphone . . . Don’t know how to make it [the smartwatch] work as I struggled with technology, but willing to learn” (P7).

This statement highlights that even strong barriers to acceptance might not stop one from wanting to use the technology, if stronger motivations exist.

7.4.2 The App Doesn’t Behave Reliably.

Usability and configuration issues (e.g., users not granting access to sensor data) causing the Mood Monitor app to malfunction were the most frequently mentioned obstacle to use. Patients’ responses mentioned how the inconsistent behavior of the app did not match their expectations: “Some days it hasn’t asked me for the mood” (P21). This sometimes forced them to take action to solve the issue: “I have to uninstall the watch app and reinstall it to get it working again” (P23).

Some participants were reassured seeing that, despite some inconsistencies in the behavior of the watch app, it was “still record[ing] my sleep, activity and mood etc.” (P36). However, for others, not knowing if the data was correctly recorded became a source of self-doubt, induced by the impression that one is not using the system as it was intended. Participant P4 writes,

“I am not sure if I am using the app correctly” (P4).

7.4.3 I Find It Difficult to Change My Habits.

By the time they received the study smartwatch in the post, some participants had already formed the habit of self-reporting their mood on the mobile app. Difficulty in changing one’s habits resulted in a delay in the uptake of the smartwatch app. For example, participant P28 writes,

“Didn’t use it straight away as I was recording the mood data on my smartphone” (P28).

Another patient’s comment reflected the impact of depression symptoms on one’s ability to change their habits:

“some days, I just don’t put it on as I am just being lazy etc.” (P30).

The language employed by P30 (“just being lazy”) is recurrent in the responses of people experiencing depression, who speak of depression symptoms being misinterpreted as laziness [61] or judged by their peers/relatives [88], often leading to an attitude of self-blame [55].

Finally, physical discomfort induced by the wearable impeded the continuous monitoring of sleep. Although wearing the watch at night was optional, some patients did so, hoping to get a more accurate reading of their sleep patterns. However, keeping the watch on overnight was sometimes a source of discomfort: “I find it quite uncomfortable always sleeping with the watch on” (P9).

7.4.4 The App Doesn’t Allow for Enough Personalization.

Several comments concerned the list of lifestyle elements presented after a user logged a mood, as participant P1 comments,

“the options it [the screen] gives me I don’t think are the reasons affecting my mood” (P1).

Participants mentioned ways designers could better tailor the app to their needs, for instance, through adding “more causes of moods . . . stress, finances etc.” (P9) or allowing the person to enter “your own reasons for why you feel that way sometimes other than what is on the app” (P20).

7.5 Sustained Use Acceptance Facilitators

Patients’ answers to the final questionnaire (T3) and post-study interviews revealed that although most of the acceptance facilitators identified before or at initial use persisted at sustained use, a range of additional factors came into play, including a seamless integration of the technology, the absence of stigma associated with use, and a therapy further tailored to patients’ needs.

7.5.1 It Helps Me Check in with Myself.

At sustained use again, the most mentioned benefit of the smartwatch app was that it encouraged consistent mood monitoring, particularly through the mood reminders. Participant P5 comments, “every time I was prompted, I would log my mood,” and participant P36, “the reminders really helped otherwise I would have definitely forgot.” Delivering self-report on the smartwatch also supported patient compliance: “I stopped logging my mood since I haven’t had the watch” (P33). In addition, participants highlighted how the reminders helped “create a routine” (P28) and supported them “to stop and check in with how you feel” (P9). These new opportunities for reflection helped patients gain self-awareness and become “more aware of yourself, how you are physically and emotionally” (P5). Identifying one’s current mood can be challenging, as “we don’t always pay attention to our mood so closely” (P36). Participant P14 describes,

“When I got the reminder in the morning, I wasn’t sure how I was feeling. So when it asked me to record my mood, I actually took 2 min to understand how I’m feeling . . . I would carry one problem or another . . . But now, if I know that I’m in a bad mood, I know that I have to lay low, just let it pass and it’s going to be okay” (P14).

With a similar experience, other participants mentioned how receiving the mood reminders on the watch itself “broke the cycle . . . especially when I’m feeling anxious” (P5) and helped train their self-awareness, or, as participant P29 explains, “train myself to stop and think about my mood” (P29).

Finally, some patients pointed out how difficult it was to “stop, take time, log the information and do it there and then” (P17) and the need for retrospective reflection:

“[the app] allowed me to make a note of effectively how I was feeling at that point in time, and then go back and retrospectively look at it” (P17).

7.5.2 It Encourages Me to Adopt Healthier Habits.

Echoing the pre-use motivations for the uptake of the app, participants commented on how the Mood Monitor supported behavioral change. First, the identification of patterns between mood and lifestyle habits was made easier (P26), helping patients understand “why I was feeling that way, what had changed for me to be like this” (P5).

Reflection was further encouraged by the app asking patients which elements might have affected their mood: “you have to give a reason [for your mood], it does make you more self-aware of the likely reason you are feeling good or bad” (P34). In particular, the impact of sleep and physical activity was made more explicit, as participant P36 explains,

“When I wasn’t getting enough sleep or if I was having too much sleep, I did actually notice that it was making me feel a bit grouchy or irritable the next day, and that’s something I never kind of linked together” (P36).

The increased self-awareness of patterns supported behavioral change, enabling participants to adopt healthier habits. Participant P36 comments,

“being more active or getting out and doing more things, then it actually made my mood a bit better sometimes I think. And before I’d be like ‘oh I don’t want to go do that, what’s the point’ but actually doing it did make a difference” (P36).

Finding the motivation to engage in physical activity is often challenging for individuals experiencing depression [18], and participants described how seeing their step count in the app gave them a ‘boost’ to get active, as participant P5 describes: “It motivated me to go for walks more . . . I don’t think I would have really been motivated to do that if I hadn’t had the smartwatch.” The Apple Watch daily prompts to stand up and encouragements to set and reach fitness goals acted as an additional motivation, as participant P11 notes: “[the watch] encouraged me to get out of bed and try and get active.” Similarly, the Apple Watch bedtime reminder and wake alarm (which participants were instructed to set up to enable sleep tracking) helped them maintain a sleep routine, particularly the bedtime reminders which acted as

“a cutoff point . . . otherwise you can blink and it will be 10:30. So yeah, I did find it helpful in keeping a routine” (P34).

7.5.3 Self-Monitoring Is Easy and Convenient.

The use of the smartwatch was praised by participants as a convenient delivery means for the mood self-report, making it “easier to record them [moods] there and then” (P1). Participant P5 reported doing “all my mood logging on the watch,” and participant P28 describes

“It’s more convenient than remembering to write it down or having to go online to do it” (P28).

The convenience of the mood self-report on the watch supported patient engagement, with patients logging their mood “more frequently than I necessarily would have on the computer” (P33). In particular, mood logging on the smartwatch was described as quick and effortless compared to the intervention’s mobile and desktop apps, which required an “extra effort” (P34, P17). Participant P36 comments,

“it was a lot easier to do it on a smartwatch app because it’s not like everyday I’m going to want to log into SilverCloud [platform]” (P36).

The location and proximity of the smartwatch, allowing immediate interaction, further facilitated self-report, as participant P34 explains, “it’s there, and you’re not having to pick up your phone from somewhere else.” Finally, participants mentioned a lessened burden associated with the tracking of sleep—for instance, participant P26 explains that “trouble sleeping was much easier to track.”

7.5.4 I Don’t Fear the Judgment of Others.

Stigma associated with mental health difficulties was a source of worry for most participants, with some hiding their ongoing therapy: “it’s not something that I’d like to advertise to everybody” (P17). Engaging with the Mood Monitor watch app felt safe for participants, as it did not make their difficulties visible to others. Through their ‘subtle’ interface, the reminders enabled discreet logging of mood, as participant P17 explains, “if somebody saw it on my watch, they wouldn’t realize I was involved in some something like this program.” The smartwatch itself was described as a more private means to self-report:

“almost under the radar . . . sometimes if you’re on a big phone, you know, people can see more. If it’s just on your watch, nobody’s really interested” (P34).

Most patients also declared feeling comfortable when logging their mood in a social context, as participant P5 describes,

“We were out with friends this past weekend, and I got some reminders, and I felt very comfortable just kind of quietly logging it, and just taking a second to check in with myself” (P5).

For participants who were open with their relatives about undergoing therapy, the Apple Watch acted as a conversation starter (P33). The interest sparked by the smartwatch created opportunities to speak about the program, which felt empowering for some patients, such as participant P29, who was

“trying to make it quite casual, and I’d say ‘oh I’m just enjoying this program, it’s really good and then it actually allowed me to use Apple Watch for 6 weeks.’ So, I’m sort of telling people in a way that if you ever need help, there’s a way for you” (P29).

7.5.5 The Therapy Is Better Tailored to My Needs.

Participants reported feeling comfortable sharing the data collected through the Mood Monitor watch app with their supporter: “it was just the Mood Monitor, I did not mind sharing that” (P14). Trust in how their personal data was handled reinforced that feeling, as participant P29 comments,

“I am very confident because obviously I know they [the supporter] won’t be discussing any of my personal information, unless they think I have a life threatening moment . . . I do trust them” (P29).

This attitude was primarily motivated by the desire to get better. Participant P36 notes, “at the end of the day it’s in my best interest.”

First, participants believed that the more information the supporter has, the better they can help. Providing information about their daily mood and lifestyle habits appeared particularly helpful to “give them [the supporter] an idea of what I’ve been doing and how I’ve been feeling day to day” (P31), so that they could “understand how I exactly feel” (P29). With regard to lifestyle information, participants explained how it could “give a lot deeper insight for the supporter . . . rather than just asking me ‘how have you been sleeping?’” (P28). Furthermore, self-report data might provide information complementary to the clinical questionnaires, which are affected by recall bias. Participant P36 explains that filling in clinical surveys is

“very subjective, you might think you’ve been feeling a different way to how you actually have been feeling . . . it would be quite a good comparison for them to see actually how you felt every day for 2 weeks against the questionnaires” (P36).

Second, participants trusted the supporter’s expertise to identify pertinent elements in the data and tailor conversations (P29), detect warning signs (P23), and assess clinical outcomes (P10, P18).

7.5.6 It Is Part of My Routine.

Last, participants described how self-report “quickly became into a habit” (P29), or in the words of participant P26, “a part of normal life.” The watch integrated both into their routine (“I got into the habit of just putting it on every morning pretty much straight away” (P5)) and with the technologies they already used. Participant P33 describes,

“I’ve found it quite great that it sort of seamlessly worked, it integrated with everything that you already have” (P33).

7.6 Sustained Use Acceptance Barriers

Answers to the final questionnaire and post-study interviews revealed that factors such as a lack of tailoring in the content of the app, unreliable or disruptive behavior, usability issues, and a lack of trust with regard to the handling of personal data were obstacles to patient acceptance.

7.6.1 The App Doesn’t Allow for Enough Personalization.

Most patients reported that the list of options from which they could select which factor(s) influenced their mood was not relevant to their situation, as participant P33 comments,

“I didn’t really find them [options] particularly relevant, so I got into a habit of just overlooking it . . . That definitely was something that kind of put me off using it” (P33).

Some participants suggested adding more reasons to explain the mood, such as family, friends, work, and other lifestyle factors which might also impact one’s mood (P34). Enabling customization of the list was also evoked, as participant P34 suggests, “hav[ing] the opportunity to constantly edit it and put in your own [factors] I think would be fantastic.”

Patients also missed being able to apply a valence to each factor, to record “if it affected in a positive or negative way” (P12). Participant P34 describes,

“I feel down because I haven’t done exercise, therefore, exercise is a reason. But equally, when I was logging me having a great mood after I’d gone for a nice run, then exercise was also a factor” (P34).

Participant P34 suggested adding an extra step to self-report, “where you’re like [selecting] ‘too much caffeine’ [or] ‘too little caffeine.”’ Participants also discussed how the smartwatch app could go a step further in terms of customization, to support their engagement with self-report. The use of generic encouraging prompts (e.g., ‘Well done’) was strongly contested: “I think the generic messages just wash over people because we get so many of them” (P17). Similarly, although the ‘Tips to stay well’ helped some patients adjust their habits (e.g., gradually reducing their caffeine intake (P34)), the recommendations they provided were perceived as too generic. Participant P36 notes,

“I felt like for me personally a lot of them were kind of like common sense or self-explanatory. It’s kind of like I know I need to do those things, sometimes it’s a little bit harder for me to do them but it was kind of like I was aware of those kind of things so throughout the time I think I looked at it once when I set it up and that was it” (P36).

This comment also reflects the difficulty of engaging in behavioral change despite being conscious of the importance of adopting healthy lifestyle habits (“I know I need to do those things”).

To further support behavioral change, participants suggested sending prompts “relevant to what you’ve done” (P17, P36). Such custom messages based on the self-monitoring data collected might also support introspection and action, as participant P33 explains,

“If there was poor sleep going on, asking the questions ‘Is everything OK? Is there something that you need to talk somebody about?’ . . . The more personalized, the better” (P33).

Such messages could be further enhanced through making explicit mention to self-report data:

“Something that was relevant to you personally, [e.g.] ‘So we’ve noticed that the last 3 days you’ve had less sleep and your mood is declining’ . . . you could then take an action” (P17).

However, by incorporating longer prompts, the smartwatch app would no longer rely on microinteractions, which might impact user engagement with self-report [77]. Such a design change therefore requires careful consideration and further investigation.

7.6.2 The App Doesn’t Behave Reliably.

Some participants managed to solve technical issues by reinstalling the app: “that was just an initial hiccup, but I got over that” (P5). However, for others, the issues persisted; for instance, participant P17 comments, “I don’t think the reminders came through consistently.” This negatively impacted acceptance of the technology, as participant P33 notes, “I would have used it more regularly if the reminders worked.”

7.6.3 I Am Concerned about Sharing My Self-Report Data.

Sharing lifestyle data collected through the Mood Monitor with their supporter sometimes induced worries in participants: “it just puts a little more pressure on you” (P14). When asked how their supporter should use the data collected through self-report on the watch, participant P35 simply replied “With care,” reflecting the caution needed when dealing with sensitive data. In particular, patients expressed that they did not want to ‘feel trapped’ and under surveillance, as participant P5 explains,

“I’m not sure that I would necessarily want my clinician to be kind of Big Brother-ing on my sleeping trends” (P5).

In particular, as much as patients liked sharing data reflecting improvements, giving access to unsatisfactory data could be a cause of additional stress:

“It would have been nice if she [the supporter] had said ‘I see that you’ve been moving more, that’s really good!’ . . . but I wouldn’t want them to hold that against me if I haven’t been sleeping well or if I haven’t been exercising” (P5).

Participant P14 highlights that sharing ‘unsatisfactory’ data risks leading to feelings of self-blame: “What happens if I am not able to work out for 2 days? . . . they would think that I’m not working out, or I’m not doing good enough.” Pointing out the difficulty of maintaining a sleep and exercise routine when experiencing depression symptoms, the participant further argued that self-report data should not be used to make them answerable for something they had little control over:

“I would want to work out some days, but my body has no energy. I can’t go and explain it to someone why I feel that way because that is how I feel . . . It’s okay if they have it [the data], right, but I don’t want any questions asked as to why” (P14).

Finally, a lack of trust in the secure handling of data was an obstacle to self-monitoring sleep on the watch. Participant P17, who owned a smart sleep mat, explains, “I chose not to share that information [sleep] . . . I was very concerned about having another source, another outlet which I wasn’t overly comfortable had been fully secured.”

7.6.4 It Disrupts My Routine.

For some participants, wearing the smartwatch disrupted their routine. Responses revealed that the smartwatch made switching off from technology difficult, as participant P36 describes,

“I couldn’t really switch off . . . having it on my arm and seeing it all the time, sometimes I felt a bit drained and like I wasn’t actually connected to the real world” (P36).

The automatic delivery of notifications and the frequent ‘Stand up’ and ‘Breathe’ prompts of the Apple Watch were described as “annoying” (P6), particularly when they interrupted participants in the middle of work (P29, P34). Once again, the importance of personalization came up in the responses, with the suggestion of aligning the delivery of prompts (including the mood reminders (P36)) with one’s calendar: “[the app] would see that I’ve got a free spot in my calendar . . . It gives the watch a slot and it is more likely to get my attention when I’m not already busy” (P17).

8 CLINICAL FEASIBILITY OF THE MOOD MONITOR SMARTWATCH APP

8.1 No Negative Impact on Patient Clinical Outcomes

We compared, across groups, patient clinical scores for depression (PHQ-9), generalized anxiety (GAD-7), and functional impairment (WSAS) obtained at the start and end of the 8-week therapy period. Higher scores on these questionnaires represent greater symptom severity. We observe a strong decreasing trend for all three scores over the course of therapy (Table 4), signifying an improvement in symptoms, with no significant difference between the smartwatch and treatment as usual groups. This finding indicates that the introduction of the smartwatch did not have a negative impact on patients’ clinical outcomes from the digital therapy.
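The article does not name the test behind this group comparison, so the sketch below shows one common approach as a hedged illustration: a mixed (split-plot) ANOVA with time as the within-subject factor and group as the between-subject factor, applied to synthetic PHQ-9 values.

```python
# Hedged sketch: a mixed ANOVA testing whether symptom change over therapy
# differs by group. Data are synthetic placeholders, not the study's records.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "pid": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["SW"] * 6 + ["TAU"] * 6,
    "time": ["pre", "post"] * 6,
    "phq9": [16, 11, 18, 12, 15, 10, 17, 11, 14, 12, 16, 13],
})

aov = pg.mixed_anova(data=df, dv="phq9", within="time",
                     subject="pid", between="group")
print(aov)  # a non-significant group x time interaction would indicate
            # comparable clinical trajectories in the two groups
```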

Clinical measure | Measurement point | Group | N | Mean | SD
Depression (PHQ-9) | Pre-therapy | SW | 34 | 16.24 | 5.721
 | | TAU | 32 | 15.94 | 5.199
 | Post-therapy | SW | 31 | 11.58 | 7.270
 | | TAU | 29 | 11.76 | 6.289
Generalized anxiety (GAD-7) | Pre-therapy | SW | 35 | 11.83 | 5.762
 | | TAU | 32 | 12.41 | 3.958
 | Post-therapy | SW | 32 | 9.72 | 6.779
 | | TAU | 29 | 9.34 | 4.561
Functional impairment (WSAS) | Pre-therapy | SW | 34 | 21.12 | 9.810
 | | TAU | 32 | 22.63 | 8.583
 | Post-therapy | SW | 31 | 17.16 | 11.716
 | | TAU | 29 | 18.52 | 9.767

Table 4. Descriptive Statistics on Clinical Scores in the Smartwatch (SW) and Treatment as Usual (TAU) Groups

8.2 No Negative Impact on Patient Usage of the Therapy Program

We compared usage of the digital therapy platform in both groups through the metrics presented in Table 5. We found no significant differences, which suggests that the introduction of the smartwatch app had no impact on patient usage of the platform. We also checked for differences in the use of mood self-report in both groups. We observed that the smartwatch group reported a significantly higher number of moods (U = 245.0, p < .001) than the treatment as usual group, as illustrated in Appendix D. This shows that introducing the smartwatch did not impede patient self-monitoring and that patients deliberately used the device to self-monitor their mood.

Metric | Group | Mean | SD
Total time on platform (s) | TAU | 11,637.63 | 9,452.991
 | SW | 9,663.09 | 7,046.912
Number of sessions | TAU | 15.50 | 11.769
 | SW | 18.54 | 10.279
Number of tools used | TAU | 6.25 | 3.592
 | SW | 6.20 | 3.612
Percentage of the program viewed | TAU | 48.24 | 29.9229
 | SW | 52.94 | 31.0091
Number of reviews | TAU | 3.66 | 1.842
 | SW | 3.46 | 1.559

Table 5. Descriptive Statistics on Usage of the Digital Therapy Program in the Smartwatch (SW) and Treatment as Usual (TAU) Groups

8.3 No Negative Impact on Patient Acceptance of Self-Report

The evolution over time of the acceptance scores for each group can be seen in Figure 15. Although the smartwatch group’s scores remained stable throughout therapy, we observe that the treatment as usual group’s scores grew in spread over time, and the mean score slightly decreased from pre-use (n = 34, M = 86.36, SD = 8.001) through initial use (n = 28, M = 83.64, SD = 9.924) to sustained use (n = 32, M = 78.75, SD = 14.300). To understand the factor(s) responsible for this discrepancy, we compared the evolution of the scores for each acceptance mediator, in both groups, with a repeated-measures ANCOVA controlling for the effect of the relationship status variable. No significant difference was found for the mediators perceived threat, usefulness, ease of use, and behavioral intention. However, results showed evidence of a statistically significant difference for the mediator attitude (F(2, 98) = 5.176, p = .007). Figure 16 shows that the trends in the two groups are indeed opposite: in the smartwatch group, patients’ attitude scores increased over time and the spread diminished; in the treatment as usual group, the scores decreased over time while the spread increased. This suggests that although the introduction of the Mood Monitor smartwatch app did not seem to impact patients’ overall acceptance of the self-monitoring, it did impact the evolution of patients’ acceptance, and specifically their attitudes toward the self-monitoring technology.
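Repeated-measures ANCOVA is not available as a single call in common Python packages; a close approximation, sketched below under hypothetical data and column names, is a linear mixed model with a random intercept per participant, a group-by-time interaction, and the relationship status covariate.

```python
# Hedged sketch: approximating a repeated-measures ANCOVA with a linear
# mixed model. All data and column names are synthetic placeholders, and
# the tiny sample may trigger convergence warnings.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pid": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "group": ["SW"] * 9 + ["TAU"] * 9,
    "time": ["pre", "initial", "sustained"] * 6,
    "attitude": [80, 84, 88, 75, 80, 85, 82, 85, 90,
                 85, 80, 72, 88, 82, 75, 86, 83, 70],
    "in_relationship": [1, 1, 1, 0, 0, 0, 1, 1, 1,
                        0, 0, 0, 1, 1, 1, 0, 0, 0],
})

# Random intercept per participant; the group x time interaction tests
# whether attitude trajectories differ between groups while adjusting
# for the covariate.
fit = smf.mixedlm("attitude ~ C(time) * group + in_relationship",
                  data=df, groups=df["pid"]).fit()
print(fit.summary())
```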

Fig. 15. Acceptance scores per group at pre-use, initial use, and sustained use. Mean scores are represented by an ‘x’ and the median score by a line.

Fig. 16. Acceptance scores for the attitude mediator, per group, at pre-use, initial use, and sustained use. Mean scores are represented by an ‘x’ and the median score by a line.

To conclude, this analysis revealed that the introduction of the Mood Monitor smartwatch app did not compromise patients’ clinical outcomes, usage of the digital therapy program, or acceptance of the self-report, which supports the clinical feasibility of the intervention.

9 DISCUSSION

The findings revealed that patients strongly accepted the Mood Monitor smartwatch app as a means to monitor moods, sleep, and exercise during the iCBT program for depression. We were also able to identify which elements facilitated and impeded patient acceptance. Last, our findings support the clinical feasibility of the Mood Monitor smartwatch app. Drawing on these results, we (1) propose guidelines for designing self-monitoring on a smartwatch, (2) discuss perspectives for studying acceptance of mental health technologies, and (3) reflect on the conduct of user acceptance research in clinical settings.

9.1 Designing Self-Monitoring on a Smartwatch

Drawing on the identified facilitators of and barriers to patient acceptance of the Mood Monitor, we formulate guidelines for designing self-monitoring interventions on a smartwatch that are accepted by patients. We thereby extend previous research proposing design recommendations for mental health self-report interventions on a smartphone [36, 57]. In light of the existing validated models of technology acceptance, we reflect on the uncovered facilitators and barriers. We present the guidelines in Table 6 and map each one to the corresponding acceptance factor from the validated acceptance models.

Acceptance factors, models [reference] | Design guidelines
Health status [52] | Target users are experiencing difficulties with their mental health. Understanding the extent of these difficulties (e.g., comorbidities) might help picture their impact on users’ daily life.
Health beliefs & concerns [20, 52] | Users’ beliefs are likely to shape how they perceive their health status. Considering users’ belief system might help identify what fears they might carry.
Healthcare professional relationship [38] | Self-monitoring technologies sometimes share patient data with a mental health professional. The design should aim to empower users in choosing who has access to their self-report data. If users decide to share self-report data with their mental health supporter, the technology should promote compassionate use of this data.
Self-image [118, 119] | Some people might not be comfortable disclosing their use of a technology for self-monitoring, as they might fear others’ judgment. We should aim to design non-stigmatizing experiences, which allow users to self-report in a private manner. However, some people might use the technology as an opportunity to speak out: the design should aim to empower those users.
Social pressure (or support) [20, 25, 52, 120, 121] | Loved ones can be a strong motivation for one to seek mental health care. When designing for self-monitoring uptake, it might be useful to consider users’ social context and put additional effort into encouraging those who are the most isolated.
Resistance to change [38] | Habits are by nature hard to change. Consider the ways in which using the technology might impact users’ routine: how can we minimize unnecessary disruptions and facilitate the implementation of needed changes? Adopting healthier behaviors is a strong motivation for engaging with self-report. Encouraging users with tailored messages (e.g., grounded in self-report data) can support their engagement. The design should aim to support users’ awareness of their progress and reinforce their sense of achievement.
Trust [25, 33, 45] | Self-monitoring technologies are likely to collect and store sensitive user data. The design should be transparent as to how the data is handled and what it is used for.
Privacy protection [33, 47, 90] | Some people (particularly those identifying as men) fear the stigma associated with mental health therapy. The design can support users’ privacy by ensuring the technology is not medicalizing, does not disclose its purpose to others around, and uses discreet interactions.
Technology anxiety [52, 117, 118] | Some people may be apprehensive about using a smartwatch for the sensitive task of self-monitoring. Understanding the source of their anxiety (accessibility issues, fear linked to sensor data collection, etc.) may help to mitigate it.
Perceived reliability [52, 118, 119] | Individuals who see the smartwatch as a reliable means to support their mental health are more likely to engage in self-report. The design should aim to convey how the system works and meets its aims, and to clearly communicate the results achieved.
Objective usability [38, 52, 117, 118] | Ease and convenience of use are strong motivations for self-reporting with the smartwatch. How can the design minimize the demands placed on users while keeping up their self-awareness?
Integration [25] | People are more likely to accept and engage with in-the-moment interventions like self-report if these are well integrated into their life. The design should (1) facilitate a seamless integration of the smartwatch with the user’s devices (e.g., mobile phone) but also (2) minimize disruptions to their routine (e.g., linking reminders with a personal calendar).

Table 6. Guidelines to Design Self-Report on a Smartwatch, Mapped to Validated Acceptance Factors

9.2 Perspectives for Studying Acceptance of Mental Health Technologies

Findings indicate the potential impact of user demographics on pre-use acceptability, and therefore on technology uptake—in our case, participants’ relationship status, relating to the acceptance factor of social influence. In light of the recently developed acceptance models for the context of digital health technologies, an interesting theoretical contribution would be to examine the relationships that might exist between users’ demographic characteristics and newly introduced acceptance factors. This opens perspectives for future work, such as exploring the impact users’ education level might have on their health beliefs and concerns.

Although some of the factors influencing acceptance were deemed pertinent by patients throughout the three stages of the user journey (e.g., “it helps me check in with myself”), others were only brought up before first use, after first use, or after long-term use. For instance, conversations around patients’ desire to “get better” were situated at pre-use, whereas those around their concerns about data sharing took place after sustained use of the smartwatch app. This finding aligns with the TAL continuum [68], reinforcing the importance of (1) viewing user acceptance as evolving through use of the technology, and (2) adopting a longitudinal measurement approach to capture the facilitators and barriers to acceptance present at the different stages of the user journey.

We also observed that although many of the factors identified as playing a role in patient acceptance of self-report were encompassed by the HITAM, not all were. Indeed, the acceptance factors identified in the qualitative and quantitative analyses belonged to a set of 11 validated models (Table 6). In addition, the findings revealed that other factors (namely relationship status, familiarity with technology, match with expectations, and satisfaction) facilitated acceptance of the self-report activity on the smartwatch. This aligns with findings of the scoping review of Nadal et al. [68], reporting that existing acceptance models were often not “adapted to the specific issues of their target population,” with researchers often exploring additional “context-specific constructs.” Although consolidating models into a single validated approach for mental health contexts could be beneficial, such an approach would likely require adjustments as technology evolves, which further supports the value of flexible, qualitative exploration of technology acceptance.

9.3 Conducting User Acceptance Research in Clinical Settings

The findings of this study also provide insight into the conduct of HCI research examining user acceptance in clinical settings. In this section, we reflect on approaches for measuring acceptance and the challenges associated with clinical settings.

9.3.1 Understanding Technology Uptake and the Evolution of Acceptance over Time.

Our measurement approach relied on previous literature arguing for the influence of user demographics [121], the different acceptance mediators (e.g., perceived threat) [52], and satisfaction with therapy [15, 40, 59, 71, 89] on user acceptance of digital health technologies. Findings revealed that looking through these three lenses provided valuable information on patient acceptance of self-report on a smartwatch. First, analysis of patient demographics and pre-use acceptability scores made it possible to highlight the influence of relationship status. Second, analyzing each mediator’s acceptance score obtained at pre-use, initial use, and sustained use revealed that these evolved in different manners. In particular, the lack of a statistically significant difference in the overall acceptance and mediator scores at the three time points made it possible to rule out potential risks, such as the emergence of obstacles to acceptance in the user journey. Finally, the strong correlation between sustained-use acceptance scores and patient satisfaction with therapy revealed that user acceptance of a small component of a mental health intervention (e.g., mood self-report) can be linked to satisfaction with the overall intervention. This suggests the potential value of investigating the relationship between these two elements.

9.3.2 Mixed-Measurement Methods to Get a Rich Understanding of Patient Acceptance.

The set of acceptance factors deemed important by patients differed (1) by stage of the user journey but also (2) by data collection method. For instance, although the perceived threat questionnaire score remained high throughout the user journey (indicating patients’ persistent concerns about their mental health), this factor was only brought up in conversations at the pre-use stage. This supports the importance of gathering users’ qualitative feedback at different stages of the user journey, to give them an opportunity to share what specific factors are important to them at that moment in time. In addition, although a body of work has equated satisfaction with acceptance [15, 19, 40, 44, 48, 49, 71, 72, 74, 89, 105], the findings of this research show that measuring satisfaction alone does not provide an understanding of the factors that determine patients’ usage behavior with the technology. On a higher level, the approach adopted to elicit patients’ opinions greatly impacts the outcome of an acceptance study in terms of understanding which factors play a role at which stage of the user journey. By adopting a methodology combining open questions with targeted questions, we respectively (1) gave patients the opportunity to explain which acceptance factors mattered to them most at specific time points, and (2) captured patient perspectives on aspects that they might have perceived as being of lesser importance but which might still affect their technology acceptance. Therefore, we recommend measuring patient acceptance using a mixed-methods approach combining targeted and open measurement items to maximize insight into the evolution of patient acceptance of a technology.

Finally, capturing usage metrics together with a questionnaire addressing known mediators of acceptance enables between-group comparisons in user acceptance, which would help detect and explain problems if they were present.

10 LIMITATIONS AND FUTURE WORK

The primary purpose of our study was to explore patient acceptance of self-report on a smartwatch; a secondary aim, however, was to investigate the safety of such an intervention. Indeed, when exploring the integration of a new technology into a mental health care service, it is essential to ensure that it does not have a negative impact on the service by assessing clinical feasibility. Doing so requires examining how the technology performs in comparison to an existing analog or digital service. In studies examining the replacement of a service (analog or digital) with a new piece of technology, such a comparison is straightforward (i.e., old service vs. new service). However, the study presented in this article examined the addition of a digital service (the Mood Monitor smartwatch app) to an existing digital service (the online therapy program). In such setups, the comparisons run to assess clinical feasibility need to clearly target the components of the service that form the addition rather than the service as a whole. For us, this implied wording the AQ given to the treatment as usual group such that it would target the self-report component of the program. With digital healthcare systems becoming increasingly complex, future work could look at producing guidelines for assessing the clinical feasibility of digital additions made to those systems. Future research could also look into redesigning the self-report component of the digital therapy mobile app, integrating scheduled prompts. Comparing the delivery of self-report on different devices (smartphone vs. smartwatch) might then reveal whether patients’ acceptance of that increased level of prompting varies across modalities. Finally, although this study did not look at within-group differences in acceptance, this could be the focus of future work, for instance, assessing the significance of certain acceptance factors at specific stages of the user journey.

11 CONCLUSION

This article presented a novel use of smartwatches for the self-monitoring of mood and lifestyle habits within a routine iCBT intervention for depression. We first evaluated patient acceptance of the Mood Monitor smartwatch app. This allowed us to determine that the smartwatch app was highly accepted by patients throughout the course of the 8-week therapy. Then, we identified the elements that acted as facilitators and those that acted as barriers. Our findings also supported the clinical feasibility of the intervention. Drawing on this, we proposed guidelines for the design of self-monitoring interventions on a smartwatch.

APPENDICES

A FEATURES OF THE MOOD MONITOR SMARTWATCH APP

Feature | Description
Mood monitoring | Performed in one tap from the smartwatch locked screen; records in-the-moment mood; automatically schedules reminders to record mood; allows users to adjust the timing of reminders
Lifestyle monitoring | Automatically captures daily bedtime, number of hours slept, and step count
Self-report visualization | Displays for each day of the week the recorded moods, bedtime, hours slept, and step count; displays a detailed view of today’s last recorded mood, bedtime, hours slept, and step count; indicates differences between today’s and yesterday’s bedtime, hours slept, and step count; displays encouraging prompts when milestones are reached
Tips to stay well | Displays brief pieces of advice encouraging healthy lifestyle choices

Table 7. Description of the Features of the Mood Monitor Smartwatch App

B VERSIONS OF THE AQ

Mediator | Item code | Smartwatch group | Treatment as usual group
Perceived Threat | PT1 | I am strongly concerned about my mental wellbeing. | I am strongly concerned about my mental wellbeing.
 | PT2 | I would make efforts to manage my mental wellbeing. | I would make efforts to manage my mental wellbeing.
Perceived Usefulness | PU1 | I think that keeping track of my mood with the watch app will help in managing my mental wellbeing. | I think that keeping track of my mood with the programme will help in managing my mental wellbeing.
 | PU2 | I think that keeping track of my sleep and physical activity automatically will help in managing my mental wellbeing. | I think that keeping track of my lifestyle choices, such as sleep and physical activity, with the programme will help in managing my mental wellbeing.
 | PU3 | Overall, I think that the watch app will be useful in managing my mental wellbeing. | –
Perceived Ease of Use | PEOU1 | I think that keeping track of my mood with the watch app will be easy. | I think that keeping track of my mood with the programme will be easy.
 | PEOU2 | I think that keeping track of my sleep and physical activity with the watch app will be easy. | I think that keeping track of my lifestyle choices, such as sleep and physical activity, with the programme will be easy.
 | PEOU3 | My interaction with the watch app will be clear and understandable. | –
 | PEOU4 | I think that the watch app will be easy to use. | –
Attitude | A1 | I will be comfortable recording my mood data with the watch app. | I will be comfortable recording my mood data with the programme.
 | A2 | I will be comfortable recording my sleep and physical activity data with the watch app. | I will be comfortable recording my lifestyle choices, such as sleep and physical activity, with the programme.
 | A3 | I will be comfortable sharing my mood data with my SilverCloud Health supporter. | I will be comfortable sharing my mood data with my SilverCloud Health supporter.
 | A4 | I will be comfortable sharing my sleep and physical activity data with my SilverCloud Health supporter. | I will be comfortable sharing my lifestyle choices, such as sleep and physical activity, with my SilverCloud Health supporter.
Behavioral Intention | BI1 | I intend to use the watch app until completion of my treatment. | I intend to track my mood and lifestyle choices, such as sleep and physical activity, when prompted by the programme until completion of my treatment.
Usage Behavior | UB1 | I decided to use the watch app because . . . | I decided to enrol in this study because . . .

Table 8. AQ at Day 0 (T1)

Mediator | Item code | Smartwatch group | Treatment as usual group
Perceived Threat | PT1 | I am strongly concerned about my mental wellbeing. | I am strongly concerned about my mental wellbeing.
 | PT2 | I would make efforts to manage my mental wellbeing. | I would make efforts to manage my mental wellbeing.
Perceived Usefulness | PU1 | I think that keeping track of my mood with the watch app is useful in managing my mental wellbeing. | I think that tracking my mood with the program is useful in managing my mental wellbeing.
 | PU2 | I think that keeping track of my sleep and physical activity automatically helps in managing my mental wellbeing. | I think that tracking my lifestyle choices, such as sleep and physical activity, with the program is useful in managing my mental wellbeing.
 | PU3 | Overall, I think that the watch app is useful in managing my mental wellbeing. | –
Perceived Ease of Use | PEOU1 | I think that it is easy to track my mood with the watch app. | I think that it is easy to track my mood with the program.
 | PEOU2 | I think that it is easy to track my sleep and physical activity with the watch app. | I think that it is easy to track my lifestyle choices with the program.
 | PEOU3 | My interaction with the watch app is clear and understandable. | –
 | PEOU4 | I think that the watch app is easy to use. | –
Attitude | A1 | I am comfortable recording my mood data with the watch app. | I am comfortable recording my mood data with the program.
 | A2 | I am comfortable recording my sleep and physical activity data with the watch app. | I am comfortable recording my lifestyle choices, such as sleep and physical activity, with the program.
 | A3 | I am comfortable sharing my mood data with my SilverCloud Health supporter. | I am comfortable sharing my Mood Monitor with my SilverCloud Health supporter.
 | A4 | I am comfortable sharing my sleep and physical activity data with my SilverCloud Health supporter. | I am comfortable sharing my Lifestyle Choices chart with my SilverCloud Health supporter.
Behavioral Intention | BI1 | I intend to use the watch app until completion of my treatment. | I intend to track my mood and lifestyle choices, such as sleep and physical activity, when prompted by the program until completion of my treatment.
Usage Behavior | UB1 | How would you describe your use of the watch app? | How would you describe your use of the Mood Monitor and Lifestyle Choices chart?
 | UB2 | What difficulties (if any) did you experience while installing or using the watch app? | –

Table 9. AQ at 3 Weeks (T2)

Table 10.
Items are grouped by mediator; each item code is followed by the smartwatch-group wording and, where one exists, the treatment-as-usual-group wording.

Perceived Threat
  PT1  Smartwatch group: I am strongly concerned about my mental wellbeing.
       Treatment as usual group: I am strongly concerned about my mental wellbeing.
  PT2  Smartwatch group: I would make efforts to manage my mental wellbeing.
       Treatment as usual group: I would make efforts to manage my mental wellbeing.
Perceived Usefulness
  PU1  Smartwatch group: I think that keeping track of my mood helped in managing my mental wellbeing.
       Treatment as usual group: I think that tracking my mood with the program helped in managing my mental wellbeing.
  PU2  Smartwatch group: I think that keeping track of my sleep and activity automatically helped in managing my mental wellbeing.
       Treatment as usual group: I think that tracking my lifestyle choices, such as sleep and physical activity, with the program helped in managing my mental wellbeing.
  PU3  Smartwatch group: Overall, I think that the watch app was useful in managing my mental wellbeing.
Perceived Ease of Use
  PEOU1  Smartwatch group: I think that it was easy to track my mood with the watch app.
         Treatment as usual group: I think that it was easy to track my mood in the program.
  PEOU2  Smartwatch group: I think that it was easy to track my sleep and physical activity with the watch app.
         Treatment as usual group: I think that it was easy to track my lifestyle choices in the program.
  PEOU3  Smartwatch group: My interaction with the watch app was clear and understandable.
  PEOU4  Smartwatch group: I think that the watch app was easy to use.
Attitude
  A1  Smartwatch group: I was comfortable recording my mood with the watch app.
      Treatment as usual group: I was comfortable recording my mood data with the program.
  A2  Smartwatch group: I was comfortable recording my sleep and activity data with the watch app.
      Treatment as usual group: I was comfortable recording my lifestyle choices, such as sleep and physical activity, with the program.
  A3  Smartwatch group: I was comfortable sharing my mood data with my SilverCloud Health supporter.
      Treatment as usual group: I was comfortable sharing my Mood Monitor with my SilverCloud Health supporter.
  A4  Smartwatch group: I was comfortable sharing my sleep and activity data with my SilverCloud Health supporter.
      Treatment as usual group: I was comfortable sharing my Lifestyle Choices chart with my SilverCloud Health supporter.
Behavioral Intention
  BI1  Smartwatch group: I would use the watch app again if I felt the need to monitor my mood.
       Treatment as usual group: I would use the Mood Monitor again if I felt the need to monitor my mood.
  BI2  Smartwatch group: I would use the watch app again if I felt the need to monitor my sleep and physical activity.
       Treatment as usual group: I would use the Lifestyle Choices chart again if I felt the need to monitor my sleep and physical activity.
Usage Behavior
  UB1  Smartwatch group: How would you describe your use of the watch app?
       Treatment as usual group: How would you describe your use of the Mood Monitor and Lifestyle Choices chart?

Table 10. AQ at 8 Weeks (T3)
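
Tables 8 through 10 define the AQ instrument: the same mediators are probed at Day 0, 3 weeks, and 8 weeks, with parallel wordings for the two study arms. Purely as an illustrative aid (this is not code from the study), the following Python sketch shows how per-mediator scores could be derived from one participant's item responses; the item-to-mediator mapping follows the tables above, while the 1-7 response scale and the example answers are invented assumptions.

    # Minimal sketch (illustrative only): aggregating AQ item responses
    # into one score per mediator. Scale and answers are hypothetical.
    from statistics import mean

    AQ_MEDIATORS = {
        "Perceived Threat": ["PT1", "PT2"],
        "Perceived Usefulness": ["PU1", "PU2", "PU3"],
        "Perceived Ease of Use": ["PEOU1", "PEOU2", "PEOU3", "PEOU4"],
        "Attitude": ["A1", "A2", "A3", "A4"],
        "Behavioral Intention": ["BI1"],
    }

    def mediator_scores(responses):
        """Average the available item responses (1-7) for each mediator."""
        scores = {}
        for mediator, items in AQ_MEDIATORS.items():
            answered = [responses[i] for i in items if i in responses]
            if answered:  # skip mediators with no answered items
                scores[mediator] = mean(answered)
        return scores

    # Hypothetical smartwatch-group responses at T2
    # (1 = strongly disagree, 7 = strongly agree).
    answers = {"PT1": 5, "PT2": 6, "PU1": 6, "PU2": 5, "PU3": 6,
               "PEOU1": 7, "PEOU2": 6, "PEOU3": 6, "PEOU4": 7,
               "A1": 5, "A2": 6, "A3": 4, "A4": 4, "BI1": 6}
    print(mediator_scores(answers))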

C OPEN-ENDED QUESTIONS

Table 11.
  Match with expectations: How was the experience of using the watch app?
  Engagement: How did the watch app impact how you used the Space from Depression intervention?
  Recommendation: If you would recommend (or not) the watch app to other people using the Space from Depression program, could you explain why?
  Sharing: How would you expect your supporter to use the information gathered through the app?
  Perceived privacy: How comfortable did you feel using the watch app in your daily life? When did you feel more/less comfortable using it?
  Resistance to change: If you felt reluctant (or keen) to use the watch app, could you explain why?
  Possible negative experience: If there were any negative aspects to your use of the watch app, could you describe these?
  Watch app features: How did you feel about the reminders to record your mood? How did you feel about the 'Tips to stay well' (accessible from the app menu)? How did you feel about the encouragement prompts? If you haven't encountered any, how would you have liked to be encouraged/rewarded while using the app?
  General feedback: What do you feel could be improved about the watch app?

Table 11. Open-Ended Questions for the Smartwatch Group at 8 Weeks (T3)

D COMPARISON OF SELF-REPORT USAGE

Fig. 17. Mood records by participant in the smartwatch and treatment as usual groups.
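
Figure 17 contrasts the number of mood records logged per participant across the two arms. A minimal sketch, assuming record counts have already been extracted per participant (the values below are invented, not study data), of how such a per-group summary might be computed:

    # Minimal sketch (invented counts): summarising per-participant
    # mood-record counts for each study arm, as visualised in Fig. 17.
    from statistics import median

    records = {
        "smartwatch": {"P01": 42, "P02": 17, "P03": 55, "P04": 23},
        "treatment as usual": {"P36": 12, "P37": 20, "P38": 8, "P39": 25},
    }

    for group, counts in records.items():
        values = sorted(counts.values())
        print(f"{group}: n={len(values)}, median={median(values)}, "
              f"range={values[0]}-{values[-1]}")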

ACKNOWLEDGMENTS

We warmly acknowledge the patients at Berkshire Healthcare NHS Foundation Trust who took part in this study: your contribution will help improve the experience of future mental health care patients. We are also extremely thankful to the psychological wellbeing practitioners for all the time and energy they gave to this research, particularly Principal Investigator Sarah Sollesse, Grace Jell, Samantha Morris-Watts, and Samantha Parker.

Footnotes

1. Accessible with a Force Touch, an interaction that Apple has since discontinued.
2. The smartwatch app exists in addition to the online Mood Monitor tool; therefore, users can log self-report entries using either a smartwatch, mobile, or desktop app.
3. The full list of prompts is available in the study protocol [66].
4. Patients with severe presentations of depression are not eligible for the program.
5. Technical support was available through the SilverCloud platform.
6. "Longitudinal data present information about what happened to a set of research units during a series of time points" [43].
7. The procedure involved contacting the supporters' emergency line if the patient was suspected to be at risk of harm.
8. We used a Bonferroni correction to allow for multiple comparison statements while maintaining an overall confidence coefficient.
9. Homogeneity of variance was not met by the data.
10. The data were not normally distributed; therefore, Mann-Whitney tests were used.
11. We used a Bonferroni correction to allow for multiple comparison statements while maintaining an overall confidence coefficient.
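
Footnotes 8 through 11 summarise the statistical safeguards behind the between-group comparisons. Purely as an illustration (this is not the authors' analysis code), the Python sketch below shows a two-sided Mann-Whitney U test evaluated against a Bonferroni-adjusted significance level; the sample values and the number of planned comparisons are hypothetical.

    # Minimal sketch (not the study's analysis code): a Mann-Whitney U
    # test, used when data are not normally distributed, judged against
    # a Bonferroni-corrected alpha to account for multiple comparisons.
    from scipy.stats import mannwhitneyu

    ALPHA = 0.05

    def compare_groups(smartwatch, treatment_as_usual, n_comparisons):
        stat, p = mannwhitneyu(smartwatch, treatment_as_usual,
                               alternative="two-sided")
        adjusted_alpha = ALPHA / n_comparisons  # Bonferroni correction
        return stat, p, p < adjusted_alpha

    # Hypothetical outcome values for the two arms, with three planned
    # comparisons overall.
    sw = [14, 22, 9, 31, 18, 27]
    tau = [11, 7, 15, 12, 10, 16]
    stat, p, significant = compare_groups(sw, tau, n_comparisons=3)
    print(f"U={stat:.1f}, p={p:.3f}, significant at alpha/3: {significant}")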

REFERENCES

[1] Abdullah Saeed, Matthews Mark, Frank Ellen, Doherty Gavin, Gay Geri, and Choudhury Tanzeem. 2016. Automatic detection of social rhythms in bipolar disorder. Journal of the American Medical Informatics Association 23, 3 (2016), 538–543.
[2] Ahmedani Brian K. 2011. Mental health stigma: Society, individuals, and the profession. Journal of Social Work Values and Ethics 8, 2 (2011), 4.1–4.16.
[3] Ajzen Icek. 1985. From intentions to actions: A theory of planned behavior. In Action Control. Springer, 11–39.
[4] Anastasiadou Dimitra, Folkvord Frans, Serrano-Troncoso Eduardo, and Lupiañez-Villanueva Francisco. 2018. Mobile health adoption in mental health: User experience of a mobile health app for patients with an eating disorder. JMIR mHealth and uHealth 7, 6 (2018), e12920.
[5] Bakker Jessie P., Goldsack Jennifer C., Clarke Michael, Coravos Andrea, Geoghegan Cynthia, Godfrey Alan, Heasley Matthew G., Karlin Daniel R., Manta Christine, Peterson Barry, Ramirez Ernesto, Sheth Nirav, Bruno Antonia, Bullis Emilia, Wareham Kirsten, Zimmerman Noah, Forrest Annemarie, and Wood William A. 2019. A systematic review of feasibility studies promoting the use of mobile technologies in clinical research. NPJ Digital Medicine 2, 1 (2019), 47.
[6] Bardram Jakob E., Frost Mads, Szántó Károly, Faurholt-Jepsen Maria, Vinberg Maj, and Kessing Lars Vedel. 2013. Designing mobile health technology for bipolar disorder: A field trial of the MONARCA system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2627–2636.
[7] Belisario José S. Marcano, Jamsek Jan, Huckvale Kit, O'Donoghue John, Morrison Cecily P., and Car Josip. 2015. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database of Systematic Reviews 7 (2015), MR000042.
[8] Beukenhorst Anna L., Sergeant Jamie C., Little Max A., McBeth John, and Dixon William G. 2018. Consumer smartwatches for collecting self-report and sensor data: App design and engagement. Studies in Health Technology and Informatics 247 (2018), 291–295.
[9] Bharadwaj Prashant, Pai Mallesh M., and Suziedelyte Agne. 2017. Mental health stigma. Economics Letters 159 (2017), 57–60.
[10] Bidargaddi Niranjan, Almirall Daniel, Murphy Susan, Nahum-Shani Inbal, Kovalcik Michael, Pituch Timothy, Maaieh Haitham, and Strecher Victor. 2018. To prompt or not to prompt? A microrandomized trial of time-varying push notifications to increase proximal engagement with a mobile health app. JMIR mHealth and uHealth 6, 11 (2018), e10123.
[11] Boukhechba Mehdi, Gong Jiaqi, Kowsari Kamran, Ameko Mawulolo K., Fua Karl, Chow Philip I., Huang Yu, Teachman Bethany A., and Barnes Laura E. 2018. Physiological changes over the course of cognitive bias modification for social anxiety. In Proceedings of the 2018 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI'18). IEEE, Los Alamitos, CA, 422–425.
[12] Bower Gordon H. 1981. Mood and memory. American Psychologist 36, 2 (1981), 129.
[13] Bowie-DaBreo Dionne, Sünram-Lea Sandra I., Sas Corina, and Iles-Smith Heather. 2020. Evaluation of treatment descriptions and alignment with clinical guidance of apps for depression on app stores: Systematic search and content analysis. JMIR Formative Research 4, 11 (2020), e14988.
[14] Bowman Robert, Nadal Camille, Morrissey Kellie, Thieme Anja, and Doherty Gavin. 2023. Using thematic analysis in healthcare HCI at CHI: A scoping review. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI'23). 1–18.
[15] Brittain K., Kamp K., Cassandras C., Salaysay Z., and Gómez-Márquez J. 2018. A mobile app to increase informed decisions about colorectal cancer screening among African American and Caucasian women: A pilot study. Gastroenterology Nursing 41, 4 (2018), 297–303.
[16] Bucci S., Barrowclough C., Ainsworth J., Machin M., Morris R., Berry K., Emsley R., Lewis S., Edge D., Buchan I., and Haddock Gillian. 2018. Actissist: Proof-of-concept trial of a theory-driven digital intervention for psychosis. Schizophrenia Bulletin 44, 5 (2018), 1070–1080.
[17] Burton Christopher, McKinstry Brian, Tătar Aurora Szentagotai, Serrano-Blanco Antoni, Pagliari Claudia, and Wolters Maria. 2013. Activity monitoring in patients with depression: A systematic review. Journal of Affective Disorders 145, 1 (2013), 21–28.
[18] Busch Andrew M., Ciccolo Joseph T., Puspitasari Ajeng J., Nosrat Sanaz, Whitworth James W., and Stults-Kolehmainen Matthew A. 2016. Preferences for exercise as a treatment for depression. Mental Health and Physical Activity 10 (2016), 68–72.
[19] Carter Michelle Clare, Burley Victoria Jane, Nykjaer Camilla, and Cade Janet Elizabeth. 2013. Adherence to a smartphone application for weight loss compared to website and paper diary: Pilot randomized controlled trial. Journal of Medical Internet Research 15, 4 (2013), e32.
[20] Cheung Man Lai, Chau Ka Yin, Lam Michael Huen Sum, Tse Gary, Ho Ka Yan, Flint Stuart W., Broom David R., Tso Ejoe Kar Ho, and Lee Ka Yiu. 2019. Examining consumers' adoption of wearable healthcare technology: The role of health attributes. International Journal of Environmental Research and Public Health 16, 13 (2019), 2257.
[21] Choe Eun Kyoung, Abdullah Saeed, Rabbi Mashfiqui, Thomaz Edison, Epstein Daniel A., Cordeiro Felicia, Kay Matthew, Abowd Gregory D., Choudhury Tanzeem, Fogarty James, Lee Bongshin, Matthews Mark, and Kientz Julie A. 2017. Semi-automated tracking: A balanced approach for self-monitoring applications. IEEE Pervasive Computing 16, 1 (2017), 74–84.
[22] Clarke Victoria and Braun Virginia. 2021. Thematic Analysis: A Practical Guide. SAGE Publications.
[23] Cole Casey Anne, Powers Shannon, Tomko Rachel L., Froeliger Brett, and Valafar Homayoun. 2021. Quantification of smoking characteristics using smartwatch technology: Pilot feasibility study of new technology. JMIR Formative Research 5, 2 (2021), e20464.
[24] Colombo Desirée, Fernández-Álvarez Javier, Suso-Ribera Carlos, Cipresso Pietro, Valev Hristo, Leufkens Tim, Sas Corina, Garcia-Palacios Azucena, Riva Giuseppe, and Botella Cristina. 2020. The need for change: Understanding emotion regulation antecedents and consequences using ecological momentary assessment. Emotion 20, 1 (2020), 30.
[25] Connelly Kay. 2007. On developing a technology acceptance model for pervasive computing. In Proceedings of the 9th International Conference on Ubiquitous Computing (UBICOMP'07) and Workshop of Ubiquitous System Evaluation (USE'07). 520.
[26] Cormack Francesca, McCue Maggie, Taptiklis Nick, Skirrow Caroline, Glazer Emilie, Panagopoulos Elli, Schaik Tempest A. van, Fehnert Ben, King James, and Barnett Jennifer H. 2019. Wearable technology for high-frequency cognitive and mood assessment in major depressive disorder: Longitudinal observational study. JMIR Mental Health 6, 11 (2019), e12814.
[27] Costanza-Chock Sasha. 2018. Design justice: Towards an intersectional feminist framework for design theory and practice. In Proceedings of the Design Research Society International Conference.
[28] Cousins Jennifer C., Whalen Diana J., Dahl Ronald E., Forbes Erika E., Olino Thomas M., Ryan Neal D., and Silk Jennifer S. 2011. The bidirectional association between daytime affect and nighttime sleep in youth with anxiety and depression. Journal of Pediatric Psychology 36, 9 (2011), 969–979.
[29] Davies William. 2017. How are we now? Real-time mood-monitoring as valuation. Journal of Cultural Economy 10, 1 (2017), 34–48.
[30] Davis Fred D. 1985. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Ph.D. Dissertation. Massachusetts Institute of Technology, Cambridge, MA.
[31] Davis Fred D., Bagozzi Richard P., and Warshaw Paul R. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Science 35, 8 (1989), 982–1003.
[32] Debard Glen, Witte Nele De, Sels Romy, Mertens Marc, Daele Tom Van, and Bonroy Bert. 2020. Making wearable technology available for mental healthcare through an online platform with stress detection algorithms: The Carewear project. Journal of Sensors 2020 (2020), 8846077.
[33] Dhagarra Devendra, Goswami Mohit, and Kumar Gopal. 2020. Impact of trust and privacy concerns on technology acceptance in healthcare: An Indian perspective. International Journal of Medical Informatics 141 (2020), 104164.
[34] Distler Verena, Lallemand Carine, and Bellet Thierry. 2018. Acceptability and acceptance of autonomous mobility on demand: The impact of an immersive experience. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–10.
[35] Dogan Ezgi, Sander Christian, Wagner Xenija, Hegerl Ulrich, and Kohls Elisabeth. 2017. Smartphone-based monitoring of objective and subjective data in affective disorders: Where are we and where are we going? Systematic review. Journal of Medical Internet Research 19, 7 (2017), e262.
[36] Doherty Kevin, Barry Marguerite, Marcano-Belisario José, Arnaud Bérenger, Morrison Cecily, Car Josip, and Doherty Gavin. 2018. A mobile app for the self-report of psychological well-being during pregnancy (BrightSelf): Qualitative design study. JMIR Mental Health 5, 4 (2018), e10007.
[37] Doherty Kevin and Doherty Gavin. 2018. The construal of experience in HCI: Understanding self-reports. International Journal of Human-Computer Studies 110 (2018), 63–74.
[38] Dou Kaili, Yu Ping, Deng Ning, Liu Fang, Guan YingPing, Li Zhenye, Ji Yumeng, Du Ningkai, Lu Xudong, and Duan Huilong. 2017. Patients' acceptance of smartphone health technology for chronic disease management: A theoretical model and empirical test. JMIR mHealth and uHealth 5, 12 (2017), e177.
[39] American Psychiatric Association. 2013. Diagnostic and Statistical Manual of Mental Disorders (5th ed.). American Psychiatric Association.
[40] Eisenhauer C., Hageman P., Rowland S., Becker B., Barnason S., and Pullen C. 2017. Acceptability of mHealth technology for self-monitoring eating and activity among rural men. Public Health Nursing 34, 2 (2017), 138–146.
[41] Faurholt-Jepsen Maria, Vinberg Maj, Christensen Ellen Margrethe, Frost Mads, Bardram Jakob, and Kessing Lars Vedel. 2013. Daily electronic self-monitoring of subjective and objective symptoms in bipolar disorder—The MONARCA trial protocol (MONitoring, treAtment and pRediCtion of bipolAr disorder episodes): A randomised controlled single-blind trial. BMJ Open 3, 7 (2013), e003353.
[42] Garces Giovanny Arbelaez, Rakotondranaivo Auguste, and Bonjour Eric. 2016. An acceptability estimation and analysis methodology based on Bayesian networks. International Journal of Industrial Ergonomics 53 (2016), 245–256.
[43] Gerken Jens. 2011. Longitudinal Research in Human-Computer Interaction. Ph.D. Dissertation. Fachbereich Informatik & Informationswissenschaft.
[44] Gordon Judith S., Armin Julie, Hingle Melanie D., Giacobbi Peter Jr., Cunningham James K., Johnson Thienne, Abbate Kristopher, Howe Carol L., and Roe Denise J. 2017. Development and evaluation of the See Me Smoke-Free multi-behavioral mHealth app for women smokers. Translational Behavioral Medicine 7, 2 (2017), 172–184.
[45] Harris Maurita T. and Rogers Wendy A. 2023. Developing a healthcare technology acceptance model (H-TAM) for older adults with hypertension. Ageing & Society 43, 4 (2023), 814–834.
[46] Hirschfeld R. M. A. 2000. Antidepressants in long-term therapy: A review of tricyclic antidepressants and selective serotonin reuptake inhibitors. Acta Psychiatrica Scandinavica 101 (2000), 35–38.
[47] Hsu Chien-Lung, Lee Ming-Ren, and Su Chien-Hui. 2013. The role of privacy protection in healthcare information systems adoption. Journal of Medical Systems 37, 5 (2013), 1–12.
[48] Isetta V., Torres M., González K., Ruiz C., Dalmases M., Embid C., Navajas D., Farré R., and Montserrat J. 2017. A new mHealth application to support treatment of sleep apnoea patients. Journal of Telemedicine and Telecare 23, 1 (2017), 14–18.
[49] Jacobson A., Vesely S., Haamid F., Christian-Rancy M., and O'Brien S. 2018. Mobile application vs paper pictorial blood assessment chart to track menses in young women: A randomized cross-over design. Journal of Pediatric and Adolescent Gynecology 31, 2 (2018), 84–88.
[50] Juarascio Adrienne S., Goldstein Stephanie P., Manasse Stephanie M., Forman Evan M., and Butryn Meghan L. 2015. Perceptions of the feasibility and acceptability of a smartphone application for the treatment of binge eating disorders: Qualitative feedback from a user population and clinicians. International Journal of Medical Informatics 84, 10 (2015), 808–816.
[51] Kahneman Daniel, Diener Edward, and Schwarz Norbert. 1999. Well-Being: Foundations of Hedonic Psychology. Russell Sage Foundation.
[52] Kim Jeongeun and Park Hyeoun-Ae. 2012. Development of a health information technology acceptance model using consumers' health behavior intention. Journal of Medical Internet Research 14, 5 (2012), e133.
[53] Lane Andrew M. and Terry Peter C. 2000. The nature of mood: Development of a conceptual model with a focus on depression. Journal of Applied Sport Psychology 12, 1 (2000), 16–33.
[54] Lau Nancy, Colt Susannah F., Waldbaum Shayna, O'Daffer Alison, Fladeboe Kaitlyn, Yi-Frazier Joyce P., McCauley Elizabeth, and Rosenberg Abby R. 2021. Telemental health for youth with chronic illnesses: Systematic review. JMIR Mental Health 8, 8 (2021), e30098.
[55] Korhonen Maija and Komulainen Katri. 2019. The moral orders of work and health: A case of sick leave due to burnout. Sociology of Health & Illness 41, 2 (2019), 219–233.
[56] Martin Nicolas, Erhel Séverine, Jamet Éric, and Rouxel Géraldine. 2015. What links between user experience and acceptability? In Proceedings of the 27th Conference on l'Interaction Homme-Machine. 1–6.
[57] Matthews Mark, Abdullah Saeed, Murnane Elizabeth, Voida Stephen, Choudhury Tanzeem, Gay Geri, and Frank Ellen. 2016. Development and evaluation of a smartphone-based measure of social rhythms for bipolar disorder. Assessment 23, 4 (2016), 472–483.
[58] McCallum Claire, Rooksby John, and Gray Cindy M. 2018. Evaluating the impact of physical activity apps and wearables: Interdisciplinary review. JMIR mHealth and uHealth 6, 3 (2018), e9054.
[59] O'Brien Kimberly H. McManama, LeCloux Mary, Ross Abigail, Gironda Christina, and Wharff Elizabeth A. 2017. A pilot study of the acceptability and usability of a smartphone application intervention for suicidal adolescents and their parents. Archives of Suicide Research 21, 2 (2017), 254–264.
[60] Mor Nilly and Haran Dafna. 2009. Cognitive-behavioral therapy for depression. Israel Journal of Psychiatry and Related Sciences 46, 4 (2009), 269.
[61] Morey-Nase Catherine, Phillips Lisa J., Bryce Shayden, Hetrick Sarah, Wright Andrea L., Caruana Emma, and Allott Kelly. 2019. Subjective experiences of neurocognitive functioning in young people with major depression. BMC Psychiatry 19, 1 (2019), 1–9.
[62] Motti Vivian Genaro. 2018. Smartwatch applications for mental health: A qualitative analysis of the users' perspectives. Poster presented at the 3rd Symposium on Computing and Mental Health.
[63] Motti Vivian Genaro. 2019. Assistive wearables: Opportunities and challenges. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers. 1040–1043.
[64] Mundt James C., Marks Isaac M., Shear M. Katherine, and Greist John M. 2002. The Work and Social Adjustment Scale: A simple measure of impairment in functioning. British Journal of Psychiatry 180, 5 (2002), 461–464.
[65] Nadal Camille. 2022. User Acceptance of Health and Mental Health Care Technologies. Ph.D. Dissertation. School of Computer Science & Statistics, Trinity College Dublin.
[66] Nadal Camille, Earley Caroline, Enrique Angel, Vigano Noemi, Sas Corina, Richards Derek, and Doherty Gavin. 2021. Integration of a smartwatch within an Internet-delivered intervention for depression: Protocol for a feasibility randomized controlled trial on acceptance. Contemporary Clinical Trials 103 (2021), 106323.
[67] Nadal Camille, McCully Shane, Doherty Kevin, Sas Corina, and Doherty Gavin. 2022. The TAC toolkit: Supporting design for user acceptance of health technologies from a macro-temporal perspective. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–18.
[68] Nadal Camille, Sas Corina, and Doherty Gavin. 2020. Technology acceptance in mobile health: Scoping review of definitions, models, and measurement. Journal of Medical Internet Research 22, 7 (2020), e17256.
[69] Naslund John A., Aschbrenner Kelly A., Barre Laura K., and Bartels Stephen J. 2015. Feasibility of popular m-Health technologies for activity tracking among individuals with serious mental illness. Telemedicine and e-Health 21, 3 (2015), 213–216.
[70] Nicholas Jennifer, Larsen Mark Erik, Proudfoot Judith, and Christensen Helen. 2015. Mobile apps for bipolar disorder: A systematic review of features and content quality. Journal of Medical Internet Research 17, 8 (2015), e198.
[71] Niendam T., Tully L., Iosif A., Kumar D., Nye K., Denton J., Zakskorn L., Fedechko T., and Pierce K. 2018. Enhancing early psychosis treatment using smartphone technology: A longitudinal feasibility and validity study. Journal of Psychiatric Research 96 (2018), 239–246.
[72] O'Brien J. T., Gallagher P., Stow D., Hammerla N., Ploetz T., Firbank M., Ladha C., Ladha K., Jackson D., McNaney Roisin, Ferrier N., and Olivier P. 2017. A study of wrist-worn activity measurement as a potential real-world biomarker for late-life depression. Psychological Medicine 47, 1 (2017), 93–102.
[73] Ometov Aleksandr, Shubina Viktoriia, Klus Lucie, Skibińska Justyna, Saafi Salwa, Pascacio Pavel, Flueratoru Laura, Gaibor Darwin Quezada, Chukhno Nadezhda, Chukhno Olga, Ali Asad, Channa Asma, Svertoka Ekaterina, Qaim Waleed Bin, Casanova-Marques Raul, Holcer Sylvia, Torres-Sospedra Joaquin, Casteleyn Sven, Ruggeri Giuseppe, Araniti Giuseppe, Burget Radim, Hosek Jiri, and Lohan Elena Simona. 2021. A survey on wearable technology: History, state-of-the-art and current challenges. Computer Networks 193 (2021), 108074.
[74] Patel Samir, Jacobus-Kantor Laura, Marshall Lorraine, Ritchie Clark, Kaplinski Michelle, Khurana Parvinder S., and Katz Richard J. 2013. Mobilizing Your Medications: An Automated Medication Reminder Application for Mobile Phones and Hypertension Medication Adherence in a High-Risk Urban Population. SAGE Publications.
[75] Perez Marco V., Mahaffey Kenneth W., Hedlin Haley, Rumsfeld John S., Garcia Ariadna, Ferris Todd, Balasubramanian Vidhya, Russo Andrea M., Rajmane Amol, Cheung Lauren, Hung Grace, Lee Justin, Kowey Peter, Talati Nisha, Nag Divya, Gummidipundi Santosh E., Beatty Alexis, Hills Mellanie True, Desai Sumbul, Granger Christopher B., Desai Manisha, and Turakhia Mintu P., for the Apple Heart Study Investigators. 2019. Large-scale assessment of a smartwatch to identify atrial fibrillation. New England Journal of Medicine 381, 20 (2019), 1909–1917.
[76] Piwek Lukasz, Ellis David A., Andrews Sally, and Joinson Adam. 2016. The rise of consumer health wearables: Promises and barriers. PLoS Medicine 13, 2 (2016), e1001953.
[77] Ponnada Aditya, Li Jixin, Wang Shirlene, Wang Wei-Lin, Do Bridgette, Dunton Genevieve F., and Intille Stephen S. 2022. Contextual biases in microinteraction ecological momentary assessment (μEMA) non-response. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 1 (2022), 1–24.
[78] Qu Chengcheng, Sas Corina, and Doherty Gavin. 2019. Exploring and designing for memory impairments in depression. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI'19). ACM, New York, NY, 1–15.
[79] Qu Chengcheng, Sas Corina, Roquet Claudia Daudén, and Doherty Gavin. 2020. Functionality of top-rated mobile apps for depression: Systematic search and evaluation. JMIR Mental Health 7, 1 (2020), e15321.
[80] Richards Derek, Enrique Angel, Eilert Nora, Franklin Matthew, Palacios Jorge, Duffy Daniel, Earley Caroline, Chapman Judith, Jell Grace, Sollesse Sarah, and Timulak Ladislav. 2020. A pragmatic randomized waitlist-controlled effectiveness and cost-effectiveness trial of digital interventions for depression and anxiety. NPJ Digital Medicine 3, 1 (2020), 1–10.
[81] Richards Derek, Murphy Treasa, Viganó Noemi, Timulak Ladislav, Doherty Gavin, Sharry John, and Hayes Claire. 2016. Acceptability, satisfaction and perceived efficacy of "Space from Depression," an internet-delivered treatment for depression. Internet Interventions 5 (2016), 12–22.
[82] Richards Derek and Timulak Ladislav. 2013. Satisfaction with therapist-delivered vs. self-administered online cognitive behavioural treatments for depression symptoms in college students. British Journal of Guidance & Counselling 41, 2 (2013), 193–207.
[83] Richards Derek, Timulak Ladislav, O'Brien Emma, Hayes Claire, Vigano Noemi, Sharry John, and Doherty Gavin. 2015. A randomized controlled trial of an Internet-delivered treatment: Its potential as a low-intensity community intervention for adults with symptoms of depression. Behaviour Research and Therapy 75 (2015), 20–31.
[84] Rizvi Shireen L., Hughes Christopher D., and Thomas Marget C. 2016. The DBT Coach mobile application as an adjunct to treatment for suicidal and self-injuring individuals with borderline personality disorder: A preliminary evaluation and challenges to client utilization. Psychological Services 13, 4 (2016), 380.
[85] Rodriguez-Villa Elena, Mehta Urvakhsh Meherwan, Naslund John, Tugnawat Deepak, Gupta Snehil, Thirtalli Jagadisha, Bhan Anant, Patel Vikram, Chand Prabhat Kumar, Rozatkar Abhijit, Keshavan Matcheri, and Torous John. 2021. Smartphone health assessment for relapse prevention (SHARP): A digital solution toward global mental health. BJPsych Open 7, 1 (2021), e29.
[86] Rogers Everett M. 1983. Diffusion of Innovations. Simon & Schuster.
[87] Sanches Pedro, Janson Axel, Karpashevich Pavel, Nadal Camille, Qu Chengcheng, Roquet Claudia Daudén, Umair Muhammad, Windlin Charles, Doherty Gavin, Höök Kristina, and Sas Corina. 2019. HCI and affective health: Taking stock of a decade of studies and charting future research directions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–17.
[88] Scharp Kristina M. and Thomas Lindsey J. 2017. "What would a loving mom do today?" Exploring the meaning of motherhood in stories of prenatal and postpartum depression. Journal of Family Communication 17, 4 (2017), 401–414.
[89] Schlosser D., Campellone T., Truong B., Anguera J., Vergani S., Vinogradov S., and Arean P. 2017. The feasibility, acceptability, and outcomes of PRIME-D: A novel mobile intervention treatment for depression. Depression and Anxiety 34, 6 (2017), 546–554.
[90] Schomakers Eva-Maria, Lidynia Chantal, and Ziefle Martina. 2019. Listen to my heart? How privacy concerns shape users' acceptance of e-Health technologies. In Proceedings of the 2019 International Conference on Wireless and Mobile Computing, Networking, and Communications (WiMob'19). IEEE, Los Alamitos, CA, 306–311.
[91] Schwartz Stefani, Schultz Summer, Reider Aubrey, and Saunders Erika F. H. 2016. Daily mood monitoring of symptoms using smartphones in bipolar disorder: A pilot study assessing the feasibility of ecological momentary assessment. Journal of Affective Disorders 191 (2016), 88–93.
[92] Sekhon Mandeep, Cartwright Martin, and Francis Jill J. 2017. Acceptability of healthcare interventions: An overview of reviews and development of a theoretical framework. BMC Health Services Research 17, 1 (2017), 1–13.
[93] National Health Service. 2011. The Improving Access to Psychological Therapies Data Handbook v2.0.1. National Health Service.
[94] SilverCloud Health Inc. 2021. SilverCloud Health | The Leading Digital Mental Health Platform. Retrieved September 3, 2023 from https://www.silvercloudhealth.com/uk
[95] Shiffman Saul. 2000. Real-time self-report of momentary states in the natural environment: Computerized ecological momentary assessment. In The Science of Self-Report: Implications for Research and Practice, A. A. Stone, J. S. Turkkan, C. A. Bachrach, J. B. Job, H. S. Kurtzman, and V. S. Cain (Eds.). Lawrence Erlbaum Associates, 277–296.
[96] Simmons Elizabeth Schoen, Paul Rhea, and Shic Frederick. 2016. Brief report: A mobile application to treat prosodic deficits in autism spectrum disorder and other communication impairments: A pilot study. Journal of Autism and Developmental Disorders 46, 1 (2016), 320–327.
[97] Skinner Andrew L., Stone Christopher J., Doughty Hazel, and Munafò Marcus R. 2019. StopWatch: The preliminary evaluation of a smartwatch-based system for passive detection of cigarette smoking. Nicotine and Tobacco Research 21, 2 (2019), 257–261.
[98] Smith Stephen M. and Petty Richard E. 1995. Personality moderators of mood congruency effects on cognition: The role of self-esteem and negative mood regulation. Journal of Personality and Social Psychology 68, 6 (1995), 1092.
[99] Solhan Marika B., Trull Timothy J., Jahng Seungmin, and Wood Phillip K. 2009. Clinical assessment of affective instability: Comparing EMA indices, questionnaire reports, and retrospective recall. Psychological Assessment 21, 3 (2009), 425.
[100] Somat A., Jamet E., Menguy G., Forzy J. F., and El-Jaafari M. 2012. Acceptabilité individuelle, sociale & acceptation. Livrable L5.3 du projet PARTAGE (ANR-08-VTT-012-01).
[101] Spector Sheldon L., Kinsman Robert, Mawhinney Helen, Siegel Sheldon C., Rachelefsky Gary S., Katz Roger M., and Rohr Albert S. 1986. Compliance of patients with asthma with an experimental aerosolized medication: Implications for controlled clinical trials. Journal of Allergy and Clinical Immunology 77, 1 (1986), 65–70.
[102] Stone Arthur A., Bachrach Christine A., Jobe Jared B., Kurtzman Howard S., and Cain Virginia S. 1999. The Science of Self-Report: Implications for Research and Practice. Psychology Press.
[103] Stone Arthur A. and Shiffman Saul. 1994. Ecological Momentary Assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine 16, 3 (1994), 199–202.
[104] Strongman Kenneth T. and Russell Paul N. 1986. Salience of emotion in recall. Bulletin of the Psychonomic Society 24, 1 (1986), 25–27.
[105] Sureshkumar K., Murthy G. V. S., Natarajan S., Naveen C., Goenka S., and Kuper H. 2016. Evaluation of the feasibility and acceptability of the 'Care for Stroke' intervention in India, a smartphone-enabled, carer-supported, educational intervention for management of disability following stroke. BMJ Open 6, 2 (2016), e009243.
[106] Taylor Shirley and Todd Peter A. 1995. Understanding information technology usage: A test of competing models. Information Systems Research 6, 2 (1995), 144–176.
[107] Terrade Florence, Pasquier Hélène, Reerinck-Boulanger Juliette, Guingouain Gérard, and Somat Alain. 2009. L'acceptabilité sociale: La prise en compte des déterminants sociaux dans l'analyse de l'acceptabilité des systèmes technologiques. Le Travail Humain 72, 4 (2009), 383–395.
[108] Terry Gareth, Hayfield Nikki, Clarke Victoria, and Braun Virginia. 2017. Thematic analysis. In The SAGE Handbook of Qualitative Research in Psychology. SAGE Publications, 17–37.
[109] Tison Geoffrey H., Sanchez José M., Ballinger Brandon, Singh Avesh, Olgin Jeffrey E., Pletcher Mark J., Vittinghoff Eric, Lee Emily S., Fan Shannon M., Gladstone Rachel A., Mikell Carlos, Sohoni Nimit, Hsieh Johnson, and Marcus Gregory M. 2018. Passive detection of atrial fibrillation using a commercially available smartwatch. JAMA Cardiology 3, 5 (2018), 409–416.
[110] Titov Nickolai, Dear Blake, Nielssen Olav, Staples Lauren, Hadjistavropoulos Heather, Nugent Marcie, Adlam Kelly, Nordgreen Tine, Bruvik Kristin Hogstad, Hovland Anders, Repal Arne, Mathiasen Kim, Kraepelien Marin, Blom Kerstin, Svanborg Cecilia, Lindefors Nils, and Kaldo Viktor. 2018. ICBT in routine care: A descriptive analysis of successful clinics in five countries. Internet Interventions 13 (2018), 108–115.
[111] Toledo Meynard John, Hekler Eric, Hollingshead Kevin, Epstein Dana, and Buman Matthew. 2017. Validation of a smartphone app for the assessment of sedentary and active behaviors. JMIR mHealth and uHealth 5, 8 (2017), e119.
[112] Trull Timothy J. and Ebner-Priemer Ulrich W. 2009. Using experience sampling methods/ecological momentary assessment (ESM/EMA) in clinical assessment and clinical research: Introduction to the special section. Psychological Assessment 21, 4 (2009), 457–462.
[113] Turakhia Mintu P., Desai Manisha, Hedlin Haley, Rajmane Amol, Talati Nisha, Ferris Todd, Desai Sumbul, Nag Divya, Patel Mithun, Kowey Peter, Rumsfeld John S., Russo Andrea M., Hills Mellanie True, Granger Christopher B., Mahaffey Kenneth W., and Perez Marco V. 2019. Rationale and design of a large-scale, app-based study to identify cardiac arrhythmias using a smartwatch: The Apple Heart Study. American Heart Journal 207 (2019), 66–75.
[114] Umair Muhammad, Chalabianloo Niaz, Sas Corina, and Ersoy Cem. 2021. HRV and stress: A mixed-methods approach for comparison of wearable heart rate sensors for biofeedback. IEEE Access 9 (2021), 14005–14024.
[115] Umair Muhammad, Sas Corina, and Alfaras Miquel. 2020. ThermoPixels: Toolkit for personalizing arousal-based interfaces through hybrid crafting. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS'20). ACM, New York, NY, 1017–1032.
[116] Umair Muhammad, Sas Corina, Chalabianloo Niaz, and Ersoy Cem. 2021. Exploring personalized vibrotactile and thermal patterns for affect regulation. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS'21). ACM, New York, NY, 891–906.
[117] Venkatesh Viswanath. 2000. Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research 11, 4 (2000), 342–365.
[118] Venkatesh Viswanath and Bala Hillol. 2008. Technology Acceptance Model 3 and a research agenda on interventions. Decision Sciences 39, 2 (2008), 273–315.
[119] Venkatesh Viswanath and Davis Fred D. 2000. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science 46, 2 (2000), 186–204.
[120] Venkatesh Viswanath, Morris Michael G., Davis Gordon B., and Davis Fred D. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly 27, 3 (2003), 425–478.
[121] Venkatesh Viswanath, Thong James Y. L., and Xu Xin. 2012. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly 36, 1 (2012), 157–178.
[122] Vincent P. M., Mahendran Nivedhitha, Nebhen Jamel, Deepa N., Srinivasan Kathiravan, and Hu Yuh-Chung. 2021. Performance assessment of certain machine learning models for predicting the major depressive disorder among IT professionals during pandemic times. Computational Intelligence and Neuroscience 2021 (2021), 9950332.
[123] Wang Rui, Chen Fanglin, Chen Zhenyu, Li Tianxing, Harari Gabriella, Tignor Stefanie, Zhou Xia, Ben-Zeev Dror, and Campbell Andrew T. 2014. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 3–14.
[124] Wichers M., Simons C. J. P., Kramer I. M. A., Hartmann Jessica A., Lothmann C., Myin-Germeys Inez, Bemmel A. L. Van, Peeters F., Delespaul Ph., and Os J. Van. 2011. Momentary assessment technology as a tool to help patients with depression help themselves. Acta Psychiatrica Scandinavica 124, 4 (2011), 262–272.
[125] Zhang Renwen, Ringland Kathryn E., Paan Melina, Mohr David C., and Reddy Madhu. 2021. Designing for emotional well-being: Integrating persuasion and customization into mental health technologies. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13.


Published in ACM Transactions on Computer-Human Interaction, Volume 31, Issue 1 (February 2024), 517 pages. ISSN 1073-0516; EISSN 1557-7325. Issue DOI: 10.1145/3613507. Issue editors: Kristina Höök and Kasper Hornbæk.

Copyright © 2023 held by the owner/author(s). This work is licensed under a Creative Commons Attribution International 4.0 License. Publisher: Association for Computing Machinery, New York, NY, United States.

Publication history: Received 11 January 2022; revised 10 May 2023; accepted 20 June 2023; published online as accepted manuscript 26 August 2023; published 29 November 2023.
