Smartphones for musculoskeletal research – hype or hope? Lessons from a decennium of mHealth studies

Abstract

Background

Smartphones provide opportunities for musculoskeletal research: they are integrated in participants’ daily lives and can be used to collect patient-reported outcomes as well as sensor data from large groups of people. As the field of research with smartphones and smartwatches matures, it has transpired that some of the advantages of this modern technology are in fact double-edged swords.

Body

In this narrative review, we illustrate the advantages of using smartphones for data collection with 18 studies from various musculoskeletal domains. We critically appraised existing literature, debunking some myths around the advantages of smartphones: the myth that smartphone studies automatically enable high engagement, that they reach more representative samples, that they cost little, and that sensor data is objective. We provide a nuanced view of evidence in these areas and discuss strategies to increase engagement, to reach representative samples, to reduce costs and to avoid potential sources of subjectivity in analysing sensor data.

Conclusion

If smartphone studies are designed without awareness of the challenges inherent to smartphone use, they may fail or may provide biased results. Keeping participants of smartphone studies engaged longitudinally is a major challenge. Based on prior research, we provide six actions researchers can take to increase engagement. Smartphone studies often have participants that are younger, have higher incomes and have high digital literacy. We provide advice for reaching more representative participant groups, and for ensuring that study conclusions are not plagued by bias resulting from unrepresentative sampling. Costs associated with app development and testing, data storage and analysis, and tech support are substantial, even if studies use a ‘bring your own device’ policy. Exchange of information on costs, collective app development and usage of open-source tools would help the musculoskeletal community reduce the costs of smartphone studies. In general, transparency and wider adoption of best practices would help bring smartphone studies to the next level. Then, the community can focus on specific challenges of smartphones in musculoskeletal contexts, such as symptom-related barriers to using smartphones for research, validating algorithms in patient populations with reduced functional ability, digitising validated questionnaires, and methods to reliably quantify pain, quality of life and fatigue.

Background

The worldwide impact of around 200 musculoskeletal conditions that affect joints, bones, muscles and soft tissues is large [1–4]. Many musculoskeletal conditions are characterized by pain, stiffness, fatigue and sleep disturbances [5]. These symptoms affect patients’ physical function and impede activities of daily living [4, 5]. Although the past century has seen advances in diagnostic accuracy and treatment options, people with musculoskeletal conditions still experience a high impact of disease, even if they achieve clinical remission [1–3].

To improve patients’ quality of life, a better understanding of fluctuations in disease state, symptoms and wellbeing over time is crucial, including in people in clinical remission. Smartphones could provide opportunities to better understand these fluctuations in patients’ everyday lives [6]. In addition, smartphones can enable monitoring, reporting of patient-reported outcomes, better patient-provider communication and personalized treatments [7].

The promise of smartphones in musculoskeletal research and clinical care has been widely recognized. Various studies and publications [6–24] have documented key benefits of smartphone data collection for both research and clinical care, namely that it:

  1. Offers opportunities to answer research questions that were difficult to investigate before.

  2. Allows recruitment and engagement of large groups of participants with a range of musculoskeletal conditions, increasing power to detect associations.

  3. Supports the relatively unobtrusive collection of real-time, high-frequency data for measuring exposures, outcomes and behaviour, ranging from daily to multiple times per millisecond.

  4. Can reduce the burden of providing patient-reported outcomes and sources of reporting bias.

  5. Can identify recurring patterns in the symptoms of individual participants, detect real-time deviations from this baseline and aggregate these to summarize populations.

  6. Can augment our understanding of patients’ lived experiences by taking more frequent measurements and/or combining key symptom domains, disease characteristics and behavioural patterns.

  7. Can support the development, and eventually delivery, of patient-centred and personalized care.

Smartphones and mobile health have been hailed as ‘the biggest technology breakthrough of our time’ with ‘the potential to change every aspect of the health care environment’ [25]. Despite success stories, smartphone studies have not delivered on all promises, and are certainly no panacea for all limitations of traditional data collection methods [18]. Researchers should recognize various novel challenges for data collection and analysis, as well as various sources of bias [25].

After a decade of smartphone research, the research community has gained a better understanding of opportunities and challenges in musculoskeletal research [7, 26–29]. At the same time, research groups and commercial partners continue developing bespoke apps, potentially ‘reinventing the wheel’. In addition, various pitfalls of smartphone studies have been discovered in other medical application domains, but are not necessarily known to the musculoskeletal research community. In this narrative review, we discuss smartphone studies in musculoskeletal research and outline the advantages of smartphones for data collection. We highlight common misconceptions about smartphones, link these to pitfalls of smartphone research, and discuss our hope for overcoming these pitfalls.

Of note, this paper’s scope is limited to the use of smartphones for musculoskeletal research. Some of the challenges and opportunities we discuss will be applicable to technology more widely (e.g. wearables, web portals or mobile health in general). However, we did not actively include literature on these types of technology in this article. Similarly, this article does not focus on the use of smartphones in the clinic to support shared decision-making, although some of the aspects we discuss generalize to that setting [6, 30].

Main text

Smartphones for musculoskeletal research

As smartphones are ubiquitous and typically carried by their users, they can be used for frequent, unobtrusive collection of active and passive data [31, 32]. Active data include all data types that require input from the participant, such as smartphone surveys, audio recordings, photos or ‘active tasks’ (e.g. tapping the screen as fast as possible).

Passive data include all data types that do not require participant action beyond installing a data collection app, such as sensor data and metadata [27]. Smartphones typically have the following in-built sensors: GPS receivers, accelerometers, gyroscopes, magnetometers, microphones, barometers, digital compasses, and proximity and light sensors [33–37]. These sensors can be used to measure user behaviour and environment, including factors such as:

  • exposure to pollution or weather (e.g. after linkage of individual-level location data to weather databases [17, 38]),

  • mobility (e.g. distance travelled, travel patterns, maximum distance from home, visits to health clinics [39]),

  • routines (diurnal movements, travel routines, out-of-home activities, interactions with other people),

  • physical activity (walking, standing, running, sitting, falling, mode of transportation, gait speed, intensity [36, 40]),

  • ambient environment (light, presence of people), and

  • sociability (locations visited, vicinity to other smartphone users).

Passive data collection may also refer to the accrual of metadata: digital information on how the device or app is used. Metadata includes timestamps for tasks such as survey completion and passive data collection, call and text logs (timing and duration/length of incoming and outgoing calls and texts), battery status and phone charging events [41]. Metadata can be used to validate data, for example by comparing the timing of self-report entries with the requested completion time. Beyond validation, metadata offers unique opportunities to collect information about participants’ behaviour, such as screen use, sociability or help-seeking (as measured by call and text logs), circadian rhythm (determined from the timing of activities throughout the day) and engagement with a digital intervention (as measured by app use) [27, 39].
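
As a toy illustration of this validation step, the sketch below (in Python, with entirely hypothetical timestamps and an assumed four-hour tolerance window) flags self-report entries whose submission timestamp lies far from the requested completion time, a pattern suggestive of backfilling:

```python
from datetime import datetime, timedelta

# Hypothetical (requested completion time, actual submission timestamp) pairs,
# as recoverable from app metadata.
entries = [
    (datetime(2023, 5, 1, 20, 0), datetime(2023, 5, 1, 20, 12)),  # on time
    (datetime(2023, 5, 2, 20, 0), datetime(2023, 5, 4, 9, 30)),   # likely backfilled
]

tolerance = timedelta(hours=4)  # assumed acceptable delay, a study-specific choice
for requested, submitted in entries:
    on_time = abs(submitted - requested) <= tolerance
    print(f"requested {requested:%d %b %H:%M}, "
          f"submitted {submitted:%d %b %H:%M}, on time: {on_time}")
```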

Table 1 shows 18 examples of studies in people with musculoskeletal conditions ranging from rheumatoid arthritis and osteoarthritis to juvenile idiopathic arthritis and systemic lupus erythematosus. We included 10 examples of observational studies and 8 examples of interventional studies, to showcase studies across a range of musculoskeletal conditions, collecting different types of outcome metrics. Of note, for this narrative review we did not perform a systematic search of app stores or the academic literature; we merely collected examples that showcase the advantages and pitfalls of smartphone research well. A systematic review is a more suitable method to identify smartphone apps or smartphone studies and appraise their quality. We refer interested readers to systematic reviews published elsewhere, including reviews (1) rating the quality of 19 apps to monitor rheumatoid arthritis [50], (2) describing one study using a smartphone app [51], (3) identifying 4 apps to facilitate physical activity in people with rheumatoid arthritis [52], (4) identifying 20 apps for people with systemic lupus erythematosus [28], (5) describing 61 apps on self-managing low back pain [53], and (6) providing an overview of the development of 32 apps for self-management of musculoskeletal diseases [54].

Table 1 Scoping review of smartphone studies, by year of publication and then by alphabetical order of first author name

The intervention studies in Table 1 focused on improving self-management [9, 10, 15, 48, 49], or targeted more specific outcomes including mobility [16], hand function [14], fatigue [11], pain [49], physical function [49], or serum urate levels in a gout population [8].

The observational studies often examined environmental exposures or behaviour, such as weather [17] or exercise [22, 43], and investigated pain location and intensity [22, 42, 43, 55], functional ability [46, 47], disease flares [24], or disease activity and its impact [6]. Various studies were pilot or feasibility studies [22, 43, 46–48].

Ten studies asked participants to perform “active tasks”: predefined activities that provide insight into physical functioning, cognition or motor skills. The active tasks used in these papers included a wrist motion test [46], yoga and cognitive behavioural therapy exercises [48], obtaining and entering serum urate levels [8], a walking exercise [15, 46], performing behavioural interventions chosen by a health coach [10, 14, 16], sharing experiences by answering questions from other participants [9], and active tasks chosen by the participants themselves [11–13]. One study asked patients to upload a photo of their hand once a month, which was used to monitor inflammation and deformity [49]. Five studies showcase how data from smartphone apps can be combined with data from a wearable: four projects used the wearable for sensor data collection only [15, 16, 21, 24] and one for both sensor data and patient-reported outcomes [22, 43].

These studies demonstrate some of the advantages of using smartphones for data collection. First, frequent measurements increase the power of studies to detect associations. Four studies recruited hundreds of participants, showcasing the advantage of remote recruitment and the low marginal effort per newly recruited participant [12, 13, 17, 21, 44, 46, 47]. The included studies were able to collect data frequently for sustained periods of time, ranging from 30 days to 15 months. In addition, high-frequency longitudinal data provided insights into day-to-day fluctuations in symptoms and their impact on activities of daily living and quality of life [6]. This enabled, for example, the determination of thresholds for flares in axial spondyloarthritis [24].

Moreover, smartphones enable more detailed patient-reported outcomes. One study collected patient-reported outcomes to guide subsequent visits to a rheumatologist [6]. Three studies used digital pain manikins to enable participants to locate pain and specify both its extent and its intensity [42, 46, 47]. Previously, musculoskeletal researchers highlighted the simplicity of existing pain trackers for smartphones, and the need for more rigorously tested apps that record location-specific pain aspects and convert data into quantitative scores for pain extent and intensity [29, 56].

For intervention studies, the integration of smartphones into participants’ daily lives provided an opportunity to influence behaviour by sending unobtrusive, context-aware messages at the right place and the right time. This was used in various studies aimed at improving function, mobility, self-management or coping behaviour [8–11, 14–16].

Second, smartphones offered the opportunity to collect passive data on real-life behaviour, exercise regimens and environmental exposures via sensors [17, 27, 31]. Sensor data is free from the biases associated with self-reported data, such as social desirability bias and recall bias [57]. The reviewed studies used sensor data to determine the locations where participants were exposed to the weather [17], or to quantify their mobility and physical functioning [15, 22].

Third, participants used their own device for data collection and could even enrol in studies online. In contrast to paper-based diaries or research-grade wearables, data transfer did not require returning diaries or devices to trained research staff. Instead, data were transferred automatically, using WiFi or a cellular connection. This allowed for recruitment of large sample sizes [17], large sample sizes relative to the prevalence of a musculoskeletal condition [10], or collection of large quantities of data [11–13, 24, 43].

The hype, the reality and the hope

Enthusiasm for using smartphones as tools for data collection has skyrocketed (Fig. 1). Given the tendency to focus only on the advantages of smartphone studies, it may even border on hype. Many of the characteristics of smartphones are in fact double-edged swords: they provide advantages compared to traditional data collection methods, but also impose new challenges. For appropriate use of smartphones and maximization of their benefits, researchers need to be aware of and account for these new challenges. We will therefore present common myths, provide a more nuanced overview of both the upsides and downsides of smartphone studies and outline what further steps are required to harness the upsides while handling the downsides.

Fig. 1

Between 2011 and 2021, the number of PubMed-indexed publications matching the search terms ‘musculoskeletal’ and ‘smartphone’ increased sharply, from 0 in 2011 to 105 in 2021. In the same period, publications matching the search term ‘musculoskeletal’ alone increased gradually, with drops in 2016 and 2021

Hype: smartphone studies enable high engagement

Smartphones are often used to collect self-reported data and patient-reported outcomes at a larger scale. Compared to, for example, paper-based diaries, smartphone studies make it easier to recruit large sample sizes and collect more data, more frequently. Traditionally, self-reported data were collected either multiple times an hour over a period of days or, at the other end of the scale, daily for months up to a year [58]. Paper-based diaries typically require more effort to complete, which increases attrition. Studies using paper-based diaries often face low compliance: participants lose diaries, forget to send them back, forget to complete them, hoard and backfill paper-based surveys, or misdate survey entries [57, 59]. Hoarding, backfilling and misdating can influence study validity enormously: one study showed apparent compliance rates of 90% (based on participant-reported completion), while a hidden sensor showed actual compliance was 11% [60].

Compared to paper-based diaries, smartphones have a lower burden of data entry. In addition, they provide possibilities to increase compliance and validate it: they automatically timestamp and upload (or “send back”) data entries. However, high engagement, high compliance and low attrition are by no means guaranteed in smartphone studies [20, 61]. Smartphone researchers need to be aware of the challenges to engagement, attrition and compliance in smartphone studies and account for these at study design and during data analysis.

Reality: high engagement requires substantial effort and attrition is a significant threat to smartphone study validity

Although the burden of data entry on smartphones is regarded as low, this does not mean that engagement is automatically high in smartphone studies. One challenge for smartphone studies is that no universal definition of engagement exists. Engagement may refer to days of self-reported data entry, days of passive data collection (any passive data, or only above a threshold), duration of usage (i.e. time between first and last assessment), proportion of days with self-reported data entry, or more complex definitions including likelihood of data completion over time, thresholds for a minimal amount of data collected or, for interventions, the ‘usage half-life’ [20, 43, 61–64]. This variety of definitions illustrates the complexity of describing smartphone data, which is typically frequent and includes numerous data streams.
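
To make these definitions concrete, the minimal sketch below (Python/pandas, using made-up data and an assumed 30-day follow-up) computes three of them from a daily survey log: days with data entry, duration of usage, and proportion of days with an entry. Note how the same log yields different engagement values depending on the definition chosen:

```python
import pandas as pd

# Hypothetical log: one row per completed daily survey per participant.
log = pd.DataFrame({
    "participant": ["A"] * 5 + ["B"] * 2,
    "date": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-05",
                            "2023-01-06", "2023-01-30",
                            "2023-01-01", "2023-01-03"]),
})
study_days = 30  # assumed nominal follow-up length

summary = log.groupby("participant")["date"].agg(
    days_with_entry="nunique",
    duration_of_usage=lambda d: (d.max() - d.min()).days + 1,  # first to last assessment
)
summary["proportion_of_days"] = summary["days_with_entry"] / study_days
print(summary)
```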

Regardless of the definition used, many studies have shown that missing data due to attrition and low engagement are common, even characteristic of smartphone studies and smartphone interventions [61, 62, 65, 66]. Various reasons have been named for this so-called ‘law of attrition’ [61] in smartphone studies, including competing priorities, ease of dropping out, user experience and usability issues [43, 61, 66]. This illustrates the double-edged sword: although smartphones are unobtrusive and integrated into participants’ daily lives, research apps compete for attention with all the other apps and activities of daily life.

High attrition and low engagement threaten both the internal and external validity of studies. Study results may not generalize to participants who drop out early if these dropouts systematically differ from those who stay engaged. Various studies have shown that demographic characteristics, such as age, sex and socio-economic status, are associated with the risk of drop-out [62, 67, 68]. As these characteristics are also associated with disease, disease severity, comorbidities and other outcomes, differential drop-out may introduce attrition bias. Especially in musculoskeletal diseases, participants can face additional barriers when using smartphones. For example, previous studies found that people with rheumatoid arthritis faced dexterity or data entry issues when using a smartphone research app [17, 20, 21, 69].

Hope: researchers design smartphone studies to promote engagement, reduce attrition and report attrition rates

Researchers should be aware of the challenges of engagement in smartphone studies and can use various strategies to improve engagement and reduce the risk of attrition bias. Previous studies have provided advice for better study designs and data analyses. We have grouped these into four areas relevant for all smartphone studies, and two areas relevant for interventional studies specifically.

First, researchers can increase the usability of the technology. Pilot or feasibility studies in the target population can be useful to improve these aspects of the smartphone app or study design [21, 22, 32, 69, 70]. In healthy populations, high daily smartphone use has a weak relationship with lower hand-grip and pinch-grip strength [71, 72]. In people with musculoskeletal symptoms and pain, especially in their hands, it is therefore important to assess their ability to navigate research apps and report symptoms. To increase engagement, researchers should integrate patients’ perspectives and needs into their study designs [7]. In addition, it can be useful to draw on the expertise of healthcare providers, family members (especially in the case of juvenile idiopathic arthritis [70]) and information technology experts [73]. Upon launch of the study, researchers can monitor data collection to diagnose app or device problems early [74] and contact participants to prevent attrition [20].

Second, researchers can reduce active data collection, which requires the participant to enter data or complete an active task, and increase sensor data collection. Studies show that sensor data is often collected for longer periods than active data like surveys, as it places less burden on the participants [43, 75].

Third, researchers can design their study to promote engagement. They can introduce motivating factors (such as rewards or visualisations of inputted data), use both in-built reminders and targeted reminders (e.g. in the case of likely drop-out), create a study community (e.g. hosted on social media sites), provide personal support from a named member of study personnel and, if resources allow, actively target personal motivators [20]. In addition, they can provide overviews of participants’ self-reported symptoms to share with a clinician, or other forms of feedback, which have been shown to increase participant interest [43, 68]. A currently ongoing randomized clinical trial is evaluating whether skipping clinic visits when smartphone data show low disease activity is beneficial to patients [73].

Fourth, researchers should investigate engagement, attrition and their consequences when analysing their data. Researchers should not gloss over attrition, but report engagement and attrition metrics and show Kaplan-Meier curves of retention (more information in [61]).
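
As a minimal sketch of such reporting, the example below uses the open-source lifelines package on made-up follow-up data to plot a Kaplan-Meier curve of participant retention; participants still contributing data at study end are treated as censored:

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Hypothetical follow-up: days until a participant's last data entry, and
# whether they dropped out (1) or were still active at study end (0, censored).
days_in_study = [3, 7, 7, 14, 30, 30, 45, 60, 90, 90]
dropped_out   = [1, 1, 1, 1,  1,  0,  1,  0,  0,  0]

kmf = KaplanMeierFitter()
kmf.fit(days_in_study, event_observed=dropped_out, label="retention")
kmf.plot_survival_function()
plt.xlabel("Days since enrolment")
plt.ylabel("Proportion of participants still engaged")
plt.show()

# Days by which half the cohort had dropped out.
print(kmf.median_survival_time_)
```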

Fifth, interventional studies can use recommendations from previous interventional smartphone studies [32, 66]. These include, for example, recommendations on pilot studies, on avoiding one-size-fits-all approaches, and on reducing participants’ self-care burden. As it is not within our scope to provide an exhaustive overview of every recommendation, we refer readers to those source publications.

Sixth, interventional studies can use the ‘run-in and withdrawal design’, where participants are only randomized if they are still active after a weed-out period [61]. Many interventional smartphone studies face exponential dropout early in the study. If randomization is postponed until after this phase, the intervention and control groups are less likely to contain early dropouts. If the randomized participants stay engaged for longer, the study will have higher power and may provide a more realistic image of the intervention. Of note, results of studies using the run-in and withdrawal design may only apply to the ‘hardcore users’, i.e. participants who stayed engaged, and may not generalize to the participants who dropped out early and were never randomized.

Hype: smartphone studies reach a more representative sample

Smartphones have often been hailed as an opportunity to reach a more diverse sample. Smartphone usage is widespread: in the UK it is 98% among people aged 16 to 44 years, 95% among those aged 45 to 54 and 87% among those aged 55 to 64 [76]. The ability to enrol participants and collect data remotely removes geographic barriers to participation [68].

Indeed, some smartphone studies have succeeded in recruiting participant groups that are traditionally harder to reach, including younger, more active participants, people with mental illnesses, people living in rural locations and people with severe progressive diseases [77–79]. In addition, smartphones have proven useful for data collection during the COVID-19 pandemic, when clinical data collection halted [80]. Furthermore, previous smartphone studies have recruited large numbers of participants remotely - in the order of magnitude of thousands of participants - by instructing them to download an app and register online [17, 44, 45, 65].

Reality: smartphone studies may not succeed in recruiting representative participant groups and impose barriers to participation

When people are recruited for smartphone studies in clinic, ‘smartphone ownership’ or ‘smartphone usage’ is often an inclusion criterion. When people are enrolled remotely, ‘digital literacy’ is often an implicit prerequisite: only people who succeed in downloading an app and registering will succeed in participating in the study.

These inclusion criteria or prerequisites can influence the representativeness of the final study sample. Although smartphones are used widely, their use is less prevalent in certain demographic groups, and in certain parts of the world. Seldom-heard groups may include underprivileged groups [67], people with low socio-economic status [26], people with low literacy skills or low digital literacy [26, 66] and older people [62, 77]. In addition, if participants are recruited remotely, it may be more difficult to validate that participants belong to the target population beyond self-report [20].

The risk of selection bias is especially large if researchers develop an app for one operating system only. The operating systems Android (Google) and iOS (Apple) each cover around half of the smartphone market (Android being used slightly more in low-income countries, and iOS slightly more in high-income countries) [81]. Apps that are developed for one operating system only (such as apps using the Apple ResearchKit [77]) will not reach the other half of the population. The two halves are not interchangeable: on average, the annual income of iOS users is almost 50% higher than that of Android users [27].

Hope: researchers should refrain from generalising their results, and undertake active steps to increase representation of seldom-heard groups

With smartphone studies, it is even more important to limit inferences to the sample population, rather than extrapolating findings to larger groups. Some limitations to generalizability are intuitive (e.g. absence of the oldest age brackets due to lack of smartphone penetration), but others may be less obvious (e.g. selection of a specific income group because of the choice of operating system). Researchers should take active steps to reach people from underprivileged groups [67]. In the context of musculoskeletal diseases, this may include people of low socio-economic status and people with multiple morbidities.

Hype: sensor data is objective

Passive data collection can provide frequent and granular information on exposures, outcomes and covariates (including time-varying confounders), and on day-to-day participant behaviour and exposure [17, 33–39, 82–85]. Smartphone sensors increase the range of domains that can be measured in participants’ daily lives, including aspects like time spent at home, minute-to-minute weather exposures and sleep [37, 86, 87]. In addition, smartphone sensors can improve the accuracy of reporting: passive data is not plagued by the information biases of self-reported data, such as social desirability bias (e.g. people under-reporting drug use or sedentary time [88, 89]) or recall bias (e.g. people misremembering past exposures). It is therefore often called ‘objective’ and hailed as a gold standard for measuring various constructs.

Reality: sensor data and any metrics derived from it are subject to researchers’ choices and can still be biased

However, converting sensor data into summaries of exposures, outcomes and behaviours is still subject to choices made during data analysis. For example, algorithms for human activity recognition require a multitude of choices during pre-processing (dividing time series into smaller epochs; the number and type of features extracted from the raw data; the statistical method chosen for activity classification) [90]. A review of smartphone studies using sensor data showed that it can be difficult to assess the ground truth for sensor data, and that some studies found little relation between sensor data and validation measures [37]. Furthermore, many algorithms are developed and tested in a laboratory, and generalize poorly to free-living settings [90]. When these algorithms are applied to real-life data, results may be far from accurate or objective. Algorithms from commercial devices are known to introduce substantial bias when applied in free-living settings (especially if the population of interest differs from the testing population, which often comprises the specific demographic of thirty-year-old healthy males) [91].
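
The toy sketch below (Python, on simulated data) makes these three choices explicit for an activity classification pipeline; the epoch length, feature set and classifier shown are arbitrary assumptions, and each could plausibly change the resulting activity metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated tri-axial accelerometer signal at 50 Hz (rows: samples; columns: x, y, z).
fs = 50
rng = np.random.default_rng(0)
signal = rng.normal(size=(fs * 60, 3))  # one minute of data

# Choice 1: epoch length (here, 2-second non-overlapping windows).
window = 2 * fs
epochs = signal[: len(signal) // window * window].reshape(-1, window, 3)

# Choice 2: features per epoch (here, per-axis mean and SD plus mean vector magnitude).
magnitude = np.linalg.norm(epochs, axis=2)
features = np.column_stack([
    epochs.mean(axis=1),                    # per-axis means
    epochs.std(axis=1),                     # per-axis standard deviations
    magnitude.mean(axis=1, keepdims=True),  # mean acceleration magnitude
])

# Choice 3: the classifier (here, a random forest; the labels are placeholders,
# e.g. 0 = sitting, 1 = walking, normally obtained from annotated training data).
labels = rng.integers(0, 2, size=len(epochs))
clf = RandomForestClassifier(random_state=0).fit(features, labels)
print(clf.predict(features[:5]))
```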

Second, the raw sensor data underlying these analyses can be subject to different biases. In ‘bring-your-own-device’ studies, participants use their personal smartphones, which may differ in brand, model and software, potentially with defects to screens or sensors [32, 92]. The software used can, for example, influence the amount of missing sensor data [93]. In addition, smartphones often show heterogeneity in accelerometer and gyroscope data that is large enough to potentially influence results [94]. Of note, these technological factors change over time and are difficult to influence [95]. In addition, they can differ both between participants and within a participant over time.

Third, participant behaviour can influence the amount of data collected, the quality of data and the accuracy of summary metrics. For example, accelerometry-based markers of physical activity depend on the position of the smartphone. Algorithms that assume that a smartphone is pocket-worn do not perform as well on data from bag-worn smartphones [96]. Currently, it is still difficult to ascertain whether a smartphone is carried in a position appropriate for the event being sensed [36, 96]. As there are systematic differences in phone use between participants - women tend to carry their phone in a bag whereas men carry it in their pocket [97] - this may cause misclassification or bias.

Fourth, conversion of sensor data into meaningful summary statistics may introduce specific biases in the musculoskeletal context. Studies have shown that algorithms that perform well in healthy volunteers often do not perform well in people with musculoskeletal conditions, who tend to walk more slowly and have different gait characteristics [98, 99].

Fifth, sensor data may not always be available. For example, location data is not available if participants switch their phones off, and may not be available if the phones are indoors or out of battery [27, 78, 93, 100]. In addition, data may be missing by design: for data streams that are battery-intensive, it is not feasible to collect data continuously [22, 27, 37, 69]. Missing sensor data is often not missing at random, and may be related to the participants’ exposure status or outcome status [63].

Hope: the analysis of sensor data will be increasingly transparent, validated in all patient groups and include uncertainty quantification

Researchers should be transparent about the choices they make during the processing of sensor data, for example by sharing source code [90]. Proprietary apps and algorithms are less suitable for academic research, which requires providing the details needed for reproducibility. Algorithms should be validated against gold standards, in patient groups as well as in healthy volunteers. In some cases, researchers might do well to use a research-grade wearable rather than a smartphone for sensor data collection, for example if the former performs better in people with musculoskeletal conditions.

To improve the accuracy of mobility metrics, researchers could collect location data alongside accelerometer data, although this comes at a cost to privacy [41, 75]. Some research smartphone apps, however, provide the possibility to (a) record distances rather than locations, or (b) add Gaussian noise to location data to de-identify it, providing higher accuracy while preserving privacy [75].
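
A minimal sketch of option (b), assuming the noise is added on the device before upload (the noise scale sigma_m is an arbitrary, study-specific choice):

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter_location(lat, lon, sigma_m=100):
    """Add zero-mean Gaussian noise with standard deviation sigma_m (in metres)
    to a latitude/longitude pair, as a simple de-identification step."""
    metres_per_degree_lat = 111_320  # approximate, near-constant
    metres_per_degree_lon = 111_320 * np.cos(np.radians(lat))  # shrinks towards poles
    noisy_lat = lat + rng.normal(0, sigma_m) / metres_per_degree_lat
    noisy_lon = lon + rng.normal(0, sigma_m) / metres_per_degree_lon
    return noisy_lat, noisy_lon

print(jitter_location(53.48, -2.24))  # e.g. a point in Manchester
```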

Furthermore, research efforts should be directed towards algorithms for uncertainty quantification. Any point estimate of someone's step count is unlikely to be the exact truth (‘you took 8721 steps today’). Algorithms should therefore provide a 95% confidence interval, ideally accounting for the amount of missing data and the position of the smartphone on the participant's body [90]. For participants with complete data and a body-worn smartphone, this confidence interval would be much narrower than for participants with high amounts of missing data, or whose phone was lying on a desk. If accuracy and validity are insufficient, researchers should consider using body-worn devices, such as wearable activity trackers [101] or smartwatches [22].
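
No standard algorithm for this exists yet; purely as an illustration, the sketch below bootstraps an interval for a daily step count from hypothetical hourly data, under the strong (and often unrealistic) assumptions that hours are missing completely at random and that observed hours can be rescaled to a 24-hour day. The interval widens as data coverage drops:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly step counts; NaN marks hours without sensor data.
hourly_steps = np.array([0, 0, 0, 0, 0, 120, 540, 800, np.nan, np.nan,
                         300, 650, np.nan, 200, 400, 900, 750, np.nan,
                         350, 150, 60, 0, 0, 0], dtype=float)

observed = hourly_steps[~np.isnan(hourly_steps)]
coverage = len(observed) / len(hourly_steps)  # fraction of the day with data

# Bootstrap a 95% interval for the daily total, rescaling observed hours to 24 h.
totals = [observed[rng.integers(0, len(observed), len(observed))].sum() / coverage
          for _ in range(5000)]
low, high = np.percentile(totals, [2.5, 97.5])
point = observed.sum() / coverage
print(f"estimated daily steps: {point:.0f} (95% CI {low:.0f}-{high:.0f}), "
      f"from {coverage:.0%} data coverage")
```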

Hype: smartphone research is cheap

Smartphone studies are often argued to be a cheap option for data collection, as they enable remote enrolment and remote data collection, and participants can use their own device and phone subscription or WiFi network. These bring-your-own-device studies can remove the need for enrolment events and clinic visits for data collection [22, 35]. This reduces staffing costs during recruitment and data collection, as well as costs for consumables and physical storage space. Off-the-shelf smartphone apps can be tweaked to specific studies for as little as £1000 to £30,000; other apps are freely available under an open-source license [27, 77]. Once an app has been developed, the marginal costs of enrolling an additional 1000, 10,000 or even 100,000 participants are low [36, 77, 102]. As a result, the marginal costs of using a smartphone app for data collection may be lower than those of paper-based data collection [58].

Reality: smartphone research can be expensive

Although smartphone studies reduce some costs compared to traditional studies, they create new, often hidden costs. First, app development is not necessarily cheap. Development of a bespoke research app requires collaboration with software engineers and UX designers. It can be time-consuming, since app development usually entails an iterative approach, co-development with patients and careful piloting in feasibility studies [18, 32, 54, 69, 103]. Feasibility studies are essential to determine whether the target population finds the app easy to use, to balance data collection needs with preservation of the battery life of participants’ devices, and to determine the optimal frequency of active and passive data collection [32, 69, 79]. Modification of off-the-shelf apps or platforms can be simpler, although these may come with licensing fees.

Second, the use of smartphone apps may require research budget for data storage [27, 32, 77], licenses for data analysis software and, especially if high-volume sensor data is collected, computing infrastructure [27].

Third, smartphone studies require maintenance and support. Smartphone models and smartphone software (i.e. the operating systems Android and iOS) are frequently updated, and these updates can delay or block data collection [27, 32]. Costs associated with such updates can be substantial, for example in the range of $100,000 per year. In addition, technical support may be required when participants face problems with the app.

Hope: smartphone studies will be cost-effective and efficient

It would be helpful if the musculoskeletal research community could share expected and unexpected costs associated with smartphone studies. Cost comparisons between studies enable researchers to identify what types of apps and infrastructures fit their budget. If researchers report costs per expense (e.g. app development, maintenance, storage, analysis; separating costs of staff time from work by external parties from costs of hardware or software) as well as marginal costs per participant, the research community can identify sources of variability in costs, as well as areas where improvement would lead to the highest reduction in costs.

Second, we believe that smartphone studies would be more efficient if they could use shared platforms. Examples of app platforms suitable for research are Beiwe (open source/non-commercial [27, 92]), uMotif [17, 21, 42] and Apple ResearchKit [68, 77]. Re-use of high-quality platforms will prevent researchers from reinventing the wheel, and from developing apps that are unsafe, insecure or poorly engineered [103]. Open-source software should be of particular interest to the research community: open-source apps can be reviewed by an unlimited base of software engineers and improve the reproducibility and transparency of studies [27, 68]. In addition, researchers can develop extensions or new features and contribute the code so that other groups can use those extensions too.

Third, research apps could provide more utility when clinically implemented and linked with electronic medical records [104]. Data from research apps could transform consultations for clinician and patient benefit and aid shared decision-making [6, 105]. Furthermore, such applications of research apps could open doors to new funding opportunities. However, there are few published efforts on efficacy, effectiveness and feasibility [105]. Integration of research data into the electronic medical record also requires overcoming various barriers, including issues around sharing, privacy and governance [18, 26].

Conclusions

Without doubt, smartphone studies represent an exciting and rapidly growing area of research. In this article, we provided insight into the plethora of benefits around the size, scale and frequency of data collection. In musculoskeletal research, smartphones provide special benefits, as the target group faces chronic conditions (increasing the importance of long-term data collection), characterised by symptoms that affect mobility and physical activity (potentially easier to measure with smartphones) as well as a range of patient-reported outcomes (self-reported more frequently at lower burden to the participant). We discussed various studies that showcased the unique benefits of smartphones in the musculoskeletal context.

However, despite these substantial and exciting benefits, smartphone studies are not free from challenges and do not solve all challenges. If smartphone studies are designed without awareness of the challenges inherent to smartphone use, they may fail or may provide biased results. In this article, we therefore reviewed the known limitations of smartphones and provided lessons for future smartphone studies.

We showed that achieving high engagement of participants may be a challenge, as may recruiting representative samples, including less privileged people or people with low digital literacy. We argued that sensor data is by no means objective, even though it removes some biases associated with self-reported exposures and outcomes. Finally, we discussed the costs of smartphone studies, noting that even though participants may bring their own device, smartphone studies can still incur substantial costs for app development and testing, data storage and analysis.

Of note, we conducted a narrative review and did not perform a systematic search of smartphone studies, a systematic quality appraisal of studies or a systematic search of hypes and hopes. The examples that we included are for illustrative purposes, and this narrative review, although hopefully informative to the reader, is unlikely to be comprehensive. Where possible, we have provided references to reviews both of smartphone studies in musculoskeletal conditions (such as [18, 28, 50–54, 56]) and of the hypes and hopes we discussed (such as [20, 36, 39, 66, 89]). These reviews tend to be narrower in scope than our overview, but provide a more comprehensive overview of the subject area and often contain systematic quality appraisals.

We hope the musculoskeletal research community will join us in paving the way forwards. In this journey, transparency is key. Smartphone studies in the healthcare context often do not report essential information on study design (e.g. details on app development or privacy protection [37, 103]), data collection (e.g. the method for location data collection [39]) and data analysis (e.g. the algorithm used to convert sensor data into human activity metrics [90]). Better transparency can be stimulated through the use of reporting guidelines. For smartphone-based interventions, two checklists provide reporting guidelines: one for trials using web-based health interventions (the CONSORT-EHEALTH statement [106]) and one for mobile phone-based health interventions (the mERA checklist [107]). For apps aimed at self-management by people with musculoskeletal disorders, a EULAR task force articulated three overarching principles and ten points to consider [108]. Various reporting guidelines directly relate to the hypes and hopes discussed in this article. For example, the mERA requirement to provide a cost assessment of the intervention would fulfil our hope of more insight into the costs and savings associated with smartphone studies. Wider adoption of the best practices described in these guidelines will contribute to transparency. Transparency will help prevent others from reinventing the wheel and will contribute to cost-effective and efficient smartphone studies. Regardless of available reporting checklists, we encourage full and comprehensive sharing of app development, piloting procedures and research methods.

Awareness of the challenges of smartphone studies will help researchers anticipate or avoid common pitfalls. In addition, it would be useful to bring the musculoskeletal research community together to exchange lessons learnt on issues specific to our field. These issues may include symptom-related barriers to using smartphones for research, validating algorithms in patient populations with reduced functional ability, digitising validated questionnaires, and methods to reliably quantify pain, quality of life and fatigue. We recommend that researchers share both the successful and unsuccessful strategies they employed for others to learn from. We hope this review will support researchers to generate more success stories of smartphone studies in musculoskeletal research.

Availability of data and materials

Not applicable.

Abbreviations

CONSORT-EHEALTH:

Consolidated Standards of Reporting Trials of Electronic and Mobile HEalth Applications and onLine TeleHealth

EULAR:

European Alliance of Associations for Rheumatology

mERA:

Mobile health (mHealth) evidence reporting and assessment

References

  1. Druce KL, Bhattacharya Y, Jones GT, Macfarlane GJ, Basu N. Most patients who reach disease remission following anti-TNF therapy continue to report fatigue: results from the British Society for Rheumatology biologics register for rheumatoid arthritis. Rheumatol (United Kingdom). 2016;55(10).

  2. Olsen CL, Lie E, Kvien TK, Zangi HA. Predictors of fatigue in rheumatoid arthritis patients in remission or in a low disease activity state. Arthritis Care Res. 2016;68(7).

  3. Michaud K, Pope J, van de Laar M, Curtis JR, Kannowski C, et al. A systematic literature review of residual symptoms and unmet need in patients with rheumatoid arthritis. Arthritis Care Res. 2020.

  4. Parsons S, Ingram M, Clarke-Cornwell A, Symmons D. A heavy burden: the occurrence and impact of musculoskeletal conditions in the United Kingdom today; 2011.

  5. March L, Smith EUR, Hoy DG, Cross MJ, Sanchez-Riera L, Blyth F, et al. Burden of disability due to musculoskeletal (MSK) disorders. Best Pract Res Clin Rheumatol. 2014;28.

  6. Austin L, Sharp CA, van der Veer SN, Machin M, Humphreys J, Mellor P, et al. Providing ‘the bigger picture’: benefits and feasibility of integrating remote monitoring from smartphones into the electronic health record. Rheumatology. 2019.

  7. Richardson JE, Reid MC. The promises and pitfalls of leveraging mobile health technology for pain care. Pain Med (United States). 2013;14(11).

  8. Day RO, Frensham LJ, Nguyen AD, Baysari MT, Aung E, Lau AYS, et al. Effectiveness of an electronic patient-centred self-management tool for gout sufferers: a cluster randomised controlled trail protocol. BMJ Open. 2017;7.

  9. Lalloo C, Harris LR, Hundert AS, Berard R, Cafazzo J, Connelly M, et al. The iCanCope pain self-management application for adolescents with juvenile idiopathic arthritis: a pilot randomized controlled trial. Rheumatol (United Kingdom). 2021;60(1).

  10. Khan F, Granville N, Malkani R, Chathampally Y. Health-related quality of life improvements in systemic lupus Erythematosus derived from a digital therapeutic plus Tele-health coaching intervention: randomized controlled pilot trial. J Med Internet Res. 2020;22(10).

  11. Nap-van der Vlist MM, Houtveen J, Dalmeijer GW, Grootenhuis MA, van der Ent CK, van Grotel M, et al. Internet and smartphone-based ecological momentary assessment and personalized advice (PROfeel) in adolescents with chronic conditions: a feasibility study. Internet Interv. 2021;25.

  12. Nowell WB, Gavigan K, Kannowski CL, Cai Z, Hunter T, Venkatachalam S, et al. Which patient-reported outcomes do rheumatology patients find important to track digitally? A real-world longitudinal study in ArthritisPower. Arthritis Res Ther. 2021;23(1).

  13. Nowell WB, Curtis JR, Nolot SK, Curtis D, Venkatachalam S, Owensby JK, et al. Digital tracking of rheumatoid arthritis longitudinally (digital) using biosensor and patient-reported outcome data: protocol for a real-world study. JMIR Res Protoc. 2019;8(9).

  14. Rodríguez-Sánchez-Laulhé P, Luque-Romero LG, Blanquero J, Suero-Pineda A, Biscarri-Carbonero Á, Barrero-Garciá FJ, et al. A mobile app using therapeutic exercise and education for self-management in patients with hand rheumatoid arthritis: a randomized controlled trial protocol. Trials. 2020;21(1).

  15. Skrepnik N, Spitzer A, Altman R, Hoekstra J, Stewart J, Toselli R. Assessing the impact of a novel smartphone application compared with standard follow-up on mobility of patients with knee osteoarthritis following treatment with Hylan G-F 20: a randomized controlled trial. JMIR mHealth uHealth. 2017;5(5):e64.

  16. Tam J, Lacaille D, Liu-Ambrose T, Shaw C, Xie H, Backman CL, et al. Effectiveness of an online self-management tool, OPERAS (an on-demand program to EmpoweR active self-management), for people with rheumatoid arthritis: a research protocol. Trials. 2019;20(1).

  17. Dixon WG, Beukenhorst AL, Yimer BB, Cook L, Gasparrini A, El-Hay T, et al. How the weather affects the pain of citizen scientists using a smartphone app. npj Digit Med. 2019.

  18. Solomon DH, Rudin RS. Digital health technologies: opportunities and challenges in rheumatology. Nat Rev Rheumatol. 2020;16.

  19. Catarinella FS, Bos WH. Digital health assessment in rheumatology: current and future possibilities. Clin Exp Rheumatol. 2016;34.

  20. Druce KL, Dixon WG, McBeth J. Maximizing engagement in Mobile health studies: lessons learned and future directions. Rheum Dis Clin N Am. 2019;45:159–72. https://doi.org/10.1016/j.rdc.2019.01.004 [cited 2021 Mar 10].

  21. Druce KL, Cordingley L, Short V, Moore S, Hellman B, James B, et al. Quality of life, sleep and rheumatoid arthritis (QUASAR): a protocol for a prospective UK mHealth study to investigate the relationship between sleep and quality of life in adults with rheumatoid arthritis. BMJ Open. 2018;8(1).

  22. Beukenhorst AL, Parkes MJ, Cook L, Barnard R, van der Veer SN, Little MA, et al. Collecting symptoms and sensor data with consumer Smartwatches (the knee OsteoArthritis, linking activity and pain study): protocol for a longitudinal, observational feasibility study. JMIR Res Protoc. 2019.

  23. Radin JM, Quer G, Jalili M, Hamideh D, Steinhubl SR. The hopes and hazards of using personal health technologies in the diagnosis and prognosis of infections. Lancet Digit Heal. 2021;3(7).

  24. Gossec L, Guyard F, Leroy D, Lafargue T, Seiler M, Jacquemin C, et al. Detection of flares by decrease in physical activity, collected using wearable activity trackers in rheumatoid arthritis or axial Spondyloarthritis: An application of machine learning analyses in rheumatology. Arthritis Care Res. 2019.

  25. Steinhubl SR, Muse ED, Topol EJ. Can mobile health technologies transform health care? JAMA - J Am Med Assoc. 2013;310(22):2395–6.

  26. Pisaniello HL, Dixon WG. What does digitalization hold for the creation of real-world evidence? Rheumatology (United Kingdom). 2020;59(1):39–45.

  27. Onnela J-P. Opportunities and challenges in the collection and analysis of digital phenotyping data. Neuropsychopharmacology. 2020.

  28. Dantas LO, Weber S, Osani MC, Bannuru RR, McAlindon TE, Kasturi S. Mobile health technologies for the management of systemic lupus erythematosus: a systematic review. Lupus. 2020;29(2):144–56.

  29. Lalloo C, Jibb LA, Rivera J, Agarwal A, Stinson JN. There’s a pain app for that. Clin J Pain. 2015;31(6).

  30. Shaw Y, Courvoisier DS, Scherer A, Ciurea A, Lehmann T, Jaeger VK, et al. Impact of assessing patient-reported outcomes with mobile apps on patient-provider interaction. RMD Open. 2021;7(1).

  31. Amor JD, James CJ. Setting the scene: Mobile and wearable technology for managing healthcare and wellbeing. Proc Annu Int Conf IEEE Eng Med Biol Soc EMBS. 2015;2015:7752–5.

  32. Ben-Zeev D, Schueller SM, Begale M, Duffecy J, Kane JM, Mohr DC. Strategies for mHealth research: lessons from 3 Mobile intervention studies. Adm Policy Ment Health Ment Health Serv Res. 2015;42(2):157–67.

  33. Apple. iOS Developers’ Documentation. 2019. [cited 2019 Aug 11]. Available from: https://developer.apple.com/documentation/

  34. Google. Android API Developers Guide. 2019 [cited 2019 Aug 11]. Available from: https://developer.android.com/guide/

  35. Lane ND, Miluzzo E, Lu H, Peebles D, Choudhury T, Campbell AT. A survey of mobile phone sensing. IEEE Commun Mag. 2010;48(9):140–50.

  36. Incel OD, Kose M, Ersoy C. A review and taxonomy of activity recognition on Mobile phones. Bionanoscience. 2013;3(2):145–71.

    Article  Google Scholar 

  37. Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform. 2018.

  38. Bhaskaran K, Gasparrini A, Hajat S, Smeeth L, Armstrong B. Time series regression studies in environmental epidemiology. Int J Epidemiol. 2013.

  39. Fraccaro P, Beukenhorst A, Sperrin M, Harper S, Palmier-Claus J, Lewis S, et al. Digital biomarkers from geolocation data in bipolar disorder and schizophrenia: a systematic review. J Am Med Inform Assoc. 2019.

  40. Espay AJ, Bonato P, Nahab FB, Maetzler W, Dean JM, Klucken J, et al. Technology in Parkinson’s disease: challenges and opportunities. Mov Disord. 2016;31(9):1272–82.

    Article  Google Scholar 

  41. De Montjoye YA, Shmueli E, Wang SS, Pentland AS. OpenPDS: protecting the privacy of metadata through SafeAnswers. PLoS One. 2014;9(7).

  42. van der Veer SN, Beukenhorst AL, Ali SM, James B, Silva P, McBeth J, et al. Development of a mobile digital manikin to measure pain location and intensity. In: Studies in health technology and informatics; 2020.

    Google Scholar 

  43. Beukenhorst AL, Howells K, Cook L, McBeth J, O’Neill TW, Parkes MJ, et al. Engagement and participant experiences with consumer Smartwatches for Health Research: longitudinal, observational feasibility study. JMIR mHealth uHealth. 2020.

  44. Schultz DM, Beukenhorst AL, Yimer BB, Cook L, Pisaniello HL, House T, et al. Weather patterns associated with pain in chronic-pain sufferers. Bull Am Meteorol Soc. 2020.

  45. Birlie B, Schultz D, Beukenhorst A, Lunt M, Pisaniello HL, House T, et al. Heterogeneity in the association between weather and pain severity among patients with chronic-pain: a Bayesian multilevel regression analysis. Pain Reports. 2022.

  46. Hamy V, Garcia-Gancedo L, Pollard A, Myatt A, Liu J, Howland A, et al. Developing smartphone-based objective assessments of physical function in rheumatoid arthritis patients: the PARADE study. Digit Biomarkers. 2020;4(1).

  47. Crouthamel M, Quattrocchi E, Watts S, Wang S, Berry P, Garcia-Gancedo L, et al. Using a researchkit smartphone app to collect rheumatoid arthritis symptoms from real-world participants: feasibility study. JMIR mHealth uHealth. 2018;6(9).

  48. de la Vega R, Roset R, Galán S, Miró J. Fibroline: a mobile app for improving the quality of life of young people with fibromyalgia. J Health Psychol. 2018;23(1).

  49. Mollard E, Michaud K. A mobile app with optical imaging for the self-management of hand rheumatoid arthritis: pilot study. JMIR mHealth uHealth. 2018;6(10).

  50. Grainger R, Townsley H, White B, Langlotz T, Taylor WJ. Apps for people with rheumatoid arthritis to monitor their disease activity: a review of apps for best practice and quality. JMIR mHealth uHealth. 2017;5(2).

  51. Seppen BF, Den Boer P, Wiegel J, ter Wee MM, Van der Leeden M, De Vries R, et al. Asynchronous mHealth interventions in rheumatoid arthritis: systematic scoping review. JMIR mHealth uHealth. 2020;8(11):1–11.

    Article  Google Scholar 

  52. Bearne LM, Sekhon M, Grainger R, La A, Shamali M, Amirova A, et al. Smartphone apps targeting physical activity in people with rheumatoid arthritis: systematic quality appraisal and content analysis. JMIR mHealth uHealth. 2020;8(7):1–13.

    Article  Google Scholar 

  53. Machado GC, Pinheiro MB, Lee H, Ahmed OH, Hendrick P, Williams C, et al. Smartphone apps for the self-management of low back pain: a systematic review. Best Pract Res Clin Rheumatol. 2016;30(6):1098–109. https://doi.org/10.1016/j.berh.2017.04.002.

    Article  PubMed  Google Scholar 

  54. Najm A, Gossec L, Weill C, Benoist D, Berenbaum F, Nikiphorou E. Mobile health apps for self-management of rheumatic and musculoskeletal diseases: systematic literature review. JMIR mHealth uHealth. 2019;7.

  55. Lee RR, Shoop-Worrall S, Rashid A, Thomson W. “Asking too much?”: a randomised N-of-1 trial exploring patient preferences and measurement reactivity to frequent use of remote multi-dimensional pain assessments in children and young people with juvenile idiopathic arthritis; 2019.

  56. Ali SM, Lau WJ, McBeth J, Dixon WG, van der Veer SN. Digital manikins to self-report pain on a smartphone: a systematic review of mobile apps. Eur J Pain (United Kingdom). 2021;25.

  57. Shiffman S, Stone AA, Hufford M. Ecological momentary assessment; 2008. p. 1–32.

    Google Scholar 

  58. Shiffman S, Stone AA, Hufford M. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1–32.

    Article  Google Scholar 

  59. Tourangeau R. Remembering what happened: memory errors and survey reports. The science of self-report: implications for research and practice; 2000.

    Google Scholar 

  60. Stone AA, Shiffman S, Schwartz JE, Broderick JE, Hufford MR. Patient compliance with paper and electronic diaries. Control Clin Trials. 2003.

  61. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):1–9.

    Article  Google Scholar 

  62. Druce KL, McBeth J, van der Veer SN, Selby DA, Vidgen B, Georgatzis K, et al. Recruitment and ongoing engagement in a UK smartphone study examining the association between weather and pain: cohort study. JMIR mHealth uHealth. 2017;5(11):e168. Available from: http://mhealth.jmir.org/2017/11/e168/.

    Article  Google Scholar 

  63. Kiang MV, Chen JT, Krieger N, Buckee CO, Alexander MJ, Baker JT, et al. Sociodemographic characteristics of missing data in digital Phenotyping. medRxiv. 2021:2012–20.

  64. Trister AD, Neto EC, Bot BM, Perumal T, Pratap A, Klein A, et al. mPower: a smartphone-based study of Parkinson’s disease provides personalized measures of disease impact. Mov Disord. 2016.

  65. Bot BM, Suver C, Neto EC, Kellen M, Klein A, Bare C, et al. The mPower study, Parkinson disease mobile data collected using ResearchKit. Sci Data. 2016;3.

  66. O’Connor S, Hanlon P, O’Donnell CA, Garcia S, Glanville J, Mair FS. Understanding factors affecting patient and public engagement and recruitment to digital health interventions: a systematic review of qualitative studies. BMC Med Inform Decis Mak. 2016;16(1):1–15Available from:. https://doi.org/10.1186/s12911-016-0359-3.

    Article  Google Scholar 

  67. Lee EWJ, Viswanath K. Big data in context: addressing the twin perils of data absenteeism and chauvinism in the context of health disparities research. J Med Internet Res. 2020;22(1):e16377.

    Article  Google Scholar 

  68. Dorsey ER, Chan YF, Mcconnell MV, Shaw SY, Trister AD, Friend SH. The use of smartphones for health research. Acad Med. 2017;92(2):157–60.

    Article  Google Scholar 

  69. Reade S, Spencer K, Sergeant JC, Sperrin M, Schultz DM, Ainsworth J, et al. Cloudy with a chance of pain: engagement and subsequent attrition of daily data entry in a smartphone pilot study tracking weather, disease severity, and physical activity in patients with rheumatoid arthritis. JMIR mHealth uHealth. 2017;5(3):e37 Available from: http://mhealth.jmir.org/2017/3/e37/.

    Article  Google Scholar 

  70. Cai RA, Beste D, Chaplin H, Varakliotis S, Suffield L, Josephs F, et al. Developing and evaluating JIApp: acceptability and usability of a smartphone app system to improve self-management in young people with juvenile idiopathic arthritis. JMIR mHealth uHealth. 2017;5(8).

  71. Osailan A. The relationship between smartphone usage duration (using smartphone’s ability to monitor screen time) with hand-grip and pinch-grip strength among young people: an observational study. BMC Musculoskelet Disord. 2021;22(1):1–8.

    Article  Google Scholar 

  72. Shen S, Suzuki K, Kohmura Y, Fuku N, Someya Y, Miyamoto-Mikami E, et al. Associations of voluntary exercise and screen time during the first wave of COVID-19 restrictions in Japan with subsequent grip strength among university students: J-fit+ study. Sustainability. 2021;13(24):13648.

    Article  CAS  Google Scholar 

  73. Seppen BF, Wiegel J, L’ami MJ, dos Santos Rico SD, Catarinella FS, Turkstra F, et al. Feasibility of self-monitoring rheumatoid arthritis with a smartphone app: results of two mixed-methods pilot studies. JMIR Form Res. 2020;4(9):1–10.

    Article  Google Scholar 

  74. Park JY, Lee G, Shin SY, Kim JH, Han HW, Kwon TW, et al. Lessons learned from the development of health applications in a tertiary hospital. Telemed e-Health. 2014;20(3):215–22.

    Article  Google Scholar 

  75. De Montjoye YA, Hidalgo CA, Verleysen M, Blondel VD. Unique in the crowd: the privacy bounds of human mobility. Sci Rep. 2013;3:1–5.

    Article  Google Scholar 

  76. O’Dea S. Smartphone usage by age UK 2012-2019 | Statista. Statista; 2020.

    Google Scholar 

  77. Jardine J, Fisher J, Carrick B. Apple’s ResearchKit: smart data collection for the smartphone era? J R Soc Med. 2015;108(8):294–6.

    Article  Google Scholar 

  78. Ben-Zeev D, Wang R, Abdullah S, Brian R, Scherer EA, Mistler LA, et al. Mobile behavioral sensing for outpatients and inpatients with schizophrenia. Psychiatr Serv. 2016;67(5):558–61.

    Article  Google Scholar 

  79. Berry JD, Paganoni S, Carlson K, Burke K, Weber H, Staples P, et al. Design and results of a smartphone-based digital phenotyping study to quantify ALS progression. Ann Clin Transl Neurol. 2019.

  80. Beukenhorst AL, Collins E, Burke KM, Rahman SM, Clapp M, Konanki SC, et al. Smartphone data during the COVID-19 pandemic can quantify behavioral changes in people with ALS. Muscle Nerve. 2021.

  81. StatCounter. Market share of leading Mobile operating Systems in Europe from 2010 to 2019. Statista. 2019; [cited 2019 Oct 21]. Available from: https://www.statista.com/statistics/639928/market-share-mobile-operating-systems-eu/.

  82. Onnela JP, Rauch SL. Harnessing smartphone-based digital Phenotyping to enhance behavioral and mental health. Neuropsychopharmacology. 2016;41(7):1691–6Available from:. https://doi.org/10.1038/npp.2016.7.

    Article  PubMed  PubMed Central  Google Scholar 

  83. Torous J, Staples P, Onnela JP. Realizing the potential of Mobile mental health: new methods for new data in psychiatry. Curr Psychiatry Rep. 2015.

  84. Gasparrini A. The case time series design A new tool for big data analysis Environment & Health Research Group seminar The last decades have witnessed an intense methodological research on. 2017.

  85. Salathé M, Bengtsson L, Bodnar TJ, Brewer DD, Brownstein JS, Buckee C, et al. Digital epidemiology. PLoS Comput Biol. 2012;8(7):1–5.

    Article  Google Scholar 

  86. Jackowska M, Dockray S, Hendrickx H, Steptoe A. Psychosocial factors and sleep efficiency: discrepancies between subjective and objective evaluations of sleep. Psychosom Med. 2011;73(9).

  87. Landry GJ, Best JR, Liu-Ambrose T. Measuring sleep quality in older adults: a comparison using subjective and objective methods. Front Aging Neurosci. 2015;7(SEP).

  88. Rothman KJ, Greenland S, Associate TLL. Modern epidemiology, 3rd edition. Hast Cent Rep. 2014.

  89. Dyrstad SM, Hansen BH, Holme IM, Anderssen SA. Comparison of self-reported versus accelerometer-measured physical activity. Med Sci Sports Exerc. 2014;46(1):99–106.

    Article  Google Scholar 

  90. Straczkiewicz M, James P, Onnela JP. A systematic review of smartphone-based human activity recognition for health research. arXiv. 2019;

  91. Murakami H, Kawakami R, Nakae S, Nakata Y, Ishikawa-Takata K, Tanaka S, et al. Accuracy of wearable devices for estimating total energy expenditure: Comparisonwith metabolic chamber and doubly labeledwater method. JAMA Intern Med. 2016;176(5):702–3.

    Article  Google Scholar 

  92. Torous J, Kiang MV, Lorme J, Onnela J-P. New tools for new research in psychiatry: a scalable and customizable platform to Empower data driven smartphone research. JMIR Ment Heal. 2016.

  93. Beukenhorst AL, Schultz DM, McBeth J, Lakshminarayana R, Sergeant JC, Dixon WG. Using smartphones for research outside clinical settings: how operating systems, app developers, and users determine geolocation data quality in mHealth studies. In: Studies in health technology and informatics; 2017.

    Google Scholar 

  94. Kuhlmann T, Garaizar P, Reips U-D. Smartphone sensor accuracy varies from device to device in mobile research: the case of spatial orientation. Behav Res Methods. 2021;53:22–33.

    Article  Google Scholar 

  95. Beukenhorst AL, Burke KM, Berry JD, Onnela J-P. Using smartphones to reduce research burden in a neurodegenerative population and assessing participant adherence:a randomized clinical trial and two observational studies. JMIR mHealth uHealth. 2022;10(1):e31877.

    Article  Google Scholar 

  96. Arase Y, Ren F, Xie X. User activity understanding from mobile phone sensors. UbiComp’10. Proc 2010 ACM Conf Ubiquitous Comput. 2010:391–2.

  97. Cui Y, Chipchase J, Ichikawa F. A cross culture study on phone carrying and physical personalization. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics); 2007.

    Google Scholar 

  98. Backhouse MR, Hensor EMA, White D, Keenan AM, Helliwell PS, Redmond AC. Concurrent validation of activity monitors in patients with rheumatoid arthritis? Clin Biomech. 2013;28(4):473–9Available from:. https://doi.org/10.1016/j.clinbiomech.2013.02.009.

    Article  Google Scholar 

  99. Ishikawa Y, An Q, Nakagawa J, Oka H, Yasui T, Tojima M, et al. Gait analysis of patients with knee osteoarthritis by using elevation angle: confirmation of the planar law and analysis of angular difference in the approximate plane. Adv Robot. 2017;31(1–2):68–79.

    Article  Google Scholar 

  100. Beukenhorst AL, Sergeant J, Schultz DM, McBeth J, Yimer BB, Dixon WG. Understanding predictors of missing location data to inform smartphone study design: an observational study. Under Rev. 2021.

  101. Davergne T, Kedra J, Gossec L. Wearable activity trackers and artificial intelligence in the management of rheumatic diseases: where are we in 2021? Z Rheumatol. 2021;80(10):928–35.

    Article  Google Scholar 

  102. Onnela JP, Rauch SL. Harnessing smartphone-based digital Phenotyping to enhance behavioral and mental health. Neuropsychopharmacology. 2016.

  103. Magrabi F, Habli I, Sujan M, Wong D, Thimbleby H, Baker M, et al. Why is it so difficult to govern mobile apps in healthcare? BMJ Health Care Informatics. 2019.

  104. Studenic P, Karlfeldt S, Alunno A. The past, present and future of e-health in rheumatology. Jt Bone Spine. 2021;88(4).

  105. Gandrup J, Ali SM, McBeth J, van der Veer SN, Dixon WG. Remote symptom monitoring integrated into electronic health records: a systematic review. J Am Med Inform Assoc. 2020;27(11):1752–63.

    Article  Google Scholar 

  106. Eysenbach G, Stoner S, Drozd F, Blankers M, Crutzen R, Tait R, et al. ConSORT-eHealth: improving and standardizing evaluation reports of web-based and mobile health interventions. J Med Internet Res. 2011;13(4).

  107. Agarwal S, Lefevre AE, Lee J, L’engle K, Mehl G, Sinha C, et al. Guidelines for reporting of health interventions using Mobile phones: Mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ. 2016;352:1–10.

    Google Scholar 

  108. Najm A, Nikiphorou E, Kostine M, Richez C, Pauling JD, Finckh A, et al. EULAR points to consider for the development, evaluation and implementation of mobile health applications aiding self-management in people living with rheumatic and musculoskeletal diseases. RMD Open. 2019;5(2):1–7.

    Article  Google Scholar 

Download references

Acknowledgements

None.

Funding

ALB is supported by an MRC DTP grant (number MR/N013751/1).

Author information

Contributions

All authors contributed to this manuscript. ALB conceived the idea, with input from KLD and DDC. All three authors contributed to writing and reviewing the manuscript, and all read and approved the final version.

Corresponding author

Correspondence to Anna L. Beukenhorst.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

No competing interests declared.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Beukenhorst, A.L., Druce, K.L. & De Cock, D. Smartphones for musculoskeletal research – hype or hope? Lessons from a decennium of mHealth studies. BMC Musculoskelet Disord 23, 487 (2022). https://doi.org/10.1186/s12891-022-05420-8

