Article

Feasibility of Using Floor Vibration to Detect Human Falls

1 School of Architecture, Harbin Institute of Technology, Harbin 150001, China
2 Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science, Ministry of Industry and Information Technology, Harbin 150001, China
3 School of Architecture, The University of Sheffield, Sheffield S10 2TN, UK
* Authors to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2021, 18(1), 200; https://doi.org/10.3390/ijerph18010200
Submission received: 22 November 2020 / Revised: 24 December 2020 / Accepted: 25 December 2020 / Published: 29 December 2020
(This article belongs to the Special Issue Disease Prediction, Machine Learning, and Healthcare)

Abstract

With the increasing aging population in modern society, falls and fall-induced injuries among elderly people have become a major public health problem. This study proposes a classification framework that uses floor vibrations to detect fall events and to distinguish different fall postures. A scaled 3D-printed model with twelve fully adjustable joints, able to simulate human body movement, was built to generate human fall data. The mass proportions of the human body were carefully studied and reflected in the model. Object drop and human fall tests were carried out, and the vibration signatures generated in the floor were recorded for analysis. Machine learning algorithms, namely the K-means algorithm and the K-nearest neighbor algorithm, were introduced in the classification process. Three classifiers (human walking versus human fall, human fall versus object drop, and human falls from different postures) were developed in this study. Results showed that the three proposed classifiers achieved accuracies of 100%, 85%, and 91%, respectively. This paper develops a framework that uses floor vibration to build a pattern recognition system for detecting human falls based on a machine learning approach.

1. Introduction

An aging population is causing issues in several countries, and the trend is increasing. By 2050, the old-age dependency ratio (the number of people aged 65 and over relative to those aged 15 to 64) in the European Union is expected to double to 54% [1]. Aging can cause a decline in bone mineral density and physical function [2], which makes older people more vulnerable to accidents. Studies have shown that falls constitute one of the leading causes of severe injury in the elderly [3]. Almost 20% of community-dwelling older people experience accidental falls every year [4]. Two thirds of all severe injuries sustained by the elderly are caused by falls [5]. The situation becomes more serious when a senior adult is unable to stand up after a fall, especially when living alone [6]. Over half of those who remain on the ground for more than an hour after a fall experience a deterioration in general health, resulting in death within six months even when there is no direct injury from the fall. Older people living alone can easily fall without anyone noticing, and because they are alone it can take considerable time for an alarm to be raised. They can remain on the floor for a prolonged time, with severe adverse consequences [7]. The development of automatic fall-detection methods that detect a fall and ensure that medical help arrives in time has therefore become increasingly important. Different fall postures can lead to different kinds of injury: forward falls carry a high risk of joint dislocations and upper limb injuries, while backward falls often carry an increased risk of head injuries [8]. It is essential that fall postures be identified by a fall detection system so that appropriate action can be taken in time.
Since falls are often associated with injuries and possibly more serious consequences, several studies have been carried out to detect the occurrence of falls in the elderly in order to prevent them from remaining on the ground for too long. Three mainstream methods for fall detection are currently available and are being used by older adults in different living settings.

1.1. Visual-Based Fall Detecting Systems

Visual-based fall detecting systems can monitor the elderly through surveillance cameras in real time in a home care setting. Thermal vision cameras can also be applied in certain cases [9,10,11]. The criteria chosen for fall detection after background subtraction can be further categorized into three branches, namely time of inactivity, body movement, and posture recognition. The first approach is based on how long the person remains in a reclined position after a period of inactivity, established through an inactivity history map [12]. This method can generate false alarms when a person is merely resting. In the second approach, parameters such as velocity of movement [13], displacement [12], orientation of the head position [14], and body centroid or spine [15] are used to evaluate whether a fall has occurred. However, this approach does not give good results when people are engaged in strenuous exercise [16]. In the posture recognition approach, various methods such as projection histograms of the segmented silhouette [17], pixel-based subtraction algorithms [18], Kalman filtering [19], and genetic algorithms [20] are applied to extract the human silhouette from the background in surveillance images. Features such as the aspect ratio, orientation angle [14], and distance to the floor [21] are then extracted to analyze shape changes and distinguish fall events from daily life activities.
The installation of cameras in homes to monitor the elderly in real time is undoubtedly an efficient method to detect falls with high accuracy. However, this form of surveillance could make the elderly feel observed and infringe on their privacy, which may lead to increased stress and resistance. This is not conducive to creating a safe and comfortable home environment for older people.

1.2. Wearable Sensor-Based Fall Detecting Systems

Another method of detecting falls in the elderly is through wearable sensors. Approaches differ: different systems use different algorithms and therefore different numbers of sensors [22,23,24,25]. The use of sensor pairs, such as a combined accelerometer and gyroscope [26,27,28,29] or a combined accelerometer and barometer [30,31], constitutes the two leading options in current research. These sensors work together to collect various data, including orientation in terms of yaw, pitch, and roll angles [32], frequency-domain features [33], and the acceleration of multiple body parts, especially the waist region [34,35,36,37]. Data extracted from these sensors can be used to classify different human activities, including falls. Wearable sensors can collect accurate data and issue alerts in a timely fashion. However, this method relies on the sensors being attached to the wearer; if the person is not wearing the sensors, the fall cannot be detected.

1.3. Floor Vibration-Based Fall Detecting Systems

The vibration generated by a fall can provide a good approach to the development of a fall detection system. Alwan et al. proposed a fall detection system in which a piezoelectric sensor attached to the floor surface collects the vibration signals, and a battery-powered pre-processing circuit analyzes the vibration patterns and transmits a binary signal in the case of a fall event [33]. In that research, fall experiments were conducted with multiple participants falling on purpose; such an experimental method can be too subjective to fully simulate an unconscious fall. Alwan's method was further developed by Werner into an automatic fall detection system focused on appraising the practical feasibility of the method and on further validating and developing the related techniques; Werner used a dummy to simulate falls [38]. These studies proved the feasibility of using floor vibration to detect a fall, although the experiments did not focus on the manner of falling. Yazar et al. demonstrated a fall detection system composed of a vibration sensor and two passive infrared sensors [39]. This approach increases accuracy but reduces fault tolerance. Liu et al. used a multi-feature semi-supervised support vector machine algorithm to process and analyze the floor acceleration recorded by installed accelerometers and managed to recognize human falls [40]. Liu's team deliberately adopted falls in different postures in their experiments, but only to increase the diversity of experimental samples rather than to distinguish between falls with different postures.
In comparison with visual-based detection systems and wearable sensors, using floor vibration to determine fall events not only ensures high accuracy with relatively low financial investment, but also avoids placing any further physical or mental burden on users. However, there is a lack of research into the ability of floor vibration-based fall detecting systems to distinguish falls with different postures.
Although there are several existing fall detecting systems, there is limited research examining fall posture identification, and most proposed systems tend to use high-sensitivity sensors, which are too expensive for widespread use in housing for the elderly. On the basis of current related studies, the aims of this research are as follows:
  • The feasibility of identifying falls from other activities based on floor vibration signals collected by high-sensitivity sensors has already been demonstrated. Can fall detection based on low-sensitivity mobile built-in sensors achieve sufficiently high accuracy?
  • Can different fall postures be identified reliably from floor vibration signals collected by mobile built-in sensors?

2. Method

This paper combines fall experiments with a machine learning approach to realize fall detection through the use of floor vibration signals.

2.1. Experiments

2.1.1. Human Body Model Formulation

In order to simulate the posture of a fall precisely, a physical model was fabricated at a 1:4 scale of the actual human body. The ratio of the length of each body part in this model is consistent with the actual proportions of the human body, with reference to [41] (Figure 1), and the main joints connecting the body parts function in the same way as actual human joints in terms of direction and range of movement (Figure 2). The model was 3D printed, and the body parts were assembled with joints and screws to ensure mobility (Figure 3). With all screws tightened, the model simulates a conscious person capable of controlling all of its limbs and standing up straight; with all joints loosened, it simulates an unconscious person unable to control its body. The body weight proportions were studied and reflected in the model by attaching weights to the skeleton [42], as shown in Figure 4.

2.1.2. Experimental Procedure

The fall experiments were carried out on a 12 × 12-inch plywood sheet, and the vibration from the model's falls was recorded. A mobile phone with a built-in accelerometer was used to collect the vibration data (Figure 5); the purpose was to use a low-sensitivity accelerometer, rather than a high-sensitivity one, to detect human falls. The sampling frequency of the test was 100 Hz. The acceleration record obtained from the mobile phone is shown in Figure 6.
Figure 6 clearly shows rapid attenuation after the fall, so only the first five seconds are used in the analysis. Three different activities, namely human forward fall, human backward fall, and object drop, are included in this study. In the human fall experiments, the model is placed in the center of the plywood board, with the head and shoulder parts tied to a support above to keep the body upright at the start. The posture changes during the forward fall are shown as a sequence of images in Figure 7. Before the fall occurs, all screws of the model are loosened to simulate the unconscious state, and the entire body leans forward.
After the rope holding the model is released, the knees land first as the body leans forward (Figure 7a,d), followed by the upper body and arms (Figure 7b,e), with the head landing last (Figure 7c,f). In the backward fall, the model likewise starts with all joints loosened while leaning backward. As the rope is released, the entire body falls backward with the knees bent and the hips landing first (Figure 7h,k). The upper body then continues to fall back due to inertia, with the upper limbs and the head touching the floor at the end (Figure 7i,l).
A series of object drop tests was also carried out (Figure 8). In the object drop experiments, the total weight of the test object is the same as that of the human model, and the object is dropped from the same height as the model's center of gravity. The vibration data are recorded from the moment of the drop to the end of the last rebound. The purpose of the object drop test is to determine whether human falls can be distinguished from other one-drop activities by assessing the floor vibration data.
A total of 314 experiments were conducted during the entire process, consisting of 107 object drops, 97 sets of human forward falls, and 110 sets of human backward falls.

2.2. Proposed Fall Detection Algorithms

2.2.1. K-Means Clustering Algorithm

In this paper, the K-means clustering method is introduced to pre-process the raw data at the outset. The K-means clustering algorithm is an iterative clustering analysis algorithm. Its aim is to partition an N-dimensional population into K sets that are reasonably efficient in the sense of within-class variance [43].
The standard algorithm for K-means clustering consists of two steps [44]. The first step is the initial assignment, where K objects are selected randomly as initial cluster centers (centroids) and each observation is assigned to the cluster whose centroid has the least squared Euclidean distance. Given a set of observations (X1, X2, …, Xn), where each observation is an m-dimensional real vector, K-means clustering aims to partition the n observations into k (≤ n) sets C = {C1, C2, …, Ck}. Equation (1) describes the distance used in the initial assignment:
$$\operatorname{dis}(X_i, C_j) = \sqrt{\sum_{t=1}^{m} \left( X_i^t - C_j^t \right)^2}, \quad 1 \le i \le n,\ 1 \le j \le k,\ 1 \le t \le m \tag{1}$$
where $X_i$ is the ith observation, $C_j$ is the jth cluster center, $X_i^t$ is the tth component of the ith observation, and $C_j^t$ is the tth component of the jth cluster center.
The next step is to recalculate the centroid of each cluster from the observations assigned to it and then reassign every observation to its nearest new centroid. This process is repeated until a termination condition is met: no (or only a minimal number of) observations are reassigned to different clusters, no (or only a minimal number of) centroids change, and the sum of squared errors reaches a minimum. Equation (2) describes the recalculation:
$$C_l = \frac{1}{|S_l|} \sum_{X_i \in S_l} X_i, \quad 1 \le l \le k,\ 1 \le i \le |S_l| \tag{2}$$
where $C_l$ is the centroid of the lth cluster, $|S_l|$ is the number of observations in the lth cluster, and $X_i$ ranges over the observations in the lth cluster.
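To make the two steps concrete, the following is a minimal NumPy sketch of the standard K-means loop described by Equations (1) and (2). It is an illustration under simple assumptions (random initialization, stability-based termination), not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain K-means over an (n, m) array X: assignment (Eq. 1) and update (Eq. 2)."""
    rng = np.random.default_rng(seed)
    # Initial assignment: pick k observations at random as the first centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Eq. (1): Euclidean distance from every observation to every centroid.
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)               # assign to the nearest centroid
        # Eq. (2): each centroid becomes the mean of the observations assigned to it.
        new_centroids = np.array([X[labels == j].mean(axis=0)
                                  if np.any(labels == j) else centroids[j]
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):  # termination: centroids stable
            break
        centroids = new_centroids
    return labels, centroids
```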
Since K is pre-set according to the data set, the value that best reflects the characteristics of the pattern differs when processing different data patterns. One way to find the most appropriate k value in cluster analysis is the elbow method, a heuristic for determining the number of clusters in a data set. The elbow method plots the mean absolute deviation of the dataset for different values of k [45]. As k increases, the mean absolute deviation decreases, since the instances lie closer to the centroids they are assigned to; however, the reduction in the mean absolute deviation diminishes as k increases further. The k value at the steepest drop in dispersion is called the elbow, beyond which the diminishing returns are no longer worth the cost of additional clusters. Given a set {x1, x2, …, xn}, Equation (3) gives the mean absolute deviation used in the elbow method:
$$MD = \frac{1}{n} \sum_{i=1}^{n} \left| x_i - m(X) \right| \tag{3}$$
where MD is the mean absolute deviation, $x_i$ is the ith observation in the given set, and $m(X)$ is the mean value of the set.
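As a companion to Equation (3), the sketch below computes an elbow curve with the kmeans() function above. Measuring the deviation of each point against its assigned centroid is an assumption made here for illustration:

```python
import numpy as np

def elbow_curve(X, k_values):
    """Mean absolute deviation (in the spirit of Eq. 3) for a range of k values."""
    mads = []
    for k in k_values:
        labels, centroids = kmeans(X, k)           # kmeans() sketch from above
        # Average absolute deviation of every point from its assigned centroid.
        mads.append(float(np.mean(np.abs(X - centroids[labels]))))
    return mads

# Usage: plot k_values against elbow_curve(pattern, k_values) and pick the k
# where the steep initial drop levels off (K1 = 30 in this study).
```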

2.2.2. K-Nearest Neighbor Algorithm

The K-nearest neighbor algorithm is a non-parametric method proposed by Thomas Cover, used for classification and regression [46]. It is a relatively mature method and one of the simplest machine learning algorithms. Its main idea is that if the majority of the K nearest samples to a chosen sample in the feature space belong to a certain category, the chosen sample is assigned to that category and shares the characteristics of the samples in it.
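The voting rule can be stated in a few lines. This is a minimal sketch of KNN classification with Euclidean distance, not the authors' code:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=7):
    """Classify `query` by majority vote among its k nearest training samples."""
    dist = np.linalg.norm(train_X - query, axis=1)  # distance to every training sample
    nearest = np.argsort(dist)[:k]                  # indices of the k closest samples
    return Counter(train_y[nearest]).most_common(1)[0][0]
```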

2.3. Pre-Defined Values in the Algorithms

In the process of developing the binary classification system, the values of specific parameters in the algorithms need to be pre-defined before the system is run. The pre-defined values include the total time of extracted data (T), the K value in the K-means clustering algorithm (K1), and the K value in the K-nearest neighbor algorithm (K2).
By clustering similar data using the K-means algorithm, the continuous pattern obtained from the experiments can be processed into discrete data. Taking a set of vibration signals collected from a human fall as an example, the mean absolute deviation of the data is shown in Figure 9. Since there is no obvious steep drop point in the graph, the intersection of the regression lines fitted to the beginning and the end of the curve is taken as the elbow point when determining the most reasonable number of clusters. Using the elbow method, the optimal value of K1 is determined to be 30. Examples of the patterns generated for the three experimental activities after K-means clustering with K = 30 are shown in Figure 10.
Since the total length of each pattern is 5 s, the time before the activity starts is 0.5 s, and the longest vibration time in a pattern is 1 s across all data sets, the value of T can range from 1.5 to 5 s. Theoretically, the smaller T is, the more pronounced the pattern features become; however, if T is too small, the classifier's fault tolerance decreases accordingly. The value of K2 should be a single positive integer. The control variable method (varying one parameter while holding the others fixed) can be used to determine both pre-defined parameters for the three classifications.
Table 1 shows the performance of the classifier with different T values under otherwise identical conditions, while Table 2 shows the accuracy with different K2 values. The most reasonable values can be selected by comparison: T = 3 s for classifying walking versus falling and for distinguishing falling postures, T = 1.5 s for classifying falls versus object drops, and K2 = 7 in all cases.

3. Results and Analysis

3.1. Human Walking versus Human Fall

In order to ensure that the algorithm can distinguish a human walking from a fall, a series of walking signals were generated. The signals were generated using the software OpenSees and the process was discussed in [47].
The scale of the standard walking pattern generated by the simulation software differs from that of the fall pattern obtained through the experiments, so the values in both patterns were normalized to the range 0–1 for comparison. Figure 11 shows an example of the two sets of data after normalization.
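The normalization itself is a plain min-max rescaling; a one-line sketch:

```python
import numpy as np

def minmax_scale(y):
    """Rescale a pattern's values to the range 0-1 so that simulated walking
    signals and experimentally measured fall signals share the same scale."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min())
```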
Since walking is a continuous behavior, the floor acceleration over time during the entire activity is widely distributed. In contrast, a fall is a single-drop behavior, and the timber floor starts free decay after the fall (Figure 11). Hence, the patterns for these two different human activities are significantly different in terms of:
  • the length of the vibration (parameter 1)
  • the amount of data within the vibration (parameter 2)
Table 3 shows clear differences between the two parameters in terms of mean value, maximum and minimum values, and standard deviation for the two behaviors. Based on this observation, the two activities can be distinguished by setting these parameters as criteria.
Cross-validation and the K-nearest neighbor algorithm were applied to train the binary classifier. All the vibration data were divided into five random groups, and each group was taken in turn as the test set while the remaining groups formed the training set. Table 4 shows the performance of the classifier using each parameter across the five iterations with K2 = 7. The results are good, with parameter 1 performing best: accuracy, precision, and recall all reach 100%.
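The cross-validation scheme can be sketched as follows, reusing the knn_predict() function from Section 2.2.2. The random grouping and the accuracy metric are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cross_validate(features, labels, k=7, n_folds=5, seed=0):
    """Five-fold cross-validation of the KNN classifier; returns mean accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    folds = np.array_split(idx, n_folds)            # five random groups
    accuracies = []
    for i, test in enumerate(folds):
        # The i-th group is the test set; the remaining groups form the training set.
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        preds = [knn_predict(features[train], labels[train], features[t], k)
                 for t in test]
        accuracies.append(np.mean(np.asarray(preds) == labels[test]))
    return float(np.mean(accuracies))
```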

3.2. Human Fall Versus Object Drop

Unlike the binary classification of human walking and human fall, human fall and object drop are both one-drop activities, which means the total length of vibration is relatively short in both cases. Therefore, when distinguishing between the patterns of the two activities, it is necessary to focus on the data distribution within the vibration section. In an object drop, the object touches the ground first and bounces several times until it stops. This causes the floor to vibrate multiple times, but the amplitude of each vibration gradually decreases over time. In a human fall, the floor also vibrates multiple times because different body parts touch the floor successively; however, the sequence of body parts touching the ground is uncertain, so the maximum acceleration of these vibrations does not necessarily decrease over time. It can be seen from Figure 12 that the peak appears relatively later and the total vibration duration is comparatively longer in the human fall pattern. This means that less vibration data lie near 0 on the y-axis in the human fall pattern than in the object drop pattern. In terms of distribution over time, the data from the human fall pattern are relatively widely scattered, while the data from the object drop pattern generally cluster around the wave peak. Since the weight of the object and of the human model are the same, the maximum instantaneous floor acceleration is greater for the object drop than for the human fall.
To digitize these features, assume the total number of data points in the vibration is n, that the data points are p1, p2, p3, …, pn, and that their coordinates are (x1, y1), (x2, y2), (x3, y3), …, (xn, yn). The extracted parameters are listed as follows (Figure 12), and a sketch of their computation is given after the list:
The total length of the vibration: w = xn − x1
The dispersion of data in the vibration: v = (x2 − x1)² + (x3 − x1)² + (x4 − x1)² + … + (xn − x1)²
The number of data points located outside the vibration section (zr)
The aspect ratio of the vibration section: r = w/a, where w = xn − x1 and a = max(|yi|) (Figure 13, Parameter 4)
The number of data points within the vibration which do not appear at the same time as the peak (d)
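The following sketch shows how the five parameters could be computed from the coordinates of a simplified pattern. The definition of the vibration section (points whose absolute acceleration exceeds a threshold) is a hypothetical choice made for illustration; the paper does not spell out this detail:

```python
import numpy as np

def fall_vs_drop_features(x, y, thresh):
    """Compute w, v, zr, r, and d from the (time, acceleration) coordinates of a
    pattern's data points; x is assumed to be sorted in time."""
    inside = np.abs(y) > thresh               # hypothetical vibration-section rule
    xs, ys = x[inside], y[inside]
    w = xs[-1] - xs[0]                        # total length of the vibration
    v = float(np.sum((xs[1:] - xs[0]) ** 2))  # dispersion of data in the vibration
    zr = int(np.sum(~inside))                 # points outside the vibration section
    r = w / np.max(np.abs(ys))                # aspect ratio: w / max(|y|)
    x_peak = xs[np.argmax(np.abs(ys))]
    d = int(np.sum(xs != x_peak))             # in-vibration points away from the peak
    return w, v, zr, r, d
```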
Table 5 shows the average, maximum and minimum, and standard deviation for these five parameters from the human fall and object drop datasets. These values vary between the two activities, but the difference is not pronounced.
Cross-validation and the K-nearest neighbor algorithm were again applied to train the binary classifier. The performance of the classifier using each single parameter as the criterion across the five cross-validation iterations is shown in Table 6. Classification performance was poor in this case, with accuracy below 75% for every parameter.
In order to increase accuracy, instead of using only one parameter as a criterion, combinations of two parameters were applied to improve the classifier. In the plots in Figure 13, the x-axis and y-axis of each graph represent the values of the two parameters in a pair; red dots represent human fall cases and blue dots represent object drop cases. The results of combining the five parameters in pairs are shown in Figure 13. Table 7 shows the performance of the classifier using these ten parameter combinations as criteria. With these new criteria, accuracy increases significantly: three combinations reach an accuracy of over 80%, with the best overall performance achieved by r&d at 85%. Precision and recall also increase significantly.

3.3. Human Falls from Different Postures

Two types of fall posture are examined in this work: the forward fall and the backward fall. When a person falls forward, the knees touch the ground first, followed by the upper body; when a person falls backward, the hips often touch the ground first (Figure 7). In terms of human body weight distribution (Figure 4), the heaviest part is the upper body, which accounts for about half of the total weight. Therefore, when a person leans forward and falls, the upper body touches the ground later, and the dominant wave crest in the corresponding pattern should appear in the second half of the vibration. When a person falls backward with the hips touching the ground first, the first wave crest should be the highest of the entire pattern, and it appears relatively early. Based on this hypothesis, let the total time of the vibration be b, the time difference between the start of the vibration and the appearance of the peak be a, and the distance along the x-axis between the centroid of all data points inside the vibration section and the first data point inside the vibration section be c (Figure 14). The extracted parameters are listed as follows, with a sketch of their computation after the list:
The ratio of the time between the appearance of the peak and the start of the vibration to the total time of the vibration: port = a/b
The ratio of the distance between the centroid of all data points within the vibration and the vibration’s starting point to the total length of the vibration along the x-axis: ave = c/b
The ratio of the distance between the centroid of all data points within the vibration and the vibration’s starting point to the distance between the vibration’s starting point and the peak point along the x-axis: sca = a/c
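A companion sketch for the three posture parameters, using the same hypothetical vibration-section rule as the fall-versus-drop sketch above:

```python
import numpy as np

def posture_features(x, y, thresh):
    """Compute port, ave, and sca from a simplified fall pattern."""
    inside = np.abs(y) > thresh              # hypothetical vibration-section rule
    xs, ys = x[inside], y[inside]
    b = xs[-1] - xs[0]                       # total time of the vibration
    a = xs[np.argmax(np.abs(ys))] - xs[0]    # time from vibration start to the peak
    c = np.mean(xs) - xs[0]                  # start-to-centroid distance on the x-axis
    return a / b, c / b, a / c               # port, ave, sca
```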
Table 8 shows the statistics of these parameters for the two fall postures. The maximum, minimum, and average values of the three parameters differ between the datasets; therefore, these parameters can be used as criteria to recognize human fall postures.
With K2 = 7 in the K-nearest neighbor algorithm, the accuracy, precision, and recall using single parameters and combined parameters as classification criteria are shown in Table 9. Accuracy exceeds 0.9 when the combined parameters port&ave are used as the criterion, with precision and recall increasing accordingly.

3.4. The Framework for Detection

A flowchart was established in this work to set out the procedure for detecting human falls (Figure 15). First, the system distinguishes single-drop activities from continuous activities; here, human walking is taken as representative of continuous activity, and two parameters are selected as criteria, achieving 100% accuracy in distinguishing the two classes. After recognizing a single-drop activity, the next step is to distinguish human falls from object drops. Five parameters are extracted from the patterns as criteria; since no single parameter is sufficiently accurate, a combination of parameters is applied to improve the classifier, and with the combination of parameters r and d as the criterion, classification accuracy reaches 85%. After recognizing a human fall, the final classifier distinguishes the fall posture. Three parameters are extracted to determine the posture, and an accuracy of 91% is achieved using a combination of two parameters as the criterion.
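The three-stage decision logic of Figure 15 amounts to a short cascade. In this sketch, is_single_drop, is_human_fall, and fall_posture are placeholders standing for KNN classifiers trained on the parameter sets described above; they are not real APIs:

```python
def classify_event(pattern, is_single_drop, is_human_fall, fall_posture):
    """Cascade the three binary classifiers from Figure 15 over one pattern."""
    if not is_single_drop(pattern):    # stage 1: continuous activity vs. single drop
        return "continuous activity (e.g., walking)"
    if not is_human_fall(pattern):     # stage 2: human fall vs. object drop (r & d)
        return "object drop"
    return fall_posture(pattern)       # stage 3: forward vs. backward fall (port & ave)
```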

4. Discussion

(1) Using a 3D-printed model to simulate unconscious falls.
In former studies, researchers collected floor vibration data for different falls by asking participants to fall on purpose. It is hard to simulate unconscious falls in this way, as participants involuntarily control the sequence of landing body parts even when attempting not to. Using a human body model to simulate falls avoids this problem while achieving high accuracy. Liu and his team used a dummy to simulate falls from a standing position [40], while Alwan and his team used two dummies to simulate falls from standing and sitting positions, respectively [33]. Both of the proposed classifiers achieved high accuracy. The results of this research demonstrate that a 1:4-scale 3D-printed model can also achieve high accuracy compared with related research outcomes. The model is designed to simulate unconscious falls in different postures by controlling the body hinges. This expands the current field of study and provides a new approach to analyzing human activity in future studies.
(2) Using simplified patterns of certain activities in feature extraction and classification.
Instead of analyzing the physical characteristics of the raw data, simplified patterns of the activities are obtained before classification in this study, and features are extracted directly from these patterns. A machine learning approach was introduced at this step to preprocess the vibration data. Clustering not only reduced the total amount of data and accelerated the procedure, but also made the data features more prominent and easier to extract in the subsequent classification. Pattern recognition has already been successfully applied in various fields, such as computer-aided diagnosis, machine vision, data mining, and knowledge discovery. However, behavior recognition based on floor vibration is still in its initial stages. This preprocessing of data provides a new approach to human activity recognition.
(3) Feasibility of using mobile built-in sensors to detect falls.
The study explored the feasibility of using mobile built-in sensors to detect falls, which can be further used for wireless networking between hospitals and older people's homes. Unlike previously developed fall detecting systems using high-sensitivity sensors, the proposed method can achieve high accuracy even with low-sensitivity sensors. Compared with installing dedicated sensors in older people's homes, which is expensive and difficult to promote, using mobile phones to detect human falls is more affordable and accessible. Since the proposed system relies on a mobile phone's built-in sensor, the collected data can be transmitted in real time to the cloud. For further development, an app could be built on this method to satisfy the demand for wireless networking between hospitals and older people's homes in the context of the upcoming 5G era. This could help hospitals prepare for and treat patients in the shortest time possible. Apart from housing for older people, the application could also be widely adopted in hospitals and nursing homes to reduce the risk to older residents and enhance the working efficiency of staff.
(4) The significance in identifying fall postures.
Many researchers have already succeeded in using floor vibration to identify falls among other activities. However, there is limited research on using floor vibration to recognize fall postures, even though the way people fall has a significant influence on health outcomes. There are three common postures in most falls, namely forward, backward, and sideways [48]. Different fall postures have different levels of injurious consequences [49]. This research focused on unconscious falls, and since sideways falls do not often occur in unconscious falls, only forward and backward falls are considered here. Forward falls are often accompanied by soft tissue injuries, joint dislocations, and upper limb injuries. The high potential energy of a fall and a hard landing surface are known to be independent risk factors for hip fractures when a sideways fall occurs [8]. Backward falls carry an increased risk of head injury, which can lead to severe long-term sequelae. If older people do not receive appropriate medical treatment in time after a fall, their health will deteriorate quickly. Therefore, a detection system that can identify the fall posture and apprise the hospital of the patient's situation in advance can assist the medical process. The proposed system can identify fall postures with 91% accuracy using a mobile phone.

5. Conclusions

The purpose of this study is to develop an intelligent detection system embedded within a building that determines when an unconscious fall occurs and distinguishes between different fall postures based on floor vibration data. The system improves safety in the home environment and enhances health care for the elderly.
To realize this, this paper investigates the use of machine learning algorithms as classification methods to identify fall events with different postures. By applying the K-means algorithm and the K-nearest neighbor algorithm, the fall detection system successfully classifies the patterns generated by the various fall activities through the application of machine learning.
The performance of the proposed method was validated experimentally, with two variations of simulated human falls using a 3D-printed model with adjustable joints, as well as object drops. Three classifiers were developed to distinguish human falls from human walking, human falls from object drops, and human forward falls from human backward falls. The results showed that the accuracy of these three classifications reached 100%, 85%, and 91%, respectively. In summary, the results confirmed the performance of the proposed system and demonstrated great potential in distinguishing falls from other activities, as well as in identifying different fall postures from floor vibration. The classification system developed in this research proves the feasibility of this novel method of using machine learning algorithms to build a pattern recognition system for detecting human falls.
The developed system achieved high accuracy in identifying falls and recognizing fall postures. This can expedite medical treatment, as hospitals can be informed in advance about the nature of the fall experienced by an elderly patient. Moreover, the proposed system is based on low-sensitivity mobile built-in sensors, which are accessible and affordable. It can be widely promoted and installed in older people's apartments and nursing homes to improve the quality of life of the elderly without interrupting their normal daily routine.

Author Contributions

Data curation, X.W., W.S., and S.I.; funding acquisition, H.G.; methodology, H.G. and W.-S.C.; software, W.S.; supervision, W.-S.C.; writing—original draft, Y.S. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52078153, and the Heilongjiang Provincial Natural Science Foundation of China, grant number LH2019E110.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Carone, G.; Costello, D. Can Europe afford to grow old? Financ. Dev. 2006, 43, 28–31. [Google Scholar]
  2. Gobbo, L.A.; Júdice, P.B.; Hetherington-Rauth, M.; Sardinha, L.B.; Dos Santos, V.R. Sedentary Patterns Are Associated with Bone Mineral Density and Physical Function in Older Adults: Cross-Sectional and Prospective Data. Int. J. Environ. Res. Public Health 2020, 17, 8198. [Google Scholar] [CrossRef]
  3. Lee, J.; Ham, M.J.; Pyeon, J.Y.; Oh, E.; Jeong, S.H.; Sohn, E.H.; Lee, A.Y. Factors Affecting Cognitive Impairment and Depression in the Elderly Who Live Alone: Cases in Daejeon Metropolitan City. Dement. Neurocogn. Disord. 2017, 16, 12–19. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Bhattacharya, B.; Maung, A.; Schuster, K.; Davis, K.A. The older they are the harder they fall: Injury patterns and outcomes by age after ground level falls. Injury 2016, 47, 1955–1959. [Google Scholar] [CrossRef]
  5. Ahmed, N.; Kuo, Y.-H. Evaluating the outcomes of blunt thoracic trauma in elderly patients following a fall from a ground level: Higher level care institution vs. lower level care institution. Eur. J. Trauma Emerg. Surg. 2019. [Google Scholar] [CrossRef]
  6. Tinetti, M.E.; Liu, W.-L.; Claus, E.B. Predictors and Prognosis of Inability to Get Up After Falls Among Elderly Persons. JAMA 1993, 269, 65–70. [Google Scholar] [CrossRef]
  7. Wild, D.; Nayak, U.S.; Isaacs, B. How dangerous are falls in old people at home? BMJ 1981, 282, 266–268. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Luukinen, H.; Herala, M.; Koski, K.; Honkanen, R.; Laippala, P.; Kivelä, S.-L. Fracture Risk Associated with a Fall According to Type of Fall Among the Elderly. Osteoporos. Int. 2000, 11, 631–634. [Google Scholar] [CrossRef]
  9. Sixsmith, A.; Johnson, N. A smart sensor to detect the falls of the elderly. IEEE Pervasive Comput. 2004, 3, 42–47. [Google Scholar] [CrossRef]
  10. Rafferty, J.; Synnott, J.; Nugent, C.; Morrison, G.; Tamburini, E. Fall Detection through Thermal Vision Sensing. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 84–90. [Google Scholar]
  11. Elshwemy, F.; Elbasiony, R.; Saidahmed, M. A New Approach for Thermal Vision based Fall Detection Using Residual Autoencoder. Int. J. Intell. Eng. Syst. 2020, 13, 250–258. [Google Scholar] [CrossRef]
  12. Dong, Q.; Yang, Y.; Wang, H.; Xu, J. Fall alarm and inactivity detection system design and implementation on raspberry pi. In Proceedings of the 2015 17th International Conference on Advanced Communication Technology (ICACT), Seoul, Korea, 1–3 July 2015; pp. 382–386. [Google Scholar]
  13. Foroughi, H.; Aski, B.S.; Pourreza, H. Intelligent video surveillance for monitoring fall detection of elderly in home environments. In Proceedings of the 2008 11th International Conference on Computer and Information Technology, Khulna, Bangladesh, 24–27 December 2008; pp. 219–224. [Google Scholar]
  14. Gunale, K.; Mukherji, P. Indoor Human Fall Detection System Based on Automatic Vision Using Computer Vision and Machine Learning Algorithms. J. Eng. Sci. Technol. 2018, 13, 2587–2605. [Google Scholar]
  15. Diraco, G.; Leone, A.; Siciliano, P. An active vision system for fall detection and posture recognition in elderly healthcare. In Proceedings of the 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010), Dresden, Germany, 8–12 March 2010; pp. 1536–1541. [Google Scholar]
  16. Khraief, C.; Amiri, H.; Benzarti, F. Vision-based fall detection for elderly people using body parts movement and shape analysis. In Proceedings of the Eleventh International Conference on Machine Vision (ICMV 2018), Munich, Germany, 1–3 November 2018; Volume 11041, p. 110410K. [Google Scholar]
  17. Harrou, F.; Zerrouki, N.; Sun, Y.; Houacine, A. An Integrated Vision-Based Approach for Efficient Human Fall Detection in a Home Environment. IEEE Access 2019, 7, 114966–114974. [Google Scholar] [CrossRef]
  18. Zhang, L.; Fang, C.; Zhu, M. A Computer Vision-Based Dual Network Approach for Indoor Fall Detection. Int. J. Innov. Sci. Res. Technol. 2020, 5, 939–943. [Google Scholar] [CrossRef]
  19. De Miguel, K.; Brunete, A.; Hernando, M.; Gambao, E. Home camera-based fall detection system for the elderly. Sensors 2017, 17, 2864. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Alonso, M.; González, A.B.; Hernando, M.; Gambao, E. Background-Subtraction Algorithm Optimization for Home Camera-Based Night-Vision Fall Detectors. IEEE Access 2019, 7, 152399–152411. [Google Scholar] [CrossRef]
  21. Panahi, L.; Ghods, V. Human fall detection using machine vision techniques on RGB–D images. Biomed. Signal Process. Control. 2018, 44, 146–153. [Google Scholar] [CrossRef]
  22. Lai, C.; Huang, Y.; Park, J.H.; Chao, H. Adaptive body posture analysis for elderly-falling detection with mul-tisensors. IEEE Ann. Hist. Comput. 2010, 25, 20–30. [Google Scholar]
  23. Gjoreski, H.; Luštrek, M.; Gams, M. Accelerometer Placement for Posture Recognition and Fall Detection. In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 47–54. [Google Scholar]
  24. Chander, H.; Burch, R.; Talegaonkar, P.; Saucier, D.; Luczak, T.; Ball, J.; Turner, A.; Arachchige, S.N.K.K.; Carroll, W.; Smith, B.K.; et al. Wearable Stretch Sensors for Human Movement Monitoring and Fall Detection in Ergonomics. Int. J. Environ. Res. Public Health 2020, 17, 3554. [Google Scholar] [CrossRef]
  25. Kyriakopoulos, G.; Ntanos, S.; Anagnostopoulos, T.; Tsotsolas, N.; Salmon, I.; Ntalianis, K. Internet of things (IoT)-enabled elderly fall verification, exploiting temporal inference models in smart homes. Int. J. Environ. Res. Public Health 2020, 17, 408. [Google Scholar] [CrossRef] [Green Version]
  26. Schwickert, L.; Klenk, J.; Zijlstra, W.; Forst-Gill, M.; Sczuka, K.; Helbostad, J.L.; Chiari, L.; Aminian, K.; Todd, C.; Becker, C. Reading from the Black Box: What Sensors Tell Us about Resting and Recovery after Real-World Falls. Gerontology 2017, 64, 90–95. [Google Scholar] [CrossRef]
  27. Sucerquia, A.; Lopez, J.D.; Vargas-Bonilla, J.F. SisFall: A Fall and Movement Dataset. Sensors 2017, 17, 198. [Google Scholar] [CrossRef] [PubMed]
  28. Zakaria, N.A.; Kuwae, Y.; Tamura, T.; Minato, K.; Kanaya, S. Quantitative analysis of fall risk using TUG test. Comput. Methods Biomech. Biomed. Eng. 2013, 18, 426–437. [Google Scholar] [CrossRef] [PubMed]
  29. Di Rosa, M.; Hausdorff, J.M.; Stara, V.; Rossi, L.; Glynn, L.G.; Casey, M.; Burkard, S.; Cherubini, A. Concurrent validation of an index to estimate fall risk in community dwelling seniors through a wireless sensor insole system: A pilot study. Gait Posture 2017, 55, 6–11. [Google Scholar] [CrossRef] [PubMed]
  30. Brodie, M.A.; Lord, S.R.; Coppens, M.J.; Annegarn, J.; Delbaere, K. Eight-Week Remote Monitoring Using a Freely Worn Device Reveals Unstable Gait Patterns in Older Fallers. IEEE Trans. Biomed. Eng. 2015, 62, 2588–2594. [Google Scholar] [CrossRef]
  31. Ejupi, A.; Brodie, M.; Lord, S.R.; Annegarn, J.; Redmond, S.J.; Delbaere, K. Wavelet-Based Sit-To-Stand Detection and Assessment of Fall Risk in Older People Using a Wearable Pendant Device. IEEE Trans. Biomed. Eng. 2017, 64, 1602–1607. [Google Scholar] [CrossRef]
  32. Pierleoni, P.; Belli, A.; Palma, L.; Pellegrini, M.; Pernini, L.; Valenti, S. A high reliability wearable device for elderly fall detection. IEEE Sens. J. 2015, 15, 4544–4553. [Google Scholar] [CrossRef]
  33. Alwan, M.; Rajendran, P.J.; Kell, S.W.; Mack, D.C.; Dalal, S.; Wolfe, M.H.; Felder, R.A. A Smart and Passive Floor-Vibration Based Fall Detector for Elderly. In Proceedings of the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria, 24–28 April 2006; Volume 1, pp. 1003–1007. [Google Scholar]
  34. Wang, K.; Delbaere, K.; Brodie, M.A.D.; Lovell, N.H.; Kark, L.; Lord, S.R.; Redmond, S.J. Differences Between Gait on Stairs and Flat Surfaces in Relation to Fall Risk and Future Falls. IEEE J. Biomed. Health Inform. 2017, 21, 1479–1486. [Google Scholar] [CrossRef]
  35. Shahzad, A.; Ko, S.; Lee, S.; Lee, J.A.; Kim, K. Quantitative assessment of balance impairment for fall-risk estimation using wearable triaxial accelerometer. IEEE Sens. J. 2017, 17, 6743–6751. [Google Scholar] [CrossRef]
  36. Ponti, M.A.; Bet, P.; Oliveira, C.L.; Castro, P.C. Better than counting seconds: Identifying fallers among healthy elderly using fusion of accelerometer features and dual-task Timed Up and Go. PLoS ONE 2017, 12, e0175559. [Google Scholar] [CrossRef]
  37. Palmerini, L.; Bagalà, F.; Zanetti, A.; Klenk, J.; Becker, C.; Cappello, A. A wavelet-based approach to fall detection. Sensors 2015, 15, 11575–11586. [Google Scholar] [CrossRef] [Green Version]
  38. Werner, F.; Diermaier, J.; Schmid, S.; Panek, P. Fall detection with distributed floor-mounted accelerometers: An overview of the development and evaluation of a fall detection system within the project eHome. In Proceedings of the 2011 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops, Dublin, Ireland, 23–26 May 2011; pp. 354–361. [Google Scholar]
  39. Yazar, A.; Erden, F.; Cetin, A.E. Multi-sensor ambient assisted living system for fall detection. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’14), Florence, Italy, 4–9 May 2014; pp. 1–3. [Google Scholar]
  40. Liu, C.; Jiang, Z.; Su, X.; Benzoni, S.; Maxwell, A. Detection of Human Fall Using Floor Vibration and Multi-Features Semi-Supervised SVM. Sensors 2019, 19, 3720. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. De Silva, L.C.; Darussalam, B. Audiovisual sensing of human movements for home-care and security in a smart environment. Int. J. Smart Sens. Intell. Syst. 2008, 1, 220–245. [Google Scholar] [CrossRef] [Green Version]
  42. Charney, P.; Malone, A. ADA Pocket Guide to Nutrition Assessment; American Dietetic Associati: Chicago, IL, USA, 2009. [Google Scholar]
  43. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297. [Google Scholar]
  44. Pelleg, D.; Moore, A. Accelerating exact k-means algorithms with geometric reasoning. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Association for Computing Machinery: New York, NY, USA, 1999; pp. 277–281. [Google Scholar]
  45. Thorndike, R.L. Who belongs in the family? Psychometrika 1953, 18, 267–276. [Google Scholar] [CrossRef]
  46. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185. [Google Scholar]
  47. Huang, H.; Gao, Y.; Chang, W.S. Human-induced vibration of cross-laminated timber (CLT) floor under dif-ferent boundary conditions. Eng. Struct. 2020, 204, 110016. [Google Scholar] [CrossRef]
  48. O’Neill, T.W.; Varlow, J.; Silman, A.J.; Reeve, J.; Reid, D.M.; Todd, C.; Woolf, A.D. Age and sex influences on fall characteristics. Ann. Rheum. Dis. 1994, 53, 773–775. [Google Scholar] [CrossRef] [Green Version]
  49. Sotimehin, A.E.; Yonge, A.V.; Mihailovic, A.; West, S.K.; Friedman, D.S.; Gitlin, L.N.; Ramulu, P.Y. Locations, Circumstances, and Outcomes of Falls in Patients With Glaucoma. Am. J. Ophthalmol. 2018, 192, 131–141. [Google Scholar] [CrossRef]
Figure 1. Proportion of body parts of the model (data source [41]).
Figure 2. The moving mode and range of each joint in the model.
Figure 3. Components required for model assembly.
Figure 4. Model load (data source [42]).
Figure 5. Experiment setup.
Figure 6. Time-history record for human fall.
Figure 7. The process for forward and backward falls. (a) Front of the first step of the forward fall; (b) front of the second step of the forward fall; (c) front of the third step of the forward fall; (d) left side of the first step of the forward fall; (e) left side of the second step of the forward fall; (f) left side of the third step of the forward fall; (g) front of the first step of the backward fall; (h) front of the second step of the backward fall; (i) front of the third step of the backward fall; (j) left side of the first step of the backward fall; (k) left side of the second step of the backward fall; (l) left side of the third step of the backward fall.
Figure 8. The experiment of object drops.
Figure 9. Selection of the K value using the elbow method.
Figure 10. Pattern generation when K = 30. (a) Original pattern of human forward fall; (b) simplified pattern of human forward fall; (c) original pattern of human backward fall; (d) simplified pattern of human backward fall; (e) original pattern of object drop; (f) simplified pattern of object drop.
Figure 11. Standardized patterns for walking and fall. (a) Original walking pattern; (b) original fall pattern; (c) simplified walking pattern; (d) simplified fall pattern.
Figure 12. Extracted parameters of human fall and object drop.
Figure 13. Plot of human fall (red dots) and object drop (blue dots) with different combined parameters (parameter 1: w, parameter 2: v, parameter 3: zr, parameter 4: r, parameter 5: d).
Figure 14. Features extracted from patterns for forward and backward falls. (a) Original pattern of backward fall; (b) original pattern of forward fall; (c) simplified pattern of backward fall; (d) simplified pattern of forward fall.
Figure 15. Workflow for the classification system.
Table 1. Accuracy of classification with different T values.

| Parameter Setting | People Walking vs. Falling | People Falling vs. Object Drops | Different Falling Postures |
| --- | --- | --- | --- |
| T = 1.5 s, K1 = 30, K2 = 9 | 0.82 | 0.82 | 0.85 |
| T = 2 s, K1 = 30, K2 = 9 | 0.87 | 0.78 | 0.85 |
| T = 2.5 s, K1 = 30, K2 = 9 | 0.85 | 0.77 | 0.86 |
| T = 3 s, K1 = 30, K2 = 9 | 0.89 | 0.78 | 0.87 |
| T = 3.5 s, K1 = 30, K2 = 9 | 0.88 | 0.81 | 0.83 |
| T = 4 s, K1 = 30, K2 = 9 | 0.78 | 0.80 | 0.80 |
Table 2. Accuracy of classification with different K2 values.

| Parameter Setting | People Walking vs. Falling (T = 3) | People Falling vs. Object Drops (T = 1.5) | Different Falling Postures (T = 3) |
| --- | --- | --- | --- |
| K1 = 30, K2 = 3 | 0.89 | 0.79 | 0.86 |
| K1 = 30, K2 = 5 | 0.91 | 0.79 | 0.87 |
| K1 = 30, K2 = 7 | 0.92 | 0.82 | 0.88 |
| K1 = 30, K2 = 9 | 0.89 | 0.82 | 0.87 |
Table 3. Values of the selected parameters in different data sets.

| Parameter | Value | Human Walking | Human Fall |
| --- | --- | --- | --- |
| Parameter 1: Vibration time (s) | Average | 0.5267 | 0.3856 |
| | Maximum | 0.5875 | 0.9757 |
| | Minimum | 0.4344 | 0.1674 |
| | Standard deviation | 0.0459 | 0.1334 |
| Parameter 2: Number of dots | Average | 28.7142 | 13.9800 |
| | Maximum | 30 | 20 |
| | Minimum | 27 | 4 |
| | Standard deviation | 1.2777 | 2.8305 |
Table 4. Index of the classification using a single parameter. (CV n: cross-validation iteration n.)

| Parameter | Index | CV 1 | CV 2 | CV 3 | CV 4 | CV 5 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Parameter 1 | Accuracy | 1 | 1 | 1 | 1 | 1 | 1 |
| | Precision | 1 | 1 | 1 | 1 | 1 | 1 |
| | Recall | 1 | 1 | 1 | 1 | 1 | 1 |
| Parameter 2 | Accuracy | 1 | 0.98 | 1 | 1 | 1 | 0.996 |
| | Precision | 1 | 0.875 | 1 | 1 | 1 | 0.975 |
| | Recall | 1 | 0.93 | 1 | 1 | 1 | 0.986 |
Table 5. Performance of selected parameters in different data sets.

| Parameter | Value | Human Fall | Object Drop |
| --- | --- | --- | --- |
| Parameter 1: w (s) | Average | 0.3856 | 0.2576 |
| | Maximum | 0.9756 | 0.6501 |
| | Minimum | 0.1674 | 0.1094 |
| | Standard deviation | 0.1333 | 0.1104 |
| Parameter 2: v (s²) | Average | 0.6800 | 0.2336 |
| | Maximum | 4.0155 | 1.1266 |
| | Minimum | 0.0348 | 0.0140 |
| | Standard deviation | 0.6174 | 0.2178 |
| Parameter 3: zr (number of dots) | Average | 16.0238 | 18.5000 |
| | Maximum | 26 | 26 |
| | Minimum | 10 | 9 |
| | Standard deviation | 2.8241 | 3.4071 |
| Parameter 4: r (s³/m) | Average | 15.9050 | 42.0024 |
| | Maximum | 52.0430 | 108.7313 |
| | Minimum | 2.3360 | 7.4762 |
| | Standard deviation | 12.0696 | 20.3400 |
| Parameter 5: d (number of dots) | Average | 3.1042 | 1.2543 |
| | Maximum | 14.6349 | 4.9231 |
| | Minimum | 0.3348 | 0 |
| | Standard deviation | 2.1492 | 1.2223 |
Table 6. Index of classification of human falls and object drops. (CV n: cross-validation iteration n.)

| Parameter | Index | CV 1 | CV 2 | CV 3 | CV 4 | CV 5 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Parameter 1 (w) | Accuracy | 0.563 | 0.637 | 0.594 | 0.627 | 0.548 | 0.594 |
| | Precision | 0.688 | 0.563 | 0.688 | 0.706 | 0.684 | 0.6658 |
| | Recall | 0.688 | 0.563 | 0.688 | 0.75 | 0.812 | 0.7002 |
| Parameter 2 (v) | Accuracy | 0.75 | 0.656 | 0.656 | 0.688 | 0.718 | 0.694 |
| | Precision | 0.722 | 0.692 | 0.667 | 0.688 | 0.684 | 0.6906 |
| | Recall | 0.813 | 0.563 | 0.625 | 0.688 | 0.812 | 0.7002 |
| Parameter 3 (zr) | Accuracy | 0.75 | 0.625 | 0.625 | 0.656 | 0.593 | 0.645 |
| | Precision | 0.786 | 1 | 1 | 0.857 | 0.667 | 0.862 |
| | Recall | 0.688 | 0.25 | 0.25 | 0.375 | 0.375 | 0.3876 |
| Parameter 4 (r) | Accuracy | 0.755 | 0.7 | 0.737 | 0.775 | 0.775 | 0.7484 |
| | Precision | 1 | 0.786 | 1 | 0.909 | 1 | 0.939 |
| | Recall | 0.688 | 0.688 | 0.625 | 0.625 | 0.687 | 0.6626 |
| Parameter 5 (d) | Accuracy | 0.594 | 0.625 | 0.557 | 0.594 | 0.6 | 0.594 |
| | Precision | 0.722 | 0.588 | 0.667 | 0.684 | 0.684 | 0.669 |
| | Recall | 0.813 | 0.625 | 0.625 | 0.813 | 0.813 | 0.7378 |
Table 7. Index of the classification of human fall and object drop with combined parameters. (CV n: cross-validation iteration n.)

| Parameter Combination | Index | CV 1 | CV 2 | CV 3 | CV 4 | CV 5 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| w&v | Accuracy | 0.75 | 0.813 | 0.75 | 0.75 | 0.75 | 0.7626 |
| | Precision | 0.722 | 0.778 | 0.75 | 0.72 | 0.722 | 0.7384 |
| | Recall | 0.813 | 0.875 | 0.75 | 0.812 | 0.812 | 0.8124 |
| w&zr | Accuracy | 0.758 | 0.779 | 0.792 | 0.715 | 0.775 | 0.7638 |
| | Precision | 0.716 | 1 | 0.705 | 0.736 | 0.733 | 0.778 |
| | Recall | 0.75 | 0.688 | 0.75 | 0.875 | 0.687 | 0.75 |
| w&r | Accuracy | 0.754 | 0.775 | 0.797 | 0.713 | 0.775 | 0.7628 |
| | Precision | 1 | 0.785 | 1 | 0.909 | 1 | 0.9388 |
| | Recall | 0.688 | 0.687 | 0.625 | 0.625 | 0.688 | 0.6626 |
| w&d | Accuracy | 0.75 | 0.718 | 0.718 | 0.718 | 0.75 | 0.7308 |
| | Precision | 0.7 | 0.705 | 0.705 | 0.684 | 0.7 | 0.6988 |
| | Recall | 0.875 | 0.75 | 0.75 | 0.812 | 0.875 | 0.8124 |
| v&zr | Accuracy | 0.844 | 0.853 | 0.846 | 0.862 | 0.843 | 0.8496 |
| | Precision | 0.824 | 0.736 | 0.778 | 0.823 | 1 | 0.8322 |
| | Recall | 0.875 | 0.875 | 0.875 | 0.875 | 0.688 | 0.8376 |
| v&r | Accuracy | 0.815 | 0.835 | 0.785 | 0.85 | 0.835 | 0.824 |
| | Precision | 1 | 0.909 | 1 | 0.909 | 1 | 0.9636 |
| | Recall | 0.688 | 0.625 | 0.625 | 0.625 | 0.688 | 0.6502 |
| v&d | Accuracy | 0.688 | 0.812 | 0.781 | 0.687 | 0.718 | 0.7372 |
| | Precision | 0.667 | 0.812 | 0.909 | 0.687 | 0.684 | 0.7518 |
| | Recall | 0.75 | 0.812 | 0.625 | 0.687 | 0.812 | 0.7372 |
| zr&r | Accuracy | 0.784 | 0.784 | 0.784 | 0.767 | 0.778 | 0.7794 |
| | Precision | 0.923 | 0.812 | 0.923 | 0.833 | 0.786 | 0.8554 |
| | Recall | 0.75 | 0.812 | 0.75 | 0.625 | 0.688 | 0.725 |
| zr&d | Accuracy | 0.75 | 0.781 | 0.75 | 0.75 | 0.75 | 0.7562 |
| | Precision | 0.722 | 0.909 | 0.722 | 0.722 | 0.722 | 0.7594 |
| | Recall | 0.813 | 0.625 | 0.815 | 0.815 | 0.812 | 0.776 |
| r&d | Accuracy | 0.844 | 0.887 | 0.847 | 0.833 | 0.852 | 0.8526 |
| | Precision | 1 | 0.75 | 0.909 | 0.916 | 1 | 0.915 |
| | Recall | 0.688 | 0.562 | 0.625 | 0.687 | 0.625 | 0.6374 |
Table 8. Performance of selected parameters in different datasets.

| Parameter | Value | Human Forward Fall | Human Backward Fall |
| --- | --- | --- | --- |
| Parameter 1: port | Average | 0.6122 | 0.3579 |
| | Maximum | 1 | 0.5511 |
| | Minimum | 0.3549 | 0.1148 |
| | Standard deviation | 0.1295 | 0.1061 |
| Parameter 2: ave | Average | 0.5391 | 0.4604 |
| | Maximum | 0.7731 | 0.6062 |
| | Minimum | 0.3250 | 0.2739 |
| | Standard deviation | 0.0826 | 0.0834 |
| Parameter 3: sca | Average | 1.1367 | 0.7710 |
| | Maximum | 1.5913 | 1.0841 |
| | Minimum | 0.7575 | 0.2974 |
| | Standard deviation | 0.1664 | 0.1624 |
Table 9. Index of the classification of human fall postures with single and combined parameters. (CV n: cross-validation iteration n.)

| Parameter Combination | Index | CV 1 | CV 2 | CV 3 | CV 4 | CV 5 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| port | Accuracy | 0.753 | 0.737 | 0.792 | 0.715 | 0.778 | 0.755 |
| | Precision | 0.706 | 0.706 | 0.7 | 0.9 | 0.722 | 0.7468 |
| | Recall | 0.8 | 0.8 | 0.8 | 0.6 | 0.815 | 0.763 |
| ave | Accuracy | 0.678 | 0.694 | 0.648 | 0.715 | 0.733 | 0.6936 |
| | Precision | 0.632 | 0.875 | 0.6 | 1 | 0.887 | 0.7988 |
| | Recall | 0.8 | 0.933 | 0.8 | 0.667 | 1 | 0.84 |
| sca | Accuracy | 0.9 | 0.889 | 0.878 | 0.92 | 0.9 | 0.8974 |
| | Precision | 0.875 | 0.825 | 0.875 | 1 | 0.667 | 0.8484 |
| | Recall | 0.933 | 0.767 | 0.93 | 0.667 | 1 | 0.8594 |
| port&ave | Accuracy | 0.9 | 0.937 | 0.894 | 0.9 | 0.915 | 0.9092 |
| | Precision | 0.875 | 1 | 0.825 | 0.722 | 0.684 | 0.8212 |
| | Recall | 0.93 | 0.688 | 0.767 | 0.815 | 0.812 | 0.8024 |
| port&sca | Accuracy | 0.8 | 0.834 | 0.745 | 0.836 | 0.815 | 0.806 |
| | Precision | 0.875 | 0.875 | 0.737 | 1 | 0.75 | 0.8474 |
| | Recall | 0.93 | 0.93 | 0.933 | 0.688 | 1 | 0.8962 |
| ave&sca | Accuracy | 0.912 | 0.878 | 0.885 | 0.925 | 0.947 | 0.9094 |
| | Precision | 0.737 | 0.825 | 0.875 | 1 | 0.736 | 0.8346 |
| | Recall | 0.933 | 0.767 | 0.93 | 0.688 | 0.933 | 0.8502 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
