
Body fat prediction through feature extraction based on anthropometric and laboratory measurements

  • Zongwen Fan,

    Roles Formal analysis, Investigation, Methodology, Software, Validation, Writing – original draft

    Affiliations School of Information and Physical Sciences, The University of Newcastle, Callaghan, NSW, Australia, College of Computer Science and Technology, Huaqiao University, Xiamen, China

  • Raymond Chiong ,

    Roles Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing

    Raymond.Chiong@newcastle.edu.au

    Affiliation School of Information and Physical Sciences, The University of Newcastle, Callaghan, NSW, Australia

  • Zhongyi Hu,

    Roles Investigation, Supervision, Writing – review & editing

    Affiliation School of Information Management, Wuhan University, Wuhan, China

  • Farshid Keivanian,

    Roles Investigation, Validation

    Affiliation School of Information and Physical Sciences, The University of Newcastle, Callaghan, NSW, Australia

  • Fabian Chiong

    Roles Investigation, Supervision, Writing – review & editing

    Affiliation Alice Springs Hospital, The Gap, NT, Australia

Abstract

Obesity, associated with having excess body fat, is a critical public health problem that can cause serious diseases. Although a range of techniques for body fat estimation have been developed to assess obesity, these typically involve high-cost tests requiring special equipment. Thus, the accurate prediction of body fat percentage based on easily accessible body measurements is important for assessing obesity and its related diseases. By considering the characteristics of different features (e.g. body measurements), this study investigates the effectiveness of feature extraction for body fat prediction. It evaluates the performance of three feature extraction approaches by comparing four well-known prediction models. Experimental results based on two real-world body fat datasets show that the prediction models perform better when feature extraction is incorporated, in terms of the mean absolute error, standard deviation, root mean square error and robustness. These results confirm that feature extraction is an effective pre-processing step for predicting body fat. In addition, statistical analysis confirms that feature extraction significantly improves the performance of the prediction methods. Moreover, increasing the number of extracted features yields further, albeit slight, improvements to the prediction models. The findings of this study provide a baseline for future research in related areas.

1 Introduction

Obesity, characterised by excess body fat, is a medical problem that increases one's risk of other diseases and health issues, such as cardiovascular diseases, diabetes, musculoskeletal disorders, depression and certain cancers [1–3]. These diseases escalate the spiralling economic and social costs borne by nations [4]. Conversely, having extremely low body fat is also a significant risk factor for infection in children and adolescents [5], and it may cause pubertal delay [6], osteoporosis [7] and surgical complications [8]. Thus, the accurate prediction of both excess and low body fat is critical to identifying possible treatments, which would prevent serious health problems. Although a huge volume of medical data is available from sensors, electronic medical health records, smartphone applications and insurance records, analysing these data is difficult [9]. There are often too many measurements (features), leading to the curse of dimensionality [10] from a data analytics viewpoint. With a relatively small number of patient samples but a large number of disease measurements, it is very challenging to train a highly accurate prediction model [11]. In addition, redundant, irrelevant or noisy features may further hinder the prediction model's performance [12].

Feature extraction, an important data pre-processing tool in data mining, has been applied to reduce the number of input features by creating new, more representative combinations of features [13]. This process reduces the number of features without significant information loss [14]. In this study, three widely used feature extraction methods are utilised to reduce the number of features. Specifically, by analysing a large set of interrelated features, Factor Analysis (FA) can be used to extract the underlying factors (latent features) [15]. It is able to identify latent factors that adequately predict a dataset of interest. Unlike FA, which assumes there is an underlying model, Principal Component Analysis (PCA) is a descriptive feature reduction method that applies an optimal set of derived features, extracted from the original features, for model training [16]. PCA data projection concerns only the variances between samples and their distribution. Independent Component Analysis (ICA), a technique that assumes the data to be linear mixtures of non-Gaussian independent sources [17], is widely used in blind source separation applications [18].

Feature extraction has been widely used in the medical area to map redundant, relevant and irrelevant features into a smaller set of features from the original data [19, 20]. For example, Das et al. [21] applied feature extraction methods to extract significant features from the raw data before using an Artificial Neural Network (ANN) model for medical disease classification. Their results showed that feature extraction methods could increase the accuracy of diagnosis. Tran et al. [22] proposed an improved FA method for cancer subtyping and risk prediction with good results. Sudharsan and Thailambal [23] applied PCA to pre-process the experimental datasets used for predicting Alzheimer’s disease. Their results showed that applying PCA for pre-processing could improve the precision of the prediction model. In the work of Franzmeier et al. [24], ICA was utilised to extract features from cross-sectional data for connectivity-based prediction of tau spreading in Alzheimer’s disease with impressive results.

In addition, machine learning methods have been increasingly applied to solve body fat prediction problems [25]. Shukla and Raghuvanshi [26] showed that the ANN model is effective for estimating the body fat percentage using anthropometric data in a non-diseased group. Kupusinac et al. [27] also employed ANNs for body fat prediction and achieved high prediction accuracy. Keivanian et al. [28, 29] considered a weighted sum of body fat prediction errors and the ratio of features, and optimised the prediction using a metaheuristic search-based feature selection-Multi-Layer Perceptron (MLP) model (MLP is a type of ANN). Chiong et al. [30] proposed an improved relative-error Support Vector Machine (SVM) for body fat prediction with promising results. Fan et al. [53] hybridised a fuzzy-weighted operation and Gaussian kernel-based machine learning models to predict the body fat percentage, while Uçar et al. [31] combined a few machine learning methods (e.g. ANN and SVM) for the same purpose, and their models achieved satisfactory predictions.

In this study, we apply FA, PCA and ICA to extract critical features from the available features, using four machine learning methods—MLP, SVM, Random Forest (RF) [32], and eXtreme Gradient Boosting (XGBoost) [33]—to predict the body fat percentage. We consider five metrics, that is, the mean absolute error (MAE), standard deviation (SD), root mean square error (RMSE), robustness (MAC) and efficiency, in the evaluation process. We use experimental results based on real-world body fat datasets to validate the effectiveness of feature extraction for body fat prediction. One of the datasets is from the StatLib, based on body circumference measurements [34]; the other dataset is from the National Health and Nutrition Examination Survey (NHANES) based on physical examinations [35]. In addition, we employ the Wilcoxon rank-sum test [36] to validate whether the prediction accuracy based on feature extraction improves significantly or not. The motivation of this study is to assess and compare different feature extraction methods for body fat prediction as well as provide a baseline for future research in related areas. It is worth pointing out that the results presented here are new in the context of body fat prediction. We also explore the optimal number of features used for each of the feature extraction methods while balancing accuracy and efficiency.

The rest of this paper is organised as follows: Section 2 briefly introduces the feature extraction methods and prediction models. In Section 3, experimental results based on the real-world body fat datasets are provided; specifically, performance measurements are first described, and then experimental results based on feature extraction for the prediction of body fat percentage are discussed. Lastly, Section 4 concludes this study and highlights some future research directions.

2 Methods

In this section, we first discuss three widely used feature extraction methods: FA, PCA and ICA. Then, we present four well-known machine learning algorithms—MLP, SVM, RF and XGBoost.

2.1 Feature extraction methods

Feature extraction methods are widely used in data mining for data pre-processing [37]. They can reduce the number of input features without incurring much information loss [38]. In doing so, they can alleviate the overfitting of prediction models by removing redundant, irrelevant or noisy measurements/features. In addition, with fewer misleading features, model accuracy and computation time can be further improved.

2.1.1 Factor analysis.

This widely used statistical method for feature extraction is an exploratory data analysis method. FA can be used to replace a set of observable features with a smaller set of latent features (factors) without losing much information [39]. Each latent feature describes the relationships between the corresponding observed features. Since a factor cannot be directly measured with a single feature, it is measured through the relationships in a set of common features, provided that one of these requirements is satisfied: (a) the minimum number of features is used to capture maximum variability in the data, or (b) the information overlap among the factors is minimised. The extraction then proceeds as follows: (1) the most common variance between features is extracted by the first latent factor; (2) with the factor extracted in (1) eliminated, a second factor capturing the most variance between the remaining features is extracted; and (3) steps (1) and (2) are repeated until the remaining features have been tested. FA is very helpful for reducing features in a dataset where a large number of features can be represented by a smaller number of latent features. An example of the relationship between a factor and its observed features is given in Fig 1, in which p denotes the number of observed features; a minimal code sketch follows the figure. If the model has k latent factors, the assumption in FA is given in Eq (1):

$$x_i = w_{i1} f_1 + w_{i2} f_2 + \cdots + w_{ik} f_k + e_i, \quad i = 1, \ldots, p, \tag{1}$$

where $f_1, \ldots, f_k$ are the latent factors, $w_{i1}, \ldots, w_{ik}$ are factor loadings, which means that $w_{ir}$ is the factor loading of the ith variable on the rth factor (similar to weights, or the strength of the correlation between the feature and the factor) [41], and $e_i$ is the error term, which denotes the variance in each feature that is unexplained by the factors. Generally, FA calculates a correlation matrix based on the correlation coefficient to determine the relationship for each pair of features. Then, the factor loadings are analysed to check which features are loaded onto which factors, where the factor loadings can be estimated using maximum likelihood [40].

Fig 1. An example of the relationship between a factor and its observed features.

https://doi.org/10.1371/journal.pone.0263333.g001
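As a concrete illustration, the following minimal sketch (not from the original study; the data matrix is a random placeholder for the 13 anthropometric measurements) shows how FA with k = 6 factors could be applied using scikit-learn, the library used in our experiments:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder for a matrix of p = 13 measurements over 252 samples
rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))

fa = FactorAnalysis(n_components=6, random_state=0)  # k = 6 latent factors
Z = fa.fit_transform(X)       # factor scores, shape (252, 6)
W = fa.components_            # factor loadings w_ir, shape (6, 13)
e_var = fa.noise_variance_    # per-feature error variances (the e_i terms)
```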

2.1.2 Principal component analysis.

PCA is a very useful tool for reducing the dimensionality of a dataset, especially when the features are interrelated [42]. This non-parametric method uses an orthogonal transformation to convert a set of features into a smaller set of features termed principal components. Using a covariance matrix, we are able to measure the association of each feature with the other features. To decompose the covariance matrix, singular value decomposition [43] can be applied for linear dimensionality reduction by projecting the data into a lower-dimensional space, which yields the eigenvectors and eigenvalues of the principal components. In this way, we obtain the directions of the data distribution and the relative importance of these directions. A positive covariance between two features indicates that the features increase or decrease together, whereas a negative covariance indicates that the features vary in opposite directions. The first principal component preserves as much of the information (variance) in the data as possible, the second retains as much of the remaining variability as possible, and so on until all the variance is accounted for. In other words, the extracted principal components are ordered in terms of their importance (variance). Considering that PCA is sensitive to the relative scaling of the original features, in practice it is better to normalise the data before using PCA. An example of using a component to represent its corresponding features is given in Fig 2, with a minimal code sketch after the figure. As this figure shows, each component is a linear function of its corresponding features, whereas a feature in FA is a function of given factors plus an error term.

Fig 2. An example of using a component to represent its corresponding features.

https://doi.org/10.1371/journal.pone.0263333.g002
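A corresponding PCA sketch (again with placeholder data) is given below; note the standardisation step, which addresses PCA's sensitivity to feature scaling:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))                # placeholder feature matrix

X_std = StandardScaler().fit_transform(X)     # normalise before PCA

pca = PCA(n_components=6)
X_pc = pca.fit_transform(X_std)               # principal component scores
print(pca.explained_variance_ratio_)          # ordered by explained variance
```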

2.1.3 Independent component analysis.

ICA is a blind source separation technique [44]. It is very useful for finding factors hidden behind random signals, measurements or features based on higher-order statistics. The purpose of ICA is to minimise the statistical dependence between the components of the representation; in doing so, the dependency among the extracted signals is eliminated. To achieve good performance, some assumptions should be met before using ICA [45]: (1) the source signals (features) should be statistically independent; (2) the mixture signals should be linearly independent of each other; (3) the data should be centred (a zero-mean operation applied to every signal); and (4) the source signals should have a non-Gaussian distribution. One widely used application of ICA is the cocktail party problem [46]. As Fig 3 illustrates, there are two people speaking, each producing a voice signal. These signals are received by the microphones, which then record mixture signals. Since the distances between the microphones and the people differ, the mixture signals from the microphones differ as well. Using ICA for signal extraction, the original signals can be recovered; a minimal code sketch follows the figure. Notably, it is difficult for FA and PCA to extract source signals (original components).

Fig 3. An example of the process of extracting signals from the cocktail party problem with two speaking people (source signals) and two microphones (mixture signals).

https://doi.org/10.1371/journal.pone.0263333.g003
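The following toy sketch mimics the two-speaker scenario in Fig 3: two synthetic non-Gaussian sources are linearly mixed and then recovered with scikit-learn's FastICA (the mixing matrix is an arbitrary illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
S = np.c_[np.sign(np.sin(3 * t)),    # speaker 1: square wave
          np.sin(5 * t) ** 3]        # speaker 2: distorted sine

A = np.array([[1.0, 0.5],            # arbitrary mixing matrix (microphone gains)
              [0.4, 1.0]])
X = S @ A.T                          # the two microphone (mixture) signals

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)         # recovered sources, up to order and scale
```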

2.2 Prediction models

In this section, four widely used machine learning models—MLP, SVM, RF and XGBoost—are introduced.

2.2.1 MLP.

The MLP is a type of ANN that generally has three different kinds of layers: input, hidden and output layers [47]. Each layer is connected to its adjacent layers. Similarly, each neuron in the hidden and output layers is connected to all the neurons in the previous layer via a weight vector. The weighted sum of the inputs plus a bias term is fed into a non-linear activation function, whose output is passed to the next layer. Fig 4 shows an example of an MLP with three input neurons, two hidden neurons and one output neuron; a small numerical sketch of this forward pass follows the figure. We can see from the figure that the input layer has three input neurons (x1, x2, x3) and one bias term with a value of b1. Their values, via the inner product with the weight matrix, are fed into the hidden layer. In this step, the input is first transformed using a learned non-linear transformation (an activation function g(⋅)) that projects the input data into a new space where it becomes linearly separable. The outputs of the two hidden neurons depend on the outputs of the input neurons and a bias neuron in the hidden layer with a value of b2. The output layer has one neuron that takes the hidden-layer outputs through the activation function, where f(x) is the feed-forward prediction value for an input vector x.

Fig 4. An example of MLP with three input neurons, two hidden neurons, and one output neuron.

https://doi.org/10.1371/journal.pone.0263333.g004
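To illustrate the forward computation described above, here is a small numerical sketch of the network in Fig 4 (the weights and inputs are arbitrary values chosen for illustration, with ReLU as the activation g(⋅)):

```python
import numpy as np

def g(z):                               # activation function (ReLU here)
    return np.maximum(0.0, z)

x = np.array([0.2, -1.0, 0.5])          # input neurons x1, x2, x3
W1 = np.array([[0.1, -0.3, 0.8],        # weights from inputs to the two hidden neurons
               [0.5, 0.2, -0.6]])
b1 = np.array([0.1, -0.2])              # contribution of the input-layer bias (b1)
W2 = np.array([0.7, -0.4])              # weights from hidden neurons to the output
b2 = 0.05                               # contribution of the hidden-layer bias (b2)

h = g(W1 @ x + b1)                      # hidden activations
f_x = W2 @ h + b2                       # feed-forward prediction f(x)
print(f_x)
```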

2.2.2 SVM.

SVMs, founded on the structural risk minimisation principle and statistical learning theory [48], have been widely used in many real-world applications and have displayed satisfactory performance (e.g., see [49–51]). Given n training samples $\{(x_i, y_i)\}_{i=1}^{n}$, the standard form of ε-SVM regression can be expressed as Eq (2). We can see from Fig 5 that, unlike the SVM for classification problems, which assigns a sample to a binary class, SVM regression fits the best line within a threshold value ε, with tolerated errors ($\xi_i$ and $\xi_i^*$):

$$\min_{w, b, \xi, \xi^*} \; \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\xi_i + \xi_i^*) \quad \text{s.t.} \; \begin{cases} y_i - (w^T \phi(x_i) + b) \le \varepsilon + \xi_i, \\ (w^T \phi(x_i) + b) - y_i \le \varepsilon + \xi_i^*, \\ \xi_i, \xi_i^* \ge 0, \quad i = 1, \ldots, n, \end{cases} \tag{2}$$

where w is a weight vector, $w^T$ is the transpose of w, b is a bias term, $\xi_i$ and $\xi_i^*$ are slack variables of the ith sample, C is a penalty parameter, ε is a tolerance error, $x_i$ and $y_i$ are the ith input vector and output value, respectively, and $\phi(x)$ is a function that is able to map a sample from a low-dimensional space to a higher-dimensional space.

Fig 5. ε-SVM regression with the ε-insensitive hinge loss, meaning there is no penalty to errors within the ε margin.

https://doi.org/10.1371/journal.pone.0263333.g005

After solving the objective function in Eq (2) using the Lagrangian function [52] and Karush–Kuhn–Tucker conditions [53], we can obtain the best parameters ($\alpha_i$ and $\alpha_i^*$) for the SVM. The final prediction model, g(x), can be expressed as follows:

$$g(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) \, \mathrm{Kernel}(x_i, x) + b, \tag{3}$$

where $\mathrm{Kernel}(x_i, x_j) = \phi(x_i)^T \phi(x_j)$ is a kernel function [54].
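In practice, Eqs (2) and (3) are solved by off-the-shelf implementations; a minimal sketch with scikit-learn's SVR is shown below (synthetic data; the C and ε values are illustrative, and the RBF kernel plays the role of Kernel(x_i, x_j)):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))                        # placeholder features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=252)   # synthetic target

svr = SVR(kernel='rbf', C=10.0, epsilon=0.1)          # C and epsilon as in Eq (2)
svr.fit(X, y)
y_pred = svr.predict(X)
```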

2.2.3 RF.

The RF, proposed by Ho [55], is a decision tree-based ensemble model. For body fat prediction, the RF regression model uses an ensemble learning method for regression. It creates many decision trees from the training set [56]. By combining multiple decision trees into one model, the RF improves prediction accuracy and stability. It is also able to avoid overfitting by utilising resampling and feature selection techniques. The training procedure of the RF is given in Fig 6. As the figure illustrates, the RF generates many sub-datasets with the same number of samples as the given training set, based on a re-sampling strategy. Then, on each new training set, a decision tree is trained with the selected features based on recursive partitioning, where a decision tree search is applied to find the best split among the selected features. The final output is the average of the predictions from all the decision trees; a minimal sketch is given below.
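A minimal sketch of this procedure with scikit-learn is shown below (synthetic data; hyperparameter values are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=252)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is fit on a bootstrap re-sample with a random feature subset;
# the forest's output is the average of the individual trees' predictions
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
```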

2.2.4 XGBoost.

XGBoost is also an ensemble model [57]. It employs gradient boosting [58] to combine multiple results from decision tree-based models into the final result. In addition, it uses shrinkage and feature sub-sampling to further reduce the impact of overfitting [59]. XGBoost supports parallelisation, distributed computing, out-of-core computing and cache optimisation, making it suitable for real-world applications with demanding computation-time and memory requirements [60]. The training procedure of XGBoost is depicted in Fig 7. It can be seen from the figure that XGBoost is based on gradient boosting. More specifically, new models (decision trees) are built to predict the errors (residuals) of prior models (from f1 to the current model). Once all the models are obtained, they are integrated to make the final prediction; a minimal sketch is given below.
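A minimal sketch with the xgboost Python package is shown below (synthetic data; the shrinkage and sub-sampling settings are illustrative):

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=252)

# Each new tree fits the residuals of the current ensemble; learning_rate
# (shrinkage) and the sub-sampling ratios temper overfitting
xgb = XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1,
                   subsample=0.8, colsample_bytree=0.8)
xgb.fit(X, y)
y_pred = xgb.predict(X)
```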

3 Experimental results and discussions

In this section, we present the results of the computational experiments conducted based on two body fat datasets—Cases 1 and 2—to validate the effectiveness of feature extraction methods for body fat prediction. Case 1 is based on anthropometric measurements, while Case 2 is based on physical examination and laboratory measurements. We compare four well-known machine learning algorithms, the MLP, SVM, RF and XGBoost, with the feature extraction methods used. Specifically, MLP_FA, MLP_PCA and MLP_ICA denote the MLP based on FA, PCA and ICA; SVM_FA, SVM_PCA and SVM_ICA denote the SVM based on FA, PCA and ICA; RF_FA, RF_PCA and RF_ICA denote the RF based on FA, PCA and ICA; and XGBoost_FA, XGBoost_PCA and XGBoost_ICA denote XGBoost based on FA, PCA and ICA. The programming/development environment was Python with scikit-learn, and the experiments were executed on a computer with an Intel Core i5-6300HQ CPU at 2.30 GHz and 16.0 GB of RAM.

3.1 Performance measures

In this study, we considered five performance measures. Specifically, the MAE and RMSE were used to evaluate the model's approximation ability, the SD was used to measure the variability of the errors between the predicted and target values, the MAC [61] was used to evaluate model robustness, and the computation time was used to measure efficiency. To better evaluate the performance, we randomly shuffled the data and ran five-fold cross validation 20 times, then averaged the results to obtain the final results. The computation time included the time for feature extraction and the 20 runs of five-fold cross validation. Our objective was to minimise the MAE, SD, RMSE and computation time while maximising the MAC:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i^p - y_i^t|, \tag{4}$$

$$\mathrm{SD} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (e_i - \bar{e})^2}, \tag{5}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i^p - y_i^t)^2}, \tag{6}$$

$$\mathrm{MAC} = \frac{\left( (y^p)^T y^t \right)^2}{\left( (y^p)^T y^p \right) \left( (y^t)^T y^t \right)}, \tag{7}$$

where n is the number of samples, $y_i^p$ and $y_i^t$ are the prediction and target values of the ith sample, respectively, $e_i$ is the ith sample's absolute error, $\bar{e}$ is the average of the absolute errors, $(y^p)^T y^t$ is the inner product of $(y^p)^T$ and $y^t$, and $(y^p)^T$ is the transpose of $y^p$.
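Eqs (4)–(7) translate directly into a few lines of NumPy; the sketch below follows the inner-product form of the MAC given above:

```python
import numpy as np

def evaluate(y_pred, y_true):
    """Compute MAE, SD, RMSE and MAC as defined in Eqs (4)-(7)."""
    e = np.abs(y_pred - y_true)                     # absolute errors e_i
    mae = e.mean()
    sd = np.sqrt(np.mean((e - e.mean()) ** 2))      # SD of the absolute errors
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    mac = (y_pred @ y_true) ** 2 / ((y_pred @ y_pred) * (y_true @ y_true))
    return mae, sd, rmse, mac
```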

3.2 Parameter settings

We used the grid search approach with cross validation for parameter selection [62]. The settings used in our experiments, obtained after some tuning, are listed in Table 1; a sketch of this tuning step is given after the table.

Table 1. Parameter settings for the prediction models, where #neurons is the number of neurons, #iterations is the maximum number of iterations, regularisation is the regularisation parameter, σ2 is the variance within the RBF kernel, #trees is the number of trees, and depth is the maximum depth of the tree.

https://doi.org/10.1371/journal.pone.0263333.t001
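As an illustration of this tuning step, a sketch of grid search with five-fold cross validation for the SVM is given below. The candidate values are illustrative rather than the grids actually searched; in scikit-learn, the RBF kernel's gamma corresponds to 1/(2σ²):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=252)

param_grid = {'C': [1, 10, 100],              # regularisation (penalty) parameter
              'epsilon': [0.01, 0.1, 1.0],
              'gamma': [0.001, 0.01, 0.1]}    # gamma = 1 / (2 * sigma^2)
search = GridSearchCV(SVR(kernel='rbf'), param_grid, cv=5,
                      scoring='neg_root_mean_squared_error')
search.fit(X, y)
print(search.best_params_)
```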

A flowchart of different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments is given in Fig 8 to further clarify the procedure of our experiments; a pipeline sketch of this procedure follows the figure. In the figure, K = 5 and N = 20; i.e., the experiments were repeated 20 times and each experiment was conducted based on 5-fold cross validation.

Fig 8. A flowchart of different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments.

https://doi.org/10.1371/journal.pone.0263333.g008
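The overall procedure in Fig 8 can be sketched with a scikit-learn pipeline, which refits the feature extraction step on each training fold (synthetic data; PCA and the SVM stand in for any of the extractor/model combinations):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=252)

model = make_pipeline(StandardScaler(), PCA(n_components=6), SVR(kernel='rbf'))
cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)   # K = 5, N = 20
scores = cross_val_score(model, X, y, cv=cv, scoring='neg_mean_absolute_error')
print(-scores.mean())                                          # average MAE
```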

3.3 Case 1: Body fat percentage prediction based on anthropometric measurements

3.3.1 Data description.

The body fat dataset used in Case 1 contained 252 samples with 13 input features and one output feature. It was downloaded from the StatLib (see http://lib.stat.cmu.edu/datasets/bodyfat). The statistical descriptions of this dataset are provided in Table 2. The input features included age, weight and various body circumference measurements, and the output feature was the body fat percentage.

Table 2. Statistical properties of Case 1’s body fat dataset.

https://doi.org/10.1371/journal.pone.0263333.t002

3.3.2 Determination of the number of extracted features.

To determine the number of extracted features, we calculated the explained variance of each extracted component using scikit-learn [63]. We selected only the principal components with the largest eigenvalues based on a given threshold (i.e. how much information they contained). The four steps to determine the number of extracted features were as follows: (1) constructing the covariance matrix; (2) decomposing the covariance matrix into its eigenvectors and eigenvalues; (3) sorting the eigenvalues in decreasing order to rank the corresponding eigenvectors; and (4) selecting the k largest eigenvalues such that their cumulative explained variance reached the given threshold. These steps are sketched in code below. The explained variance ratio for the StatLib dataset is given in Fig 9. Here, the threshold was set to 0.99, which means 99% of the information was retained. In this case, six features were extracted from the 13 input features.
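A sketch of step (4) with scikit-learn is shown below (placeholder data; on the actual dataset, the cumulative explained variance reaches the 0.99 threshold at k = 6):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))                   # stand-in for the 13 input features

pca = PCA().fit(StandardScaler().fit_transform(X))
cumulative = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumulative, 0.99) + 1)   # smallest k reaching the threshold
print(k, cumulative[k - 1])
```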

3.3.3 Experiments and results.

Table 3 presents the results obtained by the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. As shown in the table, the SVM, RF and XGBoost perform better than the MLP. The performance of the SVM and XGBoost is similar, whereas that of the RF is the best in terms of accuracy. However, it is clear that, by incorporating feature extraction, the learning models can achieve higher prediction accuracy, stability and robustness in most cases. The XGBoost model with FA feature extraction generated the most precise and stable results, albeit with a longer computation time than the standalone XGBoost. Using a feature extraction method increases the computation time because the feature extraction pre-processing also takes time, even though it is more efficient to train the prediction model with fewer input features. Among all the prediction models, XGBoost with FA feature extraction shows the best prediction accuracy (MAE = 3.433, SD = 4.188 and RMSE = 4.248), and the SVM with PCA obtained results in the shortest computation time (close to that of the standalone SVM).

Table 3. Experimental results based on the StatLib dataset (best results are highlighted in bold).

https://doi.org/10.1371/journal.pone.0263333.t003

3.3.4 Statistical analysis based on the Wilcoxon rank-sum test.

Although the results of MLP, SVM and XGBoost presented thus far have shown that the use of feature extraction can improve their performance, statistical analysis is needed to validate whether the differences between the results obtained are statistically significant. In this section, we report the results of statistical tests conducted based on the Wilcoxon rank-sum test [64]. Table 4 shows the statistical test results based on the 20-run experimental results. As shown in the table, the MLP, SVM and XGBoost and their versions with the feature extraction methods incorporated are significantly different (the p-value is less than 0.05). However, the difference between the RF and RF_PCA is not significant. This means the use of feature extraction is effective in improving the performance of MLP, SVM and XGBoost.

Table 4. Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the StatLib dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

https://doi.org/10.1371/journal.pone.0263333.t004
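For reference, the test itself is a one-liner with SciPy; the arrays below are hypothetical stand-ins for the 20 per-run RMSE values of two model variants:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
rmse_base = rng.normal(loc=4.6, scale=0.1, size=20)      # hypothetical values
rmse_with_fa = rng.normal(loc=4.2, scale=0.1, size=20)   # hypothetical values

stat, p_value = ranksums(rmse_base, rmse_with_fa)
print(p_value, p_value < 0.05)    # the difference is significant if p < 0.05
```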

3.3.5 Prediction performance with more extracted features.

To investigate the impact of using a different number of anthropometric features on the prediction performance, we increased the number of extracted features from 6 (as calculated in Section 3.3.2) to 13 (the total number of input features) in this series of experiments. Tables 5–7 show the results obtained by the MLP, SVM, RF and XGBoost using FA, PCA and ICA, respectively. As shown in Tables 5–7, in most cases the accuracy (RMSE and MAE) and stability (SD and MAC) were not necessarily enhanced by extracting more features as the inputs of the learning models. Among the models being compared, XGBoost_FA performs the best for predicting the body fat percentage in terms of MAE, RMSE, SD and MAC, which means it is able to predict the body fat percentage with the highest accuracy and stability on the StatLib dataset.

Table 5. Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t005

Table 6. Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with PCA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t006

Table 7. Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t007

It is critical to reduce the number of dimensions when the data size or the number of dimensions is large (big data scenarios). In addition, the prediction models with PCA outperform the corresponding versions with ICA in terms of all the metrics used. This might be because the body fat data are approximately Gaussian distributed: PCA handles Gaussian-distributed data well, whereas ICA assumes non-Gaussian sources.

Fig 10 depicts the comparative experimental results of the computation time for the MLP, SVM, RF and XGBoost using FA, PCA and ICA, respectively. The results show that XGBoost with FA is the fastest among the compared methods. Fig 10 also reveals that in some cases, the computation time increases with more features, which further highlights the importance of feature extraction in improving the efficiency. The computation time includes the time for feature extraction and 20 runs of five-fold cross validation, which means that when a different number of features are extracted, the time for feature extraction may also differ.

Fig 10. Comparison results in terms of computation time based on FA, PCA and ICA feature extraction for the StatLib dataset.

https://doi.org/10.1371/journal.pone.0263333.g010

3.4 Case 2: Body fat percentage prediction based on physical examination and laboratory measurements

3.4.1 Data description.

The body fat dataset used in Case 2 was downloaded from the NHANES (see https://www.cdc.gov/nchs/nhanes/index.htm). The data were pre-processed as in [65] by (1) combining the DEMO, LAB11, LAB18, LAB25, BMX and BIX files into one dataset; (2) keeping data on male adults (age > 18); and (3) removing samples with missing values. After pre-processing, 862 samples with 39 features (38 input features plus the body fat percentage as the output) were obtained. These features and their statistical descriptions are provided in Table 8. A sketch of this pre-processing is given below.
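A pandas sketch of these three steps is given below. The file names are assumed (NHANES components ship as SAS transport .XPT files), while SEQN, RIAGENDR and RIDAGEYR are the standard NHANES respondent ID, gender and age variables:

```python
import pandas as pd
from functools import reduce

# Assumed file names for the NHANES 1999-2000 components
files = ['DEMO.XPT', 'LAB11.XPT', 'LAB18.XPT', 'LAB25.XPT', 'BMX.XPT', 'BIX.XPT']
frames = [pd.read_sas(f) for f in files]

# (1) merge the component files on the respondent sequence number
data = reduce(lambda left, right: left.merge(right, on='SEQN'), frames)

# (2) keep male adults (RIAGENDR == 1 codes male, RIDAGEYR is age in years)
data = data[(data['RIAGENDR'] == 1) & (data['RIDAGEYR'] > 18)]

# (3) remove samples with missing values
data = data.dropna()
```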

Table 8. Statistical properties of Case 2’s body fat dataset.

More details can be found at https://www.cdc.gov/nchs/nhanes/index.htm.

https://doi.org/10.1371/journal.pone.0263333.t008

3.4.2 Determination of the number of extracted features.

We ran the same experiment as in Section 3.3.2 to determine the number of extracted features. The explained variance ratio for the NHANES dataset is given in Fig 11. With the threshold set to 0.99, 12 features were extracted from the 38 input features.

3.4.3 Experiment results.

Table 9 presents the results obtained by the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. These results are consistent with those shown in Table 3, and show that ensemble models such as XGBoost perform better than the MLP and SVM. Similarly, the results show that incorporating feature extraction into the prediction models enhances the body fat prediction accuracy. The XGBoost model with PCA feature extraction generated the most precise and stable results, as well as a shorter computation time than the standalone XGBoost.

Table 9. Experimental results based on the NHANES dataset (best results are highlighted in bold).

https://doi.org/10.1371/journal.pone.0263333.t009

3.4.4 Statistical analysis based on the Wilcoxon rank-sum test.

Table 10 presents statistical test results comparing the experimental results with and without feature extraction pre-processing. As shown in the table, the MLP, SVM, RF and XGBoost and their versions that use feature extraction are significantly different (the p-value is less than 0.05). This means the use of feature extraction methods is effective in improving the performance of the MLP, SVM and XGBoost, but not that of the RF (the performance of RF_FA, RF_PCA and RF_ICA is worse than that of the RF in Table 9).

Table 10. Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the NHANES dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

https://doi.org/10.1371/journal.pone.0263333.t010

3.4.5 Prediction performance with more extracted features.

To evaluate the prediction performance when increasing the number of extracted features, we conducted experiments in which the number of features used ranged from 12 (as calculated in Section 3.4.2) to 38 (the total number of input features).

Tables 11–13 show the results obtained by the MLP, SVM, RF and XGBoost using FA, PCA and ICA for feature extraction, respectively. From the tables, we can observe that with more features extracted, the prediction models can be further improved using feature extraction methods. Table 11 shows that XGBoost based on FA feature extraction has the best prediction accuracy (MAE = 3.713, SD = 4.707 and RMSE = 4.728) using 38 features. However, it already performs satisfactorily using 24 features (3.772, 4.783 and 4.803, respectively), which is more feasible in real applications. As shown in Table 12, the MLP has the best performance using 35 features, improving from 4.160, 5.230, 5.250 and 0.948 to 3.621, 4.618, 4.647 and 0.960 in terms of MAE, SD, RMSE and MAC, respectively. As Table 13 shows, XGBoost outperforms the other compared models across different numbers of features; its best result is 3.805, 4.818, 4.840 and 0.955 in terms of MAE, SD, RMSE and MAC, based on 24 extracted features. The results with 38 features are used as the baseline. Analysing the results from Tables 11–13 reveals that the MLP, SVM, RF and XGBoost with feature extraction performed similarly to or better than their corresponding baselines in terms of all metrics with only half the features (19 features). This shows the potential for greatly improving efficiency in real-world applications. In addition, the analysis reveals that PCA is more suitable than ICA for extracting features from this body fat dataset. The reason could be that this body fat dataset has a Gaussian distribution, and PCA is better suited to Gaussian-distributed data, whereas ICA is better suited to non-Gaussian-distributed data.

Table 11. Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t011

Table 12. Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with PCA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t012

Table 13. Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

https://doi.org/10.1371/journal.pone.0263333.t013

Among the three feature extraction algorithms, PCA is the most effective one for this dataset; it greatly improves the performance of the prediction models being compared. In addition, Fig 12 depicts the comparative computation times for the MLP, SVM, RF and XGBoost with different numbers of features extracted by FA, PCA and ICA. As shown in the figure, for each prediction model there is a trend that the more features used, the more time is needed. Ordered by computation time from most time-consuming to most efficient, the prediction models are the MLP, RF, XGBoost and SVM.

Fig 12. Comparison results in terms of computation time based on FA, PCA, and ICA feature extraction for the NHANES dataset.

https://doi.org/10.1371/journal.pone.0263333.g012

4 Conclusion

The accurate prediction of body fat is important for assessing obesity and its related diseases. However, researchers find it challenging to analyse the large volumes of medical data generated. The main purpose of this study is to analyse and compare the prediction effectiveness of four well-known machine learning models (MLP, SVM, RF and XGBoost) when combined with three widely used feature extraction approaches (FA, PCA and ICA) for body fat prediction. The results presented in this paper are new in the context of body fat prediction; they could, therefore, provide a baseline for future research in this domain.

Experimental results showed that feature extraction methods can reduce features without incurring significant information loss for body fat prediction. In Case 1, with only six extracted features, the prediction models exhibited better performance than the models without feature extraction. This finding confirms the effectiveness of feature extraction. Among the compared models, XGBoost with FA had the best approximation ability and high efficiency. With an increase in the number of extracted features, model performance can be improved further. For Case 2, PCA was the most effective in improving model performance. Although the MLP with PCA had the best prediction accuracy, it required significantly more computation time. This means XGBoost is more appropriate for real-world applications, given its similar prediction accuracy and greater efficiency. Statistical analysis based on the Wilcoxon rank-sum test confirmed that feature extraction significantly improved the performance of the MLP, SVM and XGBoost, confirming the effectiveness of using feature extraction with these models. Although the prediction models can be improved slightly further by increasing the number of extracted features, the number of features determined by the explained variance ratio was sufficient in both of the considered cases.

The feature extraction results themselves are a novel contribution of this work. The results provided by XGBoost with PCA feature extraction could be used as the baseline for future research in related areas. In future studies, we plan to investigate ways to improve feature extraction methods tailored to body fat datasets. Methods of improving the prediction model (e.g. an improved MLP [66]), using XGBoost with PCA as a baseline for body fat prediction, also need to be investigated. It is also worth noting that the findings of this work could be applied to other prediction problems with a large number of features, e.g. in finance, engineering and healthcare. Finally, we will explore other applications of analysing the body fat percentage; for example, domain knowledge could be applied to group body fat percentages into different disease classes in order to confirm the relationship between the body fat percentage and specific diseases.

References

  1. García-Jiménez C, Gutiérrez-Salmerón M, Chocarro-Calvo A, García-Martinez JM, Castaño A, De la Vieja A. From obesity to diabetes and cancer: epidemiological links and role of therapies. British Journal of Cancer. 2016;114(7):716. pmid:26908326
  2. Collaborators GO. Health effects of overweight and obesity in 195 countries over 25 years. New England Journal of Medicine. 2017;377(1):13–27.
  3. Jantaratnotai N, Mosikanon K, Lee Y, McIntyre RS. The interface of depression and obesity. Obesity Research & Clinical Practice. 2017;11(1):1–10. pmid:27498907
  4. Edelman CL, Mandle CL, Kudzma EC. Health promotion throughout the life span. Elsevier Health Sciences; 2017.
  5. Dobner J, Kaser S. Body mass index and the risk of infection-from underweight to obesity. Clinical Microbiology and Infection. 2018;24(1):24–28. pmid:28232162
  6. Greer MM, Kleinman ME, Gordon LB, Massaro J, D’Agostino RB Sr, Baltrusaitis K, et al. Pubertal progression in female adolescents with progeria. Journal of Pediatric and Adolescent Gynecology. 2018;31(3):238–241. pmid:29258958
  7. Lim J, Park H. Relationship between underweight, bone mineral density and skeletal muscle index in premenopausal Korean women. International Journal of Clinical Practice. 2016;70(6):462–468. pmid:27163650
  8. Manrique J, Chen AF, Gomez MM, Maltenfort MG, Hozack WJ. Surgical site infection and transfusion rates are higher in underweight total knee arthroplasty patients. Arthroplasty Today. 2017;3(1):57–60. pmid:28378008
  9. Raghupathi W, Raghupathi V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems. 2014;2(1):3. pmid:25825667
  10. Urbanowicz RJ, Meeker M, La Cava W, Olson RS, Moore JH. Relief-based feature selection: Introduction and review. Journal of Biomedical Informatics. 2018;85:189–203. pmid:30031057
  11. Inbarani HH, Azar AT, Jothi G. Supervised hybrid feature selection based on PSO and rough sets for medical diagnosis. Computer Methods and Programs in Biomedicine. 2014;113(1):175–185. pmid:24210167
  12. Bolón-Canedo V, Sánchez-Maroño N, Alonso-Betanzos A. Feature selection for high-dimensional data. Progress in Artificial Intelligence. 2016;5(2):65–75.
  13. Ding S, Zhu H, Jia W, Su C. A survey on feature extraction for pattern recognition. Artificial Intelligence Review. 2012;37(3):169–180.
  14. Pölsterl S, Conjeti S, Navab N, Katouzian A. Survival analysis for high-dimensional, heterogeneous medical data: Exploring feature extraction as an alternative to feature selection. Artificial Intelligence in Medicine. 2016;72:1–11. pmid:27664504
  15. Dandu SR, Engelhard MM, Qureshi A, Gong J, Lach JC, Brandt-Pearce M, et al. Understanding the physiological significance of four inertial gait features in multiple sclerosis. IEEE Journal of Biomedical and Health Informatics. 2017;22(1):40–46.
  16. Pořízka P, Klus J, Képeš E, Prochazka D, Hahn DW, Kaiser J. On the utilization of principal component analysis in laser-induced breakdown spectroscopy data analysis, a review. Spectrochimica Acta Part B: Atomic Spectroscopy. 2018;148:65–82.
  17. Ablin P, Cardoso JF, Gramfort A. Faster independent component analysis by preconditioning with Hessian approximations. IEEE Transactions on Signal Processing. 2018;66(15):4040–4049.
  18. Comon P, Jutten C. Handbook of Blind Source Separation: Independent component analysis and applications. Academic Press; 2010.
  19. Dara S, Tumma P, Eluri NR, Kancharla GR. Feature Extraction In Medical Images by Using Deep Learning Approach. International Journal of Pure and Applied Mathematics. 2018;120(6):305–312.
  20. Varshni D, Thakral K, Agarwal L, Nijhawan R, Mittal A. Pneumonia Detection Using CNN based Feature Extraction. In: Proceedings of the IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT). IEEE; 2019. pp. 1–7.
  21. Das H, Naik B, Behera H. Medical disease analysis using neuro-fuzzy with feature extraction model for classification. Informatics in Medicine Unlocked. 2020;18:100288.
  22. Tran D, Nguyen H, Le U, Bebis G, Luu HN, Nguyen T. A novel method for cancer subtyping and risk prediction using consensus factor analysis. Frontiers in Oncology. 2020;10:1052. pmid:32714868
  23. Sudharsan M, Thailambal G. Alzheimer’s disease prediction using machine learning techniques and principal component analysis (PCA). Materials Today: Proceedings. 2021.
  24. Franzmeier N, Dewenter A, Frontzkowski L, Dichgans M, Rubinski A, Neitzel J, et al. Patient-centered connectivity-based prediction of tau pathology spread in Alzheimer’s disease. Science Advances. 2020;6(48):eabd1327. pmid:33246962
  25. DeGregory K, Kuiper P, DeSilvio T, Pleuss J, Miller R, Roginski J, et al. A review of machine learning in obesity. Obesity Reviews. 2018;19(5):668–685. pmid:29426065
  26. Shukla SMPSA, Raghuvanshi RS. Artificial Neural Network: A New Approach for Prediction of Body Fat Percentage Using Anthropometry Data in Adult Females. International Journal on Recent and Innovation Trends in Computing and Communication. 2018;6(2):117–125.
  27. Kupusinac A, Stokić E, Doroslovački R. Predicting body fat percentage based on gender, age and BMI by using artificial neural networks. Computer Methods and Programs in Biomedicine. 2014;113(2):610–619. pmid:24275480
  28. Keivanian F, Mehrshad N. Intelligent feature subset selection with unspecified number for body fat prediction based on binary-GA and Fuzzy-Binary-GA. In: Proceedings of the 2nd International Conference on Pattern Recognition and Image Analysis (IPRIA). IEEE; 2015. pp. 1–7.
  29. Keivanian F, Chiong R, Hu Z. A Fuzzy Adaptive Binary Global Learning Colonization-MLP model for Body Fat Prediction. In: Proceedings of the 3rd International Conference on Bio-engineering for Smart Technologies (BioSMART). IEEE; 2019. pp. 1–4.
  30. Chiong R, Fan Z, Hu Z, Chiong F. Using an improved relative error support vector machine for body fat prediction. Computer Methods and Programs in Biomedicine. 2021;198:105749. pmid:33080491
  31. Uçar MK, Ucar Z, Köksal F, Daldal N. Estimation of body fat percentage using hybrid machine learning algorithms. Measurement. 2021;167:108173.
  32. Breiman L. Random forests. Machine Learning. 2001;45(1):5–32.
  33. Chen T, He T, Benesty M, Khotilovich V, Tang Y. Xgboost: Extreme gradient boosting. R Package Version 0.4-2. 2015; pp. 1–4.
  34. Johnson RW. Body fat dataset, [Online; accessed 4 April 2021]; 1995. http://lib.stat.cmu.edu/datasets/bodyfat.
  35. Centers for Disease Control and Prevention (CDC). National Health and Nutrition Examination Survey, NHANES 1999-2000 Examination Data, [Online; accessed 4 April 2021]; 2013. https://wwwn.cdc.gov/nchs/nhanes/Search/DataPage.aspx?Component=Laboratory&CycleBeginYear=1999.
  36. Fay MP, Proschan MA. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Statistics Surveys. 2010;4:1. pmid:20414472
  37. Khalid S, Khalil T, Nasreen S. A survey of feature selection and feature extraction techniques in machine learning. In: Proceedings of the Science and Information Conference. IEEE; 2014. pp. 372–378.
  38. Widodo A, Yang BS. Support vector machine in machine condition monitoring and fault diagnosis. Mechanical Systems and Signal Processing. 2007;21(6):2560–2574.
  39. Salas-Gonzalez D, Górriz J, Ramírez J, Illán I, López M, Segovia F, et al. Feature selection using factor analysis for Alzheimer’s diagnosis using PET images. Medical Physics. 2010;37(11):6084–6095. pmid:21158320
  40. De Vito R, Bellio R, Trippa L, Parmigiani G. Multi-study factor analysis. Biometrics. 2019;75(1):337–346. pmid:30289163
  41. Yong AG, Pearce S. A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Tutorials in Quantitative Methods for Psychology. 2013;9(2):79–94.
  42. Jolliffe IT, Cadima J. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2016;374(2065):20150202. pmid:26953178
  43. Xie J, Chen W, Zhang D, Zu S, Chen Y. Application of principal component analysis in weighted stacking of seismic data. IEEE Geoscience and Remote Sensing Letters. 2017;14(8):1213–1217.
  44. Ahmad J, Akula A, Mulaveesala R, Sardana H. An independent component analysis based approach for frequency modulated thermal wave imaging for subsurface defect detection in steel sample. Infrared Physics & Technology. 2019;98:45–54.
  45. Langlois D, Chartier S, Gosselin D. An introduction to independent component analysis: InfoMax and FastICA algorithms. Tutorials in Quantitative Methods for Psychology. 2010;6(1):31–38.
  46. Tharwat A. Independent component analysis: An introduction. Applied Computing and Informatics. 2018. https://doi.org/10.1016/j.aci.2018.08.006
  47. Zhang Y, Sun Y, Phillips P, Liu G, Zhou X, Wang S. A multilayer perceptron based smart pathological brain detection system by fractional Fourier entropy. Journal of Medical Systems. 2016;40(7):1–11. pmid:27250502
  48. Vapnik VN. An overview of statistical learning theory. IEEE Transactions on Neural Networks. 1999;10(5):988–999. pmid:18252602
  49. Lo SL, Chiong R, Cornforth D. Ranking of high-value social audiences on Twitter. Decision Support Systems. 2016;85:34–48.
  50. Chiong R, Budhi GS, Dhakal S. Combining sentiment lexicons and content-based features for depression detection. IEEE Intelligent Systems. 2021;36(6):99–105.
  51. Chiong R, Budhi GS, Dhakal S, Chiong F. A textual-based featuring approach for depression detection using machine learning classifiers and social media texts. Computers in Biology and Medicine. 2021;104499. pmid:34174760
  52. Fan Z, Chiong R, Hu Z, Lin Y. A fuzzy weighted relative error support vector machine for reverse prediction of concrete components. Computers & Structures. 2020;230:106171.
  53. Fan Z, Chiong R, Chiong F. A fuzzy-weighted Gaussian kernel-based machine learning approach for body fat prediction. Applied Intelligence. 2021; pp. 1–10.
  54. Sihag P, Jain P, Kumar M. Modelling of impact of water quality on recharging rate of storm water filter system using various kernel function based regression. Modeling Earth Systems and Environment. 2018;4(1):61–68.
  55. Ho TK. Random decision forests. In: Proceedings of the Third International Conference on Document Analysis and Recognition. vol. 1. IEEE; 1995. pp. 278–282.
  56. Zahedi P, Parvandeh S, Asgharpour A, McLaury BS, Shirazi SA, McKinney BA. Random forest regression prediction of solid particle erosion in elbows. Powder Technology. 2018;338:983–992.
  57. Georganos S, Grippa T, Vanhuysse S, Lennert M, Shimoni M, Wolff E. Very high resolution object-based land use–land cover urban classification using extreme gradient boosting. IEEE Geoscience and Remote Sensing Letters. 2018;15(4):607–611.
  58. Ke G, Meng Q, Finley T, Wang T, Chen W, Ma W, et al. LightGBM: A highly efficient gradient boosting decision tree. In: Advances in Neural Information Processing Systems; 2017. pp. 3146–3154.
  59. Wang H, Liu C, Deng L. Enhanced prediction of hot spots at protein-protein interfaces using extreme gradient boosting. Scientific Reports. 2018;8(1):14285. pmid:30250210
  60. Ilager S, Ramamohanarao K, Buyya R. Thermal prediction for efficient energy management of clouds using machine learning. IEEE Transactions on Parallel and Distributed Systems. 2020;32(5):1044–1056.
  61. Fan Z, Chiong R, Hu Z, Dhakal S, Lin Y. A two-layer Wang-Mendel fuzzy approach for predicting the residuary resistance of sailing yachts. Journal of Intelligent & Fuzzy Systems. 2019;36(6):6219–6229.
  62. Witten IH, Frank E, Hall MA, Pal CJ. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann; 2016.
  63. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research. 2011;12(Oct):2825–2830.
  64. Murakami H. The power of the modified Wilcoxon rank-sum test for the one-sided alternative. Statistics. 2015;49(4):781–794.
  65. Ferenci T, Kovács L. Predicting body fat percentage from anthropometric and laboratory measurements using artificial neural networks. Applied Soft Computing. 2018;67:834–839.
  66. Huk M. Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network. International Journal of Applied Mathematics and Computer Science. 2012;22(2):449–459.