
Optimal experimental conditions for Welan gum production by support vector regression and adaptive genetic algorithm

  • Zhongwei Li,

    Roles Conceptualization, Formal analysis, Project administration, Supervision

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Xiang Yuan,

    Roles Data curation, Software, Validation, Writing – original draft

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Xuerong Cui,

    Roles Software

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Xin Liu,

    Roles Conceptualization, Writing – review & editing

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Leiquan Wang,

    Roles Writing – review & editing

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Weishan Zhang,

    Roles Writing – review & editing

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Qinghua Lu,

    Roles Writing – review & editing

    Affiliation College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China

  • Hu Zhu

    Roles Conceptualization, Supervision, Writing – review & editing

    zhuhu@fjnu.edu.cn

    Affiliation College of Chemistry and Materials, Fujian Normal University, Fuzhou 350007, China

Abstract

Welan gum is a novel microbial polysaccharide, widely produced during microbial growth and metabolism under different external conditions. It can be used as a thickener, suspending agent, emulsifier, stabilizer, lubricant, film-forming agent and adhesive in agriculture. In recent years, finding optimal experimental conditions to maximize its production has attracted growing attention. In this work, a hybrid computational method is proposed to optimize the experimental conditions for producing Welan gum, using data collected from experiment records. Support Vector Regression (SVR) is used to model the relationship between Welan gum production and the experimental conditions, and then an adaptive Genetic Algorithm (AGA, for short) is applied to search for optimized experimental conditions. As a result, a mathematic model predicting the production of Welan gum from experimental conditions is obtained, achieving an accuracy rate of 88.36%. As well, a class of optimized experimental conditions is predicted, producing 31.65 g/L of Welan gum. Compared with the best result from chemical experiments, 30.63 g/L, the predicted production improves it by 3.3%. The results provide potential optimal experimental conditions to improve the production of Welan gum.

Introduction

Welan gum is a polysaccharide secreted by the Alcaligenes sp. NX-3 strain. It has good stability, ideal thickening, a unique shear-thinning property, good suspension and emulsification, and assured safety, and can be used in oil drilling thanks to its shear-thinning behavior. Finding optimal experimental conditions to maximize the production of Welan gum has attracted growing attention, as it enables industrial-scale production of Welan gum. In 2014, laboratory production of Welan gum by fermentation was achieved in [1], where cyperus beans were used as raw material, with protein hydrolysate as substrate. After that, Bacillus foecalis alkaligenes was chosen as the starting bacterial strain to optimize the Welan gum yield process by the response surface method [2].

It is found that many factors affect the production of Welan gum, such as glucose, yeast, liquid volume, pH value and temperature, which together constitute the experimental conditions for producing Welan gum. To find the optimal experimental conditions, we need to consider the following aspects:

  1. function of each factor;
  2. interaction between each pair of factors;
  3. relationship among all the factors.

In 2010, Li et al. used batch fermentation experiment data of Welan gum's starting bacterial strain Alcaligenes sp. CGMCC2428 to carry out dynamic model research and implemented optimization control of the Welan gum fermentation process [3]. In 2016, the JMP statistical analysis software was used to optimize the fermentation medium of Welan gum by Alcaligenes sp. Y5; with the optimized experimental conditions, the production of Welan gum increased from 15.72 g/L to 26.58 g/L, an increment of 69.08% [4].

Recently, many artificial intelligence algorithms and data processing strategies have been applied in data mining, such as a self-adaptive artificial bee colony algorithm based on global best for global optimization [5], a public auditing protocol with a novel dynamic structure for cloud data [6], a privacy-preserving smart semantic search method for conceptual graphs over encrypted outsourced data [7], a privacy-preserving and copy-deterrence content-based image retrieval scheme in cloud computing [8], a strategy for solving NP problems such as the subset sum problem based on SN P systems [9], an Apriori algorithm based on tissue-like P systems [10], a split clustering algorithm based on P systems on simplices [11], a spatial clustering algorithm based on a DNA model [12], and a PSO algorithm based on dynamic niche technology [13]. Machine learning methods have also been applied to experimental condition design, see e.g. a secure and dynamic multi-keyword ranked search scheme over encrypted cloud data [14]. In this work, we present a hybrid computational method to optimize the experimental conditions for producing Welan gum, using data collected from experiment records. Specifically, Support Vector Regression (SVR) is used to model the relationship between Welan gum production and experimental conditions, and then an adaptive Genetic Algorithm (AGA) is used to search for optimized experimental conditions. As a result, a mathematic model predicting the production of Welan gum from experimental conditions with an accuracy rate of 88.36% is obtained, and a class of optimized experimental conditions is designed to produce 31.65 g/L of Welan gum. Compared with the best result in the chemical lab, 30.63 g/L, the predicted production improves it by 3.3%. The result provides potential experimental conditions, found by data mining, to improve the production of Welan gum in the lab.

Related technologies

In this section, the two main methods used, Support Vector Regression (SVR) and adaptive Genetic Algorithm (AGA), are briefly recalled.

Here, we choose the SVR method mainly because of our limited samples. First, for regression on a small number of samples, SVR has many advantages, such as few parameters to tune and fast computation. Secondly, the final decision function of SVR is determined by only a small number of support vectors. Finally, the computational complexity depends on the number of support vectors, not the dimension of the sample space, which also reflects the good robustness of the SVR method.

The genetic algorithm is a global search algorithm, which suits our problem well. However, the traditional genetic algorithm still needs improvement in terms of global search ability and convergence speed, and the adaptive Genetic Algorithm we adopt improves both aspects to a certain extent. For the crossover probability, the AGA method lets it vary with the evolutionary process while giving the same crossover ability to all individuals of the same generation, so as to realize better global search. For the mutation probability, the AGA method makes it adapt during the evolutionary process according to the fitness value of each individual to be mutated.

Support vector regression

Support Vector Machine (SVM) is a machine learning method for classification proposed in 1995 [15] that has been widely used in biological data processing [16–18] and bioinformatics [19–23]. It performs classification by seeking the structured minimum risk to improve the generalization ability of the learning machine, minimizing both the empirical risk and the confidence limit [24, 25], thus achieving good statistical regularity even with small sample sizes. In general, it is a two-category model whose basic form is the linear classifier with the maximum margin in feature space. The learning strategy of SVM is to maximize this margin, which can finally be converted into a convex quadratic programming problem.

Support Vector Regression (SVR) is developed based on SVM for dealing with regression forecasting problems [26, 27]. Some basic concepts of SVR are briefly recalled.

Given a set of training data {(x1, y1), (x2, y2), …, (xl, yl)} ⊂ Rn × R, where xi denotes the input samples, yi is the target value and l is the total number of input samples. In SVR, the goal is to find a function f(x), i.e., an optimal hyperplane, which has at most ε deviation from the actually obtained targets yi for all the training data and at the same time is as flat as possible. The function takes the form

f(x) = ⟨w, Φ(x)⟩ + b,   (1)

where Φ(⋅) is a nonlinear mapping by which the input data x are mapped into a high-dimensional space F, and ⟨⋅, ⋅⟩ denotes the dot product in F. This can be transformed into the following convex constrained optimization problem by introducing the non-negative slack variables ξi and ξi* to cope with the otherwise infeasible constraints:

minimize (1/2)‖w‖² + C Σ_{i=1}^{l} (ξi + ξi*)
subject to yi − ⟨w, Φ(xi)⟩ − b ≤ ε + ξi,
⟨w, Φ(xi)⟩ + b − yi ≤ ε + ξi*,
ξi, ξi* ≥ 0, i = 1, …, l,   (2)

where C > 0 is the penalty parameter and ξi, ξi* are the slack variables introduced in order to allow a certain error [28–32]. ξ is also a parameter of the ε-insensitive loss function, where ε is called the tube size [33]. The greater the value of C, the greater the penalty for data points beyond the ε deviation; C thus determines the balance between the smoothness of the function and the number of sample points beyond the ε deviation. To solve this convex quadratic programming problem, the Lagrangian function is applied:

L = (1/2)‖w‖² + C Σ_i (ξi + ξi*) − Σ_i (ηi ξi + ηi* ξi*) − Σ_i αi (ε + ξi − yi + ⟨w, Φ(xi)⟩ + b) − Σ_i αi* (ε + ξi* + yi − ⟨w, Φ(xi)⟩ − b),   (3)

where αi, αi*, ηi, ηi* ≥ 0 are the Lagrange multipliers. The dual optimization problem is obtained as follows:

maximize −(1/2) Σ_{i,j} (αi − αi*)(αj − αj*)⟨Φ(xi), Φ(xj)⟩ − ε Σ_i (αi + αi*) + Σ_i yi (αi − αi*)
subject to Σ_i (αi − αi*) = 0, αi, αi* ∈ [0, C],   (4)

where αi, αi* are the nonnegative Lagrange multipliers that can be obtained by solving the convex quadratic programming problem. By exploiting the Karush-Kuhn-Tucker (KKT) conditions of the primal optimization problem [34–36], we get the equation αi αi* = 0, which means that either both multipliers αi and αi* equal zero, or one of them is zero and the other is nonzero. The data samples with non-vanishing Lagrange multipliers are called the support vectors, inside or outside the ε-insensitive tube [33].

The regression estimation function obtained by learning is as follows:

f(x) = Σ_{i=1}^{l} (αi − αi*) K(xi, x) + b,   (5)

where b can be computed by averaging over the standard support vectors:

b = (1/NNSV) { Σ_{0<αi<C} [ yi − Σ_j (αj − αj*) K(xj, xi) − ε ] + Σ_{0<αi*<C} [ yi − Σ_j (αj − αj*) K(xj, xi) + ε ] },   (6)

where NNSV represents the number of standard support vectors. K(xi, xj) is defined as the kernel function. According to the Hilbert-Schmidt principle, when the kernel function satisfies the Mercer condition, that is, for any given function g(x) for which ∫ g(x)² dx is finite we have ∫∫ K(x, x′) g(x) g(x′) dx dx′ ≥ 0, the value of the kernel is equal to the dot product of the two vectors xi and xj in the feature space Φ(xi) and Φ(xj), i.e., K(xi, xj) = ⟨Φ(xi), Φ(xj)⟩ [33].

We choose here the Gauss radial basis function as the kernel function:

K(xi, xj) = exp(−‖xi − xj‖² / (2σ²)),   (7)

where σ is the kernel parameter.
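For illustration, Eq (7) can be evaluated directly with NumPy (a sketch, not taken from the paper's code):

```python
import numpy as np

def rbf_kernel(xi, xj, sigma=1.0):
    # Gauss radial basis kernel of Eq (7):
    # K(xi, xj) = exp(-||xi - xj||^2 / (2 * sigma^2)).
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))
```

The kernel equals 1 when xi = xj and decays with the squared distance; σ controls the decay rate.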

Adaptive genetic algorithm

Genetic Algorithm (GA) derives from computer simulation studies of biological systems [37] and has been widely used in function optimization, combinatorial optimization, job shop scheduling problems [38], complex network clustering and pattern mining [39–41]. However, it still has some disadvantages, the most obvious being low efficiency and a tendency to fall into local optima [42, 43].

In 2000, the adaptive Genetic Algorithm (AGA) [44] was proposed, which improves the performance of the traditional GA to some extent. After that, adaptive GA was further improved by introducing certain intelligent strategies, including crossover that avoids inbreeding, a crossover probability associated with the evolutionary number, and an adaptively regulated mutation probability [45]. The formula for the crossover probability, which depends only on the evolutionary number, is as follows: (8) (9) In the formulas, mtmp is an intermediate variable for the calculation, TGen is the preset maximum evolutionary number, t is the current evolutionary number (0 ≤ t ≤ TGen), Pc,max is the preset largest crossover probability, Pc,min is the preset smallest crossover probability, and Pc(t) is the crossover probability of the current population.

The formula of the adaptive mutation probability, related to the number of genetic evolutions and the individual fitness, is as follows: (10) (11)

In the formulas, Pm,max is the preset largest mutation probability, Pm,min is the preset smallest mutation probability, f(xi) is the fitness value of individual xi, fmax is the maximum fitness value in the current population, and Pm(t) is the mutation probability of individual xi in the current population [45].
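The exact forms of Eqs (8)–(11) follow [45]; the sketch below is an assumption, a linear schedule that merely respects the stated bounds (Pc between Pc,min and Pc,max, Pm between Pm,min and Pm,max, with fitter individuals mutating less):

```python
def crossover_prob(t, t_gen, pc_min=0.6, pc_max=0.9):
    # Assumed linear schedule: Pc decreases from pc_max to pc_min
    # as the current evolutionary number t approaches t_gen.
    m_tmp = t / float(t_gen)  # intermediate variable in [0, 1]
    return pc_max - (pc_max - pc_min) * m_tmp

def mutation_prob(t, t_gen, f_xi, f_max, pm_min=0.001, pm_max=0.1):
    # Assumed schedule: the probability shrinks over generations and is
    # smaller for individuals whose fitness f_xi is close to f_max.
    base = pm_max - (pm_max - pm_min) * (t / float(t_gen))
    return pm_min + (base - pm_min) * (1.0 - f_xi / f_max)
```

Any schedule with these monotonicity properties would serve the same role in the search loop.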

The mathematic model and data experiments

In this section, we start by selecting probable elements from the original data and then determine the values of two important parameters of the model. After that, a mathematic model based on SVR is built to describe the relationship between Welan gum production and the experimental conditions. With the model, AGA is applied to find the optimal sample point of the model, which corresponds to a class of potential optimal experimental conditions maximizing the production of Welan gum. The flowchart is shown in Fig 1.

The mathematic model

Data preparation.

Before building the mathematic model describing the relationship between Welan gum production and experimental conditions, the data needs to be normalized. SVR mainly deals with nonlinear problems; if the magnitudes of the sample features differ greatly, the results will be strongly affected unless the samples are normalized. Besides, normalizing the samples avoids small model weights causing numerical instability, so that parameter optimization converges faster and the accuracy of the model is improved. The normalization formula used in our method is

y = ymin + (x − xmin)(ymax − ymin) / (xmax − xmin),   (12)

where x is the original data, y the normalized data, xmin and xmax the minimum and maximum of the original data, and ymin and ymax the minimum and maximum of the normalized data. The value of ymin is set to 0 and the value of ymax to 1. The normalized data are shown in Tables 1 and 2.
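With ymin = 0 and ymax = 1, Eq (12) reduces to standard min-max scaling, sketched here with NumPy:

```python
import numpy as np

def minmax_normalize(x, y_min=0.0, y_max=1.0):
    # Eq (12): map the original data x linearly onto [y_min, y_max].
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return y_min + (x - x_min) * (y_max - y_min) / (x_max - x_min)
```

Each feature column would be normalized independently before training.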

Without loss of generality, all 67 samples collected from Welan gum production experiments are classified according to production into three types: high, middle and low level. Specifically, productions between 0 g/L and 5 g/L belong to the low level, 8 groups in total; productions between 5 g/L and 20 g/L to the medium level, 39 groups in total; and productions above 20 g/L to the high level, 20 groups in total.

Each time the model data are drawn, the order of the samples within each yield level is randomly shuffled. For each level, the first 70% of the data is used as training data and the remaining 30% as testing data.
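The level-wise 70/30 split can be sketched as follows; split_by_level is a hypothetical helper name, not from the paper:

```python
import random

def split_by_level(samples, train_frac=0.7, seed=42):
    # Shuffle the samples within each production level, then take the
    # first 70% of each level as training data and the rest for testing.
    rng = random.Random(seed)
    train, test = [], []
    for level, group in samples.items():
        group = list(group)
        rng.shuffle(group)
        cut = int(round(train_frac * len(group)))
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test
```

Splitting within each level keeps the low/medium/high proportions of the 67 samples roughly equal in both sets.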

Before building the mathematic model, it is necessary to determine the values of two parameters, namely the penalty factor parameter (c) and the kernel function parameter (g). Here, a grid search is used to determine the optimal values of the two parameters. The result is shown in Fig 2 below:

In the above contour plot, the two red dotted lines represent the optimal values of the two parameters. Their intersection, the red point in the figure, marks the value of the "CVmse", i.e., the mean of the squared differences between the predicted values and the true values under 5-fold cross validation.
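A numpy-only sketch of this grid search is given below. Kernel ridge regression with an RBF Gram matrix stands in for SVR (an assumption, so the exact CVmse values differ from the paper's); the 5-fold CVmse is minimized over the (c, g) grid:

```python
import numpy as np

def rbf_gram(X, Z, g):
    # Pairwise RBF Gram matrix K[i, j] = exp(-g * ||X[i] - Z[j]||^2).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def cv_mse(X, y, c, g, k=5, seed=0):
    # 5-fold cross-validated mean squared error ("CVmse") of kernel ridge
    # regression; a larger c means a weaker regularization penalty.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        K = rbf_gram(X[tr], X[tr], g)
        alpha = np.linalg.solve(K + np.eye(len(tr)) / c, y[tr])
        pred = rbf_gram(X[te], X[tr], g) @ alpha
        errs.append(np.mean((pred - y[te]) ** 2))
    return float(np.mean(errs))

def grid_search(X, y, cs, gs):
    # Return the (c, g) pair with the smallest CVmse on the grid.
    scores = {(c, g): cv_mse(X, y, c, g) for c in cs for g in gs}
    return min(scores, key=scores.get)
```

In practice a fine logarithmic grid over c and g would be used, exactly as the contour plot in Fig 2 suggests.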

After the parameter values are determined, the training and testing data are chosen as described above. The accuracy of the model is measured by the square of the correlation coefficient. Figs 3 and 4 show the model's predictions on the testing data and the relative error.

Fig 3. Comparison of raw data and regression predictive data.

https://doi.org/10.1371/journal.pone.0185942.g003

Finding optimal experimental conditions by AGA

With the mathematical model constructed, an improved AGA is used to find experimental conditions for optimal production. The process has the following steps.

Step 1: Initialize the population and encode the individuals.

Each sample is related to nine variables, so we treat the nine variables as nine genes making up a chromosome. For example, [glucose, yeast, KH2PO4, MgSO4, fluid volume, pH value, temperature, rotational speed, inoculation amount] is encoded as [x1, x2, x3, x4, x5, x6, x7, x8, x9], where x1 ∈ [5, 95], x2 ∈ [1, 10], x3 ∈ [1, 6], x4 ∈ [0.1, 1], x5 ∈ [25, 125], x6 ∈ [2, 12], x7 ∈ [25, 35], x8 ∈ [125, 250], x9 ∈ [1, 10].

Step 2: Select good individuals based on the fitness values.

Step 3: Perform the crossover operation. Starting from the first individual in the population, the corresponding crossover probability of the individual is calculated, denoted cross_rate, and a random number between 0 and 1 is generated, denoted rand_num. If rand_num is less than cross_rate, the crossover operation is performed on the individual: two integers between 1 and 9 are randomly generated, the smaller being the starting position and the larger the ending position of the crossed segment, and the genes of the individual in that range are exchanged with those of the next adjacent individual. If the i-th individual did not perform the crossover operation, this process is repeated for the (i+1)-th individual; if it did, the process is repeated for the (i+2)-th.

Step 4: Perform the mutation operation. Starting from the first individual in the population, the corresponding mutation probability of the individual is calculated, denoted mutate_rate, and a random number between 0 and 1 is generated, denoted rand_num. If rand_num is less than mutate_rate, the mutation operation is performed on the individual: an integer between 1 and 9 is randomly generated as the location of the gene to be mutated, and the gene at that location is regenerated.

Step 5: The new individuals generated by the above operations constitute the new population, and go to step 2.

Repeat these steps until we find the optimal individual.
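Steps 1 to 5 can be sketched in Python. The fitness function here is a placeholder surrogate (the real fitness is the SVR model's predicted production), and the linear probability schedules are illustrative assumptions rather than Eqs (8)–(11):

```python
import random

# Gene bounds from Step 1: [glucose, yeast, KH2PO4, MgSO4, fluid volume,
# pH value, temperature, rotational speed, inoculation amount].
BOUNDS = [(5, 95), (1, 10), (1, 6), (0.1, 1), (25, 125),
          (2, 12), (25, 35), (125, 250), (1, 10)]

def fitness(ind):
    # Placeholder surrogate: peaks at the centre of each gene range.
    return sum(-(x - (lo + hi) / 2) ** 2 for x, (lo, hi) in zip(ind, BOUNDS))

def roulette_select(pop, fits):
    # Step 2: proportional (roulette) selection; shift fitnesses positive.
    base = min(fits)
    weights = [f - base + 1e-9 for f in fits]
    return [random.choices(pop, weights=weights, k=1)[0][:] for _ in pop]

def crossover(a, b, pc):
    # Step 3: swap the gene segment between two random positions.
    a, b = a[:], b[:]
    if random.random() < pc:
        i, j = sorted(random.sample(range(9), 2))
        a[i:j + 1], b[i:j + 1] = b[i:j + 1], a[i:j + 1]
    return a, b

def mutate(ind, pm):
    # Step 4: regenerate one randomly chosen gene within its bounds.
    ind = ind[:]
    if random.random() < pm:
        k = random.randrange(9)
        lo, hi = BOUNDS[k]
        ind[k] = random.uniform(lo, hi)
    return ind

def run_aga(pop_size=300, t_gen=500):
    # Step 1: random initial population within the gene bounds.
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for t in range(t_gen):
        frac = t / t_gen
        pc_t = 0.9 - (0.9 - 0.6) * frac       # illustrative schedule
        pm_t = 0.1 - (0.1 - 0.001) * frac     # illustrative schedule
        pop = roulette_select(pop, [fitness(p) for p in pop])
        nxt = []
        for i in range(0, pop_size - 1, 2):
            a, b = crossover(pop[i], pop[i + 1], pc_t)
            nxt += [mutate(a, pm_t), mutate(b, pm_t)]
        if pop_size % 2:
            nxt.append(mutate(pop[-1], pm_t))
        pop = nxt                              # Step 5: new population
        best = max(pop + [best], key=fitness)  # keep the best seen so far
    return best
```

With pop_size=300 and t_gen=500 this matches the settings reported below; smaller values already show convergence.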

The size of the initial population is set to 300, i.e., there are 300 individuals, and the number of iterations is 500. The selection operator is the roulette selection method, also known as the proportional selection operator. Its basic idea is that the probability of each individual being selected is proportional to its fitness value:

P(xi) = f(xi) / Σ_{k=1}^{K} f(xk),   (13)

where P(xi) is the selection probability of individual xi and K is the population size. The parameter Pc,min is set to 0.6, Pc,max to 0.9, Pm,max to 0.1 and Pm,min to 0.001. The search results are shown in Fig 5.

To improve the accuracy, we further reduced the ranges of the nine gene variables by observing the gene values of samples with productions higher than 30 g/L: x1 ∈ [55, 60], x2 ∈ [2.5, 3.1], x3 ∈ [5, 5.5], x4 ∈ [0.1, 0.3], x5 ∈ [48, 51.5], x6 ∈ [6.7, 7.15], x7 ∈ [32, 33], x8 ∈ [176, 179], x9 ∈ [4.85, 5.15]. The average maximum fitness value of the data experiments, with 500 iterations each, is shown in Fig 6.

Fig 6. The average maximum yield result graph under 500 iterations.

https://doi.org/10.1371/journal.pone.0185942.g006

Results

The accuracy of the established mathematic model is 88.36%, and the optimal medium composition ratio is shown in Table 3 below:

The maximum production of Welan gum is 31.65g/L.

This hybrid computational method, which combines SVR and AGA, has intelligent learning ability and can overcome the limitations of large-scale biotic experiments [46–51]. A mathematic model predicting the production of Welan gum from experimental conditions with an accuracy rate of 88.36% is obtained, and a class of optimized experimental conditions is designed to produce 31.65 g/L of Welan gum. Compared with the best result from chemical experiments, 30.63 g/L, the predicted production improves it by 3.3%.

Conclusion

We focused on building a mathematic model of Welan gum production, with the nine factors that constitute the experimental conditions as the optimization indicators: glucose, yeast, KH2PO4, MgSO4, fluid volume, pH value, temperature, rotational speed and inoculation amount. A hybrid computational method combining SVR and AGA is proposed. Through training on the sample data, a mathematic model predicting the production of Welan gum from experimental conditions is obtained, and the optimal sample point in the sample space, i.e., a class of optimized experimental conditions, is found. This hybrid computational method has good learning ability, which avoids the high cost of large-scale biological experiments, and it also overcomes the premature-convergence defect of the traditional Genetic Algorithm. The result provides potential experimental conditions, obtained by data mining, to improve the production of Welan gum in the lab.

For further research, neural-like computing models, e.g., spiking neural P systems [52], can be used for the optimization of Welan gum production. As well, some recently developed data processing and mining methods, such as the speculative approach to spatial-temporal efficiency for multi-objective optimization in cloud computing [53], privacy-preserving similarity search over encrypted data in cloud computing [53], the k-degree anonymity algorithm with vertex and edge modification [54], and kernel quaternion principal component analysis for object recognition [55], might be used to optimize the experimental conditions of Welan gum. In the aspect of data preparation, decision trees [56] can be used to handle missing attribute values of some samples in the dataset.

Acknowledgments

This work was supported by 863 program (2015AA020925), National Natural Science Foundation of China (61402187, 61502535, 61572522, 61572523, 61672033 and 61672248), Key Research and Development Program of Shandong Province (No. 2017GGX10147), China Postdoctoral Science Foundation funded project (2016M592267), PetroChina Innovation Foundation (2016D-5007-0305), Fundamental Research Funds for the Central Universities (R1607005A). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  1. Long K, Li X, Xie T, Zhang Y, Liu W. Welan gum production from cyperus esculentus fermented by sphingomonas sp. ATCC 31555. Chemical Engineer. 2014;8:002.
  2. Li H, Li S, Feng X, Wang F, Xu H. Production of Welan Gum by Alcaligenes sp.NX-3 with Fed-batch Fermentation. Food & Fermentation Industries. 2009;35(1):1–4.
  3. Li H, Xu H, Li S, Feng X, Xu H, Ouyang P. Effects of dissolved oxygen and shear stress on the synthesis and molecular weight of welan gum produced from Alcaligenes sp. CGMCC2428. Process Biochemistry. 2011;46(5):1172–1178.
  4. Liang J, Li Z, Chen B. Optimization of Fermentation Media for Welan Gum Using JMP. Food Research And Development. 2016;37(18):104–108.
  5. Xue Y, Jiang J, Zhao B, Ma T. A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Computing. 2017;(8):1–18.
  6. Shen J, Shen J, Chen X, Huang X, Susilo W. An Efficient Public Auditing Protocol With Novel Dynamic Structure for Cloud Data. IEEE Transactions on Information Forensics & Security. 2017;12(10):2402–2415.
  7. Fu Z, Huang F, Ren K, Weng J, Wang C. Privacy-Preserving Smart Semantic Search Based on Conceptual Graphs Over Encrypted Outsourced Data. IEEE Transactions on Information Forensics & Security. 2017;12(8):1874–1884.
  8. Xia Z, Wang X, Zhang L, Qin Z, Sun X, Ren K. A Privacy-Preserving and Copy-Deterrence Content-Based Image Retrieval Scheme in Cloud Computing. IEEE Transactions on Information Forensics & Security. 2016;11(11):2594–2608.
  9. Zhao Y, Liu X, Wang W. Spiking Neural P Systems with Neuron Division and Dissolution. PLOS ONE. 2016;11(9):e0162882. pmid:27627104
  10. Liu X, Zhao Y, Sun M. An Improved Apriori Algorithm Based on an Evolution-Communication Tissue-Like P System with Promoters and Inhibitors. Discrete Dynamics in Nature and Society. 2017;2017.
  11. Liu X, Xue J. A Cluster Splitting Technique by Hopfield Networks and P Systems on Simplices. Neural Processing Letters. 2017;46(1):171–194.
  12. Liu X, Xiang L, Wang X. Spatial Cluster Analysis by the Adleman-Lipton DNA Computing Model and Flexible Grids. Discrete Dynamics in Nature and Society. 2012;2012(1–4):132–148.
  13. Liu X, Liu H, Duan H. Particle swarm optimization based on dynamic niche technology with applications to conceptual design. Advances in Engineering Software. 2006;38(10):668–676.
  14. Xia Z, Wang X, Sun X, Wang Q. A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data. IEEE Transactions on Parallel & Distributed Systems. 2016;27(2):340–352.
  15. Vapnik V. The nature of statistical learning theory. Springer Science & Business Media; 2013.
  16. Wang X, Miao Y, Cheng M. Finding motifs in DNA sequences using low-dispersion sequences. Journal of Computational Biology. 2014;21(4):320–329. pmid:24597706
  17. Wang X, Miao Y. GAEM: a hybrid algorithm incorporating GA with EM for planted edited motif finding problem. Current Bioinformatics. 2014;9(5):463–469.
  18. Wu T, Wang X, Zhang Z, Gong F, Song T, Chen Z, et al. NES-REBS: a novel nuclear export signal prediction method using regular expressions and biochemical properties. Journal of Bioinformatics and Computational Biology. 2016;14(03):1650013. pmid:27225342
  19. Liu B, Zhang D, Xu R, Xu J, Wang X, Chen Q, et al. Combining evolutionary information extracted from frequency profiles with sequence-based kernels for protein remote homology detection. Bioinformatics. 2013;30(4):472–479. pmid:24318998
  20. Zeng X, Liao Y, Liu Y, Zou Q. Prediction and validation of disease genes using HeteSim Scores. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2017;14(3):687–695. pmid:26890920
  21. Liu B, Xu J, Lan X, Xu R, Zhou J, Wang X, et al. iDNA-Prot|dis: identifying DNA-binding proteins by incorporating amino acid distance-pairs and reduced alphabet profile into the general pseudo amino acid composition. PLoS ONE. 2014;9(9):e106691. pmid:25184541
  22. Zeng X, Zhang X, Liao Y, Pan L. Prediction and validation of association between microRNAs and diseases by multipath methods. Biochimica et Biophysica Acta (BBA)-General Subjects. 2016;1860(11):2735–2739.
  23. Wang X, Song T, Pan Z, Hao MT Shaohua. Spiking Neural P Systems with Anti-Spikes and without Annihilating Priority. Romanian Journal of Information Science and Technology. 2017;20(1):32–41.
  24. Vapnik VN. Statistical learning theory. vol. 1. Wiley New York; 1998.
  25. Burges CJ. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery. 1998;2(2):121–167.
  26. Wu Y, Krishnan S. Combining least-squares support vector machines for classification of biomedical signals: a case study with knee-joint vibroarthrographic signals. Journal of Experimental & Theoretical Artificial Intelligence. 2011;23(1):63–77.
  27. Cai S, Yang S, Zheng F, Lu M, Wu Y, Krishnan S. Knee joint vibration signal analysis with matching pursuit decomposition and dynamic weighted classifier fusion. Computational and Mathematical Methods in Medicine. 2013;2013.
  28. Xuegong Z. Introduction to statistical learning theory and support vector machines. Acta Automatica Sinica. 2000;26(1):32–42.
  29. Van Gestel T, Suykens JA, Baesens B, Viaene S, Vanthienen J, Dedene G, et al. Benchmarking least squares support vector machine classifiers. Machine Learning. 2004;54(1):5–32.
  30. Amari Si, Wu S. Improving support vector machine classifiers by modifying kernel functions. Neural Networks. 1999;12(6):783–789. pmid:12662656
  31. Chen W, Xing P, Zou Q. Detecting N6-methyladenosine sites from RNA transcriptomes using ensemble Support Vector Machines. Scientific Reports. 2017;7:40242. pmid:28079126
  32. Wu Y, Luo X, Zheng F, Yang S, Cai S, Ng SC. Adaptive linear and normalized combination of radial basis function networks for function approximation and regression. Mathematical Problems in Engineering. 2014;2014.
  33. Wei G, Yu X, Long X. Novel approach for identifying Z-axis drift of RLG based on GA-SVR model. Journal of Systems Engineering and Electronics. 2014;25(1):115–121.
  34. Burges CJ. Geometry and invariance in kernel based methods. Advances in kernel methods: support vector learning. 1999; p. 89–116.
  35. Schölkopf B, Burges CJ. Advances in kernel methods: support vector learning. MIT Press; 1999.
  36. Shevade SK, Keerthi SS, Bhattacharyya C, Murthy KRK. Improvements to the SMO algorithm for SVM regression. IEEE Transactions on Neural Networks. 2000;11(5):1188–1193. pmid:18249845
  37. Goldberg DE, Holland JH. Genetic algorithms and machine learning. Machine Learning. 1988;3(2):95–99.
  38. Zhang L, Pan H, Su Y, Zhang X, Niu Y. A Mixed Representation-Based Multiobjective Evolutionary Algorithm for Overlapping Community Detection. IEEE Transactions on Cybernetics. 2017.
  39. Ju Y, Zhang S, Ding N, Zeng X, Zhang X. Complex network clustering by a multi-objective evolutionary algorithm based on decomposition and membrane structure. Scientific Reports. 2016;6.
  40. Zhang X, Duan F, Zhang L, Cheng F, Jin Y, Tang K. Pattern Recommendation in Task-oriented Applications: A Multi-Objective Perspective.
  41. Song T, Gong F, Liu X, Zhao Y, Zhang X. Spiking neural P systems with white hole neurons. IEEE Transactions on NanoBioscience. 2016;15(7):666–673. pmid:28029614
  42. Zeng X, Yuan S, Huang X, Zou Q. Identification of cytokine via an improved genetic algorithm. Frontiers of Computer Science: Selected Publications from Chinese Universities. 2015;9(4):643–651.
  43. Song T, Pan L. Spiking neural P systems with request rules. Neurocomputing. 2016;193:193–200.
  44. Srinivas M, Patnaik LM. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics. 1994;24(4):656–667.
  45. Ouyang S. A New Improved Genetic Algorithm. Computer Engineering & Applications. 2003.
  46. Li Z, Sun B, Xin Y, Wang X, Zhu H. A Computational Method for Optimizing Experimental Environments for Phellinus igniarius via Genetic Algorithm and BP Neural Network. BioMed Research International. 2016;2016.
  47. Wang X, Song T, Gong F, Zheng P. On the computational power of spiking neural P systems with self-organization. Scientific Reports. 2016;6:27624. pmid:27283843
  48. Zhang X, Tian Y, Cheng R, Jin Y. A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization. IEEE Transactions on Evolutionary Computation. 2016.
  49. Zhang X, Tian Y, Jin Y. A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation. 2015;19(6):761–776.
  50. Song T, Wang X, Zhang Z, Chen Z. Homogenous spiking neural P systems with anti-spikes. Neural Computing & Applications. 2014;24.
  51. Song T, Zheng P, Wong MD, Wang X. Design of logic gates using spiking neural P systems with homogeneous neurons and astrocytes-like control. Information Sciences. 2016;372:380–391.
  52. Song T, Xu J, Pan L. On the universality and non-universality of spiking neural P systems with rules on synapses. IEEE Transactions on NanoBioscience. 2015;14(8):960–966. pmid:26625420
  53. Liu Q, Cai W, Shen J, Fu Z, Liu X, Linge N. A speculative approach to spatial-temporal efficiency with multi-objective optimization in a heterogeneous cloud environment. Security & Communication Networks. 2016;9(17):4002–4012.
  54. Ma T, Zhang Y, Cao J, Shen J, Tang M, Tian Y, et al. KDVEM: a (k)-degree anonymity with vertex and edge modification algorithm. Computing. 2015;97(12):1165–1184.
  55. Chen B, Yang J, Jeon B, Zhang X. Kernel quaternion principal component analysis and its application in RGB-D object recognition. Neurocomputing. 2017.
  56. Wang R, Kwong S, Wang XZ, Jiang Q. Segment Based Decision Tree Induction With Continuous Valued Attributes. IEEE Transactions on Cybernetics. 2015;45(7):1262. pmid:25291806