Improving many objective optimisation algorithms using objective dimensionality reduction

  • Research Paper
  • Published in: Evolutionary Intelligence

Abstract

Many-objective optimisation problems (MaOPs) have recently received considerable attention from researchers. Due to their large number of objectives, MaOPs pose serious difficulties for existing multi-objective evolutionary algorithms (MOEAs), chiefly poor scalability, high computational cost, and difficulty of visualisation. A number of many-objective evolutionary algorithms (MaOEAs) have been proposed to tackle MaOPs, but existing MaOEAs still struggle as the number of objectives increases. Real-world MaOPs often have redundant objectives that are not only inessential for describing the Pareto-optimal front but also deteriorate the performance of MaOEAs. A common approach to this problem is to use objective dimensionality reduction algorithms to eliminate redundant objectives. By removing redundant objectives, objective reduction algorithms can improve search efficiency, reduce computational cost, and support decision making. The performance of an objective dimensionality reduction algorithm strongly depends on the nondominated solutions generated by MOEAs/MaOEAs. The impact of objective reduction algorithms on MOEAs, and vice versa, has been widely investigated; in contrast, the mutual impact of objective reduction algorithms and MaOEAs has rarely been investigated. This paper studies the interdependence between objective reduction algorithms and MaOEAs. Experimental results show that combining an objective reduction algorithm with an MOEA can successfully remove redundant objectives only when the total number of objectives is small. In contrast, combining the objective reduction algorithm with an MaOEA can successfully remove redundant objectives even when the total number of objectives is large. Experimental results also show that objective reduction algorithms can significantly improve the performance of MaOEAs.

Notes

  1. The final set of solutions returned by an MOEA at termination.

  2. It is implicitly defined by the functions composing an MOP.

  3. Objectives in evolutionary many-objective optimisation are considered features in dimensionality reduction.

  4. Computation of an objective subset of minimum size, yielding a (changed) dominance structure with a given error.

  5. Computation of an objective subset of given size with the minimum error.

  6. Eigenvalues are normalised; eigenvalues and eigenvectors are then sorted together in descending order of the eigenvalues.

  7. \(T_{cor}=1.0 - e_1 (1.0- M_{2\alpha }/M)\), in which \(e_1=0.39416\), \(M_{2\alpha }=5\), \(M=8\), giving \(T_{cor}=1.0-0.39416\times (1.0-5/8)\approx 0.852\).

  8. The selection score for each objective is calculated as \(sc_i=\sum _{j=1}^{N_v} {e_j |f_{ij} |}\) (see the sketch after these notes).
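
The eigenvalue analysis outlined in notes 6–8 can be illustrated in Python. This is a minimal sketch, assuming the decomposition is taken over the correlation matrix of a non-dominated set and summing over all eigenvectors rather than only the \(N_v\) retained ones; `selection_scores` and `F` are illustrative names, not from the paper.

```python
import numpy as np

def selection_scores(F):
    """Sketch of the eigenvalue analysis in notes 6-8.

    F: (n_solutions, M) objective values of a non-dominated set.
    Returns one selection score per objective (note 8).
    """
    R = np.corrcoef(F, rowvar=False)      # correlation between objectives (assumed)
    eigvals, eigvecs = np.linalg.eigh(R)  # symmetric eigendecomposition
    eigvals = eigvals / eigvals.sum()     # normalise eigenvalues (note 6)
    order = np.argsort(eigvals)[::-1]     # sort both descending together (note 6)
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # sc_i = sum_j e_j * |f_ij| (note 8); the paper sums over the N_v
    # significant eigenvectors, all M are used here for brevity.
    return np.abs(eigvecs) @ eigvals
```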

References

  1. Antonelli M, Ducange P, Lazzerini B, Marcelloni F (2009) Multi-objective evolutionary learning of granularity, membership function parameters and rules of Mamdani fuzzy systems. Evol Intel 2(1–2):21

  2. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76

  3. Bechikh S, Elarbi M, Said LB (2017) Many-objective optimization using evolutionary algorithms: a survey. In: Recent advances in evolutionary multi-objective optimization, Springer, Berlin, pp 105–137

  4. Brockhoff D, Zitzler E (2006) Are all objectives necessary? On dimensionality reduction in evolutionary multiobjective optimization. In: Parallel problem solving from nature-PPSN IX, Springer, pp 533–542

  5. Brockhoff D, Zitzler E (2006) Dimensionality reduction in multiobjective optimization with (partial) dominance structure preservation: generalized minimum objective subset problems. TIK Report, 247

  6. Brockhoff D, Zitzler E (2006) On objective conflicts and objective reduction in multiple criteria optimization. TIK Report, 243

  7. Brockhoff D, Zitzler E (2007) Offline and online objective reduction in evolutionary multiobjective optimization based on objective conflicts. TIK Report, 269

  8. Brockhoff D, Zitzler E (2009) Objective reduction in evolutionary multiobjective optimization: theory and applications. Evol Comput 17(2):135–166

  9. Coello CAC, Lamont GB, Van Veldhuizen DA (2002) Evolutionary algorithms for solving multi-objective problems. Springer, Berlin, p 800

  10. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791

  11. Cheung YM, Gu F (2014) Online objective reduction for many-objective optimization problems. In: 2014 IEEE congress on evolutionary computation (CEC), pp 1165–1171. IEEE

  12. Cheung Y-M, Gu F, Liu H-L (2016) Objective extraction for many-objective optimization problems: algorithm and test problems. IEEE Trans Evol Comput 20(5):755–772

  13. Cunningham JP, Ghahramani Z (2015) Linear dimensionality reduction: survey, insights, and generalizations. J Mach Learn Res 16:2859–2900

  14. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601

  15. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197

  16. Deb K, Saxena D (2006) Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In: Proceedings of the world congress on computational intelligence (WCCI-2006), pp 3352–3360

  17. DeRonne KW, Karypis G (2013) Pareto optimal pairwise sequence alignment. IEEE/ACM Trans Comput Biol Bioinform (TCBB) 10:481–493

  18. Erceg-Hurn DM, Mirosevich VM (2008) Modern robust statistical methods: an easy way to maximize the accuracy and power of your research. Am Psychol 63:591

  19. Farina M, Amato P (2002) On the optimal solution definition for many-criteria optimization problems. In: Proceedings of the NAFIPS-FLINT international conference, pp 233–238

  20. Fleming PJ, Purshouse RC, Lygoe RJ (2005) Many-objective optimization: an engineering design perspective. In: International conference on evolutionary multi-criterion optimization, pp 14–32

  21. Fu G, Kapelan Z, Kasprzyk JR, Reed P (2012) Optimal design of water distribution systems using many-objective visual analytics. J Water Resour Plan Manag 139:624–633

  22. Gu F, Liu H-L, Cheung Y-M (2017) A fast objective reduction algorithm based on dominance structure for many objective optimization. In: Asia-Pacific conference on simulated evolution and learning, Springer, pp 260–271

  23. Guo X, Wang X, Wang M, Wang Y (2012) A new objective reduction algorithm for many-objective problems: employing mutual information and clustering algorithm. In: 2012 eighth international conference on computational intelligence and security (CIS), IEEE, pp 11–16

  24. Guo X, Wang Y, Wang X (2013) Using objective clustering for solving many-objective optimization problems. Math Probl Eng 2013

  25. Hughes EJ (2003) Multiple single objective Pareto sampling. Congr Evolut Comput 2003:2678–2684

  26. Ishibuchi H, Masuda H, Nojima Y (2016) Pareto fronts of many-objective degenerate test problems. IEEE Trans Evol Comput 20:807–813

  27. Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization. In: 3rd international workshop on genetic and evolving systems (GEFS 2008), IEEE, pp 47–52

  28. Jaimes AL, Coello CAC, Barrientos JEU (2009) Online objective reduction to deal with many-objective problems. In: International conference on evolutionary multi-criterion optimization, Springer, Berlin, pp 423–437

  29. Kaufman L, Rousseeuw P (1987) Clustering by means of medoids. North-Holland, Amsterdam

  30. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8(2):149–172

  31. Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv (CSUR) 48(1):13

  32. Li M, Yang S, Liu X (2014) Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans Evol Comput 18(3):348–365

  33. López Jaimes A, Coello CAC, Chakraborty D (2008) Objective reduction using a feature selection technique. In: Proceedings of the 10th annual conference on genetic and evolutionary computation, ACM, New York, pp 673–680

  34. Miettinen K (1999) Nonlinear multiobjective optimization. International series in operations research and management science, vol 12

  35. Pei Y, Takagi H (2013) Accelerating iec and ec searches with elite obtained by dimensionality reduction in regression spaces. Evol Intel 6(1):27–40

  36. Riquelme N, Von Lücken C, Baran B (2015) Performance metrics in multi-objective optimization. In: 2015 Latin American computing conference (CLEI), IEEE, pp 1–11

  37. Saxena DK, Deb K (2007) Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: employing correntropy and a novel maximum variance unfolding. In: International conference on evolutionary multi-criterion optimization, Springer, Berlin, pp 772–787

  38. Saxena DK, Duro JA, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evol Comput 17(1):77–99

  39. Singh HK, Isaacs A, Ray T (2011) A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Trans Evol Comput 15(4):539–556

  40. Tang J, Alam S, Lokan C, Abbass HA (2012) A multi-objective evolutionary method for dynamic airspace re-sectorization using sectors clipping and similarities. In: 2012 IEEE congress on evolutionary computation (CEC), pp 1–8

  41. Tate J, Woolford-Lim B, Bate I, Yao X (2012) Evolutionary and principled search strategies for sensornet protocol optimization. IEEE Trans Syst Man Cybernet Part B (Cybernet) 42:163–180

  42. Tian Y, Cheng R, Zhang X, Jin Y (2017) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput Intell Mag 12(4):73–87

  43. Tran CT, Zhang M, Andreae P, Xue B (2016) Improving performance for classification with incomplete data using wrapper-based feature selection. Evol Intel 9(3):81–94

  44. Wang H, Jiao L, Yao X (2015) Two_Arch2: an improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541

  45. Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736

  46. Yuan Y, Xu H, Wang B, Yao X (2016) A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(1):16–37

  47. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731

  48. Zhang X, Tian Y, Jin Y (2015) A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776

  49. Zhang Z (2011) Artificial immune optimization system solving constrained omni-optimization. Evol Intel 4(4):203–218

  50. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49

  51. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. TIK report, 103

  52. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271

Acknowledgements

This research is funded by the Ministry of Science and Technology under the Bilateral and Multilateral Research Programs (the grant for Face Recognition).

Author information

Corresponding author

Correspondence to Cao Truong Tran.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix

This appendix further investigates the proposed methods when a clustering method is used for objective dimensionality reduction.

Integrating a clustering objective dimensionality reduction algorithm into MaOEAs

Based on the correlation coefficient \(\rho (x,y)\) between random variables x and y, which ranges from \(-1\) to 1, Jaimes et al. [33] used \((1-\rho ) \in [0,2]\) to measure the degree of conflict between two objectives in an approximation set of the Pareto front of an MOP. A value of zero indicates that objectives x and y are completely positively correlated, while a value of 2 indicates that they are completely negatively correlated. A negative correlation between two objectives means that one objective increases while the other decreases, and vice versa; if the correlation is positive, both objectives increase or decrease together. Consequently, the more negatively correlated two objectives are, the more they conflict. In [33], based on the correlation matrix of a non-dominated set obtained by an evolutionary algorithm, the objective set is first divided into homogeneous neighborhoods, where the distance between objectives is taken as the conflict between them. The most compact neighborhood is then chosen, and all of its objectives except the central one are removed, as they are the least conflicting.
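
As an illustration, a minimal Python sketch of this conflict measure is given below, assuming a matrix of objective values as input; `conflict_matrix` is an illustrative name, and the random data merely stands in for a real non-dominated set.

```python
import numpy as np

def conflict_matrix(F):
    """Pairwise conflict 1 - rho between objectives, as in [33].

    F: (n_solutions, M) objective values of a non-dominated set.
    Entries range from 0 (completely positively correlated)
    to 2 (completely negatively correlated).
    """
    return 1.0 - np.corrcoef(F, rowvar=False)

# Toy usage: the least conflicting pair is a candidate for reduction.
F = np.random.rand(100, 4)            # stand-in non-dominated set
C = conflict_matrix(F)
np.fill_diagonal(C, np.inf)           # ignore self-correlation
i, j = np.unravel_index(np.argmin(C), C.shape)
print(f"least conflicting pair: f{i + 1}, f{j + 1}")
```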

The MICA-NORMOEA and OC-ORA algorithms were developed in [23] and [24], respectively. In these algorithms, an interdependence coefficient matrix is calculated, and then the PAM clustering algorithm [29] and NSGA-II [15] are invoked iteratively to remove redundant objectives until a stopping criterion is satisfied. The main difference between these methods and LPCA lies in how the relationship between a pair of objectives is modelled: LPCA uses a linear relationship, whereas these methods capture a nonlinear one.

The framework of these algorithms (MICA-NORMOEA and OC-ORA) is shown below; a code sketch of the loop follows the steps.

Step 1. Set an iteration counter \(t = 0\); the original objective set is \(F_t=\{f_1,f_2,\ldots ,f_M \}\), and the number of predefined clusters is k.

Step 2. Initialize a random population \(P_t\), run NSGA-II corresponding to \(F_t\), and obtain a non-dominated set \(A_t\).

Step 3. Calculate the interdependence coefficient matrix based on the non-dominated set \(A_t\) and use the PAM clustering algorithm to divide the objective set \(F_t\) into k clusters.

Step 4. According to the clusters of the objective set \(F_t\) obtained in Step 3, remove one of the redundant or most interdependent objectives from \(F_t\) according to the objective reduction rules above; the remaining objective set is denoted \(F_{t+1}\).

Step 5. If \(F_t = F_{t+1}\), then stop; else set \(t:=t+1\), \(F_t:= F_{t+1}\), and return to Step 2.
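
The following is a minimal sketch of Steps 1–5 in Python, assuming placeholder functions: `run_nsga2` stands for NSGA-II [15], `interdependence_matrix` for the interdependence coefficient of [23, 24], `pam_cluster` for PAM clustering [29], and `apply_reduction_rules` for the reduction rules of Step 4; none of these names come from the original papers.

```python
def reduce_objectives(problem, objectives, k, max_iters=50):
    """Sketch of the MICA-NORMOEA / OC-ORA framework (Steps 1-5)."""
    F_t = list(objectives)                  # Step 1: F_0 = {f_1, ..., f_M}
    for _ in range(max_iters):
        A_t = run_nsga2(problem, F_t)       # Step 2: non-dominated set A_t
        D = interdependence_matrix(A_t)     # Step 3: coefficient matrix
        clusters = pam_cluster(D, k)        # Step 3: k clusters via PAM
        F_next = apply_reduction_rules(F_t, clusters, D)  # Step 4
        if F_next == F_t:                   # Step 5: fixed point reached
            break
        F_t = F_next                        # Step 5: iterate with reduced set
    return F_t
```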

Results

The experiments are conducted in the same way as in Sect. 3. When calculating interdependence, the number of subintervals is set to 20, and the threshold \(\theta\) is set to 0.9.
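
The interdependence coefficient itself is defined in [23, 24]; purely as an illustration, a histogram-based mutual-information estimate using the 20 subintervals mentioned above could serve as such a nonlinear dependence measure.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of the mutual information between two
    objectives, each discretised into `bins` subintervals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```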

Table 7 shows the mean and standard deviation (in parentheses) of the GD and IGD of five MaOEAs: GrEA, KnEA, NSGA-III, RVEA*, and \(\theta\)-DEA. \(IGD_{1}\) and \(GD_{1}\) refer to the IGD and GD of the MaOEAs without an objective dimensionality reduction algorithm, while \(IGD_{2}\) and \(GD_{2}\) refer to the IGD and GD of the MaOEAs combined with clustering objective dimensionality reduction (OC-ORA) for removing redundant objectives. The table also shows the mean and standard deviation of the number of objectives retained after objective reduction. The table indicates that the combination of MaOEAs and clustering objective reduction performs significantly better than the MaOEAs alone in almost all cases. In detail, \(IGD_2\) is significantly better than \(IGD_1\) in 21 of 30 cases, and \(GD_2\) is significantly better than \(GD_1\) in 28 of 30 cases. Conversely, \(IGD_2\) is significantly worse than \(IGD_1\) in only 4 of 30 cases, and \(GD_2\) is significantly worse than \(GD_1\) in only 2 of 30 cases.
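
For reference, GD and IGD measure average nearest-neighbour distances between the obtained set and the true Pareto front [36]. The sketch below uses the plain-mean variant of GD, one of several definitions in the literature.

```python
import numpy as np

def gd(approx, pf):
    """Generational Distance: mean Euclidean distance from each
    obtained solution to its nearest true Pareto-front point."""
    d = np.linalg.norm(approx[:, None, :] - pf[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(approx, pf):
    """Inverted Generational Distance: mean distance from each
    Pareto-front point to its nearest obtained solution."""
    return gd(pf, approx)
```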

In summary, the proposed methods can be combined with different objective dimensionality reduction methods to improve evolutionary many-objective optimisation algorithms.

Table 7 The IGD and GD values relative to the true Pareto front (\(IGD_1\), \(GD_1\)); the number of objectives retained (R); and the IGD and GD values (\(IGD_2\), \(GD_2\)) after carrying out clustering objective dimensionality reduction

About this article

Cite this article

Nguyen, X.H., Bui, L.T. & Tran, C.T. Improving many objective optimisation algorithms using objective dimensionality reduction. Evol. Intel. 13, 365–380 (2020). https://doi.org/10.1007/s12065-019-00297-4
