1 Introduction

In the optimal design of composites, multiple requirements and goals, each possibly originating from a different field, form the design criteria and constraints. Optimizing composite structures therefore often leads to multi-criteria and multi-disciplinary optimization (Sobieszczanski-Sobieski et al. 2015).

The choice of fiber material should thus consider mechanical as well as technical, economical and ecological aspects. While the mechanical focus lies on designing as light as possible, this may induce conflicts with technical criteria, where high manufacturing effort may be observed (Schatz and Baier 2014a). Manufacturing effort is frequently considered by numerical process simulations or analytical models (Ghiasi et al. 2010). However, neither of these approaches might be applicable, since numerics can be computationally expensive and analytics not comprehensive enough. This is why, herein, a soft computing approach is used to generate a model based on expert knowledge (Schatz and Baier 2014a, 2014b, 2015). This approach has already been applied successfully to extruded aluminum profiles by Wehrle and Baier (2016). The methods used to form such knowledge-based models are associated with soft computing (Hajela 2002). There are similar approaches exploiting this, e.g. Pillai et al. (1997), where the cure of thick composites is considered via a knowledge-based system, and Iqbal et al. (2007), where a fuzzy-rule-based system is used to model a hard-milling process. Further examples are given by Huber and Baier (2006) and Zhou and Saitou (2015). However, regarding generality, flexibility in the choice of algorithm, the formalization of the optimization task and the level of detail of the manufacturing model, there remains room for improvement.

For the optimization to be as technically relevant as possible, it evidently needs to comprise structural aspects as well. Therefore, a holistic optimization frame, depicted in Fig. 1 and inspired by Baier et al. (1994), will be outlined in this paper as well. Furthermore, since multiple criteria need to be considered throughout the optimization, the optimization approach needs to be augmented. Edgeworth was among the first to address multi-criteria problems (Edgeworth 1881) by describing possible settlements between conflicting consumer interests. Considerable contributions were made by Pareto, e.g. Pareto (1906). In Baier (1977, 1978) and Stadler (1984), applications in structural design showed great potential by simultaneously considering multiple criteria and thereby yielding optimal compromises. Sobieszczanski-Sobieski et al. (2015) covers most of the mentioned aspects, namely multiple objectives, multiple disciplines and knowledge engineering.

Fig. 1 Overview of the optimization process

Herein, the propeller is braided. Thus, the stacking of the different layers is defined by textile preforms, called braids. These braids are formed by continuously placing dry fiber bundles onto a mandrel, which defines the structure's shape, while the fiber carriers (horn gears) counter-rotate, thus realizing a distinct fiber architecture (Mazumdar 2002). This is illustrated in Fig. 2.

Fig. 2 The braiding technology

In this work, the fundamentals of multi-criteria optimization and soft computing are given first. This section is followed by the discussion of the structural model and, particularly, the effort model. Then, the outcomes of various optimizations, which for instance vary in the considered set of criteria, will be discussed. Lastly, the paper is concluded with a summary and an outlook.

2 Fundamentals for holistic multi-criteria optimization and soft computing

2.1 Mathematical basis of multi-criteria optimization

A general multi-criteria optimization statement is defined by,

$$ \begin{array}{c l l} \underset{\mathbf{x} \in \chi}{\text{minimize}} & f_{i}(x_{1},\ldots,x_{j},\ldots,x_{n_{\text{DV}}}), \; & i = 1, \ldots, n_{\mathrm{O}}, \\ \text{subject to} & g_{l}(\mathbf{x}) \leq 0, \; & l = 1, \ldots, n_{\text{IC}}, \\ \text{with} & \chi = \{\mathbf{x} \in \mathbb{R}^{n_{\text{DV}}} : & {x^{l}_{j}} \le x_{j} \le {x^{u}_{j}} \}, \end{array} $$
(1)

with x, f and g being the vector of design variables, the vector of objectives (more frequently in this context: the criteria vector) and the vector of inequality constraints (Stadler 1988). A multi-criteria optimization problem, i.e. n O>1, can be categorized by the number of criteria, e.g. bi- or tri-objective. The set of all feasible criteria values is referred to as the feasible criterion space \(\mathcal {Y}\), i.e. \(\mathcal {Y} : = \{ \mathbf {f}(\mathbf {x}) : \mathbf {g}(\mathbf {x}) \leq 0, \mathbf {x} \in \chi \}\). This space is depicted in Fig. 3, which, in addition to \(\mathcal {Y}\), shows the best and worst fictitious criteria values, the utopia point f and the nadir point f, where, e.g., \({f}^{\blacksquare }_{i} = \underset {x \in \chi }{\min } \left \{ f_{i}(\mathbf {x}) | \mathbf {g}(\mathbf {x}) \leq 0 \right \}\) (Marler and Arora 2004).

Fig. 3 Depiction of an exemplary criterion space, Pareto frontier and Pareto efficient set

Moreover, one can find the set of optimal criteria values, referred to as the Pareto efficient solutions Ω E, and the so-called Pareto frontier Ω P in Fig. 3. The Pareto frontier is herein understood as a superset of all Pareto efficient solutions in the general case, thus Ω P ⊇ Ω E. In the following, an approach capable of approximating the Pareto frontier Ω P will be presented. This approach is gradient-based, since this class of algorithms proved to be the most efficient for the discussed structural composite problem. Reasons for this are the large design space, the availability of analytical gradients and the lack of multi-modality in the involved optimization responses. An overview of solution methods is given by Marler and Arora (2004).
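To make these definitions concrete, the following minimal Python sketch, an illustration rather than part of the implementation discussed later, filters a set of criteria vectors for Pareto efficiency and computes the utopia and nadir points; the sample values are assumptions.

```python
# Minimal sketch: a point is Pareto efficient if no other feasible point
# is at least as good in all criteria and strictly better in one.
import numpy as np

def pareto_efficient(F):
    """Boolean mask of the Pareto efficient rows of F (all minimized)."""
    n = F.shape[0]
    efficient = np.ones(n, dtype=bool)
    for p in range(n):
        if not efficient[p]:
            continue
        dominated = np.all(F <= F[p], axis=1) & np.any(F < F[p], axis=1)
        dominated[p] = False          # a point does not dominate itself
        if dominated.any():
            efficient[p] = False
    return efficient

F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])  # sample data
f_utopia = F.min(axis=0)                          # best fictitious values
f_nadir = F[pareto_efficient(F)].max(axis=0)      # worst over efficient set
```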

2.2 Solution technique

In this paper, the approach first introduced by Pereyra (2009) has been chosen, since it performed best in terms of numerical effort and equispaced solutions. The key idea is to solve for discrete points \( \mathbf {f}^{p}_{\text {opt}}\) of the Pareto frontier Ω P and thus approximate it as given with (2). Figure 4 illustrates this approximation.

$$ {\Omega}_{\tilde{P}} := \left\{ \mathbf{f}^{p}_{\text{opt}} \; \text{with} \; p = 1,...,n_{p} \right\} \subsetneqq {\Omega}_{P} . $$
(2)
Fig. 4 Approximation \({\Omega }_{\tilde P}\) defined by (2)

Each of these solutions f p is locally Pareto efficient and is computed from a scalarized, i.e. substitute single-objective, optimization problem; hence, n p single-objective optimization tasks need to be solved to obtain \( {\Omega }_{\tilde {P}} \). The scalarized single-objective problem is thereby defined as follows,

$$ \begin{array}{c l l} \underset{\mathbf{x}, \kappa}{\text{minimize}} & \tilde{f} = \kappa f_{1} + (1-\kappa) f_{2} & \\ \text{subject to} & g_{l}(\mathbf{x}) \leq 0, \; & \\ & g_{\text{BS}}(\mathbf{x}) \leq 0, \; & \\ \text{and} & h_{\gamma}(\mathbf{x}) = \| \mathbf{f} - \mathbf{f}^{p-1} \|_{\text{L}2}^{2} = \gamma^{2} , \; & \\ \text{with} & \mathbf{x} \in \chi \; \text{and} \; \kappa \in [0,1]. \end{array} $$
(3)

As can be observed, the scalarized problem stated via equation set (3) is extended by the additional design variable κ, being the objective weight of the scalarization, and by two augmented constraints. Both constraints are given with equation set (4) and are also illustrated in Fig. 5.

$$ \begin{array}{r l} g_{\text{BS}}(\mathbf{x}) &= - \left( \mathbf{f}^{p-1}-\mathbf{f}^{p-2} \right)^{T} \left( \mathbf{f}(\mathbf{x})-\mathbf{f}^{p-1} \right) \leq 0, \\ h_{\gamma}(\mathbf{x}) &= \| \mathbf{f} - \mathbf{f}^{p-1} \|_{\text{L}2}^{2} = \gamma^{2} \end{array} $$
(4)
Fig. 5 Exemplary illustration of the bi-objective criterion space

The first constraint, the inequality g BS, ensures that the algorithm does not step back, i.e. that no previously computed solution is computed again. h γ, on the other hand, ensures that the final approximation of the Pareto frontier is equispaced, i.e. displays equal distances between each pair of subsequent solutions. Details about their implementation and influence are discussed in Pereyra et al. (2013). For this to work properly, the distance between two subsequent solutions is restricted based on the following,

$$ \gamma = \frac{c}{n_{P}} \left\| \mathbf{f}^{1} - \mathbf{f}^{n_{p}} \right\|_{2} , $$
(5)

with the anchor points f 1 and \(\mathbf {f}^{n_{p}} \) and the chord factor c, a parameter accounting for the convexity of the Pareto frontier. Herein, c is set to 1.2, since most Pareto frontiers are slightly bent.

Finally, the gradient-based approach for computing equispaced Pareto frontier approximations is given in the form of a flowchart in Fig. 6.

Fig. 6 Flow chart of the implemented algorithm

2.3 The concept to define knowledge-based effort models

Now, the developed concept for generating effort models based on expert knowledge will be introduced. These models require only a fraction of the computational effort of conventional process simulations, while being similarly comprehensive. The concept can basically be separated into two levels: knowledge engineering and soft computing. Figure 7 brings both levels into context with the acquisition of knowledge and how it is numerically emulated.

Fig. 7 The two major levels determining knowledge-based models

The level of knowledge engineering

This level represents the challenge of acquiring knowledge from domain-specific experts and its subsequent translation into a well-structured knowledge base (KB) amenable to numerical implementation. In this regard, it can be considered an intermediate step towards the final knowledge-based system (KBS) (Feigenbaum and McCorduck 1983). At first, one gathers fundamentals from sources like literature, existing data bases, norms or even preliminary numerical analyses. The knowledge engineer (KE), i.e. the engineer who acquires the expert knowledge, thereby becomes familiar with the terminology and peculiarities of the domain-specific topic. In this way, the KE anticipates communication issues that could cause flaws (Buchanan and Shortliffe 1984). Thereafter, knowledge is acquired by conducting multiple interviews, each ideally guided by a questionnaire. The questionnaires shall facilitate a fluent, unbiased and unbroken interview, also allowing the KE to record comprehensively by making short notes. Figure 8 shows two questions of the developed electronic, and hence interactive, questionnaire. The first question, i.e. Example 1, is regarded as open, because the range of possible answers is not restricted, whereas the second, Example 2, is a closed one, since a specific ratio is queried. The open question type is typically used when the KE explores new domains of the to-be-modeled topic; the closed ones, in contrast, help to deepen and detail the KB. Hence, the questions shall evolve from open to closed types throughout the iterative acquisition process.

Fig. 8 Questions of the interview for acquiring expert knowledge

Besides, the generality of the deduced KB can be guaranteed when the KE abstains from including absolute numbers and uses relative definitions instead. For instance, the query of a minimal ratio \(\frac {R}{D}\) is generally valid, whereas asking for limits on the radius R can be misleading or dependent on the expert's field of work.

Next, the interview records are translated into the knowledge base. Figure 9 provides an extract of an interview record being interpreted by the KE in terms of a fuzzy membership function. This function will be explained in the following paragraph. It shall, however, be noted that the membership function (given in blue) is able to address the imprecision due to the qualitative nature of the expert statements. Furthermore, one can observe that the expert, and thus the manufacturing process itself, dictates the shape of the membership function.

Fig. 9 Part of an interview record aiming to acquire expert knowledge

The level of soft computing

This level is characterized by the knowledge-based system itself. Here, a fuzzy inference system (FIS) is used to evaluate and reason about information described by the knowledge previously acquired from experts. This FIS is based on the fuzzy logic arithmetic initiated by Zadeh (1994), and its inputs, inference rules and outputs are arranged according to Mamdani and Assilian (1975), which makes it a Mamdani FIS.

This FIS has, however, been extended in various ways. Figure 10 illustrates a general Mamdani FIS, where crisp input variables x i are first translated into fuzzy sets (fuzzification to \(\mu ^{j}_{x_{i}}\)) and evaluated rule-wise (inference of each rule j). The output of each rule \({\mu ^{j}_{r}}\) (implication) is combined into one single output (aggregation to μ r ) and translated back to a crisp value (defuzzification). The implication and aggregation can be realized by different methods, each representing either AND or OR rules. In Fig. 10, the minimum rule is used for the implication and the maximum rule for the aggregation. Despite the wide applicability of these two, a variety of further rule definitions exists in literature and practice. Defuzzification is most frequently accomplished by computing the center of gravity of the aggregated output function, thus,

$$ r_{\text{CoG}} = \frac{{\int}_{R} \mu_{r} r \mathrm{d}r}{{\int}_{R} \mu_{r} \mathrm{d}r} $$
(6)
Fig. 10 A general Mamdani fuzzy inference system

The Mamdani FIS has been extended such that the weights leading to a certain output are utilized to reveal the reasons causing that specific output. Thus, the knowledge-based system founded on such a FIS is able not only to evaluate complex situations, but moreover to provide reasoning about the decisions made and to give advice for optimal improvement. This extension was realized by fetching the arguments of the evaluated rules j and consequently querying the major contributor to the output of a given rule. This is shown next, at first for the aggregation rule and then followed by the query for the key argument of the implication, where j Active represents the most relevant rule and i Active the decision-dominating input variable index.

$$\begin{array}{@{}rcl@{}} j_{\text{Active}} &:=& \underset{j}{\arg\max} \{ {\mu_{r}^{j}} \} \end{array} $$
(7)
$$\begin{array}{@{}rcl@{}} i_{\text{Active}} &:=& \underset{i}{\arg\min} \{ \mu_{x_{i}}^{j_{\text{Active}}} \} \end{array} $$
(8)
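Continuing the FIS sketch above, (7) and (8) can be emulated as follows: the strongest rule j_Active is found by the arg max over the rule strengths, and its weakest antecedent i_Active is reported as the decision-dominating input (names remain illustrative).

```python
# Sketch of the reasoning extension per (7)-(8), reusing fuzzify() above.
def reason(angle, radius):
    mu = fuzzify(angle, radius)
    rules = [  # antecedent memberships per rule, as used in infer()
        {'angle_high': mu['angle_high'], 'radius_small': mu['radius_small']},
        {'angle_low':  mu['angle_low'],  'radius_large': mu['radius_large']},
    ]
    strengths = [min(rule.values()) for rule in rules]
    j_active = int(np.argmax(strengths))                      # eq. (7)
    i_active = min(rules[j_active], key=rules[j_active].get)  # eq. (8)
    return j_active, i_active

j, i = reason(55.0, 1.5)   # rule 0 dominates, limited by 'radius_small'
```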

3 Developed models and problem statement

3.1 Propeller design

The developed methods will herein be demonstrated on a propeller blade of a piston engine airplane. For such a propeller blade, two critical loading conditions can be identified: maximum thrust T max and windmilling T min. The latter is characterized by an operation of the propeller similar to a windmill, thus, minimal thrust.

In addition to these two loading conditions, the modal performance of the propeller is relevant as well. Hence, the first three natural frequencies ω i need to be higher than the rotary excitation frequency, i.e. the revolution rate of the blade. Table 1 and Fig. 11 provide these frequencies for the initial design.

Table 1 Eigenfrequencies of the first three modes
Fig. 11 The first three modal vibration shapes

Multiple analyses and a review of the literature (Theodorsen 1948) revealed that, for this case, the aerodynamics and the structural mechanics can reasonably be separated. However, by doing so, coupling effects such as flutter are not considered here. Therefore, in the following section, a computational fluid dynamics (CFD) simulation is conducted so as to compute the pressure field for different configurations, which is then passed to the structural simulations.

3.2 Determining the pressure fields via fluid simulations

The flow around the propeller is described by the conservation of mass, momentum and energy. The latter is used to account for compressibility effects arising due to the high rotational speed of the propeller. These three equations are known as the compressible Navier-Stokes equations (Tu et al. 2008). Turbulence effects are modeled with the shear stress transport (SST) eddy-viscosity model. To account for the inertia effects of the rotating propeller, the discretized flow domain is divided into a rotary domain with a moving reference frame according to a frozen rotor concept and a stationary domain covering the far flow field (Kumar and Wurm 2015). Both are connected via an interface and are schematically shown in Fig. 12. It shall be noted that the fluid domain is by far larger than the propeller domain, causing the CFD simulation to be computationally expensive.

Fig. 12 Illustration of the CFD model

Figure 13 shows one of the computed pressure fields. The pressure fields are subsequently mapped onto the structural model (see Table 2).

Fig. 13 Pressure field for the maximum thrust case

Table 2 Overview on the solution sequence as implemented

3.3 Parametrized structural model

In Fig. 14, the parametrized structural model is given. This geometric model has been set up using Abaqus CAE. As can be observed from the presence of three distinct braid layers, i.e. the thicknesses t 1,2,3, the propeller blade is over-braided multiple times.

Fig. 14 Structural model of the propeller

For parameterizing the braid angles of the three layers (i∈{1,2,3}), the following equation was used,

$$ \varphi_{i} (r) = \varphi_{S}^{(i)} + \alpha^{(i)} r + \left( \frac{\varphi_{E}^{(i)} - \varphi_{S}^{(i)}}{{l_{i}^{2}}} - \frac{\alpha^{(i)}}{l_{i}} \right) r^{2}, $$
(9)

with r being the coordinate axis (Fig. 14) and α (i) the slope at r=0,

$$ \left. \frac{\mathrm{d} \varphi_{i}} {\mathrm{d} r} \right|_{r=0} = \alpha^{(i)} , \hspace{1cm} i \in \{ 1,2,3 \}. $$
(10)
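A direct transcription of (9) and (10) into Python follows; the sample values are illustrative, not the optimized design. By construction, φ i (0) = φ S, φ i (l i ) = φ E and the slope at r=0 equals α (i).

```python
# Quadratic braid-angle distribution per eq. (9): interpolates the start
# angle phi_s at r = 0 and end angle phi_e at r = l, with root slope alpha.
def braid_angle(r, phi_s, phi_e, alpha, l):
    """Braid angle phi_i(r) per eq. (9) for a layer of length l."""
    return phi_s + alpha * r + ((phi_e - phi_s) / l**2 - alpha / l) * r**2

# illustrative values only, not the optimized design
phi_mid = braid_angle(0.45, phi_s=45.0, phi_e=30.0, alpha=-5.0, l=0.9)
```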

Table 2 gives an overview of the implemented solution sequence. The first two steps, evaluating the stability of the structure, are optional and are only activated once an optimum is post-processed. This is mainly because all found optima were non-critical regarding buckling; however, it needed to be verified that this is the case.

3.4 Derived meso-scale material model

As mentioned earlier, braiding was chosen as the manufacturing technique for this propeller structure (see Fig. 2). Since this technique, as the name indicates, yields a rather complex fiber architecture, which significantly influences the mechanical properties, for instance through undulations, a specific material model addressing this was required as well. In addition to an accurate rendering of the mechanical properties, such as the stiffness of the braid, it was also indispensable that the material model be numerically efficient, due to its embedding in the optimization framework. To meet these conflicting requirements, it was decided to first build up a numerical unit cell model for high-fidelity homogenization analysis. The analysis was performed in Abaqus, where the TexGen libraries (Lin et al. 2011) were used to generate the unit cell depicted in Fig. 15.

Fig. 15 The derived parametrized meso-scale model

Subsequently, a meta model is built upon this homogenization model. A polynomial regression of order two was used to determine the response surface approximation given with Fig. 16.
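The following sketch shows how such a second-order response surface can be fitted by least squares; the choice of the two regressors and the data layout are assumptions for illustration, not the actual homogenization samples.

```python
# Least-squares fit of a full quadratic in two variables, one possible
# realization of a second-order RSA as in Fig. 16.
import numpy as np

def fit_quadratic_rsa(X, y):
    """Fit y ~ 1 + x1 + x2 + x1*x2 + x1^2 + x2^2 by least squares."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def eval_rsa(coef, x1, x2):
    """Evaluate the fitted response surface at a point (x1, x2)."""
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])
```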

Fig. 16 Response surface of the stiffness E xx including experimental (Exp.) and numerical (Num.) data

Comparison with further numerical analyses and experimental investigations showed that the encountered error is less than five percent. The error bars are also given in Fig. 16 as (I). More on this can be found in Schatz and Baier (2014a).

3.5 The manufacturing effort model

The manufacturing effort model (MEM) was formed based on the concepts introduced in Section 2.3. Thus, multiple interviews with braiding experts have been conducted and thereafter condensed into a knowledge base, which is then incorporated into a knowledge-based system (KBS). The final KBS is able to evaluate the level of manufacturing effort associated with the technical input variables y tech. The input variables are: the braiding angle φ, the profile circumference, the curvature of the mandrel axis, the aspect ratio, the edge radii and the number of layers.

In Fig. 17, the response surface of the MEM's primary output, the manufacturing effort, is plotted over a subset of the input variables, namely the braiding angle φ and the edge radius r. The red color hereby reflects a high effort level and blue the opposite case. In addition to this primary output, the secondary outputs reason \(\mathcal {R}\) and advice \(\mathcal {A}\) are given for multiple distinct points as well. These secondary outputs provide insight into how the MEM decided on a certain effort level (reason \(\mathcal {R}\)) and how the situation may be improved most effectively (advice \(\mathcal {A}\)). Therefore, these outputs not only support the verification of the MEM's decisions, but also enable the use of the MEM as an independent CAE tool giving live feedback to the designer.

Fig. 17 Manufacturing effort response surface over braiding angle and edge radius

The definition of manufacturing effort as a qualitative measure was made deliberately. The main reason is that this qualitative measure enables a certain degree of abstraction, which in turn leverages the MEM to a high level of generality. The alternative, cost, would lead to a model possibly being valid for one single company, because all parameters, except the mechanical and physical ones, are situation and country dependent. Examples are shift patterns, labor, capital and plant surface costs, maintenance patterns, et cetera (Mazumdar 2002).

3.6 Deducing the optimization model

Both analyses, the finite element analysis (FEA) and the manufacturing effort analysis (MEA), are conducted for each update of the design variables x so as to determine the optimization responses (f,g), making the optimization a multi-criteria (vector of objectives f) as well as a multi-disciplinary one. Herein, the analyses are conducted sequentially, since a subset of the mechanical responses is required by the MEM (r mech → y tech), e.g. geometric parameters created by the pre-processing of the FEA (Fig. 18).

Fig. 18 Computing the optimization responses (f,g), composed of the mechanical responses r mech and the technical responses r tech, for a given design vector x and a set of constant parameters p

Equation (11) states how the gradients are computed for the objectives f; a similar expression can be stated for the constraints. The first partial derivative (I) is the direct dependency on the design variables and is hence explicitly given. (II) is computed implicitly, yet analytically, by the FEA, e.g. for the displacement sensitivity ∇u = K −1{∇f−(∇K)u}. The last partial derivative (III) is obtained via finite differences. Finite differences can be afforded here, since the manufacturing effort model is evaluated almost instantaneously.

$$ \frac{\mathrm{d} f_{i}}{\mathrm{d} x_{j}} = \underbrace{\frac{\partial f_{i}}{\partial x_{j}}}_{\text{(I)}} + \frac{\partial f_{i}}{\partial r_{k,\text{mech}}} \underbrace{\frac{\partial r_{k,\text{mech}}}{\partial x_{j}}} _{\text{(II)}} + \frac{\partial f_{i}}{\partial r_{l,\text{tech}}} \underbrace{\frac{\Delta r_{l,\text{tech}}}{\Delta x_{j}}}_{\text{(III)}} $$
(11)
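A schematic Python sketch of this hybrid sensitivity evaluation follows; all callables are placeholders standing in for the FEA and MEM interfaces of the actual framework, not its API.

```python
# Assembly of the total derivative (11): explicit part (I), analytic FEA
# sensitivities (II), forward finite differences over the cheap MEM (III).
import numpy as np

def total_gradient(x, df_dx, df_drm, drm_dx, df_drt, mem, h=1e-6):
    """d f / d x_j per eq. (11); mem(x) returns the technical responses."""
    grad = df_dx(x)                        # (I) explicit dependency
    grad = grad + df_drm(x) @ drm_dx(x)    # (II) analytic, from the FEA
    rt0 = mem(x)                           # (III) finite differences:
    drt_dx = np.empty((rt0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        drt_dx[:, j] = (mem(xp) - rt0) / h
    return grad + df_drt(x) @ drt_dx
```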

As an aside, the whole optimization is set up and solved using EOS (Environment for Optimization and Simulation), an interface to the optimization package pyOpt (Perez et al. 2012). EOS was written by da Rocha and the first author at the Institute of Lightweight Structures.

4 Results of the optimization

4.1 Choice of optimization algorithm

Unless stated otherwise, the NLPQLP algorithm by Schittkowski (2010), a sequential quadratic programming algorithm, has been used. The optimizations have been conducted with different algorithms available in the pyOpt module; however, NLPQLP needed the fewest function evaluations. The choice of a gradient-based optimization was made after an extensive parametric study, which did not reveal non-convexity in the design responses. To further ensure that the results shown are not of local nature, the optimization has been started from multiple starting points. The starting points themselves were determined by scanning the design space via Latin hypercube sampling so as to ensure a certain coverage.
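A minimal sketch of such a Latin hypercube generation of starting points, using SciPy's qmc module, is given below; the bounds are placeholders, not the design space of Table 3.

```python
# Latin hypercube sampling of starting points over the design space chi.
import numpy as np
from scipy.stats import qmc

lower = np.array([10.0, 10.0, 10.0, 0.4, 0.4, 0.4])   # x_j^l (placeholders)
upper = np.array([80.0, 80.0, 80.0, 2.0, 2.0, 2.0])   # x_j^u (placeholders)

sampler = qmc.LatinHypercube(d=lower.size, seed=1)
starts = qmc.scale(sampler.random(n=8), lower, upper)  # 8 starting points
```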

4.2 Simultaneous optimization on mass and frequency

At first, the propeller structure will solely be optimized considering the criteria mass m and the first, i.e. lowest, natural frequency ω 1. While minimizing m and maximizing ω 1, all structural requirements are imposed as constraints g; manufacturing aspects are not considered yet. The structural constraints are the maximum tolerable hub force F h,Max, bounds on the natural frequencies defined by the maximum revolutions n P,Max, stiffness requirements u Tip,LC1/2,Max and, ultimately, structural integrity for both load cases measured by the failure index \(\mathcal {F}\mathcal {I}\). For the last constraint, it shall be noted that failure is evaluated for multiple sections n Sec along the propeller blade based on the Puck failure theory. The multi-criteria optimization task can thus be defined as follows,

$$ \begin{array}{c l} \underset{x \in \chi}{\text{minimize}} & \mathbf{f}(\mathbf{x}) = [ \; m; \; -\omega_{1} \; ]^{T} \\ \text{subject to} & g_{1}(\mathbf{x}) = \frac{ F_{h} (m )}{ F_{h,\text{Max}}} -1 \\ & g_{2,\dots,11}(\mathbf{x})= 1- \frac{60}{2 \pi} \frac{ \omega_{1,...,10}} { n_{\text{P,Max}}} \\ & g_{12,\dots,13}(\mathbf{x}) = \frac{ u_{\text{Tip,LC1/2}}} { u_{\text{Tip,LC1/2,Max}}} -1 \\ & g_{14,\dots,13+n_{\text{Sec}}}(\mathbf{x}) = \frac{\mathcal{F}\mathcal{I}_{\text{LC1,s}} } { 0.9}-1 \\ &g_{14+n_{\text{Sec}},\dots,13+ 2 n_{\text{Sec}}}(\mathbf{x}) = \frac{\mathcal{F}\mathcal{I}_{\text{LC2,s}} } { 0.9}-1 \end{array} $$
(12)

Table 3 gives the design variables x and the design space χ.

Table 3 Definition of the design space

This optimization task is solved by the gradient-based approach given by Fig. 6, where the optimization problem is scalarized in analogy to (3) and solved considering both additional constraints defined by (4) (see also Fig. 5). The thereby computed approximation of the Pareto frontier is depicted in Fig. 19, with the red dots representing the found Pareto efficient criteria (n P =9 optimizations) and the blue circles illustrating the equality constraints h γ.

Fig. 19 Computed approximation of the Pareto frontier of (12)

Studying Fig. 19 reveals that both criteria of (12) are competing and form a pronounced convex Pareto frontier. This can also be comprehended from the extremal solutions, which differ considerably: Δm=1.7kg (36 %) and Δω 1=34Hz (97 %). The fact that both criteria are competing is physically plausible as well, because reinforcing the propeller's root elevates the first natural frequency, but also increases the mass. The non-normalized approximation of the Pareto frontier \({\Omega }_{\tilde {P}}\) is given in Fig. 20.

Fig. 20 Approximation of the Pareto frontier

Next, the objective weights κ are plotted for each of the n P =9 single-objective optimizations in Fig. 21. Again, κ was utilized for scalarizing f such that f κ = κm−(1−κ)ω 1. This figure underpins how the objective weights evolve over the different points of the frontier and thus reveals how challenging it would be to choose κ a priori for obtaining a certain Pareto efficient solution. This has also been observed by Zhang and Gao (2006), who varied their weights as well and moreover revealed that the set of active constraints has a major impact on the actual value of the weights κ.

Fig. 21 Plot of the objective weights κ

4.3 Contrasting the gradient-based multi-criteria optimization with a genetic algorithm

For the sake of a rudimentary comparison, the multi-criteria optimization task stated by (12) will now be solved by an alternative approach, namely a genetic algorithm (GA) whose fitness evaluation is extended such that multiple objectives can be considered. A GA has been chosen for the comparison, since in many cases where soft computing methods have been used to define an optimization model, a GA has been used as well. NSGA-II (Deb et al. 2002) is such a biologically inspired algorithm. It has been used here with default settings, thus leaving room for possible improvement. Figure 22 provides the solutions obtained by the gradient-based Pareto approximation approach and those obtained by the NSGA-II algorithm, where the three GA runs differ in the population size n pop and the number of generations n gen according to (n pop=72, n gen=10), (n pop=144, n gen=20) and (n pop=288, n gen=10).

Fig. 22 Comparing the Pareto front obtained via the gradient-based approach with a genetic algorithm for a specified number of function evaluations

The gradient-based approach performed to our satisfaction. First of all, the number of function evaluations is orders of magnitude smaller than that required by the GA. On the cluster nodes used, neglecting parallelization even though both approaches are parallelizable, this difference in function evaluations translates into computation times of 3h:47min for the gradient-based approach and 3d:14h:17min for the GA. Secondly, the information gain is greater due to the equal spacing between Pareto efficient solutions. And, last but not least, the gradient-based solutions dominate all others. The latter advantage can be fortified by evaluating the well-known optimality criteria given by the Karush-Kuhn-Tucker (KKT) conditions (Kuhn and Tucker 1951). Such that the KKT conditions can be plotted for all derived solutions p of Figs. 19 and 22, the first-order optimality \(\mathcal {O}\), a norm of the stationarity of the Lagrangian function \(\mathcal {L} := f_{\kappa } + \mathbf {\lambda } \mathbf {g}\) and the complementary slackness, i.e. \( \mathcal {O} = \left \| [ \nabla \mathcal {L}; \; \mathbf {\lambda } \mathbf {g} ]^{T} \right \|_{2} \), is introduced. Because both the stationarity and the slackness need to be zero at an optimum, local or global, \(\mathcal {O}\) should take values being numerically zero as well (Fig. 23).

Fig. 23 First-order optimality \(\mathcal {O}\) for the solutions in Fig. 22

In spite of these advantages, it shall also be mentioned that the GA approach is by far more flexible regarding the optimization problem. This becomes clear when considering that a GA can generally handle discrete variable types with ease and that it imposes fewer requirements on the behavior of the system responses, e.g. convexity, continuity, et cetera. Moreover, the performance of the GA can certainly be improved by increasing the number of generations and the population size. Nonetheless, this may only be viable if computational resources are large or if computation times are low.

4.4 Post-processing of the conducted optimization

The gradient-based approach also enables new post-processing possibilities. Next, an extrapolation similar to the concept of shadow prices will be derived and shown. For this sake, the Lagrangian function \(\mathcal {L} := f_{\kappa } + \lambda \mathbf {g}\) is perturbed first (\(\delta \mathcal {L}\)), while stationarity is assumed as well, thus,

$$ \delta \mathcal{L} \overset{!}{=} 0 = \kappa \delta f_{1} + (1-\kappa ) \delta f_{2} +\mathbf{ \lambda} \delta \mathbf{ g} $$
(13)

Rearranging this and further assuming that feasibility is retained (\(\delta \mathbf {g} \overset {!}{=} \mathbf {0}\)) yields the following statement,

$$ \frac{\kappa}{\kappa -1} = \frac{\delta f_{2}}{\delta f_{1}} \overset{\delta \to 0}{=} \frac{\partial f_{2}}{\partial f_{1}} , $$
(14)

which basically defines the slope of the Pareto frontier Ω P at any given point. To verify this statement, which has already been mentioned and discussed for structural problems in Baier (1978), a comparison will be made next. The basis for comparison is a response surface approximation (RSA) of the Pareto frontier depicted in Fig. 20, where the dashed line represents the RSA of \({\Omega }_{\tilde {P}}\). The following (15) provides the underlying polynomial regression.

$$ \omega_{1,\text{RSA}} = -11.4 \frac{m^{2} \text{[Hz]}} {\text{[kg\textsuperscript{2}]}} + 66.7 \frac{m \text{[Hz]}} {\text{[kg]}} -29.8 \text{[Hz]} , $$
(15)

Differentiating this equation at m=1.8kg and computing the slope \(\frac {\partial \omega _{1}} {\partial m}\) of the Pareto frontier based on κ leads to the results of Table 4. It can be verified that the relative difference between both methods is less than ten percent, which makes the approach given by (14) a viable one, especially considering that the RSA model itself is subject to numerical inaccuracies as well. Knowing the slope at different points enables an extrapolation, thus the estimation of the possible improvement if one criterion is relaxed; i.e., at the discussed point, a ten percent increase in mass m will possibly improve the frequency ω 1 by five Hertz.

Table 4 Slope of the Pareto frontier at m=1.8kg
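The comparison behind Table 4 can be cross-checked numerically as in the following sketch; the κ value here is a hypothetical placeholder, not the value underlying Table 4.

```python
# Frontier slope from the scalarization weight, eq. (14), versus the
# derivative of the RSA, eq. (15), at m = 1.8 kg.
kappa = 0.96                            # hypothetical optimizer output
slope_kappa = kappa / (kappa - 1.0)     # d f2 / d f1 per eq. (14)

m = 1.8                                 # evaluation point [kg]
slope_rsa = -2.0 * 11.4 * m + 66.7      # d omega_1 / d m from eq. (15)
# with f2 = -omega_1 in the scalarization, slope_kappa should approximate
# -slope_rsa (here: -24.0 vs. -25.66)
```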

In addition to that, the concept of shadow prices is applicable as well. Thus, if one seeks the imprint of the stiffness constraint \(g_{13} := \frac {u}{u_{\text {Tip}}}-1\), or in other words the hidden price (in terms of the criteria) paid such that optimality (including feasibility) is reached, the following statement can be used,

$$ \begin{array}{r l} \text{LIN} (\omega_{1}) & = \left. \omega_{1} \right|_{\mathbf{x}^{\text{opt}}} + \frac{\partial \omega_{1}} {\partial u_{\text{Tip}}} {\Delta} u_{\text{Tip}} \\ & = \left. \omega_{1} \right|_{\mathbf{x}^{\text{opt}}} + \underbrace{ \frac{\partial \omega_{1}}{\partial f_{\kappa}}}_{\frac{1}{\kappa-1}} \underbrace{ \frac{\partial f_{\kappa}}{\partial g_{13}} }_{- \lambda_{13}} \underbrace{ \frac{\partial g_{13}}{\partial u} }_{\frac{1}{ u_{\text{Tip}}} } \underbrace{ \frac{\partial u} {\partial u_{\text{Tip}}} }_{1} {\Delta} u_{\text{Tip}} \\ & = 54.9\,\mathrm{[Hz]} + 1.4 \frac{\mathrm{[Hz]}}{[\%]} {\Delta} u_{\text{Tip}} \end{array} $$
(16)

Assuming one could relax the displacement requirement by five percent, i.e. Δu Tip=5 %, (16) extrapolates an increase in the natural frequency of seven Hertz. This underpins how the post-processing of such gradient-based multi-criteria optimizations can support the understanding of the optimal design regarding robustness, possible improvements, et cetera. Consequently, decision making is leveraged by this increase in transparency.

4.5 Considering manufacturing effort as a criterion

So far, the propeller has solely been optimized based on structural criteria. Now, the optimization model is made more holistic by incorporating technical aspects, namely the manufacturing effort computed by the MEM introduced in Section 3.5. Before conducting this optimization task, the design displaying the maximum natural frequency (at the upper right of Fig. 20) is evaluated in terms of manufacturing effort. Figure 24 depicts the effort density \(\dot {e}_{i}\), i.e. the effort per area or pointwise evaluated effort. The layer-wise effort e i is then defined by \(e_{i} = \frac {1}{\mathcal {A}} {\int }_{\mathcal {A}} \dot {e}_{i} \; \mathrm {d} A\), where \(\mathcal {A}\) denotes the propeller surface. As can be seen in Fig. 24, the associated levels of manufacturing effort are high for each of the three braid layers (indicated by the red color).
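On a discretized surface, this integral reduces to an area-weighted average, as in the following small sketch (the function and argument names are assumed, not the framework's interface).

```python
# Discrete version of e_i = (1/A) * integral(e_dot dA) on the FE surface
# mesh: average the element-wise effort densities weighted by element area.
import numpy as np

def layer_effort(e_dot, areas):
    """Layer-wise effort e_i from effort densities and element areas."""
    return np.sum(e_dot * areas) / np.sum(areas)
```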

Fig. 24 Manufacturing effort densities \(\dot {e}_{i}\) for each layer i of the maximum natural frequency design

For this reason, the structural multi-criteria optimization statement given with equation set (12) will be modified by replacing the criterion mass m with the manufacturing effort e, while maintaining all constraints g. Therefore,

$$ \begin{array}{c l} \underset{x \in \chi}{\text{minimize}} & \mathbf{f}(\mathbf{x}) = [ \; e; \; -\omega_{1} \; ]^{T} \\ \text{subject to} & \mathbf{g}(\mathbf{x}) \leq \mathbf{0} \end{array} $$
(17)

Using the same gradient-based multi-criteria optimization approach (see Fig. 6) leads to the approximation \({\Omega }_{\tilde {P}}\) of the whole Pareto frontier given in Fig. 25. For the computation of \({\Omega }_{\tilde {P}}\), nine single-objective optimizations were conducted, in analogy to the problem statement given by the equation set (3), where the objective vector f was scalarized to f κ = κe−(1−κ)ω 1. The minus sign was again necessary, since the frequency, as opposed to the effort level, is being maximized.

Fig. 25 Pareto frontier for f(x)=[ e; −ω 1 ]T

The approximation \({\Omega }_{\tilde P}\) in Fig. 25 appears to have some sort of dent ranging from the second to the fifth Pareto efficient solution f p. The switching of rules in the MEM (from braid opening to the critical ratio of take-up and horn gear speed) is deemed to be the origin of this dent. A plot supporting this is given in Fig. 26, where, as before with Fig. 23, the first-order optimality \( \mathcal {O} = \left \| [ \nabla \mathcal {L}; \; \lambda \mathbf {g} ]^{T} \right \|_{2}\) is depicted for each Pareto point p. Despite the fact that these KKT conditions appear to be met for each point, it shall be noted that each solution could still represent a local instead of a global optimum.

Fig. 26 First-order optimality \(\mathcal {O}\) associated with Fig. 25

4.6 Comparing the propeller designs

Before the final propeller design is introduced, an overview of the extremal solutions, i.e. those minimizing each of the distinct criteria individually, is provided in Table 5. As it turned out, each of the three criteria is in conflict with the others. This is comprehensible considering the spread of each criterion, especially when one of the others is being optimized. Studying Table 5 thus reveals the actual imprint of the manufacturing effort model: solely setting either the mass m or the eigenfrequency −ω 1 (first and second row) as the criterion, while not considering the manufacturing effort, leads to different results.

Table 5 Extremal Pareto optima

The following three Figs. 27, 28 and 29 depict the design vector of the selected optimal compromise between the criteria, where f = [ ω 1; e; m ]T = [ 57.287 Hz; 0.38; 2.3 kg ]T.

Fig. 27 Braiding angle φ i (r) for each layer i of the optimal compromise design

Fig. 28 Braid thickness t(r) of the optimal compromise design

Fig. 29 Campbell diagram of the effort-frequency-compromise design

For guaranteeing a properly designed propeller, the buckling analysis as well as a full frequency screening were performed for the optimal compromise f=[ω 1; e; m]T=[57.287Hz; 0.38; 2.3kg]T. The results of the latter are provided and discussed next. Commonly, the results of such a frequency screening analysis are plotted in a so-called Campbell diagram, wherein the revolutions of the rotor-dynamical system, here the propeller, are defined as the coordinate axis and the resulting excitation frequency (rotary frequency) is plotted in tandem with the eigenfrequencies of the system. Such a Campbell diagram is given with Fig. 29, where the red and blue curves represent the corresponding eigenfrequencies of the propeller and the black line the rotary and thus excitation frequency. Since none of the eigenfrequency curves intersects the rotary one, it can be assumed that resonance caused by the rotation of the propeller will not occur. In addition, the plot also displays the rotary stiffening of the propeller at high revolutions.

5 Conclusion

5.1 Summary

With this paper, an approach for coping with multi-criteria optimization problems involving multiple disciplines has been shown. As it turned out, the approach is capable of approximating Pareto frontiers of industry-relevant problems with a moderate number of function evaluations, while still revealing most of the frontier thanks to the equidistant spacing. This makes it an efficient and reliable approach for many engineering design problems, especially when analytical or at least semi-analytical sensitivity information is obtainable.

In addition to this, a method derived from the concept of soft computing has been presented, with which soft aspects, e.g. verbal information from experts, can be modeled and processed. This method facilitated the modeling of the technical measure manufacturing effort. For doing so, expert knowledge was accessed via interviews with braiding experts and translated into a knowledge-based system. The thereby derived manufacturing effort model was extended by the determination of the reasons causing certain levels of manufacturing effort and by additionally providing advice pointing in the direction of maximal improvement.

Last but not least, several optimizations considering structural mechanics in concert with technical aspects put forth a propeller design meeting multiple technical requirements and representing an optimal compromise between the involved objectives: mass, natural frequency and manufacturing effort.

5.2 Prospective research

The multi-criteria optimizations shown will be evaluated and studied further by, for instance, contrasting them with other scalarization techniques and Pareto approximation approaches, such as the one given by Eichfelder (2009). Moreover, the incorporation of a cost model, albeit being situation dependent, could enrich the optimization approach demonstrated on the propeller structure. The developed approach will also be applied to another large design problem to further assess its advantages and possible limitations.