1 Introduction

Composite materials are used in a wide variety of applications, ranging from the medical sector and sporting goods to automotive parts and the aerospace sector. The increasing use of composite materials in high-volume products also leads to new challenges during the design and manufacturing of composite parts [1, 2]. Optimization approaches are widely used during the design of composite parts to increase their lightweight potential. Typical results of academic as well as commercial methods are designs consisting mainly of local reinforcements that bear the loads in the part [3]. Variations of the optimized parameters (e.g. fiber orientation and patch position, size and thickness) reduce the performance of the part. Besides the reduced performance, the variations influence the part's behavior during manufacturing, which causes even stronger geometric variations.

Variations occurring during the manufacturing process of composite parts are unavoidable. Accordingly, considering these variations during the design phase of composite parts is an important task to prevent a high rate of non-conforming parts (non-conformity can, for example, refer to quality measures like geometric variations or premature failure) [4]. The presented work tackles this problem using a tolerance optimization approach. Optimized tolerance values for the design parameters of local reinforcements define parameter variations which still lead to an acceptable quality and performance of the composite part.

The contribution is structured as follows: First, an overview of related works on variations in composite structures, tolerance analysis and tolerance optimization for composite structures is given. Second, a tolerance optimization approach for composite structures with local reinforcements is presented, followed by an application of the approach. The presented approach and the application results are then discussed before a conclusion is drawn.

2 Related Works

In the following section, related works in the areas of variations in composite structures, tolerance analysis and tolerance optimization for composite structures are presented.

2.1 Variations in Locally Reinforced Composite Structures

Composite materials have a comparatively complex material behavior. On the one hand, the layered structure of anisotropic, unidirectional plies enables an optimal lay-up regarding the dimensioning load cases. The material behavior can be tailored precisely to the occurring loads and stresses. On the other hand, this high degree of design freedom requires elaborate expert knowledge for designing the composite lay-up. Due to the enormous design freedom, optimization approaches, such as those presented in [3, 5, 6], are used to reduce the design effort and increase the utilization of the favorable material properties. A further drawback of the heterogeneous composite materials is the high number of uncertainty sources [7] in the raw materials and the high number of required manufacturing steps, which leads to scatter in the resulting mechanical properties of the laminate and thus of the structural behavior. Considering this uncertainty is therefore essential for designing reliable composite products [8].

Composite structures can be analyzed on three different scales: the macro scale, the meso scale and the micro scale [9, 10]. On these three scales, different design parameters subject to variations can be observed. The micro scale, as the smallest scale, includes parameters concerning the fibers, the matrix and the interface. Exemplary parameters are the mechanical properties, voids or the fiber volume fraction. The meso scale considers the ply level. Homogenized plies are the smallest unit used for modeling a composite on this scale. Therefore, variations of ply level parameters, like ply angles and thicknesses or the mechanical properties of the homogenized ply, can be investigated. Variations from the smaller micro scale are inherently included; however, their source on the micro scale cannot be identified [10]. On the macro scale, composites are regarded as laminates with homogenized mechanical properties. The single layers and their possible variations cannot be investigated in detail; instead, they are included in the macro scale parameter variations. Besides the mechanical properties of the laminate, loads and geometry are important parameters of the macro scale [8, 10]. A more detailed view on the classification of parameters to the different scales and investigations on the effects of uncertainties is given in [11].

An alternative categorization of the occurring variations to the different scales is presented in [12]. It considers the variation sources leading to variations in the mechanical properties. The influential factors are divided into four categories: material properties, design parameters of the laminate, manufacturing process, and influences of geometry and tooling. Potter presents another categorization, determining the sources of variability during manufacturing, subdivided into variability in materials, in molding processes and in post-molding processes [7]. The variability can result in defects, which are defined as violations of geometrical tolerances and design specifications or as reduced structural performance. A taxonomy of defects is derived to classify the defects regarding the originating process step, the type of defect and the severity. The taxonomy can be found in [7] and [13]. Potter states that manufacturing defects, in contrast to defects resulting from the part design, can only be eliminated by manufacturing and production engineers [13]. Nonetheless, variations can be considered early in the design phase through conscious tolerance management, where specific variations and defects from manufacturing can be addressed in a process-oriented way, as presented in [14, 15]. The inclusion of manufacturing process simulations into tolerance management enables the investigation of the effect of varying process parameters on the resulting part variations.

Local reinforcements are used in the highly stressed areas of a part to increase its stiffness and strength. Optimization approaches that compute the laminate lay-up for composite structures result in local reinforcements with the fibers oriented along the principal stress directions [1, 3, 16, 17]. The geometry of the patches can vary freely and depends on the stress state. The physical realization of the local reinforcements can be achieved, for example, by automated tape laying (ATL) or by computerized numerical control (CNC)-cut preforms from prepreg material, which are stacked manually or automatically in the mold. Besides variations of the material and laminate properties, local reinforcements suffer from geometrical variations. On the one hand, the dimensions of the reinforcement patch are subject to variations. On the other hand, the position of the local reinforcement on the part is subject to variations.

2.2 Tolerance Analysis of Composite Structures

Tolerance analyses are used to investigate the functionality of a part under variations. Therefore, the relations between the deviating parameters and the functional key characteristics (FKCs) have to be modeled. While tolerance analysis originates from the investigation of geometrical and dimensional tolerances [18], the aforementioned variational laminate design parameters can only partly be considered as such. In this context, it is common to distinguish between design tolerances on the part geometry, which ensure the functionality, and machine tolerances, which are needed to meet the design tolerances during manufacturing [19]. Tolerances for the laminate design parameters such as ply angle, ply thickness or fiber volume fraction, as well as the dimensional parameters of the reinforcements, are most likely to be categorized as design tolerances [11]. Nonetheless, they have not been tackled extensively in the tolerance analysis of composite structures.

Previous research on tolerance analysis mainly focuses on geometrical and dimensional variations on the part level resulting from the manufacturing process. Especially the effect of geometrical variations on the assembly quality is scrutinized numerically as well as experimentally [20, 21]. Polini and Corrado show the potential of digital twins in the tolerance management of composite parts in [22]. The sole investigation of geometrical variations enables the use of the method of influence coefficients (MIC), which reduces the computational effort. Applications of the MIC for tolerance analyses of composite assemblies are presented in [20, 23, 24]. In [25] the influence of ply angle variations on the deformation of the assembly is investigated. The variations on the ply level introduce additional variability into the assembly deformation. Including variations of the manufacturing and assembly process of composite structures, such as the curing process temperature or the fixture positioning, into the variation analysis requires detailed simulation models to obtain results that are close to reality [26].

A major focus in the tolerance management of composite structures is the integration of geometrical variations resulting from spring-in into product assemblies. Besides the MIC approach, further research concentrates on the prediction of spring-in and its transfer into the assembly for general shape parts using the structural tree method (STM) [27]. Further enhancements introduce preloading of the composite parts during the clamping process into the assembly simulation. The preloading results in additional geometrical variations [28], which can also be modeled with the small displacement torsor model [29]. In addition to the geometric variations, the preloading of the compliant composite parts results in a prestressed assembly, which has to be evaluated for structural performance [30] and fatigue [31].

Variation analyses on the smaller meso and micro scales are computationally more expensive but help to understand the effect of variations on these scales on the geometric variations on the macro scale. The sensitivity analysis presented in [32] shows the high influence of variations in fiber volume fraction on the spring-in angles. Further works investigate the effects of the local fiber orientation [33] on the spring-in effect as well as the influence of process parameter variations [34], numerically and experimentally. While most variation analyses concentrate on the spring-in effect, variations on the micro and meso scales also affect the structural performance of the part. Effects of variations of optimized, locally reinforced structures on the structural behavior have been investigated thoroughly. In [12] an overview of influential parameters was given and a study of the effect of local patch reinforcement variations on the structural performance was performed. Subsequently, the effect of variations in patch parameters as well as of different modeling approaches for thickness variations in optimized and locally reinforced structures was investigated in [35]. The variations had mainly worsening effects on the performance, whereas the influence of single patches is harder to determine and mainly depends on the patch size. In [16] the sensitivities of variations of single layers in an optimized multi-layer composite structure showed the need for a careful handling of variations during product development.

2.3 Tolerance Optimization for Composite Structures

While tolerance analysis investigates the quality of a part or assembly under given variation distributions, or tolerances, the allocation of the tolerance values, as part of tolerance synthesis, is a crucial step in defining the part quality. Nonetheless, the allocation of tolerances is often based on individual expertise and rules of thumb [19]. The optimal allocation of tolerances requires the involvement of various disciplines and, due to its complexity, is mainly possible by applying numerical methods. Tolerance-cost optimization is an approach to allocate tolerance values that ensure a required quality while considering the costs caused by the tolerances [36]. The tolerance allocation is therefore formulated as an optimization problem. A wide variety of problem formulations exists; the key difference is the optimization objective, least cost or best quality. In the following, we concentrate on the least-cost formulation, where the tolerances, as design variables, are changed to find the set with the least cost (objective), while the quality is ensured by applying tolerance analysis as a constraint and thereby constraining the FKCs. For a comprehensive review of tolerance optimization, different problem formulations and applications, the interested reader is referred to [19].

The applicability of tolerance optimization to composite structures depends on the required degree of detail of the analysis and the precision of the related simulations. The investigation of geometric variations on the macro scale for rigid parts is possible with current tolerance optimization approaches. More detailed analyses including variations on the laminate level involve the meso or micro scale as well as the production process. Therefore, current tolerance optimization approaches have to be adapted to meet the challenges of composite materials. For example, variations in the local reinforcements change both the part's curing deformations and the structural behavior of the part. Challenges that have to be overcome in the tolerance optimization of composite structures are:

  • nonlinear FKCs for linear variations

  • solution spaces with local minima are possible

  • differences in the achievable accuracy

  • no data on tolerance cost curves for composite materials

So far, only a few tolerance optimization approaches specifically for composites have been developed in research. The first steps towards tolerance optimization for composite structures were made by Kristinsdottir and Zabinsky [37]. They propose an approach to find feasible tolerance boxes for given layer angles, i.e. laminate designs. To this end, they use a growing algorithm based on a bi-sectioning algorithm, growing or shrinking a hypercube while evaluating the function of the part with the resulting tolerance set [37]. Through the integration of the approach into design optimization, it is possible to find near-optimal laminate angles considering variations of the angles through a tolerance box [38]. Based on the aforementioned STM tolerance analysis, Dong [39] presented a tolerance optimization approach to allocate tolerances to the spring-in angle. The author estimates the tolerance costs using fuzzy multi-attribute utility theory (FMAUT) and minimizes them using a nonlinear programming optimization algorithm [39]. A recent tolerance optimization approach is presented by the authors in [11]. Herein, tolerances for laminate design parameters are allocated using a least-cost approach in combination with a meta model based tolerance analysis. The optimization problem is solved with a genetic algorithm, which is suitable for global optimization. In the given application example, tolerance values are calculated for the layer angles of a commonly used laminate lay-up and for the layer angles of optimized local reinforcements.

2.4 Research Questions

Summarizing the state of research, variations in composite structures occur in many different ways and influence the part and assembly quality. Tolerance management of composite structures tackles the challenge of ensuring the part quality under such variations. However, the focus of previous work has been on geometric variations on the macro scale, e.g. spring-in deformations. Beyond this, numerous variation simulation studies have been performed, investigating the effects of laminate variations on the structural behavior of the parts as well as their effects in subsequent simulations like part assembly. Methods to allocate or optimize tolerances for laminate design parameters, and thereby control those variations, have rarely been studied. Nonetheless, laminate parameter variations on the meso and micro scale, and the resulting nonlinear effects on the outputs, should be considered through tolerances. Moreover, the complexity of composite structures increases with the expanded use of optimization approaches in composite part design: optimized laminates consisting of numerous local reinforcement patches emerge. Manufacturing variations inevitably lead to a decrease in part quality, geometrically and structurally. From this, the following research questions arise:

  • How can local reinforcement patches be modeled to be considered in variation simulation?

  • How do variations of local reinforcement patches influence the composite part quality, and how strong is the influence of the different varying design parameters of locally reinforced laminates on the structural behavior?

  • Can tolerances be allocated and optimized for the reinforcement patch design parameters?

3 Tolerance Optimization for Composite Structures with Local Reinforcements

To investigate the research questions, a method for allocating tolerance values to the parameters of local reinforcement patches is developed. Tolerance optimization is used to find the optimal tolerance values which guarantee the functionality of the composite structure. In the following Sect. 3.1 the developed tolerance optimization method is presented. Furthermore, the variational parameters of local reinforcement patches are discussed, complemented by a discussion of how the different parameters are made comparable within the tolerance optimization.

3.1 Method

The proposed method can be divided into three substeps: first, the model preparation; second, the meta model training; and third, the tolerance optimization. The three steps are visualized in detail in Fig. 1.

Fig. 1: Method for tolerance optimization of composite structures with local reinforcement patches

Model Preparation

The presented method is intended to be used during the design phase of composite structures. Therefore, the first step is the modeling of the Computer Aided Design (CAD) model. Composite structures often consist of thin surfaces and are therefore modeled as shell geometry including the local reinforcements; however, the presented method is not restricted to shell geometries.

Based on the CAD model, the laminate design has to be performed. This includes the material selection and the lay-up definition. As a result, the materials, stacking sequence, thicknesses, angles and fiber volume fractions of the plies define the laminate. For structural parts, various optimization approaches can be applied to find the most suitable lay-up. For the optimal design of local reinforcements, simulation-based approaches such as those presented in [3, 6, 40, 41] lead to a structurally suitable lay-up.

The following step defines the part quality criteria. These functional key characteristics are evaluated during the tolerance analyses with respect to variations. The presented approach focuses on the use of geometrical FKCs. This includes manufacturing deformations emerging from the curing process during composite manufacturing or deformations under structural load. Furthermore, failure criteria can be used to evaluate the structural quality considering variations in the laminate and the local reinforcements.

Besides the definition of the FKCs to measure the part quality, the variational measures need to be parametrized. Therefore, the investigated measures which are subject to variations from manufacturing are defined. Besides the ply angles, thickness and fiber volume fraction, which have already been presented in [11], the focus of this work is set on geometrical variations of the local reinforcement patches. This includes the patch dimensions, position and orientation. In the current work the local patch reinforcements are restricted to flat surfaces. Curved surfaces lead to draping effects, like local changes of the fiber angles, which need to be modeled through an additional draping simulation. More details on the modeling and parametrization of local reinforcement patches are presented in Sect. 3.2. Additionally, a distribution type has to be assumed for each parameter.

To measure the defined quality criteria with respect to the parameters, FEA models are set up to evaluate the behavior during manufacturing and under structural loads. Although the approach is not limited to shell structures, meshing the FEA models with layered shell elements offers a good trade-off between computational effort and result quality.

Meta Model Training

Tolerance optimization repeatedly performs tolerance analyses and thus requires a high number of evaluations of the FKCs. Using FEA for these evaluations would result in computation times that render tolerance optimization impractical. Meta models, i.e. approximated models of a more complex model, reduce the computation times tremendously compared to the original FEA [42]. Instead of a physics-based formulation, they are a mathematical representation of the system. Meta models are generated by fitting a predefined mathematical formulation to a set of observed data. Therefore, physically implausible results are possible when using meta models. Their main advantage, however, lies in the low computational effort and short computation times. In combination with the possibility to model nonlinear systems, the application of meta models makes the tolerance optimization of composite structures feasible.

To obtain meta models that can predict the relation between the input parameters, e.g. the patch width, and the output parameters, e.g. the maximum deformation, the models have to be trained with a data set. The first step in generating the data set is the sampling procedure. Latin Hypercube Sampling (LHS) is used to design a sample of the input parameters, i.e. the variational parameters, with a uniform distribution. LHS tries to cover the sampling space uniformly. It is therefore very efficient and, compared to random sampling, requires fewer samples to gain the same amount of information [43]. The size of the design space is chosen to envelop the maximum allowable tolerances in the tolerance optimization, defined by the product developer. In the next step, the output data, i.e. the FKCs, are computed by solving each design point of the sample with the previously prepared FEA models. The gathered FEA results are then further processed, e.g. to calculate derived FKCs like geometrical tolerances. The output data is then matched with the input data and prepared for the meta model training.
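As an illustration, the following Python sketch shows how such a uniformly distributed LHS design could be generated with SciPy. The authors' implementation uses Matlab; the parameter bounds shown here are placeholders, not the values of Table 2.

```python
# Sketch: uniform Latin Hypercube design for the variational patch parameters.
# Parameter names follow Sect. 3.2 (l, w, alpha, p_x, p_y); bounds are assumed.
import numpy as np
from scipy.stats import qmc

param_names = ["l", "w", "alpha", "p_x", "p_y"]
lower = np.array([60.0, 20.0, -15.0, -20.0, -20.0])   # assumed lower bounds
upper = np.array([90.0, 40.0,  15.0,  20.0,  20.0])   # assumed upper bounds

sampler = qmc.LatinHypercube(d=len(param_names), seed=42)
unit_sample = sampler.random(n=300)            # 300 design points in the unit hypercube
design = qmc.scale(unit_sample, lower, upper)  # scale to the physical design space

# Each row of `design` is one design point to be solved with the prepared FEA models.
```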

A Gaussian process regression (GPR) is chosen as the meta model. Its high flexibility in modeling highly nonlinear data and correlations makes it applicable to a wide variety of engineering problems. Furthermore, GPR is well suited to represent deterministic applications, e.g. FEA [42].

GPR is based on a Gaussian process, a generalization of the multivariate Gaussian distribution, and is defined by a mean function m(x) and a covariance function \(k(x,x'|\varvec{\theta })\), with x and \(x'\) being evaluation points from the design space \(\mathcal {X}\) and \(\varvec{\theta }\) the hyperparameters [44]. The real process function f(x) is then modeled as a Gaussian process:

$$\begin{aligned} f(x) \sim \mathcal{GP}(m(x),k(x,x'|\varvec{\theta })) \end{aligned}$$
(1)

The covariance function is typically given by a kernel function fitted to the training data by optimizing its set of hyperparameters \(\varvec{\theta }\). In this work no noise in the observations is assumed, as FEA is a deterministic method. The mean function m(x) is defined by a constant basis function \(H=1\) and the corresponding coefficients \(\beta\) through \(H \cdot \beta\). Furthermore, a rational quadratic kernel is used for the covariance function, as it showed the best fit to the data in preliminary studies. The training process fits the GPR to the observed training data by maximum likelihood estimation, using a quasi-Newton algorithm to optimize the hyperparameters \(\varvec{\theta }\). K-fold cross-validation with five folds is used to determine the function loss.
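A minimal sketch of this training step, using scikit-learn as a stand-in for the authors' Matlab implementation: the rational quadratic kernel and the 5-fold cross-validation follow the description above, the constant-basis mean is only approximated by normalize_y=True, and the training data are random placeholders for the FEA samples.

```python
# Sketch: noise-free GPR with a rational quadratic kernel and 5-fold cross-validation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(300, 5))              # placeholder inputs (patch parameters)
y_train = np.sin(X_train @ rng.normal(size=5))    # placeholder FKC values from the FEA

kernel = RationalQuadratic(length_scale=1.0, alpha=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, alpha=1e-10)  # alpha: numerical jitter only

cv = KFold(n_splits=5, shuffle=True, random_state=0)
rmse_cv = -cross_val_score(gpr, X_train, y_train, cv=cv,
                           scoring="neg_root_mean_squared_error").mean()

gpr.fit(X_train, y_train)                         # final fit on the complete training data
```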

The quality of the trained meta models is evaluated in two ways. First, the root mean square error (RMSE) resulting from the cross-validation is used. Because of the different parameter ranges and scales, the RMSE is normalized:

$$\begin{aligned} NRMSE = \frac{RMSE}{y_{\text {train,max}}-y_{\text {train,min}}} \end{aligned}$$
(2)

Meta models with values \(NRMSE<10\%\) are considered to be of acceptable quality in this study. The second quality measure used is the coefficient of prognosis (COP) as introduced in [45].

$$\begin{aligned} COP = 1 - \frac{\sum _{n=1}^{N}\left( y_{n}-y_{\text {pred},n}\right) ^{2}}{\sum _{n = 1}^{N}\left( y_{n}-\mu _{Y}\right) ^{2}} \end{aligned}$$
(3)

The COP is calculated using an additional validation data set. This adds computational effort to the meta model generation but improves the model quality evaluation by using data the meta model has never seen before. The COP can be interpreted as the model’s prediction quality, with values between 0% and 100%. Meta models with \(COP > 90\%\) are considered to have acceptable precision.
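Both quality measures translate directly into a few lines of code. The sketch below assumes the cross-validation RMSE and the validation predictions are already available.

```python
# Sketch: meta model quality measures of Eqs. (2) and (3).
import numpy as np

def nrmse(rmse_cv: float, y_train: np.ndarray) -> float:
    """Normalized RMSE according to Eq. (2), using the training-data range."""
    return rmse_cv / (y_train.max() - y_train.min())

def cop(y_val: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of prognosis according to Eq. (3), evaluated on validation data."""
    ss_res = np.sum((y_val - y_pred) ** 2)
    ss_tot = np.sum((y_val - np.mean(y_val)) ** 2)
    return 1.0 - ss_res / ss_tot

# Acceptance criteria used in this work:
# nrmse(...) < 0.10  and  cop(...) > 0.90
```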

Tolerance Optimization

The optimization procedure is based on the least-cost tolerance optimization formulation presented by Franz et al. in [11]. The objective of the formulation is to minimize the costs related to the tolerance values. Due to the lack of cost curves for tolerances of patch parameters, dummy cost curves for the individual parameters are introduced. Since they do not represent actual costs but can be understood as a penalization of tight tolerances, they are referred to as penalties in the following. At the same time, the part quality is ensured by the applied constraints. The minimization is achieved by adapting the design variables, i.e. the tolerance values. This results in an optimization problem of the following form:

$$\begin{aligned} \min \quad & P_{\text{tot}}(\varvec{t}) = \sum \limits_{i=1}^{T} P_i(t_i) \\ \text{s.t.} \quad & g(\varvec{t}) \le S_{\text{max}} \\ & t_{i,\text{min}} \le t_i \le t_{i,\text{max}} \quad \forall i = 1,\dots ,T \end{aligned}$$
(4)

The objective function sums up the individual penalty values \(P_i\) for all T tolerance values in \(\varvec{t}\). The resulting total penalty \(P_{\text {tot}}\) is the function value to be minimized. The penalties are calculated using an individual exponential formulation

$$\begin{aligned} P_{i}(t_{i}) = a_i + b_i \cdot e^{-c_i \cdot t_{i}} \end{aligned}$$
(5)

with the coefficients \(a_i\), \(b_i\) and \(c_i\) and the tolerance \(t_i\). An example of the penalty function is depicted in Fig. 2 for the coefficients \(a = 0\), \(b = 1623.78\) and \(c = 0.9695\).

Fig. 2: Penalty curve resulting from the lower and upper bound of \(t_i\)

The inequality constraint function \(g(\varvec{t})\) represents the result of the sampling-based tolerance analysis: a scrap rate that has to be lower than the maximum allowed scrap rate \(S_{\text {max}}\). The scrap rate is estimated empirically during the tolerance analysis [11] with

$$\begin{aligned} g(\varvec{t}) &= 1 - \frac{\sum _{j=1}^{N}\prod _{k=1}^{K}q_{j,k}(Y_{j,k})}{N} \\ \text{with} \quad q_{j,k} &= {\left\{ \begin{array}{ll} 1, & \text{if } Y_{j,k}(\varvec{v}) \le B_{k}\\ 0, & \text{if } Y_{j,k}(\varvec{v}) > B_{k} \end{array}\right. } \end{aligned}$$
(6)

A sample of normally distributed variations \(\varvec{v}\), consisting of N design points for the T tolerance values in \(\varvec{t}\), is drawn using LHS. The standard deviation \(\sigma _{i}\) of the sample corresponds to the tolerance value \(t_{i}\):

$$\begin{aligned} \pm 3 \sigma _{i} = \pm t_{i} \end{aligned}$$
(7)

Figure 3 exemplarily shows the normal distribution for a tolerance value of \(\pm t = \pm 10~\text {mm}\), which results in a standard deviation of \(\sigma = 3.33~\text {mm}\), together with the histogram of a sample of size 10000, as used during tolerance optimization. For each of the K observed FKCs, the quality indicator \(q_{j,k}\) marks a design point as conforming if the meta model’s output value \(Y_{j,k}(\varvec{v})\) is below the pre-defined boundary \(B_k\) of the k-th FKC. The FKCs are therefore bounded with an upper specification limit (USL). For more complex FKCs, lower and upper specification limits can be applied, as in [46]. The scrap rate is then calculated by taking the fraction of conforming parts among the N samples, i.e. those with \(q_{j,k} = 1\ \forall k = 1,...,K\), and subtracting it from one.
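A sketch of this sampling-based constraint evaluation is given below. For brevity, plain Monte Carlo sampling replaces the LHS of normal variations described above, and `metamodels` stands for the trained GPR predictors of the K FKCs; names and values are illustrative.

```python
# Sketch: empirical scrap rate g(t), following Eqs. (6) and (7).
import numpy as np

def scrap_rate(t, d0, metamodels, bounds, n=10_000, seed=None):
    """Estimate g(t) for the tolerance vector t, interpreted as +/- 3 sigma."""
    rng = np.random.default_rng(seed)
    sigma = np.asarray(t) / 3.0                      # Eq. (7): +/- t_i = +/- 3 sigma_i
    v = rng.normal(0.0, sigma, size=(n, len(t)))     # normally distributed variations
    x = np.asarray(d0) + v                           # varied design parameters
    conform = np.ones(n, dtype=bool)
    for predict, b_k in zip(metamodels, bounds):     # each FKC must stay below its boundary B_k
        conform &= predict(x) <= b_k
    return 1.0 - conform.mean()                      # share of non-conforming design points
```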

Fig. 3: Normal probability distribution and density function for a parameter with nominal value \(d_0 = 75 \text {mm}\) and a tolerance range \(\pm t = \pm 3\sigma = \pm 10 \text {mm}\)

The nonlinearity and discontinuity of the constraint function, as well as the possibility of local minima, limit the range of suitable algorithms [19]. Meta-heuristic optimization algorithms increase the probability of leaving local minima and finding the global minimum. Therefore, the optimization problem is solved using a meta-heuristic optimization algorithm, a genetic algorithm. Solving the tolerance optimization problem with the genetic algorithm results in a set of tolerance values which fulfill the constraint, i.e. the maximum scrap rate. Furthermore, the most critical FKCs can be identified.
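To make the optimization loop concrete, the sketch below solves a toy instance of Eq. (4) with SciPy's differential evolution as a stand-in metaheuristic for the genetic algorithm used in this work. The penalty coefficients follow Eqs. (11) and (12); the constraint function is a simple placeholder for the meta-model-based tolerance analysis, and all bounds are assumptions.

```python
# Sketch: least-cost tolerance optimization (Eq. 4) with a metaheuristic solver.
import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

t_min = np.array([0.5, 0.5, 1.0])              # assumed lower tolerance bounds
t_max = np.array([10.0, 10.0, 15.0])           # assumed upper tolerance bounds
c = np.log(10_000) / (t_max - t_min)           # penalty coefficients, Eq. (11)
b = 0.1 / np.exp(-c * t_max)                   # penalty coefficients, Eq. (12)

def total_penalty(t):
    """Objective P_tot(t) of Eq. (4) with the exponential penalties of Eq. (5), a_i = 0."""
    return float(np.sum(b * np.exp(-c * np.asarray(t))))

# Fixed base sample so the constraint stays deterministic during the optimization.
_base = np.random.default_rng(1).standard_normal((5_000, 3))

def scrap_rate(t):
    """Placeholder tolerance analysis g(t): one artificial FKC with boundary B = 12."""
    v = _base * (np.asarray(t) / 3.0)          # Eq. (7): sigma_i = t_i / 3
    fkc = np.abs(v).sum(axis=1)                # placeholder FKC response
    return float(np.mean(fkc > 12.0))          # empirical scrap rate

result = differential_evolution(
    total_penalty,
    bounds=list(zip(t_min, t_max)),
    constraints=NonlinearConstraint(scrap_rate, -np.inf, 0.001),  # S_max = 1000 ppm
    maxiter=200, popsize=50, seed=1,
)
print(result.x, result.fun)                    # optimized tolerances and total penalty
```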

3.2 Variation Modeling for Local Patch Reinforcements

Local patch reinforcements are used to improve the structural performance of a composite structure. Structural optimization methods are often used to design the reinforcements in an optimal way, resulting in complex reinforcement shapes based on FEA meshes. The difficulty lies in reverse engineering them into a CAD model. In order to reduce the geometrical complexity and the number of parameters needed to define the reinforcements, a rectangular patch shape laid down on a flat surface is assumed. A parameterized optimization method for rectangular patch optimization is presented in [41]. Figure 4 shows the parameters of a rectangular reinforcement patch considered in the variation analysis. The lay-up on a flat surface avoids draping effects, which would lead to local distortions and changes of the fiber angles. The parameters of the reinforcement patch can be categorized into:

  • dimension (length l, width w)

  • orientation (angle \(\alpha\))

  • position (x-direction \(p_{x}\), y-direction \(p_{y}\))

Together, the parameters form the vector of design parameters \(\varvec{d_0}\), which defines the nominal design state.

$$\begin{aligned} \varvec{d_0} = {(l_0, w_0, \alpha _0, p_{x,0}, p_{y,0})}^T \end{aligned}$$
(8)

The variations \(\varvec{v}\) are added to \(\varvec{d_0}\) to represent the variational state.
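As a small illustration, a nominal design vector and one varied state could be represented as follows; the numerical values are placeholders, not the nominal design of the later use case.

```python
# Sketch: nominal patch design d_0 of Eq. (8) and one varied state d_0 + v.
import numpy as np

d0 = np.array([75.0, 30.0, 0.0, 0.0, 0.0])   # (l_0, w_0, alpha_0, p_x0, p_y0), illustrative values
v  = np.array([-1.2, 0.4, 1.5, 3.0, -2.1])   # one drawn variation vector
d_varied = d0 + v                             # parameter set passed to the parametric CAD/FEA model
```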

Fig. 4: Geometrical parameters of patches

The aforementioned methods for the structural optimization of composite structures are commonly based on FEA. Therefore, the resulting local reinforcement patches are defined element-wise, based on the FEA meshes. Furthermore, commercially available design tools for composite structures, like Siemens NX Fibersim or Ansys Composite Pre/Post (ACP), use sketches that intersect the shell geometry to define local reinforcements and subsequently discretize the geometry with an FEA mesh, as presented in Fig. 5. These approaches have in common that the geometry needs to be remeshed when changes, like geometrical variations, have to be considered.

Fig. 5: Sketch based modeling technique for reinforcement patches resulting in different meshes of the base part for coarse and fine meshes of the reinforcement patch, as well as for applied variations

A second approach is to model the reinforcement patches as separate parts, as presented in Fig. 6. The reinforcement patches are defined and parametrized through sketches as in the first approach. The modeling as separate parts results in a fixed mesh for the base part and a bonded part for each patch, which has to be remeshed when its dimensions change, as shown in Fig. 6. Because the patches are smaller and have a regular geometry, remeshing a patch is faster than remeshing the whole part as in the first approach. Furthermore, remeshing the whole part can lead to changes in the results. Consequently, finer meshes are needed for sketch based patch modeling than for part based patch modeling.

Fig. 6: Part based modeling technique for reinforcement patches resulting in the same mesh for the base part for coarse and fine meshes of the reinforcement patch, as well as for applied variations

Both approaches consider the patch thickness through offsets. The sketch based approach has a layered formulation which models the patch on top of the base laminate and thus automatically accounts for the thickness. In the part based approach, the plane of the patch part has a parametric offset so that no gap occurs between the base part and the patch.

3.3 Penalty Functions

As already stated in Sect. 2.3, data on variation distributions and on the cost of manufacturing precision are rare or not available for composite structures and therefore require further research activities. For the presented tolerance optimization approach, the cost functions, which represent the arising costs to achieve the tolerances, are replaced by penalty functions penalizing tight tolerances more than wide tolerances. The penalty functions are based on the widely used exponential formulation for tolerance-cost curves. Different parameter types and parameters have different achievable tolerance values and cause different effort, which is reflected by the penalty value. The penalty functions are therefore fitted individually to each parameter type and its boundaries, respecting the minimum and maximum tolerance values of the parameter. The penalty function is defined such that at the minimum tolerance value

$$\begin{aligned} P_{i}(t_{i,\text {min}}) = 1000 \end{aligned}$$
(9)

and at the maximum tolerance value

$$\begin{aligned} P_{i}(t_{i,\text {max}}) = 0.1 \end{aligned}$$
(10)

With this scaling, the coefficients \(a_i\), \(b_i\) and \(c_i\) from Eq. 5 can be calculated. All coefficients \(a_i\) are zero, i.e. \(a_i = 0\); \(c_i\) is calculated with respect to the upper and lower bounds of the tolerance value

$$\begin{aligned} c_{i} = \frac{ln(10000)}{t_{i,\text {max}}-t_{i,\text {min}}} \end{aligned}$$
(11)

and \(b_i\) is computed with

$$\begin{aligned} b_{i} = \frac{0.1}{e^{-c_i \cdot t_{i,\text {max}}}} \end{aligned}$$
(12)

An exemplary penalty curve is depicted in Fig. 2 for \(\pm t_{\text {min}} = \pm 0.5 \text {mm}\) and \(\pm t_{\text {max}} = \pm 10 \text {mm}\). The curve shows high penalty values for tight tolerances, which decrease rapidly as the tolerances widen. For tolerances greater than \(\pm 5 \text {mm}\), the change in the penalty values diminishes, so that further widening of the tolerance values yields little improvement of the objective function.
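The coefficient computation can be reproduced in a few lines. The sketch below evaluates Eqs. (11) and (12) for the bounds of the Fig. 2 example and recovers the stated coefficients.

```python
# Sketch: penalty coefficients of Eqs. (11) and (12) and the penalty curve of Eq. (5).
import math

t_min, t_max = 0.5, 10.0                   # tolerance bounds of the Fig. 2 example (in mm)
c = math.log(10_000) / (t_max - t_min)     # Eq. (11) -> approx. 0.9695
b = 0.1 / math.exp(-c * t_max)             # Eq. (12) -> approx. 1623.78
a = 0.0                                    # all a_i are zero

def penalty(t):
    """Penalty P_i(t_i) according to Eq. (5)."""
    return a + b * math.exp(-c * t)

print(penalty(t_min), penalty(t_max))      # 1000.0 and 0.1 by construction
```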

4 Application

The presented tolerance optimization method is applied to a composite structure under structural loads. The method is implemented in Mathworks Matlab R2021a. The FEA is performed using Ansys® Workbench 2022 R1, including the ACP tool and Mechanical, which are accessed from the Matlab code through a Python scripting interface. The structure is a simplified front wing suspension of a formula student car. The following section shows the application of the presented approach, divided into model preparation, meta model training and tolerance optimization.

Model Preparation

The CAD model of the front wing suspension is presented in Fig. 7. It consists of a main body with the fixture in the top area and the force from the wing applied at the bottom bore holes. The applied force represents a misuse load case with \(F_z = 40~\text {N}\). On both sides of the suspension, a reinforcement patch is bonded symmetrically to the x-y plane. The laminate design includes a base lay-up for the main body, consisting of four layers with an orientation of \((10^\circ , 100^\circ )_S\) to the reference direction of the base part and a thickness of \((0.25 \text {mm}, 0.25 \text {mm})_S\). The material properties, listed in Table 1, are based on a unidirectional carbon fiber prepreg material. The two reinforcement patches are modeled as separate parts defined by the nominal parameters \(\varvec{d_0}\) listed in Table 2. The material orientation is \((0^\circ )\) to the patch reference axis with a thickness of \((1 \text {mm})\). The investigated quality criteria, i.e. the output parameters, are the maximum deformations of the structure (total, x-direction, y-direction, z-direction) and different failure criteria implemented in Ansys APDL: the Tsai-Wu criterion, the Puck fiber failure criterion and the Puck matrix failure criterion [47]. The FEA model is discretized using quadratic, layered shell elements with an element size of 5 mm for the main body and the reinforcement patches, and a finer mesh with an element size of 2 mm at the leading edge. The loads and fixtures are applied as static loads corresponding to Fig. 7. The reinforcement patches are bonded to the main body using a linear contact.

Fig. 7: Use case part based on a front wing suspension; a load case definition including fixtures (blue) and applied forces (red) as well as reference directions (yellow); b laminate lay-up of the base part; c dimensions and patch parametrization

Table 1 Mechanical material properties
Table 2 Variational input parameters and their defined design space for meta model training

The nominal values of the quality criteria are shown in Table 3 and Fig. 8. In Fig. 8a the total deformation is presented, which shows the maximum deflection of the front wing suspension at the tip, near the load introduction. In Fig. 8b the strain energy density in the elements is depicted. It represents the strain energy stored in the elements per volume; the more energy is stored, the higher the local load state. The front wing suspension shows highly loaded areas at the fixtures and at the load introduction. The patches are placed to form a direct load path between the fixtures and the load introduction. They have low strain energy density values while being adjacent to the highly loaded areas at the fixture.

Fig. 8: Nominal results of the front wing suspension; a total deformation; b strain energy density

Meta Model Training

Based on the prepared simulation model, the meta model training can be performed. Therefore, an LHS with 300 samples is generated as a compromise between computational effort and meta model accuracy. The design space is defined using the distribution information given in Table 2. The design points are distributed uniformly in the design space with no correlation between the parameters.

In the next step, the FEA model is solved for all design points of the sampling. To transfer the information of the single design points to the FEA model, a Workbench journal file, which is based on the Python programming language, is used. The updated model is then solved automatically and the relevant results are exported as text files, which are imported by the Matlab code and further prepared for processing. After the sampling is solved completely, the input and output data are fed into the meta model training procedure. The meta model is trained using 5-fold cross-validation.
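Schematically, this batch evaluation corresponds to the loop sketched below; `solve_fea` is a hypothetical placeholder for the journal-file-driven Workbench update and solve, shown only to clarify the data flow from design points to training data.

```python
# Sketch: batch evaluation of the LHS design points (data flow only).
import numpy as np

def solve_fea(design_point):
    """Hypothetical placeholder: update the FE model with one design point,
    solve it and return the FKC values (here: dummy zeros)."""
    # In the actual workflow a Python-based Workbench journal file updates the model,
    # the solve runs automatically and the results are exported as text files.
    return np.zeros(7)    # e.g. four deformation measures and three failure criteria

design = np.random.default_rng(0).uniform(size=(300, 5))    # stand-in for the LHS design matrix
results = np.array([solve_fea(point) for point in design])  # one FEA solve per design point

X_train, y_train = design, results    # matched input/output data for the meta model training
```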

Furthermore, a validation data set with a sample size of 50 is created using the same procedure as above. Instead of training the meta model with this data set, the prediction of the meta model on this data set is compared to the FEA results and the COP value is calculated. The quality measures of the meta models can be found in Table 3. They show acceptable values for all output parameters, which lie within the ranges defined in Sect. 3.1.

Table 3 Functional key characteristics, respectively output parameters, of the use case with their nominal and boundary values and the meta model quality measures

In Fig. 9 the meta model training data and the predicted data for the output parameter maximum total deformation, which shows comparably high variations, are depicted, including a histogram. The data points show a slight linear correlation (Pearson correlation coefficient = 0.46) with the input value but with a high amount of scatter, presumably because of the other varying input parameters. The low prediction errors can be observed in the figure through the good coincidence of the data points of the FEA and the meta model prediction. The histograms of the output parameter show very small differences in the distribution. The results of the meta model training presented in Fig. 9 complement the good values of the meta model quality measures.

Fig. 9: a Comparison of training data from the FEA and predicted data from the meta model of the maximum total deformation; b Sampling of the tolerance optimization result for the angle \(\alpha\) of patch 1 and the predicted data from the meta model with the non-conform area marked in red

Tolerance Optimization

The trained meta models allow proceeding with the actual tolerance optimization. Therefore, the optimization problem has to be set up as described in Sect. 3.1. The lower and upper boundaries for the tolerances of the input parameters are shown in Table 4, complemented by their penalty curve coefficients. For the position parameters, higher upper bounds are chosen because of the more difficult and less precise placement process. The dimensional parameters are more restricted due to fewer uncertainties, especially when CNC and automated processes are applied. In contrast to the meta model training, the sample of size N used during the tolerance optimization is normally distributed.

Table 4 Variational parameter settings for tolerance optimization

The quality measures of the composite structure are listed in Table 3, including the nominal values as well as the boundary values \(B_k\), which represent the allowed variation of the output values resulting from the tolerances. The boundary values together with the settings in Table 5 define the constraints of the tolerance analysis. The optimization is furthermore restricted to a maximum of 200 generations with a population size of 50 individuals.

Table 5 Tolerance analysis and optimization settings

The tolerance optimization results in a penalty value of \(P_{\text {tot}} = 267.80\) after 110 generations. The optimization history is shown in Fig. 10. It shows the best, i.e. minimal, penalty \(P_{\text {tot}}\) of each generation as well as the worst, i.e. maximal, penalty of each generation. At the beginning of the optimization, a large gap between the min and max curves can be observed, which closes by generation 50. In later generations only small changes can be observed until the optimization converges.

The resulting tolerance values for the parameters of both patches are presented in Fig. 11. No tolerance value reaches the upper or the lower boundary set for the optimization. Differences exist between the tolerances of the two patches and between the parameter types. The tolerance values for patch 1 are mostly lower than those for patch 2. The tolerances on the patch dimensions \(w\) and \(l\) are the tightest, followed by the positions \(p_{x}\) and \(p_{y}\) and the orientation \(\alpha\). The tighter tolerances on the dimensional parameters can be explained by the reduced changes of the patch area and therefore of the resulting mass and stiffness of the patches; the FKCs then show lower variations from the nominal state. Additionally, length changes enlarge the patches into the more highly loaded regions of the part, which are represented by the strain energy density in Fig. 8b. The coverage of highly loaded areas leads to a better usage of the patch's stiffness and an improvement of the structural behavior. The tolerances on both directions of the position are similar due to equal rates of change regarding the load state. Interestingly, the orientation of patch 2 has a much wider tolerance than the orientation of patch 1. The tolerance values mainly agree with an additionally performed sensitivity analysis on the training data set, which quantifies the influence of the input parameters on the output parameters.

With the resulting set of tolerances, the constraint is fulfilled: the maximum scrap rate of the tolerance analysis of 1000 ppm is not exceeded. The 100 non-conforming parts all exceed the boundary value \(B_k\) of the maximum total deformation, depicted in Fig. 9b. Furthermore, 83 of the 100 non-conforming parts also exceed the boundary of the maximum deformation in z-direction. The tolerance analysis is therefore dominated by single output parameters. Figure 9b shows the greatly reduced variations of the maximum total deformation; the non-conforming parts lie in the red area.

Fig. 10: Optimization history represented by the minimum and maximum penalty value of each generation

Fig. 11: Optimized tolerances for a sample size \(N=100000\)

5 Discussion

The presented contribution elaborated the necessity of tolerance management for composite structures on the laminate scale. Therefore, a tolerance optimization method minimizing the cost, or rather the penalty, induced by the tolerances of local reinforcement patches has been developed and successfully applied to a case study. The application showed good results for the reduction of the penalty value as well as for the tolerances of the design parameters of the reinforcement patches. The method gives reasonable results and thus can be used for the tolerance management of composite structures. Nonetheless, the results have to be discussed with regard to the research questions from Sect. 2.4.

First, an appropriate way of modeling local patch reinforcements had to be found, which on the one hand allows the parametrization of the rectangular patches and on the other hand is applicable to variation simulation without a negative influence on the result quality of the FEA. Two modeling approaches, sketch based and part based, have been presented. For further application in tolerance optimization, the part based modeling has been chosen due to smaller changes from remeshing the geometry and smaller resulting discretization errors. While the additional modeling effort of the part based patches is low for the presented use case, this will change for patches on more complex structures like doubly curved surfaces. Further research needs to investigate the modeling of draping effects of such patches and their inclusion in variation analysis.

Based on the model, including the patch modeling and parametrization, the influence of variations of the patch parameters was investigated in the application section. Based on the training data set of the meta models, the influence of the varying parameters on the structural behavior was pointed out. Variations from the nominal state led to a wide scatter of the output parameters. For the presented use case, improvements as well as deteriorations of the quality criteria compared to the nominal design can be observed. Generally, the results showed that variations of the patch acting in the direction of more highly loaded areas, highlighted by high strain energy density values, have a higher influence on the output parameters. Additionally, variations of the patch dimensions lead to changed patch areas and therefore to a changed total stiffness of the part, affecting the structural behavior. In conclusion, the patch dimensions require the tightest tolerance values in the presented case study.

To ensure the quality with respect to variations, a new tolerance optimization approach to compute tolerance values for the patch parameters has been presented and successfully applied to an application example. Tolerances have been optimized for the design parameters of the local reinforcements. Furthermore, penalty curves for the different parameters have been defined and result in reasonable tolerance values. The results showed a good convergence of the penalty and an exploitation of the maximum allowed scrap rate.

Future improvements of the presented tolerance optimization approach must integrate a draping simulation to enable the lay-up of local reinforcement patches over doubly curved surfaces. Moreover, the method needs to be extended to assemblies of composite structures, especially since manufacturing variations of single parts propagate to assemblies and lead to a preloading of the assembly structure.

6 Conclusion

Composite materials offer a wide variety of new potentials in lightweight design. On their way to application in a wide range of industries and technological fields, different challenges have to be overcome. Rising production volumes of composite structures require a stronger focus on product quality. Product developers use methods from tolerance management to ensure the desired behavior. The newly developed method for tolerancing the design parameters of local reinforcement patches in composite structures contributes to the consideration of variations of material and design parameters during the design phase. It improves the applicability of tolerance optimization methods to composite structures and therefore enhances quality, which will move into focus as composite manufacturing is scaled up.