1 Introduction

Nonlinear system identification is an essential tool for understanding and modeling the dynamics of nonlinear structures, and many methods have been developed to perform this task [1]. This process generally refers to the extraction of information directly from measured data, and it may or may not involve a model, depending on the information that is sought and on the algorithms that are adopted. In the case of nonlinear systems, the complexity of this process increases considerably due to the phenomena introduced by nonlinearity and the lack of a superposition principle [2, 3]. Methods for system identification are generally classified according to different aspects, such as their “gray” tone [4], the domain in which they operate (time or frequency), or the kind of excitation they require. The latter aspect is particularly interesting in the case of nonlinear systems, with the two main categories being harmonic-based and random (or broadband)-based methods. Harmonic-based methods draw their strength from the theory underlying nonlinear dynamics, which is mainly focused on harmonic excitations [2]. This class of methods is generally devoted to the study of the periodic solutions of the system and of its nonlinear frequency response curves (NFRCs), which generalize the frequency response functions of linear systems. NFRCs are amplitude-dependent, and several nonlinear phenomena may arise when changing the excitation level, such as sub- and super-harmonics, bifurcations, unstable and isolated solutions, periodicity breaks, and chaos. Methods belonging to this class generally use the harmonic balance method (HBM) to solve the equation of motion considering a finite number of harmonics [5, 6]. When used in conjunction with a continuation technique, HBM can be used to reconstruct the NFRCs of the system and to track its bifurcations [5,6,7].

The direct estimation of the NFRCs from experimental measurements is a challenging task, and several powerful approaches have been developed by the research community. Generally, the basic requirement of these methods is one or more feedback control loops acting on the input forcing function. Control loops are needed to tune the response amplitude in the case of stepped-sine tests [8], or to experimentally replicate a continuation procedure, as in [9, 10]. In the latter case, a phase control is also required to keep the system response in quadrature with the input. The need for control strategies demands adequate instrumentation and an iterative test procedure. Furthermore, the interaction with the shaker [10] and the presence of harmonics in the input spectrum [9, 11] are two aspects that must be accounted for. Despite the experimental difficulties, this class of methods has the advantage of directly measuring the NFRCs, as well as the backbone curves [9, 12, 13], the unstable branches [14, 15] and the nonlinear normal modes [10, 16,17,18]. Sensitivity analyses and confidence intervals have also been proposed in this framework [19, 20].

On the other hand, broadband-based techniques offer a quick and easy way to estimate a nonlinear model from the measured data, since multiple modes can be excited simultaneously in open loop, thus simplifying the experimental setup. This class of methods includes the nonlinear subspace identification (NSI) technique [21, 22] and the polynomial nonlinear state-space approach (PNLSS) [23]. The former, in particular, is based on the feedback interpretation of the nonlinearity [24, 25] combined with the classical subspace identification framework, and it has been adopted to identify both localized and distributed nonlinearities [26,27,28], as well as bistable systems [29,30,31]. In order to adopt the NSI technique, the following prior information must be known or assumed: (1) the functional form of the nonlinearities of the system, in the following called “nonlinear basis functions”; (2) the location of the nonlinearities (in the case of localized nonlinear behavior). Methods exist to characterize and/or locate the nonlinear behavior of a structure, and they are generally based on the estimation of the restoring surface at the location of interest [32]. Furthermore, the measured data used for the identification must properly engage the nonlinear behavior of the system under test.

In this paper, the two approaches, harmonic-based and random-based, are combined to study the periodic solutions of the system starting from the nonlinear subspace identification under random excitation. The reconstruction of the NFRCs in the nonlinear state-space framework has been presented in [33]; this approach is here further developed in the NSI formulation, with a novel study on the stability of the solution paths and on the tracking of the system bifurcations using a monodromy-based approach. The method allows a comprehensive study of the periodic solutions of a nonlinear system without performing any harmonic-based experimental measurements, which are generally complex and require specific instrumentation. Instead, the periodic solutions are indirectly studied from the estimated state-space model. It follows that the accuracy of the predicted NFRCs depends on the accuracy of the state-space estimation. The latter can be assessed by using a validation dataset, or by inspecting the invariance of the underlying-linear dynamic features (linear FRFs, natural frequencies, damping ratios, mode shapes). The method is tested on numerical and experimental applications. The effects of noise, nonlinear modeling errors and extrapolation of the results are also evaluated.

In the following, the proposed method will be referred to as HBM-NSI.

The paper is organized as follows. Section 2 recalls the problem statement of NSI, while Sect. 3 describes the proposed method. Section 4 proposes two numerical demonstrations with smooth and non-smooth nonlinearities. The experimental application is presented in Sect. 5, and the results are compared with the system measurements, also addressing period-doubling phenomena. Finally, the conclusions of the present work are summarized in Sect. 6.

2 Problem statement

A discrete nonlinear mechanical system having \(N\) degrees of freedom (DOFs) is considered, described by the following equation of motion

$${\varvec{M}}\ddot{{\varvec{y}}}\left(t\right)+{{\varvec{C}}}_{v}\dot{{\varvec{y}}}\left(t\right)+{\varvec{K}}{\varvec{y}}\left(t\right)+{{\varvec{f}}}^{\mathrm{nl}}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)={\varvec{f}}(t)$$
(1)

where \({\varvec{M}}\), \({{\varvec{C}}}_{v}\) and \({\varvec{K}}\) \(\in {\mathbb{R}}^{N\times N}\) are the mass, viscous damping and stiffness matrices respectively, while \({\varvec{y}}\left(t\right)\) and \({\varvec{f}}(t)\) \(\in {\mathbb{R}}^{N}\) are the generalized displacement and external force vectors. The nonlinear part of the equation is described by the term \({{\varvec{f}}}^{nl}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)\in {\mathbb{R}}^{N}\) and generally depends on displacements and/or velocities. It is assumed that \({{\varvec{f}}}^{nl}\) can be decomposed into \(J\) distinct nonlinear contributions using a linear-in-the-parameters model, thus yielding:

$${{\varvec{f}}}^{nl}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)={\sum }_{j=1}^{J}{\mu }_{j}{{\varvec{L}}}_{j}{g}_{j}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)$$
(2)

Each term of the summation is defined by a coefficient \({\mu }_{j}\), a nonlinear basis function \({g}_{j}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)\) and a location vector \({{\varvec{L}}}_{j}\in {\mathbb{R}}^{N}\). The elements of \({{\varvec{L}}}_{j}\) can assume the values −1, 1 or 0 and define the position of the \({j}^{\mathrm{th}}\) nonlinearity. As stated in the introduction, both the location vectors and the nonlinear basis functions are assumed to be known prior to applying NSI. The term \({{\varvec{f}}}^{\mathrm{nl}}\) can be shifted to the right-hand side of Eq. (1), thus becoming an additional forcing term acting on the so-called underlying-linear system (ULS):

$${\varvec{M}}\ddot{{\varvec{y}}}\left(t\right)+{{\varvec{C}}}_{v}\dot{{\varvec{y}}}\left(t\right)+{\varvec{K}}{\varvec{y}}\left(t\right)={\varvec{f}}\left(t\right)-{{\varvec{f}}}^{nl}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)$$
(3)

Defining now the extended input vector \({{\varvec{f}}}^{e}\) as

$${{\varvec{f}}}^{e}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right)={\left[{\varvec{f}}{\left(t\right)}^{\mathrm{T}}, {-g}_{1}({\varvec{y}}, \dot{{\varvec{y}}},t), \dots , {-g}_{J}({\varvec{y}}, \dot{{\varvec{y}}},t)\right]}^{\mathrm{T}}$$
(4)

and introducing the state vector \({\varvec{x}}={\left[{{\varvec{y}}}^{\mathrm{T}},\boldsymbol{ }{\dot{{\varvec{y}}}}^{\mathrm{T}}\right]}^{\mathrm{T}}\), a continuous state-space formulation can be retrieved:

$$\left\{\begin{array}{c}\dot{{\varvec{x}}}\left(t\right)={{\varvec{A}}}_{c}{\varvec{x}}\left(t\right)+{{\varvec{B}}}_{c}^{e}{{\varvec{f}}}^{e}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right) \\ {\varvec{y}}\left(t\right)={\varvec{C}}{\varvec{x}}\left(t\right)+{{\varvec{D}}}^{e}{{\varvec{f}}}^{e}\left({\varvec{y}}, \dot{{\varvec{y}}},t\right).\end{array}\right.$$
(5)

The matrices \({{\varvec{A}}}_{c},\boldsymbol{ }{{\varvec{B}}}_{c}^{e}, {\varvec{C}}, {{\varvec{D}}}^{e}\) are the state, extended input, output and extended direct feedthrough matrices, respectively, and the subscript \({\cdot }_{c}\) stands for “continuous”. All the steps here presented will be related to the continuous-time formulation, but they can easily be adapted to the discrete one, for instance by applying the transformations reported in [21].

Subspace identification can be performed to identify the state-space matrices, rearranging the measured displacements into Hankel-type block matrices. The idea is borrowed from the linear subspace identification theory [34], and detailed steps can be found in [21, 35]. The identification does not involve any optimization or minimization problem, as it is performed in a single step. It is worth noting that the matrix \({{\varvec{A}}}_{c}\) is the state matrix of the ULS, thus the underlying-linear dynamics of the structure can be easily estimated by classical eigenvalue decomposition of \({{\varvec{A}}}_{c}\). The extended FRF matrix \({{\varvec{G}}}^{e}\left(\omega \right)\) can be obtained from

$${{\varvec{G}}}^{e}\left(\omega \right)={{\varvec{D}}}^{e}+{\varvec{C}}{\left(i\omega {{\varvec{I}}}_{2N}-{{\varvec{A}}}_{c}\right)}^{-1}{{\varvec{B}}}_{c}^{e},$$
(6)

where \({{\varvec{I}}}_{2N}\) is the identity matrix of size \(2N\) and \(i\) is the imaginary unit. The matrix \({{\varvec{G}}}^{e}\left(\omega \right)\) has the same structure as the extended force vector \({{\varvec{f}}}^{e}\), so that its first block \({\varvec{G}}\left(\omega \right)\) is the FRF matrix of the underlying-linear system:

$${{\varvec{G}}}^{e}\left(\omega \right)=\left[{\varvec{G}}\left(\omega \right), {\varvec{G}}\left(\omega \right){\mu }_{1}{{\varvec{L}}}_{1}, \;\dots , \;{\varvec{G}}\left(\omega \right){{\mu }_{J}{\varvec{L}}}_{J}\right].$$
(7)

The coefficients of the nonlinear terms \({\mu }_{j}\) can eventually be deduced from the remaining blocks of \({{\varvec{G}}}^{e}\left(\omega \right)\) [21].
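As an illustration, a minimal numerical sketch of Eqs. (6)–(7) is reported below, assuming the identified matrices are available as NumPy arrays and that the force block of \({{\varvec{G}}}^{e}\) is \(N\times N\) (one input per DOF); function and variable names are illustrative only.

```python
import numpy as np

def extended_frf(Ac, Bce, C, De, omega):
    """Extended FRF matrix G^e(omega) of Eq. (6) at a single frequency."""
    n = Ac.shape[0]  # state dimension, 2N
    return De + C @ np.linalg.solve(1j * omega * np.eye(n) - Ac, Bce)

def nonlinear_coefficient(Ge, j, N, Lj):
    """Rough estimate of mu_j from the j-th nonlinear block of G^e (Eq. (7)).

    Ge is assumed N x (N + J): the linear FRF block G(omega) followed by
    the J columns G(omega)*mu_j*L_j. The ratio is averaged over the outputs;
    in practice it is also averaged (and checked for flatness) over frequency.
    """
    G = Ge[:, :N]              # underlying-linear FRF block
    block = Ge[:, N + j]       # j-th nonlinear column (j starting from 0)
    ref = G @ Lj               # G(omega) * L_j
    return np.mean(block / ref)
```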

It should be highlighted that the state-space model order must be determined by the user. This is a classic step in system identification and can be performed using well-known techniques, such as the inspection of the singular values of the system or the use of stabilization diagrams. The latter will be adopted in this paper, considering the estimated modal parameters of the ULS. In particular, the stabilization of modal masses is included as described in [35, 36], along with natural frequencies, damping ratios and mode shapes. More recent developments of NSI use modal decomposition to obtain a better estimation of the coefficients of the nonlinearities and to remove possible spurious poles from the identified model [35,36,37].

3 Harmonic balance method in the extended state-space framework

Let us assume that the state-space model described by Eq. (5) is driven by a periodic excitation and that the system response has the same period as the excitation. The extended force vector, the state vector and the system outputs can therefore be represented as a Fourier series up to order \(H\)

$${{\varvec{f}}}^{e}\left(t\right)={{\varvec{F}}}^{(0)}+{\sum }_{h=1}^{H}\left({{\varvec{F}}}_{c}^{(h)}\mathrm{cos}\left(h\omega t\right)+{{\varvec{F}}}_{s}^{(h)}\mathrm{sin}\left(h\omega t\right)\right)$$
$${\varvec{x}}\left(t\right)={{\varvec{X}}}^{(0)}+{\sum }_{h=1}^{H}\left({{\varvec{X}}}_{c}^{(h)}\mathrm{cos}\left(h\omega t\right)+{{\varvec{X}}}_{s}^{(h)}\mathrm{sin}\left(h\omega t\right)\right)$$
$${\varvec{y}}\left(t\right)={{\varvec{Y}}}^{(0)}+{\sum }_{h=1}^{H}\left({{\varvec{Y}}}_{c}^{(h)}\mathrm{cos}\left(h\omega t\right)+{{\varvec{Y}}}_{s}^{(h)}\mathrm{sin}\left(h\omega t\right)\right),$$
(8)

where the Fourier coefficients are recast into vectors (e.g., \({\varvec{X}}\) has \(2N\left(2H+1\right)\) entries)

$${{\varvec{F}}}^{e}={\left[{{\varvec{F}}}^{{\left(0\right)}^{\mathrm{T}}}, {{\varvec{F}}}_{c}^{{\left(1\right)}^{\mathrm{T}}}, {{\varvec{F}}}_{s}^{{\left(1\right)}^{\mathrm{T}}}, \dots , {{\varvec{F}}}_{c}^{{\left(H\right)}^{\mathrm{T}}}, {{\varvec{F}}}_{s}^{{\left(H\right)}^{\mathrm{T}}}\right]}^{\mathrm{T}}$$
$${\varvec{X}}={\left[{{\varvec{X}}}^{{\left(0\right)}^{\mathrm{T}}}, {{\varvec{X}}}_{c}^{{\left(1\right)}^{\mathrm{T}}}, {{\varvec{X}}}_{s}^{{\left(1\right)}^{\mathrm{T}}}, \dots , {{\varvec{X}}}_{c}^{{\left(H\right)}^{\mathrm{T}}}, {{\varvec{X}}}_{s}^{{\left(H\right)}^{\mathrm{T}}}\right]}^{\mathrm{T}}$$
$${\varvec{Y}}={\left[{{\varvec{Y}}}^{{\left(0\right)}^{\mathrm{T}}}, {{\varvec{Y}}}_{c}^{{\left(1\right)}^{\mathrm{T}}}, {{\varvec{Y}}}_{s}^{{\left(1\right)}^{\mathrm{T}}}, \dots , {{\varvec{Y}}}_{c}^{{\left(H\right)}^{\mathrm{T}}}, {{\varvec{Y}}}_{s}^{{\left(H\right)}^{\mathrm{T}}}\right]}^{\mathrm{T}}.$$
(9)

By introducing the linear operator \({\varvec{Q}}\left(t\right)=\left[1\, \mathrm{cos}\left(\omega t\right)\, \mathrm{sin}\left(\omega t\right)\, \dots \,\mathrm{cos}\left(H\omega t\right)\, \mathrm{sin}\left(H\omega t\right)\right]\), Eq. (8) can be written in a more compact form as

$${{\varvec{f}}}^{e}\left(t\right)=\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{I}}}_{J+1}\right){{\varvec{F}}}^{e},{\varvec{x}}\left(t\right)=\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{I}}}_{2N}\right){\varvec{X}},{\varvec{y}}\left(t\right)=\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{I}}}_{N}\right){\varvec{Y}},$$
(10)

where \({\varvec{I}}\) is the identity matrix and \(\otimes \) is the Kronecker tensor product [6]. The first derivative of \({\varvec{x}}\) is \(\dot{{\varvec{x}}}\left(t\right)=\left({\varvec{Q}}\left(t\right)\nabla \otimes {{\varvec{I}}}_{2N}\right){\varvec{X}}\), where the operator \(\nabla \) is defined as

$$\nabla =\mathrm{diag}\left({\mathbf{0}}, {\nabla }_{1}, \dots , {\nabla }_{H}\right),{\nabla }_{h}=h\omega \left[\begin{array}{cc}0& 1\\ -1& 0\end{array}\right].$$
(11)

Recalling the mixed-product property of the Kronecker tensor product, the state-space model of Eq. (5) can be written as [33]

$$\left\{\begin{array}{c}\left({\varvec{Q}}\left(t\right)\nabla \otimes {{\varvec{I}}}_{2N}\right){\varvec{X}}=\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{A}}}_{c}\right){\varvec{X}}+\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{B}}}_{c}^{e}\right){{\varvec{F}}}^{e}\\ \left({\varvec{Q}}\left(t\right)\otimes {{\varvec{I}}}_{N}\right){\varvec{Y}}=\left({\varvec{Q}}\left(t\right)\otimes {\varvec{C}}\right){\varvec{X}}+\left({\varvec{Q}}\left(t\right)\otimes {{\varvec{D}}}^{e}\right){{\varvec{F}}}^{e}.\end{array}\right.$$
(12)

As commonly done in HB approaches, a Galerkin procedure is then applied to remove the time dependency by projecting Eq. (12) onto the trigonometric basis \({\varvec{Q}}\). Equation (12) is therefore multiplied by \(\frac{2}{T}{{\varvec{Q}}}^{\mathrm{T}}\) and integrated over one period \(T\) of the external force. Considering that

$$\frac{2}{T}{\int }_{0}^{T}{{\varvec{Q}}}^{\mathrm{T}}\left(t\right){\varvec{Q}}\left(t\right)\mathrm{d}t={{\varvec{I}}}_{2H+1},$$
(13)

and defining \(\widetilde{H}=2H+1\), the state-space formulation can eventually be written as

$$\left\{\begin{array}{c}\left(\nabla \otimes {{\varvec{I}}}_{2N}\right){\varvec{X}}=\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{A}}}_{c}\right){\varvec{X}}+\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{B}}}_{c}^{e}\right){{\varvec{F}}}^{e}\\ {{\varvec{I}}}_{N\widetilde{H}}{\varvec{Y}}=\left({{\varvec{I}}}_{\widetilde{H}}\otimes {\varvec{C}}\right){\varvec{X}}+\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{D}}}^{e}\right){{\varvec{F}}}^{e}.\end{array}\right.$$
(14)

The input–output relation can be obtained by rearranging the first equation as

$${\varvec{X}}={\left(\nabla \otimes {{\varvec{I}}}_{2N}-{{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{A}}}_{c}\right)}^{-1}\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{B}}}_{c}^{e}\right){{\varvec{F}}}^{e},$$
(15)

and substituting into the second. This yields

$${{\varvec{I}}}_{N\widetilde{H}}{\varvec{Y}}=\left[\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{D}}}^{e}\right)+\left({{\varvec{I}}}_{\widetilde{H}}\otimes {\varvec{C}}\right){\left(\nabla \otimes {{\varvec{I}}}_{2N}-{{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{A}}}_{c}\right)}^{-1}\left({{\varvec{I}}}_{\widetilde{H}}\otimes {{\varvec{B}}}_{c}^{e}\right)\right]{{\varvec{F}}}^{e}={\boldsymbol{\Lambda }}^{e}\left(\omega \right){{\varvec{F}}}^{e}.$$
(16)
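The matrix \({\boldsymbol{\Lambda }}^{e}\left(\omega \right)\) introduced above can be assembled numerically from the identified continuous-time matrices; a minimal sketch following Eqs. (11), (15) and (16) is given below (dense Kronecker products are used for clarity rather than efficiency, and all names are illustrative).

```python
import numpy as np
from scipy.linalg import block_diag

def nabla_operator(omega, H):
    """Frequency-domain differentiation operator of Eq. (11), size (2H+1) x (2H+1)."""
    blocks = [np.zeros((1, 1))]
    for h in range(1, H + 1):
        blocks.append(h * omega * np.array([[0.0, 1.0], [-1.0, 0.0]]))
    return block_diag(*blocks)

def extended_hbm_matrix(Ac, Bce, C, De, omega, H):
    """Harmonic transfer matrix Lambda^e(omega) of Eq. (16), mapping F^e to Y."""
    n2 = Ac.shape[0]                      # state dimension, 2N
    Ht = 2 * H + 1
    nabla = nabla_operator(omega, H)
    lhs = np.kron(nabla, np.eye(n2)) - np.kron(np.eye(Ht), Ac)
    X_map = np.linalg.solve(lhs, np.kron(np.eye(Ht), Bce))     # Eq. (15)
    return np.kron(np.eye(Ht), De) + np.kron(np.eye(Ht), C) @ X_map
```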

The matrix \({\boldsymbol{\Lambda }}^{e}\left(\omega \right)\) relates the inputs to the outputs, just as the extended FRF matrix \({{\varvec{G}}}^{e}\left(\omega \right)\) defined in Eq. (6), and its formulation is consistent with the one found in [33]. As for the vector \({{\varvec{F}}}^{e}\), it contains the Fourier coefficients of both the forcing term and the nonlinear basis functions. The latter depend on displacements and/or velocities, and therefore their Fourier coefficients are a function of \({\varvec{Y}}\). Since the vector \({\varvec{Y}}\) is unknown, an iterative procedure must be implemented to solve Eq. (16). In particular, the following nonlinear problem can be deduced:

$${\varvec{\varepsilon}}\left({\varvec{Y}},\omega \right)={\varvec{\Lambda }}^{e}\left(\omega \right){{\varvec{F}}}^{e}\left({\varvec{Y}}\right)-{{\varvec{I}}}_{N\widetilde{H}}{\varvec{Y}}={\mathbf{0}},$$
(17)

which can be solved using an incremental-iterative Newton–Raphson procedure, as commonly done in HB approaches. Three main steps can be identified in the solution of Eq. (17):

  1. Choose an appropriate starting point for the iterations. This can be selected as the linear solution, or as the previous solution along the nonlinear frequency response (see Sect. 3.1).

  2. Compute the quantity \({{\varvec{F}}}^{e}\left({\varvec{Y}}\right)\). An efficient way of performing this operation is the alternating frequency/time-domain (AFT) technique [38]. With respect to the proposed method, the AFT algorithm consists in using the inverse Fourier transform to compute the nonlinear basis functions, and thus the extended forces, in the time domain. The vector \({{\varvec{F}}}^{e}\) is then obtained by switching back to the frequency domain:

    $${\varvec{Y}}\overset{{\mathrm{FFT}}^{-1}}{\longrightarrow }{\varvec{y}}\left(t\right)\to {{\varvec{f}}}^{e}={\left[{{\varvec{f}}}^{\mathrm{T}}, -{g}_{1}, \dots , -{g}_{J}\right]}^{\mathrm{T}}\overset{\mathrm{FFT}}{\longrightarrow }{{\varvec{F}}}^{e}\left({\varvec{Y}}\right).$$
    (18)
  3. Compute the Jacobian matrix \({{\varvec{\varepsilon}}}_{{\varvec{Y}}}=\partial{\varvec{\varepsilon}}/\partial {\varvec{Y}}\) of Eq. (17) with respect to the Fourier coefficients \({\varvec{Y}}\). This is required by the Newton–Raphson algorithm and amounts to computing the quantity \(\partial {{\varvec{F}}}^{e}/\partial {\varvec{Y}}\). As explained in [6], it is possible to express the inverse Fourier transform as a known linear operator \({\varvec{\Gamma}}\left(\omega \right)\), and therefore \({\varvec{y}}\) as \({\varvec{y}}={\varvec{\Gamma}}\left(\omega \right){\varvec{Y}}\). Similarly, the direct Fourier transform can be written for the extended forces as \({{\varvec{F}}}^{e}={\left({\varvec{\Gamma}}\left(\omega \right)\right)}^{+}{{\varvec{f}}}^{e}\), where the symbol \({\cdot }^{+}\) stands for the Moore–Penrose pseudoinverse \({{\varvec{\Gamma}}}^{+}={{\varvec{\Gamma}}}^{\mathrm{T}}{\left({\varvec{\Gamma}}{{\varvec{\Gamma}}}^{\mathrm{T}}\right)}^{-1}\). With this approach, the term \(\partial {{\varvec{F}}}^{e}/\partial {\varvec{Y}}\) can be written as

    $$\frac{\partial {{\varvec{F}}}^{e}}{\partial {\varvec{Y}}}=\frac{\partial {{\varvec{F}}}^{e}}{\partial {{\varvec{f}}}^{e}} \frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}} \frac{\partial {\varvec{y}}}{\partial {\varvec{Y}}}={{\varvec{\Gamma}}}^{+}\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}{\varvec{\Gamma}}.$$
    (19)

The last equation contains the derivative of the extended force vector with respect to the displacement vector. Recalling the definition of the extended force vector in Eq. (4), this operation can be performed straightforwardly in the case of conservative nonlinearities, since a closed-form expression of the nonlinear basis functions \({g}_{j}\) is already known. Note that this implies that the functions \({g}_{j}\) are differentiable w.r.t. the displacement vector in the considered range of motion. In the case of systems with distinct states, the differentiation of the nonlinear force can be performed as detailed in [39]. In the general case, Eq. (19) can be computed by discretizing one period of oscillation into \({N}_{\mathrm{s}}\) samples and numerically evaluating the derivatives involved, as described in [6, 7].
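A possible implementation of the AFT evaluation of \({{\varvec{F}}}^{e}\left({\varvec{Y}}\right)\) (Eq. (18)) is sketched below for a single-output model; for compactness the Fourier coefficients are grouped channel-by-channel rather than harmonic-by-harmonic as in Eq. (9), and all names are illustrative.

```python
import numpy as np

def gamma_operator(omega, H, Ns):
    """Inverse-DFT operator Gamma(omega): y(t_i) = Gamma @ Y over Ns samples of one period."""
    t = np.arange(Ns) * (2.0 * np.pi / omega) / Ns
    cols = [np.ones(Ns)]
    for h in range(1, H + 1):
        cols += [np.cos(h * omega * t), np.sin(h * omega * t)]
    return np.column_stack(cols)                    # Ns x (2H+1)

def aft_extended_forces(Y, F_force, omega, H, basis_funcs, Ns=512):
    """AFT step of Eq. (18): Y -> y(t) -> f^e(t) -> F^e(Y), single-output case.

    Y          : Fourier coefficients of the output (length 2H+1)
    F_force    : Fourier coefficients of the external force (length 2H+1)
    basis_funcs: list of callables g_j(y) acting sample-wise on the response
    """
    Gam = gamma_operator(omega, H, Ns)
    Gam_pinv = np.linalg.pinv(Gam)                  # Moore-Penrose pseudoinverse
    y_t = Gam @ Y                                   # back to the time domain
    blocks = [np.asarray(F_force)]
    for g in basis_funcs:
        blocks.append(Gam_pinv @ (-g(y_t)))         # Fourier coefficients of -g_j(y)
    return np.concatenate(blocks)
```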

3.1 Nonlinear frequency response, stability and bifurcations

When the approach described in Sect. 3 is used in conjunction with a continuation technique, the nonlinear frequency response curves (NFRCs) of the system can be obtained and the bifurcations can be tracked. Several methods have been developed in this framework, such as the pseudo-arc length [5] and the Moore–Penrose [6] continuation schemes. The former is adopted in this paper; it is based on tangent predictions and orthogonal corrections.
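For illustration, the sketch below implements a naive sequential continuation, in which each converged solution seeds the Newton–Raphson iterations at the next frequency; this is not the pseudo-arc length scheme adopted in the paper (which additionally handles turning points), and the residual and jacobian callables are assumed to evaluate Eq. (17) and its Jacobian.

```python
import numpy as np

def newton_solve(residual, jacobian, Y, omega, tol=1e-9, max_iter=30):
    """Solve eps(Y, omega) = 0 (Eq. (17)) at a fixed frequency by Newton-Raphson."""
    for _ in range(max_iter):
        r = residual(Y, omega)
        if np.linalg.norm(r) < tol:
            break
        Y = Y - np.linalg.solve(jacobian(Y, omega), r)
    return Y

def sequential_continuation(residual, jacobian, Y0, omegas):
    """Naive frequency sweep: no turning-point handling, for illustration only."""
    branch = []
    Y = Y0
    for w in omegas:
        Y = newton_solve(residual, jacobian, Y, w)   # previous solution as starting point
        branch.append(Y.copy())
    return branch
```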

Whichever algorithm is adopted, the continuation procedure may find several solutions on the frequency response for a given excitation frequency, some of which are unstable. Several algorithms have been developed to analyze the stability of periodic solutions, and a comparison of the most relevant ones is reported in [40]. In this paper, a novel methodology is derived in the extended nonlinear state-space framework, based on the evaluation of the monodromy matrix of the system \({\varvec{H}}\) [41], whose eigenvalues are called Floquet multipliers and determine the stability of the solution.

Starting from the state-space formulation of Eq. (5), a small perturbation can be applied to the states, yielding

$${\varvec{x}}\left(t\right)={{\varvec{x}}}^{*}\left(t\right)+{\varvec{\epsilon}}\left(t\right)$$
(20)

where the \({\cdot }^{*}\) symbol denotes the unperturbed quantity. The state equation then becomes

$${\dot{{\varvec{x}}}}^{\boldsymbol{*}}+\dot{{\varvec{\epsilon}}}={{\varvec{A}}}_{c}\left({{\varvec{x}}}^{\boldsymbol{*}}+{\varvec{\epsilon}}\right)+{{\varvec{B}}}_{c}^{e}{{\varvec{f}}}^{e}$$
(21)

It is convenient to write the extended forces \({{\varvec{f}}}^{e}\) as a function of the state vector \({\varvec{x}}\) rather than of the output displacements and velocities. Note that an explicit relationship between the nonlinear basis functions and the state variables is not required, as will be shown below. The extended forces \({{\varvec{f}}}^{e}({\varvec{x}},t)\) are expanded in a Taylor series around the unperturbed solution and truncated to the first order, obtaining

$${{\varvec{f}}}^{e}\left({\varvec{x}},t\right)={{\varvec{f}}}^{e}\left({{\varvec{x}}}^{*}+{\varvec{\epsilon}},t\right)\cong {{\varvec{f}}}^{e}\left({{\varvec{x}}}^{*},t\right)+{\left.\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}\right|}_{{{\varvec{x}}}^{*}}{\varvec{\epsilon}}$$
(22)

Substituting Eq. (22) into Eq. (21) yields

$${\dot{{\varvec{x}}}}^{\boldsymbol{*}}\left(t\right)+\dot{{\varvec{\epsilon}}}\left(t\right)={{\varvec{A}}}_{c}\left({{\varvec{x}}}^{\boldsymbol{*}}\left(t\right)+{\varvec{\epsilon}}\left(t\right)\right)+{{\varvec{B}}}_{c}^{e}\left({{\varvec{f}}}^{e}\left({{\varvec{x}}}^{*},t\right)+{\left.\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}\right|}_{{{\varvec{x}}}^{*}}{\varvec{\epsilon}}\right)$$
(23)

Since by definition \({\dot{{\varvec{x}}}}^{\boldsymbol{*}}\left(t\right)={{\varvec{A}}}_{c}{{\varvec{x}}}^{\boldsymbol{*}}\left(t\right)+{{\varvec{B}}}_{c}^{e}{{\varvec{f}}}^{e}\left({{\varvec{x}}}^{*},t\right)\), the following equation is obtained:

$$\dot{{\varvec{\epsilon}}}\left(t\right)=\left({{\varvec{A}}}_{c}+{{\varvec{B}}}_{c}^{e}{\left.\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}\right|}_{{{\varvec{x}}}^{*}}\right){\varvec{\epsilon}}={\varvec{J}}\left({{\varvec{x}}}^{\boldsymbol{*}},t\right){\varvec{\epsilon}}$$
(24)

The matrix \({\varvec{J}}\left({{\varvec{x}}}^{*},t\right)\) is \(T\)-periodic, and the monodromy matrix \({\varvec{H}}\in {\mathbb{R}}^{2N\times 2N}\) can be defined such that \({\varvec{\epsilon}}\left(T\right)={\varvec{H}}{\varvec{\epsilon}}\left( 0 \right)\). Equation (24) is solved by performing \(2N\) time integrations over one period, using \(2N\) linearly independent initial conditions. The easiest choice is \({{\varvec{\epsilon}}}_{k}\left(0\right)={{\varvec{e}}}_{k},k\in \left[1,\dots ,2N\right]\), where \({{\varvec{e}}}_{k}\) is a vector of zeros with a one at the \({k}^{\mathrm{th}}\) position. Eventually, the monodromy matrix can be assembled as:

$${\varvec{H}}=\left[{{\varvec{\epsilon}}}_{1}\left(T\right),\boldsymbol{ }\boldsymbol{ }{{\varvec{\epsilon}}}_{2}\left(T\right),\boldsymbol{ }\boldsymbol{ }\dots ,\boldsymbol{ }\boldsymbol{ }{{\varvec{\epsilon}}}_{2N}\left(T\right)\right]$$
(25)

This process in principle requires the computation of \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{x}}\), which is not straightforward. This step can be circumvented by recalling that the extended force vector contains the nonlinear basis functions expressed in terms of output displacements and/or velocities. For instance, if a closed form of \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{y}}\) is known (e.g., geometric nonlinearities), the derivative chain rule can be applied, yielding

$$\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}=\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}\frac{\partial {\varvec{y}}}{\partial {\varvec{x}}}=\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}\frac{\partial }{\partial {\varvec{x}}}\left({\varvec{C}}{\varvec{x}}+{{\varvec{D}}}^{e}{{\varvec{f}}}^{e}\right)=\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}\left({\varvec{C}}+{{\varvec{D}}}^{e}\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}\right)$$
(26)

Eventually the quantity \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{x}}\) can be written as

$$\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{x}}}={\left({{\varvec{I}}}_{J}-\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}{{\varvec{D}}}^{e}\right)}^{-1}\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}{\varvec{C}}$$
(27)

and thus \({\varvec{J}}\) can be written as a function of \(({{\varvec{y}}}^{\boldsymbol{*}},t)\) as

$${\varvec{J}}\left({{\varvec{y}}}^{*},t\right)={{\varvec{A}}}_{c}+{{\varvec{B}}}_{c}^{e}{\left({{\varvec{I}}}_{J}-{\left.\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}\right|}_{{{\varvec{y}}}^{*}}{{\varvec{D}}}^{e}\right)}^{-1}{\left.\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}\right|}_{{{\varvec{y}}}^{*}}{\varvec{C}}$$
(28)

A similar result can be obtained if a closed form of \(\partial {{\varvec{f}}}^{e}/\partial \dot{{\varvec{y}}}\) is known (e.g., dissipative nonlinearities), by writing \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{x}}=\left(\partial {{\varvec{f}}}^{e}/\partial \dot{{\varvec{y}}}\right)\left(\partial \dot{{\varvec{y}}}/\partial \dot{{\varvec{x}}}\right)\left(\partial \dot{{\varvec{x}}}/\partial {\varvec{x}}\right)\).
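A compact sketch of the evaluation of \({\varvec{J}}\left({{\varvec{y}}}^{*},t\right)\) in Eq. (28) at one time instant is given below, assuming \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{y}}\) has already been evaluated at \({{\varvec{y}}}^{*}\left(t\right)\); names are illustrative.

```python
import numpy as np

def periodic_jacobian(Ac, Bce, C, De, dfe_dy):
    """J(y*, t) of Eq. (28) at one time instant.

    dfe_dy: derivatives of the extended forces with respect to the output
            displacements, evaluated at y*(t) (one row per extended-force channel).
    """
    n_rows = dfe_dy.shape[0]
    inner = np.linalg.solve(np.eye(n_rows) - dfe_dy @ De, dfe_dy @ C)
    return Ac + Bce @ inner
```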

Once the monodromy matrix has been computed, its eigenvalues \({\sigma }_{k}, k=1, 2,\dots , 2N\) (Floquet multipliers) can be exploited to assess the stability of the solution: a periodic solution is unstable if at least one Floquet multiplier has a magnitude greater than 1 [41]. The Floquet multipliers are generally represented in the complex plane and compared to the unit circle, as in Fig. 1. Bifurcations are detected when one or more Floquet multipliers leave the unit circle, in particular [42]:

  • a fold bifurcation (FB), or saddle-node bifurcation, is detected when a Floquet multiplier leaves the unit circle through + 1 along the real axis;

  • a Neimark–Sacker bifurcation (NSB) is detected when two complex-conjugate Floquet multipliers cross the unit circle, resulting in a quasi-periodic solution;

  • a period-doubling bifurcation (PDB) is detected when one Floquet multiplier leaves the unit circle through − 1 along the real axis.
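A minimal sketch of the monodromy-based stability check is reported below: the variational equation (24) is integrated over one period for the \(2N\) unit initial conditions of Eq. (25) (here with scipy.integrate.solve_ivp), and the resulting Floquet multipliers are used to flag instability and the likely bifurcation type. The callable J_of_t is assumed to return \({\varvec{J}}\left({{\varvec{y}}}^{*},t\right)\), e.g., via the periodic_jacobian sketch above, and the classification only indicates which multiplier has left the unit circle along the branch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_matrix(J_of_t, T, n_states):
    """Assemble H (Eq. (25)) by integrating Eq. (24) for 2N unit initial conditions."""
    cols = []
    for k in range(n_states):
        e_k = np.zeros(n_states)
        e_k[k] = 1.0
        sol = solve_ivp(lambda t, eps: J_of_t(t) @ eps, (0.0, T), e_k,
                        rtol=1e-8, atol=1e-10)
        cols.append(sol.y[:, -1])               # perturbation after one period
    return np.column_stack(cols)

def classify_solution(H_mat, tol=1e-3):
    """Stability flag and bifurcation hints from the Floquet multipliers."""
    sigma = np.linalg.eigvals(H_mat)
    outside = sigma[np.abs(sigma) > 1.0 + tol]
    flags = {"stable": outside.size == 0,
             "fold": False, "period_doubling": False, "neimark_sacker": False}
    for s in outside:
        if abs(s.imag) < tol and s.real > 0:
            flags["fold"] = True                # multiplier left through +1
        elif abs(s.imag) < tol and s.real < 0:
            flags["period_doubling"] = True     # multiplier left through -1
        else:
            flags["neimark_sacker"] = True      # complex-conjugate pair
    return flags
```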

Fig. 1
figure 1

Loss of stability of a periodic solution. Blue dots: fold bifurcation; green dots: Neimark–Sacker bifurcation; red dots: period-doubling bifurcation

The latter case implies that a new branch of solutions is created, whose period is double that of the solutions on the original branch. When this type of bifurcation appears in cascade, it can lead to chaos [2].

It is worth highlighting that all the steps presented here can be also applied in the case of distributed nonlinearities by adopting the modal version of NSI described in [36].

The flowchart of the proposed methodology is presented in Fig. 2.

Fig. 2
figure 2

Flowchart of the HBM-NSI method

4 Numerical demonstration

In this section, the HBM-NSI method is tested using synthetic data and validated against the theoretical results. Two cases are considered, namely a single-degree-of-freedom Helmholtz–Duffing oscillator and a two-degree-of-freedom system with a contact nonlinearity. It is worth recalling that a single random test is needed to apply the proposed approach, though other tests will be used to validate the methodology in this paper.

4.1 Single-degree-of-freedom Helmholtz–Duffing oscillator

The equation of motion of a Helmholtz–Duffing oscillator reads [43]:

$$m\ddot{y}\left(t\right)+{c}_{v}\dot{y}\left(t\right)+ky\left(t\right)+{k}_{2}{y\left(t\right)}^{2}+{k}_{3}{y\left(t\right)}^{3}=f\left(t\right)$$
(29)

with the system parameters listed in Table 1. The system is excited by a zero-mean Gaussian random input, having a root-mean-square (RMS) value of 3 N. The numerical integration of the equation of motion is performed using the Newmark method with a sampling frequency of 4096 Hz and considering a time span of 100 s. No noise is added to the system response at first to focus on the validation of the methodology.

Table 1 System parameters of the Helmholtz–Duffing oscillator

The FRF obtained from the synthetic data using the H1-estimator is depicted in Fig. 3 and compared with the theoretical FRF of the ULS. The nonlinear behavior of the system can be deduced from the frequency shift visible between the two FRFs.

Fig. 3
figure 3

FRF of the Helmholtz–Duffing oscillator. Solid black line: theoretical FRF of the underlying-linear system; dashed-dotted red line: FRF estimation of the synthetic data

NSI is performed considering two nonlinear basis functions: \({g}_{1}\left(t\right)=-y{\left(t\right)}^{2}\) and \({g}_{2}\left(t\right)=-y{\left(t\right)}^{3}\), and the stabilization diagram of the estimated ULS is depicted in Fig. 4. Stabilization is checked for frequency, damping ratio and modal mass with thresholds equal to 0.5%, 20% and 20%, respectively. A model order equal to 2 is eventually selected.

Fig. 4
figure 4

Stabilization diagram of the ULS, Helmholtz–Duffing oscillator. Gray dot: new (not stable) pole; black plus: pole stable in frequency; green circle: pole stable in frequency and damping; red triangle: pole stable in frequency, damping and modal mass

The estimated modal parameters of the ULS are listed in Table 2 together with the theoretical values, showing a perfect match. The estimated coefficients of the nonlinear terms are: \({k}_{2}^{id}=4.9986\cdot {10}^{3} \mathrm{N}/{\mathrm{m}}^{2}\) and \({k}_{3}^{id}=1.4994\cdot {10}^{6} \mathrm{N}/{\mathrm{m}}^{3}\), with an average percentage difference of 0.03% from the theoretical ones of Table 1.

Table 2 Modal parameters of the Helmholtz–Duffing oscillator

The estimated state-space model is then used to construct the NFRC considering 5 harmonics and a forcing amplitude of 1 N. Equation (17) is therefore solved iteratively and the pseudo-arc length continuation method is adopted. Stability is checked at each step by adopting the methodology described in Sect. 3.1. In particular, the computation of \(\partial {{\varvec{f}}}^{e}/\partial {\varvec{y}}\) in this case reads:

$$\frac{\partial {{\varvec{f}}}^{e}}{\partial {\varvec{y}}}=\frac{\partial }{\partial {\varvec{y}}}\left[f\left(t\right), -{g}_{1}\left(y,t\right), -{g}_{2}\left(y,t\right)\right]=\left[0, 2y\left(t\right), 3y{\left(t\right)}^{2}\right]$$
(30)

The results are compared with the theoretical ones, obtained applying the standard HB method with the system parameters listed in Table 1. The obtained NFRC is depicted in Fig. 5, where a clear overlap with the theoretical NFRC can be appreciated. Also, the unstable path is correctly recognized, along with the fold bifurcation point (FB).

Fig. 5
figure 5

Frequency response, Helmholtz–Duffing oscillator. Black line: theoretical NFRC; dashed-dotted red line: HBM-NSI result. The unstable branch is depicted with a thin line. The value of the Floquet multipliers is also depicted for one point of the unstable branch according to HBM-NSI

Recalling that the frequency response is computed as a sum of harmonic contributions (see Eq. (8)), an in-depth comparison can be obtained by inspecting the single contributions \({Y}_{1}^{\left(h\right)}\) defined as

$$\left\{\begin{array}{ll}{Y}_{1}^{\left(h\right)}={Y}_{1}^{\left(0\right)}, & h=0 \\ {Y}_{1}^{(h)}={{Y}_{1}}_{c}^{\left(h\right)}+i{{Y}_{1}}_{s}^{\left(h\right)}, & h=1,\dots , H.\end{array}\right.$$

The result is depicted in Fig. 6 for both the theoretical and the NSI-based results, showing a perfect match.

Fig. 6
figure 6

Single harmonic contributions, Helmholtz–Duffing oscillator. Black lines: theoretical results; dashed-dotted red lines: HBM-NSI result

4.2 Effects of noise and nonlinear modeling errors

The proposed numerical example did not consider the presence of noise in the system response, nor the possibility of nonlinear modeling errors, as it was devoted to validating the methodology. As for the noise, it is known to degrade the quality of the identification, and this is particularly true in the nonlinear case. It is therefore important to check how noise affects the estimation of the nonlinear frequency response when adopting the HBM-NSI technique. Nonlinear modeling errors instead originate from the need to fit physical (and generally complex) nonlinear behaviors to a mathematical description in the identification process. This leads to intrinsic errors in the identification of real-life nonlinear systems, which can be mitigated by accurately selecting proper candidate nonlinear models. This step is generally referred to as nonlinear characterization and it is essential for a satisfactory identification [1, 44].

The same Helmholtz–Duffing oscillator is therefore considered in this section, but the output is here corrupted with an increasing level of Gaussian noise, and it is assumed that the true nonlinear basis functions are not known. A general polynomial expansion up to degree 7 is therefore considered, with basis functions \({g}_{k}\left(t\right)=y{\left(t\right)}^{k}, k=1,\dots ,7\), leading to an over-parametrization of the nonlinear restoring force.

Four levels of noise are discussed, expressed as a percentage of the output signal standard deviation. HBM-NSI is therefore performed on 100 Monte Carlo simulations for each level of noise. First, the state-space model and the modal parameters of the ULS are estimated using NSI with a model order equal to 2. Afterward, the NFRC is constructed for each case, considering 5 harmonics and a forcing amplitude of 1 N.
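As an indication, the noise corruption used in the Monte Carlo runs can be reproduced with a few lines, assuming the noise level is expressed as a percentage of the output standard deviation (variable names are illustrative):

```python
import numpy as np

def add_noise(y, level_percent, rng=None):
    """Corrupt the response with zero-mean Gaussian noise.

    The noise standard deviation is level_percent % of the signal standard deviation.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = (level_percent / 100.0) * np.std(y)
    return y + rng.normal(0.0, sigma, size=y.shape)
```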

The percentage errors on the natural frequency and the damping ratio are listed in Table 3, together with the maximum percentage errors on the nonlinear restoring force and on the NFRC w.r.t. the theoretical ones. Results are averaged over the Monte Carlo simulations, with standard deviations written in brackets.

Table 3 Effects of noise and nonlinear modeling errors on the estimation of the system parameters. Results are averaged over 100 Monte Carlo simulations, with standard deviations written in brackets

The effects of the nonlinear modeling errors are negligible in the absence of noise, while they increase with higher noise contamination. For completeness of information, the average percentage deviation on \({k}_{2}^{id}\) and \({k}_{3}^{id}\) with 3% noise corruption and no modeling errors is in the order of 0.4% and 0.9%, respectively. The identified nonlinear restoring force for the different levels of noise is displayed in Fig. 7 together with the RMS contributions of the single nonlinear terms. The NFRCs are instead depicted in Fig. 8.

Fig. 7
figure 7

Identification of the Helmholtz–Duffing oscillator with noise and nonlinear modeling errors. a Nonlinear restoring force. b Single RMS contributions to the nonlinear restoring force

Fig. 8
figure 8

Frequency response, Helmholtz–Duffing oscillator with noise and nonlinear modeling errors (stable path only)

4.3 Comments on the choice of the input amplitude

The proposed approach can in principle be used to study the periodic solutions of a nonlinear system without any limitation on the considered forcing amplitude. However, this is not generally the case in a real-life scenario, since the state-space model is estimated using a training data set covering a certain range of motion (ROM). If the forcing amplitude chosen for the computation of the NFRCs is such that the frequency response of the system exceeds the ROM of the training data, the model starts to extrapolate. In general, extrapolation in nonlinear system identification should be avoided as far as possible, and this is particularly true for polynomial models [45]. It is therefore important to consider the ROM of the training data when simulating the NFRCs using the estimated state-space model.

To represent a realistic scenario, the Helmholtz–Duffing oscillator of the previous example with 3% noise and nonlinear modeling errors is considered (dotted green line in Fig. 8). The NFRC is computed for several forcing amplitudes and the maximum error between the estimated and theoretical frequency responses is calculated. For convenience, an amplitude index \(AI=\mathrm{max}\left({Y}_{1}\right)/\mathrm{max}\left(y\right)\) is defined, i.e., the ratio between the maximum response amplitude of the computed NFRC and the maximum displacement of the training data, in such a way that extrapolation occurs when \(AI>1\). Results are depicted in Fig. 9 and confirm that extrapolation errors arise when exceeding the training data ROM.

Fig. 9
figure 9

Frequency response for several forcing input amplitudes, Helmholtz–Duffing oscillator with 3% noise and nonlinear modeling errors. The maximum error on the NFRC is depicted in a. Extrapolation occurs when AI > 1 (yellow region). Black lines in b and c: theoretical results; dashed-dotted red lines in b and c: HBM-NSI result

4.4 Two-degree-of-freedom system with non-smooth nonlinearity

The system depicted in Fig. 10 is considered, having two degrees of freedom and a mechanical stop positioned 2 mm away from DOF 2. This creates a strong, non-smooth asymmetric nonlinear behavior of contact type when the gap is closed. The system parameters are listed in Table 4.

Fig. 10
figure 10

Scheme of the 2-DOF system with contact nonlinearity

Table 4 Parameters of the 2 DOF system

The system is excited at DOF 1 by a zero-mean Gaussian random input, having a root-mean-square (RMS) value of 2 N. The numerical integration of the equation of motion is performed using the Newmark method with a sampling frequency of 4096 Hz and considering a time span of 100 s. As for the previous example, four levels of noise are considered and expressed as a percentage of the output signal standard deviation in the range 0–3%.

The driving point FRF \({G}_{\mathrm{1,1}}\) obtained from the synthetic data using the H1-estimator is depicted in Fig. 11 and compared with the theoretical FRF of the ULS. As can be deduced from the figure, the first mode shows a strong nonlinear behavior, while the second mode behaves linearly in the considered range of motion.

Fig. 11
figure 11

Driving point FRF of the 2-DOF system. Solid black line: theoretical FRF of the underlying-linear system; dashed-dotted red line: FRF estimation of the synthetic data with no noise

NSI is performed considering the following nonlinear basis functions and location vectors:

$${g}_{1}=\left\{\begin{array}{ll}-\left({y}_{2}-d\right) & \mathrm{if}\;{y}_{2}\left(t\right)\ge d\\ 0 & \mathrm{elsewhere}\end{array}\right.,\qquad {g}_{2}=\left\{\begin{array}{ll}-{\left({y}_{2}-d\right)}^{3} & \mathrm{if}\;{y}_{2}\left(t\right)\ge d\\ 0 & \mathrm{elsewhere}\end{array}\right.,\qquad {{\varvec{L}}}_{1}={{\varvec{L}}}_{2}={\left[0\quad 1\right]}^{\mathrm{T}}$$

Note that the gap value \(d\) is assumed to be known in this example for simplicity and therefore is not included among the parameters to be estimated. The reader can refer to [21] for the gap estimation in NSI.
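For illustration, the two basis functions can be coded sample-wise as below, together with the derivatives needed in Eqs. (19) and (28), valid for \({y}_{2}\ne d\); a minimal sketch with the gap value assumed known.

```python
import numpy as np

d = 2e-3  # gap between DOF 2 and the mechanical stop [m]

def g1(y2):
    """Linear contact basis function: -(y2 - d) when the gap is closed, 0 elsewhere."""
    return np.where(y2 >= d, -(y2 - d), 0.0)

def g2(y2):
    """Cubic contact basis function: -(y2 - d)^3 when the gap is closed, 0 elsewhere."""
    return np.where(y2 >= d, -(y2 - d) ** 3, 0.0)

def dminus_g1_dy2(y2):
    """Derivative of -g1 with respect to y2 (used in the Jacobians)."""
    return np.where(y2 >= d, 1.0, 0.0)

def dminus_g2_dy2(y2):
    """Derivative of -g2 with respect to y2 (used in the Jacobians)."""
    return np.where(y2 >= d, 3.0 * (y2 - d) ** 2, 0.0)
```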

NSI is performed independently for the considered four levels of noise. The stabilization diagram of the estimated ULS is depicted in Fig. 12 (no noise), with thresholds equal to 0.5%, 20%, 99% and 20%, for frequency, damping ratio, MAC and modal mass respectively. Similar results are obtained with the other noise levels, and a model order equal to 4 is eventually selected for each case.

Fig. 12
figure 12

Stabilization diagram of the ULS, 2-DOF system with no noise. Gray dot: new (not stable) pole; black plus: pole stable in frequency; blue square: pole stable in frequency and MAC; green circle: pole stable in frequency, MAC and damping; red triangle: pole stable in frequency, MAC, damping and modal mass

The theoretical modal parameters of the ULS are listed in Table 5. The modal masses are computed considering the unit-normalization of the eigenvectors.

Table 5 Theoretical modal parameters of the 2 DOF system

The errors on the estimated modal parameters and nonlinear coefficients are listed in Table 6 and refer to 100 Monte Carlo simulations for each level of noise. Results are averaged over the simulations, with standard deviations written in brackets. It can be seen that the errors are quite low, but they generally increase with the noise level, as expected.

Table 6 Effects of noise on the estimation of the system parameters. Results are averaged over 100 Monte Carlo simulations, with standard deviations written in brackets

As for the previous example, the nonlinear state-space model is used to construct the NFRCs and to study the stability. The selected forcing amplitude is 0.3 N and the analysis is conducted around the first mode, considering a frequency range from 2 to 5 Hz. Five harmonics are included.

Note that in this case Eq. (19) can be applied only for values of \({y}_{2}\ne d\), since the basis functions are not differentiable at \(d\). In practice, this is not an issue, since the numerical values of \({y}_{2}\) are never exactly equal to \(d\) due to the numerical nature of the algorithm.

The obtained NFRCs are depicted in Fig. 13 for the case with zero noise and compared with the theoretical ones. A clear overlap can be appreciated also in this case, along with a correct recognition of the unstable paths. No extrapolation is involved in this case.

Fig. 13
figure 13

Frequency response, 2-DOF system with no noise. a DOF 1; b DOF 2. Black line: theoretical NFRC; dashed-dotted red line: HBM-NSI result. The unstable branch is depicted with a thin line

The single harmonic contributions of DOF 2 are represented in Fig. 14. It can be observed that the harmonic contributions other than the first are null everywhere except in the frequency region where the response amplitude exceeds the gap value of 2 mm.

Fig. 14
figure 14

Single harmonic contributions, 2nd DOF of the 2-DOF system. Black lines: theoretical results; dashed-dotted red lines: HBM-NSI result. The gap value is depicted with a dotted gray line

The NFRCs obtained with the noise-corrupted measurements are depicted in Fig. 15 for the second DOF. The frequency responses are consistent with each other, with minor deviations due to noise. In particular, the maximum percentage error in terms of response amplitude is roughly 0.8%, confirming the robustness of the algorithm with respect to the presence of noise.

Fig. 15
figure 15

Frequency response of the 2-DOF system in the presence of noise (stable path only). Black line: no noise; dashed blue line: 1% noise; dashed-dotted orange line: 2% noise; dotted green line: 3% noise

5 Experimental application

A double-well vibrating system is considered, composed of a U-shaped steel frame connected through rods to a moving mass \(m\) whose vertical movement is guided. Base excitation is provided by a shaking table, so that a displacement \(b(t)\) is imposed on the structure. A schematic representation of the device is depicted in Fig. 16, together with a photo.

Fig. 16
figure 16

Experimental setup. a Schematic representation; b Photo

The device has already been studied in [29], where the equation of motion with respect to the vertical movement of the mass has been derived. The system exhibits a double-well potential, resulting in three equilibrium positions, two of which are stable. The device can oscillate around one equilibrium position (in-well motion) or encompassing all the equilibrium positions (cross-well motion). This gives rise to a rich nonlinear behavior, which can include softening phenomena, period doublings and chaos. Only in-well oscillations are considered in this work, but the reader can refer to [29, 31] for a deeper investigation into cross-well phenomena.

Calling \(x\left(t\right)=y\left(t\right)-b\left(t\right)\) the relative vertical displacement around one stable equilibrium position, the equation of motion reads [29]:

$$m\ddot{x}\left(t\right)+{c}_{v}\dot{x}+{k}_{3}{x\left(t\right)}^{3}+{k}_{2}{x\left(t\right)}^{2}+{k}_{1}x(t)=-m\ddot{b}(t),$$
(31)

where \({c}_{v}, {k}_{3}, {k}_{2}\) and \({k}_{1}\) are the viscous damping, cubic stiffness, quadratic stiffness and linear stiffness coefficients, respectively, and \(\ddot{b}(t)\) is the acceleration of the base.

As for the damping model, detailed studies have been conducted in [44] to derive an ad-hoc dissipative model, which considers the effects of the sliding movement of the moving mass. The importance of such a model is quite significant when considering cross-well oscillations. However, the analysis in this work will be limited to in-well motion, and a simple equivalent viscous damping term has been found to be enough for the purposes of this study.

The moving mass is instrumented with a laser vibrometer to measure its absolute displacement \(y(t)\), starting from the equilibrium position depicted in Fig. 16. The acceleration of the base \(\ddot{b}\left(t\right)\) is recorded through an accelerometer and the displacement \(x(t)\) is computed as the difference between the laser measure \(y(t)\) and the displacement of the base \(b(t)\), obtained by integrating twice its measured acceleration.

5.1 Nonlinear system identification under random excitation

Random excitation is applied to perform the nonlinear system identification using NSI, with a sampling frequency of 512 Hz and a time span of 300 s. The RMS value of the base acceleration is \(26\ \mathrm{m}/{\mathrm{s}}^{2}\), and the time history of the relative displacement \(x(t)\) is shown in Fig. 17, together with the restoring surface plot.

Fig. 17
figure 17

Double-well system under random excitation. a Time history of the relative displacement; b Restoring surface plot

The stabilization diagram of the estimated ULS is depicted in Fig. 18, with thresholds equal to 1%, 20% and 20%, for frequency, damping ratio and modal mass, respectively. A model order equal to 2 is eventually selected. The estimated natural frequency is 11.22 Hz, with a damping ratio of 6.22%.

Fig. 18
figure 18

Stabilization diagram of the ULS, double-well system. Gray dot: new (not stable) pole; black plus: pole stable in frequency; green circle: pole stable in frequency and damping; red triangle: pole stable in frequency, damping and modal mass

A second random test is performed with a lower excitation level so as to have a linear reference and to make a comparison with the estimated ULS. The RMS value of the base acceleration is in this case equal to \(10\ \mathrm{m}/{\mathrm{s}}^{2}\). The experimental low-level transmissibility between relative and base displacements is compared with the transmissibility of the ULS estimated using the state-space model of NSI, and the result is depicted in Fig. 19.

Fig. 19
figure 19

Transmissibility of the double-well system. Solid black line: low excitation level (H1-estimator); dashed-dotted red line: high-excitation level (H1-estimator); solid green line: transmissibility of the ULS as obtained from NSI

It can be appreciated how close the estimated ULS transmissibility is to the one associated with the low excitation level, confirming the quality of the identification. The estimated coefficients of the nonlinear terms are: \({k}_{2}^{id}=-4.5501\cdot {10}^{5}\mathrm{ N}{\mathrm{m}}^{-2}\) and \({k}_{3}^{id}=1.8924\cdot {10}^{5}\mathrm{ N}{\mathrm{m}}^{-3}\), obtained as in the classic NSI framework [21, 36] from the blocks of the extended FRF matrix (see Eq. (7)). The estimated discrete state-space model reads:

$${\varvec{A}}=\left[\begin{array}{cc}0.9849& 0.1443\\ -0.1279& 0.9793\end{array}\right], {{\varvec{B}}}^{e}=\left[\begin{array}{ccc}-0.002026& 108.6& -1143\\ -0.002247& 87.47& 257.7\end{array}\right], {\varvec{C}}=\left[\begin{array}{cc}-0.0211& 0.0143\end{array}\right], {\boldsymbol{ }{\varvec{D}}}^{e}=\left[\begin{array}{ccc}-2\cdot {10}^{-6}& -18.95& 321.6\end{array}\right].$$

5.2 NFRC reconstruction and sine-sweep comparison

The estimated state-space model is used to construct the NFRC and to study the stability, following the methodology presented in Sect. 3. The result is compared with the measured responses to two slow logarithmic frequency sweeps, namely a sweep-up and a sweep-down. The frequency range is from 7 to 16 Hz and the sampling frequency is 512 Hz. The spectrogram of the two sweep responses is depicted in Fig. 20, where even and odd harmonics of the fundamental frequency can be observed. A jump-up occurs during the sweep-up at around 0.3 min, while a jump-down occurs during the sweep-down at around 1 min. Prior to that, subharmonics show up between 0.8 and 1 min. The instantaneous fundamental frequency is also represented, computed using the Hilbert transform [46] of the measured input signal.

Fig. 20
figure 20

Spectrogram of the logarithmic sweep response. a Sweep-up; b Sweep-down. The instantaneous fundamental frequency is depicted in red

The shaker is controlled so as to keep the amplitude of the base \({b}_{0}\) constant and equal to 4.7 mm. The forcing input term \(f\left(t\right)\) in Eq. (31) is proportional to the base acceleration, depicted in Fig. 21 in this case as a function of the instantaneous frequency. The amplitude of the acceleration grows quadratically with the frequency as \({b}_{0}{\left(2\pi \nu \right)}^{2}\), where \(\nu \) is the instantaneous frequency in Hz. To correctly compare the results with the measured frequency sweeps, the forcing input amplitude \(F\left(\nu \right)\) used in HBM-NSI is therefore sampled from the function \(F\left(\nu \right)=m{b}_{0}{\left(2\pi \nu \right)}^{2}\) for each value of \(\nu \) considered in the continuation procedure.
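A short sketch of the frequency-dependent forcing amplitude used during the continuation is given below; the moving mass value is left as a parameter since it is not reported here.

```python
import numpy as np

def forcing_amplitude(nu_hz, m, b0=4.7e-3):
    """Equivalent forcing amplitude F(nu) = m * b0 * (2*pi*nu)^2, with b0 in metres."""
    return m * b0 * (2.0 * np.pi * nu_hz) ** 2

# At each continuation step, F(nu) replaces a constant force amplitude, e.g.:
# F_nu = forcing_amplitude(nu, m=moving_mass)   # moving_mass: identified/measured separately
```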

Fig. 21
figure 21

Measured base acceleration versus the instantaneous frequency. The amplitude of the acceleration is depicted with a dotted blue line

The proposed method is applied with 5 harmonics, and the result is depicted in Fig. 22 and compared with the sweep-up and sweep-down measured time responses. The amplitude of the oscillations predicted by the HBM-NSI approach is generally very close to the measured one, confirming the accuracy of the method. Furthermore, a fold bifurcation is detected at 9 Hz using the methodology presented in Sect. 3.1, where the sweep-up response shows a jump-up. Conversely, a jump-down occurs at 7.6 Hz during the sweep-down test, and the predicted unstable branch ends at the very close frequency of 7.7 Hz. A period-doubling bifurcation is detected at 7.9 Hz, initiating a new branch of solutions having twice the fundamental period until 10.2 Hz (dashed-dotted line in Fig. 22). It should also be noted that the range of motion of the NFRC in Fig. 22 is comparable with that of the training data of the identification step depicted in Fig. 17. This allows the estimated nonlinear model to be safely used for the construction of the frequency response of the system with the selected input amplitude, without encountering extrapolation issues.

Fig. 22
figure 22

Frequency response, double-well system. Blue line: sweep-down; yellow line: sweep-up; black line: HBM-NSI result. The unstable branch is depicted with a thin line and the period-doubling branch with a dashed-dotted line

The locus of the two Floquet multipliers of the system around the fold and the period-doubling bifurcations is represented in Fig. 23.

Fig. 23
figure 23

Locus of the two Floquet multipliers of the double-well system depicted with solid red and dashed-dotted blue lines. a Solution around FB; b Solution around PDB

As for the period-doubling bifurcation, it should be highlighted that the occurrence of subharmonics had already been spotted in the spectrogram of the relative displacement (Fig. 20) between 8.5 and 10 Hz, in accordance with the HBM-NSI result. A clear picture of this behavior can be obtained by performing a sine test at the fixed frequency of 9 Hz and with a base amplitude of 4.7 mm. The system response is depicted in Fig. 24, where the period-doubling phenomenon can be observed in the phase diagram, showing two nested orbits.

Fig. 24
figure 24

Response of the double-well system to a sine excitation at 9 Hz with base amplitude equal to 4.7 mm. a Time history; b Phase diagram

Eventually, the NFRC can be generated for several input amplitudes to track the evolution of the system response and of its bifurcations. The result is shown in Fig. 25, where several base displacement amplitudes are considered and the unstable areas are highlighted. It can be noted how the fold bifurcation (in red) appears at relatively low excitation levels, while the period-doubling bifurcation (in blue) needs higher excitation levels to appear. Note that the curves obtained with a base displacement amplitude higher than 4.7 mm involve extrapolation (yellow area).

Fig. 25
figure 25

Nonlinear response curves for different base displacement amplitudes (extrapolation in yellow). Solid black lines: stable paths of some NFRCs; solid red line: locus of the fold bifurcations; solid blue line: locus of the period-doubling bifurcations. Dotted lines represent unstable paths

6 Conclusions

The purpose of this paper was to study the periodic solutions of nonlinear mechanical systems starting from the nonlinear state-space model estimated using the nonlinear subspace identification (NSI) technique. In its standard form, NSI needs the input–output data from a nonlinear structure undergoing broadband excitation, and requires the prior knowledge of the locations and kind of nonlinearities to be estimated. The proposed technique makes it possible to study the stability of the periodic solutions of the system by building the nonlinear frequency response curves (NFRCs) from a single broadband excitation, making it very convenient from an experimental point of view. To this end, a novel methodology is derived in the extended state-space framework, considering a continuation procedure and a monodromy-based approach. The method has first been tested on simulated nonlinear systems in the presence of noise and nonlinear modeling errors. Results showed a satisfactory level of accuracy in determining the periodic solutions of the systems, with errors comparable to the noise floor. Eventually, the experimental identification of a double-well vibrating system has been carried out and the estimated NFRC validated against the acquired data, comprising fold and period-doubling bifurcations. The proposed method has been tested in the case of conservative nonlinearities, suggesting that future validations should focus on the dissipative case.