1 Introduction

The mathematical modeling of biological membranes is an active field of research that has received much attention in the last half century, starting with the pioneering works of Canham [5] and Helfrich [17]. They modeled membranes as regular surfaces in space and associated the equilibrium configurations with the minima of an energy functional depending on the curvatures. If we denote by \(M\subseteq \mathbb {R}^{3}\) a two-dimensional, compact, and oriented submanifold (with an understood choice of the normal vector \(\nu :M\rightarrow \mathbb {S}^2\)), by H and K its mean and Gaussian curvatures, respectively, and by \(H_0\) a constant spontaneous curvature, the so-called Canham–Helfrich energy functional reads

$$\begin{aligned} {\mathcal {E}}(M):=\int _M \big (\alpha _H(H(p)-H_0)^2-\alpha _K K(p)\big )\,\mathrm d{\mathcal {H}}^{2}(p), \end{aligned}$$
(1.1)

where \({\mathcal {H}}^{2}\) is the 2-dimensional Hausdorff measure and \(\alpha _H,\alpha _K>0\) are the bending constants. These are physical, model-specific constants, and the range of values they can take turns out to be crucial in determining the coercivity of the functional. We point out that the positivity of the constants and the minus sign between the two terms in (1.1) induce a competition between the two curvatures.

In the smooth case where M is at least of class \({\mathcal {C}}^2\), the curvatures H and K are given by the usual formulae

$$\begin{aligned} H=\kappa _1+\kappa _2\quad \text {and}\quad K=\kappa _1\kappa _2, \end{aligned}$$

\(\kappa _i\) being the principal curvatures, with respect to which, when \(H_0=0\), the functional \({\mathcal {E}}\) is homogeneous of degree two.

If M is without boundary, one can invoke the Gauss–Bonnet theorem and obtain that the term involving K gives a constant contribution (determined by the Euler characteristic \(\chi (M)\) of M) to the energy, so that it can be neglected in view of the minimization of \({\mathcal {E}}\) among all surfaces with prescribed topology. In this case, and under the further constraint that the spontaneous curvature vanishes, the functional \({\mathcal {E}}\) reduces to the well known Willmore energy functional [24, 33, 37, 40, 42]

$$\begin{aligned} {\mathcal {W}}(M):=\int _M H^2(p)\,\mathrm d{\mathcal {H}}^{2}(p). \end{aligned}$$
(1.2)
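Explicitly, if M is a closed orientable surface of genus g, the Gauss–Bonnet theorem gives

$$\begin{aligned} \int _M K(p)\,\mathrm d{\mathcal {H}}^{2}(p)=2\pi \chi (M)=4\pi (1-g), \end{aligned}$$

so that \({\mathcal {E}}(M)=\alpha _H\int _M (H(p)-H_0)^2\,\mathrm d{\mathcal {H}}^{2}(p)-4\pi \alpha _K(1-g)\); in particular, for \(H_0=0\) the energy reduces to \(\alpha _H{\mathcal {W}}(M)-4\pi \alpha _K(1-g)\).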

Both functionals \({\mathcal {E}}\) and \({\mathcal {W}}\) are geometric in nature, since they depend on geometric features of the surface M, and can be studied in a number of different contexts, according to the regularity required of M. Sobolev-type approaches to the minimization either of the Willmore functional (see [24] and the references therein, see also [21, 22, 29, 34]) or of the Canham–Helfrich functional (see, e.g., [6, 7, 18, 19, 26, 28, 43]) assume that M has fixed topology, or even symmetry constraints. Aiming at more general surfaces, a successful approach is the one through varifolds [20, 44], see [4, 13, 14]. We point out that other frameworks are available in the study of geometric functionals: for instance, currents [16] have been used to tackle the minimization of the area functional. Currents are not immediately suitable for the formulation of problems involving curvatures, since they lack an intrinsic notion of curvature, and special classes of currents have been introduced to overcome this issue. In particular, the technical tools of the theory of currents can be applied to the class of the so-called generalized Gauss graphs [1], which generalize the graph of the Gauss map of a smooth surface M. Instead of generalizing M itself, this approach has the remarkable advantage of allowing one to exploit the fact that the curvatures of M are encoded in its Gauss map, see Sect. 2 for details.

We point out that the typical form in which the functional \({\mathcal {E}}\) in (1.1) is found in the literature is

$$\begin{aligned} E(M):=\int _M \Big (\frac{a_H}{2}(H(p)-H_0)^2+a_K K(p)\Big )\,\mathrm d{\mathcal {H}}^{2}(p), \end{aligned}$$
(1.3)

under the condition that \(a_H>0\) and \(a_K<0\) to ensure the competition between the two curvatures. In this context, it is required that

$$\begin{aligned} a_H>0\qquad \text {and}\qquad \frac{a_K}{a_H}\in (-2,0) \end{aligned}$$
(1.4)

in order to ensure both the coercivity and the lower semicontinuity of the functional (1.3); this condition is the same as the one assumed in [7, Theorem 1] and [6, formula (1.9)] in the Sobolev setting, see also [18, 19], whereas the more restrictive condition \(-6a_H<5a_K<0\) is considered in [4] in the varifold setting. We note that the typical physical range of the parameters is \(-a_H\leqslant a_K\leqslant 0\), see, e.g., [2, 3, 41], the case \(a_K=0\) essentially reducing to the Willmore functional \({\mathcal {W}}\) of (1.2). Given the expression of the Canham–Helfrich functional \({\mathcal {E}}\) in (1.1), condition (1.4) reads

$$\begin{aligned} 4\alpha _H>\alpha _K>0. \end{aligned}$$
(1.5)
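Indeed, comparing (1.1) with (1.3), one can take \(\alpha _H=a_H/2\) and \(\alpha _K=-a_K\); condition (1.4) then becomes \(0<-a_K<2a_H\), that is, \(0<\alpha _K<4\alpha _H\).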

In this paper, we provide a suitable formulation of the Canham–Helfrich functional \({\mathcal {E}}\) introduced in (1.1) in the class of generalized Gauss graphs and study three minimization problems. Our main results are Theorems 4.5, 4.6, and 4.9 stating that, under condition (1.5), there exists a minimizer of the Canham–Helfrich functional in certain classes of generalized Gauss graphs, also enforcing area and enclosed volume constraints, the latter being the physically relevant setup for biological applications. Their proof is a consequence of the direct method in the Calculus of Variations, once lower semicontinuity and compactness are proved.

The main advantage of the generalized Gauss graphs setting is that we are able to cover the physical range (1.5) of the bending coefficients. One shortcoming is the need for technical conditions to define the classes over which the functional is minimized. Nonetheless, regular two-dimensional oriented surfaces always belong to such minimization classes.

The plan of the paper is the following: in Sect. 2 we present a brief review of generalized Gauss graphs, after which we define the Canham–Helfrich energy of a generalized Gauss graph in Sect. 3. Section 4 is devoted to the main results, Theorems 4.5, 4.6, and 4.9, and is complemented by a regularity result, Theorem 4.12.

1.1 Motivations from Biological Membranes and Outlook

In this section we briefly describe the origin of the Canham–Helfrich functional \({\mathcal {E}}\) in (1.1) and present an outlook for future research.

In the early 1970s Canham and Helfrich independently proposed a free energy in an effort to model the shape of biological membranes. The lipid bilayer that usually constitutes biological membranes is composed of amphiphiles, polar molecules featuring a hydrophilic head and a hydrophobic fatty tail, arranged so that the tails lie in the inner part of the bilayer, screened from the surrounding aqueous environment.

Given its thickness of a few nanometers, such a bilayer can be effectively described as a surface M, and the form (1.1) of the energy, depending only on the mean curvature H of M, responds to the need to explain the bi-concave shape of red blood cells [5]. The competing contribution coming from the Gaussian curvature K was added by Helfrich [17], whereas the presence of the spontaneous mean curvature \(H_0\) takes into account possibly preferred configurations: this is the case in which the asymmetry between the two layers, or the difference in the chemical potential across the membrane, determines a natural bending of the membrane, even at rest.

Several derivations of the Canham–Helfrich energy (1.1) are available, see [38] and the references therein, which rely on formal expansions of microscopic energies for small thickness. A more rigorous derivation in terms of \(\Gamma \)-convergence would be desirable from the variational point of view. Some results in this direction are available. In [31] a complete derivation in dimension two is presented, while the full three-dimensional case is tackled in [25, 26], where only partial results are obtained: the \(\Gamma \)-\(\limsup \) inequality is proved, but the \(\Gamma \)-\(\liminf \) inequality is obtained only in the setting of generalized Gauss graphs and for a simplified functional. It would be interesting to recover a full \(\Gamma \)-convergence result also in the three-dimensional case. This work sets the stage for possibly tackling this problem in the context of generalized Gauss graphs, especially in light of the sharpness of the bounds on the bending constants.

2 Brief Theory of Generalized Gauss Graphs

In this section we introduce generalized Gauss graphs and highlight their main properties. We start by introducing some notions from exterior algebra and rectifiable currents.

2.1 Exterior Algebra and Rectifiable Currents

We refer the reader to [15] for a comprehensive treatise on the theory of currents.

Let \(k,N\in \mathbb {N}\) be such that \(1 \leqslant k \leqslant N\). We define \({\bigwedge }^0(\mathbb {R}^{N}):=\mathbb {R}^{}\) and we denote by \({\bigwedge }^k(\mathbb {R}^{N})\) the space of k-covectors in \(\mathbb {R}^{N}\), that is the space of k-linear alternating forms on \(\mathbb {R}^{N}\); we denote by \({\bigwedge }_k(\mathbb {R}^{N})\) the dual space \(({\bigwedge }^k(\mathbb {R}^{N}))^*={\bigwedge }^k((\mathbb {R}^{N})^*)\), called the space of k-vectors in \(\mathbb {R}^{N}\). We recall that, if \(\{e_1, \ldots , e_N \}\) is a basis of \(\mathbb {R}^{N}\), then \(\{e_{i_1} \wedge \cdots \wedge e_{i_k}: 1 \leqslant i_1< \cdots <i_k \leqslant N \}\) is a basis of \({\bigwedge }_k(\mathbb {R}^{N})\), where \(\wedge \) denotes the exterior product. A k-vector v is called a simple k-vector if it can be written as \(v = v_1 \wedge \cdots \wedge v_k\), for some \(v_1,\ldots ,v_k\in {\bigwedge }_1(\mathbb {R}^{N})\simeq \mathbb {R}^{N}\).

Let \(\Omega \subseteq \mathbb {R}^{N}\) be an open set. A (differential) k-form \(\omega \) on \(\Omega \) is a map that to each \(x \in \Omega \) associates \(\omega (x) \in {\bigwedge }^k(\mathbb {R}^{N})\). Given \(\omega \) a 0-form on \(\Omega \) (that is, a scalar function \(\omega :\Omega \rightarrow \mathbb {R}^{}\)), we define \(\mathrm d\omega \) as the 1-form on \(\Omega \) given by the differential of \(\omega \); for \(k>0\), the definition of the exterior differential operator \(\mathrm d\) is extended from k-forms to \((k+1)\)-forms through the usual algebra of the exterior product. We denote by \({\mathcal {D}}^k(\Omega )\) the space of k-forms with compact support in \(\Omega \); the space of k-currents \({\mathcal {D}}_k(\Omega )\) is defined as the dual of \({\mathcal {D}}^k(\Omega )\). Given a sequence of currents \(\{T_n\}_{n \in \mathbb {N}} \subseteq {\mathcal {D}}_k(\Omega )\) and a current \(T \in {\mathcal {D}}_k(\Omega )\), we say that \(T_n \rightharpoonup T\) if and only if \(\langle T_n, \omega \rangle \rightarrow \langle T, \omega \rangle \) for every \(\omega \in {\mathcal {D}}^k(\Omega )\), where \(\langle \cdot , \cdot \rangle \) denotes the dual product. We denote by \(\partial T \in \mathcal {D}_{k-1}(\Omega )\) the boundary of the current \(T \in \mathcal {D}_k(\Omega )\), defined as \(\langle \partial T, \omega \rangle :=\langle T, \mathrm d\omega \rangle \) for every \(\omega \in \mathcal {D}^{k-1}(\Omega )\); we notice that \(\mathrm d\omega \in {\mathcal {D}}^k(\Omega )\) whenever \(\omega \in {\mathcal {D}}^{k-1}(\Omega )\), that is, exterior differentiation preserves the compactness of the support, so that the duality \(\langle \partial T,\omega \rangle \) is well defined. The mass of a current \(T \in \mathcal {D}_k(\Omega )\) in the open set \(W \subseteq \Omega \) is defined as

$$\begin{aligned} \mathbb {M}^{}_W(T):=\sup \big \{\langle T, \omega \rangle : \omega \in {\mathcal {D}}^k(W), \Vert \omega (x) \Vert \leqslant 1 \, \text { for every}\,\, x\in W \big \}. \end{aligned}$$

Here, \(\Vert \cdot \Vert \) denotes the comass norm, namely, for \(\alpha \in {\bigwedge }^k(\mathbb {R}^{N})\),

$$\begin{aligned} \Vert \alpha \Vert :=\sup \{\langle \alpha ,v\rangle :v \,\, \text { is a simple}\,\, k-\text {vector with}\,\, |v|\leqslant 1 \big \}, \end{aligned}$$

where \(|v|:=|v_1\wedge \cdots \wedge v_k|\) is the volume of the parallelepiped generated by \(v_1,\ldots ,v_k\).

Given a set \(M \subseteq \mathbb {R}^{N}\), we say that M is k-rectifiable if \(M \subseteq \bigcup _{i=0}^{\infty } M_i\), for a certain \({\mathcal {H}}^k\)-negligible subset \(M_0\subseteq \mathbb {R}^{N}\) and for certain k-dimensional \({\mathcal {C}}^1\) surfaces \(M_i\subseteq \mathbb {R}^{N}\), for \(i>0\). One can prove that, if M is a k-rectifiable set, for \(\mathcal {H}^k\)-almost every \(p \in M\) there exists an approximate tangent space denoted by \(T_pM\). We say that a map \(\eta :M\rightarrow {\bigwedge }_k(\mathbb {R}^{N})\) is an orientation of M if it is \(\mathcal {H}^k\)-measurable and if \(\eta (p)\) is a unit simple k-vector that spans the approximate tangent space \(T_pM\) for \(\mathcal {H}^k\)-almost every \(p \in M\). We say that a map \(\beta :M \rightarrow \mathbb {R}^{}\) is an integer multiplicity on M if it is \(\mathcal {H}^k\)-locally summable and takes values in \(\mathbb {N}\). Finally, \(T \in \mathcal {D}_k(\Omega )\) is a k-rectifiable current with integer multiplicity if there exist a k-rectifiable set \(M\subseteq \mathbb {R}^{N}\), an orientation \(\eta \) of M, and an integer multiplicity \(\beta \) on M such that for every \(\omega \in \mathcal {D}^k(\Omega )\) we have

$$\begin{aligned} \langle T, \omega \rangle =\int _M \langle \omega (p), \eta (p) \rangle \beta (p) \,\mathrm d\mathcal {H}^k(p). \end{aligned}$$

We denote by \(\mathcal {R}_k(\Omega )\) the set of such currents and write \(T= \llbracket M,\eta ,\beta \rrbracket \). In this case, we have that

$$\begin{aligned} \mathbb {M}^{}_W(T)=\int _{M\cap W} \beta (p)\,\mathrm d{\mathcal {H}}^{k}(p), \end{aligned}$$
(2.1)

which simply returns \({\mathcal {H}}^{k}(M\cap W)\) if the multiplicity \(\beta \) is 1.
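For instance, the current \(T=\llbracket \mathbb {S}^{2},\eta ,1\rrbracket \), with \(\eta \) the orientation induced by the outer unit normal, satisfies \(\mathbb {M}^{}_{\mathbb {R}^{3}}(T)={\mathcal {H}}^{2}(\mathbb {S}^{2})=4\pi \) and, by the classical Stokes theorem, \(\partial T=0\).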

We now state the celebrated Federer–Fleming compactness theorem, which establishes the compactness result for k-rectifiable currents with integer multiplicity.

Theorem 2.1

[15, Theorem 4.2.17] Let \(\{T_n\}_{n \in \mathbb {N}}\) be a sequence in \(\mathcal {R}_k(\Omega )\) such that \(\partial T_n \in \mathcal {R}_{k-1}(\Omega )\) for any \(n \in \mathbb {N}\). Assume that for any open set W with compact closure in \(\Omega \) there exists a constant \(c_{W}>0\) such that

$$\begin{aligned} \mathbb {M}^{}_W(T_n)+\mathbb {M}^{}_W(\partial T_n) < c_W. \end{aligned}$$

Then there exist a subsequence \(\{n_j\}_{j\in \mathbb {N}}\) and a current \(T \in \mathcal {R}_k(\Omega )\) with \(\partial T \in \mathcal {R}_{k-1}(\Omega )\) such that \(T_{n_j} \rightharpoonup T\) as \(j \rightarrow \infty \).

2.2 Gauss Graphs

We refer the reader to [11, 12] for the classical notions of differential geometry.

Let \(M\subseteq \mathbb {R}^{3}\) be a compact two-dimensional manifold of class \({\mathcal {C}}^2\); we say that M is orientable if there exists a map \(\nu :M\rightarrow \mathbb {S}^{2}\) of class \({\mathcal {C}}^1\) on M such that, for every \(p\in M\), the vector \(\nu (p)\) is perpendicular to the tangent space \(T_pM\). Once we fix a choice of such a map \(\nu \), we say that the manifold M is oriented and we call \(\nu \) the Gauss map of M. Since M is of class \({\mathcal {C}}^2\), the Gauss map is differentiable at any \(p \in M\) and, upon identifying \(T_{\nu (p)}\mathbb {S}^{2}\simeq T_pM\), its differential in p, \(\mathrm d\nu _p:T_pM\rightarrow T_{\nu (p)}\mathbb {S}^{2}\), is a self-adjoint linear operator that has two real eigenvalues \(\kappa _1(p)\) and \(\kappa _2(p)\), called the principal curvatures of M at p. We define the mean and Gaussian curvatures of M at p by

$$\begin{aligned} H(p):=\kappa _1(p)+\kappa _2(p),\quad K(p):=\kappa _1(p)\kappa _2(p). \end{aligned}$$

The map \(\mathrm d\nu _p\) can be extended to a linear map \(L_p:\mathbb {R}^{3}\rightarrow \mathbb {R}^{3}\) by setting

$$\begin{aligned} L_p:=\mathrm d\nu _p\circ \textbf{P}_p, \end{aligned}$$
(2.2)

where \(\textbf{P}_p:\mathbb {R}^{3}\rightarrow T_p M\) denotes the orthogonal projection on the tangent space. Observe that \(L_p\) has eigenvalues \(\kappa _1(p)\), \(\kappa _2(p)\), and 0; in particular, \(\det L_p=0\).
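For example, for the sphere of radius r centered at the origin, with outer normal \(\nu (p)=p/r\), one has \(\mathrm d\nu _p=\frac{1}{r}\,\mathrm {id}_{T_pM}\), so that \(\kappa _1=\kappa _2=1/r\), \(H=2/r\), \(K=1/r^2\), and \(L_p=\frac{1}{r}\textbf{P}_p\) has eigenvalues \(1/r\), \(1/r\), and 0.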

For convenience, we denote by \(\mathbb {R}^{3}_x\) the space of points p and by \(\mathbb {R}^{3}_y\) the space where \(\nu (p)\) takes its values, and we consider the graph of the Gauss map

$$\begin{aligned} G:=\big \{(p,\nu (p)) \in \mathbb {R}^{3}_x \times \mathbb {R}^{3}_y: p \in M\big \} \subset \mathbb {R}^{3}_x \times \mathbb {R}^{3}_y\simeq \mathbb {R}^{6}. \end{aligned}$$
(2.3)

Since M is a two-dimensional manifold of class \({\mathcal {C}}^2\), G is a two-dimensional embedded surface in \(\mathbb {R}^{3}_x \times \mathbb {R}^{3}_y\) of class \({\mathcal {C}}^1\). We remark that if M has a boundary then also G has a boundary which is given by \( \partial G=\big \{(p,\nu (p)): p \in \partial M\big \} \) and we notice that if \(\partial M = \emptyset \) then \(\partial G=\emptyset \).

We now define an orientation on G. We equip M with the orientation induced by \(\nu \) and let \(\tau (p):=*\nu (p)\), where

$$\begin{aligned} *:{\bigwedge }_1(\mathbb {R}^{3}) \rightarrow {\bigwedge }_2(\mathbb {R}^{3}) \end{aligned}$$

is the Hodge operator. Notice that \(\tau (p) \in {\bigwedge }_2(T_pM)\) for every \(p \in M\), thus the field \(p \mapsto \tau (p)\) is a tangent 2-vector field on M. In other words, the Hodge operator associates with \(\nu (p)\) the unit 2-vector spanning the oriented tangent plane to M at p, namely \(\tau (p)\). Let \(\Phi :M \rightarrow M \times \mathbb {S}^{2} \subseteq \mathbb {R}^{3}_x \times \mathbb {R}^{3}_y\) be given by \(\Phi (p):=(p,\nu (p))\), which is of class \({\mathcal {C}}^1\) on M. Observe that \(G=\Phi (M)\). For each \(p \in M\) we have

$$\begin{aligned} \mathrm d\Phi _p:T_pM&\rightarrow T_pM \times T_{\nu (p)}\mathbb {S}^{2}\subseteq \mathbb {R}^{3}_x \times \mathbb {R}^{3}_y\\ u&\mapsto (u,\mathrm d\nu _p(u)). \end{aligned}$$

Finally, we define \(\xi :G \rightarrow {\bigwedge }_2(\mathbb {R}^{3}_x \times \mathbb {R}^{3}_y)\) as

$$\begin{aligned} \xi (p,\nu (p)):=\mathrm d\Phi _p(\tau _1(p)) \wedge \mathrm d\Phi _p(\tau _2(p)), \quad \text {for}\quad \tau = \tau _1 \wedge \tau _2. \end{aligned}$$
(2.4)

It is easy to see that \(|\xi |\ge 1\), hence we can normalize \(\xi \) obtaining

$$\begin{aligned} \eta :=\frac{\xi }{|\xi |}, \end{aligned}$$
(2.5)

which is an orientation of G.
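For the unit sphere \(M=\mathbb {S}^{2}\) with \(\nu (p)=p\), for instance, one has \(\mathrm d\Phi _p(\tau _i)=(\tau _i,\tau _i)\), hence \(\xi =(\tau _1,\tau _1)\wedge (\tau _2,\tau _2)\) and \(|\xi |=2\) (more generally, computing in a basis of principal directions, \(|\xi |^2=(1+\kappa _1^2)(1+\kappa _2^2)\)); consequently \(\eta =\xi /2\), \(|\eta _0|=1/2\) on G, and, by the area formula, \({\mathcal {H}}^2(G)=2\,{\mathcal {H}}^2(\mathbb {S}^{2})=8\pi \).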

We now introduce the general setting of generalized Gauss graphs. Let \(\{e_1,e_2,e_3\}\) and \(\{\varepsilon _1,\varepsilon _2,\varepsilon _3\}\) be the canonical basis of \(\mathbb {R}^{3}_x\) and \(\mathbb {R}^{3}_y\), respectively. Given a 2-vector \(\xi \in {\bigwedge }_2(\mathbb {R}^{3}_x \times \mathbb {R}^{3}_y)\), we define the stratification of \(\xi \) as the unique decomposition

$$\begin{aligned} \xi =\xi _0+\xi _1+\xi _2, \quad \text {where} \quad \xi _0 \in {\bigwedge }_2(\mathbb {R}^{3}_x), \quad \xi _1 \in {\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y), \quad \xi _2 \in {\bigwedge }_2(\mathbb {R}^{3}_y), \end{aligned}$$

given by

$$\begin{aligned} \xi _0&=\sum _{1\leqslant i<j\leqslant 3} \langle \mathrm dx_i \wedge \mathrm dx_j,\xi \rangle e_i\wedge e_j=:\sum _{1\leqslant i<j\leqslant 3} \xi _0^{ij} e_i\wedge e_j,\\ \xi _1&=\sum _{1\leqslant i,j\leqslant 3} \langle \mathrm dx_i \wedge \mathrm dy_j,\xi \rangle e_i\wedge \varepsilon _j=:\sum _{1\leqslant i,j\leqslant 3} \xi _1^{ij} e_i\wedge \varepsilon _j,\\ \xi _2&=\sum _{1\leqslant i<j\leqslant 3} \langle \mathrm dy_i \wedge \mathrm dy_j,\xi \rangle \varepsilon _i\wedge \varepsilon _j=:\sum _{1\leqslant i <j\leqslant 3} \xi _2^{ij} \varepsilon _i\wedge \varepsilon _j, \end{aligned}$$

where \(\{\mathrm dx_1,\mathrm dx_2, \mathrm dx_3\}\) and \(\{\mathrm dy_1,\mathrm dy_2,\mathrm dy_3\}\) denote the dual basis of \(\{e_1,e_2,e_3\}\) and \(\{\varepsilon _1,\varepsilon _2,\varepsilon _3\}\), respectively. Notice that the three equalities above serve as a definition of \(\xi _h^{ij}\); \(\xi _0\) and \(\xi _2\) are represented by \(3\times 3\) skew-symmetric matrices while \(\xi _1\) is represented by a \(3\times 3\) matrix.
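For instance, if \(\xi =(a,b)\wedge (c,d)\) with \(a,c\in \mathbb {R}^{3}_x\) and \(b,d\in \mathbb {R}^{3}_y\), then \(\xi _0=a\wedge c\), \(\xi _1=a\wedge d-c\wedge b\), and \(\xi _2=b\wedge d\); this is precisely the computation carried out for Gauss graphs in (3.2) below.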

From now on we take \(\Omega \subseteq {\mathbb {R}}_x^3\) an open set. We indicate by \({\text {curv}}_2(\Omega )\) the set of the currents \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \) that satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} \Sigma \text { and}\,\, \partial \Sigma \,\,\text { are rectifiable currents supported on } \,\,\Omega \times \mathbb {S}^{2}, \\ \displaystyle \langle \Sigma , g\varphi ^* \rangle \!\!=\!\! \int _G g(x,y)|\eta _0(x,y)|\beta (x,y)\, \mathrm d\mathcal {H}^2(x,y), \quad \text {for all }\,g \in {\mathcal {C}}_c(\Omega \times \mathbb {R}^{3}_y), \\ \langle \partial \Sigma , \varphi \wedge \omega \rangle =0, \quad \text {for all}\,\, \omega \in \mathcal {D}^0(\Omega \times \mathbb {R}^{3}_y), \end{array}\right. }\qquad \end{aligned}$$
(2.6)

where

$$\begin{aligned} \varphi (x,y) :=\sum _{j=1}^3 y_j \mathrm dx_j, \quad \varphi ^*(x,y):=\sum _{j=1}^3 (-1)^{j+1} y_j \mathrm d{\hat{x}}_j \end{aligned}$$

and \(\mathrm d{\hat{x}}_j=\mathrm dx_{j_1} \wedge \mathrm dx_{j_2}\) for \(1\leqslant j_1<j_2 \leqslant 3\), \(j_1,j_2 \ne j\). We can associate with the regular Gauss graph G the current \(\Sigma _G \in \mathcal {R}_2(\mathbb {R}^{3}_x \times \mathbb {R}^{3}_y)\) given by \(\Sigma _G:=\llbracket G,\eta ,1\rrbracket \), and this turns out to be an element of \({\text {curv}}_2(\Omega )\) (see [1, Section 2]). Given \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\), we let \(G^*:=\{(x,y)\in G: \eta _0(x,y)\ne 0\}\) (notice that \(G^*\) is defined only \({\mathcal {H}}^2\)-a.e.).

The geometric meaning of the first condition in (2.6) is evident; that of the second condition is the following: the variable y is orthogonal to the tangent space to \(p_1 G\), where we denote by \(p_1:\mathbb {R}^{3}_x\times \mathbb {R}^{3}_y\rightarrow \mathbb {R}^{3}_x\) the projection onto the first component; the third condition is the analogue of the second one for the boundary \(\partial \Sigma \).
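For a regular Gauss graph \(\Sigma _G=\llbracket G,\eta ,1\rrbracket \), for instance, the second condition in (2.6) can be checked directly: since \(\varphi ^*\) only involves the forms \(\mathrm d{\hat{x}}_j\), only the stratum \(\eta _0=\tau _1\wedge \tau _2/|\xi |\) contributes, and on G one has \(y=\nu (x)\), so that \(\langle \varphi ^*(x,y),\eta (x,y)\rangle =\frac{1}{|\xi |}\,y\cdot (\tau _1\times \tau _2)=\frac{\nu \cdot \nu }{|\xi |}=|\eta _0(x,y)|\).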

For a rectifiable current \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \), according to the stratification of \(\eta \), we define the strata \(\Sigma _i\) by

$$\begin{aligned} \Sigma _i(\omega ):=\int _G \langle \omega (x,y),\eta _i(x,y)\rangle \beta (x,y)\, \mathrm d\mathcal {H}^2(x,y) \end{aligned}$$

for every \(\omega \in \mathcal {D}^2(\mathbb {R}^{3}_x\times \mathbb {R}^{3}_y)\). Given \(k\in \{1,2,3 \}\), consider a multi-index \(\lambda \in \{(\lambda _1,\ldots ,\lambda _k )\,:\, 0\leqslant \lambda _1< \cdots <\lambda _k \leqslant 2 \}\). Letting \(|\Sigma _{\lambda _i}|\) denote the total variation measure of the stratum \(\Sigma _{\lambda _i}\), a generalized Gauss graph \(\Sigma \in {\text {curv}}_2(\Omega )\) is said to be \(\lambda \)-special if

$$\begin{aligned} |\Sigma _{\lambda _i}| \ll |\Sigma _0| \quad \text {for} \quad i=1,\ldots ,k \end{aligned}$$

and we write \( \Sigma \in {\text {curv}}_2^\lambda (\Omega )\). We set \({\text {curv}}_2^*(\Omega ):={\text {curv}}_2^{(0,1,2)}(\Omega )\) and we call its elements special generalized Gauss graphs; in the sequel, we will also make use of the space

$$\begin{aligned} {\text {curv}}_2^{(0,1)}(\Omega )=\big \{\Sigma \in {\text {curv}}_2(\Omega ): |\Sigma _{1}| \ll |\Sigma _0|\big \}. \end{aligned}$$

We introduce the following class of functions.

Definition 2.2

A function \(f:\Omega \times \mathbb {R}^{3}_y\times \big ({\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\big ) \rightarrow \mathbb {R}^{}\) is said to be a standard integrand in the setting of \({\text {curv}}_2(\Omega )\) if

  1. (i)

    f is continuous;

  2. (ii)

    f is convex in the last variable, i.e.,

    $$\begin{aligned} f(x,y,t p+(1-t)q) \leqslant tf(x,y,p)+(1-t) f(x,y,q), \end{aligned}$$

    for all \(t \in (0,1)\), for all \((x,y) \in \Omega \times \mathbb {R}^{3}_y\), and for all \(p,q \in {\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\);

  3. (iii)

    f has superlinear growth in the last variable, i.e., there exists a continuous function \(\varphi :\Omega \times \mathbb {R}^{3}_y \times [0,+\infty ) \rightarrow [0,+\infty )\), non-decreasing in the last variable and such that \(\varphi (x,y,t) \rightarrow +\infty \) locally uniformly in \((x,y)\) as \(t \rightarrow +\infty \), and with

    $$\begin{aligned} \varphi (x,y,|q|)|q| \leqslant f(x,y,q) \end{aligned}$$

    for all \((x,y,q) \in \Omega \times \mathbb {R}^{3}_y \times \big ({\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\big ).\)
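For instance, \(f(x,y,q):=|q|^2\) is a standard integrand: it is continuous, convex in q, and (iii) holds with \(\varphi (x,y,t):=t\); similarly, by (4.5), the shifted integrand \({\tilde{f}}+c_2\) used in the proof of Theorem 4.5 below is a standard integrand, with \(\varphi (x,y,t):=c_1t\).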

Remark 2.3

A function f as in Definition 2.2 is called a (1)-standard integrand in [10, Definition 3.3].

The following theorem ensures that an integral functional with a standard integrand as a density is lower semicontinuous.

Theorem 2.4

[10, Theorem 3.2] Let f be a standard integrand in the setting of \({\text {curv}}_2(\Omega )\) and, for every \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\), set

$$\begin{aligned} I_f(\Sigma ):=\int _{G^*} f(x,y,\xi _1(x,y)) |\eta _0(x,y)| \beta (x,y) \,\mathrm d\mathcal {H}^2(x,y). \end{aligned}$$

Consider a sequence \(\{\Sigma _j \}_{j\in \mathbb {N}} \subset {\text {curv}}_2^{(0,1)}(\Omega )\) such that

  1. (i)

    \(\Sigma _j \rightharpoonup \Sigma \), where \(\Sigma \in \mathcal {R}_2(\Omega \times \mathbb {S}^{2})\);

  2. (ii)

    \(\sup _{j\in \mathbb {N}} I_f(\Sigma _j) <+\infty \).

Then

$$\begin{aligned} \Sigma \in {\text {curv}}_2^{(0,1)}(\Omega ) \quad \text {and} \quad I_f(\Sigma ) \leqslant \liminf _{j \rightarrow \infty } I_f(\Sigma _j). \end{aligned}$$

Theorem 2.5

[9, Corollary 4.2] Consider a sequence \(\Sigma _j =\llbracket G_j,\eta _j,\beta _j \rrbracket \in {\text {curv}}_2^{*}(\Omega )\) such that

$$\begin{aligned}{} & {} \sup _{j\in \mathbb {N}}\bigg \{ \int _{G_j^*} \!\! \bigg (|(\eta _j)_0(x,y)|+\frac{|(\eta _j)_1(x,y)|^2}{|(\eta _j)_0(x,y)|}+\frac{|(\eta _j)_2(x,y)|^2}{|(\eta _j)_0(x,y)|} \bigg )\beta _j(x,y)\,\mathrm d\mathcal {H}^2(x,y)+ \mathbb {M}^{}(\partial \Sigma _j)\bigg \}\\ {}{} & {} \quad <+\infty . \end{aligned}$$

Then there exist a subsequence \(\{\Sigma _{j_k}\}_{k\in \mathbb {N}}\) and \(\Sigma \in {\text {curv}}_2^*(\Omega )\) such that \(\Sigma _{j_k} \rightharpoonup \Sigma \) as \(k\rightarrow \infty \).

3 The Canham–Helfrich Energy of a Generalized Gauss Graph

In this section, we are going to define the Canham–Helfrich energy of a generalized Gauss graph in a way that is the natural extension of the definition for smooth surfaces. Let \(H_0\in \mathbb {R}^{}\). Here, \(M\subseteq \mathbb {R}^{3}\) denotes a compact and oriented (with an understood choice of the normal \(\nu \)) two-dimensional manifold of class \({\mathcal {C}}^2\); recall the definition (1.1) of the Canham–Helfrich energy functional \({\mathcal {E}}(M)\) on M.

Lemma 3.1

[26, Lemma 4.2] For \(\xi \in {\bigwedge }_2(\mathbb {R}^{3}_x\times \mathbb {R}^{3}_y)\) as in (2.4) the following hold true

$$\begin{aligned} {\left\{ \begin{array}{ll} \xi _0=\tau _1 \wedge \tau _2, \\ \xi _1=\tau _1 \wedge \mathrm d\nu ( \tau _2) -\tau _2 \wedge \mathrm d\nu ( \tau _1),\\ \xi _1^{ij}=(\tau _1 \otimes \mathrm d\nu ( \tau _2) -\tau _2 \otimes \mathrm d\nu ( \tau _1))_{ij},\\ \xi _2=\mathrm d\nu (\tau _1) \wedge \mathrm d\nu (\tau _2)=\kappa _1 \kappa _2 \tau _1 \wedge \tau _2.\\ \end{array}\right. } \end{aligned}$$
(3.1)

Proof

By the definition of \(\xi \) we have

$$\begin{aligned} \begin{aligned} \xi (p,\nu (p))&=(\tau _1(p), \mathrm d\nu _p(\tau _1(p))) \wedge (\tau _2(p), \mathrm d\nu _p(\tau _2(p)))\\&= \tau _1\wedge \tau _2 +\tau _1 \wedge \mathrm d\nu _p( \tau _2) -\tau _2 \wedge \mathrm d\nu _p( \tau _1)+\mathrm d\nu _p (\tau _1) \wedge \mathrm d\nu _p (\tau _2). \end{aligned} \end{aligned}$$
(3.2)

Then the equalities follow from straightforward computations. \(\square \)

Remark 3.2

If M is a two-dimensional oriented manifold of class \({\mathcal {C}}^2\) with multiplicity \(\bar{\beta }:M\rightarrow \mathbb {N}\), G is the Gauss graph associated with M via (2.3), and \(\Sigma _G:=\llbracket G,\eta ,\beta \rrbracket \) with \(\beta (x,y)=\bar{\beta }(x)\), then the equalities

$$\begin{aligned} \begin{aligned} \mathbb {M}^{}(M)&=\int _M \bar{\beta }(p)\,\mathrm d{\mathcal {H}}^2(p)=\int _G \frac{\beta (x,y)}{|\xi (x,y)|}\,\mathrm d{\mathcal {H}}^2(x,y) \\&=\int _G|\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y) \end{aligned} \end{aligned}$$
(3.3)

hold true by means of the area formula, (2.5), and the first identity in (3.1); here, by \(\mathbb {M}^{}(M)\) we mean the mass of the current \(\llbracket M,\nu ,\bar{\beta }\rrbracket \), see (2.1) with \(k=2\). In particular, if \(\bar{\beta }\equiv 1\), we obtain

$$\begin{aligned} {\mathcal {H}}^2(M)= \int _G |\eta _0(x,y)|\,\mathrm d{\mathcal {H}}^2(x,y). \end{aligned}$$
(3.4)

The next two lemmas are proved in [26]. We provide the proofs in our context for the sake of completeness.

Lemma 3.3

[26, Lemma 4.5] Let \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\) be a generalized Gauss graph. Then

  • for \(\mathcal {H}^2\)-almost every \((x,y) \in G\)

    $$\begin{aligned} \sum _{i=1}^{3} \eta _1^{ij}(x,y) y_i = 0 \quad \text {for all }\,\,1 \leqslant j \leqslant 3, \end{aligned}$$
    (3.5)
  • for \({\mathcal {H}}^2\)-almost every \((x,y) \in G^*\)

    $$\begin{aligned} \sum _{j=1}^{3} \eta _1^{ij}(x,y) y_j = 0 \quad \text {for all} \,\, 1 \leqslant i \leqslant 3. \end{aligned}$$
    (3.6)

Proof

As in the proof of [1, Proposition 2.4], we have that

$$\begin{aligned} \langle \eta (x,y), (y,0) \wedge (0,w) \rangle = 0 \quad \text {for all } w \in \mathbb {R}^{3} \text { and for } \mathcal {H}^2\text {-almost every } (x,y) \in G. \end{aligned}$$

From this we deduce that \(\sum _{ij} \eta _1^{ij}(x,y) y_i w_j=0\) for all \(w \in \mathbb {R}^{3}\), which implies (3.5).

By [1, Theorem 2.10(ii)], for \(\mathcal {H}^2\)-almost every \((x,y) \in G^*\), there are an embedded \({\mathcal {C}}^1\)-surface \(S \subset \mathbb {R}^{3}\) and a map \(\zeta :S \rightarrow \mathbb {S}^{2}\) of class \({\mathcal {C}}^1\) such that

$$\begin{aligned} \zeta (x)=y, \quad {\bigwedge }_2({\text {I}}\oplus \mathrm d\zeta _x)(*y) = \xi (x,y). \end{aligned}$$

By Lemma 3.1, we obtain, for \(i=1,2,3\) and \(*y=\tau _1 \wedge \tau _2\),

$$\begin{aligned} \sum _{j=1}^3 \xi _1^{ij}y_j=e_i \cdot (\tau _1 \otimes D\zeta (x) \tau _2-\tau _2 \otimes D \zeta (x) \tau _1)y=0, \end{aligned}$$

since \(D\zeta (x) \tau _k \cdot y=D\zeta (x) \tau _k \cdot \zeta (x)=0\) for \(k=1,2\) as \(\zeta \) takes values in \(\mathbb {S}^{2}\). Then (3.6) is proved recalling (2.5). \(\square \)

We recall that the permutation symbols are given by

$$\begin{aligned} \varepsilon _{ijk}={\left\{ \begin{array}{ll} 1 &{} \text {if }\,(ijk)\,\,\text { is an even permutation of}\,\, \{1,2,3\}, \\ -1 &{} \text {if} \,\,(ijk) \,\,\text { is an odd permutation of }\,\, \{1,2,3\}, \\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

For any \(z\in \mathbb {R}^{3}\), we define

$$\begin{aligned} \Psi _{z}:=\sum _{i,j,k=1}^3 \varepsilon _{ijk}\,z_k \, \mathrm dx_i \wedge \mathrm dy_j. \end{aligned}$$
(3.7)

Lemma 3.4

[26, Lemma 4.6] For L as in (2.2), the following formulas hold

$$\begin{aligned} \begin{aligned}&H={\text {tr}}L=\nu _1 (\xi _1^{23}-\xi _1^{32}) - \nu _2 (\xi _1^{13}-\xi _1^{31}) + \nu _3 (\xi _1^{12}-\xi _1^{21})= \langle \Psi _{\nu }, \xi _1 \rangle ,\\&K={\text {tr}}({\text {cof}}L)=\nu \cdot ({\text {cof}}\xi _1) \nu , \end{aligned} \end{aligned}$$
(3.8)

where L and \(\nu \) are evaluated at \(p \in M\) and \(\xi \) is evaluated at \((p,\nu (p))\).

Proof

Since \(\{\tau _1,\tau _2,\nu \}\) is an orthonormal basis of \(\mathbb {R}^{3}\), we observe that for any \(r \in \mathbb {R}^{}\)

$$\begin{aligned} -r {\text {tr}}({\text {cof}}L)+r^2 {\text {tr}}L -r^3&=\det (L-r {\text {I}})= \det (\tau _1 | \tau _2 |\nu ) \det (L-r{\text {I}})\nonumber \\&=(L-r{\text {I}})\nu \cdot \left[ (L-r{\text {I}})\tau _1 \times (L-r{\text {I}})\tau _2 \right] \\&=-r(L\tau _1 \times L\tau _2) \cdot \nu + r^2(\tau _1\times L\tau _2-\tau _2 \times L \tau _1)\cdot \nu -r^3 ,\nonumber \end{aligned}$$
(3.9)

where we used the fact that \(L \nu =0\). Therefore, from Lemma 3.1 we deduce that

$$\begin{aligned} {\text {tr}}L&=(\tau _1 \times L\tau _2 -\tau _2 \times L \tau _1) \cdot \nu =\sum _{i,j,k=1}^3 (\tau _{1,i}e_j \cdot L\tau _2-\tau _{2,i}e_j \cdot L \tau _1)\nu _k \varepsilon _{ijk}\\&=\sum _{i,j,k=1}^3 \xi _1^{ij} \nu _k \varepsilon _{ijk}= \sum _{i<j} \sum _{k=1}^3 (\xi _1^{ij}-\xi _1^{ji}) \nu _k \varepsilon _{ijk} \\&=\nu _1 (\xi _1^{23}-\xi _1^{32}) - \nu _2 (\xi _1^{13}-\xi _1^{31}) + \nu _3 (\xi _1^{12}-\xi _1^{21}). \end{aligned}$$

Moreover, from (3.1) and (3.9) we also deduce that

$$\begin{aligned} {\text {tr}}({\text {cof}}L)= (L\tau _1 \times L\tau _2) \cdot \nu = (L\tau _1 \wedge L\tau _2) \cdot \xi _0=\kappa _1\kappa _2. \end{aligned}$$

Using (3.1) again and [39, Prop. 3.21], which applies since \(\det (\xi _1)=0\), we have

$$\begin{aligned} \nu \cdot {\text {cof}}(\xi _1)\nu =\nu \cdot {\text {cof}}(\tau _1 \otimes L\tau _2 -\tau _2 \otimes L\tau _1)\nu = \det (\tau _1 \otimes L\tau _2 -\tau _2 \otimes L\tau _1+ \nu \otimes \nu )=:D. \end{aligned}$$

We can represent the matrix \(\tau _1 \otimes L\tau _2 -\tau _2 \otimes L\tau _1+ \nu \otimes \nu \) with respect to the basis \(\{\tau _1, \tau _2,\nu \}\), obtaining

$$\begin{aligned} D&= \det \begin{pmatrix} L\tau _2 \cdot \tau _1 &{} L\tau _2 \cdot \tau _2&{} 0\\ -L\tau _1 \cdot \tau _1 &{} -L\tau _1 \cdot \tau _2&{} 0\\ 0&{}0&{}1 \end{pmatrix} = \det \begin{pmatrix} L\tau _1 \cdot \tau _1 &{} L\tau _1 \cdot \tau _2\\ L\tau _2 \cdot \tau _1 &{} L\tau _2 \cdot \tau _2 \end{pmatrix}\\&=\kappa _1 \kappa _2 \det \begin{pmatrix} \tau _1 \cdot \tau _1 &{} \tau _1 \cdot \tau _2\\ \tau _2 \cdot \tau _1 &{} \tau _2 \cdot \tau _2 \end{pmatrix}=\kappa _1\kappa _2={\text {tr}}{\text {cof}}L, \end{aligned}$$

which concludes the proof. \(\square \)
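As a consistency check, for the unit sphere (\(\mathrm d\nu _p=\mathrm {id}\) on \(T_pM\)) Lemma 3.1 gives \(\xi _1=\tau _1\otimes \tau _2-\tau _2\otimes \tau _1\), and (3.8) yields \(\langle \Psi _\nu ,\xi _1\rangle =2\,\nu \cdot (\tau _1\times \tau _2)=2=H\), while a direct computation gives \({\text {cof}}\,\xi _1=\nu \otimes \nu \), so that \(\nu \cdot ({\text {cof}}\,\xi _1)\nu =1=K\).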

The next proposition provides the expression of the Canham–Helfrich functional defined on manifolds, seen as regular Gauss graphs. In turn, this suggests how to define the Canham–Helfrich functional for general elements in \({\text {curv}}_2(\Omega )\).

Proposition 3.5

Fix \(y \in \mathbb {S}^{2}\) and let

$$\begin{aligned} \mathcal {X}_y:=\bigg \{ \zeta \in {\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y): \sum _{k = 1}^3 \zeta ^{k i} y_{k}= \sum _{k=1}^3 \zeta ^{i k} y_{k}= \sum _{k=1}^3 \zeta ^{k k}= 0 \;\; \text {for all} \,\,i=1,2,3 \bigg \}.\nonumber \\ \end{aligned}$$
(3.10)

Let \(f_y:\mathcal {X}_y \rightarrow \mathbb {R}^{}\) be defined by (recall (3.7))

$$\begin{aligned} f_y(\zeta ):=\alpha _H \langle \Psi _y, \zeta \rangle ^2-2\alpha _H H_0 \langle \Psi _y, \zeta \rangle +\alpha _H H_0^2-\alpha _K y \cdot ({\text {cof}}\zeta ) y \end{aligned}$$
(3.11)

Then, defining \(\eta \) as in (2.5), we have

$$\begin{aligned} {\mathcal {E}}(M)=\int _{\Phi (M)^*}f_y\left( \frac{\eta _1(x,y)}{|\eta _0(x,y)|} \right) |\eta _0(x,y)|\, \mathrm d\mathcal {H}^2(x,y). \end{aligned}$$

Proof

First observe that, by Lemma 3.3 and since by (3.1) the trace of \(\xi _1\) is zero, \(\eta _1(x,y)\) belongs to \(\mathcal {X}_y\) for almost every \((x,y) \in \Phi (M)^*\). Moreover, by (1.1), Lemma 3.4, and the area formula, we have

$$\begin{aligned} {\mathcal {E}}(M)&= \int _M \left( \alpha _H ({\text {tr}}L_p-H_0)^2-\alpha _K{\text {tr}}({\text {cof}}L_p)\right) \mathrm d\mathcal {H}^2(p)\\&=\int _M \Big (\alpha _H\big ( \langle \Psi _{\nu (p)}, \xi _1(p,\nu (p)) \rangle -H_0\big )^2-\alpha _K \nu (p) \cdot ({\text {cof}}\xi _1(p,\nu (p))) \nu (p)\Big )\\ {}&\quad \times \mathrm d\mathcal {H}^2(p)\\&=\int _{\Phi (M)^*} \!\! \bigg (\alpha _H\bigg ( \bigg \langle \Psi _{y}, \frac{\eta _1(x,y)}{|\eta _0(x,y)|} \bigg \rangle -H_0\bigg )^2\\ {}&\quad -\alpha _K y \cdot \bigg ({\text {cof}}\frac{\eta _1(x,y)}{|\eta _0(x,y)|}\bigg ) y\bigg ) |\eta _0(x,y)|\,\mathrm d\mathcal {H}^2(x,y), \end{aligned}$$

where we have used that \(|\xi |=1/|\eta _0|=|\det D\Phi |\). \(\square \)

We are now ready to define the functional \({\mathcal {E}}\) on a generalized Gauss graph.

Definition 3.6

The Canham–Helfrich functional defined on generalized Gauss graphs is the functional \({\mathcal {E}}:{\text {curv}}_2(\Omega ) \rightarrow [-\infty ,+\infty ]\) given by

$$\begin{aligned} {\mathcal {E}}(\Sigma ):=\int _{G^*} f_y\bigg (\frac{\eta _1(x,y)}{|\eta _0(x,y)|} \bigg )|\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y), \end{aligned}$$
(3.12)

for every \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\), whenever the integral exists.
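As an illustration, consider the round sphere \(S_r\subseteq \mathbb {R}^{3}\) of radius r with outer normal, for which \(H=2/r\) and \(K=1/r^2\). Both (1.1) and (3.12), the latter applied to the associated regular Gauss graph, give

$$\begin{aligned} {\mathcal {E}}(S_r)=4\pi r^2\Big (\alpha _H\Big (\frac{2}{r}-H_0\Big )^2-\frac{\alpha _K}{r^2}\Big )=4\pi \alpha _H(2-rH_0)^2-4\pi \alpha _K. \end{aligned}$$

In particular, for \(H_0=0\) every round sphere has energy \(4\pi (4\alpha _H-\alpha _K)\), which is positive precisely when \(4\alpha _H>\alpha _K\), that is, under the first inequality in (1.5).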

4 Existence and Regularity of Minimizers

4.1 Technical Lemmas

For every \(\zeta \in {\bigwedge }_1(\mathbb {R}^{3}_x)\wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\) and for every \(y\in {\mathbb {S}}^2\), let us define

$$\begin{aligned} g_y(\zeta ):=\alpha _H \langle \Psi _y, \zeta \rangle ^2-\alpha _K y \cdot ({\text {cof}}\zeta ) y \quad \text {and}\quad h_y(\zeta ):=2 \alpha _H H_0\langle \Psi _y, \zeta \rangle \end{aligned}$$
(4.1)

and let us identify \(\zeta \) with a vector \(u=u[\zeta ]\in \mathbb {R}^{9}\) by

$$\begin{aligned} u=u[\zeta ]:=(\zeta ^{11}, \zeta ^{12}, \zeta ^{13}, \zeta ^{21}, \zeta ^{22}, \zeta ^{23}, \zeta ^{31}, \zeta ^{32}, \zeta ^{33}). \end{aligned}$$

With this notation, we have (compare with the expression in (3.8))

$$\begin{aligned} \langle \Psi _y,\zeta \rangle =(0,y_3,-y_2,-y_3,0,y_1,y_2,-y_1,0)\cdot u= y_1(u_6-u_8)-y_2(u_3-u_7)+y_3(u_2-u_4). \end{aligned}$$

Lemma 4.1

Let (1.5) hold. The function \(g_y:{\bigwedge }_1(\mathbb {R}^{3}_x)\wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\rightarrow \mathbb {R}^{}\) defined in (4.1) is represented by a quadratic form \(u \mapsto u \cdot A_y u\) on \(\mathbb {R}^{9}\), where

$$\begin{aligned} A_y= \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0 &{} -\frac{\alpha _K}{2}y_3^2 &{} \frac{\alpha _K}{2} y_2 y_3 &{} 0 &{} \frac{\alpha _K}{2}y_2y_3 &{} -\frac{\alpha _K}{2}y_2^2 \\ 0 &{} \alpha _Hy_3^2 &{} -\alpha _Hy_2y_3 &{}-\gamma y_3^2 &{} 0 &{}\gamma y_1y_3&{} \gamma y_2y_3 &{} -\alpha _Hy_1y_3 &{} \frac{\alpha _K}{2}y_1y_2\\ 0 &{} -\alpha _Hy_2y_3 &{} \alpha _Hy_2^2 &{}\gamma y_2y_3 &{} \frac{\alpha _K}{2} y_1 y_3 &{} -\alpha _H y_1 y_2 &{} -\gamma y_2^2 &{}\gamma y_1 y_2 &{} 0\\ 0 &{}-\gamma y_3^2 &{}\gamma y_2 y_3 &{} \alpha _H y_3^2 &{} 0 &{} -\alpha _H y_1 y_3 &{} -\alpha _H y_2 y_3 &{}\gamma y_1y_3 &{} \frac{\alpha _K}{2} y_1 y_2 \\ -\frac{\alpha _K}{2} y_3^2 &{} 0 &{} \frac{\alpha _K}{2} y_1 y_3 &{} 0 &{} 0 &{} 0 &{} \frac{\alpha _K}{2}y_1y_3 &{} 0 &{} -\frac{\alpha _K}{2} y_1^2\\ \frac{\alpha _K}{2} y_2 y_3 &{}\gamma y_1 y_3 &{} -\alpha _H y_1 y_2 &{} -\alpha _Hy_1 y_3 &{} 0 &{} \alpha _H y_1^2 &{}\gamma y_1y_2 &{}-\gamma y_1^2 &{} 0\\ 0 &{}\gamma y_2y_3 &{}-\gamma y_2^2 &{} -\alpha _H y_2 y_3 &{} \frac{\alpha _K}{2} y_1 y_3 &{}\gamma y_1 y_2 &{} \alpha _H y_2^2 &{} -\alpha _H y_1 y_2 &{} 0\\ \frac{\alpha _K}{2} y_2 y_3 &{} -\alpha _H y_1 y_3 &{}\gamma y_1 y_2 &{}\gamma y_1y_3 &{} 0 &{}-\gamma y_1^2 &{} -\alpha _H y_1y_2 &{} \alpha _H y_1^2 &{} 0\\ -\frac{\alpha _K}{2} y_2^2 &{} \frac{\alpha _K}{2} y_1 y_2 &{} 0 &{} \frac{\alpha _K}{2} y_1y_2 &{} -\frac{\alpha _K}{2} y_1^2 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix} \end{aligned}$$

for \(\gamma :=\alpha _H - \alpha _K/2\). Let

$$\begin{aligned} \begin{aligned} v(-\alpha _K/2):=&\; \begin{pmatrix} y_1^2-1\\ y_1 y_2\\ y_1 y_3\\ y_1 y_2\\ y_2^2-1\\ y_2 y_3\\ y_1 y_3\\ y_2 y_3\\ y_3^2 -1 \end{pmatrix},\qquad v(2\alpha _H -\alpha _K /2):=\begin{pmatrix} 0\\ -y_3\\ y_2\\ y_3\\ 0\\ -y_1\\ -y_2\\ y_1\\ 0 \end{pmatrix}, \\ v_1(\alpha _K/2):=&\; \begin{pmatrix} 2 y_1y_2y_3\\ y_3 y_2^2 -y_3 y_1^2\\ y_2y_3^2-y_2\\ y_3 y_2^2 -y_3 y_1^2\\ -2y_1 y_2y_3\\ y_1-y_1 y_3^2\\ y_2y_3^2-y_2\\ y_1-y_1 y_3^2\\ 0 \end{pmatrix}, \qquad v_2(\alpha _K/2):=\begin{pmatrix} y_1y_2^2-y_1 y_3^2 \\ y_2^3-y_2 \\ y_3y_1^2+ y_3y_2^2\\ y_2^3-y_2\\ y_1-y_1y_2^2\\ 0\\ y_3y_1^2+ y_3y_2^2\\ 0\\ -y_1^3-y_1y_2^2 \end{pmatrix}. \end{aligned} \end{aligned}$$

Then these vectors are eigenvectors of the matrix \(A_y\) with corresponding eigenvalues \(-\alpha _K /2 \), \(2\alpha _H -\alpha _K /2\), and \(\alpha _K /2\) with multiplicities 1, 1, and 2, respectively. The six vectors

$$\begin{aligned} \begin{aligned} v_1(0):=&\, \begin{pmatrix} y\\ 0\\ 0 \end{pmatrix},\;\; v_2(0):=\begin{pmatrix} 0\\ y\\ 0 \end{pmatrix},\;\; v_3(0):=\begin{pmatrix} 0\\ 0\\ y \end{pmatrix}, \\ v_4(0):=&\, \begin{pmatrix} y_1e_1\\ y_2e_1\\ y_3e_1 \end{pmatrix}, \;\; v_5(0):=\begin{pmatrix} y_1e_2\\ y_2e_2\\ y_3e_2 \end{pmatrix}, \;\; v_6(0):=\begin{pmatrix} y_1e_3\\ y_2e_3\\ y_3e_3 \end{pmatrix} \end{aligned} \end{aligned}$$

generate the 5-dimensional eigenspace associated with the eigenvalue 0.

The function \(h_y:{\bigwedge }_1(\mathbb {R}^{3}_x)\wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\rightarrow \mathbb {R}^{}\) defined in (4.1) is represented by a linear map \(u \mapsto u \cdot v_y\) where \( v_y:=-2\alpha _H H_0 v(2\alpha _H-\alpha _K/2). \)

Moreover, we have that

$$\begin{aligned} \begin{aligned}&\, {\text {span}}\{v_1(0),v_2(0),v_3(0),v_4(0),v_5(0),v_6(0),v(-\alpha _K/2)\} \\ =&\, {\text {span}}\left\{ \begin{pmatrix}y\\ 0\\ 0\end{pmatrix}, \begin{pmatrix}0\\ y\\ 0\end{pmatrix}, \begin{pmatrix}0\\ 0\\ y\end{pmatrix}, \begin{pmatrix}y_1e_1\\ y_2e_1\\ y_3e_1\end{pmatrix}, \begin{pmatrix}y_1e_2\\ y_2e_2\\ y_3e_2\end{pmatrix}, \begin{pmatrix}y_1e_3\\ y_2e_3\\ y_3e_3\end{pmatrix}, \begin{pmatrix}e_1\\ e_2\\ e_3\end{pmatrix} \right\} \end{aligned} \end{aligned}$$
(4.2)

and by the isomorphism \(\zeta \mapsto u[\zeta ]\) the space \(\mathcal {X}_y\) introduced in (3.10) transforms to

$$\begin{aligned} \widetilde{\mathcal {X}}_y:=\left\{ u \in \mathbb {R}^{9}: u \perp {\text {span}}\{v_1(0),v_2(0),v_3(0),v_4(0),v_5(0),v_6(0),v(-\alpha _K/2)\} \right\} .\nonumber \\ \end{aligned}$$
(4.3)

Proof

The claims follow by straightforward calculations. To prove (4.2), we observe that,

$$\begin{aligned} v(-\alpha _K/2)=&\, y_1 \begin{pmatrix}y\\ 0\\ 0\end{pmatrix}+y_2\begin{pmatrix}0\\ y\\ 0\end{pmatrix}+y_3\begin{pmatrix}0\\ 0\\ y\end{pmatrix}-\begin{pmatrix}e_1\\ e_2\\ e_3\end{pmatrix} \end{aligned}$$

and this concludes the proof. \(\square \)
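The spectral claims of Lemma 4.1 can also be verified with a quick symbolic computation. The following sketch, assuming a Python environment with the sympy library (it is only an illustration of the statement, not part of the argument), reconstructs \(A_y\) as one half of the Hessian of \(g_y\), which coincides with the symmetric matrix displayed above, and checks the eigenvalue relations for a sample unit vector y.

```python
import sympy as sp

# Bending constants (symbolic) and a sample unit vector y in S^2.
aH, aK = sp.symbols('alpha_H alpha_K', positive=True)
y1, y2, y3 = sp.Rational(2, 3), sp.Rational(2, 3), sp.Rational(1, 3)
y = sp.Matrix([y1, y2, y3])

# u = (zeta^{11}, zeta^{12}, ..., zeta^{33}) and zeta as a 3x3 matrix (row-wise).
u = sp.Matrix(sp.symbols('u1:10', real=True))
Z = sp.Matrix(3, 3, list(u))

# <Psi_y, zeta> and g_y(zeta) = alpha_H <Psi_y, zeta>^2 - alpha_K y.(cof zeta) y.
Psi = y1*(Z[1, 2] - Z[2, 1]) - y2*(Z[0, 2] - Z[2, 0]) + y3*(Z[0, 1] - Z[1, 0])
g = aH*Psi**2 - aK*(y.T*Z.adjugate()*y)[0, 0]   # y^T adj(zeta) y equals y.(cof zeta) y

# Symmetric matrix A_y of the quadratic form g_y, obtained as half of the Hessian.
A = sp.Matrix(9, 9, lambda i, j: sp.Rational(1, 2)*sp.diff(g, u[i], u[j]))

# Eigenvectors v(-alpha_K/2) and v(2 alpha_H - alpha_K/2) from Lemma 4.1.
v_neg = sp.Matrix([y1**2-1, y1*y2, y1*y3, y1*y2, y2**2-1, y2*y3, y1*y3, y2*y3, y3**2-1])
v_pos = sp.Matrix([0, -y3, y2, y3, 0, -y1, -y2, y1, 0])
assert (A*v_neg + (aK/2)*v_neg).expand() == sp.zeros(9, 1)
assert (A*v_pos - (2*aH - aK/2)*v_pos).expand() == sp.zeros(9, 1)

# Full spectrum for a sample choice of the constants (aH = 1, aK = 1/2):
# per Lemma 4.1 this should return {0: 5, -1/4: 1, 1/4: 2, 7/4: 1}.
print(A.subs({aH: 1, aK: sp.Rational(1, 2)}).eigenvals())
```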

Lemma 4.1 shows that the quadratic form \(A_y\) (and therefore the function \(g_y\)) has both a negative eigenvalue and the eigenvalue zero, which prevents positive definiteness. Nonetheless, since the space \({\mathcal {X}}_y\) defined in (3.10) transforms into \(\widetilde{\mathcal {X}}_y\) defined in (4.3), which is orthogonal to the directions along which positive definiteness is lost, we are able to prove, in the next proposition, that the integrand \(f_y\) defined in (3.11) can be modified into the function \({\tilde{f}}\) defined in (4.4) below, which is a standard integrand in the sense of Definition 2.2.

Proposition 4.2

Let (1.5) hold. For \(y \in \mathbb {S}^{2}\), define the map \(F_y:\mathbb {R}^{9}\rightarrow \mathbb {R}^{}\)

$$\begin{aligned} F_y(u)&:=g_y(u)-h_y(u)+\alpha _H H_0^2+ \frac{\alpha _K}{2}|\pi _0 u|^2 +\alpha _K |\pi _{-\alpha _K/2}u|^2\\&=u \cdot A_y u -u \cdot v_y+\alpha _H H_0^2+ \frac{\alpha _K}{2}|\pi _0 u|^2 +\alpha _K |\pi _{-\alpha _K/2}u|^2, \end{aligned}$$

where \(g_y,h_y\) are defined as in (4.1), and \(\pi _0,\pi _{-\alpha _K/2}:\mathbb {R}^{9}\rightarrow \mathbb {R}^{9}\) are the orthogonal projections on \({\text {span}}\{v_1(0),\ldots ,v_6(0)\}\) and \({\text {span}}\{v(-\alpha _K/2)\}\), respectively. Moreover, let

$$\begin{aligned} {\tilde{f}}:\Omega \times \mathbb {S}^{2} \times \left( {\bigwedge }_1(\mathbb {R}^{3}_x) \wedge {\bigwedge }_1(\mathbb {R}^{3}_y)\right) \rightarrow \mathbb {R}^{}, \qquad {\tilde{f}}(x,y,\zeta ):=F_y(u[\zeta ]). \end{aligned}$$
(4.4)

Then \({\tilde{f}}\) is continuous, convex in the third variable, and there exist two constants \(c_1>0\) and \(c_2\geqslant 0\) such that

$$\begin{aligned} {\tilde{f}}(x,y,\zeta ) \geqslant c_1|\zeta |^2-c_2. \end{aligned}$$
(4.5)

In particular, \({\tilde{f}}\) has uniform superlinear growth in the third variable.

Proof

Let \(\pi _{2\alpha _H-\alpha _K/2},\pi _{\alpha _K/2}:\mathbb {R}^{9}\rightarrow \mathbb {R}^{9}\) be the orthogonal projections on \({\text {span}}\{v(2\alpha _H-\alpha _K/2)\}\) and \({\text {span}}\{v_1(\alpha _K/2),v_2(\alpha _K/2)\}\), respectively. For every \(u \in \mathbb {R}^{9}\), by Lemma 4.1, we have

$$\begin{aligned} F_y(u)&=-\frac{\alpha _K}{2}|\pi _{-\alpha _K/2}u|^2+\frac{\alpha _K}{2}|\pi _{\alpha _K/2}u|^2+\Big (2\alpha _H-\frac{\alpha _K}{2}\Big ) |\pi _{2\alpha _H-\alpha _K/2}u|^2\nonumber \\&\quad +2\alpha _H H_0\, u\cdot v(2\alpha _H-\alpha _K/2)+ \frac{\alpha _K}{2}|\pi _0 u|^2 +\alpha _K |\pi _{-\alpha _K/2}u|^2+\alpha _H H_0^2\nonumber \\&=\frac{\alpha _K}{2}|\pi _{-\alpha _K/2}u|^2+\frac{\alpha _K}{2}|\pi _{\alpha _K/2}u|^2+ \Big (2\alpha _H-\frac{\alpha _K}{2}\Big ) |\pi _{2\alpha _H-\alpha _K/2}u|^2\nonumber \\&\quad + \frac{\alpha _K}{2}|\pi _0 u|^2 + 2\alpha _H H_0\, u\cdot v(2\alpha _H-\alpha _K/2)+\alpha _H H_0^2. \end{aligned}$$
(4.6)

By (1.5), we deduce that \(F_y\) is convex (and therefore continuous) in u, so that \({\tilde{f}}\) is convex (and therefore continuous) in the third variable. Moreover, by reconstructing the norm \(|u[\zeta ]|^2=|\zeta |^2\) from the projections \(\pi _\bullet \) and by recalling that they are 1-Lipschitz functions, we have that

$$\begin{aligned} {\tilde{f}}(x,y,\zeta )=F_y(u[\zeta ]) \geqslant \min \left\{ \frac{\alpha _K}{2},2\alpha _H-\frac{\alpha _K}{2}\right\} |\zeta |^2-2\sqrt{2}\alpha _H |H_0| |\zeta |+\alpha _H H_0^2 \end{aligned}$$

(the factor \(\sqrt{2}=|v(2\alpha _H-\alpha _K/2)|\) comes from Schwarz inequality), from which we deduce the boundedness from below of \({\tilde{f}}\) and (4.5), with (a possible choice of)

$$\begin{aligned} c_1=\frac{1}{4}\min \{\alpha _K,4\alpha _H-\alpha _K\} \quad \text {and} \quad c_2=\alpha _HH_0^2\bigg (\frac{8\alpha _H}{\min \{\alpha _K,4\alpha _H-\alpha _K\}}-1\bigg ). \end{aligned}$$
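Indeed, setting \(m:=\min \{\alpha _K/2,\,2\alpha _H-\alpha _K/2\}\), Young's inequality gives \(2\sqrt{2}\alpha _H |H_0||\zeta |\leqslant \frac{m}{2}|\zeta |^2+\frac{4\alpha _H^2H_0^2}{m}\), so that the right-hand side of the estimate above is bounded from below by \(\frac{m}{2}|\zeta |^2-\big (\frac{4\alpha _H^2H_0^2}{m}-\alpha _HH_0^2\big )=c_1|\zeta |^2-c_2\).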

Finally, the continuity of \({\tilde{f}}\) with respect to y follows from the structure of the matrix \(A_y\) and of the vector \(v_y\) in Lemma 4.1. \(\square \)

Proposition 4.3

Let \({\tilde{f}}\) be the function defined in (4.4). Then, for every \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\), it holds that

$$\begin{aligned} {\mathcal {E}}(\Sigma )= \int _{G^*}{\tilde{f}}\bigg (x,y,\frac{\eta _1(x,y)}{|\eta _0(x,y)|} \bigg )|\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y). \end{aligned}$$
(4.7)

Proof

Let \((x,y)\in G^*\). By Lemma 3.3 and Lemma 4.1 we have that \(u[\xi _1(x,y)]\in \widetilde{\mathcal {X}}_y\), from which we obtain that \(\pi _0u[\xi _1(x,y)]= \pi _{-\alpha _K/2}u[\xi _1(x,y)]=0\). Taking (3.11), (4.1), and (4.4) into account, this implies that

$$\begin{aligned} {\tilde{f}}(x,y,\xi _1(x,y))=F_y(u[\xi _1(x,y)])=f_y(\xi _1(x,y)), \end{aligned}$$

which, by (3.12), implies (4.7). \(\square \)

Lemma 4.4

Let \(A\Subset \Omega \) and let \(\Sigma _j=\llbracket G_j,\eta _j,\beta _j\rrbracket \in {\text {curv}}_2(\Omega )\) be such that \({\text {spt}}\Sigma _j\subseteq A\times \mathbb {S}^{2}\) for every \(j\in \mathbb {N}\) and \(\Sigma _j\rightharpoonup \Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\) as \(j\rightarrow \infty \). Then \({\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2}\) and

$$\begin{aligned} \lim _{j\rightarrow \infty }\int _{G_j} |(\eta _j)_0(x,y)|\beta _j(x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y)=\int _G |\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y).\qquad \end{aligned}$$
(4.8)

In particular, if \(M_j,M\) are two-dimensional oriented manifolds of class \({\mathcal {C}}^2\) contained in A, if \(G_j,G\) are the associated Gauss graphs by (2.3), if \(\Sigma _j=\Sigma _{G_j}=\llbracket G_j,\eta _j,1\rrbracket \), \(\Sigma =\Sigma _G=\llbracket G,\eta ,1\rrbracket \) are the associated currents, and if \(\Sigma _j\rightharpoonup \Sigma \), then \({\mathcal {H}}^2(M_j)\rightarrow {\mathcal {H}}^2(M)\).

Proof

We first observe that the condition on the supports is closed, so that \({\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2}\). Let \(g\in {\mathcal {C}}_c(\Omega \times \mathbb {R}^{3}_y)\) be such that \(g=1\) on \(A\times \mathbb {S}^{2}\). Then the convergence

$$\begin{aligned} \int _{G_j} |(\eta _j)_0(x,y)|\beta _j(x,y)\,\mathrm d{\mathcal {H}}^2(x,y)= & {} \Sigma _j(g\varphi ^*)\rightarrow \Sigma (g\varphi ^*) \\ {}= & {} \int _G |\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y) \end{aligned}$$

follows immediately by (2.6). The proof of the last statement is obtained by combining (3.4) and (4.8):

$$\begin{aligned} \lim _{j\rightarrow \infty } {\mathcal {H}}^2(M_j)= & {} \lim _{j\rightarrow \infty }\int _{G_j} |(\eta _j)_0(x,y)|\,\mathrm d{\mathcal {H}}^{2}(x,y)\\ {}= & {} \int _G |\eta _0(x,y)|\,\mathrm d{\mathcal {H}}^{2}(x,y)={\mathcal {H}}^2(M). \end{aligned}$$

This concludes the proof. \(\square \)

4.2 Minimization Problems

In this section we study various minimization problems for the energy \({\mathcal {E}}\) in (3.12). In the first two (see Theorems 4.5 and 4.6 below), reasonable sufficient conditions for unconstrained minimization are provided. In the third one (see Theorem 4.9 below), we tackle constrained minimization in terms of prescribed enclosed volume and surface area for a closed membrane.

For \(A\Subset \Omega \) and \(c>0\), we define the class

$$\begin{aligned} {\mathcal {X}}_{A,c}^{(0,1)}(\Omega ):=\big \{ \Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^{(0,1)}(\Omega ): {\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2},\, \mathbb {M}^{}(\partial \Sigma )+\mathbb {M}^{}(\Sigma )\leqslant c\big \}\nonumber \\ \end{aligned}$$
(4.9)

of generalized Gauss graphs with compact support and equi-bounded masses. Our first existence result is the following.

Theorem 4.5

Let (1.5) hold. The minimization problem

$$\begin{aligned} \min \Big \{{\mathcal {E}}(\Sigma ): \Sigma \in {\mathcal {X}}_{A,c}^{(0,1)}(\Omega )\Big \} \end{aligned}$$
(4.10)

has a solution.

Proof

Let \(c_2\) be the constant in (4.5) and, for every \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^{(0,1)}(\Omega )\), define the functional

$$\begin{aligned}\begin{aligned} {\mathcal {E}}^{(0,1)}(\Sigma ):=&\int _{G^*} \left( {\tilde{f}} \left( x,y,\frac{\eta _1(x,y)}{|\eta _0(x,y)|} \right) +c_2 \right) |\eta _0(x,y)| \beta (x,y)\, \mathrm d\mathcal {H}^2(x,y) \\ =&\, {\mathcal {E}}(\Sigma )+c_2\int _{G^*} |\eta _0(x,y)| \beta (x,y)\, \mathrm d{\mathcal {H}}^2(x,y), \end{aligned}\end{aligned}$$

where the last equality follows from Proposition 4.3. Inequality (4.5) allows us to apply Theorem 2.4 and obtain that \({\mathcal {E}}^{(0,1)}\) is lower semicontinuous in \({\text {curv}}_2^{(0,1)}(\Omega )\). By Lemma 4.4, it follows that also the functional \({\mathcal {E}}\) is lower semicontinuous in \({\text {curv}}_2^{(0,1)}(\Omega )\). By Theorems 2.1 and 2.4, any minimizing sequence \(\Sigma _j=\llbracket G_j,\eta _j,\beta _j\rrbracket \in {\mathcal {X}}_{A,c}^{(0,1)}(\Omega )\) for \({\mathcal {E}}\) admits a subsequence converging to an element of \({\mathcal {X}}_{A,c}^{(0,1)}(\Omega )\). The claim then follows from the direct method of the Calculus of Variations. \(\square \)

Inequality (4.5) and Lemma 4.4 suggest that it is not necessary to bound the full integral \(\int _{G^*} \frac{\beta }{|\eta _0|}\, \mathrm d\mathcal {H}^2\) (which coincides with the integral appearing in Theorem 2.5) for \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^*(\Omega )\) in order to apply that theorem, so that we can consider the class

$$\begin{aligned} \begin{aligned} {\mathcal {X}}_{A,c}^{*}(\Omega ):=\bigg \{&\,\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^{*}(\Omega ): {\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2},\\&\, \mathbb {M}^{}(\partial \Sigma )+ \int _{ G^*} \bigg (|\eta _0(x,y)|+\frac{|\eta _2(x,y)|^2}{|\eta _0(x,y)|}\bigg )\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y)\leqslant c \bigg \}. \end{aligned}\nonumber \\ \end{aligned}$$
(4.11)

The bound on \(\int _{G^*}\frac{|\eta _1(x,y)|^2}{|\eta _0(x,y)|^2}|\eta _0(x,y)| \beta (x,y)\, \mathrm d\mathcal {H}^2(x,y)\), together with the one on the second term in (4.11), implies the boundedness of the mass of \(\Sigma \). Moreover, these bounds are needed in order to have closedness in the class \({\text {curv}}_2^*(\Omega )\), which in general is not closed, unlike \({\text {curv}}_2(\Omega )\). In particular, for the regular Gauss graph G of a manifold M, they imply an \(L^2\)-bound on the curvatures of M, since

$$\begin{aligned} \int _{G^*} \left( |\eta _0|+\frac{|\eta _1|^2}{|\eta _0|}+\frac{|\eta _2|^2}{|\eta _0|}\right) \mathrm d\mathcal {H}^2= & {} \int _M |\xi |^2 \mathrm d\mathcal {H}^2\\ {}= & {} \int _M \left( H(x)^2+ (1-K(x))^2\right) \, \mathrm d\mathcal {H}^2(x), \end{aligned}$$

see [1, Proposition 1.1 and Example 1.2] for a proof. We now present our second existence result.

Theorem 4.6

Let (1.5) hold. The minimization problem

$$\begin{aligned} \min \Big \{{\mathcal {E}}(\Sigma ): \Sigma \in {\mathcal {X}}_{A,c}^{*}(\Omega )\Big \} \end{aligned}$$
(4.12)

has a solution.

Proof

Let us consider a minimizing sequence \(\Sigma _j=\llbracket G_j,\eta _j,\beta _j\rrbracket \in {\mathcal {X}}_{A,c}^{*}(\Omega )\) for the functional \({\mathcal {E}}\). By Proposition 4.3 and (4.5), we obtain

$$\begin{aligned}\begin{aligned} {\mathcal {E}}(\Sigma _j)=&\int _{G_j^*}{\tilde{f}} \left( x,y,\frac{(\eta _j)_1(x,y)}{|(\eta _j)_0(x,y)|} \right) |(\eta _j)_0(x,y)|\beta _j(x,y)\, \mathrm d\mathcal {H}^2(x,y) \\ \geqslant&\, c_1\int _{G_j^*}\frac{|(\eta _j)_1(x,y)|^2}{|(\eta _j)_0(x,y)|^2}|(\eta _j)_0(x,y)| \beta _j(x,y)\, \mathrm d\mathcal {H}^2(x,y) \\&\, -c_2\int _{G_j^*}|(\eta _j)_0(x,y)| \beta _j(x,y)\, \mathrm d\mathcal {H}^2(x,y). \end{aligned}\end{aligned}$$

Now, combining this estimate with (4.11), the minimizing sequence satisfies the hypotheses of Theorem 2.5, and therefore there exist a subsequence \(\{\Sigma _{j_k}\}_{k\in \mathbb {N}}\) and a special generalized Gauss graph \(\Sigma _\infty \in {\text {curv}}_2^{*}(\Omega )\) such that \(\Sigma _{j_k} \rightharpoonup \Sigma _\infty \) as \(k\rightarrow \infty \). The claim then follows from the direct method of the Calculus of Variations. \(\square \)

Remark 4.7

We called the minimization problems (4.10) and (4.12) unconstrained because the classes \({\mathcal {X}}_{A,c}^{(0,1)}(\Omega )\) in (4.9) and \({\mathcal {X}}_{A,c}^*(\Omega )\) in (4.11) do not contain geometric constraints, namely, there are no generalized Gauss graphs excluded from these classes based on their geometry. In particular, this allows us to consider the zero current \(\Sigma =0\) as a competitor for both minimization problems, and it turns out to be an absolute minimizer if \(H_0=0\). Indeed, in this case, (4.5) becomes \({\tilde{f}}(x,y,\zeta )\geqslant c_1|\zeta |^2\), so that \({\mathcal {E}}\geqslant 0\). Notice that also a generalized Gauss graph \(\Pi \) supported on a plane (\(H=K=0\)) has zero energy, showing that neither (4.10) nor (4.12) has a unique solution.

On the other hand, if \(H_0\ne 0\), observe that a sphere \(\Sigma \) (or a portion of it, compatibly with A) with mean curvature \(H=H_0\) makes the functional \({\mathcal {E}}\) negative. Indeed, since for spheres there holds \(K=H^2/4\), we have \({\mathcal {E}}(\Sigma )=-\alpha _K H_0^2 {\mathcal {H}}^2(\Sigma )/4<0={\mathcal {E}}(0)<\alpha _HH_0^2\,{\mathcal {H}}^2(\Pi )={\mathcal {E}}(\Pi )\).

Given \(\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\), we define

$$\begin{aligned} {\mathcal {A}}(\Sigma ):=\int _G |\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y). \end{aligned}$$
(4.13)

In light of Remark 3.2, if \(\Sigma \) is a regular Gauss graph with multiplicity, the quantity \({\mathcal {A}}(\Sigma )\) has the geometric interpretation of mass of \(p_1\Sigma \), see (3.3); in particular, if \(\beta \equiv 1\), then \({\mathcal {A}}(\Sigma )={\mathcal {H}}^2(M)\), the area of the manifold \(M:=p_1\Sigma \), see (3.4).

We also define the quantity

$$\begin{aligned} {\mathcal {V}}(\Sigma ):=\frac{1}{3}\int _G (x\cdot y)\, |\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y). \end{aligned}$$
(4.14)

If \(\Sigma \) is a closed (\(\partial \Sigma =0\)) regular Gauss graph with multiplicity \(\beta \equiv 1\), then, by a simple application of the Divergence Theorem, the quantity \({\mathcal {V}}(\Sigma )\) has the geometric interpretation of the volume enclosed by \(M:=p_1\Sigma \). Indeed, if \(M=\partial E\) for a bounded set \(E\), then by means of the area formula we get

$$\begin{aligned} \frac{1}{3}\int _G (x\cdot y)\, |\eta _0(x,y)|\,\mathrm d{\mathcal {H}}^2(x,y)= & {} \frac{1}{3}\int _M p\cdot \nu (p)\,\mathrm d{\mathcal {H}}^2(p)\\ {}= & {} \frac{1}{3}\int _E {\text {div}}(p)\, \mathrm dp ={\mathcal {H}}^3(E). \end{aligned}$$
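As a consistency check, for the ball \(B_R\) one has \(p\cdot \nu (p)=R\) for every \(p\in \partial B_R\), so the formula above returns

$$\begin{aligned} \frac{1}{3}\int _{\partial B_R} p\cdot \nu (p)\,\mathrm d{\mathcal {H}}^2(p)=\frac{R}{3}\,4\pi R^2=\frac{4}{3}\pi R^3={\mathcal {H}}^3(B_R), \end{aligned}$$

as expected.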

Lemma 4.8

Let \(A\Subset \Omega \) and let \(\Sigma _j=\llbracket G_j,\eta _j,\beta _j\rrbracket \in {\text {curv}}_2(\Omega )\) be such that \({\text {spt}}\Sigma _j\subseteq A\times \mathbb {S}^{2}\) and \(\partial \Sigma _j=0\) for every \(j\in \mathbb {N}\) and \(\Sigma _j\rightharpoonup \Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2(\Omega )\) as \(j\rightarrow \infty \). Then \({\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2}\), \(\partial \Sigma =0\), and

$$\begin{aligned} \begin{aligned}&\lim _{j\rightarrow \infty }\int _{G_j} (x\cdot y)\, |(\eta _j)_0(x,y)|\beta _j(x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y)\\&\quad =\int _G (x\cdot y)\, |\eta _0(x,y)|\beta (x,y)\,\mathrm d{\mathcal {H}}^{2}(x,y). \end{aligned} \end{aligned}$$
(4.15)

In particular, suppose that \(M_j=\partial E_j\) and \(M=\partial E\) for sets \(E_j,E\) of class \({\mathcal {C}}^2\) contained in A, that \(G_j,G\) are the associated Gauss graphs as in (2.3), and that \(\Sigma _j=\Sigma _{G_j}=\llbracket G_j,\eta _j,1\rrbracket \) and \(\Sigma =\Sigma _G=\llbracket G,\eta ,1\rrbracket \) are the associated currents. If \(\Sigma _j\rightharpoonup \Sigma \), then \({\mathcal {H}}^3(E_j)\rightarrow {\mathcal {H}}^3(E)\).

Proof

The proof is the same as that of Lemma 4.4. \(\square \)

Next we study constrained minimization problems, namely we prescribe the surface area and the enclosed volume. Given \(a,v>0\), we define the classes

$$\begin{aligned} \begin{aligned} {\mathcal {X}}_{A,c;a,v}^{(0,1)}(\Omega ):=\bigg \{&\,\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^{(0,1)}(\Omega ): {\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2}, \partial \Sigma =0,\\&\, \mathbb {M}^{}(\Sigma ) \leqslant c, {\mathcal {A}}(\Sigma )=a,{\mathcal {V}}(\Sigma )=v \bigg \}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} {\mathcal {X}}_{A,c;a,v}^{*}(\Omega ):=\bigg \{&\,\Sigma =\llbracket G,\eta ,\beta \rrbracket \in {\text {curv}}_2^{*}(\Omega ): {\text {spt}}\Sigma \subseteq A\times \mathbb {S}^{2}, \partial \Sigma =0,\\&\, \int _{ G^*} \frac{|\eta _2(x,y)|^2}{|\eta _0(x,y)|}\beta (x,y)\,\mathrm d{\mathcal {H}}^2(x,y)\leqslant c, {\mathcal {A}}(\Sigma )=a,{\mathcal {V}}(\Sigma )=v \bigg \}. \end{aligned} \end{aligned}$$

In order for two-dimensional, closed, oriented manifolds of class \({\mathcal {C}}^2\) to belong to these classes, we require \(a\) and \(v\) to satisfy the isoperimetric inequality

$$\begin{aligned} 36\pi \,v^2\leqslant a^3. \end{aligned}$$
(4.16)
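Indeed, (4.16) is the sharp isoperimetric inequality in \(\mathbb {R}^3\), with equality attained by round spheres: for \(a=4\pi R^2\) and \(v=\frac{4}{3}\pi R^3\),

$$\begin{aligned} 36\pi \,v^2=36\pi \,\frac{16\pi ^2R^6}{9}=64\pi ^3R^6=\big (4\pi R^2\big )^3=a^3. \end{aligned}$$

In particular, pairs \((a,v)\) violating (4.16) cannot be realized by any closed surface of class \({\mathcal {C}}^2\).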

Theorem 4.9

Let (1.5) hold and let \(a,v>0\) satisfy (4.16). The minimization problems

$$\begin{aligned} \min \Big \{{\mathcal {E}}(\Sigma ): \Sigma \in {\mathcal {X}}_{A,c;a,v}^{(0,1)}(\Omega )\Big \}, \quad \min \Big \{{\mathcal {E}}(\Sigma ): \Sigma \in {\mathcal {X}}_{A,c;a,v}^{*}(\Omega )\Big \} \end{aligned}$$
(4.17)

have a solution.

Proof

The proof is the same as that of Theorems 4.5 and 4.6, upon noting that Lemmas 4.4 and 4.8 provide the continuity of the area and enclosed-volume constraints along weakly converging sequences. \(\square \)

We conclude this subsection with two remarks on the necessity of assumption (1.5).

Remark 4.10

[\(4\alpha _H \leqslant \alpha _K\)] In this case, there exists a constant \(r\geqslant 0\) such that \(\alpha _K=4\alpha _H+r\). For the Gauss graph G of a smooth surface M, we have

$$\begin{aligned} \mathcal {H}^2(G)= \int _M |\xi (p,\nu (p))| \,\mathrm d\mathcal {H}^{2}(p)=\int _M \sqrt{4H(p)^2+ (1-K(p))^2}\, \mathrm d\mathcal {H}^2(p), \end{aligned}$$

where \(\xi \) is defined in (2.4). We consider \( M_j=\partial B_{1/j}\), where \(B_{1/j}\) is the ball of radius 1/j centered at the origin, and we let \(\Sigma _{j}:=\Sigma _{G_j}=\llbracket G_j,\eta _j,1\rrbracket \). Since the principal curvatures of \(M_j\) are both equal to j, we get from the above formula

$$\begin{aligned} \mathbb {M}(\Sigma _j)=\mathcal {H}^2(G_j) \leqslant \frac{4\pi }{j^2}\sqrt{j^4+14j^2+1}, \end{aligned}$$

which is uniformly bounded for every \(j \in \mathbb {N}\setminus \{0\}\) (indeed, \(j^4+14j^2+1\leqslant 16j^4\) for \(j\geqslant 1\), so that \(\mathbb {M}(\Sigma _j)\leqslant 16\pi \)). Thus, for \(\Omega =B_2\), we have that \(\Sigma _j\in {\text {curv}}_2^{(0,1)}(\Omega )\) for every \(j\in \mathbb {N}\setminus \{0\}\) and, since \(\partial \Sigma _j=0\), we also have that \(\Sigma _j\) belongs to \({\mathcal {X}}_{A,c}^{(0,1)}(\Omega )\) for every \(j\in \mathbb {N}\setminus \{0\}\), for a suitable choice of A and c. Since \(\Sigma _j\) is a regular Gauss graph, \({\mathcal {E}}(\Sigma _j)={\mathcal {E}}(M_j)\), so that, using the expression in (1.1), we obtain

$$\begin{aligned} {\mathcal {E}}(M_j)=4\pi \bigg (\frac{\alpha _HH_0^2}{j^2}-\frac{4\alpha _HH_0}{j}-r\bigg ), \end{aligned}$$
(4.18)

using the fact that \(K=H^2/4\) on spheres.
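Explicitly, recalling that \({\mathcal {H}}^2(M_j)=4\pi /j^2\), \(H=2j\), \(K=j^2\), and \(\alpha _K=4\alpha _H+r\), the computation behind (4.18) reads

$$\begin{aligned} {\mathcal {E}}(M_j)=\frac{4\pi }{j^2}\Big (\alpha _H(2j-H_0)^2-(4\alpha _H+r)j^2\Big ) =4\pi \bigg (\frac{\alpha _HH_0^2}{j^2}-\frac{4\alpha _HH_0}{j}-r\bigg ). \end{aligned}$$

We now consider two cases.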

  1. (1)

    \(r>0\): the functional \({\mathcal {E}}\) is no longer lower semicontinuous, since \(\Sigma _j \rightharpoonup 0\) and, by (4.18), \(\displaystyle \liminf _{j\rightarrow \infty }{\mathcal {E}}(M_j)=-4\pi r < 0= {\mathcal {E}}(0)\).

  2. (2)

    \(r=0\) and \(H_0=0\): from (4.6) it is easy to see that \({\mathcal {E}}\geqslant 0\), and by (4.18) \({\mathcal {E}}(M_j)=0\) for every \(j\in \mathbb {N}{\setminus }\{0\}\), from which we obtain that \({\mathcal {E}}\) is minimized on spheres. We also notice that \({\mathcal {E}}\) is minimized on flat surfaces (\(H=K=0\)); see the computation after this list.
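The computation announced in case (2) is elementary: when \(r=0\) (that is, \(\alpha _K=4\alpha _H\)) and \(H_0=0\), the integrand of \({\mathcal {E}}\) in (1.1) can be written in terms of the principal curvatures as

$$\begin{aligned} \alpha _HH^2-\alpha _KK=\alpha _H\big ((\kappa _1+\kappa _2)^2-4\kappa _1\kappa _2\big )=\alpha _H(\kappa _1-\kappa _2)^2\geqslant 0, \end{aligned}$$

which vanishes exactly at umbilical points; this is why both spheres and flat pieces have zero energy.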

The construction above can be adapted to the constrained case by taking

$$\begin{aligned} M_{j}=\partial B_{R_{j}}\cup \partial B_{\rho _{j}/j} \end{aligned}$$

for suitable \(R_{j},\rho _{j}>0\), with both spherical surfaces oriented by the outward normal, such that \({\mathcal {A}}(\Sigma _{M_{j}})={\mathcal {H}}^2(M_{j})=a\) and \({\mathcal {V}}(\Sigma _{M_{j}})=v\). Then \(\Sigma _{M_{j}}\in {\mathcal {X}}^{(0,1)}_{A,c;a,v}(\Omega )\) and \({\mathcal {E}}(M_{j})\) has an expression similar to that in (4.18), so that the same conclusions as above hold.

The case \(r=0\) and \(H_0 \ne 0\) remains open: we do not have a counterexample at the moment.

Remark 4.11

[\(\alpha _K=0\)] In this case, the Canham–Helfrich functional \({\mathcal {E}}\) in (1.1) reduces to the functional

$$\begin{aligned} {\mathcal {W}}_0(M):=\alpha _H\int _M (H(p)-H_0)^2\,\mathrm d{\mathcal {H}}^2(p), \end{aligned}$$
(4.19)

which is non-negative and is minimized by a (portion of a) sphere with mean curvature \(H=H_0\). Moreover, if \(H_0=0\), this further reduces to the Willmore functional \({\mathcal {W}}\) in (1.2), which is again non-negative and minimized, for instance, on flat surfaces or on minimal surfaces. There is a vast literature on the Willmore functional both in the constrained and unconstrained case, see, e.g., [23, 30, 32, 35, 36] in addition to those already mentioned in the Introduction.

Here we observe that Lemma 4.1 yields the eigenvalue \(2\alpha _H\) with multiplicity 1 and the zero eigenvalue with algebraic multiplicity 8. Moreover, coercivity of \({\mathcal {E}}\) would require all the eigenvectors associated with the zero eigenvalue to belong to \(\widetilde{\mathcal {X}}_y^\perp \), which is not the case. Therefore, we cannot prove the coercivity estimate (4.5), and the direct method of the Calculus of Variations cannot be applied to show the existence of minimizers. This suggests that the space of generalized Gauss graphs is not a suitable environment for the study of the Willmore functional \({\mathcal {W}}\) in (1.2).

4.3 Regularity of Minimizers

We prove a regularity result for minimizers of \( {\mathcal {E}}\).

Theorem 4.12

Let (1.5) hold and let \(\Sigma \in {\text {curv}}_2(\Omega )\) be a solution either of problem (4.10) or of problem (4.12) with \(\partial \Sigma =0\), or of one of the problems (4.17). Then \(p_1\Sigma \) is \({\mathcal {C}}^2\)-rectifiable, that is, there exists a countable family \(\{S_j\}_{j\in \mathbb {N}}\) of surfaces of class \({\mathcal {C}}^2\) in \({\mathbb {R}}^3\) such that

$$\begin{aligned} {\mathcal {H}}^2\bigg (p_1\Sigma \setminus \bigcup _{j\in \mathbb {N}}S_j\bigg )=0. \end{aligned}$$

Proof

We start by observing that, by [8, Theorem 6.1], since \(\partial \Sigma =0\) and \(|\Sigma _1|\ll |\Sigma _0|\), we get that \(p_1\Sigma \) is the support of a 2-dimensional curvature varifold (see the proof of [8, Theorem 6.1] for the explicit construction). The \({\mathcal {C}}^2\)-rectifiability of \(p_1\Sigma \) is now a consequence of [27, Theorem 1]. \(\square \)

Remark 4.13

We point out that Theorem 4.12 cannot be obtained using the Structure Theorem [1, Theorem 2.10], which asserts that if \(\Sigma \) is a generalized Gauss graph then \(p_1\Sigma \) is (only) \({\mathcal {C}}^1\)-rectifiable.