Article

The Multivariate Theory of Functional Connections: Theory, Proofs, and Application in Partial Differential Equations

Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1303; https://doi.org/10.3390/math8081303
Submission received: 8 July 2020 / Revised: 27 July 2020 / Accepted: 4 August 2020 / Published: 6 August 2020
(This article belongs to the Section Difference and Differential Equations)

Abstract: This article presents a reformulation of the Theory of Functional Connections: a general methodology for functional interpolation that can embed a set of user-specified linear constraints. The reformulation presented in this paper exploits the underlying functional structure presented in the seminal paper on the Theory of Functional Connections to ease the derivation of these interpolating functionals—called constrained expressions—and provides rigorous terminology that lends itself to straightforward derivations of mathematical proofs regarding the properties of these constrained expressions. Furthermore, the extension of the technique, and of the associated proofs, to n dimensions is immediate through a recursive application of the univariate formulation. In all, the results of this reformulation are compared to prior work to highlight the novelty and mathematical convenience of using this approach. Finally, the methodology presented in this paper is applied to two partial differential equations with different boundary conditions, and, when data is available, the results are compared to state-of-the-art methods.

1. Introduction

The Theory of Functional Connections (TFC) is a mathematical framework used to construct functionals, functions of functions, that represent the family of all possible functions that satisfy some user-defined constraints; these functionals are referred to as “constrained expressions” in the context of the TFC. In other words, the TFC is a framework for performing functional interpolation. In the seminal paper on TFC [1], a univariate framework was presented that could construct constrained expressions for constraints on the values of points or arbitrary order derivatives at points. Furthermore, Reference [1] showed how to construct constrained expressions for constraints consisting of linear combinations of values and derivatives at points, called linear constraints; for example, $y(x_1) + 3\pi y_{xx}(x_2) = 2e$, for some points $x_1$ and $x_2$, where $y_{xx}$ symbolizes the second-order derivative of $y$ with respect to $x$. In the current formulation, the univariate constrained expression has been used for a variety of applications, including solving linear and non-linear differential equations [2,3], hybrid systems [4], optimal control problems [5,6], in quadratic and nonlinear programming [7], and other applications [8].
Recently, the TFC method has been extended to n-dimensions [9]. This multivariate framework can provide functionals representing all possible n-dimensional manifolds subject to constraints on the value and arbitrary order derivative of $(n-1)$-dimensional manifolds. However, Reference [9] does not discuss how the multivariate framework can be used to construct constrained expressions for linear constraints. Regardless, these multivariate constrained expressions have been used to embed constraints into machine learning frameworks [10,11,12] for use in solving partial differential equations (PDEs). Moreover, it was shown that this framework may be combined with orthogonal basis functions to solve PDEs [13]; this is essentially the n-dimensional equivalent of the ordinary differential equations (ODEs) solved using the univariate formulation [2,3].
The contributions of this article are threefold. First, this article examines the underlying structure of univariate constrained expressions and provides an alternative method for deriving them. This structure is leveraged to derive mathematical proofs regarding the properties of univariate constrained expressions. Second, using the aforementioned structure, this article extends the multivariate formulation presented in Reference [9] to include linear constraints by introducing the recursive application of univariate constrained expressions as a method for generating multivariate constrained expressions. Furthermore, mathematical proofs are provided showing that the resultant constrained expressions indeed represent all possible manifolds subject to the given constraints. Third, this article presents how the multivariate constrained expressions can be combined with a linear expansion of n-dimensional orthogonal basis functions to numerically estimate the solutions of PDEs. While Reference [13] showed that solving PDEs with the multivariate TFC is possible, it merely gave a cursory overview, skipping some rather important details; this article fills in those gaps.
The remainder of this article is structured as follows. Section 2 introduces the univariate constrained expression, examines its underlying structure, and provides an alternative method to derive univariate constrained expressions. Then, in Section 3, this structure is leveraged to rigorously define the univariate TFC constrained expression and provide some related mathematical proofs. In Section 4, this new structure and the mathematical proofs are extended to n-dimensions, and a compact tensor form of the multivariate constrained expression is provided. Section 5 discusses how to combine the multivariate constrained expression with multivariate basis functions to estimate the solutions of PDEs. Then, in Section 6, this method is used to estimate the solution of two PDEs, and the results are compared with state-of-the-art methods when data is available. Finally, Section 7 summarizes the article and provides some potential future directions for follow-on research.

2. Univariate TFC

Extending the multivariate TFC to include linear constraints requires recursive applications of the univariate TFC. Hence, it is paramount that the reader understands the univariate TFC before moving to the multivariate case. First, the general form of the univariate constrained expression will be presented, followed by a few examples. These examples serve to solidify the reader’s understanding of the univariate constrained expression, as well as highlight nuances of deriving constrained expressions. In addition, this section includes mathematical proofs that univariate TFC constrained expressions indeed represent the family of all possible functions that satisfy the constraints.
Given a set of k constraints, the univariate constrained expression takes the following form,
$$y(x, g(x)) = g(x) + \sum_{j=1}^{k} s_j(x)\,\eta_j, \tag{1}$$
where $g(x)$ is a free function, the $s_j(x)$ are $k$ mutually linearly independent functions called support functions, and the $\eta_j$ are $k$ coefficient functionals that are solved for by imposing the constraints. The free function $g(x)$ can be chosen to be any function provided that it is defined at the constraints’ locations.
The following examples start from Equation (1), the framework proposed in the seminal paper on TFC [1], and highlight a unified structure that underlies the univariate TFC constrained expressions. Following these examples is a section that rigorously defines this structure and provides important mathematical proofs.

2.1. Univariate Example # 1: Constraints at a Point

Constraints at a point consist of constraints on the value at a point and constraints on a derivative at a point. Take for example the following constraints,
$$y(0) = 1, \qquad y_x(1) = 2, \qquad y(2) = 3.$$
For this example, the support functions are chosen to be $s_1 = 1$, $s_2 = x^2$, and $s_3 = x^3$. Following Equation (1) and imposing the three constraints leads to the simultaneous set of equations
$$y(0) = 1 = g(0) + \eta_1$$
$$y_x(1) = 2 = g_x(1) + 2\eta_2 + 3\eta_3$$
$$y(2) = 3 = g(2) + \eta_1 + 4\eta_2 + 8\eta_3.$$
Solving this set of equations for the unknowns η j leads to the solution,
$$\eta_1 = 1 - g(0), \qquad \eta_2 = \frac{10 - 3g(0) + 3g(2) - 8g_x(1)}{4}, \qquad \eta_3 = \frac{g(0) - g(2) + 2g_x(1) - 2}{2}.$$
Substituting the coefficient functionals back into Equation (1) and simplifying yields,
$$y(x, g(x)) = g(x) + \frac{-2x^3 + 3x^2 + 4}{4}\big(1 - g(0)\big) + \big({-x^3} + 2x^2\big)\big(2 - g_x(1)\big) + \frac{2x^3 - 3x^2}{4}\big(3 - g(2)\big). \tag{2}$$
It is simple to verify that regardless of how g ( x ) is chosen, provided g ( x ) exists at the constraint points, Equation (2) always satisfies the given constraints.
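As a quick numerical illustration of this claim (a sketch added here, not part of the original derivation), the snippet below evaluates Equation (2) for an arbitrarily chosen free function; the particular $g(x)$ is a hypothetical choice used only for the check.

```python
import numpy as np

# An arbitrary free function, defined at the constraint points 0, 1, and 2,
# and its derivative (needed at x = 1 for the derivative constraint).
g  = lambda x: np.sin(3.0 * x) + x**2
gx = lambda x: 3.0 * np.cos(3.0 * x) + 2.0 * x

def y(x):
    # Constrained expression of Equation (2).
    phi1 = (-2 * x**3 + 3 * x**2 + 4) / 4
    phi2 = -x**3 + 2 * x**2
    phi3 = (2 * x**3 - 3 * x**2) / 4
    return g(x) + phi1 * (1 - g(0)) + phi2 * (2 - gx(1)) + phi3 * (3 - g(2))

def yx(x, h=1e-6):
    # Central-difference derivative of the constrained expression.
    return (y(x + h) - y(x - h)) / (2 * h)

print(y(0.0), yx(1.0), y(2.0))   # ~1.0, ~2.0, ~3.0 for any admissible g
```

Swapping in any other $g(x)$ defined at the constraint points leaves the three printed values unchanged, which is the essence of the constrained expression.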
The support functions in the previous example were selected as $s_1 = 1$, $s_2 = x^2$, and $s_3 = x^3$. However, these support functions could have been any mutually linearly independent set of functions that permits a solution for the coefficient functionals $\eta_j$. To clarify the latter of these requirements, consider the same constraints with support functions $s_1 = 1$, $s_2 = x$, and $s_3 = x^2$. Then, the set of equations with unknowns $\eta_j$ is,
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 1 & 2 & 4 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{Bmatrix} = \begin{Bmatrix} 1 - g(0) \\ 2 - g_x(1) \\ 3 - g(2) \end{Bmatrix}.$$
Notice that when using these support functions the matrix that multiplies the coefficient functionals is singular. Thus, no solution exists, and therefore, the support functions $s_1 = 1$, $s_2 = x$, and $s_3 = x^2$ are an invalid set for these constraints.
Note that the matrix singularity does not depend on the free function. This means that the singularity arises when a linear combination of the selected support functions cannot be used to interpolate the constraints. Therefore, the singularity of the support function matrix depends on both the support functions chosen and the specific constraints to be embedded. This raises another important restriction on the selection of the support functions: not only must they be linearly independent, but they must also constitute an interpolation model that is consistent with the specified constraints.
Notice that each term, except the term containing only the free function, in the constrained expression is associated with a specific constraint and has a particular structure. To illustrate, examine the first constraint term from Equation (2),
$$\underbrace{\frac{-2x^3 + 3x^2 + 4}{4}}_{\phi_1(x)}\;\underbrace{\big(1 - g(0)\big)}_{\rho_1(x,\,g(x))}.$$
The first term in the product, $\phi_1(x)$, is called a switching function—Reference [1] introduced these switching functions as “coefficient” functions, $\beta_k$—and is a function that is equal to 1 when evaluated at the constraint it is referencing, and equal to 0 when evaluated at all the other constraints. The second term of the product, $\rho_1(x, g(x))$, is called a projection functional, and is derived by setting the constraint function equal to zero and replacing $y(x)$ with $g(x)$. In the case of constraints at a point, this is simply the difference between the constraint value and the free function evaluated at that constraint point. It is called the projection functional because it projects the free function to the set of functions that vanish at the constraint. When evaluating the switching function $\phi_1(x)$ at the constraint it is referencing, it is equal to 1 (i.e., $\phi_1(0) = 1$), and when it is evaluated at the other constraints, it is equal to 0 (i.e., $\phi_{1_x}(1) = 0$ and $\phi_1(2) = 0$). The projection functional $\rho_1(x, g(x))$ is just the difference between the constraint $y(0) = 1$ and the free function evaluated at the constraint point, $g(0)$. This structure is important, as it shows up in the other constraint types too. Property 1 follows from the definition of the projection functional.
Property 1.
The projection functionals for constraints at a point are always equal to zero if the free function, g ( x ) , is selected such that it satisfies the associated constraint.
For example, if $g(x)$ is selected such that $g(0) = 1$, then the first projection functional in this example becomes $\rho_1(x, g(x)) = 1 - g(0) = 0$. Based on this structure, an alternate way to define the constrained expression, shown in Equation (3), can be derived,
$$y(x, g(x)) = g(x) + \sum_{j=1}^{k} \phi_j(x)\,\rho_j(x, g(x)). \tag{3}$$
The projection functionals are simple to derive, but the switching functions require some attention. From their definition, these functions must go to 1 at their associated constraint and 0 at all other constraints. Based on this definition, the following algorithm for deriving the switching functions is proposed:
  • Choose k support functions, s k ( x ) .
  • Write each switching function as a linear combination of the support functions with unknown coefficients.
  • Based on the switching function definition, write a system of equations to solve for the unknown coefficients.
To validate that this algorithm works, we will use the same constraints and support functions and rederive the constrained expression shown in Equation (2). Hence, $\phi_1(x) = s_i(x)\alpha_{i1}$, $\phi_2(x) = s_i(x)\alpha_{i2}$, and $\phi_3(x) = s_i(x)\alpha_{i3}$, for some as yet unknown coefficients $\alpha_{ij}$. Note that in the previous mathematical expressions and throughout the remainder of the paper, the Einstein summation convention is used to improve readability. Now, the definition of the switching function is used to come up with a set of equations. For example, the first switching function has the three equations,
$$\phi_1(0) = 1, \qquad \phi_{1_x}(1) = 0, \qquad \phi_1(2) = 0.$$
These equations are expanded in terms of the support functions,
$$\phi_1(0) = (1)\,\alpha_{11} + (0)\,\alpha_{21} + (0)\,\alpha_{31} = 1$$
$$\phi_{1_x}(1) = (0)\,\alpha_{11} + (2)\,\alpha_{21} + (3)\,\alpha_{31} = 0$$
$$\phi_1(2) = (1)\,\alpha_{11} + (4)\,\alpha_{21} + (8)\,\alpha_{31} = 0,$$
which can be compactly written as,
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix} \begin{Bmatrix} \alpha_{11} \\ \alpha_{21} \\ \alpha_{31} \end{Bmatrix} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix}.$$
The same is done for the other two switching functions to produce a set of equations that can be solved by matrix inversion.
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 3/4 & 2 & -3/4 \\ -1/2 & -1 & 1/2 \end{bmatrix}$$
Substituting the constants back into the switching functions and simplifying yields,
$$\phi_1(x) = \frac{-2x^3 + 3x^2 + 4}{4}, \qquad \phi_2(x) = -x^3 + 2x^2, \qquad \phi_3(x) = \frac{2x^3 - 3x^2}{4}.$$
Substituting the projection functionals and switching functions back into the constrained expression yields,
$$y(x, g(x)) = g(x) + \frac{-2x^3 + 3x^2 + 4}{4}\big(1 - g(0)\big) + \big({-x^3} + 2x^2\big)\big(2 - g_x(1)\big) + \frac{2x^3 - 3x^2}{4}\big(3 - g(2)\big),$$
which is identical to Equation (2). This approach to derive constrained expressions using switching functions has, similar to the first approach, the risk of obtaining a singular matrix if the support functions selected are not able to interpolate the constraints. However, as will be demonstrated in the coming sections, this approach can be easily extended to multivariate domains via recursive applications of the univariate theory, and this approach lends itself nicely to mathematical proofs.

2.2. Univariate Example # 2: Linear Constraints

Linear constraints consist of linear combinations of the previous types of constraints. Note that by this definition, relative constraints such as $y(0) = y(1)$ are just a special case of linear constraints. Take for example the following two constraints,
$$y(0) = y(1), \qquad\text{and}\qquad 3 = 2y(2) + \pi y_{xx}(0).$$
To generate a constrained expression, the projection functionals and switching functions must be found. Similar to the constraints at a point, first the constraints are arranged such that one side of the constraint is equal to zero; for example,
$$y(0) - y(1) = 0 \qquad\text{and}\qquad 3 - 2y(2) - \pi y_{xx}(0) = 0.$$
Then, the projection functionals can be defined by replacing y ( x ) with g ( x ) . Thus,
$$\rho_1(x, g(x)) = g(0) - g(1) \qquad\text{and}\qquad \rho_2(x, g(x)) = 3 - 2g(2) - \pi g_{xx}(0).$$
The switching functions are again defined such that they are equal to 1 when evaluated with their associated constraint, and equal to 0 when evaluated at all other constraints. The word “evaluation” in the previous sentence requires clarification. Substitution of the constrained expression back into the constraint should result in the expression $0 = 0$ (i.e., the constraint is satisfied). When doing so, the switching functions, $\phi(x)$, will be evaluated in the same way $y(x)$ is evaluated in the constraint. Thus, the constants within the constraint are not used in the evaluation. Moreover, because the projection functional is designed to exactly cancel the values of the free function in the constraint, the switching function equations should have the opposite sign. Hence, evaluation means to replace the function with the switching function, remove any terms not multiplied by the switching function, and multiply the entire equation by $-1$. Any reader confused by this linguistic definition of switching function evaluation may refer to Property 4, which defines the switching function evaluation mathematically. For this example, this leads to,
$$\phi_1(1) - \phi_1(0) = 1, \qquad 2\phi_1(2) + \pi \phi_{1_{xx}}(0) = 0,$$
for the first switching function, and
$$\phi_2(1) - \phi_2(0) = 0, \qquad 2\phi_2(2) + \pi \phi_{2_{xx}}(0) = 1,$$
for the second switching function. Note that while this “evaluation” definition may seem convoluted at first, it is in fact exactly what was done for the constraints at a point case. However, in that case, due to the simple nature of the constraints and the way the projection functionals were defined, this was simply the switching function evaluated at the point.
Similar to the constraints at a point case, the switching functions are defined as a linear combination of support functions with unknown coefficients. Again, this can be written compactly in matrix form. For this example, the support functions $s_1(x) = 1$ and $s_2(x) = x$ are chosen. Then,
$$\begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix}^{-1} = \begin{bmatrix} -2 & 1/2 \\ 1 & 0 \end{bmatrix},$$
which results in the switching functions,
$$\phi_1(x) = x - 2, \qquad \phi_2(x) = \frac{1}{2}.$$
Substituting the switching and projection functionals back into the constrained expression form given in Equation (3) yields,
$$y(x, g(x)) = g(x) + (x - 2)\big(g(0) - g(1)\big) + \frac{1}{2}\big(3 - 2g(2) - \pi g_{xx}(0)\big).$$
By substituting this expression for $y$ back into the constraints, one can verify that this constrained expression indeed satisfies the constraints regardless of the choice of the free function $g(x)$. Property 2 extends Property 1 to linear constraints.
Property 2.
The projection functionals for linear constraints are always equal to zero if the free function is selected such that it satisfies the associated constraint.
For example, if $g(x)$ is selected such that $g(1) = g(0)$, then the first projection functional in this example becomes $\rho_1(x, g(x)) = g(0) - g(1) = 0$.
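As before, this can be checked numerically. The sketch below (an added illustration, with an arbitrarily chosen free function) verifies both linear constraints; the second derivative of the constrained expression is approximated with a central difference.

```python
import numpy as np

# Arbitrary free function and its second derivative at x = 0 in closed form.
g    = lambda x: np.exp(0.5 * x) + np.cos(x)
gxx0 = 0.25 * np.exp(0.0) - np.cos(0.0)        # g_xx(0)

def y(x):
    # Constrained expression for the two linear constraints above.
    rho1 = g(0) - g(1)
    rho2 = 3 - 2 * g(2) - np.pi * gxx0
    return g(x) + (x - 2) * rho1 + 0.5 * rho2

def yxx(x, h=1e-4):
    # Central-difference second derivative.
    return (y(x + h) - 2 * y(x) + y(x - h)) / h**2

print(y(0) - y(1))                  # ~0, so y(0) = y(1)
print(2 * y(2) + np.pi * yxx(0))    # ~3, so 3 = 2 y(2) + pi y_xx(0)
```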

3. General Formulation of the Univariate TFC

This section rigorously defines the TFC constrained expression and provides some relevant proofs. First, a functional is defined and its properties are investigated.
Definition 1.
A functional, e.g., f ( x , g ( x ) ) , has independent variable(s) and function(s) as inputs, and produces a function as an output.
Note that a functional as defined here is coincident with the computer science definition of a functional. One can think of a functional as a map for functions. That is, the functional takes a function, $g(x)$, as its input and produces a function, $h(x) = f(x, g(x))$ for any specified $g(x)$, as its output. Since this article is focused on constraint embedding, or in other words functional interpolation, we will not concern ourselves with the domain/range of the input and output functions. Rather, we will discuss functionals only in the context of their potential input functions, hereon referred to as the domain of the functional, and potential output functions, hereon referred to as the codomain of the functional.
Next, the definitions of injective, surjective, and bijective are extended from functions to functionals.
Definition 2.
A functional, f ( x , g ( x ) ) , is said to be injective if every function in its codomain is the image of at most one function in its domain.
Definition 3.
A functional, f ( x , g ( x ) ) , is said to be surjective if for every function in the codomain, h ( x ) , there exists at least one g ( x ) such that h ( x ) = f ( x , g ( x ) ) .
Definition 4.
A functional, f ( x , g ( x ) ) , is said to be bijective if it is both injective and surjective.
To elaborate, Figure 1 gives a graphical representation of each of these functionals, and examples of each of these functionals follow. Note that the phrase “smooth functions” is used here to denote continuous, infinitely differentiable, real-valued functions. Consider the functional $f(x, g(x)) = e^{g(x)}$ whose domain is all smooth functions and whose codomain is all smooth functions. The functional is injective because for every $h(x)$ in the codomain there is at most one $g(x)$ that maps $f(x, g(x))$ to $h(x)$.
However, the functional is not surjective, because the functional does not span the space of the codomain. For example, consider the desired output function $h(x) = -2$: there is no $g(x)$ that produces this output, since $e^{g(x)}$ is always positive. Next, consider the functional $f(x, g(x)) = g(x) - g(0)$ whose domain is all smooth functions and whose codomain is the set of all smooth functions $h(x)$ such that $h(0) = 0$. This functional is surjective because it spans the space of all smooth functions that are 0 when $x = 0$, but it is not injective. For example, the functions $g(x) = x$ and $g(x) = x + 3$ produce the same result, i.e., $f(x, x) = f(x, x + 3) = x$. Finally, consider the functional $f(x, g(x)) = g(x)$ whose domain is all smooth functions and whose codomain is all smooth functions. This functional is bijective, because it is both injective and surjective.
In addition, the notion of projection is extended to functionals. Consider the typical definition of a projection matrix, $P^n = P$ for some $n \in \mathbb{Z}^+$. In other words, when $P$ operates on itself, it produces itself: a projection property for functionals can be defined similarly.
Definition 5.
A functional is said to be a projection functional if it produces itself when operating on itself.
For example, consider a functional operating on itself, $f(x, f(x, g(x)))$. Then, if $f(x, f(x, g(x))) = f(x, g(x))$, the functional is a projection functional. Note that proving $f(x, f(x, g(x))) = f(x, g(x))$ automatically extends to a functional operating on itself $n$ times: for example, $f(x, f(x, f(x, g(x)))) = f(x, f(x, g(x))) = f(x, g(x))$, and so on.
Now that a functional and some properties of a functional have been investigated, the notation used in the prior section can be leveraged to rigorously define TFC related concepts. First, it is useful to define the constraint operator, denoted by the symbol ℭ.
Definition 6.
The constraint operator, $\mathfrak{C}_i$, is a linear operator that when operating on a function returns the function evaluated at the i-th specified constraint.
As an example, consider the 2nd linear constraint ($i = 2$) given in Section 2.2, $3 = 2y(2) + \pi y_{xx}(0)$. For this problem, it follows that,
$$\mathfrak{C}_2[y(x)] = 2y(2) + \pi y_{xx}(0).$$
The constraint operator is a linear operator, as it satisfies the two properties of a linear operator: (1) $\mathfrak{C}_i[f(x) + g(x)] = \mathfrak{C}_i[f(x)] + \mathfrak{C}_i[g(x)]$ and (2) $\mathfrak{C}_i[a\,g(x)] = a\,\mathfrak{C}_i[g(x)]$. For example, again consider the 2nd linear constraint given in Section 2.2,
$$\mathfrak{C}_2[f(x) + g(x)] = \mathfrak{C}_2[f(x)] + \mathfrak{C}_2[g(x)] = 2f(2) + \pi f_{xx}(0) + 2g(2) + \pi g_{xx}(0)$$
$$\mathfrak{C}_2[a\,f(x)] = a\,\mathfrak{C}_2[f(x)] = a\big(2f(2) + \pi f_{xx}(0)\big).$$
Naturally, the constraint operator has specific properties when operating on the support functions, switching functions, and projection functionals.
Property 3.
The constraint operator acting on the support functions s j ( x ) produces the matrix
$$S_{ij} = \mathfrak{C}_i[s_j(x)].$$
Again, consider the example from Section 2.2 where the support functions were $s_1(x) = 1$ and $s_2(x) = x$. By applying the constraint operator,
$$S_{ij} = \mathfrak{C}_i[s_j(x)] = \begin{bmatrix} \mathfrak{C}_1[s_1(x)] & \mathfrak{C}_1[s_2(x)] \\ \mathfrak{C}_2[s_1(x)] & \mathfrak{C}_2[s_2(x)] \end{bmatrix} = \begin{bmatrix} s_1(1) - s_1(0) & s_2(1) - s_2(0) \\ 2s_1(2) + \pi s_{1_{xx}}(0) & 2s_2(2) + \pi s_{2_{xx}}(0) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix},$$
which is identical to the matrix derived in Section 2.2. In fact, the matrix $S_{ij}$ is simply the matrix multiplying the $\alpha_{ij}$ matrix in all the previous examples. Therefore, it follows that $S_{ij}\alpha_{jk} = \alpha_{ij}S_{jk} = \delta_{ik}$, where $\delta_{ik}$ is the Kronecker delta, and the $\alpha_{ij}$ coefficients are precisely given by the inverse of the constraint operator operating on the support functions.
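The sketch below reproduces this property numerically for the Section 2.2 example; the lambdas encoding the two constraint operators are illustrative assumptions matching those constraints.

```python
import numpy as np

# Support functions s1 = 1, s2 = x and their second derivatives (both zero).
s   = [lambda x: 1.0, lambda x: x]
sxx = [lambda x: 0.0, lambda x: 0.0]

# Constraint operators of Section 2.2 applied to a function f with second
# derivative fxx: C1[f] = f(1) - f(0), C2[f] = 2 f(2) + pi f_xx(0).
C = [lambda f, fxx: f(1) - f(0),
     lambda f, fxx: 2 * f(2) + np.pi * fxx(0)]

# Property 3: S_ij = C_i[s_j]; the alpha coefficients are its inverse.
S = np.array([[Ci(sj, sjxx) for sj, sjxx in zip(s, sxx)] for Ci in C])
alpha = np.linalg.inv(S)
print(S)        # [[0, 1], [2, 4]]
print(alpha)    # [[-2, 0.5], [1, 0]], matching the switching functions above
```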
Property 4.
The constraint operator acting on the switching functions ϕ j ( x ) produces the Kronecker delta.
$$\mathfrak{C}_i[\phi_j(x)] = \delta_{ij}.$$
This property is just a mathematical restatement of the linguistic definition of the switching function given earlier. One can intuit this property from the switching function definition, since they evaluate to 1 at their specified constraint condition (i.e., i = j ) and to 0 at all other constraint conditions (i.e., i j ).
Using this definition of the constraint operator, one can define the projection functional in a compact and precise manner.
Definition 7.
Let $g(x)$ be the free function, where $g: \mathbb{R} \to \mathbb{R}$, and let $\kappa_i \in \mathbb{R}$ be the numerical portion of the i-th constraint. Then,
$$\rho_i(x, g(x)) = \kappa_i - \mathfrak{C}_i[g(x)].$$
Following the example from Section 2.2, the projection functional for the second constraint is,
$$\rho_2(x, g(x)) = \kappa_2 - \mathfrak{C}_2[g(x)] = 3 - 2g(2) - \pi g_{xx}(0).$$
Note that in the univariate case κ i is a scalar value, but in the multivariate case κ i is a function.
Theorem 1.
For any function, f ( x ) , satisfying the constraints, there exists at least one free function, g ( x ) , such that the TFC constrained expression y ( x , g ( x ) ) = f ( x ) .
Proof. 
As highlighted in Properties 1 and 2, the projection functionals are equal to zero whenever $g(x)$ satisfies the constraints. Thus, if $g(x)$ is a function that satisfies the constraints, then the constrained expression becomes $y(x, g(x)) = g(x) + \phi_i(x)\,\rho_i(x, g(x)) = g(x) + \phi_i(x)\cdot 0 = g(x)$. Hence, by choosing $g(x) = f(x)$, the constrained expression becomes $y(x, f(x)) = f(x)$. Therefore, for any function satisfying the constraints, $f(x)$, there exists at least one free function, $g(x) = f(x)$, such that the constrained expression is equal to the function satisfying the constraints, i.e., $y(x, f(x)) = f(x)$. □
Theorem 2.
The TFC univariate constrained expression is a projection functional.
Proof. 
To prove Theorem 2, one must show that y ( x , y ( x , g ( x ) ) ) = y ( x , g ( x ) ) . By definition, the constrained expression returns a function that satisfies the constraints. In other words, for any g ( x ) , y ( x , g ( x ) ) is a function that satisfies the constraints. From Theorem 1, if the free function used in the constrained expression satisfies the constraints, then the constrained expression returns that free function exactly. Hence, if the constrained expression functional is given itself as the free function, it will simply return itself.  □
Theorem 3.
For a given function, f ( x ) , satisfying the constraints, the free function, g ( x ) , in the TFC constrained expression y ( x , g ( x ) ) = f ( x ) is not unique. In other words, the TFC constrained expression is a surjective functional.
Proof. 
Consider the free function choice $g(x) = f(x) + \beta_j s_j(x)$, where the $\beta_j$ are scalar values in $\mathbb{R}$ and the $s_j(x)$ are the support functions used to construct the switching functions $\phi_i(x)$.
$$y(x) = g(x) + \phi_i(x)\,\rho_i(x, g(x)).$$
Substituting the chosen g ( x ) yields,
$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\,\rho_i\big(x, f(x) + \beta_j s_j(x)\big).$$
Now, according to Definition 7 of the projection functional,
$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\big(\kappa_i - \mathfrak{C}_i[f(x) + \beta_j s_j(x)]\big).$$
Since the constraint operator $\mathfrak{C}_i$ is a linear operator,
$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\big(\kappa_i - \mathfrak{C}_i[f(x)] - \mathfrak{C}_i[s_j(x)]\,\beta_j\big).$$
Since $f(x)$ is defined as a function satisfying the constraints, then $\mathfrak{C}_i[f(x)] = \kappa_i$, and,
$$y(x) = f(x) + \beta_j s_j(x) - \phi_i(x)\,\mathfrak{C}_i[s_j(x)]\,\beta_j.$$
Now, according to Property 3 of the constraint operator, and by decomposing the switching functions as $\phi_i(x) = s_k(x)\alpha_{ki}$,
$$y(x) = f(x) + \beta_j s_j(x) - \alpha_{ki}\, s_k(x)\, S_{ij}\, \beta_j.$$
Collecting terms results in,
$$y(x) = f(x) + \beta_j\big(\delta_{jk} - \alpha_{ki} S_{ij}\big)\, s_k(x).$$
However, $S_{ki}\alpha_{ij} = \delta_{kj}$ because $\alpha_{ij}$ is the inverse of $S_{ki}$. Therefore, by the definition of the inverse, $S_{ki}\alpha_{ij} = \alpha_{ki}S_{ij} = \delta_{kj}$, and thus,
$$y(x) = f(x) + \beta_j\big(\delta_{jk} - \delta_{jk}\big)\, s_k(x).$$
Simplifying yields the result,
$$y(x) = f(x),$$
which is independent of the β j s j ( x ) terms in the free function. Therefore, the free function is not unique. □
Notice that the non-uniqueness of $g(x)$ depends on the support functions used in the constrained expression, which has an immediate consequence when using constrained expressions in optimization. If any terms in $g(x)$ are linearly dependent on the support functions used to construct the constrained expression, their contribution is negated and thus arbitrary. For some optimization techniques it is critical that the linearly dependent terms that do not contribute to the final solution be removed; otherwise, the optimization technique becomes impaired. For example, prior research focused on using this method to solve ODEs [2,3] through a basis expansion of $g(x)$ and least-squares, and the basis terms linearly dependent on the support functions had to be omitted from $g(x)$ in order to maintain full-rank matrices in the least-squares.
The previous proofs coupled with the functional and functional property definitions given earlier provide a more rigorous definition for the TFC constrained expression: the TFC constrained expression is a surjective, projection functional whose domain is the space of all real-valued functions that are defined at the constraints and whose codomain is the space of all real-valued functions that satisfy the constraints. It is surjective because it spans the space of all functions that satisfy the constraints, its codomain, based on Theorem 1, but is not injective, because Theorem 3 shows that functions in the codomain are the image of more than one function in the domain: the functional is thus not bijective either because it is not injective. Moreover, the TFC constrained expression is a projection functional as shown in Theorem 2.

4. Multivariate TFC

Consider the general multivariate function $F: \mathbb{R}^n \to \mathbb{R}^m$ where $F = (f_1, f_2, \ldots, f_m)$. In this definition, $F$ is composed of the real-valued functions $f_i: \mathbb{R}^n \to \mathbb{R}$ such that $f_i = f_i(x_1, x_2, \ldots, x_n)$, where the $x_i$ are the independent variables. In terms of the TFC, the functions $f_i$ can be expressed as individual constrained expressions, and therefore the extension to multidimensional functions only involves extending the original method developed in Section 2 for $f(x)$ to $f(x_1, x_2, \ldots, x_n)$. Once completed, the extension to the original definition of $F$ is immediate: simply write a multivariate constrained expression for every $f_i$ in $F$.
In the following section, the multivariate TFC is developed using a recursive application of the univariate TFC. In this manner, it can be shown that this approach is a generalization of the original theory, and that all mathematical proofs for the univariate constrained expressions can easily be extended to the multivariate constrained expressions. Then, a tensor form of the multivariate constrained expression is introduced by simplifying the recursive method. The tensor formulation provides a succinct way to write multivariate constrained expressions.

4.1. Recursive Application of Univariate TFC

As discussed above, our extension to the multivariate case is concerned with deriving the constrained expression for the form $f = f(x_1, x_2, \ldots, x_n)$. For a set of constraints in the multivariate case, one can first create the constrained expression for all constraints on $x_1$ using the univariate TFC formulation. The resulting univariate constrained expression, which we denote as ${}^{1}f$, is then used as the free function in a constrained expression that includes all the constraints on $x_2$ to produce the expression ${}^{2}f$. This method carries on until the final independent variable, $x_n$, is reached; the resulting expression ${}^{n}f = f$ is the multivariate constrained expression.
This concept is best shown through some simple examples. These examples have two spatial dimensions and only one dependent variable (i.e., $F: \mathbb{R}^2 \to \mathbb{R}$), and we adopt the following notation:
$$F = f_1 := u(x_1, x_2) := u(x, y).$$

4.1.1. Multivariate Example # 1: Value and Derivative Constraints

Take for example the following constraints in two dimensions,
$$u(0, y) = \sin(2\pi y), \qquad u_x(0, y) = 0, \qquad u(x, 0) = x^2, \qquad u(x, 1) = \cos(x) - 1.$$
First, the constrained expression is built for the constraints involving $x$. Using the univariate TFC, this can be written as,
$${}^{1}u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\, g_x(0, y).$$
Then, ${}^{1}u(x, y)$ is used as the free function in the constrained expression involving the constraints on $y$. Since this problem is two-dimensional, the resultant expression is the multivariate TFC constrained expression.
$$u(x, y, {}^{1}u(x, y)) = {}^{1}u(x, y) + (1 - y)\big(x^2 - {}^{1}u(x, 0)\big) + y\big(\cos(x) - 1 - {}^{1}u(x, 1)\big) \tag{4}$$
Substituting ${}^{1}u$ into Equation (4) and simplifying yields,
$$u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\, g_x(0, y) + (1 - y)\big(x^2 - g(x, 0) + g(0, 0) + x\, g_x(0, 0)\big) + y\big(\cos(x) - 1 - g(x, 1) + g(0, 1) + x\, g_x(0, 1)\big). \tag{5}$$
Alternatively, one could first write the expression for the constraints on y,
$${}^{2}u(x, y, g(x, y)) = g(x, y) + (1 - y)\big(x^2 - g(x, 0)\big) + y\big(\cos(x) - 1 - g(x, 1)\big),$$
and use ${}^{2}u(x, y)$ as the free function in a constrained expression for the constraints on $x$,
$$u(x, y, {}^{2}u(x, y)) = {}^{2}u(x, y) + \sin(2\pi y) - {}^{2}u(0, y) - x\, ({}^{2}u)_x(0, y). \tag{6}$$
Substituting ${}^{2}u(x, y)$ into Equation (6) and simplifying yields,
$$u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\, g_x(0, y) + (1 - y)\big(x^2 - g(x, 0) + g(0, 0) + x\, g_x(0, 0)\big) + y\big(\cos(x) - 1 - g(x, 1) + g(0, 1) + x\, g_x(0, 1)\big),$$
the exact same result as in Equation (5). Therefore, it does not matter in what order recursive univariate TFC is applied to produce multivariate constrained expressions, as the final result will be the same.
Figure 2 shows the constrained expression evaluated with the free function $g(x, y) = x^2\cos(y) + 4$. The constraints that can be visualized easily are shown as black lines. As expected, the TFC constrained expression satisfies these constraints exactly.
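As a quick numerical spot-check (a sketch added here, not part of the original text), the snippet below evaluates Equation (5) with the free function used in Figure 2 and confirms the three value constraints; the derivative constraint $u_x(0, y) = 0$ can be checked the same way with a finite difference.

```python
import numpy as np

# Free function from Figure 2 and its analytic partial derivative in x.
g  = lambda x, y: x**2 * np.cos(y) + 4
gx = lambda x, y: 2 * x * np.cos(y)

def u(x, y):
    # Constrained expression of Equation (5).
    return (g(x, y) + np.sin(2 * np.pi * y) - g(0, y) - x * gx(0, y)
            + (1 - y) * (x**2 - g(x, 0) + g(0, 0) + x * gx(0, 0))
            + y * (np.cos(x) - 1 - g(x, 1) + g(0, 1) + x * gx(0, 1)))

t = np.linspace(0, 1, 5)
print(np.max(np.abs(u(0, t) - np.sin(2 * np.pi * t))))  # ~0
print(np.max(np.abs(u(t, 0) - t**2)))                   # ~0
print(np.max(np.abs(u(t, 1) - (np.cos(t) - 1))))        # ~0
```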

4.1.2. Multivariate Example # 2: Linear Constraints

Take for example the following constraints in two dimensions,
$$u(0, y) = y^2\sin(\pi y), \qquad u(1, y) + u(2, y) = y\sin(\pi y), \qquad u_y(x, 0) = 0, \qquad u(x, 0) = u(x, 1).$$
As in the first example, the univariate constrained expression is built for the constraints in $x$,
$${}^{1}u(x, y, g(x, y)) = g(x, y) + \frac{3 - 2x}{3}\big(y^2\sin(\pi y) - g(0, y)\big) + \frac{x}{3}\big(y\sin(\pi y) - g(1, y) - g(2, y)\big).$$
Then, ${}^{1}u(x, y)$ is used as the free function in the constrained expression for the constraints in $y$,
$$u(x, y, g(x, y)) = {}^{1}u(x, y) - (y - y^2)\, {}^{1}u_y(x, 0) - y^2\big({}^{1}u(x, 1) - {}^{1}u(x, 0)\big).$$
Substituting in ${}^{1}u$ and simplifying yields,
$$u(x, y, g(x, y)) = g(x, y) + (y - y^2)\left[\frac{3 - 2x}{3}\, g_y(0, 0) + \frac{x}{3}\big(g_y(1, 0) + g_y(2, 0)\big) - g_y(x, 0)\right] - y^2\left[\frac{3 - 2x}{3}\big(g(0, 0) - g(0, 1)\big) + \frac{x}{3}\big(g(1, 0) + g(2, 0) - g(1, 1) - g(2, 1)\big) - g(x, 0) + g(x, 1)\right] + \frac{3 - 2x}{3}\big(y^2\sin(\pi y) - g(0, y)\big) + \frac{x}{3}\big(y\sin(\pi y) - g(1, y) - g(2, y)\big). \tag{7}$$
Just as in the previous example, one could first write the constrained expression for the constraints in $y$, call it ${}^{2}u(x, y)$, and then use ${}^{2}u(x, y)$ as the free function in the constrained expression for the constraints in $x$: the result, after simplifying, would be identical to Equation (7). Figure 3 shows the constrained expression for the specific $g(x, y) = x^2\cos(y) + \sin(2x)$, where the blue line signifies the constraint on $u(0, y)$, the black lines signify the derivative constraint on $u_y(x, 0)$, and the magenta lines signify the relative constraint $u(x, 0) = u(x, 1)$. The linear constraint $u(1, y) + u(2, y) = y\sin(\pi y)$ is not easily visualized, but is nonetheless satisfied by the constrained expression.
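The sketch below (an added sanity check, using the free function of Figure 3) confirms that Equation (7) satisfies the value, linear, and relative constraints for this example; the derivative constraint can be checked analogously with a finite difference.

```python
import numpy as np

# Free function from Figure 3 and its analytic partial derivative in y.
g  = lambda x, y: x**2 * np.cos(y) + np.sin(2 * x)
gy = lambda x, y: -x**2 * np.sin(y)

def u(x, y):
    # Constrained expression of Equation (7).
    p1, p2 = (3 - 2 * x) / 3, x / 3
    return (g(x, y)
            + (y - y**2) * (p1 * gy(0, 0) + p2 * (gy(1, 0) + gy(2, 0)) - gy(x, 0))
            - y**2 * (p1 * (g(0, 0) - g(0, 1))
                      + p2 * (g(1, 0) + g(2, 0) - g(1, 1) - g(2, 1))
                      - g(x, 0) + g(x, 1))
            + p1 * (y**2 * np.sin(np.pi * y) - g(0, y))
            + p2 * (y * np.sin(np.pi * y) - g(1, y) - g(2, y)))

t = np.linspace(0, 1, 5)
print(np.max(np.abs(u(0, t) - t**2 * np.sin(np.pi * t))))         # ~0
print(np.max(np.abs(u(1, t) + u(2, t) - t * np.sin(np.pi * t))))  # ~0
print(np.max(np.abs(u(t, 0) - u(t, 1))))                          # ~0
```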

4.1.3. Multivariate Constrained Expression Theorems

Theorem 4.
For any function, f ( x ) , satisfying the constraints, there exists at least one free function, g ( x ) , such that the multivariate TFC constrained expression u ( x , g ( x ) ) = f ( x ) .
Proof. 
Based on Theorem 1, the univariate constrained expression will return the free function if the free function satisfies the constraints. Let ${}^{1}u(\mathbf{x})$ represent the univariate constrained expression for the independent variable $x_1$ that uses the free function $g(\mathbf{x})$, let ${}^{2}u(\mathbf{x})$ represent the univariate constrained expression for the independent variable $x_2$ that uses the free function ${}^{1}u(\mathbf{x})$, and so on up to ${}^{n}u(\mathbf{x})$, which is simply the total constrained expression $u(\mathbf{x})$. If we choose $g(\mathbf{x}) = f(\mathbf{x})$, then based on Theorem 1, ${}^{1}u(\mathbf{x}) = f(\mathbf{x})$. Applying Theorem 1 recursively leads to ${}^{2}u(\mathbf{x}) = f(\mathbf{x})$ and so on until ${}^{n}u(\mathbf{x}) = u(\mathbf{x}) = f(\mathbf{x})$. Hence, for any function satisfying the constraints, $f(\mathbf{x})$, there exists a free function, $g(\mathbf{x}) = f(\mathbf{x})$, such that the multivariate constrained expression is equal to the function satisfying the constraints, i.e., $u(\mathbf{x}, f(\mathbf{x})) = f(\mathbf{x})$. □
Theorem 5.
The TFC multivariate constrained expression is a projection functional.
Proof. 
To prove Theorem 5, one must show that $u(\mathbf{x}, u(\mathbf{x}, g(\mathbf{x}))) = u(\mathbf{x}, g(\mathbf{x}))$. By definition, the constrained expression returns a function that satisfies the constraints. In other words, for any $g(\mathbf{x})$, $u(\mathbf{x}, g(\mathbf{x}))$ is a function that satisfies the constraints. From Theorem 4, if the free function used in the constrained expression satisfies the constraints, then the constrained expression returns that free function exactly. Hence, if the constrained expression functional is given itself as the free function, it will simply return itself. □
Theorem 6.
For a given function, f ( x ) , satisfying the constraints, the free function, g ( x ) , in the TFC constrained expression u ( x , g ( x ) ) = f ( x ) is not unique. In other words, the multivariate TFC constrained expression is a surjective functional.
Proof. 
Since each expression ${}^{i}u(\mathbf{x})$ used in deriving the multivariate constrained expression is derived through the univariate formulation, the results of the proof of Theorem 3 apply for each ${}^{i}u(\mathbf{x})$, and therefore the free function $g(\mathbf{x})$ is not unique. □
Just like in the univariate case, this proof has immediate implications when using the constrained expression for optimization. Through the recursive application of the univariate TFC approach, any terms in $g(\mathbf{x})$ that are linearly dependent on the support functions, $s_i(x_1), s_j(x_2), \ldots, s_k(x_n)$, will not contribute to the solution. In the multivariate case, this also includes products of the support functions that include one and only one support function from each independent variable, e.g., $s_i(x_1)\, s_j(x_2)\cdots s_k(x_n)$.
In addition, just as in the univariate case, Theorems 4, 5, and 6 allow for a more rigorous definition of the multivariate TFC constrained expression. The multivariate TFC constrained expression is a surjective, projection functional whose domain is the space of all real-valued functions that are defined at the constraints and whose codomain is the space of all real-valued functions that satisfy the constraints.

4.2. Tensor Form

Recursive applications of the univariate TFC lead to expressions that lend themselves nicely to mathematical proofs, such as those in the previous section. However, for applications, it is typically more convenient to express the constrained expression in a more compact form. Conveniently, the multivariate constrained expressions that are formed from recursive applications of the univariate TFC can be succinctly expressed in the following tensor form,
$$u(\mathbf{x}) = g(\mathbf{x}) + \mathcal{M}_{i_1 i_2 \cdots i_n}\big(\rho(\mathbf{x}, g(\mathbf{x}))\big)\,\Phi_{i_1}(x_1)\,\Phi_{i_2}(x_2)\cdots\Phi_{i_n}(x_n),$$
where $i_1, i_2, \ldots, i_n$ are $n$ indices associated with the $n$ dimensions that have constraints, $\mathcal{M}$ is an $n$-dimensional tensor whose elements are based on the projection functionals, $\rho(\mathbf{x}, g(\mathbf{x}))$, and the $n$ vectors $\Phi$ are vectors whose elements are based on the switching functions for the associated dimension.
The M tensor can be constructed using a simple two-step process. Note that the arguments of functionals are dropped in this explanation for clarity.
  • The elements of the first-order sub-tensors of $\mathcal{M}$, acquired by setting all but one index equal to one, are a zero followed by the projection functionals for the dimension associated with that index. Mathematically,
    $$\mathcal{M}_{1 \cdots i_k \cdots 1} = \{0, \; {}^{k}\rho_1, \; \ldots, \; {}^{k}\rho_{\ell_k}\},$$
    where ${}^{k}\rho_j$ indicates the $j$-th projection functional of the $k$-th independent variable and $\ell_k$ is the number of constraints associated with the $k$-th independent variable.
  • The remaining elements of the $\mathcal{M}$ tensor, those that have more than one index not equal to one, are the geometric intersection of the associated projection functionals multiplied by a sign (− or +). Mathematically, this can be written as,
    $$\mathcal{M}_{i_1 i_2 \cdots i_n} = {}^{j}\mathfrak{C}_{i_j - 1}\Big[\cdots {}^{k}\mathfrak{C}_{i_k - 1}\big[{}^{h}\rho_{i_h - 1}\big]\cdots\Big](-1)^{m+1}, \tag{9}$$
    where $i_j, i_k, \cdots, i_h$ are the indices of $\mathcal{M}_{i_1 i_2 \cdots i_n}$ that are not equal to one and $m$ is equal to the number of non-one indices. If the constraint functions and free function are differentiable up to the order of the derivatives required to compute Equation (9), then by multiple applications of Clairaut’s theorem the constraint operators can be freely permuted [9]. For example, Equation (9) could be re-written as,
    $$\mathcal{M}_{i_1 i_2 \cdots i_n} = {}^{h}\mathfrak{C}_{i_h - 1}\Big[\cdots {}^{j}\mathfrak{C}_{i_j - 1}\big[{}^{k}\rho_{i_k - 1}\big]\cdots\Big](-1)^{m+1}.$$
The elements of the vectors $\Phi_{i_k}$ are simply composed of a 1 followed by the switching functions associated with the $k$-th independent variable. Mathematically,
$$\Phi_{i_k} = \{1, \; {}^{k}\phi_1, \; \ldots, \; {}^{k}\phi_{\ell_k}\},$$
where ${}^{k}\phi_j$ denotes the $j$-th switching function of the $k$-th independent variable.
To solidify the reader’s understanding of the tensor form explained above, the constrained expressions for the two multivariate examples are re-derived using the tensor form.

4.2.1. Multivariate Example # 1: Value and Derivative Constraints Using the Tensor Form

The constraints from the first multivariate example are copied below for the reader’s convenience.
$$u(0, y) = \sin(2\pi y), \qquad u_x(0, y) = 0, \qquad u(x, 0) = x^2, \qquad u(x, 1) = \cos(x) - 1.$$
The projection functionals are defined based on the constraints,
$${}^{1}\rho_1(x, y, g(x, y)) = \sin(2\pi y) - g(0, y), \qquad {}^{1}\rho_2(x, y, g(x, y)) = -g_x(0, y),$$
$${}^{2}\rho_1(x, y, g(x, y)) = x^2 - g(x, 0), \qquad {}^{2}\rho_2(x, y, g(x, y)) = \cos(x) - 1 - g(x, 1).$$
Then, the first step in constructing the M tensor can be completed.
$$\mathcal{M}_{ij}(x, y, g(x, y)) = \begin{bmatrix} 0 & x^2 - g(x, 0) & \cos(x) - 1 - g(x, 1) \\ \sin(2\pi y) - g(0, y) & \cdot & \cdot \\ -g_x(0, y) & \cdot & \cdot \end{bmatrix}$$
The remaining elements of the M tensor are found in step 2 by calculating the geometric intersection of the projection functionals. For example,
$$\mathcal{M}_{22} = {}^{1}\mathfrak{C}_1\big[{}^{2}\rho_1\big](-1)^{3} = -\big(x^2 - g(x, 0)\big)\Big|_{x = 0} = g(0, 0) = {}^{2}\mathfrak{C}_1\big[{}^{1}\rho_1\big](-1)^{3} = -\big(\sin(2\pi y) - g(0, y)\big)\Big|_{y = 0} = g(0, 0),$$
where functional arguments have been dropped for clarity. The remaining elements are computed in a similar fashion to produce,
$$\mathcal{M}_{ij}(x, y, g(x, y)) = \begin{bmatrix} 0 & x^2 - g(x, 0) & \cos(x) - 1 - g(x, 1) \\ \sin(2\pi y) - g(0, y) & g(0, 0) & g(0, 1) \\ -g_x(0, y) & g_x(0, 0) & g_x(0, 1) \end{bmatrix}.$$
The Φ vectors are assembled by concatenating a 1 with the switching functions for that independent variable. Hence,
$$\Phi_i(x) = \begin{Bmatrix} 1 \\ 1 \\ x \end{Bmatrix}, \qquad \Phi_j(y) = \begin{Bmatrix} 1 \\ 1 - y \\ y \end{Bmatrix}.$$
Now, the tensor form of the constrained expression can be compactly written as,
$$u(x, y, g(x, y)) = g(x, y) + \mathcal{M}_{ij}(x, y, g(x, y))\,\Phi_i(x)\,\Phi_j(y).$$
Note that expanding this expression produces the exact same constrained expression as the recursive method.
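As an added illustration of how compact the tensor form is in practice, the sketch below evaluates it at a single point with NumPy's einsum, using the $\mathcal{M}$ and $\Phi$ entries derived above and an arbitrary free function.

```python
import numpy as np

# Arbitrary free function and its partial derivative in x.
g  = lambda x, y: x**2 * np.cos(y) + 4
gx = lambda x, y: 2 * x * np.cos(y)

def u(x, y):
    # Tensor form u = g + M_ij Phi_i(x) Phi_j(y) for Example 1.
    M = np.array([
        [0.0,                             x**2 - g(x, 0),  np.cos(x) - 1 - g(x, 1)],
        [np.sin(2 * np.pi * y) - g(0, y), g(0, 0),         g(0, 1)],
        [-gx(0, y),                       gx(0, 0),        gx(0, 1)]])
    Phi_x = np.array([1.0, 1.0, x])
    Phi_y = np.array([1.0, 1.0 - y, y])
    return g(x, y) + np.einsum('ij,i,j->', M, Phi_x, Phi_y)

print(u(0.0, 0.25))   # ~1.0, i.e., sin(2*pi*0.25)
print(u(0.7, 0.0))    # ~0.49, i.e., x^2 at x = 0.7
```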

4.2.2. Multivariate Example # 2: Linear Constraints Using the Tensor Form

The constraints from the second multivariate example are copied below for the reader’s convenience.
$$u(0, y) = y^2\sin(\pi y), \qquad u(1, y) + u(2, y) = y\sin(\pi y), \qquad u_y(x, 0) = 0, \qquad u(x, 0) = u(x, 1).$$
The projection functionals are defined based on the constraints,
$${}^{1}\rho_1(x, y, g(x, y)) = y^2\sin(\pi y) - g(0, y), \qquad {}^{1}\rho_2(x, y, g(x, y)) = y\sin(\pi y) - g(1, y) - g(2, y),$$
$${}^{2}\rho_1(x, y, g(x, y)) = -g_y(x, 0), \qquad {}^{2}\rho_2(x, y, g(x, y)) = g(x, 0) - g(x, 1),$$
and the first step in constructing the M tensor is complete,
$$\mathcal{M}_{ij}(x, y, g(x, y)) = \begin{bmatrix} 0 & -g_y(x, 0) & g(x, 0) - g(x, 1) \\ y^2\sin(\pi y) - g(0, y) & \cdot & \cdot \\ y\sin(\pi y) - g(1, y) - g(2, y) & \cdot & \cdot \end{bmatrix}.$$
Then, just as in the previous example, remaining elements of the M tensor are found by calculating the geometric intersection of the projection functionals. For example,
$$\mathcal{M}_{33} = {}^{1}\mathfrak{C}_2\big[{}^{2}\rho_2\big](-1)^{3} = -\big(g(x, 0) - g(x, 1)\big)\Big|_{x = 1} - \big(g(x, 0) - g(x, 1)\big)\Big|_{x = 2} = g(1, 1) - g(1, 0) + g(2, 1) - g(2, 0)$$
$$= {}^{2}\mathfrak{C}_2\big[{}^{1}\rho_2\big](-1)^{3} = -\big(y\sin(\pi y) - g(1, y) - g(2, y)\big)\Big|_{y = 1} + \big(y\sin(\pi y) - g(1, y) - g(2, y)\big)\Big|_{y = 0} = g(1, 1) + g(2, 1) - g(1, 0) - g(2, 0),$$
where functional arguments have been dropped for clarity. The complete M tensor for this example is,
$$\mathcal{M}_{ij}(x, y, g(x, y)) = \begin{bmatrix} 0 & -g_y(x, 0) & g(x, 0) - g(x, 1) \\ y^2\sin(\pi y) - g(0, y) & g_y(0, 0) & g(0, 1) - g(0, 0) \\ y\sin(\pi y) - g(1, y) - g(2, y) & g_y(1, 0) + g_y(2, 0) & g(1, 1) + g(2, 1) - g(1, 0) - g(2, 0) \end{bmatrix}.$$
The Φ vectors are again assembled by concatenating a 1 with the switching functions for the associated independent variable.
$$\Phi_i(x) = \left\{1, \; \frac{3 - 2x}{3}, \; \frac{x}{3}\right\}, \qquad \Phi_j(y) = \left\{1, \; y - y^2, \; y^2\right\}.$$
Now, the tensor form of the constrained expression can be compactly written as,
$$u(x, y) = g(x, y) + \mathcal{M}_{ij}(x, y, g(x, y))\,\Phi_i(x)\,\Phi_j(y).$$
Note that expanding this expression produces the exact same constrained expression as the recursive method.

5. Applications to PDEs

In this article, orthogonal bases in n-dimensions, namely Chebyshev orthogonal polynomials of the first kind and Legendre orthogonal polynomials, are leveraged to approximate the solutions of PDEs with the TFC. For completeness, the equations to compute these polynomials are provided in Appendix A.
In general, multivariate basis sets can be created by using all possible products of the functions in the univariate basis sets. The measure that makes up the new multivariate basis set will be the product of measures of the univariate basis sets, and the domain of the multivariate basis set will be the union of the domains that make up the univariate basis sets. More details and insights regarding two-dimensional and n-dimensional orthogonal basis functions can be found in Refs. [14,15,16].
In this article, the free function will be defined as a linear combination of some multivariate basis set with unknown coefficients. The resultant constrained expression and its derivatives are substituted into the PDE. Since the free function consists of known basis functions and unknown coefficients, the PDE is transformed into an algebraic equation. This algebraic equation is discretized over the problem domain, and the unknown coefficients are used to minimize the residual of the PDE over the set of discretized points. The following subsections provide a detailed explanation of each major step, and a summary of the entire process is given in Figure 4.

5.1. Defining the Free Function g ( x )

Let us define $n$ independent variables in the vector $\mathbf{x} = \{x_1, \ldots, x_k, \ldots, x_n\}^{\mathsf{T}}$. Moreover, let the orthogonal basis set for each of these independent variables be denoted by $B_k^m$, where the superscript $m$ denotes the $m$-th basis function and the subscript $k$ denotes the $k$-th independent variable. For example, the third basis function for $x_2$ would be denoted as $B_2^3$. The domain of the multivariate basis will be denoted by $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_n$, where the generic $\Omega_k$ denotes the domain of the $k$-th basis set. Then, an arbitrary basis function for the multivariate domain can be written as,
$$\mathcal{B} = B_1^{m_1} B_2^{m_2} \cdots B_n^{m_n}, \tag{10}$$
where $m_1, \ldots, m_n \in \mathbb{Z}^+$. In other words, Equation (10) generates a multivariate basis via a tensor product of univariate basis functions [17]. If one were to use all possible products of the functions in the individual basis sets, i.e., use all possible combinations of $m_1, \ldots, m_n \in \mathbb{Z}^+$, an infinite set, then the resulting multivariate basis would span the union of the individual univariate basis sets’ function spaces. However, when creating this expansion, one must pay attention to the results of Theorem 6. Theorem 6 shows that the functions used to construct the constrained expression must be omitted from the formulation of $\mathcal{B}$. This ensures a full-rank system in the later optimization steps.
As previously stated, the free function is chosen to be a linear combination of this multivariate basis with unknown coefficients. Mathematically, this can be expressed as,
$$g(\mathbf{x}) = h^{\mathsf{T}}\xi, \tag{11}$$
where $h \in \mathbb{R}^{\prod_{k=1}^{n} m_k}$ is a vector whose elements are elements of $\mathcal{B}$, and $\xi$ is a same-sized vector of the unknown coefficients.
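A minimal two-dimensional sketch of Equation (11) follows, using NumPy's Chebyshev utilities; the degree cap and the total-degree truncation are illustrative choices, and in a real solve the products that are linearly dependent on the support functions would be removed per Theorem 6 (kept here for brevity).

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def h_vector(z1, z2, deg):
    # Collect products T_m1(z1) * T_m2(z2) up to a chosen total degree.
    H = []
    for m1 in range(deg + 1):
        for m2 in range(deg + 1 - m1):
            c1 = np.zeros(m1 + 1); c1[m1] = 1.0
            c2 = np.zeros(m2 + 1); c2[m2] = 1.0
            H.append(cheb.chebval(z1, c1) * cheb.chebval(z2, c2))
    return np.array(H)

# g(z1, z2) = h^T xi for some coefficient vector xi (random here for demo).
xi = np.random.default_rng(0).standard_normal(len(h_vector(0.0, 0.0, 5)))
g = lambda z1, z2: h_vector(z1, z2, 5) @ xi
```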

5.2. Derivatives of the Free Function

In most applications, the domains $\Omega_k$ of the basis sets do not coincide with the domain of the problem (e.g., Chebyshev and Legendre polynomials are defined on $[-1, +1]$). Let the basis functions be defined on $z \in [z_0, z_f]$ and the problem be defined on $x_k \in [x_{k_0}, x_{k_f}]$, where $k$ corresponds to the dimension. In order to use the basis functions, a map between the basis function domain and the problem domain must be created. The simplest map is a linear one,
$$z = z_0 + \frac{z_f - z_0}{x_{k_f} - x_{k_0}}\,(x_k - x_{k_0}) \qquad\Longleftrightarrow\qquad x_k = x_{k_0} + \frac{x_{k_f} - x_{k_0}}{z_f - z_0}\,(z - z_0). \tag{12}$$
The subsequent derivatives of the free function can be computed,
$$\frac{\partial^n g}{\partial x_k^n} = \left(\frac{\mathrm{d}z}{\mathrm{d}x_k}\right)^{n} \frac{\partial^n h^{\mathsf{T}}}{\partial z^n}\,\xi,$$
but by defining,
$$c_k := \frac{\mathrm{d}z}{\mathrm{d}x_k} = \frac{z_f - z_0}{x_{k_f} - x_{k_0}},$$
the derivative computations can be simply written as,
$$\frac{\partial^n g}{\partial x_k^n} = c_k^n\, \frac{\partial^n h^{\mathsf{T}}}{\partial z^n}\,\xi.$$
The immediate result is that if the derivative of the function $g(\mathbf{x})$ is taken with respect to the $x_k$ variable, then, along with taking the derivatives of the basis functions with respect to the corresponding basis variable, the product must also be multiplied by the $c_k$ mapping coefficient. From this, it follows that a partial derivative with respect to multiple independent variables (e.g., $x_1$ and $x_2$) can be written as,
$$\frac{\partial^2 g}{\partial x_1 \partial x_2} = c_1 c_2\, \frac{\partial^2 h^{\mathsf{T}}}{\partial z_1 \partial z_2}\,\xi.$$
This process applies to any derivative of the free function.
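As a small illustration of the chain-rule scaling above (a sketch with illustrative endpoint values), each $\partial/\partial x_k$ of a Chebyshev term picks up one factor of $c_k$:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Basis lives in z on [-1, 1]; the problem lives in x on [x0, xf].
x0, xf = 0.0, 2.0
c = 2.0 / (xf - x0)                  # c = (z_f - z_0) / (x_f - x_0)

coef = np.zeros(6); coef[5] = 1.0    # a single Chebyshev term, T_5(z)
x = np.linspace(x0, xf, 7)
z = -1.0 + c * (x - x0)              # linear map x -> z

# Second derivative in x: apply the chain rule twice, hence c**2.
d2T_dx2 = c**2 * cheb.chebval(z, cheb.chebder(coef, 2))
```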

5.3. Discretization

In order to solve problems numerically, the problem domain (and therefore the basis function domain) must be discretized. Since this article uses Chebyshev and Legendre orthogonal polynomials, an optimal discretization scheme is the Chebyshev-Gauss-Lobatto nodes [18,19]. For $N + 1$ points, the discrete points are calculated as,
$$z_j = -\cos\left(\frac{j\pi}{N}\right) \qquad\text{for}\qquad j = 0, 1, 2, \ldots, N. \tag{14}$$
When compared with the uniform distribution, the collocation point distribution results in a much slower increase of the condition number of the matrix to be inverted in the least-squares as the number of basis functions, m, increases. The collocation points can be realized in the problem domain through the relationship provided in Equation (12).
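A minimal sketch of this discretization step follows, generating the nodes of Equation (14) and mapping them into a problem domain via Equation (12); the example endpoints are illustrative.

```python
import numpy as np

def cgl_points(N):
    # Chebyshev-Gauss-Lobatto nodes on [-1, 1], Equation (14).
    return -np.cos(np.arange(N + 1) * np.pi / N)

def to_problem_domain(z, x0, xf, z0=-1.0, zf=1.0):
    # Linear map of Equation (12) from the basis domain to the problem domain.
    return x0 + (xf - x0) / (zf - z0) * (z - z0)

z = cgl_points(10)
x = to_problem_domain(z, 0.0, 2 * np.pi)   # e.g., the y-domain of Problem #2
```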

5.4. Summary of the Major Steps to Solving PDEs

To summarize, consider a PDE for the function u ( x ) such that
$$F\left(\mathbf{x};\; \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_n};\; \frac{\partial^2 u}{\partial x_1 \partial x_1}, \ldots, \frac{\partial^2 u}{\partial x_1 \partial x_n};\; \ldots\right) = 0, \tag{15}$$
subject to k constraints. In general, the TFC approach to solving differential equations can be broken down into four major steps: (1) derive the constrained expression, (2) define the free function, (3) discretize the domain, and (4) minimize the residual of the differential equation. The flow chart in Figure 4 outlines these steps with all of the relevant equations.
In order to approximate the solution of Equation (15), the constrained expression must be created: this is done using the theory developed in the earlier sections. This constrained expression embeds the constraints of the differential equation and, by substituting the constrained expression into Equation (15), transforms the original differential equation into an algebraic equation of the free function subject to no constraints. This transformed expression is denoted by the (∼) symbol. Next, by defining the free function $g(\mathbf{x})$ according to Equation (11), the differential equation becomes an algebraic equation in the unknown coefficient vector $\xi$. Lastly, this equation is discretized according to Equation (14), leading to a linear system in $\xi$ if the PDE is linear and a nonlinear system in $\xi$ if the PDE is nonlinear, which can be solved by many available optimization techniques. Previous work [2,3] as well as the results provided in the following section utilize a least-squares approach.
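For the nonlinear case, a hypothetical Gauss-Newton style loop for step (4) is sketched below; $L(\xi)$ denotes the vector of PDE residuals at the discretization points and $J(\xi)$ its Jacobian with respect to $\xi$ (obtained in this article's results via automatic differentiation), both placeholders built by the earlier steps. The pseudo-inverse mirrors the pinv-based solves reported in Section 6.

```python
import numpy as np

def nonlinear_least_squares(L, J, xi0, tol=1e-12, max_iter=20):
    # Iterate xi <- xi - pinv(J) L until the update is negligible.
    xi = xi0
    for _ in range(max_iter):
        dxi = np.linalg.pinv(J(xi)) @ L(xi)
        xi = xi - dxi
        if np.max(np.abs(dxi)) < tol:
            break
    return xi
```

When the PDE is linear, the loop collapses to a single least-squares solve of the discretized system.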

6. Results

This section applies the method to two PDEs. For each problem, the PDE and associated constraints are summarized along with the equations needed to construct the constrained expression. All numerical results were performed in Python, and utilized the autograd package [20] to perform derivatives via automatic differentiation [21]. All computations were performed on a MacBook Pro (2016) macOS Version 10.15.3 with a 3.3 GHz Dual-Core Intel® Core™ i7 and with 16 GB of RAM. All run times were calculated using the default_timer function in the Python timeit package. In all cases, matrix inversion was handled with NumPy’s pinv function.
Consequently, two specific computation times are provided (and tabulated in Appendix B), (1) the full run-time of the problem and (2) the computational time associated with the least-squares. In these tables, the full run-time is drastically affected by the computational overhead from autograd, with full run times on the order of 0.5–50 s while the computation time for the least squares and nonlinear least squares is on the order of 0.5–185 milliseconds.
For the results in the following sections, the accuracy of the method was determined according to the following process:
  • Training
    (a)
    Estimate the solution of the PDE (i.e., determine the coefficients $\xi$) using the TFC method with $n$ points per independent variable and basis functions up to the $m$-th degree.
    (b)
    Maximum training error: Using the training set discretization, determine the absolute error of the estimated solution compared with the true solution and record the maximum value.
  • Test
    (a)
    Using the converged $\xi$ parameters from the training phase, refine the discretization of the domain with $n = 100$ equally spaced points per dimension and evaluate the estimated solution at these points.
    (b)
    Maximum test error: Using the test set discretization, determine the absolute error of the estimated solution compared with the true solution and record the maximum value.
Additionally, for both numerical tests, the method was completed over a varying range of discretization points per independent variable, n, and degree of basis expansion, m. The results in this section and in Appendix B are reported as a function of these two parameters. For example, a value of n = 5 would imply a 5 × 5 grid or 25 points. Likewise, a value of m = 5 would imply that all the univariate basis functions, and combinations thereof, are at most quintic functions. However, the number of coefficients to be solved, and therefore the size of the matrix to be inverted, is dependent on the constrained expression, since some terms need to be removed from the expression of g ( x , y ) (see Theorem 6). The total number of basis functions associated with each degree for both Problems #1 and #2 are displayed in Table 1.

6.1. Problem # 1

Consider the PDE solved in Lagaris et al. [22], Mall & Chakraverty [23], Sun et al. [24], and Schiassi et al. [12],
u_{xx}(x, y) + u_{yy}(x, y) = e^{-x}\left(x - 2 + y^3 + 6y\right),
where x, y \in [0, 1] and subject to,
u(0, y) = y^3, \quad u(1, y) = (1 + y^3)\, e^{-1}, \quad u(x, 0) = x e^{-x}, \quad u(x, 1) = e^{-x}(x + 1),
which has the true solution u(x, y) = e^{-x}(x + y^3). Using the proposed method, the constrained expression can be derived and written in its tensor form as,
u(x, y, g(x, y)) = g(x, y) + M_{ij}(x, y, g(x, y))\, \Phi_i(x)\, \Phi_j(y),
where g ( x , y ) is defined according to Equation (11) and for these numerical tests is implemented with either Chebyshev or Legendre polynomials. Furthermore, for this problem,
M_{ij}(x, y, g(x, y)) = \begin{bmatrix}
0 & x e^{-x} - g(x, 0) & e^{-x}(x + 1) - g(x, 1) \\
y^3 - g(0, y) & g(0, 0) & -1 + g(0, 1) \\
(1 + y^3)\, e^{-1} - g(1, y) & -e^{-1} + g(1, 0) & -2 e^{-1} + g(1, 1)
\end{bmatrix}
and
\Phi_i(x) = \begin{Bmatrix} 1 & 1 - x & x \end{Bmatrix}^{\mathrm{T}}, \quad \Phi_j(y) = \begin{Bmatrix} 1 & 1 - y & y \end{Bmatrix}^{\mathrm{T}}.
It follows that the expanded constrained expression is,
u(x, y) = g(x, y) - (x - 1)\left[y\big(g(0, 1) - g(0, 0) - 1\big) + g(0, 0) + y^3\right] + (x - 1)\, g(0, y) + x\left[y\, g(1, 1) - (y - 1)\, g(1, 0)\right] - x\, g(1, y) + (y - 1)\, g(x, 0) - y\, g(x, 1) + x y \left(y^2 - 1\right) e^{-1} + e^{-x}(x + y).
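Because the expanded constrained expression is an explicit algebraic formula, its defining property, namely that the boundary conditions hold for any free function, can be checked numerically. The sketch below (illustrative code, not from the paper) does so for an arbitrarily chosen g:

```python
import numpy as np

def g(x, y):
    # Any free function works; this choice is arbitrary.
    return np.sin(3 * x) * np.exp(y) + x * y**2

def u(x, y):
    # Expanded constrained expression for Problem #1.
    return (g(x, y)
            - (x - 1) * (y * (g(0, 1) - g(0, 0) - 1) + g(0, 0) + y**3)
            + (x - 1) * g(0, y)
            + x * (y * g(1, 1) - (y - 1) * g(1, 0)) - x * g(1, y)
            + (y - 1) * g(x, 0) - y * g(x, 1)
            + x * y * (y**2 - 1) / np.e + np.exp(-x) * (x + y))

s = np.linspace(0, 1, 7)
print(np.abs(u(0, s) - s**3).max(),                  # u(0, y) = y^3
      np.abs(u(1, s) - (1 + s**3) / np.e).max(),     # u(1, y) = (1 + y^3) e^-1
      np.abs(u(s, 0) - s * np.exp(-s)).max(),        # u(x, 0) = x e^-x
      np.abs(u(s, 1) - np.exp(-s) * (s + 1)).max())  # u(x, 1) = e^-x (x + 1)
```

All four printed maxima are at machine precision, independent of the choice of g.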
Figure 5 shows the analytical solution for this problem, and Table 2 and Table 3 display the maximum error over the domain for the test set using Chebyshev and Legendre polynomials, respectively.
It can be seen that the difference in accuracy between Chebyshev and Legendre polynomials is negligible; the solutions that use Legendre polynomials are only slightly more accurate than those that use Chebyshev polynomials when higher-order terms are used.
Finally, the results of the numerical test for Problem #1 are compared to the other approaches in Table 4, where the maximum training and test errors are presented. It can be seen that this method produces an estimate at machine-level precision that is at least three orders of magnitude more accurate than the other methods. In fact, the next closest method is the TFC-based approach of Schiassi et al. [12], where the free function is expressed using an Extreme Learning Machine [25].

6.2. Problem #2

Problem #2, created by the authors, is a PDE with a linear constraint:
u_{xx}(x, y) + u_x(x, y)\, u_y(x, y) = 2\cos(y) - 2 x^3 \sin(y) \cos(y),
where (x, y) \in [0, 1] \times [0, 2\pi] and subject to,
u(0, y) = 0, \quad u(1, y) = \cos(y), \quad u(x, 0) = u(x, 2\pi),
which has the true solution u(x, y) = x^2 \cos(y).
Using the TFC, the constrained expression for these boundary conditions can be written in the tensor form as,
u(x, y, g(x, y)) = g(x, y) + M_{ij}(x, y, g(x, y))\, \Phi_i(x)\, \Phi_j(y),
where
M_{ij}(x, y, g(x, y)) = \begin{bmatrix}
0 & g(x, 0) - g(x, 2\pi) \\
-g(0, y) & g(0, 2\pi) - g(0, 0) \\
\cos(y) - g(1, y) & g(1, 2\pi) - g(1, 0)
\end{bmatrix}
and
\Phi_i(x) = \begin{Bmatrix} 1 & 1 - x & x \end{Bmatrix}^{\mathrm{T}}, \quad \Phi_j(y) = \begin{Bmatrix} 1 & \dfrac{y}{2\pi} \end{Bmatrix}^{\mathrm{T}},
or in its expanded form as,
u(x, y, g(x, y)) = g(x, y) - (1 - x)\left[\frac{y\big(g(0, 0) - g(0, 2\pi)\big)}{2\pi} + g(0, y)\right] + \frac{y\big(g(x, 0) - g(x, 2\pi)\big)}{2\pi} + x\left[-g(1, y) - \frac{y\big(g(1, 0) - g(1, 2\pi)\big)}{2\pi} + \cos(y)\right].
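As with Problem #1, the boundary conditions, including the relative constraint u(x, 0) = u(x, 2π), can be verified numerically for an arbitrary free function (again an illustrative sketch, not code from the paper):

```python
import numpy as np

def g(x, y):
    # Arbitrary free function for the check.
    return np.exp(x) * np.sin(y) + x**2 * y

def u(x, y):
    # Expanded constrained expression for Problem #2.
    w = 2 * np.pi
    return (g(x, y)
            - (1 - x) * (y * (g(0, 0) - g(0, w)) / w + g(0, y))
            + y * (g(x, 0) - g(x, w)) / w
            + x * (np.cos(y) - g(1, y) - y * (g(1, 0) - g(1, w)) / w))

s, t = np.linspace(0, 1, 7), np.linspace(0, 2 * np.pi, 7)
print(np.abs(u(0, t)).max(),                    # u(0, y) = 0
      np.abs(u(1, t) - np.cos(t)).max(),        # u(1, y) = cos(y)
      np.abs(u(s, 0) - u(s, 2 * np.pi)).max())  # u(x, 0) = u(x, 2*pi)
```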
Figure 6 shows the analytical solution of Problem #2, and Table 5 and Table 6 show the maximum test set error over the domain using Chebyshev and Legendre polynomials, respectively.
Table 5 and Table 6 show that the difference in error between Chebyshev and Legendre polynomials for Problem #2 is small; the Chebyshev polynomials perform slightly better than the Legendre polynomials as the number of points in the domain and the number of basis functions increase.

7. Conclusions

This article illustrated that the structure of the univariate TFC constrained expression is composed of a free function and constraint terms, which contain products of projection functionals and switching functions. A method to calculate the projection functionals and switching functions was demonstrated, and their properties were defined. Then, these properties were used as the foundation for mathematical proofs related to the univariate constrained expression.
In addition, the projection/switching perspective of the univariate constrained expression led directly to a multivariate extension via recursive application of the univariate theory. Since these multivariate constrained expressions were built from univariate constrained expressions, it was straightforward to extend the mathematical proofs to the multivariate case as well. In the end, it was concluded that the univariate and multivariate TFC constrained expressions are surjective projection functionals whose domain is the space of all real-valued functions that are defined at the embedded constraints and whose codomain is the space of all real-valued functions that satisfy those constraints. Additionally, a method for compactly writing multivariate constrained expressions via tensors was provided.
After introducing the multivariate TFC in this way, a methodology for solving PDEs via the TFC was presented. This methodology included choosing the free function to be a linear combination of multivariate orthogonal polynomials, discretizing the domain via collocation points, and finally minimizing the residual of the PDE via least-squares. Two example PDEs solved using this methodology were presented, and when available, the TFC solution accuracy was compared with other state-of-the-art methods.
While this article focused on using orthogonal basis functions, namely Chebyshev and Legendre polynomials, as the free function g(x), and ultimately a linear/nonlinear least-squares technique to find the unknown parameters ξ, the technique is not limited to these choices. At its heart, the TFC approach is a way to derive functionals that analytically satisfy the specified constraints. In other words, when solving ODEs or PDEs, these functionals transform a constrained optimization problem into an unconstrained optimization problem, and therefore a myriad of valid definitions of g(x) and optimization schemes exist. For example, deep neural networks, support vector machines, and extreme learning machines have been used as free functions in the past. These alternative free functions and associated optimization schemes are useful because, for sufficiently complex problems, the number of multivariate basis functions needed to estimate a PDE to sufficient accuracy can become computationally prohibitive.
As defined in this article, the TFC multivariate constrained expressions are capable of embedding value constraints, derivative constraints, and linear combinations thereof. However, other constraint types such as integral, component, and inequality constraints were not discussed. Future work will focus on incorporating these other constraint types. In addition, a more in-depth comparison between TFC and other state-of-the-art methods on a variety of PDEs is likely forthcoming.

Author Contributions

Conceptualization, C.L. and H.J.; Formal analysis, C.L. and H.J.; Methodology, C.L. and H.J.; Software, C.L. and H.J.; Supervision, D.M.; Validation, C.L. and H.J.; Writing—original draft, C.L. and H.J.; Writing—review & editing, C.L., H.J. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by NASA Space Technology Research Fellowships: Leake [NSTRF 2019] Grant # 80NSSC19K1152 and Johnston [NSTRF 2019] Grant # 80NSSC19K1149.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BVP    Boundary Value Problem
FEM    Finite Element Method
ODE    Ordinary Differential Equation
PDE    Partial Differential Equation
TFC    Theory of Functional Connections
X-TFC  Extreme Theory of Functional Connections

Appendix A. Orthogonal Polynomials

Appendix A.1. Chebyshev Orthogonal Polynomials

Chebyshev orthogonal polynomials come in two kinds, usually denoted T_k(z) for the first kind and U_k(z) for the second kind. This subsection summarizes the main properties of the first kind only, T_k(z), which are defined on the domain z \in [-1, +1] with the measure d\mu(z) = \frac{1}{\sqrt{1 - z^2}}\, dz. These polynomials can be generated using the recurrence relation,
T_{k+1} = 2 z T_k - T_{k-1}, \quad \text{starting from:} \quad T_0 = 1, \; T_1 = z.
Also, all the derivatives of Chebyshev orthogonal polynomials can be computed in a recursive way, starting from
\frac{dT_0}{dz} = 0, \quad \frac{dT_1}{dz} = 1, \quad \text{and} \quad \frac{d^d T_0}{dz^d} = \frac{d^d T_1}{dz^d} = 0 \;\; (d > 1),
while the subsequent derivatives of T_{k+1}(z) can be derived directly from the recurrence relation. For k \geq 1 they are,
\frac{dT_{k+1}}{dz} = 2\left(T_k + z\,\frac{dT_k}{dz}\right) - \frac{dT_{k-1}}{dz}
\frac{d^2 T_{k+1}}{dz^2} = 2\left(2\,\frac{dT_k}{dz} + z\,\frac{d^2 T_k}{dz^2}\right) - \frac{d^2 T_{k-1}}{dz^2}
\vdots
\frac{d^d T_{k+1}}{dz^d} = 2\left(d\,\frac{d^{d-1} T_k}{dz^{d-1}} + z\,\frac{d^d T_k}{dz^d}\right) - \frac{d^d T_{k-1}}{dz^d}, \quad (d \geq 1).
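These recursions translate directly into code. The following sketch (illustrative, not from the paper) returns the values and first d_max derivatives of T_0 through T_m at a set of points:

```python
import numpy as np

def chebyshev(z, m, d_max):
    # T[k, d] holds the d-th derivative of T_k at the points z (d = 0: value).
    z = np.asarray(z, dtype=float)
    T = np.zeros((m + 1, d_max + 1) + z.shape)
    T[0, 0] = 1.0      # T_0 = 1; all of its derivatives are zero
    if m >= 1:
        T[1, 0] = z    # T_1 = z
        if d_max >= 1:
            T[1, 1] = 1.0
    for k in range(1, m):
        T[k + 1, 0] = 2 * z * T[k, 0] - T[k - 1, 0]
        for d in range(1, d_max + 1):  # d-th derivative recursion from above
            T[k + 1, d] = 2 * (d * T[k, d - 1] + z * T[k, d]) - T[k - 1, d]
    return T

T = chebyshev(np.linspace(-1, 1, 5), m=4, d_max=2)
print(T[4, 0])  # T_4(z) = 8z^4 - 8z^2 + 1 at the sample points
```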
The inner product of two Chebyshev orthogonal polynomials satisfies the orthogonality property,
\langle T_i(z), T_j(z) \rangle = \int_{-1}^{+1} \frac{T_i(z)\, T_j(z)}{\sqrt{1 - z^2}}\, dz = \begin{cases} 0 & \text{if } i \neq j \\ \pi & \text{if } i = j = 0 \\ \pi/2 & \text{if } i = j \neq 0, \end{cases}
where the orthogonality appears when taking two distinct Chebyshev orthogonal polynomials, that is, indices i \neq j.

Appendix A.2. Legendre Orthogonal Polynomials

The Legendre orthogonal polynomials, L_k(z), are also defined on the domain z \in [-1, +1], with measure d\mu(z) = dz. These polynomials can also be generated recursively by,
L_{k+1} = \frac{2k+1}{k+1}\, z L_k - \frac{k}{k+1}\, L_{k-1}, \quad \text{starting with:} \quad L_0 = 1, \; L_1 = z. \qquad \text{(A1)}
All derivatives of Legendre orthogonal polynomials can be computed in a recursive way, starting from,
\frac{dL_0}{dz} = 0, \quad \frac{dL_1}{dz} = 1, \quad \text{and} \quad \frac{d^d L_0}{dz^d} = \frac{d^d L_1}{dz^d} = 0 \;\; (d > 1),
while the subsequent derivatives of Equation (A1) for k \geq 1 can be computed in cascade,
\frac{dL_{k+1}}{dz} = \frac{2k+1}{k+1}\left(L_k + z\,\frac{dL_k}{dz}\right) - \frac{k}{k+1}\,\frac{dL_{k-1}}{dz}
\frac{d^2 L_{k+1}}{dz^2} = \frac{2k+1}{k+1}\left(2\,\frac{dL_k}{dz} + z\,\frac{d^2 L_k}{dz^2}\right) - \frac{k}{k+1}\,\frac{d^2 L_{k-1}}{dz^2}
\vdots
\frac{d^d L_{k+1}}{dz^d} = \frac{2k+1}{k+1}\left(d\,\frac{d^{d-1} L_k}{dz^{d-1}} + z\,\frac{d^d L_k}{dz^d}\right) - \frac{k}{k+1}\,\frac{d^d L_{k-1}}{dz^d}, \quad (d \geq 1).
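The Legendre recursions admit the same transcription as the Chebyshev sketch above; only the recurrence coefficients change (again illustrative):

```python
import numpy as np

def legendre(z, m, d_max):
    # L[k, d] holds the d-th derivative of L_k at the points z (d = 0: value).
    z = np.asarray(z, dtype=float)
    L = np.zeros((m + 1, d_max + 1) + z.shape)
    L[0, 0] = 1.0
    if m >= 1:
        L[1, 0] = z
        if d_max >= 1:
            L[1, 1] = 1.0
    for k in range(1, m):
        a, b = (2 * k + 1) / (k + 1), k / (k + 1)
        L[k + 1, 0] = a * z * L[k, 0] - b * L[k - 1, 0]
        for d in range(1, d_max + 1):
            L[k + 1, d] = a * (d * L[k, d - 1] + z * L[k, d]) - b * L[k - 1, d]
    return L

L = legendre(np.linspace(-1, 1, 5), m=3, d_max=1)
print(L[3, 0])  # L_3(z) = (5z^3 - 3z)/2 at the sample points
```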
Additionally, the orthogonality of Legendre polynomials is given by,
\langle L_i(z), L_j(z) \rangle = \int_{-1}^{+1} L_i(z)\, L_j(z)\, dz = \frac{2}{2i+1}\, \delta_{ij}, \quad \text{where} \quad \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}

Appendix B. Solution Times

Appendix B.1. Problem #1 Solution Times

Table A1. Least-squares solution time using Chebyshev polynomials in milliseconds for Problem #1.

n \ m      5        10       15       20       25
5          0.46     -        -        -        -
10         0.49     1.45     -        -        -
15         0.61     1.88     4.90     -        -
20         0.68     2.42     6.50     13.83    -
25         0.92     3.93     8.23     18.07    35.63
30         0.97     4.09     13.23    22.09    50.88
Table A2. Total solution time using Chebyshev polynomials in seconds for Problem #1.

n \ m      5        10       15       20       25
5          0.053    -        -        -        -
10         0.119    0.128    -        -        -
15         0.210    0.288    0.430    -        -
20         0.399    0.584    0.824    1.114    -
25         0.697    1.011    1.582    2.344    3.609
30         1.045    1.871    2.909    4.683    8.164
Table A3. Least-squares solution time using Legendre polynomials in milliseconds for Problem #1.

n \ m      5        10       15       20       25
5          0.77     -        -        -        -
10         0.58     1.41     -        -        -
15         0.56     1.82     4.70     -        -
20         0.65     2.56     6.62     13.74    -
25         0.86     3.15     7.93     17.71    35.61
30         1.32     3.53     9.47     22.06    46.26
Table A4. Total solution time using Legendre polynomials in seconds for Problem #1.

n \ m      5        10       15       20       25
5          0.061    -        -        -        -
10         0.111    0.164    -        -        -
15         0.214    0.298    0.368    -        -
20         0.394    0.613    0.863    1.108    -
25         0.692    1.095    1.606    2.389    3.505
30         1.031    1.689    2.765    4.696    7.063

Appendix B.2. Problem #2 Solution Times

Table A5. Least-squares solution time using Chebyshev polynomials in milliseconds for Problem #2.

n \ m      5        10       15       20       25
5          1.76     -        -        -        -
10         2.14     5.56     -        -        -
15         2.38     7.79     27.30    -        -
20         2.99     10.17    36.38    71.75    -
25         3.90     13.00    42.40    93.11    144.5
30         4.62     18.20    48.68    105.7    185.8
Table A6. Total solution time using Chebyshev polynomials in seconds for Problem #2.

n \ m      5        10       15       20       25
5          0.547    -        -        -        -
10         1.905    1.384    -        -        -
15         4.045    3.337    4.460    -        -
20         7.460    6.753    9.771    10.97    -
25         11.93    12.95    17.43    21.15    22.88
30         18.94    21.98    28.90    38.92    45.82
Table A7. Least-squares solution time using Legendre polynomials in milliseconds for Problem #2.

n \ m      5        10       15       20       25
5          1.64     -        -        -        -
10         1.90     5.46     -        -        -
15         2.24     10.63    21.52    -        -
20         3.27     11.74    29.56    69.51    -
25         3.79     13.82    36.32    91.89    145.1
30         4.17     13.73    51.42    112.4    181.4
Table A8. Total solution time using Legendre polynomials in seconds for Problem #2.

n \ m      5        10       15       20       25
5          0.508    -        -        -        -
10         1.684    1.330    -        -        -
15         3.875    4.489    3.849    -        -
20         7.284    7.882    8.078    10.56    -
25         11.63    12.86    15.50    21.35    23.15
30         17.57    18.44    28.46    40.53    45.60

References

  1. Mortari, D. The Theory of Connections: Connecting Points. Mathematics 2017, 5, 57.
  2. Mortari, D. Least-Squares Solution of Linear Differential Equations. Mathematics 2017, 5, 48.
  3. Mortari, D.; Johnston, H.; Smith, L. High Accuracy Least-squares Solutions of Nonlinear Differential Equations. J. Comput. Appl. Math. 2019, 352, 293–307.
  4. Johnston, H.; Mortari, D. Least-squares Solutions of Boundary-value Problems in Hybrid Systems. arXiv 2019, arXiv:1911.04390.
  5. Furfaro, R.; Mortari, D. Least-squares Solution of a Class of Optimal Space Guidance Problems via Theory of Connections. Acta Astronaut. 2019.
  6. Johnston, H.; Schiassi, E.; Furfaro, R.; Mortari, D. Fuel-Efficient Powered Descent Guidance on Large Planetary Bodies via Theory of Functional Connections. arXiv 2020, arXiv:2001.03572.
  7. Mai, T.; Mortari, D. Theory of Functional Connections Applied to Nonlinear Programming under Equality Constraints. In Proceedings of the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
  8. Johnston, H.; Leake, C.; Efendiev, Y.; Mortari, D. Selected Applications of the Theory of Connections: A Technique for Analytical Constraint Embedding. Mathematics 2019, 7, 537.
  9. Mortari, D.; Leake, C. The Multivariate Theory of Connections. Mathematics 2019, 7, 296.
  10. Leake, C.; Johnston, H.; Smith, L.; Mortari, D. Analytically Embedding Differential Equation Constraints into Least Squares Support Vector Machines Using the Theory of Functional Connections. Mach. Learn. Knowl. Extr. 2019, 1, 60.
  11. Leake, C.; Mortari, D. Deep Theory of Functional Connections: A New Method for Estimating the Solutions of Partial Differential Equations. Mach. Learn. Knowl. Extr. 2020, 2, 4.
  12. Schiassi, E.; Leake, C.; Florio, M.D.; Johnston, H.; Furfaro, R.; Mortari, D. Extreme Theory of Functional Connections: A Physics-Informed Neural Network Method for Solving Parametric Differential Equations. arXiv 2020, arXiv:2005.10632.
  13. Leake, C.; Mortari, D. An Explanation and Implementation of Multivariate Theory of Functional Connections via Examples. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
  14. Ye, J.; Gao, Z.; Wang, S.; Cheng, J.; Wang, W.; Sun, W. Comparative Assessment of Orthogonal Polynomials for Wavefront Reconstruction over the Square Aperture. J. Opt. Soc. Am. A 2014, 31, 2304–2311.
  15. Dunkl, C.F.; Xu, Y. Orthogonal Polynomials of Several Variables, 2nd ed.; Encyclopedia of Mathematics and Its Applications; Cambridge University Press: Cambridge, UK, 2014.
  16. Xu, Y. Multivariate Orthogonal Polynomials and Operator Theory. Trans. Am. Math. Soc. 1994, 343, 193–202.
  17. Langtangen, H.P. Computational Partial Differential Equations: Numerical Methods and Diffpack Programming; Springer: Berlin/Heidelberg, Germany, 2003.
  18. Lanczos, C. Applied Analysis; Dover Publications, Inc.: New York, NY, USA, 1957; p. 504.
  19. Wright, K. Chebyshev Collocation Methods for Ordinary Differential Equations. Comput. J. 1964, 6, 358–365.
  20. Maclaurin, D.; Duvenaud, D.; Johnson, M.; Townsend, J. Autograd. 2013. Available online: https://github.com/HIPS/autograd (accessed on 1 July 2020).
  21. Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A.; Siskind, J.M. Automatic Differentiation in Machine Learning: A Survey. J. Mach. Learn. Res. 2018, 18, 1–43.
  22. Lagaris, I.E.; Likas, A.; Fotiadis, D.I. Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Trans. Neural Netw. 1998, 9, 987–1000.
  23. Mall, S.; Chakraverty, S. Single Layer Chebyshev Neural Network Model for Solving Elliptic Partial Differential Equations. Neural Process. Lett. 2017, 45, 825–840.
  24. Sun, H.; Hou, M.; Yang, Y.; Zhang, T.; Weng, F.; Han, F. Solving Partial Differential Equation Based on Bernstein Neural Network and Extreme Learning Machine Algorithm. Neural Process. Lett. 2019, 50, 1153–1172.
  25. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme Learning Machine: Theory and Applications. Neurocomputing 2006, 70, 489–501.
Figure 1. Graphical representation of injective and surjective functionals.
Figure 2. Multivariate example #1 constrained expression evaluated using g(x, y) = x^2 cos(y) + 4.
Figure 3. Multivariate example #2 constrained expression evaluated using g(x, y) = x^2 cos(y) + sin(2x). The blue line signifies the constraint on u(0, y), the black lines signify the derivative constraint on u_y(x, 0), and the magenta lines signify the relative constraint u(x, 0) = u(x, 1). The linear constraint u(1, y) + u(2, y) = y sin(πy) is not easily visualized, but is nonetheless satisfied by the constrained expression.
Figure 4. General flowchart of the TFC approach to solving partial differential equations.
Figure 5. Problem #1 analytical solution.
Figure 6. Problem #2 analytical solution.
Table 1. Equivalence of number of basis functions compared to degree of basis expansion for both Problems #1 and #2.

m      Number of Functions
5      17
10     62
15     132
20     227
25     347
Table 2. Maximum test set solution error using Chebyshev polynomials for Problem #1.

n \ m    5              10              15              20              25
5        6.26 × 10^-4   -               -               -               -
10       5.53 × 10^-4   1.20 × 10^-10   -               -               -
15       5.30 × 10^-4   1.17 × 10^-10   4.44 × 10^-16   -               -
20       5.20 × 10^-4   1.16 × 10^-10   5.00 × 10^-16   4.44 × 10^-16   -
25       5.13 × 10^-4   1.15 × 10^-10   7.22 × 10^-16   2.61 × 10^-15   5.55 × 10^-16
30       5.09 × 10^-4   1.14 × 10^-10   6.66 × 10^-16   8.88 × 10^-16   3.22 × 10^-15
Table 3. Maximum test set solution error using Legendre polynomials for Problem #1.

n \ m    5              10              15              20              25
5        6.26 × 10^-4   -               -               -               -
10       5.53 × 10^-4   1.20 × 10^-10   -               -               -
15       5.30 × 10^-4   1.17 × 10^-10   4.44 × 10^-16   -               -
20       5.20 × 10^-4   1.16 × 10^-10   5.55 × 10^-16   4.44 × 10^-16   -
25       5.13 × 10^-4   1.15 × 10^-10   4.44 × 10^-16   4.44 × 10^-16   5.55 × 10^-16
30       5.09 × 10^-4   1.14 × 10^-10   4.44 × 10^-16   4.44 × 10^-16   5.55 × 10^-16
Table 4. Comparison of maximum training and test error of TFC with current state-of-the-art techniques for Problem #1.

Method            Training Set Maximum Error   Test Set Maximum Error
TFC               2.22 × 10^-16                4.44 × 10^-16
X-TFC [12]        3.8 × 10^-13                 5.1 × 10^-13
FEM               2 × 10^-8                    1.5 × 10^-5
Reference [22]    5 × 10^-7                    5 × 10^-7
Reference [23]    -                            3.2 × 10^-2
Reference [24]    -                            2.4 × 10^-4
Table 5. Maximum test set solution error using Chebyshev polynomials for Problem #2.

n \ m    5              10             15             20              25
5        1.03 × 10^-1   -              -              -               -
10       9.20 × 10^-2   2.49 × 10^-5   -              -               -
15       9.03 × 10^-2   1.54 × 10^-5   4.34 × 10^-9   -               -
20       8.94 × 10^-2   1.52 × 10^-5   4.56 × 10^-9   5.33 × 10^-15   -
25       8.88 × 10^-2   1.50 × 10^-5   4.53 × 10^-9   2.72 × 10^-15   4.44 × 10^-16
30       8.85 × 10^-2   1.49 × 10^-5   4.51 × 10^-9   2.72 × 10^-15   3.33 × 10^-16
Table 6. Maximum test set solution error using Legendre polynomials for Problem #2.

n \ m    5              10             15             20              25
5        1.03 × 10^-1   -              -              -               -
10       9.20 × 10^-2   2.49 × 10^-5   -              -               -
15       9.03 × 10^-2   1.54 × 10^-5   4.34 × 10^-9   -               -
20       8.94 × 10^-2   1.52 × 10^-5   4.56 × 10^-9   5.33 × 10^-15   -
25       8.88 × 10^-2   1.50 × 10^-5   4.53 × 10^-9   2.78 × 10^-15   5.55 × 10^-16
30       8.85 × 10^-2   1.49 × 10^-5   4.51 × 10^-9   2.73 × 10^-15   5.55 × 10^-16
