1 Introduction

Schwartz's famous impossibility result [17] states that:

Theorem 1.1

There is no associative algebra (\(\mathcal G,+,\circledast \)) satisfying the following properties:

  1. (A1)

    The space of Schwartz distributions \(\mathcal D'\) over \(\mathbb R\) is linearly embedded into \({\mathcal G}\).

  2. (A2)

    The function \(f(x)=1\) is the identity in \(\mathcal G\).

  3. (A3)

    There exists a linear derivative operator \(D:\mathcal G\rightarrow \mathcal G\) that:

    1. (a)

      satisfies the Leibniz rule, and

    2. (b)

      coincides with the usual distributional derivative \(D_x\) in \(\mathcal G\cap \mathcal D'\).

  4. (A4)

    The multiplication \(\circledast \) coincides with the usual product of functions in \((\mathcal G\cap \mathcal C)\times (\mathcal G\cap \mathcal C)\), where \(\mathcal C\) is the space of continuous functions over \(\mathbb R\).

If we replace (A1) by:

  1. (A1’)

    \(\mathcal C^\infty _p\subseteq \mathcal G\subseteq \mathcal D'\)

where \(\mathcal C^{\infty }_p\) is the set of piecewise smooth functions, then:

Theorem 1.2

Let \(\mathcal A\equiv \cup _{i=0}^\infty D^i_x [\mathcal C^{\infty }_p]\) be the minimal space containing \(\mathcal C_p^\infty \) and closed under \(D_x\), and let \(*_M\), \(M \subseteq \mathbb R\), be the multiplicative product of distributions given in Definition 2.7. The family of associative algebras (\(\mathcal A\),+,\(*_M\)), \(M \subseteq \mathbb R\), satisfies the conditions (A1’) and (A2)–(A4).

The products \(*_M\), \(M \subseteq \mathbb R\) (cf. Definition 2.7) are extensions (to the case of possible intersecting singular supports) of the product of distributions with disjoint singular supports presented by Hörmander in [11, p. 55]. Theorem 1.2 was proved by two of us for the case \(M=\mathbb R\) in [4], and will be (easily) extended to the general case \(M\subseteq \mathbb R\) in Sect. 2.3.

In this paper we want to study the related problem of whether the associative algebras (\(\mathcal A,+,*_M\)) are unique, i.e. the only ones satisfying the conditions (A1’) and (A2)–(A4).

Let us introduce the following notation: We say that in \(\mathcal G\subseteq \mathcal D'\) the product \(\circledast \) by smooth functions is continuous (or simply that \(\circledast \) is partially continuous) at \(F\in \mathcal G\) iff for every \(\xi \in \mathcal C^{\infty }\), and every sequence \((F_n)_{n\in \mathbb N}\), \(F_n \overset{\mathcal D'}{\longrightarrow }\ F\) in \(\mathcal G\), we have: \( \xi \circledast F_n \overset{\mathcal D'}{\longrightarrow }\ \xi \circledast F\), and \(F_n \circledast \xi \overset{\mathcal D'}{\longrightarrow }\ F \circledast \xi \), where \(\overset{\mathcal D'}{\longrightarrow }\) denotes convergence in distribution sense.

We note that the dual product and the family of products \(*_M\) (defined in \(\mathcal A\)) are all partially continuous (cf. Theorem 2.11(vi)). We also remark that if \(\circledast \) (defined in \(\mathcal G\)) is partially continuous at zero then it is partially continuous everywhere in \(\mathcal G\) (because \(\circledast \) is bilinear, and \(\mathcal G\) is a vector space).
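As a simple illustration of this notion, take the sequence \(F_n=n\chi _{(0,1/n)} \overset{\mathcal D'}{\longrightarrow }\ \delta \). For every \(\xi \in \mathcal C^\infty \) and \(t\in \mathcal D\),

$$\begin{aligned} \left\langle \xi F_n,t\right\rangle =n\int _0^{1/n} \xi (x)\, t(x)\, dx ~\longrightarrow ~ \xi (0)\,t(0)=\left\langle \xi \delta ,t\right\rangle \end{aligned}$$

so \(\xi F_n \overset{\mathcal D'}{\longrightarrow }\ \xi \delta \), exhibiting the partial continuity of the dual product at \(\delta \).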

Our results are summarized in the following Theorem and Corollary:

Main Theorem

Let (\(\mathcal G,+,\circledast \)) be an associative algebra of distributions that satisfies the conditions (A1’), (A2)–(A4) given above, and

  1. (A5.1)

    Every \(F \in \mathcal G\) is locally a finite order derivative of some \(G \in \mathcal G\cap \mathcal C\),

  2. (A5.2)

    The product \(\circledast \) is partially continuous at zero;

then \(\mathcal A\subseteq \mathcal G\) and the restriction of \(\circledast \) to \(\mathcal A\) is given by \(*_M\) for some \(M \subseteq \mathbb R\). In other words, (\(\mathcal A,+,*_M\)) is a subalgebra of (\(\mathcal G,+,\circledast \)).

Remark 1.3

Notice that every \(F \in \mathcal D'\) is locally a finite order derivative of some continuous function \(G \in \mathcal C\) (cf. Theorem 3.4.2, [19]). The condition (A5.1) adds the requirement that if \(F\in \mathcal G\) then also \(G \in \mathcal G\). This condition can be replaced by (cf. Remark 3.3):

  1. (A5.1’)

    Anti-differentiation and the dual product by smooth functions are inner operations in \(\mathcal G\).

The conditions (A5.1) and (A5.1’) are both satisfied by \(\mathcal G= \mathcal D'\) and \(\mathcal G=\mathcal A\).

We also note that the conditions (A5.1) and (A5.2) can be replaced by the single, stronger condition (cf. Remark 3.2):

  1. (A6)

    Every \(F \in \mathcal G\) is globally a finite order derivative of some \(G \in \mathcal G\cap \mathcal C\);

which is satisfied by \(\mathcal G=\mathcal A\) and by \(\mathcal G=\mathcal D'(\Omega )\) for arbitrary compact sets \(\Omega \subset \mathbb R\). We will see that (A6) implies (A5.2) (and, of course, also (A5.1)). One of the sets of conditions (A5.1, A5.2), (A5.1’, A5.2) or (A6) is required for the proof of Theorem 3.1, which is an important intermediate result in the proof of the Main Theorem.

Corollary 1.4

For \(\mathcal A\) as defined above, let (\(\mathcal A,+,\circledast \)) be an associative algebra satisfying the conditions (A2)–(A4). Then \(\circledast =*_M\) for some \(M \subseteq \mathbb R\).

The proof of this Corollary is straightforward: since the space \(\mathcal A\) satisfies (A1’) and (A6), and thus (cf. Remark 1.3) also (A5.1) and (A5.2), it follows from the Main Theorem that \(\circledast =*_M\) for some \(M\subseteq \mathbb R\).

The problem of proving the uniqueness of the algebras (\(\mathcal A,+,*_M\)) was considered before in an article by B. Fuchssteiner published in Mathematische Annalen [8] (see also [9]) and recently reviewed in the Ph.D. thesis [18]. The main result of [8] is basically our Corollary 1.4. Unfortunately, the paper [8] is not well known and came to our knowledge only after we had concluded our own proof of the uniqueness result. In spite of the obvious intersection with the results of [8], we have decided to write down our own results because: (i) our proof is different and, in our view, simpler than the one presented in [8, 18]; and (ii) our results are more general, because they apply not only to the space \(\mathcal A\), but to general spaces \(\mathcal G\subseteq \mathcal D'\). In practice this means that we do not impose the restriction that the product \(\circledast \) be an inner operation in \(\mathcal A\); instead we prove that this is a consequence of the properties (A1’), (A2)–(A5) for a general space \(\mathcal G\subseteq \mathcal D'\).

Finally, we remark that the algebras \((\mathcal A, +,*_M)\) provide an interesting setting to obtain intrinsic formulations (i.e. defined within the space of Schwartz distributions) for some classes of differential operators and differential equations with singular coefficients. This approach has been explored, in particular, for Schrödinger operators with point interactions and for ODEs with singular coefficients [5,6,7]. It yields a formulation which is more general than the ones based on other intrinsic products, such as the model products [1, 12, 15], and alternative to non-intrinsic formulations such as the ones in terms of Colombeau generalized functions [2, 3, 10, 13, 15, 16].

In the next section we study the main properties of the product \(*_M\) and show that for all \(M \subseteq \mathbb R\), the algebras \((\mathcal A,+,*_M)\) satisfy the conditions in Theorem 1.2. In Sect. 3 we prove the Main Theorem.

Notation 1.5

Spaces of functions or distributions over \(\mathbb R\) are denoted by calligraphic capital letters \(\mathcal A\), \(\mathcal C\), \(\mathcal D\), \(\mathcal D'\),....

Capital roman letters F, G and J denote general distributions; \(\phi \), \(\psi \) and \(\xi \) are smooth functions; and f, g and h are locally integrable functions or regular distributions (we normally use the same notation for both objects). If we need to be more specific, we use the subscript \(\mathcal D'\) for regular distributions; for instance \(f_{\mathcal D'}\) is the regular distribution associated to the locally integrable function f.

The characteristic function of \(\Omega \subseteq \mathbb R\) is written \(\chi _\Omega \); the Heaviside step function is \(H=\chi _{\mathbb R^+}\). As usual \(\delta _{x}\) is the Dirac measure with support at x; if \(x=0\) we write only \(\delta \).

2 The algebras \((\mathcal A,+,*_M)\)

2.1 General definitions

Let \(\mathcal D\) be the space of smooth functions with compact support in \(\mathbb R\), and let \(\mathcal D'\) be its dual (the space of Schwartz distributions). As usual, supp F denotes the support of \(F \in \mathcal D'\), and sing supp F denotes its singular support.

For every locally integrable function \(f \in \mathcal L_\textrm{loc}^1\) one defines a regular distribution \(f_{\mathcal D'}\in {\mathcal D'}\) by

$$\begin{aligned} \left\langle f_{\mathcal D'},t\right\rangle =\int _\mathbb R{f(x) t (x)} \, dx ~, t \in \mathcal D~. \end{aligned}$$

By abuse of notation, we will usually identify \(f_{\mathcal D'}\) with f. The nth-order Schwartz distributional derivative of the distribution F is defined by

$$\begin{aligned} \left\langle D_x^n F,t \right\rangle =(-1)^{n}\left\langle F, d_x^{n}{t} \right\rangle ~, \quad t\in \mathcal D\end{aligned}$$

where \(d_x^{n}{t}\) denotes the nth-order classical (pointwise) derivative of t. If f is absolutely continuous, the Schwartz distributional derivative and the classical pointwise derivative (defined a.e.) coincide, i.e.

$$\begin{aligned} D_x f_{\mathcal D'} =\left( {{{d_x}}f}\right) _{\mathcal D'} ~. \end{aligned}$$
(2.1)
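The assumption of absolute continuity in (2.1) cannot be dropped. As a standard example, the Heaviside function H is not absolutely continuous, and the two derivatives differ: \(d_x H=0\) almost everywhere, while for every \(t\in \mathcal D\)

$$\begin{aligned} \left\langle D_x H,t \right\rangle =-\left\langle H, d_x t \right\rangle =-\int _0^{+\infty } t'(x) \, dx = t(0) = \left\langle \delta ,t \right\rangle \end{aligned}$$

so that \(D_x H=\delta \ne 0=\left( d_x H\right) _{\mathcal D'}\).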

The dual product of a function \(\phi \in \mathcal C^\infty \) by a distribution \(F \in \mathcal D'\) is defined by

$$\begin{aligned} \left\langle \phi F,t\right\rangle =\left\langle F,\phi t\right\rangle , \quad t\in \mathcal D\end{aligned}$$
(2.2)

and it is a generalization of the standard product of functions, i.e.

$$\begin{aligned} \phi (h_{\mathcal D'}) =(\phi h)_{\mathcal D'} ~, \quad \text{ for } \text{ all } h \in \mathcal L_\textrm{loc}^1 ~. \end{aligned}$$
(2.3)

The dual product is bilinear. Moreover, the distributional derivative \(D_x\) satisfies the Leibniz rule with respect to the dual product:

$$\begin{aligned} D_x (\phi F) =({d_x}\phi ) F+\phi D_x F ~. \end{aligned}$$
(2.4)
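As a simple application of (2.2) and (2.4), take \(\phi (x)=x\) and \(F=\delta \). By (2.2), \(\left\langle x\delta ,t\right\rangle =\left\langle \delta ,x\,t\right\rangle =0\), so \(x\delta =0\); applying \(D_x\) and the Leibniz rule,

$$\begin{aligned} 0=D_x (x\delta ) =\delta +x D_x\delta \quad \Longrightarrow \quad x\delta '=-\delta \end{aligned}$$

in agreement with formula (3.9) below.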

2.2 The multiplicative product \(*\)

For a detailed presentation and proofs of the main results, the reader should refer to [4]. Let \(\mathcal C^\infty _p\) be the space of piecewise smooth functions on \(\mathbb R\): \(f\in \mathcal C^\infty _p\) iff there is a finite set \(I\subset \mathbb R\) such that \(f\in \mathcal C^\infty (\mathbb R\backslash I)\) and the lateral limits \(\lim _{x\rightarrow x_0^\pm }f^{(n)}(x)\) exist and are finite for all \(x_0\in I\) and all \(n\in \mathbb N_0\). We have of course \(\mathcal C_p^\infty \subset \mathcal L_\textrm{loc}^1\).

Definition 2.1

Let \(\mathcal A\) be the space of piecewise smooth functions \(\mathcal C^\infty _p\) (regarded as regular distributions) together with their distributional derivatives to all orders.

All the elements of \(\mathcal A\) are distributions with finite singular supports. They can be written explicitly in the form:

Lemma 2.2

\(F \in \mathcal A\) iff there is a finite set \({ I}=\{x_1<x_2<\ldots <x_m\}\) associated with a set of open intervals \(\Omega _i=(x_i,x_{i+1})\), \(i=0,\ldots ,m\) (where \(x_0=-\infty \) and \(x_{m+1}=+\infty \)) such that:

$$\begin{aligned} F= f+\Delta ^F \end{aligned}$$
(2.5)

where \(f\in \mathcal C_p^\infty \) is of the form (\(\chi _{\Omega _i}\) is the characteristic function of \(\Omega _i\)):

$$\begin{aligned} f=\sum _{i=0}^m f_i \chi _{\Omega _i} ~,\quad f_i \in \mathcal C^{\infty } \end{aligned}$$
(2.6)

and \(\Delta ^F\) has support on a subset of I:

$$\begin{aligned} \Delta ^F=\sum _{i=1}^m \Delta ^F_{x_i} =\sum _{i=1}^m \sum _{j=0}^n {F}_{ij}\delta ^{(j)}_{x_i} ~,\quad { F}_{ij} \in \mathbb R~. \end{aligned}$$
(2.7)

We have, of course, sing supp \(F \subseteq { I}\).

Let us recall the definition of the Hörmander product of distributions with non-intersecting singular supports [11, p. 55].

Definition 2.3

Let \(F,G\in \mathcal D'\) be two distributions such that \(\text {sing supp } F\) and \(\text {sing supp } G\) are finite disjoint sets. Let \(\left\{ \Omega _i\subset \mathbb R,i=1,\ldots ,d\right\} \) be a finite covering of \(\mathbb R\) such that, on each open set \(\Omega _i\), either F or G is a smooth function. The Hörmander product of F by G is defined by

$$\begin{aligned} F \cdot G: (F\cdot G)|_{\Omega _i}=F|_{\Omega _i}G|_{\Omega _i} \end{aligned}$$

where \(F|_{\Omega _i}\) denotes the restriction of F to the set \(\Omega _i\), and likewise for the other distributions. Moreover, the product \(F|_{\Omega _i}G|_{\Omega _i}\) is the dual product defined in (2.2).

Let us emphasise that the Hörmander product is well-defined for all \(F,G \in \mathcal A\) provided that \(\text {sing supp } F\) and \(\text {sing supp } G\) are finite disjoint sets. We now extend the Hörmander product to the case of distributions with intersecting singular supports (see [4] for details).

Definition 2.4

Let \(F,G\in \mathcal A\). The product \(*\) is defined by

$$\begin{aligned} F*G=\lim _{\varepsilon \downarrow 0}F(x) \cdot G(x+\varepsilon ) \end{aligned}$$
(2.8)

where the product \(F(x)\cdot G(x+\varepsilon )\) is the Hörmander product and the limit is taken in the distributional sense.

Notice that for sufficiently small \(\varepsilon >0\), \(F(x)\) and \(G(x+\varepsilon )\) have disjoint singular supports, hence the Hörmander product in (2.8) is well-defined.
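As an illustration of Definition 2.4, consider \(F=\delta \) and \(G=H\). For \(\varepsilon >0\), \(H(x+\varepsilon )\) is smooth (identically 1) in a neighbourhood of the origin, so

$$\begin{aligned} \delta *H=\lim _{\varepsilon \downarrow 0}\delta (x)\cdot H(x+\varepsilon )=\lim _{\varepsilon \downarrow 0}\delta =\delta \end{aligned}$$

while \(H(x)\) vanishes in a neighbourhood of \(-\varepsilon \), so

$$\begin{aligned} H*\delta =\lim _{\varepsilon \downarrow 0}H(x)\cdot \delta (x+\varepsilon )=\lim _{\varepsilon \downarrow 0}0=0 ~, \end{aligned}$$

so that \(\delta *H \ne H*\delta \).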

The next theorem provides an explicit formula for \(F*G\). Let \(F,G \in \mathcal A\), let \(I=(\text {sing supp } F \, \cup \, \text {sing supp } G) =\{x_1<\ldots <x_m\}\), and consider the associated set of open intervals \(\Omega _i=(x_i,x_{i+1})\), \(i=0,\ldots ,m\) (with \(x_0=-\infty \) and \(x_{m+1}=+\infty \)). Then, in view of Lemma 2.2, F and G can be written in the form:

$$\begin{aligned} F = \sum _{i=0}^{m}f_i\chi _{\Omega _i}+ \sum _{i=1}^m \Delta ^F_{x_i} , \qquad G = \sum _{i=0}^{m}g_i\chi _{\Omega _i}+ \sum _{i=1}^m \Delta ^G_{x_i} \end{aligned}$$
(2.9)

where \(f_i,g_i \in \mathcal C^\infty \) and \(\Delta ^F_{x_i}=0\) if \(x_i \in I \backslash \text {sing supp } F\), and likewise for \(\Delta ^G_{x_i} \). Then we have:

Theorem 2.5

Let \(F,G \in \mathcal A\) be written in the form (2.9). Then \(F*G \) is given explicitly by

$$\begin{aligned} F * G = \sum _{i=0}^{m}f_i g_i\chi _{\Omega _i}+\sum _{i=1}^m\left[ g_i \Delta ^F_{x_i} + f_{i-1} \Delta ^G_{x_i}\right] ~. \end{aligned}$$
(2.10)
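For instance, for \(F=G=H\) (so that \(f_0=g_0=0\), \(f_1=g_1=1\) and \(\Delta ^F=\Delta ^G=0\)), formula (2.10) gives

$$\begin{aligned} H*H=\chi _{(0,+\infty )}=H \end{aligned}$$

while for \(F=H\), \(G=\delta \) (so that \(\Delta ^G_0=\delta \)) it gives \(H*\delta =f_0\, \delta =0\) and, exchanging the factors, \(\delta *H=g_1\, \delta =\delta \). Note also that \(\delta *\delta =0\), since the smooth parts of both factors vanish.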

Finally, the main properties of the product \(*\) are summarized in the following Theorem (cf. Theorems 3.16 and 3.18, [4]):

Theorem 2.6

The product \(*\) is an inner operation in \(\mathcal A\), it is associative, distributive, non-commutative and it reproduces the product of continuous functions in \(\mathcal A\, \cap \,\mathcal C\). The distributional derivative \(D_x\) is an inner operator in \(\mathcal A\) and satisfies the Leibniz rule with respect to the product \(*\).

We conclude that the space \(\mathcal A\), endowed with the product \(*\), is an associative (but non-commutative) differential algebra of distributions that satisfies the properties stated in Theorem 1.2. It is, however, not the unique algebra that satisfies these conditions, as we now show.

2.3 The algebras (\(\mathcal A,+,*_M\))

Let \(F,G \in \mathcal A\) be written in the form:

$$\begin{aligned} F= f+ \sum _{x_i \in I_F} \Delta _{x_i}^F , \qquad G= g+ \sum _{y_i \in I_G} \Delta _{y_i}^G \end{aligned}$$
(2.11)

where \(f,g\in \mathcal C^\infty _p\), \(I_F=\) supp \(\Delta ^F\) and \(I_G=\) supp \(\Delta ^G\). Then:

$$\begin{aligned} F*G= fg + \sum _{y_i \in I_G} f * \Delta _{y_i}^G + \sum _{x_i \in I_F} \Delta _{x_i}^F *g \end{aligned}$$
(2.12)

and, likewise

$$\begin{aligned} G*F= fg + \sum _{y_i \in I_G} \Delta _{y_i}^G *f + \sum _{x_i \in I_F} g* \Delta _{x_i}^F ~. \end{aligned}$$
(2.13)

We can combine both formulas and obtain a slightly more general product (one that acts as \(F*G\) on the points that belong to a given set \(M \subseteq \mathbb R\), and as \(G*F\) on the points that don’t belong to M):

Definition 2.7

Let \(M \subseteq \mathbb R\) and \(F,G \in \mathcal A\). The product \(*_M\) is defined by

$$\begin{aligned} F*_M G&= f \, g + \sum _{y_i \in I_G \cap M} f * \Delta _{y_i}^G + \sum _{x_i \in I_F \cap M} \Delta _{x_i}^F *g \\&\quad + \sum _{y_i \in I_G \backslash M} \Delta _{y_i}^G *f + \sum _{x_i \in I_F\backslash M} g * \Delta _{x_i}^F \end{aligned}$$
(2.14)

Notice that for \(M = \mathbb R\) we have \(F*_M G=F*G\) and for \(M=\emptyset \), \(F*_MG=G*F\). The next Remark provides some explicit formulas:

Remark 2.8

Let \(n,m \in \mathbb N_0\) and \(s,t \in \mathbb R\). Let \(M \subseteq \mathbb R\) and let H be the Heaviside step function. It follows from (2.10) and (2.14) that:

$$\begin{aligned} H(x-t) *_M \delta _s^{(n)}&= \delta _s ^{(n)} *_M H(x-t)= \left\{ \begin{array}{ll} \delta _s^{(n)} &\quad \text { if } s>t\\ 0 &\quad \text { if } s<t \end{array} \right. \\ \delta _t^{(n)} *_M H(x-t)&= H(t-x) *_M \delta _t^{(n)} ~ = ~ \chi _M(t) ~\delta _t^{(n)}\\ H(x-t) *_M \delta _t ^{(n)}&= \delta _t^{(n)} *_M H(t-x) ~ = ~ (1-\chi _M(t))~ \delta _t^{(n)}\\ \delta ^{(n)}_s *_M \delta ^{(m)}_t&= 0 ~. \end{aligned}$$
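These identities (for \(n=m=0\)) can be checked mechanically. The following Python sketch is our own illustration, not part of the original development: the class `PW` and the function `star_M` are hypothetical names, the model keeps only zeroth-order deltas, and it implements formulas (2.10) and (2.14) directly.

```python
import bisect

class PW:
    """Toy element of A (zeroth-order deltas only): `xs` are sorted
    breakpoints, `pieces` are len(xs)+1 smooth functions on all of R
    (the smooth extensions f_i of Lemma 2.2), and `deltas` maps a
    breakpoint to its coefficient of delta at that point."""
    def __init__(self, xs, pieces, deltas=None):
        assert len(pieces) == len(xs) + 1
        self.xs, self.pieces = list(xs), list(pieces)
        self.deltas = dict(deltas or {})   # delta points must appear in xs

    def left(self, p):    # smooth piece valid just to the left of p
        return self.pieces[bisect.bisect_left(self.xs, p)]

    def right(self, p):   # smooth piece valid just to the right of p
        return self.pieces[bisect.bisect_right(self.xs, p)]

def star_M(F, G, M):
    """F *_M G following (2.10)/(2.14); M is given as a set of reals."""
    xs = sorted(set(F.xs) | set(G.xs))
    def on_grid(H):       # re-list H's pieces on the merged breakpoint grid
        if not xs:
            return list(H.pieces)
        return [H.left(xs[0])] + [H.right(p) for p in xs]
    Fp, Gp = on_grid(F), on_grid(G)
    # smooth part: ordinary product of the pieces on each interval
    pieces = [(lambda t, a=a, b=b: a(t) * b(t)) for a, b in zip(Fp, Gp)]
    deltas = {}
    for i, p in enumerate(xs):
        a, b = F.deltas.get(p, 0), G.deltas.get(p, 0)
        if p in M:        # behave as F*G at p:  g_i Delta^F + f_{i-1} Delta^G
            c = Gp[i + 1](p) * a + Fp[i](p) * b
        else:             # behave as G*F at p:  f_i Delta^G + g_{i-1} Delta^F
            c = Fp[i + 1](p) * b + Gp[i](p) * a
        if c:
            deltas[p] = c
    return PW(xs, pieces, deltas)

zero, one = (lambda t: 0), (lambda t: 1)
H = PW([0], [zero, one])             # Heaviside step function H
d = PW([0], [zero, zero], {0: 1})    # Dirac delta at the origin

assert star_M(H, d, {0}).deltas == {}         # H * delta = 0       (0 in M)
assert star_M(d, H, {0}).deltas == {0: 1}     # delta * H = delta
assert star_M(H, d, set()).deltas == {0: 1}   # H *_emptyset delta = delta * H
assert star_M(H, H, {0}).pieces[1](2.0) == 1  # H * H = H on (0, +infinity)
```

The four assertions reproduce, for \(n=0\) and \(s=t=0\), the identities of Remark 2.8 and the idempotency \(H*H=H\).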

Let us introduce the following distributions:

Definition 2.9

Let \(M \subseteq \mathbb R\) and let \(F \in \mathcal A\) be written in the form (2.11). The distribution \(F_M\in \mathcal A\) associated with F is defined by

$$\begin{aligned} F_M=\frac{\sqrt{2}}{2}\, f+\sqrt{2}\, \Xi ^{F_M} ~, \quad \text{ where } \quad \Xi ^{F_M}=\sum _{x_i \in I_F \cap M} \Delta _{x_i}^F ~. \end{aligned}$$
(2.15)

We can now write \(F*_M G\) in a compact form:

Lemma 2.10

Let \(F,G \in \mathcal A\) and let \(F_M,G_M\) be the associated distributions of the form (2.15). Then

$$\begin{aligned} F*_M G=F_M*G_M + G_{\mathbb R\backslash M} * F_{\mathbb R\backslash M} ~. \end{aligned}$$
(2.16)

Proof

Using (2.15), we have

$$\begin{aligned} F_M*G_M= \frac{1}{2}fg+ f * \Xi ^{G_M} + \Xi ^{F_M} *g \end{aligned}$$

and likewise:

$$\begin{aligned} G_{\mathbb R\backslash M}*F_{\mathbb R\backslash M}= \frac{1}{2}fg + \Xi ^{G_{\mathbb R\backslash M}} *f + g * \Xi ^{F_{\mathbb R\backslash M}} ~. \end{aligned}$$

Hence, (cf. (2.14)):

$$\begin{aligned} F_M*G_M + G_{\mathbb R\backslash M} * F_{\mathbb R\backslash M}=F*_M G ~. \end{aligned}$$

\(\square \)
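As a simple check of (2.16) (this computation is ours, using (2.15)): let \(F=\delta \), \(G=H\) and \(0\in M\). Then \(F_M=\sqrt{2}\,\delta \), \(G_M=\tfrac{\sqrt{2}}{2}H\), \(F_{\mathbb R\backslash M}=0\) and \(G_{\mathbb R\backslash M}=\tfrac{\sqrt{2}}{2}H\), so

$$\begin{aligned} F*_M G=F_M*G_M + G_{\mathbb R\backslash M} * F_{\mathbb R\backslash M}=\sqrt{2}\cdot \tfrac{\sqrt{2}}{2}\, \delta *H + 0 =\delta \end{aligned}$$

in agreement with \(\delta *_M H=\chi _M(0)\, \delta \) from Remark 2.8.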

We now study the main properties of \(*_M\):

Theorem 2.11

For all \(M\subseteq \mathbb R\), the product \(*_M\) is (i) an inner operation in \(\mathcal A\), (ii) distributive and (iii) associative. Moreover, (iv) it reproduces the usual product of continuous functions in \(\mathcal A\cap \mathcal C\), and (v) the dual product of smooth functions by distributions in \(\mathcal A\). (vi) It is also partially continuous at zero and (vii) \(D_x\) satisfies the Leibniz rule with respect to \(*_M\).

Proof

Let \(F,G,J \in \mathcal A\) and let \(F_M,G_M,J_M\) be the associated distributions defined by (2.15). Then:

(i) By Lemma 2.2 we have \(F_M,G_M,F_{\mathbb R\backslash M},G_{\mathbb R\backslash M}\in \mathcal A\). Since (\(\mathcal A,+,*\)) is an algebra it follows that \(F_M*G_M + G_{\mathbb R\backslash M} * F_{\mathbb R\backslash M}\in \mathcal A\) and therefore \(F*_M G \in \mathcal A\).

(ii) From (2.16) we have:

$$\begin{aligned} F*_M J+G*_MJ=F_M*J_M + J_{\mathbb R\backslash M} * F_{\mathbb R\backslash M}+G_M*J_M + J_{\mathbb R\backslash M} * G_{\mathbb R\backslash M} ~. \end{aligned}$$

Since \(*\) is distributive and \(F_M+G_M=(F+G)_M\) for all \(M\subseteq \mathbb R\), we get

$$\begin{aligned} F*_M J+G*_MJ&=(F+G)_M*J_{ M} + J_{\mathbb R\backslash M} * (F+G)_{\mathbb R\backslash M}\\&=(F+G)*_MJ \end{aligned}$$

which proves that the product is right-distributive. In the same way, one proves that it is also left-distributive.

(iii) We have for \(F,G,J\in \mathcal A\)

$$\begin{aligned} (F*_M G)*_M J=(F*_MG)_M*J_M+J_{\mathbb R\backslash M}*(F*_MG)_{\mathbb R\backslash M} \end{aligned}$$

Since

$$\begin{aligned} (F*_M G)_M=\frac{\sqrt{2}}{2} fg +\sqrt{2}\left( \Xi ^{F_M} *g+ f*\Xi ^{G_M}\right) \end{aligned}$$

and

$$\begin{aligned} (F*_M G)_{\mathbb R\backslash M}=\frac{\sqrt{2}}{2} fg +\sqrt{2} \left( g*\Xi ^{F_{\mathbb R\backslash M}}+ \Xi ^{G_{\mathbb R\backslash M}} *f \right) \end{aligned}$$

a simple calculation shows that:

$$\begin{aligned} (F*_M G)*_MJ&= fgj + (f*g) * \Xi ^{J_M} +(f*\Xi ^{G_M})*j +(\Xi ^{F_M} *g)*j\\&\quad +\Xi ^{J_{\mathbb R\backslash M}}*(f*g)+j* (g*\Xi ^{F_{\mathbb R\backslash M}})+j*(\Xi ^{G_{\mathbb R\backslash M}}*f) \end{aligned}$$

Using the associativity of \(*\) we get

$$\begin{aligned} (F*_M G)*_MJ&= fgj + f *(g*\Xi ^{J_M}) +f*(\Xi ^{G_M}*j)+\Xi ^{F_M} *(g*j)\\&\quad +(\Xi ^{J_{\mathbb R\backslash M}}*f)*g +(j*\Xi ^{G_{\mathbb R\backslash M}})*f +(j* g)*\Xi ^{F_{\mathbb R\backslash M}} \end{aligned}$$

which is exactly \(F*_M(G*_MJ)\). Hence the product \(*_M\) is associative.

(iv) If \(F,G\in (\mathcal A\cap \mathcal C)\) then \(F=f\) and \(G=g\) with \(f,g \in \mathcal C\cap \mathcal C_p^\infty \) [cf. (2.5)]. Hence, from (2.14), \(F*_M G=fg\).

(v) If \(F \in \mathcal C^\infty \) then in (2.11) we have \(F=f\). It follows from (2.10) that \(F*G=G*F=fG\) for all \(G\in \mathcal A\) and so, from (2.14), that \(F*_MG=G*_MF=fG\).

(vi) Since \(\xi *_M F= F*_M \xi = \xi F\) for all \(\xi \in \mathcal C^\infty \) and \(F\in \mathcal A\), the partial continuity of \(*_M\) follows from the same property for the dual product. Let then \((F_n)_{n\in \mathbb N}\) be a sequence in \(\mathcal A\) with \(F_n \overset{\mathcal D'}{\longrightarrow }\ 0\). We have:

$$\begin{aligned} \lim _{n\rightarrow +\infty } \langle \xi F_n,t \rangle = \lim _{n\rightarrow +\infty } \langle F_n, \xi t\rangle =0\,, \quad \forall t \in \mathcal D\end{aligned}$$

and so \(\xi F_n \overset{\mathcal D'}{\longrightarrow }\ 0\).

(vii) Let us write \(F*_M G\) in the form

$$\begin{aligned} F*_M G=F * G+\frac{\sqrt{2}}{2} J_{\mathbb R\backslash M} \end{aligned}$$
(2.17)

where \(J= G*F-F*G\), and \(J_{\mathbb R\backslash M}\) is defined by (2.15). Notice that from (2.12) and (2.13) we have explicitly:

$$\begin{aligned} \frac{\sqrt{2}}{2} J_{\mathbb R\backslash M}= \sum _{y_i \in I_G \backslash M} \left( \Delta _{y_i}^G * f -f * \Delta _{y_i}^G \right) + \sum _{x_i \in I_F \backslash M} \left( g* \Delta _{x_i}^F - \Delta _{x_i}^F * g \right) ~. \end{aligned}$$

Moreover, since J is of finite support (supp \(J \subseteq I_F \cup I_G\)), we have

$$\begin{aligned} D_x \left( J_{\mathbb R\backslash M}\right) = \left( D_x J\right) _{\mathbb R\backslash M} ~. \end{aligned}$$

It follows that

$$\begin{aligned} D_x\left( F*_M G \right) = D_x\left( F * G \right) + \frac{\sqrt{2}}{2} \left( D_x(G*F) - D_x(F*G) \right) _{\mathbb R\backslash M} \end{aligned}$$

and since \(D_x\) satisfies the Leibniz rule with respect to the product \(*\), using (2.17) the terms on the r.h.s. can easily be shown to yield \((D_x F) *_M G+F*_M (D_x G)\), concluding the proof.\(\square \)
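As an illustration of (vii): by (2.14), \(H*_M H=H\) for every \(M \subseteq \mathbb R\), and the Leibniz rule is consistent with the identities of Remark 2.8,

$$\begin{aligned} \delta =D_x H=D_x\left( H*_M H\right) =\delta *_M H + H *_M \delta =\chi _M(0)\, \delta +(1-\chi _M(0))\, \delta =\delta ~. \end{aligned}$$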

Theorem 1.2 in the Introduction is a simple corollary of the previous result.

3 Main Theorem

In this section we prove the Main Theorem. Several preparatory results that are required for the proof will be given in Sect. 3.1 (Theorems 3.1, 3.5, 3.6 and 3.7).

3.1 Preparatory results

Theorem 3.1

Let \(\xi \in \mathcal C^\infty \) and \(F\in \mathcal G\). Then

$$\begin{aligned} \xi \circledast F=\xi F=F\circledast \xi ~. \end{aligned}$$
(3.1)

Proof

If \(F\in \mathcal G\cap \mathcal C\) then from (A4)

$$\begin{aligned} \xi \circledast F=\xi F ~. \end{aligned}$$

Moreover, if (3.1) is valid for some \(F\in \mathcal G\) (and all \(\xi \in \mathcal C^\infty \)) then it is valid for \(F'=D_xF\):

$$\begin{aligned} (\xi \circledast F)'=(\xi F)'&\Longleftrightarrow \xi '\circledast F+\xi \circledast F'=\xi 'F+\xi F'\\&\Longleftrightarrow \xi \circledast F'=\xi F' \end{aligned}$$

since \(\xi '\circledast F=\xi 'F\) by (3.1), and the dual product satisfies the Leibniz rule. This proves that

$$\begin{aligned} \xi \circledast g^{(k)} =\xi g^{(k)} \end{aligned}$$
(3.2)

for all \(g\in \mathcal G\cap \mathcal C\), \(\xi \in \mathcal C^\infty \) and all \(k\in \mathbb N_0\).

We now extend the previous result to all \(F\in \mathcal G\). We will need to impose the extra conditions (A5.1) and (A5.2). Let \((\phi _i)_i\) be a countable family of smooth real functions satisfying:

  1. (P1)

    \(\sum _{i=1}^{+\infty }\phi _i(x)=1,\,\forall x\in \mathbb R\).

  2. (P2)

    \(\text {supp } \phi _i\) is compact.

  3. (P3)

    If \(\Xi \subset \mathbb R\) is compact then \(\Xi \cap \text {supp } \phi _i\ne \emptyset \) only for a finite number of functions \(\phi _i\).

Then, \(\forall F\in \mathcal G\) we have

$$\begin{aligned} F=\left( \sum _{i=1}^{+\infty }\phi _i\right) F=\sum _{i=1}^{+\infty }\phi _i F=\sum _{i=1}^{+\infty }\phi _i g_i^{(k_i)} \end{aligned}$$

where (cf. (A5.1)) \(k_i\in \mathbb N_0\) and \(g_i\in \mathcal C\cap \mathcal G\) is such that for some bounded open set \(\Omega _i \supset \) supp \(\phi _i\) we have \(F|_{\Omega _ {i}}=g^{(k_i)}_i|_{\Omega _ {i}}\). Using (3.2) we get:

$$\begin{aligned} F=\sum _{i=1}^{+\infty }\phi _i\circledast g_i^{(k_i)} ~. \end{aligned}$$

Let \(F_i=\phi _i\circledast g_i^{(k_i)}\). Then \(F_i\in \mathcal G\) and is of compact support (because \(\phi _i\circledast g_i^{(k_i)}=\phi _i g_i^{(k_i)}\) and \(\text {supp } \phi _i\) is compact). Hence \(F_i=h_i^{(s_i)}\) for some \(h_i\in \mathcal G\cap \mathcal C\), \(s_i \in \mathbb N_0\) (cf. (A5.1)), and thus, for every \(F\in \mathcal G\), \(\xi \in \mathcal C^\infty \) and \(n \in \mathbb N\):

$$\begin{aligned} \xi \circledast F = \xi \circledast \sum _{i=1}^{+\infty } h_i^{(s_i)} = \xi \circledast \sum _{i=1}^{n} h_i^{(s_i)} + \xi \circledast \sum _{i=n+1}^{+\infty } h_i^{(s_i)}~. \end{aligned}$$
(3.3)

Let \(z_n=\sum _{i=n+1}^{+\infty } h_i^{(s_i)}\). Then \( z_n= F - \sum _{i=1}^{n} h_i^{(s_i)} \in \mathcal G\), and \(z_n \overset{\mathcal D'}{\longrightarrow }\ 0\) (since \(F=\lim _{n\rightarrow +\infty } \sum _{i=1}^{n} h_i^{(s_i)}\) in \(\mathcal D'\)).

We then rewrite (3.3) in the form

$$\begin{aligned} \xi \circledast F = \xi \circledast z_n+ \xi \circledast \sum _{i=1}^{n} h_i^{(s_i)} = \xi \circledast z_n+ \sum _{i=1}^{n} \xi \circledast h_i^{(s_i)} \end{aligned}$$

and take the limit \(n \rightarrow +\infty \). Using the partial continuity of the product (cf. (A5.2)), we find:

$$\begin{aligned} \xi \circledast F = \lim _{n\rightarrow +\infty } \sum _{i=1}^{n} \xi \circledast h_i^{(s_i)} =\sum _{i=1}^{+\infty } \xi \, h_i^{(s_i)} = \xi \sum _{i=1}^{+\infty }h_i^{(s_i)} =\xi \, F \end{aligned}$$

where we used (3.2) in the second step. In the same manner one proves that \(F\circledast \xi =\xi F\).\(\square \)

Remark 3.2

Notice that if we assume that \(\mathcal G\) satisfies (A6), the proof of the previous Theorem is concluded in Eq. (3.2) since every \(F \in \mathcal G\) is (globally) a finite order derivative of some \(g \in \mathcal G\cap \mathcal C\). The condition (A6) is satisfied by \(\mathcal G=\mathcal A\), and also by \(\mathcal G= \mathcal D'(\Omega )\) for \(\Omega \subseteq \mathbb R\) a compact set. If \(\mathcal G\) satisfies (A6) we can also conclude that \(\circledast \) is partially continuous (i.e. it also satisfies (A5.2)). This follows from the partial continuity of the dual product and the fact that from (3.2), \(\xi \circledast F= \xi F\) for all \(\xi \in \mathcal C^\infty \) and \(F\in \mathcal G\).

Remark 3.3

Theorem 3.1 still holds if in the definition of \(\mathcal G\) the condition (A5.1) is replaced by (A5.1’). To prove this let us consider the partition of unity \((\phi _i)_i\) defined above by (P1)-(P3). For \(F \in \mathcal G\) we have

$$\begin{aligned} F=\left( \sum _{i=1}^{+\infty } \phi _i\right) F=\sum _{i=1}^{+\infty }\phi _i F ~. \end{aligned}$$

Let \(F_i=\phi _i F\); since \(F_i\) is of compact support and \(F_i\in \mathcal G\) (cf. (A5.1’)) we have \(F_i=h_i^{(s_i)}\) for some \(s_i \in \mathbb N_0\) and \(h_i \in \mathcal C\) (cf. Corollary 3.4-2a, [19]). Moreover, since anti-differentiation is an inner operation in \(\mathcal G\) (cf. (A5.1’)), we also have \(h_i \in \mathcal G\). Hence \(F\in \mathcal G\) can be written in the form:

$$\begin{aligned} F=\sum _{i=1}^{+\infty } h_i^{(s_i)} ~, \quad h_i\in \mathcal C\cap \mathcal G~,\quad s_i\in \mathbb N_0 \end{aligned}$$

and the rest of the proof follows from Eq. (3.3).

Lemma 3.4

Let \(F,G \in \mathcal G\). Then \(\text {supp}~(F\circledast G)\subseteq \text {supp }F\cap \text {supp }G\).

Proof

We will prove that \(\text {supp }(F\circledast G) \subseteq \text {supp } F\) (the proof for G is identical). This is equivalent to showing that (\(\Omega ^c\) denotes the complement of \(\Omega \)):

$$\begin{aligned} \left\langle F\circledast G, t\right\rangle =0 ~, \quad \forall t \in \mathcal D: \text {supp }t \subset (\text {supp } F)^c \end{aligned}$$
(3.4)

with the obvious exception of the case \(\text {supp } F=\mathbb R\) for which the result is trivial. Let us consider a partition of unity \(\phi _1, \phi _2\in \mathcal C^{\infty } \) such that:

  1. (i)

    \(\phi _1 + \phi _2 =1\),

  2. (ii)

    \(\phi _1(x) =0\) for \(x\in \text {supp } F\),

  3. (iii)

    \(\phi _2(x) =0\) for \(x\in \text {supp } t\) .

Then

$$\begin{aligned} \left\langle F\circledast G, t\right\rangle&= \left\langle (\phi _1 + \phi _2) \circledast (F \circledast G), t\right\rangle \\&= \left\langle (\phi _1 \circledast F)\circledast G, t\right\rangle + \left\langle \phi _2 \, (F\circledast G), t\right\rangle \end{aligned}$$

where we used (3.1) to obtain the second term. It follows that:

$$\begin{aligned} \left\langle F\circledast G, t\right\rangle =\left\langle (\phi _1 F)\circledast G, t\right\rangle + \left\langle F\circledast G, \phi _2 t\right\rangle =0 \end{aligned}$$

because, by (ii) and (iii), \(\phi _1 F=0\) and \(\phi _2 t=0\), respectively.\(\square \)

Theorem 3.5

Let \(s,t\in \mathbb R\). Then \(H(x-s)\circledast H(x-t) = H(x-\max \left\{ s,t\right\} )\).

Proof

Let us first consider \(s<t\). Since \(\text {supp }H(s-x)\cap \text {supp } H(x-t)= \emptyset \), it follows from Lemma 3.4 that

$$\begin{aligned} H(s-x)\circledast H(x-t) =0 ~. \end{aligned}$$
(3.5)

Taking into account that \(H(s-x) =1-H(x-s) \), we easily obtain \(H(x-s)\circledast H(x-t) =H(x-t)\). The other case \(s>t\) is proved in the same way.

Let now \(s=t\). It follows from (A4) that

$$\begin{aligned} \left| x-s\right| \circledast \left| x-s\right| =(x-s)^2 \end{aligned}$$
(3.6)

Differentiating this equation twice, we get

$$\begin{aligned} \delta _s \circledast \left| x-s\right| + (D_x\left| x-s\right| )\circledast (D_x\left| x-s\right| )+\left| x-s\right| \circledast \delta _s =1 \end{aligned}$$
(3.7)

where we took into account that

$$\begin{aligned} D_x^2\left| x-s\right| =2\delta _s ~. \end{aligned}$$
(3.8)

We now prove that \(\delta _s \circledast \left| x-s\right| =0\). For simplicity, let \(s=0\). We have:

$$\begin{aligned} \delta \circledast \left| x\right|&= \delta \circledast \left( x \circledast H -x \circledast H(-x) \right) \\&= (\delta \circledast x)\circledast H-(\delta \circledast x)\circledast H(-x)=0 ~. \end{aligned}$$

Notice that \(x \circledast H=x H\) and \(\delta \circledast x=x\circledast \delta = x \delta = 0\) (cf. Theorem 3.1). In the same way one proves that \({\left| x-s\right| }\circledast \delta _s =0 \). Hence Eq. (3.7) reduces to

$$\begin{aligned}&(2H(x-s)-1)\circledast (2H(x-s)-1)=1\\&\quad \Longleftrightarrow H(x-s)\circledast H(x-s)=H(x-s) \end{aligned}$$

which concludes the proof.\(\square \)
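A simple consequence of Theorem 3.5 (this observation is ours): since \(\circledast \) is bilinear, the characteristic function of any interval is \(\circledast \)-idempotent. Indeed, for \(s<t\), writing \(\chi _{(s,t)}=H(x-s)-H(x-t)\) and applying Theorem 3.5 to each of the four cross terms,

$$\begin{aligned} \chi _{(s,t)}\circledast \chi _{(s,t)}&= H(x-s)\circledast H(x-s) - H(x-s)\circledast H(x-t)\\&\quad - H(x-t)\circledast H(x-s) + H(x-t)\circledast H(x-t)\\&= H(x-s)-H(x-t)-H(x-t)+H(x-t)=\chi _{(s,t)} ~. \end{aligned}$$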

Before we proceed to the next Theorem, let us recall the following useful formula, valid for all \(n,m \in \mathbb N_0\) (Eq. (26), Sect. 2.6, [14]):

$$\begin{aligned} x^{n}\delta ^{(m)}=\left\{ \begin{array}{ll} 0~, & \quad m < n \\ (-1)^n\frac{m!}{(m-n)!}\delta ^{(m-n)}~, & \quad m \ge n \end{array} \right. \end{aligned}$$
(3.9)

where we used the convention \(0!=1\). The case \(m\ge n\) can be easily inverted, yielding

$$\begin{aligned} \delta ^{(j)}=(-1)^n\frac{j!}{(j+n)!} x^{n}\delta ^{(j+n)} ~, \quad \forall j,n \in \mathbb N_0 ~. \end{aligned}$$
(3.10)
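As a sanity check (not part of the original argument), formula (3.9) can be verified symbolically by pairing \(x^{n}\delta ^{(m)}\) with a test function, using the classical identity \(\left\langle x^{n}\delta ^{(m)},\varphi \right\rangle =(-1)^m \big [ \tfrac{d^m}{dx^m}(x^n\varphi )\big ](0)\). The sketch below uses sympy; the particular test function \(\varphi \) is an arbitrary choice.

```python
import sympy as sp
from math import factorial

x = sp.symbols('x')
phi = sp.exp(-x**2) * sp.cos(x)  # an arbitrary smooth "test function"

def lhs(n, m):
    # <x^n delta^(m), phi> = (-1)^m * d^m/dx^m [x^n * phi] at x = 0
    return (-1)**m * sp.diff(x**n * phi, x, m).subs(x, 0)

def rhs(n, m):
    # right-hand side of (3.9), paired with phi:
    # <delta^(m-n), phi> = (-1)^(m-n) * phi^((m-n))(0)
    if m < n:
        return sp.Integer(0)
    c = (-1)**n * sp.Rational(factorial(m), factorial(m - n))
    return c * (-1)**(m - n) * sp.diff(phi, x, m - n).subs(x, 0)

# check (3.9) for a range of exponents
assert all(sp.simplify(lhs(n, m) - rhs(n, m)) == 0
           for n in range(4) for m in range(6))
```

The inverted form (3.10) is checked by the same computation, reading the equality from right to left.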

Theorem 3.6

For every \(s,t \in \mathbb R\), and every \(i,j \in \mathbb N_0\) we have

$$\begin{aligned} \delta ^{(i)}_s \circledast \delta ^{(j)}_t=0 ~. \end{aligned}$$
(3.11)

Proof

Since \(\text {supp }(\delta ^{(i)}_s \circledast \delta ^{(j)}_t)\subseteq \text {supp }\delta ^{(i)}_s\cap \text { supp }\delta ^{(j)}_t\) (cf. Lemma 3.4), for \(t \not = s\) we have \(\delta ^{(i)}_s \circledast \delta ^{(j)}_t=0\). Moreover, if \(t=s\), the product is supported at \(\left\{ s\right\} \), and a distribution supported at a single point is a finite linear combination of the Dirac measure at that point and its derivatives. Hence:

$$\begin{aligned} \delta ^{(i)}_s \circledast \delta ^{(j)}_s=\sum _{k=0}^n a_k\delta ^{(k)}_s \end{aligned}$$
(3.12)

for some \(n \in \mathbb N_0\) and \(a_k\in \mathbb R\), \(k=0,\dots ,n\). To simplify the presentation, assume that \(s=0\) and \(i\le j\). Then:

$$\begin{aligned} x^{i+1}\circledast (\delta ^{(i)}\circledast \delta ^{(j)}) &= (x^{i+1}\circledast \delta ^{(i)})\circledast \delta ^{(j)}\\ &= (x^{i+1}\delta ^{(i)})\circledast \delta ^{(j)}=0 \end{aligned}$$

where we used (3.9). Substituting (3.12) in the previous expression:

$$\begin{aligned} x^{i+1}\circledast \sum _{k=0}^n a_k\delta ^{(k)}= x^{i+1}\sum _{k=0}^n a_k\delta ^{(k)}=0 ~\Longrightarrow ~ a_k=0, \quad \forall k\ge i+1 \end{aligned}$$
(3.13)

and so

$$\begin{aligned} \delta ^{(i)}\circledast \delta ^{(j)}= \sum _{k\le i} a_k \delta ^{(k)} ~. \end{aligned}$$
(3.14)

Let us return to (3.10) and set \(n=i+1\):

$$\begin{aligned} \delta ^{(j)}=b_{ij}\delta ^{(j+i+1)}x^{i+1} \quad \textrm{where} \quad b_{ij}=(-1)^{i+1} \tfrac{j!}{(j+i+1)!} ~. \end{aligned}$$
(3.15)

It follows that:

$$\begin{aligned} \delta ^{(i)}\circledast \delta ^{(j)}=b_{ij} \left( \delta ^{(i)}\circledast \delta ^{(j+i+1)} \right) \circledast x^{i+1} \end{aligned}$$
(3.16)

where we used (3.1) and the associativity of \(\circledast \). Since (3.14) is valid for all j, we have:

$$\begin{aligned} \delta ^{(i)}\circledast \delta ^{(j+i+1)}= \sum _{k\le i} a'_k \delta ^{(k)} \end{aligned}$$
(3.17)

for some \(a'_k\in \mathbb R\). Substituting in (3.16) and using (3.9), we finally get

$$\begin{aligned} \delta ^{(i)}\circledast \delta ^{(j)} = b_{ij} \left( \sum _{k\le i} a'_k \, \delta ^{(k)}\right) \circledast x^{i+1} = b_{ij} \sum _{k\le i} a'_k \, \delta ^{(k)} x^{i+1}=0 \end{aligned}$$

which concludes the proof.\(\square \)
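To see the mechanism of the proof in its simplest instance, take \(i=j=0\) (and \(s=0\)): (3.15) reads \(\delta =-\delta ' x\), and (3.17) gives \(\delta \circledast \delta '=a'_0\,\delta \) for some \(a'_0\in \mathbb R\), so that

```latex
\begin{aligned}
\delta \circledast \delta
  = -\left( \delta \circledast \delta ' \right) \circledast x
  = -a'_0\, \delta \circledast x
  = -a'_0\, x\delta
  = 0 ~,
\end{aligned}
```

where the last step uses \(x\delta =0\), i.e. (3.9) with \(n=1\), \(m=0\).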

Theorem 3.7

For all \(n\in \mathbb N_0\) and \(s,t\in \mathbb R\) we have:

  1. (1)

    \( H(x-t)\circledast \delta _s^{(n)} = \delta _s ^{(n)} \circledast H(x-t)= \left\{ \begin{array}{ll} \delta _s^{(n)} & \quad \text { if } s>t\\ 0 & \quad \text { if } s<t \end{array} \right. \)

  2. (2)

    \( H(x-t) \circledast \delta _t ^{(n)}=c_t\delta _t^{(n)}, \quad \delta _t ^{(n)} \circledast H(x-t)=(1-c_t)\delta _t^{(n)}\) where \(t\mapsto c_t\) is some function \(\mathbb R\longrightarrow \left\{ 0,1\right\} \).

Proof

(1) For \(s<t\), \(\text {supp }H(x-t)\cap \text {supp } \delta _s^{(n)}= \emptyset \) and thus the product is zero. For \(s>t\), we have \(H(x-t)=1-H(t-x)\) and since \(\text {supp }H(t-x)\cap \text {supp } \delta _s^{(n)}= \emptyset \) we get:

$$\begin{aligned} H(x-t)\circledast \delta _s^{(n)}=\delta _s^{(n)}\circledast H(x-t)=\delta _s^{(n)} ~. \end{aligned}$$

(2) Let us begin by proving that the formulas are true for \(n=0\). Assume for simplicity that \(t=0\). Since \(\text {supp }(H\circledast \delta )\subseteq \text {supp }H(x)\cap \text {supp }\delta =\left\{ 0\right\} \) (cf. Lemma 3.4), we have \(H(x)\circledast \delta =\sum _{k=0}^m c_k \delta ^{(k)}\), for some \(m \in \mathbb N_0\) and \(c_k\in \mathbb R\). As before

$$\begin{aligned} (H\circledast \delta )\circledast x=H\circledast (\delta \circledast x)=0 ~\Longrightarrow ~ \sum _{k=0}^m c_k (\delta ^{(k)}\circledast x)=0 ~\Longrightarrow ~ c_k=0, ~ \forall k\ne 0 \end{aligned}$$

and thus \(H\circledast \delta =c\delta \) for some \(c\in \mathbb R\). Moreover (cf. Theorem 3.5)

$$\begin{aligned} &H\circledast (H\circledast \delta )=(H\circledast H)\circledast \delta =H\circledast \delta \\ &\quad \Longleftrightarrow ~ H\circledast c\delta =c\delta ~\Longleftrightarrow ~c^2\delta =c\delta ~\Longleftrightarrow ~ c=0 \vee c=1 ~. \end{aligned}$$

Generalizing now for \(t\in \mathbb R\): \(H(x-t)\circledast \delta _t=c_t \delta _t\), where \(t\mapsto c_t\) is an arbitrary function \(\mathbb R\longrightarrow \left\{ 0,1\right\} \). Each function \(c_t\) defines a particular \(\circledast \)-product, which we denote by \(\circledast _{c_t}\). Thus \(H(x-t)\circledast _{c_t}\delta _t=c_t\delta _t\).

Differentiating \(H(x-t)\circledast _{c_t} H(x-t)=H(x-t)\), we get:

$$\begin{aligned} &\delta _t\circledast _{c_t} H(x-t)+H(x-t)\circledast _{c_t}\delta _t=\delta _t \\ &\quad \Longleftrightarrow ~ \delta _t\circledast _{c_t} H(x-t)=(1-c_t)\delta _t \end{aligned}$$

which completes the proof of (2) for \(n=0\).

Assume now that

$$\begin{aligned} H(x-t)\circledast _{c_t}\delta _t^{(n)}=c_t\delta _t^{(n)} \end{aligned}$$
(3.18)

holds for some \(n\in \mathbb N_0\). Differentiating (3.18) we get

$$\begin{aligned} \delta _t\circledast _{c_t}\delta _t^{(n)}+H(x-t)\circledast _{c_t}\delta _t^{(n+1)}=c_t\delta _t^{(n+1)} \end{aligned}$$

and since the first term is zero (cf. Theorem 3.6) we conclude that (3.18) is valid for \(n+1\), and thus for all \(n \in \mathbb N_{0}\). In the same way one proves that \(\delta _t ^{(n)} \circledast H(x-t)=(1-c_t)\delta _t^{(n)}\), for all \(n \in \mathbb N_{0}\).\(\square \)

3.2 Main Theorem

We can now easily prove the Main Theorem.

Proof

The inclusion \(\mathcal A\subseteq \mathcal G\) follows directly from (A1’) and the fact that the distributional derivative \(D_x\) maps \(\mathcal G\) into itself.

Let \(F,G \in \mathcal A\); we want to prove that \(F \circledast G= F *_M G\) for some \(M \subseteq \mathbb R\). Let us write \(F\) and \(G\) in the form (2.5). From the distributive property of \(\circledast \):

$$\begin{aligned} F\circledast G &= (f+\Delta ^F) \circledast (g+\Delta ^G)\nonumber \\ &= f\circledast g + f\circledast \Delta ^G+\Delta ^F \circledast g + \Delta ^F \circledast \Delta ^G ~. \end{aligned}$$
(3.19)

Let us consider each term separately:

1) We first prove that

$$\begin{aligned} f\circledast g= f*_Mg ~, \quad \forall f,g \in \mathcal C_p^\infty ~, \quad \forall M\subseteq \mathbb R~. \end{aligned}$$
(3.20)

Since (cf. Theorem 3.1):

$$\begin{aligned} \xi H(x-a)=\xi \circledast H(x-a) = H(x-a) \circledast \xi \,, \quad \forall \xi \in \mathcal C^{\infty } \end{aligned}$$

and also (cf. Theorem 3.5):

$$\begin{aligned} H(x-a) \circledast H(x-b) = H(x-\max \{a,b\}) \end{aligned}$$

we have for \(f=f_iH(x-a)\) and \(g=g_jH(x-b)\), \(f_i,g_j \in \mathcal C^\infty \), using the associativity of \(\circledast \):

$$\begin{aligned} f\circledast g = f g= f*_M g \end{aligned}$$
(3.21)

where the second identity follows from (2.14) and holds for all \(M \subseteq \mathbb R\). Moreover, for \(f\) or \(g\) smooth, (3.21) also holds (cf. Theorem 3.1). Since every \(f,g \in \mathcal C_p^\infty \) is the sum of a smooth function with a finite number of functions of the form \(\xi H(x-a)\), \(\xi \in \mathcal C^\infty \), \(a\in \mathbb R\), using the distributive property of \(\circledast \) and \(*_M\) we get (3.20).
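The two classical facts used in this step, namely \(H(x-a)H(x-b)=H(x-\max \{a,b\})\) (for any fixed convention for the value \(H(0)\)) and the decomposition of a piecewise smooth function into a smooth part plus finitely many terms \(\xi H(x-a)\), can be checked numerically. The sketch below, with \(\left| x\right| =-x+2xH(x)\) as a concrete instance of the decomposition, is only an illustration, not part of the proof.

```python
import numpy as np

# Heaviside with the convention H(0) = 1; any fixed convention works here
H = lambda x: np.where(x >= 0, 1.0, 0.0)

x = np.linspace(-3.0, 3.0, 1201)
a, b = -0.5, 1.3

# H(x-a) * H(x-b) = H(x - max(a, b)) pointwise
assert np.array_equal(H(x - a) * H(x - b), H(x - max(a, b)))

# a concrete piecewise smooth decomposition: |x| = -x + 2x*H(x),
# smooth part -x, jump part xi(x)*H(x-0) with xi(x) = 2x
assert np.allclose(np.abs(x), -x + 2 * x * H(x))
```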

2) We now prove that for some \(M\subseteq \mathbb R\)

$$\begin{aligned} f \circledast \Delta ^G = f *_M \Delta ^G \,, \quad \forall f \in \mathcal C_p^\infty ~, ~\forall G \in \mathcal A~. \end{aligned}$$
(3.22)

It follows from Theorem 3.7 and Remark 2.8 that

$$\begin{aligned} H(x-t)\circledast \delta _s^{(n)}=H(x-t)*_M\delta _s^{(n)} \end{aligned}$$
(3.23)

where \(M=\left\{ t\in \mathbb R:c_t=0\right\} \). Since (cf. Theorem 2.11(v) and Theorem 3.1)

$$\begin{aligned} \xi \circledast F = \xi *_M F= \xi F \,, \quad \forall \xi \in \mathcal C^{\infty } \,, \quad \forall F \in \mathcal A\end{aligned}$$
(3.24)

we get

$$\begin{aligned} (\xi H(x-t))\circledast \delta _s^{(n)} &= \xi \circledast (H(x-t)\circledast \delta _s^{(n)}) \nonumber \\ &= \xi *_M (H(x-t)*_M\delta _s^{(n)})\nonumber \\ &= (\xi H(x-t))*_M\delta _s^{(n)} \end{aligned}$$
(3.25)

where in the first and last steps we used (3.24) and the associativity of \(\circledast \) and \(*_M\), and in the second step we used (3.23) and (3.24).

Moreover, \(\Delta ^G\) is of the form (2.7) and every \(f\in \mathcal C_p^{\infty }\) is the sum of some \(\psi \in \mathcal C^\infty \) with a finite linear combination of terms of the form \(\xi H(x-t),~ \xi \in \mathcal C^{\infty }\). Hence, using the distributive property of \(\circledast \) and \(*_M\), we get (3.22). An analogous argument shows that

$$\begin{aligned} \Delta ^F \circledast g =\Delta ^F*_M g ~, \quad \forall F\in \mathcal A\,, \quad \forall g\in \mathcal C_p^{\infty } ~. \end{aligned}$$
(3.26)

Notice that once the set M is fixed for the product \(g \circledast \Delta ^F\), the product in the reversed order is also fixed, i.e. \(\Delta ^F \circledast g= \Delta ^F *_M g\) (with the same M); this follows from Theorem 3.7(2), which determines that the function \(c_t\) (and thus the set M) is the same in both products.

3) Finally, we have

$$\begin{aligned} \Delta ^F \circledast \Delta ^G= \Delta ^F *_M \Delta ^G =0 \end{aligned}$$
(3.27)

which is true because \(\Delta ^F\) and \(\Delta ^G\) are both of the form (2.7) and so, from the distributive property of \(\circledast \), Theorem 3.6 and the definition of \(*_M\) (2.14), we easily conclude that both products in (3.27) are zero.

Substituting (3.20), (3.22), (3.26), (3.27) in (3.19) we obtain \(F \circledast G=F*_M G\), concluding the proof.\(\square \)
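As a simple illustration of the theorem (it uses only Theorems 3.5 and 3.7 and is not needed for the proof), take \(F=H(x)\) and \(G=H(x)+\delta \in \mathcal A\). Then

```latex
\begin{aligned}
F\circledast G
  = H\circledast H + H\circledast \delta
  = H + c_0\,\delta ~,
\end{aligned}
```

which coincides with \(H *_M (H+\delta )\), where \(0\in M\) if \(c_0=0\) and \(0\notin M\) if \(c_0=1\), in agreement with (3.23).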