Abstract

In earlier work (George, 2010), we considered a modified Gauss-Newton method for the approximate solution of a nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The analysis in George, 2010 was carried out using a majorizing sequence. In this paper, we consider the same modified Gauss-Newton method, but the convergence analysis and the error estimate are obtained by analyzing the odd and even terms of the iteration sequence separately. We use the adaptive method of Pereverzev and Schock, 2005 for choosing the regularization parameter. The optimality of this method is proved under a general source condition. A numerical example involving a nonlinear integral equation illustrates the performance of the procedure.

“Dedicated to Prof. Ulrich Tautenhahn”

1. Introduction

Inverse problems have been one of the fastest growing research areas in applied mathematics in recent decades. It is well known that these problems typically lead to mathematical models that are ill-posed in the sense of Hadamard [1]: a solution may fail to exist, fail to be unique, or fail to depend continuously on the data.

In this paper, we consider the task of approximately solving the nonlinear ill-posed equation F(x) = y. (1) This equation and the task of solving it make sense only when placed in an appropriate framework. Throughout this paper, we will assume that F : D(F) ⊆ X → Y is a nonlinear operator between Hilbert spaces X and Y, with inner products and corresponding norms denoted by ⟨·,·⟩ and ‖·‖, respectively. We assume that (1) has a unique solution x̂. For δ > 0, let y^δ ∈ Y be the available noisy data with ‖y − y^δ‖ ≤ δ. (2) Since (1) is ill-posed, regularization methods are used to obtain stable approximate solutions [2, 3]. Iterative regularization methods are one such class of regularization methods [4–8].

In [4], Bakushinskii proposed an iterative method, namely the iteratively regularized Gauss-Newton method, in which the iterations are defined by x_{k+1}^δ = x_k^δ − (A_k* A_k + α_k I)^{-1}[A_k*(F(x_k^δ) − y^δ) + α_k(x_k^δ − x_0)], (3) where A_k := F'(x_k^δ) (here and in the following, F'(·) denotes the Fréchet derivative of F) and {α_k} is a sequence of positive real numbers with α_k → 0 satisfying α_k/α_{k+1} ≤ μ for some constant μ > 1. For the convergence analysis, Bakushinskii used the following Hölder-type source condition on the exact solution x̂ of (1): x̂ − x_0 ∈ R((F'(x̂)* F'(x̂))^ν) for some ν > 0. Later, Hohage [9, 10] and Langer and Hohage [11] considered the iteratively regularized Gauss-Newton method under different source conditions and stopping rules.
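As an illustration of the update rule above, the iteratively regularized Gauss-Newton step can be sketched on a toy scalar problem. The operator F(x) = x³, the data, and all parameter values below are hypothetical choices for illustration, not taken from the paper.

```python
# Iteratively regularized Gauss-Newton (IRGN) sketch on a toy scalar problem.
# F, y_delta, x0, alpha0 and q are illustrative choices, not from the paper.

def F(x):            # toy forward operator
    return x ** 3

def Fprime(x):       # its Fréchet (here: ordinary) derivative
    return 3.0 * x ** 2

y_delta = 8.001      # noisy data for exact data y = 8 (noise level 1e-3)
x0 = 1.5             # initial guess
x = x0
alpha = 1.0          # regularization parameters alpha_k = alpha0 * q**k
q = 0.5

for k in range(25):
    A = Fprime(x)
    # IRGN update:
    # x_{k+1} = x_k - (A*A + a_k)^{-1} [A*(F(x_k) - y^d) + a_k (x_k - x0)]
    x = x - (A * (F(x) - y_delta) + alpha * (x - x0)) / (A * A + alpha)
    alpha *= q

print(x)  # close to the cube root of 8.001
```

As α_k → 0 the penalty pulling the iterate toward x0 vanishes and the iteration behaves like a damped Newton method for the noisy equation.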

In [5], Bakushinskii generalized the procedure in [4] by considering a generalized form of the regularized Gauss-Newton method in which the iterations are defined with A_k and α_k as in (3) and a piecewise continuous filter function in place of the Tikhonov-type regularizer. It should be noted that the convergence of (3) was also shown by Bakushinsky and Smirnova in [12]. In [6], Blaschke et al. considered the above generalized procedure with a stopping index chosen by a discrepancy-type criterion, and the error estimate is obtained under the following Hölder-type source condition:

Recently, Mahale and Nair [8] considered the iteration procedure (9), where α_k is as in (3) and the associated filter function is a positive real-valued piecewise continuous function. In [8], the stopping index for the iteration is chosen so that the discrepancy criterion holds with a sufficiently large constant not depending on δ, and

To prove the results in [8], Mahale and Nair considered the following general source condition: x̂ − x_0 = φ(A_0* A_0) w (12) for some w ∈ X, where A_0 := F'(x_0). Here, φ is a continuous, strictly monotonically increasing function satisfying lim_{λ→0} φ(λ) = 0.

Note that the source conditions (5) and (8) involve the Fréchet derivative of F at the exact solution x̂, which is unknown in practice. The source condition (12), in contrast, depends only on the Fréchet derivative of F at the initial guess x_0.

In [7], Kaltenbacher considered a further iteration procedure and proved that the sequence of iterates converges to the critical point of the Tikhonov functional, characterized by its first-order optimality condition. In order to obtain an error estimate in [7], two kinds of conditions, (a) and (b), are used, each with its own restriction on the source element.

In [13], the author considered a particular case of the method (9), namely the method (17), for approximately solving (1). The analysis in [13] was carried out using a suitably constructed majorizing sequence, and the stopping rule in [13] was based on this majorizing sequence.
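A minimal sketch of a frozen-derivative (modified) Gauss-Newton iteration on a toy scalar problem may help fix ideas: the Fréchet derivative is evaluated once at the initial guess x0 and reused in every step. The operator F(x) = x³ and all parameter values below are hypothetical illustrations, not the paper's equation (17) verbatim.

```python
# Modified Gauss-Newton sketch: the derivative is frozen at the initial guess.
# F, y_delta, x0 and alpha are illustrative; this is not the paper's (17) verbatim.

def F(x):
    return x ** 3

x0 = 1.8
A0 = 3.0 * x0 ** 2        # Fréchet derivative F'(x0), computed once and reused
y_delta = 8.001           # noisy data for exact data y = 8
alpha = 0.1               # fixed regularization parameter

x = x0
for n in range(60):
    # x_{n+1} = x_n - (A0^2 + alpha)^{-1} [A0 (F(x_n) - y^d) + alpha (x_n - x0)]
    x = x - (A0 * (F(x) - y_delta) + alpha * (x - x0)) / (A0 * A0 + alpha)

# At convergence x satisfies A0 (F(x) - y^d) + alpha (x - x0) = 0.
residual = abs(A0 * (F(x) - y_delta) + alpha * (x - x0))
print(x, residual)
```

Freezing the derivative trades the quadratic convergence of Newton's method for much cheaper steps, since the regularized linear operator is factored only once.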

Recall [14] that a nonnegative, monotonically increasing sequence {t_n} (i.e., t_n ≤ t_{n+1} for all n) is said to be a majorizing sequence of a sequence {x_n} in X if ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n for all n ≥ 0.

The majorizing sequence in [13] depends on the initial guess x_0, and the conditions required for its construction (see [13]) are restrictive, so the method is not well suited for practical use.

In this paper, we consider the sequence (17) and analyze it by considering its even and odd terms separately and obtain the optimal order of the error. The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock in [15].

The organization of this paper is as follows. The proposed method and its convergence analysis are given in Section 2. Error analysis and the parameter choice strategy are discussed in Section 3. Section 4 deals with the implementation of the method, a numerical example is given in Section 5, and the paper ends with a conclusion in Section 6.

2. Convergence of the Method (17)

Let where and is the initial guess.

Remark 1. It can be seen that and , where is defined as in (17).

Assumption 2. There exists a constant k_0 > 0 such that for every x, u ∈ B(x_0, r) ∩ D(F) and v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying [F'(x) − F'(u)]v = F'(u)Φ(x, u, v) and ‖Φ(x, u, v)‖ ≤ k_0 ‖v‖ ‖x − u‖ for all x, u ∈ B(x_0, r) ∩ D(F) and v ∈ X.

Let

Hereafter, for convenience, we use the notation , , and for , , and , respectively.

Let δ ∈ (0, δ_0]; the parameter α is selected from some finite set specified below. Throughout this paper, we assume that the operator F is Fréchet differentiable at all x ∈ D(F).

Remark 3. Note that if , then by Assumption 2, we have This can be seen as follows:

Using the inequality (24), we prove the following.

Theorem 4. Let , , where . Let be as in (22), and let and be as in (19) and (20), respectively, with and . Then, we have the following:(a), (b),(c),(d).

Proof. Observe that if , for all , then by Assumption 2, we have and hence This proves (a).
Again observe that if , for all , and hence by Assumption 2 and (27), we have This proves (b).
Thus, if , for all , then (c) follows from (a) and (b). Now, we will prove using induction that , for all . Note that , and hence by (27) and Remark 3, that is, , again by (29) and Remark 3, that is, . Suppose that for some . Then, since we shall first find an estimate for . Note that by (a), (b), and (c), we have Therefore by (32) and Remark 3, we have that is, . So by induction for all . Again by (a), (b) and (34) we have Thus, , and hence by induction, for all . This completes the proof of the theorem.

The main result of Section 2 is the following.

Theorem 5. Let and be as in (19) and (20), respectively, with , and assumptions of Theorem 4 hold. Then, is a Cauchy sequence in and converges to . Further, , and

Proof. Using the relation (33), we obtain Thus, is a Cauchy sequence in , and hence it converges, say to . Observe that and . Hence, also converges to .
Now, by in (20), we obtain , that is,
This completes the proof.

3. Error Analysis

The next assumption is a source condition, based on a source function and a property of that function. We will use this assumption to obtain an error estimate for the regularized approximation.

Assumption 6. There exists a continuous, strictly monotonically increasing function with satisfying(i),(ii), for all ,(iii)there exists such that

Theorem 7. Let be as in (38). Then,

Proof. Let . Then, and hence by (38), we have . Thus, where , , and . Note that by Assumption 6 and by Assumption 2 The result now follows from (42), (43), (44), and (45). This completes the proof of the theorem.

3.1. Error Bounds under Source Conditions

Combining the estimates in Theorems 5 and 7, we obtain the following.

Theorem 8. Let be defined as in (20). If all assumptions of Theorems 5 and 7 are fulfilled, then Further, if , then where .

3.2. A Priori Choice of the Parameter

Observe that the upper bound φ(α) + δ/√α in Theorem 8 is of optimal order for the choice α = α_δ which satisfies φ(α_δ) = δ/√α_δ. Now, using the function ψ(λ) := λ √(φ^{-1}(λ)), λ ∈ (0, a], we have δ = √α_δ φ(α_δ) = ψ(φ(α_δ)), so that α_δ = φ^{-1}(ψ^{-1}(δ)). Here, ψ^{-1} denotes the inverse function of ψ.
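For a Hölder-type source function the computation above can be made explicit. Assuming φ(λ) = λ^ν for some ν ∈ (0, 1] (an illustrative choice, not fixed by the paper), one obtains:

```latex
\varphi(\lambda)=\lambda^{\nu}
\;\Longrightarrow\;
\varphi^{-1}(\lambda)=\lambda^{1/\nu},
\qquad
\psi(\lambda)=\lambda\sqrt{\varphi^{-1}(\lambda)}=\lambda^{\frac{2\nu+1}{2\nu}},
\qquad
\psi^{-1}(\delta)=\delta^{\frac{2\nu}{2\nu+1}},
\]
\[
\alpha_\delta=\varphi^{-1}\bigl(\psi^{-1}(\delta)\bigr)=\delta^{\frac{2}{2\nu+1}},
\qquad
\varphi(\alpha_\delta)+\frac{\delta}{\sqrt{\alpha_\delta}}
  = 2\,\delta^{\frac{2\nu}{2\nu+1}},
```

so the a priori choice recovers the familiar order-optimal Hölder rate δ^{2ν/(2ν+1)}.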

Theorem 9. Suppose that all assumptions of Theorems 5 and 7 are fulfilled. For δ > 0, let α_δ = φ^{-1}(ψ^{-1}(δ)), and let n_δ be as in Theorem 8 with α = α_δ. Then, the error is of order O(ψ^{-1}(δ)).

3.3. Adaptive Choice of the Parameter

In the balancing principle considered by Pereverzev and Schock in [15], the regularization parameter α is selected from some finite set D_M := {α_i : 0 ≤ i ≤ M}, where α_i = μ^i α_0 for some μ > 1 and α_0 > 0. Let x_i denote the approximation obtained with α = α_i, where the stopping index is defined as in (20) with α = α_i and δ as above. Then, from Theorem 8, we have ‖x̂ − x_i‖ ≤ C (φ(α_i) + δ/√α_i). Precisely, we choose the regularization parameter α = α_k from the set defined by D := {α_i ∈ D_M : ‖x_i − x_j‖ ≤ 4δ/√α_j, j = 0, 1, …, i}, where k = max{i : α_i ∈ D}.
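The selection rule above can be sketched in a few lines. The toy problem below (a scalar Tikhonov reconstruction for a linear operator) and all numerical values are illustrative stand-ins; only the selection logic mirrors the balancing principle.

```python
import math

def balancing_alpha(xs, alphas, delta, c=4.0):
    """Balancing-principle sketch: pick the largest alpha_i whose
    reconstruction stays within c*delta/sqrt(alpha_j) of every
    reconstruction computed with a smaller parameter alpha_j, j <= i."""
    k = 0
    for i in range(len(alphas)):
        if all(abs(xs[i] - xs[j]) <= c * delta / math.sqrt(alphas[j])
               for j in range(i + 1)):
            k = i
    return alphas[k]

# Toy linear problem: A x = y with A = 2, exact x = 1, noisy data y_delta.
A = 2.0
delta = 1e-4
y_delta = 2.0 + delta
# Geometric grid alpha_i = mu^i * alpha_0 and the Tikhonov reconstructions.
mu, alpha0, M = 1.5, 1e-8, 40
alphas = [alpha0 * mu ** i for i in range(M + 1)]
xs = [A * y_delta / (A * A + alpha) for alpha in alphas]

alpha_star = balancing_alpha(xs, alphas, delta)
```

The appeal of the rule is that it compares computable quantities only: no knowledge of the source function φ is needed at selection time.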

To obtain a conclusion from this parameter choice, we consider all possible functions φ satisfying Assumptions 2 and 6. Any such function is called admissible for x̂, and it can be used as a measure for the speed of convergence (see [16]).

The main result of Section 3 is the following.

Theorem 10. Assume that there exists such that . Let the assumptions of Theorems 5 and 7 be fulfilled, and let where is as in Theorem 8. Then, and

Proof. The proof is analogous to the proof of Theorem 4.4 in [13] and is therefore omitted here.

4. Implementation of the Method

Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 10 involves the following steps: (i) choose α_0 > 0 and μ > 1; (ii) choose M big enough, but not too large, and α_i := μ^i α_0, i = 0, 1, …, M; (iii) choose the initial guess x_0.

4.1. Algorithm

(1) Set i = 0.
(2) Choose α_i = μ^i α_0.
(3) Solve for x_i by using the iteration in (19) and (20) with α = α_i and the given δ.
(4) If ‖x_i − x_j‖ > 4δ/√α_j for some j ∈ {0, 1, …, i − 1}, then take k = i − 1 and return x_k.
(5) Else set i = i + 1, and return to step (2).

5. Numerical Example

We apply the algorithm by choosing a sequence of finite-dimensional subspaces V_n of X with dim V_n = n + 1. Precisely, we choose V_n as the linear span of {v_1, v_2, …, v_{n+1}}, where the v_i are linear splines on a uniform grid of n + 1 points in [0, 1].

We consider the same example of a nonlinear integral operator as in [17, Section 4.3]. Let F : D(F) ⊆ L²(0, 1) → L²(0, 1) be defined by F(u)(t) := ∫₀¹ k(t, s) u(s)³ ds, where k(t, s) is the Green's kernel k(t, s) = (1 − t)s for 0 ≤ s ≤ t ≤ 1 and k(t, s) = (1 − s)t for 0 ≤ t < s ≤ 1. The Fréchet derivative of F is given by F'(u)w(t) = 3 ∫₀¹ k(t, s) u(s)² w(s) ds.
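The operator and its Fréchet derivative can be discretized, for instance, with the midpoint rule. The grid size and the test functions below are arbitrary choices for illustration; the finite-difference check at the end verifies that the derivative formula matches the operator to first order.

```python
import math

# Midpoint-rule discretization of F(u)(t) = int_0^1 k(t,s) u(s)^3 ds
# and its Fréchet derivative; n and the test functions are illustrative.
n = 50
w = 1.0 / n                               # quadrature weight
grid = [(i + 0.5) / n for i in range(n)]  # midpoints of [0, 1]

def k(t, s):
    # Green's kernel: (1-t)s for s <= t, (1-s)t otherwise
    return (1.0 - t) * s if s <= t else (1.0 - s) * t

def F(u):
    return [w * sum(k(t, s) * us ** 3 for s, us in zip(grid, u))
            for t in grid]

def F_prime(u, v):
    # (F'(u)v)(t) = 3 * int k(t,s) u(s)^2 v(s) ds
    return [w * sum(3.0 * k(t, s) * us ** 2 * vs
                    for s, us, vs in zip(grid, u, v))
            for t in grid]

# Finite-difference check: F(u + eps*v) - F(u) = eps * F'(u)v + O(eps^2).
u = list(grid)
v = [math.cos(x) for x in grid]
eps = 1e-4
lhs = F([ui + eps * vi for ui, vi in zip(u, v)])
rhs = [fi + eps * gi for fi, gi in zip(F(u), F_prime(u, v))]
err = max(abs(a - b) for a, b in zip(lhs, rhs))
```

With eps = 1e-4 the mismatch err is of order eps², several orders of magnitude below the linear term, confirming the derivative formula on the discrete level.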

Note that for , where .

A direct computation with the representation of F' above shows that Assumption 2 is satisfied.

In our computation, we take and . Then, the exact solution We use as our initial guess, so that the function satisfies the source condition where .

Observe that while performing the numerical computation on a finite-dimensional subspace V_n of X, one has to consider the operator P_n F instead of F, where P_n is the orthogonal projection onto V_n. Thus, the computation incurs an additional discretization error.

Let . For the operator defined in (57), (cf. [18]). Thus, we expect to obtain the rate of convergence .

We choose , , , and . The results of the computation are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.

6. Conclusion

In this paper, we considered a modified Gauss-Newton method for approximately solving the nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The same method was considered in [13] by the author, but the analysis there was based on a suitably constructed majorizing sequence. In this paper, we analyzed the iteration by considering its even and odd terms separately; this analysis is simpler than that of [13]. We used the adaptive method of Pereverzev and Schock [15] for choosing the regularization parameter. The optimality of this method was proved under a general source condition. Finally, a numerical example involving a nonlinear integral equation demonstrated the performance of the method.