Abstract

Optimization problems defined by (objective) functions whose derivatives are unavailable or expensive to evaluate are emerging in computational science. Accordingly, the main aim of this paper is to attain the highest possible order of local convergence with a fixed number of (functional) evaluations, and thereby to construct efficient solvers for one-variable nonlinear equations, while keeping the procedure completely derivative-free. To this end, we build on the fourth-order uniparametric family of Kung and Traub to suggest and analyze two classes of three-step derivative-free methods that use only four pieces of information per full iteration to reach the optimal order eight and the optimal efficiency index 1.682. Moreover, a large number of numerical tests are provided to confirm the applicability and efficiency of the methods produced from the new classes.

1. Introduction

This paper focuses on finding approximate solutions of nonlinear scalar and sufficiently smooth equations $f(x) = 0$ by derivative-free methods. Techniques such as the false position method for root-finding require bracketing of the zero by two initial guesses. Such schemes are called bracketing methods. These methods are almost always convergent, since they are based on shrinking the interval between the two guesses so as to zero in on the root of the equation. In Newton's method, the zero is not bracketed; in fact, only one initial guess of the solution is needed to start the iterative process of finding the zero. The method hence falls into the category of open methods.

Convergence in open methods is not guaranteed, but if the method does converge, it does so much faster than the bracketing methods [1]. Although Newton's iteration has been widely discussed and improved in the literature, see for example [2, 3], one of its main drawbacks, namely the need to evaluate the first derivative, occasionally restricts the application of this method or its variants. On the other hand, when improving iterative methods without memory for solving one-variable nonlinear equations, the conjecture of Kung and Traub [4] is automatically taken into consideration. It should be remarked that, according to the unproved conjecture of Kung and Traub [4], the efficiency index of a multipoint method without memory cannot exceed the maximum level $2^{(n-1)/n}$, wherein $n$ is the total number of (functional) evaluations per iteration.
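
In other words, a method of order $p$ that uses $n$ (functional) evaluations per iteration has the efficiency index $E = p^{1/n}$, so the conjectured optimal order for $n$ evaluations is $2^{n-1}$. For the four-evaluation methods constructed in this paper, this gives
$$E_{\text{opt}} = \left(2^{4-1}\right)^{1/4} = 8^{1/4} = 2^{3/4} \approx 1.682.$$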

Besides requiring the first derivative, Newton's iteration also uses the second derivative of the function when it is applied to optimization problems to find local minima. Its use is therefore limited, especially when the first and second derivatives of the function are expensive to compute. Consequently, derivative-free algorithms come to attention.

For the first time, Steffensen [5] gave the following derivative-free form of Newton's iteration, which possesses the same rate of convergence and efficiency index as Newton's:
$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}, \quad n = 0, 1, 2, \ldots. \tag{1.1}$$
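
For reference, a minimal Mathematica sketch of iteration (1.1) may read as follows; the routine name steffensen, the stopping rule, and the sample problem are our own illustrative choices and not part of the original scheme.

(* Illustrative sketch of Steffensen's iteration (1.1); the name,
   tolerance, and test problem are our own choices. *)
steffensen[f_, x0_, tol_ : 10^-50, maxIter_ : 100] :=
  NestWhile[(# - f[#]^2/(f[# + f[#]] - f[#])) &, x0,
   Abs[f[#]] > tol &, 1, maxIter];
(* Example: the root x = 2 of x^3 - 8, in 100-digit arithmetic. *)
steffensen[#^3 - 8 &, N[5/2, 100]]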

In this work, we suggest novel classes of three-step four-point iterative methods, which are without memory, derivative-free, optimal, and therefore well suited to hard problems. The contents of the paper unfold as follows. Section 2 presents our main contribution, namely some generalizations of the fourth-order uniparametric family of Kung and Traub [4]. Section 3 then gives a very short discussion of the derivative-free methods available in the literature. Section 4 discusses the implementation of the new methods produced from our classes on a large number of numerical examples. Finally, a short conclusion is given in Section 5.

2. Main Outcome

In order to contribute a class of derivative-free methods with high efficiency index, we take into consideration the optimal two-step fourth-order uniparametric family of Kung and Traub [4] in the first two steps of a three-step cycle, in which Newton's iteration is performed in the last step, as follows, where $f[a, b] = (f(a) - f(b))/(a - b)$ denotes the first-order divided difference:
$$w_n = x_n + \beta f(x_n), \quad \beta \in \mathbb{R} \setminus \{0\}, \qquad y_n = x_n - \frac{f(x_n)}{f[x_n, w_n]},$$
$$z_n = y_n - \frac{f(w_n)\, f(y_n)}{(f(w_n) - f(y_n))\, f[x_n, y_n]}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \tag{2.1}$$
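
To make the construction concrete, one cycle of (2.1) can be sketched in Mathematica as follows; the helper names dd and kt4NewtonStep and the sample problem are illustrative assumptions of ours, not notation of the paper.

(* One full cycle of scheme (2.1); names and test problem are illustrative. *)
dd[f_, a_, b_] := (f[a] - f[b])/(a - b);       (* divided difference f[a, b] *)
kt4NewtonStep[f_, x_, beta_] := Module[{w, y, z},
   w = x + beta f[x];
   y = x - f[x]/dd[f, x, w];
   z = y - (f[w] f[y])/((f[w] - f[y]) dd[f, x, y]);
   z - f[z]/f'[z]];                            (* Newton's step closes the cycle *)
(* Example: converging to the root x = 2 of x^3 - 8 in 200-digit arithmetic. *)
FixedPointList[kt4NewtonStep[#^3 - 8 &, #, 1/100] &, N[21/10, 200], 5]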

This scheme includes four evaluations of the function and one of its first-order derivative per cycle to reach order eight, and hence $8^{1/5} \approx 1.516$ as its efficiency index. To improve the efficiency index, we first use the same divided-difference approximation as in the second step of (2.1) to annihilate the derivative $f'(z_n)$, and then take advantage of the weight function approach. This yields the class (2.2), in which four real-valued weight functions, whose arguments are ratios of the already computed function values (written without the iteration index $n$), force the order to reach the maximum level eight with the fixed number of four evaluations per cycle; the conditions on the weight functions are collected in (2.3). Theorem 2.1 shows that (2.2) arrives at local eighth-order convergence using only four function evaluations per full cycle. This reveals that any method from our proposed class possesses the efficiency index $8^{1/4} \approx 1.682$, which is optimal according to the Kung and Traub conjecture, while being fully free from derivative evaluations.

Theorem 2.1. Let $\alpha$ be a simple zero of a sufficiently differentiable function $f : D \subseteq \mathbb{R} \to \mathbb{R}$, so that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. If the initial approximation $x_0$ is sufficiently close to $\alpha$, then (i) the local order of convergence of the class of methods without memory defined in (2.2) is eight, when the weight functions satisfy the conditions (2.3), and (ii) the iteration satisfies the error equation (2.4).

Proof. We expand the terms of (2.2) around the simple root $\alpha$ in the $n$th iterate, taking into account $f(\alpha) = 0$, $f'(\alpha) \neq 0$, and writing $c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))$, $k \geq 2$, and $e_n = x_n - \alpha$. Thus, we write $f(x_n) = f'(\alpha)\,(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots)$. Accordingly, we attain the expansions of $w_n - \alpha$ and $y_n - \alpha$; in the same vein, we obtain the expansion of $z_n - \alpha$, together with the corresponding expansions of $f(w_n)$, $f(y_n)$, and $f(z_n)$. Now, using symbolic computation in the last step of (2.2), we attain the expansion (2.7) of $x_{n+1} - \alpha$. Furthermore, by imposing on (2.7) the conditions (2.3) on the real-valued weight functions, we obtain the error equation (2.4). This shows that the proposed class of derivative-free methods (2.2)-(2.3) reaches the optimal eighth-order convergence by using only four function evaluations per full iteration. This ends the proof.
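
The symbolic computation invoked in the proof can be mechanized in Mathematica. Since the weight functions of (2.2) are not reproduced here, the sketch below applies the same Taylor-expansion technique to the derivative-involved scheme (2.1) and confirms its eighth order; all names are illustrative choices of ours.

(* Order verification by symbolic Taylor expansion about the root:
   c[k] stands for f^(k)(alpha)/(k! f'(alpha)) and fp for f'(alpha). *)
fs[t_] := fp (t + Sum[c[k] t^k, {k, 2, 9}]);   (* Taylor model of f about the root *)
dd[u_, v_] := (fs[u] - fs[v])/(u - v);         (* divided difference *)
w = e + b fs[e];                               (* w_n - alpha, with b = beta *)
y = e - fs[e]/dd[e, w];                        (* y_n - alpha *)
z = y - (fs[w] fs[y])/((fs[w] - fs[y]) dd[e, y]);  (* z_n - alpha *)
enew = z - fs[z]/(D[fs[t], t] /. t -> z);      (* x_{n+1} - alpha *)
(* The coefficients of e, ..., e^7 vanish; the expansion starts at O(e^8). *)
Series[enew, {e, 0, 8}] // Simplify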

Now, any optimal three-step four-point derivative-free method without memory can be produced by using (2.2)-(2.3). As an instance, a particular choice of the weight functions satisfying the conditions (2.3) yields the concrete method (2.8), whose error equation reads as in (2.9).

We should recall here that, per computing step of any method from the new class, the required function values and divided differences should be computed only once, and their values then reused throughout the rest of the cycle wherever required.

Many nonlinear functions arise from complex environmental engineering problems, where the objective depends on the output of a numerical simulation of a physical process. These simulators are expensive to evaluate because they involve numerically solving systems of partial differential equations governing the underlying physical phenomena. Function evaluation remains the dominant expense in such optimization problems even as computers become faster, since the savings in time are often offset by increases in simulation accuracy. For these reasons, algorithms like (2.2)-(2.3), which need no derivative evaluations, have optimal order, and possess a high efficiency index, are important for hard problems.

Before moving to the next sections, we discuss another similar class of derivative-free methods, which is obtained by choosing a different approximation in the first step of (2.1) and somewhat different weight functions in the last step. Incorporating these choices into (2.1), we obtain the derivative-free class without memory (2.10), whose order is shown in Theorem 2.2 to be eight for appropriately chosen weight functions. Here, again, four real-valued weight functions act on ratios of the computed function values (written without the iteration index $n$), and the conditions on them are gathered in (2.11).

Theorem 2.2. Let $\alpha$ be a simple zero of a sufficiently differentiable function $f : D \subseteq \mathbb{R} \to \mathbb{R}$ with $f'(\alpha) \neq 0$. If the initial approximation $x_0$ is sufficiently close to $\alpha$, then (i) the local order of convergence of the class of methods without memory defined in (2.10) is eight, when the weight functions satisfy the conditions (2.11), and (ii) the iteration satisfies the corresponding error equation.

Proof. The proof of this theorem is similar to the proof of Theorem 2.1. It is hence omitted.

Now, by using (2.10)-(2.11), we can obtain further efficient optimal eighth-order derivative-free methods without memory: one particular choice of the weight functions gives a first concrete method together with its error equation, and another choice gives a second example from the new class of optimal iterations (2.10)-(2.11), again with its own error equation.

3. A Brief Look at the Literature

In this section, we briefly present some of the well-known high-order derivative-free techniques for finding the simple zeros of nonlinear equations, for the sake of comparison. Kung and Traub [4] introduced the following two-step iteration without memory:
$$y_n = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad x_{n+1} = y_n - \frac{f(w_n)\, f(y_n)}{(f(w_n) - f(y_n))\, f[x_n, y_n]}, \tag{3.1}$$
where $w_n = x_n + \beta f(x_n)$; this is in fact the first two steps of our novel class (2.2)-(2.3) in this paper. They moreover gave a four-point eighth-order iterative scheme without memory, (3.2), built by inverse interpolation, wherein again $w_n = x_n + \beta f(x_n)$. Soleymani [6] suggested a three-step derivative-free seventh-order method, (3.3). For further reading, one should consult the papers [7–10] and the references therein.

Remark 3.1. From a computational point of view, the efficiency index of our classes of derivative-free methods without memory (2.2)-(2.3) (and of the class (2.10)-(2.11)), namely $8^{1/4} \approx 1.682$, is greater than the 1.414 of Newton's and Steffensen's methods, the 1.565 of the sixth-order derivative-free technique given in [11], the 1.587 of (3.1), and the 1.626 of (3.3), and is equal to the 1.682 of the family (3.2).

4. Numerical Experiments

We now check the effectiveness of the novel derivative-free classes of iterative methods. To do this, we choose (2.8) as the representative of our class (2.2)-(2.3). Note that the methods derived from the class (2.10)-(2.11) could be used as well.

We have compared (2.8) with Steffensen's method (1.1), the fourth-order family of Kung and Traub (3.1) (with a fixed choice of the parameter $\beta$), the seventh-order technique of Soleymani (3.3), and the optimal eighth-order family of Kung and Traub (3.2) (again with a fixed $\beta$), using the examples listed in Table 1. The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; thus, for example, an entry of 207 in the row of iteration (1.1) shows that the absolute value of the given nonlinear function after nine iterations is zero up to 207 decimal places.
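
Such comparisons can be reproduced in arbitrary-precision arithmetic. The following sketch counts the significant digits, measured as the floor of $-\log_{10} |f(x_k)|$, along Steffensen's iteration (1.1); the test function and the starting point are illustrative choices of ours, not entries of Table 1.

(* Count digits of accuracy along the iterates of (1.1); the test
   function and x0 are illustrative only. *)
f[x_] := Exp[x] Sin[x] + Log[1 + x^2];          (* simple zero at x = 0 *)
step[x_] := x - f[x]^2/(f[x + f[x]] - f[x]);    (* one step of (1.1) *)
iterates = NestList[step, N[3/10, 500], 9];     (* nine iterations, 500 digits *)
Floor[-Log10[Abs[f[#]]]] & /@ iterates          (* digits gained per iterate *)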

It is important to review the proof of convergence for our proposed classes of methods (or the methods compared in Table 2) before implementing them. Specifically, one should review the assumptions made in the proofs whenever the iterations diverge: when a method fails to converge, it is because these assumptions are not met. For this reason, we give below a Mathematica 8 code to extract initial approximations to all the zeros of a nonlinear function in an interval.

It can be observed from Table 2 that in most cases the contributed method from the class (2.2)-(2.3) is superior in solving nonlinear equations. Generally speaking, iterative methods of the same order and efficiency, and with the same background, that is, Newton-type or Steffensen-type methods, have similar numerical outputs due to their similar character. Note also that experimental results show that the smaller the value of $|\beta|$ is, the more accurate the output results of solving nonlinear equations will be.

We have completed Table 2 for two different initial guesses. In Table 2, IT and TNE stand for the number of iterations and the total number of (functional) evaluations, respectively. To have a fair comparison, we used a higher number of iterations for (1.1). The numerical results for the test functions support the theoretical discussion given in Section 2. We also point out that the procedure for applying the new iterative methods to nonsmooth functions is similar to that recently illustrated in [12]. For multiple zeros, we must first apply a transformation converting the multiple zero into a simple one, and then employ our iterative methods on the transformed equation. Moreover, for functions with inexact coefficients, one remedy is to rationalize the coefficients first and then apply the desired scheme to obtain the solutions with high accuracy.
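
One classical transformation of this kind, recalled here only as an example, replaces $f$ by
$$g(x) = \frac{f(x)}{f'(x)},$$
which has a simple zero wherever $f$ has a zero of any multiplicity $m \geq 1$; since this reintroduces the derivative, a derivative-free variant would approximate $f'$ by a suitable divided difference before our schemes are applied.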

An important aspect of implementing high-order nonlinear solvers is finding very robust initial guesses to start the process when high-precision computing is needed. As discussed in Section 1, the convergence of our iterative methods is only local. To resolve this shortcoming, the best way is to rely on hybrid algorithms, in which the first component produces a robust initial point and the second component employs the new iterative methods when high precision is required. There are some ways in the literature to find robust starting points, mostly based on interval mathematics; see, for example, [13]. Herein, however, we use the programming package Mathematica 8 [14], which can be applied efficiently to lists in high-precision computing. In fact, using [15], we can build a list of initial guesses close enough, with good accuracy, to start the procedure of our optimal derivative-free eighth-order methods. The procedure for finding such a robust list is based on the powerful command NDSolve, applied to the nonlinear function on an interval $[a, b]$. This is realized in the following piece of Mathematica code, considering the oscillatory function $f(x) = \sin(x)\cot(x) + \cos(x^2) + 0.5$ as the input test function on the domain $[-2, 12]$; see Algorithm 1.

f[x_] := Sin[x] Cot[x] + Cos[x^2] + 0.5;
a = -2.; b = 12.;
zeros = Reap[
    soln = y[x] /. First[
       NDSolve[{y'[x] == Evaluate[D[f[x], x]], y[b] == f[b]}, y[x], {x, a, b},
        Method -> {"EventLocator", "Event" -> y[x],
          "EventAction" :> Sow[{x, y[x]}]}]]][[2, 1]];
initialPoints = Sort[Flatten[Take[zeros, Length[zeros], 1]]]

Thus, we now have an efficient list of initial approximations to the zeros of a nonlinear, once-differentiable function with finitely many zeros in an interval. The number of zeros, and the graph of the function including the positions of the zeros, can be obtained by the following commands (see Figure 1 and Algorithm 2).

Length[initialPoints]
Plot[f[x],{x,a,b},Epilog->{PointSize[Medium], Red, Point[zeros]},
PlotRange->All, PerformanceGoal->"Quality", PlotStyle->{Thick, Brown}]

For this test, there are 33 zeros in the considered interval, which can easily be used as the starting points for our proposed high-order derivative-free methods. Note that the output vector initialPoints contains the initial approximations. In this test problem, the list of zeros is {−1.48293, 1.48293, 2.18909, 2.72517, 3.38886, 3.71339, 4.15977, 4.55782, 4.7902, 7.67254, 7.78846, 8.04476, 8.20923, 8.40603, 8.60286, 8.75671, 8.9734, 9.09881, 9.32263, 9.43461, 9.65251, 9.76525, 9.9662, 10.0901, 10.2669, 10.4082, 10.557, 10.719, 10.8379, 11.023, 11.11, 11.3224, 11.3721}. We end this section by mentioning that, for very oscillatory functions, it is better to first divide the interval into smaller subintervals and then obtain the solutions. By default, NDSolve uses a maximum of 10000 steps; if needed, this can be changed through its MaxSteps option. In cases when NDSolve fails, this algorithm may fail too.
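
The harvested starting points can then be polished by any of the iterative methods; as a minimal sketch, we reuse the illustrative steffensen routine given after (1.1) in place of a method such as (2.8), whose explicit form is not reproduced here.

(* Polish every harvested starting point to high precision; for genuinely
   high precision, restate the constant 0.5 in f as the exact 1/2. *)
refinedZeros = steffensen[f, SetPrecision[#, 200]] & /@ initialPoints;
Max[Abs[f /@ refinedZeros]]                     (* sanity check on residuals *)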

5. Concluding Remarks and Future Works

The importance and applications of nonlinear solvers have motivated the construction of many new methods since the beginning of the new century. Furthermore, when derivative evaluation is expensive or the derivative is simply unavailable, the need for higher-order methods with high efficiency index that require no derivative evaluations per full cycle is felt more and more in scientific computing.

Hence, this paper has put forward two wide classes of optimal eighth-order methods without memory for solving nonlinear scalar equations numerically. The merits of the methods produced from our classes are a high efficiency index, complete freedom from derivatives, high accuracy in numerical examples, and consistency with the conjecture of Kung and Traub. The analytical proof of the main contribution was given in Section 2, and the stability and efficiency of the methods were tested in Section 4. The numerical results attest to the theoretical aspects as well as to the high efficiency index, achieving the highest possible order with as few evaluations as possible. We have also pointed out some notes regarding multiple zeros and how to extract initial approximations for nonlinear functions with finitely many zeros in an interval.

Constructing with-memory iterations based on the classes (2.2)-(2.3) and (2.10)-(2.11) could be of interest for future work.