Similar Articles
20 similar articles found (search time: 208 ms)
1.
Using the relationship between symmetric polynomials and univariate polynomials, combined with the idea of homotopy continuation, this paper constructs a regular homotopy curve that is valid with probability 1. By discretizing and tracking this homotopy path, a class of Durand-Kerner algorithms with a step-length parameter is derived. We prove the global convergence of this class of algorithms, thereby settling in theory the conjecture that the Durand-Kerner algorithm is globally convergent. The choice of the step-length parameter is also discussed in depth. Finally, sufficient numerical examples are given to verify the correctness of the theory.
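The iteration at the core of the abstract above can be sketched as follows. This is the plain Durand-Kerner method with unit step length (the paper's step-length parameter would scale each correction); the function name and starting guesses are our illustrative choices:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Approximate all roots of a polynomial simultaneously.

    coeffs: coefficients, highest degree first; the polynomial is made
    monic.  Unit step length; a step parameter t would give z - t*w.
    """
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                      # make monic
    n = len(c) - 1
    # customary starting guesses: distinct points spiralling in the plane
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(max_iter):
        w = np.empty(n, dtype=complex)
        for i in range(n):
            # Weierstrass correction for the i-th approximation
            w[i] = np.polyval(c, z[i]) / np.prod(z[i] - np.delete(z, i))
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z
```

For example, the cubic with roots 1, 2, 3 is recovered from `durand_kerner([1, -6, 11, -6])`.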

2.
The Durand-Kerner iteration is a well-known method for the simultaneous approximation of (simple) zeros of a polynomial. By relating the Weierstrass correction to the minimal distance between the approximations, practical conditions for convergence have been obtained. These conditions also ensure the existence of isolating discs for the polynomial roots, i.e. each iteration step gives a refined set of inclusion discs. In this paper refined conditions of convergence are given.
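A condition of the kind described above can be checked numerically: compute the Weierstrass corrections and compare their maximal magnitude with the minimal mutual distance of the approximations. The threshold d/(2n) used below is purely illustrative; the paper derives its own (sharper) constants:

```python
import numpy as np
from itertools import combinations

def weierstrass_check(coeffs, z):
    """Check a Weierstrass-type convergence/inclusion condition.

    W_i = p(z_i) / prod_{j != i} (z_i - z_j) for the monic p, compared
    against the minimal mutual distance d of the approximations.
    Returns (corrections W, d, condition_holds).  The constant 1/(2n)
    is an illustrative placeholder, not the paper's condition.
    """
    p = np.asarray(coeffs, dtype=complex)
    p = p / p[0]
    z = np.asarray(z, dtype=complex)
    n = len(z)
    W = np.array([np.polyval(p, z[i]) / np.prod(z[i] - np.delete(z, i))
                  for i in range(n)])
    d = min(abs(a - b) for a, b in combinations(z, 2))
    return W, d, bool(np.max(np.abs(W)) < d / (2 * n))
```

When the condition holds, discs of radius proportional to |W_i| around the z_i isolate the roots.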

3.
This is a study of the Durand-Kerner and Nourein methods for finding the roots of a given algebraic equation simultaneously. We consider the conditions under which these iterative methods fail. A numerical example is presented.

4.
A hybrid integration algorithm that obtains an indefinite integral of a rational function (say q/r, where q and r are polynomials) with floating-point but real coefficients is proposed. The algorithm consists of four steps and is based on combinations of symbolic and numeric computations (hybrid computation). The first step is a hybrid preprocessing stage: the integrand is decomposed into rational and logarithmic parts by an approximate Horowitz method which allows floating-point coefficients. Here, we replace the Euclidean GCD algorithm with an approximate-GCD algorithm proposed recently by Sasaki and Noda. The rational part is easy to integrate. The logarithmic part is integrated numerically in the second step: the zeros of its denominator are computed by the numerical Durand-Kerner method, which computes all zeros of a polynomial equation simultaneously. The integrand is then decomposed into partial fractions in the third step, with the coefficients of the partial fractions determined by residue theory. Finally, in the fourth step, the partial fractions are transformed into the resulting indefinite integral by using well-known rules of integration. The hybrid algorithm proposed here gives both indefinite integrals and accurate values of definite integrals. Numerical errors in the hybrid algorithm depend only on errors in the second step. The algorithm handles some problems where numerical methods are inefficient or incapable, or where a pure symbolic method is theoretically insufficient.
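Steps two and three above can be illustrated for the simple-root case: for deg q &lt; deg r and r with simple roots z_k, residue theory gives ∫ q/r dx = Σ_k c_k log(x - z_k) with c_k = q(z_k)/r'(z_k). A minimal sketch, with `numpy.roots` standing in for the Durand-Kerner root-finder:

```python
import numpy as np

def log_part_integral_coeffs(q, r):
    """Roots and residues for the logarithmic part of ∫ q/r dx.

    Assumes deg q < deg r and simple roots of r.  Returns (z, c) such
    that the integral of q/r is sum_k c[k] * log(x - z[k]).
    """
    z = np.roots(r)                        # step 2: all zeros at once
    dr = np.polyder(np.asarray(r, dtype=float))
    c = np.polyval(q, z) / np.polyval(dr, z)   # step 3: residues q(z)/r'(z)
    return z, c
```

For q = 1, r = x² - 1 this yields the familiar decomposition 1/(x²-1) = ½/(x-1) - ½/(x+1).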

5.
This paper reviews and extends some of the known results in the estimation in “errors-in-variables” models, treating the structural and the functional cases on a unified basis. The generalized least-squares method proposed by some previous authors is extended to the case where the error covariance matrix contains an unknown vector parameter. This alleviates the difficulty of multiple roots arising from defining estimators as roots to a set of unbiased estimating equations. An alternative method is also considered for cases with both known and unknown error covariance matrix. The relationship between this method and the usual maximum-likelihood and generalized least-squares approaches is also investigated, and it is shown that in a special case they do not necessarily give identical results in finite samples. Finally, asymptotic results are presented.

6.
We consider several possible criteria for comparing splittings used with the conjugate gradient algorithm. We present sharp upper bounds for the error at each step of the algorithm and compare several widely used splittings with respect to their effect on these sharp upper bounds. We then consider a more stringent comparison test, and present necessary and sufficient conditions for the error at each step with one splitting to be smaller than that with another, for all pairs of corresponding initial guesses.

7.
Summary Recently an iterative method for the solution of systems of nonlinear equations having at least R-order 1+√2 for simple roots has been investigated by the author [7]; this method uses as many function evaluations per step as the classical Newton method. In the present note we deal with several properties of the method, such as monotone convergence, asymptotic inclusion of the solution, and convergence in the case of multiple roots.

8.
A Manhattan search algorithm to minimize an artificial neural network error function is outlined in this paper. From an existing position in Cartesian coordinates, a search vector moves in orthogonal directions to locate a minimum function value. The search algorithm computes an optimized step length for rapid convergence; this step is performed when consecutive searches succeed in reducing the function value, and the optimized step length identifies a favorable descent direction. The search method is suitable for complex error surfaces where derivative information is difficult to obtain, or where the error surface is nearly flat and the rate of change in the function value is almost negligible; most derivative-based training algorithms face difficulty in such scenarios. Because the algorithm avoids derivative information of the error function, it is an attractive search method where derivative-based algorithms struggle with complex ridges and flat valleys. If the algorithm gets trapped in a local minimum, the search vector takes steps to move out of it by exploring neighboring descent directions. The algorithm thus differs from first- and second-order derivative-based training methods. To measure its performance, the estimation of an electric energy generation model for the Fiji Islands and the “L-T” letter recognition problem are solved. Bootstrap analysis shows that the algorithm’s predictive and classification abilities are high and that it is reliable when the solution to a problem is unknown; it can therefore identify benchmark solutions.
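The search pattern described above can be sketched as a derivative-free coordinate search: probe ± step along each axis, accept improvements, grow the step after a successful sweep and shrink it after a failed one. This is an illustrative reconstruction, not the paper's exact scheme (growth/shrink factors are our assumptions):

```python
import numpy as np

def manhattan_search(f, x0, step=0.5, shrink=0.5, grow=2.0,
                     min_step=1e-8, max_iter=10000):
    """Derivative-free orthogonal (Manhattan) search sketch."""
    x = np.asarray(x0, dtype=float).copy()
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):      # probe both orthogonal directions
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if improved:
            step *= grow                 # reward a successful sweep
        else:
            step *= shrink               # refine near a (possibly flat) minimum
            if step < min_step:
                break
    return x, fx
```

No gradients are evaluated, so the method still makes progress on flat or ridged error surfaces where derivative-based training stalls.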

9.
In many numerical algorithms, integrals or derivatives of functions have to be approximated by linear combinations of function values at nodes. This ranges from numerical integration to meshless methods for solving partial differential equations. The approximations should use as few nodal values as possible and at the same time have a smallest possible error. For each fixed set of nodes and each fixed Hilbert space of functions with continuous point evaluation, e.g. a fixed Sobolev space, there is an error-optimal method available using the reproducing kernel of the space. But the choice of the nodes is usually left open. This paper shows how to select good nodes adaptively by a computationally cheap greedy method, keeping the error optimal in the above sense for each incremental step of the node selection. This is applied to interpolation, numerical integration, and numerical differentiation. The latter case is particularly important for the design of meshless methods with sparse generalized stiffness matrices. The greedy algorithm is described in detail, and numerical examples are provided. In contrast to the usual practice, the greedy method does not always use nearest neighbors for local approximations of function values and derivatives. Furthermore, it avoids multiple points from clusters and it is better conditioned than choosing nearest neighbors.
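One common greedy criterion of this kind picks, at each step, the candidate where the power function P(x)² = K(x,x) - k(x)ᵀ K_S⁻¹ k(x) of the current node set S is largest. The naive sketch below (brute-force kernel matrix inversion, Gaussian kernel in the test) illustrates the idea only; it is not the paper's implementation:

```python
import numpy as np

def greedy_power_nodes(candidates, kernel, n_nodes):
    """Select nodes greedily by maximizing the power function.

    The power function is the RKHS norm of the pointwise interpolation
    error functional; picking its maximizer keeps each incremental step
    error-optimal for the given kernel space.
    """
    X = np.asarray(candidates, dtype=float)
    # diagonal K(x,x) is the power function for the empty node set
    chosen = [int(np.argmax([kernel(x, x) for x in X]))]
    for _ in range(n_nodes - 1):
        S = X[chosen]
        K = np.array([[kernel(a, b) for b in S] for a in S])
        Kinv = np.linalg.inv(K)          # naive; stable updates exist
        p2 = []
        for x in X:
            kx = np.array([kernel(x, s) for s in S])
            p2.append(kernel(x, x) - kx @ Kinv @ kx)
        chosen.append(int(np.argmax(p2)))
    return chosen
```

Note how the selected nodes spread out over the domain rather than clustering, matching the abstract's remark that the greedy method avoids multiple points from clusters.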

10.
Summary This paper deals with an algorithm for the solution of diffusion and/or convection equations that mixes the method of characteristics and the finite element method. Globally it looks as if one performs one step of transport plus one step of diffusion (or projection), but the mathematics show that it is also an implicit time discretization of the PDE in Lagrangian form. We give an error bound (h + Δt + h²/Δt in the interesting case) that holds also for the Navier-Stokes equations even when the Reynolds number is infinite (Euler equation).

11.
The Newton-Raphson method has always remained the most widely used method for finding simple and multiple roots of nonlinear equations. In past years, many new methods have been introduced for finding multiple zeros that involve the use of a weight function in the second step, thereby increasing the order of convergence and giving the flexibility to generate a family of methods satisfying some underlying conditions. However, in almost all the schemes developed in the past, the usual way is to use a Newton-type method as the first step. In this paper, we present a new two-step optimal fourth-order family of methods for multiple roots (m > 1). The proposed iterative family has flexibility of choice at both steps. The development of the scheme is based on weight functions. The first step not only recaptures Newton's method for multiple roots as a special case but is also capable of defining new choices of first step. A stability analysis of some particular cases is also given, to explain the dynamical behavior of the new methods around the multiple roots and to choose the best values of the free parameters involved. Finally, we compare our methods with existing schemes of the same order on a real-life application as well as on standard test problems. From the numerical results, we find that our methods can be considered a better alternative to existing procedures of the same order.
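The classical Newton-type first step that such families generalize is x - m·f(x)/f'(x) for a root of known multiplicity m, which restores quadratic convergence where plain Newton degrades to linear. A minimal sketch of just this first step (not the paper's full fourth-order scheme):

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
    """Newton's method for a root of known multiplicity m:
    x_{k+1} = x_k - m * f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0 or dfx == 0:          # landed on the root exactly
            break
        step = m * fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x
```

For f(x) = (x-2)³(x+1) and m = 3 the iteration converges rapidly to the triple root x = 2.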

12.
A. Koch, C. Miehe, PAMM, 2003, 2(1): 523-524
A major difficulty in the context of adaptive analysis of geometrically nonlinear problems is to provide a robust remeshing procedure that accounts both for the error caused by the spatial discretization and for the error due to the time discretization. For stability problems, such as strain localization and necking, it is essential to provide a step-size control in order to get a robust algorithm for the solution of the boundary value problem. For this purpose we developed an easy-to-implement step-size control algorithm. In addition we consider possible a posteriori error indicators for the spatial error distribution of elastoplastic problems at finite strains. This indicator is adopted for a density-function-based adaptive remeshing procedure. Both error indicators are combined for the adaptive analysis in time and space. The performance of the proposed method is documented by means of representative numerical examples.

13.
To ensure the proper qualitative behavior of an approximate numerical solution of the Cauchy problem for a system of ordinary differential equations, it is necessary to formulate certain conditions that have to be satisfied by numerical methods. The efficiency of a numerical method is determined by constructing the algorithm for changing the integration step and choosing the order of the method. The construction of such a method requires one to determine beforehand the admissible error of the method in each integration step. A theorem on the evaluation of the local error of multistep numerical pth-order methods with variable integration step, without taking round-off error into account, is formulated. This theorem enables one to construct an efficient algorithm for the step change and the choice of the corresponding order of the method.
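A step-change algorithm built on such a local error estimate typically takes the standard controller form h_new = h · safety · (tol/err)^(1/(p+1)) for a pth-order method, clipped to avoid drastic jumps. A small sketch (the safety factor and clipping bounds are conventional choices, not the paper's):

```python
def new_step(h, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Propose the next integration step for a p-th order method from a
    local error estimate err against tolerance tol."""
    if err == 0:
        return h * fac_max               # error negligible: grow maximally
    fac = safety * (tol / err) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, fac))
```

When the estimated error equals the tolerance, the step shrinks by just the safety factor; when the error is far too large, the clip at `fac_min` prevents an over-aggressive reduction.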

14.
Summary The Chebyshev and second-order Richardson methods are classical iterative schemes for solving linear systems. We consider the convergence analysis of these methods when each step of the iteration is carried out inexactly. This has many applications, since a preconditioned iteration requires, at each step, the solution of a linear system which may be solved inexactly using an inner iteration. We derive an error bound which applies to the general nonsymmetric inexact Chebyshev iteration. We show how this simplifies slightly in the case of a symmetric or skew-symmetric iteration, and we consider both the cases of underestimating and overestimating the spectrum. We show that in the symmetric case, it is actually advantageous to underestimate the spectrum when the spectral radius and the degree of inexactness are both large. This is not true in the case of the skew-symmetric iteration. We show how similar results apply to the Richardson iteration. Finally, we describe numerical experiments which illustrate the results and suggest that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness. This work was supported in part by National Science Foundation Grants DCR-8412314 and DCR-8502014. The work of this author was completed while he was on sabbatical leave at the Centre for Mathematical Analysis and the Mathematical Sciences Research Institute at the Australian National University, Canberra, Australia.

15.
Forcing strong convergence of proximal point iterations in a Hilbert space
This paper concerns convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible, since it amounts, at most, to solving a linear system of two equations in two unknowns. Received January 6, 1998 / Revised version received August 9, 1999 / Published online November 30, 1999
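The underlying iteration is x_{k+1} = (I + cT)^{-1} x_k, i.e. repeated application of the resolvent of the maximal monotone operator T. A one-dimensional toy sketch with T = ∂|x| (whose resolvent is soft-thresholding, and whose zero set is {0}); in ℝ weak and strong convergence coincide, so this only illustrates the exact iteration, not the paper's projection construction:

```python
def proximal_point_abs(x0, c=1.0, n_iter=50):
    """Proximal point iteration for T = subdifferential of |x|.

    The resolvent (I + cT)^{-1} is soft-thresholding: shrink the
    iterate toward 0 by c, snapping to 0 inside [-c, c].
    """
    x = x0
    for _ in range(n_iter):
        if x > c:
            x -= c
        elif x < -c:
            x += c
        else:
            x = 0.0
    return x
```

Each step is the exact subproblem solve; the paper's scheme additionally allows inexact solves plus two halfspace projections to force strong convergence in infinite dimensions.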

16.
We present a combination of two algorithms that accurately calculates multiple roots of general polynomials.

Algorithm I transforms the singular root-finding into a regular nonlinear least squares problem on a pejorative manifold, and it calculates multiple roots simultaneously from a given multiplicity structure and initial root approximations. To fulfill the input requirement of Algorithm I, we develop a numerical GCD-finder containing a successive singular value updating and an iterative GCD refinement as the main engine of Algorithm II that calculates the multiplicity structure and the initial root approximation. While limitations exist in certain situations, the combined method calculates multiple roots with high accuracy and consistency in practice without using multiprecision arithmetic, even if the coefficients are inexact. This is perhaps the first blackbox-type root-finder with such capabilities.

To measure the sensitivity of the multiple roots, a structure-preserving condition number is proposed and error bounds are established. According to our computational experiments and error analysis, a polynomial being ill-conditioned in the conventional sense can be well conditioned with the multiplicity structure being preserved, and its multiple roots can be computed with high accuracy.



17.
In this paper, a parametric variant of the Steffensen-secant method and three fast variants of the Steffensen-secant method for solving nonlinear equations are suggested. They achieve cubic or super-cubic convergence for finding simple roots using only three evaluations of the function per step. Their error equations and asymptotic convergence constants are derived. A modified Steffensen's method and a modified parametric variant of the Steffensen-secant method for finding multiple roots are also discussed. In the numerical examples, the suggested methods are applied to the solution of nonlinear equations and systems of nonlinear equations, as well as to an application in the multiple shooting method.
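The base iteration these variants build on is Steffensen's derivative-free method, x_{n+1} = x_n - f(x_n)² / (f(x_n + f(x_n)) - f(x_n)), which is quadratic for simple roots with two function evaluations per step; the paper's variants spend a third evaluation to reach cubic or super-cubic order. A sketch of the base method only:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's derivative-free iteration for a simple root of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        denom = f(x + fx) - fx           # divided-difference surrogate for f'(x)*f(x)
        if denom == 0:
            return x
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For f(x) = x² - 2 starting from 1.5 the iterates converge quadratically to √2.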

18.
Building on the fixed-step adaptive algorithm for maximum-likelihood ICA estimation, a step-size correction scheme is introduced via a one-dimensional line search, so that the new algorithm attains the best trade-off between the two performance indicators of convergence speed and steady-state misadjustment error, and acquires good tracking ability for time-varying systems. Simulation results confirm that the proposed algorithm effectively improves the adaptivity of ICA and performs blind source separation more accurately. On this basis, the algorithm is applied to highly time-varying stock data to verify its effectiveness and feasibility.

19.
The paper presents an effective version of the Pareto memetic algorithm with path relinking and efficient local search for the multiple objective traveling salesperson problem (TSP). In the multiple objective TSP, multiple costs are associated with each arc (link); they may correspond, for example, to the financial cost of travel along a link, the time of travel, or the risk in the case of hazardous materials. The algorithm searches for new good solutions along paths in the decision space linking two other good solutions selected for recombination. Instead of a simple local search it uses short runs of tabu search based on the steepest version of the Lin-Kernighan algorithm. The efficiency of local search is further improved by the techniques of candidate moves and locked arcs. In the final step of the algorithm, the neighborhood of each potentially Pareto-optimal solution is searched for new solutions that could be added to this set. The algorithm is compared experimentally to state-of-the-art algorithms for the multiple objective TSP.
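For orientation, the simplest tour-improvement move that Lin-Kernighan-style searches generalize is 2-opt: reverse a tour segment whenever that shortens the tour. The sketch below works on a single scalarized cost matrix and is a minimal stand-in for the paper's far stronger tabu search:

```python
import numpy as np

def two_opt(dist, tour):
    """Plain 2-opt local search on a symmetric distance matrix.

    Repeatedly reverses tour[i+1..j] while the exchange shortens the
    tour; returns a locally optimal tour.
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i = 0 to avoid touching the same edge twice
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

In the multi-objective setting one such local search run would be applied per scalarization or within the tabu search, with candidate moves and locked arcs pruning the (i, j) loops.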

20.

Zhang Neural Networks rely on convergent 1-step-ahead finite difference formulas, of which very few are known. Those that are known were constructed in ad hoc ways and suffer from low truncation error orders. This paper develops a constructive method to find convergent look-ahead finite difference schemes of higher truncation error orders. The method consists of seeding the free variables of a linear system comprised of Taylor expansion coefficients, followed by a minimization algorithm for the maximal-magnitude root of the formula's characteristic polynomial. This lets us find new convergent 1-step-ahead finite difference formulas of any truncation error order. Once a polynomial has been found with roots inside the complex unit circle and no repeated roots on it, the associated look-ahead ZNN discretization formula is convergent and can be used for solving any discretized ZNN-based model. Our method recreates and validates the few known convergent formulas, all of which have truncation error orders at most 4. It also creates new convergent 1-step-ahead difference formulas with truncation error orders 5 through 8.
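The convergence test stated above (all characteristic roots inside the closed unit circle, no repeated roots on it) is easy to check numerically; tolerances and the function name below are our illustrative choices:

```python
import numpy as np

def is_convergent_formula(char_coeffs, tol=1e-9):
    """Zero-stability check for a look-ahead difference formula.

    char_coeffs: characteristic polynomial coefficients, highest power
    first.  Returns True iff all roots lie in the closed unit disc and
    no root on the unit circle is repeated.
    """
    r = np.roots(char_coeffs)
    mags = np.abs(r)
    if np.any(mags > 1 + tol):
        return False                     # a root strictly outside: divergent
    on_circle = r[np.abs(mags - 1) <= tol]
    for i in range(len(on_circle)):
        for j in range(i + 1, len(on_circle)):
            if abs(on_circle[i] - on_circle[j]) <= 1e-8:
                return False             # repeated root on the circle
    return True
```

For example, the characteristic polynomial λ - 1 (simple root on the circle) passes, while λ - 2 fails.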


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号