Similar Documents
20 similar documents found (search time: 890 ms)
1.
This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness and Lipschitzian assumptions, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and B-differentiable versions of Newton’s method for nonsmooth Lipschitzian equations.
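The graphical-derivative scheme itself cannot be reconstructed from the abstract alone; as a minimal sketch of the generalized-Newton idea on a nonsmooth equation, one can take any element of a generalized derivative in place of f'(x). The function f, the derivative selection g, and the starting points below are illustrative choices, not the paper's.

```python
# Generalized Newton sketch for a nonsmooth scalar equation
# f(x) = |x| + 2x - 3, which is not differentiable at x = 0 and has
# the unique root x = 1.  At each iterate, one element g of a
# generalized derivative of f is used in place of f'(x)
# (here: sign(x) + 2, taking g = 3 at x = 0 as a selection).

def f(x):
    return abs(x) + 2.0 * x - 3.0

def g(x):
    # One selection from the generalized derivative of f at x.
    return (1.0 if x >= 0 else -1.0) + 2.0

def generalized_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / g(x)
    return x

print(generalized_newton(5.0))   # converges to 1.0
print(generalized_newton(-4.0))  # also converges to 1.0
```

Because f is piecewise linear, the iteration lands on the root as soon as an iterate reaches the piece containing it; on general nonsmooth equations the convergence is only local and superlinear, as the abstract states.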

2.
In this paper we report a sparse truncated Newton algorithm for handling large-scale nonlinear minimization problems with simple bounds. The truncated Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.

3.
A new algorithm is presented for carrying out large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time.
When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.

4.
A new reconstruction algorithm is proposed for X-ray phase-contrast tomography under holographic measurement. The main idea of the algorithm is to solve the nonlinear phase-retrieval problem by Newton's iterative method. We prove that the linear equation satisfied by the Newton direction is ill-posed, and we obtain a regularized solution of this equation by the conjugate gradient method. Finally, numerical experiments with simulated data verify the soundness of the algorithm and its numerical stability with respect to noisy data; a comparison with a linearized phase-retrieval algorithm shows that the new algorithm does not require the measured data to be restricted to the near field of the Fresnel region and thus has a wider range of applicability.

5.
李慧茹 《经济数学》2002,19(1):85-94
By defining a new *-differential, this paper gives a Newton method for systems of locally Lipschitz nonsmooth equations and studies its global convergence. The method combines the local convergence and the global convergence of Newton methods for nonsmooth systems of equations. Finally, we apply this Newton method to systems of smooth compositions of nonsmooth functions and obtain good convergence properties.

6.
An efficient algorithm is described for calculating stationary one-dimensional transonic outflow solutions of the compressible Euler equations with gravity and heat source terms. The stationary equations are solved directly by exploiting their dynamical system form. Transonic expansions are the stable manifolds of saddle-point-type critical points, and can be obtained efficiently and accurately by adaptive integration outward from the critical points. The particular transonic solution and critical point that match the inflow boundary conditions are obtained by a two-by-two Newton iteration which allows the critical point to vary within the manifold of possible critical points. The proposed Newton Critical Point (NCP) method typically converges in a small number of Newton steps, and the adaptively calculated solution trajectories are highly accurate. A sample application area for this method is the calculation of transonic hydrodynamic escape flows from extrasolar planets and the early Earth. The method is also illustrated for an example flow problem that models accretion onto a black hole with a shock.

7.
陈志  邓乃扬  薛毅 《计算数学》1992,14(3):322-329
§1. Introduction. We solve the linear system a_i^T x = b_i, i = 1, 2, …, n, (1.1) where a_1, a_2, …, a_n are linearly independent. Let y^(1) be an initial point and U^(1) an arbitrary nonsingular matrix of order n; the system (1.1) is then solved as follows. First consider the underdetermined system formed by the first k−1 equations, a_i^T x = b_i, i = 1, 2, …, k−1. Let {U^(k)} be the orthogonal complement of span{a_1, a_2, …, a_(k−1)}, where {U^(k)} denotes the subspace spanned by the columns of U^(k); clearly rank(U^(k)) = n − k + 1. If y^(k) is a particular solution of the corresponding underdetermined system, it is regarded as …

8.
In this paper, a new smoothing Newton method is proposed for solving constrained nonlinear equations. We first transform the constrained nonlinear equations to a system of semismooth equations by using the so-called absolute value function of the slack variables, and then present a new smoothing Newton method for solving the semismooth equations by constructing a new smoothing approximation function. This new method is globally and quadratically convergent. It needs to solve only one system of unconstrained equations and to perform one line search at each iteration. Numerical results show that the new algorithm works quite well.
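As a minimal sketch of the smoothing-Newton idea (assuming the standard square-root smoothing of |x|, not the paper's own approximation function): the nonsmooth equation |x| − 1/2 = 0 is replaced by a smooth one with parameter mu, solved by Newton's method, and mu is driven toward zero. The test equation, starting point, and parameter schedule are illustrative choices.

```python
import math

# Smoothing-Newton sketch: approximate |x| by the smooth function
# sqrt(x^2 + mu^2), solve the smoothed equation by Newton's method,
# and tighten the smoothing parameter mu between solves.

def smoothed_f(x, mu):
    # smooth stand-in for f(x) = |x| - 0.5
    return math.sqrt(x * x + mu * mu) - 0.5

def smoothed_fprime(x, mu):
    return x / math.sqrt(x * x + mu * mu)

def smoothing_newton(x, mu=0.25):
    while mu > 1e-14:
        for _ in range(20):  # Newton steps on the smoothed equation
            x -= smoothed_f(x, mu) / smoothed_fprime(x, mu)
        mu *= 0.1            # drive the smoothing parameter to zero
    return x

print(smoothing_newton(2.0))  # approaches x = 0.5, a root of |x| - 0.5 = 0
```

Each smoothed equation is convex and increasing for x > 0, so the inner Newton loop converges; the smoothed roots sqrt(1/4 − mu²) converge to the nonsmooth root 1/2 as mu shrinks.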

9.
Many optimization problems can be reformulated as a system of equations. One may use the generalized Newton method or the smoothing Newton method to solve the reformulated equations so that a solution of the original problem can be found. Such methods have been powerful tools to solve many optimization problems in the literature. In this paper, we propose a Newton-type algorithm for solving a class of monotone affine variational inequality problems (AVIPs for short). In the proposed algorithm, the techniques based on both the generalized Newton method and the smoothing Newton method are used. In particular, we show that the algorithm can find an exact solution of the AVIP in a finite number of iterations under an assumption that the solution set of the AVIP is nonempty. Preliminary numerical results are reported.

10.
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton corrections. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton–Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
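The quadratic loss penalty approach described above can be sketched as follows; the toy problem, starting point, and parameter schedule are illustrative assumptions, and the higher-order extrapolation between subproblems is omitted.

```python
# Quadratic loss penalty sketch for: min x1^2 + x2^2  s.t.  x1 + x2 = 1
# (exact solution (0.5, 0.5)).  Each penalized subproblem
#   min f(x) + c(x)^2 / (2*mu)
# is quadratic here, so a single Newton step solves it exactly;
# mu is then reduced and the subproblem re-solved.

def penalty_path(mu0=1.0, shrink=0.1, n_steps=6):
    x = [0.0, 0.0]
    mu = mu0
    for _ in range(n_steps):
        # gradient of f(x) + c(x)^2/(2*mu), f = x1^2 + x2^2, c = x1 + x2 - 1
        c = x[0] + x[1] - 1.0
        g = [2.0 * x[0] + c / mu, 2.0 * x[1] + c / mu]
        # Hessian [[2 + 1/mu, 1/mu], [1/mu, 2 + 1/mu]]: solve H d = -g
        a, b = 2.0 + 1.0 / mu, 1.0 / mu
        det = a * a - b * b
        d = [(-g[0] * a + g[1] * b) / det, (-g[1] * a + g[0] * b) / det]
        x = [x[0] + d[0], x[1] + d[1]]  # one exact Newton step (quadratic model)
        mu *= shrink                    # tighten the penalty
    return x

print(penalty_path())  # near (0.5, 0.5) once mu is small
```

By symmetry the subproblem minimizer is x1 = x2 = 1/(2·mu + 2), so the iterates trace the penalty trajectory toward the constrained solution as mu → 0; Gould's and Dussault's schemes accelerate exactly this tracking by extrapolation.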

11.
Newton’s method for unconstrained optimization problems on the Euclidean space can be generalized to that on Riemannian manifolds. The truncated singular value problem is one particular problem defined on the product of two Stiefel manifolds, and an algorithm of the Riemannian Newton’s method for this problem has been designed. However, this algorithm is not easy to implement in its original form because the Newton equation is expressed by a system of matrix equations which is difficult to solve directly. In the present paper, we propose an effective implementation of the Newton algorithm. A matrix-free Krylov subspace method is used to solve a symmetric linear system into which the Newton equation is rewritten. The presented approach can be used on other problems as well. Numerical experiments demonstrate that the proposed method is effective for the above optimization problem.

12.
苏剑  李开泰 《计算数学》2008,30(3):235-246
This paper solves the three-dimensional steady rotating Navier-Stokes equations with mixed boundary conditions by a primitive-variable finite element method. The existence and uniqueness of the solution of the discrete problem are proved, and an optimal error estimate for the finite element solution is obtained. A simple iteration algorithm for computing the primitive-variable finite element approximation is given, and its convergence is proved. To cope with the computational resource limits of the three-dimensional setting, the nonzero entries of the stiffness matrix are stored in compressed sparse row format, and the linear systems are solved by the GMRES method preconditioned with an incomplete LU factorization. Finally, the relative merits of simple iteration and Newton iteration are analyzed; numerical examples show that, for the same accuracy, simple iteration takes less computing time.

13.
For unconstrained optimization, an inexact Newton algorithm was proposed recently, in which the preconditioned conjugate gradient method is applied to solve the Newton equations. In this paper, we improve this algorithm by efficiently using automatic differentiation and establish a new inexact Newton algorithm. Based on the efficiency coefficient defined by Brent, a theoretical efficiency ratio of the new algorithm to the old algorithm is introduced. It has been shown that this ratio is greater than 1, which implies that the new algorithm is always more efficient than the old one. Furthermore, this improvement is significant at least for some cases. This theoretical conclusion is supported by numerical experiments.

14.
We introduce a new algorithm, namely two-step relaxation Newton, for solving algebraic nonlinear equations f(x)=0. This new algorithm is derived by combining two different relaxation Newton algorithms introduced by Wu et al. (Appl. Math. Comput. 201:553–560, 2008); therefore, with a special choice of the so-called splitting function, it can be implemented in parallel and runs stably with much less memory and CPU time than the Newton–Raphson method. Global convergence of the algorithm is established, and numerical experiments show that the new algorithm is feasible and effective, outperforming the original relaxation Newton algorithm and the Newton–Raphson method in both iteration count and CPU time.

15.
We present an algorithm for large-scale unconstrained optimization based on Newton's method. In large-scale optimization, solving the Newton equations at each iteration can be expensive and may not be justified when far from a solution. Instead, an inaccurate solution to the Newton equations is computed using a conjugate gradient method. The resulting algorithm is shown to have strong convergence properties and has the unusual feature that the asymptotic convergence rate is a user-specified parameter which can be set to anything between linear and quadratic convergence. Some numerical results on a 916-variable test problem are given. Finally, we contrast the computational behavior of our algorithm with Newton's method and that of a nonlinear conjugate gradient algorithm. This research was supported in part by DOT Grant CT-06-0011, NSF Grant ENG-78-21615, and grants from the Norwegian Research Council for Sciences and the Humanities and the Norway-American Association. This paper was originally presented at the TIMS-ORSA Joint National Meeting, Washington, DC, May 1980.
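A minimal sketch of the truncated-Newton idea described above, assuming a standard forcing-term rule and the 2-D Rosenbrock function as a stand-in test problem (the paper's 916-variable problem is not reproduced): the inner conjugate gradient loop solves the Newton equations only approximately, and Hessian-vector products are formed matrix-free by differencing the gradient.

```python
import math

def f(x):
    # 2-D Rosenbrock test function, minimum at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
            200 * (x[1] - x[0]**2)]

def hess_vec(x, v):
    # H*v by central differencing of the gradient; H is never formed
    nv = math.sqrt(v[0]**2 + v[1]**2) or 1.0
    eps = 1e-6 / nv
    gp = grad([x[0] + eps * v[0], x[1] + eps * v[1]])
    gm = grad([x[0] - eps * v[0], x[1] - eps * v[1]])
    return [(gp[i] - gm[i]) / (2 * eps) for i in range(2)]

def cg(x, g, tol, max_iter=20):
    # approximately solve H d = -g, truncating early
    d, r = [0.0, 0.0], [-gi for gi in g]
    p, rs = r[:], r[0]**2 + r[1]**2
    for _ in range(max_iter):
        hp = hess_vec(x, p)
        curv = p[0] * hp[0] + p[1] * hp[1]
        if curv <= 0:                       # negative curvature: truncate
            break
        alpha = rs / curv
        d = [d[i] + alpha * p[i] for i in range(2)]
        r = [r[i] - alpha * hp[i] for i in range(2)]
        rs_new = r[0]**2 + r[1]**2
        if math.sqrt(rs_new) < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(2)]
        rs = rs_new
    return d if d != [0.0, 0.0] else [-gi for gi in g]  # steepest-descent fallback

def truncated_newton(x, tol=1e-8, max_outer=200):
    for _ in range(max_outer):
        g = grad(x)
        gn = math.sqrt(g[0]**2 + g[1]**2)
        if gn < tol:
            break
        d = cg(x, g, tol=min(0.5, math.sqrt(gn)) * gn)  # inexact inner solve
        t, gd = 1.0, g[0] * d[0] + g[1] * d[1]
        for _ in range(60):                 # Armijo backtracking line search
            if f([x[0] + t * d[0], x[1] + t * d[1]]) <= f(x) + 1e-4 * t * gd:
                break
            t *= 0.5
        x = [x[0] + t * d[0], x[1] + t * d[1]]
    return x

print(truncated_newton([-1.2, 1.0]))  # near the minimizer (1, 1)
```

Tightening the forcing term min(0.5, sqrt(||g||))·||g|| is exactly the knob the abstract mentions: a looser inner tolerance gives a cheaper, more linear method, a tighter one approaches quadratic Newton convergence.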

16.
When solving large complex optimization problems, the user is faced with three major problems. These are (i) the cost in human time in obtaining accurate expressions for the derivatives involved; (ii) the need to store second derivative information; and (iii), of lesser importance, the time taken to solve the problem on the computer. For many problems, a significant part of the latter can be attributed to solving Newton-like equations. In the algorithm described, the equations are solved using a conjugate direction method that only needs the Hessian at the current point when it is multiplied by a trial vector. In this paper, we present a method that finds this product using automatic differentiation while only requiring vector storage. The method takes advantage of any sparsity in the Hessian matrix and computes exact derivatives. It avoids the complexity of symbolic differentiation, the inaccuracy of numerical differentiation, the labor of finding analytic derivatives, and the need for matrix storage. When far from a minimum, an accurate solution to the Newton equations is not justified, so an approximate solution is obtained by using a version of Dembo and Steihaug's truncated Newton algorithm (Ref. 1). This paper was presented at the SIAM National Meeting, Boston, Massachusetts, 1986.
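The Hessian-times-vector idea described above can be sketched with a tiny forward-mode automatic-differentiation class; the objective f(x) = x0²·x1 + x1³ and its hand-coded gradient are illustrative assumptions, not from the paper.

```python
# Matrix-free Hessian-vector product via forward-mode AD (dual numbers)
# applied to an analytically coded gradient: H*v is exact, and only
# vector storage is ever used -- the Hessian matrix is never formed.

class Dual:
    """Number a + b*eps with eps^2 = 0; b carries a directional derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def grad(x):
    # analytic gradient of f(x) = x0^2 * x1 + x1^3
    return [2 * x[0] * x[1], x[0] * x[0] + 3 * x[1] * x[1]]

def hess_vec(x, v):
    # evaluate the gradient on dual numbers x_i + v_i*eps;
    # the eps-parts of the result are exactly the components of H v
    xd = [Dual(x[i], v[i]) for i in range(len(x))]
    return [gi.b for gi in grad(xd)]

# Hessian of f at (1, 2) is [[2*x1, 2*x0], [2*x0, 6*x1]] = [[4, 2], [2, 12]]
print(hess_vec([1.0, 2.0], [1.0, 0.0]))  # first column: [4.0, 2.0]
```

Probing with unit vectors recovers Hessian columns one at a time, which is how such a product routine also exploits sparsity: only the vectors x, v, and the gradient output are ever held in memory.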

17.
A third-order variant of Newton's method
Based on an integral equation derived from the inverse function, combined with Simpson's rule, a new root-finding method for nonlinear equations, a variant of Newton's method, is presented. The method is proved to converge at least cubically to a simple root; compared with Newton's method, it improves both the order of convergence and the efficiency index. Numerical experiments are reported, with comparisons against Newton's method and variants of the same type. The results show that the method performs well and enriches the family of root-finding methods for nonlinear equations.
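The abstract does not fully specify the scheme, so below is one common Simpson-based third-order variant of Newton's method for comparison: the derivative in the Newton step is replaced by a Simpson's-rule average of f' between the iterate and its ordinary Newton update. The test equation x³ = 2 is an illustrative choice.

```python
# Simpson-based third-order Newton variant (one common form; the paper's
# exact scheme may differ).  Each step costs one f and three f' evaluations.

def simpson_newton(f, fp, x, tol=1e-14, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x - fx / fp(x)                            # ordinary Newton step
        denom = (fp(x) + 4 * fp((x + z) / 2) + fp(z)) / 6
        x = x - fx / denom                            # Simpson-averaged step
    return x

f = lambda x: x**3 - 2
fp = lambda x: 3 * x**2
print(simpson_newton(f, fp, 1.5))  # cube root of 2, approximately 1.259921
```

Replacing f'(x) by a quadrature average of f' over the Newton interval is what lifts the order from two to three at the cost of extra derivative evaluations, which is why the abstract compares efficiency indices rather than iteration counts alone.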

18.
A class of discrete Newton algorithms for solving systems of nonlinear equations
§1. Introduction. Consider the system of nonlinear equations F(x) = 0. Let x_k be the current iterate. To compute the next iterate, Newton's method solves the Newton equation; if the derivatives are replaced by difference quotients, the discrete Newton method solves the corresponding finite-difference equation, which requires n² function evaluations to form the difference approximation J(x_k; h) of the Jacobian. To improve efficiency, Brown's method [1] reduces the number of function evaluations by substitution and elimination, obtaining the next iterate x_{k+1} from x_k through one sweep of inner iterations. Write x_k = (z_1, …, z_n)^T and let t = (t_1, …, t_n)^T be the variable. The basic idea of Brown's method is as follows: approximate f_1(x) linearly at x_k, solve for t_1, and substitute the result into the second function, which yields a function of t_2, …, t_n; when (t_2, …, t_n)^T = (z_2, …, z_n)^T, by (1.4), …

19.
A Newton method to solve total least squares problems for Toeplitz systems of equations is considered. When coupled with a bisection scheme, which is based on an efficient algorithm for factoring Toeplitz matrices, global convergence can be guaranteed. Circulant and approximate factorization preconditioners are proposed to speed convergence when a conjugate gradient method is used to solve linear systems arising during the Newton iterations. The work of the second author was partially supported by a National Science Foundation Postdoctoral Research Fellowship.

20.
This paper investigates a pseudotransient continuation algorithm for solving a system of nonsmooth equations with inequality constraints. We first transform the inequality constrained system of nonlinear equations to an augmented nonsmooth system, and then employ the pseudotransient continuation algorithm for solving the corresponding augmented nonsmooth system. The method gets its global convergence properties from the dynamics, and inherits its local convergence properties from the semismooth Newton method. Finally, we illustrate the behavior of our approach by some numerical experiments.
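A minimal sketch of pseudotransient continuation on a smooth scalar equation (the paper's semismooth setting and augmented system are not reproduced; F(x) = arctan x, the initial delta, and the SER update rule are illustrative assumptions):

```python
import math

# Pseudotransient continuation sketch: each step solves
#   (1/delta + F'(x)) * s = -F(x),
# so a small delta gives a damped, Euler-like step and a large delta
# approaches a full Newton step; delta grows by switched evolution
# relaxation (SER) as the residual decreases.  Plain Newton from
# x0 = 10 diverges for F(x) = atan(x); the continuation converges.

def psi_tc(x, delta=0.1, tol=1e-10, max_iter=1000):
    fx = math.atan(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        s = -fx / (1.0 / delta + 1.0 / (1.0 + x * x))  # F'(x) = 1/(1+x^2)
        x += s
        fx_new = math.atan(x)
        delta *= abs(fx) / max(abs(fx_new), 1e-300)    # SER update of delta
        fx = fx_new
    return x

print(abs(psi_tc(10.0)))  # near 0, the root of atan(x) = 0
```

This mirrors the abstract's division of labor: the pseudo-time dynamics (small delta far from a solution) provide globalization, while letting delta grow recovers the fast local convergence of the underlying Newton step.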
