Similar Articles
1.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; moreover, the amount of storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems, since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix–vector product itself is the vector being approximated; it uses the reduced gradient and information from two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.
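The matrix-free idea in this abstract, applying an approximate inverse reduced Hessian to a vector using only a couple of stored pairs, can be illustrated with the standard L-BFGS two-loop recursion restricted to memory 2. This is a generic sketch of that mechanism, not the paper's exact 2-step formula (the restart-iteration detail is omitted):

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Approximate H^{-1} @ grad via the L-BFGS two-loop recursion.

    s_list, y_list hold the most recent step / gradient-difference
    pairs; keeping only two pairs mimics the 2-step approximation
    described above.  No matrix is ever stored."""
    q = grad.copy()
    alphas = []
    # backward pass: newest pair first
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # initial scaling H0 = gamma * I (standard Barzilai-Borwein-like choice)
    s, y = s_list[-1], y_list[-1]
    gamma = (s @ y) / (y @ y)
    r = gamma * q
    # forward pass: oldest pair first
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r += (a - b) * s
    return r
```

For a quadratic with eigenvector-aligned pairs the recursion reproduces the exact inverse-Hessian action.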

2.
In this paper, we consider the nonsymmetric algebraic Riccati equation arising in transport theory. An important feature of this equation is that its minimal positive solution can be obtained via computing the minimal positive solution of a vector equation. We apply the Newton–Shamanskii method to solve the vector equation. Convergence analysis shows that the sequence of vectors generated by the Newton–Shamanskii method is monotonically increasing and converges to the minimal positive solution of the vector equation. Numerical experiments show that the Newton–Shamanskii method is feasible and effective, and outperforms the Newton method. Copyright © 2008 John Wiley & Sons, Ltd.
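The Newton–Shamanskii idea, evaluating and factoring the Jacobian once and then reusing it for several cheap chord corrections, can be sketched generically. The solver and test equation below are illustrative, not the transport-theory vector equation itself:

```python
import numpy as np

def newton_shamanskii(F, J, x0, chord_steps=2, tol=1e-12, max_outer=50):
    """Newton-Shamanskii iteration: one Jacobian evaluation per outer
    iteration, reused for `chord_steps` inner chord corrections.
    chord_steps=1 recovers plain Newton."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Jx = J(x)  # evaluated (and, in practice, factored) once
        for _ in range(chord_steps):
            fx = F(x)
            if np.linalg.norm(fx) < tol:
                return x
            x = x - np.linalg.solve(Jx, fx)
    return x
```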

3.
When Newton's method is applied to find the maximal symmetric solution of an algebraic Riccati equation, convergence can be guaranteed under moderate conditions. In particular, the initial guess need not be close to the solution. The convergence is quadratic if the Fréchet derivative is invertible at the solution. In this paper we examine the behaviour of the Newton iteration when the derivative is not invertible at the solution. We find that a simple modification can improve the performance of the Newton iteration dramatically.
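One classical "simple modification" in the singular-derivative setting, for scalar equations with a root of known multiplicity m, is to scale the Newton step by m; the sketch below shows the resulting speedup on a double root. This is an illustrative analogue, not necessarily the modification used in the paper:

```python
def newton(f, fp, x0, scale=1.0, iters=30):
    """Scalar Newton iteration with an optional step-scaling factor.
    When f has a double root, plain Newton (scale=1) converges only
    linearly with rate 1/2; scaling the step by the multiplicity
    (scale=2) restores fast convergence."""
    x = x0
    for _ in range(iters):
        d = fp(x)
        if d == 0.0:  # derivative vanished (e.g. root reached exactly)
            break
        x = x - scale * f(x) / d
    return x
```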



4.
We consider an iterative preconditioning technique for non-convex large-scale optimization. First, we address the solution of large-scale indefinite linear systems by a Krylov subspace method, and describe the iterative construction of a preconditioner that involves neither matrix products nor matrix storage. The set of directions generated by the Krylov subspace method is used, as a by-product, to provide an approximate inverse preconditioner. We then test our preconditioner within truncated Newton schemes for large-scale unconstrained optimization, where we generalize the truncation rule of Nash–Sofer (Oper. Res. Lett. 9:219–221, 1990) to the indefinite case. We use a Krylov subspace method both to approximately solve the Newton equation and to construct the preconditioner used at the current outer iteration. Extensive numerical experiments show that the proposed preconditioning strategy, compared with the unpreconditioned strategy and PREQN (Morales and Nocedal, SIAM J. Optim. 10:1079–1096, 2000), may reduce the overall number of inner iterations. Finally, we show that our proposal has some similarities with the limited-memory preconditioners of Gratton et al. (SIAM J. Optim. 21:912–935, 2011).

5.
In this paper we give an overview of the present state of fast solvers for the incompressible Navier-Stokes equations discretized by the finite element method and linearized by Newton's or Picard's method. It is shown that block preconditioners form an excellent approach for the solution; however, if the grids are not too fine, preconditioning with a saddle-point ILU matrix (SILU) may be an attractive alternative. The applicability of all methods to stabilized elements is investigated. In the case of the stand-alone Stokes equations, special preconditioners increase the efficiency considerably.

6.
Timo Hylla, E. W. Sachs, PAMM 2007, 7(1):1060507-1060508
Optimal control problems involving PDEs often lead in practice to the numerical computation of feedback laws for an optimal control. This is achieved through the solution of a Riccati equation which can be large scale, since the discretized problems are large scale and require special attention in their numerical solution. The Kleinman-Newton method is a classical way to solve an algebraic Riccati equation. We look at two versions of an extension of this method to an inexact Newton method. It can be shown that these two implementable versions of Newton's method are identical in the exact case, but differ substantially for the inexact Newton method. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

7.
Data sharpening involves perturbing the data to improve the performance of a statistical method. The versions proposed in the past have been for bias reduction in curve estimation, with the amount of perturbation of each datum determined by an explicit formula. This article suggests a distance-based form of data sharpening, in which the sum of the distances that data are moved is minimized subject to a constraint imposed on an estimator. The constraint could be one that leads to bias reduction, to variance or variability reduction, or to a curve estimator being monotone or unimodal. In contrast to earlier versions of the method, in the form presented here the amount and extent of sharpening are determined implicitly, typically as the solution of a Lagrange-multiplier equation. Sometimes the solution can be found by Newton–Raphson iteration, although when qualitative constraints are imposed it usually requires quadratic programming or a related method.
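A toy version of the constrained-perturbation idea: move the data as little as possible (in summed squared distance) subject to a constraint on the estimator, with the Lagrange multiplier found by Newton–Raphson. The mean-matching constraint here is a hypothetical stand-in for the paper's estimator constraints:

```python
import numpy as np

def sharpen_to_mean(x, target):
    """Minimal least-squares data sharpening subject to the constraint
    that the sharpened sample mean equals `target`.  The Lagrange
    stationarity condition gives y_i = x_i - lam/(2n); Newton-Raphson
    on the constraint residual phi(lam) = mean(y(lam)) - target finds
    lam (phi is linear here, so one step already suffices)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = lambda lam: x - lam / (2 * n)
    lam = 0.0
    for _ in range(5):  # Newton-Raphson on phi(lam)
        phi = y(lam).mean() - target
        dphi = -1.0 / (2 * n)  # derivative of mean(y(lam)) w.r.t. lam
        lam -= phi / dphi
    return y(lam)
```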

8.
This paper concerns the analysis of an iterative procedure for the solution of a nonlinear reaction–diffusion equation at steady state in a two-dimensional bounded domain, supplemented by suitable boundary conditions. This procedure, called the Lagged Diffusivity Functional Iteration (LDFI) procedure, computes the solution by "lagging" the diffusion term. A model problem is considered, a finite difference discretization for it is described, and properties of the finite difference operator are proved. Sufficient conditions for the convergence of the LDFI procedure are then given. At each stage of the LDFI procedure a weakly nonlinear algebraic system has to be solved, and the simplified Newton–Arithmetic Mean (Newton–AM) method is used; this method is particularly well suited for implementation on parallel computers. Numerical studies show the efficiency, for different test functions, of the LDFI procedure combined with the simplified Newton–AM method. Better results are obtained when the reaction–diffusion equation also contains a convection term.
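The "lagging" device can be shown in its simplest form on a scalar analogue of the nonlinear problem: freeze the solution-dependent coefficient at the previous iterate so each step is linear. This is an illustrative 0-D sketch of the LDFI idea, not the paper's 2-D finite-difference scheme:

```python
def lagged_diffusivity(a, b, u0, iters=100, tol=1e-12):
    """Lagged fixed-point iteration for the scalar analogue
    a(u) * u = b of a nonlinear diffusion problem: the coefficient
    a(.) is frozen ("lagged") at the previous iterate, so each step
    solves only a linear problem in the new unknown."""
    u = u0
    for _ in range(iters):
        u_new = b / a(u)   # linear solve with the frozen coefficient
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

Convergence holds when the induced fixed-point map is a contraction, which mirrors the sufficient conditions derived in the paper.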

9.
We explore the idea of using nonlinear schemes as preconditioners in Newton–Krylov schemes for unsteady flow computations. Analysis shows that left preconditioning changes the Newton scheme in a non-equivalent way, leading to a stall in Newton convergence, whereas right preconditioning leads to a sound method. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
For the algebraic Riccati equation whose four coefficient matrices form a nonsingular M-matrix or an irreducible singular M-matrix K, the minimal nonnegative solution can be found by Newton's method or the doubling algorithm. When the two diagonal blocks of the matrix K have both large and small diagonal entries, the doubling algorithm often requires many more iterations than Newton's method, and Newton's method may then be more efficient. This has motivated us to study Newton-like methods that have higher-order convergence and are not much more expensive per iteration. We find that the Chebyshev method of order three and a two-step modified Chebyshev method of order four can be more efficient than Newton's method. For the Riccati equation, these two Newton-like methods are actually special cases of the Newton–Shamanskii method. We show that, starting with zero initial guess or some other suitable initial guess, the sequence generated by the Newton–Shamanskii method converges monotonically to the minimal nonnegative solution. We also explain that the Newton-like methods can be used to great advantage when solving some Riccati equations involving a parameter.
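The third-order Chebyshev method mentioned above is easiest to see in the scalar case: a Newton step corrected with second-derivative information, x+ = x - (f/f')(1 + f f'' / (2 f'^2)). A minimal scalar sketch (the paper applies the matrix analogue to the Riccati equation):

```python
def chebyshev(f, fp, fpp, x0, iters=6):
    """Scalar Chebyshev iteration (cubic convergence): Newton step
    times the correction factor 1 + f*f''/(2 f'^2)."""
    x = x0
    for _ in range(iters):
        u = f(x) / fp(x)                       # plain Newton step
        x = x - u * (1.0 + 0.5 * u * fpp(x) / fp(x))
    return x
```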

11.
This paper studies a primal–dual interior/exterior-point path-following approach for linear programming that is motivated by the use of an iterative solver, rather than a direct solver, for the search direction. We begin with the usual perturbed primal–dual optimality equations. Under nondegeneracy assumptions, this nonlinear system is well posed, i.e. it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. Assuming that a basis matrix (easily factorizable and well-conditioned) can be found, we apply a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property, and sparsity is maintained. We then apply either a direct solution method or an iterative solver (within an inexact Newton framework) to solve this equation. Since the linearization is well posed, we use affine scaling and do not maintain nonnegativity once we are close enough to the optimum, i.e. we switch to a pure Newton step. In addition, we correctly identify some of the primal and dual variables that converge to 0 and delete them (purify step). We test our method on random nondegenerate problems and problems from the Netlib set, and compare it with the standard normal equations (NEQ) approach, using a heuristic to find the basis matrix. We show that our method is efficient for large, well-conditioned problems. It is slower than NEQ on ill-conditioned problems, but it yields higher-accuracy solutions.

12.
Liang Bao
The non-symmetric algebraic Riccati equation arising in transport theory can be rewritten as a vector equation, and the minimal positive solution of the non-symmetric algebraic Riccati equation can be obtained by solving the vector equation. In this paper, we apply the modified Newton method to solve the vector equation. Some convergence results are presented. Numerical tests show that the modified Newton method is feasible and effective, and outperforms the Newton method.

13.
A logarithmic barrier function method for solving minimax problems is presented. To eliminate the ill-conditioning of the Newton equations, we adopt a strategy introduced by Broyden and Attia, originally used to overcome the ill-conditioning of penalty functions in constrained optimization, which improves the stability of the algorithm. Technical details that strongly affect the efficiency of the algorithm, such as the termination criterion for the subproblems, are also investigated.

14.
We study the influence of a center Lipschitz condition on the first derivative of the operator involved when the solution of a nonlinear equation is approximated by Newton's method in Banach spaces. As a consequence, we see that the domain of parameters associated with the Newton–Kantorovich theorem is enlarged.

15.
Implicit Runge-Kutta (IRK) methods (such as the s-stage Radau IIA method with s = 3, 5, or 7) for solving stiff ordinary differential equation systems have excellent stability properties and high solution accuracy, but the high cost of solving their nonlinear stage equations has seriously limited their application to large-scale problems. To reduce this cost, several approximate Newton algorithms have been developed, including the commonly used simplified Newton method. In this paper, a new approximate Jacobian matrix and two new test rules for controlling the updating of approximate Jacobian matrices are proposed, yielding an improved approximate Newton method. Theoretical and numerical analysis shows that the improved approximate Newton method can significantly improve the convergence and performance of the simplified Newton method.

16.
Global Newton methods for computing solutions of nonlinear systems of equations have recently received a great deal of attention. Using the theory of generalized equations, a homotopy method is proposed to solve problems arising in complementarity and mathematical programming, as well as in variational inequalities. We introduce the concepts of generalized homotopies and regular values, characterize the solution sets of such generalized homotopies, and prove, under boundary conditions similar to Smale's [10], the existence of a homotopy path which contains an odd number of solutions to the problem. We relate our homotopy path to the Newton method for generalized equations developed by Josephy [3]. An interpretation of our results for the nonlinear programming problem is also given.

17.
We discuss the solution of large-scale box-constrained linear least-squares problems by two recent affine-scaling methods: a cyclic Barzilai–Borwein strategy and an inexact Newton-like method where a preconditioning technique allows for an efficient computation of the steps. A robust, globally and fast locally convergent method based on the combination of the two procedures is presented, along with extensive numerical results. Copyright © 2010 John Wiley & Sons, Ltd.
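The gradient-based half of such a scheme can be sketched with a projected Barzilai–Borwein iteration for box-constrained least squares. This is a generic BB1 stand-in, not the paper's cyclic affine-scaling variant:

```python
import numpy as np

def bb_box_lsq(A, b, lo, hi, iters=200):
    """Projected Barzilai-Borwein gradient sketch for
    min 0.5*||Ax - b||^2  subject to  lo <= x <= hi.
    The BB1 step length (s.s)/(s.y) adapts to local curvature without
    any Hessian; np.clip projects back onto the box."""
    x = np.clip(np.zeros(A.shape[1]), lo, hi)
    g = A.T @ (A @ x - b)
    alpha = 1.0
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 0:  # keep the step length well defined
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x
```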

18.
Algebraic Riccati equations (ARE) of large dimension arise when approximations are used to design controllers for systems modeled by partial differential equations. We use a modified Newton method to solve the ARE that takes advantage of several special features of these problems: it leads to a right-hand side of rank equal to the number of inputs, regardless of the weights, so the resulting Lyapunov equation can be solved more efficiently. The Cholesky-ADI algorithm is used to solve the Lyapunov equation arising at each step. The algorithm is straightforward to code, and performance is illustrated on a number of standard examples. An example on controlling the deflection of the Euler-Bernoulli beam indicates that for weakly damped problems a low-rank solution to the ARE may not exist; further analysis supports this point.
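Each (Kleinman-)Newton step for an ARE reduces to a Lyapunov solve. For tiny dense problems this inner solve can be written via vectorization; large problems would instead use low-rank methods such as the Cholesky-ADI algorithm mentioned above. A minimal dense sketch:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A'X + XA + Q = 0 by the Kronecker (vectorization) trick:
    vec(A'X + XA) = (I (x) A' + A' (x) I) vec(X) with column-major vec.
    Only practical for small dense problems."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    x = np.linalg.solve(M, -Q.reshape(-1, order="F"))
    return x.reshape(n, n, order="F")
```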

19.
This paper formulates the quadratic penalty function for the dual problem of the linear program associated with the \(L_1\)-constrained linear quantile regression model. We prove that the solution of the original linear program can be obtained by minimizing the quadratic penalty function, and we derive the corresponding formulas. The resulting quadratic penalty function is unconstrained and can therefore be minimized efficiently by a generalized Newton algorithm with Armijo step size. The algorithm is easy to implement and requires no sophisticated optimization package other than a linear equation solver. The approach generalizes, with slight modification, to quantile regression in a reproducing kernel Hilbert space. Extensive experiments on simulated and real-world data show that the proposed Newton quantile regression algorithms achieve performance comparable to state-of-the-art methods.
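The Armijo step-size rule used by such generalized Newton algorithms is standard backtracking: shrink the step until a sufficient-decrease condition holds. A generic sketch (the constants beta and sigma are conventional choices, not the paper's):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, beta=0.5, sigma=1e-4):
    """Armijo backtracking: find t with
        f(x + t*d) <= f(x) + sigma * t * grad_f(x).d
    by halving t, assuming d is a descent direction."""
    t = 1.0
    fx = f(x)
    slope = float(grad_f(x) @ d)  # directional derivative, < 0 for descent
    while f(x + t * d) > fx + sigma * t * slope:
        t *= beta
    return t
```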

20.
We propose a proximal Newton method for solving nondifferentiable convex optimization problems. This method combines the generalized Newton method with Rockafellar's proximal point algorithm. At each step, the proximal point is found approximately, and the regularization matrix is preconditioned to overcome the inexactness of this approximation. We show that such preconditioning is possible within some accuracy and that the second-order differentiability properties of the Moreau-Yosida regularization are invariant under this preconditioning. Based on these results, superlinear convergence is established under a semismoothness condition. This work is supported by the Australian Research Council.
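The building block of proximal point and proximal Newton schemes is the proximal operator; for the l1 norm it has the well-known closed form (soft-thresholding). A generic illustration of that building block, not the paper's preconditioned Moreau-Yosida algorithm:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||x||_1, i.e. the minimizer of
    t*||x||_1 + 0.5*||x - v||^2: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```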
