Similar Documents
20 similar documents found (search time: 15 ms).
1.
This paper is concerned with the numerical solution of a symmetric indefinite system that generalizes the Karush-Kuhn-Tucker system. Following the recent approach of Lukšan and Vlček, we propose to solve this system by a preconditioned conjugate gradient (PCG) algorithm, and we devise two indefinite preconditioners with good theoretical properties. In particular, for one of these preconditioners, the finite termination property of the PCG method is established. The PCG method, combined with a parallel version of these preconditioners, is used as the inner solver within an inexact interior-point (IP) method for the solution of large, sparse quadratic programs. Numerical results obtained with a parallel code implementing the IP method on distributed-memory multiprocessor systems confirm the effectiveness of the proposed approach for problems with special structure in the constraint matrix and in the objective function.
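The inner solver in such a scheme is a standard preconditioned conjugate gradient loop driven by a user-supplied preconditioner solve. The following minimal sketch is an illustration, not the authors' parallel code; apply_A and apply_Minv are assumed matrix-free callables.

```python
import numpy as np

def pcg(apply_A, b, apply_Minv, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradient with matrix-free operators."""
    x = np.zeros_like(b, dtype=float)
    r = b - apply_A(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```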

2.
In this paper, we study a class of tuned preconditioners designed to accelerate both the DACG-Newton method and the implicitly restarted Lanczos method for computing the leftmost eigenpairs of the large, sparse symmetric positive definite matrices that arise in large-scale scientific computing. These tuning strategies are based on low-rank modifications of a given initial preconditioner. We present some theoretical properties of the preconditioned matrix and show experimentally how the aforementioned methods benefit from the acceleration provided by these tuned/deflated preconditioners. Comparisons are carried out with the Jacobi-Davidson method on matrices from various large realistic problems arising from finite element discretizations of PDEs modeling either groundwater flow in porous media or geomechanical processes in reservoirs. The numerical results show that the Newton-based methods (which include the Jacobi-Davidson method) are to be preferred to the, albeit efficiently implemented, implicitly restarted Lanczos method whenever a small to moderate number of eigenpairs is required.
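One common way to realize such a low-rank modification is a deflated preconditioner that augments an initial preconditioner P with a coarse correction built from a block V of approximate eigenvectors. The sketch below illustrates this idea only; it is not necessarily the exact tuning strategy analyzed in the paper, and A, V, and solve_P are assumed inputs.

```python
import numpy as np

def deflated_preconditioner(A, V, solve_P):
    """Return a callable r -> M^{-1} r that deflates span(V) out of an
    initial preconditioner P (applied through solve_P)."""
    AV = A @ V
    E = V.T @ AV                          # small coarse matrix V'AV
    def apply(r):
        c = np.linalg.solve(E, V.T @ r)   # coarse-space correction
        return solve_P(r - AV @ c) + V @ c
    return apply
```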

3.
Theoretical Efficiency of an Inexact Newton Method
We propose a local algorithm for smooth unconstrained optimization problems with n variables. The algorithm is an optimal combination of an exact Newton step, computed with a Cholesky factorization, and several inexact Newton steps whose preconditioned conjugate gradient subiterations use the Cholesky factorization from the previous exact Newton step as preconditioner. Like the Newton method, which converges with Q-order exactly 2, this algorithm also converges with Q-order exactly 2. Theoretically, its average number of arithmetic operations per step is much smaller than that of the Newton method for medium- and large-scale problems; for instance, when n = 200, the ratio of these two numbers is less than 0.53. Furthermore, the ratio tends to zero approximately at the rate $\log 2 / \log n$ as n approaches infinity.
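A minimal sketch of this alternation is given below: one exact Newton step factors the Hessian, and the next few inexact steps solve the Newton equations by CG preconditioned with that factorization. The schedule length m and the grad/hess callables are illustrative assumptions, not the paper's optimal combination.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def hybrid_newton(x, grad, hess, m=5, outer=20, tol=1e-10):
    n = x.size
    for _ in range(outer):
        c = cho_factor(hess(x))                 # exact step: one Cholesky factorization
        x = x - cho_solve(c, grad(x))
        M = LinearOperator((n, n), matvec=lambda r: cho_solve(c, r))
        for _ in range(m):                      # inexact steps reuse that factor as preconditioner
            g = grad(x)
            if np.linalg.norm(g) < tol:
                return x
            Hc = hess(x)
            d, _ = cg(LinearOperator((n, n), matvec=lambda v: Hc @ v), -g, M=M)
            x = x + d
    return x
```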

4.
This paper presents a convergence analysis of a preconditioned inexact Uzawa method for nondifferentiable saddle-point problems; the SOR-Newton method and the SOR-BFGS method are special cases of this method. We relax the Bramble-Pasciak-Vassilev condition on the preconditioners that guarantees convergence of the inexact Uzawa method for linear saddle-point problems, and use the relaxed condition to determine the relaxation parameters in the SOR-Newton and SOR-BFGS methods. Furthermore, we study global convergence of the multistep inexact Uzawa method for nondifferentiable saddle-point problems.
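For orientation, a minimal sketch of a preconditioned (inexact) Uzawa iteration for a linear saddle-point system [[A, B^T], [B, 0]] [x; y] = [f; g] is shown below; solve_QA and solve_QS stand for approximate solves with preconditioners for A and for the Schur complement, and all names are illustrative rather than the paper's notation.

```python
import numpy as np

def inexact_uzawa(A, B, f, g, solve_QA, solve_QS, iters=200):
    """Classical inexact Uzawa loop for [[A, B^T], [B, 0]] [x; y] = [f; g]."""
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[0])
    for _ in range(iters):
        x = x + solve_QA(f - A @ x - B.T @ y)   # approximate primal update
        y = y + solve_QS(B @ x - g)             # Schur-complement (dual) update
    return x, y
```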

5.
Timo Hylla, E. W. Sachs, PAMM, 2007, 7(1): 1060507-1060508
Optimal control problems involving PDEs often lead in practice to the numerical computation of feedback laws for the optimal control. This is achieved through the solution of a Riccati equation, which can be large scale, since the discretized problems are large scale, and therefore requires special attention in its numerical solution. The Kleinman-Newton method is a classical way to solve an algebraic Riccati equation. We look at two versions of an extension of this method to an inexact Newton method. It can be shown that these two implementable versions of Newton's method are identical in the exact case but differ substantially in the inexact case.
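For reference, the classical (exact) Kleinman-Newton iteration solves one Lyapunov equation per step; the inexact variants discussed above replace this Lyapunov solve by an approximate one. The sketch below assumes a stabilizing initial gain K0 and uses illustrative argument names.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_newton(A, B, Q, R, K0, iters=20):
    """Exact Kleinman-Newton for A'X + XA - XBR^{-1}B'X + Q = 0."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                        # closed-loop matrix
        # Lyapunov step: Acl' X + X Acl = -(Q + K' R K)
        X = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ X)        # next feedback gain
    return X, K
```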

6.
The exact Newton method for solving monotone semidefinite complementarity problems (SDCPs) requires the exact solution of a linear system of equations at each iteration. For problems of large size, solving these linear systems exactly can be very expensive. In this paper, we propose a new inexact smoothing/continuation algorithm for the solution of large-scale monotone SDCPs, in which the linear system at each iteration is solved only approximately. Under mild assumptions, the algorithm is shown to be both globally and superlinearly convergent.

7.
For unconstrained optimization, an inexact Newton algorithm was recently proposed in which the preconditioned conjugate gradient method is applied to solve the Newton equations. In this paper, we improve this algorithm by making efficient use of automatic differentiation and thereby establish a new inexact Newton algorithm. Based on the efficiency coefficient defined by Brent, we introduce a theoretical efficiency ratio of the new algorithm to the old one and show that this ratio is greater than 1, which implies that the new algorithm is always more efficient than the old one; for some cases the improvement is significant. This theoretical conclusion is supported by numerical experiments.

8.
We propose an inexact Newton method with a filter line-search algorithm for nonconvex equality-constrained optimization. Inexact Newton methods are needed for large-scale applications in which the iteration matrix cannot be explicitly formed or factored. We incorporate inexact Newton strategies into the filter line search, yielding an algorithm with guaranteed global convergence. An analysis of the global behavior of the algorithm and numerical results on a collection of test problems are presented.
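As a small illustration of the filter mechanism (a generic sketch, not the acceptance rule of this particular paper): a trial point with constraint violation theta_trial and objective f_trial is accepted only if it sufficiently improves on every pair already stored in the filter. The margin constants are assumed values.

```python
def acceptable_to_filter(theta_trial, f_trial, filter_pairs,
                         gamma_theta=1e-5, gamma_f=1e-5):
    """Generic filter test: accept only if the trial point improves either the
    constraint violation or the objective against every stored pair."""
    for theta_k, f_k in filter_pairs:
        if not (theta_trial <= (1.0 - gamma_theta) * theta_k
                or f_trial <= f_k - gamma_f * theta_k):
            return False
    return True
```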

9.
A QMR-based interior-point algorithm for solving linear programs
A new approach to the implementation of interior-point methods for solving linear programs is proposed. Its main feature is the iterative solution of the symmetric, but highly indefinite, 2×2-block systems of linear equations that arise within the interior-point algorithm. These linear systems are solved by a symmetric variant of the quasi-minimal residual (QMR) algorithm, an iterative solver for general linear systems. The symmetric QMR algorithm can be combined with indefinite preconditioners, which is crucial for the efficient solution of highly indefinite linear systems, yet it still fully exploits the symmetry of the linear systems to be solved. To support the use of the symmetric QMR iteration, a novel stable reduction of the original unsymmetric 3×3-block systems to symmetric 2×2-block systems is introduced, and a measure of the (low) relative accuracy required for the solution of these linear systems within the interior-point algorithm is proposed. Some indefinite preconditioners are discussed. Finally, we report results of a few preliminary numerical experiments to illustrate the features of the new approach.
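The textbook version of such a reduction, shown below, eliminates the slack step ds from the third block row S dx + X ds = rc, leaving the symmetric indefinite augmented system [[-X^{-1}S, A^T], [A, 0]]. The paper's own stable reduction and symmetric QMR variant are not reproduced here; MINRES stands in as the symmetric Krylov solver, and all names are assumptions.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, diags
from scipy.sparse.linalg import minres

def augmented_step(A, x, s, rd, rp, rc):
    """Solve the reduced 2x2 system, then recover ds from S dx + X ds = rc."""
    A = csr_matrix(A)
    D = diags(s / x)                              # X^{-1} S
    K = bmat([[-D, A.T], [A, None]]).tocsr()      # symmetric indefinite block system
    rhs = np.concatenate([rd - rc / x, rp])
    sol, _ = minres(K, rhs)
    n = x.size
    dx, dy = sol[:n], sol[n:]
    ds = (rc - s * dx) / x
    return dx, dy, ds
```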

10.
Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods based on solving the Schur complement equation encounter severe computational bottlenecks when applied to these SDPs. In this paper, we consider a customized inexact primal-dual path-following interior-point algorithm for solving large-scale log-det SDP problems arising from sparse covariance selection. Our inexact algorithm solves the large and ill-conditioned linear system of equations in each iteration by a preconditioned iterative solver. By exploiting the structure of sparse covariance selection problems, we are able to design highly effective preconditioners for these large and ill-conditioned linear systems. Numerical experiments on both synthetic and real covariance selection problems show that our algorithm is highly efficient and outperforms existing algorithms.

11.
马昌凤 (Ma Changfeng), 《数学杂志》 (Journal of Mathematics), 2001, 21(3): 285-289
For nonlinear complementarity problems, this paper proposes an inexact successive approximation algorithm for an equivalent nonsmooth equation and proves, under certain conditions, the global convergence of the algorithm.
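One standard equivalent nonsmooth reformulation of the nonlinear complementarity problem (find x >= 0 with F(x) >= 0 and x^T F(x) = 0) is the componentwise minimum equation H(x) = min(x, F(x)) = 0. The loop below is only a generic damped fixed-point illustration of successive approximation on this equation, with an assumed step parameter; it is not the paper's algorithm or its convergence conditions.

```python
import numpy as np

def ncp_residual(F, x):
    """Nonsmooth reformulation H(x) = min(x, F(x)) of the NCP."""
    return np.minimum(x, F(x))

def successive_approximation(F, x0, step=0.5, tol=1e-8, maxit=500):
    x = x0.copy()
    for _ in range(maxit):
        r = ncp_residual(F, x)
        if np.linalg.norm(r) < tol:
            break
        x = x - step * r          # generic damped correction (illustrative only)
    return x
```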

12.
We present a modified damped Newton method for solving large sparse linear complementarity problems, which adopts a new strategy for determining the stepsize at each Newton iteration. The global convergence of the new method is proved when the system matrix is nondegenerate. We then apply the matrix splitting technique to this method, deriving an inexact splitting method for linear complementarity problems, whose global convergence is also proved. Numerical results show that the new methods are feasible and effective for solving large sparse linear complementarity problems.
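A minimal sketch of a damped Newton step for the LCP w = Mz + q, z >= 0, w >= 0, z^T w = 0, written as the nonsmooth equation min(z, Mz + q) = 0, is given below. The stepsize rule here is a plain norm-reduction backtracking, not the specific strategy proposed in the paper, and the dense linear algebra ignores sparsity.

```python
import numpy as np

def lcp_damped_newton(M, q, z0, iters=50, tol=1e-10):
    n = q.size
    z = np.maximum(z0, 0.0)
    for _ in range(iters):
        w = M @ z + q
        H = np.minimum(z, w)
        if np.linalg.norm(H) < tol:
            break
        # One element of the generalized Jacobian of min(z, Mz + q):
        # row i is e_i where z_i <= w_i, otherwise row i of M.
        J = np.where((z <= w)[:, None], np.eye(n), M)
        d = np.linalg.solve(J, -H)
        t = 1.0                                    # simple backtracking damping
        while t > 1e-8 and np.linalg.norm(
                np.minimum(z + t * d, M @ (z + t * d) + q)
        ) > (1 - 1e-4 * t) * np.linalg.norm(H):
            t *= 0.5
        z = z + t * d
    return z
```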

13.
In this paper, we consider the so-called "inexact Uzawa" algorithm applied to the unstable Navier-Stokes problem. We use a stabilization matrix to stabilize the unstable system and prove theoretically that, with suitable preconditioners, the Uzawa algorithm converges for the stabilized system. Bounds for the iteration error are provided. We also show numerically that the Uzawa algorithm converges for the stabilized system.

14.
We study the local behavior of a primal-dual inexact interior-point method for solving nonlinear systems arising from the solution of nonlinear optimization problems or, more generally, from nonlinear complementarity problems. The algorithm is based on the Newton method applied to a sequence of perturbed systems obtained by perturbing the complementarity equations of the original system. When the Newton system is solved exactly, it has been shown that the sequence of iterates is asymptotically tangent to the central path (Armand and Benoist, Math. Program. 115:199-222, 2008). The purpose of the present paper is to extend this result to an inexact solution of the Newton system. We give quite general conditions on the parameters of the algorithm under which this asymptotic property is satisfied. Some numerical tests are reported to illustrate our theoretical results.

15.
In this paper, we apply the two-step Newton method to solve inverse eigenvalue problems, including exact Newton, Newton-like, and inexact Newton-like versions. Our results show that both the two-step Newton and two-step Newton-like methods converge cubically, and that the two-step inexact Newton-like method is superquadratically convergent. Numerical implementations demonstrate the effectiveness of the new algorithms.
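For a general nonlinear system F(x) = 0, one classical two-step Newton iteration with a frozen Jacobian (a cubically convergent variant) looks as follows; the application to inverse eigenvalue problems, and the Newton-like and inexact versions studied in the paper, are not reproduced here, and the argument names are illustrative.

```python
import numpy as np

def two_step_newton(F, J, x0, iters=20, tol=1e-12):
    """Two-step Newton with the Jacobian frozen over both substeps."""
    x = x0.copy()
    for _ in range(iters):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))   # first (full) Newton substep
        x = y - np.linalg.solve(Jx, F(y))   # second substep reuses J(x)
        if np.linalg.norm(F(x)) < tol:
            break
    return x
```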

16.
For large sparse saddle-point problems, we first introduce the block diagonally preconditioned Gauss-Seidel method (PBGS), which reduces to the GSOR method [Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1-38] and to the PIU method [Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900-2932] when the preconditioners are chosen as particular matrices. We then generalize the PBGS method to the PPIU method and discuss sufficient conditions under which the spectral radius of the PPIU method is much less than one. Furthermore, we consider some rules for choosing the preconditioners, including the splitting of the (1, 1) block matrix used in the PIU method, and give numerical examples to show the superiority of the new method over the PIU method.

17.
Inexact Interior-Point Method
In this paper, we introduce an inexact interior-point algorithm for a constrained system of equations. The formulation of the problem is quite general and includes nonlinear complementarity problems of various kinds. In our convergence theory, we interpret the inexact interior-point method as an inexact Newton method, which enables us to establish a global convergence theory for the proposed algorithm. Under the additional assumption that the Jacobian is invertible at the solution, superlinear convergence of the iteration sequence is proved.

18.
We propose a new inexact column-and-constraint generation (i-C&CG) method to solve two-stage robust optimization problems. The method allows the master problems to be solved inexactly, which is desirable when solving large-scale and/or challenging problems. It is equipped with a backtracking routine that controls the trade-off between bound improvement and inexactness; importantly, this routine allows us to derive theoretical finite-convergence guarantees for the i-C&CG method. Numerical experiments demonstrate the computational advantages of the i-C&CG method over state-of-the-art column-and-constraint generation methods.

19.
We develop and analyze an affine-scaling inexact generalized Newton algorithm combined with a nonmonotone interior backtracking line-search technique for solving systems of semismooth equations subject to bounds on the variables. By combining the inexact affine-scaling generalized Newton method with the interior backtracking line search, each iterate takes an inexact generalized Newton backtracking step that maintains strict interior-point feasibility. Global convergence results are developed in a very general setting in which trial steps are computed by the affine-scaling generalized Newton-like method augmented by an interior backtracking line search with projection onto the feasible set. Under reasonable conditions, we establish that, close to a regular solution, the inexact generalized Newton method converges locally q-superlinearly with order p. We characterize the order of local convergence in terms of the convergence behavior of the quality of the approximate subdifferentials and indicate how to choose an inexact forcing sequence that preserves the rapid convergence of the proposed algorithm. The nonmonotone criterion can speed up convergence in some ill-conditioned cases.

20.
In this paper, we propose modifications to a prototypical branch-and-bound algorithm for nonlinear optimization so that the algorithm efficiently handles problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch-and-bound process, but using reduced gradients for the interval Newton method. They also involve preconditioners for the interval Gauss-Seidel method that are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.

