Similar Documents
20 similar records found.
1.
We present a predictor–corrector non-interior path following algorithm for the monotone linear complementarity problem based on Chen–Harker–Kanzow–Smale smoothing techniques. Although the method is modeled on the interior point predictor–corrector strategies, it is the first instance of a non-interior point predictor–corrector algorithm. The algorithm is shown to be both globally linearly convergent and locally quadratically convergent under standard hypotheses. The approach to global linear convergence follows the authors’ previous work on this problem for the case of (P_0 + R_0) LCPs. However, in this paper we use monotonicity to refine our notion of neighborhood of the central path. The refined neighborhood allows us to establish the uniform boundedness of certain slices of the neighborhood of the central path under the standard hypothesis that a strictly positive feasible point exists. Received September 1997 / Revised version received May 1999 / Published online December 15, 1999
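The Chen–Harker–Kanzow–Smale smoothing idea underlying this method replaces the complementarity conditions x ≥ 0, s ≥ 0, x_i s_i = 0 by the smooth equations φ_μ(x_i, s_i) = 0 with φ_μ(a, b) = a + b − sqrt((a − b)^2 + 4μ^2). The sketch below applies plain Newton steps to the smoothed system of a tiny monotone LCP while driving μ to zero; it is only a minimal illustration (the matrix M, vector q, starting point and μ-schedule are made up, and the paper's predictor–corrector strategy and neighborhood control are not reproduced).

```python
import numpy as np

def chks(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing function.
    chks(a, b, 0) = 0  iff  a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def smoothed_newton_lcp(M, q, mu=1.0, sigma=0.2, tol=1e-10, max_iter=100):
    """Minimal non-interior smoothing method for the LCP
        s = M x + q,  x >= 0, s >= 0, x_i s_i = 0.
    One Newton step per value of mu while driving mu -> 0; no
    predictor-corrector strategy or neighborhood control as in the paper."""
    n = len(q)
    x = np.ones(n)
    s = M @ x + q + 1.0                        # arbitrary starting point
    for _ in range(max_iter):
        r1 = s - M @ x - q                     # linear residual
        r2 = chks(x, s, mu)                    # smoothed complementarity residual
        if max(np.linalg.norm(r1), np.linalg.norm(r2)) < tol and mu < tol:
            break
        # Jacobian of the smoothed system with respect to (x, s)
        d = np.sqrt((x - s) ** 2 + 4.0 * mu ** 2)
        da = 1.0 - (x - s) / d                 # d chks / d x_i
        db = 1.0 + (x - s) / d                 # d chks / d s_i
        J = np.block([[-M, np.eye(n)],
                      [np.diag(da), np.diag(db)]])
        step = np.linalg.solve(J, -np.concatenate([r1, r2]))
        x += step[:n]
        s += step[n:]
        mu *= sigma                            # simple continuation in mu
    return x, s

# toy monotone LCP (M positive semidefinite); data made up
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -4.0])
x, s = smoothed_newton_lcp(M, q)
print("x =", x, " s =", s, " x*s =", x * s)
```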

2.
3.
This paper is concerned with a primal–dual interior point method for solving nonlinear semidefinite programming problems. The method consists of the outer iteration (SDPIP) that finds a KKT point and the inner iteration (SDPLS) that calculates an approximate barrier KKT point. Algorithm SDPLS uses a commutative class of Newton-like directions for the generation of line search directions. By combining the primal barrier penalty function and the primal–dual barrier function, a new primal–dual merit function is proposed. We prove the global convergence property of our method. Finally some numerical experiments are given.

4.
In this paper we develop a trigonometrically fitted predictor–corrector (P–C) scheme, which is based on the well-known two-step second-order Adams–Bashforth method (as predictor) and on the third-order Adams–Moulton method (as corrector). Numerical experiments show that the new trigonometrically fitted P–C method is substantially more efficient than widely used methods for the numerical solution of initial-value problems (IVPs) with oscillating solutions.
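For reference, one step of the underlying classical pair, the two-step Adams–Bashforth predictor combined with the third-order Adams–Moulton corrector, can be sketched as below. The trigonometric fitting of the paper replaces the fixed coefficients by frequency-dependent ones so that the formulas integrate sin(wt) and cos(wt) exactly; that modification is not reproduced here, and the oscillatory test problem is a made-up example.

```python
import numpy as np

def abm23_pec(f, t0, y0, h, n_steps):
    """Classical predictor-corrector in PEC mode: 2-step Adams-Bashforth
    predictor, 3rd-order Adams-Moulton corrector.  A trigonometrically
    fitted variant would replace the fixed coefficients by
    frequency-dependent ones."""
    t = [t0, t0 + h]
    y = [y0, y0 + h * f(t0, y0)]       # simple Euler start-up, for illustration
    for k in range(1, n_steps):
        fk, fk1 = f(t[k], y[k]), f(t[k - 1], y[k - 1])
        # Adams-Bashforth 2 predictor
        y_pred = y[k] + h * (1.5 * fk - 0.5 * fk1)
        # Adams-Moulton 3 corrector, evaluated at the predicted value
        f_pred = f(t[k] + h, y_pred)
        y_corr = y[k] + h * (5.0 * f_pred + 8.0 * fk - fk1) / 12.0
        t.append(t[k] + h)
        y.append(y_corr)
    return np.array(t), np.array(y)

# toy oscillatory problem y'' = -y written as a first-order system
def f(t, y):
    return np.array([y[1], -y[0]])

t, y = abm23_pec(f, 0.0, np.array([1.0, 0.0]), h=0.01, n_steps=1000)
print("max error vs cos(t):", np.max(np.abs(y[:, 0] - np.cos(t))))
```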

5.
Finding an efficient implementation variant for the numerical solution of problems from computational science and engineering involves many implementation decisions that are strongly influenced by the specific hardware architecture. The complexity of these architectures makes it difficult to find the best implementation variant by manual tuning. For numerical solution methods from linear algebra, auto-tuning techniques based on a global search engine as they are used for ATLAS or FFTW can be used successfully. These techniques generate different implementation variants at installation time and select one of these implementation variants either at installation time or at runtime, before the computation starts. For some numerical methods, auto-tuning at installation time cannot be applied directly, since the best implementation variant may strongly depend on the specific numerical problem to be solved. An example is solution methods for initial value problems (IVPs) of ordinary differential equations (ODEs), where the coupling structure of the ODE system to be solved has a large influence on the efficient use of the memory hierarchy of the hardware architecture. In this context, it is important to use auto-tuning techniques at runtime, which is possible because of the time-stepping nature of ODE solvers.

In this article, we present a sequential self-adaptive ODE solver that selects the best implementation variant from a candidate pool at runtime during the first time steps, i.e., the auto-tuning phase already contributes to the progress of the computation. The implementation variants differ in the loop structure and the data structures used to realize the numerical algorithm, a predictor–corrector (PC) iteration scheme with Runge–Kutta (RK) corrector considered here as an example. For those implementation variants in the candidate pool that use loop tiling to exploit the memory hierarchy of a given hardware platform we investigate the selection of tile sizes. The self-adaptive ODE solver combines empirical search with a model-based approach in order to reduce the search space of possible tile sizes. Runtime experiments demonstrate the efficiency of the self-adaptive solver for different IVPs across a range of problem sizes and on different hardware architectures.
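The runtime-selection idea can be illustrated with a deliberately small sketch: each candidate implementation variant is run (and timed) for a few time steps that still count toward the integration, and the fastest variant is then used for the remaining steps. All names below, the two Euler-step variants and the toy ODE, are invented for illustration; the actual solver additionally tunes tile sizes and combines the empirical search with a performance model.

```python
import time
import numpy as np

def euler_step_loop(f, t, y, h):
    """Variant A: explicit Euler step written with an explicit Python loop."""
    dy = f(t, y)
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = y[i] + h * dy[i]
    return out

def euler_step_vectorized(f, t, y, h):
    """Variant B: the same step written with vectorized NumPy operations."""
    return y + h * f(t, y)

def autotuned_integrate(f, t0, y0, h, n_steps, variants, warmup_steps=5):
    """Try each variant for `warmup_steps` time steps (these steps still
    advance the integration), then continue with the fastest variant."""
    t, y = t0, y0.copy()
    timings = {}
    step = iter(range(n_steps))
    for name, variant in variants.items():
        start = time.perf_counter()
        for _ in range(warmup_steps):
            next(step, None)
            y = variant(f, t, y, h)
            t += h
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    for _ in step:                       # remaining steps with the winner
        y = variants[best](f, t, y, h)
        t += h
    return best, timings, y

f = lambda t, y: -y                      # toy linear ODE system y' = -y
variants = {"loop": euler_step_loop, "vectorized": euler_step_vectorized}
best, timings, y = autotuned_integrate(f, 0.0, np.ones(20000), 1e-3, 200, variants)
print("selected:", best, timings)
```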

6.
7.
In this paper numerical methods for solving stochastic differential equations with Markovian switching (SDEwMSs) are developed by pathwise approximation. The proposed family of strong predictor–corrector Euler–Maruyama methods is designed to overcome the propagation of errors during the simulation of an approximate path. This paper not only shows the strong convergence of the numerical solution to the exact solution but also reveals the order of the error under some conditions on the coefficient functions. A natural analogue of the p-stability criterion is studied. Numerical examples are given to illustrate the computational efficiency of the new predictor–corrector Euler–Maruyama approximation.
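A schematic version of one strong predictor–corrector Euler–Maruyama step with Markovian switching is sketched below: the switching process is sampled on the time grid, an explicit Euler–Maruyama step serves as predictor, and the corrector re-evaluates the drift at the predicted value with the same Brownian increment. The degree-of-implicitness parameter theta, the regime-switching test model, and its generator Q are made-up choices, and the paper's scheme may also weight the diffusion term.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def simulate_chain(Q, r0, h, n_steps):
    """Sample a continuous-time Markov chain on the time grid using the
    exact one-step transition matrix expm(Q*h)."""
    P = expm(Q * h)
    r = np.empty(n_steps + 1, dtype=int)
    r[0] = r0
    for k in range(n_steps):
        r[k + 1] = rng.choice(len(Q), p=P[r[k]])
    return r

def pc_euler_maruyama(a, b, y0, Q, r0, h, n_steps, theta=0.5):
    """Schematic strong predictor-corrector Euler-Maruyama path for
        dY = a(Y, r) dt + b(Y, r) dW,   r = Markov chain.
    Predictor: explicit EM step.  Corrector: theta-weighted drift using the
    predicted value and the same Brownian increment.  (Illustrative variant;
    the paper's scheme may weight the diffusion term as well.)"""
    r = simulate_chain(Q, r0, h, n_steps)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        y_pred = y[k] + a(y[k], r[k]) * h + b(y[k], r[k]) * dW       # predictor
        y[k + 1] = (y[k]
                    + (theta * a(y_pred, r[k]) + (1 - theta) * a(y[k], r[k])) * h
                    + b(y[k], r[k]) * dW)                            # corrector
    return y, r

# toy regime-switching mean-reverting model (all parameters made up)
a = lambda y, r: (-1.0 if r == 0 else -4.0) * y
b = lambda y, r: 0.3 if r == 0 else 0.8
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])         # generator of the switching chain
y, r = pc_euler_maruyama(a, b, y0=1.0, Q=Q, r0=0, h=1e-3, n_steps=2000)
print("Y(T) =", y[-1], " final regime:", r[-1])
```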

8.
Two corrector–predictor interior point algorithms are proposed for solving monotone linear complementarity problems. The algorithms produce a sequence of iterates in the neighborhood of the central path. The first algorithm uses line search schemes requiring the solution of higher-order polynomial equations in one variable, while the line search procedures of the second algorithm can be implemented in a number of arithmetic operations that is polynomial in n and m, where n is the dimension of the problems and m is the maximum order of the predictor and the corrector. Under a suitable condition on m, both algorithms attain polynomial iteration complexity. They are superlinearly convergent even for degenerate problems.

9.
This work is concerned with an abstract problem in the form of a variational inequality, or equivalently a minimization problem involving a non-differentiable functional. The problem is inspired by a formulation of the initial–boundary value problem of elastoplasticity. The objective of this work is to revisit the predictor–corrector algorithms that are commonly used in computational applications, and to establish conditions under which these are convergent or, at least, under which they lead to decreasing sequences of the functional for the problem. The focus is on the predictor step, given that the corrector step by definition leads to a decrease in the functional. The predictor step may be formulated as a minimization problem. Attention is given to the tangent predictor, a line search approach, the method of steepest descent, and a Newton-like method. These are all shown to lead to decreasing sequences.
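The central observation, namely that a properly safeguarded predictor already yields a decreasing sequence of functional values, can be illustrated on a smooth model problem: a steepest-descent predictor with Armijo backtracking never increases the objective. The quadratic functional below is a made-up stand-in; the functional of the elastoplasticity problem is non-differentiable and the paper's analysis is carried out in the abstract variational-inequality setting.

```python
import numpy as np

def armijo_predictor_descent(grad_f, f, x0, n_iter=50, c=1e-4, beta=0.5):
    """Steepest-descent predictor with Armijo backtracking: each accepted
    predictor step decreases f, so {f(x_k)} is a decreasing sequence."""
    x = x0.copy()
    values = [f(x)]
    for _ in range(n_iter):
        g = grad_f(x)
        d = -g                                       # steepest-descent predictor direction
        t = 1.0
        while f(x + t * d) > f(x) + c * t * g @ d:   # Armijo sufficient decrease test
            t *= beta
        x = x + t * d
        values.append(f(x))
    return x, values

# made-up smooth convex model functional f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
x, values = armijo_predictor_descent(grad_f, f, np.array([5.0, -5.0]))
print("monotone decrease:", all(v2 <= v1 for v1, v2 in zip(values, values[1:])))
```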

10.
This work extends efficient results for the 0–1 knapsack problem to 0–1 linear programming problems with multiple inequality constraints. The two crucial phases for solving this type of problem are presented: (i) two greedy algorithms with linear expected time complexity are proposed for determining a lower bound on the optimal value, using a cascade of surrogate relaxations of the original problem whose sizes decrease step by step; a comparative study with the best known heuristic methods, covering both the accuracy of the approximate solutions and the practical computation times, is reported. (ii) The greedy algorithm is then embedded in an efficient reduction framework: variables and constraints are eliminated by combining tests applied to Lagrangean and surrogate relaxations of the original problem. Extensive computational results on test problems from the literature are summarized.
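The first phase can be made concrete with a small schematic greedy routine: the m inequality constraints are aggregated into a single surrogate knapsack constraint with nonnegative multipliers, items are considered in order of profit per aggregated weight, and only items that remain feasible for all original constraints are taken. The instance, the uniform multipliers, and the O(n log n) sort (instead of the paper's linear expected-time procedure) are simplifications for illustration.

```python
import numpy as np

def surrogate_greedy_lower_bound(profits, A, b, multipliers=None):
    """Greedy lower bound for  max p.x  s.t.  A x <= b,  x in {0,1}^n.
    A surrogate relaxation aggregates the m constraints with nonnegative
    multipliers into a single knapsack constraint; items are then taken in
    order of profit per aggregated weight, keeping only those that stay
    feasible for *all* original constraints.  (Schematic: uniform multipliers
    and a sort instead of the paper's linear expected-time scheme.)"""
    m, n = A.shape
    u = np.ones(m) if multipliers is None else np.asarray(multipliers, float)
    w = u @ A                                    # aggregated surrogate weights
    order = np.argsort(-profits / np.maximum(w, 1e-12))
    x = np.zeros(n, dtype=int)
    slack = b.astype(float).copy()
    for j in order:
        if np.all(A[:, j] <= slack):             # item fits every original constraint
            x[j] = 1
            slack -= A[:, j]
    return x, profits @ x

# small made-up instance: 3 constraints, 6 items
profits = np.array([10.0, 13.0, 7.0, 8.0, 9.0, 4.0])
A = np.array([[3.0, 5.0, 2.0, 4.0, 3.0, 1.0],
              [4.0, 4.0, 3.0, 2.0, 5.0, 2.0],
              [2.0, 6.0, 1.0, 3.0, 2.0, 1.0]])
b = np.array([10.0, 11.0, 8.0])
x, lb = surrogate_greedy_lower_bound(profits, A, b)
print("greedy solution:", x, " lower bound:", lb)
```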

11.
This paper describes the construction of block predictor–corrector methods based on Runge–Kutta–Nyström correctors. Our approach is to apply the predictor–corrector method not only with stepsize h, but, in addition (and simultaneously), with stepsizes a_i h, i = 1, ..., r. In this way, at each step, a whole block of approximations to the exact solution at off-step points is computed. In the next step, these approximations are used to obtain a high-order predictor formula using Lagrange or Hermite interpolation. Since the block approximations at the off-step points can be computed in parallel, the sequential costs of these block predictor–corrector methods are comparable with those of a conventional predictor–corrector method. Furthermore, by using Runge–Kutta–Nyström corrector methods, the computation of the approximation at each off-step point is also highly parallel. Numerical comparisons on a shared memory computer show the efficiency of the methods for problems with expensive function evaluations.
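How a block of off-step approximations is turned into a high-order predictor can be sketched in a few lines: the block values of the current step define a Lagrange interpolation polynomial, which is then evaluated at the off-step abscissas of the next step. The abscissas a_i, the stepsize, and the use of exact test-solution values in place of RKN corrector output are made-up choices for illustration only.

```python
import numpy as np
from scipy.interpolate import lagrange

# block abscissas (made-up choice) and stepsize
a = np.array([0.25, 0.5, 0.75, 1.0])
h = 0.1
t_n = 0.0

# pretend these are the block approximations produced by the RKN corrector in
# the current step; here we simply take exact values of a test solution
y = lambda t: np.sin(t)
block_t = t_n + a * h
block_y = y(block_t)

# Lagrange interpolation polynomial through the current block ...
P = lagrange(block_t, block_y)

# ... evaluated at the off-step points of the NEXT step gives the predictor
next_block_t = (t_n + h) + a * h
y_pred = P(next_block_t)
print("predictor error at next block:", np.abs(y_pred - y(next_block_t)))
```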

12.
In this paper, we study the global convergence of a large class of primal–dual interior point algorithms for solving the linearly constrained convex programming problem. The algorithms in this class decrease the value of a primal–dual potential function and hence belong to the class of so-called potential reduction algorithms. An inexact line search based on the Armijo stepsize rule is used to compute the stepsize. The directions used by the algorithms are the same as the ones used in primal–dual path following and potential reduction algorithms, and a very mild condition on the choice of the centering parameter is assumed. The algorithms always keep primal and dual feasibility and, in contrast to the polynomial potential reduction algorithms, they do not need to drive the value of the potential function towards −∞ in order to converge. One of the techniques used in the convergence analysis of these algorithms has its root in nonlinear unconstrained optimization theory. Research supported in part by NSF Grant DDM-9109404.
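The kind of potential function and Armijo test involved can be sketched as follows: a standard primal–dual potential φ_ρ(x, s) = ρ log(x^T s) − Σ_i log(x_i s_i) is evaluated, and a backtracking Armijo step is taken along a supplied direction, starting from a fraction-to-the-boundary stepsize so that positivity is preserved. The data and the direction (here simply the negative potential gradient) are made up; in the algorithms of the paper the direction comes from the primal–dual path-following/potential-reduction linear system and feasibility is maintained.

```python
import numpy as np

def potential(x, s, rho):
    """Primal-dual potential function (Tanabe-Todd-Ye type):
       phi(x, s) = rho * log(x.s) - sum_i log(x_i * s_i),  x, s > 0."""
    return rho * np.log(x @ s) - np.sum(np.log(x * s))

def armijo_potential_step(x, s, dx, ds, rho, c=1e-4, beta=0.5):
    """Armijo backtracking on the potential along a given direction (dx, ds),
    starting from a fraction-to-the-boundary stepsize so that x, s stay > 0."""
    # largest step keeping strict positivity, damped by 0.99
    neg = np.concatenate([dx, ds]) < 0
    ratios = -np.concatenate([x, s])[neg] / np.concatenate([dx, ds])[neg]
    t = 0.99 * min(1.0, ratios.min()) if neg.any() else 1.0
    phi0 = potential(x, s, rho)
    # directional derivative of the potential at (x, s)
    gx = rho * s / (x @ s) - 1.0 / x
    gs = rho * x / (x @ s) - 1.0 / s
    slope = gx @ dx + gs @ ds
    while potential(x + t * dx, s + t * ds, rho) > phi0 + c * t * slope:
        t *= beta
    return x + t * dx, s + t * ds, t

# made-up strictly positive primal-dual point and an illustrative descent
# direction (the negative potential gradient)
x = np.array([1.0, 2.0, 0.5])
s = np.array([0.5, 1.0, 2.0])
rho = len(x) + np.sqrt(len(x))
gx = rho * s / (x @ s) - 1.0 / x
gs = rho * x / (x @ s) - 1.0 / s
x1, s1, t = armijo_potential_step(x, s, -gx, -gs, rho)
print("step:", t, " potential drop:", potential(x, s, rho) - potential(x1, s1, rho))
```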

13.
14.
The solution of KKT systems is ubiquitous in optimization methods and often dominates the computation time, especially when large-scale problems are considered. Thus, the effective implementation of such methods is highly dependent on the availability of effective linear algebra algorithms and software that are able, in turn, to take into account the specific needs of optimization. In this paper we discuss the mutual impact of linear algebra and optimization, focusing on interior point methods and on the iterative solution of the KKT system. Three critical issues are addressed: preconditioning, termination control for the inner iterations, and inertia control.
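A minimal sketch of the iterative-solution side is given below: a symmetric indefinite KKT (saddle-point) matrix is assembled for made-up data and solved with MINRES using a block-diagonal preconditioner built from H and the Schur complement. The sizes and data are invented, the Schur complement is formed exactly only because the example is tiny, and the paper's central topics of inner-iteration termination control and inertia control are not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)
n, m = 40, 10                                   # made-up sizes
H = np.eye(n) * 2.0 + 0.1 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)                             # symmetrize; the 2I shift keeps it SPD here
A = rng.standard_normal((m, n))                 # full row rank with probability 1
rhs = rng.standard_normal(n + m)

# symmetric indefinite KKT (saddle-point) matrix  [[H, A^T], [A, 0]]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])

# block-diagonal preconditioner  P = diag(H, A H^{-1} A^T); MINRES expects an
# operator applying P^{-1}, and P must be symmetric positive definite
S = A @ np.linalg.solve(H, A.T)                 # exact Schur complement (tiny example)
def apply_pinv(v):
    return np.concatenate([np.linalg.solve(H, v[:n]), np.linalg.solve(S, v[n:])])
P_inv = LinearOperator((n + m, n + m), matvec=apply_pinv)

sol, info = minres(K, rhs, M=P_inv)
print("minres flag:", info, " residual:", np.linalg.norm(K @ sol - rhs))
```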

15.
16.
In the present paper, a family of predictor–corrector (PC) schemes is developed for the numerical solution of nonlinear parabolic differential equations. Iterative processes are avoided by use of implicit–explicit (IMEX) methods. Moreover, compared to the predictor schemes, the proposed methods usually have superior accuracy and stability properties. Some confirmation of these properties is given by applying the schemes to the well-known Fisher's equation.
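One illustrative member of such a family is sketched below for Fisher's equation u_t = D u_xx + r u(1 − u): the diffusion term is treated implicitly (so no nonlinear iteration is needed) and the reaction term explicitly, with the predictor using the reaction at the old time level and the corrector re-evaluating it at the predicted solution. The grid, time step, and initial front profile are made up, and this is not claimed to be one of the exact schemes proposed in the paper.

```python
import numpy as np

def imex_pc_fisher(nx=101, L=50.0, T=5.0, dt=0.01, D=1.0, r=1.0):
    """Schematic IMEX predictor-corrector stepping for Fisher's equation
        u_t = D u_xx + r u (1 - u)
    on [0, L] with homogeneous Neumann boundaries.  Diffusion is implicit,
    the reaction term explicit; the corrector re-evaluates the reaction at
    the predicted solution."""
    dx = L / (nx - 1)
    x = np.linspace(0.0, L, nx)
    u = 1.0 / (1.0 + np.exp(x - 10.0))           # initial front profile (made up)

    # Laplacian with Neumann boundaries, scaled by D / dx^2
    lap = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        lap[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
    lap[0, :2], lap[-1, -2:] = [-2.0, 2.0], [2.0, -2.0]
    lap *= D / dx**2
    M = np.eye(nx) - dt * lap                    # implicit diffusion operator

    reaction = lambda u: r * u * (1.0 - u)
    for _ in range(int(T / dt)):
        u_pred = np.linalg.solve(M, u + dt * reaction(u))       # predictor
        u = np.linalg.solve(M, u + dt * reaction(u_pred))       # corrector
    return x, u

x, u = imex_pc_fisher()
print("front position (u = 0.5) at T:", x[np.argmin(np.abs(u - 0.5))])
```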

17.
This paper concerns solving an overdetermined linear system A^T x = b in the least l_1-norm or l_∞-norm sense, where A is an m×n matrix with m < n. We show that the primal-dual interior point approach for linear programming can be applied, in an effective manner, to linear programming versions of the l_1- and l_∞-problems. The resulting algorithms are simple to implement and can attain quadratic or superlinear convergence rates. At each iteration, the algorithms must solve a linear system with an m×m positive-definite coefficient matrix of the form ADA^T, where D is a positive diagonal matrix. The preliminary numerical results indicate that the proposed algorithms offer considerable promise. This research was supported in part by Grants NSF DMS-91-02761 and DOE DE-FG05-91-ER25100.
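To make the linear programming reformulation concrete, the sketch below solves a small least l_1-norm problem min_x ||A^T x − b||_1 by introducing auxiliary variables t with −t ≤ A^T x − b ≤ t and calling SciPy's general-purpose HiGHS LP solver; the specialized primal-dual method of the paper, with its ADA^T normal-equation solves, is not reproduced, and all data are made up.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n = 5, 200                                   # A is m x n with m < n (made-up sizes)
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(m)
b = A.T @ x_true + 0.1 * rng.standard_normal(n)
b[::20] += 5.0                                  # a few large outliers

# least l1-norm problem  min_x ||A^T x - b||_1  as a linear program:
#   min  sum(t)   subject to   -t <= A^T x - b <= t,  t >= 0
c = np.concatenate([np.zeros(m), np.ones(n)])
A_ub = np.block([[A.T, -np.eye(n)],
                 [-A.T, -np.eye(n)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * m + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_l1 = res.x[:m]

# least-squares solution for comparison: much more sensitive to the outliers
x_l2 = np.linalg.lstsq(A.T, b, rcond=None)[0]
print("l1 error:", np.linalg.norm(x_l1 - x_true),
      " l2 error:", np.linalg.norm(x_l2 - x_true))
```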

18.
Interior–point algorithms are among the most efficient techniques for solving complementarity problems. In this paper, a procedure for globalizing interior–point algorithms by using the maximum stepsize is introduced. The algorithm combines exact or inexact interior–point and projected–gradient search techniques and employs a line–search procedure for the natural merit function associated with the complementarity problem. For linear problems, the maximum stepsize is shown to be acceptable if the Newton interior–point search direction is employed. Complementarity and optimization problems are discussed which the algorithm is able to process by either finding a solution or showing that no solution exists. A modification of the algorithm for dealing with infeasible linear complementarity problems is introduced which, in practice, employs only interior–point search directions. Computational experiments on the solution of complementarity problems and convex programming problems by the new algorithm are included.
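For complementarity problems the natural residual min(x, F(x)) and the merit function built from it (half its squared norm) are a common choice; whether this is exactly the merit function of the paper is not asserted here. The sketch below evaluates this merit function for a made-up linear F and backtracks from a given maximum stepsize along a supplied direction until the merit decreases, which is only a simplified stand-in for the paper's acceptance rule and search directions.

```python
import numpy as np

def natural_residual(x, F):
    """Natural residual of the complementarity problem
       x >= 0, F(x) >= 0, x.F(x) = 0:  r(x) = min(x, F(x)) componentwise."""
    return np.minimum(x, F(x))

def natural_merit(x, F):
    """Merit function commonly associated with the natural residual."""
    r = natural_residual(x, F)
    return 0.5 * r @ r

def backtrack_from_max_step(x, d, F, t_max, beta=0.5, max_halvings=30):
    """Start from the maximum stepsize t_max along direction d and halve it
    until the natural merit function decreases (a simple globalization test)."""
    phi0 = natural_merit(x, F)
    t = t_max
    for _ in range(max_halvings):
        if natural_merit(x + t * d, F) < phi0:
            return x + t * d, t
        t *= beta
    return x, 0.0                                # no acceptable step found

# made-up monotone linear complementarity problem F(x) = M x + q
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -4.0])
F = lambda x: M @ x + q
x = np.array([1.0, 1.0])
d = np.array([-1.0, 1.0])                        # made-up search direction
x_new, t = backtrack_from_max_step(x, d, F, t_max=1.0)
print("accepted step:", t, " merit:", natural_merit(x, F), "->", natural_merit(x_new, F))
```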

19.
This paper introduces an error propagation formula for a certain class of multi-level iterative aggregation–disaggregation (IAD) methods for the numerical solution of stationary probability vectors of discrete finite Markov chains. The formula can be used to investigate convergence by computing the spectral radius of the error propagation matrix for specific Markov chains. Numerical experiments indicate that the same type of formula could be used for a wider class of multi-level IAD methods. Using the formula, we show that for given data there is no relation between the convergence of two-level and of multi-level IAD methods.
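A schematic two-level IAD sweep is sketched below: the current approximation is used to aggregate the chain into a small row-stochastic matrix, the stationary vector of the aggregated chain is computed exactly, the result is disaggregated by rescaling within each aggregate, and a power-iteration smoothing step is applied. The 4-state chain and the aggregation into two groups are made-up data; the error propagation formula of the paper is not reproduced.

```python
import numpy as np

def stationary_dense(P):
    """Stationary vector of a small row-stochastic matrix P (pi P = pi),
    obtained by replacing one equation of (P^T - I) pi = 0 with sum(pi) = 1."""
    n = len(P)
    M = P.T - np.eye(n)
    M[-1, :] = 1.0
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.solve(M, rhs)

def iad_two_level(P, groups, n_sweeps=50, n_smooth=1):
    """Two-level iterative aggregation-disaggregation (IAD) for pi P = pi.
    `groups[i]` is the aggregate that state i belongs to.  Schematic version
    with power-iteration smoothing and exact solution of the aggregated chain."""
    n = len(P)
    G = np.max(groups) + 1
    x = np.full(n, 1.0 / n)                          # initial guess
    for _ in range(n_sweeps):
        # aggregation: coarse transition matrix weighted by the current x
        A = np.zeros((G, G))
        for I in range(G):
            w = np.where(groups == I, x, 0.0)
            w /= w.sum()
            for J in range(G):
                A[I, J] = w @ P[:, groups == J].sum(axis=1)
        z = stationary_dense(A)                      # coarse stationary vector
        # disaggregation: rescale x inside each aggregate to match z
        for I in range(G):
            mask = groups == I
            x[mask] *= z[I] / x[mask].sum()
        for _ in range(n_smooth):                    # smoothing step(s)
            x = x @ P
    return x

# made-up 4-state row-stochastic chain with two aggregates {0,1} and {2,3}
P = np.array([[0.5, 0.4, 0.1, 0.0],
              [0.3, 0.5, 0.1, 0.1],
              [0.0, 0.2, 0.6, 0.2],
              [0.1, 0.0, 0.3, 0.6]])
groups = np.array([0, 0, 1, 1])
pi = iad_two_level(P, groups)
print("IAD:", pi, " error vs direct:", np.linalg.norm(pi - stationary_dense(P)))
```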

20.
The semismooth Newton method is an effective method for solving a nonsmooth equation which arises from a reformulation of the complementarity problem. Under appropriate conditions, we verify the monotone convergence of the method. We also present semismooth Newton–Schwarz iterative methods for the nonsmooth equation. Under suitable conditions, the methods exhibit monotone and superlinear convergence properties.
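One standard nonsmooth reformulation uses the Fischer–Burmeister function φ(a, b) = sqrt(a² + b²) − a − b, for which φ(a, b) = 0 exactly when a ≥ 0, b ≥ 0, ab = 0; whether this is the reformulation used in the paper is not asserted. The sketch below runs a plain (undamped) semismooth Newton iteration on a tiny linear complementarity problem, without the Schwarz (domain decomposition) component.

```python
import numpy as np

def fischer_burmeister(a, b):
    """FB function: phi(a, b) = sqrt(a^2 + b^2) - a - b, a standard nonsmooth
    reformulation of componentwise complementarity (one possible choice)."""
    return np.sqrt(a * a + b * b) - a - b

def semismooth_newton_lcp(M, q, x0=None, tol=1e-12, max_iter=50):
    """Plain semismooth Newton method for Phi(x) = phi(x, Mx + q) = 0,
    using one element of the generalized Jacobian at each iterate."""
    n = len(q)
    x = np.ones(n) if x0 is None else x0.astype(float)
    for k in range(max_iter):
        y = M @ x + q
        Phi = fischer_burmeister(x, y)
        if np.linalg.norm(Phi) < tol:
            break
        r = np.sqrt(x * x + y * y)
        r = np.where(r > 0, r, 1.0)              # handle the kink at (0, 0)
        Da = x / r - 1.0                         # d phi / d a
        Db = y / r - 1.0                         # d phi / d b
        J = np.diag(Da) + np.diag(Db) @ M        # element of the generalized Jacobian
        x = x + np.linalg.solve(J, -Phi)
    return x, k

M = np.array([[2.0, 1.0], [1.0, 2.0]])           # made-up LCP data
q = np.array([-1.0, -4.0])
x, iters = semismooth_newton_lcp(M, q)
print("solution:", x, " iterations:", iters, " residual:",
      np.linalg.norm(fischer_burmeister(x, M @ x + q)))
```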
