Similar Documents
20 similar documents found (search time: 31 ms)
1.
We deal with the primal–dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. The new analysis does not improve the known complexity results but provides a unified way to analyze both large-update and small-update methods.
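As a hypothetical sketch of the kind of primal–dual Newton step discussed here (not the paper's proximity-measure analysis itself), the linearized central-path system for LO can be assembled and solved directly; all names and the tiny example problem are illustrative:

```python
import numpy as np

def newton_direction(A, x, y, s, mu):
    """One primal-dual Newton step for min c^T x s.t. A x = b, x >= 0.

    Solves the linearized central-path conditions
        A dx = 0,  A^T dy + ds = 0,  S dx + X ds = mu e - X S e,
    assuming (x, y, s) is strictly feasible. Illustrative helper only."""
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    e = np.ones(n)
    # Assemble the full (n + m + n)-dimensional Newton system, unknowns (dx, dy, ds).
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * e - X @ S @ e])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]

# Tiny example: min x1 + 2 x2  s.t.  x1 + x2 = 2, x >= 0,
# at the strictly feasible point x = (1, 1), y = -1, s = c - A^T y = (2, 3).
A = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0]); y = np.array([-1.0]); s = np.array([2.0, 3.0])
dx, dy, ds = newton_direction(A, x, y, s, mu=0.5)
```

The returned direction keeps primal and dual residuals at zero and drives each product x_i s_i toward the target mu.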

2.
In this paper, we present a new formulation for constructing an n-dimensional ellipsoid by generalizing the computation of the minimum volume covering ellipsoid. The proposed ellipsoid construction is associated with a user-defined parameter β∈[0,1), and formulated as a convex optimization based on the CVaR minimization technique proposed by Rockafellar and Uryasev (J. Bank. Finance 26:1443–1471, 2002). An interior point algorithm for the solution is developed by modifying the DRN algorithm of Sun and Freund (Oper. Res. 52(5):690–706, 2004) for the minimum volume covering ellipsoid. By exploiting the solution structure, the associated parametric computation can be performed efficiently. Also, the maximization of the normal likelihood function can be characterized in the context of the proposed ellipsoid construction, and the likelihood maximization can be generalized with the parameter β. Motivated by this fact, the new ellipsoid construction is examined through a multiclass discrimination problem. Numerical results are given, demonstrating the computational efficiency of the interior point algorithm and the capability of the proposed generalization.

3.
Hopfield neural networks and affine scaling interior point methods are combined in a hybrid approach for solving linear optimization problems. The Hopfield networks perform the early stages of the optimization procedures, providing enhanced feasible starting points for both primal and dual affine scaling interior point methods, thus facilitating the steps towards optimality. The hybrid approach is applied to a set of real world linear programming problems. The results show the potential of the integrated approach, indicating that the combination of neural networks and affine scaling interior point methods can be a good alternative to obtain solutions for large-scale optimization problems.

4.
An interior ellipsoid algorithm for a class of linearly constrained convex programs
1. Introduction. Since the publication of Karmarkar's celebrated projective algorithm in 1984, its theoretical polynomial convergence and practical computational effectiveness have made interior point algorithms a focus of optimization research over the past decade or so ([1]). Through the in-depth work of researchers in China and abroad, the study of interior point algorithms for linear programming and convex quadratic programming has produced many results ([2], [3]). These algorithms fall roughly into four types: projective algorithms, affine scaling algorithms, path-following methods, and potential reduction methods ([3], [9]). Recently, work has begun on extending these methods to convex programming problems in nonlinear programming, linear complementarity problems, and nonlinear complementarity problems ([6], [7], [8], [10], [11]). For example, [8] treats a class of convex separable pro…

5.
Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables. In this survey we discuss several issues related to interior point methods, including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence, and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization methods and software. In the final part of this paper we therefore address a redesign of interior point methods to allow them to work in a matrix-free regime and to make them well-suited to solving even larger problems.

6.
Newton’s method is a basic tool in numerical analysis and numerous applications, including operations research and data mining. We survey the history of the method, its main ideas, convergence results, modifications, and its global behavior. We focus on applications of the method to various classes of optimization problems, such as unconstrained minimization, equality constrained problems, convex programming, and interior point methods. Some extensions (non-smooth problems, continuous analogs, Smale’s results, etc.) are discussed briefly, while others (e.g., versions of the method that achieve global convergence) are addressed in more detail.
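The core iteration surveyed here can be sketched in a few lines for unconstrained minimization; this is a minimal illustration of the locally quadratically convergent pure Newton step, without any of the globalization devices the survey discusses (names and the test function are illustrative):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Pure Newton iteration x_{k+1} = x_k - H(x_k)^{-1} g(x_k).

    Minimal sketch: no line search or trust region, so convergence
    is only local in general."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve H d = g rather than forming the inverse explicitly.
        x = x - np.linalg.solve(hess(x), g)
    return x

# Minimize f(x, y) = (x - 1)^2 + 10 (y + 2)^2; for a quadratic,
# Newton's method reaches the minimizer (1, -2) in a single step.
grad = lambda v: np.array([2 * (v[0] - 1), 20 * (v[1] + 2)])
hess = lambda v: np.diag([2.0, 20.0])
x_star = newton_minimize(grad, hess, [5.0, 5.0])
```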

7.
In this paper, continuous methods are introduced to compute both the extreme and interior eigenvalues and their corresponding eigenvectors for real symmetric matrices. The main idea is to convert the extreme and interior eigenvalue problems into some optimization problems. Then a continuous method which includes both a merit function and an ordinary differential equation (ODE) is introduced for each resulting optimization problem. The convergence of each ODE solution is proved for any starting point. The limit of each ODE solution for any starting point is fully studied. Both the extreme and the interior eigenvalues and their corresponding eigenvectors can be easily obtained under a very mild condition. Promising numerical results are also presented.
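To illustrate the ODE idea (not the merit functions of the paper itself), one can follow the negative gradient flow of the Rayleigh quotient by explicit Euler steps; for a generic start this converges to the smallest eigenpair of a symmetric matrix. The function name, step size, and example matrix are all assumptions for illustration:

```python
import numpy as np

def extreme_eigenpair(A, x0, step=0.1, n_steps=2000):
    """Euler discretization of the Rayleigh-quotient gradient flow
        x'(t) = -(A x - r(x) x),  r(x) = x^T A x / x^T x,
    with renormalization after each step. For symmetric A and a
    generic starting point, x converges to the eigenvector of the
    smallest eigenvalue."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(n_steps):
        r = x @ A @ x                      # current Rayleigh quotient
        x = x - step * (A @ x - r * x)     # one Euler step of the flow
        x /= np.linalg.norm(x)             # stay on the unit sphere
    return x @ A @ x, x

# Eigenvalues of this matrix are 1, 3, 7; the flow picks out the smallest.
A = np.diag([1.0, 3.0, 7.0])
lam, v = extreme_eigenpair(A, [1.0, 1.0, 1.0])
```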

8.
This paper first modifies the IPA algorithm and proves the convergence of the modified IPA. The modified IPA is then applied to convex optimization problems with inequality constraints to obtain a new interior point algorithm, which is compared with the classical barrier function method; the comparison shows the theoretical advantages of the new algorithm. An engineering method for computing the solution and a proof of its convergence are also given.

9.
In the last two decades, the mathematical programming community has witnessed some spectacular advances in interior point methods and robust optimization. These advances have recently started to significantly impact various fields of applied sciences and engineering where computational efficiency is essential. This paper focuses on two such fields: digital signal processing and communication. In the past, the widely used optimization methods in both fields had been the gradient descent or least squares methods, both of which are known to suffer from the usual headaches of stepsize selection, algorithm initialization and local minima. With the recent advances in conic and robust optimization, the opportunity is ripe to use the newly developed interior point optimization techniques and highly efficient software tools to help advance the fields of signal processing and digital communication. This paper surveys recent successes of applying interior point and robust optimization to solve some core problems in these two fields. The successful applications considered in this paper include adaptive filtering, robust beamforming, design and analysis of multi-user communication systems, channel equalization, decoding and detection. Throughout, our emphasis is on how to exploit hidden convexity, convex reformulation of semi-infinite constraints, analysis of convergence, complexity and performance, as well as efficient practical implementation. Received: January 22, 2003 / Accepted: April 29, 2003 / Published online: May 28, 2003. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG0090391, and by the Canada Research Chair program. New address after April 1, 2003: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA.

10.
The solution of KKT systems is ubiquitous in optimization methods and often dominates the computation time, especially when large-scale problems are considered. Thus, the effective implementation of such methods is highly dependent on the availability of effective linear algebra algorithms and software that are able, in turn, to take into account the specific needs of optimization. In this paper we discuss the mutual impact of linear algebra and optimization, focusing on interior point methods and on the iterative solution of the KKT system. Three critical issues are addressed: preconditioning, termination control for the inner iterations, and inertia control.

11.
Many optimization problems involve the minimization of an objective function on a convex cone. In this respect we define a concave gauge function which will be used in interior point methods. Applications are given, in particular, on the space of real symmetric matrices.

12.
In this work, the optimal adjustment algorithm for p coordinates, which arose as a generalization of the optimal pair adjustment algorithm, is used to accelerate the convergence of interior point methods using a hybrid iterative approach for solving the linear systems of the interior point method. Its main advantages are simplicity and fast initial convergence. At each interior point iteration, the preconditioned conjugate gradient method is used to solve the normal equation system. The controlled Cholesky factorization is adopted as the preconditioner in the first outer iterations, and the splitting preconditioner is adopted in the final outer iterations. The optimal adjustment algorithm is applied in the preconditioner transition in order to improve both speed and robustness. Numerical experiments on a set of linear programming problems showed that this approach reduces the total number of interior point iterations and the running time for some classes of problems. Furthermore, some problems were solved only when the optimal adjustment algorithm for p coordinates was used in the change of preconditioners.
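The inner solve mentioned here applies conjugate gradients to the normal-equations system (A D² Aᵀ) dy = r that arises at each interior point iteration. The sketch below uses plain, unpreconditioned CG (not the controlled Cholesky or splitting preconditioners of the paper), and the matrices and right-hand side are illustrative:

```python
import numpy as np

def solve_normal_equations(A, d, r, tol=1e-10, max_iter=200):
    """Conjugate gradients on (A D^2 A^T) dy = r with D = diag(d).

    M is formed explicitly only for clarity; a practical code would
    keep it implicit and apply A, D, A^T in sequence."""
    M = A @ np.diag(d ** 2) @ A.T
    dy = np.zeros_like(r)
    res = r - M @ dy          # initial residual
    p = res.copy()            # initial search direction
    rs = res @ res
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        dy += alpha * p
        res -= alpha * Mp
        rs_new = res @ res
        p = res + (rs_new / rs) * p
        rs = rs_new
    return dy

# Small illustrative system: M is symmetric positive definite whenever
# A has full row rank and d > 0, so CG applies.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])
d = np.array([1.0, 0.5, 2.0])
r = np.array([1.0, -2.0])
dy = solve_normal_equations(A, d, r)
```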

13.
《Optimization》2012,61(1):59-79
In sensitivity analysis one wants to know how the problem and the optimal solutions change under variation of the input data. We consider the case when variation happens in the right-hand side of the constraints and/or in the linear term of the objective function. We are interested in finding the range of parameter variation in Convex Quadratic Optimization (CQO) problems for which the support set of a given primal optimal solution remains invariant. This question was first raised in Linear Optimization (LO) and is known as Type II (so-called support set invariancy) sensitivity analysis. We present computable auxiliary problems to identify the range of parameter variation in support set invariancy sensitivity analysis for CQO. It should be mentioned that all the given auxiliary problems are LO problems and can be solved by an interior point method in polynomial time. We also highlight the differences between the characteristics of support set invariancy sensitivity analysis for LO and CQO.

14.
In this paper, we introduce two direct methods for solving some classes of linear programming problems. The first method produces the extreme vertex or a vertex neighboring the extreme point. The second method is based on game theory. Both methods can be used to prepare the starting point for the simplex method. The efficiency of the improved simplex method, whose starting point is constructed by the introduced methods, is compared with that of the original simplex method and of interior point methods, and is illustrated by examples. We also investigate the elimination of excessive constraints.

15.
New advances in algorithms for complementarity problems
修乃华, 高自友. 《数学进展》 (Advances in Mathematics), 1999, 28(3): 193-210
Complementarity problems are an important class of optimization problems, and over the past thirty-odd years many algorithms have been proposed for solving them. This paper mainly reviews certain new algorithms that appeared between 1990 and 1997. They can be roughly classified as: (1) smooth equation methods; (2) nonsmooth equation methods; (3) differentiable unconstrained optimization methods; (4) GLP projection methods; (5) interior point methods; (6) smoothing and non-interior continuation methods. Each class of algorithms and the corresponding convergence results are described and commented on, and the relevant literature is listed.

16.
We present an extension of Karmarkar's linear programming algorithm for solving a more general class of optimization problems: convex quadratic programs. This extension is based on the iterated application of the objective augmentation and the projective transformation, followed by optimization over an inscribing ellipsoid centered at the current solution. It creates a sequence of interior feasible points that converge to the optimal feasible solution in O(Ln) iterations; each iteration can be computed in O(Ln³) arithmetic operations, where n is the number of variables and L is the number of bits in the input. In this paper, we emphasize its convergence property, practical efficiency, and relation to the ellipsoid method.

17.
18.

In this paper, we investigate a new primal-dual long-step interior point algorithm for linear optimization. Based on the step size, interior point algorithms can be divided into two main groups: short-step and long-step methods. In practice, long-step variants perform better, but usually a better theoretical complexity can be achieved for the short-step methods. One of the exceptions is the large-update algorithm of Ai and Zhang. The new wide neighborhood and the main characteristics of the presented algorithm are based on their approach. In addition, we use the algebraic equivalent transformation technique of Darvay to determine new modified search directions for our method. We show that the new long-step algorithm is convergent and has the best known iteration complexity of short-step variants. We present our numerical results and compare the performance of our algorithm with two previously introduced Ai–Zhang type interior point algorithms on a set of linear programming test problems from the Netlib library.

19.
This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. The conditions to be satisfied at a solution are the usual Jacobian uniqueness conditions. Convergence rates are proved for three kinds of step size rules: (i) the step size rule adopted by Zhang et al. in their convergence analysis of a primal-dual interior point method for linear programs, which uses a single step size for the primal and dual variables; (ii) the step size rule used in the software package OB1, which uses different step sizes for the primal and dual variables; and (iii) the step size rule used by Yamashita for his globally convergent primal-dual interior point method for general constrained optimization problems, which also uses different step sizes for the primal and dual variables. Conditions on the barrier parameter and on the parameters in the step size rules are given for each case. For these step size rules, local and quadratic convergence of the Newton method and local and superlinear convergence of the quasi-Newton method are proved. A preliminary version of this paper was presented at the conference "Optimization-Models and Algorithms" held at the Institute of Statistical Mathematics, Tokyo, March 1993.
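Step size rules of the kind compared here are typically of the fraction-to-the-boundary type: take the longest step toward the nonnegativity boundary, damped by a factor τ close to 1. A minimal sketch (the function name, τ value, and example vectors are assumptions, not taken from the paper):

```python
import numpy as np

def fraction_to_boundary(x, dx, tau=0.995):
    """alpha = min(1, tau * min_{dx_i < 0} (-x_i / dx_i)).

    Damping by tau < 1 keeps x + alpha * dx strictly positive.
    Applied separately to primal and dual variables, this gives
    the 'different step sizes' variants; applying one common alpha
    to both gives the single-step-size variant."""
    neg = dx < 0
    if not np.any(neg):
        return 1.0                      # no component moves toward the boundary
    return float(min(1.0, tau * np.min(-x[neg] / dx[neg])))

x = np.array([1.0, 2.0, 0.5])
dx = np.array([-2.0, 1.0, -0.25])
alpha_p = fraction_to_boundary(x, dx)   # primal step size for this direction
```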

20.
In 1951, Fenchel discovered a special duality which relates the minimization of a sum of two convex functions to the maximization of a sum of concave functions, using conjugates. Fenchel's duality is central to the study of constrained optimization. It requires the existence of an interior point of a convex set, which often has empty interior in optimization applications. The well known relaxations of this requirement in the literature are again weaker forms of the interior point condition. Avoiding an interior point condition in duality has so far been a difficult problem. However, a non-interior point type condition is essential for the application of Fenchel's duality to optimization. In this paper we solve this problem by presenting a simple geometric condition in terms of the sum of the epigraphs of the conjugate functions. We also establish a necessary and sufficient condition for the ε-subdifferential sum formula in terms of the sum of the epigraphs of the conjugate functions. Our results offer further insight into Fenchel's duality. Dedicated to Terry Rockafellar on his 70th birthday.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号