Similar Articles
A total of 20 similar articles were found.
1.
The Karush-Kuhn-Tucker (KKT) conditions can be regarded as optimality conditions for both variational inequalities and constrained optimization problems. In order to overcome some drawbacks of recently proposed reformulations of KKT systems, we propose casting KKT systems as a minimization problem with nonnegativity constraints on some of the variables. We prove that, under fairly mild assumptions, every stationary point of this constrained minimization problem is a solution of the KKT conditions. Based on this reformulation, a new algorithm for the solution of the KKT conditions is suggested and shown to have some strong global and local convergence properties. Accepted 10 December 1997
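As a point of reference, the KKT system in question can be written for a generic constrained problem as below. The notation is ours, and the second display is only one illustrative way to cast the system as a minimization problem with nonnegativity constraints on some of the variables; the paper's actual reformulation may differ.

```latex
% KKT system for  min f(x)  s.t.  g(x) <= 0,  h(x) = 0:
\begin{aligned}
&\nabla f(x) + \nabla g(x)\lambda + \nabla h(x)\mu = 0,\\
&h(x) = 0, \qquad g(x) \le 0, \qquad \lambda \ge 0, \qquad \lambda_i\, g_i(x) = 0 \ \ \forall i.
\end{aligned}

% One illustrative recasting as a minimization problem in which only the
% multipliers \lambda carry nonnegativity constraints:
\min_{x,\lambda,\mu}\;
  \|\nabla f(x) + \nabla g(x)\lambda + \nabla h(x)\mu\|^{2}
  + \|h(x)\|^{2}
  + \|\max\{0,\, g(x)\}\|^{2}
  + \sum_i \bigl(\lambda_i\, g_i(x)\bigr)^{2}
\quad\text{s.t.}\quad \lambda \ge 0 .
```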

2.
In this paper, we propose a feasible QP-free method for solving nonlinear inequality constrained optimization problems. A new working set is proposed to estimate the active set. Specifically, to determine the working set, the new method makes use of the multiplier information from the previous iteration, eliminating the need to compute a multiplier function. At each iteration, two or three reduced symmetric systems of linear equations with a common coefficient matrix involving only constraints in the working set are solved, and when the iterate is sufficiently close to a KKT point, only two of them are involved. Moreover, the new algorithm is proved to be globally convergent to a KKT point under mild conditions. Without assuming strict complementarity, the convergence rate is superlinear under a condition weaker than the strong second-order sufficiency condition. Numerical experiments illustrate the efficiency of the algorithm.

3.
In this article, we propose a new algorithm for the solution of mixed integer bi-level linear problems (MIBLP). The algorithm is based on decomposing the initial problem into a restricted master problem (RMP) and a series of problems named slave problems (SP). The approach follows the Benders decomposition method, in which at each iteration the set of variables controlled by the upper-level optimization problem is fixed. The RMP is a relaxation of the MIBLP and the SP represents a restriction of the MIBLP. At each iteration the RMP interacts with the current SP through the addition of cuts produced using Lagrangian information from the current SP. The lower and upper bounds provided by the RMP and SP are updated at each iteration, and the algorithm converges when the difference between the upper and lower bounds is within a small tolerance ε. For an MIBLP, the Karush–Kuhn–Tucker (KKT) optimality conditions cannot be applied directly to the inner problem in order to transform the bi-level problem into a single-level problem. The proposed decomposition technique, however, allows the use of the KKT conditions and transforms the MIBLP into two single-level problems. The algorithm, which is a new method for the solution of MIBLP, is illustrated through a modified numerical example from the literature. Additional examples from the literature are presented to highlight the algorithm's convergence properties.
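For orientation, a generic mixed integer bi-level linear program of the kind treated above can be written as follows; the notation is ours, and the paper's exact formulation (including which variables are integer) may differ.

```latex
% x: upper-level variables (fixed in each Benders iteration),
% y: lower-level variables; J indexes the integer-constrained components.
\begin{aligned}
\min_{x,\,y}\quad & c_1^{\top} x + d_1^{\top} y\\
\text{s.t.}\quad  & A_1 x + B_1 y \le b_1, \qquad x_j \in \mathbb{Z} \ \ (j \in J),\\
                  & y \in \arg\min_{y'} \bigl\{\, d_2^{\top} y' \;:\; A_2 x + B_2 y' \le b_2,\ y' \ge 0 \,\bigr\}.
\end{aligned}
```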

4.
Barrier methods have led to several nonlinear programming (NLP) solvers (e.g. IPOPT, KNITRO, LOQO). However, certain regularity conditions are required for convergence of these methods. These conditions are violated for optimization models with dependent constraints, thus leading to method failure. These shortcomings can be identified by checking the inertia of the KKT matrix, and current solvers either add regularizing terms to correct the inertia of the KKT matrix or revert to more expensive trust region methods to solve the barrier problem. This study improves on these approaches with a new structured regularization strategy; within the Newton step it identifies an independent subset of equality constraints and removes the remaining constraints without modifying the KKT matrix structure. This approach leads to more accurate Newton steps and faster convergence, while maintaining global convergence properties. Implemented in IPOPT with the linear solvers HSL_MA57, HSL_MA97 and MUMPS, the algorithm is evaluated in numerical experiments on hundreds of examples from the CUTEr test set, modified for dependency. These results show an average reduction in iterations of more than 50% over the current version of IPOPT. In addition, several nonlinear blending problems are solved with the proposed algorithm, and improvements over existing regularization strategies are further demonstrated.
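For context, the inertia test mentioned above is usually stated for the standard barrier Newton system sketched below (a textbook form in our notation; the paper's structured strategy changes how dependent constraints are handled, not this basic test).

```latex
% Newton/KKT system for the equality-constrained barrier problem
% min \varphi_\mu(x)  s.t.  c(x) = 0, with W_k the (regularized) Hessian of the
% Lagrangian plus bound terms and A_k = \nabla c(x_k)^{\top}:
\begin{pmatrix} W_k & A_k^{\top} \\ A_k & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix}
= -\begin{pmatrix} \nabla \varphi_\mu(x_k) + A_k^{\top} \lambda_k \\ c(x_k) \end{pmatrix}.
% The solver accepts the step only if this matrix has inertia (n, m, 0);
% dependent rows of A_k make it singular, which is what triggers regularization.
```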

5.
Karush–Kuhn–Tucker (KKT) optimality conditions are often checked to investigate whether a solution obtained by an optimization algorithm is a likely candidate for the optimum. In this study, we report that although the KKT conditions must all be satisfied at the optimal point, the extent of violation of the KKT conditions at points arbitrarily close to the KKT point is not smooth, thereby making the KKT conditions difficult to use directly to evaluate the performance of an optimization algorithm. This happens due to the requirement of the complementary slackness condition associated with the KKT optimality conditions. To overcome this difficulty, we define modified $\epsilon$-KKT points by relaxing the complementary slackness and equilibrium equations of the KKT conditions and suggest a KKT-proximity measure, which is shown to reduce sequentially to zero as the iterates approach the KKT point. Besides the theoretical development defining the modified $\epsilon$-KKT point, we present extensive computer simulations of the proposed methodology on a set of iterates obtained through an evolutionary optimization algorithm to illustrate the working of our proposed procedure on smooth and non-smooth problems. The results indicate that the proposed KKT-proximity measure can be used as a termination condition for optimization algorithms. As a by-product, the method helps to find Lagrange multipliers corresponding to near-optimal solutions, which can be of importance to practitioners. We also provide a comparison of our KKT-proximity measure with the stopping criteria used in popular commercial software.
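To make the relaxation concrete, one common way of writing an $\epsilon$-relaxed KKT point for $\min f(x)$ subject to $g_j(x) \le 0$ is shown below; the notation and thresholds are our illustrative choices, and the paper's precise definition of the modified $\epsilon$-KKT point may differ.

```latex
% A feasible x is an \epsilon-KKT point (one common convention) if there exist u_j >= 0 with
\Bigl\|\nabla f(x) + \sum_j u_j \nabla g_j(x)\Bigr\|^{2} \le \epsilon
\quad\text{(relaxed equilibrium equation)},
\qquad
\sum_j u_j\, g_j(x) \ge -\epsilon
\quad\text{(relaxed complementary slackness)}.
```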

6.
An interior trust-region-based algorithm for linearly constrained minimization problems is proposed and analyzed. This algorithm is similar to trust region algorithms for unconstrained minimization: a trust region subproblem on a subspace is solved in each iteration. We establish that the proposed algorithm has convergence properties analogous to those of trust region algorithms for unconstrained minimization. Namely, every limit point of the generated sequence satisfies the Karush-Kuhn-Tucker (KKT) conditions and at least one limit point satisfies second-order necessary optimality conditions. In addition, if one limit point is a strong local minimizer and the Hessian is Lipschitz continuous in a neighborhood of that point, then the generated sequence converges to that point at a rate that is at least 2-step quadratic. We are mainly concerned with the theoretical properties of the algorithm in this paper; implementation issues and adaptation to large-scale problems will be addressed in future work.

7.
The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well-known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum-based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state-of-the-art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance as compared to the state-of-the-art acceleration techniques on both simulated and real tensors.
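As an illustration of the blockwise minimize/extrapolate/perturb structure described above, here is a minimal NumPy sketch for a 3-way tensor. The least-squares update is standard ALS; the momentum weight `beta`, the perturbation size and the stagnation trigger are our own illustrative choices rather than the paper's algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows are indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R)."""
    R = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

def reconstruct(factors):
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_als_extrapolated(T, rank, iters=200, beta=0.5, perturb=1e-3, tol=1e-8, seed=0):
    """Blockwise ALS with momentum extrapolation and random perturbation (sketch)."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    norm_T = np.linalg.norm(T)
    prev_err = np.linalg.norm(T - reconstruct(factors))
    for _ in range(iters):
        for n in range(3):
            others = [factors[m] for m in range(3) if m != n]
            KR = khatri_rao(others[0], others[1])   # order matches unfold(T, n)
            old = factors[n]
            # 1) minimization step: exact least-squares update of block n
            new = np.linalg.lstsq(KR, unfold(T, n).T, rcond=None)[0].T
            # 2) extrapolation step: move along the update direction just taken
            factors[n] = new + beta * (new - old)
        err = np.linalg.norm(T - reconstruct(factors))
        if abs(prev_err - err) < tol * norm_T:
            # 3) random perturbation step: nudge all blocks to escape a bottleneck
            for n in range(3):
                factors[n] += perturb * rng.standard_normal(factors[n].shape)
        prev_err = err
    return factors, prev_err

# Example: recover a random rank-3 tensor
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = [rng.standard_normal((d, 3)) for d in (20, 25, 30)]
    T = reconstruct(truth)
    _, err = cp_als_extrapolated(T, rank=3)
    print("relative reconstruction error:", err / np.linalg.norm(T))
```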

8.
A stochastic approximation (SA) algorithm with new adaptive step sizes for solving unconstrained minimization problems in a noisy environment is proposed. The new adaptive step size scheme uses order statistics of a fixed number of previous noisy function values as a criterion for accepting good and rejecting bad steps. The scheme allows the algorithm to take bigger steps and to avoid steps proportional to $1/k$ when it is expected that larger steps will improve the performance. An algorithm with the new adaptive scheme is defined for a general descent direction, and almost sure convergence is established. The performance of the new algorithm is tested on a set of standard test problems and compared with relevant algorithms. Numerical results support the theoretical expectations and verify the efficiency of the algorithm regardless of the chosen search direction and noise level. Numerical results on problems arising in machine learning are also presented; a linear regression problem is considered using a real data set. The results suggest that the proposed algorithm shows promise.
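One plausible reading of such an order-statistics rule is sketched below; the window length, the particular order statistic (the median here) and the growth/shrink factors are our illustrative guesses, not the scheme defined in the paper.

```python
import numpy as np

def adaptive_sa(grad_est, f_noisy, x0, iters=1000, step0=0.1,
                window=10, grow=1.5, shrink=0.5, seed=0):
    """Stochastic approximation whose step size adapts using an order statistic
    of recent noisy function values (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    step = step0
    history = [f_noisy(x, rng)]
    for k in range(1, iters + 1):
        d = -grad_est(x, rng)                       # a general (noisy) descent direction
        x_trial = x + step * d
        f_trial = f_noisy(x_trial, rng)
        if f_trial <= np.median(history[-window:]):
            x = x_trial                             # "good" step: accept and allow bigger steps
            step = min(step * grow, step0)
        else:
            step = max(step * shrink, step0 / k)    # "bad" step: reject, fall back toward a 1/k schedule
        history.append(f_trial)
    return x

# Example: noisy quadratic f(x) = ||x||^2 with additive Gaussian noise
if __name__ == "__main__":
    f_noisy = lambda x, rng: float(x @ x) + 0.1 * rng.standard_normal()
    grad_est = lambda x, rng: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
    print(np.round(adaptive_sa(grad_est, f_noisy, x0=np.ones(5)), 3))
```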

9.
Stabilized SQP revisited
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed in order to achieve superlinear convergence in situations when the Lagrange multipliers associated with a solution are not unique. Within the framework of Fischer (Math Program 94:91–124, 2002), the key to local superlinear convergence of sSQP is the following pair of properties: upper Lipschitzian behavior of solutions of the Karush-Kuhn-Tucker (KKT) system under canonical perturbations, and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47–73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover, for the problem with equality constraints only, we prove superlinear convergence of sSQP under the assumption that the dual starting point is close to a noncritical multiplier. Since noncritical multipliers include all those satisfying SOSC but are not limited to them, we believe this gives the first superlinear convergence result for any Newtonian method for constrained optimization under assumptions that do not include any constraint qualifications and are weaker than SOSC. In the general case when inequality constraints are present, we show that such a relaxation of assumptions is not possible. We also consider applying sSQP to the problem where inequality constraints are reformulated into equalities using slack variables, and discuss the assumptions needed for convergence in this approach. We conclude with consequences for local regularization methods proposed in Izmailov and Solodov (SIAM J Optim 16:210–228, 2004) and Wright (SIAM J Optim 15:673–676, 2005). In particular, we show that these methods are still locally superlinearly convergent under the noncritical multiplier assumption, which is weaker than the SOSC employed originally.

10.
A projected trust-region strategy is combined with a nonmonotone line-search algorithm to solve bound-constrained systems of nonlinear semismooth equations. A trust-region subproblem is built from a nonlinear optimization problem with simple bound constraints; projecting a semismooth Newton-type step onto the feasible region yields a projected Newton trial step and hence a new search direction, and a backtracking step obtained with the nonmonotone line-search technique gives the new step length. Under reasonable conditions, the algorithm is shown not only to be globally convergent but also to retain a superlinear rate of convergence. Introducing the nonmonotone technique helps overcome highly nonlinear, ill-conditioned problems, accelerates the convergence process, and yields the superlinear convergence rate.

11.
The minimax concave penalty (MCP) has been demonstrated theoretically and practically to be effective in nonconvex penalization for variable selection and parameter estimation. In this paper, we develop an efficient alternating direction method of multipliers (ADMM) with continuation algorithm for solving the MCP-penalized least squares problem in high dimensions. Under some mild conditions, we study the convergence properties and the Karush–Kuhn–Tucker (KKT) optimality conditions of the proposed method. A high-dimensional BIC is developed to select the optimal tuning parameters. Simulations and a real data example are presented to illustrate the efficiency and accuracy of the proposed method.
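For reference, the MCP and the penalized least-squares problem being solved are shown below in their standard form; conventions for the tuning parameters $\lambda$ and $\gamma$ vary slightly across papers.

```latex
% Minimax concave penalty with regularization parameter \lambda > 0 and
% concavity parameter \gamma > 1, applied coordinate-wise:
\rho_{\lambda,\gamma}(t) =
\begin{cases}
\lambda\,|t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
\dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad
\min_{\beta\in\mathbb{R}^{p}}\;
\frac{1}{2n}\,\|y - X\beta\|_{2}^{2} + \sum_{j=1}^{p}\rho_{\lambda,\gamma}(\beta_{j}).
```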

12.
Recently, studies of numerical methods for degenerate nonlinear optimization problems have attracted much attention. Several authors have discussed convergence properties without the linear independence constraint qualification and/or the strict complementarity condition. In this paper, we are concerned with the quadratic convergence property of a primal-dual interior point method in which Newton's method is applied to the barrier KKT conditions. We assume that the second-order sufficient condition and the linear independence of the gradients of the equality constraints hold at the solution, that there exists a solution satisfying the strict complementarity condition, and that the multiplier iterates generated by our method for the inequality constraints are uniformly bounded, which relaxes the linear independence constraint qualification. Uniform boundedness of the multiplier iterates is satisfied if, for example, the Mangasarian-Fromovitz constraint qualification is assumed. By using the stability theorems of Hager and Gowda (1999) and Wright (2001), the distance from the current point to the solution set is related to the residual of the KKT conditions. By controlling a barrier parameter and adopting a suitable line search procedure, we prove the quadratic convergence of the proposed algorithm.
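The barrier KKT conditions referred to above take, in one standard formulation, the form below; the paper's own problem statement and notation may differ.

```latex
% Barrier KKT conditions for  min f(x)  s.t.  g(x) = 0,  x >= 0,
% with barrier parameter \mu > 0, X = diag(x), Z = diag(z), e = (1,...,1)^T:
\begin{aligned}
&\nabla f(x) + \nabla g(x)\, y - z = 0,\\
&g(x) = 0,\\
&X Z e = \mu e, \qquad x > 0,\; z > 0.
\end{aligned}
% Newton's method is applied to this system for a fixed \mu, and \mu is then
% driven to zero to recover the unperturbed KKT conditions.
```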

13.
A varying step-size approach, in both the time span and the spatial coordinate system, designed to achieve fast convergence is demonstrated in this study. The method is based on the concept of minimization of residuals by the Bi-CGSTAB algorithm, so that convergence can be enforced by varying the time-step size. The numerical results show that the time-step size determined by the proposed method improves the convergence rate for turbulent computations using advanced turbulence models in low-Reynolds-number form, and the degree of improvement increases with the complexity of the turbulence models. © 2001 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 17: 454–474, 2001.

14.
A new active-set method for smooth box-constrained minimization is introduced. The algorithm combines an unconstrained method, including a new line-search which aims to add many constraints to the working set at a single iteration, with a recently introduced technique (spectral projected gradient) for dropping constraints from the working set. Global convergence is proved. A computer implementation is fully described and a numerical comparison assesses the reliability of the new algorithm.
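The spectral projected gradient ingredient mentioned above is, in its standard form, a projected gradient step with a Barzilai-Borwein step length. The sketch below is our illustration of that single ingredient for a box constraint; the nonmonotone line search normally used for globalization, and the paper's combination with an unconstrained method and a working set, are omitted.

```python
import numpy as np

def spg_box(grad, x0, lo, hi, iters=500, alpha0=1.0, alpha_min=1e-10, alpha_max=1e10):
    """Minimal spectral projected gradient sketch for min f(x) s.t. lo <= x <= hi."""
    proj = lambda z: np.clip(z, lo, hi)            # projection onto the box
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = proj(x - alpha * g)                # projected gradient step
        s = x_new - x
        if np.linalg.norm(s) < 1e-12:              # projected gradient is (numerically) zero
            break
        g_new = grad(x_new)
        y = g_new - g
        sy = float(s @ y)
        # Barzilai-Borwein ("spectral") step length for the next iteration
        alpha = np.clip(float(s @ s) / sy, alpha_min, alpha_max) if sy > 0 else alpha_max
        x, g = x_new, g_new
    return x

# Example: min 0.5*||x||^2 - c^T x on the box [0, 2]^5, whose solution is clip(c, 0, 2)
if __name__ == "__main__":
    c = np.arange(1.0, 6.0)
    grad = lambda x: x - c
    print(spg_box(grad, x0=np.zeros(5), lo=0.0, hi=2.0))   # -> [1. 2. 2. 2. 2.]
```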

15.
A trust-region method for nonlinear programming problems with simple bounds on the variables
陈中文  韩继业 《计算数学》1997,19(3):257-266
1. Introduction. In this paper we consider the following nonlinear programming problem with simple bounds on the variables. Problem (1.1) is not only a simply constrained optimization problem that arises in practical applications; a considerable portion of optimization problems can also restrict their variables to meaningful intervals. Therefore, both in theory and in practical applications, it is necessary to study this kind of problem and to give simple and effective algorithms. Some papers have proposed special methods for it; in particular, a class of trust-region methods has been proposed that relies on a certain auxiliary point and for which global convergence is proved. Concerning the analysis of the convergence rate, besides requiring strict complementary slackness at the KKT point, these methods also require a further condition on the active constraints at the auxiliary point at each iteration.

16.
In this paper, we consider using neural networks to efficiently solve the second-order cone constrained variational inequality (SOCCVI) problem. More specifically, two kinds of neural networks are proposed to deal with the Karush-Kuhn-Tucker (KKT) conditions of the SOCCVI problem. The first neural network uses the Fischer-Burmeister (FB) function to achieve an unconstrained minimization of a merit function for the Karush-Kuhn-Tucker equation. We show that the merit function is a Lyapunov function and that this neural network is asymptotically stable. The second neural network is introduced for solving a projection formulation whose solutions coincide with the KKT triples of the SOCCVI problem. Its Lyapunov stability and global convergence are proved under some conditions. Simulations are provided to show the effectiveness of the proposed neural networks.
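For reference, the scalar Fischer-Burmeister function underlying the first network is shown below. In the second-order cone setting the same formula is interpreted through Jordan-algebra operations, and the network dynamics (typically a gradient flow of the resulting merit function) are not reproduced here.

```latex
% Scalar Fischer-Burmeister NCP function:
\phi_{\mathrm{FB}}(a,b) = \sqrt{a^{2} + b^{2}} - a - b,
\qquad
\phi_{\mathrm{FB}}(a,b) = 0 \;\Longleftrightarrow\; a \ge 0,\ b \ge 0,\ ab = 0.
% Summing \tfrac{1}{2}\,\phi_{\mathrm{FB}}^{2} over the complementarity pairs of the
% KKT system gives a nonnegative merit function whose zeros are exactly the KKT points.
```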

17.
We improve the twin support vector machine (TWSVM) into a novel nonparallel-hyperplane classifier, termed ITSVM (improved twin support vector machine), for binary classification. By introducing different Lagrangian functions for the primal problems in the TWSVM, we obtain an improved dual formulation of TWSVM; the resulting ITSVM algorithm overcomes the common drawbacks of the TWSVMs and inherits the essence of the standard SVMs. Firstly, ITSVM does not need to compute the large inverse matrices before training, which is inevitable for the TWSVMs. Secondly, unlike the TWSVMs, the kernel trick can be applied directly to ITSVM in the nonlinear case, so nonlinear ITSVM is theoretically superior to nonlinear TWSVM. Thirdly, ITSVM can be solved efficiently by the successive overrelaxation (SOR) technique or the sequential minimal optimization (SMO) method, which makes it more suitable for large-scale problems. We also prove that the standard SVM is a special case of ITSVM. Experimental results show the efficiency of our method in both computation time and classification accuracy.

18.
In this paper, we consider a method of centers for solving multi-objective programming problems, where the objective functions involved are concave functions and the set of feasible points is convex. The algorithm is defined so that the sub-problems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the KKT conditions. Finally, we establish convergence of the proposed method of centers algorithm for solving multiobjective programming problems.

19.
This paper presents a new trust-region algorithm for n-dimensional nonlinear optimization subject to m nonlinear inequality constraints. Equivalent KKT conditions are derived, which form the basis for constructing the new algorithm. Global convergence of the algorithm to a first-order KKT point is established under mild conditions on the trial steps, and a local quadratic convergence theorem is proved for a nondegenerate minimizer. Numerical experiments are presented to show the effectiveness of our approach.

20.
徐海文  孙黎明 《计算数学》2017,39(2):200-212
The hybrid descent algorithm for convex optimization problems uses known information from the approximation conditions and a random-number expansion of the prediction-correction step to obtain a family of descent directions, while the forward accelerated contraction algorithm uses the technique of the Gauss-Seidel iteration, combined with the ideas of the proximal point algorithm and the approximate proximal point algorithm, to construct highly expansive descent directions. Drawing on the ideas of the hybrid descent algorithm and the forward accelerated contraction algorithm, this paper uses the available approximation-rule information to improve the descent direction of the hybrid descent algorithm, obtaining an accelerated hybrid descent algorithm for a class of convex optimization problems. Convergence in probability of the algorithm is then proved using Markov's inequality, properties of convex functions, and basic properties of projections. A series of numerical experiments demonstrates the effectiveness and efficiency of the accelerated hybrid descent algorithm.
