1.
For minimax problems with inequality constraints, a new norm-relaxed strongly sub-feasible SQCQP algorithm is proposed, drawing on the idea of norm-relaxed strongly sub-feasible SQP algorithms for general constrained optimization. First, by choosing a suitable penalty function in the QCQP subproblem, both feasibility of the iterates and descent of the objective function F(x) are guaranteed; at the same time, the selection of the quadratic-constraint parameter α_k in the QCQP subproblem is simplified while still ensuring feasibility and convergence. Second, the step-size rule is simple and reasonable. Finally, under suitable assumptions the algorithm is proved to be globally and strongly convergent. Preliminary numerical results show that the algorithm is feasible and effective.
2.
Advances in Mathematics (China), 2016, (2)
Using the improvement function, a nonsmooth convex constrained optimization problem is transformed into an unconstrained one, and an infeasible proximal-type quasi-Newton bundle algorithm is constructed. Notably, the objective function of the unconstrained subproblem may change as the iterations proceed (it stays unchanged after a null step and is updated after a descent step), so necessary adjustments must be made to guarantee convergence. Following the infeasible bundle method of Sagastizábal and Solodov, and without requiring the iterates to be primal feasible, it is shown that every cluster point of the sequence generated by the algorithm is an optimal solution of the original problem. Furthermore, for the BFGS quasi-Newton algorithm in the case of a strongly convex objective, conditions guaranteeing boundedness of the quasi-Newton matrices in the global convergence result are obtained, together with R-linear convergence of the iterate sequence.
3.
Nonlinear integer programming problems form a difficult class of optimization problems, and filled function algorithms are an effective approach to solving them. A new one-parameter filled function is constructed, and its filling properties are analyzed and proved. Based on this filled function combined with the discrete steepest descent method, a new filled function algorithm is proposed. Finally, numerical experiments on six test functions show that the algorithm achieves good computational results and is feasible and effective.
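The abstract does not reproduce the paper's one-parameter filled function, so the sketch below uses a generic illustrative auxiliary function (the distance/penalty trade-off and the parameter `mu` are assumptions, not the paper's construction) to show how a filled-function outer loop combines with discrete steepest descent:

```python
import numpy as np

def neighbors(x, lo, hi):
    """Unit-coordinate neighbors of the integer point x inside the box [lo, hi]^n."""
    for i in range(len(x)):
        for s in (-1, 1):
            y = x.copy()
            y[i] += s
            if lo <= y[i] <= hi:
                yield y

def discrete_steepest_descent(f, x, lo, hi):
    """Move to the best unit-coordinate neighbor until none improves f."""
    while True:
        best = min(neighbors(x, lo, hi), key=f)
        if f(best) >= f(x):
            return x
        x = best

def filled_function_method(f, x0, lo, hi, mu=0.01, restarts=5):
    """Outer loop: try to escape the current discrete local minimizer x_star
    by minimizing an auxiliary 'filled' function, then re-run descent on f.
    The auxiliary function below is a generic illustrative stand-in, NOT the
    one-parameter filled function constructed in the paper."""
    x_star = discrete_steepest_descent(f, np.array(x0), lo, hi)
    for _ in range(restarts):
        fstar = f(x_star)

        def filled(x):
            # reward moving away from x_star; mildly penalize ascent above f(x_star)
            return mu * max(f(x) - fstar, 0.0) - np.linalg.norm(x - x_star)

        y = discrete_steepest_descent(filled, x_star.copy(), lo, hi)
        z = discrete_steepest_descent(f, y, lo, hi)
        if f(z) < fstar:
            x_star = z    # reached a better basin; continue from there
        mu *= 0.5         # shrink the penalty weight between restarts
    return x_star
```

A real filled-function method adjusts the parameter according to the proved filling properties; the restart loop above only illustrates the alternation between descent on f and descent on the auxiliary function.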
4.
5.
Acta Mathematica Scientia, Series A, 2020, (3)
This paper considers minimax optimization problems with nonlinear inequality and equality constraints. Drawing on the idea of semi-penalty functions, a new generalized projection algorithm is proposed with the following features: the search direction, produced by an explicit generalized gradient projection formula, is feasible and descending; a new optimality identification and control function is constructed; and under suitable assumptions the algorithm is globally and strongly convergent. Finally, preliminary numerical experiments verify the effectiveness of the algorithm.
6.
7.
8.
9.
With the aid of a new basis-finding pivoting operation, this paper establishes a new generalized reduced gradient method for optimization problems with nonlinear inequality constraints. The algorithm introduces no slack variables, which would enlarge the problem, and requires no a priori estimates of bounds on the constraint functions or variables. Another important feature is that the search direction is no longer determined via implicit function theory but is given by a simple explicit formula; the method therefore has low computational cost, a simple structure, and is easy to apply. For a non-KKT point x, the constructed direction is feasible and descending. Global convergence of the algorithm is proved.
10.
A class of smoothing penalty algorithms for constrained optimization problems is presented, based on smooth functions L_p that approximate the exact penalty functions l_p (p in (0,1]). Under very weak conditions, a perturbation theorem for the algorithm is established, from which global convergence is derived. In particular, under the generalized Mangasarian-Fromovitz constraint qualification, it is proved that for p = 1 all iterates are feasible for the original problem after finitely many iterations, while for p in (0,1) all iterates are interior points of the feasible set of the original problem after finitely many iterations.
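As a concrete illustration (not necessarily the L_p family used in the paper), the term max{g_i(x), 0} in the exact l_1 penalty is commonly smoothed by a quadratic spline with parameter ε > 0:

```latex
% exact l_1 penalty of the constraints g_i(x) <= 0:
l_1(x,\rho) = f(x) + \rho \sum_i \max\{g_i(x),\,0\}
% one standard smooth approximation of t_+ = \max\{t,0\}:
\varphi_\varepsilon(t) =
\begin{cases}
0, & t \le 0,\\[2pt]
t^2/(2\varepsilon), & 0 < t \le \varepsilon,\\[2pt]
t - \varepsilon/2, & t > \varepsilon,
\end{cases}
\qquad 0 \;\le\; t_+ - \varphi_\varepsilon(t) \;\le\; \varepsilon/2 .
```

With m constraints, L_1(x, ρ, ε) = f(x) + ρ Σ_i φ_ε(g_i(x)) then approximates l_1(x, ρ) uniformly to within ρmε/2, which is the kind of error bound a perturbation theorem for such algorithms rests on.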
11.
P. Tseng, Journal of Optimization Theory and Applications, 1991, 71(3): 425-463
Consider the problem of minimizing a convex essentially smooth function over a polyhedral set. For the special case where the cost function is strictly convex, we propose a feasible descent method for this problem that chooses the descent directions from a finite set of vectors. When the polyhedral set is the nonnegative orthant or the entire space, this method reduces to a coordinate descent method which, when applied to a certain dual of linearly constrained convex programs with strictly convex essentially smooth costs, contains as special cases a number of well-known dual methods for quadratic and entropy (either -log x or x log x) optimization. Moreover, convergence of these dual methods can be inferred from a general convergence result for the feasible descent method. When the cost function is not strictly convex, we propose an extension of the feasible descent method which makes descent along the elementary vectors of a certain subspace associated with the polyhedral set. The elementary vectors are not stored, but generated using the dual rectification algorithm of Rockafellar. By introducing an ε-complementary slackness mechanism, we show that this extended method terminates finitely with a solution whose cost is within an order of ε of the optimal cost. Because it uses the dual rectification algorithm, this method can exploit the combinatorial structure of the polyhedral set and is well suited for problems with a special (e.g., network) structure. This work was partially supported by the US Army Research Office Contract No. DAAL03-86-K-0171 and by the National Science Foundation Grant No. ECS-85-19058.
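When the polyhedral set is the nonnegative orthant and the cost is a strictly convex quadratic, the feasible descent method reduces to the following coordinate descent scheme (a minimal sketch; the quadratic test data are illustrative, not from the paper):

```python
import numpy as np

def coordinate_descent_nonneg(Q, c, iters=200):
    """Cyclic coordinate descent for min 0.5 x^T Q x + c^T x over x >= 0,
    with Q symmetric positive definite (a strictly convex cost).  Each inner
    step minimizes the cost exactly along one coordinate and clips the result
    to the orthant, so every iterate stays feasible and the cost descends."""
    x = np.zeros(len(c))
    for _ in range(iters):
        for i in range(len(c)):
            grad_i = Q[i] @ x + c[i]                  # partial derivative
            x[i] = max(0.0, x[i] - grad_i / Q[i, i])  # exact 1-D minimizer, projected
    return x

# Example: Q = [[2,1],[1,2]], c = [-3,0]; the KKT point is x* = (1.5, 0),
# where the gradient is (0, 1.5) and the second component is held at its bound.
```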
12.
Dong-Hui Li, Lei Wu, Zhe Sun, Xiong-ji Zhang, Computational Optimization and Applications, 2014, 59(1-2): 263-284
In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem is to minimize a smooth function subject to some quadratic constraints and nonnegativity constraints. A good property of the constrained problem is that at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions; consequently, a KKT point always exists. Moreover, we show that the KKT points are exactly the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It extends the steepest descent method for smooth unconstrained optimization. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply it to some practical problems arising from compressed sensing; the results demonstrate its efficiency.
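The abstract does not spell the reformulation out. One natural construction consistent with its description (a smooth objective under quadratic and nonnegativity constraints) introduces auxiliary variables t_i; it is offered here only as a plausible sketch, not as the paper's exact formulation:

```latex
% L_{1/2} regularization: \min_x \|Ax-b\|_2^2 + \lambda \sum_i |x_i|^{1/2}.
% Introduce t_i \ge 0 with |x_i| \le t_i^2 (two quadratic constraints per i):
\min_{x,\,t}\;\; \|Ax-b\|_2^2 + \lambda \sum_i t_i
\quad\text{s.t.}\quad x_i \le t_i^2,\;\; -x_i \le t_i^2,\;\; t_i \ge 0 .
% At a minimizer t_i^2 = |x_i|, so \lambda \sum_i t_i = \lambda \sum_i |x_i|^{1/2}.
```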
13.
Discrete global descent method for discrete global optimization and nonlinear integer programming
A novel method, entitled the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to another, better one at each iteration with the help of an auxiliary function, entitled the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by some classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points have demonstrated the applicability and efficiency of the proposed method.
14.
Some Methods Based on the D-Gap Function for Solving Monotone Variational Inequalities
The D-gap function has been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that the D-gap function has certain properties that are useful also for monotone variational inequality problems with bounded feasible set. Accordingly, we develop two unconstrained methods based on them that are similar in spirit to a feasible method of Zhu and Marcotte based on the regularized-gap function. We further discuss a third method based on applying the D-gap function to a regularized problem. Preliminary numerical experience is also reported.
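A minimal numerical sketch of the D-gap function for an affine, strongly monotone VI over the box X = [0,1]^n (the mapping and the parameters a < b below are illustrative choices, not from the paper):

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the feasible set X = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def reg_gap(F, x, c):
    """Fukushima's regularized gap g_c(x) = max_{y in X} {F(x)^T (x-y) - (c/2)||x-y||^2};
    the maximizer is the projection y_c(x) = proj_X(x - F(x)/c)."""
    Fx = F(x)
    y = proj_box(x - Fx / c)
    return Fx @ (x - y) - 0.5 * c * np.dot(x - y, x - y)

def d_gap(F, x, a=0.5, b=2.0):
    """D-gap function g_a - g_b with 0 < a < b: nonnegative on all of R^n and
    zero exactly at solutions of the VI -- hence an unconstrained merit function."""
    return reg_gap(F, x, a) - reg_gap(F, x, b)
```

For F(x) = Mx + q with M positive definite, the VI solution is the fixed point of x = proj_X(x - tF(x)) for small t, and the D-gap vanishes there while staying positive elsewhere.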
15.
J. Herskovits, Journal of Optimization Theory and Applications, 1998, 99(1): 121-146
We propose a feasible direction approach for the minimization by interior-point algorithms of a smooth function under smooth equality and inequality constraints. It consists of the iterative solution, in the primal and dual variables, of the Karush-Kuhn-Tucker first-order optimality conditions. At each iteration, a descent direction is defined by solving a linear system. In a second stage, the linear system is perturbed so as to deflect the descent direction and obtain a feasible descent direction. A line search is then performed to get a new interior point and ensure global convergence. Based on this approach, first-order, Newton, and quasi-Newton algorithms can be obtained. To introduce the method, we consider first the inequality constrained problem and present a globally convergent basic algorithm. Particular first-order and quasi-Newton versions of this algorithm are also stated. Then, equality constraints are included. This method, which is simple to code, does not require the solution of quadratic programs, and it is neither a penalty method nor a barrier method. Several practical applications and numerical results show that our method is robust and efficient.
16.
C.-A. E. Botsaris, Journal of Mathematical Analysis and Applications, 1979, 69(2): 372-397
An algorithm is presented that minimizes a nonlinear function of many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. For small stepsizes our method behaves as the generalized reduced gradient algorithm, whereas for large enough stepsizes the constrained equivalent of Newton's method for unconstrained minimization is obtained.
17.
We consider a mixed variational inequality problem involving a set-valued nonmonotone mapping and a general convex function, where only approximation sequences are known instead of exact values of the cost mapping, the function, and the feasible set. We suggest applying a two-level approach that solves each particular problem inexactly with a descent method and partial penalization, and that evaluates accuracy with the help of a gap function. Convergence is attained without concordance of the penalty, accuracy, and approximation parameters, under coercivity-type conditions.
18.
Steepest descent methods for multicriteria optimization
19.
Igor Konnov, Journal of Optimization Theory and Applications, 2017, 175(2): 478-501
We consider a general class of composite optimization problems where the goal function is the sum of a smooth function and a not necessarily smooth convex separable function associated with some space partition, whereas the feasible set is a Cartesian product concordant with this partition. We suggest an adaptive version of the partial linearization method, which makes selective component-wise steps satisfying a descent condition and utilizes a sequence of control parameters. This technique is designed to reduce the computational expense per iteration while maintaining the basic convergence properties. We also establish its convergence rates and describe some examples of applications. Preliminary computational results illustrate the usefulness of the new method.
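The plain (non-adaptive, non-selective) partial linearization step underlying such methods can be sketched for a composite objective f(x) + λ‖x‖₁, where the ℓ1 choice of the separable part and all parameters are illustrative:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(grad_f, L, lam, x0, iters=100):
    """Plain partial-linearization (proximal gradient) iteration for
    min f(x) + lam * ||x||_1: linearize the smooth part f around x_k
    (L is a Lipschitz constant of grad f) and solve the resulting
    separable subproblem in closed form via soft-thresholding."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = soft(x - grad_f(x) / L, lam / L)
    return x
```

Because the nonsmooth part is separable, each coordinate's subproblem is independent, which is what makes the selective component-wise steps of the adaptive variant possible.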
20.
In this paper, we propose a feasible descent algorithm for solving variational inequality problems (VIPs) via minimizing the regularized gap function of Fukushima. Under the condition that the underlying mapping of the VIP is strongly monotone, the algorithm is globally convergent for any regularization parameter, which is attractive because it bypasses the need to know the modulus of strong monotonicity, knowledge required by other similar algorithms. Some preliminary computational results show the efficiency of the proposed method.
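A minimal sketch of such a feasible descent scheme on Fukushima's regularized gap function, for an affine strongly monotone mapping over X = [0,1]^n (the Armijo rule and the test data are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def proj(x):
    """Projection onto the feasible set X = [0, 1]^n."""
    return np.clip(x, 0.0, 1.0)

def solve_vi_gap_descent(F, x0, c=1.0, beta=0.5, sigma=1e-4, iters=100):
    """Feasible descent for VI(F, X) by minimizing Fukushima's regularized gap
    g_c(x) = max_{y in X} {F(x)^T (x-y) - (c/2)||x-y||^2}.  The direction
    d = y_c(x) - x is feasible, and for strongly monotone F it is a descent
    direction for g_c; the Armijo backtracking rule here is a sketch."""
    x = proj(np.asarray(x0, dtype=float))

    def gap(z):
        Fz = F(z)
        y = proj(z - Fz / c)          # y_c(z): the maximizer of the gap
        return Fz @ (z - y) - 0.5 * c * np.dot(z - y, z - y), y

    for _ in range(iters):
        g, y = gap(x)
        d = y - x
        if np.linalg.norm(d) < 1e-10:  # y_c(x) = x  <=>  x solves the VI
            break
        t = 1.0                        # Armijo backtracking on g_c
        while gap(x + t * d)[0] > g - sigma * t * np.dot(d, d):
            t *= beta
            if t < 1e-12:
                break
        x = x + t * d
    return x
```

Note that the iteration never needs the modulus of strong monotonicity: any fixed c works, which mirrors the property highlighted in the abstract.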