Similar Literature
20 similar records found (search time: 140 ms)
1.
A modified conjugate gradient projection descent algorithm is constructed for nonlinear programming problems constrained to a closed convex set, and its global convergence is analyzed without assuming that the sequence of iterates is bounded. Combining the new algorithm with conjugate gradient parameters yields three classes of modified conjugate gradient projection algorithms. Numerical examples show that the algorithms are effective.
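The entry describes the method only at a high level. As a generic illustration of projection-based descent on a closed convex set (not the paper's modified conjugate gradient scheme), the sketch below runs a plain gradient-projection iteration with Armijo backtracking along the projection arc on a box-constrained quadratic; the test problem and all constants are illustrative.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n, a simple closed convex set."""
    return np.clip(x, lo, hi)

def gradient_projection(Q, b, lo, hi, x0, tol=1e-8, max_iter=1000):
    """Minimize f(x) = 0.5*x'Qx - b'x over the box by stepping along the
    projection arc with Armijo backtracking (the paper's modified conjugate
    gradient direction is replaced by the plain negative gradient here)."""
    f = lambda x: 0.5 * x @ (Q @ x) - b @ x
    grad = lambda x: Q @ x - b
    x = project_box(np.asarray(x0, float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(project_box(x - g, lo, hi) - x) < tol:
            break                                     # projected-gradient stationarity
        t, c1 = 1.0, 1e-4
        x_new = project_box(x - t * g, lo, hi)
        while f(x_new) > f(x) + c1 * g @ (x_new - x) and t > 1e-16:
            t *= 0.5                                  # Armijo along the projection arc
            x_new = project_box(x - t * g, lo, hi)
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    Q = A.T @ A + np.eye(5)                           # positive definite Hessian
    b = rng.standard_normal(5)
    print(gradient_projection(Q, b, -np.ones(5), np.ones(5), np.zeros(5)))
```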

2.
By choosing a special value of the parameter in the memory gradient method for unconstrained programming, a memory gradient Goldstein-Levitin-Polyak projection descent direction of the objective function is obtained, and a memory gradient Goldstein-Levitin-Polyak projection algorithm is thereby constructed for nonlinear programming problems with convex constraints. Its global convergence is analyzed under exact one-dimensional line search and without assuming that the sequence of iterates is bounded, and some fairly deep convergence results are obtained. Memory gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient methods are also given, thereby extending the classical conjugate gradient methods to nonlinear programming problems with convex constraints. Numerical examples show that the new algorithms are more effective than the gradient projection method.
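For orientation, the Goldstein-Levitin-Polyak projection direction referred to here is commonly written d_k = P_C(x_k - s*grad f(x_k)) - x_k. The sketch below is a simplified stand-in rather than the memory gradient method of the entry: it forms this direction on a box constraint and takes a backtracking step along it (the exact one-dimensional search and the FR/PR/HS parameter choices are omitted).

```python
import numpy as np

def glp_direction(x, g, s, project):
    """Goldstein-Levitin-Polyak projection direction d = P_C(x - s*g) - x;
    for feasible x it is a feasible descent direction whenever it is nonzero."""
    return project(x - s * g) - x

def glp_projection_method(f, grad, project, x0, s=1.0, tol=1e-8, max_iter=1000):
    """GLP projection method with backtracking in place of the exact
    one-dimensional search; memory-gradient / FR-PR-HS parameters are omitted."""
    x = project(np.asarray(x0, float))
    for _ in range(max_iter):
        g = grad(x)
        d = glp_direction(x, g, s, project)
        if np.linalg.norm(d) < tol:                  # stationarity measure
            break
        t, c1 = 1.0, 1e-4
        while f(x + t * d) > f(x) + c1 * t * (g @ d) and t > 1e-16:
            t *= 0.5
        x = x + t * d                                # stays feasible for t in (0, 1]
    return x

if __name__ == "__main__":
    Q, b = np.diag([1.0, 10.0]), np.array([3.0, -2.0])
    f = lambda x: 0.5 * x @ (Q @ x) - b @ x
    grad = lambda x: Q @ x - b
    project = lambda y: np.clip(y, -1.0, 1.0)        # box as the convex constraint set
    print(glp_projection_method(f, grad, project, np.zeros(2)))
```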

3.
Efficient algorithms are studied for the minimization problem associated with the generalized Sylvester equation under column-orthogonality constraints. Based on the geometry of the Stiefel manifold and the MPRP conjugate gradient method in Euclidean space, a class of Riemannian MPRP conjugate gradient iterative algorithms is constructed and its global convergence is established. The search direction produced by this iterative scheme always guarantees descent of the objective function. Numerical experiments and comparisons verify that the proposed algorithm is efficient and feasible for the problem model.
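This is not the paper's Riemannian MPRP conjugate gradient method, but a minimal sketch of the same setting: Riemannian steepest descent with a QR retraction for min ||AXB - C||_F^2 under the column-orthogonality constraint X'X = I. The objective, step rule, and data are illustrative assumptions.

```python
import numpy as np

def qr_retraction(Y):
    """Retraction onto the Stiefel manifold {X : X'X = I} via thin QR."""
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.diag(R))                   # fix column signs for continuity

def riemannian_gradient(X, G):
    """Riemannian gradient for the embedded (Euclidean) metric on the Stiefel manifold."""
    sym = (X.T @ G + G.T @ X) / 2.0
    return G - X @ sym

def stiefel_descent(A, B, C, p, tol=1e-8, max_iter=2000):
    """Riemannian gradient descent for min ||A X B - C||_F^2 with X'X = I
    (the paper's MPRP conjugate-gradient direction is replaced here by the
    plain Riemannian steepest-descent direction)."""
    n = A.shape[1]
    X = qr_retraction(np.random.default_rng(0).standard_normal((n, p)))
    f = lambda Y: np.linalg.norm(A @ Y @ B - C) ** 2
    for _ in range(max_iter):
        G = 2.0 * A.T @ (A @ X @ B - C) @ B.T        # Euclidean gradient
        xi = riemannian_gradient(X, G)
        if np.linalg.norm(xi) < tol:
            break
        t = 1.0
        while f(qr_retraction(X - t * xi)) > f(X) - 1e-4 * t * np.sum(xi * xi) and t > 1e-16:
            t *= 0.5                                  # Armijo backtracking with retraction
        X = qr_retraction(X - t * xi)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    m, n, p, q = 6, 5, 3, 4
    A, B = rng.standard_normal((m, n)), rng.standard_normal((p, q))
    C = A @ qr_retraction(rng.standard_normal((n, p))) @ B
    X = stiefel_descent(A, B, C, p)
    print("orthogonality error:", np.linalg.norm(X.T @ X - np.eye(p)))
    print("objective value:", np.linalg.norm(A @ X @ B - C))
```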

4.
Application of parallel techniques to a dual algorithm for constrained convex programming problems
Rosen's (1961) projected gradient method is used to solve the dual of a constrained convex programming problem. Computing the projected gradient direction involves finding the optimal solution of a minimization problem in the primal variables. We compute an approximate solution of this minimization problem with the parallel gradient distribution (PGD) algorithm, prove that the approximate solution can reach any prescribed accuracy, and show that Rosen's method still converges when the accuracy is chosen appropriately.
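As a rough sketch of the dual scheme described (without the parallel gradient distribution itself), the code below applies projected gradient ascent to the dual of a small strongly convex quadratic program and solves the inner primal minimization only approximately by a fixed number of gradient steps; the problem data, step sizes, and iteration counts are made up.

```python
import numpy as np

def dual_projected_gradient(a, C, d, outer_iters=200, inner_iters=20):
    """Projected gradient ascent on the dual of
        min_x 0.5*||x - a||^2  subject to  C x <= d,
    with the inner minimization over x solved only approximately
    (a stand-in for the parallelized inner step)."""
    m, _ = C.shape
    lam = np.zeros(m)
    step = 1.0 / (np.linalg.norm(C, 2) ** 2 + 1e-12)   # safe dual step size
    x = a.copy()
    for _ in range(outer_iters):
        # approximate inner minimization of the Lagrangian in the primal variable
        for _ in range(inner_iters):
            x -= 0.5 * ((x - a) + C.T @ lam)
        # projected (onto lam >= 0) gradient ascent step on the dual function
        lam = np.maximum(0.0, lam + step * (C @ x - d))
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.standard_normal(4)
    C = rng.standard_normal((3, 4))
    d = np.zeros(3)
    x, lam = dual_projected_gradient(a, C, d)
    print("max constraint violation:", np.max(C @ x - d))
```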

5.
刘金魁, 孙悦, 赵永祥. 《计算数学》2021, 43(3): 388-400
Based on the structure of the HS conjugate gradient method, this paper establishes, under weak assumptions, an iterative projection algorithm for systems of pseudomonotone equations with convex constraints. The algorithm requires no gradient or Jacobian information of the system, so it is suitable for large-scale problems. It produces a sufficient descent direction at every iteration and does not depend on any line search condition. In particular, the global convergence of the algorithm is established without assuming that the system satisfies a Lipschitz condition...
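The entry is truncated. For orientation, here is a minimal sketch of the classical derivative-free hyperplane-projection framework for constrained monotone equations, with the plain direction d_k = -F(x_k) standing in for the HS-based direction of the paper; the test mapping, constraint set, and parameters are illustrative.

```python
import numpy as np

def df_projection_method(F, project, x0, sigma=1e-4, beta=1.0, rho=0.5,
                         tol=1e-6, max_iter=2000):
    """Derivative-free hyperplane-projection method for finding x in C with
    F(x) = 0, F monotone (here d = -F(x); the paper uses an HS-type direction)."""
    x = project(np.asarray(x0, float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        alpha = beta
        while alpha > 1e-12 and -F(x + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho                              # derivative-free line search
        z = x + alpha * d
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            x = z
            break
        # project back onto C after the hyperplane (Solodov-Svaiter type) step
        x = project(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    M = rng.standard_normal((6, 6))
    A = M.T @ M + np.eye(6)                           # positive definite => F monotone
    x_true = np.abs(rng.standard_normal(6))           # a zero lying in C = {x >= 0}
    b = -A @ x_true
    F = lambda x: A @ x + b
    project = lambda y: np.maximum(y, 0.0)
    x = df_projection_method(F, project, np.zeros(6))
    print("residual:", np.linalg.norm(F(x)))
```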

6.
The split equality mixed equilibrium problem associated with the fixed point problem of asymptotically nonexpansive semigroups is studied. Under the equality constraint, an iterative procedure based on the shrinking projection method is introduced to simultaneously approximate a common solution of the mixed equilibrium problems and of the fixed point problem of asymptotically nonexpansive semigroups in two spaces. Strong convergence of the iterative algorithm is proved under suitable conditions. Finally, the results are applied to split equality mixed variational inequality problems and split equality convex minimization problems.

7.
Based on the spectral gradient method and the structure of the well-known LS conjugate gradient method, this paper establishes a spectral LS-type derivative-free projection algorithm for nonlinear pseudomonotone systems of equations with convex constraints. By constructing a suitable spectral parameter, the algorithm guarantees sufficient descent of the search direction at every iteration, independently of the line search condition. Under suitable assumptions and a classical derivative-free line search, the algorithm is globally convergent. Numerical experiments show that it inherits the excellent computational performance of the LS conjugate gradient method while improving stability.

8.
Applying the conjugate gradient method combined with a linear projection operator, an iterative algorithm is given for computing constrained solutions of the linear matrix equation AXB+CXD=F on an arbitrary linear subspace, together with their best approximations. When the matrix equation AXB+CXD=F is solvable, it is shown that the iterative algorithm obtains a constrained solution, the minimum-norm solution, and the best approximation within finitely many iterations. Numerical examples confirm the effectiveness of the algorithm.
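Below is a minimal sketch of a conjugate-gradient-type iteration for AXB + CXD = F, implemented as CGNR on the normal equation via the adjoint map Y -> A'YB' + C'YD'; the linear projection operator that the paper composes to obtain subspace-constrained solutions and best approximations is omitted here.

```python
import numpy as np

def op(X, A, B, C, D):
    """The linear map L(X) = A X B + C X D."""
    return A @ X @ B + C @ X @ D

def op_adjoint(Y, A, B, C, D):
    """Adjoint of L under the Frobenius inner product: L*(Y) = A'YB' + C'YD'."""
    return A.T @ Y @ B.T + C.T @ Y @ D.T

def cg_matrix_equation(A, B, C, D, F, tol=1e-10, max_iter=500):
    """CGNR-type iteration for A X B + C X D = F (conjugate gradients applied
    to the normal equation L*L X = L*F); subspace projection is omitted."""
    X = np.zeros((A.shape[1], B.shape[0]))
    R = F - op(X, A, B, C, D)
    Z = op_adjoint(R, A, B, C, D)
    P = Z.copy()
    zz = np.sum(Z * Z)
    for _ in range(max_iter):
        if np.sqrt(zz) < tol:
            break
        W = op(P, A, B, C, D)
        alpha = zz / np.sum(W * W)
        X += alpha * P
        R -= alpha * W
        Z = op_adjoint(R, A, B, C, D)
        zz_new = np.sum(Z * Z)
        P = Z + (zz_new / zz) * P
        zz = zz_new
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 4
    A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
    X_true = rng.standard_normal((n, n))
    F = op(X_true, A, B, C, D)
    X = cg_matrix_equation(A, B, C, D, F)
    print("residual:", np.linalg.norm(op(X, A, B, C, D) - F))
```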

9.
Many effective gradient projection algorithms exist, but they still suffer from three problems: 1) to guarantee convergence, a δ-active constraint set must be selected at every iteration, which is computationally expensive; 2) the active constraint set must be tracked during the iteration; 3) only nonlinear inequality constraints can be handled. This paper considers optimization problems with nonlinear equality and inequality constraints, presents a generalized gradient projection method, proves its convergence, and fully resolves the three problems above. The algorithm has a simple structure, and its techniques are of general interest.

10.
Numerical iterative methods are studied for the linear matrix equation AXB=C constrained to a closed convex set R. The closed convex sets R considered are (1) sets of bounded matrices, (2) sets of Q-positive-definite matrices, and (3) solution sets of matrix inequalities. A relaxed alternating projection algorithm is constructed for this problem, and operator theory is used to prove that the sequence it generates converges weakly. Numerical examples are given for computing symmetric nonnegative and symmetric positive semidefinite solutions of AXB=C; extensive numerical experiments verify the feasibility and efficiency of the algorithm and show its clear advantage in iteration efficiency over the alternating projection algorithm and the spectral projected gradient algorithm.
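The sketch below alternates, with relaxation, between the affine set {X : AXB = C} and one illustrative closed convex set (the symmetric entrywise-nonnegative matrices). It assumes A has full row rank and B full column rank so that the affine projection can be written with pseudoinverses, and it is not the paper's exact relaxed scheme.

```python
import numpy as np

def proj_affine(X, A, B, C, A_pinv, B_pinv):
    """Orthogonal projection onto {X : A X B = C} (A full row rank, B full column rank)."""
    return X - A_pinv @ (A @ X @ B - C) @ B_pinv

def proj_sym_nonneg(X):
    """Projection onto symmetric entrywise-nonnegative matrices (illustrative choice of R)."""
    return np.maximum((X + X.T) / 2.0, 0.0)

def relaxed_alternating_projection(A, B, C, n, lam=0.9, tol=1e-10, max_iter=5000):
    """Relaxed alternating projections between the affine solution set of AXB = C
    and the convex set R; plain alternating projection is recovered with lam = 1."""
    A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
    X = np.zeros((n, n))
    for _ in range(max_iter):
        Y = proj_sym_nonneg(proj_affine(X, A, B, C, A_pinv, B_pinv))
        X_next = (1.0 - lam) * X + lam * Y           # relaxation step
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    m, n, q = 3, 5, 4
    A = rng.standard_normal((m, n))                  # full row rank (generically)
    B = rng.standard_normal((n, q))                  # full column rank (generically)
    M = rng.standard_normal((n, n))
    X_true = np.maximum((M + M.T) / 2.0, 0.0)        # a point in R, so the sets intersect
    C = A @ X_true @ B
    X = relaxed_alternating_projection(A, B, C, n)
    print("||AXB - C|| =", np.linalg.norm(A @ X @ B - C))
    print("symmetric:", np.allclose(X, X.T), "nonnegative:", bool((X >= -1e-10).all()))
```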

11.
Optimization, 2012, 61(9): 1367-1385
The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. Based on Marino and Xu's method [G. Marino and H.-K. Xu, A general method for nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 318 (2006), pp. 43–52], we combine the GPA with the averaged mapping approach to propose, for the first time in this article, implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem. Under suitable conditions, strong convergence theorems are obtained.

12.
It is well known that the gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. In this article, we first provide an alternative averaged mapping approach to the GPA. This approach is operator-oriented in nature. Since, in general, in infinite-dimensional Hilbert spaces, GPA has only weak convergence, we provide two modifications of GPA so that strong convergence is guaranteed. Regularization is also applied to find the minimum-norm solution of the minimization problem under investigation.
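For reference, the gradient-projection iteration discussed here is x_{k+1} = P_C(x_k - gamma*grad f(x_k)). The sketch below contrasts it with a Halpern-type anchored variant of the kind used to force strong convergence, on a finite-dimensional toy problem where both trivially converge; the anchor point, step size, and coefficients are illustrative.

```python
import numpy as np

def gpa(grad, project, x0, gamma, iters=500):
    """Basic gradient-projection algorithm: x_{k+1} = P_C(x_k - gamma*grad(x_k))."""
    x = project(np.asarray(x0, float))
    for _ in range(iters):
        x = project(x - gamma * grad(x))
    return x

def gpa_halpern(grad, project, x0, gamma, anchor, iters=500):
    """Halpern-type (anchored) modification used to enforce strong convergence:
    x_{k+1} = a_k*u + (1 - a_k)*P_C(x_k - gamma*grad(x_k)), with a_k -> 0."""
    x = project(np.asarray(x0, float))
    for k in range(iters):
        a_k = 1.0 / (k + 2)                          # diminishing anchoring coefficients
        x = a_k * anchor + (1.0 - a_k) * project(x - gamma * grad(x))
    return x

if __name__ == "__main__":
    Q, b = np.diag([1.0, 4.0]), np.array([1.0, 1.0])
    grad = lambda x: Q @ x - b                       # f(x) = 0.5 x'Qx - b'x
    project = lambda y: np.clip(y, 0.0, 0.5)         # C = [0, 0.5]^2
    gamma = 1.0 / np.linalg.norm(Q, 2)               # step in (0, 2/L)
    print("GPA:    ", gpa(grad, project, np.zeros(2), gamma))
    print("Halpern:", gpa_halpern(grad, project, np.zeros(2), gamma, anchor=np.zeros(2)))
```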

13.
This paper presents an algorithmic solution, the adaptive projected subgradient method, to the problem of asymptotically minimizing a certain sequence of non-negative continuous convex functions over the fixed point set of a strongly attracting nonexpansive mapping in a real Hilbert space. The method generalizes Polyak's subgradient algorithm for the convexly constrained minimization of a fixed nonsmooth function. By generating a strongly convergent and asymptotically optimal point sequence, the proposed method not only offers unifying principles for many projection-based adaptive filtering algorithms but also enhances the adaptive filtering methods with the set theoretic estimation's armory by allowing a variety of a priori information on the estimandum in the form, for example, of multiple intersecting closed convex sets.
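Here is a minimal sketch of the adaptive projected subgradient idea in its familiar adaptive-filtering form: at each time step a nonnegative convex cost (here, the distance beyond an ε-tube around the newest measurement) is reduced by one subgradient-projection step, followed by projection onto a fixed closed convex constraint set. The data model, ε, and step size are made up.

```python
import numpy as np

def project_ball(h, radius):
    """Projection onto the closed ball K = {h : ||h|| <= radius}."""
    nrm = np.linalg.norm(h)
    return h if nrm <= radius else h * (radius / nrm)

def apsm_filter(U, d, radius=10.0, eps=0.01, lam=1.0):
    """Adaptive projected subgradient sketch: one subgradient-projection step per
    sample on Theta_k(h) = max(0, |u_k'h - d_k| - eps), then projection onto K."""
    h = np.zeros(U.shape[1])
    for u_k, d_k in zip(U, d):
        err = u_k @ h - d_k
        theta = max(0.0, abs(err) - eps)             # nonnegative convex cost at time k
        if theta > 0.0:
            sub = np.sign(err) * u_k                 # a subgradient of Theta_k at h
            h = h - lam * theta / (sub @ sub) * sub  # subgradient-projection step
        h = project_ball(h, radius)                  # a priori constraint set K
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, T = 8, 2000
    h_true = rng.standard_normal(n)
    U = rng.standard_normal((T, n))
    d = U @ h_true + 0.005 * rng.standard_normal(T)
    h = apsm_filter(U, d)
    print("estimation error:", np.linalg.norm(h - h_true))
```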

14.
This research presents a new constrained optimization approach for solving systems of nonlinear equations. Particular advantages are realized when all of the equations are convex. For example, a global algorithm for finding the zero of a convex real-valued function of one variable is developed. If the algorithm terminates finitely, then either the algorithm has computed a zero or determined that none exists; if an infinite sequence is generated, either that sequence converges to a zero or again no zero exists. For solving n-dimensional convex equations, the constrained optimization algorithm has the capability of determining that the system of equations has no solution. Global convergence of the algorithm is established under weaker conditions than previously known and, in this case, the algorithm reduces to Newton's method together with a constrained line search at each iteration. It is also shown how this approach has led to a new algorithm for solving the linear complementarity problem.

15.
Convex approximations to sparse PCA via Lagrangian duality
We derive a convex relaxation for cardinality constrained Principal Component Analysis (PCA) by using a simple representation of the L1 unit ball and standard Lagrangian duality. The resulting convex dual bound is an unconstrained minimization of the sum of two nonsmooth convex functions. Applying a partial smoothing technique reduces the objective to the sum of a smooth and nonsmooth convex function for which an efficient first order algorithm can be applied. Numerical experiments demonstrate its potential.

16.
We use the merit function technique to formulate a linearly constrained bilevel convex quadratic problem as a convex program with an additional convex-d.c. constraint. To solve the latter problem we approximate it by convex programs with an additional convex-concave constraint using an adaptive simplicial subdivision. This approximation leads to a branch-and-bound algorithm for finding a global optimal solution to the bilevel convex quadratic problem. We illustrate our approach with an optimization problem over the equilibrium points of an n-person parametric noncooperative game.

17.
Numerical and theoretical questions related to constrained interpolation and smoothing are treated. The prototype problem is that of finding the smoothest convex interpolant to given univariate data. Recent results have shown that this convex programming problem with infinite constraints can be recast as a finite parametric nonlinear system whose solution is closely related to the second derivative of the desired interpolating function. This paper focuses on the analysis of numerical techniques for solving the nonlinear system and on the theoretical issues that arise when certain extensions of the problem are considered. In particular, we show that two standard iteration techniques, the Jacobi and Gauss-Seidel methods, are globally convergent when applied to this problem. In addition we use the problem structure to develop an efficient implementation of Newton's method and observe consistent quadratic convergence. We also develop a theory for the existence, uniqueness, and representation of solutions to the convex interpolation problem with nonzero lower bounds on the second derivative (strict convexity). Finally, a smoothing spline analogue to the convex interpolation problem is studied with reference to the computation of convex approximations to noisy data.

18.
This article discusses a new technique for calculating maximum likelihood estimators (MLEs) of probability measures when it is assumed the measures are constrained to a compact, convex set. Measures in such sets can be represented as mixtures of simple, known extreme measures, and so the problem of maximizing the likelihood in the constrained measures becomes one of maximizing in an unconstrained mixing measure. Such convex constraints arise in many modeling situations, such as empirical likelihood and estimation under stochastic ordering constraints. This article describes the mixture representation technique for these two situations and presents a data analysis of an experiment in cancer genetics, where a partial stochastic ordering is assumed but the data are incomplete.

19.
The Newton Bracketing method [Y. Levin, A. Ben-Israel, The Newton Bracketing method for convex minimization, Comput. Optimiz. Appl. 21 (2002) 213-229] for the minimization of convex functions f : R^n → R is extended to affinely constrained convex minimization problems. The results are illustrated for affinely constrained Fermat-Weber location problems.

20.