Similar Documents
19 similar documents found.
1.
This paper studies a new super-memory gradient algorithm for unconstrained optimization. At each iteration the algorithm makes full use of information from previous iterates to generate a descent direction and uses a Wolfe line search to determine the step size. Global convergence is proved under fairly weak conditions. The new algorithm requires no matrix computation or storage at any iteration, making it suitable for large-scale optimization problems.
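As an illustration, here is a minimal sketch of one memory-gradient step of the general kind described above: the direction mixes the current negative gradient with a few stored directions, and a Wolfe line search (via scipy.optimize.line_search) chooses the step. The mixing weights beta and the descent safeguard are illustrative assumptions, not the paper's rule.

```python
import numpy as np
from scipy.optimize import line_search

def memory_gradient(f, grad, x0, m=3, tol=1e-6, max_iter=500):
    x, dirs = x0.astype(float), []
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        for d_old in dirs[-m:]:               # reuse the last m directions
            beta = 0.1 * np.linalg.norm(g) / (np.linalg.norm(d_old) + 1e-12)  # assumed weight
            d = d + beta * d_old
        if g @ d >= 0:                        # safeguard: keep d a descent direction
            d = -g
        alpha = line_search(f, grad, x, d)[0] # step size satisfying the Wolfe conditions
        if alpha is None:                     # line search failed; take a tiny step
            alpha = 1e-4
        x = x + alpha * d
        dirs.append(d)
    return x
```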

2.
This paper studies a new class of super-memory gradient methods for unconstrained optimization and analyzes their global convergence and linear convergence rate. The algorithm generates new iterates with a multi-step curvilinear search rule, determining the descent direction and the step size simultaneously at each iteration; it requires no matrix computation or storage, making it suitable for large-scale optimization problems. Numerical experiments show that the algorithm is effective.

3.
The low-rank matrix recovery problem, important in fields such as image processing and signal and data analysis, has been widely studied. Within the framework of the alternating direction method, this paper applies a nonmonotone technique and proposes a new algorithm for low-rank matrix recovery. At each iteration, the algorithm first updates the two blocks of variables of the low-rank part simultaneously using one gradient step with variable step size, then updates the variables of the sparse part using the nonmonotone technique. Under certain assumptions, global convergence of the new algorithm is proved. Finally, experiments on random low-rank matrix recovery problems and on video foreground-background separation verify the effectiveness of the new algorithm and show that the nonmonotone technique greatly improves its efficiency.
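For orientation, the following sketch shows the basic alternating update for the low-rank plus sparse split D ≈ L + S that such algorithms refine: singular value thresholding for the low-rank block and soft thresholding for the sparse block. It is a plain monotone version; the paper's variable-step gradient update and nonmonotone acceptance are not reproduced.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: shrink the spectrum of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_sparse(D, lam=None, mu=1.0, iters=100):
    lam = lam or 1.0 / np.sqrt(max(D.shape))   # common default weight
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, 1.0 / mu)               # low-rank block update
        S = soft_threshold(D - L, lam / mu)    # sparse block update
    return L, S
```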

4.
This paper combines a nonmonotone trust region method with a nonmonotone line search technique to propose a new class of unconstrained optimization algorithms. Compared with traditional nonmonotone trust region algorithms, this algorithm obtains the next iterate by a nonmonotone Wolfe line search at every step, and the trust region radius is adjusted by the approximate solution of the subproblem and the line search step size. The resulting algorithm never needs to re-solve the subproblem and keeps the approximate Hessian of the objective positive definite at every iteration. Under suitable conditions, global convergence and Q-quadratic convergence of the algorithm are proved. Numerical experiments show that the algorithm is very effective.
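The nonmonotone acceptance idea can be sketched with a Grippo-Lampariello-Lucidi style test: the trial point is compared against the maximum of the last M accepted function values rather than f(x_k) alone. This sketch shows only the sufficient-decrease half (an Armijo-type test); a full nonmonotone Wolfe search, as used in the paper, would also check a curvature condition. The constants are assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, M=10, c1=1e-4, shrink=0.5):
    """Backtrack until f(x + a*d) <= max(last M f-values) + c1*a*(g.d)."""
    f_ref = max(f_hist[-M:])                  # nonmonotone reference value
    alpha = 1.0
    while f(x + alpha * d) > f_ref + c1 * alpha * (g @ d):
        alpha *= shrink
        if alpha < 1e-12:                     # give up on pathological directions
            break
    return alpha
```

Here f_hist is the list of objective values at the accepted iterates so far.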

5.
Using the Armijo condition and a trust region method, a new merit function is constructed. Combining an interior point algorithm with the filter technique for the first time, a new algorithm for nonlinear complementarity problems, the filter interior point algorithm, is proposed. The main algorithm uses an Armijo-type line search to compute the step size, while the restoration algorithm uses a trust region method for appropriate control to guarantee convergence. Global convergence of the algorithm is also discussed. Finally, numerical experiments show that the method is effective.

6.
汤京永  董丽  郭淑利 《运筹与管理》2009,18(4):79-81,117
This paper proposes a class of nonmonotone curvilinear search methods for unconstrained optimization and proves their convergence under fairly weak conditions. The algorithm has the following features: (1) it uses a curvilinear search, determining the descent direction and step size simultaneously at each iteration; (2) it uses a nonmonotone search technique, which produces larger step sizes and reduces the computational cost; (3) it generates the descent direction from information at the current and previous iterates, requiring no matrix computation or storage, which makes it suitable for large-scale optimization problems.

7.
姜帆  刘雅梅  蔡邢菊 《计算数学》2018,40(4):367-386
The generalized alternating direction method of multipliers (ADMM) is an effective algorithm for convex optimization. When the subproblems are hard to solve in practice, a proximal term can be added to the subproblems; the algorithm converges when the proximal matrix is positive definite, but this also makes the iteration step size small. Recent research shows that the proximal matrix can be indefinite to some extent. Within the framework of generalized ADMM with an indefinite proximal term, this paper proposes an adaptive generalized ADMM that chooses the proximal matrix dynamically and enlarges the step size. Global convergence is proved under some fairly weak assumptions. Preliminary numerical experiments verify the effectiveness of the algorithm.
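For context, here is the plain ADMM skeleton for the lasso that generalized and proximal variants build on; the paper's (possibly indefinite) proximal term and adaptive choice of the proximal matrix are deliberately not reproduced in this sketch.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    x = z = u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))              # x-subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u = u + x - z                               # multiplier update
    return z
```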

8.
黄莎  董云达 《数学杂志》2011,31(5):952-954
This paper studies a projection-contraction algorithm for monotone variational inequality problems. Using the analysis technique of Professor He Bingsheng, a new step size is given, and global convergence of the algorithm under this step size is proved. Preliminary numerical experiments demonstrate the practicality of the new step size.
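A textbook projection-contraction step for the monotone variational inequality (find x* in C with (x − x*)ᵀF(x*) ≥ 0) looks as follows; this follows He Bingsheng's classical scheme with a fixed relaxation factor γ, not the new step size proposed in the paper. C is taken to be the nonnegative orthant for concreteness.

```python
import numpy as np

def projection_contraction(F, x0, beta=1.0, gamma=1.8, tol=1e-8, iters=1000):
    project = lambda v: np.maximum(v, 0.0)    # Euclidean projector onto C
    x = x0.astype(float)
    for _ in range(iters):
        y = project(x - beta * F(x))          # trial (extragradient-like) point
        if np.linalg.norm(x - y) < tol:
            break
        d = (x - y) - beta * (F(x) - F(y))    # contraction direction
        alpha = (x - y) @ d / (d @ d)         # He-type step length
        x = x - gamma * alpha * d
    return x
```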

9.
This paper mainly studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be transformed into an unconstrained optimization problem on the symmetric positive definite matrix group, an intrinsic steepest descent algorithm with adaptive variable step size on this group is proposed. Then, using the Taylor expansion with integral remainder of a smooth function on a Lie group at an arbitrary point, the proposed algorithm is proved to converge linearly on the symmetric positive definite matrix group. Finally, numerical experiments on classification problems illustrate the effectiveness of the algorithm.
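A minimal sketch of intrinsic steepest descent on the SPD manifold with the affine-invariant metric is given below; the objective's Euclidean gradient egrad is a placeholder, and the paper's adaptive step-size rule is replaced by a fixed step t.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def spd_steepest_descent(egrad, X0, t=0.1, iters=100):
    """Fixed-step geodesic descent on SPD matrices (affine-invariant metric)."""
    X = X0.copy()
    for _ in range(iters):
        G = egrad(X)
        G = 0.5 * (G + G.T)                  # symmetrize the Euclidean gradient
        rgrad = X @ G @ X                    # Riemannian gradient under the metric
        Xh = np.real(sqrtm(X))
        Xh_inv = np.linalg.inv(Xh)
        # geodesic step: X <- X^(1/2) expm(-t X^(-1/2) rgrad X^(-1/2)) X^(1/2)
        X = Xh @ expm(-t * Xh_inv @ rgrad @ Xh_inv) @ Xh
        X = 0.5 * (X + X.T)                  # remove numerical asymmetry
    return X
```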

10.
By analyzing the relationships among the judgment matrix, the consistency matrix, the derived matrix, and the metric matrix, a greedy algorithm with predictive accelerated correction for modifying the judgment matrix is proposed. The greedy method does not pursue an optimal solution and does no backtracking; it only seeks a reasonably satisfactory solution. When the consistency of the judgment matrix is poor, based on the observation that the elements with large deviations in the metric matrix affect the consistency of the judgment matrix most, the accelerated correction step size is obtained from the derived matrix and the metric matrix. Only one pair of elements of the judgment matrix is modified at a time. Case studies show that this greedy algorithm for modifying the judgment matrix in AHP is feasible.
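The consistency check driving such correction schemes can be sketched as below: compute Saaty's consistency ratio CR and locate the entry of the judgment matrix that deviates most, on the log scale, from the consistent matrix w_i/w_j implied by the principal eigenvector. Approximating the abstract's derived and metric matrices by this log-deviation matrix is an assumption of the sketch.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}  # Saaty's random index

def worst_entry(A):
    """Return (CR, (i, j)): consistency ratio and the most deviant entry of A."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                                  # priority weights
    CR = ((vals[k].real - n) / (n - 1)) / RI[n]      # consistency ratio
    W = np.outer(w, 1.0 / w)                         # consistent matrix w_i / w_j
    dev = np.abs(np.log(A / W))                      # log-scale deviation (hypothetical
    i, j = np.unravel_index(np.argmax(dev), dev.shape)  # proxy for the metric matrix)
    return CR, (i, j)
```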

11.
A fast descent algorithm is proposed that uses a “stretching” function technique and builds on a hybrid method (GRSA) combining the simulated annealing (SA) algorithm with gradient-based methods for large-scale global optimization. Unlike previous methods, in which the original objective function remains unchanged throughout the optimization, the new method first constructs an auxiliary function at a local minimizer obtained by the gradient-based methods and then executes SA on this auxiliary function rather than on the original objective; this improves the ability of SA to jump from the currently discovered local minimum to a better one, from which the gradient-based methods restart a new local search. The procedure is repeated until a global minimum is detected. In addition, a new scheme for generating the next trial point is designed to match the adopted “stretching” technique. Simulations, especially on large-scale problems, verify that the convergence speed is greatly accelerated; this distinguishes the method from many other reported methods, which mostly handle functions with fewer than 50 variables and do not apply to large-scale optimization problems. Furthermore, the new algorithm serves as a global optimization procedure with a high success probability and high solution precision.
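The "stretching" transformation referred to above can be sketched in the form of Parsopoulos and Vrahatis: after a local minimizer x_bar is found, the objective is deformed so that points above f(x_bar) are lifted, removing higher local minima while leaving the lower-lying region intact. The constants gamma1, gamma2, and mu are the usual tuning parameters; the values shown are illustrative, and this need not match GRSA's exact construction.

```python
import numpy as np

def stretched(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Return the stretched objective H built around local minimizer x_bar."""
    x_bar = np.asarray(x_bar, dtype=float)
    f_bar = f(x_bar)
    def H(x):
        x = np.asarray(x, dtype=float)
        fx = f(x)
        s = np.sign(fx - f_bar) + 1.0        # 2 above f_bar, 0 below
        G = fx + gamma1 * np.linalg.norm(x - x_bar) * s / 2.0   # first stage: lift
        if s == 0.0 or np.isclose(G, f_bar):
            return G                         # region at or below f_bar is untouched
        return G + gamma2 * s / (2.0 * np.tanh(mu * (G - f_bar)))  # second stage
    return H
```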

12.
高晶  王薇 《运筹学学报》2013,17(2):124-130
A generalized gradient filter method admitting an arbitrary initial point is proposed. The method uses no penalty function, which avoids the drawbacks penalty functions bring and reduces the computational cost. Another feature of the method is that the use of the filter technique causes neither premature termination nor cycling. The algorithm places no requirement on the initial point and is globally convergent under fairly reasonable conditions.

13.
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm when the angle condition no longer holds is theoretically justified.
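The interruption rule the paper justifies can be sketched as follows: inside a truncated-Newton step, run CG on the local quadratic model B d = −g and stop as soon as the angle between the current CG iterate and −g violates the angle condition cos θ ≥ η. The threshold η is an assumption of the sketch.

```python
import numpy as np

def truncated_cg(B, g, eta=1e-2, tol=1e-10):
    """CG on B d = -g, interrupted when cos(angle(d, -g)) drops below eta."""
    d = np.zeros_like(g)
    r = -g.copy()                            # residual of B d = -g at d = 0
    p = r.copy()
    for _ in range(g.size):
        Bp = B @ p
        alpha = (r @ r) / (p @ Bp)
        d_next = d + alpha * p
        cos_angle = (-g @ d_next) / (np.linalg.norm(g) * np.linalg.norm(d_next))
        if cos_angle < eta:                  # angle condition violated: interrupt
            return d if d.any() else -g      # fall back to steepest descent if needed
        d = d_next
        r_new = r - alpha * Bp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d
```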

14.
Proximal point algorithms are applicable to a variety of settings in optimization; see Rockafellar, R.T. (1976) and Spingarn, J.E. (1981) for examples. We consider a simple idealized proximal point algorithm using gradient minimization on C2 convex functions. This is compared to the direct use of the same gradient method with an appropriate mollifier. The comparison is made by estimating the cost required to reduce the function to a given precision ε. Our object is to assess the potential efficiency of these algorithms even if we do not know how to realize this potential.

We find that for distant starting values, proximal point algorithms are considerably less laborious than a direct method. However, there is no essential improvement in the complexity, only in the numerical factors. This negative conclusion holds for the entire family of proximal point algorithms based on the gradient methods of this paper.

The algorithms considered may be important for large-scale optimization problems. In applications, the desired precision ε is usually fixed. Assume this is the case, and assume that one is given a family of problems parameterized by the dimension n. Suppose further that for all n, the condition number Q (defined below) is bounded. Then it will be seen below that for all n sufficiently large our algorithms will require a smaller number of steps than a polynomial algorithm with cost n|ln ε|.
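An idealized proximal point outer loop of the kind compared in this paper can be sketched as follows: each outer step approximately minimizes f(x) + ||x − x_k||²/(2c) with a plain gradient method. The step size, iteration counts, and c are illustrative choices.

```python
import numpy as np

def proximal_point(grad_f, x0, c=1.0, outer=50, inner=100, lr=0.1):
    """Outer proximal-point loop; inner loop is plain gradient descent."""
    x = x0.astype(float)
    for _ in range(outer):
        y = x.copy()
        for _ in range(inner):               # minimize f(y) + ||y - x||^2 / (2c)
            y = y - lr * (grad_f(y) + (y - x) / c)
        x = y                                # accept the approximate prox point
    return x
```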

15.
The global optimization method based on a discrete filled function is a new method for solving large-scale max-cut problems. We first define a new discrete filled function based on the structure of the max-cut problem and analyze its properties. Unlike in continuous filled function methods, owing to the characteristics of the max-cut problem, the parameters in the proposed filled function do not need to be adjusted. Combined with a procedure that randomly generates initial points for the minimization of the proposed filled function, the proposed algorithm can greatly reduce the computational time and be applied to large-scale max-cut problems. Numerical results and comparisons with several heuristic methods indicate that the proposed algorithm is efficient and stable in obtaining high-quality solutions of large-scale max-cut problems.
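The local-search building block that a discrete filled-function method alternates with can be sketched as greedy one-flip descent for max-cut, with x in {-1, +1}^n encoding the two sides of the cut; the filled-function phase that escapes these local optima is not shown. A symmetric weight matrix W with zero diagonal is assumed.

```python
import numpy as np

def cut_value(W, x):
    """Weight of the cut induced by x in {-1, +1}^n."""
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))

def one_flip_local_search(W, x):
    """Greedy ascent: flip any single vertex while doing so increases the cut."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            gain = x[i] * (W[i] @ x)         # change in cut value if x[i] is flipped
            if gain > 0:
                x[i] = -x[i]
                improved = True
    return x
```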

16.
In this work, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or slanting filter rule. Each iteration is decomposed into two independent phases: a feasibility phase, which reduces an infeasibility measure without evaluating the objective function, and an optimality phase, which reduces the objective function value. As the derivatives of the objective function are not available, the optimality step is computed by derivative-free trust-region internal iterations. Any technique for constructing the trust-region models can be used, since the gradient of the model is a reasonable approximation of the gradient of the objective function at the current point. Under this and classical assumptions, we prove that full steps are efficient in the sense that, near a feasible nonstationary point, the decrease in the objective function is relatively large, which ensures the global convergence results of the algorithm. Numerical experiments show the effectiveness of the proposed method.
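A minimal flat-filter acceptance test of the general kind used in such methods is sketched below: a trial pair (infeasibility h, objective f) is acceptable if no stored pair dominates it, up to small margins. The margin constant is an assumption, and the slanting-filter variant is not shown.

```python
def acceptable(filter_pairs, h, f, gamma=1e-5):
    """Trial (h, f) is acceptable if no stored pair dominates it (with margins)."""
    return all(h <= (1.0 - gamma) * hi or f <= fi - gamma * hi
               for hi, fi in filter_pairs)

def add_to_filter(filter_pairs, h, f):
    """Insert (h, f), discarding any stored pairs it dominates."""
    kept = [(hi, fi) for hi, fi in filter_pairs if hi < h or fi < f]
    kept.append((h, f))
    return kept
```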

17.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to the solution of large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears well suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100 thousand variables.

18.
By giving a special choice of the parameters in the memory gradient algorithm for unconstrained programming, a memory gradient Goldstein-Levitin-Polyak projection descent direction of the objective function is obtained, and a memory gradient Goldstein-Levitin-Polyak projection algorithm is thereby constructed for nonlinear programming problems with convex constraints. Under an exact one-dimensional step size search, and with the boundedness assumption on the iterate sequence removed, the global convergence of the algorithm is analyzed and some fairly deep convergence results are obtained. Memory gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient algorithms are also given, thereby extending the classical conjugate gradient algorithms to nonlinear programming problems with convex constraints. Numerical examples show that the new algorithm is more effective than the gradient projection algorithm.
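The underlying Goldstein-Levitin-Polyak projection step can be sketched as below for box constraints; the paper's memory-gradient direction and exact line search are simplified here to a plain gradient and a fixed step.

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, t=0.1, iters=200, tol=1e-8):
    """Fixed-step gradient projection onto the box [lo, hi]."""
    project = lambda v: np.clip(v, lo, hi)
    x = project(x0.astype(float))
    for _ in range(iters):
        x_new = project(x - t * grad(x))     # GLP projection step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```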

19.
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.
