19 similar documents found.
1.
The low-rank matrix completion problem has been widely studied as a problem of central importance in information-science fields such as machine learning and image processing. The first-order primal-dual algorithm is one of the classical methods for solving it; in practice, however, the data to be processed are often large-scale. For large-scale matrix completion, this paper applies a variable step-size correction technique within the primal-dual framework and proposes an improved primal-dual algorithm for the matrix completion problem. At each iteration, the algorithm first updates the primal and dual variables with the primal-dual scheme, and then applies the variable step-size correction to further update both blocks of variables. Under certain assumptions, global convergence of the new algorithm is proved. Finally, experiments on random low-rank matrix completion and image inpainting verify the effectiveness of the new algorithm.
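As a rough illustration of the primal-dual structure this abstract builds on (not the paper's corrected algorithm), the classic singular value thresholding iteration for matrix completion alternates a proximal step on the nuclear norm with a dual update on the observed entries; all names and parameter values below are illustrative choices:

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=2.0, delta=1.0, n_iter=500):
    # Y is a dual variable supported on the observed entries; X is the primal iterate.
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = svt(Y, tau)                      # primal (proximal) update
        Y = Y + delta * mask * (M_obs - X)   # dual ascent on the data constraint
    return X
```

The paper's contribution — a second, variable step-size correction of both blocks per iteration — would replace the plain dual ascent line with a corrected update.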
2.
A new super-memory gradient algorithm for unconstrained optimization is studied. At each iteration the algorithm makes full use of information from previous iterates to generate a descent direction, and a Wolfe line search produces the step size; global convergence is proved under fairly weak conditions. Since the algorithm never needs to compute or store a matrix, it is suitable for large-scale optimization problems.
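A minimal one-term ("memory one") sketch of such a scheme: the direction blends the current negative gradient with the previous direction, and a Wolfe line search picks the step. The bracketing search and all parameter values are my own illustrative choices, not the paper's:

```python
import numpy as np

def wolfe_step(f, g, x, d, c1=1e-4, c2=0.9, max_iter=30):
    # Bracketing search for a step satisfying the weak Wolfe conditions.
    lo, hi, alpha = 0.0, np.inf, 1.0
    fx, dg0 = f(x), g(x) @ d
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * dg0:   # Armijo fails: shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif g(x + alpha * d) @ d < c2 * dg0:          # curvature fails: grow
            lo = alpha
            alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
        else:
            break
    return alpha

def memory_gradient(f, g, x0, beta=0.3, tol=1e-8, max_iter=500):
    # d_k = -g_k + beta * d_{k-1}: reuses the previous direction, stores no matrix.
    x = np.asarray(x0, dtype=float)
    d = -g(x)
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx) < tol:
            break
        d = -gx + beta * d
        if gx @ d >= 0:            # safeguard: fall back to steepest descent
            d = -gx
        x = x + wolfe_step(f, g, x, d) * d
    return x
```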
3.
4.
Combining a nonmonotone trust-region method with a nonmonotone line-search technique, this paper proposes a new class of unconstrained optimization algorithms. Compared with traditional nonmonotone trust-region algorithms, the new algorithm obtains the next iterate at every step by a nonmonotone Wolfe line search, and the trust-region radius is adjusted using the approximate solution of the subproblem and the line-search step size. The resulting algorithm never needs to re-solve the subproblem, and at each iteration it preserves the positive definiteness of the approximate Hessian of the objective function. Global convergence and Q-quadratic convergence are proved under suitable conditions, and numerical experiments show that the algorithm is very effective.
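The nonmonotone acceptance idea can be sketched in isolation: compare the trial value against the maximum of the last few function values rather than the current one. This simplified Armijo-type variant (Grippo-style, applied to plain steepest descent) stands in for the nonmonotone Wolfe search of the abstract; memory length and constants are illustrative:

```python
import numpy as np
from collections import deque

def nonmonotone_search(f, fx_hist, x, d, g, c1=1e-4, shrink=0.5, max_back=30):
    # Accept alpha once f(x + alpha d) <= max(recent values) + c1*alpha*g'd.
    f_ref = max(fx_hist)
    alpha, slope = 1.0, g @ d
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + c1 * alpha * slope:
            break
        alpha *= shrink
    return alpha

def nonmonotone_descent(f, grad, x0, memory=5, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    hist = deque([f(x)], maxlen=memory)    # sliding window of function values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = nonmonotone_search(f, hist, x, -g, g)
        x = x - alpha * g
        hist.append(f(x))
    return x
```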
5.
6.
Using an Armijo condition and a trust-region method, a new merit function is constructed. Combining an interior-point algorithm with the filter technique for the first time, a new algorithm for nonlinear complementarity problems is proposed: a filter interior-point algorithm. The main algorithm uses an Armijo-type line search to determine the step size, while the restoration algorithm uses a trust-region method for suitable control to guarantee convergence. Global convergence of the algorithm is also discussed, and numerical experiments show that the method is effective.
7.
8.
9.
This paper studies a projection-contraction algorithm for monotone variational inequality problems. Following Professor He Bingsheng's analytic approach, a new step size is given and global convergence of the algorithm under this step size is proved. Preliminary numerical experiments demonstrate the practicality of the new step size.
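For context, a bare-bones projection-contraction iteration in He's framework for VI(F, Ω), using a fixed prediction parameter beta rather than the paper's new step-size rule (all constants and names here are illustrative):

```python
import numpy as np

def pc_method(F, proj, x0, beta=0.1, gamma=1.8, tol=1e-8, max_iter=1000):
    # Predictor: y = P(x - beta*F(x)); the residual r = x - y vanishes at a solution.
    # Corrector: move along d = r - beta*(F(x) - F(y)) with step
    # alpha = <r, d> / ||d||^2, relaxed by gamma in (0, 2).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj(x - beta * F(x))
        r = x - y
        if np.linalg.norm(r) < tol:
            break
        d = r - beta * (F(x) - F(y))
        alpha = (r @ d) / (d @ d)   # d != 0 provided beta * Lipschitz(F) < 1
        x = proj(x - gamma * alpha * d)
    return x
```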
10.
This paper mainly studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be recast as an unconstrained optimization problem on the symmetric positive definite matrix group, an intrinsic steepest descent algorithm with adaptively varying step size is proposed. Then, using the Taylor expansion with integral remainder of a smooth function on a Lie group at an arbitrary point, the proposed algorithm is shown to converge linearly on the symmetric positive definite matrix group. Finally, numerical experiments on classification problems illustrate the effectiveness of the algorithm.
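One way to see what an "intrinsic" step on the SPD manifold looks like: take the Riemannian gradient under the affine-invariant metric (X · ∇f(X) · X) and retract along the exponential map. This sketch uses a fixed step size rather than the adaptive rule of the abstract, and the test objective tr(X) + tr(X⁻¹) is my own illustrative choice:

```python
import numpy as np

def sym_fun(S, fun):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def spd_steepest_descent(grad, X0, t=0.1, n_iter=200):
    # grad: Euclidean gradient of the objective at X.
    # Update: X <- X^{1/2} expm(-t * X^{-1/2} G X^{-1/2}) X^{1/2},
    # where G = X grad(X) X is the Riemannian gradient (affine-invariant metric).
    X = X0.copy()
    for _ in range(n_iter):
        G = X @ grad(X) @ X
        Xh = sym_fun(X, np.sqrt)                    # X^{1/2}
        Xmh = sym_fun(X, lambda w: 1.0 / np.sqrt(w))  # X^{-1/2}
        X = Xh @ sym_fun(Xmh @ (-t * G) @ Xmh, np.exp) @ Xh
    return X
```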
11.
A fast descent algorithm for large-scale global optimization is proposed, resorting to a "stretching"-function technique and built on GRSA, a hybrid method that combines the simulated annealing (SA) algorithm with gradient-based methods. Unlike earlier methods, in which the original objective function remains unchanged throughout the optimization, the new method first constructs an auxiliary function at a local minimizer obtained by gradient-based methods and then runs SA on this auxiliary function rather than on the original objective, improving SA's ability to jump from the current local minimum to a better one, from which the gradient-based methods restart a new local search. This procedure is repeated until a global minimum is detected. In addition, a new scheme for generating the next trial point is designed to match the adopted "stretching" technique. Simulations, especially on large-scale problems, verify that convergence is greatly accelerated; this distinguishes the method from many reported approaches, which mostly handle functions of fewer than 50 variables and do not apply to large-scale optimization. Furthermore, the new algorithm serves as a global optimization procedure with a high success probability and high solution precision.
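The "stretching" idea can be shown in a simplified form (after the Parsopoulos-Vrahatis transform, which this abstract's auxiliary function resembles): raise the landscape wherever f exceeds the best-known value f(x*), so local minima above x* disappear while lower minima are left untouched. The parameter values and the exact formula below are illustrative, not taken from the paper:

```python
import numpy as np

def stretch(f, x_star, gamma1=1e3, gamma2=1.0, mu=1e-4):
    # Returns an auxiliary function that equals f below f(x_star) and is
    # pushed upward (away from x_star) wherever f(x) > f(x_star).
    f_star = f(x_star)
    def stretched(x):
        fx = f(x)
        if fx <= f_star:
            return fx                                   # lower region: unchanged
        g = fx + gamma1 * np.linalg.norm(x - x_star)    # distance penalty
        return g + gamma2 / np.tanh(mu * (g - f_star))  # steep wall near x_star
    return stretched
```

SA would then explore `stretched` instead of `f`, restarting a gradient-based local search whenever it lands in a region where `stretched` coincides with `f`.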
12.
A generalized gradient filter method that accepts an arbitrary initial point is proposed. The method uses no penalty function, avoiding the drawbacks penalty functions bring and reducing the computational cost. Another feature is that the use of the filter technique does not cause the algorithm to terminate prematurely or fall into cycling. The algorithm places no requirement on the initial point and is globally convergent under fairly reasonable conditions.
13.
Yu-Hong Dai, José Mario Martínez, Jin-Yun Yuan 《Numerical Linear Algebra with Applications》2003,10(4):323-334
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition: the angle between the negative gradient at the current point and the direction must be bounded away from π/2. In this paper, it is shown that the angle between the conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm when the angle condition no longer holds is theoretically justified. Copyright © 2002 John Wiley & Sons, Ltd.
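The theorem is easy to observe numerically: run linear CG on the quadratic-model system A y = −g from y₀ = 0 and record the angle each iterate makes with −g. This demo is illustrative, not code from the paper:

```python
import numpy as np

def cg_angles(A, g, n_steps):
    # Linear CG on A y = -g; returns the angle between each iterate y_k and -g.
    y = np.zeros_like(g)
    r = -g - A @ y          # residual; equals -g at y = 0
    p = r.copy()
    angles = []
    for _ in range(n_steps):
        alpha = (r @ r) / (p @ A @ p)
        y = y + alpha * p
        cos = -(y @ g) / (np.linalg.norm(y) * np.linalg.norm(g))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
        r_new = r - alpha * (A @ p)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return angles
```

The first iterate is a multiple of −g (angle 0), and the angles grow monotonically thereafter while staying below π/2.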
14.
Proximal point algorithms are applicable to a variety of settings in optimization; see Rockafellar (1976) and Spingarn (1981) for examples. We consider a simple idealized proximal point algorithm using gradient minimization on C² convex functions, compared with direct use of the same gradient method on an appropriately mollified function. The comparison is made by estimating the cost required to reduce the function value to a given precision ε. Our object is to assess the potential efficiency of these algorithms, even if we do not know how to realize this potential. We find that for distant starting values, proximal point algorithms are considerably less laborious than the direct method; however, there is no essential improvement in the complexity, only in the numerical factors. This negative conclusion holds for the entire family of proximal point algorithms based on the gradient methods of this paper. The algorithms considered may nevertheless be important for large-scale optimization problems. In applications, the desired precision ε is usually fixed; assume this is the case, and suppose one is given a family of problems parameterized by the dimension n such that the condition number Q (defined below) is bounded for all n. It will then be seen below that for all sufficiently large n our algorithms require fewer steps than a polynomial algorithm with cost n|ln ε|.
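The idealized setup can be sketched as an outer proximal loop whose regularized subproblems are solved by plain gradient descent; λ and both step counts are illustrative choices, not the paper's cost model:

```python
import numpy as np

def gradient_descent(g, x0, step, tol=1e-10, max_iter=10000):
    # Fixed-step gradient descent until the gradient is tiny.
    x = x0
    for _ in range(max_iter):
        d = g(x)
        if np.linalg.norm(d) < tol:
            break
        x = x - step * d
    return x

def proximal_point(grad_f, x0, lam=1.0, n_outer=20, inner_step=0.1):
    # Each outer step minimizes f(y) + ||y - x||^2 / (2*lam),
    # i.e. follows the gradient grad_f(y) + (y - x)/lam to the proximal point.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        g = lambda y, x=x: grad_f(y) + (y - x) / lam
        x = gradient_descent(g, x, inner_step)
    return x
```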
15.
The global optimization method based on a discrete filled function is a new approach to solving large-scale max-cut problems. We first define a new discrete filled function based on the structure of the max-cut problem and analyze its properties. Unlike continuous filled-function methods, owing to the characteristics of the max-cut problem the parameters in the proposed filled function do not need to be adjusted. Combined with a procedure that randomly generates initial points for minimizing the proposed filled function, the algorithm greatly reduces computational time and can be applied to large-scale max-cut problems. Numerical results and comparisons with several heuristic methods indicate that the proposed algorithm is efficient and stable in obtaining high-quality solutions of large-scale max-cut problems.
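For reference, here is only the cheap local phase of such a method: greedy 1-flip local search on a cut with x ∈ {−1, +1}ⁿ and symmetric weight matrix W (zero diagonal). The discrete filled function that escapes local optima is not reproduced here:

```python
import numpy as np

def cut_value(W, x):
    # Cut weight: (1/4) * sum_ij W_ij (1 - x_i x_j).
    return 0.25 * np.sum(W * (1 - np.outer(x, x)))

def local_search_maxcut(W, x):
    # Flip any vertex whose flip increases the cut; the cut gain of flipping
    # vertex i is x_i * (W[i] @ x) for symmetric W with zero diagonal.
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            if x[i] * (W[i] @ x) > 0:
                x[i] = -x[i]
                improved = True
    return x
```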
16.
In this work, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or slanting filter rule. Each iteration is decomposed into two independent phases: a feasibility phase which reduces an infeasibility measure without evaluations of the objective function, and an optimality phase which reduces the objective function value. As the derivatives of the objective function are not available, the optimality step is computed by derivative-free trust-region internal iterations. Any technique to construct the trust-region models can be used since the gradient of the model is a reasonable approximation of the gradient of the objective function at the current point. Assuming this and classical assumptions, we prove that the full steps are efficient in the sense that near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence results of the algorithm. Numerical experiments show the effectiveness of the proposed method.
17.
A convergent decomposition algorithm for support vector machines
S. Lucidi L. Palagi A. Risi M. Sciandrone 《Computational Optimization and Applications》2007,38(2):217-234
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular we are interested in problems where the number of variables is so huge that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a machine-learning technique. For this particular subclass, several convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears well suited to large-scale problems, while the introduction of the proximal point term ensures global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report preliminary numerical results on support vector classification problems with up to 100,000 variables.
18.
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.
19.
By choosing a special value for the parameter in the memory gradient algorithm for unconstrained programming, a memory-gradient Goldstein-Levitin-Polyak projected descent direction for the objective function is obtained, and a memory-gradient Goldstein-Levitin-Polyak projection algorithm is thereby constructed for nonlinear programming problems with convex constraints. Under exact one-dimensional step-size search, and without assuming that the sequence of iterates is bounded, the global convergence of the algorithm is analyzed and some fairly deep convergence results are obtained. Memory-gradient Goldstein-Levitin-Polyak projection algorithms combined with the FR, PR, and HS conjugate gradient algorithms are also given, thereby extending the classical conjugate gradient algorithms to nonlinear programming with convex constraints. Numerical examples show that the new algorithm is more effective than the gradient projection algorithm.
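The underlying Goldstein-Levitin-Polyak projection step is simply a gradient step followed by projection onto the convex feasible set. This fixed-step sketch omits the memory-gradient combination and the exact line search described in the abstract:

```python
import numpy as np

def gradient_projection(grad, proj, x0, t=0.5, n_iter=500):
    # x_{k+1} = P(x_k - t * grad(x_k)), for a projection operator P onto
    # the convex feasible set and a fixed step size t.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj(x - t * grad(x))
    return x
```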