Similar Articles
 20 similar articles found (search time: 46 ms)
1.
Non-negative matrix factorization (NMF) is a popular data representation method widely used in image processing and pattern recognition. However, NMF ignores the geometric structure of the data, while existing simple-graph-based learning methods consider only pairwise relations between images and are highly sensitive to the parameters chosen when computing similarities. Hypergraph learning addresses these problems effectively: a hyperedge connects multiple vertices and thereby captures high-order structural information of the images. Most existing hypergraph learning methods, however, are non-discriminative. To improve recognition performance, this paper proposes a new model that combines a discriminative hypergraph with non-negative matrix factorization, solves the model iteratively with the alternating direction method, and performs face recognition with a nearest-neighbor classifier. Experiments on several standard face image databases show that the proposed method is effective.
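The abstract above builds a discriminative hypergraph regularizer on top of standard non-negative matrix factorization. As background, here is a minimal sketch of plain NMF with the classical Lee-Seung multiplicative updates; the hypergraph term and the paper's alternating-direction solver are not shown, and all dimensions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))            # non-negative data matrix (e.g. vectorized images)
r = 5                               # target rank of the factorization
W = rng.random((20, r)) + 0.1       # positive initialization keeps updates well-defined
H = rng.random((r, 30)) + 0.1
eps = 1e-10                         # guard against division by zero

for _ in range(200):
    # Lee-Seung multiplicative updates for min ||V - W H||_F^2 s.t. W, H >= 0;
    # each update keeps the factors elementwise non-negative
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates are multiplicative, non-negativity is preserved automatically, which is why this scheme needs no projection step.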

2.
For the problem of minimizing the sum of two separable convex functions subject to linear constraints, we propose a generalized alternating proximal gradient algorithm within the framework of the alternating direction method. Under suitable conditions, the algorithm is globally and linearly convergent. Numerical experiments show that it performs well.
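The abstract above works in the standard two-block setting: minimize f(x) + g(z) subject to a linear coupling constraint. A minimal sketch of the classical two-block ADMM on a small lasso instance, min (1/2)||Ax - b||^2 + lam*||z||_1 s.t. x - z = 0, illustrates that setting; this is the generic framework, not the paper's generalized proximal-gradient variant, and the data and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 1.5]   # sparse ground truth
b = A @ x_true
lam, rho = 0.1, 1.0                                     # l1 weight and penalty parameter

def soft(v, t):
    # proximal operator of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)    # u is the scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(10))          # factor once, reuse every iteration

for _ in range(300):
    # x-update: solve the regularized least-squares subproblem
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    # z-update: proximal step on the l1 term
    z = soft(x + u, lam / rho)
    # dual ascent on the constraint x - z = 0
    u += x - z

primal_res = np.linalg.norm(x - z)
```

Caching the Cholesky factor of A^T A + rho*I is the standard trick that makes the x-update cheap across iterations.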

3.
Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems, and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss the usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with state-of-the-art algorithms and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix.

4.
In isogeometric analysis, it is frequently necessary to handle geometric models enclosed by four-sided or non-four-sided boundary patches, such as trimmed surfaces. In this paper, we develop a Gregory solid based method to parameterize such models. First, we extend the Gregory patch representation to a trivariate Gregory solid representation. Second, the trivariate Gregory solid representation is employed to interpolate the boundary patches of a geometric model, thus generating a polyhedral volume parametrization. To improve the regularity of the polyhedral volume parametrization, we formulate the construction of the trivariate Gregory solid as a sparse optimization problem, whose objective function is a linear combination of several terms, including a sparse term that reduces the negative-Jacobian region of the Gregory solid. The alternating direction method of multipliers (ADMM) is then used to solve the sparse optimization problem. Numerous experimental examples illustrate the effectiveness and efficiency of the developed method.

5.
Matrix optimization problems with orthogonality constraints arise widely in materials computation, statistics, and data analysis. Since the feasible set of the orthogonality constraint is the Stiefel manifold, manifold optimization methods have long been the main approach for solving such problems. In recent years, as the scale of the variables demanded by practical applications has grown, the computational disadvantages of traditional manifold optimization methods have become apparent, and new algorithms with simple iterations and fast convergence have been proposed. This survey reviews the latest algorithms for matrix optimization with orthogonality constraints in three categories: retraction-based methods, retraction-free feasible methods, and infeasible methods. By analyzing the main features of these methods together with the requirements of applications, we offer an outlook on algorithm design for this class of problems.

6.
In this paper we propose a new Riemannian conjugate gradient method for optimization on the Stiefel manifold. We introduce two novel vector transports associated with the retraction constructed by the Cayley transform. Both of them satisfy the Ring-Wirth nonexpansive condition, which is fundamental for convergence analysis of Riemannian conjugate gradient methods, and one of them is also isometric. It is known that the Ring-Wirth nonexpansive condition does not hold for traditional vector transports such as the differentiated retractions of the QR and polar decompositions. Practical formulae of the new vector transports for low-rank matrices are obtained. Dai's nonmonotone conjugate gradient method is generalized to the Riemannian case and global convergence of the new algorithm is established under standard assumptions. Numerical results on a variety of low-rank test problems demonstrate the effectiveness of the new method.
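The retraction underlying the abstract above is the Cayley transform on the Stiefel manifold St(n, p) = {X : X^T X = I}. A minimal sketch of that retraction in the style of Wen and Yin follows; the paper's vector transports and low-rank formulae are not shown, and the point, gradient, and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # point on St(n, p): X^T X = I_p
G = rng.standard_normal((n, p))                    # Euclidean gradient of some cost at X

def cayley_retract(X, G, tau):
    """Step of size tau along the Cayley curve Y(tau) = (I + tau/2 A)^{-1}(I - tau/2 A) X."""
    A = G @ X.T - X @ G.T                  # skew-symmetric: A^T = -A
    I = np.eye(X.shape[0])
    # the Cayley transform of a skew-symmetric matrix is orthogonal,
    # so Y^T Y = X^T X = I and the iterate stays on the manifold
    return np.linalg.solve(I + (tau / 2) * A, (I - (tau / 2) * A) @ X)

Y = cayley_retract(X, G, 0.1)
feas = np.linalg.norm(Y.T @ Y - np.eye(p))         # feasibility violation, ~0
```

Feasibility is preserved to machine precision for any step size, which is the main practical appeal of this retraction over projection-based alternatives.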

7.
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which include the well-known Polak–Ribière method and Hestenes–Stiefel method as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when it is applied to solve unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière method and Hestenes–Stiefel method in this nonmonotone conjugate gradient class are competitive vis-à-vis their monotone counterparts.
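The Polak-Ribière formula mentioned in the abstract above is easy to state concretely. Below is a minimal sketch of nonlinear conjugate gradient with the PR+ parameter on a strictly convex quadratic, using an exact line search (possible in closed form for quadratics); the paper's nonmonotone line search is not shown, and the problem data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)            # SPD Hessian of a strictly convex quadratic
b = rng.standard_normal(n)

def grad(x):
    # gradient of f(x) = x^T Q x / 2 - b^T x
    return Q @ x - b

x = np.zeros(n)
g = grad(x)
d = -g                             # first direction: steepest descent
for _ in range(200):
    if np.linalg.norm(g) < 1e-12:
        break                                        # converged; avoid dividing by g @ g
    alpha = -(g @ d) / (d @ Q @ d)                   # exact line search for a quadratic
    x_new = x + alpha * d
    g_new = grad(x_new)
    beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # Polak-Ribiere+ parameter
    d = -g_new + beta * d
    x, g = x_new, g_new

grad_norm = np.linalg.norm(g)
```

On a quadratic with exact line search this reduces to linear CG and terminates in at most n steps; the nonmonotone machinery of the paper matters only for general nonlinear objectives.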

8.
Conjugate gradient methods have been widely used to solve large-scale unconstrained optimization problems. The search directions of conventional methods are defined using only the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods whose directions additionally exploit information about the objective function values. We prove that they converge globally and compare them numerically with conventional methods. The results show that, with a slight modification to the direction, one of our methods performs as well as the best conventional method employing the Hestenes–Stiefel formula.

9.
We develop a generalization of Nesterov's accelerated gradient descent method designed to handle orthogonality constraints. To demonstrate the effectiveness of our method, we perform numerical experiments showing that the number of iterations scales with the square root of the condition number, and we also compare with existing state-of-the-art quasi-Newton methods on the Stiefel manifold. Our experiments show that our method outperforms these quasi-Newton methods on some large, ill-conditioned problems.  相似文献
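The square-root-of-condition-number scaling mentioned in the abstract above is the hallmark of Nesterov acceleration. As background, here is the unconstrained strongly convex variant being generalized, on a quadratic; the orthogonality-constrained version of the paper is not shown, and the problem data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)                 # SPD Hessian; gradient of f is Q x - b
b = rng.standard_normal(n)

evals = np.linalg.eigvalsh(Q)
mu, Lip = evals[0], evals[-1]           # strong convexity and smoothness constants
# momentum coefficient for the strongly convex case
beta = (np.sqrt(Lip) - np.sqrt(mu)) / (np.sqrt(Lip) + np.sqrt(mu))

x = np.zeros(n)
x_prev = x.copy()
for _ in range(800):
    y = x + beta * (x - x_prev)         # extrapolation (momentum) step
    x_prev = x
    x = y - (Q @ y - b) / Lip           # gradient step at the extrapolated point

grad_norm = np.linalg.norm(Q @ x - b)
```

The contraction factor per iteration is roughly 1 - 1/sqrt(kappa) with kappa = Lip/mu, versus 1 - 1/kappa for plain gradient descent, which is exactly the condition-number scaling the abstract refers to.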

10.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithm framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters which is the key to achieving good performance for ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns that, to our knowledge, has not been considered in existing approaches.

11.
In this paper, we incorporate an importance sampling strategy into an accelerated stochastic alternating direction method of multipliers for solving a class of stochastic composite problems with linear equality constraints. Convergence rates for the primal residual and the feasibility violation are established. Moreover, the variance of the stochastic gradient estimate is reduced by the use of importance sampling. The proposed algorithm can handle the situation where the feasible set is unbounded. Experimental results indicate the effectiveness of the proposed method.

12.
The proximal alternating direction method of multipliers is a popular and useful method for linearly constrained, separable convex problems, especially for the linearized case. In the literature, convergence of the proximal alternating direction method has been established under the assumption that the proximal regularization matrix is positive semi-definite. Recently, it was shown that the regularizing proximal term in the proximal alternating direction method of multipliers does not necessarily have to be positive semi-definite, without any additional assumptions. However, it remains unknown as to whether the indefinite setting is valid for the proximal version of the symmetric alternating direction method of multipliers. In this paper, we confirm that the symmetric alternating direction method of multipliers can also be regularized with an indefinite proximal term. We theoretically prove the global convergence of the indefinite method and establish its worst-case convergence rate in an ergodic sense. In addition, the generalized alternating direction method of multipliers proposed by Eckstein and Bertsekas is a special case in our discussion. Finally, we demonstrate the performance improvements achieved when using the indefinite proximal term through experimental results.

13.
In this paper, we obtain global pointwise and ergodic convergence rates for a variable metric proximal alternating direction method of multipliers for solving linearly constrained convex optimization problems. We first propose and study nonasymptotic convergence rates of a variable metric hybrid proximal extragradient framework for solving monotone inclusions. Then, the convergence rates for the former method are obtained essentially by showing that it falls within the latter framework. To the best of our knowledge, this is the first time that global pointwise (resp. pointwise and ergodic) convergence rates are obtained for the variable metric proximal alternating direction method of multipliers (resp. variable metric hybrid proximal extragradient framework).

14.
This paper studies $\ell_1$-regularized optimization problems on the sphere, whose objective is the sum of a general smooth term and a nonsmooth $\ell_1$-regularization term, assuming that a stochastic gradient of the smooth term can be estimated by a stochastic first-order oracle. Such problems arise widely in machine learning, image and signal processing, and statistics. Combining the manifold proximal gradient method with stochastic gradient estimation, we propose a spherical stochastic proximal gradient algorithm. Using a global implicit function theorem for nonsmooth functions, we analyze the Lipschitz continuity of the subproblem solution with respect to its parameters and then prove global convergence of the algorithm. Numerical experiments on spherical $\ell_1$-regularized quadratic programming, finite-sum SPCA, and spherical $\ell_1$-regularized logistic regression problems, using both synthetic and real datasets, show that the proposed algorithm has an advantage in CPU time over the manifold proximal gradient method and the Riemannian stochastic proximal gradient method.
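The abstract above combines an $\ell_1$ proximal step with the geometry of the unit sphere. The heavily simplified, deterministic sketch below only illustrates the two ingredients: projecting the gradient onto the tangent space and pulling a soft-thresholded step back onto the sphere by normalization. The paper's actual algorithm solves a tangent-space subproblem and uses stochastic gradients, neither of which is shown; the cost, step size, and regularization weight are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                   # symmetric matrix; smooth cost f(x) = x^T A x / 2
lam, step = 0.05, 0.1               # l1 weight and step size

x = rng.standard_normal(n)
x /= np.linalg.norm(x)              # start on the unit sphere

def soft(v, t):
    # proximal operator of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(500):
    g = A @ x
    rg = g - (x @ g) * x            # Riemannian gradient: tangent-space projection
    y = soft(x - step * rg, step * lam)   # Euclidean proximal gradient step
    x = y / np.linalg.norm(y)             # retract back onto the unit sphere

sphere_gap = abs(np.linalg.norm(x) - 1.0)
```

The normalization retraction keeps every iterate exactly feasible, which is the basic property any method on the sphere must maintain.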

15.
In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for minimizing composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these methods. More precisely, in our framework, we first show that (i) the sequences generated by Primal–Dual splitting methods identify a pair of primal and dual smooth manifolds in a finite number of iterations, and then (ii) enter a local linear convergence regime, which is characterized based on the structure of the underlying active smooth manifolds. We also show how our results for Primal–Dual splitting can be specialized to cover existing ones on Forward–Backward splitting and Douglas–Rachford splitting/ADMM (alternating direction method of multipliers). Moreover, based on the obtained local convergence results, several practical acceleration techniques are discussed. To exemplify the usefulness of the results, we consider several concrete numerical experiments from fields including signal/image processing, inverse problems and machine learning. The experiments not only verify the local linear convergence behaviour of Primal–Dual splitting methods, but also illustrate how to accelerate them in practice.

16.
In this paper, a modified Hestenes–Stiefel conjugate gradient method for unconstrained problems is developed, which achieves the twin goals of generating a sufficient descent direction at each iteration and staying close to the Newton direction. In our method, the hybridization parameter can also be obtained from other kinds of conjugacy conditions. Under mild conditions, we establish global convergence for general objective functions. Numerical experimentation indicates that the new method efficiently solves the test problems and is therefore promising.

17.
A large number of free boundary problems can be formulated as linear complementarity problems. In this paper, we propose an inexact alternating direction method of multipliers for solving linear complementarity problems arising from free boundary problems, exploiting the special structure of these problems. The convergence of the proposed method is proved. Numerical results show that the method is feasible and effective, and that it is significantly faster than the modified alternating direction implicit algorithm and many other methods, especially when the dimension of the problem is large.

18.
In the present paper, we propose a novel convergence analysis of the alternating direction method of multipliers, based on its equivalence with the overrelaxed primal–dual hybrid gradient algorithm. We consider the smooth case, where the objective function can be decomposed into a differentiable part with Lipschitz continuous gradient and a strongly convex part. Under these hypotheses, a convergence proof with an optimal parameter choice is given for the primal–dual method, which leads to convergence results for the alternating direction method of multipliers. An accelerated variant of the latter, based on a parameter relaxation, is also proposed and shown to converge linearly with the same asymptotic rate as the primal–dual algorithm.

19.
This paper studies the matrix completion problem under arbitrary sampling schemes. We propose a new estimator incorporating both max-norm and nuclear-norm regularization, based on which we can conduct efficient low-rank matrix recovery using a random subset of entries observed with additive noise under general non-uniform and unknown sampling distributions. This method significantly relaxes the uniform sampling assumption imposed for the widely used nuclear-norm penalized approach, and makes low-rank matrix recovery feasible in more practical settings. Theoretically, we prove that the proposed estimator achieves fast rates of convergence under different settings. Computationally, we propose an alternating direction method of multipliers algorithm to efficiently compute the estimator, which bridges a gap between theory and practice of machine learning methods with max-norm regularization. Further, we provide thorough numerical studies to evaluate the proposed method using both simulated and real datasets.

20.
Optimization, 2012, 61(7): 929-941
To take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest a hybridization of these methods using a quadratic relaxation of a hybrid CG parameter proposed by Dai and Yuan. In the proposed method, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, we show that our method is globally convergent for uniformly convex functions. We give a numerical comparison of the implementations of our method and two efficient hybrid CG methods proposed by Dai and Yuan using a set of unconstrained optimization test problems from the CUTEr collection. Numerical results show the efficiency of the proposed hybrid CG method in the sense of the performance profile introduced by Dolan and Moré.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号