Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this work, we propose a proximal algorithm for unconstrained optimization on the cone of symmetric positive semidefinite matrices. It appears to be the first proximal method in the class of methods that convert a symmetric positive definite optimization problem into a nonlinear optimization problem. It replaces the main iteration of the conceptual proximal point algorithm with a sequence of nonlinear programming problems on the cone of diagonal positive definite matrices, which has the structure of the positive orthant of a Euclidean vector space. We are motivated by results on the classical proximal algorithm extended to Riemannian manifolds of nonpositive sectional curvature. An important example of such a manifold is the space of symmetric positive definite matrices, where the metric is given by the Hessian of the standard barrier function −ln det(X). Observing that proximal algorithms do not depend on the geodesics, we apply those ideas to develop a proximal point algorithm for functions that are convex in this Riemannian metric.
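As a hedged illustration of the conceptual proximal point iteration the abstract starts from (the Euclidean special case only, not the authors' matrix algorithm), each step for a convex quadratic has a closed form; the names `proximal_point`, `A`, `b` and the parameter `lam` are ours:

```python
import numpy as np

def proximal_point(A, b, lam=1.0, iters=200):
    """Euclidean proximal point iterations for the convex quadratic
    f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite.
    Each step solves argmin_x f(x) + ||x - x_k||^2 / (2*lam), whose
    optimality condition is the linear system solved below."""
    x = np.zeros(len(b))
    M = A + np.eye(len(b)) / lam      # Hessian of the regularized subproblem
    for _ in range(iters):
        x = np.linalg.solve(M, b + x / lam)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = proximal_point(A, b)         # converges to the minimizer A^{-1} b
```

Each subproblem is strongly convex even when f is merely convex, which is what makes the method well defined.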

2.
Hiroyuki Sato 《Optimization》2017,66(12):2211-2231
The joint approximate diagonalization of non-commuting symmetric matrices is an important process in independent component analysis. This problem can be formulated as an optimization problem on the Stiefel manifold that can be solved using Riemannian optimization techniques. Among the available optimization techniques, this study utilizes the Riemannian Newton’s method for the joint diagonalization problem on the Stiefel manifold, which has quadratic convergence. In particular, the resultant Newton’s equation can be effectively solved by means of the Kronecker product and the vec and veck operators, which reduce the dimension of the equation to that of the Stiefel manifold. Numerical experiments are performed to show that the proposed method improves the accuracy of the approximate solution to this problem. The proposed method is also applied to independent component analysis for the image separation problem. The proposed Newton method further leads to a novel and fast Riemannian trust-region Newton method for the joint diagonalization problem.
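The objective being minimized can be sketched in plain NumPy as the sum of squared off-diagonal entries of the congruence-transformed matrices; `offdiag_cost` and `mats` are illustrative names, and we use exactly commuting matrices (shared eigenvectors) so the optimal cost is zero, whereas the paper targets the approximate, non-commuting case:

```python
import numpy as np

def offdiag_cost(U, mats):
    """Joint-diagonalization objective on the Stiefel manifold:
    sum over i of the squared off-diagonal entries of U^T A_i U."""
    total = 0.0
    for A in mats:
        B = U.T @ A @ U
        total += np.sum(B**2) - np.sum(np.diag(B)**2)
    return total

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))     # shared eigenvectors
mats = [Q @ np.diag(rng.standard_normal(4)) @ Q.T for _ in range(3)]
# Q jointly diagonalizes every A_i, so the cost there is (numerically) zero
```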

3.
We generalize the Euclidean 1-center approximation algorithm of Bădoiu and Clarkson (2003) [6] to arbitrary Riemannian geometries, and study the corresponding convergence rate. We then show how to instantiate this generic algorithm in two particular settings: (1) the hyperbolic geometry, and (2) the Riemannian manifold of symmetric positive definite matrices.
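In the Euclidean special case the Bădoiu–Clarkson iteration is only a few lines: step from the current center toward the farthest point with a shrinking step size. The Riemannian generalization replaces the straight-line step by a geodesic step. A minimal sketch with illustrative names:

```python
import numpy as np

def bc_center(points, iters=1000):
    """Euclidean Badoiu-Clarkson 1-center approximation: move the
    current center a fraction 1/(t+1) of the way toward the point
    farthest from it."""
    c = points[0].astype(float).copy()
    for t in range(1, iters + 1):
        p = max(points, key=lambda q: np.linalg.norm(q - c))
        c += (p - c) / (t + 1)
    return c

pts = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]
c = bc_center(pts)    # approaches the minimax center (0, 0)
```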

4.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization, both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it onto the space of symmetric positive definite matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. To save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems: a simple stochastic function minimization problem and a problem of optimal routing in a queueing network. The experiments show that the QN-SF algorithm performs significantly better than both G-SF and JN-SF on both problem settings. Next we extend the QN-SF algorithm to constrained optimization; here too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally, we present proofs of convergence for the QN-SF algorithm in both the unconstrained and constrained settings.
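For reference, the BFGS rule that such quasi-Newton schemes build on maintains an inverse-Hessian approximation H directly, so step (b) above never needs an explicit matrix inversion. A minimal deterministic sketch on a convex quadratic (the exact line search and all names are ours; the paper's setting is stochastic):

```python
import numpy as np

def bfgs_quadratic(A, b, iters=10):
    """BFGS with exact line search on f(x) = 0.5 x^T A x - b^T x.
    H approximates the inverse Hessian, so search directions come
    from a matrix-vector product, never from inverting a Hessian."""
    n = len(b)
    x, H = np.zeros(n), np.eye(n)
    g = A @ x - b
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g
        alpha = -(g @ d) / (d @ A @ d)       # exact step for a quadratic
        s = alpha * d
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        g = g_new
    return x

A = np.array([[3.0, 0.5, 0.0], [0.5, 2.0, 0.3], [0.0, 0.3, 1.5]])
b = np.array([1.0, -1.0, 0.5])
x = bfgs_quadratic(A, b)
```

With exact line search on a quadratic, BFGS terminates in at most n iterations.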

5.
We focus on efficient preconditioning techniques for sequences of Karush‐Kuhn‐Tucker (KKT) linear systems arising from the interior point (IP) solution of large convex quadratic programming problems. Constraint preconditioners (CPs), although very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time‐consuming task at each IP iteration. We overcome this problem by computing the CP from scratch only at selected IP iterations and by updating the last computed CP at the remaining iterations, via suitable low‐rank modifications based on a BFGS‐like formula. This work extends the limited‐memory preconditioners (LMPs) for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in 2011, by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared with the generally used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost for the factorization of the CP is high.

6.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain 'better' curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known BFGS and DFP quasi-Newton updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
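One common self-scaling choice multiplies the inverse-Hessian approximation by tau = (yᵀs)/(yᵀHy) before the standard BFGS update; the secant condition H⁺y = s survives the scaling. A hedged sketch (function and variable names are ours, and this is one of several scaling rules in the literature):

```python
import numpy as np

def self_scaling_bfgs(H, s, y):
    """One self-scaling inverse BFGS update: H is first multiplied by
    tau = (y.s)/(y^T H y) to control its eigenvalues, then the
    standard BFGS inverse update is applied."""
    tau = (y @ s) / (y @ H @ y)
    H = tau * H
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(1)
s = rng.standard_normal(3)
y = s + 0.1 * rng.standard_normal(3)    # keeps the curvature y.s > 0
H_new = self_scaling_bfgs(np.eye(3), s, y)
# H_new stays symmetric and satisfies the secant condition H_new @ y = s
```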

7.
In this paper, we introduce a cautious BFGS (CBFGS) update criterion into the reduced Hessian sequential quadratic programming (SQP) method. An attractive property of this update criterion is that the generated iterative matrices are always positive definite. Under mild conditions, we establish global convergence of the reduced Hessian SQP method; in particular, the second-order sufficient condition is not necessary for global convergence. Furthermore, we show that if the second-order sufficient condition holds at an accumulation point, then the reduced Hessian SQP method with the CBFGS update reduces to the reduced Hessian SQP method with the ordinary BFGS update, so the local behavior of the proposed method matches that of the reduced Hessian SQP method with the BFGS update. Preliminary numerical experiments show the good performance of the method. This work was supported by the National Natural Science Foundation of China via grants 10671060 and 10471060.
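A cautious update criterion of this kind can be sketched in a few lines: apply the BFGS formula only when the curvature condition holds, otherwise keep the current matrix, so positive definiteness is never lost. This toy direct-update version uses a fixed threshold `eps`; the paper's actual criterion may scale the threshold differently:

```python
import numpy as np

def cautious_bfgs(B, s, y, eps=1e-6):
    """Cautious (direct) BFGS update: perform the update only when
    y.s >= eps * ||s||^2, so the generated matrices remain symmetric
    positive definite; otherwise return B unchanged."""
    if y @ s < eps * (s @ s):
        return B                      # skip the update, keep B_k
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

B = np.eye(2)
s = np.array([1.0, 0.0])
B_skip = cautious_bfgs(B, s, np.array([-1.0, 0.0]))  # negative curvature: skipped
B_upd = cautious_bfgs(B, s, np.array([2.0, 0.0]))    # curvature fine: updated
```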

8.
Yue Wang, SCIENTIA SINICA Mathematica, 2014, 44(3): 287–294
This paper studies a modified p-Laplace equation on Riemannian manifolds. Using estimates of cut-off functions together with the Hessian and Laplacian comparison theorems, we obtain gradient estimates for positive solutions of the equation. As applications, we derive a Harnack inequality and Liouville-type theorems for positive solutions of the modified p-Laplace equation on Riemannian manifolds.

9.
Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems, and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss the usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with state-of-the-art algorithms and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix.
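A simple Euclidean baseline on the same fixed-rank search space is alternating least squares on the factorization X = G Hᵀ; this is only a hedged sketch of the problem class (all names are ours), not the quotient-manifold gradient or trust-region algorithms of the paper:

```python
import numpy as np

def als_fixed_rank(M, r, iters=10):
    """Fit a rank-r factorization M ~ G @ H.T by alternating
    least squares: each half-step solves a linear least-squares
    problem with the other factor held fixed."""
    rng = np.random.default_rng(0)
    G = rng.standard_normal((M.shape[0], r))
    for _ in range(iters):
        H = np.linalg.lstsq(G, M, rcond=None)[0].T      # fix G, solve for H
        G = np.linalg.lstsq(H, M.T, rcond=None)[0].T    # fix H, solve for G
    return G, H

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # exact rank 2
G, H = als_fixed_rank(M, 2)     # recovers M exactly here, since rank(M) = 2
```

The invariance G Hᵀ = (G R)(H R⁻ᵀ)ᵀ for invertible R is precisely the quotient structure that the Riemannian framework handles intrinsically.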

10.
In this paper we consider an inverse problem for a damped vibration system from the noisy measured eigendata, where the mass, damping, and stiffness matrices are all symmetric positive‐definite matrices with the mass matrix being diagonal and the damping and stiffness matrices being tridiagonal. To take into consideration the noise in the data, the problem is formulated as a convex optimization problem involving quadratic constraints on the unknown mass, damping, and stiffness parameters. Then we propose a smoothing Newton‐type algorithm for the optimization problem, which improves a pre‐existing estimate of a solution to the inverse problem. We show that the proposed method converges both globally and quadratically. Numerical examples are also given to demonstrate the efficiency of our method. Copyright © 2008 John Wiley & Sons, Ltd.
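The measured eigendata here come from the quadratic eigenvalue problem (λ²M + λC + K)x = 0. For orientation only, such eigenvalues can be computed by the standard first-order companion linearization (a generic sketch, not the paper's smoothing Newton algorithm):

```python
import numpy as np

def quadratic_eigs(M, C, K):
    """Eigenvalues of (lam^2 M + lam C + K) x = 0 via the usual
    companion linearization to a 2n x 2n standard eigenproblem."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

# Undamped single mass-spring system: M = 1, C = 0, K = 4 gives lam = +/- 2i
eigs = quadratic_eigs(np.eye(1), np.zeros((1, 1)), 4.0 * np.eye(1))
```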

11.
This paper introduces a robust preconditioner for general sparse matrices based on low‐rank approximations of the Schur complement in a Domain Decomposition framework. In this ‘Schur Low Rank’ preconditioning approach, the coefficient matrix is first decoupled by a graph partitioner, and then a low‐rank correction is exploited to compute an approximate inverse of the Schur complement associated with the interface unknowns. The method avoids explicit formation of the Schur complement. We show the feasibility of this strategy for a model problem and conduct a detailed spectral analysis for the relation between the low‐rank correction and the quality of the preconditioner. We first introduce the SLR preconditioner for symmetric positive definite matrices and symmetric indefinite matrices if the interface matrices are symmetric positive definite. Extensions to general symmetric indefinite matrices as well as to nonsymmetric matrices are also discussed. Numerical experiments on general matrices illustrate the robustness and efficiency of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
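The exact block-elimination identity behind the Schur complement can be checked on a tiny example. In the SLR preconditioner S is never formed explicitly (only a low-rank approximation of its inverse is used), but the identity it approximates is (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# 2x2 block matrix [[B, E], [E.T, C]], as in domain decomposition,
# with B the (decoupled) interior block and C the interface block.
B = 3.0 * np.eye(4) + 0.1 * np.ones((4, 4))
E = 0.1 * rng.standard_normal((4, 2))
C = 2.0 * np.eye(2)
A = np.block([[B, E], [E.T, C]])

S = C - E.T @ np.linalg.solve(B, E)      # Schur complement of B in A

# Solving A z = b by block elimination through S:
b = rng.standard_normal(6)
y = np.linalg.solve(B, b[:4])
z2 = np.linalg.solve(S, b[4:] - E.T @ y)     # interface unknowns
z1 = y - np.linalg.solve(B, E @ z2)          # interior unknowns
z = np.concatenate([z1, z2])                 # satisfies A @ z = b
```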

12.
《Optimization》2012,61(2):257-270
Abstract

In this paper we consider the minimization problem with constraints. We show that if the constraint set is a Riemannian manifold of nonpositive sectional curvature, and the objective function is convex on this manifold, then the proximal point method in Euclidean space extends naturally to solve this class of problems. We prove that the sequence generated by our method is well defined and converges to a minimizer. In particular, we show how tools of Riemannian geometry, more specifically convex analysis on Riemannian manifolds, can be used to solve a nonconvex constrained problem in Euclidean space.

13.
Filter approaches, initially presented by Fletcher and Leyffer in 2002, are attractive methods for nonlinear programming. In this paper, we propose an interior-point barrier projected Hessian updating algorithm with a line search filter method for nonlinear optimization. The Lagrangian function value, rather than the objective function value, is used in the filter. Damped BFGS updating is employed to maintain the positive definiteness of the matrices in the projected Hessian updating algorithm. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
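The filter mechanism itself is simple to sketch: a trial point, summarized by its constraint violation h and merit value f, is accepted only if no stored filter entry dominates it. This toy version (our own names) omits the small envelope margins used in practical filter methods:

```python
def acceptable(h, f, filt):
    """A trial pair (h, f) is acceptable to the filter if no stored
    entry (h_j, f_j) dominates it, i.e. has h_j <= h and f_j <= f."""
    return all(not (hj <= h and fj <= f) for hj, fj in filt)

filt = [(0.5, 1.0)]                 # one filter entry: (violation, merit)
ok = acceptable(0.4, 2.0, filt)     # less infeasible: accepted
bad = acceptable(0.6, 1.5, filt)    # dominated by (0.5, 1.0): rejected
```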

14.
A map of a Riemannian manifold into a Euclidean space is said to be transnormal if its restrictions to neighbourhoods of regular level sets are integrable Riemannian submersions. Analytic transnormal maps can be used to describe isoparametric submanifolds in spaces of constant curvature and equifocal submanifolds with flat sections in simply connected symmetric spaces. These submanifolds are also regular leaves of singular Riemannian foliations with sections. We prove that regular level sets of an analytic transnormal map on a complete real analytic Riemannian manifold are equifocal submanifolds and leaves of a singular Riemannian foliation with sections.

15.
We propose a new class of trust-region methods for solving unconstrained optimization problems. These methods generate the search path via the Bunch–Parlett factorization of a general symmetric matrix. They can handle optimization problems whose objective functions are twice differentiable as well as those whose objective functions are not, and at any accumulation point of the sequence generated by the algorithm, the Hessian of a twice continuously differentiable objective function is positive definite or positive semidefinite. We prove that the algorithms are globally convergent under rather weak conditions, and quadratically convergent for uniformly convex functions. Numerical results indicate that the new methods are very effective.

16.
In this work we study a Newton-type method for functions on Riemannian manifolds whose Hessian satisfies a double inequality. The main results concern global convergence and convergence rate estimates.

17.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
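A minimal sketch of one common nonmonotone Armijo test (the Grippo-style form: the sufficient-decrease comparison uses the maximum over recent function values rather than f(x_k) alone); all names and constants are illustrative, and the paper studies two such forms:

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, recent_f, c=1e-4, beta=0.5, max_back=50):
    """Backtrack until f(x + a*d) <= max(recent_f) + c*a*(g.d),
    i.e. sufficient decrease relative to the worst recent value."""
    fmax = max(recent_f)
    a = 1.0
    for _ in range(max_back):
        if f(x + a * d) <= fmax + c * a * (g @ d):
            return a
        a *= beta
    return a

f = lambda v: v @ v                # f(v) = ||v||^2
x = np.array([1.0, 1.0])
g = 2.0 * x                        # gradient of f at x
d = -g                             # steepest descent direction
a = nonmonotone_armijo(f, x, d, g, [f(x)])
# a = 1 overshoots (f unchanged), so one backtrack gives a = 0.5,
# which lands exactly on the minimizer here
```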

18.
Motivated by Nash equilibrium problems on ‘curved’ strategy sets, the concept of Nash–Stampacchia equilibrium points is introduced via variational inequalities on Riemannian manifolds. Characterizations, existence, and stability of Nash–Stampacchia equilibria are studied when the strategy sets are compact/noncompact geodesic convex subsets of Hadamard manifolds, exploiting two well-known geometrical features of these spaces both involving the metric projection map. These properties actually characterize the non-positivity of the sectional curvature of complete and simply connected Riemannian spaces, delimiting the Hadamard manifolds as the optimal geometrical framework of Nash–Stampacchia equilibrium problems. Our analytical approach exploits various elements from set-valued and variational analysis, dynamical systems, and non-smooth calculus on Riemannian manifolds. Examples are presented on the Poincaré upper-plane model and on the open convex cone of symmetric positive definite matrices endowed with the trace-type Killing form.

19.
Sparse principal component analysis (PCA), an important variant of PCA, attempts to find sparse loading vectors when conducting dimension reduction. This paper considers the nonsmooth Riemannian optimization problem associated with the ScoTLASS model for sparse PCA, which can impose orthogonality and sparsity simultaneously. A Riemannian proximal method was proposed in the work of Chen et al. for the efficient solution of this optimization problem. In this paper, two acceleration schemes are introduced. First and foremost, we extend the FISTA method from Euclidean space to the Riemannian manifold to solve sparse PCA, leading to an accelerated Riemannian proximal gradient method. Since the Riemannian optimization problem for sparse PCA is essentially nonconvex, a restarting technique is adopted to stabilize the accelerated method without sacrificing its fast convergence. Second, a diagonal preconditioner is proposed for the Riemannian proximal subproblem, which can further accelerate the convergence of the Riemannian proximal methods. Numerical evaluations establish the computational advantages of the proposed methods over existing proximal gradient methods on a manifold. Additionally, a short result concerning the convergence of the Riemannian subgradients of a sequence is established, which, together with the result of Chen et al., shows the stationary-point convergence of the Riemannian proximal methods.
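For reference, the Euclidean FISTA being generalized combines a proximal gradient step (soft-thresholding, for an L1 term) with a momentum extrapolation; a hedged lasso sketch with illustrative names, without the Riemannian retraction, restarting, or preconditioning of the paper:

```python
import numpy as np

def soft(z, t):
    """Entrywise soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, b, lam, iters=1000):
    """Euclidean FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
lam = 0.1
x = fista(A, b, lam)
# At the minimizer, every entry of A^T (A x - b) has magnitude at most lam
```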

20.
After a discussion of definability of invariant subdivision rules, we discuss rules for sequential data living in Riemannian manifolds and in symmetric spaces, having in mind the space of positive definite matrices as a major example. We show that subdivision rules defined with intrinsic means in Cartan-Hadamard manifolds converge for all input data, which is a much stronger result than those usually available for manifold subdivision rules. We also show weaker convergence results which are true in general but apply only to dense enough input data. Finally we discuss C^1 and C^2 smoothness of limit curves.
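For the major example named in the abstract, the geodesic mean of two positive definite matrices under the affine-invariant metric is explicit, and a midpoint-subdivision round simply inserts these means between consecutive data points. A hedged sketch (function names are ours):

```python
import numpy as np

def spd_power(A, t):
    """A^t for a symmetric positive definite matrix A, via eigh."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def spd_geodesic(A, B, t=0.5):
    """Point at parameter t on the affine-invariant geodesic from A to B:
    A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2)."""
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

# For commuting matrices the midpoint is the entrywise geometric mean:
Mid = spd_geodesic(np.diag([1.0, 4.0]), np.diag([9.0, 1.0]))   # diag(3, 2)
```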


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号