1.
This paper introduces an algorithm for the nonnegative matrix factorization-and-completion problem, which aims to find nonnegative low-rank matrices X and Y such that the product XY approximates a nonnegative data matrix M whose entries are only partially known (to a certain accuracy). This problem combines two existing problems: (i) nonnegative matrix factorization, where all entries of M are given, and (ii) low-rank matrix completion, where nonnegativity is not required. By taking advantage of both nonnegativity and low-rankness, one can generally obtain better results than by exploiting only one of the two properties. We propose to solve the resulting non-convex constrained least-squares problem with an algorithm based on the classical alternating direction augmented Lagrangian method. Preliminary convergence properties of the algorithm and numerical simulation results are presented. Compared with a recent algorithm for nonnegative matrix factorization, the proposed algorithm produces factorizations of similar quality using only about half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, it yields overall better quality than two recent matrix-completion algorithms that do not exploit nonnegativity.
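For illustration only, the following Python sketch fits nonnegative factors to a partially observed matrix by simple projected alternating least squares with imputation of the unobserved entries; it mirrors the model above but is not the paper's alternating direction augmented Lagrangian algorithm, and the function name and parameters are assumptions.

import numpy as np

def nmf_completion(M, mask, rank, n_iters=200, seed=0):
    # M: data matrix (arbitrary values at unobserved positions)
    # mask: boolean array, True where M is observed
    # rank: target rank r, so X is m-by-r and Y is r-by-n
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, rank))
    Y = rng.random((rank, n))
    Z = np.where(mask, M, X @ Y)  # surrogate matrix agreeing with M on observed entries
    for _ in range(n_iters):
        # projected alternating least squares: solve, then clip to the nonnegative orthant
        X = np.maximum(Z @ Y.T @ np.linalg.pinv(Y @ Y.T), 0.0)
        Y = np.maximum(np.linalg.pinv(X.T @ X) @ X.T @ Z, 0.0)
        Z = np.where(mask, M, X @ Y)  # refill unobserved entries from the current factors
    return X, Y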
2.
The higher-order orthogonal iteration (HOOI) has been widely used for finding a best low-multilinear-rank approximation of a tensor; however, its convergence is still an open question. In this paper, we first analyse a greedy HOOI, which updates each factor matrix by selecting, from the best candidates, the one closest to the current iterate. Assuming the existence of a block-nondegenerate cluster point, we establish global convergence of its iterate sequence through the Kurdyka–Łojasiewicz property. In addition, we show that if the starting point is sufficiently close to any block-nondegenerate globally optimal solution, the greedy HOOI produces an iterate sequence that converges to a globally optimal solution. Relating the iterate sequence of the original HOOI to that of the greedy HOOI, we then show that the original HOOI enjoys global convergence of the multilinear subspace sequence, thus answering the open question in the affirmative.
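For reference, here is a minimal Python sketch of the classical HOOI iteration for a 3-way tensor (the plain alternating scheme, not the greedy variant analysed in the paper); the function names are illustrative, and each target rank is assumed not to exceed the corresponding unfolding dimensions.

import numpy as np

def unfold(T, mode):
    # mode-k unfolding: move mode k to the front and flatten the remaining modes
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iters=50, seed=0):
    # alternately refresh each factor with the leading left singular
    # vectors of the unfolding of T projected onto the other two factors
    rng = np.random.default_rng(seed)
    U = [np.linalg.qr(rng.standard_normal((T.shape[k], ranks[k])))[0] for k in range(3)]
    for _ in range(n_iters):
        for k in range(3):
            G = T
            for j in range(3):
                if j != k:  # contract mode j with U_j^T, keeping the axis in place
                    G = np.moveaxis(np.tensordot(U[j].T, G, axes=(1, j)), 0, j)
            # leading ranks[k] left singular vectors update the mode-k factor
            U[k] = np.linalg.svd(unfold(G, k), full_matrices=False)[0][:, :ranks[k]]
    core = T
    for j in range(3):  # core tensor of the final approximation
        core = np.moveaxis(np.tensordot(U[j].T, core, axes=(1, j)), 0, j)
    return core, U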
3.
Block coordinate update (BCU) methods enjoy low per-update computational complexity because, at each update, only one or a few block variables among a possibly large number of blocks need to be modified. They are also easily parallelized and have therefore been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal–dual BCU method for solving linearly constrained convex programs with multi-block variables. The method is an accelerated version of a primal–dual algorithm proposed by the authors, which applies randomization in selecting block variables to update and establishes an \(O(1/t)\) convergence rate under a convexity assumption. We show that the rate can be accelerated to \(O(1/t^2)\) if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we show that the algorithm can be modified to achieve a linear rate of convergence. Numerical experiments show that the accelerated method performs stably with a single set of parameters, whereas the original method needs parameter tuning for different datasets to achieve comparable performance.
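To illustrate the primal–dual BCU structure (pick one block at random, minimize the augmented Lagrangian over that block, then take a dual step), here is a toy Python sketch for a separable quadratic objective under one linear coupling constraint; it is not the authors' accelerated scheme, and all names, penalty, and step choices are assumptions.

import numpy as np

def randomized_primal_dual_bcu(A_blocks, b, c_blocks, n_iters=2000, rho=1.0, seed=0):
    # toy problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b
    rng = np.random.default_rng(seed)
    x = [np.zeros(A.shape[1]) for A in A_blocks]
    lam = np.zeros(b.shape[0])  # multiplier for the linear coupling constraint
    for _ in range(n_iters):
        i = rng.integers(len(A_blocks))  # pick one block uniformly at random
        Ai, ci = A_blocks[i], c_blocks[i]
        # residual contributed by the other blocks: sum_{j != i} A_j x_j - b
        s = sum(A @ xj for A, xj in zip(A_blocks, x)) - Ai @ x[i] - b
        # exact minimization of the augmented Lagrangian over block i (small linear solve)
        H = np.eye(Ai.shape[1]) + rho * Ai.T @ Ai
        x[i] = np.linalg.solve(H, ci - Ai.T @ lam - rho * Ai.T @ s)
        # dual ascent step on the multiplier
        lam = lam + rho * (sum(A @ xj for A, xj in zip(A_blocks, x)) - b)
    return x, lam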
4.
Semidefinite programming (SDP) relaxations have proven to be extremely strong for many hard discrete optimization problems. This is in particular true for the quadratic assignment problem (QAP), arguably one of the hardest NP-hard discrete optimization problems. Several difficulties arise in solving the SDP relaxation efficiently, e.g., the increased dimension; the inefficiency of current primal–dual interior-point solvers in terms of both time and accuracy; and the difficulty and high expense of adding cutting-plane constraints. We propose using the alternating direction method of multipliers (ADMM) in combination with facial reduction (FR) to solve the SDP relaxation. This first-order approach allows for inexpensive iterations, a cheap way of obtaining low-rank solutions, and a trivial way of exploiting FR to add cutting-plane inequalities. In fact, we solve the doubly nonnegative (DNN) relaxation, which includes both the SDP and all the nonnegativity constraints. Compared with current approaches and the current best available bounds, we obtain robustness, efficiency, and improved bounds.
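As a rough illustration of such an ADMM splitting on a facially reduced problem, the Python sketch below alternates a projected update of the small PSD variable R (with Y = V R V^T and V from facial reduction, assumed to have orthonormal columns), an entrywise nonnegative update of Y, and a dual update; the additional affine (e.g. gangster) constraints of the actual QAP relaxation are omitted, and all names are hypothetical.

import numpy as np

def psd_project(S):
    # project a symmetric matrix onto the positive semidefinite cone
    S = (S + S.T) / 2
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.maximum(w, 0.0)) @ Q.T

def dnn_admm(C, V, n_iters=500, beta=1.0):
    # sketch of min <C, Y>  s.t.  Y = V R V^T,  R psd,  Y >= 0 entrywise,
    # with V^T V = I from facial reduction (further affine constraints omitted)
    n, k = C.shape[0], V.shape[1]
    R = np.zeros((k, k))
    Y = np.zeros((n, n))
    Z = np.zeros((n, n))  # multiplier for the splitting constraint Y = V R V^T
    for _ in range(n_iters):
        # R-step: PSD projection of V^T (Y + Z/beta) V (uses V^T V = I)
        R = psd_project(V.T @ (Y + Z / beta) @ V)
        W = V @ R @ V.T
        # Y-step: entrywise nonnegative clipping of the unconstrained minimizer
        Y = np.maximum(W - (C + Z) / beta, 0.0)
        # dual update on the splitting constraint
        Z = Z + beta * (Y - W)
    return Y, R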