Similar Literature
20 similar articles found (search time: 171 ms)
1.
This paper mainly establishes the stable finiteness of C*-algebras of tracial stable rank one: if A is a unital C*-algebra with tracial stable rank one, then A is stably finite. The notion of weak tracial stable rank one is introduced, and it is proved that a unital C*-algebra A of tracial stable rank one has weak tracial stable rank one; conversely, for a simple unital C*-algebra A with property (SP), weak tracial stable rank one implies tracial stable rank one. An equivalent condition for tracial stable rank one is also given: a unital separable C*-algebra A has tracial stable rank one if and only if A = lim_{n→∞}(A_n, p_n) is an inductive limit with tsr(A_n) = 1.

2.
This paper mainly establishes the stable finiteness of C*-algebras of tracial stable rank one: if A is a unital C*-algebra with tracial stable rank one, then A is stably finite. The notion of weak tracial stable rank one is introduced, and it is proved that a unital C*-algebra A of tracial stable rank one has weak tracial stable rank one; conversely, for a simple unital C*-algebra A with property (SP), weak tracial stable rank one implies tracial stable rank one. An equivalent condition for tracial stable rank one is also given: a unital separable C*-algebra A has tracial stable rank one if and only if A = lim_{n→∞}(A_n, p_n) is an inductive limit with tsr(A_n) = 1.

3.
This paper studies anti-symmetric solutions of the matrix equation AX = B under rank constraints. Using matrix rank methods, necessary and sufficient conditions for AX = B to have maximal-rank and minimal-rank solutions are obtained, together with expressions for the fixed-rank solutions; within the set of minimal-rank solutions, the optimal approximation solution is also determined.
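Items 3–6 all build on the same pseudoinverse facts about rank-constrained solutions of AX = B. The following is a minimal unconstrained sketch of those facts only; it ignores the anti-symmetry, Hermitian R-symmetry, and bisymmetry constraints that the papers actually impose, and all names and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))           # generically rank 4
B = A @ rng.standard_normal((6, 3))       # consistent by construction

Ap = np.linalg.pinv(A)
assert np.allclose(A @ Ap @ B, B)         # solvability test: A A^+ B = B

# X0 = A^+ B attains the minimal possible rank of a solution, rank(B)
X0 = Ap @ B
rank = np.linalg.matrix_rank
assert rank(X0) == rank(B)

# general solution: X = X0 + (I - A^+ A) Y with Y arbitrary;
# the null-space term can only raise the rank, up to
# min(rank(B) + (6 - rank(A)), 3) for these shapes
Y = rng.standard_normal((6, 3))
X = X0 + (np.eye(6) - Ap @ A) @ Y
assert np.allclose(A @ X, B)
```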

4.
付莹 《数学杂志》2014,34(2):243-250
This paper studies maximal-rank and minimal-rank Hermitian R-symmetric solutions of the matrix equation AX = B. Using matrix rank methods, necessary and sufficient conditions for AX = B to have maximal-rank and minimal-rank solutions are obtained, together with expressions for the solutions; within the set of minimal-rank solutions, the optimal approximation solution is also determined.

5.
This paper studies maximal-rank and minimal-rank Hermitian R-symmetric solutions of the matrix equation AX = B. Using matrix rank methods, necessary and sufficient conditions for AX = B to have maximal-rank and minimal-rank solutions are obtained, together with expressions for the solutions; within the set of minimal-rank solutions, the optimal approximation solution is also determined.

6.
冯天祥 《数学杂志》2016,36(2):285-292
This paper studies maximal-rank and minimal-rank bisymmetric solutions of the matrix equation AX = B. Using matrix rank methods, necessary and sufficient conditions for AX = B to have maximal-rank and minimal-rank solutions are obtained, together with expressions for the solutions; within the set of minimal-rank solutions, the optimal approximation solution is also determined.

7.
Traditional methods for solving 0-1 programming problems are mostly direct, discrete methods. This paper proposes a continuous approach consisting of three steps of exact transformation and smooth approximation: (1) use the step function to convert the 0-1 discrete variables into continuous variables on the interval [0,1]; (2) approximate the compromise step function in the objective by a smooth polishing function and in the constraints by a linear polishing function, thereby converting the 0-1 program from a discrete problem into a continuous optimization model; (3) solve the resulting model with a higher-order smooth optimization method. This approach breaks the convention that a particular solution method applies only to a particular class of 0-1 programs, making the solution of 0-1 programming problems more general. In the computations, a sine-type smooth polishing function is used to approximate the compromise step function, with good numerical results.
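The paper's exact sine-type polishing function is not reproduced in the abstract; the sketch below is one standard sine-based smooth approximation of the 0-1 step function, illustrating the kind of ingredient steps (1) and (2) rely on (the function name and the width parameter `delta` are assumptions for illustration):

```python
import numpy as np

def smooth_step(t, delta=0.1):
    """Sine-type smooth approximation of the 0-1 step function:
    exactly 0 for t <= -delta, exactly 1 for t >= delta,
    and a smooth sine blend in between."""
    t = np.asarray(t, dtype=float)
    s = 0.5 * (1.0 + np.sin(np.pi * t / (2 * delta)))
    return np.where(t <= -delta, 0.0, np.where(t >= delta, 1.0, s))

# a 0-1 variable x can then be modeled as x = smooth_step(y)
# with y a continuous decision variable
assert float(smooth_step(-0.1)) == 0.0
assert float(smooth_step(0.1)) == 1.0
```

The blend is continuously differentiable at the junctions (the sine's slope vanishes at ±delta), which is what makes higher-order smooth solvers applicable in step (3).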

8.
Structured low-rank matrix approximation is widely used in image compression, computer algebra, and speech coding. This paper first gives projection formulas for several classes of structured matrices, and then computes structured low-rank approximations by the method of alternating projections. Numerical experiments show that the new method is feasible.
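The alternating projection idea can be sketched for one structure class the paper does not single out here, Hankel matrices, using the two standard projections (truncated SVD onto rank-r matrices, antidiagonal averaging onto the Hankel subspace); the paper's projection formulas cover more structure classes, so this is a sketch under that assumption:

```python
import numpy as np

def project_rank(M, r):
    """Nearest rank-r matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def project_hankel(M):
    """Nearest Hankel matrix: average each antidiagonal."""
    m, n = M.shape
    H = np.empty_like(M)
    for k in range(m + n - 1):
        idx = [(i, k - i) for i in range(max(0, k - n + 1), min(m, k + 1))]
        avg = sum(M[i, j] for i, j in idx) / len(idx)
        for i, j in idx:
            H[i, j] = avg
    return H

def structured_low_rank(M, r, iters=500):
    """Alternate the two projections (Cadzow-style iteration)."""
    X = M.copy()
    for _ in range(iters):
        X = project_hankel(project_rank(X, r))
    return X
```

For example, a noisy rank-one Hankel matrix built from the sequence 0.9^k is cleaned back to a (numerically) rank-one Hankel matrix by `structured_low_rank(noisy, 1)`.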

9.
This paper proves that a simple unital C*-algebra of tracial stable rank one has the cancellation property, and uses this result to show that a simple unital C*-algebra of tracial stable rank one has stable rank one. Finally, properties of the K_0-group of C*-algebras of tracial stable rank one are discussed.

10.
Unitary K_1-groups under the Λ-stable rank condition
A. Bak and Tang Guoping introduced the Λ-stable rank condition in [1], a new stable rank condition weaker than both the unitary stable rank and absolute stable rank conditions commonly used before. Using this condition, they prove the normality of the elementary subgroup of the unitary group (the automorphism group of a quadratic form on a finitely generated projective module), cancellation for quadratic spaces, and stability of the unitary K_1-group. These results not only generalize existing analogues and greatly simplify the proofs, but, more importantly, lower the stable rank bound.

11.
The symmetric tensor decomposition problem is fundamental in many fields and invites investigation. In general, a greedy algorithm is used for tensor decomposition: first find the largest singular value and singular vector, subtract the corresponding component from the tensor, then repeat the process. In this article, we focus on designing an effective algorithm and giving its convergence analysis. We introduce an exceedingly simple and fast algorithm for rank-one approximation in symmetric tensor decomposition. Through variable splitting, we solve the symmetric tensor decomposition problem by minimizing a multiconvex optimization problem, using an alternating gradient descent algorithm. Although we focus on symmetric tensors in this article, the method can be extended to nonsymmetric tensors in some cases. Additionally, we give theoretical analysis of the alternating gradient descent algorithm, proving that it converges linearly to a global minimizer. We also provide numerical results to show the effectiveness of the algorithm.
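The paper's own algorithm is alternating gradient descent after variable splitting; a closely related and much simpler baseline for the same rank-one subproblem is the classical higher-order power iteration, sketched below for a symmetric order-3 tensor (function name and stopping rule are illustrative, not the paper's):

```python
import numpy as np

def sym_rank_one(T, iters=100, seed=0):
    """Higher-order power iteration for a rank-one fit
    lambda * (x ⊗ x ⊗ x) of a symmetric order-3 tensor T."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)   # contract two modes
        nrm = np.linalg.norm(y)
        if nrm == 0:
            break
        x = y / nrm
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # Rayleigh-type value
    return lam, x
```

On an exactly rank-one input T = v ⊗ v ⊗ v with unit v, the iteration recovers v (up to sign) and lambda = 1.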

12.
Biquadratic tensors play a central role in many areas of science. Examples include the elastic tensor and Eshelby tensor in solid mechanics, and the Riemannian curvature tensor in relativity theory. The singular values and spectral norm of a general third order tensor are the square roots of the M-eigenvalues and spectral norm of a biquadratic tensor, respectively. The tensor product operation is closed for biquadratic tensors. All of these motivate us to study biquadratic tensors, biquadratic decomposition, and norms of biquadratic tensors. We show that the spectral norm and nuclear norm of a biquadratic tensor may be computed by using its biquadratic structure. Then, either the number of variables is reduced, or the feasible region can be reduced. We show constructively that for a biquadratic tensor, a biquadratic rank-one decomposition always exists, and that the biquadratic rank of a biquadratic tensor is preserved under an independent biquadratic Tucker decomposition. We present a lower bound and an upper bound for the nuclear norm of a biquadratic tensor. Finally, we define invertible biquadratic tensors, and present a lower bound for the product of the nuclear norms of an invertible biquadratic tensor and its inverse, and a lower bound for the product of the nuclear norm of an invertible biquadratic tensor and the spectral norm of its inverse.

13.
Z-eigenvalue methods for a global polynomial optimization problem
As a global polynomial optimization problem, the best rank-one approximation to higher-order tensors has extensive engineering and statistical applications. Different from traditional optimization solution methods, in this paper we propose Z-eigenvalue methods for solving this problem. We first propose a direct Z-eigenvalue method for the case of dimension two. In the multidimensional case, a conventional descent optimization method may find a local minimizer. Then, by using orthogonal transformations, we convert the underlying supersymmetric tensor to a pseudo-canonical form, which has the same E-eigenvalues and some zero entries. Based upon these, we propose a direct orthogonal transformation Z-eigenvalue method for the case of order three and dimension three. For order three and higher dimensions, we propose a heuristic orthogonal transformation Z-eigenvalue method that improves the local minimum with the lower-dimensional Z-eigenvalue methods, and a heuristic cross-hill Z-eigenvalue method that uses the two-dimensional Z-eigenvalue method to find more local minimizers. Numerical experiments show that our methods are efficient and promising. This work is supported by the Research Grant Council of Hong Kong and the Natural Science Foundation of China (Grant No. 10771120).

14.
We consider the symmetric rank-one quasi-Newton formula. The hereditary properties of this formula do not require quasi-Newton directions of search. Therefore, this formula is easy to use in constrained optimization algorithms; no explicit projections of either the Hessian approximations or the parameter changes are required. Moreover, the entire Hessian approximation is available at each iteration for determining the direction of search, which need not be a quasi-Newton direction. Theoretical difficulties, however, exist. Even for a positive-definite quadratic function with no constraints, the symmetric rank-one update may be undefined at some iteration. In this paper, we first demonstrate that such failures of definition correspond either to losses of independence in the directions of search being generated or to near-singularity of the Hessian approximation being generated. We then describe a procedure that guarantees that these updates are well defined for any nonsingular quadratic function. This procedure has been incorporated into an algorithm for minimizing a function subject to box constraints. Box constraints arise naturally in the minimization of a function with many minima or a function that is defined only in some subregion of the space.
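The symmetric rank-one (SR1) update and its breakdown case can be sketched directly; the skip threshold `r` is an illustrative safeguard, not the specific procedure the paper develops:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one quasi-Newton update
    B+ = B + (y - Bs)(y - Bs)^T / ((y - Bs)^T s).
    Skipped when the denominator is too small relative to
    ||s|| ||y - Bs|| -- the failure of definition the paper analyzes."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # update undefined or unstable: skip
    return B + np.outer(v, v) / denom

# secant (hereditary) property on a quadratic with Hessian A
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.eye(2)
s = np.array([1.0, 0.0])
y = A @ s
B1 = sr1_update(B, s, y)
assert np.allclose(B1 @ s, y)   # B1 satisfies the secant equation
```

When the step brings no new curvature information (y = Bs exactly), the safeguard leaves B unchanged rather than dividing by zero.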

15.
The problem of finding the best rank-one approximation to higher-order tensors has extensive engineering and statistical applications. It is well known that this problem is equivalent to a homogeneous polynomial optimization problem. In this paper, we study theoretical results and numerical methods for this problem, focusing on the 4th-order symmetric tensor case. First, we reformulate the polynomial optimization problem as a matrix program and show the equivalence of the two problems. Then, we prove that there is no duality gap between the reformulation and its Lagrangian dual problem. Concerning approaches to the problem, we propose two relaxed models. The first is a convex quadratic matrix optimization problem regularized by the nuclear norm, while the second is a quadratic matrix program regularized by a truncated nuclear norm, which is a D.C. function and therefore nonconvex. To overcome the difficulty of solving this nonconvex problem, we approximate the nonconvex penalty by a convex term. We propose the proximal augmented Lagrangian method to solve these two relaxed models. To obtain a global solution, we propose an alternating least eigenvalue method applied after solving the relaxed models, and prove its convergence. Numerical results presented at the end demonstrate the effectiveness and efficiency of our proposed methods, especially for nonpositive tensors.

16.
We show that a best rank one approximation to a real symmetric tensor, which in principle can be nonsymmetric, can be chosen symmetric. Furthermore, a symmetric best rank one approximation to a symmetric tensor is unique if the tensor does not lie on a certain real algebraic variety.

17.
We consider an Iterated-Subspace Minimization (ISM) method for solving large-scale unconstrained minimization problems. At each major iteration of the method, a two-dimensional manifold, the iterated subspace, is constructed and an approximate minimizer of the objective function in this manifold is then determined, and a symmetric rank-one update is used to solve the inner minimization problem.

18.
In this paper, a modified Newton's method for the best rank-one approximation problem for tensors is proposed. We combine the iteration matrix of the Jacobi-Gauss-Newton (JGN) algorithm or the Alternating Least Squares (ALS) algorithm with the iteration matrix of the GRQ-Newton method, and present a modified version of the GRQ-Newton algorithm. A line search along the projective direction is employed to obtain global convergence. Preliminary numerical experiments and comparisons show that our algorithm is efficient.

19.
The approximation of solutions to partial differential equations by tensorial separated representations is one of the most efficient numerical treatments of high-dimensional problems. The key step of such methods is the computation of an optimal low-rank tensor to enrich the obtained iterative tensorial approximation. In variational problems, this step can be carried out by alternating minimization (AM) techniques, but the convergence of such methods presents a real challenge. In the present work, the convergence of rank-one AM algorithms for a class of variational linear elliptic equations is studied. More precisely, we show that rank-one AM sequences are in general bounded in the ambient Hilbert tensor space and are compact if a uniform non-orthogonality condition between iterates and the reaction term is fulfilled. In particular, if a rank-one AM sequence is weakly convergent, then it converges strongly and the common limit is a solution of the rank-one optimization problem.

20.
In this paper we discuss the notion of singular vector tuples of a complex-valued \(d\)-mode tensor of dimension \(m_1\times \cdots \times m_d\). We show that a generic tensor has a finite number of singular vector tuples, viewed as points in the corresponding Segre product. We give the formula for the number of singular vector tuples. We show similar results for tensors with partial symmetry. We give analogous results for the homogeneous pencil eigenvalue problem for cubic tensors, i.e., \(m_1=\cdots =m_d\). We show the uniqueness of best approximations for almost all real tensors in the following cases: rank-one approximation; rank-one approximation for partially symmetric tensors (this approximation is also partially symmetric); rank-\((r_1,\ldots ,r_d)\) approximation for \(d\)-mode tensors.
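For context, the count of singular vector tuples referred to above ("We give the formula...") is, in Friedland and Ottaviani's notation (stated here from memory as a pointer, not as the paper's verbatim statement): the number of singular vector tuples of a generic \(m_1\times\cdots\times m_d\) tensor is the coefficient of \(\prod_i t_i^{m_i-1}\) in

```latex
c(m_1,\ldots,m_d)
  = \Bigl[\textstyle\prod_{i=1}^{d} t_i^{\,m_i-1}\Bigr]\;
    \prod_{i=1}^{d} \frac{\hat t_i^{\,m_i} - t_i^{\,m_i}}{\hat t_i - t_i},
  \qquad \hat t_i = \sum_{j\ne i} t_j .
```

For \(d=2\) this recovers \(\min(m_1,m_2)\), the number of singular pairs of a generic matrix.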

