Similar Literature
20 similar records found (search time: 15 ms)
1.
In this paper we study optimization problems involving eigenvalues of symmetric matrices. One of the difficulties with the numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not differentiable at points where they coalesce. Here we apply the $\mathcal{U}$-Lagrangian theory to a class of D.C. functions (differences of two convex functions): the arbitrary eigenvalue function λ_i with affine matrix-valued mappings, where λ_i is a D.C. function. We give the first- and second-order derivatives of the ${\mathcal{U}}$-Lagrangian in the space of decision variables R^m when the transversality condition holds. Moreover, an algorithmic framework with quadratic convergence is presented. Finally, we present an application to low-rank matrix optimization and list its $\mathcal{VU}$-decomposition results.

2.
The generalized qd algorithm for block band matrices is an extension of the block qd algorithm applied to a block tridiagonal matrix. This algorithm is applied to a positive definite symmetric block band matrix. The result obtained in the tridiagonal case concerning the behavior of the eigenvalues of the first and last diagonal blocks of the matrix containing the entries q^(k) remains valid for positive definite symmetric block band matrices: the eigenvalues of the first block form strictly increasing sequences and those of the last block form strictly decreasing sequences. The convergence theorem given in Draux and Sadik (Appl Numer Math 60:1300–1308, 2010) also remains valid in this more general case.
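In the scalar (tridiagonal) case underlying this abstract, the monotone behavior of the q-entries is easy to observe numerically. The sketch below is illustrative: the test matrix tridiag(1, 2, 1), the iteration count, and the tolerance are arbitrary choices, not the paper's block construction.

```python
import math

def qd_step(q, e):
    """One qd step: maps the LU factors (q, e) of a positive definite
    tridiagonal matrix T = L*U to those of the similar matrix U*L,
    using Rutishauser's rhombus rules."""
    n = len(q)
    qn = [0.0] * n
    en = [0.0] * (n - 1)
    qn[0] = q[0] + (e[0] if n > 1 else 0.0)
    for k in range(n - 1):
        en[k] = e[k] * q[k + 1] / qn[k]
        qn[k + 1] = q[k + 1] + (e[k + 1] if k + 1 < n - 1 else 0.0) - en[k]
    return qn, en

# LU factors of T = tridiag(1, 2, 1) of order 3 (eigenvalues 2 and 2 ± sqrt(2))
q = [2.0, 1.5, 4.0 / 3.0]
e = [0.5, 2.0 / 3.0]
history = [q[0]]
for _ in range(40):
    q, e = qd_step(q, e)
    history.append(q[0])

# q[0] increases strictly to the largest eigenvalue and q[-1] decreases
# to the smallest, mirroring the behavior described in the abstract.
assert all(a < b for a, b in zip(history, history[1:]))
assert abs(q[0] - (2.0 + math.sqrt(2.0))) < 1e-6
assert abs(q[-1] - (2.0 - math.sqrt(2.0))) < 1e-6
```

Each qd step is a similarity transformation (LU followed by UL), so the q-values approach the eigenvalues while the e-values decay to zero.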

3.
In this paper, we study nonlinear optimization problems involving eigenvalues of symmetric matrices. One of the difficulties in solving these problems is that the eigenvalue functions are not differentiable where the corresponding eigenvalue has multiplicity greater than one. We apply the \({\mathcal {U}}\)-Lagrangian theory to analyze the largest eigenvalue function of a convex matrix-valued mapping, which extends the corresponding results for linear mappings in the literature. We also provide formulas for the first- and second-order derivatives of the \({\mathcal {U}}\)-Lagrangian under mild assumptions. These theoretical results provide new second-order information about the largest eigenvalue function along a suitable smooth manifold and lead to a new algorithmic framework for analyzing the underlying optimization problem.

4.
We prove the Murnaghan–Nakayama rule for the k-Schur functions of Lapointe and Morse; that is, we give an explicit formula for the expansion of the product of a power sum symmetric function and a k-Schur function in terms of k-Schur functions. This is proved using the noncommutative k-Schur functions in the nilCoxeter algebra introduced by Lam and the affine analogue of the noncommutative symmetric functions of Fomin and Greene.
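For orientation, the classical (non-affine) Murnaghan–Nakayama rule computes irreducible symmetric-group characters by recursively stripping border strips. The sketch below uses the beta-number formulation; the function name and the S_3 examples are illustrative, and this is the classical rule, not the paper's k-Schur version.

```python
def mn_character(lam, mu):
    """Value of the irreducible S_n character chi^lam on the class mu,
    via the Murnaghan-Nakayama rule in beta-number form: removing a border
    strip of length r corresponds to replacing some beta-number b by b - r,
    with sign (-1)^(number of beta-numbers strictly between b - r and b)."""
    L = len(lam)
    beta = frozenset(lam[i] + (L - 1 - i) for i in range(L))

    def rec(beta, mu):
        if not mu:
            return 1  # all strips removed; the empty partition remains
        r, rest = mu[0], mu[1:]
        total = 0
        for b in beta:
            c = b - r
            if c >= 0 and c not in beta:
                sign = (-1) ** sum(1 for x in beta if c < x < b)
                total += sign * rec(beta - {b} | {c}, rest)
        return total

    return rec(beta, tuple(mu))

# Character table of S_3, recovered from the rule:
assert mn_character((3,), (1, 1, 1)) == 1     # trivial character, dimension 1
assert mn_character((2, 1), (1, 1, 1)) == 2   # standard character, dimension 2
assert mn_character((2, 1), (3,)) == -1
assert mn_character((2, 1), (2, 1)) == 0
assert mn_character((1, 1, 1), (2, 1)) == -1  # sign character on a transposition
```

The same strip-removal recursion is what the k-Schur analogue generalizes, with k-Schur functions replacing Schur functions in the expansion.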

5.
6.
A test of the equality of the first h eigenvectors of the covariance matrices of several populations is constructed without the assumption that the sampled distributions are Gaussian. It is proved that the test statistic is asymptotically chi-square distributed. In this general setting, an explicit formula for the column space of the asymptotic covariance matrix of the sample eigenvectors is derived and the rank of this matrix is computed. Essential assumptions in deriving the asymptotic distribution of the test statistic are the existence of finite fourth moments and the simplicity of the h largest eigenvalues of the population covariance matrices, which makes it possible to use the formulas for derivatives of eigenvectors of symmetric matrices.

7.
I outline a unified approach to characterizing the Fréchet, limiting Fréchet, and Clarke subgradients of an arbitrary function of the eigenvalues of a real symmetric matrix. In particular, I compute various subdifferentials of the k-th largest eigenvalue. This paper summarizes the results and techniques presented in detail in [4].

8.
A divisible design graph is a graph whose adjacency matrix is the incidence matrix of a divisible design. Divisible design graphs are a natural generalization of (v,k,λ)-graphs, and like (v,k,λ)-graphs they link combinatorial design theory with algebraic graph theory. The study of divisible design graphs benefits from, and contributes to, both areas. Using information about the eigenvalues of the adjacency matrix, we obtain necessary conditions for existence. Old results of Bose and Connor on symmetric divisible designs give further conditions and structural information. Many constructions are given using various combinatorial structures, such as (v,k,λ)-graphs, distance-regular graphs, symmetric divisible designs, Hadamard matrices, and symmetric balanced generalized weighing matrices. Several divisible design graphs are characterized in terms of their parameters.

9.
We prove that the Rieffel sharpness condition for a Banach space E is necessary and sufficient for every Lipschitz function f: [a, b] → E to be differentiable almost everywhere on the segment [a, b]. We establish that, when the sharpness condition is not satisfied, the major part (in the category sense) of Lipschitz functions has no derivative at any point of [a, b].

10.
A function F on the space of n×n real symmetric matrices is called spectral if it depends only on the eigenvalues of its argument, that is, F(A) = F(UAU^T) for every orthogonal U and symmetric A in its domain. Spectral functions are in one-to-one correspondence with the symmetric functions on R^n: those that are invariant under arbitrary permutations of their arguments. In this paper we show that a spectral function has a quadratic expansion around a point A if and only if its corresponding symmetric function has a quadratic expansion around λ(A) (the vector of eigenvalues). We also give a concise and easy-to-use formula for the 'Hessian' of the spectral function. In the case of convex functions we show that a positive definite 'Hessian' of f implies positive definiteness of the 'Hessian' of F.
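The defining invariance F(A) = F(UAU^T) can be checked numerically. A minimal sketch for the spectral function generated by the symmetric function f(λ1, λ2) = λ1² + λ2² (for which F(A) = trace(A²)); the 2×2 matrix and the rotation angle are arbitrary choices:

```python
import math

def eigvals_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean, disc = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    return (mean - disc, mean + disc)

def F(a, b, c):
    """Spectral function induced by the symmetric f(l1, l2) = l1^2 + l2^2."""
    l1, l2 = eigvals_sym2(a, b, c)
    return l1 ** 2 + l2 ** 2

a, b, c = 2.0, 1.0, -1.0
t = 0.7  # arbitrary angle for the rotation U = [[cos t, -sin t], [sin t, cos t]]
ct, st = math.cos(t), math.sin(t)
# Entries of U A U^T for A = [[a, b], [b, c]]
a2 = ct * ct * a - 2 * ct * st * b + st * st * c
b2 = ct * st * (a - c) + (ct * ct - st * st) * b
c2 = st * st * a + 2 * ct * st * b + ct * ct * c

assert abs(F(a, b, c) - F(a2, b2, c2)) < 1e-12        # orthogonal invariance
assert abs(F(a, b, c) - (a * a + 2 * b * b + c * c)) < 1e-12  # equals trace(A^2)
```

For this particular f the spectral function happens to be smooth everywhere; the paper's result covers the general case, where smoothness of F at A is tied to smoothness of f at λ(A).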

11.
The spread of a matrix with real eigenvalues is the difference between its largest and smallest eigenvalues. The Gerschgorin circle theorem can be used to bound the extreme eigenvalues of the matrix and hence its spread. For nonsymmetric matrices the Gerschgorin bound on the spread may be larger than the actual spread by an arbitrary factor, even if the matrix is symmetrizable. This is not true for real symmetric matrices. It is shown that for real symmetric (or complex Hermitian) matrices the ratio between the bound and the spread is bounded by p+1, where p is the maximum number of off-diagonal nonzeros in any row of the matrix. For full matrices this is just n. This bound is not quite sharp for n greater than 2, but examples with ratios of n−1 for all n are given. For banded matrices with m nonzero bands the maximum ratio is bounded by m, independent of n. This bound is sharp provided only that n is at least 2m. For sparse matrices, p may be quite small and the Gerschgorin bound may be surprisingly accurate.
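A small numerical illustration of the bound and of the p + 1 ratio guarantee for real symmetric matrices; the 2×2 example matrix is an arbitrary choice:

```python
import math

def gershgorin_spread_bound(A):
    """Upper bound on the spread from the Gerschgorin circle theorem:
    max_i (a_ii + R_i) - min_i (a_ii - R_i), where R_i is the sum of the
    absolute values of the off-diagonal entries in row i."""
    radii = [sum(abs(x) for j, x in enumerate(row) if j != i)
             for i, row in enumerate(A)]
    hi = max(A[i][i] + radii[i] for i in range(len(A)))
    lo = min(A[i][i] - radii[i] for i in range(len(A)))
    return hi - lo

# Real symmetric 2x2 example: eigenvalues are +/- sqrt(2), so spread = 2*sqrt(2).
A = [[1.0, 1.0], [1.0, -1.0]]
spread = 2.0 * math.sqrt(2.0)
bound = gershgorin_spread_bound(A)

# p = 1 off-diagonal nonzero per row, so the ratio is guaranteed <= p + 1 = 2.
assert spread <= bound <= 2.0 * spread
```

Here the bound is 4 against a true spread of about 2.83, a ratio of roughly 1.41, comfortably within the p + 1 guarantee.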

12.
We prove a weaker form of a conjecture concerning the representation theorem for locally defined operators acting between spaces of differentiable functions, presented by K. Lichawski, J. Matkowski and J. Miś [K. Lichawski, J. Matkowski, J. Miś, Locally defined operators in the space of differentiable functions, Bull. Polish Acad. Sci. Math. 37 (1989) 315-325]. As a corollary we obtain the representation formula for continuous locally defined operators mapping the Banach space C^m(A) into C^k(A), for k ≥ 2, where C^m(A) is the space of m-times Whitney continuously differentiable functions on a perfect set A ⊂ R.

13.
The converse of the Cauchy interlacing theorem, relating the eigenvalues of a real symmetric matrix to the eigenvalues of a principal submatrix and first proved by Fan and Pall, is extended to symmetric matrices with entries in an arbitrary formally real field.
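The forward direction of the interlacing theorem can be checked without any eigensolver: for the tridiagonal Toeplitz matrix T_n = tridiag(1, 2, 1), whose eigenvalues have a classical closed form, the leading principal (n−1)×(n−1) submatrix is T_{n−1}. The size n = 6 below is an arbitrary choice:

```python
import math

def toeplitz_tridiag_eigs(n):
    """Eigenvalues of the n x n matrix tridiag(1, 2, 1):
    2 + 2*cos(k*pi/(n+1)), k = 1..n (a classical closed form)."""
    return sorted(2.0 + 2.0 * math.cos(k * math.pi / (n + 1))
                  for k in range(1, n + 1))

n = 6
lam = toeplitz_tridiag_eigs(n)        # eigenvalues of T_6
mu = toeplitz_tridiag_eigs(n - 1)     # eigenvalues of its principal submatrix T_5

# Cauchy interlacing (ascending order): lam_k <= mu_k <= lam_{k+1}
assert all(lam[k] <= mu[k] <= lam[k + 1] for k in range(n - 1))
```

The converse treated in the paper asks the reverse question: given two interlacing sequences, construct a symmetric matrix and a principal submatrix realizing them, here over an arbitrary formally real field.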

14.
The (parabolic) second-order directional derivatives of singular values of matrices and of symmetric matrix-valued functions induced by real-valued functions play important roles in studying second-order optimality conditions for various matrix cone optimization problems. We propose a direct way to derive the formula of Torki (Nonlinear Anal 46:1133–1150, 2001) for the second-order directional derivative of any eigenvalue of a symmetric matrix, from which a formula for the second-order directional derivative of any singular value of a matrix is established. We also give a formula for the second-order directional derivative of a symmetric matrix-valued function. As applications, the second-order derivative of the projection operator onto the SDP cone is derived and used to obtain the second-order tangent set of the SDP cone in Bonnans and Shapiro (2000), and the tangent cone and the second-order tangent set of the epigraph of the nuclear norm are given as well.

15.
New elements of calculus on a complete real closed non-Archimedean field extension F of the real numbers R are presented. It is known that the total disconnectedness of F in the topology induced by the order makes the usual (topological) notions of continuity and differentiability too weak to extend real calculus results to F. In this paper, we introduce stronger concepts of continuity and differentiability, which we call derivate continuity and derivate differentiability [2, 12]; and we show that derivate continuous and differentiable functions satisfy the usual addition, product and composition rules, and that n-times derivate differentiable functions satisfy a Taylor formula with remainder similar to that of the real case. Then we generalize the definitions of derivate continuity and derivate differentiability to multivariable F-valued functions and prove related results that are useful for doing analysis on F and F^n in general.

16.
We show how Van Loan's method for annulling the (2,1) block of skew-Hamiltonian matrices by symplectic-orthogonal similarity transformations generalizes to general matrices and provides a numerical algorithm for solving the general quadratic matrix equation. For skew-Hamiltonian matrices we find their canonical form under similarity transformations and the class of all symplectic-orthogonal similarity transformations that annul the (2,1) block and simultaneously bring the (1,1) block to Hessenberg form. We present a structure-preserving algorithm for the solution of the continuous-time algebraic Riccati equation. Unlike other methods in the literature, the final transformed Hamiltonian matrix is not in Hamiltonian–Schur form. Three applications are presented: (a) for a special system of second-order partial differential equations for a single unknown function, we obtain the matrix of second-order partial derivatives of the unknown function using only algebraic operations and differentiation of functions; (b) a similarity transformation of a complex matrix into a symmetric (and tridiagonal) one by applying only finitely many algebraic transformations; and (c) a finite-step reduction of the eigenvalue–eigenvector problem of a Hermitian matrix to the eigenvalue–eigenvector problem of a real symmetric matrix of the same dimension. Copyright © 1999 John Wiley & Sons, Ltd.

17.
Principal component analysis recursively estimates the eigenvectors and the corresponding eigenvalues of a symmetric matrix A based on its noisy observations A_k = A + N_k, where A is allowed to have arbitrary eigenvalues with multiplicity possibly greater than one. In this paper, recursive algorithms are proposed and their ordered convergence is established: it is shown that the first algorithm converges a.s. to a unit eigenvector corresponding to the largest eigenvalue; the second algorithm converges a.s. to a unit eigenvector corresponding to either the second largest eigenvalue, if the largest eigenvalue is simple, or the largest eigenvalue, if its multiplicity is greater than one; and so on. The convergence rate is also derived.
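A minimal stochastic-approximation sketch in the spirit of the first algorithm (estimating the largest-eigenvalue direction from noisy observations). The matrix A, the noise model, the gain sequence, and the iteration count are all illustrative assumptions, not the paper's exact scheme:

```python
import math
import random

random.seed(0)
A = [[2.0, 0.0], [0.0, 1.0]]   # true top eigenvector: (1, 0)

def noisy_observation():
    """A_k = A + N_k with small zero-mean symmetric noise."""
    N = [[random.gauss(0.0, 0.1) for _ in range(2)] for _ in range(2)]
    return [[A[i][j] + (N[i][j] + N[j][i]) / 2.0 for j in range(2)]
            for i in range(2)]

u = [1.0 / math.sqrt(2.0)] * 2      # arbitrary unit starting vector
for k in range(1, 20001):
    Ak = noisy_observation()
    gain = 1.0 / (k + 100.0)        # decreasing step sizes
    v = [u[i] + gain * sum(Ak[i][j] * u[j] for j in range(2)) for i in range(2)]
    norm = math.hypot(v[0], v[1])
    u = [x / norm for x in v]       # re-normalize after each update

# The iterate approaches (up to sign) a unit eigenvector for the
# largest eigenvalue, as the ordered-convergence result describes.
assert abs(abs(u[0]) - 1.0) < 0.05
```

Subsequent directions would be estimated analogously after deflating (projecting out) the directions already found, which is where the ordering in the paper's convergence result comes from.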

18.
If A has no eigenvalues on the closed negative real axis and B is an arbitrary square complex matrix, the matrix-matrix exponentiation is defined as A^B := e^{log(A)B}. It arises, for instance, in von Neumann's quantum-mechanical entropy, which in turn finds applications in other areas of science and engineering. In this paper, we revisit this function and derive new related results. Particular emphasis is devoted to its Fréchet derivative and conditioning. We propose a new definition of bivariate matrix function and derive general results on their Fréchet derivatives, which apply not only to the matrix-matrix exponentiation but also to other known functions, such as means of two matrices, second-order Fréchet derivatives, and some iteration functions arising in matrix iterative methods. The numerical computation of the Fréchet derivative is discussed and an algorithm for computing the relative condition number of A^B is proposed. Some numerical experiments are included.
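The definition A^B := e^{log(A)B} can be evaluated directly for small examples. A sketch restricted to symmetric positive definite 2×2 A (so the principal logarithm comes from the eigendecomposition) with a plain truncated series for the exponential; the helper names and the sanity-check matrices are illustrative, not the paper's algorithm:

```python
import math

def logm_spd2(A):
    """Principal logarithm of a symmetric positive definite 2x2 matrix,
    via its eigendecomposition A = Q diag(l1, l2) Q^T."""
    a, b, c = A[0][0], A[0][1], A[1][1]
    mean, disc = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    l1, l2 = mean + disc, mean - disc
    if disc < 1e-14:                       # numerically a multiple of I
        return [[math.log(l1), 0.0], [0.0, math.log(l1)]]
    t = 0.5 * math.atan2(2.0 * b, a - c)   # rotation angle of Q
    ct, st = math.cos(t), math.sin(t)
    g1, g2 = math.log(l1), math.log(l2)
    return [[ct * ct * g1 + st * st * g2, ct * st * (g1 - g2)],
            [ct * st * (g1 - g2), st * st * g1 + ct * ct * g2]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(X, terms=30):
    """Truncated Taylor series for the 2x2 matrix exponential."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in matmul2(P, X)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

def matrix_power_AB(A, B):
    """A^B := exp(log(A) * B), following the definition in the abstract."""
    return expm2(matmul2(logm_spd2(A), B))

# Sanity check: for commuting diagonal A and B, A^B is elementwise a_i ** b_i.
A = [[4.0, 0.0], [0.0, 9.0]]
B = [[0.5, 0.0], [0.0, 0.5]]
C = matrix_power_AB(A, B)
assert abs(C[0][0] - 2.0) < 1e-9 and abs(C[1][1] - 3.0) < 1e-9
assert abs(C[0][1]) < 1e-9 and abs(C[1][0]) < 1e-9
```

Production code would use a scaling-and-squaring exponential and an inverse-scaling-and-squaring logarithm rather than these toy routines, but the composition is exactly the defining formula.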

19.
Given a symmetric matrix A(x) = [a_ij(x)] depending on a parameter x, we study the second-order sensitivity of all the eigenvalues λ_m(x) of A(x). Under smoothness assumptions such as the a_ij being twice differentiable, we prove that the second-order directional derivatives (which naturally arise in the context of nonsmooth functions) of λ_m exist, are equal, and we give their explicit expression in terms of the data of the parametrized matrix. The techniques used are quite different from those for first-order sensitivity; they rely on sharp results by G. W. Stewart concerning the perturbation of invariant subspaces.
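For context, the first-order sensitivity result that this paper goes beyond is the classical formula λ'(x) = u(x)^T A'(x) u(x) for a simple eigenvalue with unit eigenvector u(x). A finite-difference check of that formula on an illustrative parametrized 2×2 family (the family and the evaluation point are arbitrary choices):

```python
import math

def lam_max(x):
    """Largest eigenvalue of A(x) = [[1 + x, x], [x, 2]]."""
    a, b, c = 1.0 + x, x, 2.0
    return (a + c) / 2.0 + math.hypot((a - c) / 2.0, b)

def lam_max_derivative(x):
    """u^T A'(x) u for the unit eigenvector u of the (simple) largest
    eigenvalue: the classical first-order sensitivity formula."""
    a, b, c = 1.0 + x, x, 2.0
    t = 0.5 * math.atan2(2.0 * b, a - c)   # eigenvector angle
    u = (math.cos(t), math.sin(t))
    dA = [[1.0, 1.0], [1.0, 0.0]]          # A'(x), constant for this family
    return sum(u[i] * dA[i][j] * u[j] for i in range(2) for j in range(2))

x, h = 0.3, 1e-6
fd = (lam_max(x + h) - lam_max(x - h)) / (2.0 * h)   # central difference
assert abs(lam_max_derivative(x) - fd) < 1e-6
```

The paper's contribution concerns the next order: the second-order directional derivatives of λ_m, including at parameter values where eigenvalues coalesce and the formula above no longer applies.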

20.
We investigate lower bounds for the eigenvalues of perturbations of matrices. In the footsteps of Weyl and of Ipsen and Nadler, we develop approximating matrices whose eigenvalues are lower bounds for the eigenvalues of the perturbed matrix. The number of available eigenvalues and eigenvectors of the original matrix determines how close those approximations can be, and, if the perturbation is of low rank, such bounds are relatively inexpensive to obtain. Moreover, because the process need not be restricted to the eigenvalues of perturbed matrices, lower bounds for the eigenvalues of bordered diagonal matrices, as well as for the singular values of rank-k perturbations and other updates of n×m matrices, are given.
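The simplest bound of this kind is Weyl's inequality λ_k(A + E) ≥ λ_k(A) + λ_min(E) for symmetric A and E, which the paper's constructions refine. An illustrative 2×2 check (matrices chosen arbitrarily):

```python
import math

def eigvals_sym2(M):
    """Ascending eigenvalues of a symmetric 2x2 matrix."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    mean, disc = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    return (mean - disc, mean + disc)

A = [[2.0, 0.0], [0.0, 1.0]]
E = [[0.5, 0.2], [0.2, 0.1]]
AE = [[A[i][j] + E[i][j] for j in range(2)] for i in range(2)]

lam, eps, pert = eigvals_sym2(A), eigvals_sym2(E), eigvals_sym2(AE)
# Weyl: lam_k(A + E) >= lam_k(A) + lam_min(E), the cheapest such lower bound
assert all(pert[k] >= lam[k] + eps[0] - 1e-12 for k in range(2))
```

Weyl's bound uses only λ_min(E); the bounds in the paper tighten it by exploiting whatever eigenvalues and eigenvectors of the original matrix are available, which pays off especially when E has low rank.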


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号