1.
The topic of this paper is the convergence analysis of subspace gradient iterations for the simultaneous computation of a few of the smallest eigenvalues and associated eigenvectors of a symmetric and positive definite matrix pair (A, M). The methods are based on subspace iterations for A^{-1}M and use the Rayleigh-Ritz procedure for convergence acceleration. New sharp convergence estimates are proved by generalizing estimates that have been presented for vectorial steepest descent iterations (see SIAM J. Matrix Anal. Appl., 32(2):443-456, 2011). Copyright © 2013 John Wiley & Sons, Ltd.
2.
This paper studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be recast as an unconstrained optimization problem on the symmetric positive definite matrix group, an intrinsic steepest descent algorithm with adaptive variable step sizes is proposed. Then, using the Taylor expansion with integral remainder of a smooth function on a Lie group at an arbitrary point, the proposed algorithm is shown to converge linearly on the symmetric positive definite matrix group. Finally, numerical experiments on classification problems demonstrate the effectiveness of the algorithm.
3.
Claudio H. Morales Charles E. Chidume 《Proceedings of the American Mathematical Society》1999,127(12):3677-3683
Let X be a uniformly smooth Banach space and let T: X → X be a bounded demicontinuous mapping, which is also φ-strongly accretive on X. Let z ∈ X and let x_0 be an arbitrary initial value in X. Then the approximating scheme
x_{n+1} = x_n − c_n (T x_n − z), n ≥ 0,
converges strongly to the unique solution of the equation T x = z, provided that the sequence (c_n) fulfills suitable conditions.
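A minimal numerical sketch of such a scheme: in R^n a strongly accretive operator is strongly monotone, so the iteration can be illustrated with a linear map T(x) = Ax for a symmetric positive definite A. The matrix, the step rule c_n, and the tolerance below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (assumptions ours, not the paper's): T(x) = A x with A
# symmetric positive definite is strongly monotone on R^2, and we run the
# approximating scheme x_{n+1} = x_n - c_n (T x_n - z).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
z = np.array([1.0, 2.0])

x = np.zeros(2)
for n in range(1, 2001):
    c_n = 1.0 / (n + 10)        # c_n -> 0 with divergent sum, as such schemes require
    x = x - c_n * (A @ x - z)

x_star = np.linalg.solve(A, z)  # unique solution of T x = z
print(np.allclose(x, x_star, atol=1e-3))
```

With these step sizes the error contracts by roughly exp(-λ_min Σ c_n), so the iterates approach the unique solution of T x = z.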
4.
To minimize a continuously differentiable quasiconvex function f: R^n → R, Armijo's steepest descent method generates a sequence x^{k+1} = x^k − t_k ∇f(x^k), where t_k > 0. We establish strong convergence properties of this classic method: either x^k → x̄ such that ∇f(x̄) = 0; or arg min f = ∅, ‖x^k‖ → ∞ and f(x^k) ↓ inf f. We also discuss extensions to other line searches. The research of the first author was supported by the Polish Academy of Sciences. The second author acknowledges the support of the Department of Industrial Engineering, Hong Kong University of Science and Technology. We wish to thank two anonymous referees for their valuable comments. In particular, one referee has suggested the use of quasiconvexity instead of convexity of f.
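A short sketch of the method the abstract describes: steepest descent with Armijo backtracking line search. The function names and parameter values (sigma, beta) are illustrative choices, not the paper's.

```python
import numpy as np

# Minimal sketch of Armijo's steepest descent for a smooth (quasi)convex f.
# All names and constants (f, grad_f, sigma, beta) are illustrative.
def armijo_steepest_descent(f, grad_f, x0, sigma=1e-4, beta=0.5,
                            tol=1e-10, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Armijo rule: shrink t until the sufficient-decrease condition holds
        while f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# Example: a convex (hence quasiconvex) quadratic with minimizer at the origin
f = lambda x: 0.5 * x @ x
x_min = armijo_steepest_descent(f, lambda x: x, [3.0, -4.0])
print(np.linalg.norm(x_min))
```

For this quadratic the very first Armijo trial step t = 1 is accepted and the iterate jumps to the minimizer.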
5.
Mohamed Habibullah S. K. Katti 《Annals of the Institute of Statistical Mathematics》1991,43(2):391-404
In maximizing a non-linear function G(θ), it is well known that the steepest descent method has a slow convergence rate. Here we propose a systematic procedure to obtain a 1-1 transformation on the variables θ, so that in the space of the transformed variables, the steepest descent method produces the solution faster. The final solution in the original space is obtained by taking the inverse transformation. We apply the procedure in maximizing the likelihood functions of some generalized distributions which are widely used in modeling count data. It is shown that for these distributions, the steepest descent method via transformations produced the solutions very fast. It is also observed that the proposed procedure can be used to expedite the convergence rate of first-derivative-based algorithms, such as the Polak-Ribière and Fletcher-Reeves conjugate gradient methods, as well.
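A toy illustration of the underlying idea (our own, not the authors' procedure): on an ill-conditioned quadratic, steepest descent zigzags slowly, but after a 1-1 change of variables that makes the level sets spherical, a single exact-line-search step reaches the optimum, and the answer is mapped back by the inverse transformation.

```python
import numpy as np

# Toy sketch, not the paper's procedure: minimizing 0.5*x'Dx with an
# ill-conditioned D is slow for steepest descent, but the substitution
# y = D^{1/2} x makes the objective isotropic, so one step suffices.
D = np.diag([1.0, 100.0])

def exact_sd(grad, hess, x, steps):
    for _ in range(steps):
        g = grad(x)
        if g @ g == 0.0:
            break
        t = (g @ g) / (g @ hess @ g)   # exact line search for a quadratic
        x = x - t * g
    return x

x0 = np.array([1.0, 0.01])             # a worst-case start for this D
x_slow = exact_sd(lambda x: D @ x, D, x0, steps=10)

S = np.diag([1.0, 10.0])               # D^{1/2}: the 1-1 transformation
y = exact_sd(lambda y: y, np.eye(2), S @ x0, steps=10)
x_fast = np.linalg.solve(S, y)         # inverse transformation x = D^{-1/2} y

print(np.linalg.norm(x_slow), np.linalg.norm(x_fast))
```

After ten steps the untransformed iterate is still far from the minimizer at the origin, while the transformed run lands on it in one step.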
6.
7.
In this paper, we introduce a novel projected steepest descent iterative method with frozen derivative. The classical projected steepest descent iterative method involves the computation of derivative of the nonlinear operator at each iterate. The method of this paper requires the computation of derivative of the nonlinear operator only at an initial point. We exhibit the convergence analysis of our method by assuming the conditional stability of the inverse problem on a convex and compact set. Further, by assuming the conditional stability on a nested family of convex and compact subsets, we develop a multi-level method. In order to enhance the accuracy of approximation between neighboring levels, we couple it with the growth of stability constants. This along with a suitable discrepancy criterion ensures that the algorithm proceeds from level to level and terminates within finite steps. Finally, we discuss an inverse problem on which our methods are applicable.
8.
Ciyou Zhu 《Mathematical Programming》1994,67(1-3):53-76
This paper shows that the primal-dual steepest descent algorithm developed by Zhu and Rockafellar for large-scale extended linear-quadratic programming can be used in solving constrained minimax problems related to a general C^2 saddle function. It is proved that the algorithm converges linearly from the very beginning of the iteration if the related saddle function is strongly convex-concave uniformly and the cross elements between the convex part and the concave part of the variables in its Hessian are bounded on the feasible region. Better bounds for the asymptotic rates of convergence are also obtained. The minimax problems where the saddle function has linear cross terms between the convex part and the concave part of the variables are discussed specifically as a generalization of the extended linear-quadratic programming. Some fundamental features of these problems are laid out and analyzed. This work was supported by an Eliezer Naddor Postdoctoral Fellowship in Mathematical Sciences at the Department of Mathematical Sciences, the Johns Hopkins University, during the year 1991-92.
9.
Luca Bergamaschi Giuseppe Gambolati Giorgio Pini 《Numerical Linear Algebra with Applications》1997,4(2):69-84
Recently an efficient method (DACG) for the partial solution of the symmetric generalized eigenproblem A x = δB x has been developed, based on the conjugate gradient (CG) minimization of the Rayleigh quotient over successive deflated subspaces of decreasing size. The present paper provides a numerical analysis of the asymptotic convergence rate ρj of DACG in the calculation of the eigenpair λj, u j, when the scheme is preconditioned with A−1. It is shown that, when the search directions are A-conjugate, ρj is well approximated by 4/ξj, where ξj is the Hessian condition number of a Rayleigh quotient defined in appropriate oblique complements of the space spanned by the leftmost eigenvectors u 1, u 2,…, u j−1 already calculated. It is also shown that 1/ξj is equal to the relative separation between the eigenvalue λj currently sought and the next higher one λj+1. A modification of DACG (MDACG) is studied, which involves a new set of CG search directions which are made M-conjugate, with M a matrix approximating the Hessian. By distinction, MDACG has an asymptotic rate of convergence which appears to be inversely proportional to the square root of ξj, in complete agreement with the theoretical results known for the CG solution to linear systems. © 1997 by John Wiley & Sons, Ltd.
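The core idea of minimizing the Rayleigh quotient over growing search spaces can be sketched as follows. This is our own simplified variant, not DACG itself: it uses plain gradient directions with a Rayleigh-Ritz step over span{x, g}, without the conjugate directions, deflation, or preconditioning of the paper, and takes B = I for simplicity.

```python
import numpy as np

# Hedged sketch (matrix and tolerances ours): steepest descent on the
# Rayleigh quotient, accelerated by a 2x2 Rayleigh-Ritz step over span{x, g}.
A = np.diag(np.arange(1.0, 11.0))   # SPD test matrix with eigenvalues 1..10
rng = np.random.default_rng(0)
x = rng.standard_normal(10)

for _ in range(200):
    lam = (x @ A @ x) / (x @ x)     # Rayleigh quotient
    g = A @ x - lam * x             # (scaled) gradient of the quotient
    if np.linalg.norm(g) < 1e-12:
        break
    V, _ = np.linalg.qr(np.column_stack([x, g]))  # orthonormal basis of span{x, g}
    w, Y = np.linalg.eigh(V.T @ A @ V)            # small Rayleigh-Ritz problem
    x = V @ Y[:, 0]                               # Ritz vector for the smallest value

print(lam)  # approaches the smallest eigenvalue
```

Each iteration is at least as good as an exact line search along the negative gradient, so lam decreases monotonically toward the smallest eigenvalue.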
10.
Z. J. Huang 《Journal of Optimization Theory and Applications》1992,75(2):331-344
In this paper, by a further investigation of the algorithm structure of the nonlinear block scaled ABS methods, we convert it into an inexact Newton method. Based on this equivalent version, we establish the semilocal convergence theorem of the nonlinear block scaled ABS methods and obtain convergence conditions that mainly depend on the behavior of the mapping at the initial point. This complements the convergence theory of the nonlinear block scaled ABS methods.
11.
This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the case where the expected-value function is continuously differentiable; and the other, when that function is nondifferentiable but the sample performance functions are convex. The proofs are based on a version of the uniform law of large numbers which is provable for many discrete event systems where infinitesimal perturbation analysis is known to be strongly consistent.
12.
Quan Yu Xinzhen Zhang Yannan Chen Liqun Qi 《Numerical Linear Algebra with Applications》2023,30(3):e2464
Low Tucker rank tensor completion has wide applications in science and engineering. Many existing approaches deal with the Tucker rank through the ranks of unfolding matrices. However, unfolding a tensor to a matrix destroys the data's original multi-way structure, resulting in vital information loss and degraded performance. In this article, we establish a relationship between the Tucker ranks and the ranks of the factor matrices in Tucker decomposition. Then, we reformulate the low Tucker rank tensor completion problem as a multilinear low rank matrix completion problem. For the reformulated problem, a symmetric block coordinate descent method is customized. For each matrix rank minimization subproblem, the classical truncated nuclear norm minimization is adopted. Furthermore, temporal characteristics in image and video data are introduced to such a model, which benefits the performance of the method. Numerical simulations illustrate the efficiency of our proposed models and methods.
13.
The block‐Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block‐Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single‐vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert‐block‐Lanczos methods for solving generalized matrix eigenvalue problems.
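For orientation, a stripped-down block-Lanczos iteration can be sketched as below. This is our own minimal variant with full reorthogonalization; the restarting strategies and cluster-robust estimates analyzed in the paper are not reproduced here, and the test matrix is an illustrative choice.

```python
import numpy as np

# Minimal block-Lanczos sketch (our own variant, full reorthogonalization):
# build an orthonormal block Krylov basis and extract Ritz values.
def block_lanczos(A, X0, steps):
    Q, _ = np.linalg.qr(X0)
    basis = [Q]
    for _ in range(steps):
        W = A @ Q
        for P in basis:                 # orthogonalize against all earlier blocks
            W = W - P @ (P.T @ W)
        Q, _ = np.linalg.qr(W)
        basis.append(Q)
    V = np.hstack(basis)                # block Krylov basis
    return np.linalg.eigvalsh(V.T @ A @ V)  # Ritz values

# symmetric test matrix: bulk spectrum 1..47 plus three isolated eigenvalues
A = np.diag(np.r_[np.arange(1.0, 48.0), [100.0, 110.0, 120.0]])
rng = np.random.default_rng(1)
ritz = block_lanczos(A, rng.standard_normal((50, 3)), steps=10)
print(ritz[-3:])  # close to the isolated eigenvalues 100, 110, 120
```

Because the three largest eigenvalues are well separated from the bulk, the corresponding Ritz values converge after only a few block steps, which is exactly the regime the method targets.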
14.
I. Pultarová 《Numerical Linear Algebra with Applications》2016,23(2):373-390
An asymptotic convergence analysis of a new multilevel method for numerical solution of eigenvalues and eigenvectors of symmetric and positive definite matrices is performed. The analyzed method is a generalization of the original method that has recently been proposed by R. Kužel and P. Vaněk (DOI: 10.1002/nla.1975) and uses a standard multigrid prolongator matrix enriched by one full column vector, which approximates the first eigenvector. The new generalized eigensolver is designed to compute eigenvectors. Their asymptotic convergence in terms of the generalized residuals is proved, and its convergence factor is estimated. The theoretical analysis is illustrated by numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
15.
Takeshi Iwashita Kota Ikehara Takeshi Fukaya Takeshi Mifune 《Numerical Linear Algebra with Applications》2023,30(6):e2512
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We develop a new algebraic auxiliary matrix construction method based on error vector sampling in which eigenvectors with small eigenvalues are efficiently identified in the solution process. We use the generated auxiliary matrix for convergence acceleration in the following solution step. Numerical tests confirm that both SC and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. We also present the algorithm of the preconditioned conjugate gradient method with condition number estimation.
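The effect an auxiliary subspace has on an iterative solver can be illustrated with a small sketch. This is not the authors' error-vector-sampling construction: here the auxiliary matrix W is simply taken to be the exact eigenvectors of the two smallest eigenvalues, and a Galerkin correction on range(W) is applied before running plain conjugate gradients.

```python
import numpy as np

# Sketch of a subspace-correction / deflation-style step (names and test
# problem are illustrative): correct on the auxiliary subspace W first,
# then run plain CG on the remaining, well-conditioned error components.
def cg(A, b, x0, tol=1e-10, max_iter=500):
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for k in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, max_iter

# SPD system whose two tiny eigenvalues slow plain CG down
eigs = np.r_[[1e-4, 1e-3], np.linspace(1.0, 2.0, 98)]
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(100)

x_plain, it_plain = cg(A, b, np.zeros(100))

W = Q[:, :2]                                     # auxiliary subspace: bad eigenvectors
x0 = W @ np.linalg.solve(W.T @ A @ W, W.T @ b)   # Galerkin (coarse) correction
x_sc, it_sc = cg(A, b, x0)

print(it_plain, it_sc)  # the corrected run needs fewer iterations
```

After the correction the residual is orthogonal to range(W), so CG effectively sees only the well-conditioned part of the spectrum, which is the mechanism the abstract relies on.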
16.
Luciano Galeone Roberto Garrappa 《Journal of Computational and Applied Mathematics》1997,80(2):377-195
We study convergence properties of time-point relaxation (TR) Runge-Kutta methods for linear systems of ordinary differential equations. TR methods are implemented by decoupling systems in Gauss-Jacobi, Gauss-Seidel and successive overrelaxation modes (continuous-time iterations) and then solving the resulting subsystems by means of continuous extensions of Runge-Kutta (CRK) methods (discretized iterations). By iterating to convergence, these methods tend to the same limit, called the diagonally split Runge-Kutta (DSRK) method. We prove that TR methods are equivalent to decoupling, in the same modes, the linear algebraic system obtained by applying the DSRK limit method. This allows us to study the convergence of TR methods by using standard principles of convergence of iterative methods for linear algebraic systems. For a particular problem, regions of convergence are plotted.
17.
18.
We study the convergence properties of an iterative method for a variational inequality defined on a solution set of the split common fixed point problem. The method involves Landweber-type operators related to the problem as well as their extrapolations in an almost cyclic way. The evaluation of these extrapolations does not require prior knowledge of the matrix norm. We prove the strong convergence under the assumption that the operators employed in the method are approximately shrinking.
19.
20.
Convergence analysis of implicit iterative algorithms for asymptotically nonexpansive mappings
In this paper, we consider the weak and strong convergence of an implicit iterative process with errors for two finite families of asymptotically nonexpansive mappings in the framework of Banach spaces. Our results presented in this paper improve and extend the recent ones announced by many others.