Similar Articles (20 results)
1.
Iterative orthogonalization aims to ensure a small deviation from orthogonality in the Gram–Schmidt process. Previous applications of this technique are restricted to classical Gram–Schmidt (CGS) and column-oriented modified Gram–Schmidt (MGS). The main aim of this paper is to explain how iterative orthogonalization is incorporated into row-oriented MGS. Our interest in a row-oriented iterative MGS comes from the observation that this method is capable of performing column pivoting. Column pivoting delays the deteriorating effects of rounding errors and helps to handle rank-deficient least-squares problems.

A second modification proposed in this paper concerns the use of Gram–Schmidt QR factorization for solving linear least-squares problems. The standard solution method is based on one orthogonalization of the r.h.s. vector b against the columns of Q. The outcome of this process is the residual vector, r*, and the solution vector, x*. The modified scheme is a natural extension of the standard solution method that incorporates iterative orthogonalization. This feature ensures accurate computation of small residuals and helps in cases when Q has some deviation from orthogonality.
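As a rough illustration of the two ideas above (iterative reorthogonalization inside MGS, and reorthogonalizing the right-hand side when solving least squares), here is a minimal NumPy sketch. It uses the column-oriented MGS variant rather than the paper's row-oriented one, and the function names and the fixed two-pass scheme (`reorth=2`) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mgs_qr(A, reorth=2):
    """Modified Gram-Schmidt QR with a fixed number of
    reorthogonalization passes per column (a sketch)."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        for _ in range(reorth):              # iterative orthogonalization
            for j in range(k):
                s = Q[:, j] @ Q[:, k]
                R[j, k] += s                 # accumulate into R across passes
                Q[:, k] -= s * Q[:, j]
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
    return Q, R

def ls_solve(A, b, reorth=2):
    """Solve min ||Ax - b|| by orthogonalizing b against the columns
    of Q, with the same iterative reorthogonalization applied to b."""
    Q, R = mgs_qr(A, reorth)
    r = b.astype(float).copy()
    c = np.zeros(A.shape[1])
    for _ in range(reorth):                  # reorthogonalize the residual too
        for j in range(A.shape[1]):
            s = Q[:, j] @ r
            c[j] += s
            r -= s * Q[:, j]
    x = np.linalg.solve(np.triu(R), c)       # back substitution: R x = Q^T b
    return x, r
```

With a well-conditioned Q a single pass suffices; the second pass is the classical "twice is enough" safeguard that keeps the computed residual orthogonal to range(A).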


2.
The discrete prolate spheroidal sequences (DPSSs) provide an efficient representation for discrete signals that are perfectly timelimited and nearly bandlimited. Due to the high computational complexity of projecting onto the DPSS basis – also known as the Slepian basis – this representation is often overlooked in favor of the fast Fourier transform (FFT). We show that there exist fast constructions for computing approximate projections onto the leading Slepian basis elements. The complexity of the resulting algorithms is comparable to that of the FFT, and scales favorably as the quality of the desired approximation is increased. In the process of bounding the complexity of these algorithms, we also establish new nonasymptotic results on the eigenvalue distribution of discrete time–frequency localization operators. We then demonstrate how these algorithms allow us to efficiently compute the solution to certain least-squares problems that arise in signal processing. We also provide simulations comparing these fast, approximate Slepian methods to exact Slepian methods as well as to traditional FFT-based methods.
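For context, the exact (dense) Slepian projection that the paper's fast algorithms approximate can be written in a few lines with SciPy's `scipy.signal.windows.dpss`. The sizes `N`, `NW`, and `K` below are arbitrary illustrative choices; this is the O(NK) baseline, not the fast construction from the abstract.

```python
import numpy as np
from scipy.signal.windows import dpss

N, NW, K = 256, 4, 7                # signal length, time-half-bandwidth, basis size
S = dpss(N, NW, Kmax=K)             # K leading Slepian basis vectors, shape (K, N)

rng = np.random.default_rng(0)
x = rng.standard_normal(N)
coeffs = S @ x                      # project onto the leading Slepian elements
x_proj = S.T @ coeffs               # reconstruction from the Slepian coefficients
```

Because the Slepian vectors are orthonormal eigenvectors of a symmetric tridiagonal matrix, `S @ S.T` is the identity and the projection is idempotent.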

3.
In some previous papers the author extended two algorithms proposed by Z. Kovarik for approximate orthogonalization of a finite set of linearly independent vectors from a Hilbert space to the case when the vectors are rows (not necessarily linearly independent) of an arbitrary rectangular matrix. In this paper we describe combinations of these two methods with the classical Kaczmarz iteration. We prove that, in the case of a consistent least-squares problem, the new algorithms so obtained converge to any of its solutions (depending on the initial approximation). The numerical experiments described in the last section of the paper, on a problem obtained after the discretization of a first-kind integral equation, illustrate the fast convergence of the new algorithms.
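The classical Kaczmarz iteration that the paper combines with Kovarik's methods projects the current iterate onto the hyperplane of each row in turn. A minimal NumPy sketch (of the plain Kaczmarz sweep only, not the paper's combined algorithms):

```python
import numpy as np

def kaczmarz(A, b, x0, sweeps=200):
    """Cyclic Kaczmarz sweeps for A x = b: project the iterate onto
    the hyperplane {x : a_i @ x = b_i} of each row a_i in turn."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```

For a consistent system the iterates converge linearly to the solution closest to `x0`; starting from zero this is the minimal-norm solution.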

4.
In 1907, Erhard Schmidt published a paper in which he introduced an orthogonalization algorithm that has since become known as the classical Gram‐Schmidt process. Schmidt claimed that his procedure was essentially the same as an earlier one published by J. P. Gram in 1883. The Schmidt version was the first to become popular and widely used. An algorithm related to a modified version of the process appeared in an 1820 treatise by P. S. Laplace. Although related algorithms have been around for almost 200 years, it is the Schmidt paper that led to the popularization of orthogonalization techniques. The year 2007 marked the 100th anniversary of that paper. In celebration of that anniversary, we present a comprehensive survey of the research on Gram‐Schmidt orthogonalization and its related QR factorization. Its applications to solving least-squares problems and to Krylov subspace methods are also reviewed. Software and implementation aspects are discussed as well. Copyright © 2012 John Wiley & Sons, Ltd.
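The classical and modified processes that the survey contrasts differ only in which vector is projected at each step, yet their numerical behavior differs sharply: the loss of orthogonality of the computed Q grows roughly like eps·κ(A)² for CGS versus eps·κ(A) for MGS. A small sketch (illustrative only; no pivoting or reorthogonalization):

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """One routine for both processes: CGS projects the ORIGINAL
    column at each step, MGS projects the partially updated one."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for j in range(k):
            R[j, k] = Q[:, j] @ (v if modified else A[:, k])
            v -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R
```

On a matrix with condition number around 1e8 (built from a prescribed singular-value profile), both variants reconstruct A = QR to machine precision, but the orthogonality error of the CGS factor is orders of magnitude larger than that of the MGS factor.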

5.
Constantin Popa, PAMM 2008, 8(1):10823–10824
In this paper we consider three versions of Kovarik's iterative orthogonalization algorithms for approximating the minimal-norm solution of symmetric least-squares problems. Although the convergence of these algorithms is linear, in practical applications we observed that too many iterations can dramatically degrade the approximation already obtained. In this respect we analyse the above-mentioned Kovarik-like methods according to the modifications they make to the “machine zero” eigenvalues of the (symmetric) problem matrix. We establish a theoretical, almost optimal formula for the number of iterations necessary to obtain a sufficiently accurate approximation while avoiding the difficulties mentioned above. Experiments on a collocation discretization of a Fredholm first-kind integral equation illustrate the efficiency of our considerations. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
The numerical stability of the lattice algorithm for least-squares linear prediction problems is analysed. The lattice algorithm is an orthogonalization method for solving such problems and as such is in principle to be preferred to normal-equations approaches. By performing a first-order analysis of the method and comparing the results with perturbation bounds available for least-squares problems, it is argued that the lattice algorithm is stable and in fact comparable in accuracy to other known stable but less efficient methods for least-squares problems. Dedicated to Germund Dahlquist on the occasion of his 60th birthday. This work was partially supported by NSF Grant MCS-8003364 and contracts AFOSR 82-0210, ARO DAAG29-82-K-0082.
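The abstract's point that an orthogonalization method is in principle preferable to the normal equations can be illustrated numerically. In this sketch NumPy's QR/SVD-based least-squares solver stands in for the lattice algorithm (which is not reproduced here); the normal equations square the condition number, while the orthogonalization route does not.

```python
import numpy as np

# Build a least-squares problem with cond(A) ~ 1e6 and a known solution.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((30, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = U @ np.diag(np.logspace(0, -6, 8)) @ V.T
x_true = np.ones(8)
b = A @ x_true

x_ne = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations: cond(A^T A) ~ 1e12
x_qr = np.linalg.lstsq(A, b, rcond=None)[0]  # orthogonalization-based solver

err_ne = np.linalg.norm(x_ne - x_true)
err_qr = np.linalg.norm(x_qr - x_true)
```

The orthogonalization route recovers the solution to roughly eps·κ(A) accuracy, while the normal-equations error grows like eps·κ(A)², several orders of magnitude worse here.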

7.
The weighted least-squares solutions of coupled singular matrix equations are difficult to obtain by direct matrix decomposition. In this paper, a family of algorithms based on Kronecker structures is applied to these problems. We then construct computationally efficient solutions of coupled restricted singular matrix equations. The solutions require computing the weighted Drazin and weighted Moore–Penrose inverses, and draw on results of Tian and of Lev-Ari. Several special cases of these problems are also considered, including the well-known coupled Sylvester matrix equations. Finally, we extend the iterative methods to the weighted case in order to obtain the minimum D-norm G-vector least-squares solutions of the coupled Sylvester matrix equations; as special cases, the results yield the least-squares solutions and the invertible solutions.

8.
The Schur algorithm and its time-domain counterpart, the fast Cholesky recursions, are efficient signal processing algorithms that are well adapted to the study of inverse scattering problems. These algorithms use a layer-stripping approach to reconstruct a lossless scattering medium described by symmetric two-component wave equations which model the interaction of right- and left-propagating waves. In this paper, the Schur and fast Cholesky recursions are presented and used to study several inverse problems, such as the reconstruction of nonuniform lossless transmission lines, the inverse problem for a layered acoustic medium, and the linear least-squares estimation of stationary stochastic processes. The inverse scattering problem for asymmetric two-component wave equations corresponding to lossy media is also examined and solved by using two coupled sets of Schur recursions. This procedure is then applied to the inverse problem for lossy transmission lines. The work of one author was supported by the Exxon Education Foundation; the work of the other was supported by the Air Force Office of Scientific Research under Grant AFOSR-82-0135A.

9.
The least-squares polynomial filtering and fixed-point smoothing problems for discrete-time signals from randomly delayed observations are addressed, when the Bernoulli random variables modelling the delay are correlated at consecutive sampling times. Recursive estimation algorithms are deduced without requiring full knowledge of the state-space model generating the signal process, only information about the delay probabilities and the moments of the processes involved. By defining a suitable augmented observation vector, the polynomial estimation problem is reduced to the linear estimation problem of the signal based on the augmented observations, which is solved by using an innovation approach.

10.
This paper brings together a novel information representation model for use in signal processing and computer vision problems with a particular algorithmic development of the Landweber iterative algorithm. The information representation model allows the representation of multiple values for a variable as well as an expression of confidence. Both properties are important for effective computation using multi-level models, where a choice between models is implementable as part of the optimization process. It is shown that in this way the algorithm can deal with a class of high-dimensional, sparse, and constrained least-squares problems which arise in various computer vision learning tasks, such as object recognition and object pose estimation. While the algorithm has been applied to the solution of such problems, it has so far been used heuristically. In this paper we describe the properties and some of the peculiarities of the channel representation and optimization, and put them on firm mathematical ground. We treat the optimization as a convexly constrained weighted least-squares problem and propose for its solution a projected Landweber method which employs oblique projections onto the closed convex constraint set. We formulate the problem, present the algorithm, and work out its convergence properties, including a rate-of-convergence result. The results are put in perspective with currently available projected Landweber methods. An application to supervised learning is described, and the method is evaluated in an experiment involving function approximation, as well as in an application to transient signals.
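A generic projected Landweber iteration, stripped of the channel representation and using an orthogonal rather than oblique projection, might look like the following sketch (the step-size rule and function names are illustrative assumptions):

```python
import numpy as np

def projected_landweber(A, b, project, iters=500):
    """Projected Landweber for min ||Ax - b|| subject to x in a closed
    convex set C: a gradient step followed by the projection onto C."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project(x + tau * A.T @ (b - A @ x))
    return x
```

For example, with `project = lambda v: np.maximum(v, 0.0)` the iteration solves a nonnegatively constrained least-squares problem; the oblique projections of the paper generalize this step to a weighted geometry.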

11.
We consider approximation algorithms for nonnegative polynomial optimization problems over unit spheres. These optimization problems have wide applications, e.g., in signal and image processing, high-order statistics, and computer vision. Since these problems are NP-hard, we are interested in studying approximation algorithms. In particular, we propose some polynomial-time approximation algorithms with new approximation bounds. In addition, based on these approximation algorithms, some efficient algorithms are presented and numerical results are reported to show the efficiency of the proposed algorithms.

12.
As a continuation of [1], this paper considers the implementation of ODE approaches. A modified Hamming's algorithm for integration of the (ECP)-equation is suggested to obtain a local solution. In addition to the main algorithm, three supporting algorithms are also described: two are for evaluating the right-hand side of the (ECP)-equation, which may be especially suitable for certain kinds of (ECP)-equation applied to large-scale problems; the third, with a convergence theorem, is for computing an initial feasible point. Our numerical results, obtained by executing these algorithms on the example of the (ECP)-equation given in [1] and on five test problems, indicate that they markedly outperform Tanabe's ODE version, which has recently been claimed to be much better than some well-known SQP techniques.

13.
A refined block incomplete orthogonalization algorithm for solving large nonsymmetric eigenproblems
0. Introduction. The block Arnoldi method [5] is an orthogonal projection method for solving large nonsymmetric eigenvalue problems. However, the analysis of Jia [3] shows…

14.
Newton-PCG (preconditioned conjugate gradient) like algorithms are usually very efficient; however, their efficiency has mainly been supported by numerical experiments. Recently, a new kind of Newton-PCG-like algorithm was derived in (J. Optim. Theory Appl. 105 (2000) 97; Superiority analysis on truncated Newton method with preconditioned conjugate gradient technique for optimization, in preparation) by efficiency analysis. It is proved from the theoretical point of view that their efficiency is superior to that of Newton's method for the special cases where Newton's method converges with precise Q-order 2 and α(⩾2), respectively. In extending this kind of algorithm to the more general case where Newton's method has no fixed convergence order, the first step is to obtain the solutions of the one-dimensional optimization problems for many different values of the parameter α. If these problems were solved numerically one by one, the computational cost would reduce the efficiency of the Newton-PCG algorithm and would therefore be unacceptable. In this paper, we overcome this difficulty by deriving an analytic expression for the solution of the one-dimensional optimization problem with respect to the parameter α.
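The basic Newton-PCG idea, solving each Newton system only approximately with (preconditioned) conjugate gradients, can be sketched as follows. This is a generic truncated-Newton sketch without a preconditioner, using SciPy's CG solver; it is not the specific algorithm analyzed in the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg

def newton_pcg(grad, hess, x0, tol=1e-10, iters=50):
    """Truncated Newton: at each outer step, solve the Newton system
    H(x) d = -g(x) inexactly with CG, then take the full step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d, _ = cg(hess(x), -g)      # inexact inner solve (no preconditioner)
        x = x + d
    return x
```

For a strongly convex objective such as f(x) = ½ xᵀAx + Σ xᵢ⁴, whose minimizer is the origin, the outer iteration converges rapidly despite the inexact inner solves.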

15.
In this paper we consider a class of specific Urysohn integral equations for which the solutions are determined only up to rearrangements of function values and associated arguments. As an alternative to Tikhonov's regularization method, which approximates minimum-norm solutions for this ill-posed class of inverse problems, a constrained least-squares approach is presented. This approach aims at finding decreasing rearrangements serving as appropriate solution representatives. It is shown that the inverses of these decreasing solutions solve a Fredholm linear integral equation of the first kind.

16.
This is the first of two papers on multilinear problems, and it is devoted to the worst-case setting. A second paper will analyze the average-case setting. We show how to reduce the analysis of multilinear problems to linear subproblems. In particular, it is proven that adaption can help by a factor of at most k for k-linear problems. The error of multilinear algorithms is analyzed and optimality properties of spline algorithms for the Hilbert case are established. We illustrate our analysis with an example of a multilinear problem from signal processing.

17.
This paper addresses the problem of estimating signals from observation models with multiplicative and additive noises. Assuming that the state-space model is unknown, the multiplicative noise is non-white, and the signal and additive noise are correlated, recursive algorithms are derived for the least-squares linear filter and fixed-point smoother. The proposed algorithms are obtained using an innovation approach, taking into account the information provided by the covariance functions of the processes involved.

18.
There exist many classes of block-projection algorithms for approximating solutions of linear least-squares problems. Generally, these methods generate sequences convergent to the minimal-norm least-squares solution only for consistent problems. In the inconsistent case, which usually appears in practice because of approximations or measurement errors, these sequences no longer converge to a least-squares solution, or they converge to the minimal-norm solution of a “perturbed” problem. In the present paper, we overcome this difficulty by constructing extensions for almost all of the above classes of block-projection methods. We prove that the sequences generated by these extensions always converge to a least-squares solution and, with a suitable initial approximation, to the minimal-norm solution of the problem. Numerical experiments, described in the last section of the paper, confirm the theoretical results.
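One concrete way a projection method can be extended to handle inconsistency, in the spirit described here though not necessarily the paper's construction, is the "extended Kaczmarz" idea: estimate the component of b outside range(A) via projections on the columns, subtract it, and run the row projections against the corrected right-hand side. A minimal sketch:

```python
import numpy as np

def extended_kaczmarz(A, b, sweeps=2000):
    """Sketch of an extension for inconsistent least squares: z tracks
    the component of b orthogonal to range(A); the Kaczmarz sweep then
    targets the consistent system A x = b - z."""
    m, n = A.shape
    x = np.zeros(n)
    z = b.astype(float).copy()
    for _ in range(sweeps):
        for j in range(n):               # orthogonalize z against the columns
            c = A[:, j]
            z -= (c @ z) / (c @ c) * c
        for i in range(m):               # Kaczmarz sweep with corrected r.h.s.
            a = A[i]
            x += ((b[i] - z[i]) - a @ x) / (a @ a) * a
    return x
```

Without the correction `z`, the plain Kaczmarz iterates of an inconsistent system cycle without converging; with it, the iterates converge to the least-squares solution.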

19.
In this paper, we propose a new design for the recursive least-squares (RLS) Wiener fixed-lag smoother and filter in linear discrete-time wide-sense stationary stochastic systems. It is assumed that the signal is observed with additive white observation noise. The signal is uncorrelated with the observation noise. The estimators require knowledge of the system matrix, the observation matrix and the variance of the state vector. These quantities can be obtained from the auto-covariance function of the signal. In the estimation algorithms, moreover, the variance of the observation noise is assumed to be known, as a priori information.

20.
In this paper we propose randomized first-order algorithms for solving bilinear saddle point problems. Our developments are motivated by the need for sublinear-time algorithms to solve large-scale parametric bilinear saddle point problems where cheap online assessment of the solution quality is crucial. We present theoretical efficiency estimates of our algorithms and discuss a number of applications, primarily to the problem of ℓ1 minimization arising in sparsity-oriented signal processing. We demonstrate, both theoretically and by numerical examples, that when seeking medium-accuracy solutions of large-scale ℓ1 minimization problems, our randomized algorithms significantly outperform the state-of-the-art deterministic methods, and progressively more so as the problem sizes grow.

