Similar Literature
20 similar documents retrieved (search time: 218 ms)
1.
Iterative methods based on Lanczos bidiagonalization with full reorthogonalization (LBDR) are considered for solving large-scale discrete ill-posed linear least-squares problems of the form \(\min_x \|Ax-b\|_2\). Methods for regularization in the Krylov subspaces are discussed which use generalized cross validation (GCV) for determining the regularization parameter. These methods have the advantage that no a priori information about the noise level is required. To improve convergence of the Lanczos process we apply a variant of the implicitly restarted Lanczos algorithm by Sorensen using zero shifts. Although this restarted method simply corresponds to using LBDR with the starting vector \((AA^T)^p b\), it is shown that carrying out the process implicitly is essential for numerical stability. An LBDR algorithm is presented which incorporates implicit restarts to ensure that the global minimum of the GCV curve corresponds to a minimum on the curve for the truncated SVD solution. Numerical results are given comparing the performance of this algorithm with non-restarted LBDR. This work was partially supported by DARPA under grant 60NANB2D1272 and by the National Science Foundation under grant CCR-9209349.
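For intuition about the GCV step, the sketch below applies the GCV function G(k) = ‖Ax_k − b‖² / (m − k)² to plain truncated-SVD solutions; it is not the paper's LBDR algorithm (no Lanczos bidiagonalization, reorthogonalization or implicit restarts), and the Hilbert-matrix test problem and noise level are assumptions.

```python
# Minimal sketch: GCV choice of the TSVD truncation index (a plain SVD stands in
# for the Krylov projection; no Lanczos bidiagonalization or implicit restarts here).
import numpy as np

def tsvd_gcv(A, b):
    """Return the TSVD solution whose index minimizes G(k) = ||A x_k - b||^2 / (m - k)^2."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b) / s                     # SVD expansion coefficients u_i^T b / s_i
    x = np.zeros(n)
    sols, gcv = [], []
    for k in range(1, len(s)):                 # keep k < m so the denominator stays positive
        x = x + coeffs[k - 1] * Vt[k - 1]      # x_k = sum_{i<=k} (u_i^T b / s_i) v_i
        gcv.append(np.linalg.norm(A @ x - b) ** 2 / (m - k) ** 2)
        sols.append(x.copy())
    k_best = int(np.argmin(gcv)) + 1
    return sols[k_best - 1], k_best, np.array(gcv)

# Illustrative ill-posed test problem (assumed, not from the paper).
n = 64
A = np.array([[1.0 / (i + j + 1.0) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)
_, k_gcv, _ = tsvd_gcv(A, b)
print("GCV-selected truncation index:", k_gcv)
```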

2.
This paper presents an iterative method for the computation of approximate solutions of large linear discrete ill-posed problems by Lavrentiev regularization. The method exploits the connection between Lanczos tridiagonalization and Gauss quadrature to determine inexpensively computable lower and upper bounds for certain functionals. This approach to bounding functionals was first described in a paper by Dahlquist, Eisenstat, and Golub. A suitable value of the regularization parameter is determined by a modification of the discrepancy principle. In memory of Germund Dahlquist (1925–2005). AMS subject classification (2000): 65R30, 65R32, 65F10.
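For intuition, a small dense-matrix sketch of Lavrentiev regularization (which assumes A symmetric positive semidefinite) with a discrepancy-principle parameter choice is given below; the safety factor and bisection bounds are assumptions, and the paper's point is that the required residual norms can instead be bounded cheaply by Lanczos tridiagonalization and Gauss quadrature rather than by dense solves.

```python
import numpy as np

def lavrentiev(A, b, mu):
    """Lavrentiev (simplified) regularization: solve (A + mu*I) x = b for
    symmetric positive semidefinite A."""
    return np.linalg.solve(A + mu * np.eye(A.shape[0]), b)

def lavrentiev_discrepancy(A, b, delta, tau=1.01, lo=1e-14, hi=1e2, steps=60):
    """Pick mu by the discrepancy principle ||A x_mu - b|| ~ tau*delta; the residual
    norm grows monotonically with mu, so a log-scale bisection suffices."""
    for _ in range(steps):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ lavrentiev(A, b, mid) - b) < tau * delta:
            lo = mid                 # residual still below the target: regularize more
        else:
            hi = mid
    mu = np.sqrt(lo * hi)
    return mu, lavrentiev(A, b, mu)
```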

3.
LSQR, a Lanczos bidiagonalization based Krylov subspace iterative method, and its mathematically equivalent conjugate gradient for least squares problems (CGLS) applied to normal equations system, are commonly used for large-scale discrete ill-posed problems. It is well known that LSQR and CGLS have regularizing effects, where the number of iterations plays the role of the regularization parameter. However, it has long been unknown whether the regularizing effects are good enough to find best possible regularized solutions. Here a best possible regularized solution means that it is at least as accurate as the best regularized solution obtained by the truncated singular value decomposition (TSVD) method. We establish bounds for the distance between the k-dimensional Krylov subspace and the k-dimensional dominant right singular space. They show that the Krylov subspace captures the dominant right singular space better for severely and moderately ill-posed problems than for mildly ill-posed problems. Our general conclusions are that LSQR has better regularizing effects for the first two kinds of problems than for the third kind, and a hybrid LSQR with additional regularization is generally needed for mildly ill-posed problems. Exploiting the established bounds, we derive an estimate for the accuracy of the rank k approximation generated by Lanczos bidiagonalization. Numerical experiments illustrate that the regularizing effects of LSQR are good enough to compute best possible regularized solutions for severely and moderately ill-posed problems, stronger than our theory predicts, but they are not for mildly ill-posed problems and additional regularization is needed.
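The comparison behind these conclusions can be mimicked on a synthetic problem: run LSQR for k = 1, 2, … and compare its best iterate with the best TSVD solution. The test problem, noise level and iteration budget below are assumptions; SciPy's LSQR is used with its default stopping tests disabled so that the iteration count alone acts as the regularization parameter.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Synthetic severely ill-posed problem (assumed): singular values decay geometrically
# and the exact solution satisfies a discrete Picard condition.
rng = np.random.default_rng(1)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -10, n)
A = U @ np.diag(s) @ V.T
x_true = V @ np.sqrt(s)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

def tsvd(k):                                   # truncated SVD solution with k terms
    return V[:, :k] @ ((U[:, :k].T @ b) / s[:k])

best_tsvd = min(np.linalg.norm(tsvd(k) - x_true) for k in range(1, n + 1))
# Disable LSQR's default stopping tests so the iteration count alone regularizes.
best_lsqr = min(np.linalg.norm(lsqr(A, b, atol=0, btol=0, conlim=0, iter_lim=k)[0] - x_true)
                for k in range(1, 31))
print("best TSVD error :", best_tsvd)
print("best LSQR error :", best_lsqr)
```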

4.
An iterative method based on Lanczos bidiagonalization is developed for computing regularized solutions of large and sparse linear systems, which arise from discretizations of ill-posed problems in partial differential or integral equations. Determination of the regularization parameter and termination criteria are discussed. Comments are given on the computational implementation of the algorithm. Dedicated to Peter Naur on the occasion of his 60th birthday.
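A bare-bones sketch of the underlying Golub–Kahan (Lanczos) bidiagonalization recurrence is given below, mainly to fix notation: k steps yield orthonormal bases U, V and a lower bidiagonal matrix B with A V = U B. It omits the reorthogonalization, parameter-choice and termination issues the paper actually addresses.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization with starting vector b.
    Returns U (m x (k+1)), V (n x k) and the (k+1) x k lower bidiagonal B with
    A @ V = U @ B in exact arithmetic.  No reorthogonalization; assumes no breakdown."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v)            # alpha_j
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)        # beta_{j+1}
        U[:, j + 1] = u / B[j + 1, j]
    return U, V, B

# Quick check (the identity deteriorates in floating point without reorthogonalization).
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 20)); b = rng.standard_normal(30)
U, V, B = golub_kahan(A, b, 5)
print(np.linalg.norm(A @ V - U @ B))
```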

5.
The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in various areas such as statistics, control, system identification and machine learning. Unlike the literature that uses the nuclear norm or the general Schatten \(q\) \((0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten \(1/2\) quasi-norm approximation, which is a better approximation than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Importantly, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem by virtue of the special objective function. This is very different from the local optimality conditions usually used for the general \(S_q\) regularization problems. Explicitly, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed-point inclusion associated with the singular value half-thresholding operator. Naturally, we propose a fixed-point iterative scheme for the problem and provide its convergence analysis. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm, the half-norm fixed-point algorithm with an approximate SVD (HFPA algorithm), for the \(S_{1/2}\) regularization problem. Numerical experiments on randomly generated and real matrix completion problems demonstrate the effectiveness of the proposed algorithm.
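A minimal sketch of such a fixed-point scheme for matrix completion follows. The scalar half-threshold uses the closed form commonly attributed to Xu et al. for the \(L_{1/2}\) penalty; the sampling operator, step size, penalty value and iteration count are illustrative assumptions, and the paper's HFPA algorithm additionally relies on an approximate SVD and a tuned parameter rule that are not reproduced here.

```python
import numpy as np

def half_threshold(z, lam):
    """Scalar half-thresholding operator (closed form commonly attributed to Xu et al.),
    applied elementwise; zero below the threshold (54**(1/3)/4) * lam**(2/3)."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    big = np.abs(z) > (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    phi = np.arccos((lam / 8.0) * (np.abs(z[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * z[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def half_norm_fixed_point(M_obs, mask, lam=1e-2, mu=0.9, iters=500):
    """Fixed-point scheme X <- H_{lam*mu,1/2}(X - mu * P_Omega(X - M)) for matrix
    completion with the S_{1/2} penalty; lam and mu are illustrative choices."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = X - mu * mask * (X - M_obs)                      # gradient step on observed entries
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = U @ np.diag(half_threshold(s, lam * mu)) @ Vt    # half-threshold the singular values
    return X

# Illustrative random low-rank completion instance (assumed data).
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))
mask = rng.random(M.shape) < 0.5
X = half_norm_fixed_point(mask * M, mask)
print("relative recovery error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```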

6.
Tikhonov Regularization of Large Linear Problems
Many numerical methods for the solution of linear ill-posed problems apply Tikhonov regularization. This paper presents a new numerical method, based on Lanczos bidiagonalization and Gauss quadrature, for Tikhonov regularization of large-scale problems. An estimate of the norm of the error in the data is assumed to be available. This allows the value of the regularization parameter to be determined by the discrepancy principle.
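For a small dense problem, the two ingredients named above can be written down directly with an SVD: Tikhonov filter factors plus a discrepancy-principle search for the parameter. The sketch below is only such a reference implementation; the paper's contribution is achieving this for large problems via Lanczos bidiagonalization and Gauss quadrature, which the sketch does not attempt.

```python
import numpy as np

def tikhonov_svd(A, b, mu):
    """Tikhonov solution of min ||Ax - b||^2 + mu^2 ||x||^2 via SVD filter factors
    f_i = s_i^2 / (s_i^2 + mu^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (s * (U.T @ b) / (s ** 2 + mu ** 2))

def mu_discrepancy(A, b, delta, tau=1.01, lo=1e-12, hi=1e2, steps=60):
    """Log-scale bisection for mu with ||A x_mu - b|| ~ tau * delta; the residual
    norm increases monotonically with mu, and delta is the given error-norm estimate."""
    for _ in range(steps):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov_svd(A, b, mid) - b) < tau * delta:
            lo = mid                     # residual too small: regularize more
        else:
            hi = mid
    return np.sqrt(lo * hi)
```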

7.
Estimation of the L-Curve via Lanczos Bidiagonalization
The L-curve criterion is often applied to determine a suitable value of the regularization parameter when solving ill-conditioned linear systems of equations with a right-hand side contaminated by errors of unknown norm. However, the computation of the L-curve is quite costly for large problems; the determination of a point on the L-curve requires that both the norm of the regularized approximate solution and the norm of the corresponding residual vector be available. Therefore, usually only a few points on the L-curve are computed and these values, rather than the L-curve, are used to determine a value of the regularization parameter. We propose a new approach to determine a value of the regularization parameter based on computing an L-ribbon that contains the L-curve in its interior. An L-ribbon can be computed fairly inexpensively by partial Lanczos bidiagonalization of the matrix of the given linear system of equations. A suitable value of the regularization parameter is then determined from the L-ribbon, and we show that an associated approximate solution of the linear system can be computed with little additional work.
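For small problems, the object the L-ribbon approximates can be computed directly: the L-curve points (log‖Ax_λ − b‖, log‖x_λ‖) over a grid of regularization parameters. The sketch below does this with a full SVD (exactly the cost the L-ribbon avoids); the parameter grid is an assumption.

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    """Points (log10 ||A x_lam - b||, log10 ||x_lam||) of the Tikhonov L-curve,
    computed with a full SVD (affordable only for small problems)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    pts = []
    for lam in lambdas:
        x = Vt.T @ (s * beta / (s ** 2 + lam ** 2))     # filter factors s^2/(s^2+lam^2)
        pts.append((np.log10(np.linalg.norm(A @ x - b)),
                    np.log10(np.linalg.norm(x))))
    return np.array(pts)
```

The corner (point of maximal curvature) of this curve is the usual L-curve choice of the parameter; the L-ribbon of the paper brackets these points via partial Lanczos bidiagonalization instead of a full SVD.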

8.
The computation of an approximate solution of linear discrete ill-posed problems with contaminated data is delicate due to the possibility of severe error propagation. Tikhonov regularization seeks to reduce the sensitivity of the computed solution to errors in the data by replacing the given ill-posed problem by a nearby problem, whose solution is less sensitive to perturbation. This regularization method requires that a suitable value of the regularization parameter be chosen. Recently, Brezinski et al. (Numer Algorithms 49, 2008) described new approaches to estimate the error in approximate solutions of linear systems of equations and applied these estimates to determine a suitable value of the regularization parameter in Tikhonov regularization when the approximate solution is computed with the aid of the singular value decomposition. This paper discusses applications of these and related error estimates to the solution of large-scale ill-posed problems when approximate solutions are computed by Tikhonov regularization based on partial Lanczos bidiagonalization of the matrix. The connection between partial Lanczos bidiagonalization and Gauss quadrature is utilized to determine inexpensive bounds for a family of error estimates. In memory of Gene H. Golub. This work was supported by MIUR under the PRIN grant no. 2006017542-003 and by the University of Cagliari.

9.
Nonstationary iterated Tikhonov is an iterative regularization method that requires a strategy for defining the Tikhonov regularization parameter at each iteration and an early termination of the iterative process. A classical choice for the regularization parameters is a decreasing geometric sequence, which leads to a linear convergence rate. The early iterations quickly compute a good approximation of the true solution, but the main drawback of this choice is a rapid growth of the error in later iterations. This implies that a stopping criterion, e.g., the discrepancy principle, could fail to deliver a good approximation. In this paper we show by a filter factor analysis that a nondecreasing sequence of regularization parameters can provide rapid and stable convergence. Hence, a reliable stopping criterion is no longer necessary. A nondecreasing geometric sequence of Tikhonov regularization parameters confined to a fixed interval is proposed and numerically validated for deblurring problems.
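A minimal sketch of the nonstationary iterated Tikhonov step, together with two illustrative parameter sequences (a classical decreasing geometric one and a nondecreasing geometric one capped to a fixed interval), is given below; the specific values of the initial parameter, ratio and cap are assumptions, not the paper's recommendations.

```python
import numpy as np

def iterated_tikhonov(A, b, alphas, x0=None):
    """Nonstationary iterated Tikhonov:
       x_{k+1} = x_k + (A^T A + alpha_k I)^{-1} A^T (b - A x_k),
    one step per entry of `alphas`."""
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    AtA = A.T @ A
    for a in alphas:
        x = x + np.linalg.solve(AtA + a * np.eye(n), A.T @ (b - A @ x))
    return x

# Illustrative parameter sequences (values are assumptions, not the paper's choices):
alphas_decreasing    = [1e-1 * 0.7 ** k for k in range(30)]             # classical, ratio < 1
alphas_nondecreasing = [min(1e-3 * 1.3 ** k, 1e-1) for k in range(30)]  # capped to a fixed interval
```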

10.
Summary. We describe a new iterative method for the solution of large, very ill-conditioned linear systems of equations that arise when discretizing linear ill-posed problems. The right-hand side vector represents the given data and is assumed to be contaminated by measurement errors. Our method applies a filter function, depending on a regularization parameter α, with the purpose of reducing the influence of the errors in the right-hand side vector on the computed approximate solution of the linear system. The iterative method is derived by expanding the filter function in terms of Chebyshev polynomials. The method requires only little computer memory and is well suited for the solution of large-scale problems. We also show how a value of α and an associated approximate solution that satisfies the Morozov discrepancy principle can be computed efficiently. An application to image restoration illustrates the performance of the method.

11.
In this paper, we consider a smoothing and regularization Broyden-like algorithm for systems of nonlinear inequalities. By constructing a new smoothing function $\phi(\mu,a)=\frac{1}{2}(a+\mu(\ln 2+\ln(1+\cosh\frac{a}{\mu})))$, the problem is approximated via a family of parameterized smooth equations H(μ,ε,x)=0. A smoothing and regularization Broyden-like algorithm with a non-monotone line search is proposed for solving the system of nonlinear inequalities based on the new smoothing function. The global convergence of the algorithm is established under suitable assumptions. In addition, the smoothing parameter μ and the regularization parameter ε in our algorithm are viewed as two independent variables. Preliminary numerical results show the efficiency of the algorithm and reveal that the regularization parameter ε plays an important role in the numerical improvement; hence, our algorithm seems simpler and more easily implemented than many previous methods.
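The role of the new smoothing function is easy to check numerically: the short evaluation below (the overflow-safe identity is an added, easily verified algebraic step) shows that φ(μ, a) approaches the plus function max(a, 0) as μ → 0, consistent with its use in reformulating the system of inequalities.

```python
import numpy as np

def phi(mu, a):
    """The smoothing function phi(mu, a) = 0.5*(a + mu*(ln 2 + ln(1 + cosh(a/mu)))),
    evaluated with the overflow-safe identity ln(1+cosh t) = |t| - ln 2 + 2*log1p(exp(-|t|))."""
    t = np.abs(a) / mu
    return 0.5 * (a + mu * (t + 2.0 * np.log1p(np.exp(-t))))

a = np.linspace(-3.0, 3.0, 13)
for mu in (1.0, 0.1, 0.01):
    # maximal gap to the plus function is mu*ln 2, attained at a = 0
    print(mu, np.max(np.abs(phi(mu, a) - np.maximum(a, 0.0))))
```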

12.
A special case of fluid flow, the laminar flow of a Bingham fluid through a cylindrical pipe, can be described as a convex minimization problem whose objective function J0 is nondifferentiable. J0 can easily be approximated by smooth functions Jε, but for a small parameter ε>0 the corresponding discretized problems are ill-conditioned. The use of proximal point methods to regularize ill-posed problems or to stabilize ill-conditioned problems is well known. We present some numerical experience comparing standard prox-regularization methods with weak norm-based regularization methods.

13.
We consider the nonstationary iterated Tikhonov regularization in Banach spaces which defines the iterates via minimization problems with uniformly convex penalty term. The penalty term is allowed to be non-smooth to include \(L^1\) and total variation (TV) like penalty functionals, which are significant in reconstructing special features of solutions such as sparsity and discontinuities in practical applications. We present the detailed convergence analysis and obtain the regularization property when the method is terminated by the discrepancy principle. In particular we establish the strong convergence and the convergence in Bregman distance which sharply contrast with the known results that only provide weak convergence for a subsequence of the iterative solutions. Some numerical experiments on linear integral equations of first kind and parameter identification in differential equations are reported.

14.
15.
Limitations of the L-curve method in ill-posed problems
This paper considers the Tikhonov regularization method with the regularization parameter chosen by the so-called L-curve criterion. An infinite dimensional example is constructed for which the selected regularization parameter vanishes too rapidly as the noise to signal ratio in the data goes to zero. As a consequence the computed reconstructions do not converge to the true solution. Numerical examples are given to show that similar phenomena can be observed under more general assumptions in discrete ill-posed problems provided the exact solution of the problem is smooth. This work was partially supported by NATO grant CRG 930044.

16.
The conjugate gradient method for the iterative solution of a set of linear equations Ax=b is essentially equivalent to the Lanczos method, which implies that approximations to certain eigenvalues of A can be obtained at low cost. In this paper it is shown how the smallest active eigenvalue of A can be cheaply approximated, and the usefulness of this approximation for a practical termination criterion for the conjugate gradient method is studied. It is proved that this termination criterion is reliable in many relevant situations.
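This CG–Lanczos connection can be made concrete: the Lanczos tridiagonal matrix can be rebuilt from the CG step lengths and update coefficients, and its smallest Ritz value approximates the smallest active eigenvalue of A. The sketch below illustrates that standard connection for a symmetric positive definite A; it is not the specific termination criterion analyzed in the paper, and the test matrix is an assumption.

```python
import numpy as np

def cg_with_eigmin(A, b, tol=1e-10, maxit=200):
    """Plain conjugate gradient for SPD A that also rebuilds the Lanczos tridiagonal
    matrix from its own coefficients, so the smallest Ritz value (an approximation to
    the smallest active eigenvalue of A) comes essentially for free."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    alphas, betas = [], []
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        beta = rs_new / rs
        alphas.append(alpha); betas.append(beta)
        rs = rs_new
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            break
        p = r + beta * p
    # Lanczos tridiagonal matrix reconstructed from the CG coefficients.
    k = len(alphas)
    T = np.zeros((k, k))
    for j in range(k):
        T[j, j] = 1.0 / alphas[j] + (betas[j - 1] / alphas[j - 1] if j > 0 else 0.0)
        if j + 1 < k:
            T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]
    return x, np.linalg.eigvalsh(T)[0]

# Example SPD matrix (assumed) with smallest eigenvalue 0.01.
rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(np.linspace(0.01, 1.0, 200)) @ Q.T
x, lam_min = cg_with_eigmin(A, rng.standard_normal(200))
print("estimated smallest eigenvalue:", lam_min, "(true value 0.01)")
```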

17.
This paper mainly studies the differentiation-by-integration method first proposed by Lanczos. New schemes of Lanczos derivatives are put forward for reconstructing high-order numerical derivatives from noisy data. A convergence rate for the proposed methods is established as the noise level δ→0. Numerical examples show that the proposed methods are stable and efficient.
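For the first derivative, Lanczos' differentiation-by-integration formula reads D_h f(x) = 3/(2h³) ∫_{−h}^{h} t f(x + t) dt. The sketch below evaluates it with a simple trapezoidal rule on noisy samples; the test function, noise level and step size h are illustrative assumptions, and the paper's higher-order schemes are not reproduced.

```python
import numpy as np

def lanczos_derivative(f, x, h, m=50):
    """Lanczos differentiation by integration (first derivative):
       D_h f(x) = 3/(2 h^3) * integral_{-h}^{h} t * f(x + t) dt,
    approximated by the trapezoidal rule on 2m+1 equispaced nodes."""
    t = np.linspace(-h, h, 2 * m + 1)
    g = t * f(x + t)
    integral = np.sum((g[:-1] + g[1:]) / 2.0) * (t[1] - t[0])
    return 3.0 / (2.0 * h ** 3) * integral

# Noisy test function (assumed): f(x) = sin(x) + noise of size 1e-3; exact f'(1) = cos(1).
rng = np.random.default_rng(6)
noisy_f = lambda x: np.sin(x) + 1e-3 * rng.standard_normal(np.shape(x))
print(lanczos_derivative(noisy_f, 1.0, h=0.2), "vs", np.cos(1.0))
```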

18.
Summary. The flow of a Bingham fluid in a cylindrical pipe can give rise to free boundary problems. The fluid behaves like a viscous fluid if the shear stress, expressed as a linear function of the shear rate, exceeds a yield value, and like a rigid body otherwise. The surfaces dividing fluid and rigid zones are the free boundaries. Therefore the solution for such highly nonlinear problems can in general only be obtained by numerical methods. Considerable progress has been made in the development of numerical algorithms for Bingham fluids [2,20,27,29,30,32]. However, very little research can be found in the literature regarding the rate of convergence of the numerical solution to the true continuous solution, that is, the error estimate of these numerical methods. Error estimates are a critically important issue because they tell us how to control the error by appropriately choosing the grid sizes and other related parameters. This paper concerns the error estimates for an unsteady Bingham fluid modeled as a variational inequality due to Duvaut-Lions [16] and Glowinski [30]. The difficulty both in the analytical and numerical treatment of the mathematical model is due to the fact that it contains a nondifferentiable term. A common technique, called the regularization method, is to replace the non-differentiable term by a perturbed differentiable term which depends on a small regularization parameter ε. The regularization method effectively reduces the variational inequality to an equation (a regularized problem) which is much easier to cope with. This paper has achieved the following. (1) Error estimates are derived for a continuous time Galerkin method in suitable norms. (2) We give an estimate of the difference between the true solution and the regularized solution in terms of ε. (3) Some regularity properties for both the regularized solution and the true solution are proved. (4) The error estimates for full discretization of the regularized problem using piecewise linear finite elements in space, and backward differencing in time, are established for the first time by coupling the regularization parameter ε and the discretization parameters h and Δt. (5) We are able to improve our estimates in the one-dimensional case or under stronger regularity assumptions on the true solution. The estimates for the one-dimensional case are optimal and confirmed by numerical experiment. The estimates from (4) and (5) provide very important information on the measure of the error and give us a powerful mechanism to properly choose the parameters h, Δt and ε in order to control the error (see Corollary 4.4). The above estimates extend the error bounds derived in Glowinski, Lions and Trémolières [32] (chapter 5, pp. 348–404) for the stationary Bingham fluid to the time-dependent one, which is the main contribution of this paper. Mathematics Subject Classification (2000): 35K85, 65M15, 65M60, 76A05, 76M10. I wish to thank my thesis advisor, Professor Todd Dupont, for his motivation and help on the writing of this paper during my Ph.D. study at the University of Chicago. I also want to thank my postdoctoral advisor, Professor James Glimm, for his assistance with improvements to this paper.

19.
Two families of derivative-free two-point iterative methods for solving nonlinear equations are constructed. These methods use a suitable parametric function and an arbitrary real parameter. It is proved that the first family has convergence order four while requiring only three function evaluations per iteration. In this way it is demonstrated that the proposed family without memory supports the Kung-Traub hypothesis (1974) on the upper bound 2^n for the order of multipoint methods based on n + 1 function evaluations. Further acceleration of the convergence rate is attained by varying the free parameter from step to step using information available from the previous step. This approach leads to a family of two-step self-accelerating methods with memory whose order of convergence exceeds four and is even higher in special cases. The increase of convergence order is attained without any additional calculations, so that the family of methods with memory possesses very high computational efficiency. Numerical examples are included to demonstrate the exceptional convergence speed of the proposed methods using only a few function evaluations.
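As a generic illustration of the 'with memory' idea only (not the authors' two-point fourth-order family), the sketch below takes a Steffensen-type derivative-free iteration and re-estimates its free parameter from the previous step's divided difference; the update rule, starting parameter and test equation are assumptions.

```python
def steffensen_with_memory(f, x0, gamma0=0.01, iters=8):
    """Derivative-free Steffensen-type iteration
       w_k = x_k + gamma_k * f(x_k),   x_{k+1} = x_k - f(x_k) / f[x_k, w_k],
    where gamma_k is re-estimated each step from the divided difference of the
    previous step (a simple 'with memory' acceleration, gamma_k ~ -1/f'(root))."""
    x, gamma = float(x0), gamma0
    fx = f(x)
    for _ in range(iters):
        w = x + gamma * fx
        if w == x:                       # f(x) is (numerically) zero: done
            break
        dd = (f(w) - fx) / (w - x)       # divided difference f[x, w] ~ f'(x)
        if dd == 0.0:
            break
        x = x - fx / dd
        gamma = -1.0 / dd                # memory: reuse previous-step information
        fx = f(x)
    return x

# Classic test equation x^3 - 2x - 5 = 0, root near 2.0946.
print(steffensen_with_memory(lambda x: x**3 - 2*x - 5, 2.0))
```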

20.