Related Articles (20 results)
1.
Motivated by the recently popular probabilistic methods for low-rank approximations and randomized algorithms for least squares problems, we develop randomized algorithms for the total least squares problem with a single right-hand side. We present the Nyström method for medium-sized problems. For large-scale and ill-conditioned cases, we introduce randomized truncated total least squares with the known or estimated rank as the regularization parameter. We analyze the accuracy of the randomized truncated total least squares algorithm and perform numerical experiments to demonstrate the efficiency of our randomized algorithms. The randomized algorithms can greatly reduce the computational time and still maintain good accuracy with very high probability.
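For orientation, here is a sketch of the deterministic truncated TLS solution that such randomized variants approximate; the function name and the use of a dense SVD are ours, not the paper's, and the randomized algorithm would replace that SVD with a sketched approximation.

```python
# A minimal truncated total least squares (T-TLS) sketch with the truncation
# level k acting as the regularization parameter.
import numpy as np

def ttls(A, b, k):
    """Minimum-norm T-TLS solution of A x ~ b at truncation level k."""
    m, n = A.shape
    # SVD of the augmented matrix [A, b]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
    V = Vt.T                 # right singular vectors of [A, b], (n+1) x (n+1)
    V12 = V[:n, k:]          # top block of the discarded singular subspace
    V22 = V[n:, k:]          # bottom row of the discarded singular subspace
    # Minimum-norm T-TLS solution: x_k = -V12 V22^T / ||V22||^2
    return (-V12 @ V22.T / np.linalg.norm(V22) ** 2).ravel()
```

Setting k = n recovers the classical single-right-hand-side TLS solution x = -V[:n, n] / V[n, n].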

2.
Inverse problems are currently an active topic in mathematical physics, and an essential difficulty in solving them is ill-posedness. The general approach to solving an ill-posed problem is to approximate its solution by the solutions of well-posed problems that are "close" to the original one; this approach is called regularization. How to construct effective regularization methods is an important subject in the study of ill-posed inverse problems. At present, the most popular regularization methods are Tikhonov regularization, which is based on a variational principle, and its refinements. Methods of this type are among the more effective ways of solving ill-posed problems; they are widely adopted in the study of all kinds of inverse problems and have been investigated in depth.
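As a one-function illustration (ours, not the article's), the basic Tikhonov step replaces the unstable system by a nearby well-posed one, with the parameter lam controlling the trade-off between fidelity and stability; the choice of the identity as the regularization operator is an assumption of this sketch.

```python
# A minimal Tikhonov regularization sketch for an ill-posed system A x = b:
# solve the nearby well-posed normal equations (A^T A + lam I) x = A^T b.
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```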

3.
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
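A schematic contrast of the two subproblem solvers (our sketch with illustrative names, not the authors' implementation): each CP-ALS factor update solves min_X ||K X^T - M^T||_F, where K is the Khatri-Rao product of the other factor matrices and M the matricized tensor. Note the paper exploits tensor structure to avoid ever forming K explicitly, which this sketch does not.

```python
# Two ways to solve the CP-ALS subproblem min_X ||K X^T - M^T||_F.
# K: (N, R) Khatri-Rao product, M: (I, R-target mode unfolding of the tensor).
import numpy as np

def subproblem_normal(K, M):
    # Normal equations (K^T K) X^T = K^T M^T: fast, but error scales
    # with cond(K)^2, which is what makes standard CP-ALS fragile.
    return np.linalg.solve(K.T @ K, K.T @ M.T).T

def subproblem_qr(K, M):
    # QR-based solve K = Q R, then R X^T = Q^T M^T: error scales with
    # cond(K) only, at the cost of a QR factorization.
    Q, R = np.linalg.qr(K)
    return np.linalg.solve(R, Q.T @ M.T).T
```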

4.
We present three cubically convergent methods for choosing the regularization parameters in linear inverse problems. The detailed algorithms are given and the convergence rates are estimated. Our basic tools are Tikhonov regularization and Morozov's discrepancy principle. We prove that, in comparison with the standard Newton method, the computational costs for our cubically convergent methods are nearly the same, but the number of iteration steps is even less. Numerical experiments for an elliptic boundary value problem illustrate the efficiency of the proposed algorithms.
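For reference, a plain Newton baseline under our own assumptions (not one of the paper's cubically convergent schemes): Morozov's principle selects lam so that the Tikhonov residual matches the noise level delta, and the SVD makes the discrepancy function and its derivative cheap to evaluate. The sketch assumes the Newton iterates stay positive.

```python
# Newton iteration for Morozov's discrepancy principle: find lam such that
# ||A x_lam - b||^2 = delta^2 for Tikhonov solutions x_lam, via A = U S V^T.
import numpy as np

def discrepancy_newton(A, b, delta, lam=1.0, tol=1e-12, maxit=50):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    base = b @ b - beta @ beta           # ||(I - U U^T) b||^2, irreducible part
    for _ in range(maxit):
        f = ((lam * beta / (s**2 + lam)) ** 2).sum() + base - delta**2
        df = (2 * lam * s**2 * beta**2 / (s**2 + lam) ** 3).sum()
        step = f / df                    # Newton step on the discrepancy
        lam -= step
        if abs(step) < tol * abs(lam):
            break
    return lam
```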

5.
Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is efficient because the resulting optimization problem has a closed-form solution. The effectiveness of the proposed algorithm is investigated by solving three benchmark problems and one from a hydrological application. Results are compared with solutions found by lsqlin, and the quadratically constrained formulation is solved using the L-curve, maximum a posteriori estimation (MAP), and the χ² regularization method. The χ² regularization method with quadratic constraints is the most effective method for solving least squares problems with box constraints.
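For readers who want a baseline to compare against, SciPy's lsq_linear plays the role of Matlab's lsqlin / BVLS here; the data below are random placeholders, and the paper's quadratically constrained χ² method is not part of SciPy.

```python
# Baseline box-constrained least squares via SciPy (analogue of lsqlin/BVLS).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A, b = rng.random((20, 5)), rng.random(20)
res = lsq_linear(A, b, bounds=(0.0, 1.0))   # enforce 0 <= x_i <= 1
x_box = res.x                                # bounded least squares solution
```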

6.
We discuss the problem of parameter choice in learning algorithms generated by a general regularization scheme. Such a scheme covers well-known algorithms such as regularized least squares and gradient descent learning. It is known that, in contrast to classical deterministic regularization methods, the performance of regularized learning algorithms is influenced not only by the smoothness of a target function but also by the capacity of the space where regularization is performed. In the infinite-dimensional case the latter is usually measured in terms of the effective dimension. In the context of supervised learning, both the smoothness and the effective dimension are intrinsically unknown a priori. Therefore, we are interested in a posteriori regularization parameter choice, and we propose a new form of the balancing principle. An advantage of this strategy over known rules such as cross-validation-based adaptation is that it does not require any data splitting and allows the use of all available labeled data in the construction of regularized approximants. We provide an analysis of the proposed rule and demonstrate its advantage in simulations.

7.
Using the least squares method, the modified Lagrangian function method, and some other methods as examples, we demonstrate the capabilities of the new optimization technique based on the quadratic approximation of penalty functions that was recently proposed by O. Mangasarian for a special class of linear programming problems. This technique makes it possible to use unified matrix operations and standard linear algebra packages (including parallel ones) for solving large-scale problems with sparse, strongly structured constraint matrices. With this technique, the computational schemes of some well-known algorithms can take an unexpected form.

8.
Least squares problems arise frequently in many disciplines, such as image restoration. In these areas the coefficient matrix of the given least squares problem is usually ill-conditioned. Thus, if the problem data are contaminated with error, solving the least squares problem with classical approaches may yield a meaningless solution. Tikhonov regularization is one of the most widely used approaches to deal with such situations. In this paper, we first briefly describe these approaches, then present the robust optimization framework, which accounts for the errors in the problem data. Finally, our computational experiments on several ill-conditioned standard test problems, using the regularization tools, a Matlab package for least squares problems, and the robust optimization framework, show that the latter approach may be the right choice.

9.
This paper describes a nonlinear least squares framework to solve a separable nonlinear ill-posed inverse problem that arises in blind deconvolution. It is shown that with proper constraints and well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved and the nonlinear minimization problem can be effectively solved by a Gauss–Newton method. Although uncertainties in the data and inaccuracies of linear solvers make it unlikely to obtain a smooth and convex objective function, it is shown that implicit filtering optimization methods can be used to avoid becoming trapped in local minima. Computational considerations, such as computing the Jacobian, are discussed, and numerical experiments are used to illustrate the behavior of the algorithms. Although the focus of the paper is on blind deconvolution, the general mathematical model addressed in this paper, and the approaches discussed to solve it, arise in many other applications.

10.
To address the slow convergence, low accuracy, and poor modeling efficiency caused by the excessive number of hyperparameters when the traditional Kriging model is applied to global optimization with many (high-dimensional) input variables, an efficient global optimization method based on the partial least squares (PLS) transformation and the Kriging model is proposed. First, a partial least squares Gaussian kernel function is constructed. Second, a differential evolution algorithm is used to search for new sample points that maximize the expected improvement criterion. Then, four efficient global optimization algorithms are constructed by combining different kernel functions with the expected improvement criterion, and these algorithms are compared. Finally, numerical examples show that, for high-dimensional global optimization problems, the PLS-transformed Kriging global optimization method outperforms standard global optimization algorithms in both convergence accuracy and convergence speed.
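As a concrete fragment (ours; variable names are illustrative), the expected improvement criterion that the differential evolution search maximizes is a closed-form function of the Kriging predictor's mean mu and standard deviation sigma at a candidate point, together with the best sampled objective value f_min.

```python
# Expected improvement for minimization:
# EI = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma,
# where mu and sigma come from the (PLS-transformed) Kriging predictor.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    sigma = np.maximum(sigma, 1e-12)       # guard against zero predicted variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```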

11.
The G-algorithm was proposed by Bareiss [1] as a method for solving the weighted linear least squares problem. It is a square-root-free algorithm similar to the fast Givens method, except that it triangularizes a rectangular matrix a column at a time instead of one element at a time. In this paper an error analysis of the G-algorithm is presented which shows that it is as stable as any of the standard orthogonal decomposition methods for solving least squares problems. The algorithm is shown to be a competitive method for sparse least squares problems. A pivoting strategy is given for heavily weighted problems similar to that in [14] for the Householder-Golub algorithm. The strategy is prohibitively expensive, but it is not necessary for most of the least squares problems that arise in practice. The research was supported by the National Science Foundation under contract no. MCS-8201065 and by the Office of Naval Research under contract no. N0014-80-0517.

12.
Two iterative algorithms are presented in this paper to compute the minimum-norm least squares solution of general linear matrix equations, which include the well-known Sylvester matrix equation and Lyapunov matrix equation as special cases. The first algorithm is based on the gradient-based searching principle, and the other can be viewed as its dual form. Necessary and sufficient conditions on the step sizes in these two algorithms are proposed to guarantee convergence for arbitrary initial conditions. A sufficient condition that is easy to compute is also given. Moreover, two methods are proposed to choose the optimal step sizes such that the convergence speeds of the algorithms are maximized. The first method minimizes the spectral radius of the iteration matrix, and an explicit expression for the optimal step size is obtained. The second method minimizes the sum of the squared F-norms of the error matrices produced by the algorithm, and it is shown that the optimal step size exists uniquely and lies in an interval. Several numerical examples are given to illustrate the efficiency of the proposed approach.
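A reduced sketch of the gradient-based principle (ours) for the special case A X B = C; the paper treats a more general equation and derives the optimal step sizes, whereas the step below is merely a safe, convergent choice.

```python
# Gradient iteration X_{k+1} = X_k + mu * A^T (C - A X_k B) B^T for the
# least squares solution of A X B = C; started from X = 0, the iterates
# stay in the right subspace and tend to the minimum-norm solution.
import numpy as np

def gradient_matrix_ls(A, B, C, iters=1000):
    # Conservative step size: 1 / (sigma_max(A)^2 * sigma_max(B)^2)
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        X = X + mu * A.T @ (C - A @ X @ B) @ B.T
    return X
```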

13.
Kernel logistic regression (KLR) is a very powerful algorithm that has been shown to be very competitive with many state-of-the-art machine learning algorithms such as support vector machines (SVM). Unlike SVM, KLR can be easily extended to multi-class problems and produces class posterior probability estimates, making it very useful for many real-world applications. However, training KLR using gradient-based methods or iterative re-weighted least squares can be unbearably slow for large datasets. Coupled with poor conditioning of the design matrix and parameter tuning, training KLR can quickly become infeasible for some real datasets. The goal of this paper is to present simple, fast, scalable, and efficient algorithms for learning KLR. First, based on a simple approximation of the logistic function, a least squares algorithm for KLR is derived that avoids the iterative tuning of gradient-based methods. Second, inspired by extreme learning machine (ELM) theory, an explicit feature space is constructed through a generalized single-hidden-layer feedforward network and used for training iterative re-weighted least squares KLR (IRLS-KLR) and the newly proposed least squares KLR (LS-KLR). Finally, for large-scale and/or poorly conditioned problems, a robust and efficient preconditioned learning technique is proposed for the algorithms presented in the paper. Numerical results on a series of artificial and 12 real benchmark datasets show, first, that LS-KLR compares favorably with SVM and traditional IRLS-KLR in terms of accuracy and learning speed. Second, the extension of ELM to KLR results in simple, scalable, and very fast algorithms with generalization performance comparable to their original versions. Finally, the introduced preconditioned learning method can significantly increase the learning speed of IRLS-KLR.
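To make the IRLS step concrete, here is a generic sketch (ours, not the paper's LS-KLR derivation) of iterative re-weighted least squares for ridge-penalized logistic regression; substituting an explicit ELM-style or kernel feature map for X gives the IRLS-KLR flavor discussed above.

```python
# IRLS (Newton) iteration for ridge-penalized logistic regression.
import numpy as np

def irls_logistic(X, y, lam=1e-3, iters=25):
    """y in {0, 1}; returns the weight vector w of the logistic model."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        W = p * (1.0 - p)                       # IRLS weights (diagonal)
        H = X.T @ (W[:, None] * X) + lam * np.eye(d)   # penalized Hessian
        g = X.T @ (y - p) - lam * w             # negative penalized gradient
        w = w + np.linalg.solve(H, g)           # weighted least squares step
    return w
```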

14.
A multilevel approach for nonnegative matrix factorization
Nonnegative matrix factorization (NMF), the problem of approximating a nonnegative matrix by the product of two low-rank nonnegative matrices, has been shown to be useful in many applications, such as text mining, image processing, and computational biology. In this paper, we explain how algorithms for NMF can be embedded into the framework of multilevel methods in order to accelerate their initial convergence. This technique can be applied when the data admit a good approximate representation in a lower-dimensional space through linear transformations preserving nonnegativity. Several simple multilevel strategies are described and are experimentally shown to significantly speed up three popular NMF algorithms (alternating nonnegative least squares, multiplicative updates, and hierarchical alternating least squares) on several standard image datasets.
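For reference, one of the three accelerated algorithms, the multiplicative-update iteration, is compact enough to quote in full (our sketch; the initialization and iteration count are illustrative).

```python
# Lee-Seung multiplicative updates for V ~ W H with W >= 0, H >= 0;
# eps guards against division by zero.
import numpy as np

def nmf_mu(V, r, iters=200, eps=1e-10):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update H, preserves H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update W, preserves W >= 0
    return W, H
```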

15.
The main idea of this paper is to apply adaptive iterative schemes based on regularization techniques to moderately ill-posed problems arising from a system of linear two-dimensional Volterra integral equations with a singular matrix in the leading part. Such problems may arise in the modeling of certain heat conduction processes as well as in dynamic simulation packages such as those for compressible flow through a plant piping network. Owing to the ill-posed nature of the first-kind Volterra equation that appears in the system, we focus on two families of regularization algorithms, i.e., the Landweber and Lavrentiev type methods, treating both exact and perturbed data. Our aim is to work directly with the original Volterra equations without any kind of reduction. Two fast iterative algorithms with reasonable computational complexity are developed. Numerical experiments on a few test problems illustrate the validity and efficiency of the proposed iterative methods in comparison with the classical regularization methods.
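As a baseline for the first family, a plain Landweber sketch (ours, for a generic discretized first-kind equation A x = b rather than the paper's Volterra system); the stopping index plays the role of the regularization parameter.

```python
# Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k), with early
# stopping as the regularizer.
import numpy as np

def landweber(A, b, iters):
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures 0 < omega < 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
    return x
```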

16.
The multiexponential analysis problem of fitting kinetic models to time-resolved spectra is often solved using gradient-based algorithms that treat the spectral parameters as conditionally linear. We compare the two most widely applied such algorithms, alternating least squares and variable projection. A numerical study examines computational efficiency and linear approximation standard error estimates. A new derivation of the Fisher information matrix under the full Golub-Pereyra gradient allows a numerical comparison of parameter precision under variable projection variants. Under the criteria of efficiency, quality of standard error estimates, and parameter precision, we conclude that the Kaufman variable projection technique performs well, while techniques based on alternating least squares have significant disadvantages for application in the problem domain.
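A toy version of the variable projection idea (ours; the model, data, and SciPy's default finite-difference Jacobian stand in for the Kaufman/Golub-Pereyra machinery the paper compares): the conditionally linear amplitudes c are eliminated by a least squares solve inside the residual, so the outer solver only sees the nonlinear rate parameters theta.

```python
# Variable projection for the exponential model y ~ sum_j c_j exp(-theta_j t).
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(theta, t, y):
    Phi = np.exp(-np.outer(t, theta))              # model matrix Phi(theta)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # eliminate linear params
    return Phi @ c - y                             # projected residual

t = np.linspace(0, 5, 100)
y = 2 * np.exp(-0.5 * t) + 1 * np.exp(-2.0 * t)    # synthetic, noise-free data
fit = least_squares(varpro_residual, x0=[0.3, 1.5], args=(t, y))
theta_hat = fit.x                                  # recovered rate parameters
```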

17.
Aiming at identifying nonlinear systems, one of the most challenging problems in system identification, a class of data-driven recursive least squares algorithms is presented in this work. First, a full form dynamic linearization based linear data model for nonlinear systems is derived. Then, a full form dynamic linearization-based data-driven recursive least squares identification method for estimating the unknown parameters of the obtained linear data model is proposed, along with a convergence analysis and prediction of the outputs subject to stochastic noise. Furthermore, a partial form dynamic linearization-based data-driven recursive least squares identification algorithm is also developed as a special case of the full form algorithm. The proposed identification algorithms for nonlinear nonaffine discrete-time systems are flexible in applications and do not rely on any explicit mechanism model information of the systems. Additionally, the number of parameters in the obtained linear data model can be tuned flexibly to reduce computational complexity. The validity of the two identification algorithms is verified by rigorous theoretical analysis and simulation studies.
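A generic recursive least squares update (our sketch, not the paper's dynamic-linearization-specific algorithm); phi would be the regressor produced by the full or partial form dynamic linearization and y the measured output.

```python
# Standard recursive least squares with an optional forgetting factor lam.
import numpy as np

class RLS:
    def __init__(self, n, lam=1.0, p0=1e4):
        self.theta = np.zeros(n)       # parameter estimate
        self.P = p0 * np.eye(n)        # inverse-covariance-like matrix
        self.lam = lam                 # forgetting factor (1.0 = none)

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta += k * (y - phi @ self.theta)    # innovation correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```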

18.
This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Despite arising frequently in many applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [M. Elad, Why simple shrinkage is still relevant for redundant representations? IEEE Trans. Inform. Theory 52 (12) (2006) 5559–5569], a new iterative method called parallel coordinate descent (PCD) was devised. We provide herein a convergence analysis of the PCD algorithm and also introduce a form of the regularization function that permits an analytical solution to the coordinate optimization. Several other recent works [I. Daubechies, M. Defrise, C. De-Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. LVII (2004) 1413–1457; M.A. Figueiredo, R.D. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process. 12 (8) (2003) 906–916; M.A. Figueiredo, R.D. Nowak, A bound optimization approach to wavelet-based image deconvolution, in: IEEE International Conference on Image Processing, 2005], which considered the deblurring problem in a Bayesian methodology, also obtained element-wise optimization algorithms. We show that the last three methods are essentially equivalent, and the unified method is termed separable surrogate functionals (SSF). We also provide a convergence analysis for SSF. To further accelerate PCD and SSF, we merge them into a recently developed sequential subspace optimization technique (SESOP), with almost no additional complexity. A thorough numerical comparison on the denoising application is presented, using the basis pursuit denoising (BPDN) objective function, which leads all of the above algorithms to an iterated shrinkage format. Both with synthetic data and with real images, the advantage of the combined PCD-SESOP method is demonstrated.
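For BPDN, the common iterated shrinkage format can be shown with a plain ISTA-style sketch (ours); PCD, SSF, and SESOP differ in how they scale and accelerate this step.

```python
# ISTA-style iterated shrinkage for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # soft thresholding

def ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (b - A @ x) / L, lam / L)     # shrinkage step
    return x
```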

19.
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate: they may fail to converge to an optimal value, or converge to a local rather than global optimum. Genetic algorithms have been applied successfully to function optimization and are therefore promising for nonlinear least squares estimation. This paper provides an illustration of a genetic algorithm applied to a simple nonlinear least squares example.
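In the same spirit, a toy mutation-only genetic algorithm for a two-parameter exponential fit (entirely our illustration, with arbitrary settings; the paper's example and GA operators may differ).

```python
# A minimal GA for nonlinear least squares: evolve parameter vectors to
# minimize the residual sum of squares of the model y ~ a * exp(b * t).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 50)
y = 1.5 * np.exp(0.8 * t) + rng.normal(0, 0.05, t.size)   # synthetic data

def rss(p):                                   # fitness: residual sum of squares
    return np.sum((p[0] * np.exp(p[1] * t) - y) ** 2)

pop = rng.uniform(-2, 2, size=(60, 2))        # initial population
for _ in range(200):
    fit = np.array([rss(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]       # selection: keep the best third
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.1, (40, 2))
    pop = np.vstack([parents, children])      # elitism + mutated offspring
best = pop[np.argmin([rss(p) for p in pop])]  # best (a, b) pair found
```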

20.
This paper focuses on efficient computational approaches for computing approximate solutions of a linear inverse problem whose measured data are contaminated with mixed Poisson–Gaussian noise and additional outliers. The Poisson–Gaussian noise leads to a weighted minimization problem with solution-dependent weights. To address outliers, the standard least squares fit-to-data metric is replaced by the Talwar robust regression function. Convexity, regularization parameter selection schemes, and incorporation of non-negativity constraints are investigated. A projected Newton algorithm is used to solve the resulting constrained optimization problem, and a preconditioner is proposed to accelerate conjugate gradient Hessian solves. Numerical experiments on problems from image deblurring illustrate the effectiveness of the methods.
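For concreteness, the Talwar function replaces the squared residual by a capped version, which amounts to hard 0/1 weighting of outliers in the robust solves (a sketch under our naming; tau is the user-chosen threshold).

```python
# Talwar robust loss and the induced IRLS weights: residuals beyond tau
# contribute a constant to the objective, i.e. outliers get zero weight.
import numpy as np

def talwar_loss(r, tau):
    return np.where(np.abs(r) <= tau, 0.5 * r**2, 0.5 * tau**2)

def talwar_weights(r, tau):
    return (np.abs(r) <= tau).astype(float)   # hard 0/1 outlier rejection
```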
