Similar Documents
20 similar documents found (search time: 31 ms)
1.
We present a new algorithm for solving a linear least squares problem with linear constraints. These are equality constraint equations and nonnegativity constraints on selected variables. This problem, while appearing to be quite special, is the core problem arising in the solution of the general linearly constrained linear least squares problem. The reduction of the general problem to the core problem can be done in many ways. We discuss three such techniques. The method employed for solving the core problem is based on combining the equality constraints with differentially weighted least squares equations to form an augmented least squares system. This weighted least squares system, which is equivalent to a penalty function method, is solved with nonnegativity constraints on selected variables. Three types of examples are presented that illustrate applications of the algorithm. The first is rank-deficient, constrained least squares curve fitting. The second is concerned with solving linear systems of algebraic equations with Hilbert matrices and bounds on the variables. The third illustrates a constrained curve fitting problem with inconsistent inequality constraints.
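A minimal sketch of the weighting idea described above, not the authors' implementation: the equality constraints Cx = d are stacked, with a large weight mu, on top of the least squares equations Ax ≈ b, and the augmented system is solved with nonnegativity bounds on a chosen subset of variables. The data, the value of mu, and the use of SciPy's lsq_linear are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

def weighted_constrained_lsq(A, b, C, d, nonneg_idx, mu=1e6):
    """Solve min ||Ax - b|| s.t. Cx = d (enforced by heavy weighting)
    with x[i] >= 0 for i in nonneg_idx.  A penalty-style sketch."""
    # Augmented system: the heavily weighted rows push Cx toward d.
    A_aug = np.vstack([mu * C, A])
    b_aug = np.concatenate([mu * d, b])
    # Lower bounds: 0 for the selected variables, -inf elsewhere.
    lb = np.full(A.shape[1], -np.inf)
    lb[list(nonneg_idx)] = 0.0
    res = lsq_linear(A_aug, b_aug, bounds=(lb, np.inf))
    return res.x

# Tiny illustrative example (data are made up).
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
C = np.array([[1.0, 1.0]])   # enforce x0 + x1 = 1
d = np.array([1.0])
print(weighted_constrained_lsq(A, b, C, d, nonneg_idx=[0, 1]))
```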

2.
It is well known that the standard algorithm for the mixed least squares-total least squares (MTLS) problem uses the QR factorization to reduce the original problem to a smaller standard total least squares problem, which can then be solved via the singular value decomposition (SVD). In this paper, the MTLS problem is proven to be closely related to a weighted total least squares problem in which the error-free columns are multiplied by a large weighting factor. A criterion for choosing the weighting factor is given, and, for the sake of stability in solving the MTLS problem, the Cholesky factorization-based inverse (Cho-INV) iteration and Rayleigh quotient iteration are also considered. For large-scale MTLS problems, numerical tests show that Cho-INV is superior to the standard QR-SVD method, especially when there is a large gap between the desired and undesired singular values and when the coefficient matrix has many more error-contaminated columns. Rayleigh quotient iteration is more efficient than QR-SVD in most cases but fails occasionally; in some cases it converges much faster than Cho-INV yet remains less efficient because of its higher computational cost.
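The weighting relation described above can be illustrated with a small sketch. Assumptions: A1 holds the error-free columns, A2 the error-contaminated ones, and omega is the large weighting factor; the block is a plain SVD-based TLS solve of the weighted problem, not the Cho-INV or Rayleigh quotient iterations studied in the paper.

```python
import numpy as np

def mtls_by_weighting(A1, A2, b, omega=1e6):
    """Approximate the mixed LS-TLS solution by scaling the error-free
    block A1 with a large factor omega and solving an ordinary TLS problem."""
    n1 = A1.shape[1]
    # Augmented matrix of the weighted TLS problem: [omega*A1, A2, b].
    C = np.hstack([omega * A1, A2, b.reshape(-1, 1)])
    # TLS solution comes from the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]
    y = -v[:-1] / v[-1]
    x1 = omega * y[:n1]   # undo the column scaling on the error-free block
    x2 = y[n1:]
    return np.concatenate([x1, x2])

rng = np.random.default_rng(0)
A1 = rng.standard_normal((20, 2))                 # exact columns
A2 = rng.standard_normal((20, 3))                 # columns subject to errors
x_true = np.arange(1.0, 6.0)
b = np.hstack([A1, A2]) @ x_true + 1e-3 * rng.standard_normal(20)
print(mtls_by_weighting(A1, A2, b))
```

Because the error-free block is scaled up by omega, any perturbation TLS assigns to it corresponds to a perturbation of size 1/omega in the original scale, which is the sense in which the weighted problem mimics MTLS.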

3.
An algorithm for solving nonlinear least squares problems with general linear inequality constraints is described. At each step, the problem is reduced to an unconstrained linear least squares problem in a subspace defined by the active constraints, which is solved using the quasi-Newton method. The major update formula is similar to the one given by Dennis, Gay and Welsch (1981). In this paper, we describe the implementation of the algorithm in detail, including the choice of the active set, the solution of the subproblem and the avoidance of zigzagging. We also prove the global convergence of the algorithm.
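This is not the active-set quasi-Newton algorithm of the paper, only a small baseline for the same problem class, assuming a residual function r and linear inequality constraints Gx ≤ h: minimize 0.5‖r(x)‖² with SciPy's general-purpose SLSQP solver. The residuals and constraint data are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonlinear residuals r(x) (illustrative only).
def residuals(x):
    return np.array([x[0] ** 2 + x[1] - 1.0,
                     x[0] - np.exp(-x[1]),
                     0.5 * x[0] * x[1]])

def objective(x):
    r = residuals(x)
    return 0.5 * r @ r

# General linear inequality constraints G x <= h (illustrative).
G = np.array([[1.0, 1.0], [-1.0, 0.0]])
h = np.array([1.5, 0.0])
cons = [{"type": "ineq", "fun": lambda x: h - G @ x}]  # SLSQP expects g(x) >= 0

res = minimize(objective, x0=np.array([0.5, 0.5]), method="SLSQP", constraints=cons)
print(res.x, res.fun)
```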

4.
This paper proposes a dimension-reduction algorithm for multiobjective programming problems with linear equality constraints. When the objective functions are all quadratic or linear, with at least one quadratic, the linear weighting method converts the original problem into a single-objective quadratic program, which the dimension-reduction method then turns into the solution of a single linear system of equations. When the objectives are not of this form, the original problem is first converted by linear weighting into a nonlinear program with linear equality constraints; the objective of this nonlinear program is then approximated quadratically, yielding a sequence of linear-equality-constrained quadratic programs that are solved by the dimension-reduction method until the required accuracy is reached.
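A minimal sketch of the quadratic case, stated under assumptions: two quadratic objectives ½xᵀQᵢx + cᵢᵀx are combined by linear weighting and the resulting equality-constrained quadratic program is reduced to one linear system. The particular reduction used here (a KKT system) is an illustrative stand-in, not necessarily the paper's dimension-reduction scheme.

```python
import numpy as np

def weighted_qp_via_kkt(Qs, cs, weights, A, b):
    """Linear weighting of quadratic objectives 0.5*x'Qi x + ci'x subject to
    A x = b, reduced to a single KKT linear system."""
    Q = sum(w * Qi for w, Qi in zip(weights, Qs))
    c = sum(w * ci for w, ci in zip(weights, cs))
    n, m = Q.shape[0], A.shape[0]
    # KKT system:  [Q  A'] [x  ]   [-c]
    #              [A  0 ] [lam] = [ b]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

Q1 = np.diag([2.0, 4.0]); c1 = np.array([-1.0, 0.0])
Q2 = np.diag([1.0, 1.0]); c2 = np.array([0.0, -2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(weighted_qp_via_kkt([Q1, Q2], [c1, c2], [0.5, 0.5], A, b))
```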

5.
Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints, and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is an efficient approach because the optimization problem has a closed form solution. The effectiveness of the proposed algorithm is investigated through solving three benchmark problems and one from a hydrological application. Results are compared with solutions found by lsqlin, and the quadratically constrained formulation is solved using the L-curve, maximum a posteriori estimation (MAP), and the χ² regularization method. The χ² regularization method with quadratic constraints is the most effective method for solving least squares problems with box constraints.
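One way to read the quadratic-constraint reformulation, stated as an assumption rather than the paper's exact construction: a box l ≤ x ≤ u with center c and half-widths w is relaxed to the ellipsoid ‖diag(1/w)(x − c)‖² ≤ n, and the resulting regularized problem has the closed-form Tikhonov-type solution below. The choice of the regularization parameter (by L-curve, MAP, or the χ² method) is outside this sketch.

```python
import numpy as np

def box_as_quadratic_lsq(A, b, lower, upper, lam):
    """Regularized LS solution of min ||Ax-b||^2 + lam*||D(x-c)||^2,
    where the box [lower, upper] is relaxed to an ellipsoidal constraint."""
    c = 0.5 * (lower + upper)            # box center
    w = 0.5 * (upper - lower)            # half-widths
    D = np.diag(1.0 / w)                 # scales the ellipsoid to a unit ball
    # Normal equations of the regularized problem (closed form).
    lhs = A.T @ A + lam * D.T @ D
    rhs = A.T @ b + lam * D.T @ D @ c
    return np.linalg.solve(lhs, rhs)

A = np.vander(np.linspace(0, 1, 30), 4)          # mildly ill-conditioned design
x_true = np.array([0.3, -0.2, 0.8, 0.1])
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(30)
lower = np.full(4, -1.0); upper = np.full(4, 1.0)
print(box_as_quadratic_lsq(A, b, lower, upper, lam=1e-2))
```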

6.
Due to the limitation of computational resources, traditional statistical methods are no longer applicable to large data sets. Subsampling is a popular method which can significantly reduce the computational burden. This paper considers a subsampling strategy based on the least absolute relative error in the multiplicative model for massive data. In addition, we employ the random weighting and least squares methods to handle the problem that the asymptotic covariance of the estimator is difficult to estimate directly. Moreover, comparisons among the least absolute relative error, least absolute deviation and least squares estimators under the optimal subsampling strategy are given in simulation studies and real examples.
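The paper's optimal subsampling strategy and the least absolute relative error criterion are not reproduced here; the sketch below only shows the generic pattern it builds on, fitting on a random row subsample of a large data set. Uniform sampling probabilities and an ordinary least squares fit of the log-transformed multiplicative model are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic multiplicative model y = exp(X beta) * noise, with large n.
n, p = 200_000, 5
X = rng.standard_normal((n, p))
beta_true = np.array([0.5, -0.3, 0.2, 0.1, -0.4])
y = np.exp(X @ beta_true) * np.exp(0.1 * rng.standard_normal(n))

# Uniform subsampling: fit on r rows instead of all n.
r = 2_000
idx = rng.choice(n, size=r, replace=False)
# On the log scale the multiplicative model becomes linear.
beta_hat, *_ = np.linalg.lstsq(X[idx], np.log(y[idx]), rcond=None)
print(beta_hat)
```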

7.
Some new perturbation results are presented for least squares problems with equality constraints. Relative errors are obtained for the perturbed solutions, the least squares residuals, and the vectors of Lagrange multipliers, based on the equivalence of the problem to an ordinary least squares problem and on optimal perturbation results for orthogonal projections.

8.
The nonlinear complementarity problem (NCP) is to find a point x ∈ R^n such that x^T f(x) = 0, x ≥ 0, f(x) ≥ 0 (1.1), where f is a continuously differentiable function from R^n into itself. It is well known that the NCP is equivalent to a system of smooth nonlinear equations with nonnegativity constraints, H(z) := (y − f(x), x ∘ y) = 0, s.t. x ≥ 0, y ≥ 0 (1.2), where z = (x, y) and x ∘ y = (x_1y_1, …, x_ny_n)^T. Based on this reformulation, many interior-point methods have been established; see, fo…
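A small sketch that only encodes the reformulation (1.2) quoted above and checks it on a toy NCP; the specific map f and the solution point are illustrative assumptions, and no solver from the paper is implemented.

```python
import numpy as np

def f(x):
    # A simple monotone map for illustration: f(x) = Mx + q.
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    return M @ x + q

def H(z):
    """Residual of the reformulation H(z) = (y - f(x), x*y) = 0 with x, y >= 0."""
    n = z.size // 2
    x, y = z[:n], z[n:]
    return np.concatenate([y - f(x), x * y])

# At an NCP solution, x >= 0, f(x) >= 0 and x'f(x) = 0, so H((x, f(x))) = 0.
x_sol = np.array([1.0 / 3.0, 1.0 / 3.0])     # solves Mx + q = 0 with x >= 0
z = np.concatenate([x_sol, f(x_sol)])
print(H(z))                                   # ~ zero vector
```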

9.
In this paper, the nonlinear complementarity problem is transformed into a least squares problem with nonnegativity constraints, and an SQP algorithm for this reformulation, based on a damped Gauss-Newton type method, is presented. It is shown that the algorithm is globally and locally superlinearly (quadratically) convergent without the assumption of monotonicity.
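A simplified projected, damped Gauss-Newton iteration for a nonnegativity-constrained least squares reformulation; the projection step and the fixed damping are illustrative simplifications, not the SQP algorithm of the paper, and the residual data are made up.

```python
import numpy as np

def projected_damped_gauss_newton(r, J, x0, alpha=0.5, iters=50):
    """Minimize 0.5*||r(x)||^2 subject to x >= 0 by damped Gauss-Newton
    steps followed by projection onto the nonnegative orthant."""
    x = np.maximum(x0, 0.0)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        # Damped Gauss-Newton step: (J'J + alpha*I) dx = -J'r.
        dx = np.linalg.solve(Jx.T @ Jx + alpha * np.eye(x.size), -Jx.T @ rx)
        x = np.maximum(x + dx, 0.0)      # projection keeps x feasible
    return x

# Toy residuals (illustrative): r(x) = A x - b with a nonnegative target.
A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 0.5, 0.6])
r = lambda x: A @ x - b
J = lambda x: A
print(projected_damped_gauss_newton(r, J, x0=np.array([1.0, 1.0])))
```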

10.
It is well known that weighting is one of the methods for solving equality-constrained indefinite least squares problems. By investigating, in the limiting sense, the nature of the hyperbolic MGS algorithm applied to the corresponding weighted problem, a class of elimination algorithms is obtained. Experiments show that the proposed algorithm attains the same accuracy as the existing GHQR algorithm in the literature, while its actual computational cost is only half that of GHQR.
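A sketch of the weighting idea only, not the hyperbolic MGS or elimination algorithms above: the indefinite least squares objective (Ax − b)ᵀJ(Ax − b) with a signature matrix J is combined with heavily weighted equality constraints Bx = d and solved through its stationarity system; positive definiteness of that system and all data values are assumptions made for illustration.

```python
import numpy as np

def weighted_equality_ils(A, b, J, B, d, mu=1e6):
    """Weighting approach for min (Ax-b)'J(Ax-b)  s.t.  Bx = d:
    solve the stationarity system of the penalized problem
    (A'JA + mu^2 B'B) x = A'Jb + mu^2 B'd  (assumed positive definite)."""
    lhs = A.T @ J @ A + mu**2 * (B.T @ B)
    rhs = A.T @ J @ b + mu**2 * (B.T @ d)
    return np.linalg.solve(lhs, rhs)

# Illustrative data with signature J = diag(+1, +1, +1, -1).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.2, 0.1]])
b = np.array([1.0, 2.0, 3.0, 0.1])
J = np.diag([1.0, 1.0, 1.0, -1.0])
B = np.array([[1.0, -1.0]])
d = np.array([0.5])
print(weighted_equality_ils(A, b, J, B, d))
```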

11.
It is shown how a least squares problem subject to equality constraints can be replaced by an unconstrained least squares problem. Constraints and equations may be nonlinear. The results appear too complicated to apply to general cases but can be used quite successfully for special problems such as the closing of balances.
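One common way to carry out such a replacement in the linear case, shown as an illustration (the paper's construction also covers nonlinear constraints, which this sketch does not): eliminate the constraints Cx = d through a particular solution and a null-space basis, leaving an unconstrained least squares problem in the reduced variable.

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def equality_lsq_via_elimination(A, b, C, d):
    """Solve min ||Ax - b||  s.t.  Cx = d  by eliminating the constraints:
    x = x_p + Z y with C x_p = d and Z a basis of null(C)."""
    x_p, *_ = lstsq(C, d)            # particular solution of the constraints
    Z = null_space(C)                # columns span the null space of C
    # Unconstrained reduced problem: min_y || (A Z) y - (b - A x_p) ||.
    y, *_ = lstsq(A @ Z, b - A @ x_p)
    return x_p + Z @ y

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([1.0])
x = equality_lsq_via_elimination(A, b, C, d)
print(x, C @ x)          # the constraint is satisfied exactly
```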

12.
In this paper we propose a projected semismooth Newton method to solve the problem of calibrating a least squares covariance matrix with equality and inequality constraints. The method is globally and quadratically convergent under suitable assumptions. The numerical results show that the proposed method is efficient and comparable with existing methods.

13.
We study a linear, discrete ill-posed problem, by which we mean a very ill-conditioned linear least squares problem. In particular, we consider the case when one is primarily interested in computing a functional defined on the solution rather than the solution itself. In order to alleviate the ill-conditioning we require the norm of the solution to be smaller than a given constant. Thus we are led to minimizing a linear functional subject to two quadratic constraints. We study existence and uniqueness for this problem and show that it is essentially equivalent to a least squares problem with a linear and a quadratic constraint, which is easier to handle computationally. Efficient algorithms are suggested for this problem.
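A direct numerical baseline for the problem stated above (minimize cᵀx subject to ‖Ax − b‖² ≤ ε² and ‖x‖² ≤ δ²) using a general-purpose solver; the data, tolerances, and starting point are made up, and this is not the specialized algorithm the paper develops.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# A mildly ill-conditioned model (illustrative data).
A = rng.standard_normal((30, 5)) @ np.diag([1.0, 0.5, 0.2, 1e-2, 1e-3])
x_exact = rng.standard_normal(5)
b = A @ x_exact + 1e-4 * rng.standard_normal(30)
c = rng.standard_normal(5)               # we only want the functional c'x
eps2 = (2e-3) ** 2                       # residual tolerance (assumed)
delta2 = 4.0 * (x_exact @ x_exact)       # norm bound on the solution (assumed)

cons = [{"type": "ineq", "fun": lambda x: eps2 - np.sum((A @ x - b) ** 2)},
        {"type": "ineq", "fun": lambda x: delta2 - x @ x}]
x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # starting point near feasibility
res = minimize(lambda x: c @ x, x0=x0, method="SLSQP", constraints=cons)
print(res.fun, c @ x_exact)
```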

14.
The integer least squares problem is an important problem that arises in numerous applications. We propose a real relaxation-based branch-and-bound (RRBB) method for this problem. First, we define a quantity called the distance to integrality, propose it as a measure of the number of nodes in the RRBB enumeration tree, and provide computational evidence that the size of the RRBB tree is proportional to this distance. Since we cannot know the distance to integrality a priori, we prove that the norm of the Moore–Penrose generalized inverse of the matrix of coefficients is a key factor for bounding this distance, and then we propose a preconditioning method to reduce this norm using lattice reduction techniques. We also propose a set of valid box constraints that help accelerate the RRBB method. Our computational results show that the proposed preconditioning significantly reduces the size of the RRBB enumeration tree, that the preconditioning combined with the proposed set of box constraints can significantly reduce the computational time of RRBB, and that the resulting RRBB method can outperform the Schnorr–Euchner method, a widely used method for solving integer least squares problems, on some types of problem data.
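Not the RRBB method itself: the sketch below only sets up a small integer least squares instance, computes the real relaxation and a naive rounding, evaluates the norm of the Moore–Penrose pseudoinverse that the paper uses to bound the distance to integrality, and brute-forces a tiny neighborhood for comparison. Sizes and data are illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 3))
z_true = rng.integers(-3, 4, size=3)
b = A @ z_true + 0.05 * rng.standard_normal(8)

# Real relaxation and naive rounding (a cheap incumbent for branch-and-bound).
x_real = np.linalg.lstsq(A, b, rcond=None)[0]
z_round = np.rint(x_real).astype(int)

# Conditioning indicator related to the distance-to-integrality bound.
pinv_norm = np.linalg.norm(np.linalg.pinv(A), 2)

# Brute-force check over a small box around the rounded point (tiny instances only).
best, best_val = None, np.inf
for z in product(*[range(int(zi) - 2, int(zi) + 3) for zi in z_round]):
    val = np.linalg.norm(A @ np.array(z) - b)
    if val < best_val:
        best, best_val = np.array(z), val
print(z_true, z_round, best, pinv_norm)
```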

15.
The linear least squares problem, min_x ∥Ax − b∥_2, is solved by applying a multisplitting (MS) strategy in which the system matrix is decomposed by columns into p blocks. The b and x vectors are partitioned consistently with the matrix decomposition. The global least squares problem is then replaced by a sequence of local least squares problems which can be solved in parallel by MS. In MS the solutions to the local problems are recombined using weighting matrices to pick out the appropriate components of each subproblem solution. A new two-stage algorithm which optimizes the global update each iteration is also given. For this algorithm the updates are obtained by finding the optimal update with respect to the weights of the recombination. For the least squares problem presented, the global update optimization can also be formulated as a least squares problem of dimension p. Theoretical results are presented which prove the convergence of the iterations. Numerical results which detail the iteration behavior relative to subproblem size, convergence criteria and recombination techniques are given. The two-stage MS strategy is shown to be effective for near-separable problems. © 1998 John Wiley & Sons, Ltd.
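A toy block-Jacobi style iteration in the spirit of the column multisplitting described above: each local problem re-solves its own block of unknowns against the current global residual, and the recombination uses equal damping weights 1/p. The block sizes, weights, fixed iteration count, and the near-separable test matrix are illustrative assumptions; the two-stage optimal update is not implemented.

```python
import numpy as np

def column_multisplitting_lsq(A_blocks, b, iters=50):
    """Toy column-multisplitting iteration for min ||A x - b|| with
    A = [A_1, ..., A_p]; local problems are solvable in parallel."""
    p = len(A_blocks)
    xs = [np.zeros(Ai.shape[1]) for Ai in A_blocks]
    for _ in range(iters):
        r = b - sum(Ai @ xi for Ai, xi in zip(A_blocks, xs))
        # Local least squares problems (independent of each other).
        dxs = [np.linalg.lstsq(Ai, r, rcond=None)[0] for Ai in A_blocks]
        # Damped recombination with equal weights 1/p.
        xs = [xi + dxi / p for xi, dxi in zip(xs, dxs)]
    return np.concatenate(xs)

# Near-separable test problem: strong diagonal blocks, weak coupling.
rng = np.random.default_rng(11)
A1 = np.vstack([2.0 * np.eye(3) + 0.01 * rng.standard_normal((3, 3)),
                0.01 * rng.standard_normal((3, 3))])
A2 = np.vstack([0.01 * rng.standard_normal((3, 3)),
                2.0 * np.eye(3) + 0.01 * rng.standard_normal((3, 3))])
x_true = np.arange(1.0, 7.0)
b = np.hstack([A1, A2]) @ x_true
print(column_multisplitting_lsq([A1, A2], b))
```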

16.
The nonsymmetric semidefinite least squares problem (NSDLS) is to find a nonsymmetric semidefinite matrix which is closest to a given matrix in the Frobenius norm. It is an extension of the semidefinite least squares problem (SDLS) and has important applications in robotics and automation. In this note, by developing the minimal representation of the underlying cone together with the linear constraints, we obtain a regularized strong duality with low-dimensional projection for the NSDLS. Further, we study the generalized differential properties and nonsingularity of the first-order optimality system of the dual problem. These theoretical results demonstrate that the NSDLS can be solved as effectively as the SDLS is by current Lagrangian dual approaches.

17.
In this paper, we introduce a kind of complex representation of quaternion matrices (or quaternion vectors) and quaternion matrix norms, study the quaternionic least squares problem with quadratic inequality constraints (LSQI) by means of the generalized singular value decomposition (GSVD) of quaternion matrices, and derive a practical algorithm for finding solutions of the quaternionic LSQI problem in quaternionic quantum theory.
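The complex-representation starting point can be illustrated concretely. The sketch below uses one standard 2m×2n complex embedding of a quaternion matrix Q = Q1 + Q2·j with complex blocks Q1, Q2 (the paper may use a different but equivalent convention), and checks that it respects quaternion matrix multiplication; the GSVD-based LSQI algorithm itself is not reproduced.

```python
import numpy as np

def complex_representation(Q1, Q2):
    """Complex representation of the quaternion matrix Q = Q1 + Q2*j,
    where Q1, Q2 are complex arrays of the same shape."""
    return np.block([[Q1, Q2],
                     [-np.conj(Q2), np.conj(Q1)]])

# Small check: quaternion matrix multiplication corresponds to
# multiplication of the complex representations.
rng = np.random.default_rng(5)
A1, A2 = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(2))
B1, B2 = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(2))
# (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j
C1 = A1 @ B1 - A2 @ np.conj(B2)
C2 = A1 @ B2 + A2 @ np.conj(B1)
lhs = complex_representation(A1, A2) @ complex_representation(B1, B2)
rhs = complex_representation(C1, C2)
print(np.allclose(lhs, rhs))   # True: the representation is multiplicative
```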

18.
In previous work we introduced a construction to produce biorthogonal multiresolutions from given subdivisions. The approach involved estimating the solution to a least squares problem by means of a number of smaller least squares approximations on local portions of the data. In this work we use a result by Dahlquist et al. on the method of averages to make observational comparisons between this local least squares estimation and full least squares approximation. We have explored examples in two problem domains: data reduction and data approximation. We observe that, particularly for design matrices with a repetitive pattern of column entries, the least squares solution is often well estimated by local least squares, that the estimation rapidly improves with the size of the local least squares problems, and that the quality of the estimate is largely independent of the size of the full problem. In memory of Germund Dahlquist (1925–2005). AMS subject classification (2000): 93E24.

19.
Nikazad, T., Abbasi, M., Afzalipour, L., Elfving, T. Numerical Algorithms 90(3), 1253–1277 (2022)
In this paper, we consider a regularized least squares problem subject to convex constraints. Our algorithm is based on the superiorization technique, equipped with a new...
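A generic superiorization loop in the spirit described above, not the authors' algorithm: the basic feasibility-seeking step is a projection onto a box (standing in for the convex constraints), and it is perturbed by shrinking steps along the negative gradient of a regularized least squares objective. All data, the constraint set, and the step-size schedule are illustrative assumptions.

```python
import numpy as np

def superiorized_projection(A, b, lower, upper, lam=1e-2, iters=200, a=0.95):
    """Superiorization sketch: objective-reducing perturbations (negative
    gradient of 0.5*||Ax-b||^2 + 0.5*lam*||x||^2 with shrinking sizes)
    interleaved with the basic feasibility-seeking step (box projection)."""
    x = np.zeros(A.shape[1])
    beta = 1.0
    for _ in range(iters):
        g = A.T @ (A @ x - b) + lam * x          # gradient of the target function
        nrm = np.linalg.norm(g)
        if nrm > 0:
            x = x - beta * g / nrm               # bounded perturbation step
        beta *= a                                # summable, shrinking step sizes
        x = np.clip(x, lower, upper)             # basic algorithm: project on the box
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))
x_true = np.clip(rng.standard_normal(10), -1, 1)
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(superiorized_projection(A, b, lower=-1.0, upper=1.0))
```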

20.
In this paper an implementation is discussed of a modified CANDECOMP algorithm for fitting Lazarsfeld's latent class model. The CANDECOMP algorithm is modified so that the resulting parameter estimates are non-negative and ‘best asymptotically normal’. To achieve this, the modified CANDECOMP algorithm minimizes a weighted least squares function instead of the unweighted least squares function used by the traditional CANDECOMP algorithm. To evaluate the new procedure, the modified CANDECOMP procedure with different weighting schemes is compared, on five published data sets, with the widely used iterative proportional fitting procedure for obtaining maximum likelihood estimates of the parameters in the latent class model. It is found that, with appropriate weights, the modified CANDECOMP algorithm yields solutions that are nearly identical to those obtained by the maximum likelihood procedure. While the modified CANDECOMP algorithm tends to be computationally more intensive than the maximum likelihood method, it is very flexible in that it easily allows one to try out different weighting schemes.
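The full modified CANDECOMP procedure is not reproduced here; the sketch shows only the weighted non-negative least squares building block that such a procedure relies on, solving min ‖diag(√w)(Ax − b)‖ with x ≥ 0 by rescaling and calling a standard NNLS routine. The data and the weighting scheme are illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls

def weighted_nnls(A, b, w):
    """Solve min_x ||diag(sqrt(w)) (A x - b)||  subject to x >= 0
    by rescaling rows and calling an ordinary NNLS solver."""
    sw = np.sqrt(w)
    x, _ = nnls(A * sw[:, None], b * sw)
    return x

A = np.array([[1.0, 0.2], [0.3, 1.0], [0.8, 0.8], [0.1, 0.4]])
b = np.array([0.9, 1.1, 1.5, 0.3])
w = np.array([4.0, 1.0, 1.0, 0.25])     # illustrative weights
print(weighted_nnls(A, b, w))
```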
