Similar Articles
 20 similar articles found (search time: 163 ms)
1.
Semiseparable matrices and many other rank-structured matrices have been widely used in developing new fast matrix algorithms. In this paper, we generalize the hierarchically semiseparable (HSS) matrix representations and propose some fast algorithms for HSS matrices. We represent HSS matrices in terms of general binary HSS trees and use simplified postordering notation for HSS forms. Fast HSS algorithms, including new HSS structure generation and HSS-form Cholesky factorization, are developed. Moreover, we provide a new linear-complexity explicit ULV factorization algorithm for symmetric positive definite HSS matrices with a low-rank property. The corresponding factors can be used to solve the HSS systems, also in linear complexity. Numerical examples demonstrate the efficiency of the algorithms. All these algorithms have nice data locality. They are useful in developing fast structured numerical methods for large discretized PDEs (such as elliptic equations), integral equations, eigenvalue problems, etc. Some applications are shown. Copyright © 2009 John Wiley & Sons, Ltd.

2.
Abstract

Test-based variable selection algorithms in regression are often based on sequential comparison of test statistics to cutoff values. A predetermined α level typically is used to determine the cutoffs based on an assumed probability distribution for the test statistic. For example, backward elimination or forward stepwise selection involves comparisons of test statistics to prespecified t or F cutoffs in Gaussian linear regression, while a likelihood ratio, Wald, or score statistic is typically used with standard normal or chi-square cutoffs in nonlinear settings. Although such algorithms enjoy widespread use, their statistical properties are not well understood, either theoretically or empirically. Two inherent problems with these methods are that (1) as in classical hypothesis testing, the value of α is arbitrary, and (2) unlike hypothesis testing, there is no simple analog of the type I error rate corresponding to application of the entire algorithm to a data set. In this article we propose a new method, backward elimination via cross-validation (BECV), for test-based variable selection in regression. It is implemented by first finding the empirical p value α* that minimizes a cross-validation estimate of squared prediction error, then selecting the model by running backward elimination on the entire data set using α* as the nominal p value for each test. We present results of an extensive computer simulation to evaluate BECV and compare its performance to standard backward elimination and forward stepwise selection.
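The backward-elimination component described above can be sketched as follows. This is an illustrative fragment, not the authors' implementation: the cross-validation search for α* is omitted, two-sided normal-approximation p-values stand in for the exact t tests, and the data-generating example is invented.

```python
import numpy as np
from math import erfc, sqrt

def backward_eliminate(X, y, alpha):
    """Backward elimination: repeatedly drop the least significant
    coefficient whose (normal-approximation) p-value exceeds alpha."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        n, p = Xk.shape
        sigma2 = resid @ resid / max(n - p, 1)
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        z = beta / np.sqrt(np.diag(cov))
        pvals = [erfc(abs(t) / sqrt(2)) for t in z]  # two-sided normal p-values
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break                 # every remaining variable is significant
        keep.pop(worst)           # drop the least significant variable
    return keep

# Invented example: only the first two covariates truly matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
selected = backward_eliminate(X, y, alpha=0.05)
```

In BECV proper, `alpha` would not be fixed at 0.05 but set to the empirical α* chosen by cross-validated prediction error.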

3.
We propose an outer-approximation (cutting plane) method for minimizing a function f(X) subject to semidefinite constraints on the variables X ∈ R^n. A number of efficient algorithms have been proposed when the objective function is linear, but there are very few practical algorithms when the objective function is nonlinear. The algorithm proposed here is a kind of outer-approximation (cutting plane) method, which has been successfully applied to several low-rank global optimization problems, including generalized convex multiplicative programming problems and generalized linear fractional programming problems. We show that this algorithm works well when f is convex and n is relatively small, and we provide a proof of its convergence under various technical assumptions.

4.
Abstract

Maximum likelihood estimation with nonnormal error distributions provides one method of robust regression. Certain families of normal/independent distributions are particularly attractive for adaptive, robust regression. This article reviews the properties of normal/independent distributions and presents several new results. A major virtue of these distributions is that they lend themselves to EM algorithms for maximum likelihood estimation. EM algorithms are discussed for least L_p regression and for adaptive, robust regression based on the t, slash, and contaminated normal families. Four concrete examples illustrate the performance of the different methods on real data.
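For the t family mentioned above, the EM iteration has a particularly simple form: the E-step computes case weights (ν + 1)/(ν + r_i²/σ²) that downweight outlying residuals, and the M-step is a weighted least-squares fit. A minimal numpy sketch under simplifying assumptions (fixed, not estimated, degrees of freedom ν; invented test data):

```python
import numpy as np

def t_robust_regression(X, y, nu=4.0, iters=50):
    """EM for linear regression with t-distributed errors (a member of
    the normal/independent family). E-step: weights that shrink the
    influence of large residuals. M-step: weighted least squares."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting point
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)        # E-step weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]  # M-step
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta

# Invented example with heavy-tailed (t with 3 df) noise.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=300)
beta = t_robust_regression(X, y)
```

A full adaptive implementation would also update ν (and could swap in slash or contaminated-normal weights), which this sketch omits.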

5.
Algorithms are proposed for the approximate calculation of the matrix product C̃ ≈ C = A · B, where the matrices A and B are given by their tensor decompositions in either canonical or Tucker format of rank r. The matrix C is not calculated as a full array; instead, it is first represented by a similar decomposition with a redundant rank and is then reapproximated (compressed) within the prescribed accuracy to reduce the rank. The available reapproximation algorithms as applied to the above problem require that an array containing r^{2d} elements be stored, where d is the dimension of the corresponding space. Due to the memory and speed limitations, these algorithms are inapplicable even for the typical values d = 3 and r ~ 30. In this paper, methods are proposed that approximate the mode factors of C using individually chosen accuracy criteria. As an application, the three-dimensional Coulomb potential is calculated. It is shown that the proposed methods are efficient if r can be as large as several hundreds and the reapproximation (compression) of C has low complexity compared to the preliminary calculation of the factors in the tensor decomposition of C with a redundant rank.

6.
In many problems involving generalized linear models, the covariates are subject to measurement error. When the number of covariates p exceeds the sample size n, regularized methods like the lasso or Dantzig selector are required. Several recent papers have studied methods which correct for measurement error in the lasso or Dantzig selector for linear models in the p > n setting. We study a correction for generalized linear models, based on Rosenbaum and Tsybakov’s matrix uncertainty selector. By not requiring an estimate of the measurement error covariance matrix, this generalized matrix uncertainty selector has a great practical advantage in problems involving high-dimensional data. We further derive an alternative method based on the lasso, and develop efficient algorithms for both methods. In our simulation studies of logistic and Poisson regression with measurement error, the proposed methods outperform the standard lasso and Dantzig selector with respect to covariate selection, by reducing the number of false positives considerably. We also consider classification of patients on the basis of gene expression data with noisy measurements. Supplementary materials for this article are available online.

7.
In the present paper, methods and algorithms for numerical solution of spectral problems and some problems in algebra related to them for one- and two-parameter polynomial and rational matrices are considered. A survey of known methods of solving spectral problems for polynomial matrices that are based on the rank factorization of constant matrices, i.e., that apply the singular value decomposition (SVD) and the normalized decomposition (the QR factorization), is given. The approach to the construction of methods that makes use of rank factorization is extended to one- and two-parameter polynomial and rational matrices. Methods and algorithms for solving some parametric problems in algebra based on ideas of rank factorization are presented. Bibliography: 326 titles. Dedicated to the memory of my son Alexander. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 238, 1997, pp. 7–328. Translated by V. N. Kublanovskaya.

8.
ABS methods are a large class of algorithms for solving continuous and integer linear algebraic equations, and nonlinear continuous algebraic equations, with applications to optimization. Recent work by Chinese researchers led by Zunquan Xia has extended these methods to stochastic, fuzzy, and infinite systems, extensions not considered here. The work on ABS methods began almost thirty years ago. It involved an international collaboration of mathematicians, especially from Hungary, England, China, and Iran, coordinated by the University of Bergamo. ABS methods are based on the rank-reducing matrix update due to Egerváry and can be considered the most fruitful extension of that technique. They have led to the unification of classes of methods for several problems. Moreover, they have produced some special algorithms with better complexity than the standard methods. For the linear integer case they have provided the most general polynomial-time class of algorithms so far known; such algorithms have been extended to other integer problems, such as linear inequalities and LP problems, in over a dozen papers written by Iranian mathematicians led by Nezam Mahdavi-Amiri. ABS methods can generally be implemented in a stable way, and techniques exist to enhance their accuracy. Extensive numerical experiments have shown that they can outperform standard methods on several problems. Here we provide a review of their main properties, for linear systems and optimization, and give the results of numerical experiments on some linear systems. This paper is dedicated to Professor Egerváry, developer of the rank-reducing matrix update that led to ABS methods.

9.
The linear regression problem is considered as an improper interpolation problem. The l1 metric is used to correct (approximate) all the initial data. A probabilistic justification of this metric in the case of the exponential noise distribution is given. The original improper interpolation problem is reduced to a finite set of linear programming problems. The corresponding computational algorithms are implemented in MATLAB.
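The reduction of l1 (least-absolute-deviations) regression to linear programming can be illustrated as follows. This is a generic sketch, not the paper's MATLAB code: auxiliary variables t_i ≥ |y_i − x_iᵀb| turn the nonsmooth objective into a linear one, and the function only builds the LP data, which any LP solver (e.g. scipy.optimize.linprog) can then consume.

```python
import numpy as np

def lad_as_lp(X, y):
    """Formulate min_b sum_i |y_i - x_i^T b| as a linear program in the
    stacked variables z = (b, t): minimize 1^T t subject to
        X b - t <=  y   (covers  y - Xb >= -t)
       -X b - t <= -y   (covers  y - Xb <=  t).
    Returns (c, A_ub, b_ub) in 'minimize c^T z s.t. A_ub z <= b_ub' form."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])   # cost only on the t_i
    A_ub = np.block([[ X, -np.eye(n)],
                     [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    return c, A_ub, b_ub

# Invented toy data: y is exactly linear in x, so b = (0, 1), t = 0 is feasible
# with objective value 0.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 1.0, 2.0])
c, A_ub, b_ub = lad_as_lp(X, y)
```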

10.
Abstract

We present an efficient algorithm for generating exact permutational distributions for linear rank statistics defined on stratified 2 × c contingency tables. The algorithm can compute exact p values and confidence intervals for a rich class of nonparametric problems. These include exact p values for stratified two-population Wilcoxon, Logrank, and Van der Waerden tests, exact p values for stratified tests of trend across several binomial populations, exact p values for stratified permutation tests with arbitrary scores, and exact confidence intervals for odds ratios embedded in stratified 2 × c tables. The algorithm uses network-based recursions to generate stratum-specific distributions and then combines them into an overall permutation distribution by convolution. Where only the tail area of a permutation distribution is desired, additional efficiency gains are achieved by backward induction and branch-and-bound processing of the network. The algorithm is especially efficient for highly imbalanced categorical data, a situation where the asymptotic theory is unreliable. The backward induction component of the algorithm can also be used to evaluate the conditional maximum likelihood, and its higher order derivatives, for the logistic regression model with grouped data. We illustrate the techniques with an analysis of two data sets: the leukemia data on survivors of the Hiroshima atomic bomb and data from an animal toxicology experiment provided by the U.S. Food and Drug Administration.

11.
《Quaestiones Mathematicae》2013,36(1-4):227-244
Abstract

In this paper the topological methods introduced by Kaplansky and the theory of linear compactifications are used to prove a result classifying the maximal immediate extensions of a valued field. Results on the existence of a complete discrete rank n valued field of characteristic 0 with prescribed residue class field of characteristic p > 0 are discussed. By applying results of Endler and Ribenboim, the existence of a valued field of characteristic 0 having a prescribed residue field of characteristic p > 0, when the value group has finite rank but need not be discrete, is demonstrated.

12.
Higham considered two types of nearest correlation matrix (NCM) problems, namely the W-weighted case and the H-weighted case. Since there exists a well-defined computable formula for the projection onto the symmetric positive semidefinite cone under the W-weighting, that case has been well studied and several efficient Lagrangian dual-based numerical methods are available. But these methods are not applicable to the H-weighted case, mainly due to the lack of a computable formula. The H-weighted case remains numerically challenging, especially for a highly ill-conditioned weight matrix H. In this paper, we aim to solve the dual form of the H-weighted NCM problem, which has three separable blocks in the objective function with the second part being linear. Based on the linear part, we reformulate it as a new problem with two separable blocks, and introduce the 2-block semi-proximal alternating direction method of multipliers to deal with it. The efficiency of the proposed algorithms is demonstrated on random test problems whose weight matrices H are highly ill-conditioned or rank deficient.

13.
《Optimization》2012,61(3):185-217
Two switching algorithms, QNSW1 and QNSW2, are proposed in this paper. These algorithms are developed based on the eigenvalues of matrices which are inertial to the symmetric rank-one (SR1) updates and the BFGS updates. First, theoretical results on the eigenvalues and condition numbers of these matrices are presented. Second, switching mechanisms are developed based on the theoretical results obtained, so that each proposed algorithm has the capability of applying appropriate updating formulae at each iterative point during the whole minimization process. Third, the performance of each of the proposed algorithms is evaluated over a wide range of test problems with variable dimensions. These results are then compared to the results obtained by some well-known minimization packages. Comparative results show that among the tested methods, the QNSW2 algorithm has the best overall performance for the problems examined. In some cases, the number of iterations and the number of function/gradient calls required by certain existing methods are more than a four-fold increase over that required by the proposed switching algorithms.

14.
15.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the problem of minimizing ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problem, i.e., the problem of minimizing an ℓ1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale ℓ1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale ℓ1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale ℓ1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed for these special classes of problems.
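For the ℓ1-regularized least squares case, a coordinate update (block size one) has a closed-form soft-thresholding solution. The sketch below is a generic cyclic coordinate descent for this problem, not the authors' CGD method with its Gauss-Southwell block selection rule; the test data are invented.

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form minimizer of (1/2)(b - z)^2 + lam*|b| up to scaling."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1.
    Maintains the residual r = y - X b so each coordinate step is O(n)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)        # ||X_j||^2 for each column
    r = y.copy()
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * beta[j]         # remove coordinate j's contribution
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * beta[j]         # restore with the updated value
    return beta

# Invented sparse-recovery example: only coordinate 0 is truly active.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
beta = lasso_cd(X, y, lam=5.0)
```

A CGD-style implementation would replace the cyclic sweep with a block chosen by a Gauss-Southwell rule and a gradient-based surrogate step, which is what yields the Q-linear rate cited above.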

16.
The most widely used stable methods for numerical determination of the rank of a matrix A are the singular value decomposition and the QR algorithm with column interchanges. Here two algorithms are presented which determine rank and nullity in a numerically stable manner without using column interchanges. One algorithm makes use of the condition estimator of Cline, Moler, Stewart, and Wilkinson and relative to alternative stable algorithms is particularly efficient for sparse matrices. The second algorithm is important in the case that one wishes to test for rank and nullity while sequentially adding columns to a matrix.
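For contrast with the interchange-free algorithms the abstract describes, the standard SVD route to numerical rank can be sketched in a few lines. The tolerance choice below is a common convention tied to the largest singular value, not taken from the paper.

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Numerical rank via the SVD: count singular values above a
    tolerance proportional to the largest one and machine epsilon."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        largest = s[0] if s.size else 0.0
        tol = max(A.shape) * np.finfo(A.dtype).eps * largest
    return int(np.sum(s > tol))

# A rank-1 4x3 matrix built as an outer product.
A = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
```

The incremental setting mentioned in the abstract (adding columns one at a time) is exactly where recomputing a full SVD per step becomes wasteful, motivating the paper's second algorithm.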

17.
Abstract

An updating algorithm for bivariate local linear regression is proposed. We assume a rectangular design and a polynomial kernel constrained to rectangular support as the weight function. Results for univariate regression estimators are extended to the bivariate setting. The updates are performed in a way that avoids most of the well-known numerical instabilities of a naive update implementation. Some simulation results illustrate the properties of several algorithms with respect to computing time and numerical stability.

18.
《Optimization》2012,61(6):1107-1130
ABSTRACT

We develop three algorithms to solve the subproblems generated by the augmented Lagrangian methods introduced by Iusem-Nasri (2010) for the equilibrium problem. The first algorithm that we propose incorporates the Newton method and the other two are instances of the subgradient projection method. One of our algorithms is also capable of solving nondifferentiable equilibrium problems. Using well-known test problems, all algorithms introduced here are implemented and numerical results are reported to compare their performances.

19.
Quick Approximation to Matrices and Applications
Given an m × n matrix A with entries between say −1 and 1, and an error parameter ε between 0 and 1, we find a matrix D (implicitly) which is the sum of simple rank-1 matrices so that the sum of entries of any submatrix (among the ) of (A − D) is at most εmn in absolute value. Our algorithm takes time dependent only on ε and the allowed probability of failure (not on m, n). We draw on two lines of research to develop the algorithms: one is built around the fundamental Regularity Lemma of Szemerédi in graph theory and the constructive version of Alon, Duke, Lefmann, Rödl, and Yuster. The second one is from the papers of Arora, Karger, and Karpinski, Fernandez de la Vega, and most directly Goldwasser, Goldreich, and Ron, who develop approximation algorithms for a set of graph problems, typical of which is the maximum cut problem. From our matrix approximation, the above graph algorithms, the Regularity Lemma, and several other results follow in a simple way. We generalize our approximations to multi-dimensional arrays and from that derive approximation algorithms for all dense Max-SNP problems. Received: July 25, 1997

20.
We consider primal-dual algorithms for certain types of infinite-dimensional optimization problems. Our approach is based on the generalization of the technique of finite-dimensional Euclidean Jordan algebras to the case of infinite-dimensional JB-algebras of finite rank. This generalization enables us to develop polynomial-time primal-dual algorithms for "infinite-dimensional second-order cone programs." We consider as an example a long-step primal-dual algorithm based on the Nesterov-Todd direction. It is shown that this algorithm can be generalized, along with complexity estimates, to the infinite-dimensional situation under consideration. An application is given to an important problem of control theory: multi-criteria analytic design of the linear regulator. The calculation of the Nesterov-Todd direction requires in this case solving one matrix differential Riccati equation plus solving a finite-dimensional system of linear algebraic equations on each iteration. The number of equations and unknown variables of this algebraic system is m+1, where m is the number of quadratic performance criteria. Key words: polynomial-time primal-dual interior-point methods – JB-algebras – infinite-dimensional problems – optimal control problems. This author was supported in part by DMS98-03191 and DMS01-02698. This author was supported in part by the Grant-in-Aid for Scientific Research (C) 11680463 of the Ministry of Education, Culture, Sports, Science and Technology of Japan. Mathematics Subject Classification (1991): 90C51, 90C48, 34H05, 49N05


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号