20 related records found
1.
Pivot selection methods of the Devex LP code (cited by 3: 0 self-citations, 3 by others)
Paula M. J. Harris 《Mathematical Programming》1973,5(1):1-28
Pivot column and row selection methods used by the Devex code since 1965 are published here for the first time. After a fresh look at the iteration process, the author introduces dynamic column weighting factors as a means of estimating gradients for the purpose of selecting a maximum gradient column. The consequent effect of this column selection on rounding error is observed. By allowing that a constraint may not be positioned so exactly as its precise representation in the computer would imply, a wider choice of pivot row is made available, so making room for a further selection criterion based on pivot size. Three examples are given of problems having between 2500 and 5000 rows, illustrating the overall time and iteration advantages over the standard simplex methods used today. The final illustration highlights why these standard methods take so many iterations. These algorithms were originally coded for the Atlas computer and were re-coded in 1969 for the Univac 1108.
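The weighting scheme described here became known as Devex pricing. As a rough illustration of the selection rule only (not Harris's original Atlas/Univac code), a minimal numpy sketch, assuming the simplex driver maintains the current reduced costs and the Devex reference weights:

```python
import numpy as np

def devex_entering_column(reduced_costs, weights, tol=1e-9):
    """Pick the entering column with the largest weighted gradient d_j^2 / w_j.

    reduced_costs : reduced costs d_j of the nonbasic columns (minimization:
                    only columns with d_j < 0 are attractive)
    weights       : Devex reference weights w_j (>= 1), updated each iteration
    Returns the chosen column index, or None if no column is attractive.
    """
    d = np.asarray(reduced_costs, dtype=float)
    w = np.asarray(weights, dtype=float)
    scores = np.where(d < -tol, d * d / w, -np.inf)  # score improving columns only
    j = int(np.argmax(scores))
    return j if np.isfinite(scores[j]) else None
```

In a full implementation the weights start from a reference framework, are enlarged after every pivot, and are periodically reset when they grow too large.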
2.
Mathematical Programming - We propose a new method for simplifying semidefinite programs (SDP) inspired by symmetry reduction. Specifically, we show if an orthogonal projection map satisfies...
3.
In multiobjective optimization one often faces a large number of objectives. With more than two objectives, representing and visualizing the solutions in the objective space becomes difficult, so the trade-offs between the alternative solutions are not clear to the decision maker. This creates enormous difficulties when choosing a solution from the Pareto-optimal set and constitutes a central question in the process of decision making. Based on statistical methods such as Principal Component Analysis and Cluster Analysis, the problem of reducing the number of objectives is addressed. Several test examples with different numbers of objectives have been studied in order to evaluate the decision-making process through these methods. Preliminary results indicate that this statistical approach can be a valuable tool for decision making in multiobjective optimization. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
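As a rough illustration of the statistical idea (a generic sketch, not the authors' exact procedure), PCA on the matrix of objective values attained by a set of Pareto-optimal solutions reveals how many independent directions the objectives really span; here `F` is assumed to hold one solution per row and one objective per column:

```python
import numpy as np

def objective_pca(F, var_threshold=0.95):
    """PCA on the (solutions x objectives) matrix of objective values.

    Returns the number of components needed to explain `var_threshold` of
    the variance, plus the component loadings, whose large entries show
    which objectives carry independent information (candidates to keep).
    """
    Z = (F - F.mean(axis=0)) / F.std(axis=0, ddof=1)   # standardise objectives
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_threshold)) + 1
    return k, Vt[:k]          # loadings: rows = components, cols = objectives
```

Strongly correlated objectives load on the same component, which is what flags them as redundant for the decision maker.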
4.
Distance-based regression allows for a neat implementation of the partial least squares recurrence. In this paper, we address practical issues arising when dealing with moderately large datasets (n ~ 10^4) such as those typical of automobile insurance premium calculations.
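For context, the partial least squares recurrence mentioned above is usually implemented as the NIPALS iteration. A minimal sketch of plain PLS1 on centred data (standard NIPALS, not the distance-based variant of the paper):

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Plain PLS1 via the NIPALS recurrence on centred X (n x p) and y (n,).

    Deflates X against successive score vectors; returns the weight,
    score and loading matrices (W, T, P) and the regression loadings q.
    """
    X = X.copy()
    n, p = X.shape
    W = np.zeros((p, n_components))
    T = np.zeros((n, n_components))
    P = np.zeros((p, n_components))
    q = np.zeros(n_components)
    for k in range(n_components):
        w = X.T @ y                      # direction of maximal covariance with y
        w /= np.linalg.norm(w)
        t = X @ w                        # scores
        p_k = X.T @ t / (t @ t)          # X loadings
        q[k] = y @ t / (t @ t)           # y loadings
        X -= np.outer(t, p_k)            # deflate X and recur
        W[:, k], T[:, k], P[:, k] = w, t, p_k
    return W, T, P, q
```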
5.
Zhi-Hua Xiao 《Mathematical and Computer Modelling of Dynamical Systems: Methods, Tools and Applications in Engineering and Related Sciences》2013,19(4):414-432
In this article, we discuss time-domain dimension reduction methods for second-order systems based on general orthogonal polynomials, and present a structure-preserving dimension reduction method for second-order systems. The resulting reduced systems not only preserve the second-order structure but also guarantee stability under certain conditions. An error estimate for the reduced models is also given. The effectiveness of the proposed methods is demonstrated by three test examples.
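As a schematic illustration of structure preservation (the paper derives the projection basis from general orthogonal polynomial expansions, which is not reproduced here), projecting the three coefficient matrices of M q'' + D q' + K q = B u by congruence keeps the reduced model in second-order form; an orthonormal basis `V` is assumed to have been computed already:

```python
import numpy as np

def reduce_second_order(M, D, K, B, C, V):
    """Structure-preserving projection of M q'' + D q' + K q = B u, y = C q.

    V : (n x r) matrix with orthonormal columns spanning the reduced subspace.
    The congruence transforms keep the reduced model in second-order form
    and preserve symmetry/definiteness of M, D, K when present.
    """
    Mr = V.T @ M @ V
    Dr = V.T @ D @ V
    Kr = V.T @ K @ V
    Br = V.T @ B
    Cr = C @ V
    return Mr, Dr, Kr, Br, Cr
```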
6.
V. B. Akkerman M. L. Zaytsev 《Computational Mathematics and Mathematical Physics》2011,51(8):1418-1430
A method is proposed for transforming the Euler and Navier–Stokes equations, and the complete system of fluid dynamics equations in three dimensions, into a closed system on an arbitrary moving surface. As a result, for an arbitrary geometric configuration, the dimension of the equations is reduced by one, which makes them convenient for numerical simulation. The general principles of the method are described, and verification examples are presented.
7.
Sufficient Dimension Reduction (SDR) in regression comprises the estimation of the dimension of the smallest (central) dimension reduction subspace and its basis elements. For SDR methods based on a kernel matrix, such as SIR and SAVE, the dimension estimation is equivalent to estimating the rank of a random matrix, namely the sample-based estimate of the kernel. A test for the rank of a random matrix amounts to testing how many of its eigenvalues or singular values are equal to zero. We propose two tests based on the smallest eigenvalues or singular values of the estimated matrix: an asymptotic weighted chi-square test and a Wald-type asymptotic chi-square test. We also provide an asymptotic chi-square test for assessing whether elements of the left singular vectors of the random matrix are zero. Together, these methods constitute a unified approach for all SDR methods based on a kernel matrix that covers estimation of the central subspace and its dimension, as well as assessment of variable contribution to the lower-dimensional predictor projections, with variable selection as a special case. A small power simulation study shows that the proposed tests and the existing tests specific to each SDR method perform similarly with respect to power and achievement of the nominal level. The importance of the choice of the number of slices as a tuning parameter is also further exhibited.
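To make the rank-testing idea concrete, here is a heavily simplified sketch of a sequential test in this spirit: the classical chi-square approximation for SIR (not the authors' weighted or Wald-type statistics), valid under normal predictors:

```python
import numpy as np
from scipy.stats import chi2

def sir_rank_test(M_hat, n, d, h):
    """Test H0: rank = d for an estimated SIR kernel matrix M_hat (p x p).

    Classical statistic: n times the sum of the p - d smallest eigenvalues,
    referred to a chi-square with (p - d)(h - d - 1) degrees of freedom,
    where h is the number of slices and n the sample size.
    """
    p = M_hat.shape[0]
    eigvals = np.linalg.eigvalsh(M_hat)            # ascending order
    stat = n * float(np.sum(eigvals[: p - d]))
    df = (p - d) * (h - d - 1)
    return stat, chi2.sf(stat, df)
```

Testing d = 0, 1, 2, ... in turn and stopping at the first non-rejection estimates the dimension of the central subspace.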
8.
Annals of the Institute of Statistical Mathematics - To obtain M-estimators of a response variable when the data are missing at random, we can construct three bias-corrected nonparametric...
9.
Katherine Morris Paul D. McNicholas Luca Scrucca 《Advances in Data Analysis and Classification》2013,7(3):321-338
We introduce a dimension reduction method for model-based clustering obtained from a finite mixture of t-distributions. This approach is based on existing work on reducing dimensionality in the case of finite Gaussian mixtures. The method relies on identifying a reduced subspace of the data by considering the extent to which group means and group covariances vary. This subspace contains linear combinations of the original data, which are ordered by importance via the associated eigenvalues. Observations can be projected onto the subspace, and the resulting set of variables captures most of the clustering structure available in the data. The approach is illustrated using simulated and real data, where it outperforms its Gaussian analogue.
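As a schematic illustration of the subspace construction (a generic sketch in the spirit of the Gaussian approach the method extends, not the authors' t-mixture estimators), the directions can be found from a generalized eigenproblem that contrasts variation in the fitted component means and covariances against the pooled covariance:

```python
import numpy as np
from scipy.linalg import eigh

def cluster_dr_directions(pis, mus, Sigmas):
    """Schematic subspace directions for model-based clustering DR.

    pis    : (K,) mixing proportions;  mus : (K, p) component means
    Sigmas : (K, p, p) component covariances
    Solves M v = lambda * Sigma_bar v, where M gathers the scatter of the
    means and of the covariances; eigenvalues order the directions.
    """
    pis, mus, Sigmas = np.asarray(pis), np.asarray(mus), np.asarray(Sigmas)
    mu_bar = np.einsum('k,kp->p', pis, mus)
    dm = mus - mu_bar                               # mean deviations
    M_means = np.einsum('k,kp,kq->pq', pis, dm, dm)
    Sigma_bar = np.einsum('k,kpq->pq', pis, Sigmas) # pooled covariance
    dS = Sigmas - Sigma_bar                         # covariance deviations
    M_covs = np.einsum('k,kpr,krq->pq', pis, dS, dS)
    evals, evecs = eigh(M_means + M_covs, Sigma_bar)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]            # columns = directions
```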
10.
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension, and this selected estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR) and SAVE. Methods such as SIR and SAVE place the same weight on each observation when estimating the central subspace (CS). By introducing a weight function, WVE places different weights on different observations according to their distance from the CS. The weight function gives WVE very good performance in general and in complicated situations, for example when the distribution of the regressors deviates severely from the elliptical distributions on which many methods, such as SIR, rely; compared with many existing methods, WVE is insensitive to the distribution of the regressors. The consistency of the WVE is established. Simulations comparing the performance of WVE with other existing methods confirm its advantage. This work was supported by the National Natural Science Foundation of China (Grant No. 10771015).
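Since WVE contains SAVE as its unweighted special case, a minimal sketch of that special case helps fix ideas (standard SAVE with uniform weights; the weight function of the paper is not reproduced):

```python
import numpy as np

def save_directions(X, y, n_slices=5):
    """Sliced Average Variance Estimation (the unweighted special case of WVE).

    Standardises X, slices the response, and eigen-decomposes
    M = sum_s p_s (I - Cov(Z | slice s))^2; the leading eigenvectors span
    the estimated central subspace (back-transformed to the X scale).
    """
    n, p = X.shape
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    S_inv_half = np.linalg.inv(np.linalg.cholesky(S)).T  # whitening transform
    Z = (X - mu) @ S_inv_half                            # Cov(Z) = I
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    sid = np.clip(np.searchsorted(edges, y, side='right') - 1, 0, n_slices - 1)
    M = np.zeros((p, p))
    for s in range(n_slices):
        Zs = Z[sid == s]
        if len(Zs) < 2:
            continue
        V = np.eye(p) - np.cov(Zs, rowvar=False)
        M += (len(Zs) / n) * V @ V
    evals, evecs = np.linalg.eigh(M)
    order = np.argsort(evals)[::-1]
    return S_inv_half @ evecs[:, order]   # columns = estimated directions
```

Conceptually, WVE replaces the uniform slice weights with observation-specific weights driven by distance from the central subspace.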
11.
Science China Mathematics - In this study, we propose nonparametric testing for heteroscedasticity in nonlinear regression models based on pairwise distances between points in a sample. The test...
12.
Antonella Plaia Mariangela Sciandra 《Advances in Data Analysis and Classification》2019,13(2):427-444
Advances in Data Analysis and Classification - Within the framework of preference rankings, the interest can lie in finding which predictors and which interactions are able to explain the observed...
13.
Nadia Solaro Alessandro Barbiero Giancarlo Manzi Pier Alda Ferrari 《Advances in Data Analysis and Classification》2017,11(2):395-414
Missing data recurrently affect datasets in almost every field of quantitative research. The subject is vast and complex and has given rise to a literature rich in very different approaches to the problem. Within an exploratory framework, distance-based methods such as nearest-neighbour imputation (NNI), or procedures involving multivariate data analysis (MVDA) techniques, seem to treat the problem properly. In NNI, the metric and the number of donors can be chosen at will. MVDA-based procedures expressly account for variable associations. The new approach proposed here, called Forward Imputation, is designed to combine these features. It is a sequential procedure that imputes missing data in a step-by-step process involving subsets of units ordered by their "completeness rate". Two methods within this framework are developed for the imputation of quantitative data: one applies NNI with the Mahalanobis distance, the other combines NNI and principal component analysis. Statistical properties of the two methods are discussed, and their performance is assessed, also in comparison with alternative imputation methods. To this end, a simulation study covering different data patterns, along with an application to real data, is carried out, and practical hints for users are provided.
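A heavily simplified sketch of the sequential NNI idea (donor matching by Mahalanobis distance over units ordered by completeness; the PCA-based variant and the paper's exact ordering rules are not reproduced):

```python
import numpy as np

def forward_nni_impute(X):
    """Sequential nearest-neighbour imputation in the Forward Imputation
    spirit: units are processed from most to least complete, and each
    missing block is copied from the Mahalanobis-nearest donor among the
    currently complete units (the donor pool grows as units are finished).
    """
    X = X.astype(float).copy()
    missing = np.isnan(X)
    complete = ~missing.any(axis=1)                      # initial donor pool
    todo = np.where(~complete)[0]
    todo = todo[np.argsort(missing[todo].sum(axis=1))]   # most complete first
    for i in todo:
        donors = np.where(complete)[0]
        obs = ~missing[i]                                # observed coordinates
        S_inv = np.linalg.pinv(np.cov(X[donors][:, obs], rowvar=False))
        diff = X[donors][:, obs] - X[i, obs]
        d2 = np.einsum('nj,jk,nk->n', diff, S_inv, diff) # Mahalanobis distances
        best = donors[np.argmin(d2)]
        X[i, missing[i]] = X[best, missing[i]]           # copy donor values
        complete[i] = True                               # i joins the donor pool
    return X
```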
14.
15.
《Optimization》2012,61(2-3):143-160
In the first part, different characterizations of the dimension of the feasible set in linear semi-infinite programming are provided. They involve the corresponding dimensions of certain parameter sets, such as the consequent inequalities cone and its lineality subspace. The remaining sections of the paper deal with Farkas–Minkowski systems. The third section is devoted to establishing some results concerning the optimal set and its dimension, exploiting its strong relation with a particular parameter cone associated with the corresponding unstable constraints. The last section approaches the finite reducibility problem. We characterize those finite subproblems with the same optimal value as the original problem by means of a simple dual analysis, based on the main results derived before.
16.
Guochang Wang 《Computational Statistics》2017,32(2):585-609
In the present paper, we consider dimension reduction methods for functional regression with a scalar response and predictors that include a random curve and a categorical random variable. To deal with the categorical random variable, we propose three potential dimension reduction methods: partial functional sliced inverse regression, marginal functional sliced inverse regression and conditional functional sliced inverse regression. Furthermore, we investigate the relationships among the three methods. In addition, a new modified BIC criterion for determining the dimension of the effective dimension reduction space is developed. Real and simulated data examples are then presented to show the effectiveness of the proposed methods.
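As a rough finite-dimensional illustration of the first of the three methods (schematic partial SIR after the functional predictor has been reduced to basis coefficients; not the authors' functional estimators), the SIR slice-mean kernels computed within each level of the categorical variable can be pooled:

```python
import numpy as np

def partial_sir(X, y, w, n_slices=5):
    """Schematic partial SIR: pool per-category SIR kernel matrices.

    X : (n x p) predictor coordinates (e.g. basis coefficients of the curves)
    y : (n,) scalar response;  w : (n,) categorical variable
    Covariance standardisation is omitted for brevity; returns the
    eigenvalues/eigenvectors of the pooled slice-mean kernel.
    """
    n, p = X.shape
    M = np.zeros((p, p))
    for level in np.unique(w):
        idx = w == level
        Zc = X[idx] - X[idx].mean(axis=0)              # centre within category
        yc = y[idx]
        edges = np.quantile(yc, np.linspace(0, 1, n_slices + 1))
        sid = np.clip(np.searchsorted(edges, yc, side='right') - 1,
                      0, n_slices - 1)
        for s in range(n_slices):
            Zs = Zc[sid == s]
            if len(Zs) == 0:
                continue
            m = Zs.mean(axis=0)
            M += (len(Zs) / n) * np.outer(m, m)        # weighted slice-mean kernel
    evals, evecs = np.linalg.eigh(M)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```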
17.
Pierpaolo D’Urso Riccardo Massari Livia De Giovanni Carmela Cappelli 《Fuzzy Optimization and Decision Making》2017,16(1):51-70
In several real-life and research situations data are collected in the form of intervals, the so-called interval-valued data. In this paper a fuzzy clustering method to analyse interval-valued data is presented. In particular, we address the problem of interval-valued data corrupted by outliers and noise. In order to cope with the presence of outliers we propose to employ a robust metric based on the exponential distance within the framework of fuzzy C-medoids clustering, yielding the Fuzzy C-medoids clustering model for interval-valued data with exponential distance. The exponential distance assigns small weights to outliers and larger weights to those points that are more compact in the data set, thus neutralizing the effect of anomalous interval-valued data. Simulation results on the behaviour of the proposed approach, as well as two empirical applications, are provided to illustrate the practical usefulness of the method.
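A schematic sketch of the clustering iteration (standard fuzzy C-medoids updates combined with an exponential transform of a simple interval distance; the parameter choices are illustrative, not the authors' defaults):

```python
import numpy as np

def fuzzy_cmedoids_exp(L, U, k=3, m=2.0, beta=1.0, n_iter=100, seed=0):
    """Schematic fuzzy C-medoids for interval-valued data with an
    exponential (robust) dissimilarity: outlying intervals saturate
    near dissimilarity 1 and thus gain little influence.

    L, U : (n x p) arrays of lower and upper interval bounds.
    """
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    # squared distance between interval vectors (both bounds), then the
    # robust exponential transform 1 - exp(-beta * d2)
    d2 = ((L[:, None, :] - L[None, :, :]) ** 2
          + (U[:, None, :] - U[None, :, :]) ** 2).sum(axis=2)
    D = 1.0 - np.exp(-beta * d2)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        Dm = np.maximum(D[:, medoids], 1e-12)          # avoid /0 at the medoids
        ratio = (Dm[:, :, None] / Dm[:, None, :]) ** (1.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)                    # fuzzy memberships (n x k)
        # each new medoid minimises the membership-weighted dissimilarity
        new_medoids = np.array([np.argmin((u[:, c] ** m) @ D) for c in range(k)])
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, u
```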
18.
19.
Jürgen Garloff 《PAMM》2010,10(1):549-550
The performance of the Cholesky decomposition in interval arithmetic is considered. In order to avoid the algorithm breaking down due to an interval pivot containing zero, a method is presented by which such a pivot can be tightened. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
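To illustrate the breakdown the abstract refers to, here is a toy interval Cholesky (naive interval arithmetic without directed rounding; the tightening step of the paper is not reproduced), which fails exactly when a pivot interval's lower bound is not positive:

```python
import math

# minimal interval arithmetic on (lo, hi) pairs (outward rounding ignored)
def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def i_sqr(a):
    lo, hi = a
    if lo <= 0.0 <= hi:                      # interval straddles zero
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def i_div(a, b):                             # requires 0 not in b
    return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

def i_sqrt(a):
    return (math.sqrt(a[0]), math.sqrt(a[1]))

def interval_cholesky(A):
    """Interval Cholesky on a matrix of (lo, hi) entries; raises when a
    pivot interval contains zero, the breakdown the tightening targets."""
    n = len(A)
    L = [[(0.0, 0.0)] * n for _ in range(n)]
    for j in range(n):
        piv = A[j][j]
        for k in range(j):
            piv = i_sub(piv, i_sqr(L[j][k]))
        if piv[0] <= 0.0:
            raise ArithmeticError(f"pivot interval {piv} contains zero at column {j}")
        L[j][j] = i_sqrt(piv)
        for i in range(j + 1, n):
            s = A[i][j]
            for k in range(j):
                s = i_sub(s, i_mul(L[i][k], L[j][k]))
            L[i][j] = i_div(s, L[j][j])
    return L
```

Interval widths grow as the elimination proceeds, so even a comfortably positive definite midpoint matrix can produce a zero-containing pivot, which is why tightening the pivot matters.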
20.
Qihua Wang 《Journal of multivariate analysis》2003,85(2):234-252
Consider partial linear models of the form Y = X^τ β + g(T) + e, with Y measured with error and both the p-variate explanatory variable X and T measured exactly. Let a surrogate variable, observed in place of Y, carry the measurement error. Let the primary data set contain independent observations on the surrogate together with (X, T), and let the validation data set contain independent observations on Y itself, where the exact observations on Y may be obtained by some expensive or difficult procedure for only a small subset of the subjects enrolled in the study. In this paper, without specifying any structural equation or distributional assumption for Y given the surrogate and (X, T), a semiparametric dimension reduction technique is employed to obtain estimators of β and g(·) based on the least squares method and the kernel method with the primary and validation data. The proposed estimators of β are proved to be asymptotically normal, and the estimator of g(·) is proved to be weakly consistent with an optimal convergence rate.