20 similar documents were found (search time: 0 ms)
1.
E. L. Lehmann 《Mathematische Annalen》1963,150(3):268-275
2.
《Applied Mathematical Modelling》2014,38(17-18):4538-4547
Data Envelopment Analysis (DEA) is a nonparametric technique originally conceived for the efficiency analysis of a set of units. The main characteristic of DEA-based procedures is the endogenous determination of weighting vectors, i.e., the weighting vectors are determined as variables of the model. Nevertheless, DEA's applications have vastly exceeded its original target. In this paper, a DEA-based model for the selection of a subgroup of alternatives or units is proposed. Given a set of alternatives, the procedure seeks to determine the group that maximizes overall efficiency. The proposed model is characterized by the free selection of weights and allows the inclusion of additional information, such as the agents' preferences regarding the relative importance of the variables under consideration or interactions between alternatives. The solution is obtained by solving a mixed-integer linear programming model. Finally, the proposed model is applied to plan the deployment of filling stations in the province of Seville (Spain).
3.
Enrico Civitelli, Matteo Lapucci, Fabio Schoen, Alessio Sortino 《Computational Optimization and Applications》2021,80(1):1-32
Computational Optimization and Applications - In this paper, the problem of best subset selection in logistic regression is addressed. In particular, we take into account formulations of the...
4.
The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure in them in an efficient manner. We describe and analyze some cases in which this special structure can be used with very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners results in a natural manner from the structure in the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods.
Received: January 2001 / Accepted: April 2002 Published online: September 5, 2002
Mathematics Subject Classification (2000): 90C26, 90C30, 49M27
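The direction-correction step relies on preconditioned conjugate gradients. As background, here is a generic PCG iteration with a Jacobi (diagonal) preconditioner; this is a textbook sketch, not the authors' decomposed variant, and the matrix A and right-hand side b are illustrative.

import numpy as np

# Generic preconditioned conjugate gradient iteration.  A is symmetric positive
# definite and M_inv applies the inverse of the preconditioner to a residual.
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage with a Jacobi (diagonal) preconditioner on a random SPD system.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b))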
5.
Taka-aki Shiraishi 《Annals of the Institute of Statistical Mathematics》1991,43(4):715-734
Multiresponse experiments in two-way layouts with interactions, having an equal number of observations per cell, are considered. Robust procedures based on aligned ranks are proposed for statistical inference on interactions, main effects and the overall mean response in these models. Large-sample properties of the proposed tests, estimators and confidence regions as the cell size tends to infinity are investigated. For the univariate case, it is found that the asymptotic relative efficiencies (AREs) of the proposed procedures relative to the classical procedures agree with the ARE results for the two-sample rank test relative to the t-test. In addition, robustness in the sense of Huber (1981, Robust Statistics, Wiley, New York) can be established.
6.
Federico Toffano, Michele Garraffa, Yiqing Lin, Steven Prestwich, Helmut Simonis, Nic Wilson 《Annals of Operations Research》2022,308(1-2):609-640
Annals of Operations Research - This paper introduces an interactive framework to guide decision-makers in a multi-criteria supplier selection process. State-of-the-art multi-criteria methods for...
7.
Pinyuen Chen 《Annals of the Institute of Statistical Mathematics》1987,39(1):325-330
Summary A k-in-a-row procedure is proposed to select the most demanded element in a set of n elements. We show that the least favorable configuration of the proposed procedure, which selects an element once the same element has been demanded (or observed) k times in a row, has a simple form similar to those of classical selection procedures. Moreover, numerical evidence is provided to illustrate that the k-in-a-row procedure is better than the usual inverse sampling procedure and the fixed sample size procedure when the distance between the most demanded element and the other elements is large and when the number of elements is small.
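As a rough illustration of the rule just described (a hedged sketch: the demand probabilities and the value of k below are invented, and no attempt is made to reproduce the paper's least favorable configuration), the procedure can be simulated as follows.

import numpy as np

def k_in_a_row(probs, k, rng):
    # Observe demands one at a time and select an element as soon as the same
    # element has been demanded k times in a row.
    last, run, n_obs = None, 0, 0
    while True:
        i = rng.choice(len(probs), p=probs)
        n_obs += 1
        run = run + 1 if i == last else 1
        last = i
        if run == k:
            return last, n_obs          # selected element, observations used

# Illustrative demand probabilities: element 0 is the most demanded.
rng = np.random.default_rng(0)
probs = [0.5, 0.2, 0.2, 0.1]
picks = [k_in_a_row(probs, k=3, rng=rng)[0] for _ in range(2000)]
print("estimated P(correct selection):", np.mean([p == 0 for p in picks]))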
8.
《European Journal of Operational Research》2001,128(3):674-678
When a selection problem involves qualitative information, we can use the qualitative programming method and find the optimal choice by integer programming. However, in practice, after formulating the problem we end up with a large-scale model. In this note we introduce a very effective method for solving a qualitative programming problem. The only operations used in this method are scalar and matrix addition; hence, for problems of reasonable size, the method does not need high-speed computers and can be carried out with hand calculators.
9.
In high-dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a popular nonparametric regression technique used to describe the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear functions for the local fit and applies an adaptive procedure to select the number and location of breaking points (called knots). The function estimate is generated via a two-step procedure: forward selection and backward elimination. In the first step, a large number of local fits is obtained by selecting a large number of knots via a lack-of-fit criterion; in the second, the least contributing local fits or knots are removed. In the conventional adaptive spline procedure, knots are selected from the set of all distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid this drawback, it is possible to restrict the knot points to a subset of data points. In this context, a new method is proposed for knot selection which is based on a mapping approach such as self-organizing maps. With this method, fewer but more representative data points become eligible to be used as knots for function estimation in the forward step of MARS. The proposed method is applied to many simulated and real datasets, and the results show that it provides a time-efficient forward step for knot selection and model estimation without degrading model accuracy and prediction performance.
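A minimal sketch of restricting candidate knots to representative points is given below. It is only a one-predictor forward step built from scratch; k-means centroids stand in for the self-organizing-map prototypes used in the paper, and all names and settings are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def rss(B, y):
    # Residual sum of squares of a least-squares fit of y on the columns of B.
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return float(np.sum((y - B @ coef) ** 2))

def forward_step_1d(x, y, n_knots=10, max_terms=6):
    # Candidate knots are cluster centroids rather than every distinct data
    # point (a stand-in for the SOM-based mapping proposed in the paper).
    km = KMeans(n_clusters=n_knots, n_init=10).fit(x.reshape(-1, 1))
    knots = np.sort(km.cluster_centers_.ravel())
    B = np.ones((len(x), 1))                    # basis matrix, intercept only
    for _ in range(max_terms):
        best = None
        for t in knots:
            for h in (np.maximum(x - t, 0.0), np.maximum(t - x, 0.0)):
                r = rss(np.column_stack([B, h]), y)
                if best is None or r < best[0]:
                    best = (r, h)
        B = np.column_stack([B, best[1]])       # add the best hinge function
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B, coef

# Toy usage on simulated data.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + 0.1 * rng.standard_normal(400)
B, coef = forward_step_1d(x, y)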
10.
Taka-aki Shiraishi 《Annals of the Institute of Statistical Mathematics》1993,45(2):265-278
In k samples with unequal variances, test procedures based on signed ranks for the homogeneity of k location parameters are proposed. The asymptotic χ²-distribution of the test statistics is shown. It is found that the asymptotic relative efficiency of the rank tests relative to Welch's test (1951, Biometrika, 38, 330–336) under local alternatives agrees with that of the one-sample signed rank tests relative to the t-test. A simulation study of the goodness of the χ²-approximation to the significance points is carried out; surprisingly, the χ²-approximation for the critical points of the proposed tests turns out to be better than that of the Kruskal-Wallis test and the Welch-type test. Next, R-estimators and weighted least squares estimators for the common mean of k samples under the homogeneity of k location parameters are compared in the same way as in the test case. Furthermore, positive-part shrinkage versions of R-estimators for the k location parameters are considered along with a modified James-Stein estimation rule. The asymptotic distributional risks of the usual R-estimators, the positive-part shrinkage R-estimators (PSREs), and the preliminary-test and shrinkage R-versions under an arbitrary quadratic loss are derived. Under Mahalanobis loss, it is shown that the PSREs dominate the other R-estimators for k ≥ 4. A simulation study lends strong support to the claims that the PSREs dominate the other types of R-estimators and that they are robust against outliers.
11.
In this paper, the Kapur cross-entropy minimization model for the portfolio selection problem is discussed in a fuzzy environment; it minimizes the divergence of the fuzzy investment return from an a priori one. First, three mathematical models are proposed by defining divergence as cross-entropy, average return as expected value, and risk as variance, semivariance and chance of a bad outcome, respectively. In order to solve these models in the fuzzy environment, a hybrid intelligent algorithm is designed by integrating numerical integration, fuzzy simulation and a genetic algorithm. Finally, several numerical examples are given to illustrate the modeling idea and the effectiveness of the proposed algorithm.
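As a loose, crisp (non-fuzzy) illustration of the cross-entropy idea, ignoring the fuzzy-simulation and genetic-algorithm machinery of the paper, one might minimize the Kapur cross-entropy of a portfolio from a prior subject to a return target; the numbers below are invented.

import numpy as np
from scipy.optimize import minimize

# Illustrative data: expected returns, an a priori portfolio, and a return target.
mu = np.array([0.06, 0.09, 0.12, 0.05])
prior = np.array([0.25, 0.25, 0.25, 0.25])
r_min = 0.08

def cross_entropy(x):
    # Kapur cross-entropy (KL divergence) of portfolio x from the prior.
    x = np.clip(x, 1e-12, None)
    return float(np.sum(x * np.log(x / prior)))

constraints = [
    {"type": "eq",   "fun": lambda x: np.sum(x) - 1.0},   # budget constraint
    {"type": "ineq", "fun": lambda x: mu @ x - r_min},    # expected-return target
]
result = minimize(cross_entropy, prior, method="SLSQP",
                  bounds=[(0.0, 1.0)] * len(mu), constraints=constraints)
print(result.x, mu @ result.x)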
12.
Several new notions of stable ranks for commutative Banach algebras are introduced. Because they depend on certain orthogonal groups, they are called real stable ranks and are finer than their complex counterparts.
13.
We study the interpolation procedure of Gomory and Johnson (1972), which generates cutting planes for general integer programs from facets of cyclic group polyhedra. This idea has recently been reconsidered by Evans (2002) and Gomory, Johnson and Evans (2003). We compare inequalities generated by this procedure with mixed-integer rounding (MIR) based inequalities discussed in Dash and Gunluk (2003). We first analyze and extend the shooting experiment described in Gomory, Johnson and Evans. We show that MIR based inequalities dominate inequalities generated by the interpolation procedure in some important cases. We also show that the Gomory mixed-integer cut is likely to dominate any inequality generated by the interpolation procedure in a certain probabilistic sense. We also generalize a result of Cornuéjols, Li and Vandenbussche (2003) on comparing the strength of the Gomory mixed-integer cut with related inequalities.
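For background, the basic mixed-integer rounding inequality that such comparisons build on can be stated as follows (a standard textbook form, not a statement taken from the paper). For the set
\[
X = \{(x, y) \in \mathbb{Z} \times \mathbb{R}_{\ge 0} : x + y \ge b\}, \qquad f = b - \lfloor b \rfloor > 0,
\]
the MIR inequality
\[
y \;\ge\; f \,\bigl(\lceil b \rceil - x\bigr)
\]
is valid for X, and the Gomory mixed-integer cut can be obtained by applying it to a row of the optimal simplex tableau.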
14.
Each clustering algorithm typically optimizes a quality metric as it runs. In conventional clustering algorithms this metric treats all features as equally important; in other words, each feature participates in the clustering process to the same degree. It is obvious, however, that some features carry more information than others in a dataset, so it is highly likely that some features should receive lower importance during clustering or classification, because they carry less information, have higher variance, and so on. It is therefore a long-standing goal in the artificial intelligence community to introduce a weighting mechanism into any task that would otherwise use a set of features identically when making a decision. The question is how features can participate in the clustering process (in any algorithm, but especially in a clustering algorithm) in a weighted manner. Recently, this problem has been addressed by locally adaptive clustering (LAC). However, like its traditional competitors, LAC performs poorly on data with imbalanced clusters. This paper addresses the problem by proposing a weighted locally adaptive clustering (WLAC) algorithm based on LAC. WLAC, however, is sensitive to two parameters that must be tuned manually, and its performance depends on tuning them well. The paper proposes two solutions: the first is based on a simple clustering-ensemble framework to examine the sensitivity of WLAC to manual tuning; the second is based on a cluster selection method.
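The following is only a toy illustration of feature-weighted clustering: a k-means variant with a fixed, global feature-weight vector. It does not implement LAC's per-cluster adaptive weights or the WLAC algorithm, and the names and defaults are made up.

import numpy as np

def weighted_kmeans(X, k, w, n_iter=50, seed=0):
    # k-means in which squared distances are weighted per feature:
    # d(x, c) = sum_j w_j * (x_j - c_j)^2.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy usage: the second feature is down-weighted.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
labels, centers = weighted_kmeans(X, k=2, w=np.array([1.0, 0.2]))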
15.
This paper concerns three notions of rank for matrices over semirings: real rank, semiring rank and column rank. These three rank functions coincide over subfields of the reals but differ for matrices over subsemirings of the nonnegative reals. We investigate the largest values of r for which the real rank and the semiring rank, and respectively the real rank and the column rank, of all m×n matrices over a given semiring are both equal to r. We also characterize the linear operators which preserve the column rank of matrices over certain subsemirings of the nonnegative reals.
16.
17.
A Tabu search method is proposed and analysed for selecting the variables that are subsequently used in logistic regression models. The aim is to find, from among a set of m variables, a smaller subset which enables the efficient classification of cases. Reducing dimensionality has some very well-known advantages that are summarized in the literature. The specific problem consists in finding, for a small integer value of p, a subset of size p of the original set of variables that yields the greatest percentage of hits in logistic regression. The proposed Tabu search method performs a deep search of the solution space that alternates between a basic phase (using simple moves) and a diversification phase (to explore regions not previously visited). Testing shows that it obtains significantly better results than the stepwise, backward or forward methods used by classical statistical packages. Some results of applying these methods are presented.
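A stripped-down sketch of the basic phase of such a tabu search is given below. The neighborhood, tabu rule and scoring are simplified guesses, the diversification phase is omitted, and scikit-learn's LogisticRegression is used only as a convenient scorer.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def score(X, y, subset):
    # Cross-validated accuracy of a logistic regression on the chosen variables.
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, subset], y, cv=5).mean()

def tabu_select(X, y, p, n_iter=30, tabu_len=7, seed=0):
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    current = list(rng.choice(m, p, replace=False))
    best, best_val = current[:], score(X, y, current)
    tabu = []                                    # recently swapped-in variables
    for _ in range(n_iter):
        moves = []
        for out_var in current:                  # simple move: swap one variable
            for in_var in range(m):
                if in_var in current or in_var in tabu:
                    continue
                trial = [v for v in current if v != out_var] + [in_var]
                moves.append((score(X, y, trial), trial, in_var))
        if not moves:
            break
        val, trial, moved = max(moves, key=lambda c: c[0])
        current = trial
        tabu = (tabu + [moved])[-tabu_len:]
        if val > best_val:
            best, best_val = trial[:], val
    return best, best_val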
18.
In this paper, a linear model selection procedure based on M-estimation is proposed, which includes many classical model selection criteria as special cases. It is shown that the proposed criterion is strongly consistent under certain mild conditions, for instance without assuming normality of the distribution of the random errors. The results from a simulation study are also presented.
Received: 13 October 1997 / Revised version: 10 August 1998
19.
This paper concerns three notions of rank for matrices over semirings: real rank, semiring rank and column rank. These three rank functions coincide over subfields of the reals but differ for matrices over subsemirings of the nonnegative reals. We investigate the largest values of r for which the real rank and the semiring rank, and respectively the real rank and the column rank, of all m×n matrices over a given semiring are both equal to r. We also characterize the linear operators which preserve the column rank of matrices over certain subsemirings of the nonnegative reals.
20.
Shanti S. Gupta 《Annals of the Institute of Statistical Mathematics》1962,14(1):199-212
Summary The problem of selecting a subset of k gamma populations which includes the "best" population, i.e. the one with the largest value of the scale parameter, is studied as a multiple decision problem. The shape parameters of the gamma distributions are assumed to be known and equal for all the k populations. Based on a common number of observations from each population, a procedure R is defined which selects a subset that is never empty, small in size, and yet large enough to guarantee with preassigned probability that it includes the best population regardless of the true unknown values of the scale parameters θ_i. Expressions for the probability of a correct selection using R are derived, and it is shown that for the case of a common number of observations the infimum of this probability is identical with the probability integral of the ratio of the maximum of k-1 independent gamma chance variables to another independent gamma chance variable, all with the same value of the other parameter. Formulas are obtained for the expected number of populations retained in the selected subset, and it is shown that this function attains its maximum when the parameters θ_i are equal. Some other properties of the procedure are proved. Tables of constants b which are necessary to carry out the procedure are appended. These constants are reciprocals of the upper percentage points of F_max, the largest of several correlated F statistics. The distribution of this statistic is obtained.
This work was supported in part by Office of Naval Research Contract Nonr-225(53) at Stanford University. Reproduction in whole or in part is permitted for any purpose of the United States Government.
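Subset-selection rules of this maximum type typically take the following form (a hedged reconstruction for orientation, not quoted from the paper):
\[
\text{retain population } \pi_i \iff T_i \;\ge\; b \cdot \max_{1 \le j \le k} T_j, \qquad 0 < b \le 1,
\]
where T_i is the statistic (e.g. the sample sum) from the i-th population and b is chosen so that the infimum of the probability of a correct selection equals the preassigned level P*; this is consistent with the constants b being reciprocals of upper percentage points of F_max.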