Similar documents
20 similar documents found
1.
In this paper, we propose a new approach to deal with the non-zero slacks in data envelopment analysis (DEA) assessments that is based on restricting the multipliers in the dual multiplier formulation of the DEA model used. It guarantees strictly positive weights, which ensures reference points on the Pareto-efficient frontier and, consequently, zero slacks. We follow a two-step procedure which, after specifying some weight bounds, results in an “Assurance Region”-type model that is used in the assessment of efficiency. The specification of these bounds is based on a selection criterion among the optimal solutions for the multipliers of the unbounded DEA models, which tries to avoid the extreme dissimilarity between weights that is often found in DEA applications. The models developed have no infeasibility problems, and the choice of weights made is not affected by alternate optima. Our multiplier bound approach requires neither a priori information about substitutions between inputs and outputs nor the existence of full-dimensional efficient facets on the frontier, as other existing approaches that address this problem do.
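The multiplier side of a DEA assessment with lower bounds on the weights can be sketched as a small linear program. The following Python snippet is only an illustration of that idea under a uniform lower bound eps on every multiplier, not the paper's two-step Assurance Region procedure; the function name and the choice of eps are assumptions.

import numpy as np
from scipy.optimize import linprog

def dea_multiplier_efficiency(X, Y, o, eps=1e-3):
    """Input-oriented CCR multiplier model for unit o with a uniform lower
    bound eps on all input/output weights (stand-in for weight bounds)."""
    n, m = X.shape          # n units, m inputs
    s = Y.shape[1]          # s outputs
    c = np.concatenate([-Y[o], np.zeros(m)])                   # maximize u . y_o
    A_ub = np.hstack([Y, -X])                                   # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m), method="highs")
    return -res.fun, res.x[:s], res.x[s:]   # efficiency, output weights u, input weights v

For unit o it maximizes the weighted outputs subject to the usual normalization and the requirement that no unit's ratio exceed one; too large an eps can make the program infeasible.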

2.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each of these evaluations usefully contributes to the localization of good candidates for the role of global minimizer, a stochastic model of the function can be built to conduct a sequential choice of evaluation points. Based on Gaussian processes and Kriging, the authors have recently introduced the informational approach to global optimization (IAGO), which provides a one-step optimal choice of evaluation points in terms of reduction of uncertainty on the location of the minimizers. To do so, the probability density of the minimizers is approximated using conditional simulations of the Gaussian process model behind Kriging. In this paper, an empirical comparison between the underlying sampling criterion, called conditional minimizer entropy (CME), and the standard expected improvement (EI) sampling criterion is presented. Classical test functions are used, as well as sample paths of the Gaussian model and an industrial application. The results show the benefit of the CME sampling criterion in terms of evaluation savings.
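The EI criterion used as the baseline in this comparison has a closed form under the Gaussian (Kriging) posterior. A minimal sketch for minimization is given below; the CME criterion relies on conditional simulations and is not reproduced here. The function name is an assumption.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Standard EI criterion for minimization, given the Kriging posterior
    mean mu and standard deviation sigma at candidate points."""
    sigma = np.maximum(sigma, 1e-12)              # guard against zero predictive variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# the next evaluation point is the candidate maximizing EI:
# x_next = candidates[np.argmax(expected_improvement(mu, sigma, f_min))]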

3.
In this paper, we present a Multiple Criteria Data Envelopment Analysis (MCDEA) model which can be used to improve the discriminating power of DEA methods and also to yield more reasonable input and output weights without a priori information about the weights. In the proposed model, several different efficiency measures, including classical DEA efficiency, are defined under the same constraints. Each measure serves as a criterion to be optimized. Efficiencies are then evaluated under the framework of multiple objective linear programming (MOLP). The method is illustrated through three examples in which the data sets are taken from previous research on DEA's discriminating power and weight restriction.

4.
The theory of interval type-2 fuzzy sets provides an intuitive and computationally feasible way of addressing uncertain and ambiguous information in decision-making fields. The aim of this paper is to develop an interactive method for handling multiple criteria group decision-making problems, in which information about criterion weights is incompletely (imprecisely or partially) known and the criterion values are expressed as interval type-2 trapezoidal fuzzy numbers. With respect to the relative importance of multiple decision-makers and group consensus of fuzzy opinions, a hybrid averaging approach combining weighted averages and ordered weighted averages was employed to construct the collective decision matrix. An integrated programming model was then established based on the concept of signed distance-based closeness coefficients to determine the importance weights of criteria and the priority ranking of alternatives. Subsequently, an interactive procedure was proposed to modify the model according to the decision-makers’ feedback on the degree of satisfaction toward undesirable solution results for the sake of gradually improving the integrated model. The feasibility and applicability of the proposed methods are illustrated with a medical decision-making problem of patient-centered medicine concerning basilar artery occlusion. A comparative analysis with other approaches was performed to validate the effectiveness of the proposed methodology.

5.
Basis Function Adaptation in Temporal Difference Reinforcement Learning
Reinforcement Learning (RL) is an approach for solving complex multi-stage decision problems that fall under the general framework of Markov Decision Problems (MDPs), with possibly unknown parameters. Function approximation is essential for problems with a large state space, as it facilitates compact representation and enables generalization. Linear approximation architectures (where the adjustable parameters are the weights of pre-fixed basis functions) have recently gained prominence due to efficient algorithms and convergence guarantees. Nonetheless, an appropriate choice of basis function is important for the success of the algorithm. In the present paper we examine methods for adapting the basis function during the learning process in the context of evaluating the value function under a fixed control policy. Using the Bellman approximation error as an optimization criterion, we optimize the weights of the basis function while simultaneously adapting the (non-linear) basis function parameters. We present two algorithms for this problem. The first uses a gradient-based approach and the second applies the Cross Entropy method. The performance of the proposed algorithms is evaluated and compared in simulations. This research was partially supported by the Fund for Promotion of Research at the Technion. The work of S.M. was partially supported by the National Science Foundation under grant ECS-0312921.
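To make the quantities involved concrete, the following is a minimal sketch of TD(0) policy evaluation with a fixed linear basis; the paper's contribution, adapting the basis function parameters by minimizing the Bellman approximation error (via gradients or the Cross Entropy method), is not shown. Function and argument names are assumptions.

import numpy as np

def td0_linear(transitions, phi, n_features, alpha=0.05, gamma=0.95):
    """TD(0) evaluation of a fixed policy with linear features phi(s).
    transitions: iterable of (s, r, s_next) generated by that policy."""
    w = np.zeros(n_features)
    for s, r, s_next in transitions:
        delta = r + gamma * phi(s_next) @ w - phi(s) @ w   # temporal-difference (Bellman) error
        w += alpha * delta * phi(s)                        # update the basis-function weights
    return w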

6.
One of the main tasks in a multi-criteria decision-making process is to define weights for the evaluation criteria. However, in many situations the decision-maker (DM) may not be confident about defining specific values for these weights and may prefer to use partial information, representing the values of such weights with surrogate weights. Although the use of surrogate weighting procedures for the additive model has already been explored in the literature, there is a gap with regard to experimenting with this kind of preference modeling in outranking-based methods such as PROMETHEE, even though applications of surrogate weights with these methods already exist in the literature. Thus, this paper presents a simulation-based experimental study on preference modeling, so as to increase understanding and acceptance of a recommendation obtained when using surrogate weights within the PROMETHEE method. The main surrogate weighting approaches in the literature (EW, RS, RR and ROC) are evaluated for the choice and ranking problematics through statistical procedures, including Kendall's tau coefficient. According to this analysis, the surrogate weighting procedure that most faithfully represents a DM's value system is the ROC procedure.
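The four surrogate weighting procedures compared in the study have simple closed forms for n ranked criteria (rank 1 = most important). A sketch, with the function name as an assumption:

import numpy as np

def surrogate_weights(n):
    """EW (equal), RS (rank sum), RR (rank reciprocal) and ROC (rank order
    centroid) weights for n criteria ranked from most to least important."""
    ranks = np.arange(1, n + 1)
    ew  = np.full(n, 1.0 / n)
    rs  = 2.0 * (n + 1 - ranks) / (n * (n + 1))
    rr  = (1.0 / ranks) / np.sum(1.0 / ranks)
    roc = np.array([np.sum(1.0 / ranks[i:]) / n for i in range(n)])
    return {"EW": ew, "RS": rs, "RR": rr, "ROC": roc}

For example, with four criteria the ROC weights are approximately 0.521, 0.271, 0.146 and 0.063.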

7.
In this paper, a convergence analysis of an adaptive choice of the sequence of damping parameters in the iteratively regularized Gauss–Newton method for solving nonlinear ill-posed operator equations is presented. The selection criterion is motivated by the damping parameter choice criteria used for the efficient solution of nonlinear least-squares problems. The performance of this selection criterion is tested on nonlinear ill-posed model problems.

8.
Although artificial neural networks (ANN) have been widely used in forecasting time series, determining the best model remains a widely studied problem. In recent years, various approaches have been proposed in the literature to select the best ANN model for forecasting. One of these approaches is a model selection strategy based on the weighted information criterion (WIC). WIC is calculated as a weighted sum of different selection criteria, each of which measures the forecasting accuracy of an ANN model in a different way. In the original calculation of WIC, the weights of the different selection criteria are determined heuristically. In this study, these weights are instead calculated by optimization in order to obtain a more consistent criterion. Four real time series are analyzed to show the efficiency of the improved WIC. When the weights are determined by optimization, the improved WIC clearly produces better results.
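A weighted combination of "smaller is better" selection criteria can be sketched as below. This is only an illustration of the WIC idea: the min-max normalization and the function name are assumptions, and the paper's optimization of the criterion weights themselves is not shown.

import numpy as np

def weighted_information_criterion(criteria_matrix, weights):
    """criteria_matrix: models x criteria, each criterion smaller-is-better
    (e.g. AIC, BIC, RMSE, MAPE). Columns are min-max normalized so the
    criteria are on a comparable scale before the weighted combination."""
    C = np.asarray(criteria_matrix, dtype=float)
    span = C.max(axis=0) - C.min(axis=0)
    C = (C - C.min(axis=0)) / np.where(span > 0, span, 1.0)
    wic = C @ np.asarray(weights, dtype=float)
    return wic, int(np.argmin(wic))    # WIC values and index of the selected model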

9.
This paper addresses the use of incomplete information on both multi-criteria alternative values and importance weights in evaluating decision alternatives. Incomplete information frequently takes the form of strict inequalities, such as strict orders and strict bounds. En route to prioritizing alternatives, the majority of previous studies have replaced these strict inequalities with weak inequalities by employing a small positive number. As this replacement closes the feasible region of decision parameters, it circumvents certain troubling questions that arise when utilizing a mathematical programming approach to evaluate alternatives. However, there are no hard and fast rules for selecting an appropriate small value and, even if the choice is possible, the resultant prioritizations depend profoundly on that choice. The method developed herein addresses and overcomes this drawback, and allows dominance and potential optimality among alternatives to be determined without selecting any small value for the strict preference information. Given strict information on criterion weights alone, we form a linear program and solve it via a two-stage method. When both alternative values and weights are provided in the form of strict inequalities, we first construct a nonlinear program, transform it into a linear programming equivalent, and finally solve this linear program via the same two-stage method. One application of this methodology to a market entry decision, a salient subject in the area of international marketing, is demonstrated in detail herein.

10.
Ramanathan [R. Ramanathan, ABC inventory classification with multiple-criteria using weighted linear optimization, Computers & Operations Research 33 (2006) 695–700] recently proposed a weighted linear optimization model for multi-criteria ABC inventory classification. Despite its many advantages, Ramanathan’s model (R-model) could lead to a situation where an item with a high value in an unimportant criterion is inappropriately classified as a class A item. In this paper we present an extended version of the R-model for multi-criteria inventory classification. Our model provides a more reasonable and encompassing index since it uses two sets of weights that are most favourable and least favourable for each item. An illustrative example is presented to compare our model and the R-model.
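The R-model score of an item, using the weights most favourable to it, can be written as a small linear program; a sketch is given below. The least-favourable counterpart used by the extended model is not shown. The function name and the scipy-based formulation are assumptions, and the criteria in Y are assumed to be scaled so that larger values indicate greater importance.

import numpy as np
from scipy.optimize import linprog

def r_model_score(Y, o):
    """Most favourable (R-model) score of item o: choose non-negative criterion
    weights maximizing item o's weighted score, subject to every item's
    weighted score being at most 1."""
    n, k = Y.shape
    res = linprog(-Y[o], A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * k, method="highs")
    return -res.fun

# scores = [r_model_score(Y, i) for i in range(Y.shape[0])]   # rank items to form A/B/C classes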

11.
Optional Pólya tree (OPT) is a flexible nonparametric Bayesian prior for density estimation. Despite its merits, the computation for OPT inference is challenging. In this article, we present a time complexity analysis for OPT inference and propose two algorithmic improvements. The first improvement, named limited-lookahead optional Pólya tree (LL-OPT), aims at accelerating the computation for OPT inference. The second improvement modifies the output of OPT or LL-OPT and produces a continuous piecewise linear density estimate. We demonstrate the performance of these two improvements using simulated and real data examples.

12.
Multimodel inference makes statistical inferences from a set of plausible models rather than from a single model. In this paper, we focus on multimodel inference based on smoothed information criteria proposed in seminal monographs (see Buckland et al. (1997) and Burnham and Anderson (2003)), termed the smoothed Akaike information criterion (SAIC) and smoothed Bayesian information criterion (SBIC) methods. Due to their simplicity and applicability, these methods are very widely used in many fields. By using an illustrative example and deriving limiting properties for the weights in linear regression, we find that the existing variance estimation for SAIC is not applicable because of a restrictive condition, whereas for SBIC it is applicable. In particular, we propose a simulation-based inference for SAIC based on the limiting properties. Both the simulation study and the real data example show the promising performance of the proposed simulation-based inference.
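The smoothed-information-criterion weights take the familiar exponential form of Buckland et al. (1997). A minimal sketch (function name assumed):

import numpy as np

def smoothed_ic_weights(ic_values):
    """Model weights from an information criterion (AIC for SAIC, BIC for SBIC):
    w_m proportional to exp(-IC_m / 2)."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()              # subtract the minimum for numerical stability
    w = np.exp(-delta / 2.0)
    return w / w.sum()

# a multimodel estimate is then the weighted average of the candidate models' estimates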

13.
Current reserve classification standards are defined by dividing indicator values into fixed ranges, which requires all indicator values to fall exactly within the prescribed ranges; otherwise it is difficult to assign a reserve category. To overcome this problem, this paper combines the fuzzy c-means algorithm with a combination weighting method to classify hard-to-recover reserves. First, based on benefit indicators, the fuzzy c-means algorithm is used to automatically search for the best categories of reserves. Then, using the idea of minimizing the deviation between subjective and objective weighting, a combination weighting model is constructed to determine the weights of the attribute indicators, and the benefit indicator value of each reserve is calculated; the category of each hard-to-recover reserve is determined by combining this value with the fuzzy c-means results. Finally, taking an oilfield in Daqing as an example, its hard-to-recover reserves are classified, effectively guiding rolling development decisions for hard-to-recover reserves.
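A generic fuzzy c-means routine of the kind used in the first step can be sketched as follows; the combination weighting model that fixes the attribute weights is not shown, and all names are assumptions.

import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix U
    (n samples x c clusters). A generic sketch, not the paper's full procedure."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))                  # membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U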

14.
A Simple Proof of the Inequality MFFD(L) ≤ (71/60)OPT(L) + 1, for all L, for the MFFD Bin-Packing Algorithm. Yue Minyi (越民义), Institute of Applied Mathematics...

15.
In this paper we propose a two-step procedure to be used for the selection of the weights obtained from the multiplier model in a DEA efficiency analysis. It is well known that optimal solutions of the envelopment formulation for extreme efficient units are often highly degenerate and, consequently, have alternate optima for the weights. Different optimal weights may then be obtained depending, for instance, on the software used. The idea behind the procedure we present is to explore the set of alternate optima in order to help make a choice of optimal weights. The selection of weights for a given extreme efficient point is connected with the dimension of the efficient facets of the frontier. Our approach makes it possible to select the weights associated with the facets of higher dimension that this unit generates and, in particular, it selects those weights associated with a full-dimensional efficient facet (FDEF), if any. In this sense the weights provided by our procedure have the maximum support from the production possibility set. We also look for weights that maximize the relative value of the inputs and outputs included in the efficiency analysis, in a sense to be described in this article.

16.
The problem of selecting between semi-parametric and proportional hazards models is considered. We propose to make this choice based on the expectation of the log-likelihood (ELL), which can be estimated by the likelihood cross-validation (LCV) criterion. The criterion is used to choose an estimator in families of semi-parametric estimators defined by the penalized likelihood. A simulation study shows that the ELL criterion performs nearly as well in this problem as the optimal Kullback–Leibler criterion in terms of Kullback–Leibler distance, and that LCV performs reasonably well. The approach is applied to a model of age-specific risk of dementia as a function of sex and educational level, using data from a large cohort study.

17.
In this paper, we propose a new approach to cross-efficiency evaluation that focuses on the choice of the weight profiles to be used in the calculation of the cross-efficiency scores. It has been claimed in the literature that cross-efficiency eliminates unrealistic weighting schemes in the sense that their effects are cancelled out in the summary that the cross-efficiency evaluation makes. The idea of our approach here is to try to avoid these unreasonable weights instead of expecting that their effects are cancelled out in the amalgamation of weights that is made. To do so, we extend the ideas of the multiplier bound approach to the assessment of efficiency without slacks in Ramón et al. (2010) to its use in cross-efficiency evaluations. The models used look for the profiles with the least dissimilar weights, and also guarantee non-zero weights. In particular, this approach allows the inefficient DMUs to make a choice of weights that prevents them from using unrealistic weighting schemes. We use some examples from the literature to illustrate the performance of this approach and discuss some issues of interest regarding the choice of weights in cross-efficiency evaluations.

18.
In this paper we study lattice rules, which are cubature formulae to approximate integrands over the unit cube [0,1]^s from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance for two reasons stemming from practical applications: (i) It is usually not known in practice how to choose the weights. Thus, by assuming that the weights are random variables, we obtain constructions of lattice rules that are robust with respect to the weights. This, to some extent, removes the necessity to carefully choose the weights. (ii) In practice it is convenient to use the same lattice rule for many different integrands. The best choice of weights for each integrand may vary to some degree, hence considering the weights as random variables does justice to how lattice rules are used in applications. In this paper the worst-case error is therefore a random variable depending on random weights. We show how one can construct lattice rules which perform well for weights taken from a set with large measure. Such lattice rules are therefore robust with respect to certain changes in the weights. The construction algorithm uses the component-by-component (cbc) idea based on two criteria, one using the mean of the worst-case error and the second using a bound on the variance of the worst-case error. We call the new algorithm the cbc2c (component-by-component with 2 constraints) algorithm. We also study a generalized version which uses r constraints, which we call the cbcrc (component-by-component with r constraints) algorithm. We show that lattice rules generated by the cbcrc algorithm simultaneously work well for all weights in a subspace spanned by the chosen weights γ(1), ..., γ(r). Thus, in applications, instead of finding one set of weights, it is enough to find a convex polytope in which the optimal weights lie. The price for this method is a factor r in the upper bound on the error and in the construction cost of the lattice rule. Thus the burden of determining one set of weights very precisely can be shifted to the construction of good lattice rules. Numerical results indicate the benefit of using the cbc2c algorithm for certain choices of weights.

19.
The paper proposes a Mixed Integer Programming (MIP) formulation of the scheduling problem with the total flow criterion on a set of parallel unrelated machines under uncertainty about the processing times. To model the problem we assume that lower and upper bounds are known for each processing time. In this context we consider an optimal minmax regret schedule as a suitable approximation to the optimal schedule under an arbitrary choice of the possible processing times.

20.
In this paper, a new method for nonlinear system identification via an extreme learning machine (ELM) neural network based Hammerstein model (ELM-Hammerstein) is proposed. The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The identification of the nonlinear system is achieved by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the structure of the ELM-Hammerstein model from input–output data. A generalized ELM algorithm is proposed to estimate the parameters of the ELM-Hammerstein model, where the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
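The static ELM part, random hidden-layer weights with output weights obtained by least squares, can be sketched as follows. This is a generic ELM, not the paper's generalized ELM-Hammerstein estimator that jointly estimates the linear dynamic part; all names are assumptions.

import numpy as np

def elm_fit(X, Y, n_hidden=50, seed=0):
    """Basic extreme learning machine: random input weights and biases,
    output weights beta by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y           # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta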
