Similar Documents
20 similar documents found.
1.
Model selection strategies have been routinely employed to determine a model for data analysis in statistics, and further study and inference then often proceed as though the selected model were the true model known a priori. Model averaging approaches, on the other hand, try to combine estimators for a set of candidate models. Specifically, instead of deciding which model is the 'right' one, a model averaging approach suggests fitting a set of candidate models and averaging over the estimators using data-adaptive weights. In this paper we establish a general frequentist model averaging framework that does not set any restrictions on the set of candidate models. It broadens the scope of the existing methodologies in the frequentist model averaging literature. Assuming the data are from an unknown model, we derive the model averaging estimator and study its limiting distributions and related predictions while taking possible modeling biases into account. We propose a set of optimal weights to combine the individual estimators so that the expected mean squared error of the averaged estimator is minimized. Simulation studies are conducted to compare the performance of the estimator with that of existing methods. The results show the benefits of the proposed approach over traditional model selection approaches as well as existing model averaging methods.
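As a rough illustration of the weighting idea (not the authors' exact estimator), the sketch below fits a set of candidate linear models and chooses averaging weights on the simplex by minimizing a Mallows-type estimate of squared error; the candidate specifications, the criterion, and the `scipy` optimization call are all assumptions made for illustration.

```python
# Sketch: frequentist model averaging with data-adaptive weights.
# Mallows-type criterion over candidate linear models; illustrative only.
import numpy as np
from scipy.optimize import minimize

def model_average_fit(X, y, candidate_cols):
    n = len(y)
    preds, ks = [], []
    for cols in candidate_cols:                      # fit each candidate model
        Xc = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        preds.append(Xc @ beta)
        ks.append(len(cols))
    P = np.column_stack(preds)
    # residual variance estimated from the last (assumed largest) candidate
    sigma2 = np.sum((y - P[:, -1]) ** 2) / (n - ks[-1])

    def mallows(w):                                  # penalized squared error
        return np.sum((y - P @ w) ** 2) + 2.0 * sigma2 * np.dot(w, ks)

    m = P.shape[1]
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(mallows, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons)
    return res.x, P @ res.x                          # weights, averaged fit
```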

2.
Polymers are compounds formed by the joining of smaller, often repeating, units linked by covalent bonds. The analysis of their sequence is a fundamental issue in many areas of chemistry, medicine and biology. Nowadays, the prevalent approach to this problem consists in using mass spectrometry analysis, which gives information about the molecular weights of the polymer and of its fragments. This information must then be used to obtain the sequence. This is, however, a difficult mathematical problem, and several approaches have been proposed for it. In particular, a promising one is based on a propositional logic modeling of the problem. This paper presents conceptual improvements in this approach, principally the off-line computation of a database that substantially speeds up the sequencing operations. This is obtained by finding a correspondence between sequences and natural numbers, so that all sequences up to a certain molecular weight can be implicitly considered in the above database, and explicitly computed only when needed. Results on real-world problems show the effectiveness of this approach.
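To make the sequence/number correspondence concrete, here is a minimal sketch of one possible bijection between natural numbers and monomer sequences (a length-blocked base-k ranking); the monomer alphabet and the encoding itself are hypothetical illustrations, not the paper's construction.

```python
# Sketch of a simple bijection between non-empty monomer sequences and
# natural numbers (ranking/unranking); alphabet and scheme are hypothetical.
MONOMERS = ['A', 'C', 'G', 'U']

def index_to_sequence(n):
    """Map a natural number to a unique non-empty sequence (unranking)."""
    k = len(MONOMERS)
    length, count = 1, k
    while n >= count:                 # find the length block containing n
        n -= count
        length += 1
        count = k ** length
    seq = []
    for _ in range(length):           # decode n in base k, fixed width
        seq.append(MONOMERS[n % k])
        n //= k
    return ''.join(reversed(seq))

def sequence_to_index(seq):
    """Inverse mapping (ranking)."""
    k = len(MONOMERS)
    offset = sum(k ** L for L in range(1, len(seq)))
    value = 0
    for ch in seq:
        value = value * k + MONOMERS.index(ch)
    return offset + value
```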

3.
In this paper, we propose a new approach to dealing with the non-zero slacks in data envelopment analysis (DEA) assessments that is based on restricting the multipliers in the dual multiplier formulation of the DEA model used. It guarantees strictly positive weights, which ensures reference points on the Pareto-efficient frontier and, consequently, zero slacks. We follow a two-step procedure which, after specifying some weight bounds, results in an “Assurance Region”-type model that is used in the assessment of efficiency. The specification of these bounds is based on a selection criterion among the optimal solutions for the multipliers of the unbounded DEA models, which tries to avoid the extreme dissimilarity between the weights that is often found in DEA applications. The models developed do not have infeasibility problems, and the choice of weights made is not affected by alternate optima. To use our multiplier bound approach we do not need a priori information about substitutions between inputs and outputs, nor is the existence of full-dimensional efficient facets on the frontier required, as is the case with other existing approaches that address this problem.
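For a minimal sketch of the kind of multiplier model involved, the code below solves a standard CCR multiplier LP in which simple lower bounds force strictly positive weights, using `scipy.optimize.linprog`. The paper's actual model is an Assurance-Region-type formulation with data-driven bounds; the fixed epsilon bound here is purely illustrative.

```python
# Sketch: CCR multiplier model with lower bounds on the weights, which
# forces strictly positive multipliers. Epsilon is an illustrative value,
# not the paper's bound-selection procedure.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, eps=1e-4):
    """X: (n_dmu, m) inputs, Y: (n_dmu, s) outputs, o: index of evaluated DMU."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables z = [v (m input weights), u (s output weights)]
    c = np.concatenate([np.zeros(m), -Y[o]])          # maximize u'y_o
    A_ub = np.hstack([-X, Y])                         # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]  # v'x_o = 1
    b_eq = [1.0]
    bounds = [(eps, None)] * (m + s)                  # strictly positive weights
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun                                   # efficiency score
```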

4.
This work develops a Bayesian approach to perform inference and prediction in Gaussian random fields based on spatial censored data. This type of data occurs often in the earth sciences, due either to limitations of the measuring device or to particular features of the sampling process used to collect the data. Inference and prediction on the underlying Gaussian random field are performed, through data augmentation, using Markov chain Monte Carlo methods. Previous approaches to dealing with spatial censored data are reviewed, and their limitations pointed out. The proposed Bayesian approach is applied to a spatial dataset of depths of a geologic horizon that contains both left- and right-censored data, and comparisons are made between inferences based on the censored data and inferences based on “complete data” obtained by two imputation methods. It is seen that the differences in inference between the two approaches can be substantial.
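For concreteness, here is a sketch of the data-augmentation step for a single censored observation, assuming its full conditional under the current field parameters reduces to a univariate normal; the function name and this reduction are illustrative assumptions, not the paper's exact sampler.

```python
# Sketch of the imputation step inside a data-augmentation MCMC: a censored
# value is redrawn from its full conditional, truncated to the censoring
# region. Conditional mean/sd are assumed to come from the current
# Gaussian-random-field parameters.
import numpy as np
from scipy.stats import truncnorm

def impute_censored(cond_mean, cond_sd, detect_limit, side):
    """side = 'left': value known only to be below the limit;
       side = 'right': value known only to be above it."""
    if side == 'left':
        a, b = -np.inf, (detect_limit - cond_mean) / cond_sd
    else:
        a, b = (detect_limit - cond_mean) / cond_sd, np.inf
    return truncnorm.rvs(a, b, loc=cond_mean, scale=cond_sd)
```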

5.
Over the past several years, researchers at the U.S. Air Force Academy developed cooperative, distributed aerial sensor networks (Pack et al. in IEEE Trans Syst Man Cybern Part B Cybern 39(4):959–970, 2009) using multiple small unmanned aerial vehicles (SUAVs) to search, detect, and locate ground targets. The use of distributed SUAVs, however, introduced a set of problems, including difficulties in reliable air-to-air communication and clock synchronization among the onboard systems of multiple SUAVs. The communication problems further aggravate the synchronization problem, contributing to a large target localization error. Conventional methods use multiple sensor outputs of the same target seen from different perspectives to increase the target localization accuracy. These methods are effective only when the pose errors of sensor platforms based on GPS data are modeled accurately, which is not a reasonable assumption for SUAVs, especially when SUAVs operate in an environment with wind gusts. In this paper, we propose a robust, novel technique that analyzes what we call “sensor fusion quality” to assign an appropriate sensor reliability value to each set of updated sensor data. In the proposed approach, we characterize the quality of a set of newly acquired sensor data, containing a target, by examining the joint target location probability density function. The validity of the proposed method is demonstrated using flight test data.

6.
Pairwise comparison data are used in various contexts, including the generation of weight vectors for multiple criteria decision making problems. If these data are not sufficiently consistent, then the resulting weight vector cannot be considered a reliable reflection of the evaluator's opinion. Hence, it is necessary to measure their level of inconsistency. Different approaches have been proposed for measuring the level of inconsistency, but they are often based on 'rules of thumb' and/or randomly generated matrices, and are not interpretable. In this paper we present an action learning approach for assessing the consistency of the input pairwise comparison data that offers interpretable consistency measures.

7.
Standard methods for the optimal allocation of shares in a financial portfolio are determined by second-order conditions which are very sensitive to outliers. The well-known Markowitz approach, which is based on the input of a mean vector and a covariance matrix, may provide questionable results in financial management, since small changes in the inputs can lead to unreliable portfolio allocations. However, existing robust estimators often suffer from masking of multiple influential observations, so we propose a new robust estimator which suitably weights data using a forward search approach. A Monte Carlo simulation study and an application to real data show some advantages of the proposed approach.
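As a point of reference for the Markowitz step, the sketch below computes global minimum-variance weights from an estimated covariance matrix; in the spirit of the paper, the plain sample covariance used here for illustration would be replaced by a robust (forward-search-type) estimate.

```python
# Sketch: global minimum-variance Markowitz weights from an estimated
# covariance matrix. The paper's point is that this input should come from
# a robust estimator rather than the plain sample covariance used here.
import numpy as np

def min_variance_weights(returns):
    """returns: (T, n) matrix of asset returns."""
    cov = np.cov(returns, rowvar=False)     # replace with a robust estimate
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()                      # weights summing to one
```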

8.
The current paper examines the cross-efficiency concept in data envelopment analysis (DEA). While cross-efficiency has appeal as a peer evaluation approach, it is often the subject of criticism, due mainly to the use of DEA weights that are often non-unique. As a result, cross-efficiency scores are routinely viewed as arbitrary in that they depend on a particular set of optimal DEA weights generated by the computer code in use at the time. While imposing secondary goals can reduce the variability of cross-efficiency scores, such approaches do not completely solve the problem of non-uniqueness, and meaningful secondary goals can lead to computationally intractable non-linear programs. The current paper proposes to use the units-invariant multiplicative DEA model to calculate the cross-efficiency scores. This allows one to calculate the maximum cross-efficiency score for each DMU in a converted linear model, and eliminates the need for imposing secondary goals.

9.
Many existing statistical and machine learning tools for social network analysis focus on a single level of analysis. Methods designed for clustering optimize a global partition of the graph, whereas projection-based approaches (e.g., the latent space model in the statistics literature) represent in rich detail the roles of individuals. Many pertinent questions in sociology and economics, however, span multiple scales of analysis. Further, many questions involve comparisons across disconnected graphs that will inevitably be of different sizes, either due to missing data or the inherent heterogeneity in real-world networks. We propose a class of network models that represent network structure on multiple scales and facilitate comparison across graphs with different numbers of individuals. These models differentially invest modeling effort within subgraphs of high density, often termed communities, while maintaining a parsimonious structure between said subgraphs. We show that our model class is projective, highlighting an ongoing discussion in the social network modeling literature on the dependence of inference paradigms on the size of the observed graph. We illustrate the utility of our method using data on household relations from Karnataka, India. Supplementary material for this article is available online.

10.
The Analytic Hierarchy Process (AHP) is a popular multicriteria decision-making approach, but the ease of AHP paired comparison data collection entails the problem that consistency restrictions have to be fulfilled for the data evaluation task. Quite a lot of consistency improvement techniques are available; however, this note explains why consistency adjustments are not necessarily helpful for computing acceptable weights for the determination of the underlying overall objective function.
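For context, here is a sketch of the standard AHP mechanics under discussion: priority weights from the principal eigenvector of the pairwise comparison matrix and Saaty's consistency ratio with the usual random-index table. This reproduces the conventional procedure only, not the note's argument about consistency adjustments.

```python
# Sketch: standard AHP priority weights (principal eigenvector) and Saaty's
# consistency ratio (CR); RI holds the usual random indices for n up to 9.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights_and_cr(A):
    """A: n x n positive reciprocal pairwise comparison matrix."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                   # priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0    # consistency index
    cr = ci / RI[n] if RI[n] else 0.0                 # CR < 0.1 is "acceptable"
    return w, cr
```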

11.
Focusing on current measurement methods that rely on multi-source data, this paper points out the problems in existing research practice and the biases in results they may cause, and proposes that multi-source data measuring the same object (construct or variable) should be aggregated into a composite. Drawing on the ideas of scale reliability and construct validity testing, it establishes the rationality and feasibility of such aggregation, discusses the design of weights for the information provided by different raters, and illustrates the analysis with a study of transformational leadership and individual creativity.
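One simple, illustrative way to form such a composite is sketched below: each source's weight is made proportional to how strongly its ratings correlate with the mean of the other sources (an item-total-style reliability idea). This is an assumed scheme for illustration, not necessarily the weighting design derived in the paper.

```python
# Sketch: combining ratings of the same construct from multiple sources,
# weighting each source by its correlation with the mean of the others.
import numpy as np

def composite_scores(ratings):
    """ratings: (n_subjects, n_sources) matrix of scores for one construct."""
    n_src = ratings.shape[1]
    weights = np.empty(n_src)
    for j in range(n_src):
        others = ratings[:, [k for k in range(n_src) if k != j]].mean(axis=1)
        weights[j] = np.corrcoef(ratings[:, j], others)[0, 1]
    weights = np.clip(weights, 0, None)      # drop negatively related sources
    weights = weights / weights.sum()
    return ratings @ weights, weights        # composite scores, source weights
```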

12.
Policy iteration methods are important but often computationally expensive approaches for solving certain stochastic optimization problems. Modified policy iteration methods have been proposed to reduce the storage and computational burden. The asymptotic speed of convergence of such methods is, however, not well understood. In this paper we show how modified policy iteration methods may be constructed to achieve a preassigned rate of convergence. Our analysis provides a framework for analyzing the local behavior of such methods and provides impetus for perhaps more computationally efficient procedures than currently exist.
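A minimal sketch of modified policy iteration for a finite discounted MDP follows, in which exact policy evaluation is replaced by m backup sweeps; the data structures and the fixed sweep count are illustrative assumptions, and the paper's construction for a preassigned convergence rate is not reproduced here.

```python
# Sketch of modified policy iteration: greedy improvement plus a truncated
# policy evaluation of m sweeps. P[a] is an (S, S) transition matrix and
# R[a] an (S,) reward vector for action a (assumed layout).
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, m=5, iters=200):
    A, S = len(R), len(R[0])
    V = np.zeros(S)
    for _ in range(iters):
        # policy improvement: greedy policy w.r.t. the current value estimate
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(A)])
        pi = Q.argmax(axis=0)
        # truncated policy evaluation: m backup sweeps under the fixed policy
        for _ in range(m):
            V = np.array([R[pi[s]][s] + gamma * P[pi[s]][s] @ V for s in range(S)])
    return V, pi
```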

13.
Multicriteria decision-making (MCDM) problems often involve a complex decision process in which multiple requirements and fuzzy conditions have to be taken into consideration simultaneously. The existing approaches for solving this problem in a fuzzy environment are complex. Combining the concepts of grey relation and pairwise comparison, a new fuzzy MCDM method is proposed. First, the fuzzy analytic hierarchy process (AHP) is used to construct fuzzy weights of all criteria. Then, linguistic terms characterized by L–R triangular fuzzy numbers are used to denote the evaluation values of all alternatives versus subjective and objective criteria. Finally, the aggregated fuzzy assessments of the different alternatives are ranked to determine the best selection. Furthermore, this paper uses a numerical example of location selection to demonstrate the applicability of the proposed method. The study results show that this method is an effective means for tackling MCDM problems in a fuzzy environment.
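A small sketch of the aggregation-and-ranking step is given below, assuming triangular fuzzy ratings and weights combined by approximate fuzzy arithmetic and ranked by centroid defuzzification; the grey-relation part of the method is not shown, and the (l, m, u) representation is an assumption for illustration.

```python
# Sketch: weighted aggregation of triangular fuzzy ratings and centroid
# defuzzification for ranking. A triangular fuzzy number is (l, m, u).

def fuzzy_weighted_score(ratings, weights):
    """ratings, weights: lists of (l, m, u) triangular fuzzy numbers,
    one rating and one weight per criterion (approximate fuzzy arithmetic)."""
    l = sum(rl * wl for (rl, _, _), (wl, _, _) in zip(ratings, weights))
    m = sum(rm * wm for (_, rm, _), (_, wm, _) in zip(ratings, weights))
    u = sum(ru * wu for (_, _, ru), (_, _, wu) in zip(ratings, weights))
    return (l, m, u)

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number, used here to rank alternatives."""
    l, m, u = tfn
    return (l + m + u) / 3.0
```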

14.
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs created by adjusting the input and output levels of certain observed relatively efficient DMUs, in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs which is in line with the general VRS technology. The suggested procedure is illustrated using real data.

15.
The minimum weight vertex cover problem is a basic combinatorial optimization problem defined as follows. Given an undirected graph and positive weights for all vertices, the objective is to determine a subset of the vertices which covers all edges such that the sum of the related cost values is minimized. In this paper we apply a modified reactive tabu search approach for solving the problem. While the initial concept of reactive tabu search involves a random walk, we propose to replace this random walk by a controlled simulated annealing. Numerical results are presented that outperform previous metaheuristic approaches in most cases.
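As a rough baseline for the same problem, here is a plain simulated-annealing local search over vertex subsets with a penalty for uncovered edges; it is not the paper's reactive tabu search with controlled annealing, and all parameters are illustrative.

```python
# Sketch: simulated annealing for minimum weight vertex cover (flip one
# vertex at a time, penalize uncovered edges). Generic baseline only.
import math, random

def sa_vertex_cover(vertices, weights, edges, iters=20000, t0=1.0, alpha=0.9995):
    penalty = sum(weights.values()) + 1.0            # makes coverage dominate
    in_cover = {v: True for v in vertices}           # start from a trivial cover

    def cost(sol):
        uncovered = sum(1 for u, v in edges if not (sol[u] or sol[v]))
        return sum(weights[v] for v in vertices if sol[v]) + penalty * uncovered

    cur, cur_cost = dict(in_cover), cost(in_cover)
    best, best_cost, t = dict(cur), cur_cost, t0
    for _ in range(iters):
        v = random.choice(vertices)                  # vertices: a list
        cur[v] = not cur[v]                          # propose flipping one vertex
        new_cost = cost(cur)
        if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost                      # accept the move
            if cur_cost < best_cost and all(cur[u] or cur[w] for u, w in edges):
                best, best_cost = dict(cur), cur_cost
        else:
            cur[v] = not cur[v]                      # reject: undo the flip
        t *= alpha                                   # cool down
    return [v for v in vertices if best[v]], best_cost
```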

16.
This paper proposes an optimization-based interval-valued intuitionistic fuzzy three-way group decision-making method for the case where the experts' weight information is unknown. First, the interval-valued intuitionistic fuzzy weighted averaging operator is used to aggregate the interval-valued intuitionistic fuzzy loss evaluations provided by the different experts into a group loss evaluation. Following the principle that the more similar an individual expert's evaluation is to the group evaluation, the better it reflects the group opinion and the higher the weight that expert should receive, weight-determination models are constructed for the cases where the experts' weight information is completely unknown and partially known. An optimization model is then built to determine the probability threshold pair for interval-valued intuitionistic fuzzy three-way decisions, and a three-way group decision-making method with unknown expert weight information is proposed. Finally, a numerical example and comparative results demonstrate the effectiveness of the proposed method.
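For the first aggregation step, a sketch of the standard interval-valued intuitionistic fuzzy weighted averaging (IIFWA) operator is given below; the data layout and operator conventions are assumptions, and the paper's weight-determination and threshold-optimization models are not shown.

```python
# Sketch: interval-valued intuitionistic fuzzy weighted averaging (IIFWA).
# Each expert's evaluation is ((a, b), (c, d)) -- membership and
# non-membership intervals -- and w is the expert weight vector.
import numpy as np

def iifwa(evals, w):
    """evals: list of ((a, b), (c, d)); w: weights summing to one."""
    a = 1.0 - np.prod([(1.0 - e[0][0]) ** wi for e, wi in zip(evals, w)])
    b = 1.0 - np.prod([(1.0 - e[0][1]) ** wi for e, wi in zip(evals, w)])
    c = np.prod([e[1][0] ** wi for e, wi in zip(evals, w)])
    d = np.prod([e[1][1] ** wi for e, wi in zip(evals, w)])
    return (a, b), (c, d)
```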

17.
In many applications, some covariates may be missing for various reasons. Regression quantiles can be either biased or under-powered when the missing data are ignored. Multiple imputation and an EM-based augmentation approach have been proposed to fully utilize data with missing covariates for quantile regression. Both methods, however, are computationally expensive. We propose a fast imputation algorithm (FI) to handle missing covariates in quantile regression, which is an extension of fractional imputation in likelihood-based regressions. FI and modified imputation algorithms (FIIPW and MIIPW) are compared to existing MI and IPW approaches in simulation studies, and applied to part of the National Collaborative Perinatal Project study.
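A generic fit-after-imputation baseline is sketched below using `statsmodels`' `QuantReg`: the missing covariate is filled by crude random draws, a quantile regression is fit on each completed dataset, and the coefficients are pooled. This illustrates the workflow only; it is not the paper's fractional imputation (FI) algorithm.

```python
# Sketch: multiple-imputation baseline for quantile regression with one
# missing covariate. Imputation model here is deliberately crude.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def mi_quantile_regression(y, X, miss_col, n_imp=10, q=0.5, seed=0):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(X[:, miss_col])
    mu, sd = X[obs, miss_col].mean(), X[obs, miss_col].std()
    coefs = []
    for _ in range(n_imp):
        Xi = X.copy()
        Xi[~obs, miss_col] = rng.normal(mu, sd, size=(~obs).sum())  # crude draw
        res = QuantReg(y, sm.add_constant(Xi)).fit(q=q)
        coefs.append(res.params)
    return np.mean(coefs, axis=0)                    # pooled coefficients
```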

18.
Project portfolio selection problems are inherently complex problems with multiple and often conflicting objectives. Numerous analytical techniques ranging from simple weighted scoring to complex mathematical programming approaches have been proposed to solve these problems with precise data. However, the project data in real-world problems are often imprecise or ambiguous. We propose a fuzzy Multidimensional Multiple-choice Knapsack Problem (MMKP) formulation for project portfolio selection. The proposed model is composed of an Efficient Epsilon-Constraint (EEC) method and a customized multi-objective evolutionary algorithm. A Data Envelopment Analysis (DEA) model is used to prune the generated solutions into a limited and manageable set of implementable alternatives. Statistical analysis is performed to investigate the effectiveness of the proposed approach in comparison with the competing methods in the literature. A case study is presented to demonstrate the applicability of the proposed model and exhibit the efficacy of the procedures and algorithms.

19.
Data Envelopment Analysis (DEA) is basically a linear programming-based technique used for measuring the relative performance of organizational units, referred to as Decision Making Units (DMUs). The flexibility in selecting the weights in standard DEA models deters the comparison among DMUs on a common basis. Moreover, these weights are not suitable for measuring the preferences of a decision maker (DM). To deal with the first difficulty, the concept of common weights was proposed in the DEA literature. However, none of the common weights approaches addresses the second difficulty. This paper proposes an alternative approach that we term 'preference common weights', which is both practical and intellectually consistent with the DEA philosophy. To do this, we introduce a multiple objective linear programming model in which the objective functions are input/output variables subject to constraints similar to the equations that define the production possibility set of standard DEA models. Then, by using the Zionts–Wallenius method, we can generate common weights that reflect the DM's underlying value structure over the objective functions.

20.
One of the unanswered questions in non-additive measure theory is how to define the product of non-additive measures. Most of the approaches that have been presented so far work only for discrete measures. In this paper a new approach is presented for not-necessarily-discrete non-additive measures that stand in a certain relation to additive measures; usually this means that they are somehow derived from additive measures.
