Similar Documents
20 similar documents found.
1.
Bayesian model averaging (BMA) is the state-of-the-art approach for overcoming model uncertainty. Yet, especially on small data sets, the results yielded by BMA might be sensitive to the prior over the models. Credal model averaging (CMA) addresses this problem by substituting the single prior over the models with a set of priors (credal set). Such an approach solves the problem of how to choose the prior over the models and automates sensitivity analysis. We discuss various CMA algorithms for building an ensemble of logistic regressors characterized by different sets of covariates. We show how CMA can be appropriately tuned to the case in which one is prior-ignorant and to the case in which domain knowledge is available. CMA detects prior-dependent instances, namely instances for which a different class is more probable depending on the prior over the models. On such instances CMA suspends judgment, returning multiple classes. We thoroughly compare different BMA and CMA variants on a real case study, predicting the presence of Alpine marmot burrows in an Alpine valley. We find that BMA is almost a random guesser on the instances recognized as prior-dependent by CMA.
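As a hedged illustration of the prior-dependence check, the numpy sketch below averages two pre-trained models under every prior in a credal interval; all probabilities, marginal likelihoods, and interval bounds are synthetic placeholders, not values from the study. Since the averaged class probability is monotone in the prior, checking the two endpoints of the interval suffices.

```python
import numpy as np

# Minimal credal model averaging sketch: two candidate models, an interval
# prior [lo, hi] on model 1, and detection of prior-dependent instances.
np.random.seed(0)
n = 8
p_m1 = np.random.rand(n)          # P(class=1 | x, model 1), synthetic
p_m2 = np.random.rand(n)          # P(class=1 | x, model 2), synthetic
ml1, ml2 = 0.6, 0.4               # illustrative marginal likelihoods

def bma_prob(prior1):
    """Posterior-weighted class probability for a given prior on model 1."""
    w1 = prior1 * ml1
    w2 = (1.0 - prior1) * ml2
    return (w1 * p_m1 + w2 * p_m2) / (w1 + w2)

lo, hi = 0.2, 0.8                 # credal set: all priors in [lo, hi]
# The mixture is monotone in the prior, so the endpoints suffice.
pred_lo = bma_prob(lo) > 0.5
pred_hi = bma_prob(hi) > 0.5

# Prior-dependent instances: the predicted class flips across the credal
# set, so CMA suspends judgment and returns both classes.
prior_dependent = pred_lo != pred_hi
print("prior-dependent instances:", np.where(prior_dependent)[0])
```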

2.
The available methods to handle missing values in principal component analysis only provide point estimates of the parameters (axes and components) and estimates of the missing values. To take into account the variability due to missing values, a multiple imputation method is proposed. First, a method to generate multiple imputed data sets from a principal component analysis model is defined. Then, two ways to visualize the uncertainty due to missing values in the principal component analysis results are described. The first consists in projecting the imputed data sets onto a reference configuration as supplementary elements to assess the stability of the individuals (respectively, of the variables). The second consists in performing a principal component analysis on each imputed data set and fitting each obtained configuration onto the reference one with a Procrustes rotation. The latter strategy allows one to assess the variability of the principal component analysis parameters induced by the missing values. The methodology is then evaluated on a real data set.
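The following is a minimal numpy sketch of the second visualization strategy: each imputed configuration is fitted onto a reference PCA configuration with an orthogonal Procrustes rotation, and the spread of the fitted points measures the instability induced by missing values. The "imputed" data sets here are just noisy copies of one synthetic matrix, standing in for genuine multiple imputations.

```python
import numpy as np

def pca_scores(X, k=2):
    """Principal component scores of the centered data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def procrustes_fit(conf, ref):
    """Orthogonal rotation R minimizing ||conf @ R - ref||_F."""
    U, _, Vt = np.linalg.svd(conf.T @ ref)
    return conf @ (U @ Vt)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
ref = pca_scores(X)

# Stand-ins for M multiply-imputed data sets (here: noisy copies of X).
fitted = [procrustes_fit(pca_scores(X + rng.normal(scale=0.1, size=X.shape)), ref)
          for _ in range(20)]
spread = np.std(np.stack(fitted), axis=0)   # per-individual instability
print("mean positional spread:", spread.mean())
```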

3.
Soft set theory, originally proposed by Molodtsov, can be used as a general mathematical tool for dealing with uncertainty. Since its appearance, there has been some progress concerning practical applications of soft set theory, especially the use of soft sets in decision making. The intuitionistic fuzzy soft set is a combination of an intuitionistic fuzzy set and a soft set. Rough set theory is a powerful tool for dealing with uncertainty, granularity and incompleteness of knowledge in information systems. Using rough set theory, this paper proposes a novel approach to intuitionistic fuzzy soft set based decision making problems. First, by employing an intuitionistic fuzzy relation and a threshold value pair, we define a new rough set model and examine some of its fundamental properties. Then the concepts of approximate precision and rough degree are given and some basic properties are discussed. Furthermore, we investigate the relationship between intuitionistic fuzzy soft sets and intuitionistic fuzzy relations and present a rough set approach to intuitionistic fuzzy soft set based decision making. Finally, an illustrative example is employed to show the validity of this rough set approach in intuitionistic fuzzy soft set based decision making problems.
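A hedged sketch of the thresholded rough approximations: given the membership and non-membership matrices of an intuitionistic fuzzy relation and a threshold pair (alpha, beta), the lower and upper approximations of a crisp target set follow directly. The relation values, thresholds, and target set below are illustrative, not taken from the paper.

```python
import numpy as np

# (alpha, beta)-lower/upper approximations of a crisp set under an
# intuitionistic fuzzy relation R = (mu, nu). All values are illustrative.
mu = np.array([[1.0, 0.7, 0.2],
               [0.7, 1.0, 0.5],
               [0.2, 0.5, 1.0]])   # membership of R(x, y)
nu = np.array([[0.0, 0.2, 0.7],
               [0.2, 0.0, 0.4],
               [0.7, 0.4, 0.0]])   # non-membership of R(x, y)
A = {0, 1}                         # target set of objects
alpha, beta = 0.6, 0.3             # threshold value pair

n = mu.shape[0]
# y is "related" to x when membership is high enough and non-membership low.
related = [(mu[x] >= alpha) & (nu[x] <= beta) for x in range(n)]

lower = {x for x in range(n)
         if all(y in A for y in np.where(related[x])[0])}
upper = {x for x in range(n)
         if any(y in A for y in np.where(related[x])[0])}
print("lower:", lower, "upper:", upper)
```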

4.
This paper extends the Log-robust portfolio management approach to the case with short sales, i.e., the case where the manager can sell shares he does not yet own. We model the continuously compounded rates of return, which have been established in the literature as the true drivers of uncertainty, as uncertain parameters belonging to polyhedral uncertainty sets, and maximize the worst-case portfolio wealth over that set in a one-period setting. The degree of the manager’s aversion to ambiguity is incorporated through a single, intuitive parameter, which determines the size of the uncertainty set. The presence of short-selling requires the development of problem-specific techniques, because the optimization problem is not convex. In the case where assets are independent, we show that the robust optimization problem can be solved exactly as a series of linear programming problems; as a result, the approach remains tractable for large numbers of assets. We also provide insights into the structure of the optimal solution. In the case of correlated assets, we develop and test a heuristic where correlation is maintained only between assets invested in. In computational experiments, the proposed approach exhibits superior performance to that of the traditional robust approach.
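Since the exact log-robust formulation with short sales is not reproduced here, the sketch below illustrates the general pattern it builds on: a worst-case linear return over a polyhedral (budget-of-uncertainty) set, whose robust counterpart is a single LP solvable with scipy. The returns, deviations, and budget gamma are invented, and the model is long-only and linear, a deliberate simplification.

```python
import numpy as np
from scipy.optimize import linprog

# Budget-of-uncertainty robust portfolio LP: the adversary may push up to
# gamma assets to their worst-case return. Variables: x (weights),
# p (scalar dual), q (per-asset duals).
mu = np.array([0.08, 0.06, 0.10])   # nominal returns (illustrative)
d  = np.array([0.05, 0.02, 0.09])   # maximum deviations (illustrative)
gamma = 1.5                          # budget of uncertainty

n = len(mu)
# Objective: minimize -mu.x + gamma*p + sum(q)  (i.e. maximize worst case).
c = np.concatenate([-mu, [gamma], np.ones(n)])
# Robust-counterpart constraints: d_i * x_i - p - q_i <= 0.
A_ub = np.zeros((n, 2 * n + 1))
for i in range(n):
    A_ub[i, i] = d[i]
    A_ub[i, n] = -1.0
    A_ub[i, n + 1 + i] = -1.0
b_ub = np.zeros(n)
# Weights sum to one (long-only via nonnegativity bounds).
A_eq = np.zeros((1, 2 * n + 1)); A_eq[0, :n] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (2 * n + 1))
print("robust weights:", res.x[:n].round(3))
```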

5.
6.
We present a robust optimization approach to portfolio management under uncertainty when randomness is modeled using uncertainty sets for the continuously compounded rates of return, which empirical research argues are the true drivers of uncertainty, but the parameters needed to define the uncertainty sets, such as the drift and standard deviation, are not known precisely. Instead, a finite set of scenarios is available for the input data, obtained either using different time horizons or assumptions in the estimation process. Our objective is to maximize the worst-case portfolio value (over a set of allowable deviations of the uncertain parameters from their nominal values, using the worst-case nominal values among the possible scenarios) at the end of the time horizon in a one-period setting. Short sales are not allowed. We consider both the independent and correlated assets models. For the independent assets case, we derive a convex reformulation, albeit involving functions with singular Hessians. Because this slows computation times, we also provide lower and upper linear approximation problems and devise an algorithm that gives the decision maker a solution within a desired tolerance from optimality. For the correlated assets case, we suggest a tractable heuristic that uses insights derived in the independent assets case.
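A hedged sketch of the scenario ingredient: maximizing the worst case over a finite set of nominal-return scenarios is an epigraph LP. The scenario matrix below is invented for illustration; the paper's full model additionally wraps uncertainty sets around each scenario's parameters.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize the worst-case return over a finite set of scenarios.
scenarios = np.array([[0.07, 0.05, 0.11],   # e.g. different estimation horizons
                      [0.09, 0.04, 0.06],
                      [0.05, 0.06, 0.08]])
S, n = scenarios.shape
# Variables: x (n weights), t (worst-case value). Maximize t = minimize -t.
c = np.concatenate([np.zeros(n), [-1.0]])
# t <= mu_s . x  for every scenario s   ->   t - mu_s . x <= 0
A_ub = np.hstack([-scenarios, np.ones((S, 1))])
b_ub = np.zeros(S)
A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]  # sum x = 1, no short sales
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
print("weights:", res.x[:n].round(3), "worst-case return:", round(-res.fun, 4))
```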

7.
Rough set theory has been combined with intuitionistic fuzzy sets to deal with uncertainty in decision making. This paper proposes a general decision-making framework based on the intuitionistic fuzzy rough set model over two universes. We first present the intuitionistic fuzzy rough set model over two universes with a constructive approach and discuss the basic properties of this model. We then give a new approach to decision making in an uncertain environment by using the intuitionistic fuzzy rough sets over two universes. Further, the principal steps of the decision method established in this paper are presented in detail. Finally, an example of handling a medical diagnosis problem illustrates this approach.

8.
Rough set theory is a useful mathematical tool to deal with vagueness and uncertainty in available information. The results of a rough set approach are usually presented in the form of a set of decision rules derived from a decision table. Because using the original decision table is not the only way to implement a rough set approach, it is interesting to investigate possible improvements in classification performance by replacing the original table with an alternative table obtained by pairwise comparisons among patterns. In this paper, a decision table based on pairwise comparisons is generated using the preference relation of the Preference Ranking Organization Methods for Enrichment Evaluations (PROMETHEE), to gauge the intensity of preference for one pattern over another on each criterion before classification. The rough-set-based rule classifier (RSRC) provided by the well-known Rough Set Exploration System (RSES) library running under Windows has been successfully used to generate decision rules from the pairwise-comparison-based tables. Specifically, parameters related to the preference function on each criterion have been determined using a genetic-algorithm-based approach. Computer simulations involving several real-world data sets have revealed that the proposed classification method performs well compared to other well-known classification methods and to RSRC using the original tables.
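A minimal sketch of the table-construction step, assuming a linear PROMETHEE-style preference function with threshold p on each criterion; in the paper these thresholds are tuned by a genetic algorithm, and the resulting pairwise table is fed to the RSRC rule inducer rather than printed.

```python
import numpy as np

# Build a pairwise-comparison decision table with a linear preference
# function P(d) = clip(d / p, 0, 1) on each criterion. Scores and
# thresholds are illustrative.
X = np.array([[6.0, 3.0],     # pattern scores on two criteria
              [4.0, 5.0],
              [7.0, 1.0]])
p = np.array([2.0, 3.0])      # preference thresholds per criterion

n, m = X.shape
rows = []
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = X[a] - X[b]                     # pairwise score differences
        pref = np.clip(d / p, 0.0, 1.0)     # intensity of preference of a over b
        rows.append((a, b, *pref.round(2)))

# Each row (a, b, P1, P2) replaces a row of the original decision table
# before rule induction with a rough-set classifier.
for r in rows:
    print(r)
```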

9.
Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. The algorithm incrementally divides the whole data set into groups that can each be approximated by one linear regression function. A special procedure is introduced to generate a good starting point for the global optimization problem solved at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis.
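For orientation, here is a compact numpy sketch of the classical Späth alternation that the paper benchmarks against (the incremental nonsmooth algorithm itself is more involved): fit one line per cluster, reassign each point to its best-fitting line, repeat. Data are synthetic with two hidden regimes.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200)
y = np.where(x > 0, 2 * x + 1, -3 * x)      # two hidden linear regimes
X = np.column_stack([x, np.ones_like(x)])   # design matrix with intercept

k = 2
labels = rng.integers(0, k, len(x))         # random initial partition
for _ in range(50):
    # Fit one least-squares line per cluster ...
    coefs = [np.linalg.lstsq(X[labels == j], y[labels == j], rcond=None)[0]
             for j in range(k)]
    # ... then reassign each point to its best-fitting line.
    resid = np.stack([(y - X @ c) ** 2 for c in coefs])
    new_labels = resid.argmin(axis=0)
    if np.array_equal(new_labels, labels):   # converged
        break
    labels = new_labels
print("fitted lines (slope, intercept):", [c.round(2) for c in coefs])
```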

10.

We study the problem of drift estimation for two-scale continuous time series. We set ourselves in the framework of overdamped Langevin equations, for which a single-scale surrogate homogenized equation exists. In this setting, estimating the drift coefficient of the homogenized equation requires pre-processing of the data, often in the form of subsampling; this is because the two-scale equation and the homogenized single-scale equation are incompatible at small scales, generating mutually singular measures on the path space. We avoid subsampling and work instead with filtered data, found by application of an appropriate kernel function, and compute maximum likelihood estimators based on the filtered process. We show that the estimators we propose are asymptotically unbiased and demonstrate numerically the advantages of our method with respect to subsampling. Finally, we show how our filtered data methodology can be combined with Bayesian techniques and provide a full uncertainty quantification of the inference procedure.
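A hedged illustration of the filtered-data estimator on a deliberately simplified setting: a single-scale OU process in place of the two-scale Langevin dynamics, an exponential kernel as the filter, and the drift MLE computed with the filtered process in place of the raw one. Parameter values are arbitrary.

```python
import numpy as np

# Drift estimation for dX = -A X dt + sqrt(2) dW using exponentially
# filtered data Z (Z' = (X - Z) / delta) instead of subsampling.
rng = np.random.default_rng(3)
A_true, dt, N = 1.0, 1e-3, 200_000
X = np.zeros(N)
for n in range(N - 1):                       # Euler-Maruyama path
    X[n + 1] = X[n] - A_true * X[n] * dt + np.sqrt(2 * dt) * rng.normal()

delta = 0.1                                  # filter width (illustrative)
Z = np.zeros(N)
for n in range(N - 1):
    Z[n + 1] = Z[n] + (X[n] - Z[n]) * dt / delta

# Maximum likelihood estimator with the filtered process as test function.
dX = np.diff(X)
A_hat = -np.sum(Z[:-1] * dX) / (np.sum(Z[:-1] * X[:-1]) * dt)
print(f"true drift {A_true}, filtered-data estimate {A_hat:.3f}")
```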


11.
12.
We consider a problem of decision under uncertainty with outcomes distributed over time. We propose a rough set model based on a combination of time dominance and stochastic dominance. For the sake of simplicity we consider the case of a traditional additive probability distribution over the set of states of the world; however, we show that the model is rich enough to handle non-additive probability distributions, and even qualitative ordinal distributions. The rough set approach gives a representation of the decision maker’s time-dependent preferences under uncertainty in terms of “if…, then…” decision rules induced from rough approximations of sets of exemplary decisions.
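As a small illustration of one ingredient, the sketch below checks first-order stochastic dominance of one act over another separately at every time period, with equally likely states; the paper's rough-set machinery, which induces decision rules from such dominance comparisons, is not reproduced.

```python
import numpy as np

# Payoff matrices: rows = states of the world (equally likely),
# columns = time periods. All values are illustrative.
A = np.array([[3.0, 5.0, 6.0],
              [2.0, 4.0, 7.0],
              [4.0, 6.0, 8.0]])
B = np.array([[2.0, 5.0, 5.0],
              [1.0, 3.0, 6.0],
              [3.0, 6.0, 7.0]])

def fsd(a, b):
    """True if the distribution of a first-order dominates that of b."""
    grid = np.union1d(a, b)
    cdf_a = np.mean(a[:, None] <= grid[None, :], axis=0)
    cdf_b = np.mean(b[:, None] <= grid[None, :], axis=0)
    return np.all(cdf_a <= cdf_b)     # F_A(x) <= F_B(x) everywhere

dominates = [fsd(A[:, t], B[:, t]) for t in range(A.shape[1])]
print("A dominates B at every period:", all(dominates))
```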

13.
Constraint programming models appear in many sciences, including mathematics, engineering and physics. These problems aim at optimizing a cost function subject to some constraints. Fuzzy constraint programming has been developed for treating uncertainty in optimization problems with vague constraints. In this paper, a new method is presented for building a fuzzy model of a set of constraints. Unlike existing methods, instead of constraints with fuzzy inequalities, fuzzy coefficients or fuzzy numbers, the vague nature of the constraint set is modeled using a learning scheme with an adaptive neuro-fuzzy inference system (ANFIS). In the proposed approach, constraints are not restricted to be differentiable, continuous or linear; moreover, an importance degree for each constraint can easily be applied. Violating each weighted constraint reduces the certainty membership of the constraint set. Monte Carlo simulations are used to generate feature-vector samples and outputs for constructing the data needed to train the ANFIS. The experimental results show the ability of the proposed approach to model constraints and solve parametric programming problems.
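A hedged sketch of the data-generation side: Monte Carlo feature vectors paired with a weighted certainty membership of the constraint set, which would serve as ANFIS training targets. The sigmoid satisfaction function and the two constraints with importance weights are invented stand-ins; the ANFIS fit itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Constraints in g(x) <= 0 form, each with an importance weight.
constraints = [
    (lambda x: x[0] + x[1] - 1.0, 0.7),
    (lambda x: x[0] ** 2 - x[1],  0.3),
]

def membership(x, sharpness=10.0):
    """Weighted certainty that x satisfies the constraint set."""
    m = 1.0
    for g, w in constraints:
        sat = 1.0 / (1.0 + np.exp(sharpness * g(x)))  # soft satisfaction in (0,1)
        m *= sat ** w                                  # weighted aggregation
    return m

samples = rng.uniform(-1.0, 2.0, size=(1000, 2))       # Monte Carlo feature vectors
targets = np.array([membership(x) for x in samples])   # would-be ANFIS outputs
print("mean certainty over the sample:", targets.mean().round(3))
```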

14.
This paper discusses the associations between traits and haplotypes based on FI (fluorescent intensity) data sets. We consider a clustering algorithm based on mixtures of t distributions to obtain all possible genotypes of each individual (i.e., the “GenoSpectrum”). We then propose a likelihood-based approach that incorporates the genotyping uncertainty to assess the associations between traits and haplotypes through a haplotype-based logistic regression model. Simulation studies show that our likelihood-based method can reduce the impact induced by genotyping errors.
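A minimal sketch of the likelihood idea, assuming simulated data: each individual carries posterior genotype probabilities (a stand-in for the GenoSpectrum from the t-mixture clustering), and the logistic likelihood averages over genotypes with those weights. Genotype codes replace haplotypes to keep the example one-dimensional.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
n, G = 300, 3                        # individuals, genotype codes 0/1/2
true_g = rng.integers(0, G, n)
y = rng.random(n) < expit(-1.0 + 0.8 * true_g)   # binary trait

# Posterior P(genotype | fluorescent intensities): mostly right, some noise.
post = np.full((n, G), 0.1)
post[np.arange(n), true_g] = 0.8
post /= post.sum(axis=1, keepdims=True)

def neg_loglik(beta):
    """Negative log-likelihood averaging over possible genotypes."""
    b0, b1 = beta
    p_g = np.clip(expit(b0 + b1 * np.arange(G)), 1e-9, 1 - 1e-9)
    like_g = np.where(y[:, None], p_g[None, :], 1.0 - p_g[None, :])
    return -np.sum(np.log((post * like_g).sum(axis=1)))

fit = minimize(neg_loglik, x0=np.zeros(2))
print("estimated (intercept, genotype effect):", fit.x.round(2))
```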

15.

In this work, we study a stochastic single machine scheduling problem in which the features of learning effect on processing times, sequence-dependent setup times, and machine configuration selection are considered simultaneously. More precisely, the machine works under a set of configurations and requires stochastic sequence-dependent setup times to switch from one configuration to another. Also, the stochastic processing time of a job is a function of its position and the machine configuration. The objective is to find the sequence of jobs and choose a configuration to process each job to minimize the makespan. We first show that the proposed problem can be formulated through two-stage and multi-stage Stochastic Programming models, which are challenging from the computational point of view. Then, by looking at the problem as a multi-stage dynamic random decision process, a new deterministic approximation-based formulation is developed. The method first derives a mixed-integer non-linear model based on the concept of accessibility to all possible and available alternatives at each stage of the decision-making process. Then, to efficiently solve the problem, a new accessibility measure is defined to convert the model into the search of a shortest path throughout the stages. Extensive computational experiments are carried out on various sets of instances. We discuss and compare the results found by the resolution of plain stochastic models with those obtained by the deterministic approximation approach. Our approximation shows excellent performances both in terms of solution accuracy and computational time.
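A hedged sketch of the shortest-path reduction for a fixed job sequence: with expected processing and setup times as stand-ins for the stochastic data, choosing a configuration per position is a dynamic program over stages.

```python
import numpy as np

# dp[c] = best expected completion time of the jobs so far, ending in
# configuration c. Expected times below are illustrative stand-ins.
rng = np.random.default_rng(6)
n_jobs, n_conf = 5, 3
proc = rng.uniform(2, 6, size=(n_jobs, n_conf))    # E[processing | position, config]
setup = rng.uniform(0, 2, size=(n_conf, n_conf))   # E[setup | from, to]
np.fill_diagonal(setup, 0.0)                       # no setup if config unchanged

dp = proc[0].copy()                                # stage 0: pick initial config
for j in range(1, n_jobs):
    # Cheapest way to reach each config at stage j from any config at j-1.
    dp = (dp[:, None] + setup).min(axis=0) + proc[j]
print("minimum expected makespan:", dp.min().round(2))
```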


16.
In this research, a robust optimization approach applied to multiclass support vector machines (SVMs) is investigated. Two new kernel-based methods are developed to address data with input uncertainty, where each data point lies inside a sphere of uncertainty. The models are called robust SVM (Robust-SVM) and robust feasibility approach (Robust-FA), respectively. The two models are compared in terms of robustness and generalization error. The models are also compared to the robust Minimax Probability Machine (MPM) in terms of generalization behavior on several data sets. It is shown that Robust-SVM performs better than the robust MPM.
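As a simplified stand-in for the paper's kernel multiclass models, the sketch below trains a binary linear SVM whose margin is inflated by r·||w||, which is what a radius-r sphere of input uncertainty around each point requires; training uses plain subgradient descent on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)
r, lam, lr = 0.5, 0.01, 0.01          # uncertainty radius, regularization, step

w, b = np.zeros(2), 0.0
for _ in range(500):
    # Robust margin: every point in a radius-r sphere must satisfy the
    # constraint, which tightens the margin by r * ||w||.
    margins = y * (X @ w + b) - r * np.linalg.norm(w)
    viol = margins < 1.0                                  # hinge-active points
    w_dir = w / (np.linalg.norm(w) + 1e-12)
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(0) + viol.sum() * r * w_dir
    grad_b = -y[viol].sum()
    w -= lr * grad_w
    b -= lr * grad_b
print("robust separator w, b:", w.round(2), round(b, 2))
```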

17.
An adjustable approach to fuzzy soft set based decision making
Molodtsov’s soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty. Recently, decision making based on (fuzzy) soft sets has gained paramount importance. This paper aims to give deeper insights into decision making based on fuzzy soft sets. We discuss the validity of the Roy–Maji method and show its true limitations. We point out that the choice value designed for the crisp case is no longer fit for solving decision making problems involving fuzzy soft sets. By means of level soft sets, we present an adjustable approach to fuzzy soft set based decision making and give some illustrative examples. Moreover, the weighted fuzzy soft set is introduced and its application to decision making is also investigated.
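A minimal sketch of the level-soft-set device: thresholding the fuzzy membership table at a level λ yields a crisp soft set whose row sums are the choice values used for ranking. The table and the fixed threshold are illustrative; mid-level or top-level thresholds fit the same two lines.

```python
import numpy as np

# Fuzzy soft set as an objects-by-parameters membership table (illustrative).
F = np.array([[0.8, 0.4, 0.9],
              [0.6, 0.7, 0.3],
              [0.9, 0.2, 0.5]])

lam = 0.5                          # fixed threshold level; mid-level would be
L = (F >= lam).astype(int)         # F.mean(axis=0), top-level F.max(axis=0)
choice = L.sum(axis=1)             # choice values of the level soft set
print("level soft set:\n", L)
print("choice values:", choice, "-> best object:", choice.argmax())
```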

18.
19.
Semiparametric random censorship (SRC) models (Dikta, 1998) provide an attractive framework for estimating survival functions when censoring indicators are fully or partially available. When there are missing censoring indicators (MCIs), the SRC approach employs a model-based estimate of the conditional expectation of the censoring indicator given the observed time, where the model parameters are estimated using only the complete cases. The multiple imputations approach, on the other hand, utilizes this model-based estimate to impute the missing censoring indicators and form several completed data sets. The Kaplan–Meier and SRC estimators based on the several completed data sets are averaged to arrive at the multiple imputations Kaplan–Meier (MIKM) and the multiple imputations SRC (MISRC) estimators. While the MIKM estimator is asymptotically as efficient as or less efficient than the standard SRC-based estimator that involves no imputations, here we investigate the performance of the MISRC estimator and prove that it attains the benchmark variance set by the SRC-based estimator. We also present numerical results comparing the performances of the estimators under several misspecified models for the above-mentioned conditional expectation.
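A hedged simulation sketch of the MIKM construction: estimate P(δ=1 | T) from complete cases (here a crude binned estimate instead of a parametric SRC model), impute the missing indicators repeatedly, and average the Kaplan–Meier estimates across imputations.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
T = np.minimum(rng.exponential(1.0, n), rng.exponential(1.5, n))
delta = (rng.random(n) < 0.6).astype(float)        # censoring indicators
miss = rng.random(n) < 0.3                         # 30% indicators missing

def km_at(T, d, t0):
    """Kaplan-Meier survival estimate at time t0."""
    order = np.argsort(T)
    Ts, ds = T[order], d[order]
    m = len(Ts)
    at_risk = m - np.arange(m)
    keep = (Ts <= t0) & (ds == 1)
    return np.prod(1.0 - 1.0 / at_risk[keep])

# Complete-case model for P(delta=1 | T): a crude binned estimate standing
# in for the parametric SRC model.
bins = np.quantile(T, [0, 0.25, 0.5, 0.75, 1.0])
idx = np.clip(np.digitize(T, bins) - 1, 0, 3)
p_hat = np.array([delta[~miss & (idx == k)].mean() for k in range(4)])

estimates = []
for _ in range(20):                                # 20 imputed data sets
    d_imp = delta.copy()
    d_imp[miss] = (rng.random(miss.sum()) < p_hat[idx[miss]]).astype(float)
    estimates.append(km_at(T, d_imp, t0=1.0))
print("MIKM estimate of S(1.0):", np.mean(estimates).round(3))
```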

20.
In this work, the problem of allocating a set of production lots to satisfy customer orders is considered. This research is of relevance to lot-to-order matching problems in semiconductor supply chain settings. Lot-splitting is not allowed during the allocation process due to standard practices, and lot sizes are regarded as uncertain planning data when making the allocation decisions due to potential yield loss. In order to minimize the total penalties of demand under-fulfillment and over-fulfillment, a robust mixed-integer optimization model is proposed for allocating a set of work-in-process lots to customer orders, where lot sizes are modeled using ellipsoidal uncertainty sets. To solve the optimization problem efficiently, we apply the techniques of branch-and-price and Benders decomposition. The advantages of our model are that it represents uncertainty in a straightforward manner with few distributional assumptions, and it produces solutions that effectively hedge against the uncertainty in the lot sizes using very reasonable amounts of computational effort.
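A hedged toy version of the ellipsoidal-uncertainty idea for a single order: the guaranteed yield of a chosen lot subset is its nominal total minus Ω times the Euclidean norm of the per-lot deviations, and whole-lot subsets are enumerated by brute force (the paper instead solves the full multi-order model with branch-and-price and Benders decomposition). All numbers are invented.

```python
import numpy as np
from itertools import combinations

nominal = np.array([25.0, 40.0, 30.0, 15.0])   # expected lot sizes
dev     = np.array([ 3.0,  6.0,  4.0,  2.0])   # deviation scales
demand, omega = 60.0, 1.0                      # order size, robustness level
c_under, c_over = 2.0, 1.0                     # penalty rates

best = None
for k in range(len(nominal) + 1):
    for S in combinations(range(len(nominal)), k):
        S = list(S)
        # Worst-case yield over the ellipsoidal uncertainty set.
        worst_yield = nominal[S].sum() - omega * np.linalg.norm(dev[S])
        cost = c_under * max(0.0, demand - worst_yield) \
             + c_over * max(0.0, worst_yield - demand)
        if best is None or cost < best[0]:
            best = (cost, S, worst_yield)
print("best lots:", best[1], "worst-case yield:", round(best[2], 2),
      "penalty:", round(best[0], 2))
```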
