Similar Documents
20 similar documents found.
1.
A qualitative approach to decision making under uncertainty has been proposed in the setting of possibility theory, based on the assumption that levels of certainty and levels of priority (for expressing preferences) are commensurate. In this setting, pessimistic and optimistic decision criteria have been formally justified. This approach has been transposed into possibilistic logic, in which the available knowledge is described by formulas that are more or less certainly true and the goals are described in a separate prioritized base. This paper adapts the possibilistic logic handling of qualitative decision making under uncertainty to the Answer Set Programming (ASP) setting. We show how weighted beliefs and prioritized preferences belonging to two separate knowledge bases can be handled in ASP by modeling qualitative decision making in terms of abductive logic programming, where (uncertain) knowledge about the world and prioritized preferences are encoded as possibilistic definite logic programs and possibilistic literals, respectively. We provide ASP-based and possibilistic ASP-based algorithms for calculating optimal decisions and utility values according to the possibilistic decision criteria. We describe a prototype that implements the proposed algorithms on top of different ASP solvers, and we discuss the complexity of the different implementations.
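For intuition, the two criteria this line of work builds on can be computed directly over a finite set of worlds: the pessimistic utility of a decision d is the minimum over worlds w of max(1 - pi_d(w), mu(w)), and the optimistic utility is the maximum over worlds of min(pi_d(w), mu(w)). Below is a minimal Python sketch of these criteria only, not the paper's ASP encoding; the worlds, possibility degrees, and preference levels are illustrative assumptions.

```python
# A minimal sketch (not the paper's ASP encoding) of the qualitative
# possibilistic decision criteria, assuming a finite set of worlds, a
# possibility distribution pi[d][w] induced by decision d, and a
# qualitative preference level mu[w] per world, all on a scale in [0, 1].

def pessimistic_utility(pi_d, mu):
    # U_pes(d) = min over worlds w of max(1 - pi_d(w), mu(w))
    return min(max(1 - pi_d[w], mu[w]) for w in pi_d)

def optimistic_utility(pi_d, mu):
    # U_opt(d) = max over worlds w of min(pi_d(w), mu(w))
    return max(min(pi_d[w], mu[w]) for w in pi_d)

# Hypothetical example: two decisions over three worlds.
mu = {"w1": 1.0, "w2": 0.4, "w3": 0.0}          # how satisfactory each world is
pi = {"a": {"w1": 1.0, "w2": 0.6, "w3": 0.2},   # plausibility of worlds under a
      "b": {"w1": 0.3, "w2": 1.0, "w3": 0.7}}   # ... and under b

best_pes = max(pi, key=lambda d: pessimistic_utility(pi[d], mu))
print(best_pes, {d: pessimistic_utility(pi[d], mu) for d in pi})
```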

2.
Models for decision-making under uncertainty use probability distributions to represent variables whose values are unknown when the decisions are to be made. Often the distributions are estimated with observed data. Sometimes these variables depend on the decisions but the dependence is ignored in the decision maker's model; that is, the decision maker models these variables as having an exogenous probability distribution independent of the decisions, whereas the probability distribution of the variables actually depends on the decisions. It has been shown in the context of revenue management problems that such modeling error can lead to systematic deterioration of decisions as the decision maker attempts to refine the estimates with observed data. Many questions remain to be addressed. Motivated by revenue management, newsvendor, and a number of other problems, we consider a setting in which the optimal decision for the decision maker's model is given by a particular quantile of the estimated distribution, and the empirical distribution is used as the estimator. We give conditions under which the estimation and control process converges, and show that although in the limit the decision maker's model appears to be consistent with the observed data, the modeling error can cause the limit decisions to be arbitrarily bad.
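A simplified newsvendor-style sketch of the phenomenon described above, not the paper's exact model: the model-optimal order is the critical-fractile quantile of the estimated demand distribution, but if observations are censored by the order quantity (i.e., actually depend on the decision), the estimate-then-optimize loop drifts away from the true optimum. The costs, demand distribution, and censoring mechanism here are illustrative assumptions.

```python
# Illustrative sketch: the decision maker (wrongly) treats observed sales
# as exogenous demand, while sales are in fact censored at the order level.
import numpy as np

rng = np.random.default_rng(0)
cost, price = 4.0, 10.0
fractile = (price - cost) / price          # critical fractile: optimal quantile

def true_demand(n):
    return rng.gamma(shape=5.0, scale=20.0, size=n)

order = 150.0                              # initial decision
sales_history = []
for t in range(2000):
    d = true_demand(1)[0]
    observed = min(d, order)               # sales are censored by the order quantity
    sales_history.append(observed)
    # re-estimate the quantile from the empirical distribution of observations
    order = np.quantile(sales_history, fractile)

print(f"limit order {order:.1f} vs. true optimal "
      f"{np.quantile(true_demand(100000), fractile):.1f}")
```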

3.
Influence diagrams and decision trees represent the two most common frameworks for specifying and solving decision problems. As modeling languages, both of these frameworks require that the decision analyst specify all possible sequences of observations and decisions (in influence diagrams, this requirement corresponds to the constraint that the decisions be temporally linearly ordered). Recently, the unconstrained influence diagram was proposed to address this drawback. In this framework, we may have a partial ordering of the decisions, and a solution to the decision problem therefore consists not only of a decision policy for the various decisions, but also of a conditional specification of what to do next. Relative to the complexity of solving an influence diagram, finding a solution to an unconstrained influence diagram may be computationally very demanding w.r.t. both time and space. Hence, there is a need for efficient algorithms that can deal with (and take advantage of) the idiosyncrasies of the language. In this paper we propose two such solution algorithms. One resembles the variable elimination technique from influence diagrams, whereas the other is based on conditioning and supports any-space inference. Finally, we present an empirical comparison of the proposed methods.

4.
A fuzzy random forest
When individual classifiers are combined appropriately, a statistically significant increase in classification accuracy is usually obtained. Multiple classifier systems are the result of combining several individual classifiers. Following Breiman’s methodology, this paper proposes a multiple classifier system based on a “forest” of fuzzy decision trees, i.e., a fuzzy random forest. This approach combines the robustness of multiple classifier systems, the power of randomness to increase the diversity of the trees, and the flexibility of fuzzy logic and fuzzy sets for managing imperfect data. Various combination methods to obtain the final decision of the multiple classifier system are proposed and compared. Some of them are weighted combination methods, which weight the decisions of the different elements of the multiple classifier system (leaves or trees). A comparative study with several datasets is made to show the efficiency of the proposed multiple classifier system and the various combination methods. The proposed multiple classifier system exhibits good classification accuracy, comparable to that of the best classifiers, when tested with conventional datasets. However, unlike other classifiers, the proposed classifier provides similar accuracy when tested with imperfect datasets (with missing and fuzzy values) and with noisy datasets.
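As a hedged illustration of the combination step only (not the paper's fuzzy tree induction), the sketch below merges per-tree fuzzy class memberships with a weighted vote; the membership values and tree weights are made-up examples.

```python
# A minimal sketch of a weighted combination method: each tree in the
# forest returns a fuzzy membership degree per class, and tree-level
# decisions are merged by a weighted vote.
import numpy as np

def weighted_forest_decision(memberships, tree_weights):
    """memberships: (n_trees, n_classes) fuzzy degrees in [0, 1];
    tree_weights: (n_trees,) reliability weight for each tree."""
    w = np.asarray(tree_weights, dtype=float)
    scores = (w[:, None] * np.asarray(memberships)).sum(axis=0) / w.sum()
    return int(np.argmax(scores)), scores

# Hypothetical forest of three fuzzy trees over two classes.
memberships = [[0.8, 0.2],
               [0.4, 0.6],
               [0.7, 0.3]]
weights = [1.0, 0.5, 0.8]          # e.g., out-of-bag accuracy per tree
label, scores = weighted_forest_decision(memberships, weights)
print(label, scores)
```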

5.
Managers increasingly face netsourcing decisions of whether and how to outsource selected software applications over the Internet. This paper illustrates the development of a netsourcing decision support system (DSS) that supports the first netsourcing decision: whether or not to netsource. The development follows a five-stage methodology focusing on empirical modeling with internal validation during development. It begins with identifying potential decision criteria from the literature, followed by the collection of empirical data. Logistic regression is then used as a statistical method for selecting relevant decision criteria. Applying the logistic regression analysis to the dataset identifies competitive relevance and strategic vulnerability as the relevant decision criteria. The development concludes with designing a core and a complementary DSS module. The paper critiques the developed DSS and its underlying development methodology. Recommendations for further research are offered.
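A hedged sketch of the criteria-selection step on synthetic data (the study itself used empirical survey data): fit a logistic regression of the binary netsourcing decision on candidate criteria and retain the statistically significant ones. The variable names echo the criteria the paper reports, but the data and significance threshold are illustrative assumptions.

```python
# Selecting relevant decision criteria via logistic regression;
# synthetic data stands in for the empirical dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                  # candidate decision criteria
# assume only the first two criteria actually drive the decision
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
names = ["const", "competitive_relevance", "strategic_vulnerability", "other"]
selected = [nm for nm, p in zip(names, model.pvalues)
            if p < 0.05 and nm != "const"]   # keep significant criteria
print(selected)
```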

6.
In many clinical trials, patients are enrolled and data are collected sequentially, and interim decisions, including which treatment the next patient should receive and whether the trial should be terminated or continued, are based on the accruing data. This naturally leads to the application of Bayesian sequential procedures for trial monitoring. This article discusses the implementation and computational tasks involved in the use of backward induction for making decisions during a clinical trial. An efficient method is presented for storing and retrieving decision tables that represent the decision trees characterizing all possible decisions made when implementing a clinical trial using backward induction. We address the general computational needs, and illustrate the ideas with a specific example of a two-arm trial with a binary outcome and a maximum sample size of 200 patients.
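The sketch below shows backward induction with a memoized decision table for a much simpler one-arm Beta-Bernoulli trial, not the article's two-arm, 200-patient design; the maximum sample size, prior, utilities, and per-patient cost are illustrative assumptions.

```python
# Backward induction over interim states (n patients seen, s successes):
# at each state choose stop or continue, and tabulate the decisions.
from functools import lru_cache

N_MAX = 50          # maximum sample size (illustrative)
A, B = 1.0, 1.0     # Beta prior parameters
COST = 0.01         # per-patient cost

@lru_cache(maxsize=None)
def value(n, s):
    p_hat = (s + A) / (n + A + B)        # posterior mean success rate
    stop = max(p_hat, 1 - p_hat)         # utility of declaring the better action now
    if n == N_MAX:
        return stop, "stop"
    # expected value of enrolling one more patient (posterior predictive)
    cont = -COST + p_hat * value(n + 1, s + 1)[0] \
                 + (1 - p_hat) * value(n + 1, s)[0]
    return (cont, "continue") if cont > stop else (stop, "stop")

# decision table: the recommended action at every reachable interim state
table = {(n, s): value(n, s)[1] for n in range(N_MAX + 1) for s in range(n + 1)}
print(value(0, 0), table[(10, 2)])
```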

7.
Proactive decision making, a concept recently introduced to behavioral operational research and decision analysis, addresses effective decision making during its phase of generating alternatives. It is measured on a scale comprising six dimensions grouped into two categories: proactive personality traits and proactive cognitive skills. Personality traits are grounded on theoretical constructs such as proactive attitude and proactive behavior; cognitive skills reflect value-focused thinking and decision quality. These traits and skills have been used to explain decision satisfaction, although their antecedents and other consequences have not yet been the subject of rigorous hypotheses and testing.

This paper embeds proactive decision making within a model of three possible consequences. We consider—and empirically test—decision satisfaction, general self-efficacy, and life satisfaction by conducting three studies with 1300 participants. We then apply structural equation modeling to show that proactive decision making helps to account for life satisfaction, an explanation mediated by general self-efficacy and decision satisfaction. Thus proactive decision making fosters greater belief in one's abilities and increases satisfaction with one's decisions and with life more generally. These results imply that it is worthwhile to help individuals enhance their decision-making proactivity.

Demonstrating the positive effects of proactive decision making at the individual level underscores how important the phase of generating alternatives is, and it also highlights the merit of employing “decision quality” principles and being proactive during that phase. Hence the findings presented here confirm the relevance of OR, and of decision-analytic principles, to the lives of ordinary people.

8.
We develop a multi-stage stochastic programming model for international portfolio management in a dynamic setting. We model uncertainty in asset prices and exchange rates in terms of scenario trees that reflect the empirical distributions implied by market data. The model takes a holistic view of the problem. It considers portfolio rebalancing decisions over multiple periods in accordance with the contingencies of the scenario tree. The solution jointly determines capital allocations to international markets, the selection of assets within each market, and appropriate currency hedging levels. We investigate the performance of alternative hedging strategies through extensive numerical tests with real market data. We show that appropriate selection of currency forward contracts materially reduces risk in international portfolios. We further find that multi-stage models consistently outperform single-stage models. Our results demonstrate that the stochastic programming framework provides a flexible and effective decision support tool for international portfolio management.
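A deliberately tiny two-stage analogue of this setting (the paper's model is multi-stage, with scenario trees calibrated to market data): first-stage decisions are the allocation to one foreign market and the fraction hedged with a currency forward; second-stage outcomes follow a small scenario set. The scenario values, forward premium, and risk constraint are illustrative assumptions.

```python
# Grid search over (allocation, hedge ratio) under a worst-scenario
# risk constraint; scenarios jointly cover asset return and FX change.
import itertools
import numpy as np

scenarios = [                       # (probability, foreign asset return, FX change)
    (0.3, 0.10, -0.08),
    (0.4, 0.04,  0.01),
    (0.3, -0.05, 0.06),
]
FORWARD_PREMIUM = -0.005            # cost of locking the exchange rate
DOMESTIC_RETURN = 0.02

def expected_return(alloc, hedge):
    total = 0.0
    for p, r_f, fx in scenarios:
        fx_part = (1 - hedge) * fx + hedge * FORWARD_PREMIUM
        total += p * ((1 - alloc) * DOMESTIC_RETURN + alloc * (r_f + fx_part))
    return total

def scenario_risk(alloc, hedge):    # downside: return in the worst scenario
    return min((1 - alloc) * DOMESTIC_RETURN
               + alloc * (r_f + (1 - hedge) * fx + hedge * FORWARD_PREMIUM)
               for _, r_f, fx in scenarios)

grid = np.linspace(0, 1, 21)
best = max(((a, h) for a, h in itertools.product(grid, grid)
            if scenario_risk(a, h) >= -0.02),            # risk constraint
           key=lambda ah: expected_return(*ah))
print(f"alloc={best[0]:.2f}, hedge={best[1]:.2f}, "
      f"E[return]={expected_return(*best):.4f}")
```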

9.
In this paper various ensemble learning methods from machine learning and statistics are considered and applied to the customer choice modeling problem. Ensemble learning usually improves the prediction quality of flexible models such as decision trees. We give experimental results for two real-life marketing datasets using decision trees, ensemble versions of decision trees, and the logistic regression model, which is a standard approach for this problem. The ensemble models are found to improve upon individual decision trees and to outperform logistic regression.
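A hedged sketch in the spirit of the comparison, run on synthetic data rather than the marketing datasets: a single decision tree, a bagged ensemble of trees, and logistic regression, scored by cross-validated accuracy.

```python
# Compare a single tree, an ensemble of trees, and logistic regression.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(),
                                      n_estimators=100, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```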

10.
We consider bi-criteria optimization problems for decision rules and rule systems relative to length and coverage. We study decision tables with many-valued decisions, in which each row is associated with a set of decisions, as well as tables with single-valued decisions, where each row has a single decision. Short rules are more understandable; rules covering more rows are more general. Both of these problems, minimizing the length and maximizing the coverage of rules, are NP-hard. We create dynamic programming algorithms which can find the minimum length and the maximum coverage of rules, and can construct the set of Pareto optimal points for the corresponding bi-criteria optimization problem. This approach is applicable to medium-sized decision tables. Moreover, it allows us to evaluate the quality of various heuristics for decision rule construction which are applicable to relatively big datasets. We can evaluate these heuristics from the point of view of (i) a single criterion: we can compare the length or coverage of rules constructed by the heuristics; and (ii) bi-criteria: we can measure the distance of a point (length, coverage) corresponding to a heuristic from the set of Pareto optimal points. The presented results show that the heuristics that are best from the point of view of bi-criteria optimization are not always the best ones from the point of view of single-criterion optimization.
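As a small illustration of the bi-criteria view (the extraction of Pareto optimal points, not the paper's dynamic programming over decision tables), the sketch below filters candidate rules, each summarized as a (length, coverage) pair, down to the non-dominated set.

```python
# Pareto frontier for rules: minimize length, maximize coverage.

def pareto_points(points):
    """points: iterable of (length, coverage). Returns the non-dominated set."""
    best = {}
    for length, cov in points:            # best coverage achievable at each length
        if cov > best.get(length, -1):
            best[length] = cov
    frontier, max_cov = [], -1
    for length in sorted(best):           # shorter rules first
        if best[length] > max_cov:        # strictly better than any shorter rule
            frontier.append((length, best[length]))
            max_cov = best[length]
    return frontier

# Hypothetical candidate rules as (length, number of rows covered).
print(pareto_points([(1, 3), (2, 3), (2, 7), (3, 7), (3, 9), (4, 9)]))
# -> [(1, 3), (2, 7), (3, 9)]
```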

11.
Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This is particularly critical when the need is for a model that can be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the details this type of modeling requires. However, a complementary method, agent-based modeling, shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine whether brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems. In these situations, it directly influenced managerial decision making and produced substantial cost savings. © 2010 Wiley Periodicals, Inc. Complexity, 2010
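A toy sketch in the spirit of agent-based market modeling, far simpler than the validated P&G model: individual consumers follow a simple buy rule combining price sensitivity and brand loyalty, and market share emerges at the system level. All rules and parameters are illustrative assumptions.

```python
# Individual-level buy rules produce a system-level market share.
import random

random.seed(0)
PRICES = {"X": 2.0, "Y": 1.8}

class Consumer:
    def __init__(self):
        self.price_sensitivity = random.uniform(0.0, 1.0)
        self.last_brand = random.choice(list(PRICES))

    def buy(self):
        def score(brand):
            # loyalty bonus for the brand bought last time, minus price penalty
            loyalty = 0.1 if brand == self.last_brand else 0.0
            return loyalty - self.price_sensitivity * PRICES[brand]
        self.last_brand = max(PRICES, key=score)
        return self.last_brand

consumers = [Consumer() for _ in range(10000)]
for week in range(20):                      # simulate 20 purchase cycles
    purchases = [c.buy() for c in consumers]
    share_x = purchases.count("X") / len(purchases)
print(f"brand X share after 20 weeks: {share_x:.2%}")
```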

12.
This paper discusses the rationale for the use of additive models involving multiple objectives as approximations to normative analyses. Experience has shown us that organizations often evaluate important decisions with multiple objective models rather than reducing all aspects of the problem to a single criterion, dollars, as many normative economic models prescribe. We justify this practice on two grounds: managers often prefer to think about a problem in terms of several dimensions and a multiple objective model may provide an excellent approximation to the more complex normative model. We argue that a useful analysis based on a multiple objective model will fulfill both conditions—it will provide insights for the decision maker as well as a good approximation to the normative model. We report several real-world examples of managers using multiple objective models to approximate such normative models as the risk-adjusted net present value and the value of information models. The agreement between the approximate models and the normative models is shown to be quite good. Next, we cite a portion of the behavioral decision theory literature which establishes that linear models of multiple attributes provide quite robust approximations to individual decision-making processes. We then present more general theoretical and empirical results which support our contention that linear multiple attribute models can provide good approximations to more complex models.

13.
An expert system was desired for a group decision-making process. A highly variable data set from previous groups' decisions was available to simulate past group decisions. This data set has much missing information and contains many possible errors. Classification and regression trees (CART) was selected for rule induction, and compared with multiple linear regression and discriminant analysis. We conclude that CART's decision rules can be used for rule induction. CART uses all available information and can predict observations with missing data. Errors in results from CART compare well with those from multiple linear regression and discriminant analysis. CART results are easier to understand.
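A hedged sketch of this kind of comparison on synthetic data with missing values. One caveat: the original CART handles missing data via surrogate splits, while scikit-learn's CART-style trees lack surrogates, so this sketch pairs both models with simple mean imputation instead.

```python
# Compare a CART-style tree and linear regression on data with 20%
# missing entries; both use mean imputation in place of surrogate splits.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
y = (np.where(X[:, 0] > 0, 2.0, -1.0) + 0.5 * X[:, 1]
     + rng.normal(scale=0.3, size=500))
X[rng.uniform(size=X.shape) < 0.2] = np.nan      # inject missing values

for name, model in [("CART", DecisionTreeRegressor(max_depth=4)),
                    ("linear regression", LinearRegression())]:
    pipe = make_pipeline(SimpleImputer(strategy="mean"), model)
    print(name, cross_val_score(pipe, X, y, cv=5, scoring="r2").mean())
```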

14.
An important aspect of learning is the ability to transfer knowledge to new contexts. However, in dynamic decision tasks, such as bargaining, firefighting, and process control, where decision makers must make repeated decisions under time pressure and outcome feedback may relate to any of a number of decisions, such transfer has proven elusive. This paper proposes a two-stage connectionist model which hypothesizes that decision makers learn to identify categories of evidence requiring similar decisions as they perform in dynamic environments. The model suggests conditions under which decision makers will be able to use this ability to help them in novel situations. These predictions are compared against those of a one-stage decision model that does not learn evidence categories, as is common in many current theories of repeated decision making. Both models' predictions are then tested against the performance of decision makers in an Internet bargaining task. Both models correctly predict aspects of decision makers' learning under different interventions. The two-stage model provides closer fits to decision maker performance in a new, related bargaining task and accounts for important features of higher-performing decision makers' learning. Although frequently omitted in recent accounts of repeated decision making, the processes of evidence category formation described by the two-stage model appear critical to understanding the extent to which decision makers learn from feedback in dynamic tasks.

Faison (Bud) Gibson is an Assistant Professor at the College of Business, Eastern Michigan University. He has extensive experience developing and empirically testing models of decision behavior in dynamic decision environments.

15.
16.
Many strategic decisions in business are made in a context that the decision makers perceive as uncertain, complex, and opaque. A method, based on Rhyne's field anomaly relaxation technique, is described for generating a network of states which characterise the environment or context in which strategic decisions are to be made. These states represent possible future conditions for the business, and knowledge of them allows improved strategic understanding and decision making. This paper describes the method, using a representative real-life application to illustrate the process.

17.
The insurance industry is concerned with many problems of interest to the operational research community. This paper presents a case study involving two such problems and solves them using a variety of techniques within the methodology of data mining. The first of these problems is understanding customer retention patterns by classifying policy holders as likely to renew or terminate their policies. The second is better understanding claim patterns, and identifying the types of policy holders who are more at risk. Each of these problems impacts on decisions relating to premium pricing, which directly affects profitability. A data mining methodology is used which views the knowledge discovery process within a holistic framework utilising hypothesis testing, statistics, clustering, decision trees, and neural networks at various stages. The impacts of the case study on the insurance company are discussed.

18.
Transportation infrastructure, such as pavements and bridges, is critical to a nation’s economy. However, a large share of transportation infrastructure is underperforming or structurally deficient and must be repaired or reconstructed. Maintenance of deteriorating transportation infrastructure often requires multiple types and levels of actions with complex effects. Maintenance management becomes more intricate when facilities are considered at the network level, which poses additional challenges in modeling the interdependencies among facilities. This research considers an integrated budget allocation and preventive maintenance optimization problem for multi-facility deteriorating transportation infrastructure systems. We first develop a general integer programming formulation for this problem. In order to solve large-scale problems, we reformulate the problem and decompose it into multiple Markov decision process models. A priority-based two-stage method is developed to find optimal maintenance decisions. Computational studies are conducted to evaluate the performance of the proposed algorithms. Our results show that the proposed algorithms are efficient and effective in finding satisfactory maintenance decisions for multi-facility systems. We also investigate the properties of the optimal maintenance decisions and make several important observations, which provide helpful decision guidance for real-world problems.
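A minimal single-facility sketch, not the paper's network-level decomposition: a Markov decision process over condition states with do-nothing, repair, and replace actions, solved by value iteration. The transition probabilities and costs are illustrative assumptions.

```python
# Value iteration for one deteriorating facility with three actions.
import numpy as np

STATES = 5                         # 0 = like new, 4 = failed
ACTIONS = {"nothing": 0.0, "repair": 3.0, "replace": 10.0}   # action costs
GAMMA = 0.95
FAIL_COST = 50.0

def next_state_dist(s, a):
    p = np.zeros(STATES)
    if a == "replace":
        p[0] = 1.0                 # back to like-new
    elif a == "repair":
        p[max(s - 1, 0)] = 1.0     # one condition level better
    else:
        p[s] = 0.6                 # deterioration: stay or worsen
        p[min(s + 1, STATES - 1)] += 0.4
    return p

V = np.zeros(STATES)
for _ in range(500):               # value iteration to (near) convergence
    Q = {a: np.array([-(c + (FAIL_COST if s == STATES - 1 else 0.0))
                      + GAMMA * next_state_dist(s, a) @ V
                      for s in range(STATES)])
         for a, c in ACTIONS.items()}
    V = np.maximum.reduce(list(Q.values()))

policy = {s: max(Q, key=lambda a: Q[a][s]) for s in range(STATES)}
print(policy)
```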

19.
This paper considers model uncertainty for multistage stochastic programs. The data and information structure of the baseline model is a tree, on which the decision problem is defined. We consider “ambiguity neighborhoods” around this tree as alternative models which are close to the baseline model. Closeness is defined in terms of a distance for probability trees, called the nested distance. This distance is appropriate for scenario models of multistage stochastic optimization problems, as was demonstrated in Pflug and Pichler (SIAM J Optim 22:1–23, 2012). The ambiguity model is formulated as a minimax problem, in which the optimal decision to be found is the one that minimizes the maximal objective function within the ambiguity set. We give a setup for studying saddle point properties of the minimax problem. Moreover, we present solution algorithms for finding the minimax decisions, at least asymptotically. As an example, we consider a multiperiod stochastic production/inventory control problem with weekly ordering. The stochastic scenario process is given by the random demands for two products. We determine the minimax solution and identify the worst trees within the ambiguity set. It turns out that the probability weights of the worst-case trees are concentrated on a few very bad scenarios.
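A finite, single-stage toy version of the ambiguity-averse step (none of the nested-distance machinery): given a few candidate probability models inside the ambiguity set, choose the order quantity that minimizes the worst-case expected cost across the models. Scenario values and costs are illustrative assumptions.

```python
# Minimax ordering over a finite ambiguity set of probability models.
import numpy as np

demand_scenarios = np.array([80.0, 100.0, 120.0])
models = {                                  # candidate weights on the scenarios
    "baseline":   np.array([0.3, 0.4, 0.3]),
    "perturbed1": np.array([0.5, 0.3, 0.2]),
    "perturbed2": np.array([0.2, 0.3, 0.5]),
}
HOLD, SHORT = 1.0, 4.0                      # overage / underage unit costs

def expected_cost(q, probs):
    over = np.maximum(q - demand_scenarios, 0.0)
    short = np.maximum(demand_scenarios - q, 0.0)
    return probs @ (HOLD * over + SHORT * short)

candidates = np.arange(80, 121)
worst = {q: max(expected_cost(q, p) for p in models.values())
         for q in candidates}
q_star = min(worst, key=worst.get)          # minimax order quantity
worst_model = max(models, key=lambda m: expected_cost(q_star, models[m]))
print(q_star, round(worst[q_star], 2), worst_model)
```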

20.
The decision-tree algorithm provides one of the most popular methodologies for symbolic knowledge acquisition. The resulting knowledge, a symbolic decision tree along with a simple inference mechanism, has been praised for its comprehensibility. The most comprehensible decision trees have been designed for perfect symbolic data. Over the years, additional methodologies have been investigated and proposed to deal with continuous or multi-valued data, and with missing or noisy features. Recently, with the growing popularity of fuzzy representation, some researchers have proposed utilizing fuzzy representation in decision trees to deal with similar situations. This paper presents a survey of current methods for Fuzzy Decision Tree (FDT) design and the various existing issues. After considering the potential advantages of FDT classifiers over traditional decision tree classifiers, we discuss the main aspects of FDTs, including attribute selection criteria, inference for decision assignment, and stopping criteria. To the best of our knowledge, this is the first overview of fuzzy decision tree classifiers.
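As a concrete illustration of inference for decision assignment, here is one common scheme among those such surveys cover (not the single definitive one): an example fires each path to the degree given by its fuzzy membership values, and class scores are accumulated across leaves. The membership functions and leaf weights below are illustrative assumptions.

```python
# Fuzzy decision tree inference: accumulate class scores over all leaves,
# each weighted by the example's firing degree on that leaf's path.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c], peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical one-level tree on attribute "temp" with two fuzzy terms.
leaves = [
    {"term": lambda t: triangular(t, 0, 10, 20),
     "class_weights": {"no": 0.9, "yes": 0.1}},
    {"term": lambda t: triangular(t, 10, 20, 30),
     "class_weights": {"no": 0.3, "yes": 0.7}},
]

def classify(temp):
    scores = {"no": 0.0, "yes": 0.0}
    for leaf in leaves:
        fire = leaf["term"](temp)          # firing degree of the path
        for c, w in leaf["class_weights"].items():
            scores[c] += fire * w
    return max(scores, key=scores.get), scores

print(classify(17.0))
```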
