Similar documents
Found 20 similar documents (search time: 468 ms)
1.
Models for decision-making under uncertainty use probability distributions to represent variables whose values are unknown when the decisions are to be made. Often the distributions are estimated from observed data. Sometimes these variables depend on the decisions but the dependence is ignored in the decision maker's model; that is, the decision maker models these variables as having an exogenous probability distribution independent of the decisions, whereas the probability distribution of the variables actually depends on the decisions. It has been shown in the context of revenue management problems that such modeling error can lead to systematic deterioration of decisions as the decision maker attempts to refine the estimates with observed data. Many questions remain to be addressed. Motivated by revenue management, newsvendor, and a number of other problems, we consider a setting in which the optimal decision for the decision maker's model is given by a particular quantile of the estimated distribution, and the empirical distribution is used as the estimator. We give conditions under which the estimation and control process converges, and show that although in the limit the decision maker's model appears to be consistent with the observed data, the modeling error can cause the limit decisions to be arbitrarily bad.
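A minimal simulation sketch of this phenomenon, under one hypothetical mechanism (demand censored at the order quantity, so the observations depend on the decision); the distribution, critical ratio, and starting point are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
cr = 0.8                                  # critical ratio: optimal q is the 0.8-quantile
true_q = -100.0 * np.log(1.0 - cr)        # 0.8-quantile of Exp(mean 100), about 160.9

q, sales = 120.0, []
for t in range(5000):
    demand = rng.exponential(100.0)
    sales.append(min(demand, q))          # censored observation: depends on the decision q
    q = np.quantile(sales, cr)            # naive empirical-quantile decision, censoring ignored

print(f"limit decision ~ {q:.1f}, true optimal quantile ~ {true_q:.1f}")
# The process settles at a self-confirming decision: the empirical quantile of the
# censored data reproduces q, so the model looks consistent with the observations,
# yet q stays capped at or below its starting point. A lower starting point yields
# an arbitrarily worse limit, never the true quantile.
```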

2.
We consider a problem of decision under uncertainty with outcomes distributed over time. We propose a rough set model based on a combination of time dominance and stochastic dominance. For the sake of simplicity we consider the case of a traditional additive probability distribution over the set of states of the world; however, we show that the model is rich enough to handle non-additive probability distributions, and even qualitative ordinal distributions. The rough set approach gives a representation of the decision maker's time-dependent preferences under uncertainty in terms of "if…, then…" decision rules induced from rough approximations of sets of exemplary decisions.
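The stochastic-dominance ingredient of such a model can be illustrated with a short check; the outcomes and probabilities below are hypothetical:

```python
import numpy as np

def fsd(outcomes, p, q):
    """First-order stochastic dominance: p dominates q iff the CDF of p lies
    at or below the CDF of q at every outcome level (outcomes in any order)."""
    order = np.argsort(outcomes)
    Fp = np.cumsum(np.asarray(p, float)[order])
    Fq = np.cumsum(np.asarray(q, float)[order])
    return bool(np.all(Fp <= Fq + 1e-12))

outcomes = [0, 50, 100]          # monetary outcomes
act_a = [0.1, 0.3, 0.6]          # probabilities over outcomes for act A
act_b = [0.3, 0.4, 0.3]          # probabilities over outcomes for act B
print(fsd(outcomes, act_a, act_b))   # True: A stochastically dominates B
```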

3.
We investigate the operational decisions and resulting profits for a supply chain facing price-dependent demand under a policy with an ex-ante commitment on the retail price markup. We obtain closed-form solutions for this policy under the assumption of a multiplicative demand function, and we analytically compare its performance with that of a traditional price-only policy. We then compare these results with those obtained when demand follows a linear additive form. The two formulations are shown to be qualitatively different: the manufacturer's wholesale pricing decision is independent of the retail price markup commitment in the multiplicative case, but not when demand is linear additive. We demonstrate that the ex-ante commitment can lead to Pareto-improving solutions under linear additive demand, but not under the multiplicative demand function. We also consider the effect of pricing power in the supply chain by varying who determines the retail price markup.

4.
Rough set-based data analysis starts from a data table, called an information system. The information system contains data about objects of interest characterized in terms of some attributes. Often we distinguish condition and decision attributes in the information system; such an information system is called a decision table. The decision table describes decisions in terms of the conditions that must be satisfied in order to carry out the decision specified in the decision table. With every decision table a set of decision rules, called a decision algorithm, can be associated. It is shown that every decision algorithm reveals some well-known probabilistic properties; in particular, it satisfies the total probability theorem and Bayes' theorem. These properties give a new method of drawing conclusions from data, without referring to the prior and posterior probabilities inherently associated with Bayesian reasoning.
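A toy decision table illustrating the certainty and coverage factors of decision rules and the total probability theorem; the condition and decision values are hypothetical:

```python
from collections import Counter

# Toy decision table: (condition, decision) pairs, e.g. a symptom -> a diagnosis.
rows = [("high", "yes"), ("high", "yes"), ("high", "no"),
        ("low", "no"), ("low", "no"), ("low", "yes")]

n = len(rows)
cond = Counter(c for c, _ in rows)     # counts of each condition class
dec = Counter(d for _, d in rows)      # counts of each decision class
joint = Counter(rows)                  # counts of each rule C -> D

# Certainty factor of rule C -> D is P(D|C); coverage factor is P(C|D).
for (c, d), k in sorted(joint.items()):
    print(f"rule {c} -> {d}: certainty={k / cond[c]:.2f}, coverage={k / dec[d]:.2f}")

# Total probability theorem: P(D) = sum over C of P(D|C) * P(C).
for d in dec:
    total = sum((joint[(c, d)] / cond[c]) * (cond[c] / n) for c in cond)
    assert abs(total - dec[d] / n) < 1e-12
print("total probability theorem holds for every decision class")
```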

5.
Decision-makers usually face model/parameter risk and may prefer to act prudently by identifying optimal contracts that are robust to such sources of uncertainty. In this paper, we tackle this issue under a finite uncertainty set that contains a number of probability models that are candidates for the "true" but unknown model. Various robust optimisation models are proposed, some of which are already known in the literature, and we show that all of them can be efficiently solved via Second Order Conic Programming (SOCP). Numerical experiments are run for various risk preference choices, and it is found that for relatively large sample sizes, the modeler should focus on finding the best possible fit for the unknown probability model in order to achieve the most robust decision. If only small samples are available, the modeler should instead consider two robust optimisation models, namely the Weighted Average Model or the Weighted Worst-case Model, rather than focusing on statistical tools that aim to estimate the probability model. Between these two, the better choice of robust optimisation model depends on how much weight the modeler puts on tail risk when defining the objective function. These findings suggest that one should be careful when robust optimal decisions are sought: the modeler should first understand the features of the objective function and the size of the available data, and then decide whether robust optimisation or statistical inference is the more practical approach.
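For the worst-case flavour of such models, a sketch under simplifying assumptions: with a linear objective, the worst case over a finite model set reduces to a small LP rather than an SOCP (the SOCP arises for the risk measures treated in the paper); all data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Choose weights x over 3 positions to minimise the worst expected loss
# across K candidate probability models over 3 scenarios.
L = np.array([[1.0, 2.0, 0.5],        # loss of each position under each scenario
              [0.5, 1.0, 2.0],
              [2.0, 0.5, 1.0]])
models = np.array([[0.5, 0.3, 0.2],   # candidate probability models over scenarios
                   [0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7]])

A_k = models @ L                      # per-model expected loss of each position
K, n = A_k.shape
# min_x max_k A_k x  ->  min t  s.t.  A_k x - t <= 0, sum x = 1, x >= 0.
c = np.r_[np.zeros(n), 1.0]           # variables (x, t); minimise t
A_ub = np.c_[A_k, -np.ones(K)]
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(K),
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
print("robust weights:", res.x[:n].round(3),
      "worst-case expected loss:", round(res.x[-1], 3))
```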

6.
We consider a discrete-time constrained Markov decision process under the discounted cost optimality criterion. The state and action spaces are assumed to be Borel spaces, while the cost and constraint functions might be unbounded. We are interested in numerically approximating the optimal discounted constrained cost. To this end, we suppose that the transition kernel of the Markov decision process is absolutely continuous with respect to some probability measure μ. Then, by solving the linear programming formulation of a constrained control problem related to the empirical probability measure μ_n of μ, we obtain the corresponding approximation of the optimal constrained cost. We derive a concentration inequality which bounds the probability that the estimation error is larger than some given constant; this bound is shown to decrease exponentially in n. Our theoretical results are illustrated with a numerical application based on a stochastic version of the Beverton–Holt population model.
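A finite toy stand-in for the linear programming formulation, assuming a two-state, two-action MDP with hypothetical costs (the paper's Borel-space and empirical-measure machinery is not reproduced); the LP variables are the discounted occupation measures ρ(s, a):

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.95
S, A = 2, 2
P = np.zeros((S, A, S))                         # P[s, a, s']: transition kernel
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.4, 0.6]
P[1, 0] = [0.3, 0.7]; P[1, 1] = [0.8, 0.2]
cost = np.array([[1.0, 4.0], [3.0, 0.5]])       # running cost c(s, a)
dcost = np.array([[0.0, 1.0], [1.0, 0.0]])      # constraint cost d(s, a), budget b
b = 4.0
mu0 = np.array([1.0, 0.0])                      # initial distribution

# Balance constraints: sum_a rho(s', a) - gamma * sum_{s,a} P[s,a,s'] rho(s,a) = mu0(s').
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(sp == s) - gamma * P[s, a, sp]

res = linprog(cost.ravel(),
              A_ub=[dcost.ravel()], b_ub=[b],    # expected discounted d-cost <= b
              A_eq=A_eq, b_eq=mu0,
              bounds=[(0, None)] * (S * A))
print("optimal constrained discounted cost:", round(res.fun, 3))
```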

7.
A travel-time model for a person-onboard order picking system (cited by 1; 0 self-citations, 1 by others)
The design of an order picking system in a distribution center depends on several decisions, a key one being the optimal storage system configuration (the number, length, and height of the storage aisles). To make this decision, a throughput model that considers vertical as well as horizontal travel is needed. In this paper we extend prior research that considers horizontal travel for a given number and length of storage aisles, so that we can also consider the height of the aisles. Such a model provides a more accurate estimate of the throughput of an order picker and also permits an examination of the tradeoff between the length and height of the aisles. The analytical model we develop to estimate throughput is based on probability models and order statistics results assuming random storage. It is intended for person-onboard order picking systems, and we consider both Tchebychev and rectilinear travel. We illustrate the use of our travel-time model by incorporating it into a simple, cost-based optimization model to recommend the height of a one-pallet-deep storage system.
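A Monte Carlo sketch of the travel-time ingredient, assuming random storage, picks visited in the order generated, tours starting and ending at the aisle front, and hypothetical aisle dimensions; the paper's analytical order-statistics model is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_travel(length, height, picks, metric="chebyshev", n=20_000):
    """Estimate expected travel per pick tour: pick locations uniform in an
    aisle face of size length x height, visited in sequence from the front."""
    pts = rng.uniform([0.0, 0.0], [length, height], size=(n, picks, 2))
    depot = np.zeros((n, 1, 2))                           # aisle front at (0, 0)
    tours = np.concatenate([depot, pts, depot], axis=1)
    steps = np.abs(np.diff(tours, axis=1))
    if metric == "chebyshev":
        d = steps.max(axis=2)        # crane moves in both axes simultaneously
    else:
        d = steps.sum(axis=2)        # rectilinear: one axis at a time
    return d.sum(axis=1).mean()

print("Tchebychev :", round(expected_travel(60.0, 10.0, picks=4), 2))
print("rectilinear:", round(expected_travel(60.0, 10.0, picks=4, metric="rect"), 2))
```

Taller aisles lengthen Chebyshev tours only when the vertical leg dominates, which is the length/height tradeoff the model is meant to expose.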

8.
In this paper we introduce a simple decision rule that a single product firm may use for filing for a price change to offset variations of the marginal cost. We consider a regulatory body whose response to the price change request involves a time delay with an exponential distribution. Two possibilities regarding the response of the regulatory body are considered. In one case it is assumed to be a binary approval process in which the rate adjustment is either approved in its entirety or rejected. In the second case we consider a partial approval process with a more general distribution. Decision rules for each case are developed. Finally we derive a multi-stage decision rule in which filing decisions are continuously updated based on temporal variations of the cost function. The multi-stage pricing decision model assumes that marginal cost escalation satisfies a Markovian jump process.This work was completed while the authors were with Bell Laboratories, USA.  相似文献   
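A stylised one-shot version of the binary-approval case: with delay T ~ Exp(λ) and discount rate r, E[e^(-rT)] = λ/(λ+r), so a perpetual price increase Δ filed now has expected present value (Δ/r)·λ/(λ+r). All names and numbers are hypothetical:

```python
# File for a price increase `delta` iff its expected discounted value, net of
# the regulatory delay, exceeds the filing cost. Delay T ~ Exp(lam).
def file_for_increase(delta, lam, r, filing_cost):
    expected_pv = (delta / r) * lam / (lam + r)   # E[ integral_T^inf delta e^{-rt} dt ]
    return expected_pv > filing_cost

print(file_for_increase(delta=2.0, lam=4.0, r=0.1, filing_cost=15.0))  # True: 19.51 > 15
```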

9.

In this study, we consider two classes of multicriteria two-stage stochastic programs in finite probability spaces with multivariate risk constraints. The first-stage problem features multivariate stochastic benchmarking constraints based on a vector-valued random variable representing multiple and possibly conflicting stochastic performance measures associated with the second-stage decisions. In particular, the aim is to ensure that the decision-based random outcome vector of interest is preferable to a specified benchmark with respect to the multivariate polyhedral conditional value-at-risk or a multivariate stochastic order relation. In this case, the classical decomposition methods cannot be used directly due to the complicating multivariate stochastic benchmarking constraints. We propose an exact unified decomposition framework for solving these two classes of optimization problems and show its finite convergence. We apply the proposed approach to a stochastic network design problem in the context of pre-disaster humanitarian logistics and conduct a computational study concerning the threat of hurricanes in the Southeastern part of the United States. The numerical results provide practical insights about our modeling approach and show that the proposed algorithm is computationally scalable.


10.
In many decision situations, such as hiring a secretary, selling an asset, or seeking a job, the value of each offer, applicant, or choice is assumed to be an independent, identically distributed random variable. In this paper, we consider a special case where the observations are auto-correlated, as in the random walk model for stock prices. For a given random walk process of n observations, we explicitly compute the probability that the j-th observation in the sequence is the maximum or minimum among all n observations. Based on the probability distribution of the rank, we derive several distribution-free selection strategies under which the decision maker's expected utility of selecting the best choice is maximized. We show that, unlike in the classical secretary problem, evaluating more choices in the random walk process does not increase the likelihood of successfully selecting the best.
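A Monte Carlo stand-in for the rank probabilities the paper computes explicitly, assuming Gaussian steps (a hypothetical choice that avoids ties):

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_jth_is_max(n, trials=200_000):
    """Estimate P(the j-th observation is the maximum) for a random walk of n steps."""
    walks = rng.normal(size=(trials, n)).cumsum(axis=1)
    return np.bincount(walks.argmax(axis=1), minlength=n) / trials

print(prob_jth_is_max(10).round(3))
# U-shaped: the earliest and latest observations are the most likely maxima
# (an arcsine-type law), so unlike the classical secretary problem, waiting
# to observe more of the sequence need not help.
```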

11.
Multivariate Gaussian criteria in SMAA (cited by 2; 0 self-citations, 2 by others)
We consider stochastic multicriteria decision-making problems with multiple decision makers. In such problems, the uncertainty or inaccuracy of the criteria measurements and the partial or missing preference information can be represented through probability distributions. In many real-life problems the uncertainties of the criteria measurements are dependent. However, it is often difficult to quantify these dependencies, and most existing methods are unable to handle such dependency information. In this paper, we develop a method for handling dependent uncertainties in stochastic multicriteria group decision-making problems. We measure the criteria, their uncertainties, and their dependencies using a stochastic simulation model based on decision variables and stochastic parameters with given distributions. From the simulation results, we determine a joint probability distribution for the criteria measurements that quantifies the uncertainties and their dependencies. We then use the SMAA-2 stochastic multicriteria acceptability analysis method to compare the alternatives based on the criteria distributions. We demonstrate the use of the method in the context of a strategic decision support model for a retailer operating in the liberalized European electricity market.
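A sketch of the SMAA-style computation, assuming hypothetical criteria means, a hypothetical covariance for the dependent uncertainties, and uniform weights on the simplex (SMAA-2 additionally exploits partial preference information, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)

means = np.array([[5.0, 3.0],          # alternatives x criteria (to be maximised)
                  [4.0, 4.5],
                  [3.0, 5.0]])
cov = np.array([[1.0, 0.6],            # correlated measurement uncertainty
                [0.6, 1.0]])

iters, m = 10_000, len(means)
acceptability = np.zeros((m, m))       # acceptability[i, r]: alternative i, rank r
for _ in range(iters):
    crit = means + rng.multivariate_normal(np.zeros(2), cov, size=m)
    w = rng.dirichlet([1.0, 1.0])      # weight vector uniform on the simplex
    ranks = np.argsort(-(crit @ w))    # best alternative first
    for r, alt in enumerate(ranks):
        acceptability[alt, r] += 1

print((acceptability / iters).round(3))   # row i: P(alternative i attains each rank)
```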

12.
In this paper we consider the problem of controlling the arrival of customers into a GI/M/1 service station. It is known that when the decisions controlling the system are made only at arrival epochs, the optimal acceptance strategy is of a control-limit type, i.e., an arrival is accepted if and only if fewer than n customers are present in the system. The question is whether exercising conditional acceptance can further increase the expected long-run average profit of a firm operating the system. To reveal the relevance of conditional acceptance, we consider an extension of the control-limit rule in which the n-th customer is conditionally admitted to the queue. This customer may later be rejected if neither a service completion nor an arrival has occurred within a given time period since the last arrival epoch. We model the system as a semi-Markov decision process and develop conditions under which such a policy is preferable to the simple control-limit rule.
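As a baseline, the simple control-limit rule can be evaluated in closed form in the M/M/1 special case (the GI/M/1 setting and the conditional-acceptance extension of the paper are not reproduced); the reward and holding-cost figures are hypothetical:

```python
import numpy as np

def avg_profit(lam, mu, n, reward, hold):
    """Long-run average profit of the control-limit rule: accept an arrival
    iff fewer than n customers are present, i.e. an M/M/1/n queue."""
    rho = lam / mu
    pi = rho ** np.arange(n + 1)
    pi /= pi.sum()                        # stationary distribution of M/M/1/n
    throughput = lam * (1 - pi[-1])       # rate of accepted (hence served) customers
    mean_number = (np.arange(n + 1) * pi).sum()
    return reward * throughput - hold * mean_number

best = max(range(1, 15), key=lambda n: avg_profit(0.8, 1.0, n, reward=5.0, hold=1.0))
print("best control limit:", best,
      "profit:", round(avg_profit(0.8, 1.0, best, 5.0, 1.0), 3))
```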

13.
Probability constraints play a key role in optimization problems involving uncertainties. These constraints require that an inequality system depending on a random vector be satisfied with a sufficiently high probability. In specific settings, copulæ can be used to model probabilistic constraints with uncertainty on the left-hand side. In this paper, we provide eventual convexity results for the feasible set of decisions under local generalized concavity properties of the constraint mappings and the copulæ involved. The results cover all Archimedean copulæ. We consider probabilistic constraints wherein the decision and random vector are separated, i.e. left/right-hand side uncertainty. In order to solve the underlying optimization problem, we propose and analyse the convergence of a regularized supporting hyperplane method: a stabilized variant of generalized Benders decomposition. The algorithm is tested on a large set of instances involving several copulæ, among them the Gaussian copula. A numerical comparison with a (pure) supporting hyperplane algorithm and a general-purpose solver for non-linear optimization is also presented.
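A sketch of evaluating such a constraint through an Archimedean copula, assuming a Clayton copula, exponential marginals, and a hypothetical separable mapping g_i(x) = x_i:

```python
import numpy as np

def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def constraint_prob(x, scale=(1.0, 2.0)):
    """P(xi_1 <= g_1(x), xi_2 <= g_2(x)) with exponential marginals joined
    by the Clayton copula, assuming the toy mapping g_i(x) = x_i."""
    u = 1.0 - np.exp(-x[0] / scale[0])    # marginal CDF values F_i(g_i(x))
    v = 1.0 - np.exp(-x[1] / scale[1])
    return clayton(u, v)

x = np.array([2.5, 4.0])
print(round(constraint_prob(x), 3), constraint_prob(x) >= 0.8)  # ~0.810, True
```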

14.
The combination of mathematical models and uncertainty measures can be applied in data mining for diverse objectives, with the ultimate aim of supporting decision making. The maximum entropy function is an excellent measure of uncertainty when the information is represented by a mathematical model based on imprecise probabilities. In this paper, we present algorithms to obtain the maximum entropy value when the available information is represented by a new model based on imprecise probabilities: the nonparametric predictive inference model for multinomial data (NPI-M), which leads to a type of entropy-linear program. To reduce the complexity of the model, we prove that the NPI-M lower and upper probabilities for any general event can be expressed as a combination of the lower and upper probabilities for the singleton events, and that this model cannot be associated with a closed polyhedral set of probabilities. An algorithm to obtain the maximum entropy probability distribution on the set associated with NPI-M is presented. We also consider a model which uses the closed and convex set of probability distributions generated by the NPI-M singleton probabilities, a closed polyhedral set; we call this model A-NPI-M. A-NPI-M can be seen as an approximation of NPI-M, this approximation being simpler to use because it is not necessary to consider the set of constraints associated with the exact model.
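A sketch of the maximum entropy computation over a polyhedral credal set in the spirit of A-NPI-M, assuming hypothetical lower/upper bounds on the singleton probabilities (not the actual NPI-M bounds):

```python
import numpy as np
from scipy.optimize import minimize

# Credal set: p_i in [lower_i, upper_i], sum p_i = 1 (a closed polyhedral set).
lower = np.array([0.10, 0.05, 0.20, 0.00])
upper = np.array([0.50, 0.40, 0.60, 0.30])

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))   # minimise negative entropy

res = minimize(neg_entropy, x0=(lower + upper) / 2,
               bounds=list(zip(lower, upper)),
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print(res.x.round(4), "entropy:", round(-res.fun, 4))
# Here the uniform distribution (0.25 each) is feasible, so the solver
# recovers it with entropy ln 4 ~ 1.3863.
```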

15.
This paper considers model uncertainty for multistage stochastic programs. The data and information structure of the baseline model is a tree, on which the decision problem is defined. We consider "ambiguity neighborhoods" around this tree as alternative models which are close to the baseline model. Closeness is defined in terms of a distance for probability trees, called the nested distance. This distance is appropriate for scenario models of multistage stochastic optimization problems, as was demonstrated in Pflug and Pichler (SIAM J Optim 22:1–23, 2012). The ambiguity model is formulated as a minimax problem, in which the optimal decision is the one that minimizes the maximal objective function within the ambiguity set. We give a setup for studying saddle point properties of the minimax problem. Moreover, we present solution algorithms for finding the minimax decisions, at least asymptotically. As an example, we consider a multiperiod stochastic production/inventory control problem with weekly ordering. The stochastic scenario process is given by the random demands for two products. We determine the minimax solution and identify the worst trees within the ambiguity set. It turns out that the probability weights of the worst-case trees are concentrated on a few very bad scenarios.

16.
Since the Age of Enlightenment, most philosophers have associated reasoning with the rules of probability and logic. This association has been enhanced over the years and now incorporates the theory of fuzzy logic as a complement to probability theory, leading to the concept of fuzzy probability. Our insight here is to integrate the concept of validity into the notion of fuzzy probability within an extended fuzzy logic (FLe) framework, in keeping with the notion of collective intelligence. In this regard, we propose a novel framework of possibility–probability–validity distribution (PPVD). The proposed distribution is applied to a real-world setting of actual judicial cases to examine the role of validity measures in automated judicial decision-making within a fuzzy probabilistic framework. We compute the valid fuzzy probability of conviction and acquittal based on different factors. This determines a possible overall hypothesis for the decision of a case, which is valid only to a degree. Validity is computed by aggregating the validities of all the involved factors, which are obtained from a factor vocabulary based on empirical data. We then map the combined validity, based on the Jaccard similarity measure, into linguistic forms so that a human can understand the results. The PPVDs obtained from the relevant factors in a given case then yield the final valid fuzzy probabilities for conviction and acquittal. Finally, the judge has to make a decision, so we also provide a numerical measure. Our approach supports the proposed hypothesis within the three-dimensional context of probability, possibility, and validity, improving the ability to solve problems with incomplete, unreliable, or ambiguous information and to deliver a more reliable decision.

17.
We study the effect of additional information on the quality of decisions. We define the extreme case of complete information about probabilities as our reference scenario, in which decision makers (DMs) can use expected utility theory to identify the best alternative. Starting from the worst case, where DMs have no information at all about probabilities, we construct a method of steadily increasing the information by systematically limiting the ranges of the probabilities. In our simulation-based study, we measure the effects of this increase in information using different forms of relative volumes. We define these as the relative volumes of the gradually narrowing areas which lead to the same (or a similar) decision as with the probability in the reference scenario; the relative volumes thus account for the quality of information. Combining the quantity and quality of information, we find decreasing returns to scale on information; in other words, the costs of gathering additional information increase with the level of information. Moreover, we show that having more available alternatives influences the decision process negatively. Finally, we analyze the quality of decisions in processes where more states of nature are considered, and find that this degree of complexity in the decision process also has a negative influence on the quality of decisions.
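A sketch of the relative-volume measurement, assuming a hypothetical payoff table and reference probabilities, with probability vectors sampled uniformly from the narrowed ranges and renormalised (a simplification of the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(4)

utilities = np.array([[10.0, 2.0, 4.0],    # alternatives x states-of-nature payoffs
                      [6.0, 6.0, 5.0],
                      [3.0, 9.0, 6.0]])
p_ref = np.array([0.5, 0.2, 0.3])          # "true" probabilities (reference scenario)
ref_choice = np.argmax(utilities @ p_ref)

def relative_volume(width, n=100_000):
    """Fraction of probability vectors within ranges of the given width that
    still select the reference alternative under expected utility."""
    lo = np.clip(p_ref - width / 2, 0.0, 1.0)
    hi = np.clip(p_ref + width / 2, 0.0, 1.0)
    p = rng.uniform(lo, hi, size=(n, 3))
    p /= p.sum(axis=1, keepdims=True)      # project back onto the simplex
    return np.mean((utilities @ p.T).argmax(axis=0) == ref_choice)

for w in (0.8, 0.4, 0.2, 0.1):             # narrowing ranges = more information
    print(w, round(relative_volume(w), 3))  # relative volume rises toward 1
```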

18.
We apply four alternative decision criteria, two old and two new, to the question of the appropriate level of greenhouse gas emission reduction. In all cases, we consider a uniform carbon tax that is applied to all emissions from all sectors and all countries, and that increases over time with the discount rate. For a one per cent pure rate of time preference and a rate of risk aversion of one, the tax that maximises expected net present welfare equals $120/tC in 2010. However, we also find evidence that the uncertainty about welfare may well have fat tails, so that the sample mean exists only by virtue of the finite number of runs in our Monte Carlo analysis. This is consistent with Weitzman's Dismal Theorem. We therefore consider minimax regret as a decision criterion. As regret is defined on the positive real line, we in fact consider large percentiles instead of the ill-defined maximum. Depending on the percentile used, the recommended tax lies between $100 and $170/tC. Regret is a measure of the slope of the welfare function, whereas we are in fact concerned about the level of welfare. We therefore minimise the tail risk, defined as the expected welfare below a percentile of the probability density function without climate policy. Depending on the percentile used, the recommended tax lies between $20 and $330/tC. We also minimise the fatness of the tails, as measured by the p-value of the test of the null hypothesis that recursive mean welfare is non-stationary in the number of Monte Carlo runs. We cannot reject the null hypothesis of non-stationarity at the 5% confidence level, but come closest for an initial tax of $50/tC. All four alternative decision criteria improve rapidly as modest taxes are introduced, but gradually deteriorate if the tax is too high. This implies that the appropriate tax is an interior solution. In stark contrast to some interpretations of the Dismal Theorem, we find that fat tails by no means justify arbitrarily large carbon taxes.
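A stylised sketch of the percentile-based minimax regret criterion, assuming a toy welfare function whose optimal tax location is uncertain; the functional form and all numbers are invented for illustration and bear no relation to the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)

taxes = np.linspace(0.0, 400.0, 81)
theta = rng.lognormal(mean=np.log(120.0), sigma=0.6, size=10_000)  # uncertain optimum

# Toy assumption: welfare peaks at theta and falls off quadratically in the tax.
W = -((taxes[None, :] - theta[:, None]) / 100.0) ** 2

regret = W.max(axis=1, keepdims=True) - W     # per-run regret of each candidate tax
p99 = np.percentile(regret, 99, axis=0)       # large percentile, not the ill-defined max
print("percentile-minimax-regret tax:", taxes[np.argmin(p99)])
```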

19.
Practically all organizations seek to create value by selecting and executing portfolios of actions that consume resources. Typically, the resulting value is uncertain, and thus organizations must take decisions based on ex ante estimates about what this future value will be. In this paper, we show that the Bayesian modeling of uncertainties in this selection problem serves to (i) increase the expected future value of the selected portfolio, (ii) raise the expected number of selected actions that belong to the optimal portfolio ex post, and (iii) eliminate the expected gap between the realized ex post portfolio value and the estimated ex ante portfolio value. We also propose a new project performance measure, defined as the probability that a given action belongs to the optimal portfolio. Finally, we provide analytic results to determine which actions should be re-evaluated to obtain more accurate value estimates before portfolio selection. In particular, we show that the optimal targeting of such re-evaluations can yield a much higher portfolio value in return for the total resources that are spent on the execution of actions and the acquisition of value estimates.
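A sketch of effects (i) and (iii) in a conjugate normal model with hypothetical prior and noise parameters: selecting by raw estimates inflates the ex ante value of the selected portfolio, while selecting by posterior means removes the expected gap:

```python
import numpy as np

rng = np.random.default_rng(6)

m, k, trials = 50, 10, 2_000
mu, tau, sigma = 10.0, 2.0, 3.0            # prior mean/sd of true values, noise sd
shrink = tau ** 2 / (tau ** 2 + sigma ** 2)

gap_raw, gap_bayes = [], []
for _ in range(trials):
    v = rng.normal(mu, tau, m)             # true action values
    y = v + rng.normal(0.0, sigma, m)      # ex ante value estimates
    post = mu + shrink * (y - mu)          # posterior means (normal-normal model)
    for est, gaps in ((y, gap_raw), (post, gap_bayes)):
        pick = np.argsort(est)[-k:]        # select the k apparently best actions
        gaps.append(est[pick].sum() - v[pick].sum())   # estimated minus realised value

print("mean ex ante/ex post gap, raw estimates:   ", round(np.mean(gap_raw), 2))
print("mean gap, Bayesian posterior means (near 0):", round(np.mean(gap_bayes), 2))
```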

20.
PROMETHEE is a powerful method that can solve many multiple criteria decision making (MCDM) problems. It involves sophisticated preference modelling techniques but requires a great deal of precise a priori information about parameter values (such as criterion weights and thresholds). In this paper, we consider an MCDM problem where alternatives are evaluated on several conflicting criteria, and the criterion weights and/or thresholds are imprecise or unknown to the decision maker (DM). We build robust outranking relations among the alternatives in order to help the DM rank the alternatives and select the best one. We propose interactive approaches based on the PROMETHEE method, and we develop a decision aid tool called INTOUR, which implements the proposed approaches.
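For reference, a minimal sketch of the underlying PROMETHEE II computation (linear preference function with indifference and preference thresholds, net outranking flows); the robust and interactive extensions the paper proposes go beyond this, and all data below are hypothetical:

```python
import numpy as np

scores = np.array([[80.0, 7.0, 3.0],      # alternatives x criteria (all to maximise)
                   [65.0, 9.0, 5.0],
                   [70.0, 8.0, 1.0]])
weights = np.array([0.5, 0.3, 0.2])
q = np.array([5.0, 0.5, 0.5])             # indifference thresholds
p = np.array([15.0, 2.0, 2.0])            # preference thresholds

m = len(scores)
d = scores[:, None, :] - scores[None, :, :]         # pairwise differences d(a, b)
pref = np.clip((d - q) / (p - q), 0.0, 1.0)         # linear preference degree per criterion
pi = (pref * weights).sum(axis=2)                   # aggregated preference pi(a, b)
phi = (pi.sum(axis=1) - pi.sum(axis=0)) / (m - 1)   # net flow: leaving minus entering
print("net flows:", phi.round(3), "ranking (best first):", np.argsort(-phi))
```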

