Similar Documents
20 similar documents found
1.
We discuss the use of monotonic set measures for the representation of uncertain information. We look at some important examples of measure-based uncertainty, specifically probability, possibility, and necessity. Other types of uncertainty, such as cardinality-based and quasi-additive measures, are also discussed. We consider the problem of determining the representative value of a variable whose uncertain value is formalized using a monotonic set measure. We note the central role that averaging, and particularly weighted averaging, operations play in obtaining these representative values. We investigate the use of various integrals, such as the Choquet and Sugeno integrals, for obtaining these required averages. We suggest ways of extending a measure defined on a set to the case of fuzzy sets and the power set of the original set. We briefly consider the problem of question answering under uncertain knowledge.
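As an illustration of how a representative value can be obtained from a monotone measure, the following is a minimal sketch (not taken from the paper) of the discrete Choquet integral of a function with respect to a monotone set measure; the measure in the example is a hypothetical possibility-like one supplied only for demonstration.

# Discrete Choquet integral of f with respect to a monotone measure mu.
# mu maps frozensets of elements to [0, 1], with mu(empty) = 0 and mu(X) = 1.

def choquet_integral(f, mu):
    """f: dict element -> value; mu: callable on frozensets of elements."""
    elems = sorted(f, key=f.get)              # order elements by increasing value
    total, prev = 0.0, 0.0
    for i, e in enumerate(elems):
        upper = frozenset(elems[i:])          # elements whose value >= f(e)
        total += (f[e] - prev) * mu(upper)    # layer-cake accumulation
        prev = f[e]
    return total

# Hypothetical possibility-like measure: mu(A) = max of the element weights in A.
weights = {"a": 1.0, "b": 0.6, "c": 0.3}
mu = lambda A: max((weights[x] for x in A), default=0.0)
f = {"a": 2.0, "b": 5.0, "c": 9.0}
print(choquet_integral(f, mu))   # 5.0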

2.
On the Evaluation of Uncertain Courses of Action
We consider the problem of decision making under uncertainty. The fuzzy measure is introduced as a general way of representing available information about the uncertainty. It is noted that, generally, in uncertain environments the problem of comparing alternative courses of action is difficult because of the multiplicity of possible outcomes for any action. One approach is to convert this multiplicity of possible outcomes associated with an alternative into a single value using a valuation function. We describe various ways of providing a valuation function when the uncertainty is represented using a fuzzy measure. We then specialize these valuation functions to the cases of probabilistic and possibilistic uncertainty.

3.
A fundamental task in decision-making is the determination, in the face of uncertain information, of the satisfaction of some criteria in terms of a scalar value. Our objective here is to help support this task. We first discuss the process of selecting an uncertainty model for our knowledge; here we emphasize the tradeoff between the functionality of the representation and its ability to model our knowledge (cointension). We next discuss the process of scalarization, determining a single value to represent some uncertain value. Some features required of operations used for scalarization are introduced. We look at the scalarization procedures used in probability theory, the expected value, and that used in possibility theory. We then turn to a more general framework for the representation of uncertain information based on a set measure.
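To make the two standard scalarizations concrete, the following minimal sketch (illustrative, not from the paper) computes the probabilistic expected value and one common optimistic possibilistic valuation, assuming the payoffs have been rescaled to [0, 1] so that they are commensurate with the possibility degrees.

# Scalarizing an uncertain payoff under two uncertainty models.

def expected_value(payoffs, probabilities):
    """Probabilistic scalarization: the expected value."""
    return sum(p * v for v, p in zip(payoffs, probabilities))

def optimistic_possibilistic_value(payoffs01, possibilities):
    """One common possibilistic scalarization (optimistic, Sugeno-style);
    assumes payoffs are already rescaled to [0, 1]."""
    return max(min(pi, v) for v, pi in zip(payoffs01, possibilities))

payoffs = [0.2, 0.5, 0.9]          # payoffs rescaled to the unit interval
probabilities = [0.5, 0.3, 0.2]    # a probability distribution over outcomes
possibilities = [1.0, 0.7, 0.4]    # a normalized possibility distribution

print(expected_value(payoffs, probabilities))                  # 0.43
print(optimistic_possibilistic_value(payoffs, possibilities))  # 0.5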

4.
Adjustable robust optimization (ARO) generally produces better worst-case solutions than static robust optimization (RO). However, ARO is computationally more difficult than RO. In this paper, we provide conditions under which the worst-case objective values of ARO and RO problems are equal. We prove that when the uncertainty is constraint-wise, the problem is convex in the adjustable variables and concave in the uncertain parameters, the adjustable variables lie in a convex and compact set, and the uncertainty set is convex and compact, robust solutions are also optimal for the corresponding ARO problem. Furthermore, we prove that if some of the uncertain parameters are constraint-wise and the rest are not, then under a similar set of assumptions there is an optimal decision rule for the ARO problem that does not depend on the constraint-wise uncertain parameters. We also show, for a class of problems, that using affine decision rules that depend on all of the uncertain parameters yields the same optimal objective value as when the rules depend solely on the non-constraint-wise uncertain parameters. Finally, we illustrate the usefulness of these results by applying them to convex quadratic and conic quadratic problems.

5.
We introduce a new model for robust combinatorial optimization where the uncertain parameters belong to the image of multifunctions of the problem variables. In particular, we study variable budgeted uncertainty, an extension of the budgeted uncertainty introduced by Bertsimas and Sim. Variable budgeted uncertainty can provide the same probabilistic guarantee as budgeted uncertainty while being less conservative for vectors with few non-zero components. The feasibility set of the resulting optimization problem is in general non-convex, so we propose a mixed-integer programming reformulation for the problem, based on the dualization technique often used in robust linear programming. We show how to extend these results to non-binary variables and to more general multifunctions involving uncertainty sets defined by conic constraints that are affine in the problem variables. We present a computational comparison of budgeted uncertainty and variable budgeted uncertainty on the robust knapsack problem. The experiments show a reduction of the price of robustness by an average of 18%.
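A minimal sketch of the classical (non-variable) budgeted uncertainty of Bertsimas and Sim, evaluating the worst-case weight of a fixed knapsack selection; the data are hypothetical, and in the paper's variable model the budget would itself depend on the decision vector rather than being a constant, which is what makes it less conservative for sparse solutions.

# Worst-case total weight of a fixed selection under budgeted (Gamma) uncertainty:
# at most gamma of the selected items deviate from their nominal weight, each by
# at most its deviation d_i, and the adversary picks the gamma largest deviations.

def worst_case_weight(selected, nominal, deviation, gamma):
    """selected: indices of chosen items; gamma: integer uncertainty budget."""
    base = sum(nominal[i] for i in selected)
    worst_devs = sorted((deviation[i] for i in selected), reverse=True)[:gamma]
    return base + sum(worst_devs)

nominal   = [4.0, 3.0, 6.0, 2.0]   # nominal item weights (hypothetical data)
deviation = [1.0, 0.5, 2.0, 0.2]   # maximum deviations
selection = [0, 2, 3]              # items placed in the knapsack

print(worst_case_weight(selection, nominal, deviation, gamma=2))  # 12.0 + 3.0 = 15.0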

6.
Fuzzy Sets and Systems, 2004, 142(1): 129-142
Valuation functions are used in decision making under uncertainty to enable comparisons of alternatives. They are based on a weighted averaging of the n possible payoffs available under an alternative. The weighting vectors used are a reflection of the decision-making agent's strength of belief that a given outcome will occur. Our concern is with developing methods to fuse multiple sources of these weighting vectors. We first suggest a method based on a normalized product. Some methods are suggested for handling completely conflicting beliefs. We abstract the basic features of this product fusion method. Particularly notable among these properties is the fact that a source with all weights equal to 1/n acts as an identity in the fusion process. We next consider a fusion method using a uninorm aggregation operator with identity 1/n. We carefully look at this new type of method for multi-source fusion and suggest some generalizations and modifications. Finally, we consider the situation when the contributing sources have differing credibilities.
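A minimal sketch (illustrative, not necessarily the authors' exact formulation) of normalized-product fusion of weighting vectors; note how a uniform vector of 1/n leaves the other source unchanged, which is the identity property mentioned above.

# Fuse weighting vectors by a componentwise product followed by normalization.

def product_fusion(*weight_vectors):
    fused = [1.0] * len(weight_vectors[0])
    for w in weight_vectors:
        fused = [f * wi for f, wi in zip(fused, w)]
    total = sum(fused)
    if total == 0.0:
        raise ValueError("completely conflicting sources: product is zero everywhere")
    return [f / total for f in fused]

w1 = [0.6, 0.3, 0.1]
uniform = [1/3, 1/3, 1/3]           # acts as an identity under this fusion
print(product_fusion(w1, uniform))  # recovers [0.6, 0.3, 0.1]
w2 = [0.2, 0.2, 0.6]
print(product_fusion(w1, w2))       # [0.5, 0.25, 0.25]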

7.
This paper addresses the problem of modeling of expert knowledge as a starting point for inference analysis in uncertain knowledge-based systems. The experts' opinions in a given problem are viewed as additional information in cognitive decision processes. Depending upon which uncertainty measures are used in expert knowledge representation, different inferential engines will be proposed. The flow from data to decisions will be examined in order to help the design of intelligent systems. In considering various types of uncertainty measures, the problem of admissibility will be addressed.

8.
A method for measuring the value of information in those fields where the meaning of messages is important is discussed. At present there exists no accepted measure of information in such problem areas, the classical unit of bits flowing per second being unacceptable. It is shown that where two sets of phenomena are associated in some way with a given set of probabilities, e.g. a population and the crimes which are committed within it, the problem reduces to how much “choice” exists. This enables the concept of entropy to be used to advantage.

After developing a measure of the value of information in the general case, the paper applies the method to the investigation of a case of simple larceny. This case illustrates several interesting features. Perhaps the most important feature is that some pieces of information, although they are very important to the police, only change the level of uncertainty very slightly. In such cases the piece of information generally demands that the police perform an action which either may provide useful information or lead to further action. To overcome this problem a potential entropy change is defined which takes this factor into account.

It is hoped that the work may lead to a fuller understanding of how information flows in the police network. Thus it may be possible to see if the right information is getting into the police system, getting lost inside it or whether it is being used most efficiently.
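A minimal sketch (illustrative, not from the paper) of the entropy-based idea: the value of a piece of information can be taken as the reduction in Shannon entropy of the suspect distribution once the information is incorporated. The distributions below are hypothetical.

import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical example: uncertainty over four suspects before and after a new clue.
prior     = [0.25, 0.25, 0.25, 0.25]
posterior = [0.70, 0.20, 0.05, 0.05]

value_of_information = entropy(prior) - entropy(posterior)
print(round(value_of_information, 3))   # entropy drops from 2.0 bits to about 1.26 bits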

9.
In this paper ordinary stochastic differential equations whose coefficients depend on uncertain parameters are considered. An approach is presented for combining both types of uncertainty (stochastic excitation and parameter uncertainty), leading to set-valued stochastic processes. The latter serve as a robust representation of solutions of the underlying stochastic differential equations. The mathematical concept is applied to a problem from earthquake engineering, where it is shown how the efficiency of Tuned Mass Dampers can be realistically assessed in the presence of uncertainty.

10.
In optimization, it is common to deal with uncertain and inaccurate factors which make it difficult to assign a single value to each parameter in the model. It may be more suitable to assign a set of values to each uncertain parameter. A scenario is defined as a realization of the uncertain parameters. In this context, a robust solution has to be as good as possible on a majority of scenarios and never be too bad. Such a characterization admits numerous possible interpretations and therefore gives rise to various approaches to robustness. These approaches differ from each other depending on the models used to represent the uncertain factors, on the methodology used to measure robustness, and finally on the analysis and design of solution methods. In this paper, we focus on the application of a recent criterion to the shortest path problem with uncertain arc lengths. We first present two usual uncertainty models: the interval model and the discrete scenario set model. For each model, we then apply a criterion, called bw-robustness (originally proposed by B. Roy), which defines a new measure of robustness. For each uncertainty model, we propose a formulation in terms of a large-scale integer linear program. Furthermore, we analyze the theoretical complexity of the resulting problems. Our computational experiments are performed on a set of large-scale graphs. The results show that established solvers, e.g. Cplex, are able to solve the proposed mathematical models, which is promising for robustness analysis. Finally, we show that our formulations can be applied to general linear programs in which the objective function includes uncertain coefficients.
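The following minimal sketch (illustrative only; it evaluates the simple worst-case criterion rather than bw-robustness itself) shows how a fixed path can be assessed under the two uncertainty models mentioned above: interval arc lengths and a discrete set of scenarios. The graph data are hypothetical.

# Worst-case length of a fixed path under two uncertainty models.

def worst_case_interval(path_edges, intervals):
    """Interval model: each arc takes its upper bound in the worst case."""
    return sum(intervals[e][1] for e in path_edges)

def worst_case_scenarios(path_edges, scenarios):
    """Discrete scenario model: take the maximum length over all scenarios."""
    return max(sum(lengths[e] for e in path_edges) for lengths in scenarios)

path = [("s", "a"), ("a", "t")]
intervals = {("s", "a"): (2, 5), ("a", "t"): (1, 4), ("s", "t"): (6, 12)}
scenarios = [
    {("s", "a"): 3, ("a", "t"): 2, ("s", "t"): 7},
    {("s", "a"): 5, ("a", "t"): 4, ("s", "t"): 6},
]

print(worst_case_interval(path, intervals))    # 9
print(worst_case_scenarios(path, scenarios))   # 9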

11.
In this study, we start from a multi-source variant of the two-stage capacitated facility location problem (TSCFLP) and propose a robust optimization model of the problem that involves the uncertainty of transportation costs. Since large instances of the robust TSCFLP cannot be solved to optimality, we design a memetic algorithm (MA), which combines an evolutionary algorithm (EA) with a modified simulated annealing heuristic (SA) that uses a short-term memory of undesirable moves from previous iterations. A set of computational experiments is conducted to examine the impact of different protection levels on the deviation of the objective function value. We also investigate the impact of variations of transportation costs that may occur on both transhipment stages on the total cost for a fixed protection level. The obtained results may help in identifying a sustainable and efficient strategy for designing a two-stage capacitated transportation network with uncertain transportation costs, and may be applicable in the design and management of similar transportation networks.

12.
This paper considers a classical portfolio optimization problem with second-order stochastic dominance constraints: the objective is to maximize the expected return while using second-order stochastic dominance constraints to measure risk, requiring that the portfolio return dominates a predetermined reference (benchmark) return in the second-order stochastic sense. Unlike traditional second-order stochastic dominance portfolio optimization models, this paper considers uncertain asset returns whose exact probability distribution is unknown but belongs to a given uncertainty set. A robust second-order stochastic dominance portfolio optimization model is built, and, drawing on robust optimization theory, the corresponding robust counterpart is derived. Finally, using real data from the S&P 500 stock market, the model is analyzed with respect to the optimal portfolio weights under different training-sample sizes and uncertainty sets, and with respect to the influence of in-sample and out-of-sample uncertain parameters on the expected return. The results show that investment strategies based on the most recent historical data achieve higher out-of-sample expected returns and are therefore more informative for future investment. While guaranteeing the optimality of the in-sample solution, the model also attains high out-of-sample expected returns and a high likelihood that the stochastic dominance constraints are satisfied.
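A minimal sketch (illustrative, not the paper's robust model) of checking empirical second-order stochastic dominance of a portfolio return sample over a benchmark sample, using the shortfall criterion E[(t − R)+]; for a discretely distributed benchmark it suffices to check thresholds at the benchmark's realizations. The return samples below are hypothetical.

# Empirical second-order stochastic dominance check: does portfolio return sample R
# dominate benchmark sample B?  Uses the shortfall criterion E[(t - X)+], checking
# thresholds t at the benchmark realizations.

def expected_shortfall_below(t, sample):
    return sum(max(t - x, 0.0) for x in sample) / len(sample)

def ssd_dominates(portfolio_returns, benchmark_returns, tol=1e-12):
    return all(
        expected_shortfall_below(t, portfolio_returns)
        <= expected_shortfall_below(t, benchmark_returns) + tol
        for t in benchmark_returns
    )

# Hypothetical equiprobable return scenarios.
benchmark = [-0.04, 0.00, 0.02, 0.05]
portfolio = [-0.02, 0.01, 0.02, 0.06]
print(ssd_dominates(portfolio, benchmark))   # True: the portfolio's shortfalls are never larger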

13.
Possibilistic networks and possibilistic logic are two standard frameworks of interest for representing uncertain pieces of knowledge. Possibilistic networks exhibit relationships between variables, while possibilistic logic ranks logical formulas according to their level of certainty. For multiply connected networks, it is well known that the inference process is a hard problem. This paper studies a new representation of possibilistic networks called hybrid possibilistic networks, which results from combining the two semantically equivalent types of standard representation. We first present a propagation algorithm for hybrid possibilistic networks. This inference algorithm is strictly more efficient than the standard propagation algorithm, as confirmed by experimental studies.

14.
Managers of projects and multi-project programs often face considerable uncertainty in the duration and outcomes of specific tasks, as well as in the overall level of resources required by tasks. They must decide, in these uncertain conditions, how to allocate and manage scarce resources across many projects that have competing needs. This paper develops a nonlinear mixed-integer programming model for optimizing the resource allocations to individual tasks to minimize the completion times of a collection of projects. The model contains a very flexible representation of the effects of changing resource allocations on the probability distribution of task duration, so it can accommodate a wide variety of practical situations. A heuristic solution procedure is proposed that works quite effectively. An illustration involving a collection of bridge construction projects is provided.

15.
The distribution center location problem is concerned with how to select distribution centers from a potential set so as to minimize the total relevant cost, comprising the fixed costs of the distribution centers and the transport costs, while also minimizing the transportation time. In this paper, we propose a multi-objective network optimization model with random fuzzy coefficients for the logistics distribution center location problem. Furthermore, we convert the uncertain model into a deterministic one by means of probability and possibility measures. A spanning tree-based genetic algorithm (st-GA) using the Prüfer number representation is then introduced to solve the crisp multi-objective program. Finally, the proposed model and algorithm are applied to the Xinxi Dairy Holdings Limited Company to demonstrate their efficiency.
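The Prüfer number encoding mentioned above represents a spanning tree on n labelled nodes as a sequence of n − 2 node labels, which is convenient for genetic operators. Below is a minimal, generic sketch (not the paper's st-GA) of decoding a Prüfer sequence back into the edges of a spanning tree.

def prufer_to_tree(prufer, n):
    """Decode a Prüfer sequence (length n-2, labels 1..n) into spanning-tree edges."""
    degree = {v: 1 for v in range(1, n + 1)}
    for v in prufer:
        degree[v] += 1
    edges = []
    for v in prufer:
        # attach the smallest-labelled current leaf to v
        leaf = min(u for u in degree if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    # two nodes of degree 1 remain; they form the last edge
    u, w = [x for x in degree if degree[x] == 1]
    edges.append((u, w))
    return edges

print(prufer_to_tree([4, 4, 4, 5], 6))   # [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]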

16.
Optimization, 2012, 61(7): 1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problems as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed-point iterative algorithm and establish the convergence of the algorithm. Finally, we present some preliminary results from our computational experiments with the method.

17.
Dokka, Trivikram; Goerigk, Marc; Roy, Rahul. Optimization Letters, 2020, 14(6): 1323-1337

In robust optimization, the uncertainty set is used to model all possible outcomes of uncertain parameters. In the classic setting, one assumes that this set is provided by the decision maker based on the data available to her. Only recently has it been recognized that the process of building useful uncertainty sets is in itself a challenging task that requires mathematical support. In this paper, we propose an approach that goes beyond the classic setting by assuming that multiple uncertainty sets are prepared, each with a weight expressing the degree of belief that the set is a “true” model of uncertainty. We consider theoretical aspects of this approach and show that it is as easy to model as the classic setting. In an extensive computational study using a shortest path problem based on real-world data, we auto-tune uncertainty sets to the available data, and show that with regard to out-of-sample performance, the combination of multiple sets can give better results than each set on its own.
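As a rough, hedged illustration of the multiple-uncertainty-set idea (one plausible way to use the belief weights, not necessarily the authors' exact formulation), the sketch below scores a fixed solution by the belief-weighted average of its worst-case costs over several hypothetical uncertainty sets, each given as a finite list of cost scenarios.

# Score a fixed solution under several uncertainty sets, each carrying a belief weight.
# Each uncertainty set is approximated here by a finite list of cost-vector scenarios.

def worst_case_cost(solution, scenarios):
    return max(sum(c * x for c, x in zip(cost, solution)) for cost in scenarios)

def weighted_robust_score(solution, weighted_sets):
    """weighted_sets: list of (weight, scenarios); weights are degrees of belief."""
    total_weight = sum(w for w, _ in weighted_sets)
    return sum(w * worst_case_cost(solution, s) for w, s in weighted_sets) / total_weight

solution = [1, 0, 1]                           # e.g. arcs chosen in a path
set_a = [[3, 5, 2], [4, 5, 1]]                 # hypothetical scenario lists
set_b = [[2, 6, 4], [3, 6, 3]]
print(weighted_robust_score(solution, [(0.7, set_a), (0.3, set_b)]))   # 5.3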


18.
We describe the basic ideas of the theory of approximate reasoning and indicate how it provides a framework for representing human-sourced soft information. We discuss how to translate linguistic knowledge into formal representations using generalized constraints. We consider the inference process within the theory of approximate reasoning, introduce the entailment principle, and describe its centrality to this inference process. Next we introduce the idea of doubly uncertain statements such as “John's friend is young.” In these statements there is uncertainty both with respect to the value of the age, young, and with respect to the object associated with the age, John's friend. We suggest a method for representing these complex statements and investigate the problem of making inferences about specific objects.

19.
We consider linear programming problems with uncertain objective function coefficients. For each coefficient of the objective function, an interval of uncertainty is known, and it is assumed that any coefficient can take on any value from the corresponding interval of uncertainty, regardless of the values taken by other coefficients. It is required to find a minmax regret solution. This problem has received considerable attention in the recent literature, but its computational complexity status remained unknown. We prove that the problem is strongly NP-hard. This gives the first known example of a minmax regret optimization problem that is NP-hard in the case of interval-data representation of uncertainty but is polynomially solvable in the case of discrete-scenario representation of uncertainty.
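To make the regret notion concrete, here is a minimal sketch (illustrative only; the paper shows the full minmax regret problem is strongly NP-hard) that computes the maximum regret of one fixed feasible solution. Since the regret c·x − min_y c·y is convex in c, its maximum over the interval box is attained at an extreme cost vector, so tiny instances can be handled by enumeration; the instance data and SciPy usage below are assumptions for illustration.

# Maximum regret of a fixed feasible solution x over box (interval) uncertainty in c.

from itertools import product
import numpy as np
from scipy.optimize import linprog

def max_regret(x, c_low, c_high, A_ub, b_ub):
    worst = float("-inf")
    for choice in product(*zip(c_low, c_high)):        # all extreme cost vectors
        c = np.array(choice)
        best = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
        worst = max(worst, c @ np.array(x) - best.fun)  # regret under this scenario
    return worst

# Hypothetical tiny instance: minimize c @ y subject to y1 + y2 >= 1, y >= 0.
A_ub = [[-1.0, -1.0]]       # -y1 - y2 <= -1
b_ub = [-1.0]
c_low, c_high = [1.0, 1.0], [3.0, 2.0]
print(max_regret([1.0, 0.0], c_low, c_high, A_ub, b_ub))   # 2.0: regret of choosing y = (1, 0)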

20.
One of the open problems in the field of forward uncertainty quantification (UQ) is the ability to form accurate assessments of uncertainty having only incomplete information about the distribution of random inputs. Another challenge is to efficiently make use of limited training data for UQ predictions of complex engineering problems, particularly with high-dimensional random parameters. We address these challenges by combining data-driven polynomial chaos expansions with a recently developed preconditioned sparse approximation approach for UQ problems. The first task in this two-step process is to employ the procedure developed in [1] to construct an "arbitrary" polynomial chaos expansion basis using a finite number of statistical moments of the random inputs. The second step is a novel procedure to effect sparse approximation via ℓ1 minimization in order to quantify the forward uncertainty. To enhance the performance of the preconditioned ℓ1 minimization problem, we sample from the so-called induced distribution, instead of using Monte Carlo (MC) sampling from the original, unknown probability measure. We demonstrate on test problems that induced sampling is a competitive and often better choice compared with sampling from asymptotically optimal measures (such as the equilibrium measure) when we have incomplete information about the distribution. We demonstrate the capacity of the proposed induced sampling algorithm via sparse representation with limited data on test functions, and on a Kirchhoff plate bending problem with random Young's modulus.
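A generic sketch of sparse polynomial chaos recovery from limited data. It uses a fixed 1-D Legendre basis and scikit-learn's Lasso solver as stand-ins for the moment-based ("arbitrary") basis, the preconditioning, and the induced sampling described in the abstract; the target model and all numbers are hypothetical.

# Sparse polynomial chaos recovery: fit a sparse coefficient vector in a 1-D Legendre
# polynomial basis from few samples via an l1-regularized least-squares solver.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def model(xi):
    # Hypothetical sparse target: constant, linear, and quadratic Legendre modes.
    return 1.0 + 0.8 * xi + 0.3 * (3 * xi**2 - 1) / 2

degree, n_samples = 10, 25                     # few samples, larger candidate basis
xi = rng.uniform(-1.0, 1.0, n_samples)         # samples of the random input
y = model(xi)

Psi = np.polynomial.legendre.legvander(xi, degree)   # measurement (design) matrix
fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(Psi, y)

print(np.round(fit.coef_, 3))                  # recovered, mostly-zero PC coefficients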
