Similar documents
 20 similar documents found.
1.
Testing Preference Coordination in Group Decision Making (Cited by: 2; self: 0, other: 2)
For a class of group decision-making problems, this paper introduces a preference-coordination index for the group and gives a statistical test for this index. On this basis, a method for solving this class of group decision problems is proposed, and the relationship between the group's preference-coordination index and the group decision result is discussed.

2.
This paper studies a basic mathematical problem in group decision making based on satisfactory choice: a necessary and sufficient condition is given and proved for any group satisfactory preference mapping of the group over the set of alternatives to be a majority satisfactory preference rule.

3.
Weighted Group Decision Making Based on Concept Lattices in FAHP (Cited by: 2; self: 0, other: 2)
In some group decision problems the decision makers carry different decision weights because of differences in individual experience, ability, and authority. By grafting concept-lattice clustering onto group decision making, a weighted group decision model within FAHP is proposed, and an example of applying the model is given.

4.
For multi-attribute, multi-grade large-group decision problems, a decision method based on evidential reasoning is proposed. First, the evaluation information that the participating decision makers give for each alternative on every attribute is converted into a probability distribution over that attribute's assessment grades. The evidential reasoning approach is then used to aggregate the group evaluation information, given in distribution form for the different attributes, into group-level comprehensive evaluation information expressed as a distribution over the overall assessment grades; on this basis a utility value is computed for each alternative and the alternatives are ranked accordingly. Finally, an example illustrates the feasibility and effectiveness of the proposed method, which offers a new and practically useful route to large-group decision problems.
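As an illustration of this kind of aggregation, the following is a minimal sketch of a simplified evidential-reasoning (ER) style combination for a single alternative. The attribute weights, belief degrees, and grade utilities are hypothetical, and the sketch does not split the unassigned mass the way the full ER algorithm does; the paper's exact formulation may differ.

```python
# Minimal sketch of a simplified evidential-reasoning (ER) style aggregation
# for one alternative. All numbers (weights, belief degrees, grade utilities)
# are hypothetical; the paper's exact ER formulation may differ.

def er_aggregate(beliefs, weights):
    """Combine per-attribute belief distributions over N grades.

    beliefs : beliefs[i][n] = belief degree of attribute i in grade n
    weights : attribute weights, assumed to sum to 1
    Returns the aggregated belief distribution over the N grades.
    """
    n_grades = len(beliefs[0])
    # Basic probability masses for the first attribute.
    m = [weights[0] * b for b in beliefs[0]]
    m_h = 1.0 - sum(m)                      # mass left unassigned to any grade
    for i in range(1, len(beliefs)):
        mi = [weights[i] * b for b in beliefs[i]]
        mi_h = 1.0 - sum(mi)
        # Conflict normalization factor.
        conflict = sum(m[t] * mi[j] for t in range(n_grades)
                       for j in range(n_grades) if j != t)
        k = 1.0 / (1.0 - conflict)
        m = [k * (m[n] * mi[n] + m[n] * mi_h + m_h * mi[n]) for n in range(n_grades)]
        m_h = k * m_h * mi_h
    # Redistribute the remaining unassigned mass proportionally.
    return [mn / (1.0 - m_h) for mn in m]

if __name__ == "__main__":
    # Two attributes, three grades (poor / average / good), hypothetical data.
    beliefs = [[0.1, 0.3, 0.6], [0.2, 0.5, 0.3]]
    weights = [0.4, 0.6]
    agg = er_aggregate(beliefs, weights)
    utilities = [0.0, 0.5, 1.0]             # hypothetical grade utilities
    score = sum(b * u for b, u in zip(agg, utilities))
    print(agg, score)
```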

5.
In group decision processes, random noise has a non-negligible effect on the evolution of individual and group rationality. For the evolution of group decision rationality under random perturbations, and under the assumption that individual and group rationality interact additively, a stochastic dynamical model of the evolution of individual and group rationality under random noise is built. Using the theory of stochastic differential equations, the existence and uniqueness of the model's solution is proved; under the assumption that the random perturbation is white noise, an analytical solution is obtained, and the influence of two key parameters of group decision making, the rationality-interaction strength and the noise strength, on the evolution of group rationality is discussed. The theoretical analysis shows that the evolution of group decision rationality depends on the relative magnitude of these two parameters and that there are three distinct evolution paths. Numerical examples verify the reasonableness and effectiveness of the model and give an intuitive illustration of the evolution of group rationality, which is of theoretical and practical value for understanding group decision behaviour and improving decision quality.
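The paper's model is not reproduced here. As a hedged illustration of the kind of linear stochastic differential equation that admits an explicit solution under white noise, and of how an interaction-strength parameter a and a noise-strength parameter σ enter, consider the following sketch.

```latex
% Illustrative linear SDE (not the paper's exact model): rationality level X_t
% with interaction strength a > 0 and white-noise strength \sigma.
\begin{align}
  \mathrm{d}X_t &= -a\,X_t\,\mathrm{d}t + \sigma\,\mathrm{d}W_t, \qquad X_0 = x_0, \\
  X_t &= x_0\, e^{-at} + \sigma \int_0^t e^{-a(t-s)}\,\mathrm{d}W_s,
\end{align}
% so that E[X_t] = x_0 e^{-at} and Var[X_t] = \frac{\sigma^2}{2a}\bigl(1 - e^{-2at}\bigr);
% the long-run behaviour is governed by the relative sizes of a and \sigma.
```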

6.
A group decision problem is determined by two factors: the group decision rule and the ballots. Once the rule is fixed, a group decision problem is completely determined by the ballots, so problems and ballot profiles are in one-to-one correspondence. The simple majority rule is a simple and widely used group decision rule, but it has a defect: one can exhibit group decision problems for which the simple majority rule cannot produce a final group decision from the ballots. This paper gives an interesting property of the simple majority rule: with 3 alternatives, the ratio of the number of n-voter group decision problems for which the simple majority rule fails to produce a final result to the number of all n-voter group decision problems tends to zero as the number of voters n tends to infinity. This shows that in large group decisions with 3 alternatives, the defect of the simple majority rule is not serious.
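As an illustration of the failure the paper refers to, the following minimal sketch checks whether simple majority over three alternatives yields a complete, tie-free and cycle-free group ranking for a given profile of ballots. The example profile is the classic Condorcet cycle and is purely illustrative; the paper's counting argument is not reproduced.

```python
# Minimal sketch: check whether simple majority over 3 alternatives yields a
# transitive (cycle-free, tie-free) group ranking for a given profile.
from itertools import permutations, combinations

def majority_result(ballots, alternatives=("A", "B", "C")):
    """Return the majority ranking, or None if a tie or cycle blocks it."""
    wins = {}
    for x, y in combinations(alternatives, 2):
        margin = sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)
        if margin == 0:
            return None                     # pairwise tie: no strict majority
        wins[(x, y) if margin > 0 else (y, x)] = True
    # A transitive strict relation on 3 alternatives must match some linear order.
    for order in permutations(alternatives):
        if all((order[i], order[j]) in wins for i in range(3) for j in range(i + 1, 3)):
            return order
    return None                             # majority cycle (Condorcet paradox)

if __name__ == "__main__":
    cycle_profile = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
    print(majority_result(cycle_profile))   # None: simple majority fails here
```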

7.
A Group Decision Support System for Regional Development Strategic Planning (II) (Cited by: 1; self: 0, other: 1)
赵玮, 芮红. 《运筹学学报》, 2000, 4(1): 86-94
This paper presents the group-decision-support-model component of a group decision support system for the design and management control of regional development strategic planning, including the functions and structure of the group support models and the corresponding algorithms.

8.
Deviation-Degree Analysis of Group Decision Making (Cited by: 16; self: 2, other: 14)
This paper introduces the deviation degree of an individual decision maker, and of the decision group, with respect to a pair of alternatives, together with the concept of a group deviation mapping. On the basis of corresponding deviation axioms, the basic theory of deviation analysis for group decision making is developed, and a method for ranking alternatives for the group by means of deviation degrees is given.

9.
In multi-attribute group decision problems, the key step is to aggregate the individual decision vectors into an overall group evaluation value, which also involves weighting the authority of the decision experts. With expert authority weights determined by a five-grade scale, and using the least-squares method, an objective optimization model for group decision making is established, providing another scientific and reasonable method for multi-attribute group decision problems.
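The following is a minimal sketch of one natural least-squares aggregation of this kind: the group score vector x minimizes the weighted sum of squared deviations from the experts' score vectors, whose closed-form solution is the weighted mean. The scores and authority weights are hypothetical, and the paper's optimization model may differ in detail.

```python
# Minimal sketch of a least-squares group aggregation: find the group score
# vector x minimizing sum_i w_i * ||x - v_i||^2; the closed-form minimizer is
# the weighted mean of the individual score vectors.
import numpy as np

# Rows: experts; columns: alternatives. Scores on a 0-1 scale (hypothetical).
V = np.array([[0.7, 0.4, 0.9],
              [0.6, 0.5, 0.8],
              [0.8, 0.3, 0.7]])

# Expert authority weights from a five-grade scale, normalized to sum to 1.
w = np.array([5, 3, 4], dtype=float)
w /= w.sum()

# Closed-form minimizer of the weighted least-squares objective.
x = w @ V
print("group evaluation:", x)
print("ranking (best first):", np.argsort(-x))
```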

10.
This paper studies properties of the ideal-preference method for solving group multi-objective optimization problems, and proves that the corresponding ideal-preference mapping satisfies the rationality conditions for cardinal group decision rules: anonymity, neutrality, positive responsiveness, non-negative responsiveness, the strong Pareto principle, and local non-dictatorship.

11.
An Optimal Algorithm for Multi-Stage Group Satisfactory Decision Making (Cited by: 1; self: 0, other: 1)
For the multi-stage group satisfactory decision problem, an optimal algorithm is proposed using graph theory. The weight ω is defined as the decision makers' overall evaluation of a decision, and concepts such as the distance d and the group satisfactory strategy are introduced. To account for differences in the decision makers' ability and knowledge in practice, the decision makers are assigned varying decision weights. The multi-stage group satisfactory strategy problem is then converted into finding a maximum-weight path in a multipartite directed graph with a weight vector. A computational example is given at the end.
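A maximum-weight path through a layered (multipartite) directed graph can be found by a stage-by-stage dynamic program, as in the minimal sketch below. The layers and edge weights are hypothetical; the paper's construction of the graph from the group evaluations is not reproduced.

```python
# Minimal sketch: maximum-weight path through a layered (multipartite) directed
# graph by dynamic programming, one layer per decision stage.

def max_weight_path(layers, edge_weight):
    """layers: list of lists of nodes, one list per stage.
    edge_weight(u, v): weight of the edge from u (stage i) to v (stage i+1).
    Returns (best total weight, best path)."""
    best = {v: (0.0, [v]) for v in layers[0]}          # weight and path so far
    for i in range(len(layers) - 1):
        nxt = {}
        for v in layers[i + 1]:
            nxt[v] = max(
                ((best[u][0] + edge_weight(u, v), best[u][1] + [v]) for u in layers[i]),
                key=lambda t: t[0],
            )
        best = nxt
    return max(best.values(), key=lambda t: t[0])

if __name__ == "__main__":
    # Three stages, two candidate decisions per stage (hypothetical data).
    layers = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
    weights = {("a1", "b1"): 3, ("a1", "b2"): 1, ("a2", "b1"): 2, ("a2", "b2"): 4,
               ("b1", "c1"): 2, ("b1", "c2"): 5, ("b2", "c1"): 1, ("b2", "c2"): 2}
    total, path = max_weight_path(layers, lambda u, v: weights[(u, v)])
    print(total, path)
```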

12.
This paper deals with a new optimality criterion consisting of the usual three average criteria and the canonical triplet (together called the strong average-canonical optimality criterion) and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes, which extends the canonical policies of Hernández-Lerma and Lasserre [16, p. 77] for stationary controlled Markov processes. For the case of possibly non-uniformly bounded rewards and a denumerable state space, we first construct, under some conditions, a solution to the optimality equations (OEs), and then prove that the Markov policies obtained from the OEs are not only optimal for the three average criteria but also optimal for all finite-horizon criteria with a sequence of additional functions as their terminal rewards (i.e. strong average-canonical optimal). Some properties of optimal policies and the convergence of optimal average values are also discussed. Moreover, an error bound in average reward between a rolling-horizon policy and a strong average-canonical optimal policy is provided, and a rolling-horizon algorithm for computing strong average ε(>0)-optimal Markov policies is given.

13.
This paper investigates finite-horizon semi-Markov decision processes with denumerable states. Optimality is considered over the class of all randomized history-dependent policies, which may depend on the states as well as on the planning horizon, and the cost rate function is assumed to be bounded below. Under suitable conditions, we show that the value function is the minimum nonnegative solution to the optimality equation and that an optimal policy exists. Moreover, we develop an effective algorithm for computing optimal policies, derive some properties of optimal policies, and illustrate the main results with a maintenance system.

14.
Kuri, Joy; Kumar, Anurag. 《Queueing Systems》, 1997, 27(1-2): 1-16
We consider a problem of admission control to a single queue in discrete time. The controller has access only to k-step-old queue lengths, where k can be arbitrary. The problem is motivated, in particular, by recent advances in high-speed networking, where information delays have become prominent. We formulate the problem in the framework of Completely Observable Controlled Markov Chains, in terms of a multi-dimensional state variable. Exploiting the structure of the problem, we show that under appropriate conditions the multi-dimensional Dynamic Programming Equation (DPE) can be reduced to a unidimensional one. We then provide simple computable upper and lower bounds to the optimal value function corresponding to the reduced unidimensional DPE. These upper and lower bounds, along with a certain relationship among the parameters of the problem, enable us to deduce partially the structural features of the optimal policy. Our approach enables us to recover simply, in part, the recent results of Altman and Stidham, who have shown that a multiple-threshold-type policy is optimal for this problem. Further, under the same relationship among the parameters of the problem, we provide easily computable upper bounds to the multiple thresholds and show the existence of simple relationships among these upper bounds. These relationships allow us to gain very useful insights into the nature of the optimal policy. In particular, the insights obtained are of great importance for the problem of actually computing an optimal policy, because they reduce the search space enormously.

15.
This paper considers the mean-variance optimization problem for the discounted criterion of continuous-time Markov decision processes. The state space and the action space are assumed to be Polish spaces, and the transition rates and reward rates may be unbounded. The optimization goal is to select, within the class of discount-optimal stationary policies, a policy whose corresponding variance is minimal. The paper seeks conditions under which a mean-variance optimal policy exists for Markov decision processes on Polish spaces. Using the first-entrance decomposition method, it is shown that the mean-variance optimization problem can be transformed into an "equivalent" expected-discount optimization problem, which yields an "optimality equation" for the mean-variance problem, the existence of a mean-variance optimal policy, and its characterization. Finally, several examples illustrate the non-uniqueness of discount-optimal policies and the existence of mean-variance optimal policies.

16.
We consider a discrete-time Markov decision process with a partially ordered state space and two feasible control actions in each state. Our goal is to find general conditions, which are satisfied in a broad class of applications to control of queues, under which an optimal control policy is monotonic. An advantage of our approach is that it easily extends to problems with both information and action delays, which are common in applications to high-speed communication networks, among others. The transition probabilities are stochastically monotone and the one-stage reward submodular. We further assume that transitions from different states are coupled, in the sense that the state after a transition is distributed as a deterministic function of the current state and two random variables, one of which is controllable and the other uncontrollable. Finally, we make a monotonicity assumption about the sample-path effect of a pairwise switch of the actions in consecutive stages. Using induction on the horizon length, we demonstrate that optimal policies for the finite- and infinite-horizon discounted problems are monotonic. We apply these results to a single queueing facility with control of arrivals and/or services, under very general conditions. In this case, our results imply that an optimal control policy has threshold form. Finally, we show how monotonicity of an optimal policy extends in a natural way to problems with information and/or action delay, including delays of more than one time unit. Specifically, we show that, if a problem without delay satisfies our sufficient conditions for monotonicity of an optimal policy, then the same problem with information and/or action delay also has monotonic (e.g., threshold) optimal policies.
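To illustrate the threshold structure mentioned above in the simplest setting, here is a minimal value-iteration sketch for discounted admission control of a single discrete-time queue. The dynamics and all parameters are hypothetical and far simpler than the general coupled-transition model (with information and action delays) treated in the paper.

```python
# Minimal value-iteration sketch for discounted admission control of a single
# discrete-time queue, illustrating the threshold form of an optimal policy.
N = 20            # truncated queue capacity
p, q = 0.4, 0.5   # per-slot arrival and service-completion probabilities
R, h = 5.0, 1.0   # reward per admitted arrival, holding cost per customer
beta = 0.95       # discount factor

def q_values(V, x):
    """Q-values (reject, admit) in state x under the value estimate V."""
    srv = q if x > 0 else 0.0
    # Expected next value if the queue stays at length x before service.
    v_stay = srv * V[x - 1] + (1 - srv) * V[x] if x > 0 else V[0]
    xa = min(x + 1, N)
    # Expected next value if an arrival is admitted (length x+1 before service).
    v_admit = srv * V[xa - 1] + (1 - srv) * V[xa]
    reject = -h * x + beta * v_stay
    admit = -h * x + p * R + beta * ((1 - p) * v_stay + p * v_admit)
    return reject, admit

V = [0.0] * (N + 1)
for _ in range(2000):                       # value iteration to near-convergence
    V = [max(q_values(V, x)) for x in range(N + 1)]

policy = [int(q_values(V, x)[1] >= q_values(V, x)[0]) for x in range(N + 1)]
print(policy)  # expected pattern: admit (1) below some threshold queue length, then reject (0)
```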

17.
This note describes sufficient conditions under which total-cost and average-cost Markov decision processes (MDPs) with general state and action spaces, and with weakly continuous transition probabilities, can be reduced to discounted MDPs. For undiscounted problems, these reductions imply the validity of optimality equations and the existence of stationary optimal policies. The reductions also provide methods for computing optimal policies. The results are applied to a capacitated inventory control problem with fixed costs and lost sales.

18.
彭怡, 胡杨. 《运筹与管理》, 2004, 13(4): 69-72
To solve a class of dynamic group decision problems that involve multiple rounds of group evaluation, the concepts of individual utility fluctuation and group consensus degree are defined and corresponding indices are constructed for each. A method for revising the weights of the individual decision makers based on their utility-fluctuation indices is proposed, and a weighted aggregation algorithm based on the group consensus index is then given, which yields the group utility evaluation of each alternative. A computational example is provided at the end.

19.
Optimal control of a production-inventory system with customer impatience (Cited by: 1; self: 0, other: 1)
We consider the control of a production-inventory system with impatient customers. We show that the optimal policy can be described using two thresholds: a production base-stock level that determines when production takes place and an admission threshold that determines when orders should be accepted. We describe an algorithm for computing the performance of the system for any choice of base-stock level and admission threshold. In a numerical study, we compare the performance of the optimal policy against several other policies.
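The paper's exact performance algorithm is not reproduced here. As a hedged stand-in, the following sketch estimates by simulation the long-run profit rate of a make-to-stock production-inventory system with impatient customers under a given (base-stock S, admission threshold T) policy; the model details and all parameters are hypothetical.

```python
# Minimal sketch: simulate a make-to-stock production-inventory system with
# impatient customers under a given (base-stock S, admission threshold T)
# policy, and estimate its long-run profit rate. Model and parameters are
# hypothetical illustrations of this type of policy evaluation.
import random

def evaluate(S, T, lam=1.0, mu=1.2, theta=0.3,
             r=10.0, h=1.0, b=2.0, c_lost=4.0, horizon=100000, seed=0):
    rng = random.Random(seed)
    inv, backlog = S, 0          # on-hand inventory, waiting customers
    t, profit = 0.0, 0.0
    while t < horizon:
        producing = (inv - backlog) < S          # base-stock production rule
        rates = [lam, mu if producing else 0.0, theta * backlog]
        total = sum(rates)
        dt = rng.expovariate(total)
        profit -= (h * inv + b * backlog) * dt   # holding and waiting costs
        t += dt
        u = rng.random() * total
        if u < rates[0]:                         # demand arrival
            if inv > 0:
                inv -= 1; profit += r
            elif backlog < T:                    # admission threshold rule
                backlog += 1
            else:
                profit -= c_lost                 # rejected order
        elif u < rates[0] + rates[1]:            # production completion
            if backlog > 0:
                backlog -= 1; profit += r
            else:
                inv += 1
        else:                                    # a waiting customer abandons
            backlog -= 1; profit -= c_lost
    return profit / t

if __name__ == "__main__":
    print(evaluate(S=5, T=3))
```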

20.
Decision makers often face the need for a performance guarantee with some sufficiently high probability. Such problems can be modelled using a discrete-time Markov decision process (MDP) with a probability criterion for first achieving a target value. The objective is to find a policy that maximizes the probability of the total discounted reward exceeding a target value in the preceding stages. We show that our formulation cannot be described by former models with standard criteria. We provide the properties of the objective functions, optimal value functions and optimal policies. An algorithm for computing the optimal policies for the finite horizon case is given. In this stochastic stopping model, we prove that there exists an optimal deterministic and stationary policy and that the optimality equation has a unique solution. Using perturbation analysis, we approximate general models and prove the existence of an ε-optimal policy for finite state space. We give an example concerning the reliability of a satellite system.

