Similar Articles
1.
Systemic decision making is a new approach for dealing with complex multiactor decision making problems in which the actors’ individual preferences on a fixed set of alternatives are incorporated in a holistic view in accordance with the “principle of tolerance”. The new approach integrates all the preferences, even if they are encapsulated in different individual theoretical models or approaches; the only requirement is that they must be expressed as some kind of probability distribution. In this paper, assuming the analytic hierarchy process (AHP) is the multicriteria technique employed to rank alternatives, the authors present a new methodology based on a Bayesian analysis for dealing with AHP systemic decision making in a local context (a single criterion). The approach integrates the individual visions of reality into a collective one by means of a tolerance distribution, which is defined as the weighted geometric mean of the individual preferences expressed as probability distributions. A mathematical justification of this distribution, a study of its statistical properties and a Monte Carlo method for drawing samples are also provided. The paper further presents a number of decisional tools for evaluating the acceptance of the tolerance distribution, constructing tolerance paths that increase representativeness, and extracting the relevant knowledge of the underlying multiactor decisional process from a cognitive perspective. Finally, the proposed methodology is applied to the AHP-multiplicative model with lognormal errors and a case study related to a real-life experience in local participatory budgets for the Zaragoza City Council (Spain).
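A minimal sketch of the tolerance distribution described above, for the discrete case: the actors' preference distributions over a common set of alternatives are combined by a weighted geometric mean and renormalised. The function name and the two-actor example are illustrative, not taken from the paper; all distributions are assumed to have strictly positive mass everywhere.

```python
import math

def tolerance_distribution(dists, weights):
    """Weighted geometric mean of the actors' probability distributions
    over a common set of alternatives, renormalised to sum to one.
    All probabilities must be strictly positive."""
    n = len(dists[0])
    raw = []
    for j in range(n):
        log_g = sum(w * math.log(d[j]) for d, w in zip(dists, weights))
        raw.append(math.exp(log_g))
    total = sum(raw)
    return [r / total for r in raw]

# Two actors with opposed preferences over three alternatives, equal weights.
p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.3, 0.5]
tol = tolerance_distribution([p1, p2], [0.5, 0.5])
```

Note how the geometric mean rewards consensus: the middle alternative, on which both actors agree, keeps its mass, while the extremes are pulled together symmetrically.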

2.
In this paper we analyse the applicability and robustness of Markov chain Monte Carlo algorithms for eigenvalue problems. We restrict our consideration to real symmetric matrices.

Almost Optimal Monte Carlo (MAO) algorithms for solving eigenvalue problems are formulated. Results on the structure of both the systematic and the probability error are presented. It is shown that the values of the two errors can be controlled independently by different algorithmic parameters. The results show how the systematic error depends on the matrix spectrum. An analysis of the probability error is also presented; it shows that the closer (in some sense) the matrix under consideration is to a stochastic matrix, the smaller this error is. Sufficient conditions for constructing robust and interpolation Monte Carlo algorithms are obtained. For stochastic matrices an interpolation Monte Carlo algorithm is constructed.
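The following sketch illustrates the kind of Monte Carlo power-method estimator the abstract discusses (a generic textbook variant, not the paper's MAO algorithm): random walks with transitions proportional to the matrix entries and importance weights estimate the dominant eigenvalue. It also illustrates the abstract's point about stochastic-like matrices: when all row sums are equal, the weights are deterministic and the probability error vanishes.

```python
import random

def mc_dominant_eigenvalue(a, walk_len=8, n_walks=5000, seed=1):
    """Monte Carlo power-method estimate of the dominant eigenvalue of a
    symmetric matrix with non-negative entries.  Walks use transition
    probabilities p_ij = a_ij / (row sum), so each importance-weight factor
    a_ij / p_ij equals the row sum of the current state; the ratio of the
    k-step and (k-1)-step estimates of (1, A^k 1) converges to lambda_max."""
    rng = random.Random(seed)
    n = len(a)
    row_sums = [sum(row) for row in a]
    num = den = 0.0
    for _ in range(n_walks):
        state, w = rng.randrange(n), float(n)  # uniform start; weight 1/p0
        for _ in range(walk_len - 1):
            w *= row_sums[state]
            state = rng.choices(range(n), weights=a[state])[0]
        den += w
        num += w * row_sums[state]
    return num / den

# Equal row sums (a scaled doubly stochastic matrix): zero-variance estimate.
est_exact = mc_dominant_eigenvalue([[2.0, 1.0], [1.0, 2.0]])
# Unequal row sums: a genuinely random estimate.
est_mc = mc_dominant_eigenvalue([[2.0, 1.0], [1.0, 3.0]])
```

For the first matrix every walk carries the same weight, so the estimator returns the exact eigenvalue 3 regardless of the random path, matching the claim that the probability error shrinks as the matrix approaches a stochastic one.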

A number of numerical tests for large symmetric dense matrices are performed in order to study experimentally the dependence of the systematic error on the structure of the matrix spectrum. We also study how the probability error depends on the balancing of the matrix.


3.
Multicriteria choice methods are developed by applying methods of criteria importance theory with uncertain information on criteria importance and with preferences varying along their scale. Formulas are given for computing importance coefficients and importance scale estimates that are “characteristic” representatives of the feasible set of these parameters. In the discrete case, an alternative with the highest probability of being optimal (for a uniform distribution of parameter value probabilities) can be used as the best one. It is shown how such alternatives can be found using the Monte Carlo method.
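A hedged sketch of the Monte Carlo idea in this abstract: sample criteria weights uniformly from the simplex and count how often each alternative attains the best weighted score. The scoring model (additive value) and all names are illustrative assumptions, not taken from the paper.

```python
import random

def prob_optimal(scores, n_samples=20000, seed=7):
    """For each alternative (row of per-criterion scores), estimate the
    probability that it is optimal when the criteria weight vector is drawn
    uniformly from the simplex (via normalised exponentials)."""
    rng = random.Random(seed)
    wins = [0] * len(scores)
    m = len(scores[0])  # number of criteria
    for _ in range(n_samples):
        e = [rng.expovariate(1.0) for _ in range(m)]
        s = sum(e)
        w = [x / s for x in e]
        vals = [sum(wi * sij for wi, sij in zip(w, row)) for row in scores]
        wins[vals.index(max(vals))] += 1
    return [c / n_samples for c in wins]

# Three alternatives scored on two criteria; with weight w on criterion 1,
# the third alternative wins exactly when 3/7 < w < 4/7.
scores = [[0.9, 0.2], [0.2, 0.9], [0.6, 0.6]]
p = prob_optimal(scores)
```

For this example the exact probabilities are 3/7, 3/7 and 1/7, so the estimates should cluster around those values.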

4.
A Condorcet domain is a subset of the set of linear orders on a finite set of candidates (the alternatives in a vote) such that, if all voters' preferences are linear orders belonging to this subset, the simple majority rule does not yield cycles. It is well known that the set of linear orders forms the Bruhat lattice. We prove that a maximal Condorcet domain is a distributive sublattice of the Bruhat lattice. An explicit lattice formula for the simple majority rule is given. We introduce the notion of a symmetric Condorcet domain and characterize symmetric Condorcet domains of maximal size.

5.
Generalized additive models for location, scale, and shape define a flexible, semi-parametric class of regression models for analyzing insurance data, in which the exponential family assumption for the response is relaxed. This approach allows the actuary to include risk factors not only in the mean but also in other key parameters governing the claiming behavior, such as the degree of residual heterogeneity or the no-claim probability. In this broader setting, Negative Binomial regression with cell-specific heterogeneity and zero-inflated Poisson regression with cell-specific additional probability mass at zero are applied to model claim frequencies. New models for claim severities that can be applied either per claim or aggregated per year are also presented. Bayesian inference is based on efficient Markov chain Monte Carlo simulation techniques and allows for the simultaneous estimation of linear effects as well as of possible nonlinear effects, spatial variations and interactions between risk factors within the data set. To illustrate the relevance of this approach, a detailed case study is proposed based on the Belgian motor insurance portfolio studied in Denuit and Lang (2004).
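The zero-inflated Poisson claim-frequency model mentioned above mixes a point mass at zero (a never-claiming fraction pi) with a Poisson count. A minimal sketch of its probability mass function (the function name is illustrative; the regression machinery of the paper is not reproduced):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the policyholder
    produces no claims at all; otherwise the claim count is Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# Probability of a claim-free year with 30% structural zeros and mean rate 2.
p0 = zip_pmf(0, lam=2.0, pi=0.3)
```

The pmf sums to one and has mean (1 - pi) * lam, which is how the extra mass at zero deflates the expected claim frequency.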

6.
Incorporating target values into the selection of new product development alternatives, by considering the degree to which attribute values attain those targets, helps an enterprise choose more competitive development alternatives. To handle the mixed representation of attribute and target values and the problem of interacting attributes, a selection method for new product development alternatives is proposed based on target-oriented decision analysis and k-additive fuzzy measures. First, attribute and target values expressed as mixed information (interval values, fuzzy numbers, linguistic values, and so on) are converted into probability densities; combined with three types of attribute preference, target-oriented decision analysis is used to compute the probability that an attribute value attains its target. Second, using information such as the direction and strength of attribute interactions, the minimum-variance method is used to identify a k-additive fuzzy measure, and the Choquet integral operator then aggregates the target-attainment probabilities of the attributes as the basis for selecting a development alternative. Finally, the method is applied to the selection of development alternatives for a large-scale integrated circuit tester, verifying its effectiveness.
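The aggregation step above uses a Choquet integral with respect to a fuzzy measure, which generalises the weighted mean to interacting criteria. A minimal sketch of the standard Choquet integral (the measure values and attainment probabilities below are made-up illustrations, not the paper's data):

```python
def choquet(values, mu):
    """Choquet integral of `values` (dict: criterion -> attainment
    probability in [0, 1]) with respect to the fuzzy measure `mu`
    (dict: frozenset of criteria -> capacity, mu(all) = 1).
    Sorts values in decreasing order and accumulates the capacity of the
    growing coalition of top criteria."""
    items = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    total = 0.0
    cum = frozenset()
    prev = None
    for crit, val in items:
        if prev is not None:
            total += (prev - val) * mu[cum]
        cum = cum | {crit}
        prev = val
    return total + prev * mu[cum]

# A sub-additive measure (mu({a}) + mu({b}) < 1) models redundant criteria.
mu = {frozenset({'a'}): 0.4, frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 1.0}
score = choquet({'a': 0.8, 'b': 0.6}, mu)
```

For an additive measure the integral reduces to the ordinary weighted mean, which is a useful sanity check.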

7.
Simulation Optimization (SO) is a class of mathematical optimization techniques in which the objective function can only be numerically evaluated through simulation. In this paper, a new SO approach called Golden Region (GR) search is developed for continuous problems. GR divides the feasible region into a number of (sub)regions and selects one region in each iteration for further search based on the quality and distribution of simulated points in the feasible region and the result of scanning the response surface through a metamodel. Monte Carlo experiments show that the GR method is efficient compared to three well-established approaches in the literature. We also prove the asymptotic convergence in probability to a global optimum for a large class of random search methods in general and GR in particular.
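To make the region-selection idea concrete, here is a deliberately simplified region-based stochastic search, much cruder than the paper's GR method (no metamodel, fixed splitting, always recursing into the best-sampled subregion). All names and parameters are illustrative assumptions.

```python
import random

def region_search(f, lo, hi, n_iters=6, n_regions=5, n_samples=30, seed=3):
    """Toy region-based simulation optimization: split the current interval
    into subregions, sample the noisy objective in each, and recurse into
    the subregion with the best sample mean."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        width = (hi - lo) / n_regions
        means = []
        for r in range(n_regions):
            a = lo + r * width
            vals = [f(a + rng.random() * width, rng) for _ in range(n_samples)]
            means.append(sum(vals) / n_samples)
        best = means.index(min(means))
        lo, hi = lo + best * width, lo + (best + 1) * width
    return (lo + hi) / 2

def noisy(x, rng):
    """Noisy evaluation of (x - 2)^2, standing in for a simulation run."""
    return (x - 2.0) ** 2 + rng.gauss(0.0, 0.1)

x_star = region_search(noisy, 0.0, 10.0)
```

Averaging several noisy samples per subregion is what lets the search discriminate between regions despite the simulation noise.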

8.
This paper establishes the computational complexity status of a problem of deciding on the quality of a committee. Starting from individual preferences over alternatives, we analyse when it can be determined efficiently whether a given committee C satisfies a weak (resp. strong) Condorcet criterion, i.e., whether C is at least as good as (resp. better than) every other committee in a pairwise majority comparison. Scoring functions used in classic voting rules are adapted for these comparisons. In particular, we draw a sharp separation line between computationally tractable and intractable instances with respect to different voting rules. Finally, we show that deciding whether there exists a committee which satisfies the weak (resp. strong) Condorcet criterion is computationally hard.

9.
This article compares several estimation methods for nonlinear stochastic differential equations with discrete-time measurements. The likelihood function is computed by Monte Carlo simulations of the transition probability (simulated maximum likelihood, SML) using kernel density estimators and functional integrals, and by using the extended Kalman filter (EKF) and the second-order nonlinear filter (SNF). The relation to a local linearization method is discussed. A simulation study for a diffusion process in a double-well potential (Ginzburg–Landau equation) shows that, for large sampling intervals, the SML methods lead to better estimation results than the likelihood approach via the EKF and SNF. A second study using a nonlinear diffusion coefficient (generalized Cox–Ingersoll–Ross model) demonstrates that the EKF-type estimators may serve as efficient alternatives to simple maximum quasi-likelihood approaches and Monte Carlo methods.
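A sketch of the SML building block described above: the transition density of an SDE is estimated by simulating many Euler-Maruyama paths and smoothing the endpoints with a Gaussian kernel. The Ornstein-Uhlenbeck process is used here because its exact transition density is known, so the estimate can be checked; it is an illustrative stand-in for the paper's models, and all names are assumptions.

```python
import math
import random

def sim_transition_density(x0, y, theta, sigma, dt_obs, n_paths=20000,
                           n_steps=50, bandwidth=0.1, seed=11):
    """Kernel estimate of the transition density p(y, dt_obs | x0) of the
    OU process dX = -theta*X dt + sigma dW, from Euler-Maruyama paths."""
    rng = random.Random(seed)
    h = dt_obs / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -theta * x * h + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)
        total += math.exp(-0.5 * ((y - x) / bandwidth) ** 2)
    return total / (n_paths * bandwidth * math.sqrt(2 * math.pi))

# Exact OU transition density at its mode, for comparison.
theta, sigma, dt = 1.0, 1.0, 0.5
mean = 1.0 * math.exp(-theta * dt)
var = sigma ** 2 * (1 - math.exp(-2 * theta * dt)) / (2 * theta)
exact = 1.0 / math.sqrt(2 * math.pi * var)
est = sim_transition_density(1.0, mean, theta, sigma, dt)
```

In an SML fit, this density estimate would be evaluated at each observed transition and the product maximised over the parameters; the kernel bandwidth introduces a small, controllable bias.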

10.
The Condorcet criterion and committee selection
Recent studies have evaluated election procedures on their propensity to select committees that meet a Condorcet criterion. The Condorcet criterion has been defined to use majority agreement from voters' preferences to compare the selected committee to all other committees. This study uses a different definition of the Condorcet criterion as defined on committees. The focus of the new definition is on candidates: we consider majority agreement on each candidate in the selected committee as compared to each candidate not in the selected committee. This new definition of the Condorcet criterion allows for the existence of majority cycles on candidates within the selected committee. However, no candidate in the non-selected group is able to defeat any candidate in the selected committee by majority rule. Of particular interest is the likelihood that a committee meeting this Condorcet criterion exists. Attention is also given to the likelihood that various simple voting procedures will select a committee meeting this Condorcet criterion when one does exist.
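The candidate-wise definition above is easy to check directly: every committee member must win (or at least tie, in the weak version) the pairwise majority comparison against every non-member. A minimal sketch, with an illustrative three-voter profile (names are assumptions, not the paper's notation):

```python
def meets_condorcet_committee(committee, candidates, rankings, weak=True):
    """Candidate-wise Condorcet criterion: each committee member must beat
    (weak: at least tie) each non-member in a pairwise majority comparison.
    `rankings` is a list of voter preference lists, most preferred first."""
    def margin(a, b):
        # Voters preferring a to b minus voters preferring b to a.
        return sum(1 if r.index(a) < r.index(b) else -1 for r in rankings)
    outside = [c for c in candidates if c not in committee]
    for inside in committee:
        for out in outside:
            m = margin(inside, out)
            if m < 0 or (not weak and m <= 0):
                return False
    return True

rankings = [['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c']]
```

With this profile the committee {a, b} satisfies the criterion (both beat c), while {b, c} does not, since the excluded candidate a beats both members.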

11.
After determining all supporting profiles with any number of voters for any specified three-candidate pairwise majority vote outcome, a new, large class of “octahedral” probability distributions, motivated by and including IAC, is introduced to examine various three-candidate voting outcomes involving majority vote outcomes. Illustrating examples include computing each distribution’s likelihood of a majority vote cycle and the likelihood that the Borda Count and Condorcet winners agree. Surprisingly, the computations often reduce to a simple exercise of finding the volumes of tetrahedra.
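For intuition about the cycle likelihoods such distributions assign, here is a Monte Carlo estimate under the simplest culture assumption, the impartial culture (not one of the paper's octahedral distributions): with three candidates and three voters the exact cycle probability is 12/216 = 1/18.

```python
import itertools
import random

def cycle_probability(n_voters=3, n_profiles=60000, seed=5):
    """Monte Carlo estimate of the probability of a majority cycle among
    three candidates under the impartial culture: each voter independently
    draws one of the six linear orders uniformly at random."""
    orders = list(itertools.permutations('abc'))
    rng = random.Random(seed)
    cycles = 0
    for _ in range(n_profiles):
        profile = [rng.choice(orders) for _ in range(n_voters)]
        def beats(x, y):
            return sum(1 for r in profile if r.index(x) < r.index(y)) * 2 > n_voters
        # With an odd number of voters, a cycle exists iff no candidate
        # beats both of the others.
        if not any(all(beats(c, d) for d in 'abc' if d != c) for c in 'abc'):
            cycles += 1
    return cycles / n_profiles

est = cycle_probability()
```

The exact 1/18 value comes from enumerating the 216 three-voter profiles: only the 12 profiles consisting of one voter from each rotation of a cyclic triple produce a majority cycle.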

12.
Kinetic Monte Carlo methods provide a powerful computational tool for the simulation of microscopic processes, such as the diffusion of interacting particles on a surface, at a detailed atomistic level. However, such algorithms are typically computationally expensive and are restricted to fairly small spatiotemporal scales. One approach towards overcoming this problem was the development of coarse-grained Monte Carlo algorithms. In recent literature, these methods were shown to be capable of efficiently describing much larger length scales while still incorporating information on microscopic interactions and fluctuations. In this paper, a coarse-grained Langevin system of stochastic differential equations approximating the diffusion of interacting particles is derived, based on these earlier coarse-grained models. The authors demonstrate the asymptotic equivalence of the transient and long-time behavior of the Langevin approximation and the underlying microscopic process, using asymptotic methods such as large deviations for interacting particle systems, and furthermore present corresponding numerical simulations, comparing statistical quantities like mean paths, autocorrelations and power spectra of the microscopic and the approximating Langevin processes. Finally, it is shown that the Langevin approximations presented here are much more computationally efficient than conventional kinetic Monte Carlo methods, since, in addition to the reduction in the number of spatial degrees of freedom in coarse-grained Monte Carlo methods, the Langevin system of stochastic differential equations allows for multiple particle moves in a single timestep.

13.
We present two time-inhomogeneous search processes for finding the optimal Bernoulli parameters, where the performance measure cannot be evaluated exactly but must be estimated through Monte Carlo simulation. At each iteration, two neighbouring alternatives are compared and the one that appears to be better is passed on to the next iteration. The first search process uses an increasing sample size for each configuration at each iteration. The second search process uses a sequential sampling procedure with boundaries that increase with the number of iterations. At each iteration the acceptance of a new configuration depends on the iteration number; therefore, the search process is an inhomogeneous Markov chain. We show that if the increase occurs more slowly than a certain rate, these search processes converge to the optimal set with probability one. © 1998 John Wiley & Sons, Ltd.
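A sketch of the first search process in its simplest form, under illustrative assumptions (any other configuration counts as a neighbour, and the sample size simply equals the iteration number; neither detail is taken from the paper):

```python
import random

def bernoulli_search(p, n_iters=300, seed=2):
    """Time-inhomogeneous search over Bernoulli configurations: at
    iteration k the current configuration and a random neighbour are each
    sampled k times, and the one with more observed successes is kept.
    The growing sample size makes late misjudgements increasingly rare."""
    rng = random.Random(seed)
    current = 0
    for k in range(1, n_iters + 1):
        neighbour = rng.choice([i for i in range(len(p)) if i != current])
        succ_cur = sum(rng.random() < p[current] for _ in range(k))
        succ_nei = sum(rng.random() < p[neighbour] for _ in range(k))
        if succ_nei > succ_cur:
            current = neighbour
    return current

# Three configurations with success probabilities 0.2, 0.5 and 0.8.
best = bernoulli_search([0.2, 0.5, 0.8])
```

Early iterations move almost at random, but once the per-comparison sample size is large, the probability of leaving the best configuration is negligible, which is the mechanism behind the convergence result.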

14.
We present a new characterization of the class of weight-based scoring indices for ranking problems with top-truncated preferences. The main novel axiom is Splitting Invariance: if an alternative is split up into a number of distinct yet unranked alternatives, then the total score of these alternatives should increase by the score of the original alternative, and the scores of the other alternatives should not change.

15.
This paper proposes appropriate boundary conditions for Kolmogorov's forward equation governing the stationary probability density function of a 1-D impulsively controlled diffusion process, and derives an exact probability density function. The boundary conditions are verified numerically with a Monte Carlo approach. A finite-volume method for solving the equation is also presented and its accuracy is investigated through numerical experiments.

16.
A new approach to iterative Monte Carlo algorithms for the well-known inverse matrix problem is presented and studied. The algorithms are based on a special technique of iteration parameter choice, which allows the convergence of the algorithm to be controlled for any column (row) of the matrix using different relaxation parameters. The choice of these parameters is governed by a posteriori criteria at every Monte Carlo iteration. The presented Monte Carlo algorithms are implemented on a SUN SPARCstation. Numerical tests are performed for matrices of moderate size in order to show how the algorithms work. The algorithms under consideration are well suited to parallelization.

17.
We describe NIMBLE, a system for programming statistical algorithms for general model structures within R. NIMBLE is designed to meet three challenges: flexible model specification, a language for programming algorithms that can use different models, and a balance between high-level programmability and execution efficiency. For model specification, NIMBLE extends the BUGS language and creates model objects, which can manipulate variables, calculate log probability values, generate simulations, and query the relationships among variables. For algorithm programming, NIMBLE provides functions that operate with model objects using two stages of evaluation. The first stage allows specialization of a function to a particular model and/or nodes, such as creating a Metropolis-Hastings sampler for a particular block of nodes. The second stage allows repeated execution of computations using the results of the first stage. To achieve efficient second-stage computation, NIMBLE compiles models and functions via C++, using the Eigen library for linear algebra, and provides the user with an interface to compiled objects. The NIMBLE language represents a compilable domain-specific language (DSL) embedded within R. This article provides an overview of the design and rationale for NIMBLE along with illustrative examples including importance sampling, Markov chain Monte Carlo (MCMC) and Monte Carlo expectation maximization (MCEM). Supplementary materials for this article are available online.
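NIMBLE itself is an R system, but its two-stage evaluation can be mimicked in plain Python: a first "specialisation" stage binds a Metropolis-Hastings kernel to a particular log-density, and a second stage runs the specialised kernel repeatedly. This is an illustrative analogy, not NIMBLE's actual API.

```python
import math
import random

def make_mh_sampler(log_prob, scale):
    """First stage: specialise a Metropolis-Hastings kernel to a given
    log-density and proposal scale (analogous to NIMBLE binding a sampler
    to particular model nodes).  Returns the second-stage step function."""
    def step(x, rng):
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_prob(prop) - log_prob(x):
            return prop
        return x
    return step

# Specialise to a standard normal target, then run the kernel repeatedly.
step = make_mh_sampler(lambda x: -0.5 * x * x, scale=1.0)
rng = random.Random(9)
x, samples = 0.0, []
for i in range(30000):
    x = step(x, rng)
    if i >= 5000:  # discard burn-in
        samples.append(x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Separating specialisation from execution is what lets a system like NIMBLE compile the second stage: the inner loop no longer touches the model-specification layer.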

18.
In the boolean decision tree model there is at least a linear gap between the Monte Carlo and the Las Vegas complexity of a function, depending on the error probability. We prove for a large class of read-once formulae that this trivial speed-up is the best that a Monte Carlo algorithm can achieve. For every formula F belonging to that class we show that the Monte Carlo complexity of F with two-sided error p is (1 - 2p)R(F), and with one-sided error p is (1 - p)R(F), where R(F) denotes the Las Vegas complexity of F. The result follows from a general lower bound that we derive on the Monte Carlo complexity of these formulae. This bound is analogous to the lower bound due to Saks and Wigderson on their Las Vegas complexity.

19.
We have recently developed a global optimization methodology for solving combinatorial problems with either deterministic or stochastic performance functions. This method, the Nested Partitions (NP) method, has been shown to generate a Markov chain and to converge with probability one to a global optimum. In this paper, we study the rate of convergence of the method through the use of Markov Chain Monte Carlo (MCMC) methods, and use this to derive stopping rules that can be applied during simulation-based optimization. A numerical example serves to illustrate the feasibility of our approach.

20.
We consider informational requirements of social choice rules satisfying anonymity, neutrality, monotonicity, and efficiency, and never choosing the Condorcet loser. Among such rules, we establish the existence of a rule operating on the minimal informational requirement. Depending on the number of agents and the number of alternatives, either the plurality rule or the plurality with a runoff is characterized. In some cases, the plurality rule is the most selective rule among the rules operating on the minimal informational requirement. In the other cases, each rule operating on the minimal informational requirement is a two-stage rule, and among them, the plurality with a runoff is the rule whose choice at the first stage is most selective. These results not only clarify properties of the plurality rule and the plurality with a runoff, but also explain why they are widely used in real societies.
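The two rules characterized above are simple to state as code. A minimal sketch (tie-breaking is left unspecified, as in the classic textbook definitions; the example profile is an illustration, not from the paper):

```python
def plurality(rankings):
    """Plurality rule: the candidate with the most first-place votes wins."""
    tally = {}
    for r in rankings:
        tally[r[0]] = tally.get(r[0], 0) + 1
    return max(tally, key=tally.get)

def plurality_with_runoff(rankings):
    """Plurality with a runoff: the two candidates with the most first-place
    votes advance, and the pairwise majority between them decides."""
    tally = {}
    for r in rankings:
        tally[r[0]] = tally.get(r[0], 0) + 1
    a, b = sorted(tally, key=tally.get, reverse=True)[:2]
    a_votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    return a if 2 * a_votes > len(rankings) else b

# 4 voters: a > b > c;  3 voters: b > c > a;  2 voters: c > b > a.
profile = (4 * [['a', 'b', 'c']] + 3 * [['b', 'c', 'a']]
           + 2 * [['c', 'b', 'a']])
```

On this profile the two rules disagree: plurality elects a (4 first-place votes), while the runoff between a and b is won by b (5 votes to 4), which illustrates how the second stage uses strictly more of the preference information.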
