Similar documents (20 records found)
1.
In networked systems research, game theory is increasingly used to model a number of scenarios where distributed decision making takes place in a competitive environment. These scenarios include peer‐to‐peer network formation and routing, computer security level allocation, and TCP congestion control. It has been shown, however, that such modeling has met with limited success in capturing the real‐world behavior of computing systems. One of the main reasons for this drawback is that, whereas classical game theory assumes perfect rationality of players, real‐world entities in such settings have limited information and cognitive ability, which hinders their decision making. Meanwhile, new bounded rationality models have been proposed in networked game theory which take into account the topology of the network. In this article, we demonstrate that game‐theoretic modeling of computing systems would be much more accurate if a topologically distributed bounded rationality model is used. In particular, we consider (a) link formation in peer‐to‐peer overlay networks, (b) assignment of security levels to computers in computer networks, and (c) routing in peer‐to‐peer overlay networks, and show that in each of these scenarios the accuracy of the modeling improves significantly when topological models of bounded rationality are applied in the modeling process. Our results indicate that it is possible to use game theory to model competitive scenarios in networked systems in a way that closely reflects the real‐world behavior, topology, and dynamics of such systems. © 2016 Wiley Periodicals, Inc. Complexity 21: 123–137, 2016
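To make the idea of a topologically distributed bounded rationality model concrete, the following sketch (ours, not the authors' code; the payoffs, the degree-based rationality rule, and all parameter values are assumptions) shows a two-action networked game in which each node responds with a logit (quantal-response) rule whose rationality parameter depends on the node's degree, so that better-connected nodes are assumed to reason more accurately.

import math, random

# Illustrative sketch: bounded rationality distributed over the network topology.
# Each node's rationality parameter lambda_i grows with its degree (an assumption).

def degree_rationality(adjacency, base_lambda=1.0):
    # Assumption: higher-degree nodes reason more accurately.
    return {i: base_lambda * math.log(2 + len(nbrs)) for i, nbrs in adjacency.items()}

def logit_response(payoff_play, payoff_pass, lam):
    # Probability of choosing "play" under a logit (quantal-response) rule.
    z = math.exp(lam * payoff_play) + math.exp(lam * payoff_pass)
    return math.exp(lam * payoff_play) / z

def simulate(adjacency, rounds=50, benefit=1.0, cost=0.4, seed=0):
    rng = random.Random(seed)
    lam = degree_rationality(adjacency)
    action = {i: rng.random() < 0.5 for i in adjacency}          # True = "play"
    for _ in range(rounds):
        i = rng.choice(list(adjacency))
        playing_neighbours = sum(action[j] for j in adjacency[i])
        payoff_play = benefit * playing_neighbours - cost        # hypothetical payoffs
        payoff_pass = 0.0
        action[i] = rng.random() < logit_response(payoff_play, payoff_pass, lam[i])
    return action

# Example: a small star network; the hub's choices stay closest to exact best response.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(simulate(star))

The same skeleton could be adapted to the link-formation, security-level, and routing games listed above by swapping in the corresponding payoff functions.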

2.
In this supposed "information age," a high premium is put on the widespread availability of information. Access to as much information as possible is often cited as key to making effective decisions. While it would be foolish to deny the central role that information and its flow play in effective decision‐making processes, this chapter explores the equally important role of "barriers" to information flows in the robustness of complex systems. The analysis demonstrates that (for simple Boolean networks at least) a complex system's ability to filter out, i.e., block, certain information flows is essential if it is not to be beholden to every external signal. The reduction of information is as important as its availability. © 2009 Wiley Periodicals, Inc. Complexity, 2010
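As a toy illustration of the filtering argument (a minimal sketch under our own assumptions, not the chapter's networks), the fragment below lets selected nodes of a small Boolean network block an external signal, so the network's trajectory is not perturbed by every external fluctuation.

import random

# Hypothetical 3-node Boolean network driven by an external signal.
# Nodes whose "barrier" flag is set ignore the signal (they filter it out).

def step(state, rules, inputs, barrier, signal):
    new_state = {}
    for node, rule in rules.items():
        bits = tuple(state[i] for i in inputs[node])
        drive = False if barrier[node] else signal   # blocked nodes see no signal
        new_state[node] = rule(bits, drive)
    return new_state

rules = {
    "a": lambda bits, s: bits[0] ^ s,        # responds to the external signal
    "b": lambda bits, s: bits[0] and bits[1],
    "c": lambda bits, s: not bits[0],
}
inputs = {"a": ["c"], "b": ["a", "c"], "c": ["b"]}
barrier = {"a": False, "b": True, "c": True}  # b and c block the external signal
state = {"a": True, "b": False, "c": True}

rng = random.Random(1)
for t in range(5):
    signal = rng.random() < 0.5
    state = step(state, rules, inputs, barrier, signal)
    print(t, "signal:", signal, state)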

3.
Jens Saak, Peter Benner, PAMM 2008, 8(1): 10085–10088
Model order reduction of large‐scale linear time‐invariant systems is an omnipresent task in the control and simulation of complex dynamical processes. The solution of large‐scale Lyapunov and Riccati equations is a major task, e.g., in balanced truncation and related model order reduction methods, in particular when these methods are applied to control problems constrained by semi‐discretized partial differential equations. The software package LyaPack has proven to be a valuable tool for solving these equations since its introduction in 2000. Here we discuss recent improvements and extensions of the underlying algorithms and their implementation. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
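For readers unfamiliar with the equations mentioned: balanced truncation of a stable system \dot{x} = Ax + Bu, y = Cx requires the controllability and observability Gramians, which satisfy the Lyapunov equations (standard notation, not reproduced from the paper)

A P + P A^{T} + B B^{T} = 0, \qquad A^{T} Q + Q A + C^{T} C = 0,

while LQR-type designs lead to the algebraic Riccati equation

A^{T} X + X A - X B B^{T} X + C^{T} C = 0.

For large, sparse A with few inputs and outputs these solutions typically have low numerical rank, and LyaPack approximates them by low-rank factors P \approx Z Z^{T} computed with ADI-type iterations.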

4.
Risk and return are interdependent in a stock portfolio. To achieve the anticipated return, the associated risk must be considered simultaneously. However, complex investment environments and dynamic changes in decision making criteria complicate forecasts of risk and return for various investment objects. Additionally, investors often fail to maximize their profits because of improper capital allocation. Although stock investment involves multi-criteria decision making (MCDM), traditional MCDM theory has two shortfalls: first, it is inappropriate for decisions that evolve with a changing environment; second, weight assignments for the various criteria are often oversimplified and inconsistent with actual human thinking processes.

In 1965, Rechenberg proposed evolution strategies for solving optimization problems involving real-number parameters, addressing several flaws of traditional algorithms, such as their reliance on point search alone and their high probability of becoming trapped in a single optimal-solution region. In 1992, Hillis introduced the co-evolutionary concept that the evolution of living creatures interacts with their environments (multiple criteria) and constantly improves the survivability of their genes, which expedites evolutionary computation. Therefore, this research aimed to solve multi-criteria decision making problems in stock trading investment by integrating evolution strategies into a co-evolutionary criteria evaluation model. Since co-evolution strategies are self-calibrating, criteria evaluation can be based on changes in time and environment. Such changes not only correspond with human decision making patterns (i.e., evaluation of dynamic changes in criteria), but also address the weaknesses of multi-criteria decision making (i.e., simplified assignment of weights for various criteria).

Co-evolutionary evolution strategies can identify the optimal capital portfolio and can help investors maximize their returns by optimizing the pre-operational allocation of limited capital. This experimental study compared general evolution strategies with an artificial neural forecast model, and found that co-evolutionary evolution strategies outperform general evolution strategies and substantially outperform artificial neural forecast models. The co-evolutionary criteria evaluation model avoids the oversimplified adaptive functions adopted by general algorithms and the fixed criterion weights that fail to adjust adaptively to environmental change, a major limitation of traditional multi-criteria decision making. Doing so allows the various criteria to adapt in response to changes in the capital allocation chromosomes. Capital allocation chromosomes in the proposed model likewise adapt to the various criteria and evolve in ways that resemble human thinking patterns.
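The following minimal (mu + lambda) evolution-strategy sketch is included only to illustrate the kind of search described above; it is not the authors' model, and the per-stock returns, the risk penalty, and all parameters are hypothetical.

import random

# Evolve a normalized capital-allocation vector to maximize a toy fitness
# (expected return minus a risk penalty). All data below are made up.

MEAN_RETURN = [0.08, 0.12, 0.05, 0.15]   # assumed per-stock expected returns
RISK = [0.10, 0.25, 0.05, 0.40]          # assumed per-stock risk measures
RISK_AVERSION = 1.0

def normalise(weights):
    total = sum(max(w, 0.0) for w in weights)
    return [max(w, 0.0) / total for w in weights] if total else [1.0 / len(weights)] * len(weights)

def fitness(weights):
    ret = sum(w * r for w, r in zip(weights, MEAN_RETURN))
    risk = sum(w * s for w, s in zip(weights, RISK))
    return ret - RISK_AVERSION * risk

def evolve(mu=5, lam=20, sigma=0.05, generations=200, seed=0):
    rng = random.Random(seed)
    parents = [normalise([rng.random() for _ in MEAN_RETURN]) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append(normalise([w + rng.gauss(0.0, sigma) for w in p]))
        parents = sorted(parents + offspring, key=fitness, reverse=True)[:mu]
    return parents[0]

best = evolve()
print("allocation:", [round(w, 3) for w in best], "fitness:", round(fitness(best), 4))

A co-evolutionary variant would additionally let the criteria weights evolve in a second population evaluated against the allocation population, rather than fixing them as above.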

5.
In this paper, the dynamical control of a mixed finite and infinite dimensional mechanical system is approached within the framework of port Hamiltonian systems. In particular, a flexible beam, modeled according to the Timoshenko theory and written in distributed port Hamiltonian form, with a mass under a gravity field attached at the free end, is considered. The control problem is approached by generalizing the concept of structural invariant (Casimir function) and the so-called control by interconnection technique to the infinite dimensional case. In this way, finite dimensional passive controllers can stabilize distributed parameter systems by shaping their total energy, i.e., by assigning a new minimum in the desired equilibrium configuration that can be reached if a dissipation effect is introduced.

6.
Many studies have proposed one‐equation models to represent transport processes in heterogeneous porous media. This approach is based on the assumption that dependent variables such as pressure, temperature, or concentration can be expressed in terms of a single large‐scale averaged quantity in regions having very different chemical and/or mechanical properties. However, one can also develop large‐scale averaged equations that apply to the distinct regions that make up a heterogeneous porous medium. This approach leads to region‐averaged equations that contain traditional convective and dispersive terms, in addition to exchange terms that account for the transfer between the different media. In our approach, the fissures represent one region, and the porous media blocks represent the second region. The analysis leads to upscaled equations having a domain of validity that is clearly identified in terms of time and length‐scale constraints. Closure problems are developed that lead to the prediction of the effective coefficients that appear in the region‐averaged equations, and the main purpose of this article is to provide solutions to those closure problems. The method of solution makes use of an unstructured grid and a joint element method to handle the special characteristics of the fissure network. This new numerical method uses the theory developed by Quintard and Whitaker and is applied to considerably more complex geometries than in previously published results. It has been tested for several special cases, such as stratified systems and "sugarbox" media, and we have compared our calculations with other computational methods. © 2000 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 16: 237–263, 2000
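Schematically (our notation, in the spirit of the two-equation models of Quintard and Whitaker; the paper's exact form may differ), the region-averaged equations for the fissure region \eta and the porous-block region \omega couple convection and dispersion with an exchange term:

\varepsilon_{\eta}\,\frac{\partial \langle c \rangle^{\eta}}{\partial t}
 + \mathbf{V}_{\eta}\cdot\nabla \langle c \rangle^{\eta}
 = \nabla\cdot\left(\mathbf{D}_{\eta}^{*}\cdot\nabla \langle c \rangle^{\eta}\right)
 - \alpha\left(\langle c \rangle^{\eta} - \langle c \rangle^{\omega}\right),

\varepsilon_{\omega}\,\frac{\partial \langle c \rangle^{\omega}}{\partial t}
 + \mathbf{V}_{\omega}\cdot\nabla \langle c \rangle^{\omega}
 = \nabla\cdot\left(\mathbf{D}_{\omega}^{*}\cdot\nabla \langle c \rangle^{\omega}\right)
 + \alpha\left(\langle c \rangle^{\eta} - \langle c \rangle^{\omega}\right).

The closure problems referred to above are what predict the effective dispersion tensors \mathbf{D}_{\eta}^{*}, \mathbf{D}_{\omega}^{*} and the exchange coefficient \alpha from the geometry of the fissure network.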

7.
We first discuss the problem of military decision making, in the context of the more general development of ideas in the representation of decision making. Within this context, we consider a mathematical model of decision making and military command: Bayesian decision. Previous work has been extended and applied to this problem. A distribution of belief over outcomes, given that a decision is made, and a loss function, a measure of the effect of this outcome relative to a goal, are formed. The Bayes decision is the decision which globally minimises the resulting bimodal (or worse) expected loss function. The set of all minimising decisions corresponds to the surface of an elementary catastrophe. This allows smooth parameter changes to lead to a discontinuous change in the Bayes decision. In future work this approach will be used to help develop a number of hypotheses concerning command processes and military headquarters structure. It will also be used to help capture such command and control processes in simulation modelling of future defence capability and force structure.
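In standard notation (ours, not reproduced from the paper), the Bayes decision minimises the expected loss obtained by integrating the loss function against the belief distribution over outcomes:

d^{*} \;=\; \arg\min_{d}\; \mathbb{E}\left[L \mid d\right]
      \;=\; \arg\min_{d} \int L(x)\, p(x \mid d)\, dx .

When \mathbb{E}[L \mid d] has two or more local minima, a smooth change in the parameters of p or L can move the global minimum abruptly from one mode to another; the set of minimisers then traces out the surface of an elementary catastrophe (e.g., a cusp), which is the discontinuity mechanism referred to above.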

8.
Due to the growing popularity of distributed computing systems and the increased level of modelling activity in most organizations, significant benefits can be realized through the implementation of distributed model management systems (DMMS). These systems can be defined as a collection of logically related modelling resources distributed over a computer network. In several ways, the functions of DMMS are isomorphic to those of distributed database systems. In general, this paper examines issues viewed as central to the development of distributed model bases (DMB). Several criteria relevant to the overall DMB design problem are discussed. Specifically, this paper focuses on the problem of distributing decision models and tools (solvers), henceforth referred to as the Model Allocation Problem (MAP), to individual computing sites in a geographically dispersed organization. In this research, a 0/1 integer programming model is formulated for the MAP, and an efficient dual ascent heuristic is proposed. Our extensive computational study shows that in most instances the heuristic generates solutions guaranteed to be within 1.5–7% of optimality. Further, even problems with 420 integer and 160,000 continuous variables took no more than 60 seconds on an IBM 3090-600E computer.
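The abstract does not give the formulation, but a 0/1 integer program for allocating models and solvers to sites plausibly has the familiar facility-location structure for which dual ascent heuristics are classical; a schematic version (our notation and assumptions, not the authors' exact model) is

\min \;\sum_{i}\sum_{j} f_{ij}\, y_{ij} \;+\; \sum_{i}\sum_{j}\sum_{k} c_{ijk}\, x_{ijk}

\text{s.t.}\quad \sum_{j} x_{ijk} = 1 \;\; \forall i,k, \qquad
x_{ijk} \le y_{ij} \;\; \forall i,j,k, \qquad
x_{ijk},\, y_{ij} \in \{0,1\},

where y_{ij} = 1 if model (or solver) i is stored at site j, f_{ij} is the corresponding storage cost, and c_{ijk} is the cost of serving site k's requests for model i from site j.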

9.
Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression‐based models, logit models, and theoretical market‐level models, such as the NBD‐Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This is particularly critical when the model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the detail this type of modeling requires. However, a complementary method, agent‐based modeling, shows promise for addressing these issues. Agent‐based models use business‐driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system‐level outcomes (e.g., to determine whether brand X's market share is increasing). We applied agent‐based modeling to develop a multi‐scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems, where it directly influenced managerial decision making and produced substantial cost savings. © 2010 Wiley Periodicals, Inc. Complexity, 2010

10.
In the health informatics era, modeling longitudinal data remains problematic. The issue is method: health data are highly nonlinear and dynamic, multilevel and multidimensional, comprised of multiple major/minor trends, and causally complex, making curve fitting, modeling, and prediction difficult. The current study is the fourth in a series exploring a case‐based density (CBD) approach for modeling complex trajectories, which has the following advantages: it can (1) convert databases into sets of cases (k‐dimensional row vectors, i.e., rows containing k elements); (2) compute the trajectory (velocity vector) for each case based on (3) a set of bio‐social variables called traces; (4) construct a theoretical map to explain these traces; (5) use vector quantization (i.e., k‐means, topographical neural nets) to longitudinally cluster case trajectories into major/minor trends; (6) employ genetic algorithms and ordinary differential equations to create a microscopic (vector field) model (the inverse problem) of these trajectories; (7) look for complex steady‐state behaviors (e.g., spiraling sources) in the microscopic model; (8) draw from thermodynamics, synergetics, and transport theory to translate the vector field (microscopic model) into the linear movement of macroscopic densities; (9) use the macroscopic model to simulate known and novel case‐based scenarios (the forward problem); and (10) construct multiple accounts of the data by linking the theoretical map and k‐dimensional profile with the macroscopic, microscopic, and cluster models. Given the utility of this approach, our purpose here is to organize our method (as applied to recent research) so it can be employed by others. © 2015 Wiley Periodicals, Inc. Complexity 21: 160–180, 2016
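A minimal sketch of steps (2) and (5) as we read them (our own code and toy data, not the authors'): each case's longitudinal measurements are turned into a trajectory (velocity vector of first differences), and those trajectories are clustered with a small k-means.

import random

def velocity(row):
    # Step (2): first differences of a case's measurements over time.
    return [b - a for a, b in zip(row, row[1:])]

def kmeans(vectors, k, iters=50, seed=0):
    # Step (5): a tiny k-means over the velocity vectors.
    rng = random.Random(seed)
    centres = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centres[j])))
            clusters[i].append(v)
        centres = [[sum(col) / len(c) for col in zip(*c)] if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Hypothetical cases: each row is one case measured at four time points.
cases = [[1, 2, 4, 7], [2, 3, 5, 8], [9, 7, 4, 2], [8, 6, 3, 1]]
centres, clusters = kmeans([velocity(r) for r in cases], k=2)
print(centres)   # one rising and one falling trend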

11.
Production ramp-up is an important phase in the lifecycle of a manufacturing system which still has significant potential for improvement and thereby for reducing the time-to-market of new and updated products. Production systems today are mostly one-of-a-kind, complex, engineered-to-order systems. Their ramp-up is a complex sequence of physical and logical adjustments characterised by trial-and-error decision making, resulting in frequent iterations and unnecessary repetitions. Studies have shown that clear goal setting and feedback can significantly improve the effectiveness of decision making in predominantly human decision processes such as ramp-up. However, few measurement-driven decision aids have been reported which focus on ramp-up improvement, and no systematic approach for ramp-up time reduction has yet been defined. In this paper, a framework for measuring performance during ramp-up is proposed in order to support decision making by providing clear metrics based on the measurable and observable status of the technical system. This work proposes a systematic framework for data preparation, ramp-up formalisation, and performance measurement. A model for defining the ramp-up state of a system has been developed in order to formalise and capture its condition. Functionality-, quality- and performance-based metrics have been identified to formalise a clear ramp-up index that guides and supports human decision making. For the validation of the proposed framework, two ramp-up processes of an assembly station were emulated, and their comparison was used to evaluate this work.

12.
Accounting for the large variation of asphalt mixes, resulting from variations of constituents and composition and from the allowance of additives, a multiscale model for asphalt is currently being developed at the Christian Doppler Laboratory for "Performance‐based optimization of flexible road pavements". The multiscale concept makes it possible to relate macroscopic material properties of asphalt to phenomena and material properties at finer scales of observation. Starting with the characterization of the finest scale, i.e., the bitumen‐scale, Atomic Force Microscopy (AFM) is employed. Depending on the mode of measurement (tapping versus pulsed‐force mode), the AFM provides insight into the surface topography or the stiffness and adhesion properties of bitumen. The obtained results will serve as input for upscaling in the context of the multiscale model in order to obtain the homogenized material behavior of bitumen at the next‐higher scale, i.e., the mastic‐scale. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

13.
Returns to scale is one of the important concepts in data envelopment analysis (DEA): it can be useful for deciding whether to increase or decrease the size of a particular decision making unit. Traditional returns to scale on the efficient surface of the production possibility set with variable returns to scale (VRS) technology is defined as a ratio of proportional changes of the output components to proportional changes of the input components. However, a problem which may arise in the real world is that a proportional change of the input or output components may be impossible or undesirable. One attempt to solve this problem is the work of Yang et al. (2014), who introduced "directional returns to scale" in the DEA framework and proposed procedures to estimate and measure it. In this paper, directional returns to scale is investigated from a new perspective based on the defining hyperplanes of the production possibility set with VRS technology. We propose algebraic equations and linear programming models which, in addition to measuring directional returns to scale, enable us to analyse it. Moreover, we introduce the concepts of the best input and output direction vectors for expansion of the input components or compression of the output components, respectively, and propose two linear programming models to obtain these directions. The presented equations and models are demonstrated using a case study and numerical examples.
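In the usual notation (ours; the paper's definitions may differ in detail), if the inputs of a DMU on the VRS-efficient surface are scaled proportionally by 1 + \alpha and \beta(\alpha) denotes the maximal feasible proportional scaling of the outputs, the traditional (right-hand) scale elasticity is

\varepsilon \;=\; \lim_{\alpha \to 0^{+}} \frac{\beta(\alpha)}{\alpha},

with \varepsilon > 1, \varepsilon = 1, and \varepsilon < 1 read as increasing, constant, and decreasing returns to scale. Directional returns to scale replaces the proportional changes by chosen direction vectors g_{x} for the inputs and g_{y} for the outputs, so that non-proportional expansions or contractions can also be assessed.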

14.
Research on a DEA-based method for allocating pollutant emission quotas
This paper first presents a typical environmental management problem in which pollutant emission quotas are allocated in order to improve environmental conditions. After analysing the characteristics of the problem, a DEA-based method for allocating the pollutant emission quotas is proposed: the quotas are treated as decision variables, and each decision making unit's allocation is obtained while solving for the overall efficiency of the system. A case study of paper mills in the Huaihe River basin is then used to illustrate the rationality and feasibility of the method. Because the proposed method takes the practical circumstances of environmental management into account, it can effectively improve the environmental efficiency of the whole system when allocating the quotas and can provide useful reference information for formulating environmental management policy, so it has considerable application value.

15.
16.
Distributed computing systems are becoming bigger and more complex. Although the complexity of large‐scale distributed systems has been acknowledged to be an important challenge, there has not been much work on defining or measuring system complexity. Thus, today, it is difficult to compare the complexities of different systems, or to state that one system is easier to program, to manage, or to use than another. In this article, we try to understand the factors that cause computing systems to appear very complex to people. We define different aspects of system complexity and propose metrics for measuring these aspects. We also show how these aspects affect different kinds of people, namely developers, administrators, and end‐users. On the basis of the aspects and metrics of complexity that we identify, we propose general guidelines that can help reduce the complexity of systems. © 2007 Wiley Periodicals, Inc. Complexity 12: 37–45, 2007

17.
In mixture experiments, when the mixture model is relatively complex and the number of mixture components is large, it is difficult to verify the optimality of a design ξ. On the one hand, when the model or the constraints are complex, it is hard to prove whether the variance function satisfies the conditions of the optimality criterion; on the other hand, when there are more than three mixture components, optimality cannot be inspected by plotting the surface of the variance function. This article proposes a graphical test that can be used to verify the optimality of symmetric mixture designs; analysis of examples shows that the method is effective.

18.
We develop and analyze a negative norm least‐squares method for the compressible Stokes equations with an inflow boundary condition. Least‐squares principles are derived for a first‐order form of the equations obtained by using ω = ∇×u and φ = ∇·u as new dependent variables. The resulting problem is incompletely elliptic, i.e., it combines features of elliptic and hyperbolic equations. As a result, well‐posedness of least‐squares functionals cannot be established using the ADN elliptic theory and so we use direct approaches to prove their norm‐equivalence. The article concludes with numerical examples that illustrate the theoretical convergence rates. © 2005 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2006
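For orientation (a standard vector-calculus identity, not taken from the article): introducing ω = ∇×u and φ = ∇·u lets the second-order viscous term be rewritten through

-\Delta \mathbf{u} \;=\; \nabla \times \boldsymbol{\omega} \;-\; \nabla \varphi,
\qquad \boldsymbol{\omega} = \nabla \times \mathbf{u}, \quad \varphi = \nabla \cdot \mathbf{u},

so the momentum equation becomes first order in the new variables, while the linearized continuity equation contributes a first-order transport part; it is this hyperbolic part that makes the system incompletely elliptic and requires an inflow boundary condition.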

19.
This paper derives a maximum principle for dynamic systems with continuous lags, i.e., systems governed by integrodifferential equations, using dynamic programming. As a result, the adjoint variables can be provided with useful economic interpretations. This research was supported by NSERC Grant No. A4619.
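One standard way to write such a system with continuous lags (our notation; the paper's setting may differ in detail) is

\dot{x}(t) \;=\; f\big(x(t), u(t), t\big) \;+\; \int_{0}^{t} g\big(x(s), u(s), t, s\big)\, ds,
\qquad x(0) = x_{0},

with an objective such as \max_{u} \int_{0}^{T} F\big(x(t), u(t), t\big)\, dt. Because the state and control at time s keep influencing the dynamics at all later times t through the kernel g, the adjoint variable at time s aggregates these future effects, which is what supports its interpretation as a shadow price.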

20.
In this paper, we study the problem of collective decision-making over combinatorial domains, where the set of possible alternatives is a Cartesian product of (finite) domain values for each of a given set of variables, and these variables are not preferentially independent. Due to the large alternative space, most common rules for social choice cannot be directly applied to compute a winner. We introduce a distributed protocol for collective decision-making in combinatorial domains, which enjoys the following desirable properties: (i) the final decision chosen is guaranteed to be a member of the Smith set; (ii) it enables distributed decision-making and works under incomplete information settings, i.e., the agents are not required to reveal their preferences explicitly; (iii) it significantly reduces the number of dominance tests (individual outcome comparisons) that each agent needs to conduct, as well as the number of pairwise comparisons; (iv) it is sufficiently general and does not restrict the choice of preference representation languages.
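To illustrate property (i) only, the sketch below (our code and hypothetical tallies, not the paper's protocol) computes the Smith set, i.e., the smallest set of alternatives each of which beats every alternative outside the set in pairwise majority, from a pairwise-preference table.

def beats(margins, a, b):
    # a beats b if strictly more agents prefer a to b than b to a.
    return margins[a][b] > margins[b][a]

def smith_set(candidates, margins):
    wins = {c: sum(beats(margins, c, d) for d in candidates if d != c) for c in candidates}
    ordered = sorted(candidates, key=lambda c: wins[c], reverse=True)
    for k in range(1, len(ordered) + 1):
        prefix, rest = ordered[:k], ordered[k:]
        if all(beats(margins, a, b) for a in prefix for b in rest):
            return set(prefix)       # smallest dominant set = Smith set

candidates = ["x", "y", "z"]
# margins[a][b] = number of agents preferring a to b (hypothetical tallies)
margins = {
    "x": {"y": 7, "z": 3},
    "y": {"x": 3, "z": 6},
    "z": {"x": 7, "y": 4},
}
print(smith_set(candidates, margins))   # {'x', 'y', 'z'}: a Condorcet cycle

In this example the three alternatives beat one another cyclically, so the Smith set is the whole alternative set; with a Condorcet winner it would shrink to that single alternative.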
