20 similar documents found (search time: 15 ms)
1.
M Carey 《The Journal of the Operational Research Society》2009,60(3):395-410
Several analytic approaches have been developed to describe or predict traffic flows on networks with time-varying (dynamic) travel demands, flows and travel times. A key component of these models lies in modelling the flows and/or travel times on the individual links, but as this is made more realistic or accurate it tends to make the overall model less computationally tractable. To help overcome this, and for other reasons, we develop a bi-level user equilibrium (UE) framework that separates the assignment or loading of flows on the time–space network from the modelling of flows and trip times within individual links. We show that this model or framework satisfies appropriate definitions of UE, satisfies a first-in-first-out (FIFO) property of road traffic, and has other desirable properties. The model can be solved by iterating between (a) a linear network-loading model that takes the lengths of time–space links as fixed (within narrow ranges), and (b) a set of link flow sub-models which update the link trip times to construct a new time–space network. This allows links to be processed sequentially or in parallel and avoids having to enumerate paths and compute path flows or travel times. We test and demonstrate the model and algorithms using example networks and find that the algorithm converges quickly and the solutions behave as expected. We show how to extend the model to handle elastic demands, multiple destinations and multiple traffic types, and traffic spillback within links and from link to link.
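The alternation between steps (a) and (b) can be sketched as a generic fixed-point loop; the two solver callables, the convergence test, and the toy logit-loading example in the test are illustrative assumptions, not the paper's algorithm:

```python
def solve_bilevel(load_network, update_link_times, t0, tol=1e-6, max_iter=100):
    """Skeleton of an iterative bi-level scheme: alternate between a
    network-loading step with link trip times held fixed and link sub-models
    that recompute trip times. Both callables are user-supplied placeholders;
    this loop and its stopping rule are an assumed sketch."""
    t = t0
    for _ in range(max_iter):
        flows = load_network(t)           # (a) assign flows on the fixed time-space net
        t_new = update_link_times(flows)  # (b) link sub-models produce new trip times
        gap = max(abs(a - b) for a, b in zip(t_new, t))
        t = t_new
        if gap < tol:                     # trip times reproduced => consistent point
            break
    return flows, t
```

When the link models reproduce the trip times used for loading, the two levels are mutually consistent, which is the sense in which the iteration targets an equilibrium.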
2.
We study a dynamic way of automatically generating sequences of songs in this paper. Starting from a given seed song, a sequence is generated on-the-fly while listening to the music. During this process, the user can express his or her dislike for the currently played song by pressing a skip button. When the current song is finished or skipped, a heuristic is used to choose the next song to be played. We introduce fuzzy set theory as a formalism for defining such heuristics, because this allows us to make the definitions systematic, formal, and intuitively clear. By doing this, we obtain a general unified framework that includes all heuristics introduced previously by other authors. We compare these existing heuristics with several novel ones by means of extensive experimental evaluations. Moreover, we demonstrate that the presented formal framework can be used to easily generate variations of a heuristic that perform significantly better under specific circumstances.
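As an illustration of encoding such a heuristic with fuzzy sets, the sketch below combines two fuzzy memberships with the minimum t-norm. The Gaussian-style membership function, the feature-vector song representation, and the parameter `alpha` are assumptions for illustration; this is not one of the paper's specific heuristics:

```python
import numpy as np

def next_song(candidates, seed, skipped, alpha=2.0):
    """Pick the next song with a simple fuzzy heuristic (illustrative sketch).
    Songs are feature vectors; membership in the fuzzy set 'fits the seed'
    decays with distance, and membership in 'acceptable to the user' decays
    with similarity to previously skipped songs."""
    def mu_similar(a, b):
        # Gaussian-style membership in [0, 1]: 1 when identical, -> 0 far apart
        return float(np.exp(-alpha * np.sum((a - b) ** 2)))

    best, best_score = None, -1.0
    for idx, song in candidates.items():
        fit = mu_similar(song, seed)
        # Fuzzy AND over "not similar to each skipped song", via complement + min
        accept = min((1.0 - mu_similar(song, s) for s in skipped), default=1.0)
        score = min(fit, accept)  # minimum t-norm combines the two criteria
        if score > best_score:
            best, best_score = idx, score
    return best
```

Variations of the heuristic are obtained by swapping the membership functions or the t-norm (e.g. product instead of minimum), which is the kind of systematic variation the fuzzy formalism makes easy.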
3.
A hyperbolic model to study effects of industrialization and urbanization on air pollution propagation is proposed.
4.
《Mathematical and Computer Modelling》2004,39(6-8):885-896
Future operations by the U.S. military services will require greater collaboration within the government and with the private sector. Commercial enterprises that normally compete with one another will have to cooperate to satisfy the goals of the operation. For example, the military uses commercial airlift assets to support the movement of soldiers and cargo to the theater. Ideally, the military would like to receive adequate commercial airlift capacity at a reasonable cost, while the commercial air carriers would like to balance their workload and minimize the disruption to their daily operations. We present a distributed optimization approach that uses software agents—representing the interests of the military and commercial carriers—to collaboratively plan the airlift. By auctioning the missions and allowing carriers to swap missions when mutually beneficial, this approach cuts the controllable operating costs and schedule disruption costs by more than half compared with a centralized planning approach currently used.
5.
A commonly used method of monitoring the condition of rail track is to run an inspection vehicle over the track at intervals of about 3 months. Measurements of several geometric properties of the track are automatically recorded about every 900 mm, resulting in long sequences of data (signals) arising from runs of up to 100 km. Condition monitoring is done by comparing the results of a current run with those of a previously recorded reference run. Before this can be done, the two signals need to be aligned so that corresponding distance measurements in each signal actually refer to the same point on the track. A procedure for matching the two signals is presented, which has at its heart a dynamic programming method. The procedure is demonstrated on data from rail tracks in Australia.
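A dynamic-programming matcher of this kind can be sketched as follows. The absolute-difference cost, the banded search window, and the recursion are generic dynamic-time-warping assumptions, not the authors' exact procedure:

```python
import numpy as np

def align_signals(reference, current, band=50):
    """Align two track-geometry signals by dynamic programming (a DTW-style
    sketch). Returns the total alignment cost and the matched index pairs."""
    n, m = len(reference), len(current)
    INF = float("inf")
    cost = np.full((n + 1, m + 1), INF)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        # Restrict the search to a diagonal band: large offsets between two
        # runs over the same track are physically implausible.
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            d = abs(reference[i - 1] - current[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a reference sample
                                 cost[i, j - 1],      # skip a current sample
                                 cost[i - 1, j - 1])  # match the two samples
    # Backtrack to recover which samples were matched to which.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```

After alignment, the per-position differences between the two signals can be compared directly for condition monitoring, since matched indices refer to the same point on the track.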
6.
E Pratsini 《The Journal of the Operational Research Society》1999,50(5):526-530
Carbon containing aerosol is the most abundant particulate air pollutant species. It causes poor visibility and can be toxic. Tracing its origins is an important step in environmental management and control. This study analyses the carbon concentrations at Duarte, CA (a suburban site near Los Angeles) and in Lennox, CA (a site next to a Los Angeles freeway). Concentrations inside a tunnel are also available and used to derive a motor vehicle emission profile. A new approach is proposed for calculating the motor vehicle contribution to organic carbon and the amount of background carbon found at these two sites. Regression analysis provides insight into the formation of organic carbon, and frontier analysis is used to calculate the motor vehicle contribution to organic carbon and the amount of background carbon in the atmosphere. The information obtained from this analysis can be used in the regulation of motor vehicle emissions and in air pollution control.
7.
This paper describes the allocation of a wastewater treatment fund within a region based on a dynamic input-output model. Considering the complexity of the input-output process, many indeterminate factors must be included in the model. For example, with the aging of machines, an unexpected loss will be caused by the retention of raw materials during an operation; this can be realistically considered as a random variable, because of the sufficiently large amount of historical data. By contrast, actions such as a temporary transfer or inexperienced operators can only be regarded as uncertain variables, because of a lack of historical data. First, the pollution control model is formulated in an uncertain environment by including both human uncertainty and objective randomness. Second, an optimal control model subject to an uncertain random singular system is established; this model can be transformed into an equivalent optimization problem. To solve such a problem, recurrence equations are presented based on Bellman’s principle, and these were successfully applied to address the optimal control problem in two special cases. Moreover, two algorithms are formulated for solving the pollution control problem. Finally, the optimal distribution strategies of the pollution control fund used to control the emissions of COD and NH3-H, which are two indicators of wastewater in China, were obtained through the proposed algorithms.
8.
《European Journal of Operational Research》2005,165(1):219-230
Supply chain management has gained renewed interest among researchers in recent years. This is primarily due to the availability of timely information across the various stages of the supply chain, and therefore the need to effectively utilize the information for improved performance. Although information plays a major role in effective functioning of supply chains, there is a paucity of studies that deal specifically with the dynamics of supply chains and how data collected in these systems can be used to improve their performance. In this paper I develop a framework, with machine learning, for automated supply chain configuration. Supply chain configuration used to be mostly a one-shot problem. Once a supply chain is configured, researchers and practitioners were more interested in means to improve performance given that initial configuration. However, recent developments in e-commerce applications and faster communication over the Internet in general necessitate dynamic (re)configuration of supply chains over time to take advantage of better configurations. Using examples, I show performance improvements of the proposed adaptive supply chain configuration framework over static configurations.
9.
Alberto Ceselli Federico Liberatore Giovanni Righini 《Annals of Operations Research》2009,167(1):209-251
The purpose of this paper is to illustrate a general framework for network location problems, based on column generation and branch-and-price. In particular we consider capacitated network location problems with single-source constraints. We consider several different network location models, by combining cardinality constraints, fixed costs, concentrator restrictions and regional constraints. Our general branch-and-price-based approach can be seen as a natural counterpart of the branch-and-cut-based commercial ILP solvers, with the advantage of exploiting the tightness of the lower bound provided by the set partitioning reformulation of network location problems. Branch-and-price and branch-and-cut are compared through an extensive set of experimental tests.
10.
Alexander M. Hoole Issa Traore Isabelle Simplot-Ryl 《Mathematical and Computer Modelling》2011,53(3-4):522-537
Telecommunication software systems, containing security vulnerabilities, continue to be created and released to consumers. We need to adopt improved software engineering practices to reduce the security vulnerabilities in modern systems. Contracts can provide a useful mechanism for the identification, tracking, and validation of security vulnerabilities. In this work, we propose a new contract-based security assertion monitoring framework (CB_SAMF) that is intended to reduce the number of security vulnerabilities that are exploitable across multiple software layers, and to be used in an enhanced systems development life cycle (SDLC). We show how contract-based security assertion monitoring can be achieved in a live environment on Linux. Through security activities integrated into the SDLC we can identify potential security vulnerabilities in telecommunication systems, which in turn are used for the creation of contracts defining security assertions. Our contract model is then applied, as runtime probes, against two common security related vulnerabilities in the form of a buffer overflow and a denial of service.
11.
J Whittaker C Whitehead M Somers 《The Journal of the Operational Research Society》2007,58(7):911-921
A principled technique for monitoring the performance of a consumer credit scorecard through time is derived from Kalman filtering. Standard approaches sporadically compare certain characteristics of the new applicants with those predicted from the scorecard. The new approach systematically updates the scorecard combining new applicant information with the previous best estimate. The dynamically updated scorecard is tracked through time and compared to limits calculated by sequential simulation from the baseline scorecard. The observation equation of the Kalman filter is tailored to take the results of fitting local scorecards by logistic regression to batches of new clients that arrive in the current time interval. The states in the Kalman filter represent the true or underlying score for each attribute in the card: the parameters of the logistic regression. Their progress in time is modelled by a random walk and the filter provides the best estimate of the scores using past and present information. We illustrate the technique using a commercial mortgage portfolio and the results indicate significant emerging deficiencies in the baseline scorecard.
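A minimal sketch of one filtering step under these modelling choices: a random-walk state model for the score parameters, with the batch logistic-regression coefficients treated as the observation. The covariance inputs Q and R are tuning assumptions, not values from the paper:

```python
import numpy as np

def kalman_update(theta, P, z, R, Q):
    """One step of a random-walk Kalman filter for scorecard parameters.
    theta, P : prior estimate of the underlying scores and its covariance
    z, R     : batch logistic-regression coefficients and their covariance
    Q        : random-walk (drift) covariance, an assumed tuning input."""
    # Predict: under a random walk the mean is unchanged and the
    # uncertainty grows by Q between batches.
    P_pred = P + Q
    # Update: blend the prediction with the new batch estimate.
    K = P_pred @ np.linalg.inv(P_pred + R)         # Kalman gain
    theta_new = theta + K @ (z - theta)
    P_new = (np.eye(len(theta)) - K) @ P_pred
    return theta_new, P_new
```

The gain K automatically weights the batch estimate by its precision: a small, noisy batch (large R) barely moves the scorecard, while a large, informative batch pulls it strongly toward the new coefficients.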
12.
Evaluation of the overall effectiveness of decision support systems (DSS) has been a research topic since the early 1980s. As artificial intelligence methods have been incorporated into systems to create intelligent decision support systems (IDSS), researchers have attempted to quantify the value of the additional capabilities. Despite the useful and relevant insights generated by previous research, existing evaluation methodologies offer only a fragmented and incomplete view of IDSS value and the contribution of its technical infrastructure. This paper proposes an integrative, multiple criteria IDSS evaluation framework through a model that links the decision value of an IDSS to both the outcome from, and process of, decision making and down to specific components of the IDSS. The proposed methodology provides the designer and developer specific guidance on the intelligent tools most useful for a specific user with a particular decision problem. The proposed framework is illustrated by evaluating an actual IDSS that coordinates management of urban infrastructures.
13.
The purpose of this article is to consider a two-firm excess-of-loss reinsurance problem. The first firm is the direct underwriter, while the second firm is the reinsurer. As in the classical model of collective risk theory, it is assumed that premium payments are received deterministically from policyholders at a constant rate, while the claim process is determined by a compound Poisson process. The objective of the underwriter is to maximize the expected present value of the long run terminal wealth (investments plus cash) of the firm by selecting an appropriate excess-of-loss coverage strategy, while the reinsurer seeks to maximize its total expected discounted profit by selecting an optimal loading factor. Since both firms' policies are interdependent, we define an insurance game, solved by employing a Stackelberg solution concept. A diffusion approximation is used in order to obtain tractable results for a general claim size distribution. Finally, an example is presented illustrating computational procedures.
14.
《Chaos, solitons, and fractals》2000,11(9):1365-1368
A general framework is proposed for synchronization theory on finite-dimensional dynamical systems, with the aim of providing a rigorous unified notion that describes the various synchronization phenomena observed in physical systems.
15.
A framework for positive dependence
George Kimeldorf Allan R. Sampson 《Annals of the Institute of Statistical Mathematics》1989,41(1):31-45
This paper presents, for bivariate distributions, a unified framework for studying and relating three basic concepts of positive dependence. These three concepts are positive dependence orderings, positive dependence properties and measures of positive dependence. The latter two concepts are formally defined and their properties discussed. Interrelationships among these three concepts are given, and numerous examples are presented.
Supported by the National Science Foundation under Grant DMS-8301361. Supported by the Air Force Office of Scientific Research under Contract 84-0113. Reproduction in whole or part is permitted for any purpose of the United States Government.
16.
《European Journal of Operational Research》1997,96(1):1-35
This paper surveys the current state of the literature in management science/operations research approaches to air pollution management. After introducing suitable background, we provide some of the institutional and legal framework needed to understand the continuing regulatory efforts in the United States. Attention is then turned to mathematical programming models ranging from fairly simple deterministic linear programs to quite sophisticated stochastic models which have appeared in the literature dealing with these topics. This is followed by extensions reflecting some of the work we have undertaken in association with the Texas Natural Resource Conservation Commission, a regulatory agency in Texas. Application and potential use of models is the central theme of this survey. Issues for future research are presented at the end, and an extensive list of publications is provided in the references. Principal air quality issues of local, national, and international concern are listed below in increasing order of difficulty, based on the number of different types of pollutants and on problems in quantifying the risks the pollutants pose:
1. Stratospheric ozone depletion: one relatively easily controllable class of trace gases — ozone-depleting chemicals, or ODCs, principally chlorofluorocarbons (CFCs) — with relatively well quantified risks;
2. Criteria pollutants: six common pollutants — ozone (O3), carbon monoxide (CO), sulfur dioxide (SO2), nitrogen dioxide (NO2), lead (Pb), and particulate matter less than 10 microns in size (PM10) — regulated since 1970 in the U.S. and presenting relatively well quantified risks;
3. Acid precipitation: two relatively easily controllable classes of trace gases — oxides of nitrogen (NOx) and oxides of sulfur (SOx) — with relatively well quantified risks;
4. Global warming/climate change: a few difficult-to-control trace gases — principally carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and CFCs — with highly uncertain risks;
5. Toxics or HAPs (hazardous air pollutants): hundreds of types of gaseous chemicals and particles with uncertain risks.
Somewhat dated, but nevertheless useful, is the following reference: Glossary on Air Pollution (Copenhagen, World Health Organization, 1980).
17.
Selmo Tauber 《Applied mathematics and computation》1978,4(2):167-176
The use of the Hadamard product is extended and applied to the solution of a partial differential equation arising from an air pollution problem.
18.
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions two versions of the so-called αBB-underestimator, valid for twice-differentiable functions, are integrated in the actual reformulation framework. However, in contrast to what is done in branch-and-bound type algorithms, no direct branching is performed in the actual algorithm. Instead a piecewise convex reformulation is used to convexify the entire problem in an extended variable-space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solution to the convexified and overestimated problem will form a converging sequence towards a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems.
19.
Geert Van Damme 《Journal of Computational and Applied Mathematics》2011,235(8):2523-2550
In this document a method is discussed to incorporate stochastic Loss-Given-Default (LGD) in factor models, i.e. structural models for credit risk. The general idea exhibited in this text is to introduce a common dependence of the LGD and the probability of default (PD) on a latent variable, representing the systemic risk. Though our theory can be applied to any arbitrary firm-value model and any underlying distribution for the LGD, provided its support is a compact subset of [0,1], special attention is given to the extension of the well-known cases of the Gaussian copula framework and the shifted Gamma one-factor model (a particular case of the generic one-factor Lévy model), and the LGD is modeled by a Beta distribution, in accordance with rating agency models and the Credit Metrics model. In order to introduce stochastic LGD, a monotonically decreasing relation is derived between the loss rate L, i.e. the loss as a percentage of the total exposure, and the standardized log-return R of the obligor’s asset value, which is assumed to be a function of one or more systematic and idiosyncratic risk factors. The property that the relation is decreasing guarantees that the LGD is negatively correlated to R and hence positively correlated to the default rate. From this relation, expressions are then derived for the cumulative distribution function (CDF) and the expected value of the loss rate and the LGD, conditionally on a realization of the systematic risk factor(s). It is important to remark that all our results are derived under the large homogeneous portfolio (LHP) assumption and that they are fully consistent with the IRB approach outlined by the Basel II Capital Accord. We will demonstrate the impact of incorporating stochastic LGD and using models based on skew and fat-tailed distributions in determining adequate capital requirements. Furthermore, we also sketch the potential application of the proposed framework in a credit risk environment.
It will turn out that both building blocks, i.e. stochastic LGD and fat-tailed distributions, separately, increase the projected loss and thus the required capital charge. Hence, the aggregation of a model based on a fat-tailed underlying distribution that accounts for stochastic LGD will lead to sound capital requirements.
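A minimal Monte Carlo sketch of a one-factor Gaussian-copula portfolio with an LGD that is monotonically decreasing in the standardized return R. The logistic link L(R) = 1/(1 + exp(R)) and all parameter values are illustrative assumptions, not the paper's Beta-distribution calibration:

```python
import numpy as np
from statistics import NormalDist

def simulate_loss_rates(pd, rho, n_scenarios=10000, n_obligors=200, seed=0):
    """Simulate portfolio loss rates in a one-factor Gaussian copula with
    stochastic LGD. Exposures are equal; pd is the unconditional default
    probability and rho the asset correlation (both assumed inputs)."""
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(pd)                  # default iff R < threshold
    z = rng.standard_normal((n_scenarios, 1))             # systematic factor
    eps = rng.standard_normal((n_scenarios, n_obligors))  # idiosyncratic factors
    r = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps         # standardized log-returns
    defaulted = r < threshold
    # Decreasing link LGD(R): worse asset returns imply higher loss severity,
    # so the LGD is positively correlated with the default rate.
    lgd = np.where(defaulted, 1.0 / (1.0 + np.exp(r)), 0.0)
    return lgd.mean(axis=1)                               # loss rate per scenario
```

Under these assumptions, a high quantile of the simulated loss-rate distribution (e.g. 99.9%) serves as a rough analogue of the capital charge; it rises when rho is increased or the LGD link is made steeper, illustrating the effect of both building blocks.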
20.
A normative framework for agent-based systems
Fabiola López y López Michael Luck Mark d’Inverno 《Computational & Mathematical Organization Theory》2006,12(2-3):227-250
One of the key issues in the computational representation of open societies relates to the introduction of norms that help to cope with the heterogeneity, the autonomy and the diversity of interests among their members. Research regarding this issue presents two omissions. One is the lack of a canonical model of norms that facilitates their implementation, and that allows us to describe the processes of reasoning about norms. The other refers to considering, in the model of normative multi-agent systems, the perspective of individual agents and what they might need to effectively reason about the society in which they participate. Both are the concerns of this paper, and the main objective is to present a formal normative framework for agent-based systems that facilitates their implementation.
F. López y López is a researcher in the Computer Science Faculty at the Benemérita Universidad Autónoma de Puebla in México, from where she got her first degree. She also gained an MSc in Computation from the Universidad Nacional Autónoma de México and a PhD in Computer Science from the University of Southampton in the United Kingdom. She is leading several theoretical and practical projects that use multi-agent systems as the main paradigm. Her research has been focused on Autonomous Normative Agents and Normative Multi-Agent Systems, and she has published over 20 articles on these and related topics.
M. Luck is Professor of Computer Science in the Intelligence, Agents, Multimedia Group of the School of Electronics and Computer Science at the University of Southampton, where he carries out research into the theory and practice of agent technology. He has published over 150 articles in these and related areas, both alone and in collaboration with others, and has published eight books. He is a member of the Executive Committee of AgentLink III, the European Network of Excellence for Agent-Based Computing. He is a co-founder of the European Multi-Agent Systems workshop series, is co-founder and Chair of the steering committee of the UK Multi-Agent Systems Workshops (UKMAS), and was a member of the Management Board of Agentcities.NET. Professor Luck is also a steering committee member for the Central and Eastern European Conference on Multi-Agent Systems. He is series editor for Artech House’s Agent Oriented Systems series, and an editorial board member of the Journal of Autonomous Agents and Multi-Agent Systems, the International Journal of Agent-Oriented Software Engineering, and ACM Transactions on Autonomous and Adaptive Systems.
M. d’Inverno gained a BA in Mathematics and an MSc in Computation, both from Oxford University. He was also awarded a PhD from University College London. He joined the University of Westminster in 1992 as a Lecturer, became a senior lecturer in 1998 and a reader in 1999, and was appointed professor of computer science in 2001. He is interested in formal, principled approaches to modelling both natural and artificial systems in a computational setting. The main strand of this research focuses on the application of formal methods in providing models of intelligent agent and multi-agent systems. His approach has sought to take a structured approach to the development of practical agent systems from theoretical models. He has published over 70 articles in these areas and has published four books and edited collections.