Similar Literature
A total of 20 similar documents were found.
1.
Constraint programming offers modeling features and solution methods that are unavailable in mathematical programming but are often flexible and efficient for scheduling and other combinatorial problems. Yet mathematical programming is well suited to declarative modeling languages and is more efficient for some important problem classes. This raises the issue of whether the two approaches can be combined in a declarative modeling framework. This paper proposes a general declarative modeling system in which the conditional structure of the constraints shows how to integrate any “checker” and any special-purpose “solver”. In particular, this integrates constraint programming and optimization methods, because the checker can consist of constraint propagation methods, and the solver can be a linear or nonlinear programming routine.
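As a rough illustration of the checker/solver split described above (a minimal sketch, not the system proposed in the paper; the class, function names, and the use of scipy's linprog as the "solver" are assumptions for illustration only), a conditional constraint can pair a discrete antecedent handled by a propagation-style checker with a linear consequent handed to an LP routine:

from scipy.optimize import linprog

class ConditionalConstraint:
    """If the antecedent holds on the discrete variables (checker's job),
    the linear consequent A_ub @ x <= b_ub is activated (solver's job)."""
    def __init__(self, antecedent, A_ub, b_ub):
        self.antecedent = antecedent
        self.A_ub, self.b_ub = A_ub, b_ub

def solve(discrete_assignment, conditionals, c, bounds):
    A, b = [], []
    for con in conditionals:
        if con.antecedent(discrete_assignment):   # "checker": evaluate the condition
            A.extend(con.A_ub)
            b.extend(con.b_ub)
    # "solver": any special-purpose routine could be plugged in here
    return linprog(c, A_ub=A or None, b_ub=b or None, bounds=bounds)

# Toy example: if machine m is switched on, enforce x1 + x2 <= 10; maximize x1 + x2.
cc = ConditionalConstraint(lambda y: y["m"] == "on", [[1.0, 1.0]], [10.0])
res = solve({"m": "on"}, [cc], c=[-1.0, -1.0], bounds=[(0, 8), (0, 8)])
print(res.x)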

2.
This paper addresses the problem of quantifying and modeling financial institutions’ operational risk in accordance with the Advanced Measurement Approach put forth in the Basel II Accord. We argue that standard approaches focusing on modeling stochastic dependencies are not sufficient to adequately assess operational risk. In addition to stochastic dependencies, causal topological dependencies between the risk classes are typically encountered. These dependencies arise when risk units have common information- and/or work-flows and when failure of upstream processes implies risk for downstream processes. In this paper, we present a modeling strategy that explicitly captures both topological and stochastic dependencies between risk classes. We represent the operational-risk taxonomy in the framework of a hybrid Bayesian network (BN) and provide an intuitively compelling approach for handling causal relationships and external influences. We demonstrate the use of hybrid BNs as a tool for mapping causal dependencies between frequencies and severities of risk events and for modeling common shocks. Monte-Carlo simulations illustrate that the impact of topological dependencies on triggering overall system breakdowns can be substantial.
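A minimal Monte-Carlo sketch of the kind of topological dependency discussed above (purely illustrative; the distributions, parameters, and the single upstream/downstream link are assumptions, not the paper's Bayesian network):

import numpy as np

rng = np.random.default_rng(0)
N = 50_000            # simulated years

# Upstream risk class: Poisson event frequency, lognormal severities.
up_freq = rng.poisson(2.0, N)
# Topological dependency: upstream failures raise the downstream event frequency.
down_freq = rng.poisson(1.0 + 0.5 * up_freq)

def annual_loss(freqs, mu, sigma):
    # Compound loss: sum of lognormal severities for each year's event count.
    return np.array([rng.lognormal(mu, sigma, n).sum() for n in freqs])

total = annual_loss(up_freq, 10.0, 1.0) + annual_loss(down_freq, 9.0, 1.2)
print("99.9% quantile of total annual loss:", np.quantile(total, 0.999))

Even in this toy setup, raising the coupling coefficient (0.5 above) fattens the tail of the total loss, which is the effect on overall breakdowns that the abstract refers to.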

3.
Central to the Model Management (MM) function is the creation and maintenance of a knowledge-based model repository. The Model Knowledge Base (MKB) provides the basis by which information about models can be shared to facilitate consistent and controlled utilization of existing models for decision making, as well as the development of new models. Various schemes for representing individual models have been proposed in the literature. This paper focuses on how best to structure, control, and administer a large MKB to support organization-wide modeling activities. Guided by a recently proposed systems framework for MM, we describe a number of concepts which are useful for capturing the semantics and structural relationships of models in an MKB. These concepts, and the nature of the MMS functions to be supported, are then used to derive specific information management requirements for model bases. Four major requirements are identified: (1) management of composite model configurations; (2) management of model version histories; (3) support for the model consultation and selection functions of an MMS; and (4) support for multiple logical MKBs (private, group, and public). We argue that traditional record-based approaches to data management appear to fall short of capturing the rich semantics present in an MM environment. The paper proposes an architecture for an MMS, focusing on its major component — the MKB Management Subsystem. An implementation of this architecture is briefly described.

4.
The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multidimensional diffusions is particularly challenging. In principle, this involves data augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretization of idealized approaches for the continuous-time diffusion setup or through the use of standard finite-dimensional methodologies for the discretization of the diffusion model. The connections between these approaches have not been well studied. This article provides a unified framework that brings together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for irreducible diffusions, and our framework is correspondingly more complex in that case. Therefore, we treat the reducible and irreducible cases differently within the article. Supplementary materials for the article are available online.
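For orientation only (this is the generic Euler–Maruyama approximation, not necessarily the discretization analyzed in the paper): data augmentation typically imputes the latent path on a fine grid using a Gaussian transition density of the form

\[
X_{t+\Delta} \mid X_t = x \;\approx\; \mathcal{N}\big( x + b(x)\,\Delta,\; \sigma(x)\sigma(x)^{\top} \Delta \big)
\qquad \text{for} \qquad dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t,
\]

so that likelihood-based inference alternates between imputing the missing path segments between observations and updating the drift and diffusion parameters.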

5.
We present a general modeling framework for the robust optimization of linear network problems with uncertainty in the values of the right-hand side. In contrast to traditional approaches in mathematical programming, we use scenarios to characterize the uncertainty. Solutions are obtained for each scenario and these individual scenarios are aggregated to yield a nonanticipative or implementable policy that minimizes the regret of wrong decisions. A given solution is termed robust if it minimizes the sum over the scenarios of the weighted upper difference between the objective function value for the solution and the objective function value for the optimal solution for each scenario, while satisfying certain nonanticipativity constraints. This approach results in a huge model with a network submodel per scenario plus coupling constraints. Several decomposition approaches are considered, namely Dantzig-Wolfe decomposition, various types of Benders decomposition and different quadratic network approaches for approximating Augmented Lagrangian decomposition. We present computational results for these methods, including two implementation versions of the Lagrangian based method: a sequential implementation and a parallel implementation on a network of three workstations.
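A schematic form of the robust objective described above (our notation, assumed for illustration; the exact constraint sets and weights are those of the paper's network submodels): with scenario set S, weights w_s, per-scenario solutions x^s, and z_s^* the optimal value of scenario s solved in isolation,

\[
\min_{x,\,\{x^s\}} \; \sum_{s \in S} w_s \big( c_s^{\top} x^s - z_s^{*} \big)
\quad \text{s.t.} \quad x^s \in \mathcal{X}_s \ \ \forall s \in S,
\qquad x^s_{\mathrm{na}} = x \ \ \forall s \in S,
\]

where X_s is the feasible network region of scenario s and x^s_na denotes the nonanticipative components forced to agree with the implementable policy x. The coupling (nonanticipativity) constraints are what turn the per-scenario network submodels into the single huge model that the decomposition methods attack.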

6.
In a companion paper (Cromvik and Patriksson, Part I, J. Optim. Theory Appl., 2010), the mathematical modeling framework SMPEC was studied; in particular, global optima and stationary solutions to SMPECs were shown to be robust with respect to the underlying probability distribution under certain assumptions. Further, the framework and theory were elaborated to cover extensions of the upper-level objective: minimization of the conditional value-at-risk (CVaR) and treatment of the multiobjective case. In this paper, we consider two applications of these results: a classic traffic network design problem, where travel costs are uncertain, and the optimization of a treatment plan in intensity modulated radiation therapy, where the machine parameters and the position of the organs are uncertain. Owing to the generality of SMPEC, we can model these two very different applications within the same framework. Our findings illustrate the large potential in utilizing the SMPEC formalism for modeling and analysis purposes; in particular, information from scenarios in the lower-level problem may provide very useful additional insights into a particular application.
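For reference, the conditional value-at-risk used in the upper-level objective can be written in the standard Rockafellar–Uryasev form (a general definition, not the paper's specific notation):

\[
\mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{t \in \mathbb{R}} \Big\{ t + \frac{1}{1-\alpha}\, \mathbb{E}\big[ (Z - t)_{+} \big] \Big\},
\]

which, being expressible as a minimization over a single auxiliary variable, fits naturally into the upper level of an SMPEC.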

7.
The rapid progress of communications technology has created new opportunities for modeling and optimizing the design of local telecommunication systems. The complexity, diversity, and continuous evolution of these networks pose several modeling challenges. In this paper, we present an overview of the local telephone network environment, and discuss possible modeling approaches. In particular, we (i) discuss the engineering characteristics of the network, and introduce terminology that is commonly used in the communications industry and literature; (ii) describe a general local access network planning model and framework, and motivate different possible modeling assumptions; (iii) summarize various existing planning models in the context of this framework; and (iv) describe some new modeling approaches. The discussion in this paper is directed both to researchers interested in modeling local telecommunications systems and to planners interested in using such models. Our goal is to present relevant aspects of the engineering environment for local access telecommunication networks, and to discuss the relationship between engineering issues and the formulation of economic decision models. We indicate how changes in the underlying switching and transmission technology affect the modeling of the local telephone network. We also review various planning issues and discuss possible optimization approaches for treating them. This research was initiated through a grant from GTE Laboratories, Incorporated. Supported in part by an AT&T research award. Supported in part by Grant No. ECS-8316224 from the Systems Theory and Operations Research Program of the National Science Foundation.

8.
In this article, we show that certain generalized Boolean subalgebras of the exocenter of a generalized effect algebra (GEA) determine hull systems on the GEA in a manner analogous to the determination of a hull mapping on an effect algebra (EA) by its set of invariant elements. We show that a hull system on a GEA E induces a hull mapping on each interval E[0, p] in E, and, using hull systems, we identify certain special elements of E (e.g., η-subcentral elements, η-monads, and η-dyads). We also extend the type-decomposition theory for EAs to GEAs.

9.
This paper presents a new combined constraint handling framework (CCHF) for solving constrained optimization problems (COPs). The framework combines promising aspects of different constraint handling techniques (CHTs) in different situations, with consideration of problem characteristics. In order to realize the framework, the features of two widely used CHTs (i.e., Deb’s feasibility-based rule and the multi-objective optimization technique) are first studied based on their relationship with the penalty function method. Then, a general relationship between problem characteristics and CHTs in different situations (i.e., the infeasible, semi-feasible, and feasible situations) is empirically obtained. Finally, CCHF is proposed based on this relationship. Also, for the first time, this paper demonstrates that the multi-objective optimization technique can essentially be expressed in the form of a penalty function method. As CCHF combines promising aspects of different CHTs, it shows good performance on 22 well-known benchmark test functions. In general, it is comparable to four other differential evolution-based approaches and five dynamic or ensemble state-of-the-art approaches for constrained optimization.
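A minimal sketch of one of the two CHTs studied, Deb's feasibility-based rule (this is the generic textbook statement of the rule, given here for illustration; it is not the paper's CCHF):

def deb_better(f_a, viol_a, f_b, viol_b):
    """Return True if candidate a is preferred to b under Deb's rule
    (minimization; viol_* is the total constraint violation, 0 if feasible)."""
    if viol_a == 0 and viol_b == 0:      # both feasible: compare objectives
        return f_a < f_b
    if viol_a == 0 or viol_b == 0:       # a feasible point beats an infeasible one
        return viol_a == 0
    return viol_a < viol_b               # both infeasible: smaller violation wins

# Example: a feasible point wins regardless of its objective value.
print(deb_better(f_a=5.0, viol_a=0.0, f_b=1.0, viol_b=0.3))   # True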

10.
Recently there has been a lot of effort to model extremes of spatially dependent data. These efforts seem to be divided into two distinct groups: the study of max-stable processes, together with the development of statistical models within this framework; and the use of more pragmatic, flexible models using Bayesian hierarchical models (BHM) and simulation-based inference techniques. Each modeling strategy has its strong and weak points. While max-stable models capture the local behavior of spatial extremes correctly, hierarchical models based on the conditional independence assumption lack the asymptotic arguments the max-stable models enjoy. On the other hand, they are very flexible in allowing the introduction of physical plausibility into the model. When the objective of the data analysis is to estimate return levels or kriging of extreme values in space, capturing the correct dependence structure between the extremes is crucial, and max-stable processes are better suited for these purposes. However, when the primary interest is to explain the sources of variation in extreme events, Bayesian hierarchical modeling is a very flexible tool due to the ease with which random effects are incorporated in the model. In this paper we model a data set on Portuguese wildfires to show the flexibility of BHM in incorporating spatial dependencies acting at different resolutions.
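One common instance of such a BHM for extremes (an illustrative skeleton assuming a GEV likelihood and a latent Gaussian process; the paper's wildfire model may differ in its covariates and resolutions):

\[
Y(s_i) \mid \mu(s_i), \sigma, \xi \;\overset{\text{ind.}}{\sim}\; \mathrm{GEV}\big( \mu(s_i), \sigma, \xi \big),
\qquad
\mu(s) \;=\; x(s)^{\top} \beta + \eta(s),
\qquad
\eta(\cdot) \;\sim\; \mathrm{GP}\big( 0, k_{\theta} \big),
\]

where the conditional independence of the observations given the latent surface is precisely the assumption that buys flexibility at the cost of the max-stable asymptotics mentioned above.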

11.
The purpose of this article is to propose a simple framework for the various decomposition schemes in mathematical programming. Special instances are discussed. Particular attention is devoted to the general mathematical programming problem with two sets of variables. An economic interpretation in the context of hierarchical planning is given for the suggested decomposition procedure. The framework is based on general duality theory in mathematical programming and thus focuses on approaches leading to global optimality.
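To fix ideas, the two-set-of-variables problem and the dual construction behind such decomposition schemes can be sketched as follows (generic notation, not the article's):

\[
\min_{x \in X,\, y \in Y} \; f(x) + g(y) \quad \text{s.t.} \quad Ax + By = b,
\qquad
q(\lambda) \;=\; \min_{x \in X} \big[ f(x) + \lambda^{\top} A x \big] \;+\; \min_{y \in Y} \big[ g(y) + \lambda^{\top} B y \big] \;-\; \lambda^{\top} b.
\]

Maximizing the dual function q over λ coordinates the two independent subproblems; in the hierarchical-planning reading, λ plays the role of internal prices set by a central planner.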

12.
Model management (MM) regards decision models as an important organisational resource deserving prudent management. Despite the remarkable volume of model management literature compiled over the past twenty-odd years, very little is known about how decision makers actually benefit from employing model management systems (MMS). In this paper, we report findings from an experiment designed to verify the idea that the adequacy of modelling support provided by an MMS influences the decision maker's problem solving performance and behaviour. We show that the decision makers who receive adequate modelling support from an MMS outperform those without such support. Also, we provide empirical evidence that an MMS helps turn the decision makers' perception of problem solving from a number-crunching task into the development of solution strategies, consequently changing their decision making behaviour.

13.
The mathematical representation of human preferences has been a subject of study for researchers in different fields. In multi-criteria decision making (MCDM) and fuzzy modeling, preference models are typically constructed by interacting with the human decision maker (DM). However, it is known that a DM often has difficulty specifying precise values for certain parameters of the model. He/she instead feels more comfortable giving holistic judgements for some of the alternatives. Inference and elicitation procedures then assist the DM in finding a satisfactory model and in assessing unjudged alternatives. In a related but more statistical way, machine learning algorithms can also infer preference models with similar setups and purposes, but here less interaction with the DM is required/allowed. In this article we discuss the main differences between both types of inference and, in particular, we present a hybrid approach that combines the best of both worlds. This approach consists of a very general kernel-based framework for constructing and inferring preference models. Additive models, for which interpretability is preserved, and utility models can be considered as special cases. Besides generality, important benefits of this approach are its robustness to noise and good scalability. We show in detail how this framework can be utilized to aggregate single-criterion outranking relations, resulting in a flexible class of preference models for which domain knowledge can be specified by a DM.
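As a rough sketch of the kernel-based construction (a generic form assumed here for illustration; the notation is not the article's), a value function over alternatives described by m criteria can be written as

\[
u(a) \;=\; \sum_{i=1}^{n} \alpha_i \, K(a_i, a),
\qquad
K(a, b) \;=\; \sum_{j=1}^{m} K_j\big(a^{(j)}, b^{(j)}\big),
\]

where the a_i are reference alternatives holistically judged by the DM and the coefficients α_i are learned from those judgements with a regularized criterion (hence the robustness to noise). Choosing an additive kernel as on the right recovers interpretable additive value models as a special case, while general kernels give the more flexible models referred to above.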

14.
This paper proposes input selection methods for fuzzy modeling that are based on decision tree search approaches. The branching decision at each node of the tree is made based on the accuracy of the model available at that node. We propose two different decision tree search approaches, bottom-up and top-down, and four different measures for selecting the most appropriate set of inputs at every branching node (or decision node). Both decision tree approaches are tested using real-world application examples. These methods are applied to fuzzy modeling of two different classification problems and to fuzzy modeling of two dynamic processes. The accuracies of the models for the four different examples are compared in terms of several performance measures. Moreover, the advantages and drawbacks of using the bottom-up or top-down approach are discussed.
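A minimal sketch of the top-down branching idea (illustrative only; the accuracy measure, the model builder and the stopping rule below are placeholders, not the four measures proposed in the paper):

def top_down_select(candidates, evaluate, max_inputs):
    """Greedy top-down input selection: at each node, branch on the input
    whose addition yields the most accurate model (evaluate returns accuracy)."""
    selected, best_acc = [], float("-inf")
    while len(selected) < max_inputs:
        scored = [(evaluate(selected + [c]), c) for c in candidates if c not in selected]
        if not scored:
            break
        acc, choice = max(scored)
        if acc <= best_acc:          # stop when no candidate improves the model
            break
        selected.append(choice)
        best_acc = acc
    return selected

# Toy usage: pretend accuracy grows with the number of "useful" inputs chosen.
useful = {"x1", "x3"}
print(top_down_select(["x1", "x2", "x3"], lambda s: len(useful & set(s)), 3))

A bottom-up variant would start from the full input set and greedily remove the input whose deletion hurts accuracy the least.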

15.
Basin-wide cooperative water resources allocation
The Cooperative Water Allocation Model (CWAM) is designed within a general mathematical programming framework for modeling equitable and efficient water allocation among competing users at the basin level and is applied to a large-scale water allocation problem in the South Saskatchewan River Basin located in southern Alberta, Canada. This comprehensive model consists of two main steps: initial water rights allocation and subsequent water and net benefits reallocation. Two mathematical programming approaches, called the priority-based maximal multiperiod network flow (PMMNF) method and the lexicographic minimax water shortage ratios (LMWSR) technique, are developed for use in the first step. Cooperative game theoretic approaches are utilized to investigate how the net benefits can be fairly reallocated to achieve optimal economic reallocation of water resources in the second step. The application of this methodology to the South Saskatchewan River Basin shows that CWAM can be utilized as a tool for promoting the understanding and cooperation of water users to achieve maximum welfare in a river basin and to minimize the potential damage caused by water shortages, through water rights allocation and through water and net benefit transfers among water users under a regulated water market or an administrative allocation mechanism.
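A schematic statement of the LMWSR step (our notation, assuming demand d_i and allocation x_i for user i; the actual model also carries multiperiod network-flow constraints):

\[
r_i \;=\; \frac{d_i - x_i}{d_i}, \qquad
\operatorname{lex\,min}_{x \in \mathcal{F}} \; \big( r_{[1]}, r_{[2]}, \dots, r_{[n]} \big),
\]

where r_{[1]} ≥ r_{[2]} ≥ … are the shortage ratios sorted in nonincreasing order, so the largest shortage ratio over the feasible allocations F is minimized first, then the second largest, and so on. This is the equity-oriented counterpart to the priority-based PMMNF allocation.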

16.
We present the mechanical model of a bio-inspired deformable system, modeled as a Timoshenko beam, which is coupled to a substrate by a system of distributed elements. The locomotion action is inspired by the coordinated motion of coupling elements that mimic the legs of millipedes and centipedes, whose leg-to-ground contact can be described as a peristaltic displacement wave. The multi-legged structure is crucial in providing redundancy and robustness in the interaction with unstructured environments and terrains. A Lagrangian approach is used to derive the governing equations of the system that couple locomotion and shape morphing. Features and limitations of the model are illustrated with numerical simulations.
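For reference, the Timoshenko kinematics underlying such a model take the standard form (generic beam equations, with a distributed load q(x,t) that would here collect the leg/substrate coupling forces; this is not the paper's full coupled system):

\[
\rho A \, \partial_t^2 w \;=\; \partial_x \big[ \kappa G A \, (\partial_x w - \varphi) \big] + q(x,t),
\qquad
\rho I \, \partial_t^2 \varphi \;=\; \partial_x \big( E I \, \partial_x \varphi \big) + \kappa G A \, (\partial_x w - \varphi),
\]

with transverse deflection w, cross-section rotation φ, shear correction factor κ, and the usual material and section constants ρ, A, I, E, G.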

17.
The shortest path problem is among the fundamental problems of network optimization. The majority of optimization algorithms assume that the weights of the graph’s edges are predetermined real numbers. However, in real-world situations, the parameters (costs, capacities, demands, time) are not well defined. Fuzzy sets have been widely used here because they are very flexible and require less computation time than stochastic approaches. We design a bio-inspired algorithm for computing a shortest path in a network with various types of fuzzy arc lengths by defining a distance function for fuzzy edge weights using α-cuts. We illustrate the effectiveness and adaptability of the proposed method with numerical examples, and compare our algorithm with existing approaches.
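A small sketch of how α-cuts can turn fuzzy edge weights into a crisp distance for comparing paths (illustrative assumptions: triangular fuzzy numbers and a mean-of-α-cut-midpoints defuzzification, which is only one of many possible choices and not necessarily the paper's distance function):

def alpha_cut(tfn, alpha):
    """Alpha-cut interval [L, U] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return a + alpha * (b - a), c - alpha * (c - b)

def crisp_distance(tfn, levels=11):
    """Defuzzify by averaging the alpha-cut midpoints over a grid of alpha levels."""
    mids = []
    for k in range(levels):
        lo, hi = alpha_cut(tfn, k / (levels - 1))
        mids.append((lo + hi) / 2.0)
    return sum(mids) / len(mids)

def add_tfn(p, q):
    # Fuzzy path length: triangular fuzzy edge weights add componentwise.
    return tuple(pi + qi for pi, qi in zip(p, q))

# Two candidate paths with fuzzy lengths; keep the one with the smaller crisp distance.
path1 = add_tfn((2, 3, 5), (1, 2, 4))     # (3, 5, 9)
path2 = (4, 5, 7)
print(min([path1, path2], key=crisp_distance))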

18.
This paper is a contribution to the modeling and adaptive control of bio-inspired sensors that have animal vibrissae as a paradigm. Mice and rats employ a sophisticated tactile sensory system to explore their environment in addition to their visual and auditory senses. Vibrissae in the mystacial pad (the region around the mouth) are used both passively to sense environmental influences (wind, objects) and actively to detect surface and object structures. Inspired by this particular form of the tactile sense, we consider the following three stages of a sensory system: perception, transduction and processing of information. We model this system by combining two existing mechanical models and obtain an uncertain nonlinear control system. An adaptive controller implements the ability of the animals to employ their vibrissae both actively and passively. Numerical simulations show that the developed nonlinear model compensates for noise signals and reacts strongly to sudden perturbations while guaranteeing a pre-specified control objective (working in active or passive mode).

19.
This paper describes the package sppmix for the statistical environment R. The sppmix package implements classes and methods for modeling spatial point patterns using inhomogeneous Poisson point processes, where the intensity surface is assumed to be a multiple of a finite additive mixture of normal components and the number of components is a finite, fixed or random integer. Extensions to the case of marked inhomogeneous Poisson point processes are also presented. We provide an extensive suite of R functions that can be used to simulate, visualize and model point patterns, estimate the parameters of the models, assess convergence of the algorithms and perform model selection and checking in the proposed modeling context. In addition, several approaches have been implemented to handle the standard label-switching issue which arises in any modeling approach involving mixture models. We adopt a hierarchical Bayesian framework in order to model the intensity surfaces and have implemented two major algorithms to estimate the parameters of the mixture models involved: data augmentation and birth–death Markov chain Monte Carlo (DAMCMC and BDMCMC). We used C++ (via the Rcpp package) to implement the most computationally intensive algorithms.
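In symbols, the intensity surface described above has the form (notation ours; m may be fixed or random):

\[
\lambda(s) \;=\; \gamma \sum_{k=1}^{m} p_k \, \varphi(s;\, \mu_k, \Sigma_k),
\qquad p_k \ge 0, \quad \sum_{k=1}^{m} p_k = 1,
\]

where φ(·; μ_k, Σ_k) is a bivariate normal density, the weights p_k, means μ_k and covariances Σ_k are the mixture parameters estimated by DAMCMC/BDMCMC, and γ > 0 is the multiplicative factor scaling the mixture to the expected number of points in the observation window.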

20.
We discuss the problem of estimating the number of principal components in principal components analysis (PCA). Despite the importance of the problem and the multitude of solutions proposed in the literature, it comes as a surprise that there does not exist a coherent asymptotic framework that would justify different approaches depending on the actual size of the dataset. In this article, we address this issue by presenting an approximate Bayesian approach based on the Laplace approximation and by introducing a general method for developing criteria for model selection, called PEnalized SEmi-integrated Likelihood (PESEL). Our general framework encompasses a variety of existing approaches based on probabilistic models, like the Bayesian Information Criterion for Probabilistic PCA (PPCA), and enables the construction of new criteria, depending on the size of the dataset at hand and additional prior information. Specifically, we apply PESEL to derive two new criteria for datasets in which the number of variables substantially exceeds the number of observations, which is outside the scope of currently existing approaches. We also report results of extensive simulation studies and real data analysis, which illustrate the desirable properties of our proposed criteria as compared to state-of-the-art methods and very recent proposals. Specifically, these simulations show that PESEL-based criteria can be quite robust against deviations from the assumptions of a probabilistic model. Selected PESEL-based criteria for the estimation of the number of principal components are implemented in the R package pesel, which is available on GitHub (https://github.com/psobczyk/pesel). Supplementary material for this article, with additional simulation results, is available online. The code to reproduce all simulations is available at https://github.com/psobczyk/pesel_simulations.
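The basic Laplace-approximation step behind such criteria can be sketched as follows (the generic form only; PESEL's semi-integration and the exact penalties for the regime where variables outnumber observations are developed in the article itself):

\[
\ln p(\mathbf{Y} \mid M_k) \;=\; \ln \int p(\mathbf{Y} \mid \theta, M_k)\, \pi(\theta)\, d\theta
\;\approx\; \ln p(\mathbf{Y} \mid \hat{\theta}, M_k) \;-\; \frac{d_k}{2} \ln n,
\]

where M_k is the probabilistic PCA model with k principal components, d_k its number of free parameters, and n the number of observations; the k maximizing this penalized likelihood gives the estimated number of components.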
