Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Detectability describes the property of a system to uniquely determine, after a finite number of observations, the current and the subsequent states. Different notions of detectability have been proposed in the literature. In this paper, we formalize and analyze strong detectability and strong periodic detectability for systems that are modeled as labeled Petri nets with partial observation on their transitions. We provide three new approaches for the verification of such detectability properties using three different structures. The computational complexity of the proposed approaches is analyzed and the three methods are compared. The main feature of all three approaches is that they do not require the calculation of the entire reachability space or the construction of an observer. As a result, they have lower computational complexity than other methods in the literature.
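As a rough sketch of the state-estimation idea behind detectability, one can propagate the set of states consistent with an observation and check whether it eventually becomes (and stays) a singleton. The toy model below is a labeled transition system rather than a Petri net, and all names and data are hypothetical; unobservable transitions carry label `None`:

```python
def eps_closure(transitions, states):
    """Close a state set under unobservable (label None) transitions."""
    closed = set(states)
    frontier = list(closed)
    while frontier:
        s = frontier.pop()
        for (p, label, q) in transitions:
            if p == s and label is None and q not in closed:
                closed.add(q)
                frontier.append(q)
    return closed

def state_estimate(transitions, initial, observation):
    """Set of states consistent with an observed label sequence."""
    est = eps_closure(transitions, {initial})
    for obs in observation:
        step = {q for (p, label, q) in transitions if p in est and label == obs}
        est = eps_closure(transitions, step)
    return est
```

The approaches in the paper are precisely about avoiding the explicit construction of such an observer over the full reachability space.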

2.
The article begins with a review of the main approaches for interpreting the results of principal component analysis (PCA) over the last 50–60 years. The simple structure approach is compared to the modern approach of sparse PCA, where interpretable solutions are obtained directly. It is shown that their goals are identical but that they differ in the way they are realized. Next, the most popular and influential methods for sparse PCA are briefly reviewed. In the remaining part of the paper, a new approach to defining sparse PCA is introduced. Several alternative definitions are considered and illustrated on a well-known data set. Finally, it is demonstrated how one of these possible versions of sparse PCA can be used as a sparse alternative to the classical rotation methods.
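A minimal illustration of the sparse-PCA idea, producing an interpretable (sparse) leading loading vector directly. The soft-thresholded power iteration below is a generic sketch of the concept, not any specific method reviewed in the article:

```python
import numpy as np

def sparse_pc1(X, lam=0.1, iters=200):
    """Leading sparse loading vector via power iteration on X'X with
    soft-thresholding -- a generic sketch of the sparse-PCA idea."""
    C = X.T @ X
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(iters):
        w = C @ v
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # sparsify
        norm = np.linalg.norm(w)
        if norm == 0.0:          # lam too aggressive: keep previous v
            break
        v = w / norm
    return v
```

With `lam = 0`, this reduces to ordinary power iteration for the first principal component; increasing `lam` zeroes out small loadings, which is exactly the interpretability trade-off discussed above.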

3.
This paper provides a survey on probabilistic decision graphs for modeling and solving decision problems under uncertainty. We give an introduction to influence diagrams, which is a popular framework for representing and solving sequential decision problems with a single decision maker. As the methods for solving influence diagrams can scale rather badly in the length of the decision sequence, we present a couple of approaches for calculating approximate solutions. The modeling scope of the influence diagram is limited to so-called symmetric decision problems. This limitation has motivated the development of alternative representation languages, which enlarge the class of decision problems that can be modeled efficiently. We present some of these alternative frameworks and demonstrate their expressibility using several examples. Finally, we provide a list of software systems that implement the frameworks described in the paper.

4.
Matrix optimization problems with orthogonality constraints have wide applications in materials computation, statistics, data analysis, and other fields. Since the feasible region of the orthogonality constraint is the Stiefel manifold, manifold optimization methods have long been the main approach to solving such problems. In recent years, as the variable sizes required by practical applications have grown, the computational disadvantages of traditional manifold optimization methods have become apparent, and new algorithms with simple iterations and fast convergence have gradually been proposed. We survey the latest algorithms for matrix optimization with orthogonality constraints in three categories: retraction-based methods, non-retraction feasible methods, and infeasible methods. By analyzing the main features of these methods and the requirements of application problems, we give an outlook on algorithm design for this class of problems.
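As a small sketch of the feasible-method idea (staying on the Stiefel manifold by retracting after each step), the QR-based step below is a generic illustration, not one of the surveyed algorithms:

```python
import numpy as np

def qr_retract(X, G, tau):
    """Take the step X - tau*G, then retract back onto the Stiefel
    manifold {X : X'X = I} via a (reduced) QR factorization."""
    Q, R = np.linalg.qr(X - tau * G)
    return Q * np.sign(np.diag(R))  # sign fix keeps the retraction continuous
```

Feasible methods of this kind keep every iterate exactly orthogonal, whereas the infeasible methods mentioned above only enforce the constraint in the limit.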


6.
We propose subject matter expert refined topic (SMERT) allocation, a generative probabilistic model applicable to clustering freestyle text. SMERT models are three-level hierarchical Bayesian models in which each item is modeled as a finite mixture over a set of topics. In addition to discrete data inputs, we introduce binomial inputs. These 'high-level' data inputs permit the 'boosting' or affirming of terms in the topic definitions and the 'zapping' of other terms. We also present a collapsed Gibbs sampler for efficient estimation. The methods are illustrated using real-world data from a call center, and we compare SMERT with three alternative approaches using two criteria. Copyright © 2015 John Wiley & Sons, Ltd.

7.
We consider Bayesian inference when priors and likelihoods are both available for inputs and outputs of a deterministic simulation model. This problem is fundamentally related to the issue of aggregating (i.e., pooling) expert opinion. We survey alternative strategies for aggregation, then describe computational approaches for implementing pooled inference for simulation models. Our approach (1) numerically transforms all priors to the same space; (2) uses log pooling to combine priors; and (3) then draws standard Bayesian inference. We use importance sampling methods, including an iterative, adaptive approach that is more flexible and has less bias in some instances than a simpler alternative. Our exploratory examples are the first steps toward extension of the approach for highly complex and even noninvertible models.
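Logarithmic pooling has a convenient closed form in the normal case: the pooled density is again normal, with precision equal to the weighted sum of the individual precisions. A small sketch (the function name is ours, not from the paper):

```python
import math

def log_pool_normals(params, weights):
    """Log-linear pooling of normal priors N(m_i, s_i^2) with weights w_i
    summing to one: the pool is N(m, s^2) whose precision 1/s^2 is the
    weighted sum of the individual precisions."""
    prec = sum(w / s ** 2 for (m, s), w in zip(params, weights))
    mean = sum(w * m / s ** 2 for (m, s), w in zip(params, weights)) / prec
    return mean, math.sqrt(1.0 / prec)
```

For non-normal priors no such closed form exists in general, which is why the paper resorts to numerical transformation and importance sampling.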

8.
Regression models are popular tools for rate-making in the framework of heterogeneous insurance portfolios; however, traditional regression methods have some disadvantages, particularly their sensitivity to assumptions, which significantly restricts the scope of their applications. This paper is devoted to an alternative approach: quantile regression. It is free of some disadvantages of the traditional models, and the quality of its estimators is approximately the same as, or sometimes better than, that of the traditional regression methods. Moreover, quantile regression is consistent with the idea of using distribution quantiles for rate-making. This paper provides detailed comparisons between the approaches and gives a practical example of using the new methodology.
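The pinball (check) loss that underlies quantile regression can be illustrated directly: the empirical τ-quantile of a sample minimizes the average pinball loss. A hypothetical sketch:

```python
def pinball_loss(tau, y, q):
    """Average pinball (check) loss for predicting the tau-quantile as q."""
    return sum((tau * e if e >= 0 else (tau - 1) * e)
               for e in (yi - q for yi in y)) / len(y)
```

Quantile regression generalizes this by minimizing the same loss over regression coefficients instead of a single constant q.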

9.
Recently, Robin claimed to introduce clever innovations (‘wrinkles’) into the mathematics education literature concerning the solutions, and methods of solution, to differential equations. In particular, Robin formulated an iterative scheme in the form of a single integral representation. These ideas were applied to a range of examples involving differential equations. In this article, we respond to Robin's work by subjecting these claims, methods and applications to closer scrutiny. By outlining the historical development of Picard's iterative method for differential equations and drawing on relevant literature, we show that the iterative scheme of Robin has been known for some time. We introduce the need for a ‘space for otherness’ in mathematics education, by drawing on Foucault and posit alternative pedagogical approaches as heterotopias. We open a space for otherness and make it concrete by considering alternative perspectives to Robin's work. On a practical note, we see the importance of history and theory to be part of the pedagogical conversation when teaching and learning iterative methods; and provide a set of Maple code with which students and teachers can experiment, explore and learn. We also advocate more broadly for educators to open a space for otherness in their own pedagogical practice.
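Picard's iterative scheme is easy to experiment with outside Maple as well. The sketch below (our own, not the Maple code the article provides) approximates y_{k+1}(x) = y0 + ∫ f(t, y_k(t)) dt with the trapezoidal rule on a grid:

```python
import numpy as np

def picard(f, y0, xs, iters):
    """Picard iteration for y' = f(x, y), y(xs[0]) = y0:
    y_{k+1}(x) = y0 + integral of f(t, y_k(t)) from xs[0] to x,
    with the integral approximated by the trapezoidal rule on xs."""
    y = np.full_like(xs, y0, dtype=float)
    for _ in range(iters):
        g = f(xs, y)
        increments = 0.5 * (g[1:] + g[:-1]) * np.diff(xs)
        y = y0 + np.concatenate(([0.0], np.cumsum(increments)))
    return y
```

For y' = y, y(0) = 1, each iteration adds roughly one more Taylor term of e^x, which makes the convergence of the scheme visible to students experimenting with it.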

10.
Geometry of interpolation sets in derivative free optimization
We consider derivative free methods based on sampling approaches for nonlinear optimization problems where derivatives of the objective function are not available and cannot be directly approximated. We show how the bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative free sampling methods. These bounds involve a constant that reflects the quality of the interpolation set. The main task of such a derivative free algorithm is to maintain an interpolation sampling set so that this constant remains small, and at least uniformly bounded. This constant is often described through the basis of Lagrange polynomials associated with the interpolation set. We provide an alternative, more intuitive definition for this concept and show how this constant is related to the condition number of a certain matrix. This relation enables us to provide a range of algorithms for maintaining the interpolation set so that this condition number or the geometry constant remains uniformly bounded. We also derive bounds on the error between the model and the function and between their derivatives, directly in terms of this condition number and of this geometry constant.
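The link between the geometry of a sample set and a matrix condition number can be illustrated for linear interpolation: build the matrix with rows (1, p_i) and measure its conditioning. This is a crude hypothetical proxy, not the paper's exact constant:

```python
import numpy as np

def poisedness_proxy(points):
    """Condition number of the linear interpolation matrix with rows
    (1, p_i) -- a crude proxy for the geometry of a sample set."""
    P = np.asarray(points, dtype=float)
    M = np.hstack([np.ones((P.shape[0], 1)), P])
    return np.linalg.cond(M)
```

A well-spread set gives a modest condition number; a nearly collinear set gives a huge one, which is exactly the degeneracy a derivative free algorithm must guard against when maintaining its sample set.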

11.
Agent-based models (ABMs) simulate interactions between autonomous agents in constrained environments over time and are often used for modeling the spread of infectious diseases. ABMs use information about agents and their environments as input, together referred to as a “synthetic ecosystem.” Previous approaches for generating synthetic ecosystems have some limitations: they are not open-source, cannot be adapted to new or updated input data sources, or do not allow for alternative methods for sampling agent characteristics and locations. We introduce a general framework for generating synthetic ecosystems, called “Synthetic Populations and Ecosystems of the World” (SPEW). SPEW lets researchers choose from a variety of sampling methods for agent characteristics and locations and is implemented as an open-source R package. We analyze the accuracy and computational efficiency of SPEW, given different sampling methods for agent characteristics and locations, and provide a suite of statistical and graphical tools to screen our generated ecosystems. SPEW has generated over five billion human agents across approximately 100,000 geographic regions in over 70 countries, available online.

12.
Multiobjective programming methods would appear to offer a very attractive alternative to conventional single objective LP models for medium term financial planning. However, prior to implementation, many technical and procedural problems need to be solved. This paper presents in case study form some of these problems and discusses possible approaches to their solution.

13.
Scoring rules are an important and much-debated subject in data envelopment analysis (DEA). Various organizations use voting systems whose main object is to rank alternatives. In these methods, the ranks of alternatives are obtained from their associated weights, and how to determine the ranks from the weights is an important issue that several authors have addressed. We suggest a three-stage method for the ranking of alternatives. In the first stage, the rank position of each alternative is computed based on the best and worst weights, in the optimistic and pessimistic cases respectively. The vector of weights obtained in the first stage is not a singleton; to deal with this, a secondary goal is used in the second stage. In the third stage of our method, the ranks of the alternatives approach the optimistic or pessimistic case. The model proposed in the third stage is a multi-criteria decision making (MCDM) model and there are several methods for solving it; we use the weighted sum method in this paper. The model is solved by mixed integer programming. We also obtain an interval for the rank of each alternative. Finally, we present two models based on the average of ranks in the optimistic and pessimistic cases, whose aim is to compute the rank by common weights.
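The basic scoring-rule setup, ranks obtained from rank-position weights, can be sketched as follows (the data and names are hypothetical, not from the paper):

```python
def rank_scores(votes, weights):
    """Scoring rule: votes[a][j] is the number of voters ranking
    alternative a in position j; weights[j] is the (non-increasing)
    value of position j. Returns the weighted score per alternative."""
    return {a: sum(v * w for v, w in zip(tally, weights))
            for a, tally in votes.items()}
```

The DEA-based methods surveyed in the abstract differ precisely in how such weights are chosen (best-case, worst-case, or common weights) rather than fixed in advance.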

14.
In the last 10 years much has been written about the drawbacks of radial projection. During this time, many authors proposed methods to explore, interactively or not, the efficient frontier via non-radial projections. This paper compares three families of data envelopment analysis (DEA) models: the traditional radial, the preference structure and the multi-objective models. We use the efficiency analysis of the Rio de Janeiro Odontological Public Health System as a background for comparing the three methods through a real case with one integer and one exogenous variable. The objectives of the case study are (i) to compare the applicability of the three approaches for efficiency analysis with exogenous and integer variables, (ii) to present the main advantages and drawbacks of each approach, (iii) to show the impossibility of projecting in some regions and its implications, and (iv) to report the approximate CPU time for the models when this time is not negligible. We find that the multi-objective approach, although mathematically equivalent to its preference-structure peer, allows projections that are not present in the latter. Furthermore, we find that, for our case study, the traditional radial projection model provides useless targets, as expected, and that for some parts of the frontier none of the models provide suitable targets. Another interesting result is that the CPU time for the multi-objective formulation, despite its high complexity, is acceptable for DEA applications, due to its compact nature.

15.
The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, (iii) an update formula for the proximal parameter. In this work we focus on (ii), when smoothness is present, so that Newton-like methods can be used for (i): we aim at giving adequate stopping rules to reach overall efficiency of the method. Roughly speaking, usual rules consist in stopping inner iterations when the current iterate is close to the proximal point. By contrast, we use the standard paradigm of numerical optimization: the basis for our stopping test is a "sufficient" decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the algorithm thus obtained and we illustrate it on some ill-conditioned problems. The experiments show that combining the proposed inexact proximal scheme with a standard smooth optimization algorithm improves the numerical behaviour of the latter for those ill-conditioned problems.
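A toy version of the inexact proximal scheme: each proximal subproblem is solved by gradient descent and stopped early. The relative stopping rule below is a simpler stand-in for the sufficient-decrease test studied in the paper, and the fixed inner step assumes the gradient of f is Lipschitz with a known bound:

```python
import numpy as np

def proximal_point(grad_f, x0, mu=1.0, lr=0.01, tol=1e-8,
                   inner_tol=0.1, max_outer=200, max_inner=2000):
    """Inexact proximal point method for smooth f: each subproblem
    min_y f(y) + ||y - x||^2 / (2*mu) is minimized by gradient descent
    and stopped early by a relative accuracy test (a simplified stand-in
    for the paper's sufficient-decrease rule)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        y = x.copy()
        for _ in range(max_inner):
            g = grad_f(y) + (y - x) / mu          # gradient of the subproblem
            if np.linalg.norm(g) <= inner_tol * np.linalg.norm(y - x) / mu + 1e-12:
                break                             # inner point accurate enough
            y = y - lr * g                        # lr assumes grad_f Lipschitz
        if np.linalg.norm(y - x) < tol:           # outer progress negligible
            return y
        x = y
    return x
```

On an ill-conditioned quadratic, the outer proximal iterations remain well behaved even though each inner solve is only approximate, which is the effect the paper's experiments quantify.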

16.
This article presents a conceptual framework and methodological guide for researching and understanding OR interventions, particularly problem structuring methods (PSM). The article argues that OR/PSM interventions are complex events which cannot be understood by traditional approaches alone. In this paper an alternative methodology is developed, where the units of analysis are the narratives and networks produced during PSM interventions. The paper outlines the main theoretical and methodological concerns that need to be appreciated in studying PSM interventions. The paper then explores actor-network theory and narrative analysis as approaches to study them. A case study describing the use of these approaches is provided.

17.
In this paper an application of the Analytic Network Process (ANP) to asset valuation is presented. It has two purposes: solving some of the drawbacks found in classical asset valuation methods and broadening the scope of current approaches. The ANP is a method based on Multiple Criteria Decision Analysis (MCDA) that accurately models complex environments. This approach is particularly useful in problems which work with partially available data, qualitative variables and influences among the variables, which are very common situations in the valuation context. As an illustration, the new approach has been applied to a real case study of an industrial park located in Valencia (Spain) using three different models. The results confirm the validity of the methodology and show that the more information is incorporated into the model, the more accurate the solution will be, so the presented methodology stands out as a good alternative to current valuation approaches.
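The eigenvector-based prioritization that ANP inherits from AHP can be sketched on a single pairwise-comparison matrix (a generic illustration, not the case-study models):

```python
import numpy as np

def priority_vector(M, iters=100):
    """Priorities from a pairwise-comparison matrix M (M[i, j] ~ w_i / w_j)
    via power iteration toward the principal eigenvector."""
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w = M @ v
        v = w / w.sum()
    return v
```

ANP goes further than this single-matrix step by arranging such comparisons in a network with feedback among clusters of criteria and alternatives, which is what allows it to model the influences among variables mentioned above.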

18.
We address a generic mixed-integer bilevel linear program (MIBLP), i.e., a bilevel optimization problem where all objective functions and constraints are linear, and some/all variables are required to take integer values. We first propose necessary modifications needed to turn a standard branch-and-bound MILP solver into an exact and finitely-convergent MIBLP solver, also addressing MIBLP unboundedness and infeasibility. As in other approaches from the literature, our scheme is finitely-convergent in case both the leader and the follower problems are pure integer. In addition, it is capable of dealing with continuous variables both in the leader and in follower problems—provided that the leader variables influencing follower’s decisions are integer and bounded. We then introduce new classes of linear inequalities to be embedded in this branch-and-bound framework, some of which are intersection cuts based on feasible-free convex sets. We present a computational study on various classes of benchmark instances available from the literature, in which we demonstrate that our approach outperforms alternative state-of-the-art MIBLP methods.

19.
Decomposition has proved to be one of the more effective tools for the solution of large-scale problems, especially those arising in stochastic programming. A decomposition method with wide applicability is Benders' decomposition, which has been applied to both stochastic programming as well as integer programming problems. However, this method of decomposition relies on convexity of the value function of linear programming subproblems. This paper is devoted to a class of problems in which the second-stage subproblem(s) may impose integer restrictions on some variables. The value function of such integer subproblem(s) is not convex, and new approaches must be designed. In this paper, we discuss alternative decomposition methods in which the second-stage integer subproblems are solved using branch-and-cut methods. One of the main advantages of our decomposition scheme is that Stochastic Mixed-Integer Programming (SMIP) problems can be solved by dividing a large problem into smaller MIP subproblems that can be solved in parallel. This paper lays the foundation for such decomposition methods for two-stage stochastic mixed-integer programs.

20.
Optimization, 2012, 61(5–6): 495–516
For optimization problems that are structured both with respect to the constraints and with respect to the variables, it is possible to use primal–dual solution approaches based on decomposition principles. One can construct a primal subproblem, by fixing some variables, and a dual subproblem, by relaxing some constraints and fixing their Lagrange multipliers, so that both of these problems are much easier to solve than the original problem. We study methods based on these subproblems that do not include the difficult Benders or Dantzig–Wolfe master problems, namely primal–dual subgradient optimization methods, mean value cross decomposition, and several combinations of the different techniques. In this paper, these solution approaches are applied to the well-known uncapacitated facility location problem. Computational tests show that some combination methods yield near-optimal solutions quicker than the classical dual ascent method of Erlenkotter.
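A plain subgradient method, one building block of the primal–dual approaches mentioned above, can be sketched as follows (a generic illustration on a one-dimensional nonsmooth function, not the facility-location Lagrangian dual):

```python
def subgradient_min(f, subgrad, x0, steps=500):
    """Subgradient method with diminishing steps 1/(k+1); since the
    iterates need not decrease f monotonically, the best point seen
    so far is tracked and returned."""
    x = x0
    best, fbest = x0, f(x0)
    for k in range(steps):
        x = x - subgrad(x) / (k + 1)
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest
```

In Lagrangian schemes like those studied in the paper, `f` would be the (concave, maximized) dual function and the subgradient would come from the constraint violations of the current primal subproblem solution.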


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号