Similar Documents
20 similar documents found.
1.
This paper describes a method, and the corresponding algorithms, for simplifying large-scale linear programming models by eliminating the balance constraints (i.e., constraints with a zero RHS term). The idea is to apply linear transformations to the original problem that nullify the balance constraints; these transformations can eliminate several balance rows simultaneously. The core of this contribution is the introduction of the reduction matrix and the associated theorems on the equivalence of the original and reduced linear programs. Numerical experiments proved this approach to be beneficial for a large class of LP problems. This research was done while the first author was at Duisburg University, Mess-, Steuer- und Regelungstechnik, Germany, with greatly appreciated financial assistance from the Alexander-von-Humboldt Foundation.
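A minimal sketch of the underlying idea, not the paper's algorithm: a single balance row (zero RHS) is eliminated by a reduction matrix T so that the reduced LP has no equality constraints. The tiny problem data are illustrative.

```python
# Eliminating one balance constraint (x0 - x1 = 0) from an LP by the
# substitution x = T y; the paper's method handles many balance rows at once.
import numpy as np
from scipy.optimize import linprog

# Original LP: min c^T x  s.t.  x0 - x1 = 0,  x0 + x1 + x2 <= 4,  x >= 0
c = np.array([-1.0, 0.0, -2.0])
A_eq = np.array([[1.0, -1.0, 0.0]])    # balance row (zero RHS)
A_ub = np.array([[1.0, 1.0, 1.0]])
b_ub = np.array([4.0])

full = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0])

# Reduction matrix T: x = T y with y = (x0, x2); the balance row forces x1 = x0
T = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
reduced = linprog(c @ T, A_ub=A_ub @ T, b_ub=b_ub)

print(full.fun, reduced.fun)    # identical optimal values (-8.0)
print(full.x, T @ reduced.x)    # identical optimal points
```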

2.
To integrate economic considerations into management decisions in ecosystem frameworks, we need models that capture observed system dynamics and incorporate existing knowledge of ecosystems while also accommodating economic analysis. The main constraint on models serving economic analysis is dimensionality. In addition, to be useful in long-term management analysis, models should remain stable as they are adjusted to new observations. We use the ensemble Kalman filter to fit relatively simple models to ecosystem or food-web data and to estimate parameters that are stable over the observed variability in the data. The filter also provides a lower bound on the noise terms that a stochastic analysis requires. In this paper, we apply the filter to model the main interactions in the Barents Sea ecosystem. In a comparison, our method outperforms a regression-based approach.
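A minimal sketch of one analysis step of the stochastic ensemble Kalman filter (perturbed observations), the kind of update used to fit low-dimensional models to ecosystem time series; state names and numbers are illustrative, not the Barents Sea model.

```python
# One EnKF analysis step: update a forecast ensemble against a noisy observation
import numpy as np

rng = np.random.default_rng(0)

n, N = 3, 100                        # state dimension, ensemble size
H = np.array([[1.0, 0.0, 0.0]])      # observe first component only
R = np.array([[0.1]])                # observation-error covariance

X = rng.normal(1.0, 0.5, size=(n, N))    # forecast ensemble (one column per member)
y = np.array([1.3])                      # observation

# Ensemble covariance and Kalman gain  K = P H^T (H P H^T + R)^{-1}
P = np.cov(X)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against a perturbed observation
Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, N))
X_a = X + K @ (Y - H @ X)

print(X.mean(axis=1), X_a.mean(axis=1))   # analysis mean pulled toward y
```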

3.
In this paper, the updated Lagrangian Taylor-SPH meshfree method is applied to the numerical analysis of large-deformation and failure problems under dynamic conditions. The Taylor-SPH method is a meshfree collocation method developed by the authors over the past years. The governing equations, a set of first-order hyperbolic partial differential equations, are written in mixed form in terms of stress and velocity. This set of equations is first discretized in time by means of a two-step Taylor series expansion and afterwards in space using a corrected form of the SPH method. Two sets of particles are used for the computation, resulting in the elimination of the classical tensile instability. In the present paper, the authors propose an updated Lagrangian Taylor-SPH approach to address the large deformations of the solid and, therefore, the continuous repositioning of the particles. To illustrate the performance and efficiency of the proposed method, several numerical examples based on elastic and viscoplastic materials involving large deformations under dynamic conditions are solved with the proposed algorithm. The results clearly show that the updated Lagrangian Taylor-SPH method is an accurate tool for modelling large-deformation and failure problems under dynamic loadings.
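For readers unfamiliar with SPH, a sketch of the standard cubic-spline kernel in 1D follows; this is a common kernel choice in SPH codes generally, and does not show the corrected SPH discretization or the two-particle-set scheme the paper actually uses.

```python
# Standard 1D cubic-spline SPH kernel with compact support 2h
import numpy as np

def cubic_spline_kernel_1d(r, h):
    """W(r, h), normalized so that the kernel integrates to 1 in 1D."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                     # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Quick check of the normalization on a uniform particle grid
h, dx = 0.1, 0.05
x = np.arange(-1.0, 1.0 + dx, dx)
print(np.sum(cubic_spline_kernel_1d(x, h)) * dx)   # ~ 1.0
```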

4.
A three-dimensional recirculating flow in a ventilated room was predicted with numerical methods incorporating turbulence models. The predicted results are compared with experimental results obtained in a model room in order to assess the practical utility of such methods from an engineering viewpoint. With the practicability that engineers regard as important in mind, two turbulence models were selected and incorporated into the numerical prediction methods. One is the two-equation model, which adopts transport equations for the turbulence energy and its dissipation rate. The other is Deardorff's model, which utilizes a subgrid-scale eddy coefficient. Predictions were made with each numerical method, and no noticeable difference was found between the two sets of results. Each was compared with the experimental results, and, generally speaking, agreement with the measured mean velocity is good in both cases. We therefore conclude that the numerical method using the two-equation model has more practical utility than that using Deardorff's model, because it yields solutions in less computing time.
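For reference, the general form of the two-equation model referred to above is the standard high-Reynolds-number k-ε model; the exact variant and constants used in the paper may differ.

```latex
% Standard k-epsilon model: eddy viscosity plus transport of k and epsilon
\nu_t = C_\mu \frac{k^2}{\varepsilon},
\qquad
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon,
\qquad
\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
    + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},
```

with the usual constants \(C_\mu = 0.09\), \(C_{\varepsilon 1} = 1.44\), \(C_{\varepsilon 2} = 1.92\), \(\sigma_k = 1.0\), \(\sigma_\varepsilon = 1.3\), and \(P_k\) the production of turbulence energy.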

5.
Data envelopment analysis (DEA) methods classify the decision making units (DMUs) into two groups: efficient and inefficient ones. A full ranking of all DMUs, however, is what most decision makers demand. DEA and multiple criteria decision making (MCDM) methods were developed independently and designed for different purposes, yet there are applications, such as ranking, where the two are combined; combining MCDM methods with DEA is a way of eliminating the disadvantages each exhibits when applied independently. In this paper, a new combined method named TOPSIS-DEA is proposed for ranking efficient units; it not only retains the benefits of both DEA and MCDM methods but also resolves the issues that arise in earlier methods. The properties and advantages of the suggested method are then discussed and compared with the super-efficiency method, the MAJ method, the statistical-based models (CCA and DR/DEA), aggressive and benevolent cross-efficiency, and Liang et al.'s model through several illustrative examples. Finally, the proposed method is validated.
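A sketch of the TOPSIS half alone (vector normalization, ideal/anti-ideal points, relative closeness); the DEA half of the paper's TOPSIS-DEA combination is not reproduced, and the decision matrix and weights below are illustrative.

```python
# Plain TOPSIS ranking of DMUs on benefit criteria
import numpy as np

X = np.array([[7.0, 9.0, 9.0],     # decision matrix: rows = DMUs, cols = criteria
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 6.0]])
w = np.array([0.4, 0.3, 0.3])      # criterion weights (all benefit criteria)

V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal points

d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)     # relative closeness in [0, 1]

print(np.argsort(-closeness))                # DMUs ranked best to worst
```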

6.
By combining the randomization method with the stationarity detection technique in a novel way, we develop two new algorithms for computing the expected reward rates of finite, irreducible Markov reward models with control of the relative error. The first algorithm computes the expected transient reward rate and the second the expected averaged reward rate. The algorithms are numerically stable. Further, it is argued that, in terms of run-time computational cost, for medium-sized and large Markov reward models the algorithms can be expected to outperform the only variant of the randomization method that allows control of the relative error, as well as the approach of iteratively employing the existing algorithms that use randomization with stationarity detection but control only the absolute error. The performance of the new algorithms is illustrated by means of examples, showing that they can be not only faster but also more efficient than the alternatives in terms of run-time computational cost relative to accuracy.
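A sketch of the classical randomization (uniformization) series for the expected transient reward rate that these algorithms build on. Truncation here is by crude Poisson tail mass; the paper's contributions (stationarity detection and relative-error control) are not reproduced, and the generator and rewards are illustrative.

```python
# E[r(X_t)] = sum_k  Pois(Lam*t; k) * pi0 P^k r,  with P = I + Q/Lam
import numpy as np
from scipy.stats import poisson

Q = np.array([[-2.0,  2.0,  0.0],    # generator of a small irreducible CTMC
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
r = np.array([0.0, 5.0, 10.0])       # reward rate in each state
pi0 = np.array([1.0, 0.0, 0.0])      # initial distribution
t = 2.0

Lam = max(-np.diag(Q))               # uniformization rate Lam >= max_i |q_ii|
P = np.eye(3) + Q / Lam              # DTMC of the randomized chain

K = int(poisson.ppf(1 - 1e-10, Lam * t))     # series truncation point
expected, p = 0.0, pi0.copy()
for k in range(K + 1):
    expected += poisson.pmf(k, Lam * t) * (p @ r)
    p = p @ P

print(expected)   # expected reward rate at time t
```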

7.
The finite element (FE) approach is an essential methodology for modelling the elastic properties of structures in research disciplines such as structural mechanics and engine dynamics. Because of increased accuracy requirements, the FE method results in discretized models described by high-order systems of ordinary differential equations or, in FE terms, by a large number of degrees of freedom (DoF). An additional methodology, referred to as model order reduction (MOR) or DoF condensation, therefore becomes practically compulsory: a reduced-dimension set of ordinary differential equations is generated, i.e., the initially large number of DoF is condensed, while aiming to keep the dynamics of the original model as intact as possible. In commercially available FE software tools, static condensation and component mode synthesis (CMS) are the only integrated condensation methods. The latter represents the state of the art, generating well-correlated reduced-order models (ROMs) that can be further utilized in FE or multi-body system simulations. Taking into consideration the information loss of CMS, introduced by its part-static nature, the improved CMS (ICMS) method is proposed. The algorithmic scheme of standard CMS is adopted and qualitatively improved by adequately incorporating the advantageous characteristics of another MOR approach, the improved reduction system method. ICMS yields better-correlated reduced-order models than all the aforementioned methods while preserving the required structural properties of the original FE model.
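A sketch of static (Guyan) condensation, the simplest of the condensation schemes mentioned above; CMS and the proposed ICMS enrich this reduction basis with component modes. Matrices and the master/slave partition are illustrative.

```python
# Guyan condensation: keep master DoF, condense out slave DoF statically
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)          # SPD stiffness matrix (illustrative)
M = np.eye(n)                        # mass matrix

m = [0, 1]                           # master (retained) DoF
s = [2, 3, 4, 5]                     # slave (condensed) DoF

# Reduction basis T, so that x ~ T x_m  (slaves follow masters statically)
Kss_inv_Ksm = np.linalg.solve(K[np.ix_(s, s)], K[np.ix_(s, m)])
T = np.vstack([np.eye(len(m)), -Kss_inv_Ksm])

K_red = T.T @ K[np.ix_(m + s, m + s)] @ T    # condensed stiffness
M_red = T.T @ M[np.ix_(m + s, m + s)] @ T    # condensed mass

print(K_red.shape, M_red.shape)              # (2, 2) reduced-order model
```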

8.
In the past, three-dimensional numerical simulations of complex flows often required many simplifying treatments, which severely limited the applicability of the numerical models and prevented the results from fully reflecting the characteristics of the flow field. In this paper, the finite volume method is used to solve the three-dimensional elliptic flow governing equations directly, with a buoyancy-corrected κ-ε model for turbulence. The model is first applied to an equal-density shoreline discharge in a crossflow to verify the correctness of the numerical model and the computer code; the results correctly predict the recirculation zone downstream of the outlet and agree with the computational and experimental results of Ref. [7]. The model is then applied to heated-water discharge and water-intake problems in a crossflow; the results are reasonable and reveal the internal features of the flow field in fine detail.

9.
Life-cycle cost models typically minimize system repair costs as a function of various cost coefficients associated with a given repair option, such as discarding upon failure or repairing the components. Fixed costs such as set-up costs pose special problems for the minimization of life-cycle costs: first of all, fixed costs are characterized by step functions linked to capacity constraints, while variable costs are represented by continuous functions. Secondly, both categories of cost should be evaluated simultaneously for all components of a physical system and for all repair options. Thirdly, there are operational-research tools such as mixed integer programming which can solve large problems of this type under a set of acceptable simplifying assumptions. Heuristic methods can be used to minimize the search time for a global optimum solution. This paper describes an approach which has been successfully applied to maintenance planning, vibration analysis, and expert fault diagnostics. One of these applications is discussed in detail as an illustration of the method proposed.
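A minimal sketch of the fixed-cost structure described above: a binary set-up variable activates the repair option through a big-M link in a tiny discard-vs-repair MIP. The one-component model and all cost figures are illustrative, not the paper's formulation.

```python
# Fixed (step) cost via a binary set-up variable y: x_repair <= D * y
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

D = 100.0        # failures to handle
c_discard = 8.0  # unit cost of discard-and-replace
c_repair = 3.0   # unit (variable) cost of repair
f_setup = 350.0  # fixed set-up cost of the repair facility

# Variables: [x_discard, x_repair, y]; minimize total cost
c = np.array([c_discard, c_repair, f_setup])
constraints = [
    LinearConstraint([[1, 1, 0]], D, D),         # every failure is handled
    LinearConstraint([[0, 1, -D]], -np.inf, 0),  # big-M link to the set-up
]
res = milp(c, constraints=constraints,
           integrality=np.array([0, 0, 1]),      # y is binary
           bounds=Bounds([0, 0, 0], [np.inf, np.inf, 1]))

print(res.x, res.fun)   # repair wins here: 100*3 + 350 = 650 < 100*8 = 800
```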

10.
In optimization, it is common to deal with uncertain and inaccurate factors that make it difficult to assign a single value to each parameter in the model; it may be more suitable to assign a set of values to each uncertain parameter. A scenario is defined as one realization of the uncertain parameters. In this context, a robust solution has to be as good as possible on a majority of scenarios and never be too bad. Such a characterization admits numerous interpretations and therefore gives rise to various approaches to robustness, which differ in the models used to represent uncertain factors, in the methodology used to measure robustness, and in the analysis and design of solution methods. In this paper, we focus on the application of a recent criterion to the shortest path problem with uncertain arc lengths. We first present two usual uncertainty models: the interval model and the discrete scenario set model. For each model, we then apply a criterion called bw-robustness (originally proposed by B. Roy), which defines a new measure of robustness. For each uncertainty model, we propose a formulation in terms of a large-scale integer linear program, and we analyze the theoretical complexity of the resulting problems. Our computational experiments are performed on a set of large-scale graphs. The results show that proven solvers, e.g. Cplex, are able to solve the proposed mathematical models, which is promising for robustness analysis. Finally, we show that our formulations can be applied to general linear programs in which the objective function includes uncertain coefficients.
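A naive sketch of the discrete scenario-set setting: enumerate the simple paths of a small graph and pick the one minimizing worst-case cost (min-max, the simplest robustness criterion). This only illustrates the uncertainty model; the bw-robustness criterion and the paper's large-scale ILP formulations are not reproduced.

```python
# Worst-case (min-max) path selection under a discrete scenario set
import networkx as nx

G = nx.DiGraph()
# Each arc carries one cost per scenario (two scenarios, illustrative values)
arcs = {("s", "a"): (1, 4), ("s", "b"): (2, 2),
        ("a", "t"): (1, 5), ("b", "t"): (3, 3), ("a", "b"): (1, 1)}
for (u, v), costs in arcs.items():
    G.add_edge(u, v, costs=costs)

best_path, best_worst = None, float("inf")
for path in nx.all_simple_paths(G, "s", "t"):
    edge_costs = [G[u][v]["costs"] for u, v in zip(path, path[1:])]
    worst = max(sum(c) for c in zip(*edge_costs))   # worst scenario cost
    if worst < best_worst:
        best_path, best_worst = path, worst

print(best_path, best_worst)   # s-b-t: cost 5 in both scenarios
```

Enumeration is exponential in general, which is why the paper resorts to integer linear programming for large graphs.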

11.
We study the closure problem for continuum balance equations that model the mesoscale dynamics of large ODE systems. The underlying microscale model consists of classical Newton equations of particle dynamics. As a mesoscale model we use the balance equations for spatial averages obtained earlier by a number of authors: Murdoch and Bedeaux, Hardy, Noll and others. The momentum balance equation contains a flux (stress), which is given by an exact function of particle positions and velocities. We propose a method for approximating this function by a sequence of operators applied to the average density and momentum. The resulting approximate mesoscopic models are systems in closed form. The closed-form property allows one to work directly with the mesoscale equations without calculating the underlying particle trajectories, which is useful for the modeling and simulation of large particle systems. The proposed closure method utilizes the theory of ill-posed problems, in particular iterative regularization methods for solving linear integral equations of the first kind. The closed-form approximations are obtained in two steps. First, we use Landweber regularization to (approximately) reconstruct the interpolants of the relevant microscale quantities from the average density and momentum. Second, these reconstructions are substituted into the exact formulas for stress. The developed general theory is then applied to non-linear oscillator chains. We conduct a detailed study of the simplest zero-order approximation, and show numerically that it works well as long as the fluctuations of velocity are nearly constant.
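A sketch of the Landweber iteration itself, the reconstruction step the closure method relies on: x_{k+1} = x_k + w A^T (b - A x_k), with the iteration count acting as the regularization parameter. The smoothing kernel and data are illustrative stand-ins for the averaging operator.

```python
# Landweber regularization for a discretized first-kind integral equation
import numpy as np

n = 100
s = np.linspace(0, 1, n)
# Discretized integral operator with a Gaussian smoothing kernel
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05**2)) / n

x_true = np.sin(2 * np.pi * s)                 # quantity to reconstruct
rng = np.random.default_rng(0)
b = A @ x_true + rng.normal(0, 1e-4, n)        # noisy averaged data

w = 1.0 / np.linalg.norm(A, 2) ** 2            # step size w < 2 / ||A||^2
x = np.zeros(n)
for k in range(500):                           # early stopping regularizes
    x = x + w * A.T @ (b - A @ x)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```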

12.
Varying-coefficient single-index models (VCSIM) have been applied in many fields, since they combine the advantages of single-index models and varying-coefficient models. In this paper, an estimation method based on the B-spline approximation technique is proposed, and two calculation methods can be used. The first directly calculates the parametric and nonparametric parts simultaneously by the Newton-Raphson iteration algorithm; the second calculates the two parts separately by the profile method. We suggest the second method when a large number of parameters is involved; otherwise the first is more convenient. Two simulated examples illustrate the performance of the proposed estimation methodology and calculation procedures.
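A sketch of the B-spline building block only: a single varying coefficient beta(u) in y = beta(u) x + noise is approximated by a spline expansion and fitted by least squares. The full VCSIM machinery (index estimation, Newton-Raphson or profiling) is not reproduced; data are simulated for illustration.

```python
# Approximate beta(u) = sum_j b_j B_j(u) and regress y on B_j(u) * x
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n, k = 500, 3                                   # sample size, cubic splines
u = rng.uniform(0, 1, n)                        # index variable
x = rng.normal(size=n)                          # covariate
beta = np.sin(2 * np.pi * u)                    # true varying coefficient
y = beta * x + 0.1 * rng.normal(size=n)

t = np.r_[[0] * (k + 1), np.linspace(0.1, 0.9, 9), [1] * (k + 1)]  # knots
nbasis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(nbasis)[j], k)(u) for j in range(nbasis)])

coef, *_ = np.linalg.lstsq(B * x[:, None], y, rcond=None)   # least squares
beta_hat = B @ coef                             # fitted coefficient curve

print(np.max(np.abs(beta_hat - beta)))          # approximation error
```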

13.
A threshold stochastic volatility (SV) model is used for capturing time-varying volatilities and nonlinearity. Two adaptive Markov chain Monte Carlo (MCMC) methods of model selection are designed for selecting the threshold variable of this family of SV models. The first method is direct estimation, which approximates the posterior probabilities of the competing models; using parallel MCMC sampling to estimate these probabilities, the threshold variable with the highest posterior model probability is selected. The second method uses the deviance information criterion to compare the competing models and select the best one. Simulation results lead us to conclude that, for large samples, the posterior model probability approximation method gives an accurate approximation of the posterior probability in Bayesian model selection and delivers a powerful, sharp model-selection tool. An empirical study of five Asian stock markets provides strong support for the threshold variable formulated as a weighted average of important variables.
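A sketch of the deviance information criterion computed from posterior draws, with a toy normal-mean model standing in for the threshold SV model: DIC = D(theta_bar) + 2 p_D, where p_D = mean(D) - D(theta_bar).

```python
# DIC from posterior draws (toy conjugate normal model, sigma = 1 known)
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=200)               # data

# Posterior of the mean under a flat prior: N(ybar, 1/n)
draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=5000)

def deviance(mu):
    return -2 * norm.logpdf(y, mu, 1.0).sum()

D_draws = np.array([deviance(m) for m in draws])
D_bar = D_draws.mean()                           # posterior mean deviance
D_hat = deviance(draws.mean())                   # deviance at posterior mean
p_D = D_bar - D_hat                              # effective number of parameters
DIC = D_hat + 2 * p_D                            # lower is better

print(p_D, DIC)                                  # p_D ~ 1 for this model
```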

14.
Computation of turbulent reactive flows in industrial burners
This paper presents models suitable for computing steady and unsteady gaseous combustion with finite-rate chemistry. Reynolds averaging and large eddy simulation (LES) techniques are used to model turbulence for the steady and unsteady cases, respectively. In LES, the Reynolds stress terms are modelled by a linear combination of the scale-similarity and eddy-dissipation models, while the cross terms are of the scale-similarity type. In Reynolds averaging, the conventional k–ε two-equation model is used. For the chemical reactions, a 3-step mechanism is used for methane oxidation, and the extended Zeldovich and N2O mechanisms are used for NO formation. The combustion model is a hybrid of the Arrhenius type and a modified eddy-dissipation model that takes into account the effects of reaction rate, flame stretch, and turbulence intensity and scale. Numerical simulations of a flat pulse burner and a swirling burner are discussed.
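A sketch of the generic hybrid finite-rate/eddy-dissipation closure underlying such combustion models: the effective rate is the minimum of the Arrhenius (kinetic) rate and the eddy-dissipation (mixing) rate. The paper's modifications for flame stretch and turbulence scale are not reproduced, and all constants are illustrative.

```python
# Effective fuel consumption rate = min(kinetic rate, mixing-limited rate)
import numpy as np

def hybrid_rate(T, Y_fuel, Y_ox, rho, k, eps,
                A=2.1e11, Ea=2.0e5, Ru=8.314, s=4.0, C_edm=4.0):
    """Fuel consumption rate as min(Arrhenius, eddy-dissipation). Ea in J/mol."""
    # Arrhenius (finite-rate chemistry) term
    w_arr = A * rho**2 * Y_fuel * Y_ox * np.exp(-Ea / (Ru * T))
    # Eddy-dissipation (mixing-limited) term; s = stoichiometric oxidizer/fuel ratio
    w_edm = C_edm * rho * (eps / k) * min(Y_fuel, Y_ox / s)
    return min(w_arr, w_edm)

# Cold region: kinetics limit the rate; hot, well-mixed region: mixing limits it
print(hybrid_rate(T=600.0, Y_fuel=0.05, Y_ox=0.2, rho=1.0, k=1.0, eps=10.0))
print(hybrid_rate(T=2000.0, Y_fuel=0.05, Y_ox=0.2, rho=1.0, k=1.0, eps=10.0))
```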

15.
The unequal-areas facility layout problem is concerned with finding the optimal arrangement of a given number of non-overlapping indivisible departments with unequal area requirements within a facility. We present a convex-optimisation-based framework for efficiently finding competitive solutions for this problem. The framework is based on the combination of two mathematical programming models. The first model is a convex relaxation of the layout problem that establishes the relative position of the departments within the facility, and the second model uses semidefinite optimisation to determine the final layout. Aspect ratio constraints, frequently used in facility layout methods to restrict the occurrence of overly long and narrow departments in the computed layouts, are taken into account by both models. We present computational results showing that the proposed framework consistently produces competitive, and often improved, layouts for well-known large instances when compared with other approaches in the literature.

16.
The longitudinal study has become one of the most commonly adopted designs in medical research. The generalized estimating equations (GEE) method and/or mixed-effects models are often employed in causal inference. The related model-diagnostic procedures are not yet fully formalized, and perhaps never will be: the major difficulties are the wide variety of within-subject dependence structures and/or numbers of repeated measurements. No single testing procedure, e.g., the runs test, can resolve all model-diagnostic problems in longitudinal data analysis; multiple quantitative indexes are needed to accommodate this variety. We propose eight testing procedures for randomness, accompanied by conventional and non-conventional plots, to support model diagnostics in longitudinal data analysis. The proposed procedures are illustrated with four clinical studies in Taiwan.
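A sketch of the classic Wald-Wolfowitz runs test on residual signs, the kind of randomness check referred to above (not claiming to be one of the paper's eight procedures).

```python
# Runs test: too few (or too many) sign runs signal non-random residuals
import numpy as np
from scipy.stats import norm

def runs_test(residuals):
    """Two-sided runs test for randomness of the residual sign pattern."""
    signs = residuals > np.median(residuals)
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.sum(signs[1:] != signs[:-1])       # number of sign runs
    mu = 2 * n1 * n2 / (n1 + n2) + 1                 # mean under randomness
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - mu) / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                    # z statistic, p-value

rng = np.random.default_rng(0)
print(runs_test(rng.normal(size=100)))               # random: large p-value
print(runs_test(np.sin(np.arange(100) / 5.0)))       # trending: tiny p-value
```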

17.
An iterative predictor-corrector technique for eliminating the approximate factorization errors that result from the factorization of linearized θ-methods in multidimensional reaction-diffusion equations is proposed, and its convergence and linear stability are analyzed. Four approximate factorization techniques which do not account for the approximate factorization errors are developed. The first technique uses the full Jacobian matrix of the reaction terms, requires the inversion of, in general, dense matrices, and has approximate factorization errors that are second-order accurate in time. The second and third methods approximate the Jacobian matrix by diagonal or triangular ones which are easily inverted, but their approximate factorization errors are only first-order accurate in time. The fourth approximately factorized method has approximate factorization errors which are second-order accurate in time and requires the inversion of lower and upper triangular matrices. The techniques are applied to a nonlinear, two-species, two-dimensional system of reaction-diffusion equations in order to determine the approximate factorization errors, and those resulting from the approximations to the Jacobian matrix, as functions of the allocation of the reaction terms, space, and time.
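For orientation, the algebra behind the approximate factorization error in two dimensions is the standard identity below (general form, not the paper's specific schemes): for u_t = (A + B)u, factoring the linearized θ-method operator into one-dimensional sweeps introduces an extra cross term.

```latex
% Approximate factorization of a linearized theta-method for u_t = (A + B) u
(I - \theta\,\Delta t\,A)(I - \theta\,\Delta t\,B)\,\Delta u
  = \bigl[I - \theta\,\Delta t\,(A + B)\bigr]\,\Delta u
    + \theta^{2}\,\Delta t^{2}\,A B\,\Delta u,
\qquad \Delta u = u^{n+1} - u^{n}.
```

Since \(\Delta u = O(\Delta t)\), the extra term \(\theta^{2}\Delta t^{2} A B\,\Delta u\) is \(O(\Delta t^{3})\) per step, i.e. the factorization error is second-order accurate in time, as stated for the first and fourth techniques.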

18.
Terrestrial and marine biodiversity provides the basis both for ecosystem functioning and for numerous commodities and services that underpin human well-being. For several decades, alarming trends have been reported worldwide for both biodiversity and ecosystem services. The sustainable management of biodiversity therefore requires a double viewpoint balancing ecological conservation with the welfare of human societies, and understanding the underlying trade-offs, synergies, and interactions requires the development of interdisciplinary research and methods. In that respect, bio-economic or ecological-economic modeling is likely to play a major role. The present paper intends to elicit the key features, strengths, and challenges of bio-economic approaches, especially in mathematical and computational terms. It first recalls the main bio-economic methods, models, and decision instruments used in these types of analyses. The paper then shows to what extent bio-economic sustainability lies between the mathematical frameworks of equilibrium, viability, and optimality. It ends by identifying major new challenges, among which the operationalization of ecosystem-based management, the precautionary principle, and the implementation of governance are especially important.

19.
The combined qualitative-quantitative nonlinear dynamics model proposed in this paper fills a gap in nonlinear dynamics modelling: it allows the qualitative and quantitative models to be combined so that each compensates for the other's weaknesses. The combined model overcomes the qualitative model's inability to be applied and verified quantitatively, as well as the high cost and long time of repeatedly constructing and verifying the quantitative model, making it more practical and efficient, which is of great significance for nonlinear dynamics. The combined modelling and model-analysis method proposed here applies not only to nonlinear dynamics but can also be adopted in the modelling and model analysis of other fields. Additionally, the proposed analytical method satisfactorily resolves the problems with existing analytical methods for the price system's nonlinear dynamics models. The three-dimensional dynamics model of price, supply-demand ratio, and selling rate established in this paper estimates the best commodity prices from the model results, thereby providing a theoretical basis for the government's macro-control of prices. The model also offers theoretical guidance on how to enhance people's purchasing power and consumption levels through price regulation and hence improve living standards.

20.
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.
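A sketch of the scaled forward algorithm for a small left-right HMM: the likelihood computation with which both maximum likelihood and Bayesian methods score sequences. A three-state toy model stands in for a full profile HMM (match/insert/delete states); all parameters are illustrative.

```python
# Scaled forward recursion: log P(obs | model) for a left-right HMM
import numpy as np

A = np.array([[0.8, 0.2, 0.0],       # left-right: no backward transitions
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],            # emission probabilities, binary alphabet
              [0.2, 0.8],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0, 0.0])       # left-right models start in the first state

def log_likelihood(obs):
    """Forward recursion with per-step scaling to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

print(log_likelihood([0, 0, 1, 1, 0]))
```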
