Similar Literature
20 similar records found (search time: 31 ms)
1.
A promising area of research in fuzzy control is the model-based fuzzy controller. At the heart of this approach is a fuzzy relational model of the process to be controlled. Since this model is identified directly from process input-output data, it is likely that ‘holes’ will be present in the identified relational model. These holes are real problems when the model is incorporated into a model-based controller, since the model will be unable to make any predictions whatsoever if the system drifts into an unknown region. The present work deals with the completeness of the fuzzy relational model which forms the core of the controller. It proposes a post-processing scheme to ‘fill in’ the fuzzy relational model once it has been built, thereby improving its applicability for on-line control. A comparative study of the post-processed and conventional relational models is presented for identification of the Box-Jenkins data set and of a real-time, highly non-linear pH control application.
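As a rough illustration of the kind of post-processing described above, the sketch below fills empty rows (‘holes’) of a fuzzy relational matrix by interpolating between the nearest identified rows. The interpolation rule and the toy matrix are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def fill_holes(R):
    """Fill empty rows ('holes') of a fuzzy relational matrix by linearly
    interpolating between the nearest identified rows.  The interpolation
    rule here is an illustrative choice, not the paper's method."""
    R = R.astype(float).copy()
    known = np.where(R.sum(axis=1) > 0)[0]        # rows identified from data
    for i in range(R.shape[0]):
        if R[i].sum() == 0:                       # a 'hole'
            lo = known[known < i]
            hi = known[known > i]
            if len(lo) and len(hi):               # interpolate between neighbours
                a, b = lo[-1], hi[0]
                w = (i - a) / (b - a)
                R[i] = (1 - w) * R[a] + w * R[b]
            elif len(lo):                         # extrapolate from one side
                R[i] = R[lo[-1]]
            else:
                R[i] = R[hi[0]]
    return R

R = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.0, 0.0],   # hole: no data ever fell in this region
              [0.6, 0.4, 0.0]])
R_filled = fill_holes(R)
```

With this rule, the hole in row 1 becomes the midpoint of its two identified neighbours, so the model can still produce a (smoothed) prediction there.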

2.
Estimates of bank cost efficiency can be biased if bank heterogeneity is ignored. I compare X-inefficiency derived from a model constraining the cost frontier to be the same for all banks in the U.S. with a model allowing for different frontiers and error terms across Federal Reserve Districts. I find that the data reject the single cost function model; X-inefficiency measures based on the single cost function model are, on average, higher than those based on the separate cost functions model; the distributions of the one-sided error terms are wider for the single cost function model than for the separate cost functions model; and the ranking of Districts by the level of X-inefficiency differs between the two models. The results suggest that when studying X-inefficiency it is important to account for differences across the markets in which banks operate, and that since X-inefficiency is, by construction, a residual, it will be particularly sensitive to omissions in the basic model.
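The point that X-inefficiency is a residual can be illustrated with a corrected-OLS (COLS) sketch, a textbook simplification of frontier estimation rather than the paper's method; the data, functional form and the helper name `cols_inefficiency` are assumptions for illustration.

```python
import numpy as np

def cols_inefficiency(ln_cost, X):
    """Corrected OLS (COLS) sketch of cost X-inefficiency: fit a log-cost
    function by OLS, shift the frontier to the best performer, and read
    inefficiency off the residuals.  A textbook simplification of the
    stochastic-frontier methods discussed in the paper."""
    A = np.column_stack([np.ones(len(ln_cost)), X])
    beta, *_ = np.linalg.lstsq(A, ln_cost, rcond=None)
    resid = ln_cost - A @ beta
    return resid - resid.min()      # >= 0; zero for the most efficient bank

# synthetic banks: log-cost = frontier + one-sided inefficiency term
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))                  # e.g. log output, log prices
true_u = rng.exponential(0.2, 50)                 # one-sided inefficiency
ln_cost = 1.0 + X @ np.array([0.5, 0.3]) + true_u
ineff = cols_inefficiency(ln_cost, X)
```

Because inefficiency is whatever the fitted cost function leaves unexplained, any omitted variable in `X` would be absorbed straight into `ineff`, which is exactly the sensitivity the abstract warns about.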

3.
This paper focuses on the class of finite-state, discrete-index, reciprocal processes (reciprocal chains). Such a class of processes seems to be a suitable setup in many applications and, in particular, appears well-suited for image processing. While addressing this issue, the aim is twofold: theoretic and practical. As to the theoretic purpose, some new results are provided: first, a general stochastic realization result is provided for reciprocal chains endowed with a known, arbitrary distribution. Such a model has the form of a fixed-degree, nearest-neighbour polynomial model. Next, the polynomial model is shown to be exactly linearizable, which means it is equivalent to a nearest-neighbour linear model in a different set of variables. The latter model turns out to be formally identical to the Levi–Frezza–Krener linear model of a Gaussian reciprocal process, although actually non-linear with respect to the chain's values. As far as the practical purpose is concerned, in order to give an example of application, an estimation issue is addressed: a suboptimal (polynomial-optimal) solution is derived for the smoothing problem of a reciprocal chain partially observed under non-Gaussian noise. To this purpose, two kinds of boundary conditions (Dirichlet and cyclic), specifying the reciprocal chain on a finite interval, are considered, and in both cases the model is shown to be well-posed in a ‘wide sense’. Under this view, some well-known representation results about Gaussian reciprocal processes extend, in a sense, to a ‘non-Gaussian’ case.

4.

The subject of the present paper is a simplified model for a symmetric bistable system with memory or delay, the reference model, which in the presence of noise exhibits a phenomenon similar to what is known as stochastic resonance. The reference model is given by a one-dimensional parametrized stochastic differential equation with point delay, whose basic properties we verify.

With a view to capturing the effective dynamics and, in particular, the resonance-like behavior of the reference model, we construct a simplified or reduced model, the two-state model, first in discrete time, then in the limit of discrete time tending to continuous time. The main advantage of the reduced model is that it enables us to explicitly calculate the distribution of residence times which in turn can be used to characterize the phenomenon of noise-induced resonance.

Drawing on what has been proposed in the physics literature, we outline a heuristic method for establishing the link between the two-state model and the reference model. The resonance characteristics developed for the reduced model can thus be applied to the original model.
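A minimal numerical sketch of such a reference model, assuming a generic double-well drift with a point delay (not necessarily the paper's exact equation), can be written with the Euler–Maruyama scheme; residence times in the two wells are then read off from sign changes of the path.

```python
import numpy as np

def simulate_delay_sde(a=0.1, sigma=0.3, tau=1.0, dt=0.01, T=50.0, seed=0):
    """Euler-Maruyama simulation of a bistable SDE with a point delay,
    dX = (X - X^3 + a*X(t - tau)) dt + sigma dW.  The double-well drift
    and parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_lag = int(round(tau / dt))
    n = int(round(T / dt))
    x = np.empty(n_lag + n)
    x[:n_lag + 1] = 1.0                  # constant initial segment on [-tau, 0]
    for k in range(n_lag, n_lag + n - 1):
        drift = x[k] - x[k] ** 3 + a * x[k - n_lag]
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x[n_lag:]

def residence_times(path, dt=0.01):
    """Durations spent in each well, read off from sign changes of the path."""
    s = np.sign(path)
    switches = np.where(np.diff(s) != 0)[0]
    return np.diff(switches) * dt

path = simulate_delay_sde()
```

The empirical distribution of `residence_times(path)` is the quantity the reduced two-state model makes explicitly computable.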

5.
In the majority of research on incompressible magnetohydrodynamic (MHD) flows, the simplified model with the low magnetic Reynolds number assumption has been adopted because it reduces the number of equations to be solved. However, because the effect of the flow on the magnetic field is also neglected, the solutions of the simplified model may differ from those of the full model. As an example, the flow of an electrically conducting fluid past a circular cylinder under a magnetic field is investigated numerically using both the simplified and full models in this paper. To solve the problems, two second-order compact finite difference algorithms, based on the streamfunction-velocity formulation of the simplified model and the quasi-streamfunction-velocity formulation of the full model, are developed respectively. Numerical simulations are carried out over a wide range of Hartmann number for steady-state laminar problems with both models. For the full model, the magnetic Reynolds number (Rem) is chosen from 0.01 to 10. The computed results show that, for this flow problem, solutions of the simplified MHD model are not exactly the same as those of the full MHD model in most cases, even if Rem in the full model is very low. Only in the special case where a strong external magnetic field is exerted perpendicular to the dominant flow direction can the simplified MHD model be regarded as an approximation of the full MHD model with low Rem.

6.
In this paper, a model is said to be validated for control design if, using the model-based controller, the closed loop performance of the real plant satisfies a specified performance bound. To improve the model for control design, only closed loop response data is available to deduce a new model of the plant. Hence the procedure described herein involves three steps in each iteration: (i) closed loop identification; (ii) plant model extraction from the closed loop model; (iii) controller design. Thus our criteria for model validation involve both the control design procedure by which the closed loop system performance is evaluated, and the identification procedure by which a new model of the plant is deduced from the closed loop response data. This paper proposes new methods for both parts, and also proposes an iterative algorithm to connect the two. To facilitate both the identification and control tasks, the new finite-signal-to-noise (FSN) model of linear systems is utilized. The FSN model allows errors in variables whose noise covariances are proportional to signal covariances. Allowing the signal-to-noise ratios to be bounded but uncertain, a control theory that guarantees a variance upper bound is developed for the discrete version of this new FSN model. The identification of the closed loop system is accomplished by a new type of q-Markov Cover, adjusted to accommodate the assumed FSN structure of the model. The model of the plant is extracted from the closed loop identification model. This model is then used for control design, and the process is repeated until the closed loop performance validates the model. If the iterations produce no such controller, we say that this specific procedure cannot produce a model valid for control design, and the level of the required performance must be reduced.

7.
It is very common to assume deterministic demand in the literature on integrated targeting–inventory models. However, if variability in demand is high, there may be significant disruptions from using the deterministic solution in a probabilistic environment. The model would then not be applicable to real-world situations, and adjustments must be made. The purpose of this paper is to develop a model for the integrated targeting–inventory problem when demand is a random variable. In particular, the proposed model jointly determines the optimal process mean, lot size and reorder point in a (Q, R) continuous-review model. In order to investigate the effect of uncertainty in demand, the proposed model is compared with three baseline cases. The first considers a hierarchical model in which the producer determines the process mean and lot-sizing decisions separately; this hierarchical model is used to show the effect of integrating process targeting with production/inventory decisions. The second baseline case is the deterministic-demand case, which is used to show the effect of variation in demand on the optimal solution. The last baseline case is the situation where the variation in the filling amount is negligible; this case demonstrates the sensitivity of the total cost with respect to variation in the process output. A procedure is also developed to determine the optimal solution for the proposed models. Empirical results show that ignoring randomness in the demand pattern leads to underestimating the expected total cost. Moreover, the results indicate that the performance of a process can be improved significantly by reducing its variation.
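For intuition, a standard (Q, R) cost approximation with normally distributed lead-time demand shows how ignoring demand variability underestimates the expected total cost; all parameter values below are illustrative, not taken from the paper.

```python
import math

def normal_loss(z):
    """Standard normal loss function L(z) = phi(z) - z*(1 - Phi(z))."""
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi - z * (1 - Phi)

def qr_cost(Q, R, D=1000, K=50, h=2, p=25, mu_L=100, sigma_L=30):
    """Approximate expected annual cost of a (Q, R) continuous-review policy
    with normally distributed lead-time demand (illustrative parameters:
    annual demand D, order cost K, holding cost h, shortage penalty p)."""
    z = (R - mu_L) / sigma_L
    shortage = sigma_L * normal_loss(z)          # expected units short per cycle
    return (K * D / Q                            # ordering cost
            + h * (Q / 2 + R - mu_L)             # holding cost
            + p * (D / Q) * shortage)            # shortage penalty

det = qr_cost(Q=224, R=100, sigma_L=1e-9)   # near-deterministic lead-time demand
stoch = qr_cost(Q=224, R=100)               # same policy with sigma_L = 30
```

Evaluating the same policy under the two demand assumptions, the deterministic cost omits the whole shortage-penalty term, which is the underestimation effect the abstract reports.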

8.
All forecast models, whether they represent the state of the weather, the spread of a disease, or levels of economic activity, contain unknown parameters. These parameters may be the model's initial conditions, its boundary conditions, or other tunable parameters which have to be determined. Four-dimensional variational data assimilation (4D-Var) is a method of estimating this set of parameters by optimizing the fit between the solution of the model and a set of observations which the model is meant to predict. Although the method of 4D-Var described in this paper is not restricted to any particular system, the application described here has a numerical weather prediction (NWP) model at its core, and the parameters to be determined are the initial conditions of the model. The purpose of this paper is to give a review covering the assimilation of Doppler radar wind data into an NWP model. Some associated problems, such as sensitivity to small variations in the initial conditions or to small changes in the background variables, and biases due to nonlinearity, are also studied.
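The essence of 4D-Var can be sketched on a toy scalar problem, where the cost function is quadratic in the initial condition and the minimiser is available in closed form; the dynamics, background and variances below are illustrative assumptions, far simpler than any NWP setting.

```python
import numpy as np

def four_d_var(y, a=0.9, x_b=0.0, b_var=1.0, r_var=0.1):
    """Toy 4D-Var for scalar linear dynamics x_{k+1} = a*x_k: minimise
    J(x0) = (x0 - x_b)^2 / (2*b_var) + sum_k (a^k * x0 - y_k)^2 / (2*r_var)
    over the initial condition x0 (background x_b, observations y).
    J is quadratic in x0, so dJ/dx0 = 0 gives the analysis directly."""
    k = np.arange(len(y))
    h = a ** k                           # model maps x0 to a^k * x0 at time k
    denom = 1.0 / b_var + (h @ h) / r_var
    numer = x_b / b_var + (h @ y) / r_var
    return numer / denom

# noise-free observations generated from a true initial condition of 2.0
truth = 2.0 * 0.9 ** np.arange(6)
x0_hat = four_d_var(truth)
```

The analysis `x0_hat` sits between the background (0.0) and the observation-implied value (2.0), weighted by the background and observation error variances, which is the balance every 4D-Var system strikes.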

9.
Mathematical Modelling, 1987, 8(9): 669–690
We describe a new method for fitting differentiable fuzzy model functions to crisp data. The model functions can be either scalar or multidimensional and need not be linear. The data are n-component vectors. An efficient algorithm is achieved by restricting the fuzzy model functions to sets which depend on a fuzzy parameter vector and assuming that the vector has a conical membership function. The fuzzy model function, equated to zero, defines a fuzzy hypersurface in the n-space. The model fitting is done in a least-squares sense by minimizing the squared deviations from unity of the membership values of the fitted hypersurface at the observed points. Under the outlined restriction, the problem can be reduced to an ordinary least-squares formulation for which software is available. Application of the new method is illustrated by two examples. In the first, we are concerned with the hazards caused by enemy fire on armor. An important item of information for assessing the risks involved is a predictive model for the hole size in terms of the physical properties of the projectile and target plate, respectively. We use a non-linear fuzzy model function for this analysis. The second example involves a linear model function and is of theoretical interest because it allows comparison of the new method with a previously developed method.

10.
Economic agents in electronic markets generally consider reputation to be a significant factor in selecting trading partners. Most traditional online businesses publish reputation profiles for traders that reflect the average of the ratings received in previous transactions. Because of the importance of these ratings, there is an incentive for traders to engage in strategic behavior (for example, shilling) to artificially inflate their ratings. It is therefore important for an online business to be able to provide a robust estimate of a trader's reputation that is not easily affected by strategic behavior or noisy ratings. This paper proposes such an adaptive ratings-based reputation model. The model is based on a trader's transaction history, witness testimony, and other weighting factors. Learning is integrated to make the ratings model adaptive and robust in a dynamic environment. To validate the proposed model and to demonstrate the significance of its constructs, a multi-agent system is built to simulate the interactions among buyers and sellers in an electronic marketplace. The performance of the proposed model is compared to that of the reputation model used in most online marketplaces such as Amazon, and to Huynh's model proposed in the literature.
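A minimal sketch of a ratings-based reputation score, assuming exponential recency decay and a fixed blend of direct experience with witness testimony; the weights and decay factor are illustrative assumptions, not the constructs of the paper's model.

```python
def reputation(direct, witness, decay=0.9, w_direct=0.7):
    """Reputation sketch: exponentially decayed direct-experience ratings
    blended with witness testimony.  Ratings are in [0, 1], most recent
    last; `decay` and `w_direct` are illustrative weighting choices."""
    def decayed(ratings):
        if not ratings:
            return 0.0
        num = den = 0.0
        w = 1.0
        for r in reversed(ratings):   # older ratings count geometrically less
            num += w * r
            den += w
            w *= decay
        return num / den
    return w_direct * decayed(direct) + (1 - w_direct) * decayed(witness)

score = reputation(direct=[1.0, 0.8, 0.9], witness=[0.6, 0.7])
```

Recency weighting is one simple way to make the score adaptive: a trader who behaves well early and then defects (shilling-style) sees the recent bad ratings dominate, unlike a plain lifetime average.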

11.
The paper proposes an additive continuous-time stochastic mortality model which revises that of Ballotta and Haberman (2006) (the B&H model). The structure of the B&H model implies that the future hazard rate is proportional to the stochastic component, inducing two questionable features. First, in the B&H model, the uncertainty of the future hazard rate is enlarged as the base hazard rate increases; however, an increase in the base hazard rate may not cause the dramatic increase suggested by the exponential component of the B&H model. Second, in the B&H model, the uncertainty of the future hazard rate is larger for older groups and is greatly augmented by the interaction of age and time; but the uncertainty of the future hazard rate may not increase with age. These problems can be resolved by our additive structure, which is the sum of a deterministic estimator and a stochastic component. Since the additive structure makes the stochastic component independent of age and of the base hazard rate, in our model the uncertainty of the future hazard rate is not affected by an increase in age or in the base hazard rate. We further demonstrate an application of our model by calculating reserves for longevity risk for pure endowments and various common annuity products in the UK. We also compare our results with those of the B&H model.

12.
In practical location problems on networks, the response time between any pair of vertices and the demands of vertices are usually indeterminate. This paper employs uncertainty theory to address the location problem of emergency service facilities under uncertainty. We first model the location set covering problem in an uncertain environment, which is called the uncertain location set covering model. Using the inverse uncertainty distribution, the uncertain location set covering model can be transformed into an equivalent deterministic location model, and on the basis of this equivalence it can be solved. Second, the maximal covering location problem is investigated in an uncertain environment. We first study the uncertainty distribution of the covered demand that is associated with the covering-constraint confidence level α. In addition, we model the maximal covering location problem in an uncertain environment using two different modelling ideas, namely, the (α, β)-maximal covering location model and the α-chance maximal covering location model. It is also proved that the (α, β)-maximal covering location model can be transformed into an equivalent deterministic location model, and then it can be solved. We also point out that there exists an equivalence relation between the (α, β)-maximal covering location model and the α-chance maximal covering location model, which leads to a method for solving the α-chance maximal covering location model. Finally, the ideas of the uncertain models are illustrated by a case study.
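The transformation to a deterministic equivalent can be sketched for linear uncertain response times L(a, b), whose inverse uncertainty distribution is (1 − α)a + αb; the toy instance and helper names below are illustrative assumptions, and the exhaustive solver stands in for a proper integer program.

```python
from itertools import combinations

def inv_linear(a, b, alpha):
    """Inverse uncertainty distribution of a linear uncertain variable
    L(a, b): Phi^{-1}(alpha) = (1 - alpha)*a + alpha*b."""
    return (1 - alpha) * a + alpha * b

def covering_sets(times, radius, alpha):
    """Deterministic equivalent of uncertain coverage: site j covers demand i
    if the alpha-quantile of its uncertain response time is within radius."""
    return [{i for i, (a, b) in enumerate(col) if inv_linear(a, b, alpha) <= radius}
            for col in times]

def min_cover(sets, n_demands):
    """Smallest selection of sites covering all demands
    (exhaustive search, fine for toy instances)."""
    for k in range(1, len(sets) + 1):
        for combo in combinations(range(len(sets)), k):
            if set().union(*(sets[j] for j in combo)) == set(range(n_demands)):
                return combo
    return None

# toy instance: 3 candidate sites, 3 demand points, interval response times
times = [[(2, 6), (8, 12), (1, 3)],    # site 0's uncertain time to each demand
         [(9, 11), (2, 4), (3, 7)],
         [(1, 5), (1, 5), (9, 13)]]
sites = min_cover(covering_sets(times, radius=5, alpha=0.9), n_demands=3)
```

Raising α shrinks the coverage sets (each site must meet the radius at a stricter quantile), so the deterministic equivalent becomes more conservative, which is the trade-off the confidence level controls.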

13.
A simple methodology is presented for sensitivity analysis of models that have been fitted to data by statistical methods. Such analysis is a decision support tool that can focus the effort of a modeller who wishes to further refine a model and/or to collect more data. A formula is given for the calculation of the proportional reduction in the variance of the model ‘output’ that would be achievable with perfect knowledge of a subset of the model parameters. This is a measure of the importance of the set of parameters, and is shown to be asymptotically equal to the squared correlation between the model output and its best predictor based on the omitted parameters. The methodology is illustrated with three examples of OR problems: an age-based equipment replacement model, an ARIMA forecasting model and a cancer screening model. The sampling error of the calculated percentage of variance reduction is studied theoretically, and a simulation study is then used to exemplify the accuracy of the method as a function of sample size.
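The variance-reduction measure can be estimated by nested Monte Carlo as Var(E[Y | θ_S]) / Var(Y); the sketch below checks it on a toy model where the true reduction is 1/2. The function names and sample sizes are illustrative, not the paper's formula or examples.

```python
import numpy as np

def variance_reduction(f, sample_theta, n_outer=1000, n_inner=100, seed=1):
    """Nested Monte Carlo estimate of the proportional variance reduction
    from perfect knowledge of a parameter subset: Var(E[Y|theta_S]) / Var(Y).
    `sample_theta(rng)` draws (theta_S, theta_rest); `f` maps both to the
    model output.  Names and sample sizes are illustrative choices."""
    rng = np.random.default_rng(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        ts, _ = sample_theta(rng)                 # fix the known subset
        ys = [f(ts, sample_theta(rng)[1]) for _ in range(n_inner)]
        cond_means.append(np.mean(ys))            # approximates E[Y | theta_S]
        all_y.extend(ys)
    return float(np.var(cond_means) / np.var(all_y))

# toy model: Y = a + b with independent standard normal a, b, so perfect
# knowledge of a removes half of Var(Y)
def sample(rng):
    return rng.standard_normal(), rng.standard_normal()

vr = variance_reduction(lambda a, b: a + b, sample)
```

A value near 0.5 flags the first parameter as accounting for about half of the output variance, which is the kind of ranking a modeller would use to prioritise further data collection.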

14.
The Herschel–Bulkley model can be used to describe the rheological behaviour of certain non-Newtonian fluids. When fitting to experimental data, its parameters need to be determined and this is a non-linear problem. The conventional approach is to solve the resulting normal equations numerically. An alternative method is presented for the Herschel–Bulkley model which eliminates the complexity associated with a general numerical method, and so offers potential benefits when dealing with the model in practice.
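One simple way to avoid a general nonlinear solver (not necessarily the paper's exact scheme) exploits the model's structure: for a fixed yield stress τ₀, the Herschel–Bulkley relation τ = τ₀ + K·γ̇ⁿ becomes log-linear, ln(τ − τ₀) = ln K + n·ln γ̇, so τ₀ can be scanned on a grid with the rest solved in closed form.

```python
import numpy as np

def fit_hb(gamma, tau, n_grid=2000):
    """Fit tau = tau0 + K * gamma**n by scanning tau0 on a grid and solving
    the remaining log-linear least-squares problem in closed form.  An
    illustrative reduction, not necessarily the paper's exact method."""
    gamma, tau = np.asarray(gamma, float), np.asarray(tau, float)
    best = (np.inf, None)
    for tau0 in np.linspace(0.0, tau.min() * 0.999, n_grid):
        y = np.log(tau - tau0)                    # valid since tau0 < min(tau)
        A = np.column_stack([np.ones_like(gamma), np.log(gamma)])
        (lnK, n), *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = tau - (tau0 + np.exp(lnK) * gamma ** n)
        sse = float(resid @ resid)                # score in the original units
        if sse < best[0]:
            best = (sse, (tau0, np.exp(lnK), n))
    return best[1]

# synthetic rheometer data from tau0 = 2, K = 1.5, n = 0.5 (noise-free)
g = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
t = 2.0 + 1.5 * g ** 0.5
tau0, K, n = fit_hb(g, t)
```

The reduction trades a three-parameter nonlinear search for a one-dimensional scan plus ordinary least squares, which is the kind of simplification the abstract advertises.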

15.
A phenomenological model simulating the time-dependent consequences of the HIV challenge on the immune system is presented. One of the important features of the model is its ability to handle T helper cell production and apoptosis (genetically determined suicide). The values of the independent, generally time-dependent, model parameters were chosen to be compatible with known experimental data. A new approach to the numerical solution of the resulting coupled, nonlinear model equations is presented, and simulations of a typical viral challenge that is cleared, and of one that leads to infection and AIDS, are exhibited. It is shown that a change in the saturated value of a single model parameter is sufficient to change a simulated challenge on its way to being cleared into one that leads to infection instead (and vice versa). If the saturated values of all of the independent model parameters are known at the beginning of a challenge, the outcome of the challenge can be predicted in advance. If the virulence of the HIV strain (defined in this paper) is above a critical threshold at inoculation, infection will result regardless of the initial viral load. This latter result could explain why accidental sticks with HIV-contaminated needles sometimes result in infection regardless of the counter-measures undertaken. A model simulating the time evolution of the collapse of the T helper cell density leading to AIDS is introduced. This model consists of immunological and mathematical parts and is compatible with experimental data. The immediate cause of the beginning of this collapse is postulated to be a spontaneous mutation of the virus into a more virulent form that leads not only to an explosion in the viral load but also to a dramatic increase in the level of induced apoptosis of T helper cells. The results of this model are consistent with the known experimental behavior of the viral load and T helper cell densities in the final stage of HIV infection.
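As a stand-in for the paper's phenomenological model, the standard target-cell-limited viral dynamics equations exhibit the same threshold behavior: infection takes off iff the basic reproduction number R₀ exceeds 1, independently of the initial viral load. All parameter values below are illustrative assumptions, not the paper's.

```python
def simulate(beta, lam=10.0, d=0.1, delta=0.5, p=20.0, c=5.0,
             V0=1e-3, dt=0.01, T_end=400.0):
    """Forward-Euler integration of the standard target-cell-limited model:
      dT/dt = lam - d*T - beta*T*V   (uninfected T helper cells)
      dI/dt = beta*T*V - delta*I     (infected cells)
      dV/dt = p*I - c*V              (free virus)
    Infection establishes iff R0 = lam*beta*p / (d*delta*c) > 1."""
    T, I, V = lam / d, 0.0, V0            # start at the uninfected steady state
    for _ in range(int(T_end / dt)):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return T, I, V

def R0(beta, lam=10.0, d=0.1, delta=0.5, p=20.0, c=5.0):
    return lam * beta * p / (d * delta * c)

_, _, V_clear = simulate(beta=0.0005)   # R0 = 0.4 < 1: challenge cleared
_, _, V_inf = simulate(beta=0.005)      # R0 = 4  > 1: infection establishes
```

Only the infectivity parameter `beta` differs between the two runs, mirroring the abstract's observation that a change in a single parameter separates clearance from infection, with the outcome set by a virulence threshold rather than the inoculum size.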

16.
The creation of a holistic model which is able to represent the global dynamic behavior as well as local effects in certain regions leads to finite element models consisting of domains with different local meshes and a combination of different model dimensions. The different model domains have to be coupled, which introduces an additional coupling error. The Arlequin method appears to be a flexible tool with some advantages over alternative methods. In this paper the application of the Arlequin method to the coupling of a 3D continuum model and a beam model is studied. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
Data envelopment analysis (DEA) and multiple objective linear programming (MOLP) are tools that can be used in management control and planning. Whilst these two types of model are similar in structure, DEA is directed at assessing past performance as part of the management control function, and MOLP at planning future performance targets. This paper is devoted to investigating equivalence models and interactive tradeoff analysis procedures in MOLP, such that DEA-oriented performance assessment and target setting can be integrated in a way that takes the decision makers' preferences into account in an interactive fashion. Three equivalence models are investigated between the output-oriented dual DEA model and minimax reference point formulations, namely the super-ideal point model, the ideal point model and the shortest distance model. These models can be used to support efficiency analysis in the same way as the conventional DEA model does, and also to support tradeoff analysis for setting target values by individuals or groups. A case study is conducted to illustrate how DEA-oriented efficiency analysis can be conducted using the MOLP methods and how such performance assessment can be integrated into an interactive procedure for setting realistic target values.
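For a feel of what the DEA side measures, the special single-input, single-output case of the CCR model reduces to a closed form: each unit's output/input ratio relative to the best observed ratio (the general multi-factor model needs one linear program per unit). The data below are illustrative.

```python
def ccr_efficiency(inputs, outputs):
    """Single-input, single-output CCR DEA efficiencies: each decision-making
    unit's output/input ratio scaled by the best ratio observed.  In this
    special case no linear program is needed; the general model requires
    an LP per unit."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# three decision-making units: (input, output) = (2, 4), (4, 10), (5, 8)
eff = ccr_efficiency([2.0, 4.0, 5.0], [4.0, 10.0, 8.0])
```

The efficient unit scores 1.0 and defines the frontier; the scores of the others say how far their past performance falls short, which is the assessment half that the paper's MOLP formulations then extend to target setting.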

18.
Our purpose is to derive a model describing the evolution of charged particles in a plasma, at various scales following their kinetic energy. Fast particles will be described through a collisional kinetic equation of Boltzmann type. This equation will be coupled with a drift-diffusion model that describes the evolution of slower particles. The main interest of this approach is to reduce the cost of numerical simulations. This gain is due to the use of a macroscopic model for slow particles instead of a kinetic model for all the particles, which would involve a larger number of variables. To cite this article: N. Crouseilles, C. R. Acad. Sci. Paris, Ser. I 334 (2002) 827–832.

19.
In this paper, an integer programming model for the hierarchical workforce problem under compressed workweeks is developed. The model is based on the integer programming formulation developed by Billionnet [A. Billionnet, Integer programming to schedule a hierarchical workforce with variable demands, European Journal of Operational Research 114 (1999) 105–114] for the hierarchical workforce problem. In our model, workers can be assigned to alternative shifts in a day during the course of a week, whereas in Billionnet's model all workers are assigned to one shift type. The main idea of this paper is to use compressed workweeks in order to save worker costs, a case which is also relevant in practice. The proposed model is illustrated on Billionnet's example problem, and the results obtained are compared with those of Billionnet's model.

20.
Semiparametric models describing the functional relationship between k groups of observations are broadly applied in statistical analysis, ranging from nonparametric ANOVA to proportional hazard (PH) rate models in survival analysis. In this paper we deal with the empirical assessment of the validity of such a model, which will be denoted as a "structural relationship model". To this end, Hadamard differentiability of a suitable goodness-of-fit measure in the k-sample case is proved. This yields asymptotic limit laws which are applied to construct tests for various semiparametric models, including the Cox PH model. Two types of asymptotics are obtained: first when the hypothesis of the semiparametric model under investigation holds true, and second when a fixed alternative is present. The latter result can be used to validate the presence of a semiparametric model instead of simply checking the null hypothesis "the model holds true". Finally, various bootstrap approximations are numerically investigated and a data example is analyzed.
