Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper presents a fault diagnosis architecture for a class of hybrid systems with nonlinear uncertain time-driven dynamics, measurement noise, and autonomous and controlled mode transitions. The proposed approach features a hybrid estimator based on a modified hybrid automaton framework. The fault detection scheme employs a filtering approach that attenuates the effect of the measurement noise and allows tighter mode-dependent thresholds for the detection of both discrete and parametric faults while guaranteeing no false alarms due to modeling uncertainty and mode mismatches. Both the hybrid estimator and the fault detection scheme are linked with an autonomous guard events identification (AGEI) scheme that handles the effects of mode mismatches due to autonomous mode transitions and allows effective mode estimation. Finally, the fault isolation scheme anticipates which fault events may have occurred and dynamically employs the appropriate isolation estimators for isolating the fault by calculating suitable thresholds and estimating the parametric fault magnitude through adaptive approximation methods. Simulation results from a five-tank hybrid system illustrate the effectiveness of the proposed approach.
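To make the thresholding idea concrete, the following minimal sketch (not the paper's full hybrid scheme) compares a low-pass-filtered output residual against a mode-dependent threshold, so a fault is declared only when the filtered residual exceeds what modeling uncertainty in the current mode can explain. The filter constant, thresholds, and signals are illustrative assumptions.

```python
# Minimal residual-thresholding sketch (illustrative; not the paper's scheme).
import numpy as np

def detect_fault(y_meas, y_est, mode_seq, thresholds, lam=0.9):
    """Return the first time index at which a fault is declared, or None."""
    eps_filt = 0.0
    for k, (y, yhat, mode) in enumerate(zip(y_meas, y_est, mode_seq)):
        eps = abs(y - yhat)                            # raw output residual
        eps_filt = lam * eps_filt + (1 - lam) * eps    # low-pass filter attenuates noise
        if eps_filt > thresholds[mode]:                # mode-dependent detection threshold
            return k
    return None

# Illustrative use with two modes and a bias fault injected at k = 60.
rng = np.random.default_rng(0)
y_est = np.zeros(100)
y_meas = rng.normal(0, 0.05, 100)
y_meas[60:] += 0.5
modes = [0] * 50 + [1] * 50
print(detect_fault(y_meas, y_est, modes, thresholds={0: 0.2, 1: 0.25}))
```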

2.
The focal problem for centralized multisensor multitarget tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. Large classes of these association problems can be formulated as multidimensional assignment problems, which are known to be NP-hard for three dimensions or more. The assignment problems that result from tracking are large scale, sparse and noisy. Solution methods must execute in real-time. The Greedy Randomized Adaptive Local Search Procedure (GRASP) has proven highly effective for solving many classes of NP-hard optimization problems. This paper introduces four GRASP implementations for the multidimensional assignment problem, which are combinations of two constructive methods (randomized reduced cost greedy and randomized max regret) and two local search methods (two-assignment-exchange and variable depth exchange). Numerical results are shown for two random problem classes and one tracking problem class.
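As a rough illustration of how a GRASP couples randomized greedy construction with local search, the sketch below solves a small, dense 3-dimensional axial assignment problem: a restricted candidate list plays the role of the randomized greedy step and a two-assignment exchange serves as the local search. This is a simplified stand-in for the paper's four implementations; the alpha parameter, iteration count, and dense cost tensor are illustrative assumptions.

```python
# Minimal GRASP sketch for a dense 3-dimensional axial assignment problem.
import random
import itertools

def grasp_3d_assignment(cost, alpha=0.3, iterations=50, seed=0):
    n = len(cost)
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(iterations):
        # --- randomized greedy construction ---
        free_j, free_k = set(range(n)), set(range(n))
        sol = []
        for i in range(n):
            cands = sorted(((cost[i][j][k], j, k)
                            for j in free_j for k in free_k))
            # restricted candidate list: the cheapest alpha-fraction of remaining triples
            rcl = cands[:max(1, int(alpha * len(cands)))]
            _, j, k = rng.choice(rcl)
            sol.append((i, j, k))
            free_j.remove(j); free_k.remove(k)
        # --- two-assignment exchange local search (swap the k's of two triples) ---
        improved = True
        while improved:
            improved = False
            for a, b in itertools.combinations(range(n), 2):
                ia, ja, ka = sol[a]; ib, jb, kb = sol[b]
                delta = (cost[ia][ja][kb] + cost[ib][jb][ka]
                         - cost[ia][ja][ka] - cost[ib][jb][kb])
                if delta < 0:
                    sol[a], sol[b] = (ia, ja, kb), (ib, jb, ka)
                    improved = True
        val = sum(cost[i][j][k] for i, j, k in sol)
        if val < best_val:
            best, best_val = sol[:], val
    return best, best_val

# Illustrative use on a random 6x6x6 cost tensor.
r = random.Random(1)
cost = [[[r.random() for _ in range(6)] for _ in range(6)] for _ in range(6)]
print(grasp_3d_assignment(cost))
```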

3.
The ever-increasing demand in surveillance is to produce highly accurate target and track identification and estimation in real-time, even for dense target scenarios and in regions of high track contention. The use of multiple sensors, through more varied information, has the potential to greatly enhance target identification and state estimation. For multitarget tracking, the processing of multiple scans all at once yields high track identification. However, to achieve this accurate state estimation and track identification, one must solve an NP-hard data association problem of partitioning observations into tracks and false alarms in real-time. The primary objective in this work is to formulate a general class of these data association problems as multidimensional assignment problems to which new, fast, near-optimal, Lagrangian relaxation based algorithms are applicable. The dimension of the formulated assignment problem corresponds to the number of data sets being partitioned, with the constraints defining such a partition. The linear objective function is developed from Bayesian estimation and is the negative log posterior or likelihood function, so that the optimal solution yields the maximum a posteriori estimate. After formulating this general class of problems, the equivalence between solving data association problems by these multidimensional assignment problems and by the currently most popular method of multiple hypothesis tracking is established. Track initiation and track maintenance using an N-scan sliding window are then used as illustrations. Since multiple hypothesis tracking also permeates multisensor data fusion, two example classes of problems are formulated as multidimensional assignment problems. This work was partially supported by the Air Force Office of Scientific Research through AFOSR Grant Numbers AFOSR-91-0138 and F49620-93-1-0133 and by the Federal Systems Company of the IBM Corporation in Boulder, CO and Owego, NY.

4.
It is a long-accepted tenet of scientific practice that every measurement result ought to include a statement of uncertainty associated with the measured value. Such uncertainty should also be propagated to functions of the measured values. It is also widely recognized that probability distributions are well suited to express measurement uncertainty and that statistical methods are the choice vehicles to produce uncertainty assessments incorporating information in empirical data as well as other relevant information, either about the quantity that is the object of measurement or about the techniques or apparatuses used in measurement. Statistical models and methods of statistical inference provide the technical machinery necessary to evaluate and propagate measurement uncertainty. Some of these models and methods are illustrated in five examples: (i) measurement of the refractive index of a glass prism (employing a venerable formula due to Gauss, as well as contemporary Monte Carlo methods); (ii) measurement of the mass fraction of arsenic in oyster tissue using data from an inter-laboratory study (introducing a Bayesian hierarchical model with adaptive tail heaviness); (iii) measurement of the relative viscosity increment of a solution of sucrose in water (using a copula); (iv) mapping measurements of radioactivity in the area of Fukushima, Japan (via both local regression and kriging, and explaining how model uncertainty may be evaluated); and (v) combining expert opinions about the flow rate of oil during the Deepwater Horizon oil spill into the Gulf of Mexico (via linear or logarithmic pooling). Copyright © 2012 John Wiley & Sons, Ltd.
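The Monte Carlo side of this approach can be illustrated generically: draw the input quantities from the distributions that encode their uncertainty, push the draws through the measurement function, and summarize the output draws. The sketch below uses a deliberately simple measurand (density from mass and volume) with invented values rather than any of the paper's five examples.

```python
# Minimal Monte Carlo uncertainty propagation sketch (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
M = 100_000

# Example measurand: density rho = m / V from mass and volume measurements.
m = rng.normal(loc=12.3450, scale=0.0020, size=M)   # mass in g, u(m) = 2 mg
V = rng.normal(loc=5.210,  scale=0.012,  size=M)    # volume in cm^3

rho = m / V
print(f"rho = {rho.mean():.4f} g/cm^3, standard uncertainty {rho.std(ddof=1):.4f}")
print("95% coverage interval:", np.percentile(rho, [2.5, 97.5]))
```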

5.
6.
The estimation of a circle's centre and radius from a set of noisy measurements of its circumference has many applications. It is a problem of fitting a circle to the measurements, and the fit can be in algebraic or geometric distances. The former gives linear equations, while the latter yields nonlinear equations. Starting from estimation theory, this paper first proves that the maximum likelihood (ML), i.e., the optimal estimation of the circle parameters, is equivalent to the minimization of the geometric distances. It then derives a pseudolinear set of ML equations whose coefficients are functions of the unknowns. An approximate ML algorithm updates the coefficients from the previous solution and selects the solution that gives the minimum cost. Simulation results show that the ML algorithm attains the Cramér-Rao lower bound (CRLB) for arc sizes as small as 90°. For arc sizes of 15° and 5° the ML algorithm errors are slightly above the CRLB, but lower than those of other linear estimators. Communicated by L. C. W. Dixon
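A common way to obtain the initial solution from which such an iterative ML fit starts is the algebraic (Kåsa) least-squares fit, which is linear in the unknowns. The sketch below shows that initializer, not the paper's approximate ML algorithm; the arc, noise level, and true circle are illustrative.

```python
# Minimal algebraic (Kasa) circle fit: linear least squares in (a, b, d).
import numpy as np

def algebraic_circle_fit(x, y):
    """Return centre (a, b) and radius r from noisy points on a circle."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(d + a**2 + b**2)
    return a, b, r

# Illustrative use on a noisy 90-degree arc of a circle centred at (3, -1), radius 5.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi / 2, 200)
x = 3.0 + 5.0 * np.cos(theta) + rng.normal(0, 0.05, theta.size)
y = -1.0 + 5.0 * np.sin(theta) + rng.normal(0, 0.05, theta.size)
print(algebraic_circle_fit(x, y))
```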

7.
8.
The measurement uncertainty of the tensile strength of steel reinforcing bars comprises several components: the uncertainty component from the bar diameter, the uncertainty component from the tensile force, the uncertainty component introduced by the repeatability of the test results, the uncertainty component from data rounding, and so on. Measurement uncertainty is therefore tied to the measurement data. The connection number is a systematic mathematical theory for handling uncertainty problems, and connection numbers can be used to express measurement uncertainty. On this basis, a new connection-number-based method is proposed for evaluating the measurement uncertainty of the tensile strength of steel reinforcing bars.
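For comparison, the conventional GUM treatment combines the components listed above by root-sum-of-squares into a combined standard uncertainty and an expanded uncertainty; the paper proposes expressing the same information with connection numbers instead. The component magnitudes below are purely illustrative.

```python
# Conventional GUM-style combination of uncertainty components (illustrative values).
import math

components = {
    "bar diameter":  1.8,   # MPa, contribution to the strength uncertainty
    "tensile force": 2.4,   # MPa
    "repeatability": 3.1,   # MPa
    "data rounding": 0.6,   # MPa, e.g. rounding interval / (2 * sqrt(3))
}

u_c = math.sqrt(sum(u**2 for u in components.values()))   # root-sum-of-squares
U = 2.0 * u_c                                             # expanded uncertainty, k = 2
print(f"combined standard uncertainty u_c = {u_c:.2f} MPa, U (k=2) = {U:.2f} MPa")
```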

9.
This paper offers a joint estimation approach for forecasting probabilities of default and loss rates given default in the presence of selection. The approach accommodates fixed and random risk factors. An empirical analysis identifies bond ratings, borrower characteristics and macroeconomic information as important risk factors. A portfolio-level analysis finds evidence that common risk measurement approaches may underestimate bank capital by up to 17% relative to the presented model.

10.
Cryptanalytic time memory tradeoff algorithms are generic one-way function inversion techniques that utilize pre-computation. Even though the online time complexity is known up to a small multiplicative factor for any tradeoff algorithm, false alarms pose a major obstacle in its accurate assessment. In this work, we study the expected pre-image size for an iteration of functions and use the result to analyze the cost incurred by false alarms. We are able to present the expected online time complexities for the Hellman tradeoff and the rainbow table method in a manner that takes false alarms into account. We also analyze the effects of the checkpoint method in reducing false alarm costs. The ability to accurately compute the online time complexities will allow one to choose the tradeoff parameters more optimally before starting the expensive pre-computation process.
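A toy version of the Hellman construction makes the role of false alarms visible: during the online search, a matching chain endpoint only suggests a pre-image, and regenerating the chain may reveal that the match was spurious. The sketch below uses a truncated-hash one-way function and small table parameters chosen for illustration; it is not the paper's analysis.

```python
# Toy Hellman time-memory tradeoff table with a false-alarm check (illustrative parameters).
import hashlib

N = 2**16                      # toy search space size
m, t = 256, 64                 # number of chains, chain length

def f(x: int) -> int:
    """Toy one-way function: truncated SHA-256 of the input."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:2], "big") % N

# Pre-computation: store only (endpoint -> start) of each chain.
table = {}
for start in range(m):
    x = start
    for _ in range(t):
        x = f(x)
    table[x] = start

def invert(y: int):
    """Try to find x with f(x) == y; false alarms are filtered by re-walking the chain."""
    z = y
    for i in range(t):
        if z in table:                 # candidate chain found (may be a false alarm)
            x = table[z]
            for _ in range(t - 1 - i): # walk forward to the suspected pre-image
                x = f(x)
            if f(x) == y:              # verification step rejects false alarms
                return x
        z = f(z)
    return None

print(invert(f(1234)))   # recovers a pre-image if 1234's chain is covered by the table
```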

11.
Per Jarlemark, Ragne Emardson, PAMM, 2007, 7(1): 1150301-1150302
In the present information-based knowledge society, accurate knowledge of time and frequency plays a fundamental role. For many applications, real-time estimates of the clock's time and frequency error are required. We have developed a novel approach for estimating time and frequency errors based on an assembly of clocks of varying quality. By using parallel Kalman filters we utilize all available measurements in the estimation. As these measurements are available with different delays, the Kalman filters produce estimates of different quality. One filter may produce real-time estimates while another filter waits for delayed measurements. When new information becomes available, the parallel Kalman filters exchange information in order to keep the state matrices updated with the most recent information. In Kalman filtering, accurate modelling of the measurement system is fundamental. All the contributing clocks, as well as the time transfer methods, are modelled as stochastic processes. By using this methodology and by correctly modelling the contributing clocks, we obtain minimum mean square error (MMSE) estimators of the time and frequency errors of a specific clock at every epoch. In addition, we also determine the uncertainty of each time and frequency error estimate. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
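The core building block of such a parallel-filter scheme is an ordinary Kalman filter on a small clock model. A minimal sketch for a single clock with a two-state (time error, frequency error) model and a time-transfer measurement is given below; the noise covariances, time step, and initial covariance are illustrative assumptions, and the parallel filters and their information exchange are not shown.

```python
# Minimal two-state clock Kalman filter (illustrative noise levels and time step).
import numpy as np

dt = 1.0                                  # seconds between measurements
F = np.array([[1.0, dt], [0.0, 1.0]])     # time error integrates the frequency error
H = np.array([[1.0, 0.0]])                # only the time error is measured
Q = np.diag([1e-22, 1e-26])               # process noise covariance (assumed)
R = np.array([[1e-18]])                   # measurement noise variance in s^2 (assumed)

x = np.zeros((2, 1))                      # state: [time error, frequency error]
P = np.diag([1e-12, 1e-16])               # initial covariance (assumed)

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the time-transfer measurement z (seconds)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kalman_step(x, P, z=2.5e-9)        # one measurement epoch
print(x.ravel(), np.sqrt(np.diag(P)))     # estimates and their uncertainties
```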

12.
Missing data and time-dependent covariates often arise simultaneously in longitudinal studies, and directly applying classical approaches may result in a loss of efficiency and biased estimates. To deal with this problem, we propose weighted corrected estimating equations under the missing-at-random mechanism, followed by developing a shrinkage empirical likelihood estimation approach for the parameters of interest when time-dependent covariates are present. This procedure improves efficiency over the generalized estimating equations approach with a working independence assumption, by combining the independent estimating equations with the additional information extracted from the estimating equations that are excluded by the independence assumption. The contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We show that the estimators are asymptotically normally distributed and that the empirical likelihood ratio statistic and its profile counterpart follow central chi-square distributions asymptotically when evaluated at the true parameter. The practical performance of our approach is demonstrated through numerical simulations and data analysis.

13.
We present a method for early detection of runaway initiation in chemical reactors that uses only temperature measurements and is based on the calculation of the divergence of the system. The method relies on state space reconstruction techniques and is illustrated using simulated as well as experimental datasets. The results show that the method is able to distinguish between runaway and non-runaway situations and that it does not produce false alarms during controlled heating/cooling experiments.
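A highly simplified sketch of the idea, under assumptions that are mine rather than the paper's: delay-embed the temperature signal, fit a local linear one-step map over a sliding window, and treat sustained phase-space volume expansion (log|det A| > 0 for the fitted map A) as a proxy for positive divergence and hence possible runaway.

```python
# Simplified divergence-style runaway indicator from a temperature time series (assumed procedure).
import numpy as np

def runaway_indicator(T, dim=3, lag=1, window=50):
    """Return, for each window, log|det A| of the locally fitted one-step linear map."""
    n = len(T) - (dim - 1) * lag
    # delay embedding of the temperature time series
    X = np.column_stack([T[i * lag: i * lag + n] for i in range(dim)])
    scores = []
    for s in range(n - window - 1):
        X0 = X[s:s + window]            # states x_k
        X1 = X[s + 1:s + window + 1]    # states x_{k+1}
        A, *_ = np.linalg.lstsq(X0 - X0.mean(0), X1 - X1.mean(0), rcond=None)
        scores.append(np.log(abs(np.linalg.det(A))))
    return np.array(scores)             # sustained positive values suggest runaway
```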

14.
Applied Mathematical Modelling, 2014, 38(9-10): 2377-2397
An uncertainty quantification and propagation procedure based on interval analysis is proposed in this study to deal with uncertain structural problems when only a small sample of measurement data is available. By constructing a membership function, a finite number of sample data on the uncertain structural parameters are processed, and an effective interval estimate of the uncertain parameters is obtained. Moreover, uncertainty propagation based on interval analysis is performed to obtain the interval of the structural responses according to the quantified results for the uncertain structural parameters. The proposed method reduces the demand on the number of measurement samples compared with the classical probabilistic method: the former needs only several to tens of samples, whereas the latter usually needs tens to hundreds. The numerical examples illustrate the feasibility and validity of the proposed method for non-probabilistic quantification of limited uncertain information as well as for propagation analysis.
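Once the parameter intervals have been quantified, the propagation step can, for simple monotonic response functions, be approximated by evaluating the model at the vertices of the parameter box. This vertex-method sketch stands in for the more general interval analysis in the paper; the cantilever-beam model and the interval bounds are illustrative assumptions.

```python
# Vertex-method sketch of interval propagation for a monotonic response (illustrative model).
import itertools

# Quantified parameter intervals, e.g. obtained from a small sample via a membership function.
E = (2.0e11, 2.2e11)    # Young's modulus [Pa]
I = (7.9e-6, 8.3e-6)    # second moment of area [m^4]
P = (9.5e3, 10.5e3)     # tip load [N]
L = 2.0                 # beam length [m], assumed crisp

def tip_deflection(e, i, p):
    return p * L**3 / (3.0 * e * i)   # cantilever tip deflection

# Evaluate the response at all vertices of the parameter box and take the extremes.
values = [tip_deflection(e, i, p) for e, i, p in itertools.product(E, I, P)]
print("response interval:", (min(values), max(values)))
```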

15.
This article deals with non-linear model parameter estimation from experimental data. Since a rigorous identifiability analysis is difficult to perform for non-linear models, parameter estimation is carried out in such a way that uncertainty in the estimated parameter values is represented by the range of model use results when the model is used for a certain purpose. Using this approach, the article presents a simulation study whose objective is to discover whether the estimation of model parameters can be improved, so that a small enough range of model use results is obtained. The results of the study indicate that from the plant measurements available for the estimation of model parameters, it is possible to extract the data that are important for estimating the model parameters relative to a certain model use. If these data are improved by a proper measurement campaign (e.g. a proper choice of measured variables, better accuracy, higher measurement frequency), it is to be expected that a model valid for the given model use will be obtained. The simulation study is performed for an activated sludge model from wastewater treatment, while the estimation of model parameters is done by Monte Carlo simulation.

16.
We discuss prevalence estimation under misclassification. That is, we are concerned with the estimation of the proportion of units having a certain property (being diseased, showing deviant behavior, etc.) from a random sample when the true variable of interest cannot be observed, but a related proxy variable (e.g. the outcome of a diagnostic test) is available. If the misclassification probabilities were known, then unbiased prevalence estimation would be possible. We focus on the frequent case where the misclassification probabilities are unknown but two independent replicate measurements have been taken. While in the traditional precise probabilistic framework a correction from this information is not possible due to non-identifiability, the imprecise probability methodology of partial identification and systematic sensitivity analysis allows one to obtain valuable insights into possible bias due to misclassification. We derive tight identification intervals and corresponding confidence regions for the true prevalence, based on the often reported kappa coefficient, which condenses the information of the replicates by measuring agreement between the two measurements. Our method is illustrated in several theoretical scenarios and in an example from oral health on the prevalence of caries in children.
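If the misclassification probabilities were known, the classical matrix (Rogan-Gladen) correction would already identify the true prevalence; when only ranges for sensitivity and specificity are credible, sweeping over those ranges gives a crude identification interval. The sketch below illustrates both steps with invented numbers and is a simplified stand-in for the kappa-based partial-identification analysis of the paper.

```python
# Rogan-Gladen correction and a crude identification interval (illustrative numbers).
import numpy as np

def corrected_prevalence(p_obs, sens, spec):
    """True prevalence if sensitivity and specificity were known exactly."""
    return (p_obs + spec - 1.0) / (sens + spec - 1.0)

p_obs = 0.18                       # apparent prevalence from the proxy variable

# Point correction with assumed sensitivity and specificity.
print(corrected_prevalence(p_obs, sens=0.85, spec=0.95))

# Identification interval when only bounds on sensitivity/specificity are credible.
sens_range = np.linspace(0.75, 0.95, 41)
spec_range = np.linspace(0.90, 0.99, 41)
vals = [corrected_prevalence(p_obs, se, sp)
        for se in sens_range for sp in spec_range]
vals = [min(max(v, 0.0), 1.0) for v in vals]    # truncate to the unit interval
print("identification interval:", (min(vals), max(vals)))
```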

17.
In many problems involving generalized linear models, the covariates are subject to measurement error. When the number of covariates p exceeds the sample size n, regularized methods like the lasso or Dantzig selector are required. Several recent papers have studied methods which correct for measurement error in the lasso or Dantzig selector for linear models in the p > n setting. We study a correction for generalized linear models, based on Rosenbaum and Tsybakov's matrix uncertainty selector. By not requiring an estimate of the measurement error covariance matrix, this generalized matrix uncertainty selector has a great practical advantage in problems involving high-dimensional data. We further derive an alternative method based on the lasso, and develop efficient algorithms for both methods. In our simulation studies of logistic and Poisson regression with measurement error, the proposed methods outperform the standard lasso and Dantzig selector with respect to covariate selection, by reducing the number of false positives considerably. We also consider classification of patients on the basis of gene expression data with noisy measurements. Supplementary materials for this article are available online.

18.
A generalised probabilistic framework is proposed for reliability assessment and uncertainty quantification under a lack of data. The developed computational tool allows the effect of epistemic uncertainty to be quantified and has been applied to assess the reliability of an electronic circuit and a power transmission network. The strengths and weaknesses of the proposed approach are illustrated by comparison to traditional probabilistic approaches. In the presence of both aleatory and epistemic uncertainty, classic probabilistic approaches may lead to misleading conclusions and a false sense of confidence which may not fully represent the quality of the available information. In contrast, generalised probabilistic approaches are versatile and powerful when linked to a computational tool that permits their applicability to realistic engineering problems.

19.
An inspection and replacement policy for a protection system is described in which the inspection process is subject to error, and false positives (false alarms) and false negatives are possible. We develop two models: one in which a false positive implies renewal of the protection system; the other not. These models are motivated by inspection of a protection system on the production line of a beverage manufacturer. False negatives reduce the efficiency of inspection. Another notion of imperfect maintenance is also modelled: that of poor installation of a component at replacement. These different aspects of maintenance quality interact: false alarms can, in a worst case scenario, lead to the systematic and unnecessary replacement of good components by poor components, thus reducing the availability of the system. The models also allow situations in which maintenance quality differs between alternative maintainers to be investigated.
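The interaction between false alarms, false negatives, and availability can be explored with a small Monte Carlo simulation of periodic inspection of a protection system. The sketch below is an illustrative toy model, not the renewal models developed in the paper, and all rates and the inspection interval are assumptions.

```python
# Toy Monte Carlo simulation of periodic inspection with imperfect inspections (assumed rates).
import random

def simulate(horizon=10_000, tau=10, failure_rate=0.01,
             alpha=0.05, beta=0.10, seed=0):
    """tau: inspection interval; alpha: false-positive prob.; beta: false-negative prob."""
    rng = random.Random(seed)
    failed, down_time, replacements = False, 0, 0
    for t in range(horizon):
        if not failed and rng.random() < failure_rate:
            failed = True                      # hidden failure of the protection system
        if failed:
            down_time += 1                     # system is unprotected until detected
        if t % tau == 0:                       # periodic inspection
            if failed:
                if rng.random() > beta:        # failure detected (no false negative)
                    failed = False
                    replacements += 1
            elif rng.random() < alpha:         # false alarm: needless replacement
                replacements += 1
    return 1 - down_time / horizon, replacements

print(simulate())   # (availability of the protection function, number of replacements)
```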

20.
We investigate a robust penalized logistic regression algorithm based on a minimum distance criterion. Influential outliers are often associated with the explosion of parameter vector estimates, but in the context of standard logistic regression, the bias due to outliers always causes the parameter vector to implode, that is, shrink toward the zero vector. Thus, using LASSO-like penalties to perform variable selection in the presence of outliers can result in missed detections of relevant covariates. We show that by choosing a minimum distance criterion together with an elastic net penalty, we can simultaneously find a parsimonious model and avoid estimation implosion even in the presence of many outliers in the important small n, large p situation. Minimizing the penalized minimum distance criterion is a challenging problem due to its nonconvexity. To meet the challenge, we develop a simple and efficient MM (majorization-minimization) algorithm that can be adapted gracefully to the small n, large p context. Performance of our algorithm is evaluated on simulated and real datasets. This article has supplementary materials available online.
