Similar literature (20 results found)
1.
Remaining useful life (RUL) estimation is regarded as one of the most central components of prognostics and health management (PHM). Accurate RUL estimation enables failure prevention in a more controllable manner, in that effective maintenance can be executed at the appropriate time to correct impending faults. In this paper we consider the problem of estimating the RUL of a general system from observed degradation data. A degradation path-dependent approach for RUL estimation is presented through the combination of Bayesian updating and the expectation maximization (EM) algorithm. The use of both Bayesian updating and the EM algorithm to update the model parameters and the RUL distribution each time a new observation is obtained is a novel contribution of this paper, and makes the estimated RUL depend on the observed degradation history. As two specific cases, a linear degradation model and an exponential-based degradation model are considered to illustrate the implementation of the approach. A major contribution in these two special cases is that the approach yields an exact and closed-form RUL distribution in each case, and the moments of the obtained RUL distribution exist. This contrasts sharply with the approximated results obtained in the literature for the same cases. To our knowledge, the RUL estimation approach presented in this paper for these two special cases is the only one that provides an exact and closed-form RUL distribution utilizing the monitoring history. Finally, numerical examples of RUL estimation and a practical case study of condition-based replacement decision making, with comparison to a previously reported approach, are provided to substantiate the superiority of the proposed model.
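As a concrete illustration of the linear-degradation special case, the first passage time of a Wiener (Brownian motion with drift) degradation path to a fixed failure threshold has a well-known inverse-Gaussian-type closed form; this is a generic textbook result, not the paper's Bayesian-updating derivation, and all parameter values below are illustrative.

```python
import math

def rul_pdf(t, x_t, w, drift, sigma):
    """Inverse-Gaussian-type pdf of the remaining useful life for a
    Wiener degradation process X(s) = x_t + drift*s + sigma*B(s),
    i.e. the first time X crosses the failure threshold w (w > x_t)."""
    d = w - x_t                      # remaining degradation margin
    return (d / math.sqrt(2.0 * math.pi * sigma**2 * t**3)
            * math.exp(-(d - drift * t)**2 / (2.0 * sigma**2 * t)))

# Crude numerical check that the density integrates to roughly 1.
dt = 0.001
total = sum(rul_pdf(k * dt, x_t=2.0, w=5.0, drift=1.0, sigma=0.5) * dt
            for k in range(1, 20000))
```

With a positive drift the first-passage density has total mass 1, so the Riemann sum over a long horizon should come out close to 1.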

2.
In this paper, a new fuzzy linear programming (FLP) methodology based on a specific membership function, termed the modified logistic membership function, is proposed. The modified logistic membership function is first formulated, and its flexibility in capturing vagueness in the parameters is established analytically. The membership function is then tested through an illustrative example employing fuzzy linear programming. The developed FLP methodology provides confidence for application to real-life industrial production planning problems. This approach to solving industrial production planning problems allows feedback among the decision maker, the implementer and the analyst; in such a case the approach may be called interactive fuzzy linear programming (IFLP). It is also possible to design a self-organizing fuzzy system for the product-mix selection problem in order to find a satisfactory solution. The decision maker, the analyst and the implementer can incorporate their knowledge and experience to obtain the best outcome.
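The exact parametrisation of the modified logistic membership function is not given in the abstract; the sketch below shows a generic logistic-type membership function with the qualitative properties described (a tunable steepness parameter to capture vagueness). The name `gamma` and the rescaling are illustrative assumptions, not the paper's formula.

```python
import math

def logistic_membership(x, x_lower, x_upper, gamma=13.8):
    """Logistic-type fuzzy membership: ~1 at x_lower, ~0 at x_upper,
    with steepness gamma controlling how sharply preference decays.
    (Illustrative form only; the paper's 'modified logistic' function
    may use a different parametrisation.)"""
    if x <= x_lower:
        return 1.0
    if x >= x_upper:
        return 0.0
    z = (x - x_lower) / (x_upper - x_lower)   # rescale to [0, 1]
    return 1.0 / (1.0 + math.exp(gamma * (z - 0.5)))
```

In an FLP model such a function would score how acceptable a given resource or cost level is between its fully acceptable and fully unacceptable bounds.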

3.
Incorporating statistical multiple comparison techniques with credit risk measurement, a new methodology is proposed to construct exact confidence sets and exact confidence bands for a beta distribution. This involves simultaneous inference on the two parameters of the beta distribution, based upon the inversion of Kolmogorov tests. Some monotonicity properties of the distribution function of the beta distribution are established, which enable the derivation of an efficient algorithm for implementing the procedure. The methodology has important applications in financial risk management: in particular, loss given default (LGD) data are often modeled with a beta distribution. The new approach properly addresses the model risk caused by inadequate sample sizes of LGD data, and can be used in conjunction with the standard recommendations provided by regulators to produce enhanced and more informative analyses.
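The core idea of inverting Kolmogorov tests can be sketched by brute force: every parameter pair that a Kolmogorov-Smirnov test fails to reject joins the confidence set. This ignores the paper's monotonicity-based efficient algorithm, and the grid, sample size, and 5% level are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.beta(2.0, 5.0, size=200)        # e.g. LGD observations in (0, 1)

# Invert KS tests over a crude (alpha, beta) grid: every pair NOT
# rejected at the 5% level belongs to the (approximate) confidence set.
grid = [(a, b) for a in np.linspace(1.0, 4.0, 13)
               for b in np.linspace(2.0, 8.0, 13)]
conf_set = [(a, b) for a, b in grid
            if stats.kstest(data, 'beta', args=(a, b)).pvalue >= 0.05]
```

The exact procedure in the paper replaces this grid search with an algorithm exploiting monotonicity of the beta distribution function in its parameters.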

4.
Simulation studies are an essential statistical tool for examining the performance, properties, and adaptability of statistical models under pre-specified scenarios. As one of the two most popular models in survival analysis, the accelerated failure time (AFT) model takes the logarithm of the survival time as its response and regresses the covariates linearly with easily interpretable parameters, which makes it more convenient for fitting survival data than the Cox proportional hazards model. This paper first proposes a method for generating survival times in simulation studies of AFT models with the generalized F-distribution. It then gives a general relationship between the error distribution of an AFT model and the corresponding survival times, and shows how survival times can be generated from the generalized F-distribution. Finally, to verify the performance and effectiveness of the suggested simulation technique, the method is applied to a model for detecting survival trait loci.
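The relationship between the AFT error distribution and the survival time, log T = beta0 + beta1*x + sigma*eps, can be sketched as below. For simplicity the error draw uses the standard extreme-value case, which gives Weibull survival times (a limiting special case within the generalized F family); sampling from the full generalized F-distribution would only change how `eps` is drawn. All coefficient values are illustrative.

```python
import math
import random

random.seed(7)

def aft_survival_time(x, beta, sigma):
    """Generate one survival time from an AFT model
    log T = beta[0] + beta[1]*x + sigma*eps with standard
    extreme-value errors, so that T is Weibull-distributed."""
    eps = math.log(-math.log(random.random()))   # log of an Exp(1) draw
    return math.exp(beta[0] + beta[1] * x + sigma * eps)

times = [aft_survival_time(x=1.0, beta=(0.5, 0.3), sigma=0.4)
         for _ in range(1000)]
```

Because T = exp(mu) * E**sigma for E ~ Exp(1), the generated times are Weibull with shape 1/sigma and scale exp(mu).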

5.
In decision analysis, difficulties of obtaining complete information about model parameters make it advisable to seek robust solutions that perform reasonably well across the full range of feasible parameter values. In this paper, we develop the Robust Portfolio Modeling (RPM) methodology which extends Preference Programming methods into portfolio problems where a subset of project proposals are funded in view of multiple evaluation criteria. We also develop an algorithm for computing all non-dominated portfolios, subject to incomplete information about criterion weights and project-specific performance levels. Based on these portfolios, we propose a project-level index to convey (i) which projects are robust choices (in the sense that they would be recommended even if further information were to be obtained) and (ii) how continued activities in preference elicitation should be focused. The RPM methodology is illustrated with an application using real data on road pavement projects.

6.
In practice, managers often wish to ascertain that a particular engineering design of a production system meets their requirements. The future environment of this design is likely to differ from the environment assumed during the design, so it is crucial to find out which variations in that environment may make the design unacceptable (infeasible). This article proposes a methodology for estimating which uncertain environmental parameters are important (so managers can become pro-active) and which combinations of parameter values (scenarios) make the design unacceptable. The proposed methodology combines simulation, bootstrapping, design of experiments, and linear regression metamodeling. It is illustrated through a simulated manufacturing system with fourteen uncertain parameters of the input distributions for the various arrival and service times. These parameters are investigated through the simulation of sixteen scenarios, selected through a two-level fractional factorial statistical design. The resulting simulation input/output (I/O) data are analyzed through a first-order polynomial metamodel and bootstrapping. A second experiment with other scenarios yields some outputs that turn out to be unacceptable. In general, polynomials fitted to the simulation's I/O data can estimate the borderline (frontier) between acceptable and unacceptable environments.
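The design-and-metamodel step can be sketched in miniature: a 2^(4-1) fractional factorial design (a small stand-in for the paper's sixteen-run design in fourteen parameters) drives a hypothetical simulation response, and a first-order polynomial metamodel is fitted by least squares. The response function and its effects are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2^(4-1) fractional factorial: 8 runs for 4 two-level factors,
# with generator D = ABC.
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

def simulate(factors):
    """Hypothetical simulation response (e.g. average waiting time)
    reacting linearly to the environmental factors, plus noise."""
    true_effects = np.array([3.0, -2.0, 0.5, 0.0])
    return 10.0 + factors @ true_effects + rng.normal(0.0, 0.1)

y = np.array([simulate(run) for run in design])

# First-order polynomial metamodel fitted by ordinary least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Large fitted coefficients flag the environmental parameters a manager should monitor pro-actively.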

7.
A simple methodology is presented for sensitivity analysis of models that have been fitted to data by statistical methods. Such analysis is a decision support tool that can focus the effort of a modeller who wishes to further refine a model and/or to collect more data. A formula is given for the calculation of the proportional reduction in the variance of the model 'output' that would be achievable with perfect knowledge of a subset of the model parameters. This is a measure of the importance of the set of parameters, and is shown to be asymptotically equal to the squared correlation between the model output and its best predictor based on the omitted parameters. The methodology is illustrated with three examples of OR problems: an age-based equipment replacement model, an ARIMA forecasting model and a cancer screening model. The sampling error of the calculated percentage of variance reduction is studied theoretically, and a simulation study is then used to exemplify the accuracy of the method as a function of sample size.
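A small Monte Carlo sketch of the central identity: for a linear model, the proportional variance reduction achievable with perfect knowledge of one parameter coincides with the squared correlation between the output and its best predictor based on that parameter. The model and coefficients below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model output with two independent uncertain parameters.
n = 200_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
y = 2.0 * x1 + 1.0 * x2            # model 'output'

# Variance reduction from perfect knowledge of x1:
# (Var(Y) - E[Var(Y | x1)]) / Var(Y) = Var(2*x1)/Var(Y) = 4/5 here,
# since E[Y | x1] = 2*x1 is the best predictor.
best_predictor = 2.0 * x1
reduction = np.var(best_predictor) / np.var(y)
corr_sq = np.corrcoef(y, best_predictor)[0, 1] ** 2
```

Both quantities estimate the same importance measure, which is the point of the asymptotic equivalence stated in the abstract.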

8.
A model for building statistical dependence between marginal distributions with bounded support is discussed. The model is geared towards the elicitation of dependence parameters through expert judgment. The resulting joint distribution may be useful in uncertainty analyses where dependence between random variables with bounded support is present due to common risk factors, as in the classical Program Evaluation and Review Technique (PERT).
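One generic way to build dependence between bounded (beta) marginals, such as PERT activity durations, is a Gaussian copula. This is a standard construction, not necessarily the paper's expert-elicitation model; the correlation and marginal parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def correlated_betas(r, n, a1=2.0, b1=3.0, a2=4.0, b2=2.0):
    """Sample two dependent Beta-distributed activity durations via a
    Gaussian copula with correlation r (a generic construction; the
    paper elicits its dependence parameters from experts instead)."""
    cov = [[1.0, r], [r, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                      # uniform marginals
    return stats.beta.ppf(u[:, 0], a1, b1), stats.beta.ppf(u[:, 1], a2, b2)

x, y = correlated_betas(0.7, 50_000)
```

The marginals stay exactly beta on (0, 1) while the copula injects the common-risk-factor dependence.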

9.
This study intends to determine the optimal cutting parameters required to minimize the cutting time while maintaining an acceptable quality level. The equation for predicting cutting time is usually unknown during the early stages of cutting operations; it can be determined by studying output cutting times versus input cutting parameters through CATIA software, with the assistance of the statistical method of response surface methodology (RSM). The equation is then formulated as the objective function of a mathematical programming (MP) model to determine the optimal cutting parameters that minimize the cutting time. The MP formulation also includes constraints on the feasible parameter ranges, for process capability, and on surface roughness, for quality concerns. The importance ranking obtained from the statistical method, together with the optimal solutions found by MP, can also be used as a reference for possible robust design improvements.

10.
Household consumption of natural gas is usually considered quite stable, as cooking, space heating, and water heating are basic needs. Improving technologies, together with the possibility of switching to alternative sources, can nevertheless lead to a decreasing consumption trend. Knowing more about such a trend, especially its spatial distribution, can be useful for strategic planning. In this paper, we describe a general statistical methodology for studying the spatiotemporal behavior of consumption. It is based on semiparametric modeling, with formalized error and sensitivity analyses forming part of the methodology. The presented methods are illustrated on large-scale data from the Czech Republic.

11.
This paper addresses the problem of extracting preferences for alternatives from interval judgement matrices in the Analytic Hierarchy Process (AHP). The method of Arbel for extracting such preferences, which is based on the assumption that the interval judgements specify a feasible region in the weight space of the alternatives, is critically appraised from a statistical perspective and some new ideas emanating from this approach are developed and discussed. In particular it is proposed that a distribution for the weights on the feasible region, which is both tractable and meaningful, be adopted. The mean of the distribution can then be used as an assessment of the overall ranking of the alternatives and quantities of interest, such as probabilities and marginal distributions, can immediately be quantified. Two specific distributions on the feasible region, the uniform distribution and the distribution of random convex combinations with coefficients which are uniform spacings, are examined in some detail and the ideas which emerge are illustrated by means of selected examples.
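The uniform-distribution case can be sketched with the uniform-spacings construction mentioned in the abstract plus rejection against interval judgements: spacings of sorted uniforms are uniform on the weight simplex, and keeping only the weights consistent with the interval ratio bounds gives (approximately) the uniform distribution on the feasible region. The interval bounds below are invented for illustration.

```python
import random

random.seed(5)

def uniform_simplex(k):
    """Uniform point on the k-part weight simplex via spacings of
    sorted uniforms (the 'uniform spacings' construction)."""
    cuts = sorted(random.random() for _ in range(k - 1))
    pts = [0.0] + cuts + [1.0]
    return [pts[i + 1] - pts[i] for i in range(k)]

# Hypothetical interval judgements on the ratios w1/w2 and w2/w3.
bounds = {(0, 1): (1.0, 3.0), (1, 2): (0.5, 2.0)}

def feasible(w):
    return all(lo <= w[i] / w[j] <= hi for (i, j), (lo, hi) in bounds.items())

sample = [w for w in (uniform_simplex(3) for _ in range(20000)) if feasible(w)]
mean_w = [sum(w[i] for w in sample) / len(sample) for i in range(3)]
```

The componentwise mean `mean_w` then serves as the overall ranking of the alternatives, as the abstract proposes.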

12.
Fault tree analysis (FTA) is a powerful technique that is widely used for evaluating system safety and reliability. It can be used to assess the effects of combinations of failures on system behaviour but is unable to capture sequence dependent dynamic behaviour. A number of extensions to fault trees have been proposed to overcome this limitation. Pandora, one such extension, introduces temporal gates and temporal laws to allow dynamic analysis of temporal fault trees (TFTs). It can be easily integrated in model-based design and analysis techniques. The quantitative evaluation of failure probability in Pandora TFTs is performed using exact probabilistic data about component failures. However, exact data can often be difficult to obtain. In this paper, we propose a method that combines expert elicitation and fuzzy set theory with Pandora TFTs to enable dynamic analysis of complex systems with limited or absent exact quantitative data. This gives Pandora the ability to perform quantitative analysis under uncertainty, which increases further its potential utility in the emerging field of model-based design and dependability analysis. The method has been demonstrated by applying it to a fault tolerant fuel distribution system of a ship, and the results are compared with the results obtained by other existing techniques.
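A minimal sketch of fuzzy quantification in a fault tree, assuming independent basic events with expert-elicited triangular fuzzy probabilities and a plain (non-temporal) AND gate; Pandora's temporal gates need considerably more machinery, and all numbers here are hypothetical.

```python
def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_and(p1, p2, alpha):
    """Alpha-cut of the failure probability of an AND gate with two
    independent input events carrying triangular fuzzy probabilities
    (interval multiplication of the two alpha-cuts)."""
    lo1, hi1 = alpha_cut(p1, alpha)
    lo2, hi2 = alpha_cut(p2, alpha)
    return (lo1 * lo2, hi1 * hi2)

# Hypothetical expert-elicited fuzzy probabilities for two failures.
pump_fail = (0.02, 0.05, 0.09)
valve_fail = (0.01, 0.03, 0.06)
core = fuzzy_and(pump_fail, valve_fail, alpha=1.0)   # most-likely point
```

Sweeping alpha from 0 to 1 yields the full fuzzy top-event probability, from the widest support interval down to the most-likely value.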

13.
An interval-arithmetic-based branch-and-bound optimizer is applied to find the singular points and bifurcations arising in a feasibility study of batch extractive distillation. Such a study is an important step in synthesizing economical industrial processes for separating liquid mixtures of azeotrope-forming chemical components. The feasibility-check methodology includes the computation and analysis of phase plots of differential algebraic equation systems (DAEs). Singular points and bifurcations play an essential role in judging feasibility: the feasible domain of the parameters can be estimated by tracing the paths of the singular points in the phase plane, and bifurcations indicate the border of this domain. Since the algebraic part of the DAE cannot be transformed into an explicit form, the implicit function theorem is applied in formulating the criterion for bifurcation points. The singular points of the maps at specified process parameters are found with interval methodology, and the limiting values of the parameters are determined by searching for points satisfying the bifurcation criteria.

14.
Bayesian networks (BNs) have attained widespread use in data analysis and decision making. Well-studied topics include efficient inference, evidence propagation, parameter learning from data for complete and incomplete data scenarios, expert elicitation for calibrating BN probabilities, and structure learning. It is common for the researcher to assume the structure of the BN or to glean the structure from expert elicitation or domain knowledge. In this scenario, the model may be calibrated through learning the parameters from relevant data. There is a lack of work on model diagnostics for fitted BNs; this is the contribution of this article. We key on the definition of (conditional) independence to develop a graphical diagnostic that indicates whether the conditional independence assumptions imposed, when one assumes the structure of the BN, are supported by the data. We develop the approach theoretically and describe a Monte Carlo method to generate uncertainty measures for the consistency of the data with conditional independence assumptions under the model structure. We describe how this theoretical information and the data are presented in a graphical diagnostic tool. We demonstrate the approach through data simulated from BNs under different conditional independence assumptions. We also apply the diagnostic to a real-world dataset. The results presented in this article show that this approach is most feasible for smaller BNs—this is not peculiar to the proposed diagnostic graphic, but rather is related to the general difficulty of combining large BNs with data in any manner (such as through parameter estimation). It is the authors’ hope that this article helps highlight the need for more research into BN model diagnostics. This article has supplementary materials online.

15.
We examine three Bayesian case influence measures including the φ-divergence, Cook’s posterior mode distance, and Cook’s posterior mean distance for identifying a set of influential observations for a variety of statistical models with missing data including models for longitudinal data and latent variable models in the absence/presence of missing data. Since it can be computationally prohibitive to compute these Bayesian case influence measures in models with missing data, we derive simple first-order approximations to the three Bayesian case influence measures by using the Laplace approximation formula and examine the applications of these approximations to the identification of influential sets. All of the computations for the first-order approximations can be easily done using Markov chain Monte Carlo samples from the posterior distribution based on the full data. Simulated data and an AIDS dataset are analyzed to illustrate the methodology. Supplemental materials for the article are available online.

16.
This paper develops an asymptotic expansion technique in momentum space for stochastic filtering. It is shown that Fourier transformation combined with a polynomial-function approximation of the nonlinear terms gives a closed recursive system of ordinary differential equations (ODEs) for the relevant conditional distribution. Thanks to the simplicity of the ODE system, higher-order calculation can be performed easily. Furthermore, solving ODEs sequentially with small sub-periods with updated initial conditions makes it possible to implement a substepping method for asymptotic expansion in a numerically efficient way. This is found to improve the performance significantly where otherwise the approximation fails badly. The method is expected to provide a useful tool for more realistic financial modeling with unobserved parameters and also for problems involving nonlinear measure-valued processes.

17.
In industrial statistics, there is great interest in precisely predicting the lifetimes of specimens that operate under stress. For example, a poor estimate of the lower percentiles of a life distribution can produce significant monetary losses for organizations owing to an excessive number of warranty claims. The Birnbaum–Saunders distribution is useful for modeling lifetime data, because it relates the total time until failure to some type of cumulative damage produced by stress. In this paper, we propose a methodology for detecting the influence of atypical data in accelerated life models based on the Birnbaum–Saunders distribution. The methodology developed in this study should be considered in the design of structures and in the prediction of warranty claims. We conclude with an application of the proposed methodology to real fatigue life data, which illustrates its importance in a warranty claim problem. Copyright © 2012 John Wiley & Sons, Ltd.
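Birnbaum-Saunders lifetimes can be generated through the distribution's standard normal transformation, a general property of the distribution (not the paper's influence diagnostics); the shape and scale values below are illustrative.

```python
import math
import random

random.seed(11)

def bs_lifetime(alpha, beta):
    """One draw from a Birnbaum-Saunders distribution with shape alpha
    and scale beta, via the normal transformation
    T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2, Z ~ N(0, 1)."""
    w = alpha * random.gauss(0.0, 1.0) / 2.0
    return beta * (w + math.sqrt(w * w + 1.0)) ** 2

lifetimes = [bs_lifetime(alpha=0.3, beta=100.0) for _ in range(2000)]
```

Setting Z = 0 gives T = beta exactly, which is why the scale parameter beta is also the median lifetime.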

18.
This paper deals with the statistical study of the local search methods used in Part I of this work, where a tactical planning model of rail freight transportation was defined as a network design model and solved with several local search methods: Simulated Annealing, Tabu Search and a ‘Descent’ method. The solution and the convergence of each method depend on the initial feasible solution and on the convexity of the feasible region, so the comparison among them is made with the help of statistical theory. Under the hypothesis that the distribution of local minima can be represented by a Weibull distribution, it is possible to obtain an estimate of the global minimum and a confidence interval for it. The global minimum estimate has been used to compare the heuristic methods and the parameters of a given heuristic, and to derive a stopping criterion.
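The Weibull idea can be sketched as follows: collect the objective values of repeated local searches, then estimate the location parameter of the fitted Weibull, which serves as the estimate of the global minimum. The order-statistic endpoint estimator used below is one classical choice and is an assumption for illustration; the paper may fit the Weibull differently and also derives a confidence interval.

```python
import math
import random

random.seed(2)

def f(x):
    """Multimodal test objective; global minimum value 0 at x = 0."""
    return x * x + 0.3 * (1.0 - math.cos(8.0 * x))

def local_search(x0, step=0.01, iters=2000):
    """Crude stochastic descent: accept only improving random moves."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + random.uniform(-step, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

minima = sorted(local_search(random.uniform(-2.0, 2.0)) for _ in range(30))

# Location (endpoint) estimate from the smallest two and the largest
# observed local minima; the estimate never exceeds the best value found.
y1, y2, yn = minima[0], minima[1], minima[-1]
a_hat = (y1 * yn - y2 * y2) / (y1 + yn - 2.0 * y2)
```

Comparing `a_hat` with the best value found gives a natural stopping criterion: stop restarting once the gap is negligible.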

19.
Data Flow Analysis can be used to find some of the errors in a computer program and gives as output a set of dubious paths of the investigated program, which have to be checked for executability. This can be done by solving a system of inequalities in order to obtain input data for that path. This article discusses how to obtain a reliable solution of this system in the linear case, when rounding effects are taken into account. The method is based on the simplex algorithm from linear programming, and returns a solution in the middle of the feasible region. The general nonlinear case is much more difficult to handle.

20.
This article deals with the problem of local sensitivity analysis, that is, how sensitive are the results of a statistical analysis to changes in the data? A general methodology of sensitivity analysis is applied to some statistical problems. The proposed methods are applicable to any statistical problem that can be expressed as an optimization problem or that involves solving a system of equations. As some particular examples, the methodology is applied to the maximum likelihood method, the standard and constrained methods of moments and the standard and constrained probability weighted moments methods. Unlike the standard method of moments, the constrained method of moments ensures that the obtained estimates are always consistent with the data. Closed analytical formulas for the calculation of these local sensitivities are derived. The obtained sensitivities include: (a) the objective function sensitivities to data points and (b) the sensitivities of the estimated parameters to data points. The derived formulas for the sensitivities are based on recent results in the area of mathematical programming. Several examples of parameter estimation problems are used to illustrate the methodology.
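The flavor of parameter-to-data sensitivities can be shown on the simplest maximum-likelihood example, the normal mean, where the closed-form sensitivity of the estimate to each data point is exactly 1/n; the paper derives analogous closed formulas for far more general estimators via mathematical programming results. The data values are arbitrary.

```python
# Sensitivity of a maximum-likelihood estimate to each data point,
# checked against a finite-difference perturbation.
data = [2.1, 3.4, 1.8, 4.0, 2.7]
n = len(data)

def mle_mean(xs):
    """MLE of the normal mean: the sample average."""
    return sum(xs) / len(xs)

analytic = [1.0 / n] * n          # closed-form d(mu_hat)/d(x_i) = 1/n

eps = 1e-6
numeric = []
for i in range(n):
    bumped = list(data)
    bumped[i] += eps              # perturb one observation
    numeric.append((mle_mean(bumped) - mle_mean(data)) / eps)
```

A large sensitivity for a particular observation flags it as influential for the fitted parameter, which is exactly how such formulas are used in practice.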


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号