Similar Documents
20 similar documents found (search time: 31 ms)
1.
In practice, the lifetime performance index C_L is used to measure the potential and performance of a process, where L is the lower specification limit. Progressive censoring schemes are quite useful in many practical situations where budget constraints are in place or there is a demand for rapid testing. Under the assumption of an exponential distribution, this study constructs the maximum likelihood estimator (MLE) of C_L based on a progressively type II right censored sample. The MLE of C_L is then used to develop a hypothesis testing procedure when L is known. Product managers can employ the new testing procedure to assess whether the lifetime of products (or items) adheres to the required level. Finally, an example illustrates the use of the testing procedure at a given significance level.
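The construction described above can be made concrete. For exponential lifetimes with mean θ, the index is C_L = (μ − L)/σ = 1 − L/θ, and with a progressively type II censored sample the MLE of θ is the total time on test divided by the number of observed failures. The following Python sketch works under these assumptions and is an illustration, not the paper's exact algorithm:

```python
import random

def mle_lifetime_index(times, removals, L):
    """MLE of C_L = 1 - L/theta for an exponential lifetime distribution,
    based on a progressively type II right censored sample.

    times    -- ordered observed failure times x_(1) <= ... <= x_(m)
    removals -- R_i, the number of units withdrawn at the i-th failure
    L        -- known lower specification limit
    """
    m = len(times)
    # total time on test: each observed failure x_i also accounts for the
    # R_i censored units known to have survived at least until x_i
    ttt = sum((r + 1) * x for x, r in zip(times, removals))
    theta_hat = ttt / m           # MLE of the exponential mean
    return 1.0 - L / theta_hat    # MLE of C_L, by invariance of the MLE

# quick check on simulated data (no removals, i.e. complete sample)
random.seed(1)
sample = sorted(random.expovariate(1 / 2.0) for _ in range(2000))
c_hat = mle_lifetime_index(sample, [0] * len(sample), L=0.5)
# true C_L here is 1 - 0.5/2.0 = 0.75, so c_hat should be close to that
```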

2.
Using process capability indices to quantify manufacturing process precision (consistency) and performance is an essential part of implementing any quality improvement program. Most research on testing capability indices has focused on traditional frequentist approaches. Cheng and Spiring [IIE Trans. 21 (1), 97] proposed a Bayesian procedure for assessing the process capability index Cp based on a single sample. In practice, manufacturing information regarding a product quality characteristic is often derived from multiple samples, particularly when a routine-based quality control plan is implemented for monitoring process stability. In this paper, we consider estimating and testing Cp with multiple samples using a Bayesian approach, and accordingly propose a Bayesian procedure for capability testing. The posterior probability, p, that the process under investigation is capable is derived. The credible interval, a Bayesian analogue of the classical lower confidence interval, is obtained. The results in this paper generalize those of Cheng and Spiring [IIE Trans. 21 (1), 97]. Practitioners can use the proposed procedure to determine whether their manufacturing processes are capable of reproducing products that satisfy the preset precision requirement.
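One way to picture the posterior probability p is by Monte Carlo. Assuming normal data with a common variance across samples and the noninformative prior p(σ²) ∝ 1/σ², the posterior of σ² is SS/χ²_{N−k}, where SS pools the within-sample sums of squares over k samples of total size N; p is then the posterior probability that Cp = (USL − LSL)/(6σ) exceeds a required value c0. This is a hedged sketch under those assumptions; the paper's own derivation may differ in prior and form:

```python
import random

def posterior_prob_capable(samples, usl, lsl, c0, draws=20_000, seed=0):
    """Monte Carlo estimate of p = P(Cp > c0 | data), Cp = (USL-LSL)/(6*sigma).

    Sketch only: assumes normal data, a common process variance across the
    multiple samples, and the noninformative prior p(sigma^2) ~ 1/sigma^2,
    under which sigma^2 | data is distributed as SS / chi2_{N-k}.
    """
    rng = random.Random(seed)
    k = len(samples)
    n_total = sum(len(s) for s in samples)
    ss = 0.0                                  # pooled within-sample sum of squares
    for s in samples:
        mean = sum(s) / len(s)
        ss += sum((x - mean) ** 2 for x in s)
    df = n_total - k
    sigma_sq_max = ((usl - lsl) / (6.0 * c0)) ** 2   # Cp > c0  <=>  sigma^2 below this
    hits = 0
    for _ in range(draws):
        chi2 = rng.gammavariate(df / 2.0, 2.0)       # chi-square(df) draw
        if ss / chi2 < sigma_sq_max:
            hits += 1
    return hits / draws

# demo: four samples from a process with sigma = 0.5, so the true Cp is
# (13 - 7)/(6 * 0.5) = 2.0, comfortably above the requirement c0 = 1.33
demo = random.Random(7)
groups = [[demo.gauss(10.0, 0.5) for _ in range(25)] for _ in range(4)]
p = posterior_prob_capable(groups, usl=13.0, lsl=7.0, c0=1.33)
```

A process is then declared capable when p exceeds a preset threshold such as 0.95.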

3.
Process capability indices (PCIs) are used to measure process potential and performance. Since the lifetime of products may follow an exponential, gamma, or Weibull distribution, among others, this study constructs, under a two-parameter exponential distribution, the uniformly minimum variance unbiased estimator (UMVUE) of the lifetime performance index based on a right type II censored sample. The UMVUE of the lifetime performance index is then used to develop a new hypothesis testing procedure when L is known. Finally, a practical example illustrates how the testing procedure is employed to determine whether the process is capable.

4.
Process capability indices provide numerical measures of whether a process conforms to a defined manufacturing capability prerequisite. Companies have successfully applied them to compete in, and lead, high-profit markets by evaluating quality and productivity performance. The loss-based process capability index Cpm, sometimes called the Taguchi index, was proposed to measure process capability when the output process measurements are precise. In the present study, we develop a realistic approach that allows for imprecise output data resulting from measurements of product quality. A general method combining a vector of fuzzy numbers to produce the membership function of the fuzzy estimator of the Taguchi index is introduced for testing process capability. Based on the sampling distribution of the precise estimator of Cpm, two useful fuzzy inference criteria, the critical value and the fuzzy p-value, are proposed to assess manufacturing process capability in terms of Cpm. The presented methodology takes into consideration a degree of imprecision in the sample data and leads to a three-decision rule with a four-quadrant decision-making plot. The fuzzy inference procedure is a natural generalization of the traditional test: when the data are precise, the proposed test reduces to a classical test with a binary decision.
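For reference, the crisp (non-fuzzy) point estimator of Cpm that the fuzzy procedure generalizes divides the specification width by six times the root mean squared deviation from the target. A minimal sketch:

```python
import math

def cpm_hat(data, usl, lsl, target):
    """Point estimate of the Taguchi index
    Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2)).
    Crisp-data sketch; the fuzzy version replaces `data` with fuzzy numbers."""
    n = len(data)
    mean = sum(data) / n
    # mean squared deviation from the target T: spread plus off-centering
    msd = sum((x - mean) ** 2 for x in data) / n + (mean - target) ** 2
    return (usl - lsl) / (6.0 * math.sqrt(msd))
```

Note how drifting off target lowers the index even when the spread is unchanged, which is exactly the loss-based behavior that distinguishes Cpm from Cp.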

5.
It is often the case that some information is available on the parameter of failure time distributions from previous experiments or analyses of failure time data. The Bayesian approach provides the methodology for incorporating previous information with the current data. In this paper, given a progressively type II censored sample from a Rayleigh distribution, Bayesian estimators and credible intervals are obtained for the parameter and the reliability function. We also derive the Bayes predictive estimator and the highest posterior density prediction interval for future observations. Two numerical examples are presented for illustration, and a simulation study and comparisons are performed. Copyright © 2006 John Wiley & Sons, Ltd.

6.
In this paper we address the problem of estimating θ1 when Y1 and Y2 are observed and |θ1 − θ2| ≤ c for a known constant c. Clearly Y2 contains information about θ1. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators, where the weights are selected using the data. One approach selects such weights in accord with Akaike's entropy maximization criterion. We describe several estimators obtained in this way. The maximum likelihood estimator is investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and, when c is sufficiently small, a minimax estimator. We assess their properties both numerically and theoretically. Finally, we show how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart.
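To illustrate the weighted-likelihood idea, assume for concreteness a normal model Y1 ~ N(θ1, 1), Y2 ~ N(θ2, 1); the abstract does not fix the model, so this is an illustrative assumption. Maximizing the weighted log-likelihood 1·l(θ; y1) + w2·l(θ; y2) yields a weighted mean, and the weight w2 controls the bias-variance trade:

```python
def weighted_likelihood_estimate(y1, y2, w2):
    """Weighted-likelihood estimator of theta1 from Y1 ~ N(theta1, 1) and
    Y2 ~ N(theta2, 1), where |theta1 - theta2| <= c.

    Maximizing l(theta; y1) + w2 * l(theta; y2) over theta gives a weighted
    mean.  Down-weighting Y2 (0 <= w2 <= 1) trades the bias it introduces
    (at most w2*c/(1 + w2)) against a variance reduction from 1 down to
    (1 + w2**2)/(1 + w2)**2.  The adaptive estimators in the paper choose
    w2 from the data; here w2 is fixed by the caller.
    """
    return (y1 + w2 * y2) / (1.0 + w2)
```

With w2 = 0 the estimator ignores Y2 entirely (unbiased, variance 1); with w2 = 1 it is the plain average (variance 1/2, worst-case bias c/2).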

7.
This paper studies the robustness of prior information used in Bayesian statistical analysis. First, a general method is applied to study the robustness of the prior distribution of the failure rate for exponential lifetime distributions. Then, using characteristic properties of the Gamma distribution function, and taking the posterior expected loss under squared error loss as the criterion, the optimal Bayes robust interval for the failure rate is discussed, and the optimal Bayes robust point estimate of the failure rate is given.

8.
We apply the Bayes approach to the problem of projection estimation of a signal observed in the Gaussian white noise model, and we study the rate at which the posterior distribution concentrates about the true signal from the space ℓ2 as the information in the observations tends to infinity. The benchmark is the rate of the so-called oracle projection risk, i.e., the smallest risk of an unknown true signal over all projection estimators. Under an appropriate hierarchical prior, we study the performance of the resulting (appropriately adjusted by the empirical Bayes approach) posterior distribution and establish that the posterior concentrates about the true signal at the oracle projection convergence rate. We also construct a Bayes estimator based on the posterior and show that it satisfies an oracle inequality. The results are nonasymptotic and uniform over ℓ2. Another important feature of our approach is that our results on the oracle projection posterior rate are always stronger than any result about posterior convergence at the minimax rate over a nonparametric class for which the corresponding projection oracle estimator is minimax. We also study implications for the model selection problem: we propose a Bayes model selector and assess its quality in terms of the so-called false selection probability.

9.
In this article, based on a set of upper record values from a Rayleigh distribution, Bayesian and non-Bayesian approaches have been used to obtain the estimators of the parameter, and some lifetime parameters such as the reliability and hazard functions. Bayes estimators have been developed under symmetric (squared error) and asymmetric (LINEX and general entropy (GE)) loss functions. These estimators are derived using the informative and non-informative prior distributions for σ. We compare the performance of the presented Bayes estimators with known, non-Bayesian, estimators such as the maximum likelihood (ML) and the best linear unbiased (BLU) estimators. We show that Bayes estimators under the asymmetric loss functions are superior to both the ML and BLU estimators. The highest posterior density (HPD) intervals for the Rayleigh parameter and its reliability and hazard functions are presented. Also, Bayesian prediction intervals of the future record values are obtained and discussed. Finally, practical examples using real record values are given to illustrate the application of the results.

10.
Asymptotic biases and variances of M-, L- and R-estimators of a location parameter are compared under ε-contamination of the known error distribution F0 by an unknown (and possibly asymmetric) distribution. For each ε-contamination neighborhood of F0, the corresponding M-, L- and R-estimators which are asymptotically efficient at the least informative distribution are compared under asymmetric ε-contamination. Three scale-invariant versions of the M-estimator are studied: (i) one using the interquartile range as a preliminary estimator of scale; (ii) another using the median absolute deviation as a preliminary estimator of scale; and (iii) simultaneous M-estimation of location and scale by Huber's Proposal 2. A question considered for each case is: when are the maximal asymptotic biases and variances under asymmetric ε-contamination attained by unit point mass contamination at ∞? Numerical results for the case of the ε-contaminated normal distribution show that the L-estimators generally perform better (for small to moderate values of ε) than all three of the scale-invariant M-estimators studied.
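Variant (ii), the location M-estimator with the median absolute deviation (MAD) as preliminary scale, can be sketched as an iteratively reweighted mean with Huber weights. This is a generic textbook implementation, not tied to the paper's numerical study:

```python
import statistics

def huber_location(data, k=1.345, tol=1e-8, max_iter=200):
    """Scale-invariant Huber M-estimate of location, using the MAD as the
    preliminary scale estimate -- variant (ii) in the text.  Computed by
    iteratively reweighted averaging."""
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    scale = mad / 0.6745 if mad > 0 else 1.0   # MAD made consistent at the normal
    mu = med
    for _ in range(max_iter):
        # Huber weights: full weight near mu, down-weighted in the tails
        weights = [min(1.0, k * scale / abs(x - mu)) if x != mu else 1.0
                   for x in data]
        new_mu = sum(w * x for w, x in zip(weights, data)) / sum(weights)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu
```

On clean data the estimate stays near the mean; a gross outlier is automatically down-weighted rather than dragging the estimate with it.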

11.
In the framework of nonparametric multivariate function estimation we are interested in structural adaptation. We assume that the function to be estimated has the "single-index" structure, where neither the link function nor the index vector is known. This article suggests a novel procedure that adapts simultaneously to the unknown index and the smoothness of the link function. For the proposed procedure, we prove a "local" oracle inequality (described by the pointwise seminorm), which is then used to obtain an upper bound on the maximal risk of the adaptive estimator under the assumption that the link function belongs to a scale of Hölder classes. The lower bound on the minimax risk shows that, in the case of estimation at a given point, the constructed estimator is optimally rate adaptive over the considered range of classes. For the same procedure we also establish a "global" oracle inequality (under the L r norm, r < ∞) and examine its performance over the Nikol'skii classes. This study shows that the proposed method can be applied to estimating functions of inhomogeneous smoothness, that is, functions whose smoothness may vary from point to point.

12.
Based on randomly right-censored data, a smooth nonparametric estimator of the quantile function of the lifetime distribution is studied. The estimator is defined to be the solution xn(p) of Fn(xn(p)) = p, where Fn is the distribution function corresponding to a kernel estimator of the lifetime density. The strong consistency and asymptotic normality of xn(p) are shown. Data-based selection of the bandwidth required for computing Fn is investigated using bootstrap methods. Illustrative examples are given.
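For the uncensored case, the estimator xn(p) can be sketched directly: build the smoothed distribution function Fn by integrating a Gaussian kernel density estimate and solve Fn(x) = p by bisection. The censored setting of the text replaces Fn with its censored-data counterpart; this sketch shows only the defining equation:

```python
import math

def smoothed_quantile(data, p, h):
    """Kernel-smoothed quantile estimator: returns x_n(p) solving F_n(x) = p,
    where F_n is the CDF of a Gaussian kernel density estimate with
    bandwidth h (uncensored-data sketch)."""
    def phi_cdf(z):                      # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    def F_n(x):
        return sum(phi_cdf((x - xi) / h) for xi in data) / len(data)
    lo = min(data) - 10 * h
    hi = max(data) + 10 * h
    for _ in range(200):                 # bisection: F_n is increasing in x
        mid = 0.5 * (lo + hi)
        if F_n(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Unlike the step-function sample quantile, xn(p) varies smoothly in p, which is what makes the asymptotic normality result in the abstract possible.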

13.
Process capability indices have been widely used to evaluate process performance for the continuous improvement of quality and productivity. When the lifetime of products possesses a one-parameter Pareto distribution, the larger-the-better lifetime performance index is considered. The maximum likelihood estimator is used to estimate the lifetime performance index based on a progressive type I interval censored sample. The asymptotic distribution of this estimator is also investigated. We use this estimator to develop a new hypothesis testing algorithmic procedure when the lower specification limit is known. Finally, two practical examples are given to illustrate the use of this testing procedure to determine whether the process is capable.

14.
We prove that if the one-point compactification of a locally compact, noncompact Hausdorff space L is the topological space called the pseudo-arc, then C0(L,C) is almost transitive. We also obtain two necessary conditions on a metrizable locally compact Hausdorff space L for C0(L) to be almost transitive.

15.
Let X ~ N(θ, 1), where θ ∈ [−m, m] for some m > 0, and consider the problem of estimating θ with quadratic loss. We show that the Bayes estimator δm, corresponding to the uniform prior on [−m, m], dominates δ0(x) = x on [−m, m] and also dominates the MLE over a large part of the parameter interval. We further offer numerical evidence to suggest that δm has quite satisfactory risk performance when compared with the minimax estimators proposed by Casella and Strawderman (1981) and the estimators proposed by Bickel (1981).
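Under quadratic loss, the Bayes estimator δm is the posterior mean under the uniform prior, which reduces to a ratio of two one-dimensional integrals over [−m, m]. A simple grid-integration sketch:

```python
import math

def bayes_bounded_mean(x, m, n_grid=20_001):
    """Posterior mean of theta given X = x, for X ~ N(theta, 1) and the
    uniform prior on [-m, m]; this is the Bayes rule delta_m under
    quadratic loss.  Trapezoid-rule sketch, not a closed form."""
    num = 0.0
    den = 0.0
    step = 2.0 * m / (n_grid - 1)
    for i in range(n_grid):
        theta = -m + i * step
        w = math.exp(-0.5 * (x - theta) ** 2)     # likelihood; flat prior cancels
        t = 0.5 if i in (0, n_grid - 1) else 1.0  # trapezoid end-point weights
        num += t * theta * w
        den += t * w
    return num / den
```

The estimator shrinks x toward the interval: it agrees in sign with x, never leaves [−m, m], and for |x| far outside the interval it saturates near the nearer endpoint.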

16.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
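The plain iid importance sampling estimator and its CLT standard error, which the article extends to the MCMC setting, can be sketched in a few lines (normalized densities assumed; the proposal name and example target are illustrative):

```python
import math
import random

def importance_sampling(h, log_pi, log_pi1, sampler, n, seed=0):
    """Estimate E_pi[h(X)] with iid draws from pi1, together with the
    standard error from the iid-case CLT.  Assumes both log-densities
    are normalized, so w = pi/pi1 is the exact importance weight."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = sampler(rng)
        w = math.exp(log_pi(x) - log_pi1(x))   # importance weight pi(x)/pi1(x)
        vals.append(w * h(x))
    est = sum(vals) / n
    var = sum((v - est) ** 2 for v in vals) / (n - 1)
    return est, math.sqrt(var / n)             # point estimate and standard error

# example: mean of a N(0, 1) target using a heavier N(0, 2^2) proposal
log_pi  = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)
log_pi1 = lambda x: -0.5 * (x / 2) ** 2 - math.log(2 * math.sqrt(2 * math.pi))
est, se = importance_sampling(lambda x: x, log_pi, log_pi1,
                              lambda r: r.gauss(0, 2), n=20_000)
# est should lie within a few standard errors of the true mean 0
```

In the Markov chain version the draws are dependent, so this naive variance estimate is no longer consistent; that is exactly the gap the regenerative construction in the abstract closes.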

17.
Bayesian estimation of lifetime data under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayesian estimation method is used to create the fuzzy Bayes point estimator by invoking the well-known theorem in fuzzy set theory called the "Resolution Identity". We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify computation. Finally, the subproblems can be solved using any commercial optimizer, e.g., GAMS or LINDO.

18.
Applied Mathematical Modelling 2014, 38(9-10): 2586-2600
In this paper, we study interval estimation of the failure rate and reliability for the exponential distribution, in the case of zero-failure data, using the method of two-sided modified Bayesian (M-Bayesian) credible limits. We discuss the properties of two-sided M-Bayesian credible limits, including the impact of the upper bound c of the hyperparameter and the influence of different prior distributions of the hyperparameter. The paper obtains the relationship between three kinds of two-sided M-Bayesian credible limits and two-sided classical confidence limits. Finally, we use a real data set to verify the properties of two-sided M-Bayesian credible limits; the computed results indicate that the method is efficient and easy to apply.

19.
C1(K) is the space of real continuous functions on K endowed with the usual L1-norm, where K, equal to the closure of its interior, is compact in R^m. U is a finite-dimensional subspace of C1(K). The metric projection of C1(K) onto U contains a continuous selection with respect to L1-convergence if and only if U is a unicity (Chebyshev) space for C1(K). Furthermore, if K is connected and U is not a unicity space for C1(K), then there is no continuous selection with respect to L∞-convergence. An example is given of a U and a disconnected K with no continuous selection with respect to L1-convergence, but many continuous selections with respect to L∞-convergence.

20.
Let X1, ..., Xp be p (≥ 2) independent random variables, where each Xi has a gamma distribution with parameters ki and θi. The problem is to simultaneously estimate the p gamma parameters θi under entropy loss, where prior beliefs about the parameters are available. Hierarchical Bayes (HB) and empirical Bayes (EB) estimators are investigated. A computer simulation is then used to compute the percentage improvement in risk of the HB and EB estimators and the estimator of Dey et al. (1987) relative to the MVUE of θ.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号