Similar Documents
20 similar documents found.
1.
Repetitive testing is commonly used in the final testing stage of semiconductor manufacturing to ensure high outgoing product quality and to reduce testing errors. The decisions on testing lot size and the number of testing repetitions ultimately determine the effectiveness of the testing process. Setting the retest rule is often difficult in practice because of uncertainties in the incoming product quality and the testing equipment condition. In this paper, we study a repetitive testing process in which the testing equipment may shift randomly to an inferior state. We develop a cost model that supports optimal decisions on the retesting rule. Through numerical analysis, we provide practical insights into the effects of the testing equipment shift rate, testing errors, and various costs, such as the cost of testing and the cost of rejecting conforming products, on the optimal decision and on system performance. We find that a significant penalty may result if the potential testing equipment shift is ignored.

2.
Hundreds of millions of multiple-choice exams are given every year in the United States. These exams permit form-filling shift errors, where an absent-minded mismarking displaces a long run of correct answers. A shift error can substantially alter the exam's score and thus invalidate it. In this paper, we develop algorithms to accurately detect and correct shift errors while guaranteeing few false detections. We propose a shift error model and probabilistic methods to identify shifted exam regions. We describe the results of our search for shift errors in undergraduate Stony Brook exam sets and in over 100,000 Scholastic Aptitude Tests. These results suggest that approximately 2% of all tests contain shift errors. Extrapolating these results over all multiple-choice exams and forms leads us to conclude that exam takers make millions of undetected shift errors each year. Employing probabilistic shift-correcting systems is inherently dangerous. Such systems may be taken advantage of by clever examinees who seek to increase the probability of correct guessing. We conclude our paper with a short study of optimal guessing strategies when faced with a generous shift error correcting system.
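The abstract does not include the authors' detection algorithm; the sketch below is a much-simplified illustration of the basic idea, in which a suspected shift is scored by how many additional answer-key matches are recovered when a suffix of the responses is realigned. The function names and the `min_run`/`threshold` parameters are hypothetical tuning knobs, not values from the paper.

```python
def shift_match_gain(responses, key, start, shift=1):
    """Extra matches recovered by undoing a suspected shift: answers from
    position start + shift onward are compared against the key entry
    `shift` places earlier (i.e. responses[i] vs key[i - shift])."""
    original = sum(r == k for r, k in zip(responses[start:], key[start:]))
    corrected = sum(responses[i] == key[i - shift]
                    for i in range(start + shift, len(key)))
    return corrected - original

def detect_shift(responses, key, min_run=10, threshold=6):
    """Return the suffix start with the largest realignment gain, if that gain
    exceeds a crude significance threshold; otherwise return None."""
    best = None
    for start in range(1, len(key) - min_run):
        gain = shift_match_gain(responses, key, start)
        if gain >= threshold and (best is None or gain > best[1]):
            best = (start, gain)
    return best  # (suspected shift position, match gain) or None
```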

3.
We consider optimal testing of a system in order to demonstrate reliability with regard to its use in a process after testing, where the system has to function for different types of tasks, which we assume to be independent. We explicitly assume that testing reveals zero failures. The optimal numbers of tasks to be tested are derived by optimisation of a cost criterion, taking into account the costs of testing and the costs of failures in the process after testing, assuming that such failures are not catastrophic to the system. Cost and time constraints on testing are also included in the analysis. We focus on study of the optimal numbers of tests for different types of tasks, depending on the arrival rate of tasks in the process and the costs involved. We briefly compare the results of this study with optimal test numbers in a similar setting, but with an alternative optimality criterion which is more suitable in case of catastrophic failures, as presented elsewhere. For these two different optimality criteria, the optimal numbers to be tested depend similarly on the costs of testing per type and on the arrival rates of tasks in the process after testing.

4.
In this paper, we consider the error detection phenomenon in the testing phase, when modifications or improvements can be made to the software. The occurrence of improvements is described by a homogeneous Poisson process with intensity rate λ. The error detection phenomenon is assumed to follow a nonhomogeneous Poisson process (NHPP) with mean value function m(t). Two models are presented, and for one of them we discuss an optimal release policy for the software that takes into account the occurrences of errors and improvements. Finally, we discuss the possibility of an improvement removing k errors with probability p_k, k ≥ 0, and develop an NHPP model for the error detection phenomenon in this situation.

5.
Testing is an important activity in product development. Past studies developed to determine the optimal scheduling of tests often focus on single-stage testing in a sequential design process. This paper presents an analytical model for the scheduling of tests in an overlapped design process, where a downstream stage starts before the completion of upstream testing. We derive optimal stopping rules for upstream and downstream testing, together with the optimal time elapsed between beginning the upstream tests and beginning the downstream development. We find that the cost function is first convex and then concave increasing with respect to the upstream testing duration. A one-dimensional search algorithm is then proposed for finding the unique optimum that minimizes the overall cost. Moreover, the impact of different model parameters, such as problem-solving capacity and opportunity cost, on the optimal solution is discussed. Finally, we compare the testing strategies in the overlapped process with those in the sequential process and obtain some additional results. The methodology is illustrated with a case study at a handset design company.

6.
We consider the problem of testing two simple hypotheses about unknown local characteristics of several independent Brownian motions and compound Poisson processes. All of the processes may be observed simultaneously as long as desired before a final choice between hypotheses is made. The objective is to find a decision rule that identifies the correct hypothesis and strikes the optimal balance between the expected costs of sampling and choosing the wrong hypothesis. Previous work on Bayesian sequential hypothesis testing in continuous time provides a solution when the characteristics of these processes are tested separately. However, the decision of an observer can improve greatly if multiple information sources are available both in the form of continuously changing signals (Brownian motions) and marked count data (compound Poisson processes). In this paper, we combine and extend those previous efforts by considering the problem in its multisource setting. We identify a Bayes optimal rule by solving an optimal stopping problem for the likelihood-ratio process. Here, the likelihood-ratio process is a jump-diffusion, and the solution of the optimal stopping problem admits a two-sided stopping region. Therefore, instead of using the variational arguments (and smooth-fit principles) directly, we solve the problem by patching the solutions of a sequence of optimal stopping problems for the pure diffusion part of the likelihood-ratio process. We also provide a numerical algorithm and illustrate it on several examples.

7.
Determination of the optimal process mean and production run length
In actual production, the process mean is subject to random shocks and often drifts gradually from an in-control state to an out-of-control state, leading to a large number of nonconforming items. For this situation, this paper assumes that the number of random shocks follows a Poisson process and that the mean shifts caused by successive shocks are independent and identically exponentially distributed. Combining this with an asymmetric Taguchi quality loss function, we establish an economic model for the optimal initial process mean and discuss the determination of the optimal production run length. Comparison with the case in which the initial process mean is set at the target value shows that the model is effective in reducing production cost. A sensitivity analysis indicates how each parameter affects the optimal process mean and production run length.
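The economic model itself is not given in the abstract. As a rough illustration of the kind of evaluation it involves, the following Monte Carlo sketch estimates the average quality loss per unit time for a candidate initial mean and run length, assuming Poisson shock arrivals, exponentially distributed upward drifts, and an asymmetric quadratic (Taguchi-style) loss; all parameter names and values are illustrative assumptions, not the paper's.

```python
import random

def expected_cost(mu0, run_length, lam=0.5, drift_mean=0.3,
                  k_upper=4.0, k_lower=1.0, target=10.0, reps=2000):
    """Monte Carlo estimate of the average quality loss per unit time over one
    production run: shocks arrive as a Poisson process with rate `lam`, each
    shock adds an exponentially distributed upward drift to the process mean,
    and loss accrues as an asymmetric quadratic function of the deviation from
    `target`.  All parameter values are placeholders."""
    total = 0.0
    for _ in range(reps):
        t, mean, loss = 0.0, mu0, 0.0
        while True:
            gap = random.expovariate(lam)                 # time to next shock
            seg = min(gap, run_length - t)
            k = k_upper if mean > target else k_lower     # asymmetric loss coefficient
            loss += k * (mean - target) ** 2 * seg        # loss accrued this segment
            t += seg
            if t >= run_length:
                break
            mean += random.expovariate(1.0 / drift_mean)  # shock-induced drift
        total += loss / run_length
    return total / reps
```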

8.
Although supply chain scholars very often assume the availability of error-free data on the flow of goods into and out of an inventory system, as well as on the on-hand inventory level, recent investigations show that this may not be true even in facilities that use advanced item identification and data capture technologies such as barcode systems. This paper proposes a single-period model in which the inventory data capture process using the barcode system is prone to errors that lead to inaccuracies. In the first part of our work, we derive analytically the optimal policy in the presence of errors when both demand and errors are uniformly distributed. In the second part, we examine quantitatively the impact of record inaccuracies on the performance of an inventory system, in terms of the additional overage and shortage costs incurred.
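A minimal single-period sketch of the second part of the analysis: with uniformly distributed demand and a uniformly distributed record error, the expected overage and shortage cost of an ordering decision based on the (possibly wrong) recorded stock can be estimated by simulation. The distributions, cost rates, and the simple order-quantity search below are assumptions for illustration only, not the paper's analytical policy.

```python
import random

def expected_mismatch_cost(order_qty, recorded_stock,
                           demand_lo=0, demand_hi=100, err_lo=-10, err_hi=10,
                           overage_cost=1.0, shortage_cost=4.0, reps=20000):
    """Monte Carlo estimate of expected overage + shortage cost when the
    physical stock equals the recorded stock plus a uniform record error and
    demand is also uniform.  All numbers are illustrative."""
    total = 0.0
    for _ in range(reps):
        physical = recorded_stock + random.uniform(err_lo, err_hi)
        demand = random.uniform(demand_lo, demand_hi)
        leftover = physical + order_qty - demand
        total += overage_cost * max(leftover, 0.0) + shortage_cost * max(-leftover, 0.0)
    return total / reps

# A crude search over order quantities shows how record errors move the
# cost-minimising decision away from the error-free newsvendor solution.
best = min(range(0, 101),
           key=lambda q: expected_mismatch_cost(q, recorded_stock=20, reps=5000))
```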

9.
We consider the problem of testing for additivity in the standard multiple nonparametric regression model. We derive optimal (in the minimax sense) nonadaptive and adaptive hypothesis testing procedures for additivity against the composite nonparametric alternative that the response function involves interactions of second or higher orders separated away from zero in the L_2([0,1]^d)-norm and also possesses some smoothness properties. In order to shed some light on the theoretical results obtained, we carry out an extensive simulation study to examine the finite-sample performance of the proposed hypothesis testing procedures and compare them with a series of other tests for additivity available in the literature.

10.
The stability of testing hypotheses is discussed. Differing from the usual tests measured by the Neyman-Pearson lemma, the regret and correction of the tests are considered. After the decision is made based on the observations X_1, X_2, ..., X_n, one more piece of datum X_{n+1} is picked and the test is done again in the same way, but based on X_1, X_2, ..., X_n, X_{n+1}. There are three situations: (i) the previous decision is right but the new decision is wrong; (ii) the previous decision is wrong but the new decision is right; (iii) both of them are right or both of them are wrong. Of course, it is desired that the probability of the occurrence of (i) is as small as possible and the probability of the occurrence of (ii) is as large as possible. Since in practice the sample size is sometimes not chosen very precisely after the type I and type II errors are determined, it seems all the more important to consider the above problem. Some optimal plans are also given.

11.
In this study, we reformulated the problem of wafer probe operation in semiconductor manufacturing to consider a probe machine (PM) which has a discrete Weibull shift distribution with a nondecreasing failure rate. To maintain the imperfect PM during the probing of a lot of wafers, a minimal repair policy is introduced with type II inspection error. To increase the productivity of the PM, this paper aims to find an optimal probing lot size that minimizes the expected average processing time per wafer. Conditions for the existence and uniqueness of the optimal lot size are explored. The special case of a geometric shift distribution is studied to find a tighter upper bound on the optimal lot size than in previous studies. Numerical examples are performed to evaluate the impact of minimal repair on the optimal lot size. In addition, the adequacy of using a geometric shift distribution is examined when the actual shift distribution has an increasing failure rate.

12.
We study testing of nonlinear operators; we want to test whether an implementation operator conforms to a specification operator. The problem is difficult, since there can be infinitely many possible inputs but we can only test finitely many of them. An implementation operator may perform well on the tested inputs but may be faulty on the untested inputs. In general, finite testing is inherently inconclusive. Consequently, we modify the problem in three different directions and obtain positive results: (1) we consider an infinite sequence of tests and prove that testing is decidable in the limit; (2) we relax the error criterion and show that finite testing is conclusive, although the cost can be formidable; and (3) we tolerate faults on a negligible subset of inputs and develop a probabilistic testing algorithm with a significantly reduced cost. Our results indicate that test sets are universal; they depend only on the structure of the input set. In fact, they are provided by an ε-net of the input set.
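Direction (3) of the abstract, accepting faults on a negligible subset of inputs in exchange for a cheaper probabilistic test, can be illustrated in a few lines. The sampler, tolerance, and trial count below are placeholders, the operators are assumed scalar-valued, and the sketch ignores the ε-net construction the paper actually relies on.

```python
import random

def probabilistic_conformance_test(impl, spec, sample_input, n_trials=1000, tol=1e-6):
    """Monte Carlo conformance check: draw random inputs and compare the
    implementation operator against the specification within tolerance `tol`.
    Passing certifies correctness only outside a small-probability set of
    inputs - an illustration of the relaxed error criterion, not the paper's
    net-based test-set construction."""
    for _ in range(n_trials):
        x = sample_input()
        if abs(impl(x) - spec(x)) > tol:
            return False          # counterexample found
    return True

# Example use with a hypothetical sampler over [0, 1]:
# probabilistic_conformance_test(my_impl, my_spec, lambda: random.random())
```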

13.
We derive a large deviation result for the log-likelihood ratio for testing simple hypotheses in locally stationary Gaussian processes. This result allows us to find explicitly the rates of exponential decay of the error probabilities of type I and type II for Neyman–Pearson tests. Furthermore, we obtain the analogue of classical results on asymptotic efficiency of tests such as Stein's lemma and the Chernoff bound, as well as the more general Hoeffding bound concerning the best possible joint exponential rates for the two error probabilities.
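For reference, the classical i.i.d. forms of the results the paper generalises to locally stationary Gaussian processes are, in standard notation (these are the textbook statements, not the paper's extension):

```latex
% Classical i.i.d. benchmarks (textbook statements, not the paper's results).
\begin{align*}
  \text{Stein's lemma:}\quad
    & \lim_{n\to\infty}\tfrac{1}{n}\log\beta_n^{*}(\varepsilon)
      = -D(P_0\,\|\,P_1),\\[2pt]
  \text{Chernoff bound:}\quad
    & \lim_{n\to\infty}\tfrac{1}{n}\log
      \min_{\text{tests}}\max\{\alpha_n,\beta_n\}
      = \min_{0\le s\le 1}\log\!\int (dP_0)^{1-s}(dP_1)^{s},
\end{align*}
```

where β*_n(ε) is the smallest type II error probability among tests with type I error at most ε, and D(·‖·) denotes the Kullback–Leibler divergence.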

14.
Considerable importance is attached to the testing phase of the Software Development Life Cycle (SDLC). It is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. Testing itself needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify mathematical relationships between the failure phenomenon and time, have proved useful; SRGMs that include factors affecting the failure process are more realistic. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper that is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered for software reliability modeling. The time lag between fault identification and removal is also modeled. The applicability of our model is shown by validating it on software failure data sets obtained from different real software development projects. Comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean Squared Error (MSE), etc., are presented.
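The proposed model's equations are not given in the abstract. As a generic illustration of how such SRGMs are validated against failure data, the sketch below fits the classical Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) by least squares and reports MSE and a Gaussian-approximation AIC; the data, the choice of Goel-Okumoto, and the AIC formula are all illustrative assumptions, not the paper's model or results.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Classical exponential SRGM mean value function m(t) = a(1 - e^{-bt}),
    used here only as a stand-in for the paper's more flexible model."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative data: cumulative faults observed at the end of each week.
t = np.arange(1, 13)
cum_faults = np.array([5, 11, 16, 20, 25, 28, 31, 33, 35, 36, 37, 38])

params, _ = curve_fit(goel_okumoto, t, cum_faults, p0=[40.0, 0.2])
pred = goel_okumoto(t, *params)

sse = float(np.sum((cum_faults - pred) ** 2))
mse = sse / len(t)
k = 2                                           # number of fitted parameters
aic = len(t) * np.log(sse / len(t)) + 2 * k     # Gaussian least-squares approximation

print(f"a={params[0]:.1f}, b={params[1]:.3f}, MSE={mse:.2f}, AIC={aic:.2f}")
```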

15.
Reconstructing a function from its sampled values is not always exact; errors may arise for many reasons, so error estimation is useful in reconstruction. For non-uniform sampling in shift-invariant spaces, three kinds of errors of the reconstruction formula are discussed in this article, and an estimate is given for each. We find that the accuracy of the reconstruction formula depends mainly on the decay properties of the generator and the sampling function.

16.
By considering equivalences between various forecasting systems, the exact stochastic process followed by the one-step-ahead errors may be found. This process, the error process, is important for any monitoring scheme and is a function of the forecasting system and the underlying data process. The error process is obtained for the most general form of exponential smoothing systems used under optimal conditions, and its statistical properties are derived. In particular, the approximate variance of Trigg's smoothed-error tracking signal is obtained explicitly for several exponential smoothing systems, and a procedure is given for obtaining it numerically for any such system. The use of different smoothing constants in the forecasting system and the tracking signal is discussed, and it is found that a suitable choice of the tracking-signal constant greatly improves the performance of the signal, making it more comparable with CUSUM schemes.
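As a concrete reminder of the quantities involved, the sketch below computes Trigg's tracking signal, the ratio of the exponentially smoothed forecast error to the exponentially smoothed absolute error, alongside simple exponential smoothing, using separate constants for the forecast and for the signal as the abstract suggests; the parameter values are illustrative.

```python
def simple_exp_smoothing_with_trigg(series, alpha=0.2, gamma=0.1):
    """Simple exponential smoothing with Trigg's smoothed-error tracking signal.
    `alpha` is the forecasting constant; `gamma` smooths both the error and the
    mean absolute deviation used in the signal (the two need not be equal).
    Returns the one-step-ahead forecasts and the tracking-signal values."""
    forecast = series[0]
    smoothed_err, mad = 0.0, 1e-9            # tiny MAD avoids division by zero
    forecasts, signals = [], []
    for obs in series[1:]:
        err = obs - forecast
        smoothed_err = gamma * err + (1 - gamma) * smoothed_err
        mad = gamma * abs(err) + (1 - gamma) * mad
        signals.append(smoothed_err / mad)   # Trigg's signal, lies in [-1, 1]
        forecasts.append(forecast)
        forecast += alpha * err              # simple exponential smoothing update
    return forecasts, signals
```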

17.
In call centers, call blending consists of mixing incoming and outgoing call activity according to some call blending balance. Recently, Artalejo and Phung-Duc developed an apt model for such a setting, with a two-way communication retrial queue. However, by assuming a classical (proportional) retrial rate for the incoming calls, the short-term blending balance is heavily affected by the number of incoming calls, which may be undesirable, especially when the balance between incoming and outgoing calls is vital to the service offered. In this contribution, we consider an alternative to classical call blending, through a retrial queue with a constant retrial rate for incoming calls. For the single-server case (one operator), a generating-functions approach allows us to derive explicit formulas for the joint stationary distribution of the number of incoming calls and the system state, and also for the factorial moments. This is complemented with a stability analysis, expressions for performance measures, and recursive formulas allowing reliable numerical calculation. A correlation study enables us to examine the system's short-term blending balance and to compare it with that of the system with a classical retrial rate. For the multiserver case (multiple operators), we provide a quasi-birth-and-death process formulation, from which we derive a simple sufficient and necessary condition for stability, a numerical recipe to obtain the stationary distribution, and a cost model.

18.
Over the years, numerous process capability indices (PCIs) have been proposed to the manufacturing industry to provide numerical measures of process performance. Most research efforts have focused on developing and investigating PCIs that assess process capability by precise measurements of output quality. However, real observations of continuous quantities are not precise numbers; in practice, they are more or less imprecise. Since observations of continuous random variables are imprecise, the values of the related test statistics become imprecise as well. Therefore, decision rules for statistical tests have to be adapted to this situation. This article presents a set of confidence intervals that produces triangular fuzzy numbers for the estimation of the Cpk index using Buckley's approach with some modification. Additionally, a three-decision testing rule and a step-by-step procedure are developed to assess process performance based on fuzzy critical values and fuzzy p-values. The concept is illustrated with an example of testing process performance.
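A minimal sketch of the flavour of the construction: the point estimate of Cpk forms the peak of a triangular fuzzy number whose support is an approximate (1 - β) confidence interval (here a Bissell-type normal approximation). This is a simplification for illustration; the paper's Buckley-style construction and its fuzzy critical values and p-values are not reproduced.

```python
import math
from statistics import mean, stdev, NormalDist

def fuzzy_cpk(data, lsl, usl, beta=0.05):
    """Triangular fuzzy estimate of Cpk as (left, peak, right): the peak is the
    usual point estimate and the support is an approximate (1 - beta)
    confidence interval based on a normal (Bissell-type) approximation."""
    n, xbar, s = len(data), mean(data), stdev(data)
    cpk_hat = min(usl - xbar, xbar - lsl) / (3.0 * s)
    se = math.sqrt(1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))
    z = NormalDist().inv_cdf(1.0 - beta / 2.0)
    return (cpk_hat - z * se, cpk_hat, cpk_hat + z * se)
```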

19.
In this work, we present an adaptive Newton-type method to solve nonlinear constrained optimization problems, in which the constraint is a system of partial differential equations discretized by the finite element method. The adaptive strategy is based on a goal-oriented a posteriori error estimation for the discretization and for the iteration error. The iteration error stems from an inexact solution of the nonlinear system of first-order optimality conditions by the Newton-type method. This strategy allows one to balance the two errors and to derive effective stopping criteria for the Newton iterations. The algorithm proceeds with the search of the optimal point on coarse grids, which are refined only if the discretization error becomes dominant. Using computable error indicators, the mesh is refined locally leading to a highly efficient solution process. The performance of the algorithm is shown with several examples and in particular with an application in the neurosciences: the optimal electrode design for the study of neuronal networks.

20.
We consider multistage group testing with incomplete identification and unreliability features. The objective is to find a cost-efficient group testing policy to select a prespecified number of non-defective items from some population in the presence of false-positive and false-negative test results, subject to reliability and other constraints. To confirm the status of tested groups, various sequential retesting procedures are suggested. We also extend the model to include the possibility of inconclusive test results. We derive all relevant cost functionals in analytically closed form. Numerical examples are also given.
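For orientation, even a two-stage Dorfman scheme with imperfect tests exhibits the trade-offs the paper optimises over. The sketch below evaluates the expected number of tests per item for a candidate group size, assuming (for simplicity) that a pooled test detects the presence of any defective with the stated sensitivity; the defect rate, sensitivity, and specificity are placeholders, and the paper's multistage retesting and constraint structure are not modelled.

```python
def expected_tests_per_item(group_size, p_defective, sens=0.95, spec=0.98):
    """Expected tests per item under two-stage Dorfman group testing with an
    imperfect test: the pool is tested once and, if the pooled test is
    positive, every member is retested individually."""
    p_all_good = (1.0 - p_defective) ** group_size
    p_pool_positive = p_all_good * (1.0 - spec) + (1.0 - p_all_good) * sens
    return (1.0 + group_size * p_pool_positive) / group_size

# Crude search for the group size that minimises the expected tests per item.
best_k = min(range(2, 50), key=lambda k: expected_tests_per_item(k, p_defective=0.02))
```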
