Similar documents
 20 similar documents retrieved; search time: 734 ms
1.
Burn-in is a manufacturing process applied to products to eliminate early failures in the factory before the products reach the customers. Various methods have been proposed for determining an optimal burn-in time for a non-repairable system or a repairable series system, assuming that system burn-in improves all components in the system. In this paper, we establish the trade-off between the component reliabilities during system burn-in and develop an optimal burn-in time for repairable non-series systems to maximize reliability. One impediment to expressing the reliability of a non-series system is that successive failures during system burn-in cannot be described precisely, because a failed component is not detected until the whole system fails. To approximate the successive failures of a non-series system during system burn-in, we consider two types of repair: minimal repair at the time of system failure, and repair at the time of component or connection failure. The two types of repair provide bounds on the optimal system burn-in time of non-series systems.

2.
Burn-in is a widely used engineering method for eliminating defective items before they are shipped to customers or put into field operation. In conventional burn-in procedures, components or systems are subjected to a period of simulated operation prior to actual usage; those which fail during this period are scrapped and discarded. In this paper, we assume that the population of items is composed of two ordered subpopulations, and the elimination of weak items by means of environmental shocks is considered. The optimal severity levels of these shocks that minimize the defined expected costs are investigated. Some illustrative examples are discussed.

3.
An important problem in reliability is to define and estimate the optimal burn-in time. For bathtub-shaped failure-rate lifetime distributions, the optimal burn-in time is frequently defined as the point where the corresponding mean residual life function achieves its maximum. For this point, we construct an empirical estimator and develop the corresponding statistical inferential theory. Theoretical results are accompanied by simulation studies and applications to real data. Furthermore, we develop a statistical inferential theory for the difference between the minimum point of the corresponding failure rate function and the aforementioned maximum point of the mean residual life function. This difference measures the length of the time interval after the optimal burn-in time during which the failure rate function continues to decrease and thus the burn-in process can be stopped.
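The estimator at the heart of this abstract can be sketched simply: the empirical mean residual life (MRL) at time t is the average remaining life of the sample units that survive past t, and the burn-in estimate is the sample point where that function peaks. The sketch below uses invented bathtub-like data and is not the paper's inferential procedure:

```python
import numpy as np

def empirical_mrl(t, lifetimes):
    """Empirical mean residual life at t: average remaining life of survivors."""
    survivors = lifetimes[lifetimes > t]
    return survivors.mean() - t if survivors.size else 0.0

def optimal_burn_in(lifetimes):
    """Estimate the burn-in time as the sample point maximizing the empirical MRL."""
    lifetimes = np.sort(np.asarray(lifetimes, dtype=float))
    mrl = np.array([empirical_mrl(t, lifetimes) for t in lifetimes])
    return lifetimes[mrl.argmax()], mrl.max()

rng = np.random.default_rng(0)
# Hypothetical bathtub-like sample: early failures mixed with wear-out lifetimes.
sample = np.concatenate([rng.exponential(0.5, 100),
                         10.0 + 5.0 * rng.weibull(3.0, 900)])
t_star, mrl_star = optimal_burn_in(sample)
```

With these data the MRL first rises as the weak units are burned away and then falls during wear-out, so the maximizer lands in the early-failure region.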

4.
This paper presents burn-in effects on yield loss and reliability gain for a lifetime distribution developed from a negative binomial defect density distribution and a given defect size distribution, under the assumption that the rate of defect growth is proportional to a power of the present defect size. While burn-in always results in yield loss, it creates reliability gain only if defects grow fast or the field operation time is long. Otherwise, burn-in for a short time could result in reliability loss. The optimal burn-in time for maximizing reliability is finite if defects grow linearly in time and infinite if defects grow nonlinearly in time. The optimal burn-in time for minimizing cost, expressed in terms of both yield and reliability, initially increases with the field operation time but becomes constant once the field operation time is long enough. It is numerically shown that increasing the mean defect density or the degree of defect clustering increases the optimal burn-in time.

5.
Novel replacement policies that are hybrids of inspection maintenance and block replacement are developed for a series system of n identical components, in which the component parts used at successive replacements arise from a heterogeneous population. The heterogeneous nature of the components implies a mixed distribution for the time to failure. In these circumstances, a hybrid policy comprising two phases, an early inspection phase and a later wear-out replacement phase, may be appropriate. The policy has some similarity to burn-in maintenance. The simplest policy described is such a hybrid and comprises a block-type or periodic replacement policy with an embedded block or periodic inspection policy. We use a three-state failure model, in which a component may be good, defective, or failed, in order to model inspection maintenance. Hybrid block replacement with age-based inspection, and opportunistic hybrid policies, also arise naturally in these circumstances and are briefly investigated. For the simplest policy, an approximation is used to determine the long-run cost and the system reliability. The policies have the interesting property that the system reliability may be at a maximum when the long-run cost is close to its minimum. The failure model implies that the effect of maintenance is heterogeneous. The policies themselves imply that maintenance is carried out more prudently on newer than on older systems. The maintenance of traction motor bearings on underground trains is used to illustrate the ideas in the paper.

6.
As many products become increasingly reliable, traditional lifetime-based burn-in approaches, which try to fail defective units during the test, require a long burn-in duration and are thus not effective. We therefore promote a degradation-based burn-in approach that bases the screening decision on the degradation level of a burnt-in unit. Motivated by the infant mortality faced by many Micro-Electro-Mechanical Systems (MEMS), this study develops two degradation-based joint burn-in and maintenance models, under age-based and block-based maintenance, respectively. We assume that the product population comprises a weak and a normal subpopulation. Degradation of the product follows a Wiener process with linear drift, with the weak and normal subpopulations possessing distinct drift parameters. The objective of the joint burn-in and maintenance decisions is to minimize the long-run average cost per unit time during field use by properly choosing the burn-in settings and the preventive replacement intervals. An example using MEMS devices demonstrates the effectiveness of the two models.
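The screening idea can be illustrated with a small simulation: degradation of each unit at the burn-in time b is normally distributed under the Wiener model, and a unit is scrapped if its observed degradation exceeds a cutoff. All parameters (drifts, noise level, cutoff, subpopulation sizes) are invented for illustration and are not the paper's MEMS values:

```python
import numpy as np

rng = np.random.default_rng(1)

def degradation_at(b, drift, sigma, n):
    """Wiener process with linear drift at time b: X(b) ~ N(drift*b, sigma^2 * b)."""
    return drift * b + sigma * np.sqrt(b) * rng.standard_normal(n)

b, sigma, cutoff = 10.0, 0.5, 15.0
weak   = degradation_at(b, drift=2.0, sigma=sigma, n=200)  # fast-degrading subpopulation
normal = degradation_at(b, drift=1.0, sigma=sigma, n=800)  # normal subpopulation

# Screening decision: scrap a unit if its degradation at time b exceeds the cutoff.
weak_scrapped   = float(np.mean(weak > cutoff))
normal_scrapped = float(np.mean(normal > cutoff))
```

Because the two drift parameters separate the subpopulation means at time b by far more than the diffusion noise, the cutoff removes nearly all weak units while passing nearly all normal ones.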

7.
The popular models for repairable-item inventory, in the literature as well as in practical applications, assume that the demands for items are independent of the number of working systems. However, this assumption can introduce a serious underestimation of availability when the number of working systems is small, the failure rate is high, or the repair time is long. In this paper, we study a multi-echelon repairable-item inventory system under the phenomenon of passivation, i.e. serviceable items are passivated ("switched off") upon system failure. This work is motivated by corrective maintenance of high-cost technical equipment in the military. We propose an efficient approximation model to compute time-varying availability. Experiments show that our analytical model agrees well with Monte Carlo simulation.

8.
张少强, 马希荣. 《应用数学》 (Applied Mathematics), 2006, 19(2): 374-380
This paper studies a non-preemptive single-machine batch scheduling problem whose objective is to minimize the maximum delivery time. The problem arises from scheduling the burn-in (baking) operation for chips in semiconductor manufacturing. The burn-in oven can be viewed as a batch processing machine that can handle up to B (< n) jobs simultaneously. In addition, each job has a release time before which it cannot be processed, and an additional delivery time after its processing is completed. The problem is to group the jobs into batches and then schedule the batches so that the time by which all jobs have been delivered is minimized. We design a polynomial-time approximation scheme running in O(f(1/ε) n^(5/2)) time, where f(1/ε) is an exponential function of 1/ε and hence a constant for each fixed ε.

9.
Burn-in has been widely used as an effective procedure for screening out electronic products that fail during the early-failure period, before shipment to the customers. Environmental stress, such as elevated temperature, is increasingly being used to shorten the burn-in time effectively; this method is usually called an accelerated burn-in test. When different stress levels are chosen for the burn-in operation, the corresponding burn-in times must be determined. An Arrhenius–Lognormal distribution can describe the lognormal lifetime of electronic products under different temperature levels. In this paper, the Arrhenius–Lognormal distribution and its mean residual life function are applied to an accelerated burn-in cost model, and a genetic algorithm is used to solve for the optimal burn-in time. We take a real TFT–LCD module as an example and determine its optimal accelerated burn-in time. A sensitivity analysis of the TFT–LCD module case shows the effect of the model parameters on the optimal burn-in time.
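The role of temperature in shortening burn-in can be sketched with the standard Arrhenius acceleration factor: the burn-in time at an elevated temperature is the nominal time divided by that factor. The activation energy and temperatures below are illustrative, not the TFT–LCD case values:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_k, t_stress_k):
    """Arrhenius acceleration factor between use and stress temperatures (in kelvin)."""
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

def accelerated_burn_in_time(nominal_hours, ea_ev, t_use_k, t_stress_k):
    """Burn-in duration at the stress temperature equivalent to the nominal duration."""
    return nominal_hours / acceleration_factor(ea_ev, t_use_k, t_stress_k)

# Hypothetical case: Ea = 0.7 eV, use at 25 C (298.15 K), stress at 85 C (358.15 K).
af = acceleration_factor(0.7, 298.15, 358.15)
t_acc = accelerated_burn_in_time(168.0, 0.7, 298.15, 358.15)  # one nominal week
```

With these illustrative numbers the acceleration factor is close to two orders of magnitude, so a week-long nominal burn-in compresses to a couple of hours at the stress temperature.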

10.
Classification of items as good or bad can often be achieved more economically by examining the items in groups rather than individually. If the result of a group test is good, all items within the group can be classified as good; otherwise, at least one item in the group is bad. Whether it is necessary to identify the bad items, and if so how, is described by the screening policy. Over time, a spectrum of group screening models has been studied, each including some policy. However, most ignore the fact that, in real-life situations, items may arrive at the testing center at random time epochs. This dynamic aspect leads to two decision variables: the minimum and the maximum group size. In this paper, we analyze a discrete-time batch-service queueing model with a general dependency between the service time of a batch and the number of items within it. We deduce several important quantities by which the decision variables can be optimized. In addition, we highlight that, in principle, every possible screening policy can be studied by defining the dependency between the service time of a batch and the number of items within it appropriately.
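For intuition about how group size drives the testing economics, the classical Dorfman policy (a textbook screening policy, not the batch-service queueing model of this paper) is easy to work out: a group of k items is tested once, and if the group tests bad all k items are retested individually.

```python
def expected_tests_per_item(k, p):
    """Expected tests per item under Dorfman screening.

    p is the probability that an individual item is bad, independently.
    A group of k items needs 1 group test, plus k individual tests
    whenever at least one of its items is bad.
    """
    if k == 1:
        return 1.0
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p, k_max=100):
    """Group size minimizing the expected number of tests per item."""
    return min(range(1, k_max + 1), key=lambda k: expected_tests_per_item(k, p))

k_star = best_group_size(0.01)  # optimal group size when 1% of items are bad
```

At p = 0.01 the optimum is a group of 11, cutting the expected testing effort to roughly a fifth of individual testing; as p grows, the optimal group shrinks until grouping stops paying off.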

11.
This study develops a bi-objective method for burn-in decision making, with a view to achieving an optimal trade-off between a cost measure and a performance measure. Under the proposed method, a manufacturer specifies the relative importance of the cost and performance measures. A single-objective optimal solution can then be obtained by optimizing the weighted combination of the two measures. Based on this method, we build a specific model in which the performance objective is the survival probability over a given mission time. We prove that the optimal burn-in duration is decreasing in the weight assigned to the normalized cost. We then develop an algorithm to populate the Pareto frontier for the case where the manufacturer cannot specify the relative weight.
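The weighted-sum mechanism can be sketched on a toy model; the cost and survival curves below are invented for illustration and are not the paper's model:

```python
import numpy as np

durations = np.linspace(0.0, 5.0, 51)            # candidate burn-in durations
cost = durations / durations.max()               # normalized burn-in cost, rises with b
survival = 1.0 - 0.4 * np.exp(-durations)        # mission-survival probability, rises with b

def best_duration(weight):
    """Minimize weight*cost - (1 - weight)*survival over the candidate durations."""
    objective = weight * cost - (1.0 - weight) * survival
    return float(durations[objective.argmin()])

# Sweeping the weight populates an approximate Pareto frontier of durations.
pareto = sorted({best_duration(w) for w in np.linspace(0.0, 1.0, 101)})
```

Consistent with the monotonicity result in the abstract, the optimal duration in this toy model shrinks as more weight is put on the normalized cost.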

12.
The following replacement problem is considered. N items, which are subject to failure, can be divided into two groups distinguished by the fact that the individual replacement cost in one group is higher than in the other. A strategy is required to minimize replacement costs. In some cases the cheapest policy is to replace each item, when it fails, by a new item. However, the paper shows that this policy can usually be improved upon by what is called a two-stage policy. In a two-stage policy the failures in one group are replaced by new items; those in the other group are replaced by items already operating in the first group. Under some circumstances it is shown to be worthwhile to create a second group. Formulae are given for calculating the optimum two-stage strategies for any life distribution, but the emphasis is on the formulation of general conditions under which two-stage schemes are preferable to simple replacement. Some extensions and generalizations are briefly indicated.

13.
In this paper, a nonlinear mathematical model is proposed and analyzed to study the survival of a resource-dependent population. It is assumed that this population and its resource are affected simultaneously by a toxicant (pollutant) emitted into the environment from external sources as well as formed by precursors of this population. It is shown that as the cumulative rates of emission and formation of the toxicant into the environment increase, the densities of the population and its resource settle down to equilibria lower than their initial carrying capacities, and their magnitudes decrease as the rates of emission and formation of the toxicant increase. On comparing different cases, it is noted that when the population is not affected directly by the toxicant but only its resource is affected, the possibility of its survival is greater than when both are affected simultaneously. However, for a large emission rate of the toxicant, the affected resource may be driven to extinction under certain conditions, and the population that depends wholly on it may not survive for long even if it is not affected directly by the toxicant.

14.
Mixtures of decreasing failure rate (DFR) distributions are always DFR. It turns out that mixtures of increasing failure rate (IFR) distributions very often have failure rates that decrease, or show even more complicated patterns of dependence on time. For studying this and other relevant effects, two simple models of mixing, with additive and multiplicative failure rates, are considered. It is shown that for these models an inverse problem can be solved: given an arbitrary shape of the mixture failure rate and a mixing distribution, the failure rate of the governing distribution can be uniquely obtained. Some examples are considered where this operation can be performed explicitly. Possible generalizations are discussed. Copyright © 2001 John Wiley & Sons, Ltd.
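The opening claim is easy to verify numerically. With illustrative parameters of my choosing, a 50/50 mixture of two shape-2 Weibull distributions, each with a strictly increasing failure rate, has a mixture failure rate that rises, falls, and rises again:

```python
import math

def weibull_pdf(t, scale, shape=2.0):
    """Weibull density; for shape 2 the component failure rate 2t/scale^2 is increasing."""
    z = t / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def weibull_sf(t, scale, shape=2.0):
    """Weibull survival function."""
    return math.exp(-((t / scale) ** shape))

def mixture_failure_rate(t, scales=(1.0, 10.0), weights=(0.5, 0.5)):
    """Failure rate of the mixture: mixed density over mixed survival function."""
    pdf = sum(w * weibull_pdf(t, s) for w, s in zip(weights, scales))
    sf = sum(w * weibull_sf(t, s) for w, s in zip(weights, scales))
    return pdf / sf

rates = {t: mixture_failure_rate(t) for t in (0.5, 1.0, 3.0, 10.0)}
```

The dip occurs because the frail subpopulation (scale 1) dies out early, after which the mixture is dominated by the strong subpopulation and its own increasing failure rate takes over.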

15.
Based on an EOQ model in which both demand and purchase price are time-varying, this paper considers a deterioration rate following a more realistic three-parameter Weibull distribution, together with partial backlogging of shortages and the time value of money, and constructs the corresponding EOQ model for deteriorating items. The inventory model is solved numerically with Matlab and a sensitivity analysis of the main parameters is carried out. The results show that the model admits an optimal solution and that each main parameter affects the optimal inventory control to a different degree; the time value of money influences the net present value of the total inventory cost more strongly than backlogging does, so the time value of money deserves particular attention when formulating a sound inventory policy.

16.
ABSTRACT. This paper investigates theoretically to what extent a nature reserve may protect a uniformly distributed population of fish or wildlife against the negative effects of harvesting. Two objectives of this protection are considered: avoidance of population extinction, and maintenance of the population at or above a given precautionary level. The pre-reserve population is assumed to follow the logistic growth law, and two models for post-reserve population dynamics are formulated and discussed. In Model A, the logistic growth law with a common carrying capacity is assumed to hold for the post-reserve population as well. In Model B, each sub-population has its own carrying capacity, proportionate to its distribution area. In both models, migration from the high-density area to the low-density area is proportional to the density difference. For both models there are two possible outcomes: either a unique globally stable equilibrium, or extinction. The latter may occur when the exploitation effort is above a threshold that is derived explicitly for both models. However, when the migration rate is less than the growth rate, both models imply that the reserve can be chosen so that extinction cannot occur. In the opposite case, when migration is large compared with natural growth, a reserve as the only management tool cannot assure survival of the population, but the specific way in which it increases the critical effort is discussed.
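A crude Euler-integration sketch in the spirit of Model B (all equations and rates here are my own hypothetical construction, not the paper's) shows both outcomes: with migration slower than growth, a reserve sustains the stock even under heavy harvesting effort, while a vanishingly small reserve leaves the same effort driving the stock to extinction.

```python
def simulate(reserve_frac, effort, r=1.0, K=1.0, m=0.2, dt=0.01, t_end=500.0):
    """Euler integration of a two-patch logistic model; returns final stocks (x, y).

    x lives in the reserve (area share a, no harvest), y outside (harvested at
    rate effort*y); each patch has carrying capacity proportional to its area,
    and migration is proportional to the density difference between patches.
    """
    a = reserve_frac
    x, y = 0.1 * a, 0.1 * (1 - a)                   # small initial stocks
    for _ in range(int(t_end / dt)):
        flux = m * (x / a - y / (1 - a))            # density-driven migration
        dx = r * x * (1 - x / (K * a)) - flux
        dy = r * y * (1 - y / (K * (1 - a))) + flux - effort * y
        x, y = max(x + dt * dx, 0.0), max(y + dt * dy, 0.0)
    return x, y

with_reserve = simulate(reserve_frac=0.3, effort=2.0)   # effort well above r
no_reserve = simulate(reserve_frac=1e-3, effort=2.0)    # near-zero reserve
```

Here m = 0.2 is below r = 1.0, matching the regime in which the abstract says a suitably chosen reserve rules out extinction.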

17.
We consider the problem where items are produced in lots and sold with warranty. Due to manufacturing variability, some items do not conform to the design specifications and their performance is inferior (for example, they have a higher failure rate). The warranty servicing cost for these items is much higher than for those which conform. Two approaches have been advocated for reducing the warranty cost per item released, and in both the reduction is achieved at the expense of increased manufacturing cost. The first involves life testing to weed out nonconforming items; the second involves strategies to reduce the number of nonconforming items produced. In this paper, the authors develop a model that combines both approaches, with quality control decisions made optimally to minimize the total (manufacturing plus warranty) cost. It extends the authors' earlier models, which deal with quality decisions based solely on either the first or the second approach.

18.
This paper derives the optimal replenishment policy for an imperfect-quality economic manufacturing quantity (EMQ) model with rework and backlogging. The classic EMQ model assumes that all items produced are of perfect quality. However, in real-life manufacturing settings, the generation of imperfect-quality items is almost inevitable. In this study, a random defective rate is assumed. All items produced are inspected, and the defective items are classified as either scrap or repairable. A rework process is carried out in each production run when the regular manufacturing process ends, and a rate of failure in repair is also assumed. Unit disposal cost and unit repairing and holding costs are included in our mathematical modelling and analysis. The renewal reward theorem is employed to cope with the variable cycle length. The optimal replenishment policy, in terms of the lot size and backlogging level that minimize the expected overall costs for the proposed imperfect-quality EMQ model, is derived. Special cases of the model are identified and discussed. A numerical example is provided to demonstrate its practical usage. Copyright © 2006 John Wiley & Sons, Ltd.

19.
An unbounded knapsack problem (KP) is investigated that describes the loading of items into a knapsack with a finite capacity, W, so as to maximize the total value of the loaded items. There are n item types, each available in unlimited quantity and each with a distinct weight and value. Exact branch-and-bound algorithms have previously been applied successfully to the unbounded KP, even when n and W are very large; however, these algorithms are not adequate when the weights and values of the items are strongly correlated. An improved branching strategy is proposed that is less sensitive to such correlation; it can therefore be used for both strongly correlated and uncorrelated problems.
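As a baseline for the exact algorithms the abstract discusses, the unbounded KP admits a standard dynamic program in O(nW) time (this is the textbook DP, not the paper's branch-and-bound strategy):

```python
def unbounded_knapsack(capacity, weights, values):
    """Maximum total value packable into the given capacity, with unlimited
    copies of each item type.

    best[w] is the maximum value achievable with total weight at most w;
    each cell tries extending a smaller solution by one copy of each type.
    """
    best = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        for wt, val in zip(weights, values):
            if wt <= w:
                best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]
```

The DP is indifferent to weight-value correlation, but its O(nW) cost is exactly why branch-and-bound is preferred when W is very large, which is the regime the paper targets.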

20.
Mathematical models are presented which are useful for determining when replacement or maintenance is needed. In addition, techniques for assessing the efficacy of maintenance and/or overhaul are discussed. Since the underlying concepts and techniques for nonrepairable items are relatively well known, attention is focused on repairable items. Moreover, great emphasis is placed on the major differences between the concepts, probabilistic models, and statistical analysis techniques appropriate for nonrepairable and repairable items respectively. Such emphasis is still required because the superficial similarities between nonrepairable and repairable items have contributed to the widespread use of poor terminology and notation which, in turn, make the similarities appear to be substantive, rather than just superficial. This vicious circle, which is still evident in most current reliability texts and standards, must be broken, and this paper is intended to contribute to this campaign. It is also stressed that, even to the very limited extent that repairable-systems concepts and techniques are discussed in the literature, excessive emphasis is placed on reliability growth or improvement. This has resulted in even less understanding of basic notions of repairable-systems deterioration, i.e. of basic concepts associated with systems maintenance. This paper focuses on concepts connected with systems maintenance to help rectify this imbalance. Nonetheless, it is also stressed that the same models (with different parameters) can often be used for both situations.
