Similar References
20 similar references found.
1.
In this paper, we consider the problem of determining optimal operating conditions for a data processing system. The system is burned in for a fixed burn-in time before it is put into field operation; in field operation, it has a work size and follows an age-replacement policy. Assuming that the underlying lifetime distribution of the system has a bathtub-shaped failure rate function, the properties of the optimal burn-in time, optimal work size, and optimal age-replacement policy are derived. This model generalizes those considered in previous works and yields better optimal operating conditions. The paper presents an analytical method for the three-dimensional optimization problem, and an algorithm for determining the optimal operating conditions is also given. Copyright © 2007 John Wiley & Sons, Ltd.
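
As a rough illustration of the kind of search involved (the paper additionally optimizes a third dimension, the work size), the following Python sketch minimizes the long-run cost rate over burn-in time b and replacement age T for an assumed bathtub-shaped hazard; the hazard form and all cost constants are illustrative, not taken from the paper.

import numpy as np
from scipy.integrate import quad

def h(t):
    # Assumed bathtub-shaped hazard: infant-mortality + constant + wear-out terms.
    return 0.5 * np.exp(-2.0 * t) + 0.05 + 0.01 * t**2

def S(t):
    # Survival function S(t) = exp(-int_0^t h(u) du).
    H, _ = quad(h, 0.0, t)
    return np.exp(-H)

def cost_rate(b, T, c_b=0.2, c_f=5.0, c_p=1.0):
    # Long-run cost per unit time when units are burned in for time b, then
    # replaced at failure (cost c_f) or at age T (cost c_p); burn-in costs
    # c_b per unit time.
    Sb = lambda t: S(b + t) / S(b)           # conditional survival after burn-in
    F_T = 1.0 - Sb(T)
    mean_cycle, _ = quad(Sb, 0.0, T)
    return (c_b * b + c_f * F_T + c_p * (1.0 - F_T)) / mean_cycle

grid = [(b, T) for b in np.linspace(0.0, 2.0, 21) for T in np.linspace(0.5, 10.0, 39)]
b_opt, T_opt = min(grid, key=lambda p: cost_rate(*p))
print(b_opt, T_opt, cost_rate(b_opt, T_opt))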

2.
In this study, we reformulate the problem of wafer probe operation in semiconductor manufacturing to consider a probe machine (PM) whose state shifts according to a discrete Weibull distribution with a nondecreasing failure rate. To maintain the imperfect PM during the probing of a lot of wafers, a minimal repair policy is introduced with type II inspection error. To increase the productivity of the PM, this paper aims to find an optimal probing lot size that minimizes the expected average processing time per wafer. Conditions for the existence and uniqueness of the optimal lot size are explored. The special case of a geometric shift distribution is studied to find a tighter upper bound on the optimal lot size than in previous studies. Numerical examples evaluate the impact of minimal repair on the optimal lot size. In addition, the adequacy of using a geometric shift distribution is examined when the actual shift distribution has an increasing failure rate.
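
The trade-off behind the optimal lot size can be sketched numerically. The toy model below charges a fixed setup time per lot, a per-wafer processing time, and a repair time incurred when the machine shifts during the lot (shift per wafer with probability p, i.e., a geometric shift distribution). It is a deliberate simplification of the paper's model (no inspection-error term) with made-up parameters.

def avg_time_per_wafer(n, t_setup=30.0, t_proc=1.0, t_repair=15.0, p=0.01):
    # Probability the machine shifts at least once while probing a lot of
    # n wafers (geometric shift distribution, per-wafer shift probability p).
    p_shift = 1.0 - (1.0 - p) ** n
    return (t_setup + n * t_proc + t_repair * p_shift) / n

# Small lots amortize repairs poorly over setup; large lots shift more often.
n_opt = min(range(1, 201), key=avg_time_per_wafer)
print(n_opt, avg_time_per_wafer(n_opt))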

3.
The problem of an inspection permutation or inspection strategy (first discussed in a research paper in 1989 and reviewed in another research paper in 1991) is revisited. The problem deals with an N-component system whose times to failure are independent but not identically distributed random variables, each following an exponential distribution. The components are connected in series, so the failure of at least one component entails the failure of the system. Upon system failure, the components are inspected one after another in a hierarchical order (called an inspection permutation) until the component that caused the system failure is identified. Inspecting each component takes a non-negligible amount of time and is performed at a cost. Once the faulty component is identified, it is repaired at a cost, and the repair process takes some time. After the repair, the system is as good as new and is put back into operation. The inspection permutation that yields the maximum long-run average net income per unit of time (in the undiscounted case) or the maximum total discounted net income per unit of time (in the discounted case) is called the optimal inspection permutation or strategy. A simpler way of determining an optimal inspection permutation, taking advantage of improvements in computer software, is offered. Mathematica is used to showcase how the method works with the aid of a numerical example. Copyright © 2016 John Wiley & Sons, Ltd.
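
For small N the optimal permutation can simply be enumerated. The sketch below does this for a three-component series system with exponential lifetimes, minimizing the expected cost of locating the failed component; the rates and costs are invented, and the paper's full criterion (which also accounts for inspection times, repair, and income streams) is not reproduced.

import itertools

lam    = [0.02, 0.05, 0.01]   # assumed exponential failure rates
c_insp = [10.0, 5.0, 8.0]     # assumed inspection costs per component

total = sum(lam)
p = [l / total for l in lam]  # P(component i caused the series-system failure)

def expected_inspection_cost(perm):
    # Inspect components in the given order; stop when the culprit is found.
    cost = 0.0
    for k, i in enumerate(perm):
        cost += p[i] * sum(c_insp[j] for j in perm[:k + 1])
    return cost

best = min(itertools.permutations(range(3)), key=expected_inspection_cost)
print(best, expected_inspection_cost(best))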

4.
The optimal control methodology called concentration-of-measure optimal control (COMOC) seeks to minimise a concentration-of-measure upper bound on the probability of failure of an uncertain system. This bound is computed for a system characterised by a single performance measure depending on random inputs. This work considers controlled multibody dynamics taking place in an uncertain environment. The goal is to quantify uncertainty in a controlled robot manoeuvre and to minimise the probability of failure with regard to a performance measure. First, a deterministic optimal control problem is solved, yielding state and control trajectories that minimise an objective function. Boundary conditions for the optimal control problem are chosen such that the system performs ideally in the sense of the performance measure. Secondly, the obtained manoeuvre is reconsidered in the presence of uncertainty. Using a concentration-of-measure inequality, a rigorous upper bound for the probability of failure is derived. Finally, an optimisation is performed that searches for a control sequence (in the neighbourhood of the given one) that minimises the probability of failure. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
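
The core computation is evaluating a concentration-of-measure bound for a given control. The sketch below uses a McDiarmid-type inequality, P[q <= a] <= exp(-2 (E[q] - a)^2 / sum_i D_i^2), with a hypothetical performance measure q and Monte Carlo estimates of the mean and of the input "subdiameters" D_i; everything here is illustrative rather than the paper's actual multibody model.

import numpy as np

rng = np.random.default_rng(0)
lo = np.array([-0.5, -0.5])    # assumed bounds of the two uncertain inputs
hi = np.array([0.5, 0.5])

def q(x):
    # Hypothetical scalar performance measure; failure means q(x) <= a.
    return 10.0 - x[0] ** 2 - 0.5 * x[1]

def subdiameter(i, n_outer=200, n_inner=21):
    # Largest variation of q when only coordinate i moves over its range.
    worst = 0.0
    for _ in range(n_outer):
        x = rng.uniform(lo, hi)
        vals = []
        for v in np.linspace(lo[i], hi[i], n_inner):
            y = x.copy()
            y[i] = v
            vals.append(q(y))
        worst = max(worst, max(vals) - min(vals))
    return worst

mean_q = np.mean([q(rng.uniform(lo, hi)) for _ in range(5000)])
a = 9.0                                   # assumed failure threshold
D2 = sum(subdiameter(i) ** 2 for i in range(2))
pof_bound = np.exp(-2.0 * max(mean_q - a, 0.0) ** 2 / D2)
print(pof_bound)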

5.
Burn-in tests help manufacturers detect defective items and remove them before they are sold to customers. In a competitive marketplace, cost is a major consideration, and forgoing a burn-in test may result in higher and needless expenses. With this in mind, we consider degradation-based burn-in tests in which the degradation path follows a Wiener process and weak items are identified when the process crosses a piecewise linear function. We also study linear functions as a special case of such a piecewise linear barrier. Within this setup, we apply a cost model to determine the optimal burn-in test. Finally, we discuss an illustrative example using GaAs laser degradation data and present an optimal burn-in test for it.
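
A Monte Carlo version of such a test is easy to prototype. The sketch below simulates Wiener degradation paths for a mixture of weak and normal items, scraps any item whose path crosses a piecewise linear barrier during burn-in, and scores a simple misclassification cost. The mixture weights, drifts, barrier knots, and costs are all assumptions; the real GaAs-laser analysis is not reproduced.

import numpy as np

rng = np.random.default_rng(1)
dt, t_burn = 0.05, 3.0
steps = int(t_burn / dt)
t_grid = np.arange(1, steps + 1) * dt

def barrier(t):
    # Assumed piecewise linear cut-off level.
    return np.interp(t, [0.0, 1.5, 3.0], [1.0, 2.0, 2.4])

def expected_cost(n=20000, p_weak=0.1, mu_w=1.2, mu_n=0.3, sigma=0.5,
                  c_scrap_good=1.0, c_ship_weak=8.0):
    cost = 0.0
    for _ in range(n):
        weak = rng.random() < p_weak
        mu = mu_w if weak else mu_n
        # Wiener degradation path with drift mu and diffusion sigma.
        path = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps))
        scrapped = np.any(path >= barrier(t_grid))
        if scrapped and not weak:
            cost += c_scrap_good          # good item wrongly screened out
        elif weak and not scrapped:
            cost += c_ship_weak           # weak item escapes to the field
    return cost / n

print(expected_cost())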

6.
The assumption that the components of a technical system work independently is not appropriate in a number of applications. Many multivariate extensions of the univariate exponential distribution have been suggested as lifetime distributions, but only the models of J. E. Freund and of A. W. Marshall and I. Olkin appear to be physically motivated. A combination of these approaches yields a bivariate lifetime distribution, which is investigated in some detail. Applications of this bivariate lifetime model are considered in preventive maintenance: for a two-component parallel system, the optimal replacement time is determined with respect to the total expected discounted cost criterion. Results from the theory of stochastic processes are used to obtain the optimal strategy for different information levels. Numerical examples based on a two-component parallel system with dependent component lifetimes show how the optimal replacement policy depends on the information level and on the degree of dependence between the components.
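
The flavour of the optimization can be conveyed by simulation. The sketch below samples dependent component lifetimes from a combined Marshall-Olkin (common shock) and Freund (load-sharing) construction, and evaluates the total expected discounted cost of replacing the two-component parallel system at age T or at failure, whichever comes first. The rates, costs, and discount factor are invented, and the paper's analytical treatment of information levels is not attempted.

import numpy as np

rng = np.random.default_rng(2)

def sample_pair(l1=0.10, l2=0.15, l12=0.05, l1p=0.30, l2p=0.40):
    # Marshall-Olkin common shock at time z; Freund rate increase for the survivor.
    z  = rng.exponential(1.0 / l12)
    t1 = rng.exponential(1.0 / l1)
    t2 = rng.exponential(1.0 / l2)
    if z <= min(t1, t2):
        return z, z                           # shock destroys both components
    if t1 < t2:
        return t1, t1 + min(rng.exponential(1.0 / l2p), z - t1)
    return t2 + min(rng.exponential(1.0 / l1p), z - t2), t2

def discounted_cost(T, delta=0.05, c_fail=10.0, c_plan=3.0, n=20000):
    # Renewal-reward identity: V = E[e^{-d tau} C] / (1 - E[e^{-d tau}]).
    num = den = 0.0
    for _ in range(n):
        life = max(sample_pair())             # parallel system: later of the two
        tau = min(life, T)
        num += np.exp(-delta * tau) * (c_fail if life < T else c_plan)
        den += 1.0 - np.exp(-delta * tau)
    return num / den

T_opt = min(np.linspace(1.0, 30.0, 30), key=discounted_cost)
print(T_opt, discounted_cost(T_opt))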

7.
An extended stochastic failure model for a system subject to random shocks
In this article, a stochastic failure model for a system subject to a random shock process is studied. It is assumed that a fatal shock results in an immediate system failure, whereas a non-fatal shock may increase the susceptibility of the system to failure. The lifetime distribution of the system and its failure rate function are derived, and the effect of environmental factors on the failure process of the system is also investigated. Lifetimes of systems operated under different environmental conditions are stochastically compared.
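
The model's lifetime distribution can be explored by simulation when a closed form is not at hand. In the sketch below, shocks arrive as a Poisson process; a shock is fatal with probability p, and each non-fatal shock raises the intrinsic failure hazard by a fixed increment — one concrete (and assumed) way of modelling "increased susceptibility."

import numpy as np

rng = np.random.default_rng(3)

def lifetime(shock_rate=1.0, p_fatal=0.1, h0=0.05, bump=0.02, horizon=500.0):
    t, h = 0.0, h0
    while t < horizon:
        dt_shock = rng.exponential(1.0 / shock_rate)   # time to next shock
        dt_fail  = rng.exponential(1.0 / h)            # intrinsic failure at hazard h
        if dt_fail < dt_shock:
            return t + dt_fail
        t += dt_shock
        if rng.random() < p_fatal:
            return t                                   # fatal shock
        h += bump                                      # non-fatal shock: more fragile
    return horizon

samples = np.array([lifetime() for _ in range(10000)])
print(samples.mean(), np.quantile(samples, [0.1, 0.5, 0.9]))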

8.
This work addresses the filtering problem for norm-bounded uncertain discrete dynamic systems with multiple sensors having different stochastic failure rates. To tackle the uncertainties of the covariance matrices of the state and the state-estimation error simultaneously, upper bounds containing a scaling parameter are derived, and a robust finite-horizon filter minimizing the upper bound of the estimation-error covariance is proposed. Furthermore, an optimal scaling parameter is exploited to reduce the conservativeness of the upper bounds of the state and estimation-error covariances, which leads to an optimal robust filter for all possible missing measurements and all admissible parameter uncertainties. A numerical example illustrates the performance improvement over the traditional Kalman filtering method.
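
A stripped-down relative of this setting is Kalman filtering with intermittent observations, where each measurement arrives with probability gamma (one minus the sensor failure rate) and the update step is simply skipped on a miss. The sketch below implements that baseline for a toy second-order system; the paper's norm-bounded uncertainties, upper-bound recursion, and scaling-parameter optimization are not reproduced.

import numpy as np

rng = np.random.default_rng(4)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
gamma = 0.9                      # assumed measurement-arrival probability

x_true = np.zeros(2)
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    # Time update.
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    if rng.random() < gamma:     # the sensor delivered a reading this step
        y = C @ x_true + rng.multivariate_normal(np.zeros(1), R)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x_hat = x_hat + K @ (y - C @ x_hat)
        P = (np.eye(2) - K @ C) @ P
print(x_true, x_hat)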

9.
A step-stress accelerated life-testing model is considered for progressive type-I censored experiments in which the tested items are not monitored continuously but inspected at prespecified time points, thus producing grouped data. The underlying lifetime distributions belong to a general scale family. The points of stress-level change are simultaneously inspection points, and additional inspection points may be assigned between the stress-level change points. In a Bayesian framework, the posterior distributions of the model parameters are derived for characteristic choices of prior distributions, such as conjugate-like and normal priors, as well as vague or noninformative priors. The developed approach is illustrated on a simulated example and on a real data set, both known from the literature. The results are compared with previous frequentist and Bayesian analyses.
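
For a feel of the computations, the sketch below evaluates a grid posterior for a two-level step-stress test with exponential lifetimes and grouped (interval-censored) counts, using the cumulative-exposure model and a flat prior. The data, inspection times, and prior are fabricated for illustration; the paper's general scale family and conjugate-like priors are not implemented.

import numpy as np

# Assumed design: stress changes at tau = 5; inspections at 2.5, 5, 7.5, 10.
tau = 5.0
edges = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
counts = np.array([4, 7, 12, 10])     # fabricated failure counts per interval
survivors = 17                        # fabricated number surviving past 10

def cdf(t, th1, th2):
    # Cumulative-exposure exponential CDF under the step-stress pattern.
    expo = np.where(t <= tau, t / th1, tau / th1 + (t - tau) / th2)
    return 1.0 - np.exp(-expo)

def loglik(th1, th2):
    F = cdf(edges, th1, th2)
    probs = np.diff(F)
    return np.sum(counts * np.log(probs)) + survivors * np.log(1.0 - F[-1])

th1_grid = np.linspace(1.0, 40.0, 120)
th2_grid = np.linspace(0.5, 20.0, 120)
T1, T2 = np.meshgrid(th1_grid, th2_grid, indexing="ij")
logpost = np.vectorize(loglik)(T1, T2)          # flat prior: posterior ∝ likelihood
post = np.exp(logpost - logpost.max())
post /= post.sum()
print((post * T1).sum(), (post * T2).sum())     # posterior means of theta1, theta2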

10.
In this work, new results on functional-type a posteriori estimates for elliptic optimal control problems with control constraints are presented. More precisely, we derive new, sharp, guaranteed, and fully computable lower bounds for the cost functional in addition to the already existing upper bounds. Using both the lower and the upper bounds, we arrive at two-sided estimates for the cost functional. We prove that these bounds lead to sharp, guaranteed, and fully computable upper estimates for the discretization error in the state and the control of the optimal control problem. First numerical tests are presented, confirming the efficiency of the a posteriori estimates derived. © 2016 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 33: 403–424, 2017

11.
In this work, we consider linear elliptic problems posed in long domains, i.e., domains whose size in one coordinate direction is much greater than the size in the other directions. If the variation of the coefficients and right-hand side along the emphasized direction is small, the original problem can be reduced to a lower-dimensional one that is much easier to solve. The a posteriori estimation of the error stemming from this model reduction constitutes the goal of the present work. For a general coefficient matrix and right-hand side of the equation, a reliable and efficient error estimator is derived that provides a guaranteed upper bound for the modelling error, exhibits the optimal asymptotics as the size of the domain tends to infinity, and correctly indicates the local error distribution. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
This paper examines the question of optimal harvesting time in a size-heterogeneous farmed aquatic population, using a model reflecting the effect of population density on both the overall mortality rate and individual growth. The analysis enables an optimal harvesting rule to be deduced. The results are applied to shrimp culture in recirculation systems in Mexico, and numerical solutions are derived for different production scenarios. Assuming identical culture conditions, results are also obtained under the hypothesis of homogeneous population growth, the view traditionally taken in the relevant economic literature. The calculated optimal harvesting times tend to decrease at higher densities, although this rule fails under the size-heterogeneous population model. In general, optimal harvesting times are overestimated when size-homogeneity of the culture is assumed. Our analysis reveals that management predictions are significantly mistaken if the size-heterogeneity phenomenon is not taken into account.
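
Under a homogeneous-population simplification, the harvesting rule reduces to a one-dimensional optimization. The sketch below combines density-dependent mortality (dN/dt = -(m0 + d N) N, which has a closed-form solution) with von Bertalanffy individual growth and maximizes the discounted revenue of harvesting the whole stock at time t. All biological and economic parameters are invented, and the paper's size-heterogeneous model is not reproduced.

import numpy as np
from scipy.optimize import minimize_scalar

def stock(t, n0=1e5, m0=0.3, d=2e-6):
    # Closed-form solution of dN/dt = -(m0 + d*N) * N with N(0) = n0.
    return m0 * n0 * np.exp(-m0 * t) / (m0 + d * n0 * (1.0 - np.exp(-m0 * t)))

def weight(t, w_inf=30.0, k=0.8):
    # Von Bertalanffy growth in weight (grams), starting near zero.
    return w_inf * (1.0 - np.exp(-k * t)) ** 3

def neg_profit(t, price=0.004, delta=0.1):
    # Discounted revenue of harvesting the entire stock at time t (costs omitted).
    return -np.exp(-delta * t) * price * stock(t) * weight(t)

res = minimize_scalar(neg_profit, bounds=(0.05, 5.0), method="bounded")
print(res.x, -res.fun)   # optimal harvest time and discounted revenue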

13.
Joint progressive censoring schemes are quite useful for conducting comparative life-testing experiments on different competing products. Recently, Mondal and Kundu ("A New Two Sample Type-II Progressive Censoring Scheme," Commun Stat-Theory Methods; 2018) introduced a joint progressive censoring scheme on two samples known as the balanced joint progressive censoring (BJPC) scheme. Optimal planning of such a progressive censoring scheme is an important issue for the experimenter. This article considers optimal life-testing plans under the BJPC scheme using the Bayesian precision and D-optimality criteria, assuming that the lifetimes follow a Weibull distribution. To obtain the optimal BJPC life-testing plans, one needs to carry out an exhaustive search within the set of all admissible plans under the BJPC scheme. For large sample sizes, however, determining the optimal life-testing plan by exhaustive search is difficult, so a metaheuristic algorithm based on the variable neighborhood search method is employed to compute it. Optimal plans are provided under different scenarios; they depend on the values of the hyperparameters of the prior distribution, and the effect of different prior information on the optimal scheme is studied.
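
The metaheuristic itself is generic and easy to sketch. Below is a bare-bones variable neighborhood search over progressive removal schemes (k nonnegative integers summing to n - k), with a placeholder objective standing in for the Bayesian precision or D-optimality value the paper would compute for each BJPC plan; the scheme encoding and all settings are assumptions.

import numpy as np

rng = np.random.default_rng(5)

def random_scheme(n, k):
    # Random admissible plan: k nonnegative removals summing to n - k.
    return rng.multinomial(n - k, [1.0 / k] * k)

def shake(scheme, radius):
    # Move `radius` units between randomly chosen positions.
    s = scheme.copy()
    for _ in range(radius):
        i, j = rng.integers(len(s)), rng.integers(len(s))
        if s[i] > 0:
            s[i] -= 1
            s[j] += 1
    return s

def objective(scheme):
    # Placeholder: the real criterion would be the (negative) Bayesian
    # D-optimality value of the BJPC plan; variance is a hypothetical stand-in.
    return float(np.var(scheme))

def vns(n=20, k=5, r_max=3, iters=300):
    best = random_scheme(n, k)
    f_best = objective(best)
    for _ in range(iters):
        r = 1
        while r <= r_max:
            cand = shake(best, r)
            f = objective(cand)
            if f < f_best:
                best, f_best, r = cand, f, 1   # improvement: restart neighborhoods
            else:
                r += 1
    return best, f_best

print(vns())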

14.
In this paper, we study upper bounds for the ruin probabilities of an insurance company that invests its wealth in a stock and a bond. We assume that the interest rate of the bond is stochastic and is described by a Cox-Ingersoll-Ross (CIR) model. For the stock price process, we consider both the case of constant volatility (with the price driven by an O-U process) and the case of stochastic volatility (driven by a CIR model). In each case, under certain conditions, we obtain the minimal upper bound for the ruin probability as well as the corresponding optimal investment strategy by a pure probabilistic method.
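
While the paper derives its bounds analytically, the underlying ruin probability is easy to approximate by simulation. The sketch below discretizes a CIR short rate with a full-truncation Euler scheme, lets the surplus earn that rate while collecting premiums and paying compound Poisson claims, and counts ruin paths. The parameters are invented, and the stock-investment component of the paper's model is omitted.

import numpy as np

rng = np.random.default_rng(6)

def ruin_prob(u0=10.0, premium=1.5, horizon=20.0, dt=0.02,
              kappa=0.8, theta=0.04, sigma=0.2, r0=0.04,
              claim_rate=1.0, claim_mean=1.0, n_paths=1000):
    steps = int(horizon / dt)
    ruined = 0
    for _ in range(n_paths):
        r, u = r0, u0
        for _ in range(steps):
            # Full-truncation Euler step for the CIR rate.
            rp = max(r, 0.0)
            r += kappa * (theta - rp) * dt + sigma * np.sqrt(rp * dt) * rng.standard_normal()
            # Surplus earns interest and collects premiums ...
            u += (rp * u + premium) * dt
            # ... and pays compound Poisson claims (exponential sizes, thinned per step).
            if rng.random() < claim_rate * dt:
                u -= rng.exponential(claim_mean)
            if u < 0.0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_prob())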

15.
Su Baohe. OR Transactions (《运筹学学报》), 2007, 11(1): 93-101.
A model of a system under inspection is studied, in which the system has four operating states (normal working, abnormal working, normal failure, and abnormal failure). A failed system requires no inspection, but a working system must be inspected to determine whether it is normal or abnormal. Once the system starts working, it is inspected at random intervals until it fails or an abnormal state is detected. Using probabilistic analysis and the density evolution method for stochastic models, some new reliability indices and the optimal inspection policy for the system are derived.

16.
The synchronization problem for both continuous- and discrete-time complex dynamical networks with time-varying delays is investigated. Using an optimal partitioning method, the time-varying delays are partitioned into l subintervals, and generalized results are derived in terms of linear matrix inequalities (LMIs). New delay-dependent synchronization criteria in terms of LMIs are derived by constructing an appropriate Lyapunov-Krasovskii functional and using the reciprocally convex combination technique together with some inequality techniques. Numerical examples are given to illustrate the effectiveness and advantage of the proposed synchronization criteria. © 2014 Wiley Periodicals, Inc. Complexity 21: 193–210, 2015
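
Criteria of this kind are checked numerically as matrix-inequality feasibility problems. As a minimal related computation (a plain delay-free Lyapunov equation rather than the paper's delay-partitioned LMIs), the sketch below solves A^T P + P A = -Q for an assumed system matrix A and checks that P is positive definite, which certifies stability; the genuine delay-dependent LMIs would be handed to a semidefinite-programming solver instead.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])     # assumed (stable) system matrix
Q = np.eye(2)

# Solve A^T P + P A = -Q; A is stable iff the solution P is positive definite.
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvalsh(P))    # all eigenvalues positive => inequality feasible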

17.
Although several Fourth Normal Form (4NF) decomposition algorithms have been published, the problem's computational complexity remains an open question. Part of the difficulty is that no one has established an upper bound on the size of a 4NF decomposition scheme for a given number of attributes. We prove such an upper bound and present an algorithm that is worst-case optimal: it never produces a 4NF decomposition scheme larger than the upper bound.

18.
Lü Xiaoxing, Peng Wei & Liu Luqin. Journal of Mathematics (《数学杂志》), 2015, 35(5): 1233-1244.
In this paper, a new two-parameter lifetime distribution with a monotonically decreasing failure rate is generated by "mixing" the Pareto and logarithmic distributions. The moments, entropy, failure rate function, and mean residual life of the distribution are studied, together with maximum likelihood estimation of its parameters; the EM algorithm is applied to compute the maximum likelihood estimates, and numerical simulations are carried out.
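
One common way such "mixtures" are built is by compounding: X = min(Y_1, ..., Y_N) with i.i.d. Pareto lifetimes and N following a logarithmic distribution, which tends to produce a decreasing failure rate. The sketch below samples from that assumed construction (it may not match the paper's exact definition) and inspects a crude empirical hazard.

import numpy as np

rng = np.random.default_rng(7)

def sample_logarithmic(theta):
    # Inversion sampling for P(N = k) = -theta^k / (k * log(1 - theta)), k >= 1.
    u, k = rng.random(), 1
    pk = -theta / np.log(1.0 - theta)
    cdf = pk
    while u > cdf:
        pk *= theta * k / (k + 1)
        k += 1
        cdf += pk
    return k

def sample(n, a=2.0, b=1.0, theta=0.5):
    out = np.empty(n)
    for i in range(n):
        k = sample_logarithmic(theta)
        y = b * (rng.pareto(a, size=k) + 1.0)   # classical Pareto(a) on [b, inf)
        out[i] = y.min()
    return out

x = np.sort(sample(20000))
# Crude empirical hazard on a grid: h(t) ~ density / survival.
grid = np.linspace(1.0, 4.0, 7)
dens, _ = np.histogram(x, bins=grid, density=True)
surv = 1.0 - np.searchsorted(x, grid[:-1]) / x.size
print(dens / surv)    # should be roughly decreasing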

19.
Local a posteriori error estimators are derived for linear elliptic problems over general polygonal domains in 2D. The estimators lead to a sharp upper bound for the energy error in a local region of interest. This upper bound consists of H1-type local error indicators in a slightly larger subdomain, plus weighted L2-type local error indicators outside this subdomain, which account for pollution effects. This constitutes the basis of a local adaptive refinement procedure. Numerical experiments show superior performance compared with the standard global procedure, as well as the generation of locally quasi-optimal meshes. © 2003 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 19: 421–442, 2003

20.
Software failures have become the major factor that brings a system down or degrades its quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive manner in which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user-perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non-homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data become available. Copyright © 2002 John Wiley & Sons, Ltd.
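
The calibration idea can be made concrete with a Goel-Okumoto NHPP, whose mean value function is m(t) = a(1 - e^{-bt}). The sketch below fits the model separately to fabricated test-phase and field-phase failure times by maximum likelihood and reports the ratio of the fitted intensities as a calibration factor; the data and the choice of Goel-Okumoto are assumptions, not the paper's case study.

import numpy as np
from scipy.optimize import minimize

def fit_go(times, T):
    # MLE for the Goel-Okumoto NHPP observed on [0, T]:
    # log L = sum_i log(a b e^{-b t_i}) - a (1 - e^{-bT}).
    def nll(params):
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf
        return -(np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T)))
    res = minimize(nll, x0=[len(times) + 1.0, 0.01], method="Nelder-Mead")
    return res.x

# Fabricated failure times (hours) from system test and from the field.
test_times  = np.array([5, 12, 20, 31, 45, 60, 82, 110, 150, 210], float)
field_times = np.array([15, 40, 75, 130, 220, 340], float)
T_test, T_field = 250.0, 400.0

a_t, b_t = fit_go(test_times, T_test)
a_f, b_f = fit_go(field_times, T_field)

def intensity(a, b, t):
    return a * b * np.exp(-b * t)

# Calibration factor: field intensity relative to test intensity at release.
print(intensity(a_f, b_f, T_field) / intensity(a_t, b_t, T_test))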
