20 similar documents found (search time: 15 ms)
1.
M. Xie, Applied Stochastic Models in Business and Industry, 1990, 6(4): 207-213
Software reliability is a rapidly developing discipline. In this paper we model the fault-detection process by Markov processes with decreasing jump intensity. The intensity function is suggested to be a power function of the number of remaining faults in the software. The models generalize the software reliability model suggested by Jelinski and Moranda (‘Software reliability research’, in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972, pp. 465–497). The main advantage of our models is that we do not assume that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation, and statistical tests also indicate that this may be the case. Numerical results show that the estimated expected time to the next failure is reasonable and decreases relatively stably as the number of removed faults increases.
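As a rough illustration of this class of model (not the paper's own formulation), the sketch below simulates failure times from a Markov pure death process whose jump intensity is a power function phi * n**alpha of the number n of remaining faults; alpha = 1 corresponds to the Jelinski–Moranda model and alpha = 2 to the second-order power function mentioned above. The values of n0, phi and alpha are invented.

```python
# Hedged sketch: failure times of a pure death process whose jump intensity
# is a power function of the number of remaining faults (parameters invented).
import numpy as np

rng = np.random.default_rng(0)

def simulate_failure_times(n0=30, phi=0.005, alpha=2.0):
    """Return cumulative failure times; alpha=1 recovers Jelinski-Moranda."""
    times, t, n = [], 0.0, n0
    while n > 0:
        rate = phi * n ** alpha            # intensity while n faults remain
        t += rng.exponential(1.0 / rate)   # exponential sojourn in current state
        times.append(t)
        n -= 1                             # one fault removed per failure
    return np.array(times)

print(simulate_failure_times()[:5])
```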
2.
Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. One of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa, generally defined as the ratio of net fault reduction to failures experienced. During the software testing process, the FRF can be influenced by many environmental factors, such as imperfect debugging and debugging time lag. Thus, in this paper, we first analyze some real data to observe the trends of the FRF and consider it to be a time-variable function. We further study how to integrate a time-variable FRF into software reliability growth modeling. Experimental results show that the proposed models can improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of the FRF may affect the release time as well as the development cost.
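One simple way to picture a time-variable FRF inside an NHPP-type growth model is to scale the fault-detection differential equation by the FRF. The sketch below assumes, purely for illustration, an exponentially decaying FRF r(t) = r0 * exp(-k*t) together with arbitrary values for the fault content a, the detection rate b, r0 and k; it is a minimal stand-in, not one of the paper's proposed models.

```python
# Hedged sketch: NHPP mean-value function with a time-variable FRF.
# The exponential FRF form and all parameter values are assumptions.
import numpy as np
from scipy.integrate import odeint

a, b = 100.0, 0.1          # total fault content, fault detection rate (assumed)
r0, k = 0.9, 0.05          # initial FRF and its decay rate (assumed)

def dm_dt(m, t):
    frf = r0 * np.exp(-k * t)          # time-variable fault reduction factor
    return frf * b * (a - m)           # detection scaled by the FRF

t = np.linspace(0.0, 50.0, 101)
m = odeint(dm_dt, 0.0, t).ravel()      # expected cumulative failures m(t)
print(m[[0, 50, 100]])
```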
3.
The binomial software reliability growth model (SRGM) contains most existing SRGMs proposed in earlier work as special cases, and can describe every software failure-occurrence pattern in continuous time. In this paper, we propose generalized binomial SRGMs in both continuous and discrete time, based on the idea of cumulative Bernoulli trials. It is shown that the proposed models give some new unusual discrete models as well as the well-known continuous SRGMs. Through numerical examples with actual software failure data, two estimation methods for model parameters with grouped data are provided, and the predictive model performance is examined quantitatively.
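A minimal sketch of the binomial idea, under the assumption (not taken from the paper) that each of N initial faults is detected by time t independently with probability F(t), with F an exponential detection-time distribution, so that the cumulative number of detected faults is Binomial(N, F(t)); N, b and t are invented.

```python
# Hedged sketch of a binomial-type SRGM: cumulative detections ~ Binomial(N, F(t)).
# The exponential F and the parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import binom

N, b = 50, 0.08                        # initial faults, detection rate (assumed)
F = lambda t: 1.0 - np.exp(-b * t)     # per-fault detection-time distribution

t = 10.0
p = F(t)
print("mean faults detected by t=10:", N * p)
print("P(exactly 30 detected):", binom.pmf(30, N, p))
```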
4.
Considerable importance is attached to the testing phase of the Software Development Life Cycle (SDLC). It is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. However, testing needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify a mathematical relationship between the failure phenomenon and time, have proved useful, and SRGMs that include factors affecting the failure process are more realistic. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper that is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered for software reliability modeling, and the time lag between fault identification and removal is also captured. The applicability of the model is shown by validating it on software failure data sets obtained from different real software development projects, and comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), mean squared error (MSE), etc. are presented.
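The sketch below is a generic stand-in, not the paper's model: an NHPP mean-value function in which detections are driven by an assumed logistic testing-effort rate w(t) and an assumed time-dependent fault detection rate b(t), with removals lagging detections by a fixed delay tau. All functional forms and parameter values are illustrative.

```python
# Hedged sketch: NHPP with testing effort, time-dependent FDR and a fixed
# detection-to-removal lag. Forms and parameters are assumptions.
import numpy as np
from scipy.integrate import odeint

a, tau = 120.0, 2.0                                      # fault content, removal lag
w = lambda t: 10.0 / (1.0 + np.exp(-0.3 * (t - 10.0)))   # testing-effort rate (logistic)
b = lambda t: 0.02 * (1.0 + 0.05 * t)                    # time-dependent fault detection rate

def dmd_dt(m, t):
    return b(t) * w(t) * (a - m)                         # detections driven by effort and FDR

t = np.linspace(0.0, 40.0, 401)
m_detect = odeint(dmd_dt, 0.0, t).ravel()
m_remove = np.interp(t - tau, t, m_detect, left=0.0)     # removals lag detections by tau
print(f"detected by t=40: {m_detect[-1]:.1f}, removed: {m_remove[-1]:.1f}")
```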
5.
A lot of development resources are consumed during the software testing phase, which fundamentally consists of module testing, integration testing and system testing. It is therefore of great importance for a manager to decide how to spend testing resources effectively in order to develop quality, reliable software. In this paper, we consider two kinds of testing-resource allocation problems that make the best use of the specified testing resources during module testing. We also introduce a software reliability growth model, based on a nonhomogeneous Poisson process, for describing the time-dependent behavior of detected software faults and of the testing-resource expenditures spent during testing. It is shown that the optimal allocation of testing resources among software modules can improve software reliability.
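As a hedged illustration of a testing-resource allocation problem of this kind, the sketch below maximizes the total expected number of detected faults across three modules, each assumed to follow an exponential NHPP mean-value function a_i * (1 - exp(-b_i * w_i)) in the effort w_i it receives, subject to a total budget W. The module parameters and W are invented, and the paper's actual formulations may differ.

```python
# Hedged sketch: allocate a fixed testing-resource budget W across modules to
# maximize total expected detected faults. Parameters are invented.
import numpy as np
from scipy.optimize import minimize

a = np.array([80.0, 50.0, 30.0])     # fault content per module (assumed)
b = np.array([0.02, 0.05, 0.03])     # detectability per module (assumed)
W = 100.0                            # total testing-resource budget (assumed)

def neg_detected(w):
    return -np.sum(a * (1.0 - np.exp(-b * w)))

res = minimize(
    neg_detected,
    x0=np.full(3, W / 3),
    bounds=[(0.0, W)] * 3,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - W}],
)
print("optimal allocation:", np.round(res.x, 1), "faults detected:", -res.fun)
```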
6.
Software failures have become the major factor that brings the system down or causes a degradation in the quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive nature by which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user‐perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non‐homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, use of the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data becomes available. Copyright © 2002 John Wiley & Sons, Ltd.
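A minimal numerical sketch of the calibration idea, with invented failure rates: for a past release with both system test and field data, form the ratio of the two estimated rates and apply it to the test-based estimate of a new release.

```python
# Hedged sketch: calibrating a test-based failure-rate estimate to the field.
# All numbers are invented for illustration only.
lam_test_prev = 0.80    # failures/week estimated from system test (past release)
lam_field_prev = 0.05   # failures/week observed in the field (past release)
calibration = lam_field_prev / lam_test_prev

lam_test_new = 0.60     # test-based estimate for the new release
lam_field_new = calibration * lam_test_new
print(f"predicted field failure rate: {lam_field_new:.3f} failures/week")
```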
7.
In this research, we investigate stopping rules for software testing and propose, from the perspective of software reliability testing, two stopping rules based on the impartial reliability model. The impartial reliability difference (IRD-MP) rule considers the difference between the impartial transition-probability reliabilities estimated for the software developer and for the consumers at their predetermined prior-information levels. The empirical–impartial reliability difference (EIRD-MP) rule suggests stopping a software test when the computed empirical transition reliability tends toward its estimated impartial transition reliability. To ensure the high-standard requirements of safety-critical software, both rules take the maximum probability (MP) of untested paths into account.
8.
One of the most important issues for a development manager is how to predict the reliability of a software system at an arbitrary testing time. In this paper, using software failure-occurrence time data, we discuss a method of software reliability prediction based on software reliability growth models described by a nonhomogeneous Poisson process (NHPP). From the applied software reliability growth models, the conditional probability distribution of the time between software failures is derived, and its mean and median are obtained as reliability prediction measures. Finally, based on several numerical examples, we compare the performance of these measures from the viewpoint of software reliability prediction in the testing phase.
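For an NHPP with mean-value function m(t), the time X to the next failure given a failure at time t has conditional survival function P(X > x | t) = exp(-(m(t + x) - m(t))), and the predictive median solves m(t + x) - m(t) = ln 2. The sketch below evaluates these quantities assuming, for illustration only, a Goel-Okumoto form of m(t) with invented parameters.

```python
# Hedged sketch: predictive quantities for the time to the next failure under
# an NHPP. The Goel-Okumoto mean-value function and parameters are assumed.
# (Under a finite-fault MVF like this one the predictive mean is not finite,
# so only the median and the conditional reliability are computed here.)
import numpy as np
from scipy.optimize import brentq

a, b, t = 100.0, 0.05, 30.0                 # assumed parameters, current time
m = lambda s: a * (1.0 - np.exp(-b * s))    # assumed Goel-Okumoto MVF
R = lambda x: np.exp(-(m(t + x) - m(t)))    # conditional reliability P(X > x | t)

median_ttf = brentq(lambda x: R(x) - 0.5, 1e-9, 1e6)
print(f"P(X > 5 | t=30) = {R(5.0):.3f}, predictive median = {median_ttf:.2f}")
```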
9.
In this paper, we consider a latent Markov process governing the intensity rate of a Poisson process model for software failures. The latent process enables us to infer performance of the debugging operations over time and allows us to deal with the imperfect debugging scenario. We develop the Bayesian inference for the model and also introduce a method to infer the unknown dimension of the Markov process. We illustrate the implementation of our model and the Bayesian approach by using actual software failure data.
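As a hedged illustration of a Poisson process whose intensity is governed by a latent Markov process, the sketch below simulates a two-state Markov-modulated Poisson process; the generator, the regime intensities and the horizon are invented, and the paper's Bayesian inference is not reproduced.

```python
# Hedged sketch: simulate failures from a two-state Markov-modulated Poisson
# process (latent debugging regimes). All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-0.2, 0.2],       # generator of the latent two-state chain
              [ 0.1, -0.1]])
lam = np.array([2.0, 0.3])       # failure intensity in each latent state

def simulate_mmpp(T=100.0, state=0):
    t, failures = 0.0, []
    while t < T:
        dwell = rng.exponential(1.0 / -Q[state, state])   # time in current state
        seg_end = min(t + dwell, T)
        u = t + rng.exponential(1.0 / lam[state])         # Poisson failures within it
        while u < seg_end:
            failures.append(u)
            u += rng.exponential(1.0 / lam[state])
        t = seg_end
        state = 1 - state        # two states: switch to the other one
    return np.array(failures)

print(len(simulate_mmpp()), "simulated failures")
```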
10.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for “imperfect debugging” under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.
11.
This paper proposes an optimum ramp accelerated life test (ALT) of m identical repairable systems using a non-homogeneous power law process (PLP) under the failure-truncated case. An ALT with a linearly increasing stress is a ramp test; in particular, a ramp test with two different linearly increasing stresses is a simple ramp test. The optimum ramp test with different stress rates is formulated by determining the proportions of test systems allocated to each stress rate using the D-optimality criterion, which minimizes the reciprocal of the determinant of the Fisher information matrix of the model parameters. The method developed is illustrated using two stress rates and three stress rates. It is found that obtaining the same estimated expected number of failures takes much longer at the baseline condition than at the stress levels.
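For reference, the D-optimality criterion described above can be written compactly: with theta the model parameters and pi the allocation proportions across stress rates,

```latex
% D-optimal allocation: minimize the reciprocal of the determinant of the
% Fisher information matrix, i.e. maximize its determinant.
\pi^{*} \;=\; \arg\min_{\pi}\,\frac{1}{\det I(\theta;\pi)}
        \;=\; \arg\max_{\pi}\,\det I(\theta;\pi)
```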
12.
To accurately model the software failure process with software reliability growth models, incorporating testing effort has been shown to be important. Testing-effort allocation is also a difficult issue, and it directly affects the software release time when a reliability criterion has to be met. However, with an increasing number of parameters involved in these models, the uncertainty of parameters estimated from the failure data can greatly affect the decision. Hence, it is important to study the impact of these model parameters. In this paper, the sensitivity of the software release time is investigated through various methods, including the one-factor-at-a-time approach, design of experiments and global sensitivity analysis. It is shown that the results from the first two methods may not be accurate enough in the case of a complex nonlinear model; global sensitivity analysis performs better because it considers the global parameter space. The limitations of the different approaches are also discussed. Finally, to avoid further excessive adjustment of the software release time, interval estimation is recommended, and it can be obtained from the results of the global sensitivity analysis.
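In the spirit of the interval-estimation recommendation above, the sketch below propagates assumed parameter uncertainty through a release-time formula. A Goel-Okumoto model is assumed purely for illustration, for which the time needed to reach a target failure intensity lam_target is T = ln(a*b/lam_target)/b; the sampling distributions of a and b are invented, and this is not the paper's global sensitivity analysis itself.

```python
# Hedged sketch: Monte Carlo propagation of parameter uncertainty to the
# software release time, giving an interval estimate. Assumed model and numbers.
import numpy as np

rng = np.random.default_rng(2)
lam_target = 0.1                              # required failure intensity (assumed)

# assumed sampling distributions for the estimated parameters
a = rng.normal(100.0, 10.0, size=20_000)      # fault-content estimate
b = rng.normal(0.05, 0.005, size=20_000)      # detection-rate estimate

T = np.log(a * b / lam_target) / b            # release time under Goel-Okumoto
lo, hi = np.percentile(T, [2.5, 97.5])
print(f"release time: median {np.median(T):.1f}, 95% interval ({lo:.1f}, {hi:.1f})")
```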
13.
A finite-volume scheme for dynamic reliability models
In a model arising in the dynamic reliability study of a system, the probability of the state of the system is completely described by the Chapman–Kolmogorov equations, which are scalar linear hyperbolic partial differential equations coupled by their right-hand side, the solutions of which are probability measures. We propose in this paper a finite-volume scheme to approximate these measures. We show, thanks to the proof of the tightness of the approximate solution, that the conservation of the probability mass leads to a compactness property. The convergence of the scheme is then obtained in the space of continuous functions with respect to the time variable, valued in the set of probability measures on [graphic: see PDF]. We finally show on a numerical example the accuracy and efficiency of the approximation method.
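As a toy stand-in for the kind of equations treated in the paper (not its scheme or analysis), the sketch below applies a first-order upwind finite-volume discretization to two transport equations coupled by jump rates and checks that the total probability mass is conserved. The velocities, jump rates, periodic domain and grid are all invented.

```python
# Hedged sketch: upwind finite-volume discretization of two coupled transport
# equations, a toy analogue of dynamic-reliability Chapman-Kolmogorov equations.
import numpy as np

nx, L, dt, nsteps = 200, 1.0, 0.002, 500
dx = L / nx
v = np.array([0.8, 0.3])             # constant positive velocity in each state
jump = np.array([[0.0, 1.0],         # jump[i, j]: rate of switching i -> j
                 [2.0, 0.0]])

x = (np.arange(nx) + 0.5) * dx
rho = np.zeros((2, nx))
rho[0] = np.exp(-200.0 * (x - 0.3) ** 2)     # initial probability density
rho /= rho.sum() * dx                        # normalize total mass to 1

for _ in range(nsteps):
    new = rho.copy()
    for i in range(2):
        flux = v[i] * rho[i]                            # upwind flux (v > 0)
        new[i] -= dt / dx * (flux - np.roll(flux, 1))   # periodic transport step
    # exchange mass between states (explicit Euler on the coupling terms)
    new[0] += dt * (jump[1, 0] * rho[1] - jump[0, 1] * rho[0])
    new[1] += dt * (jump[0, 1] * rho[0] - jump[1, 0] * rho[1])
    rho = new

print("total probability mass:", rho.sum() * dx)        # stays ~1 by conservation
```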
14.
Relying on reliability growth testing to improve system design is neither usually effective nor efficient. Instead it is important to design in reliability. This requires models to estimate reliability growth in the design that can be used to assess whether goal reliability will be achieved within the target timescale for the design process. Many models have been developed for analysis of reliability growth on test, but there has been much less attention given to reliability growth in design. This paper describes and compares two models: one motivated by the practical engineering process; the other by extending the reasoning of statistical reliability growth modelling. Both models are referenced in the recently revised edition of international standard IEC 61164. However, there has been no reported evaluation of their properties. Therefore, this paper explores the commonalities and differences between these models through an assessment of their logic and their application to an industrial example. Recommendations are given for the use of reliability growth models to aid management of the design process and to inform product development.
15.
Consider a system subject to two modes of failure: maintainable and non-maintainable. A failure rate function is associated with each failure mode. Whenever the system fails, a minimal repair is performed. Preventive maintenance is performed at integer multiples of a fixed period, and the system is replaced when a fixed number of preventive maintenance actions have been completed. The preventive maintenance is imperfect because it reduces the failure rate of the maintainable failures but does not affect the failure rate of the non-maintainable failures. The two failure modes are dependent in the following way: after each preventive maintenance, the failure rate of the maintainable failures depends on the total number of non-maintainable failures since the installation of the system. The problem is to determine the optimal length between successive preventive maintenance actions and the optimal number of such actions before system replacement that minimize the expected cost rate. Optimal preventive maintenance schedules are obtained for non-decreasing failure rates, and numerical examples for power law models are given.
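A simplified sketch of this kind of optimization: the two failure modes are treated here as independent power-law processes, each preventive maintenance is assumed to fully reset the maintainable mode only, and the dependence between modes modeled in the paper is not reproduced. All cost and failure-rate parameters are invented.

```python
# Hedged sketch: grid search for the PM interval T and number N of PM cycles
# before replacement that minimize a simplified long-run expected cost rate.
import numpy as np

c_mr, c_pm, c_rep = 1.0, 5.0, 40.0       # minimal repair, PM, replacement costs
beta_m, eta_m = 3.0, 8.0                  # maintainable mode (power law, reset by PM)
beta_n, eta_n = 2.5, 60.0                 # non-maintainable mode (never reset)

def cost_rate(T, N):
    repairs = N * (T / eta_m) ** beta_m + (N * T / eta_n) ** beta_n
    return (c_mr * repairs + c_pm * (N - 1) + c_rep) / (N * T)

Ts = np.linspace(2.0, 30.0, 281)
Ns = np.arange(1, 41)
grid = np.array([[cost_rate(T, N) for T in Ts] for N in Ns])
iN, iT = np.unravel_index(grid.argmin(), grid.shape)
print(f"optimal T = {Ts[iT]:.1f}, N = {Ns[iN]}, cost rate = {grid.min():.3f}")
```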
16.
P. Zeephongsekul, Journal of Optimization Theory and Applications, 1996, 91(1): 215-233
We present a software release policy based on the Stackelberg strategy solution concept. The model assumes the existence of two types of producers in the market, a leader and a follower. The resulting release policy combines cost factors with a loss-of-opportunity factor arising from competition between the rival producers. We define a Stackelberg strategy pair in the context of our model and, through a series of preliminary results, show that an optimal strategy pair exists. We also present a numerical example that utilizes a software reliability growth model based on the nonhomogeneous Poisson process. Finally, we explore the relative leadership property of the optimal strategies. This work was supported in part by a FOAS Research Grant provided by RMIT. The author would like to thank the referees for constructive suggestions which helped to improve a previous version of this paper.
17.
18.
19.
The model proposed by Trivedi and Shooman [8] is extended and modified by assuming that (1) the error occurrence rate when the machine is running is proportional to the number of errors in the system; and (2) the error correction rate has two components: either an error is corrected, with correction rate μ0, or an error is corrected but a new error is created, with ineffective correction rate μ1. The solution of the differential equations corresponding to the model is obtained in closed form.
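One possible reading of this model as a continuous-time Markov chain is sketched below: while the machine is up with n residual errors, a failure occurs at rate n*lam; while it is down, the correction completes at rate mu0 + mu1 and is effective (n decreases by one) with probability mu0/(mu0 + mu1), otherwise a new error replaces the corrected one. The parameters are invented and the paper's closed-form solution is not reproduced.

```python
# Hedged sketch: simulate one possible CTMC reading of the extended model and
# report the fraction of time the machine is up until debugging ends.
import numpy as np

rng = np.random.default_rng(3)
lam, mu0, mu1 = 0.02, 0.5, 0.1       # invented occurrence and correction rates

def simulate(n0=20):
    t, up_time, n, up = 0.0, 0.0, n0, True
    while n > 0:
        if up:
            dwell = rng.exponential(1.0 / (n * lam))   # failure at rate n*lam
            up_time += dwell
            up = False
        else:
            dwell = rng.exponential(1.0 / (mu0 + mu1)) # correction completes
            if rng.random() < mu0 / (mu0 + mu1):
                n -= 1               # effective correction removes one error
            # otherwise a new error was created: error count unchanged
            up = True
        t += dwell
    return up_time / t

print(f"fraction of time up: {simulate():.3f}")
```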
20.
The Musa-Okumoto model and the inverse linear model are important models in software reliability research. For grouped data, the maximum likelihood estimates of the parameters of the M-O model and the inverse linear model are derived, together with sufficient conditions for their existence; an error in [4] is pointed out, and a worked example is given.
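As a hedged sketch of grouped-data maximum likelihood for the Musa-Okumoto (logarithmic Poisson) model, with mean-value function m(t) = ln(1 + lam0*theta*t)/theta, the code below fits invented interval counts by numerically maximizing the grouped NHPP log-likelihood; the existence conditions discussed in the paper are not checked.

```python
# Hedged sketch: MLE of the Musa-Okumoto model from grouped failure counts.
# The data are invented for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

t = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # interval end points
n = np.array([25, 14, 9, 7, 5])                # failures observed per interval

def neg_loglik(params):
    lam0, theta = np.exp(params)               # keep both parameters positive
    m = np.log1p(lam0 * theta * t) / theta     # mean-value function at the t_i
    dm = np.diff(np.concatenate(([0.0], m)))   # expected count per interval
    return -(np.sum(n * np.log(dm) - gammaln(n + 1)) - m[-1])

res = minimize(neg_loglik, x0=np.log([3.0, 0.1]), method="Nelder-Mead")
lam0_hat, theta_hat = np.exp(res.x)
print(f"lam0 = {lam0_hat:.3f}, theta = {theta_hat:.4f}")
```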