Similar Documents
A total of 20 similar documents were found (search time: 890 ms).
1.
Considerable importance is attached to the testing phase of the Software Development Life Cycle (SDLC): it is during this phase that the software product is checked against user requirements and any discrepancies identified are removed. Testing, however, needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify mathematical relationships between the failure phenomenon and time, have proved useful here, and SRGMs that include factors affecting the failure process are more realistic. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used, and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper that is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered in the reliability modeling, and the time lag between fault identification and removal is also captured. The applicability of the model is shown by validating it on software failure data sets obtained from different real software development projects, and comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean of Squared Errors (MSE), etc. are presented.
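The paper's model adds testing-effort and time-lag terms that are not reconstructed here; as a minimal sketch of the underlying NHPP fitting workflow, the snippet below fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to hypothetical cumulative failure counts and reports the MSE and a Gaussian-error AIC (all data values and starting guesses are made up).

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative failure counts observed at the end of each test week.
t = np.arange(1, 13, dtype=float)
n = np.array([12, 21, 28, 34, 39, 43, 46, 48, 50, 51, 52, 53], dtype=float)

def goel_okumoto(t, a, b):
    """NHPP mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, n, p0=[60.0, 0.1])
resid = n - goel_okumoto(t, a_hat, b_hat)
mse = np.mean(resid ** 2)
k = 2                                   # number of fitted parameters
aic = len(n) * np.log(mse) + 2 * k      # Gaussian-error AIC
print(f"a = {a_hat:.1f} total faults, b = {b_hat:.3f}, MSE = {mse:.2f}, AIC = {aic:.2f}")
```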

2.
We propose a software reliability model which assumes that there are two types of software failures: the first is caused by faults latent in the system before testing; the second by faults regenerated randomly during the testing phase. The former and latter failure-occurrence phenomena are described by a geometrically decreasing and a constant hazard rate, respectively. Further, the model describes an imperfect debugging environment in which the fault-correction activity corresponding to each software failure is not always performed perfectly. Defining a random variable representing the cumulative number of faults successfully corrected up to a specified time point, we use a Markov process to formulate this model. Several quantitative measures for software reliability assessment are derived from it. Finally, numerical examples of software reliability analysis based on actual testing data are presented.
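As an illustration only (the parameter values are invented and the paper's Markov formulation is richer), the sketch below simulates the two failure sources described above: a geometrically decreasing hazard for latent faults, a constant hazard for regenerated faults, and a correction attempt that succeeds with probability p.

```python
import random

# Illustrative parameters (not from the paper): initial hazard D, geometric
# ratio k for latent faults, constant hazard theta for regenerated faults,
# and probability p that a correction attempt succeeds (imperfect debugging).
D, k, theta, p = 0.5, 0.8, 0.05, 0.9

def simulate(horizon=200.0, seed=1):
    rng = random.Random(seed)
    t, corrected = 0.0, 0
    while t < horizon:
        rate = D * (k ** corrected) + theta   # total failure intensity
        t += rng.expovariate(rate)            # time to next failure
        if t >= horizon:
            break
        if rng.random() < p:                  # correction succeeds w.p. p
            corrected += 1
    return corrected

print("faults corrected in one run:", simulate())
```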

3.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for “imperfect debugging” under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.
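The multiplicative, stage-evolving model of the paper is not reproduced here; as a minimal sketch of Bayesian inference on inter-failure data, the snippet below computes a grid posterior for a simple Jelinski-Moranda-type model in which the i-th inter-failure time is exponential with rate phi * (N - i + 1). The data values, grid ranges and flat priors are all illustrative assumptions.

```python
import numpy as np

# Hypothetical inter-failure times; under a Jelinski-Moranda-style model the
# i-th time is Exponential with rate phi * (N - i + 1).
x = np.array([0.5, 0.8, 1.1, 1.9, 2.4, 3.6, 4.8, 7.0])

Ns = np.arange(len(x), len(x) + 50)      # grid over the initial fault count N
phis = np.linspace(0.001, 0.5, 400)      # grid over per-fault detection rate phi

log_post = np.full((len(Ns), len(phis)), -np.inf)
for i, N in enumerate(Ns):
    rates = phis[None, :] * (N - np.arange(len(x))[:, None])       # (n, n_phi)
    log_post[i] = np.sum(np.log(rates) - rates * x[:, None], axis=0)  # flat priors

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("E[N | data]   =", (post.sum(axis=1) * Ns).sum())
print("E[phi | data] =", (post.sum(axis=0) * phis).sum())
```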

4.
Since the late seventies, various software reliability growth models (SRGMs) have been developed to estimate different measures related to software quality, such as the number of remaining faults, software failure rate, reliability, cost, release time, etc. Most of the existing SRGMs are probabilistic and have been developed under various assumptions. The entire software development process is performed by human beings, and software can be executed in different environments. As human behavior is fuzzy and the environment is changing, the concepts of fuzzy set theory are applicable to developing software reliability models. In this paper, two fuzzy time series based software reliability models are proposed: the first predicts the time between failures (TBF) of software and the second predicts the number of errors present in the software. Both models are developed by treating the software failure data as a linguistic variable. The usefulness of the models is demonstrated using real failure data.
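The paper's two models are not specified in the abstract; as a rough sketch of the general approach, the snippet below applies a Chen-style fuzzy time series forecast to hypothetical times between failures: partition the universe of discourse, fuzzify the series, build fuzzy logical relationship groups, and defuzzify by averaging interval midpoints.

```python
import numpy as np

# Hypothetical times between failures (hours), treated as a linguistic variable.
tbf = np.array([12, 15, 20, 18, 25, 30, 28, 35, 40, 38])

# 1. Partition the universe of discourse into m equal intervals A_0..A_{m-1}.
lo, hi, m = tbf.min() - 2, tbf.max() + 2, 5
edges = np.linspace(lo, hi, m + 1)
mids = (edges[:-1] + edges[1:]) / 2

def fuzzify(v):                      # index of the interval with max membership
    return min(np.searchsorted(edges, v, side="right") - 1, m - 1)

labels = [fuzzify(v) for v in tbf]

# 2. Build fuzzy logical relationship groups A_i -> {A_j, ...} (Chen's method).
groups = {}
for a, b in zip(labels, labels[1:]):
    groups.setdefault(a, set()).add(b)

# 3. Forecast: defuzzify as the mean midpoint of the consequent group.
last = labels[-1]
forecast = np.mean([mids[j] for j in sorted(groups.get(last, {last}))])
print(f"next TBF forecast ~ {forecast:.1f} hours")
```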

5.
6.
In this paper, we propose a testing-coverage software reliability model based on a non-homogeneous Poisson process (NHPP) that considers not only imperfect debugging (ID) but also the uncertainty of operating environments. Software is usually tested in a given controlled environment, but it may be used by different users in operating environments that are unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate software reliability measures, but a common underlying assumption of most of these models is that the operating environment is the same as the development environment. In fact, the operating environment may influence reliability and performance considerably and unpredictably, so when a software system works in the field, its reliability usually differs from the theoretical reliability, and also from that of similar applications in other fields. A new model is therefore proposed in which the fault detection rate is based on testing coverage and covers ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real software failure data using seven criteria. An improved normalized criteria distance (NCD) method is also used to rank and select the best model in the context of a set of goodness-of-fit criteria taken all together. All results demonstrate that the new model gives significantly improved goodness of fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements, and its sensitivity analysis, are discussed.
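The improved NCD method itself is not reproduced here; the sketch below illustrates a plain normalized criteria distance ranking over made-up criterion values (smaller is assumed better for every criterion), which is the basic idea behind selecting a best-fitting model across several goodness-of-fit criteria taken together.

```python
import numpy as np

# Hypothetical criterion values (rows: candidate SRGMs; cols: e.g. MSE, AIC, PRR).
names = ["GO", "delayed S-shaped", "proposed"]
C = np.array([[14.2, 95.1, 0.31],
              [11.8, 92.4, 0.27],
              [ 9.6, 90.2, 0.22]])

# Normalized criteria distance: scale each column to [0, 1], then take the
# Euclidean distance of each model's criteria vector from the origin.
scaled = (C - C.min(axis=0)) / (C.max(axis=0) - C.min(axis=0))
ncd = np.sqrt((scaled ** 2).sum(axis=1))
for n, d in sorted(zip(names, ncd), key=lambda p: p[1]):
    print(f"{n:18s} NCD = {d:.3f}")
```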

7.
In this paper, we propose a non-Gaussian state space model for software reliability. The model assumes an exponential distribution for the failure time in every test-debugging stage, conditional on the state parameter, namely the number of faults in the program. It generalizes the Jelinski-Moranda (JM) model and can be applied to imperfect debugging situations as well as to evolving programs. By examining a set of data on evolving program failures, the usefulness of the evolving-program model is demonstrated.

8.
Stochastic Analysis and Applications, 2013, 31(4): 849-864

This paper considers a Markovian imperfect software debugging model incorporating two types of faults and derives several measures, including the first passage time distribution. When the debugging process following a failure is completed, the fault which caused the failure is either removed from the fault content with probability p or remains in the system with probability 1 − p. By defining the transition probabilities for the debugging process, we derive the distribution of the first passage time to a prespecified number of fault removals and evaluate the expected numbers of perfect debuggings and debugging completions up to a specified time. The availability function of a software system, i.e. the probability that the software is in a working state at a given time, is also derived, and thus the availability and working probability of the software system are obtained. Throughout the paper, the length of debugging time is treated as random and a distribution is assumed for it. Numerical examples are provided for illustrative purposes.
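As a hedged illustration of the first passage time quantity (the distributions and parameter values below are invented, not the paper's), the following Monte Carlo sketch estimates the expected time until a prespecified number of perfect debuggings has occurred when each debugging succeeds with probability p.

```python
import random

def first_passage_time(m, p=0.9, mean_debug=2.0, mean_up=10.0, seed=None):
    """Simulated time until m faults have been perfectly removed.

    Debugging succeeds with probability p; up times and debugging times are
    exponential with the given (illustrative) means.
    """
    rng = random.Random(seed)
    t, removed = 0.0, 0
    while removed < m:
        t += rng.expovariate(1.0 / mean_up)     # working until next failure
        t += rng.expovariate(1.0 / mean_debug)  # random-length debugging
        if rng.random() < p:                    # perfect debugging w.p. p
            removed += 1
    return t

samples = [first_passage_time(5, seed=s) for s in range(2000)]
print("mean first passage time to 5 removals:", sum(samples) / len(samples))
```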

9.
One of the challenging problems for software companies is to find the optimal release time of the software so as to minimize the total cost expended on testing and the potential penalty cost due to unresolved faults. If the software is for a safety-critical system, the release time becomes even more important, and the criticality of a failure caused by a fault also becomes an important issue. In this paper we develop a total cost model based on the criticality of a fault and the cost of its occurrence during different phases of development for the N-version programming scheme, a popular fault-tolerant architecture. The mathematical model is developed using a reliability growth model based on the non-homogeneous Poisson process. Models for the optimal release time under different constraints are developed under the assumption that debugging is imperfect and there is a penalty for late release of the software. The concept of Failure Mode Effects and Criticality Analysis is used for measuring criticality.
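The N-version cost model with criticality weights is not reproduced here; the snippet below sketches the classic NHPP release-time trade-off it builds on: per-fault testing cost plus field penalty cost plus testing-time cost, minimized over the release time T (all cost rates and NHPP parameters are illustrative).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Goel-Okumoto mean value function with illustrative parameters.
a, b = 120.0, 0.08
m = lambda t: a * (1.0 - np.exp(-b * t))

# Illustrative cost rates (not from the paper): c1 per fault fixed in testing,
# c2 per fault escaping to the field (penalty), c3 per unit of testing time.
c1, c2, c3 = 1.0, 8.0, 0.4
cost = lambda t: c1 * m(t) + c2 * (a - m(t)) + c3 * t

res = minimize_scalar(cost, bounds=(0.0, 365.0), method="bounded")
print(f"optimal release time T* = {res.x:.1f}, expected cost = {res.fun:.1f}")
```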

10.
Thanks to their nice mathematical properties, state space models have been widely used, especially for forecasting, and over the last decades the study of tracking software reliability by statistical models has attracted much attention. However, most models assume perfect debugging, although in practice imperfect debugging arises everywhere. In this paper, a non-Gaussian state space model is modified to predict software failure times under imperfect debugging. The model is flexible: its system equation can be modified to suit various situations. It is well suited to tracking software reliability and is applied to two well-known datasets on software failures.

11.
Software failures have become the major factor that brings the system down or causes a degradation in the quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive nature by which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user-perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non-homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, use of the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data becomes available.
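A minimal sketch of the calibration idea, with invented failure rates: for past releases that have both system test and field data, estimate the ratio of field to test failure rate, and use its average to map a new release's test-based estimate into a field prediction.

```python
import numpy as np

# Hypothetical failure rates (failures per 1000 execution hours) estimated
# from NHPP fits, for past releases that have both test and field data.
test_rates  = np.array([4.0, 3.2, 5.1])
field_rates = np.array([0.9, 0.8, 1.2])

# Calibration factor: average ratio of field to test failure rate.
K = np.mean(field_rates / test_rates)

new_release_test_rate = 4.5          # current release, system test data only
print(f"K = {K:.2f}; predicted field rate = {K * new_release_test_rate:.2f}")
```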

12.
Obtaining accurate models of systems which are prone to failures and breakdowns is a difficult task. In this paper we present a methodology which makes the task of modeling failure-prone discrete event systems (DESs) considerably less cumbersome, less error-prone, and more user-friendly. Obtaining the commonly used automata models for DESs is non-trivial for most practical systems, because the number of states in such models is exponential in the number of signals and faults. In contrast, a model of a discrete event system in the rules-based modeling formalism proposed by the co-authors of this paper is of size polynomial in the number of signals and faults. In order to model failures, we augment the signal set of the rules-based formalism to include binary-valued fault signals, the values representing either a non-faulty or a faulty state of a certain failure type. Adding new fault signals requires introducing new rules for the added fault-signal events, and also modifying the existing rules for non-fault events. The rules-based modeling formalism is further extended to model real-time systems, and we apply it to model delay faults of the system as well. The model of a failure-prone DES in the rules-based formalism can automatically be converted into an equivalent (timed) automaton model for failure analysis in the automaton model framework.

13.
Commonly used repair rate models for repairable systems in the reliability literature are renewal processes, generalised renewal processes or non-homogeneous Poisson processes. In addition to these models, geometric processes (GP) are studied occasionally. The GP, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensities. This paper deals with the reliability modelling of failure processes for repairable systems where the failure intensity shows a bathtub-type non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced. Reliability indices are presented, and the parameters of the new process are estimated. Experimental results on a data set demonstrate the validity of the new process.
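The extended Poisson process itself is not reconstructed here; as a sketch of the bathtub-type intensity it is designed to capture, the snippet below superposes a decreasing power-law term, a constant term and an increasing power-law term (all parameter values are illustrative).

```python
import numpy as np

# A bathtub-shaped failure intensity sketched as the superposition of a
# decreasing and an increasing power-law term plus a constant. This is an
# illustrative construction, not the paper's extended Poisson process.
def intensity(t, a1=0.6, b1=0.5, c=0.05, a2=1e-4, b2=3.0):
    return a1 * b1 * t ** (b1 - 1) + c + a2 * b2 * t ** (b2 - 1)

t = np.linspace(0.1, 50, 500)
lam = intensity(t)
print("bottom of the bathtub at t =", t[lam.argmin()])
```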

14.
The nonhomogeneous Poisson process (NHPP) is a commonly used stochastic model for describing the pattern of repeated occurrences of certain events or conditions. The inhomogeneous gamma process generalizes the NHPP: the observed failure epochs correspond to every κ-th event of the underlying Poisson process, κ being an unknown parameter to be estimated from the data. This article focuses on a special class of inhomogeneous gamma processes, the modulated power law process (MPLP), which assumes the Weibull form of the intensity function. The traditional power law process is a popular stochastic formulation of certain empirical relationships between the time to failure and the cumulative number of failures, often observed in industrial experiments. The MPLP retains this underlying physical basis and provides a more flexible modeling environment, potentially leading to a better fit to the failure data at hand. In this paper, we investigate inference issues related to the MPLP. The maximum likelihood estimators (MLEs) of the model parameters are not in closed form and enjoy the curious property of being asymptotically normal with a singular variance-covariance matrix; consequently, deriving the large-sample results requires non-standard modifications of the usual arguments. We also propose a set of simple closed-form estimators that are asymptotically equivalent to the MLEs. Extensive simulations are carried out to supplement the theoretical findings. Finally, we apply our inference results to a failure dataset arising from a repairable system.
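A minimal simulation sketch of the MPLP (parameter values are illustrative): generate a unit-rate homogeneous Poisson process, time-transform it through the inverse of the Weibull cumulative intensity L(t) = (t/theta)^beta, and keep every kappa-th event.

```python
import numpy as np

rng = np.random.default_rng(0)

def mplp_failures(n, beta=1.5, theta=10.0, kappa=3):
    """Simulate n MPLP failure epochs: every kappa-th event of an NHPP with
    power-law cumulative intensity L(t) = (t / theta) ** beta."""
    s = np.cumsum(rng.exponential(1.0, size=n * kappa))  # unit-rate HPP
    t = theta * s ** (1.0 / beta)                        # invert L(t) = s
    return t[kappa - 1 :: kappa]                         # every kappa-th event

print(mplp_failures(5).round(2))
```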

15.
In this paper, we consider the error detection phenomenon in the testing phase, when modifications or improvements can be made to the software. The occurrence of improvements is described by a homogeneous Poisson process with intensity rate λ. The error detection phenomenon is assumed to follow a nonhomogeneous Poisson process (NHPP) with mean value function m(t). Two models are presented, and in one of them we discuss an optimal release policy for the software taking into account the occurrences of errors and improvements. Finally, we consider the possibility of an improvement removing k errors with probability p_k, k ≥ 0, and develop an NHPP model for the error detection phenomenon in this situation.
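As an illustration with invented parameters (the p_k distribution is not specified in the abstract, so a Poisson choice is assumed here), the sketch below simulates improvements arriving as an HPP with rate λ and counts the errors they remove over a horizon.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch: improvements arrive as an HPP with rate lam; each
# improvement removes k errors with probability p_k (assumed k ~ Poisson(mu)).
lam, mu, horizon = 0.2, 1.5, 100.0

n_improvements = rng.poisson(lam * horizon)
removed = rng.poisson(mu, size=n_improvements).sum()
print(f"{n_improvements} improvements removed {removed} errors in total")
```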

16.
To accurately model the software failure process with software reliability growth models, incorporating testing effort has been shown to be important. Testing effort allocation is also a difficult issue, and it directly affects the software release time when a reliability criterion has to be met. However, with an increasing number of parameters involved in these models, the uncertainty of parameters estimated from the failure data can greatly affect the decision, so it is important to study the impact of these model parameters. In this paper, the sensitivity of the software release time is investigated through various methods, including the one-factor-at-a-time approach, design of experiments and global sensitivity analysis. It is shown that the results from the first two methods may not be accurate enough for complex nonlinear models; global sensitivity analysis performs better because it considers the global parameter space. The limitations of the different approaches are also discussed. Finally, to avoid excessive further adjustment of the software release time, interval estimation is recommended; it can be obtained from the results of the global sensitivity analysis.
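To make the one-factor-at-a-time idea concrete, the sketch below perturbs each parameter of a closed-form release-time formula by ±10%. The formula is the optimum of the illustrative Goel-Okumoto cost model C(t) = c1*m(t) + c2*(a - m(t)) + c3*t (the same toy model sketched for item 9 above), not the paper's model, and all values are made up.

```python
import numpy as np

# Closed-form optimal release time for the illustrative Goel-Okumoto cost
# model: setting C'(t) = 0 gives T* = (1/b) * ln(a * b * (c2 - c1) / c3).
def t_star(a, b, c1=1.0, c2=8.0, c3=0.4):
    return np.log(a * b * (c2 - c1) / c3) / b

base = dict(a=120.0, b=0.08)
print("baseline T* =", round(t_star(**base), 1))

# One-factor-at-a-time: perturb each parameter +/-10% around the baseline.
for name in base:
    for f in (0.9, 1.1):
        p = dict(base, **{name: base[name] * f})
        print(f"{name} x {f}: T* = {t_star(**p):.1f}")
```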

17.
Bayesian inference for the power law process
The power law process has been used to model reliability growth, software reliability and the failure times of repairable systems. This article reviews and further develops Bayesian inference for such a process. The Bayesian approach provides a unified methodology for dealing with both time and failure truncated data. As well as looking at the posterior densities of the parameters of the power law process, inference for the expected number of failures and the probability of no failures in some given time interval is discussed. Aspects of the prediction problem are examined. The results are illustrated with two data examples.
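A grid-posterior sketch for the failure-truncated power law process, assuming flat priors over the grid and hypothetical event times; the article's fuller Bayesian treatment (predictive quantities, time-truncated data) is not reproduced.

```python
import numpy as np

# Failure-truncated event times of a repairable system (hypothetical).
t = np.array([3.1, 7.9, 12.4, 20.5, 25.2, 33.8])
T = t[-1]                              # observation ends at the last failure

betas = np.linspace(0.2, 3.0, 300)
thetas = np.linspace(0.5, 30.0, 300)
B, Th = np.meshgrid(betas, thetas, indexing="ij")

# PLP: lambda(t) = (beta/theta) * (t/theta)**(beta-1), Lambda(t) = (t/theta)**beta.
loglik = (len(t) * (np.log(B) - B * np.log(Th))
          + (B - 1) * np.log(t).sum()
          - (T / Th) ** B)

post = np.exp(loglik - loglik.max())   # flat priors over the grid
post /= post.sum()
print("posterior mean beta  =", (post * B).sum())
print("posterior mean theta =", (post * Th).sum())
```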

18.
刘云, 田斌, 赵玮. 《运筹学学报》, 2005, 9(3): 49-55
The optimal release management of software is a key problem in software reliability research. Most existing optimal software release models assume that the debugging process is perfect and that no new faults are introduced during debugging, an assumption that is unreasonable in many situations. This paper proposes a new optimal software release management model that accounts for imperfect debugging, for the possible introduction of new faults during debugging, and for the increase in the probability of perfect debugging as debugging experience accumulates. A solution of the model is also given.

19.
20.
The k-out-of-n model is commonly used in reliability theory. In this model the failure of any component of the system does not influence the components still working. Sequential k-out-of-n systems have been introduced as an extension in which the failure of some component may influence the remaining ones. We consider nonparametric estimation of the cumulative hazard function, the reliability function and the quantile function of sequential k-out-of-n systems. Furthermore, nonparametric hypothesis testing for sequential k-out-of-n systems is examined. We make use of counting processes to show strong consistency and weak convergence of the estimators and to derive the asymptotic distribution of the test statistics.
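The sequential k-out-of-n estimators generalize classical counting-process estimators; as a minimal sketch of the simplest ingredient, the snippet below computes the Nelson-Aalen cumulative hazard estimate from hypothetical ordered component failure times (no sequential load-sharing adjustment is included).

```python
import numpy as np

# Nelson-Aalen estimate of the cumulative hazard from (hypothetical) ordered
# component failure times of an n-component system observed to exhaustion.
times = np.array([1.2, 2.7, 3.1, 4.6, 6.0, 8.8])
n = len(times)

at_risk = n - np.arange(n)             # components still working before each failure
H = np.cumsum(1.0 / at_risk)           # cumulative hazard jumps by 1 / (at risk)
for t_i, h in zip(times, H):
    print(f"H({t_i:4.1f}) = {h:.3f}")
```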

