Similar Documents
20 similar documents found (search time: 62 ms)
1.
Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. We have noticed that one of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa. FRF is generally defined as the ratio of net fault reduction to failures experienced. During the software testing process, FRF could be influenced by many environmental factors, such as imperfect debugging, debugging time lag, etc. Thus, in this paper, we first analyze some real data to observe the trends of FRF, and consider FRF to be a time-variable function. We further study how to integrate time-variable FRF into software reliability growth modeling. Some experimental results show that the proposed models can improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of FRF may affect the release time as well as the development cost.
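The effect of a time-variable FRF can be sketched numerically. The following is an illustrative sketch only, not taken from the paper: it integrates the mean-failure equation dμ/dt = φ(N − B(t)·μ) by Euler's method, where B(t) is the (possibly time-varying) fault reduction factor; all parameter values and the decaying form of B(t) are assumptions.

```python
import math

def mean_failures(N, phi, frf, t_end, dt=0.01):
    """Euler integration of d(mu)/dt = phi * (N - B(t) * mu), where
    B(t) is the fault reduction factor (constant or time-varying)."""
    mu, t = 0.0, 0.0
    while t < t_end:
        mu += dt * phi * (N - frf(t) * mu)
        t += dt
    return mu

# constant FRF vs. an (assumed) slowly decaying FRF
const_B = mean_failures(100, 0.05, lambda t: 0.95, t_end=50)
decay_B = mean_failures(100, 0.05, lambda t: 0.95 * math.exp(-0.01 * t), t_end=50)
```

A smaller FRF means more failures must be experienced per net fault removed, so the decaying-FRF run accumulates more expected failures by the same time, which is why the value of FRF feeds through to release time and cost.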

2.
A lot of development resources are consumed during the software testing phase, which fundamentally consists of module testing, integration testing, and system testing. It is therefore of great importance for a manager to decide how to effectively spend testing-resources on software testing to develop quality, reliable software. In this paper, we consider two kinds of software testing-resource allocation problems to make the best use of the specified testing-resources during module testing. Also, we introduce a software reliability growth model, based on a nonhomogeneous Poisson process, for describing the time-dependent behavior of detected software faults and the testing-resource expenditures spent during testing. It is shown that the optimal allocation of testing-resources among software modules can improve software reliability.
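For per-module exponential NHPPs, m_i(w) = a_i(1 − exp(−b_i·w)), the classical allocation that maximizes total detected faults under a fixed budget W equalizes the marginal detection rates across modules. A hedged sketch (the module parameters and the bisection approach are illustrative assumptions, not taken from the paper):

```python
import math

def allocate(modules, W, iters=200):
    """Testing-resource allocation for per-module exponential NHPPs
    m_i(w) = a_i * (1 - exp(-b_i * w)).  Bisect on the Lagrange multiplier
    lam so that the marginal rates a_i*b_i*exp(-b_i*w_i) are equalised
    and the allocations sum to the budget W."""
    lo, hi = 1e-12, max(a * b for a, b in modules)
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        total = sum(max(0.0, math.log(a * b / lam) / b) for a, b in modules)
        if total > W:
            lo = lam  # allocations too large -> raise the multiplier
        else:
            hi = lam
    return [max(0.0, math.log(a * b / lam) / b) for a, b in modules]
```

Modules whose marginal return never reaches the multiplier receive zero effort, so low-yield modules are naturally dropped from the allocation.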

3.
In this research, we investigate stopping rules for software testing and propose two stopping rules from the perspective of software reliability testing, based on the impartial reliability model. The impartial reliability difference (IRD-MP) rule considers the difference between the impartial transition-probability reliabilities estimated for both the software developer and consumers at their predetermined prior information levels. The empirical-impartial reliability difference (EIRD-MP) rule suggests stopping a software test when the computed empirical transition reliability tends to its estimated impartial transition reliability. To ensure the high-standard requirement for safety-critical software, both rules take the maximum probability (MP) of untested paths into account.

4.
To accurately model the software failure process with software reliability growth models, incorporating testing effort has been shown to be important. In fact, testing-effort allocation is also a difficult issue, and it directly affects the software release time when a reliability criterion has to be met. However, with an increasing number of parameters involved in these models, the uncertainty of parameters estimated from the failure data can greatly affect the decision. Hence, it is important to study the impact of these model parameters. In this paper, the sensitivity of the software release time is investigated through various methods, including the one-factor-at-a-time approach, design of experiments and global sensitivity analysis. It is shown that the results from the first two methods may not be accurate enough for complex nonlinear models. Global sensitivity analysis performs better due to its consideration of the global parameter space. The limitations of the different approaches are also discussed. Finally, to avoid further excessive adjustment of the software release time, interval estimation is recommended, and it can be obtained from the results of global sensitivity analysis.
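For the common Goel-Okumoto cost model the release time has a closed form, which makes one-factor-at-a-time sensitivity easy to illustrate. A sketch under assumed cost parameters (not the paper's models or data):

```python
import math

def release_time(a, b, c1, c2, c3):
    """Cost-optimal release time for the Goel-Okumoto model m(t) = a*(1 - exp(-b*t))
    with cost C(T) = c1*m(T) + c2*(a - m(T)) + c3*T, where c2 > c1.
    Setting dC/dT = 0 gives T* = ln(a*b*(c2 - c1)/c3) / b."""
    return math.log(a * b * (c2 - c1) / c3) / b

T_base = release_time(a=120, b=0.10, c1=1.0, c2=5.0, c3=0.5)
# one-factor-at-a-time: perturb the detection rate b by +/-10%
T_hi = release_time(a=120, b=0.11, c1=1.0, c2=5.0, c3=0.5)
T_lo = release_time(a=120, b=0.09, c1=1.0, c2=5.0, c3=0.5)
```

The responses to +10% and -10% changes in b differ in magnitude; this asymmetry is exactly the nonlinearity that makes one-factor-at-a-time results unreliable and motivates global sensitivity analysis.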

5.
A lot of importance has been attached to the testing phase of the Software Development Life Cycle (SDLC). It is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. But testing needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify mathematical relationships between the failure phenomenon and time, have proved useful here, and SRGMs that include factors affecting the failure process are more realistic. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used, and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper that is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered for software reliability modeling, and the time lag between fault identification and removal is also depicted. The applicability of our model is shown by validating it on software failure data sets obtained from different real software development projects. Comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean of Squared Errors (MSE), etc. are presented.
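A detection-removal time lag of the kind mentioned above is often modeled by evaluating the detection mean value function at a delayed time. A minimal sketch, assuming an exponential NHPP and a fixed lag (both are illustrative choices, not the paper's exact formulation):

```python
import math

def detected(t, a=100.0, b=0.3):
    """Mean number of faults detected by time t (exponential NHPP, assumed parameters)."""
    return a * (1.0 - math.exp(-b * t)) if t > 0 else 0.0

def removed(t, lag=2.0):
    """Fault removal trails detection by a fixed debugging delay."""
    return detected(t - lag)
```

At any time t, the gap detected(t) − removed(t) is the backlog of identified-but-unremoved faults that the lag creates.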

6.
The binomial software reliability growth model (SRGM) contains most existing SRGMs proposed in earlier work as special cases, and can describe every software failure-occurrence pattern in continuous time. In this paper, we propose generalized binomial SRGMs in both continuous and discrete time, based on the idea of cumulative Bernoulli trials. It is shown that the proposed models give some new unusual discrete models as well as the well-known continuous SRGMs. Through numerical examples with actual software failure data, two estimation methods for model parameters with grouped data are provided, and the predictive model performance is examined quantitatively.

7.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for “imperfect debugging” under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.

8.
Software reliability estimation based on a new model
This paper presents a new software reliability model obtained by modifying the JM (Jelinski-Moranda) model, and gives a point estimate and confidence limits for software reliability. Analysis of real data shows that the new model has better predictive ability than the JM model.

9.
Since the late seventies, various software reliability growth models (SRGMs) have been developed to estimate different measures related to software quality, such as the number of remaining faults, the software failure rate, reliability, cost and release time. Most of the existing SRGMs are probabilistic and have been developed under various assumptions. The entire software development process is performed by human beings, and software can be executed in different environments. As human behavior is fuzzy and the environment is changing, fuzzy set theory is applicable to developing software reliability models. In this paper, two fuzzy time series based software reliability models are proposed. The first predicts the time between failures (TBFs) of software, and the second predicts the number of errors present in the software. Both models treat the software failure data as a linguistic variable. The usefulness of the models is demonstrated using real failure data.

10.
Relying on reliability growth testing to improve system design is neither usually effective nor efficient. Instead it is important to design in reliability. This requires models to estimate reliability growth in the design that can be used to assess whether goal reliability will be achieved within the target timescale for the design process. Many models have been developed for analysis of reliability growth on test, but there has been much less attention given to reliability growth in design. This paper describes and compares two models: one motivated by the practical engineering process; the other by extending the reasoning of statistical reliability growth modelling. Both models are referenced in the recently revised edition of international standard IEC 61164. However, there has been no reported evaluation of their properties. Therefore, this paper explores the commonalities and differences between these models through an assessment of their logic and their application to an industrial example. Recommendations are given for the use of reliability growth models to aid management of the design process and to inform product development.

11.
12.
Software reliability is a rapidly developing discipline. In this paper we model the fault-detecting processes by Markov processes with decreasing jump intensity. The intensity function is suggested to be a power function of the number of the remaining faults in the software. The models generalize the software reliability model suggested by Jelinski and Moranda (‘Software reliability research’, in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972. pp. 465–497). The main advantage of our models is that we do not use the assumption that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation. Statistical tests also indicate that this may be the case. Numerical results show that the estimation of the expected time to next failure is both reasonable and decreases relatively stably when the number of removed faults is increased.
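The power-function intensity described above can be sketched as λ_i = φ(N − i)^α after i fault removals, with α = 1 recovering Jelinski-Moranda and α = 2 the second-order case. The parameter values below are illustrative assumptions:

```python
def mean_times_to_next_failure(N, phi, alpha):
    """Mean time to the next failure, 1/lambda_i, after i faults have been
    removed, with jump intensity lambda_i = phi * (N - i)**alpha.
    alpha = 1 recovers the Jelinski-Moranda model."""
    return [1.0 / (phi * (N - i) ** alpha) for i in range(N)]

jm = mean_times_to_next_failure(10, 0.1, 1)         # Jelinski-Moranda
quadratic = mean_times_to_next_failure(10, 0.1, 2)  # second-order power function
```

In both cases the mean time to the next failure grows as faults are removed, but the higher-order intensity makes early failures arrive much faster while the last remaining fault behaves identically.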

13.
In this paper, we propose a testing-coverage software reliability model that considers not only imperfect debugging (ID) but also the uncertainty of operating environments, based on a non-homogeneous Poisson process (NHPP). Software is usually tested in a given controlled environment, but it may be used by different users in different operating environments that are unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate software reliability measures, but most share the underlying assumption that the operating environment is the same as the development environment. In fact, the uncertainty of the operating environments may considerably influence the reliability and performance of the software in unpredictable ways. So when a software system works in a field environment, its reliability usually differs from the theoretical reliability, and also from that of similar applications in other fields. In this paper, a new model is proposed that bases the fault detection rate on testing coverage and covers ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real software failure data and seven criteria. An improved normalized criteria distance (NCD) method is also used to rank and select the best model in the context of a set of goodness-of-fit criteria taken together. All results demonstrate that the new model gives significantly improved goodness of fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements, and its sensitivity analysis, are discussed.
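Goodness-of-fit comparison between candidate SRGMs typically computes criteria such as MSE against observed cumulative fault counts. A minimal sketch with hypothetical data and two hypothetical fitted models (none of this is from the paper's three data sets):

```python
import math

def mse(mvf, times, cum_faults):
    """Mean squared error between a fitted mean value function and observed
    cumulative fault counts -- one common goodness-of-fit criterion."""
    return sum((mvf(t) - y) ** 2 for t, y in zip(times, cum_faults)) / len(times)

# hypothetical observations and two hypothetical fitted candidates
times = [1, 2, 3, 4, 5]
faults = [12, 20, 26, 30, 33]
go = lambda t: 40.0 * (1.0 - math.exp(-0.35 * t))                    # Goel-Okumoto
dss = lambda t: 40.0 * (1.0 - (1.0 + 0.8 * t) * math.exp(-0.8 * t))  # delayed S-shaped

mse_go, mse_dss = mse(go, times, faults), mse(dss, times, faults)
```

Ranking methods such as NCD aggregate several criteria of this kind (MSE, AIC, etc.) so that one model can be selected even when the criteria disagree.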

14.
In this paper, we consider the error detection phenomenon in the testing phase when modifications or improvements can be made to the software during testing. The occurrence of improvements is described by a homogeneous Poisson process with intensity rate λ. The error detection phenomenon is assumed to follow a nonhomogeneous Poisson process (NHPP) with mean value function m(t). Two models are presented, and in one of them we discuss an optimal release policy for the software taking into account the occurrences of errors and improvements. Finally, we discuss the possibility of an improvement removing k errors with probability pk, k ≥ 0, and develop an NHPP model for the error detection phenomenon in this situation.

15.
We present a software release policy based on the Stackelberg strategy solution concept. The model assumes the existence of two types of producers in the market, the leader and the follower. The resulting release policy combines cost factors with a loss-of-opportunity factor arising from competition between the rival producers. We define a Stackelberg strategy pair in the context of our model and, through a series of preliminary results, show that an optimal strategy pair exists. We also present a numerical example which utilizes a software reliability growth model based on the nonhomogeneous Poisson process. Finally, we explore the relative leadership property of the optimal strategies. This work was supported in part by a FOAS Research Grant provided by RMIT. The author would like to thank the referees for constructive suggestions which helped to improve a previous version of this paper.

16.
We propose a software reliability model which assumes that there are two types of software failures. The first type is caused by the faults latent in the system before the testing; the second type is caused by the faults regenerated randomly during the testing phase. The former and latter software failure-occurrence phenomena are described by a geometrically decreasing and a constant hazard rate, respectively. Further, this model describes the imperfect debugging environment in which the fault-correction activity corresponding to each software failure is not always performed perfectly. Defining a random variable representing the cumulative number of faults successfully corrected up to a specified time point, we use a Markov process to formulate this model. Several quantitative measures for software reliability assessment are derived from this model. Finally, numerical examples of software reliability analysis based on the actual testing data are presented.

17.
One of the challenging problems for software companies is to find the optimal time of release of the software so as to minimize the total cost expended on testing and potential penalty cost due to unresolved faults. If the software is for a safety critical system, then the software release time becomes more important. The criticality of a failure caused by a fault also becomes an important issue for safety critical software. In this paper we develop a total cost model based on criticality of the fault and cost of its occurrence during different phases of development for N-version programming scheme, a popular fault-tolerant architecture. The mathematical model is developed using the reliability growth model based on the non-homogeneous Poisson process. The models for optimal release time under different constraints are developed under the assumption that the debugging is imperfect and there is a penalty for late release of the software. The concept of Failure Mode Effects and Criticality Analysis is used for measuring criticality.

18.
刘云, 田斌, 赵玮. 《运筹学学报》 (Operations Research Transactions), 2005, 9(3): 49-55
The optimal release management problem is a key issue in software reliability research. Most existing optimal software release models assume that the debugging process is perfect and that no new faults are introduced during debugging, an assumption that is unreasonable in many cases. This paper proposes a new optimal software release management model that accounts for imperfect debugging, for the possibility that new faults are introduced during debugging, and for the increase in the probability of perfect debugging as debugging experience accumulates. A solution to the model is also given.
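The learning effect described above (perfect-debugging probability increasing with experience) can be sketched with a hypothetical learning curve p_k = 1 − (1 − p0)·ρ^k; the functional form and parameter values are assumptions, not the paper's model:

```python
def expected_removed(n_attempts, p0, rho):
    """Expected number of faults actually removed after n debugging attempts,
    with perfect-debugging probability p_k = 1 - (1 - p0) * rho**k rising as
    experience accumulates (hypothetical learning curve, 0 < rho < 1)."""
    return sum(1.0 - (1.0 - p0) * rho ** k for k in range(n_attempts))
```

With imperfect debugging the expected removals fall short of the attempt count, but by less than the constant-p0 model would predict, which in turn shifts the cost-optimal release time.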

19.
Due to the large-scale application of software systems, software reliability plays an important role in software development. In this paper, a software reliability growth model (SRGM) is proposed in which the testing time is right-truncated. The instantaneous failure rate, mean value function, error detection rate, software reliability, parameter estimation and simple applications of this model are discussed.

20.
In this paper, an algorithm for the fast computation of network reliability bounds is proposed. The evaluation of the network reliability is an intractable problem for very large networks, and hence approximate solutions based on reliability bounds have assumed importance. The proposed bounds computation algorithm is based on an efficient BDD representation of the reliability graph model and a novel search technique to find important minpaths/mincuts to quickly reduce the gap between the reliability upper and lower bounds. Furthermore, our algorithm allows the control of the gap between the two bounds by controlling the overall execution time. Therefore, a trade-off between prediction accuracy and computational resources can be easily made in our approach. The numerical results are presented for large real example reliability graphs to show the efficacy of our approach.
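Minpath/mincut-based bounds of the Esary-Proschan type illustrate how upper and lower reliability bounds are obtained without exact evaluation (the paper's BDD machinery is omitted here; the example network and probabilities are assumptions):

```python
def reliability_bounds(minpaths, mincuts, p):
    """Esary-Proschan style bounds for a reliability graph with independent
    components; p maps component -> working probability.
    Lower bound from mincuts, upper bound from minpaths."""
    upper = 1.0
    for path in minpaths:
        r = 1.0
        for c in path:
            r *= p[c]          # probability the whole path works
        upper *= 1.0 - r
    upper = 1.0 - upper

    lower = 1.0
    for cut in mincuts:
        q = 1.0
        for c in cut:
            q *= 1.0 - p[c]    # probability the whole cut fails
        lower *= 1.0 - q
    return lower, upper

# two disjoint series paths of two components each (assumed example)
lb, ub = reliability_bounds(
    minpaths=[[1, 2], [3, 4]],
    mincuts=[[1, 3], [1, 4], [2, 3], [2, 4]],
    p={1: 0.9, 2: 0.9, 3: 0.9, 4: 0.9},
)
```

Including more of the important minpaths/mincuts tightens the gap between lb and ub, which mirrors the paper's strategy of searching for important minpaths/mincuts first.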


Copyright©北京勤云科技发展有限公司  京ICP备09084417号