Similar Documents
A total of 20 similar documents were retrieved.
1.
In this paper, we propose a testing-coverage software reliability model, based on a non-homogeneous Poisson process (NHPP), that considers both imperfect debugging (ID) and the uncertainty of operating environments. Software is usually tested in a controlled environment, but it may be used by different users in operating environments that are unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate software reliability measures, but most of them share the assumption that the operating environment is the same as the development environment. In fact, because the operating environments are uncertain, they may considerably influence the reliability and performance of the software in unpredictable ways. When a software system works in a field environment, its reliability therefore usually differs from its theoretical reliability, and also from that of similar applications in other fields. In this paper, a new model is proposed in which the fault detection rate is based on testing coverage and which covers ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real software failure data using seven criteria. An improved normalized criteria distance (NCD) method is also used to rank the models and select the best one with respect to all goodness-of-fit criteria taken together. All results demonstrate that the new model gives significantly improved goodness of fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements, and its sensitivity analysis, are discussed.
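Entries like this one build on the NHPP mean value function. As a hedged illustration only (not the testing-coverage model proposed above), the sketch below fits the basic Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) to made-up grouped failure counts with a crude least-squares grid search; the data and the search ranges are assumptions.

```python
import math

# Hypothetical grouped failure data: cumulative faults observed at the end
# of each week of testing (illustrative numbers, not from any paper above).
times = [1, 2, 3, 4, 5, 6, 7, 8]
cum_faults = [12, 21, 27, 32, 35, 37, 38, 39]

def go_mean(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors between m(t) and the observed cumulative counts."""
    return sum((go_mean(t, a, b) - y) ** 2 for t, y in zip(times, cum_faults))

# Crude grid search over (a, b) -- a stand-in for the maximum-likelihood or
# least-squares fitting a real study would use.
best = min(((sse(a, b / 100), a, b / 100)
            for a in range(30, 81)
            for b in range(5, 101)),
           key=lambda x: x[0])
_, a_hat, b_hat = best
print(f"a = {a_hat}, b = {b_hat:.2f}, expected residual faults at t=8: "
      f"{a_hat - go_mean(8, a_hat, b_hat):.1f}")
```

The fitted `a_hat` estimates the total fault content, so `a_hat - m(8)` is the expected number of faults still latent after eight weeks of testing.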

2.
Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. One of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa, generally defined as the ratio of the net fault reduction to the number of failures experienced. During the software testing process, the FRF can be influenced by many environmental factors, such as imperfect debugging and debugging time lag. In this paper, we first analyze some real data to observe the trend of the FRF and treat the FRF as a time-variable function, and we then study how to integrate a time-variable FRF into software reliability growth modeling. Experimental results show that the proposed models can improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of the FRF may affect the release time as well as the development cost.
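One common way a time-variable FRF enters the modeling is through the fault-detection differential equation. The sketch below is an illustrative assumption, not the authors' model: a decaying FRF B(t) is folded into the standard growth equation dm/dt = B(t) · b · (a − m(t)) and integrated numerically; the values of a, b and the form of B(t) are made up.

```python
import math

# Sketch: fold a time-varying fault reduction factor B(t) into the usual
# NHPP growth equation  dm/dt = B(t) * b * (a - m(t)).
a, b = 100.0, 0.3                          # total fault content, detection rate
B = lambda t: 0.9 * math.exp(-0.05 * t)    # hypothetical decaying FRF

def mean_faults(t_end, dt=0.001):
    """Euler-integrate the growth equation to obtain m(t_end)."""
    m, t = 0.0, 0.0
    while t < t_end:
        m += B(t) * b * (a - m) * dt
        t += dt
    return m

for t in (5, 10, 20):
    print(t, round(mean_faults(t), 1))
```

With B(t) ≡ 1 the equation reduces to the plain Goel-Okumoto model; a decaying B(t) slows the growth of m(t), mimicking debugging that becomes less effective over time.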

3.
Great importance has been attached to the testing phase of the Software Development Life Cycle (SDLC): it is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. Testing must, however, be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify a mathematical relationship between the failure phenomenon and time, have proved useful here, and SRGMs that include the factors affecting the failure process are more realistic. Fault detection and removal during the testing phase depend on how the testing resources (test cases, manpower and time) are used, and also on the previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper which is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered in the reliability modeling, and the time lag between fault identification and removal is also captured. The applicability of the model is shown by validating it on software failure data sets obtained from different real software development projects, and comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean of Squared Errors (MSE), etc. are presented.

4.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for “imperfect debugging” under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.
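To make the MCMC idea concrete, here is a hedged sketch using a simpler stand-in than the multiplicative model above: Bayesian inference for the per-fault detection rate phi in a Jelinski-Moranda-type model, where the i-th inter-failure time is Exp(phi · (N − i + 1)), with N assumed known, invented data, and a Gamma(1, 1) prior, sampled by random-walk Metropolis on log(phi). The prior is conjugate here, which gives an exact posterior mean to check against.

```python
import math, random

random.seed(42)
N = 25                                       # assumed known initial fault count
inter_failure = [0.3, 0.4, 0.2, 0.7, 0.5,
                 1.1, 0.9, 1.6, 2.2, 1.8]    # made-up inter-failure times
n = len(inter_failure)
S = sum((N - i) * t for i, t in enumerate(inter_failure))

def log_post(phi):
    """Log posterior of phi up to a constant: JM likelihood * Gamma(1,1) prior."""
    if phi <= 0.0:
        return -math.inf
    return n * math.log(phi) - phi * S - phi

samples, log_phi = [], 0.0
for step in range(20000):
    prop = log_phi + random.gauss(0.0, 0.5)
    # Metropolis ratio on the log-phi scale; the +prop / -log_phi terms are
    # the Jacobian of the log transform.
    if math.log(random.random()) < (log_post(math.exp(prop)) + prop
                                    - log_post(math.exp(log_phi)) - log_phi):
        log_phi = prop
    if step >= 2000:                         # discard burn-in
        samples.append(math.exp(log_phi))

post_mean = sum(samples) / len(samples)
exact = (1 + n) / (1 + S)    # conjugate Gamma posterior mean, as a check
print(f"MCMC posterior mean = {post_mean:.4f}, conjugate exact = {exact:.4f}")
```

In a realistic analysis N would be unknown and sampled too (e.g. Metropolis-within-Gibbs), which is where MCMC pays off over the closed form used here purely as a correctness check.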

5.
The binomial software reliability growth model (SRGM) contains most of the SRGMs proposed in earlier work as special cases and can describe every software failure-occurrence pattern in continuous time. In this paper, we propose generalized binomial SRGMs in both continuous and discrete time, based on the idea of cumulative Bernoulli trials. The proposed framework is shown to yield some new discrete models as well as the well-known continuous SRGMs. Through numerical examples with actual software failure data, two methods for estimating the model parameters from grouped data are provided, and the predictive performance of the models is examined quantitatively.

6.
This paper considers the situation in which the parameters of an NHPP model change during software testing, and applies a Bayesian change-point analysis to the GGO model. An MCMC method based on Gibbs sampling is used to simulate Markov chains for the posterior distributions of the parameters, and the model is fitted to Musa's software failure data set using the BUGS package. The results demonstrate the intuitiveness and effectiveness of this model for change-point analysis in software reliability.

7.
This article presents a software reliability growth model based on a non-homogeneous Poisson process. Its main contribution is a method of software reliability modelling that incorporates time-dependent fault introduction and fault removal rates with a change point. A cost model with a change point is also developed, and an optimal release policy based on it is discussed. Maximum likelihood estimation is applied to estimate the parameters of the model. The proposed model has been validated on real software failure data, and comparisons are made between models with and without a change point. The application of the proposed cost model is shown through numerical examples.

8.
Since the late seventies, various software reliability growth models (SRGMs) have been developed to estimate quality measures of software such as the number of remaining faults, the failure rate, reliability, cost and release time. Most of the existing SRGMs are probabilistic and rest on various assumptions. The entire software development process is performed by human beings, and software can be executed in different environments; since human behavior is fuzzy and environments change, the concept of fuzzy set theory is applicable to software reliability modeling. In this paper, two software reliability models based on fuzzy time series are proposed: the first predicts the time between failures (TBF) of the software, and the second predicts the number of errors present in it. Both models treat the software failure data as a linguistic variable, and their usefulness is demonstrated on real failure data.

9.
One of the challenging problems for software companies is to find the optimal release time of the software, so as to minimize the total cost expended on testing plus the potential penalty cost due to unresolved faults. If the software is intended for a safety-critical system, the release time becomes even more important, and the criticality of a failure caused by a fault becomes an important issue. In this paper we develop a total cost model based on the criticality of faults and the cost of their occurrence during different phases of development for the N-version programming scheme, a popular fault-tolerant architecture. The mathematical model is developed using a reliability growth model based on the non-homogeneous Poisson process. Models for the optimal release time under different constraints are developed under the assumptions that debugging is imperfect and that there is a penalty for late release of the software. The concept of Failure Mode, Effects and Criticality Analysis is used to measure criticality.
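The basic cost-versus-release-time trade-off behind such models can be sketched as follows. This is an illustrative toy (not the N-version model above): testing cost per fault found c1, field penalty per residual fault c2, testing cost per unit time c3, and a Goel-Okumoto mean value function; all numbers are assumptions.

```python
import math

# Toy NHPP release-time cost model: C(T) = c1*m(T) + c2*(a - m(T)) + c3*T
# with m(T) = a * (1 - exp(-b T)).
a, b = 100.0, 0.15
c1, c2, c3 = 1.0, 50.0, 20.0

def cost(T):
    m = a * (1.0 - math.exp(-b * T))
    return c1 * m + c2 * (a - m) + c3 * T

# Setting dC/dT = 0 gives the closed-form minimizer
# T* = ln((c2 - c1) * a * b / c3) / b  (valid when c2 > c1).
T_star = math.log((c2 - c1) * a * b / c3) / b
print(f"optimal release time T* = {T_star:.2f}, cost = {cost(T_star):.1f}")
```

Raising the field penalty c2 or lowering the testing cost c3 pushes T* later, which is the qualitative behavior the sensitivity analyses in these papers examine.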

10.
Because software systems are now applied on a large scale, software reliability plays an important role in software development. In this paper, a software reliability growth model (SRGM) with right-truncated testing time is proposed. The instantaneous failure rate, mean value function, error detection rate, software reliability, parameter estimation and simple applications of the model are discussed.

11.
邱慧, 闫相斌, 彭锐. 运筹与管理 (Operations Research and Management Science), 2022, 31(4): 104-108
This paper proposes a software reliability model that considers multiple types of faults, modeling the fault detection and fault removal processes separately. The specific classification can be decided from the model's validation criteria (goodness-of-fit and predictive-validity measures) and its complexity, together with the testers' classification suggestions or classification data when these are available. For illustration, a concrete model with four types of faults is given and fitted to a real data set; model comparison verifies the effectiveness of the multiple-fault-type model. Finally, the model is applied to constructing an optimal software release-time policy. The results provide a theoretical reference for software development and testing.

12.
One of the most important issues for a development manager is how to predict the reliability of a software system at an arbitrary testing time. In this paper, using software failure-occurrence time data, we discuss a method of software reliability prediction based on software reliability growth models described by an NHPP (nonhomogeneous Poisson process). From the applied models, the conditional probability distribution of the time between software failures is derived, and its mean and median are obtained as reliability prediction measures. Finally, based on several numerical examples, we compare the performance of these measures from the viewpoint of software reliability prediction in the testing phase.
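The conditional quantities mentioned here follow from a standard NHPP property: given no failure up to time t, the probability of surviving a further interval x is R(x|t) = exp(−(m(t+x) − m(t))), so the median time to the next failure solves m(t+x) − m(t) = ln 2. The sketch below computes that median by bisection for assumed (not fitted) Goel-Okumoto parameters.

```python
import math

# Assumed fitted GO parameters, for illustration only.
a, b = 120.0, 0.08
m = lambda t: a * (1.0 - math.exp(-b * t))

def median_next_failure(t):
    """Bisect for the x satisfying m(t + x) - m(t) = ln(2)."""
    lo, hi = 0.0, 1e6
    target = math.log(2.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if m(t + mid) - m(t) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for t in (10, 30, 50):
    print(t, round(median_next_failure(t), 2))
```

As testing time t grows and the failure intensity decays, the median time to the next failure lengthens, which is exactly the reliability-growth signal these prediction measures are meant to capture. (The median is finite only while a·e^(−bt) > ln 2, i.e. while enough failure mass remains.)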

13.
We propose a software reliability model which assumes that there are two types of software failures. The first type is caused by the faults latent in the system before the testing; the second type is caused by the faults regenerated randomly during the testing phase. The former and latter software failure-occurrence phenomena are described by a geometrically decreasing and a constant hazard rate, respectively. Further, this model describes the imperfect debugging environment in which the fault-correction activity corresponding to each software failure is not always performed perfectly. Defining a random variable representing the cumulative number of faults successfully corrected up to a specified time point, we use a Markov process to formulate this model. Several quantitative measures for software reliability assessment are derived from this model. Finally, numerical examples of software reliability analysis based on the actual testing data are presented.

14.
An efficient approach, called augmented line sampling, is proposed to locally evaluate the failure probability function (FPF) in structural reliability-based design using only a single reliability analysis run of line sampling. The novelty of the approach is that it re-uses the information from that single line sampling analysis to construct the FPF estimate, so that repeated evaluations of failure probabilities are avoided. It is shown that, when the design parameters are distribution parameters of the basic random variables, the desired information about the FPF can be extracted from a single implementation of line sampling, a highly efficient and widely used reliability analysis method. The proposed method thus extends traditional line sampling from failure probability estimation to the evaluation of the FPF, which is a challenging task. The required computational effort is relatively insensitive to the number of uncertain parameters and does not grow with the number of design parameters. Numerical examples are given to show the advantages of the approach.
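For readers unfamiliar with plain line sampling (the building block here, not the augmented variant), a minimal sketch follows, under the assumption of a linear limit state g(u) = β − α·u in standard normal space with unit important direction α. Each sample in the hyperplane orthogonal to α contributes Φ(−c_i), where c_i is the distance along α to the failure surface; for a linear g every c_i equals β, so the estimator reproduces Φ(−β) exactly.

```python
import math, random

random.seed(0)
beta = 2.5
dim = 4
alpha = [1.0 / math.sqrt(dim)] * dim      # unit importance direction (assumed)

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def g(u):
    """Assumed linear limit state; failure when g(u) <= 0."""
    return beta - sum(a_i * u_i for a_i, u_i in zip(alpha, u))

def line_distance(u_perp):
    """Bisect along alpha from the orthogonal sample for the root of g."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        point = [u + mid * a_i for u, a_i in zip(u_perp, alpha)]
        if g(point) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_perp():
    """Standard normal sample projected onto the hyperplane orthogonal to alpha."""
    u = [random.gauss(0.0, 1.0) for _ in range(dim)]
    proj = sum(a_i * u_i for a_i, u_i in zip(alpha, u))
    return [u_i - proj * a_i for u_i, a_i in zip(u, alpha)]

pf = sum(Phi(-line_distance(sample_perp())) for _ in range(200)) / 200
print(f"line-sampling pf = {pf:.5f}  vs exact Phi(-beta) = {Phi(-beta):.5f}")
```

For nonlinear limit states the c_i vary from line to line and the average of Φ(−c_i) becomes a low-variance estimator; the augmented method above additionally re-uses the c_i to trace how the failure probability varies with the design parameters.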

15.
刘云, 田斌, 赵玮. 运筹学学报 (Operations Research Transactions), 2005, 9(3): 49-55
The optimal release management of software is a key problem in software reliability research. Most existing optimal release models assume that the debugging process is perfect and that no new faults are introduced during debugging, an assumption that is unreasonable in many situations. This paper proposes a new optimal software release management model that accounts for imperfect debugging, for the possibility that new faults are introduced during debugging, and for the fact that the probability of perfect debugging increases as debugging experience accumulates. A solution of the model is also given.

16.
The polynomial chaos expansion (PCE) model has developed into a powerful tool for global sensitivity analysis, but it is rarely used as a surrogate model in reliability analysis; because the model lacks an error term, it is difficult to construct an active-learning function with which to update it step by step. Within the framework of structural reliability analysis, this paper proposes a simulation method based on the PCE model and bootstrap resampling to compute the failure probability. First, a bootstrap resampling step is applied to the experimental design to characterize the prediction error of the PCE model. Next, an active-learning function is constructed from this local error, and the model is adaptively updated by enriching the experimental design until it accurately approximates the true performance function. Finally, once the PCE model fits and predicts with sufficient accuracy, Monte Carlo simulation is used to compute the failure probability. The proposed parallel enrichment strategy both finds the "best" points for improving the model fit during updating and accounts for the computational cost of fitting; moreover, when the failure probability is of a small order of magnitude, combining the PCE-bootstrap step with subset simulation further accelerates the convergence of the failure-probability estimator. The method extends the application of PCE models in probabilistic reliability from sensitivity analysis to reliability analysis, and the example results show its accuracy and efficiency.

17.
Knowledge of the sensitivity of inverse solutions to variation of parameters of a model can be very useful in making engineering design decisions. This article describes how parameter sensitivity analysis can be carried out for inverse simulations generated through approximate transfer function inversion methods and also through the use of feedback principles. Emphasis is placed on the use of sensitivity models and the article includes examples and a case study involving a model of an underwater vehicle. It is shown that the use of sensitivity models can provide physical understanding of inverse simulation solutions that is not directly available using parameter sensitivity analysis methods that involve parameter perturbations and response differencing.

18.
Software failures have become the major factor that brings the system down or causes a degradation in the quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive nature by which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user-perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non-homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, use of the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data becomes available. Copyright © 2002 John Wiley & Sons, Ltd.
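The calibration-factor idea reduces to simple arithmetic once the failure rates have been estimated. The sketch below uses entirely made-up rates: fit an NHPP to the system-test and field data of a previous release, take the ratio of the estimated failure rates, and use that ratio to map the test-phase estimate of a new release into a field prediction.

```python
# All rates below are invented for illustration; in practice each would be
# the asymptotic failure-rate estimate from a fitted NHPP model.
lam_test_prev = 0.40    # failures/day from the previous release's test data
lam_field_prev = 0.05   # failures/day observed for that release in the field
calibration = lam_field_prev / lam_test_prev

lam_test_new = 0.32     # test-phase estimate for the new release
lam_field_new = calibration * lam_test_new
print(f"calibration factor = {calibration:.3f}, "
      f"predicted field failure rate = {lam_field_new:.3f} failures/day")
```

The factor is typically well below one, reflecting that system test exercises the software far more aggressively than typical field usage.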

19.
The aim of this paper is to evaluate the reliability of hybrid structural systems that contain both probabilistic and interval uncertain parameters. Based on the interval reliability model and probabilistic operations, a new hybrid probabilistic-interval reliability model is proposed. First, the interval reliability model is used to analyze the performance function, and the reliabilities of all regions divided by the failure plane are summed. The reliability of the structural system is then obtained from the presented optimal criterion for enumerating the main failure modes of the hybrid structural system and from the relationships among the failure modes. Numerical examples are used to contrast the hybrid reliability model with the traditional probabilistic reliability model. The results indicate that the presented model is more suitable for the analysis and design of such structural systems, ensures the safety of the system well, and requires less information about the uncertainties.

20.
Software reliability is a rapidly developing discipline. In this paper we model the fault-detection process by Markov processes with decreasing jump intensity, where the intensity function is a power function of the number of faults remaining in the software. The models generalize the software reliability model of Jelinski and Moranda ('Software reliability research', in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972, pp. 465-497). The main advantage of our models is that we do not assume that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation, and statistical tests also indicate that this may be the case. Numerical results show that the estimation of the expected time to next failure is both reasonable and decreases relatively stably when the number of removed faults is increased.
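A minimal sketch of the intensity structure described here, with assumed numbers: the failure intensity is a power function of the remaining fault count, λ(n) = φ · n^k (k = 1 recovers the Jelinski-Moranda model), so the expected time to the next failure is 1/λ(n) and lengthens as faults are removed. The parameters φ, k and the fault count N below are illustrative assumptions.

```python
# Illustrative parameters: a second-order power function (k = 2), as the
# paper's preliminary studies suggest, with invented phi and fault content N.
phi, k = 0.004, 2.0
N = 30

for removed in (0, 10, 20, 25):
    n = N - removed
    mtbf = 1.0 / (phi * n ** k)
    print(f"{removed:2d} faults removed -> E[time to next failure] = {mtbf:6.2f}")
```

With k > 1 the early, "large" faults dominate the intensity, so reliability growth per removed fault is much faster at the start of testing than near the end, which is the behavior that motivates dropping the equal-failure-rate assumption.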


Copyright©北京勤云科技发展有限公司  京ICP备09084417号