Similar Documents
20 similar documents found.
1.
In this paper, we propose a testing-coverage software reliability model based on a non-homogeneous Poisson process (NHPP) that considers both imperfect debugging (ID) and the uncertainty of operating environments. Software is usually tested in a controlled environment, but it may be used by different users in operating environments unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate software reliability measures, but most of them share the assumption that the operating environment is the same as the development environment. In fact, the uncertainty of operating environments may influence reliability and software performance in unpredictable ways, so when a software system works in the field, its reliability usually differs from the theoretical reliability, and also from that of similar applications in other fields. We propose a new model whose fault detection rate is based on testing coverage and which covers ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real software failure data using seven criteria. An improved normalized criteria distance (NCD) method is also used to rank and select the best model in the context of a set of goodness-of-fit criteria taken together. All results demonstrate that the new model gives significantly improved goodness of fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements, together with its sensitivity analysis, is discussed.
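The NHPP backbone shared by this family of models can be sketched with the classic Goel-Okumoto mean value function (a minimal illustration only: the paper's model adds testing coverage, imperfect debugging and an environment factor, and the parameter values below are hypothetical):

```python
import math

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t under the
    classic Goel-Okumoto NHPP: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of no failure in (t, t+x] after testing up to t:
    R(x|t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_value(t + x, a, b) - mean_value(t, a, b)))

# Hypothetical parameters: a = total fault content, b = detection rate
a, b = 100.0, 0.05
faults_by_10 = mean_value(10, a, b)              # ~39.35 faults expected
rel_next_unit = conditional_reliability(1, 10, a, b)
```

An environment- or coverage-aware model replaces the constant detection rate b with a time- or coverage-dependent function, but the reliability formula above keeps the same shape.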

2.
The binomial software reliability growth model (SRGM) contains most existing SRGMs proposed in earlier work as special cases and can describe every software failure-occurrence pattern in continuous time. In this paper, we propose generalized binomial SRGMs in both continuous and discrete time, based on the idea of cumulative Bernoulli trials. The proposed models yield some new discrete models as well as the well-known continuous SRGMs. Through numerical examples with actual software failure data, two estimation methods for model parameters with grouped data are provided, and the predictive model performance is examined quantitatively.

3.
Since the late seventies, various software reliability growth models (SRGMs) have been developed to estimate measures related to software quality such as the number of remaining faults, the software failure rate, reliability, cost, and release time. Most existing SRGMs are probabilistic and have been developed under various assumptions. The entire software development process is performed by human beings, and software can be executed in different environments. Since human behavior is fuzzy and the environment is changing, fuzzy set theory is applicable to developing software reliability models. In this paper, two fuzzy-time-series-based software reliability models are proposed: the first predicts the time between failures (TBF) of software, and the second predicts the number of errors present in the software. Both models treat the software failure data as a linguistic variable. The usefulness of the models is demonstrated using real failure data.

4.
Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. One of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa, generally defined as the ratio of net fault reduction to failures experienced. During the software testing process, FRF can be influenced by many environmental factors, such as imperfect debugging and debugging time lag. In this paper, we first analyze some real data to observe the trends of FRF and treat FRF as a time-variable function. We further study how to integrate a time-variable FRF into software reliability growth modeling. Experimental results show that the proposed models improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of FRF may affect the release time as well as the development cost.

5.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for "imperfect debugging" under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.
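As a rough illustration of the kind of MCMC inference the abstract describes, the sketch below runs a random-walk Metropolis sampler for a single failure-rate parameter with exponential inter-failure times. The data, prior, and tuning constants are all hypothetical, and the paper's multiplicative multi-parameter model is considerably richer:

```python
import math, random

def log_posterior(lam, times):
    # Exponential inter-failure likelihood with a vague Exp(0.01) prior
    if lam <= 0:
        return -math.inf
    return len(times) * math.log(lam) - lam * sum(times) - 0.01 * lam

def metropolis(times, n_iter=20000, step=0.2, seed=1):
    random.seed(seed)
    lam = 1.0
    lp = log_posterior(lam, times)
    samples = []
    for _ in range(n_iter):
        prop = lam + random.gauss(0.0, step)    # random-walk proposal
        lp_prop = log_posterior(prop, times)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            lam, lp = prop, lp_prop
        samples.append(lam)
    return samples[n_iter // 2:]                # discard burn-in

times = [0.9, 1.1, 1.3, 0.7, 1.0, 1.2, 0.8, 1.0]   # hypothetical data
post = metropolis(times)
posterior_mean = sum(post) / len(post)
```

With this conjugate setup the exact posterior is Gamma, so the sampler can be checked against the closed form; for the evolving multiplicative-rate model such checks are unavailable and MCMC earns its keep.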

6.
One of the most important issues for a development manager may be how to predict the reliability of a software system at an arbitrary testing time. In this paper, using the software failure-occurrence time data, we discuss a method of software reliability prediction based on software reliability growth models described by an NHPP (nonhomogeneous Poisson process). From the applied software reliability growth models, the conditional probability distribution of the time between software failures is derived, and its mean and median are obtained as reliability prediction measures. Finally, based on several numerical examples, we compare the performance of these measures from the viewpoint of software reliability prediction in the testing phase.
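The prediction measures described above can be sketched for a concrete NHPP. Assuming, for illustration, a Goel-Okumoto mean value function with hypothetical parameters, the conditional survival probability of the next failure time is exp(-(m(t+x) - m(t))), from which the mean and median follow numerically:

```python
import math

def m(t, a=100.0, b=0.05):
    # Goel-Okumoto mean value function with hypothetical parameters
    return a * (1.0 - math.exp(-b * t))

def survival(x, t):
    # P(next failure occurs later than t + x | last failure at time t)
    return math.exp(-(m(t + x) - m(t)))

def predicted_mean(t, upper=1000.0, steps=100000):
    # Mean time to the next failure: integrate the survival function
    h = upper / steps
    return h * sum(survival(i * h, t) for i in range(steps))

def predicted_median(t):
    # Median: solve survival(x, t) = 0.5 by bisection
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if survival(mid, t) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

At t = 10 with these parameters the median (roughly 0.23) sits below the mean (roughly 0.34), reflecting the right skew of the inter-failure distribution; comparing the two as prediction measures is exactly the kind of exercise the abstract describes.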

7.
Software reliability is a rapidly developing discipline. In this paper we model the fault-detection process by Markov processes with decreasing jump intensity. The intensity function is suggested to be a power function of the number of remaining faults in the software. The models generalize the software reliability model suggested by Jelinski and Moranda ('Software reliability research', in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972, pp. 465–497). The main advantage of our models is that we do not assume that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation, and statistical tests also indicate that this may be the case. Numerical results show that the estimates of the expected time to the next failure are reasonable and decrease relatively stably as the number of removed faults increases.
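A minimal sketch of the power-function intensity described above (function names and parameter values are illustrative; the classical Jelinski-Moranda model is the case p = 1, and the second-order case the abstract favors is p = 2):

```python
def jm_power_intensity(n_total, removed, phi, p):
    """Failure intensity after `removed` faults have been fixed,
    generalizing Jelinski-Moranda (p = 1) to a power function of the
    number of remaining faults."""
    remaining = n_total - removed
    return phi * remaining ** p

def expected_time_to_next_failure(n_total, removed, phi, p):
    # Exponential inter-failure times: the mean is the reciprocal intensity
    return 1.0 / jm_power_intensity(n_total, removed, phi, p)
```

With p > 1 the intensity drops faster than linearly as faults are removed, so early fixes buy more reliability growth than under the equal-failure-rate assumption.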

8.
A lot of development resources are consumed during the software testing phase, which fundamentally consists of module testing, integration testing and system testing. It is therefore of great importance for a manager to decide how to spend testing resources effectively in order to develop quality, reliable software. In this paper, we consider two kinds of testing-resource allocation problems to make the best use of the specified testing resources during module testing. We also introduce a software reliability growth model, based on a nonhomogeneous Poisson process, describing the time-dependent behavior of detected software faults and of the testing-resource expenditures spent during testing. It is shown that the optimal allocation of testing resources among software modules can improve software reliability.
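One standard way to formalize this allocation problem is water-filling on per-module exponential mean value functions: spend effort so that the marginal fault-detection rates are equal across modules. This sketch is an assumption-laden illustration, not the paper's exact formulation:

```python
import math

def allocate(modules, total_effort):
    """Split testing effort so that the marginal detection rates
    a*b*exp(-b*w) are equalized across modules (water-filling), given
    per-module mean value functions a*(1 - exp(-b*w)).  `modules` is a
    list of (a, b) pairs; all numbers here are illustrative."""
    def effort_for(mu):
        # Effort per module at marginal rate mu, floored at zero
        return [max(math.log(a * b / mu) / b, 0.0) for a, b in modules]
    lo, hi = 1e-9, max(a * b for a, b in modules)
    for _ in range(200):                 # bisect on the marginal rate
        mid = (lo + hi) / 2.0
        if sum(effort_for(mid)) > total_effort:
            lo = mid
        else:
            hi = mid
    return effort_for((lo + hi) / 2.0)

w = allocate([(80, 0.02), (40, 0.05), (20, 0.1)], total_effort=100)
```

Modules with many slowly-detected faults soak up most of the budget, which matches the intuition that easy modules saturate quickly and extra effort there is wasted.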

9.
Due to the large-scale application of software systems, software reliability plays an important role in software development. In this paper, a software reliability growth model (SRGM) in which the testing time is right-truncated is proposed. The instantaneous failure rate, mean value function, error detection rate, software reliability, parameter estimation and simple applications of this model are discussed.

10.
To accurately model the software failure process with software reliability growth models, incorporating testing effort has been shown to be important. Testing-effort allocation is also a difficult issue, and it directly affects the software release time when a reliability criterion has to be met. However, with an increasing number of parameters involved in these models, the uncertainty of the parameters estimated from the failure data can greatly affect the decision. Hence, it is important to study the impact of these model parameters. In this paper, the sensitivity of the software release time is investigated through various methods, including the one-factor-at-a-time approach, design of experiments and global sensitivity analysis. It is shown that the results from the first two methods may not be accurate enough for complex nonlinear models; global sensitivity analysis performs better because it considers the global parameter space. The limitations of the different approaches are also discussed. Finally, to avoid further excessive adjustment of the software release time, interval estimation, which can be obtained from the results of global sensitivity analysis, is recommended.
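The one-factor-at-a-time approach mentioned above can be sketched as follows; the closed-form release-time expression and parameter values are hypothetical stand-ins for the paper's model:

```python
import math

def release_time(a, b, cost_ratio=90.0):
    # Hypothetical closed-form release time T* = ln(a*b*cost_ratio) / b
    return math.log(a * b * cost_ratio) / b

def oat_sensitivity(base, delta=0.01):
    """One-factor-at-a-time: perturb each parameter by +1% (relative)
    and report the relative change in the release time."""
    t0 = release_time(**base)
    result = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] *= 1.0 + delta
        result[name] = (release_time(**perturbed) - t0) / t0
    return result

sens = oat_sensitivity({"a": 100.0, "b": 0.05})
```

Here the detection rate b dominates the fault content a, but because OAT probes only one axis at a time around a single base point, it can miss interaction effects in nonlinear models, which is the abstract's argument for global sensitivity analysis.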

11.
Software failures have become the major factor that brings the system down or causes a degradation in the quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive nature by which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user-perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non-homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, use of the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data becomes available. Copyright © 2002 John Wiley & Sons, Ltd.
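A minimal sketch of the calibration idea (the numbers are hypothetical, and the paper estimates both rates with NHPP models rather than taking them as given):

```python
def calibration_factor(test_rate, field_rate):
    """Ratio mapping the system-test failure rate to the field rate,
    evaluated on a release that has both kinds of data."""
    return field_rate / test_rate

def predicted_field_rate(test_rate, factor):
    # Apply the factor to a release that has only system-test data so far
    return test_rate * factor

# Previous release: both test and field estimates available (illustrative)
k = calibration_factor(test_rate=2.0, field_rate=0.25)   # failures per unit time
# New release: only a system-test estimate so far
new_field_estimate = predicted_field_rate(1.6, k)
```

The factor is typically well below one, since system test exercises the software far more aggressively than typical field usage.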

12.
This article presents a software reliability growth model based on a non-homogeneous Poisson process. Its main focus is a method for software reliability modelling that incorporates time-dependent fault introduction and fault removal rates with a change point. A cost model with a change point is also developed, and an optimal release policy based on it is discussed. The maximum likelihood technique is applied to estimate the parameters of the model. The proposed model is validated on real software failure data, and comparisons are made with models with and without a change point. The application of the proposed cost model is shown through numerical examples.
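A mean value function with a single change point, kept continuous at the change point, can be sketched as follows (a simplified stand-in: the article's model also makes fault introduction time-dependent):

```python
import math

def m_change_point(t, a, b1, b2, tau):
    """Mean value function with one change point tau: the fault detection
    rate switches from b1 to b2 at tau, and m stays continuous there."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    m_tau = a * (1.0 - math.exp(-b1 * tau))
    # After tau, the remaining (a - m_tau) faults are detected at rate b2
    return m_tau + (a - m_tau) * (1.0 - math.exp(-b2 * (t - tau)))
```

A change point like this typically marks a shift in test strategy or team; the likelihood is then fitted piecewise with the change point either fixed or estimated.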

13.
We propose a software reliability model which assumes that there are two types of software failures. The first type is caused by the faults latent in the system before the testing; the second type is caused by the faults regenerated randomly during the testing phase. The former and latter software failure-occurrence phenomena are described by a geometrically decreasing and a constant hazard rate, respectively. Further, this model describes the imperfect debugging environment in which the fault-correction activity corresponding to each software failure is not always performed perfectly. Defining a random variable representing the cumulative number of faults successfully corrected up to a specified time point, we use a Markov process to formulate this model. Several quantitative measures for software reliability assessment are derived from this model. Finally, numerical examples of software reliability analysis based on the actual testing data are presented.

14.
In this research, we investigate stopping rules for software testing and propose two stopping rules for software reliability testing based on the impartial reliability model. The impartial reliability difference (IRD-MP) rule considers the difference between the impartial transition-probability reliabilities estimated for the software developer and the consumers at their predetermined prior-information levels. The empirical–impartial reliability difference (EIRD-MP) rule suggests stopping a software test when the computed empirical transition reliability tends to its estimated impartial transition reliability. To ensure the high-standard requirement of safety-critical software, both rules take the maximum probability (MP) of untested paths into account.

15.
One of the challenging problems for software companies is to find the optimal release time of the software so as to minimize the total cost expended on testing and the potential penalty cost due to unresolved faults. If the software is for a safety-critical system, the release time becomes even more important, and the criticality of a failure caused by a fault is also an important issue. In this paper we develop a total cost model based on the criticality of faults and the cost of their occurrence during different phases of development for the N-version programming scheme, a popular fault-tolerant architecture. The mathematical model is developed using a reliability growth model based on the non-homogeneous Poisson process. Models for the optimal release time under different constraints are developed under the assumptions that debugging is imperfect and that there is a penalty for late release of the software. The concept of Failure Mode, Effects and Criticality Analysis is used for measuring criticality.
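For intuition, the cost-optimal release time has a closed form in the simplest setting: a single Goel-Okumoto process with linear costs. This is an illustrative reduction, not the paper's N-version, criticality-based model:

```python
import math

def optimal_release_time(a, b, c_test, c_field, c_time):
    """Minimize C(T) = c_test*m(T) + c_field*(a - m(T)) + c_time*T with
    m(T) = a*(1 - exp(-b*T)) and c_field > c_test (a fault surfacing in
    the field costs more than one fixed in test).  Setting dC/dT = 0
    gives m'(T*) = c_time / (c_field - c_test), i.e.
    T* = ln(a*b*(c_field - c_test)/c_time) / b, floored at zero."""
    t_star = math.log(a * b * (c_field - c_test) / c_time) / b
    return max(t_star, 0.0)

t_opt = optimal_release_time(a=100, b=0.05, c_test=1.0, c_field=10.0, c_time=0.5)
```

Raising the field-failure penalty pushes the release later, while raising the per-unit-time testing cost pushes it earlier; richer models with criticality classes weight these terms per fault class.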

16.
Telecommunication software systems containing security vulnerabilities continue to be created and released to consumers. Improved software engineering practices must be adopted to reduce the security vulnerabilities in modern systems. Contracts can provide a useful mechanism for the identification, tracking and validation of security vulnerabilities. In this work, we propose a new contract-based security assertion monitoring framework (CB_SAMF) that is intended to reduce the number of security vulnerabilities that are exploitable across multiple software layers, and to be used in an enhanced systems development life cycle (SDLC). We show how contract-based security assertion monitoring can be achieved in a live environment on Linux. Through security activities integrated into the SDLC we can identify potential security vulnerabilities in telecommunication systems, which in turn are used to create contracts defining security assertions. Our contract model is then applied, as runtime probes, against two common security-related vulnerabilities: a buffer overflow and a denial of service.

17.
In this paper, we consider the error-detection phenomenon in the testing phase when modifications or improvements can be made to the software. The occurrence of improvements is described by a homogeneous Poisson process with intensity rate λ. The error-detection phenomenon is assumed to follow a nonhomogeneous Poisson process (NHPP) with mean value function m(t). Two models are presented, and in one of them we discuss an optimal release policy for the software taking into account the occurrence of errors and improvements. Finally, we discuss the possibility of an improvement removing k errors with probability pk, k ≥ 0, and develop an NHPP model for the error-detection phenomenon in this situation.

18.
Stochastic Analysis and Applications, 2013, 31(4): 849–864

This paper considers a Markovian imperfect software debugging model incorporating two types of faults and derives several measures, including the first-passage-time distribution. When the debugging process following a failure is completed, the fault which caused the failure is either removed from the fault content with probability p or remains in the system with probability 1 − p. By defining the transition probabilities for the debugging process, we derive the distribution of the first passage time to a prespecified number of fault removals and evaluate the expected numbers of perfect debuggings and debugging completions up to a specified time. The availability function of the software system, i.e., the probability that the software is in a working state at a given time, is also derived, and thus the availability and working probability of the software system are obtained. Throughout the paper, the length of the debugging time is treated as random and a distribution is assumed for it. Numerical examples are provided for illustrative purposes.
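The p / (1 − p) debugging mechanism has a simple consequence worth noting: the number of debugging completions needed to achieve n perfect removals is negative binomial with mean n/p. A sketch (illustrative only; the paper works with the full first-passage-time distribution, not just this count):

```python
import random

def expected_debuggings(n_faults, p):
    """Expected number of debugging completions needed for n_faults
    perfect removals when each debugging succeeds with probability p
    (negative-binomial mean n/p)."""
    return n_faults / p

def simulate_first_passage(n_faults, p, seed=0):
    # One sample path: count debuggings until n_faults perfect removals
    random.seed(seed)
    removed = count = 0
    while removed < n_faults:
        count += 1
        if random.random() < p:
            removed += 1
    return count
```

Layering a random debugging-time distribution on top of this count, as the paper does, turns the first-passage count into a first-passage time.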

19.
In the traditional design of reliability tests for assuring the mean time to failure (MTTF) of a Weibull distribution with shape and scale parameters, it has been assumed that the shape parameter in the acceptable and rejectable populations is the same fixed number. To expand the applicability of reliability testing, Hisada and Arizono developed a reliability sampling scheme for assuring the MTTF of the Weibull distribution under the condition that the shape parameters of the two populations do not necessarily coincide and are each specified as interval values. Their reliability test is designed using complete lifetime data. In general, reliability testing based on complete lifetime data requires a long testing time, so the testing cost sometimes becomes expensive. In this paper, for the purpose of an economical reliability test plan, we consider the sudden death procedure for assuring the MTTF of a Weibull distribution with a variational shape parameter.
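The MTTF being assured is determined by the Weibull shape and scale parameters; with an interval-valued shape parameter it is only pinned down to a range. A sketch (illustrative of the quantity under test, not of the sampling scheme itself):

```python
import math

def weibull_mttf(scale, shape):
    """MTTF of a Weibull lifetime: eta * Gamma(1 + 1/beta)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

def mttf_bounds(scale, shape_low, shape_high, grid=200):
    """MTTF range when the shape parameter is only known to lie in an
    interval, evaluated on a simple grid over that interval."""
    vals = [weibull_mttf(scale, shape_low + i * (shape_high - shape_low) / grid)
            for i in range(grid + 1)]
    return min(vals), max(vals)

lo, hi = mttf_bounds(1000.0, 1.0, 2.0)
```

A test plan assuring the MTTF under shape-parameter uncertainty must protect the consumer at the worst point of this range, which is part of what makes the interval-valued design harder than the fixed-shape one.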

20.
Intense competition and the requirement to continually drive down costs within a mature mobile telephone infrastructure market call for new and innovative solutions to process improvement. One particular challenge is to improve the quality and reliability of the diagnostic process for systems testing of Global System for Mobile Communications and Universal Mobile Telecommunications System products. In this paper, we concentrate on a particularly important equipment type, the Base Transceiver Station (BTS). The BTS manages the radio channels and transfers signalling information to and from mobile stations (i.e., mobile phones). Most of the diagnostic processes are manually operated and rely heavily on the knowledge of individual operators and technicians for their performance; hence there is a high cost associated with troubleshooting in terms of time and manpower. In this paper, we employ Bayesian networks (BNs) to model the domain knowledge that comprises the operation of the System Under Test, the Automated Test Equipment (ATE), and the diagnostic skill of experienced engineers, in an attempt to enhance the efficiency and reliability of the diagnostic process. The proposed automated diagnostic tool (known as Wisdom) consists of several modules. An intelligent user interface provides possible solutions to test operators and technicians, captures their responses, and activates the automated test program. A server-and-client software architecture is used to integrate Wisdom with the ATE seamlessly while maintaining Wisdom as an independent module, and a local area network provides the infrastructure for managing and deploying the multimedia information in real time. We describe how a diagnostic model can be developed and implemented using a BN approach, and how the resulting process of diagnosis following failure, advice generation, and subsequent actions by the operator are handled interactively by the prototype system. The results from an initial survey are presented, indicating sizeable reductions in fault correction times for many fault types.
