Similar References (20 results)
1.
软件模块测试中的动态资源分配问题 (Dynamic Resource Allocation in Software Module Testing)
赵玮  杨莉 《运筹学学报》2000,4(3):88-94
To make effective use of the testing resources invested during the testing process and to improve testing efficiency, this paper considers dividing the testing process into multiple stages and allocating testing resources to each stage dynamically. Two dynamic allocation models are proposed: one that, when the total amount of testing resources is fixed, minimizes the mean number of errors remaining in each software module at the end of testing; and one that, when target values are given for the mean number of errors remaining in each module at the end of testing, minimizes the testing resources consumed.
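A minimal numerical sketch of the first model described above, assuming exponential fault detection per module; the fault counts a, detectabilities b, and total resource W are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# module j starts with a[j] faults; spending w[j] resources leaves an
# expected a[j]*exp(-b[j]*w[j]) faults (assumed detection model)
a = np.array([30.0, 50.0, 20.0])
b = np.array([0.04, 0.02, 0.05])
W = 100.0

res = minimize(lambda w: np.sum(a * np.exp(-b * w)),
               x0=np.full(3, W / 3),
               constraints={"type": "eq", "fun": lambda w: w.sum() - W},
               bounds=[(0.0, W)] * 3)
print(np.round(res.x, 1), round(res.fun, 2))  # allocation, residual faults
```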

2.
One of the most important issues for a development manager may be how to predict the reliability of a software system at an arbitrary testing time. In this paper, using software failure-occurrence time data, we discuss a method of software reliability prediction based on software reliability growth models described by a nonhomogeneous Poisson process (NHPP). From the applied software reliability growth models, the conditional probability distribution of the time between software failures is derived, and its mean and median are obtained as reliability prediction measures. Finally, based on several numerical examples, we compare the performance of these measures from the viewpoint of software reliability prediction in the testing phase.
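A sketch of such a prediction measure, taking the Goel-Okumoto model m(t) = a(1 - e^(-bt)) as an illustrative NHPP. Because the GO model has a finite total fault content, the sketch conditions on at least one more failure occurring; all parameter values are assumed:

```python
import numpy as np
from scipy.optimize import brentq

def m(t, a, b):
    # Goel-Okumoto mean value function (one illustrative NHPP choice)
    return a * (1.0 - np.exp(-b * t))

def conditional_cdf(x, t, a, b):
    # P(next failure within x | testing up to t, at least one more failure)
    num = 1.0 - np.exp(-(m(t + x, a, b) - m(t, a, b)))
    den = 1.0 - np.exp(-(a - m(t, a, b)))
    return num / den

def median_time_to_next_failure(t, a, b, hi=1e6):
    return brentq(lambda x: conditional_cdf(x, t, a, b) - 0.5, 1e-9, hi)

# a = expected total faults, b = per-fault detection rate (assumed values)
print(median_time_to_next_failure(t=100.0, a=120.0, b=0.02))
```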

3.
In this research, we investigate stopping rules for software testing and propose two rules based on the impartial reliability model. The impartial reliability difference (IRD-MP) rule considers the difference between the impartial transition-probability reliabilities estimated for the software developer and for consumers at their predetermined prior-information levels. The empirical-impartial reliability difference (EIRD-MP) rule suggests stopping a software test when the computed empirical transition reliability tends toward its estimated impartial transition reliability. To ensure the high-standard requirements of safety-critical software, both rules take the maximum probability (MP) of untested paths into account.

4.
Great importance is attached to the testing phase of the Software Development Life Cycle (SDLC): it is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. Testing, however, needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs), which specify mathematical relationships between the failure phenomenon and time, have proved useful here, and SRGMs that include factors affecting the failure process are more realistic. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used, and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM is proposed in this paper that is flexible enough to describe various software failure/reliability curves. Both testing effort and a time-dependent fault detection rate (FDR) are considered in the modeling, and the time lag between fault identification and removal is also captured. The applicability of the model is shown by validating it on software failure data sets obtained from different real software development projects, with comparisons against established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean of Squared Errors (MSE), etc.
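A sketch of the testing-effort idea: the per-fault detection rate acts on consumed effort W(t) rather than on calendar time. The Rayleigh effort curve and all numbers are assumptions for illustration, not the paper's fitted forms:

```python
import numpy as np

def W(t, alpha, beta):
    # cumulative testing effort, here a Rayleigh curve (assumed form)
    return alpha * (1.0 - np.exp(-beta * t * t / 2.0))

def m(t, a, b, alpha, beta):
    # expected cumulative faults detected by time t, with the detection
    # rate b applied to consumed effort instead of elapsed time
    return a * (1.0 - np.exp(-b * W(t, alpha, beta)))

ts = np.linspace(0, 30, 7)
print([round(m(t, a=150, b=0.05, alpha=60, beta=0.02), 1) for t in ts])
```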

5.
The objective of studying software reliability is to assist software engineers in understanding the probabilistic nature of software failures during the debugging stages and in constructing reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components evolve stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for "imperfect debugging" under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from the Bayesian analysis.
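A minimal Metropolis sketch of Bayesian inference for a Jelinski-Moranda-style multiplicative failure rate, a plausible stand-in for the class of models discussed; the priors, proposals, and synthetic data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(t, N, phi):
    # Jelinski-Moranda: the i-th inter-failure time is Exp(phi * (N - i + 1))
    n = len(t)
    if N < n or phi <= 0:
        return -np.inf
    rates = phi * (N - np.arange(n))     # N, N-1, ..., N-n+1
    return np.sum(np.log(rates) - rates * t)

def metropolis(t, steps=20000):
    # flat priors on the integer N >= n and on log(phi) (sketch assumptions)
    N, phi = len(t) + 5, 0.01            # crude starting values
    out = []
    for _ in range(steps):
        N_p = N + int(rng.integers(-2, 3))        # symmetric integer walk
        phi_p = phi * np.exp(0.1 * rng.normal())  # symmetric walk in log phi
        if np.log(rng.random()) < log_lik(t, N_p, phi_p) - log_lik(t, N, phi):
            N, phi = N_p, phi_p
        out.append((N, phi))
    return np.array(out[steps // 2:])    # discard burn-in

t = rng.exponential(1.0 / (0.02 * np.arange(40, 10, -1)))  # synthetic J-M data
s = metropolis(t)
print("posterior means:", s[:, 0].mean(), s[:, 1].mean())
```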

6.
To model the software failure process accurately with software reliability growth models, incorporating testing effort has been shown to be important. Testing-effort allocation is itself a difficult issue, and it directly affects the software release time when a reliability criterion has to be met. However, with an increasing number of parameters involved in these models, the uncertainty of parameters estimated from the failure data can greatly affect the decision. Hence, it is important to study the impact of these model parameters. In this paper, the sensitivity of the software release time is investigated through various methods, including the one-factor-at-a-time approach, design of experiments and global sensitivity analysis. It is shown that the results from the first two methods may not be accurate enough in the case of a complex nonlinear model. Global sensitivity analysis performs better because it considers the global parameter space. The limitations of the different approaches are also discussed. Finally, to avoid excessive further adjustment of the software release time, interval estimation is recommended, and it can be obtained from the results of the global sensitivity analysis.
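To illustrate the one-factor-at-a-time approach the abstract contrasts with global methods, here is a sketch on a toy Goel-Okumoto release-cost model with a closed-form optimum (all cost coefficients are invented); a global analysis would instead sample the whole parameter space jointly:

```python
import numpy as np

def release_time(a, b, c_test=1.0, c_fix=2.0, c_field=20.0):
    # minimizer of C(T) = c_test*T + c_fix*m(T) + c_field*(a - m(T))
    # for the GO model m(T) = a*(1 - exp(-b*T)); obtained from dC/dT = 0
    return np.log((c_field - c_fix) * a * b / c_test) / b

base = dict(a=100.0, b=0.05)
T0 = release_time(**base)
for k, v in base.items():                # one-factor-at-a-time, +/-10%
    hi = release_time(**{**base, k: 1.1 * v})
    lo = release_time(**{**base, k: 0.9 * v})
    print(k, "normalized effect:", (hi - lo) / (0.2 * T0))
```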

7.
Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. One of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa, generally defined as the ratio of net fault reduction to failures experienced. During the software testing process, FRF can be influenced by many environmental factors, such as imperfect debugging and debugging time lag. Thus, in this paper, we first analyze some real data to observe the trends of FRF, and consider FRF to be a time-variable function. We further study how to integrate a time-variable FRF into software reliability growth modeling. Experimental results show that the proposed models can improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of FRF may affect the release time as well as the development cost.
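A sketch of how a time-variable FRF can enter a growth model: the effective fault reduction B(t) decays over the test, and the mean value function is integrated numerically. The exponential form of B(t) and all constants are assumptions, not the paper's fitted functions:

```python
import numpy as np

def cum_failures(T, a=100.0, phi=0.05, steps=10000):
    # dm/dt = phi * (a - B(t)*m): failure intensity driven by faults not yet
    # (net) reduced, where the fault reduction factor B(t) decays over time
    B = lambda t: 0.9 * np.exp(-0.01 * t) + 0.1   # assumed FRF trajectory
    m, dt = 0.0, T / steps
    for i in range(steps):
        m += phi * (a - B(i * dt) * m) * dt        # forward-Euler step
    return m

print(round(cum_failures(50.0), 1))
```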

8.
This paper presents a conceptual framework and a mathematical formulation for software resource allocation and project selection at the level of software skills. First, we introduce a skill-based framework that considers universities, software companies, and potential projects of a country. Based on this framework, we formulate a linear integer program, PMax, which determines the selection of projects and the allocation of human resources that maximize profit for a given company. We show that PMax is NP-complete. Therefore, we devise a meta-heuristic, called Tabu Select and Greedily Allocate (TSGA), to overcome the computational complexity. When compared to PMax running on CPLEX, TSGA performs 15 times faster with an accuracy of 98% on small to large problems where CPLEX converges. On larger problems where CPLEX does not return an answer, TSGA computes a feasible solution in the order of minutes.
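A toy, brute-force rendering of a PMax-style selection: choose the profit-maximizing set of projects whose skill-hour demands fit the company's capacity per skill. The two skills and all numbers are invented, and the paper's actual ILP also allocates individual people:

```python
from itertools import combinations

# project -> (profit, skill-hour demand); capacities per skill (all assumed)
projects = {"P1": (50, {"java": 100, "test": 40}),
            "P2": (70, {"java": 150, "test": 20}),
            "P3": (40, {"java": 60,  "test": 60})}
capacity = {"java": 220, "test": 80}

best = (0, ())
for r in range(len(projects) + 1):
    for sel in combinations(projects, r):
        need = {s: sum(projects[p][1][s] for p in sel) for s in capacity}
        if all(need[s] <= capacity[s] for s in capacity):
            best = max(best, (sum(projects[p][0] for p in sel), sel))
print(best)   # (110, ('P2', 'P3')) for this toy instance
```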

9.
The component-based approach is concerned with developing software systems by integrating components, focusing on the view that software systems can be built up in a modular fashion. A modular design is a logical collection of several independently developed components assembled under a well-defined software architecture. These components can be developed in-house or obtained commercially from the outside market, making the build-versus-buy decision an important consideration in the development process. Cohesion and coupling (C&C) play a major role in determining system quality in terms of reliability, maintainability and availability: cohesion is the internal interaction of components within a module, while coupling is the external interaction of a module with other modules, i.e. the interaction of components across the modules of the software system. High cohesion and low coupling is one of the important criteria for good software design. Intra-modular coupling density (ICD) is a measure that describes the relationship between cohesion and coupling of modules in a modular software system, and its value lies between zero and one. This paper deals with the selection of the right mix of components for a modular software system under a build-or-buy strategy: a fuzzy bi-criteria optimization model is formulated that simultaneously maximizes ICD and functionality within limits on budget, reliability and delivery time. The model is further extended to incorporate compatibility among the components of the modules, and a case study is devised to explain the formulated model.
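Reading ICD as internal interactions over all interactions touching a module, one common formalization (the paper's exact definition may differ), a minimal sketch:

```python
def icd(intra_edges, inter_edges):
    """Intra-modular coupling density: intra-module interactions divided by
    all interactions involving the module; lies in [0, 1] by construction."""
    total = intra_edges + inter_edges
    return intra_edges / total if total else 0.0

# e.g. a module whose components share 6 internal links and 2 external ones
print(icd(6, 2))  # 0.75: high cohesion relative to coupling
```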

10.
We propose a software reliability model which assumes that there are two types of software failures. The first type is caused by the faults latent in the system before the testing; the second type is caused by the faults regenerated randomly during the testing phase. The former and latter software failure-occurrence phenomena are described by a geometrically decreasing and a constant hazard rate, respectively. Further, this model describes the imperfect debugging environment in which the fault-correction activity corresponding to each software failure is not always performed perfectly. Defining a random variable representing the cumulative number of faults successfully corrected up to a specified time point, we use a Markov process to formulate this model. Several quantitative measures for software reliability assessment are derived from this model. Finally, numerical examples of software reliability analysis based on the actual testing data are presented.
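A simulation sketch of the assumed dynamics: a geometrically decreasing hazard D·k^i for latent faults after i successful corrections, a constant hazard θ for regenerated faults, and a success probability p for each correction (imperfect debugging). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(D=0.5, k=0.8, theta=0.02, p=0.9, failures=20):
    # hazard after i successful corrections: D*k**i (latent faults,
    # geometric decay) + theta (randomly regenerated faults)
    i, times, t = 0, [], 0.0
    for _ in range(failures):
        rate = D * k**i + theta
        t += rng.exponential(1.0 / rate)
        times.append(t)
        if rng.random() < p:   # imperfect debugging: correction may fail
            i += 1
    return times

print(simulate()[-5:])         # the last few (slowing) failure times
```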

11.
The Cross Entropy method has recently been applied to combinatorial optimization problems with promising results. This paper proposes a Cross Entropy based algorithm for reliability optimization of complex systems, where one wants to maximize the reliability of a system through optimal allocation of redundant components while respecting a set of budget constraints. We illustrate the effectiveness of the proposed algorithm on two classes of problems, software system reliability optimization and complex network reliability optimization, by testing it on instances from the literature as well as on randomly generated large scale instances. Furthermore, we show how a Cross Entropy-based algorithm can be fine-tuned by using a training scheme based upon the Response Surface Methodology. Computational results show the effectiveness as well as the robustness of the algorithm on different classes of problems.
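A compact Cross Entropy sketch for redundancy allocation on a toy series system; the component reliabilities, costs, budget, and CE tuning constants are all assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy series system: subsystem j with n_j parallel units has reliability
# 1 - (1 - r_j)**n_j; unit costs c_j and budget B are invented
r = np.array([0.70, 0.80, 0.75])
c = np.array([2.0, 3.0, 2.5])
B, NMAX = 25.0, 5

def fitness(n):
    if (n * c).sum() > B:                 # infeasible allocations score zero
        return 0.0
    return float(np.prod(1.0 - (1.0 - r) ** n))

P = np.full((len(r), NMAX), 1.0 / NMAX)   # categorical sampling distributions
best, best_f = None, -1.0
for _ in range(50):
    pop = np.array([[rng.choice(NMAX, p=P[j]) + 1 for j in range(len(r))]
                    for _ in range(200)])
    scores = np.array([fitness(n) for n in pop])
    i = int(np.argmax(scores))
    if scores[i] > best_f:
        best, best_f = pop[i], scores[i]
    elite = pop[np.argsort(scores)[-20:]]           # top 10% of the sample
    for j in range(len(r)):                         # CE update: elite freq.
        freq = np.bincount(elite[:, j] - 1, minlength=NMAX) / len(elite)
        P[j] = 0.7 * freq + 0.3 * P[j]              # smoothed update
print(best, best_f)
```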

12.
This work is motivated by a particular software reliability problem in a unit of flight control software developed by the Indian Space Research Organization (ISRO), in which the testing of the software is carried out in multiple batches, each consisting of several runs. As errors are found during the runs within a batch, they are noted, but not debugged immediately; they are debugged only at the end of that particular batch of runs. In this work, we introduce a discrete time model suitable for this type of periodic debugging schedule and describe maximum likelihood estimation for the model parameters. This model is used to estimate the reliability of the software. We also develop a method to determine the additional number of error-free test runs required for the estimated reliability to achieve a specific target with some high probability. We analyze the test data on the flight control software of ISRO.
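The "additional error-free runs" computation can be illustrated with a much simpler conjugate model than the paper's batch-debugging likelihood: treat runs as Bernoulli trials with a Beta posterior, and scan for the smallest number n of further successes that pushes P(per-run reliability ≥ R0) above a confidence level γ. Everything here is an assumption for illustration:

```python
from scipy.stats import beta

def extra_runs_needed(failures, successes, R0=0.99, gamma=0.95):
    # Beta-Bernoulli sketch: per-run success probability ~ Beta(1+s, 1+f);
    # find the smallest n of additional error-free runs with
    # P(success prob >= R0 | data) >= gamma
    for n in range(100000):
        post = beta(1 + successes + n, 1 + failures)
        if post.sf(R0) >= gamma:
            return n
    return None

print(extra_runs_needed(failures=2, successes=150))
```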

13.
For the reliability redundancy allocation optimization problem of complex weapon systems, a reliability redundancy allocation optimization model is established that takes maximization of system reliability as the objective function while accounting for constraints such as system cost and weight. A solution algorithm based on marginal-effect analysis of relative increments is proposed: by improving the reliability of the subsystems at each level, the overall system reliability is maximized. Finally, the approach is illustrated with a case study of reliability redundancy allocation optimization for an air-defense weapon system.
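A sketch of marginal (relative-increment) analysis for redundancy allocation: repeatedly add a redundant unit where the reliability gain per unit cost is largest, until nothing more fits the budget. The series system of parallel groups and all numbers are invented:

```python
def allocate(r, cost, budget):
    # greedy marginal analysis over subsystems of a series system
    n = [1] * len(r)
    spent = sum(cost)

    def sys_rel(n):
        prod = 1.0
        for rj, nj in zip(r, n):
            prod *= 1.0 - (1.0 - rj) ** nj   # parallel group in series
        return prod

    while True:
        base, best_j, best_gain = sys_rel(n), None, 0.0
        for j, cj in enumerate(cost):
            if spent + cj > budget:
                continue
            n[j] += 1
            gain = (sys_rel(n) - base) / cj  # relative-increment criterion
            n[j] -= 1
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:
            return n, sys_rel(n)
        n[best_j] += 1
        spent += cost[best_j]

print(allocate([0.75, 0.85, 0.90], [4.0, 5.0, 6.0], budget=40.0))
```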

14.
15.
The fly-by-wire flight control system is critical to flight safety, and an effective software reliability model for it is the basis of software reliability testing and verification. Addressing the software reliability modeling problem of the lateral-channel fly-by-wire control system in the safety design of a particular aircraft type, a modular Markov-based software reliability model is established. A reliability allocation method that jointly considers importance and complexity is proposed, apportioning the reliability target among the submodules. Finally, a worked example verifies the effectiveness of the model and of the allocation method.
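The importance-and-complexity allocation can be sketched as splitting a series-system reliability target R_goal across modules via exponents w_i, so that more complex and less important modules receive looser targets. The weighting rule below is one plausible choice, not necessarily the paper's:

```python
import numpy as np

def allocate_reliability(R_goal, importance, complexity):
    # R_i = R_goal ** w_i with weights w ~ complexity / importance,
    # normalized so that prod(R_i) = R_goal for a series structure
    w = np.array(complexity, float) / np.array(importance, float)
    w = w / w.sum()
    return R_goal ** w

R = allocate_reliability(0.999, importance=[3, 2, 1], complexity=[1, 2, 3])
print(R, R.prod())   # per-module targets; the product recovers 0.999
```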

16.
Software reliability is a rapidly developing discipline. In this paper we model the fault-detection process by Markov processes with decreasing jump intensity, where the intensity function is a power function of the number of faults remaining in the software. The models generalize the software reliability model of Jelinski and Moranda ('Software reliability research', in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972, pp. 465-497). Their main advantage is that we do not assume that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation, and statistical tests also indicate that this may be the case. Numerical results show that the estimate of the expected time to next failure is both reasonable and decreases relatively stably as the number of removed faults is increased.
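A minimal sketch of the power-function intensity: the failure rate is φ·N^α in the number N of remaining faults, with α = 1 recovering Jelinski-Moranda and α = 2 the second-order variant the abstract singles out; φ is an assumed value:

```python
def expected_time_to_next_failure(N_remaining, phi, alpha=2):
    # intensity phi * N**alpha; the sojourn time at N remaining faults is
    # exponential, so its mean is the reciprocal of the intensity
    return 1.0 / (phi * N_remaining ** alpha)

for n in (10, 5, 2, 1):
    print(n, round(expected_time_to_next_failure(n, phi=0.001), 1))
```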

17.
The failure probability of a modular software system depends on the reliabilities of the modules and on the software operational profile. The operational profile estimated in the development phase has inherent uncertainty, because estimation error is inevitable and the profile often changes in the operation phase. The software project manager must take this uncertainty into account so that customers will not suffer an unacceptably large failure probability even when their operational profile deviates from the estimated one. In this paper, we formulate and solve three optimization models for software reliability allocation under an uncertain operational profile. The numerical results indicate that when the uncertainty is taken into account, the additional software development cost required is acceptably small.
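A sketch of the profile-uncertainty reasoning: for the linear failure probability Σ p_i (1 − R_i), the adversarial profile moves mass from the most reliable modules onto the least reliable one, so a designer can evaluate the worst case over a deviation budget ε. The construction and all numbers are assumptions:

```python
import numpy as np

def worst_case_failure(p0, R, eps):
    # move up to eps of profile mass from the most reliable modules onto
    # the least reliable one: worst case for a linear failure probability
    q, R = np.array(p0, float), np.array(R, float)
    worst, left = int(np.argmin(R)), eps
    for i in np.argsort(-R):              # drain the most reliable first
        if i == worst or left <= 0:
            continue
        move = min(left, q[i])
        q[i] -= move
        q[worst] += move
        left -= move
    return float(q @ (1.0 - R))

p0, R = [0.5, 0.3, 0.2], [0.999, 0.99, 0.95]
print(worst_case_failure(p0, R, eps=0.0))   # nominal failure probability
print(worst_case_failure(p0, R, eps=0.1))   # after a 0.1 profile deviation
```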

18.
In this paper, we consider the error detection phenomenon in the testing phase when modifications or improvements can be made to the software during testing. The occurrence of improvements is described by a homogeneous Poisson process with intensity rate λ, and the error detection process is assumed to follow a nonhomogeneous Poisson process (NHPP) with mean value function m(t). Two models are presented, and in one of them we discuss an optimal release policy for the software that takes the occurrences of errors and improvements into account. Finally, we consider the possibility of an improvement removing k errors with probability p_k, k ≥ 0, and develop an NHPP model for the error detection phenomenon in this situation.
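A simulation sketch of the combined process: improvements arrive as a homogeneous Poisson process with rate λ and each removes a random number of errors, while detections occur with an intensity proportional to the errors currently present. The per-improvement removal distribution and all rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(T=100.0, lam=0.1, a=50, b=0.05):
    # two competing event streams: improvements (rate lam) and detections
    # (rate b * errors, a state-dependent stand-in for the NHPP intensity)
    errors, t, detected = a, 0.0, 0
    while t < T and errors > 0:
        rate = lam + b * errors
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        if rng.random() < lam / rate:     # an improvement occurs
            errors = max(0, errors - rng.poisson(2))  # removes K errors,
        else:                             # K ~ Poisson(2) assumed (K=0 allowed)
            errors -= 1                   # an error is detected and removed
            detected += 1
    return detected, errors

print(simulate())
```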

19.
One of the challenging problems for software companies is to find the optimal release time of the software so as to minimize the total cost expended on testing plus the potential penalty cost due to unresolved faults. If the software is for a safety-critical system, the release time becomes even more important, and the criticality of a failure caused by a fault also becomes an important issue. In this paper we develop a total cost model based on the criticality of faults and the cost of their occurrence during different phases of development for the N-version programming scheme, a popular fault-tolerant architecture. The mathematical model is developed using a reliability growth model based on the non-homogeneous Poisson process. Models for the optimal release time under different constraints are developed under the assumption that debugging is imperfect and that there is a penalty for late release of the software. The concept of Failure Mode Effects and Criticality Analysis is used for measuring criticality.
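A toy total-cost model in the same spirit: a testing cost per unit time, a cost per fault fixed during testing, and a larger penalty per fault escaping to the field, minimized over the release time T. The GO-form mean value function and all coefficients are invented:

```python
import numpy as np

def m(T, a=100.0, b=0.05):
    return a * (1.0 - np.exp(-b * T))    # GO-form mean value function

def total_cost(T, c_test=1.0, c_fix=2.0, c_field=20.0, a=100.0, b=0.05):
    # testing-time cost + in-test fix cost + field penalty for the faults
    # still latent at release
    return c_test * T + c_fix * m(T, a, b) + c_field * (a - m(T, a, b))

Ts = np.linspace(0.0, 200.0, 2001)
print("optimal release time ~", Ts[np.argmin(total_cost(Ts))])
```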

20.
Component deployment is a combinatorial optimisation problem in software engineering that aims at finding the best allocation of software components to hardware resources in order to optimise quality attributes, such as reliability. The problem is often constrained because of the limited hardware resources, and the communication network, which may connect only certain resources. Owing to the non-linear nature of the reliability function, current optimisation methods have focused mainly on heuristic or metaheuristic algorithms. These are approximate methods, which find near-optimal solutions in a reasonable amount of time. In this paper, we present a mixed integer linear programming (MILP) formulation of the component deployment problem. We design a set of experiments where we compare the MILP solver to methods previously used to solve this problem. Results show that the MILP solver is efficient in finding feasible solutions even where other methods fail, or prove infeasibility where feasible solutions do not exist.
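A tiny MILP sketch of component deployment using the PuLP modeling library: maximizing the sum of log-reliabilities is a standard linearization of the product-form reliability, here under a single memory-capacity constraint per host. The instance and the linearization choice are assumptions, not the paper's formulation:

```python
from pulp import LpProblem, LpVariable, lpSum, LpMaximize, LpBinary
import math

# toy instance: 3 components, 2 hosts, per-placement reliabilities rel[c][h]
comps, hosts = range(3), range(2)
mem = [2, 1, 3]                 # memory demand per component (assumed)
cap = [4, 4]                    # memory capacity per host (assumed)
rel = [[0.99, 0.95], [0.98, 0.97], [0.96, 0.99]]

prob = LpProblem("deploy", LpMaximize)
x = [[LpVariable(f"x_{c}_{h}", cat=LpBinary) for h in hosts] for c in comps]
# objective: sum of log-reliabilities == log of the product-form reliability
prob += lpSum(math.log(rel[c][h]) * x[c][h] for c in comps for h in hosts)
for c in comps:                 # each component placed exactly once
    prob += lpSum(x[c][h] for h in hosts) == 1
for h in hosts:                 # host memory capacity
    prob += lpSum(mem[c] * x[c][h] for c in comps) <= cap[h]
prob.solve()
print([[int(v.value()) for v in row] for row in x])
```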
