Similar Articles
20 similar articles found (search time: 567 ms)
1.
Slope failure mechanisms (e.g., why and where slope failure occurs) are usually unknown prior to slope stability analysis. Several possible failure scenarios (e.g., sliding along different slip surfaces) can be assumed, leading to a number of scenario failure events of slope stability. How to account rationally for the various scenario failure events in slope reliability analysis, and how to identify the key failure events that contribute most to slope failure, are critical questions in slope engineering. In this study, these questions are resolved by developing an efficient computer-based simulation method for slope system reliability analysis. The proposed approach decomposes a slope system failure event into a series of scenario failure events representing possible failure scenarios and calculates their occurrence probabilities by a single run of an advanced Monte Carlo simulation (MCS) method called generalized Subset Simulation (GSS). Using the GSS results, representative failure events (RFEs) that are considered relatively independent are identified from the scenario failure events using a probabilistic network evaluation technique. Their relative contributions are assessed quantitatively, and on this basis the key failure events are determined. The proposed approach is illustrated using a soil slope example and a rock slope example. It is shown that the approach provides proper estimates of the occurrence probabilities of the slope system failure event and the scenario failure events in a single GSS run, avoiding repeated simulations for each failure event. Compared with direct MCS, the proposed approach significantly improves computational efficiency, particularly for failure events with small failure probabilities. Key failure events of slope stability are determined among the scenario failure events in a cost-effective manner. Such information is valuable for making slope design decisions and planning remedial measures.
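The GSS of this paper extends standard subset simulation. As a point of reference, here is a minimal sketch of plain subset simulation for a single failure event in standard normal space; the limit-state function, the level probability p0, and the simplified modified-Metropolis repopulation step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=20, seed=0):
    """Estimate p_f = P(g(X) <= 0) for X ~ N(0, I) by subset simulation.
    Intermediate thresholds are set adaptively so each level has
    conditional probability ~p0; conditional samples come from a
    simplified component-wise modified Metropolis step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    p_f = 1.0
    for _ in range(max_levels):
        b = np.quantile(gx, p0)               # adaptive intermediate threshold
        if b <= 0.0:                          # failure domain reached
            break
        p_f *= p0
        keep = np.argsort(gx)[: max(1, int(p0 * n))]
        x, gx = x[keep], gx[keep]
        while len(x) < n:                     # repopulate the current level
            i = rng.integers(len(x))
            cand = x[i].copy()
            for d in range(dim):              # per-component Metropolis step
                c = cand[d] + rng.normal()
                if rng.random() < np.exp(0.5 * (cand[d] ** 2 - c ** 2)):
                    cand[d] = c
            gc = g(cand)
            if gc <= b:                       # stay inside the current subset
                x = np.vstack([x, cand]); gx = np.append(gx, gc)
            else:                             # reject: repeat the seed sample
                x = np.vstack([x, x[i]]); gx = np.append(gx, gx[i])
    return p_f * np.mean(gx <= 0.0)

# linear limit state with exact p_f = Phi(-3) ~ 1.35e-3
print(subset_simulation(lambda x: 3.0 - x[0], dim=10))
```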

2.
This paper presents a general framework for the construction of Monte-Carlo algorithms for the solution of enumeration problems. As an application of the general framework, a Monte-Carlo method is constructed for estimating the failure probability of a multiterminal planar network whose edges are subject to independent random failures. The method is guaranteed to be effective when the failure probabilities of the edges are sufficiently small.
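For contrast with the paper's method, the crude Monte Carlo baseline for multiterminal network failure probability is easy to state; the grid example and edge failure probability below are invented for illustration, and this naive estimator degrades exactly in the small-probability regime the paper targets.

```python
import random

def network_failure_prob(nodes, edges, terminals, edge_fail_p,
                         trials=100_000, seed=0):
    """Crude Monte Carlo estimate of the probability that the terminal
    nodes are NOT all mutually connected when each edge fails
    independently with probability edge_fail_p."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # sample surviving edges, then check terminal connectivity by DFS
        alive = [e for e in edges if rng.random() >= edge_fail_p]
        adj = {v: [] for v in nodes}
        for u, v in alive:
            adj[u].append(v); adj[v].append(u)
        stack, seen = [terminals[0]], {terminals[0]}
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        if not all(t in seen for t in terminals):
            failures += 1
    return failures / trials

# 2x2 grid with terminals at opposite corners (hypothetical example)
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
print(network_failure_prob(nodes, edges, terminals=[0, 3], edge_fail_p=0.05))
```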

3.
Christian Bucher, PAMM, 2015, 15(1): 549-550
Monte Carlo methods are highly versatile in their application to the reliability analysis of high-dimensional nonlinear structural systems. In addition, the computational efficiency of the Monte Carlo method is not adversely affected by the dimensionality of the problem. Crude Monte Carlo techniques, however, are very inefficient for the extremely small failure probabilities typically required of sensitive structural systems. Methods that increase efficiency for small failure probabilities while keeping the adverse influence of dimensionality small are therefore desirable. One such method is asymptotic sampling. Within this method, well-known asymptotic properties of the reliability index with respect to scaling of the basic variables are exploited to construct a regression model that determines the reliability index for extremely small failure probabilities with high precision using a moderate number of Monte Carlo samples.
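A sketch of the idea, under simplifying assumptions (standard normal basic variables and a hand-picked scaling grid): crude Monte Carlo estimates the reliability index at inflated standard deviations, where failures are frequent, and the regression model beta(f) = A·f + B/f is extrapolated to the unscaled case f = 1.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_sampling(g, dim, scales=(0.4, 0.5, 0.6, 0.7), n=20_000, seed=0):
    """Asymptotic sampling sketch: inflate the standard deviation of the
    basic variables by 1/f (f < 1), estimate beta(f) by crude Monte Carlo,
    fit beta(f) = A*f + B/f by least squares, and extrapolate to f = 1."""
    rng = np.random.default_rng(seed)
    betas = []
    for f in scales:
        x = rng.standard_normal((n, dim)) / f      # inflated basic variables
        pf = np.mean([g(xi) <= 0 for xi in x])     # failures are common here
        betas.append(-norm.ppf(pf))
    f = np.asarray(scales)
    M = np.column_stack([f, 1.0 / f])              # regression design matrix
    A, B = np.linalg.lstsq(M, np.asarray(betas), rcond=None)[0]
    beta1 = A + B                                  # extrapolated beta at f = 1
    return beta1, norm.cdf(-beta1)

# linear limit state with exact beta = 4 (p_f ~ 3.2e-5): far too rare for
# crude MC with 2e4 samples, but recoverable by extrapolation
print(asymptotic_sampling(lambda x: 4.0 - x[0], dim=10))
```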

4.
A given finite set of tasks, having known non-negligible failure probabilities and known costs (or rewards) for their performance, can be performed sequentially until either one of the tasks fails or all tasks have been executed. The allowable task performance sequences are constrained only by certain precedence requirements, which specify that certain tasks must be performed before certain others. Given the individual task failure probabilities and task costs, along with the intertask precedence requirements, the problem is to determine an optimal task performance sequence with minimal expected cost (or maximal expected reward). A number of potential applications of such “task ordering” problems are described, including R&D project organization, design of screening procedures, and determining testing points for sequential manufacturing processes. The main results of this paper are a number of reduction theorems that lead to a very efficient optimization algorithm for a large class of task ordering problems. Although these theorems are not by themselves sufficient to yield a fast optimization algorithm for the general problem, we show how their use can improve upon exhaustive search techniques.
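When there are no precedence constraints, the optimal order follows from a standard adjacent-interchange argument: schedule tasks in increasing order of cost divided by failure probability. A sketch with hypothetical task data:

```python
def optimal_sequence(tasks):
    """Tasks are (name, cost, fail_prob) tuples. Without precedence
    constraints, swapping adjacent tasks i, j shows the expected total
    cost is minimized by sorting on cost / fail_prob ascending."""
    return sorted(tasks, key=lambda t: t[1] / t[2])

def expected_cost(seq):
    """Expected cost when tasks run in order until one fails."""
    total, p_reach = 0.0, 1.0
    for _name, cost, q in seq:
        total += p_reach * cost      # cost paid only if this task is reached
        p_reach *= (1.0 - q)         # probability of surviving this task
    return total

tasks = [("inspect", 5.0, 0.30), ("burn-in", 20.0, 0.05), ("x-ray", 8.0, 0.10)]
seq = optimal_sequence(tasks)
print([t[0] for t in seq], expected_cost(seq))  # inspect, x-ray, burn-in
```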

5.
In this paper, we consider a repairable system in which each failure can be one of two types. One is a minor failure that can be corrected with minimal repair, whereas the other is a catastrophic failure that destroys the system. The total number of failures until the catastrophic failure is a positive random variable with a given probability vector. It is assumed that there is some partial information about the failure status of the system, and various properties of the conditional probability of system failure are studied. Mixture representations of the reliability function of the system in terms of the reliability functions of the residual lifetimes of record values are obtained. Finally, some stochastic properties of the conditional probabilities and the residual lifetimes of two systems are discussed.

6.
The optimal engineering design problem consists of minimizing the expected total cost of an infrastructure or equipment, including construction cost and expected repair cost, the latter depending on the failure probability of each failure mode. The solution is complex because evaluating the failure probabilities with First-Order Reliability Methods (FORM) involves one optimization problem per failure mode. This paper formulates the optimal engineering design problem as a bi-level problem, i.e., an optimization problem constrained by a collection of other interrelated optimization problems. The structure of this bi-level problem is exploited using Benders’ decomposition to develop an efficient solution algorithm. An advantage of the proposed approach is that the design optimization and the reliability calculations are decoupled, resulting in a structurally simple algorithm with high computational efficiency. Bi-level problems are non-convex by nature and the Benders algorithm is intended for convex optimization; however, possible non-convexities can be detected and tackled using simple heuristics. The practical interest of the approach is illustrated through a realistic but simple case study: a breakwater design example with two failure modes, overtopping and armor instability.
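The toy sketch below illustrates the decoupling the paper advocates, though with a plain nested loop rather than the authors' Benders cuts: an inner FORM problem computes the reliability index for a candidate crest height, and an outer scalar optimizer trades construction cost against expected repair cost. The limit state and all cost figures are invented.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import norm

# toy breakwater-style limit state: failure (overtopping) when the random
# load 5 + u0 + 0.5*u1 exceeds the crest height h, with u ~ N(0, I) in
# standard normal space; all numbers are illustrative
def G(h, u):
    return h - (5.0 + u[0] + 0.5 * u[1])

def form_beta(h):
    """Inner FORM problem: beta(h) = min ||u|| subject to G(h, u) <= 0."""
    cons = {"type": "ineq", "fun": lambda u: -G(h, u)}
    res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]), constraints=cons)
    return np.sqrt(res.fun)

def total_cost(h, c_build=1.0, c_repair=500.0):
    """Construction cost plus expected repair cost via the FORM estimate."""
    return c_build * h + c_repair * norm.cdf(-form_beta(h))

opt = minimize_scalar(total_cost, bounds=(5.5, 12.0), method="bounded")
print(f"optimal crest height h* = {opt.x:.2f}, beta = {form_beta(opt.x):.2f}")
```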

7.
The optimal size of k is specified for two-state k-out-of-n systems that may function or fail in either state. It is assumed that the steady-state success and failure probabilities are not known exactly. The problem is reduced to finding the saddle-point solution of a minimax optimization problem. An example shows that the minimax design is robust with respect to this uncertainty.
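For reference, the reliability of a standard k-out-of-n:G system with i.i.d. components is a binomial tail; the paper's minimax twist then chooses k when the component probability is only known to lie in a set. A sketch of the basic computation:

```python
from math import comb

def k_out_of_n_reliability(n, k, p):
    """Probability that at least k of n i.i.d. components (each working
    with probability p) work, i.e. a k-out-of-n:G system survives."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(k_out_of_n_reliability(n=5, k=3, p=0.9))   # ~0.991
```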

8.
A key problem in barrier-lake (landslide-dam) risk mitigation is estimating the probability of dam breach under different response measures, which is an important research topic. This paper proposes a fault tree analysis (FTA) based method for estimating the breach probability of a barrier lake. First, drawing on the practical background of barrier-lake risk mitigation, the basic structure of a dam-breach fault tree is constructed using FTA. Then, the occurrence probability of each basic event in the fault tree within different time periods, under a given response measure, is determined by combining domain knowledge, analysis of historical cases, expert judgment, and the fusion of judgments from multiple experts. Further, based on the constructed fault tree and the basic-event probabilities, a method is given for estimating the probability of a dam-breach event in different time periods. Finally, a case study demonstrates the feasibility and effectiveness of the proposed method.
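Once basic-event probabilities for a time period have been elicited, the top-event (dam-breach) probability follows from the gate logic, assuming independent basic events. A minimal sketch with an invented two-level tree:

```python
def and_gate(probs):
    """Probability that all independent basic events occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(probs):
    """Probability that at least one independent basic event occurs."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# hypothetical tree: breach if (overtopping) OR (seepage AND piping)
p_breach = or_gate([0.12, and_gate([0.30, 0.20])])
print(p_breach)   # 1 - (1-0.12)*(1-0.06) = 0.1728
```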

9.
Crosswind stability against overturning is a major national design criterion for high-speed railway vehicles. With increasing interoperability in Europe it has also become an important international concern. Modern lightweight trains in particular are at risk, and countermeasures such as wind fences or extra ballast in the underbelly are very expensive. In recent years, efforts have been made to derive a uniform rule for certifying railway vehicles, and probabilistic methods in particular have been proposed; such probabilistic techniques are already common design criteria for wind turbines. This paper presents a method to compute the reliability of railway vehicles under strong crosswind. Taking into account the given gust signal and the high-frequency turbulent fluctuations of the wind, the response of a simplified train model is computed. The main failure criterion for determining the reliability is the lowest wheel-rail contact force of the vehicle.

10.
A joint model for linear degradation and competing failure data with partial renewals is proposed. Non-parametric estimation procedures for failure intensities and failure probabilities as functions of degradation level are given. Asymptotic properties of the estimators are investigated. To cite this article: V. Bagdonavičius et al., C. R. Acad. Sci. Paris, Ser. I 342 (2006).

11.
A complex discrete warm standby system with loss of units
A redundant complex discrete system is modelled through phase-type distributions. The system is composed of a finite number of units, one online and the others in a warm standby arrangement. The units may undergo internal wear and/or accidental external failures. The latter may be repairable or non-repairable for the online unit, while failures of the standby units are always repairable. The repairability of accidental failures of the online unit may or may not depend on the time elapsed up to their occurrence. The time to failure of the online unit, the time to accidental failure of the warm standby units, and the repair time are assumed to be phase-type distributed. When a non-repairable failure occurs, the corresponding unit is removed; if all units are removed, the system is reinitialized. The model is built and the transient and stationary distributions are determined. Several measures of interest, such as transition probabilities, availability, and the conditional probability of failure, are obtained in both transient and stationary regimes. All measures are given in a matrix-algebraic algorithmic form under which the model can be applied, and the algorithms have been implemented in Matlab. An optimization is performed when costs and rewards are present in the system. A numerical example illustrates the results, and the CPU times for the computations are reported, showing the utility of the algorithms.
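The building block of such models is the (discrete) phase-type distribution: the time to absorption of a Markov chain with transient sub-matrix T. A sketch of its probability mass function, using an invented two-phase wear process rather than the paper's model:

```python
import numpy as np

def dph_pmf(alpha, T, kmax):
    """pmf of a discrete phase-type distribution: alpha is the initial
    distribution over transient states, T the sub-transition matrix, and
    t0 = 1 - T·1 the per-state absorption probabilities, so that
    P(X = k) = alpha · T^(k-1) · t0."""
    alpha, T = np.asarray(alpha, float), np.asarray(T, float)
    t0 = 1.0 - T.sum(axis=1)          # exit (absorption) probabilities
    probs, a = [], alpha.copy()
    for _ in range(1, kmax + 1):
        probs.append(a @ t0)          # P(X = k)
        a = a @ T                     # advance one step within the phases
    return np.array(probs)

# hypothetical two transient phases of accumulating wear
alpha = [1.0, 0.0]
T = [[0.7, 0.2],
     [0.0, 0.6]]
pmf = dph_pmf(alpha, T, 60)
print(pmf[:5], pmf.sum())             # pmf sums to ~1 for large kmax
```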

12.
This contribution presents an approach for accounting for imprecise data within an optimization task in engineering applications. To specify imprecise data, the concept of imprecise probabilities is utilized, applying the generalized uncertainty model of fuzzy randomness. Since the uncertainty affects both the objective function and the constraints, the optimum and the corresponding design are themselves imprecise. With a view to decision making in engineering applications, the optimization is combined with information-reducing methods, e.g., determination of failure probabilities, defuzzification, and robustness assessment. The introduced methods and algorithms focus on a numerical treatment suitable for solving nonlinear, industry-sized problems.

13.
In this paper, a dynamic evaluation of the multistate weighted k-out-of-n:F system is presented from an unreliability viewpoint. The expected failure cost of components is used as an unreliability index. Using failure cost makes it possible to employ financial concepts in estimating system unreliability, so that system unreliability and system cost can be compared easily when making decisions. The component probabilities are computed over time to model the dynamic behavior of the system, and the whole system is assessed with a recursive algorithm. As a result, a bi-objective optimization model can be developed to find optimal maintenance strategies. Finally, the application of the proposed model is investigated via a transportation-system case study; a Matlab program is developed for the case, and a genetic algorithm is used to solve the optimization model.
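The recursive evaluation the abstract refers to can be sketched for a static weighted k-out-of-n system; the weights and probabilities below are hypothetical, and the paper's version additionally tracks multistate, time-varying probabilities and failure costs.

```python
from functools import lru_cache

def weighted_k_out_of_n(weights, probs, k):
    """Probability that the total weight of working components reaches k
    (weighted k-out-of-n:G); one minus this is the unreliability of the
    dual :F system.  Classic O(n*k) recursion over components."""
    n = len(weights)

    @lru_cache(maxsize=None)
    def R(i, need):
        if need <= 0:
            return 1.0                    # weight threshold already met
        if i == 0:
            return 0.0                    # no components left to contribute
        w, p = weights[i - 1], probs[i - 1]
        return p * R(i - 1, need - w) + (1 - p) * R(i - 1, need)

    return R(n, k)

print(weighted_k_out_of_n(weights=[2, 3, 5], probs=[0.9, 0.8, 0.7], k=5))
# enumeration check: 0.7 + 0.3 * (0.9 * 0.8) = 0.916
```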

14.
The safety analysis of systems with a nonlinear performance function and a small probability of failure is a challenge in the field of reliability analysis. In this study, an efficient approach is presented for approximating small failure probabilities. To this end, by introducing probability density function (PDF) control variates, the original failure probability integral is reformulated using the Control Variates Technique (CVT). Accordingly, through adaptive cooperation of subset simulation (SubSim) and the CVT, a new formulation is offered for approximating small failure probabilities. The proposed formulation involves a probability term (resulting from a fast-moving SubSim) and an adaptive weighting term that refines the obtained probability. Several numerical and engineering problems, involving nonlinear performance functions and system-level reliability problems, are solved by the proposed approach and by common reliability methods. The results show that the proposed simulation approach is not only more efficient but also more robust than common reliability methods, and it shows good potential for application in engineering reliability problems.
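As background, the generic control-variates idea is to subtract a correlated quantity with known mean, scaled by the estimated optimal coefficient. The sketch below shows plain CVT on a toy expectation, not the paper's PDF control variates coupled with SubSim:

```python
import numpy as np

def control_variate_mean(samples_f, samples_h, mu_h):
    """Control-variates estimator of E[f]: subtract a correlated control h
    with known mean mu_h, scaled by the estimated optimal coefficient."""
    f, h = np.asarray(samples_f), np.asarray(samples_h)
    C = np.cov(f, h)
    beta = C[0, 1] / C[1, 1]              # variance-minimizing coefficient
    return np.mean(f - beta * (h - mu_h))

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
f = np.exp(x)                             # target: E[e^X] = e^0.5 ~ 1.6487
h = x                                     # control variate with known mean 0
print(np.mean(f), control_variate_mean(f, h, mu_h=0.0))
```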

15.
High-cycle fatigue failure of hard steels is dominated by a long crack-initiation phase and a short crack-propagation phase, with inclusions or surface defects as the crack initiation sites. Consequently, the endurance probability of a hard steel part depends on the size distribution of its crack-initiating inclusions or surface defects. These considerations lead to the weakest-link concept, which makes it possible to calculate local and total endurance probabilities of the cycled parts. From these probabilities, the endurance limit and the probable crack initiation sites can be predicted. The prediction is based on knowledge of the endurance limits under three different load conditions; from these data and fracture-mechanics relations for the different inclusion types, the distributions of the crack-initiating inclusion sizes can be predicted.
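The weakest-link computation itself is a product of local endurance probabilities over the potential initiation sites; a minimal sketch with invented local failure probabilities:

```python
import numpy as np

def weakest_link_endurance(local_failure_probs):
    """Weakest-link model: the part endures only if every potential crack
    initiation site (inclusion or surface defect) endures, so the total
    endurance probability is the product of the local ones."""
    return float(np.prod(1.0 - np.asarray(local_failure_probs)))

# hypothetical local failure probabilities at inclusion sites of a cycled part
print(weakest_link_endurance([0.01, 0.003, 0.02]))   # ~0.967
```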

16.
An efficient approach, called augmented line sampling, is proposed to locally evaluate the failure probability function (FPF) in structural reliability-based design using only a single reliability analysis run of line sampling. The novelty of this approach is that it re-uses the information from a single line sampling analysis to construct the FPF estimate, so repeated evaluations of the failure probability are avoided. It is shown that, when the design parameters are the distribution parameters of basic random variables, the desired information about the FPF can be extracted from a single implementation of line sampling, which is itself a highly efficient and widely used reliability analysis method. The proposed method thus extends traditional line sampling from failure probability estimation to evaluation of the FPF, which is a challenging task. The required computational effort is neither particularly sensitive to the number of uncertain parameters nor does it grow with the number of design parameters. Numerical examples are given to show the advantages of the approach.
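A compact sketch of basic line sampling (the paper augments it to recover the whole FPF): pick an important direction alpha, and on each random line solve a one-dimensional root-finding problem to obtain an exact conditional Gaussian probability. The root bracket (0, 20) and the linear example are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def line_sampling(g, alpha, dim, n_lines=100, seed=0):
    """Line sampling estimate of P(g(X) <= 0), X ~ N(0, I).  alpha is a
    unit vector pointing toward the failure region; along each random line
    x = x_perp + c*alpha we root-find the limit state and accumulate the
    exact 1-D Gaussian probability Phi(-c)."""
    rng = np.random.default_rng(seed)
    alpha = np.asarray(alpha, float)
    alpha /= np.linalg.norm(alpha)
    pfs = []
    for _ in range(n_lines):
        x = rng.standard_normal(dim)
        x_perp = x - (x @ alpha) * alpha          # project out the alpha part
        h = lambda c: g(x_perp + c * alpha)
        c_star = brentq(h, 0.0, 20.0)  # assumes the line crosses g=0 in (0, 20)
        pfs.append(norm.cdf(-c_star))
    return float(np.mean(pfs))

# linear limit state g(x) = 3 - x1: exact p_f = Phi(-3) ~ 1.35e-3
print(line_sampling(lambda x: 3.0 - x[0], alpha=np.eye(4)[0], dim=4))
```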

17.
This paper examines concepts of independence for full conditional probabilities, that is, for set functions that encode conditional probabilities as primary objects and allow conditioning on events of probability zero. Full conditional probabilities have been used in economics, philosophy, statistics, and artificial intelligence. This paper characterizes the structure of full conditional probabilities under various concepts of independence; limitations of existing concepts are examined with respect to the theory of Bayesian networks. The concept of layer independence (factorization across layers) is introduced; this seems to be the first concept of independence for full conditional probabilities that satisfies the graphoid properties of Symmetry, Redundancy, Decomposition, Weak Union, and Contraction. A theory of Bayesian networks is proposed in which full conditional probabilities are encoded using infinitesimals, with a brief discussion of hyperreal full conditional probabilities.

18.
We consider unrecoverable homogeneous multi-state systems with gradual failures, in which each component can work at M + 1 linearly ordered levels of performance. The underlying failure process of each component is a homogeneous Markov process such that the performance level of a component can only drop by one level at a time, and failures are independent across components. We derive the probability distribution of the random vector X, representing the state of the system at the moment of failure, and use it to test the hypothesis of equal transition intensities. Under the assumption that these intensities are equal, we derive method-of-moments estimators for the probabilities of failure in a given state vector and for the failure intensity. Finally, we calculate the reliability function for such systems.

19.
In this paper, dynamic and stationary measures of the importance of a component in a binary system are considered. To arrive at explicit results, we assume the performance processes of the components to be independent and the system to be coherent. The Barlow–Proschan and the Natvig measures, in particular, are treated in detail, and a series of new results and approaches are given. For components not undergoing repair, both measures are shown to be sensible. Defining reasonable measures of component importance for repairable systems is a challenge; a basic idea here is to also take a so-called dual term into account. According to the extended Barlow–Proschan measure, a component is important if there are high probabilities both that its failure is the cause of system failure and that its repair is the cause of system repair. Even with this extension, results for the stationary Barlow–Proschan measure are not satisfactory. According to the extended Natvig measure, a component is important if, by failing, it strongly reduces the expected system uptime and, by being repaired, it strongly reduces the expected system downtime. With this extension, the results for the stationary Natvig measure seem very sensible.
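The Barlow–Proschan measure averages, over the component's failure-time distribution, the probability that the component's failure coincides with system failure; its static ingredient is the Birnbaum importance, sketched here by exhaustive enumeration of a small coherent system (a 2-out-of-3 example, chosen for illustration):

```python
from itertools import product

def system_reliability(phi, p, force=None):
    """Exact reliability of a binary coherent system by enumeration.
    phi maps a 0/1 state vector to 0/1; force pins components to states."""
    force = force or {}
    r = 0.0
    for state in product([0, 1], repeat=len(p)):
        if any(state[i] != v for i, v in force.items()):
            continue
        pr = 1.0
        for j, s in enumerate(state):
            if j not in force:
                pr *= p[j] if s else (1 - p[j])
        r += pr * phi(state)
    return r

def birnbaum(phi, p, i):
    """Birnbaum importance I_B(i) = h(1_i, p) - h(0_i, p)."""
    return (system_reliability(phi, p, {i: 1})
            - system_reliability(phi, p, {i: 0}))

phi = lambda s: int(sum(s) >= 2)          # 2-out-of-3:G structure function
p = [0.9, 0.8, 0.7]
print([round(birnbaum(phi, p, i), 3) for i in range(3)])  # [0.38, 0.34, 0.26]
```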

20.
Stochastic Analysis and Applications, 2013, 31(4): 849-864
This paper considers a Markovian imperfect software debugging model incorporating two types of faults and derives several measures, including the first passage time distribution. When the debugging process following a failure is completed, the fault that caused the failure is either removed from the fault content with probability p or remains in the system with probability 1 − p. By defining the transition probabilities of the debugging process, we derive the distribution of the first passage time to a prespecified number of fault removals and evaluate the expected numbers of perfect debuggings and debugging completions up to a specified time. The availability function of the software system, i.e., the probability that the software is in a working state at a given time, is also derived, yielding the availability and working probability of the system. Throughout the paper, the debugging time is treated as random, and a distribution is assumed for it. Numerical examples are provided for illustrative purposes.
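A toy version of the imperfect-debugging mechanics, assuming exponential debugging times rather than the paper's general distribution: each completed debugging removes the fault with probability p, so removals form a thinned Poisson process and the first-passage time to m removals has mean m / (p · rate).

```python
import numpy as np

def expected_debuggings_to_remove(m, p):
    """Imperfect debugging: each completed debugging is perfect with
    probability p, so on average m / p debuggings yield m removals."""
    return m / p

def simulate_first_passage(m, p, rate=2.0, trials=10_000, seed=0):
    """Monte Carlo check: debugging completions arrive as a Poisson process
    with the given rate; measure the time until m perfect removals."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        removed, t = 0, 0.0
        while removed < m:
            t += rng.exponential(1.0 / rate)   # next debugging completion
            removed += rng.random() < p        # perfect with probability p
        times.append(t)
    return np.mean(times)                      # theory: m / (p * rate)

print(expected_debuggings_to_remove(5, 0.8))   # 6.25 debuggings
print(simulate_first_passage(5, 0.8))          # ~3.125 time units
```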
