Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
The goal of this paper is to promote computational thinking among mathematics, engineering, science and technology students through hands-on computer experiments. These activities have the potential to empower students to learn, create and invent with technology, and they engage computational thinking through simulations, visualizations and data analysis. We present nine computer experiments, and suggest a few more, with applications to calculus, probability and data analysis. We use the free (open-source) statistical programming language R; our goal is to give a taste of what R offers rather than to present a comprehensive tutorial on the language. In our experience, these kinds of interactive computer activities can be easily integrated into a smart classroom. Furthermore, they tend to keep students motivated and actively engaged in learning, problem solving and developing a better intuition for complex mathematical concepts.
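To give a concrete flavour of the hands-on experiments described above, here is a minimal sketch of a simulation-based probability experiment, written in Python for illustration (the paper itself uses R); the birthday-collision example is an illustrative choice, not one of the paper's nine experiments.

```python
import numpy as np

def birthday_collision_prob(n_people, n_trials=100_000, seed=1):
    """Estimate by simulation the probability that at least two of
    n_people share a birthday (365 equally likely days assumed)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        days = rng.integers(0, 365, size=n_people)
        if len(np.unique(days)) < n_people:
            hits += 1
    return hits / n_trials

def birthday_collision_exact(n_people):
    """Exact value, for comparing simulation against theory."""
    p_no_match = np.prod(1 - np.arange(n_people) / 365)
    return 1 - p_no_match

for n in (10, 23, 40):
    print(n, birthday_collision_prob(n), birthday_collision_exact(n))
```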

2.
Network reliability is a performance indicator of computer/communication networks used to measure their quality level. However, improving or maximizing network reliability is costly. This study attempts to maximize network reliability at minimal cost by finding the optimal transmission line assignment; these two conflicting objectives complicate the decision maker's task. In this study, a set of transmission lines is ready to be assigned to the computer network, and the computer network associated with any transmission line assignment is regarded as a stochastic computer network (SCN) because of the multistate transmission lines. Network reliability is therefore the probability of successfully transmitting a specified amount of data through the SCN. To solve this multi-objective programming problem, this study proposes an approach integrating the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). NSGA-II searches for the Pareto set, with network reliability evaluated in terms of minimal paths and the Recursive Sum of Disjoint Products (RSDP). Subsequently, TOPSIS determines the best compromise solution. Several real computer networks serve to demonstrate the proposed approach.
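As a rough illustration of the TOPSIS step described above (choosing the best compromise from the Pareto set produced by NSGA-II), the following sketch ranks a handful of hypothetical (reliability, cost) solutions; the decision matrix, weights and criterion directions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.
    benefit[j] is True if criterion j should be maximized."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # vector-normalize each criterion, then apply the weights
    V = w * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti,  axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# illustrative Pareto set: (network reliability, assignment cost)
pareto = [(0.92, 140.0), (0.95, 180.0), (0.97, 260.0)]
order, score = topsis(pareto, weights=[0.5, 0.5], benefit=[True, False])
print("best compromise:", pareto[order[0]], "closeness:", score[order[0]])
```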

3.
An efficient approach, called augmented line sampling, is proposed to locally evaluate the failure probability function (FPF) in structural reliability-based design using only one reliability analysis run of line sampling. The novelty of this approach is that it re-uses the information from a single line sampling analysis to construct the FPF estimate, so repeated evaluations of the failure probability can be avoided. It is shown that, when the design parameters are the distribution parameters of the basic random variables, the desired information about the FPF can be extracted from a single implementation of line sampling. Line sampling is a highly efficient and widely used reliability analysis method; the proposed method extends traditional line sampling from failure probability estimation to evaluation of the FPF, which is a challenging task. The required computational effort is relatively insensitive to the number of uncertain parameters and does not grow with the number of design parameters. Numerical examples are given to show the advantages of the approach.
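A minimal sketch of plain line sampling, the base method the augmented approach re-uses: it assumes a standard-normal input space, a known important direction and a single limit-state root along each line; the FPF re-use step that constitutes the paper's contribution is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def line_sampling(g, alpha, dim, n_lines=50, seed=0, c_max=8.0):
    """Plain line sampling in standard-normal space.
    g(u) <= 0 denotes failure; alpha points towards the failure domain."""
    rng = np.random.default_rng(seed)
    alpha = np.asarray(alpha) / np.linalg.norm(alpha)
    p = np.empty(n_lines)
    for i in range(n_lines):
        u = rng.standard_normal(dim)
        u_perp = u - (u @ alpha) * alpha        # component orthogonal to alpha
        line = lambda c: g(u_perp + c * alpha)  # 1-D cut along the important direction
        # distance to the limit state along the line (assumes one root in [0, c_max])
        c_star = brentq(line, 0.0, c_max)
        p[i] = norm.sf(c_star)                  # conditional failure probability of this line
    return p.mean(), p.std(ddof=1) / np.sqrt(n_lines)

# toy linear limit state: failure when u1 + u2 >= 3*sqrt(2); exact Pf = Phi(-3)
g = lambda u: 3.0 * np.sqrt(2.0) - (u[0] + u[1])
alpha = np.array([1.0, 1.0])
print(line_sampling(g, alpha, dim=2))           # ~1.35e-3
```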

4.
High-dimensional reliability analysis is still an open challenge in the structural reliability community. To address this problem, a new sampling approach, named the good lattice point method based partially stratified sampling, is proposed within the fractional moments-based maximum entropy method. In this approach, the original sample space is first partitioned into several orthogonal low-dimensional sample spaces, say of two and one dimensions. The samples in each low-dimensional sample space are then generated by the good lattice point method; these are deterministic points with a strong variance-reduction property. Finally, the samples in the original space are obtained by randomly pairing the samples in low dimensions, which may also significantly reduce the variance in high-dimensional cases. This sampling approach is then applied to evaluate the low-order fractional moments in the maximum entropy method, balancing efficiency and accuracy for high-dimensional reliability problems. The probability density function of a performance function involving a large number of random inputs can thus be derived, and the reliability can be evaluated straightforwardly by a simple integral over that probability density function. Numerical examples are studied to validate the proposed method and indicate that it is accurate and efficient for high-dimensional reliability analysis.

5.
Recent developments in the field of stochastic mechanics, and particularly the stochastic finite element method, allow uncertain behaviour to be modelled for more complex engineering structures. In reliability analysis, polynomial chaos expansion is a useful tool because it helps to avoid thousands of time-consuming finite element simulations for structures with uncertain parameters. The aim of this paper is to review and compare available techniques for both the construction of a polynomial chaos expansion and its use in computing failure probability. In particular, we compare results for the stochastic Galerkin method, stochastic collocation, and the regression method based on Latin hypercube sampling with predictions obtained by crude Monte Carlo sampling. As an illustrative engineering example, we consider a simple frame structure with uncertain loading and geometry parameters whose prescribed distributions are defined by realistic histograms.
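A minimal sketch of one of the compared routes, the regression (least-squares) construction of a polynomial chaos expansion for a single standard-normal input, with the resulting surrogate used in a Monte Carlo estimate of the failure probability. The toy response function, polynomial degree and sample sizes are illustrative assumptions, not the paper's frame example.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# expensive model, stood in for here by a cheap analytic function
def model(x):
    return 5.0 - x - 0.2 * x**2

rng = np.random.default_rng(0)
# "experimental design": a modest number of model evaluations
x_doe = rng.standard_normal(30)
y_doe = model(x_doe)

# least-squares fit of a degree-4 Hermite (probabilists') chaos expansion
degree = 4
Psi = hermevander(x_doe, degree)            # design matrix of He_0..He_4
coef, *_ = np.linalg.lstsq(Psi, y_doe, rcond=None)

# use the cheap surrogate for a large Monte Carlo reliability run
x_mc = rng.standard_normal(1_000_000)
y_hat = hermevander(x_mc, degree) @ coef
pf_surrogate = np.mean(y_hat <= 0.0)
pf_crude = np.mean(model(x_mc) <= 0.0)      # reference, possible only for a cheap model
print(pf_surrogate, pf_crude)
```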

6.
For the time-variant hybrid reliability problem under random and interval uncertainties, the upper bound of the time-variant failure probability, as a conservative index quantifying the safety level of the structure, is of primary concern. To estimate it efficiently, two algorithms are proposed that combine adaptive Kriging with importance sampling: one based on the design point and one on the meta-model. The first algorithm first searches for the design point of the hybrid problem, and candidate random samples are generated by shifting the sampling centre from the mean value to the design point; the Kriging model is then iteratively trained, and the hybrid problem is solved with the well-trained model. The second algorithm first uses Kriging-based importance sampling to approximate the quasi-optimal importance sampling density and estimate the augmented upper bound of the time-variant failure probability. The Kriging model is then further updated on these importance samples to estimate a correction factor, and the hybrid failure probability is calculated as the product of the augmented upper bound and this correction factor. Meanwhile, an improved learning function is presented to train an accurate Kriging model efficiently. The proposed methods integrate the merits of adaptive Kriging and importance sampling, so that the hybrid reliability analysis can be carried out at as little computational cost as possible. The presented examples show the feasibility of the proposed methods.
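A minimal sketch of the adaptive Kriging ingredient shared by both algorithms: a Gaussian-process surrogate of the limit state is enriched one point at a time by the candidate with the smallest U = |mu|/sigma value (the common AK-MCS learning function, not the paper's improved one); the time-variant, interval and importance-sampling aspects of the paper are omitted, and the limit state is a made-up toy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g(X):                                   # toy limit state, failure when g <= 0
    return 2.0 - (X[:, 0] + X[:, 1]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
candidates = rng.standard_normal((5000, 2)) # fixed Monte Carlo candidate pool
train_idx = list(rng.choice(len(candidates), size=12, replace=False))

gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]), normalize_y=True)
for _ in range(40):                         # adaptive enrichment loop
    X_tr = candidates[train_idx]
    gp.fit(X_tr, g(X_tr))
    mu, sigma = gp.predict(candidates, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)
    U[train_idx] = np.inf                   # never re-pick an evaluated point
    if U.min() >= 2.0:                      # usual stopping criterion U_min >= 2
        break
    train_idx.append(int(np.argmin(U)))     # add the most ambiguous candidate

pf_kriging = np.mean(mu <= 0.0)
print("estimated Pf:", pf_kriging, "model evaluations:", len(train_idx))
```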

7.
This study proposes a new reliability sensitivity analysis approach using an efficient hybrid simulation method that combines subset simulation, importance sampling and control variates techniques. The method contains a probability term (obtained from a fast-moving subset simulation) and an adaptive weighting part that improves the calculated probability. The finite difference method is used to obtain reliability sensitivities, and the related formulation is derived. Five numerical examples (a four-branch model, a beam-cable system, a one-story frame, ring-stiffened cylinder buckling, and a 25-bar steel truss) are presented to illustrate the application of the proposed method. The results are compared with those obtained by available techniques and show that the proposed method efficiently and accurately solves rare-event, system-level, and real-world engineering problems with explicit and implicit limit state functions.
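A minimal sketch of the finite-difference idea for reliability sensitivities: the failure probability is estimated at a nominal and at a perturbed distribution parameter using common random numbers, and the central difference approximates the derivative. Plain Monte Carlo stands in here for the paper's hybrid subset-simulation estimator, and the limit state is an illustrative toy with a known answer.

```python
import numpy as np
from scipy.stats import norm

def pf(mu, z):
    """Failure probability P[g <= 0] with g = 5 - X, X ~ N(mu, 1),
    estimated from standard-normal draws z (common random numbers)."""
    x = mu + z
    return np.mean(5.0 - x <= 0.0)

rng = np.random.default_rng(0)
z = rng.standard_normal(2_000_000)

mu0, h = 2.0, 0.01
sensitivity_fd = (pf(mu0 + h, z) - pf(mu0 - h, z)) / (2.0 * h)   # central difference
sensitivity_exact = norm.pdf(5.0 - mu0)                          # d/dmu of Phi(mu - 5)
print(sensitivity_fd, sensitivity_exact)
```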

8.
The safety analysis of systems with nonlinear performance functions and small probabilities of failure is a challenge in the field of reliability analysis. In this study, an efficient approach is presented for approximating small failure probabilities. To this end, by introducing probability density function (PDF) control variates, the original failure probability integral is reformulated based on the control variates technique (CVT). Accordingly, using the adaptive cooperation of subset simulation (SubSim) and the CVT, a new formulation is derived for the approximation of small failure probabilities. The proposed formulation involves a probability term (resulting from a fast-moving SubSim) and an adaptive weighting term that refines the obtained probability. Several numerical and engineering problems, involving nonlinear performance functions and system-level reliability, are solved by the proposed approach and by common reliability methods. The results show that the proposed simulation approach is not only more efficient but also more robust than common reliability methods, and that it has good potential for application to engineering reliability problems.
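A minimal sketch of plain subset simulation, the SubSim backbone referred to above, using a single-block random-walk Metropolis move in standard-normal space rather than the component-wise modified Metropolis that is customary in the literature, and without the control-variates weighting that is the paper's contribution.

```python
import numpy as np

def subset_simulation(g, dim, n=2000, p0=0.1, seed=0, step=0.8):
    """Estimate Pf = P[g(U) <= 0], U ~ N(0, I), by subset simulation."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, dim))
    G = np.apply_along_axis(g, 1, U)
    pf = 1.0
    for _ in range(30):
        ns = int(p0 * n)
        order = np.argsort(G)
        b = G[order[ns - 1]]                      # intermediate threshold (p0-quantile)
        if b <= 0.0:                              # last level reached
            return pf * np.mean(G <= 0.0)
        pf *= p0
        seeds, seed_G = U[order[:ns]], G[order[:ns]]
        U_new, G_new = [], []
        for u, gu in zip(seeds, seed_G):          # grow one Markov chain per seed
            for _ in range(n // ns):
                cand = u + step * rng.standard_normal(dim)
                # Metropolis acceptance for the standard-normal target
                if np.log(rng.random()) < 0.5 * (u @ u - cand @ cand):
                    gc = g(cand)
                    if gc <= b:                   # stay inside the current failure level
                        u, gu = cand, gc
                U_new.append(u.copy())
                G_new.append(gu)
        U, G = np.array(U_new), np.array(G_new)
    return pf

# toy limit state: failure when u1 + u2 >= 4*sqrt(2); exact Pf = Phi(-4) ~ 3.2e-5
g = lambda u: 4.0 * np.sqrt(2.0) - (u[0] + u[1])
print(subset_simulation(g, dim=2))
```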

9.
We study a parametric estimation problem. Our aim is to estimate, or to identify, the conditional probability that characterizes the system. We suppose that we can select appropriate inputs to the system when gathering the training data. This kind of estimation is called active learning in the context of artificial neural networks. In this paper we suggest new active learning algorithms and evaluate their risk using statistical asymptotic theory. The algorithms can be regarded as a version of experimental design with two-stage sampling. We verify the efficiency of active learning by simple computer simulations.

10.
With increasing emphasis on better and more reliable services, network systems have incorporated reliability analysis as an integral part of their planning, design and operation. This article first presents a simple exact decomposition algorithm for computing the NP-hard two-terminal reliability, which measures the probability of successful communication from a specified source node to a sink node in the network. Then a practical bounding algorithm, indispensable for large networks, is presented by modifying the exact algorithm to obtain sequential lower and upper bounds on two-terminal reliability. Based on randomly generated large networks, computational experiments are conducted to compare the proposed algorithm with the well-known and widely used edge-packing approximation model and to explore the performance of the proposed bounding algorithm. Computational results reveal that the proposed bounding algorithm is superior to the edge-packing model, and that the trade-off of accuracy for execution time allows an exact gap between the upper and lower bounds on two-terminal reliability to be obtained within an acceptable time.
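For orientation, a brute-force sketch of the quantity being bounded: two-terminal reliability as the probability that the source can still reach the sink when each edge fails independently. Full state enumeration is exponential in the number of edges and only feasible for small networks, which is precisely why decomposition and bounding algorithms such as the paper's are needed; the bridge network below is an illustrative example, not one of the paper's test cases.

```python
from itertools import product

def two_terminal_reliability(nodes, edges, probs, s, t):
    """Exact two-terminal reliability by enumerating all edge states.
    edges[i] = (u, v) works with probability probs[i], independently."""
    def connected(up):
        # depth-first search from s over the surviving edges
        adj = {v: [] for v in nodes}
        for (u, v), ok in zip(edges, up):
            if ok:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return t in seen

    rel = 0.0
    for state in product([0, 1], repeat=len(edges)):      # 2^|E| edge states
        p = 1.0
        for ok, pe in zip(state, probs):
            p *= pe if ok else (1.0 - pe)
        if connected(state):
            rel += p
    return rel

# bridge network: source 0, sink 3, all edges up with probability 0.9
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(two_terminal_reliability(nodes, edges, [0.9] * 5, 0, 3))
```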

11.
Simulation can be defined as a numerical technique for conducting experiments on a digital computer which involves certain types of mathematical and logical models describing the behaviour of a system over extended periods of real time. Simulation is, in a wide sense, a technique for performing sampling experiments on a model of the system. Stochastic simulation implies experimenting with the model over time, including sampling stochastic variates from probability distributions. This paper describes the main concepts of the application of stochastic simulation and Monte Carlo methods to the analysis of the operation of electric energy systems, in particular hydro-thermal generating systems. These techniques can take into account virtually all contingencies inherent in the operation of the system, and the operating policies that have an important effect on the performance of these systems can be realistically represented.

12.
Sliced Latin hypercube designs are widely used for computer experiments with qualitative factors. Previous constructions require the sizes of the different slices to be identical; here we construct sliced designs with flexible slice sizes. Besides achieving desirable one-dimensional uniformity, the flexible sliced designs (FSDs) constructed in this paper accommodate arbitrary sizes for different slices and include ordinary sliced Latin hypercube designs as special cases. The sampling properties of FSDs are derived and a central limit theorem is established, showing that any linear combination of the sample means from different models on the slices is asymptotically normal. Simulations compare FSDs with other sliced designs in collective evaluations of multiple computer models.

13.
To address the shortcomings of existing interval-based methods for computing structural fuzzy reliability, a new method for structural fuzzy reliability analysis is proposed. The method uses universal grey numbers to describe the uncertain parameters associated with the probability distributions of the basic structural variables, and introduces these grey numbers into the fuzzy reliability computation, yielding more accurate structural reliability results. Numerical examples show that the method produces narrower structural reliability intervals, obtaining relatively accurate reliability results from less information, and that, compared with traditional structural fuzzy reliability methods, it provides more, and more precise, useful information about the safety level of the structure.

14.
This article deals with constructing a confidence interval for the reliability parameter using ranked set sampling. Some asymptotic and resampling-based intervals are suggested and compared with their simple random sampling counterparts using Monte Carlo simulations. Finally, the methods are applied to a real data set in the context of agriculture.
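A minimal sketch of the two ingredients named in the abstract: drawing a ranked set sample and forming a percentile-bootstrap confidence interval. The reliability parameter is taken here, purely for illustration, to be the exceedance probability P(X > 1); the exponential lifetimes, perfect ranking, set size and cycle count are assumptions, and the bootstrap resamples the pooled sample rather than within ranks.

```python
import numpy as np

rng = np.random.default_rng(0)

def ranked_set_sample(draw, k=4, cycles=15):
    """Ranked set sampling: in each cycle, for rank r = 1..k, draw k units
    and keep the r-th smallest (ranking assumed perfect here)."""
    sample = []
    for _ in range(cycles):
        for r in range(k):
            group = np.sort(draw(k))
            sample.append(group[r])
    return np.array(sample)

def bootstrap_ci(data, stat, n_boot=5000, level=0.95):
    """Percentile bootstrap confidence interval for stat(data)."""
    boots = [stat(rng.choice(data, size=len(data), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# illustrative population: exponential lifetimes; reliability R = P(X > 1.0)
draw = lambda n: rng.exponential(scale=2.0, size=n)
rss = ranked_set_sample(draw)
reliability = lambda x: np.mean(x > 1.0)
print(reliability(rss), bootstrap_ci(rss, reliability))
```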

15.
A novel machine-learning-aided structural reliability analysis for functionally graded frame structures under static loading is proposed. The uncertain system parameters, which include the material properties, the dimensions of structural members, the applied loads, and the degree of gradation of the functionally graded material (FGM), can be incorporated within a unified structural reliability analysis framework. A 3D finite element method (FEM) for the static analysis of bar-type engineering structures involving FGM is presented. By extending the traditional support vector regression (SVR) method, a new kernel-based machine learning technique, the extended support vector regression (X-SVR), is proposed for modelling the underlying relationship between the structural behaviour and the uncertain system inputs. The proposed structural reliability analysis inherits the advantage of traditional sampling methods (i.e., Monte Carlo simulation) in providing information on the statistical characteristics (mean, standard deviation, probability density function, cumulative distribution function, etc.) of any structural output of interest, but with significantly reduced computational effort. Five numerical examples are investigated to illustrate the accuracy, applicability, and computational efficiency of the proposed computational scheme.
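A minimal sketch of the general surrogate-plus-Monte-Carlo workflow the abstract describes, with scikit-learn's ordinary SVR standing in for the paper's X-SVR and a cheap analytic function standing in for the FGM finite element model; the input distributions and the response limit are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fem_response(X):
    """Stand-in for the expensive FEM: a displacement-like response that
    must stay below a limit of 1.0 (exceeding it counts as failure)."""
    load, stiffness = X[:, 0], X[:, 1]
    return load / stiffness

rng = np.random.default_rng(0)
sample = lambda n: np.column_stack([rng.normal(10.0, 2.0, n),    # load
                                    rng.normal(15.0, 1.5, n)])   # stiffness

# small design of experiments on the "expensive" model
X_doe = sample(200)
y_doe = fem_response(X_doe)
surrogate = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.001))
surrogate.fit(X_doe, y_doe)

# large Monte Carlo run on the cheap surrogate
X_mc = sample(200_000)
y_hat = surrogate.predict(X_mc)
print("Pf (surrogate):", np.mean(y_hat > 1.0))
print("Pf (direct, for reference):", np.mean(fem_response(X_mc) > 1.0))
```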

16.
Simulated computer experiments have become a viable, cost-effective alternative to controlled real-life experiments. However, the simulation of complex systems with multiple input and output parameters can be a very time-consuming process; many high-fidelity simulators need minutes, hours or even days to perform a single simulation. The goal of global surrogate modeling is to create an approximation model that mimics the original simulator, based on a limited number of expensive simulations, but can be evaluated much faster. The set of simulations performed to create this model is called the experimental design. Traditionally, one-shot designs such as the Latin hypercube and factorial designs are used, and all simulations are performed before the first model is built. In order to reduce the number of simulations needed to achieve the desired accuracy, sequential design methods can be employed. These methods generate the samples of the experimental design one by one, without knowing the total number of samples in advance. In this paper, the authors perform an extensive study of new and state-of-the-art space-filling sequential design methods and show that the new sequential methods proposed here produce results comparable to the best one-shot experimental designs currently available.
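A minimal sketch of one common space-filling sequential strategy in the spirit described above: starting from a few initial points, each new design point is the candidate that maximizes the minimum distance to the points already chosen (a greedy maximin rule). This is a generic illustration, not one of the specific methods compared in the paper.

```python
import numpy as np

def sequential_maximin(n_points, dim, n_candidates=5000, n_init=4, seed=0):
    """Build a space-filling design one point at a time: each new point is
    the candidate farthest (in minimum distance) from the current design."""
    rng = np.random.default_rng(seed)
    design = rng.random((n_init, dim))              # small random initial design
    candidates = rng.random((n_candidates, dim))
    while len(design) < n_points:
        # distance of every candidate to its nearest design point
        d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=2)
        best = np.argmax(d.min(axis=1))
        design = np.vstack([design, candidates[best]])
    return design

design = sequential_maximin(n_points=20, dim=2)
print(design.shape)                                  # (20, 2); points fill [0, 1]^2
```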

17.
A classical sampling strategy for load balancing policies is power-of-two, where any server pair is sampled with equal probability. This does not cover practical settings with assignment constraints, which force non-uniform sampling. While intuition suggests that non-uniform sampling adversely impacts performance, this has only been supported through simulations, and rigorous statements have remained elusive. Building on product-form distributions for redundancy systems, we prove the stochastic dominance of uniform sampling for a four-server system as well as for arbitrary-size systems in light traffic.

18.
We study a stratified multisite cluster-sampling panel time series approach to analyse and evaluate the quality and reliability of produced items, motivated by the problem of sampling and analysing multisite outdoor measurements from photovoltaic systems. The specific stratified sampling in spatial clusters reduces sampling costs and allows for heterogeneity as well as for the analysis of spatial correlations due to defects and damages that tend to occur in clusters. The analysis is based on weighted least squares using data-dependent weights. We show that, under general conditions, this does not affect the consistency and asymptotic normality of the least squares estimator under the proposed sampling design. The estimation of the relevant variance-covariance matrices is discussed in detail for various models, including nested designs and random effects. The strata corresponding to damages or manufacturers are modelled via a quality feature by means of a threshold approach. The analysis of outdoor electroluminescence images shows that spatial correlations and local clusters may arise in such photovoltaic data, and that relevant statistics such as the mean pixel intensity cannot be assumed to follow a Gaussian law. We investigate the proposed inferential tools in detail by simulations in order to assess the influence of spatial cluster correlations and serial correlations on the tests' size and power.

19.
Probability estimation in sparse two-dimensional contingency tables with ordered categories is examined. Several smoothing procedures are compared with analysis of the unsmoothed table. It is shown that probability estimates obtained via maximum penalized likelihood smoothing are consistent under a sparse asymptotic framework if the underlying probability matrix is smooth, and are more accurate than kernel-based and other smoothing techniques. In fact, computer simulations indicate that smoothing based on a product kernel is less effective than no smoothing at all. An example is given to illustrate the smoothing technique, and possible extensions to model building and higher-dimensional tables are discussed.
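A minimal sketch of the kind of product-kernel smoothing the abstract compares against: the raw cell probabilities of a sparse ordered two-way table are replaced by locally weighted averages of neighbouring cells. The penalized-likelihood smoother that the paper actually recommends is not reproduced, and the bandwidth and toy table are illustrative assumptions.

```python
import numpy as np

def product_kernel_smooth(counts, bandwidth=1.0):
    """Smooth a two-way contingency table of counts with a discrete
    Gaussian product kernel over the (ordered) row and column categories."""
    counts = np.asarray(counts, dtype=float)
    p_raw = counts / counts.sum()
    r, c = counts.shape
    rows, cols = np.arange(r), np.arange(c)
    K_row = np.exp(-0.5 * ((rows[:, None] - rows[None, :]) / bandwidth) ** 2)
    K_col = np.exp(-0.5 * ((cols[:, None] - cols[None, :]) / bandwidth) ** 2)
    K_row /= K_row.sum(axis=1, keepdims=True)     # each row kernel sums to 1
    K_col /= K_col.sum(axis=1, keepdims=True)
    p_smooth = K_row @ p_raw @ K_col.T            # product-kernel weighting
    return p_smooth / p_smooth.sum()              # renormalize to a probability table

# sparse 6x6 table of counts with many empty cells
rng = np.random.default_rng(0)
counts = rng.poisson(0.4, size=(6, 6))
print(np.round(product_kernel_smooth(counts), 3))
```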

20.
This paper develops a discrete reliability growth (RG) model for an inverse sampling scheme, e.g., for destructive tests of expensive single-shot operational systems where design changes are made only, and immediately, after the occurrence of failures. For q_i, the probability of failure at the i-th stage, a specific parametric form is chosen which conforms to the concept of the Duane (1964, IEEE Trans. Aerospace Electron. Systems, 2, 563-566) learning curve in the continuous-time RG setting. A generalized linear model approach is pursued which efficiently handles a certain non-standard situation arising in the study of the large-sample properties of the maximum likelihood estimators (MLEs) of the parameters. Alternative closed-form estimators of the model parameters are proposed and compared with the MLEs through asymptotic efficiency as well as small- and moderate-sample-size simulation studies.
