Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
The stochastic ultimate load analysis model used in the safety analysis of engineering structures can be treated as a special case of chance-constrained problems (CCP), which minimize a stochastic cost function subject to probabilistic constraints. Some special cases (such as a deterministic cost function with probabilistic constraints, or deterministic constraints with a random cost function) of ultimate load analysis have already been investigated by various researchers. In this paper, a general probabilistic approach to stochastic ultimate load analysis is given. In doing so, some approximation techniques are needed because the problems at hand are too complicated to evaluate precisely. We propose two extensions of the SQP method in which the quantities appearing in the algorithms are evaluated inexactly. These algorithms are shown to be globally convergent for all models and locally superlinearly convergent for some special cases.
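To make the CCP structure concrete, here is a minimal sketch of a chance-constrained program solved by sample-average approximation: a deterministic cost is minimized subject to an empirical probabilistic constraint. This is an illustration of the problem class only, not the paper's inexact SQP extensions; all data and the coarse grid-search solver are made up for the example.

```python
# A minimal sketch of a chance-constrained program (CCP) solved by
# sample-average approximation -- an illustration of the problem class,
# not the paper's inexact-SQP algorithms.  All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(5000, 2))  # random load factors

def cost(x):
    return x[0] + x[1]                     # deterministic cost to minimize

def feas_prob(x):
    # empirical P(xi1*x1 + xi2*x2 >= 1), the probabilistic constraint
    return np.mean(xi @ x >= 1.0)

alpha = 0.05                               # allowed violation probability
best, best_cost = None, np.inf
for x1 in np.linspace(0, 2, 81):           # coarse grid search keeps the
    for x2 in np.linspace(0, 2, 81):       # sketch simple and transparent
        x = np.array([x1, x2])
        if feas_prob(x) >= 1 - alpha and cost(x) < best_cost:
            best, best_cost = x, cost(x)

print("approx. CCP solution:", best, "cost:", best_cost)
```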

2.
Probabilistic analysis is becoming more important in mechanical science and real-world engineering applications. In this work, a novel generalized stochastic edge-based smoothed finite element method is proposed for Reissner–Mindlin plate problems. The edge-based smoothing technique is applied in the standard FEM to soften the over-stiff behavior of the Reissner–Mindlin plate system, aiming to improve the accuracy of predictions for the deterministic response. Then, the generalized nth-order stochastic perturbation technique is incorporated with the edge-based S-FEM to formulate a generalized probabilistic ES-FEM framework (GP_ES-FEM). Based upon a general-order Taylor expansion in the random input variables, it is able to determine higher-order probabilistic moments and characteristics of the response of Reissner–Mindlin plates. The significant feature of the proposed approach is that it not only improves the numerical accuracy of deterministic output quantities with respect to a given random variable, but also overcomes the inherent drawbacks of the conventional second-order perturbation approach, which is satisfactory only for small coefficients of variation of the stochastic input field. Two numerical examples for static analysis of Reissner–Mindlin plates are presented and verified by Monte Carlo simulations to demonstrate the effectiveness of the present method.
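The perturbation idea can be illustrated on a one-degree-of-freedom analogue. The sketch below applies a second-order Taylor expansion to u(b) = F/b with random stiffness b and checks the first two moments against Monte Carlo; the paper's GP_ES-FEM applies the same logic, at general order, to a smoothed finite element plate model. All parameter values are invented.

```python
# A minimal sketch of the second-order stochastic perturbation idea on a
# one-DOF spring u(b) = F/b with random stiffness b.  Parameters are made up.
import numpy as np

F, b0, s = 1.0, 10.0, 1.0                  # load, mean stiffness, std dev
u   = lambda b: F / b
du  = -F / b0**2                           # first derivative at the mean
d2u = 2 * F / b0**3                        # second derivative at the mean

mean_2nd = u(b0) + 0.5 * d2u * s**2        # 2nd-order estimate of E[u]
var_2nd  = du**2 * s**2                    # 2nd-order estimate of Var[u]

rng = np.random.default_rng(1)
samples = u(rng.normal(b0, s, 200_000))    # Monte Carlo reference
print("perturbation: mean %.5f  var %.6f" % (mean_2nd, var_2nd))
print("Monte Carlo : mean %.5f  var %.6f" % (samples.mean(), samples.var()))
```

As the abstract notes, the second-order estimates degrade as the coefficient of variation s/b0 grows, which is exactly the drawback the higher-order expansion is meant to overcome.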

3.
Though forecasting methods are used in numerous fields, we have seen no work on providing a general theoretical framework to build forecast operators into temporal databases, producing an algebra that extends the relational algebra. In this paper, we first develop a formal definition of a forecast operator as a function that satisfies a suite of forecast axioms. Based on this definition, we propose three families of forecast operators called deterministic, probabilistic, and possible worlds forecast operators. Additional properties of coherence, orthogonality, monotonicity, and fact preservation are identified that these operators may satisfy (but are not required to). We show how deterministic forecast operators can always be encoded as probabilistic forecast operators, and how both deterministic and probabilistic forecast operators can be expressed as possible worlds forecast operators. Issues related to the complexity of these operators are studied, showing the relative computational tradeoffs of these types of forecast operators. We explore the integration of different forecast operators with standard relational operators in temporal databases—including extensions of the relational algebra for the probabilistic and possible worlds cases—and propose several policies for answering forecast queries. Instances where these different forecast policies are equivalent have been identified, and can form the basis of query optimization in forecasting. These policies are evaluated empirically using a prototype implementation of a forecast query answering system and several forecast operators.
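As a toy illustration of the operator idea (not the paper's formal axiomatization), the sketch below applies a deterministic forecast operator to a temporal relation of (key, time, value) tuples, extending each series past its last observed time by linear extrapolation. All names and data are hypothetical.

```python
# A hedged sketch of a deterministic forecast operator over a temporal
# relation.  Each key's series (assumed to have at least two observations)
# is extended by linear extrapolation; the output tuples are the forecast.
from typing import List, Tuple

Row = Tuple[str, int, float]               # (key, time, value)

def forecast_linear(rel: List[Row], horizon: int) -> List[Row]:
    out = list(rel)
    keys = {k for k, _, _ in rel}
    for k in keys:
        series = sorted((t, v) for kk, t, v in rel if kk == k)
        (t0, v0), (t1, v1) = series[-2], series[-1]
        slope = (v1 - v0) / (t1 - t0)
        for h in range(1, horizon + 1):    # one forecast tuple per step
            out.append((k, t1 + h, v1 + slope * h))
    return out

sales = [("widget", 1, 100.0), ("widget", 2, 110.0), ("widget", 3, 121.0)]
for row in forecast_linear(sales, horizon=2):
    print(row)
```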

4.
This paper discusses a general approach to obtain optimum performance bounds for (N+1)-person deterministic decision problems, N + 1 > 2, with several levels of hierarchy and under partial dynamic information. Both cooperative and noncooperative modes of decision making are considered at the lower levels of hierarchy; in each case, it is shown that the optimum performance of the decision maker at the top of the hierarchy can be obtained by solving a sequence of open-loop (static) optimization problems. A numerical example included in the paper illustrates the general approach.

5.
In the present work, we explore a general framework for the design of new minimization algorithms with desirable characteristics, namely, supervisor-searcher cooperation. We propose a class of algorithms within this framework and examine a gradient algorithm in the class. Global convergence is established for the deterministic case in the absence of noise, and the convergence rate is studied. Both theoretical analysis and numerical tests show that the algorithm is efficient for the deterministic case. Furthermore, the fact that no line search procedure is incorporated in the algorithm seems to strengthen its robustness, so that it effectively tackles test problems with strong stochastic noise. The numerical results for both deterministic and stochastic test problems illustrate the appealing attributes of the algorithm.
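The division of labour described above can be caricatured in a few lines: a searcher takes plain gradient steps with no line search, while a supervisor monitors the achieved decrease and adapts the step length. The sketch below is a generic illustration of that cooperation on a quadratic test function, not the specific algorithm analyzed in the paper; the growth and shrink factors are arbitrary.

```python
# A generic supervisor-searcher sketch: gradient steps with no line search,
# step length adapted by an accept/reject supervisor.  Illustrative only.
import numpy as np

def grad(x):                                # gradient of f(x) = ||x||^2 / 2
    return x

def supervised_gradient(x, step=1.0, iters=200, noise=0.0, seed=8):
    rng = np.random.default_rng(seed)
    f = lambda z: 0.5 * z @ z
    for _ in range(iters):
        g = grad(x) + noise * rng.normal(size=x.size)   # possibly noisy gradient
        x_new = x - step * g
        if f(x_new) < f(x):                 # supervisor: accept and grow step
            x, step = x_new, step * 1.1
        else:                               # supervisor: reject and shrink step
            step *= 0.5
    return x

for noise in (0.0, 0.1):
    x = supervised_gradient(np.array([3.0, -4.0]), noise=noise)
    print("noise %.1f  final ||x|| = %.4f" % (noise, np.linalg.norm(x)))
```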

6.
Two years ago, Conlon and Gowers, and Schacht proved general theorems that allow one to transfer a large class of extremal combinatorial results from the deterministic to the probabilistic setting. Even though the two papers solve the same set of long-standing open problems in probabilistic combinatorics, the methods used in them vary significantly and therefore yield results that are not comparable in certain aspects. In particular, the theorem of Schacht yields stronger probability estimates, whereas the one of Conlon and Gowers also implies random versions of some structural statements such as the famous stability theorem of Erdős and Simonovits. In this paper, we bridge the gap between these two transference theorems. Building on the approach of Schacht, we prove a general theorem that allows one to transfer deterministic stability results to the probabilistic setting. We then use this theorem to derive several new results, among them a random version of the Erdős–Simonovits stability theorem for arbitrary graphs, extending the result of Conlon and Gowers, who proved such a statement for so-called strictly 2-balanced graphs. The main new idea, a refined approach to multiple exposure when considering subsets of binomial random sets, may be of independent interest. Copyright © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 44, 269-289, 2014

7.
Katrin Ellermann, PAMM, 2006, 6(1): 663-664
The analysis of the dynamical behavior of systems in ocean waves is an important part of offshore engineering. While a characterization of the response of a linearized model can be obtained in the frequency domain, it has to be noted that offshore systems usually include components with nonlinear behavior. The systematic analysis of the nonlinear dynamics of floating structures is often facilitated by additional assumptions. One common example is the use of deterministic (harmonic) waves. Even though periodic waves may be a reasonable simplification for many applications, sea waves in general are usually better described by a spectral or probabilistic approach. This paper addresses different methods of describing random forces for the analysis of floating structures. Examples show the effects of different wave models on the analysis of a simple floating structure. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
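The contrast between the two wave models can be sketched directly: a deterministic harmonic wave on one hand, and a random sea surface synthesized by superposing spectral components with random phases on the other. The spectrum shape below is a generic illustrative one, not a calibrated Pierson-Moskowitz or JONSWAP model, and all numbers are invented.

```python
# A minimal sketch contrasting a deterministic harmonic wave with a random
# sea state built by random-phase superposition of spectral components.
import numpy as np

t = np.linspace(0, 200, 4001)
eta_harmonic = 1.0 * np.cos(0.6 * t)               # deterministic wave

w  = np.linspace(0.3, 2.0, 200)                    # frequency grid [rad/s]
dw = w[1] - w[0]
S  = w**-5 * np.exp(-1.25 * (0.6 / w) ** 4)        # illustrative spectrum shape
amp = np.sqrt(2 * S * dw)                          # component amplitudes
rng = np.random.default_rng(2)
phi = rng.uniform(0, 2 * np.pi, w.size)            # random phases

# random surface elevation: sum of cosines with random phases
eta_random = (amp[:, None] * np.cos(w[:, None] * t + phi[:, None])).sum(axis=0)
print("harmonic std: %.3f   random-sea std: %.3f"
      % (eta_harmonic.std(), eta_random.std()))
```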

8.
Integer problems under joint probabilistic constraints with random coefficients on both sides of the constraints are extremely hard from a computational standpoint, since two different sources of complexity are merged. The first is the challenging presence of probabilistic constraints, which ensure the satisfaction of the stochastic constraints with a given probability, whereas the second is due to the integer nature of the decision variables. In this paper we present a tailored heuristic approach based on alternating phases of exploration and feasibility repair, which we call the Express (Explore and Repair Stochastic Solution) heuristic. The exploration is carried out by the iterative solution of simplified reduced integer problems in which probabilistic constraints are discarded and deterministic additional constraints are adjoined. Feasibility is restored through a penalty approach. Computational results, collected on a probabilistically constrained version of the classical 0–1 multiknapsack problem, show that the proposed heuristic is able to determine good-quality solutions in a limited amount of time.
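In the same spirit (though much simpler than the Express heuristic itself), the sketch below alternates exploration and repair on a probabilistically constrained 0–1 knapsack: a greedy solution of the deterministic mean-value problem is explored first, then feasibility against sampled weights is repaired by dropping items. All data are made up.

```python
# A hedged explore-and-repair sketch for a probabilistically constrained
# 0-1 knapsack -- a caricature of the approach, not the Express heuristic.
import numpy as np

rng = np.random.default_rng(3)
n, cap, p_req = 12, 30.0, 0.9
value = rng.uniform(5, 20, n)
w_mu  = rng.uniform(2, 8, n)                       # mean weights
W     = rng.normal(w_mu, 0.15 * w_mu, (2000, n))   # weight scenarios

def feas_prob(x):
    return np.mean(W @ x <= cap)                   # empirical P(weight <= cap)

# Explore: greedy on the deterministic (mean-value) problem,
# probabilistic constraint discarded
order = np.argsort(-value / w_mu)
x = np.zeros(n)
for i in order:
    x[i] = 1.0
    if w_mu @ x > cap:
        x[i] = 0.0

# Repair: drop the worst value-per-mean-weight item until the
# probabilistic constraint holds with the required probability
while feas_prob(x) < p_req:
    packed = np.flatnonzero(x)
    x[packed[np.argmin(value[packed] / w_mu[packed])]] = 0.0

print("value: %.1f   P(feasible): %.3f" % (value @ x, feas_prob(x)))
```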

9.
Stuart, A. M., Numerical Algorithms, 1997, 14(1-3): 227-260
The numerical solution of initial value problems for ordinary differential equations is frequently performed by means of adaptive algorithms with a user-input tolerance τ. The time-step is then chosen according to an estimate, based on small time-step heuristics, designed to ensure that an approximation to the local error committed is bounded by τ. A question of natural interest is to determine how the global error behaves with respect to the tolerance τ. This has obvious practical interest and also leads to an interesting problem in mathematical analysis. The primary difficulties arising in the analysis are that: (i) the time-step selection mechanisms used in practice are discontinuous as functions of the specified data; (ii) the small time-step heuristics underlying the control of the local error can break down in some cases. In this paper an analysis is presented which incorporates these two difficulties. For a mathematical model of an error-per-unit-step or error-per-step adaptive Runge–Kutta algorithm, it may be shown that in a certain probabilistic sense, with respect to a measure on the space of initial data, the small time-step heuristics are valid with probability one, leading to a probabilistic convergence result for the global error as τ→0. The probabilistic approach is only valid in dimension m>1; this observation is consistent with recent analysis concerning the existence of spurious steady solutions of software codes, which highlights the difference between the cases m=1 and m>1. The breakdown of the small time-step heuristics can be circumvented by making minor modifications to the algorithm, leading to a deterministic convergence proof for the global error of such algorithms as τ→0. An underlying theory is developed, and the deterministic and probabilistic convergence results are proved as particular applications of this theory.
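The step-size control mechanism under study can be sketched with an error-per-step controller based on step doubling; explicit Euler stands in for the Runge–Kutta pairs analyzed in the paper. The controller accepts a step when the local error estimate is below τ and rescales the step as h proportional to (τ/err)^(1/(p+1)), with p = 1 for Euler. The test problem and constants below are made up.

```python
# A minimal sketch of tolerance-proportional step-size control (error per
# step) using step doubling with explicit Euler on y' = -y.
import math

def f(t, y):
    return -y

def adaptive_euler(y0, t_end, tol):
    t, y, h, steps = 0.0, y0, 0.01, 0
    while t < t_end:
        h = min(h, t_end - t)
        y_big   = y + h * f(t, y)                           # one step of size h
        y_half  = y + (h / 2) * f(t, y)                     # two steps of h/2
        y_small = y_half + (h / 2) * f(t + h / 2, y_half)
        err = abs(y_small - y_big)                          # local error estimate
        if err <= tol:                                      # accept the step
            t, y = t + h, y_small
            steps += 1
        # standard controller: h ~ (tol/err)**(1/(p+1)) with p = 1 for Euler
        h *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return y, steps

for tol in (1e-3, 1e-5, 1e-7):
    y, steps = adaptive_euler(1.0, 1.0, tol)
    print("tol %.0e  global error %.2e  steps %d"
          % (tol, abs(y - math.exp(-1.0)), steps))
```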

10.
The wavelet-based decomposition of random variables and fields is proposed here in the context of applying the stochastic second-order perturbation technique. A general methodology is employed to obtain the first two probabilistic moments of the solution of a linear algebraic equation system, instead of the single solution projection obtained in the deterministic case. Application of the perturbation approach allows determination of closed formulas for the wavelet decomposition of random fields. Next, these formulas are tested by symbolic projection of an elementary random field. Copyright © 2004 John Wiley & Sons, Ltd.
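A one-level Haar decomposition makes the idea tangible: each realization of a sampled random field splits into smooth and detail coefficients, and per-coefficient means and variances give the first two probabilistic moments of the decomposition. The field model below is invented, and the sketch uses plain sampling rather than the paper's closed perturbation formulas.

```python
# A minimal sketch: one-level Haar wavelet decomposition of a sampled
# random field, with first two moments of the coefficients.  Field made up.
import numpy as np

rng = np.random.default_rng(9)
n_samp, n = 5000, 64
x = np.linspace(0, 1, n)
# random field: smooth deterministic trend plus Gaussian perturbation
field = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=(n_samp, n))

approx = (field[:, 0::2] + field[:, 1::2]) / np.sqrt(2)   # Haar averages
detail = (field[:, 0::2] - field[:, 1::2]) / np.sqrt(2)   # Haar details

# first two probabilistic moments, coefficient by coefficient
print("approx coeffs: mean |mean| %.3f   mean var %.4f"
      % (np.abs(approx.mean(axis=0)).mean(), approx.var(axis=0).mean()))
print("detail coeffs: mean |mean| %.3f   mean var %.4f"
      % (np.abs(detail.mean(axis=0)).mean(), detail.var(axis=0).mean()))
```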

11.
Decision Networks is a technique for solving problems which involve a sequence of decisions. It is similar in style to critical path analysis in that it consists of arrow diagrams which give a visual representation of the problem and are used as a basis for a simple calculation procedure. The technique can deal with deterministic and stochastic problems and in the latter case is more general than decision trees. The decision network approach meets the need for a method of solution for multi-stage decision problems which is easily understood, helps the user to visualize the nature of the problem and is routine in application.
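The calculation procedure can be illustrated with a miniature example: a backward pass over an acyclic network whose decision nodes pick the cheapest arc and whose chance nodes take expectations, in the same visual spirit as a critical-path computation. The network and all numbers below are hypothetical.

```python
# A hedged sketch of a decision-network style backward pass over a DAG.
from functools import lru_cache

# ('d', [(cost, next), ...])           decision node: choose the cheapest arc
# ('c', [(prob, cost, next), ...])     chance node: expectation over outcomes
net = {
    "start": ("d", [(4.0, "A"), (2.0, "B")]),
    "A":     ("c", [(0.5, 1.0, "end"), (0.5, 6.0, "end")]),
    "B":     ("c", [(0.7, 5.0, "end"), (0.3, 9.0, "end")]),
    "end":   ("d", []),
}

@lru_cache(maxsize=None)
def value(node):
    kind, arcs = net[node]
    if not arcs:
        return 0.0
    if kind == "d":                             # decision: minimize
        return min(c + value(nxt) for c, nxt in arcs)
    return sum(p * (c + value(nxt)) for p, c, nxt in arcs)  # chance: expect

print("optimal expected cost from start: %.2f" % value("start"))
```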

12.
Traditionally, on-line problems have been studied under the assumption that there is a unique sequence of requests that must be served. This approach is common to most general models of on-line computation, such as Metrical Task Systems. However, there exist on-line problems in which the requests are organized in more than one independent thread. In this more general framework, at every moment the first unserved request of each thread is available. Therefore, apart from deciding how to serve a request, at each stage it is necessary to decide which request to serve among several possibilities. In this paper we introduce Multi-threaded Metrical Task Systems, that is, the generalization of Metrical Task Systems to the case in which there are many threads of tasks. We study the problem from a competitive analysis point of view, proving lower and upper bounds on the competitiveness of on-line algorithms. We consider finite and infinite sequences of tasks, as well as deterministic and randomized algorithms. In this work we present the first steps towards a more general framework for on-line problems which is not restricted to a sequential flow of information.

13.
In many database applications in telecommunication, environmental and health sciences, bioinformatics, physics, and econometrics, real-world data are uncertain and subject to errors. These data are processed, transmitted and stored in large databases. We consider stochastic modelling for databases with uncertain data and for some basic database operations (for example, join and selection) with exact and approximate matching. Approximate join is used for merging or data deduplication in large databases. The distribution and mean of join sizes are studied for random databases. A random database is treated as a table with independent random records with a common distribution (or as a set of random tables). These results can be used for the integration of information from different databases, multiple join optimization, and various probabilistic algorithms for structured random data.
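One representative result of this kind is easy to state and check: for two random tables whose join keys are drawn independently and uniformly from a domain of size D, the expected exact-match join size is nm/D. The sketch below verifies this by simulation; the table sizes and domain are made up.

```python
# A minimal sketch: expected exact-match join size of two random tables
# with independent uniform keys is n*m/D.  Verified by simulation.
import numpy as np

rng = np.random.default_rng(4)
n, m, D, trials = 300, 400, 1000, 500

sizes = []
for _ in range(trials):
    r = rng.integers(0, D, n)                  # keys of table R
    s = rng.integers(0, D, m)                  # keys of table S
    # join size = sum over keys of (count in R) * (count in S)
    cr = np.bincount(r, minlength=D)
    cs = np.bincount(s, minlength=D)
    sizes.append(int(cr @ cs))

print("mean join size: %.1f   theory n*m/D = %.1f   std: %.1f"
      % (np.mean(sizes), n * m / D, np.std(sizes)))
```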

14.
In order to enable domestic commercial banks to be more competitive globally, the Taiwanese government has twice attempted to financially restructure them, in 2001 and 2004. Unlike other studies, which use deterministic analyses to measure changes in performance between two periods, this paper adopts probabilistic analysis to take the uncertainty related to certain factors into account. Data from six years, from 2005 to 2010, are divided into two periods, 2005–2007 and 2008–2010, to calculate the global Malmquist productivity index (MPI) as a measure of the change in performance. By assuming beta distributions for the data, a Monte Carlo simulation is conducted to find the distribution of the MPI. The results show that, in general, the performance of the commercial banks has indeed improved. While conventional deterministic analyses may mislead top managers and make them overconfident about results that are actually uncertain, probabilistic analysis can produce more reliable information that can thus lead to better decisions.
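The probabilistic ingredient can be sketched as follows: instead of point efficiencies, each period's efficiency is treated as beta-distributed and the productivity-change index is simulated. A real global MPI would be computed from DEA runs; here the index is simplified to a ratio of efficiencies, and all beta parameters are invented.

```python
# A hedged sketch: beta-distributed inputs plus Monte Carlo give a
# distribution, not a point value, for a productivity-change index.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
eff_2005_07 = rng.beta(8, 4, n)            # period-1 efficiency ~ Beta(8, 4)
eff_2008_10 = rng.beta(9, 3, n)            # period-2 efficiency ~ Beta(9, 3)

mpi = eff_2008_10 / eff_2005_07            # > 1 means performance improved
lo, hi = np.percentile(mpi, [2.5, 97.5])
print("mean MPI %.3f   95%% interval [%.3f, %.3f]   P(MPI>1) = %.3f"
      % (mpi.mean(), lo, hi, (mpi > 1).mean()))
```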

15.
In this paper the utility and the difficulties of probabilistic analysis for optimization algorithms are discussed. Such an analysis is expected to deliver valuable criteria, better than worst-case complexity, for the efficiency of an algorithm in practice. The author has done much work of that kind in the field of linear programming. Based on that experience he gives some insight into the general principles of such an approach. He reports on some typical and representative attempts to analyze algorithms and problems of linear and combinatorial optimization. For each case he describes the problem, the stochastic model under consideration, the algorithm, and the results, and tries to give a brief idea of the way these results could be obtained. He concludes with a discussion of some drawbacks and difficulties in that field of research. Among these are the strong sensitivity with respect to the chosen model, the restriction of results to the asymptotic case, and the restriction to somewhat inefficient algorithms. These points are the reasons why probabilistic analysis is of limited value for practice today. On the other hand, they show which principal problems should be attacked in the future to obtain the desired utility.

16.
Many multiple attribute decision analysis (MADA) problems are characterised by both quantitative and qualitative attributes with various types of uncertainties. Incompleteness (or ignorance) and vagueness (or fuzziness) are among the most common uncertainties in decision analysis. The evidential reasoning (ER) approach was developed in the 1990s and in recent years to support the solution of MADA problems with ignorance, a kind of probabilistic uncertainty. In this paper, the ER approach is further developed to deal with MADA problems with both probabilistic and fuzzy uncertainties. In this newly developed ER approach, precise data, ignorance and fuzziness are all modelled under the unified framework of a distributed fuzzy belief structure, leading to a fuzzy belief decision matrix. A utility-based grade match method is proposed to transform both numerical data and qualitative (fuzzy) assessment information of various formats into the fuzzy belief structure. A new fuzzy ER algorithm is developed to aggregate multiple attributes using the information contained in the fuzzy belief matrix, resulting in an aggregated fuzzy distributed assessment for each alternative. Different from the existing ER algorithm, which is of a recursive nature, the new fuzzy ER algorithm provides an analytical means for combining all attributes without iteration, thus providing scope and flexibility for sensitivity analysis and optimisation. A numerical example is provided to illustrate the detailed implementation process of the new ER approach and its validity and wide applicability.

17.
We propose a general-purpose algorithm APS (Adaptive Pareto-Sampling) for determining the set of Pareto-optimal solutions of bicriteria combinatorial optimization (CO) problems under uncertainty, where the objective functions are expectations of random variables depending on a decision from a finite feasible set. APS is iterative and population-based and combines random sampling with the solution of corresponding deterministic bicriteria CO problem instances. Special attention is given to the case where the corresponding deterministic bicriteria CO problem can be formulated as a bicriteria integer linear program (ILP). In this case, well-known solution techniques such as the algorithm by Chalmet et al. can be applied for solving the deterministic subproblem. If the execution of APS is terminated after a given number of iterations, only an approximate solution is obtained in general, so APS must be considered a metaheuristic. Nevertheless, a strict mathematical result is shown that ensures, under rather mild conditions, convergence of the current solution set to the set of Pareto-optimal solutions. A modification replacing or supporting the bicriteria ILP solver by some metaheuristic for multicriteria CO problems is discussed. As an illustration, we outline the application of the method to stochastic bicriteria knapsack problems by specializing the general framework to this particular case and by providing computational examples.
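A skeleton of such a loop, specialized as in the paper's illustration to a stochastic bicriteria knapsack, might look as follows: each iteration replaces the expected profits by sample averages and solves the resulting deterministic bicriteria subproblem (exhaustive enumeration stands in for the bicriteria ILP solver, since the instance is tiny). The data and iteration schedule are made up, and this is a caricature of APS, not the algorithm itself.

```python
# A hedged Adaptive Pareto-Sampling skeleton on a bicriteria stochastic
# 0-1 knapsack: sample, solve the deterministic subproblem, merge archive.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n, cap = 10, 25.0
w = rng.uniform(2, 8, n)                                 # deterministic weights
mu1, mu2 = rng.uniform(5, 15, n), rng.uniform(5, 15, n)  # mean profits

def nondominated(points):
    keep = []
    for x, f in points:
        if not any(g[0] >= f[0] and g[1] >= f[1] and g != f for _, g in points):
            keep.append((x, f))
    return keep

archive = []
for it in range(5):                                   # APS-style iterations
    m = 200 * (it + 1)                                # growing sample size
    p1 = rng.normal(mu1, 2.0, (m, n)).mean(axis=0)    # sample-average profits
    p2 = rng.normal(mu2, 2.0, (m, n)).mean(axis=0)
    cand = []
    for bits in itertools.product((0, 1), repeat=n):  # deterministic subproblem
        x = np.array(bits)
        if w @ x <= cap:
            cand.append((bits, (float(p1 @ x), float(p2 @ x))))
    archive = nondominated(archive + nondominated(cand))

for x, f in sorted(archive, key=lambda a: a[1]):
    print(x, "objectives: (%.1f, %.1f)" % f)
```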

18.
This paper introduces a probabilistic framework for the joint survivorship of couples in the context of dynamic stochastic mortality models. The death of one member of a couple can have either deterministic or stochastic effects on the other; our new framework gives an intuitive and flexible pairwise cohort-based probabilistic mechanism that can account for both. It is sufficiently flexible to allow modelling of effects that are short-term (called the broken-heart effect) and/or long-term (named life circumstances bereavement). In addition, it can account for the state of health of both the surviving and the dying spouse and can allow for dynamic and asymmetric reactions of varying complexity. Finally, it can accommodate the pairwise dependence of mortality intensities before the first death. Analytical expressions for bivariate survivorship in representative models are given, and their sensitivity analysis is performed for benchmark cases of old and young couples. Simulation and estimation procedures are provided that are straightforward to implement and lead to consistent parameter estimation on a synthetic dataset of 10,000 pairs of death times for couples.
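A stripped-down version of the bereavement mechanism is easy to simulate: both lives have exponential baseline mortality, and at the first death the survivor's intensity is multiplied by a factor k, a permanent effect in the spirit of life circumstances bereavement (the paper's framework also covers transient broken-heart effects and dependence before the first death). All parameters below are illustrative.

```python
# A minimal sketch of a pairwise bereavement mechanism with exponential
# baseline mortality and a permanent intensity jump at the first death.
import numpy as np

rng = np.random.default_rng(7)
n, lam_x, lam_y, k = 100_000, 0.05, 0.04, 1.8

tx = rng.exponential(1 / lam_x, n)          # first life's unperturbed lifetime
ty = rng.exponential(1 / lam_y, n)          # second life's unperturbed lifetime

first = np.minimum(tx, ty)
# memorylessness: after the first death the survivor's residual lifetime
# is exponential with the bereavement-scaled intensity k * lambda
surv_lam = np.where(tx < ty, lam_y, lam_x) * k
second = first + rng.exponential(1 / surv_lam, n)

print("E[first death] %.2f   E[second death] %.2f   corr %.3f"
      % (first.mean(), second.mean(), np.corrcoef(first, second)[0, 1]))
```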

19.
Probability theory has become the standard framework in the field of mobile robotics because of the inherent uncertainty associated with sensing and acting. In this paper, we show that the theory of belief functions, with its ability to distinguish between different types of uncertainty, is able to provide significant advantages over probabilistic approaches in the context of robotics. We do so by presenting solutions to the essential problems of simultaneous localization and mapping (SLAM) and planning based on belief functions. For SLAM, we show how the joint belief function over the map and the robot's poses can be factored and efficiently approximated using a Rao-Blackwellized particle filter, resulting in a generalization of the popular probabilistic FastSLAM algorithm. Our SLAM algorithm produces occupancy grid maps where belief functions explicitly represent additional information about missing and conflicting measurements compared to probabilistic grid maps. The basis for this SLAM algorithm is a set of forward and inverse sensor models, and we present general evidential models for range sensors like sonar and laser scanners. Using the generated evidential grid maps, we show how optimal decisions can be made for path planning and active exploration. To demonstrate the effectiveness of our evidential approach, we apply it to two real-world datasets where a mobile robot has to explore unknown environments and solve different planning problems. Finally, we provide a quantitative evaluation and show that the evidential approach outperforms a probabilistic one both in terms of map quality and navigation performance.
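The evidential grid-map idea can be reduced to a single cell: each range measurement contributes a mass function over {occupied, free, unknown}, and measurements are fused with Dempster's rule, so that mass left on "unknown" keeps missing information explicit in a way a probabilistic cell cannot. The sensor masses below are made up, not the paper's sonar or laser models.

```python
# A hedged sketch of an evidential occupancy cell fused with Dempster's rule.
def combine(m1, m2):
    """Dempster's rule on frame {occ, free}; masses are (m_occ, m_free, m_unknown)."""
    o1, f1, u1 = m1
    o2, f2, u2 = m2
    conflict = o1 * f2 + f1 * o2            # mass assigned to the empty set
    norm = 1.0 - conflict
    occ  = (o1 * o2 + o1 * u2 + u1 * o2) / norm
    free = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    return (occ, free, 1.0 - occ - free)

cell = (0.0, 0.0, 1.0)                      # start in total ignorance
hits = [(0.6, 0.0, 0.4), (0.5, 0.1, 0.4), (0.7, 0.0, 0.3)]  # invented returns
for m in hits:
    cell = combine(cell, m)
print("m(occ)=%.3f  m(free)=%.3f  m(unknown)=%.3f" % cell)
```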

20.
Performance data are usually collected in order to build well‐defined performance indicators. Since such data may conceal additional information, which can be revealed by secondary analysis, we believe that mining of performance data may be fruitful. We also note that performance databases usually contain both qualitative and quantitative variables for which it may be inappropriate to assume some specific (multivariate) underlying distribution. Thus, a suitable technique to deal with these issues should be adopted. In this work, we consider nonlinear principal component analysis (PCA) with optimal scaling, a method developed to incorporate all types of variables, and to discover and handle nonlinear relationships. The reader is offered a case study in which a student opinion database is mined. Though generally gathered to provide evidence of teaching ability, they are exploited here to provide a more general performance evaluation tool for those in charge of managing universities. We show how nonlinear PCA with optimal scaling applied to student opinion data enables users to point out some strengths and weaknesses of educational programs and services within a university. Copyright © 2009 John Wiley & Sons, Ltd.
