Similar Articles (20 results)
1.
In this article, a Differential Transform Method (DTM) based on mean fourth calculus is developed to solve random differential equations. An analytical, mean fourth convergent series solution is found for a nonlinear random Riccati differential equation by using the random DTM. Besides obtaining the series solution of the Riccati equation, we provide approximations of the main statistical functions of the stochastic solution process, such as the mean and variance. These approximations are compared to those obtained by the Euler and Monte Carlo methods. It is shown that this method, applied to the random Riccati differential equation, is more efficient than the two aforementioned methods.
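For context on the comparison above, a minimal Euler/Monte Carlo baseline for a random Riccati equation can be sketched as follows. The specific equation dy/dt = a − y², the Uniform(0.5, 1.5) coefficient, and all sample sizes are illustrative assumptions, not details taken from the paper:

```python
import random

def riccati_euler(a, y0=0.0, T=5.0, n=500):
    """Euler integration of the Riccati equation dy/dt = a - y**2."""
    dt = T / n
    y = y0
    for _ in range(n):
        y += dt * (a - y * y)
    return y

def mc_moments(n_paths=2000, seed=1):
    """Monte Carlo estimates of the mean and variance of y(T)
    when the coefficient a is random (here a ~ Uniform(0.5, 1.5))."""
    rng = random.Random(seed)
    samples = [riccati_euler(rng.uniform(0.5, 1.5)) for _ in range(n_paths)]
    mean = sum(samples) / n_paths
    var = sum((s - mean) ** 2 for s in samples) / (n_paths - 1)
    return mean, var
```

Since y(T) approaches sqrt(a) for this equation, the estimated mean should lie near E[sqrt(a)] ≈ 0.99; a series-based method like the random DTM would target the same moments analytically.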

2.
Summary. The simplest and best-known method for numerical approximation of high-dimensional integrals is the Monte Carlo method (MC), i.e. random sampling. MC has also become the most popular method for constructing numerically solvable approximations of stochastic programs. However, certain modern integration quadratures are often superior to crude MC in high-dimensional integration, so it seems natural to try to use them also in the discretization of stochastic programs. This paper derives conditions that guarantee the epi-convergence of the resulting objectives to the original one. Our epi-convergence result is closely related to some of the existing ones, but it is easier to apply to discretizations and it allows the feasible set to depend on the probability measure. As examples, we prove epi-convergence of quadrature-based discretizations of three different models of portfolio management and study their behavior numerically. Besides MC, our discretizations are the only existing ones with guaranteed epi-convergence for these problem classes. In our tests, modern quadratures seem to result in faster convergence of optimal values than MC. Mathematics Subject Classification (2000): 90C15, 49M25. The work of this author was partially supported by The Finnish Foundation for Economic Education under grant no. 21599 and by the Finnish Academy under contract no. 3385.
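To make the contrast between crude MC and a modern quadrature concrete, here is a small sketch comparing plain random sampling with a Halton low-discrepancy sequence on a toy two-dimensional integral. The integrand, bases, and sample size are illustrative choices; the paper's quadratures and portfolio models are far richer:

```python
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton(n, bases=(2, 3)):
    """First n points of the Halton low-discrepancy sequence in [0,1)^d."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

def integrate(points, f):
    """Equal-weight quadrature rule over the given point set."""
    return sum(f(p) for p in points) / len(points)

f = lambda p: p[0] * p[1]          # exact integral over [0,1]^2 is 1/4
qmc = integrate(halton(1024), f)   # quasi-Monte Carlo estimate
rng = random.Random(0)
mc = integrate([[rng.random(), rng.random()] for _ in range(1024)], f)
```

With 1024 points the Halton estimate is typically within a few thousandths of 1/4, while crude MC fluctuates on the order of 1/sqrt(n).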

3.
This article discusses a new methodology that combines two efficient methods, the Monte Carlo (MC) and Stochastic-algebraic (SA) methods, for stochastic analyses and probabilistic assessments in electric power systems. The main idea is to use the advantages of each method to cover the blind spots of the other. The new method is more efficient and more accurate than the SA method, and also faster than the MC method, while being less dependent on the sampling process. In this article, the proposed method and the two others are used to obtain the probability density function of different variables in a power system. Different examples are studied to show the effectiveness of the hybrid method. The results of the proposed method are compared to those obtained using the MC and SA methods. © 2014 Wiley Periodicals, Inc. Complexity 21: 100–110, 2015

4.
We develop an implementable algorithm for stochastic optimization problems involving probability functions. Such problems arise in the design of structural and mechanical systems. The algorithm consists of a nonlinear optimization algorithm applied to sample average approximations and a precision-adjustment rule. The sample average approximations are constructed using Monte Carlo simulations or importance sampling techniques. We prove that the algorithm converges to a solution with probability one and illustrate its use by an example involving a reliability-based optimal design.
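As an illustration of the sample-average idea (not the paper's algorithm), the following sketch solves progressively finer sample average approximations of min_x E[(x − ξ)²], whose exact minimizer is E[ξ] = 0 for ξ ~ N(0, 1); the growing sample sizes stand in for a precision-adjustment rule, and all distributions and sizes are assumptions made for the example:

```python
import random

def saa_minimizer(samples):
    """The sample average of (x - xi)**2 is minimized in closed form
    by the sample mean."""
    return sum(samples) / len(samples)

def adaptive_saa(stages=(100, 1000, 10000), seed=42):
    """Sketch of a precision-adjustment rule: re-solve the SAA with
    growing sample sizes, reusing earlier draws at each stage."""
    rng = random.Random(seed)
    samples, solutions = [], []
    for n in stages:
        samples += [rng.gauss(0.0, 1.0) for _ in range(n - len(samples))]
        solutions.append(saa_minimizer(samples))
    return solutions
```

Each stage's solution is an increasingly accurate estimate of the true minimizer; a real implementation would replace the closed-form step with a nonlinear optimization run.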

5.
New regulations, stronger competition and more volatile capital markets have increased the demand for stochastic asset-liability management (ALM) models for insurance companies in recent years. The numerical simulation of such models is usually performed by Monte Carlo methods, which, however, suffer from slow and erratic convergence. As alternatives to Monte Carlo simulation, we propose and investigate in this article the use of deterministic integration schemes, such as quasi-Monte Carlo and sparse grid quadrature methods. Numerical experiments with different ALM models for portfolios of participating life insurance products demonstrate that these deterministic methods often converge faster, are less erratic and produce more accurate results than Monte Carlo simulation, even for small sample sizes and complex models, if the methods are combined with adaptivity and dimension reduction techniques. In addition, we show by an analysis of variance (ANOVA) that ALM problems are often of very low effective dimension, which provides a theoretical explanation for the success of the deterministic quadrature methods.

6.
You Need Monte Carlo Methods Too: Techniques for Improving Applications
This is the second article in the series "You Need Monte Carlo Methods Too". It discusses techniques for improving the application of the method, covering the choice of simulation model and ways of increasing computational speed or reducing sampling variance, such as importance sampling, correlated sampling, antithetic sampling and stratified sampling. Practical issues such as determining the number of samples required in a simulation and assessing the accuracy of simulation results are also discussed.
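One of the variance-reduction techniques mentioned, antithetic sampling, can be sketched as follows for the toy integral of e^u over [0, 1], which equals e − 1. The integrand, sample sizes, and replication counts are illustrative choices for the demonstration:

```python
import math
import random

def mc_estimate(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_estimate(f, n, rng):
    """Antithetic variates: pair each draw u with its mirror 1 - u.
    For monotone f the pair is negatively correlated, cutting variance."""
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / (n // 2)

def replicate(estimator, reps=200, n=100, seed=7):
    """Empirical mean and variance of an estimator over many replications."""
    rng = random.Random(seed)
    vals = [estimator(math.exp, n, rng) for _ in range(reps)]
    m = sum(vals) / reps
    return m, sum((v - m) ** 2 for v in vals) / (reps - 1)
```

Both estimators use the same number of function evaluations per replication, yet for this monotone integrand the antithetic estimator's variance is dramatically smaller than that of crude MC.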

7.
Large scale stochastic linear programs are typically solved using a combination of mathematical programming techniques and sample-based approximations. Some methods are designed to permit sample sizes to adapt to information obtained during the solution process, while others are not. In this paper, we experimentally examine the relative merits and challenges of approximations based on adaptive samples and those based on non-adaptive samples. We focus our attention on Stochastic Decomposition (SD) as an adaptive technique and Sample Average Approximation (SAA) as a non-adaptive technique. Our results indicate that there can be minimal difference in the quality of the solutions provided by these methods, although comparing their computational requirements would be more challenging.

8.
Monte Carlo sampling-based estimators of optimality gaps for stochastic programs are known to be biased. When bias is a prominent factor, estimates of optimality gaps tend to be large on average even for high-quality solutions. This diminishes our ability to recognize high-quality solutions. In this paper, we present a method for reducing the bias of the optimality gap estimators for two-stage stochastic linear programs with recourse via a probability metrics approach, motivated by stability results in stochastic programming. We apply this method to the Averaged Two-Replication Procedure (A2RP) by partitioning the observations in an effort to reduce bias, which can be done in polynomial time in sample size. We call the resulting procedure the Averaged Two-Replication Procedure with Bias Reduction (A2RP-B). We provide conditions under which A2RP-B produces strongly consistent point estimators and an asymptotically valid confidence interval. We illustrate the effectiveness of our approach analytically on a newsvendor problem and test the small-sample behavior of A2RP-B on a number of two-stage stochastic linear programs from the literature. Our computational results indicate that the procedure effectively reduces bias. We also observe variance reduction in certain circumstances.

9.
Martin Krosche, Martin Hautefeuille, PAMM 2007, 7(1): 2140001–2140002
Uncertainties often occur in the numerical simulation of real-world problems, and stochastic modelling is a useful way of treating them. In this context, Monte Carlo (MC) methods and the stochastic Galerkin method are well known and frequently used. In the field of component-based software systems, a component can be seen as an independent piece of software or as part of a software system. A corresponding architecture allows clarity, flexibility and reusability. The Component Template Library (CTL) is so-called middleware for realising component-based software systems, and fits the requirements of scientific computing. In this paper we present the design of a software system for stochastic simulation using MC and stochastic Galerkin methods based on the CTL. For each stochastic method a number of components are realised to allow for parallelism. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
In this paper, we discuss here-and-now type stochastic programs with equilibrium constraints. We give a general formulation of such problems and study their basic properties, such as measurability and continuity of the corresponding integrand functions. We also discuss the consistency and rate of convergence of sample average approximations of such stochastic problems.

11.
Kinetic Monte Carlo methods provide a powerful computational tool for the simulation of microscopic processes, such as the diffusion of interacting particles on a surface, at a detailed atomistic level. However, such algorithms are typically computationally expensive and are restricted to fairly small spatiotemporal scales. One approach towards overcoming this problem was the development of coarse-grained Monte Carlo algorithms. In recent literature, these methods were shown to be capable of efficiently describing much larger length scales while still incorporating information on microscopic interactions and fluctuations. In this paper, a coarse-grained Langevin system of stochastic differential equations, approximating the diffusion of interacting particles, is derived from these earlier coarse-grained models. The authors demonstrate the asymptotic equivalence of the transient and long-time behavior of the Langevin approximation and the underlying microscopic process, using asymptotic methods such as large deviations for interacting particle systems, and present corresponding numerical simulations comparing statistical quantities such as mean paths, autocorrelations and power spectra of the microscopic and approximating Langevin processes. Finally, it is shown that the Langevin approximations presented here are much more computationally efficient than conventional Kinetic Monte Carlo methods, since, in addition to the reduction in the number of spatial degrees of freedom in coarse-grained Monte Carlo methods, the Langevin system of stochastic differential equations allows for multiple particle moves in a single timestep.
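Simulating a Langevin approximation amounts to integrating a stochastic differential equation. A generic Euler–Maruyama step, here applied to a simple Ornstein–Uhlenbeck process rather than the coarse-grained particle model of the paper (the drift, noise level, and sample counts are illustrative assumptions), might look like this:

```python
import math
import random

def euler_maruyama(drift, sigma, x0, T, n, rng):
    """Euler-Maruyama discretisation of dX = drift(X) dt + sigma dW
    with n uniform steps on [0, T]."""
    dt = T / n
    x = x0
    for _ in range(n):
        x += drift(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# Ornstein-Uhlenbeck process dX = -X dt + 0.5 dW:
# stationary mean 0, stationary variance 0.5**2 / 2 = 0.125.
rng = random.Random(3)
endpoints = [euler_maruyama(lambda x: -x, 0.5, 1.0, 5.0, 200, rng)
             for _ in range(2000)]
```

Statistics of the simulated endpoints (mean near 0, variance near 0.125) can be checked against the known stationary law, mirroring the paper's comparison of statistical quantities between the microscopic and Langevin processes.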

12.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.
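A stripped-down instance of an inexact subgradient method is sketched below. The target f(x) = E|x − ξ| with ξ ~ N(2, 1), whose minimizer is the median 2, the growing batch rule, and the steplength schedule are all assumptions chosen for the illustration, not the paper's conditions:

```python
import random

def inexact_subgradient(rng, x0=5.0, iters=500):
    """Subgradient descent on f(x) = E|x - xi|, xi ~ N(2, 1), using an
    inexact subgradient estimated from a sample whose size grows, so the
    approximation sharpens as the iterations proceed."""
    x = x0
    for k in range(1, iters + 1):
        batch = max(1, k // 10)   # successively better approximations
        g = sum(1.0 if x > rng.gauss(2.0, 1.0) else -1.0
                for _ in range(batch)) / batch
        x -= (1.0 / k ** 0.75) * g  # diminishing steplength
    return x
```

Despite every subgradient being only an estimate, the iterates settle near the true minimizer because the approximation error shrinks while the steplengths satisfy the usual divergent-sum condition.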

13.
In this paper we discuss the basket options valuation for a jump-diffusion model. The underlying asset prices follow some correlated local volatility diffusion processes with systematic jumps. We derive a forward partial integral differential equation (PIDE) for general stochastic processes and use the asymptotic expansion method to approximate the conditional expectation of the stochastic variance associated with the basket value process. The numerical tests show that the suggested method is fast and accurate in comparison with the Monte Carlo and other methods in most cases.

15.
Optimality functions define stationarity in nonlinear programming, semi-infinite optimization, and optimal control. In this paper, we consider optimality functions for stochastic programs with nonlinear, possibly nonconvex, expected value objective and constraint functions. We show that an optimality function directly relates to the difference in function values at a candidate point and a local minimizer. We construct confidence intervals for the value of the optimality function at a candidate point and, hence, provide a quantitative measure of solution quality. Based on sample average approximations, we develop an algorithm for classes of stochastic programs that include CVaR problems, and utilize optimality functions to select sample sizes.

16.
We consider smooth stochastic programs and develop a discrete-time optimal-control problem for adaptively selecting sample sizes in a class of algorithms based on variable sample average approximations (VSAA). The control problem aims to minimize the expected computational cost to obtain a near-optimal solution of a stochastic program and is solved approximately using dynamic programming. The optimal-control problem depends on unknown parameters such as rate of convergence, computational cost per iteration, and sampling error. Hence, we implement the approach within a receding-horizon framework where parameters are estimated and the optimal-control problem is solved repeatedly during the calculations of a VSAA algorithm. The resulting sample-size selection policy consistently produces near-optimal solutions in short computing times as compared to other plausible policies in several numerical examples.

17.
In this paper, two-stage stochastic quadratic programming problems with equality constraints are considered. By means of Monte Carlo simulation-based approximations of the objective function and its first and second derivatives, an inexact Lagrange–Newton type method is proposed. It is shown that this method is globally convergent with probability one. In particular, convergence is locally superlinear under an integral approximation error bound condition. Moreover, the method can easily be extended to solve stochastic quadratic programming problems with inequality constraints.

18.
Large deviations theory is a well-studied area which has been shown to have numerous applications. Broadly speaking, the theory deals with analytical approximations of the probabilities of certain types of rare events. Moreover, the theory has recently proven instrumental in the study of the complexity of methods that solve stochastic optimization problems by replacing expectations with sample averages (an approach called sample average approximation in the literature). The typical results, however, assume that the underlying random variables are either i.i.d. or exhibit some form of Markovian dependence. Our interest in this paper is to study the application of large deviations results in the context of estimators built with Latin Hypercube sampling, a well-known sampling technique for variance reduction. We show that a large deviation principle holds for Latin Hypercube sampling for functions in one dimension and for separable multi-dimensional functions. Moreover, the upper bound on the probability of a large deviation in these cases is no higher under Latin Hypercube sampling than under Monte Carlo sampling. We extend the latter property to functions that are monotone in each argument. Numerical experiments illustrate the theoretical results presented in the paper.
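For intuition, a minimal Latin Hypercube sampler and a variance comparison against crude MC on a separable test function can be sketched as follows. The test function f(x, y) = x + y and the sample/replication counts are illustrative; separable functions are among the cases covered by the paper's results:

```python
import random
import statistics

def latin_hypercube(n, d, rng):
    """n Latin Hypercube points in [0,1)^d: each axis is cut into n
    equal strata and every stratum is hit exactly once per dimension."""
    points = [[0.0] * d for _ in range(n)]
    for j in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(n):
            points[i][j] = (perm[i] + rng.random()) / n
    return points

def estimate(points, f):
    """Equal-weight estimate of the integral of f from a point set."""
    return sum(f(p) for p in points) / len(points)

rng = random.Random(5)
f = lambda p: p[0] + p[1]            # exact integral over [0,1]^2 is 1
lhs_vals = [estimate(latin_hypercube(100, 2, rng), f) for _ in range(200)]
mc_vals = [estimate([[rng.random(), rng.random()] for _ in range(100)], f)
           for _ in range(200)]
```

Because the integrand is additive in its arguments, the per-axis stratification removes nearly all of the sampling variance, so the spread of the LHS estimates across replications is far smaller than that of the MC estimates.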

19.
Data envelopment analysis (DEA) is periodically conducted on values that include estimated proportions, such as defect, satisfaction, mortality, or adverse event rates computed from samples. This occurs frequently in healthcare and public sector analysis, where proportions are often estimated from partial samples. These estimates can produce statistically biased and variable estimates of DEA results, even as sample sizes become fairly large. This paper discusses several approaches to these problems, including Monte Carlo (MC), bootstrapping, chance constrained, and optimistic/pessimistic DEA methods. The performance of each method was compared using previously published data for fourteen Florida juvenile delinquency programs, for which two of the three inputs and one of the outputs were proportions. The impact of sample size and of the number of estimated rates was also investigated. In most cases, no statistically significant differences were found between the true DEA scores and the midpoints of the optimistic/pessimistic, MC, and bootstrap intervals, the latter two after bias correction. True DEA results are strongly correlated with those produced by the MC (r = 0.9865, p < 0.001), chance constrained (r = 0.9536, p < 0.001), bootstrapping (r = 0.9368, p < 0.001), and optimistic/pessimistic (r = 0.6799, p < 0.001) approaches. While all methods perform fairly well, the MC approach tends to produce slightly better results and is fairly easy to implement.

20.
We theoretically compare the variances of the Infinitesimal Perturbation Analysis (IPA) and Likelihood Ratio (LR) estimators of Monte Carlo gradients for stochastic systems. The results presented in Cui et al. (2020) [2] on the variance comparison between these two estimators are substantially improved. We also prove a practically interesting result: the IPA estimators of the Deltas of European vanilla and arithmetic Asian options have smaller variance when the underlying asset's return process is independent of the initial price and square integrable.
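A textbook-style illustration of the two estimators, not the option-pricing setting of the paper: for d/dθ E[X²] with X ~ N(θ, 1), whose true value is 2θ, IPA differentiates the sample path while LR weights the payoff by the score, and here IPA has visibly smaller variance. The payoff, distribution, and sample size are assumptions made for the sketch:

```python
import random

def ipa_lr_gradients(theta=1.0, n=20000, seed=11):
    """Two Monte Carlo estimators of d/dtheta E[X^2], X ~ N(theta, 1).
    Returns (mean, variance) of the per-sample IPA and LR estimators."""
    rng = random.Random(seed)
    ipa, lr = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        x = theta + z
        ipa.append(2.0 * x)    # pathwise derivative: d(x^2)/dtheta = 2x
        lr.append(x * x * z)   # payoff times score of N(theta, 1): x - theta = z

    def mean_var(v):
        m = sum(v) / len(v)
        return m, sum((u - m) ** 2 for u in v) / (len(v) - 1)

    return (*mean_var(ipa), *mean_var(lr))
```

Both estimators are unbiased for 2θ, but at θ = 1 the per-sample variance of IPA is 4 while that of LR is 30, a gap of the kind the paper's comparison results quantify.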
