Similar Documents
20 similar documents found (search time: 578 ms)
1.
The stochastic approximation problem is to find a root or minimum of a nonlinear function in the presence of noisy measurements. The classical algorithm for the stochastic approximation problem is the Robbins-Monro (RM) algorithm, which uses the noisy negative gradient as the iterative direction. To accelerate the classical RM algorithm, this paper gives a new combined-direction stochastic approximation algorithm that employs a weighted combination of the current noisy negative gradient and a former noisy negative gradient as the iterative direction. Both the almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Numerical experiments show that the new algorithm outperforms the classical RM algorithm.
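The combined-direction idea is simple enough to sketch. The following is a minimal, hedged illustration, not the paper's exact scheme: the weight `lam`, the gain sequence `1/k`, the noise level, and the quadratic test function are all assumptions made for demonstration.

```python
import random

def combined_rm(grad, x0, steps=2000, lam=0.7, noise=0.1, seed=0):
    """Robbins-Monro iteration whose search direction is a weighted
    combination of the current and the previous noisy gradient."""
    rng = random.Random(seed)
    x = x0
    prev_g = 0.0
    for k in range(1, steps + 1):
        g = grad(x) + rng.gauss(0.0, noise)  # noisy gradient measurement
        d = lam * g + (1.0 - lam) * prev_g   # combined iterative direction
        x -= d / k                           # classical RM gain a_k = 1/k
        prev_g = g
    return x

# root-finding for grad f(x) = 2(x - 3), i.e. minimising f(x) = (x - 3)^2
x_star = combined_rm(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Setting `lam = 1.0` recovers the classical RM iteration, so the same function can be used to compare the two schemes on a test problem.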

2.
The convergent optimization via most promising area stochastic search (COMPASS) algorithm is a locally convergent random search algorithm for solving discrete optimization via simulation problems. COMPASS has drawn a significant amount of attention since its introduction. While the asymptotic convergence of COMPASS does not depend on the problem dimension, the finite-time performance of the algorithm often deteriorates as the dimension increases. In this paper, we investigate the reasons for this deterioration and propose a simple change to the solution-sampling scheme that significantly speeds up COMPASS for high-dimensional problems without affecting its convergence guarantee.

3.
Stochastic optimization/approximation algorithms are widely used to recursively estimate the optimum of a suitable function or its root under noisy observations when this optimum or root is a constant or evolves randomly according to slowly time-varying continuous sample paths. In comparison, this paper analyzes the asymptotic properties of stochastic optimization/approximation algorithms for recursively estimating the optimum or root when it evolves rapidly with nonsmooth (jump-changing) sample paths. The resulting problem falls into the category of regime-switching stochastic approximation algorithms with two time scales. Motivated by emerging applications in wireless communications and system identification, we analyze the asymptotic behavior of such algorithms. Our analysis assumes that the noisy observations contain a (nonsmooth) jump process modeled by a discrete-time Markov chain whose transition frequency varies much faster than the adaptation rate of the stochastic optimization algorithm. Using stochastic averaging, we prove convergence of the algorithm. The rate of convergence of the algorithm is obtained via bounds on the estimation errors and diffusion approximations. Remarks on improving the convergence rates through iterate averaging, and on limit mean dynamics represented by differential inclusions, are also presented. The research of G. Yin was supported in part by the National Science Foundation under DMS-0603287, in part by the National Security Agency under MSPF-068-029, and in part by the National Natural Science Foundation of China under #60574069. The research of C. Ion was supported in part by the Wayne State University Rumble Fellowship. The research of V. Krishnamurthy was supported in part by NSERC (Canada).

4.
Predictive recursion (PR) is a fast stochastic algorithm for nonparametric estimation of mixing distributions in mixture models. It is known that the PR estimates of both the mixing and mixture densities are consistent under fairly mild conditions, but currently very little is known about the rate of convergence. Here I first investigate asymptotic convergence properties of the PR estimate under model misspecification in the special case of finite mixtures with known support. Tools from stochastic approximation theory are used to prove that the PR estimates converge, to the best Kullback-Leibler approximation, at a nearly root-n rate. When the support is unknown, PR can be used to construct an objective function which, when optimized, yields an estimate of the support. I apply the known-support results to derive a rate of convergence for this modified PR estimate in the unknown support case, which compares favorably to known optimal rates.
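The PR update itself is short enough to write down. Below is a hedged sketch for the known-finite-support case the abstract discusses; the weight sequence `w(i) = 1/(i + 2)`, the two-point support, the Gaussian kernel, and the mixing weights are illustrative assumptions, not values from the paper.

```python
import math
import random

def predictive_recursion(data, support, kernel, w=lambda i: 1.0 / (i + 2)):
    """Predictive recursion (PR) estimate of a mixing distribution on a
    known finite support; kernel(x, theta) is the component density."""
    f = [1.0 / len(support)] * len(support)       # uniform initial guess
    for i, x in enumerate(data):
        like = [kernel(x, th) for th in support]
        m = sum(l * p for l, p in zip(like, f))   # current mixture density at x
        wi = w(i)
        f = [(1.0 - wi) * p + wi * l * p / m for l, p in zip(like, f)]
    return f

def normal_pdf(x, mu, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# synthetic data: mixture 0.3*N(-2,1) + 0.7*N(2,1) on known support {-2, 2}
rng = random.Random(1)
data = [rng.gauss(-2.0 if rng.random() < 0.3 else 2.0, 1.0) for _ in range(3000)]
f_hat = predictive_recursion(data, [-2.0, 2.0], lambda x, th: normal_pdf(x, th))
```

Note that each update rescales by the current mixture density `m`, so the estimated weights remain a probability vector at every step.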

5.
Over the last decade the stochastic Galerkin method has become an established method to solve differential equations involving uncertain parameters. It is based on the generalized Wiener expansion of square-integrable random variables. Although there exist very sophisticated variants of the stochastic Galerkin method (wavelet basis, multi-element approach), convergence for random ordinary differential equations has rarely been analyzed. In this work we develop an asymptotic upper bound for the L2-error of the stochastic Galerkin method. Furthermore, we prove convergence of a local application of the stochastic Galerkin method and confirm convergence of the multi-element approach within this context.

6.
This paper concludes one part of the local convergence analysis of a certain class of iterative aggregation–disaggregation methods for computing a stationary probability distribution vector of an irreducible stochastic matrix B. We show that the local convergence of the algorithm is determined only by the sparsity pattern of the matrix and by the choice of the aggregation groups. We introduce the asymptotic convergence rates of the normalized components of approximations corresponding to particular aggregation groups and we also specify an upper bound on the rates. Copyright © 2008 John Wiley & Sons, Ltd.

7.
A stochastic level-value approximation algorithm is proposed for the convex quadratic integer minimization problem. The algorithm applies importance sampling and updates the sampling density by minimizing relative entropy. The asymptotic convergence of the algorithm is proved, and numerical results are reported.

8.
This work develops a class of stochastic optimization algorithms. It aims to provide numerical procedures for solving threshold-type optimal control problems. The main motivation stems from applications involving optimal or suboptimal hedging policies, for example, production planning of manufacturing systems including random demand and stochastic machine capacity. The proposed algorithm is a constrained stochastic approximation procedure that uses random-direction finite-difference gradient estimates. Under fairly general conditions, the convergence of the algorithm is established and the rate of convergence is also derived. A numerical example is reported to demonstrate the performance of the algorithm.
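The random-direction finite-difference gradient estimate mentioned above can be sketched as follows. This is a hedged, unconstrained illustration (the projection step of a constrained procedure is omitted), and the gain sequences and quadratic test function are assumptions, not the paper's.

```python
import random

def rd_fd_gradient(f, x, c, rng):
    """Random-direction finite-difference gradient estimate: perturb all
    coordinates simultaneously along a random +-1 direction."""
    delta = [rng.choice((-1.0, 1.0)) for _ in x]
    fp = f([xi + c * di for xi, di in zip(x, delta)])
    fm = f([xi - c * di for xi, di in zip(x, delta)])
    return [(fp - fm) / (2.0 * c * di) for di in delta]

def rd_sa_minimise(f, x0, steps=4000, seed=0):
    """Stochastic approximation loop driven by the random-direction
    finite-difference gradient estimate (two evaluations per step)."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, steps + 1):
        g = rd_fd_gradient(f, x, c=0.1 / k ** 0.25, rng=rng)
        x = [xi - (0.5 / k) * gi for xi, gi in zip(x, g)]
    return x

# quadratic test problem with minimiser (1, -2)
x_hat = rd_sa_minimise(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                       [0.0, 0.0])
```

The design point is that only two function evaluations per iteration are needed regardless of dimension, which is why such estimates are attractive for simulation-based control problems.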

9.
Cheng Shengmin, Zhou Shaobo. 《数学杂志》 (Journal of Mathematics), 2014, 34(6): 1073-1084
This paper studies the convergence and mean-square stability of the balanced method for stochastic delay differential equations. Using the semimartingale convergence theorem, a weaker condition is given for the asymptotic stability and mean-square stability of the exact solution, and the balanced method is shown to reproduce the mean-square stability of the exact solution.

10.
A stopping rule for the multidimensional Robbins-Monro stochastic approximation method is developed in this paper. Both moving-average and stationary φ-mixing types of correlated noise processes are treated. Sequentially determined confidence ellipsoids are constructed to define the stopping rule. The limit behavior of the algorithm is investigated, and it is shown that the stopped Robbins-Monro process is asymptotically normal. Such asymptotic normality is established by means of weak convergence methods. Communicated by Y. C. Ho

11.
This paper studies the asymptotic properties of solutions of general neutral stochastic differential equations. Using Lyapunov functions and the supermartingale convergence theorem, asymptotic properties of the solutions are obtained, including asymptotic stability, polynomial asymptotic stability, and exponential stability. The results cover and extend conclusions in the existing literature.

12.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results apply to global optimization algorithms that combine local and global stochastic search strategies, and to those that combine deterministic and stochastic search strategies, which makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.

13.
Practical Nonlinear Programming algorithms may converge to infeasible points. It is sensible to detect this situation as quickly as possible, in order to have time to change initial approximations and parameters, with the aim of obtaining convergence to acceptable solutions in further runs. In this paper, a recently introduced Augmented Lagrangian algorithm is modified in such a way that the probability of quick detection of asymptotic infeasibility is enhanced. The modified algorithm preserves the property of convergence to stationary points of the sum of squares of infeasibilities without harming the convergence to KKT points in feasible cases.

14.
We propose a stochastic algorithm for the global optimization of chance constrained problems. We assume that the probability measure with which the constraints are evaluated is known only through its moments. The algorithm proceeds in two phases. In the first phase the probability distribution is (coarsely) discretized and solved to global optimality using a stochastic algorithm. We only assume that the stochastic algorithm exhibits a weak* convergence to a probability measure assigning all its mass to the discretized problem. A diffusion process is derived that has this convergence property. In the second phase, the discretization is improved by solving another nonlinear programming problem. It is shown that the algorithm converges to the solution of the original problem. We discuss the numerical performance of the algorithm and its application to process design.

15.
The semimartingale stochastic approximation procedure, precisely, the Robbins-Monro type SDE, is introduced, which naturally includes both generalized stochastic approximation algorithms with martingale noises and recursive parameter estimation procedures for statistical models associated with semimartingales. General results concerning the asymptotic behavior of the solution are presented. In particular, the conditions ensuring the convergence, the rate of convergence, and the asymptotic expansion are established. The results concerning the Polyak weighted averaging procedure are also presented. __________ Translated from Sovremennaya Matematika i Ee Prilozheniya (Contemporary Mathematics and Its Applications), Vol. 45, Martingale Theory and Its Application, 2007.
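The Polyak weighted averaging mentioned in the abstract can be illustrated in its simplest discrete form: a slowly decaying gain plus a running average of the iterates. The gain exponent 0.7, the noise level, and the scalar test function below are illustrative assumptions, not taken from the paper.

```python
import random

def rm_polyak(grad, x0, steps=5000, noise=1.0, seed=0):
    """Robbins-Monro iteration with Polyak-Ruppert averaging: a gain
    decaying slower than 1/k, plus a running average of the iterates."""
    rng = random.Random(seed)
    x, avg = x0, 0.0
    for k in range(1, steps + 1):
        g = grad(x) + rng.gauss(0.0, noise)   # noisy observation
        x -= g / k ** 0.7                     # gain a_k = k^(-0.7)
        avg += (x - avg) / k                  # averaged iterate
    return x, avg

# root of grad f(x) = 2(x - 1) is x = 1; the averaged iterate is the
# lower-variance estimate of it
x_last, x_avg = rm_polyak(lambda x: 2.0 * (x - 1.0), x0=5.0)
```

The averaged trajectory smooths out the noise left in the last iterate, which is the intuition behind the improved asymptotic efficiency of the averaging procedure.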

16.
We describe a numerical scheme for the discretization and solution of 2D elliptic equations with strongly varying piecewise constant coefficients arising in the stochastic homogenization of multiscale composite materials. An efficient stiffness-matrix generation scheme based on assembling local Kronecker-product matrices is introduced. The resulting large linear systems of equations are solved by the preconditioned conjugate gradient iteration with a convergence rate that is independent of the grid size and of the variation in the jumping coefficients (contrast). Using this solver, we numerically investigate the convergence of the representative volume element (RVE) method in stochastic homogenization, which extracts the effective behavior of the random coefficient field. Our numerical experiments confirm the asymptotic convergence rates of the systematic error and the standard deviation in the size of the RVE rigorously established in Gloria et al. The asymptotic behavior of covariances of the homogenized matrix, in the form of a quartic tensor, is also studied numerically. Our approach allows laptop computation of a sufficiently large number of stochastic realizations even for large RVE sizes.
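The solver component can be sketched with a minimal preconditioned conjugate gradient loop on a jumping-coefficient problem. This is a hedged 1D stand-in for illustration only: the paper works in 2D with Kronecker-product assembly, and the contrast value, grid size, and Jacobi preconditioner below are assumptions.

```python
def pcg(apply_A, b, apply_M, tol=1e-8, maxit=1000):
    """Preconditioned conjugate gradients for a symmetric positive
    definite system A x = b; apply_M approximates the inverse of A."""
    x = [0.0] * len(b)
    r = b[:]                                   # residual of the zero guess
    z = apply_M(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(1, maxit + 1):
        Ap = apply_A(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = apply_M(r)
        rz, rz_old = sum(ri * zi for ri, zi in zip(r, z)), rz
        beta = rz / rz_old
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x, it

# 1D diffusion -(kappa u')' = 1 with piecewise constant, high-contrast kappa
n = 64
kappa = [1.0 if (i // 8) % 2 == 0 else 100.0 for i in range(n + 1)]  # jumps

def apply_A(v):
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0       # homogeneous Dirichlet ends
        right = v[i + 1] if i < n - 1 else 0.0
        out.append(-kappa[i] * left + (kappa[i] + kappa[i + 1]) * v[i]
                   - kappa[i + 1] * right)
    return out

diag = [kappa[i] + kappa[i + 1] for i in range(n)]
apply_M = lambda v: [vi / di for vi, di in zip(v, diag)]  # Jacobi precond.

u, iters = pcg(apply_A, [1.0] * n, apply_M)
```

The matrix is only touched through `apply_A`, mirroring the matrix-free style that makes such solvers practical for large stochastic realizations.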

17.
Chen Yong, Wang Wei, Xu Yifan. 《运筹学学报》 (Operations Research Transactions), 2010, 24(1): 88-100
This paper studies nonconvex global optimization problems with linear constraints. Building on an active-set method, a stochastic differential equation algorithm with a discontinuous diffusion term is proposed. The theoretical properties and convergence of the algorithm are discussed, and the algorithm is proved to converge in probability to a global optimal solution of the problem. Numerical results are presented.

18.
We propose a new stochastic first-order algorithm for solving sparse regression problems. In each iteration, our algorithm utilizes a stochastic oracle of the subgradient of the objective function. Our algorithm is based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory lectures on convex optimization: a basic course, Kluwer, Amsterdam, 2003). The convergence rate of our algorithm depends continuously on the noise level of the gradient. In particular, in the limiting case of noiseless gradient, the convergence rate of our algorithm is the same as that of optimal deterministic gradient algorithms. We also establish some large deviation properties of our algorithm. Unlike existing stochastic gradient methods with optimal convergence rates, our algorithm has the advantage of readily enforcing sparsity at all iterations, which is a critical property for applications of sparse regressions.
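The "sparsity at every iteration" property can be illustrated with a plain stochastic proximal gradient method. This is not the paper's estimate-sequence algorithm; it is a hedged sketch showing how per-iteration soft-thresholding keeps iterates exactly sparse, with the step size, penalty weight, and tiny lasso instance all chosen for illustration.

```python
import random

def soft_threshold(v, t):
    """Proximal map of t*||.||_1: shrinks every entry towards zero."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def stochastic_prox_grad(A, y, lam=0.5, step=0.01, steps=600, seed=0):
    """Stochastic proximal gradient for (1/2)||Ax - y||^2 + lam*||x||_1,
    sampling one row per iteration; thresholding keeps iterates sparse."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        i = rng.randrange(m)                              # pick one data row
        r = sum(a * xi for a, xi in zip(A[i], x)) - y[i]  # row residual
        g = [m * r * a for a in A[i]]                     # unbiased gradient
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)],
                           step * lam)
    return x

# tiny lasso instance: only the first coordinate is active
x_hat = stochastic_prox_grad([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                              [0.0, 0.0, 1.0]], [1.0, 0.0, 0.0])
```

Because the proximal map is applied after every stochastic step, inactive coordinates are exact zeros throughout the run rather than small nonzero values that need a final rounding pass.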

19.
This paper studies the asymptotic properties of solutions of general neutral stochastic functional differential equations. Using Lyapunov functions and the semimartingale convergence theorem, sufficient criteria for asymptotic stability, polynomial asymptotic stability, and exponential stability of the solutions are established. The conditions do not require the operator LV to be negative definite, and they exploit the beneficial role that the stochastic perturbation term plays in stabilization. The results cover and extend conclusions in the existing literature.

20.
We connect some basic issues in survival analysis in biostatistics with estimation and convergence theories in stochastic filtering. Viewing censored-data problems through a filtering perspective, we can derive estimators expressed via stochastic integral/differential equations. We then study statistical asymptotics using the convergence theory of stochastic equations. We illustrate the effectiveness of such a program by revisiting the right-censored and doubly censored data problems.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号