Optimization algorithm with probabilistic estimation

Authors: D. Yan, H. Mukai

Affiliation: Department of Systems Science and Mathematics, Washington University, St. Louis, Missouri
Abstract: In this paper, we present a stochastic optimization algorithm based on the idea of the gradient method, incorporating a new adaptive-precision technique. Because of this technique, and unlike recent methods, the proposed algorithm adaptively selects the estimation precision without any need for prior knowledge of the speed of convergence of the generated sequence. With this technique, the algorithm avoids increasing the estimation precision unnecessarily, yet it retains its favorable convergence properties; in effect, it maintains a balance between the requirements for computational accuracy and those for computational expediency. Furthermore, we present two types of convergence results delineating under what assumptions what kinds of convergence can be obtained for the proposed algorithm.

Acknowledgments: The work reported here was supported in part by NSF Grant No. ECS-85-06249 and USAF Grant No. AFOSR-89-0518. The authors wish to thank the anonymous reviewers whose careful reading and criticism have helped them improve the paper considerably.
Keywords: Stochastic optimization; Monte Carlo simulation
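
The adaptive-precision idea described in the abstract can be illustrated with a minimal sketch in which the Monte Carlo sample size used to estimate the gradient is increased only when the current estimate appears too noisy to give a reliable descent direction. This is not the authors' actual precision rule; the test used here (comparing the gradient estimate's norm to its standard error), the growth factor, and all function names are illustrative assumptions.

```python
import numpy as np


def adaptive_precision_gradient_descent(noisy_grad, x0, step=0.1, n0=10,
                                         growth=2, max_iter=200):
    """Hypothetical sketch: gradient descent whose Monte Carlo sample size
    (the estimation precision) grows only when the estimate looks unreliable."""
    x, n = np.asarray(x0, dtype=float), n0
    for _ in range(max_iter):
        g, g_std = noisy_grad(x, n)            # mean and std of n gradient samples
        # Assumed precision test: if the standard error of the estimate is of
        # the same order as the estimate itself, refine instead of stepping.
        if np.linalg.norm(g) < np.linalg.norm(g_std) / np.sqrt(n):
            n *= growth                        # raise precision only when needed
            continue
        x = x - step * g                       # ordinary gradient step otherwise
    return x, n


# Toy objective E[(x - Z)^2] with Z ~ N(1, 0.5); the true minimizer is x = 1.
def noisy_grad(x, n, rng=np.random.default_rng(0)):
    z = rng.normal(1.0, 0.5, size=(n,) + x.shape)
    samples = 2.0 * (x - z)                    # per-sample gradients of (x - z)^2
    return samples.mean(axis=0), samples.std(axis=0)


x_star, final_n = adaptive_precision_gradient_descent(noisy_grad, np.array([5.0]))
print(x_star, final_n)                         # x_star ends up close to 1
```

In the paper, the precision schedule is tied to the convergence analysis; the test above only mimics the qualitative behavior of raising the estimation precision once cheaper estimates no longer suffice.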