Similar Articles
20 similar articles found.
1.
An estimate of the convergence rate of some homogeneous Markov monotone random search optimization algorithms is obtained.

2.
The exponential rate of convergence for Markov operators is established. The operators correspond to continuous iterated function systems which are a very useful tool in some cell cycle models.
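The following Python sketch (not taken from the paper) simulates the Markov chain induced by a simple two-map iterated function system on [0, 1]; the maps, probabilities, and step counts are illustrative assumptions, chosen only to show the kind of Markov operator the abstract refers to.

```python
import random

# Minimal sketch: an iterated function system (IFS) on [0, 1] with two affine
# contractions chosen with fixed probabilities.  The induced Markov chain
# x_{n+1} = S_i(x_n) illustrates the kind of Markov operator discussed in the
# abstract; the maps and probabilities here are illustrative only.
maps = [lambda x: 0.5 * x,            # S_1: contraction toward 0
        lambda x: 0.5 * x + 0.5]      # S_2: contraction toward 1
probs = [0.3, 0.7]

def simulate(x0: float, n_steps: int) -> float:
    """Run the IFS-driven Markov chain for n_steps steps starting at x0."""
    x = x0
    for _ in range(n_steps):
        s = random.choices(maps, weights=probs, k=1)[0]
        x = s(x)
    return x

# Chains started far apart end up close in distribution, reflecting the
# exponential convergence of the associated Markov operator.
samples_a = [simulate(0.0, 50) for _ in range(10_000)]
samples_b = [simulate(1.0, 50) for _ in range(10_000)]
print(sum(samples_a) / len(samples_a), sum(samples_b) / len(samples_b))
```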

3.
Estimates of the convergence rate of some homogeneous Markov monotone random search optimization methods are given.

4.
A hidden Markov model (HMM) is said to have path-mergeable states if for any two states i, j there exist a word w and a state k such that it is possible to transition from both i and j to k while emitting w. We show that for a finite HMM with path-mergeable states the block estimates of the entropy rate converge exponentially fast. We also show that the path-mergeability property is asymptotically typical in the space of HMM topologies and easily testable.
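As a hedged illustration of the testability remark, the sketch below checks path-mergeability of a small, hypothetical HMM topology by breadth-first search in the pair (product) graph; the topology, state names, and function name are assumptions, not material from the paper.

```python
from collections import deque
from itertools import combinations

def path_mergeable(edges):
    """Check the path-mergeability property of an HMM topology.

    `edges` maps each state to a set of (symbol, next_state) pairs that have
    positive probability.  Two states i, j are path-mergeable if some word w
    leads both of them to a common state k; we test this by breadth-first
    search in the pair (product) graph.
    """
    states = list(edges)

    def pair_mergeable(i, j):
        seen, queue = {(i, j)}, deque([(i, j)])
        while queue:
            a, b = queue.popleft()
            if a == b:                      # reached a common state k
                return True
            for sym_a, na in edges[a]:
                for sym_b, nb in edges[b]:
                    if sym_a == sym_b and (na, nb) not in seen:
                        seen.add((na, nb))
                        queue.append((na, nb))
        return False

    return all(pair_mergeable(i, j) for i, j in combinations(states, 2))

# Hypothetical 3-state topology over the alphabet {0, 1}.
topology = {
    "A": {("0", "B"), ("1", "A")},
    "B": {("0", "C"), ("1", "A")},
    "C": {("0", "C"), ("1", "A")},
}
print(path_mergeable(topology))   # True: the word "1" sends every state to A
```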

5.
The convergence of the Luus-Jaakola search method for unconstrained optimization problems is established.

Notation: E^n denotes Euclidean n-space; ∇f the gradient of f(x); ∇²f the Hessian matrix; (·)^T the transpose of (·); I the index set {1, 2, ..., n}; [x_i^{*(j)}] the point around which the search is made in the (j+1)th iteration, i.e., [x_1^{*(j)}, x_2^{*(j)}, ..., x_n^{*(j)}]; r_i^{(j)} the range of x_i^{*(j)} in the (j+1)th iteration; l_1 = min_i {r_i^{(0)}}; l_2 = min_i {r_i^{(0)}}; A_j the region of search in the jth iteration, i.e., {x ∈ E^n : x_i^{*(j-1)} − 0.5 r_i^{(j-1)} ≤ x_i ≤ x_i^{*(j-1)} + 0.5 r_i^{(j-1)}, i ∈ I}; S_j the closed sphere with center at the origin and radius indexed by j; a reduction factor applied in each iteration and its complement (one minus that factor); and Γ(·) the Gamma function.

Many discussions with Dr. S. N. Iyer, Professor of Electrical Engineering, College of Engineering, Trivandrum, India, are gratefully acknowledged. The author thanks Dr. K. Surendran, Professor, Department of Electrical Engineering, P.S.G. College of Technology, Coimbatore, India, for suggesting this work.
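The following is a minimal sketch of a Luus-Jaakola-style random search consistent with the notation above (a box around the current best point whose ranges r_i shrink by a reduction factor each iteration); the parameter values and test function are illustrative assumptions, not the exact scheme whose convergence the paper analyzes.

```python
import random

def luus_jaakola(f, x0, ranges, n_outer=200, n_inner=50, gamma=0.95):
    """Sketch of a Luus-Jaakola-style random search (illustrative parameters).

    At each outer iteration, n_inner points are sampled uniformly in a box
    centered at the current best point, and the box widths are then shrunk
    by the reduction factor gamma.
    """
    x_best, f_best = list(x0), f(x0)
    r = list(ranges)                      # current search ranges r_i
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = [xb + (random.random() - 0.5) * ri
                    for xb, ri in zip(x_best, r)]
            fc = f(cand)
            if fc < f_best:
                x_best, f_best = cand, fc
        r = [gamma * ri for ri in r]      # shrink the search region
    return x_best, f_best

# Example: minimize a shifted sphere function on R^3.
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
print(luus_jaakola(sphere, x0=[5.0, -3.0, 0.0], ranges=[10.0, 10.0, 10.0]))
```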

6.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance, but its global convergence has not been established under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey conjugate gradient method (LS for short). The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
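For orientation, here is a hedged sketch of the Liu-Storey (LS) conjugate gradient direction; the step size comes from a plain backtracking Armijo rule used as a stand-in, since the paper's new nonmonotone line search (and its Lipschitz-constant estimate) is not reproduced here.

```python
import numpy as np

def ls_conjugate_gradient(f, grad, x0, max_iter=500, tol=1e-6):
    """Sketch of the Liu-Storey (LS) conjugate gradient method.

    The search direction uses the LS formula
        beta_k = g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1}),
    but the step size below comes from a plain backtracking Armijo rule,
    not the nonmonotone line search proposed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (stand-in for the paper's rule).
        fx, gd = f(x), g.dot(d)
        alpha, c, rho = 1.0, 1e-4, 0.5
        while alpha > 1e-12 and f(x + alpha * d) > fx + c * alpha * gd:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new - g) / (-d.dot(g))     # LS formula
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:       # safeguard: fall back to steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(ls_conjugate_gradient(f, grad, [-1.2, 1.0]))
```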

7.
This paper presents two main results: first, a Liapunov-type criterion for the existence of a stationary probability distribution for a jump Markov process; second, a Liapunov-type criterion for the existence and tightness of stationary probability distributions for a sequence of jump Markov processes. If the corresponding semigroups T_N(t) converge, then under suitable hypotheses on the limit semigroup this last result yields the weak convergence of the sequence of stationary processes (T_N(t), π_N) to the stationary limit one.

8.
Backtracking adaptive search is a simplified stochastic optimisation procedure which permits the acceptance of worsening objective function values. Key properties of backtracking adaptive search are defined and obtained using generating functions. Examples are given to illustrate the use of this methodology.

9.
We consider a method of centers for solving constrained optimization problems. We establish its global convergence and show that it converges at a linear rate both when the starting point of the algorithm is feasible and when it is infeasible. We demonstrate the effect of scaling on the rate of convergence. Afterwards, we extend the stability result of [5] to the infeasible case and, finally, give an application to semi-infinite optimization problems.

10.
The Pure Adaptive Search (PAS) algorithm for global optimization yields a sequence of points, each of which is uniformly distributed in the level set corresponding to its predecessor. This algorithm has the highly desirable property of solving a large class of global optimization problems using a number of iterations that increases at most linearly in the dimension of the problem. Unfortunately, PAS has remained of mostly theoretical interest due to the difficulty of generating, in each iteration, a point uniformly distributed in the improving feasible region. In this article, we derive a coupling equivalence between generating an approximately uniformly distributed point using Markov chain sampling, and generating an exactly uniformly distributed point with a certain probability. This result is used to characterize the complexity of a PAS-implementation as a function of (a) the number of iterations required by PAS to achieve a certain solution quality guarantee, and (b) the complexity of the sampling algorithm used. As an application, we use this equivalence to show that PAS, using the so-called Random ball walk Markov chain sampling method for generating nearly uniform points in a convex region, can be used to solve most convex programming problems in polynomial time.
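A minimal sketch of the PAS iteration is given below, with naive rejection sampling standing in for the Random ball walk Markov chain sampler discussed in the abstract; the box domain, objective, and iteration counts are illustrative assumptions.

```python
import random

def pure_adaptive_search(f, dim, lo, hi, n_iter=30, max_tries=100_000):
    """Sketch of Pure Adaptive Search (PAS) on a box [lo, hi]^dim.

    Each iteration draws a point uniformly from the improving region
    {x : f(x) < current value}.  Here this is done by naive rejection
    sampling, which is exactly the expensive step that the paper replaces
    with Markov chain (Random ball walk) sampling; the rejection sampler is
    only a stand-in to keep the sketch self-contained.
    """
    uniform_point = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    x = uniform_point()
    fx = f(x)
    for _ in range(n_iter):
        for _ in range(max_tries):
            y = uniform_point()
            fy = f(y)
            if fy < fx:               # accepted draw is uniform on {f < fx}
                x, fx = y, fy
                break
        else:
            break                     # improving region too small to hit
    return x, fx

# Example: a convex quadratic on [-5, 5]^2.
print(pure_adaptive_search(lambda x: x[0] ** 2 + (x[1] - 1) ** 2,
                           dim=2, lo=-5, hi=5))
```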

11.
12.
1. Introduction. In this paper we analyze the convergence of multiplicative iterative algorithms for the minimization of a differentiable function defined on the positive orthant of R^n. The algorithm is suggested by Eggermont [1], and is related to the EM [2] (Expectation-Maximization) algorithm for positron emission tomography [3] and image reconstruction [4]. We consider the problem min f(x) s.t. x ≥ 0. The multiplicative iterative algorithms have the form … for j = 1, 2, …, n, with a step parameter determined through a line search. While Iusem [5] established an elegant conv…

13.
14.
Pure Adaptive Search is a stochastic algorithm which has been analyzed for continuous global optimization. When a uniform distribution is used in PAS, it has been shown to have complexity which is linear in dimension. We define strong and weak variations of PAS in the setting of finite global optimization and prove analogous results. In particular, for the n-dimensional lattice {1, ..., k}^n, the expected number of iterations to find the global optimum is linear in n. Many discrete combinatorial optimization problems, although having intractably large domains, have quite small ranges. The strong version of PAS for all problems, and the weak version of PAS for a limited class of problems, has complexity of the order of the size of the range. The authors would like to thank the Department of Mathematics and Statistics at the University of Canterbury for support of this research.
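The sketch below runs a PAS-style sequence on the finite lattice {1, ..., k}^n, again using rejection sampling as a simple stand-in and without modeling the strong/weak distinction from the paper; the objective and lattice sizes are illustrative assumptions.

```python
import random

def discrete_pas(f, n, k, max_tries=100_000):
    """Sketch of a Pure Adaptive Search sequence on the lattice {1, ..., k}^n.

    Each accepted point is uniform over the strictly improving lattice points;
    rejection sampling is used as a simple stand-in for that step, and the
    strong/weak PAS distinction from the paper is not modeled.
    """
    rand_point = lambda: tuple(random.randint(1, k) for _ in range(n))
    x = rand_point()
    fx = f(x)
    iterations = 0
    while True:
        for _ in range(max_tries):
            y = rand_point()
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                iterations += 1
                break
        else:
            return x, fx, iterations      # no improving point found

# Example: a separable objective on {1, ..., 10}^5.
print(discrete_pas(lambda x: sum((xi - 3) ** 2 for xi in x), n=5, k=10))
```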

15.
The exponential convergence rate in entropy is studied for symmetric forms, with special attention to the Markov chain with a state space having only two points. Some upper and lower bounds on the rate are obtained, and five examples with precise or qualitatively exact estimates are presented.
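As a numerical illustration (with an arbitrary chain, not one from the paper), the sketch below tracks the relative entropy of a two-point Markov chain with respect to its stationary distribution; the ratio of successive entropies settles to a constant, i.e. the decay is exponential.

```python
import numpy as np

# Illustrative two-point Markov chain (the special state space highlighted in
# the abstract).  We track the relative entropy D(p_n || pi) of the law p_n
# with respect to the stationary distribution pi; the ratio of successive
# entropies settles to a constant, showing exponential decay.
a, b = 0.3, 0.1                               # transition probabilities 0->1 and 1->0
P = np.array([[1 - a, a], [b, 1 - b]])
pi = np.array([b, a]) / (a + b)               # stationary distribution

def rel_entropy(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([1.0, 0.0])                      # start deterministically in state 0
prev = rel_entropy(p, pi)
for n in range(1, 11):
    p = p @ P
    cur = rel_entropy(p, pi)
    print(n, cur, cur / prev if prev > 0 else float("nan"))
    prev = cur
```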

16.
Let P be a transition matrix which is symmetric with respect to a measure π. The spectral gap of P in L^2(π)-space, denoted by gap(P), is defined as the distance between 1 and the rest of the spectrum of P. In this paper, we study the relationship between gap(P) and the convergence rate of P^n. When P is transient, the convergence rate of P^n is equal to 1 − gap(P). When P is ergodic, we give explicit upper and lower bounds for the convergence rate of P^n in terms of gap(P). These results are extended to L^∞(π)-space.
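The following sketch illustrates these quantities on a small reversible chain chosen for the example (not taken from the paper): it computes gap(P) from the spectrum and compares the decay of P^n toward its rank-one limit with the geometric rate 1 − gap(P).

```python
import numpy as np

# P below is symmetric and doubly stochastic, so it is symmetric with respect
# to the uniform measure pi and its eigenvalues are real (here: 1, 0.3, 0.1).
# gap(P) is the distance between 1 and the rest of the spectrum; since the
# other eigenvalues are nonnegative, P^n approaches its rank-one limit at the
# geometric rate 1 - gap(P).
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.full(3, 1 / 3)

eigs = np.sort(np.linalg.eigvalsh(P))[::-1]   # symmetric P: real spectrum
gap = 1.0 - eigs[1]                           # distance from 1 to the rest
print("eigenvalues:", eigs, " gap(P) =", gap)

Pi = np.outer(np.ones(3), pi)                 # rank-one limit of P^n
Pn = np.eye(3)
for n in range(1, 8):
    Pn = Pn @ P
    print(n, np.linalg.norm(Pn - Pi), (1 - gap) ** n)
```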

17.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies, as well as to algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
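As one concrete member of the algorithm class being analyzed, the sketch below draws candidates from a multivariate Normal (an elliptical distribution) centered at the current best point; the test function, step scale sigma, and iteration budget are illustrative assumptions, and the code is not from the paper.

```python
import numpy as np

def gaussian_random_search(f, x0, sigma=1.0, n_iter=5000, seed=0):
    """A simple member of the algorithm class the abstract analyzes.

    Each candidate is drawn from a multivariate Normal (an elliptical
    distribution) centered at the current best point, and improvements are
    kept.  Because the Gaussian has full support, every region of the domain
    is sampled with positive probability, which is the kind of condition the
    paper ties to convergence with probability 1.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

# Example: a multimodal test function on R^2.
f = lambda x: (x[0] ** 2 + x[1] ** 2) / 20 + np.sin(x[0]) * np.cos(x[1]) + 1
print(gaussian_random_search(f, x0=[4.0, 4.0]))
```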

18.
This paper studies the ranking problem in the context of regularization theory, which allows a simultaneous analysis of a wide class of ranking algorithms. Some of them were previously studied separately; for these, our analysis gives a better convergence rate than those reported in the literature. We also supplement our theoretical results with numerical illustrations and discuss the application of ranking to the problem of estimating the risk from errors in blood glucose measurements of diabetic patients.

19.
The first-order optimality conditions of linearly constrained convex optimization problems and of saddle-point problems form a monotone variational inequality. When these problems are solved in the variational inequality framework, choosing a suitable matrix G and applying the PPA (proximal point algorithm) under the G-norm makes the subproblems in each iteration considerably easier to solve. This paper proves that this class of customized PPA algorithms has an error bound with a convergence rate of 1/k.
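A minimal sketch of a PPA step under a G-norm is given below for the monotone affine variational inequality arising from the KKT conditions of an equality-constrained convex quadratic program; the problem data and the simple diagonal choice of G are illustrative assumptions and do not reproduce the customized choices analyzed in the paper.

```python
import numpy as np

# Minimal sketch of a proximal point algorithm (PPA) under a G-norm for the
# monotone variational inequality given by the first-order optimality (KKT)
# conditions of  min 0.5 x^T Q x + c^T x  s.t.  A x = b.
# Writing w = (x, lam) and F(w) = M w + q, the PPA step solves
#     F(w^{k+1}) + G (w^{k+1} - w^k) = 0,
# i.e. w^{k+1} = (M + G)^{-1} (G w^k - q).  The data and the diagonal G below
# are illustrative only.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
M = np.block([[Q, -A.T], [A, np.zeros((m, m))]])   # monotone affine operator
q = np.concatenate([c, -b])
G = 2.0 * np.eye(n + m)                            # a simple positive-definite metric

w = np.zeros(n + m)
for k in range(200):
    w = np.linalg.solve(M + G, G @ w - q)          # G-proximal (PPA) step

x, lam = w[:n], w[n:]
print("x =", x, " residuals:", Q @ x + c - A.T @ lam, A @ x - b)
```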

20.