Similar Documents
20 similar documents found
1.
The stochastic approximation problem is to find a root or extremum of a nonlinear function when only noisy measurements of the function are available. The classical method for this problem is the Robbins-Monro (RM) algorithm, which uses a noisy evaluation of the negative gradient as the iterative direction. To accelerate the RM algorithm, this paper gives a framework algorithm using adaptive iterative directions. At each iteration, the new algorithm moves either along the noisy evaluation of the negative gradient direction or along some other direction, according to a switching criterion. Two feasible choices of the criterion are proposed, yielding two corresponding algorithms; different choices of direction under the same switching criterion within the framework also yield different algorithms. Simultaneous perturbation difference forms of the two algorithms are also proposed. Almost sure convergence of all the new algorithms is established, and numerical experiments show that they are promising.
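For reference, a minimal sketch of the classical Robbins-Monro iteration described above; the adaptive-direction switching proposed in the paper is not reproduced, and the step-size schedule a/(k+1) and the toy noisy-gradient oracle are illustrative assumptions.

```python
import numpy as np

def robbins_monro(noisy_grad, x0, n_iters=1000, a=1.0):
    """Classical Robbins-Monro iteration: step along the noisy negative gradient.
    The schedule a_k = a / (k + 1) satisfies sum a_k = inf and sum a_k^2 < inf."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        x = x - (a / (k + 1)) * noisy_grad(x)
    return x

# Toy problem: find the root of the expected gradient of f(x) = ||x||^2 / 2
# when the gradient is observed with additive noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(robbins_monro(noisy_grad, np.ones(3)))
```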

2.
A scaled stochastic approximation algorithm is proposed for the adaptive filtering problem of antenna arrays. At each iteration the algorithm takes a scaled noisy negative gradient direction as the new iterative direction. In contrast to other existing stochastic approximation algorithms, it requires no tuning of a stability constant, which to some extent removes the difficulty of choosing that constant. Numerical simulations show that the algorithm outperforms existing filtering algorithms and is more stable than the classical Robbins-Monro (RM) algorithm.

3.
In this paper, a stochastic approximation (SA) algorithm with a new adaptive step size scheme is proposed. The new scheme uses a fixed number of previous noisy function values to adjust the step at every iteration. The algorithm is formulated for a general descent direction, and almost sure convergence is established. The case where the negative gradient is chosen as the search direction is also considered. The algorithm is tested on a set of standard test problems. Numerical results show good performance and verify the efficiency of the algorithm compared with some existing algorithms with adaptive step sizes.

4.
In this paper, we consider optimizing the performance of a stochastic system that is too complex for theoretical analysis but can be evaluated by simulation or direct experimentation. To optimize the expected performance of such a system as a function of several input parameters, we propose a hybrid stochastic approximation algorithm for finding the root of the gradient of the response function. At each iteration the hybrid algorithm selects either an average of two independent noisy negative gradient directions or a scaled noisy negative gradient direction. The almost sure convergence of the hybrid algorithm is established. Numerical comparisons with two existing algorithms on a simple queueing system and five nonlinear unconstrained stochastic optimization problems show the advantage of the hybrid algorithm.
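A minimal sketch of the hybrid step described above. The simple alternation between the two direction modes is a placeholder for the paper's selection rule, and the 1/(k+1) step schedule and the scale parameter are illustrative assumptions.

```python
import numpy as np

def hybrid_sa(noisy_grad, x0, n_iters=500, a=1.0, scale=0.5):
    """Hybrid stochastic approximation sketch: at each iteration the update
    direction is either the average of two independent noisy negative gradient
    estimates or a scaled single noisy negative gradient estimate."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        if k % 2 == 0:
            d = -(noisy_grad(x) + noisy_grad(x)) / 2.0  # averaged estimate
        else:
            d = -scale * noisy_grad(x)                  # scaled single estimate
        x = x + (a / (k + 1)) * d
    return x

# Toy noisy-gradient oracle for f(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(hybrid_sa(noisy_grad, np.ones(3)))
```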

5.
A stochastic approximation (SA) algorithm with new adaptive step sizes for unconstrained minimization in a noisy environment is proposed. The new adaptive step size scheme uses ordered statistics of a fixed number of previous noisy function values as a criterion for accepting good and rejecting bad steps. The scheme allows the algorithm to move in bigger steps and avoid steps proportional to $1/k$ when larger steps are expected to improve performance. An algorithm with the new adaptive scheme is defined for a general descent direction, and almost sure convergence is established. The performance of the new algorithm is tested on a set of standard test problems and compared with relevant algorithms. Numerical results support the theoretical expectations and verify the efficiency of the algorithm regardless of the chosen search direction and noise level. Numerical results on problems arising in machine learning are also presented; a linear regression problem is considered using a real data set. The results suggest that the proposed algorithm shows promise.
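A minimal sketch of the accept/reject idea above, assuming a sliding window of recent noisy function values. Comparing the trial value against the window median, and doubling or shrinking the step, are illustrative stand-ins for the paper's ordered-statistics rule.

```python
import numpy as np
from collections import deque

def sa_adaptive_steps(noisy_f, noisy_grad, x0, n_iters=500, window=5, a=1.0):
    """SA sketch with an adaptive step size driven by recent noisy function values."""
    x = np.asarray(x0, dtype=float)
    recent = deque(maxlen=window)
    step = a
    for k in range(n_iters):
        x_trial = x - step * noisy_grad(x)
        f_trial = noisy_f(x_trial)
        if len(recent) == window and f_trial > np.median(list(recent)):
            step = a / (k + 1)          # bad step: reject it, fall back to a 1/k-type step
        else:
            x = x_trial                  # good step: accept it
            step = min(2.0 * step, a)    # ...and allow a larger next step
        recent.append(f_trial)
    return x

# Example: noisy quadratic objective.
rng = np.random.default_rng(1)
noisy_f = lambda x: 0.5 * float(x @ x) + 0.01 * rng.standard_normal()
noisy_grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)
print(sa_adaptive_steps(noisy_f, noisy_grad, np.full(4, 2.0)))
```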

6.
This paper presents an adaptive algorithm for the log-optimal portfolio problem; it is a variant of the stochastic approximation method. Since the problem is a constrained optimization problem, a gradient ascent direction based on the constraint manifold is used in place of the ordinary gradient ascent direction. Convergence of the algorithm is proved under reasonable assumptions, and an asymptotic stability analysis is carried out. Finally, the algorithm is applied to the log-optimal portfolio problem on real data provided by the Shanghai Stock Exchange, with satisfactory numerical results.
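A minimal sketch of stochastic gradient ascent for the log-optimal objective E[log(b^T x)] over portfolio weights b. A Euclidean projection onto the probability simplex is used here in place of the paper's manifold-based ascent direction, and the sampling scheme and step schedule are illustrative assumptions.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (sum = 1, entries >= 0)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def log_optimal_portfolio(returns, n_iters=2000, a=0.1):
    """Stochastic approximation sketch for maximizing E[log(b^T x)]."""
    n_days, n_assets = returns.shape
    b = np.full(n_assets, 1.0 / n_assets)
    rng = np.random.default_rng(0)
    for k in range(n_iters):
        x = returns[rng.integers(n_days)]   # one noisy observation of the returns
        grad = x / (b @ x)                   # gradient of log(b^T x) with respect to b
        b = project_to_simplex(b + (a / (k + 1)) * grad)
    return b

# Example with synthetic daily gross returns for 4 assets.
rng = np.random.default_rng(0)
returns = 1.0 + 0.02 * rng.standard_normal((250, 4))
print(log_optimal_portfolio(returns))
```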

7.
This paper studies $\ell_1$-regularized optimization problems on the sphere, whose objective consists of a general smooth term and a nonsmooth $\ell_1$ regularization term, under the assumption that stochastic gradients of the smooth term can be estimated by a stochastic first-order oracle. Such problems arise widely in machine learning, image and signal processing, and statistics. Combining the manifold proximal gradient method with stochastic gradient estimation, a spherical stochastic proximal gradient algorithm is proposed. Based on a global implicit function theorem for nonsmooth functions, the Lipschitz continuity of the subproblem solution with respect to its parameters is analyzed, and global convergence of the algorithm is then established. Numerical experiments on spherical $\ell_1$-regularized quadratic programming, finite-sum SPCA, and spherical $\ell_1$-regularized logistic regression problems, with both synthetic and real data sets, show that the proposed algorithm compares favorably in CPU time with the manifold proximal gradient method and the Riemannian stochastic proximal gradient method.

8.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a line search method: the basic idea is to take a combination of the current gradient and some previous search directions as the new search direction, and to find a step size by various inexact line searches. Using more information at the current iterate may improve the performance of the algorithm, which motivates the search for gradient algorithms that may be more effective than standard conjugate gradient methods. The concept of uniformly gradient-related directions is useful for analyzing the global convergence of the new algorithm. Global convergence and a linear convergence rate are established under fairly weak conditions. Numerical experiments show that the new algorithm converges more stably and outperforms similar methods in many situations.
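A minimal sketch of a gradient-related direction with a backtracking Armijo line search. The fixed mixing weight beta and the fallback to the steepest-descent direction are illustrative choices, not the paper's specific combination or its inexact line searches.

```python
import numpy as np

def gradient_related_descent(f, grad, x0, beta=0.3, n_iters=200, c1=1e-4, tau=0.5):
    """Line search sketch: mix the current negative gradient with the previous
    direction, keep the result gradient-related (a descent direction), and pick
    the step size by Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad(x)
        d = -g + beta * d_prev
        if g @ d >= 0:            # not a descent direction: fall back to -g
            d = -g
        t = 1.0
        while f(x + t * d) > f(x) + c1 * t * (g @ d):
            t *= tau              # Armijo backtracking
        x = x + t * d
        d_prev = d
    return x

# Example: a simple convex quadratic.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(gradient_related_descent(f, grad, np.zeros(2)))
```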

9.
In this paper we report a sparse truncated Newton algorithm for large-scale bound-constrained nonlinear minimization problems. The truncated Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At each iteration the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. Global convergence and a quadratic convergence rate of the algorithm are proved, and some numerical tests are given.

10.
A reduced preconditioned conjugate gradient path method combined with a nonmonotone technique is used to solve nonlinear optimization problems with linear equality constraints. Based on generalized elimination, the original problem is transformed into an unconstrained problem in the null space of the equality-constraint matrix; the reduced preconditioned equations are obtained from an augmented system, and a conjugate gradient path is constructed to solve the quadratic model, yielding the search direction and step size. Using the good properties of the conjugate gradient path, the algorithm is shown, under reasonable assumptions, to be globally convergent and to retain a fast superlinear convergence rate. Numerical results further demonstrate its feasibility and effectiveness.
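The generalized-elimination idea above can be illustrated with a small null-space reduction. The generic CG solver below stands in for the paper's reduced preconditioned conjugate gradient path, and the particular solution via least squares is an illustrative choice.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

def reduced_problem_solve(f, A, b, x_p=None):
    """For min f(x) s.t. A x = b, write x = x_p + Z y with A x_p = b and Z a
    basis of null(A), then minimize phi(y) = f(x_p + Z y) without constraints."""
    if x_p is None:
        x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # any particular solution
    Z = null_space(A)                                 # basis of the null space of A
    phi = lambda y: f(x_p + Z @ y)
    res = minimize(phi, np.zeros(Z.shape[1]), method="CG")
    return x_p + Z @ res.x

# Example: minimize ||x - c||^2 subject to sum(x) = 1.
c = np.array([3.0, -1.0, 2.0])
A = np.ones((1, 3)); b = np.array([1.0])
print(reduced_problem_solve(lambda x: np.sum((x - c) ** 2), A, b))
```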

11.
This letter presents an iterative estimation algorithm for modeling a class of output nonlinear systems. The basic idea is to derive an estimation model and to solve an optimization problem using gradient search. The proposed iterative numerical algorithm can estimate the parameters of a class of Wiener nonlinear systems from input–output measurement data, and it converges faster than the stochastic gradient algorithm. Numerical simulation results indicate that the proposed algorithm works well.

12.
A simple algorithmic solution is developed for the discrete-time, nonlinear system identification problem based on a stochastic approximation method. The method is applicable to the noisy as well as the noiseless input-output measurement case. Only minimal statistical knowledge of the noise and input sequences is required, and the algorithm is very easy to program. A proof of convergence for the algorithm is given along with some experimental results obtained from control system input-output data. This paper presents the results of one phase of research carried out at the University of California, Los Angeles, California, under Contract No. 951733 to the Jet Propulsion Laboratory, Pasadena, California.

13.
A new class of super-memory gradient methods for unconstrained optimization is studied, and the global convergence and linear convergence rate of the algorithm are analyzed. The algorithm generates new iterates with a multi-step curvilinear search rule that determines the descent direction and step size simultaneously at each iteration; since it requires no matrix computation or storage, it is well suited to large-scale optimization problems. Numerical experiments show that the algorithm is effective.

14.
Stochastic optimization/approximation algorithms are widely used to recursively estimate the optimum of a suitable function or its root under noisy observations when this optimum or root is a constant or evolves randomly according to slowly time-varying continuous sample paths. In comparison, this paper analyzes the asymptotic properties of stochastic optimization/approximation algorithms for recursively estimating the optimum or root when it evolves rapidly with nonsmooth (jump-changing) sample paths. The resulting problem falls into the category of regime-switching stochastic approximation algorithms with two time scales. Motivated by emerging applications in wireless communications and system identification, we analyze the asymptotic behavior of such algorithms. Our analysis assumes that the noisy observations contain a (nonsmooth) jump process modeled by a discrete-time Markov chain whose transition frequency varies much faster than the adaptation rate of the stochastic optimization algorithm. Using stochastic averaging, we prove convergence of the algorithm. The rate of convergence is obtained via bounds on the estimation errors and diffusion approximations. Remarks on improving the convergence rate through iterate averaging, and on limit mean dynamics represented by differential inclusions, are also presented. The research of G. Yin was supported in part by the National Science Foundation under DMS-0603287, in part by the National Security Agency under MSPF-068-029, and in part by the National Natural Science Foundation of China under #60574069. The research of C. Ion was supported in part by the Wayne State University Rumble Fellowship. The research of V. Krishnamurthy was supported in part by NSERC (Canada).

15.
郭雄伟  王川龙 《计算数学》2022,44(4):534-544
This paper proposes an accelerated stochastic proximal gradient algorithm for the low-rank tensor completion problem. The tensor completion model can be relaxed into an unconstrained optimization problem in the form of an average of component functions; at each iteration one function in this average is chosen at random for the variable update, which effectively reduces the considerable cost of tensor unfolding, matrix refolding, and singular value decompositions. The convergence rate of the algorithm is shown to be $O(1/k^{2})$. Finally, experiments on both randomly generated and real tensor completion problems show that the new algorithm outperforms three existing algorithms in CPU time.
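For context, a minimal FISTA-style accelerated proximal gradient sketch with the standard momentum schedule that gives the $O(1/k^{2})$ rate. The paper's method is a stochastic accelerated variant for tensor completion, where the proximal step involves singular value thresholding; here the gradient and prox are generic callables and the lasso example is purely illustrative.

```python
import numpy as np

def accelerated_prox_grad(grad, prox, x0, L, n_iters=300):
    """Accelerated proximal gradient sketch: x_{k+1} = prox(y_k - grad(y_k)/L, 1/L)
    with Nesterov momentum on the auxiliary sequence y_k."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        x_new = prox(y - grad(y) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example: lasso-type problem min 0.5*||Ax - b||^2 + lam*||x||_1 with a soft-thresholding prox.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)); b = rng.standard_normal(40); lam = 0.1
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
print(accelerated_prox_grad(grad, prox, np.zeros(20), L)[:5])
```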

16.
A main problem in adaptive optics is to reconstruct the phase spectrum given noisy phase differences. We present an efficient approach to solve the least-squares minimization problem resulting from this reconstruction, using either a truncated singular value decomposition (TSVD)-type or a Tikhonov-type regularization. Both of these approaches make use of Kronecker products and the generalized singular value decomposition. The TSVD-type regularization operates as a direct method, whereas the Tikhonov-type regularization uses a preconditioned conjugate gradient type iterative algorithm to achieve fast convergence.

17.
This paper presents an evolution program for deterministic and stochastic optimization. To overcome premature convergence and stalling of the solution, we suggest an exponential fitness scaling scheme. To avoid the chromosomes jamming into a corner, we introduce mutation-1, which mutates a chromosome in a free direction. To improve the chromosomes, we introduce mutation-2, which mutates a chromosome along the gradient direction or its negative, according to the kind of problem. Monte Carlo simulation is employed to evaluate the multiple integral, which is the most difficult task in stochastic optimization. Finally, some numerical examples are discussed.
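A minimal sketch of exponential fitness scaling as a selection-pressure device. Normalizing raw fitness to [0, 1] and the sharpness parameter beta are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def exponential_fitness_scaling(raw_fitness, beta=4.0):
    """Map normalized raw fitness through exp(beta * f) so that small
    differences among clustered chromosomes still yield distinct selection
    probabilities, counteracting premature convergence and stalling."""
    f = np.asarray(raw_fitness, dtype=float)
    span = f.max() - f.min()
    f = (f - f.min()) / (span if span > 0 else 1.0)
    return np.exp(beta * f)

# Selection probabilities before and after scaling for a clustered population.
raw = np.array([10.0, 10.1, 10.2, 10.4])
scaled = exponential_fitness_scaling(raw)
print(raw / raw.sum())
print(scaled / scaled.sum())
```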

18.
In this paper, a truncated conjugate gradient method with an inexact Gauss-Newton technique is proposed for solving nonlinear systems. The iterative direction is obtained by applying the conjugate gradient method to the inexact Gauss-Newton equation. Global convergence and a local superlinear convergence rate of the proposed algorithm are established under reasonable conditions. Finally, some numerical results are presented to illustrate the effectiveness of the proposed algorithm.
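A minimal sketch of one inexact Gauss-Newton step for F(x) = 0, where the direction approximately solves the Gauss-Newton equation (J^T J) d = -J^T F(x) by truncated conjugate gradients. The globalization, truncation rule, and tolerances of the paper's algorithm are not reproduced; the callables and the toy system are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def inexact_gauss_newton_step(jac, residual, x, maxiter=20):
    """Approximately solve (J^T J) d = -J^T F(x) with a few CG iterations."""
    J = jac(x)
    r = residual(x)
    n = J.shape[1]
    JTJ = LinearOperator((n, n), matvec=lambda v: J.T @ (J @ v))
    d, _ = cg(JTJ, -J.T @ r, maxiter=maxiter)
    return d

# Example: one step for F(x) = [x0^2 + x1 - 3, x0 - x1].
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
residual = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1]])
x0 = np.array([1.0, 1.0])
print(x0 + inexact_gauss_newton_step(jac, residual, x0))
```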

19.
A New Super-Memory Gradient Algorithm for Unconstrained Optimization
时贞军 《数学进展》2006,35(3):265-274
This paper proposes a new super-memory gradient algorithm for unconstrained optimization. The search direction is a linear combination of the negative gradient at the current point and the negative gradient at the previous point, and the step size is determined by exact line search or Armijo search. Global convergence and a linear convergence rate are proved under very weak conditions. Since the algorithm avoids storing and computing any matrices related to the objective function, it is suitable for large-scale unconstrained optimization problems. Numerical experiments show that the algorithm is more effective than standard conjugate gradient methods.

20.
An Affine Conjugate Gradient Path Method for Bound-Constrained Nonlinear Optimization
This paper proposes an affine interior-point discrete conjugate gradient path method for bound-constrained nonlinear optimization problems. A candidate iterative direction is obtained by constructing a preconditioned discrete conjugate gradient path to solve the quadratic model, and the next iterate is obtained by combining this with an interior-point backtracking line search. Under reasonable assumptions, global convergence and a local superlinear convergence rate are proved. Finally, numerical results demonstrate the effectiveness of the algorithm.
