Similar Literature
1.
陈忠 《数学杂志》2003,23(1):54-56
In [1]-[3], Zheng Quan et al. proposed a mean-value algorithm for solving unconstrained optimization problems and discussed its convergence under the assumption that the objective function f(x) is continuous. Assuming instead that f(x) is a convex function on a bounded closed set Ω, this paper proves that the mean-value algorithm for minimizing a convex function converges linearly.
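A minimal Monte Carlo sketch of the mean-value (level-set) iteration, assuming the usual form c_{k+1} = mean of f over the level set {x ∈ Ω : f(x) ≤ c_k}; the domain, sample size, and test function below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_value_iteration(f, lo, hi, c0, n=200_000, tol=1e-6, seed=0):
    """Monte Carlo version of the mean-value algorithm: repeatedly replace the
    level value c by the average of f over the level set {f <= c}."""
    rng = np.random.default_rng(seed)
    c = c0
    while True:
        x = rng.uniform(lo, hi, size=n)   # uniform samples on Omega = [lo, hi]
        fx = f(x)
        inside = fx[fx <= c]              # sampled points in the current level set
        if inside.size == 0:              # c fell below min f on the samples
            return c
        c_new = inside.mean()             # next level value
        if c - c_new < tol:
            return c_new
        c = c_new

# Convex f(x) = (x - 1)^2 on Omega = [-4, 4]: here c_{k+1} = c_k / 3 exactly,
# so the iteration drives c to min f = 0 linearly, matching the paper's claim.
print(mean_value_iteration(lambda x: (x - 1.0) ** 2, -4.0, 4.0, c0=9.0))
```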

2.
The alternating minimization algorithm (AMA) was first proposed in [SIAM J. Control Optim., 1991, 29(1): 119-138] and can be used to minimize the sum of a strongly convex function and a convex function. This paper applies AMA directly to minimize the sum of a strongly convex function and a weakly convex function. Under the assumption that the strong convexity modulus exceeds the weak convexity modulus, we prove that the sequence generated by AMA converges globally to a solution of the optimization problem, and that the convergence rate of the sequence is linear when one of the functions in the problem is smooth.
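A minimal sketch of AMA on the splitting min f(x) + g(z) subject to x = z, with f(x) = ½‖x − a‖² strongly convex and g = τ‖·‖₁ convex; the paper's weakly convex case is not reproduced, and the step size ρ and data are illustrative. The x-step minimizes the plain Lagrangian (this is what exploits strong convexity of f), while the z-step uses the augmented Lagrangian.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama(a, tau, rho=1.0, iters=200):
    """AMA for min 0.5||x-a||^2 + tau||z||_1  s.t.  x = z."""
    lam = np.zeros_like(a)
    for _ in range(iters):
        x = a - lam                                   # argmin_x 0.5||x-a||^2 + lam^T x
        z = soft_threshold(x + lam / rho, tau / rho)  # argmin_z tau||z||_1 - lam^T z + rho/2 ||x-z||^2
        lam = lam + rho * (x - z)                     # dual ascent step
    return x, z

a = np.array([3.0, -0.5, 1.2])
x, z = ama(a, tau=1.0)
print(z)  # soft_threshold(a, 1.0) = [2.0, 0.0, 0.2], the known closed-form solution
```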

3.
A Result on the Convergence of the DFP Algorithm
Whether variable metric algorithms are globally convergent when applied to nonconvex functions is an important research question. Under the assumptions that ∇f satisfies a Lipschitz condition and that the sequence of iterates generated by the algorithm converges, [1] proved global convergence of the DFP algorithm. This paper gives a new condition, neither implying nor implied by the Lipschitz condition, under which we prove that if the sequence generated by the algorithm converges to some point, that point must be a stationary point of the function. 1. Introduction. For the nonlinear optimization problem min_{x ∈ R^n} f(x), where f: R^n → R^1 is continuously differentiable, variable metric algorithms are usually effective. Among the many variable metric algorithms, the DFP algorithm (Davidon, Fletcher and …

4.
Based on a modified version of the CHKS smoothing function, this paper proposes a nonmonotone smoothing Newton algorithm with a scaled central path for solving symmetric cone linear programming (SCLP). Using the theory of Euclidean Jordan algebras, the algorithm is proved to be globally and superlinearly convergent under suitable assumptions. Numerical results demonstrate the effectiveness of the algorithm.
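For reference, the standard CHKS (Chen-Harker-Kanzow-Smale) smoothing function is φ_μ(a, b) = a + b − √((a − b)² + 4μ²); the paper uses a modified version that is not reproduced here. The sketch below just illustrates the smoothing property: φ_μ → 2·min(a, b) as μ → 0, and φ_μ(a, b) = 0 exactly when a > 0, b > 0, ab = μ², a smoothed complementarity condition.

```python
import numpy as np

def chks(a, b, mu):
    """Standard CHKS smoothing function: a smooth approximation of 2*min(a, b)."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

a, b = 0.3, 1.7
for mu in (1.0, 0.1, 0.01, 0.0):
    print(mu, chks(a, b, mu) / 2.0)   # tends to min(a, b) = 0.3 as mu -> 0

# Zeros of chks encode perturbed complementarity: chks(a, b, mu) = 0 <=> a, b > 0 and a*b = mu^2.
mu = 0.05
print(np.isclose(chks(0.5, mu ** 2 / 0.5, mu), 0.0))  # True
```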

5.
Consistency of Change-Point Estimation for the Parameters of Symmetric Stable Distributions
Suppose a stable distribution has characteristic exponent α with 1 < α < 2 and is symmetric about its mean μ. This paper considers the change-point problem induced by a change in α or in the scale parameter β: whether a change occurs and, if so, when. If the mean is known, a change in α or β changes the value f(μ) of the density f(x) at μ, and we estimate this value by a kernel density estimator. If the mean is unknown, the value is estimated via the empirical characteristic function, and the consistency and convergence rate of the estimator are further discussed. We then consider change points caused by a change in the mean: if the mean changes, the parameters of the characteristic function before and after the change point differ, and an estimator of the change point based on the empirical characteristic function is given, with a similar convergence rate. Finally, an application to detecting abrupt changes in financial markets is presented.
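A minimal sketch of the empirical characteristic function used above. That a centered symmetric α-stable law has |φ(t)| = exp(−(σ|t|)^α), so that α can be read off from the ECF modulus at two frequencies, is standard; the specific frequencies and parameters below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from scipy.stats import levy_stable

def ecf(x, t):
    """Empirical characteristic function (1/n) * sum_j exp(i t x_j)."""
    return np.mean(np.exp(1j * np.outer(t, x)), axis=1)

# Simulate symmetric alpha-stable data (beta=0 gives symmetry), alpha in (1, 2).
alpha_true, n = 1.5, 100_000
x = levy_stable.rvs(alpha_true, 0.0, size=n, random_state=0)

# |phi(t)| = exp(-(sigma*|t|)**alpha)  =>  log(-log|phi(t2)| / -log|phi(t1)|) = alpha*log(t2/t1).
t1, t2 = 0.2, 1.0
phi = np.abs(ecf(x, np.array([t1, t2])))
alpha_hat = np.log(np.log(phi[1]) / np.log(phi[0])) / np.log(t2 / t1)
print(alpha_hat)  # close to 1.5
```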

6.
This paper proposes a combined trust region and quasi-Newton algorithm for equality constrained problems. Its distinguishing feature is the switch: if the approximate Hessian of the Lagrangian is positive definite on the null space of the Jacobian of the equality constraints, a quasi-Newton step is taken; otherwise a trust region step is used. Under the usual convergence assumptions of trust region methods, global convergence of the combined algorithm is proved.
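A minimal sketch of the switching test described above, assuming equality constraints c(x) = 0 with Jacobian A: project the approximate Hessian B onto the null space of A and test positive definiteness via a Cholesky attempt. Function and variable names are illustrative.

```python
import numpy as np

def quasi_newton_step_ok(B, A, eps=1e-10):
    """True iff the approximate Hessian B of the Lagrangian is positive definite
    on the null space of the constraint Jacobian A (the condition under which
    the combined method takes the quasi-Newton step)."""
    _, s, Vt = np.linalg.svd(A)                       # orthonormal null-space basis from SVD
    rank = int(np.sum(s > eps * s.max())) if s.size else 0
    Z = Vt[rank:].T                                   # columns span null(A)
    if Z.shape[1] == 0:
        return True                                   # trivial null space
    try:
        np.linalg.cholesky(Z.T @ B @ Z)               # succeeds iff Z^T B Z is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[1.0, 1.0, 0.0]])                       # one linear constraint in R^3
B = np.diag([2.0, 1.0, -0.5])                         # indefinite on null(A)
print(quasi_newton_step_ok(B, A))                     # False -> take the trust region step
```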

7.
For unconstrained optimization problems, a new class of three-term memory gradient algorithms is proposed. By determining the admissible range of the parameters under certain assumptions, the three-term memory gradient direction is guaranteed to be a direction of sufficient descent for the objective function. Global convergence of the algorithm is discussed under a nonmonotone step-size search. To obtain an algorithm with better convergence properties, a new memory gradient projection algorithm is proposed by combining some techniques from Solodov and Svaiter (2000), and its global convergence is proved when the objective function is pseudoconvex.
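A hedged sketch of a three-term memory gradient direction of the generic form d_k = −g_k + β_k d_{k−1} + γ_k d_{k−2}. The paper derives exact admissible parameter ranges that are not reproduced here; the sketch instead enforces the sufficient-descent condition g_kᵀd_k ≤ −c‖g_k‖² by a safeguard fallback.

```python
import numpy as np

def three_term_direction(g, d_prev, d_prev2, beta=0.1, gamma=0.05, c=0.5):
    """Three-term memory gradient direction with a sufficient-descent safeguard:
    if the combined direction fails g^T d <= -c*||g||^2, fall back to -g."""
    d = -g + beta * d_prev + gamma * d_prev2
    if g @ d > -c * (g @ g):          # sufficient descent violated
        d = -g                        # safeguard: plain steepest descent
    return d
```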

8.
This paper studies a classical model of a discrete dynamical system generated by iterating the function f(x) = cos x and discusses its global convergence. First, it is proved that for every positive integer n the n-fold composite cos^n x has a unique fixed point; second, it is proved that for any initial value x_0 ∈ R the Picard iteration sequence {cos^n x_0} converges to the same constant, which is exactly the fixed point of f(x) = cos x. Hence the discrete dynamical system {f^0, f^1, f^2, …} generated by f is globally convergent.
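The global convergence is easy to observe numerically: from any starting point, the Picard iteration x_{n+1} = cos x_n settles at the unique fixed point of cos (the Dottie number, approximately 0.739085).

```python
import math

def picard_cos(x0, tol=1e-12, max_iter=10_000):
    """Picard iteration x_{n+1} = cos(x_n); converges for every real x0."""
    x = x0
    for _ in range(max_iter):
        x_next = math.cos(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

for x0 in (-100.0, 0.0, 3.14, 42.0):
    print(picard_cos(x0))   # all print ~0.7390851332151607, the fixed point of cos
```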

9.
For the unconstrained program (P): min_{x ∈ R^n} f(x), where f: R^n → R^1 is a continuously differentiable function, a supermemory gradient algorithm is designed. Dropping the assumption that the iterate sequence {x_k} is bounded, and under a generalized Armijo step-size search, global convergence of the algorithm is discussed and rather strong convergence properties are established.
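A minimal backtracking sketch of an Armijo-type step-size search for a given descent direction; the paper's generalized Armijo rule and the supermemory gradient direction itself are not reproduced, so this is only the classical special case with steepest descent as the example direction.

```python
import numpy as np

def armijo_step(f, x, g, d, sigma=1e-4, beta=0.5, alpha0=1.0):
    """Backtracking Armijo search: find alpha with
    f(x + alpha*d) <= f(x) + sigma*alpha*g^T d, for a descent direction d."""
    alpha, fx, slope = alpha0, f(x), g @ d
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

# Example on f(x) = ||x||^2 with steepest-descent direction d = -g.
f = lambda x: x @ x
x = np.array([2.0, -1.0]); g = 2 * x
print(armijo_step(f, x, g, -g))   # 0.5, the exact minimizing step here
```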

10.
In this paper, a modified limited-memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses the global convergence property without any convexity assumption on the objective function. Under some suitable conditions, the global convergence of the proposed method is proved. Numerical results are reported that illustrate the efficiency of the proposed method.
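For context, a compact sketch of the standard L-BFGS two-loop recursion that underlies limited-memory BFGS methods; the paper's specific modification, which secures global convergence without convexity, is not reproduced here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k g, where H_k is the
    limited-memory inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (most recent last)."""
    q = g.copy()
    alphas, rhos = [], []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha); rhos.append(rho)
    if s_list:                                             # initial scaling H_0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), alpha, rho in zip(zip(s_list, y_list),     # oldest pair first
                                  reversed(alphas), reversed(rhos)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return -q
```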

11.
Global Convergence Analysis of the DFP Algorithm
徐大川 《计算数学》1997,19(3):287-292
1. Introduction. Theoretical analysis and extensive numerical experiments show that, among the various algorithms for solving (1.1), quasi-Newton methods are one of the most effective classes of methods. The DFP algorithm is the earliest quasi-Newton method; it was first given by Davidon [2] and modified by Fletcher and Powell [3]. The computational steps of the DFP algorithm are as follows. Algorithm 1.1. 1°. Choose x_1 ∈ R^n and a symmetric positive definite B_1 ∈ R^{n×n}; set k := 1. 2°. Compute g_k = ∇f(x_k); if g_k = 0, stop with solution x_k; otherwise go to the next step. 3°. Set d_k = -B_k^{-1} g_k. 4°. Perform a line search to determine the step size α_k. In the above algorithm, the step size α_k can be determined in two ways: one is exact line search, i.e., α_k satisfies: …; the other is inexact line search. This paper considers the Wolfe line search, i.e., α_k satisfies: … where 0 < …
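A runnable sketch of Algorithm 1.1 using the equivalent inverse-Hessian form of the DFP update, H_{k+1} = H_k + ssᵀ/(sᵀy) − H_k y yᵀ H_k/(yᵀ H_k y); for brevity the line search is a simple backtracking Armijo rule rather than the exact or Wolfe searches the paper analyzes, and the test function is an illustrative assumption.

```python
import numpy as np

def dfp(f, grad, x, iters=500, tol=1e-8):
    """DFP quasi-Newton method (inverse-Hessian form), backtracking line search."""
    H = np.eye(x.size)                        # H_1 symmetric positive definite
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                            # quasi-Newton direction
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d
        x_new, g_new = x + s, grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:                     # curvature condition keeps H positive definite
            Hy = H @ y
            H += np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Rosenbrock test: minimum at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(dfp(f, grad, np.array([-1.2, 1.0])))
```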

12.
In this paper, we propose a new method, the level-value estimation method, for finding the global minimizer of a continuous optimization problem. For this purpose, we define the variance function and the mean deviation function, both of which depend on a level value of the objective function to be minimized. These functions have some good properties when Newton's method is used to solve the variance equation obtained by setting the variance function to zero. We prove that the largest root of the variance equation equals the global minimal value of the corresponding optimization problem. We also propose an implementable algorithm of the level-value estimation method in which importance sampling is used to calculate the integrals defining the variance function and the mean deviation function. The main idea of the cross-entropy method is used to update the parameters of the sample distribution at each iteration. The implementable level-value estimation method is shown to satisfy the convergence conditions of the inexact Newton method for solving a single-variable nonlinear equation, so convergence is guaranteed. The numerical results indicate that the proposed method is applicable and efficient in solving global optimization problems.
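A heavily hedged Monte Carlo sketch of the level-value idea, assuming the definitions m(c) = E[f(X) | f(X) ≤ c] and v(c) = E[(f(X) − m(c))² | f(X) ≤ c] for X uniform on the search box; the paper's exact definitions, its importance sampling, and the cross-entropy updates are not reproduced, and a finite-difference slope replaces the inexact Newton derivative.

```python
import numpy as np

def level_value_estimate(f, lo, hi, c, n=400_000, iters=20, seed=0):
    """Drive the level value c toward the largest root of v(c) = 0 (the global
    minimum value) with Newton-type steps on a Monte Carlo estimate of v."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        fx = f(rng.uniform(lo, hi, size=n))        # common samples for this step
        def v(level):
            sel = fx[fx <= level]                  # f-values inside the level set
            return sel.var() if sel.size > 1 else 0.0
        vc = v(c)
        if vc < 1e-10:
            break
        h = max(1e-3, 0.05 * vc ** 0.5)            # step for the difference quotient
        dv = (v(c + h) - v(c - h)) / (2 * h)
        if dv <= 0:
            break
        c -= vc / dv                               # Newton-type step: v increases in c
    return c

# f(x) = (x^2 - 1)^2 has global minimum value 0; c is driven toward 0 from above.
print(level_value_estimate(lambda x: (x**2 - 1.0)**2, -2.0, 2.0, c=2.0))
```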

13.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
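For reference, the classical Oren-Luenberger self-scaling BFGS update, in which the current approximation B is multiplied by τ = yᵀs / (sᵀBs) before the usual BFGS correction; the paper's two nonmonotone line search forms and its weaker convergence condition are not reproduced here.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Self-scaling BFGS update of the Hessian approximation:
    B+ = tau * (B - (B s s^T B)/(s^T B s)) + (y y^T)/(y^T s),
    with tau = (y^T s)/(s^T B s) damping large eigenvalues of B."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 1e-12 or sBs <= 1e-12:
        return B                      # skip the update if curvature is unusable
    tau = ys / sBs                    # Oren-Luenberger scaling factor
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / ys
```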

14.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming, a saddle point exists whenever the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. The sufficient and necessary stability conditions used to determine whether the modified augmented Lagrangian objective penalty function is exact for a global solution are proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, the sufficient and necessary calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved. An algorithm for finding a local solution is presented, with its convergence proved under some conditions.
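For orientation, a minimal sketch of the classical augmented Lagrangian method for inequality constraints g_i(x) ≤ 0 with its multiplier update; the paper's objective-penalty variants add a penalty on the objective itself and are not reproduced here. The test problem and penalty parameter are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, gs, x0, rho=10.0, outer=20):
    """Classical augmented Lagrangian method for min f(x) s.t. g_i(x) <= 0:
    L_rho(x, lam) = f(x) + (1/(2 rho)) * sum_i (max(0, lam_i + rho g_i(x))^2 - lam_i^2)."""
    lam = np.zeros(len(gs))
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        def L(x):
            g = np.array([gi(x) for gi in gs])
            return f(x) + np.sum(np.maximum(0.0, lam + rho * g) ** 2 - lam ** 2) / (2 * rho)
        x = minimize(L, x).x                         # inner unconstrained solve
        g = np.array([gi(x) for gi in gs])
        lam = np.maximum(0.0, lam + rho * g)         # multiplier update
    return x, lam

# min (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 <= 2  -> solution (1.5, 0.5).
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = [lambda x: x[0] + x[1] - 2.0]
print(augmented_lagrangian(f, g, [0.0, 0.0])[0])
```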

15.
We propose a method that combines a non-Euclidean gradient descent step with a generic matrix sketching procedure for solving unconstrained, nonconvex matrix optimization problems in which the decision variable cannot be stored in memory due to its size and the objective function is the composition of a vector function with a linear operator. The method updates the sketch directly, without updating or storing the decision variable. Subsequence convergence, global convergence under the Kurdyka-Lojasiewicz property, and the rate of convergence are established.
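The key enabler of such methods is that sketching operators are linear, so the sketch of a gradient iterate can be updated from the sketch of the step without ever materializing the decision variable. A toy demonstration of this identity (the matrices are kept small here only so the claim can be verified directly; in the paper's setting the update direction is computed from the sketch and the linear operator, not stored at random):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1_000, 20
Omega = rng.normal(size=(n, k)) / np.sqrt(k)   # random sketching matrix

# Pretend X is too large to store; we only ever hold its sketch S = X @ Omega.
X = rng.normal(size=(n, n))                    # small here, just to verify
S = X @ Omega

# One gradient step X+ = X - eta*G: because sketching is linear,
# the sketch updates as S+ = S - eta*(G @ Omega), touching only G's sketch.
eta = 0.1
G = rng.normal(size=(n, n))
S_updated = S - eta * (G @ Omega)

print(np.allclose(S_updated, (X - eta * G) @ Omega))   # True
```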

16.
The penalty function is an important tool for solving many constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness and an algorithm of an objective penalty function for inequality constrained optimization. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in the case of a global solution, its exactness shows a significant advantage. The sufficient and necessary stability condition used to determine whether the objective penalty function is exact for a global solution is proved. Based on the objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, the sufficient and necessary calmness condition for the exactness of the objective penalty function at a local solution is proved. An algorithm for finding a local solution is presented, with its convergence proved under some conditions. Finally, numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.
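A heavily hedged sketch of one common form of objective penalty function from this literature, F(x, M) = (f(x) − M)² + ρ Σ_i max(0, g_i(x))²; the paper's exact function, its stability and calmness conditions, and its update rule for M are not reproduced. The sketch only illustrates the "exactness" flavor: when the level parameter M equals the optimal value, F is nonnegative and vanishes exactly at constrained optima, so minimizing F recovers the solution (in the full algorithm, M itself is searched over).

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # objective
g = lambda x: x[0] + x[1] - 2.0                   # constraint g(x) <= 0
rho = 10.0

def F(x, M):
    """Illustrative objective penalty function: penalizes both deviation of f
    from the level M and constraint violation."""
    return (f(x) - M) ** 2 + rho * max(0.0, g(x)) ** 2

M_star = 0.5                                      # optimal value of this problem
x = minimize(lambda x: F(x, M_star), np.zeros(2)).x
print(x)                                          # ~ (1.5, 0.5), the constrained optimum
```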

17.
In this paper we propose an algorithm, using only the values of the objective function and constraints, for solving one-dimensional global optimization problems in which both the objective function and the constraints are Lipschitzian and nonlinear. The constrained problem is reduced to an unconstrained one by the index scheme. To solve the reduced problem, a new method with local tuning on the behavior of the objective function and constraints over different sectors of the search region is proposed. Sufficient conditions for global convergence are established. We also present results of some numerical experiments.
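A minimal Shubert-style sketch for the reduced one-dimensional problem, assuming a known global Lipschitz constant L; the paper instead estimates local Lipschitz information adaptively ("local tuning") and handles constraints through the index scheme, neither of which is reproduced here.

```python
import math

def lipschitz_minimize(f, a, b, L, iters=200):
    """Global minimization of an L-Lipschitz f on [a, b]: repeatedly evaluate f
    at the minimizer of the piecewise sawtooth lower bound built from the cones
    f(x_i) - L|x - x_i|."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(iters):
        # Between adjacent points the bound attains its minimum at
        # x = (x1+x2)/2 - (f2-f1)/(2L), with value (f1+f2)/2 - L*(x2-x1)/2.
        best = min(
            ((f1 + f2) / 2 - L * (x2 - x1) / 2,
             (x1 + x2) / 2 - (f2 - f1) / (2 * L))
            for (x1, f1), (x2, f2) in zip(pts, pts[1:])
        )
        x = best[1]
        pts = sorted(pts + [(x, f(x))])
    return min(pts, key=lambda p: p[1])

# Global minimum of sin(3x) + 0.5x on [-3, 3] (|f'| <= 3.5): ~(-2.618, -2.309).
print(lipschitz_minimize(lambda x: math.sin(3 * x) + 0.5 * x, -3.0, 3.0, L=3.5))
```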

18.
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the nonmonotone reference term is a convex combination of the previous nonmonotone term and the current objective function value, instead of just the current objective function value. A larger step size can be obtained in each line search procedure, and nonmonotonicity is retained when the nonmonotone term is incorporated into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem when a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems.
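A minimal sketch of the acceptance rule described above: the reference value D is a convex combination of the previous reference value and the current function value, D_k = η D_{k−1} + (1 − η) f_k, and the Armijo test is made against D instead of f(x_k); the trust region embedding is omitted, and η below is an illustrative choice.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, D, eta=0.85, sigma=1e-4, beta=0.5):
    """One nonmonotone line search: accept alpha once
    f(x + alpha*d) <= D + sigma*alpha*grad(x)^T d,
    where D >= f(x) is the running convex-combination reference value.
    Returns the new point and the updated reference value."""
    g_dot_d = grad(x) @ d
    alpha = 1.0
    while f(x + alpha * d) > D + sigma * alpha * g_dot_d:
        alpha *= beta
    x_new = x + alpha * d
    D_new = eta * D + (1 - eta) * f(x_new)   # convex combination with the new f-value
    return x_new, D_new

f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([3.0, -4.0]); D = f(x)
x, D = nonmonotone_armijo(f, grad, x, -grad(x), D)
print(x, D)   # [0, 0] and D = 21.25: D decreases, but more slowly than f
```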

19.
This paper proposes a new nonmonotone adaptive trust region method for unconstrained optimization problems. The method determines the trust region radius with the help of a scalar matrix approximating the Hessian of the objective function. Under the usual conditions, global convergence as well as local superlinear convergence of the new algorithm is established, and numerical experiments verify the effectiveness of the new nonmonotone method.
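A hedged sketch of an adaptive radius rule of the kind described: a scalar (Barzilai-Borwein-type) approximation σI of the Hessian yields a radius proportional to ‖g‖/σ, shrunk geometrically after rejected trial steps. The paper's exact rule is not reproduced; the quotient and constants below are illustrative assumptions.

```python
import numpy as np

def adaptive_radius(g, s, y, c=0.75, p=0):
    """Adaptive trust region radius from a scalar Hessian approximation sigma*I,
    with the Barzilai-Borwein quotient sigma = (y^T s)/(s^T s):
    radius = c**p * ||g|| / sigma, where p counts rejected trial steps."""
    sigma = max((y @ s) / (s @ s), 1e-8)   # keep the scalar model positive
    return (c ** p) * np.linalg.norm(g) / sigma
```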
