Similar Literature
1.
柳颜  贺素香 《应用数学》2020,33(1):138-145
This paper proposes a trust region method based on an exponential-type augmented Lagrangian function for solving inequality-constrained optimization problems. Starting from the exponential augmented Lagrangian, the exact subproblem minimization of the classical augmented Lagrangian method is replaced by a trust region subproblem, which reduces the computational cost, and a corresponding trust region algorithm is constructed. Under certain assumptions, global convergence of the algorithm is proved, and numerical results on classical test problems are reported.
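
For orientation (this is a common textbook form, not necessarily the authors' exact function), an exponential-type augmented Lagrangian for min f(x) subject to g_i(x) <= 0 can be written as

    L_c(x,\lambda) = f(x) + \frac{1}{c}\sum_{i=1}^{m} \lambda_i\left(e^{\,c\,g_i(x)} - 1\right), \qquad c > 0,\ \lambda_i > 0,

where each outer iteration approximately minimizes L_c in x (here via a trust region subproblem) and then updates the multipliers, e.g. \lambda_i \leftarrow \lambda_i e^{\,c\,g_i(x)}.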

2.
This paper presents a trust region method for solving nonlinear systems. The main idea is to introduce slack variables and thereby transform the problem into an equivalent optimization problem with nonnegativity constraints. By means of an active-set strategy, only a low-dimensional trust region subproblem has to be solved at each iteration, and this subproblem is solved approximately by a truncated conjugate gradient method. A more general convergence result is obtained under rather weak conditions.
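
As background, a minimal sketch of the Steihaug-Toint truncated conjugate gradient scheme commonly used to solve the trust region subproblem min g^T s + (1/2) s^T B s subject to ||s|| <= Delta approximately (function and parameter names are illustrative, not taken from the paper):

    import numpy as np

    def steihaug_cg(B, g, delta, tol=1e-8, max_iter=100):
        """Approximately solve min g^T s + 0.5 s^T B s subject to ||s|| <= delta."""
        s = np.zeros_like(g)
        r = g.copy()          # residual of B s + g = 0
        d = -r                # search direction
        if np.linalg.norm(r) < tol:
            return s
        for _ in range(max_iter):
            Bd = B @ d
            dBd = d @ Bd
            if dBd <= 0:      # negative curvature: follow d to the boundary
                return s + _boundary_step(s, d, delta)
            alpha = (r @ r) / dBd
            s_new = s + alpha * d
            if np.linalg.norm(s_new) >= delta:   # step leaves the region
                return s + _boundary_step(s, d, delta)
            r_new = r + alpha * Bd
            if np.linalg.norm(r_new) < tol:
                return s_new
            beta = (r_new @ r_new) / (r @ r)
            d = -r_new + beta * d
            s, r = s_new, r_new
        return s

    def _boundary_step(s, d, delta):
        """Return tau*d with tau >= 0 such that ||s + tau*d|| = delta."""
        a, b, c = d @ d, 2 * (s @ d), s @ s - delta**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return tau * d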

3.
刘景辉  马昌凤  陈争 《计算数学》2012,34(3):275-284
Building on the classical trust region framework, a new trust region algorithm with line search is proposed for unconstrained optimization. The algorithm uses a large-step Armijo line search to determine the step length, which avoids the heavy cost of repeatedly solving the trust region subproblem at each iteration, so the method is suitable for large-scale optimization problems. Global convergence of the algorithm is proved under suitable conditions, and numerical results show that the proposed algorithm is effective.
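
A minimal sketch of a backtracking Armijo line search of the kind such hybrid trust region/line search methods rely on (parameter names and defaults are illustrative):

    def armijo_step(f, grad_f, x, d, alpha0=1.0, rho=0.5, c1=1e-4, max_backtracks=30):
        """Backtrack from alpha0 until f(x + alpha d) <= f(x) + c1*alpha*grad_f(x)^T d."""
        fx = f(x)
        slope = grad_f(x) @ d          # must be negative for a descent direction
        alpha = alpha0
        for _ in range(max_backtracks):
            if f(x + alpha * d) <= fx + c1 * alpha * slope:
                return alpha
            alpha *= rho
        return alpha                    # smallest alpha tried; caller may reject the step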

4.
A quasi-Newton trust region algorithm combining active-set and multidimensional filter techniques (in English)
For bound-constrained optimization problems, a modified multidimensional filter trust region algorithm is proposed. The filter technique is incorporated into a quasi-Newton trust region method: at each iteration the Cauchy point is used to predict the active set, and the trial step is then obtained by solving a trust region subproblem of smaller size. Under certain conditions, the proposed modified algorithm is globally convergent for convex-constrained optimization problems. Numerical experiments illustrate the practical behaviour of the new algorithm.
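
For illustration only (the quantities stored in the paper's filter may differ), a multidimensional filter keeps a list of criterion vectors and accepts a trial point only if no stored entry dominates it; a minimal sketch:

    import numpy as np

    def filter_acceptable(candidate, filter_entries, gamma=1e-5):
        """Accept `candidate` (a vector of criteria, e.g. |gradient components|)
        unless some stored entry dominates it up to a small margin."""
        for entry in filter_entries:
            if np.all(candidate >= entry - gamma * np.linalg.norm(entry)):
                return False               # dominated: reject the trial point
        return True

    def filter_add(candidate, filter_entries):
        """Add `candidate` to the filter and discard entries it dominates."""
        kept = [e for e in filter_entries if not np.all(e >= candidate)]
        kept.append(candidate)
        return kept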

5.
Trust region methods are an effective class of methods for unconstrained optimization, and solving the trust region subproblem is an important component of any trust region method. In this paper we first review Hager's sequential subspace method (SSM) [4] and analyse the properties the algorithm has for different choices of the subspace sequence. Motivated by this analysis, we then give an improved version of the SSM algorithm; the improved algorithm is not only globally convergent but also further reduces the amount of matrix computation. Finally, some preliminary numerical results are reported.
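
For reference, restricting the trust region subproblem to a subspace spanned by the columns of an orthonormal matrix V_k (the particular subspace sequences used by SSM are analysed in the paper) gives the low-dimensional problem

    \min_{y}\ (V_k^{\top} g)^{\top} y + \tfrac{1}{2}\, y^{\top}\!\left(V_k^{\top} B V_k\right) y \quad \text{s.t.}\ \ \|y\| \le \Delta_k, \qquad s_k = V_k\, y,

which has the same structure as the full subproblem but only dim(V_k) unknowns and can therefore be solved essentially exactly at negligible cost.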

6.
A new conic model trust region method for linearly constrained optimization
This paper proposes a new conic model trust region method for solving optimization problems with linear equality constraints. A null-space technique is used to eliminate the linear equality constraints from the new conic model subproblem, the transformed subproblem is solved by a dogleg method, and a trust region method for linear equality constrained optimization is thus obtained. Global convergence of the method is proved, and numerical experiments on linear equality constrained problems are reported. Both the theory and the numerical results show that the new conic model trust region method is effective, which lays a foundation for further study of nonlinear optimization with the new conic model.
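
A minimal sketch of the null-space elimination step for a feasible set {x : A x = b} (routine and variable names are illustrative; scipy is assumed to be available):

    import numpy as np
    from scipy.linalg import null_space, lstsq

    def eliminate_equalities(A, b):
        """Parametrize {x : A x = b} as x = x0 + Z y with A Z = 0,
        so the constrained subproblem becomes an unconstrained one in y."""
        x0, *_ = lstsq(A, b)       # any particular solution of A x = b
        Z = null_space(A)          # orthonormal basis of the null space of A
        return x0, Z

    # Reduced data for a quadratic or conic subproblem built at x0:
    # reduced gradient Z.T @ g, reduced Hessian Z.T @ B @ Z,
    # and a step y in the reduced space maps back to s = Z @ y.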

8.
This paper describes several properties of the projection of the optimal trajectory of trust region methods onto a two-dimensional subspace, analyses the relationship between several dogleg-type trust region methods and this projection, and thereby provides a theoretical basis for deriving ideal dogleg approximations for solving the trust region subproblem.
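
For orientation, the classical dogleg path that these piecewise-linear approximations generalize runs along the steepest-descent segment to the Cauchy point and then bends toward the Newton point; a minimal sketch assuming B is positive definite:

    import numpy as np

    def dogleg_step(B, g, delta):
        """Classical dogleg approximation to the trust region subproblem."""
        p_newton = -np.linalg.solve(B, g)
        if np.linalg.norm(p_newton) <= delta:
            return p_newton                          # full Newton step fits
        p_cauchy = -(g @ g) / (g @ B @ g) * g        # unconstrained Cauchy point
        if np.linalg.norm(p_cauchy) >= delta:
            return -delta * g / np.linalg.norm(g)    # scaled steepest descent
        # otherwise: intersect the segment p_cauchy -> p_newton with the boundary
        d = p_newton - p_cauchy
        a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return p_cauchy + tau * d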

9.
An interior point algorithm based on trust region techniques for linearly constrained optimization
欧宜贵  刘琼林 《应用数学》2005,18(3):365-372
Based on trust region techniques, this paper proposes an interior point algorithm for solving optimization problems with linear equality constraints and nonnegativity constraints. Its main feature is that, to obtain the search direction, only one system of linear equations has to be solved at each iteration, so that solving a subproblem with a trust region bound is avoided; an inexact Armijo line search is then used to obtain the next interior iterate. From a computational point of view, this technique reduces the amount of work. Under suitable conditions, it is also shown that every accumulation point of the sequence of iterates generated by the algorithm is a KKT point of the original problem.
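
To keep the iterates strictly feasible with respect to the nonnegativity constraints during such a line search, interior point methods commonly cap the step length with a fraction-to-boundary rule before applying the Armijo test; a minimal sketch of that standard device (not necessarily the paper's exact safeguard):

    import numpy as np

    def max_interior_step(x, d, tau=0.995):
        """Largest alpha in (0, 1] with x + alpha*d remaining strictly positive,
        shrunk by the safety factor tau."""
        neg = d < 0
        if not np.any(neg):
            return 1.0
        return min(1.0, tau * np.min(-x[neg] / d[neg]))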

10.
For the case of a positive definite Hessian, an improved implicit Euler tangent method for solving the trust region subproblem is proposed on the basis of the implicit piecewise dogleg algorithm for the quadratic-model trust region subproblem, and the properties of the resulting path are analysed. Numerical experiments show that the new algorithm is feasible and effective, and that it requires fewer iterations and less computing time than the original algorithm.
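
For context, the exact solution curve that such paths approximate is, for a positive definite Hessian B and gradient g, given by

    s(\mu) = -(B + \mu I)^{-1} g, \qquad \mu \ge 0,

which runs from the Newton step s(0) = -B^{-1}g toward the steepest-descent direction as \mu \to \infty; dogleg-type and implicit Euler tangent methods trace this curve approximately until \|s(\mu)\| = \Delta.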

11.
Based on simple quadratic models of the trust region subproblem, we combine the trust region method with nonmonotone and adaptive techniques to propose a new nonmonotone adaptive trust region algorithm for unconstrained optimization. Unlike traditional trust region methods, our trust region subproblem is very simple because a new scalar approximation of the Hessian of the objective function is used. The new method needs less memory and has lower computational complexity. Convergence results for the method are proved under certain conditions. Numerical results show that the new method is effective and attractive for large scale unconstrained problems.
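
With a scalar Hessian approximation B_k = \gamma_k I (the concrete choice of \gamma_k is part of the paper's algorithm), the subproblem and its minimizer take the closed form

    \min_{\|s\| \le \Delta_k}\ m_k(s) = f_k + g_k^{\top} s + \tfrac{\gamma_k}{2}\|s\|^2, \qquad s_k = -\min\!\left(\frac{\Delta_k}{\|g_k\|},\ \frac{1}{\gamma_k}\right) g_k \quad (\gamma_k > 0),

so no linear system has to be solved and only the gradient and one scalar need to be stored.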

12.
We propose a nonmonotone adaptive trust region method based on a simple conic model for unconstrained optimization. Unlike traditional trust region methods, the subproblem in our method is a simple conic model in which the Hessian of the objective function is approximated by a scalar matrix. The trust region radius is adjusted with a new self-adaptive adjustment strategy that makes use of information from the previous and the current iteration. The new method needs less memory and computational effort. Global convergence and Q-superlinear convergence of the algorithm are established under mild conditions. Numerical results on a series of standard test problems show that the new method is effective and attractive for large scale unconstrained optimization problems.
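
For reference, the standard conic model with a scalar Hessian approximation B_k = \gamma_k I has the form (the horizon vector a_k and the scalar \gamma_k are chosen by the algorithm; this is the generic conic form, not necessarily the paper's exact notation)

    m_k(s) = f_k + \frac{g_k^{\top} s}{1 - a_k^{\top} s} + \frac{\gamma_k}{2}\,\frac{\|s\|^2}{\left(1 - a_k^{\top} s\right)^2}, \qquad 1 - a_k^{\top} s > 0,

which collapses to the simple quadratic model when a_k = 0.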

13.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search and trust region approaches to generate new iterates and therefore enjoys the advantages of both. At each iteration it makes full use of information from several previous iterations and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is suitable for large scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Using more information from previous iterative steps in this way makes it possible to design fast, effective and robust algorithms. Numerical experiments show that the new method is effective, stable and robust in practical computation compared with other similar methods.
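
A minimal sketch of how a memory gradient direction can be assembled from the current gradient and a few stored previous directions (the weights below are illustrative placeholders; the paper specifies its own choice and its trust region/line search acceptance test):

    import numpy as np

    def memory_gradient_direction(g, prev_dirs, betas):
        """d_k = -g_k + sum_i beta_i * d_{k-i}, using the last m stored directions."""
        d = -g.copy()
        for beta, d_prev in zip(betas, prev_dirs):
            d += beta * d_prev
        return d

    # Typical usage inside the outer loop (illustrative):
    # d = memory_gradient_direction(grad, stored_dirs, betas)
    # accept d via a trust region / line search test, then push d onto stored_dirs.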

14.
The trust region (TR) method is a class of effective methods for optimization. The conic model can be regarded as a generalization of the quadratic model and possesses the good convergence properties of the quadratic model near the minimizer. The Barzilai-Borwein (BB) gradient method is also effective; it can be used for large scale optimization problems because it avoids the expensive computation and storage of matrices, and the BB stepsize is easy to determine without much computational effort. In this paper, within the conic trust region framework, we employ a generalized BB stepsize and propose a new nonmonotone adaptive trust region method based on a simple conic model for large scale unconstrained optimization. Unlike the traditional conic model, the Hessian approximation is a scalar matrix based on the generalized BB stepsize, which results in a simple conic model. By adding nonmonotone and adaptive techniques to the simple conic model, the new method needs less storage and converges faster. Global convergence of the algorithm is established under certain conditions. Numerical results indicate that the new method is effective and attractive for large scale unconstrained optimization problems.
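
For reference, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1}, the two classical Barzilai-Borwein stepsizes are

    \alpha_k^{BB1} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad \alpha_k^{BB2} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},

and taking the Hessian approximation as (1/\alpha_k) I yields the scalar matrix used in simple quadratic or conic models of the kind described above.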

15.
In this article, an ODE-based trust region filter algorithm for unconstrained optimization is proposed. It can be regarded as a combination of trust region and filter techniques with ODE-based methods. Unlike existing trust region filter methods and ODE-based methods, a distinctive feature of this method is that at each iteration a reduced linear system is solved to obtain a trial step, thus avoiding the solution of a trust region subproblem. Under some standard assumptions, the algorithm is proven to be globally convergent. Preliminary numerical results show that the new algorithm is efficient for large scale problems.
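
For context, ODE-based methods follow the gradient flow \dot{x} = -\nabla f(x); one implicit Euler step with stepsize h_k leads, after linearizing the gradient, to a regularized linear system of the familiar Levenberg-Marquardt type,

    \left(\frac{1}{h_k} I + B_k\right) d_k = -g_k,

so the trial step comes from a linear solve with an implicit trust-region-like regularization 1/h_k instead of an explicit subproblem. (This is the standard background form, not necessarily the reduced system used in the paper.)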

16.
In this paper, based on a simple model of the trust region subproblem, we combine the trust region method with nonmonotone and self-adaptive techniques to propose a new nonmonotone self-adaptive trust region algorithm for unconstrained optimization. Thanks to the simple model, the new method needs less memory, lower computational complexity and less CPU time. Convergence results for the method are proved under certain conditions. Numerical results show that the new method is effective and attractive for large-scale optimization problems.
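
For reference, a standard way to make the acceptance test nonmonotone is to compare the actual reduction against the largest of the most recent M function values (the exact reference value used in the paper may differ):

    \rho_k = \frac{f_{l(k)} - f(x_k + s_k)}{m_k(0) - m_k(s_k)}, \qquad f_{l(k)} = \max_{0 \le j \le \min(k, M)} f(x_{k-j}),

so a step can be accepted even if it increases f relative to f(x_k), as long as it improves on the recent worst value.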

17.
This paper studies an extended trust region subproblem (eTRS) in which the feasible region is the intersection of the unit ball with a single linear inequality constraint. We present an efficient algorithm that solves the problem using a diagonalization scheme requiring only the solution of a simple convex minimization problem. Attainment of the global optimality conditions is discussed. Our preliminary numerical experiments on several randomly generated test problems show that the new approach finds the global optimal solution much faster than the known semidefinite relaxation approach, especially on large scale problems.
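
In standard form (the notation here is generic, not necessarily the paper's), the extended trust region subproblem reads

    \min_{x \in \mathbb{R}^n}\ x^{\top} A x + 2 b^{\top} x \quad \text{s.t.}\quad \|x\| \le 1, \;\; a^{\top} x \le \beta,

i.e. a possibly nonconvex quadratic minimized over the unit ball cut by one half-space; without the linear cut it is the classical trust region subproblem.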

18.
In this paper we consider the global convergence of a new supermemory gradient method for unconstrained optimization problems. A new trust region radius is proposed so that the new method converges stably and smoothly, which makes it suitable for solving large scale minimization problems. Some global convergence results are obtained under mild conditions. Numerical results show that the new method is effective and stable in practical computation.

19.
In this paper, based on a simple model of the trust region subproblem, we propose a new self-adaptive trust region method with a line search technique for solving unconstrained optimization problems. Because of the simple subproblem model, the new method needs less memory and lower computational complexity. The trust region radius is adjusted with a new self-adaptive adjustment strategy that makes full use of the information at the current point. When the trial step leads to an increase in the objective function, the method does not re-solve the subproblem but performs a line search from the failed point. Convergence properties of the method are proved under certain conditions. Numerical experiments show that the new method is effective and attractive for large-scale optimization problems.
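
One simple reading of that fallback strategy, as a minimal self-contained sketch (names, defaults and the acceptance test are illustrative, not the paper's exact rules):

    def accept_or_search(f, x, s, f_trial, pred_red, eta=0.1, rho=0.5, max_backtracks=20):
        """If the trial step s achieves enough of the predicted reduction, take it;
        otherwise backtrack along s from the failed point instead of re-solving
        the trust region subproblem."""
        fx = f(x)
        if fx - f_trial >= eta * pred_red:
            return x + s                       # successful trust region step
        alpha = rho
        for _ in range(max_backtracks):        # simple backtracking toward x
            if f(x + alpha * s) < fx:
                return x + alpha * s
            alpha *= rho
        return x                               # no acceptable point found; keep x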
