Similar Articles
20 similar articles found (search time: 15 ms).
1.
This article presents a likelihood-based boosting approach for fitting binary and ordinal mixed models. In contrast to common procedures, this approach can be used in high-dimensional settings where a large number of potentially influential explanatory variables are available. Constructed as a componentwise boosting method, it performs variable selection, with the complexity of the resulting estimator determined by information criteria. The method is investigated in simulation studies for both cumulative and sequential models and is illustrated using real datasets. The supplementary materials for the article are available online.
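As a rough illustration of the componentwise selection principle only (not the authors' likelihood-based scheme for mixed models, which updates via penalized likelihood), here is a minimal componentwise L2 boosting sketch; the data `X`, `y` and the step length `nu` are placeholders:

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=100, nu=0.1):
    """Componentwise L2 boosting sketch: each step fits every single
    covariate to the current residuals and updates only the best one,
    so variables that are never selected keep a zero coefficient."""
    n, p = X.shape
    intercept = y.mean()
    coef = np.zeros(p)
    resid = y - intercept
    for _ in range(n_steps):
        # per-covariate least-squares coefficients against the residuals
        betas = (X.T @ resid) / (X ** 2).sum(axis=0)
        sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = np.argmin(sse)            # component with the best fit
        coef[j] += nu * betas[j]      # shrunken update of one coordinate
        resid -= nu * betas[j] * X[:, j]
    return intercept, coef
```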

2.
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of these proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University SF grants X061041 and China NSF grants 10761001.
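For reference, the weak Wolfe–Powell conditions named here can be checked as below; this is a generic sketch in which the `f`, `grad` callables and the constants `c1`, `c2` are illustrative, not taken from the paper:

```python
import numpy as np

def satisfies_wwp(f, grad, x, d, alpha, c1=1e-4, c2=0.5):
    """Weak Wolfe-Powell (WWP) test for a trial stepsize alpha along d:
    sufficient decrease  f(x + a*d) <= f(x) + c1*a*(g'd), and
    curvature            grad(x + a*d)'d >= c2*(g'd), with 0 < c1 < c2 < 1."""
    gTd = grad(x) @ d
    decrease = f(x + alpha * d) <= f(x) + c1 * alpha * gTd
    curvature = grad(x + alpha * d) @ d >= c2 * gTd
    return decrease and curvature
```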

3.
4.
The strong Wolfe conditions cannot guarantee the global convergence of the standard CD conjugate gradient method. By constructing a new conjugate parameter, this paper proposes a new spectral conjugate gradient method for unconstrained optimization problems. The method is equivalent to the standard CD conjugate gradient method under exact line search, and it possesses the descent property and global convergence under the standard Wolfe line search. Preliminary numerical results show that the new method is effective and well suited to solving nonlinear unconstrained optimization problems.
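The abstract does not state the new conjugate parameter, but the standard CD (conjugate descent) ingredient it modifies, and the general shape of a spectral CG direction, look as follows; the spectral parameter `theta` below is a placeholder choice, not the paper's formula:

```python
import numpy as np

def cd_beta(g_new, g_old, d_old):
    """Fletcher's CD parameter: beta_k = ||g_k||^2 / (-d_{k-1}' g_{k-1});
    the denominator is positive whenever d_{k-1} is a descent direction."""
    return (g_new @ g_new) / (-(d_old @ g_old))

def spectral_cg_direction(g_new, g_old, d_old, theta=1.0):
    """Spectral CG direction d_k = -theta_k*g_k + beta_k*d_{k-1};
    theta = 1 recovers the ordinary CD direction."""
    return -theta * g_new + cd_beta(g_new, g_old, d_old) * d_old
```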

5.
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large dimensional unconstrained optimization problems.
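A minimal nonmonotone (GLL-style) backtracking rule conveys the relaxed-monotonicity idea; the paper's watchdog combination is more elaborate, and the window `f_hist` and the constants here are illustrative:

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, gamma=1e-4, delta=0.5, max_back=50):
    """Backtrack until f(x + a*d) <= max(recent f values) + gamma*a*(g'd),
    so the objective may increase locally as long as it stays below the
    worst value in a sliding window of past function values (f_hist)."""
    f_ref = max(f_hist)
    alpha, gTd = 1.0, g @ d
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + gamma * alpha * gTd:
            break
        alpha *= delta
    return alpha
```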

6.
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization
For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice of the stepsize in the two-point stepsize gradient method from the angle of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
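The two-point stepsizes in question are the classical Barzilai–Borwein formulas, computed from the last displacement and gradient change; the modified stepsizes proposed in the paper reinterpret these via interpolation, so the pair below is only the classical baseline:

```python
import numpy as np

def bb_stepsizes(s, y):
    """Two-point stepsizes from s = x_k - x_{k-1} and y = g_k - g_{k-1}:
    BB1 minimizes ||s/alpha - y||, BB2 minimizes ||s - alpha*y||."""
    return (s @ s) / (s @ y), (s @ y) / (y @ y)
```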

7.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have recently been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity for small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction (see the sketch below); and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.
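Feature (i) admits a simple generic illustration: if the previous step was tiny away from a stationary point, damp the memory term so the new direction stays close to the negative gradient. This safeguard is hypothetical; the family's actual three-parameter formula is not given in the abstract:

```python
import numpy as np

def safeguarded_direction(g, d_prev, s_prev, beta, eps=1e-8):
    """Return -g + beta*d_prev, but fall back to steepest descent when the
    previous step s_prev = x_k - x_{k-1} is very small (feature (i) above)."""
    if np.linalg.norm(s_prev) < eps:
        return -g
    return -g + beta * d_prev
```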


8.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar β_k with respect to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we are able to construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
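One well-known way to hybridize the Hestenes–Stiefel parameter with a globally convergent one is to truncate it into an interval, in the spirit described here; the clipping bounds below are a common textbook choice, not necessarily the exact interval analyzed in the paper:

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old):
    """Clip the Hestenes-Stiefel parameter into [0, beta_DY], where
    beta_DY = ||g_k||^2 / (d_{k-1}' y_{k-1}) and y_{k-1} = g_k - g_{k-1};
    d'y > 0 is guaranteed under the Wolfe conditions."""
    y = g_new - g_old
    dTy = d_old @ y
    beta_hs = (g_new @ y) / dTy
    beta_dy = (g_new @ g_new) / dTy
    return max(0.0, min(beta_hs, beta_dy))
```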

9.
The gradient method for the symmetric positive definite linear system Ax = b is as follows:

x_{k+1} = x_k − α_k g_k,  (1)

where g_k = Ax_k − b is the residual of the system at x_k and α_k is the stepsize. The stepsize α* = 2/(λ_1 + λ_n) is optimal in the sense that it minimizes the modulus ‖I − αA‖_2 = max{|1 − αλ_1|, |1 − αλ_n|}, where λ_1 and λ_n are the minimal and maximal eigenvalues of A, respectively. Since λ_1 and λ_n are unknown to users, the gradient method with the optimal stepsize is usually only mentioned in theory. In this paper, we propose a new stepsize formula which tends to the optimal stepsize as k → ∞. At the same time, the minimal and maximal eigenvalues λ_1 and λ_n of A and their corresponding eigenvectors can be obtained. This research was initiated while the first author was visiting The Hong Kong Polytechnic University. The first author was supported by the Chinese NSF grants (No. 40233029 and 101071104) and an innovation fund of the Chinese Academy of Sciences. The second author was supported by a grant from the Research Committee of The Hong Kong Polytechnic University (A-PC36).
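Method (1) with the optimal fixed stepsize can be written in a few lines once the extreme eigenvalues are supplied, which is exactly the practical obstacle the paper addresses; in this sketch one could pass `np.linalg.eigvalsh(A)[0]` and `[-1]` purely for illustration:

```python
import numpy as np

def gradient_solve(A, b, x0, lam_min, lam_max, tol=1e-10, max_iter=10000):
    """Gradient method (1) for Ax = b with A symmetric positive definite,
    using the theoretically optimal stepsize 2/(lam_min + lam_max)."""
    alpha = 2.0 / (lam_min + lam_max)
    x = x0.copy()
    for _ in range(max_iter):
        g = A @ x - b              # residual/gradient at x_k
        if np.linalg.norm(g) < tol:
            break
        x -= alpha * g
    return x
```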

10.
We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞, Σ_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ≤ μα_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.
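A minimal projected subgradient sketch with the stated summability conditions (here α_k = 1/(k+1), which satisfies Σα_k = ∞ and Σα_k² < ∞); the normalization of the subgradient mirrors the "normalized stepsizes" mentioned above, and `subgrad`/`project` are user-supplied placeholders:

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, n_iter=10000):
    """x_{k+1} = P_C(x_k - alpha_k * v_k / ||v_k||) with alpha_k = 1/(k+1)."""
    x = x0.copy()
    for k in range(n_iter):
        v = subgrad(x)
        norm = max(np.linalg.norm(v), 1e-12)  # guard against ||v|| = 0
        x = project(x - v / (norm * (k + 1)))
    return x
```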

11.
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The first idea of memory gradient methods was proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient for the given standard test problems if the parameter included in the method is suitably chosen.
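The "parameter included in the method" controls how much of the previous direction is remembered. One hypothetical descent-guaranteeing choice (our illustration, not necessarily the paper's): with β_k = γ‖g_k‖/‖d_{k−1}‖ and γ ∈ (0, 1), Cauchy–Schwarz gives g_kᵀd_k ≤ −(1 − γ)‖g_k‖², so every direction is a descent direction:

```python
import numpy as np

def memory_direction(g, d_prev, gamma=0.5):
    """d_k = -g_k + beta_k*d_{k-1} with beta_k = gamma*||g_k||/||d_{k-1}||;
    then g'd <= -(1 - gamma)*||g||^2 < 0, so d_k always points downhill."""
    nd = np.linalg.norm(d_prev)
    if nd == 0.0:
        return -g
    beta = gamma * np.linalg.norm(g) / nd
    return -g + beta * d_prev
```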

12.
黄海 《经济数学》2011,28(2):25-28
Building on the modified PRP conjugate gradient method, this paper proposes a sufficient-descent conjugate gradient algorithm for unconstrained optimization problems. The algorithm is proved to be globally convergent under the Wolfe line search, and numerical experiments show that it performs well.
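For orientation, here is the classical PRP parameter with the nonnegativity restart (PRP+), plus an explicit sufficient descent safeguard of the kind the abstract refers to; the constant `c` and the fallback rule are illustrative, not the paper's exact modification:

```python
import numpy as np

def prp_plus_direction(g_new, g_old, d_old, c=1e-4):
    """d_k = -g_k + max(0, beta_PRP)*d_{k-1}, with a fallback to steepest
    descent whenever the sufficient descent test g'd <= -c*||g||^2 fails."""
    beta = max(0.0, (g_new @ (g_new - g_old)) / (g_old @ g_old))
    d = -g_new + beta * d_old
    if g_new @ d > -c * (g_new @ g_new):   # sufficient descent violated
        d = -g_new
    return d
```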

13.
王长钰  李梅霞 《数学进展》2007,36(2):231-238
This paper considers the convergence of the descent algorithm with errors proposed by Solodov and Svaiter. An important feature is that the Hölder continuity of the gradient function is not used in the convergence proof; we therefore obtain convergence results for the algorithm under weaker conditions.

14.
Nonconvex minimax problems are an important research frontier and hot topic at the intersection of optimization with machine learning, signal processing, and related fields; key scientific problems in frontier directions such as adversarial learning, reinforcement learning, and distributed nonconvex optimization reduce to this class of problems. Internationally, research on convex-concave minimax problems has achieved good results, but nonconvex minimax problems differ from convex-concave ones: they are nonconvex nonsmooth optimization problems with their own structure, are more challenging both to analyze and to solve, and are in general NP-hard. This paper focuses on recent advances in optimization algorithms and complexity analysis for nonconvex minimax problems.
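One of the basic first-order schemes studied in this literature is two-timescale gradient descent ascent (GDA) for min_x max_y f(x, y); a survey-level sketch with illustrative stepsizes, not a specific algorithm from the paper:

```python
import numpy as np

def gda(grad_x, grad_y, x0, y0, eta_x=0.01, eta_y=0.1, n_iter=2000):
    """Two-timescale GDA: descend in x, ascend in y, with eta_y > eta_x
    as commonly required in nonconvex-concave analyses."""
    x, y = x0.copy(), y0.copy()
    for _ in range(n_iter):
        x = x - eta_x * grad_x(x, y)   # descent step on the min variable
        y = y + eta_y * grad_y(x, y)   # ascent step on the max variable
    return x, y
```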

15.
16.
In this paper, the authors propose a novel smoothing descent-type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient algorithm with extrapolation and a safeguarding policy to minimize the smoothed objective function for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter, ensuring that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
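The proximal-gradient-with-extrapolation backbone mentioned here, specialized for illustration to an l1-regularized smooth term; the paper's smoothing update and safeguarding policy are additional layers on top of this FISTA-style sketch:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_extrapolated(grad_f, L, lam, x0, n_iter=500):
    """Minimize f(x) + lam*||x||_1 via proximal gradient steps taken at
    an extrapolated point y_k = x_k + w_k*(x_k - x_{k-1})."""
    x_prev, x, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation
        x_prev, x = x, soft_threshold(y - grad_f(y) / L, lam / L)
        t = t_next
    return x
```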

17.
For a class of problems widespread in machine learning, structured stochastic optimization problems (where "structured" means the feasible region has a block structure and the nonsmooth regularization part of the objective is separable across blocks of variables), we study the mini-batch stochastic block coordinate descent algorithm (mSBD). We present the basic mSBD and a variant for solving noncomposite and composite problems, respectively. For noncomposite problems, we analyze the convergence of the algorithm without the assumption of uniformly bounded gradient variance; for composite problems, we obtain convergence without the usual Lipschitz gradient continuity assumption. Finally, numerical experiments verify the effectiveness of mSBD.
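A minimal sketch of the mini-batch stochastic block coordinate descent idea for the noncomposite case: sample a mini-batch, pick one coordinate block, and update only that block with the partial stochastic gradient. The `grad_block` callback and the fixed stepsize are placeholders:

```python
import numpy as np

def msbd(grad_block, x, blocks, n_batches, step=0.01, n_iter=1000, seed=0):
    """Each iteration draws a mini-batch index and a block of coordinates,
    then takes a gradient step on that block only (noncomposite mSBD)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        batch = rng.integers(n_batches)          # which mini-batch of data
        idx = blocks[rng.integers(len(blocks))]  # which coordinate block
        x[idx] -= step * grad_block(x, idx, batch)
    return x
```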

18.
1. Introduction. Consider the unconstrained optimization problem

min f(x),  (1.1)

where f is smooth and its gradient g is available. Conjugate gradient methods are highly useful for solving (1.1), especially if n is large. They are iterative methods of the form

x_{k+1} = x_k + α_k d_k,  (1.2)
d_k = −g_k + β_k d_{k−1},  (1.3)

Here α_k is a stepsize obtained by a one-dimensional line search and β_k is a scalar. The choice of β_k is such that (1.2)–(1.3) reduces to the linear conjugate gradient method in the case when f is a strictly convex quadratic and α_k is the exact 1-…
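The iteration (1.2)–(1.3) separates cleanly into a generic driver with pluggable β-rule and line search; a sketch in which both callbacks are placeholders:

```python
import numpy as np

def nonlinear_cg(grad, x0, beta_rule, line_search, tol=1e-8, max_iter=500):
    """Generic scheme (1.2)-(1.3): x_{k+1} = x_k + alpha_k*d_k,
    d_k = -g_k + beta_k*d_{k-1}, with d_0 = -g_0."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(x, d)        # one-dimensional search along d
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_rule(g_new, g, d) * d
        g = g_new
    return x
```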

19.
This paper studies the conditional least squares estimator and the maximum quasi-likelihood estimator of the parameters of a simplified new Laplace AR(1) model, and discusses their strong consistency and asymptotic normality. Numerical simulations and a real example illustrate the advantages of the maximum quasi-likelihood estimator and of the model.
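For a plain AR(1) specification, conditional least squares reduces to regressing x_t on x_{t−1}; a bare-bones sketch (the Laplace AR(1) model in the paper carries additional distributional structure that the quasi-likelihood estimator exploits):

```python
import numpy as np

def cls_ar1(x):
    """Conditional least squares for x_t = phi*x_{t-1} + e_t:
    phi_hat minimizes sum_t (x_t - phi*x_{t-1})^2."""
    x_lag, x_cur = x[:-1], x[1:]
    return (x_lag @ x_cur) / (x_lag @ x_lag)
```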

20.
We first propose a stochastic volatility model with ARMA(1,1) conditionally heteroscedastic correlation, a natural extension of the basic stochastic volatility model. We then give a Markov chain Monte Carlo (MCMC) algorithm for this new model. Finally, using data simulated from the model, we demonstrate the application of the MCMC algorithm to this class of models.
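The MCMC machinery referred to can be assembled from standard pieces; as one such piece, a random-walk Metropolis sampler over the model parameters, with `log_post` a hypothetical user-supplied log posterior for the SV model (the paper's full sampler is not specified in the abstract):

```python
import numpy as np

def rw_metropolis(log_post, theta0, n_iter=5000, scale=0.1, seed=0):
    """Random-walk Metropolis: propose theta' = theta + scale*N(0, I) and
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop   # accept the proposal
        draws[i] = theta
    return draws
```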
