20 similar documents were retrieved.
1.
This article presents a likelihood-based boosting approach for fitting binary and ordinal mixed models. In contrast to common procedures, this approach can be used in high-dimensional settings where a large number of potentially influential explanatory variables are available. Constructed as a componentwise boosting method, it is able to perform variable selection with the complexity of the resulting estimator being determined by information criteria. The method is investigated in simulation studies both for cumulative and sequential models and is illustrated by using real datasets. The supplementary materials for the article are available online.
2.
Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
Gonglin Yuan 《Optimization Letters》2009,3(1):11-21
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of these proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported.
This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
3.
4.
Nonconvex minimax problems have recently become an important international research frontier and hot topic at the intersection of optimization with machine learning, signal processing, and related fields; key scientific questions in frontier research directions such as adversarial learning, reinforcement learning, and distributed nonconvex optimization reduce to problems of this class. Convex–concave minimax problems have already been studied internationally with considerable success, but nonconvex minimax problems are different: they are nonconvex nonsmooth optimization problems with their own structure, are more challenging both to analyze theoretically and to solve, and are in general NP-hard. This survey focuses on recent progress on optimization algorithms and complexity analysis for nonconvex minimax problems.
5.
The strong Wolfe conditions cannot guarantee global convergence of the standard CD conjugate gradient method. By constructing a new conjugate parameter, this paper proposes a new spectral conjugate gradient method for unconstrained optimization; the method reduces to the standard CD conjugate gradient method under exact line search, and it possesses the descent property and global convergence under the standard Wolfe line search. Preliminary numerical results indicate that the new method is effective and well suited to solving nonlinear unconstrained optimization problems.
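For context on the CD method mentioned above, the sketch below implements the classical CD (Conjugate Descent) iteration with the standard parameter beta = ||g_{k+1}||^2 / (-d_k^T g_k). It is not the spectral variant proposed in the paper; the backtracking Armijo line search and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def cd_conjugate_gradient(f, grad, x0, iters=200, tol=1e-8):
    """Classical CD (Conjugate Descent) method with a simple backtracking
    Armijo line search -- an illustrative sketch, not the spectral variant
    proposed in the paper."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:            # safeguard: restart with steepest descent
            d = -g
        # Backtracking line search enforcing the Armijo (sufficient decrease) condition.
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # CD conjugate parameter: beta = ||g_{k+1}||^2 / (-d_k^T g_k).
        beta = g_new.dot(g_new) / (-d.dot(g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad = lambda x: A.dot(x) - b
print(cd_conjugate_gradient(f, grad, np.zeros(2)))   # approx. [0.2, 0.4]
```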
6.
In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large dimensional unconstrained optimization problems.
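The two building blocks discussed in this abstract, a Barzilai–Borwein stepsize and a nonmonotone (memory-based) Armijo acceptance test, can be combined as in the following sketch. The watchdog component and the particular relaxed monotonicity rules of the paper are not reproduced; the memory length M, the safeguards, and the quadratic test problem are assumptions made for illustration.

```python
import numpy as np
from collections import deque

def bb_nonmonotone(f, grad, x0, iters=500, M=10, tol=1e-8):
    """Barzilai-Borwein gradient method globalized with a nonmonotone
    (max over the last M function values) Armijo line search -- a sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1.0
    recent_f = deque([f(x)], maxlen=M)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -g
        # Nonmonotone Armijo test: compare against the max of the last M values.
        f_ref, c1, t = max(recent_f), 1e-4, alpha
        while f(x + t * d) > f_ref + c1 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 stepsize for the next iteration (safeguarded against s^T y <= 0).
        alpha = s.dot(s) / s.dot(y) if s.dot(y) > 1e-12 else 1.0
        x, g = x_new, g_new
        recent_f.append(f(x))
    return x

# Example on an ill-conditioned convex quadratic.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x.dot(A).dot(x)
grad = lambda x: A.dot(x)
print(bb_nonmonotone(f, grad, np.ones(3)))   # approx. the zero vector
```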
7.
Yuhong Dai Jinyun Yuan Ya-Xiang Yuan 《Computational Optimization and Applications》2002,22(1):103-109
For unconstrained optimization, the two-point stepsize gradient method is preferable over the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice for the stepsize in the two-point stepsize gradient method from the angle of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
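For reference, the two classical two-point stepsizes alluded to above are obtained from the secant pair s = x_k - x_{k-1}, y = g_k - g_{k-1} by solving two small least-squares problems; the helper below computes both. The paper's modified, interpolation-based stepsizes are not shown, and the sample vectors are arbitrary.

```python
import numpy as np

def bb_stepsizes(s, y):
    """Classical two-point (Barzilai-Borwein) stepsizes from the secant pair
    s = x_k - x_{k-1}, y = g_k - g_{k-1} (illustrative helper)."""
    sy = s.dot(y)
    alpha_bb1 = s.dot(s) / sy    # minimizes ||(1/alpha) s - y|| over alpha
    alpha_bb2 = sy / y.dot(y)    # minimizes ||s - alpha y|| over alpha
    return alpha_bb1, alpha_bb2

s = np.array([0.1, -0.2])
y = np.array([0.3, -0.1])
print(bb_stepsizes(s, y))
```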
8.
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to the new nonlinear conjugate gradient method. Specifically, if the size of the scalar βk with respect to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we are able to construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
9.
Yu-Hong Dai. 《Mathematics of Computation》2003,72(243):1317-1328
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity of small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.
10.
The gradient method for the symmetric positive definite linear system Ax = b is as follows: x_{k+1} = x_k + α_k r_k, where r_k = b − Ax_k is the residual of the system at x_k and α_k is the stepsize. The stepsize α* = 2/(λ_1 + λ_n) is optimal in the sense that it minimizes the modulus ‖I − αA‖_2, where λ_1 and λ_n are the minimal and maximal eigenvalues of A respectively. Since λ_1 and λ_n are unknown to users, it is usual that the gradient method with the optimal stepsize is only mentioned in theory. In this paper, we will propose a new stepsize formula which tends to the optimal stepsize as k → ∞. At the same time, the minimal and maximal eigenvalues, λ_1 and λ_n, of A and their corresponding eigenvectors can be obtained.
This research was initiated while the first author was visiting The Hong Kong Polytechnic University.
This author was supported by the Chinese NSF grants (No. 40233029 and 101071104) and an innovation fund of Chinese Academy of Sciences.
This author was supported by a grant from the Research Committee of the Hong Kong Polytechnic University (A-PC36).
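A small numerical illustration of the classical fact restated in the abstract above: for the residual iteration x_{k+1} = x_k + α r_k on a symmetric positive definite system, the fixed stepsize 2/(λ_1 + λ_n) minimizes the contraction factor ‖I − αA‖_2. The paper's new adaptive stepsize formula is not reproduced here; the diagonal test matrix is an assumption chosen so that the eigenvalues are known.

```python
import numpy as np

# Gradient (Richardson) iteration x_{k+1} = x_k + alpha * r_k for A x = b,
# with the spectrally optimal fixed stepsize alpha* = 2 / (lambda_1 + lambda_n).
A = np.diag([1.0, 4.0, 10.0])          # SPD test matrix (assumed example)
b = np.array([1.0, 1.0, 1.0])
lam = np.linalg.eigvalsh(A)
alpha_opt = 2.0 / (lam[0] + lam[-1])   # minimizes ||I - alpha A||_2

x = np.zeros_like(b)
for k in range(200):
    r = b - A.dot(x)                   # residual at x_k
    if np.linalg.norm(r) < 1e-12:
        break
    x = x + alpha_opt * r

print(x, np.linalg.solve(A, b))        # the two should agree
```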
11.
We consider the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an ε_k-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying ∑_{k=0}^∞ α_k = ∞ and ∑_{k=0}^∞ α_k^2 < ∞, and ε_k is chosen so that ε_k ≤ μ α_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We present also convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.
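A minimal sketch of the projected subgradient scheme described above, with exact subgradients (ε_k = 0), normalized directions, and the divergent-series stepsizes α_k = 1/(k+1), which satisfy ∑ α_k = ∞ and ∑ α_k^2 < ∞. The finite-dimensional test problem (an l1 objective over a box) and the particular stepsize sequence are illustrative assumptions, not the setting analyzed in the paper.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, iters=2000):
    """Projected subgradient sketch: step opposite a normalized subgradient
    with stepsizes alpha_k = 1/(k+1) (so that sum alpha_k diverges and
    sum alpha_k^2 converges), then project back onto the feasible set.
    Exact subgradients are used, i.e. eps_k = 0."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        u = subgrad(x)
        norm = np.linalg.norm(u)
        if norm < 1e-12:             # x already minimizes the objective
            break
        alpha = 1.0 / (k + 1)
        x = project(x - alpha * u / norm)
    return x

# Example: minimize f(x) = ||x - c||_1 over the box [0, 1]^2.
c = np.array([2.0, -0.5])
subgrad = lambda x: np.sign(x - c)          # a subgradient of the l1 objective
project = lambda x: np.clip(x, 0.0, 1.0)    # projection onto the box
print(projected_subgradient(subgrad, project, np.array([0.5, 0.5])))   # approx. [1, 0]
```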
12.
Hans De Sterck 《Numerical Linear Algebra with Applications》2013,20(3):453-471
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N‐GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N‐GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N‐GMRES optimization is also motivated by relating it to standard non‐preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N‐GMRES optimization algorithm is able to very significantly accelerate convergence of stand‐alone steepest descent optimization. Moreover, performance of steepest‐descent preconditioned N‐GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited‐memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest‐descent preconditioned N‐GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N‐GMRES optimization framework lies in the fact that it can make use of problem‐dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N‐GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
13.
Memory gradient methods are used for unconstrained optimization, especially large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on the given standard test problems provided a suitable value is chosen for the parameter included in the method.
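The defining feature of a memory gradient method is a search direction that reuses one or more previous directions. The single-memory form d_k = -g_k + beta_k * d_{k-1} used below, together with the damping rule that keeps the direction sufficiently descending, is an illustrative assumption and not the parameter choice analyzed in the paper.

```python
import numpy as np

def memory_gradient_direction(g, d_prev, c=0.5):
    """A generic memory-gradient search direction d = -g + beta * d_prev,
    with beta damped so that d satisfies the sufficient descent condition
    g.dot(d) <= -c * ||g||^2.  The damping rule is an illustrative choice."""
    gd = g.dot(d_prev)
    if abs(gd) < 1e-16:
        beta = 1.0
    else:
        beta = min(1.0, (1.0 - c) * g.dot(g) / abs(gd))
    d = -g + beta * d_prev
    assert g.dot(d) <= -c * g.dot(g) + 1e-12   # descent check
    return d

g = np.array([1.0, -2.0])
d_prev = np.array([-0.5, 1.0])
print(memory_gradient_direction(g, d_prev))
```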
14.
This paper considers the convergence of the descent algorithm with errors proposed by Solodov and Svaiter. An important feature is that Hölder continuity of the gradient is not used in the convergence proof; consequently, convergence results for the algorithm are obtained under weaker conditions.
15.
Building on a modified PRP conjugate gradient method, a sufficient-descent conjugate gradient algorithm for unconstrained optimization is proposed. The algorithm is shown to be globally convergent under the Wolfe line search, and numerical experiments indicate that it performs well.
16.
17.
18.
In this paper, we propose a three-term conjugate gradient method via the symmetric rank-one (SR1) update. The basic idea is to exploit the good properties of the SR1 update in providing quality Hessian approximations in order to construct a conjugate gradient search direction that requires no matrix storage and possesses the sufficient descent property. Numerical experiments on a set of standard unconstrained optimization problems show that the proposed method is superior to many well-known conjugate gradient methods in terms of efficiency and robustness.
19.
20.
Extrapolated Smoothing Descent Algorithm for Constrained Nonconvex and Nonsmooth Composite Problems*
In this paper, the authors propose a novel smoothing descent-type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient method with extrapolation and a safeguarding policy to minimize the smoothed objective function for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter, which ensures that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
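To make the main ingredients concrete, the sketch below combines a smoothed surrogate of a nonsmooth term (a Huber-type smoothing of the absolute value with parameter mu), a projected gradient step with extrapolation (momentum), and a simple schedule that shrinks mu. The authors' safeguarding policy, their actual smoothing-parameter update rule, and their test problems are not reproduced; the objective, box constraint, and schedule here are illustrative assumptions.

```python
import numpy as np

def huber_grad(x, mu):
    """Gradient of the smoothed absolute value: |t| is approximated by
    t^2/(2*mu) for |t| <= mu and |t| - mu/2 otherwise."""
    return np.clip(x / mu, -1.0, 1.0)

def extrapolated_smoothing_descent(c, lam=1.0, mu=1.0, iters=500):
    """Projected gradient descent with extrapolation on the smoothed objective
    0.5*||x - c||^2 + lam * sum_i huber_mu(x_i) over the box [-1, 1]^n,
    shrinking the smoothing parameter mu on a fixed schedule (a sketch)."""
    x = np.zeros_like(c)
    x_prev = x.copy()
    for k in range(iters):
        # Extrapolation (momentum) point.
        t = k / (k + 3.0)
        y = x + t * (x - x_prev)
        step = 1.0 / (1.0 + lam / mu)             # 1 / Lipschitz constant of the smoothed gradient
        grad = (y - c) + lam * huber_grad(y, mu)
        x_prev = x
        x = np.clip(y - step * grad, -1.0, 1.0)   # projection onto the box constraint
        if k % 50 == 49:
            mu = max(mu * 0.5, 1e-6)              # simple smoothing-parameter update
    return x

print(extrapolated_smoothing_descent(np.array([2.0, 0.3, -0.8])))   # approx. [1, 0, 0]
```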