Found 20 similar documents (search time: 31 ms)
1.
Krzysztof C. Kiwiel 《Mathematical Programming》1999,85(2):241-258
The method generates a sequence {x^k} by taking x^k to be an approximate minimizer of f̌_k(x) + D_h(x, x^{k-1})/t_k, where f̌_k is a piecewise linear model of f constructed from accumulated subgradient linearizations of f, D_h is the D-function of a generalized Bregman function h, and t_k > 0. Convergence under implementable criteria is established by extending our recent framework of Bregman proximal minimization, which is of independent interest, e.g., for nonquadratic multiplier methods for constrained minimization. In particular, we provide new insights into the convergence properties of bundle methods based on h = ½|·|².
Received September 18, 1997 / Revised version received June 30, 1998
Published online November 24, 1998
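For readers unfamiliar with the D-function used above, the following sketch (a minimal illustration, not the paper's code) computes D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩ and checks that the choice h = ½|·|² recovers the classical squared-Euclidean proximal term:

```python
import numpy as np

def bregman_D(h, grad_h, x, y):
    # D-function (Bregman distance) of h: D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>
    return h(x) - h(y) - grad_h(y) @ (x - y)

h = lambda x: 0.5 * (x @ x)       # h = (1/2)|.|^2
grad_h = lambda x: x              # its gradient
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
d = bregman_D(h, grad_h, x, y)    # equals 0.5 * ||x - y||^2 for this choice of h
```

With a nonquadratic h (e.g. the negative entropy), the same formula yields the Kullback–Leibler divergence, which is what makes the framework cover nonquadratic multiplier methods.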
2.
Stephen M. Robinson 《Mathematical Programming》1999,86(1):41-50
This paper establishes a linear convergence rate for a class of epsilon-subgradient descent methods for minimizing certain convex functions on ℝ^n. Currently prominent methods belonging to this class include the resolvent (proximal point) method and the bundle method in proximal form (considered as a sequence of serious steps). Other methods, such as a variant of the proximal point method given by Correa and Lemaréchal, can also fit within this framework, depending on how they are implemented. The convex functions covered by the analysis are those whose conjugates have subdifferentials that are locally upper Lipschitzian at the origin, a property generalizing classical regularity conditions.
Received March 29, 1996 / Revised version received March 5, 1999
Published online June 11, 1999
3.
In this paper, we analyze a class of methods for minimizing a proper lower semicontinuous extended-valued convex function f. Instead of the original objective function f, we employ a convex approximation f_{k+1} at the kth iteration. Some global convergence rate estimates are obtained. We illustrate our approach by proposing (i) a new family of proximal point algorithms that possesses a global convergence rate estimate, in terms of the proximal parameters, which holds even if the iteration points are calculated approximately, and (ii) a variant proximal bundle method. Applications to stochastic programs are discussed.
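A minimal illustration of the proximal point iteration described above, assuming exact subproblem solutions and an objective whose proximal map has a closed form (f(x) = |x|, giving soft-thresholding); the constant proximal parameters are an assumption for the sketch:

```python
def prox_abs(x, lam):
    # Proximal map of f(x) = |x|: argmin_y |y| + (y - x)^2 / (2*lam),
    # i.e. soft-thresholding of x at level lam.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = 5.0
traj = [x]
for lam in [1.0] * 10:            # proximal parameters (constant here)
    x = prox_abs(x, lam)          # proximal point step: x_{k+1} = prox_{lam*f}(x_k)
    traj.append(x)
# x decreases by lam each step until it reaches the minimizer 0 and stays there
```

Larger proximal parameters give faster progress per step at the cost of a harder subproblem, which is the trade-off the rate estimates quantify.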
4.
In this paper, by the use of Gram–Schmidt orthogonalization, we propose a class of modified conjugate gradient methods. The methods are modifications of the well-known conjugate gradient methods, including the PRP, HS, FR, and DY methods. A common property of the modified methods is that the direction generated by any member of the class satisfies $g_k^T d_k = -\|g_k\|^2$. Moreover, if the line search is exact, each modified method reduces to the corresponding standard conjugate gradient method. In particular, we study the modified YT and YT+ methods. Under suitable conditions, we prove the global convergence of these two methods. Extensive numerical experiments show that the proposed methods are efficient for the test problems from the CUTE library.
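The sufficient-descent identity g_k^T d_k = −||g_k||² can be enforced exactly by a Gram–Schmidt step that removes from the previous direction its component along the current gradient, as the abstract suggests. The sketch below applies this idea to a PRP-type update on a small convex quadratic; the matrix, line search, and stopping rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD matrix of an assumed test problem
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def direction(g, g_prev, d_prev):
    # PRP parameter, then Gram-Schmidt: orthogonalize d_prev against g,
    # so g @ d == -||g||^2 holds by construction for any beta.
    beta = g @ (g - g_prev) / (g_prev @ g_prev)
    d_orth = d_prev - (g @ d_prev) / (g @ g) * g
    return -g + beta * d_orth

x = np.zeros(2)
g = grad(x)
d = -g
for _ in range(500):
    if np.linalg.norm(g) < 1e-10:
        break
    t, fx = 1.0, f(x)                     # Armijo backtracking line search
    while f(x + t * d) > fx + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    d = direction(g_new, g, d)
    g = g_new
# the unconstrained minimizer solves A x = b, i.e. x = [0.2, 0.4]
```

Because the sufficient-descent condition holds regardless of the line search, the Armijo loop always terminates, which is the key point of the modification.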
5.
Alfredo N. Iusem 《Acta Appl Math》1995,38(2):163-184
We analyze an algorithm for the problem min f(x) s.t. x ≥ 0 suggested, without convergence proof, by Eggermont. The iterative step is given by x_j^{k+1} = x_j^k (1 − β_k ∇f(x^k)_j), with β_k > 0 determined through a line search. This method can be seen as a natural extension of the steepest descent method for unconstrained optimization, and we establish convergence properties similar to those known for steepest descent, namely weak convergence to a KKT point for a general f, weak convergence to a solution for convex f, and full convergence to the solution for strictly convex f. Applying this method to a maximum likelihood estimation problem, we obtain an additively overrelaxed version of the EM Algorithm. We extend the full convergence results known for EM to this overrelaxed version by establishing local Fejér monotonicity to the solution set.
Research for this paper was partially supported by CNPq grant No 301280/86.
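The multiplicative update above can be sketched directly. The sketch assumes a simple quadratic objective f(x) = ½||x − a||² (so the constrained minimizer over x ≥ 0 is max(a, 0) componentwise) and a fixed small β in place of the paper's line search:

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])    # assumed data; minimizer over x >= 0 is max(a, 0)

def grad(x):                      # gradient of f(x) = 0.5 * ||x - a||^2
    return x - a

x = np.ones(3)                    # strictly positive starting point
beta = 0.1                        # fixed step; the paper chooses beta_k by line search
for _ in range(2000):
    x = x * (1.0 - beta * grad(x))   # x_j <- x_j * (1 - beta * grad_j)
```

Note how the iterates stay nonnegative without any projection: a coordinate can only reach zero in the limit, which is what makes the method an interior, EM-like scheme.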
6.
Some Convergence Properties of Descent Methods
In this paper, we discuss the convergence properties of a class of descent algorithms for minimizing a continuously differentiable function f on ℝ^n without assuming that the sequence {x_k} of iterates is bounded. Under mild conditions, we prove that lim inf_{k→∞} ||∇f(x_k)|| = 0 and that false convergence does not occur when f is convex. Furthermore, we discuss the convergence rate of {||∇f(x_k)||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded.
7.
The method minimizes a nonsmooth convex function f on ℝ^n and is based on Rockafellar's proximal point algorithm and a cutting-plane technique. At each step, we use an approximate proximal point p_a(x_k) of x_k to define v_k ∈ ∂_{ε_k} f(p_a(x_k)) with ε_k ≤ α||v_k||, where α is a constant. The method monitors the reduction in the value of ||v_k|| to identify when a line search on f should be used. The quasi-Newton step is used to reduce the value of ||v_k||. Without the differentiability of f, the method converges globally and the rate of convergence is Q-linear. Superlinear convergence is also discussed, extending the characterization result of Dennis and Moré. Numerical results show the good performance of the method.
Received October 3, 1995 / Revised version received August 20, 1998
Published online January 20, 1999
8.
Let p be a prime and χ denote a Dirichlet character modulo p. Let f(x) = a_0 + a_1 x + ... + a_k x^k be a degree-k polynomial with integer coefficients such that (p, a_0, a_1, ..., a_k) = 1. For any integer m, we study the asymptotic property of
$$
\sum_{\chi \ne \chi_0} \left| \sum_{a=1}^{p-1} \chi(a)\, e\!\left(\frac{f(a)}{p}\right) \right|^2 \left| L(1,\chi) \right|^{2m}.
$$
9.
Alessio Martini 《Mathematische Zeitschrift》2010,265(4):831-848
The Heisenberg–Pauli–Weyl (HPW) uncertainty inequality on $\mathbb{R}^n$ says that
$$
\left( \int_{\mathbb{R}^n} |x|^2 |f(x)|^2 \, dx \right) \left( \int_{\mathbb{R}^n} |\nabla f(x)|^2 \, dx \right) \ge \frac{n^2}{4} \left( \int_{\mathbb{R}^n} |f(x)|^2 \, dx \right)^2.
$$