Abstract: In unconstrained optimization, the usual quasi-Newton equation is $B_{k+1}s_k = y_k$, where $y_k$ is the difference of the gradients at the last two iterates. In this paper, we propose a new quasi-Newton equation, $B_{k+1}s_k = \tilde{y}_k$, in which $\tilde{y}_k$ is based on both the function values and the gradients at the last two iterates. The new equation is superior to the old one in the sense that $\tilde{y}_k$ approximates $\nabla^2 f(x_{k+1})s_k$ better than $y_k$ does. Modified quasi-Newton methods based on the new equation are locally and superlinearly convergent. Extensive numerical experiments show that the new quasi-Newton methods are encouraging.