Block truncated-Newton methods for parallel optimization
Authors: Stephen G. Nash, Ariela Sofer
Affiliation: Operations Research and Applied Statistics Department, George Mason University, Fairfax, VA 22030, USA
Abstract: Truncated-Newton methods are a class of optimization methods suitable for large-scale problems. At each iteration, a search direction is obtained by approximately solving the Newton equations using an iterative method. In this way, matrix costs and second-derivative calculations are avoided, removing the major drawbacks of Newton's method. In this form, the algorithms are well suited to vectorization. Further improvements in performance are sought by using block iterative methods for computing the search direction; in particular, conjugate-gradient-type methods are considered. Computational experience on a hypercube computer is reported, indicating that on some problems the improvement in performance can be greater than that attributable to parallelism alone.
Acknowledgments: Partially supported by Air Force Office of Scientific Research grant AFOSR-85-0222. Partially supported by National Science Foundation grant ECS-8709795, co-funded by the U.S. Air Force Office of Scientific Research.
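The abstract describes the search-direction computation only at a high level. The following Python sketch illustrates a plain (serial, non-block) truncated-Newton direction step of the kind the abstract refers to: the Newton equations are solved approximately by a linear conjugate-gradient inner loop, with Hessian-vector products approximated by finite differences of the gradient so that no second derivatives or matrix storage are required. The function name, the forcing parameter `eta`, the truncation limit, and the negative-curvature fallback are illustrative assumptions, not taken from the paper; the block iterative and parallel variants studied by the authors are not shown.

```python
import numpy as np

def truncated_newton_direction(grad_f, x, g, max_cg_iters=10, eta=0.5, fd_eps=1e-6):
    """Approximately solve H(x) d = -g with linear conjugate gradients.

    Hessian-vector products H(x) v are approximated by forward differences
    of the gradient, so no second derivatives or matrix storage are needed.
    The CG loop is truncated after max_cg_iters iterations or when the
    residual norm drops below eta * ||g|| (an inexact-Newton stopping rule).
    Parameter names and defaults here are illustrative, not from the paper.
    """
    def hessvec(v):
        # Forward-difference approximation: H v ~ (grad(x + eps v) - grad(x)) / eps
        return (grad_f(x + fd_eps * v) - g) / fd_eps

    d = np.zeros_like(g)
    r = -g.copy()                 # residual of H d = -g at d = 0
    p = r.copy()
    rs_old = r @ r
    g_norm = np.linalg.norm(g)
    for _ in range(max_cg_iters):
        Hp = hessvec(p)
        pHp = p @ Hp
        if pHp <= 0.0:
            # Negative curvature detected: return current iterate,
            # or steepest descent if no progress has been made yet.
            return d if d.any() else -g
        alpha = rs_old / pHp
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= eta * g_norm:
            break                 # truncate: direction is accurate enough
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d
```

In a complete method, the returned direction would be combined with a line search on the objective, and the block conjugate-gradient variant discussed in the paper would distribute the inner-iteration work across processors rather than running it serially as above.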
Keywords: Nonlinear optimization, parallel computing, block iterative methods, truncated-Newton methods
|