Similar Documents
 20 similar documents found (search time: 11 ms)
1.
The nine quadratically convergent algorithms for function minimization appearing in Ref. 2 are tested through several numerical examples. A quadratic function and four nonquadratic functions are investigated. For the quadratic function, the results show that, if high-precision arithmetic together with high accuracy in the one-dimensional search is employed, all the algorithms behave identically: they all produce the same sequence of points and they all lead to the minimal point in the same number of iterations (this number is at most equal to the number of variables). For the nonquadratic functions, the results show that some of the algorithms behave identically and, therefore, any one of them can be considered representative of the entire class. The effect of different restarting conditions on the convergence characteristics of the algorithms is studied. Proper restarting conditions for faster convergence are given. This research, supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67, is a condensation of the investigation described in Ref. 1.

2.
3.
Mathematical Programming - Huber et al. (SIAM J Comput 43:1064–1084, 2014) introduced the concept of skew bisubmodularity as a generalization of bisubmodularity in their complexity dichotomy...

4.
In this paper, the method of dual matrices for the minimization of functions is introduced. The method, which is developed on the model of a quadratic function, is characterized by two matrices at each iteration. One matrix is such that a linearly independent set of directions can be generated, regardless of the stepsize employed. The other matrix is such that, at the point where the first matrix fails to yield a gradient linearly independent of all the previous gradients, it generates a displacement leading to the minimal point. Thus, the one-dimensional search is bypassed. For a quadratic function, it is proved that the minimal point is obtained in at most n + 1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n + 2. Three algorithms of the method are presented. A reverse algorithm, which permits the use of only one matrix, is also given. Considerations pertaining to the applications of this method to the minimization of a quadratic function and a nonquadratic function are given. It is believed that, since the one-dimensional search can be bypassed, a considerable amount of computational saving can be achieved. This paper, supported by the National Science Foundation, Grant No. GP-32453, is based on Ref. 1.

5.
In this paper, a unified method to construct quadratically convergent algorithms for function minimization is described. With this unified method, a generalized algorithm is derived. It is shown that all the existing conjugate-gradient algorithms and variable-metric algorithms can be obtained as particular cases. In addition, several new practical algorithms can be generated. The application of these algorithms to both quadratic and nonquadratic functions is discussed. This research, supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67, is based on Ref. 1.

6.
This paper presents a systematic investigation of numerical continuation algorithms for bifurcation problems (simple turning points and Hopf bifurcation points) of 2D nonlinear elliptic equations. The continuation algorithms employed are based only on iterative methods (Preconditioned Generalized Conjugate Gradient, PGCG, and Multigrid, MG). PGCG is mainly used as the coarse-grid solver in the MG cycle. Numerical experiments were made with the MG continuation algorithms developed by Hackbusch [W. Hackbusch, Multi-Grid Solution of Continuation Problems, Lecture Notes in Math., vol. 953, Springer, Berlin, 1982], Meis et al. [T.F. Meis, H. Lehman, H. Michael, Application of the Multigrid Method to a Nonlinear Indefinite Problem, Lecture Notes in Math., vol. 960, Springer, Berlin, 1982], and Mittelmann and Weber [H.D. Mittelmann, H. Weber, Multi-grid solution of bifurcation problems, SIAM J. Sci. Statist. Comput. 6 (1985) 49]. The mathematical models selected as test problems are well-known diffusion–reaction systems: the non-isothermal catalyst pellet and the Lengyel–Epstein model of the CIMA reaction. The numerical methods proved to be efficient and reliable, so that computations with fine grids can easily be performed.
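The predictor–corrector structure common to such continuation algorithms can be sketched on a toy scalar problem: step the parameter, predict with the previous solution, and correct with Newton's method. The model equation, parameter range, and tolerances below are illustrative assumptions, not the paper's MG/PGCG solvers.

    import numpy as np

    def F(u, lam):
        # toy nonlinear problem with a fold; stands in for the discretized PDE residual
        return u**3 - u + lam

    def dF_du(u):
        return 3.0 * u**2 - 1.0

    def newton_corrector(u, lam, tol=1e-10, maxit=20):
        # corrector: Newton's method in u at fixed parameter lam
        for _ in range(maxit):
            r = F(u, lam)
            if abs(r) < tol:
                break
            u -= r / dF_du(u)
        return u

    # natural-parameter continuation: march lam, predict with the previous solution, correct
    u, branch = -1.2, []
    for lam in np.linspace(0.0, 0.3, 16):
        u = newton_corrector(u, lam)
        branch.append((lam, u))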

7.
Bisubmodular functions are a natural “directed”, or “signed”, extension of submodular functions with several applications. Recently Fujishige and Iwata showed how to extend the Iwata, Fleischer, and Fujishige (IFF) algorithm for submodular function minimization (SFM) to bisubmodular function minimization (BSFM). However, they were able to extend only the weakly polynomial version of IFF to BSFM. Here we investigate the difficulty that prevented them from also extending the strongly polynomial version of IFF to BSFM, and we show a way around it. This new method gives a somewhat simpler strongly polynomial SFM algorithm, as well as the first combinatorial strongly polynomial algorithm for BSFM. This further leads to extending Iwata’s fully combinatorial version of IFF to BSFM. The research of S. T. McCormick was supported by an NSERC Operating Grant. The research of S. Fujishige was supported by a Grant-in-Aid of the Ministry of Education, Culture, Science and Technology of Japan.

8.
The problem of minimizing a function f(x) subject to the constraint φ(x) = 0 is considered. Here, f is a scalar, x is an n-vector, and φ is an m-vector, where m < n. A general quadratically convergent algorithm is presented. The conjugate-gradient algorithm and the variable-metric algorithms for constrained function minimization can be obtained as particular cases of the general algorithm. It is shown that, for a quadratic function subject to a linear constraint, all the particular algorithms behave identically if the one-dimensional search for the stepsize is exact. Specifically, they all produce the same sequence of points and lead to the constrained minimal point in no more than n − r descent steps, where r is the number of linearly independent constraints. The algorithms are then modified so that they can also be employed for a nonquadratic function subject to a nonlinear constraint. Some particular algorithms are tested through several numerical examples.
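For the quadratic objective with a linear constraint discussed above, the common ingredient of these algorithms is that descent directions are kept in the null space of the constraint matrix. A minimal NumPy sketch of that idea (projected steepest descent on made-up data; it is not one of the paper's algorithms, whose conjugate directions terminate in at most n − r steps) is:

    import numpy as np

    # minimize 0.5 x^T Q x - c^T x  subject to  A x = b  (illustrative data)
    Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
    c = np.array([1.0, 2.0, 3.0])
    A = np.array([[1.0, 1.0, 1.0]])            # one linear constraint, so r = 1
    b = np.array([1.0])

    # feasible starting point and projector onto the null space of A
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

    for _ in range(50):                        # projected steepest descent; the paper's conjugate-
        g = Q @ x - c                          # direction variants need at most n - r such steps
        d = -P @ g                             # projected (feasible) descent direction
        if np.linalg.norm(d) < 1e-10:
            break
        alpha = -(g @ d) / (d @ Q @ d)         # exact one-dimensional search for a quadratic
        x = x + alpha * d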

9.
10.
Numerical results are obtained on sequential and parallel versions of ABS algorithms for linear systems, for both full matrices and q-band matrices. The results using the sequential algorithm on full matrices indicate the superiority of a particular implementation of the symmetric algorithm. The condensed form of the algorithm is well suited for implementation in a parallel environment, and results obtained on the IBM 4381 system favor a synchronous implementation over the asynchronous one. Results are obtained from sequential implementations of the LU, Cholesky, and symmetric algorithms of the ABS class for q-band matrices, which reduce memory storage. A simple parallelization of do-loops for calculating components gives interesting performance. This work has been developed in the framework of a collaboration between IBM-ECSEC, Rome, Italy, and the Department of Mathematics of the University of Bergamo, Bergamo, Italy. The author is grateful to Prof. J. Abaffy (University of Economics, Budapest), Prof. L. Dixon (Hatfield Polytechnic), and Prof. E. Spedicato (Department of Mathematics, University of Bergamo) for useful suggestions.
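A minimal sketch of one member of the ABS class for a dense square system Ax = b is given below, using the symmetric (Huang-type) parameter choice H_1 = I with both parameter vectors equal to the current row; it is a schematic illustration only, not the paper's banded or parallel implementations.

    import numpy as np

    def abs_huang_solve(A, b):
        """Schematic ABS iteration (Huang-type parameter choice) for A x = b."""
        n = len(b)
        x = np.zeros(n)
        H = np.eye(n)                       # "Abaffian" matrix, updated row by row
        for i in range(n):
            a = A[i]
            s = H @ a
            denom = a @ s                   # a_i^T H_i a_i
            if abs(denom) < 1e-14:
                continue                    # row dependent on previous ones (schematically skipped)
            p = H.T @ a                     # search direction
            x = x - ((a @ x - b[i]) / (a @ p)) * p
            H = H - np.outer(s, s) / denom  # H_{i+1} = H_i - H_i a a^T H_i / (a^T H_i a)
        return x

    A = np.array([[3.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 5.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(abs_huang_solve(A, b), np.linalg.solve(A, b))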

11.
On search directions for minimization algorithms
Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
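The algorithm under discussion — exact one-dimensional minimizations along the coordinate directions taken in sequence — can be sketched as follows; the objective used here is an arbitrary smooth convex example, not the three-variable counterexamples constructed in the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def cyclic_coordinate_descent(f, x0, cycles=20):
        """Minimize f by exact line searches along e_1, ..., e_n in sequence."""
        x = np.asarray(x0, dtype=float)
        for _ in range(cycles):
            for i in range(len(x)):
                # exact one-dimensional search along the i-th coordinate direction
                phi = lambda t: f(np.concatenate([x[:i], [t], x[i+1:]]))
                x[i] = minimize_scalar(phi).x
        return x

    # smooth illustrative objective (well-behaved, so the search does not cycle here)
    f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 0.5 * x[0] * x[1] + x[2]**2
    print(cyclic_coordinate_descent(f, [3.0, 3.0, 3.0]))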

12.
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are applicable to the situation where the whole constraint set of the problem is not known in advance, but is rather learned over time through observations. The algorithms are also of interest for constrained optimization problems where the constraints are known but the number of constraints is either large or not finite. We analyze the proposed algorithm for the case when the objective function is differentiable with Lipschitz gradients and the case when the objective function is not necessarily differentiable. The behavior of the algorithm is investigated for both diminishing and non-diminishing stepsize values. Almost sure convergence to an optimal solution is established for diminishing stepsizes. For non-diminishing stepsizes, error bounds are established for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected sub-optimality of the function values along the weighted averages.
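One iteration of such a method can be sketched as a gradient step on the objective followed by a feasibility step toward a single randomly observed constraint set {x : g_i(x) ≤ 0}, implemented as a subgradient (Polyak-type) projection. The objective, constraints, and stepsizes below are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    rng = np.random.default_rng(0)

    # illustrative problem: minimize ||x - (2, 2)||^2 over the intersection of half-spaces a_i^T x <= b_i
    f_grad = lambda x: 2.0 * (x - np.array([2.0, 2.0]))        # gradient of the objective
    a = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 0.0]])        # constraint data: g_i(x) = a_i^T x - b_i
    b = np.array([1.0, 0.5, 0.0])

    x = np.zeros(2)
    for k in range(1, 2001):
        alpha = 1.0 / k                                         # diminishing stepsize
        x = x - alpha * f_grad(x)                               # gradient step on the objective
        i = rng.integers(len(b))                                # observe one constraint at random
        viol = a[i] @ x - b[i]
        if viol > 0.0:                                          # feasibility (subgradient projection) step
            x = x - viol * a[i] / (a[i] @ a[i])
    print(x)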

13.
In 1969, Huang (Ref. 1) proposed a unified approach to quadratically convergent algorithms for function minimization without constraints; if we slightly modify a few points of his development, other algorithms can be generated, several of which were published after 1969. It is also possible to generate a class of algorithms which provide quadratic convergence without a linear search at each step.
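As one concrete member of such a unified family (a standard textbook example, not taken from the paper), the DFP variable-metric update of the approximate inverse Hessian reads $$H_{k+1} = H_k + \frac{s_k s_k^T}{s_k^T y_k} - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k ,$$ which preserves positive definiteness of H_k and, with exact line searches, minimizes a quadratic in at most n steps.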

14.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yields the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments of one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which they had obtained earlier. Special thanks must go to Professor M. R. Hestenes whose constant encouragement and assistance made this paper possible.
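The conjugation step underlying CGS — a Gram-Schmidt process carried out relative to a positive-definite symmetric matrix A rather than the identity — can be sketched as follows; the matrix A and starting vectors are made-up data, and this is not the full CGS minimization routine.

    import numpy as np

    def conjugate_gram_schmidt(U, A):
        """Turn the columns of U into mutually A-conjugate directions."""
        P = []
        for u in U.T:
            p = u.copy()
            for q in P:
                # subtract the A-projection onto each previously built direction
                p -= (u @ A @ q) / (q @ A @ q) * q
            P.append(p)
        return np.column_stack(P)

    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])   # SPD matrix
    U = np.eye(3)                                                       # start from the coordinate directions
    P = conjugate_gram_schmidt(U, A)
    print(np.round(P.T @ A @ P, 10))                                    # diagonal: the directions are A-conjugate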

15.
We present a simple and unified technique to establish convergence of various minimization methods. These include the (conceptual) proximal point method, as well as implementable forms such as bundle algorithms, including the classical subgradient relaxation algorithm with divergent series. An important part of Phil Wolfe's research work concerned convex minimization. This paper is dedicated to him, on the occasion of his 65th birthday, in appreciation of his creative and pioneering work.
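For reference, the (conceptual) proximal point iteration for a closed convex function f takes the form $$x_{k+1} = \arg\min_{x} \left\{ f(x) + \frac{1}{2\lambda_k}\,\|x - x_k\|^2 \right\}, \qquad \lambda_k > 0 ,$$ and proximal bundle variants make this step implementable by replacing f in the subproblem with a cutting-plane model built from accumulated subgradients.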

16.
Discrete Applied Mathematics, 2004, 134(1–3): 303–316
M-convex functions, introduced by Murota (Adv. Math. 124 (1996) 272; Math. Prog. 83 (1998) 313), enjoy various desirable properties as “discrete convex functions.” In this paper, we propose two new polynomial-time scaling algorithms for the minimization of an M-convex function. Both algorithms apply a scaling technique to a greedy algorithm for M-convex function minimization, and run as fast as the previous minimization algorithms. We also specialize our scaling algorithms for the resource allocation problem, which is a special case of M-convex function minimization.
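The greedy step that these scaling algorithms accelerate can be sketched as a local search over exchanges x → x − e_i + e_j, stopping when no exchange improves the value (for an M-convex function, such a local minimum is global). The objective below — a separable convex function on a constant-sum set — is an illustrative assumption, not the authors' polynomial-time scaling algorithms.

    import numpy as np
    from itertools import permutations

    def greedy_exchange_descent(f, x):
        """Repeatedly apply an improving exchange x -> x - e_i + e_j until none exists."""
        x = np.array(x, dtype=int)
        improved = True
        while improved:
            improved = False
            for i, j in permutations(range(len(x)), 2):
                y = x.copy()
                y[i] -= 1
                y[j] += 1
                if f(y) < f(x):            # move to a strictly better neighbour
                    x, improved = y, True
                    break
        return x

    # illustrative separable convex objective on {x : sum(x) = 10, x >= 0}, a simple M-convex case
    f = lambda x: np.inf if (x < 0).any() else float(((x - np.array([1.0, 2.0, 3.0, 4.0]))**2).sum())
    print(greedy_exchange_descent(f, [10, 0, 0, 0]))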

17.
We investigate the solution of large-scale generalized algebraic Bernoulli equations, such as those arising in control and systems theory. Here, we discuss algorithms based on a generalization of the Newton iteration for the matrix sign function. The algorithms are easy to parallelize and provide an efficient numerical tool for solving large-scale problems. Both the accuracy and the parallel performance of our implementations on a cluster of Intel Xeon processors are reported.
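The classical Newton iteration for the matrix sign function, which the algorithms above generalize, is X_{k+1} = (X_k + X_k^{-1})/2 with X_0 = A. A minimal NumPy sketch for a small dense matrix (not the paper's large-scale parallel implementation) is:

    import numpy as np

    def matrix_sign(A, tol=1e-12, maxit=100):
        """Newton iteration X <- (X + X^{-1}) / 2 for sign(A); A must have no eigenvalues on the imaginary axis."""
        X = A.astype(float).copy()
        for _ in range(maxit):
            X_new = 0.5 * (X + np.linalg.inv(X))
            if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
                return X_new
            X = X_new
        return X

    A = np.array([[2.0, 1.0], [0.0, -3.0]])
    S = matrix_sign(A)
    print(np.round(S, 8))        # eigenvalues of S are +1 and -1, matching the signs of eig(A)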

18.
Direct methods for computing the Moore-Penrose inverse of a matrix are surveyed, classified, and tested. It is observed that the algorithms using matrix decompositions or bordered matrices are numerically more stable.
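The decomposition-based route can be illustrated with the SVD construction A^+ = V Σ^+ U^T, where Σ^+ inverts only the nonzero singular values; the short NumPy sketch below is a generic illustration, not any particular algorithm from the survey.

    import numpy as np

    def pinv_svd(A, rtol=1e-12):
        """Moore-Penrose inverse via the singular value decomposition A = U diag(s) V^T."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = [1.0 / x if x > rtol * s.max() else 0.0 for x in s]   # invert only the nonzero singular values
        return Vt.T @ np.diag(s_inv) @ U.T

    A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
    print(np.allclose(pinv_svd(A), np.linalg.pinv(A)))                 # compare against NumPy's reference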

19.
20.
Quasi-Newton algorithms minimize a function F(x), x ∈ R^n, searching at any iteration k along the direction s_k = −H_k g_k, where g_k = ∇F(x_k) and H_k approximates in some sense the inverse Hessian of F(x) at x_k. When the matrix H is updated according to the formulas in Broyden's family and an exact line search is performed at every iteration, a compact algorithm (free from the Broyden family parameter) can be conceived in terms of the following n × n matrix: $$H_R = H - \frac{H g g^T H}{g^T H g},$$ which can be viewed as an approximating reduced inverse Hessian. In this paper, a new algorithm is proposed which uses at any iteration an (n−1)×(n−1) matrix K related to H_R by $$H_R = Q \left[ \begin{array}{cc} 0 & 0 \\ 0 & K \end{array} \right] Q ,$$ where Q is a suitable orthogonal n×n matrix. The updating formula in terms of the matrix K incorporated in this algorithm is only moderately more complicated than the standard updating formulas for variable-metric methods, but, at the same time, it updates at every iteration a positive definite matrix K instead of a singular matrix H_R. Besides its compactness relative to the algorithms with updating formulas in Broyden's class, a further noticeable feature of the reduced Hessian algorithm is that the downhill condition can be stated in a simple way, so that efficient line searches may be implemented.
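By construction the reduced matrix annihilates the current gradient, H_R g = 0, so H_R is singular with rank n − 1; this is easy to check numerically. The sketch below uses made-up H and g purely for illustration.

    import numpy as np

    n = 4
    rng = np.random.default_rng(1)
    M = rng.standard_normal((n, n))
    H = M @ M.T + n * np.eye(n)          # a positive definite "inverse Hessian" approximation
    g = rng.standard_normal(n)           # current gradient

    H_R = H - np.outer(H @ g, H @ g) / (g @ H @ g)   # H_R = H - H g g^T H / (g^T H g)

    print(np.allclose(H_R @ g, 0.0))     # True: H_R is singular, with g in its null space
    print(np.linalg.matrix_rank(H_R))    # n - 1, consistent with the (n-1)x(n-1) matrix K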

