Similar Documents
 20 similar documents found (search time: 593 ms)
2.
In a series of recent papers, Oren; Oren and Luenberger; Oren and Spedicato; and Spedicato have developed the self-scaling variable metric algorithms. These algorithms extend Broyden's single-parameter family of approximations to the inverse Hessian to a two-parameter family. Conditions are given on the new parameter that minimize a bound on the condition number of the approximated inverse Hessian while ensuring improved step-wise convergence. Davidon has devised an update which also minimizes the bound on the condition number while remaining in the Broyden single-parameter family. This paper derives initial scalings for the approximate inverse Hessian which make members of the Broyden class self-scaling. The Davidon, BFGS, and Oren–Spedicato updates are tested for computational efficiency and stability on numerous test functions; the results indicate strong computational superiority of the Davidon and BFGS updates over the self-scaling update, except on a special class of functions, the homogeneous functions.
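The initial-scaling idea above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact derivation: it assumes the common Oren–Luenberger scaling factor (yᵀs)/(yᵀy) applied to H before a standard BFGS update.

```python
import numpy as np

def scaled_bfgs_update(H, s, y, scale_first=False):
    """One BFGS update of the inverse-Hessian approximation H,
    given step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.

    If scale_first is True, H is first rescaled by (y's)/(y'y) --
    the classical self-scaling choice that clusters the eigenvalues
    of H around those of the true inverse Hessian.
    """
    sy = s @ y
    if sy <= 0:
        return H  # curvature condition fails; skip the update
    if scale_first:
        H = H * (sy / (y @ y))
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # BFGS: H_new = V H V' + rho s s'; satisfies the secant condition H_new y = s
    return V @ H @ V.T + rho * np.outer(s, s)
```

Either way, the update preserves the secant condition H_new y = s; the scaling only changes how the remaining eigenvalues of H are sized.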

3.
This paper addresses the problem of selecting the parameter in a family of algorithms for unconstrained minimization known as Self-Scaling Variable Metric (SSVM) algorithms. This family, which has some very attractive properties, is based on a two-parameter formula for updating the inverse Hessian approximation, in which the parameters can take any values between zero and one. Earlier results obtained for SSVM algorithms apply to the entire family and give no indication of how the choice of parameter may affect the algorithm's performance. In this paper, we examine empirically the effect of varying the parameters and relaxing the line search. Theoretical considerations also lead to a switching rule for these parameters. Numerical results obtained for the SSVM algorithm indicate that with proper parameter selection it is superior to the DFP algorithm, particularly for high-dimensional problems. This paper was presented at the 8th International Symposium on Mathematical Programming held at Stanford University, California, August 1973.

4.
In this paper we propose a nonmonotone approach to recurrent neural network training for temporal sequence processing applications. This approach allows learning performance to deteriorate in some iterations while the network's performance nevertheless improves over time. A self-scaling BFGS method is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant, and is tested on a set of sequence processing problems. Simulation results show that the proposed algorithm outperforms BFGS as well as other methods previously applied to these sequences, providing an effective modification that is capable of training recurrent networks of various architectures.

5.
Edge insertion iteratively improves a triangulation of a finite point set in ℝ² by adding a new edge, deleting old edges crossing the new edge, and retriangulating the polygonal regions on either side of the new edge. This paper presents an abstract view of the edge insertion paradigm, and then shows that it gives polynomial-time algorithms for several types of optimal triangulations, including minimizing the maximum slope of a piecewise-linear interpolating surface. The research of the second author was supported by the National Science Foundation under Grant No. CCR-8921421 and under the Alan T. Waterman award, Grant No. CCR-9118874. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the National Science Foundation. Part of the work was done while the second, third, and fourth authors visited the Xerox Palo Alto Research Center, and while the fifth author was on study leave at the University of Illinois.

6.
Variable metric methods are Newton–Raphson-like algorithms for unconstrained minimization in which the inverse Hessian is replaced by an approximation, inferred from previous gradients and updated at each iteration. During the past decade various approaches have been used to derive general classes of such algorithms having the common properties of being conjugate-directions methods and having quadratic termination. Observed differences in the actual performance of such methods motivated recent attempts to identify variable metric algorithms having additional properties that may be significant in practical situations (e.g. nonquadratic functions, inaccurate line search, etc.). The SSVM algorithms, introduced by the first author, are such methods; among their other properties, they automatically compensate for poor scaling of the objective function. This paper presents some new theoretical results identifying a subclass of SSVM algorithms that have the additional property of minimizing a sharp bound on the condition number of the inverse Hessian approximation at each iteration. Reducing this condition number is important for decreasing roundoff error. The theoretical properties of this subclass are explored, and two of its special cases are tested numerically in comparison with other SSVM algorithms. This work was done while the first author was a visiting fellow at the Engineering-Economic Systems Department, Stanford University.

7.
It has been conjectured that every configuration C of convex objects in 3-space with disjoint interiors can be taken apart by translation with two hands: that is, some proper subset of C can be translated to infinity without disturbing its complement. We show that the conjecture holds for five or fewer objects and give a counterexample with six objects. We extend the counterexample to a configuration that cannot be taken apart with two hands using arbitrary isometries (rigid motions). The research of J. Snoeyink was supported in part by an NSERC Research Grant. J. Stolfi was previously at DEC Systems Research Center, Palo Alto, CA, USA.

8.
Self-scaling quasi-Newton methods for unconstrained optimization depend upon updating the Hessian approximation by a formula which depends on two parameters (say, τ and θ) such that τ = 1, θ = 0, and θ = 1 yield the unscaled Broyden family, the BFGS update, and the DFP update, respectively. In previous work, conditions were obtained on these parameters that imply global and superlinear convergence for self-scaling methods on convex objective functions. This paper discusses the practical performance of several new algorithms designed to satisfy these conditions.
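The two-parameter family can be written down compactly. The sketch below uses τ for the scaling parameter and θ for the Broyden parameter; the abstract's original symbols were lost in extraction, so these names are an assumption, but the stated special cases (θ = 0 gives BFGS, θ = 1 gives DFP, τ = 1 gives the unscaled family) are reproduced.

```python
import numpy as np

def self_scaling_broyden(H, s, y, theta=0.0, tau=1.0):
    """Two-parameter self-scaling update of the inverse-Hessian
    approximation H (illustrative names: tau scales H before the
    update; theta = 0 gives BFGS, theta = 1 gives DFP)."""
    sy = s @ y
    Hy = (tau * H) @ y
    yHy = y @ Hy
    # DFP update of the scaled matrix
    H_dfp = tau * H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    # rank-one term that interpolates from DFP (theta = 1) to BFGS (theta = 0)
    v = s / sy - Hy / yHy
    return H_dfp + (1.0 - theta) * yHy * np.outer(v, v)
```

Every member of the family satisfies the secant condition H_new y = s, regardless of θ and τ; the parameters only redistribute curvature in the complementary directions.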

9.
In this paper, we review some methods which are designed to solve equality constrained minimization problems by following the trajectory defined by a system of ordinary differential equations. The numerical performance of a number of these methods is compared with that of some popular sequential quadratic programming algorithms. On a set of eighteen difficult test problems, we observe that several of the ODE methods are more successful than any of the SQP techniques. We suggest that these experimental results indicate the need for research both to analyze and develop new ODE techniques and to strengthen the currently available SQP algorithms. This work was supported by a SERC Research Studentship for the first author. Both authors are indebted to Dr. J. J. McKeown and Dr. K. D. Patel of SCICON Ltd., the collaborating establishment, for their advice and encouragement.

10.
A barrier function method for minimax problems (cited by: 2)
This paper presents an algorithm based on barrier functions for solving semi-infinite minimax problems which arise in an engineering design setting. The algorithm bears a resemblance to some of the current interior penalty function methods used to solve constrained minimization problems. Global convergence is proven, and numerical results are reported which show that the algorithm is exceptionally robust, and that its performance is comparable to, while its structure is simpler than, that of current first-order minimax algorithms. This research was supported by National Science Foundation grant ECS-8517362, Air Force Office of Scientific Research grant 86-0116, the California State MICRO program, and the United Kingdom Science and Engineering Research Council.
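As a generic illustration of the barrier idea for minimax (a sketch, not the paper's actual algorithm): min_x max_i f_i(x) is rewritten as min z subject to f_i(x) ≤ z, and a log-barrier term keeps iterates strictly inside those constraints.

```python
import numpy as np

def minimax_barrier(fs, x, z, mu):
    """Barrier objective for min_x max_i f_i(x), reformulated as
    min z  s.t.  f_i(x) <= z.

    Returns z - mu * sum(log(z - f_i(x))), or +inf outside the
    interior of the feasible region.  As mu -> 0 the minimizer
    approaches the true minimax solution.
    """
    vals = np.array([f(x) for f in fs])
    slack = z - vals
    if np.any(slack <= 0):
        return np.inf  # not strictly feasible
    return z - mu * np.sum(np.log(slack))
```

In an interior-point scheme one would minimize this objective over (x, z) for a decreasing sequence of μ values, warm-starting each solve from the previous one.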

11.
Variable-metric algorithms have played an important role in unconstrained optimization theory. This paper presents a sufficiency condition on the sequence of metrics in a variable-metric algorithm that will make it a conjugate-gradient algorithm. The Huang class of algorithms (Ref. 1) and the class of self-scaling variable-metric algorithms by Oren (Ref. 2) all satisfy the condition. This paper also includes a discussion of the behavior of algorithms that meet the condition on nonquadratic functions.

12.
In this paper a class of polynomial interior-point algorithms for the horizontal linear complementarity problem, based on a new parametric kernel function with parameters p ∈ [0,1] and σ ≥ 1, is presented. The proposed parametric kernel function is neither exponentially convex nor strongly convex like the usual kernel functions, and has a finite value at the boundary of the feasible region. It is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center of the algorithm. The currently best known iteration bounds for the algorithm with large- and small-update methods are derived, which reduce the gap between the practical behavior of the algorithms and their theoretical performance results. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p, σ and θ.

13.
SSVM: A Smooth Support Vector Machine for Classification (cited by: 10)
Smoothing methods, extensively used for solving important mathematical programming problems and applications, are applied here to generate and solve an unconstrained smooth reformulation of the support vector machine for pattern classification using a completely arbitrary kernel. We term such a reformulation a smooth support vector machine (SSVM). A fast Newton–Armijo algorithm for solving the SSVM converges globally and quadratically. Numerical results and comparisons are given to demonstrate the effectiveness and speed of the algorithm. On six publicly available datasets, the tenfold cross-validation correctness of SSVM was the highest compared with four other methods, and SSVM was also the fastest. On larger problems, SSVM was comparable to or faster than SVM light (T. Joachims, in Advances in Kernel Methods—Support Vector Learning, MIT Press: Cambridge, MA, 1999), SOR (O.L. Mangasarian and David R. Musicant, IEEE Transactions on Neural Networks, vol. 10, pp. 1032–1037, 1999) and SMO (J. Platt, in Advances in Kernel Methods—Support Vector Learning, MIT Press: Cambridge, MA, 1999). SSVM can also generate a highly nonlinear separating surface, such as a checkerboard.
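The smoothing step at the heart of SSVM replaces the plus function (x)₊ = max(x, 0) by the smooth approximation p(x, α) = x + (1/α)·log(1 + e^(−αx)); the sketch below implements that function in a numerically stable form (the parameter name α and this exact formula are stated here from memory of the SSVM formulation, so treat them as an assumption).

```python
import numpy as np

def smooth_plus(x, alpha=5.0):
    """p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x)),
    a smooth, convex approximation of max(x, 0) that tightens
    as alpha grows.  Rewritten via log1p and |x| so the exp()
    never overflows for large |x|."""
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-alpha * np.abs(x))) / alpha
```

Because p is twice differentiable, the nonsmooth SVM objective becomes an unconstrained smooth problem, which is what makes a globally and quadratically convergent Newton–Armijo iteration applicable.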

14.
In a series of letters to D. Stanton, R.W. Gosper presented many strange evaluations of hypergeometric series. Recently, we rediscovered one of the strange hypergeometric identities appearing in (Gosper: A letter to D. Stanton, XEROX Palo Alto Research Center, 1977). In this paper, we prove this identity and derive its generalization using contiguity operators.

15.
After studying Gaussian-type quadrature formulae with mixed boundary conditions, we suggest a fast algorithm for computing their nodes and weights. It is shown that the latter are computed in the same manner as in the theory of the classical Gauss quadrature formulae. In fact, all nodes and weights are again computed as eigenvalues and eigenvectors of a real symmetric tridiagonal matrix. Hence, we can adapt existing procedures for generating such quadrature formulae. Comparative results with various methods now in use are given. In the second part of this paper, new algorithms for spectral approximations of second-order elliptic problems are derived. The key to the efficiency of our algorithms is to find an appropriate spectral approximation by using the most accurate quadrature formula, which takes the boundary conditions into account in such a way that the resulting discrete system has a diagonal mass matrix. Hence, our algorithms can be used to introduce explicit resolution of time-dependent problems. This is the so-called lumped mass method. The performance of the approach is illustrated with several numerical examples in one and two space dimensions.
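The eigenvalue computation mentioned above is the classical Golub–Welsch construction. A minimal sketch for the plain Gauss–Legendre case (unit weight on [−1, 1], without the mixed boundary conditions treated in the paper) is:

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: nodes and weights of the n-point Gauss-Legendre
    rule from the eigen-decomposition of the Jacobi matrix (the real
    symmetric tridiagonal matrix of the three-term recurrence)."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)   # off-diagonal recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)            # eigenvalues are the quadrature nodes
    weights = 2.0 * V[0, :] ** 2            # mu_0 = integral of 1 over [-1, 1] = 2
    return nodes, weights
```

For rules with boundary conditions, the same machinery applies once the recurrence coefficients of the modified orthogonal polynomials are known, which is the adaptation the abstract refers to.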



16.
We examine certain questions related to the choice of scaling, shifting and weighting strategies for interior-point methods for linear programming. One theme is the desire to make trajectories to be followed by algorithms into straight lines if possible, to encourage fast convergence. While interior-point methods in general follow curves, this occurrence of straight lines seems appropriate to honor George Dantzig's contributions to linear programming, since his simplex method can be seen as following either a piecewise-linear path in n-space or a straight line in m-space (the simplex interpretation). Dedicated to Professor George B. Dantzig on the occasion of his eightieth birthday. Research supported in part by NSF, AFOSR, and ONR through NSF Grant DMS-8920550.

17.
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε) and a condition number. If the mapping is linear then the results in this paper coincide with the ones in Jansen et al., SIAM Journal on Optimization 7 (1997) 126–140. Research supported in part by Grant-in-Aids for Encouragement of Young Scientists (06750066) from the Ministry of Education, Science and Culture, Japan. Research supported by Dutch Organization for Scientific Research (NWO), grant 611-304-028.

18.
Truncated-Newton methods are a class of optimization methods suitable for large scale problems. At each iteration, a search direction is obtained by approximately solving the Newton equations using an iterative method. In this way, matrix costs and second-derivative calculations are avoided, hence removing the major drawbacks of Newton's method. In this form, the algorithms are well-suited for vectorization. Further improvements in performance are sought by using block iterative methods for computing the search direction. In particular, conjugate-gradient-type methods are considered. Computational experience on a hypercube computer is reported, indicating that on some problems the improvements in performance can be better than that attributable to parallelism alone. Partially supported by Air Force Office of Scientific Research grant AFOSR-85-0222. Partially supported by National Science Foundation grant ECS-8709795, co-funded by the U.S. Air Force Office of Scientific Research.

19.
We introduce a new class of methods for the Cauchy problem for ordinary differential equations (ODEs). We begin by converting the original ODE into the corresponding Picard equation and apply a deferred correction procedure in the integral formulation, driven by either the explicit or the implicit Euler marching scheme. The approach results in algorithms of essentially arbitrary order accuracy for both non-stiff and stiff problems; their performance is illustrated with several numerical examples. For non-stiff problems, the stability behavior of the obtained explicit schemes is very satisfactory, and algorithms with orders between 8 and 20 should be competitive with the best existing ones. In our preliminary experiments with stiff problems, a simple adaptive implementation of the method demonstrates performance comparable to that of a state-of-the-art extrapolation code (at least, at moderate to high precision). Deferred correction methods based on the Picard equation appear to be promising candidates for further investigation.
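A minimal sketch of the deferred-correction idea (explicit Euler driver, trapezoid-rule quadrature for the Picard integral; quadrature choice and node layout here are simplifications of the paper's scheme): one correction sweep raises the provisional Euler solution from first to second order.

```python
import numpy as np

def euler_with_one_correction(f, y0, t0, t1, n):
    """Explicit Euler for y' = f(t, y) on [t0, t1] with n steps,
    followed by one deferred-correction sweep driven by the
    Picard integral form  y(t) = y0 + int_{t0}^{t} f(s, y(s)) ds."""
    t = np.linspace(t0, t1, n + 1)
    h = (t1 - t0) / n
    # provisional solution by explicit Euler
    y = np.empty(n + 1); y[0] = y0
    for i in range(n):
        y[i + 1] = y[i] + h * f(t[i], y[i])
    # residual of the Picard equation, with the integral evaluated
    # by the (second-order) trapezoid rule
    F = np.array([f(ti, yi) for ti, yi in zip(t, y)])
    integral = np.concatenate(([0.0], np.cumsum(h * (F[:-1] + F[1:]) / 2)))
    res = y0 + integral - y
    # march the error equation with Euler, driven by the residual increments
    e = np.empty(n + 1); e[0] = 0.0
    for i in range(n):
        e[i + 1] = (e[i] + h * (f(t[i], y[i] + e[i]) - f(t[i], y[i]))
                    + (res[i + 1] - res[i]))
    return y + e
```

Repeating the sweep with higher-order quadrature raises the order further, which is how the essentially arbitrary-order schemes described above are built.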

20.
The filled function method is an effective approach to find the global minimizer. Two of the recently proposed filled functions are H(X) and L2(X). Although their numerical behavior is acceptable, they are not defined everywhere. This paper proposes a class of augmented filled functions with improved analyticity. Issues covered in the presented work include: theoretical properties, convergence analysis, geometric interpretation, algorithms, and numerical experiments. The overall performance of the new approach is comparable to the recently proposed ones.

