Similar documents
A total of 20 similar documents were found.
1.
Multi-step quasi-Newton methods for optimization
Quasi-Newton methods update, at each iteration, the existing Hessian approximation (or its inverse) by means of data deriving from the step just completed. We show how “multi-step” methods (employing, in addition, data from previous iterations) may be constructed by means of interpolating polynomials, leading to a generalization of the “secant” (or “quasi-Newton”) equation. The issue of positive-definiteness in the Hessian approximation is addressed and shown to depend on a generalized version of the condition which is required to hold in the original “single-step” methods. The results of extensive numerical experimentation indicate strongly that computational advantages can accrue from such an approach (by comparison with “single-step” methods), particularly as the dimension of the problem increases.
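For orientation, here is a minimal sketch of the single-step baseline that these methods generalize: the BFGS update of an inverse-Hessian approximation built from the secant (quasi-Newton) equation, using only the data of the step just completed. The multi-step construction via interpolating polynomials is not reproduced; the function name and toy data are illustrative assumptions only.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Single-step BFGS update of an inverse-Hessian approximation H.

    The result satisfies the secant (quasi-Newton) equation H_new @ y == s,
    where s = x_{k+1} - x_k and y = grad_{k+1} - grad_k come from the step just
    completed.  The multi-step methods above generalize s and y using data
    interpolated from several previous iterations.
    """
    sy = float(s @ y)
    if sy <= 0.0:                      # curvature condition: skip to preserve positive-definiteness
        return H
    rho = 1.0 / sy
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# illustrative data: the secant equation holds after the update
s, y = np.array([0.1, -0.2]), np.array([0.3, -0.5])
H_new = bfgs_inverse_update(np.eye(2), s, y)
print(np.allclose(H_new @ y, s))       # True
```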

2.
Solutions to Laplace's equation are required for a wide range of problems. Arguably, the most difficult class of problems involves a “free” boundary, where the location of one (or more) of the boundaries is initially unknown. Analytical solutions for these problems have traditionally been restricted to regular boundary geometries. Recently, however, the classical series method has been modified to cater for arbitrary boundary geometries, using least squares methods. For free boundary problems, solutions can be obtained by solving a sequence of known boundary problems—at each iteration, the series coefficients can be estimated. Efficient calculation of the series coefficients becomes very important, particularly when the number of iterations is relatively high. In this paper, three methods for estimating the series coefficients are described in the context of a free boundary problem. The computational cost of each method is analysed and compared, and the most appropriate method for this class of problem is indicated.
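As a concrete, simplified illustration of estimating series coefficients by least squares, the sketch below fits a truncated harmonic series to Dirichlet data sampled on an arbitrary 2D boundary. The boundary shape, the data, and the use of NumPy's generic least-squares solver are assumptions for illustration; they are not the three methods compared in the paper.

```python
import numpy as np

def harmonic_design_matrix(pts, n_terms):
    """Columns 1, r^n cos(n*theta), r^n sin(n*theta) evaluated at the boundary points."""
    x, y = pts[:, 0], pts[:, 1]
    r, th = np.hypot(x, y), np.arctan2(y, x)
    cols = [np.ones_like(r)]
    for m in range(1, n_terms + 1):
        cols.append(r**m * np.cos(m * th))
        cols.append(r**m * np.sin(m * th))
    return np.column_stack(cols)

# sample an arbitrary (here: perturbed circular) boundary and Dirichlet data on it
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
rho = 1.0 + 0.2 * np.cos(3 * t)                   # an irregular boundary shape
pts = np.column_stack([rho * np.cos(t), rho * np.sin(t)])
g = pts[:, 0]**2 - pts[:, 1]**2                    # boundary values of the harmonic function x^2 - y^2

A = harmonic_design_matrix(pts, n_terms=8)
coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)     # least-squares estimate of the series coefficients
print(np.round(coeffs[:5], 3))                     # the r^2 cos(2*theta) coefficient (index 3) is ~1
```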

3.
A convex polytope P can be specified in two ways: as the convex hull of the vertex set V of P, or as the intersection of the set H of its facet-inducing halfspaces. The vertex enumeration problem is to compute V from H. The facet enumeration problem is to compute H from V. These two problems are essentially equivalent under point/hyperplane duality. They are among the central computational problems in the theory of polytopes. It is open whether they can be solved in time polynomial in |H| + |V| and the dimension. In this paper we consider the main known classes of algorithms for solving these problems. We argue that they all have at least one of two weaknesses: inability to deal well with “degeneracies”, or inability to control the sizes of intermediate results. We then introduce families of polytopes that exercise those weaknesses. Roughly speaking, fat-lattice or intricate polytopes cause algorithms with bad degeneracy handling to perform badly; dwarfed polytopes cause algorithms with bad intermediate size control to perform badly. We also present computational experience with trying to solve these problems on these hard polytopes, using various implementations of the main algorithms.
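For readers who want to experiment, here is a small sketch of the two representations using an off-the-shelf Qhull wrapper; this is not one of the algorithm classes analysed in the paper, and the tetrahedron is chosen so that no facet-merging subtleties arise.

```python
import numpy as np
from scipy.spatial import ConvexHull

# V-representation: vertices of a tetrahedron (a simplicial polytope)
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

# facet enumeration (V -> H) via Qhull: each row [a1, a2, a3, b] encodes the halfspace a.x + b <= 0
hull = ConvexHull(V)
print(hull.equations.shape)   # (4, 4): four facet-inducing halfspaces

# vertex enumeration (H -> V) is the dual task: under point/hyperplane duality it reduces to
# facet enumeration of the polar polytope, which is how many implementations handle it
```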

4.
Our main interest in this paper is to translate from “natural language” into “system theoretical language”. This is of course important since a statement in system theory can be analyzed mathematically or computationally. We assume that, in order to obtain a good translation, “system theoretical language” should have great power of expression. Thus we first propose a new frame of system theory, which includes the concepts of “measurement” as well as “state equation”. We then show that a certain statement in usual conversation, i.e., fuzzy modus ponens with the word “very”, can be translated into a statement in the new frame of system theory. Though our result is merely one example of the translation from “natural language” into “system theoretical language”, we believe that our method is fairly general.
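A toy sketch of the linguistic ingredient involved, assuming Zadeh's usual convention that the hedge “very” concentrates a membership function by squaring it; the fuzzy set, the axis, and the numbers are invented for illustration, and the paper's system-theoretic frame (state equation plus measurement) is not reproduced.

```python
import numpy as np

def very(mu):
    """Zadeh's concentration hedge: the membership of 'very A' is mu_A(x)**2."""
    return mu ** 2

# fuzzy set "tall" over a height axis (membership ramps from 160 cm to 190 cm)
height = np.linspace(150, 200, 6)
tall = np.clip((height - 160) / 30.0, 0.0, 1.0)

print(np.round(tall, 2))        # [0.   0.   0.33 0.67 1.   1.  ]
print(np.round(very(tall), 2))  # [0.   0.   0.11 0.44 1.   1.  ]  -- "very tall" is more restrictive
```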

5.
Two numerical methods for solving systems of equations have recently been proposed: a method based on monomial approximations (the “monomial method”) and a technique based on S-system methodology (the “S-system method”). The two methods have been shown to be fundamentally identical, that is, they are both equivalent to Newton's method operating on a transformed version of the system of equations. Yet, when applied specifically to algebraic systems of equations, they have significant computational differences that may impact the relative computational efficiency of the two methods. These computational differences are described. A combinatorial strategy for locating many, and sometimes all, solutions to a system of nonlinear equations has also been suggested previously. This paper further investigates the effectiveness of this strategy when applied to either of the two methods.
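A hedged sketch in the spirit of the transformation alluded to above: Newton's method applied after a logarithmic change of variables x = exp(z), which keeps iterates in the positive orthant. The exact transformation underlying the monomial/S-system equivalence, and the computational differences discussed in the paper, are not reproduced; the toy system is invented.

```python
import numpy as np

def newton_log_variables(f, jac, x0, iters=20):
    """Newton's method applied after the change of variables x = exp(z).

    With g(z) = f(exp(z)) and dg/dz = J(x) * diag(x), every iterate stays in the
    positive orthant -- a convenient setting for the algebraic systems targeted
    by monomial/S-system style methods.
    """
    z = np.log(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = np.exp(z)
        Jz = jac(x) * x                 # chain rule: column j scaled by x_j
        z = z - np.linalg.solve(Jz, f(x))
    return np.exp(z)

# toy algebraic system: x*y = 2, x + y = 3 (positive solutions (1, 2) and (2, 1))
f = lambda v: np.array([v[0] * v[1] - 2.0, v[0] + v[1] - 3.0])
jac = lambda v: np.array([[v[1], v[0]], [1.0, 1.0]])
print(newton_log_variables(f, jac, x0=[0.5, 3.0]))   # converges to the positive root near (1, 2)
```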

6.
For the parallel integration of nonstiff initial value problems (IVPs), three main approaches can be distinguished: approaches based on “parallelism across the problem”, on “parallelism across the method” and on “parallelism across the steps”. The first type of parallelism does not require special integration methods and can be exploited within any available IVP solver. The method-parallelism approach has received much attention, particularly within the class of explicit Runge-Kutta methods originating from fixed point iteration of implicit Runge-Kutta methods of Gaussian type. The construction and implementation on a parallel machine of such methods is extremely simple. Since the computational work per processor is modest with respect to the number of data to be exchanged between the various processors, this type of parallelism is most suitable for shared memory systems. The required number of processors is roughly half the order of the generating Runge-Kutta method, and the speed-up with respect to a good sequential IVP solver is about a factor of 2. The third type of parallelism (step-parallelism) can be achieved in any IVP solver based on predictor-corrector iteration and requires the processors to communicate after each full iteration. If the iterations have sufficient computational volume, then the step-parallel approach may be suitable for implementation on distributed memory systems. Most step-parallel methods proposed so far employ a large number of processors, but lack robustness due to poor convergence behaviour in the iteration process. Hence, the effective speed-up is rather poor. The dynamic step-parallel iteration process proposed in the present paper is less massively parallel, but turns out to be sufficiently robust to achieve speed-up factors up to 15.
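A serial sketch of the method-parallel idea described above: fixed-point iteration of the two-stage Gauss (order 4) implicit Runge-Kutta method, in which the stage derivatives within each sweep are mutually independent and could be evaluated on separate processors. The step size, iteration count, and test problem are illustrative, and no actual threading is shown.

```python
import numpy as np

# two-stage Gauss (order 4) Runge-Kutta coefficients
SQ3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - SQ3 / 6.0],
              [0.25 + SQ3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - SQ3 / 6.0, 0.5 + SQ3 / 6.0])

def pirk_step(f, t, y, h, sweeps=4):
    """One step of fixed-point ("parallel iterated") iteration of the implicit Gauss method.

    In each sweep the stage derivatives f(t + c_i h, Y_i) are independent of one another,
    so on s processors they could be evaluated concurrently -- the "parallelism across
    the method" described above.  Serial sketch only.
    """
    Y = np.tile(y, (len(c), 1))                    # trivial (last-solution) predictor
    for _ in range(sweeps):
        F = np.array([f(t + c[i] * h, Y[i]) for i in range(len(c))])  # parallelizable loop
        Y = y + h * (A @ F)
    F = np.array([f(t + c[i] * h, Y[i]) for i in range(len(c))])
    return y + h * (b @ F)

# nonstiff test problem y' = -y, y(0) = 1
f = lambda t, y: -y
t, y, h = 0.0, np.array([1.0]), 0.1
for _ in range(10):
    y = pirk_step(f, t, y, h)
    t += h
print(y[0], np.exp(-1.0))   # close to exp(-1)
```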

7.
8.
It is shown that in the numerical solution of the Cauchy problem for systems of second-order ordinary differential equations, solved for the highest-order derivative, it is possible to construct simple and economical implicit computational algorithms for step-by-step integration without using laborious iterative procedures of the Newton-Raphson type. The initial problem must first be transformed to a new argument — the length of its integral curve. Such a transformation is carried out using an equation relating the initial parameter of the problem to the length of the integral curve. The linear acceleration method is used as an example to demonstrate the procedure for constructing an implicit algorithm using simple iterations for the numerical solution of the transformed Cauchy problem. Propositions concerning the computational properties of the iterative process are formulated and proved. Explicit estimates are given for an integration stepsize that guarantees the convergence of the simple iterations. The efficacy of the proposed procedure is demonstrated by the numerical solution of three problems, and a comparative analysis is carried out of the numerical solutions obtained with and without parametrization of the initial problems. As a qualitative test, the “Pleiades” problem of celestial mechanics is considered. The second example is devoted to modelling the non-linear dynamics of an elastic flexible rod, fixed at one end as a cantilever and coiled in its initial (static) state into a ring by a bending moment. The third example demonstrates the numerical solution of the problem of the “unfolding” of a mechanical system consisting of three flexible rods with a given control input.
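A simplified sketch of the parametrization idea, written for a first-order system rather than the second-order setting of the paper: taking the curve length s as the new argument gives a transformed right-hand side of unit norm, which is what makes simple (non-Newton) iterations attractive. The transformation shown and the toy blow-up problem are illustrative assumptions.

```python
import numpy as np

def arclength_transform(f):
    """Recast dy/dt = f(t, y) with the curve length s as the new argument.

    With ds^2 = dt^2 + ||dy||^2 the transformed right-hand side has unit norm:
    dt/ds = 1/sqrt(1 + ||f||^2),  dy/ds = f/sqrt(1 + ||f||^2).
    Sketch only: the paper works with second-order systems and a linear-acceleration scheme.
    """
    def g(s, u):                      # u = (t, y_1, ..., y_n), s = arc length
        t, y = u[0], u[1:]
        fy = np.atleast_1d(f(t, y))
        denom = np.sqrt(1.0 + fy @ fy)
        return np.concatenate(([1.0 / denom], fy / denom))
    return g

# toy: y' = y^2 blows up at t = 1 for y(0) = 1, but in the arc-length argument the RHS stays bounded
g = arclength_transform(lambda t, y: y**2)
u, s, ds = np.array([0.0, 1.0]), 0.0, 0.01
for _ in range(200):                  # crude explicit Euler in s, just to exercise the transform
    u = u + ds * g(s, u)
    s += ds
print(u)                              # [t, y] after 2 units of curve length: t stays below 1, y stays finite
```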

9.
Two natural questions are answered in the negative:
• “If a space has the property that small nulhomotopic loops bound small nulhomotopies, then are loops which are limits of nulhomotopic loops themselves nulhomotopic?”

• “Can adding arcs to a space cause an essential curve to become nulhomotopic?”

The answer to the first question clarifies the relationship between the notions of a space being homotopically Hausdorff and π1-shape injective.

Keywords: Peano continuum; Path space; Shape injective; Homotopically Hausdorff; 1-ULC


10.
Realization of PID controls by fuzzy control methods
This paper shows that PID controllers can be realized by the fuzzy control methods known as the “product-sum-gravity method” and the “simplified fuzzy reasoning method”. PID controllers, however, cannot be constructed by the min-max-gravity method, known as Mamdani's fuzzy reasoning method. Furthermore, extrapolative reasoning can be executed by the product-sum-gravity method and the simplified fuzzy reasoning method by extending the membership functions of the antecedent parts of the fuzzy rules.
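A minimal sketch of the mechanism behind this result: with triangular antecedent membership functions forming a partition of unity and singleton consequents placed on a straight line, simplified fuzzy reasoning (equivalently, product-sum-gravity with singleton consequents) reproduces a linear control law exactly — here a proportional term; feeding the error, its integral, and its derivative through such rule bases then yields PID behaviour. The membership grid and gain below are invented.

```python
import numpy as np

def triangular_partition(x, centers):
    """Triangular membership functions peaked at `centers` that sum to 1 on the grid."""
    mu = np.zeros(len(centers))
    for i, c in enumerate(centers):
        left = centers[i - 1] if i > 0 else c - 1.0
        right = centers[i + 1] if i < len(centers) - 1 else c + 1.0
        if left < x <= c:
            mu[i] = (x - left) / (c - left)
        elif c < x < right:
            mu[i] = (right - x) / (right - c)
    return mu

def simplified_fuzzy_reasoning(x, centers, singletons):
    """Simplified fuzzy reasoning / product-sum-gravity with singleton consequents."""
    mu = triangular_partition(x, centers)
    return float(mu @ singletons / mu.sum())

# consequent singletons on a straight line: the inference reproduces a proportional law exactly
Kp = 2.5
centers = np.linspace(-1.0, 1.0, 5)
singletons = Kp * centers
for e in (-0.8, -0.3, 0.1, 0.6):
    print(e, simplified_fuzzy_reasoning(e, centers, singletons), Kp * e)   # last two columns agree
```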

11.
We prove that, in all dimensions d ≥ 4, every simple open polygonal chain and every tree may be straightened, and every simple closed polygonal chain may be convexified. These reconfigurations can be achieved by algorithms that use polynomial time in the number of vertices, and result in a polynomial number of “moves”. These results contrast with those known for d=2, where trees can “lock”, and for d=3, where open and closed chains can lock.

12.
We consider a “visual” metric between multivariate densities that is defined in terms of the Hausdorff distance between their hypographs. This distance was first proposed and analyzed by Beer (1982) in the non-probabilistic context of approximation theory. We suggest the use of this distance in density estimation as a weaker, more flexible alternative to the supremum metric: it also has a direct visual interpretation but does not require very restrictive continuity assumptions. A further Hausdorff-based distance is also proposed and analyzed. We obtain consistency results, and a convergence rate, for the usual kernel density estimators with respect to these metrics, provided that the underlying density is not too discontinuous. These results can be seen as a partial extension to the “qualitative smoothing” setup (see Marron and Tsybakov, 1995) of the classical analogous theorems with respect to the supremum metric.
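A rough numerical sketch of the metric in the univariate case: discretize each hypograph as a point cloud and take the Hausdorff distance between the clouds (scipy's directed_hausdorff is used as a convenience; the grid, densities, and resolution are illustrative assumptions). The example pairs two shifted uniform densities, for which the supremum metric is 1 while the hypograph distance is small.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hypograph_points(f, xs, n_levels=40):
    """Discretize the hypograph {(x, y): 0 <= y <= f(x)} of a density as a point cloud."""
    pts = []
    for x in xs:
        for y in np.linspace(0.0, f(x), n_levels):
            pts.append((x, y))
    return np.array(pts)

def hausdorff_hypograph_distance(f, g, xs):
    """Grid approximation of the Hausdorff distance between the hypographs of f and g."""
    A, B = hypograph_points(f, xs), hypograph_points(g, xs)
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# uniform density on [0, 1] vs. a slightly shifted uniform density
xs = np.linspace(-0.5, 1.5, 101)
unif = lambda a: (lambda x: float((a <= x) & (x <= a + 1.0)))
print(hausdorff_hypograph_distance(unif(0.0), unif(0.05), xs))
# small (about 0.05, up to grid resolution); the supremum distance between these densities is 1
```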

13.
Minimum cost multicommodity flows are a useful model for bandwidth allocation problems. These problems are arising more frequently as regional service providers wish to carry their traffic over some national core network. We describe a simple and practical combinatorial algorithm to find a minimum cost multicommodity flow in a ring network. Apart from 1- and 2-commodity flow problems, this seems to be the only such “combinatorial augmentation algorithm” for a version of exact mincost multicommodity flow. The solution it produces is always half-integral, and by increasing the capacity of each link by one, we may also find an integral routing of no greater cost. The “pivots” in the algorithm are determined by choosing an ε > 0, together with increasing and decreasing sets of variables, and adjusting these variables up or down accordingly by ε. In this sense, it generalizes the cycle cancelling algorithms for (single source) mincost flow. Although the algorithm is easily stated, the proofs of its correctness and polynomially bounded running time are more complex.
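Not the combinatorial augmentation algorithm itself, but a sketch of the underlying model it solves: on a ring each commodity splits its demand between the clockwise and counterclockwise route, and a minimum cost multicommodity flow is the cheapest split respecting link capacities. The ring size, costs, capacities, and demands below are invented, and a generic LP solver stands in for the algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical 4-node ring; link e connects node e to node (e + 1) % 4
n = 4
cost = np.array([1.0, 3.0, 1.0, 1.0])   # per-unit cost of each link
cap = 3.0                                # capacity of each link

def clockwise_links(s, t):
    """Indices of the links on the clockwise route s -> s+1 -> ... -> t."""
    out, v = [], s
    while v != t:
        out.append(v)
        v = (v + 1) % n
    return out

# commodities: (source, sink, demand); the counterclockwise route uses the complementary links
commodities = [(0, 2, 2.0), (1, 3, 2.0), (2, 0, 1.0)]
cw = [clockwise_links(s, t) for s, t, _ in commodities]
ccw = [[e for e in range(n) if e not in r] for r in cw]

# decision variable x[k] = amount of commodity k routed clockwise (the rest goes counterclockwise)
c = np.array([cost[r].sum() - cost[q].sum() for r, q in zip(cw, ccw)])
A_ub = np.array([[(e in cw[k]) - (e in ccw[k]) for k in range(len(commodities))]
                 for e in range(n)], dtype=float)
b_ub = np.array([cap - sum(d for (_, _, d), q in zip(commodities, ccw) if e in q)
                 for e in range(n)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, d) for _, _, d in commodities])
print(res.x)   # clockwise share of each demand in an optimal (possibly fractional) routing
```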

14.
A differential pursuit-evasion game is considered with three pursuers and one evader. It is assumed that all objects (players) have simple motions and that the game takes place in a plane. The control vectors satisfy geometrical constraints and the evader has a superiority in control resources. The game time is fixed. The value functional is the distance between the evader and the nearest pursuer at the end of the game. The problem of determining the value function of the game for any possible position is solved.

Four possible cases of the relative arrangement of the players at an arbitrary time are studied: “one-after-one”, “two-after-one”, “three-after-one-in-the-middle” and “three-after-one”. For each of these relative arrangements a guaranteed result function is constructed. In the first three cases the function is expressed analytically. In the fourth case a piecewise-programmed construction with one switchover is presented, on the basis of which the value of the function is determined numerically. The guaranteed result function is shown to be identical with the game value function. When the initial pursuer positions are fixed in an arbitrary manner there are four game domains, depending on the relative positions of the players. The boundary between the “three-after-one-in-the-middle” domain and the “three-after-one” domain is found numerically, while the remaining boundaries are interior branches of conchoids of Nicomedes, straight lines and circles. Programs are written that construct the singular manifolds and the level lines of the value function.


15.
We show that an algebra with a non-nilpotent Lie group of automorphisms or “symmetries” (e.g., smooth functions on a manifold with such a group of diffeomorphisms) may generally be deformed (in the function case, “quantized”) in such a way that only a proper subgroup of the original group acts. This symmetry breaking is a consequence of the existence of certain “universal deformation formulas” which are elements, independent of the original algebra, in the tensor algebra of the enveloping algebra of the Lie algebra of the group.

16.
For a permutation group given by a set of generators, the problem of finding “special” group members is NP-hard in many cases; for example, this is true for the problem of finding a permutation with a minimum number of fixed points or a permutation with a minimal Hamming distance from a given permutation. Many of these problems can be modeled as linear optimization problems over permutation groups. We develop a polyhedral approach to this general problem and derive an exact and practically fast algorithm based on the branch-and-cut technique.
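To make the “special member” problem concrete, here is a brute-force baseline (explicit closure of the generating set plus exhaustive search) for the minimum-fixed-points question on a tiny invented group; the polyhedral branch-and-cut approach of the paper is precisely what replaces this kind of enumeration when the group is large.

```python
from itertools import product

def compose(p, q):
    """Composition (apply q first, then p) of permutations given as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def generate_group(generators):
    """Breadth-first closure of a generating set -- only sensible for small groups."""
    identity = tuple(range(len(generators[0])))
    elements, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g, h in product(generators, frontier):
            e = compose(g, h)
            if e not in elements:
                elements.add(e)
                nxt.append(e)
        frontier = nxt
    return elements

# toy generator-presented group on 6 points: a 6-cycle and a reflection (dihedral, order 12)
gens = [(1, 2, 3, 4, 5, 0), (0, 5, 4, 3, 2, 1)]
G = generate_group(gens)

# "special member" by brute force: an element with the minimum number of fixed points
fixed_points = lambda p: sum(i == p[i] for i in range(len(p)))
best = min(G, key=fixed_points)
print(len(G), best, fixed_points(best))   # 12, a fixed-point-free element, 0
```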

17.
What Monte Carlo models can do and cannot do efficiently?
The question “what Monte Carlo models can do and cannot do efficiently” is discussed for some functional spaces that define the regularity of the input data. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder type conditions, as well as Korobov-like spaces.

A theoretical performance analysis of some algorithms with an unimprovable rate of convergence is given. Estimates of the computational complexity of two classes of algorithms, deterministic and randomized, are presented for both problems: numerical multidimensional integration and the calculation of linear functionals of the solution of a class of integral equations.
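As a point of reference for these complexity estimates, here is a minimal sketch of plain Monte Carlo integration over the unit cube, whose stochastic error decays like n^{-1/2} regardless of dimension; the integrand and dimension are invented, and none of the optimal-rate algorithms analysed in the paper are implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, dim, n):
    """Plain Monte Carlo estimate of the integral of f over the unit cube [0, 1]^dim."""
    x = rng.random((n, dim))
    return f(x).mean()

# smooth test integrand on [0, 1]^10 with known integral 1: the product of 2*x_j over all coordinates
f = lambda x: np.prod(2.0 * x, axis=1)
for n in (10**3, 10**4, 10**5, 10**6):
    err = abs(mc_integrate(f, dim=10, n=n) - 1.0)
    print(n, err)   # the error shrinks roughly like n**-0.5, independently of the dimension
```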


18.
We study the computational problem “find the value of the quantified formula obtained by quantifying the variables in a sum of terms.” The “sum” can be based on any commutative monoid, the “quantifiers” need only satisfy two simple conditions, and the variables can have any finite domain. This problem is a generalization of the problem “given a sum-of-products of terms, find the value of the sum” studied in [R.E. Stearns and H.B. Hunt III, SIAM J. Comput. 25 (1996) 448–476]. A data structure called a “structure tree” is defined which displays information about “subproblems” that can be solved independently during the process of evaluating the formula. Some formulas have “good” structure trees which enable certain generic algorithms to evaluate the formulas in significantly less time than by brute force evaluation. By “generic algorithm,” we mean an algorithm constructed from uninterpreted function symbols, quantifier symbols, and monoid operations. The algebraic nature of the model facilitates a formal treatment of “local reductions” based on the “local replacement” of terms. Such local reductions “preserve formula structure” in the sense that structure trees with nice properties transform into structure trees with similar properties. These local reductions can also be used to transform hierarchically specified problems with useful structure into hierarchically specified problems having similar structure.
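A toy sketch of the independence that a structure tree exposes, using the ordinary sum quantifier over the integers under addition: once the shared variable is fixed, the remaining variables split into subproblems that can be summed separately and reweighted. The functions and domains are invented; the paper's generic algorithms handle arbitrary commutative monoids and quantifiers.

```python
from itertools import product

# evaluate  SUM over (x, y, z) in Dx x Dy x Dz  of  f(x, y) + g(y, z)
Dx, Dy, Dz = range(4), range(5), range(6)
f = lambda x, y: x * y + 1
g = lambda y, z: y - z

# brute force: iterate over every assignment of the quantified variables
brute = sum(f(x, y) + g(y, z) for x, y, z in product(Dx, Dy, Dz))

# "structure tree" style evaluation: once y is fixed, the x-part and the z-part are
# independent subproblems, so each inner sum is computed once and then reweighted
structured = sum(len(Dz) * sum(f(x, y) for x in Dx) +
                 len(Dx) * sum(g(y, z) for z in Dz)
                 for y in Dy)

print(brute, structured)   # identical values; the second needs only O(|Dy|(|Dx|+|Dz|)) term evaluations
```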

19.
The irradiation of solids by pulsed relativistic electron beams of nanosecond duration (and also by powerful optical laser beams) led to the discovery of a new type of fracture /1–14/, entirely different from the viscous or brittle fracture produced by mechanical loads /15/. A theory is proposed, based on the assumption that clusters of electrons form in a solid subjected to such irradiation and act as “knives” or “wedges” cutting the solid. The basic model problems of this theory are formulated.

20.
We study two systems which lead to a lattice when an integration path is specified in “aesthetic field theory”. One of these cases involves nonsoliton-type particles (the magnitudes of maxima and minima oscillate in time). The other system is made up of soliton-type particles. The two systems are intrinsically three-dimensional; we speak of the third dimension as “time”. In one of our solutions, the particles move on straight-line trajectories, insofar as our numerical work indicates. In the other solution, the soliton-type particles undergo what appears to be simple harmonic motion in both the x- and y-directions (loop motion). We then study these two systems using the new approach to integrability, which involves a superposition principle and is characterized by a unique change function at each point. We still find multiple maxima and minima. The systems are not as symmetric as the lattice. The soliton characteristic is preserved by the new method. We also investigated the motion of lattice particles. We found evidence of maxima (minima) regions coalescing, so that the location of the maxima (minima) became difficult to follow. The concept of the location of particles may not even have a well-defined meaning here. We find examples of soliton particles appearing and disappearing. We conclude that the manner of integration in a theory without integrability can transform a system with well-defined trajectories into a system where particles can no longer be followed in time.
