Similar Documents
20 similar documents found (search time: 31 ms)
1.
Recent advances in gradient algorithms for optimal control problems
This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered.
In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary. The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
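As a rough finite-dimensional sketch of the gradient-restoration idea (not the function-space algorithm of the paper), the two phases can be mimicked for a constrained minimization in R^n: a descent step projected onto the linearized constraint, followed by least-squares Newton restoration back to feasibility. The function names and the toy problem below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def grad_restore(f_grad, c, c_jac, x0, alpha=0.1, iters=200, tol=1e-8):
    """Finite-dimensional sketch of a gradient-restoration cycle:
    gradient phase = steepest descent projected onto the null space of
    the constraint Jacobian; restoration phase = minimum-norm Newton
    corrections until c(x) = 0 holds to tolerance."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g, J = f_grad(x), np.atleast_2d(c_jac(x))
        # Gradient phase: project the gradient onto the tangent space of c(x) = 0.
        P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
        x = x - alpha * P @ g
        # Restoration phase: iterate least-squares corrections back to feasibility.
        while np.linalg.norm(np.atleast_1d(c(x))) > tol:
            J = np.atleast_2d(c_jac(x))
            x = x - J.T @ np.linalg.solve(J @ J.T, np.atleast_1d(c(x)))
    return x

# Toy problem: minimize x1^2 + 2*x2^2 subject to x1 + x2 = 1 (optimum (2/3, 1/3)).
x_opt = grad_restore(lambda x: np.array([2 * x[0], 4 * x[1]]),
                     lambda x: x[0] + x[1] - 1.0,
                     lambda x: np.array([1.0, 1.0]),
                     x0=[0.0, 1.0])
```

Because the toy constraint is linear, each gradient step stays feasible and the restoration loop is essentially a no-op; for nonlinear constraints the restoration iterations do the repair work.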

2.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the case of a quadratic functional subject to linear constraints is considered, and a conjugate-gradient algorithm is derived. Nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the differential equations, the boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Next, the more general case of a nonquadratic functional subject to nonlinear constraints is considered. The algorithm derived for the linear-quadratic case is employed with one modification: a restoration phase is inserted between any two successive conjugate-gradient phases. In the restoration phase, variations Δx(t), Δu(t), Δπ are determined by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Thus, a sequential conjugate-gradient-restoration algorithm is constructed in such a way that the differential equations and the boundary conditions are satisfied at the end of each complete conjugate-gradient-restoration cycle. Several numerical examples illustrating the theory of this paper are given in Part 2 (see Ref. 1). These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor A. Miele for stimulating discussions. Formerly, Graduate Student in Aero-Astronautics, Department of Mechanical and Aerospace Engineering and Materials Science, Rice University, Houston, Texas.

3.
In this paper, the problem of minimizing a nonlinear function f(x) subject to a nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. A conjugate gradient-restoration algorithm similar to those developed by Miele et al. (Refs. 1 and 2) is employed. This particular algorithm consists of a sequence of conjugate gradient-restoration cycles. The conjugate gradient portion of each cycle is based upon a conjugate gradient algorithm that is derived for the special case of a quadratic function subject to linear constraints. This portion of the cycle involves a single step and is designed to decrease the value of the function while satisfying the constraints to first order. The restoration portion of each cycle involves one or more iterations and is designed to restore the norm of the constraint function to within a predetermined tolerance about zero. The conjugate gradient-restoration sequence is reinitialized with a simple gradient step every n − q or fewer cycles. At the beginning of each simple gradient step, a positive-definite preconditioning matrix is used to accelerate the convergence of the algorithm. The preconditioner chosen, H+, is the positive-definite reflection of the Hessian matrix H. The matrix H+ is defined herein to be a matrix whose eigenvectors are identical to those of the Hessian and whose eigenvalues are the moduli of the latter's eigenvalues. A singular-value decomposition is used to efficiently construct this matrix. The selection of the matrix H+ as the preconditioner is motivated by the fact that gradient algorithms exhibit excellent convergence characteristics on quadratic problems whose Hessians have small condition numbers. To this end, the transforming operator H+^(−1/2) produces a transformed Hessian with a condition number of one. A higher-order example, which has resulted from a new eigenstructure assignment formulation (Ref. 3), is used to illustrate the rapidity of convergence of the algorithm, along with two simpler examples.
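The positive-definite reflection H+ described above is easy to sketch numerically: keep the eigenvectors of the symmetric Hessian and replace each eigenvalue by its modulus. The sketch below uses a symmetric eigendecomposition rather than the SVD the paper uses for efficiency; the function name is an illustrative assumption.

```python
import numpy as np

def positive_definite_reflection(H):
    """H+ = V |Lambda| V^T: same eigenvectors as the symmetric matrix H,
    eigenvalues replaced by their absolute values, so H+ is positive
    definite whenever H is nonsingular."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.abs(w)) @ V.T

H = np.array([[2.0, 0.0], [0.0, -3.0]])   # indefinite Hessian
H_plus = positive_definite_reflection(H)
```

For this diagonal example the reflection simply flips the sign of the negative eigenvalue, giving diag(2, 3).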

4.
This paper considers the problem of extremizing a functional I which depends on the state x(t), the control u(t), and the parameter π. The state is an n-vector, the control is an m-vector, and the parameter is a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. A modified quasilinearization algorithm is developed; its main property is a descent property in the performance index R, the cumulative error in the constraints and the optimum conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of a scaling factor α (or stepsize) in the system of variations. The stepsize is determined by a one-dimensional search so as to ensure the decrease in the performance index R; this can be achieved through a bisection process starting from α = 1. Convergence is achieved when R becomes smaller than some preselected value. In order to start the algorithm, some nominal functions x(t), u(t), π and multipliers λ(t), μ must be chosen. In a real problem, the selection of x(t), u(t), π can be made on the basis of physical considerations. Concerning λ(t) and μ, no useful guidelines have been available thus far. In this paper, a method for selecting λ(t) and μ optimally is presented: the performance index R is minimized with respect to λ(t) and μ. Since the functional R is quadratically dependent on λ(t) and μ, the resulting variational problem is governed by Euler equations and boundary conditions which are linear. Two numerical examples are presented, and it is shown that, if the initial multipliers λ(t) and μ are chosen optimally, modified quasilinearization converges rapidly to the solution. On the other hand, if the initial multipliers are chosen arbitrarily, modified quasilinearization may or may not converge to the solution. From the examples, it is concluded that the beneficial effects associated with the optimal initial choice of the multipliers λ(t) and μ lie primarily in increasing the likelihood of convergence rather than accelerating convergence. However, this optimal choice does not guarantee convergence, since convergence depends on the functional being extremized, the differential constraints, the boundary conditions, and the nominal functions x(t), u(t), π chosen in order to start the algorithm. This research, supported by the National Science Foundation, Grant No. GP-18522, is based on Ref. 1. The authors are indebted to Mr. E. E. Cragg for analytical and computational assistance.
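The stepsize rule above (full step at α = 1, bisected until the residual actually decreases) can be illustrated on a scalar analogue: damped Newton iteration with bisection on the stepsize. This is a hedged sketch of the search strategy, not the paper's quasilinearization scheme; the root-finding example is an assumption.

```python
def damped_newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Take the full Newton step with alpha = 1 and bisect alpha until
    the residual R = f(x)^2 decreases, mirroring the one-dimensional
    bisection search on the performance index described above."""
    x = x0
    for _ in range(max_iter):
        R = f(x) ** 2
        if R < tol:
            break
        step = -f(x) / fprime(x)
        alpha = 1.0
        while f(x + alpha * step) ** 2 >= R:   # bisection on the stepsize
            alpha *= 0.5
        x += alpha * step
    return x

# Classic test equation: x^3 - 2x - 5 = 0, starting from x = 3.
root = damped_newton(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, x0=3.0)
```

For well-behaved starting points the full step is always accepted; the bisection only engages when the full Newton step overshoots.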

5.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. A sequential algorithm composed of the alternate succession of gradient phases and restoration phases is presented. This sequential algorithm is constructed in such a way that the differential equations and boundary conditions are satisfied at the end of each iteration, that is, at the end of a complete gradient-restoration phase; hence, the value of the functional at the end of one iteration is comparable with the value of the functional at the end of any other iteration. In the gradient phase, nominal functions x(t), u(t), π satisfying all the differential equations and boundary conditions are assumed. Variations Δx(t), Δu(t), Δπ, leading to varied functions x̃(t), ũ(t), π̃, are determined so that the value of the functional is decreased. These variations are obtained by minimizing the first-order change of the functional subject to the linearized differential equations, the linearized boundary conditions, and a quadratic constraint on the variations of the control and the parameter. Since the constraints are satisfied only to first order during the gradient phase, the functions x̃(t), ũ(t), π̃ may violate the differential equations and/or the boundary conditions. This being the case, a restoration phase is needed prior to starting the next gradient phase. In this restoration phase, the functions x̃(t), ũ(t), π̃ are assumed to be the nominal functions. Variations Δx̃(t), Δũ(t), Δπ̃, leading to varied functions x̂(t), û(t), π̂ consistent with all the differential equations and boundary conditions, are determined. These variations are obtained by requiring the least-square change of the control and the parameter subject to the linearized differential equations and the linearized boundary conditions. Of course, the restoration phase must be performed iteratively until the cumulative error in the differential equations and boundary conditions becomes smaller than some preselected value. If the gradient stepsize is α, an order-of-magnitude analysis shows that the gradient corrections are Δx = O(α), Δu = O(α), Δπ = O(α), while the restoration corrections are O(α²). Hence, for α sufficiently small, the restoration phase preserves the descent property of the gradient phase: the functional I decreases between any two successive iterations. Methods to determine the gradient stepsize in an optimal fashion are discussed. Examples are presented for both the fixed-final-time case and the free-final-time case. The numerical results show the rapid convergence characteristics of the sequential gradient-restoration algorithm. The portions of this paper dealing with the fixed-final-time case were presented by the senior author at the 2nd Hawaii International Conference on System Sciences, Honolulu, Hawaii, 1969. The portions of this paper dealing with the free-final-time case were presented by the senior author at the 20th International Astronautical Congress, Mar del Plata, Argentina, 1969. This research, supported by the NASA-Manned Spacecraft Center, Grant No. NGR-44-006-089, Supplement No. 1, is a condensation of the investigations presented in Refs. 1–5. The authors are indebted to Professor H. Y. Huang for helpful discussions.

6.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered, where f is a scalar, x an n-vector, and φ a q-vector, with q < n. Several conjugate gradient-restoration algorithms are analyzed: these algorithms are composed of the alternate succession of conjugate gradient phases and restoration phases. In the conjugate gradient phase, one tries to improve the value of the function while avoiding excessive constraint violation. In the restoration phase, one tries to reduce the constraint error, while avoiding excessive change in the value of the function. Concerning the conjugate gradient phase, two classes of algorithms are considered: for algorithms of Class I, the multiplier λ is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier λ is determined so that the constraint is satisfied to first order. Concerning the restoration phase, two topics are investigated: (a) restoration type, that is, complete restoration vs incomplete restoration, and (b) restoration frequency, that is, frequent restoration vs infrequent restoration. Depending on the combination of type and frequency of restoration, four algorithms are generated within Class I and within Class II, respectively: Algorithm (α) is characterized by complete and frequent restoration; Algorithm (β) is characterized by incomplete and frequent restoration; Algorithm (γ) is characterized by complete and infrequent restoration; and Algorithm (δ) is characterized by incomplete and infrequent restoration. If the function f(x) is quadratic and the constraint φ(x) is linear, all of the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that φ(x_s) = 0, and at most N* = 1 + (n − q) if the starting point x_s is such that φ(x_s) ≠ 0. In order to illustrate the theory, five numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to a nonquadratic function and a nonlinear constraint. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, Algorithm (II-δ), which is characterized by incomplete and infrequent restoration, exhibits superior convergence characteristics. It is of interest to compare Algorithm (II-δ) with Algorithm (I-α), which is the sequential conjugate gradient-restoration algorithm of Ref. 1 and is characterized by complete and frequent restoration. For the nonlinear-nonquadratic examples, Algorithm (II-δ) converges to the solution in a number of iterations which is about one-half to two-thirds that of Algorithm (I-α). This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67.

7.
In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered. Here, f is a scalar, x an n-vector, and φ a q-vector, with q < n. The use of the augmented penalty function is explored in connection with the conjugate gradient-restoration algorithm. The augmented penalty function W(x, λ, k) is defined to be the linear combination of the augmented function F(x, λ) and the constraint error P(x), where the q-vector λ is the Lagrange multiplier and the scalar k is the penalty constant. The conjugate gradient-restoration algorithm includes a conjugate-gradient phase involving n − q iterations and a restoration phase involving one iteration. In the conjugate-gradient phase, one tries to improve the value of the function, while avoiding excessive constraint violation. In the restoration phase, one reduces the constraint error, while avoiding excessive change in the value of the function. Concerning the conjugate-gradient phase, two classes of algorithms are considered: for algorithms of Class I, the Lagrange multiplier is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the Lagrange multiplier is determined so that the constraint is satisfied to first order. For each class, two versions are studied. In version (α), the penalty constant is held unchanged throughout the entire algorithm. In version (β), the penalty constant is updated at the beginning of each conjugate-gradient phase so as to achieve certain desirable properties. Concerning the restoration phase, the minimum distance algorithm is employed. Since the use of the augmented penalty function automatically prevents excessive constraint violation, single-step restoration is considered. If the function f(x) is quadratic and the constraint φ(x) is linear, all the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that φ(x_s) = 0, and at most N* = 1 + (n − q) if the starting point x_s is such that φ(x_s) ≠ 0. In order to illustrate the theory, seven numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to nonquadratic functions and nonlinear constraints. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, algorithms of Class II generally exhibit faster convergence than algorithms of Class I, and algorithms of version (β) generally exhibit faster convergence than algorithms of version (α). This research was supported by the National Science Foundation, Grant No. GP-27271.
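The augmented penalty function W = F + kP described above can be sketched for a scalar constraint: W(x, λ, k) = f(x) + λφ(x) + kφ(x)², and with the exact multiplier its unconstrained minimizer coincides with the constrained optimum. The toy problem and descent loop below are illustrative assumptions, not the paper's conjugate-gradient scheme.

```python
import numpy as np

def W(x, lam, k, f, phi):
    """Augmented penalty function: the augmented function F = f + lam*phi
    plus k times the constraint error P = phi^2 (scalar-constraint case)."""
    return f(x) + lam * phi(x) + k * phi(x) ** 2

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0; optimum (1/2, 1/2).
f = lambda x: x[0] ** 2 + x[1] ** 2
phi = lambda x: x[0] + x[1] - 1.0
lam, k = -1.0, 10.0          # lam = -1 is the exact multiplier for this problem

# Plain gradient descent on the unconstrained function W recovers the optimum.
x = np.zeros(2)
for _ in range(500):
    gradW = 2 * x + lam * np.ones(2) + 2 * k * phi(x) * np.ones(2)
    x -= 0.02 * gradW
```

With the exact multiplier the minimizer of W is the constrained optimum for any k > 0; in practice λ is only estimated, and the penalty term keeps the constraint violation small in the meantime.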

8.
This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, a state inequality constraint, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. Here, the state inequality constraint is handled in a direct manner. A predetermined number and sequence of subarcs is assumed and, for the time interval for which the trajectory of the system lies on the state boundary, the control is determined so that the state boundary is satisfied. The state boundary and the entrance conditions are assumed to be linear in x and π, and the sequential gradient-restoration algorithm is constructed in such a way that the state inequality constraint is satisfied at each iteration of the gradient phase and the restoration phase along all of the subarcs composing the trajectory. At first glance, the assumed linearity of the state boundary and the entrance conditions appears to be a limitation to the theory. Actually, this is not the case. The reason is that every constrained minimization problem can be brought to the present form through the introduction of additional state variables. To facilitate the numerical solution on digital computers, the actual time is replaced by the normalized time t, defined in such a way that each of the subarcs composing the extremal arc has a normalized time length Δt = 1. In this way, variable-time corner conditions and variable-time terminal conditions are transformed into fixed-time corner conditions and fixed-time terminal conditions. The actual times τ1, τ2, ... at which (i) the state boundary is entered, (ii) the state boundary is exited, and (iii) the terminal boundary is reached are regarded to be components of the parameter π being optimized. The numerical examples illustrating the theory demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper. This paper is based in part on a portion of the dissertation which the first author submitted in partial fulfillment of the requirements for the PhD Degree at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio. This research was supported in part by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185. The authors are indebted to Professor H. Y. Huang, Dr. R. R. Iyer, Dr. J. N. Damoulakis, Mr. A. Esterle, and Mr. J. R. Cloutier for helpful discussions as well as analytical and numerical assistance. This paper is a condensation of the investigations reported in Refs. 1–2.
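The time normalization described above can be sketched directly: the switching times become parameters, and each subarc is mapped onto a normalized interval of unit length, so variable-time corner conditions become fixed-time conditions. The function names and the mapping convention below are illustrative assumptions.

```python
import numpy as np

def normalize_subarcs(switch_times):
    """Treat the actual corner/terminal times tau_1 < tau_2 < ... as
    parameters, and map each subarc onto a normalized interval of
    length 1 (subarc i occupies normalized time [i, i+1])."""
    taus = np.concatenate(([0.0], np.asarray(switch_times, float)))
    durations = np.diff(taus)                  # subarc lengths, now parameters

    def actual_time(t):
        """Map normalized time t in [0, n_subarcs] back to actual time."""
        i = min(int(t), len(durations) - 1)    # subarc index
        return taus[i] + (t - i) * durations[i]

    return durations, actual_time

# Three subarcs with corners at actual times 1.5 and 2.0, terminal time 4.0.
durations, actual_time = normalize_subarcs([1.5, 2.0, 4.0])
```

Integration then always runs over fixed normalized intervals, while the optimizer adjusts the duration parameters.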

9.
This paper contains general transformation techniques useful to convert minimax problems of optimal control into the Mayer-Bolza problem of the calculus of variations [Problem (P)]. We consider two types of minimax problems: minimax problems of Type (Q), in which the minimax function depends on the state and does not depend on the control; and minimax problems of Type (R), in which the minimax function depends on both the state and the control. Both Problem (Q) and Problem (R) can be reduced to Problem (P). For Problem (Q), we exploit the analogy with a bounded-state problem in combination with a transformation of the Jacobson type. This requires the proper augmentation of the state vector x(t), the control vector u(t), and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. For Problem (R), we exploit the analogy with a bounded-control problem in combination with a transformation of the Valentine type. This requires the proper augmentation of the control vector u(t) and the parameter vector π, as well as the proper augmentation of the constraining relations. As a result of the transformation, the unknown minimax value of the performance index becomes a component of the parameter vector being optimized. In a subsequent paper (Part 2), the transformation techniques presented here are employed in conjunction with the sequential gradient-restoration algorithm for solving optimal control problems on a digital computer; both the single-subarc approach and the multiple-subarc approach are discussed. This research was supported by the National Science Foundation, Grant No. ENG-79-18667, and by Wright-Patterson Air Force Base, Contract No. F33615-80-C3000. This paper is a condensation of the investigations reported in Refs. 1–7. The authors are indebted to E. M. Coker and E. M. Sims for analytical and computational assistance.
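The core of the transformation above is that the unknown minimax value becomes an extra parameter: minimizing max_t q(x_t) is recast as minimizing a bound π subject to q(x_t) ≤ π along the trajectory. The brute-force discrete sketch below is a hedged illustration of that idea only; the function names and the toy two-sample system are assumptions.

```python
def minimax_as_parameter(controls, trajectory, q):
    """Recast min over u of max_t q(x_t) as min over (u, pi) of pi
    subject to q(x_t) <= pi for all t: for each candidate control,
    the smallest feasible bound pi is simply the trajectory maximum."""
    best_u, best_pi = None, float("inf")
    for u in controls:                          # brute force over a finite control set
        pi = max(q(x) for x in trajectory(u))   # smallest feasible pi for this u
        if pi < best_pi:
            best_u, best_pi = u, pi
    return best_u, best_pi

# Toy system: two state samples x = (1 - u, u); minimax function q = |x|.
best_u, best_pi = minimax_as_parameter(
    [0.0, 0.25, 0.5, 0.75, 1.0], lambda u: [1.0 - u, u], abs)
```

In the continuous-time formulation the bound π joins the parameter vector and the pointwise inequality becomes a bounded-state constraint, handled by the Jacobson or Valentine transformations named above.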

10.
This paper considers the numerical solution of two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include nondifferential constraints to be satisfied everywhere along the interval of integration. Algorithms are developed for both Problem P1 and Problem P2. The approach taken is a sequence of two-phase cycles, composed of a gradient phase and a restoration phase. The gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations and is designed to force constraint satisfaction to a predetermined accuracy, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The principal property of both algorithms is that they produce a sequence of feasible suboptimal solutions: the functions obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the values of the functional I corresponding to any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, while the stepsize of the restoration phase is obtained by a one-dimensional search on the constraint error P. The gradient stepsize and the restoration stepsize are chosen so that the restoration phase preserves the descent property of the gradient phase. Therefore, the value of the functional I at the end of any complete gradient-restoration cycle is smaller than the value of the same functional at the beginning of that cycle. The algorithms presented here differ from those of Refs. 1 and 2, in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with Refs. 1 and 2, the present algorithms are capable of handling general final conditions; therefore, they are suited for the solution of optimal control problems with general boundary conditions. Their importance lies in the fact that many optimal control problems involve initial conditions of the type considered here. Six numerical examples are presented in order to illustrate the performance of the algorithms associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of these algorithms. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-76-3075. Partial support for S. Gonzalez was provided by CONACYT, Consejo Nacional de Ciencia y Tecnologia, Mexico City, Mexico.

11.
Bounded terminal conditions of nonlinear optimization problems are converted to equality terminal conditions via Valentine's device. In so doing, additional unknown parameters are introduced into the problem. The transformed problems can still be easily solved using the sequential gradient-restoration algorithm (SGRA) via a simple augmentation of the unknown parameter vector π. Three example problems with bounded terminal conditions are solved to verify this technique. This research was supported in part by the National Aeronautics and Space Administration under NASA Grant No. NCC 2-106.
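Valentine's device itself is a one-line transformation: an inequality ψ(x) ≤ 0 becomes the equality ψ(x) + s² = 0, where the slack s joins the unknown parameter vector. A minimal sketch, with illustrative names and an assumed example bound:

```python
import numpy as np

def valentine_equality(psi):
    """Convert the bound psi(x) <= 0 into the equality psi(x) + s**2 = 0
    by introducing an unknown slack parameter s (Valentine's device)."""
    def equality(x, s):
        return psi(x) + s ** 2
    def slack_for(x):
        # The slack value that makes a feasible point satisfy the equality.
        return np.sqrt(max(-psi(x), 0.0))
    return equality, slack_for

# Example bound: x <= 3, i.e. psi(x) = x - 3 <= 0.
equality, slack_for = valentine_equality(lambda x: x - 3.0)
s = slack_for(1.0)            # feasible point x = 1
```

Since s² ≥ 0 for every real s, any (x, s) satisfying the equality automatically satisfies the original inequality, which is why the transformed problem needs no inequality handling.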

12.
A permutation π1 π2 ... πn is alternating if π1 < π2 > π3 < π4 .... We present a constant average-time algorithm for generating all alternating permutations in lexicographic order. Ranking and unranking algorithms are also derived. Research supported by the Natural Sciences and Engineering Research Council of Canada under grant A3379.
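For illustration only, a brute-force generator (not the constant average-time algorithm of the paper) makes the definition concrete; filtering `itertools.permutations` of a sorted input automatically yields lexicographic order.

```python
from itertools import permutations

def alternating_permutations(n):
    """List, in lexicographic order, all permutations p of 1..n with
    p1 < p2 > p3 < p4 ... (brute-force filter, exponential time)."""
    def is_alternating(p):
        return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
                   for i in range(len(p) - 1))
    return [p for p in permutations(range(1, n + 1)) if is_alternating(p)]

perms = alternating_permutations(4)
# The five alternating permutations of 1..4:
# (1,3,2,4), (1,4,2,3), (2,3,1,4), (2,4,1,3), (3,4,1,2)
```

The counts 1, 1, 1, 2, 5, 16, 61, ... are the zigzag (Euler) numbers, which is why brute force is only viable for small n and a constant average-time generator is of interest.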

13.
The problem of minimizing a function f(x) subject to the nonlinear constraint φ(x) = 0 is considered, where f is a scalar, x is an n-vector, and φ is a q-vector, with q < n. The sequential gradient-restoration algorithm (SGRA: Miele, [1, 2]) and the gradient-projection algorithm (GPA: Rosen, [3, 4]) are considered. These algorithms have one common characteristic: they are all composed of the alternate succession of gradient phases and restoration phases. However, they are different in several aspects, namely, (a) problem formulation, (b) structure of the gradient phase, and (c) structure of the restoration phase. First, a critical summary of SGRA and GPA is presented. Then, a comparison is undertaken by considering the speed of convergence and, above all, robustness (that is, the capacity of an algorithm to converge to a solution). The comparison is done through 16 numerical examples. In order to understand the separate effects of characteristics (a), (b), (c), six new experimental algorithms are generated by combining parts of Miele's algorithm with parts of Rosen's algorithm. Thus, the total number of algorithms investigated is eight. The numerical results show that Miele's method is on the average faster than Rosen's method. More importantly, regarding robustness, Miele's method compares favorably with Rosen's method. Through the examples, it is shown that Miele's advantage in robustness is more prominent as the curvature of the constraint increases. While this advantage is due to the combined effect of characteristics (a), (b), (c), it is characteristic (c) that plays the dominant role. Indeed, Miele's restoration provides a better search direction as well as better step-size control than Rosen's restoration.
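The gradient phase shared by both families can be sketched in one step: project the objective gradient onto the tangent space of the linearized constraint, in the spirit of Rosen's projection. This is an illustrative fragment under assumed names, not either paper's full algorithm.

```python
import numpy as np

def projected_gradient_step(grad, J, x, alpha):
    """One gradient-projection step: P = I - J^T (J J^T)^{-1} J projects
    onto the null space of the constraint Jacobian J, so the step keeps
    the linearized constraint satisfied to first order."""
    J = np.atleast_2d(J)
    P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
    return x - alpha * P @ grad

# Constraint x1 + x2 = 1 (Jacobian [1, 1]); gradient of f = x1^2 + x2^2 at (1, 0).
x = np.array([1.0, 0.0])
x_new = projected_gradient_step(np.array([2.0, 0.0]), np.array([1.0, 1.0]), x, alpha=0.25)
```

With a linear constraint the projected step stays exactly feasible; with curved constraints it drifts off the constraint surface, which is precisely where the competing restoration strategies of SGRA and GPA differ.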

14.
We describe algorithms for estimating a given measure π known up to a constant of proportionality, based on a large class of diffusions (extending the Langevin model) for which π is invariant. We show that under weak conditions one can choose from this class in such a way that the diffusions converge at exponential rate to π, and one can even ensure that convergence is independent of the starting point of the algorithm. When convergence is less than exponential, we show that it is often polynomial at verifiable rates. We then consider methods of discretizing the diffusion in time, and find methods which inherit the convergence rates of the continuous-time process. These contrast with the behavior of the naive or Euler discretization, which can behave badly even in simple cases. Our results are described in detail in one dimension only, although extensions to higher dimensions are also briefly described.
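The naive Euler discretization mentioned above is the unadjusted Langevin algorithm: discretize dX = ∇log π(X) dt + √2 dW with a fixed step h. A hedged sketch for a standard normal target (the function name, stepsize, and target are assumptions for illustration):

```python
import numpy as np

def ula_samples(grad_log_pi, x0, h, n, rng):
    """Euler (unadjusted Langevin) discretization of
    dX = grad log pi(X) dt + sqrt(2) dW: biased for finite h, which is
    the kind of misbehavior the paper analyzes."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.standard_normal()
        out[i] = x
    return out

rng = np.random.default_rng(0)
# Target pi = N(0, 1): grad log pi(x) = -x.
xs = ula_samples(lambda x: -x, 0.0, h=0.05, n=50000, rng=rng)
```

Even in this benign Gaussian case the chain's stationary variance is 1/(1 − h/2), not 1, illustrating the discretization bias; for heavier-tailed targets the Euler scheme can be transient altogether.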

15.
It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n or (n−1) step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x). Research was supported in part by the Army Research Office, Contract Number DAHC 19-69-C-0017, and the Office of Naval Research, Contract Number N00014-71-C-0116 (NR 047-99).
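The n-step behavior that underlies this result is visible on a quadratic, where conjugate-direction methods terminate in at most n steps in exact arithmetic. A small sketch (illustrative data, not from the paper) runs linear conjugate gradients on a 3×3 symmetric positive-definite system and reaches the minimizer in 3 steps:

```python
def conjugate_gradient(A, b, x0, iters):
    """Linear CG on f(x) = 0.5 x^T A x - b^T x; for an SPD n-by-n A it
    terminates (in exact arithmetic) in at most n steps."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    Ax = mv(x)
    r = [b[i] - Ax[i] for i in range(n)]   # residual b - Ax
    p = list(r)                            # first search direction
    for _ in range(iters):
        rr = sum(ri * ri for ri in r)
        if rr == 0.0:
            break
        Ap = mv(p)
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        beta = sum(ri * ri for ri in r) / rr
        p = [r[i] + beta * p[i] for i in range(n)]   # A-conjugate direction
    return x, r

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, r = conjugate_gradient(A, b, [0.0, 0.0, 0.0], 3)
residual = max(abs(ri) for ri in r)
```

On a general (non-quadratic) F, this exact n-step termination degrades to the n or (n−1) step superlinear rate discussed in the abstract, whereas a quasi-Newton method builds curvature information that can yield superlinear progress at every step.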

16.
We discuss the definition and effectiveness of a Padé-type approximation to 2π-periodic finite Baire measures on [−π, π]. In the first two sections we recall the definitions and basic properties of the Padé-type approximants to harmonic functions in the unit disk and to L^p-functions on the unit circle. Section 3 deals with the extension of these definitions and properties to a finite 2π-periodic Baire measure. Finally, Section 4 is devoted to a study of the convergence of a sequence of such approximants in the weak-star topology of measures.

17.
The Metropolis-Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. The traditional methods use candidates essentially unconnected to π. We show that the class of candidate distributions, developed in Part I (Stramer and Tweedie, 1999), which self-target towards the high-density areas of π, produce Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as the random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension, but we indicate how they may extend successfully to higher dimensions.
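For reference, a minimal random-walk Metropolis-Hastings sketch, i.e., the "traditional" candidate the authors compare against. The Laplace target (exponential tails, variance 2) and proposal scale are illustrative, not the paper's examples:

```python
import math
import random

def metropolis_hastings(log_pi, x0, scale, n_steps, rng):
    """Random-walk Metropolis-Hastings: propose a move, then accept or
    reject it so that the (unnormalized) target pi is the invariant measure."""
    x, lp, out = x0, log_pi(x0), []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, scale)          # candidate move
        lq = log_pi(y)
        if math.log(rng.random()) < lq - lp:   # accept w.p. min(1, pi(y)/pi(x))
            x, lp = y, lq
        out.append(x)
    return out

# Illustrative target: Laplace density pi(x) proportional to exp(-|x|).
rng = random.Random(1)
samples = metropolis_hastings(lambda x: -abs(x), 0.0, 1.0, 200_000, rng)
burn = samples[20_000:]
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
```

The candidate here ignores π entirely except through the accept/reject step; the self-targeting candidates of Part I instead bias proposals toward the high-density region of π, which is what improves the convergence rates.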

18.
This paper studies the growth function, with respect to the generating set of edge identifications, of a surface group with fundamental domain D in the hyperbolic plane an n-gon whose angles alternate between π/p and π/q. The possibilities of n, p, and q for which a torsion-free surface group can have such a fundamental polygon are classified, and the growth functions are computed. Conditions are given under which the denominator of the growth function is a product of cyclotomic polynomials and a Salem polynomial. This work was supported in part by NSF Research Grants.

19.
In this paper, we describe a solution to the register synthesis problem for a class of sequence generators known as algebraic feedback shift registers (AFSRs). These registers are based on the algebra of π-adic numbers, where π is an element of a ring R, and they produce sequences of elements in R/(π). We give several cases where the register synthesis problem can be solved by an efficient algorithm. Consequently, any keystream over R/(π) used in a stream cipher must not be generable by a small register in these classes. This paper extends the analyses of feedback-with-carry shift registers and algebraic feedback shift registers by Goresky, Klapper, and Xu.
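The simplest AFSR instance is R = Z with π = 2, which gives the feedback-with-carry shift register (FCSR): the integer sum of tapped cells plus the carry is reduced mod 2 to produce the new cell, and the quotient becomes the new carry. A small sketch (parameters are illustrative, not from the paper):

```python
def fcsr(taps, state, mem, n_bits):
    """Feedback-with-carry shift register: an AFSR over R = Z with pi = 2.
    taps = [q_1, ..., q_r]; state holds the r initial bits, oldest first;
    mem is the integer carry."""
    bits = list(state)
    m = mem
    r = len(taps)
    for _ in range(n_bits - len(state)):
        # sigma = q_1*a_{n-1} + ... + q_r*a_{n-r} + carry
        sigma = sum(q * b for q, b in zip(taps, reversed(bits[-r:]))) + m
        bits.append(sigma % 2)   # new cell: sigma mod 2
        m = sigma // 2           # new carry: sigma div 2
    return bits

# Connection integer q = q_1*2 + q_2*4 - 1 = 5; the maximal period of such
# an FCSR is the multiplicative order of 2 mod 5, which is 4.
bits = fcsr([1, 1], [1, 0], 0, 20)
```

After a short transient the output settles into the expected period-4 cycle; the register synthesis problem studied in the paper is the inverse task of recovering a smallest such register from an observed keystream.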

20.
Frank Ruskey, Order, 1989, 6(3): 227-233
A permutation π_1 π_2 … π_n is alternating if π_1 < π_2 > π_3 < π_4 …. Alternating permutations are counted by the Euler numbers. Here we show that alternating permutations can be listed so that successive permutations differ by a transposition, if n is odd. Extensions and open problems are mentioned. Research supported by the Natural Sciences and Engineering Research Council of Canada under grant A3379.
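The Euler-number count is easy to verify by brute force. A short sketch (illustrative, not the authors' listing algorithm) checks the "up-down" condition and reproduces the first few counts 1, 1, 2, 5, 16, 61:

```python
from itertools import permutations

def is_alternating(p):
    """True if p_1 < p_2 > p_3 < p_4 ... (an up-down permutation)."""
    return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
               for i in range(len(p) - 1))

# Count alternating permutations of {1, ..., n} for n = 1..6.
counts = [sum(is_alternating(p) for p in permutations(range(1, n + 1)))
          for n in range(1, 7)]
```

For n = 3 the two alternating permutations 132 and 231 already differ by the transposition of positions 1 and 3, the smallest instance of the transposition-Gray-code listing shown in the paper for odd n.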


Copyright©北京勤云科技发展有限公司  京ICP备09084417号