Similar articles
Found 20 similar articles (search time: 9 ms)
1.
A numerical method for solving a special class of optimal control problems is given. The solution is based on parametrizing the state as a polynomial with unknown coefficients, which converts the problem into a nonlinear optimization problem. To facilitate the computation of the optimal coefficients, an improved iterative method is suggested. Convergence of this iterative method and its implementation on numerical examples are also given.
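The state-parametrization idea in this abstract can be sketched on a hypothetical test problem (minimize the integral of x² + u² over [0,1] subject to x' = u, x(0) = 1, whose optimal cost is tanh 1 ≈ 0.7616). Because the cost is quadratic in the polynomial coefficients, this sketch solves the resulting normal equations directly rather than using the paper's iterative scheme; all problem data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Quadrature grid with trapezoid weights.
t = np.linspace(0.0, 1.0, 2001)
w = np.full_like(t, t[1] - t[0])
w[0] *= 0.5
w[-1] *= 0.5

# State parametrization x(t) = 1 + a1 t + ... + a4 t^4 satisfies x(0)=1 by construction.
K = np.arange(1, 5)
X = t[:, None] ** K            # columns t^k, so x = 1 + X a
D = K * t[:, None] ** (K - 1)  # columns k t^(k-1), so u = x' = D a

# J(a) = sum_i w_i [(1 + X_i a)^2 + (D_i a)^2] is quadratic in a;
# setting the gradient to zero gives the linear system H a = -c.
H = X.T @ (w[:, None] * X) + D.T @ (w[:, None] * D)
c = X.T @ w
a = np.linalg.solve(H, -c)

x = 1.0 + X @ a
u = D @ a
J = np.sum(w * (x**2 + u**2))  # should be close to tanh(1)
```

With a degree-4 polynomial the computed cost agrees with the analytic optimum to a few decimal places, illustrating how the parametrization reduces the control problem to finite-dimensional optimization.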

2.
A numerical algorithm to obtain the consistency conditions satisfied by singular arcs in singular linear–quadratic optimal control problems is presented. The algorithm is based on the Presymplectic Constraint Algorithm (PCA) of Gotay and Nester (Gotay et al., J Math Phys 19:2388–2399, 1978; Volckaert and Aeyels 1999), which solves presymplectic Hamiltonian systems and provides a geometrical framework for the Dirac–Bergmann theory of constraints for singular Lagrangian systems (Dirac, Can J Math 2:129–148, 1950). The numerical implementation is based on the singular value decomposition, which at each step allows one to construct a semi-explicit system. Several examples and experiments are discussed, among them a family of arbitrarily large singular LQ systems of index 2 and a family of examples of arbitrarily large index, all of them exhibiting stable behaviour. Research partially supported by MEC grant MTM2004-07090-C03-03, SIMUMAT-CM, UC3M-MTM-05-028, and CCG06-UC3M/ESP-0850.

3.
4.
Optimization, 2012, 61(3): 347–363
In this article, minimax optimal control problems governed by parabolic equations are considered. We apply a new dual dynamic programming approach to derive sufficient optimality conditions for such problems. The idea is to transfer all notions from the state space to a dual space and to obtain a new verification theorem stating the conditions that a solution of the dual partial differential equation of dynamic programming should satisfy. We also give sufficient conditions for the existence of an optimal dual feedback control, together with an approximation of the problem considered, which appears very useful from a practical point of view.

5.
A general optimal control problem for ordinary differential equations is considered. For this problem, several improvements to Sakawa's algorithm are discussed. We avoid any convexity assumption and show with an example that the algorithm is applicable even in cases where no optimal control exists.

6.
7.
Naive implementations of Newton's method for unconstrained N-stage discrete-time optimal control problems with Bolza objective functions tend to increase in cost like N^3 as N increases. However, if the inherent recursive structure of the Bolza problem is properly exploited, the cost of computing a Newton step will increase only linearly with N. The efficient Newton implementation scheme proposed here is similar to Mayne's DDP (differential dynamic programming) method but produces the Newton step exactly, even when the dynamical equations are nonlinear. The proposed scheme is also related to a Riccati treatment of the linear two-point boundary-value problems that characterize optimal solutions. For discrete-time problems, the dynamic programming approach and the Riccati substitution differ in an interesting way; however, these differences essentially vanish in the continuous-time limit. This work was supported by the National Science Foundation, Grant No. DMS-85-03746.
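The linear-in-N cost of the recursive approach can be illustrated with a backward Riccati sweep on a hypothetical scalar LQ problem (for a purely LQ Bolza problem the Newton step is already the exact solution). All data below are made up for illustration; the sweep is one O(N) pass, and the rollout cost must equal the value function P_0 x_0².

```python
import numpy as np

# Hypothetical scalar LQ data: x_{k+1} = a x_k + b u_k,
# stage cost q x^2 + r u^2, terminal cost qf x_N^2.
a, b, q, r, qf = 0.9, 0.5, 1.0, 0.1, 1.0
N, x0 = 20, 1.0

# Backward Riccati sweep: a single O(N) pass produces the feedback gains K_k.
P = np.empty(N + 1)
K = np.empty(N)
P[N] = qf
for k in range(N - 1, -1, -1):
    K[k] = a * b * P[k + 1] / (r + b * b * P[k + 1])
    P[k] = q + a * a * P[k + 1] - a * b * P[k + 1] * K[k]

# Forward rollout with u_k = -K_k x_k; by Bellman's principle the
# accumulated cost equals P_0 x0^2 exactly.
x, cost = x0, 0.0
for k in range(N):
    u = -K[k] * x
    cost += q * x * x + r * u * u
    x = a * x + b * u
cost += qf * x * x
```

Doubling N doubles the work of both loops, in contrast to the N^3 growth of a naive dense Newton solve.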

8.
The stochastic optimal control of linear systems with time-varying and partially observable parameters is synthesized under noisy measurements and a quadratic performance criterion. The structure of the regulator is given, and the optimal solution is reduced to a two-point boundary-value problem. Comments on the numerical solution by appropriate integration schemes are included.

9.
Methods are described for the numerical solution of singular optimal control problems. A simple method is given for solving a class of problems which form a transition from nonsingular to singular cases. A procedure is given for determining the structure of a singular problem if it is initially unknown. Several numerical examples are presented. This work is based on the author's PhD Dissertation at The Hatfield Polytechnic, Hatfield, Hertfordshire, England.

10.
A discrete method of optimal control is proposed in this paper. The continuum state space of a system is discretized into a cell state space, and the cost function is discretized in a similar manner. Assuming intervalwise constant controls and using a finite set of admissible control levels and a finite set of admissible time intervals, the motion of the system under all possible interval controls can then be expressed in terms of a family of cell-to-cell mappings. The proposed method extracts the optimal control results from these mappings by a systematic search, culminating in the construction of a discrete optimal control table. The possibility of expressing the optimal control results in the form of a control table suggests that this method can make systems real-time controllable. Dedicated to G. Leitmann. The material is based upon work supported by the National Science Foundation under Grant No. MEA-82-17471. The author is also indebted to Professor G. Leitmann for his many helpful comments.
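The cell-mapping idea can be sketched for a hypothetical one-dimensional plant x⁺ = x + uΔt with stage cost |x|: state values are quantized to cell centers, the admissible controls form a small finite set, each (cell, control) pair maps to an image cell, and backward value iteration over these cell-to-cell maps produces a lookup table of optimal controls. The plant, cost, and control levels here are invented for illustration and are not from the paper.

```python
import numpy as np

cells = np.linspace(-1.0, 1.0, 41)   # cell centers (spacing 0.05)
U = np.array([-0.5, 0.0, 0.5])       # finite set of admissible control levels
dt, steps = 0.2, 30                  # fixed time interval, horizon length

def cell_index(x):
    """Quantize a state value to its nearest cell center."""
    return np.abs(cells - x).argmin()

# Image cell of each (cell, control) pair: the cell-to-cell mapping family.
image = np.array([[cell_index(c + u * dt) for u in U] for c in cells])

V = np.abs(cells)                        # terminal cost per cell
for _ in range(steps):                   # backward value iteration
    Qv = np.abs(cells)[:, None] + V[image]  # stage cost + cost-to-go of image cell
    table = U[Qv.argmin(axis=1)]            # discrete optimal control table
    V = Qv.min(axis=1)
```

Once built, `table` is a pure lookup: at run time the controller quantizes the measured state to a cell and reads off the control, which is what makes the scheme attractive for real-time use.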

11.
In this paper, we present a new computational approach for solving an internal optimal control problem governed by a linear parabolic partial differential equation. Our approach is to approximate the PDE problem by a higher-dimensional system of nonhomogeneous ordinary differential equations. The homogeneous part of this ODE system is then solved using semigroup theory, and the convergence of the approach is verified by means of a Toeplitz matrix. The optimal control problem is then solved by utilizing the solution of the homogeneous part. Finally, a numerical example is given.
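The PDE-to-ODE step can be sketched on a hypothetical example, the heat equation u_t = u_xx on (0,1) with zero boundary values: central differences in space give a linear ODE system u' = Au (with A a symmetric Toeplitz matrix), and the homogeneous part is solved by the semigroup e^{tA}, computed here via the spectral decomposition of A. All data are illustrative and not taken from the paper.

```python
import numpy as np

n = 200                        # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Semidiscretized 1-D Dirichlet Laplacian: a symmetric tridiagonal Toeplitz matrix.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

u0 = np.sin(np.pi * x)         # first eigenmode as initial state
tf = 0.1

# Semigroup solution u(tf) = e^{tf A} u0 via eigendecomposition (A is symmetric).
lam, V = np.linalg.eigh(A)
u = V @ (np.exp(tf * lam) * (V.T @ u0))
```

For this initial mode the exact PDE solution decays like exp(-pi^2 t) sin(pi x), and the semidiscrete semigroup reproduces that decay up to an O(h^2) error in the eigenvalue.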

12.
A semi-analytical direct optimal control solution for strongly excited and dissipative Hamiltonian systems is proposed, based on the extended Hamiltonian principle, the Hamilton-Jacobi-Bellman (HJB) equation with its variational integral equation, and a finite time element approximation. The differential extended Hamiltonian equations for structural vibration systems are replaced by the variational integral equation, which preserves the intrinsic system structure. The optimal control law, which depends on the value function, is determined by the HJB equation so as to satisfy the overall optimality principle. The partial differential equation for the value function is converted into an integral equation with variational weighting, and a successive solution of the optimal control with the system state is designed. The two variational integral equations are applied to sequential time elements and transformed into algebraic equations by the finite time element approximation. The direct optimal control on each time element is then obtained by solving these algebraic equations, unconstrained by the observed system state. The proposed control algorithm is applicable to linear and nonlinear systems with a quadratic performance index, and takes into account the effects of measured external excitations on the control. Numerical examples are given to illustrate the effectiveness of the optimal control.

13.
This paper deals with the computation of optimal feedback control laws for a nonlinear stochastic third-order system in which the nonlinear element is not completely specified. It is shown that, due to the structure of the system, the optimal feedback control law, whenever it exists, is not unique. It is also shown that implementing an optimal feedback control law requires solving a nonlinear partial differential equation. A finite-difference algorithm for the solution of this equation is suggested, and its efficiency and applicability are demonstrated with examples.

14.
We propose a generalization of the structured doubling algorithm to compute invariant subspaces of structured matrix pencils that arise in the context of solving linear-quadratic optimal control problems. The new algorithm is designed to attain better accuracy when the classical Riccati-equation approach to the optimal control problem is not well suited, either because the stable and unstable invariant subspaces are poorly separated (owing to eigenvalues near or on the imaginary axis) or because the Riccati solution does not exist at all. We analyze the convergence of the method and compare it with the classical structured doubling algorithm as well as some structured QR methods. Copyright © 2012 John Wiley & Sons, Ltd.
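For context, the classical structure-preserving doubling iteration that this work generalizes can be sketched for a discrete-time algebraic Riccati equation: starting from A_0 = A, G_0 = BR⁻¹Bᵀ, H_0 = Q, each step squares the effective horizon, and H_k converges quadratically to the stabilizing solution when the pencil's spectrum is well separated from the unit circle. The 2x2 data below are hypothetical, and this is a sketch of the classical algorithm, not of the generalized method proposed in the paper.

```python
import numpy as np

# Hypothetical stable LQ data.
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.5]])
Q = np.eye(2)
R = np.array([[1.0]])

# Classical structured doubling iteration for the DARE
#   X = A' X A - A' X B (R + B' X B)^{-1} B' X A + Q.
Ak = A.copy()
G = B @ np.linalg.solve(R, B.T)   # B R^{-1} B'
H = Q.copy()
I2 = np.eye(2)
for _ in range(30):               # after k steps, Ak represents 2^k horizon steps
    W = np.linalg.inv(I2 + G @ H)
    Ak, G, H = (Ak @ W @ Ak,
                G + Ak @ W @ G @ Ak.T,
                H + Ak.T @ H @ W @ Ak)

# DARE residual of the computed H (should be near machine precision).
S = R + B.T @ H @ B
residual = A.T @ H @ A - A.T @ H @ B @ np.linalg.solve(S, B.T @ H @ A) + Q - H
```

When eigenvalues approach the unit circle the factor (I + GH) becomes ill-conditioned, which is precisely the regime the paper's generalization targets.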

15.
In this paper, we describe the algorithm OPTCON, which has been developed for the optimal control of nonlinear stochastic models. It can be applied to obtain approximate numerical solutions of control problems where the objective function is quadratic and the dynamic system is nonlinear. In addition to the usual additive uncertainty, some or all of the parameters of the model may be stochastic variables. The optimal values of the control variables are computed iteratively: first, the time-invariant nonlinear system is linearized around a reference path and approximated by a time-varying linear system; second, this new problem is solved by applying Bellman's principle of optimality. The resulting feedback equations are used to project expected optimal state and control variables. These projections then serve as a new reference path, and the two steps are repeated until convergence is reached. The algorithm has been implemented in the statistical programming system GAUSS. We derive some mathematical results needed for the algorithm, give an overview of the structure of OPTCON, and report on some tentative applications of OPTCON to two small macroeconometric models for Austria.
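The two-step iteration described above (linearize around a reference path, solve the resulting time-varying LQ problem by Bellman's principle, re-linearize) can be caricatured on a deterministic scalar model. This is emphatically not the OPTCON implementation: the model x_{k+1} = x_k + Δt(−x_k³ + u_k), the cost weights, and the horizon are all invented, stochastic parameters are ignored, and the affine correction terms of a full treatment are omitted for brevity.

```python
import numpy as np

dt, N, x0 = 0.05, 50, 1.0
q, r, qf = 1.0, 0.1, 1.0          # quadratic stage and terminal weights

xs = np.full(N + 1, x0)           # initial reference path
for _ in range(20):               # outer linearize/solve loop
    # Step 1: linearize the dynamics along the current reference path.
    Ak = 1.0 - 3.0 * dt * xs[:-1] ** 2   # df/dx for f = x + dt(-x^3 + u)
    Bk = dt                              # df/du

    # Step 2: solve the time-varying LQ problem by a backward Riccati sweep.
    P = np.empty(N + 1)
    K = np.empty(N)
    P[N] = qf
    for k in range(N - 1, -1, -1):
        K[k] = Ak[k] * Bk * P[k + 1] / (r + Bk * Bk * P[k + 1])
        P[k] = q + Ak[k] ** 2 * P[k + 1] - Ak[k] * Bk * P[k + 1] * K[k]

    # Roll the feedback out on the true nonlinear model to get the new reference.
    x_new = np.empty(N + 1)
    x_new[0] = x0
    for k in range(N):
        u = -K[k] * x_new[k]
        x_new[k + 1] = x_new[k] + dt * (-x_new[k] ** 3 + u)
    xs = x_new
```

Each outer pass replaces the reference path by the closed-loop trajectory of the latest LQ approximation; for this mildly nonlinear model the path settles quickly and the feedback drives the state toward the origin.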

16.
17.
Feedback synthesis of optimal constrained controls for single-input bilinear systems is considered. Quadratic cost functionals (with and without quadratic control penalization) are modified by including additional nonnegative state-penalizing functions in the respective cost integrands. The latter functions are chosen so as to regularize the problems, in the sense that feedback solutions of particularly simple form are obtained. Finite- and infinite-time-horizon problem formulations are treated, and associated aspects of the feedback stabilization of bilinear systems are discussed.

18.
In this paper, we investigate the relationship between two classes of optimality which have arisen in the study of dynamic optimization problems defined on an infinite-time domain. We utilize an optimal control framework to discuss our results. In particular, we establish relationships between limiting objective functional type optimality concepts, commonly known as overtaking optimality and weakly overtaking optimality, and the finite-horizon solution concepts of decision-horizon optimality and agreeable plans. Our results show that both classes of optimality are implied by corresponding uniform limiting objective functional type optimality concepts, referred to here as uniformly overtaking optimality and uniformly weakly overtaking optimality. This observation permits us to extract sufficient conditions for optimality from known sufficient conditions for overtaking and weakly overtaking optimality by strengthening their hypotheses. These results take the form of a strengthened maximum principle. Examples are given to show that the hypotheses of these results can be realized. This research was supported by the National Science Foundation, Grant No. DMS-87-00706, and by the Southern Illinois University at Carbondale, Summer Research Fellowship Program.

19.
Optimization, 2012, 61(1): 115–130
In this article, we establish the existence of optimal solutions for a large class of nonconvex infinite-horizon discrete-time optimal control problems. This class contains optimal control problems arising in economic dynamics which describe a model with nonconcave utility functions representing the preferences of the planner.

20.
We present a local sensitivity analysis for discrete optimal control problems with varying endpoints in the case when the customary regularity of the boundary conditions may be violated. We study the behavior of the optimal solutions subject to parametric perturbations of the problem.
