Similar Documents
20 similar documents found.
1.
2.
3.
4.
5.
6.
The calculation and implementation of the neighboring optimal feedback control law for multi-input, nonlinear dynamical systems, using discontinuous control, is the subject of this paper. The concept of neighboring optimal feedback control of systems with continuous, unbounded control functions has been investigated by others. The features differentiating that class of problems from the one considered here are the control discontinuities and the inherent system uncontrollability during the latter stages of the control-law operating time.

The neighboring control law is determined by minimizing the second-order terms in the expansion of the performance index about an optimal nominal path. The resulting gains are a function of the states associated with the nominal trajectory. The development of a feedback control scheme utilizing these gains requires a technique for choosing the gains appropriate for each neighboring state. Such a technique is described in this paper. It combines a bootstrap algorithm, for determining the number of neighboring switch times and the initial and final controls, with a scheme based on time-to-go along the nominal and neighboring paths until the next predicted switch time or the predicted final time. This scheme requires that the nominal state used to specify the feedback gains be chosen such that the predicted time-to-go from the neighboring state is identical to the time-to-go from the nominal state. This technique for choosing feedback gains has minimal storage requirements and readily leads to a real-time feedback implementation of the neighboring control law.

The optimal feedback control scheme described in this paper is used to solve the minimum-time satellite attitude-acquisition problem. The behavior of the neighboring control scheme when applied to states that do not lie in an immediate neighborhood of the nominal path is investigated.
For this particular problem, the neighboring control scheme performs quite well despite the fact that, when the state perturbations are finite, the terminal constraints can never be satisfied exactly.

This research was sponsored by the National Aeronautics and Space Administration under Research Grant No. NGL-05-020-007 and is a condensed version of the investigation described in Ref. 1. The authors are indebted to Professor Arthur E. Bryson, Jr., for suggesting the topic and providing stimulating discussions.
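The time-to-go matching rule can be sketched as a simple lookup. The gain schedule below is purely hypothetical (the paper's gains come from the second-order expansion about the nominal path, which is not reproduced here); the sketch only illustrates selecting stored gains by matching predicted time-to-go:

```python
import numpy as np

# Hypothetical gain schedule along a nominal trajectory (illustrative
# values only, not taken from the paper).
T = 10.0                            # nominal final time
t_nom = np.linspace(0.0, T, 101)    # time grid along the nominal path
tau_nom = T - t_nom                 # time-to-go at each nominal grid point
K_nom = np.exp(-0.3 * tau_nom)      # stand-in for the stored feedback gains

def gain_for(tau_neighbor):
    """Return the stored gain whose nominal time-to-go matches the
    predicted time-to-go from the neighboring state."""
    i = int(np.argmin(np.abs(tau_nom - tau_neighbor)))
    return K_nom[i]

# A neighboring state predicted to be 4.0 time units from the final time
# is assigned the gain stored for nominal time-to-go 4.0:
k = gain_for(4.0)
```

Because the lookup is indexed by time-to-go rather than by absolute time, the same stored schedule serves neighboring states whose predicted switch or final times differ from the nominal ones.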

7.
This paper presents the control and synchronization of chaos by designing linear feedback controllers. The linear feedback control problem for nonlinear systems is formulated from the viewpoint of optimal control theory. Asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton–Jacobi–Bellman equation, thus guaranteeing both stability and optimality. The formulated theorem expresses explicitly the form of the minimized functional and gives sufficient conditions under which linear feedback control may be used for a nonlinear system. Numerical simulations are provided to show the effectiveness of this method for control of the chaotic Rössler system and synchronization of the hyperchaotic Rössler system.
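As a rough illustration of linear feedback control of the Rössler system (with an arbitrary gain, not the HJB-derived optimal gain the abstract describes), the sketch below stabilizes the system about one of its equilibria:

```python
import numpy as np

a, b, c = 0.2, 0.2, 5.7          # standard Rössler parameters

# Equilibrium: y = -x/a, z = x/a, with x solving x^2 - c*x + a*b = 0
x_eq = (c - np.sqrt(c * c - 4 * a * b)) / 2
s_eq = np.array([x_eq, -x_eq / a, x_eq / a])

def rossler(s):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

k = 5.0                          # arbitrary (non-optimal) feedback gain
dt, steps = 1e-3, 50_000
s = s_eq + np.array([1.0, 1.0, 1.0])    # perturbed initial state
for _ in range(steps):
    u = -k * (s - s_eq)          # linear state feedback toward the equilibrium
    s = s + dt * (rossler(s) + u)
```

The uncontrolled equilibrium is unstable; with the feedback term added, the closed-loop Jacobian is shifted left by k and the perturbed trajectory decays back to the equilibrium.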

8.
In this paper, the task of achieving the soft landing of a lunar module such that the fuel consumption and the flight time are minimized is formulated as an optimal control problem. The motion of the lunar module is described in a three-dimensional coordinate system. We obtain the form of the optimal closed-loop control law, in which a feedback gain matrix is involved. It is then shown that this feedback gain matrix satisfies a Riccati-like matrix differential equation. The optimal control problem is first solved as an open-loop optimal control problem by using a time-scaling transform and the control parameterization method. Then, by virtue of the relationship between the optimal open-loop control and the optimal closed-loop control along the optimal trajectory, we present a practical method to calculate an approximate optimal feedback gain matrix, without having to solve an optimal control problem involving the complex Riccati-like matrix differential equation coupled with the original system dynamics. Simulation results show that the proposed approach is highly effective.
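The Riccati-like matrix differential equation mentioned above is analogous to the standard finite-horizon LQR Riccati equation. On a toy double-integrator "descent" model (an illustrative stand-in, not the paper's three-dimensional lunar-module dynamics), that equation can be integrated backward in time to recover the time-varying feedback gain:

```python
import numpy as np

# Toy double integrator (position/velocity, acceleration as control);
# weights Q, R and horizon T are arbitrary illustrative choices.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                 # state weighting
R = np.array([[1.0]])         # control weighting
Rinv = np.linalg.inv(R)

T, dt = 5.0, 1e-3
P = np.eye(2)                 # terminal condition P(T)
for _ in range(int(T / dt)):  # integrate the Riccati ODE backward in time
    dP = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
    P = P + dt * dP

K = Rinv @ B.T @ P            # feedback gain at t = 0: u = -K x
```

Over a horizon of this length P(0) is close to the algebraic Riccati solution [[√3, 1], [1, √3]], and the resulting gain K = [1, √3] places both closed-loop eigenvalues in the left half-plane.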

9.
The stochastic optimal control of linear systems with time-varying and partially observable parameters is synthesized under noisy measurements and a quadratic performance criterion. The structure of the regulator is given, and the optimal solution is reduced to a two-point boundary-value problem. Comments on the numerical solution by appropriate integration schemes are included.
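For a deterministic scalar LQ problem (a simplified stand-in; the paper's setting is stochastic with partially observable parameters), the two-point boundary-value problem arising from the optimality conditions can be solved by simple shooting on the unknown initial costate:

```python
import math

# Minimize the integral of (x^2 + u^2) over [0, 1] with dx/dt = u, x(0) = 1.
# Pontryagin: u = -lam/2, dx/dt = -lam/2, dlam/dt = -2x, with lam(T) = 0.
T, dt = 1.0, 1e-3

def terminal_costate(lam0):
    """Integrate forward from a guessed initial costate; return lam(T)."""
    x, lam = 1.0, lam0
    for _ in range(int(T / dt)):
        # explicit Euler; both states updated from the old values
        x, lam = x + dt * (-lam / 2.0), lam + dt * (-2.0 * x)
    return lam

# Bisection on lam0: lam(T) is an increasing affine function of lam0.
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if terminal_costate(mid) > 0.0:
        hi = mid
    else:
        lo = mid
lam0 = (lo + hi) / 2.0
```

For this problem the analytic answer is lam(0) = 2 tanh(1) ≈ 1.523, which the shooting iteration recovers up to the Euler discretization error.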

10.
This work presents chaos synchronization between two different chaotic systems via nonlinear feedback control. On the basis of a converse Lyapunov theorem and a balanced-gain scheme, the control gains of the controller are derived to achieve chaos synchronization for the unified chaotic systems. Numerical simulations are shown to verify the results.
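A bare-bones sketch of feedback synchronization (using two identical Lorenz systems and plain full-state linear error feedback, not the paper's balanced-gain scheme for the unified chaotic system) shows a response system being driven onto the drive's trajectory:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # standard Lorenz parameters

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

k = 60.0          # ad-hoc coupling gain, chosen large enough for contraction
dt = 1e-3
drive = np.array([1.0, 1.0, 1.0])
response = np.array([-3.0, 4.0, 10.0])
for _ in range(20_000):                    # 20 time units
    u = -k * (response - drive)            # full-state error feedback
    drive = drive + dt * lorenz(drive)
    response = response + dt * (lorenz(response) + u)

err = np.linalg.norm(response - drive)
```

With the coupling gain exceeding the largest eigenvalue of the symmetric part of the Lorenz Jacobian along the attractor, the synchronization error contracts exponentially and the two trajectories become indistinguishable.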

11.
The dynamical behaviors of the Liu system are studied using the Routh–Hurwitz criteria, the center manifold theorem, and the Hopf bifurcation theorem. Periodic solutions and their stability about the equilibrium points are studied using the Hsü–Kazarinoff theorem. Linear feedback control techniques are used to stabilize and synchronize the chaotic Liu system.
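For a three-dimensional system such as the Liu system, the Routh–Hurwitz conditions for a characteristic polynomial s³ + a₁s² + a₂s + a₃ reduce to a₁ > 0, a₃ > 0, and a₁a₂ > a₃. A minimal check (the coefficients in the usage lines are illustrative, not those of the Liu system):

```python
def routh_hurwitz_cubic(a1, a2, a3):
    """All roots of s^3 + a1*s^2 + a2*s + a3 have negative real parts
    iff a1 > 0, a3 > 0, and a1*a2 > a3 (Routh-Hurwitz for a cubic)."""
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6   -> stable
stable = routh_hurwitz_cubic(6.0, 11.0, 6.0)
# s^3 + s^2 + s + 2 violates a1*a2 > a3      -> unstable
unstable = routh_hurwitz_cubic(1.0, 1.0, 2.0)
```

In stability analyses of this kind, a₁, a₂, a₃ are evaluated from the Jacobian at each equilibrium, so the three inequalities give stability conditions directly in the system parameters.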

12.
We consider a linear dynamic system in the presence of an unknown but bounded perturbation and study how to control the system so that it reaches a prescribed neighborhood of zero at a given final moment. The quality of a control is estimated by a quadratic functional. We define optimal guaranteed program controls as controls that are allowed to be corrected at one intermediate time moment. We show that the infinite-dimensional problem of constructing such controls is equivalent to a special bilevel problem of mathematical programming which can be solved explicitly. An easily implementable algorithm for solving the bilevel optimization problem is derived. Based on this algorithm, we propose an algorithm for constructing a guaranteed feedback control with one correction moment. We describe the rules for computing the feedback, which can be implemented in real-time mode. The results of illustrative tests are given.

13.
We consider a general continuous-time finite-dimensional deterministic system under a finite-horizon cost functional. Our aim is to calculate approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically: one based on a time-difference approximation, the other on a time–space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence of the optimal feedback control of the continuous counterpart. An example is presented for the time–space algorithm; the results illustrate that the scheme is effective.
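A minimal time–space discretization of an HJB equation, in the spirit of (though much simpler than) the schemes described above: for the scalar toy problem dx/dt = u with running cost x² + u², terminal cost x², and horizon 1, the exact value function is V(x, t) = x², which gives a check on the grid scheme:

```python
import numpy as np

dt, T = 0.02, 1.0
xs = np.linspace(-2.0, 2.0, 201)          # state grid
us = np.linspace(-2.0, 2.0, 41)           # control grid
V = xs ** 2                               # terminal value V(x, T) = x^2

for _ in range(int(T / dt)):              # march backward in time
    best = np.full_like(V, np.inf)
    for u in us:
        x_next = xs + dt * u              # explicit Euler step of dx/dt = u
        cost = dt * (xs ** 2 + u ** 2) + np.interp(x_next, xs, V)
        best = np.minimum(best, cost)
    V = best

V0 = float(np.interp(0.5, xs, V))         # exact answer: V(0.5, 0) = 0.25
```

The per-step minimization over the control grid, combined with linear interpolation of the value function at the stepped state, is exactly the discrete dynamic programming recursion whose convergence to the viscosity solution the abstract refers to.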

14.
The goal of this paper is to extend the classical notion of Gaussian curvature of a two-dimensional Riemannian surface to two-dimensional optimal control systems with scalar input, using Cartan's moving-frame method. This notion was already introduced by A. A. Agrachev and R. V. Gamkrelidze for more general control systems using a purely variational approach. Further, we will see that the "control" analogue of Gaussian curvature reflects similar intrinsic properties of the extremal flow. In particular, if the curvature is negative, arbitrarily long segments of extremals are locally optimal. Finally, we will define and characterize flat control systems.

Translated from Sovremennaya Matematika i Ee Prilozheniya (Contemporary Mathematics and Its Applications), Vol. 33, Suzdal Conference-2004, Part 1, 2005.
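For orientation, in the classical surface case the Gaussian curvature K appears in Cartan's structure equations for an orthonormal coframe (ω¹, ω²) with connection form ω₁₂ (sign conventions vary between texts; the version below follows a common one):

```latex
\begin{aligned}
  d\omega^1 &= \omega_{12} \wedge \omega^2, \\
  d\omega^2 &= -\,\omega_{12} \wedge \omega^1, \\
  d\omega_{12} &= -K\, \omega^1 \wedge \omega^2 .
\end{aligned}
```

The control-theoretic curvature of the abstract plays the role of K in an analogous structure equation for a frame adapted to the extremal flow, which is why its sign governs local optimality of long extremals.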

15.
16.
In this paper, the problem of guaranteed cost synchronization for a complex network is investigated. In order to achieve synchronization, two types of guaranteed-cost dynamic feedback controllers are designed. Based on Lyapunov stability theory, a linear matrix inequality (LMI) convex optimization problem is formulated to find the controller that guarantees asymptotic stability and minimizes the upper bound of a given quadratic cost function. Finally, a numerical example is given to illustrate the proposed method.

17.
We prove the existence of an optimal control for systems of stochastic differential equations without solving the Bellman dynamic programming equation. Instead, we use direct methods for solving extremal problems.

18.
Journal of Optimization Theory and Applications - Optimal modal feedback control laws are synthesized from the modal equations which are obtained by eigenfunction expansion of the diffusion...

19.
Using a semi-discrete model that describes the heat transfer of a continuous casting process of steel, this paper addresses an optimal control problem for the continuous casting process in the secondary cooling zone with water-spray control. The approach is based on the Hamilton–Jacobi–Bellman equation satisfied by the value function. It is shown that the value function is the viscosity solution of the Hamilton–Jacobi–Bellman equation. The optimal feedback control is found numerically by solving the associated Hamilton–Jacobi–Bellman equation through a designed finite difference scheme. The optimality of the obtained control is verified numerically through comparisons with different admissible controls. A detailed study of a low-carbon billet caster is presented.

20.
Power-series methods are developed for designing approximately optimal state regulators for a nonlinear system subject to white Gaussian random disturbances. The performance index of the control is an ensemble average of a quadratic form. Perfect observation of the system state is assumed. When the system nonlinearity is small and is characterized by a polynomial function of the state, a definite method is presented to construct a suboptimal feedback control in a power-series form in a small nonlinearity parameter. If the variance of the noise is small, an alternative method is also applicable, which yields a suboptimal control in a power series with respect to a variance parameter. A simple one-dimensional problem is examined to compare controls of the two different forms.

