Similar Articles
A total of 20 similar articles were found (search time: 546 ms).
1.
We consider optimal control problems with fixed final time and a terminal-integral cost functional, and address the question of constructing a grid optimal synthesis (a universal feedback) on the basis of classical characteristics of the Bellman equation. To construct an optimal synthesis, we propose a numerical algorithm that relies on the necessary optimality conditions (the Pontryagin maximum principle) and sufficient conditions in Hamiltonian form. We obtain estimates for the efficiency of the numerical method. The method is illustrated by an example of the numerical solution of a nonlinear optimal control problem.
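
The abstract above describes a grid synthesis built from classical characteristics. Below is a minimal sketch of that idea on an assumed scalar example (the dynamics, costs and nearest-neighbour lookup are illustrative choices, not the authors' algorithm or its efficiency estimates): extremals of the Pontryagin maximum principle are integrated backward from a grid of terminal states, and the stored (time, state, control) samples are used as a universal feedback.

```python
# A minimal sketch, assuming the scalar example below (not the authors' problem):
#   minimize 0.5*x(T)^2 + int_0^T 0.5*u(t)^2 dt,   dx/dt = sin(x) + u.
# The Pontryagin maximum principle gives u*(p) = -p and the characteristic system
#   dx/dt = sin(x) - p,   dp/dt = -p*cos(x),   with p(T) = x(T).
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0
t_grid = np.linspace(0.0, T, 51)

def characteristics(t, y):
    x, p = y
    return [np.sin(x) - p, -p * np.cos(x)]

# Integrate one extremal backward in time from each terminal state on a grid.
field = []                                   # (t, x, u) samples along all extremals
for xT in np.linspace(-2.0, 2.0, 81):
    sol = solve_ivp(characteristics, [T, 0.0], [xT, xT],   # p(T) = x(T)
                    t_eval=t_grid[::-1], rtol=1e-8)
    for t, (x, p) in zip(sol.t, sol.y.T):
        field.append((t, x, -p))             # optimal control along the extremal
field = np.array(field)

def feedback(t, x):
    """Grid synthesis: control of the nearest stored characteristic point."""
    d = (field[:, 0] - t) ** 2 + (field[:, 1] - x) ** 2
    return field[np.argmin(d), 2]

print(feedback(0.0, 1.0))                    # approximate optimal control at t=0, x=1
```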

2.
3.
We address the computation of semi-global solutions to optimal feedback control problems and the associated Hamilton–Jacobi–Bellman (HJB) equation. Using the solution of an HJB equation, a feedback optimal control law can be implemented in real time with minimal computational load. However, except for systems with two or three state variables, traditional techniques for numerically finding a semi-global solution to an HJB equation for general nonlinear systems are infeasible due to the curse of dimensionality. Here we present a new computational method for finding feedback optimal controls and solving HJB equations that is able to mitigate the curse of dimensionality. We do not discretize the HJB equation directly; instead, we introduce a sparse grid in the state space and use Pontryagin's maximum principle to derive a set of necessary conditions in the form of a boundary value problem, also known as the characteristic equations, for each grid point. With this approach the method is causality-free in space, which allows perfect parallelism over the sparse grid. Compared with dense grids, a sparse grid has a significantly reduced size, which makes the method feasible for systems of relatively high dimension, such as the 6-D system shown in the examples. Once the solution is obtained at each grid point, high-order accurate polynomial interpolation is used to approximate the feedback control at arbitrary points. We prove an upper bound for the approximation error and estimate it numerically. This sparse grid characteristics method is demonstrated on three examples of rigid body attitude control using momentum wheels.
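
As a rough illustration of the causality-free construction described above (not the paper's implementation), the sketch below solves the PMP boundary value problem independently at each node of a small regular grid, standing in for a sparse grid, on an assumed double-integrator example, and then interpolates the nodal controls to obtain a feedback law.

```python
# A minimal sketch (assumed example and regular grid; not the paper's method):
#   min 0.5*|x(T)|^2 + int_0^T 0.5*u^2 dt,   x1' = x2,  x2' = u.
# PMP gives u* = -p2, p1' = 0, p2' = -p1, with p(T) = x(T).
import numpy as np
from scipy.integrate import solve_bvp
from scipy.interpolate import RegularGridInterpolator

T = 1.0

def ode(t, y):
    # y = (x1, x2, p1, p2); u* = -p2 minimizes the Hamiltonian
    x1, x2, p1, p2 = y
    return np.vstack([x2, -p2, np.zeros_like(p1), -p1])

def solve_node(x0):
    def bc(ya, yb):
        # x(0) = x0  and  p(T) = x(T)
        return np.array([ya[0] - x0[0], ya[1] - x0[1],
                         yb[2] - yb[0], yb[3] - yb[1]])
    t = np.linspace(0.0, T, 20)
    sol = solve_bvp(ode, bc, t, np.zeros((4, t.size)))
    return -sol.y[3, 0]              # optimal control u*(0) at this grid node

# Each node is an independent boundary value problem; in the real method this
# loop runs in parallel over a sparse grid with many more points.
x1s = np.linspace(-1.0, 1.0, 9)
x2s = np.linspace(-1.0, 1.0, 9)
u_grid = np.array([[solve_node((a, b)) for b in x2s] for a in x1s])

u_feedback = RegularGridInterpolator((x1s, x2s), u_grid)
print(u_feedback([[0.5, -0.3]]))     # interpolated feedback control at (0.5, -0.3)
```

Because every node is solved independently, the loop over nodes can be distributed across processes with no communication, which is the parallelism the abstract refers to.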

4.
We explain a new method for obtaining a nearly optimal domain in optimal shape design problems associated with the solution of a nonlinear wave equation. Taking into account the boundary and terminal conditions of the system, a new approach based on positive Radon measures is applied to determine the optimal domain and its associated optimal control function with respect to an integral performance criterion. The approach, called shape-measure, consists of two steps: first, for a fixed domain, the optimal control is identified by the use of measures; this control and the optimal value of the objective function depend on the geometrical variables of the domain. In the second step, based on the results of the first step and by applying suitable optimization techniques, the optimal domain and its associated optimal control function are identified simultaneously. The existence of the optimal solution is considered, and a numerical example is also given.

5.
An optimal control problem for impressed cathodic systems in electrochemistry is studied. The control in this problem is the current density on the anode. A matching objective functional is considered. We first demonstrate the existence and uniqueness of solutions for the governing partial differential equation with a nonlinear boundary condition. We then prove the existence of an optimal solution. Next, we derive a necessary condition of optimality and establish an optimality system of equations. Finally, we define a finite element algorithm and derive optimal error estimates.

6.
In a fairly recent paper (2008 American Control Conference, June 11-13, 1035-1039), the problem of optimal pairs trading was treated from the viewpoint of stochastic control. The analysis of the subsequent nonlinear evolution partial differential equation was based upon a succession of Ansätze, which can lead to a solution of the terminal-value problem. Through an application of the Lie theory of continuous groups to this equation, we show that the Ansätze are based upon the underlying symmetries of the equation (their Eq. (14)). We solve the problem in a more general context by allowing the parameters to be explicitly time dependent. The extension means that more realistic problems are amenable to the same mode of solution.

7.
Many practical optimal control problems include discrete decisions. These may be either time-independent parameters or time-dependent control functions, such as gears or valves, that can only take discrete values at any given time. While great progress has been achieved in the solution of optimization problems involving integer variables, in particular mixed-integer linear programs, as well as in continuous optimal control problems, the combination of the two remains an open field of research. We consider the question of lower bounds that can be obtained by a relaxation of the integer requirements. For general nonlinear mixed-integer programs, such lower bounds typically suffer from a huge integer gap. We convexify (with respect to the binary controls) and relax the original problem, and prove that the optimal solution of this continuous control problem yields the best lower bound for the nonlinear integer problem. Building on this theoretical result, we present a novel algorithm to solve mixed-integer optimal control problems, with a focus on discrete-valued control functions. Our algorithm is based on the direct multiple shooting method, an adaptive refinement of the underlying control discretization grid, and tailored heuristic integer methods. Its applicability is shown by a challenging application, the energy-optimal control of a subway train with discrete gears and velocity limits.
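
The relaxation-plus-rounding structure described above can be illustrated on a toy problem. In the sketch below (an assumed example, with single shooting and sum-up rounding standing in for the paper's multiple shooting and tailored heuristics), the binary control is relaxed to [0, 1], the relaxed optimum provides the lower bound, and a rounding step recovers an integer-feasible control.

```python
# A minimal sketch under assumptions (toy problem, single shooting, sum-up rounding);
# not the authors' algorithm.  Toy problem:
#   min int_0^T (x - 0.6)^2 dt,  x' = -x + w,  x(0) = 0,  w(t) in {0, 1}.
import numpy as np
from scipy.optimize import minimize

T, N = 5.0, 50
dt = T / N

def simulate(w):
    """Explicit Euler simulation; returns the tracking objective."""
    x, J = 0.0, 0.0
    for wi in w:
        J += (x - 0.6) ** 2 * dt
        x += (-x + wi) * dt
    return J

# Stage 1: relaxed problem, w in [0,1] -> lower bound for the integer problem.
res = minimize(simulate, x0=0.5 * np.ones(N), method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * N)
a = res.x

# Stage 2: sum-up rounding of the relaxed control to a {0,1} control.
w, acc = np.zeros(N), 0.0
for i in range(N):
    acc += a[i] * dt
    if acc - w[:i].sum() * dt >= 0.5 * dt:
        w[i] = 1.0

print("relaxed lower bound :", res.fun)
print("integer control cost:", simulate(w))   # gap shrinks as the grid is refined
```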

8.
《Optimization》2012,61(3):237-244
In this paper, we consider a class of nonlinear optimal control problems (Bolza problems) with constraints on the control vector and initial and boundary conditions on the state vectors. The time interval is fixed. Our approach parametrizes both the state functions and the control functions by general piecewise polynomials with unknown coefficients (parameters) on a fixed partition of the time interval; each of these functions is approximated individually by such polynomials in a suitable way. The optimal control problem is thus reduced to a mathematical programming problem for these parameters. The existence of an optimal solution is assumed. Convergence properties of this method are not considered in this paper.
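
A minimal sketch of this parameterization idea, under assumptions (piecewise-linear pieces, a toy linear-quadratic problem, and SciPy's SLSQP as the mathematical programming solver), is given below: the nodal values of both the state and the control are the unknown parameters, and the dynamics become equality constraints of the resulting program.

```python
# A minimal sketch with an assumed toy problem; not the paper's scheme.
# Toy problem:  min int_0^1 (x^2 + u^2) dt,  x' = u,  x(0) = 1,  x(1) = 0.
import numpy as np
from scipy.optimize import minimize

N = 10                                   # intervals of the fixed time partition
dt = 1.0 / N

def split(z):
    return z[:N + 1], z[N + 1:]          # nodal values of x and of u

def cost(z):
    x, u = split(z)
    integrand = x ** 2 + u ** 2          # trapezoidal quadrature of the running cost
    return dt * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])

def defects(z):
    # dynamics x' = u enforced at interval midpoints of the piecewise-linear pieces
    x, u = split(z)
    return (x[1:] - x[:-1]) / dt - 0.5 * (u[1:] + u[:-1])

cons = [{"type": "eq", "fun": defects},
        {"type": "eq", "fun": lambda z: z[0] - 1.0},   # x(0) = 1
        {"type": "eq", "fun": lambda z: z[N]}]         # x(1) = 0

sol = minimize(cost, np.zeros(2 * (N + 1)), method="SLSQP", constraints=cons)
print("approximate optimal cost:", sol.fun)   # exact continuous optimum: coth(1) ~ 1.3130
```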

9.
We present a non-overlapping spatial domain decomposition method for the solution of linear–quadratic parabolic optimal control problems. The spatial domain is decomposed into non-overlapping subdomains, and the original parabolic optimal control problem is decomposed into smaller problems posed on space–time cylinder subdomains with auxiliary state and adjoint variables imposed as Dirichlet boundary conditions on the space–time interface boundary. The subdomain problems are coupled through Robin transmission conditions. This leads to a Schur complement equation in which the unknowns are the auxiliary state and adjoint variables on the space–time interface boundary. The Schur complement operator is the sum of space–time subdomain Schur complement operators, and the application of each of these subdomain operators is equivalent to the solution of a subdomain parabolic optimal control problem. The subdomain Schur complement operators are shown to be invertible, and the application of their inverses is equivalent to the solution of a related subdomain parabolic optimal control problem. We introduce a new family of Neumann–Neumann type preconditioners for the Schur complement system, including several different coarse grid corrections. We compare the numerical performance of our preconditioners with an alternative approach recently introduced by Benamou.

10.
For a zero-sum differential game, we consider an algorithm for constructing optimal control strategies by means of backward minimax constructions. The dynamics of the game is not necessarily linear, the players' controls satisfy geometric constraints, and the terminal payoff function satisfies a Lipschitz condition and is compactly supported. The game value function is computed by multilinear interpolation of grid functions. We show that the algorithm error can be made arbitrarily small if the discretization step in time is sufficiently small and the discretization step in the state space is of a higher order of smallness than the time step. We also show that the algorithm can be used for differential games with a terminal set. We present the results of computations for a problem of conflict control of a nonlinear pendulum.
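
A one-dimensional sketch of the backward minimax construction is given below. The dynamics, control sets and payoff are assumed for illustration only; multilinear interpolation reduces to linear interpolation in one state dimension, and the minimizing control at each node would define the first player's feedback strategy.

```python
# A minimal 1-D sketch (assumed game, not the article's algorithm or example):
# dynamics x' = u + 0.5*v, |u| <= 1, |v| <= 1, terminal payoff g(x) = max(0, 1 - |x|),
# which is Lipschitz and compactly supported.  Player u minimizes, player v maximizes.
import numpy as np

T, n_t = 1.0, 100
dt = T / n_t
xs = np.linspace(-4.0, 4.0, 401)            # state grid
U = np.linspace(-1.0, 1.0, 5)               # sampled control set of the minimizer
V = np.linspace(-1.0, 1.0, 5)               # sampled control set of the maximizer

value = np.maximum(0.0, 1.0 - np.abs(xs))   # V(T, x) = terminal payoff

for _ in range(n_t):                        # backward minimax construction
    new = np.empty_like(value)
    for i, x in enumerate(xs):
        best_u = np.inf
        for u in U:
            worst_v = -np.inf
            for v in V:
                x_next = x + (u + 0.5 * v) * dt
                worst_v = max(worst_v, np.interp(x_next, xs, value))
            best_u = min(best_u, worst_v)
        new[i] = best_u
    value = new

print("game value at x0 = 0.5:", np.interp(0.5, xs, value))
```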

11.
In this paper we consider an optimal control system described by an n-dimensional heat equation with a thermal source. The problem is to find an optimal control that puts the system into a stationary regime in a finite time T and minimizes a general objective function; we assume there are no constraints on the control. This problem is reduced to a moment problem. We modify the moment problem into one consisting of the minimization of a positive linear functional over a set of Radon measures and show that there is an optimal measure corresponding to the optimal control. This optimal measure is then approximated by a finite combination of atomic measures. The construction gives rise to a finite-dimensional linear programming problem whose solution can be used to determine the optimal combination of atomic measures. Using the solution of this linear programming problem, we find a piecewise-constant control function that is an approximate optimal control for the original problem. Finally, we obtain piecewise-constant optimal controls for two examples of heat equations with a thermal source in one dimension.
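
The measure-theoretic step can be sketched as follows on an assumed toy moment problem (two exponential moment conditions standing in for the heat-equation moments; this is not the paper's derivation): the positive measure is replaced by atoms on a time grid, the minimization of the positive linear functional becomes a finite-dimensional linear program, and the optimal atomic weights yield a piecewise-constant control.

```python
# A minimal sketch with an assumed toy moment problem; not the paper's derivation.
# Assumed moment conditions for two modes on [0, T]:
#   int_0^T exp(-(n*pi)^2 (T - t)) u(t) dt = c_n,  n = 1, 2,  u >= 0,
# and the positive linear functional to minimize is taken as int_0^T u(t) dt.
import numpy as np
from scipy.optimize import linprog

T, m = 0.5, 60
t = np.linspace(0.0, T, m)                  # atom locations
dt = T / (m - 1)
c_target = np.array([1.0, 0.2])             # assumed moment targets

# Column j of A_eq holds the moment kernels evaluated at atom t_j.
A_eq = np.array([[np.exp(-(n * np.pi) ** 2 * (T - tj)) for tj in t]
                 for n in (1, 2)])

res = linprog(c=np.ones(m),                 # minimize the total mass sum_j w_j
              A_eq=A_eq, b_eq=c_target,
              bounds=[(0.0, None)] * m)

u_piecewise = res.x / dt                    # piecewise-constant control heights
print("minimal functional value:", res.fun)
print("active atoms (t_j, u_j):",
      [(round(tj, 3), round(uj, 2)) for tj, uj in zip(t, u_piecewise) if uj > 1e-8])
```

As expected for a basic solution of the linear program, the optimal measure is supported on only a few atoms.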

12.
We consider a controlled system driven by a coupled forward–backward stochastic differential equation with a non-degenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control for the initial problem. The existence of a strict control follows from the Filippov convexity condition.

13.
We consider the minimization problem for an integral functional, in a separable Hilbert space, with an integrand that is not convex in the control, defined on solutions of a control system described by nonlinear evolution equations with mixed nonconvex constraints. The evolution operator of the system is the subdifferential of a proper, convex, lower semicontinuous function depending on time. Along with the initial problem, we consider the relaxed problem with the convexified control constraint and the integrand convexified with respect to the control. Under sufficiently general assumptions, it is proved that the relaxed problem has an optimal solution and that, for any optimal solution, there exists a minimizing sequence of the initial problem converging to the optimal solution with respect to the trajectories and the functional. An example of a controlled parabolic variational inequality with an obstacle is considered in detail. Translated from Sovremennaya Matematika i Ee Prilozheniya (Contemporary Mathematics and Its Applications), Vol. 26, Nonlinear Dynamics, 2005.

14.
This paper considers a free terminal time optimal control problem governed by a nonlinear time-delayed system, where both the terminal time and the control are to be determined such that a cost function is minimized subject to continuous inequality state constraints. To solve this problem, the control parameterization technique is applied to approximate the control as a piecewise-constant function, where both the heights and the switching times are regarded as decision variables. In this way, the free terminal time optimal control problem is approximated by a sequence of optimal parameter selection problems governed by nonlinear time-delayed systems, each of which can be viewed as a nonlinear optimization problem. A fully informed particle swarm optimization method is then adopted to solve the approximate problem. Finally, two free terminal time optimal control problems, including an optimal fishery control problem, are solved by the proposed method to demonstrate its applicability.
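
The sketch below illustrates the parameterization on an assumed toy delay problem: the terminal time, the switching times and the control heights are all decision variables of a finite-dimensional problem. SciPy's differential evolution is used here as a population-based stand-in for the fully informed particle swarm optimizer of the paper.

```python
# A minimal sketch with an assumed toy problem and a stand-in optimizer.
# Assumed system: x'(t) = -x(t - tau) + u(t), x(s) = 1 for s <= 0, tau = 0.25,
# piecewise-constant u with 3 heights and 2 switching times, free terminal time T,
# cost J = 0.2*T + 5*x(T)^2 + int_0^T x(t)^2 dt.
import numpy as np
from scipy.optimize import differential_evolution

TAU, N_STEPS = 0.25, 200

def cost(z):
    T, s1, s2, u1, u2, u3 = z
    t_switch = np.sort([s1, s2]) * T            # switching times inside [0, T]
    heights = [u1, u2, u3]
    h = T / N_STEPS
    lag = int(round(TAU / h))
    x_hist = [1.0]                              # x(t) = 1 for all t <= 0
    J = 0.0
    for k in range(N_STEPS):
        t = k * h
        u = heights[np.searchsorted(t_switch, t, side="right")]
        x = x_hist[-1]
        x_delayed = x_hist[k - lag] if k - lag >= 0 else 1.0
        J += x ** 2 * h
        x_hist.append(x + h * (-x_delayed + u))  # explicit Euler with delay buffer
    return 0.2 * T + 5.0 * x_hist[-1] ** 2 + J

bounds = [(0.5, 3.0),                           # terminal time T
          (0.0, 1.0), (0.0, 1.0),               # switching times as fractions of T
          (-2.0, 2.0), (-2.0, 2.0), (-2.0, 2.0)]  # control heights

res = differential_evolution(cost, bounds, seed=0, maxiter=40, tol=1e-6)
print("optimal cost ~", res.fun, " terminal time ~", res.x[0])
```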

15.
We consider a general continuous-time, finite-dimensional deterministic system with a finite-horizon cost functional. Our aim is to compute approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically, one based on a time-difference approximation and the other on a time–space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence for the optimal feedback control of the continuous counterpart. An example is presented for the time–space algorithm; the results illustrate that the scheme is effective.
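
A one-dimensional sketch of the time–space approximation is given below for an assumed linear-quadratic example (not the article's schemes or convergence proofs): backward dynamic programming on a time-state grid approximates the value function, and the minimizing control at each node gives a discrete feedback whose closed-loop cost approaches the optimal value as the grid is refined.

```python
# A minimal 1-D sketch with an assumed problem; not the article's schemes.
# Assumed problem: min int_0^1 (x^2 + u^2) dt + x(1)^2,  x' = u,  |u| <= 1.
import numpy as np

T, n_t = 1.0, 100
dt = T / n_t
xs = np.linspace(-2.0, 2.0, 201)                  # state grid
us = np.linspace(-1.0, 1.0, 21)                   # sampled control set

V = xs ** 2                                       # terminal cost
policy = []                                       # feedback u_k(x_i), built backward
for _ in range(n_t):
    Q = np.empty((us.size, xs.size))
    for j, u in enumerate(us):
        x_next = np.clip(xs + u * dt, xs[0], xs[-1])
        Q[j] = (xs ** 2 + u ** 2) * dt + np.interp(x_next, xs, V)
    V = Q.min(axis=0)
    policy.append(us[Q.argmin(axis=0)])
policy.reverse()                                  # policy[k] is the feedback at t_k

# Closed-loop simulation with the discrete feedback (an element of a minimizing sequence).
x, J = 1.0, 0.0
for k in range(n_t):
    u = np.interp(x, xs, policy[k])
    J += (x ** 2 + u ** 2) * dt
    x += u * dt
print("value V(0, 1) ~", np.interp(1.0, xs, V), " closed-loop cost ~", J + x ** 2)
```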

16.
In this paper, we obtain the discrete optimality system of an optimal harvesting problem. We maximize a combination of the total expected utility of the consumption and of the terminal size of a population, where, as a dynamic constraint, the density of the population is modeled by a stochastic quasi-linear heat equation. Finite-difference and symplectic partitioned Runge–Kutta (SPRK) schemes are used for the space and time discretizations, respectively; this is the first time that an SPRK scheme has been employed for the optimal control of stochastic partial differential equations. Monte-Carlo simulation is applied to handle the expectation appearing in the cost functional. We present our results together with a numerical example. The paper ends with a conclusion and an outlook on further research questions and applications.
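
The Monte-Carlo ingredient can be sketched as follows. The population model, its finite-difference space discretization and the Euler–Maruyama time stepping below are all assumptions for illustration (the paper uses an SPRK scheme and an optimality system); the point is only how the expectation in the cost functional is estimated by averaging over sample paths for a candidate harvesting rate.

```python
# A minimal sketch with an assumed model and discretization; not the paper's scheme.
# Assumed model: dy = (D*y_xx + r*y*(1 - y) - h*y) dt + sigma*y dW on (0, 1),
# homogeneous Neumann boundary, objective E[int_0^T log(h*Y(t)) dt + kappa*Y(T)],
# where Y(t) is the total population and h is a constant harvesting rate.
import numpy as np

rng = np.random.default_rng(0)
D, r, sigma, kappa = 0.01, 1.0, 0.2, 1.0
T, n_t, n_x = 2.0, 400, 50
dt, dx = T / n_t, 1.0 / n_x

def expected_objective(h, n_paths=200):
    total = 0.0
    for _ in range(n_paths):
        y = 0.5 * np.ones(n_x)                      # initial density
        J = 0.0
        for _ in range(n_t):
            Y = y.sum() * dx                        # total population
            J += np.log(max(h * Y, 1e-12)) * dt     # utility of consumption
            lap = np.empty_like(y)                  # Neumann finite-difference Laplacian
            lap[1:-1] = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx ** 2
            lap[0] = 2 * (y[1] - y[0]) / dx ** 2
            lap[-1] = 2 * (y[-2] - y[-1]) / dx ** 2
            dW = rng.normal(0.0, np.sqrt(dt), n_x)  # crude per-cell stand-in for the noise
            y = y + (D * lap + r * y * (1 - y) - h * y) * dt + sigma * y * dW
            y = np.maximum(y, 0.0)
        total += J + kappa * y.sum() * dx
    return total / n_paths                          # Monte-Carlo estimate of the expectation

for h in (0.2, 0.4, 0.6):
    print(f"h = {h:.1f}:  estimated objective ~ {expected_objective(h):.3f}")
```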

17.
18.
Under the framework of switched systems, this paper considers a multi-proportional-integral-derivative (PID) controller parameter tuning problem with terminal equality constraints and continuous-time inequality constraints. The switching times and controller parameters are decision variables to be chosen optimally. Firstly, we transform the optimal control problem into an equivalent problem with fixed switching instants by introducing an auxiliary function and a time-scaling transformation. Because of the complexity of the constraints, it is difficult to solve the problem by conventional optimization techniques. To overcome this difficulty, a novel exact penalty function is introduced for these constraints. The penalty function is appended to the cost functional to form an augmented cost functional, giving rise to an approximate nonlinear parameter optimization problem that can be solved using any gradient-based method. Convergence results indicate that any local optimal solution of the approximate problem is also a local optimal solution of the original problem as long as the penalty parameter is sufficiently large. Finally, an example is provided to illustrate the effectiveness of the developed algorithm.
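
The solution strategy can be sketched on an assumed toy switched system (a simple proportional controller and a quadratic penalty stand in for the multi-PID structure and the exact penalty function of the paper): the time-scaling transformation places each mode on a fixed grid in scaled time with the mode durations as parameters, the constraint violations are appended to the cost, and the augmented cost is minimized with a gradient-based solver.

```python
# A minimal sketch with an assumed toy problem and a simple quadratic penalty;
# not the paper's exact penalty function or multi-PID setup.
# Assumed problem: modes x' = x + u then x' = -x + u with u = -Kp*x, x(0) = 1,
# decision variables (d1, d2, Kp) = mode durations and gain, terminal constraint
# x(T) = 0.2, continuous constraint x(t) >= 0, cost int (x^2 + 0.1*u^2) dt.
import numpy as np
from scipy.optimize import minimize

N_PER_MODE, RHO = 200, 1.0e3          # grid points per mode, penalty parameter

def augmented_cost(z):
    d1, d2, kp = z
    x, J, violation = 1.0, 0.0, 0.0
    for mode, duration in ((0, d1), (1, d2)):
        # time-scaling: each mode occupies a fixed unit interval in scaled time s,
        # and dt = duration * ds on that interval, so the switching instant is fixed.
        ds = 1.0 / N_PER_MODE
        for _ in range(N_PER_MODE):
            u = -kp * x
            f = (x + u) if mode == 0 else (-x + u)
            J += (x ** 2 + 0.1 * u ** 2) * duration * ds
            violation += max(0.0, -x) ** 2 * duration * ds   # x(t) >= 0
            x += f * duration * ds
    violation += (x - 0.2) ** 2                               # terminal equality constraint
    return J + RHO * violation                                # augmented cost functional

res = minimize(augmented_cost, x0=[0.5, 0.5, 1.0], method="L-BFGS-B",
               bounds=[(0.01, 2.0), (0.01, 2.0), (0.0, 10.0)])
print("durations and gain:", res.x, " penalized cost:", res.fun)
```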

19.
We consider the minimization problem for an integral functional with an integrand that is not convex in the control, on solutions of a control system described by a fractional differential equation with mixed nonconvex constraints on the control. A relaxation problem is treated along with the original problem. It is proved that, under general assumptions, the relaxation problem has an optimal solution and that, for each optimal solution, there is a minimizing sequence of the original problem that converges to the optimal solution with respect to the trajectory, the control, and the functional simultaneously in appropriate topologies.

20.
In this paper, we first design a time-optimal control problem for the heat equation with sampled-data controls, and then use it to approximate a time-optimal control problem for the heat equation with distributed controls. The study of such a time-optimal sampled-data control problem is not easy, because it may have infinitely many optimal controls. We find connections among this problem, a minimal-norm sampled-data control problem, and a minimization problem, and obtain some properties of these problems. Based on these, we not only build up error estimates, in terms of the sampling period, for the optimal time and optimal controls between the time-optimal sampled-data control problem and the time-optimal distributed control problem, but we also prove that such estimates are optimal in some sense.
