Similar Articles
 Found 20 similar articles (search time: 46 ms)
1.
We consider a stochastic control problem for a random evolution. We study the Bellman equation of the problem and we prove the existence of an optimal stochastic control which is Markovian. This problem enables us to approximate the general problem of the optimal control of solutions of stochastic differential equations.

2.
We consider the optimal control problem for a system governed by a nonlinear hyperbolic equation without any constraints on the parameter of nonlinearity. No uniqueness theorem is established for a solution to this problem. The control-state mapping of this system is not Gateaux differentiable. We study an approximate solution of the optimal control problem by means of the penalty method.

3.
We consider the problem of determining an optimal driving strategy in a train control problem with a generalised equation of motion. We assume that the journey must be completed within a given time and seek a strategy that minimises fuel consumption. On the one hand we consider the case where continuous control can be used and on the other hand we consider the case where only discrete control is available. We pay particular attention to a unified development of the two cases. For the continuous control problem we use the Pontryagin principle to find necessary conditions on an optimal strategy and show that these conditions yield key equations that determine the optimal switching points. In the discrete control problem, which is the typical situation with diesel-electric locomotives, we show that for each fixed control sequence the cost of fuel can be minimised by finding the optimal switching times. The corresponding strategies are called strategies of optimal type and in this case we use the Kuhn–Tucker equations to find key equations that determine the optimal switching times. We note that the strategies of optimal type can be used to approximate as closely as we please the optimal strategy obtained using continuous control and we present two new derivations of the key equations. We illustrate our general remarks by reference to a typical train control problem.
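The discrete-control setting described above, a fixed control sequence whose switching times are the only decision variables, can be sketched on a toy model. Everything below (the traction law, resistance coefficient, fuel rate, and journey data) is an illustrative assumption, not the article's generalised equation of motion; with the power-coast-brake sequence fixed, fuel is minimised by a direct search over the two switching times subject to completing the journey.

```python
# Toy model of discrete train control: a fixed power -> coast -> brake
# sequence on a flat track; only the two switching times are optimized.
# All dynamics constants below are illustrative, not from the article.

def simulate(t1, t2, T, dt=0.02):
    """Distance covered and fuel burned for power on [0,t1),
    coast on [t1,t2), brake on [t2,T]."""
    v = x = fuel = 0.0
    t = 0.0
    while t < T:
        if t < t1:                      # power: traction minus resistance
            a = 1.0 - 0.05 * v
            fuel += 1.0 * dt            # constant fuel rate while powering
        elif t < t2:                    # coast: resistance only
            a = -0.05 * v
        else:                           # brake
            a = -1.0 - 0.05 * v
        v = max(0.0, v + a * dt)
        x += v * dt
        t += dt
    return x, fuel

def best_switch_times(D, T, n=40):
    """Brute-force search over switching times; a strategy is feasible
    if it covers at least distance D within the journey time T."""
    best = None
    for i in range(1, n):
        for j in range(i, n + 1):
            t1, t2 = T * i / n, T * j / n
            x, fuel = simulate(t1, t2, T)
            if x >= D and (best is None or fuel < best[0]):
                best = (fuel, t1, t2)
    return best                         # (fuel, t1, t2) or None
```

With these made-up constants the fuel cost equals the length of the power phase, so the search effectively selects the shortest power phase that still covers the distance; the article's Kuhn–Tucker key equations characterise the same switching times analytically.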

4.
Optimization, 2012, 61(5): 677-687
We consider the problem of approximate minimax for the Bolza problem of optimal control. Starting from the method of dynamic programming (Bellman), we define the ε-value function as an approximation of the value function, which is a solution to the Hamilton–Jacobi equation.

5.
We consider the problem of optimal control for the wave equation. For the formulated problem, we find the optimal control in the form of a feedback in the case where the control reaches a constraint, construct an approximate control, and substantiate its correctness, i.e., prove that the proposed control realizes the minimum of the quality criterion. Translated from Ukrains’kyi Matematychnyi Zhurnal, Vol. 59, No. 8, pp. 1094–1104, August, 2007.

6.
We consider an optimal control problem for systems governed by ordinary differential equations with control constraints. The state equation is discretized by the explicit fourth order Runge-Kutta scheme and the controls are approximated by discontinuous piecewise affine ones. We then propose an approximate gradient projection method that generates sequences of discrete controls and progressively refines the discretization during the iterations. Instead of using the exact discrete directional derivative, which is difficult to calculate, we use an approximate derivative of the cost functional defined by discretizing the continuous adjoint equation by the same Runge-Kutta scheme and the integral involved by Simpson's integration rule, both involving intermediate approximations. The main result is that accumulation points, if they exist, of sequences constructed by this method satisfy the weak necessary conditions for optimality for the continuous problem. Finally, numerical examples are given.
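A stripped-down version of this discretize-then-optimize idea: piecewise-constant controls, a discrete state solve, a discrete adjoint, and a projected gradient step. The model problem (x' = u, x(0) = 1, cost the integral of x² + u², constraint |u| ≤ 1) and the explicit Euler adjoint are illustrative simplifications, not the paper's Runge-Kutta/Simpson machinery.

```python
# Approximate gradient projection on a discretized control problem.
# Model: minimize  J = int_0^1 (x^2 + u^2) dt,  x' = u, x(0) = 1, |u| <= 1.
# Adjoint: p' = -2x with p(1) = 0; gradient of the Hamiltonian is 2u + p.
# The simple model and Euler adjoint are illustrative assumptions.

def projected_gradient(n=100, iters=200, step=0.2):
    h = 1.0 / n
    u = [0.0] * n                        # piecewise-constant control
    x = [1.0]
    for _ in range(iters):
        # forward sweep: for x' = u with u frozen on each step,
        # the RK4 update coincides with x_{k+1} = x_k + h * u_k
        x = [1.0]
        for k in range(n):
            x.append(x[-1] + h * u[k])
        # backward sweep: adjoint p' = -2x, p(1) = 0 (explicit Euler)
        p = [0.0] * (n + 1)
        for k in range(n - 1, -1, -1):
            p[k] = p[k + 1] + h * 2.0 * x[k + 1]
        # gradient step, then projection onto the control set [-1, 1]
        u = [min(1.0, max(-1.0, u[k] - step * (2.0 * u[k] + p[k])))
             for k in range(n)]
    return u, x
```

For this instance the constraint is inactive and the iterates approach the LQ solution u(t) = -tanh(1 - t)·x(t), so u[0] ≈ -0.76; accumulation points of such discrete sequences are exactly what the weak necessary conditions in the paper describe.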

7.
In this paper we consider an optimal control system described by an n-dimensional heat equation with a thermal source. The problem is to find an optimal control which puts the system, in a finite time T, into a stationary regime while minimizing a general objective function. Here we assume there are no constraints on the control. This problem is reduced to a moment problem. We modify the moment problem into one consisting of the minimization of a positive linear functional over a set of Radon measures, and we show that there is an optimal measure corresponding to the optimal control. This optimal measure is approximated by a finite combination of atomic measures. This construction gives rise to a finite-dimensional linear programming problem, whose solution can be used to determine the optimal combination of atomic measures. Then, using the solution of this linear programming problem, we find a piecewise-constant optimal control function which is an approximate control for the original optimal control problem. Finally, we obtain piecewise-constant optimal controls for two examples of heat equations with a thermal source in one dimension.
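The measure-theoretic reduction can be shown in miniature for the simplest case of a single moment constraint, where the optimal measure over a grid of candidate atoms is a single atom (with m moment constraints, up to m atoms suffice at a linear-programming vertex). The integrands c, a and the moment value b below are hypothetical examples, not the paper's heat-equation data.

```python
# Minimize a linear functional over nonnegative measures on a grid of atoms,
# subject to one moment constraint:
#     min  sum_t c(t) w(t)   s.t.   sum_t a(t) w(t) = b,  w >= 0.
# Assuming a(t) > 0 on the grid, an optimal solution puts all mass on the
# atom that minimizes the ratio c(t) / a(t).  (Illustrative example only.)

def optimal_atomic_measure(c, a, b, atoms):
    """Return the optimal one-atom measure as a dict {atom: weight}."""
    t_star = min(atoms, key=lambda t: c(t) / a(t))
    return {t_star: b / a(t_star)}      # all mass on the best atom
```

In the paper's setting the linear program has several moment constraints, so the optimal measure is a small combination of atoms, from which the piecewise-constant control is read off.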

8.
In this work we consider an L∞ minimax ergodic optimal control problem with cumulative cost. We approximate the cost function as a limit of evolution problems. We present the associated Hamilton–Jacobi–Bellman equation and prove that it has a unique solution in the viscosity sense. Since this HJB equation admits a consistent discretization, we use that discretization to obtain a numerical procedure for the primitive problem. For the numerical solution of the ergodic version we need a perturbation of the instantaneous cost function. We give an appropriate selection of the discretization and penalization parameters to obtain discrete solutions that converge to the optimal cost. We present numerical results. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
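A consistent discretization of a stationary Hamilton-Jacobi-Bellman equation can be solved by value iteration, as sketched below. The model (x' = u on [-2.5, 2.5] with u in {-1, 0, 1}, running cost x² + |u|, discount rate lam) is an illustrative stand-in for the ergodic problem of the paper, whose perturbation and penalization details are not reproduced here.

```python
# Value iteration for a discounted HJB equation on a 1-D grid.
# The time step is tied to the grid step so each control moves one cell.
# The model problem is an illustrative assumption, not the paper's.

def value_iteration(n=101, lam=0.5, sweeps=500):
    h = 5.0 / (n - 1)                   # space step; time step chosen equal
    xs = [-2.5 + i * h for i in range(n)]
    V = [0.0] * n
    beta = 1.0 / (1.0 + lam * h)        # per-step discount factor
    for _ in range(sweeps):
        V = [min((xs[i] ** 2 + abs(u)) * h
                 + beta * V[min(n - 1, max(0, i + int(u)))]
                 for u in (-1, 0, 1))
             for i in range(n)]
    return xs, V
```

Because beta < 1 the iteration is a contraction, so it converges to a unique discrete solution, the grid-level counterpart of uniqueness in the viscosity sense.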

9.
We consider the optimal control problem for systems described by nonlinear equations of elliptic type. If the nonlinear term in the equation is smooth and the nonlinearity increases at a comparatively low rate of growth, then necessary conditions for optimality can be obtained by well-known methods. For small values of the nonlinearity exponent in the smooth case, we propose to approximate the state operator by a certain differentiable operator. We show that the solution of the approximate problem obtained by standard methods ensures that the optimality criterion for the initial problem is close to its minimal value. For sufficiently large values of the nonlinearity exponent, the dependence of the state function on the control is nondifferentiable even under smoothness conditions for the operator. But this dependence becomes differentiable in a certain extended sense, which is sufficient for obtaining necessary conditions for optimality. Finally, if there is no smoothness and no restrictions are imposed on the nonlinearity exponent of the equation, then a smooth approximation of the state operator is possible. Next, we obtain necessary conditions for optimality of the approximate problem using the notion of extended differentiability of the solution of the equation approximated with respect to the control, and then we show that the optimal control of the approximated extremum problem minimizes the original functional with arbitrary accuracy.

10.
In this paper, we consider a class of optimal control problems involving a second-order, linear parabolic partial differential equation with Neumann boundary conditions. The time-delayed arguments are assumed to appear in the boundary conditions. A necessary and sufficient condition for optimality is derived, and an iterative method for solving this optimal control problem is proposed. The convergence property of this iterative method is also investigated. On the basis of a finite-element Galerkin scheme, we convert the original distributed optimal control problem into a sequence of approximate problems involving only lumped-parameter systems. A computational algorithm is then developed for each of these approximate problems. For illustration, a one-dimensional example is solved.

11.
We consider a control problem for a string equation. The control is provided by an external load function, which is determined in the course of the solution of the problem in closed form. We obtain criteria for approximate null-controllability and null-controllability.

12.
Analytical solutions for the Cahn-Hilliard initial value problem are obtained through an application of the homotopy analysis method. While there exist numerical results in the literature for the Cahn-Hilliard equation, a nonlinear partial differential equation, the present results are completely analytical. In order to obtain accurate approximate analytical solutions, we consider multiple auxiliary linear operators, seeking the operator that permits accuracy after relatively few terms are calculated. We also select the convergence-control parameter optimally, through the construction of an optimal control problem for the minimization of the accumulated L2-norm of the residual errors. In this way, we obtain optimal homotopy analysis solutions for this complicated nonlinear initial value problem. A variety of initial conditions are selected, in order to fully demonstrate the range of solutions possible.
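The optimal selection of the convergence-control parameter can be illustrated in miniature on the test problem y' + y = 0, y(0) = 1 (not the Cahn-Hilliard equation): build the homotopy series with auxiliary linear operator L = d/dx, then pick the parameter c that minimizes the averaged squared residual on [0, 1]. All choices below are illustrative assumptions.

```python
# Tiny homotopy-analysis sketch with optimal convergence-control parameter.
# Polynomials are coefficient lists [a0, a1, ...]; the model ODE is y' + y = 0.

def polyint(p):                          # antiderivative with zero constant
    return [0.0] + [p[i] / (i + 1) for i in range(len(p))]

def polyder(p):
    return [i * p[i] for i in range(1, len(p))] or [0.0]

def polyadd(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(m)]

def polyval(p, x):
    r = 0.0
    for coef in reversed(p):
        r = r * x + coef
    return r

def ham_series(c, terms=6):
    """Sum of the first `terms` homotopy deformation terms for N[y] = y' + y."""
    ys = [[1.0]]                         # y0 from the initial condition
    for k in range(1, terms):
        R = polyadd(polyder(ys[-1]), ys[-1])   # residual of y' + y
        chi = 1.0 if k >= 2 else 0.0
        ys.append(polyadd([chi * a for a in ys[-1]],
                          [c * a for a in polyint(R)]))
    total = [0.0]
    for y in ys:
        total = polyadd(total, y)
    return total

def residual_sq(c, pts=21):
    """Averaged squared residual of the truncated series on [0, 1]."""
    y = ham_series(c)
    dy = polyder(y)
    return sum((polyval(dy, x) + polyval(y, x)) ** 2
               for x in (i / (pts - 1) for i in range(pts))) / pts

def optimal_c():
    grid = [-2.0 + 0.01 * i for i in range(201)]   # scan c in [-2, 0]
    return min(grid, key=residual_sq)
```

For this linear problem c = -1 reproduces the Taylor series of e^{-x} and the residual-minimizing c lands close to it; for genuinely nonlinear problems such as Cahn-Hilliard the optimal c may move well away from -1, which is the point of optimizing it.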

13.
Optimization, 2012, 61(4): 621-634
We consider an optimal control problem for an abstract Itô equation on a Gelfand triple of Hilbert spaces. This control problem is approximated by means of a family of optimal control problems for elliptic systems.

14.
We consider the general continuous time finite-dimensional deterministic system under a finite horizon cost functional. Our aim is to calculate approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically. One is in terms of the time difference approximation and the other the time-space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence of the optimal feedback control of the continuous counterpart. An example is presented for the time-space algorithm; the results illustrate that the scheme is effective.
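A time-space scheme of the kind discussed can be sketched as dynamic programming marched backward in time on a spatial grid, with linear interpolation between nodes. The model problem (x' = u, running cost x² + u² on [0, 1], free terminal state) is an illustrative choice, not the paper's general system; its exact value function V(t, x) = tanh(1 - t)·x² is known, so the scheme can be sanity-checked.

```python
# Semi-Lagrangian time-space scheme for an evolutive HJB equation.
# Grid sizes and the model problem are illustrative assumptions.

def solve_hjb(T=1.0, xmax=2.0, nx=201, nt=50, nu=81):
    dt = T / nt
    hx = 2.0 * xmax / (nx - 1)
    xs = [-xmax + i * hx for i in range(nx)]
    us = [-2.0 + 4.0 * j / (nu - 1) for j in range(nu)]

    def interp(V, x):                    # piecewise-linear interpolation
        x = min(xmax, max(-xmax, x))
        i = min(nx - 2, int((x + xmax) / hx))
        w = (x - xs[i]) / hx
        return (1.0 - w) * V[i] + w * V[i + 1]

    V = [0.0] * nx                       # terminal condition V(T, .) = 0
    for _ in range(nt):                  # march backward in time
        V = [min((x * x + u * u) * dt + interp(V, x + u * dt) for u in us)
             for x in xs]
    return xs, V                         # V approximates V(0, .)
```

Refining nx, nt, and nu drives V(0, 1) toward tanh(1) ≈ 0.7616, which is the kind of convergence to the viscosity solution that the paper proves for its two schemes.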

15.
A minimax optimal control problem with infinite horizon is studied. We analyze a relaxation of the controls, which allows us to consider a generalization of the original problem that not only has existence of an optimal control but also enables us to approximate the infinite-horizon problem with a sequence of finite-horizon problems. We give a set of conditions that are sufficient to solve directly, without relaxation, the infinite-horizon problem as the limit of finite-horizon problems.

16.
Radouen Ghanem, Positivity, 2009, 13(2): 321-338
We consider an optimal control problem for the obstacle problem with an elliptic variational inequality. The obstacle function, which serves as the control, is assumed to lie in H². We use an approximation technique to introduce a family of problems governed by variational equations. We prove the existence of optimal solutions and give necessary optimality conditions. The author is grateful to Prof. M. Bergounioux for her instructive suggestions.

17.
In this paper, we consider an optimal control problem in which the control takes values from a discrete set and the state and control are subject to continuous inequality constraints. By introducing auxiliary controls and applying a time-scaling transformation, we transform this optimal control problem into an equivalent problem subject to additional linear and quadratic constraints. The feasible region defined by these additional constraints is disconnected, and thus standard optimization methods struggle to handle these constraints. We introduce a novel exact penalty function to penalize constraint violations, and then append this penalty function to the objective. This leads to an approximate optimal control problem that can be solved using standard software packages such as MISER. Convergence results show that when the penalty parameter is sufficiently large, any local solution of the approximate problem is also a local solution of the original problem. We conclude the paper with some numerical results for two difficult train control problems.
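The time-scaling transformation can be sketched in miniature: the control runs through a fixed sequence of discrete values u_k, the unknowns are the mode durations theta_k, and in scaled time s in [k, k+1] the dynamics become dx/ds = theta_k · f(x, u_k), so every candidate is integrated on the same fixed grid. The dynamics, target, and penalty weight below are illustrative guesses, and the smooth quadratic penalty is a classic stand-in, not the paper's exact penalty.

```python
# Time-scaling transformation with a penalized terminal condition.
# Model (illustrative): dx/dt = u - x, mode sequence u = (2, 0) on [0, 1].

def simulate(thetas, useq, x0=0.0, nsub=200):
    """Integrate dx/dt = u - x through the mode sequence via time scaling."""
    x = x0
    ds = 1.0 / nsub
    for theta, u in zip(thetas, useq):
        for _ in range(nsub):            # explicit Euler in scaled time
            x += ds * theta * (u - x)
    return x

def cost(theta1, T=1.0, target=0.5, w=50.0):
    """Fuel (2 * theta1) plus a quadratic penalty on the terminal miss."""
    xT = simulate([theta1, T - theta1], [2.0, 0.0])
    return w * (xT - target) ** 2 + 2.0 * theta1

def best_duration(n=200, T=1.0):
    """Grid search over the first mode's duration."""
    return min((cost(T * i / n), T * i / n) for i in range(n + 1))[1]
```

The paper's point is that an exact penalty handles the disconnected feasible region created by the discrete-valued control constraints, which the smooth penalty used here only approximates.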

18.
We consider an optimal coefficient control problem for a quasilinear parabolic equation. We study the well-posedness of the problem and obtain a necessary condition for optimality.

19.
We consider a nonlinear optimal control problem with an integral equation as the control object, subject to control constraints. This integral equation corresponds to the fractional moment of a stochastic process involving short-range and long-range dependence. For both cases, we derive the first-order necessary optimality conditions in the form of the Euler–Lagrange equation, and then apply them to obtain a numerical solution of the problem of optimal portfolio selection.

20.
In this paper, the task of achieving the soft landing of a lunar module such that the fuel consumption and the flight time are minimized is formulated as an optimal control problem. The motion of the lunar module is described in a three-dimensional coordinate system. We obtain the form of the optimal closed-loop control law, in which a feedback gain matrix is involved. It is then shown that this feedback gain matrix satisfies a Riccati-like matrix differential equation. The optimal control problem is first solved as an open-loop optimal control problem by using a time-scaling transform and the control parameterization method. Then, by virtue of the relationship between the optimal open-loop control and the optimal closed-loop control along the optimal trajectory, we present a practical method to calculate an approximate optimal feedback gain matrix, without having to solve an optimal control problem involving the complex Riccati-like matrix differential equation coupled with the original system dynamics. Simulation results show that the proposed approach is highly effective.
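The Riccati route to a feedback gain can be sketched in the scalar case (the lunar-module model is a matrix version of the same structure). The system x' = a·x + b·u with cost the integral of q·x² + r·u² on [0, T], and all constants below, are illustrative assumptions, not the paper's model.

```python
# Scalar Riccati equation for the LQ feedback gain, integrated backward,
# then a closed-loop simulation under the resulting feedback law.

def riccati_gain(T=1.0, n=1000, a=0.0, b=1.0, q=1.0, r=1.0):
    """Integrate P' = -2aP + (b^2/r)P^2 - q backward from P(T) = 0;
    the feedback gain is K(t) = (b / r) * P(t)."""
    dt = T / n
    P = [0.0] * (n + 1)
    for k in range(n - 1, -1, -1):       # backward Euler-type sweep
        Pn = P[k + 1]
        P[k] = Pn + dt * (2.0 * a * Pn - (b * b / r) * Pn ** 2 + q)
    return P

def closed_loop(x0=1.0, T=1.0, n=1000):
    """Simulate x' = u with feedback u = -K(t) x and accumulate the cost."""
    P = riccati_gain(T, n)
    dt = T / n
    x, J = x0, 0.0
    for k in range(n):
        u = -P[k] * x                    # with b = r = 1 the gain K equals P
        J += dt * (x * x + u * u)
        x += dt * u
    return x, J
```

With these constants P(t) = tanh(T - t), so the simulated closed-loop cost approaches P(0)·x0² ≈ tanh(1); the paper's contribution is obtaining such a gain from the open-loop solution without integrating the Riccati-like matrix equation coupled to the full dynamics.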

