Similar Literature
20 similar documents found (search time: 31 ms)
1.
It is well known in optimal control theory that the maximum principle, in general, furnishes only necessary optimality conditions for an admissible process to be an optimal one. It is also well known that if a process satisfies the maximum principle in a problem with convex data, the maximum principle turns out to be a sufficient condition as well. Here an invexity-type condition for state-constrained optimal control problems is defined and shown to be a sufficient optimality condition. Further, it is demonstrated that all optimal control problems in which every extremal process is optimal necessarily obey this invexity condition. Thus optimal control problems satisfying such a condition constitute the most general class of problems where the maximum principle automatically becomes a set of sufficient optimality conditions.
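None of the abstracts above works through a computation, so as a minimal, hypothetical illustration of how a maximum-principle extremal is actually found (and why convexity, as in entry 1, makes the principle sufficient), here is a scalar LQ sketch: minimize J = ∫₀¹ (x² + u²) dt subject to ẋ = u, x(0) = 1. The problem, numbers, and shooting approach are my own toy example, not taken from any of the cited papers.

```python
import math

# Hamiltonian (minimum form): H = x^2 + u^2 + p*u, so dH/du = 0 gives
# u = -p/2, and the adjoint equation is p' = -dH/dx = -2x with p(1) = 0.
# We shoot on the unknown p(0) and bisect until the transversality
# condition p(1) = 0 holds; analytically p(0) = 2*tanh(1).

def integrate(p0, n=4000):
    """RK4 integration of the Hamiltonian system x' = -p/2, p' = -2x on [0,1]."""
    h = 1.0 / n
    x, p = 1.0, p0
    f = lambda x, p: (-p / 2.0, -2.0 * x)
    for _ in range(n):
        k1x, k1p = f(x, p)
        k2x, k2p = f(x + h / 2 * k1x, p + h / 2 * k1p)
        k3x, k3p = f(x + h / 2 * k2x, p + h / 2 * k2p)
        k4x, k4p = f(x + h * k3x, p + h * k3p)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return x, p

def solve_p0(lo=0.0, hi=3.0, tol=1e-10):
    """Bisection on p(0): for this linear system p(1) is increasing in p(0)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        _, p1 = integrate(mid)
        if p1 < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p0 = solve_p0()
x1, _ = integrate(p0)
print(p0, 2 * math.tanh(1.0))    # shooting value vs. analytic 2*tanh(1)
print(x1, 1 / math.cosh(1.0))    # terminal state vs. analytic 1/cosh(1)
```

Because the data here are convex, this extremal is in fact the unique optimum, which is the situation entry 1 generalizes via its invexity condition.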

2.
A maximum principle for the open-loop optimal control of a vibrating system relative to a given convex index of performance is investigated. Though maximum principles have been studied by many people (see, e.g., Refs. 1–5), the principle derived in this paper is of particular use for control problems involving mechanical structures. The state variable satisfies general initial conditions as well as a self-adjoint system of partial differential equations together with a homogeneous system of boundary conditions. The mass matrix is diagonal, constant, and singular, and the viscous damping matrix is diagonal. The maximum principle relates the optimal control with the solution of the homogeneous adjoint equation in which terminal conditions are prescribed in terms of the terminal values of the optimal state variable. An application of this theory to a structural vibrating system is given in a companion paper (Ref. 6).

3.
We consider the nonlinear optimal control problem with an integral functional in which the integrand function is the characteristic function of a given closed set in the phase space. The approximation method is applied to prove the necessary conditions of optimality in the form of the Pontryagin maximum principle without any prior assumptions on the behavior of the optimal trajectory. Similarly to the case of phase-constrained problems, we derive conditions of nondegeneracy and pointwise nontriviality of the maximum principle. Translated from Nelineinaya Dinamika i Upravlenie, No. 4, pp. 241–256, 2004.

4.
We consider a Bolza optimal control problem with state constraints. It is well known that under some technical assumptions every strong local minimizer of this problem satisfies first-order necessary optimality conditions in the form of a constrained maximum principle. In general, the maximum principle may be abnormal or even degenerate and so does not provide sufficient information about optimal controls. In the recent literature some sufficient conditions were proposed to guarantee that at least one maximum principle is nondegenerate, cf. [A.V. Arutyunov, S.M. Aseev, Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints, SIAM J. Control Optim. 35 (1997) 930–952; F. Rampazzo, R.B. Vinter, A theorem on existence of neighbouring trajectories satisfying a state constraint, with applications to optimal control, IMA J. Math. Control Inform. 16 (4) (1999) 335–351; F. Rampazzo, R.B. Vinter, Degenerate optimal control problems with state constraints, SIAM J. Control Optim. 39 (4) (2000) 989–1007]. Our aim is to show that actually conditions of a similar nature guarantee normality of every nondegenerate maximum principle. In particular we allow the initial condition to be fixed and the state constraints to be nonsmooth. To prove normality we use a J. Yorke-type linearization of control systems and show the existence of a solution to a linearized control system satisfying new state constraints defined, in turn, by linearization of the original set of constraints along an extremal trajectory.

5.
6.
In the present paper, we study the resource allocation problem for a two-sector economic model of special form, which is of interest in applications. The optimization problem is considered on a given finite time interval. We show that, under certain conditions on the model parameters, the optimal solution contains a singular mode. We construct optimal solutions in closed form. The theoretical basis for the obtained results is provided by necessary optimality conditions (the Pontryagin maximum principle) and sufficient optimality conditions in terms of constructions of the Pontryagin maximum principle.
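The abstract does not state the two-sector dynamics, but the singular modes it mentions arise in the standard way when the Hamiltonian is affine in the control. A schematic reminder (all symbols H₀, H₁, u, ψ are generic, not from the paper):

```latex
% Control-affine Hamiltonian: maximization gives bang-bang control
% except where the switching function H_1 vanishes on an interval.
H(x,\psi,u) = H_0(x,\psi) + u\,H_1(x,\psi), \qquad u \in [u_{\min}, u_{\max}],
\\[4pt]
u^*(t) =
\begin{cases}
u_{\max}, & H_1\bigl(x(t),\psi(t)\bigr) > 0,\\
u_{\min}, & H_1\bigl(x(t),\psi(t)\bigr) < 0,\\
\text{singular}, & H_1 \equiv 0 \ \text{on some } [t_1,t_2],
\end{cases}
\\[4pt]
\text{with the singular control recovered from } \tfrac{d}{dt}H_1 = 0,\ \tfrac{d^2}{dt^2}H_1 = 0,\ \dots
```

On a singular arc the maximum condition alone no longer determines the control, which is why closed-form constructions such as those in entry 6 must treat these modes separately.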

7.
This paper considers multidimensional control problems governed by a first-order PDE system. It is known that, if the structure of the problem is linear-convex, then the so-called ε-maximum principle, a set of necessary optimality conditions involving a perturbation parameter ε > 0, holds. Assuming that the optimal controls are piecewise continuous, we are able to drop the perturbation parameter within the conditions, proving the Pontryagin maximum principle with piecewise regular multipliers (measures). The Lebesgue and Hahn decompositions of the multipliers lead to refined maximum conditions. Our proof is based on the Baire classification of the admissible controls.
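For readers unfamiliar with the ε-maximum principle the abstract mentions, a schematic version of the ε-maximum condition (notation assumed, not taken from the paper) is:

```latex
% Approximate maximum condition: the optimal control maximizes the
% Hamiltonian only up to the perturbation parameter eps > 0.
H\bigl(x^*(t), \psi(t), u^*(t)\bigr) \;\ge\; \sup_{u \in U} H\bigl(x^*(t), \psi(t), u\bigr) - \varepsilon
\quad \text{for a.e. } t.
```

The contribution described in entry 7 is precisely that, under piecewise continuity of the optimal controls, the ε on the right-hand side can be removed, yielding the exact maximum condition.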

8.
This paper studies the optimal control problem for point processes with Gaussian white-noised observations. A general maximum principle is proved for the partially observed optimal control of point processes, without using the associated filtering equation. Adjoint flows, the adjoint processes of the stochastic flows of the optimal system, are introduced, and their relations are established. Adjoint vector fields, which are observation-predictable, are introduced as the solutions of associated backward stochastic integral-partial differential equations driven by the observation process. In a heuristic way, their relations are explained, and the adjoint processes are expressed in terms of the adjoint vector fields, their gradients and Hessians, along the optimal state process. In this way the adjoint processes are naturally connected to the adjoint equation of the associated filtering equation. This shows that the conditional expectation in the maximum condition is computable through filtering the optimal state, as usually expected. Some variants of the partially observed stochastic maximum principle are derived, and the corresponding maximum conditions are quite different from the counterpart for the diffusion case. Finally, as an example, a quadratic optimal control problem with a free Poisson process and a Gaussian white-noised observation is explicitly solved using the partially observed maximum principle. Accepted 8 August 2001. Online publication 17 December 2001.

9.
A maximum principle in the form given by R.V. Gamkrelidze is obtained, although without a priori regularity assumptions to be satisfied by the optimal trajectory. After its formulation and proof, we propose various regularity concepts that guarantee, in one sense or another, the nondegeneracy of the maximum principle. Finally, we show how the already known first-order necessary conditions can be deduced from the proposed theorem.

10.
An optimal control problem with a prescribed performance index for parabolic systems with time delays is investigated. A necessary condition for optimality is formulated and proved in the form of a maximum principle. Under additional conditions, the maximum principle gives sufficient conditions for optimality. It is also shown that the optimal control is unique. As an illustration of the theoretical considerations, an analytic solution is obtained for a time-delayed diffusion system. The author wishes to express his deep gratitude to Professors J. M. Sloss and S. Adali for the valuable guidance and constant encouragement during the preparation of this paper.

11.
We study the Pontryagin maximum principle for an optimal control problem with state constraints. We analyze the continuity of a vector function µ (which is one of the Lagrange multipliers corresponding to an extremal by virtue of the maximum principle) at the points where the extremal trajectory meets the boundary of the set given by the state constraints. We obtain sufficient conditions for the continuity of µ in terms of the smoothness of the extremal trajectory.

12.
Optimality conditions are derived in the form of a maximum principle governing solutions to an optimal control problem which involves state constraints. The conditions, which apply in the absence of differentiability assumptions on the data, are stated in terms of Clarke's generalized Jacobians. Although not the most general available, the conditions are derived by a novel method: this involves removal of the state constraints by introduction of a penalty term and application of Ekeland's variational principle.

13.
An optimal control problem with state constraints is considered. Some properties of extremals to the Pontryagin maximum principle are studied. It is shown that, from the conditions of the maximum principle, it follows that the extended Hamiltonian is a Lipschitz function along the extremal and its total time derivative coincides with its partial derivative with respect to time.

14.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of the Itô type. We study the relaxed model for which an optimal solution exists. This is an extension of the initial control problem, where admissible controls are measure-valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.

15.
We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.
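To make the abstract's key point concrete, here is a schematic of what a mean-field first-order adjoint equation looks like for dX = b(X, E[X], u) dt + σ(X, E[X], u) dW with running cost f and terminal cost g. The subscript μ denotes the derivative with respect to the expected-value argument; the notation and sign conventions are my own shorthand and vary across the literature, so this is a sketch rather than the paper's exact equation:

```latex
% First-order adjoint as a linear mean-field backward SDE:
dp(t) = -\Bigl( b_x^{\top} p + \mathbb{E}\bigl[b_{\mu}^{\top} p\bigr]
        + \sigma_x^{\top} q + \mathbb{E}\bigl[\sigma_{\mu}^{\top} q\bigr]
        - f_x - \mathbb{E}[f_{\mu}] \Bigr)\,dt + q(t)\,dW_t,
\\[4pt]
p(T) = -\,g_x(X_T) - \mathbb{E}\bigl[g_{\mu}(X_T)\bigr].
```

The E[·] terms are what make this a *mean-field* backward SDE; dropping them recovers the first-order adjoint of the classical (non-mean-field) stochastic maximum principle.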

16.
This paper presents a stochastic maximum principle for an optimal control problem for switching systems: necessary conditions of optimality are given in the form of a maximum principle for stochastic switching systems in which the dynamics of the constituent processes take the form of stochastic differential equations. The restrictions on transitions of the system are described through equality constraints.

17.
Optimal control problems governed by semilinear parabolic partial differential equations are considered. No Cesari-type conditions are assumed. By proving the existence theorem and the Pontryagin maximum principle of optimal "state-control" pairs for the corresponding relaxed problems, an existence theorem of optimal pairs for the original problem is established.

18.
A basic feature of Pontryagin’s maximum principle is its native Hamiltonian format, inherent in the principle regardless of any regularity conditions imposed on the optimal problem under consideration. It canonically assigns to the problem a family of Hamiltonian systems, indexed with the control parameter, and complements the family with the maximum condition, which makes it possible to solve the initial value problem for the system by “dynamically” eliminating the parameter as we proceed along the trajectory, thus providing extremals of the problem. Much has been said about the maximum condition since its discovery in 1956, and all achievements in the field were mainly credited to it, whereas the Hamiltonian format of the maximum principle has always been taken for granted and never been discussed seriously. Meanwhile, the very possibility of formulating the maximum principle is intimately connected with its native Hamiltonian format and with the parametrization of the problem with the control parameter. Both these starting steps were made by L.S. Pontryagin in 1955 from scratch, in fact, out of nothing, and eventually led to the discovery of the maximum principle. Since the present volume is dedicated to the centenary of the birth of Lev Semenovich Pontryagin, I decided to return to this now semi-historical topic and give a short exposition of the Hamiltonian format of the maximum principle.
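The "family of Hamiltonian systems indexed with the control parameter" that this abstract describes can be written compactly as follows (standard textbook notation, not quoted from the paper; the full principle also carries a cost multiplier and transversality conditions omitted here):

```latex
% Family of Hamiltonian systems, indexed by the control parameter u:
H_u(x,\psi) = \langle \psi,\, f(x,u) \rangle, \qquad
\dot{x} = \frac{\partial H_u}{\partial \psi}, \quad
\dot{\psi} = -\frac{\partial H_u}{\partial x},
\\[4pt]
% closed by the maximum condition, which eliminates u along the trajectory:
H_{u(t)}\bigl(x(t),\psi(t)\bigr) = \max_{v \in U} H_{v}\bigl(x(t),\psi(t)\bigr).
```

The maximum condition is exactly the "dynamic elimination" of the parameter u that the abstract refers to: at each time t it selects the member of the family along which the trajectory proceeds.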

19.
A study is made of an overtaking optimal problem for a population system consisting of two competing species, which is controlled by fertilities. The existence of an optimal policy is proved, and a maximum principle is carefully derived under less restrictive conditions. Weak and strong turnpike properties of optimal trajectories are established.

20.
The concept of a local infimum for an optimal control problem is introduced, and necessary conditions for it are formulated in the form of a family of “maximum principles.” If the infimum coincides with a strong minimum, then this family contains the classical Pontryagin maximum principle. Examples are given to show that the obtained necessary conditions strengthen and generalize previously known results.
