Similar Documents
20 similar documents found.
1.
This paper presents three versions of the maximum principle for a stochastic optimal control problem of Markov regime-switching forward–backward stochastic differential equations with jumps. First, a general sufficient maximum principle for the optimal control of a system driven by a Markov regime-switching forward–backward jump–diffusion model is developed. In the regime-switching case, the associated Hamiltonian may fail to be concave, so the classical maximum principle cannot be applied; hence an equivalent-type maximum principle is introduced and proved. To handle an optimal control problem when the Hamiltonian is not concave, we use a third approach based on Malliavin calculus to derive a general stochastic maximum principle. This approach also enables us to derive an explicit solution of a control problem when the concavity assumption is not satisfied. In addition, the framework we propose allows us to apply our results to a recursive utility maximization problem.
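As a rough sketch of the objects such results revolve around (generic notation, with jumps and regime switching omitted; this is an illustrative outline, not the paper's exact formulation), a sufficient stochastic maximum principle couples a Hamiltonian with an adjoint backward SDE:

```latex
% Controlled diffusion and payoff to be maximized:
dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t, \qquad
J(u) = \mathbb{E}\Big[\int_0^T f(t, X_t, u_t)\,dt + g(X_T)\Big].

% Hamiltonian and adjoint BSDE:
H(t, x, u, p, q) = b(t, x, u)\,p + \sigma(t, x, u)\,q + f(t, x, u),
\qquad
dp_t = -\partial_x H(t, X_t, u_t, p_t, q_t)\,dt + q_t\,dW_t, \quad p_T = \partial_x g(X_T).

% Sufficient condition: if (x, u) \mapsto H(t, x, u, \hat p_t, \hat q_t) is concave,
% g is concave, and
H\big(t, \hat X_t, \hat u_t, \hat p_t, \hat q_t\big)
  = \max_{u \in U} H\big(t, \hat X_t, u, \hat p_t, \hat q_t\big),
% then \hat u is optimal.
```

The concavity requirement in the last step is exactly what may fail in the regime-switching setting, which motivates the equivalent-type and Malliavin-calculus approaches described above.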

2.
This paper studies a time optimal control problem for a class of ordinary differential equations. The control systems may have multiple solutions. Based on the properties satisfied by the solutions of the equations concerned, we obtain both the existence of optimal controls and a Pontryagin maximum principle for them.
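For intuition on time-optimal control via the Pontryagin maximum principle, consider the classical double-integrator benchmark (not the specific system of this paper; the dynamics ẍ = u, |u| ≤ 1, starting from rest at x₀ > 0 and targeting the origin, are assumptions chosen for illustration). The maximum principle yields a bang-bang control with one switch, and the minimal time is T = 2√x₀. A minimal numerical check:

```python
import math

def minimal_time(x0):
    # PMP-optimal strategy for x(0)=x0>0, v(0)=0, target (0,0), |u|<=1:
    # u = -1 until the switch time t1 = sqrt(x0), then u = +1,
    # giving total time T = 2*sqrt(x0).
    return 2.0 * math.sqrt(x0)

def simulate_bang_bang(x0, dt=1e-4):
    # Forward-Euler simulation of the bang-bang trajectory.
    t1 = math.sqrt(x0)
    T = 2.0 * t1
    x, v, t = float(x0), 0.0, 0.0
    while t < T - 1e-12:
        u = -1.0 if t < t1 else 1.0
        x += v * dt
        v += u * dt
        t += dt
    return x, v  # should land near the origin

x_end, v_end = simulate_bang_bang(1.0)
print(minimal_time(1.0), x_end, v_end)
```

The simulated endpoint approaches (0, 0) as dt shrinks, confirming that the single-switch bang-bang law steers the system to the target in the predicted minimal time.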

3.
We consider a time optimal control problem for a nonlinear parabolic equation with a nonlocal lower-order term. Well-posedness of the system is first proved via the Schauder fixed point theorem; the existence of admissible controls and of optimal controls is then established using a Carleman inequality and the Kakutani fixed point theorem, and a maximum principle for the time optimal control is derived.

4.
This note considers the time optimal problem for a linear neutral system with an integral constraint on the control. A maximum principle is derived.

5.
A maximum principle is developed for a class of problems involving the optimal control of a distributed-parameter system governed by a linear hyperbolic equation in one space dimension that is not necessarily separable. A convex index of performance is formulated, which consists of functionals of the state variable, its first- and second-order space derivatives, its first-order time derivative, and a penalty functional involving the open-loop control force. The solution of the optimal control problem is shown to be unique. The adjoint operator is determined, and a maximum principle relating the control function to the adjoint variable is stated. The proof of the maximum principle is given with the help of convexity arguments. The maximum principle can be used to compute the optimal control function and is particularly suitable for problems involving the active control of structural elements for vibration suppression.

6.
This article is concerned with a risk-sensitive stochastic optimal control problem motivated by a kind of optimal portfolio choice problem in the financial market. The maximum principle for this kind of problem is obtained, which is similar in form to its risk-neutral counterpart. But the adjoint equations and maximum condition heavily depend on the risk-sensitive parameter. This result is used to solve a kind of optimal portfolio choice problem and the optimal portfolio choice strategy is obtained. Computational results and figures explicitly illustrate the optimal solution and the sensitivity to the volatility rate parameter.

7.
In this paper, the nondifferentiable optimal control problem with discrete time is considered. For this problem, the discrete maximum principle is derived under weak assumptions concerning the performance index and inequality constraints. The technique of the proof is also used to formulate a globally convergent algorithm based on the discrete maximum principle. At every iteration of this algorithm, a convex optimal control problem must be solved. An efficient version of a proximity algorithm is proposed for this convex problem.

8.
This paper considers an optimal control problem for the dynamics of a predator-prey model. The predator population has to choose the predation intensity over time so as to maximize the present value of the utility stream derived from consuming prey. The utility function is assumed to be convex for small levels of consumption and concave otherwise. The problem is solved using the maximum principle, and different time patterns of the optimal solution are obtained for small, medium and high rates of time preference. The model has features of both convex and concave optimal control problems, so phase-plane analysis has to be combined with the synthesis of bang-bang, singular and chattering solution pieces.

9.
In this work, an optimal control problem with state constraints of equality type is considered. The novelty of the problem formulation is justified. Under various regularity assumptions imposed on the optimal trajectory, a non-degenerate Pontryagin maximum principle is proven. As a consequence of the maximum principle, the Euler–Lagrange and Legendre conditions for a variational problem with equality and inequality state constraints are obtained. As an application, the equation of the geodesic curve for a complex domain is derived. In control theory, the maximum principle yields the global maximum condition, also known as the Weierstrass–Pontryagin maximum condition, by which the optimal control, at each instant of time, solves a global finite-dimensional optimization problem.
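The global (Weierstrass–Pontryagin) maximum condition mentioned above can be written, in generic notation (an illustrative sketch, not this paper's exact state-constrained setting), as:

```latex
% Hamiltonian for \dot x = f(t, x, u), minimizing \int_0^T L(t, x, u)\,dt:
H(t, x, u, \psi) = \langle \psi, f(t, x, u) \rangle - L(t, x, u),

% adjoint system and global maximum condition along the optimal pair (x^*, u^*):
\dot\psi(t) = -\partial_x H\big(t, x^*(t), u^*(t), \psi(t)\big),
\qquad
H\big(t, x^*(t), u^*(t), \psi(t)\big) = \max_{u \in U}\, H\big(t, x^*(t), u, \psi(t)\big).
```

At each fixed t this is a finite-dimensional optimization over the control set U, which is what the last sentence of the abstract refers to; with state constraints the adjoint equation additionally carries a measure multiplier.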

10.
Pontryagin's maximum principle gives no information about a singular optimal control if the problem is linear. This survey shows how candidate singular optimal controls may be found for linear and nonlinear problems. A theorem is given on the maximum order of a linear singular problem. This paper is based in part on research undertaken by the author at the Hatfield Polytechnic, Hatfield, Hertfordshire, England, for the Ph.D. degree.

11.
This paper presents a sufficient stochastic maximum principle for a stochastic optimal control problem of Markov regime-switching forward–backward stochastic differential equations with jumps. The relationship between the stochastic maximum principle and the dynamic programming principle in a Markovian case is also established. Finally, applications of the main results to a recursive utility portfolio optimization problem in a financial market are discussed.

12.
This paper is concerned with the optimal distributed control of the viscous weakly dispersive Degasperis–Procesi equation in nonlinear shallow water dynamics. It is well known that the Pontryagin maximum principle, which unifies calculus of variations and control theory of ordinary differential equations, sets up the theoretical basis of the modern optimal control theory along with the Bellman dynamic programming principle. In this paper, we commit ourselves to infinite dimensional generalizations of the maximum principle and aim at the optimal control theory of partial differential equations. In contrast to the finite dimensional setting, the maximum principle for the infinite dimensional system does not generally hold as a necessary condition for optimal control. By the Dubovitskii and Milyutin functional analytical approach, we prove the Pontryagin maximum principle of the controlled viscous weakly dispersive Degasperis–Procesi equation. The necessary optimality condition is established for the problem in fixed final horizon case. Finally, a remark on how to utilize the obtained results is also made. Copyright © 2015 John Wiley & Sons, Ltd.

13.
It is well known in optimal control theory that the maximum principle, in general, furnishes only necessary optimality conditions for an admissible process to be an optimal one. It is also well known that if a process satisfies the maximum principle in a problem with convex data, the maximum principle turns out to be a sufficient condition as well. Here an invexity-type condition for state-constrained optimal control problems is defined and shown to be a sufficient optimality condition. Further, it is demonstrated that all optimal control problems in which every extremal process is optimal necessarily obey this invexity condition. Thus optimal control problems satisfying such a condition constitute the most general class of problems where the maximum principle automatically becomes a set of sufficient optimality conditions.

14.
An optimal control problem with an integral quality index specified on a finite time interval is formulated for a model of economic growth that leads to emission of greenhouse gases. The controlled system is linear with respect to the control. The problem contains phase constraints that forbid emission of greenhouse gases above some predefined time-dependent limit. As is known, optimal control problems with phase constraints lie outside the scope of efficient application of the Pontryagin maximum principle because, for such problems, the principle takes a complicated form that is difficult to analyze in particular situations. In this study, the analytic structure of the optimal control and phase trajectories is constructed using the double variation method.

15.
A nonlinear optimal impulsive control problem with trajectories of bounded variation, subject to intermediate state constraints at a finite number of nonfixed instants of time, is considered. Features of this problem are discussed from the viewpoint of the extension of the classical optimal control problem with the corresponding state constraints. A necessary optimality condition is formulated in the form of a smooth maximum principle; thorough comments are given, a short proof is presented, and examples are discussed.

16.
We consider a nonautonomous optimal control problem on an infinite time horizon with an integral functional containing a positive discounting factor. In the case of a dominating discounting factor, we obtain a variant of the Pontryagin maximum principle that contains explicit expressions for the adjoint variable and the Hamiltonian of the problem.

17.
This paper is concerned with the stochastic maximum principle for impulse optimal control problems of forward–backward systems, where the coefficients of the forward part are Lipschitz continuous. The domain of the regular controls is not necessarily convex. We establish a Pontryagin maximum principle for this control problem by applying Ekeland's variational principle to a sequence of control problems, with smooth coefficients, approximating the initial problem.

18.
1. Introduction. It is well known that the Dubovitskii–Milyutin theorem (Ref. 1) provides an approach to deriving Pontryagin's maximum principle for optimal control problems with mixed phase-control constraints. To treat phase-control constraints of equality type, one immediately encounters a problem: is the sum of w*-closed convex cones still w*-closed? The answer, however, is generally no. To overcome this difficulty, Ledzewicz-Kowalewska proposed the concept of joint regularity (Ref. 2), while Walczak proposed the concept of systems of the same sense (Ref. 3). The contribution…

19.
We study optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space, a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in that the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.

20.
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover, the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space, a maximum principle of local form is derived, specifying the necessary conditions for optimality; these are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
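To make the contrast with the classical adjoint equation concrete, here is a generic sketch (illustrative notation, not this paper's exact formulation) for dynamics dX = b(t, X, E[X], u)dt + σ(t, X, E[X], u)dW with running cost f and terminal cost g, where y denotes the E[X]-argument of each coefficient:

```latex
% Classical adjoint BSDE (no mean-field terms):
-dp_t = \big(\partial_x b\, p_t + \partial_x \sigma\, q_t + \partial_x f\big)\,dt - q_t\,dW_t,
\qquad p_T = \partial_x g(X_T).

% Mean-field adjoint BSDE: extra expectation terms from the E[X]-dependence
% make the equation a linear mean-field backward SDE:
-dp_t = \big(\partial_x b\, p_t + \partial_x \sigma\, q_t + \partial_x f
        + \mathbb{E}[\partial_y b\, p_t] + \mathbb{E}[\partial_y \sigma\, q_t]
        + \mathbb{E}[\partial_y f]\big)\,dt - q_t\,dW_t,
\qquad p_T = \partial_x g(X_T) + \mathbb{E}[\partial_y g(X_T)].
```

The expectation terms couple the adjoint pair (p, q) to its own law, which is precisely why the adjoint equation is of mean-field type rather than a standard backward SDE.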


Copyright©北京勤云科技发展有限公司  京ICP备09084417号