Similar Articles
 20 similar articles found (search time: 46 ms)
1.
An optimal stochastic control problem is considered in this paper, where the diffusion coefficient also depends on the control and is possibly degenerate. In addition to the usual adjoint process, a second-order adjoint process is introduced. Some relationships between the value function and the adjoint processes are presented via the "super- and sub-differentials" related to viscosity solutions, without assuming smoothness of the value function. The maximum principle, dynamic programming, and their connections are then established within a unified framework of viscosity solutions.
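The link between the adjoint processes and the value function mentioned above can be sketched in standard notation; this is a generic statement of the second-order-adjoint/superdifferential relation (sign conventions vary across references), not the paper's exact formulation:

```latex
% Controlled SDE with control-dependent, possibly degenerate diffusion:
dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t .
% Along an optimal pair (\bar X, \bar u), with first-order adjoint p
% and second-order adjoint P, the value function V satisfies an inclusion
% of the form
\{ p(t) \} \times [P(t), \infty) \subset D_x^{2,+} V(t, \bar X_t) \quad \text{a.s.},
% which, when V happens to be smooth, reduces to
V_x(t, \bar X_t) = p(t), \qquad V_{xx}(t, \bar X_t) \le P(t),
% so no smoothness of V needs to be assumed for the inclusion itself.
```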

2.
In this paper we discuss necessary and sufficient conditions for near-optimal singular stochastic controls for systems driven by nonlinear stochastic differential equations (SDEs). The proof of our result is based on Ekeland's variational principle and some delicate estimates of the state and adjoint processes. It is well known that optimal singular controls may fail to exist even in simple cases. This justifies the use of near-optimal singular controls, which exist under minimal conditions and are sufficient in most practical cases. Moreover, since there are many near-optimal singular controls, it is possible to choose suitable ones that are convenient for implementation. This result generalizes Zhou's stochastic maximum principle for near-optimality to singular control problems.

3.
Near-optimization is as sensible and important as optimization for both theory and applications. This paper deals with necessary and sufficient conditions for near-optimal singular stochastic controls for nonlinear controlled stochastic differential equations of mean-field type, also called McKean–Vlasov equations. The proof of our main result is based on Ekeland's variational principle and some estimates of the state and adjoint processes. It is shown that optimal singular controls may fail to exist even in simple cases, while near-optimal singular controls always exist. This justifies the use of near-optimal stochastic controls, which exist under minimal hypotheses and are sufficient in most practical cases. Moreover, since there are many near-optimal singular controls, it is possible to select among them appropriate ones that are easier to analyze and implement. Under an additional assumption, we prove that the near-maximum condition on the Hamiltonian function is sufficient for near-optimality. This paper extends the results obtained in (Zhou, X.Y.: SIAM J. Control Optim. 36(3), 929–947, 1998) to a class of singular stochastic control problems involving stochastic differential equations of mean-field type. An example is given to illustrate the theoretical results.
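The role of Ekeland's variational principle in near-optimality arguments of this kind can be sketched schematically (a generic outline, not the paper's precise statement):

```latex
% A control u^\varepsilon is \varepsilon-optimal if
J(u^\varepsilon) \le \inf_{u \in \mathcal{U}} J(u) + \varepsilon .
% Ekeland's variational principle then supplies a control \tilde u with
J(\tilde u) \le J(u^\varepsilon), \qquad
d(\tilde u, u^\varepsilon) \le \sqrt{\varepsilon},
% such that \tilde u exactly minimizes the perturbed cost
u \;\longmapsto\; J(u) + \sqrt{\varepsilon}\, d(u, \tilde u).
% Applying the usual variational (spike/convex perturbation) analysis to
% this perturbed problem yields a near-maximum condition on the
% Hamiltonian, with an error of order \sqrt{\varepsilon}.
```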

4.
In this paper, we give a probabilistic interpretation of a coupled system of Hamilton–Jacobi–Bellman equations using the value function of a stochastic control problem. First we introduce this stochastic control problem. Then we prove that the value function of this problem is deterministic and satisfies a (strong) dynamic programming principle. Finally, the value function is shown to be the unique viscosity solution of the coupled system of Hamilton–Jacobi–Bellman equations.

5.
Near-optimal controls are as important as optimal controls for both theory and applications. Meanwhile, using an inhibitor to control harmful microorganisms and ensure maximum growth of beneficial (target) microorganisms is a very interesting topic in chemostat theory. Thus, in this paper, we consider a stochastic chemostat model with non-zero inhibitor cost over a finite horizon. The near-optimal control problem is formulated by minimizing both the number of harmful microorganisms and the cost of the inhibitor. We find that the Hamiltonian function is the key to estimating the objective function, and using the adjoint equation we obtain some error estimates for near-optimality. Finally, we establish sufficient and necessary conditions for stochastic near-optimal controls of this model, and numerical simulations and some conclusions are given.
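To illustrate the kind of objective being minimized, here is a minimal Euler–Maruyama simulation of a hypothetical stochastic chemostat with an inhibitor dose `u(t)`; the dynamics, parameter values, and cost weights are illustrative assumptions, not the model studied in the paper:

```python
import numpy as np

def simulate_chemostat(u, T=10.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of an illustrative stochastic chemostat.

    s: substrate, x: target (beneficial) species, y: harmful species.
    The inhibitor dose u(t) slows the harmful species' growth; the cost
    accumulates harmful biomass plus a quadratic dosing penalty.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s, x, y = 1.0, 0.5, 0.5
    cost = 0.0
    for k in range(n_steps):
        uk = u(k * dt)
        mu_x = 2.0 * s / (1.0 + s)           # Monod uptake, target species
        mu_y = 1.5 * s / (0.5 + s) - uk      # inhibitor reduces harmful growth
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        s += (1.0 - s - mu_x * x - max(mu_y, 0.0) * y) * dt
        x += x * (mu_x - 1.0) * dt + 0.1 * x * dW[0]
        y += y * (mu_y - 1.0) * dt + 0.1 * y * dW[1]
        s, x, y = max(s, 0.0), max(x, 0.0), max(y, 0.0)
        cost += (y + 0.5 * uk ** 2) * dt     # running cost: harm + dosing
    return cost

# Compare no dosing with a constant dose on the same noise path (same seed).
c0 = simulate_chemostat(lambda t: 0.0)
c1 = simulate_chemostat(lambda t: 1.0)
```

A near-optimal control would sit between these extremes, trading harmful biomass against dosing cost; the paper's necessary and sufficient conditions characterize how close such a trade-off is to optimal.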

6.
We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or stochastic games. In the case of stochastic optimal switching and control, we prove via dynamic programming methods that the value function is a viscosity solution of the IPDEs. In our setting the value functions or the solutions of the IPDEs are not smooth, so classical verification theorems do not apply.

7.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use in our approach is shown to be a necessary and sufficient assumption for the homogenizability of the control problem. The results partially generalize homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi by viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments, and discuss a stochastic control interpretation for the limit equation.

8.
We study the linear quadratic optimal stochastic control problem driven jointly by Brownian motion and Lévy processes. We prove that the new affine stochastic differential adjoint equation admits an inverse process, by applying the section theorem. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that the optimal feedback control exists, and that the value function is composed of the initial value of the solution of the related backward Riccati differential equation together with the related adjoint equation.
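For intuition on the backward Riccati equation mentioned above, here is a minimal scalar sketch (no jumps, illustrative parameters; the paper's Riccati equation for the Lévy-driven case carries extra terms): for `dx = (a x + b u) dt + σ dW` with running cost `q x² + r u²` and terminal cost `m x(T)²`, the Riccati ODE is integrated backward from `t = T`:

```python
import numpy as np

def riccati_backward(a=0.5, b=1.0, q=1.0, r=1.0, m=1.0, T=1.0, n=10000):
    """Integrate the scalar backward Riccati ODE
        -dP/dt = 2*a*P - (b**2 / r) * P**2 + q,   P(T) = m,
    from t = T down to t = 0 by explicit Euler steps, returning P(0)."""
    dt = T / n
    P = m
    for _ in range(n):
        # stepping backward in time flips the sign of the derivative
        P += (2.0 * a * P - (b ** 2 / r) * P ** 2 + q) * dt
    return P

P0 = riccati_backward()
# The associated optimal feedback for this scalar model is u(t, x) = -(b / r) * P(t) * x.
```

With these parameters the solution increases monotonically from `P(1) = 1` toward the stable root of `2aP - P² + q = 0` (the golden ratio, about 1.618), so `P(0)` lands strictly between the two.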

10.
This paper is concerned with the stochastic optimal control problem for jump diffusions. The relationship between the stochastic maximum principle and the dynamic programming principle is discussed. Without involving any derivatives of the value function, relations among the adjoint processes, the generalized Hamiltonian, and the value function are investigated by employing the notion of semijets used in defining viscosity solutions. A stochastic verification theorem is also given to check whether a given admissible control is optimal.
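The semijets referred to here have a standard definition in viscosity-solution theory (notation varies slightly across references):

```latex
% Second-order parabolic superjet of V at (t, x):
\mathcal{P}^{2,+} V(t,x) = \Bigl\{ (a, p, M) :
  V(s,y) \le V(t,x) + a\,(s - t) + \langle p,\, y - x \rangle
  + \tfrac{1}{2} \langle M (y - x),\, y - x \rangle \\
  \qquad\qquad + o\bigl(|s - t| + |y - x|^2\bigr) \ \text{as } (s,y) \to (t,x) \Bigr\},
% and the subjet is obtained by symmetry:
\mathcal{P}^{2,-} V(t,x) = -\,\mathcal{P}^{2,+} (-V)(t,x).
% These replace (\partial_t V, D_x V, D_x^2 V) when V is not differentiable.
```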

11.
This article is devoted to the study of fully nonlinear stochastic Hamilton–Jacobi (HJ) equations for the optimal stochastic control of ordinary differential equations with random coefficients. Under standard Lipschitz continuity assumptions on the coefficients, the value function is proved to be the unique viscosity solution of the associated stochastic HJ equation.

12.
Existence of a viscosity solution to a non-local Hamilton–Jacobi–Bellman equation in a Hilbert space is established. We prove that the value function of an associated stochastic control problem is a viscosity solution. We provide a complete proof of the dynamic programming principle for the stochastic control problem. We also illustrate the theory with Bellman equations associated to a controlled wave equation and a controlled Musiela equation from mathematical finance, both perturbed by Lévy processes.

13.
We study boundary control problems for stochastic parabolic equations with Neumann boundary conditions. Imposing super-parabolicity conditions, we establish the existence and uniqueness of the solutions of the state and adjoint equations with non-homogeneous boundary conditions by the Galerkin approximation method. We also find that, in this case, the adjoint equation (a backward stochastic PDE) has two boundary conditions, one non-homogeneous and the other homogeneous. Using these results, we derive necessary optimality conditions for control systems under convex state constraints by the convex perturbation method.

14.
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of mean-field type. The information structure considered for the SDG is possibly asymmetric and partial. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011), which constructs an SMP for a mean-field-type optimal stochastic control problem; however, our use of these second-order adjoint processes of a second type to handle what we call the second form of quadratic-type terms offers an alternative to adapting, to our setting, the approach used in their article for the analogous terms.

15.
We prove optimality principles for semicontinuous bounded viscosity solutions of Hamilton–Jacobi–Bellman equations. In particular, we provide a representation formula for viscosity supersolutions as value functions of suitable obstacle control problems. This result is applied to extend the Lyapunov direct method for stability to controlled Itô stochastic differential equations. We define the appropriate concept of Lyapunov function to study stochastic open-loop stabilizability in probability and local and global asymptotic stabilizability (or asymptotic controllability). Finally, we illustrate the theory with some examples.

16.
In this paper we are interested in an investment problem with stochastic volatilities and portfolio constraints on amounts. We model the risky assets by jump diffusion processes and we consider an exponential utility function. The objective is to maximize the expected utility of the investor's terminal wealth. The value function is known to be a viscosity solution of an integro-differential Hamilton–Jacobi–Bellman (HJB) equation, which cannot be solved explicitly when the number of risky assets exceeds three. Thanks to an exponential transformation, we reduce the nonlinearity of the HJB equation to a semilinear equation. We prove the existence of a smooth solution to the latter equation and we state a verification theorem which relates this solution to the value function. We present an example that shows the importance of this reduction for the numerical study of the optimal portfolio. We then compute the optimal investment strategy by solving the associated optimization problem.

17.
This paper is concerned with optimal control of neutral stochastic functional differential equations (NSFDEs). The Pontryagin maximum principle is proved for optimal controls, where the adjoint equation is a linear neutral backward stochastic functional equation of Volterra type (VNBSFE). The existence and uniqueness of the solution are proved for general nonlinear VNBSFEs. Under a convexity assumption on the Hamiltonian function, a sufficient condition for optimality is given as well.

18.
This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth in the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature.
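The logarithmic transformation can be sketched as follows; this is a schematic version for a generic power-utility, one-factor stochastic-volatility setting (the paper's multidimensional constrained formulation is more involved):

```latex
% For power utility U(x) = x^p / p and volatility factor y, write the
% value function in separated form
v(t, x, y) = \frac{x^p}{p}\, e^{\varphi(t, y)} .
% Substituting into the fully nonlinear Bellman equation for v yields a
% semilinear parabolic equation for \varphi with quadratic growth in its
% gradient:
\varphi_t + \mathcal{L}\varphi + H\bigl(y, \nabla_y \varphi\bigr) = 0,
\qquad \varphi(T, y) = 0,
% where \mathcal{L} is the generator of the volatility factor and H is
% quadratic in \nabla_y \varphi. The optimal portfolio is then recovered
% from the classical solution \varphi.
```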

19.
This work is devoted to the study of a class of Hamilton–Jacobi–Bellman equations associated with an optimal control problem where the state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on perturbing the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle we are able to show that the solution of the equation is unique.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号