Similar Articles
20 similar articles found
1.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption in our approach is shown to be necessary and sufficient for the homogenizability of the control problem. The results partially generalize homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi via viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments and discuss a stochastic control interpretation of the limit equation.

2.
We prove a large deviation principle for solutions of abstract stochastic evolution equations perturbed by small Lévy noise. We use the general large deviations theorems of Varadhan and Bryc coupled with the techniques of Feng and Kurtz (2006) [15], viscosity solutions of integro-partial differential equations in Hilbert spaces, and deterministic optimal control methods. The Laplace limit is identified as a viscosity solution of a Hamilton–Jacobi–Bellman equation of an associated control problem. We also establish exponential moment estimates for solutions of stochastic evolution equations driven by Lévy noise. The general results are applied to stochastic hyperbolic equations perturbed by a subordinated Wiener process.

3.
This work is devoted to the study of a class of Hamilton–Jacobi–Bellman equations associated with an optimal control problem in which the state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on perturbing the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle, we show that the solution of the equation is unique.

4.
We study a stochastic optimal control problem for a partially observed diffusion. By using the control randomization method in Bandini et al. (2018), we prove a corresponding randomized dynamic programming principle (DPP) for the value function, which is obtained from a flow property of an associated filter process. This DPP is the key step toward our main result: a characterization of the value function of the partial-observation control problem as the unique viscosity solution to the corresponding dynamic programming Hamilton–Jacobi–Bellman (HJB) equation. The latter is formulated as a new, fully nonlinear partial differential equation on the Wasserstein space of probability measures. An important feature of our approach is that it requires no non-degeneracy condition on the diffusion coefficient, and no condition is imposed to guarantee existence of a density for the filter process solving the controlled Zakai equation. Finally, we give an explicit solution to our HJB equation in the case of a partially observed non-Gaussian linear–quadratic model.

5.
We investigate the Cauchy problem for a nonlinear parabolic partial differential equation of Hamilton–Jacobi–Bellman type and prove some regularity results, such as Lipschitz continuity and semiconcavity, for its unique viscosity solution. Our method is based on the possibility of representing such a solution as the value function of the associated stochastic optimal control problem. The main feature of our result is the fact that the solution is shown to be jointly regular in space and time without any strong ellipticity assumption on the Hamilton–Jacobi–Bellman equation.

6.
Solvability of Forward–Backward SDEs and the Nodal Set of Hamilton–Jacobi–Bellman Equations. Ma Jin; Yong Jiongmin. Abstract: The solvability of...

7.
Abstract. This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature.

8.
The solvability of a class of forward–backward stochastic differential equations (SDEs for short) over an arbitrarily prescribed time duration is studied. The authors design a stochastic relaxed control problem, with both drift and diffusion controlled, so that the solvability problem is converted into the problem of finding the nodal set of the viscosity solution to a certain Hamilton–Jacobi–Bellman equation. This method overcomes the fundamental difficulty encountered in the traditional contraction-mapping approach to the existence theorem for such SDEs.

9.
This article is devoted to the study of fully nonlinear stochastic Hamilton–Jacobi (HJ) equations for the optimal stochastic control problem of ordinary differential equations with random coefficients. Under standard Lipschitz continuity assumptions on the coefficients, the value function is proved to be the unique viscosity solution of the associated stochastic HJ equation.

10.
This paper treats a finite time horizon optimal control problem in which the controlled state dynamics are governed by a general system of stochastic functional differential equations with bounded memory. An infinite-dimensional Hamilton–Jacobi–Bellman (HJB) equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation.
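Many of the abstracts in this list refer to a dynamic programming HJB equation without displaying it. For orientation only, the generic finite-dimensional, finite-horizon HJB equation for a controlled diffusion with drift $b$, diffusion $\sigma$, running cost $f$, and terminal cost $g$ reads as follows (a standard textbook sketch, not the infinite-dimensional equation of the paper above):

```latex
% Controlled state: dX_t = b(t, X_t, a_t)\,dt + \sigma(t, X_t, a_t)\,dW_t,
% value function V(t,x) = \inf_a E[ \int_t^T f(s, X_s, a_s)\,ds + g(X_T) ].
\partial_t V(t,x)
  + \inf_{a \in A} \Big\{ b(t,x,a) \cdot \nabla_x V(t,x)
  + \tfrac{1}{2} \operatorname{tr}\!\big( \sigma\sigma^{\top}(t,x,a)\, D_x^2 V(t,x) \big)
  + f(t,x,a) \Big\} = 0,
\qquad V(T,x) = g(x).
```

When the value function is not smooth, this equation is interpreted in the viscosity sense, which is the common framework in the papers listed here.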

11.
Stochastic Analysis and Applications, 2013, 31(5): 927–946
We study the stochastic optimization problem of renewable resources, maximizing the expected discounted utility of exploitation. We develop the viscosity solution method for the associated Hamilton–Jacobi–Bellman equation and further show the C²-regularity of the viscosity solution under strict concavity of the utility function. The optimal policy is shown to exist and is given in feedback form, a stochastic version of Hotelling's rule.

12.
In the present paper, we study a necessary condition under which the solutions of a stochastic differential equation governed by unbounded control processes remain in an arbitrarily small neighborhood of a given set of constraints. We prove that, in comparison with the classical constrained control problem with bounded control processes, a further assumption on the growth of the control processes is needed in order to obtain a necessary and sufficient condition in terms of viscosity solutions of the associated Hamilton–Jacobi–Bellman equation. A rather general example illustrates our main result.

13.
Abstract. This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature.

14.
In the present paper we analyse the American option valuation problem in a stochastic volatility model when transaction costs are taken into account. We show that it can be formulated as a singular stochastic optimal control problem, and prove existence and uniqueness of the viscosity solution of the associated Hamilton–Jacobi–Bellman partial differential equation. Moreover, after performing a dimensionality reduction through a suitable choice of the utility function, we provide a numerical example illustrating how American option prices can be computed in the present modelling framework.

15.
The operation of a stand-alone photovoltaic (PV) system ultimately aims at optimizing its energy storage. We present a mathematical model for cost-effective control of a stand-alone system based on a PV panel equipped with an angle adjustment device. The model is based on viscosity solutions to partial differential equations, which serve as a new and mathematically rigorous tool for modeling, analyzing, and controlling PV systems. We formulate a stochastic optimal switching problem for the panel angle, here a binary variable to be dynamically controlled under stochastic weather conditions. The stochasticity comes from cloud-cover dynamics, which are modeled with a nonlinear stochastic differential equation. Switching the angle incurs an impulsive cost, and finding the optimal control policy reduces to solving a system of Hamilton–Jacobi–Bellman quasi-variational inequalities (HJBQVIs). We show that the stochastic differential equation is well posed and that the HJBQVIs admit a unique viscosity solution. In addition, a finite-difference scheme is proposed for the numerical discretization of the HJBQVIs. A demonstrative computational example of the HJBQVIs, with emphasis on a stand-alone experimental system, is finally presented, with practical implications for cost-effective operation.

16.
This paper considers a stochastic control problem in which the dynamic system is a controlled backward stochastic heat equation with Neumann boundary control and boundary noise, and the state must coincide with a given random vector at terminal time. By defining a proper form of the mild solution for the state equation, the existence and uniqueness of the mild solution are established. As a main result, a global maximum principle for our control problem is presented. The main result is also applied to a backward linear-quadratic control problem, in which an optimal control is obtained explicitly as a feedback of the solution to a forward–backward stochastic partial differential equation.

17.
We consider the problem of viscosity solutions of integro-partial differential equations (IPDEs for short) with one obstacle via solutions of reflected backward stochastic differential equations (RBSDEs for short) with jumps. We show the existence and uniqueness of a continuous viscosity solution of the equation with nonlocal terms when the generator is not monotone and the Lévy measure is infinite.

18.
The limiting behavior, as the viscosity goes to zero, of the solution of the first boundary value problem for Burgers' equation is considered. The method consists in identifying the solution of Burgers' equation with the optimal control of an appropriate stochastic control problem.

19.
In this paper, we study the stochastic Ramsey problem related to an economic growth model with a CES production function in a finite time horizon. By a change of variables, the Hamilton–Jacobi–Bellman equation associated with this optimization problem is transformed. Using the viscosity solution technique, we show the existence of a classical solution of the transformed Hamilton–Jacobi–Bellman equation, and then give an optimal consumption policy for the original problem.

20.
We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite-horizon setting. The main result of the present paper is that this optimal control problem has the same value as a linear optimization problem stated on an appropriate space of probability measures. This enables one to derive a dual formulation that is strongly connected to the notion of (viscosity sub)solution to a suitable Hamilton–Jacobi–Bellman equation. We also discuss the relation with long-time average problems.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号