Similar documents
20 similar documents found (search time: 31 ms)
1.
The dynamic programming approach to the control of a 3D flow governed by the stochastic Navier–Stokes equations for an incompressible fluid in a bounded domain is studied. By a compactness argument, existence of solutions of the associated Hamilton–Jacobi–Bellman equation is proved. Finally, existence of an optimal control through the feedback formula, and of an optimal state, is discussed. This paper was written at the Scuola Normale Superiore di Pisa and at the École Normale Supérieure de Cachan, Antenne de Bretagne.

2.
We consider a general continuous-time finite-dimensional deterministic system under a finite-horizon cost functional. Our aim is to calculate approximate solutions to the optimal feedback control. First we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes to solve the equation numerically: one based on a time-difference approximation, the other on a time-space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming is a minimizing sequence of the optimal feedback control of the continuous counterpart. An example is presented for the time-space algorithm; the results illustrate that the scheme is effective.
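The time-difference idea above can be illustrated with a minimal backward-in-time dynamic programming sketch for a one-dimensional finite-horizon problem. The dynamics, costs, grids, and the `solve_hjb` helper are all illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

# Illustrative problem: dynamics dx/dt = u, running cost x^2 + u^2,
# terminal cost x^2 (a scalar linear-quadratic problem, chosen so the
# scheme's behavior is easy to sanity-check).

def solve_hjb(x_grid, controls, T=1.0, n_steps=50):
    dt = T / n_steps
    V = x_grid**2                              # terminal condition V(T, x) = x^2
    for _ in range(n_steps):                   # march backward in time
        candidates = []
        for u in controls:
            x_next = x_grid + dt * u           # Euler step of the dynamics
            V_next = np.interp(x_next, x_grid, V)   # interpolate value on the grid
            candidates.append(dt * (x_grid**2 + u**2) + V_next)
        V = np.min(candidates, axis=0)         # Bellman minimization over controls
    return V

x = np.linspace(-2, 2, 201)
u_set = np.linspace(-3, 3, 61)
V0 = solve_hjb(x, u_set)                       # approximate value function at t = 0
```

The minimizing `u` recorded at each grid point and time step would give the approximate feedback law; here only the value function is kept for brevity.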

3.
In this paper, we give a probabilistic interpretation for a coupled system of Hamilton–Jacobi–Bellman equations using the value function of a stochastic control problem. First we introduce this stochastic control problem. Then we prove that the value function of this problem is deterministic and satisfies a (strong) dynamic programming principle. Finally, the value function is shown to be the unique viscosity solution of the coupled system of Hamilton–Jacobi–Bellman equations.

4.
We study an infinite-dimensional Black–Scholes–Barenblatt equation, a Hamilton–Jacobi–Bellman equation related to option pricing in the Musiela model of interest rate dynamics. We prove the existence and uniqueness of viscosity solutions of the Black–Scholes–Barenblatt equation and discuss their stochastic optimal control interpretation. We also show that in some cases the solution can be locally uniformly approximated by solutions of suitable finite-dimensional Hamilton–Jacobi–Bellman equations.


6.
In this paper, we study the optimal reinsurance/new business and investment (no-shorting) strategy for the mean-variance problem in two risk models: a classical risk model and a diffusion model. The problem is first reduced to a stochastic linear-quadratic (LQ) control problem with constraints. Then, the efficient frontiers and efficient strategies are derived explicitly by a verification theorem with the viscosity solutions of Hamilton–Jacobi–Bellman (HJB) equations, which differs from the approach given in Zhou et al. (SIAM J Control Optim 35:243–253, 1997). Furthermore, by comparison, we find that the efficient frontiers and strategies are identical under the two risk models. This work was supported by National Basic Research Program of China (973 Program) 2007CB814905 and National Natural Science Foundation of China (10571092).

7.
We analyze the process of mortgage loan securitization that has been a root cause of the current subprime mortgage crisis (SMC). In particular, we solve an optimal securitization problem for banks whose controls are the cash outflow rate for financing a portfolio of mortgage-backed securities (MBSs) and the bank's investment in MBSs. In our case, the associated Hamilton–Jacobi–Bellman equation (HJBE) has a smooth solution when the optimal controls are computed via a power utility function. Finally, we analyze this optimization problem and its connections with the SMC.

8.
The present paper is concerned with the study of the Hamilton–Jacobi–Bellman equation for the time optimal control problem associated with infinite-dimensional linear control systems from the point of view of continuous contingent solutions.

9.
We present a novel numerical method for the Hamilton–Jacobi–Bellman equation governing a class of optimal feedback control problems. The spatial discretization is based on a least-squares radial basis function (RBF) collocation method, and the time discretization is a backward Euler finite-difference scheme. A stability analysis is performed for the discretization method. An adaptive algorithm is proposed so that at each time step, the approximate solution can be constructed recursively and optimally. Numerical results are presented to demonstrate the efficiency and accuracy of the method.
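The spatial building block of such a method can be shown in isolation. Below is a minimal sketch of least-squares collocation with Gaussian RBFs in 1D; the target function, shape parameter `eps`, and node counts are illustrative assumptions, and the paper's full scheme couples this with backward Euler stepping and adaptivity:

```python
import numpy as np

# Least-squares RBF collocation: fit a function sampled at more
# collocation points than there are RBF centers.

def rbf_matrix(x_eval, centers, eps=5.0):
    r = x_eval[:, None] - centers[None, :]
    return np.exp(-(eps * r) ** 2)          # Gaussian RBF phi(r) = exp(-(eps*r)^2)

centers = np.linspace(0, 1, 15)             # RBF centers
x_col = np.linspace(0, 1, 40)               # collocation points (overdetermined)
f = np.sin(2 * np.pi * x_col)               # illustrative target function

A = rbf_matrix(x_col, centers)
w, *_ = np.linalg.lstsq(A, f, rcond=None)   # least-squares fit of RBF weights

x_test = np.linspace(0, 1, 200)
approx = rbf_matrix(x_test, centers) @ w
err = np.max(np.abs(approx - np.sin(2 * np.pi * x_test)))
```

In the time-dependent HJB setting, a system like this would be re-solved at every backward Euler step with the nonlinear Hamiltonian term evaluated at the collocation points.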

10.
We consider a stochastic control problem over an infinite horizon where the state process is influenced by an unobservable environment process. In particular, the hidden Markov model and the Bayesian model are included as special cases. This model under partial information is transformed into an equivalent one with complete information by using the well-known filter technique. In particular, the optimal controls and the value functions of the original and the transformed problem are the same. An explicit representation of the filter process, which is a piecewise-deterministic process, is also given. Then we propose two solution techniques for the transformed model. First, a generalized verification technique (with a generalized Hamilton–Jacobi–Bellman equation) is formulated where the strict differentiability of the value function is weakened to local Lipschitz continuity. Second, we present a discrete-time Markovian decision model by which we are able to compute an optimal control of our given problem. In this context we are also able to state a general existence result for optimal controls. The power of both solution techniques is finally demonstrated for a parallel queueing model with unknown service rates. In particular, the filter process is discussed in detail, the value function is explicitly computed and the optimal control is completely characterized in the symmetric case.

11.
In this paper, we outline an impulse stochastic control formulation for pricing variable annuities with a guaranteed minimum withdrawal benefit (GMWB), assuming the policyholder is allowed to withdraw funds continuously. We develop a numerical scheme for solving the Hamilton–Jacobi–Bellman (HJB) variational inequality corresponding to the impulse control problem. We prove the convergence of our scheme to the viscosity solution of the continuous withdrawal problem, provided a strong comparison result holds. The scheme can be easily generalized to price discrete withdrawal contracts. Numerical experiments are conducted, which show a region where the optimal control appears to be non-unique.

12.
This work is devoted to the study of a class of Hamilton–Jacobi–Bellman equations associated with an optimal control problem where the state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on the perturbation of the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle we are able to show that the solution of the equation is unique.

13.
We apply the Stochastic Perron Method, created by Bayraktar and Sîrbu, to a stochastic exit time control problem. Our main assumption is the validity of the Strong Comparison Result for the related Hamilton–Jacobi–Bellman (HJB) equation. Without relying on Bellman's optimality principle we prove that inside the domain the value function is continuous and coincides with a viscosity solution of the Dirichlet boundary value problem for the HJB equation.

14.
We consider a one-dimensional stochastic control problem that arises from queueing network applications. The state process, corresponding to the queue-length process, is given by a stochastic differential equation which reflects at the origin. The controller can choose the drift coefficient, which represents the service rate, and the buffer size b>0. When the queue length reaches b, new customers are rejected and this incurs a penalty. There are three types of costs involved: a "control cost" related to the dynamically controlled service rate, a "congestion cost" which depends on the queue length, and a "rejection penalty" for the rejection of customers. We consider the problem of minimizing the long-term average cost, also known as the ergodic cost criterion. We obtain an optimal drift rate (i.e. an optimal service rate) as well as the optimal buffer size b*>0. When the buffer size b>0 is fixed and there is no congestion cost, this problem is similar to the work of Ata, Harrison and Shepp (Ann. Appl. Probab. 15, 1145–1160, 2005); our method is quite different from theirs. To obtain a solution to the corresponding Hamilton–Jacobi–Bellman (HJB) equation, we analyze a family of ordinary differential equations. We make use of some specific characteristics of this family of solutions to obtain the optimal buffer size b*>0. A.P. Weerasinghe's research was supported by US Army Research Office grant W911NF0510032.

15.
Bing Sun (Department of Mathematics, Beijing Institute of Technology, Beijing 100081, People's Republic of China, and School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050, Johannesburg, South Africa; email: bzguo{at}iss.ac.cn). Received March 15, 2007; revision received October 17, 2007. A new algorithm for finding numerical solutions of optimal feedback control based on dynamic programming is developed. The algorithm is based on two observations: (1) the value function of the optimal control problem considered is the viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation, and (2) the gradient of the value function appears in the HJB equation in the form of a directional derivative. The algorithm proposes a discretization method for seeking optimal control–trajectory pairs based on a finite-difference scheme in time, solving the HJB equation together with the state equation. We apply the algorithm to a simple optimal control problem which can be solved analytically. The consistency of the numerical solution with its analytical counterpart indicates the effectiveness of the algorithm.

16.
In this paper, we use the variational iteration method (VIM) for optimal control problems. First, the optimal control problem is transformed into the Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear first-order hyperbolic partial differential equation. Then, the basic VIM is applied to construct a nonlinear optimal feedback control law. By this method, the control and state variables can be approximated as functions of time, and the numerical value of the performance index is obtained readily. The convergence of the method is discussed, and illustrative examples are presented to show the efficiency and reliability of the presented method.
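The core VIM correction step can be demonstrated on a scalar test ODE with a known exact solution. The following is a hedged sketch only: the equation u' + u = 0 with u(0) = 1 (exact solution e^{-t}), the Lagrange multiplier λ = -1, and the polynomial coefficient representation are illustrative choices, not the paper's HJB application:

```python
from fractions import Fraction

# VIM correction functional for u' + u = 0 with lambda = -1:
#   u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) + u_n(s)) ds
# Polynomials in t are represented as coefficient lists [c0, c1, ...].

def deriv(p):
    return [Fraction(k) * p[k] for k in range(1, len(p))] or [Fraction(0)]

def integ(p):                         # integral from 0 to t, zero constant term
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def vim_step(u):
    residual = add(deriv(u), u)       # u' + u
    return add(u, [-c for c in integ(residual)])

u = [Fraction(1)]                     # initial guess u0(t) = 1 = u(0)
for _ in range(6):
    u = vim_step(u)
# Each iteration adds one Taylor term; u is now the degree-6
# Taylor polynomial of e^{-t}: 1 - t + t^2/2 - ... + t^6/720.
```

This mirrors the abstract's claim that the iteration yields the state (and hence control) as an explicit function of time.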

17.
Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton–Jacobi–Bellman equation. These results are applied to some controlled stochastic partial differential equations.

18.
The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems has been formulated in order to obtain the optimal pest control strategy only through the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton–Jacobi–Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on the Lotka–Volterra models are provided to show the effectiveness of this method.
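The idea of suppressing a pest population by feeding introduced natural enemies back into a Lotka–Volterra model can be sketched numerically. All rates, the threshold `x_star`, the gain `K`, and the `simulate` helper below are illustrative assumptions, not the paper's optimal feedback law:

```python
# Predator-prey (Lotka-Volterra) model: x = pests, y = natural enemies.
# Control u >= 0 introduces additional enemies whenever the pest level
# exceeds a threshold x_star (enemies can be added but never removed).

def simulate(x0, y0, K, x_star=0.5, dt=1e-3, steps=20000):
    a, b, c, d = 1.0, 1.0, 1.0, 1.0          # illustrative model rates
    x, y = x0, y0
    peak = x                                  # track the worst pest outbreak
    for _ in range(steps):
        u = max(0.0, K * (x - x_star))        # proportional introduction of enemies
        dx = a * x - b * x * y                # pest growth minus predation
        dy = -c * y + d * x * y + u           # enemy decay, growth, and control input
        x += dt * dx                          # forward Euler step
        y += dt * dy
        peak = max(peak, x)
    return peak

peak_controlled = simulate(1.2, 0.8, K=2.0)
peak_uncontrolled = simulate(1.2, 0.8, K=0.0)
```

Even this crude proportional law damps the pest outbreak relative to the uncontrolled oscillation, which is the qualitative effect the paper's optimal feedback achieves with guarantees.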

19.
This paper derives explicit closed-form solutions for the efficient frontier and optimal investment strategy of the dynamic mean–variance portfolio selection problem under the constraint of a higher borrowing rate. The method used is the Hamilton–Jacobi–Bellman (HJB) equation in a stochastic piecewise linear-quadratic (PLQ) control framework. The results are illustrated on an example.

20.
This paper is concerned with processes that are max-plus counterparts of Markov diffusion processes governed by stochastic differential equations in the Itô sense. Concepts of max-plus martingale and max-plus stochastic differential equation are introduced. The max-plus counterparts of backward and forward PDEs for Markov diffusions turn out to be first-order PDEs of Hamilton–Jacobi–Bellman type. Max-plus additive integrals and a max-plus additive dynamic programming principle are considered. This leads to variational inequalities of Hamilton–Jacobi–Bellman type.
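A small illustration of why max-plus linearity connects to dynamic programming: in the max-plus semiring (⊕ = max, ⊗ = +), a matrix-vector product is exactly a Bellman maximization step, which is why the max-plus analogues of diffusion PDEs come out as first-order HJB-type equations. The reward matrix below is an arbitrary illustrative example:

```python
import numpy as np

NEG_INF = -np.inf                    # max-plus "zero" (forbidden transition)

def maxplus_matvec(A, v):
    # (A (x) v)_i = max_j (A[i, j] + v[j])  -- one Bellman step
    return np.max(A + v[None, :], axis=1)

# Rewards A[i, j] for moving from state i to state j.
A = np.array([[1.0, 3.0, NEG_INF],
              [NEG_INF, 0.0, 2.0],
              [4.0, NEG_INF, 1.0]])
v0 = np.array([0.0, 0.0, 0.0])       # terminal values

v1 = maxplus_matvec(A, v0)           # best one-step reward from each state
v2 = maxplus_matvec(A, v1)           # best two-step reward from each state
```

Iterating the max-plus product thus propagates the value function backward, exactly as the (linear, in the max-plus sense) backward equation of the paper does in continuous time.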


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号