Similar Documents
1.
Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton–Jacobi–Bellman equation. These results are applied to some controlled stochastic partial differential equations.
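As a hedged illustration of the mild formulation mentioned here (standard form, not taken from the paper), assume A generates a strongly continuous semigroup e^{tA} on a Hilbert space H and F is a Lipschitz nonlinearity; a mild solution of u_t = Au + F(u), u(0) = u_0, is then a fixed point of the variation-of-constants formula:

```latex
% Mild (variation-of-constants) formulation -- illustrative, standard form
\[
u(t) = e^{tA}u_0 + \int_0^t e^{(t-s)A} F\bigl(u(s)\bigr)\,\mathrm{d}s ,
\qquad t \in [0,T].
\]
```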

2.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use is shown to be necessary and sufficient for the homogenizability of the control problem. The results partially generalize homogenization results for Hamilton–Jacobi–Bellman equations obtained recently by Alvarez and Bardi via viscosity-solution methods. In contrast to their approach, we use mainly probabilistic arguments, and discuss a stochastic control interpretation for the limit equation.
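For orientation, a generic hedged form of a cost functional defined through a backward stochastic differential equation (illustrative notation, not the paper's exact setting) identifies the value with the first component of the solution (Y, Z) of

```latex
% Generic BSDE defining the cost/value -- illustrative form
\[
Y_t = \xi + \int_t^T f\bigl(s, Y_s, Z_s\bigr)\,\mathrm{d}s
      - \int_t^T Z_s\,\mathrm{d}W_s , \qquad 0 \le t \le T,
\]
```

so that, for a controlled forward state started at (t, x), the value function is V(t, x) = Y_t^{t,x}.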

3.
We show the effectiveness of viscosity-solution methods in asymptotic problems for second-order elliptic partial differential equations (PDEs) with a small parameter. We stress that these methods, based on the stability results of [3] and [16], apply without hard PDE calculations. We treat two examples from [11] and [23]. Moreover, we generalize the results to Hamilton–Jacobi–Bellman equations with a small parameter. H. Ishii was supported in part by the AFOSR under Grant No. AFOSR 85-0315 and by the Division of Applied Mathematics, Brown University.

4.
In this paper, we give a probabilistic interpretation for a coupled system of Hamilton–Jacobi–Bellman equations using the value function of a stochastic control problem. First, we introduce this stochastic control problem; then we prove that its value function is deterministic and satisfies a (strong) dynamic programming principle. Finally, the value function is shown to be the unique viscosity solution of the coupled system of Hamilton–Jacobi–Bellman equations.

5.
The present paper is concerned with the study of the Hamilton–Jacobi–Bellman equation for the time optimal control problem associated with infinite-dimensional linear control systems from the point of view of continuous contingent solutions.

6.
    
We study an infinite-dimensional Black–Scholes–Barenblatt equation, which is a Hamilton–Jacobi–Bellman equation related to option pricing in the Musiela model of interest rate dynamics. We prove the existence and uniqueness of viscosity solutions of the Black–Scholes–Barenblatt equation and discuss their stochastic optimal control interpretation. We also show that in some cases the solution can be locally uniformly approximated by solutions of suitable finite-dimensional Hamilton–Jacobi–Bellman equations.
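To indicate the type of fully nonlinear equation involved, here is a hedged one-dimensional analogue from uncertain-volatility option pricing (not the paper's infinite-dimensional equation): the finite-dimensional Black–Scholes–Barenblatt equation

```latex
% One-dimensional Black-Scholes-Barenblatt equation (illustrative analogue):
% the volatility is only known to lie in an interval [sigma_min, sigma_max]
\[
\partial_t V + \sup_{\sigma \in [\sigma_{\min},\,\sigma_{\max}]}
  \Bigl\{ \tfrac{1}{2}\,\sigma^2 s^2\,\partial_{ss} V \Bigr\}
  + r\,s\,\partial_s V - r\,V = 0 , \qquad V(T,s) = g(s).
\]
```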

8.
This work is devoted to the study of a class of Hamilton–Jacobi–Bellman equations associated with an optimal control problem where the state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on the perturbation of the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle we are able to show that the solution of the equation is unique.

9.
This paper is concerned with the optimal production planning in a dynamic stochastic manufacturing system consisting of a single machine that is failure prone and facing a constant demand. The objective is to choose the rate of production over time in order to minimize the long-run average cost of production and surplus. The analysis proceeds with a study of the corresponding problem with a discounted cost. It is shown using the vanishing discount approach that the Hamilton–Jacobi–Bellman equation for the average cost problem has a solution giving rise to the minimal average cost and the so-called potential function. The result helps in establishing a verification theorem. Finally, the optimal control policy is specified in terms of the potential function.
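A hedged sketch of the vanishing-discount argument mentioned here (standard form, not quoted from the paper): if V_ρ denotes the value function of the ρ-discounted problem with controlled generator L^u and running cost c, then

```latex
% Discounted HJB and its vanishing-discount limit (illustrative; amsmath assumed)
\[
\rho V_\rho(x) = \min_{u}\bigl\{ c(x,u) + L^{u} V_\rho(x) \bigr\},
\qquad
\rho V_\rho(x_0) \to \lambda, \quad V_\rho(x) - V_\rho(x_0) \to W(x)
\quad \text{as } \rho \to 0,
\]
```

and the limit pair (λ, W), the minimal average cost and the potential function, satisfies the average-cost HJB equation λ = min_u { c(x,u) + L^u W(x) }.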

10.
This paper studies the scheduling problem for two products on a single production facility. The objective is to specify a production and setup policy that minimizes the average inventory, backlog, and setup costs. Assuming that the production rate can be adjusted during production runs, we provide a closed form for an optimal production and setup schedule. Dynamic programming and the Hamilton–Jacobi–Bellman equation are used to verify the optimality of the obtained policy.

11.
In this paper we provide estimates of the rates of convergence of monotone approximation schemes for non-convex equations in one space dimension. The equations under consideration are degenerate elliptic Isaacs equations with x-dependent coefficients, and the results apply in particular to certain finite difference methods and to control schemes based on the dynamic programming principle. Recently, Krylov, Barles, and Jakobsen obtained similar estimates for convex Hamilton–Jacobi–Bellman equations in arbitrary space dimensions. Our results are only valid in one space dimension, but they are the first results of this type for non-convex second-order equations.
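As a hedged illustration of the kind of monotone, dynamic-programming-based scheme such estimates cover (a made-up one-dimensional test problem, not the paper's analysis), the following sketch iterates an upwind discretization of a degenerate elliptic Isaacs equation:

```python
# Illustrative sketch (made-up test problem, not the paper's analysis):
# a monotone, upwind finite-difference scheme for a 1D degenerate elliptic
# Isaacs equation
#   min_a max_b { -1/2 sigma(x,a)^2 u'' - mu(x,b) u' + c u - f(x,a,b) } = 0
# on (0,1) with u(0) = u(1) = 0, iterated in dynamic-programming form.
import numpy as np

N = 50                       # interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
A_ctrl = [0.0, 0.5, 1.0]     # controls of the minimizing player
B_ctrl = [-1.0, 0.0, 1.0]    # controls of the maximizing player
c = 1.0                      # zeroth-order coefficient (> 0 gives a contraction)

sigma = lambda a: a * x * (1.0 - x)             # degenerate diffusion coefficient
mu = lambda b: b * (0.5 - x)                    # controlled drift
f = lambda a, b: x + 0.5 * a**2 - 0.25 * b**2   # running cost (made up)

u = np.zeros(N)
for it in range(50000):
    up = np.concatenate([u[1:], [0.0]])     # u_{i+1} (Dirichlet 0 at x = 1)
    um = np.concatenate([[0.0], u[:-1]])    # u_{i-1} (Dirichlet 0 at x = 0)
    over_a = None
    for a in A_ctrl:
        over_b = None
        for b in B_ctrl:
            s2, m = sigma(a) ** 2, mu(b)
            mp, mm = np.maximum(m, 0.0), np.maximum(-m, 0.0)
            denom = s2 / h**2 + (mp + mm) / h + c
            cand = ((0.5 * s2 / h**2 + mp / h) * up
                    + (0.5 * s2 / h**2 + mm / h) * um + f(a, b)) / denom
            over_b = cand if over_b is None else np.maximum(over_b, cand)
        over_a = over_b if over_a is None else np.minimum(over_a, over_b)
    if np.max(np.abs(over_a - u)) < 1e-10:
        break
    u = over_a

print("fixed-point iterations:", it, "  max value:", u.max())
```

The upwinded coefficients are non-negative and sum to less than one after division by the diagonal, which is what makes the update monotone and a sup-norm contraction.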

12.
This paper derives explicit closed-form solutions for the efficient frontier and the optimal investment strategy in the dynamic mean–variance portfolio selection problem under the constraint of a higher borrowing rate. The method used is the Hamilton–Jacobi–Bellman (HJB) equation in a stochastic piecewise linear-quadratic (PLQ) control framework. The results are illustrated on an example.

13.
Generalizing an idea from deterministic optimal control, we construct a posteriori error estimates for the spatial discretization error of the stochastic dynamic programming method based on a discrete Hamilton–Jacobi–Bellman equation. These error estimates are shown to be efficient and reliable; furthermore, a priori bounds on the estimates, depending on the regularity of the approximate solution, are derived. Based on these error estimates we propose an adaptive space discretization scheme whose performance is illustrated by two numerical examples. Mathematics Subject Classification (2000): 93E20, 65N50, 49L20, 49M25, 65N15. Acknowledgments: This research was supported by the Center for Empirical Macroeconomics, University of Bielefeld. The support is gratefully acknowledged. I would also like to thank an anonymous referee who suggested several improvements for the paper.
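A hedged sketch of the adaptive idea (a made-up one-dimensional discounted control problem and a simple residual-type indicator; this is not the paper's estimator or model): solve a discrete HJB equation on the current grid, evaluate the dynamic-programming residual of the interpolated solution between nodes, and refine where the indicator is large.

```python
# Illustrative adaptive refinement loop for a semi-Lagrangian discretization
# of a simple 1D discounted HJB equation (made-up model, not the paper's):
#   minimize  int e^{-delta t} (x^2 + u^2) dt,  x' = u,  u in {-1, 0, 1},
# with the state clipped to [-1, 1].
import numpy as np

delta, tau = 1.0, 0.05
controls = np.array([-1.0, 0.0, 1.0])

def dp_rhs(pts, nodes, values):
    """Discrete dynamic-programming operator applied at the points pts."""
    best = np.full(pts.shape, np.inf)
    for u in controls:
        nxt = np.clip(pts + tau * u, -1.0, 1.0)
        cand = tau * (pts**2 + u**2) + (1.0 - delta * tau) * np.interp(nxt, nodes, values)
        best = np.minimum(best, cand)
    return best

def solve_dp(nodes):
    """Value iteration for the discrete HJB equation on the given grid."""
    v = np.zeros_like(nodes)
    for _ in range(2000):
        v_new = dp_rhs(nodes, nodes, v)
        if np.max(np.abs(v_new - v)) < 1e-9:
            break
        v = v_new
    return v

nodes = np.linspace(-1.0, 1.0, 9)        # coarse initial grid
for level in range(5):
    v = solve_dp(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    # a posteriori indicator: DP residual of the interpolated solution at midpoints
    eta = np.abs(dp_rhs(mids, nodes, v) - np.interp(mids, nodes, v))
    marked = eta > 0.5 * eta.max()
    print(f"level {level}: {nodes.size} nodes, max indicator {eta.max():.2e}")
    nodes = np.sort(np.concatenate([nodes, mids[marked]]))   # bisect marked cells
```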

14.
Using a semi-discrete model that describes the heat transfer in a continuous casting process for steel, this paper addresses an optimal control problem for the continuous casting process in the secondary cooling zone with water-spray control. The approach is based on the Hamilton–Jacobi–Bellman equation satisfied by the value function. It is shown that the value function is the viscosity solution of the Hamilton–Jacobi–Bellman equation. The optimal feedback control is found numerically by solving the associated Hamilton–Jacobi–Bellman equation with a designed finite difference scheme. The optimality of the obtained control is verified numerically through comparisons with different admissible controls. A detailed study of a low-carbon billet caster is presented.

15.
This is the first of two papers regarding a family of linear convex control problems in Hilbert spaces and the related Hamilton–Jacobi–Bellman equations. The framework is motivated by an application to boundary control of a PDE modeling investments with vintage capital. Existence and uniqueness of a strong solution (namely, the limit of classical solutions of approximating equations, as introduced by Barbu and Da Prato) is investigated. Moreover, such a solution is proved to be C^1 in the space variable.

16.
In this paper, we develop a new method to approximate the solution to the Hamilton–Jacobi–Bellman (HJB) equation which arises in optimal control when the plant is modeled by nonlinear dynamics. The approximation consists of two steps. First, successive approximation is used to reduce the HJB equation to a sequence of linear partial differential equations. These equations are then approximated via the Galerkin spectral method. The resulting algorithm has several important advantages over previously reported methods. Namely, the resulting control is in feedback form and its associated region of attraction is well defined. In addition, all computations are performed off-line and the control can be made arbitrarily close to optimal. Accordingly, this paper presents a new tool for designing nonlinear control systems that adhere to a prescribed integral performance criterion.
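A hedged, scalar sketch of the two-step idea (a made-up example; the dynamics, cost, and polynomial basis are illustrative and not taken from the paper): each pass solves the linear generalized HJB equation by Galerkin projection onto even polynomials, then updates the feedback from the gradient of the value approximation.

```python
# Successive approximation + Galerkin sketch for the HJB equation of
#   x' = f(x) + u,   J = int (x^2 + u^2) dt,   on Omega = [-1, 1].
# Each pass solves the *linear* generalized HJB equation
#   V'(x) (f(x) + u_i(x)) + x^2 + u_i(x)^2 = 0
# for V in span{x^2, x^4, ...}, then updates u_{i+1}(x) = -V'(x) / 2.
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
dx = xs[1] - xs[0]
f = xs - xs**3                              # open-loop dynamics (made up)
M = 4                                       # number of even polynomial basis functions
phi = np.stack([xs**(2 * j) for j in range(1, M + 1)])
dphi = np.stack([2 * j * xs**(2 * j - 1) for j in range(1, M + 1)])

u = -2.0 * xs                               # initial stabilizing feedback guess
for it in range(50):
    cl = f + u                              # closed-loop drift under u_i
    # Galerkin system  A c = b  obtained by testing the GHJB residual with phi_k
    A = np.array([[np.sum(dphi[j] * cl * phi[k]) * dx for j in range(M)]
                  for k in range(M)])
    b = np.array([-np.sum((xs**2 + u**2) * phi[k]) * dx for k in range(M)])
    coef = np.linalg.solve(A, b)
    dV = coef @ dphi                        # V'(x) of the Galerkin approximation
    u_new = -0.5 * dV                       # pointwise feedback update (R = 1, g = 1)
    if np.max(np.abs(u_new - u)) < 1e-9:
        break
    u = u_new

# near the origin the computed feedback slope should approach the LQR gain
gain = (u[1001] - u[999]) / (2.0 * dx)
print("iterations:", it, "  feedback slope at 0:", gain)
```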

17.
The simultaneous planning of production and maintenance in a flexible manufacturing system is considered in this paper. The manufacturing system is composed of one machine that produces a single product. A preventive maintenance plan is in place to reduce the failure rate of the machine. This paper differs from previous research in this area in two ways. First, the failure rate of the machine is assumed to be a function of its age. Second, we assume that the demand for the manufactured product is time-dependent and that its rate depends on the level of advertisement for that product. The objective is to maximize the expected discounted total profit of the firm over an infinite time horizon. In the process of finding a solution to the problem, we first characterize an optimal control by introducing a set of Hamilton–Jacobi–Bellman partial differential equations. We then observe that, under practical assumptions, this set of equations cannot be solved analytically. Thus, to find a suboptimal control, we approximate the original stochastic optimal control model by a discrete-time deterministic optimal control problem. Finally, proposing a numerical method to solve the steady-state Riccati equation, we approximate a suboptimal solution to the problem.
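As a hedged illustration of the last step (made-up system matrices, not the paper's production/maintenance model), a steady-state discrete-time Riccati equation can be approximated numerically by iterating the finite-horizon recursion until it stops changing:

```python
# Illustrative: steady-state solution of a discrete-time Riccati equation by
# iterating the finite-horizon recursion, then reading off the feedback gain.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # discretized dynamics (made up)
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])                   # state cost
R = np.array([[0.5]])                     # control cost

P = Q.copy()
for k in range(10000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain for the current P
    P_next = Q + A.T @ P @ (A - B @ K)                  # Riccati recursion
    if np.max(np.abs(P_next - P)) < 1e-12:
        break
    P = P_next

print("steady-state P:\n", P)
print("feedback gain K:", K)
```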

18.
In this paper we study backward stochastic differential equations (BSDEs) driven by the compensated random measure associated with a given pure jump Markov process X on a general state space K. We apply these results to prove well-posedness of a class of nonlinear parabolic differential equations on K that generalize the Kolmogorov equation of X. Finally, we formulate and solve optimal control problems for Markov jump processes, relating the value function and the optimal control law to an appropriate BSDE; this also allows us to construct the unique solution to the Hamilton–Jacobi–Bellman equation probabilistically and to identify it with the value function.
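A hedged, generic form of the type of equation involved (illustrative notation, not the paper's exact statement; q denotes the compensated random measure of the jump process):

```latex
% BSDE driven by the compensated random measure q(ds, dy) -- generic form
\[
Y_t = g(X_T) + \int_t^T f\bigl(s, X_s, Y_s, Z_s(\cdot)\bigr)\,\mathrm{d}s
      - \int_t^T \!\!\int_K Z_s(y)\, q(\mathrm{d}s\,\mathrm{d}y),
\qquad 0 \le t \le T.
\]
```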

19.
The aim of this paper is to apply methods from optimal control theory and from the theory of dynamical systems to the mathematical modeling of biological pest control. The linear feedback control problem for nonlinear systems is formulated so as to obtain the optimal pest control strategy only through the introduction of natural enemies. Asymptotic stability of the closed-loop nonlinear Kolmogorov system is guaranteed by means of a Lyapunov function, which can be seen to be the solution of the Hamilton–Jacobi–Bellman equation, thus guaranteeing both stability and optimality. Numerical simulations for three possible scenarios of biological pest control based on Lotka–Volterra models are provided to show the effectiveness of this method.
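A hedged simulation sketch in the same spirit (made-up parameters and gains; not the paper's three scenarios): the pest is the prey, the control is the introduction rate of natural enemies, and a linear feedback on the deviation from a desired low-pest steady state is applied.

```python
# Illustrative Lotka-Volterra pest-control simulation (made-up data):
# x = pest (prey), y = natural enemy (predator), u = enemy introduction rate.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 0.8, 0.4        # Lotka-Volterra parameters (made up)
x_star = 0.5                            # desired pest level (below c/d = 2.0)
y_star = a / b                          # enemy level that holds the pest at x_star
u_star = y_star * (c - d * x_star)      # constant introduction rate at the steady state
k1, k2 = 0.0, 0.5                       # feedback gains (illustrative)

def rhs(t, z):
    x, y = z
    # feedback on the deviation, clipped at zero (enemies cannot be removed)
    u = max(u_star - k1 * (x - x_star) - k2 * (y - y_star), 0.0)
    dx = x * (a - b * y)
    dy = y * (-c + d * x) + u
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 60.0), [1.8, 0.8], max_step=0.05)
print("final pest level:", sol.y[0, -1], " target:", x_star)
print("final enemy level:", sol.y[1, -1], " target:", y_star)
```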

20.
Optimal investment and reinsurance of an insurer with model uncertainty
We study optimal investment–reinsurance problems for an insurance company facing model uncertainty via a game-theoretic approach. The insurance company invests in a capital market index whose dynamics follow a geometric Brownian motion. The risk process of the company is governed by either a compound Poisson process or its diffusion approximation. The company can also transfer a certain proportion of the insurance risk to a reinsurance company by purchasing reinsurance. The optimal investment–reinsurance problems with model uncertainty are formulated as two-player, zero-sum stochastic differential games between the insurance company and the market. We provide verification theorems for the Hamilton–Jacobi–Bellman–Isaacs (HJBI) solutions to the optimal investment–reinsurance problems and derive closed-form solutions to the problems.
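For orientation, a generic hedged form of the Hamilton–Jacobi–Bellman–Isaacs equation behind such zero-sum formulations (π stands for the insurer's investment–reinsurance strategy and θ for the market's choice; the notation is illustrative, not the paper's):

```latex
% Generic HJBI equation of a zero-sum stochastic differential game
\[
\partial_t V(t,x)
  + \sup_{\theta}\,\inf_{\pi}\Bigl\{ \mathcal{L}^{\pi,\theta} V(t,x)
      + f(t,x,\pi,\theta) \Bigr\} = 0 ,
\qquad V(T,x) = g(x),
\]
```

where under an Isaacs-type condition the order of the sup and the inf may be interchanged.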
