Similar Literature
20 similar documents found.
1.
This work is devoted to the study of a class of Hamilton–Jacobi–Bellman equations associated with an optimal control problem whose state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on perturbing the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle, we show that the solution of the equation is unique.
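For orientation, a schematic version of the objects involved (the drift b, diffusion σ, running cost ℓ and terminal cost g below are generic placeholders, and the maximal monotone operator appearing in the state inclusion is omitted; the paper's exact formulation may differ): a Bolza-type value function has the form

\[ V(t,x) \;=\; \inf_{u}\;\mathbb{E}\Big[\int_t^T \ell(X_s,u_s)\,ds + g(X_T)\;\Big|\;X_t = x\Big], \]

and the associated HJB equation reads

\[ \partial_t V(t,x) + \inf_{u}\Big\{\langle b(x,u),\,D_x V(t,x)\rangle + \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,u)\,D_x^2 V(t,x)\big) + \ell(x,u)\Big\} = 0, \qquad V(T,x) = g(x). \]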

2.
In this paper, the authors investigate the optimal conversion rate at which land use is irreversibly converted from biodiversity conservation to agricultural production. This problem is formulated as a stochastic control model and then transformed into an HJB equation involving a free boundary. Since the state equation has a singularity, it is difficult to derive the boundary value condition for the HJB equation directly. The authors provide a new method to overcome this difficulty by constructing an auxiliary stochastic control problem, and impose a proper boundary value condition. Moreover, they establish the existence and uniqueness of the viscosity solution of the HJB equation. Finally, they propose a stable numerical method for the HJB equation involving a free boundary and show some numerical results.

3.
This paper derives explicit closed-form solutions for the efficient frontier and the optimal investment strategy of the dynamic mean–variance portfolio selection problem under the constraint of a higher borrowing rate. The method used is the Hamilton–Jacobi–Bellman (HJB) equation in a stochastic piecewise linear-quadratic (PLQ) control framework. The results are illustrated with an example.
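One common way to encode the higher borrowing rate, given here only as an illustrative assumption (the paper's exact model may differ): if π_t denotes the amount invested in the risky asset with appreciation rate μ and volatility σ, and borrowing is charged at a rate R larger than the lending rate r, the wealth dynamics become

\[ dX_t = \Big[\mu\,\pi_t + r\,(X_t - \pi_t)^{+} - R\,(\pi_t - X_t)^{+}\Big]dt + \sigma\,\pi_t\,dW_t, \qquad R > r. \]

The kink at π_t = X_t is what places the problem in the piecewise linear-quadratic (PLQ) control framework mentioned above.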

4.
In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated by solving a Hamilton–Jacobi–Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.
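As a rough illustration of the kind of implicit finite-difference HJB solve referred to above (this is not the authors' code; the equation shown is the standard worst-case uncertain-volatility pricing PDE, the parameters and boundary conditions are illustrative assumptions, and the monotonicity safeguards needed for the convergence proof are omitted):

import numpy as np

# Sketch: implicit finite differences for the uncertain volatility (Black-Scholes-Barenblatt) PDE
#   V_t + 0.5*sigma(Gamma)^2*S^2*V_SS + r*S*V_S - r*V = 0,
# where sigma(Gamma) = sigma_max if Gamma >= 0 else sigma_min (worst-case price of a call).
S_max, N, M = 200.0, 200, 100
r, sig_min, sig_max, T, K = 0.05, 0.1, 0.3, 1.0, 100.0
S = np.linspace(0.0, S_max, N + 1)
dS, dt = S[1] - S[0], T / M
V = np.maximum(S - K, 0.0)                      # terminal payoff

for m in range(M):                              # march backwards in time
    tau = (m + 1) * dt                          # time to maturity at the new level
    bc_hi = S_max - K * np.exp(-r * tau)        # Dirichlet boundary at S_max
    V_old = V.copy()
    for _ in range(20):                         # policy-type iteration within the step
        Gamma = np.zeros_like(V)
        Gamma[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
        sig = np.where(Gamma >= 0.0, sig_max, sig_min)

        # tridiagonal implicit system A V_new = rhs on the interior nodes
        a = dt * (0.5 * sig[1:-1]**2 * S[1:-1]**2 / dS**2 - 0.5 * r * S[1:-1] / dS)
        b = 1.0 + dt * (sig[1:-1]**2 * S[1:-1]**2 / dS**2 + r)
        c = dt * (0.5 * sig[1:-1]**2 * S[1:-1]**2 / dS**2 + 0.5 * r * S[1:-1] / dS)
        A = np.diag(b) + np.diag(-a[1:], -1) + np.diag(-c[:-1], 1)
        rhs = V_old[1:-1].copy()
        rhs[-1] += c[-1] * bc_hi

        V_new = V.copy()
        V_new[1:-1] = np.linalg.solve(A, rhs)
        V_new[0], V_new[-1] = 0.0, bc_hi
        if np.max(np.abs(V_new - V)) < 1e-8:
            V = V_new
            break
        V = V_new

print("worst-case price at S = K:", np.interp(K, S, V))

In the calibration setting of the paper, objective-function evaluations would involve PDE solves of roughly this type embedded in an outer optimization over the volatility bounds.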

5.
In [21], Sethi et al. introduced a particular new-product adoption model and determined the optimal advertising and pricing policies of an associated deterministic infinite-horizon discounted control problem. Their analysis rests on the fact that the corresponding Hamilton–Jacobi–Bellman (HJB) equation is an ordinary nonlinear differential equation with an analytical solution. In this paper, generalizations of their model are considered. We take arbitrary adoption and saturation effects into account and solve finite- and infinite-horizon discounted variations of the associated control problems. If the horizon is finite, the HJB equation is a first-order nonlinear partial differential equation with specific boundary conditions. For a fairly general class of models we show that these partial differential equations have analytical solutions. Explicit formulas for the value function and the optimal policies are derived. The controlled Bass model with isoelastic demand is a special example of the class of controlled adoption models to be examined and is analyzed in some detail.
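Schematically (the profit rate Π and adoption dynamics f below are generic placeholders; see the paper for the concrete adoption and saturation effects), the two HJB equations referred to are, for the infinite-horizon discounted problem,

\[ \rho\,V(x) \;=\; \max_{u}\big\{\Pi(x,u) + f(x,u)\,V'(x)\big\}, \]

and, for the finite-horizon problem with current-value value function V(t,x),

\[ \rho\,V(t,x) \;=\; \partial_t V(t,x) + \max_{u}\big\{\Pi(x,u) + f(x,u)\,\partial_x V(t,x)\big\}, \]

a first-order nonlinear PDE with the boundary conditions described above.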

6.
This paper studies the production-inventory problem of minimizing the expected discounted present value of production costs in a manufacturing system with degenerate stochastic demand. We establish the existence of a unique solution of the Hamilton–Jacobi–Bellman (HJB) equation associated with this problem; the optimal control is then given by this solution of the HJB equation.

7.

This paper considers a robust optimal portfolio problem under the Heston model, in which the risky asset price depends on its historical performance. The financial market consists of a riskless asset and a risky asset whose price is governed by a stochastic delay equation. The objective is to choose the investment strategy that maximizes the minimal expected utility of terminal wealth. By employing the dynamic programming principle and the Hamilton–Jacobi–Bellman (HJB) equation, we obtain an explicit expression for the optimal control and an explicit solution of the corresponding HJB equation. In addition, a verification theorem is provided to ensure that the value function is indeed the solution of the HJB equation. Finally, numerical examples illustrate the relationship between the optimal strategy and the model parameters.
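For reference, the standard Heston dynamics (written here without the delay/historical-performance feature that the paper adds to the risky asset, so this is only an illustrative baseline):

\[ dS_t = S_t\big(\mu\,dt + \sqrt{v_t}\,dW_t^{1}\big), \qquad dv_t = \kappa(\theta - v_t)\,dt + \sigma_v\sqrt{v_t}\,dW_t^{2}, \qquad d\langle W^{1}, W^{2}\rangle_t = \rho\,dt. \]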


8.
In this paper, we use the variational iteration method (VIM) for optimal control problems. First, the optimal control problem is transformed into a Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear first-order hyperbolic partial differential equation. Then, the basic VIM is applied to construct a nonlinear optimal feedback control law. With this method, the control and state variables can be approximated as functions of time, and the numerical value of the performance index is obtained readily. The convergence of the method is addressed, and some illustrative examples are presented to show the efficiency and reliability of the presented method.
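For a generic evolution equation \(Lv + Nv = g(t)\) (with L a linear and N a nonlinear operator), the VIM correction functional has the standard form

\[ v_{n+1}(t) \;=\; v_n(t) + \int_0^{t} \lambda(s)\,\big[L v_n(s) + N\tilde v_n(s) - g(s)\big]\,ds, \]

where \(\lambda\) is a general Lagrange multiplier identified via variational theory and \(\tilde v_n\) denotes a restricted variation; how this is specialized to the HJB equation and the feedback law is the subject of the paper (the operators above are placeholders, not the paper's notation).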

9.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (a random partial differential equation) associated with a linear backward Hamilton-Jacobi-Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends Wong-Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how the latter arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the “cost of perfect information”, the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found ensuring that this equation has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant under the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

10.
This paper treats a finite time horizon optimal control problem in which the controlled state dynamics are governed by a general system of stochastic functional differential equations with a bounded memory. An infinite dimensional Hamilton–Jacobi–Bellman (HJB) equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation.

11.
In power production problems, maximum power and minimum entropy production are inherently connected by the Gouy–Stodola law. In this paper various mathematical tools are applied to the dynamic optimization of power-maximizing paths, with special attention paid to nonlinear systems. Maximum power and/or minimum entropy production are governed by Hamilton–Jacobi–Bellman (HJB) equations which describe the value function of the problem and the associated controls. Yet in many cases the optimal relaxation curve is non-exponential, the governing HJB equations do not admit classical solutions, and one has to work with viscosity solutions. Systems with nonlinear kinetics (e.g. radiation engines) are particularly difficult; thus discrete counterparts of the continuous HJB equations and numerical approaches are recommended. Discrete algorithms of dynamic programming (DP), which lead to power limits and associated availabilities, are effective. We consider the convergence of discrete algorithms to viscosity solutions of the HJB equations, discrete approximations, and the role of the Lagrange multiplier λ associated with the duration constraint. In analytical discrete schemes, the Legendre transformation is a significant tool leading to the original work function. We also describe numerical algorithms of dynamic programming and consider dimensionality reduction in these algorithms. Indications of the method's potential for other systems, in particular chemical energy systems, are given.
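A generic discrete dynamic-programming recursion of the kind referred to (the stage power function g, the kinetics f and the stage duration Δτ are placeholders; the multiplier λ mentioned above enters through the constraint on total duration):

\[ V_n(x) \;=\; \max_{u}\Big\{ g(x,u)\,\Delta\tau + V_{n-1}\big(x + f(x,u)\,\Delta\tau\big) \Big\}, \qquad V_0 \equiv 0. \]

Under suitable conditions, such recursions converge to viscosity solutions of the continuous HJB equation as \(\Delta\tau \to 0\), which is the convergence issue discussed above.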

12.
This article studies the optimization problem of maximizing the expected discounted present value of lifetime utility of consumption in the framework of the one-sector neoclassical growth model with a Constant Elasticity of Substitution (CES) production function. We establish the existence of a classical solution of the Hamilton–Jacobi–Bellman (HJB) equation associated with this problem by the technique of viscosity solutions, under strict concavity of the utility function, and hence derive an optimal consumption policy from the optimality conditions in the HJB equation.
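In schematic form (an assumption about the setup rather than the paper's exact model, with β the discount rate, δ the depreciation rate, U the utility function and σ(k) a placeholder for the stochastic perturbation), the HJB equation of such a stochastic growth problem reads

\[ \beta\,V(k) \;=\; \max_{c \ge 0}\Big\{ U(c) + \big(f(k) - \delta k - c\big)V'(k) + \tfrac12\,\sigma^{2}(k)\,V''(k) \Big\}, \]

with a CES production function in intensive form such as \( f(k) = A\big(\alpha k^{\rho} + 1-\alpha\big)^{1/\rho} \), \( \rho < 1,\ \rho \neq 0 \).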

13.
This paper concerns an optimal dividend-penalty problem for risk models with surplus-dependent premiums. The objective is to maximize the difference between the expected cumulative discounted dividend payments received until the moment of ruin and a discounted penalty payment taken at the moment of ruin. Since the value function may not be smooth enough to be a classical solution of the HJB equation, viscosity solutions are employed. The optimal value function can be characterized as the smallest viscosity supersolution of the HJB equation, and the optimal dividend-penalty strategy has a band structure. Finally, some numerical examples with gamma-distributed claims are analyzed.
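In schematic notation (placeholders: L the cumulative dividend process, τ the ruin time, q the discount rate, w the penalty function; the surplus-dependent premium enters through the dynamics of the surplus X), the value function described above is

\[ V(x) \;=\; \sup_{L}\;\mathbb{E}_x\Big[\int_0^{\tau} e^{-q t}\,dL_t \;-\; e^{-q\tau}\,w\big(X_{\tau}\big)\Big]. \]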

14.
In this paper we are interested in an investment problem with stochastic volatilities and portfolio constraints on amounts. We model the risky assets by jump-diffusion processes and consider an exponential utility function. The objective is to maximize the expected utility of the investor's terminal wealth. The value function is known to be a viscosity solution of an integro-differential Hamilton-Jacobi-Bellman (HJB) equation, which cannot be solved when the number of risky assets exceeds three. Thanks to an exponential transformation, we reduce the nonlinearity of the HJB equation to a semilinear equation. We prove the existence of a smooth solution to the latter equation and state a verification theorem which relates this solution to the value function. We present an example that shows the importance of this reduction for the numerical study of the optimal portfolio. We then compute the optimal investment strategy by solving the associated optimization problem.
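One standard exponential transformation of this kind, shown only as an illustration (the paper's exact change of variables may differ): with exponential utility \(U(x) = -e^{-\gamma x}\), interest rate r and a volatility factor y, the ansatz

\[ V(t,x,y) \;=\; -\exp\!\big(-\gamma\,x\,e^{r(T-t)}\big)\,\Phi(t,y) \]

factors the wealth variable out of the value function, so that the fully nonlinear HJB equation collapses to an equation for \(\Phi\) in the factor variable alone.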

15.
We study a stochastic optimal control problem for a partially observed diffusion. Using the control randomization method of Bandini et al. (2018), we prove a corresponding randomized dynamic programming principle (DPP) for the value function, which is obtained from a flow property of an associated filter process. This DPP is the key step towards our main result: a characterization of the value function of the partial observation control problem as the unique viscosity solution of the corresponding dynamic programming Hamilton–Jacobi–Bellman (HJB) equation. The latter is formulated as a new, fully nonlinear partial differential equation on the Wasserstein space of probability measures. An important feature of our approach is that it does not require any non-degeneracy condition on the diffusion coefficient, and no condition is imposed to guarantee existence of a density for the filter process solving the controlled Zakai equation. Finally, we give an explicit solution to our HJB equation in the case of a partially observed non-Gaussian linear–quadratic model.

16.
This paper considers the problem of maximizing expected utility from consumption and terminal wealth under model uncertainty in a general semimartingale market, where an agent with initial capital and a random endowment can invest. To find a solution to the investment problem we use the martingale method. We first prove that, under appropriate assumptions, a unique solution to the investment problem exists. We then deduce that the value functions of the primal and dual problems are convex conjugates of each other. Furthermore, we consider a diffusion-jump model in which the coefficients depend on the state of a Markov chain and the investor faces ambiguity about the intensity of the underlying Poisson process. Finally, for an agent with logarithmic utility, we use the stochastic control method to derive the Hamilton-Jacobi-Bellman (HJB) equation; its solution can be determined numerically, and we show how the optimal investment strategy can thereby be computed.
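The conjugacy relation referred to is, schematically (u and v denoting the primal and dual value functions as functions of initial capital x and dual variable y),

\[ v(y) \;=\; \sup_{x>0}\big(u(x) - xy\big), \qquad u(x) \;=\; \inf_{y>0}\big(v(y) + xy\big). \]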

17.
We consider a stochastic optimal control problem in a market model with temporary and permanent price impact, which is related to an expected utility maximization problem under a finite fuel constraint. We establish the initial condition fulfilled by the corresponding value function and show a first regularity property. Moreover, we prove the existence and uniqueness of an optimal strategy under rather mild model assumptions. This then allows us to derive further regularity properties of the value function, in particular its continuity and partial differentiability. As a consequence of the continuity of the value function, we prove a dynamic programming principle without appealing to the classical measurable selection arguments. This permits us to establish a tight relation between our value function and a nonlinear, degenerate parabolic Hamilton–Jacobi–Bellman (HJB) equation with a singularity. To conclude, we show a comparison principle, which allows us to characterize our value function as the unique viscosity solution of the HJB equation.

18.
We address the problem of finding semi-global solutions to optimal feedback control and the Hamilton–Jacobi–Bellman (HJB) equation. Using the solution of an HJB equation, a feedback optimal control law can be implemented in real time with minimal computational load. However, except for systems with two or three state variables, using traditional techniques to numerically find a semi-global solution of an HJB equation for general nonlinear systems is infeasible due to the curse of dimensionality. Here we present a new computational method for finding feedback optimal control and solving HJB equations that is able to mitigate the curse of dimensionality. We do not discretize the HJB equation directly; instead we introduce a sparse grid in the state space and use Pontryagin's maximum principle to derive a set of necessary conditions in the form of a boundary value problem, also known as the characteristic equations, for each grid point. With this approach the method is spatially causality-free, which enjoys the advantage of perfect parallelism on a sparse grid. Compared with dense grids, a sparse grid has a significantly reduced size, which is feasible for systems with relatively high dimensions, such as the 6-D system shown in the examples. Once the solution is obtained at each grid point, high-order accurate polynomial interpolation is used to approximate the feedback control at arbitrary points. We prove an upper bound for the approximation error and approximate it numerically. This sparse grid characteristics method is demonstrated with three examples of rigid body attitude control using momentum wheels.
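In schematic form (for a Bolza cost \(\int_0^T \ell(x,u)\,dt + \phi(x(T))\) with Hamiltonian \(H(x,\lambda,u) = \ell(x,u) + \lambda^{\top} f(x,u)\); ℓ, f and φ are placeholders), the characteristic boundary value problem solved at each sparse-grid node \(x_0\) is

\[ \dot x = \frac{\partial H}{\partial \lambda}, \qquad \dot\lambda = -\frac{\partial H}{\partial x}, \qquad u^{*}(t) = \arg\min_{u} H\big(x(t),\lambda(t),u\big), \qquad x(0) = x_0, \quad \lambda(T) = \nabla\phi\big(x(T)\big), \]

after which the value and its gradient at the node are recovered as \(V(0,x_0) = \int_0^T \ell(x,u^{*})\,dt + \phi(x(T))\) and \(\nabla_x V(0,x_0) = \lambda(0)\).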

19.
We consider a network of d companies (insurance companies, for example) operating under a treaty to diversify risk. Internal and external borrowing are allowed to avert ruin of any member of the network, and the amount borrowed to prevent ruin is viewed as the control. Repayment of these loans entails a control cost in addition to the usual costs. Each company tries to minimize its repayment liability. This leads to a d-person differential game with state space constraints. If the companies are also in possible competition, a Nash equilibrium is sought; otherwise a utopian equilibrium is more appropriate. The corresponding systems of HJB equations and boundary conditions are derived. In the case of a Nash equilibrium, the Hamiltonian can be discontinuous; there are d interlinked control problems with state constraints, each value function is a constrained viscosity solution of the appropriate discontinuous HJB equation, and uniqueness does not hold in general. In the case of a utopian equilibrium, each value function turns out to be the unique constrained viscosity solution of the appropriate HJB equation. The connection with the Skorokhod problem is briefly discussed.

20.
In this paper, we study a stochastic recursive optimal control problem in which the objective functional is described by the solution of a backward stochastic differential equation driven by \(G\)-Brownian motion. Under standard assumptions, we establish the dynamic programming principle and the related Hamilton–Jacobi–Bellman (HJB) equation in the framework of \(G\)-expectation. Finally, we show that the value function is the viscosity solution of the obtained HJB equation.
