Similar references
20 similar records found (search time: 31 ms)
1.
In this paper, we study near-optimal control for systems governed by forward–backward stochastic differential equations via the dynamic programming principle. Since nonsmoothness is inherent in this field, the viscosity solution approach is employed to investigate the relationship between the value function and the adjoint equations along near-optimal trajectories. Unlike the classical case, the definition of viscosity solution contains a perturbation factor, through which differentiability conditions on the value function, which need not hold, are properly dispensed with. Moreover, we establish new relationships between variational equations and adjoint equations. As an application, a stochastic recursive near-optimal control problem is given to illustrate the theoretical results.

2.
This paper is concerned with the stochastic optimal control problem of jump diffusions. The relationship between the stochastic maximum principle and the dynamic programming principle is discussed. Without involving any derivatives of the value function, relations among the adjoint processes, the generalized Hamiltonian and the value function are investigated by employing the notion of semijets used in defining viscosity solutions. A stochastic verification theorem is also given to verify whether a given admissible control is optimal.

3.
We study the linear quadratic optimal stochastic control problem driven jointly by Brownian motion and Lévy processes. We prove that the new affine stochastic differential adjoint equation admits an inverse process, by applying the section theorem. Applying Bellman's principle of quasilinearization and a monotone iterative convergence method, we prove the existence and uniqueness of the solution of the backward Riccati differential equation. Finally, we prove that an optimal feedback control exists, and that the value function is composed of the initial value of the solution of the related backward Riccati differential equation and of the related adjoint equation.

5.
《Optimization》2012,61(1):9-32
We analyse the Euler discretization of a class of linear optimal control problems. First we show convergence of order h for the discrete approximation of the adjoint solution and the switching function, where h is the mesh size. Under the additional assumption that the optimal control has bang-bang structure, we show that the discrete and the exact controls coincide except on a set of measure O(h). As a consequence, the discrete optimal control approximates the optimal control with order 1 w.r.t. the L1-norm and with order 1/2 w.r.t. the L2-norm. An essential assumption is that the slopes of the switching function at its zeros are bounded away from zero, which is in fact an inverse stability condition for these zeros. We also discuss higher-order approximation methods based on the approximation of the adjoint solution and the switching function. Several numerical examples underline the results.
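The first-order convergence of the discrete adjoint can be illustrated on a toy problem. The following sketch (the test equation and all names are our own illustration, not taken from the paper) integrates the scalar adjoint equation p' = -p with terminal condition p(T) = 1 backward in time with an explicit Euler scheme and checks that halving the mesh size h roughly halves the maximum error, i.e. the error behaves like O(h):

```python
import math

def euler_adjoint_error(T=1.0, n=100):
    """Backward explicit Euler for the adjoint ODE p' = -p, p(T) = 1.

    The exact solution is p(t) = exp(T - t). Returns the maximum error
    over the grid t_k = k * h, h = T / n.
    """
    h = T / n
    p = 1.0                      # terminal condition p(T) = 1
    err = 0.0
    for k in range(n, 0, -1):
        t = k * h
        # one backward step: p(t - h) ≈ p(t) - h * p'(t) = p(t) + h * p(t)
        p = p + h * p
        err = max(err, abs(p - math.exp(T - (t - h))))
    return err

e_coarse = euler_adjoint_error(n=100)
e_fine = euler_adjoint_error(n=200)
print(e_coarse / e_fine)   # close to 2: the error scales like O(h)
```

The error accumulates toward t = 0, and the observed ratio near 2 under mesh halving is exactly the order-h behaviour stated above for the adjoint approximation.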

6.
Parametric nonlinear control problems subject to vector-valued mixed control-state constraints are investigated. The model perturbations are implemented by a parameter p in a Banach space P. We prove solution differentiability in the sense that the optimal solution and the associated adjoint multiplier function are differentiable functions of the parameter. The main assumptions for solution differentiability comprise regularity conditions and recently developed second-order sufficient conditions (SSC). The analysis generalizes the approach in [16, 20] and establishes a link between (1) shooting techniques for solving the associated boundary value problem (BVP) and (2) SSC. We make use of sensitivity results from finite-dimensional parametric programming and exploit the relationships between the variational system associated with the BVP and its corresponding Riccati equation. Solution differentiability is the theoretical backbone for any numerical sensitivity analysis. A numerical example with a vector-valued control is presented that illustrates the sensitivity analysis in detail.

7.
We consider a network of d companies (insurance companies, for example) operating under a treaty to diversify risk. Internal and external borrowing are allowed to avert the ruin of any member of the network. The amount borrowed to prevent ruin is viewed as the control, and repayment of these loans entails a control cost in addition to the usual costs. Each company tries to minimize its repayment liability. This leads to a d-person differential game with state space constraints. If the companies are also in possible competition, a Nash equilibrium is sought; otherwise a utopian equilibrium is more appropriate. The corresponding systems of HJB equations and boundary conditions are derived. In the case of Nash equilibrium, the Hamiltonian can be discontinuous; there are d interlinked control problems with state constraints, and each value function is a constrained viscosity solution to the appropriate discontinuous HJB equation. Uniqueness does not hold in general in this case. In the case of utopian equilibrium, each value function turns out to be the unique constrained viscosity solution to the appropriate HJB equation. The connection with the Skorokhod problem is briefly discussed.

8.
This paper studies the optimal control problem for point processes with Gaussian white-noised observations. A general maximum principle is proved for the partially observed optimal control of point processes, without using the associated filtering equation. Adjoint flows, the adjoint processes of the stochastic flows of the optimal system, are introduced, and their relations are established. Adjoint vector fields, which are observation-predictable, are introduced as the solutions of associated backward stochastic integral-partial differential equations driven by the observation process. In a heuristic way, their relations are explained, and the adjoint processes are expressed in terms of the adjoint vector fields, their gradients and Hessians, along the optimal state process. In this way the adjoint processes are naturally connected to the adjoint equation of the associated filtering equation. This shows that the conditional expectation in the maximum condition is computable through filtering the optimal state, as usually expected. Some variants of the partially observed stochastic maximum principle are derived, and the corresponding maximum conditions are quite different from their counterparts in the diffusion case. Finally, as an example, a quadratic optimal control problem with a free Poisson process and a Gaussian white-noised observation is explicitly solved using the partially observed maximum principle. Accepted 8 August 2001. Online publication 17 December 2001.

9.
In the Maslov idempotent probability calculus, expectations of random variables are defined so as to be linear with respect to max-plus addition and scalar multiplication. This paper considers control problems in which the objective is to minimize the max-plus expectation of some max-plus additive running cost. Such problems arise naturally as limits of some types of risk sensitive stochastic control problems. The value function is a viscosity solution to a quasivariational inequality (QVI) of dynamic programming. Equivalence of this QVI to a nonlinear parabolic PDE with discontinuous Hamiltonian is used to prove a comparison theorem for viscosity sub- and super-solutions. An example from mathematical finance is given, and an application in nonlinear H-infinity control is sketched.
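In the Maslov calculus, the expectation of a variable Z with respect to an idempotent density q (with q ≤ 0 and sup q = 0) is E⁺[Z] = sup over states of (Z + q), which is linear for max-plus addition (max) and max-plus scalar multiplication (+). A minimal finite-state sketch of this definition (the numbers are our own illustration, not from the paper):

```python
def maxplus_expect(z, q):
    """Maslov/idempotent expectation: E+[Z] = max over states of Z + q.

    q is an idempotent density: q[i] <= 0 for every state i and max(q) == 0,
    the max-plus analogue of a probability distribution summing to 1.
    """
    assert max(q) == 0 and all(v <= 0 for v in q)
    return max(zi + qi for zi, qi in zip(z, q))

z = [3.0, 5.0, 1.0]
q = [0.0, -4.0, -0.5]          # idempotent density over three states
print(maxplus_expect(z, q))    # max(3+0, 5-4, 1-0.5) = 3.0

# Max-plus linearity: scaling by c (ordinary addition) commutes with E+
assert maxplus_expect([zi + 2.0 for zi in z], q) == 2.0 + maxplus_expect(z, q)
```

Minimizing such an expectation of an additive running cost over controls is what leads to the quasivariational inequality described in the abstract.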

10.
An optimal portfolio/control problem is considered for a two-dimensional model in finance. A pair consisting of the wealth process and the cumulative consumption process, driven by a geometric Lévy process, is controlled by adapted processes. Using the Bellman principle, the value function is shown to be a viscosity solution of an integro-differential equation.

11.
There are usually two ways to study optimal stochastic control problems: Pontryagin's maximum principle and Bellman's dynamic programming, involving an adjoint process ψ and the value function V, respectively. The classical result on the connection between the maximum principle and dynamic programming is ψ(t) = V_x(t, x̂(t)), where x̂(·) is the optimal path. In this paper we establish a nonsmooth version of this classical result by employing the notions of super- and subdifferential introduced by Crandall and Lions. Thus the illusory assumption that V is differentiable is dispensed with.
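The super- and subdifferentials of Crandall and Lions, on which the nonsmooth version rests, can be recalled as follows (a standard definition, stated here for context rather than taken from the paper):

```latex
D_x^{1,-}V(t,x) = \Bigl\{\, p \;:\; \liminf_{y \to x} \frac{V(t,y) - V(t,x) - \langle p,\, y - x \rangle}{|y - x|} \;\ge\; 0 \,\Bigr\}
\qquad
D_x^{1,+}V(t,x) = \Bigl\{\, p \;:\; \limsup_{y \to x} \frac{V(t,y) - V(t,x) - \langle p,\, y - x \rangle}{|y - x|} \;\le\; 0 \,\Bigr\}
```

When V is differentiable in x, both sets reduce to the singleton {V_x(t,x)}, so a relation stated in terms of these sets recovers the classical identity ψ(t) = V_x(t, x̂(t)) as a special case.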

12.
We consider mixed control problems for diffusion processes, i.e. problems which involve both optimal control and stopping. The running reward is assumed to be smooth, but the stopping reward need only be semicontinuous. We show that, under suitable conditions, the value function w has the same regularity as the stopping reward g, i.e. w is lower or upper semicontinuous if g is. Furthermore, when g is l.s.c., we prove that the value function is a viscosity solution of the associated variational inequality.

13.
We consider continuous-state and continuous-time control problems where the admissible trajectories of the system are constrained to remain on a network. In our setting, the value function is continuous. We define a notion of constrained viscosity solution of Hamilton–Jacobi equations on the network and we study related comparison principles. Under suitable assumptions, we prove in particular that the value function is the unique constrained viscosity solution of the Hamilton–Jacobi equation on the network.

14.
Using a semi-discrete model that describes the heat transfer in a continuous casting process of steel, this paper addresses an optimal control problem for the continuous casting process in the secondary cooling zone with water spray control. The approach is based on the Hamilton–Jacobi–Bellman equation satisfied by the value function. It is shown that the value function is the viscosity solution of the Hamilton–Jacobi–Bellman equation. The optimal feedback control is found numerically by solving the associated Hamilton–Jacobi–Bellman equation through a designed finite difference scheme. The optimality of the obtained control is verified numerically through comparisons with different admissible controls. A detailed study of a low-carbon billet caster is presented.

15.
We investigate some classes of eigenvalue dependent boundary value problems in which A ⊆ A⁺ is a symmetric operator or relation in a Krein space K, τ is a matrix function and Γ0, Γ1 are abstract boundary mappings. It is assumed that A admits a self-adjoint extension in K which locally has the same spectral properties as a definitizable relation, and that τ is a matrix function which locally can be represented with the resolvent of a self-adjoint definitizable relation. The strict part of τ is realized as the Weyl function of a symmetric operator T in a Krein space H; a self-adjoint extension à of A × T in K × H with the property that the compressed resolvent P_K(à − λ)^{-1}|_K yields the unique solution of the boundary value problem is constructed, and the local spectral properties of this so-called linearization à are studied. The general results are applied to indefinite Sturm–Liouville operators with eigenvalue dependent boundary conditions. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

16.
We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or stochastic games. In the case of stochastic optimal switching and control, we prove via dynamic programming methods that the value function is a viscosity solution of the IPDEs. In our setting the value functions or the solutions of the IPDEs are not smooth, so classical verification theorems do not apply.

17.
We study a stochastic optimal control problem for a partially observed diffusion. Using the control randomization method of Bandini et al. (2018), we prove a corresponding randomized dynamic programming principle (DPP) for the value function, which is obtained from a flow property of an associated filter process. This DPP is the key step towards our main result: a characterization of the value function of the partial observation control problem as the unique viscosity solution of the corresponding dynamic programming Hamilton–Jacobi–Bellman (HJB) equation. The latter is formulated as a new, fully nonlinear partial differential equation on the Wasserstein space of probability measures. An important feature of our approach is that it does not require any non-degeneracy condition on the diffusion coefficient, and no condition is imposed to guarantee the existence of a density for the filter process solving the controlled Zakai equation. Finally, we give an explicit solution to our HJB equation in the case of a partially observed non-Gaussian linear–quadratic model.

18.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use in our approach is shown to be a necessary and sufficient condition for the homogenizability of the control problem. The results partially generalize homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi by viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments, and discuss a stochastic control interpretation for the limit equation.

19.
Existence of a viscosity solution to a non-local Hamilton–Jacobi–Bellman equation in a Hilbert space is established. We prove that the value function of an associated stochastic control problem is a viscosity solution. We provide a complete proof of the Dynamic Programming Principle for the stochastic control problem. We also illustrate the theory with Bellman equations associated to a controlled wave equation and a controlled Musiela equation of mathematical finance, both perturbed by Lévy processes.

20.
A general framework is developed for the finite element solution of optimal control problems governed by elliptic nonlinear partial differential equations. Typical applications are steady-state problems in nonlinear continuum mechanics, where a certain property of the solution (a function of displacements, temperatures, etc.) is to be minimized by applying control loads. In contrast to existing formulations, which are based on the "adjoint state," the present formulation is a direct one, which does not use adjoint variables. The formulation is presented first in a general nonlinear setting, then specialized to a case leading to a sequence of quadratic programming problems, and then specialized further to the unconstrained case. Linear governing partial differential equations are also considered as a special case in each of these categories. © 1999 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 15:371–388, 1999
