Related Articles
A total of 20 related articles were retrieved (search time: 31 ms).
1.
We show that a large class of discrete-time dynamic games can be obtained as a limit of stochastic control problems with multiplicative cost. Our approach consists in analyzing the large-deviation properties of the Markov kernels associated with the stochastic dynamics, and it allows us to give a unified treatment of several nonlinear models.
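A schematic of the underlying mechanism may help; the display below is the generic large-deviation/variational link, not the paper's precise statement. For a single step with a bounded continuous cost \(\Phi\) and Markov kernels \(P^{\varepsilon}(x,\cdot)\) satisfying a large deviation principle with rate function \(I(x,\cdot)\), Varadhan's lemma gives

\[
\lim_{\varepsilon\to 0}\ \varepsilon \log \int e^{\Phi(y)/\varepsilon}\, P^{\varepsilon}(x,dy)
\;=\; \sup_{y}\ \big\{ \Phi(y) - I(x,y) \big\},
\]

so the scaled multiplicative cost converges, step by step, to the value of a deterministic problem in which the disturbance chooses \(y\) and pays the rate function \(I(x,y)\) as a running cost; once the control is included, iterating this operator over the horizon produces a min-max dynamic game in the limit.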

2.
3.
4.
This paper is concerned with the stochastic maximum principle for impulse optimal control problems of forward–backward systems in which the coefficients of the forward part are Lipschitz continuous. The domain of the regular controls is not necessarily convex. We establish a Pontryagin-type maximum principle for this control problem by applying Ekeland's variational principle to a sequence of approximating control problems whose coefficients are smooth approximations of those of the initial problem.
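For orientation, a generic form of the controlled system appearing in such problems (the paper's exact assumptions may differ) couples a forward state with impulses and a backward pair:

\[
\begin{aligned}
x_t &= x_0 + \int_0^t b(s,x_s,u_s)\,ds + \int_0^t \sigma(s,x_s,u_s)\,dW_s + \sum_{i:\ \tau_i \le t} \eta_i,\\
y_t &= g(x_T) + \int_t^T f(s,x_s,y_s,z_s,u_s)\,ds - \int_t^T z_s\,dW_s,
\end{aligned}
\]

where \(u\) is the regular control and \((\tau_i,\eta_i)\) are the impulse times and sizes. Because the forward coefficients are only Lipschitz, they are first smoothed; Ekeland's variational principle then provides near-optimal controls for the approximating problems, and the maximum principle is typically obtained by passing to the limit in the associated adjoint equations.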

5.
Most of the multiple objective linear programming (MOLP) methods proposed in the last fifteen years assume deterministic contexts, but because many real problems involve uncertainty, methods have recently been developed to deal with MOLP problems in stochastic contexts. To help the decision maker (DM) faced with such stochastic MOLP problems, we have built a Decision Support System called PROMISE. On the one hand, our DSS enables the DM to identify many common stochastic contexts: risky situations as well as situations of partial uncertainty. On the other hand, according to the nature of the uncertainty, it enables the DM to choose the most appropriate interactive stochastic MOLP method among those available, if such a method exists, and to solve the problem with the chosen method.

6.
7.
8.
We consider stochastic optimal control problems in Banach spaces related to nonlinear controlled equations with dissipative nonlinearities; no growth condition is imposed on the nonlinear term. The problems are treated via the backward stochastic differential equation approach, which also allows us to solve, in the mild sense, Hamilton–Jacobi–Bellman equations in Banach spaces. We apply the results to a controlled stochastic heat equation, in space dimension 1, with control and noise acting on a subdomain.
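The setting of the example can be visualized with a toy forward simulation. The sketch below discretizes a 1D heat equation whose control and noise act only on a subdomain; it is purely illustrative (explicit finite differences, a hypothetical feedback control, crude space-time white noise) and does not reproduce the paper's BSDE/HJB machinery.

import numpy as np

# Toy forward simulation of a controlled stochastic heat equation on (0,1):
#   du = (u_xx + 1_D(x) * (control(t,x) dt + sigma dW)) ,   u(0)=u(1)=0,
# with control and noise acting only on the subdomain D = (0.4, 0.6).
def simulate(control, T=0.1, N_x=101, N_t=2500, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, N_x)
    dx = x[1] - x[0]
    dt = T / N_t
    assert dt <= 0.5 * dx ** 2, "explicit scheme stability condition violated"
    mask = (x > 0.4) & (x < 0.6)            # indicator of the subdomain D carrying control and noise
    u = np.sin(np.pi * x)                   # illustrative initial condition
    for n in range(N_t):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        noise = rng.standard_normal(N_x) * np.sqrt(dt / dx)   # crude space-time white noise increment
        u = u + dt * lap + mask * (dt * control(n * dt, x) + sigma * noise)
        u[0] = 0.0                          # Dirichlet boundary conditions
        u[-1] = 0.0
    return x, u

# A simple, hypothetical control pushing the state toward zero on D.
x, u_T = simulate(lambda t, x: -5.0 * np.sin(np.pi * x))
print("terminal root-mean-square state:", np.sqrt(np.mean(u_T ** 2)))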

9.
10.
In this work, we study the gradient projection method for solving a class of stochastic control problems, using a meshfree approximation approach to implement the spatial-dimension approximation. Our main contribution is to extend the existing gradient projection method to moderately high-dimensional spaces. The moving least squares method and the general radial basis function interpolation method are introduced as showcase methods to demonstrate our computational framework, and rigorous numerical analysis is provided to prove the convergence of our meshfree approximation approach. We also present several numerical experiments to validate the theoretical results and to demonstrate the performance of the meshfree approximation in solving stochastic optimal control problems.
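The two ingredients named above, a meshfree radial basis function surrogate and a gradient projection iteration, can be combined in a few lines. The sketch below fits a Gaussian RBF surrogate (regularized for numerical stability) to scattered samples of a toy cost function and then minimizes the surrogate over a box by projected gradient descent; the target function, shape parameter, and step size are all illustrative and this is not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(1)

def target(p):                                        # toy "cost" that we only sample pointwise
    return np.sum((p - 0.3) ** 2, axis=-1)

# Meshfree approximation: Gaussian RBF surrogate built on scattered centers.
centers = rng.uniform(0.0, 1.0, size=(150, 2))
eps = 3.0                                             # shape parameter (illustrative)
phi = lambda r: np.exp(-(eps * r) ** 2)
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
K = phi(dists) + 1e-6 * np.eye(len(centers))          # small nugget for conditioning
weights = np.linalg.solve(K, target(centers))

def surrogate(p):
    return phi(np.linalg.norm(centers - p, axis=-1)) @ weights

# Gradient projection on the box [0,1]^2, with a central-difference gradient of the surrogate.
def grad(p, h=1e-4):
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = h
        g[i] = (surrogate(p + e) - surrogate(p - e)) / (2 * h)
    return g

p = np.array([0.9, 0.1])
for _ in range(200):
    p = np.clip(p - 0.1 * grad(p), 0.0, 1.0)          # gradient step followed by projection
print("approximate minimizer:", p)                     # expected near (0.3, 0.3)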

11.
This work is concerned with numerical schemes for stochastic optimal control problems (SOCPs) based on forward–backward stochastic differential equations (FBSDEs). We first convert the stochastic optimal control problem into an equivalent stochastic optimality system of FBSDEs. Then we design an efficient second-order FBSDE solver and a quasi-Newton-type optimization solver for the resulting system. Notably, our approach achieves a second-order rate of convergence even when the state equation is approximated by the Euler scheme. Several numerical examples are presented to illustrate the effectiveness and accuracy of the proposed numerical schemes.
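The optimality-system idea can be illustrated on a scalar linear-quadratic problem: simulate the state forward with the Euler scheme, propagate the adjoint (the backward part of the system) backward, and update the control from the resulting gradient. The sketch below uses plain gradient descent on a deterministic open-loop control instead of the paper's quasi-Newton update, and a first-order scheme only; model parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
a, b, sig, q, r, g = -1.0, 1.0, 0.3, 1.0, 0.1, 1.0    # dynamics and cost parameters (illustrative)
T, N, M = 1.0, 50, 2000                               # horizon, time steps, Monte Carlo paths
dt = T / N
u = np.zeros(N)                                       # open-loop control u_0, ..., u_{N-1}

def cost_and_gradient(u):
    x = np.full(M, 0.5)                               # initial state
    xs = np.empty((N + 1, M)); xs[0] = x
    dW = rng.standard_normal((N, M)) * np.sqrt(dt)
    for k in range(N):                                # forward Euler scheme
        x = x + (a * x + b * u[k]) * dt + sig * dW[k]
        xs[k + 1] = x
    J = np.mean(np.sum((q * xs[:-1] ** 2 + r * u[:, None] ** 2) * dt, axis=0) + g * xs[-1] ** 2)
    p = 2.0 * g * xs[-1]                              # terminal adjoint p_N
    grad = np.empty(N)
    for k in range(N - 1, -1, -1):                    # backward adjoint recursion
        grad[k] = 2.0 * r * u[k] * dt + b * dt * np.mean(p)
        p = (1.0 + a * dt) * p + 2.0 * q * xs[k] * dt
    return J, grad

for it in range(300):                                 # plain gradient descent on the control
    J, grad = cost_and_gradient(u)
    u -= 0.2 * grad / dt                              # scale so the step acts per unit time
print("optimized cost estimate:", J)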

12.
We apply the Monte Carlo, stochastic Galerkin, and stochastic collocation methods to solving the drift–diffusion equations coupled with the Poisson equation, which arise in semiconductor devices with random rough surfaces. Instead of dividing the rough surface into slices, we use a stochastic mapping to transform the original deterministic equations on a random domain into stochastic equations on the corresponding deterministic domain. A finite element discretization, implemented with AFEPack, is applied in physical space, and the resulting equations are solved by an approximate Newton iterative method. A comparison of the three stochastic methods through numerical experiments on different PN junctions is given. The numerical results show that, for such a complicated nonlinear problem, the stochastic Galerkin method offers no clear efficiency advantage over the other two methods apart from its accuracy, while the stochastic collocation method combines the accuracy of the stochastic Galerkin method with the easy implementation of the Monte Carlo method.
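The Monte Carlo versus collocation trade-off can be seen on a toy random-coefficient problem with a known exact mean; the sketch below is only an illustration of that trade-off and does not reproduce the drift-diffusion/Poisson system or the stochastic Galerkin solver.

import numpy as np

# Model problem: -(a(xi) u')' = 1 on (0,1), u(0)=u(1)=0, with a(xi) = exp(s*xi), xi ~ N(0,1).
# Then u(1/2; xi) = 1 / (8 a(xi)) and E[u(1/2)] = exp(s^2/2) / 8 exactly.
s = 0.5
qoi = lambda xi: np.exp(-s * xi) / 8.0                 # quantity of interest u(1/2; xi)
exact = np.exp(s ** 2 / 2.0) / 8.0

# Monte Carlo: trivial to implement, but only O(M^{-1/2}) accuracy.
rng = np.random.default_rng(0)
mc = np.mean(qoi(rng.standard_normal(10_000)))

# Stochastic collocation: evaluate the same deterministic solver at Gauss-Hermite nodes.
nodes, weights = np.polynomial.hermite.hermgauss(8)    # physicists' Hermite rule
sc = np.sum(weights * qoi(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

print(f"exact {exact:.6f}   Monte Carlo {mc:.6f}   collocation {sc:.6f}")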

13.
14.
The computational complexity of linear and nonlinear programming problems depends on the number of objective functions and constraints involved, and solving a large problem often becomes a difficult task. Redundancy detection and elimination provides a suitable tool for reducing this complexity and simplifying a linear or nonlinear programming problem while maintaining the essential properties of the original system. Although a large number of redundancy detection methods have been proposed to simplify linear and nonlinear stochastic programming problems, very little research has addressed fuzzy stochastic (FS) fractional programming problems. We propose an algorithm that simultaneously detects both redundant objective function(s) and redundant constraint(s) in FS multi-objective linear fractional programming problems. More precisely, our algorithm reduces the number of linear fuzzy fractional objective functions by transforming them into probabilistic–possibilistic constraints characterized by predetermined confidence levels. We present two numerical examples to demonstrate the applicability and efficacy of the proposed algorithm.
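As background for the notion of redundancy used above, the sketch below implements the classical test for a redundant linear inequality: constraint i of A x <= b is redundant if maximizing a_i x over the remaining constraints cannot exceed b_i. It illustrates the general idea only; the paper's fuzzy stochastic fractional setting with probabilistic-possibilistic constraints and confidence levels is not reproduced.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])      # the third constraint x1 + x2 <= 3 is implied by x1 <= 1, x2 <= 1
b = np.array([1.0, 1.0, 3.0])

def is_redundant(i, A, b):
    keep = [k for k in range(len(b)) if k != i]
    res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep],
                  bounds=[(None, None)] * A.shape[1], method="highs")
    # redundant iff the maximum of a_i x over the other constraints stays below b_i
    return res.status == 0 and -res.fun <= b[i] + 1e-9

for i in range(len(b)):
    print(f"constraint {i} redundant: {is_redundant(i, A, b)}")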

15.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (enforcing a nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (a random partial differential equation) associated with a linear backward Hamilton-Jacobi-Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends Wong-Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how it arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving anticipative control problems by almost-sure deterministic optimal control. We obtain an equation for the “cost of perfect information”, the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found that ensure this equation has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

16.
We consider a general optimal switching problem for a controlled diffusion and show that its value coincides with the value of a well-suited stochastic target problem associated with a diffusion with jumps. The proof consists in showing that the Hamilton–Jacobi–Bellman equations of both problems are the same and in proving a comparison principle for this equation. This provides a new family of lower bounds for the optimal switching problem, which can be computed by Monte Carlo methods. The result also has a nice economic interpretation in terms of a firm's valuation.
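To see why Monte Carlo lower bounds are natural for switching problems, note that the value is a supremum over admissible strategies, so simulating any fixed strategy gives a lower bound. The sketch below evaluates a naive threshold strategy for a two-mode (off/on) problem driven by a geometric Brownian price; all parameters are illustrative, and this generic bound is not the stochastic-target construction of the paper.

import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 250, 20_000
dt = T / N
mu, sigma, S0 = 0.05, 0.4, 1.0
running_profit = lambda S: S - 1.0        # profit rate while the facility is "on"
switch_cost = 0.02                        # paid at every switch

def lower_bound(threshold_on=1.05, threshold_off=0.95):
    S = np.full(M, S0)
    mode = np.zeros(M, dtype=bool)        # start in the "off" mode
    payoff = np.zeros(M)
    for _ in range(N):
        dW = rng.standard_normal(M) * np.sqrt(dt)
        S = S * np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dW)
        turn_on = (~mode) & (S > threshold_on)
        turn_off = mode & (S < threshold_off)
        payoff -= switch_cost * (turn_on | turn_off)
        mode = (mode | turn_on) & ~turn_off
        payoff += running_profit(S) * dt * mode
    return payoff.mean()

print("Monte Carlo lower bound on the switching value:", lower_bound())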

17.
18.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution of a fully nonlinear partial differential equation of second order. The key assumption used in our approach is shown to be necessary and sufficient for the homogenizability of the control problem. The results partially generalize homogenization problems for Hamilton–Jacobi–Bellman equations treated recently by Alvarez and Bardi by viscosity solution methods. In contrast to their approach, we use mainly probabilistic arguments and discuss a stochastic control interpretation of the limit equation.
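For readers unfamiliar with BSDE-defined value functions, the following generic representation (not the paper's exact setting) indicates what is being passed to the limit. With a forward controlled diffusion whose coefficients oscillate at scale \(\varepsilon\),

\[
X^{\varepsilon}_s = x + \int_t^s b\big(X^{\varepsilon}_r/\varepsilon, X^{\varepsilon}_r, u_r\big)\,dr
                     + \int_t^s \sigma\big(X^{\varepsilon}_r/\varepsilon, X^{\varepsilon}_r, u_r\big)\,dW_r,
\]

the cost is defined through the backward equation

\[
Y^{\varepsilon}_s = g\big(X^{\varepsilon}_T\big) + \int_s^T f\big(X^{\varepsilon}_r/\varepsilon, X^{\varepsilon}_r, Y^{\varepsilon}_r, Z^{\varepsilon}_r, u_r\big)\,dr
                     - \int_s^T Z^{\varepsilon}_r\,dW_r ,
\qquad V^{\varepsilon}(t,x) = \inf_{u} Y^{\varepsilon}_t ,
\]

and a convergence theorem of the kind described above identifies the limit of \(V^{\varepsilon}\) as \(\varepsilon \to 0\) with the viscosity solution of an effective fully nonlinear second-order PDE.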

19.
In this paper, we work on the indifference valuation of variable annuities and give a computational method for indifference fees. We focus on the guaranteed minimum death benefits (GMDB) and the guaranteed minimum living benefits (GMLB) and allow the policyholder to make withdrawals. We assume that the fees are paid continuously and that the fee rate is fixed at the beginning of the contract. Following indifference pricing theory, we define the indifference fee rate for the insurer as a solution of an equation involving two stochastic control problems. Relating these problems to backward stochastic differential equations (BSDEs) with jumps, we provide a verification theorem and give the optimal strategies associated with our control problems. From these, we derive a computational method to obtain indifference fee rates. We conclude with numerical illustrations of the sensitivity of indifference fees with respect to model parameters.
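In indifference pricing terms, the fee rate is pinned down by equating two optimal-value functions. A schematic version of the defining equation (the paper's precise formulation involves the GMDB/GMLB payoffs, withdrawals, and jumps) reads

\[
\sup_{\pi}\ \mathbb{E}\big[\,U(X^{\pi}_T)\big]
\;=\;
\sup_{\pi}\ \mathbb{E}\big[\,U\big(X^{\pi}_T + F^{p}_T - B_T\big)\big],
\]

where the left-hand side is the insurer's optimal expected utility without selling the contract, \(F^{p}_T\) denotes the fees accumulated continuously at the fixed rate \(p\), \(B_T\) denotes the benefits paid out under the guarantees, and the indifference fee rate \(p^{*}\) is a value of \(p\) for which the two sides coincide. Each side is a stochastic control problem, which is why the computation involves solving two control problems (here via BSDEs with jumps) and then solving a scalar equation in \(p\).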

20.
We consider two-stage risk-averse stochastic optimization problems with a stochastic ordering constraint on the recourse function. Two new characterizations of the increasing convex order relation are provided. They are based on conditional expectations and on integrated quantile functions: a counterpart of the Lorenz function. We propose two decomposition methods to solve the problems and prove their convergence. Our methods exploit the decomposition structure of the risk-neutral two-stage problems and construct successive approximations of the stochastic ordering constraints. Numerical results confirm the efficiency of the methods.
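As intuition for the ordering constraint discussed above, the increasing convex order admits the classical stop-loss criterion X <=_icx Y iff E[(X - t)_+] <= E[(Y - t)_+] for every threshold t, which is closely related to the integrated-quantile characterization. The sketch below checks this criterion empirically on samples; it is only an illustration of the order relation, not of the paper's conditional-expectation characterization or decomposition algorithms.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)        # lighter upper tail
y = rng.normal(0.0, 2.0, 100_000)        # same mean, heavier upper tail: expect x <=_icx y

def stop_loss(sample, t):
    # empirical stop-loss transform E[(sample - t)_+]
    return np.mean(np.maximum(sample - t, 0.0))

thresholds = np.linspace(-4.0, 4.0, 81)
ok = all(stop_loss(x, t) <= stop_loss(y, t) + 1e-3 for t in thresholds)
print("empirical evidence that x <=_icx y:", ok)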
