Similar Documents
20 similar documents found.
1.
Motivated by the idea of applying parallel computing to the solution of stochastic differential equations (SDEs), we introduce a new domain decomposition scheme to solve forward–backward stochastic differential equations (FBSDEs) in parallel. We reconstruct the four-step scheme of Ma et al. (1994) [1] and combine it with the idea of domain decomposition methods. We also introduce a new technique to prove the convergence of domain decomposition methods for systems of quasilinear parabolic equations and use it to prove the convergence of our scheme for FBSDEs.
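For orientation, the coupled FBSDEs addressed by such schemes take the following generic form on a horizon [0, T]; the coefficients b, σ, f and the terminal function g below are generic placeholders, not the specific data of the paper:

```latex
\begin{aligned}
 dX_t &= b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, & X_0 &= x,\\
 dY_t &= -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T).
\end{aligned}
```

The four-step scheme decouples such a system through a quasilinear parabolic PDE whose solution θ gives Y_t = θ(t, X_t), which is where domain decomposition for parabolic equations enters the picture.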

2.
In this paper, we study Nash equilibrium payoffs for two-player nonzero-sum stochastic differential games via the theory of backward stochastic differential equations. We obtain an existence theorem and a characterization theorem for Nash equilibrium payoffs of two-player nonzero-sum stochastic differential games with nonlinear cost functionals defined through doubly controlled backward stochastic differential equations. Our results extend earlier results of Buckdahn et al. (2004) [3] and are based on a backward stochastic differential equation approach.

3.
We develop numerical schemes for d-dimensional stochastic differential equations derived from Milstein approximations of diffusions, which are obtained by lifting the solutions of the stochastic differential equations to higher-dimensional spaces using geometric tools, in the line of the work [A.B. Cruzeiro, P. Malliavin, A. Thalmaier, Geometrization of Monte-Carlo numerical analysis of an elliptic operator: Strong approximation, C. R. Acad. Sci. Paris, Ser. I 338 (2004) 481–486].
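As a point of reference only (not the geometric lifting construction of the paper), here is a minimal sketch of the classical Milstein step for a scalar SDE dX = a(X) dt + b(X) dW; the drift a, diffusion b and parameters are illustrative assumptions:

```python
import numpy as np

def milstein_path(a, b, db, x0, T, n_steps, rng):
    """Simulate one path of dX = a(X) dt + b(X) dW with the Milstein scheme.

    a, b : drift and diffusion coefficients (callables)
    db   : derivative b'(x), needed for the Milstein correction term
    """
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h))
        x[k + 1] = (x[k]
                    + a(x[k]) * h
                    + b(x[k]) * dw
                    + 0.5 * b(x[k]) * db(x[k]) * (dw**2 - h))  # Milstein correction
    return x

# Example: geometric Brownian motion dX = mu*X dt + sigma*X dW (illustrative parameters)
rng = np.random.default_rng(0)
path = milstein_path(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: 0.2,
                     x0=1.0, T=1.0, n_steps=200, rng=rng)
print(path[-1])
```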

4.
We consider time-changed Poisson processes and derive the governing difference–differential equations (DDEs) for these processes. In particular, we consider time-changed Poisson processes where the time change is an inverse Gaussian subordinator or its hitting-time process, and discuss the governing DDEs. The stable subordinator, the inverse stable subordinator and their iterated versions are also considered as time changes. DDEs corresponding to the probability mass functions of these time-changed processes are obtained. Finally, we obtain a new governing partial differential equation for the tempered stable subordinator of index 0 < β < 1 when β is a rational number. We then use this result to obtain the governing DDE for the mass function of the Poisson process time-changed by the tempered stable subordinator. Our results extend and complement the results in Baeumer et al. (2009) and Beghin and Orsingher (2009) in several directions.
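As a purely illustrative sketch (not the paper's analysis), one can sample a time-changed Poisson process N(G_t) at a fixed time t by first drawing the subordinator value G_t and then a Poisson count with random intensity λ·G_t; the inverse Gaussian parameterisation below is an assumption chosen only for the example:

```python
import numpy as np

def sample_time_changed_poisson(lam, t, n_samples, rng):
    """Draw samples of N(G_t): a rate-`lam` Poisson process evaluated at an
    inverse-Gaussian random time G_t (assumed parameterisation: mean t, shape t**2)."""
    g_t = rng.wald(mean=t, scale=t**2, size=n_samples)  # subordinator value at time t (assumed law)
    return rng.poisson(lam * g_t)                       # conditionally Poisson(lam * G_t)

rng = np.random.default_rng(1)
counts = sample_time_changed_poisson(lam=3.0, t=2.0, n_samples=10_000, rng=rng)
print(counts.mean())  # close to lam * E[G_t] = 3.0 * 2.0 under this parameterisation
```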

5.
In this paper we study stochastic optimal control problems with jumps with the help of the theory of backward stochastic differential equations (BSDEs) with jumps. We generalize the results of Peng [S. Peng, BSDE and stochastic optimizations, in: J. Yan, S. Peng, S. Fang, L. Wu, Topics in Stochastic Analysis, Science Press, Beijing, 1997 (Chapter 2) (in Chinese)] by considering cost functionals defined by controlled BSDEs with jumps. The application of BSDE methods, in particular the notion of stochastic backward semigroups introduced by Peng in the above-mentioned work, allows a straightforward proof of a dynamic programming principle for the value functions associated with stochastic optimal control problems with jumps. We prove that the value functions are the viscosity solutions of the associated generalized Hamilton–Jacobi–Bellman equations with integro-differential operators. For this proof, we adapt Peng's BSDE approach, given in the above-mentioned reference in the framework of stochastic control problems driven by Brownian motion, to stochastic control problems driven by Brownian motion and a Poisson random measure.

6.
In this paper we generalize the comparison result of Bostan and Namah (2007) [8] to the second-order parabolic case and prove two properties of pseudo-almost periodic functions; then, using Perron's method, we prove the existence and uniqueness of time pseudo-almost periodic viscosity solutions of second-order parabolic equations under the usual hypotheses.

7.
The theorem on the existence of Liapunov functionals and the theorem on stability in the first approximation are proved for a stochastic differential equation with aftereffect. The suggestion [1] of replacing Liapunov functions by functionals in the investigation of the stability of ordinary differential equations with lag has been widely utilized for deterministic systems, as well as for linear and nonlinear stochastic systems (see e.g. [2–11]). Results concerning stability in the first approximation were obtained for stochastic systems in [12–18] and elsewhere. The use of Liapunov functionals for differential equations with aftereffect was first encountered in [1, 19, 20], where the inversion theorems were proved and conditions for stability in the first approximation were obtained. Below, a stochastic differential equation with aftereffect is investigated in which the random perturbations are given by an arbitrary process with independent increments.

8.
In this paper we investigate zero-sum two-player stochastic differential games whose cost functionals are given by doubly controlled reflected backward stochastic differential equations (RBSDEs) with two barriers. For admissible controls which can depend on the whole past, and so include, in particular, information occurring before the beginning of the game, the games are interpreted as games of the type “admissible strategy” against “admissible control”, and the associated lower and upper value functions are studied. A priori random, they are shown to be deterministic, and it is proved that they are the unique viscosity solutions of the associated upper and lower Bellman–Isaacs equations with two barriers, respectively. For the proofs we make full use of the penalization method for RBSDEs with one barrier and RBSDEs with two barriers. To this end we also prove new estimates for RBSDEs with two barriers, which are sharper than those in Hamadène and Hassani (Probab Theory Relat Fields 132:237–264, 2005). Furthermore, we show that the viscosity solution of the Isaacs equation with two reflecting barriers can be approximated not only by the viscosity solutions of penalized Isaacs equations with one barrier, but also directly by the viscosity solutions of penalized Isaacs equations without barrier. Partially supported by the NSF of P.R. China (No. 10701050; 10671112), Shandong Province (No. Q2007A04), and the National Basic Research Program of China (973 Program) (No. 2007CB814904).

9.
In this paper, we first discuss the solvability of coupled forward–backward stochastic differential equations (FBSDEs, for short) with random terminal time. We prove the existence and uniqueness of an adapted solution to such FBSDEs under some natural assumptions. The method of proof is to construct a contraction mapping related to the solutions of a sequence of backward SDEs. Our monotonicity-type assumptions are different from those in Hu and Peng (1995) [4], Peng and Shi (2000) [11], and others. As a corollary of our main result, the solvability of FBSDEs with a finite time horizon is discussed. Finally, the existence and uniqueness theorem for FBSDEs with a finite time horizon is applied to price special European-type options for a large investor.

10.
Let G = (V, E) be a simple, connected and undirected graph with vertex set V(G) and edge set E(G), and let D(G) be the distance matrix of G (Janežič et al., 2007) [13]. Here we obtain a Nordhaus–Gaddum-type result for the spectral radius of the distance matrix of a graph. Sharp upper bounds on the maximal entry in the principal eigenvector of the adjacency matrix and of the signless Laplacian matrix of a simple, connected and undirected graph are investigated in Das (2009) [4] and Papendieck and Recht (2000) [15]. More generally, upper bounds on the maximal entry in the principal eigenvector of a symmetric nonnegative matrix with zero diagonal entries and without zero diagonal entries are investigated in Zhao and Hong (2002) [21] and Das (2009) [4], respectively. In this paper, we obtain an upper bound on the minimal entry in the principal eigenvector of the distance matrix of a graph and characterize the extremal graphs. Moreover, we present lower and upper bounds on the maximal entry in the principal eigenvector of the distance matrix of a graph and characterize the extremal graphs.
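A small self-contained sketch (illustrative, not taken from the paper) of the objects involved: the distance matrix of the path on four vertices, its spectral radius, and the extreme entries of its principal (Perron) eigenvector:

```python
import numpy as np

# Distance matrix of the path graph P4 (vertices 0-1-2-3); entry (i, j) is the graph distance d(i, j).
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(D)           # D is symmetric, so eigh applies (ascending eigenvalues)
spectral_radius = eigvals[-1]                  # for a connected graph, the largest eigenvalue is the spectral radius
perron = eigvecs[:, -1]
perron = perron if perron[0] > 0 else -perron  # fix the sign so the Perron vector has positive entries

print("spectral radius:", spectral_radius)
print("max entry of principal eigenvector:", perron.max())
print("min entry of principal eigenvector:", perron.min())
```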

11.
We establish a new type of backward stochastic differential equations (BSDEs) connected with stochastic differential games (SDGs), namely, BSDEs strongly coupled with the lower and the upper value functions of SDGs, where the lower and the upper value functions are defined through this BSDE. An existence and uniqueness theorem and a comparison theorem are proved for such equations with the help of an iteration method. We also show that the lower and the upper value functions satisfy the dynamic programming principle. Moreover, we study the associated Hamilton–Jacobi–Bellman–Isaacs (HJB–Isaacs) equations, which are nonlocal and strongly coupled with the lower and the upper value functions. Using a new method, we characterize the pair (W, U) consisting of the lower and the upper value functions as the unique viscosity solution of our nonlocal HJB–Isaacs equation. Furthermore, the game has a value under Isaacs' condition.

12.
In this paper, we study the non-linear backward problems (with deterministic or stochastic durations) of stochastic differential equations on the Sierpinski gasket. We prove the existence and uniqueness of solutions of backward stochastic differential equations driven by the Brownian martingale (defined in Section 2) on the Sierpinski gasket constructed by S. Goldstein and S. Kusuoka. The exponential integrability of quadratic processes for martingale additive functionals is obtained, and as an application, a Feynman–Kac representation formula for weak solutions of semi-linear parabolic PDEs on the gasket is also established.

13.
Stochastic Analysis and Applications, 2013, 31(6): 1553–1576
Abstract

Stochastic Taylor expansions of the expectation of functionals applied to diffusion processes which are solutions of stochastic differential equation systems are introduced. Taylor formulas with respect to time increments are presented for both Itô and Stratonovich stochastic differential equation systems with multi-dimensional Wiener processes. Owing to the very complex formulas arising for higher-order expansions, an advantageous graphical representation by coloured trees is developed. The convergence of truncated formulas is analyzed and estimates for the truncation error are calculated. Finally, the stochastic Taylor formulas based on coloured trees turn out to be a generalization of the deterministic Taylor formulas using plain trees, as recommended by Butcher for the solutions of ordinary differential equations.
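For a concrete (and heavily truncated) example of such an expansion in the scalar autonomous Itô case dX_t = a(X_t) dt + b(X_t) dW_t, the expectation of a smooth functional f admits, under suitable smoothness and growth conditions, the weak Taylor formula below, with L the generator of the diffusion; the notation is generic and not taken from the paper:

```latex
\mathbb{E}\bigl[f(X_{t+h}) \mid X_t = x\bigr]
  = f(x) + h\,(Lf)(x) + \tfrac{h^2}{2}\,(L^2 f)(x) + O(h^3),
\qquad
L = a(x)\,\partial_x + \tfrac{1}{2}\,b(x)^2\,\partial_x^2 .
```

Higher-order terms involve nested applications of the generator and of the diffusion operator, which is exactly the combinatorial growth that the coloured-tree bookkeeping is designed to organize.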

14.
In this paper, we revisit the consumption–investment problem with a general discount function and a logarithmic utility function in a non-Markovian framework. The coefficients in our model, including the interest rate, the appreciation rate and the volatility of the stock, are assumed to be adapted stochastic processes. Following the method of Yong (2012a,b), we study an N-person differential game. We adopt a martingale method to solve an optimization problem for each player and characterize their optimal strategies and value functions in terms of the unique solutions of BSDEs. Then, by taking the limit, we show that a time-consistent equilibrium consumption–investment strategy of the original problem consists of a deterministic function and the ratio of the market price of risk to the volatility, and that the corresponding equilibrium value function can be characterized by the unique solution of a family of BSDEs parameterized by a time variable.

15.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (a random partial differential equation) associated with a linear backward Hamilton–Jacobi–Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends the Wong–Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how it arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving the anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the “cost of perfect information”, the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found ensuring that this has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

16.
In this paper we first investigate zero-sum two-player stochastic differential games with reflection with the help of the theory of reflected backward stochastic differential equations (RBSDEs). We establish the dynamic programming principle for the upper and the lower value functions of this kind of stochastic differential game with reflection in a straightforward way. Then the upper and the lower value functions are proved to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations with obstacles, respectively. The method differs significantly from those used for control problems with reflection, and the new techniques developed are of interest in their own right. Further, we also prove a new estimate for RBSDEs that is sharper than the one in El Karoui, Kapoudjian, Pardoux, Peng and Quenez (1997), which turns out to be very useful because it allows us to estimate the L^p-distance of the solutions of two different RBSDEs by the p-th power of the distance of the initial values of the driving forward equations. We also show that the unique viscosity solution of the approximating Isaacs equation constructed by the penalization method converges to the viscosity solution of the Isaacs equation with obstacle.

17.
It is demonstrated that the upper and lower values of a two-person, zero-sum differential game solve the respective upper and lower Isaacs equations in the viscosity sense (introduced by Crandall and Lions, Trans. Amer. Math. Soc. 277 (1983), 1–42). Since such solutions are unique, this yields a fairly simple proof that the game has a value whenever the minimax condition holds. As a further application of viscosity techniques, a new and simpler proof is given that the upper and lower values can be approximated by the values of certain games with Lipschitz controls.

18.
The force of interest is modelled by a homogeneous time-continuous Markov chain with finite state space. Ordinary differential equations are obtained for expected values of various functionals of this process, in particular for moments of present values of payment streams that may be deterministic or, more generally, stochastic and driven by a time-continuous Markov chain. The homogeneity of the interest process gives rise to explicit formulae for expected values of some stationary functionals, e.g. moments of a perpetuity. Applications are made to some standard forms of insurance.
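A minimal numerical sketch of one such functional (illustrative, not taken from the paper): if the force of interest equals r_i while a Markov chain with generator Q sits in state i, a standard first-step conditioning argument gives that the state-wise expected present values V_i of a unit-rate perpetuity satisfy the linear system (diag(r) − Q) V = 1. The values of r and Q below are assumptions chosen only for the example:

```python
import numpy as np

# Assumed three-state model: force of interest r[i] in state i, generator Q of the Markov chain.
r = np.array([0.02, 0.04, 0.06])
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.8,  0.4],
              [ 0.1,  0.5, -0.6]])   # rows sum to zero

# Expected present value of a unit-rate perpetuity, started in each state:
# V solves (diag(r) - Q) V = 1.
V = np.linalg.solve(np.diag(r) - Q, np.ones(3))
print(V)   # each entry lies between 1/max(r) and 1/min(r), i.e. between ~16.7 and 50
```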

19.
In a previous paper we gave a new formulation and derived the Euler equations and other necessary conditions to solve strong, pathwise, stochastic variational problems with trajectories driven by Brownian motion. Thus, unlike current methods which minimize the control over deterministic functionals (the expected value), we find the control which gives the critical point solution of random functionals of a Brownian path and then, if we choose, find the expected value. This increase in information is balanced by the fact that our methods are anticipative while current methods are not. However, our methods are more directly connected to the theory and meaningful examples of deterministic variational theory and provide better means of solution for free and constrained problems. In addition, examples indicate that there are methods to obtain nonanticipative solutions from our equations although the anticipative optimal cost function has smaller expected value. In this paper we give new, efficient numerical methods to find the solution of these problems in the quadratic case. Of interest is that our numerical solution has a maximal, a priori, pointwise error of O(h^{3/2}) where h is the node size. We believe our results are unique for any theory of stochastic control and that our methods of proof involve new and sophisticated ideas for strong solutions which extend previous deterministic results by the first author where the error was O(h^2). We note that, although our solutions are given in terms of stochastic differential equations, we are not using the now standard numerical methods for stochastic differential equations. Instead we find an approximation to the critical point solution of the variational problem using relations derived from setting to zero the directional derivative of the cost functional in the direction of simple test functions. Our results are even more significant than they first appear because we can reformulate stochastic control problems or constrained calculus of variations problems in the unconstrained, stochastic calculus of variations formulation of this paper. This will allow us to find efficient and accurate numerical solutions for general constrained, stochastic optimization problems. This is not yet being done, even in the deterministic case, except by the first author.

20.
We construct a weak solution to the stochastic functional differential equation dX_t = σ(X_t, M_t) dW_t, where M_t = sup_{0≤s≤t} X_s. Using excursion theory, we then solve explicitly the following problem: for a natural class of joint density functions μ(y, b), we specify σ(·,·) so that X is a martingale and the terminal level and supremum of X, when stopped at an independent exponential time ξ_λ, are distributed according to μ. We can view (X_{t∧ξ_λ}) as an alternative solution to the problem of finding a continuous local martingale with a given joint law for the maximum and the drawdown, which was originally solved by Rogers (1993) [21] using excursion theory. This complements the recent work of Carr (2009) [5] and Cox et al. (2010) [7], who consider a standard one-dimensional diffusion evaluated at an independent exponential time.
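A rough Euler–Maruyama sketch of simulating such a functional SDE together with its running maximum; the choice of σ is a purely illustrative assumption, not the σ constructed in the paper:

```python
import numpy as np

def simulate_x_and_max(sigma, x0, T, n_steps, rng):
    """Euler-Maruyama path of dX = sigma(X, M) dW together with its running maximum M."""
    h = T / n_steps
    x, m = x0, x0
    xs, ms = [x], [m]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h))
        x = x + sigma(x, m) * dw   # driftless step; the diffusion coefficient depends on (X, M)
        m = max(m, x)              # update the running maximum
        xs.append(x)
        ms.append(m)
    return np.array(xs), np.array(ms)

rng = np.random.default_rng(2)
# Illustrative sigma: volatility shrinks as X falls away from its running maximum.
sigma = lambda x, m: 0.3 * np.exp(-(m - x))
path, running_max = simulate_x_and_max(sigma, x0=1.0, T=1.0, n_steps=500, rng=rng)
print(path[-1], running_max[-1])
```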
