Similar documents
20 similar documents found (search time: 15 ms)
1.
We consider the physical model of a classical mechanical system (called the “small system”) undergoing repeated interactions with a chain of identical small pieces (called the “environment”). This physical setup constitutes an advantageous way of implementing dissipation for classical systems; it is at the same time Hamiltonian and Markovian. This kind of model has already been studied in the context of quantum mechanical systems, where it was shown to give rise to quantum Langevin equations in the limit of continuous-time interactions (Attal and Pautrat in Ann Henri Poincaré 7:59–104, 2006), but it had not yet been considered for classical mechanical systems. The aim of this article is to compute the continuous limit of repeated interactions for classical systems and to prove that they give rise to particular stochastic differential equations (SDEs) in the limit. In particular, we recover the usual Langevin equations associated with the action of heat baths. In order to obtain these results, we consider the discrete-time dynamical system induced by Hamilton’s equations and the repeated interactions. We embed it into a continuous-time dynamical system and compute the limit when the time step goes to 0. This way, we obtain a discrete-time approximation of an SDE, considered as a deterministic dynamical system on the Wiener space, which is not exactly of the usual Euler-scheme type. We prove the L^p and almost-sure convergence of this scheme. We conclude with applications to concrete physical examples such as a charged particle in a uniform electric field or a harmonic interaction. We obtain the usual Langevin equation for the action of a heat bath when considering a damped harmonic oscillator as the small system.
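As a rough numerical illustration of the Langevin dynamics this abstract arrives at, here is a generic Euler–Maruyama discretization of the underdamped Langevin equation for a damped harmonic oscillator coupled to a heat bath. This is a standard textbook scheme, not the authors' repeated-interaction construction; all parameter names and values are illustrative.

```python
import numpy as np

def damped_oscillator_langevin(x0=1.0, v0=0.0, omega=1.0, gamma=0.5,
                               temperature=1.0, dt=1e-3, n_steps=10_000,
                               seed=0):
    """Euler-Maruyama discretization of the underdamped Langevin equation
        dx = v dt
        dv = (-omega^2 x - gamma v) dt + sqrt(2 gamma T) dW,
    i.e. a damped harmonic oscillator driven by a heat bath."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * gamma * temperature * dt)
    x, v = x0, v0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        x += v * dt
        v += (-omega ** 2 * x - gamma * v) * dt + noise * rng.standard_normal()
        xs[i] = x
    return xs
```

The noise amplitude sqrt(2 gamma T) is the fluctuation-dissipation relation: the damping coefficient and the temperature together fix the strength of the random force.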

2.
We study the limit behaviour of a nonlinear differential equation whose solution is a superadditive generalisation of a stochastic matrix, prove convergence, and provide necessary and sufficient conditions for ergodicity. In the linear case, the solution of our differential equation is equal to the matrix exponential of an intensity matrix and can then be interpreted as the transition operator of a homogeneous continuous-time Markov chain. Similarly, in the generalised nonlinear case that we consider, the solution can be interpreted as the lower transition operator of a specific set of non-homogeneous continuous-time Markov chains, called an imprecise continuous-time Markov chain. In this context, our convergence result shows that for a fixed initial state, an imprecise continuous-time Markov chain always converges to a limiting distribution, and our ergodicity result provides a necessary and sufficient condition for this limiting distribution to be independent of the initial state.
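The linear case mentioned in the abstract, where the transition operator is the matrix exponential of an intensity matrix, can be sketched numerically with a simple product approximation of exp(tQ). The intensity matrix below is illustrative, not taken from the paper.

```python
import numpy as np

def transition_matrix(Q, t, n=1 << 18):
    """Approximate the transition operator exp(t Q) of a homogeneous
    continuous-time Markov chain by the product (I + Q t/n)^n."""
    P_step = np.eye(Q.shape[0]) + Q * (t / n)
    return np.linalg.matrix_power(P_step, n)

# Intensity matrix: nonnegative off-diagonal rates, rows summing to zero.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
P = transition_matrix(Q, t=5.0)
# Each row of P is a probability distribution; for large t both rows
# approach the same limiting (stationary) distribution, here (2/3, 1/3),
# which is the ergodic behaviour the abstract's convergence result generalises.
```

Because (I + Q t/n) has unit row sums whenever Q has zero row sums, every approximant is itself a stochastic matrix, which makes this product form a natural discrete stand-in for the continuous-time semigroup.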

3.
We prove a convergence theorem for a family of value functions associated with stochastic control problems whose cost functions are defined by backward stochastic differential equations. The limit function is characterized as a viscosity solution to a fully nonlinear partial differential equation of second order. The key assumption we use in our approach is shown to be necessary and sufficient for the homogenizability of the control problem. The results partially generalize homogenization results for Hamilton–Jacobi–Bellman equations obtained recently by Alvarez and Bardi via viscosity-solution methods. In contrast to their approach, we use mainly probabilistic arguments, and we discuss a stochastic control interpretation for the limit equation.

4.
This paper is concerned with processes which are max-plus counterparts of Markov diffusion processes governed by stochastic differential equations in the Itô sense. Concepts of max-plus martingale and max-plus stochastic differential equation are introduced. The max-plus counterparts of the backward and forward PDEs for Markov diffusions turn out to be first-order PDEs of Hamilton–Jacobi–Bellman type. Max-plus additive integrals and a max-plus additive dynamic programming principle are considered. This leads to variational inequalities of Hamilton–Jacobi–Bellman type.

5.
This work develops numerical approximation algorithms for solutions of stochastic differential equations with Markovian switching. The existing numerical algorithms all use a discrete-time Markov chain for the approximation of the continuous-time Markov chain. In contrast, we generate the continuous-time Markov chain directly, and then use its skeleton process in the approximation algorithm. Focusing on weak approximation, we take a re-embedding approach, and define the approximation and the solution to the switching stochastic differential equation on the same space. In our approximation, we use a sequence of independent and identically distributed (i.i.d.) random variables in lieu of the common practice of using Brownian increments. By virtue of the strong invariance principle, we ascertain rates of convergence in the pathwise sense for the weak approximation scheme.
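The two ingredients described above, generating the continuous-time Markov chain directly and using i.i.d. variables in lieu of Brownian increments, can be sketched as follows. This is a minimal one-dimensional caricature under assumed dynamics, not the paper's scheme; all function and parameter names are illustrative.

```python
import numpy as np

def sample_ctmc_path(Q, state0, T, rng):
    """Generate a continuous-time Markov chain directly from its intensity
    matrix Q: exponential holding times plus embedded jump chain."""
    times, states = [0.0], [state0]
    t, s = 0.0, state0
    while True:
        rate = -Q[s, s]
        t += rng.exponential(1.0 / rate)
        if t >= T:
            return np.array(times), np.array(states)
        p = Q[s].copy()
        p[s] = 0.0
        p /= rate                      # embedded jump probabilities
        s = rng.choice(len(p), p=p)
        times.append(t)
        states.append(s)

def weak_euler_switching(x0, b, sigma, Q, state0, T, n_steps, seed=0):
    """Weak Euler scheme for dX = b(X, a) dt + sigma(X, a) dW, where the
    regime a follows the CTMC path; i.i.d. +-sqrt(dt) random variables
    replace the Brownian increments (they match mean and variance)."""
    rng = np.random.default_rng(seed)
    times, states = sample_ctmc_path(Q, state0, T, rng)
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        # regime in force at time k*dt: last jump time <= k*dt
        a = states[np.searchsorted(times, k * dt, side="right") - 1]
        xi = rng.choice([-1.0, 1.0]) * np.sqrt(dt)
        x += b(x, a) * dt + sigma(x, a) * xi
    return x
```

The two-point variables agree with Gaussian increments in their first two moments, which is what a weak (distributional) approximation needs, even though individual paths differ.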

6.
We prove a general theorem on the convergence of solutions of stochastic differential equations. As a corollary, we obtain a result concerning the convergence of solutions of stochastic differential equations with absolutely continuous processes to a solution of an equation with Brownian motion.

7.
We consider a stochastic differential equation in a Hilbert space with time-dependent coefficients for which no general existence and uniqueness results are known. We prove, under suitable assumptions, the existence and uniqueness of a measure-valued solution of the corresponding Fokker–Planck equation. In particular, we verify the Chapman–Kolmogorov equations and obtain an evolution system of transition probabilities for the stochastic dynamics informally given by the stochastic differential equation.

8.
In this paper, we derive the stochastic maximum principle for optimal control problems of the forward-backward Markovian regime-switching system. The control system is described by an anticipated forward-backward stochastic pantograph equation and modulated by a continuous-time finite-state Markov chain. By virtue of the classical variational approach, the duality method, and convex analysis, we obtain a stochastic maximum principle for the optimal control.

9.
We analyze sufficiently general models of repeated indirect measurements in which a quantum system interacts repeatedly with randomly chosen probes on which von Neumann direct measurements are performed. We prove, under suitable hypotheses, that the system state probability distribution converges after a large number of repeated indirect measurements, in a way compatible with quantum wave-function collapse. We extend this result to mixed states and prove similar results for the system density matrix. We show that the convergence is exponential, with a rate given by certain relevant mean relative entropies. We also prove that, under appropriate rescaling of the system and probe interactions, the state probability distribution and the system density matrix are solutions of stochastic differential equations modeling continuous-time quantum measurements. We analyze the large-time behavior of these continuous-time processes and prove convergence.

10.
This paper focuses on stochastic systems arising in mean-field models; the systems under consideration belong to the class of switching diffusions, in which continuous dynamics and discrete events coexist and interact. The discrete events are modeled by a continuous-time Markov chain. Different from the usual switching diffusions, the systems include mean-field interactions. Our effort is devoted to obtaining laws of large numbers for the underlying systems. One of the distinct features of the paper is that the limit of the empirical measures is not deterministic but rather a random measure depending on the history of the Markovian switching process. A main difficulty is that the standard martingale approach cannot be used to characterize the limit because of the coupling due to the random switching process. In this paper, in contrast to the classical approach, the limit is characterized as the conditional distribution (given the history of the switching process) of the solution to a stochastic McKean–Vlasov differential equation with Markovian switching.

11.
We consider the random evolution of an interface on a hard wall under periodic boundary conditions. The dynamics are governed by a system of stochastic differential equations of Skorohod type, namely the Langevin equation associated with a massless Hamiltonian to which a strong repelling force is added so that the interface stays above the wall. We study its macroscopic behavior under a suitable large-scale space-time limit and derive a nonlinear partial differential equation with reflection at the wall, which describes mean curvature motion except for some anisotropy effects. Such an equation is characterized by an evolutionary variational inequality. Received: 10 January 2002 / Revised version: 18 August 2002 / Published online: 15 April 2003. Mathematics Subject Classification (2000): 60K35, 82C24, 35K55, 35K85. Key words: hydrodynamic limit, effective interfaces, hard wall, Skorohod's stochastic differential equation, evolutionary variational inequality
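A one-dimensional caricature of the Skorohod-type reflection mechanism (a hard wall at 0) can be obtained by projecting each Euler step back onto the half-line. This toy scheme only illustrates the reflection idea, not the interface model itself; the Ornstein–Uhlenbeck drift and all parameters are illustrative.

```python
import numpy as np

def reflected_ou(x0=1.0, theta=1.0, sigma=0.5, dt=1e-3, n_steps=5_000, seed=0):
    """Projected Euler scheme for a Skorohod-type SDE on [0, infinity):
        dX = -theta * X dt + sigma dW + dL,
    where L is the reflection term (local time at 0) keeping X >= 0."""
    rng = np.random.default_rng(seed)
    x = x0
    path = np.empty(n_steps)
    for i in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 0.0)   # projection: push the state back above the wall
        path[i] = x
    return path
```

The amount clipped off at each projection step is the discrete analogue of the increment of the reflection process L, which acts only while the state sits at the wall.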

12.
We prove a large deviation principle for solutions of abstract stochastic evolution equations perturbed by small Lévy noise. We use general large deviation theorems of Varadhan and Bryc, coupled with the techniques of Feng and Kurtz (2006) [15], viscosity solutions of integro-partial differential equations in Hilbert spaces, and deterministic optimal control methods. The Laplace limit is identified as a viscosity solution of a Hamilton–Jacobi–Bellman equation of an associated control problem. We also establish exponential moment estimates for solutions of stochastic evolution equations driven by Lévy noise. The general results are applied to stochastic hyperbolic equations perturbed by a subordinated Wiener process.

13.
We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H&gt;1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both the fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle, we need to develop some new results of stochastic analysis for controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical systems driven by standard Brownian motions when the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

14.
For a system of identical particles, described by stochastic differential equations of the continuous type, we derive a kinetic equation and equations for statistically independent Markov limit trajectories of the particles. Translated from Ukrainskii Matematicheskii Zhurnal, Vol. 43, No. 1, pp. 137–140, January, 1991.

15.
The Zakai equation for the unnormalized conditional density is derived as a mild stochastic bilinear differential equation on a suitable L^2 space. It is assumed that the Markov semigroup corresponding to the state process is C_0 on such a space. This allows the establishment of the existence and uniqueness of the solution by means of general theorems on stochastic differential equations in Hilbert space. Moreover, an easy treatment of convergence conditions can be given for a general class of finite-dimensional approximations, including Galerkin schemes. This is done by using a general continuity result for the solution of a mild stochastic bilinear differential equation on a Hilbert space with respect to the semigroup, the forcing operator, and the initial state, within a suitable topology.

16.
We start with a discussion of coupled algebraic Riccati equations arising in the study of linear-quadratic optimal control problems for Markov jump linear systems. Under suitable assumptions, this system of equations has a unique positive semidefinite solution, which is the solution of practical interest. The coupled equations can be rewritten as a single linearly perturbed matrix Riccati equation with special structures. We study the linearly perturbed Riccati equation in a more general setting and obtain a class of iterative methods from different splittings of a positive operator involved in the Riccati equation. We prove some special properties of the sequences generated by these methods and determine and compare the convergence rates of these methods. Our results are then applied to the coupled Riccati equations of jump linear systems. We obtain linear convergence of the Lyapunov iteration and the modified Lyapunov iteration, and confirm that the modified Lyapunov iteration indeed has faster convergence than the original Lyapunov iteration.
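A scalar toy version of such an iterative splitting can be sketched as follows: freeze the coupling term at the previous iterate, then solve each resulting one-dimensional Riccati equation in closed form. This is not the paper's Lyapunov iteration; the equations are the scalar analogue of the coupled AREs, and all numbers are illustrative.

```python
import numpy as np

def coupled_scalar_riccati(a, b, q, r, Lam, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for the scalar coupled algebraic Riccati equations
        2 a_i x_i - (b_i^2 / r_i) x_i^2 + q_i + sum_j Lam[i, j] x_j = 0,
    freezing the coupling sum at the previous iterate and taking the
    nonnegative root of each resulting quadratic in x_i."""
    a, b, q, r = (np.asarray(v, dtype=float) for v in (a, b, q, r))
    Lam = np.asarray(Lam, dtype=float)
    g = b ** 2 / r
    x = np.zeros(len(a))
    for _ in range(max_iter):
        c = q + Lam @ x                       # frozen coupling + cost term
        x_new = (a + np.sqrt(a ** 2 + g * c)) / g
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Two-mode example with stable scalar dynamics in each regime;
# Lam plays the role of the jump-chain intensity matrix.
x = coupled_scalar_riccati(a=[-1.0, -0.5], b=[1.0, 1.0],
                           q=[1.0, 2.0], r=[1.0, 1.0],
                           Lam=[[-0.5, 0.5], [0.3, -0.3]])
```

Each sweep decouples the modes, so the iteration mirrors the splitting idea in the abstract: a simple subproblem per mode, with the coupling fed in from the previous iterate.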

17.
Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton–Jacobi–Bellman equation. These results are applied to some controlled stochastic partial differential equations.

18.
A class of quasilinear stochastic partial differential equations (SPDEs), driven by spatially correlated Brownian noise, is shown to become macroscopic (i.e., deterministic) as the length of the correlations tends to 0. The limit is the solution of a quasilinear partial differential equation. The quasilinear SPDEs are obtained as a continuum limit from the empirical distribution of a large number of stochastic ordinary differential equations (SODEs), coupled through a mean-field interaction and driven by correlated Brownian noise. The limit theorems are obtained by application of a general result on the convergence of exchangeable systems of processes. We also compare our approach to SODEs with the one introduced by Kunita.

19.
Focusing on stochastic dynamics that involve continuous states as well as discrete events, this article investigates a stochastic logistic model with regime switching modulated by a singular Markov chain involving a small parameter. This Markov chain undergoes weak and strong interactions, where the small parameter reflects the rapid rate of regime switching within each state class. A two-time-scale formulation is used to reduce the complexity. We obtain weak convergence of the underlying system, so that the limit has a much simpler structure. We then use the structure of the limit system as a bridge to investigate stochastic permanence of the original system, driven by a singular Markov chain with a large number of states. Sufficient conditions for stochastic permanence are obtained. A couple of examples and numerical simulations are given to illustrate our results.

20.
We study an infinite-dimensional Black–Scholes–Barenblatt equation, which is a Hamilton–Jacobi–Bellman equation related to option pricing in the Musiela model of interest rate dynamics. We prove the existence and uniqueness of viscosity solutions of the Black–Scholes–Barenblatt equation and discuss their stochastic optimal control interpretation. We also show that in some cases the solution can be locally uniformly approximated by solutions of suitable finite-dimensional Hamilton–Jacobi–Bellman equations.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号