Similar literature
 20 similar documents found.
1.
In this paper, we consider the long-time behavior of stable-like processes. A stable-like process is a Feller process given by the symbol p(x, ξ) = −iβ(x)ξ + γ(x)|ξ|^α(x), where α(x) ∈ (0, 2), β(x) ∈ R and γ(x) ∈ (0, ∞). More precisely, we give sufficient conditions for recurrence, transience and ergodicity of stable-like processes in terms of the stability function α(x), the drift function β(x) and the scaling function γ(x). Further, as a special case of these results, we give a new proof of the recurrence and transience properties of one-dimensional symmetric stable Lévy processes with index of stability α ≠ 1.

2.
In this paper we consider a homotopy deformation approach to solving Markov decision process problems by the continuous deformation of a simpler Markov decision process problem until it is identical with the original problem. Algorithms and performance bounds are given.

3.
By adopting a nice auxiliary transform of Markov operators, we derive new bounds for the first eigenvalue of the generator corresponding to symmetric Markov processes. Our results not only extend the related topic in the literature, but also are efficiently used to study the first eigenvalue of birth-death processes with killing and that of elliptic operators with killing on the half line. In particular, we obtain two approximation procedures for the first eigenvalue of birth-death processes with killing, and present qualitatively sharp upper and lower bounds for the first eigenvalue of elliptic operators with killing on the half line.

4.
We give two simple axioms that characterize a simple functional form for aggregation of column stochastic matrices (i.e., Markov processes). Several additional observations are made about such aggregation, including the special case in which the aggregated process is Markovian relative to the original one.
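The special case in which the aggregated process is Markovian is closely related to lumpability. Below is a minimal pure-Python sketch, with an invented 3-state matrix and partition (not taken from the paper), of checking whether a partition yields a Markovian aggregate of a column-stochastic matrix, and of forming that aggregate.

```python
# Strong-lumpability check for a column-stochastic matrix P
# (convention: P[i][j] = probability of moving from state j to state i).
# A partition supports an exact Markovian aggregation when, for every pair
# of blocks (B, C), the total mass sent into B is the same from every
# state of C. Matrix and partition below are illustrative only.

def is_lumpable(P, partition):
    for B in partition:
        for C in partition:
            masses = [sum(P[i][j] for i in B) for j in C]
            if any(abs(m - masses[0]) > 1e-12 for m in masses):
                return False
    return True

def aggregate(P, partition):
    # Aggregated column-stochastic matrix: one column per block,
    # read off from any representative state of the source block.
    k = len(partition)
    return [[sum(P[i][partition[c][0]] for i in partition[b])
             for c in range(k)] for b in range(k)]

# 3-state chain where states 1 and 2 behave identically toward each block.
P = [[0.5, 0.2, 0.2],
     [0.3, 0.4, 0.5],
     [0.2, 0.4, 0.3]]
partition = [[0], [1, 2]]
print(is_lumpable(P, partition))   # → True
```

The aggregated matrix is again column stochastic, and the aggregated chain is Markovian precisely when the check passes.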

5.
6.
We consider a discrete-time constrained Markov decision process under the discounted cost optimality criterion. The state and action spaces are assumed to be Borel spaces, while the cost and constraint functions might be unbounded. We are interested in approximating numerically the optimal discounted constrained cost. To this end, we suppose that the transition kernel of the Markov decision process is absolutely continuous with respect to some probability measure μ. Then, by solving the linear programming formulation of a constrained control problem related to the empirical probability measure μ_n of μ, we obtain the corresponding approximation of the optimal constrained cost. We derive a concentration inequality which gives bounds on the probability that the estimation error is larger than some given constant. This bound is shown to decrease exponentially in n. Our theoretical results are illustrated with a numerical application based on a stochastic version of the Beverton–Holt population model.
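As a rough illustration of the kind of model mentioned in the numerical application, here is a minimal Monte Carlo sketch of a stochastic Beverton–Holt recursion with a discounted running cost. All parameters, the shock distribution, and the cost c(x) = x are invented for illustration and are not taken from the paper.

```python
import random

# Stochastic Beverton-Holt recursion x_{t+1} = theta_t * r * x_t / (1 + x_t/K),
# with i.i.d. bounded multiplicative shocks theta_t, and an empirical estimate
# of the discounted cost sum_t alpha^t * c(x_t) with c(x) = x.
# All numbers here are hypothetical, not the paper's data.

def simulate_cost(x0, r=2.0, K=10.0, alpha=0.9, horizon=200, seed=0):
    rng = random.Random(seed)
    x, total = x0, 0.0
    for t in range(horizon):
        total += (alpha ** t) * x          # running cost c(x) = x
        theta = rng.uniform(0.8, 1.2)      # bounded multiplicative shock
        x = theta * r * x / (1.0 + x / K)
    return total

print(simulate_cost(1.0))
```

Since x/(1 + x/K) < K, the state stays bounded by 1.2·r·K, so the discounted cost is finite, which is the kind of boundedness used when constructing empirical approximations.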

7.
This note presents a technique that is useful for the study of piecewise deterministic Markov decision processes (PDMDPs) with general policies and unbounded transition intensities. This technique produces an auxiliary PDMDP from the original one. The auxiliary PDMDP possesses certain desired properties, which may not be possessed by the original PDMDP. We apply this technique to risk-sensitive PDMDPs with total cost criteria, and comment on its connection with the uniformization technique.

8.
We introduce and study the natural counterpart of the Dunkl Markov processes in a negatively curved setting. We give a semimartingale decomposition of the radial part, and some properties of the jumps. We also prove a law of large numbers, a central limit theorem, and the convergence of the normalized process to the Dunkl process. Finally, we describe the asymptotic behavior of the infinite loop as it was done by Anker, Bougerol and Jeulin in the symmetric spaces setting in (Iberoamericana 18: 41–97, 2002). Partially supported by the European Commission (IHP Network HARP 2002–2006).

9.
We prove the metastable behavior of reversible Markov processes on finite state spaces under minimal conditions on the jump rates. To illustrate the result we deduce the metastable behavior of the Ising model with a small magnetic field at very low temperature.

10.
1. Introduction. The weighted Markov decision processes (MDP's) have been extensively studied since the 1980's; see for instance [1–6] and so on. The theory of weighted MDP's with perturbed transition probabilities appears to have been mentioned only in [7]. This paper will discuss the models of we...

11.
We study controlled Markov processes where multiple decisions need to be made for each state. We present conditions on the cost structure and the state transition mechanism of the process under which optimal decisions are restricted to a subset of the decision space. As a result, the numerical computation of the optimal policy may be significantly expedited.

12.
We consider a finite-state Markov decision process in which two rewards are associated with each action in each state. The objective is to optimize the ratio of the two rewards over an infinite horizon. In the discounted version of this decision problem, it is shown that the optimal value is unique and the optimal strategy is pure and stationary; however, they depend on the starting state. Also, a finite algorithm for computing the solution is given.
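For intuition, the ratio criterion can be evaluated by brute force on a toy model: for each pure stationary policy of a hypothetical 2-state MDP, solve (I − αP)v = r for both reward vectors and compare the ratio of the two discounted values at a fixed starting state. All model data below are invented; this enumeration is not the paper's finite algorithm.

```python
import itertools

# Ratio of two discounted rewards on a tiny 2-state, 2-action MDP.
# For each pure stationary policy we solve (I - alpha*P) v = r for both
# reward vectors and compare v1(start)/v2(start). Hypothetical data.

def discounted_value(P, r, alpha):
    # Solve the 2x2 linear system (I - alpha P) v = r by Cramer's rule.
    a = 1 - alpha * P[0][0]; b = -alpha * P[0][1]
    c = -alpha * P[1][0];    d = 1 - alpha * P[1][1]
    det = a * d - b * c
    return [(d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det]

# Per (state, action): transition row and the two one-step rewards.
P_sa = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
        (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}
r1 = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 4.0}
r2 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}

def best_ratio(start=0, alpha=0.9):
    best = None
    for policy in itertools.product([0, 1], repeat=2):  # one action per state
        P = [P_sa[(s, policy[s])] for s in (0, 1)]
        v1 = discounted_value(P, [r1[(s, policy[s])] for s in (0, 1)], alpha)
        v2 = discounted_value(P, [r2[(s, policy[s])] for s in (0, 1)], alpha)
        ratio = v1[start] / v2[start]
        if best is None or ratio > best[0]:
            best = (ratio, policy)
    return best
```

Repeating the search with `start=1` can return a different policy, matching the abstract's remark that the optimal strategy depends on the starting state.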

13.
We characterize the value function and the optimal stopping time for a large class of optimal stopping problems where the underlying process to be stopped is a fairly general Markov process. The main result is inspired by recent findings for Lévy processes obtained essentially via the Wiener–Hopf factorization. The main ingredient in our approach is the representation of the β-excessive functions as expected suprema. A variety of examples is given.

14.
Let X be a strongly symmetric standard Markov process on a locally compact metric space S with 1-potential density u^1(x, y). Let {L_t^y, (t, y) ∈ R_+ × S} denote the local times of X, and let G = {G(y), y ∈ S} be a mean-zero Gaussian process with covariance u^1(x, y). In this paper, results about the moduli of continuity of G are carried over to give similar moduli-of-continuity results about L_t^y considered as a function of y. Several examples are given, with particular attention paid to symmetric Lévy processes. The research of both authors was supported in part by a grant from the National Science Foundation. In addition, the research of Professor Rosen was also supported in part by a PSC-CUNY research grant. Professor Rosen would like to thank the Israel Institute of Technology, where he spent the academic year 1989–90 and was supported, in part, by the United States–Israel Binational Science Foundation. Professor Marcus was a faculty member at Texas A&M University while some of this research was carried out.

15.
In this paper, we study the quasi-stationarity and quasi-ergodicity of general Markov processes. We show, among other things, that if X is a standard Markov process admitting a dual with respect to a finite measure m and if X admits a strictly positive continuous transition density p(t, x, y) (with respect to m) which is bounded in (x, y) for every t > 0, then X has a unique quasi-stationary distribution and a unique quasi-ergodic distribution. We also present several classes of Markov processes satisfying the above conditions.

16.
In this paper we address the problem of efficiently deriving the steady-state distribution for a continuous-time Markov chain (CTMC) S evolving in a random environment E. The process underlying E is also a CTMC; S is called a Markov modulated process. Markov modulated processes have been widely studied in the literature since they are applicable when an environment influences the behaviour of a system. For instance, this is the case of a wireless link, whose quality may depend on the state of some random factors such as the intensity of the noise in the environment. In this paper we study the class of Markov modulated processes which exhibit a separable, product-form stationary distribution. We show that several models that have been proposed in the literature can be studied by applying the Extended Reversed Compound Agent Theorem (ERCAT), and new product-forms are also derived. We also address the question of the necessity of ERCAT for product-forms and show a meaningful example of a product-form not derivable via ERCAT.
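A minimal sanity check of the product-form idea, in the degenerate case where environment and system evolve independently (so the joint generator is the Kronecker sum of the two marginal generators): the joint stationary distribution factorizes into the product of the marginals. All rates are illustrative; ERCAT itself handles genuinely interacting chains and is not reproduced here.

```python
# Product-form check for two *independent* 2-state CTMCs: the stationary
# distribution of the joint chain (generator Q = Q1 (+) Q2, Kronecker sum)
# is the outer product of the marginal stationary distributions.
# Rates below are invented for illustration.

def stationary_2state(lam, mu):
    # 2-state CTMC with rates lam (0 -> 1) and mu (1 -> 0).
    return [mu / (lam + mu), lam / (lam + mu)]

def kron_sum_generator(Q1, Q2):
    # Q = Q1 ⊗ I + I ⊗ Q2, with joint state (i, j) indexed as i*m + j.
    n, m = len(Q1), len(Q2)
    Q = [[0.0] * (n * m) for _ in range(n * m)]
    for i in range(n):
        for j in range(m):
            for k in range(n):
                Q[i * m + j][k * m + j] += Q1[i][k]
            for l in range(m):
                Q[i * m + j][i * m + l] += Q2[j][l]
    return Q

lam1, mu1, lam2, mu2 = 1.0, 2.0, 0.5, 1.5
Q1 = [[-lam1, lam1], [mu1, -mu1]]
Q2 = [[-lam2, lam2], [mu2, -mu2]]
p1 = stationary_2state(lam1, mu1)
p2 = stationary_2state(lam2, mu2)
pi = [a * b for a in p1 for b in p2]        # product-form candidate
Q = kron_sum_generator(Q1, Q2)
residual = max(abs(sum(pi[i] * Q[i][j] for i in range(4))) for j in range(4))
print(residual < 1e-12)   # → True: pi solves pi Q = 0
```

The interest of results like ERCAT is precisely that such factorizations can survive certain interactions between S and E, not just independence.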

17.
18.
A stochastic matrix is “monotone” [4] if its row-vectors are stochastically increasing. Closure properties, characterizations and the availability of a second maximal eigenvalue are developed. Such monotonicity is present in a variety of processes in discrete and continuous time. In particular, birth-death processes are monotone. Conditions for the sequential monotonicity of a process are given and related inequalities presented.
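Stochastic monotonicity can be checked directly from the definition: each row must be stochastically dominated by the next, i.e. the upper-tail sums must be non-decreasing down the rows. A small sketch, with an illustrative birth-death transition matrix (the abstract notes such chains are always monotone):

```python
# Monotonicity check for a row-stochastic matrix: row i+1 stochastically
# dominates row i when, for every k, sum_{j >= k} P[i+1][j] >= sum_{j >= k} P[i][j].
# The example matrix is illustrative, not from the paper.

def is_monotone(P):
    n = len(P)
    for i in range(n - 1):
        tail_lo = tail_hi = 0.0
        for k in range(n - 1, -1, -1):       # accumulate tail sums from the right
            tail_lo += P[i][k]
            tail_hi += P[i + 1][k]
            if tail_hi < tail_lo - 1e-12:
                return False
    return True

birth_death = [[0.5, 0.5, 0.0],
               [0.3, 0.4, 0.3],
               [0.0, 0.5, 0.5]]
print(is_monotone(birth_death))   # → True
```

A swap matrix such as [[0, 1], [1, 0]] fails the check, since its rows are stochastically decreasing.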

19.
We study Markov decision processes under the average-value-at-risk criterion. The state space and the action space are Borel spaces, the costs are allowed to be unbounded from above, and the discount factors are state-action dependent. Under suitable conditions, we establish the existence of optimal deterministic stationary policies. Furthermore, we apply our main results to a cash-balance model.

20.
It is well-known that well-posedness of a martingale problem in the class of continuous (or r.c.l.l.) solutions enables one to construct the associated transition probability functions. We extend this result to the case when the martingale problem is well-posed in the class of solutions which are continuous in probability. This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.
