Similar Literature
20 similar documents found.
1.
We suggest a unified approach to claims reserving for life insurance policies with reserve-dependent payments driven by multi-state Markov chains. The associated prospective reserve is formulated as a recursive utility function using the framework of backward stochastic differential equations (BSDEs). We show that the prospective reserve satisfies a nonlinear Thiele equation for Markovian BSDEs when the driver is a deterministic function of the reserve and the underlying Markov chain. Aggregation of prospective reserves for large and homogeneous insurance portfolios is considered through mean-field approximations. We show that the corresponding prospective reserve satisfies a BSDE of mean-field type and derive the associated nonlinear Thiele equation.
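As background, a minimal sketch in illustrative notation (not taken from the paper): the classical linear Thiele equation for the state-wise prospective reserve, followed by the nonlinear form in which the right-hand side becomes a driver that may depend on the reserve itself.

```latex
% Classical Thiele ODE for the state-wise prospective reserve V_i(t), with
% interest rate r, sojourn payment rate b_i, transition payments b_{ij}, and
% transition intensities \mu_{ij} (illustrative notation).
\frac{d}{dt} V_i(t) = r(t)\,V_i(t) - b_i(t)
  - \sum_{j \neq i} \mu_{ij}(t)\,\bigl( b_{ij}(t) + V_j(t) - V_i(t) \bigr).

% Nonlinear (BSDE-type) variant: the linear terms are replaced by a driver f
% that may itself depend on the current reserve,
\frac{d}{dt} V_i(t) = -\, f\bigl( t,\, i,\, V_i(t),\, \{ V_j(t) - V_i(t) \}_{j \neq i} \bigr),
\qquad V_i(T) = 0 .
```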

2.
Markov chains and mean-field analysis are powerful and widely used tools for performance analysis in large-scale computer and communication systems. In this paper, we consider the application of Markov modeling and mean-field analysis to solid-state drives (SSDs). SSDs are now widely deployed in mobiles, desktops, and data centers due to their high I/O performance and low energy consumption. In particular, we focus on characterizing the performance–durability tradeoff of garbage collection (GC) algorithms in SSDs. Specifically, we first develop a stochastic Markov chain model to capture the I/O dynamics of large-scale SSDs, then adapt mean-field analysis to derive the asymptotic steady state, based on which we are able to easily analyze the performance–durability tradeoff of a large family of GC algorithms. We further prove convergence of the model and generalize it to all types of workload. Inspired by this model, we also propose a randomized greedy algorithm (RGA) which has a single tunable parameter to trade between performance and durability. Using trace-driven simulation on DiskSim with SSD add-ons, we demonstrate how RGA can be parameterized to realize the performance–durability tradeoff.
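As a rough illustration of a tunable victim-selection rule of the kind described above, here is a minimal Python sketch in which d candidate blocks are sampled uniformly at random and the one with the fewest valid pages is reclaimed; the parameter d, the Block structure, and the helper names are assumptions for illustration and may differ from the paper's exact RGA.

```python
import random
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    valid_pages: int  # pages that must be copied out before erasure

def rga_select_victim(blocks, d, rng=random):
    """Randomized greedy victim selection (illustrative sketch).

    Sample d candidate blocks uniformly at random and return the one with the
    fewest valid pages.  d = 1 behaves like purely random selection (better
    wear evenness), while large d approaches the greedy policy (fewer page
    copies per garbage collection, hence better performance).
    """
    candidates = rng.sample(blocks, min(d, len(blocks)))
    return min(candidates, key=lambda b: b.valid_pages)

# Tiny usage example with synthetic block occupancies.
if __name__ == "__main__":
    rng = random.Random(0)
    blocks = [Block(i, rng.randint(0, 64)) for i in range(1024)]
    victim = rga_select_victim(blocks, d=8, rng=rng)
    print("reclaim block", victim.block_id, "with", victim.valid_pages, "valid pages")
```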

3.
The existing literature contains many examples of mean-field particle systems converging to the distribution of a Markov process conditioned not to hit a given set. In many situations, these mean-field particle systems are failable, meaning that they are not well defined after a given random time. Our first aim is to introduce an original mean-field particle system which is always well defined and whose large-particle-number limit is, in full generality, the distribution of a process conditioned not to hit a given set. Under natural conditions on the underlying process, we also prove that the convergence holds uniformly in time as the number of particles goes to infinity. As an illustration, we show that our assumptions are satisfied in the case of a piecewise deterministic Markov process.
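To make the setting concrete, below is a minimal Python sketch of a classical Fleming–Viot-type particle system of the kind found in the existing literature referred to above (it is not the authors' new construction); the random-walk dynamics, the barrier, and all parameters are illustrative assumptions.

```python
import random

def fleming_viot_walk(n_particles=200, n_steps=2000, barrier=0, start=5, rng=None):
    """Fleming-Viot-type particle approximation (illustrative sketch).

    Each particle performs a lazy simple random walk on the integers started
    at `start`.  When a particle reaches the forbidden set {x <= barrier} it
    jumps to the current position of a uniformly chosen other particle.  The
    empirical distribution approximates the walk conditioned on never hitting
    the forbidden set.
    """
    rng = rng or random.Random(0)
    pos = [start] * n_particles
    for _ in range(n_steps):
        i = rng.randrange(n_particles)
        pos[i] += rng.choice((-1, 0, 1))
        if pos[i] <= barrier:           # absorbed: resample from another particle
            j = rng.randrange(n_particles - 1)
            if j >= i:
                j += 1
            pos[i] = pos[j]
    return pos

if __name__ == "__main__":
    positions = fleming_viot_walk()
    print("empirical mean position:", sum(positions) / len(positions))
```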

4.
Veretennikov, A. Yu. Queueing Systems, 2020, 94(3-4): 243-255
A mean-field extension of the queueing system (GI/GI/1) is considered. The process is constructed as a Markov solution of a martingale problem. Uniqueness in distribution is also...

5.
We present and study an approximation scheme for the mean of a stochastic simulation that models a population subject to nonlinear birth and exogenous disturbances. We use the information from the probability distribution for the disturbance times to construct a method that improves upon the mean-field approximation. We show through two example systems the effectiveness of the Markov embedding approximation and discuss the contexts in which it is an appropriate method.

6.
The supermarket model is a randomized load-balancing strategy for real-time dynamic control of large-scale parallel queueing networks, with important practical applications in computer networks, cloud computing, manufacturing systems, transportation networks, and other areas. This paper considers several important issues concerning the supermarket model: real-time dynamic control schemes; efficiency comparisons; the mean-field black hole; Markov-modulated environments; stability; fixed points; and system performance evaluation. These issues are also studied through numerical examples, including a performance comparison between the supermarket model in which customers join the shortest queue and the one in which servers serve the longest queue, together with an analysis of their relative efficiency; a comparison of three mechanisms for controlling the arrival process in the supermarket model; and a performance evaluation of the supermarket model in a Markov-modulated environment.
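As a concrete illustration of the join-the-shortest-of-d-sampled-queues mechanism at the heart of the supermarket model, here is a minimal discrete-event Python sketch; the arrival and service rates and the choice parameter d are illustrative assumptions.

```python
import heapq
import random

def supermarket_sim(n_queues=100, d=2, lam=0.9, mu=1.0, horizon=2_000.0, seed=0):
    """Supermarket-model sketch: Poisson arrivals at total rate lam * n_queues;
    each arrival samples d queues uniformly at random and joins the shortest;
    each of the n_queues servers works at rate mu (exponential service times).
    Returns the time-averaged number of customers per queue.
    """
    rng = random.Random(seed)
    queues = [0] * n_queues
    total, t, area = 0, 0.0, 0.0
    events = []
    heapq.heappush(events, (rng.expovariate(lam * n_queues), "arrival", -1))
    while events:
        time, kind, q = heapq.heappop(events)
        if time > horizon:
            break
        area += total * (time - t)
        t = time
        if kind == "arrival":
            chosen = min(rng.sample(range(n_queues), d), key=lambda i: queues[i])
            queues[chosen] += 1
            total += 1
            if queues[chosen] == 1:  # the server was idle: start a service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure", chosen))
            heapq.heappush(events, (t + rng.expovariate(lam * n_queues), "arrival", -1))
        else:  # departure
            queues[q] -= 1
            total -= 1
            if queues[q] > 0:  # next customer in this queue starts service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure", q))
    return area / (t * n_queues) if t > 0 else 0.0

if __name__ == "__main__":
    print("average queue length with d = 2:", round(supermarket_sim(), 3))
```

For d = 1 this reduces to uniform random routing, while already d = 2 exhibits the sharp reduction in queue lengths that the supermarket-model literature is known for.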

7.
We introduce geometric Markov renewal processes as a model for a security market and study these processes in a series scheme. We consider their approximations in the form of averaged, merged, and double-averaged geometric Markov renewal processes. Weak convergence analysis and rates of convergence for ergodic geometric Markov renewal processes are presented. Martingale properties and infinitesimal operators of geometric Markov renewal processes are presented, and a Markov renewal equation for the expectation is derived. As an application, we consider the case of two ergodic classes. Moreover, we consider a generalized binomial model for a security market induced by a position-dependent random map as a special case of a geometric Markov renewal process.
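For orientation, the geometric Markov renewal process is commonly defined as follows in this line of work; the notation below is an assumption based on the usual formulation and may differ in detail from the paper's.

```latex
% (x_k, \theta_k) is a Markov renewal process with embedded Markov chain (x_k),
% N(t) counts the renewals up to time t, and \rho(\cdot) > -1 is a bounded
% return function.  The geometric Markov renewal process is then
S_t \;=\; S_0 \prod_{k=1}^{N(t)} \bigl( 1 + \rho(x_k) \bigr), \qquad t \ge 0 ,
% so the price is constant between renewal epochs and is multiplied by the
% random factor 1 + \rho(x_k) at the k-th renewal.
```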

8.
宋娟 (Song Juan), 张铭 (Zhang Ming). 数学学报 (Acta Mathematica Sinica), 2018, 61(2): 337-346
This paper applies the coupling method to time-inhomogeneous Markov processes, extending the basic coupling theorem from the time-homogeneous case, and thereby provides a theoretical foundation for the subsequent study of couplings of time-inhomogeneous Markov processes.
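For background (standard material, not the paper's generalized theorem): given a coupling (X_t, Y_t) of two copies of a Markov process whose components move together after the coupling time T, the coupling inequality bounds the distance between the time-t laws.

```latex
% Coupling inequality: if X_t = Y_t for all t \ge T, then for every
% measurable set A and every t,
\bigl| \mathbb{P}(X_t \in A) - \mathbb{P}(Y_t \in A) \bigr| \;\le\; \mathbb{P}(T > t),
% hence the total variation distance between the laws of X_t and Y_t is at
% most \mathbb{P}(T > t) (with the sup-over-sets convention for total variation).
```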

9.
The limit distribution of homogeneous Markov processes has been studied extensively and is well understood, but this is not the case for inhomogeneous Markov processes. In this paper, we review some recent results on inhomogeneous Markov processes generated by non-autonomous stochastic (partial) differential equations (SDEs for short). Under suitable conditions, we show that the distribution of recurrent solutions of SDEs constitutes the limit distribution of the corresponding inhomogeneous Markov processes.

10.
Focusing on stochastic systems arising in mean-field models, the systems under consideration belong to the class of switching diffusions, in which continuous dynamics and discrete events coexist and interact. The discrete events are modeled by a continuous-time Markov chain. Unlike the usual switching diffusions, the systems include mean-field interactions. Our effort is devoted to obtaining laws of large numbers for the underlying systems. One of the distinct features of the paper is that the limit of the empirical measures is not deterministic but a random measure depending on the history of the Markovian switching process. A main difficulty is that the standard martingale approach cannot be used to characterize the limit because of the coupling due to the random switching process. In this paper, in contrast to the classical approach, the limit is characterized as the conditional distribution (given the history of the switching process) of the solution to a stochastic McKean–Vlasov differential equation with Markovian switching.
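Schematically, and in illustrative notation that is an assumption rather than a quotation from the paper, the limiting object described above is of the following form.

```latex
% X_t: continuous state, \alpha_t: continuous-time Markov chain (the switching
% process), W_t: Brownian motion independent of \alpha.
dX_t = b\bigl( X_t, \mu_t, \alpha_t \bigr)\,dt
     + \sigma\bigl( X_t, \mu_t, \alpha_t \bigr)\,dW_t ,
\qquad
\mu_t = \mathrm{Law}\bigl( X_t \,\big|\, \mathcal{F}^{\alpha}_t \bigr),
% i.e. the mean-field interaction enters through the conditional distribution
% of X_t given the history \mathcal{F}^{\alpha}_t of the switching process.
```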

11.
The literature on maximum entropy for Markov processes deals mainly with discrete-time Markov chains. Very few papers deal with continuous-time jump Markov processes, and none deal with semi-Markov processes. The aim of this paper is to help fill this gap. We recall the basics concerning entropy for Markov and semi-Markov processes, and we study several problems to give an overview of the possible directions for using maximum entropy in connection with these processes. Numerical illustrations are presented, in particular in an application to reliability.
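For reference, the entropy rate that is maximized in the discrete-time Markov chain literature mentioned above takes the following well-known form (background only; the continuous-time and semi-Markov extensions studied in the paper are not reproduced here).

```latex
% For an ergodic chain with transition matrix P = (p_{ij}) and stationary
% distribution \pi, the Shannon entropy rate is
H(P) \;=\; -\sum_{i} \pi_i \sum_{j} p_{ij} \log p_{ij} .
```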

12.
张美娟 (Zhang Meijuan), 张铭 (Zhang Ming). 数学杂志 (Journal of Mathematics), 2017, 37(4): 819-822
This paper studies the stochastic monotonicity of time-inhomogeneous Markov processes. By adapting and improving the proof techniques used for the stochastic monotonicity of time-homogeneous Markov processes, we obtain an explicit criterion for the stochastic monotonicity of time-inhomogeneous Markov processes, and we further strengthen this sufficient condition into an equivalent one.
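For reference, the property being characterized is the following standard one (background material in illustrative notation; the paper's explicit criterion itself is not reproduced): a transition function on an ordered state space is stochastically monotone when it preserves the ordering of initial states in the sense below.

```latex
% P(s,t;\, x,\, \cdot) denotes the transition kernel from time s to time t.
% Stochastic monotonicity: for all s \le t, all x \le y, and every bounded
% non-decreasing function f,
\int f(z)\, P(s,t;\, x,\, dz) \;\le\; \int f(z)\, P(s,t;\, y,\, dz),
% equivalently, P(s,t;\, x,\, \cdot) is stochastically dominated by
% P(s,t;\, y,\, \cdot) whenever x \le y.
```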

13.
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of mean-field type. The information structure considered for the SDG is possibly asymmetric and partial. To prove our SMP we take an approach based on spike variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs an SMP for a mean-field type optimal stochastic control problem; however, our use of second-order adjoint processes of a second type to handle what we call the second form of quadratic-type terms offers an alternative to adapting, to our setting, the approach used in their article for the analogous type of term.
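For background, a generic mean-field backward stochastic differential equation — the type of equation defining the first-order adjoint processes mentioned above — can be sketched as follows (illustrative form with assumed notation, not the paper's exact adjoint system).

```latex
% Generic mean-field BSDE on [0, T]: the driver depends on the expectations of
% the solution pair (Y, Z) as well as on the pair itself.
Y_t \;=\; \xi \;+\; \int_t^T f\bigl( s,\, Y_s,\, Z_s,\, \mathbb{E}[Y_s],\, \mathbb{E}[Z_s] \bigr)\,ds
\;-\; \int_t^T Z_s\, dW_s , \qquad 0 \le t \le T .
```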

14.
Summary. This paper studies processes constructed by birthing the trajectories of a given Markov process along time according to random probabilities. Getoor has considered the case where the random probabilities are determined by comultiplicative functionals and proved, for right processes, that the post-birth process has the Markov property. Here, randomizations of comultiplicative functionals are described which give rise to conditionally Markov processes. The main argument is developed for general Markov processes, and the transition probabilities of the new process, including those from the pre-birth state, are made explicit.

15.
We consider piecewise-deterministic Markov processes that occur as scaling limits of discrete-time Markov chains that describe the Transmission Control Protocol (TCP). The class of processes allows for general increase and decrease profiles. Our key observation is that stationary results for the general class follow directly from the stationary results for the idealized TCP process. The latter is a Markov process that increases linearly and experiences downward jumps at times governed by a Poisson process. To establish this connection, we apply space–time transformations that preserve the properties of the class of Markov processes.
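The idealized TCP process mentioned above admits a very short simulation. The Python sketch below assumes the standard multiplicative-decrease variant in which the state is halved at each Poisson event; the decrease profile and all rates are illustrative assumptions, since the class treated in the paper allows general profiles.

```python
import random

def simulate_idealized_tcp(rate=1.0, jump_rate=1.0, halving=0.5,
                           horizon=10_000.0, seed=0):
    """Idealized TCP piecewise-deterministic Markov process (sketch).

    The state grows linearly at `rate` and, at the epochs of a Poisson process
    with intensity `jump_rate`, is multiplied by `halving`.  Returns the
    time-averaged state as a rough estimate of the stationary mean.
    """
    rng = random.Random(seed)
    x, t, area = 1.0, 0.0, 0.0
    while t < horizon:
        dt = rng.expovariate(jump_rate)        # time to the next downward jump
        dt = min(dt, horizon - t)
        area += x * dt + 0.5 * rate * dt * dt  # integral of the linear ramp
        x += rate * dt
        t += dt
        if t < horizon:
            x *= halving                       # multiplicative decrease
    return area / horizon

if __name__ == "__main__":
    print("estimated stationary mean:", round(simulate_idealized_tcp(), 3))
```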

16.
In this paper a new notion of a hierarchic Markov process is introduced. It is a series of Markov decision processes called subprocesses built together in one Markov decision process called the main process. The hierarchic structure is specially designed to fit replacement models which in the traditional formulation as ordinary Markov decision processes are usually very large. The basic theory of hierarchic Markov processes is described and examples are given of applications in replacement models. The theory can be extended to fit a situation where the replacement decision depends on the quality of the new asset available for replacement.
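To ground the discussion, here is a minimal Python sketch of an ordinary (non-hierarchic) replacement problem formulated as a Markov decision process and solved by value iteration; the states, costs, and transitions are toy numbers chosen for illustration, and the hierarchic construction of the paper is not implemented.

```python
def replacement_value_iteration(n_ages=5, keep_cost=None, replace_cost=10.0,
                                discount=0.95, tol=1e-9):
    """Toy machine-replacement MDP solved by value iteration (sketch).

    State = age of the asset (0..n_ages-1).  Action 'keep' pays an
    age-dependent running cost and ages the machine by one step (sticking at
    the maximum age); action 'replace' pays a fixed cost and resets the age
    to 0.  Returns the optimal value function and policy.
    """
    keep_cost = keep_cost or [1.0 + 2.0 * a for a in range(n_ages)]
    values = [0.0] * n_ages
    while True:
        new_values, policy = [], []
        for age in range(n_ages):
            nxt = min(age + 1, n_ages - 1)
            q_keep = keep_cost[age] + discount * values[nxt]
            q_replace = replace_cost + discount * values[0]
            new_values.append(min(q_keep, q_replace))
            policy.append("keep" if q_keep <= q_replace else "replace")
        if max(abs(a - b) for a, b in zip(new_values, values)) < tol:
            return new_values, policy
        values = new_values

if __name__ == "__main__":
    values, policy = replacement_value_iteration()
    print("optimal policy by age:", policy)
```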

17.
§1 Classification of states. Definition 1.1. Let I be the set of non-negative integers and let P = {p_{ij}(s,t) | i, j ∈ I, a ≤ s ≤ t ≤ b} be a matrix of transition functions. P is called right-standard for i at t if lim_{h↓0} p_{ii}(t, t+h) = 1, and left-standard for i at t if lim_{h↓0} p_{ii}(t-h, t) = 1. If P is both right-standard and left-standard for i at t, then P is called standard for i at t. If P is standard for i at every t, then P is called standard for i; right-standard and left-standard for i are defined analogously. If P is standard for every i, then P is called standard; right-standard and left-standard for P are defined similarly (cf. [5], [6]).

18.
Dynkin's isomorphism theorem is an important tool for investigating problems formulated in terms of local times of Markov processes. This theorem concerns continuous-time Markov processes. We give here an equivalent version for Markov chains.

19.
Limit theorems for functionals of classical (homogeneous) Markov renewal and semi-Markov processes have been known for a long time, since the pioneering work of Pyke and Schaufele (Limit theorems for Markov renewal processes, Ann. Math. Statist., 35(4):1746-1764, 1964). Since then, these processes, as well as their time-inhomogeneous generalizations, have found many applications, for example in finance and insurance. However, to the best of the authors' knowledge, no limit theorems have been obtained to date for functionals of inhomogeneous Markov renewal and semi-Markov processes. In this article, we provide strong law of large numbers and central limit theorem results for such processes. In particular, we make an important connection between our results and the theory of ergodicity of inhomogeneous Markov chains. Finally, we provide an application to risk processes used in insurance by considering an inhomogeneous semi-Markov version of the well-known continuous-time Markov chain model widely used in the literature.

20.
Changing the time of simple continuous-time Markov counting processes by independent unit-rate Poisson processes results in Markov counting processes for which we provide closed-form transition rates, obtained via composition of trajectories, and with which we construct novel, simpler, infinitesimally over-dispersed processes.
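A quick way to see the over-dispersion effect described above is by simulation. The Python sketch below time-changes a Poisson counting process by an independent unit-rate Poisson process and estimates the variance-to-mean ratio of the result; taking the base process to be Poisson and the specific rates used here are illustrative assumptions (for a plain Poisson process the ratio is 1).

```python
import random

def dispersion_of_time_changed_poisson(lam=2.0, t=10.0, n_samples=20_000, seed=0):
    """Time-change a Poisson(lam) counting process N by an independent
    unit-rate Poisson process Pi, i.e. M(t) = N(Pi(t)), and estimate the
    variance-to-mean ratio of M(t).  The time-changed process is
    over-dispersed (ratio > 1), unlike the base Poisson process (ratio = 1).
    """
    rng = random.Random(seed)

    def poisson_count(rate, horizon):
        # Number of points of a Poisson process with the given rate in [0, horizon].
        count, clock = 0, rng.expovariate(rate)
        while clock <= horizon:
            count += 1
            clock += rng.expovariate(rate)
        return count

    samples = []
    for _ in range(n_samples):
        inner_time = poisson_count(1.0, t)              # Pi(t): unit-rate time change
        samples.append(poisson_count(lam, inner_time))  # N(Pi(t))
    mean = sum(samples) / n_samples
    var = sum((x - mean) ** 2 for x in samples) / (n_samples - 1)
    return var / mean

if __name__ == "__main__":
    print("variance-to-mean ratio:", round(dispersion_of_time_changed_poisson(), 2))
```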
