Similar Documents
20 similar documents found.
1.
A stochastic model of migration, occupational, and vertical mobility, based on the theory of semi-Markov processes, is presented, and important features of these processes are derived. The model is a generalization of the Markov process in which the probability of leaving a state can depend in an arbitrary way on the length of time the state has been occupied (duration-of-stay) and on the next state entered (pushes and pulls). For mobility processes it thus captures McGinnis' 'axiom of cumulative inertia.' Several distributions with cumulative inertia are presented, and the relationship between the semi-Markov model and the Mover-Stayer model is explored. A method of including age effects is described. The model is shown to have applications, in addition to mobility, to many other social processes that exhibit duration-of-stay effects.
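As a rough illustration of the duration-of-stay mechanism, a minimal simulation sketch is given below; the state names, transition weights, and hyperbolic hazard are illustrative assumptions, not taken from the paper.

```python
import random

# Hedged sketch of a semi-Markov mobility process with "cumulative
# inertia": the hazard of leaving a state decreases with the time the
# state has been occupied. States, weights, and the hazard form are
# illustrative assumptions, not taken from the paper.

TRANSITIONS = {  # next-state weights, used only when a move occurs
    "manual":       {"clerical": 0.7, "professional": 0.3},
    "clerical":     {"manual": 0.4, "professional": 0.6},
    "professional": {"manual": 0.2, "clerical": 0.8},
}

def leave_hazard(duration, base=0.5, decay=0.3):
    """Per-period probability of leaving; decreasing in duration-of-stay."""
    return base / (1.0 + decay * duration)

def simulate(start, periods, seed=0):
    rng = random.Random(seed)
    state, duration, path = start, 0, [start]
    for _ in range(periods):
        if rng.random() < leave_hazard(duration):
            choices = list(TRANSITIONS[state])
            weights = [TRANSITIONS[state][s] for s in choices]
            state, duration = rng.choices(choices, weights=weights)[0], 0
        else:
            duration += 1
        path.append(state)
    return path

print(simulate("manual", 20))
```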

2.
This paper is a note on using discounted finite-horizon Markov decision processes, solved via contraction operators, to approximate the undiscounted case. The state set and the action set involved are both countable.
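A minimal sketch of the contraction-operator (value-iteration) solution on a toy finite model standing in for the countable state and action sets; all numbers are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: value iteration as the standard contraction-operator
# solution of a discounted MDP. P[a] is the transition matrix and R[a]
# the reward vector under action a; the data are made up for
# illustration and the tiny finite sets stand in for countable ones.

P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.6, 0.4]])}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.5, 0.8])}
beta = 0.9  # discount factor; the Bellman operator is a beta-contraction

v = np.zeros(2)
for _ in range(1000):
    v_new = np.max([R[a] + beta * P[a] @ v for a in P], axis=0)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
print(v)
```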

3.
For at least twenty-five years, the concept of the clique has had a prominent place in sociometric and other kinds of sociological research. Recently, with the advent of large, fast computers and with the growth of interest in graph-theoretic social network studies, research on the definition and investigation of the graph-theoretic properties of clique-like structures has grown. In the present paper, several of these formulations are examined and their mathematical properties analyzed. A family of new clique-like structures is proposed that captures an aspect of cliques seldom treated in the existing literature. The new structures, when used to complement existing concepts, provide a new means of tapping several important properties of social networks.
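The paper's new structures are not reproduced here; as an assumed illustration of the kind of clique-like analysis involved, the sketch below contrasts exact maximal cliques with one standard relaxation (the k-core) using networkx on a toy sociogram.

```python
import networkx as nx

# Hedged illustration only: the paper's new clique-like structures are
# not specified here. This contrasts exact maximal cliques with one
# common relaxation (the k-core) on a small toy network.

G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6), (3, 5)])

print(list(nx.find_cliques(G)))   # maximal cliques of the graph
print(sorted(nx.k_core(G, k=2)))  # 2-core: a looser, clique-like subgroup
```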

4.
We consider a finite-time-horizon optimal stopping problem for a regime-switching Lévy process. We prove that the value function of the optimal stopping problem can be characterized as the unique viscosity solution of the associated Hamilton–Jacobi–Bellman variational inequalities.
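One standard way to write such a variational inequality, in generic notation (an assumed form, not the paper's exact formulation):

```latex
% v_i = value in regime i, L_i = Levy generator in regime i,
% q_{ij} = regime-switching rates, g = payoff at stopping.
\min\Bigl\{-\partial_t v_i(t,x) - \mathcal{L}_i v_i(t,x)
           - \sum_{j \neq i} q_{ij}\bigl(v_j(t,x) - v_i(t,x)\bigr),\;
           v_i(t,x) - g(x)\Bigr\} = 0,
\qquad v_i(T,x) = g(x).
```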

5.
A class of Hilbert space-valued Markov processes which can be expressed as the mild solution of a linear abstract evolution equation is studied. Sufficient conditions for the generator of the Markov process to be well-defined are given, and Kolmogorov's equation and an equation for the characteristic function of the process are derived. The theory is illustrated by examples of parabolic, hyperbolic and delay stochastic differential equations.
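For orientation, the mild-solution form such processes are assumed to take (generic notation: A generates a C_0-semigroup S(t) on the Hilbert space, W is a Wiener process):

```latex
% Formal linear evolution equation: dX = AX\,dt + B\,dW.
% Its mild solution, assumed to be of the type studied here:
X(t) = S(t)\,X_0 + \int_0^t S(t-s)\,B\,\mathrm{d}W(s).
```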

6.
We establish integral tests and laws of the iterated logarithm for the upper envelope of the future infimum of positive self-similar Markov processes and for increasing self-similar Markov processes at 0 and +∞. Our proofs are based on the Lamperti representation and time reversal arguments due to Chaumont, L. and Pardo, J.C. (Prépublication (L'université de Paris 6), 2005). These results extend laws of the iterated logarithm for the future infimum of Bessel processes due to Khoshnevisan, D., Lewis, T.M. and Li, W.V. (On the future infima of some transient processes, Probability Theory and Related Fields, 99, 337–360, 1994).
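The Lamperti representation invoked above, in its standard form (generic notation): a positive α-self-similar Markov process started at x > 0 is a time-changed exponential of a Lévy process ξ.

```latex
X_t = x\,\exp\bigl(\xi_{\tau(t x^{-\alpha})}\bigr),
\qquad
\tau(t) = \inf\Bigl\{s \ge 0 : \int_0^s e^{\alpha \xi_u}\,\mathrm{d}u > t\Bigr\}.
```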

7.
We consider finite-horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that computing a policy that maximizes the mean reward under a variance constraint is NP-hard in some cases and strongly NP-hard in others. Finally, we offer pseudopolynomial exact and approximation algorithms.

8.
In this paper, we consider nonstationary Markov decision processes (MDPs, for short) with an average variance criterion on a countable state space, with finite action spaces and bounded one-step rewards. From the optimality equations provided in this paper, we translate the average variance criterion into a new average expected cost criterion. We then prove that there exists a Markov policy that is optimal under the original average expected reward criterion and minimizes the average variance within the class of optimal policies for that criterion.

9.
10.
In this paper, we consider a mean–variance optimization problem for Markov decision processes (MDPs) over the set of (deterministic stationary) policies. Differently from the usual formulation in MDPs, we aim to obtain the mean–variance optimal policy that minimizes the variance over the set of all policies with a given expected reward. For continuous-time MDPs with the discounted criterion and finite state and action spaces, we prove that the mean–variance optimization problem can be transformed to an equivalent discounted optimization problem using the conditional expectation and Markov properties. We then show that a mean–variance optimal policy and the efficient frontier can be obtained by policy iteration methods with a finite number of iterations. We also address related issues such as a mutual fund theorem and illustrate our results with an example.
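As a hedged sketch of the computational building block, plain discrete-time discounted policy iteration is shown below; the paper's continuous-time setting and mean–variance transformation are not reproduced, and all numbers are illustrative.

```python
import numpy as np

# Hedged sketch of discounted policy iteration, the kind of procedure
# the paper iterates to reach the mean-variance optimum; data are
# illustrative and the mean-variance transformation is not shown.

P = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),
     1: np.array([[0.4, 0.6], [0.9, 0.1]])}
R = {0: np.array([2.0, 0.0]), 1: np.array([1.0, 3.0])}
beta, n = 0.95, 2

policy = np.zeros(n, dtype=int)
while True:
    # Policy evaluation: solve (I - beta * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s]][s] for s in range(n)])
    r_pi = np.array([R[policy[s]][s] for s in range(n)])
    v = np.linalg.solve(np.eye(n) - beta * P_pi, r_pi)
    # Policy improvement: act greedily with respect to v.
    q = np.array([R[a] + beta * P[a] @ v for a in sorted(P)])
    new_policy = np.argmax(q, axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print(policy, v)  # converges in finitely many iterations
```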

11.
This paper studies three ways to construct a nonhomogeneous jump Markov process: (i) via a compensator of the random measure of a multivariate point process, (ii) as a minimal solution of the backward Kolmogorov equation, and (iii) as a minimal solution of the forward Kolmogorov equation. The main conclusion of this paper is that, for a given measurable transition intensity, commonly called a Q-function, all these constructions define the same transition function. If this transition function is regular, that is, the probability of accumulation of jumps is zero, then this transition function is the unique solution of the backward and forward Kolmogorov equations. For continuous Q-functions, Kolmogorov equations were studied in Feller's seminal paper. In particular, this paper extends Feller's results for continuous Q-functions to measurable Q-functions and provides additional results.
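For orientation, the two equations in a common form for a transition intensity q(s, x, dy) with total jump rate q(s, x) (generic notation; sign and domain conventions vary across the literature):

```latex
% Backward equation (in the initial time s) and forward equation
% (in the terminal time t) for the transition function P(s,x;t,B):
-\frac{\partial}{\partial s} P(s,x;t,B)
  = \int_E q(s,x,\mathrm{d}y)\,P(s,y;t,B) - q(s,x)\,P(s,x;t,B),
\qquad
\frac{\partial}{\partial t} P(s,x;t,B)
  = \int_E P(s,x;t,\mathrm{d}y)\,q(t,y,B) - \int_B P(s,x;t,\mathrm{d}y)\,q(t,y).
```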

12.
A general problem in applying Markov decision processes to real-world problems is the curse of dimensionality, since the size of the state space grows to prohibitive levels when information on all relevant traits of the system being modeled is included. In herd management, we face a hierarchy of decisions made at different levels with different time horizons, and the decisions made at different levels are mutually dependent. Furthermore, decisions have to be made without certainty about the future state of the system. These aspects contribute even further to the dimensionality problem. A new notion of a multilevel hierarchic Markov process, specially designed to solve dynamic decision problems involving decisions with varying time horizons, is presented. The method contributes significantly to circumventing the curse of dimensionality, and it provides a framework for general herd management support instead of very specialized models concerned only with a single decision such as, for instance, replacement. The application perspectives of the technique are illustrated by potential examples relating to the management of a sow herd and a dairy herd.

13.
We introduce a stochastic dynamics related to the measures that arise in harmonic analysis on the infinite-dimensional unitary group. Our dynamics is obtained as a limit of a sequence of natural Markov chains on the Gelfand–Tsetlin graph. We compute the finite-dimensional distributions of the limit Markov process, the generator and eigenfunctions of the semigroup related to this process. The limit process can be identified with the Doob h-transform of a family of independent diffusions. The space-time correlation functions of the limit process have a determinantal form. Bibliography: 21 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 360, 2008, pp. 91–123.

14.
In this article, we provide predictable and chaotic representations for Itô–Markov additive processes X. Such a process is governed by a finite-state continuous-time Markov chain J which allows one to modify the parameters of the Itô-jump process (in a so-called regime-switching manner). In addition, a transition of J triggers a jump of X whose distribution depends on the state of J just prior to the transition. This family of processes includes Markov-modulated Itô–Lévy processes and Markov additive processes. The derived chaotic representation of a square-integrable random variable is given as a sum of stochastic integrals with respect to some explicitly constructed orthogonal martingales. We identify the predictable representation of a square-integrable martingale as a sum of stochastic integrals of predictable processes with respect to Brownian motion and power-jump martingales related to all the jumps appearing in the model. This result generalizes the seminal result of Jacod–Yor and is of importance in financial mathematics. The derived representation then allows one to enlarge the incomplete market by a series of power-jump assets and to price all market derivatives.
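The shape of such a predictable representation, in generic notation (an assumption modeled on Nualart–Schoutens-type results, mirroring the abstract's description):

```latex
% Every square-integrable martingale M decomposes into integrals
% against Brownian motion W and an orthogonalized family of
% power-jump martingales H^{(i)}, with predictable integrands:
M_t = M_0 + \int_0^t \varphi_s\,\mathrm{d}W_s
      + \sum_{i \ge 1} \int_0^t \psi^{(i)}_s\,\mathrm{d}H^{(i)}_s .
```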

15.
We consider a modified Markov branching process incorporating both state-independent immigration and instantaneous resurrection. The existence criterion for the process is considered first. We prove that if the sum of the resurrection rates is finite, then no such process exists. An existence criterion is then established when the sum of the resurrection rates is infinite. Some equivalent criteria, possessing the advantage of being easily checked, are obtained for the latter case. The uniqueness criterion for such a process is also investigated. We prove that although there exist infinitely many such processes, there always exists a unique honest process for a given q-matrix. This unique honest process is then constructed. The ergodicity of this honest process is analysed in detail. We prove that this honest process is always ergodic, and an explicit expression for the equilibrium distribution is established.

16.
This article introduces several rotation and additive invariant ultrametrics on the finite adèle ring A_f of the rational numbers ℚ. Symmetry, regularity and uniqueness properties of these ultrametrics are provided. With these non-Archimedean metrics at hand, it is possible to define a wide class of rotation and additive invariant Markov processes on A_f.

17.
The bilinear Chapman–Kolmogorov equation determines the dynamical behavior of Markov processes. The task of solving it directly (i.e., without linearizations) was posed by Bernstein in 1932 and partially solved by Sarmanov in 1961 (with solutions represented by bilinear series). In 2007–2010, the author found several special solutions (represented both by Sarmanov-type series and by integrals) under the assumption that the state space of the Markov process is one-dimensional. In the present paper, three special solutions have been found (in integral form) for a Markov process with a multidimensional state space. The results are illustrated with five examples, including one showing that the original equation has solutions without a probabilistic interpretation.
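The equation in question, in density form (for times s < t < u):

```latex
% Chapman-Kolmogorov equation for transition densities p of a Markov
% process; it is bilinear in p, which is what makes direct solution hard:
p(s,x;u,y) = \int p(s,x;t,z)\,p(t,z;u,y)\,\mathrm{d}z .
```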

18.
We extend the central limit theorem for additive functionals of a stationary, ergodic Markov chain with normal transition operator due to Gordin and Lifšic, 1981 [A remark about a Markov process with normal transition operator, in: Third Vilnius Conference on Probability and Statistics 1, pp. 147–148] to continuous-time Markov processes with normal generators. As examples, we discuss random walks on compact commutative hypergroups as well as certain random walks on non-commutative, compact groups.
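The form of central limit theorem being extended, in generic notation (an assumed shape, for a suitable centered function f):

```latex
% CLT for additive functionals of a stationary ergodic Markov
% process (X_s) with normal generator:
\frac{1}{\sqrt{t}} \int_0^t f(X_s)\,\mathrm{d}s
\;\xrightarrow{\;d\;}\; \mathcal{N}\bigl(0, \sigma^2(f)\bigr)
\qquad \text{as } t \to \infty .
```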

19.
20.
Markov Coupling and Ergodicity of Markov Processes
徐侃, 张绍义. 《数学杂志》 (Journal of Mathematics), 2001, 21(3): 315–318.
Using the coupling method, this paper determines the ergodicity of stochastic processes directly via the infinitesimal generator, obtaining ergodicity theorems that are convenient to apply.
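The basic coupling inequality behind such arguments (standard, not specific to this paper): for a coupling (X_t, Y_t) of two copies of the process started at x and y, with coupling time T,

```latex
% Total-variation distance between transition probabilities is
% controlled by the tail of the coupling time:
\|P_t(x,\cdot) - P_t(y,\cdot)\|_{\mathrm{TV}} \;\le\; 2\,\mathbb{P}(T > t).
```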
