Similar Documents
20 similar documents found.
1.
In this paper, we consider the problem of optimally controlling a diffusion process on a closed bounded region of R^n with reflection at the boundary. Employing methods similar to Fleming (Ref. 1), we present a constructive proof that there exists an optimal Markov control that is measurable or lower semicontinuous. We prove further that the expected cost function corresponding to the optimal control is the unique solution of the quasilinear parabolic differential equation of dynamic programming with Neumann boundary conditions and that there exists a diffusion process (in the sense of Stroock and Varadhan) corresponding to the optimal control. This work was partially supported by the National Science Foundation, Grant No. GK-18339, by the Office of Naval Research, Grant No. NR-042-264, and by the National Research Council of Canada, Grant No. A3609. The author would like to thank S. R. Pliska, J. Pisa, and N. Trudinger for helpful suggestions. He is especially grateful to Professor A. F. Veinott, Jr., for help and advice in the preparation of the doctoral dissertation, on which part of this paper is based. Finally, he wishes to thank one of the referees for the careful reading and constructive comments on an earlier version of this paper.
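For orientation, the dynamic programming equation referred to above typically takes the following generic form; this is a sketch with assumed notation (value function V, bounded domain G, controlled generator L^u, running cost c, terminal cost g), not the paper's exact statement.

```latex
\begin{aligned}
&\partial_t V(t,x) + \min_{u \in U}\Bigl[\, L^{u} V(t,x) + c(x,u) \Bigr] = 0,
   && (t,x) \in [0,T) \times G,\\
&\frac{\partial V}{\partial n}(t,x) = 0,
   && (t,x) \in [0,T) \times \partial G \quad \text{(Neumann condition from the reflection)},\\
&V(T,x) = g(x), && x \in G .
\end{aligned}
```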

2.
Summary. One considers a simple exclusion particle jump process on ℤ, where the underlying one-particle motion is a degenerate random walk that moves only to the right. One starts with the configuration in which the left half-line is completely occupied and the right one free. It is shown that the number of particles at time t between site [ut] and [vt], divided by t, converges a.s. to ∫_u^v f(α) dα, where f might be called the density profile. It is explicitly determined and shown to be an affine function. Secondly, we prove that the distribution of the process looked at by an observer travelling at constant speed u converges weakly to the Bernoulli measure with density f(u) as time tends to infinity. This work has been supported by the Deutsche Forschungsgemeinschaft.
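A minimal simulation sketch of this setting, assuming the underlying lattice is ℤ (truncated to a finite window), unit jump rate, and a small-time-step approximation of the continuous-time dynamics; all parameter values and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

L, T, dt = 2000, 300.0, 0.05          # lattice window, time horizon, time step (illustrative)
occ = np.zeros(L, dtype=bool)
occ[: L // 2] = True                  # step initial condition: left half-line occupied

for _ in range(int(T / dt)):
    # Particles with an empty right neighbour; such sites are never adjacent,
    # so a simultaneous (vectorized) update cannot create collisions.
    movers = np.where(occ[:-1] & ~occ[1:])[0]
    jumps = movers[rng.random(movers.size) < dt]   # rate-1 right jumps in this time slice
    occ[jumps] = False
    occ[jumps + 1] = True

# Empirical density on the hydrodynamic scale x = (site - centre) / T.
xs = (np.arange(L) - L // 2) / T
for a, b in [(-0.8, -0.4), (-0.2, 0.2), (0.4, 0.8)]:
    sel = (xs > a) & (xs < b)
    print(f"density on ({a}, {b}): {occ[sel].mean():.3f}")   # should decrease roughly linearly across the windows
```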

3.
We consider the partially observed Markov decision process with observations delayed by k time periods. We show that at stage t, a sufficient statistic is the probability distribution of the underlying system state at stage t - k and all actions taken from stage t - k through stage t - 1. We show that improved observation quality and/or reduced data delay will not decrease the optimal expected total discounted reward, and we explore the optimality conditions for three important special cases. We present a measure of the marginal value of receiving state observations delayed by (k - 1) stages rather than delayed by k stages. We show that in the limit as k → ∞ the problem is equivalent to the completely unobserved case. We present numerical examples which illustrate the value of receiving state information delayed by k stages.
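The bookkeeping for this sufficient statistic can be sketched as follows; this is a hedged illustration, not the paper's algorithm, and it assumes an action-independent observation matrix O and that the observation received at stage t+1 reports on the state at stage t+1-k. All names (P, O, update_statistic) are invented for the example.

```python
import numpy as np
from collections import deque

def update_statistic(belief, action_window, new_action, obs, P, O):
    """Advance the sufficient statistic (belief about the state at stage t-k,
    plus the actions taken at stages t-k, ..., t-1) by one stage.

    belief        : distribution over states at stage t - k, shape (S,)
    action_window : deque of the k actions taken at stages t-k .. t-1
    new_action    : action chosen at stage t
    obs           : observation received at stage t + 1, assumed to report
                    on the state at stage t + 1 - k
    P             : dict of transition matrices, P[a] has shape (S, S)
    O             : observation likelihood matrix, shape (S, num_observations)
    """
    oldest = action_window.popleft()          # the action that was applied at stage t - k
    predicted = belief @ P[oldest]            # predicted distribution of the state at stage t+1-k
    posterior = predicted * O[:, obs]         # Bayes correction with the (delayed) observation
    posterior /= posterior.sum()
    action_window.append(new_action)          # window now covers stages t+1-k .. t
    return posterior, action_window

# Tiny usage example with 2 states, 2 actions, 2 observation symbols, delay k = 3.
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.6, 0.4], [0.5, 0.5]])}
O = np.array([[0.8, 0.2], [0.3, 0.7]])
belief, window = np.array([0.5, 0.5]), deque([0, 1, 0])
belief, window = update_statistic(belief, window, new_action=1, obs=0, P=P, O=O)
print(belief, list(window))
```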

4.
5.
In this paper, we study the optimal ergodic control problem with minimum variance for a general class of controlled Markov diffusion processes. To this end, we follow a lexicographical approach. Namely, we first identify the class of average-optimal control policies, and then, within this class, we search for policies that minimize the limiting average variance. To do this, a key intermediate step is to show that the limiting average variance is a constant independent of the initial state. Our proof of this latter fact gives a result stronger than the central limit theorem for diffusions. An application to manufacturing systems illustrates our results.
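In the notation commonly used for this kind of problem (the symbols x, π, c, J, and V below are assumptions made for this sketch and are not taken from the paper), the lexicographical criterion can be outlined as: first minimize the long-run average cost, then minimize the limiting average variance over the average-optimal policies,

```latex
J(x,\pi) = \limsup_{T\to\infty}\frac{1}{T}\,
           \mathbb{E}^{\pi}_{x}\!\int_{0}^{T} c\bigl(X_t,\pi\bigr)\,dt ,
\qquad
V(x,\pi) = \limsup_{T\to\infty}\frac{1}{T}\,
           \mathbb{E}^{\pi}_{x}\Bigl(\int_{0}^{T} c\bigl(X_t,\pi\bigr)\,dt - T\,J(x,\pi)\Bigr)^{2},
```

with V then minimized over the set of policies π attaining the minimum of J.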

6.
7.
8.
We consider stochastic flows with interaction in a finite phase space. Flows with variable generators that generate evolutionary measure-valued processes are described. The influence of the interaction of particles on the entropy of the flow is analyzed. Translated from Ukrains’kyi Matematychnyi Zhurnal, Vol. 60, No. 11, pp. 1572–1577, November, 2008.

9.
10.
We study a class of stationary Markov processes with marginal distributions identifiable by moments, such that every conditional moment of degree, say, m is a polynomial of degree at most m. We show that, under an additional natural technical assumption, there then exists a family of orthogonal polynomial martingales. More precisely, we show that such a family of processes is completely characterized by a sequence {(α_n, p_n)}_{n ≥ 0}, where the α_n are positive reals and the p_n are monic orthogonal polynomials. Bakry and Mazet (Séminaire de Probabilités, vol. 37, 2003) showed that, under some additional mild technical conditions, each such sequence generates a stationary Markov process with polynomial regression.

We single out two important subclasses of the considered class of Markov processes. The first is the class of harnesses, which we characterize completely. The second consists of stationary processes with the independent regression property. Processes with the independent regression property generalize, so to speak, ordinary Ornstein–Uhlenbeck (OU) processes, and can also be understood as time-scale transformations of Lévy processes. We list several properties of these processes. In particular, we show that if these processes are time-scale transforms of Lévy processes, then they are not stationary unless we deal with the classical OU process. Conversely, time-scale transformations of stationary processes with the independent regression property are not Lévy unless we deal with the classical OU process.
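Written out in symbols (a restatement of the polynomial-regression property described above, with generic notation assumed for this sketch: the process (X_t), times s ≠ t, and regression polynomial Q_m):

```latex
\mathbb{E}\bigl[\,X_t^{\,m}\;\big|\;X_s = x\,\bigr] \;=\; Q_m(x; s, t),
\qquad \deg_x Q_m \le m, \qquad m = 0, 1, 2, \ldots
```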

11.
We compute the distributions of the sizes of the jumps of an increasing Markov process on N_0 = {0, 1, …}, and we give a necessary and sufficient condition for the process to have only jumps of size one.

12.
13.
We obtain upper and lower bounds on the exit times from balls for a jump-type symmetric Markov process. The two bounds are proved separately: the upper bounds are obtained by using the Lévy system corresponding to the process, while the precise expression of the L^2-generator of the Dirichlet form associated with the process is used to obtain the lower bounds.

14.
Let Y be an Ornstein–Uhlenbeck diffusion governed by a stationary and ergodic Markov jump process X, i.e. dY_t = a(X_t) Y_t dt + σ(X_t) dW_t, Y_0 = y_0. Ergodicity conditions for Y have been obtained. Here we investigate the tail property of the stationary distribution of this model. A characterization of the only two possible cases is established: light tail or polynomial tail. Our method is based on discretizations and renewal theory. To cite this article: B. de Saporta, J.-F. Yao, C. R. Acad. Sci. Paris, Ser. I 339 (2004).
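A minimal Euler–Maruyama sketch of this model, under assumed illustrative values: a two-state modulating chain X with the given switching rates and regime-dependent coefficients a(·) and σ(·); nothing here is taken from the cited note beyond the equation dY_t = a(X_t) Y_t dt + σ(X_t) dW_t.

```python
import numpy as np

rng = np.random.default_rng(1)

# Regime-dependent coefficients and switching rates (illustrative values only).
a     = {0: -1.0, 1: 0.5}     # drift coefficients a(x)
sigma = {0: 0.5,  1: 1.0}     # diffusion coefficients sigma(x)
rate  = {0: 1.0,  1: 2.0}     # rate of leaving each regime of the jump process X

T, dt = 50.0, 1e-3
n = int(T / dt)
x, y = 0, 1.0                 # X_0 and Y_0 = y_0
path = np.empty(n)

for i in range(n):
    # Euler-Maruyama step for dY_t = a(X_t) Y_t dt + sigma(X_t) dW_t, with X_t frozen over the step.
    y += a[x] * y * dt + sigma[x] * np.sqrt(dt) * rng.standard_normal()
    # The jump process X leaves its current state with probability rate * dt on this small step.
    if rng.random() < rate[x] * dt:
        x = 1 - x
    path[i] = y

# Crude look at the tail of the empirical distribution of |Y_t| along the path.
print("empirical 99% quantile of |Y_t|:", np.quantile(np.abs(path), 0.99))
```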

15.
We consider the optimization of the variance of the sum of costs, as well as that of the average expected cost, in Markov decision processes with unbounded cost. In the case of general state and action spaces, we find a stationary policy which makes the average variance as small as possible within the class of policies that are ε-optimal for the average expected cost.

16.
Herein, we consider direct Markov chain approximations to the Duncan–Mortensen–Zakai equations for nonlinear filtering problems on regular, bounded domains. For clarity of presentation, we restrict our attention to reflecting diffusion signals with symmetrizable generators. Our Markov chains are constructed by employing a wide-band observation noise approximation, dividing the signal state space into cells, and utilizing an empirical measure process estimation. The upshot of our approximation is an efficient, effective algorithm for implementing such filtering problems. We prove that our approximations converge to the desired conditional distribution of the signal given the observation. Moreover, we use simulations to compare the computational efficiency of this new method to the previously developed branching particle filter and interacting particle filter methods. The Markov chain method is demonstrated to outperform both particle filter methods on our simulated test problem, which is motivated by the fish farming industry.
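To convey the flavour of a cell-based approximation, here is a minimal grid-filter sketch: the one-dimensional reflected signal on [0, 1], its Gaussian cell-to-cell kernel, and the additive-noise observation model are illustrative stand-ins, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Divide the signal state space [0, 1] into cells and filter on the cell probabilities.
cells = np.linspace(0.0, 1.0, 101)                      # cell centres
sig_move, sig_obs = 0.05, 0.10                          # signal and observation noise scales

# Cell-to-cell transition matrix of a Gaussian-step signal, row-normalized.
K = np.exp(-0.5 * ((cells[None, :] - cells[:, None]) / sig_move) ** 2)
P = K / K.sum(axis=1, keepdims=True)

def filter_step(p, y):
    """One predict/correct step of the approximate filter on the cell grid."""
    pred = p @ P                                        # propagate cell probabilities
    lik = np.exp(-0.5 * ((y - cells) / sig_obs) ** 2)   # observation likelihood per cell
    post = pred * lik
    return post / post.sum()

# Simulate a reflected signal path with noisy observations and run the cell filter.
p = np.full(cells.size, 1.0 / cells.size)               # uniform prior over cells
x = 0.5
for _ in range(200):
    x = abs(x + sig_move * rng.standard_normal())       # reflect at 0 ...
    x = 1.0 - abs(1.0 - x)                              # ... and at 1
    y = x + sig_obs * rng.standard_normal()
    p = filter_step(p, y)

print("posterior mean:", float((p * cells).sum()), " true state:", x)
```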

17.
18.
19.
We begin by studying the eigenvectors associated with irreducible finite birth and death processes, showing that the i-th nontrivial eigenvector φ_i admits a succession of i decreasing or increasing stages, each of them crossing zero. Embedding the finite state space naturally into a continuous segment, one can unequivocally define the zeros of φ_i, which are interlaced with those of φ_{i+1}. Results of this kind are deduced from a general investigation of minimax multi-set Dirichlet eigenproblems, which leads to a direct construction of the eigenvectors associated with birth and death processes. This approach can be generically extended to eigenvectors of Markov processes living on trees. This enables us to reinterpret the eigenvalues and the eigenvectors in terms of the previous Dirichlet eigenproblems, and a more general conjecture is presented about related higher-order Cheeger inequalities. Finally, we carefully study the geometric structure of the eigenspace associated with the spectral gap on trees.
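For context (generic notation assumed for this sketch, not the paper's: birth rates b_j and death rates d_j on the state space {0, 1, …, N}, with the conventions d_0 = b_N = 0), the eigenvectors in question solve the tridiagonal eigenproblem

```latex
(L\varphi_i)(j) \;=\; b_j\bigl(\varphi_i(j+1)-\varphi_i(j)\bigr)
                    + d_j\bigl(\varphi_i(j-1)-\varphi_i(j)\bigr)
               \;=\; -\lambda_i\,\varphi_i(j), \qquad j = 0, 1, \ldots, N,
```

whose eigenvalues are simple; the interlacing statement above concerns the sign changes (zeros) of the nontrivial eigenvectors φ_1, φ_2, … ordered by eigenvalue.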

20.
Let X be a symmetric right process, and let Z be a multiplicative functional of X that is the product of a Girsanov transform, a Girsanov transform under time-reversal, and a continuous Feynman–Kac transform. In this paper we derive necessary and sufficient conditions for the strong L^2-continuity of the semigroup given by T_t f(x) = E_x[Z_t f(X_t)], expressed in terms of the quadratic form obtained by perturbing the Dirichlet form of X in the appropriate way. The transformations induced by such Z include all those treated previously in the literature, such as Girsanov transforms, continuous and discontinuous Feynman–Kac transforms, and generalized Feynman–Kac transforms.
