Similar documents
20 similar documents found (search time: 515 ms)
1.
This paper first obtains an optimal estimate, given in terms of the Q-curvature and the mean curvature, for the first nonzero eigenvalue of the Paneitz operator on an n-dimensional embedded closed hypersurface in Euclidean space, from which an isoperimetric upper bound follows. In addition, an isoperimetric upper bound is given for the first nonzero eigenvalue of the p-Laplace operator on an n-dimensional embedded closed hypersurface in Euclidean space.

2.
This paper studies a class of eigenvalue problems for the LE operator. Using a Bochner-type formula, we obtain a Lichnerowicz-Obata-type estimate for the first nonzero eigenvalue of this class of problems, thereby extending the results of [3] and [7] to the case of the LE operator.

3.
§1 Introduction. In non-equilibrium statistical physics, statistical irreversibility and the entropy production rate are two very important concepts. In [1], [2] and [3], we discussed the reversibility and entropy production rate of systems that can be described by Markov chains, proved that a stationary Markov chain is reversible if and only if its entropy production rate is zero, and further showed that the entropy production rate is an index measuring the degree of statistical irreversibility of a system in time. However, owing to the limitations of the state space of a Markov chain, these results cannot meet the needs of the many physical problems posed on continuous state spaces. This paper therefore gives a probabilistic definition of the entropy production rate for general stochastic processes, and further…

4.
In [2], Qian Minping gave the reversibility and circulation decomposition theorem for Markov chains: for a Markov chain defined on a countable state space E with a stationary initial distribution, its transition probability P can be decomposed as…

5.
This paper establishes several strong limit theorems for odd-even Markov chain fields on the binary tree, concerning the frequencies of occurrence of states and of ordered pairs of states, including an estimate of the upper and lower bounds of the asymptotic entropy density and an approximation of the Shannon-McMillan theorem. In the proofs, a new analytic method for studying strong limit theorems of Markov chains is extended to Markov chain fields.

6.
A nonnegative variable-weight combination forecasting algorithm based on Markov chain fitting, and its application   Total citations: 3 (self-citations: 0, other citations: 0)
A new nonnegative time-varying-weight combination forecasting formula is derived by the method of Markov chain fitting. The main contributions are: (1) for the combination forecasting problem, initial estimates of the states and state probabilities of the Markov chain are given under a minimum-error criterion; (2) the Markov chain is fitted to the time-varying law of the state probability distribution, and the least-squares solution of the one-step transition probability matrix is derived via a constrained multivariate autoregressive model; (3) a nonnegative time-varying-weight combination forecasting formula is given, together with an application example.

7.
The principal eigenvalue of jump processes   Total citations: 3 (self-citations: 0, other citations: 0)
Chen Mufa (陈木法), 《数学学报》 (Acta Mathematica Sinica), 2000, 43(5): 769-772
This paper gives a variational formula for the lower bound of the principal eigenvalue of general jump processes. For Markov chains, the assumed conditions are usually satisfied, and the resulting estimate is sharp under mild conditions. In this sense, the formula obtained is complete.

8.
Using the coupling method, this paper obtains exponential convergence in the total variation norm for diffusion processes on compact manifolds by estimating moments of the coupling time; two further estimates are given using the nonzero first eigenvalue and its eigenfunction.

9.
This paper studies eigenvalue problems for four classes of bi-drifting Laplace operators. Using a weighted Reilly formula, when the m-weighted Ricci curvature satisfies certain conditions, optimal estimates are obtained for the first nonzero eigenvalues of the four classes of bi-drifting Laplace operators on compact smooth metric measure spaces with boundary, generalizing the corresponding results on eigenvalues of the biharmonic operator.

10.
This paper studies the limit properties of the transition probabilities of a Markov chain from one subset of states to another. Using Doob's martingale convergence theorem, a strong law of large numbers for arbitrary random sequences, a strong law of large numbers for functionals of Markov chains, and a strong ergodic theorem are obtained, generalizing the classical limit properties and strong limit theorems for transition probabilities of Markov chains.

11.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
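For intuition, reversibility of a finite chain can be checked directly via detailed balance, and a reversible chain with the same stationary distribution can always be built by the classical additive reversibilization R = (P + D⁻¹PᵀD)/2 with D = diag(π). The sketch below illustrates only these standard constructions, not the convex-optimization projection studied in the paper; all function names are illustrative.

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-10):
    """Detailed balance: pi_i * P_ij == pi_j * P_ji for all i, j."""
    pi = stationary(P)
    flow = pi[:, None] * P          # flow_ij = pi_i * P_ij
    return np.allclose(flow, flow.T, atol=tol)

def additive_reversibilization(P):
    """R = (P + D^{-1} P^T D) / 2 with D = diag(pi): reversible, same pi."""
    pi = stationary(P)
    return 0.5 * (P + (P.T * pi[None, :]) / pi[:, None])
```

For example, a deterministic 3-cycle is non-reversible (its probability flow circulates), while its reversibilization is a symmetric random walk with the same uniform stationary distribution.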

12.
Markov Chain Monte-Carlo methods produce a random sample of a given distribution by simulating a Markov chain for which the desired distribution is a reversible measure. In order to generate a sample of size n, we propose to run n independent copies of the chain all starting from the same initial state. If n is large enough, the cutoff phenomenon yields a natural stopping rule. Indeed, the access to equilibrium can be detected using empirical estimates for the expectation of a state function. The method is illustrated by the generation of random samples of stable sets on an undirected graph.
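The n-independent-copies idea can be sketched on a toy two-state chain: all copies start in the same state, and the empirical fraction of copies in state 1 tracks the marginal distribution at each time, settling near the stationary value p/(p+q). This is a hedged illustration of the mechanism only (names are invented here), not the paper's cutoff-based stopping rule.

```python
import random

def step(x, p, q, rng):
    """One step of a two-state chain: 0 -> 1 w.p. p, 1 -> 0 w.p. q."""
    if x == 0:
        return 1 if rng.random() < p else 0
    return 0 if rng.random() < q else 1

def parallel_chains(n, steps, p, q, rng):
    """Run n independent copies from the same start (state 0) and return,
    for each time step, the empirical fraction of copies in state 1."""
    states = [0] * n
    history = []
    for _ in range(steps):
        states = [step(x, p, q, rng) for x in states]
        history.append(sum(states) / n)
    return history
```

Watching the history stabilize is exactly the kind of empirical estimate of a state-function expectation that the abstract describes.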

13.
Sampling from an intractable probability distribution is a common and important problem in scientific computing. A popular approach to solve this problem is to construct a Markov chain which converges to the desired probability distribution, and run this Markov chain to obtain an approximate sample. In this paper, we provide two methods to improve the performance of a given discrete reversible Markov chain. These methods require the knowledge of the stationary distribution only up to a normalizing constant. Each of these methods produces a reversible Markov chain which has the same stationary distribution as the original chain, and dominates the original chain in the ordering introduced by Peskun [11]. We illustrate these methods on two Markov chains, one connected to hidden Markov models and one connected to card shuffling. We also prove a result which shows that the Metropolis-Hastings algorithm preserves the Peskun ordering for Markov transition matrices.
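A minimal sketch of the key ingredient, "stationary distribution known only up to a normalizing constant": a Metropolis-Hastings chain on a finite state space driven by unnormalized weights, since the acceptance ratio uses only ratios of target probabilities. This illustrates the general mechanism, not the paper's specific Peskun-dominating constructions; names are illustrative.

```python
import random

def metropolis_hastings_discrete(weights, steps, start=0, rng=None):
    """Metropolis-Hastings on states 0..n-1 with a symmetric uniform proposal.
    `weights` are unnormalized target probabilities; only ratios are used,
    so the normalizing constant is never needed."""
    rng = rng or random.Random()
    n = len(weights)
    x = start
    chain = [x]
    for _ in range(steps):
        y = rng.randrange(n)                              # symmetric proposal
        if rng.random() < min(1.0, weights[y] / weights[x]):
            x = y                                         # accept the move
        chain.append(x)
    return chain
```

The resulting chain is reversible with stationary distribution proportional to `weights`, which is the setting in which Peskun's ordering compares competing transition matrices.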

14.
Data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm. In this paper, an alternative to DA algorithm is proposed. It is shown that the modified Markov chain is always more efficient than DA in the sense that the asymptotic variance in the central limit theorem under the alternative chain is no larger than that under DA. The modification is based on Peskun’s (Biometrika 60:607–612, 1973) result which shows that asymptotic variance of time average estimators based on a finite state space reversible Markov chain does not increase if the Markov chain is altered by increasing all off-diagonal probabilities. In the special case when the state space or the augmentation space of the DA chain is finite, it is shown that Liu’s (Biometrika 83:681–682, 1996) modified sampler can be used to improve upon the DA algorithm. Two illustrative examples, namely the beta-binomial distribution, and a model for analyzing rank data are used to show the gains in efficiency by the proposed algorithms.

15.
In this paper we consider the problem of finding a low dimensional approximate model for a discrete time Markov process. This problem is of particular interest in systems that exhibit so-called metastable behavior, i.e. systems whose behavior is principally concentrated on a finite number of disjoint components of the state space. The approach developed here is based on a proper orthogonal decomposition and, unlike most existing approaches, does not require the Markov chain to be reversible. An example is presented to illustrate the effectiveness of the proposed method.

16.
For a Markov chain, both the detailed balance condition and the cycle Kolmogorov condition are algebraic binomials. This remark suggests to study reversible Markov chains with the tools of Algebraic Statistics, such as toric statistical models. One of the results of this study is an algebraic parameterization of reversible Markov transitions and their invariant probability.

17.
Mixing time quantifies the convergence speed of a Markov chain to the stationary distribution. It is an important quantity related to the performance of MCMC sampling. It is known that the mixing time of a reversible chain can be significantly improved by lifting, resulting in an irreversible chain, while changing the topology of the chain. We supplement this result by showing that if the connectivity graph of a Markov chain is a cycle, then there is an Ω(n²) lower bound for the mixing time. This is the same order of magnitude that is known for reversible chains on the cycle.

18.
This work is concerned with weak convergence of non-Markov random processes modulated by a Markov chain. The motivation of our study stems from a wide variety of applications in actuarial science, communication networks, production planning, manufacturing and financial engineering. Owing to various modelling considerations, the modulating Markov chain often has a large state space. Aiming at reduction of computational complexity, a two-time-scale formulation is used. Under this setup, the Markov chain belongs to the class of nearly completely decomposable chains, where the state space is split into several subspaces. Within each subspace, the transitions of the Markov chain vary rapidly, and among different subspaces, the Markov chain moves relatively infrequently. Aggregating all the states of the Markov chain in each subspace to a single super state leads to a new process. It is shown that under such aggregation schemes, a suitably scaled random sequence converges to a switching diffusion process.

19.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.

20.
The subdominant eigenvalue of the transition probability matrix of a Markov chain is a determining factor in the speed of transition of the chain to a stationary state. However, these eigenvalues can be difficult to estimate in a theoretical sense. In this paper we revisit the problem of dynamically organizing a linear list. Items in the list are selected with certain unknown probabilities and then returned to the list according to one of two schemes: the move-to-front scheme or the transposition scheme. The eigenvalues of the transition probability matrix Q of the former scheme are well-known but those of the latter T are not. Nevertheless the transposition scheme gives rise to a reversible Markov chain. This enables us to employ a generalized Rayleigh-Ritz theorem to show that the subdominant eigenvalue of T is at least as large as the subdominant eigenvalue of Q.
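The two list-update rules themselves are easy to simulate directly. As a hedged sketch (function names are illustrative, and this says nothing about the eigenvalue comparison proved in the paper): move-to-front promotes the requested item to the head of the list, while transposition swaps it one position forward.

```python
import random

def move_to_front(items, request):
    """Move-to-front: the requested item is moved to the head of the list."""
    items.remove(request)
    items.insert(0, request)

def transpose(items, request):
    """Transposition: the requested item swaps with its predecessor (if any)."""
    i = items.index(request)
    if i > 0:
        items[i - 1], items[i] = items[i], items[i - 1]

def average_search_cost(update, probs, steps, rng):
    """Draw requests with probabilities `probs` and return the average
    1-based position at which each request is found in the list."""
    states = list(range(len(probs)))
    items = list(states)
    total = 0
    for _ in range(steps):
        r = rng.choices(states, weights=probs)[0]
        total += items.index(r) + 1
        update(items, r)
    return total / steps
```

Simulations like this make it plausible that both schemes adapt the list to skewed request probabilities; the paper's contribution is the spectral comparison of their convergence speeds.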
