Similar Articles
1.
We consider the Hamiltonian cycle problem embedded in singularly perturbed (controlled) Markov chains. We also consider a functional on the space of stationary policies of the process that consists of the (1,1)‐entry of the fundamental matrices of the Markov chains induced by these policies. We focus on the subset of these policies that induce doubly stochastic probability transition matrices, which we refer to as the “doubly stochastic policies.” We show that when the perturbation parameter, ε, is sufficiently small, the minimum of this functional over the space of doubly stochastic policies is attained at a Hamiltonian cycle, provided that the graph is Hamiltonian. We also show that when the graph is non‐Hamiltonian, the above minimum is strictly greater than the minimum attained in the Hamiltonian case. We call the size of this difference the “Hamiltonicity Gap” and derive a conservative lower bound for this gap. Our results imply that the Hamiltonian cycle problem is equivalent to the problem of minimizing the variance of the first hitting time of the home node over doubly stochastic policies. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2009
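The functional in question is easy to probe numerically. The following is a minimal sketch (not the paper's construction): on n = 4 nodes it compares the (1,1)-entry of the fundamental matrix G = (I - P + Π)^{-1}, where Π is the uniform stationary matrix valid for any doubly stochastic chain, under a Hamiltonian-cycle policy and under another doubly stochastic policy; the node count and the alternative policy are illustrative choices.

```python
# A minimal numerical sketch (not the paper's construction): compare the
# (1,1)-entry of the fundamental matrix G = (I - P + Pi)^{-1} under a
# Hamiltonian-cycle policy and under another doubly stochastic policy on
# n = 4 nodes. Pi is the uniform stationary matrix, which applies here
# because every doubly stochastic chain has the uniform stationary law.
import numpy as np

n = 4
Pi = np.full((n, n), 1.0 / n)

# Hamiltonian cycle 0 -> 1 -> 2 -> 3 -> 0 as a permutation matrix.
P_ham = np.roll(np.eye(n), 1, axis=1)

# Another doubly stochastic policy: a periodic symmetric walk on the nodes.
P_alt = np.array([[0.0, 0.5, 0.0, 0.5],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.5, 0.0, 0.5, 0.0]])

def g11(P):
    """(1,1)-entry of the fundamental matrix of the chain induced by P."""
    return np.linalg.inv(np.eye(n) - P + Pi)[0, 0]

print(g11(P_ham), g11(P_alt))  # 0.625 vs 0.875: the cycle attains the minimum
```

For the cycle the entry equals (n+1)/(2n), here 5/8 = 0.625, against 0.875 for the alternative policy, consistent with the minimum being attained at a Hamiltonian cycle.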

2.
In this paper, we present some algebraic properties of a particular class of probability transition matrices, namely, Hamiltonian transition matrices. Each matrix \(P\) in this class corresponds to a Hamiltonian cycle in a given graph \(G\) on \(n\) nodes and to an irreducible, periodic Markov chain. We show that a number of important matrices traditionally associated with Markov chains, namely the stationary, fundamental, deviation, and hitting time matrices, all have elegant expansions in the first \(n-1\) powers of \(P\), whose coefficients can be explicitly derived. We also consider the resolvent-like matrices associated with any given Hamiltonian cycle and its reverse cycle, and prove an identity about the product of these matrices. As an illustration of these analytical results, we exploit them to develop a new heuristic algorithm for detecting non-Hamiltonicity of a given graph.
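Such expansions are straightforward to verify numerically. Below is a hedged sketch: with \(P\) the permutation matrix of the standard \(n\)-cycle (so \(P^n = I\)), the stationary matrix is \((1/n)\sum_{k=0}^{n-1} P^k\), and the fundamental matrix \(G = (I - P + \Pi)^{-1}\) expands as \(\sum_{k=0}^{n-1} \frac{n+1-2k}{2n} P^k\); the coefficient formula is a directly checkable expansion for the standard cycle, stated here as an assumption verified by the assertions rather than quoted from the paper.

```python
# Numerical check of two expansions in powers of a Hamiltonian transition
# matrix P with P^n = I. The fundamental-matrix coefficients used here,
# (n + 1 - 2k) / (2n), are an assumption verified by the assertions below,
# not a formula quoted from the paper.
import numpy as np

n = 5
P = np.roll(np.eye(n), 1, axis=1)      # standard cycle 0 -> 1 -> ... -> 0
powers = [np.linalg.matrix_power(P, k) for k in range(n)]

Pi = sum(powers) / n                   # stationary matrix: the uniform J/n
assert np.allclose(Pi, np.full((n, n), 1.0 / n))

G = np.linalg.inv(np.eye(n) - P + Pi)  # fundamental matrix of the cycle
G_series = sum((n + 1 - 2 * k) / (2 * n) * powers[k] for k in range(n))
assert np.allclose(G, G_series)
print("both expansions in powers of P verified for n =", n)
```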

3.
We consider the Hamiltonian cycle problem embedded in a singularly perturbed Markov decision process (MDP). More specifically, we consider the HCP as an optimization problem over the space of long-run state-action frequencies induced by the MDP's stationary policies. We show that Hamiltonian cycles (if any) correspond to the global minima of a suitably constructed indefinite quadratic programming problem over the frequency space. We show that this indefinite quadratic can be approximated by quadratic functions that are `nearly convex' and, as such, suitable for the application of logarithmic barrier methods. We develop an interior-point type algorithm, involving an arc elimination heuristic, that appears to perform rather well on graphs of moderate size. The approach has the potential for further improvements.

4.
Optimization, 2012, 61(4-5): 441-458
We consider the Hamiltonian cycle problem (HCP) embedded in a singularly perturbed Markov decision process (MDP). More specifically, we consider the HCP as an optimization problem over the space of long-run state-action frequencies induced by the MDP's stationary policies. We also consider two quadratic functionals over the same space. We show that when the perturbation parameter, ε, is sufficiently small, the Hamiltonian cycles of the given directed graph are precisely the maximizers of one of these quadratic functionals over the frequency space intersected with an appropriate (single) contour of the second quadratic functional. In particular, all these maximizers have a known Euclidean distance of \(z_m(\varepsilon)\) from the origin. Geometrically, this means that Hamiltonian cycles, if any, are the points in the frequency polytope where the circle of radius \(z_m(\varepsilon)\) intersects a certain ellipsoid.

5.
Kingman and Williams [6] showed that a pattern of positive elements can occur in a transition matrix of a finite state, nonhomogeneous Markov chain if and only if it may be expressed as a finite product of reflexive and transitive patterns. In this paper we solve a similar problem for doubly stochastic chains. We prove that a pattern of positive elements can occur in a transition matrix of a doubly stochastic Markov chain if and only if it may be expressed as a finite product of reflexive, transitive, and symmetric patterns. We provide an algorithm for determining whether a given pattern may be expressed as such a finite product. This result has implications for the embedding problem for doubly stochastic Markov chains. We also apply the obtained characterization to chain majorization.
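A hedged sketch of the objects involved (the paper's decision algorithm is not reproduced): patterns as 0/1 matrices, boolean products, and the three predicates appearing in the characterization.

```python
# Patterns as 0/1 matrices with boolean products, plus the reflexive,
# transitive and symmetric predicates from the characterization. This only
# illustrates the objects; it is not the paper's decision algorithm.
import numpy as np

def bool_product(A, B):
    """Boolean matrix product of two 0/1 patterns."""
    return (A @ B > 0).astype(int)

def is_reflexive(A):
    return bool(np.all(np.diag(A) == 1))

def is_transitive(A):
    return bool(np.all(bool_product(A, A) <= A))   # A o A contained in A

def is_symmetric(A):
    return bool(np.all(A == A.T))

# Two reflexive, transitive and symmetric patterns (equivalence patterns).
E = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
F = np.array([[1, 0, 0],
              [0, 1, 1],
              [0, 1, 1]])
assert all(p(E) and p(F) for p in (is_reflexive, is_transitive, is_symmetric))

# By the theorem, their product is a valid doubly stochastic pattern, even
# though the product itself is neither symmetric nor transitive.
print(bool_product(E, F))
```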

6.
We present an algorithm to find the determinant, and its first and second derivatives, of a rank-one corrected generator matrix of a doubly stochastic Markov chain. The motivation arises from the fact that the global minimiser of this determinant solves the Hamiltonian cycle problem. It is essential for algorithms that find global minimisers to evaluate both first and second derivatives at every iteration. Potentially, the computation of these derivatives could require an overwhelming amount of work, since \(N^2\) cofactors are required for the Hessian. We show how the doubly stochastic structure and the properties of the objective may be exploited to calculate all cofactors from a single LU decomposition.
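The cofactor trick can be illustrated generically (this sketch does not exploit the doubly stochastic structure the paper uses): since cof(A) = det(A) · A^{-T} for nonsingular A, one LU factorization yields the determinant and, via triangular solves, all \(N^2\) cofactors.

```python
# Generic illustration of computing all N^2 cofactors from a single LU
# factorization, via cof(A) = det(A) * inv(A)^T. The paper's specialized
# exploitation of the doubly stochastic structure is not reproduced here.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def all_cofactors(A):
    lu, piv = lu_factor(A)                        # one LU decomposition
    swaps = np.sum(piv != np.arange(len(piv)))    # row interchanges performed
    det = (-1.0) ** swaps * np.prod(np.diag(lu))  # det from U's diagonal
    inv = lu_solve((lu, piv), np.eye(A.shape[0])) # reuse the factorization
    return det * inv.T

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(all_cofactors(A))   # [[ 3. -1.] [-1.  2.]]
```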

7.
We consider a two‐dimensional transport equation subject to small diffusive perturbations. The transport equation is given by a Hamiltonian flow near a compact and connected heteroclinic cycle. We investigate approximately harmonic functions corresponding to the generator of the perturbed transport equation. In particular, we investigate such functions in the boundary layer near the heteroclinic cycle; the space of these functions gives information about the likelihood of a particle moving a mesoscopic distance into one of the regions where the transport equation corresponds to periodic oscillations (i.e., a “well” of the Hamiltonian). We find that we can construct such approximately harmonic functions (which can be used as “corrector functions” in certain averaging questions) when certain macroscopic “gluing conditions” are satisfied. This provides a different perspective on some previous work of Freidlin and Wentzell on stochastic averaging of Hamiltonian systems. © 2004 Wiley Periodicals, Inc.

8.
We consider time‐homogeneous Markov chains with state space \(E_k \equiv \{0,1,\dots,k\}\) and initial distribution concentrated on the state 0. For pairs of such Markov chains, we study the stochastic tail order and the stochastic order in the usual sense between the respective first passage times into the state \(k\). For this purpose, we develop a method based on a specific relation between two stochastic matrices on the state space \(E_k\). Our method provides comparisons that are simpler and more refined than those obtained by the analysis based on spectral gaps. Copyright © 2016 John Wiley & Sons, Ltd.
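A minimal sketch of the comparison being made (the two chains below are illustrative, not from the paper): for a chain on \(\{0,\dots,k\}\) started at 0, the first passage time \(T\) into \(k\) satisfies \(P(T > t) = e_0^{\top} Q^t \mathbf{1}\), where \(Q\) is the restriction of the transition matrix to \(\{0,\dots,k-1\}\); the usual stochastic order holds exactly when these survival probabilities are pointwise ordered.

```python
# Survival function of the first passage time into state k, starting from 0:
# P(T > t) = (Q^t 1)[0], where Q drops the row and column of state k. The
# two birth-death chains below are illustrative, not taken from the paper.
import numpy as np

def survival(P, t_max):
    Q = P[:-1, :-1]                    # restrict to the taboo states 0..k-1
    v = np.ones(Q.shape[0])
    out = []
    for _ in range(t_max + 1):
        out.append(v[0])               # chain starts in state 0
        v = Q @ v
    return np.array(out)

PA = np.array([[0.3, 0.7, 0.0], [0.2, 0.3, 0.5], [0.0, 0.0, 1.0]])
PB = np.array([[0.6, 0.4, 0.0], [0.4, 0.4, 0.2], [0.0, 0.0, 1.0]])
sA, sB = survival(PA, 50), survival(PB, 50)
# Pointwise ordering of the survival functions = usual stochastic order.
print(np.all(sA <= sB))   # True: PA reaches k = 2 stochastically sooner
```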

9.
In this paper we suggest a new successive approximation method to compute the optimal discounted reward for finite state and action, discrete time, discounted Markov decision chains. The method is based on a block partitioning of the (stochastic) matrices corresponding to the stationary policies. The method is particularly attractive when the transition matrices are jointly nearly decomposable or nearly completely decomposable.
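For context, the plain successive approximation scheme that such block-partitioned methods accelerate looks as follows; the toy MDP is an illustrative assumption of mine, and the block partitioning itself is not reproduced.

```python
# Plain successive approximation (value iteration) for a finite discounted
# MDP; the paper's block-partitioned acceleration is not reproduced. The
# toy MDP below (3 states, 2 actions) is an illustrative assumption.
import numpy as np

P = np.array([[[0.9, 0.1, 0.0],    # P[a, s, s']: transitions under action a
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]],
              [[0.2, 0.8, 0.0],
               [0.0, 0.2, 0.8],
               [0.8, 0.0, 0.2]]])
r = np.array([[1.0, 0.0, 0.0],     # r[a, s]: one-step reward
              [0.0, 0.5, 2.0]])
beta = 0.9                         # discount factor

v = np.zeros(3)
for _ in range(10_000):
    q = r + beta * (P @ v)         # q[a, s]: value of action a in state s
    v_new = q.max(axis=0)
    if np.abs(v_new - v).max() < 1e-10:  # sup-norm contraction: converges
        break
    v = v_new
print("optimal discounted reward per state:", v_new)
```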

10.
We consider a zero-sum stochastic game with side constraints for both players, with a special structure. There are two independent controlled Markov chains, one for each player. The transition probabilities of the chain associated with a player, as well as the related side constraints, depend only on the actions of the corresponding player; the side constraints also depend on the player's controlled chain. The global cost, which player 1 wishes to minimize and player 2 wishes to maximize, depends however on the actions and Markov chains of both players. We obtain a linear programming formulation that allows us to compute the value and saddle point policies for this problem. We illustrate the theoretical results through a zero-sum stochastic game in wireless networks in which each player has power constraints.
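The LP machinery involved can be illustrated on its simplest special case, an unconstrained zero-sum matrix game; the paper's formulation is over occupation measures with side constraints, which this hedged sketch does not include, and the payoff matrix is made up.

```python
# Hedged sketch: the classical LP for a zero-sum matrix game, the simplest
# special case of such formulations. Player 1 minimizes x' M y; we compute
# the game value v and a minimax strategy x. The payoff matrix is made up.
import numpy as np
from scipy.optimize import linprog

M = np.array([[1.0, -1.0],
              [-2.0, 3.0]])

m, n = M.shape
# Variables (x_1..x_m, v): minimize v subject to (M'x)_j <= v, sum(x) = 1.
c = np.concatenate([np.zeros(m), [1.0]])
A_ub = np.hstack([M.T, -np.ones((n, 1))])
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
x, v = res.x[:m], res.x[-1]
print("value:", round(v, 4), "minimax strategy:", np.round(x, 4))
# Expected for this M: value 1/7 with x = (5/7, 2/7).
```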

11.
In this paper, we use the Markov chain censoring technique to study infinite state Markov chains whose transition matrices possess block-repeating entries. We demonstrate that a number of important probabilistic measures are invariant under censoring. Informally speaking, these measures involve first passage times or expected numbers of visits to certain levels where other levels are taboo; they are closely related to the so-called fundamental matrix of the Markov chain which is also studied here. Factorization theorems for the characteristic equation of the blocks of the transition matrix are obtained. Necessary and sufficient conditions are derived for such a Markov chain to be positive recurrent, null recurrent, or transient based either on spectral analysis, or on a property of the fundamental matrix. Explicit expressions are obtained for key probabilistic measures, including the stationary probability vector and the fundamental matrix, which could be potentially used to develop various recursive algorithms for computing these measures.

12.
We consider a Markov decision process with an uncountable state space for which the vector performance functional has the form of expected total rewards. Under the single condition that initial distribution and transition probabilities are nonatomic, we prove that the performance space coincides with that generated by nonrandomized Markov policies. We also provide conditions for the existence of optimal policies when the goal is to maximize one component of the performance vector subject to inequality constraints on other components. We illustrate our results with examples of production and financial problems.

13.
Let \(\mathcal{T}_n\) be the compact convex set of tridiagonal doubly stochastic matrices. These arise naturally in probability problems as birth and death chains with a uniform stationary distribution. We study ‘typical’ matrices \(T \in \mathcal{T}_n\) chosen uniformly at random from \(\mathcal{T}_n\). A simple algorithm is presented to allow direct sampling from the uniform distribution on \(\mathcal{T}_n\). Using this algorithm, the elements above the diagonal in \(T\) are shown to form a Markov chain. For large \(n\), the limiting Markov chain is reversible and explicitly diagonalizable with transformed Jacobi polynomials as eigenfunctions. These results are used to study the limiting behavior of such typical birth and death chains, including their eigenvalues and mixing times. The results on uniform random tridiagonal doubly stochastic matrices are related to the distribution of alternating permutations chosen uniformly at random. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 42, 403–437, 2013
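The parametrization underlying such sampling is easy to state: a tridiagonal doubly stochastic matrix is symmetric and determined by its off-diagonal entries \(c_1,\dots,c_{n-1} \ge 0\) with \(c_i + c_{i+1} \le 1\), the diagonal being forced. The sketch below draws uniformly from this polytope by naive rejection; it is a hedged stand-in for the paper's direct algorithm and is practical only for small \(n\).

```python
# Naive rejection sampler for the uniform distribution on T_n, using the
# off-diagonal parametrization; a hedged stand-in for the paper's direct
# sampling algorithm (rejection becomes very slow as n grows).
import numpy as np

rng = np.random.default_rng(0)

def sample_tridiag_ds(n):
    while True:
        c = rng.uniform(0, 1, size=n - 1)       # candidate off-diagonals
        if np.all(c[:-1] + c[1:] <= 1):         # adjacent entries must fit
            break
    left = np.concatenate(([0.0], c))           # c_{i-1}, with c_0 = 0
    right = np.concatenate((c, [0.0]))          # c_i, with c_n = 0
    return np.diag(c, 1) + np.diag(c, -1) + np.diag(1 - left - right)

T = sample_tridiag_ds(6)
assert np.allclose(T.sum(axis=0), 1.0) and np.allclose(T.sum(axis=1), 1.0)
print(np.round(T, 3))
```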

14.
Simon Rénier, Discrete Mathematics, 2009, 309(23-24): 6563-6571
We show that infinite locally finite doubly stochastic matrices are particular limits of sequences of finite doubly stochastic matrices, and conversely. We thereby define parity in the set of infinite locally finite doubly stochastic matrices. In particular, convexity and stability properties of the even matrices of this set are investigated, as well as the differences between the finite case and the infinite case. Moreover, the limits of the powers of locally finite infinite doubly stochastic matrices in this context are determined.

15.
Stochastic Analysis and Applications, 2013, 31(2): 419-441
We consider the stochastic model of water pollution, which mathematically can be written with a stochastic partial differential equation driven by Poisson measure noise. We use a stochastic particle Markov chain method to produce an implementable approximate solution. Our main result is the annealed law of large numbers establishing convergence in probability of our Markov chains to the solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains.

16.
A methodology is proposed that is suitable for efficient simulation of continuous-time Markov chains that are nearly-completely decomposable. For such Markov chains the effort to adequately explore the state space via Crude Monte Carlo (CMC) simulation can be extremely large. The purpose of this paper is to provide a fast alternative to the standard CMC algorithm, which we call Aggregate Monte Carlo (AMC). The idea of the AMC algorithm is to reduce the jumping back and forth of the Markov chain in small subregions of the state space. We accomplish this by aggregating such problem regions into single states. We discuss two methods to identify collections of states where the Markov chain may become ‘trapped’: the stochastic watershed segmentation from image analysis, and a graph-theoretic decomposition method. As a motivating application, we consider the problem of estimating the charge carrier mobility of disordered organic semiconductors, which contain low-energy regions in which the charge carrier can quickly become stuck. It is shown that the AMC estimator for the charge carrier mobility reduces computational costs by several orders of magnitude compared to the CMC estimator.

17.
The notion of a stochastic operator in an ordered Banach space is specialized to a finite dimensional ordered real vector space. The classical limit theorems are obtained, and an application is made to non-homogeneous Markov chains. Finally, groups of nonnegative matrices are discussed.

18.
Doubly nonnegative matrices arise naturally in many settings, including Markov random fields (positively banded graphical models) and the convergence analysis of Markov chains. In this short note, we settle a recent conjecture by C.R. Johnson et al. [Charles R. Johnson, Brian Lins, Olivia Walch, The critical exponent for continuous conventional powers of doubly nonnegative matrices, Linear Algebra Appl. 435 (9) (2011) 2175–2182] by proving that the critical exponent beyond which all continuous conventional powers of \(n\)-by-\(n\) doubly nonnegative matrices are doubly nonnegative is exactly \(n-2\). We show that the conjecture follows immediately by applying a general characterization from the literature. We prove a stronger form of the conjecture by classifying all powers preserving doubly nonnegative matrices, and proceed to generalize the conjecture to broad classes of functions. We also provide different approaches for settling the original conjecture.
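The statement can be probed numerically with a sketch of my own; random instances typically stay doubly nonnegative for all positive powers, so this only sanity-checks the guaranteed regime \(t \ge n-2\), while the theorem concerns the worst case.

```python
# Numerical probe: for a doubly nonnegative A (entrywise nonnegative and
# positive semidefinite), the continuous power A^t = V diag(lam^t) V' must
# be doubly nonnegative for every t >= n - 2. Random instances often stay
# nonnegative for all t > 0; failures below n - 2 need crafted examples.
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.uniform(0, 1, size=(n, n))
A = B @ B.T                                   # entrywise >= 0 and PSD

lam, V = np.linalg.eigh(A)
lam = np.maximum(lam, 0.0)                    # clip tiny negative round-off

def power(t):
    return (V * lam ** t) @ V.T               # V diag(lam^t) V'

for t in np.linspace(n - 2, n + 2, 5):
    At = power(t)
    ok = At.min() >= -1e-10                   # PSD is automatic for real t
    print(f"t = {t:.2f}: entrywise nonnegative: {ok}")
```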

19.
The study addresses matrix operator equations of a special form which are used in the theory of Markov chains. Solving the operator equations with stochastic transition probability matrices of large finite, or even countably infinite, size reduces to the case of stochastic matrices of small size. In particular, the case of ternary chains is considered in detail. A Markov model for crack growth in a composite serves as an example of application.
