Similar Articles
20 similar articles found (search time: 78 ms)
1.
We consider generalizations of Schützenberger's promotion operator on the set $\mathcal{L}$ of linear extensions of a finite poset of size $n$. This gives rise to a strongly connected graph on $\mathcal{L}$. By assigning weights to the edges of the graph in two different ways, we study two Markov chains, both of which are irreducible. The stationary state of one gives rise to the uniform distribution, whereas the weights of the stationary state of the other have a nice product formula. This generalizes results by Hendricks on the Tsetlin library, which corresponds to the case when the poset is the antichain and hence $\mathcal{L}=S_{n}$ is the full symmetric group. We also provide explicit eigenvalues of the transition matrix in general when the poset is a rooted forest. This is shown by proving that the associated monoid is $\mathcal{R}$-trivial and then using Steinberg's extension of Brown's theory for Markov chains on left regular bands to $\mathcal{R}$-trivial monoids.
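The Tsetlin-library special case admits a compact numerical check. The sketch below is an illustration, not the paper's construction: it builds the move-to-front chain on $S_3$ and verifies Hendricks' product formula for its stationary distribution; the weights `w` are arbitrary request probabilities chosen for the example.

```python
# Move-to-front (Tsetlin library) chain on permutations of n items:
# item i is requested with probability w[i] and moved to the front.
import itertools
import numpy as np

w = np.array([0.5, 0.3, 0.2])          # request probabilities (assumed)
n = len(w)
perms = list(itertools.permutations(range(n)))
idx = {p: k for k, p in enumerate(perms)}

# Transition matrix of the move-to-front chain on S_n.
P = np.zeros((len(perms), len(perms)))
for p in perms:
    for item in range(n):
        q = (item,) + tuple(x for x in p if x != item)  # move item to front
        P[idx[p], idx[q]] += w[item]

# Stationary distribution: left eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Hendricks' product formula: pi(sigma) = prod_i w[sigma_i] / sum_{j>=i} w[sigma_j].
for p in perms:
    tails = [w[list(p[i:])].sum() for i in range(n)]
    formula = np.prod([w[p[i]] / tails[i] for i in range(n)])
    assert abs(pi[idx[p]] - formula) < 1e-10
```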

2.
3.
4.
5.
6.
We consider asymptotic expansions for sums of the form $S_n = f_0(X_0) + f(X_1, X_0) + \cdots + f(X_n, X_{n-1})$, where $X_i$ is a Markov chain. Under different ergodicity conditions on the Markov chain and certain conditional moment conditions on $f(X_i, X_{i-1})$, a simple representation of the characteristic function of $S_n$ is obtained. The representation is in terms of the maximal eigenvalue of the linear operator sending a function $g(x)$ to the function $x \mapsto E\bigl(g(X_i)\exp[itf(X_i, x)] \mid X_{i-1} = x\bigr)$.
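For a finite state space this operator is just a matrix, which makes the representation easy to probe numerically. The following sketch uses a toy two-state example under assumptions of my choosing (a specific $P$, $f(y,x) = y - x$, and $f_0 = 0$): it computes the exact characteristic function of $S_n$ and compares its growth rate to the maximal eigenvalue of $M(t)[x,y] = P[x,y]\,e^{itf(y,x)}$.

```python
# E exp(i t S_n) = mu @ M(t)^n @ 1 grows like lambda(t)^n, where
# lambda(t) is the eigenvalue of M(t) of maximal modulus.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])              # transition matrix of X_i (assumed)
f = lambda y, x: float(y - x)           # the summand f(X_i, X_{i-1}) (assumed)
mu = np.array([0.5, 0.5])               # initial distribution, f0 = 0

t = 0.3
M = np.array([[P[x, y] * np.exp(1j * t * f(y, x)) for y in range(2)]
              for x in range(2)])

# Exact characteristic function via the matrix power.
ones = np.ones(2)
phi = [mu @ np.linalg.matrix_power(M, n) @ ones for n in (20, 21)]

lam = max(np.linalg.eigvals(M), key=abs)  # maximal eigenvalue of M(t)
print(phi[1] / phi[0], lam)               # ratios agree for large n
```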

7.
The controlled Markov chains (CMC) approach to software testing treats software testing as a control problem: the software under test serves as a controlled object, modeled as a controlled Markov chain, and the software testing strategy serves as the corresponding controller. In this paper we extend the CMC approach to software testing to the case where the number of tests that can be applied to the software under test is limited. The optimal testing strategy is derived if the true values of all the software parameters of concern are known a priori. An adaptive testing strategy is employed if the true values of the software parameters of concern are not known a priori and need to be estimated on-line during software testing from testing data. A random testing strategy ignores all the related information (true values or estimates) of the software parameters of concern and follows a uniform probability distribution to select a possible test case. Simulation results show that the performance of an adaptive testing strategy cannot compete with that of the optimal testing strategy, but should be better than that of a random testing strategy. This paper further justifies the idea of software cybernetics, which aims to explore the interplay between software theory/engineering and control theory/engineering.
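The ranking claimed in the abstract (optimal above adaptive above random) can be reproduced in a toy stand-in, shown below. This is not the paper's CMC model: it is a three-operation defect-detection simulation where the adaptive tester greedily follows posterior-mean estimates of the unknown failure rates `theta`, all of which are assumed for the example.

```python
# Compare optimal / adaptive / random test selection on a toy model:
# each test targets one operation, exposing a failure with prob theta[a].
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.05, 0.15, 0.30])    # true per-test failure rates (assumed)
N, runs = 200, 2000

def run(strategy):
    total = 0
    for _ in range(runs):
        fails = np.zeros(3); trials = np.zeros(3)
        for _ in range(N):
            if strategy == "optimal":
                a = np.argmax(theta)                       # true rates known
            elif strategy == "adaptive":
                a = np.argmax((fails + 1) / (trials + 2))  # posterior mean
            else:
                a = rng.integers(3)                        # random testing
            hit = rng.random() < theta[a]
            fails[a] += hit; trials[a] += 1
            total += hit
    return total / runs                                    # avg failures found

for s in ("optimal", "adaptive", "random"):
    print(s, run(s))
```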

8.
Translated from Problemy Ustoichivosti Stokhasticheskikh Modelei — Trudy Seminara, pp. 42–49, 1985.

9.
10.
Partially observable Markov decision chains with finite state, action and signal spaces are considered. The performance index is the risk-sensitive average criterion and, under conditions concerning reachability between the unobservable states and observability of the signals, it is shown that the value iteration algorithm can be implemented to approximate the optimal average cost, to determine a stationary policy whose performance index is arbitrarily close to the optimal one, and to establish the existence of solutions to the optimality equation. The results rely on an appropriate extension of the well-known Schweitzer transformation.
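For intuition, the fully observed finite-state analogue of this value iteration can be written in a few lines. The sketch below is an assumption-laden simplification (no partial observability, a hand-picked two-state model, risk parameter `lam`): it iterates the multiplicative Bellman operator with span normalization, the same mechanism that Schweitzer-type transformations make rigorous.

```python
# Risk-sensitive value iteration for a fully observed finite model:
# (T h)(x) = min_a [ c(x,a) + (1/lam) log sum_y p(y|x,a) exp(lam*h(y)) ].
import numpy as np

lam = 0.5                                # risk-sensitivity parameter (assumed)
c = np.array([[1.0, 2.0], [0.5, 3.0]])   # cost c[x, a] (assumed)
p = np.array([[[0.8, 0.2], [0.3, 0.7]],  # transition law p[x, a, y] (assumed)
              [[0.5, 0.5], [0.9, 0.1]]])

h = np.zeros(2)
for _ in range(500):
    Th = np.min(
        (1 / lam) * np.log(np.einsum('xay,y->xa', p, np.exp(lam * h)))
        + c, axis=1)
    g = Th[0] - h[0]                     # normalizing constant -> optimal cost
    h = Th - Th[0]                       # span normalization keeps h bounded
print("risk-sensitive average cost ~", g)
```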

11.
We investigate the deviation matrix for discrete-time GI/M/1-type Markov chains in terms of the matrix-analytic method, and revisit the link between the deviation matrix and the asymptotic variance. Parallel results are obtained for continuous-time GI/M/1-type Markov chains based on the technique of uniformization. We conclude with A. B. Clarke's tandem queue as an illustrative example, and compute the asymptotic variance of the queue length for this model.
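For a finite-state chain the deviation matrix and the asymptotic variance are directly computable, which illustrates the link the abstract revisits. A minimal sketch, using the standard identities $D = (I - P + \Pi)^{-1} - \Pi$ and $\sigma^2(f) = 2\langle \bar f, D\bar f\rangle_\pi - \langle \bar f, \bar f\rangle_\pi$ with $\bar f = f - (\pi f)\mathbf{1}$; the GI/M/1-type block structure of the paper is not modeled here, and the chain and functional are assumed for the example:

```python
# Deviation matrix and asymptotic variance of (1/n) sum f(X_k).
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.3, 0.7]])
f = np.array([1.0, 4.0, 2.0])            # queue-length-like functional

# Stationary distribution pi (left Perron vector).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

Pi = np.outer(np.ones(3), pi)               # rows all equal to pi
D = np.linalg.inv(np.eye(3) - P + Pi) - Pi  # deviation matrix
fbar = f - pi @ f
sigma2 = 2 * pi @ (fbar * (D @ fbar)) - pi @ (fbar * fbar)
print("asymptotic variance:", sigma2)       # Var(sum f(X_k))/n -> sigma2
```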

12.

The asymptotic equipartition property is a basic theorem in information theory. In this paper, we study the strong law of large numbers for Markov chains in a single-infinite Markovian environment on a countable state space. As a corollary, we obtain the strong laws of large numbers for the frequencies of occurrence of states and of ordered couples of states for this process. Finally, we give the asymptotic equipartition property of Markov chains in a single-infinite Markovian environment on a countable state space.
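In the classical case (no random environment) the asymptotic equipartition property is easy to observe empirically: $-\tfrac{1}{n}\log p(X_0,\dots,X_n)$ converges to the entropy rate. A minimal sketch under that simplification, with an assumed two-state chain:

```python
# Empirical check of the AEP: -(1/n) log p(path) -> entropy rate
# H = -sum_i pi_i sum_j P_ij log P_ij for an ergodic chain.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))]); pi /= pi.sum()

H = -np.sum(pi[:, None] * P * np.log(P))   # entropy rate

n = 100_000
x = 0
logp = np.log(pi[x])
for _ in range(n):
    y = rng.choice(2, p=P[x])
    logp += np.log(P[x, y])
    x = y
print(-logp / n, H)                        # empirical vs theoretical
```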

13.
Motivated by the problem of finding a satisfactory quantum generalization of the classical random walks, we construct a new class of quantum Markov chains which are at the same time purely generated and uniquely determined by a corresponding classical Markov chain. We argue that this construction yields, as a corollary, a solution to the problem of constructing quantum analogues of classical random walks which are "entangled" in a sense specified in the paper. The formula giving the joint correlations of these quantum chains is obtained from the corresponding classical formula by replacing the usual matrix multiplication by Schur multiplication. The connection between Schur multiplication and entanglement is clarified by showing that these quantum chains are the limits of vector states whose amplitudes, in a given basis (e.g. the computational basis of quantum information), are complex square roots of the joint probabilities of the corresponding classical chains. In particular, when restricted to the projectors on this basis, the quantum chain reduces to the classical one. In this sense we speak of an entangled lifting, to the quantum case, of a classical Markov chain. Since random walks are particular Markov chains, our general construction also gives a solution to the problem that motivated our study. In view of possible applications to quantum statistical mechanics too, we prove that the ergodic type of an entangled Markov chain with finite state space (thus excluding random walks) is completely determined by the corresponding ergodic type of the underlying classical chain.

Mathematics Subject Classification (2000): Primary 46L53, 60J99; Secondary 46L60, 60G50, 62B10.
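The last mechanism, amplitudes as square roots of classical joint probabilities, can be checked mechanically. The sketch below uses positive square roots only (so no nontrivial phases, hence none of the entanglement the paper studies) on an assumed two-state chain: it confirms that the basis-diagonal restriction reproduces the classical chain, and shows the Schur product that replaces matrix multiplication.

```python
# Amplitudes psi(path) = sqrt(joint probability); measuring in the
# computational basis recovers the classical path probabilities.
import itertools
import numpy as np

p0 = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
n = 3

paths = list(itertools.product(range(2), repeat=n + 1))
joint = {w: p0[w[0]] * np.prod([P[w[k], w[k + 1]] for k in range(n)])
         for w in paths}
psi = {w: np.sqrt(q) for w, q in joint.items()}   # vector-state amplitudes

# Restriction to basis projectors reproduces the classical probabilities.
assert all(abs(abs(psi[w]) ** 2 - joint[w]) < 1e-12 for w in paths)

# Schur multiplication (entrywise product), which replaces ordinary
# matrix multiplication in the paper's correlation formula:
print(P * P)     # numpy's * on arrays is exactly the Schur product
```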

14.
Generalizing a theorem of Hofbauer (1979), we give conditions under which invariant measures for piecewise invertible dynamical systems can be lifted to Markov extensions. Using these results we prove:
  1. If $T$ is an $S$-unimodal map with an attracting invariant Cantor set, then $\int \log|T'|\,d\mu = 0$ for the unique invariant measure $\mu$ on the Cantor set.
  2. If $T$ is piecewise invertible, if $f$ is the Radon-Nikodym derivative of $T$ with respect to a $\sigma$-finite measure $m$, if $\log f$ has bounded distortion under $T$, and if $\mu$ is an ergodic $T$-invariant measure satisfying a certain lower estimate for its entropy, then $\mu \ll m$ iff $h_{\mu}(T) = \int \log f\,d\mu$.

15.

By a density we mean any extension of the asymptotic density to a finitely additive measure defined on all sets of natural numbers. We consider densities associated to ultrafilters on $\omega$ and investigate two additivity properties of such densities. In particular, we show that there is a density $\mu$ for which $L_1(\mu)$ is complete.


16.
Conditions, as well as quantitative estimates, are obtained for Markov chains with a common set of states that are uniformly continuous in time. Within the framework of the method under consideration, it is possible to show that such chains can be finitely approximated.

Translated from Problemy Ustoichivosti Stokhasticheskikh Modelei — Trudy Seminara, pp. 4–12, 1980.

17.
We consider M/G/1- and GI/M/1-type Markov chains whose one-step transitions depend on the times of the transitions. These types of Markov chains are encountered in several stochastic models, including queueing systems, dams, inventory systems, insurance risk models, etc. We show that for the cases when the time parameters are periodic, the systems can be analyzed using some extensions of known results in the matrix-analytic methods literature. We have limited our examples to those relating to queueing systems to keep the discussion focused. An example application of the model to a real-life problem is presented.
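The periodic-case reduction has a simple finite-state prototype: a chain whose kernel alternates with period $d$ is an ordinary homogeneous chain when observed once per period, with kernel equal to the cycle product. A minimal sketch with $d = 2$ and hand-picked kernels (the M/G/1-type block structure of the paper is not modeled):

```python
# Period-2 time-inhomogeneous chain: the phase-0 embedded chain has
# kernel Q = P0 @ P1, to which standard stationary analysis applies.
import numpy as np

P0 = np.array([[0.9, 0.1],
               [0.6, 0.4]])              # transitions at even times
P1 = np.array([[0.3, 0.7],
               [0.5, 0.5]])              # transitions at odd times
Q = P0 @ P1                              # one full period, phase 0 to phase 0

vals, vecs = np.linalg.eig(Q.T)
pi0 = np.real(vecs[:, np.argmin(np.abs(vals - 1))]); pi0 /= pi0.sum()
pi1 = pi0 @ P0                           # distribution at the odd phase
print(pi0, pi1)                          # periodic stationary regime
```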

18.
We consider discrete-time nonlinear controlled stochastic systems, modeled by controlled Markov chains with denumerable state space and compact action space. The corresponding stochastic control problem of maximizing average rewards in the long run is studied. Departing from the most common position, which uses expected values of rewards, we focus on a sample path analysis of the stream of states/rewards. Under a Lyapunov function condition, we show that stationary policies obtained from the average reward optimality equation are not only average reward optimal, but indeed sample path average reward optimal, for almost all sample paths.

Research supported by a U.S.-México Collaborative Research Program funded by the National Science Foundation under grant NSF-INT 9201430, and by CONACyT-MEXICO. Partially supported by the MAXTOR Foundation for Applied Probability and Statistics, under grant No. 01-01-56/04-93. Research partially supported by the Engineering Foundation under grant RI-A-93-10, and by a grant from the AT&T Foundation.
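The policies in question come from the average reward optimality equation, which in the finite unichain case can be solved by relative value iteration. A minimal sketch under those simplifying assumptions (finite states, hand-picked rewards and kernels, no Lyapunov machinery needed):

```python
# Relative value iteration for the average reward optimality equation
# g + h(x) = max_a [ r(x,a) + sum_y p(y|x,a) h(y) ] on a toy model.
import numpy as np

r = np.array([[2.0, 0.5], [1.0, 3.0]])   # reward r[x, a] (assumed)
p = np.array([[[0.9, 0.1], [0.2, 0.8]],  # transition law p[x, a, y] (assumed)
              [[0.7, 0.3], [0.4, 0.6]]])

h = np.zeros(2)
for _ in range(1000):
    Q = r + np.einsum('xay,y->xa', p, h)
    Th = Q.max(axis=1)
    g = Th[0] - h[0]                     # converges to the optimal gain
    h = Th - Th[0]                       # relative normalization
policy = Q.argmax(axis=1)                # maximizers give a stationary policy
print("gain:", g, "stationary policy:", policy)
```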

19.
This paper deals with denumerable-state continuous-time controlled Markov chains with possibly unbounded transition and reward rates. It concerns optimality criteria that improve upon the usual expected average reward criterion. First, we show the existence of average reward optimal policies with minimal average variance. Then we compare the variance minimization criterion with overtaking optimality. We present an example showing that they are opposite criteria, and therefore we cannot optimize them simultaneously. This leads to a multiobjective problem for which we identify the set of Pareto optimal policies (also known as nondominated policies).
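In a finite toy model the trade-off can be made concrete by scoring every stationary deterministic policy on (gain, asymptotic variance) and keeping the nondominated ones. A sketch under assumptions of my choosing (two states, two actions, discrete time; the variance is the asymptotic variance of the reward stream, computed via the deviation matrix):

```python
# Enumerate stationary deterministic policies, evaluate (gain, variance),
# and report which are Pareto optimal (high gain, low variance).
import itertools
import numpy as np

r = np.array([[2.0, 2.0], [1.0, 1.5]])   # reward r[x, a]
p = np.array([[[0.9, 0.1], [0.2, 0.8]],  # transition law p[x, a, y]
              [[0.7, 0.3], [0.4, 0.6]]])

def evaluate(policy):
    P = np.array([p[x, policy[x]] for x in range(2)])
    f = np.array([r[x, policy[x]] for x in range(2)])
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))]); pi /= pi.sum()
    Pi = np.outer(np.ones(2), pi)
    D = np.linalg.inv(np.eye(2) - P + Pi) - Pi   # deviation matrix
    fbar = f - pi @ f
    return pi @ f, 2 * pi @ (fbar * (D @ fbar)) - pi @ (fbar * fbar)

results = {pol: evaluate(pol)
           for pol in itertools.product(range(2), repeat=2)}
for pol, (gain, var) in results.items():
    dominated = any(g >= gain and v <= var and (g, v) != (gain, var)
                    for g, v in results.values())
    print(pol, "gain=%.3f var=%.3f" % (gain, var),
          "dominated" if dominated else "Pareto")
```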

20.
A general notion of positive dependence among successive observations in a finite-state stationary process is studied, with particular attention to the case of a stationary ergodic Markov chain. This dependence condition can be expressed as a positivity condition on the joint probability matrices of pairs of observations. Some useful conditions equivalent to positive dependence are obtained for reversible chains, but are shown not to be equivalent for nonreversible chains.
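The paper's exact positivity condition is not reproduced here, but one natural candidate is easy to test: for the joint matrix $J = \mathrm{diag}(\pi)P$ of a pair $(X_0, X_1)$, require $J - \pi\pi^{\mathsf T}$ to be positive semidefinite. For a reversible chain this is equivalent to all eigenvalues of $P$ being nonnegative (by a congruence argument), which the sketch below checks on a birth-death example; treat this specific condition as an assumption, not necessarily the paper's.

```python
# Check one positive-dependence notion on a reversible (birth-death)
# chain: J - pi pi^T positive semidefinite, where J = diag(pi) P.
import numpy as np

P = np.array([[0.70, 0.30, 0.00],
              [0.15, 0.55, 0.30],
              [0.00, 0.60, 0.40]])
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))]); pi /= pi.sum()

assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # reversibility

J = pi[:, None] * P                      # joint law of (X_0, X_1)
M = J - np.outer(pi, pi)                 # excess over the independent coupling
print(np.linalg.eigvalsh(M))             # >= 0 up to numerical noise <=> PSD
print(np.sort(np.real(np.linalg.eigvals(P))))  # PSD <=> eigenvalues of P >= 0
```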

