Similar Literature
20 similar documents found.
1.
We present a uniformization of Reeken’s macroscopic differentiability (see [5]), discuss its relations to uniform differentiability (see [6]) and to classical continuous differentiability, and prove the corresponding chain rule, Taylor’s theorem, mean value theorem, and inverse mapping theorem. We also make an attempt to compare it with observability (see [1, 4]).

2.
1. Introduction. The motivation for writing this paper came from calculating the blocking probability for an overloaded finite system. Our numerical experiments suggested that this probability can be approximated efficiently by rotating the transition matrix by 180°. Some preliminary results were obtained and can be found in [1] and [2]. Rotating the transition matrix defines a new Markov chain, which is often called the dual process in the literature, for example, [3–7]. For a finite Markov chain, …
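As a minimal illustration of the rotation just described, assuming "rotating by 180°" means reversing both the row order and the column order of the transition matrix (our reading of the text; the function name is ours):

```python
import numpy as np

def rotate_180(P):
    """Dual chain via a 180-degree rotation of the transition matrix:
    Q[i, j] = P[n-1-i, n-1-j].  Each row of Q is a reversed row of P,
    so Q is again a stochastic matrix."""
    return P[::-1, ::-1]

# A small 3-state example and its rotated (dual) chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])
print(rotate_180(P))
```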

3.
The present paper continues the research of A. A. Atvinovskii and of the author on the functional calculus of closed operators on Banach spaces based on Markov and related functions as symbols. The following topics in perturbation theory are considered: estimates of bounded perturbations of operator functions with respect to general operator ideal norms, the Lipschitz property, a moment inequality, Fréchet differentiability, analyticity of the operator functions under consideration with respect to the perturbation parameter, the spectral shift function, and the Lifshits–Krein trace formula.

4.
We propose the construction of a quantum Markov chain that corresponds to a “forward” quantum Markov chain. In the given construction, the quantum Markov chain is defined as the limit of finite-dimensional states depending on the boundary conditions. A similar construction is widely used in the definition of Gibbs states in classical statistical mechanics. Using this construction, we study the quantum Markov chain associated with an XY-model on a Cayley tree. For this model, within the framework of the given construction, we prove the uniqueness of the quantum Markov chain, i.e., we show that the state is independent of the boundary conditions.

5.
We present a Markov chain Monte Carlo (MCMC) method for generating Markov chains using Markov bases for conditional independence models for a four-way contingency table. We then describe a Markov basis characterized by Markov properties associated with a given conditional independence model and show how to use the Markov basis to generate random tables of a Markov chain. The estimates of exact p-values can be obtained from random tables generated by the MCMC method. Numerical experiments examine the performance of the proposed MCMC method in comparison with the χ² approximation using large sparse contingency tables.
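A generic sketch of such a Markov-basis walk, in the Diaconis–Sturmfels style: `moves` (the Markov basis) and `expected` (expected counts under the model) are assumed given, and the target is the hypergeometric distribution on the fiber. This is only an illustration, not the paper's four-way implementation.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def chi2_stat(t, expected):
    """Pearson chi-square statistic of a table against expected counts."""
    return ((t - expected) ** 2 / expected).sum()

def log_hyper(t):
    """Log of the unnormalized hypergeometric weight 1 / prod(t_ij!)."""
    return -sum(math.lgamma(x + 1) for x in t.ravel())

def mcmc_exact_pvalue(table, moves, expected, n_steps=20000):
    """Random walk on the fiber: propose a random Markov-basis move with
    a random sign, Metropolis-accept against the hypergeometric target,
    and count how often the chi-square statistic is at least the
    observed value."""
    t, t0, hits = table.copy(), chi2_stat(table, expected), 0
    for _ in range(n_steps):
        m = moves[rng.integers(len(moves))] * rng.choice([-1, 1])
        prop = t + m
        if prop.min() >= 0 and math.log(rng.random()) < log_hyper(prop) - log_hyper(t):
            t = prop
        hits += chi2_stat(t, expected) >= t0
    return hits / n_steps
```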

6.
We show in this paper that the class of Lipschitz functions provides a suitable framework for the generalization of classical envelope theorems for a broad class of constrained programs relevant to economic models, in which nonconvexities play a key role, and where the primitives may not be continuously differentiable. We give sufficient conditions for the value function of a Lipschitz program to inherit the Lipschitz property and obtain bounds for its upper and lower directional Dini derivatives. With strengthened assumptions we derive sufficient conditions for the directional differentiability, Clarke regularity, and differentiability of the value function, thus obtaining a collection of generalized envelope theorems encompassing many existing results in the literature. Some of our findings are then applied to decision models with discrete choices, to dynamic programming with and without concavity, to the problem of existence and characterization of Markov equilibrium in dynamic economies with nonconvexities, and to show the existence of monotone controls in constrained lattice programming problems.

7.
The concepts of a Markov process in a random environment and of homogeneous random transition functions are introduced. Necessary and sufficient conditions for a homogeneous random transition function are given. The main results of this article are analytical properties of homogeneous random transition functions, such as continuity, differentiability, and the random Kolmogorov backward and forward equations.

8.
9.
Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, such as the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
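A sketch of one such convex program, assuming for simplicity the Frobenius norm and a prescribed stationary distribution `pi`, which makes the detailed-balance constraint linear; the paper's exact formulation may differ. Requires `cvxpy`.

```python
import numpy as np
import cvxpy as cp

def closest_reversible(P, pi):
    """Closest reversible transition matrix in Frobenius norm, with the
    stationary distribution pi held fixed so that detailed balance,
    pi_i Q_ij = pi_j Q_ji, is a linear constraint (a simplification;
    the paper's convex program may be formulated differently)."""
    n = P.shape[0]
    Q = cp.Variable((n, n), nonneg=True)
    cons = [cp.sum(Q, axis=1) == 1]                       # stochastic rows
    cons += [pi[i] * Q[i, j] == pi[j] * Q[j, i]
             for i in range(n) for j in range(i + 1, n)]  # detailed balance
    cp.Problem(cp.Minimize(cp.norm(Q - P, "fro")), cons).solve()
    return Q.value
```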

10.
In this paper, an Envelope Theorem (ET) is established for optimization problems on Euclidean spaces. In general, envelope theorems permit analyzing an optimization problem and obtaining its solution by means of differentiability techniques. The ET is presented in two versions: one uses concavity assumptions, whereas the other does not require such assumptions. The ET is then applied to discounted, infinite-horizon Markov Decision Processes (MDPs) on Euclidean spaces. As a first application, several examples (including some economic models) of discounted MDPs are presented for which the ET allows determining the value iteration functions, and hence the corresponding optimal value functions and optimal policies. As a second application of the ET, it is proved that, under differentiability conditions on the transition law, the reward function, and the noise of the system, the value function and the optimal policy of the problem are differentiable with respect to the state of the system. Various examples illustrating these differentiability conditions are provided. This work was partially supported by Benemérita Universidad Autónoma de Puebla (BUAP) under grant VIEP-BUAP 38/EXC/06-G, by Consejo Nacional de Ciencia y Tecnología (CONACYT), and by Evaluation-orientation de la COopération Scientifique (ECOS) under grant CONACyT-ECOS M06-M01.
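For the value-iteration functions mentioned above, a textbook finite-state sketch (the paper works on Euclidean state spaces; this discretized version is only illustrative, and all names are ours):

```python
import numpy as np

def value_iteration(R, P, beta=0.95, tol=1e-10):
    """Textbook discounted value iteration on a finite model: R[s, a] is
    the one-step reward and P[a] the transition matrix under action a.
    Returns the optimal value function and a greedy optimal policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        Q = R + beta * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```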

11.
In this paper, the problem of stochastic stability for a class of time-delay Hopfield neural networks with Markovian jump parameters is investigated. The jumping parameters are modeled as a continuous-time, discrete-state Markov process. Without assuming boundedness, monotonicity, or differentiability of the activation functions, delay-dependent stochastic stability criteria for the Markovian jumping delayed Hopfield neural networks (MJDHNNs) are developed. We show that the sufficient conditions can be expressed and checked in terms of linear matrix inequalities.

12.
Various definitions of directional derivatives in topological vector spaces are compared. Directional derivatives in the sense of Gâteaux, Fréchet, and Hadamard are singled out from the general framework of -directional differentiability. It is pointed out that, in the case of finite-dimensional spaces and locally Lipschitz mappings, all these concepts of directional differentiability are equivalent. The chain rule for directional derivatives of a composite mapping is discussed.

13.
In a Markov chain model of a social process, interest often centers on the distribution of the population by state. One question, the stability question, is whether this distribution converges to an equilibrium value. For an ordinary Markov chain (a chain with constant transition probabilities), complete answers are available. For an interactive Markov chain (a chain which allows the transition probabilities governing each individual to depend on the locations by state of the rest of the population), few stability results are available. This paper presents new results. Roughly, the main result is that an interactive Markov chain with unique equilibrium will be stable if the chain satisfies a certain monotonicity property. The property is a generalization to interactive Markov chains of the standard definition of monotonicity for ordinary Markov chains.
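To make the interactive setting concrete, a small simulation sketch in which the transition matrix depends on the current population distribution; the example matrix is ours and purely illustrative.

```python
import numpy as np

def iterate_distribution(x0, P_of_x, n_steps=500):
    """Iterate the population distribution of an interactive Markov chain:
    x_{t+1} = x_t @ P(x_t), where the transition matrix governing each
    individual depends on the current distribution of the population."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x @ P_of_x(x)
    return x

# Example (ours): two states, where state 0 attracts more strongly the
# larger its current share is.  Rows sum to 1 for any x[0] in [0, 1].
P = lambda x: np.array([[0.6 + 0.3 * x[0], 0.4 - 0.3 * x[0]],
                        [0.2 + 0.3 * x[0], 0.8 - 0.3 * x[0]]])
print(iterate_distribution([0.5, 0.5], P))
```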

14.
In this paper we discuss three important kinds of Markov chains used in Web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain, and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates, and the Maclaurin series of the stationary distributions of the three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
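A sketch of the standard damped construction behind such Web-search chains, G = αP + (1 − α)(1/n)𝟙𝟙ᵀ, solved by power iteration. This is the usual PageRank-style irreducible chain; the paper's precise definitions of the three chains may differ.

```python
import numpy as np

def damped_stationary(P, alpha=0.85, tol=1e-12):
    """Power iteration for the stationary distribution of the damped
    chain G = alpha * P + (1 - alpha) * (1/n) * ones(n, n); uniform
    teleportation keeps the iterate a probability vector."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    while True:
        pi_new = alpha * (pi @ P) + (1 - alpha) / n
        if np.abs(pi_new - pi).sum() < tol:
            return pi_new
        pi = pi_new
```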

15.
A Nonnegative Variable-Weight Combination Forecasting Algorithm Based on Markov Chain Fitting and Its Application
A new nonnegative time-varying-weight combination forecasting formula is derived by Markov chain fitting. The main contributions are: (1) for the combination forecasting problem, the states of the Markov chain and initial estimates of the state probabilities are given under a minimum-error criterion; (2) the time-varying evolution of the state probability distribution is fitted with the Markov chain, and the least-squares (LS) solution for the one-step transition probability matrix is derived via a constrained multivariate autoregressive model; (3) a nonnegative time-varying-weight combination forecasting formula is given, together with an application example.
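A minimal sketch of steps (2)–(3) under simplifying assumptions: the one-step transition matrix is fitted by column-wise nonnegative least squares on successive state-probability vectors (standing in for the paper's constrained multivariate autoregressive derivation), and the predicted state probabilities serve as the nonnegative time-varying weights. All names are ours.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_transition_matrix(X):
    """Fit P >= 0 so that X[t+1] ~ X[t] @ P, column by column, with
    nonnegative least squares; rows are then renormalized to sum to 1
    (a simplification of the paper's constrained LS derivation)."""
    A, B = X[:-1], X[1:]
    P = np.column_stack([lsq_linear(A, B[:, j], bounds=(0, np.inf)).x
                         for j in range(X.shape[1])])
    return P / P.sum(axis=1, keepdims=True)

def combined_forecast(forecasts, x_t, P):
    """Combine the individual model forecasts, using the predicted state
    probabilities x_t @ P as the nonnegative time-varying weights."""
    w = x_t @ P
    return w @ forecasts
```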

16.
We consider a Markov chain with general state space and an embedded Markov chain sampled at the times of successive returns to a subset A0 of the state space. We assume that the latter chain is uniformly ergodic, but the original Markov chain need not possess this property. We develop a modification of the spectral method and utilize it in proving the central limit theorem for the Markov chain under consideration.
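A finite-state simulation sketch of the embedded chain: sample the original chain and record it only at successive returns to A0 (illustrative; the paper's setting is a general state space, and the names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def embedded_path(P, A0, x0, n_returns=1000):
    """Simulate a finite chain from state x0 and record the state at each
    successive return to the subset A0, producing a sample path of the
    embedded chain."""
    x, path = x0, []
    while len(path) < n_returns:
        x = rng.choice(len(P), p=P[x])
        if x in A0:
            path.append(x)
    return path
```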

17.
A decidable algebraic condition for a stationary Markov chain to consist of a single ergodic set, and a decidable graph-theoretic condition for a stationary Markov chain to consist of a single ergodic noncyclic set, are formulated. In the third part of the paper, a graph-theoretic condition for a nonstationary Markov chain to have the weak ergodicity property is given. The paper is based on part of the author’s work towards the D.Sc. degree.
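One way to implement such a graph-theoretic test for a finite chain: compute reachability on the transition graph and check that exactly one communicating class is closed. This is our implementation of the standard criterion; the noncyclicity test (checking the class's period) is omitted.

```python
import numpy as np

def has_single_ergodic_set(P, tol=1e-12):
    """Decide whether a finite stationary chain has a single ergodic set:
    build the reachability relation of the transition graph (Warshall
    closure), form the communicating classes by mutual reachability, and
    count the closed ones."""
    n = len(P)
    reach = (np.asarray(P) > tol) | np.eye(n, dtype=bool)
    for k in range(n):                       # transitive closure
        reach |= reach[:, k][:, None] & reach[k, :][None, :]
    classes = {frozenset(j for j in range(n) if reach[i, j] and reach[j, i])
               for i in range(n)}
    closed = [c for c in classes
              if all(j in c for i in c for j in range(n) if reach[i, j])]
    return len(closed) == 1
```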

18.
A Markov chain plays an important role in the interacting multiple model (IMM) algorithm, which has been shown to be effective for target tracking systems. Such systems are described by a mixture of continuous states and discrete modes, and the switching between system modes is governed by a Markov chain. In real-world applications, this Markov chain may change or may need to be changed, so one is naturally concerned with target tracking algorithms under the switching of a Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.
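The mode-probability step at the heart of an IMM cycle, as a sketch of the textbook equations (not the paper's fault-tolerant variant): the Markov chain Pi mixes the mode probabilities, and the per-mode filter likelihoods reweight them.

```python
import numpy as np

def imm_mode_update(mu, Pi, likelihoods):
    """One IMM mode-probability update: mix the current mode probabilities
    mu with the mode-switching Markov chain Pi, then reweight by each mode
    filter's measurement likelihood and renormalize."""
    c = mu @ Pi                  # predicted mode probabilities
    mu_new = likelihoods * c     # Bayes reweighting
    return mu_new / mu_new.sum()
```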

19.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of connected states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently after the capture of the event.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the Two-Time-Scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The Two-Time-Scale property of the chain then permits an assumption that yields the reduction: we reduce the ergodic class to its stronger part, which contains the most important events, these also being the ones with the slower evolution. The reduced system keeps the stochastic property, so it is again a Markov chain.
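One standard way to realize such a reduction, once the states are ordered by steady-state probability, is censoring (the stochastic complement): retain the strong part A and fold the weak part B into it. This is our choice of construction, not necessarily the paper's exact approximation.

```python
import numpy as np

def stochastic_complement(P, A):
    """Reduce a chain to the retained states A by censoring:
    P_AA + P_AB (I - P_BB)^{-1} P_BA.  The result is again a stochastic
    matrix on A; A would be the states with the largest steady-state
    probabilities."""
    A = np.asarray(A)
    B = np.setdiff1d(np.arange(len(P)), A)
    P = np.asarray(P, dtype=float)
    P_AA, P_AB = P[np.ix_(A, A)], P[np.ix_(A, B)]
    P_BA, P_BB = P[np.ix_(B, A)], P[np.ix_(B, B)]
    return P_AA + P_AB @ np.linalg.solve(np.eye(len(B)) - P_BB, P_BA)
```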

20.
This work is concerned with weak convergence of non-Markov random processes modulated by a Markov chain. The motivation for our study stems from a wide variety of applications in actuarial science, communication networks, production planning, manufacturing, and financial engineering. Owing to various modelling considerations, the modulating Markov chain often has a large state space. Aiming at a reduction of computational complexity, a two-time-scale formulation is used. Under this setup, the Markov chain belongs to the class of nearly completely decomposable chains, where the state space is split into several subspaces. Within each subspace, the transitions of the Markov chain vary rapidly, while among different subspaces the Markov chain moves relatively infrequently. Aggregating all the states of the Markov chain in each subspace into a single super state leads to a new process. It is shown that under such aggregation schemes, a suitably scaled random sequence converges to a switching diffusion process.
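A sketch of a common aggregation recipe for such nearly completely decomposable chains: each block of states becomes one super state, with inter-block transitions averaged under the within-block stationary distributions. The paper's scaling and convergence analysis are not reproduced here; the function and names are ours.

```python
import numpy as np

def aggregate_chain(P, blocks):
    """Aggregate a nearly completely decomposable chain: for each block,
    compute the stationary distribution of the (renormalized) within-block
    chain, then average the transitions into every other block under it.
    `blocks` is a partition of the states into the fast subgroups."""
    K = len(blocks)
    Pbar = np.zeros((K, K))
    for a, Ba in enumerate(blocks):
        Pa = P[np.ix_(Ba, Ba)]
        Pa = Pa / Pa.sum(axis=1, keepdims=True)      # within-block chain
        w, v = np.linalg.eig(Pa.T)
        pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
        pi = pi / pi.sum()                           # block stationary dist.
        for b, Bb in enumerate(blocks):
            Pbar[a, b] = pi @ P[np.ix_(Ba, Bb)].sum(axis=1)
    return Pbar / Pbar.sum(axis=1, keepdims=True)
```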
