Similar Articles
20 similar articles found.
1.
We consider a randomly forced Ginzburg–Landau equation on an unbounded domain. The forcing is smooth and homogeneous in space and white noise in time. We prove existence and smoothness of solutions, existence of an invariant measure for the corresponding Markov process and we define the spatial densities of topological entropy, of measure-theoretic entropy, and of upper box-counting dimension. We prove inequalities relating these different quantities. The proof of existence of an invariant measure uses the compact embedding of some space of uniformly smooth functions into the space of locally square-integrable functions and a priori bounds on the semi-flow in these spaces. The bounds on the entropy follow from spatially localised estimates on the rate of divergence of nearby orbits and on the smoothing effect of the evolution. Received: 21 June 2000 / Accepted: 28 September 2001

2.
A way to study ergodic and measure theoretic aspects of interval maps is by means of the Markov extension. This tool, which ties interval maps to the theory of Markov chains, was introduced by Hofbauer and Keller. More generally known are induced maps, i.e. maps that, restricted to an element of an interval partition, coincide with an iterate of the original map. We will discuss the relation between the Markov extension and induced maps. The main idea is that an induced map of an interval map often appears as a first return map in the Markov extension. For S-unimodal maps, we derive a necessary condition for the existence of invariant probability measures which are absolutely continuous with respect to Lebesgue measure. Two corollaries are given.
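As a toy illustration of the induced-map idea above (not the paper's construction), the following sketch computes a first return map numerically; the doubling map and the set A = [0, 1/2) are assumptions chosen purely for the demo:

```python
def first_return(f, in_A, x, max_iter=10**6):
    """First return map to a set A: starting from x in A, iterate f and
    return (the first point where the orbit re-enters A, the return time)."""
    assert in_A(x), "x must lie in A"
    y = f(x)
    for n in range(1, max_iter + 1):
        if in_A(y):
            return y, n
        y = f(y)
    raise RuntimeError("orbit did not return within max_iter iterates")

# demo: doubling map on [0,1), induced on A = [0, 1/2)
double = lambda x: (2.0 * x) % 1.0
in_A = lambda x: x < 0.5
```

For example, the orbit of 0.3 leaves A (0.3 → 0.6) and re-enters at 0.2, so the induced map sends 0.3 to 0.2 with return time 2.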

3.
We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.

4.
We show how to construct a topological Markov map of the interval whose invariant probability measure is the stationary law of a given stochastic chain of infinite order. In particular we characterize the maps corresponding to stochastic chains with memory of variable length. The problem treated here is the converse of the classical construction of the Gibbs formalism for Markov expanding maps of the interval.

5.
We introduce a statistical mechanics formalism for the study of constrained graph evolution as a Markovian stochastic process, in analogy with that available for spin systems, deriving its basic properties and highlighting the role of the ‘mobility’ (the number of allowed moves for any given graph). As an application of the general theory we analyze the properties of degree-preserving Markov chains based on elementary edge switchings. We give an exact yet simple formula for the mobility in terms of the graph’s adjacency matrix and its spectrum. This formula allows us to define acceptance probabilities for edge switchings, such that the Markov chains become controlled Glauber-type detailed balance processes, designed to evolve to any required invariant measure (representing the asymptotic frequencies with which the allowed graphs are visited during the process). As a corollary we also derive a condition in terms of simple degree statistics, sufficient to guarantee that, in the limit where the number of nodes diverges, even for state-independent acceptance probabilities of proposed moves the invariant measure of the process will be uniform. We test our theory on synthetic graphs and on realistic larger graphs as studied in cellular biology, showing explicitly that, for instances where the simple edge swap dynamics fails to converge to the uniform measure, a suitably modified Markov chain instead generates the correct phase space sampling.
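A minimal sketch of such a mobility-corrected, degree-preserving chain, assuming a small simple undirected graph where the mobility n(A) can be counted by brute force (rather than via the paper's spectral formula): proposing a uniformly random allowed edge swap and accepting with probability min(1, n(A)/n(A')) makes the uniform measure invariant.

```python
import random

def allowed_swaps(E):
    """Enumerate degree-preserving rewirings of edge pairs in a simple
    undirected graph: {(a,b),(c,d)} -> {(a,d),(c,b)} or {(a,c),(b,d)},
    rejecting moves that create self-loops or parallel edges."""
    el = sorted(E)
    moves = []
    for i in range(len(el)):
        for j in range(i + 1, len(el)):
            (a, b), (c, d) = el[i], el[j]
            for x, y, u, v in ((a, d, c, b), (a, c, b, d)):
                f1 = (min(x, y), max(x, y))
                f2 = (min(u, v), max(u, v))
                if x == y or u == v or f1 == f2 or f1 in E or f2 in E:
                    continue
                moves.append((el[i], el[j], f1, f2))
    return moves

def sample_degree_constrained(E, steps, rng):
    """Edge-swap chain with acceptance min(1, n(A)/n(A')): the ratio of
    mobilities corrects the state-dependent proposal distribution so that
    the uniform measure over degree-constrained graphs is invariant."""
    E = {(min(a, b), max(a, b)) for a, b in E}
    for _ in range(steps):
        moves = allowed_swaps(E)
        if not moves:
            break
        e1, e2, f1, f2 = rng.choice(moves)
        E_new = (E - {e1, e2}) | {f1, f2}
        n_old, n_new = len(moves), len(allowed_swaps(E_new))
        if n_new and rng.random() < min(1.0, n_old / n_new):
            E = E_new
    return E
```

Every accepted move replaces two edges by a rewiring with the same endpoint multiset, so all vertex degrees are conserved along the chain.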

6.
A Markov process which may be thought of as a classical lattice spin system is considered. States of the system are probability measures on the configuration space, and we study the evolution of the free energy of these states with time. It is proved that for all initial states the free energy is nonincreasing and that it strictly decreases from any initial state which is shift invariant but not an equilibrium state. Finally we show that the state of the system converges weakly to the set of Gibbsian distributions for the given interaction, and that all shift invariant equilibrium states are Gibbsian distributions. This work was done while the author was a postdoctoral fellow in the Adolph C. and Mary Sprague Miller Institute for Basic Research in Science.

7.
Let S: [0, 1] → [0, 1] be a piecewise convex transformation satisfying some conditions which guarantee the existence of an absolutely continuous invariant probability measure. We prove the convergence of a class of Markov finite approximations for computing the invariant measure, using a compactness argument for L^1-spaces. Research was supported in part by a grant from the Minority Scholars Program through the University of Southern Mississippi.
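Markov finite approximations of this kind can be illustrated with an Ulam-type scheme: partition [0, 1] into n cells, estimate the cell-to-cell transition fractions under S, and take the stationary vector of the resulting stochastic matrix. The doubling map used below is an assumption for the demo (the paper's convergence result concerns piecewise convex maps):

```python
def ulam_matrix(S, n, samples=200):
    """Ulam/Markov approximation on n equal cells of [0,1]: entry P[i][j]
    is the fraction of sample points from cell i that S maps into cell j."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(samples):
            x = (i + (k + 0.5) / samples) / n   # evenly spread points in cell i
            j = min(int(S(x) * n), n - 1)
            P[i][j] += 1.0 / samples
    return P

def stationary_density(P, iters=200):
    """Stationary vector of the row-stochastic matrix P (power iteration),
    rescaled to a piecewise-constant probability density on [0,1]."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(v)
        v = [w / s for w in v]
    return [w * n for w in v]
```

For the doubling map x ↦ 2x mod 1 the approximate invariant density comes out flat, matching the known Lebesgue invariant measure.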

8.
Joseph L. McCauley, Physica A 382(2) (2007), 445–452
The purpose of this comment is to correct mistaken assumptions and claims made in the paper “Stochastic feedback, nonlinear families of Markov processes, and nonlinear Fokker-Planck equations” by T. D. Frank [T.D. Frank, Stochastic feedback, non-linear families of Markov processes, and nonlinear Fokker-Planck equations, Physica A 331 (2004) 391]. Our comment centers on the claims of a “non-linear Markov process” and a “non-linear Fokker-Planck equation.” First, memory in transition densities is misidentified as a Markov process. Second, the paper assumes that one can derive a Fokker-Planck equation from a Chapman-Kolmogorov equation, but no proof was offered that a Chapman-Kolmogorov equation exists for the memory-dependent processes considered. A “non-linear Markov process” is claimed on the basis of a non-linear diffusion pde for a 1-point probability density. We show that, regardless of which initial value problem one may solve for the 1-point density, the resulting stochastic process, defined necessarily by the conditional probabilities (the transition probabilities), is either an ordinary linearly generated Markovian one, or else is a linearly generated non-Markovian process with memory. We provide explicit examples of diffusion coefficients that reflect both the Markovian and the memory-dependent cases. So there is neither a “non-linear Markov process”, nor a “non-linear Fokker-Planck equation” for a conditional probability density. The confusion rampant in the literature arises in part from labeling a non-linear diffusion equation for a 1-point probability density as “non-linear Fokker-Planck,” whereas neither a 1-point density nor an equation of motion for a 1-point density can define a stochastic process. In a closely related context, we point out that Borland misidentified a translation invariant 1-point probability density derived from a non-linear diffusion equation as a conditional probability density. 
Finally, in Appendix A we present the theory of Fokker-Planck pdes and Chapman-Kolmogorov equations for stochastic processes with finite memory.

9.
We consider Markov processes arising from small random perturbations of non-chaotic dynamical systems. Under rather general conditions we prove that, with large probability, the distance between two arbitrary paths starting close to a same attractor of the unperturbed system decreases exponentially fast in time. The case of paths starting in different basins of attraction is also considered, as well as some applications to the analysis of the invariant measure and to elliptic problems with a small parameter in front of the second derivatives. The proof is based on a multiscale analysis of the typical trajectories of the Markov process; this analysis is done using techniques involved in the proof of Anderson localization for disordered quantum systems.

10.
We establish bounds for the measure of deviation sets associated to continuous observables with respect to not necessarily invariant weak Gibbs measures. Under some mild assumptions, we obtain upper and lower bounds for the measure of deviation sets of some non-uniformly expanding maps, including quadratic maps and robust multidimensional non-uniformly expanding local diffeomorphisms. For that purpose, a measure theoretical weak form of specification is introduced and proved to hold for the robust classes of multidimensional non-uniformly expanding local diffeomorphisms and Viana maps.

11.
12.
We give a rigorous construction of a stochastic continuum P(φ)₂ model in finite Euclidean space-time volume. It is obtained by a weak solution of a non-linear stochastic differential equation in a space of distributions. The resulting Markov process has continuous sample paths, and is ergodic with the finite volume Euclidean P(φ)₂ measure as its unique invariant measure. The procedure may be called stochastic field quantization. Laboratoire Associé 280 au CNRS. Supported in part by GNSM and INFN.

13.
We study learning of probability distributions characterized by an unknown symmetry direction. Based on an entropic performance measure and the variational method of statistical mechanics we develop exact upper and lower bounds on the scaled critical number of examples below which learning of the direction is impossible. The asymptotic tightness of the bounds suggests an asymptotically optimal method for learning nonsmooth distributions.

14.
The sampling method proposed by Metropolis et al. (J. Chem. Phys. 21 (1953), 1087) requires the simulation of a Markov chain with a specified π as its stationary distribution. Hastings (Biometrika 57 (1970), 97) outlined a general procedure for constructing and simulating such a Markov chain. The matrix P = {p_ij} of transition probabilities is constructed using a defined symmetric function s and an arbitrary transition matrix Q. With respect to asymptotic variance reduction, Peskun (Biometrika 60 (1973), 607) determined, for a given Q, the optimum choice for s_ij. Here, guidelines are given for choosing Q so that the resulting Markov chain sampling method is as precise as is practically possible. Examples illustrating the use of the guidelines, including potential applications to problems in statistical mechanics and to the problem of estimating the probability of a simple event by “hit-or-miss” Monte Carlo in conjunction with Markov chain sampling, are discussed.
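A minimal sketch of the Hastings construction for a finite state space, using Peskun's optimal (Metropolis-type) choice of acceptance function; the target π and the nearest-neighbour proposal Q used in the test are illustrative assumptions, not from the paper:

```python
import random

def mh_step(i, pi, Q, rng):
    """One Hastings step: propose j from row Q[i] of the proposal matrix,
    accept with probability min(1, pi[j]*Q[j][i] / (pi[i]*Q[i][j])), which
    is Peskun's optimal choice of the acceptance function for given Q."""
    n = len(pi)
    j = rng.choices(range(n), weights=Q[i])[0]
    ratio = (pi[j] * Q[j][i]) / (pi[i] * Q[i][j])
    return j if rng.random() < min(1.0, ratio) else i

def run_chain(pi, Q, steps, rng, start=0):
    """Run the chain and return the empirical visit frequencies, which
    converge to the normalized target pi."""
    counts = [0] * len(pi)
    state = start
    for _ in range(steps):
        state = mh_step(state, pi, Q, rng)
        counts[state] += 1
    return [c / steps for c in counts]
```

Note that pi may be unnormalized: only the ratio pi[j]/pi[i] enters the acceptance probability, which is what makes the method usable when the normalizing constant is unknown (as in statistical mechanics).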

15.
One of the most difficult problems in the field of non-linear time series analysis is the unequivocal characterization of a measured signal. We present a practicable procedure which allows one to decide if a given time series is pure noise, chaotic but distorted by noise, purely chaotic, or a Markov process. Furthermore, the method gives an estimate of the Kolmogorov-Sinai (KS) entropy and the noise level. The procedure is based on a measure of the sensitive dependence on the initial conditions which is called ε-information flow. This measure generalizes the concept of KS entropy and characterizes the underlying dynamics. The ε-information flow is approximated by the calculation of various correlation integrals.
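A rough sketch of the correlation-integral machinery (a simplified stand-in for the paper's ε-information flow, not its actual estimator): comparing correlation integrals at consecutive embedding dimensions yields a coarse-grained entropy rate for a scalar time series.

```python
import math

def corr_integral(series, m, eps):
    """Correlation integral C_m(eps): the fraction of pairs of m-point
    delay vectors that are closer than eps in the maximum norm."""
    N = len(series) - m + 1
    vecs = [series[i:i + m] for i in range(N)]
    close, pairs = 0, 0
    for i in range(N):
        for j in range(i + 1, N):
            pairs += 1
            if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < eps:
                close += 1
    return close / pairs

def entropy_estimate(series, m, eps):
    """Coarse-grained entropy rate ln(C_m / C_{m+1}); for small eps and a
    chaotic signal this approximates the correlation (K2) entropy, a
    lower bound on the KS entropy."""
    return math.log(corr_integral(series, m, eps) / corr_integral(series, m + 1, eps))
```

On an orbit of the fully chaotic logistic map x ↦ 4x(1 − x), whose KS entropy is ln 2 ≈ 0.69, the estimate lands in the right ballpark even for modest data sizes.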

16.
We study a class of dissipative nonlinear PDE's forced by a random force η^ω(t, x), with the space variable x varying in a bounded domain. The class contains the 2D Navier–Stokes equations (under periodic or Dirichlet boundary conditions), and the forces we consider are those common in statistical hydrodynamics: they are random fields smooth in x and stationary, short-correlated in time t. In this paper, we confine ourselves to “kick forces” of the form
η(t, x) = Σ_k δ(t − k) η_k(x),
where the η_k's are smooth bounded identically distributed random fields. The equation in question defines a Markov chain in an appropriately chosen phase space (a subset of a function space) that contains the zero function and is invariant for the (random) flow of the equation. Concerning this Markov chain, we prove the following main result (see Theorem 2.2): The Markov chain has a unique invariant measure. To prove this theorem, we present a construction assigning, to any invariant measure, a Gibbs measure for a 1D system with compact phase space and apply a version of the Ruelle–Perron–Frobenius uniqueness theorem to the corresponding Gibbs system. We also discuss ergodic properties of the invariant measure and corresponding properties of the original randomly forced PDE. Received: 24 January 2000 / Accepted: 17 February 2000

17.
We compute the pressure of the random energy model (REM) and generalized random energy model (GREM) by establishing variational upper and lower bounds. For the upper bound, we generalize Guerra’s “broken replica symmetry bounds,” and identify the random probability cascade as the appropriate random overlap structure for the model. For the REM the lower bound is obtained, in the high temperature regime using Talagrand’s concentration of measure inequality, and in the low temperature regime using convexity and the high temperature formula. The lower bound for the GREM follows from the lower bound for the REM by induction. While the argument for the lower bound is fairly standard, our proof of the upper bound is new.

18.
We consider an N-particle quantum system in ℝ^d, with interaction and in the presence of a random external alloy-type potential (a continuous N-particle Anderson model). We establish Wegner-type bounds (inequalities) for such models, giving upper bounds for the probability that random spectra of Hamiltonians in finite volumes intersect a given set.

19.
We study correlation functions of the totally asymmetric simple exclusion process (TASEP) in discrete time with backward sequential update. We prove a determinantal formula for the generalized Green function which describes transitions between positions of particles at different individual time moments. In particular, the generalized Green function defines a probability measure at staircase lines on the space-time plane. The marginals of this measure are the TASEP correlation functions in the space-time region not covered by the standard Green function approach. As an example, we calculate the current correlation function, that is, the joint probability distribution of times taken by selected particles to travel a given distance. An asymptotic analysis shows that current fluctuations converge to the Airy2 process.
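The backward sequential update dynamics itself (not the determinantal analysis) can be sketched as follows; the finite open segment without particle injection or extraction is an assumption made for the demo:

```python
import random

def tasep_backward_step(occ, p, rng):
    """One discrete-time TASEP step with backward sequential update:
    sites are swept right to left, and a particle hops one site to the
    right with probability p when the target site is empty. Because the
    target site was updated earlier in the same sweep, a whole block of
    adjacent particles can advance within a single time step."""
    for i in range(len(occ) - 2, -1, -1):
        if occ[i] == 1 and occ[i + 1] == 0 and rng.random() < p:
            occ[i], occ[i + 1] = 0, 1
    return occ
```

Starting from a step initial condition (a block of particles to the left of empty sites), repeated steps conserve the particle number and preserve the ordering of particles, while the front spreads to the right.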

20.
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separate within-task training and test sets, like model agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.

