Abstract: | Probably approximately correct (PAC) learning theory provides a framework for assessing the learning properties of static models whose data are assumed to be independently and identically distributed (i.i.d.). The present article first extends PAC learning to cover modeling tasks with m‐dependent sequences of data, where the data are assumed to be marginally distributed according to a fixed but arbitrary probability. The resulting framework is then applied to evaluate the learning of Volterra kernel FIR models. © 2002 Wiley Periodicals, Inc. |