Context tree selection: A unifying view
Authors: A. Garivier, F. Leonardi
Affiliations:
  • (a) LTCI, CNRS, Telecom ParisTech, 75634 Paris Cedex 13, France
  • (b) Instituto de Matemática e Estatística, Universidade de São Paulo, Brazil
Abstract: Context tree models were introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures for context tree estimation: Rissanen’s algorithm Context and penalized maximum likelihood. After first showing how the two procedures are related, we prove finite-horizon bounds on the probabilities of over- and under-estimation. For over-estimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities that are of independent interest. The under-estimation results rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], and Leonardi (2010) [22], refining the asymptotic results of Bühlmann and Wyner (1999) [4] and Csiszár and Talata (2006) [9].
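
To make the penalized maximum likelihood procedure concrete, here is a minimal illustrative sketch (in Python) of BIC-style context tree selection for a binary sequence: counts are collected for every candidate context up to a maximal depth, and the tree is pruned bottom-up whenever the log-likelihood gain of a refinement does not exceed a penalty proportional to the number of extra free parameters. This is not the paper's exact procedure; the function names (count_contexts, estimate_context_tree), the maximal depth, and the penalty constant c are assumptions introduced only for this example.

    from collections import defaultdict
    from math import log
    import random

    def count_contexts(x, alphabet, max_depth):
        # For every context w with |w| <= max_depth, count how often each symbol follows w.
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(max_depth, len(x)):
            for d in range(max_depth + 1):
                context = tuple(x[i - d:i])      # the d symbols preceding position i
                counts[context][x[i]] += 1
        return counts

    def log_likelihood(transition_counts):
        # Maximized log-likelihood of the symbols observed after a single context.
        total = sum(transition_counts.values())
        return sum(c * log(c / total) for c in transition_counts.values() if c > 0)

    def prune(context, counts, alphabet, max_depth, penalty):
        # Bottom-up pruning: keep the refinement of `context` only if the gain in
        # log-likelihood exceeds a BIC-type cost for the additional free parameters.
        if len(context) == max_depth:
            return {context}
        children = [(a,) + context for a in alphabet if (a,) + context in counts]
        if not children:
            return {context}
        leaves = set()
        for child in children:
            leaves |= prune(child, counts, alphabet, max_depth, penalty)
        gain = sum(log_likelihood(counts[w]) for w in leaves) - log_likelihood(counts[context])
        cost = penalty * (len(leaves) - 1) * (len(alphabet) - 1)
        return leaves if gain > cost else {context}

    def estimate_context_tree(x, alphabet, max_depth=4, c=0.5):
        # Return the leaves (contexts) of the estimated tree; the root () means i.i.d.
        counts = count_contexts(x, alphabet, max_depth)
        penalty = c * log(len(x))                # BIC-type penalty per free parameter
        return prune((), counts, alphabet, max_depth, penalty)

    # Usage: on an i.i.d. uniform binary sample the estimated tree should, with high
    # probability, collapse to the root context ().
    random.seed(0)
    x = [random.randint(0, 1) for _ in range(5000)]
    print(estimate_context_tree(x, alphabet=[0, 1]))

The penalty c * log(n) mirrors the Bayesian information criterion mentioned in the keywords; the choice of the constant c and of the maximal depth is left unspecified here, as the paper's non-asymptotic analysis is precisely about how such choices control the probabilities of over- and under-estimation.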
Keywords: Algorithm Context; Penalized maximum likelihood; Model selection; Variable length Markov chains; Bayesian information criterion; Deviation inequalities
This article is indexed in ScienceDirect and other databases.