On oracle inequalities related to data-driven hard thresholding
Authors: Golubev Yuri
Institution: CNRS, 39 rue F. Joliot-Curie, 13453 Marseille, France
Abstract: This paper focuses on computing a nearly optimal penalty in the method of empirical risk minimization. It is assumed that we have at our disposal the noisy data ${Y=\theta+\sigma\xi}$, where ${\theta\in \mathbb{R}^n}$ is an unknown vector and ${\xi\in \mathbb{R}^n}$ is a standard white Gaussian noise. It is also assumed that the underlying vector ${\theta}$ is sparse, and therefore to recover ${\theta}$ we use the hard thresholding estimate ${\hat\theta_i(Y,t)=Y_i{\bf 1}\{|Y_i|\ge t\}}$. In order to adapt to the unknown sparsity of ${\theta}$, the threshold t is assumed to be data-driven. A very popular approach for computing such thresholds is based on the principle of empirical risk minimization, which suggests the data-driven threshold ${\hat t =\text{arg\,min}_t\{\|Y-\hat\theta(Y,t)\|^2+\mathrm{Pen}(Y,t)\}}$, where Pen(Y, t) is a penalty function. In this paper, it is proved with the help of a sharp oracle inequality that the main term in the optimal penalty is given by ${2\sigma^2\,\#\{i : |Y_i|\ge t\}\,\log\bigl[n/\#\{i : |Y_i|\ge t\}\bigr]}$.
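The following Python sketch is not from the paper: the function names, the search over candidate thresholds taken from the observed magnitudes, and a lower-order correction added to the penalty are assumptions made here for illustration. It shows how such a data-driven hard threshold can be computed by minimizing the penalized residual sum of squares, with the penalty built around the main term ${2\sigma^2\,\#\{i : |Y_i|\ge t\}\,\log[n/\#\{i : |Y_i|\ge t\}]}$.

```python
import numpy as np

def hard_threshold(y, t):
    # theta_hat_i(Y, t) = Y_i * 1{|Y_i| >= t}
    return y * (np.abs(y) >= t)

def penalty(y, t, sigma):
    # Main term 2*sigma^2 * k * log(n/k) with k = #{i : |Y_i| >= t},
    # plus a lower-order term 2*sigma^2 * k added here (an assumption,
    # not from the paper) to avoid the degenerate minimizer k = n.
    n = y.size
    k = int(np.count_nonzero(np.abs(y) >= t))
    if k == 0:
        return 0.0
    return 2.0 * sigma**2 * k * (np.log(n / k) + 1.0)

def data_driven_threshold(y, sigma):
    # hat t = argmin_t { ||Y - theta_hat(Y, t)||^2 + Pen(Y, t) },
    # searching over the observed magnitudes |Y_i| (plus one value above
    # the maximum, which corresponds to keeping no coordinate at all).
    candidates = np.append(np.sort(np.abs(y)), np.abs(y).max() + 1.0)
    crits = [np.sum((y - hard_threshold(y, t)) ** 2) + penalty(y, t, sigma)
             for t in candidates]
    return candidates[int(np.argmin(crits))]

# Toy example: a sparse theta observed in standard white Gaussian noise.
rng = np.random.default_rng(0)
n, sigma = 1000, 1.0
theta = np.zeros(n)
theta[:20] = 5.0
y = theta + sigma * rng.standard_normal(n)
t_hat = data_driven_threshold(y, sigma)
theta_hat = hard_threshold(y, t_hat)
print(t_hat, int(np.count_nonzero(theta_hat)))
```

In this toy run the selected threshold keeps roughly the 20 large coordinates and zeroes out the rest; the exact behavior depends on the lower-order term, which in the paper is derived rather than assumed.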