An Iterative Sparse-Group Lasso
Authors: Juan C. Laria, M. Carmen Aguilera-Morillo, Rosa E. Lillo
Affiliations: 1. Department of Statistics, University Carlos III of Madrid, Madrid, Spain; 2. UC3M-BS Santander Big Data Institute, Madrid, Spain. Contact: jlaria@est-econ.uc3m.es
Abstract: In high-dimensional supervised learning problems, sparsity constraints in the solution often lead to better performance and interpretability of the results. For problems in which covariates are grouped and a sparse structure is desired both at the group and within-group levels, the sparse-group lasso (SGL) regularization method has proved to be very efficient. Under its simplest formulation, the solution provided by this method depends on two weight parameters that control the penalization of the coefficients. Selecting these weight parameters represents a major challenge. In most applications of the SGL, this issue is set aside, and the parameters are either fixed based on prior information about the data or chosen to minimize some error function over a grid of candidate values. However, an appropriate choice of the parameters deserves more attention, considering that it plays a key role in the structure and interpretation of the solution. To address this, we present a gradient-free coordinate descent algorithm that automatically selects the regularization parameters of the SGL. We focus on a more general formulation of this problem, which also includes individual penalties for each group. The advantages of our approach are illustrated using both real and synthetic datasets. Supplementary materials for this article are available online.
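To make the abstract's two ingredients concrete, the following is a minimal Python sketch, not the authors' implementation: `sgl_penalty` writes out the standard two-parameter SGL penalty that the abstract refers to (the convex combination of a lasso term and a group-lasso term, weighted by λ and α, as in Simon et al.), and `coordinate_search` illustrates the generic idea of a gradient-free coordinate descent over the regularization parameters, minimizing a user-supplied validation-error objective. All names here (`sgl_penalty`, `coordinate_search`, `cv_error`) are hypothetical.

import numpy as np

def sgl_penalty(beta, groups, lam, alpha):
    """Standard two-parameter SGL penalty:
    lam * (alpha * ||beta||_1 + (1 - alpha) * sum_g sqrt(p_g) * ||beta_g||_2),
    where `groups` is an array assigning a group label to each coefficient
    and p_g is the size of group g. (Sketch; not the paper's generalized
    formulation with one penalty per group.)"""
    groups = np.asarray(groups)
    group_norms = sum(
        np.sqrt(np.sum(groups == g)) * np.linalg.norm(beta[groups == g])
        for g in np.unique(groups)
    )
    return lam * (alpha * np.sum(np.abs(beta)) + (1.0 - alpha) * group_norms)

def coordinate_search(objective, x0, bounds, n_sweeps=5, n_grid=11):
    """Gradient-free coordinate descent over regularization parameters:
    cyclically minimize `objective` along one coordinate at a time via a
    coarse 1-D grid search, holding the other coordinates fixed."""
    x = np.array(x0, dtype=float)
    for _ in range(n_sweeps):
        for j, (lo, hi) in enumerate(bounds):
            candidates = np.linspace(lo, hi, n_grid)
            vals = [objective(np.concatenate([x[:j], [c], x[j + 1:]]))
                    for c in candidates]
            x[j] = candidates[int(np.argmin(vals))]
    return x

A typical use, assuming a hypothetical `cv_error(lam, alpha)` that fits the SGL model on training data and returns validation error, would be `coordinate_search(lambda t: cv_error(t[0], t[1]), x0=[0.1, 0.5], bounds=[(1e-4, 1.0), (0.0, 1.0)])`. The coordinate-wise search avoids differentiating the validation error with respect to the parameters, which is the sense in which the procedure is gradient-free.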
Keywords: Coordinate descent; Gradient-free; High-dimension; Optimization; Regularization