Multi-penalty regularization in learning theory
Institution: Department of Mathematics, Indian Institute of Technology Delhi, New Delhi 110016, India
Abstract: In this paper we establish error estimates for multi-penalty regularization under a general smoothness assumption in the context of learning theory. One motivation for this work is to provide a theoretical convergence analysis of two-parameter regularization in the manifold learning setting. In this spirit, we obtain error bounds for the manifold learning problem using the more general framework of multi-penalty regularization. We propose a new parameter choice rule, the "balanced-discrepancy principle", and analyze the convergence of the scheme with the help of the estimated error bounds. We show that multi-penalty regularization with the proposed parameter choice achieves convergence rates similar to those of single-penalty regularization. Finally, on a series of test samples, we demonstrate the superiority of multi-parameter regularization over single-penalty regularization.
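To illustrate the general idea of multi-penalty regularization referred to in the abstract, the following is a minimal sketch in a finite-dimensional least-squares setting, not the paper's kernel-based formulation: the estimator minimizes a data-fidelity term plus two penalty terms, each weighted by its own regularization parameter. All names (`multi_penalty_solve`, the matrices `A` and `L`) are illustrative assumptions, and the first-difference operator `L` stands in for a generic second smoothness functional.

```python
import numpy as np

def multi_penalty_solve(A, y, L, lam1, lam2):
    """Closed-form minimizer of ||A x - y||^2 + lam1 ||x||^2 + lam2 ||L x||^2.

    Setting the normal-equation gradient to zero gives
    (A^T A + lam1 I + lam2 L^T L) x = A^T y.
    """
    n = A.shape[1]
    M = A.T @ A + lam1 * np.eye(n) + lam2 * (L.T @ L)
    return np.linalg.solve(M, A.T @ y)

# Synthetic example: noisy linear observations of a ground-truth vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.1 * rng.standard_normal(50)

# Second penalty: a first-difference operator encouraging smoothness.
L = np.diff(np.eye(10), axis=0)

x_hat = multi_penalty_solve(A, y, L, lam1=1e-2, lam2=1e-2)
```

With `lam2 = 0` the scheme reduces to ordinary single-penalty (ridge) regularization, which is the comparison baseline the abstract draws.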
Keywords: Learning theory; Manifold learning; Multi-penalty regularization; Error estimate; Adaptive parameter choice