Generalized regularized least-squares learning with predefined features in a Hilbert space
Author(s)
Date Issued
2007
ISBN
978-0-262-19568-3
ISSN
1049-5258
Citation
Advances in Neural Information Processing Systems, 2007, pp. 881-888
Type
Conference Paper
Abstract
Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical error together with the model's complexity. By the representer theorem, the solution is a linear combination of translates of a kernel. This paper investigates a generalized form of the representer theorem for kernel-based learning. After mapping predefined features and translates of a kernel simultaneously into a hypothesis space through a specific kernel construction, we propose a new algorithm that uses a generalized regularizer, leaving part of the space unregularized. With a squared loss measuring the empirical error, a simple convex solution is obtained that combines the predefined features with translates of the kernel. Empirical evaluations confirm the effectiveness of the algorithm on supervised learning tasks.
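The abstract's idea of regularizing only the kernel part while leaving predefined features unpenalized can be illustrated with the classical semiparametric regularized least-squares system (as used, e.g., in smoothing splines): solve (K + λI)α + Φβ = y with Φᵀα = 0, where Φ holds the unregularized features. This is a minimal sketch under assumed data, kernel, and parameter choices, not the paper's exact algorithm:

```python
import numpy as np

# Semiparametric regularized least squares: the kernel expansion is
# penalized, while predefined features (here a constant and a linear
# term) are left unregularized. Data and hyperparameters are illustrative.

def gaussian_kernel(x, z, gamma=5.0):
    # Pairwise Gaussian (RBF) kernel matrix for 1-D inputs.
    return np.exp(-gamma * (x[:, None] - z[None, :]) ** 2)

def fit(x, y, lam=0.01, gamma=5.0):
    n = len(x)
    K = gaussian_kernel(x, x, gamma)
    Phi = np.column_stack([np.ones(n), x])  # predefined, unregularized features
    m = Phi.shape[1]
    # Block KKT system: (K + lam*I) a + Phi b = y,  Phi^T a = 0
    A = np.block([[K + lam * np.eye(n), Phi],
                  [Phi.T, np.zeros((m, m))]])
    rhs = np.concatenate([y, np.zeros(m)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]  # kernel coefficients a, feature weights b

def predict(x_train, a, b, x_new, gamma=5.0):
    Kx = gaussian_kernel(x_new, x_train, gamma)
    Phi_new = np.column_stack([np.ones(len(x_new)), x_new])
    return Kx @ a + Phi_new @ b

# Toy data: a linear trend (captured by the unregularized features)
# plus a smooth nonlinear wiggle (captured by the kernel part).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 40)
y = 1.0 + 2.0 * x + 0.3 * np.sin(4.0 * x) + 0.05 * rng.standard_normal(40)
a, b = fit(x, y)
mse = np.mean((predict(x, a, b, x) - y) ** 2)
```

The constraint Φᵀα = 0 keeps the kernel component orthogonal (in sample) to the span of the predefined features, so the linear trend is absorbed by β while the kernel coefficients model only the nonlinear residual.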