Similar articles
 10 similar articles found (search time: 140 ms)
1.
In this paper we study conditional quantile regression by learning algorithms generated from Tikhonov regularization schemes associated with the pinball loss and varying Gaussian kernels. Our main goal is to provide convergence rates for the algorithm and to illustrate differences between conditional quantile regression and least squares regression. Applying varying Gaussian kernels improves the approximation ability of the algorithm. Bounds for the sample error are achieved by using a projection operator, a variance-expectation bound derived from a condition on the conditional distributions, and a tight bound for the covering numbers involving the Gaussian kernels.
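As a rough illustration of the setup in this abstract, the following sketch fits a Tikhonov-regularized kernel quantile estimator with the pinball loss by subgradient descent. The learning rate, iteration count, kernel width, and the toy data are illustrative assumptions, not the paper's algorithm or rates.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Gram matrix K[i, j] = exp(-||X_i - Y_j||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def pinball_quantile_fit(X, y, tau=0.5, sigma=0.5, lam=1e-3, lr=0.02, iters=3000):
    """Subgradient descent on (1/n) sum_i rho_tau(y_i - f(x_i)) + lam * ||f||_K^2
    for f(x) = sum_j c_j K(x_j, x) in the Gaussian RKHS."""
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    c = np.zeros(n)
    for _ in range(iters):
        r = y - K @ c
        # subgradient of rho_tau w.r.t. r: tau where r > 0, tau - 1 where r < 0
        d = np.where(r > 0, tau, tau - 1.0)
        grad = -K @ d / n + 2 * lam * K @ c
        c -= lr * grad
    return c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(60)
c = pinball_quantile_fit(X, y)
fit = gaussian_kernel(X, X, 0.5) @ c
```

With tau = 0.5 the pinball loss reduces to (half) the absolute loss, so the fit tracks the conditional median; other tau values tilt the loss toward upper or lower quantiles.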

2.
The least squares regression problem is considered for coefficient-based regularization schemes with an ℓ1 penalty. The learning algorithm is analyzed with samples drawn from unbounded sampling processes. The purpose of this paper is to present an elaborate concentration estimate for the algorithms by means of a novel stepping-stone technique. The learning rates derived from our analysis can be achieved in a more general setting. Our refined analysis leads to satisfactory learning rates even for non-smooth kernels.
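A coefficient-based scheme with an ℓ1 penalty can be sketched with ISTA (proximal gradient with soft thresholding). The objective form, step size rule, and toy Gram matrix below are illustrative assumptions; the paper's analysis, not this solver, is its contribution.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_coefficient_regression(K, y, lam=0.05, iters=500):
    """ISTA for the coefficient-based objective (1/n) ||y - K c||^2 + lam * ||c||_1."""
    n = len(y)
    step = n / (2 * np.linalg.norm(K, 2) ** 2)  # 1 / Lipschitz constant of the smooth part
    c = np.zeros(n)
    for _ in range(iters):
        grad = 2.0 * K.T @ (K @ c - y) / n
        c = soft_threshold(c - step * grad, step * lam)
    return c

# toy usage on a Gaussian gram matrix (kernel choice is illustrative)
x = np.linspace(-1, 1, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
y = np.cos(3 * x)
c = l1_coefficient_regression(K, y)
```

The ℓ1 penalty drives many coefficients exactly to zero, which is what makes coefficient-based schemes attractive when the kernel need not be positive definite or smooth.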

3.
In this paper, we propose a two-step kernel learning method based on support vector regression (SVR) for financial time series forecasting. Given a number of candidate kernels, our method learns a sparse linear combination of these kernels so that the resulting kernel can be used to predict well on future data. The L1-norm regularization approach is used to achieve kernel learning. Since the regularization parameter must be carefully selected, to facilitate parameter tuning, we develop an efficient solution path algorithm that computes the optimal solutions for all possible values of the regularization parameter. Our kernel learning method has been applied to forecast the S&P500 and the NASDAQ market indices and showed promising results.
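A minimal sketch of the two-step idea, under stated assumptions: step 1 fits one ridge-style predictor per candidate Gaussian kernel (the paper uses SVR), and step 2 learns an L1-penalized mixing weight over the per-kernel fits so only a few kernels survive. The kernel widths, ridge substitute, and ISTA solver are illustrative, and this is not the paper's solution path algorithm.

```python
import numpy as np

def gaussian_gram(X, sigma):
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def two_step_kernel_learning(X, y, sigmas=(0.1, 0.5, 2.0), ridge=1e-2, lam=0.02, iters=1000):
    """Step 1: one regularized fit per candidate kernel.
    Step 2: sparse L1-penalized combination of those fits."""
    n = len(y)
    cols = []
    for s in sigmas:
        K = gaussian_gram(X, s)
        alpha = np.linalg.solve(K + ridge * np.eye(n), y)
        cols.append(K @ alpha)
    P = np.stack(cols, axis=1)             # n x M matrix of per-kernel predictions
    L = 2 * np.linalg.norm(P, 2) ** 2 / n  # Lipschitz constant of the smooth part
    w = np.zeros(len(sigmas))
    for _ in range(iters):                 # ISTA for (1/n)||y - P w||^2 + lam ||w||_1
        grad = 2 * P.T @ (P @ w - y) / n
        v = w - grad / L
        w = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return w, P

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 100)
y = np.sin(3 * X) + 0.05 * rng.standard_normal(100)
w, P = two_step_kernel_learning(X, y)
```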

4.
This paper studies the conditional quantile regression problem involving the pinball loss. We introduce a concept of τ-quantile of p-average logarithmic type q to complement the previous study by Steinwart and Christmann (2008, 2011) [1] and [2]. A new comparison theorem is provided which can be used for further error analysis of some learning algorithms.

5.
This paper presents a procedure to solve the classical location median problem where the distances are measured with ℓp-norms with p > 2. In order to do so, we consider an approximated problem. The global convergence of the sequence generated by this iterative scheme is proved. This paper therefore closes the still-open question of giving a modification of the Weiszfeld algorithm that converges to an optimal solution of the median problem with ℓp-norms and p ∈ (2, ∞). The paper ends with a computational analysis of the proposed iterative schemes.
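To make the ℓp median problem concrete, here is a plain subgradient descent sketch for minimizing the sum of ℓp distances to given points. This is only an illustration of the objective; the paper's contribution is a convergent Weiszfeld-type fixed-point modification, not this generic descent, and the step size and test points are assumptions.

```python
import numpy as np

def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def lp_median(points, p=3.0, lr=0.005, iters=8000):
    """Minimize sum_i ||x - a_i||_p by plain (sub)gradient descent."""
    x = points.mean(axis=0).astype(float)          # start from the centroid
    for _ in range(iters):
        g = np.zeros_like(x)
        for a in points:
            d = x - a
            nrm = lp_norm(d, p)
            if nrm > 1e-12:
                # gradient of ||d||_p: sign(d) * |d|^(p-1) / ||d||_p^(p-1)
                g += np.sign(d) * np.abs(d) ** (p - 1) / nrm ** (p - 1)
        x = x - lr * g
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
x_star = lp_median(pts, p=3.0)
```

Because one demand point is far from the cluster, the ℓp median sits near the cluster while the centroid is dragged toward the outlier, which the objective values confirm.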

7.
This article introduces a new method for computing regression quantile functions. The method applies a finite smoothing algorithm based on smoothing the nondifferentiable quantile regression objective function ρτ. The smoothing can be done for all τ ∈ (0, 1), and the convergence is finite for any finite number of τi ∈ (0, 1), i = 1, …, N. Numerical comparison shows that the finite smoothing algorithm outperforms the simplex algorithm in computing speed. Compared with the powerful interior point algorithm, introduced in an earlier article, it is competitive overall; moreover, it is significantly faster than the interior point algorithm when the design matrix in quantile regression has a large number of covariates. Additionally, the new algorithm provides the same accuracy as the simplex algorithm. In contrast, the interior point algorithm yields only approximate solutions in theory, and rounding may be necessary to improve the accuracy of these solutions in practice.
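The idea of smoothing the nondifferentiable ρτ can be sketched as follows: replace the kink at zero by a quadratic on a small band |r| ≤ eps, keep the tilted-linear pieces outside, and run gradient descent. This is one common smoothing choice, not necessarily the article's exact smoothing function or finite algorithm; eps, the learning rate, and the uniform toy data are assumptions.

```python
import numpy as np

def smoothed_pinball_grad(r, tau, eps):
    """Derivative of a smoothed pinball loss: quadratic on |r| <= eps,
    the usual tilted-linear rho_tau outside (C^1 across the joins)."""
    return np.where(r > eps, tau,
           np.where(r < -eps, tau - 1.0,
           np.where(r >= 0, tau * r / eps, (1.0 - tau) * r / eps)))

def smooth_quantile_fit(X, y, tau, eps=1e-3, lr=0.5, iters=3000):
    """Gradient descent on the smoothed quantile regression objective."""
    Xb = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        r = y - Xb @ beta
        beta += lr * Xb.T @ smoothed_pinball_grad(r, tau, eps) / len(y)
    return beta

# intercept-only model: the fit should approach the tau-quantile of y
rng = np.random.default_rng(2)
y = rng.uniform(0.0, 1.0, 500)
beta = smooth_quantile_fit(np.empty((500, 0)), y, tau=0.9)
```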

8.
We continue our study of classification learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and general convex loss functions. The main purpose of this paper is to improve error bounds by presenting a new comparison theorem associated with general convex loss functions and Tsybakov noise conditions. Some concrete examples are provided to illustrate the improved learning rates, which demonstrate the effect of various loss functions on learning algorithms. In our …

9.
A grid approximation of the boundary value problem for a singularly perturbed parabolic reaction-diffusion equation is considered in a domain whose boundaries move along the x axis in the positive direction. For small values of the parameter ε (the coefficient of the highest order derivatives of the equation, ε ∈ (0, 1]), a moving boundary layer appears in a neighborhood of the left lateral boundary S₁ᴸ. In the case of stationary boundary layers, classical finite difference schemes on piecewise-uniform grids condensing in the layers converge ε-uniformly at a rate of O(N⁻¹ ln N + N₀⁻¹), where N and N₀ define the number of mesh points in x and t. For the problem examined in this paper, classical finite difference schemes based on uniform grids converge only under the condition N⁻¹ + N₀⁻¹ ≪ ε. It turns out that, in the class of difference schemes on rectangular grids condensed in a neighborhood of S₁ᴸ with respect to x and t, convergence under the condition N⁻¹ + N₀⁻¹ ≤ ε^(1/2) cannot be achieved. Examination of widths similar to Kolmogorov's widths makes it possible to establish necessary and sufficient conditions for the ε-uniform convergence of approximations of the solution to the boundary value problem. These conditions are used to design a scheme that converges ε-uniformly at a rate of O(N⁻¹ ln N + N₀⁻¹).
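The piecewise-uniform grids condensing in the layer can be sketched as a Shishkin-type mesh: uniform with a fine step inside a transition region of width σ ~ ε ln N near the layer, uniform with a coarse step outside. The transition-point constant C and placing the layer at x = 0 are illustrative assumptions; the paper itself treats a moving layer near S₁ᴸ.

```python
import numpy as np

def shishkin_mesh(N, eps, C=2.0):
    """Piecewise-uniform (Shishkin-type) mesh on [0, 1], condensed near x = 0.
    Transition point: sigma = min(1/2, C * eps * ln N); N should be even."""
    sigma = min(0.5, C * eps * np.log(N))
    fine = np.linspace(0.0, sigma, N // 2 + 1)    # N/2 cells inside the layer region
    coarse = np.linspace(sigma, 1.0, N // 2 + 1)  # N/2 cells outside
    return np.concatenate([fine, coarse[1:]])

mesh = shishkin_mesh(64, 1e-3)
```

For small ε the first cell is orders of magnitude shorter than the last, which is exactly the condensation that restores ε-uniform convergence for stationary layers.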

10.
The task of computing an estimate of the quantile ζq of an unknown distribution F (i.e., F(ζq) = q) is usually performed by the "sample quantile" method, which computes the (⌊Nq⌋ + 1)-th smallest element of the set of N observations, and thus requires that all N samples be retained in memory. This paper introduces a recursive method of estimating ζq based on the fact that if the terminal nodes of a uniform d-ary tree are assigned random values, independently drawn from a distribution F, then the minimax value of the root node converges to a specified quantile of F for very tall trees. The new estimate is shown to be almost as precise as that produced by the sample quantile method and, like it, is guaranteed to converge to ζq when the sample is large, for any arbitrary distribution F. However, in contrast to the sample quantile computation, the proposed method requires the retention in storage of at most log₂N representative data points, where N is the number of samples observed in the past. Moreover, the estimate can be updated quickly, using an average of 4, and a maximum of 2 log₂N, comparisons per new observation.
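The baseline "sample quantile" method described above, taking the (⌊Nq⌋ + 1)-th smallest of N retained observations, can be sketched directly; the paper's tree-based recursive estimator is more involved and is not reproduced here.

```python
import numpy as np

def sample_quantile(samples, q):
    """Classical sample quantile: the (floor(N*q) + 1)-th smallest of N samples.
    Requires keeping all N observations in memory, unlike the tree-based method."""
    s = np.sort(np.asarray(samples))
    k = int(np.floor(len(s) * q))      # 0-based index of the (k+1)-th smallest
    return s[min(k, len(s) - 1)]

# usage: the 0.25-quantile of many Uniform(0, 1) draws should be near 0.25
u = np.random.default_rng(3).uniform(0.0, 1.0, 10000)
est = sample_quantile(u, 0.25)
```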


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号