20 similar documents found; search took 46 ms
1.
A New Data Classification Method Based on SVM Theory (Total citations: 2; self-citations: 0; citations by others: 2)
杨丽明 《数学的实践与认识》 (Mathematics in Practice and Theory), 2003, 33(12): 61-65
SVM classifiers have distinctive advantages in pattern recognition problems. By modifying the standard SVM model, this paper proposes a new and simple data classification method. Theoretical analysis and experiments show that, compared with standard SVM classification, the method can handle large-scale data recognition while maintaining a high sample recognition rate and saving storage space.
2.
A General Gradient-Based Algorithm for SVM Kernel Parameter Selection (Total citations: 1; self-citations: 0; citations by others: 1)
Kernel selection is the central problem in choosing an SVM classifier. Automating it can both improve classifier performance and reduce manual intervention, so automatic kernel selection has become a hot topic in SVM research, yet it remains far from solved. In recent years, work on the automatic selection of kernel parameters, in particular on gradient-based optimization algorithms, has made some progress. This paper proposes a general gradient-based algorithm for kernel selection and validates it experimentally.
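The abstract does not give the algorithm's details; as a minimal illustration of the general idea — descending a validation-error surface with respect to a kernel parameter — the sketch below tunes the RBF width gamma of a kernel ridge regressor by gradient descent on log(gamma), with the gradient approximated by finite differences. All names, data, and settings here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared distances, then the RBF kernel matrix exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def val_error(gamma, Xtr, ytr, Xva, yva, ridge=1e-3):
    # Kernel ridge regression fit on the training split,
    # mean squared error on the validation split.
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + ridge * np.eye(len(Xtr)), ytr)
    pred = rbf_kernel(Xva, Xtr, gamma) @ alpha
    return np.mean((pred - yva) ** 2)

def select_gamma(Xtr, ytr, Xva, yva, gamma0=1.0, lr=0.5, steps=50, h=1e-4):
    # Gradient descent on log(gamma); the gradient of the validation
    # error is approximated by a central finite difference.
    log_g = np.log(gamma0)
    for _ in range(steps):
        e_plus = val_error(np.exp(log_g + h), Xtr, ytr, Xva, yva)
        e_minus = val_error(np.exp(log_g - h), Xtr, ytr, Xva, yva)
        log_g -= lr * (e_plus - e_minus) / (2 * h)
    return np.exp(log_g)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)
gamma = select_gamma(X[:80], y[:80], X[80:], y[80:])
```

A real implementation would use the analytic gradient of the error bound with respect to the kernel parameters, which is what the gradient-based literature cited in the abstract develops; the finite-difference version above only shows the outer optimization loop.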
3.
胡舒合 《数学物理学报(A辑)》 (Acta Mathematica Scientia, Series A), 1995, 15(2): 132-136
Based on the observations {(X_i, Z_i, δ_i), 1 ≤ i ≤ n}, we construct estimators m_n(x) and m̂_n(x) of E(Y | X = x) and prove the strong and weak consistency of the estimators, together with the strong and mean consistency of their integrated absolute error.
4.
5.
6.
7.
8.
This paper studies kernel estimation of the nonparametric regression function for dependent functional data. Using a robust method, we obtain, under certain conditions, an almost-complete convergence rate for the estimator analogous to that in the i.i.d. case, extending related results in the existing literature.
9.
Distribution-Free Consistency of Kernel Estimates of a Regression Function (Total citations: 5; self-citations: 0; citations by others: 5)
Under suitable conditions, we obtain the pointwise consistency of kernel estimates of the regression function based on complete and censored data. The results hold for every distribution μ of X and are therefore distribution-free.
10.
Convergence Rates of Kernel Estimates of a Regression Function (Total citations: 2; self-citations: 0; citations by others: 2)
Under the condition p ≥ 1, this paper establishes several p-th mean convergence rates for the kernel estimate m_n(x) of the regression function m(x), improving and extending results of [1] and [2].
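For reference, the kernel regression estimate m_n(x) discussed in the entries above is classically the Nadaraya–Watson form, a kernel-weighted average of the responses. A minimal NumPy sketch with a Gaussian kernel and a hand-picked bandwidth h (the data and bandwidth are illustrative assumptions):

```python
import numpy as np

def nadaraya_watson(x0, X, Y, h):
    # m_n(x0) = sum_i K_h(x0 - X_i) Y_i / sum_i K_h(x0 - X_i),
    # here with a Gaussian kernel of bandwidth h.
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 400)
Y = X ** 2 + 0.05 * rng.standard_normal(400)

# Estimate m(0.5) = 0.25 from the noisy sample.
est = nadaraya_watson(0.5, X, Y, h=0.05)
```

The papers above study how fast such estimates converge (and under which conditions on the distribution of X); the sketch only shows the estimator itself.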
11.
The kernel function method has been used successfully to estimate a wide variety of functions. Drawing on kernel function theory, an imputation method based on the Epanechnikov kernel, together with a modification of it, is proposed to address two problems: missing values in compositional data cause existing statistical methods to fail, and k-nearest-neighbour imputation ignores the different contributions of the k nearest samples when using them to estimate the missing values. Experimental results show that the modified Epanechnikov-kernel imputation method yields more accurate estimates than k-nearest-neighbour imputation for compositional data.
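The paper's exact procedure (and its handling of the compositional closure constraint) is not given in the abstract; the sketch below only illustrates the core idea it describes — weighting the k nearest complete samples by an Epanechnikov kernel of their distance, instead of averaging them uniformly. The distance scaling and the small regularizer are assumptions.

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel K(u) = 0.75 (1 - u^2) on |u| <= 1, else 0.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def knn_impute(x, complete, miss_idx, k=5):
    # Impute x[miss_idx] from the k nearest complete samples, weighting
    # each neighbour by an Epanechnikov kernel of its distance rather
    # than taking a plain (unweighted) k-NN average.
    obs = [j for j in range(len(x)) if j != miss_idx]
    d = np.linalg.norm(complete[:, obs] - x[obs], axis=1)
    nearest = np.argsort(d)[:k]
    # Scale distances so the farthest of the k neighbours gets weight ~0.
    u = d[nearest] / (d[nearest].max() + 1e-12)
    w = epanechnikov(u) + 1e-12          # avoid an all-zero weight vector
    return np.sum(w * complete[nearest, miss_idx]) / np.sum(w)

rng = np.random.default_rng(2)
complete = rng.random((200, 3))
complete[:, 2] = complete[:, 0] + complete[:, 1]   # a column the others determine
x = np.array([0.4, 0.3, np.nan])
x_hat = knn_impute(x, complete, miss_idx=2)        # should land near 0.7
```

Closer neighbours receive larger Epanechnikov weights, which is precisely the "different contributions of the k nearest samples" that plain k-NN imputation ignores.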
12.
13.
An iterative variant of the classical degenerate kernel method for solving Fredholm integral equations of the second kind is presented and its convergence properties are studied.
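The non-iterative baseline this abstract builds on fits in a few lines: for a degenerate (separable) kernel, a second-kind Fredholm equation reduces to a finite linear system. A toy example with k(x, t) = x·t on [0, 1], with f chosen so the exact solution is u(x) = x (the example, not the paper's iterative variant):

```python
import numpy as np

# Solve u(x) = f(x) + ∫_0^1 k(x,t) u(t) dt with the degenerate kernel
# k(x,t) = x*t.  Writing u(x) = f(x) + c*x with c = ∫_0^1 t u(t) dt
# collapses the integral equation to a single linear equation for c.
f = lambda x: 2.0 * x / 3.0          # chosen so the exact solution is u(x) = x

n = 2000
dt = 1.0 / n
t = (np.arange(n) + 0.5) * dt        # midpoint quadrature nodes
int_tf = np.sum(t * f(t)) * dt       # ∫_0^1 t f(t) dt, numerically

# c (1 - ∫ t^2 dt) = ∫ t f(t) dt  with  ∫_0^1 t^2 dt = 1/3
c = int_tf / (1.0 - 1.0 / 3.0)
u = lambda x: f(x) + c * x

err = abs(u(0.5) - 0.5)              # deviation from the exact solution u(x) = x
```

With a degenerate kernel of rank r the same reduction yields an r-by-r linear system; the cited paper studies an iterative variant of this construction and its convergence.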
14.
Tudor Barbu, Numerical Functional Analysis & Optimization, 2013, 34(11): 1269-1279
A Gaussian noise reduction technique for grayscale images is proposed in this article. It uses a modified Gaussian filter kernel based on a hyperbolic second-order equation. The introduced mathematical model differs from the classic Gaussian model provided by the heat equation by a localization property. Our filtering approach reduces the amount of Gaussian noise and also enhances the image contrast. Some image denoising experiments that prove the effectiveness of the proposed method are also described in this article.
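The paper's modified, localized kernel is not reproduced in the abstract; as the baseline it modifies, the classic Gaussian filter can be sketched as a separable NumPy convolution (test image, noise level, and sigma are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian kernel on [-radius, radius].
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian convolution: filter rows, then columns.
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))      # smooth test image
noisy = clean + 0.1 * rng.standard_normal((64, 64))
denoised = gaussian_blur(noisy, sigma=1.5)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)         # should drop substantially
```

The classic filter trades noise suppression against edge blurring; the localization property of the paper's hyperbolic model is precisely what distinguishes it from this heat-equation baseline.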
15.
Satoshi Ishiwata, Potential Analysis, 2007, 27(4): 335-351
We obtain a condition on the modification of graphs which guarantees the preservation of the Gaussian upper bound for the gradient of the heat kernel.
16.
Let $f_n$ be the nonparametric kernel density estimator based on a kernel $K$ and a sequence of i.i.d. random variables taking values on the $d$-dimensional unit sphere $\mathbb{S}^{d-1}$. We prove that if the kernel is of bounded variation and the density $f$ of the random variables is continuous and symmetric, then the large deviation principle holds for $\{\sup_{x\in \mathbb{S}^{d-1}}|f_n(x)-f_n(-x)|, n\ge 1\}$.
17.
In many application areas, including image recognition, it is natural to represent a single sample as a set of vectors. With a suitable kernel function, these vectors can be mapped into a higher-dimensional Hilbert space, where Kernel PCA is used to fit a family of Gaussian distributions to the samples; the kernel between two samples is then defined as the Bhattacharyya affinity between the Gaussian densities they follow. The resulting kernel has good properties, such as stability under various transformations, which in turn supports the reasonableness of representing a single sample as a set of vectors even when other representations are available.
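The Bhattacharyya affinity between two Gaussians has a closed form, so the kernel between two vector-set samples can be sketched directly: fit a Gaussian to each set by sample mean and covariance, then evaluate exp(-D_B) for the Bhattacharyya distance D_B between the fitted densities. (This plain-covariance fit stands in for the Kernel-PCA-based Gaussian family in the abstract; the regularization term is an assumption for numerical stability.)

```python
import numpy as np

def bhattacharyya_kernel(A, B, reg=1e-6):
    # Fit a Gaussian to each vector set, then return the Bhattacharyya
    # affinity exp(-D_B) between the two fitted densities, where
    # D_B = (1/8)(mu1-mu2)^T S^{-1} (mu1-mu2)
    #     + (1/2) ln( det S / sqrt(det S1 * det S2) ),   S = (S1 + S2)/2.
    mu1, mu2 = A.mean(0), B.mean(0)
    S1 = np.cov(A.T) + reg * np.eye(A.shape[1])
    S2 = np.cov(B.T) + reg * np.eye(B.shape[1])
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    d_b = (diff @ np.linalg.solve(S, diff)) / 8.0 \
        + 0.5 * np.log(np.linalg.det(S)
                       / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.exp(-d_b)

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 3))
B = rng.standard_normal((200, 3)) + 2.0   # a shifted set of vectors

k_same = bhattacharyya_kernel(A, A)       # identical sets give affinity 1
k_diff = bhattacharyya_kernel(A, B)       # shifted set gives a smaller value
```

The affinity equals 1 exactly when the two fitted Gaussians coincide and decays toward 0 as they separate, which is what makes it usable as a similarity kernel between set-valued samples.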
18.
19.
Pornsarp Pornsawad, Christine Böckmann, Numerical Functional Analysis & Optimization, 2016, 37(12): 1562-1589
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. The convergence rate results of the proposed method can be obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are achieved by using the Levenberg-Marquardt, Lobatto, and Radau methods.
20.
Juan C. Laria, M. Carmen Aguilera-Morillo, Rosa E. Lillo, Journal of Computational and Graphical Statistics, 2013, 22(3): 722-731
In high-dimensional supervised learning problems, sparsity constraints in the solution often lead to better performance and interpretability of the results. For problems in which covariates are grouped and a sparse structure is desired both on the group and within-group levels, the sparse-group lasso (SGL) regularization method has proved to be very efficient. Under its simplest formulation, the solution provided by this method depends on two weight parameters that control the penalization on the coefficients. Selecting these weight parameters represents a major challenge. In most applications of the SGL, this problem is left aside, and the parameters are either fixed based on prior information about the data, or chosen to minimize some error function over a grid of possible values. However, an appropriate choice of the parameters deserves more attention, considering that it plays a key role in the structure and interpretation of the solution. In this sense, we present a gradient-free coordinate descent algorithm that automatically selects the regularization parameters of the SGL. We focus on a more general formulation of this problem, which also includes individual penalizations for each group. The advantages of our approach are illustrated using both real and synthetic datasets. Supplementary materials for this article are available online.
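Fitting the SGL itself is out of scope here, but the gradient-free coordinate descent idea can be sketched against a stand-in error surface: probe each parameter in both directions, keep any improving move, and shrink the step when a full sweep stalls. The toy cv_error below replaces a real cross-validated SGL fit, and all step-size settings are illustrative assumptions.

```python
import numpy as np

def coordinate_search(err, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=200):
    # Gradient-free coordinate descent: probe each coordinate in both
    # directions, accept any move that lowers err, and shrink the step
    # size whenever a full sweep yields no improvement.
    x = np.asarray(x0, dtype=float)
    best = err(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                e = err(trial)
                if e < best:
                    x, best, improved = trial, e, True
                    break
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, best

# Toy stand-in for a cross-validation error over two penalty weights
# (lambda_group, lambda_within); a real SGL fit would go here.
cv_error = lambda lam: (lam[0] - 0.3) ** 2 + 2.0 * (lam[1] - 0.1) ** 2
lam_opt, err_opt = coordinate_search(cv_error, x0=[1.0, 1.0])
```

Because the scheme only compares error values, it needs no gradient of the cross-validation error with respect to the penalty weights, which is what makes it practical when the inner SGL fit is treated as a black box.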