Positive definite dot product kernels in learning theory
Authors: Fangyan Lu (fylu@pub.sz.jsinfo.net), Hongwei Sun
Affiliation: (1) Department of Mathematics, Suzhou University, Suzhou, 215006, People's Republic of China; (2) School of Science, Jinan University (West), Jinan, 230000, People's Republic of China
Abstract: In the classical support vector machines, linear polynomials corresponding to the reproducing kernel $K(x,y)=x\cdot y$ are used. In many models of learning theory, polynomial kernels $K(x,y)=\sum_{l=0}^{N} a_l (x\cdot y)^l$, generating polynomials of degree $N$, and dot product kernels $K(x,y)=\sum_{l=0}^{+\infty} a_l (x\cdot y)^l$ are involved. For the corresponding learning algorithms, properties of these kernels need to be understood. In this paper, we consider their positive definiteness. A necessary and sufficient condition for the dot product kernel $K$ to be positive definite is given. More generally, we present a characterization of a function $f:\mathbb{R}\to\mathbb{R}$ such that the matrix $[f(x_i\cdot x_j)]_{i,j=1}^{m}$ is positive semi-definite for any $x_1, x_2, \ldots, x_m \in \mathbb{R}^n$, $n \ge 2$.
Supported by CERG Grant No. CityU 1144/01P and City University of Hong Kong Grant No. 7001342.
AMS subject classification: 42A82, 41A05
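The positive-definiteness question in the abstract can be explored numerically: a dot product kernel $K(x,y)=f(x\cdot y)$ is positive definite exactly when every Gram matrix $[f(x_i\cdot x_j)]$ is positive semi-definite. The sketch below (not from the paper; the helper functions, the test points, and the choices $f(t)=e^t$ and $f(t)=t^2-t$ are illustrative assumptions) checks this condition by computing eigenvalues with NumPy.

```python
import numpy as np

def gram_matrix(f, X):
    # Entrywise-apply f to the matrix of dot products [x_i . x_j],
    # where the x_i are the rows of X.
    return f(X @ X.T)

def is_positive_semidefinite(M, tol=1e-8):
    # True if every eigenvalue of the symmetric matrix M is >= -tol
    # (the tolerance absorbs floating-point round-off).
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

# f(t) = exp(t) has all Taylor coefficients a_l = 1/l! >= 0, so
# K(x, y) = exp(x . y) yields a positive semi-definite Gram matrix
# for any point set.
rng = np.random.default_rng(0)
X = 0.5 * rng.standard_normal((20, 3))  # 20 points in R^3
print(is_positive_semidefinite(gram_matrix(np.exp, X)))              # True

# f(t) = t^2 - t has a negative coefficient (a_1 = -1); two short
# orthogonal vectors already give a Gram matrix with negative diagonal.
Y = np.array([[0.5, 0.0], [0.0, 0.5]])
print(is_positive_semidefinite(gram_matrix(lambda t: t**2 - t, Y)))  # False
```

The second example illustrates why sign conditions on the coefficients $a_l$ matter: for $Y$ the dot-product matrix is $\mathrm{diag}(0.25, 0.25)$, and applying $f(t)=t^2-t$ entrywise gives $\mathrm{diag}(-0.1875, -0.1875)$, which has a negative eigenvalue.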
Keywords: Mercer kernels, positive definite, conditionally positive definite, Gramian positive functions
This article is indexed by SpringerLink and other databases.