Similar Articles (20 results)
1.
Using the language of multipliers to characterize the Taylor coefficients of holomorphic functions, the necessary and sufficient condition of Duren and Shields for multipliers from H^p to l^q (0 < p < 1, p ≤ q ≤ ∞) is extended to H^p spaces on bounded symmetric domains in C^n. For q ≥ 2 the resulting condition cannot be further improved, while for q < 2 it yields a different characterization of the multipliers. The multipliers from H^p to H^q (0 < p < q < ∞) are also characterized in terms of the growth of the mean values of functions.
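For orientation, recall what "multiplier" means here in the classical one-variable setting (a standard definition, not specific to this paper): a sequence acts termwise on Taylor coefficients.

```latex
% \lambda = \{\lambda_k\} is a multiplier from H^p to \ell^q when
f(z) = \sum_{k\ge 0} a_k z^k \in H^p
\quad\Longrightarrow\quad
\{\lambda_k a_k\}_{k\ge 0} \in \ell^q .
```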

2.
丁勇  陆善镇 《中国科学A辑》1999,29(6):518-526
It is proved that a class of multilinear operators associated with singular integral operators with homogeneous kernels is bounded from the product space L^{p_1} × L^{p_2} × … × L^{p_k}(R^n) to the Hardy space H^r(R^n) and to the weak Hardy space H^{r,∞}(R^n). As an application, the L^p(R^n) boundedness of commutators of a class of singular integral operators with homogeneous kernels is obtained.

3.
吴畏 《中国科学A辑》2000,30(12):1081-1087
The Krein-Milman type problem for the generalized state space SC_n(A) of a C*-algebra A is discussed in the framework of C*-convexity theory. It is proved that every BW-compact C*-convex subset K of SC_n(A) possesses a C*-extreme point, and that K is the C*-convex hull of its C*-extreme points.

4.
For the complex differential equation with coefficient functions analytic in the unit disc,
f^{(n)} + A_{n-1}(z) f^{(n-1)} + … + A_1(z) f' + A_0(z) f = 0,
relations between the coefficient functions and the solutions are given: when the coefficients A_j satisfy the given conditions, all solutions of the equation belong to Q_K-type spaces and Dirichlet-type spaces.
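A first-order instance shows concretely how conditions on the coefficients transfer to the solutions (a standard computation, not taken from the paper): for n = 1 the equation integrates explicitly, so growth of A_0 bounds growth of f.

```latex
f' + A_0(z) f = 0
\;\Longrightarrow\;
f(z) = f(0)\exp\!\Big(-\!\int_0^z A_0(\zeta)\,d\zeta\Big),
\qquad
\log|f(z)| \le \log|f(0)| + \int_0^{|z|} \max_{|\zeta|=t}|A_0(\zeta)|\,dt .
```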

5.
林正炎 《中国科学A辑》1996,39(10):873-883
Let {Y(t), t ≥ 0} = {X_k(t), t ≥ 0}_{k=1}^∞ be a sequence of independent Gaussian processes, and set σ_k^2(h) = E(X_k(t+h) − X_k(t))^2. Write σ(p,h) = (Σ_{k=1}^∞ σ_k^p(h))^{1/p} for p ≥ 1. The large increments of Y(·) are investigated when σ(p,h) is bounded. As an example, the large increments of the infinite-dimensional fractional Ornstein-Uhlenbeck process in l^p spaces are given. The method developed applies to certain other classes of processes with stationary increments.

6.
Molecular characterization of weighted Hardy spaces
On the weighted Hardy spaces H^{p,q,s}_w, a notion of molecules with higher-order vanishing moments is introduced, and a molecular characterization of these spaces is given. As an application, the boundedness of the Hilbert operator on H^{p,q,s}_w is proved.

7.
Building on the classical atomic decomposition theory of H^p(R^n) spaces, a new and finer characterization of H^p(R^n) is given. From it, a criterion for the boundedness of a class of singular integral operators on all H^r(R^n) (p < r ≤ 1) is derived.

8.
刘尚平 《中国科学A辑》1995,38(6):573-579
The asymptotic behavior of H^p(R^n × R_+) (1 < p < +∞) functions was described globally in the 1970s. Here, using the intrinsic relations between the different levels of the extended space EH^p(R^n × R_+), the concrete asymptotic behavior of H^p functions at each level is given, as well as the global asymptotic behavior of EH^p functions.

9.
This paper studies extension problems for n-Lie algebras. First, modules of n-Lie algebras are used to construct T_θ-extensions and T_θ*-extensions of n-Lie algebras. Then, using modules over metric 3-Lie algebras, double extensions of 3-Lie algebras are constructed. Finally, double extensions of m-dimensional 3-Lie algebras are constructed by means of 4-index arrays.
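Recall that an n-Lie algebra is a vector space with an n-linear skew-symmetric bracket satisfying the Filippov (generalized Jacobi) identity, which the extension constructions above must preserve:

```latex
[x_1,\dots,x_{n-1},[y_1,\dots,y_n]]
= \sum_{i=1}^{n} [y_1,\dots,y_{i-1},[x_1,\dots,x_{n-1},y_i],y_{i+1},\dots,y_n].
```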

10.
Using a weighted form of the Journé covering lemma and its generalization to higher-dimensional spaces, this paper establishes an atomic decomposition theorem for weighted H^p (0 < p ≤ 1) spaces on product domains and obtains a vector-valued formulation of the order of the vanishing-moment conditions. This extends the one-parameter results to arbitrarily many parameters and settles a question raised by Chang and Fefferman in [1].

11.
The least-square regression problem is considered by regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve the learning rates in the literature.
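The regularized least-squares scheme analyzed in entries 11-13 has a closed form via the representer theorem. A minimal numerical sketch (the Gaussian kernel, bandwidth, λ, and synthetic data are illustrative assumptions, not taken from the papers):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    """Gram matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2))."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def regularized_least_squares(X, y, lam=1e-2, sigma=0.5):
    """Solve f_z = argmin_{f in RKHS} (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem f_z(x) = sum_i alpha_i K(x, x_i) with
    alpha = (K + lam * m * I)^{-1} y."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return lambda x: gaussian_kernel(np.atleast_1d(x), X, sigma) @ alpha

# Synthetic regression with Gaussian noise: the outputs are unbounded,
# which is exactly the sampling setting entries 11 and 12 address.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 100)
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=100)
f = regularized_least_squares(X, y)
print(f(np.array([0.25])))  # should be near sin(pi/2) = 1
```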

12.
A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of output sample values; this excludes the common case of Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least-squares regularization scheme in reproducing kernel Hilbert spaces, without the assumption of uniform boundedness of the sampling. By imposing some incremental conditions on the moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.

13.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^{-ζ} with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.

14.
Learning with coefficient-based regularization has attracted a considerable amount of attention in recent years, in both theoretical analysis and applications. In this paper, we study the coefficient-based learning scheme (CBLS) for the regression problem with an ℓ^q-regularizer (1 < q ≤ 2). Our analysis is conducted under more general conditions; in particular, the kernel function is not necessarily positive definite. This paper applies a concentration inequality with ℓ2-empirical covering numbers to present an elaborate capacity-dependence analysis for CBLS, which yields sharper estimates than existing bounds. Moreover, we estimate the regularization error to support our assumptions in the error analysis, and also provide an illustrative example to further verify the theoretical results.
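A sketch of the coefficient-based scheme for, say, q = 1.5; the objective follows the description above, while the optimizer, the indefinite tanh kernel, and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def cbls_fit(X, y, kernel, lam=1e-2, q=1.5):
    """Coefficient-based learning: minimize over c in R^m
       (1/m) * sum_i (sum_j c_j k(x_i, x_j) - y_i)^2 + lam * sum_j |c_j|^q.
    Regularization acts on the coefficients directly, so no
    positive-definiteness of k is required, unlike the RKHS scheme."""
    m = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])

    def objective(c):
        residual = K @ c - y
        return residual @ residual / m + lam * np.sum(np.abs(c) ** q)

    c = minimize(objective, x0=np.zeros(m), method="L-BFGS-B").x
    return lambda x: sum(cj * kernel(x, xj) for cj, xj in zip(c, X))

# An indefinite (sigmoid-like) kernel is admissible in this scheme.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 50)
y = X ** 2 + 0.05 * rng.normal(size=50)
f = cbls_fit(X, y, kernel=lambda s, t: np.tanh(s * t + 0.1))
print(f(0.5))  # should be near 0.25
```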

15.
In this paper, we give several results on learning errors for linear programming support vector regression. The corresponding theorems are proved in the reproducing kernel Hilbert space. The approximation property and the capacity of the reproducing kernel Hilbert space are measured with covering numbers. The obtained result (Theorem 2.1) shows that the learning error can be controlled by the sample error and the regularization error, where the sample error comprises the errors of learning the regression function and the regularizing function in the reproducing kernel Hilbert space. After estimating the generalization error of learning the regression function (Theorem 2.2), an upper bound (Theorem 2.3) for the regularized learning algorithm associated with linear programming support vector regression is established.
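Linear programming support vector regression replaces the quadratic program of standard SVR with an LP by penalizing the ℓ1 norm of the expansion coefficients. A compact sketch of one common formulation (the ε-insensitive loss, C, and the Gaussian kernel are illustrative assumptions, not details from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def lp_svr(X, y, kernel, eps=0.05, C=10.0):
    """LP-SVR: f(x) = sum_j (a_j - a*_j) k(x, x_j) + b, minimizing
       sum_j (a_j + a*_j) + C * sum_i (xi_i + xi*_i)
       s.t. |y_i - f(x_i)| <= eps + slack, all variables >= 0.
    Variable layout: [a (m), a* (m), b+, b-, xi (m), xi* (m)]."""
    m = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    cost = np.concatenate([np.ones(2 * m), [0.0, 0.0], C * np.ones(2 * m)])
    I, ones, zeros = np.eye(m), np.ones((m, 1)), np.zeros((m, m))
    # y - f(x) <= eps + xi   and   f(x) - y <= eps + xi*
    A_ub = np.vstack([
        np.hstack([-K,  K, -ones,  ones, -I, zeros]),
        np.hstack([ K, -K,  ones, -ones, zeros, -I]),
    ])
    b_ub = np.concatenate([eps - y, eps + y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    w = res.x[:m] - res.x[m:2 * m]        # expansion coefficients a - a*
    b = res.x[2 * m] - res.x[2 * m + 1]   # intercept b = b+ - b-
    return lambda x: sum(wj * kernel(x, xj) for wj, xj in zip(w, X)) + b

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, 60)
y = np.cos(2 * np.pi * X) + 0.05 * rng.normal(size=60)
f = lp_svr(X, y, kernel=lambda s, t: np.exp(-(s - t) ** 2 / 0.1))
print(f(0.5))  # should be near cos(pi) = -1
```

Splitting each signed quantity into a nonnegative pair (a = a+ − a−, b = b+ − b−) keeps the problem a standard-form LP; at the optimum one member of each pair vanishes, so the objective equals the ℓ1 norm of the coefficients.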

16.
The regression problem in learning theory is investigated with least-square Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our previous work and apply the sampling operator to the error analysis in both the RKHS norm and the L2 norm. The tool for estimating the sample error is a Bennett inequality for random variables with values in Hilbert spaces. By taking the Hilbert space to be the one consisting of Hilbert-Schmidt operators on the RKHS, we improve the error bounds in the L2 metric, motivated by an idea of Caponnetto and de Vito. The error bounds we derive in the RKHS norm, together with a Tsybakov function we discuss here, yield interesting applications to the error analysis of the (binary) classification problem, since the RKHS metric controls the one for uniform convergence.

17.
This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the capacity of the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in the L2 space associated with a Borel probability measure.

18.
Least-squares regularized learning algorithms for regression are well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes or for regularizers given by the qth power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct an error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.

19.
In the present paper, we provide an error bound for the learning rates of the regularized Shannon sampling learning scheme when the hypothesis space is a reproducing kernel Hilbert space (RKHS) derived from a Mercer kernel and a determined net. We show that if the sample is taken according to the determined net, then the sample error can be bounded in terms of the Mercer matrix with respect to the samples and the determined net. The regularization error may be bounded by the approximation order of the reproducing kernel Hilbert space interpolation operator. The paper is an investigation of a remark made by Smale and Zhou.

20.
In this paper, we study the consistency of the regularized least-square regression in a general reproducing kernel Hilbert space. We characterize the compactness of the inclusion map from a reproducing kernel Hilbert space to the space of continuous functions and show that the capacity-based analysis by uniform covering numbers may fail in a very general setting. We prove the consistency and compute the learning rate by means of integral operator techniques. To this end, we study the properties of the integral operator. The analysis reveals that the essence of this approach is the isomorphism of the square root operator.
