Similar Documents
20 similar documents found (search time: 109 ms)
1.
We consider a finite difference scheme for a nonlinear wave equation whose solutions may lose their smoothness in finite time, i.e., blow up in finite time. In order to numerically reproduce blow-up solutions, we propose a time-stepping rule, a variant of one used successfully in the case of nonlinear parabolic equations. A numerical blow-up time is defined and is proved, under a certain hypothesis, to converge to the real blow-up time as the grid size tends to zero.
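The abstract does not spell out the time-stepping rule. As an illustration only, the following sketch applies a Nakagawa-type adaptive rule (the kind used for nonlinear parabolic problems) to the toy ODE u' = u^2, whose exact solution u(t) = 1/(1 - t) blows up at t = 1 when u(0) = 1. The step rule, cutoff, and names below are assumptions, not the paper's scheme.

```python
# A minimal sketch (an assumption, not the paper's scheme): a
# Nakagawa-type adaptive time-stepping rule, illustrated on u' = u^2.

def numerical_blowup_time(tau=1e-3, u0=1.0, cutoff=1e8):
    """Explicit Euler with adaptive step tau_n = tau * min(1, 1/|u_n|).

    The numerical blow-up time is the sum of the steps taken before
    the iterate exceeds the cutoff; the sum stays finite because the
    steps shrink as fast as the solution grows.
    """
    u, t = u0, 0.0
    while abs(u) < cutoff:
        tau_n = tau * min(1.0, 1.0 / abs(u))  # shrink the step as u grows
        u += tau_n * u * u                    # Euler step for u' = u^2
        t += tau_n
    return t

if __name__ == "__main__":
    for tau in (1e-2, 1e-3, 1e-4):
        print(f"tau = {tau:.0e}:  numerical blow-up time = {numerical_blowup_time(tau):.6f}")
```

As the base step tau decreases, the printed time approaches the exact blow-up time 1, mirroring the kind of convergence statement the abstract proves for the wave-equation scheme.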

2.
In recent years much progress has been achieved in KAM theory concerning the bifurcation of quasi-periodic solutions of Hamiltonian or reversible partial differential equations. We provide an overview of the state of the art in this field.

3.
Mr. Fu Zhongsun (傅种孙) at the High School Affiliated to Beijing Normal University
Luo Dejian (罗德建), Shuxue Tongbao (《数学通报》), 2008, 47(2): 11-20, 25
1. A Brief Introduction to Mr. Fu Zhongsun. 1.1 Basic biographical facts. Mr. Fu Zhongsun (1898-1962) was born on 27 February 1898 in Gao'an County, Jiangxi Province. In 1920 he graduated from the Department of Mathematics and Physics of Beijing Higher Normal School and stayed on to teach at its affiliated high school (now the High School Affiliated to Beijing Normal University); in 1921 he became a lecturer in his alma mater's Department of Mathematics and Physics, and in 1928 a professor. Before the War of Resistance Against Japan he also held professorships at Beijing Women's Normal University, the Women's College of Arts and Sciences of Peiping University (北平大学), Peking University, and Fu Jen Catholic University. In 1933 he was elected secretary to the president of the Beiping (Beijing) Mathematical Society; in 1935 he was elected a council member of the Chinese Mathematical Society and an editor of Shuxue Zazhi (《数学杂志》). From November 1945 to August 1946 he was a visiting scholar at Oxford, and from September 1946 to November 1947 at Cambridge. From 1947 to 1962 he was a professor in the Department of Mathematics of Beijing Normal University, serving concurrently as department chair until 1956; between 1949 and 1957 he served as the university's dean of academic affairs (three years) and vice president (five years). From 1952 to 1957 he was a deputy to the Beijing Municipal People's Congress, a standing council member of the Chinese Mathematical Society and of its Beijing branch, and editor-in-chief of Zhongguo Shuxue Zazhi (《中国数学杂志》) and its successor Shuxue Tongbao (《数学通报》). He died of illness in Beijing on 18 January 1962.

4.
In this paper, local unstable metric entropy, local unstable topological entropy and local unstable pressure for partially hyperbolic endomorphisms are introduced and investigated. In particular, two variational principles concerning the relationships among the above-mentioned quantities are formulated.

5.
We give the direct method of moving planes for solutions to the conformally invariant fractional-power sub-Laplace equation on the Heisenberg group. The method is based on four maximum principles derived here. Symmetry and nonexistence of positive cylindrical solutions are then proved.

6.
In this paper, nonconforming finite element methods (FEMs) are proposed for constrained optimal control problems (OCPs) governed by nonsmooth elliptic equations, in which the popular EQ_1^rot element is employed to approximate the state and the adjoint state, and the piecewise constant element is used to approximate the control. Firstly, convergence and superconvergence properties for the nonsmooth elliptic equation are obtained by introducing an auxiliary problem. Secondly, goal-oriented error estimates are obtained for the objective function through establishing a negative-norm error estimate. Lastly, the methods are extended to some other well-known nonconforming elements.

7.
Wisdom Window (智慧窗)
1. A playful equation: Seven Chinese characters and three pinyin initials form an addition equation. Can you replace each of them with a digit from 0 to 9 (identical characters or letters must receive identical digits) so that the equation holds? (Zhang Liufu (张刘福), Room 1602, No. 8, Lane 476, Changning Road, Shanghai 200042) 2. A prime-number teaser: Fill each of the numbers 5 to 20 into the circles so that the sum of every two adjacent numbers is a prime. Give it a try: can you fill…
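The puzzle diagram is not reproduced in this listing; assuming the circles form a closed ring, the second puzzle can be checked by a short backtracking search. The ring assumption and all names below are illustrative.

```python
# A backtracking solver for the second puzzle, assuming the circles in
# the (missing) diagram form a closed ring of sixteen positions.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def solve_ring(numbers):
    numbers = list(numbers)
    ring = [numbers[0]]          # fix the first entry to avoid rotations
    rest = set(numbers[1:])

    def extend():
        if not rest:
            # closing the ring: last and first must also sum to a prime
            return is_prime(ring[-1] + ring[0])
        for n in sorted(rest):
            if is_prime(ring[-1] + n):
                ring.append(n)
                rest.remove(n)
                if extend():
                    return True
                rest.add(ring.pop())
        return False

    return ring if extend() else None

print(solve_ring(range(5, 21)))
```

Since an even sum above 2 is never prime, every adjacent pair must mix one odd and one even number, so a valid ring alternates parity; one valid ring is 5, 8, 9, 20, 11, 12, 17, 14, 15, 16, 13, 18, 19, 10, 7, 6.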

8.
We prove weighted mixed-norm L^q_t(W^{2,p}_x) and L^q_t(C^{2,α}_x) estimates, for 1 < p, q < ∞, for solutions of the parabolic equation ∂_t u − a_{ij}(t) D_{ij} u + u = f, t > 0, x ∈ ℝ^n. The coefficients a(t) = (a_{ij}(t)) are merely bounded, measurable, symmetric and uniformly elliptic. Furthermore, we show strong-type, weak-type and BMO-Sobolev estimates with parabolic Muckenhoupt weights. It is quite remarkable that most of our results are new even for the classical heat equation ∂_t u − Δu + u = f.

9.
In this paper, we study the algebraic differential and difference independence between the Riemann zeta function and the Euler gamma function. It is proved that the Riemann zeta function and the Euler gamma function cannot satisfy a class of nontrivial algebraic differential equations and algebraic difference equations.

10.
The aim of this article is twofold. One aim is to establish the precise forms of Landau-Bloch type theorems for certain polyharmonic mappings in the unit disk by applying a geometric method. The other is to obtain the precise values of Bloch constants for certain log-p-harmonic mappings. These results improve upon the corresponding results given in Bai et al. (Complex Anal. Oper. Theory, 13(2): 321-340, 2019).

11.
Least-squares regularized learning algorithms for regression are well studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes or for regularizers given by the q-th power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.
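For context, here is a minimal sketch of the algorithm under analysis in its q = 2 special case, where the RKHS penalty admits the closed-form kernel ridge regression solution α = (K + λmI)^(−1) y; the Gaussian kernel, the data and λ are illustrative assumptions, not the paper's choices.

```python
# A minimal sketch of the analysed algorithm in its q = 2 special case
# (kernel ridge regression); kernel, data and lambda are illustrative.
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam=1e-2, sigma=1.0):
    # minimise (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS;
    # the representer theorem reduces this to an m x m linear system.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, np.array([[0.5]])))   # estimate near sin(1.5)
```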

12.
Previously known works describing the generalization of the least-squares regularized regression algorithm are usually based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by studying the generalization of the least-squares regularized regression algorithm with Markov chain samples. We first establish a novel concentration inequality for uniformly ergodic Markov chains; we then establish bounds on the generalization of the algorithm with uniformly ergodic Markov chain samples, and show that the algorithm with such samples is consistent.

13.
Semi-supervised learning is an emerging computational paradigm for machine learning that aims to make better use of large amounts of inexpensive unlabeled data to improve learning performance. While various methods have been proposed based on different intuitions, the crucial issue of generalization performance is still poorly understood. In this paper, we investigate the convergence property of Laplacian regularized least squares regression, a semi-supervised learning algorithm based on manifold regularization. Moreover, an improvement of the error bounds in terms of the number of labeled and unlabeled data is presented, for the first time as far as we know. The convergence rate depends on the approximation property and on the capacity of the reproducing kernel Hilbert space, measured by covering numbers. Some new techniques are exploited for the analysis, since an extra regularizer is introduced.
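A minimal sketch of Laplacian regularized least squares may help fix ideas: with l labeled points among n, one solves (JK + λ_A l I + λ_I LK) α = Jy and sets f = Kα. The Gaussian graph weights and both regularization parameters below are illustrative assumptions, not the paper's setting.

```python
# A minimal sketch of Laplacian regularized least squares; the Gaussian
# graph weights and both penalty parameters are illustrative assumptions.
import numpy as np

def lap_rls(X, y_labeled, n_labeled, lam_A=1e-2, lam_I=1e-2, sigma=0.5):
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))      # kernel matrix on all points
    W = K.copy()                            # reuse the affinity as graph weights
    L = np.diag(W.sum(axis=1)) - W          # (unnormalised) graph Laplacian
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)   # selects labeled points
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    # minimise (1/l) sum_labeled (f(x_i) - y_i)^2 + lam_A ||f||_K^2
    # + lam_I f^T L f over f = K @ alpha; the optimality condition is
    # (J K + lam_A l I + lam_I L K) alpha = J y.
    A = J @ K + lam_A * n_labeled * np.eye(n) + lam_I * (L @ K)
    alpha = np.linalg.solve(A, y)
    return K @ alpha                        # fitted values at all n points

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (40, 1))
fitted = lap_rls(X, np.sin(3 * X[:10, 0]), n_labeled=10)
print(fitted[:5])
```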

14.
The correntropy-induced loss (C-loss) has recently been employed in learning algorithms to improve their robustness to non-Gaussian noise and outliers. Despite its success in robust learning, little work has been done to study the generalization performance of regularized regression with the C-loss. To enrich this theme, this paper investigates a kernel-based regression algorithm with the C-loss and an ℓ1-regularizer in data-dependent hypothesis spaces. The asymptotic learning rate is established for the proposed algorithm in terms of a novel error decomposition and a capacity-based analysis technique. The sparsity of the derived predictor is characterized theoretically. Empirical evaluations demonstrate its advantages over related approaches.
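For concreteness, a minimal sketch of the C-loss itself, here taken as ℓ_σ(r) = σ²(1 − exp(−r²/σ²)) with an illustrative scale σ; the bounded tail is what yields robustness to outliers relative to the squared loss.

```python
# A minimal sketch of the C-loss, sigma^2 * (1 - exp(-r^2 / sigma^2));
# the scale sigma and the comparison values are illustrative.
import numpy as np

def c_loss(residual, sigma=1.0):
    r = np.asarray(residual, dtype=float)
    return sigma ** 2 * (1.0 - np.exp(-r ** 2 / sigma ** 2))

# Near r = 0 the C-loss behaves like the squared loss, but it saturates
# at sigma^2 for large residuals, which bounds the influence of outliers.
for r in (0.1, 1.0, 10.0):
    print(f"r = {r:5}:  squared loss = {r ** 2:8.2f}   C-loss = {c_loss(r):.4f}")
```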

15.
In this paper we study the learning performance of regularized least-squares regression with α-mixing and ϕ-mixing inputs. Capacity-independent error bounds and learning rates are derived by means of an integral operator technique. Even for independent samples our learning rates improve those in the literature. The results are sharp in the sense that when the mixing conditions are strong enough the rates are shown to be close to, or the same as, those for learning with independent samples. They also reveal interesting phenomena of learning with dependent samples: (i) dependent samples contain less information and lead to worse error bounds than independent samples; (ii) the influence of the dependence between samples on the learning process decreases as the smoothness of the target function increases.

16.
Tao Yanfang (陶燕芳), Tang Yi (唐轶), Journal of Mathematics (《数学杂志》), 2015, 35(2): 281-286
This paper studies the generalization performance of least-squares regression with functional inputs and ℓ1-regularization. Using an analysis technique based on Rademacher averages, estimates of the learning rate are obtained, generalizing existing results for finite-dimensional inputs in Euclidean space.
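As a finite-dimensional stand-in for the functional-input setting, the sketch below solves the ℓ1-regularized least-squares problem min_β (1/2)‖Xβ − y‖² + λ‖β‖₁ by ISTA (iterative soft-thresholding); the data, λ, and the iteration count are illustrative assumptions.

```python
# A finite-dimensional ISTA sketch for l1-regularized least squares,
# min over beta of (1/2)||X beta - y||^2 + lam * ||beta||_1;
# the data, lam and the iteration count are illustrative.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam=0.1, n_iter=500):
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)          # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, lam * step)
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.05 * rng.standard_normal(100)
print(np.round(ista(X, y), 2))   # a sparse vector close to beta_true
```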

17.
Analysis of Support Vector Machines Regression
Support vector machines regression (SVMR) is a regularized learning algorithm in reproducing kernel Hilbert spaces with a loss function called the ε-insensitive loss. Compared with the well-understood least-squares regression, the study of SVMR is not satisfactory, especially as regards quantitative estimates of the convergence of this algorithm. This paper provides an error analysis for SVMR and introduces some recently developed methods for the analysis of classification algorithms, such as the projection operator and the iteration technique. The main result is an explicit learning rate for the SVMR algorithm under some assumptions. Research supported by NNSF of China Nos. 10471002 and 10571010 and RFDP of China No. 20060001010.
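The ε-insensitive loss that defines SVMR is |r|_ε = max(0, |r| − ε), which charges nothing for residuals inside the ε-tube; a minimal sketch with an illustrative ε:

```python
# A minimal sketch of the epsilon-insensitive loss defining SVMR;
# the value of epsilon is an illustrative choice.
import numpy as np

def eps_insensitive(residual, eps=0.1):
    """|r|_eps = max(0, |r| - eps): residuals inside the tube cost nothing."""
    return np.maximum(0.0, np.abs(residual) - eps)

print(eps_insensitive(np.array([-0.05, 0.08, 0.5, -2.0])))
# -> [0.   0.   0.4  1.9]
```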

18.
The problem of learning from data involving function values and gradients is considered in a framework of least-squares regularized regression in reproducing kernel Hilbert spaces. The algorithm is implemented by a linear system whose coefficient matrix involves block matrices for generating graph Laplacians and Hessians. The additional data for function gradients improve the learning performance of the algorithm. Error analysis is done by means of sampling operators for the sample error and integral operators in Sobolev spaces for the approximation error.
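As a simplified illustration of such a block linear system (not the paper's graph-Laplacian construction), the one-dimensional Hermite-type sketch below fits function values and slopes simultaneously with a Gaussian kernel; σ, λ, and the data are illustrative assumptions.

```python
# A simplified one-dimensional illustration (not the paper's
# construction): fit function values and slopes simultaneously with a
# Gaussian kernel through a 2 x 2 block linear system.
import numpy as np

def fit_values_and_gradients(x, y, g, sigma=0.5, lam=1e-8):
    d = x[:, None] - x[None, :]                        # d[j, i] = x_j - x_i
    K = np.exp(-d ** 2 / (2 * sigma ** 2))             # k(x_j, x_i)
    Kz = (d / sigma ** 2) * K                          # d/dz k(x_j, z) at z = x_i
    Kx = -Kz                                           # d/dx k(x, x_i) at x = x_j
    Kxz = (1 / sigma ** 2 - d ** 2 / sigma ** 4) * K   # mixed second derivative
    # Expand f = sum_i a_i k(., x_i) + b_i (d/dz) k(., x_i) and match both
    # the values y and the slopes g: [[K, Kz], [Kx, Kxz]] [a; b] = [y; g].
    M = np.block([[K, Kz], [Kx, Kxz]])
    rhs = np.concatenate([y, g])
    return np.linalg.solve(M + lam * np.eye(2 * len(x)), rhs)

x = np.linspace(-1.0, 1.0, 6)
coef = fit_values_and_gradients(x, np.sin(3 * x), 3 * np.cos(3 * x))
print(coef.shape)   # (12,): six value coefficients and six slope coefficients
```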

19.
We investigate the problem of model selection for learning algorithms depending on a continuous parameter. We propose a model selection procedure based on a worst-case analysis and on a data-independent choice of the parameter. For the regularized least-squares algorithm we bound the generalization error of the solution by a quantity depending on a few known constants and we show that the corresponding model selection procedure reduces to solving a bias-variance problem. Under suitable smoothness conditions on the regression function, we estimate the optimal parameter as a function of the number of data and we prove that this choice ensures consistency of the algorithm.

20.
Learning Rates of Least-Square Regularized Regression
This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^(−ζ) with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
