Similar Articles
20 similar articles found (search time: 109 ms)
1.
Based on two special properties of the EQ_1^rot nonconforming element — first, the induced finite element interpolation operator coincides with the traditional Ritz projection; second, when the exact solution of the problem under consideration belongs to H^3(Ω), the consistency error is of order O(h^2), one order higher than the interpolation error — this paper proposes a new second-order fully discrete scheme for the nonlinear Sine-Gordon equation, and gives the convergence analysis and optimal-order error estimates. Finally, applications of these results to several other well-known nonconforming elements are discussed.

2.
We consider a convex optimization model with three blocks of variables and linear constraints, whose objective is the sum of three separable functions. A new splitting algorithm is given: first, the augmented Lagrangian function is minimized with respect to each block variable in turn; then a correction step produces the new iterate. Global convergence of the new algorithm and an O(1/t) convergence rate are proved.

3.
For a class of fourth-order parabolic equations with variable coefficients, a new expanded low-order nonconforming mixed element scheme is given using the EQ_1^rot and Q_10 × Q_01 elements. First, existence and uniqueness of the approximate solution are proved. Next, based on the high-accuracy results of these two elements and a derivative-transfer technique with respect to time t, superclose properties of order O(h^2) are derived in the semi-discrete scheme for the primal variable u and the diffusion term v = -∇·(a(t)∇u) in the H^1 norm, and for the flux p = -a(t)∇u in the L^2 norm. Furthermore, global superconvergence is obtained via an interpolation post-processing technique. Finally, by constructing a suitable auxiliary problem, an extrapolated solution of order O(h^3) is derived.

4.
This paper mainly studies the approximation of quasilinear hyperbolic equations by the quasi-Wilson element. First, an important result is proved: when the solution satisfies u ∈ H^3(Ω) or u ∈ H^4(Ω), the piecewise inner product of the gradient of the difference between u and its bilinear interpolation with the gradient of an arbitrary element of the quasi-Wilson element space can reach order O(h^2). Then, using the property that the nonconforming error of this element in the energy norm can reach O(h^2)/O(h^3), i.e., one/two orders higher than the interpolation error, together with the derivative-transfer technique with respect to time t, the high-accuracy results of the bilinear element, and interpolation post-processing, superclose properties and global superconvergence of order O(h^2) are obtained, further broadening the range of applications of this element.

5.
Using Diethelm's method, this paper constructs an O(Δx^{3-α}) scheme for approximating the Riesz space fractional derivative, where 1 < α < 2 and Δx is the spatial step size. The first-order time derivative is then discretized by the Crank-Nicolson method, yielding a new finite difference scheme for the Riesz space fractional diffusion equation. Stability and convergence are proved by the matrix method, with error estimate O(Δt^2 + Δx^{3-α}), where Δt is the time step. Finally, numerical examples verify the correctness and effectiveness of the difference scheme.

6.
Numerical methods for the space-time fractional convection-diffusion equation
覃平阳  张晓丹 《计算数学》2008,30(3):305-310
This paper considers a space-time fractional convection-diffusion equation, obtained from the classical convection-diffusion equation by replacing the first-order time derivative with a derivative of order α (0 < α < 1) and the second-order space derivative with a derivative of order β (1 < β < 2). An implicit difference scheme is proposed and shown to be unconditionally stable, and its convergence is proved with order O(τ + h). Numerical examples are given.

7.
Let {X_n, n ≥ 1} be a sequence of i.i.d. B-valued random variables. This paper discusses the relationship between certain moments, taken with respect to a particular class of functions φ(t), of the number of times L(ε) that ‖S_n‖ exceeds ε(n lg n)^{1/2}, the maximum excess N(ε), and the last exit time H(ε), and the convergence of the corresponding tail probability series, obtaining several equivalence propositions.

8.
杨小云  李竹香 《数学学报》1991,34(4):440-450
Let {X_n, n ≥ 1} be a sequence of i.i.d. B-valued random variables. This paper discusses the relationship between certain moments, taken with respect to a particular class of functions φ(t), of the number of times L(ε) that ‖S_n‖ exceeds ε(n lg n)^{1/2}, the maximum excess N(ε), and the last exit time H(ε), and the convergence of the corresponding tail probability series, obtaining several equivalence propositions.

9.
On a non-uniform time grid, this paper solves the variable-order time fractional diffusion equation ∂^{α(x,t)}u(x,t)/∂t^{α(x,t)} - ∂^2 u(x,t)/∂x^2 = f(x,t), with 0 < α(x,t) ≤ q ≤ 1, by a finite difference method. Stability and convergence of the method in the maximum norm are proved, with convergence order C(Δt^{2-q} + h^2). A numerical example verifies the theoretical analysis.

10.
For the first-order linear non-homogeneous system of differential equations with constant coefficients dX/dt = AX + e^{αt}(cos βt · P_m^{(1)}(t) + sin βt · P_m^{(2)}(t)), where P_m^{(1)}(t) and P_m^{(2)}(t) are n-dimensional real vector polynomials of degree at most m in the real variable t, a structure theorem and a computational method for a particular solution X̃(t) are given for the case where the real n×n matrix A has α + iβ as an eigenvalue of multiplicity s ≥ 1. This reduces the integration involved in finding the particular solution X̃(t) to simple algebraic operations, and settles the problem of computing X̃(t) by computer.

11.
This paper points out why the directly extended classical alternating direction method of multipliers (ADMM) cannot be guaranteed to converge for problems with three operators, and gives corresponding strategies for modifying it into a convergent algorithm. Within a unified framework, convergence of the modified ADMM and an O(1/t) convergence rate in the ergodic sense are proved.
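The issue discussed above concerns the direct Gauss-Seidel extension of ADMM to three blocks. As a minimal sketch (not the paper's corrected algorithm), the direct extension on a toy separable problem looks like the following; the problem, parameter values, and function name are illustrative, and although the direct extension can fail in general, it does converge on this particular strongly convex example:

```python
def three_block_admm(rho=1.0, iters=200):
    """Direct (uncorrected) three-block ADMM for the toy problem
        min  x^2 + y^2 + z^2   s.t.  x + y + z = 3,
    whose solution is x = y = z = 1.  Each block minimizes the
    augmented Lagrangian in closed form; u is the scaled multiplier."""
    x = y = z = u = 0.0
    for _ in range(iters):
        x = rho * (3.0 - y - z - u) / (2.0 + rho)  # argmin in x
        y = rho * (3.0 - x - z - u) / (2.0 + rho)  # argmin in y (uses new x)
        z = rho * (3.0 - x - y - u) / (2.0 + rho)  # argmin in z (uses new x, y)
        u += x + y + z - 3.0                       # dual update
    return (x, y, z)

sol = three_block_admm()
```

The corrected algorithms in the paper append a correction (e.g. back-substitution) step after the three block minimizations; that step is omitted here.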

12.
Asymptotically efficient L-estimation for semiparametric regression models
For the semiparametric regression model y_i = x_i^T β + g(x_i) + e_i, i = 1, 2, …, n, an L-estimator λ_n of the parameter vector β is constructed, with the nonparametric function g(·) handled by kernel estimation. Under some regularity conditions, asymptotic normality of λ_n is obtained, the optimal convergence rate of the estimator g_n(t) of g(·) is shown to reach O(n^{-1/3}), and a Berry-Esseen bound for the asymptotic distribution of the standardized L-estimator λ_n is given.
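The kernel estimation step for the nonparametric part g(·) can be sketched with a Nadaraya-Watson smoother under a Gaussian kernel (an illustrative stand-in; the paper's exact kernel weights and bandwidth choice may differ):

```python
import math

def nw_smooth(xs, ys, t, h):
    """Nadaraya-Watson kernel estimate of E[y | x = t] with a Gaussian
    kernel of bandwidth h: a locally weighted average of the ys."""
    w = [math.exp(-0.5 * ((t - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Smoothing a constant signal reproduces the constant exactly.
est = nw_smooth([0.0, 1.0, 2.0, 3.0], [5.0, 5.0, 5.0, 5.0], 1.5, 1.0)
```

In the semiparametric setting, a smoother of this kind is applied to the residuals y_i - x_i^T β to profile out g before estimating β.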

13.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k^2), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained l_1-l_2 minimization problem are presented to demonstrate the effectiveness of AALM.

14.
We analyze the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from \(O(1/\sqrt{k})\) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form \(O(\rho ^k)\) for \(\rho < 1\). Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. This extends our earlier work Le Roux et al. (Adv Neural Inf Process Syst, 2012), which only led to a faster rate for well-conditioned strongly convex problems. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
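A minimal sketch of the SAG update for a scalar decision variable, assuming exact component gradients are available (the function name, step size, and toy least-squares problem are illustrative, not the paper's tuned settings):

```python
import random

def sag(grad_i, n, x0, step, iters, seed=0):
    """Minimize (1/n) * sum_i f_i(x) for scalar x with the SAG update.

    grad_i(i, x) returns f_i'(x).  A memory of the most recent gradient
    of every component is kept; each iteration refreshes one entry and
    steps along the average of all stored gradients."""
    rng = random.Random(seed)
    x = x0
    mem = [grad_i(i, x0) for i in range(n)]   # stored component gradients
    avg = sum(mem) / n                        # their running average
    for _ in range(iters):
        i = rng.randrange(n)                  # sample one component
        g = grad_i(i, x)
        avg += (g - mem[i]) / n               # refresh the average in O(1)
        mem[i] = g
        x -= step * avg                       # SAG step
    return x

# Toy least-squares problem: f_i(x) = 0.5*(x - a_i)^2, minimizer = mean(a).
a = [1.0, 2.0, 3.0, 4.0]
xs = sag(lambda i, x: x - a[i], len(a), 0.0, 0.3, 2000)
```

The key design point is that `avg` is maintained incrementally, so each iteration costs one gradient evaluation regardless of n.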

15.
Recently, an indefinite linearized augmented Lagrangian method (IL-ALM) was proposed for convex programming problems with linear constraints. The IL-ALM differs from the linearized augmented Lagrangian method in that the augmented Lagrangian is linearized by adding an indefinite quadratic proximal term, but it preserves the algorithmic features of the linearized ALM and usually improves performance. The IL-ALM was proved to be convergent from a contraction perspective, but its convergence rate was still missing, mainly because the indefinite setting destroys the structures needed to directly employ the contraction frameworks. In this paper, we derive the convergence rate of this algorithm by using a different analysis. We prove that a worst-case O(1/t) convergence rate still holds for this algorithm, where t is the number of iterations. Additionally, we show that the customized proximal point algorithm can employ larger step sizes by proving its equivalence to the linearized ALM.

16.
祝鹏  尹云辉  杨宇博 《计算数学》2013,35(3):323-336
On Bakhvalov-Shishkin meshes, this paper analyzes the optimal-order uniform convergence of the interior penalty discontinuous Galerkin method with higher-order elements for one-dimensional singularly perturbed problems of convection-diffusion type. With piecewise polynomials of degree k (k ≥ 1) and N mesh intervals, a uniform error estimate of O(N^{-k}) is obtained in the energy norm on Bakhvalov-Shishkin meshes. Numerical examples verify the theoretical results.
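For intuition, a layer-adapted mesh of the Shishkin family can be generated as follows. This is the plain piecewise-uniform Shishkin mesh (the Bakhvalov-Shishkin mesh analyzed above instead grades the fine part logarithmically); the parameters sigma and beta and the layer location at x = 1 are illustrative assumptions:

```python
import math

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a problem with a
    boundary layer at x = 1.  N must be even; returns N + 1 points:
    N/2 coarse intervals on [0, 1 - tau], N/2 fine ones on [1 - tau, 1]."""
    tau = min(0.5, sigma * eps / beta * math.log(N))  # transition parameter
    split = 1.0 - tau
    coarse = [split * i / (N // 2) for i in range(N // 2)]
    fine = [split + tau * i / (N // 2) for i in range(N // 2 + 1)]
    return coarse + fine

mesh = shishkin_mesh(64, 1e-3)
```

As eps shrinks, tau shrinks with it, so half of the mesh points are condensed into the O(eps ln N) layer region, which is what makes uniform-in-eps error estimates possible.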

17.
吴明新  沈家 《应用数学》2003,16(1):116-120
This paper studies the convergence rate of error density estimation for nonparametric regression in continuous time. Under certain conditions, the mean-square convergence rate of the error density estimator f̂_T(x) is given, establishing in detail the following key result: E[f̂_T(x) - f(x)]^2 = O(T^{-1/4}), where f(x) denotes the unknown density of the error process {e_t, t ≥ 0}.

18.
We introduce in this article a new domain decomposition algorithm for parabolic problems that combines Mortar Mixed Finite Element methods for the space discretization with operator splitting schemes for the time discretization. The main advantage of this method is that it is fully parallel. The algorithm is proven to be unconditionally stable, and a convergence result of order O(Δt/h^{1/2}) is presented.

19.
Convergence rates in the central limit theorem for NA sequences
This paper establishes a uniform convergence rate in the central limit theorem for NA (negatively associated) sequences. Provided only that the third moments are finite and that a coefficient u(n) describing the covariance structure of the NA sequence is dominated by a negative-exponential sequence, the convergence rate O(n^{-1/2} log n) is obtained without any stationarity assumption.

20.
In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we present also an accelerated scheme with the expected rate of convergence \(O\Big ({n^2 \over k^2}\Big )\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\Big ({n \over k^{1/2}}\Big )\). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.  相似文献   
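The Gaussian-direction, function-value-only scheme described above can be sketched as follows (the step size h, smoothing parameter mu, and function names are illustrative choices, not the paper's tuned values):

```python
import random

def gf_step(f, x, mu, h, rng):
    """One iteration of the random gradient-free scheme: sample a
    Gaussian direction u, estimate the directional derivative by a
    forward difference of function values only, and step against it."""
    u = [rng.gauss(0.0, 1.0) for _ in x]
    d = (f([xi + mu * ui for xi, ui in zip(x, u)]) - f(x)) / mu
    return [xi - h * d * ui for xi, ui in zip(x, u)]

def minimize_gf(f, x0, mu=1e-6, h=0.05, iters=4000, seed=0):
    """Run the zero-order scheme from x0 for a fixed iteration budget."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        x = gf_step(f, x, mu, h, rng)
    return x

# Smooth convex test problem: f(x) = ||x||^2, minimized at the origin.
sol = minimize_gf(lambda v: sum(t * t for t in v), [1.0, -2.0])
```

Only two function evaluations per iteration are needed, which is the sense in which the method is "zero-order"; the factor-of-n slowdown relative to gradient methods comes from estimating a single random directional derivative per step.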

