20 similar documents found.
1.
《数学的实践与认识》2015,(20)
The absolute value function is nonsmooth; this paper studies smooth approximations to it. Uniform smooth approximations from above and from below are constructed, their properties are analyzed separately, and the quality of the approximation is illustrated graphically.
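The abstract does not reproduce the specific approximants, so the pair below is an illustrative assumption rather than the paper's construction: f_μ(x) = √(x² + μ²) approximates |x| uniformly from above, and f_μ(x) − μ from below, each within μ of |x| everywhere.

```python
import math

def smooth_abs_upper(x, mu):
    """Upper uniform smoothing of |x|: |x| <= f(x) <= |x| + mu."""
    return math.sqrt(x * x + mu * mu)

def smooth_abs_lower(x, mu):
    """Lower uniform smoothing of |x|: |x| - mu <= f(x) <= |x|."""
    return math.sqrt(x * x + mu * mu) - mu

# Verify the uniform sandwich bounds on a grid (eps absorbs float rounding).
mu, eps = 1e-3, 1e-12
for i in range(-1000, 1001):
    x = i / 100.0
    assert abs(x) - eps <= smooth_abs_upper(x, mu) <= abs(x) + mu + eps
    assert abs(x) - mu - eps <= smooth_abs_lower(x, mu) <= abs(x) + eps
```

Both functions are smooth for μ > 0 and converge uniformly to |x| as μ → 0, which is the sense of "uniform smooth approximation" in the abstract.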
2.
4.
Sequences of linear wavelet operators are defined, and their probabilistic properties are used to study how they approximate functions in the L^p and Δ^p spaces; corresponding equivalence theorems for the approximation are established.
5.
Weighted approximation by mixed exponential-type integral operators
田军 《高等学校计算数学学报》1994,16(3):202-216
In 1987, Z. Ditzian and V. Totik studied in [1] the weighted approximation problem for exponential-type operators; in 1989, Professor Chen Wenzhong studied in [2] the approximation properties of mixed exponential-type integral operators in C-spaces. This paper studies the weighted approximation problem for a class of mixed exponential-type integral operators in L_p spaces.
6.
7.
8.
This paper studies the approximation properties of the de la Vallée Poussin means of Fourier-Laplace series, establishing an estimate of the degree of uniform approximation and the order of almost-everywhere approximation for these means.
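As a concrete illustration of the operator involved (shown here for an ordinary Fourier cosine series on [−π, π] rather than a Fourier-Laplace series on the sphere, so the setting is an assumption), the de la Vallée Poussin mean V_n averages the partial sums S_n, …, S_{2n−1}:

```python
import math

def fourier_partial_sum(coeffs, n, x):
    """Partial sum S_n of a real cosine series: a0/2 + sum_{k<=n} a_k cos(kx)."""
    s = coeffs[0] / 2.0
    for k in range(1, min(n, len(coeffs) - 1) + 1):
        s += coeffs[k] * math.cos(k * x)
    return s

def vallee_poussin_mean(coeffs, n, x):
    """De la Vallee Poussin mean V_n = (S_n + ... + S_{2n-1}) / n."""
    return sum(fourier_partial_sum(coeffs, k, x) for k in range(n, 2 * n)) / n

# Cosine coefficients of f(x) = |x| on [-pi, pi]:
# a0 = pi, a_k = -4 / (pi k^2) for odd k, 0 for even k.
N = 64
coeffs = [math.pi] + [(-4.0 / (math.pi * k * k)) if k % 2 == 1 else 0.0
                      for k in range(1, 2 * N)]

x = 1.0
err_S = abs(fourier_partial_sum(coeffs, N, x) - abs(x))
err_V = abs(vallee_poussin_mean(coeffs, N, x) - abs(x))
print(err_S, err_V)
```

V_n reproduces trigonometric polynomials of degree up to n exactly, which is the reason its degree of approximation can match the best-approximation rate.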
9.
The multigrid method is an "optimal" solver for the large linear or nonlinear systems that arise from discretizing elliptic boundary value problems. For finite element discretizations, Hackbusch proposed a convergence analysis that reduces estimating the convergence rate of a linear or nonlinear multigrid method to establishing a "smoothing property" and an "approximation property". In the linear case, the "approximation property" generally follows easily from a known error estimate for the finite element solution; for nonlinear multigrid methods, however, the conditions under which the "approximation property" holds had not yet been addressed in the literature.
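The interplay of the two properties can be seen in a minimal two-grid cycle for the 1-D Poisson problem −u″ = f (an illustrative finite-difference setting, not the paper's): a few weighted Jacobi sweeps supply the "smoothing property", and the coarse-grid correction relies on the "approximation property":

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f with zero Dirichlet boundary values."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing (the 'smoothing property' ingredient)."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * residual(u, f, h)
    return u

def coarse_solve(fc, H):
    """Direct solve of the coarse-grid problem."""
    m = len(fc) - 1
    A = (2 * np.eye(m - 1) - np.eye(m - 1, k=1) - np.eye(m - 1, k=-1)) / H**2
    ec = np.zeros(m + 1)
    ec[1:-1] = np.linalg.solve(A, fc[1:-1])
    return ec

def two_grid(u, f, h):
    """One two-grid cycle: smooth, coarse-grid correction, smooth again."""
    u = jacobi(u, f, h, 3)                                          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(len(u) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    ec = coarse_solve(rc, 2 * h)                     # 'approximation property' step
    e = np.zeros_like(u)
    e[::2] = ec                                      # linear interpolation back
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, 3)                    # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u_exact = np.sin(np.pi * x)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(residual(u, f, h))))
print(err)
```

The per-cycle contraction rate is bounded independently of h, which is what makes the method "optimal" in the sense of the abstract.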
10.
This paper first defines the Abel ergodicity and Cesàro ergodicity of generalized operator semigroups, characterizes the properties of the two kinds of ergodicity, and studies their equivalent conditions. It then uses the Pettis integral, operator-valued mathematical expectation, and a generalized modulus of continuity to derive probabilistic approximation expressions for generalized operator semigroups.
11.
This paper studies the convergence rate of stochastic gradient descent under a regularization scheme. Using a linear iteration argument and a suitable choice of parameters, a convergence rate for stochastic gradient descent is obtained.
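The abstract does not give the regularization scheme or the parameter choices, so the sketch below is a generic stand-in: SGD with a decaying step size on a 1-D Tikhonov-regularized least-squares problem, compared against the closed-form regularized minimizer:

```python
import random

# Data y = 2x (no label noise); the ridge minimizer is the target of SGD.
xs = [i / 10.0 for i in range(1, 11)]
ys = [2.0 * x for x in xs]
lam = 0.1
mxx = sum(x * x for x in xs) / len(xs)
mxy = sum(x * y for x, y in zip(xs, ys)) / len(xs)
w_star = mxy / (mxx + lam)   # minimizer of (1/2n) sum (wx - y)^2 + lam w^2 / 2

random.seed(0)
w = 0.0
for t in range(20000):
    i = random.randrange(len(xs))
    grad = (w * xs[i] - ys[i]) * xs[i] + lam * w   # stochastic gradient
    eta = 0.5 / (1.0 + 0.1 * t)                    # decaying step size
    w -= eta * grad
print(w, w_star)
```

Because the regularized objective is strongly convex, SGD with a step size decaying like 1/t converges to w_star at the classical O(1/t) rate in expectation; that kind of statement is what the abstract's "convergence rate" refers to.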
12.
13.
Deep neural networks have been trained successfully in various application areas with stochastic gradient descent. However, there exists no rigorous mathematical explanation of why this works so well. The training of neural networks with stochastic gradient descent has four different discretization parameters: (i) the network architecture; (ii) the amount of training data; (iii) the number of gradient steps; and (iv) the number of randomly initialized gradient trajectories. While it can be shown that the approximation error converges to zero if all four parameters are sent to infinity in the right order, we demonstrate in this paper that stochastic gradient descent fails to converge for ReLU networks if their depth is much larger than their width and the number of random initializations does not increase to infinity fast enough.
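One mechanism behind such depth-versus-width failures is that a randomly initialized deep narrow ReLU network frequently collapses to a constant function at initialization ("dying ReLU"), so most gradient trajectories start from a degenerate point. The experiment below (zero biases and He-style initialization are assumptions here; it illustrates the phenomenon rather than reproducing the paper's analysis) compares collapse frequencies:

```python
import math
import random

def random_net(depth, width):
    """He-style Gaussian init, zero biases (an assumption), scalar in/out."""
    dims = [1] + [width] * depth + [1]
    layers = []
    for fan_in, fan_out in zip(dims[:-1], dims[1:]):
        W = [[random.gauss(0.0, math.sqrt(2.0 / fan_in)) for _ in range(fan_in)]
             for _ in range(fan_out)]
        layers.append(W)
    return layers

def forward(layers, x):
    h = [x]
    for li, W in enumerate(layers):
        h = [sum(w * v for w, v in zip(row, h)) for row in W]
        if li < len(layers) - 1:
            h = [max(0.0, v) for v in h]   # ReLU on hidden layers only
    return h[0]

def collapsed(net, pts=(-1.0, -0.5, 0.5, 1.0)):
    """With zero biases the net is positively homogeneous, so it is constant
    on these test points iff it is identically zero on both input rays."""
    vals = [forward(net, p) for p in pts]
    return max(vals) == min(vals)

random.seed(1)
trials = 200
deep_narrow = sum(collapsed(random_net(20, 2)) for _ in range(trials)) / trials
shallow_wide = sum(collapsed(random_net(2, 20)) for _ in range(trials)) / trials
print(deep_narrow, shallow_wide)
```

At each narrow layer there is a constant probability that every unit's pre-activation is negative on the surviving signal, so the survival probability decays geometrically in depth while wide layers are essentially never fully dead.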
14.
Mathematical Programming - We develop a new family of variance-reduced stochastic gradient descent methods for minimizing the average of a very large number of smooth functions. Our...
15.
Derek Driggs, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb 《Mathematical Programming》2022,191(2):671-715
Mathematical Programming - Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown...
16.
《Optimization》2012,61(4-5):395-415
The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better than the classical steepest descent (SD) method in practice. So far, the BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately. Hence the name “alternate step (AS) gradient method.” Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. Particularly, a more efficient gradient algorithm is provided by exploring the idea of the AS method in the GBB algorithm by Raydan (1997). To establish a general R-linear convergence result for gradient methods, an important property of the stepsize is drawn in this article. Consequently, an R-linear convergence result is established for a large collection of gradient methods, including the AS method. Some interesting insights into gradient methods and discussion about monotonicity and nonmonotonicity are also given.
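A minimal sketch of the alternation for a strongly convex quadratic (an illustrative test problem, not the article's experiments): for quadratics the BB1 step sᵀs/sᵀy coincides with the Cauchy step computed at the previous iterate, so "reuse the last SD step" implements the BB step:

```python
import numpy as np

def alternate_step(A, b, x, iters):
    """AS gradient sketch for min 0.5 x^T A x - b^T x: even iterations take
    the exact steepest-descent (Cauchy) step; odd iterations reuse it, which
    for quadratics equals the BB1 step s^T s / s^T y."""
    g = A @ x - b
    alpha = 0.0
    for k in range(iters):
        if not g @ g > 0:                     # already at the minimizer
            break
        if k % 2 == 0:
            alpha = (g @ g) / (g @ (A @ g))   # Cauchy (SD) step
        x = x - alpha * g                     # odd k reuses alpha (BB step)
        g = A @ x - b
    return x

rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, 20))       # SPD test matrix, cond = 10
b = rng.standard_normal(20)
x = alternate_step(A, b, np.zeros(20), 300)
print(float(np.linalg.norm(A @ x - b)))
```

The odd iterations are not guaranteed to decrease the objective, which is exactly the nonmonotonicity the article discusses; the R-linear convergence result covers this scheme nonetheless.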
17.
In this work, we study the gradient projection method for solving a class of stochastic control problems, using a meshfree approximation approach to handle the spatial dimension. Our main contribution is to extend the existing gradient projection method to moderately high-dimensional spaces. The moving least squares method and the general radial basis function interpolation method are introduced as showcase methods to demonstrate our computational framework, and rigorous numerical analysis is provided to prove the convergence of our meshfree approximation approach. We also present several numerical experiments to validate the theoretical results and demonstrate the performance of meshfree approximation in solving stochastic optimal control problems.
18.
On Early Stopping in Gradient Descent Learning
In this paper we study a family of gradient descent algorithms for approximating the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by a polynomially decreasing sequence of step sizes (the learning rate). By solving a bias-variance trade-off we obtain an early stopping rule and probabilistic upper bounds for the convergence of the algorithms. We also discuss the implications of these results for classification, where fast convergence rates can be achieved for plug-in classifiers. Connections with boosting, Landweber iterations, and online learning algorithms viewed as stochastic approximations of gradient descent are also addressed.
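A minimal sketch of the setup, with illustrative choices (Gaussian kernel, decay exponent, sample sizes) that are not taken from the paper: kernel gradient descent with polynomially decaying step sizes, monitoring a held-out error to pick the stopping time:

```python
import numpy as np

def gaussian_kernel(a, b, s=0.5):
    """Gaussian (Mercer) kernel matrix between 1-D point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * s * s))

rng = np.random.default_rng(0)
n = 40
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * X) + 0.3 * rng.standard_normal(n)   # noisy samples
Xv = rng.uniform(-1.0, 1.0, 100)                       # held-out points
yv = np.sin(np.pi * Xv)                                # true regression values

K = gaussian_kernel(X, X)
Kv = gaussian_kernel(Xv, X)
alpha = np.zeros(n)
best_err, best_t = np.inf, 0
errs = []
for t in range(2000):
    eta = 0.5 / (1.0 + t) ** 0.3                 # polynomially decaying steps
    alpha += eta * (y - K @ alpha) / n           # gradient descent in the RKHS
    err = float(np.mean((Kv @ alpha - yv) ** 2))
    errs.append(err)
    if err < best_err:
        best_err, best_t = err, t                # early stopping candidate
print(best_t, best_err)
```

Stopping at best_t rather than running to interpolation plays the role of the paper's bias-variance trade-off: more iterations shrink the bias but eventually fit the noise in y.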
19.
Xuemei Dong 《Journal of Mathematical Analysis and Applications》2008,341(2):1018-1027
We propose a stochastic gradient descent algorithm for learning the gradient of a regression function from random samples of function values. This is a learning algorithm involving Mercer kernels. By a detailed analysis in reproducing kernel Hilbert spaces, we provide some error bounds to show that the gradient estimated by the algorithm converges to the true gradient, under some natural conditions on the regression function and suitable choices of the step size and regularization parameters.
20.
Kevin J. Healy 《Queueing Systems》1992,12(3-4):257-272
We consider the problem of scheduling the arrivals of a fixed number of customers to a stochastic service mechanism to minimize an expected cost associated with operating the system. We consider the special case of exponentially distributed service times and the problems in general associated with obtaining exact analytic solutions. For general service time distributions we obtain approximate numerical solutions using a stochastic version of gradient search employing Infinitesimal Perturbation Analysis estimates of the objective function gradient obtained via simulation.
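The paper's model and cost are not fully specified in this snippet, so the sketch below substitutes a simplified version: equally spaced arrivals with exponential service, where the Lindley waiting-time recursion yields Infinitesimal Perturbation Analysis derivative estimates that drive a stochastic gradient search (the cost function, its coefficient c, and the step-size schedule are all assumptions):

```python
import random

def simulate_ipa(theta, n, rate, rng):
    """One run of n arrivals spaced theta apart with exp(rate) service.
    Lindley recursion W_{i+1} = max(0, W_i + S_i - theta); IPA gives
    dW_{i+1}/dtheta = dW_i/dtheta - 1 on busy steps, 0 otherwise."""
    W, dW = 0.0, 0.0
    total, dtotal = 0.0, 0.0
    for _ in range(n - 1):
        S = rng.expovariate(rate)
        z = W + S - theta
        if z > 0:
            W, dW = z, dW - 1.0
        else:
            W, dW = 0.0, 0.0
        total += W
        dtotal += dW
    return total, dtotal

def sgd_schedule(theta, n=20, rate=1.0, c=0.3, iters=3000, seed=0):
    """Stochastic gradient search on cost(theta) = E[sum W_i] + c*n*theta:
    waiting cost pushes theta up, schedule-length cost pushes it down."""
    rng = random.Random(seed)
    for t in range(iters):
        _, dtotal = simulate_ipa(theta, n, rate, rng)
        grad = dtotal + c * n                       # single-run IPA gradient
        theta = max(0.1, theta - 0.05 / (1 + 0.01 * t) * grad)
    return theta

theta_opt = sgd_schedule(5.0)
print(theta_opt)
```

Each iteration uses one simulated sample path, so the gradient estimate is noisy; the decaying step size is what lets the search settle near a stationary point of the expected cost.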