Similar Literature (20 results)
1.
In this paper we explore the problem of tracking a near-field moving target using fuzzy neural networks (FNNs). The moving target radiates narrow-band waves that impinge on an array of passive sensors. At each time instant, the location of the target is estimated by several judiciously constructed FNN-based angle and distance estimators. When the target is moving, its trajectory can be estimated on-line thanks to the parallel, real-time computational capability of the FNNs. Computer simulation results illustrate the performance of the FNN-based angle estimator, distance estimator, and near-field moving target tracker.

2.
This paper introduces the notions of s-differentiability for set-valued functions and Fs-differentiability for fuzzy-valued (F-valued) functions. Several criteria for these two types of differentiability are given. Finally, the Fs-differentiability and continuity of a class of fuzzy neural networks (FNNs) are studied and established.

3.
This paper presents a type of feedforward neural networks (FNNs) that can approximately interpolate, with arbitrary precision, any set of distinct data in multidimensional Euclidean spaces. They can also uniformly approximate any continuous function of one or two variables. Using the modulus of continuity of the function as the metric, the rates of convergence of the approximate interpolation networks are estimated, and two Jackson-type inequalities are established.

4.
There have been many studies on density theorems for approximation by radial basis feedforward neural networks, and some approximation problems for Gaussian radial basis feedforward neural networks (GRBFNs) in special function spaces have also been investigated. This paper considers approximation by GRBFNs in the space of continuous functions. It is proved that the rate of approximation by GRBFNs with $n^d$ neurons to any continuous function $f$ defined on a compact subset $K \subset \mathbb{R}^d$ can be controlled by $\omega(f, n^{-1/2})$, where $\omega(f, t)$ is the modulus of continuity of $f$.
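To make the stated rate concrete, here is a minimal one-dimensional sketch (not the paper's construction) of a Gaussian RBF network with n neurons centered on a uniform grid; the bandwidth choice and the least-squares fit are illustrative assumptions.

```python
# A minimal 1-D sketch of a Gaussian radial basis function network (GRBFN):
# n Gaussian neurons on a uniform grid (the paper's setting uses n^d neurons
# on a compact K in R^d). Bandwidth and least-squares fit are assumptions.
import numpy as np

def grbfn_fit(f, n):
    """Fit a GRBFN with n Gaussian neurons to f on [0, 1]."""
    centers = np.linspace(0.0, 1.0, n)
    width = 1.0 / n                              # bandwidth ~ grid spacing
    x = np.linspace(0.0, 1.0, 10 * n)            # dense sample for the fit
    phi = np.exp(-(((x[:, None] - centers[None, :]) / width) ** 2))
    coef, *_ = np.linalg.lstsq(phi, f(x), rcond=None)
    return centers, width, coef

def grbfn_eval(x, centers, width, coef):
    phi = np.exp(-(((np.asarray(x)[:, None] - centers[None, :]) / width) ** 2))
    return phi @ coef

# The uniform error shrinks as n grows, consistent with a bound of the
# form omega(f, n^(-1/2)) for merely continuous targets.
f = lambda x: np.abs(x - 0.5)                    # continuous but not smooth
xs = np.linspace(0.0, 1.0, 2000)
for n in (8, 32, 128):
    c, w, a = grbfn_fit(f, n)
    print(n, float(np.max(np.abs(f(xs) - grbfn_eval(xs, c, w, a)))))
```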

5.
In this paper, we introduce a type of approximation operators of neural networks with sigmoidal functions on compact intervals, and obtain pointwise and uniform estimates of the approximation. To improve the approximation rate, we further introduce a type of combinations of neural networks. Moreover, we show that the derivatives of functions can also be simultaneously approximated by the derivatives of the combinations. We also apply our method to construct approximation operators of neural networks with sigmoidal functions on infinite intervals.

6.
We prove results on approximation by neural networks with a single hidden layer in the weighted space $L^q_\omega$, obtaining upper and lower bound estimates for the network approximation. These results reveal, in the sense of weighted approximation, the relationship between the rate of convergence of the network and the number of hidden units, providing an important theoretical basis for applications of neural networks.

7.
Deep neural networks with rectified linear units (ReLU) have recently become more and more popular. However, the derivatives of the function represented by a ReLU network are not continuous, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which can give better approximations of smooth functions. Optimal algorithms are proposed to explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, that represent polynomials with no approximation error. For general smooth functions, we first project the function to its polynomial approximation, then use the proposed algorithms to construct the corresponding PowerNet. Thus, the error of the best polynomial approximation provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms for tensor-product grid and sparse grid discretizations to obtain polynomial approximations. Our constructive algorithms show clearly a close connection between spectral methods and deep neural networks: PowerNets with $n$ hidden layers can exactly represent polynomials up to degree $s^n$, where $s$ is the power of the RePUs. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
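The degree-multiplying mechanism can be seen in a few lines. The sketch below uses the standard identity $x^2 = \sigma_2(x) + \sigma_2(-x)$ for the RePU $\sigma_2(x) = \max(0, x)^2$; it illustrates the $s^n$ degree growth but is not the paper's full PowerNet construction.

```python
# A minimal sketch of rectified power units (RePU) and layer-wise degree
# growth; the wiring below (squares via sigma2(x) + sigma2(-x)) is a
# standard identity, not the paper's full algorithm.
import numpy as np

def repu(x, s=2):
    """Rectified power unit: max(0, x)**s."""
    return np.maximum(0.0, x) ** s

def square(x):
    # Exact identity: x^2 = repu(x) + repu(-x), with no approximation error.
    return repu(x) + repu(-x)

# Composing layers multiplies the representable degree by s = 2:
# one layer gives x^2 (degree s), two layers give (x^2)^2 = x^4 (degree s^2).
x = np.linspace(-2.0, 2.0, 5)
print(np.allclose(square(x), x ** 2))            # True
print(np.allclose(square(square(x)), x ** 4))    # True: degree s^n with n layers
```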

8.
We prove that an artificial neural network with multiple hidden layers and a kth-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a Euclidean space so as to achieve the Jackson rate of approximation. Moreover, if the function to be approximated has an analytic extension, then a nearly geometric rate of approximation can be achieved. We also discuss the problem of approximation on a compact subset of a Euclidean space by such networks with a classical sigmoidal response function.

Dedicated to Dr. C.A. Micchelli on the occasion of his fiftieth birthday, December 1992. Research supported in part by AFOSR Grant No. 226 113 and by the AvH Foundation.

9.
In this paper, we introduce a new type of neural network built from superpositions of a sigmoidal function and study its approximation capability. We investigate the multivariate quantitative constructive approximation of real continuous multivariate functions on a cube by such neural networks. The approximation is derived by establishing multivariate Jackson-type inequalities involving the multivariate modulus of smoothness of the target function. Our networks require no training in the traditional sense.

10.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures, the approximation rate stops improving for functions of very high smoothness.

11.
Single-hidden-layer neural networks and best polynomial approximation (cited by 7: 1 self-citation, 6 by others)
We study the approximation problem for single-hidden-layer neural networks. Taking the best polynomial approximation as the yardstick, we estimate by a constructive method the rate at which single-hidden-layer neural networks approximate continuous functions. The results show that for any continuous function defined on a compact set, one can construct a single-hidden-layer neural network that approximates it, with an approximation error at most twice that of the function's best polynomial approximation.
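In symbols, the quoted result says roughly the following; the notation $E_m(f)$ for the error of best approximation by polynomials of degree at most $m$ is ours, and the dependence of the network size on $m$ is as constructed in the paper:

```latex
% Hedged restatement of the abstract's claim: there is a single-hidden-layer
% network N (of size tied to the polynomial degree m) with
\[
  \|f - N\|_{C(K)} \;\le\; 2\,E_m(f),
  \qquad
  E_m(f) := \inf_{\deg p \le m}\,\|f - p\|_{C(K)}.
\]
```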

12.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. Firstly, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation-type neural network operators are constructed to approximate univariate and bivariate functions, respectively. Also, the errors of the approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
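As an illustration of such an operator, the sketch below builds the standard tanh-based bell function, whose integer translates form a partition of unity, and uses it as a univariate quasi-interpolant; the particular normalization and the padded sampling range are assumptions for the demo, not necessarily the operators constructed in the paper.

```python
# A minimal sketch of a tanh-based quasi-interpolation operator; the bell
# function phi and the truncation range are common choices assumed here.
import numpy as np

def sigma(x):
    return (1.0 + np.tanh(x)) / 2.0              # tanh squashed to (0, 1)

def phi(x):
    # Bell function; sum_k phi(x - k) == 1 (partition of unity).
    return (sigma(x + 1.0) - sigma(x - 1.0)) / 2.0

def quasi_interp(f, n, x, pad=10):
    """G_n f(x) = sum_k f(k/n) * phi(n*x - k), k over a padded range."""
    k = np.arange(-pad, n + pad + 1)             # extra terms soften the edges
    fk = f(np.clip(k / n, 0.0, 1.0))             # clamp samples to [0, 1]
    return np.sum(fk[None, :] * phi(n * np.asarray(x)[:, None] - k[None, :]), axis=1)

f = lambda x: np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 1000)
for n in (10, 40, 160):                          # error decays with n,
    print(n, float(np.max(np.abs(f(xs) - quasi_interp(f, n, xs)))))  # ~ omega(f, 1/n)
```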

13.
The aim of this paper is to investigate a class of neural network approximation operators with two weights and a logarithmic sigmoidal function, together with a class of quasi-interpolation operators. Using these operators as approximation tools, upper bounds on the errors in approximating continuous functions are estimated.

14.
In this paper, we develop two algorithms for Chebyshev approximation of continuous functions on $[0, 1]^n$ using the modulus of continuity and the maximum norm estimated from a given finite data system. The algorithms are based on constructive versions of Kolmogorov's superposition theorem. We apply one of the algorithms to neural networks.
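For reference, the classical (non-constructive) form of Kolmogorov's superposition theorem that the algorithms build on; the paper's constructive versions are not reproduced here:

```latex
% Kolmogorov's superposition theorem: every continuous f on [0,1]^n admits
% the exact representation
\[
  f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\Bigl(\sum_{p=1}^{n} \varphi_{q,p}(x_p)\Bigr),
\]
% with continuous univariate outer functions Phi_q and inner functions
% varphi_{q,p}.
```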

15.
In this paper, we consider the problem of approximation of continuous multivariate functions by neural networks with a bounded number of neurons in hidden layers. We prove the existence of single-hidden-layer networks with a bounded number of neurons whose approximation capabilities are not worse than those of networks with arbitrarily many neurons. Our analysis is based on the properties of ridge functions.

16.
The approximation of a holomorphic eigenvalue problem is considered. The main purpose is to present a construction by which the derivation of asymptotic error estimates for the approximate eigenvalues of Fredholm operator functions can be reduced to the derivation of these estimates for the case of matrix functions. (Some estimates for the latter problem can, in turn, be derived from error estimates for the zeros of the corresponding determinants.) The asymptotic error estimates are treated in part II of this paper, in [10]. The presented construction is also used in Section 3 to prove the stability of the algebraic multiplicity of eigenvalues under regular approximation.

The presented construction, in essence, reproduces the constructions in [7] for the case of compact approximation in subspaces and in [9] for the case of projection-like methods. It is simpler to use than the similar construction in [8], and allows a unified treatment of the general case and the case of projection-like methods, which was not achieved in [8, 9].

17.
Error estimates for interpolation neural networks in metric spaces (cited by 2: 0 self-citations, 2 by others)
We study neural network interpolation and approximation in metric spaces. We first introduce a class of generalized activation functions and discuss, by a fairly concise method, the existence of interpolation neural networks in metric spaces; we then give error estimates for the approximation of continuous functions by these interpolation networks.

18.
In this article, we study approximation properties of single-hidden-layer neural networks with weights varying over finitely many directions and with thresholds from an open interval. We obtain a necessary and simultaneously sufficient measure-theoretic condition for density of such networks in the space of continuous functions. Further, we prove a density result for neural networks with a specifically constructed activation function and a fixed number of neurons.

19.
Using the Riesz means of Fourier series, we construct a class of periodic neural networks with one hidden layer, as well as translation networks. Compared with existing results, these networks achieve the same approximation order while requiring fewer neurons in the hidden layer.

20.
Neural network interpolation and approximation in metric spaces (cited by 4: 1 self-citation, 3 by others)
Most existing work on interpolation neural networks has been carried out in Euclidean spaces, but many practical problems must be measured with non-Euclidean metrics. This paper studies neural network interpolation and approximation in general metric spaces: we first construct new interpolation networks in a metric space, then construct approximate interpolation networks on this basis, and finally study the approximation of continuous functionals by the approximate interpolation networks.
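A minimal sketch of the underlying idea: hidden units respond to the metric distance d(x, x_i) rather than to inner products, and exact interpolation reduces to a linear system. The Gaussian-shaped activation and the solvability of that system are assumptions for this demo, not the paper's construction.

```python
# Interpolation network over a general metric space (illustrative sketch):
# N(x) = sum_i c_i * g(d(x, x_i)), coefficients solved from G c = y with
# G[i, j] = g(d(x_i, x_j)). Solvability of G is assumed for the demo.
import numpy as np

def fit_interp_net(d, g, xs, ys):
    """Solve the interpolation system for exact fit at the nodes xs."""
    G = np.array([[g(d(a, b)) for b in xs] for a in xs])
    return np.linalg.solve(G, np.asarray(ys, dtype=float))

def eval_net(x, d, g, xs, coef):
    return sum(c * g(d(x, xi)) for c, xi in zip(coef, xs))

# Non-Euclidean example: geodesic (arc-length) distance on the unit circle.
d = lambda a, b: min(abs(a - b), 2 * np.pi - abs(a - b))
g = lambda r: np.exp(-r ** 2)                    # generalized activation of d(., .)

nodes = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
target = lambda t: np.cos(3 * t)
coef = fit_interp_net(d, g, nodes, [target(t) for t in nodes])
print([round(eval_net(t, d, g, nodes, coef) - target(t), 10) for t in nodes])  # ~0
```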
