Similar Articles (20 results)
1.
This paper proposes and estimates a globally flexible functional form for the cost function, which we call the Neural Cost Function (NCF). The proposed specification imposes a priori, and globally satisfies, all the properties that economic theory dictates. The functional form can be estimated easily using Markov Chain Monte Carlo (MCMC) techniques or standard iterative SURE. We use a large panel of U.S. banks to illustrate our approach. The results are consistent with previous knowledge about the sector and in accordance with mathematical production theory.
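The abstract does not spell out how the NCF imposes its properties, so the following is only a minimal sketch of one standard way a network can satisfy a theoretical restriction (here, monotonicity in input prices) a priori: nonnegative weights combined with increasing activations. The layer sizes, variable names, and data are hypothetical, and this is not claimed to be the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    # Numerically stable softplus: smooth and strictly increasing.
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def monotone_cost(log_prices, W1, b1, w2, b2):
    """One-hidden-layer network that is non-decreasing in every input,
    because all input-to-output weights are constrained >= 0 and the
    activation is increasing (a standard monotone-network device; the
    NCF construction in the paper may differ)."""
    h = softplus(log_prices @ np.exp(W1) + b1)   # exp() enforces W1 >= 0
    return h @ np.exp(w2) + b2                   # exp() enforces w2 >= 0

# Hypothetical dimensions: 3 input prices, 8 hidden nodes.
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), 0.0

x = rng.normal(size=(5, 3))
base = monotone_cost(x, W1, b1, w2, b2)
bumped = monotone_cost(x + np.array([0.1, 0.0, 0.0]), W1, b1, w2, b2)
assert np.all(bumped >= base)  # monotonicity in the first price holds
```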

2.
A highly flexible nonparametric regression model for predicting a response $y$ given covariates $\{x_k\}_{k=1}^{d}$ is the projection pursuit regression (PPR) model $\hat{y} = h(\mathbf{x}) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^{T}\mathbf{x})$, where the $f_j$ are general smooth functions with mean 0 and norm 1, and $\sum_{k=1}^{d} \alpha_{kj}^{2} = 1$. The standard PPR algorithm of Friedman and Stuetzle (1981) estimates the smooth functions $f_j$ using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with $M_{\max}$ linear combinations, then prunes back to a simpler model of size $M \le M_{\max}$, where $M$ and $M_{\max}$ are specified by the user. This article discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines. The direction coefficients $\alpha_j$, the amount of smoothing in each direction, and the numbers of terms $M$ and $M_{\max}$ are determined to optimize a single generalized cross-validation measure.
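As a rough single-term sketch of the spline flavor, the snippet below estimates one projection direction by minimizing residual error, with the ridge function fitted by scipy's UnivariateSpline and the 2-D direction parametrized by one angle to keep it on the unit sphere. The GCV-driven choice of smoothing, $M$, and $M_{\max}$ from the article is not reproduced, and the data and smoothing level are synthetic.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic data: y depends on one linear combination of x.
X = rng.normal(size=(300, 2))
y = np.sin(X @ np.array([0.6, 0.8])) + 0.1 * rng.normal(size=300)

def fit_ridge_function(alpha, X, y, s=3.0):
    """Project onto alpha, then smooth the scatter (z, y) with a spline."""
    alpha = alpha / np.linalg.norm(alpha)        # enforce sum(alpha^2) = 1
    z = X @ alpha
    order = np.argsort(z)
    return alpha, UnivariateSpline(z[order], y[order], s=s)

def residual_sse(angle, X, y):
    """One angle parametrizes the 2-D direction on the unit circle."""
    alpha = np.array([np.cos(angle), np.sin(angle)])
    alpha, spl = fit_ridge_function(alpha, X, y)
    return np.sum((y - spl(X @ alpha)) ** 2)

best = minimize(residual_sse, x0=0.3, args=(X, y), method="Nelder-Mead")
angle = best.x[0]
print("estimated direction:", np.cos(angle), np.sin(angle))  # ~ (0.6, 0.8) up to sign
```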

3.
The trial-and-error process of calculating the characteristics of an air vessel suitable to protect a rising main against the effects of hydraulic transients has proved cumbersome for the design engineer. The engineer's own experience, and the sets of charts that can be found in the literature, can provide some help. The aim of this paper is to present a neural network allowing instantaneous and direct calculation of air and vessel volumes from the system parameters. This neural network has been implemented in the hydraulic transient simulation package DYAGATS.
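The abstract does not list the network's inputs beyond "system parameters", so the sketch below only illustrates the kind of direct parameter-to-volume regression described: a small feedforward net fitted to entirely synthetic data, with a hypothetical feature set (pipe length, flow, head) and a made-up target formula standing in for chart-derived volumes. It is not DYAGATS's network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic stand-ins for "system parameters" of a rising main.
n = 2000
L = rng.uniform(500, 5000, n)      # pipe length [m]   (hypothetical feature)
Q = rng.uniform(0.05, 1.0, n)      # design flow [m3/s] (hypothetical feature)
H = rng.uniform(10, 120, n)        # pumping head [m]  (hypothetical feature)
X = np.column_stack([L, Q, H])
# Hypothetical smooth target standing in for a chart-derived vessel volume.
V = 0.002 * L * Q / np.sqrt(H) + 0.05 * rng.normal(size=n)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, V)
print(net.predict([[2000.0, 0.4, 60.0]]))  # instantaneous vessel-volume estimate
```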

4.
We construct explicit Morse functions on Grassmannian manifolds, and use them to find explicit taut embeddings of the quadratic hypersurfaces in complex projective spaces into Euclidean spaces.

5.
This paper investigates the approximation properties of standard feedforward neural networks (NNs) through the application of multivariate Taylor-series expansions. The capacity to approximate arbitrary functional forms is central to the NN philosophy, but is usually proved by allowing the number of hidden nodes to increase to infinity. The Taylor-series approach does not depend on such limiting cases. The paper shows how the series approximation depends on individual network weights. The role of the bias term is taken as an example. We are also able to compare the sigmoid and hyperbolic-tangent activation functions, with particular emphasis on their impact on the bias term. The paper concludes by discussing the potential importance of our results for NN modelling: of particular importance is the training process.
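A toy version of the idea is easy to reproduce symbolically: expand a single tanh unit in a Taylor series in the input and observe how the bias enters every coefficient. The single-unit setup below is illustrative only and is not the paper's general expansion.

```python
import sympy as sp

x, w, b, beta = sp.symbols('x w b beta')

# Output of one hidden unit of a feedforward net with tanh activation.
unit = beta * sp.tanh(w * x + b)

# Taylor expansion in x about 0: the coefficients mix w and b, so the
# bias b enters every term of the series, not just the constant one.
series = sp.series(unit, x, 0, 4).removeO()
print(sp.expand(series))

# With b = 0 the even-order terms vanish (tanh is odd), illustrating
# how the bias term enriches the attainable polynomial approximations.
print(sp.expand(series.subs(b, 0)))
```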

6.
The singular diffusion equation $u_t = (u^{-1}u_x)_x$ arises in many areas of application, e.g. in the central limit approximation to Carleman's model of the Boltzmann equation, or in the expansion of a thermalized electron cloud in plasma physics. This paper concerns the existence and uniqueness of the solution of a mixed boundary value problem for the equation $u_t = (u^{m-1}u_x)_x$ for $-1 < m \le 0$.
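The paper's contribution is analytic; purely as a numerical illustration of the equation, here is a minimal explicit finite-difference sketch for $u_t = (u^{m-1}u_x)_x$ with hypothetical positive initial data and crude zero-flux boundaries. No claim is made that this reflects the paper's mixed boundary value problem, and the time step is chosen conservatively rather than by analysis.

```python
import numpy as np

m = -0.5                      # any value in (-1, 0]; hypothetical choice
nx, dx, dt, steps = 101, 0.01, 2.0e-6, 20000

x = np.linspace(0.0, 1.0, nx)
u = 1.0 + 0.5 * np.sin(np.pi * x)      # hypothetical positive initial data

for _ in range(steps):
    # Flux u^{m-1} u_x evaluated at cell interfaces (midpoint average).
    umid = 0.5 * (u[1:] + u[:-1])
    flux = umid ** (m - 1) * (u[1:] - u[:-1]) / dx
    u[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
    u[0], u[-1] = u[1], u[-2]          # crude zero-flux boundaries

print(u.min(), u.max())                # profile flattens, u stays positive
```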

7.
We prove that an artificial neural network with multiple hidden layers and a $k$th-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a Euclidean space so as to achieve the Jackson rate of approximation. Moreover, if the function to be approximated has an analytic extension, then a nearly geometric rate of approximation can be achieved. We also discuss the problem of approximation of a compact subset of a Euclidean space with such networks with a classical sigmoidal response function. Dedicated to Dr. C. A. Micchelli on the occasion of his fiftieth birthday, December 1992. Research supported in part by AFOSR Grant No. 226 113 and by the AvH Foundation.

8.
It is proved that there exists an integrable function on $[0,1]^2$ whose integral is nondifferentiable in each direction belonging to a set everywhere dense in $[0, 2\pi]$ but is strongly differentiable. Translated from Matematicheskie Zametki, Vol. 64, No. 5, pp. 749–762, November, 1998.

9.
Error estimates for interpolation neural networks in metric spaces
We study interpolation and approximation by neural networks in metric spaces. First, a class of generalized activation functions is introduced, and the existence of interpolation neural networks in metric spaces is discussed by a fairly concise method; then an error estimate for the approximation of continuous functions by such interpolation networks is given.

10.
Let $s \ge 1$ be an integer and $W$ be the class of all functions having integrable partial derivatives on $[0,1]^s$. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned $\varepsilon > 0$ to each function in $W$. We prove that this number cannot be … if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any $\varepsilon > 0$, a network with … neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other $L^p$ norms. The research of the first author was supported by NSF Grant #DMS 92-0698. The research of the second author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513.

11.
The paper presents a comparison between two different flavors of nonlinear models to be used for the approximate solution of T-stage stochastic optimization (TSO) problems, a typical paradigm of Markovian decision processes. Specifically, the well-known class of neural networks is compared with a semi-local approach based on kernel functions, characterized by less demanding computational requirements. For this purpose, two alternative methods for the numerical solution of TSO are considered, one corresponding to classic approximate dynamic programming (ADP) and the other based on a direct optimization of the optimal control functions, introduced here for the first time. Advantages and drawbacks of the two classes of approximators in the TSO context are analyzed, in terms of computational burden and approximation capabilities. Their performances are then evaluated through simulations in two important high-dimensional TSO test cases, namely inventory forecasting and water reservoir management.
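Neither of the paper's approximators is reproduced here; as a bare-bones illustration of the classic ADP baseline it discusses, the sketch below runs backward value iteration on a toy one-dimensional inventory problem, with a fitted cubic polynomial standing in for the neural or kernel approximator. All costs, the demand law, grids, and the horizon are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
T, c_order, c_hold, c_short = 4, 1.0, 0.2, 3.0
states = np.linspace(0.0, 20.0, 41)           # inventory grid
actions = np.linspace(0.0, 10.0, 21)          # order quantities
demand = rng.uniform(0.0, 8.0, size=200)      # Monte Carlo demand sample

V = [None] * (T + 1)
V[T] = np.poly1d([0.0])                       # terminal value: zero

for t in range(T - 1, -1, -1):
    best = np.empty_like(states)
    for i, s in enumerate(states):
        q_vals = []
        for a in actions:
            nxt = np.clip(s + a - demand, 0.0, 20.0)
            stage = (c_order * a
                     + c_hold * np.maximum(s + a - demand, 0.0)
                     + c_short * np.maximum(demand - s - a, 0.0))
            q_vals.append(np.mean(stage + V[t + 1](nxt)))
        best[i] = min(q_vals)
    # Fit a cubic polynomial as the stage-t value-function approximator.
    V[t] = np.poly1d(np.polyfit(states, best, deg=3))

print(V[0](5.0))   # approximate optimal expected cost from stock level 5
```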

12.
Morozov, A. N. Mathematical Notes, 2001, 70(5–6), 688–697.
In this paper, we generalize Bernstein's theorem characterizing the space … by means of local approximations. The closed interval is partitioned into disjoint half-intervals, on which the errors of best approximation by polynomials of degree $k-1$, divided by the lengths of these half-intervals raised to the power $k$, are considered. The existence of the limits of these ratios as the lengths of the half-intervals tend to zero is a criterion for the existence of the $k$th derivative of a function. We prove the theorem in a stronger form and extend it to the spaces …

13.
The introduction of high-speed circuits to realize an arithmetic function $f$ as a piecewise linear approximation has created a need to understand how the number of segments depends on the interval $a \le x \le b$ and the desired approximation error $\varepsilon$. For the case of optimum non-uniform segments, we show that the number of segments is given as $c/\sqrt{\varepsilon}$ ($\varepsilon \to 0^{+}$), where $c = \tfrac{1}{4}\int_a^b \sqrt{|f''(x)|}\,dx$. Experimental data show that this approximation is close to the exact number of segments for a set of 14 benchmark functions. We also show that, if the segments have the same width (to reduce circuit complexity), then the number of segments is given by $c_u/\sqrt{\varepsilon}$ ($\varepsilon \to 0^{+}$), where $c_u = \tfrac{b-a}{4}\max_{a \le x \le b}\sqrt{|f''(x)|}$.
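Taking the asymptotic above (as reconstructed) at face value, the sketch below compares the predicted count $c/\sqrt{\varepsilon}$ with a greedy segmentation for one convex sample function. The function, interval, and tolerance are hypothetical, and the greedy count is only a proxy for the true optimum; for a convex $f$, the Chebyshev linear fit on a segment is the chord lowered by half the maximal chord deviation, which gives a cheap exact error formula.

```python
import numpy as np

f = np.exp                       # hypothetical convex benchmark on [0, 1]
a, b, eps = 0.0, 1.0, 1e-4

def best_linear_error(p, q, n=200):
    """Best L-inf linear-fit error for convex f on [p, q]:
    half the maximal deviation between f and its chord."""
    t = np.linspace(p, q, n)
    chord = f(p) + (f(q) - f(p)) * (t - p) / (q - p)
    return 0.5 * np.max(chord - f(t))

# Greedy segmentation: extend each segment until the tolerance is hit.
grid = np.linspace(a, b, 20001)
count, i = 0, 0
while i < len(grid) - 1:
    j = i + 1
    while j < len(grid) - 1 and best_linear_error(grid[i], grid[j + 1]) <= eps:
        j += 1
    count, i = count + 1, j

# Prediction c / sqrt(eps); for f = exp, (1/4)*integral of sqrt(e^x) dx
# over [0, 1] has the closed form (1/4) * 2 * (e^0.5 - 1).
c = 0.25 * 2.0 * (np.exp(0.5) - 1.0)
print(count, c / np.sqrt(eps))   # the two counts should be close
```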

14.
For any positive integer n, the famous Smarandache function S(n) is defined as the smallest positive integer m such that n | m!; that is, S(n) = min{m : m ∈ N, n | m!}. Let PS(n) denote the number of positive integers k in the interval [1, n] for which S(k) is prime. In an unpublished manuscript, J. Castillo suggested studying whether the limit of the ratio PS(n)/n exists as n → ∞ and, if it does, determining it. The main purpose of this paper is to study this problem using elementary methods and to settle it completely: we prove that the limit exists and equals 1.
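A quick empirical check of the statement is easy to run. The sketch below computes S(k) directly from the definition (tracking m! modulo n to avoid huge factorials) and tallies PS(N)/N, which should drift toward 1 as N grows; the cutoffs are arbitrary.

```python
def S(n):
    """Smallest m with n | m!, tracking r = m! mod n instead of m!."""
    m, r = 1, 1 % n
    while r != 0:
        m += 1
        r = (r * m) % n
    return m

def is_prime(p):
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p ** 0.5) + 1))

def PS_ratio(N):
    hits = sum(1 for k in range(1, N + 1) if is_prime(S(k)))
    return hits / N

for N in (100, 1000, 5000):
    print(N, PS_ratio(N))   # ratios creep toward 1, consistent with the theorem
```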

15.
In this paper, multistability is studied for two-dimensional neural networks with multilevel activation functions. It is shown that the system has n2 isolated equilibrium points which are locally exponentially stable, where the activation function has n segments. Furthermore, evoked by a periodic external input, n2 periodic orbits which are locally exponentially attractive can be found. These results are extended to k-neuron networks, which greatly enlarges the capacity of the associative memories. Examples and simulation results are used to illustrate the theory.
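The paper's precise activation and stability conditions are not reproduced here; purely as an illustration, the snippet simulates a two-neuron network x' = -x + W f(x) with a hypothetical 3-level staircase-like activation and counts the distinct stable states reached from random starts. With n = 3 levels and the weakly coupled weights chosen below, one expects n2 = 9 attractors.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(u):
    # Hypothetical 3-level (n = 3) staircase-like activation.
    return 0.5 * (np.tanh(5 * (u + 1.5)) + np.tanh(5 * (u - 1.5)))

W = np.array([[2.0, 0.1], [0.1, 2.0]])    # hypothetical weight matrix

def step(x, dt=0.01):
    return x + dt * (-x + W @ f(x))        # Euler step of x' = -x + W f(x)

finals = []
for _ in range(400):
    x = rng.uniform(-4, 4, size=2)
    for _ in range(5000):                  # integrate long enough to settle
        x = step(x)
    finals.append(tuple(np.round(x, 1)))   # round to merge identical limits

print(len(set(finals)))                    # expect 9 = n^2 attractors
```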

16.
In this paper, we consider a Lipschitz optimization problem (LOP) constrained by linear functions in $\mathbb{R}^n$. In general, since it is hard to solve (LOP) directly, (LOP) is transformed into a certain problem (MP) constrained by a ball in $\mathbb{R}^{n+1}$. Although there is no guarantee that the objective function of (MP) is quasi-convex, by using the idea of the quasi-conjugate function defined by Thach (1991) [1], we can construct its dual problem (DP) as a quasi-convex maximization problem. We show that the optimal value of (DP) coincides with the optimal value of (MP) multiplied by −1, and that each optimal solution of the primal and dual problems can easily be obtained from the other. Moreover, we formulate a bidual problem (BDP) for (MP) (that is, a dual problem for (DP)). We show that the objective function of (BDP) is a quasi-convex function majorized by the objective function of (MP) and that the optimal solution sets of (MP) and (BDP) coincide. Furthermore, we propose an outer approximation method for solving (DP).

17.
In this paper, we introduce a new type of neural network built from superpositions of a sigmoidal function and study its approximation capability. We investigate the quantitative constructive approximation of real continuous multivariate functions on a cube by such neural networks. The approximation is derived by establishing multivariate Jackson-type inequalities involving the multivariate modulus of smoothness of the target function. Our networks require no training in the traditional sense.
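Schematically, a Jackson-type inequality bounds the network's approximation error by a modulus of smoothness of the target. The display below is only a generic template of such a bound (sup norm on the cube, network with n hidden units constructed from f), not the paper's exact statement, constants, or modulus.

```latex
% Generic shape of a multivariate Jackson-type bound (template only):
\left\| f - N_n(f) \right\|_{C(Q)} \;\le\; C(d)\,
\omega\!\left(f, \tfrac{1}{n}\right),
\qquad Q = [0,1]^d,
```

where $N_n(f)$ is a network with $n$ hidden sigmoidal units built constructively from samples of $f$, and $\omega(f,\cdot)$ is a multivariate modulus of smoothness; the constant $C(d)$ depends on the dimension but not on $f$ or $n$.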

18.
A procedure is described for smoothing a convex function which not only preserves its convexity, but also, under suitable conditions, leaves the function unchanged over nearly all the regions where it is already smooth. The method is based on a convolution followed by a gluing. Controlling the Hessian of the resulting function is the key to this process, and it is shown that it can be done successfully provided that the original function is strictly convex over the boundary of the smooth regions.
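The paper's convolution-plus-gluing procedure is more refined; the snippet below shows only the first ingredient on a toy example. Averaging f(x) = |x| against a uniform kernel of half-width h yields a convex C1 function that coincides with f exactly wherever f is locally affine at distance at least h from the kink; the kernel and h are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(5)
h = 0.25                                  # hypothetical smoothing radius

def smoothed(x):
    """Closed-form convolution of f(x) = |x| with the uniform density on
    [-h, h]: (x**2 + h**2) / (2*h) near the kink, and exactly |x| once
    |x| >= h, because averaging a locally affine function changes nothing."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) >= h, np.abs(x), (x ** 2 + h ** 2) / (2 * h))

# Monte Carlo check of the closed form at one point inside the kink zone.
x0 = 0.1
mc = np.mean(np.abs(x0 + h * rng.uniform(-1.0, 1.0, 200000)))
print(mc, float(smoothed(x0)))            # both ~ (x0**2 + h**2)/(2h) = 0.145

# Convexity is preserved: second differences on a grid stay nonnegative.
xs = np.linspace(-1.0, 1.0, 2001)
assert np.all(np.diff(smoothed(xs), 2) >= -1e-12)
```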

19.
In general, there is a great difference between the usual three-layer feedforward neural networks with local basis functions in the hidden processing elements and those with standard sigmoidal transfer functions (in the following often called global basis functions). The reason for this difference in nature can be seen in the ridge-type arguments which are commonly used. It is the aim of this paper to show that the situation changes completely when, instead of ridge-type arguments, so-called hyperbolic-type arguments are used. In detail, we show that usual sigmoidal transfer functions evaluated at hyperbolic-type arguments (usually called sigma-pi units) can be used to construct local basis functions which vanish at infinity and, moreover, are integrable and give rise to a partition of unity, both in Cauchy's principal value sense. At this point, standard strategies for approximation with local basis functions can be used without giving up the concept of non-local sigmoidal transfer functions.

20.
The generalization problem considered in this paper assumes that a limited amount of input and output data from a system is available, and that from this information an estimate of the output produced by another input is required. The ideas arose in the study of neural networks, but apply equally to any approximation approach. The main result is that the type of neural network to be used for generalization should be determined by the prior knowledge about the nature of the output from the system. Without such information, either of two networks matching the training data is equally likely to be the better at estimating the output generated by the same system at a new input. Therefore, the search for an optimum generalization network for use on all problems is inappropriate. For both (0, 1) and accurate real outputs, it is shown that simple approximations exist that fit the data, so these are equally likely to generalize better than more sophisticated networks, unless prior knowledge is available that excludes them. For noisy real outputs, it is shown that the standard least squares approach forces the neural network to approximate an incorrect process; an alternative approach is outlined, which again is much easier to learn and use.
