20 similar documents found.
1.
Global approximation to arbitrary cost functions: A Bayesian approach with application to US banking
Panayotis G. Michaelides Efthymios G. Tsionas Angelos T. Vouldis Konstantinos N. Konstantakis 《European Journal of Operational Research》2015
This paper proposes and estimates a globally flexible functional form for the cost function, which we call the Neural Cost Function (NCF). The proposed specification imposes a priori, and satisfies globally, all the properties that economic theory dictates. The functional form can be estimated easily using Markov Chain Monte Carlo (MCMC) techniques or standard iterative SURE. We use a large panel of U.S. banks to illustrate our approach. The results are consistent with previous knowledge about the sector and in accordance with mathematical production theory.
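For intuition, below is a minimal sketch of what a single-hidden-layer "neural" cost function in logs can look like. Normalizing input prices by a numeraire (which enforces linear homogeneity in prices) is a common device and only an illustration; the paper's NCF imposes the full set of theoretical properties, which this toy version does not, and all names and dimensions here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_log_cost(log_y, log_w, params):
    """Log cost as a function of log outputs and log input prices.

    log_y : (m,) log outputs; log_w : (p,) log input prices.
    params: dict with hidden weights W (h, m+p-1), biases b (h,),
            output weights v (h,), output bias c (scalar).
    """
    # Normalize prices by the last one, then add its log back at the end:
    # this makes the cost function homogeneous of degree one in input prices.
    z = np.concatenate([log_y, log_w[:-1] - log_w[-1]])
    h = np.tanh(params["W"] @ z + params["b"])          # hidden layer
    return log_w[-1] + params["v"] @ h + params["c"]    # add back the numeraire

# illustrative dimensions and random parameters
m, p, hdim = 3, 3, 8
params = {"W": rng.normal(size=(hdim, m + p - 1)),
          "b": rng.normal(size=hdim),
          "v": rng.normal(size=hdim),
          "c": 0.0}
print(neural_log_cost(np.zeros(m), np.zeros(p), params))
```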
2.
Charles B. Roosen Trevor J. Hastie 《Journal of Computational and Graphical Statistics》2013,22(3):235-248
A highly flexible nonparametric regression model for predicting a response $y$ given covariates $\{x_k\}_{k=1}^{d}$ is the projection pursuit regression (PPR) model $\hat{y} = h(x) = \beta_0 + \sum_j \beta_j f_j(\alpha_j^T x)$, where the $f_j$ are general smooth functions with mean 0 and norm 1, and $\sum_{k=1}^{d} \alpha_{kj}^{2} = 1$. The standard PPR algorithm of Friedman and Stuetzle (1981) estimates the smooth functions $f_j$ using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with $M_{\max}$ linear combinations, then prunes back to a simpler model of size $M \le M_{\max}$, where $M$ and $M_{\max}$ are specified by the user. This article discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines. The direction coefficients $\alpha_j$, the amount of smoothing in each direction, and the number of terms $M$ and $M_{\max}$ are determined to optimize a single generalized cross-validation measure.
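The PPR form itself is easy to evaluate in code. The sketch below only computes predictions from an already-fitted model (directions, ridge functions and coefficients are assumed given); it does not implement the smoothing-spline/GCV fitting procedure the article is about.

```python
import numpy as np

def ppr_predict(X, beta0, betas, alphas, ridge_funcs):
    """Evaluate a projection pursuit regression model
    y_hat = beta0 + sum_j beta_j * f_j(alpha_j^T x).

    X           : (n, d) covariate matrix
    beta0       : scalar intercept
    betas       : (M,) term coefficients
    alphas      : (M, d) unit-norm direction vectors
    ridge_funcs : list of M callables, each a smooth 1-D function f_j
    """
    projections = X @ alphas.T                      # (n, M) ridge variables alpha_j^T x
    terms = np.column_stack([f(projections[:, j])   # apply each smooth function
                             for j, f in enumerate(ridge_funcs)])
    return beta0 + terms @ betas

# toy usage with two hand-picked ridge functions
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
alphas = np.array([[1.0, 0.0, 0.0], [0.0, 0.6, 0.8]])   # rows have unit norm
y_hat = ppr_predict(X, 0.5, np.array([1.0, 2.0]), alphas,
                    [np.sin, lambda t: t**2 - 1.0])
print(y_hat)
```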
3.
This paper investigates the approximation properties of standard feedforward neural networks (NNs) through the application of multivariate Taylor-series expansions. The capacity to approximate arbitrary functional forms is central to the NN philosophy, but is usually proved by allowing the number of hidden nodes to increase to infinity. The Taylor-series approach does not depend on such limiting cases. The paper shows how the series approximation depends on individual network weights. The role of the bias term is taken as an example. We are also able to compare the sigmoid and hyperbolic-tangent activation functions, with particular emphasis on their impact on the bias term. The paper concludes by discussing the potential importance of our results for NN modelling: of particular importance is the training process.
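For reference, the low-order Maclaurin expansions on which such a Taylor-series analysis operates are the standard identities below (not reproduced from the paper):

```latex
\sigma(x) = \frac{1}{1+e^{-x}} = \tfrac{1}{2} + \tfrac{x}{4} - \tfrac{x^{3}}{48} + \tfrac{x^{5}}{480} - \cdots,
\qquad
\tanh(x) = x - \tfrac{x^{3}}{3} + \tfrac{2x^{5}}{15} - \cdots,
\qquad
\tanh(x) = 2\,\sigma(2x) - 1 .
```

The nonzero constant term $\tfrac{1}{2}$ for the sigmoid, which is absent for the hyperbolic tangent, is the kind of difference that matters for the bias term.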
4.
The paper presents a comparison between two different flavors of nonlinear models to be used for the approximate solution of T-stage stochastic optimization (TSO) problems, a typical paradigm of Markovian decision processes. Specifically, the well-known class of neural networks is compared with a semi-local approach based on kernel functions, characterized by less demanding computational requirements. To this purpose, two alternative methods for the numerical solution of TSO are considered, one corresponding to classic approximate dynamic programming (ADP) and the other based on a direct optimization of the optimal control functions, introduced here for the first time. Advantages and drawbacks in the TSO context of the two classes of approximators are analyzed, in terms of computational burden and approximation capabilities. Then, their performances are evaluated through simulations in two important high-dimensional TSO test cases, namely inventory forecasting and water reservoir management.
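As a concrete, deliberately simplified illustration of the ADP route, the sketch below solves a toy finite-horizon inventory problem by fitting a value-function approximator at sampled states and stepping backward in time. The model, cost figures and the cubic-polynomial approximator are illustrative assumptions only; they are not the test cases or the neural/kernel approximators compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 4                                              # number of stages
states = np.linspace(0.0, 10.0, 41)                # sampled inventory levels
actions = np.linspace(0.0, 10.0, 21)               # order quantities
demand_samples = rng.uniform(0.0, 6.0, size=100)   # Monte Carlo demand scenarios

def stage_cost(x, u, d):
    # purchase cost + holding cost + shortage penalty (illustrative numbers)
    next_x = x + u - d
    return 1.0 * u + 0.5 * np.maximum(next_x, 0.0) + 4.0 * np.maximum(-next_x, 0.0)

def transition(x, u, d):
    return np.clip(x + u - d, 0.0, 10.0)

# value-function approximators, one per stage; here simple cubic polynomials
V_coeffs = [None] * (T + 1)
V_coeffs[T] = np.zeros(4)                          # terminal value = 0

def V(t, x):
    return np.polyval(V_coeffs[t], x)

for t in reversed(range(T)):                       # backward ADP recursion
    targets = []
    for x in states:
        # expected cost-to-go for each action, averaged over demand scenarios
        q = [np.mean(stage_cost(x, u, demand_samples)
                     + V(t + 1, transition(x, u, demand_samples)))
             for u in actions]
        targets.append(min(q))
    V_coeffs[t] = np.polyfit(states, targets, deg=3)   # fit the approximator

print("approximate optimal cost-to-go from x0=2:", V(0, 2.0))
```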
5.
Error estimates for interpolation neural networks in metric spaces
This paper studies interpolation and approximation by neural networks in metric spaces. A class of generalized activation functions is first introduced, and the existence of interpolation neural networks in metric spaces is discussed by a fairly concise method; error estimates for the approximation of continuous functions by these interpolation networks are then given.
6.
Let $s \ge 1$ be an integer and $W$ be the class of all functions having integrable partial derivatives on $[0,1]^{s}$. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned $\varepsilon > 0$ to each function in $W$. We prove that this number cannot be [formula not recovered] if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any [parameter not recovered] $> 0$, a network with [formula not recovered] neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other $L^{p}$ norms. The research of one author was supported by NSF Grant #DMS 92-0698. The research of another author was supported, in part, by AFOSR Grant #F49620-93-1-0150 and by NSF Grant #DMS 9404513.
7.
Gan Huang Jinde Cao 《Communications in Nonlinear Science & Numerical Simulation》2008,13(10):2279-2289
In this paper, multistability is studied for two-dimensional neural networks with multilevel activation functions. It is shown that the system has $n^2$ isolated equilibrium points which are locally exponentially stable, where the activation function has $n$ segments. Furthermore, evoked by a periodic external input, $n^2$ periodic orbits which are locally exponentially attractive can be found. These results are extended to $k$-neuron networks, which greatly enlarges the capacity of the associative memories. Examples and simulation results are used to illustrate the theory.
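A toy simulation conveys the flavor of the result. The weights and the staircase activation below are illustrative choices, not the system analyzed in the paper; the point is only that with a multilevel activation each coordinate can settle on a different level, so a two-neuron network exhibits $n^2$ stable equilibria when the activation has $n$ levels.

```python
import numpy as np

def staircase(x, levels=4):
    """A multilevel ('staircase') activation: rounds to the nearest of `levels` levels."""
    return np.clip(np.round(x), 0, levels - 1)

def simulate(x0, steps=2000, dt=0.01):
    """Euler integration of dx/dt = -x + W g(x) for a 2-neuron network."""
    W = np.eye(2)                      # illustrative self-coupling only
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ staircase(x))
    return x

# different initial conditions converge to different equilibria (levels)
for x0 in [(0.2, 0.3), (1.3, 0.1), (2.4, 3.1), (0.9, 2.2)]:
    print(x0, "->", np.round(simulate(x0), 3))
```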
8.
9.
The trial-and-error process of calculating the characteristics of an air vessel suitable to protect a rising main against the effects of hydraulic transients has proved to be cumbersome for the design engineer. The engineer's own experience, and the sets of charts that can be found in the literature, can provide some help. The aim of this paper is to present a neural network allowing instantaneous and direct calculation of air and vessel volumes from the system parameters. This neural network has been implemented in the hydraulic transient simulation package DYAGATS.
10.
In this paper we prove convergence rates for the problem of approximating functions $f$ by neural networks and similar constructions. We show that the smoother the activation functions are, the better the rates, provided that $f$ satisfies an integral representation. We give error bounds not only in Hilbert spaces but also in general Sobolev spaces $W^{m,r}(\Omega)$. Finally, we apply our results to a class of perceptrons and present a sufficient smoothness condition on $f$ guaranteeing the integral representation.
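The kind of integral representation such results typically assume (the precise conditions in the paper may differ) writes $f$ as a continuous superposition of the activation over weights and biases:

```latex
f(x) = \int_{\mathbb{R}^{d} \times \mathbb{R}} c(w, b)\, \sigma\!\left(\langle w, x\rangle + b\right)\, dw\, db,
\qquad \int |c(w, b)|\, dw\, db < \infty .
```

A network with $n$ neurons then amounts to a discretization of this integral, which is consistent with the rates improving as the activation becomes smoother.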
11.
12.
H. N. Mhaskar 《Advances in Computational Mathematics》1993,1(1):61-80
We prove that an artificial neural network with multiple hidden layers and a $k$th-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a Euclidean space so as to achieve the Jackson rate of approximation. Moreover, if the function to be approximated has an analytic extension, then a nearly geometric rate of approximation can be achieved. We also discuss the problem of approximation of a compact subset of a Euclidean space with such networks with a classical sigmoidal response function. Dedicated to Dr. C. A. Micchelli on the occasion of his fiftieth birthday, December 1992. Research supported in part by AFOSR Grant No. 226 113 and by the AvH Foundation.
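In this literature, a $k$th-order sigmoidal response function is usually understood to satisfy conditions of roughly the following form (the paper's exact definition may differ in details):

```latex
\lim_{x \to -\infty} \frac{\sigma(x)}{x^{k}} = 0, \qquad
\lim_{x \to +\infty} \frac{\sigma(x)}{x^{k}} = 1, \qquad
|\sigma(x)| \le C\,(1 + |x|)^{k} \quad \text{for all } x \in \mathbb{R},
```

with $k = 0$ recovering the classical bounded sigmoid.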
13.
In this paper, a family of interpolation neural network operators is introduced. Here, ramp functions as well as sigmoidal functions generated by central B-splines are considered as activation functions. The interpolation properties of these operators are proved, together with a uniform approximation theorem with order, for continuous functions defined on bounded intervals. The relations with the theory of neural networks and with the theory of the generalized sampling operators are discussed.
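To see what an operator of this kind looks like in the simplest case, the sketch below uses the hat function, i.e. the central B-spline of order 2, which is a difference of ramp functions, as the generating function; the resulting operator reproduces the sampled values exactly at the nodes. This is only an illustrative special case, not the family of operators constructed in the paper.

```python
import numpy as np

def hat(x):
    """Central B-spline of order 2 (the 'hat' function): hat(0)=1, support [-1, 1].
    It is a combination of ramp functions r(t)=max(t,0):
    hat(x) = r(x+1) - 2*r(x) + r(x-1)."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def interp_operator(f, n, x):
    """F_n(f)(x) = sum_{k=0..n} f(k/n) * hat(n*x - k) on [0, 1].

    Because hat(j) = 1 if j == 0 and 0 at all other integers, F_n(f)(k/n) = f(k/n),
    i.e. the operator interpolates f at the equally spaced nodes k/n."""
    k = np.arange(n + 1)
    x = np.atleast_1d(x)
    return hat(n * x[:, None] - k[None, :]) @ f(k / n)

f = np.sin
n = 10
nodes = np.arange(n + 1) / n
print(np.allclose(interp_operator(f, n, nodes), f(nodes)))   # True: interpolation at the nodes
print(interp_operator(f, n, np.array([0.23, 0.57])))         # piecewise-linear values in between
```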
14.
In this paper, we introduce a new type of neural network constructed by superpositions of a sigmoidal function and study its approximation capability. We investigate the multivariate quantitative constructive approximation of real continuous multivariate functions on a cube by such neural networks. The approximation is derived by establishing multivariate Jackson-type inequalities involving the multivariate modulus of smoothness of the target function. Our networks require no training in the traditional sense.
15.
A new algorithm is proposed in which an evolution strategy is used to adjust the basis-function coefficients of a functional network for multivariate function approximation, and its learning procedure is given. The self-adaptivity of the evolution strategy is exploited to determine the coefficients in front of the basis functions, improving on the traditional approach in which the parameters of a functional network are obtained by solving a system of equations. Simulation results show that the new approximation algorithm is simple and feasible, can approximate a given function to a prescribed accuracy, and has fast convergence and good approximation performance.
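A minimal sketch of the idea: a (1+1) evolution strategy with a simple step-size adaptation tunes the coefficients of a fixed basis so that the expansion fits a target function. The basis, target and adaptation rule are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# fixed basis functions of a simple "functional network" on [0, 1]
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2, np.sin, np.cos]

def model(coeffs, x):
    return sum(c * phi(x) for c, phi in zip(coeffs, basis))

def mse(coeffs, x, y):
    return np.mean((model(coeffs, x) - y) ** 2)

# target function and training grid
target = lambda x: np.exp(x) * np.cos(3 * x)
x = np.linspace(0.0, 1.0, 200)
y = target(x)

# (1+1) evolution strategy: mutate, keep improvements, adapt the step size
coeffs = np.zeros(len(basis))
sigma, best = 0.5, mse(coeffs, x, y)
for it in range(5000):
    cand = coeffs + sigma * rng.normal(size=coeffs.shape)   # mutate all coefficients
    err = mse(cand, x, y)
    if err < best:                      # accept improving mutations only
        coeffs, best = cand, err
        sigma *= 1.1                    # successful step: enlarge the step size
    else:
        sigma *= 0.98                   # failed step: shrink the step size

print("final MSE:", best)
print("coefficients:", np.round(coeffs, 3))
```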
16.
In this paper, we discuss some analytic properties of the hyperbolic tangent function and estimate approximation errors of neural network operators with the hyperbolic tangent activation function. Firstly, an equation of partitions of unity for the hyperbolic tangent function is given. Then, two kinds of quasi-interpolation type neural network operators are constructed to approximate univariate and bivariate functions, respectively. Also, the errors of the approximation are estimated by means of the modulus of continuity of the function. Moreover, for approximated functions with high-order derivatives, the approximation errors of the constructed operators are estimated.
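One common way to obtain such a partition of unity (the constants in the paper may differ) is to take a scaled difference of shifted hyperbolic tangents as the density function; a telescoping argument then gives

```latex
\Phi(x) := \tfrac{1}{4}\bigl(\tanh(x+1) - \tanh(x-1)\bigr), \qquad
\sum_{k \in \mathbb{Z}} \Phi(x - k) = 1 \quad \text{for all } x \in \mathbb{R},
```

and the quasi-interpolation operators have the generic form $F_n(f)(x) = \sum_k f(k/n)\,\Phi(nx - k)$, up to the normalization used at the boundary of a bounded interval.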
17.
Nonparametric nonlinear regression using polynomial and neural approximators: a numerical comparison
The solution of nonparametric regression problems is addressed via polynomial approximators and one-hidden-layer feedforward neural approximators. Such families of approximating functions are compared as to both complexity and experimental performances in finding a nonparametric mapping that interpolates a finite set of samples according to the empirical risk minimization approach. The theoretical background that is necessary to interpret the numerical results is presented. Two simulation case studies are analyzed to fully understand the practical issues that may arise in solving such problems. The issues depend on both the approximation capabilities of the approximating functions and the effectiveness of the methodologies that are available to select the tuning parameters, i.e., the coefficients of the polynomials and the weights of the neural networks. The simulation results show that the neural approximators perform better than the polynomial ones with the same number of parameters. However, this superiority can be jeopardized by the presence of local minima, which affects the neural networks but does not affect the polynomial approach.
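One way to make the "same number of parameters" comparison concrete is to count coefficients: a full polynomial of degree $\delta$ in $d$ variables has $\binom{d+\delta}{\delta}$ coefficients, while a one-hidden-layer network with $\nu$ sigmoidal units and scalar output has $\nu(d+2)+1$ weights and biases. The snippet below is just this bookkeeping under that (assumed) parameterization; it is not taken from the paper.

```python
from math import comb

def poly_params(d, degree):
    """Coefficients of a full multivariate polynomial of the given degree in d variables."""
    return comb(d + degree, degree)

def mlp_params(d, hidden):
    """Weights and biases of a one-hidden-layer network with scalar output:
    hidden*(d inputs + 1 bias) + hidden output weights + 1 output bias."""
    return hidden * (d + 1) + hidden + 1

for d in (2, 5, 10):
    for degree in (3, 5):
        p = poly_params(d, degree)
        # hidden-layer size giving (roughly) the same parameter budget
        nu = max(1, round((p - 1) / (d + 2)))
        print(f"d={d:2d}  degree={degree}  poly params={p:5d}  "
              f"matched hidden units={nu:4d}  mlp params={mlp_params(d, nu):5d}")
```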
18.
19.
Peter C. Bell 《Operations Research Letters》1982,1(6):230-235
A number of approximation methods for the analysis of restricted queuing networks that have been presented in the literature are applied to three simple networks. The techniques are shown to be quite fragile in that, under reasonable and very general conditions, they compute throughput rates which exceed theoretically derived upper bounds.
20.
The existing scheme of rational polynomial approximants, defined by multivariate power series, is extended to define approximants with branch points. An existence theorem is obtained. The basic properties used to define the rational approximants can be preserved almost intact. In particular, the local behavior of the …