Similar Literature
Found 20 similar records (search time: 7 ms)
1.
The subspace M̃ of L²(ℂⁿ), which is composed of Gaussian series and contains the subspace M spanned by the Gaussian functions given in the paper [6] by Du and Wong, has the property that the product of two Daubechies operators with symbols in M̃ is a Daubechies operator with symbol H in M̃. Furthermore, an explicit expression for the symbol H is given.

2.
A nonnegative, infinitely differentiable function φ defined on the real line is called a Friedrichs mollifier function if it has support in [0, 1] and ∫₀¹ φ(t) dt = 1. In this article, the following problem is considered. Determine μ_k = inf ∫₀¹ |φ^(k)(t)| dt, k = 1, 2, ..., where φ^(k) denotes the kth derivative of φ and the infimum is taken over the set of all mollifier functions φ, which is a convex set. This problem has applications to monotone polynomial approximation, as shown by this author elsewhere. The problem is reducible to three equivalent problems: a nonlinear programming problem, a problem on the functions of bounded variation, and an approximation problem involving Tchebycheff polynomials. One of the results of this article shows that μ_k = k!·2^(2k−1), k = 1, 2, .... The numerical values of the optimal solutions of the three problems are obtained as a function of k. Some inequalities of independent interest are also derived. This research was supported in part by the National Science Foundation, Grant No. GK-32712.
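As a quick numerical illustration of the k = 1 case (a sketch, not from the paper: the bump function and the grid are my own choices), the total variation ∫₀¹ |φ′(t)| dt of any concrete mollifier must exceed the infimum μ₁ = 1!·2¹ = 2:

```python
import numpy as np

# Sketch: evaluate the k = 1 objective for the standard smooth bump
# exp(-1/(t(1-t))), normalized to integrate to 1 on [0, 1].
t = np.linspace(1e-6, 1.0 - 1e-6, 20001)
dt = t[1] - t[0]
bump = np.exp(-1.0 / (t * (1.0 - t)))
phi = bump / (bump.sum() * dt)          # crude normalization so that the integral is ~1

# For k = 1 the objective is the total variation of phi, which for a
# single bump equals twice its maximum.
tv = np.abs(np.diff(phi)).sum()

# The infimum over all mollifiers is mu_1 = 1! * 2^(2*1-1) = 2; any concrete
# smooth mollifier stays strictly above it.
print(tv)
```

A sharper bump would push the value closer to 2, but no admissible mollifier attains it.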

3.
《Optimization》2012,61(3-4):239-259
In this paper we propose a new class of continuously differentiable, globally exact penalty functions for the solution of minimization problems with simple bounds on some (or all) of the variables. The penalty functions in this class fully exploit the structure of the problem and are easily computable. Furthermore, we introduce a simple updating rule for the penalty parameter that can be used in conjunction with unconstrained minimization techniques to solve the original problem.

4.
It is well known that nonlinear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but to a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently by DeVore and Ron (in press) [2] for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function, the preferred kernel in machine learning and several engineering problems. We introduce and analyze in this paper a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. At heart it employs the strategy for nonlinear approximation of DeVore-Ron, but it selects kernels by a method that is not straightforward. The crux of the difficulty lies in the necessity to vary the tension parameter in the Gaussian function spatially according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. We show that our algorithm is suitably optimal in the sense that it provides approximation rates similar to other established nonlinear methodologies like spline and wavelet approximations. As expected and desired, the approximation rates can be as high as needed and are essentially saturated only by the smoothness of the approximand.

5.
We show that, for a certain class of nonlinear functions of Gaussian sequences, the limiting distribution of normalized sums of the nonlinear function values of a sequence is the convolution of a Gaussian distribution with another non-Gaussian distribution.

6.
This paper describes the identification of nonlinear dynamic systems with a Gaussian process (GP) prior model. This model is an example of the use of a probabilistic non-parametric modelling approach. GPs are flexible models capable of modelling complex nonlinear systems. Also, an attractive feature of this model is that the variance associated with the model response is readily obtained, and it can be used to highlight areas of the input space where prediction quality is poor, owing to the lack of data or complexity (high variance). We illustrate the GP modelling technique on a simulated example of a nonlinear system.
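A minimal sketch of the idea (plain NumPy; the toy system, kernel hyperparameters, and query points are my own choices, not the paper's case study): fit a zero-mean GP with a squared-exponential kernel to noisy samples of a nonlinear system, and observe that the predictive variance grows where there is no data:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential covariance between 1-D point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2, length=1.0):
    # Posterior mean and pointwise variance of a zero-mean GP.
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(x_train.size)
    Ks = rbf_kernel(x_test, x_train, length)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(x_test, x_test, length) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Simulated nonlinear system: y = sin(x), observed with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 25)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

xs = np.array([3.0, 12.0])       # one query inside the data, one far outside
mean, var = gp_predict(x, y, xs)
print(mean, var)                 # variance is much larger where there is no data
```

The second query point lies well outside the training interval, so its predictive variance approaches the prior variance, exactly the "poor prediction quality" signal the abstract describes.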

7.
Gaussian radial basis functions (RBFs) on an infinite interval with uniform grid spacing h are defined by φ(x; α, h) ≡ exp(−[α²/h²]x²). The only significant numerical parameter is α, the inverse width of the RBF functions relative to h. In the limit α → 0, we demonstrate that the coefficients of the interpolant of a typical function f(x) grow proportionally to exp(π²/[4α²]). However, we also show that the approximation to the constant f(x) ≡ 1 is a Jacobian theta function whose coefficients do not blow up as α → 0. The subtle interplay between the complex-plane singularities of f(x) (the function being approximated) and the RBF inverse width parameter α is analyzed. For α ≈ 1/2, the size of the RBF coefficients and the condition number of the interpolation matrix are both no larger than O(10⁴) and the error saturation is smaller than machine epsilon, so this α is the center of a "safe operating range" for Gaussian RBFs.
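The conditioning claim is easy to reproduce numerically (a sketch; the grid, interval, and the two α values compared are my own choices): build the Gaussian RBF interpolation matrix on a uniform grid and compare condition numbers at α = 1/2 and at a smaller α:

```python
import numpy as np

def gaussian_rbf_matrix(x, alpha, h):
    # Interpolation matrix for phi(x; alpha, h) = exp(-(alpha/h)^2 x^2).
    d = x[:, None] - x[None, :]
    return np.exp(-((alpha / h) ** 2) * d * d)

h = 0.5
x = np.arange(-10.0, 10.0 + h / 2, h)   # uniform grid of spacing h

cond_half = np.linalg.cond(gaussian_rbf_matrix(x, 0.5, h))
cond_quarter = np.linalg.cond(gaussian_rbf_matrix(x, 0.25, h))
# alpha = 1/2 sits in the "safe operating range"; flattening the basis
# (smaller alpha) makes the matrix rapidly more ill-conditioned.
print(cond_half, cond_quarter)
```

On this grid the α = 1/2 matrix stays modestly conditioned, while halving α already sends the condition number up by many orders of magnitude, consistent with the exp(π²/[4α²]) growth.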

8.
Robust optimization problems, which have uncertain data, are considered. We prove surrogate duality theorems for robust quasiconvex optimization problems and surrogate min–max duality theorems for robust convex optimization problems. We give necessary and sufficient constraint qualifications for surrogate duality and surrogate min–max duality, and show some examples at which such duality results are used effectively. Moreover, we obtain a surrogate duality theorem and a surrogate min–max duality theorem for semi-definite optimization problems in the face of data uncertainty.

9.
Using a measure change, an exact estimate and an approximate recursive estimate are obtained for the conditional density of a hidden signal and a parameter in a state-space model, where the hidden signal has deterministic dynamics and is observed in fractional Gaussian noise.

10.
Let {X_n, n ≥ 1} be a sequence of independent Gaussian random vectors in ℝ^d, d ≥ 2. In this paper an asymptotic evaluation of P{max_{1≤i≤n} X_i ≤ a_n Z + b_n}, with Z another Gaussian random vector, is obtained for two vectors a_n, b_n ∈ ℝ^d obeying certain conditions.

11.
This research presents a new constrained optimization approach for solving systems of nonlinear equations. Particular advantages are realized when all of the equations are convex. For example, a global algorithm for finding the zero of a convex real-valued function of one variable is developed. If the algorithm terminates finitely, then either the algorithm has computed a zero or determined that none exists; if an infinite sequence is generated, either that sequence converges to a zero or again no zero exists. For solving n-dimensional convex equations, the constrained optimization algorithm has the capability of determining that the system of equations has no solution. Global convergence of the algorithm is established under weaker conditions than previously known and, in this case, the algorithm reduces to Newton’s method together with a constrained line search at each iteration. It is also shown how this approach has led to a new algorithm for solving the linear complementarity problem.
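A minimal sketch of the one-dimensional core (my own toy function; the paper's full algorithm adds the constrained line search and the no-solution test, which are not reproduced here): plain Newton iteration on a convex function, started where both f and f′ are positive, decreases monotonically to the zero:

```python
def newton_convex_zero(f, fprime, x0, tol=1e-12, max_iter=100):
    # Plain Newton iteration; for convex f with f(x0) > 0 and f'(x0) > 0
    # the iterates decrease monotonically to the rightmost zero of f.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Zero of the convex function f(x) = x^2 - 2, started to its right.
root = newton_convex_zero(lambda x: x * x - 2.0, lambda x: 2.0 * x, 3.0)
print(root)
```

Convexity is what makes this globally reliable from such starting points: the function lies above each tangent line, so every Newton iterate stays on the same side of the zero.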

12.
A nonnegative, infinitely differentiable function φ defined on the real line is called a Friedrichs mollifier function if it has support in [0, 1] and ∫₀¹ φ(t) dt = 1. In this article the following problem is considered. Determine μ_k = inf ∫₀¹ |φ^(k)(t)| dt, k = 1, 2, ..., where φ^(k) denotes the kth derivative of φ and the infimum is taken over the set of all mollifier functions. This problem has applications to monotone polynomial approximation, as shown by this author elsewhere. In this article, the structure of the problem of determining μ_k is analyzed, and it is shown that the problem is reducible to a nonlinear programming problem involving the minimization of a strictly convex function of [(k−1)/2] variables, subject to a simple ordering restriction on the variables. An optimization problem on the functions of bounded variation, which is equivalent to the nonlinear programming problem, is also developed. The results of this article and those from approximation of functions theory are applied elsewhere to derive numerical values of various mathematical quantities involved in this article, e.g., μ_k = k!·2^(2k−1) for all k = 1, 2, ..., and to establish certain inequalities of independent interest. This article concentrates on problem reduction and equivalence, and not numerical value. This research was supported in part by the National Science Foundation under Grant No. GK-32712.

13.
Let {X_n, n ≥ 1} be a sequence of standard Gaussian random vectors in ℝ^d, d ≥ 2. In this paper we derive lower and upper bounds for the tail probability P{X_n > t_n}, with t_n ∈ ℝ^d some threshold. We improve, for instance, bounds on Mills' ratio obtained by Savage (1962, J. Res. Nat. Bur. Standards Sect. B, 66, 93–96). Furthermore, we prove exact asymptotics under fairly general conditions on both X_n and t_n, as ‖t_n‖ → ∞, where the correlation matrix Σ_n of X_n may also depend on n.
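For the one-dimensional case, the classical Mills-ratio bounds that results of this type refine are easy to check numerically (a sketch; the threshold t = 2 is my own choice): φ(t)(1/t − 1/t³) ≤ P{X > t} ≤ φ(t)/t for a standard Gaussian X with density φ:

```python
import math

def std_normal_tail(t):
    # P{X > t} for a standard Gaussian X, via the complementary error function.
    return 0.5 * math.erfc(t / math.sqrt(2.0))

t = 2.0
phi = math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)  # standard normal density
lower = phi * (1.0 / t - 1.0 / t**3)   # classical lower Mills-ratio bound
upper = phi / t                        # classical upper Mills-ratio bound
tail = std_normal_tail(t)
print(lower, tail, upper)
```

Both bounds tighten as t grows, which is why sharper versions of Mills' ratio (as in Savage's paper and its multivariate refinements) matter for tail asymptotics.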

14.
An explicit formula is obtained for the nonlinear predictor of Y(t) = X(t)² − E(X(t)²), where X(t) is an N-ple Gaussian Markov process.

15.
Explicit bounds for the quadrature error of the nth Gauss–Legendre quadrature rule applied to the mth Chebyshev polynomial are derived. They are precise up to the order O(m⁴n⁻⁶). As an application, error constants for classes of functions which are analytic in the interior of an ellipse are estimated. The location of the maxima of the corresponding kernel function is investigated. Dedicated to Luigi Gatteschi on the occasion of his 70th birthday.
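The setup is easy to reproduce (a sketch; the parameter values n = 10 and the range of m are my own): the n-point Gauss–Legendre rule integrates T_m exactly for degree m ≤ 2n − 1, and the error becomes O(1) once m reaches 2n:

```python
import numpy as np

def gl_error_on_chebyshev(n, m):
    # Quadrature error of the n-point Gauss-Legendre rule on T_m over [-1, 1].
    nodes, weights = np.polynomial.legendre.leggauss(n)
    approx = weights @ np.cos(m * np.arccos(nodes))    # T_m(x) = cos(m arccos x)
    exact = 0.0 if m % 2 else 2.0 / (1.0 - m * m)      # exact integral of T_m
    return approx - exact

# Exact (to roundoff) for degrees up to 2n - 1, visibly wrong at m = 2n.
errs_exact = [gl_error_on_chebyshev(10, m) for m in range(20)]
err_2n = gl_error_on_chebyshev(10, 20)
print(max(abs(e) for e in errs_exact), err_2n)
```

The interesting regime studied in the paper is the size of these nonzero errors as functions of m and n, which is where the O(m⁴n⁻⁶) precision of the bounds applies.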

16.
We describe an inexact version of Fletcher's QL algorithm with second-order corrections for minimizing composite nonsmooth functions. The method is shown to retain the global and local convergence properties of the original version if the parameters are chosen appropriately. It is shown how the inexact method can be implemented for the case in which the function to be minimized is an exact penalty function arising from the standard nonlinear programming problem. The method can also be applied to the problems of nonlinear l₁- and l∞-approximation. This research was supported in part by the National Science Foundation under Grant DMS-8619903, and by the Air Force Office of Scientific Research under Grant AFOSR-ISSA-870092.

17.
Let λ be a positive number, and let (x_j : j ∈ ℤ) be a fixed Riesz-basis sequence, namely, (x_j) is strictly increasing, and the set of functions {e^{ix_j t} : j ∈ ℤ} is a Riesz basis (i.e., unconditional basis) for L²[−π, π]. Given a function f ∈ L²(ℝ) whose Fourier transform is zero almost everywhere outside the interval [−π, π], there is a unique square-summable sequence (a_j : j ∈ ℤ), depending on λ and f, such that the Gaussian interpolant
I_λ(f)(x) := Σ_{j∈ℤ} a_j e^{−λ(x−x_j)²}, x ∈ ℝ,
is continuous and square integrable on (−∞, ∞), and satisfies the interpolatory conditions I_λ(f)(x_j) = f(x_j), j ∈ ℤ. It is shown that I_λ(f) converges to f in L²(ℝ), and also uniformly on ℝ, as λ → 0⁺. In addition, the fundamental functions for the univariate interpolation process are defined, and some of their basic properties, including their exponential decay for large argument, are established. It is further shown that the associated interpolation operators are bounded on L^p(ℝ) for every p ∈ [1, ∞].

18.
Mathematical model and optimization in production investment
We study a class of nonlinear programming models derived from various investment problems, such as industrial production investment, educational investment, and farming investment. We analyze the properties of solutions of the models and obtain a polynomial algorithm. We also apply our theory to a concrete example to demonstrate the algorithm's complexity.

19.
20.
A fundamental problem in constrained nonlinear optimization algorithms is the design of a satisfactory stepsize strategy which converges to unity. In this paper, we discuss stepsize strategies for Newton or quasi-Newton algorithms which require the solution of quadratic optimization subproblems. Five stepsize strategies are considered for three different subproblems, and the conditions under which the stepsizes will converge to unity are established. It is shown that these conditions depend critically on the convergence of the Hessian approximations used in the algorithms. The stepsize strategies are constructed using basic principles from which the conditions for unit stepsizes follow. Numerical results are discussed in an Appendix. Paper presented to the XI Symposium on Mathematical Programming, Bonn, Germany, 1982. This work was completed while the author was visiting the European University in Florence where, in particular, Professors Fitoussi and Velupillai provided the opportunity for its completion. The author is grateful to Dr. L. C. W. Dixon for his helpful comments and criticisms on numerous versions of the paper, and to R. G. Becker for programming the algorithms in Section 3 and for helpful discussions concerning these algorithms.
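A sketch of the simplest strategy of this kind (my own example function; none of the paper's five strategies is reproduced exactly): Newton's method with Armijo backtracking, where the stepsize is damped far from the solution but settles at unity once the local quadratic model becomes accurate:

```python
def newton_armijo(f, grad, hess, x0, tol=1e-10, beta=0.5, sigma=1e-4):
    # Newton direction plus Armijo backtracking; records accepted stepsizes.
    x = float(x0)
    steps = []
    for _ in range(50):
        g = grad(x)
        if abs(g) < tol:
            break
        d = -g / hess(x)
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * g * d:
            t *= beta
        steps.append(t)
        x += t * d
    return x, steps

# f(x) = sqrt(1 + x^2): the full Newton step overshoots badly far from 0,
# so early stepsizes are damped; near the minimizer they are exactly 1.
f = lambda x: (1.0 + x * x) ** 0.5
grad = lambda x: x / (1.0 + x * x) ** 0.5
hess = lambda x: (1.0 + x * x) ** -1.5
xstar, steps = newton_armijo(f, grad, hess, 3.0)
print(steps)
```

Convergence of the stepsizes to unity is exactly what preserves the fast local rate of the Newton iteration, which is the behavior the paper's conditions characterize.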


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号