Similar Literature (20 results)
1.
We study approximation of univariate functions defined over the reals. We assume that the rth derivative of a function is bounded in a weighted Lp norm with a weight ψ. Approximation algorithms use the values of a function and its derivatives up to order r−1. The worst case error of an algorithm is defined in a weighted Lq norm with a weight ρ. We study the worst case (information) complexity of the weighted approximation problem, which equals the minimal number of function and derivative evaluations needed to obtain error ε. We provide necessary and sufficient conditions, in terms of the weights ψ and ρ as well as the parameters r, p, and q, for the weighted approximation problem to have finite complexity. We also provide conditions which guarantee that the complexity of weighted approximation is of the same order as the complexity of the classical approximation problem over a finite interval. Such necessary and sufficient conditions are also provided for a weighted integration problem, since its complexity is equivalent to the complexity of the weighted approximation problem for q=1.

2.
We study the worst case tractability of multivariate linear problems defined on separable Hilbert spaces. Information about a problem instance consists of noisy evaluations of arbitrary bounded linear functionals, where the noise is either deterministic or random. The cost of a single evaluation depends on its precision and is controlled by a cost function. We establish mutual interactions between tractability of a problem with noisy information, the cost function, and tractability of the same problem, but with exact information.

3.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizer entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
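As a point of reference, the expected-improvement criterion that the paper benchmarks against can be written down in a few lines once the Kriging posterior at a candidate point is available. The Python sketch below is illustrative only: mu, sigma, and f_best (posterior mean, posterior standard deviation, and best observed value) are assumed inputs, not quantities defined in the paper.

from math import sqrt, pi, exp, erf

def normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    # EI(x) = E[max(f_best - F(x), 0)] for the Gaussian posterior F(x)
    # at x; larger values favor evaluating at x (minimization convention).
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)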

4.
Summary. We seek an ε-approximation to a zero of an infinitely differentiable function f: [0, 1] → ℝ such that f(0) ≤ 0 and f(1) ≥ 0. It is known that the error of the bisection method using n function evaluations is 2^(-(n+1)). If the information used consists of function values, then it is known that bisection information and the bisection algorithm are optimal. Traub and Woźniakowski conjectured in [5] that the bisection information and algorithm remain optimal even if far more general information is permitted: they permit adaptive (sequential) evaluations of arbitrary linear functionals, and arbitrary transformations of this information as algorithms. This conjecture was established in [2]; that is, for fixed n, the bisection information and algorithm are optimal in the worst case setting. Thus nothing is lost by restricting oneself to function values. One may then ask whether bisection is nearly optimal in the asymptotic worst case sense, that is, whether it possesses asymptotically nearly the best rate of convergence. Methods converging fast asymptotically, like Newton or secant type methods, are of course widely used in scientific computation. We prove that the answer to this question is positive for the class F of functions having zeros of infinite multiplicity and information consisting of evaluations of continuous linear functionals. Assuming that every f in F has zeros with bounded multiplicity, there are known hybrid methods which have at least quadratic rate of convergence as n tends to infinity; see, e.g., Brent [1], Traub [4], and Sect. 1.
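For concreteness, here is a minimal Python sketch of bisection on [0, 1] under the sign assumptions above; after n function evaluations the returned midpoint is within 2^(-(n+1)) of a zero, matching the bound quoted in the abstract.

def bisect(f, n):
    # Assumes f(0) <= 0 <= f(1); maintains a sign change in [a, b].
    a, b = 0.0, 1.0
    for _ in range(n):
        m = 0.5 * (a + b)
        if f(m) <= 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)   # error <= (b - a)/2 = 2^(-(n+1))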

5.
We study the average case complexity of linear multivariate problems, that is, the approximation of continuous linear operators on functions of d variables. The function spaces are equipped with Gaussian measures. We consider two classes of information. The first class Λ^std consists of function values, and the second class Λ^all consists of all continuous linear functionals. Tractability of a linear multivariate problem means that the average case complexity of computing an ε-approximation is O((1/ε)^p) with p independent of d. The smallest such p is called the exponent of the problem. Under mild assumptions, we prove that tractability in Λ^all is equivalent to tractability in Λ^std and that the difference of the exponents is at most 2. The proof of this result is not constructive. We provide a simple condition to check tractability in Λ^all. We also address the issue of how to construct optimal (or nearly optimal) sample points for linear multivariate problems. We use relations between average case and worst case settings. These relations reduce the study of the average case to the worst case for a different class of functions. In this way we show how optimal sample points from the worst case setting can be used in the average case. In Part II we shall apply the theoretical results to obtain optimal or almost optimal sample points, optimal algorithms, and average case complexity functions for linear multivariate problems equipped with the folded Wiener sheet measure. Of particular interest will be the multivariate function approximation problem.

6.
We use Lagrange interpolation polynomials to obtain good gradient estimations. This is important, e.g., for nonlinear programming solvers. As an error criterion we take the mean squared error, which can be split into a deterministic error and a stochastic error. We analyze these errors using N-times replicated Lagrange interpolation polynomials, replicating the Lagrange estimation procedure N times and using 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^(-1) as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. We also provide an optimal division between the number of gridpoints and replicates when the number of evaluations is fixed. Further, it is shown that the estimation of the derivatives becomes more robust as the number of evaluation points is increased. Finally, test results show the practical use of the proposed method. We thank Jack Kleijnen, Gül Gürkan, and Peter Glynn for useful remarks on an earlier version of this paper. We thank Henk Norde for the proof of Lemma 2.2.
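The replicate-and-average idea can be sketched with the simplest instance of a Lagrange gradient estimate, a central difference per coordinate. The Python sketch below is a schematic reading of the procedure, not the paper's exact scheme: the noisy oracle f_noisy, the step h, and the replicate count N are illustrative assumptions.

def gradient_estimate(f_noisy, x, h=1e-2, N=10):
    # 2d noisy evaluations per replicate, N replicates averaged;
    # averaging shrinks the stochastic error variance like 1/N.
    d = len(x)
    grad = [0.0] * d
    for i in range(d):
        for _ in range(N):
            xp = list(x); xp[i] += h
            xm = list(x); xm[i] -= h
            grad[i] += (f_noisy(xp) - f_noisy(xm)) / (2.0 * h)
        grad[i] /= N
    return grad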

7.
In this paper, we establish two new classes of derivative-involved methods for solving single-valued nonlinear equations of the form f(x) = 0. The first, a two-step class, includes two evaluations of the function and one of its first derivative per cycle; its error analysis shows fourth-order convergence. Next, we construct a three-step high-order class of methods including four evaluations per full cycle to achieve seventh-order convergence. Numerical examples are included to verify the theoretical results and, moreover, to demonstrate the efficiency of the new methods from our classes.
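The cost profile of the first class (two function evaluations plus one derivative evaluation for fourth-order convergence) matches that of Ostrowski's classical two-step method, sketched below in Python as a reference point; the paper's own iteration formulas are not reproduced here.

def ostrowski(f, fprime, x0, tol=1e-12, max_iter=50):
    # Newton predictor followed by a corrector that reuses f(x) and
    # f'(x); fourth-order convergence to a simple root.
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - fx / dfx
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x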

8.
We study algorithms for the approximation of functions, where the error is measured in an L2 norm. We consider the worst case setting for a general reproducing kernel Hilbert space of functions. We analyze algorithms that use standard information consisting of n function values, and we are interested in the optimal order of convergence. This is the maximal exponent b for which the worst case error of such an algorithm is of order n^(-b). Let p be the optimal order of convergence of all algorithms that may use arbitrary linear functionals, in contrast to function values only. So far it was not known whether p>b is possible, i.e., whether the approximation numbers or linear widths can be essentially smaller than the sampling numbers. This is (implicitly) posed as an open problem in the recent paper [F.Y. Kuo, G.W. Wasilkowski, H. Woźniakowski, On the power of standard information for multivariate approximation in the worst case setting, J. Approx. Theory, to appear], where the authors prove that p>1/2 implies b>0. Here we prove that the case p=1/2 and b=0 is possible, hence general linear information can be exponentially better than function evaluation. Since the case p>1/2 is quite different, it is still open whether b=p always holds in that case.

9.
Summary. We study zero finding for functions from C^r([0, 1]) with f(0)·f(1) < 0 and for monotone functions from C([0, 1]). We show that a lower bound of the form c·γ^n, with constants c > 0 and γ ∈ (0, 1), holds for the average error of any method using n function or derivative evaluations. Here the average is defined with respect to conditional r-fold Wiener measures or Ulam measures, and the error is understood in the root or residual sense. As in the worst case, we therefore cannot obtain superlinear convergence, even for classes of smooth functions which have only simple zeros.

10.
In this paper, we follow Kuroiwa’s set approach in set optimization, which proposes to compare values of a set-valued objective map F with respect to various set order relations. We introduce a Hausdorff-type distance relative to an ordering cone between two sets in a Banach space and use it to define a directional derivative for F. We show that the distance has nice properties regarding set order relations and that the directional derivative enjoys most properties of the directional derivative of a scalar single-valued function. These properties allow us to derive necessary and/or sufficient conditions for various types of maximizers and minimizers of F.

11.
Under weak conditions, we present an iteration formula to improve Newton's method for solving nonlinear equations. The method is free from second derivatives, permits f′(x)=0 at some points, and requires two evaluations of the given function and one evaluation of its derivative per iteration. Analysis of convergence demonstrates that the new method is cubically convergent. Numerical examples illustrate that the algorithm is more efficient and performs better than the classical Newton's method.
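A classical scheme with exactly this cost profile (two function evaluations, one derivative evaluation, cubic convergence) is the Potra–Pták iteration, sketched below in Python for comparison; the paper's formula may differ, and this sketch does not handle the case f′(x)=0 that the paper's method permits.

def potra_ptak(f, fprime, x0, tol=1e-12, max_iter=50):
    # Newton predictor y, then a corrector using f(y); third-order
    # convergence to a simple root when f'(x) != 0 along the iterates.
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - fx / dfx
        x_new = x - (fx + f(y)) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x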

12.
This is the first of two papers on multilinear problems, and it is devoted to the worst case setting. A second paper will analyze the average case setting. We show how to reduce the analysis of multilinear problems to linear subproblems. In particular, it is proven that adaption can help by a factor of at most k for k-linear problems. The error of multilinear algorithms is analyzed and optimality properties of spline algorithms for the Hilbert case are established. We illustrate our analysis with an example of a multilinear problem from signal processing.

13.
In the paper, a global optimization problem is considered where the objective function f(x) is univariate, black-box, and its first derivative f′(x) satisfies the Lipschitz condition with an unknown Lipschitz constant K. In the literature there exist methods solving this problem by using an a priori given estimate of K, its adaptive estimates, or adaptive estimates of local Lipschitz constants. Algorithms working with a number of Lipschitz constants for f′(x) chosen from a set of possible values are not known, in spite of the fact that a method working in this way with Lipschitz objective functions, DIRECT, was proposed in 1993. A new geometric method evolving these ideas to the case of an objective function having a Lipschitz derivative is introduced and studied in this paper. Numerical experiments executed on a number of test functions show that the usage of derivatives allows one to obtain, as expected, an acceleration in comparison with the DIRECT algorithm. This research was supported by the RFBR grant 07-01-00467-a and the grant 4694.2008.9 for supporting the leading research groups awarded by the President of the Russian Federation.
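The geometric machinery behind such methods rests on quadratic support functions: if f′ is Lipschitz with constant K on [a, b], then f is bounded below over the interval by parabolas built from the endpoint data. A minimal Python sketch, assuming a single given estimate K (the paper's algorithm instead works with a set of possible Lipschitz constants, in the spirit of DIRECT):

def lower_bound(t, a, b, fa, fb, dfa, dfb, K):
    # Valid minorants of f at t in [a, b] when f' is Lipschitz with
    # constant K; keep the tighter of the two endpoint parabolas.
    from_a = fa + dfa * (t - a) - 0.5 * K * (t - a) ** 2
    from_b = fb + dfb * (t - b) - 0.5 * K * (t - b) ** 2
    return max(from_a, from_b)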

14.
A one-parameter family of derivative-free multipoint iterative methods of orders three and four is derived for finding the simple and multiple roots of f(x) = 0. For simple roots, the third-order methods require three function evaluations while the fourth-order methods require four function evaluations. For multiple roots, the third-order methods require six function evaluations while the fourth-order methods require eight function evaluations. Numerical results show the robustness of these methods.

15.
The computation of Brouwer fixed points is a central tool in economic modeling. Although there have been several algorithms for computing a fixed point of a Brouwer map, starting with Scarf's algorithm of 1965, the question of worst-case complexity was not addressed. It has been conjectured that Scarf's algorithm has typical behavior that is polynomial in the dimension. Here we show that any algorithm for computing the Brouwer fixed point of a function based on function evaluations (a class that includes all known general purpose algorithms) must in the worst case perform a number of function evaluations that is exponential in both the number of digits of accuracy and the dimension. Our lower bounds are very close to the known upper bounds.

16.
We study the worst case setting for approximation of d-variate functions from a general reproducing kernel Hilbert space with the error measured in the L∞ norm. We mainly consider algorithms that use n arbitrary continuous linear functionals. We look for algorithms with minimal worst case errors and for their rates of convergence as n goes to infinity. Algorithms using n function values will be analyzed in a forthcoming paper. We show that the L∞ approximation problem in the worst case setting is related to the weighted L2 approximation problem in the average case setting with respect to a zero-mean Gaussian stochastic process whose covariance function is the same as the reproducing kernel of the Hilbert space. This relation enables us to find optimal algorithms and their rates of convergence for the weighted Korobov space with an arbitrary smoothness parameter α>1, and for the weighted Sobolev space whose reproducing kernel corresponds to the Wiener sheet measure. The optimal convergence rates are n^(-(α-1)/2) and n^(-1/2), respectively. We also study tractability of L∞ approximation for the absolute and normalized error criteria, i.e., how the minimal worst case errors depend on the number of variables, d, especially when d is arbitrarily large. We provide necessary and sufficient conditions for tractability of L∞ approximation in terms of tractability conditions of the weighted L2 approximation in the average case setting. In particular, tractability holds in weighted Korobov and Sobolev spaces only for weights tending sufficiently fast to zero, and does not hold for the classical unweighted spaces.

17.
An improvement of iterative methods based on a one-point iteration function, with or without memory, using n points with the same amount of information at each point and generated by inverse polynomial interpolation, is given. The strategy presented here yields a new iteration function, with one additional evaluation of the function, which dramatically increases the local order of convergence. The method is generalized to r evaluations of the function. This method for computing solutions of nonlinear equations is of interest when high precision is needed, because it provides a lower cost under adaptive multi-precision arithmetic. AMS subject classification 65H05
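Inverse polynomial interpolation means interpolating the pairs (f(x_i), x_i), i.e., the inverse function, and evaluating the interpolant at 0 to predict the root. With three points this reduces to the familiar inverse quadratic interpolation step, sketched below in Python; the paper's n-point construction with an extra function evaluation generalizes this idea.

def inverse_quadratic_step(x0, x1, x2, f):
    # Lagrange interpolation of x as a function of y = f(x),
    # evaluated at y = 0 (assumes the three f-values are distinct).
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
            + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
            + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))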

18.
We study approximation of multivariate functions defined over ℝ^d. We assume that all rth order partial derivatives of the functions considered are continuous and uniformly bounded. Approximation algorithms use only the values of f or its partial derivatives up to order r. We want to recover the function f with small error measured in a weighted Lq norm with a weight function ρ. We study the worst case (information) complexity, which is equal to the minimal number of function and derivative evaluations needed to obtain error ε. We provide necessary and sufficient conditions, in terms of the weight ρ and the parameters q and r, for the weighted approximation problem to have finite complexity. We also provide conditions guaranteeing that the complexity is of the same order as the complexity of the classical approximation problem over a finite domain. Since the complexity of the weighted integration problem is equivalent to the complexity of the weighted approximation problem with q=1, the results of this paper also hold for weighted integration. This paper is a continuation of [7], where weighted approximation over ℝ was studied.

19.
We showed earlier that the level set function of a monotonic advancing front is twice differentiable everywhere with bounded second derivative and satisfies the equation classically. We show here that the second derivative is continuous if and only if the flow has a single singular time where it becomes extinct and the singular set consists of a closed C^1 manifold with cylindrical singularities. © 2017 Wiley Periodicals, Inc.

20.
We study the worst case complexity of solving problems for which information is partial and contaminated by random noise. It is well known that if information is exact then adaption does not help for solving linear problems, i.e., for approximating linear operators over convex and symmetric sets. On the other hand, randomization can sometimes help significantly. It turns out that for noisy information, adaption may lead to much better approximations than nonadaption, even for linear problems. This holds because, in the presence of noise, adaption is equivalent to randomization. We present sharp bounds on the worst case complexity of problems with random noise in terms of the randomized complexity with exact information. The results obtained are applied to the d-variate integration and L∞-approximation of functions belonging to Hölder and Sobolev classes. Information is given by function evaluations with Gaussian noise of variance σ^2. For exact information, the two problems are intractable since the complexity is proportional to (1/ε)^q, where q grows linearly with d. For noisy information the situation is different. For integration, the ε-complexity is of order σ^2/ε^2 as ε goes to zero. Hence the curse of dimensionality is broken due to random noise. For approximation, the complexity is of order σ^2(1/ε)^(q+2) ln(1/ε), and the problem is intractable also with random noise.
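The σ^2/ε^2 factor in the integration bound has a simple Monte-Carlo reading: averaging m independent evaluations with noise variance σ^2 leaves residual noise variance σ^2/m, so m of order (σ/ε)^2 repetitions push the noise standard deviation below ε. A back-of-the-envelope Python sketch, with noisy_eval as a hypothetical noisy oracle:

import random

def average_noisy(noisy_eval, x, sigma, eps):
    # Repeat until the averaged noise standard deviation is below eps.
    m = max(1, int((sigma / eps) ** 2))
    return sum(noisy_eval(x) for _ in range(m)) / m

def make_oracle(f, sigma):
    # Illustrative oracle: exact value corrupted by Gaussian noise.
    return lambda x: f(x) + random.gauss(0.0, sigma)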
