Similar Documents
20 similar documents found (search time: 31 ms).
1.
We study weighted approximation of multivariate functions for classes of standard and linear information in the worst case and average case settings. Under natural assumptions, we show a relation between the nth minimal errors for these two classes of information. This relation enables us to infer convergence and error bounds for standard information, as well as the equivalence of tractability and strong tractability for the two classes. April 11, 2001. Final version received: May 29, 2001.

2.
We discuss scheduling algorithms that are greedy with respect to the Euclidean norm, with inputs in a family of polytopes lying in an affine space and the corresponding outputs chosen among the vertices of the respective polytopes. Such scheduling problems arise in various settings. We provide simple examples where the error remains bounded, including cases where there are infinitely many polytopes. In the case of a single polytope, the boundedness of the cumulative error is known to be equivalent to the existence of an invariant region for a dynamical system in the affine space that contains this polytope. We show here that, on the contrary, no bounded invariant region can be found in affine space in general, as soon as there are at least two different polytopes. To cite this article: C. Tresser, C. R. Acad. Sci. Paris, Ser. I 338 (2004).
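As an illustration of the greedy rule described above, here is a minimal sketch (the function names and the toy triangles are my own, not taken from the paper): at each step the output vertex is chosen to minimize the Euclidean norm of the cumulative input-output discrepancy.

```python
import numpy as np

def greedy_schedule(inputs, vertex_sets):
    """Greedy (Euclidean-norm) rounding of inputs p_t to vertices v_t.

    inputs      : list of points p_t, each lying in the t-th polytope
    vertex_sets : list of arrays; row k of vertex_sets[t] is a vertex of that polytope
    Returns the chosen vertices and ||sum_s (p_s - v_s)|| after each step.
    """
    err = np.zeros_like(inputs[0], dtype=float)      # cumulative discrepancy so far
    chosen, err_norms = [], []
    for p, V in zip(inputs, vertex_sets):
        cand = err + p - V                           # one row per candidate vertex
        best = np.argmin(np.linalg.norm(cand, axis=1))
        chosen.append(V[best])
        err = cand[best]
        err_norms.append(np.linalg.norm(err))
    return chosen, err_norms

# toy example: alternate between two different triangles in the plane
if __name__ == "__main__":
    tri1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    tri2 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
    rng = np.random.default_rng(0)
    polys = [tri1 if t % 2 == 0 else tri2 for t in range(200)]
    # a random point inside each polytope (random convex combination of its vertices)
    pts = [V.T @ rng.dirichlet(np.ones(len(V))) for V in polys]
    _, norms = greedy_schedule(pts, polys)
    print("max cumulative error over 200 steps:", max(norms))
```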

3.
This paper deals with the optimal solution of ill-posed linear problems, i.e., linear problems for which the solution operator is unbounded. We consider worst-case and average-case settings. Our main result is that algorithms having finite error (for a given setting) exist if and only if the solution operator is bounded (in that setting). In the worst-case setting, this means that there is no algorithm for solving ill-posed problems having finite error. In the average-case setting, this means that algorithms having finite error exist if and only if the solution operator is bounded on the average. If the solution operator is bounded on the average, we find average-case optimal information of cardinality n and optimal algorithms using this information, and show that the average error of these algorithms tends to zero as n→∞. These results are then used to determine the ε-complexity, i.e., the minimal cost of finding an ε-accurate approximation. In the worst-case setting, the ε-complexity of an ill-posed problem is infinite for all ε>0; that is, we cannot find an approximation having finite error and finite cost. In the average-case setting, the ε-complexity of an ill-posed problem is infinite for all ε>0 iff the solution operator is not bounded on the average; moreover, if the solution operator is bounded on the average, then the ε-complexity is finite for all ε>0.

4.
Journal of Complexity, 1993, 9(4): 427–446
In this paper we review recent results on nonparametric approaches to identification of linear dynamic systems, under nonprobabilistic assumptions on measurement uncertainties. Two main categories of problems are considered in the paper: the H∞ and ℓ1 settings. The H∞ setting assumes that the true system is linear time-invariant and the available information is represented by samples of the frequency response of the system, corrupted by ℓ∞-norm bounded noise. The aim is to estimate a proper, stable finite-dimensional model. The estimation error is quantified according to an H∞ norm, measuring the "distance" of the estimated model from the worst-case system in the class of allowable systems, for the worst-case realization of the measurement error. In the ℓ1 setting, the aim is to identify the samples of the impulse response of an unknown linear time-invariant system. The available information is given by input/output measurements corrupted by ℓ∞-bounded noise, and the estimation error is measured according to an ℓ1 norm, for the worst case with respect to allowable systems and noise. In this paper, the main results available in the literature for both settings are reviewed, with particular attention to (a) evaluation of the diameter of information under various experimental conditions, (b) convergence to zero of the diameter of information (i.e., existence of robustly convergent identification procedures), and (c) computation of optimal and almost-optimal algorithms. Some results are also reported for the ℓ∞ setting, which is similar to the ℓ1 setting except that the estimation error is measured by an ℓ∞ norm.

5.
We study the following game: each agent i chooses a lottery over nonnegative numbers whose expectation is equal to his budget b_i. The agent with the highest realized outcome wins (and agents only care about winning). This game is motivated by various real-world settings where agents each choose a gamble and the primary goal is to come out ahead. Such settings include patent races, stock market competitions, and R&D tournaments. We show that there is a unique symmetric equilibrium when budgets are equal. We proceed to study and solve extensions, including settings where agents choose their budgets (at a cost) and where budgets are private information.
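A small Monte Carlo sketch of the game setup (the two strategies below are my own illustrative choices and are not the equilibrium characterized in the paper): each agent draws from a distribution whose mean equals its budget, and the highest realization wins.

```python
import numpy as np

rng = np.random.default_rng(1)

def play(strategies, budgets, rounds=100_000):
    """Simulate the winner-take-all lottery game: each strategy is a function
    draw(budget, size) returning nonnegative samples with mean equal to the budget;
    the agent with the highest realization wins each round.  Returns win frequencies."""
    draws = np.column_stack([s(b, rounds) for s, b in zip(strategies, budgets)])
    winners = draws.argmax(axis=1)
    return np.bincount(winners, minlength=len(strategies)) / rounds

# two illustrative mean-b strategies (made up for this sketch)
spread_out = lambda b, n: rng.uniform(0.0, 2.0 * b, n)   # uniform on [0, 2b], mean b
risk_free  = lambda b, n: np.full(n, float(b))           # always exactly b, mean b

# against two gamblers, the risk-free player wins least often (about 1/4 vs 3/8 each)
print(play([spread_out, spread_out, risk_free], budgets=[1.0, 1.0, 1.0]))
```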

6.
The celebrated nondiscrete mathematical induction has been used to improve error bounds on the distances involved in the discrete case, but not the sufficient convergence conditions for Secant-type methods. We show that, using the same information as before, the following advantages can be obtained: weaker sufficient convergence conditions; tighter error bounds on the distances involved; and more precise information on the location of the solution. Numerical examples validating the theoretical conclusions are also provided in this study.

7.
We use nondiscrete mathematical induction to extend the applicability of Secant-type methods for solving equations in a Banach space setting. Our approach has the following advantages over earlier works using the same information: weaker sufficient convergence conditions; tighter error bounds on the distances involved; and more information on the location of the solution. Numerical examples where our results apply but earlier ones fail to solve the nonlinear equation, as well as examples of tighter error bounds, are also provided in this study.
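For reference, the basic scalar secant iteration underlying these Secant-type methods looks as follows (a minimal sketch; the papers above work in Banach spaces with divided-difference operators, which this toy version does not capture):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Scalar secant iteration x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                        # divided difference degenerates; stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# example: solve x**3 - 2 = 0
print(secant(lambda x: x**3 - 2.0, 1.0, 2.0))   # ~1.2599, the cube root of 2
```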

8.
We consider the problem of integrating a function f : [-1,1] → R which has a bounded analytic extension to an open disk D_r of radius r centered at the origin. The goal of this paper is to study the minimal error among all algorithms which evaluate the integrand at the zeros of the degree-n Chebyshev polynomial of the first or second kind (Fejér-type quadrature formulas), or at the zeros of the degree-(n-2) Chebyshev polynomial joined with the endpoints -1, 1 (Clenshaw-Curtis-type quadrature formulas), and to compare this error to the minimal error among all algorithms which evaluate the integrand at n points. In the case r > 1, it is easy to prove that Fejér- and Clenshaw-Curtis-type quadratures are almost optimal. In the case r = 1, we show that Fejér-type formulas are not optimal, since the error of any algorithm of this type is at least of order n^{-2}. These results hold for both the worst-case and the asymptotic settings.
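A rough numerical illustration of the two node families (the weight computation below, via a small Vandermonde solve, is my own generic construction of an interpolatory rule, not an algorithm from the paper):

```python
import numpy as np

def interpolatory_weights(nodes):
    """Weights of the interpolatory rule on [-1,1] exact for polynomials of degree < len(nodes)."""
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T           # row j: nodes**j
    moments = np.array([(1 - (-1) ** (j + 1)) / (j + 1) for j in range(n)])  # integral of x^j
    return np.linalg.solve(V, moments)

def quad(f, nodes):
    return interpolatory_weights(nodes) @ f(nodes)

n = 16
k = np.arange(1, n + 1)
fejer_nodes = np.cos((2 * k - 1) * np.pi / (2 * n))       # zeros of the Chebyshev polynomial T_n
cc_nodes = np.concatenate(([-1.0], np.cos(np.arange(1, n - 1) * np.pi / (n - 1)), [1.0]))
# cc_nodes: zeros of the second-kind polynomial U_{n-2} joined with the endpoints (n points total)

f = lambda x: np.exp(x)                                   # entire, so analytic on every disk D_r
exact = np.e - 1.0 / np.e
for name, nodes in [("Fejer", fejer_nodes), ("Clenshaw-Curtis", cc_nodes)]:
    print(name, abs(quad(f, nodes) - exact))
```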

9.
Random forests are a commonly used tool for classification and for ranking candidate predictors based on so-called variable importance measures. These measures attribute scores to the variables reflecting their importance. A drawback of variable importance measures is that there is no natural cutoff that can be used to discriminate between important and non-important variables. Several approaches, for example approaches based on hypothesis testing, have been developed to address this problem. The existing testing approaches require the repeated computation of random forests. While for low-dimensional settings those approaches might be computationally tractable, for high-dimensional settings, typically including thousands of candidate predictors, the computing time is enormous. In this article a computationally fast heuristic variable importance test is proposed that is appropriate for high-dimensional data where many variables do not carry any information. The testing approach is based on a modified version of the permutation variable importance, which is inspired by cross-validation procedures. The new approach is tested and compared to the approach of Altmann and colleagues using simulation studies, which are based on real data from high-dimensional binary classification settings. The new approach controls the type I error and has at least comparable power at a substantially smaller computation time in the studies. Thus, it might be used as a computationally fast alternative to existing procedures for high-dimensional data settings where many variables do not carry any information. The new approach is implemented in the R package vita.
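To make the ingredients concrete, here is a rough stand-in in Python (this uses the ordinary permutation importance from scikit-learn with a crude null built from predictors known, by construction, to be noise; it is not the cross-validation-inspired test of the paper or of the vita package):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# toy data: only the first 3 of 50 predictors carry signal
n, p = 300, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

# crude null distribution: importances of the predictors that are pure noise here;
# the vita package builds its null differently, this is only an illustrative stand-in
null = imp.importances[3:].ravel()
pvals = [(null >= imp.importances_mean[j]).mean() for j in range(3)]
print("approximate p-values for the three signal variables:", pvals)
```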

10.
We study approximation schemes for effective Hamiltonians arising in the homogenization of first-order Hamilton-Jacobi equations in stationary ergodic settings. In particular, we prove error estimates concerning the rate of convergence of the approximate solution to the effective Hamiltonian. Our main motivations are front propagation problems, but our results can be generalized to other types of Hamiltonians.

11.
High-dimensional multivariate time series are challenging due to the dependent and high-dimensional nature of the data, but in many applications there is additional structure that can be exploited to reduce computing time along with statistical error. We consider high-dimensional vector autoregressive processes with spatial structure, a simple and common form of additional structure. We propose novel high-dimensional methods that take advantage of such structure without making model assumptions about how distance affects dependence. We provide nonasymptotic bounds on the statistical error of parameter estimators in high-dimensional settings and show that the proposed approach reduces the statistical error. An application to air pollution in the USA demonstrates that the estimation approach reduces both computing time and prediction error and gives rise to results that are meaningful from a scientific point of view, in contrast to high-dimensional methods that ignore spatial structure. In practice, these high-dimensional methods can be used to decompose high-dimensional multivariate time series into lower-dimensional multivariate time series that can be studied by other methods in more depth. Supplementary materials for this article are available online.

12.
The choice of the process mean and process standard deviation of a quality characteristic is an important problem, since both are major factors affecting quality cost. The target mean and initial standard deviation specified by the customer do not necessarily minimize the overall quality loss. To address this problem, this paper first discusses an asymmetric quality loss function and, on this basis, constructs an optimization model for the target mean and standard deviation of the quality characteristic, from which the optimal target process mean and standard deviation are obtained. Application of the model shows that, with everything else held fixed, the optimal mean and optimal standard deviation of the quality characteristic are fairly robust to the quality loss coefficients.
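As a toy illustration of optimizing a process mean under an asymmetric quadratic loss (the target, loss coefficients, and fixed standard deviation below are made-up values, and this loss form is only one possible choice, not necessarily the paper's model):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# hypothetical asymmetric quadratic loss around the customer target T:
# undershooting is penalized by k1, overshooting by k2 (illustrative values)
T, k1, k2 = 10.0, 4.0, 1.0
loss = lambda y: np.where(y < T, k1, k2) * (y - T) ** 2

def expected_loss(mu, sigma):
    """E[L(Y)] for Y ~ N(mu, sigma^2), computed by numerical integration."""
    integrand = lambda y: loss(y) * norm.pdf(y, mu, sigma)
    return quad(integrand, mu - 10 * sigma, mu + 10 * sigma)[0]

sigma = 0.5
res = minimize_scalar(lambda mu: expected_loss(mu, sigma), bounds=(T - 3, T + 3), method="bounded")
print("optimal process mean:", res.x)   # shifted above T because undershooting costs more (k1 > k2)
```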

13.
Previously it has been shown that some classes of mixing dynamical systems have limiting return-time distributions that are almost everywhere Poissonian. Here we study the behaviour of return times at periodic points and show that the limiting distribution is a compound Poisson distribution. We also derive error terms for the convergence to the limiting distribution, and we prove a very general theorem that can be used to establish compound Poisson distributions in many other settings.

14.
Varying coefficient models (VCMs) are extremely important tools in the statistical literature and are widely used in many subject areas for data modeling and exploration. In linear VCMs, the errors are typically assumed to be independent. However, in many situations, especially in spatial or spatiotemporal settings, this is not a viable assumption. In this article, we consider nonparametric VCMs with a general dependent error structure which allows for both spatially autoregressive and spatial moving average models as special cases. We investigate asymptotic properties of local polynomial estimators of the model components. Specifically, we show that the estimates of the unknown functions and their derivatives are consistent and asymptotically normally distributed. We show that the rate of convergence and the asymptotic covariance matrix depend on the error dependence structure, and we derive explicit formulas for the convergence results.
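As a rough illustration of local polynomial estimation in a varying coefficient model (a minimal sketch with independent errors and a Gaussian kernel for simplicity; the paper's setting allows spatially dependent errors):

```python
import numpy as np

def local_linear_vcm(u, x, y, u0, h):
    """Local linear estimate of the coefficient functions in y_i = a(u_i) + b(u_i)*x_i + e_i,
    evaluated at u0: kernel-weighted least squares on the design [1, (u_i-u0), x_i, x_i*(u_i-u0)];
    the fitted intercept and x-slope estimate a(u0) and b(u0)."""
    du = u - u0
    w = np.exp(-0.5 * (du / h) ** 2)                 # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), du, x, x * du])
    XtW = X.T * w                                    # same as X.T @ diag(w), but cheaper
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0], beta[2]                          # a_hat(u0), b_hat(u0)

rng = np.random.default_rng(0)
n = 2000
u = rng.uniform(0.0, 1.0, n)
x = rng.normal(size=n)
y = np.sin(2 * np.pi * u) + (1.0 + u) * x + 0.3 * rng.normal(size=n)   # a(u)=sin(2*pi*u), b(u)=1+u
print(local_linear_vcm(u, x, y, u0=0.5, h=0.05))     # roughly (0.0, 1.5)
```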

15.
We present an ensemble tree-based algorithm for variable selection in high-dimensional datasets, in settings where a time-to-event outcome is observed with error. This work is motivated by self-reported outcomes collected in large-scale epidemiologic studies, such as the Women’s Health Initiative. The proposed methods equally apply to imperfect outcomes that arise in other settings such as data extracted from electronic medical records. To evaluate the performance of our proposed algorithm, we present results from simulation studies, considering both continuous and categorical covariates. We illustrate this approach to discover single nucleotide polymorphisms that are associated with incident Type 2 diabetes in the Women’s Health Initiative. A freely available R package icRSF has been developed to implement the proposed methods. Supplementary material for this article is available online.

16.
We consider the mass-in-mass (MiM) lattice when the internal resonators are very small. When there are no internal resonators, the lattice reduces to a standard Fermi-Pasta-Ulam-Tsingou (FPUT) system. We show that the solution of the MiM system, with suitable initial data, shadows the FPUT system for long periods of time. Using some classical oscillatory integral estimates, we can conclude that the error of the approximation is (in some settings) of higher order than one may expect.

17.
We answer this question using the competitive ratio as an indicator for the quality of information about the future. Analytical results show that the better the information, the better the worst-case competitive ratios. However, experimental analysis gives a slightly different view. We calculate the empirical-case competitive ratios of different variants of a threat-based online algorithm. The results are based on historical data of the German Dax-30 index. We compare our experimental empirical-case results to the analytical worst-case results given in the literature. We show that better information does not always lead to better performance in real-life applications. The empirical-case competitive ratio is not always better with better information, and some a priori information is more valuable than other information in practical settings.
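A minimal sketch of what an empirical competitive-ratio computation looks like (the synthetic price path and the simple reservation-price rule below are stand-ins of my own; the paper uses variants of a threat-based algorithm on Dax-30 data):

```python
import numpy as np

def empirical_competitive_ratio(prices, online_rule):
    """Offline-optimal payoff divided by the online rule's payoff (one-way search: sell once)."""
    return max(prices) / online_rule(prices)

def reservation_price_rule(prices, m, M):
    """Accept the first price >= sqrt(m*M); if none arrives, sell at the last price."""
    threshold = np.sqrt(m * M)
    for p in prices:
        if p >= threshold:
            return p
    return prices[-1]

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.normal(size=250)))  # synthetic path, not Dax-30 data
m, M = prices.min() * 0.9, prices.max() * 1.1                    # assumed a-priori price bounds
print(empirical_competitive_ratio(prices, lambda ps: reservation_price_rule(ps, m, M)))
```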

18.
This paper deals with real-time disruption management of rolling stock in passenger railway transportation. We describe a generic framework for dealing with disruptions of railway rolling stock schedules. The framework is presented as an online combinatorial decision problem, where the uncertainty of a disruption is modeled by a sequence of information updates. To decompose the problem and to reduce the computation time, we propose a rolling horizon approach: rolling stock decisions are only considered if they are within a certain time horizon from the time of rescheduling. The schedules are then revised as time progresses and new information becomes available. We extend an existing model for rolling stock scheduling to the specific requirements of the real-time situation, and we apply it in the rolling horizon framework. We perform computational tests on instances constructed from real-life cases of Netherlands Railways (NS), the main operator of passenger trains in the Netherlands. We explore the consequences of different settings of the approach for the trade-off between solution quality and computation time.

19.
Non-stationary time series arise in many settings, such as seismology, speech processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic and consistency of the estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes.
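A rough sketch of the kind of statistic described (window sizes, the Welch spectral estimates, and the scanning scheme are my choices, not the paper's test statistic): compare the spectra of two adjacent windows with a symmetrized Kullback-Leibler discrepancy and flag the location where it peaks.

```python
import numpy as np
from scipy.signal import welch

def sym_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler discrepancy between two normalized power spectra."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

def scan(x, win=256, step=32):
    """Slide two adjacent windows along x and score the spectral discrepancy between them."""
    scores, centers = [], []
    for t in range(win, len(x) - win, step):
        _, p = welch(x[t - win:t], nperseg=win // 2)
        _, q = welch(x[t:t + win], nperseg=win // 2)
        scores.append(sym_kl(p, q))
        centers.append(t)
    return np.array(centers), np.array(scores)

# toy nonstationary series: white noise whose spectrum changes (is low-pass filtered) at t = 1000
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(size=1000),
                    np.convolve(rng.normal(size=1100), np.ones(5) / 5, mode="valid")])
centers, scores = scan(x)
print("estimated change point near:", centers[scores.argmax()])
```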

20.
We study the approximation problem (or problem of optimal recovery in the $L_2$-norm) for weighted Korobov spaces with smoothness parameter $\alpha$. The weights $\gamma_j$ of the Korobov spaces moderate the behavior of periodic functions with respect to successive variables. The nonnegative smoothness parameter $\alpha$ measures the decay of Fourier coefficients. For $\alpha=0$, the Korobov space is the $L_2$ space, whereas for positive $\alpha$, the Korobov space is a space of periodic functions with some smoothness and the approximation problem corresponds to a compact operator. The periodic functions are defined on $[0,1]^d$ and our main interest is when the dimension $d$ varies and may be large. We consider algorithms using two different classes of information. The first class $\Lambda^{\mathrm{all}}$ consists of arbitrary linear functionals. The second class $\Lambda^{\mathrm{std}}$ consists of only function values, and this class is more realistic in practical computations. We want to know when the approximation problem is tractable. Tractability means that there exists an algorithm whose error is at most $\varepsilon$ and whose information cost is bounded by a polynomial in the dimension $d$ and in $\varepsilon^{-1}$. Strong tractability means that the bound does not depend on $d$ and is polynomial in $\varepsilon^{-1}$. In this paper we consider the worst case, randomized, and quantum settings. In each setting, the concepts of error and cost are defined differently and, therefore, tractability and strong tractability depend on the setting and on the class of information. In the worst case setting, we apply known results to prove that strong tractability and tractability in the class $\Lambda^{\mathrm{all}}$ are equivalent. This holds if and only if $\alpha>0$ and the sum-exponent $s_{\gamma}$ of the weights is finite, where $s_{\gamma}= \inf\{s>0 : \sum_{j=1}^\infty\gamma_j^s<\infty\}$. In the worst case setting for the class $\Lambda^{\mathrm{std}}$ we must assume that $\alpha>1$ to guarantee that functionals from $\Lambda^{\mathrm{std}}$ are continuous. The notions of strong tractability and tractability are not equivalent. In particular, strong tractability holds if and only if $\alpha>1$ and $\sum_{j=1}^\infty\gamma_j<\infty$. In the randomized setting, it is known that randomization does not help over the worst case setting in the class $\Lambda^{\mathrm{all}}$. For the class $\Lambda^{\mathrm{std}}$, we prove that strong tractability and tractability are equivalent, and this holds under the same assumption as for the class $\Lambda^{\mathrm{all}}$ in the worst case setting, that is, if and only if $\alpha>0$ and $s_{\gamma} < \infty$. In the quantum setting, we consider only upper bounds for the class $\Lambda^{\mathrm{std}}$ with $\alpha>1$. We prove that $s_{\gamma}<\infty$ implies strong tractability. Hence for $s_{\gamma}>1$, the randomized and quantum settings both break worst case intractability of approximation for the class $\Lambda^{\mathrm{std}}$. We indicate cost bounds on algorithms with error at most $\varepsilon$. Let $c(d)$ denote the cost of computing $L(f)$ for $L\in \Lambda^{\mathrm{all}}$ or $L\in \Lambda^{\mathrm{std}}$, and let the cost of one arithmetic operation be taken as unity. The information cost bound in the worst case setting for the class $\Lambda^{\mathrm{all}}$ is of order $c(d)\cdot \varepsilon^{-p}$ with $p$ roughly equal to $2\max(s_\gamma,\alpha^{-1})$. Then for the class $\Lambda^{\mathrm{std}}$ in the randomized setting, we present an algorithm with error at most $\varepsilon$ and whose total cost is of order $c(d)\,\varepsilon^{-p-2} + d\,\varepsilon^{-2p-2}$, which for small $\varepsilon$ is roughly $d\,\varepsilon^{-2p-2}$. In the quantum setting, we present a quantum algorithm with error at most $\varepsilon$ that uses only about $d + \log \varepsilon^{-1}$ qubits and whose total cost is of order $(c(d)+d)\,\varepsilon^{-1-3p/2}$. The ratio of the costs of the algorithms in the quantum setting and the randomized setting is of order $$ \frac{d}{c(d)+d}\,\left(\frac1{\varepsilon}\right)^{1+p/2}. $$ Hence, we have a polynomial speedup of order $\varepsilon^{-(1+p/2)}$. We stress that $p$ can be arbitrarily large, and in this case the speedup is huge.
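As a worked example of the sum-exponent defined above (the particular weight sequence is chosen purely for illustration), take product weights $\gamma_j = j^{-\beta}$ with $\beta>0$. Then
$$ \sum_{j=1}^{\infty}\gamma_j^{s}=\sum_{j=1}^{\infty} j^{-\beta s}<\infty \iff \beta s>1, \qquad\text{so}\qquad s_{\gamma}=\inf\Bigl\{s>0:\ \sum_{j=1}^{\infty}\gamma_j^{s}<\infty\Bigr\}=\frac{1}{\beta}. $$
Thus $s_{\gamma}<\infty$ for every $\beta>0$, whereas the condition $\sum_{j}\gamma_j<\infty$ appearing in the worst case setting for $\Lambda^{\mathrm{std}}$ additionally requires $\beta>1$.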
