Similar Literature
20 similar documents found (search time: 296 ms)
1.
The paper deals with epi-convergence of random real functions defined on a topological space. We follow the idea due to Vogel (1994) to split epi-convergence into the lower semicontinuous approximation and the epi-upper approximation, and to localize them onto a given set. The approximations are shown to be connected to the miss and hit parts, respectively, of the ordinary Fell topology on sets. We introduce two procedures, called "localization", separately for the miss-topology and the hit-topology on sets. Localization of the miss and hit parts of the Fell topology on sets allows us to suggest how to define the approximations in probability and in distribution. It is shown in the paper that, in the case of finite-dimensional Euclidean space, the suggested approximations in probability coincide with the definition of Vogel and Lachout (2003). The research has been partially supported by Deutsche Forschungsgemeinschaft under grant No. 436TSE113/40, by the Ministry of Education, Youth and Sports of the Czech Republic under Project MSM 113200008 and by the Grant Agency of the Czech Republic under grant No. 201/03/1027.

2.
This article treats the problem of the approximation of an analytic function f on the unit disk by rational functions having integral coefficients, with the goodness of each approximation judged in terms of the maximum of the absolute values of the coefficients of the rational function. This relates to the more usual approximation by a rational function in that it can indicate how many decimal places are needed when applying a particularly good rational function approximation having non-integral coefficients. It is shown how to obtain "good" approximations of this type, and it is also shown how, under certain circumstances, "very good" bounds are not possible. As in diophantine approximation, this means that many merely "good" approximations do exist, which may be the preferable case. The existence or nonexistence of "very good" approximations is closely related to the diophantine approximation of the first nonzero power series coefficient of f at z=0. Nevanlinna theory methods are used in the proofs.
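The closing remark ties the quality of such approximations to diophantine approximation. As a self-contained illustration of that classical mechanism (not the paper's construction), the continued-fraction convergents of an irrational number are rational approximations p/q whose error falls below 1/q²:

```python
import math
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of a real x, via the
    standard recurrence p_k = a_k * p_{k-1} + p_{k-2} (same for q)."""
    p_prev, q_prev = 1, 0
    a = math.floor(x)
    frac = x - a
    p, q = a, 1
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        if frac == 0:
            break
        x = 1.0 / frac
        a = math.floor(x)
        frac = x - a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append(Fraction(p, q))
    return out

# Convergents of sqrt(2): 1, 3/2, 7/5, 17/12, 41/29, 99/70, ...
convs = convergents(math.sqrt(2), 6)
errors = [abs(math.sqrt(2) - c.numerator / c.denominator) for c in convs]
```

Each convergent satisfies |√2 − p/q| < 1/q², the "good approximation" quality that diophantine approximation guarantees in abundance.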

3.
Collocation methods are a well-developed approach for the numerical solution of smooth and weakly singular Volterra integral equations. In this paper, we extend these methods through the use of partitioned quadrature based on the qualocation framework, to allow the efficient numerical solution of linear, scalar Volterra integral equations of the second kind with smooth kernels containing sharp gradients. In this case, the standard collocation methods may lose computational efficiency despite the smoothness of the kernel. We illustrate how the qualocation framework can allow one to focus computational effort where necessary through improved quadrature approximations, while keeping the solution approximation fixed. The computational performance improvement introduced by our new method is examined through several test examples. The final example we consider is the original problem that motivated this work: the problem of calculating the probability density associated with a continuous-time random walk in three dimensions that may be killed at a fixed lattice site. To demonstrate how separating the solution approximation from quadrature approximation may improve computational performance, we also compare our new method to several existing Gregory, Sinc, and global spectral methods, where quadrature approximation and solution approximation are coupled.
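To fix ideas about what a second-kind Volterra solver does, here is a minimal time-stepping scheme using trapezoidal quadrature — a much simpler cousin of the collocation/qualocation methods above, with a test equation of my own choosing:

```python
import math

def solve_volterra(g, K, T, N):
    """Solve u(t) = g(t) + integral_0^t K(t,s) u(s) ds on [0, T]
    with trapezoidal quadrature on a uniform mesh of N steps.
    The implicit weight on u_n is moved to the left-hand side."""
    h = T / N
    t = [i * h for i in range(N + 1)]
    u = [g(t[0])]
    for n in range(1, N + 1):
        s = 0.5 * h * K(t[n], t[0]) * u[0]
        for k in range(1, n):
            s += h * K(t[n], t[k]) * u[k]
        denom = 1.0 - 0.5 * h * K(t[n], t[n])
        u.append((g(t[n]) + s) / denom)
    return t, u

# Test problem: u(t) = 1 + integral_0^t u(s) ds has exact solution e^t.
t, u = solve_volterra(lambda t: 1.0, lambda t, s: 1.0, 1.0, 200)
```

The trapezoidal rule here plays the role of the quadrature component that the qualocation framework would refine locally near sharp kernel gradients.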

4.
We consider the Euler discretisation of a scalar linear test equation with positive solutions and show for both strong and weak approximations that the probability of positivity over any finite interval of simulation tends to unity as the step size approaches zero. Although a.s. positivity in an approximation is impossible to achieve, we develop for the strong (Maruyama) approximation an asymptotic estimate of the number of mesh points required for positivity as our tolerance of non-positive trajectories tends to zero, and examine the effectiveness of this estimate in the context of practical numerical simulation. We show how this analysis generalises to equations with a drift coefficient that may display a high level of nonlinearity, but which must be linearly bounded from below (i.e. when acting towards zero), and a linearly bounded diffusion coefficient. Finally, in the linear case we develop a refined asymptotic estimate that is more useful as an a priori guide to the number of mesh points required to produce positive approximations with a given probability.
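The core phenomenon is easy to observe numerically. A minimal sketch (with an illustrative choice of drift, diffusion, and step sizes, not the paper's estimates): simulate Euler–Maruyama paths of the linear test equation dX = μX dt + σX dW and count the fraction that stay positive, at a coarse and a fine step size.

```python
import math
import random

def positivity_probability(mu, sigma, T, n_steps, n_paths, rng):
    """Fraction of Euler-Maruyama paths of dX = mu*X dt + sigma*X dW,
    X_0 = 1, that stay strictly positive on [0, T]. A step goes
    negative exactly when 1 + mu*h + sigma*sqrt(h)*xi < 0."""
    h = T / n_steps
    sq = math.sqrt(h)
    positive = 0
    for _ in range(n_paths):
        x, ok = 1.0, True
        for _ in range(n_steps):
            x *= 1.0 + mu * h + sigma * sq * rng.gauss(0.0, 1.0)
            if x <= 0.0:
                ok = False
                break
        positive += ok
    return positive / n_paths

rng = random.Random(42)
p_coarse = positivity_probability(0.0, 2.0, 1.0, 8, 2000, rng)    # large h
p_fine = positivity_probability(0.0, 2.0, 1.0, 256, 2000, rng)    # small h
```

As the abstract states, the positivity probability tends to one as the step size shrinks: with h = 1/8 a noticeable fraction of paths go negative, while with h = 1/256 essentially none do.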

5.
We use the Strassen theorem to solve stochastic optimization problems with stochastic dominance constraints. First, we show that a dominance-constrained problem on general probability spaces can be expressed as an infinite-dimensional optimization problem with a convenient representation of the dominance constraints provided by the Strassen theorem. This result generalizes earlier work which was limited to finite probability spaces. Second, we derive optimality conditions and a duality theory to gain insight into this optimization problem. Finally, we present a computational scheme for constructing finite approximations along with a convergence rate analysis on the approximation quality.
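For readers unfamiliar with dominance constraints, here is what a (second-order) stochastic dominance check looks like for discrete distributions — an illustrative finite version, not the paper's Strassen-based representation. X dominates Y in second order iff E[(t − X)₊] ≤ E[(t − Y)₊] for every threshold t:

```python
def shortfall_curve(values, probs, grid):
    """E[(t - X)_+] at each threshold t in grid, for a discrete
    distribution given by support points and probabilities."""
    return [sum(p * max(t - v, 0.0) for v, p in zip(values, probs))
            for t in grid]

def dominates_ssd(x_vals, x_probs, y_vals, y_probs, grid):
    """Second-order stochastic dominance of X over Y, checked on a
    finite grid (sufficient if the grid contains both supports)."""
    fx = shortfall_curve(x_vals, x_probs, grid)
    fy = shortfall_curve(y_vals, y_probs, grid)
    return all(a <= b + 1e-12 for a, b in zip(fx, fy))

# A sure payoff of 2 dominates a 50/50 lottery over {1, 3}
# (same mean, less risk); the converse fails.
grid = [0.0, 1.0, 2.0, 3.0, 4.0]
safe_dominates = dominates_ssd([2.0], [1.0], [1.0, 3.0], [0.5, 0.5], grid)
```

In a dominance-constrained optimization problem, constraints of exactly this shortfall form are imposed for all thresholds, which is what makes the infinite-dimensional representation natural.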

6.
Carsten Proppe, PAMM 2006, 6(1): 673-674
For failure probability estimates of large structural systems, the numerically expensive evaluations of the limit state function have to be replaced by suitable approximations. Most of the methods proposed in the literature so far construct global approximations of the failure hypersurface. A global approximation of the failure hypersurface does not correspond to the local character of the most likely failure, which is often concentrated in one or several regions of the design space, and may therefore introduce a high approximation error in the probability of failure. Moreover, it is noted that global approximations are often constructed for parameter spaces that ignore constraints imposed by the physical nature of the problem. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
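The expensive baseline such approximations try to beat is crude Monte Carlo on the limit state function itself. A minimal sketch, with an illustrative limit state g(x) = 3 − (x₁ + x₂)/√2 chosen so the exact failure probability is Φ(−3) ≈ 1.35·10⁻³:

```python
import math
import random

def crude_mc_failure_prob(g, dim, n_samples, rng):
    """Crude Monte Carlo estimate of P(g(X) <= 0) for independent
    standard normal inputs X. Every sample costs one evaluation of
    the limit state function g — the expense that response-surface
    approximations aim to avoid."""
    fails = sum(1 for _ in range(n_samples)
                if g([rng.gauss(0.0, 1.0) for _ in range(dim)]) <= 0.0)
    return fails / n_samples

rng = random.Random(7)
g = lambda x: 3.0 - (x[0] + x[1]) / math.sqrt(2.0)
p_hat = crude_mc_failure_prob(g, 2, 200_000, rng)
```

Note the sample count: resolving a probability of order 10⁻³ to a few percent already takes hundreds of thousands of g-evaluations, which motivates approximating g only near the most likely failure region.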

7.
We show how local approximations, each accurate on a subinterval, can be blended together to form a global approximation that is accurate over the entire interval. The blending functions are smoothed approximations to a step function, constructed using the error function. The local approximations may be power series, asymptotic expansions, or other more exotic species. As an example, for the dilogarithm function we construct a one-line analytic approximation that is accurate to one part in 700. This can be generalized to higher order merely by adding more terms in the local approximations. We also show the failure of the alternative strategy of subtracting singularities.
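A minimal sketch of the blending idea, using the exponential function (not the dilogarithm) and two Taylor expansions as the local approximations — the target, expansion points, and blending width are all illustrative choices:

```python
import math

def taylor_exp(center, terms):
    """Truncated Taylor series of exp about `center` — a local
    approximation accurate near its expansion point."""
    c = math.exp(center)
    def p(x):
        return c * sum((x - center) ** k / math.factorial(k)
                       for k in range(terms))
    return p

def blend(f_left, f_right, switch, width):
    """Blend two local approximations with an erf-smoothed step:
    w(x) = (1 + erf((x - switch)/width)) / 2 rises from 0 to 1."""
    def g(x):
        w = 0.5 * (1.0 + math.erf((x - switch) / width))
        return (1.0 - w) * f_left(x) + w * f_right(x)
    return g

g = blend(taylor_exp(0.0, 8), taylor_exp(2.0, 8), switch=1.0, width=0.3)
max_err = max(abs(g(x) - math.exp(x)) for x in [i * 0.05 for i in range(41)])
```

Each 8-term series is accurate only near its own center, yet the erf-weighted combination stays accurate across all of [0, 2], which is the point of the construction.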

8.
This paper discusses the rationale for the use of additive models involving multiple objectives as approximations to normative analyses. Experience has shown us that organizations often evaluate important decisions with multiple objective models rather than reducing all aspects of the problem to a single criterion, dollars, as many normative economic models prescribe. We justify this practice on two grounds: managers often prefer to think about a problem in terms of several dimensions and a multiple objective model may provide an excellent approximation to the more complex normative model. We argue that a useful analysis based on a multiple objective model will fulfill both conditions—it will provide insights for the decision maker as well as a good approximation to the normative model. We report several real-world examples of managers using multiple objective models to approximate such normative models as the risk-adjusted net present value and the value of information models. The agreement between the approximate models and the normative models is shown to be quite good. Next, we cite a portion of the behavioral decision theory literature which establishes that linear models of multiple attributes provide quite robust approximations to individual decision-making processes. We then present more general theoretical and empirical results which support our contention that linear multiple attribute models can provide good approximations to more complex models.

9.
We propose using the Bubnov-Galerkin procedure to search for self-oscillations. We establish the existence and the convergence of the approximations. In the basic case we obtain the asymptotics of the rate of convergence. In [1] it was shown, on the basis of the results in [2], how to construct finite-dimensional approximations to the periodic solutions of autonomous systems. Below we point out another approach to solving the approximation problem, based on the parameter functionalization method proposed in [3].

10.
Inderfurth [OR Spektrum 19 (1997) 111] and Simpson [Operations Research 26 (1978) 270] have shown how the optimal decision rules in a stochastic one-product recovery system with equal lead times can be characterized. Using these results, we provide a method for the exact computation of the parameters that determine the optimal periodic policy. Since exact computation is quite time-consuming, especially in the case of dynamic demands and returns, we also provide two different approximations. One is based on an approximation of the value function in the dynamic programming problem, while the other is based on a deterministic model. By means of numerical examples we compare our results and discuss the performance of the approximations.

11.
The subject of this paper is the analytic approximation method for solving stochastic differential equations with time-dependent delay. Approximate equations are defined on equidistant partitions of the time interval, and their coefficients are Taylor approximations of the coefficients of the initial equation. It is shown, without any restrictive assumption on the delay function, that the approximate solutions converge in the Lp-norm and with probability 1 to the solution of the initial equation. Moreover, the rate of Lp-convergence increases as the degrees of the Taylor approximations increase, analogously to what is found in real analysis. Finally, a procedure is presented that allows the application of this method under the assumption that the delay function is continuous.

12.
We study approximations to a class of vector-valued equations of Burgers type driven by multiplicative space-time white noise. A solution theory for this class of equations was developed recently by Hairer and Weber in Probability Theory and Related Fields. The key idea was to use the theory of controlled rough paths to give definitions of weak/mild solutions and to set up a Picard iteration argument. In this article the limiting behavior of a rather large class of (spatial) approximations to these equations is studied. These approximations are shown to converge, and convergence rates are given, but the limit may depend on the particular choice of approximation. This effect is a spatial analogue of the Itô-Stratonovich correction in the theory of stochastic ordinary differential equations, where it is well known that different approximation schemes may converge to different solutions. © 2014 Wiley Periodicals, Inc.

13.
Multiple and multidimensional zero-correlation linear cryptanalysis are two of the most powerful cryptanalytic techniques for block ciphers, and it has been shown that the differentiating factor between these two statistical models is whether distinct plaintexts are assumed. Nevertheless, questions remain as to how these analyses can be generalized without restrictions and used to accurately estimate the data complexity and the success probability. More concretely, the current models for multiple zero-correlation (MPZC) and multidimensional zero-correlation (MDZC) cryptanalysis are not valid in the setting with a limited number of approximations, and the accuracy of the estimated data complexity cannot be guaranteed. Besides, in many cases, using too many approximations may force an exhaustive search when launching key-recovery attacks. The original models rely on the normal approximation of the \(\chi ^2\)-distribution; we provide a more accurate approach to estimating the data complexity and the success probability for MPZC and MDZC cryptanalysis without this approximation. Since the new models rely directly on the \(\chi ^{2}\)-distribution, we call them the \(\chi ^{2}\)-MPZC and \(\chi ^{2}\)-MDZC models. Interestingly, the \(\chi ^{2}\)-MPZC model still works even when only a single zero-correlation linear approximation is available. This fact puts an end to the situation that basic zero-correlation linear cryptanalysis requires the full codebook in the known-plaintext attack setting. As an illustration, we apply the \(\chi ^{2}\)-MPZC model to analyze TEA and XTEA. These new attacks cover more rounds than the previous MPZC attacks. Moreover, we reconsider the MDZC attack on 14-round CLEFIA-192 by utilizing fewer zero-correlation linear approximations. In addition, some other ciphers with existing MDZC analytical results are reevaluated, and the data complexities under the new model are all less than or equal to those under the original model. Experiments are conducted to verify the validity of the new models, and the results convince us that the new models provide more precise estimates of the data complexity and the success probability.

14.
The method of Laplace is used to approximate posterior probabilities for a collection of polynomial regression models when the errors follow a process with a noninvertible moving average component. These results are useful in the problem of period-change analysis of variable stars and in assessing the posterior probability that a time series with trend has been overdifferenced. The nonstandard covariance structure induced by a noninvertible moving average process can invalidate the standard Laplace method. A number of analytical tools are used to produce corrected Laplace approximations. These tools include viewing the covariance matrix of the observations as tending to a differential operator. The use of such an operator and its Green's function provides a convenient and systematic method of asymptotically inverting the covariance matrix. In certain cases there are two different Laplace approximations, and the appropriate one to use depends upon unknown parameters. This problem is dealt with by using a weighted geometric mean of the candidate approximations, where the weights are completely data-based and such that, asymptotically, the correct approximation is used. The new methodology is applied to an analysis of the prototypical long-period variable star known as Mira.
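For readers who have not met it, the standard Laplace method approximates an integral ∫ exp(n f(θ)) dθ by a Gaussian integral around the mode: exp(n f(θ̂)) √(2π / (n |f″(θ̂)|)). A minimal numeric check with an illustrative integrand (f(θ) = −cosh θ, not the paper's posterior):

```python
import math

def laplace_approx(f_at_mode, fpp_at_mode, n):
    """Standard Laplace approximation to integral of exp(n f(theta))
    with an interior mode: exp(n f(mode)) * sqrt(2 pi / (n |f''|))."""
    return math.exp(n * f_at_mode) * math.sqrt(
        2.0 * math.pi / (n * abs(fpp_at_mode)))

def trapezoid_integral(h, a, b, m):
    """Composite trapezoid rule for the 'exact' reference value."""
    step = (b - a) / m
    total = 0.5 * (h(a) + h(b))
    for i in range(1, m):
        total += h(a + i * step)
    return total * step

n = 30
f = lambda th: -math.cosh(th)          # mode at 0, f(0) = -1, f''(0) = -1
exact = trapezoid_integral(lambda th: math.exp(n * f(th)), -4.0, 4.0, 4000)
approx = laplace_approx(-1.0, -1.0, n)
rel_err = abs(approx - exact) / exact
```

The relative error is O(1/n), which is why the corrected approximations the abstract describes matter when the covariance structure breaks the standard expansion.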

15.
The principle of maximum entropy is used to analyse a G/G/1 queue at equilibrium when the constraints involve only the first two moments of the interarrival-time and service-time distributions. Robust recursive relations for the queue-length distribution are determined, and a probability density function analogue is characterized. Furthermore, connections with classical queueing theory and operational analysis are established, and an overall approximation, based on the concept of 'global' maximum entropy, is introduced. Numerical examples provide useful information on how critically system behaviour is affected by the distributional form of the interarrival and service times, and favourable comparisons are made with diffusion and other approximations. Comments on the implication of the work to the analysis of more general queueing systems are included.
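A toy version of the principle, not the paper's G/G/1 analysis: among distributions on the nonnegative integers with a given mean queue length L, the maximum-entropy choice is the geometric distribution p(n) = (1/(1+L)) (L/(1+L))ⁿ — the classical result that also matches the M/M/1 queue-length law p(n) = (1−ρ)ρⁿ.

```python
def max_entropy_queue_dist(mean_len, n_max):
    """Maximum-entropy distribution on {0, 1, 2, ...} subject only to
    a mean constraint: geometric with ratio r = L / (1 + L),
    truncated at n_max for numerical work."""
    r = mean_len / (1.0 + mean_len)
    return [(1.0 - r) * r ** n for n in range(n_max + 1)]

p = max_entropy_queue_dist(4.0, 200)
total = sum(p)
mean = sum(n * pn for n, pn in enumerate(p))
```

With two-moment constraints, as in the abstract, the maximizer generalizes from this geometric form, but the mechanics — exponential-family solutions to entropy maximization — are the same.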

16.
Approximating Probability Distributions Using Small Sample Spaces
We formulate the notion of a "good approximation" to a probability distribution over a finite abelian group G. The quality of the approximating distribution is characterized by a parameter ɛ which bounds the difference between corresponding Fourier coefficients of the two distributions. It is also required that the sample space of the approximating distribution be of size polynomial in the problem parameters and in 1/ɛ. Such approximations are useful in reducing or eliminating the use of randomness in certain randomized algorithms. We demonstrate the existence of such good approximations to arbitrary distributions. In the case of n random variables distributed uniformly and independently over the range ℤd, we provide an efficient construction of a good approximation. The approximation constructed has the property that any linear combination of the random variables (modulo d) has essentially the same behavior under the approximating distribution as it does under the uniform distribution over ℤd. Our analysis is based on Weil's character sum estimates. We apply this result to the construction of a non-binary linear code in which the alphabet symbols appear almost uniformly in each non-zero codeword. Received: September 22, 1990 / Revised: November 11, 1990; final revision November 10, 1997
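The ɛ parameter above has a direct computational meaning. A small sketch (the distributions are illustrative): compute the maximum absolute difference between corresponding Fourier coefficients of two distributions over ℤd via the discrete Fourier transform.

```python
import cmath

def fourier_bias(p, q):
    """Max over frequencies k of |p_hat(k) - q_hat(k)| for two
    distributions p, q over Z_d, where
    p_hat(k) = sum_j p[j] * exp(-2 pi i j k / d)."""
    d = len(p)
    worst = 0.0
    for k in range(d):
        phat = sum(p[j] * cmath.exp(-2j * cmath.pi * k * j / d)
                   for j in range(d))
        qhat = sum(q[j] * cmath.exp(-2j * cmath.pi * k * j / d)
                   for j in range(d))
        worst = max(worst, abs(phat - qhat))
    return worst

uniform = [0.2] * 5                      # uniform over Z_5
spiked = [0.3, 0.2, 0.2, 0.2, 0.1]       # a perturbed distribution
eps = fourier_bias(uniform, spiked)
```

A distribution with small ɛ against the uniform one is exactly what "small-bias" sample spaces achieve with far fewer sample points than the full product space.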

17.
In this paper, we consider various moment inequalities for sums of random matrices—which are well-studied in the functional analysis and probability theory literature—and demonstrate how they can be used to obtain the best known performance guarantees for several problems in optimization. First, we show that the validity of a recent conjecture of Nemirovski is actually a direct consequence of the so-called non-commutative Khintchine's inequality in functional analysis. Using this result, we show that an SDP-based algorithm of Nemirovski, which is developed for solving a class of quadratic optimization problems with orthogonality constraints, has a logarithmic approximation guarantee. This improves upon the polynomial approximation guarantee established earlier by Nemirovski. Furthermore, we obtain improved safe tractable approximations of a certain class of chance constrained linear matrix inequalities. Second, we consider a recent result of Delage and Ye on the so-called data-driven distributionally robust stochastic programming problem. One of the assumptions in the Delage–Ye result is that the underlying probability distribution has bounded support. However, using a suitable moment inequality, we show that the result in fact holds for a much larger class of probability distributions. Given the close connection between the behavior of sums of random matrices and the theoretical properties of various optimization problems, we expect that the moment inequalities discussed in this paper will find further applications in optimization.

18.
Summary. It is shown that a boundary-value problem based on a holonomic elastic-plastic constitutive law may be formulated equivalently as a variational inequality of the second kind. A regularised form of the problem is analysed, and finite element approximations are considered. It is shown that solutions based on finite element approximation of the regularised problem converge.

19.
The main purpose of this paper is to consider strict approximations from subspaces of spline functions of degree m-1 with k fixed knots. Rice defined the strict approximation, a particular unique best Chebyshev approximation for problems defined on a finite set. In order to determine best approximations on an interval I, we define a sequence of strict approximations on finite subsets of I, where the subsets fill up the interval. It is shown that the sequences always converge if k≤m. In the case k>m the sequences are convergent if we restrict ourselves to problems defined on certain subsets of I. It seems natural to call these limits strict approximations. To be able to compute these functions we also develop a Remez-type algorithm.

20.
In this paper a higher-order approximation for single server queues and tandem queueing networks is proposed and studied. Unlike the two-moment based approximations most popular in the literature, the higher-order approximation uses the higher moments of the interarrival and service distributions in evaluating the performance measures for queueing networks. It is built upon the Maclaurin series analysis, a method recently developed to analyze single-node queues, along with the idea of decomposition using higher orders of the moments matched to a distribution. The approximation is computationally flexible in that it can use as many moments of the interarrival and service distributions as desired and produce the corresponding moments for the waiting and interdeparture times. Therefore it can also be used to study several interesting issues that arise in the study of queueing network approximations, such as the effects of higher moments and correlations. Numerical results for single server queues and tandem queueing networks show that this approximation is better than the two-moment based approximations in most cases.
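For context, the kind of two-moment formula this higher-order method is compared against can be stated in a few lines. A sketch of the classical Kingman-style G/G/1 waiting-time approximation (using only utilization and the squared coefficients of variation of interarrival and service times; parameter values are illustrative):

```python
def kingman_wq(lam, mu, ca2, cs2):
    """Two-moment approximation of the mean waiting time in a G/G/1
    queue: Wq ~ ((ca^2 + cs^2)/2) * (rho/(1-rho)) * E[S],
    where rho = lam/mu and E[S] = 1/mu."""
    rho = lam / mu
    return (ca2 + cs2) / 2.0 * rho / (1.0 - rho) * (1.0 / mu)

# With exponential interarrivals and service (ca2 = cs2 = 1) the
# formula reproduces the exact M/M/1 mean wait rho / (mu - lam).
wq_mm1 = kingman_wq(0.8, 1.0, 1.0, 1.0)
```

Because it sees only two moments, this formula returns the same answer for every pair of distributions with matching means and variances — precisely the limitation the higher-order Maclaurin-series approach addresses by feeding in additional moments.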


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号