Similar Literature
20 similar records found (search time: 46 ms)
1.
Higgins and Tichenor [Appl. Math. and Comp. 3 (1977), 113-126] considered “window estimates” of location and reciprocal scale parameters for a general class of distributions and showed them to be asymptotically efficient for the Cauchy distribution. In this study, efficiencies of these estimates for the Cauchy distribution are investigated for small and moderate sample sizes by Monte Carlo methods. For n ≥ 40, window estimates of location are nearly optimal, and for n ≥ 20, they compare favorably with other easy-to-compute estimates. Window estimates of reciprocal scale are very good even for small samples and are nearly optimal for n ≥ 10. Thus, window estimates appear to have high efficiency for moderate as well as large sample sizes. Approximate normality is also investigated. The estimate of location converges rapidly to normality, whereas the estimate of reciprocal scale does not.
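The window estimates themselves are not specified in this abstract. As a hedged sketch of the kind of Monte Carlo efficiency study described, the snippet below simulates Cauchy samples of size n = 40 and compares the variance of the sample median (a classical easy-to-compute location estimate for the Cauchy) with its asymptotic value; the sample sizes and replication count are illustrative assumptions.

```python
import numpy as np

# Monte Carlo efficiency study for an easy-to-compute Cauchy location
# estimate (the sample median).  The Higgins-Tichenor window estimates are
# not reproduced here; this only sketches the simulation design of the study.
rng = np.random.default_rng(0)
n, reps = 40, 2000
samples = rng.standard_cauchy((reps, n))      # true location is 0

medians = np.median(samples, axis=1)
asym_var = (np.pi / 2) ** 2 / n               # asymptotic variance of median
print(np.var(medians), asym_var)
```

The empirical variance of the median at n = 40 already sits close to the asymptotic value, which is the kind of finite-sample comparison the study carries out for the window estimates.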

2.
Several techniques for resampling dependent data have already been proposed. In this paper we use missing-values techniques to modify the moving blocks jackknife and bootstrap. More specifically, we treat the blocks of deleted observations in the blockwise jackknife as missing data, which are recovered by missing-values estimates that incorporate the dependence structure of the observations. We thus estimate the variance of a statistic as a weighted sample variance of the statistic evaluated on a “complete” series. Consistency of the variance and distribution estimators of the sample mean is established. We also apply the missing-values approach to the blockwise bootstrap by inserting some missing observations between consecutive blocks, and we demonstrate the consistency of the variance and distribution estimators of the sample mean. Finally, we present the results of an extensive Monte Carlo study evaluating the performance of these methods for finite sample sizes, which shows that our proposal provides variance estimates for several time-series statistics with smaller mean squared error than previous procedures.
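The missing-values refinement is specific to the paper, but the baseline it modifies, the moving blocks bootstrap variance estimate for a sample mean, can be sketched as follows. The AR(1) data model, block length, and bootstrap size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Baseline moving blocks bootstrap (without the paper's missing-values
# refinement) estimating the variance of the sample mean of a dependent
# series; the AR(1) model and block length are illustrative assumptions.
def mbb_var_of_mean(x, block_len, n_boot, rng):
    n = len(x)
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    k = n // block_len                        # blocks per bootstrap series
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        means[b] = blocks[idx].mean()
    return means.var()

rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):                       # AR(1) with positive dependence
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

v = mbb_var_of_mean(x, block_len=10, n_boot=1000, rng=rng)
print(v)                                      # larger than the naive iid value
```

Joining resampled blocks end to end ignores the dependence across block boundaries; that boundary effect is exactly what the paper's inserted missing observations are designed to repair.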

3.
Hybrids of equidistribution and Monte Carlo methods of integration can achieve the superior accuracy of the former while allowing the simple error estimation methods of the latter. In particular, randomized (0, m, s)-nets in base b produce unbiased estimates of the integral, have a variance that tends to zero faster than 1/n for any square-integrable integrand, and have a variance that for finite n is never more than e ≈ 2.718 times as large as the Monte Carlo variance. Bounds lower than e are known for special cases. Some very important (t, m, s)-nets have t > 0. The widely used Sobol' sequences are of this form, as are some recent and very promising nets due to Niederreiter and Xing. Much less is known about randomized versions of these nets, especially in s > 1 dimensions. This paper shows that scrambled (t, m, s)-nets enjoy the same properties as scrambled (0, m, s)-nets, except that the sampling variance is guaranteed only to be below b^t [(b+1)/(b−1)]^s times the Monte Carlo variance for a least-favorable integrand and finite n.
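Scrambled nets of the kind discussed here can be tried directly with SciPy's Owen-scrambled Sobol' generator. The sketch below compares the variance of randomized-QMC and plain Monte Carlo estimates of a smooth integral; the integrand and sample sizes are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Variance of randomized-QMC estimates from an Owen-scrambled Sobol' net
# versus plain Monte Carlo, for the smooth test integrand u1 * u2 on [0, 1]^2
# (exact integral 1/4).
f = lambda u: u[:, 0] * u[:, 1]
rng = np.random.default_rng(2)
n, reps = 256, 50

rqmc = np.array([f(qmc.Sobol(d=2, scramble=True, seed=s).random(n)).mean()
                 for s in range(reps)])
mc = np.array([f(rng.random((n, 2))).mean() for _ in range(reps)])
print(rqmc.var(), mc.var())                   # scrambled-net variance is tiny
```

Each scrambled replicate is an unbiased estimate of the integral, so the spread across independent scrambles gives exactly the simple Monte Carlo style error estimate the abstract refers to.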

4.
Recently an O(n^4) volume algorithm has been presented for convex bodies by Lovász and Vempala, where n is the number of dimensions of the convex body. Essentially, the algorithm is a series of Monte Carlo integrations. In this paper we describe a computer implementation of the volume algorithm, in which we improved the computational aspects of the original algorithm by adding variance-decreasing modifications: a stratified sampling strategy, double-point integration, and orthonormalised estimators. Formulas and methodology were developed so that the errors in each phase of the algorithm can be controlled. Some computational results for convex bodies in dimensions ranging from 2 to 10 are presented as well.
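The Lovász-Vempala algorithm itself is far more elaborate, but the elementary hit-or-miss Monte Carlo volume estimate below shows the basic integration step, and also why naive sampling breaks down as the dimension grows, which is what motivates a multiphase algorithm.

```python
import numpy as np

# Elementary hit-or-miss Monte Carlo volume estimate for a convex body (the
# unit ball), a single integration step far simpler than the multiphase
# Lovasz-Vempala scheme discussed above.
def ball_volume_mc(n_dim, n_samples, rng):
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, n_dim))
    inside = (pts ** 2).sum(axis=1) <= 1.0
    return inside.mean() * 2.0 ** n_dim       # cube volume times hit fraction

rng = np.random.default_rng(3)
est = ball_volume_mc(3, 200_000, rng)
print(est, 4.0 / 3.0 * np.pi)                 # exact unit 3-ball volume
```

In dimension 10 the unit ball fills only about 0.25% of the enclosing cube, and the fraction decays super-exponentially with dimension, so naive sampling wastes almost all points; this is why a multiphase scheme of the kind implemented in the paper is needed.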

5.
Using Monte Carlo simulation techniques, we look at statistical properties of two numerical methods (the extended counting method and the variance counting method) developed to estimate the Hausdorff dimension of a time series and applied to the fractional Brownian motion.
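The paper's exact estimators are not reproduced here. As a hedged sketch in the spirit of a variance-counting method, one can estimate the Hurst exponent H of a path from the scaling Var(X_{t+k} − X_t) ∝ k^(2H) and report the graph dimension 2 − H; for ordinary Brownian motion (H = 1/2) the target is 1.5. The path length and lags below are illustrative assumptions.

```python
import numpy as np

# Variance-counting style sketch: Var(X_{t+k} - X_t) ~ k^(2H) for a
# self-similar path; the graph (box) dimension is then 2 - H.  For ordinary
# Brownian motion H = 1/2, so the target dimension is 1.5.
rng = np.random.default_rng(4)
n = 2 ** 16
path = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)

lags = np.array([1, 2, 4, 8, 16, 32])
v = np.array([np.var(path[k:] - path[:-k]) for k in lags])
H = np.polyfit(np.log(lags), np.log(v), 1)[0] / 2.0
dim_est = 2.0 - H
print(dim_est)
```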

6.
This paper proposes and estimates a globally flexible functional form for the cost function, which we call the Neural Cost Function (NCF). The proposed specification imposes a priori, and satisfies globally, all the properties that economic theory dictates. The functional form can be estimated easily using Markov chain Monte Carlo (MCMC) techniques or standard iterative SURE. We use a large panel of U.S. banks to illustrate our approach. The results are consistent with previous knowledge about the sector and in accordance with mathematical production theory.

7.
This paper focuses on the approximation of continuous functions on the unit sphere by spherical polynomials of degree n via hyperinterpolation. Hyperinterpolation of degree n is a discrete approximation of the L2-orthogonal projection of the same degree with its Fourier coefficients evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most 2n. This paper aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz–Zygmund property proposed in a previous paper. Consequently, hyperinterpolation can be constructed by a positive-weight quadrature rule (not necessarily with quadrature exactness). This scheme is referred to as unfettered hyperinterpolation. This paper provides a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo (QMC) design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide.

8.
Stochastic measures of the distance between a density f and its estimate f_n have been used to compare the accuracy of density estimators in Monte Carlo trials. The practice in the past has been to select a measure largely on the basis of its ease of computation, using only heuristic arguments to explain the large-sample behaviour of the measure. Steele [11] has shown that these arguments can lead to incorrect conclusions. In the present paper we obtain limit theorems for the stochastic processes derived from stochastic measures, thereby explaining the large-sample behaviour of the measures.
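A typical stochastic measure of the kind discussed is the integrated squared error between f and a kernel estimate f_n. The sketch below evaluates it by a Riemann sum for a standard normal sample; the sample size, grid, and default bandwidth are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Integrated squared error (ISE) between a kernel density estimate and the
# true N(0, 1) density, approximated by a Riemann sum on a grid.
rng = np.random.default_rng(5)
x = rng.standard_normal(400)
fn = gaussian_kde(x)                          # Gaussian kernel, Scott bandwidth

grid = np.linspace(-4.0, 4.0, 801)
dg = grid[1] - grid[0]
ise = ((fn(grid) - norm.pdf(grid)) ** 2).sum() * dg
print(ise)
```

The ISE is itself a random variable depending on the sample, which is precisely why its limiting behaviour, rather than heuristics about ease of computation, matters when it is used to rank estimators.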

9.
We consider a homogeneous Markov chain {X_n} and estimate its transition probability density with kernel estimators. We apply these methods to the estimation of the unknown function f of the process defined by X_1 and X_{n+1} = f(X_n) + ε_n, where {ε_n} is a noise sequence of independent, identically distributed random variables of unknown law. The mean integrated quadratic risks converge at rates identical to those of classical density estimation; these risks are used here because we want global information about our estimates. We also study the average of these risks as the variance changes; it is shown that they reach a minimal value at some optimal variance. We study uniform convergence of our estimators, and finally estimate the variance of the noise and its density.
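A minimal kernel estimate of the autoregression function f in X_{n+1} = f(X_n) + ε_n is the Nadaraya-Watson smoother of X_{n+1} against X_n, sketched below. The linear truth f(x) = 0.5x, the noise scale, and the bandwidth are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Nadaraya-Watson kernel estimate of the autoregression function f in
# X_{n+1} = f(X_n) + eps_n.  The linear truth f(x) = 0.5 x and the noise
# scale are illustrative assumptions.
rng = np.random.default_rng(6)
n = 2000
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = 0.5 * x[t] + 0.3 * rng.standard_normal()

def f_hat(u, h=0.1):
    w = np.exp(-0.5 * ((x[:-1] - u) / h) ** 2)   # Gaussian kernel weights
    return (w * x[1:]).sum() / w.sum()

print(f_hat(0.5))                             # true value f(0.5) = 0.25
```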

10.
A general framework is proposed for what we call the sensitivity derivative Monte Carlo (SDMC) solution of optimal control problems with a stochastic parameter. This method employs the residual in the first-order Taylor series expansion of the cost functional in terms of the stochastic parameter rather than the cost functional itself. A rigorous estimate is derived for the variance of the residual, and it is verified by numerical experiments involving the generalized steady-state Burgers equation with a stochastic coefficient of viscosity. Specifically, the numerical results show that for a given number of samples, the present method yields an order of magnitude higher accuracy than a conventional Monte Carlo method. In other words, the proposed variance reduction method based on sensitivity derivatives is shown to accelerate convergence of the Monte Carlo method. As the sensitivity derivatives are computed only at the mean values of the relevant parameters, the related extra cost of the proposed method is a fraction of the total time of the Monte Carlo method.
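On a scalar toy problem the SDMC idea can be sketched as follows: Monte Carlo is applied to the residual of the first-order Taylor expansion rather than to the cost functional itself, and the linear term (which has known mean zero) is added back, so the estimator stays unbiased while the sampled quantity has much smaller variance. The functional J and parameter distribution below are illustrative assumptions, not the Burgers-equation setting of the paper.

```python
import numpy as np

# Sensitivity-derivative Monte Carlo (SDMC) idea on a scalar toy problem:
# sample the residual of the first-order Taylor expansion of J instead of J.
J, dJ = np.sin, np.cos
m, s = 0.5, 0.1                               # mean and sd of the parameter
rng = np.random.default_rng(7)
theta = rng.normal(m, s, 10_000)

plain = J(theta)                              # conventional Monte Carlo
residual = J(theta) - J(m) - dJ(m) * (theta - m)
sdmc = residual + J(m)                        # linear term has mean zero
print(plain.mean(), sdmc.mean())              # same target, E[J(theta)]
print(plain.var(), sdmc.var())                # residual variance is far smaller
```

Note that the derivative dJ is evaluated only at the mean m, matching the abstract's point that the extra cost of the sensitivity computation is small.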

11.
Semiparametric models with both nonparametric and parametric components have become increasingly useful in many scientific fields, due to their appropriate representation of the trade-off between flexibility and efficiency of statistical models. In this paper we focus on semi-varying coefficient models (a.k.a. varying coefficient partially linear models) in a “large n, diverging p” situation, when both the number of parametric and nonparametric components diverges at appropriate rates, and we only consider the case p=o(n). Consistency of the estimator based on B-splines and asymptotic normality of the linear components are established under suitable assumptions. Interestingly (although not surprisingly) our analysis shows that the number of parametric components can diverge at a faster rate than the number of nonparametric components and the divergence rates of the number of the nonparametric components constrain the allowable divergence rates of the parametric components, which is a new phenomenon not established in the existing literature as far as we know. Finally, the finite sample behavior of the estimator is evaluated by some Monte Carlo studies.

12.
13.
This paper describes an efficient algorithm to find a smooth trajectory joining two points A and B with minimum length constrained to avoid fixed subsets. The basic assumption is that the locations of the obstacles are measured several times through a mechanism that corrects the sensors at each reading using the previous observation. The proposed algorithm is based on the penalized nonparametric method previously introduced that uses confidence ellipses as a fattening of the avoidance set. In this paper we obtain consistent estimates of the best trajectory using Monte Carlo construction of the confidence ellipse.

14.
This paper proposes new methods for computing Greeks using the binomial tree and the discrete Malliavin calculus. In the last decade, the Malliavin calculus has come to be considered one of the main tools in financial mathematics, and it is particularly important in the computation of Greeks using Monte Carlo simulations. In previous studies, Greeks were usually represented by expectation formulas derived from the Malliavin calculus, and these expectations were computed using Monte Carlo simulations. On the other hand, the binomial tree approach can also be used to compute these expectations. In this article, we employ the discrete Malliavin calculus to obtain expectation formulas for Greeks by the binomial tree method. All the results are obtained in an elementary manner.
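The paper's discrete Malliavin representation is not reproduced here, but the familiar tree computation it builds on can be sketched as a CRR binomial delta, read off from the two nodes after the first time step. All parameter values are illustrative.

```python
import numpy as np

# Delta of a European call from a CRR binomial tree, read off the two nodes
# after the first time step.  This is the standard tree computation, not the
# discrete Malliavin representation developed in the paper.
def crr_call_delta(S0, K, r, sigma, T, n):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(ST - K, 0.0)               # terminal payoffs
    Vu = Vd = 0.0
    for _ in range(n):                        # backward induction
        V = np.exp(-r * dt) * (p * V[:-1] + (1 - p) * V[1:])
        if len(V) == 2:
            Vu, Vd = V                        # values after one up / down move
    return (Vu - Vd) / (S0 * (u - d))

delta = crr_call_delta(100.0, 100.0, 0.05, 0.2, 1.0, 500)
print(delta)
```

For these parameters the Black-Scholes delta is N(0.35) ≈ 0.637, which the tree value approaches as n grows.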

15.
The regression-based Monte Carlo methods for backward stochastic differential equations (BSDEs) have been the object of considerable research, particularly for solving nonlinear partial differential equations (PDEs). Unfortunately, such methods often become unstable when implemented with small time steps because the variance of gradient estimates is inversely proportional to the time step (σ² ∼ 1/Δt). Recently, new variance reduction techniques were introduced to address this problem in a paper by the author and Avellaneda. The purpose of this paper is to provide a rigorous justification for these techniques in the context of the discrete-time BSDE scheme of Bouchard and Touzi. We also suggest a new higher-order scheme that makes the variance proportional to the time step (σ² ∼ Δt). These techniques are easy to implement. Numerical examples strongly indicate that they render the regression-based Monte Carlo methods stable for small time steps and thus viable for the numerical solution of nonlinear PDEs. © 2016 Wiley Periodicals, Inc.

16.
Monte Carlo numerical modeling of a problem with a known exact solution is used to test linear multiplicative generators with modulus M = 2^31 − 1 for their applicability in parallel computing. The deviations of the calculated values of the rotational temperature from the known theoretical values are compared with the possible errors of the Monte Carlo method due to the finiteness of statistical samples. In addition, sample correlation coefficients are used to estimate true correlation coefficients between the values of the rotational temperature obtained on different processors with different multipliers, as well as in the case where additional samples were drawn at the terminal state on each processor in order to increase the size of the total sample. For this purpose, by means of special partial averaging, the random values of the rotational temperature were transformed into approximately normally distributed variables; then, for the variables obtained, true correlation coefficients were estimated by sample correlation coefficients. It was found that the 204 different multipliers suggested by G.S. Fishman and L.R. Moore exhibit the best performance when used in a parallel implementation of the Monte Carlo method: all deviations are less than the theoretical Monte Carlo errors. Moreover, no correlations between the random variables produced by generators with different multipliers were detected, which suggests that generators with different multipliers produce independent sequences of pseudorandom numbers. However, drawing additional samples on each processor, which is frequently done to increase the size of the total sample, gives rise to correlations. Moreover, in many such cases, the theoretical errors of the Monte Carlo method for multipliers in the bottom part of the list proposed by Fishman and Moore are smaller than the temperature deviations, so those multipliers should not be used in this way.
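A minimal Lehmer (multiplicative congruential) generator with this modulus looks as follows. The multiplier 16807 is the classical "minimal standard" choice, used here only for illustration; the 204 Fishman-Moore multipliers examined in the paper are not listed in the abstract.

```python
# Minimal Lehmer (multiplicative congruential) generator with modulus
# M = 2^31 - 1.  The multiplier 16807 is an illustrative classical choice,
# not one asserted to be on the Fishman-Moore list.
M = 2 ** 31 - 1

def lehmer(seed, a=16807):
    x = seed
    while True:
        x = (a * x) % M
        yield x / M                           # uniform variate in (0, 1)

gen = lehmer(seed=12345)
u = [next(gen) for _ in range(100_000)]
mean_u = sum(u) / len(u)
print(mean_u)                                 # near 0.5 for a sound generator
```

Running one such generator per processor with a different multiplier on each is the parallelization strategy whose cross-stream correlations the paper examines.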

17.
In this paper we discuss algorithms for L_p-methods, i.e., minimizers of the L_p-norm of the residual vector. The statistical “goodness” of the different methods when applied to regression problems is compared in a Monte Carlo experiment.
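One standard algorithm for L_p regression with 1 < p < 2 is iteratively reweighted least squares (IRLS), sketched generically below; the data, p, and tolerance choices are illustrative, and the paper's specific algorithms are not reproduced.

```python
import numpy as np

# IRLS sketch for the L_p regression estimate: minimize ||y - X beta||_p by
# solving a sequence of weighted least-squares problems with weights
# |r_i|^(p-2) built from the current residuals r.
def lp_regression(X, y, p=1.5, iters=50, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary L2 start
    for _ in range(iters):
        r = np.abs(y - X @ beta) + eps             # eps guards tiny residuals
        W = X * (r ** (p - 2))[:, None]
        beta = np.linalg.solve(X.T @ W, W.T @ y)
    return beta

rng = np.random.default_rng(8)
t = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * t + 0.1 * rng.standard_normal(100)
X = np.column_stack([np.ones(100), t])
beta_hat = lp_regression(X, y)
print(beta_hat)                                # close to [1, 2]
```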

18.
In this paper, we use the kernel method to estimate sliced average variance estimation (SAVE) and prove that this estimator is both asymptotically normal and root-n consistent. We use this kernel estimator to provide more insight into the differences between slicing estimation and other sophisticated local smoothing methods. Finally, we suggest a Bayes information criterion (BIC) to estimate the dimensionality of SAVE. Examples and real data are presented to illustrate our method.

19.
The problem of marginal density estimation for a multivariate density function f(x) can be generally stated as a problem of density function estimation for a random vector λ(x) of dimension lower than that of x. In this article, we propose a technique, the so-called continuous Contour Monte Carlo (CCMC) algorithm, for solving this problem. CCMC can be viewed as a continuous version of the contour Monte Carlo (CMC) algorithm recently proposed in the literature. CCMC abandons the use of sample space partitioning and incorporates the techniques of kernel density estimation into its simulations. CCMC is more general than other marginal density estimation algorithms. First, it works for any density functions, even for those having a rugged or unbalanced energy landscape. Second, it works for any transformation λ(x) regardless of the availability of the analytical form of the inverse transformation. In this article, CCMC is applied to estimate the unknown normalizing constant function for a spatial autologistic model, and the estimate is then used in a Bayesian analysis for the spatial autologistic model in place of the true normalizing constant function. Numerical results on the U.S. cancer mortality data indicate that the Bayesian method can produce much more accurate estimates than the MPLE and MCMLE methods for the parameters of the spatial autologistic model.

20.
In this paper, we consider the estimation of a parameter of interest where the estimator is one of possibly several solutions of a set of nonlinear empirical equations. Since Newton's method is often used in such a setting to obtain a solution, it is important to know whether the resulting iteration converges to the locally unique consistent root corresponding to the parameter of interest. Under some conditions, we show that this is eventually the case when the iteration is started from within a ball about the true parameter whose size does not depend on n. Any preliminary almost surely consistent estimate will eventually lie in such a ball and therefore provides a suitable starting point for large enough n. As examples, we apply our results in the context of M-estimates, kernel density estimates, and minimum distance estimates.
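A concrete instance of this setup is the Cauchy location MLE: the score equation has multiple roots, and Newton's method started from a consistent preliminary estimate (the sample median) picks out the consistent one. The sketch below is illustrative; the sample size and iteration count are assumptions.

```python
import numpy as np

# Newton's method for the Cauchy location MLE, one of possibly several roots
# of the score equation, started from the sample median (an almost surely
# consistent preliminary estimate, as the abstract suggests).
def cauchy_mle(x, iters=20):
    t = np.median(x)                          # consistent starting point
    for _ in range(iters):
        r = x - t
        score = np.sum(2.0 * r / (1.0 + r ** 2))
        slope = np.sum(2.0 * (1.0 - r ** 2) / (1.0 + r ** 2) ** 2)
        t += score / slope                    # Newton step on the score
    return t

rng = np.random.default_rng(9)
x = rng.standard_cauchy(500) + 3.0            # true location is 3
t_hat = cauchy_mle(x)
print(t_hat)
```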
