Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This paper addresses the estimation of the variance of the sample mean from steady-state simulations without requiring knowledge of the simulation run length a priori. Dynamic batch means is a new and useful approach to implementing traditional batch means in limited memory without knowing the simulation run length. However, existing dynamic batch means estimators do not allow one to control the batch size, which is the key performance parameter of batch means estimators. In this work, an algorithm based on two dynamic batch means estimators is proposed to estimate the optimal batch size dynamically as the simulation runs. The simulation results show that the proposed algorithm requires reasonable computation time and possesses good statistical properties such as small mean squared error (MSE).
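The classical fixed-run-length batch means estimator that the dynamic variant generalizes can be sketched as follows (a minimal illustration; the function name and interface are mine, not from the paper):

```python
import statistics

def batch_means_variance(data, batch_size):
    """Classical batch means estimator of Var(sample mean).

    Splits the output series into non-overlapping batches of
    `batch_size`, treats the batch means as (approximately)
    independent observations, and returns the sample variance of
    the batch means divided by the number of batches.
    """
    k = len(data) // batch_size                  # number of complete batches
    if k < 2:
        raise ValueError("need at least two complete batches")
    means = [
        sum(data[i * batch_size:(i + 1) * batch_size]) / batch_size
        for i in range(k)
    ]
    return statistics.variance(means) / k        # estimates Var(overall mean)
```

The batch size trades off bias (small batches leave serial correlation between batch means) against variance of the estimator itself, which is why choosing it dynamically matters.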

2.
A general depth measure, based on the use of one-dimensional linear continuous projections, is proposed. The applicability of this idea in different statistical setups (including inference in functional data analysis, image analysis and classification) is discussed. Special emphasis is placed on the possible usefulness of this method in statistical problems where the data are elements of a Banach space. The asymptotic properties of the empirical approximation of the proposed depth measure are investigated. In particular, its asymptotic distribution is obtained through U-statistics techniques. The practical aspects of these ideas are discussed through a small simulation study and a real-data example.
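A Monte Carlo sketch of a projection-based depth of this flavor, approximating the infimum over one-dimensional projections by random directions (the construction and names below are illustrative assumptions, not the paper's exact definition):

```python
import math
import random

def projection_depth(x, sample, n_dirs=200, rng=None):
    """Monte Carlo approximation of a projection-based depth.

    For each random unit direction u, project the point and the
    sample onto u and take the one-dimensional halfspace depth
    min(#{<= px}, #{>= px}) / n; the depth of x is the minimum over
    directions, approximating the infimum over all projections.
    """
    rng = rng or random.Random(0)
    d, n = len(x), len(sample)
    depth = 1.0
    for _ in range(n_dirs):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        px = sum(a * b for a, b in zip(x, u))
        proj = [sum(a * b for a, b in zip(s, u)) for s in sample]
        below = sum(p <= px for p in proj)
        above = sum(p >= px for p in proj)
        depth = min(depth, min(below, above) / n)
    return depth
```

Central points receive high depth because every projection leaves mass on both sides; outliers receive low depth because some direction leaves them beyond all projected sample points.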

3.
Estimating real-world parameter values by means of Monte Carlo/stochastic simulation is usually accomplished by carrying out a number n of computer runs, each using random numbers taken from a pseudo-random number generator. To improve the accuracy of the estimate (i.e., to reduce its variance), the most common recourse is to increase n, as the estimate's variance is inversely proportional to n. Variance reduction techniques provide an alternative: they use statistical approaches that extract more information from the computer runs conducted, or control and direct the pseudo-random streams to optimize the information likely to be produced by a run.
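One standard variance reduction technique of the kind described, antithetic variates, pairs each uniform draw U with 1-U; the pair average is unbiased, and for monotone integrands the negative correlation between the two halves reduces variance. A minimal sketch (names are illustrative):

```python
def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0,1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_estimate(f, n, rng):
    """Antithetic variates: pair each draw U with 1-U and average
    (f(U) + f(1-U))/2, using n total function evaluations."""
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / pairs
```

For example, `antithetic_estimate(lambda u: u * u, 1000, random.Random(0))` approximates 1/3 with markedly lower variance than the plain estimator at the same budget.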

4.
Recently, Grabner et al. [Combinatorics of geometrically distributed random variables: run statistics, Theoret. Comput. Sci. 297 (2003) 261-270] and Louchard and Prodinger [Ascending runs of sequences of geometrically distributed random variables: a probabilistic analysis, Theoret. Comput. Sci. 304 (2003) 59-86] considered the run statistics of geometrically distributed independent random variables. They investigated the asymptotic properties of the number of runs and the longest run using the corresponding probability generating functions and a Markov chain approach. In this note, we reconsider the asymptotic properties of such statistics using another approach. Our approach to finding the asymptotic distributions is based on the construction of runs in a sequence of m-dependent random variables. This approach enables us to find the asymptotic distributions of many run statistics via the theorems established for m-dependent sequences of random variables. We also provide the asymptotic distribution of the total number of non-decreasing runs and of the longest non-decreasing run.
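The two statistics studied here, the number of non-decreasing runs and the longest such run, are easy to compute for a realized sequence; a sketch (illustrative helper, not from the note):

```python
def nondecreasing_runs(seq):
    """Decompose a sequence into maximal non-decreasing runs and
    return (number of runs, length of the longest run)."""
    if not seq:
        return 0, 0
    n_runs, longest, current = 1, 1, 1
    for prev, cur in zip(seq, seq[1:]):
        if cur >= prev:
            current += 1          # the current run continues
        else:
            n_runs += 1           # a descent starts a new run
            current = 1
        longest = max(longest, current)
    return n_runs, longest
```

For example, `[1, 2, 2, 1, 3, 0, 0, 5]` splits into `[1,2,2] | [1,3] | [0,0,5]`, giving 3 runs with longest length 3.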

5.
6.
A natural and intuitively appealing generalization of the runs principle arises if, instead of looking at fixed-length strings with all their positions occupied by successes, we allow the appearance of a small number of failures. The focus is therefore on clusters of consecutive trials that contain a large proportion of successes. Such a formation is traditionally called a “scan” or, due to the high concentration of successes within it, an almost perfect (success) run. In the present paper, we study in detail the waiting time distribution for random variables related to the first occurrence of an almost perfect run in a sequence of Bernoulli trials. Using an appropriate Markov chain embedding approach, we present a recursive scheme that constructs the associated transition probability matrix in an algorithmically efficient way. It is worth mentioning that the suggested methodology is applicable not only in the case of almost perfect runs, but can tackle the general discrete scan case as well. Two interesting applications in statistical process control are also discussed.
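A small sketch of the Markov chain embedding idea for the waiting time until an almost perfect run, here defined as a window of k trials with at least m successes: the chain's state is the last k-1 outcomes, and absorption occurs when a qualifying window completes. This is my own minimal version for illustration, checked against brute-force enumeration, not the paper's recursive construction:

```python
def waiting_time_cdf(k, m, p, n):
    """P(T <= t), t = 1..n, where T is the first trial at which some
    window of k consecutive Bernoulli(p) trials contains at least m
    successes (an 'almost perfect run' with at most k-m failures).

    Markov chain embedding: the state is the last k-1 outcomes; once
    a qualifying window appears, the chain is absorbed."""
    states = {(): 1.0}     # probability mass over non-absorbed histories
    absorbed = 0.0
    cdf = []
    for _ in range(n):
        new = {}
        for hist, prob in states.items():
            for bit, q in ((1, p), (0, 1.0 - p)):
                h = hist + (bit,)
                if len(h) >= k and sum(h[-k:]) >= m:
                    absorbed += prob * q          # window qualifies: absorb
                else:
                    h = h[-(k - 1):]              # keep only last k-1 outcomes
                    new[h] = new.get(h, 0.0) + prob * q
        states = new
        cdf.append(absorbed)
    return cdf
```

The state space has at most 2^(k-1) elements, which is what makes the transition-matrix construction tractable for moderate window lengths.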

7.
A fast method for enclosing all eigenvalues in generalized eigenvalue problems Ax=λBx is proposed. Firstly a theorem for enclosing all eigenvalues, which is applicable even if A is not Hermitian and/or B is not Hermitian positive definite, is presented. Next a theorem for accelerating the enclosure is presented. The proposed method is established based on these theorems. Numerical examples show the performance and property of the proposed method. As an application of the proposed method, an efficient method for enclosing all eigenvalues in polynomial eigenvalue problems is also sketched.
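For the standard problem Ax=λx, the simplest theorem enclosing all eigenvalues is Gershgorin's disc theorem; the sketch below illustrates the "enclosing all eigenvalues" idea in that simpler setting only (the paper's verified method for Ax=λBx is different and not reproduced here):

```python
def gershgorin_discs(A):
    """Gershgorin enclosure: every eigenvalue of A lies in the union
    of discs centered at A[i][i] with radius sum_{j != i} |A[i][j]|."""
    return [
        (row[i], sum(abs(v) for j, v in enumerate(row) if j != i))
        for i, row in enumerate(A)
    ]

def in_some_disc(lam, discs, tol=1e-9):
    """Check whether a (real) value lies in the union of the discs."""
    return any(abs(lam - c) <= r + tol for c, r in discs)
```

For the tridiagonal matrix with diagonal 2 and off-diagonal 1, the discs are centered at 2 with radii 1, 2, 1, and indeed cover the exact eigenvalues 2±√2 and 2.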

8.
The main goal of this article is to introduce a new notion of qualitative robustness that applies also to tail-dependent statistical functionals and that allows us to compare statistical functionals with regard to their degree of robustness. By means of new versions of the celebrated Hampel theorem, we show that this degree of robustness can be characterized in terms of certain continuity properties of the statistical functional. The proofs of these results rely on strong uniform Glivenko-Cantelli theorems in fine topologies, which are of independent interest. We also investigate the sensitivity of tail-dependent statistical functionals with respect to infinitesimal contaminations, and we introduce a new notion of infinitesimal robustness. The theoretical results are illustrated by means of several examples, including general L- and V-functionals.

9.
In this article, we partially solve a conjecture by Kochar and Korwar (1996) [9] concerning the normalized spacings of the order statistics of a sample of independent exponential random variables with different scale parameters. In the case of a sample of size n=3, they proved the ordering of the normalized spacings and conjectured that the result holds for all n. We prove this conjecture for n=4, for both spacings and normalized spacings, and generalize some results to n>4.

10.
The aim of this paper is to present the basic principles and recent advances in statistical process control charting with the aid of runs rules. More specifically, we review the well-known Shewhart-type control charts supplemented with additional rules based on the theory of runs and scans. The motivation for this article stems from the fact that, over the last few decades, improving the performance of Shewhart charts by exploiting runs rules has attracted continuous research interest. Furthermore, we briefly discuss the Markov chain approach, which is the most popular technique for studying the run length distribution of run-based control charts.
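As a small illustration of the Markov chain approach to run length distributions: the average run length (ARL) of a rule that signals on m consecutive out-of-limit points can be obtained by back-substitution through the chain's states, where state i is the current run of out-of-limit points (an illustrative sketch, not the paper's code):

```python
def arl_consecutive_rule(m, p):
    """Average run length (expected time to signal) for a runs rule
    that signals on m consecutive out-of-limit points, each point
    independently out of limits with probability p.

    From E_i = 1 + p*E_{i+1} + (1-p)*E_0 with E_m = 0, write each
    E_i = a + b*E_0 and back-substitute from i = m-1 down to 0."""
    a, b = 0.0, 0.0                        # represents E_m = 0
    for _ in range(m):
        a, b = 1.0 + p * a, p * b + (1.0 - p)
    return a / (1.0 - b)                   # solve E_0 = a + b*E_0
```

With m=2 and p=0.5 this gives 6, the classical expected waiting time for two consecutive "heads"; in general it matches the closed form (1-p^m)/((1-p)p^m).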

11.
We consider a panel data semiparametric partially linear regression model with an unknown vector β of regression coefficients, an unknown nonparametric function g(·) for the nonlinear component, and unobservable serially correlated errors. The correlated errors are modeled by a vector autoregressive process that involves a constant intraclass correlation. Applying pilot estimators of β and g(·), we construct estimators of the autoregressive coefficients, the intraclass correlation and the error variance, and investigate their asymptotic properties. Fitting the error structure results in a new semiparametric two-step estimator of β, which is shown to be asymptotically more efficient than the usual semiparametric least squares estimator in terms of asymptotic covariance matrix. Asymptotic normality of this new estimator is established, and a consistent estimator of its asymptotic covariance matrix is presented. Furthermore, a corresponding estimator of g(·) is also provided. These results can be used to make asymptotically efficient statistical inference. Some simulation studies are conducted to illustrate the finite sample performance of the proposed estimators.

12.
An exhaustive search, as required by traditional variable selection methods, is impractical in high dimensional statistical modeling. Thus, to conduct variable selection, various forms of penalized estimators with good statistical and computational properties have been proposed during the past two decades. The attractive properties of these shrinkage and selection estimators, however, depend critically on the amount of regularization, which controls model complexity. In this paper, we consider the problem of consistent tuning parameter selection in high dimensional sparse linear regression where the dimension of the predictor vector is larger than the sample size. First, we propose a family of high dimensional Bayesian Information Criteria (HBIC) and investigate their selection consistency, extending the results for the extended Bayesian Information Criterion (EBIC) of Chen and Chen (2008) to ultra-high-dimensional settings. Second, we develop a two-step procedure, SIS+AENET, to conduct variable selection in p>n situations. The consistency of tuning parameter selection is established under fairly mild technical conditions. Simulation studies are presented to confirm the theoretical findings, and an empirical example illustrates the use of the method on internet advertising data.
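The EBIC of Chen and Chen (2008), which the HBIC family extends, has a simple closed form for Gaussian linear models; a sketch (the HBIC variants in the paper modify the model-size penalty, which is not reproduced here):

```python
import math

def ebic(n, p, k, rss, gamma=1.0):
    """Extended BIC of Chen and Chen (2008) for a Gaussian linear
    model with k selected predictors out of p, residual sum of
    squares rss, and sample size n:

        EBIC = n*log(rss/n) + k*log(n) + 2*gamma*log(C(p, k))

    gamma = 0 recovers the ordinary BIC; larger gamma penalizes
    model size more heavily, which selection consistency in the
    p >> n regime requires."""
    return (n * math.log(rss / n)
            + k * math.log(n)
            + 2.0 * gamma * math.log(math.comb(p, k)))
```

With n=100 and p=1000, a 2-predictor model with RSS 50 beats a 10-predictor model with RSS 48, because the combinatorial term log C(p, k) grows quickly with k.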

13.
Given a target contained in a constrained set and an impulse control system governing the evolution of runs or executions, which are hybrids of continuous and discrete evolutions, this paper studies and provides several characterizations of the capture basin of the target viable in the constrained set. This is the subset of initial states from which at least one run starts that remains viable in the constrained set until it reaches the target in finite time. The paper also provides algorithms and regulation rules governing the runs that reach the targets while obeying state constraints.

14.
We provide asymptotic results for time-changed Lévy processes sampled at random instants. The sampling times are given by the first hitting times of symmetric barriers whose distance from the starting point is ε. For a wide class of Lévy processes, we introduce a renormalization depending on ε under which the Lévy process converges in law to an α-stable process as ε goes to 0. The convergence is extended to moments of hitting times and overshoots. These results can be used to build high-frequency statistical procedures. As examples, we construct consistent estimators of the time change and, in the case of the CGMY process, of the Blumenthal-Getoor index. Convergence rates and a central limit theorem for suitable functionals of the increments of the observed process are established under additional assumptions.
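The scaling of barrier hitting times with ε that drives the renormalization can be seen already in the simplest discrete case: for a symmetric ±1 random walk, the expected exit time from (-a, a) started at x is a² - x², which one can verify against the first-step equations (an elementary illustration, not the Lévy-process result itself):

```python
def expected_exit_time(a, x):
    """Expected number of steps for a symmetric +/-1 random walk
    started at integer x to exit (-a, a); classical answer a^2 - x^2."""
    return a * a - x * x

def satisfies_first_step(a):
    """Verify E(x) = 1 + (E(x-1) + E(x+1))/2 on the interior
    and E(+-a) = 0 on the boundary."""
    E = expected_exit_time
    interior_ok = all(
        E(a, x) == 1 + (E(a, x - 1) + E(a, x + 1)) / 2
        for x in range(-a + 1, a)
    )
    return interior_ok and E(a, a) == 0 and E(a, -a) == 0
```

Started at the origin, exiting barriers at distance a takes a² steps on average, the diffusive analogue of the ε-dependent renormalization in the abstract.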

15.
We prove that admissible functions for the Fubini-Study metric on the complex projective space PmC of complex dimension m, invariant under a suitable automorphism group, are bounded below by a function tending to minus infinity on the boundary of the usual charts of PmC. A similar lower bound holds on some projective manifolds. This gives an optimal constant in a Hörmander-type inequality on these manifolds, which allows us, using Tian's invariant, to establish the existence of Einstein-Kähler metrics on them.

16.
A computer experiment-based optimization approach employs design of experiments and statistical modeling to represent a complex objective function that can only be evaluated pointwise by running a computer model. In large-scale applications, the number of variables is huge, and direct use of computer experiments would require an exceedingly large experimental design and, consequently, significant computational effort. If a large portion of the variables have little impact on the objective, then there is a need to eliminate these before performing the complete set of computer experiments. This is a variable selection task. The ideal variable selection method for this task should handle unknown nonlinear structure, should be computationally fast, and should be applicable after a small number of computer experiment runs, likely fewer runs (N) than the number of variables (P). Conventional variable selection techniques are based on assumed linear model forms and cannot be applied in this “large P, small N” problem. In this paper, we present a framework that adds a variable selection step prior to computer experiment-based optimization, and we consider data mining methods, using principal components analysis and multiple testing based on false discovery rate, that are appropriate for our variable selection task. An airline fleet assignment case study is used to illustrate our approach.
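Multiple testing with false discovery rate control, one of the data mining methods mentioned, is typically done with the Benjamini-Hochberg step-up procedure; a minimal sketch (interface is mine):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: given a list of p-values,
    return the (sorted) indices of hypotheses rejected at FDR level q.

    Sort the p-values, find the largest rank k with p_(k) <= k*q/m,
    and reject everything up to that rank."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    return sorted(order[:k_max])
```

Note the step-up character: a p-value can be rejected even if it misses its own threshold, provided a larger-ranked p-value passes.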

17.
For annuity providers, longevity risk, i.e. the risk that future mortality trends differ from those anticipated, constitutes an important risk factor. In order to manage this risk, new financial products, so-called longevity derivatives, may be needed, even though a first attempt to issue a longevity bond in 2004 was not successful. While different methods for pricing such securities have been proposed in the recent literature, no consensus has been reached. This paper reviews, compares and comments on these different approaches. In particular, we use data from the United Kingdom to derive prices for the proposed first longevity bond and for an alternative security design, based on the different methods.
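Under any of the pricing approaches, once a (risk-adjusted) survival curve is fixed, a simple survivor bond prices as the discounted sum of survival-indexed coupons; a minimal sketch under a flat discount rate (an illustrative simplification, not one of the paper's calibrated methods):

```python
def survivor_bond_price(survival_probs, rate, coupon=1.0):
    """Price of a simple longevity (survivor) bond whose coupon at
    time t is proportional to the survivor index S(t), discounted at
    a flat annual rate:  price = sum_t coupon * S(t) / (1+rate)^t.

    `survival_probs` is the projected survivor index S(1), S(2), ...
    for the reference cohort."""
    return sum(
        coupon * s / (1.0 + rate) ** t
        for t, s in enumerate(survival_probs, start=1)
    )
```

The pricing disagreement in the literature is about how to risk-adjust the projected S(t), i.e. which mortality measure to discount under, not about this mechanical discounting step.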

18.
Procedures for continuously monitoring binary attribute data processes are of utmost relevance for fields like electrical engineering, chemical production, software quality engineering, healthcare monitoring, and many more. In this article, new approaches are proposed in which kth-order runs in a binary process are monitored. We derive methods for evaluating the performance of the new control charts, discuss computational issues of these methods, and give design recommendations for the control charts. A real-data example demonstrates the successful application of the new control procedures.
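A toy version of a chart monitoring runs in a binary stream, signaling each time a run of k nonconforming items (1s) completes, with the run counter reset after each alarm (the paper's kth-order run charts are more general; names are mine):

```python
def run_chart_signals(stream, k):
    """Return the 1-indexed signal times of a control chart that
    alarms whenever a run of k consecutive nonconforming items (1s)
    completes; the run counter resets after each alarm."""
    signals, run = [], 0
    for t, x in enumerate(stream, start=1):
        run = run + 1 if x == 1 else 0
        if run == k:
            signals.append(t)
            run = 0
    return signals
```

For the stream `0,1,1,0,1,1,1,1` with k=2 the chart signals at times 3, 6 and 8.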

19.
Using the Markowitz mean–variance portfolio optimization theory, researchers have shown that the traditional estimated return greatly overestimates the theoretical optimal return, especially when the dimension-to-sample-size ratio p/n is large. Bai et al. (2009) propose a bootstrap-corrected estimator to correct the overestimation, but there is no closed form for their estimator. To circumvent this limitation, this paper derives explicit formulas for the estimator of the optimal portfolio return. We also prove that our proposed closed-form return estimator is consistent when n → ∞ and p/n → y ∈ (0, 1). Our simulation results show that our proposed estimators dramatically outperform traditional estimators for both the optimal return and its corresponding allocation under different values of the p/n ratio and different inter-asset correlations ρ, especially when p/n is close to 1. We also find that our proposed estimators perform better than the bootstrap-corrected estimators for both the optimal return and its corresponding allocation. Another advantage of our improved estimation of returns is that we obtain an explicit formula for the standard deviation of the improved return estimate, which is smaller than that of the traditional estimate, especially when p/n is large. In addition, we illustrate the applicability of our proposed estimate on the US stock market.
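The plug-in (traditional) estimator whose overestimation is being corrected: with estimated mean vector μ and covariance matrix Σ, the Markowitz optimal expected return at a given risk level is risk · sqrt(μᵀΣ⁻¹μ). A two-asset sketch (the bias correction itself is not reproduced; the interface is mine):

```python
import math

def plugin_optimal_return(mu, sigma, risk_level):
    """Plug-in Markowitz optimal expected return at a given risk
    level for two assets: risk_level * sqrt(mu' Sigma^{-1} mu).

    With estimated inputs, this quantity is systematically biased
    upward when p/n is non-negligible, which is the effect the
    corrected estimators address."""
    (a, b), (c, d) = sigma
    det = a * d - b * c                       # 2x2 inverse by hand
    inv = ((d / det, -b / det), (-c / det, a / det))
    quad = sum(mu[i] * inv[i][j] * mu[j]
               for i in range(2) for j in range(2))
    return risk_level * math.sqrt(quad)
```

With p as small as 2 the plug-in bias is negligible; the paper's point is that it grows sharply as p/n approaches 1.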

20.
Both technology and market demands within the high-tech electronics manufacturing industry change rapidly. Accurate and efficient estimation of the cycle-time (CT) distribution remains a critical driver of on-time delivery and the associated customer satisfaction metrics in these complex manufacturing systems. Simulation models are often used to emulate these systems in order to estimate parameters of the CT distribution. However, the execution time of such simulation models can be excessively long, limiting the number of simulation runs that can be executed to quantify the impact of potential future operational changes. One solution is simulation metamodeling, in which a closed-form mathematical expression is built to approximate the input–output relationship implied by the simulation model, based on simulation experiments run in advance at selected design points. Metamodels can be easily evaluated in a spreadsheet environment “on demand” to answer what-if questions without needing to run lengthy simulations. The majority of previous simulation metamodeling approaches have focused on estimating mean CT as a function of a single input variable (i.e., throughput). In this paper, we demonstrate the feasibility of a quantile-regression-based metamodeling approach. This method allows estimation of CT quantiles as a function of multiple input variables (e.g., throughput, product mix, and various distributional parameters of time-between-failures, repair time, setup time, and loading and unloading times). Empirical results demonstrate the efficacy of the approach in a realistic simulation model representative of a semiconductor manufacturing system.
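Quantile regression metamodels are fit by minimizing the pinball (check) loss, whose population minimizer is the conditional τ-quantile; a minimal one-dimensional illustration (names are mine):

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball (check) loss used in quantile regression: residuals
    above the prediction are weighted tau, those below 1-tau; its
    expectation is minimized by the tau-quantile."""
    total = 0.0
    for y in y_true:
        diff = y - y_pred
        total += tau * diff if diff >= 0 else (tau - 1.0) * diff
    return total / len(y_true)

def empirical_quantile_by_loss(data, tau):
    """The data point minimizing the empirical pinball loss is an
    empirical tau-quantile -- the fitting principle behind a
    quantile-regression metamodel, here with a constant predictor."""
    return min(data, key=lambda c: pinball_loss(data, c, tau))
```

A CT metamodel replaces the constant predictor with a regression function of throughput, product mix, etc., but the loss being minimized is the same.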


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号