Similar Literature
1.
Simulation modellers frequently face a choice between fidelity and variety in their input scenarios. Using a historical trace provides only one realistic scenario; using the input modelling facilities in commercial simulation software may provide any number of unrealistic scenarios. We ease this dilemma by developing a way to use the moving blocks bootstrap to convert a single trace into an unlimited number of realistic input scenarios. We do this by setting the bootstrap block size so that the bootstrap samples mimic independent realizations in terms of the distribution of the distance between pairs of inputs. We measure distance using a new statistic computed from zero crossings. We estimate the best block size by scaling up an estimate computed by analysing subseries of the trace.
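A minimal numpy sketch of the core resampling step, assuming a fixed block size; the paper's zero-crossing distance statistic and its block-size scaling rule are not reproduced here:

```python
import numpy as np

def moving_blocks_bootstrap(trace, block_size, seed=None):
    """Resample a series by concatenating randomly chosen overlapping blocks."""
    rng = np.random.default_rng(seed)
    n = len(trace)
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    blocks = [trace[s:s + block_size] for s in starts]
    return np.concatenate(blocks)[:n]  # trim back to the original length

# Example: turn one historical trace into many synthetic input scenarios.
trace = np.random.default_rng(0).normal(size=500).cumsum()
scenarios = [moving_blocks_bootstrap(trace, block_size=25, seed=i)
             for i in range(100)]
```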

2.
The study of the rodent fluctuations of the North was initiated in its modern form with Elton's pioneering work. Many scientific studies have been designed to collect yearly rodent abundance data, but the resulting time series are generally subject to at least two "problems": being short and non-linear. We explore the use of continuous threshold autoregressive (TAR) models for analyzing such data. In the simplest case, continuous TAR models are additive autoregressive models, piecewise linear in one lag and linear in all other lags. The location of the slope change is called the threshold parameter. Continuous TAR models for rodent abundance data can be derived from a general prey-predator model under some simplifying assumptions, and the lag in which the threshold is located sheds important light on the structure of the prey-predator system. We propose to assess the uncertainty in the location of the threshold via a new bootstrap called the nearest block bootstrap (NBB), which combines the moving block bootstrap and the nearest neighbor bootstrap. The NBB assumes an underlying finite-order time-homogeneous Markov process. Essentially, the NBB resamples blocks of random block sizes, with each block drawn from a non-parametric estimate of the distribution of the future given the realized past of the bootstrap series. We illustrate the methods by simulations and on a rodent abundance time series from Kilpisjärvi, Northern Finland.
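To make the model concrete, below is a hedged sketch of fitting a continuous TAR of the form x_t = a + b_1 x_{t-1} + ... + b_p x_{t-p} + c * max(x_{t-d} - r, 0) + e_t by profiling the threshold r over a grid of candidate values; all names are illustrative, and the NBB itself is not implemented:

```python
import numpy as np

def fit_continuous_tar(x, p=2, d=1, n_grid=50):
    """Grid-profile the threshold r; solve the remaining coefficients by OLS.

    The model is piecewise linear in lag d (via the hinge term) and linear
    in all other lags, matching the continuous TAR structure."""
    x = np.asarray(x, dtype=float)
    y = x[p:]
    lags = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    grid = np.quantile(lags[:, d - 1], np.linspace(0.1, 0.9, n_grid))
    best = None
    for r in grid:
        hinge = np.maximum(lags[:, d - 1] - r, 0.0)
        X = np.column_stack([np.ones_like(y), lags, hinge])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, r, beta)
    return best  # (residual sum of squares, threshold, coefficients)
```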

3.
Random effects models for hierarchically dependent data, for example clustered data, are widely used. A popular bootstrap method for such data is the parametric bootstrap based on the same random effects model as that used in inference. However, it is hard to justify this type of bootstrap when that model is known to be an approximation. In this article, we describe a random effect block bootstrap approach for clustered data that is simple to implement, free of both the distribution and the dependence assumptions of the parametric bootstrap, and consistent when the mixed model assumptions are valid. Monte Carlo simulation results show that the proposed method is robust to failure of the dependence assumptions of the assumed mixed model. An application to a realistic environmental dataset indicates that the method produces sensible results. Supplementary materials for the article, including the data used for the application, are available online.
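The authors' random effect block bootstrap has its own resampling scheme, detailed in the article; as a hedged illustration of cluster-level resampling in the same nonparametric spirit, one might write:

```python
import numpy as np
import pandas as pd

def cluster_bootstrap(df, cluster_col, stat_fn, n_boot=500, seed=0):
    """Resample whole clusters with replacement and recompute a statistic,
    keeping the within-cluster dependence structure intact."""
    rng = np.random.default_rng(seed)
    ids = df[cluster_col].unique()
    stats = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=len(ids), replace=True)
        sample = pd.concat([df[df[cluster_col] == c] for c in chosen],
                           ignore_index=True)
        stats.append(stat_fn(sample))
    return np.asarray(stats)

# Usage (columns 'site' and 'y' are hypothetical):
# se = cluster_bootstrap(df, "site", lambda d: d["y"].mean()).std()
```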

4.
This report studies the reliability of a system exposed to stress, with the standard two-sided power distribution assumed as the underlying distribution. We obtain exact expressions and estimates for the reliability by applying different methods, including maximum likelihood and Bayesian estimation. Three scenarios are examined: known and equal reflection parameters, known but unequal reflection parameters, and all parameters unknown, providing practical guidance and recommendations for estimator design. For large samples, we recommend the parametric bootstrap method with the maximum likelihood estimate. Real data sets are used to illustrate the performance of the estimators.
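As a hedged illustration of the recommended large-sample approach (a parametric bootstrap around the maximum likelihood estimate) for a stress-strength reliability R = P(stress < strength), the sketch below substitutes exponential models for the paper's standard two-sided power distribution:

```python
import numpy as np

def reliability_bootstrap_ci(stress, strength, n_boot=2000, seed=0):
    """Parametric bootstrap percentile CI for R = P(stress < strength).

    Exponential stand-in: the MLE of an exponential rate is 1/mean, and
    for X ~ Exp(lx), Y ~ Exp(ly) one has R = lx / (lx + ly)."""
    rng = np.random.default_rng(seed)
    lx, ly = 1 / stress.mean(), 1 / strength.mean()
    r_hat = lx / (lx + ly)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        bx = rng.exponential(1 / lx, size=len(stress))
        by = rng.exponential(1 / ly, size=len(strength))
        lbx, lby = 1 / bx.mean(), 1 / by.mean()
        reps[b] = lbx / (lbx + lby)
    return r_hat, tuple(np.percentile(reps, [2.5, 97.5]))
```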

5.
Monte Carlo simulation is a common method for studying the volatility of market-traded instruments. It is less employed in retail lending because of the inherent nonlinearities in consumer behaviour. In this paper, we use the approach of Dual-time Dynamics to separate loan performance dynamics into three components: a maturation function of months-on-books, an exogenous function of calendar date, and a quality function of vintage origination date. The exogenous function captures the impacts of the macroeconomic environment; we therefore want to generate scenarios for the possible futures of these environmental impacts. To generate such scenarios, we must go beyond the random walk methods most commonly applied in the analysis of market-traded instruments: retail portfolios exhibit an autocorrelation structure and variance growth with time that require more complex modelling. This paper is aimed at practical application and describes work using ARMA and ARIMA models for scenario generation, rules for selecting the correct model form given the input data, and validation methods for the scenario generation. We find that when the goal is to capture future volatility via Monte Carlo scenario generation, model selection does not follow the same rules as for forecasting. Consequently, tests more appropriate to reproducing volatility are proposed, which ensure that the distributions of scenarios have the proper statistical characteristics. These results are supported by studies of the variance growth properties of macroeconomic variables and theoretical calculations of the variance growth properties of various models. We also provide studies on historical data showing the impact of training length on model accuracy and the existence of differences between macroeconomic epochs.
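As a deliberately minimal stand-in for such a scenario engine, the sketch below fits an AR(1) by least squares and simulates forward paths; unlike a pure random walk it reproduces autocorrelation, though the paper's full ARMA/ARIMA selection and validation machinery is not shown:

```python
import numpy as np

def ar1_scenarios(series, horizon, n_scenarios, seed=0):
    """Fit x_t = c + phi * x_{t-1} + e_t and simulate forward scenarios."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    y0, y1 = series[:-1], series[1:]
    phi = np.cov(y0, y1)[0, 1] / np.var(y0, ddof=1)
    c = y1.mean() - phi * y0.mean()
    sigma = np.std(y1 - c - phi * y0, ddof=1)
    paths = np.empty((n_scenarios, horizon))
    x = np.full(n_scenarios, series[-1])
    for t in range(horizon):
        x = c + phi * x + rng.normal(0.0, sigma, size=n_scenarios)
        paths[:, t] = x
    return paths

# Example: 1,000 twelve-month scenarios for a macroeconomic driver.
history = np.random.default_rng(1).normal(size=120).cumsum()
scenarios = ar1_scenarios(history, horizon=12, n_scenarios=1000)
```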

6.
A scenario tree is an efficient way to represent a stochastic data process in decision problems under uncertainty. This paper addresses how to generate appropriate scenario trees efficiently. A knowledge-based scenario tree generation method is proposed; the new method is further improved by accounting for subjective judgements or expectations about the random future. Compared with existing approaches, our knowledge-based algorithms avoid complicated mathematical models and the time-consuming solution of estimation, simulation and optimization problems, so large-scale scenario trees can be generated quickly. To show the advantages of the new algorithms, a multiperiod portfolio selection problem is considered, and a dynamic risk measure is adopted to control the intermediate risk, which is superior to the single-period risk measure used in the existing literature. A series of numerical experiments is carried out using real trading data from the Shanghai stock market. The results show that the scenarios generated by our algorithms properly represent the underlying distribution, and that the algorithms have high performance: for example, a scenario tree with up to 10,000 scenarios can be generated in less than half a minute. The applications in the multiperiod portfolio management problem demonstrate that our scenario tree generation methods are stable, and that the optimal trading strategies obtained with the generated scenario trees are reasonable, efficient and robust.

7.
8.
The validity of the moving block bootstrap for the empirical distribution of a short-memory causal linear process is established under simple conditions that do not involve mixing or association. Sufficient conditions can be expressed in terms of the existence of moments of the innovations and the summability of the coefficients of the linear model. Applications to one- and two-sample tests are discussed.

9.
The analysis of seasonal or annual block maxima is of interest in fields such as hydrology, climatology or meteorology. In connection with the celebrated method of block maxima, we study several tests that can be used to assess whether the available series of maxima is identically distributed. It is assumed that block maxima are independent but not necessarily generalized extreme value distributed. The asymptotic null distributions of the test statistics are investigated, and the practical computation of approximate p-values is addressed. Extensive Monte Carlo simulations show the adequate finite-sample behavior of the studied tests for a large number of realistic data-generating scenarios. Illustrations on several environmental datasets conclude the work.
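As a rough illustration of the setting only (not the paper's purpose-built tests), one can extract block maxima and compare the two halves of the resulting series with an off-the-shelf two-sample test:

```python
import numpy as np
from scipy import stats

def block_maxima(x, block_size):
    """Maxima over consecutive, non-overlapping blocks."""
    n_blocks = len(x) // block_size
    return x[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

# Crude identical-distribution check on 40 "annual" maxima.
daily = np.random.default_rng(1).gumbel(size=365 * 40)
maxima = block_maxima(daily, 365)
half = len(maxima) // 2
print(stats.ks_2samp(maxima[:half], maxima[half:]))
```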

10.
Random weighting method for Cox's proportional hazards model
The variance of the parameter estimates in Cox's proportional hazards model is usually based on asymptotic variance formulas. When the sample size is small, the variance can instead be estimated by the bootstrap. However, if the censoring rate in a survival data set is high, the bootstrap may fail to work properly, because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance and confidence interval estimation in the proportional hazards model. Unlike the bootstrap, this method does not lead to more severe censoring than in the original sample. Its large-sample properties are studied, and consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as the bootstrap and can produce good variance estimates and confidence intervals.
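A hedged sketch of the random-weighting idea, built on the lifelines library's case-weight support (the paper develops the method and its asymptotic theory directly, without any such library):

```python
import numpy as np
from lifelines import CoxPHFitter

def random_weighting_cox(df, duration_col, event_col, n_rep=500, seed=0):
    """Perturb each subject's likelihood contribution with an i.i.d. Exp(1)
    weight and refit, instead of resampling rows as the bootstrap does, so
    censored observations are never duplicated."""
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_rep):
        d = df.copy()
        d["_w"] = rng.exponential(1.0, size=len(d))
        cph = CoxPHFitter()
        cph.fit(d, duration_col=duration_col, event_col=event_col,
                weights_col="_w", robust=True)
        reps.append(cph.params_.to_numpy())
    reps = np.vstack(reps)
    return reps.mean(axis=0), reps.var(axis=0, ddof=1)
```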

11.
UK military satellite communications systems are currently provided by the Skynet programme. The current generation is collectively known as Skynet 4 and is due to be replaced from 2003 by the next generation. CDA was asked to conduct an operational research study to review the potential communications requirements of the Ministry of Defence (MoD) in the period 2003–2013 and to assess what contribution the proposed future military satellite communications system could make to them. This paper discusses some of the political and technical issues surrounding the work. Technically, the study looked at the dynamic loading of five different satellite options in three scenarios set in 2010. Each combination of scenario and satellite was modelled, and the results were produced in terms of the chosen measures of effectiveness. The sensitivity of these results to various assumptions was tested to determine their robustness. The scenarios included land, air and maritime operations. In an area where little or no analysis had previously been attempted in the UK (at least in the military context), this 'intervention' succeeded in clarifying the main issues and setting a course for future work.

12.
A two-stage stochastic program is formulated for day-ahead commitment of thermal generating units to minimize total expected cost, considering uncertainties in the day-ahead load and the availability of variable generation resources. Commitments of thermal units in the stochastic reliability unit commitment are viewed as first-stage decisions, and dispatch is relegated to the second stage. Solving such a stochastic program is challenging if many scenarios are incorporated. A heuristic scenario reduction method termed forward selection in recourse clusters (FSRC), which selects scenarios based on their cost and reliability impacts, is presented to alleviate the computational burden. In instances down-sampled from data for an Independent System Operator in the US, FSRC results in more reliable commitment schedules of similar cost compared with those from a scenario reduction method based on probability metrics. Moreover, in a rolling horizon study, FSRC preserves solution quality even when the reduction is substantial.

13.
We develop a scenario optimization model for asset and liability management of individual investors. The individual has a given level of initial wealth and a target goal to be reached within some time horizon. The individual must determine an asset allocation strategy so that the portfolio growth rate will be sufficient to reach the target. A scenario optimization model is formulated which maximizes the upside potential of the portfolio, with limits on the downside risk. Both upside and downside are measured vis-à-vis the goal. The stochastic behavior of asset returns is captured through bootstrap simulation, and the simulation is embedded in the model to determine the optimal portfolio. Post-optimality analysis using out-of-sample scenarios measures the probability of success of a given portfolio. It also allows us to estimate the required increase in the initial endowment so that the probability of success is improved.

14.
This study uses the variance ratio test to examine the behavior of the Brazilian exchange rate. We show that adjustments for multiple tests and a bootstrap methodology must be employed to avoid size distortions. We propose a block bootstrap scheme and show that it has much better properties than the traditional Chow–Denning [Chow, K.V., Denning, K.C., 1993. A simple multiple variance ratio test. Journal of Econometrics 58 (3), 385–401] multiple variance ratio tests. Overall, the method proposed in the paper provides evidence refuting random walk behavior for the Brazilian exchange rate over long investment horizons, but is consistent with the random walk hypothesis over short horizons. Additionally, we test the predictive power of variable moving average (VMA) and trading range break (TRB) technical rules and find evidence of forecasting ability for these rules. Nonetheless, the excess return that can be obtained from such rules is not significant, suggesting that the predictability is not economically significant.
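For orientation, here is a sketch of a Lo-MacKinlay style variance ratio statistic together with a generic moving-block resampling loop; the paper's actual block bootstrap scheme and its multiple-test adjustments differ in the details:

```python
import numpy as np

def variance_ratio(returns, q):
    """Var of q-period sums over q times Var of 1-period returns;
    equals 1 in expectation under a random walk."""
    rq = np.convolve(returns, np.ones(q), mode="valid")  # rolling q-sums
    return rq.var(ddof=1) / (q * returns.var(ddof=1))

def block_bootstrap_pvalue(returns, q, block, n_boot=999, seed=0):
    """Approximate p-value for |VR(q) - 1| via moving-block resampling."""
    rng = np.random.default_rng(seed)
    obs = abs(variance_ratio(returns, q) - 1)
    n = len(returns)
    exceed = 0
    for _ in range(n_boot):
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        resampled = np.concatenate(
            [returns[s:s + block] for s in starts])[:n]
        if abs(variance_ratio(resampled, q) - 1) >= obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```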

15.
Discontinuous deformation analysis (DDA) simulates fracture propagation by introducing a virtual joint mesh that discretizes a block into a system of sub-blocks. To address the difficulty this method has in obtaining accurate block stress distributions, a stress recovery algorithm based on meshless moving least squares (MLS) interpolation is proposed. Using the nodal displacements computed by DDA, and by suitably constructing the MLS shape functions and their derivatives, formulas for the stress at an arbitrary point of a block are derived. Numerical examples compare the MLS-based post-processing results with analytical solutions and with averaging-based post-processing, verifying the accuracy and effectiveness of the proposed method.
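A generic two-dimensional MLS sketch with a linear basis and a Gaussian weight function, illustrating the interpolation at the heart of the proposed post-processing; the DDA-specific construction of shape functions and their derivatives is not reproduced:

```python
import numpy as np

def mls_value(x_eval, nodes, values, radius):
    """Moving least squares with basis p = [1, x, y]: solve the local
    weighted normal equations and evaluate the fit at x_eval."""
    d = np.linalg.norm(nodes - x_eval, axis=1)
    w = np.exp(-(d / radius) ** 2)                 # Gaussian weight
    P = np.column_stack([np.ones(len(nodes)), nodes])
    A = P.T @ (w[:, None] * P)                     # moment matrix
    b = P.T @ (w * values)
    return np.array([1.0, *x_eval]) @ np.linalg.solve(A, b)

# Example: recover a smooth stress value from scattered nodal data.
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 1, size=(50, 2))
stress = nodes[:, 0] + 2 * nodes[:, 1] + rng.normal(0, 0.01, size=50)
print(mls_value(np.array([0.5, 0.5]), nodes, stress, radius=0.3))
```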

16.
As is well known, the jackknife and the bootstrap methods fail for the mean of dependent observations. Recently, the moving blocks jackknife and bootstrap have been proposed for the case of dependent observations. For the mean of strictly stationary and m-dependent observations, it has been proved that the proposed distribution and variance estimators are weakly consistent. This paper proves that the distribution and variance estimators are strongly consistent for the mean (and regular functions of the mean) of strictly stationary and m-dependent or (?)-mixing observations.

17.
Several techniques for resampling dependent data have already been proposed. In this paper we use missing-value techniques to modify the moving blocks jackknife and bootstrap. More specifically, we treat the blocks of deleted observations in the blockwise jackknife as missing data, which are recovered by missing-value estimates that incorporate the dependence structure of the observations. Thus, we estimate the variance of a statistic as a weighted sample variance of the statistic evaluated on a "completed" series. Consistency of the variance and distribution estimators of the sample mean is established. We also apply the missing-value approach to the blockwise bootstrap by including some missing observations between two consecutive blocks, and we demonstrate the consistency of the variance and distribution estimators of the sample mean. Finally, we present the results of an extensive Monte Carlo study evaluating the performance of these methods for finite sample sizes, showing that our proposal provides variance estimates for several time series statistics with smaller mean squared error than previous procedures.
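A hedged sketch of the missing-value flavor of the blockwise jackknife: each deleted block is treated as missing and filled by linear interpolation (a crude stand-in for dependence-aware missing-value estimates), and the statistic is evaluated on the "completed" series; the weighted-variance normalization of the paper is not reproduced:

```python
import numpy as np

def blockwise_jackknife_replicates(x, block, stat=np.mean):
    """One replicate per deletable block: impute the block, then evaluate
    the statistic on the completed series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    reps = []
    for i in range(n - block + 1):
        y = x.copy()
        left = x[i - 1] if i > 0 else x[i + block]
        right = x[i + block] if i + block < n else x[i - 1]
        # Linear interpolation across the deleted span.
        y[i:i + block] = np.linspace(left, right, block + 2)[1:-1]
        reps.append(stat(y))
    return np.asarray(reps)
```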

18.
The core of the classical block maxima method consists of fitting an extreme value distribution to a sample of maxima over blocks extracted from an underlying series. In asymptotic theory, it is usually postulated that the block maxima are an independent random sample from an extreme value distribution. In practice, however, block sizes are finite, so the extreme value postulate holds only approximately. A more accurate asymptotic framework is that of a triangular array of block maxima, the block size depending on the size of the underlying sample in such a way that both the block size and the number of blocks within that sample tend to infinity. The copula of the vector of componentwise maxima in a block is assumed to converge to a limit, which, under mild conditions, is then necessarily an extreme value copula. Under this setting, and for absolutely regular stationary sequences, the empirical copula of the sample of vectors of block maxima is shown to be a consistent and asymptotically normal estimator of the limiting extreme value copula. Moreover, the empirical copula serves as a basis for rank-based, nonparametric estimation of the Pickands dependence function of the extreme value copula. The results are illustrated by theoretical examples and a Monte Carlo simulation study.
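A small numpy sketch of the empirical copula of a sample of componentwise block maxima, the estimator whose consistency and asymptotic normality the paper establishes:

```python
import numpy as np

def empirical_copula(u, data):
    """C_n(u): proportion of observations whose componentwise rank/n lies
    below u in every coordinate."""
    data = np.asarray(data)
    n = len(data)
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    return np.mean(np.all(ranks / n <= np.asarray(u), axis=1))

# Example: 20 "annual" bivariate maxima extracted from daily data.
rng = np.random.default_rng(2)
daily = rng.gumbel(size=(20 * 365, 2))
maxima = daily.reshape(20, 365, 2).max(axis=1)
print(empirical_copula([0.5, 0.5], maxima))
```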

19.
A simple mapping approach is proposed to study bootstrap accuracy in a rather general setting. It is demonstrated that the bootstrap accuracy can be obtained through this method for a broad class of statistics to which the commonly used Edgeworth expansion approach may not be successfully applied. We then consider some examples to illustrate how this approach may be used to find the bootstrap accuracy and to show the advantage of the bootstrap approximation over the Gaussian approximation. For the multivariate Kolmogorov–Smirnov statistic, we show that the error of the bootstrap approximation is as small as that of the Gaussian approximation. For the multivariate kernel-type density estimate, we obtain an order of the bootstrap error that is smaller than the order of the error of the Gaussian approximation given in Rio (1994). We also consider an application of the bootstrap accuracy for the empirical process to that for the copula process.

20.
Block Krylov subspace methods (KSMs) are building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations, with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
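A generic sketch of the compression step that keeps the residual's rank bounded across restart cycles, via truncated SVD of a tall low-rank factor; the restarted block-Krylov iteration itself is not shown:

```python
import numpy as np

def compress(Z, tol=1e-8):
    """Return a thinner factor Z_k with Z_k @ Z_k.T approximating Z @ Z.T:
    keep only singular directions above a relative tolerance."""
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k]

# Example: a 1000 x 60 factor of numerical rank 5 compresses to 5 columns.
rng = np.random.default_rng(3)
Z = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 60))
print(compress(Z).shape)
```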
