Similar documents
20 similar documents found (search time: 15 ms)
1.
We consider the efficiency and the power of the normal theory test for independence after a Box-Cox transformation. We obtain an expression for the correlation between the variates after a Box-Cox transformation in terms of the correlation on the normal scale. We discuss the efficiency of the test of independence after a Box-Cox transformation and show that, for the family considered, it is always more efficient to conduct the test of independence based on the Pearson correlation coefficient after transformation to normality. The power of the test of independence before and after a Box-Cox transformation is studied for finite sample sizes using Monte Carlo simulation. Our results show that we can increase the power of the normal-theory test for independence by estimating the transformation parameter from the data. The procedure also has application to generating non-negative random variables with prescribed correlation.

2.
In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. We therefore propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches in the literature, is also applicable when comparing distributions of efficiency scores in more than two groups, and does not rely on bootstrapping of, or questionable distributional assumptions about, the efficiency scores. The approach is illustrated using an empirical case of demolition projects. Since the assumption of mix independence is rejected, the implication is, for example, that it is impossible to determine whether machine-intensive projects are more or less efficient than labor-intensive projects.
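The general shape of such a simulation-based calibration can be sketched as follows. This is a simple permutation-style stand-in, not the paper's DEA-conditional procedure: group sizes and score distributions are invented, and the KW statistic's reference distribution is approximated by relabelling rather than by simulating DEA scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical efficiency scores for three input-mix groups (illustrative).
groups = [rng.beta(5, 2, size=30), rng.beta(5, 2, size=25), rng.beta(3, 2, size=20)]

# Standard Kruskal-Wallis H statistic on the observed scores.
h_obs = stats.kruskal(*groups).statistic

# Because estimated efficiency scores violate the usual KW assumptions,
# approximate the null distribution of H by simulation: pool the scores
# and repeatedly redraw the group labels.
pooled = np.concatenate(groups)
sizes = [len(g) for g in groups]
h_sim = []
for _ in range(2000):
    perm = rng.permutation(pooled)
    resampled, start = [], 0
    for s in sizes:
        resampled.append(perm[start:start + s])
        start += s
    h_sim.append(stats.kruskal(*resampled).statistic)

p_sim = np.mean(np.array(h_sim) >= h_obs)
print(f"H = {h_obs:.2f}, simulated p-value = {p_sim:.3f}")
```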

3.
Multivariate autoregressive models with exogenous variables (VARX) are often used in econometric applications. Many properties of the basic statistics for this class of models rely on the assumption of independent errors. Using results of Hong (Econometrica 64 (1996) 837), we propose a new test statistic for checking the hypothesis of non-correlation or independence in the Gaussian case. The test statistic is obtained by comparing the spectral density of the errors under the null hypothesis of independence with a kernel-based spectral density estimator. The asymptotic distribution of the statistic is derived under the null hypothesis. This test generalizes the portmanteau test of Hosking (J. Amer. Statist. Assoc. 75 (1980) 602). The consistency of the test is established for a general class of static regression models with autocorrelated errors. Its asymptotic slope is derived and the asymptotic relative efficiency within the class of possible kernels is also investigated. Finally, the level and power of the resulting tests are also studied by simulation.

4.
The “classical” random graph models, in particular G(n,p), are “homogeneous,” in the sense that the degrees (for example) tend to be concentrated around a typical value. Many graphs arising in the real world do not have this property, having, for example, power‐law degree distributions. Thus there has been a lot of recent interest in defining and studying “inhomogeneous” random graph models. One of the most studied properties of these new models is their “robustness”, or, equivalently, the “phase transition” as an edge density parameter is varied. For G(n,p), p = c/n, the phase transition at c = 1 has been a central topic in the study of random graphs for well over 40 years. Many of the new inhomogeneous models are rather complicated; although there are exceptions, in most cases precise questions such as determining exactly the critical point of the phase transition are approachable only when there is independence between the edges. Fortunately, some models studied have this property already, and others can be approximated by models with independence. Here we introduce a very general model of an inhomogeneous random graph with (conditional) independence between the edges, which scales so that the number of edges is linear in the number of vertices. This scaling corresponds to the p = c/n scaling for G(n,p) used to study the phase transition; also, it seems to be a property of many large real‐world graphs. Our model includes as special cases many models previously studied. We show that, under one very weak assumption (that the expected number of edges is “what it should be”), many properties of the model can be determined, in particular the critical point of the phase transition, and the size of the giant component above the transition. We do this by relating our random graphs to branching processes, which are much easier to analyze. 
We also consider other properties of the model, showing, for example, that when there is a giant component, it is “stable”: for a typical random graph, no matter how we add or delete o(n) edges, the size of the giant component does not change by more than o(n). © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 31, 3–122, 2007
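The branching-process connection can be checked numerically in the classical homogeneous case G(n, p) with p = c/n: the giant-component fraction predicted by the survival probability ρ solving ρ = 1 − e^{−cρ} matches simulation. This is only a sketch of the classical case, not the inhomogeneous model of the paper, and the edge sampling below (m uniformly random pairs) is a close approximation to G(n, p) at this sparsity rather than the exact model:

```python
import numpy as np

def largest_component_fraction(n, c, seed=0):
    """Fraction of vertices in the largest component of (approximately) G(n, c/n)."""
    rng = np.random.default_rng(seed)
    parent = list(range(n))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Draw Binomial(n(n-1)/2, c/n) edges; duplicates/self-loops are negligible
    # at this density, so uniform random pairs approximate G(n, p) well.
    m = rng.binomial(n * (n - 1) // 2, c / n)
    for _ in range(m):
        u, v = rng.integers(0, n, size=2)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    roots = [find(i) for i in range(n)]
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n

# Branching-process prediction: the giant fraction rho solves
# rho = 1 - exp(-c * rho); iterate the fixed point for c = 2 (above c = 1).
c, rho = 2.0, 0.5
for _ in range(100):
    rho = 1 - np.exp(-c * rho)

frac = largest_component_fraction(20000, c)
print(f"simulated: {frac:.3f}, branching-process prediction: {rho:.4f}")
```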

5.
The data driven Neyman statistic consists of two elements: a score statistic in a finite dimensional submodel and a selection rule to determine the best fitted submodel. For instance, the Schwarz BIC and Akaike AIC rules are often applied in such constructions. For moderate sample sizes, AIC is sensitive in detecting complex models, while BIC works well for relatively simple structures; the choice of selection rule therefore has a substantial influence on the power of the related data driven Neyman test. This paper proposes a new solution, in which the type of penalty (AIC or BIC) is chosen on the basis of the data. The resulting refined data driven test combines the advantages of these two selection rules.
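A minimal sketch of the fixed-penalty construction for testing uniformity follows: the score statistic accumulates squared empirical Fourier coefficients of orthonormal Legendre components, and the submodel dimension k maximises N_k minus a penalty (log n for BIC, 2 for AIC). The data-generating choices are invented, and this fixed-penalty version is the classical test, not the paper's refined data-driven switch between penalties.

```python
import numpy as np
from numpy.polynomial import legendre

def neyman_statistic(x, k_max=10, penalty="BIC"):
    """Data-driven Neyman smooth statistic for H0: x ~ Uniform(0, 1).

    k is chosen by maximising N_k - pen * k, with pen = log(n) (Schwarz/BIC)
    or pen = 2 (Akaike/AIC).
    """
    n = len(x)
    t = 2 * x - 1  # map [0, 1] onto [-1, 1] for the Legendre polynomials
    coeffs = []
    for j in range(1, k_max + 1):
        # sqrt(2j+1) * P_j is orthonormal for t uniform on [-1, 1].
        pj = legendre.Legendre.basis(j)(t) * np.sqrt(2 * j + 1)
        coeffs.append(pj.sum() / np.sqrt(n))
    cum = np.cumsum(np.square(coeffs))  # N_1, ..., N_kmax
    pen = np.log(n) if penalty == "BIC" else 2.0
    k = int(np.argmax(cum - pen * np.arange(1, k_max + 1))) + 1
    return cum[k - 1], k

rng = np.random.default_rng(2)
u = rng.uniform(size=300)        # data from the null
stat0, k0 = neyman_statistic(u)
v = rng.beta(2, 2, size=300)     # a smooth departure from uniformity
stat1, k1 = neyman_statistic(v)
print(f"null: N_k = {stat0:.2f} (k = {k0}); beta(2,2): N_k = {stat1:.2f} (k = {k1})")
```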

6.
We present a nonlinear technique to correct a general finite volume scheme for anisotropic diffusion problems, which provides a discrete maximum principle. We point out general properties satisfied by many finite volume schemes and prove the proposed corrections also preserve these properties. We then study two specific corrections proving, under numerical assumptions, that the corresponding approximate solutions converge to the continuous one as the size of the mesh tends to zero. Finally we present numerical results showing that these corrections suppress local minima produced by the original finite volume scheme.

7.
陈敏 (Chen Min), 《应用数学学报》 (Acta Mathematicae Applicatae Sinica), 2002, 25(4): 577-590
Threshold autoregressive models are widely used in many fields. When building or applying such models, an important question is whether conditional heteroscedasticity is present. In this paper we propose a nonparametric test for this problem and derive its large-sample theory. We also study the finite-sample properties of the test by numerical simulation; the results show that the test has good power. Empirical percentiles are provided as well.
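The paper's nonparametric statistic is not reproduced here; as a simple stand-in in the same spirit, a McLeod-Li style portmanteau test on squared residuals detects conditional heteroscedasticity. The homoscedastic and ARCH-type series below, and their coefficients, are invented for the sketch:

```python
import numpy as np
from scipy import stats

def squared_residual_portmanteau(e, m=10):
    """Ljung-Box portmanteau statistic applied to squared residuals
    (McLeod-Li), a standard check for conditional heteroscedasticity."""
    n = len(e)
    s = e ** 2 - np.mean(e ** 2)
    acf = np.array([np.sum(s[k:] * s[:-k]) / np.sum(s ** 2)
                    for k in range(1, m + 1)])
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, m + 1)))
    return q, stats.chi2.sf(q, m)

rng = np.random.default_rng(3)
# Homoscedastic noise vs. a simple ARCH(1)-type sequence.
e_iid = rng.standard_normal(500)
e_arch = np.empty(500)
e_arch[0] = rng.standard_normal()
for t in range(1, 500):
    e_arch[t] = rng.standard_normal() * np.sqrt(0.2 + 0.7 * e_arch[t - 1] ** 2)

q_iid, p_iid = squared_residual_portmanteau(e_iid)
q_arch, p_arch = squared_residual_portmanteau(e_arch)
print(f"iid:  Q = {q_iid:.1f}, p = {p_iid:.4f}")
print(f"ARCH: Q = {q_arch:.1f}, p = {p_arch:.4f}")
```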

8.
Generalized estimating equation inference for longitudinal data
The generalized estimating equation (GEE) method is one of the most general parameter-estimation methods and is widely applied in biostatistics, econometrics, health insurance, and other fields. With longitudinal data, observations within a subject are correlated, so GEE methods generally need to account for within-subject correlation in order to improve estimation efficiency. Most of the literature therefore places parametric assumptions on the within-subject covariance matrix; however, the plausibility of these assumptions and the quality of the covariance estimate strongly affect the efficiency of the parameter estimates, and the parametric assumptions may also lead to model misspecification. For GEE with longitudinal data, this paper proposes improved GMM and empirical likelihood methods and establishes large-sample properties for the resulting estimators. The blocking idea used here avoids assumptions on the within-subject correlation structure, so in this sense the methods are robust. Two simulated examples are used to examine the finite-sample properties of the proposed estimators.

9.
We consider partially ordered models. We introduce the notions of a weakly (quasi-)p.o.-minimal model and a weakly (quasi-)p.o.-minimal theory. We prove that weakly quasi-p.o.-minimal theories of finite width lack the independence property, weakly p.o.-minimal directed groups are Abelian and divisible, weakly quasi-p.o.-minimal directed groups with unique roots are Abelian, and the direct product of a finite family of weakly p.o.-minimal models is a weakly p.o.-minimal model. We obtain results on existence of small extensions of models of weakly quasi-p.o.-minimal atomic theories. In particular, for such a theory of finite length, we find an upper estimate of the Hanf number for omitting a family of pure types. We also find an upper bound for the cardinalities of weakly quasi-p.o.-minimal absolutely homogeneous models of moderate width.

10.
《Optimization》2012,61(3):195-211
We consider generalized semi-infinite programming problems. Second order necessary and sufficient conditions for local optimality are given. The conditions are derived under assumptions such that the feasible set can be described by means of a finite number of optimal value functions. Since we do not require a strict complementarity condition for the local reduction, these functions are only of class C1. A sufficient condition for optimality is proven under much weaker assumptions.

11.
In multivariate categorical data, models based on conditional independence assumptions, such as latent class models, offer efficient estimation of complex dependencies. However, Bayesian versions of latent structure models for categorical data typically do not appropriately handle impossible combinations of variables, also known as structural zeros. Allowing nonzero probability for impossible combinations results in inaccurate estimates of joint and conditional probabilities, even for feasible combinations. We present an approach for estimating posterior distributions in Bayesian latent structure models with potentially many structural zeros. The basic idea is to treat the observed data as a truncated sample from an augmented dataset, thereby allowing us to exploit the conditional independence assumptions for computational expediency. As part of the approach, we develop an algorithm for collapsing a large set of structural zero combinations into a much smaller set of disjoint marginal conditions, which speeds up computation. We apply the approach to sample from a semiparametric version of the latent class model with structural zeros in the context of a key issue faced by national statistical agencies seeking to disseminate confidential data to the public: estimating the number of records in a sample that are unique in the population on a set of publicly available categorical variables. The latent class model offers remarkably accurate estimates of population uniqueness, even in the presence of a large number of structural zeros.

12.
For about thirty years, time series models with time-dependent coefficients have sometimes been considered as an alternative to models with constant coefficients or non-linear models. Analysis based on models with time-dependent coefficients has long suffered from the absence of an asymptotic theory except in very special cases. The purpose of this paper is to provide such a theory without using a locally stationary spectral representation and time rescaling. We consider autoregressive-moving average (ARMA) models with time-dependent coefficients and a heteroscedastic innovation process. The coefficients and the innovation variance are deterministic functions of time which depend on a finite number of parameters. These parameters are estimated by maximising the Gaussian likelihood function. Deriving conditions for consistency and asymptotic normality and obtaining the asymptotic covariance matrix are done using some assumptions on the functions of time in order to attenuate non-stationarity, mild assumptions on the distribution of the innovations, and also a kind of mixing condition. Theorems from the theory of martingales and mixingales are used. Some simulation results are given and both theoretical and practical examples are treated. Received 2004; Final version 23 December 2004

13.
Many processes can be represented in a simple form as infinite-order linear series. In such cases, an approximate model is often derived as a truncation of the infinite-order process, for estimation on the finite sample. The literature contains a number of asymptotic distributional results for least squares estimation of such finite truncations, but for quantile estimation, results are not available at a level of generality that accommodates time series models used as finite approximations to processes of potentially unbounded order. Here we establish consistency and asymptotic normality for conditional quantile estimation of truncations of such infinite-order linear models, with the truncation order increasing in sample size. We focus on estimation of the model at a given quantile. The proofs use the generalized functions approach and allow for a wide range of time series models as well as other forms of regression model. The results are illustrated with both analytical and simulation examples.

14.
We propose a model order reduction approach for balanced truncation of linear switched systems. Such systems switch among a finite number of linear subsystems or modes. We compute pairs of controllability and observability Gramians corresponding to each active discrete mode by solving systems of coupled Lyapunov equations. Depending on its type, each such Gramian corresponds to the energy associated with all possible switching scenarios that start, or respectively end, in a particular operational mode. In order to guarantee that hard-to-control and hard-to-observe states are simultaneously eliminated, we construct a transformed system whose Gramians are equal and diagonal; then, by truncation, we directly construct reduced order models. One can show that these models preserve some properties of the original model, such as stability, and that it is possible to obtain error bounds relating the observed output, the control input and the entries of the diagonal Gramians.
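For a single stable mode, the Gramian-balancing step can be sketched with the standard square-root algorithm; the switched-system setting replaces the two Lyapunov equations below with coupled systems, one pair per mode, and the matrices here are purely illustrative:

```python
import numpy as np
from scipy import linalg

# One stable LTI mode (A Hurwitz), single input and single output.
A = np.array([[-1.0, 0.5], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
P = linalg.solve_continuous_lyapunov(A, -B @ B.T)
Q = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: after the transform, both Gramians equal
# diag(sigma), the Hankel singular values; truncation then drops the
# states with the smallest sigma (simultaneously hard to control/observe).
Lc = linalg.cholesky(P, lower=True)
Lo = linalg.cholesky(Q, lower=True)
U, sigma, Vt = linalg.svd(Lo.T @ Lc)
T = Lc @ Vt.T @ np.diag(sigma ** -0.5)
Tinv = np.diag(sigma ** -0.5) @ U.T @ Lo.T

P_bal = Tinv @ P @ Tinv.T
Q_bal = T.T @ Q @ T
print("Hankel singular values:", np.round(sigma, 4))
```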

15.
An artificial neural network (ANN) is a nonlinear dynamic computational system suited to simulations that are hard to describe with physical models: rather than relying on a number of predetermined assumptions, data is used to form the model. In order to predict the mechanical properties of A356, including yield stress, ultimate tensile strength and elongation percentage, a relatively new approach is presented that uses an artificial neural network together with the finite element technique, combining mechanical-property data with experimental and simulated solidification conditions. The predictions of this study were observed to be consistent with experimental measurements for the A356 alloy. The results of this research were also used for the solidification codes of the SUT CAST software.

16.
A shock and wear system that can withstand a finite number of shocks and is subject to two types of repairs is considered. The failure of the system can be due to wear or to a fatal shock. Associated with these failures there are two repair types: normal and severe. Repairs are as good as new. The shocks arrive following a Markovian arrival process, and the lifetime of the system follows a continuous phase-type distribution. The repair times follow different continuous phase-type distributions, depending on the type of failure. Under these assumptions, two systems are studied, depending on whether the finite number of shocks that the system can stand before a fatal failure is random or fixed. In the first case, the number of shocks is governed by a discrete phase-type distribution. After a finite (random or fixed) number of non-fatal shocks the system is repaired (severe repair); the repair due to wear is a normal repair. For these systems, general Markov models are constructed and the following elements are studied: the stationary probability vector; the transient rate of occurrence of failures; and the renewal process associated with the repairs, including the distribution of the period between replacements and the number of non-fatal shocks in this period. Special cases of the model with a random number of shocks are presented, and an application illustrating the numerical calculations is given. The systems are studied in such a way that several particular cases can be deduced from the general ones straightaway. We apply matrix-analytic methods to study these models, showing their versatility.

17.
Combined heat and power (CHP) production is an important energy production technology which can help to improve the efficiency of energy production and to reduce the emission of CO2. Cost-efficient operation of a CHP system can be planned using an optimisation model based on hourly load forecasts. A long-term planning model decomposes into hourly models, which can be formulated as linear programming (LP) problems. Such problems can be solved efficiently using the specialized Power Simplex algorithm. However, Power Simplex can only manage one heat and one power balance; since heat cannot be transported over long distances, Power Simplex applies only to local CHP planning.

In this paper we formulate the hourly multi-site CHP planning problem with multiple heat balances as an LP model with a special structure. We then develop the Extended Power Simplex (EPS) algorithm for solving such models efficiently. Even though the problem can be quite large as the number of demand sites increases, EPS demonstrates very good efficiency. In test runs with realistic models, EPS is from 29 to 85 times faster than an efficient sparse Simplex code using the product form of inverse (PFI). Furthermore, the relative efficiency of EPS improves as the problem size grows.
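An hourly single-site instance with one heat and one power balance can be written directly as a small LP. The sketch below uses a generic solver rather than the Power Simplex or EPS algorithms, and all plant characteristics and demands are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: operating levels x1, x2 of two CHP plants and x3 of a
# power-only plant. Each unit of plant i yields (heat_i, power_i) at cost_i.
heat = np.array([2.0, 1.0, 0.0])
power = np.array([1.0, 1.5, 1.0])
cost = np.array([30.0, 25.0, 40.0])

heat_demand, power_demand = 100.0, 120.0  # one hour's forecast loads

# Minimise cost subject to exact heat and power balances.
res = linprog(
    c=cost,
    A_eq=np.vstack([heat, power]),
    b_eq=[heat_demand, power_demand],
    bounds=[(0, None)] * 3,
    method="highs",
)
print("levels:", np.round(res.x, 2), "cost:", round(res.fun, 2))
```

Each hour of a long-term plan is one such LP; the multi-site problem of the paper adds one heat-balance row per site, which is the special structure EPS exploits.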

18.
Spearman’s rank-correlation coefficient (also called Spearman’s rho) is one of the best-known measures for quantifying the degree of dependence between two random variables. As a copula-based dependence measure, it is invariant with respect to the univariate marginal distribution functions. In this paper, we consider statistical tests for the hypothesis that all pairwise Spearman’s rank correlation coefficients in a multivariate random vector are equal. The tests are nonparametric and their asymptotic distributions are derived based on the asymptotic behavior of the empirical copula process. Only weak assumptions on the distribution function, such as continuity of the marginal distributions and continuous partial differentiability of the copula, are required for obtaining the results. A nonparametric bootstrap method is suggested for either estimating unknown parameters of the test statistics or for determining the associated critical values. We present a simulation study in order to investigate the power of the proposed tests; the results are compared to a classical parametric test for equal pairwise Pearson’s correlation coefficients in a multivariate random vector. The general setting also allows the derivation of a test for stochastic independence based on Spearman’s rho.
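The ingredients of such a test can be sketched numerically: compute all pairwise Spearman rho's, measure their spread (zero under the equality hypothesis), and gauge its sampling variability by a nonparametric bootstrap. This is an illustrative sketch only, not the paper's empirical-copula-based statistic, and the exchangeable correlation matrix below is invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Trivariate sample whose true pairwise rank correlations are all equal.
n, d = 200, 3
cov = np.array([[1.0, 0.5, 0.5],
                [0.5, 1.0, 0.5],
                [0.5, 0.5, 1.0]])
x = rng.multivariate_normal(np.zeros(d), cov, size=n)

def pairwise_rhos(data):
    rho, _ = stats.spearmanr(data)  # d x d rank-correlation matrix
    return rho[np.triu_indices(data.shape[1], k=1)]

obs = pairwise_rhos(x)
spread_obs = obs.max() - obs.min()  # zero under exact pairwise equality

# Nonparametric bootstrap of the pairwise rho's to calibrate the spread.
boot = np.array([pairwise_rhos(x[rng.integers(0, n, size=n)])
                 for _ in range(300)])
print("pairwise rho's:", np.round(obs, 3))
print("observed spread:", round(spread_obs, 3),
      "bootstrap SEs:", np.round(boot.std(axis=0), 3))
```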

19.
This study utilizes the variance ratio test to examine the behavior of the Brazilian exchange rate. We show that adjustments for multiple tests and a bootstrap methodology must be employed in order to avoid size distortions. We propose a block bootstrap scheme and show that it has much better properties than the traditional Chow–Denning [Chow, K.V., Denning, K.C., 1993. A simple multiple variance ratio test. Journal of Econometrics 58 (3), 385–401] multiple variance ratio tests. Overall, the method proposed in the paper provides evidence refuting random walk behavior for the Brazilian exchange rate over long investment horizons, but is consistent with the random walk hypothesis over short horizons. Additionally, we test the predictive power of variable moving average (VMA) and trading range break (TRB) technical rules and find evidence of forecasting ability for these rules. Nonetheless, the excess return that can be obtained from such rules is not significant, suggesting that such predictability is not economically significant.
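A bare-bones version of the underlying statistic, the Lo-MacKinlay homoscedastic variance ratio, is shown below on simulated series; this is a simplified textbook form, not the paper's block-bootstrap, multiple-comparison procedure, and the two return processes are invented:

```python
import numpy as np
from scipy import stats

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q) with its homoscedastic z-test.
    Under a random walk, VR(q) should be close to 1."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period returns
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    vr = varq / var1
    z = (vr - 1.0) / np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n))
    return vr, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(5)
rw = rng.standard_normal(1000)   # random-walk increments (null holds)
ar = np.empty(1000)              # positively autocorrelated returns
ar[0] = rng.standard_normal()
for t in range(1, 1000):
    ar[t] = 0.4 * ar[t - 1] + rng.standard_normal()

vr_rw, p_rw = variance_ratio(rw, q=5)
vr_ar, p_ar = variance_ratio(ar, q=5)
print(f"random walk: VR(5) = {vr_rw:.3f} (p = {p_rw:.3f}); AR(1): VR(5) = {vr_ar:.3f}")
```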

20.
We define the conformity of marginal and conditional models with a joint model within Walley's theory of coherent lower previsions. Loosely speaking, conformity means that the joint can reproduce the marginal and conditional models we started from. By studying conformity with and without additional assumptions of epistemic irrelevance and independence, we establish connections with a number of prominent models in Walley's theory: the marginal extension, the irrelevant natural extension, the independent natural extension and the strong product.
