Similar Documents
20 similar documents found.
1.
A trace test for the mean parameters of the growth curve model is proposed. It is constructed using the restricted maximum likelihood followed by an estimated likelihood ratio approach. The statistic reduces to the Lawley-Hotelling trace test for Multivariate Analysis of Variance (MANOVA) models. Our test statistic is, therefore, a natural extension of the classical trace test to GMANOVA models. We show that the distribution of the test under the null hypothesis does not depend on the unknown covariance matrix Σ. We also show that the distributions under the null and alternative hypotheses can be represented as sums of weighted central and non-central chi-square random variables, respectively. Under the null hypothesis, the Satterthwaite approximation is used to obtain an approximate critical point. A novel Satterthwaite-type approximation is proposed to obtain an approximate power. A simulation study is performed to evaluate the performance of the proposed test, and numerical examples are provided as illustrations.
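The Satterthwaite step can be illustrated with a short, generic sketch: the null distribution is a weighted sum of central chi-squares, which is approximated by a single scaled chi-square matching the first two moments. This is only an illustration of the moment-matching idea; the weights and degrees of freedom below are hypothetical, not those derived in the paper.

```python
import numpy as np
from scipy.stats import chi2

def satterthwaite_critical_point(weights, dofs, alpha=0.05):
    """Approximate the upper-alpha quantile of Q = sum_i w_i * chi2(d_i)
    by a scaled chi-square g * chi2(h) matching the first two moments."""
    weights = np.asarray(weights, dtype=float)
    dofs = np.asarray(dofs, dtype=float)
    mean = np.sum(weights * dofs)            # E[Q]
    var = 2.0 * np.sum(weights**2 * dofs)    # Var[Q]
    g = var / (2.0 * mean)                   # scale factor
    h = 2.0 * mean**2 / var                  # effective degrees of freedom
    return g * chi2.ppf(1.0 - alpha, h)

# Example with three hypothetical weighted chi-square components
print(satterthwaite_critical_point([0.8, 0.5, 0.2], [2, 3, 1]))
```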

2.
In accelerated life tests (ALTs), test units are often tested in multiple test chambers under different stress conditions. The nonhomogeneity of test chambers precludes a completely randomized experiment and may affect the life-stress relationship of the test product. The chamber-to-chamber variation should therefore be taken into account in ALT planning so as to obtain more accurate test results. In this paper, planning ALTs under a nested experimental design structure with random test chamber effects is studied. First, using a two-phase approach, we illustrate to what extent different assignments of test chambers to stress conditions may impact the estimation of the unknown parameters. Then, D-optimal test plans with two test chambers are considered. To construct the optimal design, we establish a generalized linear mixed model for the failure-time data and apply a quasi-likelihood method, in which the test chamber assignments, as well as the other decision variables required for planning ALTs, are determined simultaneously.

3.
Modularization modeling and simulation of turbine test rig main test system
Comprehensive applications of the modularization modeling method have proven its effectiveness and versatility in the field of system simulation. This paper establishes a modularization numerical model of the main test system of a turbine test rig using a finite volume numerical system developed in this work. A simulation study based on an experiment is conducted. Comparison with the available experimental data indicates that the general trends of the simulation curves agree with the test curves and that there is an obvious thermal stratification phenomenon at different positions along the combustion gas flow direction. Accordingly, it can be concluded that the analysis of the experimental data is reasonable and that the established numerical system is effective. It is also found that the modeling of valve spool throttling and the modeling of component-wall heat transfer are two key factors affecting simulation accuracy.

4.
This paper suggests a modified serial correlation test for linear panel data models, which is based on the parameter estimates of an artificial autoregression modeled by differencing and centering residual vectors. Specifically, the differencing operator over the time index and the centering operator over the individual index are used, respectively, to eliminate the potential individual effects and time effects, so that the resulting serial correlation test is robust to both potential effects. The test is also robust to potential correlation between the covariates and the random effects. The test is asymptotically chi-squared distributed under the null hypothesis. A power study shows that the test can detect local alternatives distinct from the null hypothesis at the parametric rate. The finite sample properties of the test are investigated by means of Monte Carlo simulation experiments, and a real data example is analyzed for illustration.
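To make the differencing-and-centering idea concrete, here is a minimal sketch under simplifying assumptions (a balanced panel of residuals and a plain first-order autoregression as a summary); it is not the paper's actual test statistic or its asymptotic calibration.

```python
import numpy as np

def transform_residuals(e):
    """e: (N, T) array of panel residuals.
    Differencing over the time index removes individual effects;
    centering over the individual index removes time effects."""
    d = np.diff(e, axis=1)                       # e[i, t] - e[i, t-1]
    return d - d.mean(axis=0, keepdims=True)     # subtract cross-sectional means

def ar1_coefficient(c):
    """Illustrative first-order autoregression fitted to the transformed residuals."""
    y = c[:, 1:].ravel()
    x = c[:, :-1].ravel()
    return np.dot(x, y) / np.dot(x, x)

rng = np.random.default_rng(0)
e = rng.standard_normal((100, 8))                # toy balanced panel, N=100, T=8
print(ar1_coefficient(transform_residuals(e)))
```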

5.
The Poisson distribution is often a good approximation to the underlying sampling distribution and is central to the study of categorical data. In this paper, we propose a new unified approach to the investigation of the point properties of simultaneous estimators of Poisson population parameters under general quadratic loss functions. The main emphasis is on shrinkage estimation. We build a series of estimators that can be represented as convex combinations of linear statistics, such as the maximum likelihood estimator (benchmark estimator), the restricted estimator, the composite estimator, the preliminary test estimator, the shrinkage estimator, and the positive rule shrinkage estimator (James-Stein-type estimator). All these estimators are represented within a general integrated estimation approach, which allows us to unify the investigation and order them with respect to risk. A simulation study with numerical and graphical results is conducted to illustrate the properties of the investigated estimators.
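As a rough illustration of the kind of estimator involved (not the paper's construction, whose shrinkage constant and target follow from the chosen quadratic loss), the sketch below shrinks the individual Poisson MLEs toward their common mean with a positive-part rule; the constant `c = k - 3` is a generic placeholder.

```python
import numpy as np

def positive_part_shrinkage(x, c=None):
    """Illustrative positive-rule (James-Stein-type) shrinkage of Poisson MLEs
    x = (x_1, ..., x_k) toward their common mean. The shrinkage factor is
    clipped at zero, which is what makes the rule 'positive part'."""
    x = np.asarray(x, dtype=float)
    k = len(x)
    target = x.mean()                          # restricted estimator (common mean)
    if c is None:
        c = k - 3                              # generic placeholder constant
    # Distance of the MLE vector from the restricted target, scaled by the
    # (approximate) Poisson variances.
    dist = np.sum((x - target) ** 2 / np.maximum(x, 1.0))
    shrink = max(0.0, 1.0 - c / dist) if dist > 0 else 0.0
    return target + shrink * (x - target)

counts = np.array([3, 7, 5, 12, 6, 4])         # toy Poisson observations
print(positive_part_shrinkage(counts))
```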

6.
A conditional random field model can make the best use of limited site investigation data to properly characterize the spatial variation of soil properties. This paper proposes a simplified approach for generating conditional random fields of soil undrained shear strength. A numerical method is adopted to validate the effectiveness of the proposed approach. With the proposed approach, the analytical posterior statistics of the spatially varying undrained shear strength, conditioned on the known values at the measurement locations, can be obtained. The conditional random field model of undrained shear strength is constructed using field vane shear test data from a site along the West Side Highway in New York, and the probability of slope failure is estimated by subset simulation. A clay slope under undrained conditions is investigated as an example to illustrate the proposed approach. The effects of borehole location and borehole layout scheme on the slope reliability are addressed. The results indicate that the proposed approach not only incorporates the limited site investigation data into the modelling of the actual spatial variation of soil parameters through conditional random fields, but also captures the depth-dependent nature of soil properties. The realizations of conditional random fields generated by the proposed approach are well constrained to the site investigation data.
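A generic way to generate such conditioned realizations, shown only as a sketch of the underlying idea rather than the paper's simplified approach, is to use the standard conditional (kriging-type) mean and covariance of a Gaussian field given the measurements. The exponential autocorrelation model, the depths, and the vane-test values below are hypothetical.

```python
import numpy as np

def conditional_gaussian_field(z_grid, z_obs, s_obs, mean, std, corr_len,
                               n_real=5, seed=0):
    """Realizations of a 1-D Gaussian random field of undrained shear strength
    at depths z_grid, conditioned on measured values s_obs at depths z_obs.
    An exponential autocorrelation model is assumed here."""
    rho = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)
    C_gg = std**2 * rho(z_grid, z_grid)
    C_go = std**2 * rho(z_grid, z_obs)
    C_oo = std**2 * rho(z_obs, z_obs) + 1e-8 * np.eye(len(z_obs))
    K = C_go @ np.linalg.inv(C_oo)
    cond_mean = mean + K @ (s_obs - mean)                  # posterior mean
    cond_cov = C_gg - K @ C_go.T                           # posterior covariance
    cond_cov += 1e-8 * np.eye(len(z_grid))                 # numerical jitter
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(cond_mean, cond_cov, size=n_real)

z = np.linspace(0.0, 10.0, 41)                 # depths (m)
z_meas = np.array([2.0, 5.0, 8.0])             # vane test depths (hypothetical)
s_meas = np.array([25.0, 32.0, 40.0])          # measured s_u in kPa (hypothetical)
fields = conditional_gaussian_field(z, z_meas, s_meas,
                                    mean=30.0, std=6.0, corr_len=2.0)
```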

7.
We propose the maximin efficiency robust test (MERT) for multiple nuisance parameters, building on the theory of the maximin efficiency robust test for a single nuisance parameter, and investigate some theoretical properties of this robust test. We further explore, in a specified scenario, the theoretical properties of the power of the MERT for multiple nuisance parameters. We also present a meaningful example from statistical genetics to which the MERT for multiple nuisance parameters can be applied. Extensive simulation studies are conducted to verify the robustness of the MERT for multiple nuisance parameters.

8.
Empirical Bayes test for scale exponential family
In this paper, we consider the empirical Bayes (EB) test problem for the scale parameters in the scale exponential family with a weighted linear loss function. The EB test rules are constructed by the kernel estimation method. The asymptotic optimality and convergence rates of the EB test rules are obtained. The main results are illustrated by applying the proposed test to type II censored data from the exponential distribution and to the test problem for the dispersion parameter in the linear regression model. Translated from Journal of University of Science and Technology of China, 2004, 34(1): 1–10.

9.
We propose an alternative approach to classical nonparametric test problems, such as the goodness-of-fit test and the two-sample nonparametric test. In this approach, those problems are reviewed from the viewpoint of the estimation of the underlying population distributions and are formulated as a problem of model selection between Bayesian models which were recently proposed by the present authors. The model selection can be easily realized by choosing the model with the smallest ABIC (Akaike Bayesian Information Criterion). The approach provides estimates of the density of the underlying population distribution(s) of any shape, as well as an evaluation of the goodness of fit or a check of the homogeneity of distributions. The practical utility of the present procedure is demonstrated by numerical examples. The difference in behavior between the present procedure and GALTHY, a density estimator proposed by Akaike and Arahata, is also briefly discussed. This paper was originally read at the Conference on Graphical Models to Analyze Structures (Organizer: N. Wermuth, Johannes Gutenberg University), June 30 – July 2, 1986, Wiesbaden, West Germany.

10.
For a number of applications, testing the structural integrity of a cavity is of importance. A particular application we have in mind is the monitoring of the structural integrity of the fusion reactor ITER by electromagnetic waves, but the methods developed in this work can be applied to a collection of rather general settings.

We use the solution of the Cauchy problem by potential methods and the range test to test the integrity of the boundary of a cavity using acoustic waves. The main idea of this approach is to test whether the scattered field can be analytically extended into the interior of some test domains and to calculate this extension. If the extension is possible, then we might reconstruct the field either by the inversion of Green's formula, by a Green's approach incorporating the Dirichlet-to-Neumann map, or by a single-layer approach. If this is not the case, then the integral equations which arise from these approaches do not have solutions, and we prove that in principle we can test this by observing the norm of the reconstruction density. As an alternative new approach to the range test, we show that the approximation error can also be used as a discriminating criterion. We show numerical results for the above cases, which provide a proof of concept demonstrating the practicability of the method. For our application, the approximation error has turned out to be a more precise indicator of a singularity than the norm of the approximation density.


11.
Most traditional control charts used to monitor production and service processes assume that the process distribution is known. These charts are often constructed under the assumption of a normal distribution, yet in real-time monitoring of service quality the data are frequently non-normal. In such cases, the results of control charts built on the normality assumption are unreliable. To address this problem, nonparametric methods are usually considered, because when the process distribution is unknown, nonparametric control charts are more robust and effective than parametric ones. This paper proposes a new Lepage-type nonparametric Shewhart control chart based on the Van der Waerden and Klotz tests (called the LPN chart) for simultaneously monitoring the location and scale parameters of an unknown continuous process distribution. Control limits of the LPN chart are provided for different parameter settings. The in-control and out-of-control performance of the LPN chart is analyzed in terms of the mean, variance, and quantiles of the run-length distribution, and is compared with several existing nonparametric control charts. Monte Carlo simulation results show that the LPN chart is highly robust to non-normal distributions and performs well in detecting shifts in the location and scale parameters, especially shifts in the scale parameter, under various process distributions. Finally, the practical application of the LPN chart is illustrated by monitoring taxi service quality.
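The core of such a Lepage-type statistic can be sketched as follows: Van der Waerden (location) and Klotz (scale) rank statistics are computed for a new subgroup against reference data, standardized, squared, and summed. This is a generic illustration only; the paper calibrates control limits from run-length properties rather than the chi-square reference noted in the comment below.

```python
import numpy as np
from scipy.stats import norm, rankdata

def lepage_vw_klotz(reference, test):
    """Lepage-type statistic combining a Van der Waerden (location) and a
    Klotz (scale) linear rank statistic, each standardized and squared."""
    x = np.concatenate([reference, test])
    m, n, N = len(reference), len(test), len(x)
    r = rankdata(x)
    vw = norm.ppf(r / (N + 1))               # Van der Waerden scores
    kl = vw ** 2                             # Klotz scores
    stat = 0.0
    for scores in (vw, kl):
        t = scores[m:].sum()                 # score sum over the test subgroup
        mu = n * scores.mean()
        var = m * n / (N * (N - 1)) * np.sum((scores - scores.mean()) ** 2)
        stat += (t - mu) ** 2 / var
    return stat                              # roughly chi-square(2) under H0

rng = np.random.default_rng(1)
ref = rng.normal(0, 1, 100)                  # reference (in-control) data
new = rng.normal(0.5, 1.5, 20)               # subgroup with location/scale shift
print(lepage_vw_klotz(ref, new))
```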

12.
This paper studies a ratio test for a change point in the parameters of a GARCH model. First, a ratio statistic based on the cumulative sum of squared residuals is constructed, and the limiting distribution of the statistic under the null hypothesis is derived. The effectiveness of the test is then examined by the Monte Carlo method, and finally a data example further illustrates the practical usefulness of the method.
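One common way to build a ratio-type change-point statistic from the cumulative sums of squared residuals is sketched below; the paper's exact statistic, trimming, and limiting distribution may differ, so this is only an illustration of the general construction.

```python
import numpy as np

def ratio_cusum_statistic(e2, trim=0.1):
    """Ratio-type change-point statistic built from squared residuals e2.
    For each candidate break k it compares the maximal centered partial sum
    before k with the one after k, and takes the maximum ratio over k."""
    n = len(e2)
    lo, hi = int(trim * n), int((1 - trim) * n)
    best = 0.0
    for k in range(lo, hi):
        head, tail = e2[:k + 1], e2[k + 1:]
        num = np.max(np.abs(np.cumsum(head - head.mean())))
        den = np.max(np.abs(np.cumsum(tail - tail.mean())))
        if den > 0:
            best = max(best, num / den)
    return best

rng = np.random.default_rng(2)
resid = np.r_[rng.normal(0, 1, 300), rng.normal(0, 2, 300)]  # variance break at t=300
print(ratio_cusum_statistic(resid ** 2))
```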

13.
This paper deals with the solution of the wave parameter identification problem for ocean test structure data. A discrete formulation is assumed. An ocean test structure is considered, and wave elevation and velocities are assumed to be measured with a number of sensors. Within the frame of linear wave theory, a Fourier series model is chosen for the wave elevation and velocities. Then, the following problem is posed: find the amplitudes of the various wave components of specified frequency and direction, so that the assumed model of wave elevation and velocities provides the best fit to the measured data. Here, the term best fit is employed in the least-square sense over a given time interval.

At each time instant, the wave representation involves four indices (frequency, direction, instrument, time); hence, four-dimensional arrays are required. This formal difficulty can be avoided by switching to an alternative representation involving only two indices (frequency-direction, instrument-time); hence, standard vector-matrix notation can be used. Within this frame, optimality conditions are derived for the amplitudes of the assumed wave model.

A characteristic of the wave parameter identification problem is that the condition number of the system matrix can be large. Therefore, the numerical solution is not an easy task and special procedures must be employed. Specifically, Gaussian elimination is avoided and advantageous use is made of the Householder transformation, in light of the least-square nature of the problem and the discretized approach to the problem.

Numerical results are presented. The effect of various system parameters (number of frequencies, number of directions, sampling time, number of sensors, and location of sensors) is investigated in connection with global or strong accuracy, local or weak accuracy, integral accuracy, and the condition number of the system matrix.

From the numerical experiments, it appears that the wave parameter identification problem has a unique solution if the number of directions is smaller than or equal to the number of sensors; it has an infinite number of solutions otherwise. In the case where a unique solution exists, the condition number of the system matrix increases as the size of the system increases, and this has a detrimental effect on the accuracy. However, the accuracy can be improved by proper selection of the sampling time and by proper choice of the number and location of the sensors.

Generally speaking, the computations done for the discrete case exhibit better accuracy than the computations done for the continuous case (Ref. 5). This improved accuracy is a direct consequence of having used the Householder transformation advantageously and is obtained at the expense of increased memory requirements and increased CPU time.

This work was supported by Exxon Production Research Company, Houston, Texas. This paper is based partly on Refs. 1–4.
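The numerical point about avoiding Gaussian elimination can be illustrated with a small sketch: the least-squares amplitudes are obtained from a QR factorization (computed with Householder reflections in LAPACK) rather than from the normal equations, and the condition number of the system matrix is monitored. The two-frequency model and its coefficients are hypothetical, not taken from the paper.

```python
import numpy as np

def householder_least_squares(A, b):
    """Solve min ||A x - b||_2 via a QR factorization (LAPACK builds it from
    Householder reflections), avoiding the normal equations, which would
    square the condition number."""
    Q, R = np.linalg.qr(A, mode="reduced")
    x = np.linalg.solve(R, Q.T @ b)          # R is upper triangular
    return x, np.linalg.cond(A)

# Toy fit of a two-component Fourier-type model (hypothetical frequencies).
t = np.linspace(0.0, 10.0, 200)
A = np.column_stack([np.cos(0.8 * t), np.sin(0.8 * t),
                     np.cos(1.3 * t), np.sin(1.3 * t)])
rng = np.random.default_rng(3)
b = A @ np.array([1.0, -0.5, 0.3, 0.8]) + 0.05 * rng.standard_normal(len(t))
x_hat, kappa = householder_least_squares(A, b)
print(x_hat, kappa)
```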

14.
A brand-switching model is developed based on a hierarchical elimination decision process. The model incorporates consumers’ first-order variety seeking (avoidance) behavior. Some interesting properties of the model are derived, and using these properties it is shown that the model can be used to infer market structure from brand switching data. Finally, the proposed approach is compared with other approaches for inferring market structure suggested in the extant literature.

15.
Probability bounds can be derived for distributions whose covariance matrices are ordered with respect to the Löwner partial ordering, a relation based on whether the difference between two matrices is positive definite. One example is Anderson’s Theorem. This paper develops a probability bound, following from Anderson’s Theorem, that is useful in the assessment of multivariate process capability. A statistical hypothesis test is also derived that allows one to test the null hypothesis that a given process is capable versus the alternative hypothesis that it is not capable, on the basis of a sample of observed quality characteristic vectors from the process. It is argued that the proposed methodology is viable outside the multivariate normal model, where the p-value for the test can be computed using the bootstrap. The methods are demonstrated using example data, and the performance of the bootstrap approach is studied empirically using computer simulations.
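The bootstrap idea can be sketched generically: resample the observed quality-characteristic vectors and recompute a capability summary to judge whether the process meets a required level. The capability measure below (proportion of vectors inside a rectangular specification region) is a placeholder for illustration, not the Anderson-type bound or the test derived in the paper.

```python
import numpy as np

def bootstrap_capability_bound(X, lower, upper, alpha=0.05, B=2000, seed=4):
    """Percentile-bootstrap lower confidence bound for the proportion of
    quality-characteristic vectors falling inside a rectangular spec region."""
    rng = np.random.default_rng(seed)
    inside = np.all((X >= np.asarray(lower)) & (X <= np.asarray(upper)), axis=1)
    n = len(X)
    boots = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)          # resample observations with replacement
        boots[b] = inside[idx].mean()
    return inside.mean(), np.quantile(boots, alpha)

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
p_hat, p_lower = bootstrap_capability_bound(X, lower=[-3, -3], upper=[3, 3])
print(p_hat, p_lower)    # declare 'capable' only if p_lower exceeds the required level
```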

16.
A test of conditional heteroscedasticity in time series
A new test of conditional heteroscedasticity for time series is proposed. The new testing method is based on a goodness-of-fit-type test statistic and a Cramér-von Mises-type test statistic. The asymptotic properties of the new test statistic are established. The results demonstrate that the test is consistent. Project supported by the National Natural Science Foundation of China (Grant No. 19231050) and the Postdoctoral Fund of China.

17.
In developing decision-making models for the evaluation of medical procedures, the model parameters can be estimated by fitting the model to data observed in (randomized) trials. For complex models that are implemented by discrete event simulation (microsimulation) of individual life histories, the Score Function (SF) method can potentially be an appropriate approach for such estimation exercises. We test this approach on a microsimulation model for breast cancer screening that is fitted to data from the HIP randomized trial for early detection of breast cancer. Comparison of the parameter values estimated using the SF method with the analytical solution shows that the method performs well on this simple model. The precision of the estimated parameter values depends (as expected) on the size of the sample of simulated life histories and on the number of parameters estimated. Using analytical representations for parts of the microsimulation model can increase the precision of the estimated parameter values. Compared with the Nelder-Mead simplex method, which is often used in stochastic simulation because of its ease of implementation, the SF method is clearly more efficient (in terms of the ratio of computing time to precision of the estimates). The additional analytical investment needed to implement the SF method in an (existing) simulation model may well be worth the effort.
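The Score Function method rests on the likelihood-ratio identity ∇θ E[f(X)] = E[f(X) ∇θ log p(X; θ)], which lets gradients of simulated expectations be estimated from the same simulated life histories. The sketch below illustrates the identity for a simple exponential lifetime model with a hypothetical 5-year event-free outcome; it is not the breast cancer microsimulation itself.

```python
import numpy as np

def score_function_gradient(theta, n_sim=200_000, seed=6):
    """Score-function (likelihood-ratio) estimate of d/dtheta E[f(X)] for
    X ~ Exponential(rate=theta) and f(X) = 1{X > 5} ('event-free at 5 years').
    The score of the exponential density is d/dtheta log p(x; theta) = 1/theta - x."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(scale=1.0 / theta, size=n_sim)
    f = (x > 5.0).astype(float)
    score = 1.0 / theta - x
    grad_est = np.mean(f * score)             # simulation-based gradient estimate
    exact = -5.0 * np.exp(-5.0 * theta)       # d/dtheta of P(X > 5) = exp(-5 theta)
    return grad_est, exact

print(score_function_gradient(0.2))
```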

18.
Random forests are a commonly used tool for classification and for ranking candidate predictors based on the so-called variable importance measures. These measures attribute scores to the variables reflecting their importance. A drawback of variable importance measures is that there is no natural cutoff that can be used to discriminate between important and non-important variables. Several approaches, for example approaches based on hypothesis testing, were developed for addressing this problem. The existing testing approaches require the repeated computation of random forests. While for low-dimensional settings those approaches might be computationally tractable, for high-dimensional settings typically including thousands of candidate predictors, computing time is enormous. In this article a computationally fast heuristic variable importance test is proposed that is appropriate for high-dimensional data where many variables do not carry any information. The testing approach is based on a modified version of the permutation variable importance, which is inspired by cross-validation procedures. The new approach is tested and compared to the approach of Altmann and colleagues using simulation studies, which are based on real data from high-dimensional binary classification settings. The new approach controls the type I error and has at least comparable power at a substantially smaller computation time in the studies. Thus, it might be used as a computationally fast alternative to existing procedures for high-dimensional data settings where many variables do not carry any information. The new approach is implemented in the R package vita.
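A rough sketch of the heuristic idea behind such a test is given below using scikit-learn: an empirical null distribution is formed from the many uninformative variables, whose permutation importances scatter around zero, by mirroring their non-positive scores. The vita package's cross-validated variant differs in detail, and the simulated data and 0.05 threshold here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: a few informative predictors followed by pure-noise predictors.
X, y = make_classification(n_samples=400, n_features=50, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=10,
                             random_state=0).importances_mean

# Heuristic null: mirror the non-positive importances around zero to
# approximate the distribution of importances for uninformative variables.
null = np.concatenate([imp[imp <= 0], -imp[imp <= 0]])
p_values = np.array([(null >= v).mean() for v in imp])
print(np.where(p_values < 0.05)[0])      # indices of candidate important variables
```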

19.
This paper proposes fuzzy symbolic modeling as a framework for intelligent data analysis and model interpretation in classification and regression problems. The fuzzy symbolic modeling approach is based on the eigenstructure analysis of the data similarity matrix to define the number of fuzzy rules in the model. Each fuzzy rule is associated with a symbol and is defined by a Gaussian membership function. The prototypes for the rules are computed by a clustering algorithm, and the model output parameters are computed as the solutions of a bounded quadratic optimization problem. In classification problems, the rules’ parameters are interpreted as the rules’ confidence. In regression problems, the rules’ parameters are used to derive rules’ confidences for classes that represent ranges of output variable values. The resulting model is evaluated based on a set of benchmark datasets for classification and regression problems. Nonparametric statistical tests were performed on the benchmark results, showing that the proposed approach produces compact fuzzy models with accuracy comparable to models produced by the standard modeling approaches. The resulting model is also exploited from the interpretability point of view, showing how the rule weights provide additional information to help in data and model understanding, such that it can be used as a decision support tool for the prediction of new data.

20.
Pyroshocks are transient motions of structural elements due to explosive loading induced by the detonation of ordnance devices incorporated into or attached to the structure. In space programs, the simulation of pyroshocks is a fixed part of the test requirements for instruments and equipment of space vehicles. The excitation of a pyroshock therefore has to be reproduced with various test devices such as hammer pendulums, which so far has relied on rather empirical knowledge. The current work focuses on better predictability of pyroshocks in order to reduce the duration of test periods. As an approach, numerical and analytical calculations are used to simulate in-plane wave propagation in rods and rectangular disks due to mechanical impacts. Furthermore, the results obtained from different mathematical and mechanical theories are compared with data obtained from the conducted experiments.
