Similar Documents
1.
This paper develops a Bayesian approach to analyzing quantile regression models for censored dynamic panel data. We employ a likelihood-based approach using the asymmetric Laplace error distribution and introduce lagged observed responses into the conditional quantile function. We also deal with the initial conditions problem in dynamic panel data models by introducing correlated random effects into the model. For posterior inference, we propose a Gibbs sampling algorithm based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the mixture representation provides fully tractable conditional posterior densities and considerably simplifies existing estimation procedures for quantile regression models. In addition, we explain how the proposed Gibbs sampler can be utilized for the calculation of the marginal likelihood and for modal estimation. Our approach is illustrated with real data on medical expenditures.
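A minimal sketch of the normal-exponential location-scale mixture that underlies such Gibbs samplers, assuming the standard parameterization (e.g., Kozumi and Kobayashi); the final check confirms the quantile property that a fraction p of the mass lies below the location:

```python
import numpy as np

def sample_ald(mu, sigma, p, size, rng):
    """Draw from ALD(mu, sigma, p) via its location-scale mixture:
    X = mu + sigma * (theta*Z + tau*sqrt(Z)*U), Z ~ Exp(1), U ~ N(0,1)."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    z = rng.exponential(1.0, size)        # latent mixing variable
    u = rng.standard_normal(size)
    return mu + sigma * (theta * z + tau * np.sqrt(z) * u)

rng = np.random.default_rng(0)
p = 0.25
x = sample_ald(0.0, 1.0, p, 200_000, rng)
print(np.mean(x <= 0.0))                  # ~0.25: the quantile interpretation
```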

2.
Revisiting a Statistical Control Problem
This paper takes a further look at the problem of statistical control based on regression models. We find that the existing method leaves room for improvement and propose a new method for computing the control threshold. Simulation and case studies show that our method applies to general error distributions and outperforms the existing method when the errors are non-normal.
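A rough distribution-free illustration of the threshold idea (toy model and numbers; not the paper's exact procedure): fit a regression, take an empirical residual quantile instead of a normal one, and solve for the largest controllable-variable setting that keeps the response below an upper specification limit:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 2.0 + 0.8 * x + rng.standard_t(df=3, size=500)   # heavy-tailed errors

b, a = np.polyfit(x, y, 1)                 # slope, intercept of y ~ a + b*x
q = np.quantile(y - (a + b * x), 0.95)     # empirical 95% residual quantile

usl = 12.0                                 # upper specification limit
x_star = (usl - q - a) / b                 # control threshold (assumes b > 0)
print(x_star)
```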

3.
In this article, we propose an unbiased estimating equation approach for a two-component mixture model with correlated response data. We adapt the mixture-of-experts model and a generalized linear model for component distribution and mixing proportion, respectively. The new approach only requires marginal distributions of both component densities and latent variables. We use serial correlations from subjects’ subgroup memberships, which improves estimation efficiency and classification accuracy, and show that estimation consistency does not depend on the choice of the working correlation matrix. The proposed estimating equation is solved by an expectation-estimating-equation (EEE) algorithm. In the E-step of the EEE algorithm, we propose a joint imputation based on the conditional linear property for the multivariate Bernoulli distribution. In addition, we establish asymptotic properties for the proposed estimators and the convergence property using the EEE algorithm. Our method is compared to an existing competitive mixture model approach in both simulation studies and an election data application. Supplementary materials for this article are available online.
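A toy E-step for a two-component mixture of experts (logistic gate, Gaussian experts), kept to the marginal ingredients the approach requires; the paper's joint imputation for correlated memberships is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def responsibilities(y, x, gamma, mu, sigma):
    """Posterior membership probabilities for component 1, with the
    mixing proportion modeled by a logistic gate in the covariate x."""
    pi = 1.0 / (1.0 + np.exp(-(gamma[0] + gamma[1] * x)))
    d1 = pi * norm.pdf(y, mu[0], sigma[0])
    d2 = (1.0 - pi) * norm.pdf(y, mu[1], sigma[1])
    return d1 / (d1 + d2)

rng = np.random.default_rng(2)
x = rng.standard_normal(10)
y = rng.standard_normal(10) + 2.0 * (x > 0)
print(responsibilities(y, x, (0.0, 1.5), (2.0, 0.0), (1.0, 1.0)))
```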

4.
We propose a heteroscedastic replicated measurement error model based on the class of scale mixtures of skew-normal distributions, which allows the variances of the measurement errors to vary across subjects. We develop EM algorithms to compute maximum likelihood estimates for the model with and without equation error. An empirical Bayes approach is applied to estimate the true covariate and predict the response. Simulation studies show that the proposed models provide reliable results and that the inference is not unduly affected by outliers or distribution misspecification. The method is also used to analyze real data on plant root decomposition.
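A minimal simulation of the replicated, heteroscedastic design (hypothetical numbers): each subject's true covariate is observed through replicates with a subject-specific error variance, and the replicate mean is the simplest predictor that an empirical Bayes estimator would improve upon:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 4                                    # subjects, replicates each
xi = rng.normal(10.0, 2.0, n)                    # true (latent) covariates
sd = rng.uniform(0.5, 3.0, n)                    # subject-specific error SDs
W = xi[:, None] + rng.standard_normal((n, r)) * sd[:, None]

xi_hat = W.mean(axis=1)                          # naive replicate-mean predictor
print(np.mean((xi_hat - xi) ** 2))               # its mean squared error
```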

5.
This paper aims to develop a new robust U-type test for high dimensional regression coefficients using the estimated U-statistic of order two and refitted cross-validation error variance estimation. It is proved that the limiting null distribution of the proposed new test is normal under two kinds of ordinary models. We further study the local power of the proposed test and compare it with other competitive tests for high dimensional data. The idea of the refitted cross-validation approach is utilized to reduce the bias of the sample variance in the estimation of the test statistic. Our theoretical results indicate that the proposed test can achieve an even more substantial power gain than the test of Zhong and Chen (2011) when testing a hypothesis with outlying observations and heavy-tailed distributions. We assess the finite-sample performance of the proposed test by examining its size and power via Monte Carlo studies, and we illustrate its application through an empirical analysis of a real data example.
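A sketch of the refitted cross-validation variance estimator such a test plugs in, with lasso as the (interchangeable) screening step; assumptions here are a sparse linear model and an sklearn-style workflow:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def rcv_sigma2(X, y, rng):
    """Refitted cross-validation: select variables on one half of the data,
    refit OLS and estimate sigma^2 on the other half, then swap and average."""
    idx = rng.permutation(len(y))
    halves = (idx[: len(y) // 2], idx[len(y) // 2:])
    est = []
    for a, b in (halves, halves[::-1]):
        sel = np.flatnonzero(LassoCV(cv=5).fit(X[a], y[a]).coef_)
        Xb = np.column_stack([np.ones(len(b)), X[b][:, sel]])
        beta, *_ = np.linalg.lstsq(Xb, y[b], rcond=None)
        resid = y[b] - Xb @ beta
        est.append(resid @ resid / (len(b) - len(sel) - 1))
    return float(np.mean(est))

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 500))
y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(200)
print(rcv_sigma2(X, y, rng))        # close to the true error variance 1.0
```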

6.
This work presents a Bayesian semiparametric approach to regression models in which the covariate is measured with error. Given that (1) the error normality assumption is very restrictive and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). The main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models through Dirichlet process priors, in both dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) methods to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. An analysis of a real data set illustrates the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically via the posterior densities of the coefficients.
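For orientation, a truncated stick-breaking draw from a Dirichlet process (a generic construction, not this paper's specific dependent/independent prior setup):

```python
import numpy as np

def dp_draw(alpha, base_sampler, n_atoms, rng):
    """Truncated stick-breaking draw from DP(alpha, G0): weights come from
    Beta(1, alpha) stick lengths, atoms are i.i.d. from the base measure."""
    v = rng.beta(1.0, alpha, n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w / w.sum(), base_sampler(n_atoms)

rng = np.random.default_rng(5)
w, atoms = dp_draw(2.0, lambda k: rng.standard_normal(k), 100, rng)
print(w[:5], atoms[:5])    # a discrete random probability measure
```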

7.
We describe a Bayesian model for simultaneous linear quantile regression at several specified quantile levels. More specifically, we propose to model the conditional distributions by using random probability measures, known as quantile pyramids, introduced by Hjort and Walker. Unlike many existing approaches, this framework allows us to specify meaningful priors on the conditional distributions, while retaining the flexibility afforded by the nonparametric error distribution formulation. Simulation studies demonstrate the flexibility of the proposed approach in estimating diverse scenarios, generally outperforming other competitive methods. We also provide conditions for posterior consistency. The method is particularly promising for modeling the extremal quantiles. Applications to extreme value analysis and in higher dimensions are also explored through data examples. Supplemental material for this article is available online.
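As a point of comparison, separate frequentist fits at the same levels (statsmodels QuantReg) are the baseline that simultaneous Bayesian models improve on; independent fits can, for instance, produce crossing quantile curves:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.uniform(0, 5, 300)
y = 1.0 + 0.5 * x + (0.3 + 0.2 * x) * rng.standard_normal(300)  # heteroscedastic
X = sm.add_constant(x)
for q in (0.05, 0.5, 0.95):
    print(q, sm.QuantReg(y, X).fit(q=q).params)  # one independent fit per level
```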

8.
Recently some methods have been proposed to find the distance and weight distribution of cyclic codes using Gröbner bases. We identify a class of codes for which these methods can be generalized. We show that this class contains all interesting linear codes and we provide variants and improvements. This approach sometimes reveals an unexpected algebraic structure in the code. We also investigate the decoding for a subclass, proving the existence of general error locator polynomials.
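For scale, the brute-force baseline that Gröbner-basis methods aim to improve on: enumerating all codewords of a small cyclic code. A sketch for the [7,4] binary Hamming code with generator polynomial g(x) = 1 + x + x^3:

```python
import itertools
import numpy as np

g = np.array([1, 1, 0, 1])                 # coefficients of g(x) = 1 + x + x^3
weights = {}
for msg in itertools.product([0, 1], repeat=4):
    cw = np.convolve(msg, g) % 2           # polynomial product mod 2 -> codeword
    w = int(cw.sum())
    weights[w] = weights.get(w, 0) + 1
print(sorted(weights.items()))             # [(0, 1), (3, 7), (4, 7), (7, 1)]
```

The minimum distance (3) is read off as the smallest nonzero weight; the Gröbner-basis approach replaces this exponential enumeration with algebraic computation.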

9.
The main purpose of this study is to propose a new technology scoring model that reflects the total-perception scoring phenomenon, which arises often in evaluation settings. The base model is a logistic regression for predicting a firm's non-default. The point estimator used to predict the probability of non-default under this model does not account for the risk involved in the estimation error. We propose to update the point estimator within its confidence interval using the evaluator's perception. The proposed approach takes into account not only the risk involved in the estimation error of the point estimator but also the total-perception scoring phenomenon. Empirical evidence of the better prediction ability of the proposed model is displayed in terms of the area under the ROC curve. Additionally, we show that the proposed model has an advantage when applied to smaller data sets. It is expected that the proposed approach can be applied to various technology-related decisions such as R&D investment, alliance, transfer, and loan.
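A sketch of the interval-updating idea on toy data; the linear "perception" weighting below is a simple stand-in for illustration, not the paper's update rule:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.standard_normal((400, 2))
X = sm.add_constant(x)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0] - 0.8 * x[:, 1])))
y = (rng.random(400) < p_true).astype(int)         # 1 = non-default

fit = sm.Logit(y, X).fit(disp=0)
x0 = np.array([1.0, 0.3, -0.2])                    # a firm's feature vector
eta = x0 @ fit.params
se = float(np.sqrt(x0 @ fit.cov_params() @ x0))    # Wald SE on the logit scale
lo, hi = (1.0 / (1.0 + np.exp(-(eta + s * se))) for s in (-1.96, 1.96))
perception = 0.7                                   # evaluator's score in [0, 1]
print(lo + perception * (hi - lo))                 # updated probability in the CI
```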

10.
The relationship between viral load and CD4 cell count is one of the interesting questions in AIDS research, and statistical models are powerful tools for clarifying it. The partially linear mixed-effects (PLME) model, which accounts for an unknown function of the time effect, is one of the important models for this purpose, and the mixed-effects modeling approach suits longitudinal data analysis. However, the complex process of data collection in clinical trials makes it impossible to rely on any single model to address all the issues. Asymmetric distributions, measurement error, and left censoring are features that commonly arise in longitudinal studies, and it is crucial to account for them in the modeling process to achieve reliable estimation and valid conclusions. In this article, we establish a joint model that accounts for all of these features within the framework of PLME models, and we propose a Bayesian inferential procedure to estimate its parameters. A real data example is analyzed to demonstrate the proposed modeling approach, with results reported across various scenario-based models.
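Of the features listed, left censoring is the easiest to isolate: below is a Tobit-style toy likelihood in which responses under a detection limit contribute P(Y < limit), the same ingredient a joint model uses for viral loads below the assay limit (toy data, not the full PLME model):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(8)
y_true = rng.normal(3.0, 1.0, 400)        # e.g. log10 viral load
limit = 2.5                               # assay detection limit
cens = y_true < limit
y = np.where(cens, limit, y_true)         # observed, left-censored data

def nll(par):
    mu, log_s = par
    s = np.exp(log_s)
    return -(norm.logpdf(y[~cens], mu, s).sum()
             + cens.sum() * norm.logcdf((limit - mu) / s))

print(minimize(nll, x0=[0.0, 0.0]).x)     # MLE of (mu, log sigma)
```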

11.
Finite mixture models are used to fit data from heterogeneous populations in many applications. The Expectation-Maximization (EM) algorithm is the most popular method for estimating the parameters of a finite mixture model, and a Bayesian approach is another. However, the EM algorithm often converges to a local maximum and is sensitive to the choice of starting points, while in the Bayesian approach the Markov chain Monte Carlo (MCMC) sampler sometimes gets trapped in a local mode and has difficulty moving to another. In this paper we therefore propose a new method that helps the EM algorithm reach the global maximum region, and a more effective Bayesian approach whose MCMC chain moves from one mode to another more easily. Our approach combines simulated annealing (SA) with adaptive rejection Metropolis sampling (ARMS). Although SA is a well-known approach for detecting distinct modes, its limitation is the difficulty of choosing sequences of proper proposal distributions for a target distribution. Since ARMS uses a piecewise linear envelope function as a proposal distribution, we incorporate ARMS into SA so that we can start from a more suitable proposal distribution and detect separate modes; as a result, we can find the global maximum region and estimate the parameters there. We refer to this approach as ARMS annealing. Combining ARMS annealing with the EM algorithm and with the Bayesian approach yields two procedures: an EM-ARMS annealing algorithm and a Bayesian-ARMS annealing approach. Simulations show that the two are comparable to each other and that both outperform the EM algorithm alone and the Bayesian approach alone, detecting the global maximum region well and estimating the parameters in that region. We demonstrate the advantage of our approaches using a mixture of two Poisson regression models, applied to survey data on the number of charitable donations.
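A crude stand-in for the annealing idea on a Poisson-mixture example, as a sketch: EM with many random restarts, keeping the best log-likelihood (ARMS annealing replaces the blind restarts with guided mode-hopping):

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mix(y, lam, pi, iters=200):
    """EM for a two-component Poisson mixture; returns (rates, weight, loglik)."""
    for _ in range(iters):
        d = np.stack([pi * poisson.pmf(y, lam[0]),
                      (1 - pi) * poisson.pmf(y, lam[1])])
        r = d[0] / d.sum(axis=0)                    # E-step responsibilities
        pi = r.mean()                               # M-step updates
        lam = np.array([(r * y).sum() / r.sum(),
                        ((1 - r) * y).sum() / (1 - r).sum()])
    return lam, pi, float(np.log(d.sum(axis=0)).sum())

rng = np.random.default_rng(9)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])
best = max((em_poisson_mix(y, rng.uniform(1, 15, 2), rng.uniform(0.2, 0.8))
            for _ in range(20)), key=lambda t: t[2])
print(best)   # rates near (2, 9), mixing weight near 0.6
```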

12.
Various random effects models have been developed for clustered binary data; however, traditional approaches generally rely on the specification of a continuous random effect distribution such as the Gaussian or beta distribution. In this article, we introduce a new model that incorporates nonparametric unobserved random effects on the unit interval (0,1) into logistic regression multiplicatively with fixed effects. This multiplicative setup facilitates prediction of the nonparametric random effects and the corresponding model interpretation. A distinctive feature of our approach is a closed-form expression for the predictor of the nonparametric random effects in terms of known covariates and responses. A quasi-likelihood approach is developed for estimation. Our results are robust to random effects distributions ranging from very discrete binary to continuous beta distributions. We illustrate the method by analyzing data from a recent large stock market crash in China, and we evaluate its performance through simulation studies.
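A toy simulation of one plausible reading of the multiplicative setup (the exact parameterization below is our guess for illustration, not the paper's): a unit-interval random effect b_i scales the fixed-effect success probability within each cluster:

```python
import numpy as np

rng = np.random.default_rng(10)
n_clusters, m = 50, 20
beta = np.array([0.5, 1.0])                        # fixed effects
b = rng.beta(2.0, 2.0, n_clusters)                 # random effects in (0, 1)
x = rng.standard_normal((n_clusters, m))
p = b[:, None] / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
y = (rng.random((n_clusters, m)) < p).astype(int)  # clustered binary data
print(y.mean(axis=1)[:5])                          # between-cluster variation
```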

13.
The aim of this paper is to model lifetime data for systems with multiple failure modes using a finite mixture of Weibull distributions. This involves estimating the unknown parameters, an important task in statistics, especially in life testing and reliability analysis. The proposed approach draws on several estimation methods, such as maximum likelihood via the EM algorithm; Bayesian estimation is also investigated, and further techniques such as graphical methods, non-linear median rank regression, and Monte Carlo simulation can be used to model the system under consideration. A numerical application illustrates the proposed approach. The paper also compares the fitted probability density, reliability, and hazard functions of the 3-parameter Weibull and Weibull mixture distributions, obtained by the proposed approach and by other conventional methods, for characterizing the distribution of failure times of the system components. Goodness-of-fit (GOF) measures are used to determine the best distribution for modeling the lifetime data, with preference given to the proposed approach, which yields more accurate parameter estimates.
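A minimal goodness-of-fit comparison in this spirit, assuming scipy's 3-parameter Weibull (weibull_min with a location parameter); fitting the mixture itself would need an EM loop and is omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Failure times from two failure modes (a two-component Weibull mixture):
t = np.concatenate([
    stats.weibull_min.rvs(1.5, scale=100.0, size=300, random_state=rng),
    stats.weibull_min.rvs(4.0, scale=400.0, size=200, random_state=rng),
])
c, loc, scale = stats.weibull_min.fit(t)           # single 3-par Weibull MLE
print(stats.kstest(t, "weibull_min", args=(c, loc, scale)))  # GOF of one Weibull
```

A low KS p-value here signals that a single Weibull underfits mixed failure modes, which is exactly the case the mixture model targets.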

14.
This paper proposes a new approach to analyzing stock return asymmetry and quantiles. We also present a new scale mixture of uniform (SMU) representation for the asymmetric Laplace distribution (ALD). The SMU representation of a probability distribution is a data augmentation technique that simplifies the Gibbs sampler in Bayesian Markov chain Monte Carlo algorithms. We consider a stochastic volatility (SV) model with an ALD error distribution; with the SMU representation, the full conditional distributions of some parameters are shown to have closed forms. Since the ALD can also be used to obtain the coefficients of quantile regression models, the paper further considers a quantile SV model obtained by fixing the skew parameter of the ALD at a specific quantile level. A simulation study shows that the proposed Bayesian methodology works well in both the SV and quantile SV models. In the empirical study, we analyze index returns of the stock markets in Australia, Japan, Hong Kong, Thailand, and the UK, and study the effect of the S&P 500 on these returns. The results show significant return asymmetry in some markets and an influence of the S&P 500 in all markets at all quantile levels.
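A sketch of an SMU draw in the symmetric (p = 0.5) Laplace case, the ALD without skew: X | V ~ Uniform(-V, V) with V ~ Gamma(2, 1) recovers the standard Laplace, and the asymmetric extension follows the same augmentation pattern:

```python
import numpy as np
from scipy.stats import kstest, laplace

rng = np.random.default_rng(12)
v = rng.gamma(2.0, 1.0, 200_000)   # uniform half-widths (the scale mixture)
x = rng.uniform(-v, v)             # X | V ~ Uniform(-V, V)
print(kstest(x, laplace.cdf))      # large p-value: matches Laplace(0, 1)
```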

15.
Metagenomics is a rapidly growing field, greatly driven by ongoing advances in high-throughput sequencing technologies. As a result, both the data preparation and the subsequent in silico experiments pose unsolved technical and theoretical challenges: there are no well-established approaches, and new expertise and software are constantly emerging. Our project's main focus is the creation and evaluation of a novel error detection and correction approach to be used inside a metagenomic processing workflow. The approach, together with an indirect validation technique and the empirical results obtained so far, is described in detail in this paper. To aid development and testing, we are also building a workflow execution system for running our experiments, designed to be extensible beyond the scope of error detection, which will be released as a free/open-source software package.
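A toy k-mer-spectrum check, a common building block of sequencing error detection (not this project's specific algorithm): k-mers seen fewer than min_count times across all reads are flagged as suspect:

```python
from collections import Counter

def kmers(read, k):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]   # last read has an error
k, min_count = 4, 2
counts = Counter(km for r in reads for km in kmers(r, k))
for r in reads:
    suspect = [km for km in kmers(r, k) if counts[km] < min_count]
    print(r, "suspect k-mers:", suspect)
```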

16.
The present work concerns Bayesian finite element (FE) model updating using modal measurements, based on maximizing the posterior probability rather than on any sampling-based approach. Such Bayesian updating frameworks usually employ the normal distribution for the parameters being updated, although the normal distribution is statistically problematic for non-negative parameters. We propose to address this by adopting the lognormal distribution for non-negative parameters. Detailed formulations are derived for model updating, uncertainty estimation, and probabilistic detection of changes/damages of structural parameters in a Bayesian framework with a combined normal-lognormal probability distribution: normal for the eigen-system equation, lognormal for the structural (mass and stiffness) parameters, and the two jointly for the likelihood function. Important advantages of FE model updating (e.g., the use of incomplete measured modal data and no requirement for mode matching) are retained in the proposed approach. To demonstrate its efficiency, a two-dimensional truss structure is considered with multiple damage cases. Satisfactory performance is observed in model updating and the subsequent probabilistic estimates, although, as expected, performance weakens as the damage scenarios become more severe. The proposed approach performs at a level quite similar to the typical normal-distribution-based updating approach on the same damage cases, while demonstrating better computational efficiency (higher accuracy in less computation time) than two prominent Markov chain Monte Carlo (MCMC) techniques, namely the Metropolis-Hastings algorithm and Gibbs sampling.
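A one-parameter MAP toy showing why the lognormal prior helps (all numbers hypothetical; the paper's formulation covers full mass/stiffness matrices and modal data): the prior keeps a stiffness k positive while a normal likelihood fits a measured natural frequency of a single-DOF oscillator, where omega = sqrt(k/m):

```python
import numpy as np
from scipy.optimize import minimize_scalar

m, omega_meas, sig_like = 2.0, 9.8, 0.3     # mass, measured freq, noise SD
mu_ln, sig_ln = np.log(200.0), 0.5          # lognormal prior on stiffness k

def neg_log_post(k):
    ll = -0.5 * ((np.sqrt(k / m) - omega_meas) / sig_like) ** 2
    lp = -0.5 * ((np.log(k) - mu_ln) / sig_ln) ** 2 - np.log(k)
    return -(ll + lp)

res = minimize_scalar(neg_log_post, bounds=(1e-3, 1e4), method="bounded")
print(res.x)    # MAP stiffness, near m * omega_meas**2 = 192.08
```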

17.
This study develops a new use of data envelopment analysis (DEA) for estimating a stochastic frontier cost function assumed to have two error components: a one-sided disturbance (representing technical and allocative inefficiencies) and a two-sided disturbance (representing observational error). The two error components are handled by DEA in combination with goal programming/constrained regression. The proposed approach avoids several statistical assumptions used in conventional methods for estimating a stochastic frontier function. As an important application, the estimation technique is used to obtain an AT&T stochastic frontier cost function; with it, the study measures the technical and allocative efficiencies of AT&T's production process and reviews its natural monopoly issue. The estimated stochastic frontier cost function is also compared with the other cost function models used in previous studies of the divestiture of the telephone industry.
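A minimal sketch of the goal-programming/constrained-regression piece alone: a deterministic cost frontier fitted by linear programming so the line lies on or below every observation, minimizing total one-sided inefficiency (the stochastic two-sided component is omitted here):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(13)
x = rng.uniform(1, 10, 40)                       # output level
cost = 5.0 + 2.0 * x + rng.exponential(1.5, 40)  # inefficiency inflates cost

n = len(x)
c = np.concatenate([[0.0, 0.0], np.ones(n)])     # objective: minimize sum of e_i
A_eq = np.hstack([np.ones((n, 1)), x[:, None], np.eye(n)])  # a + b*x_i + e_i = cost_i
bounds = [(None, None), (None, None)] + [(0, None)] * n     # e_i >= 0
res = linprog(c, A_eq=A_eq, b_eq=cost, bounds=bounds)
print(res.x[:2])                                 # frontier intercept and slope
```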

18.
Inference on the largest mean of a multivariate normal distribution is a surprisingly difficult and unexplored topic. Difficulties arise when two or more of the means are simultaneously the largest. Our proposed solution is based on an extension of R.A. Fisher's fiducial inference methods termed generalized fiducial inference. We use a model selection technique along with the generalized fiducial distribution to allow for equal largest means and to alleviate the overestimation that commonly occurs. Our proposed confidence intervals for the largest mean have asymptotically correct frequentist coverage, and simulation results suggest that they possess promising small-sample empirical properties. In addition to the theoretical calculations and simulations, we apply this approach to the air quality index of the four largest cities in the northeastern United States (Baltimore, Boston, New York, and Philadelphia).
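A quick numeric illustration of the overestimation being alleviated (toy numbers): with two population means tied at the maximum, the largest sample mean is biased upward:

```python
import numpy as np

rng = np.random.default_rng(14)
mu = np.array([1.0, 1.0, 0.0])            # two means tied at the largest value
xbar = rng.normal(mu, 1.0 / np.sqrt(25), size=(100_000, 3))  # n = 25 per group
print(xbar.max(axis=1).mean())            # clearly above the true maximum 1.0
```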

19.
Yuzhi Cai, Extremes, 2010, 13(3): 291-314
In this paper we propose a polynomial power-Pareto quantile function model and a Bayesian method for parameter estimation. We also carry out simulation studies and apply our methodology to real data sets. The results show that a quantile function approach to statistical modelling is very flexible owing to the properties of quantile functions, and that the combination of a power and a Pareto distribution enables us to model both the main body and the tails of a distribution, even though the distribution function has no closed-form expression. Our research also suggests a new approach to studying extreme values based on a whole data set, rather than on group maxima/minima or on exceedances above/below a suitable threshold.
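A sampling sketch under one common reading of such a model, Q(u) = c * u^a / (1 - u)^b (the exact parameterization is the paper's; ours is for illustration): because the model is specified through its quantile function, inverse-transform sampling needs no density at all:

```python
import numpy as np

rng = np.random.default_rng(15)
c, a, b = 1.0, 0.7, 0.4            # scale, power (body), Pareto (tail) parameters
u = rng.random(100_000)
y = c * u**a / (1.0 - u)**b        # inverse-transform sampling: y = Q(u)
print(np.quantile(y, [0.5, 0.9, 0.99]))   # heavy upper tail from the Pareto term
```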
