Similar Literature
1.
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations.

Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in manners that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data. The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated using a simulation study and the analysis of CGH copy number and survival data from breast cancer patients.
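One standard instance of the loss mapping in Step (1), sketched below as a hedged illustration (the article develops a more general and more efficient construction), is inverse-probability-of-censoring weighting (IPCW): uncensored losses are reweighted by the inverse of an estimated censoring survivor function so that the weighted mean is unbiased for the full-data risk. G_hat is an assumed, externally fitted estimate, e.g., Kaplan-Meier applied to the censoring times.

import numpy as np

def ipcw_risk(full_loss, time, delta, G_hat):
    """IPCW estimate of the full-data risk E[L] from right-censored data.

    full_loss : L evaluated at the observed outcome (meaningful when delta == 1)
    delta     : 1 if uncensored, 0 if censored
    G_hat     : callable, estimated censoring survivor function P(C > t)
    """
    w = delta / np.clip(G_hat(time), 1e-8, None)   # weight is 0 for censored points
    return float(np.mean(w * full_loss))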

2.
Length-biased data arise in many important fields, including epidemiological cohort studies, cancer screening trials and labor economics. Analysis of such data has attracted much attention in the literature. In this paper we propose a quantile regression approach for analyzing right-censored and length-biased data. We derive an inverse probability weighted estimating equation corresponding to the quantile regression to correct the bias due to length-biased sampling and informative censoring. The method easily handles the informative censoring induced by length-biased sampling, an appealing feature, since it is generally difficult to obtain unbiased estimates of risk factors in the presence of length bias and informative censoring. We establish the consistency and asymptotic distribution of the proposed estimator using empirical process techniques. A resampling method is adopted to estimate the variance of the estimator. We conduct simulation studies to evaluate its finite sample performance and use a real data set to illustrate the application of the proposed method.
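The generic shape of such an estimating equation is shown below (hedged: the weights w_i that correct jointly for length bias and the induced informative censoring are specific to the paper's sampling model); its root is the weighted quantile-regression estimator at quantile level tau.

\[
  \sum_{i=1}^{n} w_i \, X_i
  \bigl( I\{\, Y_i \le X_i^{\top}\beta(\tau) \,\} - \tau \bigr) = 0 .
\]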

3.
A monotone estimate of the conditional variance function in a heteroscedastic, nonparametric regression model is proposed. The method is based on the application of a kernel density estimate to an unconstrained estimate of the variance function and yields an estimate of the inverse variance function. The final monotone estimate of the variance function is obtained by an inversion of this function. The method is applicable to a broad class of nonparametric estimates of the conditional variance and particularly attractive to users of conventional kernel methods, because it does not require constrained optimization techniques. The approach is also illustrated by means of a simulation study.
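A minimal sketch of the inversion step under simplifying assumptions (covariate rescaled to [0,1], variance function increasing, Gaussian kernel; the paper's regularity conditions are not reproduced here):

import numpy as np
from scipy.stats import norm

def monotone_variance(var_hat, h=0.05):
    """var_hat: unconstrained variance estimates on an equispaced grid over [0,1].
    Returns the monotone variance estimate as a callable on [0,1]."""
    var_hat = np.asarray(var_hat, dtype=float)

    def inverse(t):
        # Kernel-smoothed proportion of grid values <= t: an increasing
        # function of t that estimates the inverse variance function.
        return norm.cdf((t - var_hat) / h).mean()

    def sigma2(x):
        # Final monotone estimate: numerically invert `inverse` by bisection.
        lo, hi = var_hat.min() - 1.0, var_hat.max() + 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if inverse(mid) < x else (lo, mid)
        return 0.5 * (lo + hi)

    return sigma2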

4.
This paper surveys several recent advances in parameter estimation for mixed effects models. The covariance matrix of a balanced mixed effects ANOVA model has a particular structure, and for this class of models reference [1] proposed a new estimation method called the spectral decomposition method. Its distinctive feature is that it yields estimates of the fixed effects and the variance components simultaneously, the former linear and the latter quadratic, with the two mutually independent. Subsequently, references [2-9] established further statistical properties of the spectral decomposition estimators and derived the corresponding estimator of the covariance matrix, which is not only positive definite but also admits an explicit risk function; these works also studied the relationships between spectral decomposition estimation and the ANOVA estimator, the maximum likelihood estimator, the restricted maximum likelihood estimator, and the minimum norm quadratic unbiased estimator. This paper reviews part of this line of research and poses some problems for further study.

5.
The problem considered here is that of fitting a linear function to a set of points. The criterion normally used for this is least squares. We consider two alternatives, viz., the least sum of absolute deviations (the L1 criterion) and the least maximum absolute deviation (the Chebyshev criterion). Each of these criteria gives rise to a linear program. We develop some theoretical properties of the solutions and, in the light of these, examine the suitability of these criteria for linear estimation. Some of the estimates obtained by using them are shown to be counter-intuitive.
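The L1 fit is a small linear program: introduce one bound variable per residual and minimize their sum. The sketch below (assuming scipy; the variable layout is ours, not the paper's) fits a line y = a*x + b.

import numpy as np
from scipy.optimize import linprog

def l1_line_fit(x, y):
    """LAD line fit via LP: min sum(t_i) s.t. |y_i - a*x_i - b| <= t_i."""
    n = len(x)
    c = np.concatenate([[0.0, 0.0], np.ones(n)])         # decision vector [a, b, t_1..t_n]
    A = np.vstack([
        np.column_stack([-x, -np.ones(n), -np.eye(n)]),  #  y_i - a x_i - b <= t_i
        np.column_stack([ x,  np.ones(n), -np.eye(n)]),  # -(y_i - a x_i - b) <= t_i
    ])
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A, b_ub=b_ub,
                  bounds=[(None, None)] * 2 + [(0, None)] * n)
    return res.x[0], res.x[1]                            # slope, intercept

The Chebyshev criterion is the same LP with a single shared bound t in place of the per-point bounds t_i.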

6.
Two methods are frequently used for modeling the choice among uncertain outcomes: stochastic dominance and mean–risk approaches. The former is based on an axiomatic model of risk-averse preferences but does not provide a convenient computational recipe. The latter quantifies the problem in a lucid form of two criteria with possible trade-off analysis, but cannot model all risk-averse preferences. In particular, if variance is used as a measure of risk, the resulting mean–variance (Markowitz) model is, in general, not consistent with stochastic dominance rules. This paper shows that taking the standard semideviation (the square root of the semivariance) as the risk measure makes the mean–risk model consistent with second-degree stochastic dominance, provided that the trade-off coefficient is bounded by a certain constant. Similar results are obtained for the absolute semideviation, and for the absolute and standard deviations in the case of symmetric or bounded distributions. In the analysis we use a new tool, the Outcome–Risk (O–R) diagram, which appears to be particularly useful for comparing uncertain outcomes.
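In symbols, the consistency statement has the following shape (our paraphrase; lambda* denotes the constant referred to above, and (.)_+ the positive part):

\[
  \bar{\sigma}(X) = \sqrt{\mathbb{E}\bigl[\,(\mathbb{E}[X] - X)_+^{\,2}\,\bigr]},
  \qquad
  X \succeq_{\mathrm{SSD}} Y
  \;\Longrightarrow\;
  \mathbb{E}[X] - \lambda\,\bar{\sigma}(X) \;\ge\; \mathbb{E}[Y] - \lambda\,\bar{\sigma}(Y)
  \quad\text{for all } 0 \le \lambda \le \lambda^{*}.
\]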

7.
A population-based cohort consisting of 126,141 men and 122,208 women born between 1874 and 1931 and at risk for breast or colorectal cancer after 1965 was identified by linking the Utah Population Data Base and the Utah Cancer Registry. The hazard function for cancer incidence is estimated from left-truncated and right-censored data based on the conditional likelihood. Four estimation procedures based on the conditional likelihood are used to estimate the age-specific hazard function from the data: the life-table method, a kernel method based on the Nelson–Aalen estimator, a spline estimate, and a proportional hazards estimate based on splines with birth year as the sole covariate.

The results are consistent with an increasing hazard for both breast and colorectal cancer through age 85 or 90. After age 85 or 90, the hazard function for female breast and colorectal cancer may reach a plateau or decrease, although the hazard function for male colorectal cancer appears to continue to rise through age 105. The hazard function for both breast and colorectal cancer appears to be higher for more recent birth cohorts, with a more pronounced birth-cohort effect for breast cancer than for colorectal cancer. The age-specific hazard for colorectal cancer appears to be higher for men than for women. The shape of the hazard function for both breast and colorectal cancer appears to be consistent with a two-stage model for spontaneous carcinogenesis in which the initiation rate is constant or increasing. Inheritance of initiated cells appears to play a minor role.
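Of the four procedures, the kernel method admits the shortest self-contained sketch (our illustration, with an Epanechnikov kernel and illustrative array names; boundary correction near the oldest ages is omitted): smooth the Nelson–Aalen increments, with risk sets that honor left truncation.

import numpy as np

def kernel_hazard(t, entry, exit_age, event, bw=2.0):
    """Kernel-smoothed hazard at age t from left-truncated, right-censored data.

    entry/exit_age : age at cohort entry / at event or censoring
    event          : 1 for an observed cancer, 0 for censoring
    """
    order = np.argsort(exit_age)
    times, d = exit_age[order], event[order]
    # Risk set at each event/censoring age honors left truncation:
    at_risk = np.array([np.sum((entry < u) & (exit_age >= u)) for u in times])
    dN = d / np.maximum(at_risk, 1)                 # Nelson-Aalen increments
    u = (t - times) / bw
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov
    return float(np.sum(k * dN) / bw)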

8.
9.
Inference based on the ratio of two independent Poisson rates is common in epidemiological studies. We study the performance of a variety of unconditional method-of-variance-estimates-recovery (MOVER) procedures for combining separate confidence intervals for two single Poisson rates into a confidence interval for their ratio. We consider confidence intervals derived from (1) Fieller's theorem, (2) the logarithmic transformation with the delta method and (3) the substitution method. We evaluate the performance of 13 such confidence intervals by comparing their empirical coverage probabilities, empirical confidence widths, ratios of mesial to total non-coverage probability, and total non-coverage probabilities. Our simulation results suggest that the MOVER Rao score confidence intervals based on Fieller's theorem and the substitution method are preferable. We provide two applications, constructing confidence intervals for the ratio of two Poisson rates in a breast cancer study and in a study examining coronary heart disease incidence among postmenopausal women treated with or without hormones.
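As an illustration of the MOVER recipe (a hedged sketch: we use the Rao score interval for each single rate and the Donner-Zou recovery formula for the ratio; the paper's 13 variants differ in these ingredients), with made-up counts and exposures:

import math

def poisson_score_ci(x, t, z=1.959964):
    """Rao score CI for a Poisson rate, given count x and exposure t."""
    centre = x + z * z / 2.0
    half = z * math.sqrt(x + z * z / 4.0)
    return (centre - half) / t, (centre + half) / t

def mover_ratio_ci(x1, t1, x2, t2, z=1.959964):
    r1, r2 = x1 / t1, x2 / t2                  # point estimates of the two rates
    l1, u1 = poisson_score_ci(x1, t1, z)
    l2, u2 = poisson_score_ci(x2, t2, z)
    # Donner-Zou MOVER limits for r1 / r2 (assumes positive counts so the
    # radicands and denominators below are well behaved).
    L = (r1 * r2 - math.sqrt((r1 * r2) ** 2
        - l1 * u2 * (2 * r1 - l1) * (2 * r2 - u2))) / (u2 * (2 * r2 - u2))
    U = (r1 * r2 + math.sqrt((r1 * r2) ** 2
        - u1 * l2 * (2 * r1 - u1) * (2 * r2 - l2))) / (l2 * (2 * r2 - l2))
    return L, U

print(mover_ratio_ci(15, 100.0, 10, 120.0))    # e.g. 15 events per 100 py vs 10 per 120 py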

10.
When modeling heteroscedasticity in a broad class of partially linear models, we allow the variance function to be a partially linear model as well, with parameters that may differ from those in the mean function. We develop a two-step estimation procedure: in the first step, initial estimates of the parameters in both the mean and variance functions are obtained; in the second step, the estimates are updated using weights calculated from the initial estimates. The resulting weighted estimators of the linear coefficients in both the mean and variance functions are shown to be asymptotically normal, more efficient than the initial unweighted estimators, and most efficient in the sense of semiparametric efficiency in some special cases. Simulation experiments are conducted to examine the numerical performance of the proposed procedure, which is also applied to data from an air pollution study in Mexico City.
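The two-step logic is the familiar feasible weighted least squares pattern; the sketch below shows it for purely linear mean and log-variance parts (our simplification: the paper's models carry additional nonparametric components):

import numpy as np

def two_step_wls(X, y, Z):
    """Step 1: unweighted fits of the mean (X) and log-variance (Z) parts.
    Step 2: reweight the mean fit by the fitted inverse variances."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    resid2 = (y - X @ beta0) ** 2
    gamma0 = np.linalg.lstsq(Z, np.log(resid2 + 1e-12), rcond=None)[0]
    w = np.exp(-Z @ gamma0)                       # estimated inverse variances
    beta1 = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y * w))
    return beta0, beta1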

11.
ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
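A toy version of the idea (our example, not one from the paper): drive plain ABC rejection with a scrambled Sobol sequence, mapping low-discrepancy uniforms through inverse CDFs for both the prior draw and the data simulation.

import numpy as np
from scipy.stats import qmc, norm

y_obs, eps, n = 1.3, 0.05, 2 ** 12        # observed summary, tolerance, sample size

u = qmc.Sobol(d=2, scramble=True, seed=0).random(n)   # QMC uniforms in [0,1)^2
theta = norm.ppf(u[:, 0], loc=0.0, scale=2.0)         # prior N(0, 2^2)
x_sim = theta + norm.ppf(u[:, 1])                     # model: y | theta ~ N(theta, 1)

accepted = theta[np.abs(x_sim - y_obs) < eps]
print(accepted.mean(), accepted.size)                 # ABC posterior-mean estimate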

12.
This paper applies a new stochastic goal programming framework proposed by Enrique Ballestero, adopting power utility functions and hyperbolic absolute risk aversion. With the portfolio selection problem as the backdrop, we construct two stochastic goal programming models whose objective functions take fractional form, give a solution method, and discuss the economic meaning of the solutions. The stochastic goal programs yield efficient solutions with relatively minimal risk, offering decision makers a new route for choosing among alternatives.

13.
In this paper we consider stochastic programming problems where the objective function is given as an expected value function. We discuss Monte Carlo simulation based approaches to the numerical solution of such problems. In particular, we discuss in detail, and present numerical results for, two-stage stochastic programming with recourse where the random data have a continuous (multivariate normal) distribution. We think the novelty of the numerical approach developed in this paper is twofold. First, various variance reduction techniques are applied in order to enhance the rate of convergence; their successful application is what makes the whole approach numerically feasible. Second, statistical inference is developed and applied to estimating the error, validating optimality of a calculated solution, and constructing statistically based stopping criteria for an iterative algorithm.
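A compressed sketch of the sample-average idea with two classic variance-reduction devices, antithetic variates and common random numbers (our toy newsvendor-style recourse; all parameter values are illustrative, and the paper's test problems and reduction techniques are richer):

import numpy as np

rng = np.random.default_rng(0)
c, q, r = 1.0, 1.5, 0.2            # order cost, selling price, salvage value

# Antithetic lognormal demand sample, shared across candidate decisions
# (common random numbers), so that comparisons are low-variance.
z = rng.standard_normal(5000)
demand = np.exp(0.5 * np.concatenate([z, -z]) + 4.0)

def saa_profit(x):
    """Sample-average objective for first-stage order quantity x."""
    sold = np.minimum(x, demand)                       # second-stage recourse
    return -c * x + np.mean(q * sold + r * np.maximum(x - demand, 0.0))

xs = np.linspace(20.0, 150.0, 131)
best = xs[int(np.argmax([saa_profit(x) for x in xs]))]
print(best)                                            # SAA-optimal first stage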

14.
The popularity of downside risk among investors is growing, and mean return–downside risk portfolio selection models appear to be displacing the familiar mean–variance approach. The reason for the success of the former models is that they separate return fluctuations into downside risk and upside potential. This is especially relevant for asymmetrical return distributions, for which mean–variance models punish the upside potential in the same fashion as the downside risk.

The paper focuses on the differences and similarities between using variance or a downside risk measure, both from a theoretical and an empirical point of view. We first discuss the theoretical properties of different downside risk measures and the corresponding mean–downside risk models. Against common beliefs, we show that from the large family of downside risk measures, only a few possess better theoretical properties within a return–risk framework than the variance. On the empirical side, we analyze the differences between some US asset allocation portfolios based on variances and downside risk measures. Among other things, we find that the downside risk approach tends to produce, on average, slightly higher bond allocations than the mean–variance approach. Furthermore, we take a closer look at estimation risk, viz. the effect of sampling error in expected returns and risk measures on portfolio composition. On the basis of simulation analyses, we find that there are marked differences in the degree of estimation accuracy, which calls for further research.
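The separation described in the first paragraph is easy to see numerically (our toy return series; tau is the target below which deviations count as risk):

import numpy as np

r = np.array([0.04, -0.02, 0.07, -0.05, 0.01, 0.10, -0.01])   # toy returns
tau = 0.0                                  # minimal acceptable return
variance = r.var(ddof=1)                   # punishes upside and downside alike
semivariance = np.mean(np.maximum(tau - r, 0.0) ** 2)   # only shortfalls enter
print(variance, semivariance)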

15.
A New Algorithm for Least Absolute Deviation Regression Based on Simulated Annealing

The least absolute deviation (LAD) criterion is widely used in engineering because of its robustness, but algorithms for computing the coefficients of an LAD regression model tend to be overly complex or applicable only when the numbers of samples and variables are small. Exploiting properties of the LAD criterion, this paper recasts the LAD problem as a combinatorial optimization problem and applies a simulated annealing algorithm to solve the LAD model; the numerical experiments reported below show good results.
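The combinatorial reformulation rests on the fact that, for simple linear regression, some LAD-optimal line interpolates two of the data points, so the search space is the set of index pairs. A hedged sketch (our generic cooling schedule and move rule, not necessarily the paper's; assumes x[0] != x[1]):

import math
import random

def lad_cost(i, j, x, y):
    """Sum of absolute deviations from the line through points i and j."""
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    return sum(abs(yk - a * xk - b) for xk, yk in zip(x, y))

def anneal_lad(x, y, T=1.0, cool=0.995, steps=5000, seed=0):
    rnd = random.Random(seed)
    i, j = 0, 1                                   # initial index pair
    cost = lad_cost(i, j, x, y)
    for _ in range(steps):
        k = rnd.randrange(len(x))                 # propose swapping in index k
        if k in (i, j) or x[k] in (x[i], x[j]):
            continue                              # skip degenerate proposals
        cand = (k, j) if rnd.random() < 0.5 else (i, k)
        c = lad_cost(cand[0], cand[1], x, y)
        if c < cost or rnd.random() < math.exp((cost - c) / T):
            (i, j), cost = cand, c                # Metropolis accept
        T *= cool                                 # geometric cooling
    return i, j, cost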

16.
We consider the estimation problem for misspecified ergodic Lévy-driven stochastic differential equation models based on high-frequency samples. We utilize a widely applicable and tractable Gaussian quasi-likelihood approach which focuses on the mean and variance structure. It is shown that the Gaussian quasi-likelihood estimators of the drift and scale parameters still satisfy polynomial-type probability estimates and asymptotic normality at the same rate as in the correctly specified case. In the derivation, the theory of the extended Poisson equation for time-homogeneous Feller Markov processes plays an important role. Our result confirms the reliability of the Gaussian quasi-likelihood approach for SDE models.
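For orientation, a generic stepwise Gaussian quasi-likelihood for scalar high-frequency data with step size h_n has the shape below (our paraphrase of the standard construction; a_{j-1} and c_{j-1} denote the drift and squared-scale coefficients evaluated at X_{t_{j-1}}, and the paper's misspecified Lévy-driven setting requires further care):

\[
  \mathbb{H}_n(\alpha, \gamma) \;=\;
  -\frac{1}{2}\sum_{j=1}^{n}
  \left\{
    \log\bigl(h_n\, c_{j-1}(\gamma)\bigr)
    + \frac{\bigl(\Delta_j X - h_n\, a_{j-1}(\alpha)\bigr)^{2}}{h_n\, c_{j-1}(\gamma)}
  \right\}.
\]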

17.
Infinite variance processes have attracted growing interest in recent years owing to their applications in many areas of statistics (see [1] and the references therein). For example, ARIMA time-series models with infinite variance innovations are widely used in financial modelling. However, little attention has been paid to incorporating infinite variance innovations into the time-series models with random coefficients introduced by [2]. This paper considers the problem of nonparametric estimation for some such time-series models using the smoothed least absolute deviation (SLAD) estimating function approach. We introduce a class of kernels in order to smooth the LAD estimators, and show that the new SLAD estimators are superior to some existing ones.
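The smoothing device behind such estimators, stated generically (our paraphrase: the non-smooth sign function in the LAD estimating equation is replaced by an integrated kernel, giving a differentiable estimating function; the paper's random-coefficient specifics are omitted):

\[
  \sum_{i=1}^{n} \Bigl( 2\,G_h\bigl(Y_i - m_\theta(X_i)\bigr) - 1 \Bigr)\,
  \frac{\partial m_\theta(X_i)}{\partial \theta} = 0,
  \qquad
  G_h(u) = \int_{-\infty}^{u/h} K(v)\,\mathrm{d}v .
\]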

18.
Many risk measures have recently been introduced which (for discrete random variables) result in linear programs (LP). While some LP-computable risk measures may be viewed as approximations to the variance (e.g., the mean absolute deviation or Gini's mean absolute difference), shortfall or quantile risk measures have been gaining popularity in various financial applications. In this paper we study LP-solvable portfolio optimization models based on extensions of the Conditional Value at Risk (CVaR) measure. The models use multiple CVaR measures, thus allowing for more detailed risk-aversion modeling. We study both the theoretical properties of the models and their performance on real-life data.
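What makes a single CVaR term LP-computable is the Rockafellar-Uryasev representation: for scenario returns r_s with probabilities p_s and portfolio x, the CVaR of the loss -r^T x is

\[
  \mathrm{CVaR}_{\alpha}(x) \;=\;
  \min_{\eta \in \mathbb{R}} \;
  \eta + \frac{1}{1-\alpha} \sum_{s=1}^{S} p_s
  \bigl( -r_s^{\top} x - \eta \bigr)_{+} ,
\]

and the positive parts linearize via scenario shortfall variables d_s >= -r_s^T x - eta, d_s >= 0; stacking several such terms with different alpha gives multiple-CVaR models of the kind studied in the paper.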

19.
Adaptive importance sampling using kernel density estimation techniques was introduced by West. This technique adapts the importance sampling function to the underlying integrand, thus yielding small-variance estimates. One drawback of this approach is that evaluation of the kernel mixture density is slow. We present a linear tensor spline representation of the adaptive importance function using variable bandwidth kernels that retains the small variance properties of West's approach but executes more quickly.
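A minimal one-dimensional rendition of West-style adaptive importance sampling (our sketch with a fixed bandwidth; the paper replaces the quadratic-cost mixture evaluation marked below with a spline representation and uses variable bandwidths):

import numpy as np

def adaptive_is_mean(log_target, n=1000, stages=3, bw=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 3.0, n)                       # broad stage-0 proposal
    logq = -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))
    for _ in range(stages):
        w = np.exp(log_target(x) - logq)
        w /= w.sum()
        centres = rng.choice(x, size=n, p=w)          # adapt: resample by weight
        x = centres + bw * rng.standard_normal(n)     # draw from kernel mixture
        # Evaluating the kernel-mixture density is the O(n^2) bottleneck
        # that the spline representation is designed to remove.
        diff = (x[:, None] - centres[None, :]) / bw
        q = np.exp(-0.5 * diff ** 2).mean(axis=1) / (bw * np.sqrt(2.0 * np.pi))
        logq = np.log(q + 1e-300)
    w = np.exp(log_target(x) - logq)
    return np.sum(w * x) / w.sum()                    # self-normalised IS estimate

print(adaptive_is_mean(lambda t: -0.5 * (t - 2.0) ** 2))   # ~ 2.0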

20.
We propose a stochastic goal programming (GP) model leading to a structure of mean–variance minimisation. The solution to the stochastic problem is obtained from a linkage between standard expected utility theory and a strictly linear, weighted GP model under uncertainty. The approach essentially consists of specifying the expected utility equation corresponding to every goal; Arrow's absolute risk aversion coefficients enter the calculation. Once the model is defined and justified, an illustrative example is developed.
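The linkage to a mean–variance structure is the classical Arrow-Pratt second-order approximation: expanding the utility of a goal's achievement y_i around its mean gives

\[
  \mathbb{E}\bigl[ u_i(y_i) \bigr] \;\approx\;
  u_i\bigl(\mathbb{E}[y_i]\bigr)
  - \tfrac{1}{2}\, r_i \, u_i'\bigl(\mathbb{E}[y_i]\bigr)\,\mathrm{Var}(y_i),
  \qquad
  r_i = -\frac{u_i''\bigl(\mathbb{E}[y_i]\bigr)}{u_i'\bigl(\mathbb{E}[y_i]\bigr)},
\]

so maximising expected utility goal by goal amounts to trading off means against variances, with the Arrow absolute risk aversion coefficients r_i as weights.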
