Similar Literature (20 articles found)
1.
Partially linear models are models in which the response is linear in one or more covariates but depends nonlinearly on the remaining covariates. Penalized least squares is one of the main methods for estimating the parametric and nonparametric parts of such models, and generalized cross-validation (GCV) offers a way to choose the smoothing parameter for this estimator. The optimality of GCV for choosing the smoothing parameter in partially linear models, however, had not been proved. This paper proves that when a partially linear model is fitted by penalized least squares, GCV selection of the smoothing parameter is optimal. Simulations confirm that the proposed GCV selection works well; the simulation section also compares GCV with least squares cross-validation.
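
For concreteness, here is a minimal sketch of GCV smoothing-parameter selection for a penalized least squares fit; the ridge-type penalty, the toy data, and all names are illustrative stand-ins rather than the paper's setup:

```python
# A minimal sketch of GCV smoothing-parameter selection for a penalized
# least squares fit (ridge-type penalty as a stand-in for the spline
# penalty used for the nonparametric part). Illustrative only.
import numpy as np

def gcv_score(X, y, lam):
    n = len(y)
    # Hat matrix of the penalized fit: S = X (X'X + lam*I)^{-1} X'
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = y - S @ y
    # GCV(lam) = (RSS/n) / (1 - tr(S)/n)^2
    return (resid @ resid / n) / (1 - np.trace(S) / n) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(size=100)
lams = np.logspace(-3, 3, 25)
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
print("GCV-selected lambda:", best)
```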

2.
This paper uses Monte Carlo methods to obtain maximum likelihood estimates for generalized linear mixed models, and provides practical tools for assessing the convergence and accuracy of the parameter estimates. Simulation studies show that the fixed-effect estimates are unbiased, while the errors of the variance-component estimates are comparable to previous results. As an application, small-area estimates of breast cancer mortality are obtained using a Poisson distribution.
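
As a sketch of the idea, the marginal likelihood of a GLMM can be approximated by averaging the conditional likelihood over simulated random effects; everything below (a single random intercept, the toy data) is an illustrative assumption, not the paper's implementation:

```python
# A minimal sketch of Monte Carlo evaluation of the marginal log-likelihood
# of a Poisson GLMM with one random intercept: the random effect is
# integrated out by simple Monte Carlo averaging. Illustrative only.
import numpy as np
from scipy.stats import poisson

def mc_loglik(beta, sigma, x, y, n_draws=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    b = rng.normal(0.0, sigma, size=n_draws)          # draws of the random intercept
    mu = np.exp(x[None, :] * beta + b[:, None])       # (n_draws, n_obs) conditional means
    cond = poisson.pmf(y[None, :], mu).prod(axis=1)   # conditional likelihood per draw
    return np.log(cond.mean())                        # MC estimate of log ∫ L(y|b) φ(b) db

x = np.array([0.0, 0.5, 1.0, 1.5])
y = np.array([1, 2, 2, 4])
print(mc_loglik(beta=0.8, sigma=0.3, x=x, y=y))
```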

3.
This paper studies local influence analysis for multivariate $t$-models with a uniform structure. Following Cook's curvature measure, we examine the effect of small perturbations on statistical inference and derive the statistic of primary interest in local influence analysis: the direction of maximum curvature. As an application, the common covariance-weighted perturbation scheme is discussed in detail.
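
For reference, a sketch of the standard local influence quantities the abstract builds on (Cook's normal curvature, in the usual notation; this is assumed background, not quoted from the paper):

```latex
% Sketch of Cook's (1986) local influence setup: L(\theta|\omega) is the
% perturbed log-likelihood, \omega_0 the null perturbation, and
% \Delta = \partial^2 L(\hat\theta|\omega)/\partial\theta\,\partial\omega^\top
% evaluated at the MLE \hat\theta and at \omega_0.
C_l \;=\; 2\,\bigl|\, l^\top \Delta^\top \ddot L^{-1} \Delta\, l \,\bigr|,
\qquad \|l\| = 1,
% where \ddot L is the Hessian of the unperturbed log-likelihood at \hat\theta.
% The direction of maximum curvature l_{\max} is the eigenvector of
% \Delta^\top \ddot L^{-1} \Delta with the largest absolute eigenvalue.
```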

4.
This paper studies two-sample linear models with incomplete data, in which the covariates are fully observed while the responses are missing at random (MAR). Inverse probability weighted imputation is used to fill in the missing responses, yielding "complete" samples for the two linear regression models; on these "complete" samples, a log empirical likelihood ratio statistic for the difference of response quantiles is constructed. Unlike earlier results, we prove under certain conditions that the limiting distribution of this statistic is a standard chi-squared distribution, which reduces the error introduced by estimating the weights and leads to more accurate empirical likelihood confidence intervals for the quantile difference.
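
A minimal sketch of inverse-probability-weighted imputation for a MAR response follows; the logistic propensity model, the linear outcome model, and the augmented imputation formula are common textbook choices assumed here, not necessarily the paper's:

```python
# A minimal sketch of IPW imputation for a response missing at random
# given the covariate. All models and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))
y = 2.0 * x[:, 0] + rng.normal(size=200)
observed = rng.random(200) < 1 / (1 + np.exp(-(0.5 + x[:, 0])))  # MAR mechanism

# 1. Estimate selection probabilities pi(x) = P(observed | x).
pi_hat = LogisticRegression().fit(x, observed).predict_proba(x)[:, 1]

# 2. Fit the outcome regression on the observed cases and impute:
#    y_i* = d_i * y_i / pi_i + (1 - d_i / pi_i) * m(x_i)  (augmented IPW form).
reg = LinearRegression().fit(x[observed], y[observed])
y_star = np.where(observed,
                  y / pi_hat - (1 - pi_hat) / pi_hat * reg.predict(x),
                  reg.predict(x))
print("completed-sample mean:", y_star.mean())
```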

5.
Based on a finite mixture model with latent variables, this paper proposes a Bayesian clustering method for ordinal data. The EM algorithm is used to estimate the model parameters, the BIC criterion is used to determine the number of classes, and observations are classified by a method similar to Bayesian discriminant analysis. Simulation results show that the proposed method clusters well and that the computational burden is acceptable for moderately sized data sets.
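
The EM-fit / BIC-selection / Bayes-classification workflow can be sketched as below, with scikit-learn's GaussianMixture standing in for the paper's latent-variable mixture for ordinal data (which GaussianMixture does not implement):

```python
# A minimal sketch of the EM + BIC + Bayes-classification workflow the
# abstract describes; GaussianMixture is a generic stand-in, not the
# paper's model for ordinal data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)  # EM fit per k
        for k in range(1, 6)}
k_best = min(fits, key=lambda k: fits[k].bic(X))   # BIC chooses the class count
labels = fits[k_best].predict(X)                   # Bayes-rule classification
print("chosen number of classes:", k_best)
```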

6.
The partial linear measurement error model generalizes the classical partially linear model by assuming that the explanatory variables are measured with error. In this paper we extend empirical likelihood to the partial linear measurement error model and obtain a nonparametric Wilks theorem. Our method can be used both to construct confidence intervals (regions) and to perform tests. Numerical simulations show that the method performs well in terms of the length and coverage of the resulting confidence intervals.
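
For background, the generic empirical likelihood construction behind a "nonparametric Wilks theorem" reads as follows (a sketch in standard notation, not the paper's specific estimating functions):

```latex
% Generic empirical likelihood ratio for a parameter \beta identified by
% estimating functions Z_i(\beta) with E[Z_i(\beta_0)] = 0:
R(\beta) \;=\; \max\Bigl\{\, \prod_{i=1}^{n} n p_i \;:\;
p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i Z_i(\beta) = 0 \,\Bigr\}.
% Wilks-type result: -2\log R(\beta_0) \xrightarrow{d} \chi^2_d, so
% \{\beta : -2\log R(\beta) \le \chi^2_{d,1-\alpha}\} is an asymptotic
% 1-\alpha confidence region, with no variance estimation required.
```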

7.
In the ordinary linear regression model, the explanatory variables are taken to be measured without error. When all observed variables may contain errors, however, standard statistical methods become biased. Using the idea of grouping the data, this paper proposes a grouped least squares estimator that reduces the bias of the parameter estimates; moreover, when only bounds on the errors are known, grouped least squares yields consistent estimators for several measurement error models.
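
A minimal sketch of a classical grouping estimator in this spirit (Wald's two-group estimator) is shown below; it reduces, but need not eliminate, the attenuation bias, and the paper's grouped least squares estimator may differ:

```python
# A minimal sketch of a grouping estimator for a measurement error model
# (Wald's two-group estimator), illustrating the grouping idea only.
import numpy as np

rng = np.random.default_rng(3)
n = 400
x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=0.5, size=n)   # covariate measured with error
y = 1.0 + 2.0 * x_true + rng.normal(scale=0.5, size=n)

order = np.argsort(x_obs)
lo, hi = order[: n // 2], order[n // 2 :]        # split the sample into two groups
slope = (y[hi].mean() - y[lo].mean()) / (x_obs[hi].mean() - x_obs[lo].mean())
print("naive OLS slope:", np.polyfit(x_obs, y, 1)[0], " grouped slope:", slope)
```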

8.
Testing the homogeneity of error variances in random effects models with duplicated trials $(k=2)$
For the random effects ANOVA model in which each trial is replicated twice, this paper gives a consistent test of error-variance homogeneity, i.e., of $H_0:\sigma_1^2 =\sigma_2^2\leftrightarrow H_1:\sigma_1^2\ne\sigma_2^2$. Explicit test statistics and rejection regions are derived for three common models, and two applied examples are given.

9.
Feng Yu (冯予), 《应用概率统计》 (Applied Probability and Statistics), 2006, 22(4): 365-380
For exponential family nonlinear mixed-effects models, this paper derives several case-deletion influence diagnostics based on the $Q$-function approach (Zhu Hongtu, 2001). The main idea is to treat the random effects as missing data and to use the EM algorithm to handle the conditional expectation of the complete-data log-likelihood. A real example shows that the method is effective.

10.
Value at Risk (VaR) is one of the most popular tools for managing and controlling the risk of financial assets. This paper proposes a local quantile regression method for estimating the VaR of a portfolio. The method can compute the VaR of a portfolio over multiple holding periods, so that its dynamic risk over a horizon can be monitored. Simulations and an analysis of US three-month Treasury bill rates illustrate how the method is carried out, and a comparison is made with J.P. Morgan's square-root-of-time rule. The results show that our VaR estimates are satisfactory.
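
As a sketch, VaR can be read off a conditional quantile fit; the plain linear quantile regression below is a stand-in for the paper's local quantile regression, and the data and predictor are illustrative assumptions:

```python
# A minimal sketch of quantile-regression-based VaR: the 5% conditional
# quantile of returns gives the one-period 95% VaR (with sign flipped).
# A plain linear QuantReg stands in for the paper's *local* quantile regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
lagged = rng.normal(scale=0.01, size=n)                 # lagged return (toy predictor)
ret = 0.3 * lagged + rng.normal(scale=0.01, size=n)     # current return

X = sm.add_constant(lagged)
fit = sm.QuantReg(ret, X).fit(q=0.05)                   # 5% conditional quantile
var_estimate = -fit.predict(X)                          # VaR = -quantile of returns
print("mean one-period 95% VaR:", var_estimate.mean())
```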

11.
To prepare and optimize construction schedules, it is important to compute the time parameters of each activity in a project: earliest start, latest start, earliest finish, latest finish, total float, and free float. This paper proposes a new method for computing these time parameters. Taking activity finish times as the decision variables, it obtains all the time parameters by building and solving linear programming models. The modeling is conceptually simple, requires no network diagram, and can be solved with general-purpose office software such as EXCEL. Simulated computations show that the time parameters obtained by this method agree exactly with those computed by standard network planning techniques.
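
A minimal sketch of the LP idea for the earliest finish times is given below; scipy stands in for the paper's EXCEL solver, and the three-activity project is an illustrative assumption:

```python
# Earliest finish times F_j via LP:
#   min sum(F_j)  s.t.  F_j >= F_i + d_j for each precedence i -> j,
#                       F_j >= d_j.
# scipy stands in for the EXCEL solver mentioned in the abstract.
import numpy as np
from scipy.optimize import linprog

dur = {"A": 3, "B": 2, "C": 4}          # activity durations (illustrative)
prec = [("A", "C"), ("B", "C")]         # A and B both precede C
idx = {k: i for i, k in enumerate(dur)}

A_ub, b_ub = [], []
for i, j in prec:                        # encode F_i - F_j <= -d_j
    row = np.zeros(len(dur)); row[idx[i]] = 1; row[idx[j]] = -1
    A_ub.append(row); b_ub.append(-dur[j])

res = linprog(c=np.ones(len(dur)), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(dur[k], None) for k in dur])
print(dict(zip(dur, res.x)))             # earliest finish: A=3, B=2, C=7
```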

12.
A data analysis method is proposed to derive a latent structure matrix from a sample covariance matrix. The matrix can be used to explore the linear latent effect between two sets of observed variables. Procedures for estimating a set of dependent variables from a set of explanatory variables using the latent structure matrix are also proposed. The proposed method can assist researchers in improving the effectiveness of SEM models by exploring the latent structure between two sets of variables. In addition, a structure residual matrix can be derived as a by-product of the proposed method, with which researchers can conduct experimental procedures of variable combination and selection to build various models for hypothesis testing. These capabilities can improve the effectiveness of traditional SEM methods in characterizing data properties and testing model hypotheses. Case studies demonstrate the procedure of deriving the latent structure matrix step by step, and the latent structure estimation results are quite close to those of PLS regression. A structure coefficient index is suggested to explore the relationships among various combinations of variables and their effects on the variance of the latent structure.
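
Since the abstract relates the latent structure between two blocks to their covariance, here is a minimal sketch of reading a one-dimensional latent link off the SVD of the sample cross-covariance matrix; the simulated blocks and all names are illustrative:

```python
# A minimal sketch of extracting a one-dimensional latent structure between
# two blocks of variables from the SVD of their cross-covariance matrix.
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(size=300)                                  # latent variable
X = np.outer(z, [1.0, 0.5]) + rng.normal(size=(300, 2)) * 0.3
Y = np.outer(z, [0.8, -0.4, 0.2]) + rng.normal(size=(300, 3)) * 0.3

Sxy = np.cov(X.T, Y.T)[:2, 2:]                            # cross-covariance block
U, s, Vt = np.linalg.svd(Sxy)
print("leading singular value:", s[0])                    # strength of the latent link
print("X-side loadings:", U[:, 0], " Y-side loadings:", Vt[0])
```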

13.
A general procedure for creating Markovian interest rate models is presented. The models created by this procedure automatically fit within the HJM framework and fit the initial term structure exactly; therefore they are arbitrage free. Because the models created by this procedure have only one state variable per factor, two- and even three-factor models can be computed efficiently, without resorting to Monte Carlo techniques. This computational efficiency makes calibration of the new models to market prices straightforward. Extended Hull-White, extended CIR, Black-Karasinski, Jamshidian's Brownian path independent models, and Flesaker and Hughston's rational log normal models are one-state-variable models which fit naturally within this theoretical framework. The 'separable' n-factor models of Cheyette and Li, Ritchken, and Sankarasubramanian - which require n(n + 3)/2 state variables - are degenerate members of the new class of models with n(n + 3)/2 factors. The procedure is used to create a new class of one-factor models, the 'β-η models.' These models can match the implied volatility smiles of swaptions and caplets, and thus enable one to eliminate smile error. The β-η models are also exactly solvable in that their transition densities can be written explicitly. For these models accurate - but not exact - formulas are presented for caplet and swaption prices, and it is indicated how these closed-form expressions can be used to efficiently calibrate the models to market prices.

14.
The syntenic distance between two species is the minimum number of fusions, fissions, and translocations required to transform one genome into the other. The linear syntenic distance, a restricted form of this model, has been shown to be close to the syntenic distance. Both distances are computationally difficult and have resisted efficient approximation algorithms with non-trivial performance guarantees. In this paper, we prove that many useful properties of syntenic distance carry over to linear syntenic distance. We also give a reduction from the general linear synteny problem to the question of whether a given instance can be solved using the maximum possible number of translocations. Our main contribution is an algorithm exactly computing linear syntenic distance in nested instances of the problem. This is the first polynomial time algorithm exactly solving linear synteny for a non-trivial class of instances. It is based on a novel connection between the syntenic distance and a scheduling problem that has been studied in the operations research literature.

15.
On statistical models for regression diagnostics
In regression diagnostics, the case deletion model (CDM) and the mean shift outlier model (MSOM) are commonly used in practice. In this paper we show that the estimates from the CDM and the MSOM are equal in a wide class of statistical models, including the LSE, MLE, Bayesian estimate and M-estimate in linear and nonlinear regression models; the MLE in generalized linear models and exponential family nonlinear models; the MLEs of transformation parameters of explanatory variables in Box-Cox regression models; and so on. Furthermore, we study some models in which the estimates from the CDM and the MSOM are not exactly equal but are approximately equal.
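
For the linear model case, a sketch of the two diagnostic models and the classical equivalence the abstract refers to (standard notation, assumed rather than quoted from the paper):

```latex
% Sketch of the two diagnostic models for y = X\beta + \varepsilon:
% CDM: refit with case i deleted, giving \hat\beta_{(i)};
% MSOM: add a mean shift for case i,
y = X\beta + d_i \gamma + \varepsilon, \qquad
d_i = (0,\dots,0,\underset{i}{1},0,\dots,0)^\top .
% Classical equivalence for least squares: the estimate of \beta in the
% MSOM coincides with the case-deletion estimate,
% \hat\beta_{\mathrm{MSOM}} = \hat\beta_{(i)}.
```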

16.
A general approach for developing distribution-free tests for general linear models based on simplicial depth is applied to multiple regression. The tests are based on the asymptotic distribution of the simplicial regression depth, which depends only on the distribution law of the vector product of the regressor variables. Based on this formula, the spectral decomposition, and thus the asymptotic distribution, is derived for multiple regression through the origin and multiple regression with Cauchy-distributed explanatory variables. The errors may be heteroscedastic, and the concrete form of the error distribution need not be known. Moreover, the asymptotic distribution for multiple regression with intercept does not depend on the location and scale of the explanatory variables. A simulation study suggests that the tests can also be applied to normally distributed explanatory variables. An application to multiple regression for the shape analysis of fishes demonstrates the applicability of the new tests, and in particular their outlier robustness.

17.
A practical problem frequently encountered in non-life insurance ratemaking is that the rates for certain risk classes must not be too high or too low. Under such constraints, traditional generalized linear models cannot be applied directly to ratemaking. This paper presents an iterative algorithm for adjusting commonly used generalized linear models under general linear constraints, so that the resulting rates satisfy the given constraints. An empirical study shows that the method is flexible and practically feasible, and can handle the market constraints commonly encountered in non-life insurance ratemaking.

18.
We consider models for the covariance between two blocks of variables. Such models are often used in situations where latent variables are believed to be present. In this paper we characterize exactly the set of distributions given by a class of models with one-dimensional latent variables. These models relate two blocks of observed variables, modeling only the cross-covariance matrix. We describe the relation of this model to the singular value decomposition of the cross-covariance matrix. We show that, although the model is underidentified, useful information may be extracted. We further consider an alternative parameterization in which one latent variable is associated with each block, and we extend the result to models with r-dimensional latent variables.

19.
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return compared to risk-free investment opportunities. In the classical Markowitz model, the risk is measured by the variance, which corresponds to Sharpe ratio optimization and leads to quadratic optimization problems. Several polyhedral risk measures, which are linear programming (LP) computable when discrete random variables are represented by their realizations under specified scenarios, have been introduced and applied in portfolio optimization. Reward-risk ratio optimization with polyhedral risk measures can be transformed into LP formulations. These LP models typically contain a number of constraints proportional to the number of scenarios, and a number of variables (matrix columns) proportional to the total of the number of scenarios and the number of instruments. Real-life financial decisions are usually based on more advanced simulation models employed for scenario generation, where one may get several thousands of scenarios. This may lead to LP models with a huge number of variables and constraints, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by alternative models based on inverse ratio minimization that take advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method, guaranteeing easy solvability.
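
A sketch of the standard homogenization that turns the ratio objective into an LP-compatible form (a Charnes-Cooper-type substitution for a positively homogeneous risk measure; the paper's exact inverse-ratio formulation may differ):

```latex
% With portfolio weights x, mean returns \mu, risk-free rate r_0, and a
% positively homogeneous, LP-representable risk measure \varrho:
\max_{x}\ \frac{\mu^\top x - r_0}{\varrho(x)}
\quad\longleftrightarrow\quad
\min_{\tilde x,\,t}\ \varrho(\tilde x)
\ \ \text{s.t.}\ \ \mu^\top \tilde x - r_0\, t = 1,\quad
\mathbf{1}^\top \tilde x = t,\quad \tilde x \ge 0,\ t \ge 0,
% with the portfolio weights recovered as x = \tilde x / t. Minimizing
% \varrho(\tilde x) under these constraints minimizes the inverse ratio
% \varrho(x)/(\mu^\top x - r_0), and \varrho stays LP-representable,
% so the whole problem is an LP.
```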

20.
We consider the Neumann-Dirichlet domain decomposition method for the solution of linear elliptic boundary value problems. We study the following question: suppose that the auxiliary problems on the subdomains are not solved exactly, but only to a fixed, mesh-size-independent accuracy. Does the speed of convergence remain bounded independently of the mesh size? We show that the answer is no in general, but that mesh-size-independent convergence can be obtained if the accuracy requirement on the subsolvers becomes increasingly severe as the mesh size tends to zero.
