Found 20 similar documents; search took 171 ms
1.
A partially linear model is one in which the response is linear in one or more covariates but depends nonlinearly on the remaining covariates. Penalized least squares is one of the main methods for estimating the parametric and nonparametric components of a partially linear model, and for this method generalized cross-validation (GCV) provides a way to choose the smoothing parameter. The optimality of GCV for choosing the smoothing parameter in partially linear models, however, had not been established. This paper proves that GCV selection of the smoothing parameter is optimal when a partially linear model is estimated by penalized least squares. Simulations confirm that the proposed GCV selection works well; the simulation section also compares the performance of GCV with that of least squares cross-validation.
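As a minimal sketch of how the GCV criterion selects a smoothing parameter, the following uses a plain ridge-type penalized least squares fit rather than the paper's partially linear estimator; the design, true coefficients, and grid of λ values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]                  # only 3 active coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)

def gcv_score(lam):
    """GCV(lam) = (RSS/n) / (tr(I - A(lam))/n)^2 for the hat matrix A(lam)."""
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return (resid @ resid / n) / (np.trace(np.eye(n) - A) / n) ** 2

lams = np.logspace(-3, 3, 61)                # candidate smoothing parameters
lam_gcv = lams[int(np.argmin([gcv_score(l) for l in lams]))]
```

The selected `lam_gcv` minimizes the GCV score over the grid, trading off residual error against the effective degrees of freedom `tr(A(λ))`.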
2.
This paper uses a Monte Carlo method to obtain maximum likelihood estimates for generalized linear mixed models, and provides practical tools for assessing the convergence and precision of the parameter estimates. Simulation studies show that the fixed-effect estimates are unbiased, while the errors of the variance component estimates are close to earlier results. As an application, a Poisson model is used to obtain small-area estimates of breast cancer mortality.
3.
4.
This paper discusses linear models for two samples with incomplete data, in which the covariates are fully observed while the responses are missing at random (MAR). We impute the missing responses by inverse probability weighting, obtaining "complete" sample data for the two linear regression models, and on the basis of these "complete" samples we construct a log empirical likelihood ratio statistic for the difference of response quantiles. In contrast to previous results, we prove that under certain conditions the limiting distribution of this statistic is a standard chi-square distribution, which reduces the error introduced by estimating the weights and yields more accurate empirical likelihood confidence intervals for the quantile difference.
5.
6.
7.
In the ordinary linear regression model the explanatory variables are assumed to be measured without error. When all observed variables may contain errors, however, the usual statistical methods become biased. Using the idea of grouping the data, this paper proposes a grouped least squares method of parameter estimation that reduces the bias of the estimates. Moreover, when only bounds on the errors are known, grouped least squares yields consistent estimators for several measurement error models.
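A textbook illustration of the grouping idea (a classical Wald-type grouping estimator, not necessarily the paper's grouped least squares) is sketched below. The data generating process is invented, and the sketch assumes the grouping can be formed independently of the measurement errors, e.g. from a known design order:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x_true = rng.uniform(0, 10, size=n)
x_obs = x_true + rng.normal(size=n)              # covariate observed with error
y_obs = 1.0 + 2.0 * x_true + rng.normal(size=n)  # true slope is 2

# Naive OLS ignores the measurement error and attenuates the slope toward 0
ols_slope = np.polyfit(x_obs, y_obs, 1)[0]

# Grouping estimator: split the sample into two halves using an ordering that
# is independent of the measurement error (here the true design order, an
# assumption of this sketch), then take the slope between the group means.
order = np.argsort(x_true)
lo, hi = order[: n // 2], order[n // 2:]
group_slope = (y_obs[hi].mean() - y_obs[lo].mean()) / \
              (x_obs[hi].mean() - x_obs[lo].mean())
```

The measurement error in the grouped means averages out, so the ratio of mean differences recovers the slope without the attenuation bias of naive OLS.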
8.
For the random effects analysis-of-variance model with two replications per cell, this paper gives a consistent test of error variance homogeneity, that is, of $H_0:\sigma_1^2 =\sigma_2^2$ versus $H_1:\sigma_1^2\ne\sigma_2^2$. Explicit expressions for the test statistic and the rejection region are given for three common models, followed by two applied examples.
9.
For exponential family nonlinear mixed effects models, this paper derives several statistics that measure the influence of case deletion, based on the $Q$-function approach (Zhu Hongtu, 2001). The main idea is to treat the random effects as missing data and to use the EM algorithm to handle the conditional expectation of the complete-data log-likelihood. A real example shows that the method is effective.
10.
11.
Yu Deming, Mathematics in Practice and Theory (数学的实践与认识), 2008, 38(6): 73-79
Computing the time parameters of each activity in a construction project, namely the earliest start, latest start, earliest finish, latest finish, total float, and free float, is essential for preparing and optimizing construction schedules. This paper proposes a new method for computing these time parameters: taking activity finish times as decision variables, it obtains all of the time parameters by building and solving linear programming models. The modeling is straightforward, requires no network diagram, and can be solved with ordinary office software such as Excel. Simulations show that the time parameters computed by this method agree exactly with those obtained by standard network planning techniques.
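The LP idea can be sketched with `scipy.optimize.linprog` on a small hypothetical network (the activities, durations, and precedence links are made up): minimizing the sum of finish times yields the earliest finishes, maximizing it under the project deadline yields the latest finishes, and the floats follow by subtraction.

```python
import numpy as np
from scipy.optimize import linprog

durations = np.array([3.0, 2.0, 4.0, 2.0])   # hypothetical activities A, B, C, D
precedence = [(0, 2), (1, 2), (2, 3)]        # A->C, B->C, C->D
n = len(durations)

# Each precedence (i, j) gives F_i - F_j <= -d_j: activity j cannot finish
# before its predecessor i finishes plus j's own duration.
A_ub = np.zeros((len(precedence), n))
b_ub = np.zeros(len(precedence))
for k, (i, j) in enumerate(precedence):
    A_ub[k, i], A_ub[k, j] = 1.0, -1.0
    b_ub[k] = -durations[j]

# Earliest finish times: minimize the sum of finish times
early = linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub,
                bounds=[(d, None) for d in durations])
EF = early.x
T = EF.max()                                 # project duration

# Latest finish times: maximize the sum, capped by the project duration T
late = linprog(-np.ones(n), A_ub=A_ub, b_ub=b_ub,
               bounds=[(d, T) for d in durations])
LF = late.x

ES, LS = EF - durations, LF - durations      # earliest / latest starts
total_float = LF - EF
```

For this network the critical path is A→C→D, so activity B is the only one with positive total float.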
12.
Chi-Ming Tsou, Applied Mathematical Modelling, 2012, 36(12): 6154-6166
A data analysis method is proposed to derive a latent structure matrix from a sample covariance matrix. The matrix can be used to explore the linear latent effect between two sets of observed variables. Procedures are also proposed for estimating a set of dependent variables from a set of explanatory variables by using the latent structure matrix. The proposed method can assist researchers in improving the effectiveness of SEM models by exploring the latent structure between two sets of variables. In addition, a structure residual matrix can be derived as a by-product of the proposed method, with which researchers can conduct experimental procedures for variable combinations and selections to build various models for hypothesis testing. These capabilities can improve the effectiveness of traditional SEM methods in data property characterization and model hypothesis testing. Case studies are provided to demonstrate the derivation of the latent structure matrix step by step, and the latent structure estimation results are quite close to those of PLS regression. A structure coefficient index is suggested to explore the relationships among various combinations of variables and their effects on the variance of the latent structure.
13.
A general procedure for creating Markovian interest rate models is presented. The models created by this procedure automatically fit within the HJM framework and fit the initial term structure exactly; therefore they are arbitrage free. Because the models created by this procedure have only one state variable per factor, two- and even three-factor models can be computed efficiently, without resorting to Monte Carlo techniques. This computational efficiency makes calibration of the new models to market prices straightforward. Extended Hull-White, extended CIR, Black-Karasinski, Jamshidian's Brownian path independent models, and Flesaker and Hughston's rational log normal models are one-state-variable models which fit naturally within this theoretical framework. The 'separable' n-factor models of Cheyette and of Li, Ritchken, and Sankarasubramanian, which require n(n + 3)/2 state variables, are degenerate members of the new class of models with n(n + 3)/2 factors. The procedure is used to create a new class of one-factor models, the 'β-η models.' These models can match the implied volatility smiles of swaptions and caplets, and thus enable one to eliminate smile error. The β-η models are also exactly solvable in that their transition densities can be written explicitly. For these models accurate, but not exact, formulas are presented for caplet and swaption prices, and it is indicated how these closed-form expressions can be used to efficiently calibrate the models to market prices.
14.
The syntenic distance between two species is the minimum number of fusions, fissions, and translocations required to transform one genome into the other. The linear syntenic distance, a restricted form of this model, has been shown to be close to the syntenic distance. Both models are computationally difficult to compute and have resisted efficient approximation algorithms with non-trivial performance guarantees. In this paper, we prove that many useful properties of syntenic distance carry over to linear syntenic distance. We also give a reduction from the general linear synteny problem to the question of whether a given instance can be solved using the maximum possible number of translocations. Our main contribution is an algorithm exactly computing linear syntenic distance in nested instances of the problem. This is the first polynomial time algorithm exactly solving linear synteny for a non-trivial class of instances. It is based on a novel connection between the syntenic distance and a scheduling problem that has been studied in the operations research literature.
15.
On statistical models for regression diagnostics (cited 2 times: 0 self-citations, 2 citations by others)
In regression diagnostics, the case deletion model (CDM) and the mean shift outlier model (MSOM) are commonly used in practice. In this paper we show that the estimates of CDM and MSOM are equal in a wide class of statistical models, including the LSE, MLE, Bayesian estimate, and M-estimate in linear and nonlinear regression models; the MLE in generalized linear models and exponential family nonlinear models; the MLEs of transformation parameters of explanatory variables in Box-Cox regression models; and so on. Furthermore, we study some models in which the estimates for CDM and MSOM are not exactly equal but are approximately equal.
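The CDM/MSOM equality is easy to verify numerically in the least squares case: deleting a case gives the same regression coefficients as keeping every case and adding a mean shift indicator for the singled-out case. The simulated data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)
i = 7  # the case singled out for diagnosis

# Case deletion model (CDM): refit with case i removed
mask = np.arange(n) != i
beta_cdm, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)

# Mean shift outlier model (MSOM): keep all cases, add an indicator
# column that absorbs case i into its own shift parameter
e_i = (np.arange(n) == i).astype(float)
beta_msom, *_ = np.linalg.lstsq(np.column_stack([X, e_i]), y, rcond=None)
```

The first `p` entries of the MSOM fit coincide with the CDM coefficients, since the indicator column makes case i's residual exactly zero and removes its influence on the remaining normal equations.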
16.
A general approach for developing distribution free tests for general linear models based on simplicial depth is applied to multiple regression. The tests are based on the asymptotic distribution of the simplicial regression depth, which depends only on the distribution law of the vector product of the regressor variables. Based on this formula, the spectral decomposition and thus the asymptotic distribution are derived for multiple regression through the origin and multiple regression with Cauchy distributed explanatory variables. The errors may be heteroscedastic, and the concrete form of the error distribution need not be known. Moreover, the asymptotic distribution for multiple regression with intercept does not depend on the location and scale of the explanatory variables. A simulation study suggests that the tests can also be applied to normally distributed explanatory variables. An application to multiple regression for shape analysis of fishes demonstrates the applicability of the new tests, and in particular their outlier robustness.
17.
18.
We consider models for the covariance between two blocks of variables. Such models are often used in situations where latent variables are believed to be present. In this paper we characterize exactly the set of distributions given by a class of models with one-dimensional latent variables. These models relate two blocks of observed variables, modeling only the cross-covariance matrix. We describe the relation of this model to the singular value decomposition of the cross-covariance matrix. We show that, although the model is underidentified, useful information may be extracted. We further consider an alternative parameterization in which one latent variable is associated with each block, and we extend the result to models with r-dimensional latent variables.
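The connection between a one-dimensional latent variable and the SVD of the cross-covariance matrix can be sketched as follows (the loadings and noise levels are invented): a single latent factor makes the population cross-covariance rank one, which shows up as one dominant singular value in the sample.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# One-dimensional latent variable z driving both blocks of observed variables
z = rng.normal(size=n)
a = np.array([1.0, 0.5, -0.8])   # loadings of block X on z (assumed)
b = np.array([0.7, -0.3])        # loadings of block Y on z (assumed)
X = np.outer(z, a) + 0.3 * rng.normal(size=(n, 3))
Y = np.outer(z, b) + 0.3 * rng.normal(size=(n, 2))

# Sample cross-covariance between the two blocks
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
S_xy = Xc.T @ Yc / (n - 1)

# SVD of the cross-covariance: the one-dimensional latent structure appears
# as a single dominant singular value, with U[:, 0] and Vt[0] estimating the
# loading directions a and b up to scale and sign
U, s, Vt = np.linalg.svd(S_xy)
```

Because only the cross-covariance is modeled, the scale split between the two loading vectors is not identified, which mirrors the underidentification noted in the abstract.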
19.
Wlodzimierz Ogryczak, Michał Przyłuski, Tomasz Śliwiński, Mathematical Methods of Operations Research, 2017, 86(3): 625-653
In problems of portfolio selection the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return compared to the risk-free investment opportunities. In the classical Markowitz model the risk is measured by the variance, which corresponds to Sharpe ratio optimization and leads to quadratic optimization problems. Several polyhedral risk measures, which are linear programming (LP) computable in the case of discrete random variables represented by their realizations under specified scenarios, have been introduced and applied in portfolio optimization. The reward-risk ratio optimization with polyhedral risk measures can be transformed into LP formulations. The LP models typically contain a number of constraints proportional to the number of scenarios and a number of variables (matrix columns) proportional to the sum of the number of scenarios and the number of instruments. Real-life financial decisions are usually based on more advanced simulation models employed for scenario generation, where one may get several thousand scenarios. This may lead to LP models with a huge number of variables and constraints, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by alternative models based on inverse ratio minimization that take advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method, guaranteeing easy solvability.
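A minimal sketch of the inverse-ratio idea, using mean absolute deviation (MAD) as the polyhedral risk measure and a long-only portfolio. The scenario returns and risk-free rate are invented, and this is a generic Charnes-Cooper style linearization rather than the authors' exact formulation: minimizing risk subject to unit excess return, then rescaling, maximizes the reward-risk ratio.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical scenario returns: 4 equally likely scenarios, 3 instruments
R = np.array([[0.02, 0.05, -0.01],
              [0.01, -0.02, 0.03],
              [0.03, 0.04, 0.00],
              [0.00, 0.01, 0.02]])
T, m = R.shape
mu = R.mean(axis=0)
r_f = 0.005

# Both the excess return and the MAD are positively homogeneous, so the
# ratio can be inverted: minimize MAD(v) subject to unit excess return.
# Variables: scaled weights v (m) and absolute deviations d (T), all >= 0
# (linprog's default bounds), i.e. a long-only portfolio.
c = np.concatenate([np.zeros(m), np.full(T, 1.0 / T)])  # objective = MAD
dev = R - mu                                            # scenario deviations
A_ub = np.block([[dev, -np.eye(T)],                     # d_t >= +(r_t - mu)'v
                 [-dev, -np.eye(T)]])                   # d_t >= -(r_t - mu)'v
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([mu - r_f, np.zeros(T)])[None, :] # unit excess return
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)

v = res.x[:m]
w = v / v.sum()          # rescale to a fully invested portfolio
```

At the optimum the achieved reward-risk ratio equals `1 / res.fun`, and the model has only one structural equality row beyond the deviation constraints, illustrating how the inverse formulation keeps the instrument-side of the LP small.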
20.
Christoph Börgers, Numerische Mathematik, 1989, 55(2): 123-136
Summary: We consider the Neumann-Dirichlet domain decomposition method for the solution of linear elliptic boundary value problems, and study the following question. Suppose that the auxiliary problems on the subdomains are not solved exactly, but only with a fixed, mesh size independent accuracy. Does the speed of convergence remain bounded independently of the mesh size? We show that the answer is no in general, but that mesh size independent convergence can be obtained if the accuracy requirement on the subsolvers becomes increasingly severe as the mesh size tends to zero.