Similar Literature
20 similar articles found.
1.
To address information redundancy among the indicators of financial distress early-warning models and the limited predictive accuracy of Logistic regression, the L_(1/2)-norm penalty is used to optimize the Logistic regression model, yielding a new early-warning model for listed-company financial distress based on L_(1/2)-regularized Logistic regression. Taking manufacturing companies on the Shanghai and Shenzhen stock exchanges whose shares received Special Treatment (ST) and non-ST companies as the research sample, the new model's predictions are compared with those of traditional Logistic regression and L_1-regularized Logistic regression. The empirical results show that L_(1/2)-regularized Logistic regression not only performs parameter estimation and variable selection simultaneously but also attains higher predictive accuracy and better generalization, demonstrating the soundness and advantages of the new model for early-warning problems and providing a reference for subsequent research on financial distress early warning for listed companies.
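The penalized-logistic approach described above can be sketched with proximal gradient descent. The paper's L_(1/2) penalty requires a half-thresholding proximal operator; for a compact, self-contained illustration the sketch below uses the closely related L_1 (soft-thresholding) step instead, on a tiny synthetic dataset. The data, step size, and penalty weight are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_threshold(v, t):
    """Proximal operator of t*|v| (soft thresholding)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def penalized_logistic(X, y, lam=0.1, step=0.5, iters=500):
    """Proximal gradient for: mean logistic loss + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        scores = [sum(wj * xj for wj, xj in zip(w, row)) for row in X]
        resid = [sigmoid(s) - yi for s, yi in zip(scores, y)]
        grad = [sum(r * row[j] for r, row in zip(resid, X)) / n
                for j in range(p)]
        w = [soft_threshold(wj - step * gj, step * lam)
             for wj, gj in zip(w, grad)]
    return w

# Feature 0 carries the signal; feature 1 is uninformative and is
# driven exactly to zero by the penalty (the variable-selection effect).
X = [[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]]
y = [1.0, 1.0, 0.0, 0.0]
w = penalized_logistic(X, y)
```

Swapping the soft-thresholding step for a half-thresholding operator would give the L_(1/2) variant the abstract studies.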

2.
Provided that adequate learning accuracy is maintained, a neural network should use as few neurons as possible (structural sparsification), which lowers cost and improves robustness and generalization. This paper studies structural sparsification of feedforward neural networks by regularization. Besides the traditional L1 regularization for sparsification, it mainly employs the recently popular L1/2 regularization. Because the L1/2 regularizer is non-smooth and tends to make the iteration oscillate, a smoothing technique is applied in a small neighborhood of the non-smooth points to construct a smoothed L1/2 regularizer, aiming at higher sparsification efficiency than L1 regularization. The paper surveys the authors' recent work on L1/2 regularization for neural-network sparsification, covering BP feedforward networks, higher-order neural networks, dual parallel feedforward networks, and the Takagi-Sugeno fuzzy model.
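The smoothing idea can be made concrete: keep |w|^(1/2) away from the origin and replace it near 0 by a polynomial that joins smoothly. The quartic below matches the value and the first and second derivatives at ±a; it is one possible smoothing chosen here for illustration, and the exact polynomial used in the surveyed papers may differ.

```python
import math

def smoothed_sqrt_abs(w, a=0.1):
    """Smoothed |w|**0.5: exact for |w| >= a; inside (-a, a) a quartic
    p(w) = c0 + c2*w^2 + c4*w^4 matching value, first and second
    derivative at the junction points +/-a (an illustrative choice)."""
    if abs(w) >= a:
        return math.sqrt(abs(w))
    c0 = 21.0 * math.sqrt(a) / 32.0
    c2 = 7.0 / (16.0 * a ** 1.5)
    c4 = -3.0 / (32.0 * a ** 3.5)
    return c0 + c2 * w * w + c4 * w ** 4

def l_half_penalty(weights, a=0.1):
    """Smoothed L1/2 regularization term: sum_i |w_i|^{1/2}."""
    return sum(smoothed_sqrt_abs(w, a) for w in weights)
```

The smoothed penalty is differentiable everywhere, so ordinary gradient-based training of the network weights can be applied without the oscillation caused by the non-smooth point at zero.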

3.
This paper studies 1-norm regularization of the split feasibility problem. First, the 1-norm regularization method is used to transform the split feasibility problem into an unconstrained optimization problem. Next, several properties of the 1-norm regularized solution are discussed, and a proximal gradient algorithm for computing it is given. Finally, numerical experiments verify the feasibility and effectiveness of the algorithm.
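The proximal gradient scheme mentioned above alternates a gradient step on the smooth part with the proximal map of the 1-norm (soft thresholding). The paper applies it to an objective built from the split feasibility problem's constraint sets; as a self-contained stand-in, the sketch below applies the same iteration (ISTA) to the familiar 1-norm-regularized least-squares objective.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, applied componentwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# With A = I the minimizer is exactly the soft-thresholded data.
A = np.eye(2)
b = np.array([3.0, 0.5])
x = proximal_gradient(A, b, lam=1.0)         # -> approximately [2.0, 0.0]
```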

4.
1. Introduction. Consider the ridge regression (Tikhonov-regularized least squares) problem min_β ‖Xβ − y‖_2^2 + λ‖β‖_2^2, where X is an m×n complex matrix, β is an unknown n-dimensional vector, y is an m-dimensional complex vector, λ is the regularization parameter, and ‖·‖_2 denotes the Euclidean norm. Ridge regression fits ill-conditioned data better than ordinary least squares and is now widely used in data analysis, machine learning, power systems, and other fields. In recent years a series of randomized algorithms have been proposed for solving large-scale linear systems. Strohmer and Vershynin [1] proposed …
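The ridge solution has the closed form β = (X*X + λI)^{-1} X*y. The sketch below solves these normal equations directly on a one-feature real-valued example (the randomized methods the abstract turns to are aimed at the large-scale case, where a direct solve is impractical).

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate: solve (X^T X + lam*I) beta = X^T y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# One-feature example: the least-squares slope is 2, and the ridge
# penalty shrinks it toward zero: beta = 28 / (14 + lam).
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
b_ols = ridge(X, y, 0.0)     # -> [2.0] (ordinary least squares)
b_ridge = ridge(X, y, 14.0)  # -> [1.0] (shrunken estimate)
```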

5.
Regression models are usually fitted by traditional least squares estimation (LSE), but when the data exhibit non-normal features or outliers this yields non-robust parameter estimates. Compared with LSE, composite quantile regression (CQR) provides more robust estimates even under non-normal errors or anomalous data. Based on the composite asymmetric Laplace distribution (CALD), this paper proposes a weighted composite quantile regression (WCQR) method in a Bayesian framework. Regularization has been shown to handle high-dimensional sparse regression models effectively, performing variable selection and parameter estimation simultaneously. The paper combines the Bayesian LASSO regularization method with WCQR to fit linear regression models, builds a Bayesian LASSO regularized hierarchical model for WCQR, and derives the full conditional posterior distributions of all parameters for statistical inference. Finally, the proposed method is demonstrated through Monte Carlo simulations and real-data analysis.

6.
王倩  戴华 《计算数学》2013,35(2):195-204
Iterative minimal-residual methods are commonly used for solving large linear systems, with the iteration usually controlled by the residual norm. For ill-posed problems, however, a decreasing residual norm does not guarantee a decreasing error norm. For large-scale discrete ill-posed problems, this paper combines the generalized minimal error (GMERR) method with truncated singular value decomposition (TSVD) regularization, chooses the regularization parameter by the generalized cross-validation (GCV) criterion, and proposes a regularized GMERR method. Numerical results show that regularized GMERR outperforms regularized GMRES.
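Two of the ingredients, TSVD regularization and the GCV choice of truncation level, can be sketched directly (GMERR itself is a Krylov subspace method and is not reproduced here). The GCV form below, G(k) = ‖Ax_k − b‖² / (m − k)², and the toy matrix are illustrative assumptions.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """TSVD-regularized solution: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

def gcv_choose_k(A, b):
    """Pick k minimizing G(k) = ||A x_k - b||^2 / (m - k)^2."""
    m = A.shape[0]
    scores = {}
    for k in range(1, min(A.shape)):     # keep m - k > 0
        r = A @ tsvd_solve(A, b, k) - b
        scores[k] = (r @ r) / (m - k) ** 2
    return min(scores, key=scores.get)

A = np.diag([1.0, 0.5, 1e-10])           # severely ill-conditioned
b = np.array([1.0, 1.0, 1e-3])           # noise in the tiny-singular-value direction
k = gcv_choose_k(A, b)                   # GCV discards the unstable component
x_reg = tsvd_solve(A, b, k)              # -> approximately [1.0, 2.0, 0.0]
```

Solving the full system would amplify the 1e-3 noise by a factor of 1e10; truncation trades that instability for a small bias.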

7.
Sudoku is a hard-to-solve integer programming problem. Real-valued encoding removes the integrality constraints, converting the integer programming model into an l0-norm minimization model. Most existing algorithms solve a relaxed l1-norm minimization model and can solve only some Sudoku puzzles. This paper proves that for the special structure of Sudoku, the lq (0<q<1) norm minimization model is equivalent to the l0-norm minimization model, and uses an l1/2-SLP (sequenti...

8.
Extracting the correlation between two random vectors X and Y is an important problem, and kernel methods are used to extract nonlinear correlation. This paper obtains the maximal correlation by minimizing the variance Var[f(X) - g(Y)], which is called simultaneous regression, where f(X) and g(Y) are functions in two different reproducing kernel spaces. The estimator is obtained by regularized empirical variance minimization. To make the estimated functions sparse, the l_1 norm of the coefficients is used as the penalty term, and learning rates are established under some standard conditions. Simultaneous regression is closely related to canonical correlation analysis, sliced inverse regression, and related methods.

9.
This paper studies algorithms for identifying discontinuous parameters in linear parabolic equations. Because the original algorithm fails to converge on noisy observations, the paper, building on the piecewise-constant level-set method and exploiting the structure of the level-set function and the optimization process, modifies the total variation (TV) regularized minimization model and the constant-vector minimization model in the original Uzawa-type algorithm, and uses the split Bregman iteration's strengths in handling the TV norm to construct a new parameter identification scheme. Numerical results show that the new algorithm is faster, more accurate, and more robust to noise.

10.
The spatially varying-coefficient regression model is an important extension of the spatial linear regression model with wide practical application, yet variable selection for this model has remained open. Via a general M-type loss function, this paper unifies mean regression, median regression, quantile regression, and robust mean regression in a single framework and, based on B-spline approximation, proposes an adaptive group L_r (r >= 1) norm penalized M-estimator that performs variable selection and coefficient-function estimation simultaneously. The new method has several notable features: (1) it is robust to outliers and heavy-tailed distributions; (2) it accommodates heteroscedasticity, allowing the set of significant variables to vary with the quantile considered; (3) it balances efficiency and robustness of the estimator. Under weak conditions, the oracle property of the variable selection is established. Simulations and a real-data analysis verify the finite-sample performance of the proposed method.

11.
We study the large-sample properties of the penalized maximum likelihood estimator of a multivariate stochastic regression model with contemporaneously correlated data. The penalty is in terms of the square norm of some (vector) linear function of the regression coefficients. The model subsumes the so-called common transfer function model useful for extracting common signals in a panel of short time series. We show that, under mild regularity conditions, the penalized maximum likelihood estimator is consistent and asymptotically normal. The asymptotic bias of the regression coefficient estimator is also derived.

12.
In this paper, we consider the issue of variable selection in partial linear single-index models under the assumption that the vector of regression coefficients is sparse. We apply penalized splines to estimate the nonparametric function and the SCAD penalty to achieve sparse estimates of regression parameters in both the linear and single-index parts of the model. Under some mild conditions, it is shown that the penalized estimators have the oracle property, in the sense that they are asymptotically normal with the same mean and covariance they would have if the zero coefficients were known in advance. Our model admits a least-squares representation, so standard least-squares programming algorithms can be implemented without extra programming effort. In the meantime, parametric estimation, variable selection, and nonparametric estimation can be realized in one step, which greatly improves computational stability. The finite sample performance of the penalized estimators is evaluated through Monte Carlo studies and illustrated with a real data set.

13.
The penalized profile sampler for semiparametric inference is an extension of the profile sampler method [B.L. Lee, M.R. Kosorok, J.P. Fine, The profile sampler, Journal of the American Statistical Association 100 (2005) 960-969] obtained by profiling a penalized log-likelihood. The idea is to base inference on the posterior distribution obtained by multiplying a profiled penalized log-likelihood by a prior for the parametric component, where the profiling and penalization are applied to the nuisance parameter. Because the prior is not applied to the full likelihood, the method is not strictly Bayesian. A benefit of this approximately Bayesian method is that it circumvents the need to put a prior on the possibly infinite-dimensional nuisance components of the model. We investigate the first and second order frequentist performance of the penalized profile sampler, and demonstrate that the accuracy of the procedure can be adjusted by the size of the assigned smoothing parameter. The theoretical validity of the procedure is illustrated for two examples: a partly linear model with normal error for current status data and a semiparametric logistic regression model. Simulation studies are used to verify the theoretical results.

14.
We assessed the ability of several penalized regression methods for linear and logistic models to identify outcome-associated predictors and the impact of predictor selection on parameter inference for practical sample sizes. We studied effect estimates obtained directly from penalized methods (Algorithm 1), or by refitting selected predictors with standard regression (Algorithm 2). For linear models, penalized linear regression, elastic net, smoothly clipped absolute deviation (SCAD), least angle regression and LASSO had low false negative (FN) predictor selection rates but false positive (FP) rates above 20 % for all sample and effect sizes. Partial least squares regression had few FPs but many FNs. Only relaxo had low FP and FN rates. For logistic models, LASSO and penalized logistic regression had many FPs and few FNs for all sample and effect sizes. SCAD and adaptive logistic regression had low or moderate FP rates but many FNs. 95 % confidence interval coverage of predictors with null effects was approximately 100 % for Algorithm 1 for all methods, and 95 % for Algorithm 2 for large sample and effect sizes. Coverage was low only for penalized partial least squares (linear regression). For outcome-associated predictors, coverage was close to 95 % for Algorithm 2 for large sample and effect sizes for all methods except penalized partial least squares and penalized logistic regression. Coverage was sub-nominal for Algorithm 1. In conclusion, many methods performed comparably, and while Algorithm 2 is preferred to Algorithm 1 for estimation, it yields valid inference only for large effect and sample sizes.

15.
We propose a new binary classification and variable selection technique especially designed for high-dimensional predictors. Among many predictors, typically, only a small fraction of them have significant impact on prediction. In such a situation, more interpretable models with better prediction accuracy can be obtained by variable selection along with classification. By adding an ℓ1-type penalty to the loss function, common classification methods such as logistic regression or support vector machines (SVM) can perform variable selection. Existing penalized SVM methods all attempt to jointly solve all the parameters involved in the penalization problem altogether. When data dimension is very high, the joint optimization problem is very complex and involves a lot of memory allocation. In this article, we propose a new penalized forward search technique that can reduce high-dimensional optimization problems to one-dimensional optimization by iterating the selection steps. The new algorithm can be regarded as a forward selection version of the penalized SVM and its variants. The advantage of optimizing in one dimension is that the location of the optimum solution can be obtained with intelligent search by exploiting convexity and a piecewise linear or quadratic structure of the criterion function. In each step, the predictor that is most able to predict the outcome is chosen in the model. The search is then repeatedly used in an iterative fashion until convergence occurs. Comparison of our new classification rule with ℓ1-SVM and other common methods shows very promising performance, in that the proposed method leads to much leaner models without compromising misclassification rates, particularly for high-dimensional predictors.

16.
Recently, penalized regression methods have attracted much attention in the statistical literature. In this article, we argue that such methods can be improved for the purposes of prediction by utilizing model averaging ideas. We propose a new algorithm that combines penalized regression with model averaging for improved prediction. We also discuss the issue of model selection versus model averaging and propose a diagnostic based on the notion of generalized degrees of freedom. The proposed methods are studied using both simulated and real data.

17.
The ‘Signal plus Noise’ model for nonparametric regression can be extended to the case of observations taken at the vertices of a graph. This model includes many familiar regression problems. This article discusses the use of the edges of a graph to measure roughness in penalized regression. Distance between estimate and observation is measured at every vertex in the L2 norm, and roughness is penalized on every edge in the L1 norm. Thus the ideas of total variation penalization can be extended to a graph. The resulting minimization problem presents special computational challenges, so we describe a new and fast algorithm and demonstrate its use with examples.

The examples include image analysis, a simulation applicable to discrete spatial variation, and classification. In our examples, penalized regression improves upon kernel smoothing in terms of identifying local extreme values on planar graphs. In all examples we use fully automatic procedures for setting the smoothing parameters. Supplemental materials are available online.

18.
Research on a Genetic-Algorithm-Based Financial Distress Prediction Model for Listed Companies
Taking China's Shanghai and Shenzhen A-share listed companies as the research object, 376 manufacturing companies were selected, comprising 188 specially treated (ST) companies and 188 matched healthy companies. A financial distress prediction model was built from a genetic algorithm and 21 financial ratios and compared with Logistic regression and BP neural network models. The results show that the genetic algorithm produces a model that is free of statistical assumptions and achieves higher prediction accuracy.

19.
Financial Distress Early Warning for Listed Companies: A Data-Mining Study
刘旻  罗慧 《数理统计与管理》2004,23(3):51-56,68
Taking Chinese listed companies as the research object, 73 companies that received ST status during 1999-2001 and 73 normal companies were selected as the training sample, and 43 ST companies from 2002 and 43 normal companies as the test sample; 15 financial indicators of the two groups were analyzed for each of the two years preceding the distress. Three independent data-mining methods were applied: discriminant analysis, Logistic regression, and neural networks; the neural network predicted better than the other two methods. Finally, a hybrid model combining the strengths of these methods was built; its accuracy exceeds that of each individual method, improving the early-warning performance.

20.
It is known that the accuracy of the maximum likelihood-based covariance and precision matrix estimates can be improved by penalized log-likelihood estimation. In this article, we propose a ridge-type operator for the precision matrix estimation, ROPE for short, to maximize a penalized likelihood function where the Frobenius norm is used as the penalty function. We show that there is an explicit closed form representation of a shrinkage estimator for the precision matrix when using a penalized log-likelihood, which is analogous to ridge regression in a regression context. The performance of the proposed method is illustrated by a simulation study and real data applications. Computer code used in the example analyses as well as other supplementary materials for this article are available online.
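A sketch of the ridge-type closed form the abstract refers to: maximizing log det(Θ) − tr(SΘ) − (λ/2)‖Θ‖_F² gives the stationarity condition Θ⁻¹ = S + λΘ, which can be solved eigenvalue-by-eigenvalue of the sample covariance S. The penalty scaling here is an assumption and may differ from the paper's exact formulation.

```python
import numpy as np

def rope(S, lam):
    """Ridge-type precision matrix estimate: for each eigenvalue s of S,
    solve lam*theta^2 + s*theta - 1 = 0 and take the positive root."""
    vals, vecs = np.linalg.eigh(S)
    theta = (-vals + np.sqrt(vals ** 2 + 4.0 * lam)) / (2.0 * lam)
    return (vecs * theta) @ vecs.T   # V diag(theta) V^T

S = np.array([[2.0, 0.3], [0.3, 1.0]])  # a sample covariance matrix
lam = 0.5
T = rope(S, lam)
# T is symmetric positive definite and satisfies S@T + lam*T@T = I.
```

Unlike inverting S directly, this estimate exists even when S is singular (e.g. more variables than observations), which is the setting where penalization pays off.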
