1.
To ease the pressure on China's wood-pulp supply and meet the practical need for pulping with mixed raw materials, this paper studies rapid near-infrared (NIR) spectroscopic analysis of mixed pulping feedstock. NIR spectra were collected from 145 Eucalyptus urophylla × E. grandis / Acacia mangium mixed samples with artificially controlled eucalyptus content, and their holocellulose, pentosan, and Klason lignin contents were measured by conventional methods. After preprocessing the raw spectra with a first derivative and the standard normal variate transform, analysis models for eucalyptus, holocellulose, pentosan, and Klason lignin content were built with partial least squares (PLS), support vector machines (SVM), artificial neural networks (ANN), and the LASSO algorithm. The LASSO models for eucalyptus and holocellulose content performed best, with root-mean-square errors of prediction (RMSEP) of 1.80% and 0.60%, and absolute deviations (AD) of -3.03% to 3.17% and -1.03% to 0.98%, respectively; their performance suffices for fairly accurate rapid analysis. PLS gave the best pentosan model (RMSEP 0.75%, AD -1.26% to 1.33%), and SVM the best Klason lignin model (RMSEP 0.48%, AD -0.82% to 0.86%); both are suitable for non-precision analysis. This work makes rapid analysis of mixed pulping feedstock feasible and confirms the applicability of the LASSO algorithm.
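As a rough illustration of the modeling step, the sketch below fits a LASSO model to synthetic "spectra" and reports an RMSEP. The data, dimensions, and penalty strength are invented stand-ins, not the paper's NIR measurements:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 145 mixed-sample NIR spectra: 145 samples x 200 wavelengths
n, p = 145, 200
X = rng.normal(size=(n, p))
# Pretend the target content depends on a few informative wavelengths plus noise
true_coef = np.zeros(p)
true_coef[[10, 50, 120]] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.05).fit(X_tr, y_tr)

# Root-mean-square error of prediction on the held-out samples
rmsep = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(round(rmsep, 3), int(np.sum(model.coef_ != 0)))
```

The L1 penalty drives most wavelength coefficients to exactly zero, which is why LASSO doubles as a feature selector on full-spectrum data.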
2.
The importance of variable selection and regularization procedures in multiple regression analysis cannot be overemphasized. These procedures are adversely affected by predictor-space data aberrations as well as outliers in the response space. To counter the latter, robust statistical procedures such as quantile regression, which generalizes the well-known least absolute deviation procedure to all quantile levels, have been proposed in the literature. Quantile regression is robust to response-variable outliers but very susceptible to outliers in the predictor space (high leverage points), which may alter the eigenstructure of the predictor matrix. High leverage points that alter the eigenstructure of the predictor matrix by creating or hiding collinearity are referred to as collinearity-influential points. In this paper, we suggest generalizing the penalized weighted least absolute deviation to all quantile levels, i.e., to penalized weighted quantile regression using the ridge, LASSO, and elastic net penalties, as a remedy against collinearity-influential points and high leverage points in general. To maintain robustness, we make use of very robust weights based on the computationally intensive high-breakdown minimum covariance determinant (MCD). Simulations and applications to well-known data sets from the literature show an improvement in variable selection and regularization due to the robust weighting formulation.
3.
Convex clustering, a convex relaxation of k-means clustering and hierarchical clustering, has drawn recent attention because it addresses the instability of traditional nonconvex clustering methods. Although its computational and statistical properties have been studied, the performance of convex clustering has not yet been investigated in the high-dimensional clustering scenario, where the data contain a large number of features, many of which carry no information about the clustering structure. In this article, we demonstrate that the performance of convex clustering can be distorted when uninformative features are included in the clustering. To overcome this, we introduce a new clustering method, referred to as Sparse Convex Clustering, to simultaneously cluster observations and conduct feature selection. The key idea is to formulate convex clustering as a regularization problem with an adaptive group-lasso penalty term on the cluster centers. To optimally balance the trade-off between cluster fitting and sparsity, a tuning criterion based on clustering stability is developed. Theoretically, we obtain a finite-sample error bound for our estimator and further establish its variable selection consistency. The effectiveness of the proposed method is examined through a variety of numerical experiments and a real data application. Supplementary material for this article is available online.
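The group-lasso penalty on cluster centers removes whole feature columns at once. The snippet below shows only its proximal operator (column-wise group soft-thresholding), a small numpy sketch rather than the full sparse convex clustering solver:

```python
import numpy as np

def group_soft_threshold(U, lam):
    """Column-wise group-lasso prox: shrink each feature column of the
    cluster-center matrix U toward zero; columns whose norm falls below
    lam are zeroed exactly, removing that feature from the clustering."""
    out = np.zeros_like(U)
    norms = np.linalg.norm(U, axis=0)
    keep = norms > lam
    out[:, keep] = U[:, keep] * (1 - lam / norms[keep])
    return out

# Toy center matrix: column 0 is informative, column 1 is uninformative noise
U = np.array([[3.0, 0.1],
              [2.9, -0.1],
              [-3.0, 0.05]])
V = group_soft_threshold(U, 0.5)
print(V)
```

The noise column (norm 0.15 < 0.5) is zeroed exactly, while the informative column is only mildly shrunk; this is the mechanism by which the method performs feature selection.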
4.
We develop an approach to tuning penalized regression variable selection methods by calculating the sparsest estimator contained in a confidence region of a specified level. Because confidence intervals and regions are widely understood, tuning penalized regression methods this way is intuitive and accessible to scientists and practitioners. More importantly, our work shows that tuning to a fixed confidence level often performs better than tuning via the common methods based on the Akaike information criterion (AIC), the Bayesian information criterion (BIC), or cross-validation (CV) over a wide range of sample sizes and levels of sparsity. Additionally, we prove that tuning with a sequence of confidence levels converging to one yields asymptotic selection consistency, and that a simple two-stage procedure achieves an oracle property. The confidence-region-based tuning parameter is easily calculated from the output of existing penalized regression packages. Our work also shows how to map any penalty parameter to a corresponding confidence coefficient. This mapping facilitates comparisons of tuning parameter selection methods such as AIC, BIC, and CV, and reveals that the resulting tuning parameters correspond to confidence levels that are extremely low and can vary greatly across datasets. Supplemental materials for the article are available online.
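One way to realize this idea, sketched under the assumption of an F-statistic-based confidence region around the OLS fit: walk the LASSO path from the sparsest end and stop at the first fit that lands inside the region. The data and region construction are illustrative, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = [3.0, -2.0]
y = X @ beta + rng.normal(size=n)

# (1 - alpha) confidence region around the OLS fit via the classical F pivot:
# { b : RSS(b) <= RSS_ols * (1 + p/(n-p) * F_{p, n-p; level}) }
bhat = np.linalg.lstsq(X, y, rcond=None)[0]
rss_ols = np.sum((y - X @ bhat) ** 2)
level = 0.95
bound = rss_ols * (1 + p / (n - p) * stats.f.ppf(level, p, n - p))

# alphas come back in decreasing order: start sparse, stop at the first
# LASSO fit whose residual sum of squares falls inside the region
alphas, coefs, _ = lasso_path(X, y)
for a, b in zip(alphas, coefs.T):
    if np.sum((y - X @ b) ** 2) <= bound:
        break
print(sorted(np.flatnonzero(b)))
```

The selected model is the sparsest point on the path that the 95% region cannot reject, so the confidence level plays the role usually given to AIC/BIC/CV.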
5.
Conventional analysis using quantile regression typically fits the regression model at different quantiles separately. However, when the quantile coefficients share some common feature, jointly modeling multiple quantiles to accommodate that commonality often leads to more efficient estimation. One example of such a common feature is a predictor with a constant effect over one region of quantile levels but varying effects elsewhere. To automatically perform estimation and detection of this interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods shrink the slopes toward a constant and thus improve estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods yield estimates that are as efficient as, or more efficient than, standard quantile regression estimation in finite samples. Supplementary materials for the article are available online.
6.
Near-infrared (NIR) spectroscopy is a nondestructive method for qualitative or quantitative analysis based on a sample's characteristic spectral data, and the completeness and representativeness of those features determine model performance. Existing approaches, however, only select features over spectral subintervals, yielding models with poor stability that are hard to optimize further. To extract high-dimensional features across the full NIR spectral range and effectively improve the accuracy and stability of qualitative NIR models, a spectral feature selection method based on the least absolute shrinkage and selection operator (LASSO) is proposed and applied to clustering of Yunnan matsutake, a distinctive high-value Chinese export. The effectiveness of the method for high-dimensional spectral feature selection is discussed, and the prediction accuracy and stability of matsutake-authentication and edible-fungus classification models built on LASSO-selected variables are compared with those built on principal component analysis (PCA) dimensionality reduction. Fresh Yunnan matsutake is easily recognized by its distinctive shape, but sliced dried matsutake loses that shape, and adulteration of dried matsutake on the domestic market persists. A total of 166 dried samples of four species — matsutake, king oyster mushroom (Pleurotus eryngii), Catathelasma ventricosum, and Agaricus blazei — were analyzed. A NIRQuest512 spectrometer covering 900-1 700 nm produced a 166 × 512 raw spectral matrix; after removing outliers, the spectra were preprocessed with the standard normal variate transform. On this basis, LASSO selected feature variables over the full spectral range, and the Kennard-Stone method combined with a typical linear (KNN) and a nonlinear (BP neural network) modeling algorithm was used to build the matsutake-authentication and edible-fungus classification models, which were then blind-tested; the differences between LASSO and PCA were analyzed, and Monte Carlo experiments assessed model stability. The results show that both models based on LASSO feature selection outperform the PCA versions in prediction accuracy and stability. On the raw spectral data, authentication accuracy was 69.57% (BP) and 60.87% (KNN) and classification accuracy 67.39% (BP) and 65.22% (KNN); with LASSO-selected features, authentication accuracy reached 100% (BP) and 78.26% (KNN) and classification accuracy 89.13% (BP) and 80.43% (KNN). Ten Monte Carlo runs of the two models averaged 99.93% and 97.22%. Compared with dimensionality-reduction algorithms such as PCA, LASSO achieves feature selection and dimensionality reduction over the full spectral range, effectively improves the predictive performance of qualitative NIR models, and offers a new feature-screening method for NIR analysis.
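A toy version of this pipeline, with synthetic stand-ins for the 166 × 512 spectra, L1-penalized logistic regression as the LASSO-style wavelength selector, and KNN as the classifier (the BP network and the Kennard-Stone split are omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
# Hypothetical stand-in for the 166 x 512 NIR matrix: two classes that
# differ only at a couple of wavelengths
n, p = 166, 512
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)
X[y == 1, 100] += 3.0
X[y == 1, 300] -= 3.0

# L1-penalized logistic regression as the LASSO-style wavelength selector
sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
chosen = np.flatnonzero(sel.coef_[0])

# KNN classifier trained only on the selected wavelengths
X_tr, X_te, y_tr, y_te = train_test_split(X[:, chosen], y,
                                          random_state=0, stratify=y)
acc = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr).score(X_te, y_te)
print(len(chosen), round(acc, 2))
```

Because distance-based classifiers such as KNN degrade badly in 512 noisy dimensions, discarding the uninformative wavelengths first is what makes the downstream classification accurate.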
7.
Near-infrared (NIR) spectroscopy is widely used in production processes and quality inspection of food, pharmaceuticals, and other products, offering fast measurement at low cost with no sample pretreatment or damage. However, full-spectrum data are high-dimensional and redundant, and using them directly for modeling yields complex, unstable models. Synergy interval PLS (siPLS) is the most common dimensionality-reduction method for spectral data but copes poorly with collinearity, while LASSO is a relatively new method that can be unstable on small samples. To address the problems of siPLS and LASSO on NIR spectral data, an siPLS-LASSO wavelength-selection method is proposed and applied to monitoring pH during solid-state fermentation of straw feed protein. The method first uses the siPLS algorithm to select the best combination of spectral subintervals, then applies LASSO within the selected intervals to choose feature wavelengths, and finally builds a PLS calibration model on them. The siPLS-LASSO method was also compared with other conventional wavelength-selection approaches. The results show that the PLS model built on the 33 feature wavelengths selected by siPLS-LASSO predicts best, with a root-mean-square error of prediction (RMSEP) of 0.071 1 and a correlation coefficient (Rp) of 0.980 8; the proposed siPLS-LASSO method selects effective wavelengths and improves the model's predictive performance.
8.
Monitoring unknown gas components with infrared spectroscopy requires qualitative identification of the components. LASSO variable selection based on the multiple linear regression model is widely used in data analysis. This paper brings the LASSO method into infrared spectral analysis, proposing a qualitative identification method that combines LASSO variable selection with cyclic linear least-squares (LCLS) analysis, and validates it experimentally. Fourier-transform infrared (FTIR) absorbance spectra of six single components — CO, C2H4, NH3, C3H8, C4H10, and C6H14 — and of a C2H4/NH3 mixture were collected. Combined with an in-house spectral library, LASSO was first used for a preliminary qualitative analysis of the measured spectra, and the LCLS method then removed interfering components. The results show that LASSO combined with LCLS effectively identifies the target components in a spectrum and can remove most interferents even in heavily interfered spectral bands.
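The LASSO screening step can be illustrated as a sparse nonnegative decomposition of a mixture spectrum against a reference library; the spectra below are random stand-ins for real FTIR absorbance data, and the LCLS refinement step is omitted:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
# Hypothetical reference library: 6 single-component absorbance spectra (columns)
wavenumbers, n_ref = 400, 6
library = np.abs(rng.normal(size=(wavenumbers, n_ref)))
names = ["CO", "C2H4", "NH3", "C3H8", "C4H10", "C6H14"]

# Simulated mixture spectrum: by Beer-Lambert additivity, a weighted sum of
# the C2H4 and NH3 reference spectra plus measurement noise
mix = 0.8 * library[:, 1] + 0.5 * library[:, 2] + 0.01 * rng.normal(size=wavenumbers)

# LASSO screen: nonzero (nonnegative) coefficients flag candidate components
fit = Lasso(alpha=0.01, positive=True).fit(library, mix)
candidates = [names[i] for i in np.flatnonzero(fit.coef_)]
print(candidates)
```

The L1 penalty suppresses the absent components' coefficients to exactly zero, so the surviving nonzero entries directly name the candidate gases to pass on to the least-squares refinement.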
9.
The censored regression model is an important model with wide application in econometrics, yet its variable-selection problem has received little attention in the current literature. This paper proposes a LASSO-type variable selection and estimation method, termed the diversified penalized $L_1$ constraint method, abbreviated DPLC. Large-sample asymptotic properties of the nonzero coefficient estimates are also established. Finally, extensive simulation studies show that the DPLC method matches the usual best-subset selection method in both variable selection and estimation.
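A minimal sketch of LASSO-type estimation for a censored (Tobit-style) regression, not the paper's DPLC procedure: maximize an L1-penalized Tobit likelihood with a derivative-free optimizer, with the error scale σ fixed at its true value for brevity:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
n, p = 300, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.0, 0.0, 0.0, 0.0])
ystar = X @ beta_true + rng.normal(size=n)
y = np.maximum(ystar, 0.0)  # observations left-censored at zero
cens = y == 0

def penalized_tobit_nll(beta, lam=0.05, sigma=1.0):
    """Average Tobit negative log-likelihood plus an L1 (LASSO-type) penalty."""
    xb = X @ beta
    ll = stats.norm.logpdf(y[~cens], xb[~cens], sigma).sum()   # uncensored part
    ll += stats.norm.logcdf(-xb[cens] / sigma).sum()           # censored part
    return -ll / n + lam * np.abs(beta).sum()

# Powell is derivative-free, so the kink of the L1 penalty is not a problem
res = optimize.minimize(penalized_tobit_nll, np.zeros(p), method="Powell")
print(np.round(res.x, 2))
```

The penalty shrinks the four truly zero coefficients toward zero while leaving the two active coefficients near their true values; a specialized solver (or the paper's DPLC construction) would additionally set the inactive coefficients to exactly zero.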
10.
To avoid overfitting, the adaptive LASSO variable-selection method is introduced into the binary-choice quantile regression model. A Gibbs sampling algorithm is constructed via a Bayesian approach, and the identification constraint ‖β‖=1, which does not affect predictions, is imposed during sampling to stabilize the draws. Numerical simulations show that the improved model has better parameter-estimation efficiency, variable-selection ability, and classification performance.