Similar Articles
18 similar articles found
1.
Spike noise in the k-space of magnetic resonance (MR) images severely degrades image quality. Building on the conjugate-gradient reconstruction method for compressed-sensing MRI, this paper proposes a new method that exploits the sparsity of MR images to repair spike noise. Conventional conjugate-gradient reconstruction iterates in the wavelet domain, which is not well suited to removing spike noise in k-space. The paper first presents a compressed-sensing k-space reconstruction algorithm that is equivalent to wavelet-domain reconstruction. On this basis, it proposes a partial k-space reconstruction algorithm that repairs spike noise well: during the iterations, image sparsity serves as the constraint, and only the data in the regions covered by spike noise are modified, while data at all other locations remain unchanged. Compared with conventional interpolation and conjugate-gradient algorithms, the proposed algorithm repairs k-space spike noise better, reduces image artifacts, and relaxes the requirement on accurate localization of the spikes.
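The partial-update idea above can be sketched as follows. This is a simplified illustration, not the paper's conjugate-gradient algorithm: sparsity is enforced by keeping only the largest image-domain coefficients (a stand-in for the wavelet constraint), and the function name and parameters are hypothetical.

```python
import numpy as np

def repair_spikes(kspace, spike_mask, n_iter=50, keep_frac=0.1):
    """Repair spike-corrupted k-space entries using image sparsity.

    Only entries flagged in spike_mask are re-estimated; all other
    measured k-space data stay fixed, mirroring the paper's partial
    k-space reconstruction."""
    k = kspace.copy()
    k[spike_mask] = 0.0                      # zero the corrupted entries
    n_keep = int(keep_frac * k.size)
    for _ in range(n_iter):
        img = np.fft.ifft2(k)                # back to the image domain
        # sparsity prior: keep only the largest-magnitude coefficients
        thresh = np.sort(np.abs(img).ravel())[-n_keep]
        img[np.abs(img) < thresh] = 0.0
        k_est = np.fft.fft2(img)             # forward to k-space
        k[spike_mask] = k_est[spike_mask]    # update ONLY spike locations
    return k
```

Because measured data outside the mask are never touched, a mildly inaccurate spike map only enlarges the set of re-estimated entries, which is why the method tolerates imprecise spike localization.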

2.
The choice of dictionary affects the reconstruction quality of sparse-coding-based image super-resolution models. This paper proposes a dictionary learning algorithm based on collaborative sparse representation. In the training stage, sample image patches are partitioned into clusters by the K-Means algorithm; a collaborative sparse dictionary learning model with a simultaneous sparsity constraint trains a high- and low-resolution dictionary pair for each cluster; and an L2-norm sparse coding model turns the low-to-high-resolution mapping of input patches during super-resolution reconstruction into a simple linear mapping, with the corresponding mapping matrix computed for each cluster. In the reconstruction stage, each input patch selects the mapping matrix of the cluster whose structure is most similar to its own to obtain the reconstructed high-resolution image. Results show that the proposed algorithm achieves better super-resolution quality by improving the dictionary learning process.
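The per-cluster "linear mapping" step can be made concrete with a closed-form ridge fit from low-resolution patch vectors to high-resolution ones. This is a minimal sketch under assumed data shapes (columns are vectorized patches, `labels` come from K-Means); the function name is hypothetical and the paper's collaborative dictionary training is not reproduced here.

```python
import numpy as np

def learn_cluster_mappings(lr_patches, hr_patches, labels, n_clusters, lam=0.1):
    """For each cluster k, fit a linear map M_k sending low-resolution
    patch vectors to high-resolution ones:
        M_k = Y_k X_k^T (X_k X_k^T + lam I)^{-1}   (ridge solution)
    so that reconstruction reduces to hr ~= M_k @ lr."""
    d = lr_patches.shape[0]
    maps = []
    for k in range(n_clusters):
        X = lr_patches[:, labels == k]   # d x n_k low-res training patches
        Y = hr_patches[:, labels == k]   # D x n_k high-res counterparts
        M = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
        maps.append(M)
    return maps
```

At test time, a patch is assigned to its nearest cluster and multiplied by that cluster's matrix, which is why reconstruction is fast compared with solving a sparse-coding problem per patch.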

3.
Compressed sensing (CS) is an emerging technique that recovers the original signal from measurements acquired far below the Nyquist sampling rate. CS imaging greatly accelerates cardiac MRI acquisition. Existing methods mainly exploit the temporal correlation of dynamic images and the periodic motion of the heart, e.g., applying a Fourier transform along the temporal dimension or computing the difference between each frame and a reference frame to obtain sparse data that satisfy the requirements of CS reconstruction. This paper proposes a selective bidirectional sequential CS reconstruction algorithm. It exploits the smaller differences between adjacent frames to obtain sparser difference data and, using the periodicity of dynamic images, selects the better of the forward and backward temporal reconstruction directions with the integral of the objective function as the criterion, reducing image artifacts and noise. The selection scheme picks the better of the two sequential reconstructions without increasing reconstruction time. Experiments on cardiac MRI data, compared with conventional CS, the reference-frame-difference method, and keyhole imaging, show substantial improvement both visually and statistically.

4.
Infrared image clustering segmentation combining sparse coding and spatial constraints
宋长新*, 马克, 秦川, 肖鹏. 《物理学报》(Acta Physica Sinica), 2013, 62(4): 040702.
This paper proposes a new infrared image clustering segmentation algorithm combining sparse coding and spatial constraints, fusing a clustering algorithm into sparse coding and extending the traditional K-means-based image segmentation method. Clustering segmentation combined with sparse coding effectively fuses local image information and exploits the intrinsic correlation between pixels, but it suffers from over-segmentation and from pixels that are hard to classify. To address this, atom clustering is introduced into the dictionary learning process, which helps reduce the number of classes to which dictionary atoms belong and prevents over-segmentation; considering that a pixel and its neighbors tend to share the same class, a spatial class-consistency constraint is introduced, and an alternating optimization algorithm is given. The dictionary, sparse coefficients, cluster centers, and membership degrees are learned jointly, and the sparse coding coefficients are combined with the atoms' membership to the cluster centers to construct a pixel membership measure that determines each pixel's class. Experimental results show that the method effectively improves the segmentation of important regions in infrared images and is robust. Keywords: image segmentation; sparse coding; clustering; spatial constraints

5.
Compressed sensing theory is widely used for fast MRI, reconstructing high-quality MR images from only a small fraction of k-space samples. CS-MRI models image reconstruction as minimizing a linear combination of a data-fidelity term, a sparsity prior, and a total-variation term, significantly reducing scan time. Sparse representation is a key assumption of compressed sensing, and reconstruction quality depends heavily on the sparsifying transform. This paper adopts the dual-tree complex wavelet transform jointly with wavelet-tree sparsity as the sparsifying transform for CS-MRI and proposes a CS reconstruction algorithm for low-field MR images based on this combination. Experiments show that the proposed algorithm offers advantages on some objective image-quality metrics for MR images.

6.
王平, 李娜, 杜炜, 罗汉武, 崔士刚. 《声学学报》(Acta Acustica), 2017, 42(6): 713-720.
Common sparse dictionaries lack specificity, perform poorly in synthetic-aperture medical ultrasound imaging, and struggle to preserve reconstructed image quality at low compression ratios. This paper designs an efficient sparse dictionary to address this problem. Since an ultrasound echo is a superposition of differently delayed and attenuated copies of the transmitted pulse, the dictionary is constructed with the transmit pulse as the basis function; echo signals are highly sparse in the transform domain defined by this dictionary, and in theory the sparsity of their representation equals the number of reflected echoes received by the ultrasound array element. Field II simulations on simple point targets and complex targets show that, under the same reconstruction algorithm and compression ratio, the mean absolute reconstruction error of this dictionary is markedly smaller than that of common dictionaries: several times smaller than that of the DWT, and tens of times smaller than those of the DFT and DCT, so echoes can be recovered equally well at a lower compression ratio. Finally, the algorithm is tested on real data acquired from a phantom, and the experimental results are consistent with the simulations. A compressed-sensing algorithm based on this dictionary can further reduce the data storage required for synthetic-aperture imaging and lower system complexity.
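The construction above can be sketched directly: each dictionary atom is the transmit pulse placed at one candidate delay, so an echo formed by two reflectors has a representation with exactly two nonzero coefficients. This is a toy illustration with a hypothetical pulse and function name, not the Field II setup of the paper.

```python
import numpy as np

def pulse_dictionary(pulse, n_samples):
    """Each atom is the transmit pulse at one possible delay
    (zero-padded shift), with atoms normalized to unit norm."""
    D = np.zeros((n_samples, n_samples))
    for tau in range(n_samples):
        end = min(n_samples, tau + len(pulse))
        D[tau:end, tau] = pulse[: end - tau]
    return D / np.linalg.norm(D, axis=0, keepdims=True)
```

Because an echo is, by the physical model, a sum of delayed and attenuated pulses, its coefficient vector in this dictionary is as sparse as the number of reflectors, which is what makes low compression ratios feasible.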

7.
何阳, 黄玮, 王新华, 郝建坤. 《中国光学》(Chinese Optics), 2016, 9(5): 532-539.
To address the long runtime of dictionary-learning-based super-resolution algorithms, this paper proposes an image super-resolution reconstruction method based on a sparse-threshold model. First, joint dictionary theory is combined with a patch sparse-threshold method to train a pair of high- and low-resolution overcomplete image dictionaries. Next, image feature patches are sparsely represented with a sparse-threshold OMP algorithm. Then an initial super-resolved image is reconstructed from the high-resolution dictionary. Finally, an improved iterative back-projection algorithm globally optimizes the initial super-resolved image to further improve reconstruction quality. Experiments show an average peak signal-to-noise ratio (PSNR) of 30.1 dB, an average structural similarity (SSIM) of 0.9379, and an average runtime of 10.2 s, effectively speeding up super-resolution reconstruction while improving the quality of the reconstructed high-resolution images.
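The final refinement stage can be illustrated with the standard iterative back-projection loop (not the paper's improved variant), assuming a known degradation operator; here a simple 2x2 block-average stands in for the true blur-plus-decimation model, and all names are illustrative.

```python
import numpy as np

def downsample(img, s=2):
    """Toy degradation model: s-by-s block averaging."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Nearest-neighbour back-projection of a low-res residual."""
    return np.kron(img, np.ones((s, s)))

def iterative_back_projection(sr_init, lr, n_iter=30, step=1.0):
    """Refine an initial super-resolved image so that its simulated
    low-resolution version matches the observed low-res input."""
    sr = sr_init.copy()
    for _ in range(n_iter):
        residual = lr - downsample(sr)      # low-res consistency error
        sr += step * upsample(residual)     # project the error back
    return sr
```

The loop only enforces consistency with the observation model; the dictionary-based stage supplies the high-frequency detail that back-projection alone cannot recover.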

8.
Magnetic resonance imaging (MRI) is noninvasive and harmless, offers multiple contrasts, and can image arbitrary planes, making it particularly well suited to cardiac imaging, but long scan times limit its clinical application. To shorten the overly long breath-hold scans of cardiac cine MRI, this paper proposes a multi-fold accelerated cine acquisition and reconstruction method based on simultaneous multi-slice excitation. The method combines phase-modulated multi-slice excitation (CAIPIRINHA) with parallel acceleration (PPA), applies them to a segmented cardiac cine sequence, achieves four-fold acceleration along the phase-encoding and slice directions, and reconstructs the images with a modified SENSE/GRAPPA algorithm. Experiments were performed on both a water phantom and human subjects, comparing images from the accelerated and unaccelerated sequences. The results validate the reconstruction algorithm and show that the method cuts scan time several-fold while preserving image quality and accurate measurement of cardiac function.

9.
A dense light-field reconstruction algorithm based on dictionary learning
Camera arrays are an important means of capturing the light-field information of targets in space. Acquiring high angular-resolution light fields with large, dense camera arrays increases sampling difficulty and equipment cost, and the synchronization and transmission demands of the resulting massive data also limit the sampling scale. To achieve dense reconstruction from sparse light-field sampling, this paper analyzes the correlation and redundancy of the spatial and angular information across multi-view images of the same scene based on sparse light-field data, builds an effective light-field dictionary-learning and sparse-coding model, and, using the constraints among sparse-code elements, builds a sparse-code recovery model for virtual-view images, proposing a transform-domain sparse-code recovery method whose effectiveness is validated by dense-reconstruction experiments on multiple scenes. Experimental results show that the method recovers occlusions, shadows, and complex illumination changes with high quality and can be used for dense reconstruction of sparsely sampled light fields of complex scenes. This work realizes dense reconstruction for linearly sampled sparse light fields; future work will address nonlinearly sampled sparse light fields to advance the practical engineering use of light-field imaging.

10.
In compressed-sensing MRI (CS-MRI), the random undersampling matrix is closely tied to reconstructed image quality. The matrix is typically chosen by computing the point spread function (PSF) and using the maximum possible aliasing as the evaluation criterion, but the maximum only reflects the worst case of the artifacts. This paper introduces two new statistical criteria, the mean value (MV) and the standard deviation (SD): the mean measures the average aliasing level, while the standard deviation reflects its variability. CS reconstructions of mouse and human brain MRI data were performed at different sampling ratios using all three criteria. The results show that, when the sampling ratio is at least four times the sparsity level, the mean-value criterion yields reconstructions of better quality. Therefore, choosing the sampling ratio with prior knowledge of the sparsity and selecting the random undersampling matrix by the mean-value criterion produces better CS-MRI reconstructions.
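The three criteria above are cheap to compute from an undersampling mask: the PSF is the inverse Fourier transform of the mask, and the statistics are taken over its side lobes. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def psf_metrics(mask):
    """Point spread function of a k-space undersampling mask and the
    three aliasing statistics compared in the paper: maximum (worst
    case), mean value (MV), and standard deviation (SD) of the PSF
    side lobes, with the main lobe normalized to 1 and excluded."""
    psf = np.abs(np.fft.ifft2(mask))
    psf = psf / psf.max()                    # main lobe -> 1
    side = np.delete(psf.ravel(), np.argmax(psf.ravel()))
    return side.max(), side.mean(), side.std()
```

A fully sampled mask gives a delta-like PSF with near-zero side lobes, while an undersampled mask trades side-lobe level (aliasing) against acquisition speed; candidate masks can be ranked by MV rather than by the maximum alone.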

11.
In this article, we propose batch-type learning vector quantization (LVQ) segmentation techniques for magnetic resonance (MR) images. MRI segmentation is an important technique for differentiating abnormal from normal tissues in MR image data. The proposed LVQ segmentation techniques are compared with the generalized Kohonen's competitive learning (GKCL) methods proposed by Lin et al. [Magn Reson Imaging 21 (2003) 863-870]. Three MRI data sets of real cases are used in this article. The first case is a 2-year-old girl diagnosed with retinoblastoma in her left eye; the second is a 55-year-old woman who developed complete left-side oculomotor palsy immediately after a motor vehicle accident; the third is an 84-year-old man diagnosed with Alzheimer disease (AD). The comparisons consider sensitivity to algorithm parameters, segmentation quality measured by the contrast-to-noise ratio, and the accuracy of the region-of-interest tissue. Overall, the batch-type LVQ algorithms deliver good accuracy and segmentation quality together with flexibility in algorithm parameters across all comparisons, supporting the conclusion that they outperform the earlier GKCL algorithms. In particular, the proposed fuzzy-soft LVQ algorithm segments the AD MRI data set well enough to accurately measure hippocampal volume in AD MR images.
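The "batch" character of these algorithms can be illustrated with a minimal unsupervised LVQ-style update on 1-D intensities: all samples are assigned to their nearest prototype, then every prototype moves to the mean of its assignments in one batch step. This is a generic sketch, not the authors' fuzzy-soft variant, and the names are hypothetical.

```python
import numpy as np

def batch_lvq(samples, n_classes, n_iter=20, seed=0):
    """Batch LVQ-style clustering of 1-D intensities: assign every
    sample to its nearest prototype, then move each prototype to the
    mean of its assigned samples (one batch update per epoch)."""
    rng = np.random.default_rng(seed)
    protos = rng.choice(samples, n_classes, replace=False).astype(float)
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(samples[:, None] - protos[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                protos[k] = samples[labels == k].mean()
    return protos, labels
```

Batch updates make the result independent of sample presentation order, which is one reason batch-type variants behave more stably with respect to algorithm parameters than online competitive learning.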

12.
Machine learning (ML)-based segmentation methods are common in the medical image processing field. Despite the many research groups that have investigated ML-based segmentation frameworks, questions remain about how performance varies with two key components: the ML algorithm and the intensity normalization. This investigation shows that both choices play a major part in determining segmentation accuracy and generalizability. Our study evaluates the relative benefits of these two components within a subcortical MRI segmentation framework. Experiments contrasted eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to better account for contrast along structure boundaries. Comparing the eight algorithms on down-sampled segmentation MR data showed a significant improvement from ensemble-based ML algorithms (i.e., random forest) and ANN algorithms. Further comparison of these two revealed that the random forest results agreed exceptionally well with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework achieved good multicenter reliability and was successfully applied to a large multicenter MR data set (n > 3000); fewer than 10% of automated segmentations were flagged for minimal expert intervention. These results demonstrate the feasibility of ML-based segmentation tools for processing large amounts of multicenter MR images, with markedly different accuracy profiles depending on the choice of ML algorithm and intensity normalization.

13.
Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a recently introduced quantitative MRI technique that enables simultaneous multi-parameter mapping in a single, time-efficient acquisition. The current MRF reconstruction method is based on dictionary matching, which is limited by the discrete and finite nature of the dictionary and by the computational cost of dictionary construction, storage, and matching. In this paper, we describe a Kalman-filter-based reconstruction method for MRF that avoids the dictionary and yields continuous MR parameter measurements. Within this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free-precession (IR-bSSFP) MRF sequence is derived to predict the signal evolution, and the acquired signal is entered to update the prediction; the recursive calculation gradually converges to accurate MR parameter estimates. Single-pixel and numerical brain-phantom simulations were implemented with the Kalman filter, and the results were compared with those of the dictionary-matching reconstruction algorithm to demonstrate feasibility and assess performance. The results show that the Kalman filter algorithm is applicable to MRF reconstruction, eliminating the need for a predefined dictionary and, in contrast to dictionary matching, producing continuous MR parameter estimates.
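The predict-update recursion at the heart of this approach is easiest to see in the scalar case: estimate a (nearly) constant quantity from noisy samples. This generic sketch shows the Kalman gain mechanics only; it does not implement the paper's Bloch-equation signal model, and the parameter names are illustrative.

```python
import numpy as np

def kalman_estimate(measurements, q=1e-6, r=0.25, x0=0.0, p0=1.0):
    """Recursive Kalman estimate of a nearly constant scalar state
    from noisy measurements: predict, then update with the gain."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: constant state, process noise q
        k = p / (p + r)           # Kalman gain (r = measurement variance)
        x = x + k * (z - x)       # correct with the innovation z - x
        p = (1.0 - k) * p         # shrink the estimate uncertainty
        estimates.append(x)
    return np.array(estimates)
```

Because the state estimate is a continuous quantity refined at every sample, there is no quantization to a finite dictionary grid, which is the key advantage the abstract claims over dictionary matching.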

14.
In this paper, we extend the multiplicative intrinsic component optimization (MICO) algorithm to multichannel MR image segmentation, focusing on the segmentation of multiple sclerosis (MS) lesions. MICO was originally proposed by Li et al. in Ref. [1] for normal brain tissue segmentation and intensity inhomogeneity correction of a single-channel MR image, and it offers desirable advantages over other methods in segmentation accuracy and robustness. Our multichannel extension assigns a different weight to each channel to control its impact; assigning the FLAIR channel a larger weight than the others enhances its influence on the segmentation of MS lesions. With its inherent estimation of the bias field, the method handles intensity inhomogeneity in the input multichannel MR images. In our application, we use only T1-w and FLAIR images as the two input channels. Experimental results show promising performance.

15.
To address the ill-posedness of reconstructing a spectral image from a single RGB image, a nonlinear reconstruction method based on nonlinear spectral dictionary learning is proposed. To accommodate both linear and nonlinear data, the method first improves a nonlinear principal component analysis algorithm based on an autoassociative neural network model and uses it to learn a low-dimensional spectral dictionary from a training spectral set; the dictionary is used in the inverse equation of spectral reconstruction to alleviate the ill-posedness. Building on this dictionary, a damped Gauss-Newton method combined with truncated-SVD regularization further mitigates the ill-posedness of the nonlinear inversion, realizing spectral-image reconstruction from a single RGB image. In the experiments, spectral dictionaries were learned from the Munsell and Munsell+Pantone training sets, and reconstruction was tested on the CAVE and UEA spectral image databases. Compared with existing methods, the proposed method achieves the lowest mean RMSE for reconstructing the CAVE and UEA images under the different training sets (0.2124, 0.2554, 0.2294, and 0.2949, respectively), with RMSE standard deviations close to those of the best method (0.0685, 0.0847, 0.0668, and 0.0870, respectively). These results indicate that the method has advantages in both reconstruction accuracy and stability for spectral-image reconstruction from a single RGB image.
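The truncated-SVD regularization mentioned above can be sketched on a toy ill-conditioned linear system: discarding the smallest singular values prevents measurement noise from being amplified by their reciprocals. The function name is hypothetical, and this linear example stands in for one inner step of the paper's damped Gauss-Newton iteration.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x ~= b: keep only the k largest
    singular values, discarding the noise-amplifying small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coef = (U.T @ b)[:k] / s[:k]    # project b, invert only the kept spectrum
    return Vt[:k].T @ coef
```

The truncation level k trades bias against variance: too small and signal components are lost, too large and the tiny singular values amplify noise, which is exactly the ill-posedness the abstract sets out to control.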

16.
The reconstruction of magnetic resonance (MR) images from the partial samples of their k-space data using compressed sensing (CS)-based methods has generated a lot of interest in recent years. To reconstruct the MR images, these techniques exploit the sparsity of the image in a transform domain (wavelets, total variation, etc.). In a recent work, it has been shown that it is also possible to reconstruct MR images by exploiting their rank deficiency. In this work, it will be shown that, instead of exploiting the sparsity of the image or rank deficiency alone, better reconstruction results can be achieved by combining transform domain sparsity with rank deficiency. To reconstruct an MR image using its transform domain sparsity and its rank deficiency, this work proposes a combined l1-norm (of the transform coefficients) and nuclear norm (of the MR image matrix) minimization problem. Since such an optimization problem has not been encountered before, this work proposes and derives a first-order algorithm to solve it. The reconstruction results show that the proposed approach yields significant improvements, in terms of both visual quality and signal-to-noise ratio, over previous works that reconstruct MR images either by exploiting rank deficiency or by the standard CS-based technique popularly known as 'Sparse MRI.'
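A first-order method for such a combined objective alternates gradient steps with the two proximal operators involved: entrywise soft-thresholding for the l1 term and singular value thresholding (SVT) for the nuclear norm. The sketch below shows only these two operators, not the paper's full algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: entrywise shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):
    """Singular value thresholding: proximal operator of the nuclear
    norm, applying soft-thresholding to the spectrum of the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt
```

Soft-thresholding promotes sparse transform coefficients while SVT suppresses small singular values and hence promotes low rank; applying both within one iteration is what couples the two priors.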

17.
An algorithm for sparse MRI reconstruction by Schatten p-norm minimization
In recent years, there has been a concerted effort to reduce MR scan time. Signal processing research aims to shorten the scan by acquiring less k-space data, reconstructing the image from the subsampled data with compressed sensing (CS)-based techniques. In this article, we propose an alternative to CS-based reconstruction that exploits the rank deficiency of MR images. This requires minimizing the rank of the image matrix subject to data constraints, which is unfortunately a nondeterministic polynomial time (NP) hard problem. We therefore replace the NP-hard rank minimization problem by its nonconvex surrogate, Schatten p-norm minimization; the same approach can also be used for denoising MR images. Since there is no existing algorithm to solve the Schatten p-norm minimization problem, we derive an efficient first-order algorithm. Experiments on MR brain scans show that the reconstruction and denoising accuracy of our method is on par with that of CS-based methods, while our method is considerably faster.
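The surrogate itself is simple to state: the Schatten p-norm is the l_p norm of a matrix's singular value vector, interpolating between the nuclear norm (p = 1, convex) and the rank (as p tends to 0). A minimal sketch of its evaluation:

```python
import numpy as np

def schatten_p_norm(X, p):
    """Schatten p-norm: the l_p norm of the singular value vector.
    p = 1 gives the nuclear norm, p = 2 the Frobenius norm; for
    0 < p < 1 the p-th power sum is a nonconvex rank surrogate."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)
```

For p < 1 the penalty rises steeply for small singular values, so minimizing it pushes the spectrum toward exact zeros more aggressively than the nuclear norm, at the price of nonconvexity, hence the need for the tailored first-order algorithm the abstract describes.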

18.
In computed tomography (CT), reconstruction from undersampled projection data is often ill-posed and suffers from severe artifacts in the reconstructed images. To overcome this problem, this paper proposes a sinogram inpainting method based on the recently emerging sparse representation technique: a dictionary-learning-based inpainting is used to estimate the missing projection data, and the final image is reconstructed by analytic filtered back projection (FBP). We conduct experiments using both simulated and real phantom data. Compared with an interpolation baseline, visual and numerical results validate the clinical potential of the proposed method.
