Similar Documents
20 similar documents found.
1.
杨艳  邵枫 《光电子.激光》2019,30(2):200-207
To aid the diagnosis of fundus diseases and certain cardiovascular diseases, this paper proposes a retinal vessel segmentation method for fundus images based on dual dictionary learning and multi-scale line structure detection. First, gamma correction is applied in the HSV color space to balance image brightness and the CLAHE algorithm is applied in the Lab color space to raise contrast; a multi-scale line structure detector then highlights the vessels, yielding an enhanced feature image. Next, the K-SVD algorithm is trained on feature image patches and the corresponding hand-labeled vessel patches to obtain a representation dictionary and a segmentation dictionary; the representation dictionary gives the reconstruction sparse coefficients of new input feature patches, and these coefficients together with the segmentation dictionary produce the vessel patches. Finally, post-processing steps such as patch stitching, noise removal and hole filling give the segmentation result. The method is tested on the DRIVE and HRF databases with eight evaluation metrics including accuracy, specificity and sensitivity; the average accuracy reaches 0.958 and 0.951, the average specificity 0.982 and 0.967, and the average sensitivity 0.709 and 0.762, indicating good segmentation performance and generality.
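The enhancement stage described above can be sketched roughly as follows, assuming OpenCV; the gamma value, CLAHE clip limit and tile size are illustrative assumptions, not the authors' settings, and the line detection, dictionary learning and post-processing stages are omitted.

```python
# Minimal sketch of the preprocessing: gamma correction on the V channel in HSV
# space, then CLAHE on the L channel in Lab space.
import cv2
import numpy as np

def enhance_fundus(bgr, gamma=0.8, clip_limit=2.0, tile=(8, 8)):
    # Gamma correction of the brightness channel to balance illumination.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    v = cv2.LUT(v, lut)
    bgr = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

    # CLAHE on the lightness channel to raise local contrast.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    l = clahe.apply(l)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
```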

2.
In this paper, we introduce a procedure for separating a multivariate distribution into nearly independent components based on minimizing a criterion defined in terms of the Kullback-Leibler distance. By replacing the unknown density with a kernel estimate, we derive useful forms of this criterion when only a sample from that distribution is available. We also compute the gradient and Hessian of our criteria for use in an iterative minimization. Setting this gradient to zero yields a set of separating functions similar to the ones considered in the source separation problem, except that here, these functions are adapted to the observed data. Finally, some simulations are given, illustrating the good performance of the method.
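A rough sketch of the kind of criterion described, under my own simplifications: the mutual-information-style contrast of the demixed signals Y = WX is approximated by the sum of marginal entropies estimated with a Gaussian kernel density, minus log|det W|. This is a generic KDE-based contrast for illustration, not the authors' exact criterion or its gradient/Hessian machinery.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_contrast(W, X):
    """X: (n_sources, n_samples) mixtures; W: candidate demixing matrix."""
    Y = W @ X
    marginal_entropy = 0.0
    for y in Y:
        kde = gaussian_kde(y)
        # Monte-Carlo estimate of -E[log p(y)] using the sample itself.
        marginal_entropy += -np.mean(kde.logpdf(y))
    # Mutual information of Y up to a constant (the entropy of X).
    return marginal_entropy - np.log(np.abs(np.linalg.det(W)))
```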

3.
The goal of blind source separation is to recover instantaneously and linearly mixed source signals using the received data alone. This paper discusses a blind source separation method in complex isotropic SαS noise. SαS processes describe many impulsive signals and noises well, but their second- and higher-order statistics do not exist, so the observations are first processed by subspace approximation and whitening, and the source signals and mixing matrix are then estimated by approximate joint diagonalization of eigenmatrices. Simulation results show that the method performs well.

4.
Effect of dictionary size on DICOM image compression in dictionary learning
酉霞  陈菲  贾小林  刘雨娇  杨勇 《液晶与显示》2015,30(6):1045-1051
As hospitals digitize, the volume of medical imaging data grows steadily, placing heavy demands on storage space and retrieval speed. Building on a study of mainstream dictionary learning algorithms, this article proposes compressing, storing and restoring DICOM images with MOD, K-SVD, ILS-DLA and RLS-DLA dictionaries of different sizes. Compared with the classic JPEG and JPEG2000 compression algorithms, the dictionary learning algorithms compress and restore the images better, especially with smaller dictionaries: at a compression ratio of 20, using a 4×4 RLS-DLA dictionary, the peak signal-to-noise ratio (PSNR) of the proposed scheme is 7.8 dB higher than JPEG and 1 dB higher than JPEG2000.
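For illustration, a hedged sketch of patch-based dictionary compression and restoration, with scikit-learn's dictionary learner standing in for the MOD/K-SVD/ILS-DLA/RLS-DLA trainers compared in the paper; the 4×4 patch size matches the text, while the number of atoms and the sparsity level are my assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def compress_and_restore(img, patch=(4, 4), n_atoms=64, n_nonzero=4):
    patches = extract_patches_2d(img.astype(np.float64), patch)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    flat -= mean                                  # code only the residual detail

    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero)
    codes = dico.fit_transform(flat)              # sparse codes = compressed data
    restored = codes @ dico.components_ + mean    # decode
    restored = restored.reshape(patches.shape)
    return reconstruct_from_patches_2d(restored, img.shape)
```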

5.
We address independent component analysis (ICA) of piecewise stationary and non-Gaussian signals and propose a novel ICA algorithm called Block EFICA that is based on this generalized model of signals. The method is a further extension of the popular non-Gaussianity-based FastICA algorithm and of its recently optimized variant called EFICA. In contrast to these methods, Block EFICA is developed to effectively exploit varying distribution of signals, thus, also their varying variance in time (nonstationarity) or, more precisely, in time-intervals (piecewise stationarity). In theory, the accuracy of the method asymptotically approaches Cramér–Rao lower bound (CRLB) under common assumptions when variance of the signals is constant. On the other hand, the performance is practically close to the CRLB even when variance of the signals is changing. This is demonstrated by comparing our algorithm with various methods that are asymptotically efficient within ICA models based either on the non-Gaussianity or the nonstationarity. The benefit of our algorithm is demonstrated by examples with real-world audio signals.

6.
Blind separation of instantaneous mixtures of nonstationary sources
Most source separation algorithms are based on a model of stationary sources. However, it is a simple matter to take advantage of possible nonstationarities of the sources to achieve separation. This paper develops novel approaches in this direction based on the principles of maximum likelihood and minimum mutual information. These principles are exploited by efficient algorithms in both the off-line case (via a new joint diagonalization procedure) and in the on-line case (via a Newton-like procedure). Some experiments showing the good performance of our algorithms and evidencing an interesting feature of our methods are presented: their ability to achieve a kind of super-efficiency. The paper concludes with a discussion contrasting separating methods for non-Gaussian and nonstationary models and emphasizing that, as a matter of fact, “what makes the algorithms work” is, strictly speaking, not the nonstationarity itself but rather the property that each realization of the source signals has a time-varying envelope.
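A minimal sketch of exploiting nonstationarity in this spirit: estimate covariance matrices on separate time blocks and diagonalize them jointly. With only two blocks this reduces to a generalized eigenvalue problem; the paper's maximum-likelihood and multi-block joint diagonalization algorithms are more general than this illustration.

```python
import numpy as np
from scipy.linalg import eig

def separate_two_blocks(X):
    """X: (n_sources, n_samples) observed mixtures."""
    n = X.shape[1] // 2
    C1 = np.cov(X[:, :n])          # covariance on the first time block
    C2 = np.cov(X[:, n:])          # covariance on the second time block
    # Generalized eigenvectors of (C1, C2) give (up to scale) the demixing rows.
    _, V = eig(C1, C2)
    W = np.real(V).T
    return W @ X                   # estimated sources
```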

7.
Most digital cameras are overlaid with color filter arrays (CFA) on their electronic sensors, and thus only one particular color value would be captured at every pixel location. When producing the output image, one needs to recover the full color image from such incomplete color samples, and this process is known as demosaicking. In this paper, we propose a novel context-constrained demosaicking algorithm via sparse-representation based joint dictionary learning. Given a single mosaicked image with incomplete color samples, we perform color and texture constrained image segmentation and learn a dictionary with different context categories. A joint sparse representation is employed on different image components for predicting the missing color information in the resulting high-resolution image. During the dictionary learning and sparse coding processes, we advocate a locality constraint in our algorithm, which allows us to locate most relevant image data and thus achieve improved demosaicking performance. Experimental results show that the proposed method outperforms several existing or state-of-the-art techniques in terms of both subjective and objective evaluations.

8.
魏乐 《电光与控制》2004,11(2):38-41,53
Independent component analysis (ICA) is widely used for blind source separation under the linear mixing model, but it has two important restrictions: the sources must be statistically independent and non-Gaussian. A more meaningful linear mixing model is one in which the observations are nonnegative linear mixtures of nonnegative sources that may be statistically correlated and may be Gaussian. For this blind source separation problem, this paper proposes a method based on the recently introduced nonnegative matrix factorization (NMF) algorithm to separate statistically correlated sources. The method does not require statistical independence or non-Gaussianity; as long as the sources are uncorrelated in their first-order origin moments, they can be separated well. Extensive simulations and comparisons with conventional ICA verify the feasibility and effectiveness of NMF-based blind separation of correlated and Gaussian-distributed sources.
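A hedged sketch of the basic idea: recover nonnegative, possibly correlated sources from nonnegative mixtures by nonnegative matrix factorization, here with scikit-learn's multiplicative-update NMF as a stand-in for the specific NMF algorithm adopted in the paper; the initialization and iteration count are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_separate(X, n_sources):
    """X: (n_observations, n_samples), all entries nonnegative."""
    model = NMF(n_components=n_sources, init="nndsvda", solver="mu", max_iter=1000)
    A_hat = model.fit_transform(X)      # estimated nonnegative mixing matrix
    S_hat = model.components_           # estimated nonnegative sources
    return A_hat, S_hat
```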

9.
10.
A nonnegative dictionary learning algorithm for image inpainting
An image inpainting algorithm based on nonnegative sparse dictionary learning is proposed. A sparsity constraint is added to the nonnegative matrix factorization (NMF) objective, and a nonnegative dictionary is learned from training samples by iterating two steps, sparse coding and dictionary update: sparse coding uses a nonnegative orthogonal matching pursuit (OMP) algorithm, and the dictionary update resembles the classic K-SVD algorithm. Finally, the sparse coefficients of the image to be repaired are obtained from the dictionary with a smoothed-L0-norm algorithm, which completes the inpainting. Experiments show that the algorithm restores images with different kinds of missing regions well, with visual quality and objective metrics better than current mainstream algorithms.
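A rough sketch of the final inpainting step, assuming a nonnegative dictionary D has already been learned offline: each damaged patch is coded on its observed pixels only and the missing pixels are filled from the reconstruction. Nonnegative least squares is used here as a simple stand-in for the paper's smoothed-L0 coefficient estimation.

```python
import numpy as np
from scipy.optimize import nnls

def inpaint_patch(D, patch, mask):
    """D: (n_pixels, n_atoms) nonnegative dictionary; patch: flattened patch;
    mask: boolean array, True where the pixel is known."""
    coeffs, _ = nnls(D[mask], patch[mask])   # code using observed pixels only
    restored = D @ coeffs                    # full reconstruction
    patch = patch.copy()
    patch[~mask] = restored[~mask]           # fill only the missing pixels
    return patch
```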

11.
The authors present a simple method for estimating the mixing matrix in the two source separation problem. It is proven that the separation can be obtained by solving a second-degree polynomial equation that involves fourth-order cumulants.

12.
A synthetic aperture radar (SAR) target recognition method based on adaptive kernel dictionary learning is proposed. The features of the SAR image are first mapped through a kernel function into a high-dimensional kernel space, where a dictionary is learned; the sparsity level is then computed dynamically from the updated dictionary; finally, targets are recognized by the minimum reconstruction error criterion. Simulation results on the public MSTAR dataset show that the extracted features are highly separable and the method recognizes SAR targets well.
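A simplified sketch of the recognition rule only: code the test feature on each class dictionary and assign the class with the smallest reconstruction error. The kernel mapping and the adaptive sparsity computation are omitted; plain OMP in the input space stands in for them, and the dictionary layout is an assumption.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def classify(x, class_dicts, n_nonzero=10):
    """x: feature vector; class_dicts: {label: (n_features, n_atoms) dictionary}."""
    errors = {}
    for label, D in class_dicts.items():
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(D, x)                                  # sparse code on this class dictionary
        errors[label] = np.linalg.norm(x - D @ omp.coef_)
    return min(errors, key=errors.get)                 # minimum reconstruction error
```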

13.
A new blind image separation method based on the bacterial chemotaxis (BC) algorithm is proposed. The normalized fourth-order cumulant of the image signal serves as the objective function, which the BC algorithm optimizes to achieve blind separation. After each image is extracted, its component is removed from the mixtures before the next separation, so that all source images are recovered one by one. Simulations show that the algorithm separates several mixed natural images effectively and achieves good separation quality.
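A sketch of the deflation scheme described: maximize the magnitude of the normalized fourth-order cumulant (kurtosis) of wᵀX to extract one image, remove its contribution from the mixtures, and repeat. A generic numerical optimizer replaces the bacterial chemotaxis search used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def kurtosis(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

def extract_sources(X, n_sources):
    X = X - X.mean(axis=1, keepdims=True)
    sources = []
    for _ in range(n_sources):
        w0 = np.random.randn(X.shape[0])
        res = minimize(lambda w: -abs(kurtosis(w @ X)), w0)   # maximize |kurtosis|
        w = res.x / np.linalg.norm(res.x)
        s = w @ X
        sources.append(s)
        # Deflation: remove the extracted component from every mixture row.
        X = X - np.outer(X @ s, s) / (s @ s)
    return np.array(sources)
```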

14.
Most blind separation algorithms assume that the sign of each source's kurtosis is known and choose the corresponding nonlinearity to approximate the score function accordingly. For the case where the kurtosis signs are unknown, this paper proposes a parameter estimation method for the score function. The algorithm effectively separates mixed super-Gaussian and sub-Gaussian signals, and simulation results verify its effectiveness.
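For context, a minimal illustration of the convention being improved upon: switch the nonlinearity per component according to an estimated kurtosis sign, as in extended-Infomax-style rules. The paper instead estimates a parametric score function directly; the snippet below only shows the kurtosis-sign heuristic.

```python
import numpy as np

def score(y):
    """Elementwise nonlinearity chosen from the estimated kurtosis sign of y."""
    y = (y - y.mean()) / y.std()
    kurt = np.mean(y ** 4) - 3.0
    if kurt > 0:                 # super-Gaussian component
        return np.tanh(y)
    return y ** 3                # sub-Gaussian component
```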

15.
Blind separation of speech mixtures via time-frequency masking
Binary time-frequency masks are powerful tools for the separation of sources from a single mixture. Perfect demixing via binary time-frequency masks is possible provided the time-frequency representations of the sources do not overlap: a condition we call W-disjoint orthogonality. We introduce here the concept of approximate W-disjoint orthogonality and present experimental results demonstrating the level of approximate W-disjoint orthogonality of speech in mixtures of various orders. The results demonstrate that there exist ideal binary time-frequency masks that can separate several speech signals from one mixture. While determining these masks blindly from just one mixture is an open problem, we show that we can approximate the ideal masks in the case where two anechoic mixtures are provided. Motivated by the maximum likelihood mixing parameter estimators, we define a power weighted two-dimensional (2-D) histogram constructed from the ratio of the time-frequency representations of the mixtures that is shown to have one peak for each source with peak location corresponding to the relative attenuation and delay mixing parameters. The histogram is used to create time-frequency masks that partition one of the mixtures into the original sources. Experimental results on speech mixtures verify the technique. Example demixing results can be found online at http://alum.mit.edu/www/rickard/bss.html.
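A hedged sketch of the power-weighted 2-D histogram construction described above: each time-frequency point of two anechoic mixtures votes for a relative attenuation and delay, and the histogram shows one peak per source. Peak picking and mask construction are omitted; the symmetric-attenuation variable, STFT length and bin count are assumptions.

```python
import numpy as np
from scipy.signal import stft

def duet_histogram(x1, x2, fs, nperseg=1024):
    f, _, X1 = stft(x1, fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs, nperseg=nperseg)
    eps = 1e-12
    ratio = (X2 + eps) / (X1 + eps)
    alpha = np.abs(ratio)                                  # relative attenuation
    alpha = alpha - 1.0 / np.maximum(alpha, eps)           # symmetric attenuation
    # Relative delay from the phase of the ratio, normalized by frequency.
    freq = np.maximum(f[:, None], eps)
    delay = -np.angle(ratio) / (2 * np.pi * freq)
    weight = (np.abs(X1) * np.abs(X2)).ravel()             # power weighting
    hist, a_edges, d_edges = np.histogram2d(alpha.ravel(), delay.ravel(),
                                            bins=50, weights=weight)
    return hist, a_edges, d_edges
```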

16.
Learned dictionaries have been validated to perform better than predefined ones in many application areas. Focusing on synthetic aperture radar (SAR) images, a structure preserving dictionary learning (SPDL) algorithm, which can capture and preserve the local and distant structures of the datasets for SAR target configuration recognition is proposed in this paper. Due to the target aspect angle sensitivity characteristic of SAR images, two structure preserving factors are embedded into the proposed SPDL algorithm. One is constructed to preserve the local structure of the datasets, and the other one is established to preserve the distant structure of the datasets. Both the local and distant structures of the datasets are preserved using the learned dictionary to realize target configuration recognition. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) database demonstrate that the proposed algorithm is capable of handling the situations with limited number of training samples and under noise conditions.

17.
In this paper, we introduce a novel procedure for separating an instantaneous mixture of sources based on order statistics. The method is derived in a general context of independent component analysis, using a contrast function defined in terms of the Kullback-Leibler divergence or of the mutual information. We introduce a discretized form of this contrast permitting its easy estimation through order statistics. We show that the local contrast property is preserved and derive a global contrast, exploiting only the information of the support of the distribution (in case this support is finite). Some simulations are given, illustrating the good performance of the method.
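As an illustration of estimating an entropy-based contrast through order statistics, the sketch below uses the Vasicek spacing estimator of differential entropy for each demixed component; this standard order-statistics estimator stands in for the paper's discretized contrast, whose exact form is not given in the abstract.

```python
import numpy as np

def vasicek_entropy(y, m=None):
    """Spacing-based entropy estimate from the order statistics of y."""
    y = np.sort(y)
    n = len(y)
    if m is None:
        m = int(np.sqrt(n))
    upper = y[np.minimum(np.arange(n) + m, n - 1)]
    lower = y[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower) + 1e-12))

def contrast(W, X):
    Y = W @ X
    return sum(vasicek_entropy(y) for y in Y) - np.log(np.abs(np.linalg.det(W)))
```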

18.
19.
Blind separation of speech signals with an adaptive algorithm
梁淑芬  江太辉 《信号处理》2010,26(7):1094-1098
Blind signal processing algorithms fall into two main classes, batch and adaptive. This paper derives a fast independent component analysis (FastICA) algorithm that combines batch and adaptive processing and applies it to blind separation of speech signals. Comprehensive experiments, based on the waveforms, spectrograms and main evaluation metrics before and after separation, show that the algorithm separates the signals well. Compared with the joint approximate diagonalization of eigenmatrices (JADE) algorithm and the natural gradient (NG) algorithm, the FastICA algorithm achieves better separation.
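A minimal sketch of the separation step being compared, using scikit-learn's FastICA on multichannel speech; the paper derives its own combined batch/adaptive FastICA variant, so this only illustrates baseline usage, and the iteration cap is an assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_speech(mixtures, n_sources):
    """mixtures: (n_samples, n_channels) array of observed speech signals."""
    ica = FastICA(n_components=n_sources, max_iter=500)
    estimated = ica.fit_transform(mixtures)     # (n_samples, n_sources)
    return estimated, ica.mixing_
```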

20.
Blind source separation carries an important assumption: the sources contain at most one Gaussian signal; otherwise the performance of statistics-based blind separation algorithms deteriorates. Starting from the generalized rectangular distribution, this paper maps the one-dimensional time-domain signal to a two-dimensional time-frequency representation, which provides information on how the spectral content varies with time, and applies a Hough transform to the time-frequency spectrum. By exploiting the differences between the time-frequency distributions of distinct Gaussian sources, a blind separation algorithm that can separate multiple Gaussian sources without relying on those statistics is proposed, extending the range of applications of blind source separation.
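A rough sketch of the time-frequency front end described: compute a spectrogram, keep the strongest bins, and run a Hough transform to pick out line-like structures whose parameters differ between sources. The threshold is an assumption, and the subsequent grouping of components into sources is omitted.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.transform import hough_line, hough_line_peaks

def tf_hough(x, fs):
    f, t, Sxx = spectrogram(x, fs)
    S = 10 * np.log10(Sxx + 1e-12)
    binary = S > np.percentile(S, 95)              # keep the strongest TF bins
    hspace, angles, dists = hough_line(binary)     # vote for line-like TF structures
    return hough_line_peaks(hspace, angles, dists)
```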

