Similar Documents
20 similar documents found (search time: 671 ms)
1.
Image super-resolution, i.e. high-quality image enlargement, is achieved by restoring the high-frequency components that deteriorate during enlargement. Estimation methods that use the given image itself are effective for this restoration, and we have previously proposed a method employing a codebook describing edge-blurring properties derived from the given image. Such image-dependent methods are, however, ill-suited to movies, whose scenes change from moment to moment. In this paper, an image-independent codebook incorporating local edge patterns of images is proposed, and this predefined codebook is then applied. Its effectiveness is shown through experiments.

2.
In this paper, a new image enlargement method is proposed that applies the backprojection for lost pixels (BPLP) to the predefined-codebook-based method. BPLP is an image restoration method: an eigenspace reflecting the characteristics of an input image is generated from the remaining pixels and used to restore the missing ones. In the proposed method, this eigenspace is replaced by one generated from the predefined codebook (PDC). The PDC represents edge-blurring properties in small image patches and consists of pairs of low- and high-frequency patches covering various edge patterns. By replacing the PDC-based estimation of lost high-frequency components with BPLP, a fast image enlargement method that retains the original performance is obtained. Experiments demonstrate the effectiveness of the proposed method; in particular, its processing time is about 1/50 that of the PDC-based method.
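The eigenspace fill-in at the heart of BPLP can be illustrated with a minimal numpy sketch. All names here are hypothetical, and a plain least-squares projection onto a given orthonormal patch basis stands in for the eigenspace that the paper generates from the PDC: coefficients are fitted from the observed pixels only, then used to fill the missing ones.

```python
import numpy as np

def restore_missing_pixels(patch, mask, basis, mean):
    """Estimate missing pixels by least-squares projection onto an
    eigenspace: fit the subspace coefficients from observed pixels,
    then reconstruct the unobserved ones (simplified BPLP sketch)."""
    A = basis[mask, :]                      # basis rows at observed pixels
    b = patch[mask] - mean[mask]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    est = mean + basis @ coef               # full patch reconstruction
    out = patch.copy()
    out[~mask] = est[~mask]                 # keep known pixels, fill the rest
    return out

# Toy demo: patches lying in a 3-D subspace are restored exactly.
rng = np.random.default_rng(0)
basis = np.linalg.qr(rng.standard_normal((16, 3)))[0]   # orthonormal basis
mean = np.zeros(16)
true = basis @ np.array([2.0, -1.0, 0.5])
mask = np.ones(16, dtype=bool)
mask[[3, 7, 11]] = False                    # three pixels are "lost"
corrupted = true.copy()
corrupted[~mask] = 0.0
restored = restore_missing_pixels(corrupted, mask, basis, mean)
print(np.max(np.abs(restored - true)) < 1e-8)   # → True
```

Because the toy patch lies exactly in the subspace, the least-squares fit over the 13 observed pixels recovers the coefficients and the three missing values exactly; real patches are only approximately in the eigenspace, so the fill-in is an estimate.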

3.
Multi-focus image fusion combines the focused parts of several source images into a single all-in-focus image. The key difficulty is to detect the focused regions accurately, especially when the source images exhibit anisotropic blur and misregistration. This paper proposes a multi-focus image fusion method based on multi-scale decomposition of complementary information. First, two structurally complementary large-scale and small-scale decomposition schemes are used to perform a two-scale, double-layer singular value decomposition of each image, yielding low- and high-frequency components. The low-frequency components are fused by a rule that combines local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN), where, according to the feature information contained in each decomposition layer, different detailed features are selected as the external stimulus of the PA-PCNN. Finally, from the two structurally complementary decompositions and the fused high- and low-frequency components, two initial decision maps with complementary information are obtained; refining them produces the final fusion decision map that completes the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately, whether or not the source images are pre-registered, and that its subjective and objective evaluation indicators are slightly better than those of the existing methods.

4.
Multimodal medical image fusion aims to fuse images carrying complementary multisource information. In this paper, we propose a multimodal medical image fusion method that combines a pulse coupled neural network (PCNN) with a weighted sum of eight-neighbourhood-based modified Laplacian (WSEML) integrating guided image filtering (GIF), in the non-subsampled contourlet transform (NSCT) domain. First, the source images are decomposed by NSCT into several low- and high-frequency sub-bands. Second, a PCNN-based rule fuses the low-frequency components, and the GIF-WSEML model fuses the high-frequency components. Finally, the fused image is obtained by recombining the fused low- and high-frequency sub-bands. The experimental results demonstrate better performance in multimodal medical image fusion, with clear advantages in the objective indexes VIFF, QW, API, SD and EN as well as in time consumption.
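The activity measure underlying WSEML-style rules is the modified Laplacian. The sketch below shows only that core per-pixel measure and a winner-take-all fusion rule built on it; the paper's eight-neighbourhood weighting and guided-filter refinement are omitted, and all function names are illustrative.

```python
import numpy as np

def modified_laplacian(img):
    """Modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|,
    a standard per-pixel sharpness/activity measure (simplified; WSEML
    adds an eight-neighbourhood weighted sum on top of this)."""
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:-1, 1:-1]
    ml_x = np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def fuse_by_activity(a, b):
    """Pick, per pixel, the coefficient from the source with the
    higher modified-Laplacian activity."""
    return np.where(modified_laplacian(a) >= modified_laplacian(b), a, b)

# A flat patch has zero response; an edge patch responds strongly.
flat = np.full((5, 5), 7.0)
edge = np.zeros((5, 5))
edge[:, 2:] = 10.0
print(modified_laplacian(flat).max())   # → 0.0
```

On constant regions the measure vanishes, so sharp (in-focus or high-detail) coefficients dominate the fused high-frequency band.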

5.
陈清江, 李毅, 柴昱洲. 《应用光学》, 2018, 39(5): 655-666
Remote sensing image fusion selectively and strategically combines image information with different observation characteristics, obtained from different sensors, into a new image with better observation characteristics. This paper proposes a remote sensing image fusion algorithm combining deep learning with the non-subsampled shearlet transform (NSST). An improved super-resolution reconstruction network first enhances the spatial resolution of the multispectral (MS) image, and the panchromatic (PAN) image is histogram-matched to each component of the reconstructed MS image. The corresponding channels are then decomposed by NSST into a low-frequency sub-band and several high-frequency sub-bands. The low-frequency fusion coefficients are obtained with an adaptive gradient-domain weighted-average rule, the high-frequency fusion coefficients with a local spatial-frequency-maximum rule, and the fused image is reconstructed by the inverse NSST. For the City and Inland multispectral images from different datasets, upsampled by bicubic interpolation, the proposed algorithm achieves universal image quality index (UIQI) values of 0.9886 and 0.9321 and spectral angle mapper (SAM) values of 1.8721 and 2.1432, respectively. Experimental results show that the fused images have clearer structure and more complete spectral information, surpass the comparison algorithms in quality, and are better suited to human visual observation.

6.
To address the low contrast, detail loss and colour distortion that arise when fusing near-infrared and colour visible-light images, a new fusion algorithm based on multi-scale transforms and an adaptive pulse coupled neural network (PCNN) is proposed. The colour visible image is first converted to the HSI (hue, saturation, intensity) colour space; its three components are mutually independent and can therefore be processed separately. The intensity component and the near-infrared image are each decomposed by the Tetrolet transform into low- and high-frequency components. For the low-frequency components, an expectation-maximization fusion rule is proposed; for the high-frequency components, a difference-of-Gaussians operator adjusts the threshold of the PCNN model, giving an adaptive PCNN fusion rule. The fused components are reconstructed by the inverse Tetrolet transform into a new intensity image, which is mapped back to RGB space together with the original hue and saturation components to obtain the fused colour image. To counteract the smoothing introduced by fusion and the uneven illumination of the source images, a colour and sharpness correction (CSC) mechanism is introduced to improve the quality of the fused image. Five pairs of 1024×680 near-infrared and colour visible images were tested, and the method was compared with four state-of-the-art fusion methods and with the proposed method without colour correction. The results show that, with or without CSC, the method retains the most detail and texture and greatly improves visibility; under weak illumination it preserves more detail and texture, with better contrast and good colour reproduction. It also holds a clear advantage in objective metrics such as information retention, colour restoration, image contrast and structural similarity.
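The colour-space step above relies on the HSI intensity component I = (R + G + B) / 3. The sketch below shows that extraction and one simple way to map a fused intensity back to RGB; the full HSI conversion (hue and saturation formulas) and the Tetrolet/PCNN stages are omitted, and all names are illustrative.

```python
import numpy as np

def rgb_to_intensity(rgb):
    """Intensity component of the HSI model: I = (R + G + B) / 3."""
    return rgb.mean(axis=-1)

def replace_intensity(rgb, new_i):
    """Map a fused intensity back to RGB by scaling each channel by
    new_I / old_I; equal scaling of R, G, B leaves hue and the HSI
    saturation (1 - min/I) unchanged."""
    old_i = rgb_to_intensity(rgb)
    scale = np.where(old_i > 0, new_i / np.maximum(old_i, 1e-12), 0.0)
    return rgb * scale[..., None]

pix = np.array([[[90.0, 60.0, 30.0]]])        # one RGB pixel, I = 60
print(rgb_to_intensity(pix)[0, 0])            # → 60.0
out = replace_intensity(pix, np.array([[120.0]]))
print(rgb_to_intensity(out)[0, 0])            # → 120.0
```

This is why fusing only the intensity channel, as the abstract describes, can sharpen the image without disturbing its colours.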

7.
An infrared image magnification algorithm based on wavelet transform
The key to image magnification is to keep the magnified image as sharp as the original. For infrared images, traditional interpolation methods have notable shortcomings. A new wavelet-based magnification algorithm is proposed: the original image is first wavelet-transformed to obtain its high-frequency coefficients; Newton interpolation then enlarges these coefficients, which serve as the high-frequency component of the magnified image, while the original image serves as the low-frequency component; finally, an inverse wavelet transform reconstructs the magnified image. Experiments show that the method reproduces image details well.
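The pipeline above (original image as the low-frequency band, its own enlarged detail coefficients as the high-frequency bands, then one inverse transform) can be sketched with a hand-rolled Haar transform. This is an assumption-laden toy: Haar stands in for whatever wavelet the paper uses, and crude nearest-neighbour upsampling of the detail bands stands in for Newton interpolation.

```python
import numpy as np

def haar_forward(img):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2.0, (a - b + c - d) / 2.0,
            (a + b - c - d) / 2.0, (a - b - c + d) / 2.0)

def haar_inverse(ll, lh, hl, hh):
    """Inverse of haar_forward (perfect reconstruction)."""
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def wavelet_enlarge_2x(img):
    """2x enlargement in the spirit of the paper: the input itself is the
    LL band of the enlarged image, and its own Haar detail coefficients,
    upsampled, are the high-frequency bands."""
    img = img.astype(float)
    _, lh, hl, hh = haar_forward(img)
    up = lambda x: np.kron(x, np.ones((2, 2)))  # crude stand-in for Newton interpolation
    # With this Haar normalisation the LL band is twice the local mean,
    # so 2*img is used as LL to preserve brightness.
    return haar_inverse(2.0 * img, up(lh), up(hl), up(hh))

img = np.tile(np.arange(8.0), (8, 1))           # horizontal-ramp test image
big = wavelet_enlarge_2x(img)
print(big.shape)   # → (16, 16)
```

Within each 2×2 output block the detail contributions cancel in the sum, so the enlarged image preserves the mean brightness of the input exactly.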

8.
To address the low brightness and contrast, missing detail and contour information, and poor visibility of traditional fusion algorithms for infrared and low-light visible images, a fusion method based on latent low-rank representation and composite filtering is proposed. An improved high-dynamic-range compression enhancement first brightens the visible image. The infrared and enhanced visible images are then decomposed into low- and high-frequency layers by a decomposition based on latent low-rank representation and composite filtering. The low-frequency layers are fused with an improved contrast-enhanced visual-saliency-map rule, and the high-frequency layers with an improved weighted-least-squares optimization rule; the final fused image is the linear superposition of the fused layers. Comparative experiments show that the fused images are rich in detail, sharp and highly visible.

9.
In this paper, a novel greyscale image coding technique based on vector quantization (VQ) is proposed. In VQ, the reconstructed image quality is limited by the codebook used in encoding and decoding. To obtain better image quality with a fixed-size codebook, a codebook expansion technique is introduced. In addition, block prediction and relative addressing are employed to reduce the storage cost of the compressed codes. The results show that the proposed technique adaptively provides better image quality at low bit rates than standard VQ.
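The baseline that the paper improves on is plain vector quantization: each image block is replaced by the index of its nearest codeword. A minimal sketch (the codebook-expansion, block-prediction and relative-addressing refinements are not shown):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each flattened image block to the index of its nearest
    codeword under squared Euclidean distance (plain VQ encoding)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by simple codebook lookup."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0],       # 2 codewords, block dimension 2
                     [10.0, 10.0]])
blocks = np.array([[1.0, -1.0],
                   [9.0, 11.0]])
idx = vq_encode(blocks, codebook)
print(idx)   # → [0 1]
```

The compressed representation is just `idx` (one small integer per block), which is why the reconstruction quality is bounded by how well the fixed codebook covers the block distribution.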

10.
席志红, 曾继琴, 李爽. 《应用声学》, 2017, 25(3): 197-200
In medical image processing, limits on imaging technology and acquisition time make it impossible to obtain images sharp enough for diagnosis, so pathology images acquired in a very short time with existing technology require super-resolution reconstruction. Learning-based super-resolution reconstructs high-frequency detail from a previously trained prior model. In this paper, the high-frequency information to be estimated is regarded as consisting of a principal part and a redundant part, and a medical image super-resolution algorithm based on dual dictionary learning and sparse representation is proposed: a principal dictionary and a redundant dictionary are learned, which progressively recover the principal and redundant high-frequency details, respectively. Quantitative analysis and visual inspection of the experimental results show that the proposed two-stage progressive method recovers more image detail and outperforms several existing methods on the performance metrics.

11.
A novel image fusion algorithm based on the wavelet-based contourlet transform (WBCT) and principal component analysis (PCA) is proposed. The PCA rule is adopted for the low-frequency components. For the high-frequency components, the coefficient with the greater activity measure is chosen and a region consistency test is performed. Experiments show that the proposed method preserves edge and texture information better than the wavelet transform and Laplacian pyramid (LP) methods in image fusion. Four indicators of the fused image are given to compare the proposed method with the others.
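A common form of the PCA rule for the low-frequency bands is to weight the two sources by the components of the dominant eigenvector of their 2×2 covariance matrix. A minimal sketch of that rule (names are illustrative; the WBCT decomposition and the high-frequency consistency test are not shown):

```python
import numpy as np

def pca_fusion_weights(a, b):
    """PCA fusion rule: take the eigenvector of the 2x2 covariance of
    the two source bands with the largest eigenvalue, and normalise its
    components to sum to 1."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0], w[1]

def pca_fuse(a, b):
    wa, wb = pca_fusion_weights(a, b)
    return wa * a + wb * b

rng = np.random.default_rng(2)
a = rng.standard_normal((32, 32))
b = 0.1 * rng.standard_normal((32, 32))     # much lower-variance source
wa, wb = pca_fusion_weights(a, b)
print(wa > wb)   # → True  (the higher-variance source dominates)
```

The rule automatically gives more weight to the source band carrying more signal energy, without any hand-tuned threshold.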

12.
Codebook-based single-microphone noise suppressors, which exploit prior knowledge of speech and noise statistics, perform better in nonstationary noise. However, because the enhancement involves a joint optimization over the speech and noise codebooks, it has high computational complexity. A codebook-based method is proposed that uses a reference signal observed by a bone-conduction microphone and a mapping between air- and bone-conduction codebook entries generated during an offline training phase. Using this reference signal, a small subset of air-conducted speech codebook entries that accurately models the clean speech signal is selected. Experiments confirm the expected improvement in performance at low computational complexity.

13.
In this work we propose a scatter compensation method for single photon emission computed tomography (SPECT) imaging that estimates the scatter components of the projections quickly and accurately. The scatter components are first estimated from scatter response kernels using a single pass of ordered-subsets expectation-maximization (OS-EM) iterative reconstruction; the estimated scatter is then subtracted from the projections, and the reconstruction is completed by filtered back-projection. The principle is that the image corresponding to the scatter components consists largely of the low-frequency components of the activity distribution, and these converge faster than the high-frequency ones in iterative reconstruction. We can therefore estimate the low-frequency component image before the high-frequency components converge, and obtain the scatter components by re-projecting it with the scatter response kernels. The proposed method was compared with the dual- and triple-energy window methods on experimental measurements. The results show good accuracy of the estimated scatter components, good uniformity of scatter compensation at the centre and periphery of an object, and good noise properties.

14.
In this paper, an improved multi-scale-transform fusion algorithm for infrared and visible images is proposed. First, a morphological top-hat transform is applied to the infrared and visible images separately. The two images are then decomposed into high- and low-frequency images by the contourlet transform (CT). The high-frequency images are fused by a mean-gradient rule, and the low-frequency images by principal component analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). The experiments demonstrate that the proposed method significantly improves fusion performance, highlighting salient target information and high contrast while preserving rich detail.

15.
When enhancing compressed images, traditional enhancement methods struggle to recover detail and tend to produce blocking artifacts. To address this, Retinex theory is introduced into the JPEG2000 compression framework, and a new enhancement method for JPEG2000-compressed images is proposed. The low-frequency wavelet coefficients are treated as the illumination component and the high-frequency coefficients as the reflectance component. Two nonlinear mappings of the low-frequency coefficients adjust the dynamic range of the scene illumination; adjusting the high-frequency coefficients raises the contrast and lifts the image detail overall; and by judging the activity of each block, the luminance quantization table is adaptively modified so that more detail is retained and blocking artifacts are suppressed. Experimental results show that the algorithm performs well in both enhancement effect and compression quality.

16.
The high-frequency components produced by traditional multi-scale transforms are approximately sparse and can represent different aspects of the image detail. In the low-frequency component, however, few coefficients lie near zero, so the low-frequency information cannot be represented sparsely. The low-frequency component carries the main energy of the image and depicts its profile, so fusing it directly is not conducive to a highly accurate result. This paper therefore presents an infrared and visible image fusion method combining multi-scale and top-hat transforms. On one hand, a new top-hat transform effectively extracts the salient features of the low-frequency component; on the other, the multi-scale transform extracts high-frequency detail at multiple scales and from diverse directions. Combining the two captures more characteristics and yields more accurate fusion. For the low-frequency component, the new top-hat transform extracts the low-frequency features, and different rules then fuse the low-frequency features and the low-frequency background; for the high-frequency components, a product-of-characteristics rule integrates the detail. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared targets than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, it is simple and effective, and its time consumption is significantly lower.
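The classical white top-hat transform (image minus its morphological opening) is the basic operation behind the salient-feature extraction described above; the paper uses a modified variant not reproduced here. A numpy-only sketch with a flat square structuring element:

```python
import numpy as np

def erode(img, k=3):
    """Greyscale erosion with a k x k flat structuring element."""
    p = np.pad(img, k // 2, mode='edge')
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(k) for j in range(k)]
    return np.min(stack, axis=0)

def dilate(img, k=3):
    """Greyscale dilation with a k x k flat structuring element."""
    p = np.pad(img, k // 2, mode='edge')
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(k) for j in range(k)]
    return np.max(stack, axis=0)

def white_top_hat(img, k=3):
    """White top-hat: image minus its opening (erosion then dilation).
    It isolates bright structures smaller than the structuring element,
    e.g. small hot targets in an infrared low-frequency band."""
    opening = dilate(erode(img, k), k)
    return img - opening

img = np.zeros((7, 7))
img[3, 3] = 5.0                      # a single bright point target
th = white_top_hat(img)
print(th[3, 3])   # → 5.0  (the small bright target survives)
```

The opening removes any bright feature narrower than the structuring element, so the residual is exactly the salient small-scale structure to be emphasised during fusion.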

17.
A medical image fusion method based on bi-dimensional empirical mode decomposition (BEMD) and a dual-channel PCNN is proposed in this paper. The multi-modality medical images are decomposed into intrinsic mode function (IMF) components and a residue component, and the IMFs are divided into high- and low-frequency components according to their energy. Fusion coefficients are obtained by the following rule: the high-frequency components and the residue are superimposed to retain more texture, while the low-frequency components, which contain more details of the source images, are fed into the dual-channel PCNN to select fusion coefficients; the fused medical image is then obtained by the inverse BEMD. BEMD is a self-adaptive tool for analysing nonlinear and non-stationary data and requires no predefined filter or basis function, while the dual-channel PCNN reduces the computational complexity and selects fusion coefficients well. Their combination extracts image detail more effectively. The experimental results show that the proposed algorithm achieves better fusion results than traditional fusion algorithms.

18.
基于经验模态分解和小波阈值的冲击信号去噪   总被引:2,自引:0,他引:2  
苏秀红  李皓 《应用声学》2017,25(1):204-208, 220
冲击信号是非线性的并且容易受到噪声污染。为研究冲击信号去噪的问题,本文针对经验模态分解(Empirical Mode Decomposition,EMD)去噪和小波阈值去噪方法存在的不足,提出了基于EMD的小波阈值去噪方法。单纯的EMD去噪方法会在去除高频噪声的同时压制高频的有效信息。本文将EMD与小波阈值去噪相结合,利用连续均方误差准则确定含噪较多的高频固有模态函数(Intrinsic Mode Function, IMF),对高频IMF分量进行小波阈值去噪,以分离并保留这些分量中的有效信息,同时保持低频IMF分量不变。对模拟数据和实际冲击信号进行去噪处理,结果表明,基于EMD的小波阈值去噪方法的去噪效果优于单纯的EMD去噪方法和小波阈值去噪方法。  相似文献   
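The wavelet-threshold step applied to the noisy IMFs is typically soft-threshold shrinkage with a noise-derived threshold. A minimal sketch (names are illustrative; the EMD decomposition and the consecutive mean-square-error IMF selection are not shown):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-threshold shrinkage: coefficients below t are treated as
    noise and zeroed, larger ones are shrunk toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def universal_threshold(coeffs):
    """Donoho's universal threshold sigma * sqrt(2 ln N), with the noise
    level sigma estimated from the median absolute deviation."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

noisy = np.array([0.1, -0.2, 4.0, 0.05, -3.5, 0.15])
shrunk = soft_threshold(noisy, 0.5)
print(shrunk[2])   # → 3.5  (large coefficients survive, shrunk by t)
```

Applying this only to the noisiest high-frequency IMFs, while leaving the low-frequency IMFs untouched, is what lets the combined method keep useful high-frequency content that pure EMD denoising would discard.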

19.
An analytical solution to the boundary-value problem for the electric field and electrons in a metal-filled half-space is obtained for arbitrary values of the tangential-momentum accommodation coefficient. The frequency of the external electromagnetic field, directed tangentially to the surface, is allowed to take complex values. Both the normal and the anomalous skin effect are considered; in the latter case, the low- and high-frequency limits are examined.

20.
An infrared image enhancement method combining wavelet analysis and histogram processing
To address the problems of traditional infrared image enhancement algorithms, an enhancement method combining wavelet analysis with histogram processing is proposed. An orthogonal wavelet transform is applied to the infrared image to obtain the decomposition coefficients at each level; bidirectional histogram equalization is applied to the low-frequency sub-band coefficients, and a threshold-filtering detail-coefficient enhancement to the high-frequency sub-band coefficients; the enhanced infrared image is then reconstructed by the inverse wavelet transform. Experimental results...
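Global histogram equalization via the normalised cumulative distribution function is the basic operation behind the bidirectional rule above; the bidirectional variant, which equalises the dark and bright ranges separately, is omitted in this sketch, and the function name is illustrative.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Global histogram equalization: build the CDF of the grey levels,
    normalise it to [0, levels-1], and use it as a lookup table."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()             # first occupied grey level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(img.dtype)
    return lut[img]

# A low-contrast image (grey levels 100..124) gets stretched to 0..255.
img = (np.arange(100, dtype=np.uint8).reshape(10, 10) // 4 + 100).astype(np.uint8)
out = equalize_histogram(img)
print(out.min(), out.max())   # → 0 255
```

Applied to the low-frequency band of an infrared image, this stretch is what raises the overall contrast while the thresholded detail coefficients sharpen the edges.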


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号