Similar Literature
1.
A Comparative Study of Remote Sensing Image Fusion Based on the Spectral Response Function
Remote sensing image fusion is an important problem, and many fusion methods have been proposed. Some existing methods can extract detail features from high-spatial-resolution panchromatic data and inject them into low-spatial-resolution multispectral data while preserving the spectral characteristics of the multispectral data as far as possible. However, most existing methods do not exploit the physical information of the remote sensing imaging system, which can lead to severe spectral distortion in the fusion results. This paper decomposes the image fusion problem with an appropriate remote sensing image fusion model, reducing it to two sub-problems: construction of spatial-detail modulation coefficients and extraction of spatial detail information. Based on an analysis of the sensor spectral response function (SRF), reasonable spatial-detail modulation coefficients are constructed. Following a classification of existing methods, three SRF-based schemes for constructing the modulation coefficients are combined with spatial detail extracted by Gaussian high-pass filtering, yielding three SRF-based remote sensing image fusion methods. These methods are tested and analyzed on Ikonos data and compared with the GS and HPM methods.
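As an illustration of the detail-injection idea described above, here is a minimal sketch of HPM-style (high-pass modulation) pansharpening: a low-pass filter extracts the smooth component of the panchromatic band, and the detail ratio modulates each multispectral band. A box filter stands in for the Gaussian low-pass, and all names and array sizes are hypothetical; the paper's SRF-derived coefficients are not reproduced here.

```python
# Sketch of high-pass modulation (HPM) pansharpening; illustrative only.

def box_blur(img, k=1):
    """Mean filter with edge clamping; stands in for a Gaussian low-pass."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in range(-k, k + 1) for dj in range(-k, k + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def hpm_fuse(ms_band, pan):
    """Fused = MS * (Pan / lowpass(Pan)): detail is injected in proportion to
    the local multispectral intensity, which limits spectral distortion."""
    pan_low = box_blur(pan)
    return [[ms_band[i][j] * pan[i][j] / max(pan_low[i][j], 1e-6)
             for j in range(len(pan[0]))] for i in range(len(pan))]

# Toy data: a bright point in the panchromatic band, flat multispectral band.
pan = [[10.0, 10.0, 10.0], [10.0, 40.0, 10.0], [10.0, 10.0, 10.0]]
ms = [[5.0] * 3 for _ in range(3)]
fused = hpm_fuse(ms, pan)
```

The fused band inherits the panchromatic detail (the center pixel is brightened) while staying proportional to the original multispectral intensity.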

2.
Existing sea-surface ship detection systems based on a single broad infrared band suffer severe degradation in detection rate, false-alarm rate, and detection range under complex sea-sky backgrounds, island and coastal backgrounds, bad weather, bright-band interference, or decoy flares. To address this, a ship detection method based on multi-band infrared images was studied. A mid-wave infrared multi-band acquisition system collected 107 groups of images in five mid-wave infrared bands; bands 1-5 cover 3.7~4.8, 3.7~4.1, 4.4~4.8, 3.7~3.9, and 4.65~4.75 μm, respectively. The multi-band images were manually annotated to build a sample set with 298 positive (ship) samples and 353 negative (non-ship) samples. The multi-band images are first reduced in dimension with PCA, and a selective search algorithm generates initial target candidate regions. Because many candidates are clearly not ships, the local contrast of each candidate is computed with an integral image, and regions likely to contain ships are screened from the initial candidates according to the geometric and gray-level characteristics of infrared ship targets. The ship candidate regions are then expanded to incorporate local context. For the five-band infrared images of each candidate region, dense SIFT features are extracted per band, the 128-dimensional SIFT vectors are reduced to 64 dimensions, and the spatial and band position distribution of the SIFT features is appended to form new feature vectors. The feature-vector set of each candidate region is encoded and fused with a Gaussian mixture model to obtain its Fisher vector representation, and a linear SVM classifier finally identifies the ships. In candidate-generation experiments, the proposed constraints based on the geometric and gray-level characteristics of infrared ships effectively overcome the shortcomings of selective search and quickly localize ship candidates among the initial regions; over 25 groups of multi-band images, candidate generation took 0.353 s overall, with 0.005 s to localize the ship regions. In recognition tests on 100 positive and negative samples, the proposed algorithm, which fuses multi-band image features and uses Fisher vectors to mine deep statistics of multi-band gradient features, achieved a recognition rate of 0.97, markedly higher than that obtained from single-band infrared images. In detection experiments on 25 groups of multi-band images, the proposed method detected sea-surface ships under sea-sky backgrounds, island and coastal backgrounds, and bright-band interference, with accurate localization, a ship recall of 0.95, and an average detection time of 1.33 s per group. The results show that fully considering the radiative difference between ships and the local ocean background in infrared images, and effectively fusing the radiative features of ships across multiple infrared bands, improves the separability, recognition rate, and detection rate of ship targets, providing new technical support for sea-surface ship detection based on multi-band infrared images.
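One concrete step from the pipeline above is the integral-image local-contrast screening of candidate regions. The sketch below shows how an integral image gives O(1) region sums, from which a simple contrast score (inner-region mean over global mean) can be computed; the specific score definition here is an assumption for illustration, not the paper's exact formula.

```python
# Integral image with O(1) region-sum queries, used for local-contrast
# screening of candidate regions (score definition is illustrative).

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        row = 0.0
        for j in range(w):
            row += img[i][j]
            ii[i + 1][j + 1] = ii[i][j + 1] + row
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] via four integral-image lookups."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

def local_contrast(img, r0, c0, r1, c1):
    """Mean of an inner region divided by the global mean; a bright ship
    against dark sea yields a value well above 1."""
    ii = integral_image(img)
    inner = region_sum(ii, r0, c0, r1, c1) / ((r1 - r0) * (c1 - c0))
    total = region_sum(ii, 0, 0, len(img), len(img[0])) / (len(img) * len(img[0]))
    return inner / total

# Toy frame: a bright 2x2 "ship" on a dark background.
frame = [[1, 1, 1, 1], [1, 9, 9, 1], [1, 9, 9, 1], [1, 1, 1, 1]]
score = local_contrast(frame, 1, 1, 3, 3)
```

Because every region query is constant-time after one pass over the image, thousands of selective-search candidates can be scored cheaply.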

3.
Existing fusion rules focus on retaining detailed information from the source images, but because thermal radiation in infrared images is mainly characterized by pixel intensity, such rules tend to reduce the saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant-target enhancement, which injects thermal targets from the infrared image into the visible image to enhance target saliency while retaining the important details of the visible image. First, the source images are decomposed with multi-level Gaussian curvature filtering to obtain background information at high spatial resolution. Second, the large-scale layers are fused using ResNet50 together with a maximum-weight rule based on the average operator to improve detail retention. Finally, the base layers are fused with a new salient-target detection method. Subjective and objective experimental results on the TNO and MSRS datasets demonstrate that our method achieves better results than other traditional and deep-learning-based methods.

4.
To solve the fusion problem for multi-focus images of the same scene, a novel algorithm based on focused-region detection and multiresolution analysis is proposed. To combine the advantages of spatial-domain and transform-domain fusion methods, we use focused-region detection together with a new multiscale transform (MST) fusion method to guide pixel combination. First, an initial fused image is obtained with a novel multiresolution fusion method. Pixels of the original images that are similar to the corresponding pixels of this initial fused image are considered to lie in sharply focused regions; the initial focused regions are determined in this way and refined with morphological opening and closing. Pixels inside the focused regions of each source image are then selected as pixels of the fused image, while pixels at the borders of the focused regions are taken from the initial fused image, yielding the final result. The experimental results show that the proposed approach is effective and fuses multi-focus images better than several current methods.
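The focused-region decision described above can be sketched in a few lines: each source pixel is marked in-focus if it is closer to the initial fused image than the competing source pixel. This is a minimal illustration with assumed names; the multiresolution step that produces the initial fused image, and the morphological clean-up of the map, are elided.

```python
# Sketch of focused-region detection by similarity to an initial fused image.

def focus_map(src_a, src_b, init_fused):
    """Binary map: 1 where src_a is closer to the initial fused image than
    src_b (src_a judged in focus), else 0. In the full method this map would
    be cleaned with morphological opening/closing."""
    return [[1 if abs(a - f) <= abs(b - f) else 0
             for a, b, f in zip(ra, rb, rf)]
            for ra, rb, rf in zip(src_a, src_b, init_fused)]

def fuse_with_map(src_a, src_b, fmap):
    """Pick each fused pixel from whichever source the map marks as focused."""
    return [[a if m else b for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(src_a, src_b, fmap)]

# Toy 1x2 images: src_a is sharp on the left, src_b on the right.
fmap = focus_map([[10, 2]], [[4, 8]], [[9, 7]])
result = fuse_with_map([[10, 2]], [[4, 8]], fmap)
```

The decision is purely per-pixel here; in practice the morphological post-processing is what removes isolated misclassified pixels near region borders.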

5.
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, saliency maps of the source images are introduced into the fusion procedure. First, within the framework of the joint sparse representation (JSR) model, global and local saliency maps of the source images are obtained from the sparse coefficients. Then a saliency detection model is proposed that combines the global and local saliency maps into an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map carries out the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes as well as in visual quality.

6.
An Adaptive Image Fusion Algorithm Based on the Shearlet Transform
石智  张卓  岳彦刚 《光子学报》2013,42(1):115-120
Considering the imaging characteristics of multi-focus images and of multispectral and panchromatic images, and exploiting the Shearlet transform's ability to sparsely represent image features, a new image fusion rule is proposed, and on its basis an adaptive Shearlet-domain image fusion algorithm is developed. For multi-focus fusion, the differently focused images are each decomposed with the Shearlet transform, and the resulting high- and low-frequency coefficients are fused under the proposed rule; comparative experiments with several algorithms show that the fused images have higher clarity and richer detail. For multispectral and panchromatic fusion, a method combining the Shearlet transform with the HSV transform is proposed: the multispectral image first undergoes an HSV transform, the resulting V component and the panchromatic image are decomposed and fused in the Shearlet domain under specific fusion criteria, and the newly fused component is combined with the H and S components through an inverse HSV transform to produce a new RGB fused image. This algorithm strikes a good balance between spatial resolution and spectral fidelity: the fused image effectively gains spatial resolution while reducing spectral distortion. Simulations show that, compared with traditional multispectral-panchromatic fusion algorithms, the proposed method achieves better fusion performance and visual effects.
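The V-channel substitution step described above can be sketched per pixel as follows. This is a minimal illustration: the Shearlet-domain fusion that would produce the fused V value is elided (it is passed in directly), and the standard-library `colorsys` conversion stands in for the paper's HSV transform.

```python
# Sketch of HSV substitution fusion: keep hue/saturation from the
# multispectral pixel, replace value with a fused intensity.
import colorsys

def hsv_substitute(rgb_pixel, fused_v):
    """Replace the V (value) channel of an RGB pixel (floats in [0, 1]) with
    a fused intensity, keeping hue and saturation, then convert back."""
    h, s, _ = colorsys.rgb_to_hsv(*rgb_pixel)
    return colorsys.hsv_to_rgb(h, s, fused_v)

# Toy pixel: brightening a bluish multispectral pixel with a fused V of 1.0
# preserves its hue and saturation, only the intensity changes.
out = hsv_substitute((0.2, 0.4, 0.8), 1.0)
```

Because H and S are untouched, spectral distortion is limited to what the change in V implies, which is the rationale for fusing only the intensity channel.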

7.
Qing Guo  Shutian Liu 《Optik》2011,122(9):811-819
During the past few years, many fusion algorithms have been proposed to combine a high-resolution panchromatic image with a low-resolution multispectral image into a high-resolution multispectral image. Among them, wavelet-based algorithms have gained popularity through their multiresolution decomposition: the wavelet transform is first applied to the images, and the wavelet coefficients are then combined under a certain rule to produce the fused image. In this paper, we evaluate both the discrete implementations of the wavelet transform and the coefficient combination methods as applied to fusing multispectral and panchromatic images. For the discrete implementations, the Mallat and "à trous" algorithms are chosen; for coefficient combination, the additive wavelet method, the additive wavelet intensity method, and the additive wavelet principal component method are selected. To evaluate the spectral quality of the fused images, the correlation coefficient and the Qavg index are used as local and global measures, respectively; average gradient and standard deviation are used to evaluate spatial quality. Our experiments show that, for a fixed combination method, the "à trous" algorithm works better than the Mallat algorithm for fusion. In addition, for a fixed discrete algorithm, the combination methods above trade off spatial resolution improvement against spectral quality preservation in different ways.
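The "à trous" ("with holes") scheme compared above can be sketched on a 1-D signal: at each level the smoothing kernel is dilated by inserting zeros, so the transform is undecimated, and the additive-wavelet fusion rule simply adds the panchromatic wavelet planes to the multispectral band. The B3-spline kernel and two levels are the common convention; the paper's exact configuration is assumed, not reproduced.

```python
# Sketch of the undecimated "à trous" decomposition and additive-wavelet
# fusion on 1-D signals; kernel choice follows the usual B3-spline convention.

def atrous_planes(signal, levels=2):
    """Return (wavelet_planes, residual). At level l the kernel taps are
    spaced 2**l apart (the 'holes'), so no downsampling occurs and
    signal == sum(planes) + residual exactly."""
    kernel = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    planes, current = [], list(signal)
    n = len(signal)
    for lev in range(levels):
        step = 2 ** lev
        smoothed = []
        for i in range(n):
            acc = 0.0
            for k, w in enumerate(kernel):
                idx = min(max(i + (k - 2) * step, 0), n - 1)  # clamp borders
                acc += w * current[idx]
            smoothed.append(acc)
        planes.append([c - s for c, s in zip(current, smoothed)])
        current = smoothed
    return planes, current

def additive_wavelet_fuse(ms, pan, levels=2):
    """Additive wavelet rule: MS + sum of the Pan detail planes."""
    planes, _ = atrous_planes(pan, levels)
    return [m + sum(p[i] for p in planes) for i, m in enumerate(ms)]

sig = [0.0, 0.0, 5.0, 0.0, 0.0, 3.0, 0.0, 0.0]
planes, residual = atrous_planes(sig)
```

The exact-reconstruction property (planes plus residual recover the signal) is what makes the injected detail well-defined, and the absence of downsampling is why "à trous" avoids the shift artifacts of the decimated Mallat scheme.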

8.
Multi-spectral CT imaging characterizes the different components of an inspected object through CT images in different spectral bands. To display the information of all components in a single view, fusion methods for multi-spectral CT sequences are needed. However, common fusion methods such as weighted averaging and wavelet-based fusion optimize image detail and cannot express the physical properties of the components, so the gray levels of the fused image carry no physical meaning, impairing quantitative CT inspection. To address this, a fusion algorithm for multi-spectral CT sequences based on the data constraint model (DCM), which has physical characterization capability, was developed using prior components. First, multi-energy projection data over several spectral ranges are obtained by spectrum-filtered imaging, and the CT sequences for the different spectral bands are reconstructed with the TV-OSEM algorithm. Then the multi-spectral CT sequences are fused with both the traditional DCM and an improved DCM. The traditional DCM is strictly monoenergetic; because the filtered spectra are not, its fusion results cannot represent all components of the object sequence. The improved DCM addresses this problem by adopting a new voxel definition and introducing prior components as references in the fusion, calibrating the other materials in the fusion result against the prior materials so that the spatial distribution of each component in the object is recovered accurately. Simulations show that the method fuses multi-spectral CT sequences with physically correct characterization: the different components in the CT sequences are distinguished, and the gray levels of the fused image are physically referable, benefiting subsequent quantitative CT inspection.

9.
Military, navigation, and concealed weapon detection applications use different imaging modalities, such as visible and infrared, to monitor a targeted scene; these modalities provide complementary information. For better situational awareness, this complementary information has to be integrated into a single image. Image fusion is the process of integrating complementary source information into a composite image. In this paper, we propose a new image fusion method based on saliency detection and two-scale image decomposition. The visual saliency extraction process introduced in this paper highlights the salient information of the source images well, and a new weight-map construction process based on visual saliency integrates the visually significant information of the source images into the fused image. In contrast to most multi-scale image fusion techniques, the proposed technique uses only a two-scale decomposition, making it fast and efficient. Our method is tested on several image pairs and evaluated qualitatively by visual inspection and quantitatively with objective fusion metrics. Compared with state-of-the-art multi-scale fusion techniques, the proposed method performs comparably or better.
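A hedged sketch of the two-scale idea above on 1-D signals: each source is split into a base layer (a box blur stands in for the average filter) and a detail layer, and saliency-derived weights blend the sources. The saliency measure used here (absolute deviation from the global mean) is an assumption for illustration, not the paper's detector.

```python
# Two-scale decomposition plus saliency-weighted fusion; saliency definition
# is illustrative only.

def two_scale(img, radius=1):
    """Split a 1-D signal into a base (box blur) and detail layer."""
    n = len(img)
    base = [sum(img[min(max(i + d, 0), n - 1)] for d in range(-radius, radius + 1))
            / (2 * radius + 1) for i in range(n)]
    detail = [v - b for v, b in zip(img, base)]
    return base, detail

def saliency(img):
    """Stand-in saliency: absolute deviation from the global mean."""
    mean = sum(img) / len(img)
    return [abs(v - mean) for v in img]

def fuse_two_scale(a, b):
    """Blend both layers of each source with per-pixel saliency weights;
    the result is a convex combination of the two sources."""
    base_a, det_a = two_scale(a)
    base_b, det_b = two_scale(b)
    w = [sa / (sa + sb) if sa + sb > 0 else 0.5
         for sa, sb in zip(saliency(a), saliency(b))]
    return [wi * (ba + da) + (1 - wi) * (bb + db)
            for wi, ba, da, bb, db in zip(w, base_a, det_a, base_b, det_b)]

ir = [0.0, 0.0, 10.0, 0.0]   # "infrared": one hot target pixel
vis = [5.0, 5.0, 5.0, 5.0]   # "visible": flat background
fused = fuse_two_scale(ir, vis)
```

Only two filtering passes are needed per source, which is the efficiency argument the abstract makes against deeper multi-scale pyramids.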

10.
To quickly and effectively identify camouflaged mobile targets of an adversary in the field, a target recognition system combining spectral detection with video image recognition was designed. Video image recognition acquires a two-dimensional image of the surveyed area, spectral detection then identifies the target, and the target is finally reconstructed at the corresponding position in the image, visualizing the recognition result. A functional relation describing the targets the system can identify was derived theoretically, and quantitative recognition experiments were conducted accordingly. Using a vehicle as the simulated mobile target, spectral measurements were taken at different distances against flat wasteland, bushes, and abandoned buildings as backgrounds, for an exposed target, a camouflage-painted target, and a target covered by camouflage material. The results show that the test background affects spectral detection, with continuous backgrounds favoring target recognition; covering with camouflage material is the hardest case to identify, and the signal-to-noise ratio drops as the distance between target and system increases. In summary, spectral detection overcomes the inability of conventional image-based target recognition to identify camouflaged targets and enables their effective recognition.

11.
We employ target detection to improve the performance of feature-based fusion of infrared and visible dynamic images, forming a novel fusion scheme. First, target detection is used to segment the source image sequences into target and background regions. Then the dual-tree complex wavelet transform (DT-CWT) is used to decompose all the source image sequences, and different fusion rules are applied in the target and background regions to preserve the target information as much as possible. Real-world infrared and visible image sequences are used to validate the performance of the proposed scheme: compared with previous fusion approaches for image sequences, it improves shift invariance, temporal stability and consistency, and computation cost.

12.
We introduce a new spectrum transform into the image fusion field and propose a novel fusion method based on the discrete fractional random transform (DFRNT). In the DFRNT domain, the high-amplitude spectrum (HAS) and low-amplitude spectrum (LAS) components carry different information from the original images, so different fusion rules can be adopted for the HAS and LAS components according to the fusion goal. The proposed method is applied to fuse real multispectral (MS) and panchromatic (Pan) images; the fused image is observed to preserve both the spectral information of the MS image and the spatial information of the Pan image. The spectrum distribution of the DFRNT is random and uniform, which helps guarantee that useful information is preserved.

13.
Design of a Four-Channel Visible-Light Multispectral Camera
冯姗 曾祥忠 《应用光学》2019,40(3):393-398
With the development of spectral technology, multispectral cameras have found wide application in agriculture, medicine, machine vision, and remote sensing. A compact four-channel visible-band multispectral camera is presented, measuring 107 mm×110 mm×74 mm and weighing 1 043 g, suitable for remote sensing aboard small UAVs. The optical system uses a prism beam-splitting design to avoid optical path differences and optical axis offsets. The four-channel camera uses a single FPGA controller to drive four image sensors simultaneously, achieving pixel-level synchronized acquisition across the four channels with fully identical exposure. Experimental results show that the camera outputs images in real time in the red, green, and blue bands and across the full band: it can output a single band at 60 fps or all four bands simultaneously at 15 fps, which benefits synchronized multi-band acquisition and research on multispectral image fusion.

14.
Integration of infrared and visible images is an active and important topic in image understanding and interpretation. In this paper, a new fusion method is proposed based on an improved multi-scale center-surround top-hat transform, which effectively extracts the feature and detail information of the source images. First, the multi-scale bright (dark) feature regions of the infrared and visible images are extracted at different scale levels by the improved multi-scale center-surround top-hat transform. Second, the feature regions at the same scale in both images are combined by a multi-judgment contrast fusion rule, and the final feature images are obtained by summing the feature images over all scales. Then a base image is calculated by applying a Gaussian fuzzy-logic combination rule to the two smoothed source images. Finally, the fused image is obtained by importing the extracted bright and dark feature images into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method is superior to current popular MST-based and morphology-based methods for infrared-visible image fusion.
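The bright-feature extraction step above builds on the classical grayscale white top-hat: the image minus its morphological opening, which isolates bright structures smaller than the structuring element. The 1-D sketch below shows that primitive only; the center-surround refinement and multi-scale machinery of the paper are omitted, and the window size is an assumption.

```python
# Grayscale morphology on 1-D signals: white top-hat = signal - opening.

def erode(sig, size):
    """Minimum over a sliding window (flat structuring element)."""
    n, r = len(sig), size // 2
    return [min(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(sig, size):
    """Maximum over a sliding window (flat structuring element)."""
    n, r = len(sig), size // 2
    return [max(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def white_top_hat(sig, size=3):
    """Signal minus its opening (erosion then dilation): keeps bright
    features narrower than the structuring element, suppresses the rest."""
    opening = dilate(erode(sig, size), size)
    return [v - o for v, o in zip(sig, opening)]
```

A narrow bright peak survives the top-hat unchanged, while a broad plateau (wider than the window) is removed entirely; varying the window size is what produces the multi-scale feature stack the abstract describes.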

15.
The increasing availability and deployment of imaging sensors operating in multiple spectral bands has driven a large research effort in color image fusion, resulting in a plethora of pixel-level image fusion algorithms. This study presents a simple and fast fusion approach for color night vision. The contrast of the infrared and visible images is adjusted by local histogram equalization, and the two enhanced images are fused into the three components of a Lab image with a simple linear fusion strategy. To obtain false-color images with a natural daytime color appearance, color is transferred from a reference image to the fused image in a simplified Lab space, and a stretch factor is introduced into the transfer equation of the b channel to enhance the contrast between target and background. Experimental results on three different data sets show that hot targets pop out in intense colors while the background details present a natural color appearance. Target detection experiments also show that the presented method performs better than former methods in terms of target recognition area, detection rate, color distance, and running time.
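The per-channel color transfer above can be sketched as Reinhard-style statistics matching: shift and scale a fused channel so its mean and standard deviation match those of the reference daytime image, with an extra stretch factor mimicking the b-channel contrast boost the abstract mentions. Channel names and the stretch value are assumptions for illustration.

```python
# Reinhard-style per-channel statistics transfer with an optional stretch
# factor (stands in for the b-channel contrast enhancement).
import math

def transfer_channel(src, ref, stretch=1.0):
    """Match mean/std of src to ref; stretch > 1 widens the contrast."""
    def stats(xs):
        m = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return m, sd
    ms, ss = stats(src)
    mr, sr = stats(ref)
    scale = stretch * (sr / ss if ss > 0 else 1.0)
    return [(x - ms) * scale + mr for x in src]

# Toy channel: the fused values are remapped onto the reference statistics.
out = transfer_channel([0.0, 10.0], [100.0, 120.0])
```

Because only two scalars per channel are exchanged, the transfer is essentially free computationally, which is consistent with the running-time advantage claimed above.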

16.
In this paper, a fusion method named NNSP is developed for infrared and visible image fusion, in which non-negative sparse representation is used to extract the features of the source images. The non-negative sparse representation coefficients are characterized according to their activity and sparseness levels. Several methods are developed to detect the salient features of the source images, including the target and contour features in the infrared images and the texture features in the visible images. A regional consistency rule is proposed to obtain the fusion guide vector that determines the fused image automatically, so that the features of the source images are seamlessly integrated into the fused image. Compared with classical and state-of-the-art methods, our experimental results indicate that NNSP achieves better fusion performance in both noiseless and noisy situations.

17.
Image fusion for visible and infrared images is a significant task in image analysis: both the target regions of the infrared image and the abundant detail of the visible image should be carried into the fused result, so details from the original images must be preserved or even enhanced during fusion. In this paper, an algorithm using pixel-value-based saliency detection and detail-preserving image decomposition is proposed. First, a multi-scale decomposition of the original infrared and visible images is constructed with a weighted least squares filter. Second, a pixel-value-based saliency map is designed and applied at each decomposition level. Finally, the fusion result is reconstructed by synthesizing the different scales with synthetic weights. Because saliency extraction and the multi-scale decomposition preserve and enhance the information of the original signals, the fusion algorithm performs robustly and well. The proposed approach is compared with other state-of-the-art methods on several image sets to verify its effectiveness and robustness.

18.
Image fusion refers to techniques that integrate complementary information from multiple image sensors' data so that the new images are more suitable for human visual perception. This paper focuses on the low color contrast of linear fusion algorithms combined with the color transfer method. First, the contrast of the infrared and visible images is enhanced with local histogram equalization and median filtering. The two enhanced images are then fused into the three components of a Lab image with a simple linear fusion strategy, and a scaling factor is introduced into the transfer equation of the b channel to enhance the color contrast between target and background. Experimental results on three different data sets show that both hot and cold targets pop out in intense colors while the background details present a natural color appearance. Target detection experiments on target recognition area, detection rate, and target-background discrimination also show that the presented method performs better than former methods.

19.
Multi-focus image fusion combines multiple source images with different focus points into one image, so that the resulting image appears all in focus. To improve the accuracy of focused-region detection and the fusion quality, a novel multi-focus image fusion scheme based on robust principal component analysis (RPCA) and a pulse-coupled neural network (PCNN) is proposed. In this method, registered source images are decomposed into principal component matrices and sparse matrices by RPCA decomposition. Local sparse features computed from the sparse matrix form a composite feature space representing the important information of the source images, and these features are fed to the PCNN to stimulate its neurons. The focused regions of the source images are detected from the firing maps of the PCNN and integrated to construct the final fused image. Experimental results demonstrate the superiority of the proposed scheme over existing methods and highlight its expediency and suitability.

20.
To address the low resolution of gray-level image fusion and the unnatural colors, inconsistent with human visual perception, produced by existing color image fusion methods, a fusion method for infrared and color visible images based on Snake-model region detection and the nonsubsampled contourlet transform (NSCT) is proposed. First, the color visible image is transformed into IHS (intensity, hue, saturation) space to extract the intensity component, and the target region of the infrared image is detected with a Snake model. The intensity component and the target-substituted infrared image are then decomposed with the NSCT; the resulting high-frequency coefficients are fused by the pixel-wise maximum-absolute-value rule, and the low-frequency coefficients by a weighted rule based on intensity remapping. An inverse NSCT of the fused coefficients yields the intensity component of the fused image, and an inverse color-space transform finally produces the fused image. Experimental results show that the proposed method preserves the high resolution and natural colors of the visible image while accurately retaining the target information detected in the infrared image, producing fused images with good visual quality and superior overall metrics.

