Similar Articles
19 similar articles found.
1.
A bag-of-visual-words model fusing global and local deep features (GLDFB) is proposed. High-level features extracted from multiple layers of a deep convolutional neural network are re-encoded and fused through the bag-of-visual-words model, and a support vector machine classifies the fused features. The method fully exploits convolutional-layer features, which carry local scene detail, and fully-connected-layer features, which carry global scene information, to represent remote-sensing scene images efficiently. Experiments on two remote-sensing scene datasets of different scales show that, compared with existing methods, the proposed method has clear advantages in high-level feature representation and classification accuracy.

2.
《光学技术》2013,(6):510-516
Stereo matching, which aims to recover an accurate disparity map from two images taken with a certain parallax, is a central and difficult problem in computer vision. To obtain a dense, high-precision disparity map quickly and efficiently, an adaptive stereo matching algorithm based on local edge features is proposed, building on a study of both edge-feature-based and local-window-based matching. The algorithm first selects the local matching window size adaptively and computes the weights of the positions inside it to perform an initial match; it then uses the high-precision sparse disparity map obtained from edge-feature matching as a constraint to correct mismatched points, finally producing an accurate, dense disparity map. Experiments show that the algorithm gives good results and has considerable practical value.

3.
To address the complexity and limited accuracy of benign/malignant brain tumour classification, a classification model based on fusing multi-scale and channel features is proposed. The model uses ResNeXt as its backbone. First, a multi-scale feature extraction module based on dilated convolution replaces the first convolutional layer, using dilation rates to capture image information at different receptive fields and combining global features with locally salient features. Second, a channel attention module is added to fuse channel information, increasing attention on the tumour region and reducing attention on redundant information. Finally, a combination of a linear learning-rate decay schedule, label smoothing, and transfer learning on medical images improves the model's learning and generalisation ability. Experiments on the BraTS2017 and BraTS2019 datasets reach accuracies of 98.11% and 98.72%, respectively. Compared with classical models and other state-of-the-art methods, the model effectively reduces the complexity of the classification process and improves the accuracy of benign/malignant brain tumour classification.

4.
To improve detection accuracy for multi-scale remote-sensing targets in complex scenes, a feature-enhanced detection algorithm based on the multi-scale Single Shot MultiBox Detector (SSD) is proposed. First, a shallow feature enhancement module is designed for the shallow layers of the SSD feature pyramid to strengthen feature extraction for small objects; then a deep feature fusion module replaces the deep layers of the pyramid to strengthen deep feature extraction; finally, the extracted image features are matched against candidate boxes of different aspect ratios to detect and localise remote-sensing targets at different scales. Experimental results on an optical remote-sensing image dataset show that the algorithm adapts to remote-sensing targets against different backgrounds and effectively improves detection accuracy in complex scenes. In extended experiments, the algorithm also outperforms SSD on blurred targets.

5.
Fast Object Recognition Based on Local Scale-Invariant Features
A method for extracting local scale-invariant features from images is presented and applied to object recognition. To improve real-time performance, a hybrid multi-scale representation combining image pyramids and scale space is proposed: feature points in the image under recognition are detected and matched in order from large scales to small scales until recognition completes, which effectively improves recognition speed.
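The large-to-small-scale search order can be illustrated on a 1-D signal: match coarsely over a subsampled set of offsets, then refine only around the coarse hit. This is a minimal sketch of the idea, not the paper's pyramid/scale-space implementation:

```python
# Coarse-to-fine matching sketch: a coarse pass evaluates every `step`-th
# offset, and a fine pass searches exhaustively only in a small window
# around the coarse winner, mirroring the large-to-small scale order.

def sad(signal, template, offset):
    """Sum of absolute differences at a given offset."""
    return sum(abs(signal[offset + i] - t) for i, t in enumerate(template))

def coarse_to_fine_match(signal, template, step=4):
    offsets = range(0, len(signal) - len(template) + 1)
    # Coarse pass: subsampled offsets only.
    coarse = min((o for o in offsets if o % step == 0),
                 key=lambda o: sad(signal, template, o))
    # Fine pass: full-resolution search around the coarse hit.
    lo = max(0, coarse - step)
    hi = min(len(signal) - len(template), coarse + step)
    return min(range(lo, hi + 1), key=lambda o: sad(signal, template, o))
```

The speed-up comes from evaluating the costly similarity measure at only a fraction of the positions, which is the same motivation as the paper's pyramid ordering.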

6.
Existing hyperspectral video trackers tend to lose accuracy when the target scale changes. To address this, a hyperspectral tracking algorithm based on spectral-matching dimensionality reduction and feature fusion is proposed. First, the target spectrum is estimated from the local target spectrum and a threshold, and a naive correlation between this spectrum and the hyperspectral image reduces the image's dimensionality so that deep features of the target can be extracted. Local variance is then used to identify the target region, from which 3D histogram-of-oriented-gradients (HOG) features are extracted. To preserve both the spectral information of the hyperspectral image and the semantic information of the deep features, channel-wise convolutional fusion produces more discriminative fused features. Finally, the fused features are fed into a correlation filter, and a scale-pool strategy improves tracking robustness under target scale change. Experimental results show that the proposed tracker performs better under the scale-variation challenge.

7.
Curve-Constrained SIFT Feature Matching in Underwater Environments
张强  郝凯  李海滨 《光学学报》2014,34(2):215003-197
Underwater binocular images no longer satisfy the in-air epipolar constraint, and the scale-invariant feature transform (SIFT) matcher suffers a high false-match rate on underwater images. To address these problems, a curve-constrained underwater feature matching algorithm is proposed. The binocular cameras are calibrated to obtain the relevant parameters, and a reference image and an image to be matched are captured. SIFT matches the two images while, for each feature point extracted from the reference image, its corresponding curve in the other image is derived; this curve serves as a constraint for deciding whether the candidate feature point lies on the curve, so that false matches are rejected and accuracy improves. Experiments show that the algorithm outperforms SIFT, effectively rejects false matches, and improves matching precision by about 12% over SIFT, resolving SIFT's high false-match rate in underwater binocular stereo matching.
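The curve-constraint rejection step can be sketched generically: each reference feature predicts a curve in the other image, and a candidate match is kept only if its target point lies within a tolerance of that curve. The sampled-distance test below, and the identity-line "curve" in the test, are stand-ins for the refraction-derived curve of the paper:

```python
# Generic curve-constraint match filter: reject a (reference, target)
# correspondence when the target point is too far from the curve that
# the reference point predicts. `curve_for` maps a reference point to a
# callable y = f(x); both are illustrative assumptions.

def point_to_curve_distance(point, curve, xs):
    """Approximate distance from `point` to y = curve(x), sampled over xs."""
    px, py = point
    return min(((px - x) ** 2 + (py - curve(x)) ** 2) ** 0.5 for x in xs)

def filter_matches(matches, curve_for, tol=2.0):
    """Keep (ref_pt, tgt_pt) pairs whose target point lies near the
    curve predicted from the reference point."""
    xs = [x * 0.5 for x in range(0, 201)]  # sample the curve on [0, 100]
    return [(r, t) for r, t in matches
            if point_to_curve_distance(t, curve_for(r), xs) <= tol]
```

In the air-medium special case the predicted curve degenerates to the familiar epipolar line, which is why this is a strict generalisation of the usual epipolar filter.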

8.
To solve the problem that fused images obtained by generative-adversarial fusion methods cannot simultaneously preserve the salient targets of infrared images and the texture details of visible images, an interactive-attention generative-adversarial fusion method for infrared and visible images is proposed. First, the generator adopts a two-branch encoder with shared weights, using multi-scale aggregated convolution modules to extract the deep features of each source image; second, in the fusion layer, an interactive-attention fusion model establishes, across the local features of the two image types, a global...

9.
A Fast SIFT Image Stitching Method Adaptive to Local Image Features
陈月  赵岩  王世刚 《中国光学》2016,9(4):415-422
To address the heavy computation and poor real-time performance of current image stitching, this paper proposes a fast scale-invariant feature transform (SIFT) stitching method that adapts to local image features. First, the images to be stitched are divided into blocks and the feature type of each local block is determined; different simplified extraction methods are then applied adaptively to obtain each block's feature points. Next, the transformation matrix is estimated from feature matches, with the RANSAC algorithm removing false matches. Finally, image blending produces the stitched result. Experiments with the proposed method on three image sets show that, compared with the standard stitching pipeline, the improved method is 30%–45% faster. The method therefore improves stitching efficiency while preserving stitching quality, overcoming the high computational complexity of image stitching, and has practical application value.
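The RANSAC pruning step can be sketched with a deliberately simple motion model. The paper estimates a full transformation matrix between images; this illustration fits a pure 2-D translation (one correspondence per random sample) to keep the example short:

```python
# Minimal RANSAC sketch for pruning false feature matches under a 2-D
# translation model: repeatedly hypothesise a translation from one random
# correspondence, count the matches consistent with it, and keep the
# largest consensus set. Tolerances and the model are illustrative.

import random

def ransac_translation(pairs, tol=1.0, iters=100, seed=0):
    """Return the inlier subset of (src, dst) point pairs under the best
    translation found by random sampling."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (sx, sy), (dx, dy) = rng.choice(pairs)   # one pair fits a translation
        tx, ty = dx - sx, dy - sy
        inliers = [((ax, ay), (bx, by)) for (ax, ay), (bx, by) in pairs
                   if abs(bx - (ax + tx)) <= tol and abs(by - (ay + ty)) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Swapping the one-point translation hypothesis for a four-point homography hypothesis recovers the usual stitching setup.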

10.
A Feature Matching Method Based on Local Epipolar Rectification
Direct grey-level area correlation under the epipolar constraint often fails when matching corresponding features for 3D measurement and structure reconstruction from multi-view images. To address this, a feature matching algorithm based on local epipolar rectification is proposed. The principle of epipolar-constrained matching is reviewed, the shortcomings of correlation methods under this constraint are analysed, and the constraints between image features under various camera pose configurations are examined. On this basis, a local epipolar-region rectification method is proposed: the region to be matched is rectified so that automatic correlation matching can proceed effectively, and least-squares matching then yields high-precision results. Experiments confirm the effectiveness of the new algorithm, which greatly improves the reliability, speed, and accuracy of automatic matching.

11.
To further improve speaker recognition performance, feature fusion and model fusion are proposed. The feature fusion method fuses deep and shallow features; because different feature levels are complementary, the fused feature describes speaker characteristics more comprehensively than any single feature. The model fusion method fuses i-vectors extracted from different speaker recognition systems, combining the advantages of those systems. Experimental results show the effectiveness of the proposed methods: compared with the state-of-the-art system on the CASIA North and South dialect corpus, the feature fusion and model fusion systems achieve about 54.8% and 69.5% relative improvement in equal error rate (EER), respectively.
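The equal error rate used as the metric here can be computed by sweeping a decision threshold over genuine and impostor scores and locating the point where the false-acceptance and false-rejection rates cross. A plain-Python sketch with illustrative scores:

```python
# Approximate equal-error-rate (EER) computation: for each candidate
# threshold, the false-rejection rate (genuine scores below threshold)
# and false-acceptance rate (impostor scores at or above it) are
# measured; the EER is taken where the two rates are closest.

def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate EER: mean of FAR and FRR at the threshold where they cross."""
    thresholds = sorted(set(genuine_scores) | set(impostor_scores))
    best = None
    for t in thresholds:
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]
```

A relative EER improvement such as the 54.8% quoted above is then simply `(eer_baseline - eer_new) / eer_baseline`.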

12.
Yi Chai  Huafeng Li  Xiaoyang Zhang 《Optik》2012,123(7):569-581
In this paper, an efficient multifocus image fusion approach is proposed based on the local feature contrast of multiscale products in the nonsubsampled contourlet transform (NSCT) domain. To improve the robustness of the fusion algorithm to noise and to select the coefficients of the fused image properly, multiscale products, which distinguish edge structures from noise more effectively in the NSCT domain, are developed and introduced into the image fusion field. The selection principles for the different subband coefficients obtained by NSCT decomposition are discussed in detail. To improve the quality of the fused image, novel local feature contrast measures, shown to be better suited to the human visual system and to extract more useful detail from the source images, are developed and used to select coefficients from the sharp parts of the subimages to compose the fused image. Experimental results demonstrate that the proposed method performs very well on both noisy and noise-free multifocus images and outperforms conventional methods in both visual quality and objective evaluation criteria.

13.
Multifocus image fusion aims to overcome the finite depth of field of imaging cameras by combining information from multiple images of the same scene. For this fusion problem, a novel algorithm is proposed based on multiscale products of the lifting stationary wavelet transform (LSWT) and an improved pulse-coupled neural network (PCNN) in which the linking strength of each neuron is chosen adaptively. To select the coefficients of the fused image properly when the source multifocus images are noisy, the selection principles for the low-frequency and bandpass subband coefficients are discussed separately. For the low-frequency subband coefficients, a new sum-modified Laplacian (NSML) of the low-frequency subband, which effectively represents the salient features and sharp boundaries of the image in the LSWT domain, is the input that motivates the PCNN neurons; for the high-frequency subband coefficients, a novel local neighbourhood sum of the Laplacian of multiscale products is developed and used as a high-frequency feature to motivate the neurons. Coefficients in the LSWT domain with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed approach outperforms traditional discrete wavelet transform (DWT)-based, LSWT-based and LSWT-PCNN-based image fusion methods in both visual quality and objective evaluation, even when the source images are noisy.
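The sum-modified-Laplacian idea — a clarity measure built from second differences, used to decide which source supplies each fused coefficient — can be sketched on plain 1-D arrays. This choose-max sketch deliberately omits the LSWT decomposition and PCNN firing dynamics of the paper:

```python
# Illustrative clarity-driven fusion: a modified-Laplacian response
# (absolute second difference) scores local sharpness, and a choose-max
# rule picks, per position, the source whose response is larger.

def modified_laplacian(row, i):
    """|2*x[i] - x[i-1] - x[i+1]| for a 1-D signal (0 at the borders)."""
    if i == 0 or i == len(row) - 1:
        return 0.0
    return abs(2 * row[i] - row[i - 1] - row[i + 1])

def choose_max_fusion(a, b):
    """Per-sample fusion of two equal-length 1-D signals: keep the sample
    from whichever source has the larger modified-Laplacian response."""
    return [a[i] if modified_laplacian(a, i) >= modified_laplacian(b, i)
            else b[i] for i in range(len(a))]
```

In the paper this comparison happens on subband coefficients and the "vote" is mediated by PCNN firing times rather than a direct maximum; the sketch only shows the underlying clarity-selects-coefficient principle.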

14.
Recently, the rapid development of the Internet of Things has contributed to the emergence of telemedicine. However, online diagnosis requires doctors to analyse multiple multi-modal medical images, which is inconvenient and inefficient; multi-modal medical image fusion has been proposed to solve this problem. Owing to their outstanding feature extraction and representation capabilities, convolutional neural networks (CNNs) have been widely used in medical image fusion. However, most existing CNN-based methods calculate their weight maps with a simple weighted-average strategy, which weakens the quality of the fused images through the influence of inessential information. In this paper, we propose a CNN-based CT and MRI image fusion method (MMAN) that adopts a visual-saliency-based strategy to preserve more useful information. First, a multi-scale mixed attention block is designed to extract features; it gathers more helpful information and refines the extracted features at both the channel and spatial levels. Then, a visual-saliency-based fusion strategy fuses the feature maps. Finally, the fused image is obtained via reconstruction blocks. Compared with other state-of-the-art methods, our method preserves more textural detail, clearer edge information, and higher contrast.

15.
In this paper, we design an infrared (IR) and visible (VIS) image fusion method based on unsupervised dense networks, termed TPFusion. Activity-level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. Deep learning-based methods have been proposed to handle this problem; however, for multi-modality image fusion, a single shared network cannot extract effective feature maps from source images produced by different sensors. TPFusion avoids this issue. First, we extract the textural information of the source images; then two densely connected networks are trained to fuse the textural information and the source images, respectively. In this way, more textural detail is preserved in the fused image. Moreover, the loss functions constraining the two densely connected convolutional networks are designed according to the characteristics of the textural information and the source images, so the fused image retains more of the sources' textural information. To validate the method, we conduct comparison and ablation experiments with qualitative and quantitative assessments. The ablation experiments confirm the effectiveness of TPFusion, and compared with existing advanced IR and VIS image fusion methods, our results are better in both objective and subjective terms: qualitatively, our fused images show better contrast and abundant textural detail; quantitatively, TPFusion outperforms existing representative fusion methods.

16.
To solve the problem of fusing multifocus images of the same scene, a novel algorithm based on focused-region detection and multiresolution analysis is proposed. To combine the advantages of spatial-domain and transform-domain fusion methods, focused-region detection and a new multiscale-transform (MST) fusion method are used to guide pixel combination. First, an initial fused image is obtained with a novel multiresolution image fusion method. Pixels of the source images that are similar to the corresponding pixels of this initial fused image are considered to lie in sharply focused regions; the initial focused regions are determined in this way and post-processed with morphological opening and closing. Pixels inside the focused regions of each source image are then selected as pixels of the fused image, while initial fused-image pixels located at the borders of focused regions are retained, yielding the final fused image. Experimental results show that the proposed approach is effective and fuses multifocus images better than several current methods.

17.
Multi-focus image fusion combines multiple source images with different focus points into one image, so that the resulting image appears all in focus. To improve the accuracy of focused-region detection and the fusion quality, a novel multi-focus image fusion scheme based on robust principal component analysis (RPCA) and a pulse-coupled neural network (PCNN) is proposed. In this method, registered source images are decomposed into principal component matrices and sparse matrices by RPCA decomposition. Local sparse features computed from the sparse matrix form a composite feature space representing the important information in the source images, which is input to the PCNN to motivate its neurons. The focused regions of the source images are detected from the PCNN firing maps and integrated to construct the final fused image. Experimental results demonstrate the superiority of the proposed scheme over existing methods and highlight its expediency and suitability.

18.
3D Volume Reconstruction Based on Feature Matching of CT Slices
The reading of medical anatomical slice image data, feature extraction, and feature matching are described in detail. A method for reading the anatomical data is introduced, and the two stages of slice matching, feature extraction and feature matching, are described. A new, effective feature matching method is proposed. All slice images are pre-processed before matching (filtering, binarisation, thinning, etc.), and features are extracted region by region. Matching proceeds in two steps: feature points are first matched by the distances between corresponding points, and the slices are then matched using moments. 3D isosurfaces and interpolation are used in the volume reconstruction. Experimental results confirm that this feature matching method is both fast and accurate.

19.

Copyright © 北京勤云科技发展有限公司  京ICP备09084417号