Similar Documents (19 results)
1.
An ear recognition method fusing scale-invariant features and geometric features   Cited by: 3 (self-citations: 1, other citations: 2)
田莹  苑玮琦 《光学学报》2008,28(8):1485-1491
The key to improving ear recognition rates lies in feature extraction and representation. The scale-invariant feature transform (SIFT) is a local point-feature extraction algorithm: it searches for extrema in scale space and extracts feature vectors that are invariant to image scale and rotation changes and highly adaptable to illumination changes and image deformation. This paper uses SIFT to extract structural feature points from outer-ear images and form stable feature descriptors; to overcome the problem that several local descriptors within one image can be very similar, a geometric feature of the ear contour is fused into the SIFT descriptor. Finally, the Euclidean distance between feature vectors is used as the similarity measure between two images for ear recognition. Experiments on an ear image database show that the method not only extracts ear features effectively and achieves a high recognition rate with few features, but is also robust to rigid transformations of the ear image.
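The pipeline described (SIFT keypoints augmented with an ear-contour geometric cue and matched by Euclidean distance) can be sketched as follows. This is a minimal illustration, not the authors' code; the contour-distance term and its weight are assumptions standing in for the paper's ear-contour geometric feature.

```python
# Minimal sketch: SIFT descriptors fused with a hypothetical contour-based geometric cue,
# matched by Euclidean distance (requires opencv-python >= 4.4).
import cv2
import numpy as np

def ear_features(gray, contour_weight=0.1):
    """Extract SIFT descriptors and append an assumed contour-distance geometric cue."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return None
    # Hypothetical geometric feature: normalized distance of each keypoint to the
    # centroid of the largest edge contour (a stand-in for the ear outline).
    edges = cv2.Canny(gray, 50, 150)
    cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cx, cy = gray.shape[1] / 2.0, gray.shape[0] / 2.0
    if cnts:
        m = cv2.moments(max(cnts, key=cv2.contourArea))
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    geom = np.array([[np.hypot(k.pt[0] - cx, k.pt[1] - cy)] for k in kps], np.float32)
    geom /= (geom.max() + 1e-6)
    return np.hstack([desc, contour_weight * geom])  # fused descriptor

def match_score(f1, f2):
    """Similarity as the mean Euclidean distance of nearest-neighbour descriptor pairs."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = bf.match(f1, f2)
    return np.mean([m.distance for m in matches]) if matches else np.inf
```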

2.
This method uses local invariant moments computed on edge regions as recognition features and combines multiple neural networks to recognize defective extended targets effectively. The translation, rotation and scale invariance of the edge-region local invariant moments in the discrete case is discussed. On this basis, BP artificial neural networks are built for multiple processing regions of the target, and the combined classification results of the networks are used to raise the recognition rate for defective targets. Experimental results show that the method correctly recognizes defective extended targets and has a clear advantage especially for extended targets with large missing parts.
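A minimal sketch of the kind of pipeline described: translation/rotation/scale-invariant (Hu) moments computed on edge sub-regions, with one small network per region and the outputs later combined. The grid partition, Hu moments, and MLP below stand in for the paper's local invariant moments and BP networks.

```python
# Sketch: edge-region Hu moments + one small neural network per region.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def edge_region_moments(gray, grid=(2, 2)):
    """Split the edge map into sub-regions and compute Hu moments per region."""
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = edges[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hu = cv2.HuMoments(cv2.moments(block)).flatten()
            feats.append(np.sign(hu) * np.log1p(np.abs(hu)))  # log-scale for numerical stability
    return feats  # one feature vector per sub-region

def train_region_nets(images, labels, grid=(2, 2)):
    """Train one network per sub-region; a final label could be taken by majority vote."""
    per_region = [[] for _ in range(grid[0] * grid[1])]
    for img in images:
        for k, f in enumerate(edge_region_moments(img, grid)):
            per_region[k].append(f)
    return [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(np.array(x), labels)
            for x in per_region]
```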

3.

4.
A fast SIFT image stitching method adaptive to local image features   Cited by: 1 (self-citations: 0, other citations: 1)
陈月  赵岩  王世刚 《中国光学》2016,9(4):415-422
To address the heavy computation and poor real-time performance of current image stitching, this paper proposes a fast scale-invariant feature transform (SIFT) stitching method that adapts to local image features. First, the images to be stitched are divided into blocks and the feature type of each local block is determined; different simplified extraction methods are then applied adaptively to extract feature points from each block. Next, the transformation matrix is obtained through feature matching, and the RANSAC algorithm is used to remove false matches. Finally, image fusion produces the stitched result. Experiments on three groups of images show that, compared with the standard stitching method, the improved method is 30%-45% faster. The method therefore raises stitching efficiency while preserving stitching quality, overcoming the high computational complexity of image stitching, and has practical application value.
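The matching/RANSAC/fusion steps correspond to a standard SIFT stitching pipeline, which can be sketched as below; the paper's block-wise adaptive simplification of feature extraction is not reproduced here.

```python
# Standard-SIFT stitching sketch: ratio-test matching, RANSAC homography, simple warp-and-paste.
import cv2
import numpy as np

def stitch_pair(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    # Ratio-test matching followed by RANSAC to discard false matches.
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp img2 into img1's frame and paste img1 on top (blending of the overlap omitted).
    canvas = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
    canvas[:img1.shape[0], :img1.shape[1]] = img1
    return canvas
```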

5.
A sequence-image fusion algorithm based on an approximate scale-invariant feature transform (in English)   Cited by: 1 (self-citations: 1, other citations: 0)
Based on an analysis of the scale-invariant feature transform (SIFT) algorithm, an approximate SIFT algorithm is proposed. It changes the framework of the traditional SIFT algorithm and treats the SIFT descriptor as a special Harris operator, retaining the advantages of the SIFT descriptor while reducing the computational cost. In addition, a new purification algorithm for feature point pairs is given to improve matching accuracy. Experimental results show that the approximate SIFT algorithm greatly shortens processing time without degrading matching performance, that purification significantly improves the matching performance of the feature point pairs, and that the resulting panorama has smooth transitions with no visible seams in the overlap regions.
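One way to read the idea of treating the SIFT descriptor as a special Harris operator is to detect points with a Harris-style corner detector and describe them with SIFT, skipping the scale-space extrema search. The sketch below follows that reading; the exact approximation and the purification algorithm are not public here, so Lowe's ratio test stands in for the latter.

```python
# Hedged sketch of an "approximate SIFT": Harris-style detection + SIFT description only.
import cv2
import numpy as np

def approx_sift(gray, max_corners=500):
    # Harris-based corner detection instead of the full scale-space search.
    pts = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True, k=0.04)
    kps = [cv2.KeyPoint(float(x), float(y), 8) for [[x, y]] in pts]
    # Describe the Harris corners with the SIFT descriptor.
    kps, desc = cv2.SIFT_create().compute(gray, kps)
    return kps, desc

def purify_matches(knn_matches, ratio=0.7):
    """Stand-in for the paper's match-purification step: Lowe's ratio test."""
    return [m for m, n in knn_matches if m.distance < ratio * n.distance]
```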

6.

7.
A new target tracking method for infrared imaging terminal guidance   Cited by: 1 (self-citations: 1, other citations: 0)
陈冰  赵亦工  李欣 《光子学报》2014,38(11):3034-3039
To track infrared targets stably during the terminal guidance phase of a missile, an infrared target tracking algorithm based on the scale-invariant feature transform (SIFT) is proposed. The image texture features extracted by SIFT are scale and rotation invariant; the tracking algorithm extracts SIFT features from both the target template and the image to be tracked. Matching SIFT feature point pairs between the template and the image are selected by the minimum Euclidean distance criterion, an affine model describing the mapping between the two images is fitted to these pairs, and from it the target centre position is estimated and the template size adjusted. Simulation results show that the algorithm achieves stable tracking of targets against infrared ground clutter during terminal guidance, with better tracking accuracy and stability than traditional methods.
Keywords: terminal guidance tracking | scale-invariant feature transform | feature matching | affine model
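A minimal sketch of the tracking step described: SIFT features of the template and the frame are matched by minimum Euclidean distance, an affine model is fitted to the matched pairs with RANSAC, and the target centre and template scale are updated from it. The function names and the 50-match cap are illustrative assumptions.

```python
# Sketch: SIFT matching + affine model fitting for one tracking step.
import cv2
import numpy as np

def track_step(template, frame, prev_center):
    sift = cv2.SIFT_create()
    kt, dt = sift.detectAndCompute(template, None)
    kf, df = sift.detectAndCompute(frame, None)
    matches = sorted(cv2.BFMatcher(cv2.NORM_L2).match(dt, df), key=lambda m: m.distance)[:50]
    src = np.float32([kt[m.queryIdx].pt for m in matches])
    dst = np.float32([kf[m.trainIdx].pt for m in matches])
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)   # 2x3 affine model
    if A is None:
        return prev_center, 1.0
    center = A[:, :2] @ np.asarray(prev_center, np.float32) + A[:, 2]  # new target centre
    scale = float(np.sqrt(abs(np.linalg.det(A[:, :2]))))               # template rescale factor
    return tuple(center), scale
```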

8.
Triple-invariant optical image recognition using a correlation-matrix eigen-criterion method   Cited by: 1 (self-citations: 0, other citations: 1)
In this paper, the correlation matrix is first used as the eigen-criterion for feature compression in the Karhunen-Loève (K-L) transform, and a spatial synthetic matched filter is then prepared with the synthetic discriminant function method, which effectively reduces the number of feature images. With this filter, optical image recognition invariant to translation, rotation and scale is realized with a high signal-to-noise ratio.
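A minimal numerical sketch of K-L feature compression with the correlation matrix as the criterion; the optical synthetic-discriminant-function filter itself is not modelled here.

```python
# Sketch: K-L (Karhunen-Loève) compression using the correlation matrix of the training images.
import numpy as np

def kl_compress(images, n_keep):
    """images: array of shape (N, H*W) of flattened training images."""
    X = np.asarray(images, dtype=np.float64)
    R = (X.T @ X) / X.shape[0]                   # correlation matrix (no mean removal)
    w, V = np.linalg.eigh(R)                     # eigen-decomposition (ascending eigenvalues)
    basis = V[:, np.argsort(w)[::-1][:n_keep]]   # keep the n_keep dominant eigen-images
    return X @ basis, basis                      # compressed features and projection basis
```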

9.
For the problem of fire image texture recognition, an ICA-based fire image texture recognition algorithm using the Gabor wavelet transform is proposed and optimized for the characteristics of fire textures. The image to be recognized is first filtered with Gabor filters of different scales and orientations to obtain feature images; these feature images are converted into feature vectors that serve as the input to ICA, yielding a basis-vector subspace. The Gabor feature vectors of test images are then projected into this ICA subspace, and the resulting coefficient vectors serve as recognition features, which are finally classified by a support vector machine. Comparative experiments against the plain Gabor filter and ICA methods show that the algorithm improves the recognition rate on fire texture images by more than 5% over traditional methods, offering a new approach to fire image recognition.
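A minimal sketch of the described pipeline, assuming standard library components: a Gabor filter bank (OpenCV), FastICA (scikit-learn) for the basis subspace, and an SVM on the projection coefficients. The filter-bank parameters are illustrative.

```python
# Sketch: Gabor filter bank -> feature vectors -> ICA subspace -> SVM classifier.
import cv2
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def gabor_vector(gray, scales=(7, 11, 15), orientations=4):
    """Filter the image with Gabor kernels of several scales/orientations and stack the responses."""
    feats = []
    for ksize in scales:
        for i in range(orientations):
            kern = cv2.getGaborKernel((ksize, ksize), sigma=2.0,
                                      theta=i * np.pi / orientations, lambd=8.0, gamma=0.5)
            feats.append(cv2.filter2D(gray.astype(np.float32), -1, kern).ravel())
    return np.concatenate(feats)

def train(images, labels, n_components=40):
    X = np.array([gabor_vector(im) for im in images])
    ica = FastICA(n_components=n_components, max_iter=1000).fit(X)
    svm = SVC(kernel="rbf").fit(ica.transform(X), labels)   # coefficients in the ICA subspace
    return ica, svm
```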

10.
王灿进  孙涛  李正炜 《中国光学》2015,8(5):775-784
For the image characteristics of active laser imaging, a target recognition method based on a fast contour rotational moment is proposed. The concept of the rotational moment is introduced into target recognition; the proposed fast contour rotational moment feature (FCTF) captures contour size, position and regularity as well as target brightness, and is invariant to rotation and scaling. A fast computation scheme for the rotational moment improves the efficiency of the recognition algorithm. The algorithm first detects candidate target regions with the maximally stable extremal regions (MSER) algorithm and maps them to circular regions, then extracts local invariant features of the target regions with the fast rotational moment feature, and finally feeds them to a trained support vector machine classifier for recognition. Experiments show that, compared with existing target recognition methods for active laser imaging, the proposed algorithm achieves a higher recognition rate under rotation and affine transformations, with an average processing time of 9.68 ms per frame, meeting the real-time requirement of active laser imaging recognition systems.
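A hedged sketch of the detection and description stages: MSER regions are mapped to circular patches and described by regional moments. The paper's fast contour rotational moment (FCTF) is not publicly specified here, so ordinary Hu moments stand in for it.

```python
# Sketch: MSER region detection -> circular patch -> invariant regional moments (stand-in for FCTF).
import cv2
import numpy as np

def candidate_region_features(gray):
    """Detect stable regions, normalise each to a circular patch, and describe it with moments."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    feats = []
    for pts in regions:
        (cx, cy), r = cv2.minEnclosingCircle(pts)           # map the region to a circular area
        r = max(int(r), 4)
        x0, y0 = max(int(cx - r), 0), max(int(cy - r), 0)
        patch = gray[y0:int(cy + r), x0:int(cx + r)]
        hu = cv2.HuMoments(cv2.moments(patch)).flatten()     # rotation/scale-invariant stand-in for FCTF
        feats.append(np.sign(hu) * np.log1p(np.abs(hu)))
    return np.array(feats)
# The per-region vectors would then be fed to a pre-trained SVM classifier for recognition.
```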

11.
The conventional Gabor representation and the features extracted from it often perform fairly poorly at capturing the invariant features of objects. To address this issue, a global Gabor representation method for raised characters pressed on labels is proposed in this paper, where the representation requires only a few summations over the conventional Gabor filter responses. Features are then extracted from these new representations to construct the invariant features. Experimental results clearly show that the obtained global Gabor features provide good rotation, translation, and scale invariance. They are also insensitive to illumination conditions and noise changes. This demonstrates that Gabor filters can be reliably used for low-level feature extraction in image processing and that the global Gabor features can be used to construct a robust invariant recognition system.
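A hedged sketch of the stated idea that the global representation needs only a few summations over conventional Gabor responses; the particular summation scheme below (response energy summed over orientations and over the image plane, one value per scale, then normalised) is an assumption, not the paper's exact construction.

```python
# Sketch: a global Gabor feature obtained by summing filter responses over orientations and space.
import cv2
import numpy as np

def global_gabor_feature(gray, scales=(5, 9, 13, 17), orientations=8):
    """One energy value per scale: summation over orientations gives rotation robustness,
    summation over the image plane gives translation robustness."""
    g = gray.astype(np.float32)
    feat = []
    for ksize in scales:
        energy = 0.0
        for i in range(orientations):
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                      theta=i * np.pi / orientations,
                                      lambd=ksize / 2.0, gamma=0.5)
            energy += np.sum(np.abs(cv2.filter2D(g, -1, kern)))
        feat.append(energy)
    feat = np.asarray(feat)
    return feat / (feat.sum() + 1e-9)   # normalisation adds some robustness to illumination changes
```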

12.
13.
The extraction of stable local features directly affects the performance of infrared face recognition algorithms. Recent studies on the application of the scale-invariant feature transform (SIFT) to infrared face recognition show that a star-styled window filter (SWF) can filter out errors introduced by SIFT. This letter proposes an improved filter pattern called the Y-styled window filter (YWF) to further eliminate wrong matches. Compared with SWF, YWF patterns are sparser and do not maintain rotation invariance; thus, they are more suitable for infrared face recognition. Our experimental results demonstrate that a YWF-based averaging window outperforms an SWF-based one in reducing wrong matches, thereby improving the reliability of infrared face recognition systems.

14.
Affine invariant feature extraction has been one of the key issues for object recognition, especially for images captured under variable environments. Considering that the multiscale autoconvolution (MSA) feature, despite its strong overall performance, is very sensitive to illumination change, a novel algorithm for extracting affine invariant features is proposed based on the MSA transform combined with texture structure analysis. Firstly, a new MSA feature is extracted from the texture structure map of the image, which is computed based on local binary pattern theory. Then the original-image-based MSA and the texture-map-based MSA are combined into a new feature, called TFMSA, using generalized canonical correlation analysis. This new feature represents much more image information than the traditional one and is evaluated on various object recognition tasks. The experimental results indicate that the new TFMSA not only overcomes the shortcoming of the traditional MSA, but also adapts well to a certain range of viewing angles, partial occlusion, and uniform and non-uniform illumination changes. The recognition accuracy of the new feature is superior to MSA and other improved methods.
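As a small illustration of the texture-structure side of TFMSA, a local binary pattern map can be computed as below (using scikit-image); the MSA transform and the generalized canonical correlation fusion are not reproduced here.

```python
# Sketch: LBP texture-structure map on which an MSA-like feature would then be computed.
import numpy as np
from skimage.feature import local_binary_pattern

def texture_structure_map(gray, radius=1, points=8):
    """LBP map of the image; MSA would then be computed on both this map and the original image."""
    return local_binary_pattern(gray, P=points, R=radius, method="uniform")
```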

15.
A speech feature analysis method is proposed that both conforms to the characteristics of human hearing and has good noise robustness. The one-sided autocorrelation sequences are first smoothed along the time direction to improve their noise robustness; the smoothed one-sided autocorrelation sequences then replace the original signal in a frequency-warped LPC analysis, and the feature parameters are finally obtained by a cepstral transform. Digit recognition experiments show that a speech recognition system using this feature outperforms traditional speech features such as Mel cepstral coefficients and LPC cepstral coefficients.
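A hedged sketch of the feature described: per-frame one-sided autocorrelation sequences, smoothed along time, followed by LPC analysis on the smoothed sequences and an LPC-to-cepstrum conversion. The paper's frequency-warping step is omitted, and the frame/order settings are assumptions.

```python
# Sketch: smoothed one-sided autocorrelation -> LPC (Yule-Walker) -> cepstral coefficients.
import numpy as np
from scipy.linalg import solve_toeplitz

def frame_autocorr(frame, max_lag):
    return np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(max_lag + 1)])

def lpc_from_autocorr(r, order):
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])   # Yule-Walker equations
    return np.concatenate([[1.0], -a])                           # A(z) polynomial coefficients

def lpc_to_cepstrum(a, n_ceps):
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n] if n < len(a) else 0.0
        acc += sum((k / n) * c[k - 1] * (a[n - k] if n - k < len(a) else 0.0) for k in range(1, n))
        c[n - 1] = -acc
    return c

def robust_features(frames, order=12, n_ceps=12, smooth=3):
    R = np.array([frame_autocorr(f, order) for f in frames])
    kernel = np.ones(smooth) / smooth
    R = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, R)  # temporal smoothing
    return np.array([lpc_to_cepstrum(lpc_from_autocorr(r, order), n_ceps) for r in R])
```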

16.
A survey of target modeling techniques for imaging target tracking   Cited by: 3 (self-citations: 2, other citations: 1)
From the mathematical model of target tracking, the three main factors affecting tracking performance are the target state-transition model, the filtering algorithm, and the target modeling technique. Target modeling techniques are surveyed and analyzed from three aspects: feature selection, statistical modeling of features, and similarity measurement. Using distortion invariance and target/background discriminability as evaluation criteria, various target representation models proposed in the domestic and international literature are compared qualitatively. It is pointed out that in target tracking the target representation model ...

17.
The joint transform correlator (JTC) is one of the two main optical image processing architectures and provides a highly effective way of comparing images in a wide range of applications. Traditionally, an optical correlator compares an unknown input scene with a pre-captured reference image library to detect whether the reference occurs within the input. The strength of the correlation signal decreases rapidly as the input object rotates or varies in scale relative to the reference object. The aim of this paper is to overcome the intolerance of the JTC to rotation and scale changes in the target image. Many JTC systems are constructed with ferroelectric liquid crystal (FLC) spatial light modulators (SLMs), as they provide fast two-dimensional binary modulation of coherent light. Due to the binary nature of the FLC SLMs used in JTC systems, any image addressed to the device needs some form of thresholding. Carefully thresholding the grey-scale input plane and the joint power spectrum (JPS) has a significant effect on the quality of the correlation peaks and the zero-order (DC) noise. A new thresholding technique to binarize the JPS has been developed and implemented optically. This algorithm selectively enhances the desirable fringes in the JPS, which yield correlation peaks of higher intensity. Zero-order noise is further reduced compared with existing thresholding techniques. Keeping in mind the architecture of the JTC and the limitations of FLC SLMs, a new technique for designing rotation- and scale-invariant binary phase-only filters for the JTC architecture is presented. Filters designed with this technique have a limited dynamic range, higher discriminability between target and non-target objects, and are convenient to implement on FLC SLMs. Simulations and experiments show excellent results for various rotation- and scale-invariant filters designed with this technique. A rotation-invariant filter is needed for various machine vision applications of the JTC. By fixing the distance between the camera and the input object, the scale sensitivity of the correlator can be avoided. In contrast to industrial machine vision applications, the scale factor is very important for applications of JTC systems in defence and security. A security system using a scale-invariant JTC will be able to detect a target object well in advance and will provide more time to take a decision.
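A hedged numerical sketch of the classical JTC pipeline the paper builds on: form the joint input plane, take the joint power spectrum (JPS), binarise it, and inverse-transform to obtain the correlation plane. The simple global quantile threshold below is a placeholder for the paper's fringe-selective thresholding rule.

```python
# Sketch: digital simulation of a binary joint transform correlator.
import numpy as np

def jtc_correlation(reference, scene, threshold_quantile=0.9):
    """reference and scene are equally sized grey-scale arrays."""
    h, w = reference.shape
    joint = np.zeros((h, 2 * w))                    # joint input plane: reference | scene
    joint[:, :w], joint[:, w:] = reference, scene
    jps = np.abs(np.fft.fft2(joint)) ** 2           # joint power spectrum
    binary_jps = (jps > np.quantile(jps, threshold_quantile)).astype(float)  # simple global threshold
    corr = np.abs(np.fft.ifft2(binary_jps))         # correlation plane (DC term plus cross-correlation peaks)
    return np.fft.fftshift(corr)
```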

18.
The canonical quantization of diffeomorphism invariant theories of connections in terms of loop variables is revisited. Such theories include general relativity described in terms of Ashtekar-Barbero variables and extension to Yang-Mills fields (with or without fermions) coupled to gravity. It is argued that the operators induced by classical diffeomorphism invariant or covariant functions are respectively invariant or covariant under a suitable completion of the diffeomorphism group. The canonical quantization in terms of loop variables described here, yields a representation of the algebra of observables in a separable Hilbert space. Furthermore, the resulting quantum theory is equivalent to a model for diffeomorphism invariant gauge theories which replaces space with a manifestly combinatorial object.

19.
Object detection is challenging in large-scale images captured by unmanned aerial vehicles (UAVs), especially when detecting small objects with significant scale variation. Most solutions fuse features of different scales by building multi-scale feature pyramids so that both detail and semantic information are abundant. Although feature fusion benefits object detection, detecting small objects with significant scale variation still requires long-range dependency information. We propose a simple yet effective scale enhancement pyramid network (SEPNet) to address these problems. SEPNet consists of a context enhancement module (CEM) and a feature alignment module (FAM). Technically, the CEM combines multi-scale atrous convolution and multi-branch grouped convolution to model global relationships. Additionally, it enhances object feature representation, preventing features with lost spatial information from flowing into the feature pyramid network (FPN). The FAM adaptively learns pixel offsets to preserve feature consistency; it adjusts the locations of sampling points in the convolutional kernel, effectively alleviating the information conflict caused by fusing adjacent features. Results indicate that SEPNet achieves an AP score of 18.9% on VisDrone, 7.1% higher than that of the state-of-the-art detector RetinaNet, and an AP score of 81.5% on PASCAL VOC.
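A hedged PyTorch sketch of a context-enhancement block of the kind described (parallel multi-scale atrous convolutions plus a grouped-convolution branch, fused residually); the channel counts, dilation rates, and fusion rule are assumptions, not the authors' exact CEM.

```python
# Sketch: a context-enhancement block built from atrous and grouped convolution branches.
import torch
import torch.nn as nn

class ContextEnhancement(nn.Module):
    def __init__(self, channels, dilations=(1, 3, 5), groups=4):
        super().__init__()
        # Parallel atrous (dilated) branches capture context at several receptive-field sizes.
        self.atrous = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations)
        # A grouped-convolution branch models channel groups separately.
        self.grouped = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)

    def forward(self, x):
        branches = [b(x) for b in self.atrous] + [self.grouped(x)]
        return x + self.fuse(torch.cat(branches, dim=1))   # residual fusion of all branches

# Example: feat = ContextEnhancement(256)(torch.randn(1, 256, 64, 64))
```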
