Found 20 similar documents; search took 15 ms
1.
The iris biometric recognizes a person from his or her iris texture, a stable feature unique to every individual. A typical iris biometric system performs well on ideal data acquired under controlled conditions, but its performance degrades when localizing the iris in non-ideal data affected by noise such as non-uniform illumination, defocus, and non-circular iris boundaries. This study proposes a reliable algorithm to localize the iris robustly in such images. First, a small region containing the coarse location of the iris is localized. Next, the pupillary boundary is extracted within this small region using an iterative scheme comprising adaptive binarization and a pupil-location verification test. The limbic boundary is then localized by reusing the Hough accumulator, and the iris location is verified through a gray-level test. After that, the pupillary and limbic boundaries are regularized by an enhanced method comprising a radial-gradient operator (RGO), an error transform (ET), and the Fourier series. Experimental results on the CASIA-IrisV3, CASIA-IrisV4, MMU V1.0, and MMU(new) V2.0 iris databases show the superiority of the proposed technique over several contemporary techniques.
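The iterative adaptive-binarization-plus-verification idea for the pupillary boundary can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold sweep, area bounds, and circularity test are assumed stand-ins for the paper's verification test.

```python
import numpy as np

def localize_pupil(gray, thresholds=range(20, 121, 10),
                   min_area=200, max_area=20000, min_circularity=0.6):
    """Iteratively binarize the image at increasing thresholds and accept
    the first dark blob that passes a simple pupil verification test
    (area range and rough circularity); return (cy, cx, radius)."""
    for t in thresholds:
        mask = gray < t                    # the pupil is the darkest region
        area = mask.sum()
        if not (min_area <= area <= max_area):
            continue
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        r_est = np.sqrt(area / np.pi)      # radius if the blob were a disk
        # verification: fraction of mask pixels inside the fitted circle
        inside = ((ys - cy) ** 2 + (xs - cx) ** 2) <= r_est ** 2
        if inside.mean() >= min_circularity:
            return cy, cx, r_est
    return None
```

On a synthetic eye image with a dark disk, the returned centre and radius match the disk parameters to within a pixel or two.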
2.
A non-circular iris localization algorithm using image projection function and gray level statistics
Iris recognition technology identifies an individual from his or her iris texture with great precision. A typical iris recognition system comprises eye-image acquisition, iris segmentation, feature extraction, and matching, and its precision depends heavily on accurate iris localization in the segmentation module. In this paper, we propose a reliable iris localization algorithm. First, we find a coarse eye location in an eye image using the integral projection function (IPF). Next, we localize the pupillary boundary in a sub-image using a reliable technique based on histogram bisection, image statistics, eccentricity, and object geometry. After that, we localize the limbic boundary using a robust scheme based on radial gradients and an error distance transform. Finally, we regularize the actual iris boundaries using active contours. The proposed algorithm is tested on the public iris databases MMU V1.0, CASIA-IrisV1, and CASIA-IrisV3-Lamp. Experimental results demonstrate the superiority of the proposed algorithm over several contemporary techniques.
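Coarse localization via integral projection functions takes only a few lines. A simplified sketch, under the assumption that the pupil/iris region is the darkest area of the image, so the minima of the row-sum and column-sum projections mark it:

```python
import numpy as np

def coarse_eye_center(gray):
    """Return (row, col) of the coarse eye location as the minima of the
    vertical and horizontal integral projection functions: the dark
    pupil/iris region drags down its row and column sums."""
    ipf_rows = gray.sum(axis=1)   # vertical IPF: one value per row
    ipf_cols = gray.sum(axis=0)   # horizontal IPF: one value per column
    return int(np.argmin(ipf_rows)), int(np.argmin(ipf_cols))
```

The returned coordinates are only a coarse seed; the abstract's later stages refine the pupillary and limbic boundaries from there.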
3.
A new method of iris localization based on intensity-value analysis is proposed in this paper. Iris recognition systems depend heavily on the performance of iris localization; the subsequent steps of normalization, feature extraction, and matching all rely on the accuracy and efficiency of localizing the iris in human eye images. In the proposed scheme, the inner boundary of the iris is obtained by finding the pupil center and radius with two methods. In the first, a selected region is adaptively binarized and the centroid of the region is used to obtain the pupil parameters; in the second, edges are processed to detect the radius and center of the pupil. For the outer iris boundary, a band is computed within which the boundary must lie. One-dimensional signals are sampled along the radial direction within this band at different angles, and the three points with maximum gradient are selected from each signal. Redundant points are removed using the Mahalanobis distance, and the remaining points are used to fit the outer circle of the iris. Points for the upper and lower eyelids are found in the same way as for the outer boundary and are statistically fitted with parabolas; finally, eyelashes are removed from the image to completely localize the iris. Experimental results show that the proposed method is efficient and accurate.
4.
This paper presents a novel approach for the automatic localization of the pupil and iris, the nearly circular regions surrounded by the sclera, eyelids, and eyelashes. Localizing both is extremely important in any iris recognition system. In the proposed algorithm, the pupil is localized using an eccentricity-based bisection method that searches for the region with the highest probability of containing the pupil. Iris localization is then carried out in two steps. First, the iris image is directionally segmented and a noise-free region of interest is extracted. Second, angular lines in the region of interest are extracted, and the edge points of the iris outer boundary are found from the gradients of these lines. The proposed method is tested on the CASIA ver 1.0 and MMU iris databases. Experimental results show that the method is comparatively accurate.
5.
Continuous efforts have been made to process degraded iris images to improve iris recognition performance in unconstrained situations. Recently, many researchers have focused on iris segmentation techniques that can handle images acquired in a non-cooperative environment, where the probability of obtaining unideal iris images is very high due to gaze deviation, noise, blurring, and occlusion by eyelashes, eyelids, glasses, and hair. Although many iris segmentation methods exist, most focus primarily on accurately detecting iris images captured in a closely controlled environment. The novelty of this research effort is a variational level-set-based curve evolution scheme that uses a significantly larger time step to numerically solve the evolution partial differential equation (PDE), segmenting an unideal iris image accurately while drastically speeding up the curve evolution process. The iris boundary represented by the variational level set may break and merge naturally during evolution, so topological changes are handled automatically. The proposed variational model is also robust against poor localization and weak iris/sclera boundaries. To resolve the size irregularities arising from the arbitrary shapes of the extracted iris/pupil regions, a simple method based on connecting adjacent contour points is applied. Furthermore, to reduce the effect of noise, a pixel-wise adaptive 2-D Wiener filter is applied. The verification and identification performance of the proposed scheme is validated on three challenging iris image datasets: the ICE 2005, the WVU Unideal, and the UBIRIS Version 1.
6.
7.
To address the difficulty of accurately segmenting steel-cable images against complex backgrounds, a texture-analysis-based method for steel-cable image segmentation and boundary recognition is proposed. A texture-direction detection method based on the fuzzy Hough transform determines the cable orientation, and the edge-direction density histogram is used as the texture feature. Edge directions corresponding to the cable's texture direction are assigned different weights to suppress background interference during texture segmentation, the wire-rope image is segmented by clustering, its boundary is determined by detecting parallel straight lines, and the boundary is then corrected according to the algorithm's parameters. In experiments, the edge-direction density histogram feature was compared with the gray-level co-occurrence matrix and local binary patterns in terms of segmentation results and computation time on steel-cable images; the results show that the edge-direction density histogram is fast to compute, little affected by background interference, and highly accurate. The proposed method requires no prior training, is robust to background clutter, and can accurately identify the steel cable and determine its boundary, meeting the requirements of visual wire-rope inspection.
8.
9.
Priyanka Singh, Kilari Jyothsna Devi, Hiren Kumar Thakkar, José Santamaría. Entropy (Basel, Switzerland), 2021, 23(12)
In the past decade, rapid development in digital communication has led to the prevalent use of digital images, and confidentiality concerns have grown with the increase in digital image transmission across the Internet. It is therefore necessary to provide digitally transmitted images with high imperceptibility and security. In this paper, a novel blind digital image watermarking scheme is introduced for the secure transmission of digital images, providing higher quality in terms of both imperceptibility and robustness. A block-based hybrid IWT-SVD transform is implemented for robust transmission. To ensure high watermark security, the watermark is encrypted with a pseudo-random key generated adaptively from the cover and watermark images. The encrypted watermark is embedded in randomly selected low-entropy blocks to increase both security and imperceptibility, and the embedding positions within each block are identified adaptively using a Blum–Blum–Shub pseudo-random generator. To ensure higher visual quality, the Initial Scaling Factor (ISF) is chosen adaptively from the cover image using its range characteristics, and it can be optimized with Nature-Inspired Optimization (NIO) techniques for higher imperceptibility and robustness. Specifically, the ISF parameter is optimized with three well-known NIO algorithms: Genetic Algorithms (GA), Artificial Bee Colony (ABC), and the Firefly Optimization algorithm. Experiments were conducted on the proposed scheme in terms of imperceptibility, robustness, security, embedding rate, and computational time; the results support its effectiveness, and a performance comparison with existing state-of-the-art schemes substantiates its improved performance.
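The Blum–Blum–Shub generator used to pick embedding positions can be sketched in a few lines. The tiny primes in the example are for illustration only; a real deployment would use large primes p ≡ q ≡ 3 (mod 4) and a secret seed.

```python
def bbs_bits(p, q, seed, n):
    """Blum-Blum-Shub PRNG: iterate x -> x^2 mod M with M = p*q, where
    p and q are primes congruent to 3 mod 4, and output the
    least-significant bit of each successive state."""
    M = p * q
    x = seed % M
    bits = []
    for _ in range(n):
        x = (x * x) % M
        bits.append(x & 1)
    return bits
```

In a scheme like the one described, the bit stream could be mapped to (row, column) offsets inside each selected block, so embedder and extractor derive the same positions from the shared seed.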
10.
Traditional iris recognition systems transform iris images to polar (or log-polar) coordinates and have performed very well on data with a centered gaze. The patterns of an iris are part of a 3-D structure captured as a two-dimensional (2-D) image, and cooperative iris recognition systems can correctly match these 2-D representations of iris features. However, when the gaze of an eye changes with respect to the camera lens, the size, shape, and detail of the iris patterns often change as well and cannot be matched to enrolled images using traditional methods. Additionally, the transformation of off-angle eyes to polar coordinates becomes much more challenging, so noncooperative iris algorithms require a different approach. Directly applying the scale-invariant feature transform (SIFT) would not work well for iris recognition because it does not exploit the characteristics of iris patterns. We propose a region-based SIFT approach to iris recognition. This method requires no polar transformation, affine transformation, or highly accurate segmentation, and it is scale invariant. It was tested on the iris challenge evaluation (ICE), WVU, and IUPUI noncooperative databases, and the results show that it is capable of both cooperative and noncooperative iris recognition.
11.
Automatic white blood cell segmentation using stepwise merging rules and gradient vector flow snake
This study proposes a new stained WBC (white blood cell) image segmentation method using stepwise merging rules based on mean-shift clustering and boundary removal rules with a GVF (gradient vector flow) snake. Two different schemes are proposed for segmenting the nuclei and the cytoplasm of WBCs, respectively. For nucleus segmentation, a probability map is created using a probability density function estimated from samples of WBC nuclei, and sub-images are cropped to include a nucleus, exploiting the fact that nuclei have a salient color against the background and red blood cells. Mean-shift clustering is then performed for region segmentation, and a stepwise merging scheme is applied to merge particle clusters with the nucleus. For cytoplasm segmentation, morphological opening is applied to the green-channel image to boost the intensity of the granules, and Canny edges are detected within the sub-image. Boundary edges and noise edges are then removed using removal rules, while a GVF snake is deformed to the cytoplasm boundary edges. When evaluated on five different types of stained WBC, the proposed algorithm produced accurate segmentation results for most WBC types.
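The mean-shift clustering at the heart of the nucleus segmentation can be illustrated in one dimension (grey levels). A toy sketch with a flat kernel, not the paper's full colour-space implementation:

```python
import numpy as np

def mean_shift_modes(samples, bandwidth=5.0, iters=30):
    """Minimal (non-blurring) mean shift: repeatedly move each point to
    the mean of the original samples within `bandwidth` of it, so points
    settle on density modes; points sharing a mode form one cluster."""
    data = np.asarray(samples, float)
    pts = data.copy()
    for _ in range(iters):
        for i in range(len(pts)):
            near = np.abs(data - pts[i]) < bandwidth   # flat kernel window
            pts[i] = data[near].mean()
    return pts, np.unique(np.round(pts, 3))
```

Two well-separated grey-level groups collapse onto two modes, which is the behaviour the stepwise merging rules then refine into nucleus regions.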
12.
虹膜定位是虹膜识别过程中的重要环节,定位速度和精度决定了整个虹膜识别系统的性能。提出了一种基于人眼灰度分布特征的虹膜定位算法,该算法利用形态学运算实现瞳孔圆心粗定位,采用划分区域求灰度均值隔项差值最大值的方法实现外圆半径的粗定位,并通过分层聚类的方法实现内外边界的精确定位。实验结果表明.与经典的虹膜定位算法如Wildes算法、Daugman算法相比,该算法定位结果更加准确,定位速度大幅度提高。 相似文献
13.
14.
A novel reduced-order modeling method for vibration problems of elastic structures with localized piecewise-linearity is proposed. The focus is placed upon solving nonlinear forced response problems of elastic media with contact nonlinearity, such as cracked structures and delaminated plates. The modeling framework is based on observations of the proper orthogonal modes computed from nonlinear forced responses and their approximation by a truncated set of linear normal modes with special boundary conditions. First, it is shown that a set of proper orthogonal modes can form a good basis for constructing a reduced-order model that can well capture the nonlinear normal modes. Next, it is shown that the subspace spanned by the set of dominant proper orthogonal modes can be well approximated by a slightly larger set of linear normal modes with special boundary conditions. These linear modes are referred to as bi-linear modes, and are selected by an elaborate methodology which utilizes certain similarities between the bi-linear modes and approximations for the dominant proper orthogonal modes. These approximations are obtained using interpolated proper orthogonal modes of smaller dimensional models. The proposed method is compared with traditional reduced-order modeling methods such as component mode synthesis, and its advantages are discussed. Forced response analyses of cracked structures and delaminated plates are provided for demonstrating the accuracy and efficiency of the proposed methodology.
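The proper orthogonal modes that seed such a reduced-order basis are conventionally computed via an SVD of a snapshot matrix. A minimal sketch (the energy threshold is an assumed parameter, and this is the generic snapshot-POD procedure rather than the paper's bi-linear mode selection):

```python
import numpy as np

def proper_orthogonal_modes(snapshots, energy=0.99):
    """POD via SVD of the snapshot matrix (columns = response snapshots):
    return the left singular vectors that capture `energy` of the total
    variance, the usual starting point for a reduced-order basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative energy fraction
    k = int(np.searchsorted(cum, energy)) + 1  # smallest k reaching `energy`
    return U[:, :k]
```

A rank-2 snapshot matrix, for instance, yields exactly two orthonormal modes at a 99% energy threshold.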
15.
16.
17.
A high-order accurate hybrid scheme using a central flux scheme and a WENO scheme for compressible flowfield analysis
A high-order accurate hybrid central-WENO scheme is proposed. The fifth-order WENO scheme [G.S. Jiang, C.W. Shu, Efficient implementation of weighted ENO schemes, J. Comput. Phys. 126 (1996) 202–228] is divided into two parts, a central flux part and a numerical dissipation part, and is coupled with a central flux scheme. The two sub-schemes, the WENO scheme and the central flux scheme, are hybridized by means of a weighting function that indicates the local smoothness of the flowfield. The derived hybrid central-WENO scheme is written as a combination of the central flux scheme and the numerical dissipation of the fifth-order WENO scheme, controlled adaptively by the weighting function. The structure of the proposed scheme is similar to that of the YSD-type filter scheme [H.C. Yee, N.D. Sandham, M.J. Djomehri, Low-dissipative high-order shock-capturing methods using characteristic-based filters, J. Comput. Phys. 150 (1999) 199–238], and it therefore shares certain merits of that scheme. The accuracy and efficiency of the developed hybrid central-WENO scheme are investigated through numerical experiments on inviscid and viscous problems. Numerical results show that the proposed scheme resolves flow features extremely well.
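The hybridization idea, a smoothness sensor weighting a non-dissipative central flux against a dissipative sub-scheme, can be illustrated with a deliberately simplified 1-D toy. Here first-order upwind stands in for the fifth-order WENO dissipation part, and the sensor is an assumed second-difference ratio, not the paper's weighting function.

```python
import numpy as np

def hybrid_flux(u, eps=1e-12):
    """Toy sketch for linear advection f(u) = u with unit speed: a
    per-interface smoothness sensor sigma (~0 in smooth regions, ~1 at
    discontinuities) blends a non-dissipative central flux with a
    dissipative upwind flux. Returns fluxes at interfaces i+1/2
    for i = 1 .. n-2."""
    n = len(u)
    f = np.empty(n - 2)
    for i in range(1, n - 1):
        sigma = abs(u[i + 1] - 2 * u[i] + u[i - 1]) / (
            abs(u[i + 1] - u[i]) + abs(u[i] - u[i - 1]) + eps)
        central = 0.5 * (u[i] + u[i + 1])   # no numerical dissipation
        upwind = u[i]                       # dissipative sub-scheme
        f[i - 1] = (1 - sigma) * central + sigma * upwind
    return f
```

On smooth data the sensor vanishes and the flux is purely central; at a step the sensor saturates and the dissipative flux takes over, which is the adaptive switching the abstract describes.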
18.
For a segmentation method to be useful it must be fast, easy to use, and produce high-quality segmentations, yet few algorithms offer all of this across varied conditions and applications. In this paper, we propose a context-dependent graph-based method for transition-region extraction and thresholding. The graph-based approach is introduced into image thresholding, and a context-dependent graph is constructed from the given image; thanks to its scalable neighborhood, it adaptively captures pixel context and shape information. An edge-weight function is then defined as the measure of possible transition pixels, and a robust, fully automatic scheme for finding the optimal threshold is also presented. The proposed approach is validated both quantitatively and qualitatively. Compared with traditional state-of-the-art algorithms on synthetic and real images, as well as laser-cladding images, the experimental results suggest that the new proposal is efficient and effective.
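The underlying transition-region idea can be shown with a deliberately simple gradient-based stand-in (not the paper's context-dependent graph): take the highest-gradient pixels as the transition region and threshold at their mean grey level.

```python
import numpy as np

def transition_region_threshold(gray, top_fraction=0.1):
    """Simplified transition-region thresholding: pixels with the largest
    local gradient magnitude form the transition region between object
    and background, and the threshold is their mean grey level."""
    g = gray.astype(float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    cut = np.quantile(mag, 1 - top_fraction)   # keep the top gradients
    transition = mag >= cut
    return g[transition].mean()
```

For a two-region image the transition pixels straddle the boundary, so the returned threshold lands midway between the two grey levels.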
19.
20.
This paper describes a new pendulum-lever-type variable aperture diaphragm. Based on the derived relation between the rotation angle of the moving ring and that of the diaphragm blades, computer-aided design is used to select an optimal set of parameters for the diaphragm mechanism, so that when the moving ring rotates through equal angles to switch between aperture stops, the error between the actual and theoretical light flux is ≤ 8.4%.