Similar documents
 A total of 20 similar documents were found (search time: 484 ms)
1.
Targeting the characteristics of the JPEG2000 image compression standard, such as progressive transmission and encode-once/decode-many, a robust image authentication algorithm based on image features is proposed. During JPEG2000 encoding, the algorithm first generates an authentication watermark from invariant image features; it then determines, according to the actual robustness requirements, the bit plane of the quantized wavelet coefficients in each subband into which the watermark is embedded; finally, it embeds the authentication watermark based on the features of the wavelet-coefficient bit planes. The algorithm not only accommodates the various flexible coding modes of JPEG2000 but can also locate tampered regions of the image. Experimental results verify the robustness of the authentication algorithm to permissible image operations and its sensitivity to image tampering.
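As a rough illustration of the bit-plane embedding idea in this abstract, the following is a minimal sketch assuming PyWavelets for the wavelet decomposition; the quantization step, the chosen subband, the bit plane, and the watermark bits are illustrative placeholders, not the paper's actual JPEG2000 pipeline.

```python
# Hedged sketch: embed watermark bits into a chosen bit plane of quantized
# wavelet coefficients (illustrative stand-in for the JPEG2000-based scheme).
import numpy as np
import pywt

def embed_bitplane_watermark(image, watermark_bits, bitplane=2, q_step=8.0):
    """Embed bits into bit plane `bitplane` of the quantized HL subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    q = np.round(cH / q_step).astype(np.int32)        # quantized coefficients
    signs, mag = np.sign(q), np.abs(q)
    flat = mag.ravel()
    for i, bit in enumerate(watermark_bits[:flat.size]):
        if bit:
            flat[i] |= (1 << bitplane)                 # set the chosen bit
        else:
            flat[i] &= ~(1 << bitplane)                # clear the chosen bit
    cH_marked = signs * flat.reshape(mag.shape) * q_step
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
```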

2.
余瑞艳 《数学杂志》2014,34(3):502-508
This paper studies the staircase effect that the total variation regularization model tends to produce during image denoising. Based on the local structural features of the image, and by jointly using a Gaussian filter and an edge-detection operator, a generalized total variation regularized image denoising model is constructed that removes noise while preserving image edge details and texture information. Experimental results show that the generalized total variation model preserves edge contours and other details while smoothing noise, and that the restored images are improved in terms of peak signal-to-noise ratio, mean structural similarity, and subjective visual quality.
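For reference, a minimal baseline showing plain total variation denoising with scikit-image; the paper's generalized model (Gaussian filtering plus an edge detector to adapt the regularization) is not reproduced here, and the noise level and weight are illustrative assumptions.

```python
# Hedged baseline: classical TV (ROF-type) denoising, which exhibits the
# staircase effect the generalized model above is designed to reduce.
from skimage import data, util
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import peak_signal_noise_ratio

clean = util.img_as_float(data.camera())
noisy = util.random_noise(clean, var=0.01)            # additive Gaussian noise
denoised = denoise_tv_chambolle(noisy, weight=0.1)    # TV regularization
print("PSNR noisy   :", peak_signal_noise_ratio(clean, noisy))
print("PSNR denoised:", peak_signal_noise_ratio(clean, denoised))
```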

3.
In this paper, an efficient self-adaptive model for a chaotic image encryption algorithm is proposed. Using the classical permutation-diffusion structure and two simple two-dimensional chaotic systems, an efficient and fast encryption algorithm is designed. Unlike most existing methods, which are found to be insecure against chosen-plaintext or known-plaintext attacks in the permutation or diffusion stage, the keystream generated in both operations of our method depends on the plain-image. Therefore, different plain-images produce different keystreams in both stages, even if only a single bit of the plain-image is changed. This design solves the problem of a fixed chaotic sequence being produced by the same initial conditions for different images. Moreover, the operation speed is high because complex numerical procedures, such as the Runge–Kutta method for solving high-dimensional partial differential equations, are avoided. Numerical experiments show that the proposed self-adaptive method resists chosen-plaintext and known-plaintext attacks well, and has high security and efficiency.
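A minimal sketch, not the authors' exact system, of the plaintext-dependent keystream idea: the chaotic seed is derived from a hash of the plain-image, so changing a single bit changes both the permutation and the diffusion keystream. The logistic map, the hash, and all parameters are illustrative assumptions.

```python
# Hedged sketch of plaintext-dependent permutation-diffusion encryption.
import hashlib
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Generate n values of the logistic map x <- r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(plain, key=b"secret-key"):
    flat = plain.astype(np.uint8).ravel()
    n = flat.size
    # Seed depends on both the secret key and the plain-image content.
    digest = hashlib.sha256(key + flat.tobytes()).digest()
    x0 = (int.from_bytes(digest[:8], "big") % 2**52) / 2**52 * 0.998 + 0.001
    # Permutation stage: a chaotic sequence defines a pixel shuffle.
    perm = np.argsort(logistic_sequence(x0, n))
    shuffled = flat[perm]
    # Diffusion stage: XOR with a keystream from a second chaotic run.
    keystream = (logistic_sequence(1.0 - x0, n) * 256).astype(np.uint8)
    cipher = np.bitwise_xor(shuffled, keystream)
    return cipher.reshape(plain.shape), digest   # digest is needed for decryption
```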

4.
The security of digital images has attracted much attention recently. A hash-based digital image encryption algorithm was proposed in Ref. [1], but both theoretical analysis and computer simulation show that its diffusion is too weak to resist chosen-plaintext and known-plaintext attacks. Moreover, a one-bit difference in a plain pixel leads to a change in only the corresponding bit of the cipher pixel. In our improved algorithm, coupled with a self-adaptive mechanism, a difference of only one pixel in the plain-image causes changes in almost all pixels of the cipher-image (NPCR > 98.77%), and the unified average changing intensity is high (UACI > 30.96%). Both theoretical analysis and computer simulation indicate that the improved algorithm overcomes these flaws while retaining all the merits of the original one.
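The NPCR and UACI figures quoted above have standard definitions; below is a small helper, with hypothetical array names, that computes both for two cipher-images of the same size.

```python
# NPCR: percentage of pixel positions that differ between two cipher-images.
# UACI: mean absolute intensity difference, normalized by 255.
import numpy as np

def npcr_uaci(cipher1, cipher2):
    c1 = cipher1.astype(np.int32)
    c2 = cipher2.astype(np.int32)
    npcr = np.mean(c1 != c2) * 100.0
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0
    return npcr, uaci

# Usage: encrypt an image and a one-pixel-modified copy, then compare.
# npcr, uaci = npcr_uaci(cipher_a, cipher_b)
# For a strong 8-bit cipher one expects NPCR close to 99.6% and UACI near 33.5%.
```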

5.
In this paper, we suggest a new steganographic spatial-domain algorithm based on a single chaotic map. Unlike most existing steganographic algorithms, the proposed algorithm uses one chaotic map to determine the pixel position in the host color image, the channel (red, green or blue), and the bit position of the targeted value in which a sensitive information bit can be hidden. Furthermore, the algorithm can be regarded as a variable-sized embedding algorithm. Experimental results demonstrate that it can defeat many existing steganalytic attacks. In comparison with existing spatial-domain steganographic algorithms, the suggested algorithm is shown to have some advantages, namely a larger key space and a higher level of security against some existing attacks.
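A hedged sketch of the general idea of using one chaotic map to select pixel position, color channel, and bit position for each hidden bit; the map, its parameters, and the fixed one-bit depth are illustrative, not the variable-sized scheme of the paper.

```python
# Hedged sketch: one logistic-map orbit drives pixel, channel, and bit selection.
import numpy as np

def hide_bits(cover_rgb, message_bits, x0=0.37, r=3.99):
    stego = cover_rgb.copy()
    h, w, _ = stego.shape
    x = x0
    used = set()
    for bit in message_bits:
        # Advance the map until an unused pixel position is found.
        while True:
            x = r * x * (1.0 - x)
            pos = int(x * h * w) % (h * w)
            if pos not in used:
                used.add(pos)
                break
        row, col = divmod(pos, w)
        x = r * x * (1.0 - x)
        channel = int(x * 3) % 3                  # red, green or blue
        x = r * x * (1.0 - x)
        bitpos = int(x * 4) % 4                   # one of the 4 lowest bit planes
        value = int(stego[row, col, channel])
        value = (value & ~(1 << bitpos)) | (int(bit) << bitpos)
        stego[row, col, channel] = value
    return stego
```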

6.
Sparsity-driven image recovery methods assume that images of interest can be sparsely approximated under some suitable system. As discontinuities of 2D images often show geometric regularities along image edges with different orientations, an effective sparsifying system should have high orientation selectivity. There have been sustained efforts to construct discrete frames and tight frames that improve the orientation selectivity of tensor-product real-valued wavelet bases/frames. In this paper, we study the general theory of discrete Gabor frames for finite signals and construct a class of discrete 2D Gabor frames with optimal orientation selectivity for sparse image approximation. Besides high orientation selectivity, the proposed multi-scale discrete 2D Gabor frames also allow us to simultaneously exploit the sparsity prior of cartoon image regions in the spatial domain and the sparsity prior of textural image regions in the local frequency domain. Using a composite sparse image model, we show the advantages of the proposed discrete Gabor frames over existing wavelet frames in several image recovery experiments.
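To make the orientation-selectivity point concrete, here is a small NumPy sketch that builds 2D Gabor kernels at several orientations; it illustrates the general idea only and is not the discrete Gabor frame construction of the paper, and the kernel parameters are assumptions.

```python
# Hedged illustration: oriented 2D Gabor kernels respond selectively to edges
# aligned with their orientation theta.
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A small multi-orientation bank (8 orientations over [0, pi)).
bank = [gabor_kernel(theta=k * np.pi / 8) for k in range(8)]
```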

7.
This paper presents panoramic unmanned aerial vehicle (UAV) image stitching techniques based on an optimized Scale Invariant Feature Transform (SIFT) method. The image stitching representation associates a transformation matrix with each input image. In this study, we formulate stitching as a multi-image matching problem and use invariant local features to find matches between the images. An improved Geometric Algebra SIFT (GA-SIFT) algorithm is proposed to realize fast feature extraction and matching for the scanned images. The proposed GA-SIFT method can locate more feature points, with greater accuracy, than the traditional SIFT method. The proposed adaptive threshold method addresses the high computational load and long stitching time caused by extracting and stitching a larger number of feature points. A modified random sample consensus method is proposed to estimate the image transformation parameters and to determine the solution with the best consensus for the data. The experimental results demonstrate that the proposed image stitching method greatly increases the speed of the image alignment process and produces satisfactory stitching results. The proposed stitching model for aerial images has good distinctiveness and robustness, and can save considerable time when stitching large sets of UAV images.
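For orientation, a minimal OpenCV sketch of the standard SIFT-plus-RANSAC pipeline that GA-SIFT builds on; it uses plain SIFT, Lowe's ratio test, and cv2.findHomography, not the GA-SIFT, adaptive threshold, or modified RANSAC variants proposed in the paper.

```python
# Hedged sketch: match two overlapping grayscale images with SIFT, filter
# matches with the ratio test, and estimate a homography with RANSAC.
import cv2
import numpy as np

def estimate_homography(img1_gray, img2_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]        # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask

# The returned H can then be passed to cv2.warpPerspective to align the images.
```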

8.
In this paper, we propose a new fast level set model of multi-atlas label fusion for 3D magnetic resonance imaging (MRI) tissue segmentation. The proposed model is aimed at segmenting regions of interest in MR images, especially tissues such as the amygdala, the caudate, the hippocampus, the pallidum, the putamen, and the thalamus. We first define a new energy functional by taking full advantage of an image data term, a length term, and a label fusion term. Rather than using the region-scalable fitting image data term and length term directly, we define a new image data term and a new length term, the latter incorporating an edge detection function. By introducing a spatial weight function into the label fusion term, segmentation sensitivity to warped images can be largely improved. Furthermore, the special structure of the new energy functional permits the use of the split Bregman method, which is a significant highlight and improves the segmentation efficiency of the proposed model. Owing to these improvements, several desirable characteristics, such as accuracy, efficiency, and robustness, are exhibited in the experimental results. Quantitative and qualitative comparisons with other methods demonstrate the superior advantages of the proposed model.

9.
In this letter, a new watermarking scheme for color images is proposed based on a family of pair-coupled maps. Pair-coupled maps are employed to improve the security of the watermarked image and to encrypt the embedding positions in the host image. Another map is used to determine the pixel bit of the host image for watermark embedding. The purpose of this algorithm is to remedy shortcomings of existing watermarking schemes such as a small key space and low security. Owing to the sensitivity of the introduced pair-coupled maps to their initial conditions, the security of the scheme is greatly improved.

10.
Image segmentation is required in order to study in detail particular features (areas of interest) of a digital image. It forms an important and demanding part of image processing and requires an exhaustive and robust search technique for its implementation. In the present work we study the performance of MRLDE, a newly proposed variant of differential evolution, combined with the Otsu method, a well-known image segmentation criterion for bi-level thresholding. The proposed variant, termed Otsu+MRLDE, is tested on a set of 10 images, and the results are compared with the Otsu method and several other well-known metaheuristics.
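For concreteness, here is a plain NumPy implementation of the Otsu criterion (maximizing between-class variance over all candidate thresholds); in Otsu+MRLDE this exhaustive search would be driven by the MRLDE metaheuristic instead, which is not reproduced here.

```python
# Hedged sketch: bi-level Otsu thresholding by exhaustive search over 256 levels.
import numpy as np

def otsu_threshold(gray_uint8):
    hist = np.bincount(gray_uint8.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# binary = (image >= otsu_threshold(image)).astype(np.uint8) * 255
```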

11.
In this paper we present a fuzzy-similarity-based strategy for retrieving an image from a library of color images. Color features are among the most important features used in image database retrieval. Due to its compact representation and low complexity, direct histogram comparison is the most commonly used technique for measuring the color similarity of images. A gamma membership function, derived from the Gamma distribution, is proposed to find the membership values of the gray levels of the histogram. We present an image retrieval scheme that uses several popular vector fuzzy distance measures with the gamma membership function to compute gray-level memberships, and evaluate the matching function to select the appropriate retrieval mechanism.
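A hedged sketch of the retrieval idea: gray-level histograms are mapped to fuzzy membership values through a gamma-shaped function and compared with a simple fuzzy distance. The exact membership function and distance measure used in the paper may differ; the functional form and parameters below are assumptions.

```python
# Hedged sketch: gamma-shaped membership over gray levels + a fuzzy histogram distance.
import numpy as np
from scipy.stats import gamma as gamma_dist

def gamma_membership(levels, shape=2.0, scale=40.0):
    """Map gray levels (0..255) to [0, 1] with a Gamma-pdf-shaped curve."""
    pdf = gamma_dist.pdf(levels, a=shape, scale=scale)
    return pdf / pdf.max()                            # normalize peak to 1

def fuzzy_histogram(gray_uint8):
    hist = np.bincount(gray_uint8.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    return hist * gamma_membership(np.arange(256))    # membership-weighted histogram

def fuzzy_distance(h1, h2):
    """A simple fuzzy dissimilarity: 1 - (sum of minima) / (sum of maxima)."""
    return 1.0 - np.minimum(h1, h2).sum() / np.maximum(h1, h2).sum()

# Retrieval: rank library images by fuzzy_distance to the query's fuzzy histogram.
```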

12.
13.
In this paper, we propose a new random forest (RF) algorithm to deal with high-dimensional data for classification, using a subspace feature sampling method and feature value searching. The new subspace sampling method maintains the diversity and randomness of the forest and enables one to generate trees with a lower prediction error. A greedy technique is used to handle high-cardinality categorical features for efficient node splitting when building the decision trees in the forest. This allows trees to handle very high cardinality while reducing the computational time of building the RF model. Extensive experiments on high-dimensional real data sets, including standard machine learning data sets and image data sets, have been conducted. The results demonstrate that the proposed approach for learning RFs significantly reduces prediction errors and outperforms most existing RFs when dealing with high-dimensional data.
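The paper's RF variant is not reproduced here; the following sketch only illustrates the baseline idea of random subspace feature sampling with scikit-learn, where max_features controls the size of the feature subspace considered at each split. The dataset and hyperparameters are illustrative.

```python
# Hedged baseline: standard random forest with per-split feature subspace sampling.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# max_features="sqrt" draws a random subspace of ~sqrt(d) features at every split,
# which keeps the trees diverse on high-dimensional data.
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))
```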

14.
To address the risk that policy may pose to financial returns, a policy risk factor identification and detection method based on the Hilbert-Huang transform is proposed. Through empirical mode decomposition and Hilbert-Huang spectral analysis, the time-domain and frequency-domain characteristics of financial time series are obtained; by matching these against quantified policy events, the abnormal fluctuations caused by policy are identified, thereby realizing the identification and treatment of policy factor risk. The results provide an important reference for exploring the impact of macro policy on financial returns. Finally, taking national real-estate regulation policy and a real-estate index as a case study, the proposed method is found to have high identification accuracy and promising application prospects.
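A hedged sketch of the signal-analysis step only: empirical mode decomposition followed by Hilbert instantaneous frequencies. It assumes the third-party EMD-signal (PyEMD) package and SciPy; the policy-quantification and matching step of the paper is not reproduced.

```python
# Hedged sketch: EMD of a return series + instantaneous frequency of each IMF.
import numpy as np
from PyEMD import EMD                 # assumes the EMD-signal (PyEMD) package
from scipy.signal import hilbert

def hilbert_huang(series, dt=1.0):
    imfs = EMD().emd(np.asarray(series, dtype=float))
    spectra = []
    for imf in imfs:
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) / (2.0 * np.pi * dt)   # instantaneous frequency
        spectra.append((amplitude, inst_freq))
    return imfs, spectra

# Policy-driven abnormal fluctuations would then be looked for in the
# time-frequency content of the IMFs around quantified policy dates.
```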

15.
A new image coding method based on the discrete directional wavelet transform (S-WT) and quad-tree decomposition is proposed. The S-WT is a transform proposed in [V. Velisavljevic, B. Beferull-Lozano, M. Vetterli, P.L. Dragotti, Directionlets: anisotropic multidirectional representation with separable filtering, IEEE Trans. Image Process. 15(7) (2006)]; it is based on lattice theory and differs from the standard wavelet transform in that it allows more transform directions. Because directional behavior is generally more regular in a small region than in a large block, and in order to make full use of the multidirectionality and directional vanishing moments (DVM) of the S-WT, the input image is divided into many small regions by means of the popular quad-tree segmentation, with the splitting criterion defined in the rate-distortion sense. After the optimal quad-tree is obtained, a bit allocation algorithm is implemented quickly via the embedded property of SPECK, using the model proposed in [M. Rajpoot, Model-based optimal bit allocation, in: IEEE Data Compression Conference, Proceedings, DCC 2004]. Experimental results indicate that our algorithm performs better than some state-of-the-art image coders.

16.
Existing algorithms that fuse level-2 and level-3 fingerprint match scores perform well when the number of features is adequate and the image quality is acceptable. In practice, fingerprints collected in unconstrained environments guarantee neither the requisite image quality nor the minimum number of features required. This paper presents a novel fusion algorithm that combines fingerprint match scores to provide high accuracy under non-ideal conditions. The match scores obtained from level-2 and level-3 classifiers are first augmented with a quality score that is quantitatively determined by applying the redundant discrete wavelet transform to a fingerprint image. We then apply the generalized belief functions of Dezert–Smarandache theory to effectively fuse the quality-augmented match scores obtained from the level-2 and level-3 classifiers. Unlike statistical and learning-based fusion techniques, the proposed plausible and paradoxical reasoning approach effectively mitigates conflicting classifier decisions, especially when the evidence is imprecise due to poor image quality or limited fingerprint features. The proposed quality-augmented fusion algorithm is validated using a comprehensive database that comprises rolled and partial fingerprint images of varying quality with arbitrary numbers of features. Its performance is compared with existing fusion approaches in several challenging, realistic scenarios.

17.
A number of high-order variational models for image denoising have been proposed within the last few years. The main motivation behind these models is to fix problems such as the staircase effect and the loss of image contrast that the classical Rudin–Osher–Fatemi model [Leonid I. Rudin, Stanley Osher and Emad Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992), pp. 259–268] and other models based on the image gradient suffer from. In this work, we propose a new variational model for image denoising based on the Gaussian curvature of the image surface of a given image. We analytically study the proposed model to show why it preserves image contrast, recovers sharp edges, does not transform piecewise smooth functions into piecewise constant functions, and is also able to preserve corners. In addition, we provide two fast solvers for its numerical realization. Numerical experiments illustrate the good performance of the algorithms and the test results.
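The Gaussian curvature of the image surface (x, y, u(x, y)) has the closed form K = (u_xx u_yy - u_xy^2) / (1 + u_x^2 + u_y^2)^2, which a few lines of NumPy can evaluate pointwise; the variational model and the two fast solvers of the paper are not reproduced here.

```python
# Hedged sketch: pointwise Gaussian curvature of the image surface z = u(x, y).
import numpy as np

def gaussian_curvature(u):
    u = u.astype(float)
    uy, ux = np.gradient(u)                 # first derivatives (rows = y, cols = x)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    num = uxx * uyy - uxy * uyx             # determinant of the Hessian
    den = (1.0 + ux**2 + uy**2) ** 2
    return num / den
```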

18.
Bayesian adaptive randomization has attracted increasing attention in the literature and has been implemented in many phase II clinical trials. The doubly adaptive biased coin design (DBCD) is a superior choice among response-adaptive designs owing to its promising properties. In this paper, we propose a randomized design that combines Bayesian adaptive randomization with the doubly adaptive biased coin design. By selecting a fixed tuning parameter, the proposed randomization procedure can target an explicit allocation proportion while assigning more patients to the better treatment. Moreover, the proposed randomization is efficient at detecting treatment differences. We illustrate the proposed design through applications to both discrete and continuous responses, and evaluate its operating characteristics through simulation studies.
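A hedged sketch of the DBCD component only: the Hu-Zhang allocation function steers the observed allocation proportion toward a target rho. The target used below (a simple function of running posterior-mean response rates for binary outcomes) and the tuning parameter gamma are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: DBCD allocation probability (Hu-Zhang form) with a
# response-adaptive target based on running response-rate estimates.
import numpy as np

def dbcd_probability(current_prop, target, gamma=2.0, eps=1e-6):
    """Probability of assigning the next patient to treatment 1."""
    x = np.clip(current_prop, eps, 1.0 - eps)
    rho = np.clip(target, eps, 1.0 - eps)
    a = rho * (rho / x) ** gamma
    b = (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return a / (a + b)

def simulate_trial(p1=0.7, p2=0.5, n=200, rng=np.random.default_rng(0)):
    n1 = s1 = n2 = s2 = 0
    for _ in range(n):
        # Posterior-mean response rates under uniform Beta(1, 1) priors.
        q1, q2 = (s1 + 1) / (n1 + 2), (s2 + 1) / (n2 + 2)
        target = np.sqrt(q1) / (np.sqrt(q1) + np.sqrt(q2))   # skew toward the better arm
        prop = n1 / (n1 + n2) if (n1 + n2) > 0 else 0.5
        if rng.random() < dbcd_probability(prop, target):
            n1 += 1; s1 += rng.random() < p1
        else:
            n2 += 1; s2 += rng.random() < p2
    return n1 / n, (s1 + s2) / n   # allocation proportion to arm 1, overall success rate
```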

19.
Existing support vector machines (SVMs) all assume that every feature of the training samples contributes equally to constructing the optimal separating hyperplane. However, for a given real-world data set, some features may be more relevant to the classification information, while others may be less relevant. In this paper, the linear feature-weighted support vector machine (LFWSVM) is proposed to deal with this problem. The proposed model is constructed in two phases. First, a mutual information (MI) based approach is used to assign an appropriate weight to each feature of the given data set. Second, the model is trained on samples whose features are weighted by the obtained feature weight vector. Meanwhile, the feature weights are embedded in the quadratic program through a detailed theoretical derivation, yielding the dual solution to the original optimization problem. Although computing the feature weights adds an extra computational cost, the proposed model generally exhibits better generalization performance than the traditional support vector machine (SVM) with a linear kernel. Experimental results on one synthetic data set and several benchmark data sets confirm the benefits of the proposed method. Moreover, the experiments also show that the proposed MI-based approach to determining feature weights is superior to the two other most commonly used methods.
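A hedged sketch of the two-phase idea: estimate per-feature relevance with mutual information, rescale the (standardized) features by those weights, and train a linear SVM on the weighted data. This is a plain preprocessing approximation, not the paper's dual-form derivation that embeds the weights in the quadratic program; the dataset and hyperparameters are illustrative.

```python
# Hedged sketch: mutual-information feature weights + linear SVM on weighted features.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
mi = mutual_info_classif(scaler.transform(X_tr), y_tr, random_state=0)
weights = mi / mi.sum()                               # normalized feature weights

svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(scaler.transform(X_tr) * weights, y_tr)       # weight features before training
print("test accuracy:", svm.score(scaler.transform(X_te) * weights, y_te))
```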

20.
杨文莉  黄忠亿 《计算数学》2022,44(3):305-323
Image fusion generally refers to the process of acquiring images of the same target from multiple source channels and integrating complementary multi-focus, multi-modal, multi-temporal and/or multi-view images into a new image. In this paper, we adopt a Huber-regularization-based fusion model for infrared and visible images. The model preserves thermal radiation information by constraining the fused image to have pixel intensities similar to the infrared image, and preserves appearance information such as edges and textures by constraining the fused image to have gray-level gradients and pixel intensities similar to the visible image; at the same time, it alleviates the staircase effect in regions where the gray-level gradient is relatively small. To minimize this variational model, we design a numerical algorithm that combines the ideas of the augmented Lagrangian method (ALM) and the tailored finite point method (TFPM), and we give a convergence analysis of the algorithm. Finally, we compare the proposed model and algorithm with seven other image fusion methods both qualitatively and quantitatively, and analyze the characteristics of the proposed model and the effectiveness of the proposed numerical algorithm.
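For reference, the Huber penalty that gives the model its name is quadratic near zero and linear beyond a threshold delta, which is what softens the staircase effect in low-gradient regions. The minimal sketch below shows only the penalty and its derivative, not the ALM/TFPM solver; delta is an illustrative parameter.

```python
# Hedged sketch: Huber penalty (quadratic near 0, linear in the tails) and its derivative.
import numpy as np

def huber(t, delta=0.1):
    quad = 0.5 * t**2 / delta
    lin = np.abs(t) - 0.5 * delta
    return np.where(np.abs(t) <= delta, quad, lin)

def huber_grad(t, delta=0.1):
    return np.where(np.abs(t) <= delta, t / delta, np.sign(t))

# In a fusion energy of this kind, huber() would be applied to the gradient of the
# fused image (edge/texture fidelity to the visible image), while quadratic data
# terms tie the pixel intensities to the infrared image.
```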
