Similar Documents
20 similar documents found (search time: 22 ms)
1.
In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequences) of a chaotic, ergodic, Lebesgue-measure-preserving, non-linear dynamical system known as the Generalized Lüroth Series (GLS). GLS achieves Shannon’s entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS) and motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
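As a toy illustration of the equivalence this abstract describes, the following sketch (not the paper's code; the function name and interface are illustrative) encodes a binary i.i.d. sequence by iterating the inverse branches of the skewed binary GLS map, which reproduces the interval narrowing of arithmetic coding:

```python
def gls_encode(bits, p):
    """Map a bit sequence to a subinterval of [0, 1) by iterating the
    inverse branches of the skewed binary map (0 -> [0, p), 1 -> [p, 1))."""
    lo, hi = 0.0, 1.0
    for b in bits:
        split = lo + p * (hi - lo)
        if b == 0:
            hi = split      # inverse of the left branch x -> x / p
        else:
            lo = split      # inverse of the right branch x -> (x - p) / (1 - p)
    return lo, hi           # any point in [lo, hi) identifies the sequence

lo, hi = gls_encode([0, 1, 0], 0.5)   # interval [0.25, 0.375)
```

The final interval width equals the sequence probability, which is exactly the behaviour of an arithmetic coder.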

2.
A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to build a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, while data compression with entropy optimality is achieved simultaneously. This security-providing modification of the arithmetic coding methodology can easily be adopted as the final entropy-coding stage of most international image and video standards without changing the existing framework. Theoretical analysis and numerical simulations, on both static and adaptive models, show that the proposed encryption algorithm achieves high security without loss of compression efficiency or added computational burden relative to standard AC.
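The keyed-interval idea can be sketched as follows. This is a simplified stand-in, using Python's `random` module in place of the paper's NDF-PRNG, and it shows only that permuting the subinterval layout per keystream bit leaves the interval width (and hence compression efficiency) unchanged:

```python
import random

def keyed_ac_intervals(bits, p, seed):
    """Arithmetic-coding interval narrowing where a keystream bit decides
    the order of the two subintervals at each step."""
    rng = random.Random(seed)          # placeholder for the NDF-PRNG
    lo, hi = 0.0, 1.0
    for b in bits:
        split = lo + p * (hi - lo)
        left_symbol = rng.getrandbits(1)   # keystream bit permutes the layout
        if b == left_symbol:
            hi = split
        else:
            lo = split
    return lo, hi

lo, hi = keyed_ac_intervals([0, 1, 0, 0, 1], 0.5, seed=1)
```

The interval itself depends on the key, but its width (here 0.5**5) does not, so a decoder without the keystream cannot locate the message while compression is unaffected.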

3.
The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography and even in classical cryptography when the highest form of security is desired (other popular algorithms such as RSA, ECC and AES are not even proven to be computationally secure). In this work, we prove that OTP encryption and decryption are equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as the Generalized Lüroth Series (GLS). Having established these connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing to larger alphabets. We further show that OTP encryption is related to Randomized Arithmetic Coding – a scheme for joint compression and encryption.
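For concreteness, a minimal sketch of the OTP itself in its XOR formulation (the GLS/Bernoulli-shift construction the abstract refers to is in the paper):

```python
def otp(bits, key):
    """Encrypt with a one-time pad; since XOR is an involution,
    applying the same key again decrypts."""
    assert len(bits) == len(key)
    return [b ^ k for b, k in zip(bits, key)]

cipher = otp([1, 0, 1, 1], [0, 1, 1, 0])   # -> [1, 1, 0, 1]
plain  = otp(cipher, [0, 1, 1, 0])         # XOR with the same key inverts
```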

4.
The Joint Photographic Experts Group (JPEG) baseline algorithm is widely used because of its high compression capability. A known drawback is that degradation of image quality becomes perceptible as the compression ratio grows high; the most conspicuous artifacts are false contours and mosquito noise. A method for improving image quality is proposed: first, regions containing false contours are extracted from the image; second, these regions are smoothed by a fitting process. It is confirmed that this method is effective in improving image quality degraded by high JPEG compression.

5.
Targeting the progressive-transmission and encode-once/decode-many properties of the JPEG2000 image compression standard, a robust image authentication algorithm based on image features is proposed. During JPEG2000 encoding, the algorithm first generates an authentication watermark from invariant image features; it then determines, according to the actual robustness requirements, the bit plane of each subband's quantized wavelet coefficients in which to embed the watermark; finally, it embeds the authentication watermark based on the features of the wavelet-coefficient bit planes. The algorithm not only accommodates JPEG2000's flexible coding modes but can also locate tampered regions in the image. Experimental results verify the algorithm's robustness to permissible image operations and its sensitivity to image tampering.

6.
In this paper, we give an analytical model of the compression error of down-sampled, wavelet-transform-based compression, which explains why down-sampling before compression can improve coding performance. We approximate the details missing due to down-sampling and compression by a linear combination of a set of basis vectors under an L1-norm criterion. We then propose a down-sampled, high-frequency-information-approximated coding scheme, apply it to natural images, and achieve gains in both subjective and objective quality compared with JPEG2000.

7.
The use of graphics hardware for non-graphics applications has become popular among scientific programmers and researchers, as GPUs have shown a higher rate of theoretical performance increase than CPUs in recent years. However, performance gains may easily be lost in the context of a specific parallel application due to various hardware and software factors. JPEG 2000 is a complex standard for data compression and coding that provides many advanced capabilities demanded by more specialized applications. Several JPEG 2000 implementations target emerging parallel architectures with built-in support for parallelism at different levels. Unfortunately, many available implementations are optimized only for a certain parallel architecture, or do not take advantage of recent capabilities provided by modern hardware and low-level APIs. The main aim of this paper is therefore to present a comprehensive real-world performance analysis of JPEG 2000, which consists of a chain of data- and compute-intensive tasks that can serve as good software benchmarks for modern parallel hardware architectures. We compare the performance of various JPEG 2000 implementations executed on selected architectures for different data sets to identify possible bottlenecks. We also discuss best practices and advice for parallel software development, to help users evaluate solutions in advance and select those best suited to accelerating their applications.

8.
Fractal video sequences coding with region-based functionality
In this paper, we explore fractal video sequence coding in the context of region-based functionality. Since the main drawback of fractal coding is its high computational complexity, several schemes are proposed to speed up the encoding process. As fractal encoding essentially spends most of its time searching for the best-matching block in a large domain pool, this paper first improves the conventional CPM/NCIM method and then applies a new hexagon block-matching motion estimation technique to fractal video coding. The images in the video sequences are encoded region by region according to a previously computed segmentation map. Experimental results indicate that the proposed algorithm requires less encoding time and achieves a higher compression ratio and better compression quality than the conventional CPM/NCIM method.

9.
Nagaraj et al. [1], [2] present a skewed non-linear generalized Lüroth Series (s-nGLS) framework. s-nGLS uses non-linear maps for GLS to introduce a security parameter a, which is used to build a keyspace for image or data encryption. The map introduces non-linearity into the system to add an “encryption key parameter”; the skew is added to achieve optimal compression efficiency. As explained in this communication, s-nGLS used as such for joint encryption and compression is a weak candidate. First, we show how the framework is vulnerable to known-plaintext attacks and that a key of size 256 bits can be broken within 1000 trials. Next, we demonstrate that the proposed non-linearity exponentially increases the hardware complexity of the design. We also find that s-nGLS cannot be implemented as such for large bitstreams. Finally, we demonstrate how correlation of the key parameter with compression performance leads to further key vulnerabilities.

10.
Fractal image compression is a promising technique for improving the efficiency of image storage and transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical application. To improve fractal image coding, this paper proposes efficient algorithms using a special unified feature and a DCT coder. First, based on a necessary condition of the best-matching search rule in fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it markedly reduces the search space by excluding most inappropriate matching subblocks before the best-matching search. Second, building on the UFC algorithm, a DCT coder is incorporated to construct a hybrid fractal image algorithm (DUFC) that improves the quality of the reconstructed image. Experimental results show that the proposed algorithms obtain good reconstructed image quality and require much less time than the baseline fractal coding algorithm.

11.
Research on an embedded zerotree wavelet coding algorithm
Embedded zerotree wavelet (EZW) coding is among the most advanced image coding methods internationally, but it still has shortcomings: the algorithm wastes bits when coding "isolated zero" symbols. After analyzing these shortcomings of zerotree coding, a zerotree wavelet coding method based on the characteristics of the human visual system is proposed. SPIHT achieves a very high compression ratio but is very sensitive to bit errors; a single erroneous bit may prevent the decoder from decoding correctly and cause severe degradation of image quality. On the basis of the image coding scheme of H. Man et al., this paper proposes an improved scheme.

12.
The 1990s witnessed an explosion of wavelet-based methods in the field of image processing. This article will focus primarily on wavelet-based image compression. We shall describe the connection between wavelets and vision, and how wavelet techniques provide image compression algorithms that are clearly superior to the present JPEG standard. In particular, the wavelet-based algorithms known as SPIHT and ASWDR, and the new JPEG2000 standard, will be described and compared. Our comparison will show that, in many respects, ASWDR is the best algorithm. Applications to denoising will also be briefly mentioned, with pointers to further references on wavelet-based image processing.

13.
The core of wavelet-based image compression algorithms is the multiresolution analysis of the wavelet transform together with the quantization and coding of wavelet coefficients at different scales. This paper proposes a compression algorithm combining an energy-based adaptive wavelet transform with vector quantization: under a given energy criterion, whether to apply wavelet decomposition is decided by the energy of each subimage, and appropriate quantization of the wavelet coefficients is then performed. In the quantization stage, an improved LBG algorithm is used to train the codebook. Experiments show that the algorithm is widely applicable to digital images with different characteristics, achieving both a high peak signal-to-noise ratio and high reconstructed image quality.
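The energy criterion in this abstract (decide per subimage, from its energy, whether to decompose further) might be sketched as follows; the one-level Haar step and the threshold are illustrative assumptions, not the paper's exact algorithm:

```python
def haar_step(x):
    """One level of the (unnormalized) Haar transform on an even-length
    sequence: pairwise averages and pairwise half-differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

def should_decompose(x, threshold):
    """Energy criterion: only decompose blocks whose energy is worth coding."""
    return sum(v * v for v in x) >= threshold

avg, det = haar_step([4, 2, 6, 6])   # avg -> [3.0, 6.0], det -> [1.0, 0.0]
```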

14.
Nowadays, problems arise when handling large images (e.g. medical images such as computed tomographies, or satellite images) of 10, 50, 100 or more megabytes, due to the time required for transmission and display; the situation is even worse over a narrow-bandwidth transmission medium (e.g. a dial-up or mobile network), because the receiver must wait until the entire image has arrived. To solve this issue, progressive transmission schemes are used. These schemes allow the image sender to encode the image data in such a way that the receiver can reconstruct the original image from the very beginning of transmission. This reconstruction is, of course, partial, but it can be improved on the fly as more and more of the original image data are received. Many progressive transmission methods are available, such as bit planes, TSVQ, DPCM and, more recently, matrix polynomial interpolation, the Discrete Cosine Transform (DCT, used in JPEG) and wavelets (used in JPEG 2000). However, none of them is well suited, or they perform poorly, when, in addition to progressive transmission, we also want ROI (Region Of Interest) handling. In progressive transmission with ROIs, we want not only to reconstruct the image as we receive image data, but also to select which part or parts of the emerging image we consider relevant and want to receive first, and which parts are of no interest. In this context we present an algorithm for lossy adaptive encoding based on singular value decomposition (SVD). This algorithm turns out to be well suited for progressive transmission and ROI selection in 2D and 3D images, as it avoids redundancy in data transmission and does not require any data recodification, even if we select arbitrary ROIs on the fly. We compare the performance of SVD with DCT and wavelets and present the results.
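A minimal sketch of why SVD suits progressive transmission: each additional singular triplet refines the reconstruction, so triplets can be streamed one at a time. The matrix and rank choices below are synthetic examples, not the paper's data:

```python
import numpy as np

def progressive_svd(A, k):
    """Rank-k reconstruction from the first k singular triplets,
    as a receiver would form it after k triplets have arrived."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Synthetic 4x4 "image"; its rows are arithmetic progressions, so it has rank 2.
A = np.arange(16, dtype=float).reshape(4, 4)
err = [np.linalg.norm(A - progressive_svd(A, k)) for k in (1, 2, 3)]
```

The reconstruction error shrinks as more triplets arrive, and for this rank-2 matrix two triplets already suffice.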

15.
16.
By combining the integer cosine transform with a hash function, this paper proposes a new digital watermarking algorithm within a visual-model framework. The integer transform improves computation speed and image quality, while the visual model gives the watermarking algorithm strong robustness against JPEG compression and other image-processing operations. The encryption in the proposed watermarking scheme follows a public-key cryptosystem and is therefore highly secure.

17.
The topic of quantum chaos has drawn increasing attention in recent years, although a satisfactory definition that differentiates it from its classical counterpart has not yet been settled. Dissipative quantum maps can be characterized by sensitive dependence on initial conditions, like classical maps. Exploiting this property, an image encryption scheme based on the quantum logistic map is proposed. The security and performance of the proposed scheme are analyzed using well-known methods. The results of the reliability analysis are encouraging, and it can be concluded that the proposed scheme is efficient and secure. The results also suggest applying other quantum maps, such as the quantum standard map and the quantum baker map, in cryptography and other aspects of security and privacy.
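A rough classical sketch of the chaotic-keystream idea, with the ordinary logistic map standing in for the quantum map of the abstract (the parameters and byte-extraction rule are illustrative assumptions):

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and crudely extract one
    keystream byte per iterate. Sensitivity to x0 gives key dependence."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(pixels, key):
    """XOR pixel bytes with the keystream; applying it twice decrypts."""
    return [p ^ k for p, k in zip(pixels, key)]

ks = logistic_keystream(0.3141592, 3.9999, 8)
cipher = xor_cipher([10, 20, 30, 40, 50, 60, 70, 80], ks)
plain = xor_cipher(cipher, ks)   # same keystream recovers the pixels
```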

18.
A common claim made when discussing the efficiency of compression programs like JPEG is that the transformations used, the discrete cosine or wavelet transform, decorrelate the data. The standard measure of the information content of the data is the probabilistic entropy; the data can, in this case, be considered as the sampled values of a function. However, no sampling-independent definition of the entropy of a function has been proposed. Such a definition is given here, and it is shown that the entropy so defined equals the entropy of the sampled data in the limit as the sample spacing goes to zero.
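The probabilistic (Shannon) entropy of sampled data, the quantity whose sampling-independent limit the abstract studies, can be computed as:

```python
from collections import Counter
from math import log2

def sample_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

h = sample_entropy([0, 0, 1, 1])   # two equiprobable values -> 1.0 bit
```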

19.
Nowadays, still images are used everywhere in the digital world. Shortages of storage capacity and transmission bandwidth make efficient compression solutions essential. A revolutionary mathematical tool, the wavelet transform, has already shown its power in image processing. MinImage, the major topic of this paper, is an application that compresses still images with wavelets. MinImage is used to compress grayscale and true-color images: it implements the wavelet transform to code standard BMP image files into LET wavelet image files, a format defined by MinImage. The code is written in C++ on the Microsoft Windows NT platform. This paper illustrates the design and implementation details of MinImage according to the stages of image compression. First, the preprocessor generates the wavelet transform blocks. Second, the basic wavelet decomposition transforms the image data into wavelet coefficients; the discrete wavelet transforms are the kernel component of MinImage and are discussed in detail, and different wavelet transforms can be plugged in to extend the functionality of MinImage. The third step is quantization: the standard scalar quantization algorithm, an optimized quantization algorithm, and dequantization are described. The last part of MinImage is the entropy-coding scheme; the reordering of the coefficients along the Peano curve and the different entropy-coding methods are discussed. The paper also specifies the wavelet compression parameters adjustable by the end user. The interface, parameter specification, and analysis of MinImage appear in the final appendix.

20.
The Discrete Wavelet Transform (DWT) is of considerable practical use in image and signal processing applications. For example, significant compression can be achieved through the use of the DWT. A fundamental problem with the DWT, however, is the treatment of finite length data sequences. Commonly used techniques such as circular convolution and symmetric extension can produce undesirable edge effects which propagate into the interior of the transformed data as the number of DWT iterations increases. In this paper, we develop a DWT applicable to Daubechies’ orthogonal wavelets which does not exhibit edge effects. The underlying idea is to extrapolate the data at the boundaries by determining the coefficients of a best fit polynomial through data points in the vicinity of the boundary. This approach can be regarded as a solution to the problem of orthogonal wavelets on an interval. However, it has the advantage that it does not involve the explicit construction of boundary wavelets. The extrapolated DWT is designed to be well conditioned and to produce a critically sampled output. The methods we describe are equally applicable to biorthogonal wavelet bases.
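The boundary-extrapolation idea might be sketched as follows, with the polynomial degree and window size as illustrative choices (not the paper's tuned parameters):

```python
import numpy as np

def extrapolate(signal, n_ext, degree=2, window=4):
    """Extend a finite signal past each boundary by fitting a low-degree
    polynomial to the nearest `window` samples and evaluating it outside."""
    t = np.arange(len(signal))
    left = np.polyfit(t[:window], signal[:window], degree)
    right = np.polyfit(t[-window:], signal[-window:], degree)
    pre = np.polyval(left, np.arange(-n_ext, 0))
    post = np.polyval(right, np.arange(len(signal), len(signal) + n_ext))
    return np.concatenate([pre, signal, post])

x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # samples of t**2
ext = extrapolate(x, 2)                     # quadratic fit extends exactly
```

Because the samples here lie on a quadratic, the degree-2 fit reproduces the function and the extension is exact; a DWT applied to the extended signal then sees no artificial discontinuity at the original boundary.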
