Similar documents
 20 similar documents found (search time: 31 ms)
1.
2.
The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop stable, computationally inexpensive methods and numerical algorithms that allow these problems to be addressed. We describe a digital watermarking algorithm for color image protection and authentication: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and its good match with Human Visual System directives; these two elements combined are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to suit the wavelet transform. The watermark signal is computed in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image and decide according to the Neyman–Pearson statistical criterion. Experiments on a large set of different images show the scheme to be resistant to geometric, filtering, and StirMark attacks with a low false-alarm rate.
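As a rough illustration of wavelet-domain embedding only (this is not the paper's scheme: the feature-adaptive watermark generation, the pre-processing step and the Neyman–Pearson detector with re-synchronization are all omitted), the following Python sketch adds a bipolar watermark to the diagonal detail band of a one-level DWT and detects it by correlation. It assumes the PyWavelets (pywt) package; all names are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def embed_watermark(image, watermark_bits, alpha=5.0):
    """Toy additive watermark in the diagonal detail band of a one-level Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    w = watermark_bits.reshape(cD.shape) * 2.0 - 1.0   # map {0,1} -> {-1,+1}
    marked = pywt.idwt2((cA, (cH, cV, cD + alpha * w)), "haar")
    return marked, w

def correlation_detector(image, w):
    """Correlate the diagonal detail band with the watermark pattern."""
    _, (_, _, cD) = pywt.dwt2(image.astype(float), "haar")
    return float(np.mean(cD * w))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
bits = rng.integers(0, 2, 32 * 32)
marked, w = embed_watermark(img, bits)
print(correlation_detector(marked, w))   # clearly positive when the mark is present
print(correlation_detector(img, w))      # near zero on the unmarked image
```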

3.
We present a unifying framework for a wide class of iterative methods in numerical linear algebra. In particular, the class of algorithms contains Kaczmarz's and Richardson's methods for the regularized weighted least squares problem with weighted norm. The convergence theory for this class of algorithms yields as corollaries the usual convergence conditions for Kaczmarz's and Richardson's methods. The algorithms in the class may be characterized as being group-iterative, and incorporate relaxation matrices, as opposed to a single relaxation parameter. We show that some well-known iterative methods of image reconstruction fall into the class of algorithms under consideration, and are thus covered by the convergence theory. We also describe a novel application to truly three-dimensional image reconstruction.
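For orientation only, here is a minimal Python sketch of the classical cyclic Kaczmarz iteration with a single relaxation parameter; the paper's framework is far more general, covering group-iterative methods with relaxation matrices and weighted norms.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relax=1.0, x0=None):
    """Cyclic Kaczmarz iteration for a consistent linear system A x = b.

    Each step projects the current iterate onto the hyperplane defined by one
    row of A, optionally under-/over-relaxed by the factor `relax`.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = np.einsum("ij,ij->i", A, A)          # squared row norms
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x

# Toy example: recover x from a small consistent overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
x_hat = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))   # small after enough sweeps
```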

4.
Filtered back-projection (FBP) algorithms are widely available and extensively used methods for tomography. In this paper, we prove the convergence of FBP algorithms at every continuity point of the image function, as well as in the L2-norm and the L1-norm, under certain assumptions on the image and window functions of the FBP algorithms.
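As a usage sketch only (not the paper's analysis), filtered back-projection can be tried on a standard phantom with scikit-image's radon/iradon routines; this assumes a reasonably recent scikit-image, where the reconstruction filter is selected via the filter_name keyword.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)                  # 200x200 test image
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)                         # forward projection
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP
print(f"RMS error: {np.sqrt(np.mean((reconstruction - image) ** 2)):.4f}")
```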

5.
In this paper we present an extensive experimental study comparing four general-purpose graph drawing algorithms. The four algorithms take as input general graphs (with no restrictions whatsoever on connectivity, planarity, etc.) and construct orthogonal grid drawings, which are widely used in software and database visualization applications. The test data (available by anonymous ftp) are 11,582 graphs, ranging from 10 to 100 vertices, which have been generated from a core set of 112 graphs used in “real-life” software engineering and database applications. The experiments provide a detailed quantitative evaluation of the performance of the four algorithms, and show that they exhibit trade-offs between “aesthetic” properties (e.g., crossings, bends, edge length) and running time.

6.
In this work, we present a survey of efficient techniques for software implementation of finite field arithmetic, especially suitable for cryptographic applications. We discuss algorithms for three types of finite fields, and their special versions popularly used in cryptography: binary fields, prime fields and extension fields. Implementation details of the algorithms for field addition/subtraction, multiplication, reduction and inversion in each of these fields are discussed. The efficiency of these algorithms depends largely on the underlying micro-processor architecture; therefore, a careful choice of the appropriate set of algorithms has to be made for a software implementation, depending on the performance requirements and available resources.
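For concreteness, here is a minimal sketch of binary-field (GF(2^m)) multiplication by shift-and-add with polynomial reduction, one of the basic building blocks such a survey covers; the field and test vector below are the AES field GF(2^8), chosen only as a familiar example.

```python
def gf2m_mul(a: int, b: int, mod_poly: int = 0x11B, m: int = 8) -> int:
    """Multiply two elements of GF(2^m) represented as integers.

    Schoolbook shift-and-add: addition in GF(2^m) is XOR, and the running
    multiplicand is reduced by the irreducible polynomial whenever its
    degree reaches m.
    """
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a          # "add" the current multiple
        b >>= 1
        a <<= 1
        if a & (1 << m):         # degree reached m: reduce modulo mod_poly
            a ^= mod_poly
    return result

# Example in GF(2^8) with x^8 + x^4 + x^3 + x + 1 (the AES field):
print(hex(gf2m_mul(0x57, 0x83)))   # 0xc1, the standard textbook test vector
```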

7.
Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured k-space lines. In contrast to well-known algorithms such as SENSE and GRAPPA and their variants, we treat the problem as a non-linear inverse problem. Fast algorithms for computing the necessary Fréchet derivative are given, together with reconstruction algorithms. Copyright © 2007 John Wiley & Sons, Ltd.

8.
Linear inverse problems are very common in signal and image processing. Many algorithms that aim at solving such problems include unknown parameters that need tuning. In this work we focus on optimally selecting such parameters in iterative shrinkage methods for image deblurring and image zooming. Our work uses the projected Generalized Stein Unbiased Risk Estimator (GSURE) for determining the threshold value λ and the number of iterations K in these algorithms. The proposed parameter selection is shown to handle any degradation operator, including ill-posed and even rectangular ones. This is achieved by applying GSURE to the projected expected error. We further propose an efficient greedy parameter-setting scheme that tunes the parameters while iterating, without impairing the resulting deblurring performance. Finally, we provide extensive comparisons with conventional methods for parameter selection, showing the superiority of the projected GSURE.

9.
In this study, we scanned the core of a cylindrical soil sample (60 mm diameter and 100 mm height) by X-ray Computed Tomography (CT), producing 300 consecutive 2D digital images with 16-bit grey-level depth and a resolution of 32 microns (image size 676 × 676 pixels). The aim of this work was to determine the geometry and spatial distribution of the elements in the sample, related in this case to pore, solid and gravel, inside each 2D image, for the subsequent reconstruction of the corresponding 3D approximation of the elements using the total set of 300 soil images. It was therefore possible to determine the relative percentage of each element present in each 2D image and, correspondingly, the structure and total percentage in the 3D reconstruction. The identification of elements in the 2D image slices was accomplished very well using three standard segmentation algorithms: k-means, fuzzy c-means and multilevel Otsu. In order to compare and evaluate the quality of the results, a non-uniformity (NU) measure was applied, such that low values indicate homogeneous regions. Owing to the grey-level depth of the images, the results of the three algorithms were very similar, with comparable statistics and homogeneity (NU values) among the detected materials. This suggests that the pore, solid and gravel spaces were identified very well, which is reflected in their connectivity in the 3D reconstruction. Additionally, the grey-level depth was reduced to 8 bits and the same study was undertaken. In this case, the quality of the results was comparable to the previous ones, as the number of elements and the NU values were very close; however, this also depends largely on the high resolution of the images. Thus, the soil sample of this work was characterized very well using the simplest and most common image segmentation algorithms, thanks to the high contrast and resolution, and regardless of the grey-level depth.
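As a small illustration of one of the segmentation methods named above, here is a self-contained single-threshold Otsu sketch in Python on a synthetic two-phase slice; the study itself uses multilevel Otsu, k-means and fuzzy c-means on real CT data, and the grey-level values below are invented for the example.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, n_bins: int = 256) -> float:
    """Single-threshold Otsu: maximise the between-class variance of the histogram."""
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                      # class-0 probability up to each bin
    w1 = 1.0 - w0                             # class-1 probability
    mu0 = np.cumsum(hist * centers)           # unnormalised class-0 mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Synthetic two-phase "slice": dark pores on a brighter solid matrix.
rng = np.random.default_rng(1)
slice_2d = np.where(rng.random((128, 128)) < 0.3,
                    rng.normal(60, 10, (128, 128)),    # pore grey levels
                    rng.normal(180, 10, (128, 128)))   # solid grey levels
t = otsu_threshold(slice_2d)
pore_fraction = float((slice_2d < t).mean())
print(f"threshold={t:.1f}, pore fraction={pore_fraction:.2f}")
```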

10.
This paper introduces a proximity operator framework for studying the L1/TV image denoising model, which minimizes the sum of a data fidelity term measured in the ℓ1-norm and the total-variation regularization term. Both terms in the model are non-differentiable, which causes algorithmic difficulties for its numerical treatment. To overcome the difficulties, we formulate the total variation as a composition of a convex function (the ℓ1-norm or the ℓ2-norm) and the first-order difference operator, and then express the solution of the model in terms of the proximity operator of the composition. By developing a “chain rule” for the proximity operator of the composition, we identify the solution as a fixed point of a nonlinear mapping expressed in terms of the proximity operator of the ℓ1-norm or the ℓ2-norm, each of which is explicitly given. This formulation naturally leads to fixed-point algorithms for the numerical treatment of the model. We propose an alternative model by replacing the non-differentiable convex function in the formulation of the total variation with its differentiable Moreau envelope, and develop corresponding fixed-point algorithms for solving the new model. When partial information about the underlying image is available, we modify the model by adding an indicator function to the minimization functional and derive its corresponding fixed-point algorithms. Numerical experiments are conducted to test the approximation accuracy and computational efficiency of the proposed algorithms. We also provide a comparison of our approach with two state-of-the-art algorithms available in the literature. Numerical results confirm that our algorithms perform favorably, in terms of PSNR values and CPU time, in comparison to the two algorithms.
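For reference, the two proximity operators mentioned above have well-known closed forms, sketched below: componentwise soft-thresholding for the ℓ1-norm and shrinkage towards the origin for the ℓ2-norm. This is just a reminder of the standard formulas, not the paper's fixed-point algorithms.

```python
import numpy as np

def prox_l1(v: np.ndarray, lam: float) -> np.ndarray:
    """Proximity operator of lam*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l2(v: np.ndarray, lam: float) -> np.ndarray:
    """Proximity operator of lam*||.||_2: shrink the whole vector towards 0."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))   # [ 2.  -0.   0.2]
print(prox_l2(v, 1.0))   # v scaled by (1 - 1/||v||_2)
```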

11.
We revisit an algorithm [called Edge Pushing (EP)] for computing Hessians using Automatic Differentiation (AD) recently proposed by Gower and Mello (Optim Methods Softw 27(2): 233–249, 2012). Here we give a new, simpler derivation for the EP algorithm based on the notion of live variables from data-flow analysis in compiler theory and redesign the algorithm with close attention to general applicability and performance. We call this algorithm Livarh and develop an extension of Livarh that incorporates preaccumulation to further reduce execution time; the resulting algorithm is called Livarhacc. We engineer robust implementations for both algorithms Livarh and Livarhacc within ADOL-C, a widely-used operator overloading based AD software tool. Rigorous complexity analyses for the algorithms are provided, and the performance of the algorithms is evaluated using a mesh optimization application and several kinds of synthetic functions as testbeds. The results show that the new algorithms outperform state-of-the-art sparse methods (based on sparsity pattern detection, coloring, compressed matrix evaluation, and recovery) in some cases by orders of magnitude. We have made our implementation available online as open-source software and it will be included in a future release of ADOL-C.

12.
The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. Hundreds of papers have been written on the TSP, and several exact and heuristic algorithms are available for it. This concise guide outlines the most important and best algorithms for the symmetric and asymmetric versions of the TSP. In several cases, references to publicly available software are provided.
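Purely as an illustration of the heuristic side of that literature (not taken from the guide itself), here is a short Python sketch of the classic nearest-neighbour construction followed by 2-opt improvement for the symmetric TSP on random points.

```python
import itertools
import numpy as np

def nearest_neighbour_tour(dist: np.ndarray, start: int = 0) -> list:
    """Greedy construction heuristic: always visit the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last, j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour: list, dist: np.ndarray) -> list:
    """Improvement heuristic: reverse segments as long as that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            a, b = tour[i - 1], tour[i]
            c, d = tour[j], tour[(j + 1) % len(tour)]
            if dist[a, b] + dist[c, d] > dist[a, c] + dist[b, d] + 1e-12:
                tour[i:j + 1] = reversed(tour[i:j + 1])
                improved = True
    return tour

rng = np.random.default_rng(2)
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
tour = two_opt(nearest_neighbour_tour(dist), dist)
length = sum(dist[tour[k], tour[(k + 1) % len(tour)]] for k in range(len(tour)))
print(f"tour length after 2-opt: {length:.3f}")
```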

13.
Computational geometry is a relatively new (about 30 years old) and rapidly growing branch of computer science that deals with the analysis and design of algorithms for solving geometric problems. These problems typically arise in computer graphics, image processing, computer vision, robotics, manufacturing, knot theory, polymer physics and molecular biology. Since its inception, many of the algorithms proposed in the literature for solving geometric problems have been found to be incorrect. Rather than being ‘purely mathematical’, these incorrect algorithms often contain a strong kinesthetic component. This paper explores the relationship between computational geometric thinking and kinesthetic thinking, the effect of the latter on the correctness and efficiency of the resulting algorithms, and their implications for education.

14.
This paper discusses the mathematical framework for designing methods of Large Deformation Diffeomorphic Matching (LDM) for image registration in computational anatomy. After reviewing the geometrical framework of LDM image registration methods, we prove a theorem showing that these methods may be designed by using the actions of diffeomorphisms on the image data structure to define their associated momentum representations as (cotangent-lift) momentum maps. To illustrate its use, the momentum map theorem is shown to recover the known algorithms for matching landmarks, scalar images, and vector fields. After briefly discussing the use of this approach for diffusion tensor (DT) images, we explain how to use momentum maps in the design of registration algorithms for more general data structures. For example, we extend our methods to determine the corresponding momentum map for registration using semidirect product groups, for the purpose of matching images at two different length scales. Finally, we discuss the use of momentum maps in the design of image registration algorithms when the image data is defined on manifolds instead of vector spaces.

15.
A number of high-order variational models for image denoising have been proposed within the last few years. The main motivation behind these models is to fix problems such as the staircase effect and the loss of image contrast that the classical Rudin–Osher–Fatemi model [Leonid I. Rudin, Stanley Osher and Emad Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992), pp. 259–268] and other models based on the gradient of the image suffer from. In this work, we propose a new variational model for image denoising based on the Gaussian curvature of the image surface of a given image. We analytically study the proposed model to show why it preserves image contrast, recovers sharp edges, does not transform piecewise smooth functions into piecewise constant functions, and is also able to preserve corners. In addition, we provide two fast solvers for its numerical realization. Numerical experiments illustrate the good performance of the algorithms and test results. © 2015 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 32: 1066–1089, 2016
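To make the central quantity concrete, the sketch below computes the Gaussian curvature of the image surface z = f(x, y) with the standard Monge-patch formula K = (f_xx f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2 using finite differences; this covers only the curvature computation, not the paper's variational model or its solvers.

```python
import numpy as np

def gaussian_curvature(f: np.ndarray) -> np.ndarray:
    """Gaussian curvature of the image surface z = f(x, y) via finite differences."""
    fy, fx = np.gradient(f)        # np.gradient returns derivatives along axis 0, then axis 1
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    return (fxx * fyy - fxy ** 2) / (1.0 + fx ** 2 + fy ** 2) ** 2

# A smooth bump has positive curvature at its cap; a flat image has zero everywhere.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
bump = np.exp(-4.0 * (x ** 2 + y ** 2))
K = gaussian_curvature(bump)
print(K[64, 64] > 0.0)                                          # True at the cap
print(np.abs(gaussian_curvature(np.zeros((64, 64)))).max())     # 0.0 for a flat image
```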

16.
The data-extrapolating (extension) technique has important applications in image processing on implicit surfaces and in level set methods. Existing data-extrapolating techniques are inefficient because they are designed without regard to the special structure of the extrapolation equations. Besides, there exists little work on locating the narrow band after data extrapolation, which is a very important problem in narrow-band level set methods. In this paper, we put forward the general Huygens’ principle, and based on this principle we present two efficient data-extrapolating algorithms. The algorithms can easily locate the narrow band during data extrapolation. Furthermore, we propose a prediction–correction version of the data-extrapolating algorithms and the corresponding band-locating method for a special case where the direct band-locating method is hard to apply. Experiments demonstrate the efficiency of our algorithms and the convenience of the band-locating method.

17.
This collection of Matlab 7.0 software supplements and complements the package UTV Tools from 1999, and includes implementations of special-purpose rank-revealing algorithms developed since the publication of the original package. We provide algorithms for computing and modifying symmetric rank-revealing VSV decompositions, we expand the algorithms for the ULLV decomposition of a matrix pair to handle interference-type problems with a rank-deficient covariance matrix, and we provide a robust and reliable Lanczos algorithm which, despite its simplicity, is able to capture all the dominant singular values of a sparse or structured matrix. These new algorithms have applications in signal processing, optimization and LSI information retrieval. AMS subject classification 65F25

18.
The transportation problem with exclusionary side constraints, a practical distribution and logistics problem, is formulated as a 0–1 mixed integer programming model. Two branch-and-bound (B&B) algorithms are developed and implemented in this study to solve this problem. Both algorithms use the Driebeek penalties to strengthen the lower bounds so as to fathom some of the subproblems, to peg variables, and to guide the selection of separation variables. One algorithm also strongly exploits the problem structure in selecting separation variables in order to find feasible solutions sooner. To take advantage of the underlying network structure of the problem, the algorithms employ the primal network simplex method to solve network relaxations of the problem. A computational experiment was conducted to test the performance of the algorithms and to characterize the problem difficulty. The commercial mixed integer programming software CPLEX and an existing special purpose algorithm specifically designed for this problem were used as benchmarks to measure the performance of the algorithms. Computational results show that the new algorithms completely dominate the existing special purpose algorithm and run from two to three orders of magnitude faster than CPLEX.

19.
Fractal image compression is a promising technique for improving the efficiency of image storage and transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical application. In order to improve fractal image coding, efficient fractal image coding algorithms using a special unified feature and a DCT coder are proposed in this paper. Firstly, based on a necessary condition of the best-matching search rule in fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it noticeably reduces the search space and excludes most inappropriate matching sub-blocks before the best-matching search. Secondly, building on the UFC algorithm and in order to improve the quality of the reconstructed image, a DCT coder is combined with it to construct a hybrid fractal image coding algorithm (DUFC). Experimental results show that the proposed algorithms obtain good reconstructed image quality and need much less time than the baseline fractal coding algorithm.

20.
Recently, [Solak E, Çokal C, Yildiz OT, Biyikoğlu T. Cryptanalysis of Fridrich’s chaotic image encryption. Int J Bifur Chaos 2010;20:1405-1413] cryptanalyzed the chaotic image encryption algorithm of [Fridrich J. Symmetric ciphers based on two-dimensional chaotic maps. Int J Bifur Chaos 1998;8(6):1259-1284], which had been considered a benchmark for measuring the security of many image encryption algorithms. This attack can also be applied to other encryption algorithms that have a structure similar to Fridrich’s algorithm, such as that of [Chen G, Mao Y, Chui C. A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos Soliton Fract 2004;21:749-761]. In this paper, we suggest a novel image encryption algorithm based on a three-dimensional (3D) chaotic map that can defeat the aforementioned attack, among other existing attacks. The design of the proposed algorithm is simple and efficient, and is based on three phases which provide the properties necessary for a secure image encryption algorithm, including the confusion and diffusion properties. In phase I, the image pixels are shuffled according to a search rule based on the 3D chaotic map. In phases II and III, 3D chaotic maps are used to scramble the shuffled pixels through mixing and masking rules, respectively. Simulation results show that the suggested algorithm satisfies the required performance tests, such as a high level of security, a large key space and acceptable encryption speed. These characteristics make it a suitable candidate for use in cryptographic applications.
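As a toy illustration of the confusion (pixel-shuffling) phase only, the Python sketch below derives a permutation from a 1D logistic-map orbit; this is a hypothetical stand-in for the paper's 3D chaotic map and is emphatically not a secure cipher.

```python
import numpy as np

def logistic_permutation(n: int, x0: float = 0.3141, r: float = 3.9999) -> np.ndarray:
    """Build a pixel permutation by ranking a logistic-map orbit.

    A 1D toy stand-in for a chaotic-map key stream; illustrative only,
    NOT a secure construction.
    """
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1.0 - x[k - 1])
    return np.argsort(x)            # chaotic orbit -> permutation of pixel indices

img = np.arange(16, dtype=np.uint8).reshape(4, 4)        # stand-in "image"
perm = logistic_permutation(img.size)
shuffled = img.ravel()[perm].reshape(img.shape)          # confusion phase
restored = np.empty_like(shuffled.ravel())
restored[perm] = shuffled.ravel()                        # invert with the same key
assert np.array_equal(restored.reshape(img.shape), img)
```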
