Similar Documents
Found 20 similar documents (search time: 21 ms)
1.
The discrete cosine transform (DCT) is widely used in digital signal processing, image processing, spectral analysis, data compression, and information hiding. We generalize the DCT to a unified expression containing three parameters and prove that in many cases the new transform is orthogonal. Finally, a new type of discrete cosine transform is given and proved to be orthogonal.
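The orthogonality claim is easy to check numerically. Below is a minimal sketch (the paper's three-parameter family is not specified here, so the standard orthonormal DCT-II stands in) that builds the transform matrix and verifies C·Cᵀ = I:

```python
import math

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C[k][n] = a_k * cos(pi*(2n+1)*k/(2N))."""
    C = []
    for k in range(N):
        a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        C.append([a * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for n in range(N)])
    return C

def is_orthogonal(M, tol=1e-12):
    """Check that M @ M^T == I for a square matrix given as a list of rows."""
    N = len(M)
    for i in range(N):
        for j in range(N):
            dot = sum(M[i][t] * M[j][t] for t in range(N))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True
```

The same `is_orthogonal` check applies to any candidate parameterized transform matrix.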

2.
Intuition suggests that the variance of additive noise contaminating a signal can be estimated by investigation of the highest resolution level in a local Nonlinear Multiresolution Analysis. In the case of the Discrete Pulse Transforms with LULU-operators well known elegant identities between B-splines lead to some surprisingly simple and useful results.

3.
Using the theory of Hankel convolution, continuous and discrete Bessel wavelet transforms are defined. Certain boundedness results and inversion formula for the continuous Bessel wavelet transform are obtained. Important properties of the discrete Bessel wavelet transform are given.

4.
An important question in discrete optimization under uncertainty is to understand the persistency of a decision variable, i.e., the probability that it is part of an optimal solution. For instance, in project management, when the task activity times are random, the challenge is to determine a set of critical activities that will potentially lie on the longest path. In the spanning tree and shortest path network problems, when the arc lengths are random, the challenge is to pre-process the network and determine a smaller set of arcs that will most probably be a part of the optimal solution under different realizations of the arc lengths. Building on a characterization of moment cones for univariate problems, and its associated semidefinite constraint representation, we develop a limited marginal moment model to compute the persistency of a decision variable. Under this model, we show that finding the persistency is tractable for zero-one optimization problems with a polynomial sized representation of the convex hull of the feasible region. Through extensive experiments, we show that the persistency computed under the limited marginal moment model is often close to the simulated persistency value under various distributions that satisfy the prescribed marginal moments and are generated independently.
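As a toy illustration of the persistency notion (not the paper's moment-cone model), the sketch below estimates by simulation the probability that a given arc is the optimal choice when arc lengths are random; `arc_samplers` is a hypothetical list of functions drawing one length realization each:

```python
import random

def persistency(arc_samplers, arc, trials=20000, seed=7):
    """Monte Carlo persistency: fraction of length realizations in which
    `arc` attains the minimum, i.e. belongs to the optimal solution of
    this one-arc-choice toy problem."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        lengths = [sample(rng) for sample in arc_samplers]
        if min(range(len(lengths)), key=lambda i: lengths[i]) == arc:
            wins += 1
    return wins / trials
```

With two i.i.d. uniform arcs the persistency of either arc should be close to 1/2.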

5.
The proposed method of linear time-invariant discrete system order reduction is based on multipoint step response matching for both pole and zero evaluation of the low-order model. Depending on the number of zeros and poles of the low-order model, points are selected on the time axis of the unit step response such that the unknown poles and zeros can be determined by solving a set of nonlinear equations using Newton's method.
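The core numerical step is a Newton solve of the matching equations. A minimal finite-difference Newton iteration for a 2×2 system (the function names and the example system are illustrative, not taken from the paper) could look like:

```python
def newton_system(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method for a 2x2 nonlinear system F(x, y) = (0, 0),
    with a forward-difference Jacobian; the kind of solve used to match
    a low-order model's step response at selected time points."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        # forward-difference Jacobian
        f1x, f2x = F(x + h, y)
        f1y, f2y = F(x, y + h)
        J = [[(f1x - f1) / h, (f1y - f1) / h],
             [(f2x - f2) / h, (f2y - f2) / h]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Cramer's rule for J * (dx, dy) = -(f1, f2)
        dx = (-f1 * J[1][1] + f2 * J[0][1]) / det
        dy = (-f2 * J[0][0] + f1 * J[1][0]) / det
        x, y = x + dx, y + dy
    return x, y
```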

6.
Data reduction is an important issue in the field of data mining. The goal of data reduction techniques is to extract a subset of data from a massive dataset while maintaining the properties and characteristics of the original data in the reduced set. This allows an otherwise difficult or impossible data mining task to be carried out efficiently and effectively. This paper describes a new method for selecting a subset of data that closely represents the original data in terms of its joint and univariate distributions. A pair of distance criteria, motivated by the χ2-statistic, are used for measuring the goodness-of-fit between the distributions of the reduced and full datasets. Under these criteria, the data reduction problem can be formulated as a bi-objective quadratic program. A genetic algorithm technique is used in the search/optimization process. Experiments conducted on several real-world data sets demonstrate the effectiveness of the proposed method.
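A χ²-style distance between the binned distributions of the full and reduced sets can be sketched as follows (an illustrative stand-in for the paper's pair of criteria, over one univariate histogram):

```python
def chi2_distance(full_counts, reduced_counts):
    """Chi-square-style distance between the binned distributions of the
    full data set and a reduced subset; 0 means the subset reproduces the
    bin proportions exactly."""
    n_full = sum(full_counts)
    n_red = sum(reduced_counts)
    d = 0.0
    for f, r in zip(full_counts, reduced_counts):
        e = f / n_full   # expected proportion, from the full data
        o = r / n_red    # observed proportion, in the reduced subset
        if e > 0:
            d += (o - e) ** 2 / e
    return d
```

Minimizing such a distance over candidate subsets is exactly the kind of objective a genetic algorithm can search.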

7.
The inverse boundary spectral problem for self-adjoint Maxwell's equations is to reconstruct unknown coefficient functions in Maxwell's equations from the knowledge of the boundary spectral data, i.e. from the eigenvalues and the boundary values of the eigenfunctions. Since the spectrum of the non-self-adjoint Maxwell operator consists of normal eigenvalues and an interval, the complete boundary spectral data can be defined only in a very complicated way. In this article we show that the coefficients can be reconstructed from incomplete data, that is, from the large eigenvalues and the boundary values of the generalized eigenfunctions. In particular, we do not need the infinite-dimensional data corresponding to the non-discrete spectrum.

8.
9.
We recover the first linear programming bound of McEliece, Rodemich, Rumsey, and Welch for binary error-correcting codes and designs via a covering argument. It is possible to show, interpreting the following notions appropriately, that if a code has a large distance, then its dual has a small covering radius and, therefore, is large. This implies that the original code is small. We also point out that this bound is a natural isoperimetric constant of the Hamming cube, related to its Faber–Krahn minima. While our approach belongs to the general framework of Delsarte's linear programming method, its main technical ingredient is Fourier duality for the Hamming cube. In particular, we do not deal directly with Delsarte's linear program or orthogonal polynomial theory. This research was partially supported by ISF grant 039-7682.

10.
This paper proposes a new robust chaotic algorithm for digital image steganography based on a 3-dimensional chaotic cat map and lifted discrete wavelet transforms. The irregular outputs of the cat map are used to embed a secret message in a digital cover image. Discrete wavelet transforms are used to provide robustness. Sweldens’ lifting scheme is applied to ensure integer-to-integer transforms, thus improving the robustness of the algorithm. The suggested scheme is fast, efficient and flexible. Empirical results are presented to showcase the satisfactory performance of our proposed steganographic scheme in terms of its effectiveness (imperceptibility and security) and feasibility. Comparison with some existing transform domain steganographic schemes is also presented.
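The paper's 3-dimensional cat map is not specified here; the classical 2-dimensional Arnold cat map below illustrates how such a map yields a reversible, irregular-looking permutation of pixel positions, which is what makes it useful for scattering embedded message bits:

```python
def cat_map_permute(n, iterations=1):
    """Arnold cat map on an n x n index grid: (x, y) -> (x + y, x + 2y) mod n.
    The map matrix [[1, 1], [1, 2]] has determinant 1, so the map is a
    bijection and the scrambling is exactly invertible."""
    perm = {}
    for x in range(n):
        for y in range(n):
            u, v = x, y
            for _ in range(iterations):
                u, v = (u + v) % n, (u + 2 * v) % n
            perm[(x, y)] = (u, v)
    return perm
```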

11.
12.
Optimization, 2012, 61(5): 735–745
In real applications of data envelopment analysis (DEA), there are a number of pitfalls that could have a major influence on the efficiency. Some of these pitfalls are avoidable and the others remain problematic. One of the most important pitfalls that researchers confront is the closeness of the number of operational units to the number of inputs and outputs. In performance measurement using DEA, the closeness of these two numbers could yield a large number of efficient units. In this article, some inputs or outputs are aggregated and the number of inputs and outputs is reduced iteratively. Numerical examples show that, in comparison with the single DEA method, our approach yields fewer efficient units. This means that our approach has a superior ability to discriminate the performance of the DMUs.

13.
Consider a Vandermonde-like matrix generated by polynomials that satisfy a three-term recurrence relation. If these polynomials are the Chebyshev polynomials, the matrix coincides with the cosine matrix. This paper presents a new fast algorithm for the corresponding matrix–vector product. The algorithm splits the computation into a fast transform that maps the given polynomial basis to the Chebyshev basis and a subsequent fast cosine transform. The first and central part of the algorithm is realized by a straightforward cascade summation based on properties of associated polynomials and by fast polynomial multiplications. Numerical tests demonstrate that our fast polynomial transform attains almost the same precision as the Clenshaw algorithm, but is much faster for large transform lengths.

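For reference, the Clenshaw algorithm used as the accuracy baseline can be sketched for a Chebyshev expansion, using the three-term recurrence T_{k+1}(x) = 2x·T_k(x) − T_{k−1}(x):

```python
def clenshaw_chebyshev(coeffs, x):
    """Evaluate sum_k coeffs[k] * T_k(x) by Clenshaw's backward recurrence:
    b_k = c_k + 2x*b_{k+1} - b_{k+2}, with result c_0 + x*b_1 - b_2."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return coeffs[0] + x * b1 - b2
```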


14.
In [S. Cuomo, L. D’Amore, A. Murli, M.R. Rizzardi, Computation of the inverse Laplace transform based on a collocation method which uses only real values, J. Comput. Appl. Math. 198 (1) (2007) 98–115] the authors proposed a collocation method (C-method) for the real inversion of Laplace transforms (Lt), based on the truncated Laguerre expansion of the inverse function

$$f_N(t) = e^{\sigma t}\sum_{k=0}^{N} c_k\, e^{-bt} L_k(2bt),$$

where $\sigma$ and $b$ are parameters and $c_k$, $k \in \mathbb{N}$, are the MacLaurin coefficients of a function depending on the Lt. The computational kernel of a C-method is the solution of a Vandermonde linear system, whose right-hand side is obtained by evaluating the Lt on the real axis. The Björck–Pereyra algorithm has been used for solving the Vandermonde linear system, providing a computable componentwise error bound on the solution.

For an inversion problem on discrete data, F is known only on a pre-assigned set of points (we refer to these points as samples of F), and the major challenge is to deal with a significant loss of information. A natural approach to overcome this intrinsic difficulty is to construct a suitable fitting model that approximates the given data. In this case, we show that such an approach leads to a C-method with a perturbed right-hand side, and then we again use the Björck–Pereyra algorithm.

Starting from the error introduced by the fitting model, we study its propagation in order to determine the maximum attainable accuracy on $f_N$. Moreover, we derive a computable error bound that allows one to choose the value of the parameter N that gives the maximum attainable accuracy.
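The Björck–Pereyra solve itself is short. A sketch for the primal system V a = f with V[i][j] = x_i^j, i.e. recovering the coefficients of the interpolating polynomial, is:

```python
def bjorck_pereyra(x, f):
    """Solve the Vandermonde system V a = f, V[i][j] = x[i]**j, in O(n^2):
    stage 1 builds Newton divided differences in place; stage 2 converts
    the Newton form to monomial coefficients."""
    a = list(f)
    n = len(x)
    for k in range(n - 1):                      # stage 1: divided differences
        for j in range(n - 1, k, -1):
            a[j] = (a[j] - a[j - 1]) / (x[j] - x[j - k - 1])
    for k in range(n - 2, -1, -1):              # stage 2: Newton -> monomial
        for j in range(k, n - 1):
            a[j] = a[j] - x[k] * a[j + 1]
    return a
```

For example, interpolating p(t) = 1 + 2t + 3t² at t = 0, 1, 2 (values 1, 6, 17) returns the coefficients [1, 2, 3].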


15.
In this paper, we study the estimation and variable selection of the sufficient dimension reduction space for survival data via a new combination of the $L_1$ penalty and the refined outer product of gradients method (rOPG; Xia et al. in J R Stat Soc Ser B 64:363–410, 2002), called SH-OPG hereafter. SH-OPG can exhaustively estimate the central subspace and select the informative covariates simultaneously; meanwhile, the estimated directions remain orthogonal automatically after dropping noninformative regressors. The efficiency of SH-OPG is verified through extensive simulation studies and real data analysis.

16.
Ramanujan numbers were introduced in [2] to implement the discrete Fourier transform (DFT) without using any multiplication operations. Ramanujan numbers are related to π and to integers that are powers of 2. If the transform size N is a Ramanujan number, then the computational complexity of the algorithms used for computing the DFT is O(N^2) addition and shift operations, with no multiplications. In these algorithms, the transform can be computed sequentially with a single adder in O(N^2) addition times. A parallel implementation of the algorithm can be executed in O(N) addition times, with O(N) adders. Some of these Ramanujan numbers of order 2 are related to the Biblical and Babylonian values of π [1]. In this paper, we analytically obtain upper bounds on the degree of approximation in the computation of the DFT when N is a prime Ramanujan number.

17.
A new image contrast enhancement algorithm is proposed that combines a genetic algorithm (GA) with a wavelet neural network (WNN). The incomplete Beta transform (IBT) is used to obtain a non-linear gray-level transform curve that enhances the global contrast of an image, and the GA determines the optimal gray transform parameters. Traditional contrast enhancement algorithms are expensive because they search for optimal gray transform parameters over the whole parameter space; to avoid this, a classification criterion based on the gray-level distribution of the image is proposed. The contrast type of the original image is determined by this criterion, and the parameter space is then fixed separately for each contrast type, which greatly shrinks it and guides the search direction of the GA. Since traditional histogram equalization reduces information and enlarges noise and background blur in the processed image, a synthetic objective function combining peak signal-to-noise ratio (PSNR) and information entropy is used as the fitness function of the GA. The WNN is used to approximate the IBT over the whole image. To enhance local contrast, the discrete stationary wavelet transform (DSWT) is applied and detail is enhanced by a non-linear operator in the three high-frequency sub-bands, with the coefficients in the low-frequency sub-band set to zero. The final enhanced image is obtained by adding the globally enhanced image to the locally enhanced image. Experimental results show that the new algorithm enhances both the global and the local contrast of an image while keeping noise and background blur from being greatly enlarged.

18.
The growth of the Internet has increased digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to design and develop methods and numerical algorithms, stable and of low computational cost, that address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time–frequency features and its good match with Human Visual System directives; these two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to fit the wavelet transform. The watermark signal is calculated from the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman–Pearson statistical criterion. Experimentation on a large set of different images has shown resistance against geometric, filtering, and StirMark attacks with a low false-alarm rate.
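As a hedged sketch of the wavelet step (the paper's exact filters are not given here, so the Haar filter stands in), one decomposition level with perfect reconstruction looks like this; watermark bits would typically be added to selected detail coefficients before inverting:

```python
import math

def haar_dwt_1d(signal):
    """One level of the Haar wavelet transform: split an even-length signal
    into approximation (low-pass) and detail (high-pass) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse of haar_dwt_1d (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out
```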

19.
Suppose that the data have a discrete distribution determined by (θ, ψ), where θ is the scalar parameter of interest and ψ is a nuisance parameter vector. The Buehler 1 − α upper confidence limit for θ is as small as possible, subject to the constraints that (a) its coverage probability is at least 1 − α and (b) it is a nondecreasing function of a pre-specified statistic T. This confidence limit has important biostatistical and reliability applications. The main result of the paper is that for a wide class of models (including binomial and Poisson), parameters of interest θ, and statistics T (which possess what we call the “logical ordering” property), there is a dramatic increase in the ease with which this upper confidence limit can be computed. This result is illustrated numerically for θ a difference of binomial probabilities. Kabaila & Lloyd (2002) also show that if T is poorly chosen then an assumption required for the validity of the formula for this confidence limit may not be satisfied. We show that for binomial data this assumption must be satisfied when T possesses the “logical ordering” property.

20.
We propose a two-component graphical chain model, the discrete regression distribution, where a set of discrete random variables is modeled as a response to a set of categorical and continuous covariates. The proposed model is useful for modeling a set of discrete variables measured at multiple sites along with a set of continuous and/or discrete covariates. The proposed model allows for joint examination of the dependence structure of the discrete response and observed covariates and also accommodates site-to-site variability. We develop the graphical model properties and theoretical justifications of this model. Our model has several advantages over the traditional logistic normal model used to analyze similar compositional data, including site-specific random effect terms and the incorporation of discrete and continuous covariates.
