Similar Literature (20 records found)
1.
A number of previous attempts at vectorizing the fast Fourier transform (FFT) have fallen somewhat short of the full potential speed of vector processors. The algorithm formulation and implementation described here not only achieves full vector utilization but also copes successfully with the problems of hierarchical storage. In the present paper, these techniques are described and extended to the general mixed-radix algorithms, the prime factor algorithm (PFA), the multidimensional discrete Fourier transform (DFT), the rectangular-transform convolution algorithms, and the Winograd fast Fourier transform algorithm. Some of the methods were used in the Engineering and Scientific Subroutine Library for the IBM 3090 Vector Facility. This approach yields consistently high performance over a wide range of transform lengths.
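For illustration, a minimal radix-2 Stockham autosort FFT is sketched below. This is a generic vector-friendly formulation, not the paper's ESSL implementation, but it shows the property such implementations exploit: every stage reads and writes contiguous, unit-stride blocks, so the inner operations map directly onto vector hardware.

    import numpy as np

    def stockham_fft(x):
        # Radix-2 Stockham autosort FFT; len(x) must be a power of two.
        # Each stage works on contiguous blocks of length m, so the
        # innermost operations are unit-stride and vectorize naturally.
        a = np.asarray(x, dtype=complex).copy()
        n = a.size
        b = np.empty_like(a)
        l, m = n // 2, 1
        while l >= 1:
            w = np.exp(-2j * np.pi * np.arange(l) / (2 * l))  # stage twiddles
            for j in range(l):
                c0 = a[j * m : (j + 1) * m]
                c1 = a[(j + l) * m : (j + l + 1) * m]
                b[2 * j * m : (2 * j + 1) * m] = c0 + c1
                b[(2 * j + 1) * m : (2 * j + 2) * m] = w[j] * (c0 - c1)
            a, b = b, a
            l //= 2
            m *= 2
        return a

    # sanity check against NumPy's FFT
    x = np.random.rand(16) + 1j * np.random.rand(16)
    assert np.allclose(stockham_fft(x), np.fft.fft(x))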

2.
Baker, P.W. Electronics Letters, 1973, 9(21): 493-494
A scheme is proposed for speeding up iterative algorithms for generating reciprocals, logarithms, and exponentials that are based on multiplication by factors of the form (1 + E_k 2^-k). The scheme relies on predicting the E_k values, thereby allowing most of the additions to be of the fast-carry type.
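The underlying normalization idea, shown here for the reciprocal only, is to multiply both the argument and an accumulator by factors (1 + E_k 2^-k) until the argument is driven to 1, at which point the accumulator holds 1/y. This is a generic sketch with a greedy digit selection, not Baker's predictive E_k scheme:

    def reciprocal(y, steps=40):
        # Drive x = y toward 1 with factors (1 + e_k * 2**-k), e_k in {-1, +1};
        # the same factors applied to r converge to 1/y.
        # Assumes y already scaled into [0.5, 2), e.g., by exponent handling.
        x, r = float(y), 1.0
        for k in range(1, steps):
            e = -1 if x > 1.0 else 1
            f = 1.0 + e * 2.0 ** -k
            if abs(x * f - 1.0) < abs(x - 1.0):  # greedy: apply only if it helps
                x *= f
                r *= f
        return r

    print(reciprocal(1.9), 1 / 1.9)  # the two values agree closely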

3.
Continuous versions of the multidimensional chirp algorithms compute the function G(y) = F(My), where F(y) is the Fourier transform of a function f(x) of a vector variable x and M is an invertible matrix. Discrete versions of the algorithms compute values of F over the lattice L2 = M·L1 from values of f over a lattice L1, where L2 need not contain the lattice reciprocal to L1. If M is symmetric, the algorithms are multidimensional versions of the Bluestein chirp algorithm, which employs two pointwise multiplication operations (PMOs) and one convolution operation (CO). The discrete version may be efficiently implemented using fast algorithms to compute the convolutions. If M is not symmetric, three modifications are required. First, the Fourier transform is factored as the product of two Fresnel transforms. Second, the matrix M is factored as M = AB, where A and B are symmetric matrices. Third, the Fresnel transforms are modified by the matrices A and B and each modified transform is factored into a product of two PMOs and one CO.
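The 1-D Bluestein building block — two pointwise multiplications by a quadratic-phase chirp around one FFT-based convolution — can be sketched as follows. This is a generic implementation based on the identity nk = (n² + k² − (k−n)²)/2, not the paper's multidimensional version:

    import numpy as np

    def bluestein_dft(x):
        # DFT of arbitrary length n: pointwise multiply by the chirp,
        # convolve with the conjugate chirp, pointwise multiply again.
        n = len(x)
        k = np.arange(n)
        chirp = np.exp(-1j * np.pi * k * k / n)
        a = np.asarray(x, dtype=complex) * chirp          # first PMO
        m = 1 << (2 * n - 1).bit_length()                 # FFT length >= 2n - 1
        b = np.zeros(m, dtype=complex)
        b[:n] = np.conj(chirp)
        b[m - n + 1:] = np.conj(chirp[1:][::-1])          # samples at negative lags
        conv = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))  # the CO
        return chirp * conv[:n]                           # second PMO

    x = np.random.rand(7)
    assert np.allclose(bluestein_dft(x), np.fft.fft(x))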

4.
Competitive learning algorithms for robust vector quantization
The efficient representation and encoding of signals with limited resources, e.g., finite storage capacity and restricted transmission bandwidth, is a fundamental problem in technical as well as biological information-processing systems. Typically, under realistic circumstances, the encoding and communication of messages has to deal with different sources of noise and disturbance. We propose a unifying approach to data compression by robust vector quantization, which explicitly deals with channel noise, bandwidth limitations, and random elimination of prototypes. The resulting algorithm is able to limit the detrimental effect of noise in a very general communication scenario. In addition, the presented model allows us to derive a novel competitive neural-network algorithm that covers topology-preserving feature maps, the so-called neural-gas algorithm, and the maximum-entropy soft-max rule as special cases. Furthermore, continuation methods based on these noise models improve the codebook design by reducing sensitivity to local minima. We show an exemplary application of the novel robust vector quantization algorithm to image compression for a teleconferencing system.
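One of the special cases mentioned, the neural-gas rule, updates every prototype with a weight that decays with its distance rank. A generic single-step sketch (not the unified noise-robust model itself):

    import numpy as np

    def neural_gas_step(codebook, x, lam=1.0, eps=0.05):
        # Rank all prototypes by distance to the input, then move each one
        # toward x with a weight that decays exponentially with its rank.
        d = np.linalg.norm(codebook - x, axis=1)
        rank = np.argsort(np.argsort(d))        # 0 for the closest prototype
        h = np.exp(-rank / lam)                 # soft, rank-based neighborhood
        codebook += eps * h[:, None] * (x - codebook)
        return codebook

Annealing lam toward 0 over training recovers plain winner-take-all competitive learning.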

5.
Anupindi, N.; Prabhu, K.M.M. Electronics Letters, 1990, 26(23): 1973-1975
A fast algorithm for computing the discrete Hartley transform of a real-symmetric data sequence is introduced. The number of computations required is significantly less than that required by the usual split-radix fast Hartley transform.
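For reference, the transform being accelerated is shown below in its direct O(N²) form; the paper's fast algorithm computes the same quantity while exploiting the real-symmetric input:

    import numpy as np

    def dht(x):
        # Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N),
        # with cas(t) = cos(t) + sin(t).
        n = len(x)
        t = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
        return (np.cos(t) + np.sin(t)) @ np.asarray(x, dtype=float)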

6.
A parallel, pipelined architecture for calculating the fast Hartley transform (FHT) is discussed. Hardware implementation of the FHT introduces two challenges: retrograde indexing and data scaling. A novel addressing scheme that permits fast computation of FHT butterflies is proposed, and a hardware implementation of conditional block floating-point scaling that reduces error due to data growth at little extra cost is described. Simulations reveal a processor capable of transforming a 1K-point sequence in 170 μs using a 15.4 MHz clock.
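The conditional block floating-point idea — scale the whole block only when data growth threatens overflow, tracking a shared exponent — might look like this as a behavioural sketch in software, assuming 16-bit words (the paper's hardware realization differs):

    def bfp_check(block, exponent, word_max=2**15 - 1):
        # Halve every sample (one right shift) and bump the shared block
        # exponent only if the next butterfly stage could overflow the word.
        if max(abs(v) for v in block) > word_max // 2:
            block = [v >> 1 for v in block]
            exponent += 1
        return block, exponent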

7.
Three-dimensional convolutions and correlations are used in three-dimensional image-processing applications. Their calculation involves extensive computation, which makes the use of fast transforms very advantageous. As the number of arithmetic operations is very large, the accumulation of rounding or truncation errors arising in the use of the fast Fourier and Hartley transforms tends to increase. Number theoretic transforms are calculated modulo an integer and hence are not subject to these errors. Previously, a one-dimensional transform called the new Mersenne number transform (NMNT) was introduced and applied successfully to the calculation of 1-D convolutions/correlations. Unlike other Mersenne number transforms, the NMNT can handle long data sequences and has fast algorithms. In the paper, the 1-D definitions are first extended to the 3-D case in detail for use in 3-D image-processing applications. The concept and derivation of the 3-D vector-radix algorithm is then introduced for the fast calculation of the 3-D NMNT. The proposed algorithm is found to offer substantial savings over the row-column approach in terms of arithmetic operations. Examples are given showing the validity of both the transform and the algorithm.
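The exactness argument is easiest to see on a generic 1-D number theoretic transform. The NMNT's specific modulus and kernel are defined in the paper; here p = 17 and g = 4 (a primitive 4th root of unity mod 17) stand in:

    def ntt(x, p=17, g=4):
        # DFT over the integers mod p: identical structure to the DFT, but all
        # arithmetic is exact, so no rounding error can accumulate.
        n = len(x)
        return [sum(x[i] * pow(g, i * k, p) for i in range(n)) % p
                for k in range(n)]

    print(ntt([1, 2, 3, 4]))  # length 4 matches the order of g mod 17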

8.
An approach for massive parallel processing in multidimensional digital filtering, which was briefly introduced for causal digital filters in previous publications, is generalized and examined in more detail. It is based on a suitably modified sampling procedure combined with diagonal processing and requires no additional arithmetic operations compared with corresponding conventional digital filtering. The condition that must be satisfied to make the approach suitable for full parallel processing is derived. Properties of the diagonal hyperplanes required for the present approach are discussed.

9.
One of the most serious problems in vector quantisation is the high computational complexity of searching for the closest codeword in the codebook design and encoding phases. The authors present a fast algorithm for this search. The proposed algorithm uses two significant features of a vector, its mean value and variance, to reject many unlikely codewords, saving a great deal of computation time. Since it rejects only codewords that cannot be the closest, the algorithm introduces no extra distortion compared with the conventional full-search method. The results obtained confirm the effectiveness of the proposed algorithm.
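One standard pair of rejection tests of this kind (not necessarily the authors' exact inequalities) follows from splitting each vector into its mean and residual components, which gives ||x − c||² ≥ k(m_x − m_c)² + k(σ_x − σ_c)²:

    import numpy as np

    def nearest_codeword(x, codebook):
        # Full-search result at reduced cost: skip the full distance whenever
        # the cheap mean/variance lower bound already exceeds the best so far.
        k = x.size
        m_x, s_x = x.mean(), x.std()
        means = codebook.mean(axis=1)   # in practice precomputed once
        stds = codebook.std(axis=1)
        best, best_d = -1, np.inf
        for i, c in enumerate(codebook):
            bound = k * ((m_x - means[i]) ** 2 + (s_x - stds[i]) ** 2)
            if bound >= best_d:
                continue                 # rejected without computing ||x - c||^2
            d = np.sum((x - c) ** 2)
            if d < best_d:
                best, best_d = i, d
        return best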

10.
11.
The performance of optimum vector quantizers subject to a conditional entropy constraint is studied. This new class of vector quantizers was originally suggested by Chou and Lookabaugh (1990). A locally optimal design of this kind of vector quantizer can be accomplished through a generalization of the well-known entropy-constrained vector quantizer (ECVQ) algorithm. This generalization of the ECVQ algorithm to a conditional entropy constraint is called CECVQ, i.e., conditional ECVQ. Furthermore, we have extended high-rate quantization theory to this new class of quantizers to obtain a new high-rate performance bound, which is compared and shown to be consistent with bounds derived through conditional rate-distortion theory. A new algorithm for designing entropy-constrained vector quantizers was introduced by Garrido, Pearlman, and Finamore (see IEEE Trans. Circuits Syst. Video Technol., vol. 5, no. 2, p. 83-95, 1995) and is named entropy-constrained pairwise nearest neighbor (ECPNN). The algorithm is basically an entropy-constrained version of the pairwise nearest neighbor (PNN) clustering algorithm of Equitz (1989). By a natural extension of the ECPNN algorithm we develop another algorithm, called CECPNN, that designs conditional entropy-constrained vector quantizers. Simulation results on synthetic sources show that CECPNN and CECVQ have very close distortion-rate performance.
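The entropy-constrained assignment step common to this family minimizes a Lagrangian of distortion plus weighted rate. A minimal sketch, assuming squared-error distortion, strictly positive codeword probabilities, and ideal code lengths −log2 p_i:

    import numpy as np

    def ecvq_assign(x, codebook, probs, lam):
        # Pick the codeword minimizing distortion + lambda * rate, where the
        # rate of codeword i is its ideal code length -log2(p_i).
        rates = -np.log2(probs)
        costs = [np.sum((x - c) ** 2) + lam * r for c, r in zip(codebook, rates)]
        return int(np.argmin(costs))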

12.
This paper presents the development and evaluation of fuzzy vector quantization algorithms. These algorithms are designed to achieve the quality of vector quantizers provided by sophisticated but computationally demanding approaches, while capturing the advantages of the k-means algorithm frequently used in practice, such as speed, simplicity, and conceptual appeal. The uncertainty typically associated with clustering tasks is handled in this approach by allowing each training vector to be assigned to multiple clusters in the early stages of the iterative codebook design process. A training-vector assignment strategy is also proposed for the transition from the fuzzy mode, where each training vector can be assigned to multiple clusters, to the crisp mode, where each training vector is assigned to only one cluster. Such a strategy reduces the dependence of the resulting codebook on the random initial codebook selection. The resulting algorithms are used in image compression based on vector quantization. This application provides the basis for evaluating the computational efficiency of the proposed algorithms and for comparing the quality of the resulting codebook design with that provided by competing techniques.
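A generic fuzzy-to-crisp design step in this spirit uses fuzzy-c-means-style updates; the paper's specific assignment and transition strategy differ. Annealing the fuzzifier m toward 1 recovers crisp k-means assignments:

    import numpy as np

    def fuzzy_memberships(X, codebook, m=2.0):
        # Soft assignment of every training vector to every cluster.
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1) + 1e-12
        u = d ** (-1.0 / (m - 1.0))
        return u / u.sum(axis=1, keepdims=True)

    def update_codebook(X, U, m=2.0):
        # Membership-weighted centroids.
        w = U ** m
        return (w.T @ X) / w.sum(axis=0)[:, None]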

13.
The implementation of digital filtering algorithms using pipelined vector processors is investigated. Modeling of vector processors and vectorization methods are explained, and then the performance of several implementation methods is evaluated based on the model. Vector-processor implementation of FIR filtering algorithms using the outer-product method and the indirect convolution method is evaluated. Recursive and adaptive filtering algorithms, which lead to dependency problems in direct vector-processor implementations, are implemented very efficiently using a newly developed vectorization method. The proposed method computes multiple output samples at a time, making the vector length independent of the filter order. Illustrative examples comparing theoretical results with Cray X-MP simulation results are included.
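The key trick for recursive filters — computing a block of outputs at once so the vector length is set by the block size, not the filter order — is easiest to see for a first-order recursion y[n] = a·y[n−1] + x[n]. A minimal sketch of the idea, not the paper's general method:

    import numpy as np

    def iir1_block(x, a, block=64):
        # Unroll the recursion inside each block:
        #   y[s+i] = a**(i+1) * y_prev + sum_{j<=i} a**(i-j) * x[s+j]
        # so every block is one matrix-vector product of length `block`.
        i = np.arange(block)
        L = np.array([[a ** (r - c) if r >= c else 0.0 for c in i] for r in i])
        apow = a ** (i + 1)
        y = np.empty(len(x))
        y_prev = 0.0
        for s in range(0, len(x), block):
            xb = np.asarray(x[s:s + block], dtype=float)
            n = len(xb)
            y[s:s + n] = apow[:n] * y_prev + L[:n, :n] @ xb
            y_prev = y[s + n - 1]
        return y

    # sanity check against the scalar recursion
    x = np.random.rand(1000)
    ref, p = np.empty_like(x), 0.0
    for t, v in enumerate(x):
        p = 0.9 * p + v
        ref[t] = p
    assert np.allclose(iir1_block(x, 0.9), ref)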

14.
Support vector machines (SVMs) are based on the structural risk minimization principle and strike a balance between empirical risk and generalization ability. Thanks to this good performance, they are being applied ever more widely in classification. This paper discusses the basic principles of the SVM, examines several improved algorithms built on it, and analyzes the connections and differences among them, providing a reference for choosing the best model in practical applications.
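An illustrative classification run with an off-the-shelf SVM (scikit-learn assumed; the dataset and parameters are arbitrary choices, not from the paper):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))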

15.
A least upper bound on the magnitude growth factor of the decimation-in-time fast Hartley transform (FHT) in fixed-point arithmetic is developed, and a new scaling model for roundoff analysis in fixed-point computation is proposed. In this new scaling model, the input data for each computing stage of the decimation-in-time FHT only need to be divided by a constant factor of 2, which successfully prevents overflow. Hence, the novel approach results in a lower noise-to-signal ratio for the fixed-point computation of the FHT.

16.
Hsu  C.-Y. Lin  T.-P. 《Electronics letters》1988,24(4):223-224
A novel approach to discrete interpolation of finite-duration real sequences using subsequences with the fast Hartley transform (FHT) is presented. It is shown that accuracy can be improved by decomposing the signal into an ordered set of subsequences. This development is also convenient because it permits the use of inverse fast Hartley transforms (IFHTs) that are always the same size as the original FHT. The approach is also well suited to parallel processing of each subsequence.

17.
Telecommunication Systems - Phishing websites are amongst the biggest threats Internet users face today, and existing methods like blacklisting, using SSL certificates, etc. often fail to keep up...

18.
When a fibre-optic gyroscope is used as an angular-rate sensor, the error of the rotation vector algorithm in the attitude system increases and the way angular increments are extracted changes. Formulas for extracting angular increments from angular rate are therefore given for the three-sample and four-sample algorithms, and the algorithm errors under coning motion are derived. Guided by the principle of minimizing the error under coning motion, the algorithm coefficients are improved and two improved algorithms are proposed. The derived error formulas show that the improved algorithms reduce the error relative to the original algorithms by a factor of (Ωh)^2. Simulations show the improved algorithms to be about half an order of magnitude more accurate than the originals.
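For context, the classic two-sample rotation-vector update with its coning-compensation cross-product term is sketched below; the paper's contribution is re-derived coefficients for the three- and four-sample, rate-fed variants, which are not reproduced here.

    import numpy as np

    def rotation_vector_two_sample(dtheta1, dtheta2):
        # Classic two-sample update: the cross product compensates the
        # non-commutativity (coning) error of the two sub-interval increments.
        return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)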

19.
The class of single-linkage region-growing (SLRG) algorithms, in which pairs of neighboring pixels are compared for merging, is one of the conceptually simplest approaches to image segmentation. A new normalized coefficient that measures the degree of match between two multivalued vectors, termed the vector degree of match (VDM), provides SLRG applications with the metric needed to group adjacent pixel pairs. Two new SLRG algorithms, applied to multiband images and exploiting the VDM criterion, are presented. Their major advantage over the SLRG implementations found in the literature is that they require only one user-defined parameter, the VDM threshold (VDMT), a normalized value that adapts locally and has an intuitive physical meaning. An SLRG module is convenient as the first stage of a structured segmentation procedure with the following functional properties: (i) it is implemented sequentially; (ii) it combines single-linkage, centroid-linkage, and hybrid-linkage criteria; and (iii) its goal is the detection of image areas characterized by low contrast.
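A minimal single-linkage region-growing loop in this style, with cosine similarity standing in for the paper's VDM coefficient (the actual VDM definition is the paper's):

    import numpy as np

    def region_grow(img, seed, thresh):
        # img: (H, W, bands) array; grow from `seed` by merging any 4-neighbour
        # whose normalized match with the current pixel reaches one threshold.
        h, w, _ = img.shape
        grown = np.zeros((h, w), dtype=bool)
        grown[seed] = True
        stack = [seed]
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                    a, b = img[y, x], img[ny, nx]
                    sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                    if sim >= thresh:
                        grown[ny, nx] = True
                        stack.append((ny, nx))
        return grown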

20.
A fast search method for vector quantization is proposed in this paper. It exploits the fact that in the generalized Lloyd algorithm (GLA) a training vector is either placed in the same minimum-distance partition (MDP) as in the previous iteration or in a partition within a very small subset of partitions. The proposed method therefore searches for the MDP of a training vector only in this subset plus the single previous MDP. As the size of this subset is much smaller than the total number of codevectors, the search is sped up significantly. The creation of the subset is essential, as it directly affects the improvement in computation time; schemes that create the subset efficiently are proposed. The method generates a codebook identical to that generated by the GLA, is simple, and requires only minor modification of the GLA and a modest amount of additional memory. Experimental results show that codebook-training time improved by factors of 6.6 to 50.7 and 5.8 to 70.4 for two test data sets when codebooks of sizes N = 16 to 2048 were trained. The proposed method was also combined with an earlier published method to further improve the computation time.
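The shape of the resulting assignment step is sketched below; how the candidate subset is built is the paper's contribution and is mocked here as a precomputed dict mapping a codeword index to its shortlist.

    import numpy as np

    def assign_shortlist(X, codebook, prev_mdp, candidates):
        # Search only the previous winner plus its small candidate subset,
        # instead of all N codewords, for each training vector.
        labels = np.empty(len(X), dtype=int)
        for t, x in enumerate(X):
            short = [prev_mdp[t]] + list(candidates.get(prev_mdp[t], []))
            dists = [np.sum((x - codebook[j]) ** 2) for j in short]
            labels[t] = short[int(np.argmin(dists))]
        return labels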
