1.
In this paper, area-efficient and high-throughput multi-mode architectures for the SHA-1 and SHA-2 hash families are proposed and implemented in several FPGA technologies. Additionally, a systematic flow for designing multi-mode architectures (implementing more than one function) of these families is introduced. Compared to the corresponding architectures produced by a commercial synthesis tool, the proposed ones are better in terms of both area (by at least 40%) and throughput/area (from 32% up to 175%). Finally, the proposed architectures outperform similar existing ones in terms of throughput and throughput/area, from 4.2× up to 279.4× and from 1.2× up to 5.5×, respectively.
2.
3.
Techniques for reducing interconnect power consumption in realizations of sum-of-products computations are presented. The proposed techniques reorder the sequence of accesses to the coefficient and data memories so as to minimize power-costly switching on the address and data buses. The reordering problem is formulated systematically by mapping it onto the traveling salesman problem (TSP), for both single and multiple functional-unit architectures. The cost function driving the memory-access reordering explicitly takes into account both static information, related to the algorithm's coefficients and storage addresses, and data-related dynamic information. Experimental results from several typical digital signal processing algorithms show that the proposed techniques lead to significant savings in bus switching activity. In most cases, the power consumption in the data paths is reduced as well.
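As a rough illustration of the reordering idea (not the exact TSP formulation or cost model used in the paper), the sketch below treats the bit-level Hamming distance between consecutive bus words as the switching cost and builds an access order with a greedy nearest-neighbour tour; all names and the example addresses are hypothetical. Because the terms of a sum of products commute, any ordering of the accesses produces the same numerical result, which is what makes such a reordering legal.

# Illustrative sketch: reorder memory accesses so that consecutive
# address/coefficient words differ in as few bits as possible,
# approximating the TSP formulation with a greedy nearest-neighbour tour.
# Names, cost model, and example addresses are hypothetical.

def hamming(a: int, b: int) -> int:
    """Number of bus lines that toggle between two binary words."""
    return bin(a ^ b).count("1")

def greedy_reorder(accesses: list[int]) -> list[int]:
    """Greedy nearest-neighbour tour over the access 'cities'."""
    remaining = list(accesses)
    tour = [remaining.pop(0)]          # start from the first access
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda w: hamming(last, w))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def bus_switching(seq: list[int]) -> int:
    """Total bit transitions on the bus for a given access order."""
    return sum(hamming(a, b) for a, b in zip(seq, seq[1:]))

if __name__ == "__main__":
    addresses = [0x1F, 0x03, 0x1E, 0x02, 0x10, 0x11]
    reordered = greedy_reorder(addresses)
    print("original switching :", bus_switching(addresses))
    print("reordered switching:", bus_switching(reordered))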
4.
We present an automated framework that partitions the code and data types of an object-oriented source code for the needs of data management. The goal is to identify the data types that are crucial from a data-management perspective and to separate them from the rest of the code. In this way, the design complexity is reduced, allowing the designer to focus on the important parts of the code and to perform further refinements and optimizations. To achieve this, static and dynamic analyses are performed on the initial C++ specification code. Based on the analysis results, the data types of the application are characterized as crucial or non-crucial. The initial code is then rewritten automatically so that the crucial data types, and the code portions that manipulate them, are separated from the rest of the code. Experiments on well-known multimedia and telecom applications demonstrate the correctness of the automated analysis and code rewriting, as well as the applicability of the framework in terms of execution time and memory requirements. Comparisons with Rational's Quantify™ suite show that Quantify™ fails to analyze the initial code correctly for the needs of data management.
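The sketch below is only a toy model of the kind of classification described above: count, from a dynamic access trace, how often each data type is touched and mark as crucial the smallest set of types that covers most accesses. The 80% coverage threshold, the function names, and the example trace are hypothetical; the actual framework analyzes and rewrites C++ sources.

# Toy sketch of dynamic-analysis-based classification of data types as
# crucial or non-crucial; the coverage threshold and all names are
# hypothetical and only illustrate the idea described in the abstract.
from collections import Counter

def classify_types(access_trace: list[str], coverage: float = 0.8) -> set[str]:
    """Return the smallest set of types that covers `coverage` of all
    recorded accesses (these are the 'crucial' types)."""
    counts = Counter(access_trace)
    total = sum(counts.values())
    crucial, covered = set(), 0
    for type_name, n in counts.most_common():
        if covered / total >= coverage:
            break
        crucial.add(type_name)
        covered += n
    return crucial

trace = ["Packet"] * 900 + ["Frame"] * 80 + ["Config"] * 20
print(classify_types(trace))   # prints {'Packet'} with the default threshold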
5.
6.
In this correspondence, new algorithms are presented for computing the 1-D and 2-D discrete cosine transform (DCT) of even length by means of the discrete Fourier transform (DFT). A comparison of the proposed algorithms with other fast algorithms highlights their computational efficiency, which stems mainly from the advantages of prime-factor decomposition and a proper choice of index mappings.
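The following is not the prime-factor algorithm of the correspondence, only a minimal sketch of the underlying DCT-via-DFT idea for even lengths: the DCT-II of a sequence can be obtained from an N-point DFT of a re-indexed copy of the input followed by a twiddle-factor correction (the well-known Makhoul re-indexing).

# Minimal sketch of computing an even-length DCT-II through a DFT
# (Makhoul-style re-indexing); this illustrates the general DCT-via-DFT
# idea, not the prime-factor algorithm proposed in the correspondence.
import numpy as np

def dct2_via_dft(x: np.ndarray) -> np.ndarray:
    N = len(x)
    # Reorder: even-indexed samples first, then odd-indexed samples reversed.
    v = np.concatenate([x[0::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    # Post-twiddle by exp(-j*pi*k/(2N)) and take twice the real part.
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

x = np.random.rand(16)                     # even length
ref = 2.0 * np.array([sum(x[n] * np.cos(np.pi * (2 * n + 1) * k / 32)
                          for n in range(16)) for k in range(16)])
print(np.allclose(dct2_via_dft(x), ref))   # True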
7.
In this paper, a novel algorithm for low-power image coding and decoding is presented, and the various inherent trade-offs are described and investigated in detail. The algorithm reduces the memory requirements of vector quantization, i.e., the size of the codebook memory and the number of memory accesses, by using small codebooks. This significantly reduces the memory-related power consumption, which is an important part of the total power budget. To compensate for the loss of quality introduced by the small codebook size, simple transformations are applied to the codewords during coding. Thus, small codebooks are extended through computations, and the main coding task becomes computation-based rather than memory-based. Each image block is encoded by a codeword index and a set of transformation parameters. The algorithm leads to power savings of a factor of 10 in coding and a factor of 3 in decoding, at least in comparison to classical full-search vector quantization. In terms of SNR, the image quality is better than or comparable to that of full-search vector quantization, depending on the codebook size used. The main disadvantage of the proposed algorithm is a lower compression ratio than that of vector quantization. The trade-off between image quality and power consumption dominates in this algorithm and is determined mainly by the size of the codebook.
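The sketch below is a schematic model of the coding principle described above: a very small codebook is virtually enlarged by trying simple transformations of each codeword (here, a scale and an offset) and keeping the (index, parameters) pair with the smallest squared error. The particular transformation set, parameter grids, and codebook are hypothetical and serve only to illustrate the idea.

# Schematic sketch of coding with a small codebook extended by simple
# codeword transformations (scale and offset); the transformations and
# parameter grids are hypothetical, not taken from the paper.
import numpy as np

def encode_block(block, codebook, scales=(0.5, 1.0, 2.0), offsets=(-16, 0, 16)):
    """Return (codeword index, scale, offset) minimizing the squared error."""
    best = None
    for i, cw in enumerate(codebook):
        for s in scales:
            for o in offsets:
                err = np.sum((block - (s * cw + o)) ** 2)
                if best is None or err < best[0]:
                    best = (err, i, s, o)
    return best[1:]                      # (index, scale, offset)

def decode_block(index, scale, offset, codebook):
    return scale * codebook[index] + offset

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(8, 16)).astype(float)   # 8 codewords of 16 pixels
block = 0.5 * codebook[3] + 16 + rng.normal(0, 1, 16)         # block near a transformed codeword
idx, s, o = encode_block(block, codebook)
print(idx, s, o)                                              # typically recovers: 3 0.5 16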
8.
In the usual frequentist formulation of testing and interval estimation there is a strong relationship between α-level tests and 1 − α confidence intervals. Such strong relationships do not always persist for post-data, or Bayesian, measures of accuracy of these procedures. We explore the relationship between post-data measures of accuracy of both tests and interval estimates, measures that are derived under a decision-theoretic structure. We find that, in general, there are strong post-data relationships in the one-sided case, and some relationships in the two-sided case.
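For reference, the pre-data duality alluded to in the first sentence is the standard test-inversion result (not specific to the post-data analysis of this paper): if, for every \theta_0, A(\theta_0) is the acceptance region of a level-\alpha test of H_0\colon \theta = \theta_0, then

\[
C(x) = \{\theta_0 : x \in A(\theta_0)\}, \qquad
P_\theta\bigl(\theta \in C(X)\bigr) = P_\theta\bigl(X \in A(\theta)\bigr) \ge 1 - \alpha ,
\]

so the inverted set C(X) is a 1 − α confidence set.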
9.
This paper presents the performance improvements and energy savings obtained from mapping real-world benchmarks onto an embedded single-chip platform that couples coarse-grained reconfigurable logic with a microprocessor. The reconfigurable hardware is a 2-D array of processing elements connected by a mesh-like network. Analytical results are presented from mapping seven real-life digital signal processing applications, with the aid of an automated design flow, onto six different instances of the system architecture. Significant overall application speedups relative to an all-software solution, ranging from 1.81 to 3.99, are reported and are close to the theoretical speedup bounds. Additionally, the energy savings range from 43% to 71%. Finally, a comparison with a system coupling a microprocessor with a very long instruction word core shows that the microprocessor/coarse-grained reconfigurable array platform is more efficient in terms of both performance and energy consumption.
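The "theoretical speedup bounds" mentioned above can be read in the usual Amdahl-style way (an assumption here, not a formula quoted from the paper): if a fraction f of the original execution time is mapped onto the reconfigurable array and accelerated there by a factor s, the overall speedup is

\[
S = \frac{1}{(1 - f) + f/s} \;\le\; \frac{1}{1 - f}.
\]

Purely as an arithmetic illustration, f = 0.75 and s = 10 would give S ≈ 3.1, of the same order as the 1.81 to 3.99 speedups reported.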
10.
In this paper, a novel architecture of a floating-point digital signal processor is presented. It introduces a single hardware structure with a full set of elementary arithmetic functions, which includes sin, cos, tan, arctan, circular rotation and vectoring, sinh, cosh, tanh, arctanh, hyperbolic rotation and vectoring, square root, logarithm, and exponential, as well as addition, multiplication, and division. The architecture of the processor is based on the COordinate Rotation DIgital Computer (CORDIC) and Convergence Computing Method (CCM) algorithms for computing arithmetic functions, and it is fully parallel and pipelined. Its advanced functionality is achieved without a significant increase in hardware compared to an ordinary CORDIC processor, making it an ideal processing element in high-speed multiprocessor applications, e.g., real-time digital signal processing (DSP) and computer graphics.
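As a point of reference for the CORDIC part of the datapath, the short model below iterates the circular rotation mode in software: the vector (x, y) is rotated towards the target angle using only shifts and adds, and a constant gain correction recovers cos and sin. This is an illustrative software model only; the processor described above is a fully parallel, pipelined hardware design and also relies on CCM for functions such as logarithm and exponential.

# Minimal software model of CORDIC in circular rotation mode: rotates
# (x, y) by angle z using only shift-and-add style updates, yielding
# cos/sin after gain compensation. Illustrative of the algorithm only.
import math

def cordic_sin_cos(angle: float, iterations: int = 32) -> tuple[float, float]:
    # Precomputed elementary rotation angles atan(2^-i) and the gain correction.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.cos(atans[i])           # equals prod 1/sqrt(1 + 2^-2i)
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * gain, y * gain                # (cos(angle), sin(angle))

c, s = cordic_sin_cos(0.6)
print(abs(c - math.cos(0.6)) < 1e-6, abs(s - math.sin(0.6)) < 1e-6)  # True True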