Found 20 similar documents; search took 15 ms.
1.
2.
The Count Compatible Pattern Run-Length (CCPRL) coding compression method is proposed to further improve the compression ratio. First, a segment of pattern in the test set is retained. Second, don't-care bits are filled so that subsequent patterns stay compatible with the retained pattern as many times as possible, until compatibility can no longer be achieved. Third, the compatible patterns are represented in the codeword by the symbol "0" (equal) or "1" (contrary). In addition, the number of consecutive compatible patterns is counted and expanded into binary, which indicates where the codeword ends. Finally, the proposed method is verified on the six largest ISCAS'89 benchmark circuits; the experimental results show an average compression ratio of up to 71.73%.
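The CCPRL flow above (retain a pattern, then mark each following pattern as equal or contrary until one is neither) can be sketched roughly as follows. This is a minimal sketch, not the paper's implementation: the function names are illustrative, and using 'x' for a don't-care bit is an assumption.

```python
def compatible(ref, pat, invert=False):
    """True if pat can match ref (or its complement, when invert=True)
    once don't-care bits ('x') are filled in."""
    for r, p in zip(ref, pat):
        if p == 'x' or r == 'x':
            continue  # a don't-care bit can always be filled to match
        match = (r == p)
        if invert:
            match = not match
        if not match:
            return False
    return True

def ccprl_run(patterns):
    """Scan patterns after the retained one, emitting '0' for an equal-
    compatible pattern and '1' for an inversely compatible one, stopping
    at the first pattern that is neither."""
    ref = patterns[0]
    symbols = []
    for pat in patterns[1:]:
        if compatible(ref, pat):
            symbols.append('0')
        elif compatible(ref, pat, invert=True):
            symbols.append('1')
        else:
            break
    return symbols
```

In the full scheme the length of this run would then be expanded into binary and appended to the codeword.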
3.
4.
Growing test data volume and excessive test application time are two serious concerns in scan-based testing for SoCs. This paper presents an efficient test-independent compression technique based on block merging and eight coding (BM-8C) to reduce test data volume and test application time. Compression is achieved by merging consecutive compatible blocks and encoding the merged blocks with exactly eight codewords. The proposed scheme compresses the pre-computed test data without requiring any structural information about the circuit under test, so it is applicable to IP cores in SoCs. Experimental results demonstrate that the BM-8C technique achieves an average compression ratio of up to 68.14% with significantly lower test application time.
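The block-merging step that BM-8C builds on can be sketched as below, assuming 'x' marks a don't-care bit; the eight-codeword encoding itself is not reproduced here, and the function names are illustrative.

```python
def merge(a, b):
    """Merge two equal-length blocks if they are bit-compatible
    ('x' = don't-care); return the merged block, or None on conflict."""
    out = []
    for x, y in zip(a, b):
        if x == 'x':
            out.append(y)
        elif y == 'x' or x == y:
            out.append(x)
        else:
            return None  # specified bits conflict: blocks incompatible
    return ''.join(out)

def merge_run(blocks):
    """Greedily merge consecutive compatible blocks, as in block-merging
    compression; returns (merged_block, number_of_blocks_merged)."""
    acc = blocks[0]
    n = 1
    for b in blocks[1:]:
        m = merge(acc, b)
        if m is None:
            break
        acc, n = m, n + 1
    return acc, n
```

A real encoder would then emit one of the eight codewords describing the merged block and the run count.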
5.
This paper proposes an adaptive EFDR (Extended Frequency-Directed Run-length) coding method for test data compression. Building on EFDR coding, the method adds a parameter N that represents the difference between the suffix and prefix code lengths. For each test vector in the test set, the most suitable value of N is selected for encoding according to the vector's run distribution, which improves coding efficiency. For decoding, the run lengths of the original test data can be recovered from the codewords by simple arithmetic, and codewords produced under different values of N can all be decoded by the same decoding circuit, so the decoder has low hardware overhead. Experimental results on part of the ISCAS-89 benchmark circuits show that the method achieves an average compression ratio of 69.87%, a 4.07% improvement over the original EFDR coding.
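The runs that EFDR-style codes operate on are maximal strings of equal bits terminated by one opposite bit. A minimal sketch of extracting them (the adaptive choice of N and the actual codeword construction are omitted):

```python
def efdr_runs(bits):
    """Split a bit string into EFDR-style runs: a maximal string of
    equal bits plus the single opposite bit that terminates it
    (e.g. '0001' or '10'); a trailing run may lack the terminator."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        end = min(j + 1, len(bits))  # include the terminating opposite bit
        runs.append(bits[i:end])
        i = end
    return runs
```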
6.
This paper presents a flexible runs-aware PRL coding method whose coding algorithm is simple and easy to implement. The internal 2^n-PRL coding iteratively codes 2^n runs of compatible or inversely compatible patterns inside a single segment. The external N-PRL coding iteratively codes flexible runs of compatible or inversely compatible segments across multiple segments. The decoder architecture is concise. The flexible runs-aware PRL coding method is verified on benchmark circuits; the experimental results show that it achieves a higher compression ratio and shorter test application time.
7.
Compressing data is an effective way to reduce the cost of data storage and transmission. This paper proposes a new bit-reorganization marker coding method for lossless compression of integer data sequences with small mean-square values. The method first applies bit reorganization to the integer sequence to raise the probability of certain data values, and then adaptively selects a suitable coding mode for the data stream according to the local probability distribution of the data. The proposed method and several other lossless compression methods were tested on real integer data sequences with small mean-square values, and their compression results were compared. The tests show that the new method achieves lossless compression and decompression, and that its compression performance is better than LZW coding, classical arithmetic coding, the general-purpose WinRAR software, and the professional audio compression software FLAC, indicating good application prospects.
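One plausible reading of the bit-reorganization step, sketched under the assumption that it regroups the sequence bit-plane by bit-plane so that the high-order planes of small-valued data form long uniform runs that later stages compress well. This interpretation and the function name are assumptions, not the paper's definition.

```python
def bit_reorganize(values, width=16):
    """Regroup a sequence of non-negative integers bit-plane by
    bit-plane (most significant plane first). For data with small
    mean-square values the high planes are almost all zeros, which
    raises the probability of long identical-bit runs."""
    planes = []
    for k in range(width - 1, -1, -1):
        planes.append(''.join(str((v >> k) & 1) for v in values))
    return ''.join(planes)
```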
8.
Optimal Parsing Trees for Run-Length Coding of Biased Data
Aviran S., Siegel P.H., Wolf J.K. 《IEEE Transactions on Information Theory》2008,54(2):841-849
We study coding schemes which encode unconstrained sequences into run-length-limited (d, k)-constrained sequences. We present a general framework for the construction of such (d, k)-codes from variable-length source codes. This framework is an extension of the previously suggested bit stuffing, bit flipping, and symbol sliding algorithms. We show that it gives rise to new code constructions which achieve improved performance over the three aforementioned algorithms. Therefore, we are interested in finding optimal codes under this framework, optimal in the sense of maximal achievable asymptotic rates. However, this appears to be a difficult problem. In an attempt to solve it, we are led to consider the encoding of unconstrained sequences of independent but biased (as opposed to equiprobable) bits. Here, our main result is that one can use the Tunstall source coding algorithm to generate optimal codes for a partial class of (d, k) constraints.
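The bit-stuffing algorithm that this framework extends can be sketched for a (d, ∞) constraint, where every 1 must be followed by at least d zeros; the Tunstall-based construction itself is not reproduced here.

```python
def bit_stuff(bits, d):
    """Classic bit stuffing for a (d, inf) run-length constraint:
    after every 1 in the source stream, insert d mandatory 0s so that
    consecutive 1s in the output are separated by at least d zeros.
    The decoder simply discards the d zeros following each 1."""
    out = []
    for b in bits:
        out.append(b)
        if b == '1':
            out.append('0' * d)
    return ''.join(out)
```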
9.
Testing time and power consumption during SoC testing are becoming increasingly important as the test data volume of intellectual-property cores in SoCs grows. This paper presents a new algorithm to reduce scan-in power and test data volume using a modified scan latch reordering algorithm. We apply scan latch reordering to minimize the column Hamming distance in scan vectors. During scan latch reordering, the don't-care inputs in the scan vectors are assigned for low power and high compression. Experimental results for the ISCAS 89 benchmark circuits show that reduced test data and low-power scan testing can be achieved in all cases.
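The core idea of reordering scan latches to shrink column Hamming distance can be sketched with a simple greedy nearest-neighbour ordering. This is an illustrative stand-in for the paper's modified algorithm, and 'x' for a don't-care input is an assumption.

```python
def column_hamming(col_a, col_b):
    """Hamming distance between two scan-cell columns, where a
    don't-care ('x') matches anything."""
    return sum(a != b and 'x' not in (a, b) for a, b in zip(col_a, col_b))

def greedy_reorder(columns):
    """Greedy latch ordering: start from column 0 and repeatedly append
    the remaining column with the smallest Hamming distance to the last
    placed one, so adjacent scan cells carry similar bit columns."""
    order = [0]
    remaining = set(range(1, len(columns)))
    while remaining:
        last = columns[order[-1]]
        nxt = min(remaining, key=lambda i: column_hamming(last, columns[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```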
10.
Test vector compression has become a key technique for reducing IC test time and cost, given the explosion of test data in systems-on-chip (SoC) in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set exceeds 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by both theoretical analysis and experiments. The on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
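The FDR code used as the baseline above encodes each run of 0s terminated by a 1 as a prefix-tail codeword: group j covers run lengths 2^j − 2 through 2^(j+1) − 3, with a prefix of (j−1) ones followed by a 0 and a j-bit tail. A sketch from that description (the VSPTIDR code itself is not reproduced):

```python
def fdr_encode(run_len):
    """FDR codeword for one run of 0s terminated by a 1.
    Group j covers run lengths 2**j - 2 .. 2**(j+1) - 3; the prefix is
    (j-1) ones then a 0, and the tail is the j-bit group offset."""
    j = 1
    while run_len > 2 ** (j + 1) - 3:
        j += 1
    offset = run_len - (2 ** j - 2)   # position within group j
    prefix = '1' * (j - 1) + '0'
    tail = format(offset, 'b').zfill(j)
    return prefix + tail
```

Short runs get 2-bit codewords ('00', '01'), which is why FDR works best on test sets dominated by 0 bits.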
11.
With advances in integrated-circuit manufacturing technology, VLSI (Very Large Scale Integrated) circuit testing faces the problems of large test data volume and excessive test power consumption. This paper proposes a low-power test data compression scheme based on multi-level compression. The scheme first pre-processes the original test set with input reduction to decrease the number of specified bits, and then performs the first level of compression: the test vectors are partitioned into sub-vectors across multiple scan chains and compressed by compatibility, so that the compressed vectors can be represented by shorter codewords. Next, the test data are filled for low power: capture-power filling is applied first to bring capture power within a safe threshold, and the remaining don't-care bits are then filled for low shift power. Finally, a second level of compression, an improved run-length coding, is applied to the filled test data. Experimental results on the ISCAS89 benchmark circuits show that the scheme achieves a higher compression ratio than the Golomb, FDR, EFDR, 9C, and BM codes, while jointly optimizing capture power and shift power during testing.
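The shift-power filling step can be sketched with standard adjacent fill, one common way to realize it: each don't-care bit copies its neighbour so that shifting the vector into the scan chain causes as few transitions as possible. This is an illustrative stand-in, not necessarily the paper's exact filling rule.

```python
def adjacent_fill(vector):
    """Shift-power-aware X-filling: each don't-care ('x') copies the
    most recent specified bit (initially '0'), minimizing transitions
    as the vector is shifted through the scan chain."""
    out, last = [], '0'
    for b in vector:
        if b == 'x':
            out.append(last)
        else:
            out.append(b)
            last = b
    return ''.join(out)
```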
12.
13.
14.
The emergence of nanometer-scale integration technology made it possible for system-on-a-chip (SoC) designs to contain many reusable cores from multiple sources. This has made SoC testing more complex than conventional VLSI testing. To address this increase in design complexity in terms of data volume and test time, several compression methods have been developed, employed, and proposed in the literature. In this paper, we present a new efficient test vector compression scheme based on block entropy in conjunction with our improved row-column reduction routine to reduce test data significantly. Our results show that the proposed method produces a much higher compression ratio than all previously published methods. On average, our scheme scores nearly 13% higher than the best reported results. In addition, our scheme outperforms all reported results for each of the tested circuits. The proposed scheme is very fast and has considerably low complexity.
15.
This paper proposes a new test data compression/decompression algorithm called hybrid run-length coding, which jointly considers the test data compression ratio, the overhead of the corresponding hardware decoding circuit, and the total test time. The algorithm uses a variable-to-variable coding style, mapping run-length strings of different lengths to codewords of different lengths, which yields a good compression ratio. To further improve compression, a don't-care-bit filling method and a test vector ordering algorithm are also proposed to pre-process the test data before encoding. Moreover, the hardware decoding circuit was fully considered during the design of hybrid run-length coding, keeping the hardware overhead as small as possible and reducing the total test time. Finally, experimental results on the ISCAS 89 benchmark circuits demonstrate the effectiveness of the proposed algorithm.
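A toy variable-to-variable codebook illustrates the idea of mapping run-length strings of different lengths to codewords of different lengths. The table values below are invented for illustration and are not the paper's actual code; the stream is parsed greedily by longest match.

```python
# Illustrative variable-to-variable table: each key is a run of 0s
# terminated by a 1, each value a codeword of a different length.
CODEBOOK = {'1': '00', '01': '01', '001': '10', '0001': '110', '00001': '111'}

def encode(bits):
    """Greedy longest-match parse of the bit stream against CODEBOOK."""
    out, i = [], 0
    while i < len(bits):
        for l in range(min(5, len(bits) - i), 0, -1):
            piece = bits[i:i + l]
            if piece in CODEBOOK:
                out.append(CODEBOOK[piece])
                i += l
                break
        else:
            raise ValueError("stream not parsable by this toy codebook")
    return ''.join(out)
```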
16.
An Efficient Test Data Compression Technique Based on Codes
17.
Shih-Ping Lin, Chung-Len Lee, Jwu-E Chen, Ji-Jan Chen, Kun-Lun Luo, Wen-Ching Wu 《IEEE Transactions on Very Large Scale Integration (VLSI) Systems》2007,15(7):767-776
The random-like filling strategy that today's popular test compression schemes use to pursue high compression introduces large test power. Achieving high compression while also reducing test power for multiple-scan-chain designs is even harder, and very few works have been dedicated to this problem. This paper proposes and demonstrates a multilayer data copy (MDC) scheme for test compression and test power reduction in multiple-scan-chain designs. The scheme uses a decoding buffer, which supports fast loading from previously loaded data, to achieve test data compression and test power reduction at the same time. The scheme can be applied independently of automatic test pattern generation (ATPG) or be incorporated into an ATPG to generate highly compressible and power-efficient test sets. Experimental results on benchmarks show that test sets generated by the scheme achieve large compression and power savings with only a small design area overhead.
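The buffer-reuse idea behind a data-copy decompressor can be sketched as follows. The command format ('L' = load fresh data, 'C' = copy the buffered data again) is a hypothetical simplification of the MDC scheme, not its actual codeword format.

```python
def data_copy_decode(stream, chain_len):
    """Minimal sketch of a data-copy decompressor: each command either
    loads a fresh slice from the tester ('L', bits) or re-emits the
    previously buffered slice ('C',), so repeated slices cost almost
    no test data and cause no new input transitions."""
    buffer, out = '0' * chain_len, []
    for cmd in stream:
        if cmd[0] == 'L':
            buffer = cmd[1]       # load new data into the decoding buffer
        out.append(buffer)        # 'C' simply reuses the buffer contents
    return out
```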
18.
Fast Fractal Image Compression Coding
This paper proposes a fast fractal image compression and decompression method based on local iterated function systems (LIFS). Experiments show that the method still achieves a compression ratio of 25:1 when the signal-to-noise ratio of the reconstructed image is 30 dB.
19.
Test data compression is an effective methodology for reducing test data volume and testing time. A novel compatibility-based test data compression method is presented in this paper. Exploiting the high compression efficiency of the extended frequency-directed run-length coding algorithm, the proposed method groups the test vectors that have the fewest incompatible bits and amalgamates them into a single vector by assigning 1 or 0 to unspecified bits and c to incompatible bits. Three runs, of 1, 0, and c, can then be encoded simultaneously. In addition, a corresponding decoder architecture with low hardware overhead has been developed. To evaluate the effectiveness of the proposed approach, it is applied in experiments to the International Symposium on Circuits and Systems (ISCAS) benchmark circuits. The experimental results show that the proposed algorithm achieves a higher compression ratio than conventional algorithms.
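The amalgamation rule described above (specified bits win over don't-cares, conflicting bits become c) might look like the sketch below. Using 'x' for an unspecified bit and filling a doubly unspecified position with '0' are arbitrary illustrative choices, not the paper's exact rules.

```python
def amalgamate(vec_a, vec_b):
    """Merge two test vectors bit by bit: agreeing or singly specified
    bits keep the specified value, doubly unspecified bits are filled
    with '0', and conflicting specified bits become 'c' so the conflict
    can still be encoded as part of a c-run."""
    out = []
    for a, b in zip(vec_a, vec_b):
        if a == b or b == 'x':
            out.append(a if a != 'x' else '0')
        elif a == 'x':
            out.append(b)
        else:
            out.append('c')   # both specified, values disagree
    return ''.join(out)
```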
20.
Usha Sandeep Mehta, Kankar S. Dasgupta, Nirnjan M. Devashrayee 《Journal of Electronic Testing》2010,26(6):679-688
A compression-decompression scheme based on Huffman coding, the Modified Selective Huffman (MS-Huffman) scheme, is proposed in this paper. The scheme aims to optimize the parameters that influence test cost: the compression ratio, the on-chip decoder area overhead, and the overall test application time. It is proved theoretically that the proposed scheme gives better test data compression than recently proposed encoding schemes for any test set. A large number of experimental results clearly demonstrate that the proposed scheme improves test data compression and reduces overall test application time and on-chip area overhead compared to other Huffman-code-based schemes.
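Selective Huffman coding, the family MS-Huffman belongs to, builds a Huffman code over only the most frequent fixed-size blocks, sending the rest verbatim behind an escape prefix to keep the decoder small. A toy sketch of the code-table construction (the escape mechanism is only noted in a comment, and this is not the MS-Huffman scheme itself):

```python
import heapq
from collections import Counter

def selective_huffman(blocks, top_n=3):
    """Build a Huffman code over only the top_n most frequent blocks;
    any other block would be transmitted verbatim behind a one-bit
    escape prefix (not shown). Returns {block: codeword}."""
    freq = Counter(blocks).most_common(top_n)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq)]
    heapq.heapify(heap)
    nxt = len(heap)
    codes = {sym: '' for _, _, sym in heap}
    # standard Huffman merging, restricted to the selected symbols
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        group1 = s1 if isinstance(s1, tuple) else (s1,)
        group2 = s2 if isinstance(s2, tuple) else (s2,)
        for sym in group1:
            codes[sym] = '0' + codes[sym]
        for sym in group2:
            codes[sym] = '1' + codes[sym]
        heapq.heappush(heap, (f1 + f2, nxt, group1 + group2))
        nxt += 1
    return codes
```

The most frequent block ends up with the shortest codeword, which is where the compression gain comes from.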