1.
The paper contains a systematic investigation of practical coding strategies for noncoherent communication over fading channels, guided by explicit comparisons with information-theoretic benchmarks. Noncoherent reception is interpreted as joint data and channel estimation, assuming that the channel is time varying and a priori unknown. We consider iterative decoding for a serial concatenation of a standard binary outer channel code with an inner modulation code amenable to noncoherent detection. For an information rate of about 1/2 bit per channel use, the proposed scheme, using a quaternary phase-shift keying (QPSK) alphabet, provides performance within 1.6-1.7 dB of Shannon capacity for the block fading channel, and is about 2.5-3 dB superior to standard differential demodulation in conjunction with an outer channel code. We also provide capacity computations for noncoherent communication using standard phase-shift keying (PSK) and quadrature amplitude modulation (QAM) alphabets; comparing these with the capacity with unconstrained input provides guidance as to the choice of constellation as a function of the signal-to-noise ratio. These results imply that QPSK suffices to approach the unconstrained capacity for the relatively low information and fading rates considered in our performance evaluations, but that QAM is superior to PSK for higher information or fading rates, motivating further research into efficient noncoherent coded modulation with QAM alphabets.
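As a rough illustration of the differential-demodulation baseline that the proposed scheme is compared against, the sketch below differentially encodes QPSK, passes it through a block-fading channel with an unknown constant complex gain, and demodulates without any channel estimate. The symbol labeling, SNR definition, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dqpsk_block_fading_demo(n_sym=1000, snr_db=10.0):
    """Differential QPSK over a block-fading channel with an unknown, constant
    complex gain: the conventional noncoherent baseline (illustrative sketch only)."""
    info = rng.integers(0, 4, n_sym)                # information carried in phase differences
    dphase = np.exp(1j * np.pi / 2 * info)
    tx = np.cumprod(np.concatenate(([1.0 + 0j], dphase)))   # differential encoding, one reference symbol

    h = rng.normal() + 1j * rng.normal()            # unknown channel gain, constant over the block
    # noise scaled so the *received* SNR equals snr_db for this realization of h
    noise_std = np.sqrt(abs(h) ** 2 / (2 * 10 ** (snr_db / 10)))
    rx = h * tx + noise_std * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

    # differential demodulation: the unknown gain h cancels in rx[k] * conj(rx[k-1])
    metric = rx[1:] * np.conj(rx[:-1])
    est = np.round(np.angle(metric) / (np.pi / 2)).astype(int) % 4
    return np.mean(est != info)                     # symbol error rate

print(dqpsk_block_fading_demo())
```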
2.
3.
Minimum mean squared error equalization using a priori information
A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. We explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction.
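The core idea, a linear MMSE symbol estimate that folds in prior information from the decoder, can be sketched in a few lines. The block-matrix formulation below is a simplified stand-in for the paper's low-complexity sliding-window algorithms; the channel taps, prior LLRs, and function name are assumptions for illustration.

```python
import numpy as np

def mmse_equalize_with_priors(y, h, llr_prior, sigma2):
    """Block linear MMSE estimate of BPSK symbols from an ISI observation,
    using a priori LLRs to set the prior symbol means/variances.  A simplified
    sketch of the idea behind MMSE-based SISO equalization, not the paper's
    sliding-window algorithm."""
    n = llr_prior.size
    xbar = np.tanh(llr_prior / 2.0)        # prior means of the +/-1 symbols
    v = 1.0 - xbar ** 2                    # prior variances
    # convolution matrix H so that y = H x + noise (channel taps h assumed known)
    H = np.zeros((n + h.size - 1, n))
    for k, tap in enumerate(h):
        H[np.arange(n) + k, np.arange(n)] = tap
    # linear MMSE estimate: xhat = xbar + V H^T (H V H^T + sigma2 I)^-1 (y - H xbar)
    V = np.diag(v)
    cov = H @ V @ H.T + sigma2 * np.eye(H.shape[0])
    return xbar + V @ H.T @ np.linalg.solve(cov, y - H @ xbar)

# toy usage: 3-tap channel, no prior knowledge (all-zero LLRs)
rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=8)
h = np.array([0.5, 0.71, 0.5])
y = np.convolve(x, h) + 0.1 * rng.normal(size=x.size + h.size - 1)
print(mmse_equalize_with_priors(y, h, llr_prior=np.zeros(8), sigma2=0.01))
```

A full SISO equalizer would additionally exclude each symbol's own prior when forming that symbol's estimate and convert the result to an extrinsic LLR; those steps are omitted in this sketch.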
4.
Turbo equalization
Turbo equalization is an iterative equalization and decoding technique that can achieve performance gains comparable to those of turbo codes for communication systems that send digital data over channels that require equalization, i.e., those that suffer from intersymbol interference (ISI). In this article, we discuss the turbo equalization approach to coded data transmission over ISI channels, with emphasis on the basic ideas and some of the practical details. The original system introduced by Douillard et al. can be viewed as an extension of the turbo decoding algorithm by considering the effect of the ISI channel as another form of error protection, i.e., as a rate-1 convolutional code.
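The "ISI channel as a rate-1 convolutional code" view amounts to noting that each channel output is a fixed linear combination of the current and past symbols, so the channel has a trellis on which a MAP/BCJR equalizer can operate just as a decoder would. A minimal numerical illustration, with channel taps chosen for the example rather than taken from the article:

```python
import numpy as np

# The ISI channel acts like an inner rate-1 "code": every output sample is a
# fixed linear combination (convolution) of the current and previous symbols.
taps = np.array([0.407, 0.815, 0.407])                     # example 3-tap ISI response
symbols = np.array([1, -1, -1, 1, 1, -1], dtype=float)     # BPSK symbols
isi_out = np.convolve(symbols, taps)                       # ~one output per input symbol: "rate 1"
print(isi_out)
```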
5.
An important property of low-density parity-check codes is the existence of highly efficient algorithms for their decoding. Many of the most efficient recent graph-based algorithms, e.g., message-passing iterative decoding and linear-programming decoding, crucially depend on the efficient representation of a code in a graphical model. In order to understand the performance of these algorithms, we argue for the characterization of codes in terms of a so-called fundamental cone in Euclidean space. This cone depends upon a given parity-check matrix of a code, rather than on the code itself. We give a number of properties of this fundamental cone derived from its connection to unramified covers of the graphical models on which the decoding algorithms operate. For the class of cycle codes, these developments naturally lead to a characterization of the fundamental cone as the Newton polyhedron of the Hashimoto edge zeta function of the underlying graph.
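For orientation, the fundamental cone associated with a parity-check matrix H is commonly described by the following inequalities; this is the standard form from the LP-decoding literature, quoted here as background rather than as the paper's own result.

```latex
% Fundamental cone K(H) of a parity-check matrix H (standard inequalities):
\[
  \mathcal{K}(H) \;=\;
  \Bigl\{\, \omega \in \mathbb{R}^{n}_{\ge 0} \;:\;
     \omega_i \;\le\; \sum_{i' \in \operatorname{supp}(h_j)\setminus\{i\}} \omega_{i'}
     \ \ \text{for every row } h_j \text{ of } H
     \text{ and every } i \in \operatorname{supp}(h_j) \,\Bigr\}
\]
```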
6.
Modulo lattice additive noise (MLAN) channels appear in the analysis of structured binning codes for Costa's dirty-paper channel and of nested lattice codes for the additive white Gaussian noise (AWGN) channel. In this paper, we derive a new lower bound on the error exponents of the MLAN channel. With a proper choice of the shaping lattice and the scaling parameter, the new lower bound coincides with the random-coding lower bound on the error exponents of the AWGN channel at the same signal-to-noise ratio (SNR) in the sphere-packing and straight-line regions. This result implies that, at least for rates close to channel capacity, 1) writing on dirty paper is as reliable as writing on clean paper; and 2) lattice encoding and decoding suffer no loss of error exponents relative to the optimal codes (with maximum-likelihood decoding) for the AWGN channel.
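A one-dimensional toy version of the MLAN channel makes the model concrete: the lattice is delta*Z and the receiver observes the transmitted point plus noise reduced to the fundamental interval. The scalar setting and parameter values below are illustrative only; the paper's results concern high-dimensional lattices and their error exponents.

```python
import numpy as np

def mlan_channel(x, noise, delta):
    """1-D modulo-lattice additive-noise channel with lattice delta*Z:
    the output is (x + noise) reduced to the interval [-delta/2, delta/2).
    Toy sketch for intuition only."""
    return (x + noise + delta / 2) % delta - delta / 2

rng = np.random.default_rng(2)
x = np.array([0.1, -0.3, 0.45])
print(mlan_channel(x, 0.05 * rng.normal(size=3), delta=1.0))
```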
7.
Efficient soft-decision decoding of Reed-Solomon (RS) codes is made possible by the Koetter-Vardy (KV) algorithm, which consists of a front-end to the interpolation-based Guruswami-Sudan (GS) list decoding algorithm. This paper approaches the soft-decision KV algorithm from the point of view of a communications systems designer who wants to know what benefits the algorithm can give, and how the extra complexity introduced by soft decoding can be managed at the systems level. We show how to reduce the computational complexity and memory requirements of the soft-decision front-end. Applications to wireless communications over Rayleigh fading channels and magnetic recording channels are proposed. For a high-rate RS(255,239) code, 2-3 dB of soft-decision gain is possible over a Rayleigh fading channel using 16-quadrature amplitude modulation. For shorter codes and at lower rates, the gain can be as large as 9 dB. To lower the complexity of decoding on the systems level, the redecoding architecture is explored, which uses only the appropriate amount of complexity to decode each packet. An error-detection criterion based on the properties of the KV decoder is proposed for the redecoding architecture. Queueing analysis verifies the practicality of the redecoding architecture by showing that only a modestly sized RAM buffer is required.
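A commonly cited simplified description of the KV front-end is that it maps a reliability (posterior probability) matrix into an integer multiplicity matrix for the GS interpolation step, for example by proportional scaling. The sketch below shows only that proportional assignment; it is an assumed simplification and not necessarily the exact procedure or parameters used in the paper.

```python
import numpy as np

def proportional_multiplicities(reliability, lam):
    """Turn a q x n reliability matrix (posterior symbol probabilities, one
    column per received position) into an integer multiplicity matrix for
    interpolation-based list decoding by simple proportional scaling,
    M = floor(lam * Pi).  Widely used simplified assignment; the KV paper's
    own greedy algorithm refines this."""
    return np.floor(lam * reliability).astype(int)

# toy example: 4-ary symbols, 5 received positions, assumed reliabilities
pi = np.array([[0.70, 0.05, 0.10, 0.60, 0.25],
               [0.10, 0.80, 0.10, 0.20, 0.25],
               [0.10, 0.10, 0.80, 0.10, 0.25],
               [0.10, 0.05, 0.00, 0.10, 0.25]])
print(proportional_multiplicities(pi, lam=10))
```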
8.
The microstructure of unpassivated PVD copper interconnects has been determined by the electron backscatter diffraction (EBSD) technique inside a scanning electron microscope (SEM), and the appearance and growth of voids and hillocks during electromigration testing have been observed in situ inside the SEM. The EBSD measurement indicates a strong <111> texture for the tested line and a high-angle boundary fraction of more than 70%. The comparison of the EBSD maps and the SEM images of the defect formation due to electromigration shows that the voids are formed mainly at the sidewall and after blocking grains. These images indicate that the diffusion paths are both the interface and the grain boundaries.
9.
The microstructural changes in copper thin films at room temperature after electrolytic deposition have been studied by X-ray diffraction, wafer curvature stress measurement, electrical resistance measurement, and local orientation mapping. Changes in texture and stress were found to take place before grain growth became distinctly visible. Additionally, FIB cross sections showed the evolution of the grains in the third dimension. The results are discussed in terms of grain growth from the bottom to the top of the film.
10.
The structure of tail-biting trellises: minimality and basic principles
Basic structural properties of tail-biting trellises are investigated. We start with rigorous definitions of various types of minimality for tail-biting trellises. We then show that biproper and/or nonmergeable tail-biting trellises are not necessarily minimal, even though every minimal tail-biting trellis is biproper. Next, we introduce the notion of linear (or group) trellises and prove, by example, that a minimal tail-biting trellis for a binary linear code need not have any linearity properties whatsoever. We observe that a trellis - either tail-biting or conventional - is linear if and only if it factors into a product of elementary trellises. Using this result, we show how to construct, for any given linear code C, a tail-biting trellis that minimizes the product of state-space sizes among all possible linear tail-biting trellises. We also prove that every minimal linear tail-biting trellis for C arises from a certain n×n characteristic matrix, and show how to compute this matrix in time O(n²) from any basis for C. Furthermore, we devise a linear-programming algorithm that starts with the characteristic matrix and produces a linear tail-biting trellis for C that minimizes the maximum state-space size. Finally, we consider a generalized product construction for tail-biting trellises, and use it to prove that a linear code C and its dual C⊥ have the same state-complexity profiles.
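To make the product construction concrete: in a linear tail-biting trellis built as a product of elementary trellises, each generator is assigned a circular span and contributes one binary state variable at every boundary strictly inside that span, so the state-complexity profile can be read off by counting overlapping spans. The sketch below computes that profile for assumed spans; the span assignment (and hence any minimality) is illustrative and is not the paper's characteristic-matrix procedure.

```python
def state_profile_from_spans(n, spans):
    """State-space dimensions (log2 of state-space sizes) at the n boundaries of
    a linear tail-biting trellis built as a product of elementary trellises.
    Each generator has a circular span (a, b) covering positions a, a+1, ..., b
    modulo n and contributes one state bit at every boundary strictly inside
    its span.  Illustrative sketch of the product construction only."""
    dims = [0] * n                      # boundary i sits between positions i-1 and i
    for a, b in spans:
        length = (b - a) % n            # number of interior boundaries of the span
        for k in range(1, length + 1):
            dims[(a + k) % n] += 1
    return dims

# toy example: n = 6, three generators with assumed circular spans
print(state_profile_from_spans(6, [(0, 2), (2, 4), (5, 1)]))   # -> [1, 2, 1, 1, 1, 0]
```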