1.
Minimax Universal Decoding With an Erasure Option  (Total citations: 1; self-citations: 0; citations by others: 1)
Merhav N., Feder M. 《IEEE Transactions on Information Theory》2007,53(5):1664-1675
Motivated by applications of rateless coding, decision feedback, and automatic repeat request (ARQ), we study the problem of universal decoding for unknown channels in the presence of an erasure option. Specifically, we harness the competitive minimax methodology developed in earlier studies in order to derive a universal version of Forney's classical erasure/list decoder, which, in the erasure case, optimally trades off between the probability of erasure and the probability of undetected error. The proposed universal erasure decoder guarantees universal achievability of a certain fraction ξ of the optimum error exponents of these probabilities (in a sense to be made precise in the sequel). A single-letter expression for ξ, which depends solely on the coding rate and the Neyman-Pearson threshold (to be defined), is provided. The example of the binary symmetric channel is studied in full detail, and some conclusions are drawn.
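Forney's classical erasure rule, which this paper universalizes, decodes the most likely codeword only when its likelihood exceeds the combined likelihood of all competitors by a margin set by a threshold, and erases otherwise. A minimal sketch under assumed toy parameters (a known binary symmetric channel and three hypothetical codewords; the paper's universal decoder, by contrast, does not know the channel):

```python
import numpy as np

def forney_erasure_decode(y, codebook, channel_llh, threshold):
    """Forney's erasure decoding rule (sketch).

    Decode the most likely codeword only if its likelihood beats the
    total likelihood of all competitors by the factor exp(n*threshold);
    otherwise declare an erasure.  channel_llh(y, x) returns log P(y|x).
    """
    n = len(y)
    llh = np.array([channel_llh(y, x) for x in codebook])
    best = int(np.argmax(llh))
    competitors = np.delete(llh, best)
    # log-sum-exp of the competing likelihoods, computed stably
    m = competitors.max()
    log_denom = m + np.log(np.exp(competitors - m).sum())
    if llh[best] - log_denom >= n * threshold:
        return best          # confident decision
    return None              # erasure

# Toy example (assumed parameters): BSC with crossover probability 0.1,
# three codewords of length 8.
def bsc_llh(y, x, p=0.1):
    d = int(np.sum(np.asarray(y) != np.asarray(x)))  # Hamming distance
    return d * np.log(p) + (len(y) - d) * np.log(1 - p)

codebook = [np.zeros(8, int), np.ones(8, int), np.array([0, 1] * 4)]
y = np.zeros(8, int); y[0] = 1   # codeword 0 with one bit flipped
```

Raising the threshold makes the decoder erase more often in exchange for fewer undetected errors, which is exactly the trade-off the abstract refers to.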
2.
Neri Merhav, Bhaskaran Vasudev 《Journal of Visual Communication and Image Representation》1996,7(4):395-410
In prior work, we developed a fast inverse motion compensation method that can be implemented directly on the DCT-domain representation derived from compressed bitstreams conforming to the MPEG, H.261, and H.263 standards. That work was restricted to compressed-domain representations in which the motion vectors have integer-pel accuracy. Here, we extend this work to fractional-pel-accurate motion vectors. We also extend the prior work to speed up the inverse motion compensation process in the DCT domain by explicitly exploiting the sparseness of the DCT-domain representation. Using partial DCT information, we show that the DCT-domain method has a substantially lower operation count than the conventional spatial-domain approach, which requires decompression followed by inverse motion compensation.
3.
Merhav N., Ziv J. 《IEEE Transactions on Information Theory》2006,52(3):867-873
We consider a variation of the Wyner-Ziv (W-Z) problem pertaining to lossy compression of individual sequences using finite-state encoders and decoders. There are two main results in this paper. The first characterizes the relationship between the performance of the best M-state encoder-decoder pair and that of the best block code of size ℓ for every input sequence, and shows that the loss of the latter relative to the former (in terms of both rate and distortion) never exceeds the order of (log M)/ℓ, independently of the input sequence. Thus, in the limit of large M, the best rate-distortion performance of every infinite source sequence can be approached universally by a sequence of block codes (which are also implementable by finite-state machines). While this result assumes an asymptotic regime where the number of states is fixed, and only the length n of the input sequence grows without bound, we then consider the case where the number of states M = M_n is allowed to grow concurrently with n. Our second result is then about the critical growth rate of M_n such that the rate-distortion performance of M_n-state encoder-decoder pairs can still be matched by a universal code. We show that this critical growth rate of M_n is linear in n.
4.
Merhav N. 《IEEE Transactions on Information Theory》1991,37(6):1586-1594
Binary hypothesis testing using empirically observed statistics is studied in the Neyman-Pearson formulation for the hidden Markov model (HMM). An asymptotically optimal decision rule is proposed and compared to the generalized likelihood ratio test (GLRT), which has been shown in earlier studies to be asymptotically optimal for simpler parametric families. The proof of the main theorem is provided. The result can be applied to several types of HMMs commonly used in speech recognition and communication applications. Several applications are demonstrated.
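For a simple parametric family, the GLRT mentioned above replaces the unknown parameters with their maximum-likelihood estimates under each hypothesis. A minimal sketch for the toy case of a Gaussian with known variance and unknown mean (all parameters here are illustrative assumptions; the paper's HMM setting is far more involved):

```python
import numpy as np

def glrt_gaussian_mean(y, sigma=1.0, threshold=2.0):
    """GLRT sketch for H0: mean 0 vs H1: unknown mean, known variance.

    The ML estimate of the mean under H1 is the sample mean, so the
    log generalized likelihood ratio reduces to n * ybar^2 / (2 sigma^2).
    Decide H1 (return 1) if it exceeds the threshold, else H0 (return 0).
    """
    y = np.asarray(y, float)
    n = len(y)
    log_glr = n * y.mean() ** 2 / (2 * sigma ** 2)
    return 1 if log_glr >= threshold else 0
```

The threshold plays the same role as in the Neyman-Pearson formulation: it fixes the false-alarm exponent while the test remains free of the unknown parameter.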
5.
Model order estimation is a subject in time series analysis that deals with fitting a parametric model to a vector of observations. This paper focuses on several model order estimators known in the literature and examines their performance under small deviations of the probability distribution of the noise from a nominal distribution assumed in the model. It is demonstrated that the standard estimators are highly sensitive to deviations from the nominal distribution, and a drastic performance degradation is experienced. To overcome this problem, robust estimators that are insensitive to small deviations from the nominal distribution are developed. These estimators are based on a combination of model order estimation methods and robust statistical inference techniques known in the literature. In addition, a new estimator based on a locally best test for weak signals is presented in both nonrobust and robust versions. The proposed robust model order estimators are developed on a heuristic basis, and there is no claim of optimality, but experimental results indicate that they provide a significant improvement over the standard estimators.
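As a concrete example of the standard (non-robust) estimators the paper takes as a baseline, penalized-likelihood criteria such as BIC select an autoregressive model order by trading residual fit against a complexity penalty. A sketch, with the least-squares fitting and the exact BIC form as illustrative assumptions:

```python
import numpy as np

def ar_order_bic(x, max_order=6):
    """Model order estimation sketch: choose an AR order by BIC.

    Fits AR(k) by least squares for k = 1..max_order and returns the
    order minimizing n*log(residual variance) + k*log(n).
    """
    x = np.asarray(x, float)
    best_k, best_bic = None, np.inf
    for k in range(1, max_order + 1):
        # Regress x[t] on its k previous samples.
        X = np.column_stack([x[k - j - 1 : len(x) - j - 1] for j in range(k)])
        y = x[k:]
        coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        n = len(y)
        bic = n * np.log(resid.var() + 1e-12) + k * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

The sensitivity the paper points out arises because the residual-variance term implicitly assumes the nominal (Gaussian) noise; the robust versions replace it with statistics that are less affected by small distributional deviations.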
6.
Weissman T., Ordentlich E., Weinberger M.J., Somekh-Baruch A., Merhav N. 《IEEE Transactions on Information Theory》2007,53(4):1253-1264
We consider the filtering problem, where a finite-alphabet individual sequence is corrupted by a discrete memoryless channel, and the goal is to causally estimate each sequence component based on the past and present noisy observations. We establish a correspondence between the filtering problem and the problem of prediction of individual sequences, which leads to the following result: given an arbitrary finite set of filters, there exists a filter which performs, with high probability, essentially as well as the best in the set, regardless of the underlying noiseless individual sequence. We use this relationship between the two problems to derive a filter guaranteed to attain the "finite-state filterability" of any individual sequence by leveraging results from the prediction problem.
7.
Cohen A., Merhav N., Weissman T. 《IEEE Transactions on Information Theory》2007,53(9):3001-3020
We investigate the problem of scanning and prediction ("scandiction," for short) of multidimensional data arrays. This problem arises in several aspects of image and video processing, such as predictive coding, where an image is compressed by coding the error sequence resulting from scandicting it. Thus, it is natural to ask what the optimal method to scan and predict a given image is, what the resulting minimum prediction loss is, and whether there exist specific scandiction schemes which are universal in some sense. Specifically, we investigate the following problems: first, modeling the data array as a random field, we wish to examine whether there exists a scandiction scheme which is independent of the field's distribution, yet asymptotically achieves the same performance as if this distribution were known. This question is answered in the affirmative for the set of all spatially stationary random fields and under mild conditions on the loss function. We then discuss the scenario where a nonoptimal scanning order is used, yet accompanied by an optimal predictor, and derive bounds on the excess loss compared to optimal scanning and prediction. This paper is the first part of a two-part paper on sequential decision making for multidimensional data. It deals with clean, noiseless data arrays. The second part deals with noisy data arrays, namely, with the case where the decision maker observes only a noisy version of the data, yet is judged with respect to the original, clean data.
8.
In continuation of an earlier work, we further consider the problem of robust estimation of a random vector (or signal), with an uncertain covariance matrix, that is observed through a known linear transformation and corrupted by additive noise with a known covariance matrix. While in the earlier work we developed and proposed a competitive minimax approach of minimizing the worst-case mean-squared error (MSE) difference regret criterion, here we study, in the same spirit, the minimum worst-case MSE ratio regret criterion, namely, the worst-case ratio (rather than difference) between the MSE attainable using a linear estimator, ignorant of the exact signal covariance, and the minimum MSE (MMSE) attainable by optimum linear estimation with a known signal covariance. We present the optimal linear estimator, under this criterion, in two ways: the first is as a solution to a certain semidefinite programming (SDP) problem, and the second is as an expression that is of closed form up to a single parameter whose value can be found by a simple line search procedure. We then show that the linear minimax ratio regret estimator can also be interpreted as the MMSE estimator that minimizes the MSE for a certain choice of signal covariance that depends on the uncertainty region. We demonstrate that in applications, the proposed minimax MSE ratio regret approach may outperform the well-known minimax MSE approach, the minimax MSE difference regret approach, and the "plug-in" approach, where in the latter one uses the MMSE estimator with an estimated covariance matrix replacing the true unknown covariance.
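For reference, the "plug-in" baseline mentioned above simply applies the standard linear MMSE formula with whatever signal covariance estimate is at hand. A minimal sketch (the paper's minimax ratio-regret estimator instead optimizes the covariance over the uncertainty region, via an SDP or a line search, which is not shown here):

```python
import numpy as np

def linear_mmse(y, H, Cx, Cn):
    """Plug-in linear MMSE estimator sketch.

    Model: y = H x + n, with signal covariance Cx and noise covariance Cn.
    Wiener solution: x_hat = Cx H^T (H Cx H^T + Cn)^{-1} y.
    When Cx is only an estimate, this is the 'plug-in' approach the
    paper compares against.
    """
    W = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)
    return W @ y

# Toy usage (assumed dimensions): identity model with unit covariances
# shrinks the observation by half, the classic Wiener behavior.
x_hat = linear_mmse(np.array([2.0, 4.0]), np.eye(2), np.eye(2), np.eye(2))
```

The ratio-regret estimator can be seen, per the abstract, as this same formula evaluated at a particular worst-case choice of Cx.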
9.
Weissman T., Merhav N. 《IEEE Transactions on Information Theory》2001,47(6):2151-2173
The problem of predicting the next outcome of an individual binary sequence, based on noisy observations of the past, is considered. The goal of the predictor is to perform, for each individual sequence, "almost" as well as the best in a set of experts, where performance is evaluated using a general loss function. A comprehensive approach to prediction in this noisy setting is presented and proven generally efficient under appropriate conditions. As an illustration of the applicability of the approach to concrete situations, two important special cases are explicitly treated. The first is the case where the data-corrupting noise process is binary-valued (where the observed bit is the bitwise XOR of the clean bit and the noise bit). The second case is that of real-valued additive noise. It is shown that even in this more challenging situation, where the information available to the predictor regarding the past sequence is incomplete, a predictor can be guaranteed to successfully compete with a whole set of experts in a considerably strong sense.
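In the binary XOR-noise special case, one standard way to compete with experts despite seeing only noisy bits is to run an exponentially weighted forecaster on unbiased loss estimates: for a known noise bias p < 1/2, the estimate ((1-p)·l(q,z) - p·l(q,1-z))/(1-2p), computed from the noisy bit z, has expectation l(q,b) over the noise. A sketch along these lines (loss and parameters are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def noisy_exp_weights(expert_preds, noisy_bits, p, eta=0.2):
    """Exponentially weighted forecaster run on unbiased loss estimates.

    expert_preds: (T, K) array of expert predictions in [0, 1]
    noisy_bits:   (T,) observations: clean bit XOR Bernoulli(p), p < 1/2
    Squared loss l(q, b) = (q - b)^2 on the clean bit b is estimated from
    the noisy bit z by ((1-p)*l(q,z) - p*l(q,1-z)) / (1-2p), which is
    unbiased for l(q, b).
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)
    preds = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())          # normalized lazily below
        preds[t] = w @ expert_preds[t] / w.sum()  # weighted-average forecast
        q, z = expert_preds[t], noisy_bits[t]
        est_loss = ((1 - p) * (q - z) ** 2
                    - p * (q - (1 - z)) ** 2) / (1 - 2 * p)
        log_w -= eta * est_loss                   # multiplicative update
    return preds
```

With p = 0 this reduces to the classical clean-observation forecaster; for p > 0 the estimates are noisier, which is one source of the weaker (but still strong) guarantees the abstract alludes to.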
10.
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, which is a certain variant of the ordinary random binning code ensemble. In this code ensemble, the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
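The ordinary random binning ensemble that the semi-deterministic ensemble modifies can be illustrated directly: every source sequence is hashed to a uniformly random bin, the encoder transmits only the bin index, and the decoder searches the bin using the side information. A toy sketch with assumed parameters, using a minimum-Hamming-distance search as a stand-in for the empirical-entropy decoding metric mentioned above:

```python
import itertools
import random

def build_bins(n, num_bins, seed=0):
    """Random binning sketch: hash every length-n binary sequence into
    one of num_bins bins, uniformly at random (toy parameters)."""
    rng = random.Random(seed)
    return {x: rng.randrange(num_bins)
            for x in itertools.product((0, 1), repeat=n)}

def sw_decode(bin_index, y, bins):
    """Decoder: among sequences hashed to the received bin, pick the one
    closest in Hamming distance to the side information y."""
    candidates = [x for x, b in bins.items() if b == bin_index]
    return min(candidates,
               key=lambda x: sum(a != b for a, b in zip(x, y)))

# Usage: encode x as its bin index, decode with correlated side info y.
bins = build_bins(8, 32)
x = (0,) * 8
y = (1,) + (0,) * 7          # x with its first bit flipped
x_hat = sw_decode(bins[x], y, bins)
```

Decoding errors occur exactly when another plausible sequence lands in the same bin as x; the semi-deterministic ensemble reduces such collisions for small type classes by assigning them to bins one-to-one rather than at random.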