Similar Documents
Found 20 similar documents (search time: 11 ms).
1.
In this paper, we propose a new methodology for the analysis of microarray images. First, a new gridding algorithm is proposed for determining the individual spots and their borders. Then, a Gaussian mixture model (GMM) approach is presented for the analysis of the individual spot images. The main advantages of the proposed methodology are modeling flexibility and adaptability to the data, which are well-known strengths of GMM. The maximum likelihood and maximum a posteriori approaches are used to estimate the GMM parameters via the expectation-maximization (EM) algorithm. The proposed approach can detect and compensate for artifacts that might occur in microarray images, using a model-based criterion that selects the number of mixture components. We present numerical experiments with artificial and real data in which we compare the proposed approach with previous ones and with existing software tools for microarray image analysis, and demonstrate its advantages.
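As a rough illustration of the GMM idea in this abstract, the sketch below fits a two-component mixture (background vs. spot foreground) to the pixel intensities of a synthetic spot via EM. The synthetic image, the fixed component count, and the use of scikit-learn's GaussianMixture (maximum-likelihood EM only, no MAP variant) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fit a 2-component GMM to spot pixel intensities via EM.
# Synthetic single-spot image; not the authors' actual implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
yy, xx = np.mgrid[:32, :32]
# Synthetic spot: bright foreground disc on a dim background, plus noise.
spot = np.where((yy - 16) ** 2 + (xx - 16) ** 2 < 64, 180.0, 40.0)
spot += rng.normal(0, 10, spot.shape)

intensities = spot.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
labels = gmm.predict(intensities).reshape(spot.shape)

# Treat the component with the higher mean as the spot foreground.
fg = int(np.argmax(gmm.means_.ravel()))
print("foreground mean intensity:", round(float(gmm.means_.ravel()[fg]), 1))
print("foreground pixel count:", int((labels == fg).sum()))
```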

2.
Intensity-based segmentation of microarray images   (Total citations: 5; self-citations: 0; citations by others: 5)
The underlying principle in microarray image analysis is that the spot intensity is a measure of the gene expression. This implicitly assumes that the gene expression of a spot is governed entirely by the distribution of the pixel intensities, so a segmentation technique based on that distribution is appropriate for the problem. In this paper, clustering-based segmentation is described for extracting the target intensity of the spots. The approximate boundaries of the spots in the microarray are determined by manual adjustment of rectilinear grids. The distribution of the pixel intensities in a grid cell containing a spot is assumed to be the superposition of the foreground and the local background. The k-means clustering technique and partitioning around medoids (PAM) are used to generate a binary partition of the pixel-intensity distribution, and the median (k-means) or the medoid (PAM) of the cluster members is chosen as the cluster representative. The effectiveness of the clustering-based segmentation techniques was tested on publicly available arrays generated in a lipid metabolism experiment (Callow et al., 2000). The results are compared against those obtained using the region-growing approach (SPOT) (Yang et al., 2001). The effect of additive white Gaussian noise is also investigated.
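A minimal sketch of the clustering-based extraction described above, assuming a synthetic grid cell and scikit-learn's KMeans in place of the paper's implementation: partition the pixel intensities into two clusters and report the median of the brighter cluster as the spot intensity.

```python
# Hedged sketch: binary k-means partition of a grid cell's intensities,
# with the cluster median as the spot-intensity estimate.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
background = rng.normal(50, 5, 700)            # local background pixels
foreground = rng.normal(200, 15, 300)          # spot (foreground) pixels
cell = np.concatenate([background, foreground]).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cell)
fg_cluster = int(np.argmax(km.cluster_centers_.ravel()))

# Median of the brighter cluster stands in for the target spot intensity.
spot_intensity = float(np.median(cell[km.labels_ == fg_cluster]))
print(f"estimated spot intensity: {spot_intensity:.1f}")
```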

3.
A fundamental step of microarray image analysis is the detection of the grid structure for accurately locating each spot, which represents the state of a given gene in a particular experimental condition. This step is known as gridding and belongs to the class of deformable grid matching problems, which are well known in the literature. Most available microarray gridding approaches require human intervention, for example to specify landmarks or some points in the spot grid, or even to precisely locate individual spots; automating this part of the process enables high-throughput analysis. This paper develops a fully automated procedure for microarray gridding, grounded in the Bayesian paradigm and in image analysis techniques. The procedure has two main steps: the first, based on the Radon transform, generates a grid hypothesis; the second accounts for local grid deformations. The accuracy and properties of the procedure are quantitatively assessed over a set of synthetic and real images, and the results are compared with well-known methods from the literature.
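To hint at how projections can expose the grid geometry, the sketch below sums a synthetic spot image along rows and columns (the 0° and 90° Radon slices) and picks peaks as grid lines. The 1-D projections, the synthetic image, and scipy's peak finder are simplifying assumptions relative to the paper's full Radon-based procedure.

```python
# Hedged sketch: find grid rows/columns from axis projections of a
# synthetic spot image (a simplified stand-in for the Radon-based step).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
yy, xx = np.mgrid[:120, :120]
img = rng.normal(10, 2, (120, 120))
for cy in range(15, 120, 30):              # synthetic 4x4 spot grid
    for cx in range(15, 120, 30):
        img += 100 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18)

# Column/row sums are the 90- and 0-degree Radon projections.
cols, _ = find_peaks(img.sum(axis=0), distance=15, prominence=500)
rows, _ = find_peaks(img.sum(axis=1), distance=15, prominence=500)
print("grid columns:", cols)   # expected near 15, 45, 75, 105
print("grid rows:   ", rows)
```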

4.
Lossless compression of color mosaic images poses a unique and interesting problem: spectral decorrelation of spatially interleaved R, G, B samples. We investigate reversible lossless spectral-spatial transforms that can remove statistical redundancies in both the spectral and spatial domains, and find that a particular wavelet decomposition scheme, the Mallat wavelet packet transform, is ideally suited to decorrelating color mosaic data. We also propose a low-complexity adaptive context-based Golomb-Rice coding technique to compress the coefficients of the Mallat wavelet packet transform. The lossless compression performance of the proposed method on color mosaic images is apparently the best so far among existing lossless image codecs.
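The Golomb-Rice step can be pictured with the following sketch: map signed residuals to non-negative integers, estimate the Rice parameter k from their mean, and emit a unary quotient plus a k-bit remainder per sample. This is a plain, non-adaptive Rice coder for illustration, not the paper's context-adaptive variant.

```python
# Hedged sketch: a plain Golomb-Rice coder for signed prediction residuals
# (the paper's coder is context-adaptive; this shows only the core idea).
import numpy as np

def zigzag(v):
    # Map signed integers to non-negative ones: 0,-1,1,-2,2 -> 0,1,2,3,4.
    return np.where(v >= 0, 2 * v, -2 * v - 1)

def rice_encode(values, k):
    bits = []
    for u in zigzag(values):
        q, r = divmod(int(u), 1 << k)
        bits.append("1" * q + "0")            # unary quotient + terminator
        if k:
            bits.append(format(r, f"0{k}b"))  # k-bit binary remainder
    return "".join(bits)

rng = np.random.default_rng(3)
residuals = rng.laplace(0, 4, 1000).round().astype(int)   # Laplacian-like
k = max(0, int(np.log2(max(1.0, float(zigzag(residuals).mean())))))
stream = rice_encode(residuals, k)
print(f"k = {k}, rate = {len(stream) / len(residuals):.2f} bits/residual")
```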

5.
An efficient approach for face compression is introduced. Restricting the family of images to frontal facial mug shots enables us to first geometrically deform a given face into a canonical form in which the same facial features are mapped to the same spatial locations. Next, we break the image into tiles and model each image tile in a compact manner. Modeling the tile content relies on clustering the same tile location across many training images. A tree of vector-quantization dictionaries is constructed per location, and lossy compression is achieved by allocating bits according to the significance of each tile. Repeating this modeling/coding scheme over several scales, the resulting multiscale algorithm is demonstrated to compress facial images at very low bit rates while maintaining high visual quality, significantly outperforming JPEG-2000.
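A toy version of the per-tile modeling, assuming aligned tiles and a flat scikit-learn KMeans codebook in place of the paper's tree of dictionaries: train a codebook for one tile location and code a new tile as a single codeword index.

```python
# Hedged sketch: a per-tile-location VQ codebook; train on many aligned
# tiles, then code a new tile as a single codeword index (lossy).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(15)
# 500 training "tiles" (8x8) from one canonical location, loosely correlated.
base = rng.random((8, 8))
train = (base + 0.15 * rng.standard_normal((500, 8, 8))).reshape(500, -1)

codebook = KMeans(n_clusters=16, n_init=4, random_state=0).fit(train)

new_tile = (base + 0.15 * rng.standard_normal((8, 8))).reshape(1, -1)
index = int(codebook.predict(new_tile)[0])            # 4 bits for this tile
recon = codebook.cluster_centers_[index]
mse = float(((new_tile - recon) ** 2).mean())
print(f"codeword index: {index} (4 bits), reconstruction MSE: {mse:.4f}")
```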

6.
We use an optimization technique to accurately locate a distorted grid structure in a microarray image. By assuming that spot centers deviate smoothly from a checkerboard grid structure, we show that gridding spot centers can be formulated as a constrained optimization problem, where the constraint limits the variation of the transform parameters. We demonstrate the accuracy of our algorithm on two sets of microarray images. One set consists of images from the Stanford Microarray Database; we compare our centers with those annotated in the database. The other set consists of oligonucleotide images, and we compare our results with those obtained by GenePix Pro 5.0. Our experiments were performed fully automatically.
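A toy rendition of gridding as optimization, with an unconstrained affine least-squares fit standing in for the paper's constrained formulation; the grid size, noise level, and affine transform model are all illustrative assumptions.

```python
# Hedged sketch: fit an affine grid-to-image transform by least squares,
# a toy stand-in for the constrained-optimization gridding above.
import numpy as np

rng = np.random.default_rng(4)
gy, gx = np.mgrid[0:8, 0:8].reshape(2, -1).astype(float)   # ideal 8x8 grid
ideal = np.stack([gx, gy, np.ones_like(gx)], axis=1)        # homogeneous coords

A_true = np.array([[21.0, 1.5, 40.0], [-1.2, 20.5, 35.0]])  # unknown distortion
observed = ideal @ A_true.T + rng.normal(0, 0.5, (64, 2))   # noisy spot centers

# Least-squares estimate of the 2x3 affine parameters.
A_hat, *_ = np.linalg.lstsq(ideal, observed, rcond=None)
print("estimated transform:\n", A_hat.T.round(2))
print("rms center error:", np.sqrt(((ideal @ A_hat - observed) ** 2).mean()).round(3))
```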

7.
Wavelet-domain approximation and compression of piecewise smooth images   (Total citations: 1; self-citations: 0; citations by others: 1)
The wavelet transform provides a sparse representation for smooth images, enabling efficient approximation and compression using techniques such as zerotrees. Unfortunately, this sparsity does not extend to piecewise smooth images, where edge discontinuities separating smooth regions persist along smooth contours. This lack of sparsity hampers the efficiency of wavelet-based approximation and compression. On the class of images containing smooth C² regions separated by edges along smooth C² contours, for example, the asymptotic rate-distortion (R-D) performance of zerotree-based wavelet coding is limited to D(R) ≲ 1/R, well below the optimal rate of 1/R². In this paper, we develop a geometric modeling framework for wavelets that addresses this shortcoming. The framework can be interpreted either as 1) an extension of the "zerotree model" for wavelet coefficients that explicitly accounts for edge structure at fine scales, or as 2) a new atomic representation that synthesizes images using a sparse combination of wavelets and wedgeprints, anisotropic atoms that are adapted to edge singularities. Our approach enables a new type of quadtree pruning for piecewise smooth images, using zerotrees in uniformly smooth regions and wedgeprints in regions containing geometry. Using this framework, we develop a prototype image coder with near-optimal asymptotic R-D performance D(R) ≲ (log R)²/R² for piecewise smooth C²/C² images. In addition, we extend the algorithm to compress natural images, exploring the practical problems that arise and attaining promising results in terms of mean-square error and visual quality.

8.
Increasingly automated techniques for arraying, immunostaining, and imaging tissue sections led us to design software for convenient management, display, and scoring. Demand for molecular marker data derived in situ from tissue has driven histology informatics automation to the point where one can envision the computer, rather than the microscope, as the primary viewing platform for histopathological scoring and diagnoses. Tissue microarrays (TMAs), with hundreds or even thousands of patients' tissue sections on each slide, were the first step in this wave of automation. Via TMAs, increasingly rapid identification of the molecular patterns of cancer that define distinct clinical outcome groups among patients has become possible. TMAs have moved the bottleneck of acquiring molecular pattern information away from sampling and processing the tissues to the tasks of scoring and results analyses. The need to read large numbers of new slides, primarily for research purposes, is driving continuing advances in commercially available automated microscopy instruments that already do or soon will automatically image hundreds of slides per day. We reviewed strategies for acquiring, collating, and storing histological images with the goal of streamlining subsequent data analyses. As a result of this work, we report an implementation of software for automated preprocessing, organization, storage, and display of high resolution composite TMA images.

9.
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), in which the foreground objects (chromosomes) and the background are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of the bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding, and its lossy performance is significantly better than coding each M-FISH image with JPEG-2000.
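The reversible front end rests on an integer wavelet transform; the sketch below implements one level of the lossless 5/3 lifting transform on a 1-D signal (with circular boundaries) and checks perfect reconstruction. The 1-D, single-level setting is a simplification of the transform used in EMIC.

```python
# Hedged sketch: one level of the reversible 5/3 integer lifting wavelet,
# the kind of critically sampled integer transform EMIC builds on.
import numpy as np

def lift53_forward(x):
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - ((even + np.roll(even, -1)) >> 1)   # predict: detail coeffs
    s = even + ((d + np.roll(d, 1) + 2) >> 2)     # update: approx coeffs
    return s, d

def lift53_inverse(s, d):
    even = s - ((d + np.roll(d, 1) + 2) >> 2)     # undo update
    odd = d + ((even + np.roll(even, -1)) >> 1)   # undo predict
    x = np.empty(even.size + odd.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.random.default_rng(5).integers(0, 256, 64)
s, d = lift53_forward(signal)
assert np.array_equal(lift53_inverse(s, d), signal)  # lossless round trip
print("approx coeffs:", s[:6], "...")
print("detail coeffs:", d[:6], "...")
```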

10.
Noise degrades the performance of any image compression algorithm. This paper studies the effect of noise on lossy image compression, considering Gaussian, Poisson, and film-grain noise. To reduce the effect of the noise on compression, the distortion is measured with respect to the original image rather than to the input of the coder. Results from noisy source coding are then used to design the optimal coder; in the minimum-mean-square-error (MMSE) sense, this is equivalent to an MMSE estimator followed by an MMSE coder. The coders for the Poisson-noise and film-grain-noise cases are derived and their performance is studied. The effect of this preprocessing step is also studied using standard coders such as JPEG. As is demonstrated, higher quality is achieved at lower bit rates.
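The estimator-plus-coder decomposition can be demonstrated numerically: the sketch below compares quantizing noisy Gaussian samples directly against Wiener-denoising them first, with distortion measured against the clean source. The Gaussian setup and the uniform quantizer are illustrative stand-ins for the paper's Poisson and film-grain cases.

```python
# Hedged sketch: with distortion measured against the clean original,
# MMSE-estimating first and then coding beats coding the noisy input.
import numpy as np

rng = np.random.default_rng(6)
clean = rng.normal(0, 10, 100_000)                 # source samples
noisy = clean + rng.normal(0, 5, clean.size)       # Gaussian observations

def quantize(x, step):
    return np.round(x / step) * step               # toy uniform "coder"

# Wiener/MMSE estimator for jointly Gaussian source and noise.
gain = 10.0**2 / (10.0**2 + 5.0**2)
denoised = gain * noisy

step = 4.0
mse_direct = float(np.mean((clean - quantize(noisy, step)) ** 2))
mse_est = float(np.mean((clean - quantize(denoised, step)) ** 2))
print(f"code noisy input directly: MSE = {mse_direct:.2f}")
print(f"MMSE estimate, then code:  MSE = {mse_est:.2f}")
```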

11.
Lossless compression of AVIRIS images   (Total citations: 7; self-citations: 0; citations by others: 7)
Adaptive DPCM methods using linear prediction are described for the lossless compression of hyperspectral (224-band) images recorded by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The methods have two stages: predictive decorrelation (which produces residuals) and residual encoding. Good predictors are described whose performance closely approaches the limits imposed by sensor noise; it is imperative that these predictors exploit the high spectral correlations between bands. The residuals are encoded using variable-length coding (VLC) methods, and compression is improved by using eight codebooks whose design depends on the sensor's noise characteristics. Rice (1979) coding has also been evaluated; it loses 0.02-0.05 b/pixel of compression compared with the better VLC methods but is much simpler and faster. Results for compressing ten AVIRIS images are reported.
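The predictive-decorrelation stage can be pictured as below: fit one band as a linear function of the previous band and compare the empirical entropy of the integer residuals with that of the raw band. The synthetic pair of correlated bands stands in for AVIRIS data, and the single-band predictor is far simpler than the paper's.

```python
# Hedged sketch: inter-band linear prediction (spectral DPCM); residual
# entropy falls well below raw-band entropy for correlated bands.
import numpy as np

def entropy_bits(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
band1 = rng.integers(500, 4000, 10_000).astype(float)
band2 = (0.92 * band1 + 120 + rng.normal(0, 6, band1.size)).round()

# Least-squares fit band2 ~ a * band1 + b, then integer residuals.
A = np.stack([band1, np.ones_like(band1)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, band2, rcond=None)
residual = (band2 - np.round(a * band1 + b)).astype(int)

print(f"raw band entropy: {entropy_bits(band2.astype(int)):.2f} b/pixel")
print(f"residual entropy: {entropy_bits(residual):.2f} b/pixel")
```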

12.
Lossless compression of continuous-tone images   (Total citations: 3; self-citations: 0; citations by others: 3)
In this paper, we survey some of the recent advances in the lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.

13.
Methods for reversible coding can be classified according to the organization of the source model as either static, semi-adaptive, or adaptive. Magnetic resonance (MR) images have different statistical characteristics in the foreground and the background, and separation is thus a promising path for reversible MR image compression. A new reversible compression method, based on separate static source models for the foreground and the background, is presented. The method is nonuniversal and uses contextual information to exploit the fact that entropy and bit rate are reduced by increasing the statistical order of the model. This paper establishes a realistic level of expectation regarding the bit rate in reversible MR image compression, in general, and the bit rate using static modeling, in particular. The experimental results show that compression using the new method can give bit rates comparable to the best existing reversible methods.
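The value of separate foreground/background models can be checked with a small sketch: when the two regions have different statistics, the weighted sum of per-region entropies falls below the entropy of the pooled pixels. The synthetic intensities are illustrative stand-ins for MR data.

```python
# Hedged sketch: separate foreground/background models lower the total
# bit rate versus one pooled model when region statistics differ.
import numpy as np

def entropy_bits(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(10)
background = rng.normal(20, 3, 60_000).round().astype(int)    # dark, narrow
foreground = rng.normal(140, 25, 40_000).round().astype(int)  # tissue, wide

pooled = np.concatenate([background, foreground])
w_bg = background.size / pooled.size
h_pooled = entropy_bits(pooled)
h_split = w_bg * entropy_bits(background) + (1 - w_bg) * entropy_bits(foreground)
print(f"single pooled model: {h_pooled:.2f} b/pixel")
print(f"split fg/bg models:  {h_split:.2f} b/pixel (plus a small mask cost)")
```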

14.
Recently, several efficient context-based arithmetic coding algorithms have been developed for the lossless compression of error-diffused images. In this paper, we first present a novel block- and texture-based approach that trains multiple templates according to the most representative texture features. Based on the trained templates, we then present an efficient texture- and multiple-template-based (TM-based) algorithm for lossless compression of error-diffused images: the input image is divided into blocks and, for each block, the best template is adaptively selected from the multiple templates based on the texture feature of that block. On 20 test error-diffused images, using a personal computer with an Intel Celeron 2.8-GHz CPU, experimental results demonstrate that, at the cost of a small increase in encoding time (0.365 s and 0.901 s on average, respectively), the compression improvement ratio of the proposed TM-based algorithm is 24% over the joint bilevel image group (JBIG) standard and 19.4% over the earlier block arithmetic coding for image compression (BACIC) algorithm proposed by Reavy and Boncelet. Under the same conditions, the compression improvement ratio over the earlier algorithm by Lee and Park is 17.6%, again with only a small increase in encoding time (0.775 s on average). In addition, the previous free-tree-based algorithm requires 109.131 s of encoding time on average, whereas the proposed algorithm takes 0.995 s, and the average compression ratio of the TM-based algorithm, 1.60, is competitive with that of the free-tree-based algorithm, 1.62.
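The block-adaptive template selection can be sketched as follows: for a bilevel block, evaluate candidate causal context templates and keep the one minimizing the conditional entropy of the current pixel. The tiny two-pixel templates and the striped toy block are illustrative; real templates (JBIG-style) are much larger.

```python
# Hedged sketch: per-block template selection for a bilevel image:
# keep the causal context template with the lowest conditional entropy.
import numpy as np
from collections import Counter

def H(counts):
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def cond_entropy(img, template):
    ctx, joint = Counter(), Counter()
    h, w = img.shape
    for y in range(2, h):
        for x in range(2, w):
            c = tuple(int(img[y + dy, x + dx]) for dy, dx in template)
            ctx[c] += 1
            joint[c + (int(img[y, x]),)] += 1
    return H(joint) - H(ctx)       # H(pixel | context)

rng = np.random.default_rng(9)
# Toy halftone-like block: horizontal stripes with sparse noise flips.
block = ((np.arange(64)[:, None] // 4 % 2) ^ (rng.random((64, 64)) < 0.05)).astype(int)

templates = {                       # tiny causal templates, far from JBIG's
    "two-left":  [(0, -1), (0, -2)],
    "two-above": [(-1, 0), (-2, 0)],
}
for name, t in templates.items():
    print(f"{name}: H(pixel|context) = {cond_entropy(block, t):.3f} bits")
```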

15.
In this paper we focus on lossy compression of Atmospheric Infrared Sounder (AIRS) images, which comprise around 40 MB of data distributed over more than two thousand spectral bands. We present a novel architecture that integrates preprocessing and compression stages to provide efficient lossy compression. As part of preprocessing, the spectral bands are normalized and reordered such that the bands of the transformed cube are spatially segmented and scanned to generate a one-dimensional signal. This signal is then modeled as an autoregressive process and subjected to linear prediction (LP), for which a valid filter order is obtained by analyzing the prediction gain of the filter. The outcome of this procedure is a set of LP coefficients and an error signal that, when encoded, yields a compressed version of the original image. The performance of this architecture is justified mathematically by means of rate-distortion analysis and compared against other well-known compression techniques.
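The order-selection step can be sketched by fitting autoregressive predictors of increasing order with least squares and watching the prediction gain saturate; the synthetic AR(2) signal and the lstsq fit are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch: pick the LP filter order where prediction gain saturates,
# mirroring the order-selection step described above.
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
x = np.zeros(n)
for t in range(2, n):                         # synthetic AR(2) scan signal
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal()

for order in (1, 2, 3, 4):
    # Lagged design matrix; least-squares AR coefficients.
    X = np.stack([x[order - k - 1 : n - k - 1] for k in range(order)], axis=1)
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    gain_db = 10 * np.log10(np.var(x) / np.var(y - X @ coeffs))
    print(f"order {order}: prediction gain = {gain_db:5.2f} dB")
```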

16.
陶长武 (Tao Changwu), 蔡自兴 (Cai Zixing). 《信息技术》 (Information Technology), 2007, 31(12): 53-56
This paper explains the basic principles of image compression coding, systematically introduces several promising modern image coding methods and their characteristics, and concludes with a summary and outlook, pointing out that studying image coding from the perspective of image models will become the research direction for the next generation of image coding.

17.
This paper describes the compression of grayscale medical ultrasound images using a recent compression technique, space-frequency segmentation (SFS). This method finds the rate-distortion optimal representation of an image from a large set of possible space-frequency partitions and quantizer combinations, and is especially effective when the images to be coded are statistically inhomogeneous, as is the case for medical ultrasound images. We implemented a compression application based on this method and tested the algorithm on representative ultrasound images. The result is an effective technique that performs better than a leading wavelet-transform coding algorithm, set partitioning in hierarchical trees (SPIHT), under standard objective distortion measures. To determine subjective qualitative performance, an expert viewer study was run by presenting ultrasound radiologists with images compressed using both SFS and SPIHT; the results confirmed the objective performance rankings. Finally, the performance sensitivity of the space-frequency codec is shown with respect to several parameters, and the characteristic space-frequency partitions found for ultrasound images are discussed.

18.
Fast fractal compression of greyscale images   (Total citations: 1; self-citations: 0; citations by others: 1)
A new algorithm for fractal compression of greyscale images is presented. It uses previous results that allow the compression process to be reduced to a nearest-neighbor problem, and is essentially based on a geometric partition of the image-block feature space. Experimental comparisons with previously published methods show a significant improvement in speed with no loss of quality.
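The core speed-up, reducing domain-range matching to a nearest-neighbor query in a block feature space, might look like the sketch below: normalize blocks for the mean and scale that a fractal affine map absorbs, then query a KD-tree. The feature choice and scipy's cKDTree are illustrative, not the paper's exact partition scheme.

```python
# Hedged sketch: fractal-style domain/range matching as a nearest-neighbor
# query over normalized block features, instead of exhaustive search.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(12)
img = rng.random((64, 64))

def blocks(im, size, step):
    out = []
    for y in range(0, im.shape[0] - size + 1, step):
        for x in range(0, im.shape[1] - size + 1, step):
            out.append(im[y : y + size, x : x + size])
    return np.array(out)

def normalize(b):
    v = b.reshape(len(b), -1)
    v = v - v.mean(axis=1, keepdims=True)        # affine offset is free
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.maximum(norms, 1e-12)          # affine scale is free

domains = blocks(img, 16, 8)
# Average 2x2 so 16x16 domain blocks compare with 8x8 range blocks.
domains8 = domains.reshape(-1, 8, 2, 8, 2).mean(axis=(2, 4))
ranges = blocks(img, 8, 8)

tree = cKDTree(normalize(domains8))
dist, idx = tree.query(normalize(ranges))
print(f"matched {len(ranges)} range blocks; mean feature distance {dist.mean():.3f}")
```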

19.
An adaptive image-coding algorithm for the compression of medical ultrasound (US) images in the wavelet domain is presented. First, it is shown that the histograms of the wavelet coefficients of the subbands in US images are heavy-tailed and are better modelled by the generalised Student's t-distribution. Then, by exploiting these statistics, an adaptive image coder named JTQVS-WV is designed, which unifies the two approaches to image-adaptive coding, rate-distortion (R-D) optimised quantiser selection and R-D optimal thresholding, and is based on a varying-slope quantisation strategy. Using a varying R-D slope (instead of a fixed one) allows the wavelet coefficients across various scales to be coded according to their importance for the quality of the reconstructed image. The experimental results show that the varying-slope quantisation strategy leads to a significant improvement in the compression performance of JTQVS-WV over the state-of-the-art image coders SPIHT and JPEG2000, and over the fixed-slope variant of JTQVS-WV, named JTQ-WV. For example, coding US images at 0.5 bpp yields a peak signal-to-noise ratio gain of >0.6, 3.86, and 0.3 dB over the benchmark SPIHT, JPEG2000, and JTQ-WV, respectively.
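R-D optimised quantiser selection reduces, per subband, to minimizing the Lagrangian cost D + λR; the sketch below does this for one heavy-tailed coefficient band over a few uniform step sizes. The fixed λ is a simplification of the paper's varying-slope strategy, and all values are synthetic.

```python
# Hedged sketch: Lagrangian (R-D slope) quantizer selection for one band:
# choose the step size minimizing J = D + lambda * R.
import numpy as np

def entropy_bits(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(13)
coeffs = rng.laplace(0, 8, 50_000)          # heavy-tailed wavelet-like band

lam = 30.0                                   # R-D slope: 1 bit "costs" 30 MSE
best = None
for step in (1.0, 2.0, 4.0, 8.0, 16.0):
    q = np.round(coeffs / step)
    rate = entropy_bits(q.astype(int))                # bits per coefficient
    dist = float(np.mean((coeffs - q * step) ** 2))   # mean squared error
    cost = dist + lam * rate
    print(f"step {step:5.1f}: R={rate:.2f} b, D={dist:7.2f}, J={cost:7.2f}")
    if best is None or cost < best[0]:
        best = (cost, step)
print("selected step:", best[1])
```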

20.
Reversible intraframe compression of medical images   (Total citations: 3; self-citations: 0; citations by others: 3)
The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures to within a few percent, coding has not been investigated separately. A hierarchical decorrelation method based on interpolation (HINT) turns out to outperform all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably lower for MR images, whose noise level is substantially higher.
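HINT's interpolation-based decorrelation can be pictured in 1-D, a simplification of the 2-D hierarchy: keep a coarse subsampling, predict the skipped samples from their neighbors, and store integer residuals whose entropy is far below that of the raw samples.

```python
# Hedged sketch: 1-D hierarchical interpolation (HINT-style) decorrelation:
# predict midpoints from the coarser level, keep integer residuals.
import numpy as np

def entropy_bits(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(14)
t = np.linspace(0, 6 * np.pi, 1025)
line = np.clip(128 + 90 * np.sin(t) + rng.normal(0, 2, t.size), 0, 255)
line = line.round().astype(int)            # smooth 8-bit "scanline"

residuals, x = [], line
while x.size > 2:
    coarse = x[0::2]
    predicted = (coarse[:-1] + coarse[1:] + 1) // 2   # interpolate midpoints
    residuals.append(x[1::2] - predicted)
    x = coarse                                        # recurse on coarse level

res = np.concatenate(residuals)
print(f"raw samples:    {entropy_bits(line):.2f} b/sample")
print(f"HINT residuals: {entropy_bits(res):.2f} b/sample (+ tiny coarse level)")
```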
