Similar Documents
20 similar documents found.
1.
Proposes a Bayesian method whereby maximum a posteriori (MAP) estimates of functional (PET and SPECT) images may be reconstructed with the aid of prior information derived from registered anatomical MR images of the same slice. The prior information consists of significant anatomical boundaries that are likely to correspond to discontinuities in an otherwise spatially smooth radionuclide distribution. The authors' algorithm, like others proposed recently, seeks smooth solutions with occasional discontinuities; the contribution here is the inclusion of a coupling term that influences the creation of discontinuities in the vicinity of the significant anatomical boundaries. Simulations on anatomically derived mathematical phantoms are presented. Although the current implementation is computationally intense, the reconstructions are improved (in ROI-RMS error) relative to filtered backprojection and EM-ML reconstructions. The simulations show that the inclusion of position-dependent anatomical prior information leads to further improvement relative to Bayesian reconstructions without the anatomical prior. The algorithm exhibits a certain degree of robustness with respect to errors in the location of anatomical boundaries.
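A minimal sketch of the boundary-coupled MAP idea: a one-step-late (OSL) MAP-EM update in which a quadratic smoothness penalty is switched off across pixels flagged by a registered anatomical boundary map. The OSL form and the simple gated penalty are stand-ins for the paper's coupled line-site prior, and `A`, `y`, and `boundary` are hypothetical inputs.

```python
import numpy as np

def osl_map_em_step(lam, y, A, boundary, beta=0.1):
    """One one-step-late MAP-EM update for Poisson data with a quadratic
    smoothness prior whose neighbor coupling is disabled wherever the
    anatomical boundary map flags a likely discontinuity.

    lam      : current activity estimate, shape (n,) (1-D profile for brevity)
    y        : measured Poisson counts, shape (m,)
    A        : system (projection) matrix, shape (m, n)
    boundary : 0/1 array, shape (n,), 1 near a significant anatomical edge
    """
    denom = A @ lam + 1e-12
    em_ratio = A.T @ (y / denom)          # EM backprojected ratio
    sens = A.sum(axis=0) + 1e-12          # sensitivity term

    grad_prior = np.zeros_like(lam)
    for shift in (1, -1):                 # left and right neighbors
        nb = np.roll(lam, shift)          # circular ends, for brevity
        w = 1.0 - np.maximum(boundary, np.roll(boundary, shift))
        grad_prior += w * (lam - nb)      # gradient of the gated Gibbs energy

    # OSL: the prior gradient is evaluated at the current estimate
    return lam * em_ratio / (sens + beta * grad_prior)
```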

2.
3.
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training to fit an HMT model to a given data set (e.g., using the expectation-maximization algorithm). We greatly simplify the HMT model by exploiting the inherent self-similarity of real-world images. The simplified model specifies the HMT parameters with just nine meta-parameters (independent of the size of the image and the number of wavelet scales). We also introduce a Bayesian universal HMT (uHMT) that fixes these nine parameters and requires no training of any kind. While extremely simple, these new models retain nearly all of the key image structure modeled by the full HMT, as we show using a series of image estimation/denoising experiments. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms other wavelet-based estimators in the current literature, both visually and in mean square error.
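A hedged sketch of the parameter-tying idea: a two-state Gaussian-mixture shrinkage of Haar wavelet coefficients whose state variances decay geometrically across scale, so a handful of constants covers every scale. This mimics the flavor of the uHMT's nine tied meta-parameters, not the authors' exact model; the constants `c_small`, `c_large`, `alpha`, and `p_large` are illustrative.

```python
import numpy as np

def haar_dwt(x, levels):
    """Simple 1-D orthonormal Haar DWT (len(x) must be divisible by 2**levels)."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        coeffs.append(d)                      # coeffs[0] = finest scale
        approx = a
    return approx, coeffs

def haar_idwt(approx, coeffs):
    a = approx
    for d in reversed(coeffs):                # coarsest back to finest
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def uhmt_like_denoise(x, sigma_n, levels=4,
                      c_small=0.5, c_large=10.0, alpha=2.0, p_large=0.1):
    """Two-state Gaussian-mixture shrinkage with state variances tied across
    scale by a geometric decay -- illustrative constants, not the paper's."""
    approx, coeffs = haar_dwt(x, levels)
    out = []
    for j, d in enumerate(coeffs):            # j = 0 is the finest scale
        scale = 2.0 ** (-alpha * (levels - 1 - j))
        var_s = np.array([c_small, c_large]) * scale * sigma_n ** 2
        prior = np.array([1 - p_large, p_large])
        tot = var_s + sigma_n ** 2
        # posterior probability of each hidden state given the noisy coefficient
        lik = prior / np.sqrt(tot) * np.exp(-d[:, None] ** 2 / (2 * tot))
        post = lik / lik.sum(axis=1, keepdims=True)
        shrink = (post * (var_s / tot)).sum(axis=1)   # E[clean | noisy] factor
        out.append(shrink * d)
    return haar_idwt(approx, out)
```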

4.
Bayesian compressive sensing (BCS) plays an important role in signal processing for dealing with sparse-representation problems. BCS casts the compressed sensing (CS) problem, including both the signal recovery and the model parameters, in a hierarchical Bayesian framework. Gaussian and Laplace distribution priors on the basis coefficients have already been studied in previous work; however, these two priors do not encode the sparsity of unknown signals as effectively as possible. In this paper, a reweighted Laplace distribution prior is proposed for the hierarchical Bayesian model to fully exploit the sparsity of unknown signals. The proposed algorithm automatically estimates all the coefficients of the unknown signal, and the model parameters are obtained solely from the observations by a fast greedy algorithm that solves the Bayesian maximum a posteriori and type-II maximum likelihood problems. The sparsity properties of the proposed model are analyzed theoretically and compared with those of the Laplace prior model. Moreover, numerical experiments demonstrate that the proposed algorithm reconstructs unknown sparse signals with low computational burden as well as high accuracy.
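A reweighted Laplace prior induces an iteratively reweighted l1 MAP problem. The sketch below solves it with a plain ISTA inner loop rather than the paper's fast greedy algorithm; `lam`, `eps`, and the iteration counts are illustrative choices.

```python
import numpy as np

def reweighted_l1_ista(A, y, n_outer=5, n_inner=100, lam=0.1, eps=1e-3):
    """Sparse recovery with an iteratively reweighted l1 penalty, the MAP
    problem a reweighted Laplace prior induces. A plain ISTA inner solver
    stands in for the paper's fast greedy algorithm."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_outer):
        w = 1.0 / (np.abs(x) + eps)        # per-coefficient reweighting
        for _ in range(n_inner):
            g = A.T @ (A @ x - y)
            z = x - g / L
            thr = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # soft threshold
    return x

# toy usage: recover a 10-sparse signal from 60 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x0 = np.zeros(200)
x0[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
y = A @ x0 + 0.01 * rng.standard_normal(60)
x_hat = reweighted_l1_ista(A, y)
```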

5.
In this paper, a Bayesian wavelet-based denoising procedure for multicomponent images is proposed. A denoising procedure is constructed that (1) fully accounts for the multicomponent image covariances, (2) makes use of Gaussian scale mixtures as prior models that approximate the marginal distributions of the wavelet coefficients well, and (3) makes use of a noise-free image as extra prior information. It is shown that such prior information is available with specific multicomponent image data of, e.g., remote sensing and biomedical imaging. Experiments are conducted in these two domains, in both simulated and real noisy conditions.

6.
This paper presents a new method for segmentation of medical images by extracting organ contours, using minimal path deformable models incorporated with statistical shape priors. In our approach, boundaries of structures are considered as minimal paths, i.e., paths associated with the minimal energy, on weighted graphs. Starting from the theory of minimal path deformable models, an intelligent "worm" algorithm is proposed for segmentation, which is used to evaluate the paths and finally find the minimal path. Prior shape knowledge is incorporated into the segmentation process to achieve more robust segmentation. The shape priors are implicitly represented and the estimated shapes of the structures can be conveniently obtained. The worm evolves under the joint influence of the image features, its internal energy, and the shape priors. The contour of the structure is then extracted as the worm trail. The proposed segmentation framework overcomes the shortcomings of existing deformable models and has been successfully applied to segmenting various medical images.
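The minimal-path core can be illustrated with Dijkstra's algorithm on a pixel graph whose weights are low along strong image edges, so the cheapest path between two seed points traces the organ contour. This sketch omits the paper's "worm" evolution, internal energy, and shape priors; the cost construction is an assumption.

```python
import heapq
import numpy as np

def minimal_path(cost, start, goal):
    """Dijkstra shortest path on a 4-connected pixel grid. `cost` is a 2-D
    array of positive per-pixel weights (e.g., low on strong edges so the
    path hugs the organ boundary). Returns the (row, col) pixels on the path."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:                 # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], goal                  # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```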

7.
A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model. For the M-step of the algorithm, a form of coordinate gradient ascent is derived. The algorithm reduces to the EM maximum-likelihood algorithm as the Markov random-field prior tends towards a uniform distribution. Three different Gibbs function priors are examined. Reconstructions of 3-D images obtained from the Poisson model of single-photon-emission computed tomography are presented.

8.
Mixture models with higher order moments
The authors present a novel method for mixed pixel classification where the classification of groups of mixed pixels is achieved by taking into consideration the higher order moments of the distributions of the pure and the mixed classes. The equations expressing the relationship between the higher order moments are used to augment the set of equations that express the relationship between the means only. The authors show that weighting these equations does not make the available set of equations less reliable. As a consequence, the number of equations can be increased and thus more classes than available bands can be identified. The method is exhaustively tested using simulated data and is also applied to real Landsat TM data for which ground data are available.
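A hedged sketch of moment-augmented unmixing: the linear mean equations are stacked with second-moment equations (under an independence assumption between classes), and bounded least squares recovers the proportions. The paper's weighting of the equations and its specific choice of higher orders are not reproduced; all names here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def unmix_with_moments(pixel_mean, pixel_var, class_means, class_vars):
    """Estimate class proportions f for a group of mixed pixels by solving
    the mean equations augmented with second-moment equations:
        mean_b = sum_i f_i    * mu_ib    (means)
        var_b  = sum_i f_i**2 * sig2_ib  (second central moments, assuming
                                          independent class distributions)
    class_means, class_vars: (n_classes, n_bands). Further moment orders can
    be appended the same way, which is how more classes than bands become
    identifiable."""
    n_classes = class_means.shape[0]

    def residuals(f):
        r_mean = pixel_mean - f @ class_means
        r_var = pixel_var - (f ** 2) @ class_vars
        r_sum = np.array([f.sum() - 1.0])      # proportions sum to one
        return np.concatenate([r_mean, r_var, r_sum])

    f0 = np.full(n_classes, 1.0 / n_classes)
    sol = least_squares(residuals, f0, bounds=(0.0, 1.0))
    return sol.x
```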

9.
In this study, a robust and efficient image dehazing technique based on the atmospheric scattering model is proposed, which effectively overcomes the limitations of a single prior condition. It is composed of a transmission estimation module and an atmospheric light estimation module. The transmission estimation module integrates multiple dehazing priors to optimise the transmission estimate and extend its range of applicability. The atmospheric light estimation module uses the fuzzy C-means clustering algorithm (FCM) to estimate the atmospheric light of different scenes in an image. Unlike in previous work, the atmospheric light in this module is not a global value; a pixel-level atmospheric light matrix is obtained. Numerous experiments show that the proposed dehazing algorithm is superior to state-of-the-art methods.
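For orientation, a baseline sketch of recovery under the atmospheric scattering model I = J·t + A·(1 − t), using the classic dark-channel prior for t and a single global A. The paper instead fuses several priors and replaces the global A with a per-pixel FCM-clustered estimate, which is not reproduced here; `omega`, `patch`, and `t_min` are conventional illustrative values.

```python
import numpy as np

def dehaze_dark_channel(img, omega=0.95, patch=15, t_min=0.1):
    """Baseline dehazing for an RGB float image in [0, 1] using the
    atmospheric scattering model I = J * t + A * (1 - t)."""
    h, w, _ = img.shape
    # dark channel: min over color channels, then min over a local patch
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    dc = np.empty_like(dark)
    for r in range(h):
        for c in range(w):
            dc[r, c] = padded[r:r + patch, c:c + patch].min()
    # global atmospheric light: mean color of the ~0.1% haziest pixels
    idx = np.unravel_index(
        np.argsort(dc, axis=None)[-max(1, h * w // 1000):], dc.shape)
    A = img[idx].mean(axis=0)
    # transmission from the dark channel of the normalized image
    # (patch minimum omitted in this step for brevity)
    t = 1.0 - omega * (img / A).min(axis=2)
    t = np.maximum(t, t_min)
    return (img - A) / t[..., None] + A
```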

10.
We consider the problem of system reconstruction from higher order spectra (HOS) slices. We establish that the impulse response of a complex system can be reconstructed up to a scalar and a shift based on any pair of HOS slices, as long as the distance between the two slices satisfies a certain condition. One slice is sufficient for the reconstruction in the case of a real system. We propose a cepstrum-based method for system reconstruction. We also propose a new method for the reconstruction of the system Fourier phase based on the phase of any odd-indexed bispectrum slice. Being able to choose the slices to be used in the reconstruction allows us to avoid bispectrum regions dominated by noise.
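A sketch of the object being manipulated: a direct (segment-averaged FFT) estimate of a single bispectrum slice, B(k1, k2) = E[X(k1) X(k2) X*(k1 + k2)] with the bin index k2 held fixed. The cepstrum-based reconstruction itself is beyond this snippet; `nfft` and `seg` are illustrative.

```python
import numpy as np

def bispectrum_slice(x, k2, nfft=128, seg=128):
    """Direct estimate of one bispectrum slice B(k1, k2), k2 fixed,
    averaged over non-overlapping segments of the record x:
        B(k1, k2) = E[ X(k1) X(k2) conj(X(k1 + k2)) ]."""
    n_segs = len(x) // seg
    B = np.zeros(nfft, dtype=complex)
    k1 = np.arange(nfft)
    for i in range(n_segs):
        s = x[i * seg:(i + 1) * seg]
        X = np.fft.fft(s - s.mean(), nfft)     # remove the mean per segment
        B += X[k1] * X[k2] * np.conj(X[(k1 + k2) % nfft])
    return B / n_segs
```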

11.
An artificial neural network for SPECT image reconstruction
An artificial neural network has been developed to reconstruct quantitative single photon emission computed tomographic (SPECT) images. The network is trained with an ideal projection-image pair to learn a shift-invariant weighting (filter) for the projections. Once trained, the network produces weighted projections as a hidden layer when acquired projection data are presented to its input. This hidden layer is then backprojected to form an image as the network output. The learning algorithm adjusts the weighting coefficients using a backpropagation algorithm which minimizes the mean squared error between the ideal training image and the reconstructed training image. The response of the trained network to an impulse projection resembles the ramp filter typically used with backprojection, and reconstructed images are similar to filtered backprojection images.

12.
A novel approach is proposed for blindly estimating the kernels of any discrete- and finite-extent quadratic model in the higher-order cumulant domain using artificial neural networks. The input signal is assumed to be an unobservable independent, identically distributed (i.i.d.) random sequence, an assumption that is viable in engineering practice. Because of the properties of third-order cumulant functions, identifiability of the nonlinear model holds even when the model output measurement is corrupted by a Gaussian random disturbance. The proposed approach establishes a nonlinear relationship between the model kernels and the model output cumulants by means of neural networks. The approximation ability of a neural network trained with the weights-decoupled extended Kalman filter algorithm is then used to estimate the model parameters. Theoretical statements and simulation examples, together with a practical application to the modeling of train vibration signals, corroborate that the developed methodology provides a very promising way to identify truncated Volterra models blindly.
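The statistics everything is built on can be estimated directly from data. A minimal sketch of a sample third-order cumulant, which vanishes for Gaussian disturbances and is therefore what preserves identifiability:

```python
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Sample estimate of the third-order cumulant of a zero-mean sequence,
        C3(tau1, tau2) = E[ x(n) x(n + tau1) x(n + tau2) ],
    averaged over the lags' common valid range."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    lo = max(0, -tau1, -tau2)              # keep all three indices in range
    hi = min(n, n - tau1, n - tau2)
    idx = np.arange(lo, hi)
    return np.mean(x[idx] * x[idx + tau1] * x[idx + tau2])
```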

13.
Considers the problem of estimating the parameters of a stable, scalar ARMA(p, q) signal model (causal or noncausal, minimum phase or mixed phase) driven by an i.i.d. non-Gaussian sequence. The driving noise sequence is not observed. The Wiggins-Donoho (1978, 1991) class of inverse filter criteria for the estimation of model parameters is analyzed and extended. These criteria have been considered in the past only for moving-average inverse filters; here they are extended to general ARMA inverses. Computer simulation examples are presented to illustrate the proposed approaches.
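A hedged sketch of the inverse-filter idea: adapt an FIR equalizer to maximize the normalized fourth moment of its output, which is the basic shape of the Wiggins/Donoho criteria for a super-Gaussian driving sequence. For brevity the gradient is numerical, the inverse is a plain moving-average filter rather than the paper's ARMA inverses, and the step sizes are illustrative.

```python
import numpy as np

def kurtosis_inverse_filter(x, order=16, iters=200, step=0.1):
    """Blind inverse filtering by gradient ascent on the normalized fourth
    moment E[y^4] / E[y^2]^2 of the output y = w * x (MA inverse only)."""
    rng = np.random.default_rng(1)
    w = rng.standard_normal(order) * 0.01
    w[order // 2] = 1.0                      # center-spike initialization

    def criterion(w):
        y = np.convolve(x, w, mode="valid")
        return np.mean(y ** 4) / np.mean(y ** 2) ** 2

    for _ in range(iters):
        base = criterion(w)
        grad = np.zeros(order)
        for k in range(order):               # numerical gradient (sketch only)
            wp = w.copy()
            wp[k] += 1e-5
            grad[k] = (criterion(wp) - base) / 1e-5
        w += step * grad
        w /= np.linalg.norm(w)               # fix the inherent scale ambiguity
    return w
```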

14.
Exploiting the residual redundancy in a source coder's output stream during the decoding process has been proven to be a bandwidth-efficient way to combat noisy-channel degradations. This redundancy can be employed either to assist the channel decoder for improved performance or to design better source decoders. In this work, a family of solutions for the asymptotically optimum minimum mean-squared error (MMSE) reconstruction of a source over memoryless noisy channels is presented for the case where the redundancy in the source encoder's output stream is exploited in the form of a γ-order Markov model (γ ≥ 1) and a delay of δ, δ > 0, is allowed in the decoding process. It is demonstrated that the proposed solutions provide a wealth of tradeoffs between computational complexity and memory requirements. A simplified MMSE decoder optimized to minimize the computational complexity is also presented. Considering the same problem setup, several other maximum a posteriori probability (MAP) symbol and sequence decoders are presented as well. Numerical results are presented that demonstrate the efficiency of the proposed algorithms.
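A minimal sketch of delay-δ MMSE decoding for the simplest case γ = 1: a forward recursion plus a δ-step partial backward message yields the posterior mean of each symbol given all observations up to δ steps ahead. The source and channel models (`obs_lik`, `P`, `pi`, `values`) are hypothetical inputs, not the paper's setup.

```python
import numpy as np

def mmse_fixed_lag(obs_lik, P, pi, values, delta):
    """Fixed-lag (delay-delta) MMSE symbol estimates for a first-order
    Markov source observed through a memoryless noisy channel.

    obs_lik[t, s] = p(r_t | s_t = s);  P[s, s'] = transition probabilities;
    pi = prior over the initial state;  values[s] = reconstruction level.
    Returns E[ x_t | r_0 .. r_{t+delta} ] for every t it can be formed."""
    T, S = obs_lik.shape
    alpha = np.zeros((T, S))                 # normalized forward messages
    alpha[0] = pi * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ P) * obs_lik[t]
        alpha[t] /= alpha[t].sum()
    est = np.zeros(T - delta)
    for t in range(T - delta):
        beta = np.ones(S)                    # carries the next delta observations
        for u in range(t + delta, t, -1):
            beta = P @ (obs_lik[u] * beta)
        post = alpha[t] * beta
        post /= post.sum()
        est[t] = post @ values               # posterior mean = MMSE estimate
    return est
```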

15.
An image reconstruction problem motivated by X-ray fiber diffraction analysis is considered. The experimental data are sums of the squares of the amplitudes of particular sets of Fourier coefficients of the electron density, and a part of the electron density is known. The image reconstruction problem is to estimate the unknown part of the electron density, the "image." A Bayesian approach is taken in which a prior model for the image is based on the fact that it consists of atoms, i.e., the unknown electron density consists of separated, sharp peaks. Currently used heuristic methods are shown to correspond to certain maximum a posteriori estimates of the Fourier coefficients. An analytical solution for the Bayesian minimum mean-square-error estimate is derived. Simulations show that the minimum mean-square-error estimate gives good results, even when there is considerable data loss, and outperforms the maximum a posteriori estimates.

16.
17.
We consider the estimation of the unknown parameters for the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors. We derive mathematical expressions for the iterative calculation of the maximum likelihood estimate of the unknown parameters given the low-resolution observed images. These iterative procedures require the manipulation of block semi-circulant (BSC) matrices, that is, block matrices with circulant blocks. We show how these BSC matrices can be easily manipulated in order to calculate the unknown parameters. Finally, the proposed method is tested on real and synthetic images.
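The reason such circulant-structured matrices are easy to manipulate is that the DFT diagonalizes them, so the products and inverses the ML iterations need become pointwise frequency-domain operations. A minimal sketch with a circularly wrapped blur as the operator; the 3×3 kernel and the regularization constant are illustrative.

```python
import numpy as np

# Circulant-blocked operators are diagonalized by the 2-D DFT, so applying
# and inverting them reduces to pointwise work on FFT coefficients.
rng = np.random.default_rng(0)
h = np.zeros((64, 64))
h[:3, :3] = 1.0 / 9.0                    # 3x3 mean blur, circularly wrapped
x = rng.standard_normal((64, 64))

H = np.fft.fft2(h)                       # eigenvalues of the circulant operator
y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))   # O(N log N) product H @ x

# regularized inverse (H^* H + eps I)^(-1) H^* y, again pointwise
eps = 1e-3
x_rec = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(y) /
                             (np.abs(H) ** 2 + eps)))
```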

18.
A hierarchical Bayes approach to reliability estimation for the exponential model with an unknown scale parameter, based on life tests terminated after a preassigned number of failures, is considered under the assumptions of squared-error loss and Erlang distributions as the prior and hyperprior. Bayesian estimation of reliability for the case of 'attribute testing' is also discussed.
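For the single-stage (non-hierarchical) version the estimator has a closed form, which shows its shape: with exponential lifetimes, failure rate λ ~ Gamma(a, rate b) (Erlang when a is an integer), and a test stopped at the r-th failure with total time on test T, the posterior is Gamma(a + r, rate b + T), so the squared-error-loss Bayes estimate of R(t) = e^(−λt) is the posterior mean ((b + T)/(b + T + t))^(a + r). The paper's extra hyperprior layer on the scale is not reproduced here; a sketch:

```python
def bayes_reliability(t, r, total_time, a, b):
    """Posterior-mean (squared-error-loss Bayes) estimate of reliability
    R(t) = exp(-lambda * t) for exponential lifetimes with prior
    lambda ~ Gamma(a, rate=b), after a type-II censored test with r
    failures and `total_time` on test.

    Posterior: lambda | data ~ Gamma(a + r, rate = b + total_time), hence
    E[exp(-lambda * t)] = ((b + T) / (b + T + t)) ** (a + r)."""
    T = total_time
    return ((b + T) / (b + T + t)) ** (a + r)

# toy usage: a = 2, b = 100 hours prior; 5 failures in 800 hours on test
print(bayes_reliability(t=50.0, r=5, total_time=800.0, a=2, b=100.0))
```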

19.
Most current efforts in near-infrared optical tomography are effectively limited to two-dimensional reconstructions due to the computationally intensive nature of full three-dimensional (3-D) data inversion. Previously, we described a new computationally efficient and statistically powerful inversion method, APPRIZE (automatic progressive parameter-reducing inverse zonation and estimation). The APPRIZE method computes minimum-variance estimates of parameter values (here, spatially variant absorption due to a fluorescent contrast agent) and covariance, while simultaneously estimating the number of parameters needed as well as the size, shape, and location of the spatial regions that correspond to those parameters. Estimates of measurement and model error are explicitly incorporated into the procedure and implicitly regularize the inversion in a physically based manner. The optimal estimation of parameters is bounds-constrained, precluding infeasible values. In this paper, the APPRIZE method for optical imaging is extended for application to arbitrarily large 3-D domains through the use of domain decomposition. The effect of subdomain size on the performance of the method is examined by assessing the sensitivity for identifying 112 randomly located single-voxel heterogeneities in 58 3-D domains. Also investigated are the effects of unmodeled heterogeneity in background optical properties. The method is tested on simulated frequency-domain photon migration measurements at 100 MHz in order to recover absorption maps owing to fluorescent contrast agent. This study provides a new approach for computationally tractable 3-D optical tomography.

20.
Multipinhole single photon emission computed tomography (SPECT) imaging has several advantages over single-pinhole SPECT imaging, including increased sensitivity and improved sampling. However, the quest for a good design is challenging due to the large number of design parameters. The effect of one of these parameters, the amount of overlap in the projection images, on reconstructed image quality is examined in this paper. The evaluation of quality is based on efficient approximations for the linearized local impulse response and the covariance in a voxel, and on the bias of the reconstruction of the noiseless projection data. Two methods are proposed that remove the overlap in the projection image by blocking certain projection rays with extra shielding between the pinhole plate and the detector. Two measures to quantify the amount of overlap are also suggested. First, the approximate method, which predicts the contrast-to-noise ratio (CNR), is validated using postsmoothed maximum likelihood expectation maximization (MLEM) reconstructions with an imposed target resolution. Second, designs with different amounts of overlap are evaluated to study the effect of multiplexing. In addition, the CNR of each pinhole design is compared with that of the same design with overlap removed. Third, the results are interpreted with the overlap quantification measures. Fourth, the two proposed overlap removal methods are compared. From the results we can conclude that, once the complete detector area has been used, the extra sensitivity due to multiplexing can only compensate for the loss of information, not improve the CNR. Removing the overlap, however, improves the CNR. The gain is most prominent in the central field of view, though often at the cost of the CNR of some voxels at the edges, since after overlap removal very little information is left for their reconstruction. The reconstruction images provide insight into the multiplexing and truncation artifacts.
