Similar Documents (10 results)
1.
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65–77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T1 data, and all had fixed data spacings, uniform in log-time. However, for T2 data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T2 data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. 
The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise.
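The inversion problem described above, recovering a distribution of relaxation times from a sum of decaying exponentials, can be illustrated with a minimal discretized sketch. This is plain nonnegative least squares on a log-spaced grid, not UPEN's variable-penalty scheme; the time constants and grid are invented for the demo:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic noiseless T1-like decay with components at 10 ms and 100 ms.
t = np.logspace(-3, 0, 64)                  # acquisition times, s
y = 1.0 * np.exp(-t / 0.01) + 0.5 * np.exp(-t / 0.1)

# Candidate relaxation times, log-spaced; includes the true values.
T_grid = np.logspace(-3, 0, 31)
K = np.exp(-t[:, None] / T_grid[None, :])   # kernel matrix

# Nonnegative least squares gives a discrete distribution on the grid.
amps, resid = nnls(K, y)
```

A regularized method such as UPEN adds a smoothing penalty on `amps`; plain NNLS, as here, tends to return a few isolated spikes rather than a smooth distribution.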

2.
The multiexponential inversion program UPEN by the authors [J. Magn. Reson. 1998; 132: 65-77; Ibid. 2000; 147: 273-285] employs negative feedback to a regularization penalty to implement variable smoothing when both sharp and broad features appear on a single distribution of relaxation times. This allows a good fit to relaxation data that correspond to a sum of decaying exponentials plus random noise, but it usually does not give a good fit to data that are distorted by systematic errors from instrument problems, which can cause erroneous "resolution" or erroneous non-resolution of peaks. UPEN provides a series of diagnostic parameters to help identify such data problems that can lead to interpretation errors, and, in particular, to warn when a close call on the resolution or non-resolution of nearby peaks might be questionable. Examples are given from a series of T(2) data sets from desiccated bone samples, with examples where the presence of two peaks is required by good data, examples where the presence of two peaks is negated by good data, and examples where the resolution or non-resolution of peaks cannot be trusted because of instrumental distortions revealed by UPEN diagnostic parameters. It is suggested that processing relaxation data with UPEN in nearly real time could permit retaking data while a sample is still available if the diagnostic parameters show instrumental problems.

3.
Linear regularization is a common and robust technique for fitting multi-exponential relaxation decay data to obtain a distribution of relaxation times. The regularization algorithms employed by the Uniform-Penalty inversion (UPEN) and CONTIN computer programs have been compared using simulated transverse (T2) relaxation data derived from a typical bimodal distribution observed in cartilage tissue which contains a component shorter than t(0), the time of the first decay sample. We examined the reliability of detecting sub-t(0) relaxation components and the accuracy of statistical estimates of T2 distribution parameters. When the integrated area of the sub-t(0) component relative to that of the total distribution was greater than 0.25, our results indicated a signal-to-noise threshold of about 300 for detecting the presence of the sub-t(0) component with a probability of 0.9 or greater. This threshold was obtained using both the UPEN and CONTIN algorithms. In addition, when using the second-derivative-squared regularizer, UPEN solutions provided statistical estimates of T2 distribution parameters which were substantially free of the biasing effect of the regularizer observed in analogous CONTIN solutions.
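The second-derivative-squared regularizer mentioned above can be sketched as an augmented least-squares system. This is a generic Tikhonov-style smoothing inversion, not the actual UPEN or CONTIN code; the grid, noise level, and lam are illustrative, and the nonnegativity constraint is omitted for brevity:

```python
import numpy as np

# Minimize ||K a - y||^2 + lam * ||D2 a||^2 by stacking the penalty
# rows under the kernel and solving one least-squares problem.
t = np.linspace(1e-3, 0.5, 128)              # linear T2 sampling, s
T_grid = np.logspace(-3, 0, 50)
K = np.exp(-t[:, None] / T_grid[None, :])

rng = np.random.default_rng(0)
y = 0.7 * np.exp(-t / 0.02) + 0.3 * np.exp(-t / 0.2)
y_noisy = y + 0.001 * rng.standard_normal(t.size)

D2 = np.diff(np.eye(T_grid.size), n=2, axis=0)   # second-difference operator

lam = 1e-3
A = np.vstack([K, np.sqrt(lam) * D2])
b = np.concatenate([y_noisy, np.zeros(D2.shape[0])])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

fit = K @ a
```

Raising `lam` smooths the recovered distribution `a` at the cost of bias, which is exactly the trade-off the comparison above quantifies.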

4.
Wang He, Li Gengying. Acta Physica Sinica (物理学报), 2005, 54(3): 1431-1436
We discuss the application of nonnegative least squares (NNLS) and nonlinear fitting to the analysis of NMR relaxation data. Combining the two, we propose using the NNLS inversion result to set the initial values for the nonlinear fit, and we demonstrate by computer simulation and experiment the effectiveness of this method for analyzing NMR relaxation data. Keywords: nonnegative least squares; nonlinear fitting; NMR; relaxation time
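The two-step idea in this abstract, an NNLS inversion whose result seeds a nonlinear fit, might be sketched as follows. The synthetic data are invented, and the peak-picking simply takes the strongest grid component in each half of the grid, which is crude but adequate for this well-separated two-component demo:

```python
import numpy as np
from scipy.optimize import nnls, curve_fit

t = np.linspace(0.001, 1.0, 200)
y = 1.0 * np.exp(-t / 0.01) + 0.5 * np.exp(-t / 0.1)   # noiseless demo

# Step 1: NNLS inversion on a log-spaced grid of candidate times.
T_grid = np.logspace(-3, 0, 31)
K = np.exp(-t[:, None] / T_grid[None, :])
amps, _ = nnls(K, y)

# Step 2: seed a discrete biexponential fit with the strongest grid
# component from each half of the grid.
half = T_grid.size // 2
i1 = np.argmax(amps[:half])
i2 = half + np.argmax(amps[half:])
p0 = [amps[i1], T_grid[i1], amps[i2], T_grid[i2]]

def biexp(t, a1, T1, a2, T2):
    return a1 * np.exp(-t / T1) + a2 * np.exp(-t / T2)

popt, _ = curve_fit(biexp, t, y, p0=p0)
Ts = sorted([abs(popt[1]), abs(popt[3])])
```

The NNLS stage removes the guesswork from the starting values, which is what makes the subsequent nonlinear fit reliable.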

5.
Despite significant differences between bone tissues and other porous media such as oilfield rocks, there are common features as well as differences in the response of NMR relaxation measurements to the internal structures of the materials. Internal surfaces contribute to both transverse (T2) and longitudinal (T1) relaxation of pore fluids, and in both cases the effects depend on, among other things, local surface-to-volume ratio (S/V). In both cases variations in local S/V can lead to distributions of relaxation times, sometimes over decades. As in rocks, it is useful to take bone data under different conditions of cleaning, saturation, and desaturation. T1 and T2 distributions are computed using UPEN. In trabecular bone it is easy to see differences in dimensions of intertrabecular spaces in samples that have been de-fatted and saturated with water, with longer T1 and T2 for larger pores. Both T1 and T2 distributions for these water-saturated samples are bimodal, separating or partly separating inter- and intratrabecular water. The T1 peak times have a ratio of from 10 to 30, depending on pore size, but for the smaller separations the distributions may not have deep minima. The T2 peak times have ratios of over 1000, with intratrabecular water represented by large peaks at a fraction of a ms, which we can observe only by single spin echoes. CPMG data show peaks at about a second, tapering down to small amplitudes by a ms. In all samples the free induction decay (FID) from an inversion-recovery (IR) T1 measurement shows an approximately Gaussian (solid-like) component, exp[-(1/2)(t/TGC)^2], with TGC ≈ 11.7 ± 0.7 µs (GC for "Gaussian Component"), and a liquid-like component (LLC) with initially simple-exponential decay at the rate-average time T(2-FID) for the first 100 µs. Averaging and smoothing procedures are adopted to derive T(2-FID) as a function of IR time and to get T1 distributions for both the GC and the LLC.
It appears that contact with the GC, which is presumed to be 1H on collagen, leads to the T2 reduction of at least part of the LLC, which is presumed to be water. Progressive drying of the cleaned and water-saturated samples confirms that the long T1 and T2 components were in the large intertrabecular spaces, since the corresponding peaks are lost. Further drying leads to further shortening of T2 for the remaining water but eventually leads to lengthening of T1 for both the collagen and the water. After the intertrabecular water is lost by drying, T1 is the same for GC and LLC. T(2-FID) is found to be roughly 320/α µs, where α is the ratio of the extrapolated GC to LLC, appearing to indicate a time τ of about 320 µs for 1H transverse magnetization in GC to exchange with that of LLC. This holds for all samples and under all conditions investigated. The role of the collagen in relaxation is confirmed by treatment to remove the mineral component, observing that the GC remains and has the same TGC and has the same effect on the relaxation times of the associated water. Measurements on cortical bone show the same collagen-related effects but do not have the long T1 and T2 components.
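The FID decomposition used above, a Gaussian solid-like component plus an exponential liquid-like component, can be sketched as a simple two-component fit. The amplitudes and time constants here are invented to mimic the reported scales (TGC around 11.7 µs), and this is not the authors' averaging and smoothing procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def fid(t, a_gc, t_gc, a_llc, t2):
    """Gaussian (solid-like) plus exponential (liquid-like) FID model."""
    return a_gc * np.exp(-0.5 * (t / t_gc) ** 2) + a_llc * np.exp(-t / t2)

t = np.linspace(0.0, 100.0, 200)          # time in microseconds
y = fid(t, 2.0, 11.7, 1.0, 50.0)          # alpha = GC/LLC = 2 by design

popt, _ = curve_fit(fid, t, y, p0=[1.0, 10.0, 1.0, 40.0])
alpha = popt[0] / popt[2]                 # extrapolated GC-to-LLC ratio
```

With α in hand, the empirical relation stated in the abstract, T(2-FID) ≈ 320/α µs, gives the expected initial liquid-like decay time.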

6.
We show here that the problem of maximizing a family of quantitative functions, encompassing both the modularity (Q-measure) and modularity density (D-measure), for community detection can be uniformly understood as a combinatorial optimization involving the trace of a matrix called the modularity Laplacian. Instead of using traditional spectral relaxation, we introduce an additional nonnegative constraint into this graph clustering problem and design efficient algorithms to optimize the new objective. With the explicit nonnegative constraint, our solutions are very close to the ideal community indicator matrix and can directly assign nodes to communities. The near-orthogonal columns of the solution can be reformulated as the posterior probability of the corresponding node belonging to each community. Therefore, the proposed method can be exploited to identify fuzzy or overlapping communities and thus facilitates the understanding of the intrinsic structure of networks. Experimental results show that our new algorithm consistently, sometimes significantly, outperforms traditional spectral relaxation approaches.
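A minimal nonnegative relaxation of graph clustering in this spirit can be sketched with multiplicative updates for symmetric nonnegative factorization. This is generic symmetric NMF on the adjacency matrix of a toy two-clique graph, not the paper's modularity-Laplacian objective; the damped update follows the common half-blend form, which preserves nonnegativity:

```python
import numpy as np

# Toy graph: two 3-node cliques joined by one edge; communities {0,1,2}, {3,4,5}.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(1)
H = rng.random((6, 2)) + 0.1              # nonnegative indicator-like factor

# Multiplicative updates for min ||A - H H^T||_F^2 with H >= 0.
for _ in range(500):
    num = A @ H
    den = H @ (H.T @ H) + 1e-12
    H *= 0.5 + 0.5 * num / den            # stays elementwise nonnegative

labels = H.argmax(axis=1)                 # near-orthogonal columns -> hard labels
```

Because H stays nonnegative, its rows can also be normalized into soft membership scores, which is what permits the fuzzy or overlapping-community reading described above.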

7.
A better knowledge of the NMR relaxation behavior of bone tissue can improve the definition of imaging protocols to detect bone diseases like osteoporosis. The six rat lumbar vertebrae, from L1 to L6, were analyzed by means of both transverse (T2) and longitudinal (T1) relaxation of 1H nuclei at 20 MHz and 30 °C. Distributions of relaxation times, computed using the multiexponential inversion software UPEN (uniform-penalty inversion), extend over decades for both T2 and T1 relaxation. In all samples, the free induction decay (FID) from an inversion-recovery (IR) T1 measurement shows an approximately Gaussian (solid-like) component, exp[-(1/2)(t/TGC)^2], with TGC ≈ 12 µs (GC for Gaussian component), and a liquid-like component (LLC) with initially simple-exponential decay. Averaging and smoothing procedures are adopted to obtain the ratio α between GC and LLC signals and to get separate T1 distributions for GC and LLC. Distributions of T1 for LLC show peaks centered at 300-500 ms and shoulders going down to 10 ms, whereas distributions of T1 for GC are single broad peaks centered at roughly 100 ms. The T2 distributions by Carr-Purcell-Meiboom-Gill at 600 µs echo spacing are very broad and extend from 1 ms to hundreds of ms. This long echo spacing does not allow one to see a peak in the region of hundreds of µs, which is better seen by single spin-echo T2 measurements. Results of the relaxation analysis were then compared with densitometric data. From the study, a clear picture of the intratrabecular and intertrabecular 1H signals emerges. In particular, the GC is presumed to be due to 1H in collagen, the LLC due to all the fluids in the bone including water and fat, and the very short T2 peak due to the intratrabecular water. Overall, indications of some trends in composition and in pore-space distributions going from L1 to L6 appeared.
Published results on rat vertebrae obtained by fitting the curves with discrete two-component models for both T2 and T1 are consistent with our results and can be better interpreted in light of the distributions of relaxation times shown here.

8.
Blind deconvolution: multiplicative iterative algorithm
Zhang J, Zhang Q, He G. Optics Letters, 2008, 33(1): 25-27
A new algorithm has been developed for performing blind deconvolution on degraded images. The algorithm naturally preserves the nonnegative constraint on the iterative solutions of blind deconvolution and can produce a restored image of high resolution. Furthermore, benefiting from the multiplicative form, the algorithm is free from the instability of numerical computation. Results of applying the algorithm to simulated and real degraded images are reported.
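The abstract does not give the update rule, so as a stand-in here is the classic Richardson-Lucy iteration, a well-known multiplicative deconvolution whose update preserves nonnegativity in the same way. It is shown in 1-D with a known point-spread function for brevity; a blind variant alternates the same style of update on the PSF:

```python
import numpy as np

def rl_step(estimate, psf, observed):
    """One Richardson-Lucy multiplicative update; estimate stays nonnegative."""
    conv = np.convolve(estimate, psf, mode="same")
    ratio = observed / np.maximum(conv, 1e-12)
    return estimate * np.convolve(ratio, psf[::-1], mode="same")

# 1-D demo: two spikes blurred by a normalized 5-point boxcar PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.ones(5) / 5.0
y = np.convolve(x, psf, mode="same")

est = np.full_like(x, y.mean())           # flat nonnegative starting image
for _ in range(200):
    est = rl_step(est, psf, y)
```

Because every factor in the update is nonnegative, no explicit projection or clipping step is ever needed, which is the practical appeal of the multiplicative form.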

9.
A new low photon energy regime of angle-resolved photoemission spectroscopy is accessed with lasers and used to study the high-Tc superconductor Bi2Sr2CaCu2O8+δ. The low energy increases bulk sensitivity, reduces background, and improves resolution. With this we observe spectral peaks which are sharp on the scale of their binding energy: the clearest evidence yet for quasiparticles in the normal state. Crucial aspects of the data, such as the dispersion, the superconducting gaps, and the bosonic coupling kink, are found to be robust to a possible breakdown of the sudden approximation.

10.
A new non-iterative curve resolution technique for resolving single decay profiles is proposed. The new technique, called DoubleSlicing, is based on the DECRA (Direct Exponential Curve Resolution Algorithm) principle. While the original DECRA was designed to resolve several decay curves simultaneously, fitting common pure exponentials across them, DoubleSlicing can resolve single decay profiles by a simple double data transformation followed by an analytical and unique three-way decomposition. The new approach is successfully demonstrated on experimental NMR CPMG relaxation data, measured on combinations of unmixed paramagnetic CuSO4 solutions. Decay signals of the water component were acquired following an innovative experimental design that ensured no interaction between the components present in each sample under observation. DoubleSlicing proved to be accurate in estimating relaxation times differing by one order of magnitude (range: 19.6-159.4 ms). Its performance was comparable to that of discrete exponential fitting, with the advantage of being much faster: in terms of computation time, DoubleSlicing outperformed exponential fitting by a factor of four.
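The slicing idea behind DECRA and DoubleSlicing, that time-shifted copies of an exponential decay differ only by a constant factor, is easy to verify in the single-component case. This is a deliberately simplified illustration: the real methods build a three-way array from the slices and resolve several components at once, and the sampling interval here is invented:

```python
import numpy as np

# For y(t) = exp(-t/T2), a copy shifted by k samples satisfies
# y[t + k*dt] / y[t] = exp(-k*dt/T2): the ratio of the two slices
# is constant and yields the relaxation time directly.
dt = 2e-3                                  # sample spacing, s (assumed)
t = np.arange(0, 0.1, dt)
T2_true = 19.6e-3                          # one of the reported T2 values
y = np.exp(-t / T2_true)

k = 5                                      # slice offset in samples
ratio = y[k:] / y[:-k]
T2_est = -k * dt / np.log(ratio.mean())
```

With several components present, the slice ratio is no longer constant, and recovering the individual rates requires the full three-way decomposition described in the abstract.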
