Similar Documents
20 similar documents found.
1.
Pulsed ESR techniques with the aid of site-directed spin labeling have proven useful in providing unique structural information about proteins. The determination of distance distributions in electron spin pairs directly from the dipolar time evolution of the pulsed ESR signals by means of the Tikhonov regularization method is reported. The difficulties connected with numerically inverting this ill-posed mathematical problem are clearly illustrated. The Tikhonov regularization with the regularization parameter determined by the L-curve criterion is then described and tested to confirm its accuracy and reliability. The method is applied to recent experimental results on doubly labeled proteins that have been studied using two pulsed ESR techniques, double quantum coherence (DQC) ESR and double electron-electron resonance (DEER). The extracted distance distributions provide valuable information about the conformational constraints in various partially folded states of proteins. This study supplies a mathematically reliable method for extracting pair distributions from pulsed ESR experimental data and extends the use of pulsed ESR to provide results of greater value for structural biology.
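As a rough illustration of the inversion described above, the sketch below builds a dipolar kernel, applies Tikhonov regularization over a range of regularization parameters, and selects a parameter from a simple L-curve corner estimate. It is not the authors' implementation: the time and distance grids, the 52.04 MHz·nm³ nitroxide dipolar constant, the synthetic signal, and the maximum-curvature corner search are all illustrative assumptions.

```python
import numpy as np

# Illustrative grids: time axis in microseconds, distance grid in nanometres.
t = np.linspace(0, 2.0, 200)
r = np.linspace(1.5, 6.0, 120)

def dipolar_kernel(t, r, n_theta=201):
    """K[i, j]: powder-averaged dipolar evolution at time t[i] for a spin pair at r[j].
    Uses the ~52.04 MHz*nm^3 dipolar constant appropriate for nitroxide pairs."""
    w = 2 * np.pi * 52.04 / r**3                      # dipolar angular frequency, rad/us
    cos_theta = np.linspace(0, 1, n_theta)            # cos(theta) is uniform for a powder
    K = np.empty((t.size, r.size))
    for j, wj in enumerate(w):
        K[:, j] = np.mean(np.cos(np.outer(t, wj * (1 - 3 * cos_theta**2))), axis=1)
    return K

K = dipolar_kernel(t, r)

# Synthetic "experimental" trace: Gaussian distance distribution plus noise.
p_true = np.exp(-0.5 * ((r - 3.5) / 0.25) ** 2)
p_true /= p_true.sum()
signal = K @ p_true + 0.01 * np.random.randn(t.size)

# Second-difference operator as the Tikhonov penalty (smoothness prior).
L = np.diff(np.eye(r.size), 2, axis=0)

alphas = np.logspace(-2, 2, 40)
solutions, res_norm, pen_norm = [], [], []
for a in alphas:
    P = np.linalg.solve(K.T @ K + a**2 * (L.T @ L), K.T @ signal)
    P = np.clip(P, 0, None)                           # crude non-negativity, for illustration
    solutions.append(P)
    res_norm.append(np.linalg.norm(K @ P - signal))
    pen_norm.append(np.linalg.norm(L @ P))

# L-curve criterion: pick the corner of log(residual norm) vs log(penalty norm),
# located here with a simple finite-difference curvature estimate.
x, y = np.log(res_norm), np.log(pen_norm)
dx, dy = np.gradient(x), np.gradient(y)
curvature = (dx * np.gradient(dy) - dy * np.gradient(dx)) / (dx**2 + dy**2) ** 1.5
best = int(np.argmax(curvature[2:-2])) + 2
print(f"L-curve alpha = {alphas[best]:.3g}, "
      f"distribution peak at r = {r[np.argmax(solutions[best])]:.2f} nm")
```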

2.
Pulse dipolar electron paramagnetic resonance spectroscopy provides a means of measuring distances in the range of ~1.5–10 nm between two spin labels tethered to a biological system. However, extracting the distance distribution between spin labels is an ill-posed mathematical problem. The most common approach employs the Tikhonov regularization method, in which a regularization parameter characterizing the smoothness of the distribution is introduced. In the case of multi-modal distance distributions with peaks of different widths, however, the use of a single regularization parameter can distort the actual distribution shapes. Recently, a multi-Gaussian Monte Carlo approach was proposed to eliminate this drawback and was verified for model biradicals [1]. In the present work, we test this approach for the first time on complicated biological systems exhibiting multi-modal distance distributions. We apply multi-Gaussian analysis to pulsed electron–electron double resonance data of supramolecular ribosomal complexes, where an 11-mer oligoribonucleotide (MR) bearing two nitroxide labels at its termini is used as a reporter. The calculated distance distributions reveal the same conformations of MR as those obtained by Tikhonov regularization, but feature peaks of different widths, which leads to better resolution in several cases. The advantages, complications, and further perspectives of applying the Monte-Carlo-based multi-Gaussian approach to real biological systems are discussed.
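The following sketch illustrates the multi-Gaussian parametrization on a synthetic dipolar trace. It is a plain least-squares fit rather than the Monte Carlo sampler of [1], and it reuses the illustrative kernel constants from the previous sketch; the grids, starting values, and bounds are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Compact version of the illustrative kernel from the previous sketch.
t = np.linspace(0, 2.0, 200)                           # us
r = np.linspace(1.5, 6.0, 120)                         # nm
cos_theta = np.linspace(0, 1, 201)
w = 2 * np.pi * 52.04 / r**3                           # rad/us
K = np.array([np.mean(np.cos(np.outer(t, wi * (1 - 3 * cos_theta**2))), axis=1)
              for wi in w]).T                          # shape (len(t), len(r))

def multi_gauss_signal(t_axis, r1, s1, a1, r2, s2, a2):
    """Dipolar signal for a two-Gaussian distance distribution on the grid r.
    (r_i, s_i, a_i): centre (nm), width (nm), amplitude. t_axis is unused directly
    because the kernel K already encodes the time grid."""
    p = (a1 * np.exp(-0.5 * ((r - r1) / s1) ** 2)
         + a2 * np.exp(-0.5 * ((r - r2) / s2) ** 2))
    return K @ p

# Synthetic data from a narrow + broad bimodal distribution.
truth = (2.5, 0.15, 1.0, 4.0, 0.50, 0.6)
data = multi_gauss_signal(t, *truth) + 0.05 * np.random.randn(t.size)

popt, _ = curve_fit(multi_gauss_signal, t, data,
                    p0=[2.3, 0.2, 0.8, 4.3, 0.4, 0.4],
                    bounds=([1.6, 0.05, 0.01, 1.6, 0.05, 0.01],
                            [5.9, 1.00, 10.0, 5.9, 1.00, 10.0]))
print("fitted centres (nm):", round(popt[0], 2), round(popt[3], 2),
      "| widths (nm):", round(popt[1], 2), round(popt[4], 2))
```

Unlike a single-parameter Tikhonov solution, each Gaussian component here carries its own width, which is the feature the abstract credits for the improved resolution.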

3.
Double electron-electron resonance (DEER), also known as pulsed electron-electron double resonance (PELDOR), is a time-domain electron paramagnetic resonance method that can measure the weak dipole-dipole interactions between unpaired electrons. DEER has been applied to discrete pairs of free radicals in biological macromolecules and to clusters containing small numbers of free radicals in polymers and irradiated materials. The goal of such work is to determine the distance or distribution of distances between radicals, which is an underdetermined problem. That is, the spectrum of dipolar interactions can be readily calculated for any distribution of free radicals, but there are many, quite different distributions of radicals that could produce the same experimental dipolar spectrum. This paper describes two methods that are useful for approximating the distance distributions for the large subset of cases in which the mutual orientations of the free radicals are uncorrelated and the width of the distribution is more than a few percent of its mean. The first method relies on a coordinate transformation and is parameter-free, while the second is based on iterative least-squares with Tikhonov regularization. Both methods are useful in DEER studies of spin-labeled biomolecules containing more than two labels.
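The forward direction mentioned above, computing the dipolar (Pake) spectrum from a given distance distribution, takes only a few lines; the hard part is going back the other way. The sketch below uses the common 52.04 MHz·nm³ nitroxide dipolar constant and an arbitrary Gaussian distance distribution; both are illustrative assumptions, not values from this paper.

```python
import numpy as np

def pake_spectrum(p_r, r, freq_edges, n_theta=20000, seed=0):
    """Forward problem: dipolar spectrum for a distance distribution p_r over grid r (nm).
    Each distance contributes a Pake pattern; the powder average is done by sampling
    cos(theta), which is uniformly distributed for an isotropic sample."""
    rng = np.random.default_rng(seed)
    cos_theta = rng.random(n_theta)
    spec = np.zeros(freq_edges.size - 1)
    for pi, ri in zip(p_r, r):
        nu_dd = 52.04 / ri**3                        # dipolar frequency in MHz (nitroxide pair)
        nu = nu_dd * (1 - 3 * cos_theta**2)          # orientation-dependent splitting
        hist, _ = np.histogram(np.concatenate([nu, -nu]), bins=freq_edges)
        spec += pi * hist
    return spec / spec.sum()

r = np.linspace(2.0, 6.0, 200)                       # nm
freq_edges = np.linspace(-8, 8, 402)                 # MHz

# A Gaussian distance distribution centred at 3 nm. The forward calculation is cheap,
# but recovering p_r from a noisy spectrum like this is the underdetermined inverse problem.
p_r = np.exp(-0.5 * ((r - 3.0) / 0.2) ** 2)
spectrum = pake_spectrum(p_r / p_r.sum(), r, freq_edges)
print("perpendicular-edge frequency expected near +/-", round(52.04 / 3.0**3, 2), "MHz")
```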

4.
In this paper, we propose to apply information theory to an ultra-wideband (UWB) radar sensor network (RSN) for detecting targets in a foliage environment. Information-theoretic tools such as the maximum entropy method (MEM) and mutual information are proven methods that can be applied to data collected by various sensors. However, the complexity of the environment introduces uncertainty at the fusion center. Chernoff information provides the best error exponent of detection in a Bayesian setting. In this paper, we treat target detection as binary hypothesis testing and use Chernoff information as the sensor-selection criterion, which significantly reduces the processing load. Another strong information-theoretic tool, the method of types, is applicable to our MEM-based target detection algorithm because entropy depends only on the empirical distribution. The method of types analyzes the probability of a sequence based on its empirical distribution, and from it we can bound the probability of detection. We also propose relative-entropy-based processing at the fusion center based on the method of types and the Chernoff-Stein lemma. We study the quantization level and number of nodes required to attain the best error exponent. The performance of the algorithms was evaluated on real-world data.
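As a small, self-contained illustration of the sensor-selection idea, the sketch below computes the Chernoff information between two discrete observation distributions for each sensor and keeps the most discriminative one. The per-sensor histograms are made-up numbers, not UWB radar data, and the selection rule is only the generic criterion named in the abstract.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_information(p, q, eps=1e-12):
    """Chernoff information C(P, Q) = -min_{0 <= s <= 1} log(sum_x p(x)^s * q(x)^(1-s)).
    It is the best achievable error exponent for Bayesian binary hypothesis testing."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    f = lambda s: np.log(np.sum(p**s * q**(1.0 - s)))
    res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
    return -res.fun

# Hypothetical per-sensor observation models under H0 (no target) and H1 (target):
# each row is a discretized amplitude histogram for one sensor.
h0 = np.array([[0.70, 0.20, 0.08, 0.02],
               [0.60, 0.25, 0.10, 0.05]])
h1 = np.array([[0.40, 0.30, 0.20, 0.10],
               [0.55, 0.25, 0.12, 0.08]])

scores = [chernoff_information(h0[i], h1[i]) for i in range(len(h0))]
print("Chernoff information per sensor:", np.round(scores, 4))
print("select sensor:", int(np.argmax(scores)))   # keep the most discriminative sensor
```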

5.
The superiority of the maximum entropy method (MEM) over the traditional fast Fourier transform (FFT) method is demonstrated in NQR spectral analyses. Using computer-simulated and real spectral data, a comparative study was made between the maximum entropy and the conventional discrete Fourier transform methods. It is concluded that the use of MEM in NQR spectroscopy can lead to sensitivity improvements, reduction of instrumental artifacts and truncation errors, shortened data acquisition times, and automatic suppression of noise, while at the same time increasing the resolution. A property of MEM that is particularly significant for two-dimensional NQR spectroscopy is its ability to produce spectral estimates from short data records, free of truncation artifacts. The use of MEM in two-dimensional NQR studies can reduce the time necessary to acquire a two-dimensional data set.

6.
The reliability of procedures for extracting the distance distribution between spins from the dipolar evolution function is studied with particular emphasis on broad distributions. A new numerically stable procedure for fitting distance distributions with polynomial interpolation between sampling points is introduced and compared to Tikhonov regularization in the dipolar frequency and distance domains and to approximate Pake transformation. Distance distributions with only narrow peaks are most reliably extracted by distance-domain Tikhonov regularization, while frequency-domain Tikhonov regularization is favorable for distributions with only broad peaks. For the quantification of distributions by their mean distance and variance, Hermite polynomial interpolation provides the best results. Distributions that contain both broad and narrow peaks are the most difficult to analyze; in this case a high signal-to-noise ratio is strictly required and approximate Pake transformation should be applied. A procedure is given for renormalizing primary experimental data from protein preparations with slightly different degrees of spin labelling, so that they can be compared directly. The performance of all the data analysis procedures is demonstrated on experimental data for a shape-persistent biradical with a label-to-label distance of 5 nm, for a [2]catenane with a broad distance distribution, and for a doubly spin-labelled double mutant of plant light harvesting complex II.

7.
Infrared and visible image fusion has been an important and popular topic in imaging science. Dual-band image fusion aims to carry both the target regions of the infrared image and the abundant detail of the visible image into the fused result, preserving or even enhancing the information inherited from the source images. In this study, we propose an optimization-based fusion method that combines global entropy with gradient-constrained regularization. We design a cost function that uses global maximum entropy as the first term and a gradient constraint as the regularization term. The global maximum entropy term makes the fused result inherit as much information as possible from the sources, while the gradient constraint gives the fused result clear details and edges with noise suppression. The fusion is achieved by minimizing the cost function with respect to a weight matrix. Experimental results indicate that the proposed method performs well and shows clear advantages over other typical algorithms in both subjective visual quality and objective criteria.
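The sketch below evaluates a cost of this general shape, negative histogram entropy of the fused image plus a gradient-mismatch penalty, and minimizes it over a single scalar blending weight on synthetic frames. The real method optimizes a full weight matrix; the scalar weight, the synthetic images, and the value of the trade-off parameter lambda are simplifying assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
ir = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)     # synthetic "infrared" frame
ir[20:40, 20:40] += 0.6                                   # bright target region
vis = np.clip(rng.normal(0.5, 0.20, (64, 64)), 0, 1)     # texture-rich "visible" frame

def entropy(img, bins=64):
    """Shannon entropy (bits) of the intensity histogram."""
    counts, _ = np.histogram(np.clip(img, 0, 1), bins=bins, range=(0, 1))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Target gradient field: at each pixel keep the stronger of the two source gradients.
gy_ir, gx_ir = np.gradient(ir)
gy_v, gx_v = np.gradient(vis)
stronger = (gx_ir**2 + gy_ir**2) > (gx_v**2 + gy_v**2)
gx_t = np.where(stronger, gx_ir, gx_v)
gy_t = np.where(stronger, gy_ir, gy_v)

def cost(w, lam=5.0):
    """-entropy(F) + lam * gradient mismatch, for F = w*IR + (1 - w)*VIS."""
    fused = w * ir + (1 - w) * vis
    gy, gx = np.gradient(fused)
    grad_term = np.mean((gx - gx_t) ** 2 + (gy - gy_t) ** 2)
    return -entropy(fused) + lam * grad_term

weights = np.linspace(0, 1, 101)
best_w = weights[np.argmin([cost(w) for w in weights])]
print(f"best scalar fusion weight: {best_w:.2f}, cost = {cost(best_w):.3f}")
```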

8.
To assess the feasibility of applying modern spectral-analysis techniques to wind profiler radar, the spectral-analysis performance of the FFT method and the maximum entropy method was compared using both measured and simulated wind profiler radar echo signals. The results show that: (1) when the echo signal is relatively strong, both methods give good spectral estimates, but when the echo is weak the maximum entropy method performs better than the FFT method and suppresses ground clutter more effectively; (2) the maximum entropy spectrum is smoother, indicating that the method also suppresses random white noise to some extent; (3) the recursion order of the maximum entropy method affects the spectral result: the order determined by the final prediction error criterion is generally too small, and an order of 15 gave good results. Since wind profiler radar echoes are usually weak, these results are expected to help improve signal processing.
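The maximum entropy spectra discussed here and in item 5 are autoregressive (Burg-type) estimates. The sketch below is a generic Burg implementation applied to a synthetic sinusoid in noise, using the recursion order 15 mentioned above; the test signal and sampling parameters are made up and have no connection to actual wind profiler data.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: estimate AR(order) coefficients a (a[0] = 1) and the residual
    white-noise power sigma2 from a single data record, without windowing."""
    x = np.asarray(x, float)
    ef, eb = x.copy(), x.copy()            # forward / backward prediction errors
    a = np.array([1.0])
    sigma2 = np.mean(x ** 2)
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(efp, ebp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                # Levinson-type coefficient update
        sigma2 *= (1.0 - k * k)
        ef, eb = efp + k * ebp, ebp + k * efp
    return a, sigma2

def burg_psd(x, order, nfft=2048, fs=1.0):
    """Maximum entropy (AR) power spectral density P(f) = sigma2 / |A(f)|^2."""
    a, sigma2 = burg_ar(x, order)
    A = np.fft.rfft(a, nfft)
    return np.fft.rfftfreq(nfft, d=1.0 / fs), sigma2 / (fs * np.abs(A) ** 2)

# Synthetic "echo": a sinusoid at 0.12 of the sampling rate in comparable-power noise.
rng = np.random.default_rng(3)
n, fs = 256, 1.0
time = np.arange(n) / fs
x = 1.0 * np.sin(2 * np.pi * 0.12 * time) + rng.normal(size=n)

freqs, psd = burg_psd(x, order=15, fs=fs)       # order 15, as suggested in the abstract
print(f"MEM spectral peak at {freqs[np.argmax(psd)]:.3f} (true value 0.120)")
```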

9.
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value of an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined by integration over the other parameters. The approach is an alternative way of obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distributions for the values of the sound speed ratio at the surface of the seabed and for the source levels of a towed source are examined for different geoacoustic model representations.
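A compact numerical sketch of the canonical construction described above: p(m) ∝ exp(-βE(m)) on a parameter grid, with β chosen so that the expected error matches a target value, and a marginal obtained by summing over the other parameter. The two-parameter grid, the quadratic error surface, and the target expectation are all hypothetical stand-ins for the geoacoustic error function of the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical two-parameter model grid (sound-speed ratio, source-level offset in dB).
ratio = np.linspace(0.98, 1.10, 121)
level = np.linspace(-3.0, 3.0, 61)
R, Lv = np.meshgrid(ratio, level, indexing="ij")

# Illustrative error function E(m): in practice this comes from the mismatch between
# modelled and measured acoustic fields; here it is a synthetic quadratic surface.
E = 40 * (R - 1.04) ** 2 + 0.3 * (Lv - 0.5) ** 2 + 0.05 * (R - 1.04) * (Lv - 0.5)

# Target expectation <E>: in the paper this is set from the error values of model
# solutions for a sparse set of data samples; here it is simply an assumed number.
E_target = 0.2

def mean_E(beta):
    w = np.exp(-beta * (E - E.min()))          # canonical form p(m) ∝ exp(-beta * E(m))
    p = w / w.sum()
    return np.sum(p * E)

beta = brentq(lambda b: mean_E(b) - E_target, 1e-3, 1e3)   # enforce the constraint
p = np.exp(-beta * (E - E.min()))
p /= p.sum()

# Marginal distribution for one parameter = sum over the other parameter.
p_ratio = p.sum(axis=1)
print(f"beta = {beta:.2f}, most probable sound-speed ratio = {ratio[np.argmax(p_ratio)]:.3f}")
```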

10.
Speech reconstruction from Mel-frequency cepstral coefficients using an L1/2 sparsity constraint
周健  刘荣敏  窦云峰  路成  陶亮 《声学学报》2018,43(6):991-999
A method is proposed for reconstructing the time-domain speech signal from Mel-frequency cepstral coefficients (MFCCs) using an L1/2 sparsity constraint. Estimating the speech magnitude spectrum from MFCCs is an underdetermined problem; existing methods use either minimum mean-square-error estimation of the magnitude spectrum or L1 regularization as a sparsity constraint on it. Compared with the L1 regularization model, L1/2 regularization imposes a stronger sparsity constraint. This paper therefore introduces an L1/2 regularization constraint when estimating the magnitude spectrum from MFCCs, uses the resulting sparse magnitude spectrum to estimate the phase spectrum, and finally reconstructs the time-domain speech signal from the estimated spectrum. Experimental results show that, compared with minimum mean-square-error magnitude-spectrum estimation, the speech signal estimated by the proposed algorithm has higher quality; in reconstruction experiments under noisy conditions, the proposed algorithm produces better speech quality than L1-regularized magnitude-spectrum estimation and shows better noise robustness.
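The paper's solver for the L1/2-regularized magnitude spectrum is not reproduced here. As a generic illustration of why an Lp penalty with p = 1/2 promotes sparsity more strongly than L1, the sketch below applies iteratively reweighted least squares to a toy underdetermined linear system; the system size, regularization weight, and thresholds are arbitrary assumptions, and IRLS is only one of several possible solvers for this kind of non-convex penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy underdetermined problem y = A x with a sparse x, standing in for
# "magnitude spectrum from a low-dimensional cepstral observation".
n_obs, n_dim, n_active = 20, 100, 5
A = rng.normal(size=(n_obs, n_dim))
x_true = np.zeros(n_dim)
x_true[rng.choice(n_dim, n_active, replace=False)] = rng.uniform(1, 3, n_active)
y = A @ x_true + 0.01 * rng.normal(size=n_obs)

def lp_irls(A, y, p=0.5, lam=0.05, n_iter=100, eps=1e-6):
    """Iteratively reweighted least squares for min ||Ax - y||^2 + lam * sum |x_i|^p.
    Each step solves a weighted ridge problem; smaller p enforces stronger sparsity."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]                 # minimum-norm start
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)                      # |x_i|^p ≈ w_i * x_i^2 near x
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

x_half = lp_irls(A, y, p=0.5)
x_l1 = lp_irls(A, y, p=1.0)
print("coefficients with |x| > 0.1:  p=0.5 ->", int(np.sum(np.abs(x_half) > 0.1)),
      "  p=1 ->", int(np.sum(np.abs(x_l1) > 0.1)), "  true ->", n_active)
```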

11.
It is shown how homonuclear distances and homonuclear dipolar lattice sums between spin-1/2 nuclei can be measured by a pulsed solid-state NMR experiment under magic-angle spinning conditions. The presented technique is based on double-quantum coherence filtering. Instead of measuring a build-up of double-quantum coherence, the pulse sequence is designed to dephase double-quantum coherence. This is achieved by exciting double-quantum coherence with the help of either the through-space dipolar coupling or the through-bond dipolar coupling, while the dephasing relies on the through-space dipolar coupling as selected by a gamma-encoded pulse sequence from the C/R symmetry class. Since dephasing curves can be normalized to zero dephasing, it is possible to analyze the initial dephasing regime and hence determine dipolar lattice sums (effective dipolar couplings) in multiple-spin systems. A formula for the effective dipolar coupling is derived theoretically and validated by numerical calculations and experiments on crystalline model compounds for (13)C and (31)P spin systems. The double-quantum dephasing experiment can be combined with constant-time data sampling to compensate for relaxation effects, so that only two experimental data points are necessary for a single distance measurement. The phase cycling overhead for the constant-time experiment is minimal because a short cogwheel phase cycle exists. A 2D implementation is demonstrated on [(13)C(3)]alanine.
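The paper derives its own expression for the effective dipolar coupling; that formula is not reproduced here. A commonly used working definition, the root-sum-square of the pair couplings to the observed spin, is sketched below for a hypothetical (13)C geometry. The coordinates are invented, and only the physical constants and the pair-coupling formula b/2π = -(μ0/4π)·γ1γ2ħ/r³ are standard.

```python
import numpy as np

MU0_OVER_4PI = 1.0e-7            # mu0 / (4*pi) in SI units (T*m/A)
HBAR = 1.054571817e-34           # J*s
GAMMA_13C = 6.728284e7           # rad s^-1 T^-1

def dipolar_coupling_hz(r_m, gamma1=GAMMA_13C, gamma2=GAMMA_13C):
    """Homonuclear dipolar coupling constant b/2pi = -(mu0/4pi)*g1*g2*hbar/r^3, in Hz."""
    return -MU0_OVER_4PI * gamma1 * gamma2 * HBAR / r_m**3 / (2 * np.pi)

# Hypothetical 13C positions (in angstrom) of one labelled site and three neighbours.
coords = np.array([[0.00, 0.00, 0.00],
                   [1.53, 0.00, 0.00],
                   [2.30, 1.20, 0.00],
                   [0.50, 2.80, 1.10]]) * 1e-10   # metres

# Pairwise couplings from the observed spin (index 0) to all other spins.
r = np.linalg.norm(coords[1:] - coords[0], axis=1)
d = dipolar_coupling_hz(r)

# Root-sum-square of the pair couplings as an effective multi-spin coupling.
# This definition is an assumption for illustration, not the paper's derived lattice sum.
d_eff = np.sqrt(np.sum(d**2))
print("pair couplings (Hz):", np.round(d, 1))
print("effective coupling (Hz):", round(d_eff, 1))
```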

12.
Reconstruction of two-dimensional images by filtered back-projection (FBP) and by the maximum entropy method (MEM) was compared for spectral-spatial EPR images with differing signal-to-noise ratios. Experimental projections were recorded using direct-detected rapid scans in the presence of a series of magnetic field gradients. The slow-scan absorption lineshapes were calculated by Fourier deconvolution. A Hamming filter was used in conjunction with FBP, but not for MEM. Imperfections in real experimental data, as well as random noise, contribute to discrepancies between the reconstructed image and the experimental projections, which may make it impossible to achieve the customary MEM criterion for convergence. The Cambridge MEM algorithm, with allowance for imperfections in experimental data, produced images with more linear intensity scales and more accurate linewidths for weak signals than were obtained with another MEM method. The more effective elimination of noise in baseline regions by MEM made it possible to detect weak (13)C trityl hyperfine lines that could not be distinguished from noise in images reconstructed by FBP. Another advantage of MEM is that projections do not need to be equally spaced. FBP has the advantages that the computational time is shorter, the amplitude scale is linear, and there is less noise superimposed on peaks in the images. It is useful to reconstruct images by both methods and compare the results. Our observations indicate that FBP works well when the number of projections is large enough that the star effect is negligible. When the number of projections is smaller, the projections are unequally spaced, and/or the signal-to-noise ratio is lower, MEM is advantageous.
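For reference, a bare-bones parallel-beam FBP with a Hamming-apodized ramp filter can be written directly in NumPy; the sketch below reconstructs a one-point synthetic phantom. It is a didactic stand-in, not the EPR imaging pipeline of the paper (which works with spectral-spatial projections and Fourier-deconvolved lineshapes), and the phantom, grid size, and angle set are arbitrary.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Filtered back-projection of a parallel-beam sinogram (n_detectors x n_angles),
    using a ramp filter apodized with a Hamming window, as in the FBP branch above."""
    n_det, n_ang = sinogram.shape
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)
    hamming = 0.54 + 0.46 * np.cos(2 * np.pi * freqs)    # 1 at f = 0, 0.08 at Nyquist
    filt = ramp * hamming
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * filt[:, None], axis=0))

    centre = (n_det - 1) / 2.0
    axis = np.arange(n_det) - centre
    X, Y = np.meshgrid(axis, axis)
    image = np.zeros((n_det, n_det))
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        det_coord = X * np.cos(theta) + Y * np.sin(theta) + centre
        image += np.interp(det_coord, np.arange(n_det), filtered[:, k], left=0, right=0)
    return image * np.pi / (2 * n_ang)

# Tiny synthetic example: one off-centre point imaged from 36 equally spaced projections.
n = 65
angles = np.arange(0, 180, 5)
phantom = np.zeros((n, n))
phantom[40, 25] = 1.0

# Forward projection by crude nearest-bin line sums (adequate for this illustration).
sino = np.zeros((n, len(angles)))
xs = np.arange(n) - (n - 1) / 2
X, Y = np.meshgrid(xs, xs)
for k, th in enumerate(np.deg2rad(angles)):
    bins = np.rint(X * np.cos(th) + Y * np.sin(th) + (n - 1) / 2).astype(int)
    np.add.at(sino[:, k], bins.clip(0, n - 1).ravel(), phantom.ravel())

recon = fbp_reconstruct(sino, angles)
print("reconstruction peak at pixel:", np.unravel_index(np.argmax(recon), recon.shape))
```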

13.
Two commonly used particle-size inversion methods, regularization and the Chahine algorithm, were applied to simulated dynamic light scattering data for 90 nm and 250 nm unimodal distributions, a 50 nm/200 nm bimodal distribution, and a 100 nm/300 nm bimodal distribution, as well as to measured dynamic light scattering data for 105 nm and 300 nm standard particles. The results show that the noise level is one of the key factors affecting the accuracy of the particle-size inversion: accuracy decreases as the noise level increases, and beyond a certain noise threshold no meaningful inversion result can be obtained. Different inversion methods have different noise tolerance: at low noise levels the results show no significant difference, but as the noise level increases the results diverge greatly. The regularization method can effectively suppress the influence of noise through the choice of the regularization parameter and is more robust to noise than the Chahine algorithm. Moreover, unlike the Chahine algorithm, the regularization method does not require an assumed initial distribution; therefore, regularization is preferable when measuring particle-size distributions in noisy experimental or production environments.
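The sketch below contrasts the two inversion styles on a toy dynamic light scattering model: a multiplicative Chahine/Twomey-type iteration (non-negative by construction but needing an initial guess) against zeroth-order Tikhonov regularization. The kernel, the decay-rate prefactor, the noise level, and the regularization parameter are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

# Toy DLS model: field autocorrelation g1(tau) = sum_j f_j * exp(-Gamma_j * tau), with
# Gamma roughly proportional to 1/d (Stokes-Einstein). The prefactor below is only a
# plausible order of magnitude for visible light at a 90-degree scattering angle.
d = np.linspace(30e-9, 500e-9, 80)                     # particle diameters (m)
tau = np.logspace(-6, -2, 60)                          # lag times (s)
gamma = 1.5e-4 / d                                     # decay rates (1/s), illustrative
A = np.exp(-np.outer(tau, gamma))                      # kernel matrix (n_tau x n_d)

# Bimodal "true" size distribution (100 nm + 300 nm) and noisy synthetic data.
f_true = np.exp(-0.5 * ((d - 100e-9) / 10e-9) ** 2) + 0.6 * np.exp(-0.5 * ((d - 300e-9) / 25e-9) ** 2)
g = A @ f_true * (1 + 0.005 * np.random.randn(tau.size))

def chahine(A, g, n_iter=500):
    """Multiplicative (Chahine/Twomey-style) iteration: non-negative by construction,
    but it starts from an assumed flat distribution and degrades faster as noise grows."""
    f = np.ones(A.shape[1])
    colsum = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = g / (A @ f)
        f *= (A.T @ ratio) / colsum
    return f

def tikhonov(A, g, alpha=0.1):
    """Zeroth-order Tikhonov solution; alpha suppresses noise amplification."""
    f = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ g)
    return np.clip(f, 0, None)

for name, f in (("Chahine", chahine(A, g)), ("Tikhonov", tikhonov(A, g))):
    print(f"{name}: dominant diameter ~ {d[np.argmax(f)] * 1e9:.0f} nm")
```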

14.
A Tikhonov deconvolution algorithm for measuring space-charge distributions in dielectrics
A numerical method is studied for recovering the space-charge distribution in flat dielectric samples measured by the pressure-wave propagation method; a deconvolution algorithm based on Tikhonov regularization is used to obtain the true space-charge distribution. Supporting techniques are applied within the deconvolution, such as wavelet-packet filtering of high-frequency noise and Tikhonov regularization of the integral equation. Numerical experiments on the influence of noise show that under noise-free or low-noise conditions the deconvolution algorithm computes the space-charge distribution in the dielectric very accurately; when processing noisy data the result is clearly affected but still retains fairly high accuracy. The regularization parameter α has a pronounced smoothing effect on the numerical solution for the space-charge distribution but little effect on its integral value. Processing of actual measurement data shows that the deconvolution algorithm successfully computes the space-charge and electric-field distributions in solid dielectrics.

15.
The application of a maximum entropy method (MEM) for analysis of time-resolved fluorescence data is discussed. A developed version of MEM has been tested using simulated kinetic data. Based on computed results, practical criteria have been established to determine whether the lifetime distribution of emitting centers is described by a discrete spectrum (a set of two or three exponentials) or by a continuous one (mono- or bimodal distribution of exponentials). The proposed method has been used to analyze the fluorescence decay kinetics of thioflavin T (ThT) intercalated into amyloid fibrils. The presence of two peaks in the lifetime distribution of emitting centers has been explained by the existence in fibrils of two types of binding centers substantially differing in microenvironment rigidity. This suggestion is supported by the results of fluorescence quenching of intercalated ThT with the quencher KI.

16.
A new algorithm for the maximum entropy method (MEM) is proposed for recovering the lifetime distribution in time-resolved fluorescence decays. The procedure is based on seeking the distribution that maximizes the Skilling entropy function subject to the chi-squared constraint χ² ≈ 1, through iterative linear approximations, LU decomposition of the Hessian matrix of the Lagrangian problem, and a golden section search for backtracking. The accuracy of this algorithm has been investigated through comparisons with simulated fluorescence decays of both narrow and broad lifetime distributions. The proposed approach can analyse datasets of up to 4,096 points with a discretization ranging from 100 to 1,000 lifetimes. Good agreement with nonlinear fitting estimates has been observed when the method is applied to multi-exponential decays. Remarkable results have also been obtained for broad lifetime distributions, where the position is recovered with high accuracy and the distribution width is estimated to within 3%. These results indicate that the proposed procedure generates MEM lifetime distributions that can be used to quantify the real heterogeneity of lifetimes in a sample.

17.
饶光辉 《物理》1996,25(10):595-601
Direct determination of crystal structures from powder diffraction data is one of the hot topics in materials science and crystallography. This article introduces the maximum entropy method for structure determination from powder diffraction. The maximum entropy method is based on the maximum entropy principle of information theory and on the maximum likelihood principle. Owing to its distinctive advantages, it is one of the most promising approaches to structure analysis from powder diffraction data.

18.
The distance and divergence of probability measures play a central role in statistics, machine learning, and many related fields. The Wasserstein distance has received much attention in recent years because of its distinctions from other distances and divergences. Although computing the Wasserstein distance is costly, entropy-regularized optimal transport was proposed as a computationally efficient approximation. The purpose of this study is to understand the theoretical aspects of entropy-regularized optimal transport. In this paper, we focus on entropy-regularized optimal transport on multivariate normal distributions and q-normal distributions. We obtain the explicit form of the entropy-regularized optimal transport cost on multivariate normal and q-normal distributions; this provides a perspective for understanding the effect of entropy regularization, which was previously known only experimentally. Furthermore, we obtain the entropy-regularized Kantorovich estimator for probability measures that satisfy certain conditions. We also demonstrate in experiments how the Wasserstein distance, optimal coupling, geometric structure, and statistical efficiency are affected by entropy regularization. In particular, our results on the explicit form of the optimal coupling of the Tsallis entropy-regularized optimal transport on multivariate q-normal distributions and on the entropy-regularized Kantorovich estimator are novel and constitute a first step towards understanding more general settings.
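Numerically, the entropy-regularized transport cost that the paper analyses in closed form is what Sinkhorn iterations compute for discretized measures. The sketch below runs Sinkhorn on two discretized one-dimensional normals; the grid, the marginals, and the regularization strengths are illustrative, whereas the paper's setting is multivariate and analytic.

```python
import numpy as np

def sinkhorn(mu, nu, C, epsilon=0.05, n_iter=2000):
    """Entropy-regularized optimal transport via Sinkhorn iterations: minimizes
    <P, C> + epsilon * KL(P || mu x nu) over couplings P with marginals mu and nu."""
    K = np.exp(-C / epsilon)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)

# Two discretized 1-D normal distributions on a common grid, standing in for the
# multivariate normal / q-normal cases analysed in the paper.
x = np.linspace(-4, 6, 200)
mu = np.exp(-0.5 * x**2)
mu /= mu.sum()
nu = np.exp(-0.5 * ((x - 2) / 1.5) ** 2)
nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2                   # squared-distance cost

for eps in (0.5, 0.05, 0.01):
    P, cost = sinkhorn(mu, nu, C, epsilon=eps)
    print(f"epsilon = {eps}: regularized transport cost = {cost:.3f}")
# For N(0,1) and N(2,1.5^2) the exact squared W2 distance is (2-0)^2 + (1.5-1)^2 = 4.25;
# the Sinkhorn cost should move toward roughly that value as epsilon shrinks.
```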

19.
We consider the possibility of solving the inverse scattering problem in the linear approximation (in the form of a convolution equation) by reducing it to a system of linear algebraic equations and minimizing the residual. Since the problem is an ill-posed one, the Tikhonov regularization proves useful. The possibility of using the entropy of the image estimate as a stabilizing functional is considered, which is the key idea of the maximum entropy method. The single-frequency and multifrequency versions of the method are realized. The advantage of the maximum entropy method over the conventional linear methods of solving the inverse scattering problem is shown. The superresolution and sidelobe suppression abilities of the maximum entropy method are demonstrated. The method is shown to be stable to measurement noise and multiplicative interference in the form of aperture decimation. Examples of the image reconstruction by the maximum entropy method from model and experimental data are presented.

20.
A maximum-entropy method of data analysis previously applied in astrophysics is adapted to the problem of obtaining the shape of a central potential from noisy scattering-amplitude data. The method is applied to a simplified nucleon-nucleon scattering problem in the case where the interaction is purely repulsive. The entropy of the potential distribution is maximized subject to the constraint that the scattering-amplitude data, which are related to the potential distribution through a Born integral plus Gaussian noise, have a χ² per degree of freedom of 1. Good results, limited only by the quality of the data and the maximum value of the momentum transfer used, are obtained for synthetic data generated from exponential and Gaussian potentials.
