Similar Literature (20 results)
1.
We present methods that introduce concepts from Rosenbluth sampling [M. N. Rosenbluth and A. W. Rosenbluth, J. Chem. Phys. 23, 356 (1955)] into the Jarzynski nonequilibrium work (NEW) free-energy calculation technique [C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997)]. The proposed hybrid modifies the way steps are taken in the NEW process: each step is selected from a range of alternatives, with bias given to the steps that contribute the least work, and the definition of the work average is modified to account for this bias. We introduce two variants of the method, lambda-bias sampling and configuration-bias sampling; a combined lambda- and configuration-bias method is also considered. By reducing the likelihood that large nonequilibrated work values enter the ensemble average, Rosenbluth sampling helps remedy the inaccuracy of the calculation. We demonstrate the performance of the proposed methods on a model system of N independent harmonic oscillators. This model captures the difficulties involved in calculating free energies in real systems while retaining many tractable features that are helpful to the study. We examine four variants of this model that differ qualitatively in the nature of their phase-space overlap. Results indicate that the lambda-bias sampling method is most useful for systems with entropic sampling barriers, while the configuration-bias methods are best for systems with energetic sampling barriers. The Rosenbluth-sampling schemes yield much more accurate results than the unbiased nonequilibrium work method: typically the accuracy improves by about an order of magnitude for a given amount of sampling, which translates into two or more orders of magnitude less sampling to reach a given level of accuracy, owing to the generally slow convergence of the NEW calculation when the inaccuracy is large.
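A minimal sketch of the baseline unbiased NEW estimator this abstract builds on, for a 1D harmonic oscillator whose spring constant is switched from k0 to k1 (exact result dF = ln(k1/k0)/(2*beta)). This is our illustration, not the authors' code: all parameters are arbitrary, and each lambda step is fully re-equilibrated, which real simulations only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, k0, k1, n_steps, n_traj = 1.0, 1.0, 4.0, 20, 5000
dF_exact = 0.5 * np.log(k1 / k0) / beta

works = np.empty(n_traj)
for t in range(n_traj):
    w = 0.0
    for i in range(n_steps):
        ka = k0 + (k1 - k0) * i / n_steps
        kb = k0 + (k1 - k0) * (i + 1) / n_steps
        x = rng.normal(0.0, np.sqrt(1.0 / (beta * ka)))  # sample at current lambda
        w += 0.5 * (kb - ka) * x**2                      # work of the lambda increment
    works[t] = w

# Jarzynski's equality: dF = -(1/beta) * ln <exp(-beta * W)>
dF_new = -np.log(np.mean(np.exp(-beta * works))) / beta
print(f"exact {dF_exact:.4f}, NEW estimate {dF_new:.4f}")
```

The Rosenbluth-biased variants described above would replace the single sampled step with a choice among several trial steps, weighting the exponential average to undo the bias.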

2.
The evaluation of the free energy is essential in molecular simulation because it is intimately related to the existence of multiphase equilibrium. Recently, it was demonstrated that the Helmholtz free energy can be evaluated using a single statistical ensemble along an entire isotherm by accounting for the “chemical work” of transforming each molecule from an interacting one to an ideal gas. In this work, we show that it is possible to perform such a free energy perturbation over a liquid-vapor phase transition. Furthermore, we investigate the link between a general free energy perturbation scheme and the nonequilibrium theorems of Crooks and Jarzynski. We find that for finite systems away from the thermodynamic limit the second law of thermodynamics is always an inequality for isothermal free energy perturbations, always resulting in a dissipated work that may tend to zero only in the thermodynamic limit. The work, heat, and entropy produced during a thermodynamic free energy perturbation can be viewed in the context of the Crooks and Jarzynski formalism, revealing that for a given value of the ensemble average of the “irreversible” work, the minimum entropy production corresponds to a Gaussian distribution for the histogram of the work. We propose evaluating the free energy difference in any free-energy-perturbation-based scheme as the average irreversible “chemical work” minus the dissipated work, the latter calculated from the variance of the distribution of the logarithm of the work histogram within the Gaussian approximation. As a consequence, using the Gaussian ansatz for the distribution of the “chemical work,” accurate estimates of the chemical potential and the free energy of the system can be obtained from much shorter simulations, avoiding the need to sample the computationally costly tails of the “chemical work.” For a more general free energy perturbation scheme in which the Gaussian ansatz may not be valid, the free energy calculation can be expressed in terms of the moment generating function of the “chemical work” distribution.
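A short sketch of the Gaussian-ansatz estimate described above, contrasted with the direct exponential average. Under the ansatz the dissipated work equals beta*Var(W)/2, so dF = <W> - beta*Var(W)/2; the synthetic work values and parameters are illustrative only.

```python
import numpy as np

def free_energy_gaussian(works, beta=1.0):
    """Gaussian ansatz: dF = <W> - beta*Var(W)/2, so the costly
    low-W tail of the work histogram need not be sampled."""
    w = np.asarray(works, dtype=float)
    return w.mean() - 0.5 * beta * w.var(ddof=1)

def free_energy_exponential(works, beta=1.0):
    """Direct exponential average, dominated by rare low-W values."""
    w = np.asarray(works, dtype=float)
    return -np.log(np.mean(np.exp(-beta * w))) / beta

# Synthetic Gaussian work data with dF = 2.0 and dissipated work 3 kT,
# i.e. variance 2*W_diss/beta (the Gaussian consistency relation).
rng = np.random.default_rng(1)
beta, dF, wdiss = 1.0, 2.0, 3.0
works = rng.normal(dF + wdiss, np.sqrt(2 * wdiss / beta), size=500)
print(free_energy_gaussian(works, beta), free_energy_exponential(works, beta))
```

With only 500 samples the Gaussian estimate sits near the true value of 2.0, while the exponential average is visibly biased, which is the practical advantage the abstract argues for.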

3.
A plate-model expression for normally tailing chromatographic peaks
韩振为, 何志敏, 余国琮. 《色谱》 (Chinese Journal of Chromatography), 1997, 15(6): 532-533
A plate-model expression describing normally tailing chromatographic peaks is derived. According to this expression, the normal chromatographic elution curve should be an asymmetric tailing peak, while the symmetric Gaussian distribution function is the result of an approximate treatment of the plate model. Compared with the elution-curve equation of the diffusion model, the two are identical in form; thus, although the plate model and the diffusion model differ in mechanism, their mathematical descriptions of the chromatographic elution curve are exactly the same.
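A small illustration of the point made above: the classical plate model yields a gamma-type (Poisson-form) elution curve, which tails, and the symmetric Gaussian only emerges as the large-plate-number approximation. Requires SciPy; the plate number and per-plate volume are made-up parameters, not values from the paper.

```python
import numpy as np
from scipy.stats import gamma, norm

N, v_plate = 100, 0.1          # plate number and per-plate volume (illustrative)
v = np.linspace(0.01, 25, 500)  # elution volume axis

# Plate model: gamma-distributed elution curve (asymmetric, tails)
plate = gamma.pdf(v, a=N, scale=v_plate)

# Gaussian approximation with the same mean and variance
gauss = norm.pdf(v, loc=N * v_plate, scale=np.sqrt(N) * v_plate)

# The gamma skewness 2/sqrt(N) vanishes only as N -> infinity
print(f"residual skewness at N={N}: {2.0 / np.sqrt(N):.3f}")
print(f"max |plate - gauss|: {np.abs(plate - gauss).max():.4f}")
```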

4.
Single spark evaluation (SSE) requires time-resolved multispectral detection of the individual emission intensities from the spark plasma. The data, typically 300 data points per second per spectral channel, must be classified and statistically evaluated. Experiments with metal samples have shown that the single-spark pulse-height distributions can be satisfactorily approximated by a Gaussian function if the respective element is homogeneously distributed within the sample. If there are inhomogeneities of the analyte concentration in the sample, such as inclusions, then significant deviations from the Gaussian distribution are observed. The integrals over the Gaussian and non-Gaussian parts can be used to quantify the respective components of the analyte. This procedure was successfully applied to calibrate the contents of soluble and insoluble Al in a set of steel samples. SSE also allows the calculation of correlations between the pulse-height distributions of different emission lines. One finds a range from anticorrelation to correlation for various line combinations. This information can help in selecting reference lines to improve the precision.
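A hedged sketch of the Gaussian/non-Gaussian split described above: fit a Gaussian to the bulk of the pulse-height histogram (here via robust median/MAD location and scale, which the paper may not use) and integrate the positive excess as the inclusion-like fraction. All numbers are synthetic.

```python
import numpy as np

def split_gaussian_excess(pulse_heights, bins=100):
    """Split a pulse-height distribution into a Gaussian part
    (homogeneous analyte) and the non-Gaussian excess (inclusions).
    Illustrative fit only; the paper's procedure may differ."""
    x = np.asarray(pulse_heights, float)
    mu = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - mu))  # MAD -> Gaussian sigma
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    width = edges[1] - edges[0]
    excess = np.clip(hist - gauss, 0, None).sum() * width
    return 1.0 - excess, excess   # soluble-like vs inclusion-like fractions

rng = np.random.default_rng(2)
homog = rng.normal(100, 10, 9500)   # sparks on homogeneously dissolved analyte
incl = rng.normal(180, 15, 500)     # sparks hitting inclusions
print(split_gaussian_excess(np.concatenate([homog, incl])))
```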

5.
After characterization of the poly(dithiocarbamate) resin, theoretical and experimental sequestration results were compared. For cobalt, it is shown that the calculated conditional distribution ratios can be successfully applied to establish the optimum phase ratio. The use of the recovery as an optimization criterion in sample-enrichment procedures involving ion-exchange chelating resins is evaluated.

6.
The possibility of estimating equilibrium free-energy profiles from multiple non-equilibrium simulations using the fluctuation-dissipation theorem or the relation proposed by Jarzynski has attracted much attention. Although the Jarzynski estimator has poor convergence properties for simulations far from equilibrium, corrections have been derived for cases in which the work is Gaussian distributed. Here, we examine the utility of the corrections proposed by Gore and collaborators using a simple dissipative system as a test case. The system consists of a single methane-like particle in explicit water. The Jarzynski equality is used to estimate the change in free energy associated with pulling the methane particle a distance of 3.9 nm at rates ranging from ~0.1 to 100 m s⁻¹. It is shown that although the corrections proposed by Gore and collaborators have excellent numerical performance, the profiles still converge slowly. Even when the corrections are applied in an ideal case where the work distribution is necessarily Gaussian, performing simulations under quasi-equilibrium conditions is still most efficient. Furthermore, it is shown that even for a single methane molecule in water, pulling rates as low as 1 m s⁻¹ can be problematic. The implications of this finding for studies in which small molecules or even large biomolecules are pulled through inhomogeneous environments at similar pulling rates are discussed.
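A toy illustration of why fast pulling is problematic, under the assumption (ours, not the paper's) that the dissipated work grows linearly with the pulling rate and that the work stays Gaussian. With a fixed budget of trajectories, the raw Jarzynski estimate degrades quickly while the Gaussian (cumulant) correction holds up longer.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, dF, n = 1.0, 10.0, 200   # illustrative values, not the paper's

for rate in (0.1, 1.0, 10.0, 100.0):
    wdiss = 0.5 * rate                      # assumed: dissipation ~ pulling rate
    works = rng.normal(dF + wdiss, np.sqrt(2 * wdiss / beta), size=n)
    raw = -np.log(np.mean(np.exp(-beta * works))) / beta
    corrected = works.mean() - 0.5 * beta * works.var(ddof=1)  # Gaussian correction
    print(f"rate {rate:6.1f}: raw {raw:6.2f}, corrected {corrected:6.2f} (exact {dF})")
```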

7.
This work introduces a new local aromaticity measure, defined as the mean of Bader's electron delocalization index (DI) between para-related carbon atoms in six-membered rings. This new electronic criterion of aromaticity is based on the fact that aromaticity is related to the cyclic delocalized distribution of π-electrons. We have found that this DI and the harmonic oscillator model of aromaticity (HOMA) index are strongly correlated for a series of six-membered rings in eleven planar polycyclic aromatic hydrocarbons. The correlation between the DI and the nucleus-independent chemical shift (NICS) values is less remarkable, although in general six-membered rings with larger DI values also have more negative NICS indices. We have shown that this index can also be applied, with some modifications, to the study of aromaticity in five-membered rings.
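The measure itself reduces to a simple average, sketched below: for a six-membered ring it is the mean of the three DIs between para-related atoms (1-4, 2-5, 3-6). The DI values here are placeholder numbers, not computed AIM data.

```python
def para_di_mean(di, ring):
    """Mean DI of para-related atoms in a six-membered ring.
    di: dict mapping frozenset({i, j}) -> delocalization index;
    ring: the six atom indices in bonding order."""
    pairs = [(ring[k], ring[k + 3]) for k in range(3)]   # para pairs
    return sum(di[frozenset(p)] for p in pairs) / 3.0

# Benzene-like placeholder input: all three para DIs equal
di_values = {frozenset(p): 0.101 for p in [(0, 3), (1, 4), (2, 5)]}
print(para_di_mean(di_values, [0, 1, 2, 3, 4, 5]))   # -> 0.101
```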

8.
A promising method for calculating free energy differences ΔF is to generate nonequilibrium data via "fast-growth" simulations or experiments, and then use Jarzynski's equality. However, a difficulty with using Jarzynski's equality is that ΔF estimates converge very slowly and unreliably due to the nonlinear nature of the calculation, thus requiring large, costly data sets. The purpose of the work presented here is to determine the best estimate for ΔF given a (finite) set of work values previously generated by simulation or experiment. Exploiting statistical properties of Jarzynski's equality, we present two fully automated analyses of nonequilibrium data from a toy model and various simulated molecular systems. Both schemes remove at least several kBT of bias from ΔF estimates, compared to direct application of Jarzynski's equality, for modest-sized data sets (100 work values) in all tested systems. Results from one of the new methods suggest that good estimates of ΔF can be obtained using 5-40-fold less data than was previously possible. Extending previous work, the new results exploit the systematic behavior of the bias due to finite sample size. A key innovation is better use of the more statistically reliable information available from the raw data.
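A sketch of the general idea of finite-sample bias extrapolation that the abstract exploits, not the authors' exact scheme: compute the Jarzynski estimate on blocks of increasing size n, then extrapolate the n-dependent bias linearly in 1/n to the infinite-data limit. Block sizes and test data are illustrative.

```python
import numpy as np

def block_extrapolated_dF(works, beta=1.0, sizes=(10, 20, 50, 100)):
    """Jarzynski estimate extrapolated to infinite sample size by a
    linear fit of the block-averaged estimate against 1/n."""
    w = np.asarray(works, float)
    means = []
    for n in sizes:
        blocks = w[: (len(w) // n) * n].reshape(-1, n)
        est = -np.log(np.mean(np.exp(-beta * blocks), axis=1)) / beta
        means.append(est.mean())             # average block estimate at size n
    slope, intercept = np.polyfit(1.0 / np.array(sizes), means, 1)
    return intercept                          # extrapolation to 1/n -> 0

rng = np.random.default_rng(4)
works = rng.normal(5.0, 2.0, 2000)   # Gaussian works: exact dF = 5 - 2 = 3
print(block_extrapolated_dF(works))
```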

9.
Discretization of a size-exclusion chromatography (SEC) chromatogram is shown here to be an important calculation for characterizing the distribution of a polydisperse polymer, especially when the polydispersity is large. Commercial poly-glucose maltodextrins are known to have such a polydispersity. A mathematical discretization method with Gaussian peaks centered on each individual degree of polymerization is proposed and is performed on the entire SEC chromatogram for three different grades of corn maltodextrins. Because SEC and high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) are based on different separation mechanisms, they can be considered orthogonal techniques, and HPAEC-PAD was therefore used to validate the SEC discretization procedure. Because this validation proved satisfactory for all commercially available oligomers, the discretization was extended to all of their SEC chromatograms. Comparing the number-average and weight-average molar masses before and after the mathematical discretization verifies that the treatment does not distort the chromatogram. This approach tentatively leads to a more exhaustive characterization of a broadly polydisperse sample, such as maltodextrins, than was previously available, as it (i) removes the apparent, chemically irrelevant, continuous molar mass distribution obtained by raw SEC and (ii) addresses the current detection and quantitation limits of the HPAEC-PAD technique without any sample treatment.
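The consistency check mentioned above, comparing averages before and after discretization, rests on the standard definitions of the number- and weight-average molar masses from per-DP weight fractions. A minimal sketch with a made-up decaying weight distribution; the glucose masses (180.16 g/mol, 162.14 g/mol anhydroglucose repeat) are standard values.

```python
import numpy as np

def mn_mw(weight_fractions, molar_masses):
    """Number- and weight-average molar masses from per-DP weight fractions."""
    w = np.asarray(weight_fractions, float)
    M = np.asarray(molar_masses, float)
    w = w / w.sum()
    Mn = 1.0 / np.sum(w / M)   # number average
    Mw = np.sum(w * M)         # weight average
    return Mn, Mw

dp = np.arange(1, 41)                     # degrees of polymerization
masses = 180.16 + (dp - 1) * 162.14       # glucose oligomer masses
fracs = np.exp(-dp / 10.0)                # illustrative weight distribution
Mn, Mw = mn_mw(fracs, masses)
print(Mn, Mw, Mw / Mn)                    # last value is the polydispersity index
```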

10.
This work presents a Combined Standard Uncertainty (CSU) calculation procedure that can be applied to spectrophotometric measurements. For the assessment of the computations, different approaches are discussed, such as the contributions of reproducibility, repeatability, total bias, the calibration curve, and the type of measurand to the CSU. Results of inter-laboratory measurements confirmed the assumptions. To minimize error propagation, the laboratory applied a controlled experimental procedure called "errors propagation break-up" (ERBs). The uncertainty of the sample concentration derived from a reference curve dominates the Combined Standard Uncertainty. The contribution of the method and the laboratory bias (total bias) to the CSU is insignificant under controlled measurement conditions. This work develops a simple methodology that can be used to evaluate uncertainty and error control for routine methods used by both academic researchers and the industrial sector.
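A generic GUM-style sketch of how the uncorrelated contributions discussed above combine into the CSU by root-sum-of-squares; the budget terms, their values, and the units are illustrative assumptions, not the paper's data.

```python
import math

def combined_standard_uncertainty(u_repeat, u_reprod, u_bias, u_cal):
    """Root-sum-of-squares combination of uncorrelated standard
    uncertainty contributions (repeatability, reproducibility,
    total bias, calibration curve)."""
    return math.sqrt(u_repeat**2 + u_reprod**2 + u_bias**2 + u_cal**2)

# Illustrative values (mg/L): the calibration-curve term dominates,
# mirroring the abstract's finding for the reference-curve uncertainty.
u_c = combined_standard_uncertainty(0.02, 0.03, 0.01, 0.12)
print(f"u_c = {u_c:.3f} mg/L, expanded U (k=2) = {2 * u_c:.3f} mg/L")
```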

11.
When reporting activity measurement results near the natural limit for activities (zero), care must be taken not to report observed values from the unfeasible region and not to report confidence intervals extending beyond the limit of the feasible range. This can be achieved by reporting best estimates, which by construction remain in the feasible range and have confidence intervals that do not extend beyond its limit. Using the truncated and renormalized normal distribution, defined by the primary measurement result, as the posterior in the calculation of the best estimates fulfills both requirements, but it introduces a bias into the reported results: the observed value may not lie within the interval centered at the best-estimate value with a width equal to twice its uncertainty, and consequently the use of such best estimates may lead to unacceptable results. Therefore, a posterior comprising the Dirac delta function distribution δ(a) and the truncated normal distribution is used for the calculation of the best estimate. It is shown that the bias introduced here is smaller, that the observed value lies within the interval centered on the best-estimate value with a width equal to twice its uncertainty, and that these best estimates perform better when used in calculations of the mean.
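A hedged sketch of the delta-plus-truncated-normal posterior: here we assume the delta component at zero carries the probability mass of the unfeasible (negative) region, so the posterior mean reduces to the mean of the normal censored at zero. This weighting is our illustrative assumption, not necessarily the paper's construction.

```python
import math

def best_estimate_with_delta(x_obs, u_obs):
    """Posterior mean under a delta-at-zero plus zero-truncated-normal
    posterior, with the delta weight assumed equal to the mass of the
    negative region: E[max(a, 0)] for a ~ N(x_obs, u_obs)."""
    z = x_obs / u_obs
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return x_obs * Phi + u_obs * phi

# A slightly negative raw result near the zero limit: the estimate is
# positive but pulled up less than the purely truncated-normal mean,
# consistent with the smaller bias claimed above.
print(best_estimate_with_delta(-0.5, 1.0))
```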

12.
This article presents cubature grids of the Gaussian type that are adapted for the purpose of wave function calculation on atoms and molecules. The problems of the singularity at the nucleus, the derivatives in the kinetic energy, and the presence of two-electron integrals are shown to be resolved. Each grid has a definite degree of accuracy, so that it reproduces the exact values of all the integrals in a defined class. Seventh-degree accuracy can be obtained from a grid of 143 nodes. The grids are applied, as simple illustrations, to well-known self-consistent field (SCF) calculations on helium. Grids for homonuclear diatomics are also discussed, and an illustrative application to a homonuclear diatomic molecule is given. A comparison between a molecular grid and the union of two unmodified atomic grids shows that overlaps and distortions in weights can occur to the extent that the latter is not practical.

13.
Neutron scattering data for melt-crystallized polyethylene have been analyzed in order to clarify to what extent the chain folding is randomly reentrant. No attempt has been made to specify the molecular conformation in every detail; the emphasis is on distinguishing between different classes of conformation. The most random folding corresponds to a model where the folding is imposed solely by the criterion of the chain segments moving the least possible distance during the crystallization process (a "freezing-in" model). This has been shown not to be compatible with published data. For this model, analytic calculations are possible based on the projection of a three-dimensional Gaussian distribution onto a plane. A subunit model is then proposed which requires substantial local rearrangement of the chain as it folds during crystallization, but where the distribution of the subunits within the whole molecule is imposed by the preexisting Gaussian chain of the melt. Arguments based on space-filling considerations are invoked, with the postulate of a surface structure which is neither crystalline nor truly amorphous. Anything approaching a random switchboard model (e.g., the freezing-in model considered here) is contrary both to space-filling considerations and to the comparison of observed and calculated neutron scattering. The analytic calculation performed for the freezing-in model was employed to simplify calculations for the subunit model. For scattering intensities over a wide range of scattering angle, it is deduced that only the structure within the subunit need be considered. Numerical computer calculations involving only a small number of stems were then carried out for a number of different subunit structures, and some general features are noted which restrict the type of model that can explain the data. As in previously published analyses, a very high proportion of adjacent folds is not compatible with the results. A row model for the stems within a molecule can achieve good agreement, either with straight rows or with a certain amount of "stagger" incorporated. Up to about 40% of the folds could be adjacent. Models based on two-dimensional random walks did not give good agreement.

14.
In this work, the Gaussian Network Model (GNM) and Anisotropic Network Model (ANM) approaches are applied to describe the dynamics of protein structure graphs built from calculated promolecular electron density (ED) distribution functions. A first set of analyses is carried out on results obtained from ED maxima calculated at various smoothing levels. A second set is carried out for ED networks whose edges are weighted by ED overlap integral values. Results are compared with those obtained through the classical GNM and ANM approaches applied to networks of Cα atoms. It is shown how the network model and the consideration of crystal packing, as well as of the side chains, may lead to various improvements depending on the structure under study. The selected protein structures are crambin and pancreatic trypsin inhibitor, chosen for their small size and the numerous dynamical data obtained for them by other authors.
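A minimal sketch of the classical GNM that serves as the reference above: build the Kirchhoff (connectivity) matrix from node coordinates with a distance cutoff and read mean-square fluctuations (up to constant factors) from the diagonal of its pseudo-inverse. The cutoff and coordinates are generic placeholders, not values from the paper.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0, kT=1.0, gamma=1.0):
    """Per-node mean-square fluctuations from the classical GNM,
    applicable to C-alpha atoms or ED maxima alike."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # node degrees
    cov = np.linalg.pinv(gamma * kirchhoff)   # pseudo-inverse drops the zero mode
    return kT * np.diag(cov)                  # fluctuations up to constants

rng = np.random.default_rng(5)
fake_calphas = rng.uniform(0, 20, size=(30, 3))   # stand-in coordinates
print(gnm_fluctuations(fake_calphas)[:5])
```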

15.
A model for evaluating the instantaneous degree-of-polymerization distribution and the chain composition distribution of copolymers produced in emulsion is developed. The approach adopted is based on the mathematics of Markov processes and represents an extension of the one developed for homopolymers in Part I. As in the homopolymer case, the main aspect of the theoretical treatment is the definition of the proper one-step transition probability matrix through the so-called subprocess-main process procedure. The model accounts for monomolecular and bimolecular termination (both by combination and disproportionation) and, in principle, it can be applied to any number of reacting monomer species as well as to any number of active chains per particle. However, only the 0–1–2 and 0–1–2–3 emulsion copolymerization systems are discussed in detail. In the case of the chain composition distribution, the model allows the calculation of its moments only, through the method of the generating function associated with the probability density function. The expressions obtained for the instantaneous probability density functions, as well as for the corresponding cumulative distributions, are all in explicit form and involve only algebraic operations among matrices. Efficient numerical procedures for their application are reported in the Appendix. Illustrative calculations are reported for a 0–1–2–3 copolymerization system simulating the copolymer styrene–methyl methacrylate. The effect of the various termination mechanisms on the distribution of degrees of polymerization and on the first two moments of the chain composition distribution is discussed in detail. Finally, the three-dimensional overall distribution function of both chain length and composition is shown under the assumption of a Gaussian-type chain composition distribution.
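The simplest Markov-chain special case of the machinery described above, as a sketch: a growing chain propagates with probability p or terminates with probability 1 - p, giving the geometric (Flory) chain-length distribution; the paper's matrix formulation generalizes this to multiple monomers, active-chain states, and termination modes. The value of p is illustrative.

```python
import numpy as np

p = 0.99                                   # propagation probability (illustrative)
n = np.arange(1, 2001)
number_dist = (1 - p) * p ** (n - 1)       # number chain-length distribution
weight_dist = n * number_dist / (n * number_dist).sum()

DPn = (n * number_dist).sum()              # -> 1 / (1 - p) = 100
DPw = (n * weight_dist).sum()              # -> (1 + p) / (1 - p) ~ 199
print(DPn, DPw, DPw / DPn)                 # polydispersity -> 2 for long chains
```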

16.
The requirement that the true value of an activity cannot be negative is used to transform raw observed values, which can be positive or negative, into expected activity values. The probability distribution of the activity values is a truncated Gaussian distribution, and the expected value and the variance of the activity values are derived from the observed value and its standard deviation. It is shown that the standard deviation of the activity values is smaller than the standard deviation of the observed value, and that the ratio of the standard deviation of the activity values to the expected value is less than unity. Since the expected activity value is larger than the original observed value, and the standard deviation of the activity values is smaller than the standard deviation of the observed value, the additional information that the activity cannot be negative leads to an improvement in the result. However, since the expected activity value depends on the standard deviation of the observed value, a conservatively assessed standard deviation leads to a bias in the expected activity values.
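A sketch of the zero-truncated-normal moments described above, using the standard formulas E[a] = x + u·φ(z)/Φ(z) and Var[a] = u²(1 - z·λ - λ²) with z = x/u and λ = φ(z)/Φ(z); the loop then checks the abstract's claims numerically for a few illustrative observed values.

```python
import math

def truncated_moments(x_obs, u_obs):
    """Expected value and standard deviation of N(x_obs, u_obs)
    truncated to non-negative activities."""
    z = x_obs / u_obs
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    lam = phi / Phi
    mean = x_obs + u_obs * lam
    sd = u_obs * math.sqrt(1.0 - z * lam - lam * lam)
    return mean, sd

# E > x, sd < u_obs, and sd/E < 1 hold across the range:
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    m, s = truncated_moments(x, 1.0)
    print(f"x={x:+.1f}: E={m:.3f}, sd={s:.3f}, sd/E={s / m:.3f}")
```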

17.
We consider ways to quantify the overlap of the parts of phase space important to two systems, labeled A and B. Of interest is how much of the A-important phase space lies in that important to B, and how much of B lies in A. Two measures are proposed. The first considers four total-energy distributions, formed from all combinations made by tabulating either the A-system or the B-system energy when sampling either the A or the B system. Measures for A in B and B in A are given by two overlap integrals defined on pairs of these distributions. The second measure is based on information theory and defines two relative entropies, which are conveniently expressed in terms of the dissipated work for free-energy perturbation (FEP) calculations in the A→B and B→A directions, respectively. Phase-space overlap is an important consideration in the performance of free-energy calculations. To demonstrate this connection, we examine bias in FEP calculations applied to a system of independent particles in a harmonic potential. Systems are selected to represent a range of overlap situations, including extreme subset, subset, partial overlap, and nonoverlap. The magnitude and symmetry of the bias (A→B vs B→A) are shown to correlate well with the overlap, and consequently with the overlap measures. The relative entropies are used to scale the amount of sampling to obtain a universal bias curve. This result leads to a simple heuristic that can be applied to determine whether a work-based free-energy measurement is free of bias. The heuristic is based in part on the measured free energy, but we argue that it is fail-safe inasmuch as any bias in the measurement will not promote a false indication of accuracy.
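A sketch of the relative-entropy overlap measures: with forward (A→B) and reverse (B→A) FEP work samples, s_AB = β⟨W_diss⟩_AB and s_BA = β⟨W_diss⟩_BA. Here ΔF is taken as the simple average of the two one-sided estimates, a stand-in for the paper's procedure; the test distributions are constructed to be mutually consistent with ΔF = 2.5 and 0.5 kT dissipated in each direction.

```python
import numpy as np

def overlap_relative_entropies(w_ab, w_ba, beta=1.0):
    """Relative entropies from forward and reverse FEP work samples."""
    w_ab, w_ba = np.asarray(w_ab, float), np.asarray(w_ba, float)
    dF_fwd = -np.log(np.mean(np.exp(-beta * w_ab))) / beta   # one-sided A->B
    dF_rev = np.log(np.mean(np.exp(-beta * w_ba))) / beta    # one-sided from B->A
    dF = 0.5 * (dF_fwd + dF_rev)                             # simple stand-in
    s_ab = beta * (w_ab.mean() - dF)    # dissipated work, forward direction
    s_ba = beta * (w_ba.mean() + dF)    # dissipated work, reverse direction
    return s_ab, s_ba

rng = np.random.default_rng(6)
print(overlap_relative_entropies(rng.normal(3.0, 1.0, 1000),    # A->B works
                                 rng.normal(-2.0, 1.0, 1000)))  # B->A works
```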

18.
19.
Molecular weight averages have long been used as a measure of polymer molecular weight properties in industrial polymer manufacturing processes. With a kinetic model, it is possible to calculate the polymer chain length distribution directly by integrating an infinite set of polymer population balance equations. However, when the polymer chain length is very large, such a direct integration of the population balance equations can be computationally demanding. In this paper, the method of finite molecular weight moments is applied to the calculation of the polymer chain length distribution in a batch free-radical thermal polymerization of styrene. The weight fraction of a finite chain-length interval is directly calculated in conjunction with a kinetic model. The method of calculation is illustrated through model simulations.
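A toy sketch of the underlying idea: rather than tracking every chain length, compute the weight fraction falling in an interval [n1, n2] directly from an instantaneous distribution supplied by the kinetics. Here a Flory (most probable) distribution stands in for the kinetic model, and the propagation probability is an illustrative value, not one derived from styrene rate expressions.

```python
import numpy as np

def interval_weight_fraction(p, n1, n2):
    """Weight fraction of chains with length in [n1, n2] for the
    instantaneous Flory distribution w(n) = n*(1-p)^2 * p^(n-1)."""
    n = np.arange(1, n2 * 5)                 # extend well past n2 for the tail
    w = n * (1 - p) ** 2 * p ** (n - 1)
    mask = (n >= n1) & (n <= n2)
    return w[mask].sum() / w.sum()

print(interval_weight_fraction(p=0.999, n1=500, n2=1500))
```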

20.
Consequences resulting from the three-dimensional calibration model introduced in [5] are investigated. Accordingly, there exists a different statistical background for the calibration, the analytical evaluation, and the validation step. If the errors of the concentration values are not negligible compared with the errors of the measured values, orthogonal calibration models have to be used instead of common Gaussian least squares (GLS). Four approximation models of orthogonal least squares, namely Wald's approximation (WA), Mandel's approximation (MA), the geometric mean (GM), and principal component estimation (PC), are investigated and compared with each other and with GLS by simulations and by real analytical applications. The simulations show that GLS gives biased estimates of both slope and intercept as the concentration error increases, whereas the orthogonal models estimate the calibration parameters better. The best fit is obtained by Wald's approximation. It is shown by simulations and real analytical calibration problems that orthogonal calibration has to be used in all cases in which the concentration errors cannot be neglected compared to the errors of the measured values. This is particularly relevant in recovery experiments for validation by means of comparison of methods. In such cases orthogonal least squares methods always have to be applied, and the use of WA is recommended. The situation is different in the case of ordinary calibration experiments: the examples considered show only small differences between classical GLS and the orthogonal procedures. In doubtful cases both GLS and WA should be computed, and the latter used if significant differences appear.
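A sketch contrasting GLS/OLS with Wald's grouping approximation on errors-in-variables data: Wald's method splits the points into lower and upper halves by x and connects the two group means, which avoids most of the attenuation that biases the OLS slope when x carries error. The simulation parameters are illustrative.

```python
import numpy as np

def ols_slope(x, y):
    """Common Gaussian least squares (GLS/OLS) slope."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def wald_slope(x, y):
    """Wald's approximation: slope from the means of the lower and
    upper halves of the data, ordered by x."""
    order = np.argsort(x)
    half = len(x) // 2
    lo, hi = order[:half], order[half:]
    return (y[hi].mean() - y[lo].mean()) / (x[hi].mean() - x[lo].mean())

# Errors-in-variables simulation: true slope 2, noise on both axes.
rng = np.random.default_rng(7)
c_true = np.linspace(1, 10, 50)
x = c_true + rng.normal(0, 0.8, c_true.size)          # concentrations with error
y = 2.0 * c_true + 1.0 + rng.normal(0, 0.8, c_true.size)
print(f"OLS slope: {ols_slope(x, y):.3f} (attenuated), "
      f"Wald slope: {wald_slope(x, y):.3f} (typically closer to 2)")
```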
