Similar documents (20 found)
1.
2.
The exact computation of free energy differences requires adequate sampling of all relevant low energy conformations. Especially in systems with rugged energy surfaces, adequate sampling can only be achieved by biasing the exploration process, thus yielding non-Boltzmann probability distributions. To obtain correct free energy differences from such simulations, it is necessary to account for the effects of the bias in the postproduction analysis. We demonstrate that this can be accomplished quite simply with a slight modification of Bennett's Acceptance Ratio method, referring to this technique as Non-Boltzmann Bennett. We illustrate the method by several examples and show how a creative choice of the biased state(s) used during sampling can also improve the efficiency of free energy simulations.
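
As a companion to this abstract, here is a minimal, self-contained sketch of the standard Bennett acceptance ratio estimator, solved as a one-dimensional root-finding problem. In the Non-Boltzmann Bennett variant described above, the two ensemble sums would additionally carry per-sample reweighting factors exp(+beta*V_bias) that undo the sampling bias; those factors are omitted here, and the toy data and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit      # expit(-x) = 1/(1 + exp(x)), the Fermi function

def bar(w_fwd, w_rev, beta=1.0):
    """Free-energy difference F1 - F0 from forward work values w_fwd = U1 - U0
    (evaluated on samples from state 0) and reverse work values w_rev = U0 - U1
    (evaluated on samples from state 1), via Bennett's acceptance ratio."""
    w_fwd, w_rev = np.asarray(w_fwd, float), np.asarray(w_rev, float)
    ln_ratio = np.log(len(w_rev) / len(w_fwd))

    # Optimal shift C satisfies: sum_1 f(beta*(w_rev + C)) = sum_0 f(beta*(w_fwd - C))
    def imbalance(C):
        return (expit(-beta * (w_rev + C)).sum()
                - expit(-beta * (w_fwd - C)).sum())

    span = np.abs(w_fwd).max() + np.abs(w_rev).max() + abs(ln_ratio) / beta + 10.0
    C = brentq(imbalance, -span, span)   # imbalance is monotone, so the root is unique
    return C - ln_ratio / beta           # Delta F = C - kT * ln(N1/N0)

# Toy data: Gaussian work distributions consistent with Delta F = 1 (reduced units);
# forward mean = dF + var/2, reverse mean = -dF + var/2, variance 2.
rng = np.random.default_rng(0)
w_f = rng.normal(2.0, np.sqrt(2.0), 5000)
w_r = rng.normal(0.0, np.sqrt(2.0), 4000)
print(f"BAR estimate: {bar(w_f, w_r):.3f}  (expected ~1.0)")
```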

3.
Most processes occurring in a system are determined by the relative free energy between two or more states because the free energy is a measure of the probability of finding the system in a given state. When the two states of interest are connected by a pathway, usually called a reaction coordinate, along which the free-energy profile is determined, this profile or potential of mean force (PMF) will also yield the relative free energy of the two states. Twelve different methods to compute a PMF are reviewed and compared, with regard to their precision, for a system consisting of a pair of methane molecules in aqueous solution. We analyze all combinations of the type of sampling (unbiased, umbrella-biased or constraint-biased), how to compute free energies (from density of states or force averaging) and the type of coordinate system (internal or Cartesian) used for the PMF degree of freedom. The method of choice is constraint-bias simulation combined with force averaging for either an internal or a Cartesian PMF degree of freedom.
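
To make the preferred route concrete, the sketch below integrates a synthetic mean constraint force recorded at a series of fixed reaction-coordinate values to obtain a PMF, which is the essence of constraint-bias sampling with force averaging. Coordinate-dependent metric and Jacobian corrections, which matter for an interparticle distance, are deliberately omitted, and all numbers are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Constrained distances (nm) and the mean force <f_c>(r) recorded at each one;
# the force profile here is synthetic, standing in for simulation averages.
r = np.linspace(0.35, 1.20, 18)
mean_force = 80.0 * (r - 0.55) * np.exp(-((r - 0.55) / 0.15) ** 2)

# PMF(r) = -integral of <f_c>(r') dr' from the first window to r, shifted so its minimum is zero
pmf = -cumulative_trapezoid(mean_force, r, initial=0.0)
pmf -= pmf.min()

for ri, wi in zip(r, pmf):
    print(f"r = {ri:.3f} nm   PMF = {wi:7.3f} kJ/mol")
```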

4.
The solid-fluid coexistence properties of the n-6 Lennard-Jones system, with n from 7 to 12, are reported. The procedure relies on determining Helmholtz free energy curves as a function of volume for each phase independently, from several NVT simulations, and then connecting them to points of known absolute free energy. For n = 12 this requires connecting the simulated points to states of very low density for the liquid phase, and to a harmonic crystal for the solid phase, which involves many extra simulations for each temperature. For the reference points of the remaining systems, however, the free energy at a given density and temperature can be calculated relative to the n = 12 system. The method presented here involves a generalization of the multiple histogram method to combine simulations performed with different potentials, provided they visit overlapping regions of phase space, and allows for a precise calculation of relative free energies. The densities, free energies, average potential energies, pressure, and chemical potential at coexistence are presented for temperatures up to T* = 5.0, and new estimates of the triple points are given for the n-6 Lennard-Jones system.
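
The single-simulation limit of combining runs with different potentials is plain free-energy perturbation between two potentials evaluated on the same configurations; the sketch below shows only that limit (the full multiple-histogram method combines several overlapping simulations self-consistently). The energies and the reduced temperature are synthetic, illustrative values.

```python
import numpy as np

beta = 1.0 / 2.0                     # 1/(k_B T*), illustrative
rng = np.random.default_rng(4)
# U12: potential used for sampling; U9: target potential re-evaluated on the same frames.
# Both arrays are synthetic stand-ins for energies from an NVT trajectory.
U12 = rng.normal(-500.0, 5.0, size=5000)
U9 = U12 + rng.normal(3.0, 1.0, size=5000)

dU = U9 - U12
# F9 - F12 = -kT * ln < exp(-beta dU) >_12 ; log-sum-exp for numerical stability
dF = -(np.logaddexp.reduce(-beta * dU) - np.log(dU.size)) / beta
print(f"F(n=9) - F(n=12) ~ {dF:.2f} (reduced units)")
```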

5.
We present a method of parallelizing flat-histogram Monte Carlo simulations, which give the free energy of a molecular system as an output. In the serial version, a constant probability distribution, as a function of any system parameter, is calculated by updating an external potential that is added to the system Hamiltonian. This external potential is related to the free energy. In the parallel implementation, the simulation is distributed over different processors. At regular intervals the modifying potential is summed over all processors and distributed back to every processor, thus spreading the information about which parts of parameter space have been explored. This implementation is shown to decrease the execution time linearly with the number of processors added.
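
A rough sketch of the communication pattern described above, assuming mpi4py is available: each rank runs a flat-histogram walk on a toy one-dimensional profile and accumulates increments to the modifying potential, and at regular intervals the increments are summed over all ranks and redistributed. The toy profile, update rule, and parameters are illustrative, not the authors' code.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.rank)

nbins, sync_every, nsweeps, increment = 50, 1000, 20000, 0.01
bias = np.zeros(nbins)        # modifying (external) potential added to the Hamiltonian
local = np.zeros(nbins)       # increments accumulated since the last synchronization

def profile(i):               # toy underlying free-energy profile (two wells)
    x = i / (nbins - 1)
    return 500.0 * (x - 0.25) ** 2 * (x - 0.75) ** 2

state = int(rng.integers(nbins))
for step in range(1, nsweeps + 1):
    trial = (state + rng.choice([-1, 1])) % nbins
    dE = (profile(trial) + bias[trial]) - (profile(state) + bias[state])
    if dE <= 0 or rng.random() < np.exp(-dE):
        state = trial
    bias[state] += increment          # push the walker out of well-visited bins
    local[state] += increment
    if step % sync_every == 0:        # share what every processor has learned
        total = np.zeros_like(local)
        comm.Allreduce(local, total, op=MPI.SUM)
        bias += total - local         # add only the other ranks' contributions
        local[:] = 0.0

if comm.rank == 0:
    print(np.round(bias.max() - bias, 2))   # ~ free-energy profile at convergence
```

With mpi4py installed, this would be launched as, for example, mpirun -n 4 python parallel_flat_histogram.py (file name hypothetical).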

6.
This study develops an efficient approach for calculating the density of states from energy transition probability matrices generated from extended sampling Monte Carlo simulations. Direct and iterative variants of the method are shown to achieve high accuracy when applied to the two-dimensional Ising model, for which the density of states function can be determined exactly. They are also used to calculate the density of states of lattice protein and Lennard-Jones models, which generate more complex nonzero matrix structures. Whereas the protein simulations test the method on a system exhibiting a rugged free energy landscape, the Lennard-Jones calculations highlight implementation details that arise in applications to continuous energy systems. Density of states results for these two systems agree with estimates from multiple histogram reweighting, demonstrating that the new method provides an alternative approach for computing the thermodynamic properties of complex systems.
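
For intuition, the sketch below reconstructs relative macrostate weights from a macrostate transition matrix by chaining nearest-neighbour detailed-balance ratios, which is the spirit of the "direct" route (an iterative variant would refine the estimate using all observed transition pairs at once). The 4-state matrix is synthetic, and this is a generic illustration rather than the paper's algorithm.

```python
import numpy as np

def log_weights_direct(T):
    """Chain ln p(i+1) - ln p(i) = ln T(i,i+1) - ln T(i+1,i) along the state ladder."""
    n = T.shape[0]
    lnp = np.zeros(n)
    for i in range(n - 1):
        lnp[i + 1] = lnp[i] + np.log(T[i, i + 1]) - np.log(T[i + 1, i])
    return lnp - lnp.max()

# Synthetic 4-state example with known weights p = (1, 2, 4, 2) (unnormalised).
p = np.array([1.0, 2.0, 4.0, 2.0])
T = np.zeros((4, 4))
for i in range(3):                       # fill a detailed-balance-consistent matrix
    T[i, i + 1] = 0.2 * min(1.0, p[i + 1] / p[i])
    T[i + 1, i] = 0.2 * min(1.0, p[i] / p[i + 1])
np.fill_diagonal(T, 1.0 - T.sum(axis=1))

print(np.exp(log_weights_direct(T)))     # recovers p up to a constant factor
```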

7.
Many schemes for calculating reaction rates and free energy barriers require an accurate reaction coordinate, but it is difficult to quantify reaction coordinate accuracy for complex processes like protein folding and nucleation. The histogram test, based on estimated committor probabilities, is often used as a qualitative indicator of a good reaction coordinate. This paper derives the mean and variance of the intrinsic committor distribution in terms of the mean and variance of the histogram of committor estimates. These convenient formulas enable the first quantitative calculations of reaction coordinate error for complex systems. An example shows that the approximate transition state surface from Peters and Trout's reaction coordinate for nucleation in the Ising model gives a mean committor probability of 0.495 and a standard deviation of 0.042.
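
A small sketch of the kind of correction involved: if each configuration's committor is estimated from N shooting trajectories, the observed histogram variance contains binomial noise that must be subtracted to obtain the intrinsic spread. The specific correction formula below is a standard assumed form (it may differ in detail from the paper's expressions), and the data are synthetic.

```python
import numpy as np

def committor_statistics(p_hat, n_traj):
    """Mean and intrinsic variance of the committor distribution, given estimates
    p_hat obtained from n_traj trial trajectories per configuration."""
    p_hat = np.asarray(p_hat, dtype=float)
    mean = p_hat.mean()
    var_obs = p_hat.var(ddof=1)
    # Var(p_hat) = Var(p)*(1 - 1/N) + mean*(1 - mean)/N  =>  solve for Var(p)
    var_intrinsic = (n_traj * var_obs - mean * (1.0 - mean)) / (n_traj - 1)
    return mean, max(var_intrinsic, 0.0)

# Example: 20 configurations on a putative dividing surface, 10 trajectories each
rng = np.random.default_rng(0)
true_p = rng.normal(0.5, 0.05, size=20).clip(0, 1)
estimates = rng.binomial(10, true_p) / 10.0
mean, var = committor_statistics(estimates, 10)
print(f"mean committor {mean:.3f}, intrinsic std {np.sqrt(var):.3f}")
```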

8.
In this work we analyze the finite-size and discretization effects that occur in field-theoretic polymer simulations. Following our previous work, we study these effects for a polymer solution in the canonical ensemble confined to a slit (with nonadsorbing walls) of width L, and focus on the behavior of two quantities: the chemical potential μ and the correlation length ξ. Our results show that the finite-size effects disappear for both quantities once the lateral size of the system L is larger than approximately 2ξ. On the other hand, the chemical potential is dominated by the lattice discretization Δx. The origins of this dependence are discussed in detail, and a scheme is proposed in which this effect is avoided. Our results also show that the density profiles do not depend on the lattice discretization if Δx is smaller than approximately ξ/4. This implies that the correlation length ξ, extracted from the density profiles, is free of lattice-size and lattice-discretization artifacts once L is larger than approximately 2ξ and Δx is smaller than approximately ξ/4.

9.
Grand canonical transition matrix Monte Carlo simulations are used to investigate the phase behavior of the model argon on solid carbon dioxide system introduced by Ebner and Saam (Phys. Rev. Lett. 1977, 38, 1486). Our results indicate that the system exhibits first-order prewetting transitions at temperatures above a wetting temperature of Tw = 0.598(5) and below a critical prewetting temperature of Tpwc ≈ 0.92. The wetting transition is identified by determining the temperature at which the difference between the bulk vapor-liquid and prewetting saturation chemical potentials goes to zero. Coexistence is directly located at a given temperature by obtaining a density probability distribution from simulation data and utilizing histogram reweighting to determine the conditions that satisfy phase coexistence. Structural properties of the adsorbed films are also examined.
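
The reweighting step can be illustrated compactly: a particle-number probability distribution collected at one chemical potential is reweighted to other chemical potentials, and coexistence is taken as the value at which the two peaks carry equal weight. The bimodal distribution below is synthetic, and splitting the areas at a fixed N is a simplification.

```python
import numpy as np
from scipy.optimize import brentq

beta = 1.0
N = np.arange(201)
# synthetic bimodal ln Pi(N) with peaks near N = 40 (thin film) and N = 160 (thick film)
lnpi0 = np.logaddexp(-0.01 * (N - 40.0) ** 2, -0.01 * (N - 160.0) ** 2 + 3.0)

def area_diff(dmu, n_cut=100):
    """Difference of the two peak areas after reweighting by beta*dmu*N."""
    lnpi = lnpi0 + beta * dmu * N
    lnpi -= lnpi.max()
    w = np.exp(lnpi)
    return w[N <= n_cut].sum() - w[N > n_cut].sum()

dmu_coex = brentq(area_diff, -1.0, 1.0)   # chemical-potential shift to coexistence
print(f"shift in chemical potential to coexistence: {dmu_coex:.4f}")
```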

10.
We propose a numerical scheme based on the Chebyshev pseudo-spectral collocation method for solving the integral and integro-differential equations of density-functional theory and its dynamic extension. We demonstrate the exponential convergence of our scheme, which typically requires far fewer discretization points to achieve the same accuracy as conventional methods. This discretization scheme can also incorporate the asymptotic behavior of the density, which can be of interest in the investigation of open systems. Our scheme is complemented with a numerical continuation algorithm and an appropriate time-stepping algorithm, thus constituting a complete tool for an efficient and accurate calculation of phase diagrams and dynamic phenomena. To illustrate the numerical methodology, we consider an argon-like fluid adsorbed on a Lennard-Jones planar wall. First, we obtain a set of phase diagrams corresponding to the equilibrium adsorption and compare our results obtained from different approximations to the hard-sphere part of the free energy functional. Using principles from the theory of sub-critical dynamic phase field models, we formulate the time-dependent equations which describe the evolution of the adsorbed film. Through dynamic considerations we interpret the phase diagrams in terms of their stability. Simulations of various wetting and drying scenarios allow us to rationalize the dynamic behavior of the system and its relation to the equilibrium properties of wetting and drying.
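
The building block of such a scheme is the Chebyshev collocation grid and its differentiation matrix; the sketch below (the standard "cheb" construction) differentiates exp(x) and shows the error dropping roughly exponentially with the number of points, which is the convergence behavior exploited above. It is a generic illustration, not the authors' code.

```python
import numpy as np

def cheb(n):
    """Chebyshev points x_j = cos(pi*j/n) and the collocation differentiation matrix D."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal from the negative row sums
    return x, D

for n in (8, 16, 32):                    # error drops far faster than any power of 1/n
    x, D = cheb(n)
    err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # d/dx e^x = e^x
    print(f"n = {n:3d}   max derivative error = {err:.2e}")
```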

11.
Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic. However, when such processes are studied in a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods for single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell-time distributions, both commonly used kinetic parameters in single-molecule measurements, which fully characterize the fluctuations of the system. Furthermore, we show that this formalism allows these parameters to be calculated for a much wider class of general kinetic models than was demonstrated with previous methods.
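
For the simplest special case, an irreversible sequential cycle in which the dwell time is a sum of independent exponential waiting times, the mean velocity and the randomness parameter follow from elementary moments; the sketch below computes them for illustrative rate constants. The general reversible, branched schemes treated by the Fourier-Laplace formalism above require the full machinery.

```python
import numpy as np

def velocity_and_randomness(rates, step_size=1.0):
    """Mean velocity and randomness parameter for an irreversible N-state cycle.
    Each cycle advances the motor by step_size; dwell times are sums of
    independent exponential waiting times with the given rate constants."""
    rates = np.asarray(rates, dtype=float)
    mean_dwell = np.sum(1.0 / rates)               # <tau> per cycle
    var_dwell = np.sum(1.0 / rates ** 2)           # Var(tau), independent exponentials
    velocity = step_size / mean_dwell
    randomness = var_dwell / mean_dwell ** 2       # r = 2D/(v*d) for a renewal stepper
    return velocity, randomness

v, r = velocity_and_randomness([10.0, 50.0, 5.0])  # illustrative rate constants (1/s)
print(f"v = {v:.3f} steps/s, randomness = {r:.3f}")  # r -> 1/N for N equal rates
```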

12.
Recently, we have introduced a new method, metadynamics, which is able to sample rarely occurring transitions and to reconstruct the free energy as a function of several variables with controlled accuracy. This method has been successfully applied in many different fields, ranging from chemistry to biophysics and ligand docking and from materials science to crystal structure prediction. We present an important development that speeds up metadynamics calculations by orders of magnitude and renders the algorithm much more robust. We use multiple interacting simulations ("walkers") to explore and reconstruct the same free energy surface. Each walker contributes to the history-dependent potential that, in metadynamics, is an estimate of the free energy. We show that the error in the reconstructed free energy does not depend on the number of walkers, leading to a fully linear scaling algorithm even on inexpensive, loosely coupled clusters of PCs. In addition, we show that the accuracy and stability of the method are much improved by combining it with a weighted histogram analysis. We check the validity of our new method on a realistic application.
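
A toy picture of the multiple-walker idea: several overdamped Langevin walkers on a one-dimensional double well all deposit Gaussian hills into one shared, history-dependent bias, whose negative approximates the free energy as the wells fill. The potential, hill parameters, and step counts below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
kT, dt, gamma = 1.0, 1e-3, 1.0
hill_w, hill_h, deposit_every = 0.15, 0.1, 250

def grad_U(x):                        # U(x) = 5*(x^2 - 1)^2, barrier ~5 kT
    return 20.0 * x * (x ** 2 - 1.0)

centers, heights = [], []             # the common bias: a sum of Gaussian hills

def grad_bias(x):
    if not centers:
        return 0.0
    c, h = np.array(centers), np.array(heights)
    return np.sum(-h * (x - c) / hill_w ** 2 * np.exp(-(x - c) ** 2 / (2 * hill_w ** 2)))

walkers = np.array([-1.0, -0.8, 0.8, 1.0])   # four interacting walkers
for step in range(10000):
    force = -np.array([grad_U(x) + grad_bias(x) for x in walkers])
    walkers += force * dt / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=4)
    if step % deposit_every == 0:
        for x in walkers:             # every walker adds to the same bias
            centers.append(float(x)); heights.append(hill_h)

grid = np.linspace(-1.6, 1.6, 33)
bias = sum(h * np.exp(-(grid - c) ** 2 / (2 * hill_w ** 2))
           for c, h in zip(centers, heights))
print(np.round(bias.max() - bias, 2))  # rough free-energy profile, up to a constant
```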

13.
We have shown that the components of Cartesian rotation vectors can be used successfully as generalized coordinates describing angular orientation in Brownian dynamics simulations of non-spherical nanoparticles. For this particular choice of generalized coordinates, we rigorously derived the conformation-space diffusion equations from kinetic theory for both free nanoparticles and nanoparticles interconnected by springs or holonomic constraints into polymer chains. The equivalent stochastic differential equations were used as the foundation for the Brownian dynamics algorithms. These new algorithms contain singularities only at points in conformation space where both the probability density and its first coordinate derivative equal zero (weak singularities). In addition, throughout conformation space the coordinate values after a single Brownian dynamics time step are simply the old coordinate values plus the respective increments. For some parts of conformation space these features represent a major improvement over the situation when Eulerian angles are used to describe the rotational dynamics. The simulation results presented for the equilibrium probability density of free nanoparticles are in perfect agreement with the results from kinetic theory.

Graphical abstract: simulation of p_eq(Φ) for free nanoparticles.


14.
The decision that a given detection level corresponds to the effective presence of a radionuclide is still widely made on the basis of a classical hypothesis test. However, the classical framework suffers from several drawbacks, such as the conceptual and practical impossibility of providing a probability of zero radioactivity, and confidence intervals for the true activity level that are likely to contain negative, and hence meaningless, values. Because the Bayesian framework can potentially overcome these drawbacks, several attempts have recently been made to apply it to this decision problem. Here, we present a new Bayesian method that, unlike previous ones, offers two major advantages at once. First, it provides an estimate of the probability of no radioactivity, as well as physically meaningful point and interval estimates for the true radioactivity level. Second, whereas Bayesian approaches are often controversial because of the arbitrary choice of the priors they use, the proposed method permits the parameters of the prior density of radioactivity to be estimated by fitting its marginal distribution to previously recorded activity data. The new scheme is first developed mathematically. It is then applied to the detection of radioxenon isotopes in noble gas measurement stations of the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty.
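
A minimal sketch of the kind of posterior involved, assuming (purely for illustration) a Gaussian measurement model and an exponential prior on positive activities whose scale would, in the spirit of the paper, be fitted to historical data: the posterior is then a mixture of a point mass at zero activity and a truncated normal, from which the probability of no radioactivity, a point estimate, and an upper limit follow. The functional forms, numbers, and names are assumptions, not the paper's equations.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def posterior(x, u, p0, lam):
    """Posterior for true activity a given observation x with uncertainty u,
    prior P(a = 0) = p0 and a ~ Exponential(lam) otherwise (measurement model
    x | a ~ Normal(a, u))."""
    m0 = norm.pdf(x, loc=0.0, scale=u)                            # evidence if a = 0
    # evidence of the continuous branch: integral of lam*exp(-lam*a)*N(x; a, u) over a > 0
    m1 = lam * np.exp(lam ** 2 * u ** 2 / 2 - lam * x) * norm.cdf((x - lam * u ** 2) / u)
    w0 = p0 * m0 / (p0 * m0 + (1 - p0) * m1)                      # posterior P(a = 0 | x)
    loc = x - lam * u ** 2                                         # posterior over a > 0:
    tn = truncnorm(a=(0.0 - loc) / u, b=np.inf, loc=loc, scale=u)  # truncated normal
    return w0, tn

x_obs, u_obs, p0, lam = 2.5, 1.0, 0.5, 0.2                         # illustrative inputs
w0, tn = posterior(x_obs, u_obs, p0, lam)
best = (1 - w0) * tn.mean()                                        # posterior mean activity
upper = 0.0 if w0 >= 0.95 else tn.ppf((0.95 - w0) / (1 - w0))      # 95% upper limit
print(f"P(no activity | data) = {w0:.3f}, best estimate = {best:.3f}, "
      f"95% upper limit = {upper:.3f}")
```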

15.
We develop a novel technique that deduces the surface tension of a fluid in air as a function of surface age, beginning at age zero. The technique uses pointwise measurements of perpendicular free-surface profiles of a steady oscillating jet, corresponding to a discretization interval on the order of 0.1 ms. We implement the technique on constant-surface-tension test fluids (100% ethanol and 15% ethanol/85% water by volume) to demonstrate the extent to which it can qualitatively capture that the surface tensions of these fluids are constant in time and quantitatively produce values of these constants consistent with static measurements. We then implement the technique on jets of two agricultural surfactant mixtures, Triton X-405 and Triton X-100, and quantitatively deduce the decay of surface tension as a function of surfactant concentration.

16.
We describe a method to detect and count transient, burst-like signals in the presence of significant stationary noise. To discriminate a transient signal from the background noise, an optimum threshold is determined using an iterative algorithm that yields the probability distribution of the background noise. Knowledge of the probability distribution of the noise then allows the number of transient events to be determined with a quantifiable error (false positives). We apply the method, which does not rely on the choice of free parameters, to the detection and counting of transient single-molecule fluorescence events in the presence of strong background noise. The method will be of importance in various ultrasensitive detection applications.
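
The sketch below mimics the overall logic on synthetic data: the background-noise distribution is estimated iteratively while excluding candidate bursts, a threshold is set from that distribution, and the expected number of false positives is quantified. A Gaussian noise model and the specific 4-sigma criterion are assumptions made only for this illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
trace = rng.normal(0.0, 1.0, 100_000)                        # stationary background noise
trace[rng.choice(trace.size, 200, replace=False)] += 8.0     # injected "bursts"

mask = np.ones(trace.size, dtype=bool)
for _ in range(20):                        # iterate: fit noise, exclude candidate bursts
    mu, sigma = trace[mask].mean(), trace[mask].std()
    new_mask = trace < mu + 4.0 * sigma
    if np.array_equal(new_mask, mask):
        break
    mask = new_mask

threshold = mu + 4.0 * sigma
n_detected = int(np.sum(trace >= threshold))
expected_false = trace.size * norm.sf(threshold, loc=mu, scale=sigma)
print(f"threshold {threshold:.2f}: {n_detected} events, "
      f"~{expected_false:.1f} expected false positives")
```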

17.
The binding constants of five peptide analogs to the active site of the HIV-1 aspartic protease are calculated using a novel sampling scheme that is efficient and does not introduce any approximations beyond the energy function used to describe the system. The results agree with experiment: the squared correlation coefficient of the calculated versus the measured values is 0.79. The sampling scheme consists of a series of molecular dynamics integrations with biases. The biases are selected on the basis of an estimate of the probability density function of the system, in a way that explores the conformational space and reduces the statistical error in the calculated binding constants. The molecular dynamics integrations are done with a vacuum potential using a short-cutoff scheme. To estimate the probability density of the simulated system, the results of the molecular dynamics integrations are combined using an extension of the weighted histogram analysis method (C. Bartels, Chem. Phys. Lett. 331 (2000) 446-454). The probability density of the solvated ligand-protein system is obtained by applying a correction for the use of the short cutoffs in the simulations and by taking solvation into account with an electrostatic term and a hydrophobic term. The electrostatic part of the solvation is determined by finite-difference Poisson-Boltzmann calculations; the hydrophobic part is set proportional to the solvent-accessible surface area. Setting the hydrophobic surface tension parameter equal to 8 mol⁻¹ K⁻¹ Å⁻², the absolute binding constants fall in the μM to nM range, in agreement with experiment. The standard errors determined from eight repeated binding constant determinations correspond to a factor of 14 to 411. A single determination of a binding constant requires 499,700 steps of molecular dynamics integration and 4500 finite-difference Poisson-Boltzmann calculations. The simulations can be analyzed with respect to conformational changes of the active site of the HIV-1 protease or of the ligands upon binding, and thus provide information that complements experiments and can be used in the drug development process.

18.
Gillespie's direct method is a stochastic simulation algorithm that may be used to calculate the steady-state solution of a chemically reacting system. Recently, the all possible states method was introduced as a way of accelerating the convergence of the simulations. We demonstrate that while the all possible states (APS) method does reduce the number of required trajectories, it is actually much slower than the original algorithm for most problems. We introduce the elapsed time method, which reformulates the process of recording the species populations. The resulting algorithm yields the same results as the original method, but is more efficient, particularly for large models. In implementing the elapsed time method, we present robust methods for recording statistics and empirical probability distributions. We demonstrate how to use the histogram distance to estimate the error in steady-state solutions.
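
For orientation, here is a minimal Gillespie direct-method simulation of a birth-death process in which the steady-state distribution is recorded by weighting each visited population by the elapsed time spent there, one natural reading of time-based recording; the rates, burn-in, and run length are illustrative, and this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
k_birth, k_death = 10.0, 0.5        # 0 -> X at rate k_birth; X -> 0 at rate k_death*X
t, x, t_end, burn_in = 0.0, 0, 2000.0, 100.0
occupancy = {}                       # population -> total elapsed time spent there

while t < t_end:
    rates = np.array([k_birth, k_death * x])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)            # waiting time to the next reaction
    if t >= burn_in:
        occupancy[x] = occupancy.get(x, 0.0) + min(dt, t_end - t)
    t += dt
    x += 1 if rng.random() < rates[0] / total else -1   # choose which reaction fires

pops = np.array(sorted(occupancy))
probs = np.array([occupancy[p] for p in pops])
probs /= probs.sum()
mean = np.sum(pops * probs)
print(f"steady-state mean population ~ {mean:.2f} (Poisson with mean {k_birth / k_death})")
```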

19.
20.
The best estimate and its standard deviation are calculated for the case when the a priori probability that the analyte is absent from the test sample is not zero. In the calculation, a generalization of the Bayesian prior that is used in the ISO 11929 standard is applied. The posterior probability density distribution of the true values, given the observed value and its uncertainty, is a linear combination of the Dirac delta function and the normalized, truncated normal probability density distribution defined by the observed value and its uncertainty. The coefficients of this linear combination depend on the observed value and its uncertainty, as well as on the a priori probability. It is shown that for a priori probabilities larger than zero the lower level of the uncertainty interval of the best estimate reaches the unfeasible range (i.e., negative activities). However, for a priori probabilities in excess of 0.26 it reaches the unfeasible range even for positive observed values. The upper limit of the confidence interval covering a predefined fraction of the posterior is derived.
