Similar Articles
20 similar articles found (search time: 750 ms)
1.
Molecular dynamics (MD) simulations are a vital tool in chemical research, as they are able to provide an atomistic view of chemical systems and processes that is not obtainable through experiment. However, large‐scale MD simulations require access to multicore clusters or supercomputers that are not always available to all researchers. Recently, scientists have returned to exploring the power of graphics processing units (GPUs) for various applications, such as MD, enabled by recent advances in hardware and integrated programming interfaces such as NVIDIA's CUDA platform. One area of particular interest within the context of chemical applications is that of aqueous interfaces, the salt solutions of which have found application as model systems for studying atmospheric processes as well as physical behaviors such as the Hofmeister effect. Here, we present results of GPU‐accelerated simulations of the liquid–vapor interface of aqueous sodium iodide solutions. Analysis of various properties, such as density and surface tension, demonstrates that our model is consistent with previous studies of similar systems. In particular, we find that the current combination of water and ion force fields, coupled with the ability to simulate surfaces of differing area enabled by GPU hardware, is able to reproduce the experimental trend of increasing salt solution surface tension relative to pure water. In terms of performance, our GPU implementation performs equivalently to CHARMM running on 21 CPUs. Finally, we address possible issues with the accuracy of MD simulations caused by the nonstandard single‐precision arithmetic implemented on current GPUs. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2011
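The surface-tension trend discussed above is conventionally extracted from slab simulations via the mechanical (pressure-tensor) route. A minimal sketch of that bookkeeping, with illustrative numbers rather than values from the paper:

```python
# Surface tension of a slab geometry from time-averaged pressure-tensor
# components (mechanical route): gamma = (Lz / n_interfaces) * (Pzz - (Pxx + Pyy)/2).
# A liquid-vapor slab has two interfaces, hence n_interfaces = 2.
# All numbers below are placeholders for illustration, not data from the paper.

def surface_tension(Lz_nm, Pzz_bar, Pxx_bar, Pyy_bar, n_interfaces=2):
    """Return gamma in mN/m from box length (nm) and pressures (bar).

    Unit conversion: 1 bar * 1 nm = 1e5 Pa * 1e-9 m = 1e-4 N/m = 0.1 mN/m.
    """
    gamma_bar_nm = (Lz_nm / n_interfaces) * (Pzz_bar - 0.5 * (Pxx_bar + Pyy_bar))
    return gamma_bar_nm * 0.1  # bar*nm -> mN/m

# A 10-nm slab with a 144-bar normal/tangential anisotropy gives roughly
# the surface tension of water (~72 mN/m):
gamma = surface_tension(10.0, Pzz_bar=100.0, Pxx_bar=-44.0, Pyy_bar=-44.0)
```

Comparing gamma for pure water and for the salt slabs, each at its own time-averaged pressure tensor, yields the relative surface-tension increase the abstract refers to.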

2.
Implicit solvent representations, in general, and generalized Born models, in particular, provide an attractive way to reduce the number of interactions and degrees of freedom in a system. The instantaneous relaxation of the dielectric shielding provided by an implicit solvent model can be extremely efficient for high‐throughput and Monte Carlo studies, and a reduced system size can also remove a lot of statistical noise. Despite these advantages, it has been difficult for generalized Born implementations to significantly outperform optimized explicit‐water simulations due to more complex functional forms and the two extra interaction stages necessary to calculate Born radii and the derivative chain rule terms contributing to the force. Here, we present a method that uses a rescaling transformation to make the standard generalized Born expression a function of a single variable, which enables an efficient tabulated implementation on any modern CPU hardware. The total performance is within a factor of 2 of simulations in vacuo. The algorithm has been implemented in Gromacs, including single‐instruction multiple‐data acceleration, for three different Born radius models and corresponding chain rule terms. We have also adapted the model to work with the virtual interaction sites commonly used for hydrogens to enable long time steps, which makes it possible to achieve a simulation performance of 0.86 μs/day for BBA5 with a 1‐nm cutoff on a single quad‐core desktop processor. Finally, we have also implemented a set of streaming kernels without neighborlists to accelerate the non‐cutoff setup occasionally used for implicit solvent simulations of small systems. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2010
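The single-variable rescaling can be illustrated with the Still generalized Born pair term, f_GB = sqrt(r² + bᵢbⱼ·exp(−r²/(4bᵢbⱼ))). Substituting x = r²/(bᵢbⱼ) gives f_GB = sqrt(bᵢbⱼ)·g(x) with g(x) = sqrt(x + exp(−x/4)), so a one-dimensional table of g replaces the exp/sqrt at every pair. The sketch below is a generic demonstration of that idea (table range and spacing are arbitrary choices), not the Gromacs implementation:

```python
import math
import numpy as np

# Exact Still GB pair function: f_GB = sqrt(r^2 + b_i b_j exp(-r^2/(4 b_i b_j)))
def f_gb_exact(r, bi, bj):
    return math.sqrt(r * r + bi * bj * math.exp(-r * r / (4.0 * bi * bj)))

# Precompute a 1-D table of g(x) = sqrt(x + exp(-x/4)) on x in [0, 25],
# spacing 0.01 (both choices are illustrative).
xs = np.linspace(0.0, 25.0, 2501)
g_tab = np.sqrt(xs + np.exp(-xs / 4.0))

def f_gb_tabulated(r, bi, bj):
    """f_GB via the rescaled single variable x = r^2/(b_i b_j) and linear interpolation."""
    x = r * r / (bi * bj)
    i = min(int(x / 0.01), len(xs) - 2)
    t = (x - xs[i]) / 0.01
    return math.sqrt(bi * bj) * ((1.0 - t) * g_tab[i] + t * g_tab[i + 1])

err = abs(f_gb_exact(0.5, 0.2, 0.3) - f_gb_tabulated(0.5, 0.2, 0.3))
```

In a production kernel the same table lookup serves every atom pair, which is what makes the tabulated form SIMD-friendly.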

3.
The treatment of pH‐sensitive ionization states for titratable residues in proteins is often omitted from molecular dynamics (MD) simulations. While static charge models can answer many questions regarding protein conformational equilibrium and protein–ligand interactions, pH‐sensitive phenomena such as acid‐activated chaperones and amyloidogenic protein aggregation are inaccessible to such models. Constant pH molecular dynamics (CPHMD) coupled with the Generalized Born with a Simple sWitching function (GBSW) implicit solvent model provides an accurate framework for simulating pH‐sensitive processes in biological systems. Although this combination has demonstrated success in predicting pKa values of protein structures and in exploring the dynamics of ionizable side‐chains, its speed has been an impediment to routine application. The recent availability of low‐cost graphics processing unit (GPU) chipsets with thousands of processing cores, together with the implementation of the accurate GBSW implicit solvent model on those chipsets (Arthur and Brooks, J. Comput. Chem. 2016, 37, 927), provides an opportunity to greatly improve the speed of CPHMD and ionization modeling. Here, we present a first implementation of GPU‐enabled CPHMD within the CHARMM‐OpenMM simulation package interface. Depending on the system size and nonbonded force cutoff parameters, we find speed increases of between one and three orders of magnitude. Additionally, the algorithm scales better with system size than the CPU‐based algorithm, thus allowing larger systems to be modeled in a cost‐effective manner. We anticipate that the improved performance of this methodology will open the door for widespread application of CPHMD in modeling pH‐mediated biological processes. © 2016 Wiley Periodicals, Inc.  相似文献   
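In constant-pH MD, per-residue pKa values are typically obtained by recording the unprotonated fraction S at several simulation pH values and fitting the generalized Henderson–Hasselbalch curve S(pH) = 1/(1 + 10^{n(pKa − pH)}). A minimal sketch with synthetic data (true pKa = 4.0, Hill coefficient n = 1) and a simple grid-search fit — not the authors' protocol:

```python
# Generalized Henderson-Hasselbalch titration curve.
def hh(pH, pKa, n=1.0):
    return 1.0 / (1.0 + 10.0 ** (n * (pKa - pH)))

# Synthetic "simulation" data: unprotonated fractions at five pH values.
pHs = [2.0, 3.0, 4.0, 5.0, 6.0]
S_obs = [hh(p, 4.0) for p in pHs]

# Least-squares grid search for pKa over [2.00, 6.00] in 0.01 steps.
best = min(
    (sum((hh(p, k) - s) ** 2 for p, s in zip(pHs, S_obs)), k)
    for k in [i * 0.01 for i in range(200, 601)]
)
fitted_pKa = best[1]
```

In a real CPHMD analysis the S values come from the time-averaged titration coordinates of the lambda dynamics, and the fit is done per titratable site.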

4.
We introduce a complete implementation of a viscoelastic model for numerical simulations of phase separation kinetics in dynamically asymmetric systems, such as polymer blends and polymer solutions, on a graphics processing unit (GPU) using the CUDA language, and discuss the algorithms and optimizations in detail. From studies of a polymer solution, we show that the GPU-based implementation correctly reproduces accepted results and provides about a 190-fold speedup over a single central processing unit (CPU). Further accuracy analysis demonstrates that both single- and double-precision calculations on the GPU are sufficient to produce high-quality results in numerical simulations of the viscoelastic model. Therefore, the GPU-based viscoelastic model is very promising for studying the many phase separation processes of experimental and theoretical interest that often take place on large length and time scales and are not easily addressed by a conventional implementation running on a single CPU.

5.
6.
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models for the outcome of implicit solvent MD simulations. We show results for a range of tests, including the accuracy of single point force evaluations and energy conservation, as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors that can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model, since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation, but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers.
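The difference between the precision models is where rounding occurs. A minimal, generic NumPy demonstration (not AMBER code): naively accumulating single-precision contributions in a single-precision variable (the SPSP situation) drifts, while accumulating the same float32 terms in a double-precision variable (the SPDP idea) keeps only the input rounding:

```python
import numpy as np

# A "force contribution" already rounded to single precision.
term = np.float32(0.1)

acc_sp = np.float32(0.0)   # single-precision accumulator (SPSP-like)
acc_dp = 0.0               # double-precision accumulator (SPDP-like)
for _ in range(100_000):
    acc_sp = np.float32(acc_sp + term)   # rounds to float32 every step
    acc_dp += float(term)                # same terms, float64 accumulation

exact = 100_000 * float(term)            # float64 reference (DPDP-like)
err_sp = abs(float(acc_sp) - exact)      # accumulation error dominates
err_dp = abs(acc_dp - exact)             # near zero
```

Once the accumulator grows large, each float32 addition loses low-order bits of the small term, and those per-step errors compound — the same mechanism behind the unphysical SPSP trajectories described above.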

7.
The implementation of molecular dynamics (MD) with our physics-based protein united-residue (UNRES) force field, described in the accompanying paper, was extended to Langevin dynamics. The equations of motion are integrated by using a simplified stochastic velocity Verlet algorithm. To compare the results to those of all-atom simulations with implicit solvent, in which no explicit stochastic and friction forces are present, we alternatively introduced the Berendsen thermostat. Test simulations on the Ala(10) polypeptide demonstrated that the average kinetic energy is stable with about a 5 fs time step. To determine the correspondence between the UNRES time step and the time step of all-atom molecular dynamics, all-atom simulations with the AMBER 99 force field and explicit solvent, and also with implicit solvent taken into account within the framework of the generalized Born/surface area (GBSA) model, were carried out on the unblocked Ala(10) polypeptide. We found that the UNRES time scale is 4 times longer than that of all-atom MD simulations because the degrees of freedom corresponding to the fastest motions in UNRES are averaged out. When the reduction of the computational cost for evaluation of the UNRES energy function is also taken into account, UNRES (with hydration included implicitly in the side chain-side chain interaction potential) offers at least a 4000-fold speedup of computations relative to all-atom simulations with explicit solvent and at least a 65-fold speedup relative to all-atom simulations with implicit solvent. To carry out an initial full-blown test of the UNRES/MD approach, we ran Berendsen-bath and Langevin dynamics simulations of the 46-residue B-domain of staphylococcal protein A. We were able to determine the folding temperature at which all trajectories converged to nativelike structures with both approaches. For comparison, we carried out ab initio folding simulations of this protein at the AMBER 99/GBSA level.
The average CPU time for folding protein A by UNRES molecular dynamics was 30 min with a single Alpha processor, compared to about 152 h for all-atom simulations with implicit solvent. It can be concluded that the UNRES/MD approach will enable us to carry out microsecond and, possibly, millisecond simulations of protein folding and, consequently, of the folding process of proteins in real time.
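The Berendsen weak-coupling thermostat used for the comparison runs rescales velocities each step by λ = sqrt(1 + (Δt/τ_T)(T₀/T − 1)), which relaxes the instantaneous temperature T toward the bath temperature T₀. A generic one-step sketch (not the UNRES code; parameter values are arbitrary):

```python
import math

def berendsen_lambda(T, T0, dt, tau):
    """Berendsen velocity scaling factor for one coupling step."""
    return math.sqrt(1.0 + (dt / tau) * (T0 / T - 1.0))

# One step: kinetic energy (and hence T) scales as lambda^2, so
# T_new = T + (dt/tau) * (T0 - T), i.e. exponential relaxation toward T0.
T = 350.0
lam = berendsen_lambda(T, T0=300.0, dt=0.005, tau=0.1)
T_new = lam * lam * T
```

Because the coupling is deterministic rather than stochastic, the Berendsen bath gives a direct comparison point against the Langevin runs, whose friction and random forces perform the thermostatting implicitly.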

8.
We adapted existing polymer growth strategies for equilibrium sampling of peptides described by modern atomistic forcefields with a simple uniform dielectric solvent. The main novel feature of our approach is the use of precalculated statistical libraries of molecular fragments. A molecule is sampled by combining fragment configurations—of single residues in this study—which are stored in the libraries. Ensembles generated from the independent libraries are reweighted to conform with the Boltzmann‐factor distribution of the forcefield describing the full molecule. In this way, high‐quality equilibrium sampling of small peptides (4–8 residues) typically requires less than one hour of single‐processor wallclock time and can be significantly faster than Langevin simulations. Furthermore, approximate, clash‐free ensembles can be generated for larger peptides (up to 32 residues in this study) in less than a minute of single‐processor computing. We discuss possible applications of our growth procedure to free energy calculation, fragment assembly protein‐structure prediction protocols, and to “multi‐resolution” sampling. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2011
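The reweighting step described above is standard importance sampling: configurations drawn from the product of independent fragment distributions are assigned weights w ∝ exp(−β(U_full − Σᵢ U_frag,i)) so that weighted averages follow the full-molecule Boltzmann distribution. A toy one-dimensional sketch (a harmonic "fragment" proposal, a full energy with an extra linear coupling term; not the authors' forcefield):

```python
import math
import random

random.seed(1)
beta = 1.0

def u_frag(x):            # fragment-library (proposal) energy
    return 0.5 * x * x

def u_full(x):            # full-molecule energy adds a coupling term
    return 0.5 * x * x + 0.1 * x

# Sample the fragment distribution exp(-beta * u_frag) = standard normal.
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
# Importance weights correcting to the full Boltzmann distribution.
ws = [math.exp(-beta * (u_full(x) - u_frag(x))) for x in xs]

# Reweighted mean of x; analytically the tilted Gaussian has mean -0.1.
mean_x = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```

The efficiency of the method hinges on the fragment libraries being good proposals, so that the weights stay well-conditioned as fragments are combined.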

9.
We present an algorithm to efficiently compute accurate volumes and surface areas of macromolecules on graphical processing unit (GPU) devices using an analytic model which represents atomic volumes by continuous Gaussian densities. The volume of the molecule is expressed by means of the inclusion–exclusion formula, which is based on the summation of overlap integrals among multiple atomic densities. The surface area of the molecule is obtained by differentiation of the molecular volume with respect to atomic radii. The many‐body nature of the model makes a port to GPU devices challenging. To our knowledge, this is the first reported full implementation of this model on GPU hardware. To accomplish this, we have used recursive strategies to construct the tree of overlaps and to accumulate volumes and their gradients on the tree data structures so as to minimize memory contention. The algorithm is used in the formulation of a surface area‐based non‐polar implicit solvent model implemented as an open source plug‐in (named GaussVol) for the popular OpenMM library for molecular mechanics modeling. GaussVol is 50 to 100 times faster than our best optimized implementation for the CPUs, achieving speeds in excess of 100 ns/day with 1 fs time‐step for protein‐sized systems on commodity GPUs. © 2017 Wiley Periodicals, Inc.
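The building block of the inclusion–exclusion sum is the overlap integral of Gaussian atomic densities, which has a closed form by the Gaussian product theorem: for ρᵢ(r) = p·exp(−cᵢ|r − Rᵢ|²), the two-body overlap is V_ij = p²(π/(cᵢ+cⱼ))^{3/2} exp(−(cᵢcⱼ/(cᵢ+cⱼ))d²) with d the center distance. The sketch below verifies that formula numerically; the constants are placeholders, not the GaussVol parameterization:

```python
import math
import numpy as np

def pair_overlap(ci, cj, d, p=1.0):
    """Closed-form overlap integral of two isotropic Gaussians a distance d apart."""
    a = ci + cj
    return (p * p) * (math.pi / a) ** 1.5 * math.exp(-(ci * cj / a) * d * d)

ci, cj, d = 0.8, 1.2, 1.0
analytic = pair_overlap(ci, cj, d)

# Brute-force check: Riemann sum of the product density on a 3-D grid.
h = 0.1
xs = np.arange(-6.0, 6.0 + h / 2, h)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
rho_i = np.exp(-ci * (X**2 + Y**2 + Z**2))
rho_j = np.exp(-cj * ((X - d) ** 2 + Y**2 + Z**2))
numeric = float(np.sum(rho_i * rho_j)) * h**3
```

Higher-order overlaps (triples, quadruples, …) are again Gaussians by the same theorem, which is what lets the tree of overlaps be accumulated analytically rather than on a grid.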

10.
This article describes an extension of the quantum supercharger library (QSL) to perform quantum mechanical (QM) gradient and optimization calculations as well as hybrid QM and molecular mechanical (QM/MM) molecular dynamics simulations. The integral derivatives are, after the two‐electron integrals, the most computationally expensive part of the aforementioned calculations/simulations. Algorithms are presented for accelerating the one‐ and two‐electron integral derivatives on a graphical processing unit (GPU). It is shown that a Hartree–Fock ab initio gradient calculation is up to 9.3X faster on a single GPU compared with a single central processing unit running an optimized serial version of GAMESS‐UK, which uses the efficient Schlegel method for s‐ and p‐orbitals. Benchmark QM and QM/MM molecular dynamics simulations are performed on cellobiose in vacuo and in a 39 Å water sphere (45 QM atoms and 24843 point charges, respectively) using the 6‐31G basis set. The QSL can perform 9.7 ps/day of ab initio QM dynamics and 6.4 ps/day of QM/MM dynamics on a single GPU in full double precision. © 2015 Wiley Periodicals, Inc.

11.
We describe a complete implementation of all‐atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009

12.
The influence of the total number of cores, the number of cores dedicated to the particle mesh Ewald (PME) calculation, and the choice of single vs. double precision on the performance of molecular dynamics (MD) simulations of systems ranging from 70,000 to 1.7 million atoms was analyzed on three different high‐performance computing facilities employing GROMACS 4, by running about 6000 benchmark simulations. Small and medium sized systems scaled linearly up to 64 and 128 cores, respectively. Systems with half a million to 1.2 million atoms scaled linearly up to 256 cores. The best performance was achieved by dedicating 25% of the total number of cores to the PME calculation. Double precision calculations lowered the performance by 30–50%. A database for collecting information about MD simulations and the achieved performance was created; it is freely available online and allows fast estimation of the performance that can be expected in similar environments. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2011

13.
Jun Gao, Ruixin Zhu, Qi Liu, Zhiwei Cao, Chinese Journal of Chemistry, 2011, 29(9): 1805–1810
Classical Petri nets have been applied to biological analysis, especially as qualitative models for biochemical pathway analysis, but they lack the ability to perform quantitative kinetic simulations. In our study, we present an integration of the qualitative Petri‐net model with the quantitative approach of ordinary differential equations (ODEs) for the modeling and analysis of metabolic networks. As an application, a computational model of the arachidonic acid (AA) biochemical network is provided. A Petri net was set up to perform constraint‐based dynamic simulations of the AA metabolic network, combined with the validated ODE model. Furthermore, a graphics processing unit (GPU) was adopted to accelerate the calculation of kinetic parameters unavailable from experiments. Our results indicate that the proposed simulation method is practicable and useful with GPU acceleration, and it provides new clues for large‐scale qualitative and quantitative modeling of biochemical networks.
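In such hybrid schemes, each Petri-net transition carries a rate law, and the token (concentration) balance becomes a system of ODEs. A minimal illustration with a two-step mass-action chain A → B → C integrated by forward Euler; the rate constants are arbitrary, not parameters of the AA network:

```python
# Two transitions with mass-action rate laws:
#   v1 = k1 * [A]   (A -> B),   v2 = k2 * [B]   (B -> C).
# The stoichiometry of the net gives d[A]/dt = -v1, d[B]/dt = v1 - v2,
# d[C]/dt = v2. Forward Euler with a small step for simplicity.

k1, k2 = 1.0, 0.5
A, B, C = 1.0, 0.0, 0.0
dt = 0.001
for _ in range(10_000):          # integrate to t = 10
    v1 = k1 * A                  # transition firing rate A -> B
    v2 = k2 * B                  # transition firing rate B -> C
    A += -v1 * dt
    B += (v1 - v2) * dt
    C += v2 * dt

total = A + B + C                # conservation: the net is mass-closed
```

The qualitative Petri-net structure fixes which species each transition consumes and produces; the ODE layer supplies the rates, and parameter estimation (the GPU-accelerated step above) fits the k's to data.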

14.
The approach used to calculate the two‐electron integrals in many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System‐UK (GAMESS‐UK), was designed for CPU‐based compute units. We redesigned the two‐electron integral algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran‐based codes and uses the Accelerator compiler from Portland Group International and GPU‐based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within Hartree–Fock ab initio and density functional theory calculations is accelerated by single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self‐consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU. © 2011 Wiley Periodicals, Inc. J Comput Chem, 2011

15.
Using a grid‐based method to search for critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that on central processing units (CPUs), we find a large difference in the elapsed time of the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other for high‐performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other in HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We found that a GPU intended for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used. © 2014 Wiley Periodicals, Inc.

16.
We accelerated an ab initio molecular quantum Monte Carlo (QMC) calculation by using GPGPU. Only the bottleneck part of the calculation is replaced by a CUDA subroutine and performed on the GPU. The performance of a single‐core CPU + GPU is compared with that of a single‐core CPU in double precision, giving 23.6 (11.0) times faster calculations for single (double) precision treatments on the GPU. The energy deviation caused by the single‐precision treatment was found to be within the accuracy required in the calculation, ~10⁻⁵ hartree. Compute nodes equipped with GPUs were combined to form a hybrid MPI cluster, on which we confirmed that the performance scales linearly with the number of nodes. © 2011 Wiley Periodicals, Inc. J Comput Chem, 2011

17.
The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive for describing ligand–protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual‐GPU version leads to a 39‐fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
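NCI analysis evaluates the reduced density gradient s(r) = |∇ρ|/(2(3π²)^{1/3}ρ^{4/3}) on a grid over a promolecular density, i.e. a sum of spherical atomic densities; low-s, low-ρ regions flag noncovalent contacts. A sketch using a single-exponential "atomic" density ρ_atom(d) = exp(−2d) as a generic placeholder (real NCI codes use fitted promolecular tails):

```python
import math

# Fermi constant prefactor 2 * (3*pi^2)^(1/3) in the reduced density gradient.
CS = 2.0 * (3.0 * math.pi ** 2) ** (1.0 / 3.0)

def rho_and_grad(point, atoms):
    """Promolecular density and |grad rho| at a point, atoms as (x, y, z) centers."""
    rho, g = 0.0, [0.0, 0.0, 0.0]
    for ax, ay, az in atoms:
        dx, dy, dz = point[0] - ax, point[1] - ay, point[2] - az
        d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-12  # avoid /0 on a nucleus
        r = math.exp(-2.0 * d)
        drho = -2.0 * r                      # d(rho_atom)/d(d)
        g[0] += drho * dx / d
        g[1] += drho * dy / d
        g[2] += drho * dz / d
        rho += r
    return rho, math.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)

atoms = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
rho_mid, grad_mid = rho_and_grad((0.0, 0.0, 0.0), atoms)   # midpoint of the "bond"
s_mid = grad_mid / (CS * rho_mid ** (4.0 / 3.0))           # ~0: NCI signature
rho_off, grad_off = rho_and_grad((0.5, 0.0, 0.0), atoms)
s_off = grad_off / (CS * rho_off ** (4.0 / 3.0))           # large away from midpoint
```

The per-grid-point independence of this evaluation is exactly why, as the abstract notes, NCI maps so well onto GPU hardware.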

18.
In this work, we present a tentative step toward the efficient implementation of polarizable molecular mechanics force fields with GPU acceleration. The computational bottleneck of such applications is found in the treatment of electrostatics, where higher-order multipoles and a self-consistent treatment of polarization effects are needed. We have implemented a GPU-accelerated code, based on the Tinker program suite, for the computation of induced dipoles. The largest test system used shows a speedup factor of over 20 for a single-precision GPU implementation compared with the serial CPU version. A discussion of the optimization and parametrization steps is included. A comparison between different graphics cards and CPU-GPU embedding is also given. The current work demonstrates the potential usefulness of GPU programming in accelerating this field of applications.
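The self-consistent polarization referred to above solves μᵢ = αᵢ(E_perm,i + Σⱼ T_ij μⱼ) iteratively: each induced dipole responds to the permanent field plus the field of all other induced dipoles. A toy model with two collinear polarizable sites, where the axial dipole–dipole coupling reduces to the scalar T = 2/r³ (arbitrary units, not Tinker's AMOEBA parameters):

```python
# Two identical collinear sites, distance r apart, in a uniform permanent
# field E0 along the axis. Successive substitution (Jacobi iteration) to
# self-consistency; converges because alpha * T < 1 here.

alpha, E0, r = 0.5, 1.0, 2.0
T = 2.0 / r ** 3          # axial dipole-dipole coupling = 0.25

mu = [0.0, 0.0]
for _ in range(100):
    mu = [alpha * (E0 + T * mu[1]),
          alpha * (E0 + T * mu[0])]

# By symmetry the fixed point is mu = alpha * E0 / (1 - alpha * T).
mu_exact = alpha * E0 / (1.0 - alpha * T)
```

Production codes do the same thing with full 3×3 interaction tensors over all pairs and accelerate convergence with predictors or preconditioned solvers; the per-iteration matrix-vector product is the part that maps onto the GPU.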

19.
The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU‐based general alchemical free energy simulation platform for the polarizable AMOEBA potential. Tinker‐OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200‐fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and the binding of host–guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.
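Alchemical free energy differences like these are commonly estimated by exponential averaging (the Zwanzig relation): ΔF = −kT ln⟨exp(−(U₁−U₀)/kT)⟩₀, with samples drawn from state 0. A toy example with two one-dimensional harmonic "states" whose exact answer, ΔF = (kT/2) ln(k₁/k₀), is known analytically — a generic sketch, not the Tinker-OpenMM machinery:

```python
import math
import random

random.seed(2)
kT, k0, k1 = 1.0, 1.0, 2.0

# Sample state 0: the Boltzmann distribution of U0 = 0.5*k0*x^2 is N(0, kT/k0).
xs = [random.gauss(0.0, math.sqrt(kT / k0)) for _ in range(500_000)]

# Zwanzig / free energy perturbation estimator with dU = U1 - U0.
avg = sum(math.exp(-(0.5 * (k1 - k0) * x * x) / kT) for x in xs) / len(xs)
dF_fep = -kT * math.log(avg)

dF_exact = 0.5 * kT * math.log(k1 / k0)   # analytic reference, ~0.3466 kT
```

In practice the transformation is split into many intermediate λ windows (or analyzed with BAR/MBAR) so that each perturbation stays small; the "special path" mentioned above is a choice of such intermediates that keeps the polarization response numerically stable.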

20.
A method is proposed to combine the local elevation (LE) conformational searching and the umbrella sampling (US) conformational sampling approaches into a single local elevation umbrella sampling (LEUS) scheme for (explicit‐solvent) molecular dynamics (MD) simulations. In this approach, an initial (relatively short) LE build‐up (searching) phase is used to construct an optimized biasing potential within a subspace of conformationally relevant degrees of freedom, that is then used in a (comparatively longer) US sampling phase. This scheme dramatically enhances (in comparison with plain MD) the sampling power of MD simulations, taking advantage of the fact that the preoptimized biasing potential represents a reasonable approximation to the negative of the free energy surface in the considered conformational subspace. The method is applied to the calculation of the relative free energies of β‐D‐glucopyranose ring conformers in water (within the GROMOS 45A4 force field). Different schemes to assign sampled conformational regions to distinct states are also compared. This approach, which bears some analogies with adaptive umbrella sampling and metadynamics (but within a very distinct implementation), is shown to be: (i) efficient (nearly all the computational effort is invested in the actual sampling phase rather than in searching and equilibration); (ii) robust (the method is only weakly sensitive to the details of the build‐up protocol, even for relatively short build‐up times); (iii) versatile (a LEUS biasing potential database could easily be preoptimized for small molecules and assembled on a fragment basis for larger ones). © 2009 Wiley Periodicals, Inc. J Comput Chem 2010
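The LE build-up phase accumulates a biasing potential by depositing small repulsive functions at visited points of the chosen conformational subspace; the US phase then samples with the bias frozen. A toy one-dimensional Metropolis sketch on a double well, with Gaussian deposits (deposit height, width, and the potential are illustrative choices, not the GROMOS protocol):

```python
import math
import random

random.seed(3)
beta = 4.0
H, W = 0.1, 0.2          # deposit height and width (illustrative)
centers = []             # locations of deposited Gaussians

def u0(x):
    """Double-well 'free energy surface' with minima at x = +/-1."""
    return (x * x - 1.0) ** 2

def bias(x):
    """Accumulated local-elevation bias: sum of deposited Gaussians."""
    return sum(H * math.exp(-(x - c) ** 2 / (2.0 * W * W)) for c in centers)

x = -1.0                 # start in the left well
for step in range(5000):
    xn = x + random.uniform(-0.3, 0.3)
    dU = (u0(xn) + bias(xn)) - (u0(x) + bias(x))
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        x = xn
    if step % 10 == 0:
        centers.append(x)        # build-up: raise the visited region
```

After the build-up, the bias is highest where the walker has spent time, pushing it over the barrier; in LEUS this preoptimized bias is then frozen and used as the umbrella potential, with unbiased averages recovered by reweighting with exp(+β·bias(x)).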

