51.
Sheng-Chun Yang, Hu-Jun Qian, Zhong-Yuan Lu. Applied and Computational Harmonic Analysis, 2018, 44(2): 273-293
Efficient calculation of the NFFT (nonequispaced fast Fourier transform) is a long-standing challenge in a variety of application areas, from medical imaging to radio astronomy to chemical simulation. In this article, a new theoretical derivation of the NFFT based on the gridding algorithm is proposed, along with new strategies for implementing both the forward NFFT and its inverse on the CPU and the GPU. The GPU-based version, CUNFFT, adopts CUDA (Compute Unified Device Architecture) technology, which supports a fine-grained parallel computing scheme. The approximation errors introduced by the algorithm are analyzed for different window functions. Finally, benchmark calculations illustrate the accuracy and performance of NFFT and CUNFFT. The results show that CUNFFT not only achieves high accuracy but is also substantially faster than the conventional CPU-based NFFT.
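The gridding idea the abstract refers to can be sketched in a few steps: deconvolve the coefficients by the window's Fourier transform, FFT onto an oversampled grid, then convolve with a truncated window at each nonequispaced node. A minimal NumPy sketch follows (Gaussian window, oversampling factor 2; the parameter choices are illustrative and this is not the paper's CUNFFT implementation):

```python
import numpy as np

def nfft_forward(c, x, m=6, sigma=2):
    """Evaluate f(x_j) = sum_{k=-N/2}^{N/2-1} c_k exp(2*pi*i*k*x_j)
    at nonequispaced nodes x_j in [-0.5, 0.5) via Gaussian gridding."""
    N = len(c)
    n = sigma * N                                     # oversampled grid size
    b = (2 * sigma * m) / ((2 * sigma - 1) * np.pi)   # Gaussian shape parameter
    k = np.arange(-N // 2, N // 2)
    # Step 1: deconvolve -- divide by the window's Fourier coefficients
    phi_hat = np.exp(-b * (np.pi * k / n) ** 2) / n
    g_hat = np.zeros(n, dtype=complex)
    g_hat[k % n] = c / phi_hat
    # Step 2: inverse FFT onto the oversampled equispaced grid
    g = np.fft.ifft(g_hat)
    # Step 3: convolve with the truncated Gaussian window at each node
    f = np.zeros(len(x), dtype=complex)
    for j, xj in enumerate(x):
        u = int(np.floor(n * xj))
        l = np.arange(u - m, u + m + 1)               # 2m+1 nearby grid points
        phi = np.exp(-((n * xj - l) ** 2) / b) / np.sqrt(np.pi * b)
        f[j] = np.sum(g[l % n] * phi)
    return f
```

The cost is one FFT of length 2N plus O(m) work per node, versus O(N) per node for the naive sum; the truncation parameter m trades accuracy for speed, which is the error-versus-window trade-off the abstract discusses.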
52.
This paper describes a GPU-parallelized algorithm for real-time plasma equilibrium reconstruction on HL-2A, covering parallelization of the G-S equation, tridiagonal system solves, computation of the magnetic flux on the grid boundary, and acceleration of a series of matrix multiplications. After parallelization, one iteration on a 129×129 grid takes about 575 μs.
53.
GPU Acceleration of the Particle-Mesh Ewald (PME) Algorithm  Cited by: 1 (self-citations: 0, other citations: 1)
This paper discusses GPU acceleration, in the NVIDIA CUDA development environment, of the long-range electrostatic force calculation in molecular dynamics simulations. Using the Particle-Mesh Ewald (PME) method, the computation is decomposed into five parts — parameter determination, discretization of point charges onto a grid, Fourier transform of the grid, electrostatic energy evaluation, and electrostatic force evaluation — and the GPU implementation of each part is analyzed. The method has been applied to simulations of seven biomolecular systems of different sizes, achieving a speedup of roughly 7×. The program can be coupled into existing molecular dynamics software, or serve as part of a GPU molecular dynamics code under further development, significantly accelerating conventional MD programs.
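The reciprocal-space pipeline the abstract decomposes can be sketched in NumPy as follows. This is a deliberately simplified stand-in: nearest-grid-point charge assignment replaces PME's B-spline interpolation, the grid size and Ewald splitting parameter `beta` are illustrative, and the force step is only indicated in the comments:

```python
import numpy as np

def pme_reciprocal_energy(positions, charges, box, grid=32, beta=0.35):
    """Simplified reciprocal-space Ewald energy in the spirit of PME:
    (1) parameters, (2) spread charges onto a mesh, (3) FFT the mesh,
    (4) accumulate energy. Forces (step 5) would follow by multiplying
    by i*k before the inverse FFT and interpolating back to the atoms."""
    n = grid
    # (2) discretize point charges onto the mesh (nearest grid point;
    #     real PME uses cardinal B-splines here)
    Q = np.zeros((n, n, n))
    idx = np.floor(positions / box * n).astype(int) % n
    for (i, j, k), q in zip(idx, charges):
        Q[i, j, k] += q
    # (3) Fourier transform of the charge mesh
    Q_hat = np.fft.fftn(Q)
    # (4) energy: sum |S(k)|^2 weighted by the Ewald influence function
    m = np.fft.fftfreq(n, d=1.0 / n)          # integer wave indices
    mx, my, mz = np.meshgrid(m, m, m, indexing="ij")
    k2 = (2 * np.pi / box) ** 2 * (mx**2 + my**2 + mz**2)
    k2[0, 0, 0] = 1.0                         # placeholder; k = 0 is excluded
    influence = np.exp(-k2 / (4 * beta**2)) / k2
    influence[0, 0, 0] = 0.0
    V = box**3
    return (2 * np.pi / V) * np.sum(influence * np.abs(Q_hat) ** 2)
```

Steps (2) and (3) are the natural targets for the fine-grained GPU parallelism the abstract describes: the spreading loop parallelizes over atoms (with atomic adds), and the FFT maps onto a library call such as cuFFT.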
54.
During the iterative plasma equilibrium reconstruction process, the Grad-Shafranov equation (G-S equation) must be solved rapidly. A discrete system with a fourth-order-accurate compact finite-difference scheme was constructed and solved quickly using the discrete sine transform, with GPU parallel acceleration implemented in CUDA. The solver was integrated into the PEFIT code for EAST plasma equilibrium reconstruction, enabling fast G-S equation solves based on the compact difference scheme. Results show that on a 65×65 grid, given the current distribution on the right-hand side of the equation, solving the G-S equation on the GPU takes about 34 μs.
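The reason a sine transform makes such a solver fast is that it diagonalizes the difference operator, reducing the solve to two transforms and a pointwise division. A minimal SciPy sketch for a Poisson model problem with homogeneous Dirichlet boundaries illustrates the idea (standard second-order 5-point Laplacian as a stand-in for the paper's fourth-order compact G-S stencil):

```python
import numpy as np
from scipy.fft import dst, idst

def poisson_dirichlet_dst(f, h):
    """Solve -Laplacian(u) = f on the unit square with u = 0 on the
    boundary, using the type-I discrete sine transform (DST-I) to
    diagonalize the 5-point Laplacian. f holds the right-hand side at
    the N x N interior grid points with spacing h = 1/(N+1)."""
    N = f.shape[0]
    p = np.arange(N)
    # 1-D eigenvalues of the second-difference operator in the DST-I basis
    lam = (2.0 - 2.0 * np.cos((p + 1) * np.pi / (N + 1))) / h**2
    # Forward DST along both axes, divide by the 2-D eigenvalue, invert
    F_hat = dst(dst(f, type=1, axis=0), type=1, axis=1)
    U_hat = F_hat / (lam[:, None] + lam[None, :])
    return idst(idst(U_hat, type=1, axis=0), type=1, axis=1)
```

Every stage — the transforms and the elementwise division — is embarrassingly parallel, which is why the same structure maps well onto a GPU and can reach the microsecond-scale solve times the abstract reports.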
55.
Brad A. Bauer, Joseph E. Davis, Michela Taufer, Sandeep Patel. Journal of Computational Chemistry, 2011, 32(3): 375-385
Molecular dynamics (MD) simulations are a vital tool in chemical research, as they are able to provide an atomistic view of chemical systems and processes that is not obtainable through experiment. However, large-scale MD simulations require access to multicore clusters or supercomputers that are not always available to all researchers. Recently, scientists have returned to exploring the power of graphics processing units (GPUs) for various applications, such as MD, enabled by recent advances in hardware and integrated programming interfaces such as NVIDIA's CUDA platform. One area of particular interest within the context of chemical applications is that of aqueous interfaces, salt solutions of which have found application as model systems for studying atmospheric processes as well as physical behaviors such as the Hofmeister effect. Here, we present results of GPU-accelerated simulations of the liquid-vapor interface of aqueous sodium iodide solutions. Analysis of various properties, such as density and surface tension, demonstrates that our model is consistent with previous studies of similar systems. In particular, we find that the current combination of water and ion force fields, coupled with the ability to simulate surfaces of differing area enabled by GPU hardware, is able to reproduce the experimental trend of increasing salt solution surface tension relative to pure water. In terms of performance, our GPU implementation performs equivalently to CHARMM running on 21 CPUs. Finally, we address possible issues with the accuracy of MD simulations caused by the nonstandard single-precision arithmetic implemented on current GPUs. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2011