Similar Documents
Found 20 similar documents (search time: 171 ms)
1.
Commercial cloud computing (CCC) has the promise of an untold number of computing nodes available to the researcher, as long as he or she has the financial means to absorb the costs and the administrative skills necessary to utilize the resources effectively. The key is finding how to maximize parallelization for a minimum of monetary and management cost. Previous work has shown that CCC resources are viable for use on large numbers of small-to-medium sized quantum chemical computations. Composite energy quartic force fields (QFFs) are a highly attractive platform for subsequent testing of CCC resources to find the proper balance between the time savings of the cloud and the monetary expenditure. Use of this type of potential energy surface has led to highly accurate rovibrational data in earlier work. QFFs use large numbers of stand-alone energies that have to be computed for various molecular geometries. At each geometry, different methods and/or basis sets are used to generate accurate representations of the nuclear potential efficiently. For this initial study, the small molecular anion SiCH−, of interest in astrochemistry, is chosen for analysis, as it can be treated cheaply on the cloud while still providing insight into the nature of CCC usage. Additionally, no rovibrational data exist for this molecule, making it the first molecule characterized quantum chemically purely via CCC tools. © 2015 Wiley Periodicals, Inc.
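The abstract above notes that a QFF is built from many stand-alone single-point energies at displaced geometries, which is exactly why it maps well onto independently rented cloud nodes. A minimal sketch of this embarrassingly parallel pattern, with a mock quadratic potential standing in for the real quantum chemical energy call (function names are illustrative, not from the paper's workflow):

```python
from multiprocessing import Pool

def single_point_energy(geometry):
    # Placeholder for a real electronic-structure job (e.g., a CCSD(T)
    # calculation submitted to a cloud node); here a mock harmonic potential.
    return sum(0.5 * x ** 2 for x in geometry)

def qff_energies(geometries, workers=4):
    # Each displaced geometry is an independent task, so the energy grid
    # of a quartic force field parallelizes trivially across workers/nodes.
    with Pool(workers) as pool:
        return pool.map(single_point_energy, geometries)

if __name__ == "__main__":
    # Four displacements of a two-coordinate toy system.
    grid = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
    print(qff_energies(grid, workers=2))
```

In the actual cloud setting each `single_point_energy` call would be a job dispatched to a remote instance, and cost control becomes the scheduling constraint.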

2.
3.
The restructuring of quantum mechanical applications for use on message-passing, distributed-memory multicomputers is found to be a challenge. A key computation in these large-scale quantum chemistry packages is the determination of eigenvalues and eigenvectors of real symmetric matrices. These computations arise during geometry optimization and vibrational analysis, and typically consume at least half of the total computation time. This work illustrates the parallelization of both tasks within the semiempirical quantum chemistry code MOPAC on Intel parallel platforms. The application of this parallel code is demonstrated on novel organic systems.

4.
Jaguar is an ab initio quantum chemical program that specializes in fast electronic structure predictions for molecular systems of medium and large size. Jaguar focuses on computational methods with reasonable computational scaling with the size of the system, such as density functional theory (DFT) and local second-order Møller–Plesset perturbation theory. The favorable scaling of the methods and the high efficiency of the program make it possible to conduct routine computations involving several thousand molecular orbitals. This performance is achieved through a utilization of the pseudospectral approximation and several levels of parallelization. The speed advantages are beneficial for applying Jaguar in biomolecular computational modeling. Additionally, owing to its superior wave function guess for transition-metal-containing systems, Jaguar finds applications in inorganic and bioinorganic chemistry. The emphasis on larger systems and transition metal elements paves the way toward developing Jaguar for its use in materials science modeling. The article describes the historical and new features of Jaguar, such as improved parallelization of many modules, innovations in ab initio pKa prediction, and new semiempirical corrections for nondynamic correlation errors in DFT. Jaguar applications in drug discovery, materials science, force field parameterization, and other areas of computational research are reviewed. Timing benchmarks and other results obtained from the most recent Jaguar code are provided. The article concludes with a discussion of challenges and directions for future development of the program. © 2013 Wiley Periodicals, Inc.

5.
A two-level hierarchical scheme, the generalized distributed data interface (GDDI), implemented in GAMESS is presented. Parallelization is accomplished first at the upper level by assigning computational tasks to groups. Each group then parallelizes at the lower level by dividing its task into smaller workloads. The types of computations that can be used with this scheme are limited to those for which nearly independent tasks and subtasks can be assigned. Typical examples implemented, tested, and analyzed in this work are numeric derivatives and the fragment molecular orbital method (FMO), which is used to compute large molecules quantum mechanically by dividing them into fragments. Numeric derivatives can be used for algorithms based on them, such as geometry optimizations, saddle-point searches, frequency analyses, etc. This new hierarchical scheme is found to be a flexible tool, easily utilizing network topology and delivering excellent performance even on slow networks. In one of the typical tests, on 16 nodes the scalability of GDDI is 1.7 times better than that of the standard parallelization scheme DDI, and on 128 nodes GDDI is 93 times faster than DDI (on a multihub Fast Ethernet network). FMO delivered scalability of 80-90% on 128 nodes, depending on the molecular system (water clusters and a protein). A numerical gradient calculation for a water cluster achieved a scalability of 70% on 128 nodes. It is expected that GDDI will become a preferred tool on massively parallel computers for appropriate computational tasks.
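The two-level split described above (tasks assigned to groups, then subtasks divided among each group's workers) can be illustrated with a toy scheduler. This is only an illustration of the hierarchy under round-robin assignment, not the actual GAMESS/GDDI implementation:

```python
def hierarchical_schedule(tasks, n_groups, workers_per_group):
    """Two-level split in the spirit of GDDI: tasks go to groups first,
    then each group divides its share among its own workers.
    Round-robin slicing keeps every task assigned exactly once."""
    groups = [tasks[i::n_groups] for i in range(n_groups)]
    return [
        [g[j::workers_per_group] for j in range(workers_per_group)]
        for g in groups
    ]

# Eight independent tasks, two groups of two workers each.
plan = hierarchical_schedule(list(range(8)), n_groups=2, workers_per_group=2)
```

The upper level would typically map groups onto nodes (exploiting fast intra-node communication), while the lower level maps workers onto cores within a node.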

6.
A unified, computer algebra system-based scheme of code-generation for computational quantum-chemistry programs is presented. Generation of electron-repulsion integrals and their derivatives as well as exchange-correlation potential and its derivatives is discussed. Application to general-purpose computing on graphics processing units is considered.

7.
Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds makes these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) to be exchanged. The codes are either interfaced directly, or transfer data by means of wrappers; both types of data exchange are supported by the Q5/D5Cost library. Further, the exchange of data between QC and QD codes is addressed. As a proof of concept, the H + H2 reaction is discussed. The proposed scheme is shown to provide an excellent basis for cooperative code development, even across domain boundaries. Moreover, the scheme presented is found to be useful also as a production tool in the grid distributed computing environment. © 2013 Wiley Periodicals, Inc.

8.
We advocate domain-specific virtual processors (DSVP) as a portability layer for expressing and executing domain-specific computational workloads on modern heterogeneous HPC architectures, with applications in quantum chemistry. Specifically, in this article we extend, generalize, and better formalize the concept of a domain-specific virtual processor as applied to scientific high-performance computing. In particular, we introduce a system-wide recursive (hierarchical) hardware encapsulation mechanism into the DSVP architecture and specify a concrete microarchitectural design of an abstract DSVP from which specialized DSVP implementations can be derived for specific scientific domains. Subsequently, we demonstrate an example of a domain-specific virtual processor specialized to numerical tensor algebra workloads, implemented in the ExaTENSOR library developed by the author with a primary focus on quantum many-body computational workloads on large-scale GPU-accelerated HPC platforms.

9.
The key trends and prospects of development of computational chemistry that formed in the early 2010s are considered. The most advanced methods for solving various types of challenges in computational and quantum chemistry through calculations in distributed environments (Grid) and on high-performance supercomputer installations are demonstrated, together with methods of parallel and distributed computation; the tailor-made technologies developed for these calculations are described.

10.
11.
12.
The analysis of scalar and vector fields in quantum chemistry is an essential task for the computational chemistry community, where such quantities must be evaluated rapidly to perform a particular study. For example, the atoms in molecules approach proposed by Bader has become popular; however, this method demands significant computational resources to complete the involved tasks in short times. In this article, we discuss the importance of graphics processing units (GPUs) for analyzing electron density and related fields, implementing several scalar and vector fields within the Graphics Processing Units for Atoms and Molecules (GPUAM) code developed by a group at the Universidad Autónoma Metropolitana in Mexico City. With this application, the quantum chemistry community can perform demanding computational tasks on a desktop, where CPUs and GPUs are used to their maximum capabilities. The performance of GPUAM is tested on several systems and over different GPUs, where a GPU installed in a workstation turns it into a robust high-performance computing system.

13.
Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (rather than the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. © 2013 Wiley Periodicals, Inc.

14.
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing the random seed is necessary but not sufficient for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including the number of active torsions, heavy atoms, and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster, and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
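Point (3) above, that per-ligand execution time varies with torsions, heavy atoms, and exhaustiveness, matters because the total screening time is set by how unevenly the ligand library loads the available cores. A small sketch of a greedy longest-first schedule (a hypothetical helper for reasoning about throughput, not part of AutoDock Vina):

```python
import heapq

def screening_makespan(ligand_times, n_cores):
    """Assign per-ligand docking times to cores, longest job first,
    always onto the currently least-loaded core. Returns the finish
    time of the busiest core (the makespan of the screening run)."""
    loads = [0.0] * n_cores
    heapq.heapify(loads)
    for t in sorted(ligand_times, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

# Four ligands with uneven docking costs on two cores.
total = screening_makespan([4.0, 3.0, 3.0, 2.0], n_cores=2)
```

Estimating per-ligand cost up front (from torsion and atom counts) lets such a scheduler balance cores far better than submitting ligands in arbitrary order.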

15.
For the publication of research results, the chemical sciences community has had a long history of requiring authors to provide sufficient data so that their research results and procedures can be (1) understood, (2) critically evaluated, and (3) replicated by other competent scientists. The emergence of computational chemistry as a distinct area of research presents new challenges in defining criteria to meet these obligations. While much of the long-standing paradigm for experimental chemistry can be directly transferred to computational chemistry, some differences are apparent. A computational study does not give a product for which one can measure physical properties, nor are percent yields and recoveries available to demonstrate experimental success. Nonetheless, it is imperative that computational results be able to withstand the same scientific scrutiny as experimental ones. Like all fields of scientific endeavor, computational chemistry is also a dynamic science. The continuous and dramatic improvements in computational algorithms and increases in computing power over the last decade have made possible the study of chemical problems for which solutions by computational means previously were unattainable. Moreover, advances in computer technology have also changed the way these computational studies are carried out. For any new study, the traditional search for the nearest energy minimum may no longer be adequate, fewer assumptions and approximations may be acceptable, and even the nature of the data to be stored and reported may have evolved. For example, many computer algorithms have become sufficiently fast and convenient that it is more efficient to repeat some part of the overall calculation than to save and record the corresponding data that it generates. 
This document has been developed to provide guidance to chemists who employ computations of molecular structure, properties, reactivity, and dynamics as either a part or as the main thrust of a research report. It is derived in part from earlier work carried out by the Provisional Section Committee on Medicinal Chemistry of IUPAC (Gund, P.; Barry, D. C.; Blaney, J. M.; Cohen, C. N. J. Med. Chem. 1988, 31, 2230–2234). © 1998 IUPAC

16.
Herein, the Zimmerman Möbius/Hückel concept is extended to pericyclic reactions involving transition metals. While sigmatropic hydrogen shifts in parent hydrocarbons are either uniquely antarafacial or suprafacial, we have shown by theoretical orbital topology considerations and quantum chemical computations at the DFT level that both modes of stereoselectivity must become allowed in the same system as a consequence of Craig–Möbius-type orbital arrays, in which a transition metal d orbital induces a phase dislocation in metallacycles. This may have fundamental implications for the understanding of reactivity and bonding in organometallic chemistry.

17.
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedup of several folds. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential, and electric field distributions are calculated for the bovine mitochondrial supercomplex illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. © 2013 Wiley Periodicals, Inc.

18.
It has been claimed that quantum computers can mimic quantum systems efficiently, with polynomially scaling resources. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with an exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theory and experiment. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures of quantum simulation applied to quantum chemistry. We review not only theoretical proposals but also proof-of-principle experimental implementations on small quantum computers, which include the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although experimental development still lags behind theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry, surpassing purely classical computation.

19.
The general atomic and molecular electronic structure system (GAMESS) is a quantum chemistry package used in the first-principles modeling of complex molecular systems using density functional theory (DFT) as well as a number of other post-Hartree-Fock methods. Both DFT and time-dependent DFT (TDDFT) are of particular interest to the materials modeling community. Millions of CPU hours per year are expended by GAMESS calculations on high-performance computing systems; any substantial reduction in the time-to-solution for these calculations represents a significant saving in CPU hours. As part of this work, three areas for improvement were identified: (1) the exchange-correlation (XC) integration grid, (2) profiling and optimization of the DFT code, and (3) TDDFT parallelization. We summarize the work performed in these task areas and present the resulting performance improvement. These software enhancements are available in 12JAN2009R3 or later versions of GAMESS.

20.
In this work, we present a parallel approach to complete and restricted active space second-order perturbation theory (CASPT2/RASPT2). We also make an assessment of the performance characteristics of its particular implementation in the Molcas quantum chemistry programming package. Parallel scaling is limited by memory and I/O bandwidth instead of available cores. Significant time savings for calculations on large and complex systems can be achieved by increasing the number of processes on a single machine, as long as memory bandwidth allows, or by using multiple nodes with a fast, low-latency interconnect. We found that parallel efficiency drops below 50% when using 8-16 cores on the shared-memory architecture, or 16-32 nodes on the distributed-memory architecture, depending on the calculation. This limits the scalability of the implementation to a moderate number of processes. Nonetheless, calculations that took more than 3 days on a serial machine could be performed in less than 5 h on an InfiniBand cluster, where the individual nodes were not even capable of running the calculation because of memory and I/O requirements. This ensures the continuing study of larger molecular systems by means of CASPT2/RASPT2 through the use of the aggregated computational resources offered by distributed computing systems. © 2013 Wiley Periodicals, Inc.
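The reported efficiency drop (below 50% at 8-16 cores or 16-32 nodes) can be framed with Amdahl's law, lumping the memory and I/O bottlenecks into an effective serial fraction. This is an illustrative model only, not the paper's own analysis:

```python
def amdahl_speedup(serial_fraction, p):
    # Amdahl's law: speedup on p processes when a fixed fraction of the
    # work cannot be parallelized (here standing in for memory/I/O limits).
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def parallel_efficiency(serial_fraction, p):
    # Efficiency = speedup / p; drops as p grows for any nonzero
    # serial fraction, matching the qualitative behavior reported above.
    return amdahl_speedup(serial_fraction, p) / p
```

For example, an effective serial fraction of 10% already pushes efficiency to 40% on 16 processes, consistent in spirit with the moderate scalability the abstract describes.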
