89 search results (search time: 15 ms)
1.
A mixed parallel scheme that combines the message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores with shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by re-docking of X-ray ligands and by an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates that VinaLC has very good early recovery of actives. © 2013 Wiley Periodicals, Inc.
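The dispatch scheme described above is a standard MPI work-queue pattern. The sketch below illustrates that general pattern with mpi4py plus a thread pool per task; it is not VinaLC code, and `dock_ligand`, the message tags, and the task count are placeholder assumptions.

```python
# Minimal MPI master/worker work-queue sketch (illustrative only, not the
# VinaLC source). Rank 0 hands out ligand indices on demand; each worker
# handles one task at a time and uses a local thread pool, mimicking
# "one docking per node, many threads per docking". `dock_ligand` is a
# hypothetical placeholder for the real docking calculation.
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

TAG_TASK, TAG_DONE, TAG_STOP = 1, 2, 3

def dock_ligand(ligand_id, n_threads=4):
    # Placeholder "docking": each thread scores one fake pose; the best
    # (lowest) score is returned, as a real docking engine would do.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        scores = pool.map(lambda run: float((ligand_id * 31 + run) % 17) - 12.0,
                          range(n_threads))
        return min(scores)

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    if rank == 0:
        # Master: seed every worker with one task, then feed on demand.
        tasks = list(range(100))                 # e.g. 100 ligands to dock
        results, status, pending = {}, MPI.Status(), 0
        for w in range(1, size):
            if tasks:
                comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
                pending += 1
            else:
                comm.send(None, dest=w, tag=TAG_STOP)
        while pending:
            lig, score = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
            results[lig] = score
            pending -= 1
            src = status.Get_source()
            if tasks:
                comm.send(tasks.pop(), dest=src, tag=TAG_TASK)
                pending += 1
            else:
                comm.send(None, dest=src, tag=TAG_STOP)
        print(f"collected {len(results)} docking scores")
    else:
        # Worker: receive a ligand id, dock it with local threads, report back.
        status = MPI.Status()
        while True:
            lig = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            comm.send((lig, dock_ligand(lig)), dest=0, tag=TAG_DONE)

if __name__ == "__main__":
    main()
```

Run with, for example, `mpiexec -n 8 python docking_queue.py` (hypothetical file name), giving one master and seven workers that pull tasks as they become free.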
2.
Events with a large transverse momentum (>1 GeV/c) pion or nucleon have been selected from the data of a high-statistics pp bubble chamber experiment (√s = 6.84 GeV). Only events in which all secondary particles could be identified were used. One finds that fewer pions are produced in the azimuthal hemisphere containing the large transverse momentum particle than in the opposite hemisphere. An indication of coplanarity is found. Most pions associated with a large transverse momentum pion are emitted with small absolute c.m. rapidities, whereas those associated with large transverse momentum nucleons show some back-to-back structure. Various results of this investigation are similar to those obtained at the ISR. Most of the findings are compatible with predictions from an independent emission model.
3.
孙翠丽  魏东波  杨民 《光学技术》2007,33(3):345-347,351
The FDK fast algorithm for approximate three-dimensional ICT reconstruction is analyzed, together with the computational complexity of its serial and parallel execution. Using the MPI parallel programming environment, the original ICT reconstruction workflow is parallelized, parallel ICT reconstruction is implemented on a PC cluster, and the system is integrated. A solution for the system's application and a software-integration flow chart are presented. Experimental results show that serial and parallel reconstruction produce identical results, and that parallel reconstruction achieves satisfactory reconstruction times as well as good speedup and efficiency.
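Because FDK back-projection adds up independent contributions from each projection angle, an angle-wise MPI decomposition of the kind described here is natural. The sketch below shows that decomposition schematically; it is not the authors' implementation, and `filter_and_backproject` is a hypothetical stand-in for the real ramp filtering and weighted cone-beam back-projection.

```python
# Schematic angle-wise MPI parallelisation of an FDK-style reconstruction.
# Each rank back-projects its own subset of projection angles into a
# private volume; MPI.Reduce then sums the partial volumes on rank 0.
import numpy as np
from mpi4py import MPI

N_ANGLES, VOL_SHAPE = 360, (64, 64, 64)

def filter_and_backproject(volume, angle_index):
    # Placeholder: a real FDK step would ramp-filter the projection taken
    # at this angle and smear it back through `volume` with cone-beam
    # weighting. Here we just add a dummy contribution.
    volume += np.float32(1.0 / N_ANGLES)

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    partial = np.zeros(VOL_SHAPE, dtype=np.float32)
    # Static round-robin split of projection angles over ranks.
    for angle in range(rank, N_ANGLES, size):
        filter_and_backproject(partial, angle)
    full = np.zeros_like(partial) if rank == 0 else None
    comm.Reduce(partial, full, op=MPI.SUM, root=0)   # sum partial volumes
    if rank == 0:
        print("reconstructed volume, mean value:", float(full.mean()))

if __name__ == "__main__":
    main()
```

Each rank accumulates its own partial volume, so the only communication is a single collective reduction at the end, which is why this decomposition tends to scale well on a PC cluster.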
4.
Based on the parallel development environment of the SUN 5500 minicomputer, implementation methods for the message-passing model and the implicit-parallelism model are presented. Practical SUN MPI programming is analyzed through concrete examples, and the running times obtained with the different models and different parameters are compared. The results show that on the SUN 5500 both the MPI model and the implicit-parallelism model can considerably increase computation speed, and that MPI is superior in flexibility and in the degree of parallelism achieved.
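A running-time comparison of this kind boils down to measuring wall-clock time for the same workload under each model and forming a speedup ratio. A minimal way to take such a measurement on the MPI side is sketched below; the workload is a dummy sum of squares, not the computations used in the paper.

```python
# Minimal wall-clock timing of an MPI run, of the kind used to compare
# parallel models. Speedup is T_serial / T_parallel measured over
# separate runs; here we only time the parallel part.
import numpy as np
from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n = 10_000_000                       # dummy workload: sum of squares
    lo = rank * n // size
    hi = (rank + 1) * n // size
    comm.Barrier()                       # start all ranks together
    t0 = MPI.Wtime()
    local = float(np.sum(np.arange(lo, hi, dtype=np.float64) ** 2))
    total = comm.reduce(local, op=MPI.SUM, root=0)
    t1 = MPI.Wtime()
    elapsed = comm.reduce(t1 - t0, op=MPI.MAX, root=0)  # slowest rank
    if rank == 0:
        print(f"{size} processes: sum={total:.3e}, time={elapsed:.3f}s")

if __name__ == "__main__":
    main()
```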
5.
For the research and design of the ADS granular-flow target concept, the Institute of Modern Physics, CAS has developed the Monte Carlo simulation software GMT (GPU-accelerated Monte Carlo Transport program). To improve the computational efficiency of GMT, the application and development of MPI within GMT were studied: large-scale random numbers are distributed among the processes, and fast file reading and writing replaces the MPI data-communication functions, which greatly improves computational efficiency. Calculations of different sizes were performed to study the relationships among the number of processes, speedup, and efficiency, determining the maximum number of processes that still yields acceleration and the number of processes at which parallel efficiency peaks; this provides a scientific basis for choosing an optimal trade-off between computing resources and computational efficiency. The successful application of MPI in GMT makes full and efficient use of the computing resources, greatly improves computational efficiency, alleviates the long run times and instability of large-scale Monte Carlo event simulations, and has played an important role in large-scale scan calculations for the spallation target.
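The two efficiency measures mentioned above (independent random streams for each process, and per-process file output in place of MPI data communication) can be sketched in general terms as follows. This is not the GMT implementation; NumPy's SeedSequence and the hit-or-miss "events" are stand-ins chosen purely for illustration.

```python
# Sketch: independent random-number streams per MPI rank, with results
# written to per-rank files instead of being sent through MPI messages.
# Illustrative only; GMT's actual RNG and tally scheme may differ.
import numpy as np
from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # SeedSequence.spawn gives statistically independent child streams,
    # so ranks never reuse each other's random numbers.
    root_seq = np.random.SeedSequence(20240101)
    rng = np.random.default_rng(root_seq.spawn(size)[rank])
    n_events = 1_000_000 // size
    # Dummy Monte Carlo "events": hit-or-miss estimate of pi/4.
    xy = rng.random((n_events, 2))
    hits = int(np.count_nonzero((xy ** 2).sum(axis=1) < 1.0))
    # Write the local tally to a per-rank file; a post-processing step
    # merges the files, avoiding collective communication entirely.
    with open(f"tally_rank{rank:05d}.txt", "w") as f:
        f.write(f"{hits} {n_events}\n")

if __name__ == "__main__":
    main()
```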
6.
Brownian dynamics simulations on CPU and GPU with BD_BOX
There has been growing interest in simulating biological processes under in vivo conditions due to recent advances in experimental techniques dedicated to studying single-particle behavior in crowded environments. We have developed a software package, BD_BOX, for multiscale Brownian dynamics simulations. BD_BOX can simulate either single molecules or multicomponent systems of diverse, interacting molecular species using flexible, coarse-grained bead models. BD_BOX is written in C and employs modern computer architectures and technologies; these include MPI for distributed-memory architectures, OpenMP for shared-memory platforms, the NVIDIA CUDA framework for GPGPU computing, and SSE vectorization for the CPU.
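For context on what a Brownian dynamics propagator computes, the sketch below implements the simplest free-draining update step (no hydrodynamic interactions) for a set of beads. It is a textbook illustration rather than BD_BOX code, and the harmonic force field, units, and parameters are arbitrary placeholders.

```python
# Free-draining Brownian dynamics step for N beads (no hydrodynamic
# interactions): dr = (D*dt/kBT)*F + sqrt(2*D*dt)*gaussian.
# Textbook illustration only, not the BD_BOX implementation.
import numpy as np

KBT = 1.0          # thermal energy in reduced units
D = 0.1            # bead diffusion coefficient
DT = 1e-3          # time step

def forces(pos):
    # Placeholder force field: harmonic springs pulling every bead
    # toward the origin. A real model would evaluate bonded and
    # non-bonded terms of the coarse-grained force field here.
    return -1.0 * pos

def bd_step(pos, rng):
    drift = (D * DT / KBT) * forces(pos)
    noise = np.sqrt(2.0 * D * DT) * rng.standard_normal(pos.shape)
    return pos + drift + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.standard_normal((100, 3))      # 100 beads in 3D
    for _ in range(1000):
        pos = bd_step(pos, rng)
    print("mean squared distance from origin:",
          float((pos ** 2).sum(axis=1).mean()))
```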
7.
The family of expectation-maximization (EM) algorithms provides a general approach to fitting flexible models for large and complex data. The expectation (E) step of EM-type algorithms is time-consuming in massive data applications because it requires multiple passes through the full data. We address this problem by proposing an asynchronous and distributed generalization of the EM called the distributed EM (DEM). Using DEM, existing EM-type algorithms are easily extended to massive data settings by exploiting the divide-and-conquer technique and widely available computing power, such as grid computing. The DEM algorithm reserves two groups of computing processes called workers and managers for performing the E step and the maximization step (M step), respectively. The samples are randomly partitioned into a large number of disjoint subsets and are stored on the worker processes. The E step of the DEM algorithm is performed in parallel on all the workers, and every worker communicates its results to the managers at the end of its local E step. The managers perform the M step after they have received results from a γ-fraction of the workers, where γ is a fixed constant in (0, 1]. The sequence of parameter estimates generated by the DEM algorithm retains the attractive properties of EM: convergence of the sequence of parameter estimates to a local mode and a linear global rate of convergence. Across diverse simulations focused on linear mixed-effects models, the DEM algorithm is significantly faster than competing EM-type algorithms while having similar accuracy. The DEM algorithm maintains its superior empirical performance on a movie ratings database consisting of 10 million ratings. Supplementary material for this article is available online.
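To make the worker/manager protocol concrete, here is a toy sketch of the asynchronous pattern just described: workers compute E-step sufficient statistics on their local data shards, and the manager refits as soon as a γ-fraction of them has reported, reusing the most recent statistics from the rest. The model fitted here is a two-component Gaussian mixture with unit variances and equal weights, far simpler than the linear mixed-effects models in the article, and all names and tags are our own assumptions.

```python
# Toy asynchronous worker/manager EM (DEM-style) sketch with one manager
# (rank 0). Workers hold disjoint data shards and compute E-step
# sufficient statistics; the manager performs an M step as soon as a
# gamma-fraction of workers have reported, reusing the latest (possibly
# stale) statistics from the others. Two-component Gaussian mixture with
# unit variances and equal weights; not the article's mixed-effects model.
import math
import numpy as np
from mpi4py import MPI

TAG_STATS, TAG_PARAMS, TAG_STOP = 1, 2, 3
GAMMA, N_ITER = 0.5, 50
PARAMS0 = (-1.0, 1.0)                      # initial component means

def e_step(x, params):
    mu1, mu2 = params
    p1 = np.exp(-0.5 * (x - mu1) ** 2)
    p2 = np.exp(-0.5 * (x - mu2) ** 2)
    r = p1 / (p1 + p2)                     # responsibility of component 1
    return np.array([r.sum(), (r * x).sum(), (1 - r).sum(), ((1 - r) * x).sum()])

def m_step(stats_per_worker):
    s = np.sum(list(stats_per_worker), axis=0)
    return (s[1] / s[0], s[3] / s[2])      # weighted mean of each component

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    if rank == 0:                          # manager
        n_workers = size - 1
        if n_workers == 0:
            raise SystemExit("run with at least 2 MPI processes")
        need = max(1, math.ceil(GAMMA * n_workers))
        stats, params, status = {}, PARAMS0, MPI.Status()
        for _ in range(N_ITER):
            waiting = []
            while len(waiting) < need:     # wait for a gamma-fraction of workers
                s = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_STATS, status=status)
                stats[status.Get_source()] = s
                waiting.append(status.Get_source())
            params = m_step(stats.values())          # M step on latest stats
            for w in waiting:                        # only reporters get updates
                comm.send(params, dest=w, tag=TAG_PARAMS)
        print("estimated means:", params)
        for _ in range(n_workers):                   # drain and stop all workers
            comm.recv(source=MPI.ANY_SOURCE, tag=TAG_STATS, status=status)
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
    else:                                  # worker with its own data shard
        rng = np.random.default_rng(rank)
        x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
        params, status = PARAMS0, MPI.Status()
        while True:
            comm.send(e_step(x, params), dest=0, tag=TAG_STATS)
            msg = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            params = msg

if __name__ == "__main__":
    main()
```

Running with, e.g., `mpiexec -n 5 python dem_sketch.py` (hypothetical file name) gives one manager and four workers; with GAMMA = 0.5 the manager updates the means after hearing from any two of them.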
8.
In this paper we study correlations in multiparticle final states from pp interactions at 12 and 24 GeV/c. In an attempt to distinguish true dynamical correlations from the consequences of kinematics together with pT damping and the leading-particle effect, we compare the data with an independent-emission model which reproduces the single-particle spectra and also with a model that simulates a fragmentation mechanism. We investigate the forward-backward particle configurations and in particular the multiplicity imbalance and charge transfer, defining forward-backward by the largest rapidity gap as well as simply by c.m.s. hemispheres. We also study clustering by looking at distributions of the dispersions in longitudinal rapidity. From the comparison of the data with the models we find clear evidence for dynamical correlations of a sort one would expect from fragmentation-type mechanisms. We also find indications of non-fragmentation formation of neutral meson clusters.
9.
The inclusive production of Σ±(1385) is studied in pp interactions at 12 and 24 GeV/c. In this energy range, the inclusive cross sections for Σ+(1385) and Σ−(1385) rise from 0.20 ± 0.03 mb to 0.28 ± 0.03 mb and from 0.07 ± 0.02 mb to 0.12 ± 0.02 mb, respectively. The decays of Σ±(1385) account for ~20% of all observed Λ hyperons. The pT2 distributions are compatible with an exponential decrease and the slopes are in agreement with a common value of B ~ 3 (GeV/c)−2. The longitudinal spectra are significantly different: Σ−(1385) is mainly produced in the central region, whereas proton fragmentation contributes strongly to Σ+(1385) production.
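For reference, the quoted slope B parametrizes the usual exponential fit to the transverse-momentum spectrum:

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}p_T^{2}} \propto \exp\!\left(-B\,p_T^{2}\right),
\qquad B \simeq 3\ (\mathrm{GeV}/c)^{-2}.
```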
10.
In an experiment with the hydrogen bubble chamber BEBC at CERN, the multiplicities of hadrons produced in νp and ν̄p interactions have been investigated. Results are presented on the multiplicities of charged hadrons and neutral pions, on forward and backward multiplicities of charged hadrons, and on correlations between forward and backward multiplicities. Comparisons are made with hadronic reactions and e+e− annihilation. In the framework of the quark-parton model the data imply similar charged multiplicities for the fragments of a u- and a d-quark, and a larger multiplicity for the fragments of a uu-diquark than for a ud-diquark. The correlation data suggest independent fragmentation of the quark and the diquark for hadronic masses above ~7 GeV and local charge compensation within an event.