41.
An efficient computing framework, PFlows, for fully resolved direct numerical simulation of particle-laden flows was accelerated on NVIDIA graphics processing units (GPUs) and GPU-like accelerator (DCU) cards. The framework couples the lattice Boltzmann method (LBM) for fluid flow with the immersed boundary method (IBM) for fluid–particle interaction and the discrete element method (DEM) for particle collisions, using two fixed Eulerian meshes and one moving Lagrangian point mesh, respectively. All parts are accelerated by a fine-grained parallelism technique, implemented with CUDA on GPUs and with HIP on DCU cards: the calculation for each fluid grid cell, each immersed boundary point, each particle motion, and each particle-pair collision is handled by one computing thread. Coalesced memory accesses to the LBM distribution functions, stored in a Structure-of-Arrays data layout, maximize the utilization of hardware bandwidth. A parallel reduction in shared memory over the immersed-boundary-point data reduces global-memory traffic when integrating the particle hydrodynamic force. MPI is further used for computing on heterogeneous architectures with multiple CPUs and GPUs/DCUs, and communication between adjacent processors is hidden by overlapping it with computation. Two benchmark cases, a pure fluid flow and a particle-laden flow, were conducted for code validation. On a single accelerator, a V100 GPU achieves a 7.1–11.1x speedup and a single DCU a 5.6–8.8x speedup over a single 32-core Xeon CPU chip. On multiple accelerators, the parallel efficiency is 0.5–0.8 for weak scaling and 0.68–0.9 for strong scaling on up to 64 DCU cards, even for a dense flow (φ = 20%). The peak performance reaches 179 giga lattice updates per second (GLUPS) on 256 DCU cards with 1 billion grid cells and 1 million particles.
Finally, a large-scale simulation of a gas–solid flow with 1.6 billion grid cells and 1.6 million particles was conducted using only 32 DCU cards. This simulation shows that the present framework is promising for simulations of large-scale particle-laden flows in the upcoming exascale computing era.
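The Structure-of-Arrays layout mentioned in the abstract can be illustrated with a minimal index-arithmetic sketch (not the PFlows source code; the names and tiny sizes are hypothetical). With one thread per fluid cell, SoA stores the q-th distribution function of all cells contiguously, so adjacent threads touch adjacent addresses, which is what makes the accelerator's global-memory accesses coalesce:

```python
Q = 19   # D3Q19 lattice: 19 distribution functions per cell
N = 8    # number of fluid cells (tiny, for illustration only)

def idx_aos(cell, q):
    # Array of Structures: all Q values of one cell are contiguous.
    # Adjacent cells reading the same q are Q elements apart.
    return cell * Q + q

def idx_soa(cell, q):
    # Structure of Arrays: the q-th value of all cells is contiguous.
    # Adjacent cells (= adjacent GPU threads) are 1 element apart,
    # so their loads coalesce into few wide memory transactions.
    return q * N + cell

# Stride between neighboring cells accessing the same direction q:
aos_stride = idx_aos(1, 0) - idx_aos(0, 0)   # strided: Q = 19
soa_stride = idx_soa(1, 0) - idx_soa(0, 0)   # unit stride: 1
```

The same layout choice applies unchanged under HIP on DCU cards, since both programming models map consecutive thread IDs onto consecutive lanes of a warp/wavefront.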
42.
Combining Sn(II) or Sb(III) with π-conjugated or non-π-conjugated units has produced birefringent crystals with birefringence ranging from 0.005 to 0.468 at 1064 nm. Introducing Sn(II) or Sb(III) into crystals has proven a feasible strategy for enlarging birefringence, which not only promotes the miniaturization of fabricated devices but also effectively modulates polarized light. Herein, recently discovered Sn(II)- and Sb(III)-based birefringent crystals are summarized, including their crystal structures and optical properties, especially birefringence. This review also discusses how the stereochemically active lone pairs of Sn(II) and Sb(III) influence optical anisotropy. We hope this work provides a clear perspective on the crystal chemistry of Sn(II)- and Sb(III)-based optical functional crystals and promotes the development of new birefringent crystals with large optical anisotropy.