Similar Articles
20 similar articles found.
1.
In this Letter, a new approach using optical tape for high-capacity multilayer data storage is proposed. We show that a 5 cm long, 2 cm wide strip of soft, transparent optical tape can be used for two-photon three-dimensional bit data storage. We successfully demonstrate writing and reading of six layers of data with a transverse bit separation of 2 μm and an axial separation of 2.5 μm in a tetraphenylethylene-doped photobleaching polymer. The fluorescence intensity is insensitive to the storage depth within the photopolymer matrix. The optical tape we put forward could therefore enable truly large-scale data storage in the future, much like magnetic tape, and paves a novel way toward solving big-data storage problems.
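A back-of-the-envelope capacity estimate follows directly from the geometry quoted in the abstract; the sketch below simply multiplies out those numbers (tape area, bit separation, layer count) and is not taken from the paper itself.

```python
# Rough capacity estimate for the demonstrated optical tape,
# using only the dimensions quoted in the abstract.
tape_length_m = 5e-2      # 5 cm
tape_width_m = 2e-2       # 2 cm
bit_sep_m = 2e-6          # 2 um transverse bit separation
layers = 6                # demonstrated number of layers

bits_per_layer = (tape_length_m / bit_sep_m) * (tape_width_m / bit_sep_m)
total_bits = bits_per_layer * layers
print(f"~{total_bits:.2e} bits (~{total_bits / 8 / 1e6:.0f} MB)")
# -> ~1.5e9 bits, i.e. roughly 190 MB for the six demonstrated layers
```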

2.
The Collider Detector at Fermilab (CDF) experiment records and analyses proton–antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year, and the run is expected to last over two years. One of CDF's main data-handling strategies for Run II is to hide all tape access from the user and to facilitate sharing of data, and thus of disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF, the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
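The abstract does not give the manager's internals; the sketch below only illustrates the general idea — filesets as the unit of management, access tracking, and LRU-style eviction with tape staging. All names here (DiskInventoryManager, stage_from_tape, etc.) are hypothetical, not CDF's actual API.

```python
from collections import OrderedDict

class DiskInventoryManager:
    """Minimal sketch: track filesets on disk, evict least-recently-used
    ones when space runs out, and stage missing filesets from tape."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self.on_disk = OrderedDict()  # fileset name -> size; order = recency

    def access(self, fileset, size_gb):
        if fileset in self.on_disk:
            self.on_disk.move_to_end(fileset)       # mark as recently used
        else:
            while self.used_gb + size_gb > self.capacity_gb and self.on_disk:
                victim, vsize = self.on_disk.popitem(last=False)
                self.used_gb -= vsize               # evict LRU fileset
            self.stage_from_tape(fileset)           # hypothetical tape call
            self.on_disk[fileset] = size_gb
            self.used_gb += size_gb

    def stage_from_tape(self, fileset):
        print(f"staging {fileset} from tape...")    # placeholder for mt-tools
```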

3.
Large High Energy and Nuclear Physics (HENP) databases are commonly stored on robotic tape systems for cost reasons. Selected subsets of the data are later cached on disk for analysis or data mining. Because of the relatively long time needed to mount, seek, and read a tape, it is important to minimize the number of times data are cached to disk. Having too little disk cache forces files to be removed from disk prematurely, reducing the potential for sharing them with other users. Similarly, having too few tape drives makes poor use of a large disk cache, as throughput from the tape system becomes the bottleneck. Balancing the tape and disk resources therefore depends on the pattern of requests to the data. In this paper, we describe a simulation that characterizes such a system in terms of its resources and request patterns. We learn from the simulation which parameters affect the performance of the system the most. We also observe that there is a point beyond which it is not worth investing in additional resources, as the benefit becomes too marginal. We call this point the "point-of-no-benefit" (PNB), and show that using this concept we can more easily discover how various parameters relate to the performance of the system.
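The point-of-no-benefit idea can be illustrated with a toy parameter sweep: keep adding a resource (say, tape drives) and stop when the marginal throughput gain drops below a threshold. The diminishing-returns model below is made up for illustration and is not the paper's simulation.

```python
def throughput(drives, disk_gb):
    # Toy diminishing-returns model (NOT the paper's simulation):
    # throughput saturates once drives stop being the bottleneck.
    return min(drives * 30.0, disk_gb * 0.5)  # MB/s

def point_of_no_benefit(disk_gb, min_gain=5.0):
    """Add tape drives until the marginal gain falls below min_gain MB/s."""
    drives, prev = 1, throughput(1, disk_gb)
    while True:
        gain = throughput(drives + 1, disk_gb) - prev
        if gain < min_gain:
            return drives            # the PNB for this disk cache size
        drives += 1
        prev += gain

print(point_of_no_benefit(disk_gb=600))  # -> 10 drives in this toy model
```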

4.
温淑焕, Chinese Physics B, 2009, 18(10): 4222–4228
Collision avoidance is always a difficult part of path planning for a mobile robot. In this paper, a virtual force field between a mobile robot and an obstacle is formed and regulated to maintain a desired distance by a hybrid force control algorithm. Since uncertainties from the robot dynamics and the obstacle degrade the performance of a collision avoidance task, intelligent control is used to compensate for them. A radial basis function (RBF) neural network is used to regulate the force field to maintain an accurate distance between the robot and an obstacle, and simulation studies are conducted to confirm that the proposed algorithm is effective.
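As a rough illustration of the RBF ingredient (the paper's actual controller is not reproduced here), the sketch below shows an RBF network mapping a robot–obstacle distance error to a compensating force term; the basis-function centers, widths, and learning rate are arbitrary choices.

```python
import numpy as np

class RBFCompensator:
    """Toy radial-basis-function network: maps a distance error to a
    compensating force. Centers/widths/weights are illustrative only."""

    def __init__(self, centers, width=0.5, lr=0.05):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.weights = np.zeros_like(self.centers)
        self.lr = lr

    def _phi(self, e):
        # Gaussian basis functions centered on the distance-error grid.
        return np.exp(-((e - self.centers) ** 2) / (2 * self.width ** 2))

    def force(self, e):
        return float(self.weights @ self._phi(e))

    def adapt(self, e):
        # Gradient-style weight update driving the distance error to zero.
        self.weights += self.lr * e * self._phi(e)

rbf = RBFCompensator(centers=np.linspace(-1.0, 1.0, 9))
for _ in range(100):
    error = 0.3            # stand-in for (desired - measured) distance
    rbf.adapt(error)
print(rbf.force(0.3))      # learned compensating force for that error
```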

5.
In reactor neutrino experiments, the analysis of time correlations between different physical events is an important task. Such analysis can help to understand the physical mechanisms of the signal and background events as well as the details of event selection and background estimation. This study investigates a "sampling and mixing" method used for producing large MC data samples for the Daya Bay reactor neutrino experiment. We designed a simple, generic mixing algorithm and generated large MC data samples for physics analysis from several samples according to their respective event rates. Basic plots based on the mixed data are shown.
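The abstract describes drawing events from several MC samples in proportion to their event rates. A generic version of such a mixer (not Daya Bay's actual code) can be sketched as below: each source is picked with probability proportional to its rate, and the merged stream gets Poisson (exponential inter-arrival) timestamps.

```python
import random

def mix_events(sources, rates, n_events, seed=0):
    """Generic 'sampling and mixing' sketch (not the Daya Bay software):
    sources -- dict name -> iterator of MC events
    rates   -- dict name -> event rate in Hz
    Yields (timestamp, source_name, event) in time order."""
    rng = random.Random(seed)
    names = list(sources)
    weights = [rates[n] for n in names]
    total_rate = sum(weights)
    t = 0.0
    for _ in range(n_events):
        t += rng.expovariate(total_rate)          # Poisson arrivals overall
        name = rng.choices(names, weights)[0]     # pick a source by rate
        yield t, name, next(sources[name])

signal = iter(range(10**6))          # stand-ins for real MC samples
background = iter(range(10**6))
stream = mix_events({"sig": signal, "bkg": background},
                    {"sig": 0.01, "bkg": 1.0}, n_events=5)
for t, name, ev in stream:
    print(f"t={t:8.2f}s  {name}  event#{ev}")
```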

6.

7.
8.
FBSNG [1] is a redesigned version of Farm Batch System (FBS [1]), which was developed as a batch process management system for off-line Run II data processing at FNAL. FBSNG is designed for UNIX computer farms and is capable of managing up to 1000 nodes in a single farm. FBSNG allows users to start arrays of parallel processes on one or more farm computers. It uses a simplified abstract resource-counting method for load balancing between computers. The resource-counting approach allows FBSNG to be a simple and flexible tool for farm resource management. FBSNG scheduler features include guaranteed and controllable "fair-share" scheduling. FBSNG is easily portable across different flavors of UNIX. The system has been successfully used at Fermilab, as well as by off-site collaborators, for several years on farms of different sizes and platforms for off-line data processing, Monte Carlo data generation, and other tasks.
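Abstract resource counting — each node advertises integer capacities, each job declares what it consumes — can be sketched generically as below. This is an illustration of the idea only, not FBSNG's implementation; node and resource names are made up.

```python
class Node:
    def __init__(self, name, **capacities):
        self.name = name
        self.free = dict(capacities)   # abstract counters, e.g. cpu=4, io=2

    def fits(self, need):
        return all(self.free.get(r, 0) >= n for r, n in need.items())

    def take(self, need):
        for r, n in need.items():
            self.free[r] -= n

def schedule(job_need, nodes):
    """Place a job on the first node with enough free resource counters.
    Generic sketch of abstract resource counting, not FBSNG's algorithm."""
    for node in nodes:
        if node.fits(job_need):
            node.take(job_need)
            return node.name
    return None  # job waits until counters are released

farm = [Node("fnpc01", cpu=4, scratch=2), Node("fnpc02", cpu=8, scratch=4)]
print(schedule({"cpu": 6, "scratch": 1}, farm))  # -> fnpc02
```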

9.
With increasing physical event rates and numbers of electronic channels, traditional readout schemes face the challenge of improving readout speed under the limited bandwidth of the crate backplane. In this paper, a high-speed data readout method based on Ethernet is presented, making each readout module capable of transmitting data to the DAQ. Explicitly parallel data transmission and a distributed network architecture give the readout system the advantage of adapting to the varying requirements of particle physics experiments. Furthermore, to guarantee readout performance and flexibility, a standalone embedded CPU system is utilized for network protocol stack processing. To receive the customized data formats and protocols of the front-end electronics, a field programmable gate array (FPGA) is used for logic reconfiguration. To optimize the interface and improve the data throughput between the CPU and the FPGA, a sophisticated method based on SRAM is presented in this paper. For the purpose of evaluating this high-speed readout method, a simplified readout module was designed and implemented. Test results show that this module can support up to 70 Mbps of data throughput from the readout module to the DAQ.
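A minimal way to exercise such a module from the DAQ side is a TCP receiver that measures sustained throughput. The sketch below is a hypothetical test harness (the port number and framing are assumptions, not the paper's protocol); it just counts received bytes per second.

```python
import socket
import time

def measure_throughput(host="0.0.0.0", port=5000, duration_s=10):
    """Receive data over TCP and report average throughput in Mbps.
    Hypothetical DAQ-side harness, not the readout module's software."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()                 # readout module connects here
    total, start = 0, time.time()
    while time.time() - start < duration_s:
        chunk = conn.recv(65536)
        if not chunk:
            break                          # sender closed the connection
        total += len(chunk)
    conn.close()
    srv.close()
    elapsed = time.time() - start
    print(f"{total * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f} s")
```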

10.
This article is about a piece of middleware that converts a dumb, tape-based Tertiary Storage System into a multi-petabyte random-access device with thousands of channels. Using typical caching mechanisms, the software optimizes access to the underlying storage system and makes better use of possibly expensive drives and robots, or allows cheap and slow devices to be integrated without introducing unacceptable performance degradation. In addition, using the standard NFS2 protocol, the dCache provides a unique view into the storage repository, hiding the physical location of the file data, whether cached or on tape only. Bulk data transfer is supported through the Kerberized FTP protocol and a C API providing POSIX file access semantics. Dataset staging and disk space management are performed invisibly to the data clients. The project is a joint DESY–Fermilab effort to overcome limitations in the usage of tertiary storage resources common to many HEP labs. The distributed cache nodes may range from high-performance SGI machines to commodity CERN Linux-IDE-like file servers. Different cache nodes are assumed to have different affinities to particular storage groups or file sets. Affinities may be defined manually or calculated by the dCache based on topology considerations. Cache nodes may have different disk space management policies to match the large variety of applications, from raw data to user analysis data pools.
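Affinity-based placement can be sketched generically: each cache node carries a score per storage group, and a file lands on the node with the highest affinity, breaking ties by free space. This illustrates the concept only and is not dCache's actual selection logic; all names are hypothetical.

```python
def pick_cache_node(storage_group, nodes):
    """Pick the cache node with the highest affinity for the file's
    storage group, breaking ties by free space. Illustrative only --
    not dCache's actual pool-selection logic.

    nodes: list of dicts like
      {"name": "pool1", "free_gb": 800, "affinity": {"raw": 2, "user": 0}}
    """
    candidates = [n for n in nodes if n["affinity"].get(storage_group, 0) > 0]
    if not candidates:
        candidates = nodes                      # fall back to any node
    return max(candidates,
               key=lambda n: (n["affinity"].get(storage_group, 0),
                              n["free_gb"]))["name"]

pools = [{"name": "pool1", "free_gb": 800, "affinity": {"raw": 2}},
         {"name": "pool2", "free_gb": 300, "affinity": {"raw": 2, "user": 1}}]
print(pick_cache_node("raw", pools))   # -> pool1 (same affinity, more space)
```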

11.
Jefferson Lab has implemented a scalable, distributed, high-performance mass storage system, JASMine. The system is implemented entirely in Java, provides access to robotic tape storage, and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well-defined interfaces in order to provide integration with grid-based services. The system is in production, archiving 1 TB per day from the experiments and currently moving over 2 TB per day in total. This paper describes the architecture of JASMine, discusses the rationale for building the system, and presents a transparent third-party file replication service that moves data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

12.
The ZEUS experiment has migrated its reconstruction and analysis farms to a PC-based environment. More than one year of experience has been acquired with the successful operation of an analysis farm designed for several hundred users. Specially designed software has been used to provide fast and reliable access to large amounts of data (30 TB in total). After the ongoing upgrade of the HERA luminosity, higher requirements will arise in terms of data storage capacity and throughput rate. The need for a bigger disk cache has led to the consideration of solutions based on commodity technology, and PC-based file servers are being tested as a cost-effective storage system. In this article we present the hardware and software solutions deployed and discuss their performance, scalability, and maintenance issues.

13.
高娃, 查富生, 宋宝玉, 李满天, Chinese Physics B, 2014, 23(1): 010701
This paper develops a fast filtering algorithm based on vibration systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with various other methods, such as the fast wavelet transform algorithm, the particle filtering method, and our previously developed single-degree-of-freedom vibration system filtering algorithm, in both simulation and practical tests. The comparisons indicate that a significant advantage of the proposed algorithm is its extremely fast filtering speed combined with good filtering performance. The algorithm is then applied to the navigation and positioning system of a micro motion robot, whose signal preprocessing has demanding real-time requirements. The preprocessed data are used to estimate the heading angle error and the attitude angle error of the micro motion robot, and these estimation experiments illustrate the high practicality of the proposed fast filtering algorithm.
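The new algorithm itself is not given in the abstract. As a hedged illustration of the family it belongs to, the sketch below implements the baseline it is compared against — a single-degree-of-freedom vibration system used as a low-pass filter (a damped mass–spring system driven by the noisy signal). The natural frequency and damping ratio are arbitrary tuning choices.

```python
import math

def sdof_vibration_filter(signal, dt, f_n=5.0, zeta=0.9):
    """Filter a sampled signal with a single-degree-of-freedom vibration
    system (damped mass-spring driven by the input). This sketches the
    baseline method named in the abstract, not the paper's new algorithm;
    f_n (natural frequency, Hz) and zeta (damping ratio) are arbitrary."""
    wn = 2 * math.pi * f_n
    x, v = signal[0], 0.0          # displacement tracks the filtered value
    out = []
    for u in signal:
        a = wn * wn * (u - x) - 2 * zeta * wn * v   # spring + damper forces
        v += a * dt                # semi-implicit Euler integration
        x += v * dt
        out.append(x)
    return out

noisy = [math.sin(0.05 * k) + 0.2 * math.sin(3.0 * k) for k in range(400)]
smoothed = sdof_vibration_filter(noisy, dt=0.01)
```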

14.
The ALICE experiment [1] at the Large Hadron Collider (LHC) at CERN will detect up to 20,000 particles in a single Pb–Pb event, resulting in a data rate of ~75 MByte/event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and sub-events (High Level Trigger) or by compressing the data efficiently with modeling techniques. Both require fast parallel pattern recognition. One possible solution for processing the detector data at such rates is a farm of clustered SMP nodes, based on off-the-shelf PCs and connected by a high-bandwidth, low-latency network.
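The bandwidth-limited event rate follows directly from the quoted event size. For instance, a hypothetical 1.25 GB/s mass-storage bandwidth (a figure not given in the abstract) would cap uncompressed Pb–Pb recording at roughly 17 events per second:

```python
# Event rate allowed by a given storage bandwidth, using the quoted
# ~75 MB/event. The 1.25 GB/s bandwidth figure is purely illustrative.
event_size_mb = 75.0
storage_bw_mb_s = 1250.0                      # hypothetical
max_event_rate = storage_bw_mb_s / event_size_mb
print(f"{max_event_rate:.1f} Hz")             # -> ~16.7 Hz without
                                              #    triggering or compression
```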

15.
The β+/EC decay of the doubly odd nucleus ^176Ir has been studied via the ^146Nd(^35Cl, 5n)^176Ir heavy-ion fusion evaporation reaction at 210 MeV bombarding energy. With the aid of a helium-jet recoil fast tape transport system, the reaction products were transported to a low-background location for measurement. Based on the data analysis, the previously known γ rays from the decay of ^176Ir are confirmed. Moreover, three new excited levels and ten new γ rays are assigned to ^176Os. The time spectra of typical γ rays clearly indicate a long-lived low-spin isomer in ^176Ir.

16.
The computing environment of our Computing Center at IHEP currently uses a SAS (Server Attached Storage) architecture, attaching all storage devices directly to the machines. This storage strategy cannot properly meet the requirements of our BEPCII/BESIII project. We have therefore designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software, and a web-based data management system. Features of our system include cross-platform data sharing, fast data access, high scalability, and convenient storage and data management.

17.
The heavy elements in the Universe are formed during the s- and r-processes, mainly in AGB stars and supernovae, respectively. Simulations of s- and r-nucleosynthesis depend critically on the neutron capture and weak decay rates of all the nuclei on the reaction chain. The present work systematically analyzes the neutron capture rates (cross sections) for the s-process nuclei, including ~3000 rates on ~200 nuclei. Network calculations for the constant-temperature s-process have been performed using different data sets as nuclear inputs, to investigate the uncertainties in the predicted s-abundances. We show that the available cross sections for neutron capture on many s-process nuclei still carry large uncertainties, which lead to low accuracy in the determination of s-process isotope abundances. We also analyze, by year, the neutron capture cross section data from previous work for the same isobar nuclei. This analysis indicates that the s-process has been studied for more than fifty years, with two stages of intense research around 1976 and 2002. The needs and opportunities for future experiments and theoretical tools to remove the existing shortcomings in the neutron capture rates are highlighted.
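Along a pure capture chain, a constant-temperature s-process network reduces to coupled linear equations, dN_A/dt = n_n⟨σv⟩_(A−1) N_(A−1) − n_n⟨σv⟩_A N_A. The toy integrator below propagates abundances along a short chain with made-up capture rates, just to show the structure of such a network calculation; it is not the paper's code.

```python
def sprocess_chain(rates, n_n, dt, steps, seed_abundance=1.0):
    """Toy s-process capture chain with made-up rates (units arbitrary):
    dN[i]/dt = n_n * rates[i-1] * N[i-1] - n_n * rates[i] * N[i].
    Explicit Euler integration is fine here for small enough dt."""
    N = [seed_abundance] + [0.0] * (len(rates) - 1)
    for _ in range(steps):
        flows = [n_n * r * n for r, n in zip(rates, N)]  # capture out of i
        for i in range(len(N)):
            N[i] += dt * ((flows[i - 1] if i > 0 else 0.0) - flows[i])
    return N

# Four-isotope chain; in steady flow, the larger an isotope's capture
# rate, the faster it is destroyed, so its abundance is roughly ~1/rate.
print(sprocess_chain(rates=[1.0, 3.0, 0.5, 0.0], n_n=1.0,
                     dt=0.01, steps=2000))
```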

18.
As presented at the last CHEP conference, the BTeV triggering and data collection pose a significant challenge in construction and operation, generating 1.5 Terabytes/second of raw data from over 30 million detector channels. We report on facets of the DAQ and trigger farms, including the current design of the DAQ and especially its partitioning features to support commissioning of the detector. We are exploring collaborations with computer science groups experienced in fault-tolerant, dynamic real-time, and embedded systems to develop a system providing the extreme flexibility and high availability required of the heterogeneous trigger farm (~ten thousand DSPs and commodity processors). We describe directions in the following areas: system modeling and analysis using the Model Integrated Computing approach, to assist in the creation of domain-specific modeling, analysis, and program-synthesis environments for building complex, large-scale computer-based systems; system configuration management, including compilable design specifications for configurable hardware components, schedules, and communication maps; and a runtime environment with hierarchical fault detection/management, a system-wide infrastructure for rapidly detecting, isolating, filtering, and reporting faults, encapsulated in intelligent active entities (agents) that run on the DSPs, L2/3 processors, and other supporting processors throughout the system.
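The fault-detection idea can be illustrated with the simplest agent pattern: workers emit periodic heartbeats, and a monitoring agent flags any worker whose heartbeat goes stale. This generic sketch is not BTeV's software; node names and the timeout are hypothetical.

```python
import time

class HeartbeatMonitor:
    """Toy fault-detection agent: flags nodes whose heartbeats go stale.
    Generic pattern only -- not BTeV's actual agent infrastructure."""

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_seen = {}          # node name -> last heartbeat time

    def heartbeat(self, node):
        self.last_seen[node] = time.monotonic()

    def faulty_nodes(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout_s]

mon = HeartbeatMonitor(timeout_s=0.1)
mon.heartbeat("dsp-0042")
time.sleep(0.2)                      # dsp-0042 misses its heartbeat window
print(mon.faulty_nodes())            # -> ['dsp-0042']
```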

19.
Energy levels, radiative rates, oscillator strengths, and line strengths are reported for transitions among the lowest 97 levels of the (1s^2 2s^2 2p^6) 3s^2 3p^2, 3s^2 3p 3d, 3s 3p^3, 3p^4, 3s 3p^2 3d, and 3s^2 3d^2 configurations of Rb XXIV. A multiconfiguration Dirac–Fock (MCDF) method is adopted for the calculations. Radiative rates, oscillator strengths, and line strengths are provided for all electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2), and magnetic quadrupole (M2) transitions from the ground level to all 97 levels, although calculations are performed for a much larger number of levels. To assess the accuracy of the data, comparisons are made with similar data obtained from the Flexible Atomic Code (FAC) and with the available theoretical and experimental results. Our energy levels are found to be accurate to better than 1.2%. The calculated wavelengths lie in the EUV (extreme ultraviolet) and X-ray regions. Additionally, lifetimes for all 97 levels are obtained for the first time.
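For reference, a level lifetime follows from the radiative rates as the reciprocal of the total decay rate out of that level, summing the A-values over all E1, M1, E2, and M2 channels to lower levels (the abstract itself does not spell this out):

\[
\tau_j \;=\; \Bigl(\sum_{i} A_{j \to i}\Bigr)^{-1},
\]

where \(A_{j \to i}\) is the radiative rate for the transition from level \(j\) to a lower level \(i\).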

20.
We describe some recent results on isospin-breaking corrections that are relevant for predictions of the leading-order hadronic contribution to the muon anomalous magnetic moment, a_μ^{had,LO}, when using τ lepton data. When these corrections are applied to the new combined data on the π^±π^0 spectral function, the prediction for a_μ^{had,LO} based on τ lepton data moves closer to the one obtained using e^+e^- data.

