Similar Documents
10 similar documents found (search time: 15 ms)
1.
ARGO-YBJ, a Chinese-Italian collaboration, is about to finish the first step of the installation of this cosmic ray telescope, consisting of a single layer of RPCs placed at 4300 m elevation in Tibet. The detector will provide a detailed space-time picture of the shower front initiated by primaries with energies in the range 10 GeV-500 TeV. Data taking will start at the beginning of 2002 with a fraction of the detector installed; the detector will then be upgraded twice and completed at the end of 2003. In this paper we briefly describe the dataflow, the trigger organization, the three operational steps in data taking and the computing model used to process the data. The need for remote monitoring of the experiment is also touched upon. The processing power required for raw data reconstruction and for the Monte Carlo simulation is reported.

2.
In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously increased, and data distribution tools have been designed in order to reach a transfer rate of ~100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-Root [1] format and later in Objectivity [2] format. GRID tools will be used for remote job submission.
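As a rough cross-check of the ~100 TB/year target quoted above, the sustained network bandwidth it implies can be estimated as follows. This is a back-of-envelope sketch: the yearly volume is from the abstract, while the round-the-clock duty cycle is an assumption.

```python
# Back-of-envelope estimate: sustained bandwidth needed to move ~100 TB/year.
# Assumption (not from the abstract): transfers run around the clock all year.

TB = 1e12                       # bytes per terabyte (decimal convention)
volume_bytes = 100 * TB         # ~100 TB per year, as quoted in the abstract
seconds_per_year = 365 * 24 * 3600

rate_MB_s = volume_bytes / seconds_per_year / 1e6
rate_Mbit_s = rate_MB_s * 8

print(f"Sustained rate: {rate_MB_s:.1f} MB/s (~{rate_Mbit_s:.0f} Mbit/s)")
# ~3.2 MB/s, i.e. roughly 25 Mbit/s averaged over the year; real transfers
# need headroom for downtime, retransfers and peak periods.
```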

3.
The Italian Government has recently approved the construction of a National Center for Oncological Hadrontherapy (CNAO). TERA (Foundation for Oncological Hadrontherapy) will lead the high-technology projects of the CNAO, whose machine design is a spin-off to the medical world of the collaboration with CERN. The CERN EDMS (Engineering Data Management System) was initially launched at CERN to support the LHC project but has since become a general service available to all divisions and recognized experiments. As TERA is closely associated with CERN, TERA decided to profit from EDMS and to use it to support the ambitious Quality Assurance plan for the CNAO project. With this EDMS project, TERA transfers know-how developed in the HEP community to a social sector of major importance that also has high-density information management needs. The features available in the CERN EDMS system provide the tools for managing the complete lifecycle of any technical document, including a distributed approval process and a controlled, distributed collaborative work environment using the World Wide Web. The system allows the management of structures representing projects and related documents, including drawings, within working contexts and with a customizable release procedure. TERA is customizing CERN EDMS to document the CNAO project activities, to ensure that the medical accelerator and its auxiliary installations can be properly managed throughout their lifecycle, from design to maintenance and possibly dismantling. The technical performance requirements of EDMS are identical to those for the LHC and CERN in general. We describe what we have learned about how to set up an EDMS project, and how it benefits a challenging initiative like the CNAO project of the TERA collaboration. The knowledge managed by the system will facilitate later installations of similar centers (planned for Lyon and Stockholm) and will allow the reuse of experience gained in Italy.
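The document lifecycle with a release procedure described above can be pictured as a small state machine. The sketch below is purely illustrative: the states, transitions and class names are assumptions, not the actual CERN EDMS data model or workflow.

```python
# Illustrative document-lifecycle state machine in the spirit of an EDMS
# release procedure. States and transitions are assumed for illustration;
# they do not reproduce the actual CERN EDMS workflow.

ALLOWED = {
    "draft":          {"submit": "under approval"},
    "under approval": {"approve": "released", "reject": "draft"},
    "released":       {"supersede": "obsolete"},
    "obsolete":       {},
}

class Document:
    def __init__(self, title):
        self.title = title
        self.state = "draft"
        self.history = []          # keep an audit trail of every transition

    def apply(self, action, actor):
        try:
            new_state = ALLOWED[self.state][action]
        except KeyError:
            raise ValueError(f"'{action}' not allowed from state '{self.state}'")
        self.history.append((self.state, action, actor))
        self.state = new_state

doc = Document("CNAO magnet specification")   # hypothetical document title
doc.apply("submit", actor="engineer")
doc.apply("approve", actor="project leader")
print(doc.state)   # released
```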

4.
BABAR [1] uses two formats for its data: Objectivity database and ROOT [1] files. This poster concerns the distribution of the latter; for Objectivity data see [3]. The BABAR analysis data is stored in ROOT files, one per physics run and analysis selection channel, maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centres throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely-used mirror/synchronisation program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However, rsync allows for only limited file selection. Also, when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimise the network transfer by using multiple streams, adjusting the TCP window size or separating encrypted authentication from unencrypted data channels.
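A minimal sketch of the first problem, deciding which files to import by comparing a remote manifest against the local tree. The manifest format, file names and selection predicate are assumptions for illustration, not the BABAR import tool; the actual transfer of the selected files could then be parallelised over several streams as discussed above.

```python
# Minimal sketch: decide which remote ROOT files need importing by comparing
# a remote manifest (path, size, mtime) with the local directory tree.
# The manifest format and selection rule are assumptions for illustration.
import os

def needs_import(entry, local_root, wanted_channels):
    path, size, mtime = entry["path"], entry["size"], entry["mtime"]
    # Site-specific selection: only import the analysis channels we care about.
    if not any(ch in path for ch in wanted_channels):
        return False
    local_path = os.path.join(local_root, path)
    if not os.path.exists(local_path):
        return True
    st = os.stat(local_path)
    # Re-import if size differs or the local copy is older (cheaper than a checksum).
    return st.st_size != size or int(st.st_mtime) < int(mtime)

manifest = [
    {"path": "run1234/BToJpsiKs.root", "size": 1_048_576, "mtime": 1_000_000_000},
    {"path": "run1234/TauTau.root",    "size": 2_097_152, "mtime": 1_000_000_000},
]
to_fetch = [e["path"] for e in manifest
            if needs_import(e, "/data/babar", wanted_channels=["BToJpsiKs"])]
print(to_fetch)
```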

5.
This paper discusses how to put into operation a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborating groups within the RHIC/PHENIX experiment located at Brookhaven National Laboratory (BNL). The PHENIX detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server running Digital Unix 4.0f (Tru64 Unix). The realization process is discussed in the paper.

6.
The Daya Bay Reactor Neutrino Experiment started running on September 23, 2011. The offline computing environment, consisting of 11 servers at Daya Bay, was built to process onsite data. With the current computing capacity, onsite data processing runs smoothly. The Performance Quality Monitoring system (PQM) has been developed to monitor the detector performance and data quality. Its main feature is the ability to efficiently process multiple data streams from the three experimental halls. The PQM processes raw data files from the Daya Bay data acquisition system, generates and publishes histograms via a graphical web interface by executing the user-defined algorithm modules, and saves the histograms for permanent storage. The fact that the whole process takes only around 40 minutes makes it valuable for the shift crew to monitor the running status of all the sub-detectors and the data quality.
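The module-driven flow described above (raw data in, user-defined algorithm modules fill histograms, results published for the shift crew) can be sketched as a minimal plugin pipeline. The interface and names below are assumptions for illustration, not the actual PQM code.

```python
# Minimal sketch of a PQM-style pipeline: each user-defined module processes
# events from a raw-data stream and fills its own histograms, which are then
# saved/published. Interfaces and names are illustrative assumptions.
from collections import Counter

class AlgorithmModule:
    name = "base"
    def __init__(self):
        self.hist = Counter()          # stand-in for a real histogram
    def process(self, event):
        raise NotImplementedError
    def publish(self):
        print(self.name, dict(self.hist))

class ChargeSpectrum(AlgorithmModule):
    name = "charge_spectrum"
    def process(self, event):
        self.hist[event["charge"] // 10] += 1   # crude fixed-width binning

def run_pqm(events, modules):
    for ev in events:
        for m in modules:
            m.process(ev)
    for m in modules:
        m.publish()                    # e.g. write histograms for the web GUI

run_pqm([{"charge": 42}, {"charge": 47}, {"charge": 103}], [ChargeSpectrum()])
```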

7.
Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale, high-throughput distributed computing exercises: the ALICE data challenges. The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions and to perform an early integration of the overall ALICE computing infrastructure. This paper reports on the third ALICE Data Challenge (ADC III), which was performed at CERN from January to March 2001. The data used during ADC III are simulated physics raw data of the ALICE TPC, produced with the ALICE simulation program AliRoot. The data acquisition was based on the ALICE online framework, the ALICE Data Acquisition Test Environment (DATE) system. The data, after event building, were then formatted with the ROOT I/O package, and a data catalogue based on MySQL was established. The Mass Storage System used during ADC III was CASTOR. Different software tools were used to monitor the performance. DATE has demonstrated performances of more than 500 MByte/s. An aggregate data throughput of 85 MByte/s was sustained in CASTOR over several days. The total collected data amounts to 100 TBytes in 100,000 files.
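A quick consistency check of the quoted figures; this is a back-of-envelope sketch, and the implied run length is derived here rather than taken from the paper.

```python
# Back-of-envelope check of the ADC III numbers quoted in the abstract:
# 85 MByte/s sustained into CASTOR, ~100 TBytes collected in total.
MB_PER_S = 85
TOTAL_TB = 100

tb_per_day = MB_PER_S * 86_400 / 1e6          # MB/s -> TB/day (decimal units)
days_needed = TOTAL_TB / tb_per_day

print(f"{tb_per_day:.1f} TB/day sustained")                  # ~7.3 TB/day
print(f"~{days_needed:.0f} days to collect {TOTAL_TB} TB")   # ~14 days
# Consistent with a multi-week data challenge; the average file size is
# 100 TB / 100,000 files = ~1 GB per file.
```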

8.
After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3, so within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to the infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.
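As a rough illustration of the "within a few years" claim, the sketch below projects the export volume against the OC3 capacity. The starting volume and yearly doubling come from the abstract; the 100% link-efficiency assumption is ours and is optimistic.

```python
# Rough projection: when does exporting a third of a yearly-doubling dataset
# saturate a 155 Mbps OC3 link? Starting point and growth model come from the
# abstract; assuming (optimistically) the link can be used at 100% efficiency.
LINK_MBPS = 155
SECONDS_PER_YEAR = 365 * 24 * 3600
link_capacity_tb = LINK_MBPS / 8 * SECONDS_PER_YEAR / 1e6   # ~611 TB/year

yearly_data_tb = 165       # assumed starting yearly volume (the quoted 165 TB)
for year in range(1, 6):
    yearly_data_tb *= 2                  # data collection doubles each year
    export_tb = yearly_data_tb / 3       # a third of the data goes to IN2P3
    flag = "saturated" if export_tb > link_capacity_tb else "ok"
    print(f"year +{year}: export {export_tb:6.0f} TB vs capacity "
          f"{link_capacity_tb:.0f} TB -> {flag}")
# Under these assumptions the export alone exceeds the OC3 capacity after a
# handful of years; with realistic link efficiency it happens sooner.
```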

9.
We propose a scheme to implement quantum state transfer between two distant quantum nodes via a hybrid solid-state–optomechanical interface. The quantum state is encoded on the native superconducting qubit and transferred first to a microwave photon and then to an optical photon, which is transmitted to the remote node by cavity leakage; finally the quantum state is transferred to the remote superconducting qubit. High efficiency of the state transfer is achieved by a controllable Gaussian pulse sequence and is numerically demonstrated with theoretically feasible parameters. Our scheme has the potential to implement unified quantum computing–communication–computing, with high fidelity of the microwave–optics–microwave transfer of the quantum state.
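The kind of pulsed, tunable-coupling transfer the abstract describes is commonly modeled by a beam-splitter-type interaction between two bosonic modes with a Gaussian coupling envelope. The expression below is a generic illustration of that standard model, not the specific Hamiltonian or pulse sequence used in the paper.

```latex
% Generic beam-splitter (swap) interaction often used to model pulsed state
% transfer between two bosonic modes a and b; illustrative only, not the
% specific Hamiltonian of the cited work.
H_{\mathrm{int}}(t) = \hbar\, g(t)\left(\hat a^{\dagger}\hat b + \hat a\,\hat b^{\dagger}\right),
\qquad
g(t) = g_{0}\, e^{-(t-t_{0})^{2}/(2\tau^{2})}
```

In this standard picture a full swap of the two mode states (up to a phase) is approached when the pulse area satisfies \(\int g(t)\,dt \approx \pi/2\).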

10.
The COMPASS experiment at CERN is starting data taking in summer 2001. The COMPASS off-line framework (CORAL) will use the CERN Conditions Data Base (CDB) to handle time-dependent quantities like calibration constants and data from the slow control system. We describe the use of the CDB within CORAL and the full-scale performance tests on the COMPASS Computing Farm (CCF). The CDB has been interfaced to the SCADA PVSS slow control system to continuously transfer all the data to the CDB and make them available to the users. We describe this interface and a feasibility study performed using mock data, and we predict the expected performance.
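Conditions data of this sort are typically keyed by an interval of validity, so that a reconstruction job can ask for the calibration or slow-control value valid at a given event time. The sketch below illustrates that idea with assumed names and structures; it is not the actual CDB or CORAL API.

```python
# Illustrative interval-of-validity lookup for time-dependent conditions data
# (calibration constants, slow-control readings). Names and structures are
# assumptions for illustration, not the CERN CDB / CORAL interface.
import bisect

class ConditionsFolder:
    def __init__(self):
        self._start_times = []   # sorted interval start times
        self._payloads = []      # payload valid from that start time onwards

    def store(self, since, payload):
        i = bisect.bisect_left(self._start_times, since)
        self._start_times.insert(i, since)
        self._payloads.insert(i, payload)

    def retrieve(self, event_time):
        i = bisect.bisect_right(self._start_times, event_time) - 1
        if i < 0:
            raise LookupError("no conditions valid at this time")
        return self._payloads[i]

calib = ConditionsFolder()
calib.store(since=0,      payload={"t0_offset_ns": 1.2})
calib.store(since=10_000, payload={"t0_offset_ns": 1.4})   # newer calibration
print(calib.retrieve(event_time=12_345))   # -> {'t0_offset_ns': 1.4}
```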
