Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, we discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top-level application that controls the data acquisition activities across 150 front-end VME crates and related service processes. It is a real-time, multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user-friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs; the display used to browse their results; the server program, which communicates with the display via socket connections; the error receiver, which displays error messages and communicates with Run Control; and the state manager, which monitors the state of the monitoring programs.
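The state-machine control loop that Run Control implements can be illustrated with a minimal sketch (hypothetical states and transitions in Python; the actual system is a multi-threaded Java application with JDBC-configured clients):

```python
# Minimal sketch of a Run-Control-style state machine. The state names and
# transitions here are hypothetical, not the real CDF configuration.
class RunControl:
    # Allowed transitions out of each data-acquisition state.
    TRANSITIONS = {
        "idle":       {"configure"},
        "configured": {"start", "reset"},
        "running":    {"stop"},
    }
    # Target state reached by each transition.
    TARGETS = {"configure": "configured", "start": "running",
               "stop": "configured", "reset": "idle"}

    def __init__(self):
        self.state = "idle"

    def fire(self, transition):
        """Apply a transition if it is legal in the current state."""
        if transition not in self.TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {transition!r} from {self.state!r}")
        self.state = self.TARGETS[transition]
        return self.state

rc = RunControl()
rc.fire("configure")   # idle -> configured
rc.fire("start")       # configured -> running
```

Rejecting illegal transitions centrally, as above, is what lets a top-level controller keep 150 front-end crates in a consistent state.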

2.
The CDF collaboration at the Fermilab Tevatron analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. During the collider run starting this year, the experiment expects to record 1 Petabyte of data and associated data samples. The Data Handling (DH) system has online and offline components. The DH offline component provides access to the stored data, to stored reconstruction output, to stored Monte Carlo data samples, and to user-owned data samples. It serves more than 450 physicists of the collaboration. Additional requirements on the offline component of the Data Handling system are simplicity and convenience for users. More than 50 million events of the CDF Run II data have already been processed using this system.

3.
The object-oriented tag database of the ZEUS experiment at HERA is based on Objectivity/DB. It is used to rapidly select events for physics analysis based on intuitive physical criteria. The total number of events currently in the database exceeds 150 million. Based on the detector configuration, different information can be stored for each event. A new version of the database software was recently released which serves clients on a multitude of batch machines, workgroup servers and desktop machines running Irix, Linux and Solaris. This replaces an earlier version which was restricted to three SGI machines. Multiple copies of the data can be stored transparently to the users, for example if a new offline reconstruction of the data is in progress. A report is given on the upgrade of the database and its superior performance compared to the old event selection method.
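Tag-based event selection of the kind described can be sketched as follows (a hypothetical tag schema and cuts in Python; the real ZEUS database stores detector-dependent tags per event in Objectivity/DB):

```python
# Toy event-tag table: one dict of summary quantities per event.
# The tag names below (n_tracks, e_total) are illustrative only.
events = [
    {"run": 1, "event": 10, "n_tracks": 5, "e_total": 42.0},
    {"run": 1, "event": 11, "n_tracks": 2, "e_total": 10.5},
    {"run": 2, "event":  7, "n_tracks": 8, "e_total": 95.3},
]

def select(tags, **cuts):
    """Return events whose tag values meet every (name, minimum) cut."""
    return [e for e in tags if all(e[name] >= lo for name, lo in cuts.items())]

# Select high-multiplicity, high-energy events without touching full event data.
high_mult = select(events, n_tracks=5, e_total=40.0)
```

The point of a tag database is exactly this: cuts run over small per-event summaries, so candidate lists are produced without reading the full event store.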

4.
The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year, and the duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data, and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
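The fileset bookkeeping described above can be sketched with a minimal least-recently-used policy (hypothetical API and eviction rule; the real CDF system is a server process with a client library and tape staging via mt-tools):

```python
# Minimal sketch of disk-inventory bookkeeping for filesets.
class DiskInventory:
    def __init__(self, capacity):
        self.capacity = capacity      # maximum number of filesets on disk
        self.on_disk = {}             # fileset name -> last-access tick
        self._tick = 0                # logical clock for recency ordering

    def access(self, fileset):
        """Touch a fileset; 'stage' it if absent, evicting the least
        recently used fileset when the disk is full."""
        staged = fileset not in self.on_disk
        if staged and len(self.on_disk) >= self.capacity:
            victim = min(self.on_disk, key=self.on_disk.get)
            del self.on_disk[victim]  # evict the coldest fileset
        self._tick += 1
        self.on_disk[fileset] = self._tick
        return staged                 # True if a tape stage was needed

inv = DiskInventory(capacity=2)
inv.access("fs-A")                    # staged from tape
inv.access("fs-B")                    # staged from tape
inv.access("fs-A")                    # already on disk
inv.access("fs-C")                    # evicts fs-B, the least recently used
```

Keeping frequently accessed filesets resident while transparently restaging cold ones is what hides tape access from the user.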

5.
The NUBASE2012 evaluation of nuclear properties
This paper presents the NUBASE2012 evaluation that contains the recommended values for nuclear and decay properties of nuclides in their ground and excited isomeric (T1/2 > 100 ns) states. All nuclides for which some experimental information is known are considered. NUBASE2012 covers all up-to-date experimental data published in primary (journal articles) and secondary (mainly laboratory reports and conference proceedings) references, together with the corresponding bibliographical information. During the development of NUBASE2012, the data available in the "Evaluated Nuclear Structure Data File" (ENSDF) database were consulted and critically assessed for their validity and completeness. Furthermore, a large amount of new and somewhat older experimental results that were missing from ENSDF were compiled, evaluated and included in NUBASE2012. The atomic mass values were taken from the "Atomic Mass Evaluation" (AME2012, second and third parts of the present issue). In cases where no experimental data were available for a particular nuclide, trends in the behavior of specific properties in neighboring nuclei (TNN) were examined. This approach allowed values for a range of properties to be estimated whenever possible; these are labeled in NUBASE2012 as "non-experimental" (flagged "#"). Evaluation procedures and policies that were used during the development of this database are presented, together with a detailed table of recommended values and their uncertainties.
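A TNN-style estimate for a nuclide lacking measurements can be illustrated schematically (synthetic mass-excess values and a plain interpolation; the actual NUBASE procedure is a graphical trend analysis over several systematics, not a simple interpolation):

```python
# Synthetic mass-excess table along an isotopic chain: A -> value (MeV).
# The numbers are invented for illustration only.
measured = {150: -70.1, 151: -69.4, 153: -67.8, 154: -66.9}

def tnn_estimate(table, a):
    """Estimate a missing value at mass number `a` by linear interpolation
    between the nearest measured neighbors (a stand-in for trend analysis)."""
    lower = max(k for k in table if k < a)
    upper = min(k for k in table if k > a)
    frac = (a - lower) / (upper - lower)
    return table[lower] + frac * (table[upper] - table[lower])

# The missing A = 152 value would be flagged "#" (non-experimental).
est = tnn_estimate(measured, 152)
```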

6.
This paper presents the NUBASE2016 evaluation that contains the recommended values for nuclear and decay properties of 3437 nuclides in their ground and excited isomeric (T1/2 ≥ 100 ns) states. All nuclides for which any experimental information is known were considered. NUBASE2016 covers all data published by October 2016 in primary (journal articles) and secondary (mainly laboratory reports and conference proceedings) references, together with the corresponding bibliographical information. During the development of NUBASE2016, the data available in the "Evaluated Nuclear Structure Data File" (ENSDF) database were consulted and critically assessed for their validity and completeness. Furthermore, a large amount of new data and some older experimental results that were missing from ENSDF were compiled, evaluated and included in NUBASE2016. The atomic mass values were taken from the "Atomic Mass Evaluation" (AME2016, second and third parts of the present issue). In cases where no experimental data were available for a particular nuclide, trends in the behavior of specific properties in neighboring nuclides (TNN) were examined. This approach allowed values to be estimated for a range of properties that are labeled in NUBASE2016 as "non-experimental" (flagged "#"). Evaluation procedures and policies used during the development of this database are presented, together with a detailed table of recommended values and their uncertainties.

7.
Run 2 at Fermilab began in March 2001. CDF will collect data at a maximum rate of 20 MByte/sec during the run. The offline reconstruction of this data must keep up with the data-taking rate. This reconstruction occurs on a large PC farm, which must have the capacity for quasi-real-time data reconstruction, for reprocessing of some data, and for generation and processing of Monte Carlo samples. In this paper we give the design requirements for the farm, describe the hardware and software design used to meet those requirements, describe the early experiences with Run 2 data processing, and discuss future prospects for the farm, including some ideas about Run 2b processing.

8.
9.
10.
This is the first of two articles (Part I and Part II) that present the results of the new atomic mass evaluation, AME2020. It includes complete information on the experimental input data that were used to derive the tables of recommended values which are given in Part II. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometric data which were used in a least-squares fit adjustment in order to determine the recommended mass values and their uncertainties. All input data, including both the accepted and rejected ones, are tabulated and compared with the adjusted values obtained from the least-squares fit analysis. Differences with the previous AME2016 evaluation are discussed and specific examples are presented for several nuclides that may be of interest to AME users.

11.
The Debye equation with slit-smeared small-angle X-ray scattering (SAXS) data is extended from an ideal two-phase system to a pseudo two-phase system with the presence of an interface layer, and a simple, accurate solution is proposed to determine the average thickness of the interface layer in porous materials. This method is tested with experimental SAXS data, measured at 25 °C, of organo-modified mesoporous silica prepared by condensation of tetraethoxysilane (TEOS) and methyltriethoxysilane (MTES) using a non-ionic neutral surfactant as template under neutral conditions.
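The linearization behind such a fit can be illustrated for the ideal two-phase case, where the slit-smeared Debye intensity follows J(q) ∝ (1 + a²q²)^(-3/2), so J^(-2/3) is linear in q² and the slope-to-intercept ratio yields a² (synthetic, noise-free data; the paper's pseudo-two-phase extension with an interface layer is not reproduced here):

```python
import math

# Generate noise-free slit-smeared Debye intensities for a known
# correlation length a_true (synthetic data, arbitrary units).
a_true = 2.0                                        # correlation length (nm)
qs = [0.05 + 0.019 * i for i in range(50)]          # scattering vector (1/nm)
J = [100.0 * (1.0 + (a_true * q) ** 2) ** -1.5 for q in qs]

# Ordinary least squares of y = J^(-2/3) against x = q^2.
xs = [q * q for q in qs]
ys = [j ** (-2.0 / 3.0) for j in J]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Slope/intercept ratio recovers a^2, hence the correlation length.
a_fit = math.sqrt(slope / intercept)
```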

12.
This paper is the first of two articles (Part I and Part II) that presents the results of the new atomic mass evaluation, AME2012. It includes complete information on the experimental input data (including not used and rejected ones), as well as details on the evaluation procedures used to derive the tables with recommended values given in the second part. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometer results. These input values were entered in the least-squares adjustment procedure for determining the best values for the atomic masses and their uncertainties. Calculation procedures and particularities of the AME are then described. All accepted and rejected data, including outweighed ones, are presented in a tabular format and compared with the adjusted values (obtained using the adjustment procedure). Differences with the previous AME2003 evaluation are also discussed and specific information is presented for several cases that may be of interest to various AME users. The second AME2012 article, the last one in this issue, gives a table with recommended values of atomic masses, as well as tables and graphs of derived quantities, along with the list of references used in both this AME2012 evaluation and the NUBASE2012 one (the first paper in this issue).
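The weighted least-squares adjustment at the heart of the AME can be sketched on a toy system of two masses linked by one difference measurement (illustrative numbers only; the real fit connects thousands of input data to thousands of masses):

```python
# Each datum is a linear relation over the unknown masses (m1, m2):
# (coefficients, measured value, uncertainty sigma). Toy numbers only.
data = [
    ((1.0, 0.0), 10.00, 0.10),   # direct measurement of m1
    ((0.0, 1.0), 25.20, 0.20),   # direct measurement of m2
    ((-1.0, 1.0), 15.00, 0.05),  # precise mass difference m2 - m1 (Q-value-like)
]

# Build the weighted normal equations A^T W A x = A^T W b.
ata = [[0.0, 0.0], [0.0, 0.0]]
atb = [0.0, 0.0]
for coeffs, value, sigma in data:
    w = 1.0 / sigma ** 2          # weight = inverse variance
    for i in range(2):
        atb[i] += w * coeffs[i] * value
        for j in range(2):
            ata[i][j] += w * coeffs[i] * coeffs[j]

# Solve the 2x2 system by Cramer's rule.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
m1 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
m2 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
```

The precise difference measurement pulls both adjusted masses toward mutual consistency, which is exactly how reaction and decay Q-values constrain the AME mass surface.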

13.
This paper is the first of two articles (Part I and Part II) that presents the results of the new atomic mass evaluation, AME2016. It includes complete information on the experimental input data (also including unused and rejected ones), as well as details on the evaluation procedures used to derive the tables of recommended values given in the second part. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometric results. These input values were entered in the least-squares adjustment for determining the best values for the atomic masses and their uncertainties. Details of the calculation and particularities of the AME are then described. All accepted and rejected data, including outweighed ones, are presented in a tabular format and compared with the adjusted values obtained using the least-squares fit analysis. Differences with the previous AME2012 evaluation are discussed and specific information is presented for several cases that may be of interest to AME users. The second AME2016 article gives a table with the recommended values of atomic masses, as well as tables and graphs of derived quantities, along with the list of references used in both the AME2016 and the NUBASE2016 evaluations (the first paper in this issue).

14.
The CMS experiment at the CERN LHC collider is producing large amounts of simulated data in order to provide adequate statistics for the Trigger System design. These productions are performed in a distributed environment, prototyping the hierarchical model of LHC computing centers developed by MONARC. A GRID approach is being used for interconnecting the Regional Centers. The main issues currently being addressed are: automatic submission of data production requests to available production sites, data transfer among production sites, "best-replica" location, and submission of end-user analysis jobs to the appropriate Regional Center. In each production site, different hardware configurations are being tested and exploited. Furthermore, robust job submission systems, which are also able to provide the needed bookkeeping of the produced data, are being developed. BOSS (Batch Object Submission System) is an interface to the local computing center's scheduling system that has been developed in order to allow recording, in a relational database, of information produced by the jobs running on the batch facilities. A summary of the current activities and a plan for the use of DataGrid PM9 tools are presented.

15.
The main goal of the HyperCP (E871) experiment at Fermilab is to search for CP violation in Ξ and Λ decays at the ~10^-4 level. This level of precision dictates a data sample of over a billion events. The experiment collected about 231 billion raw events on about 30,000 5-GB tapes in ten months of running in 1997 and 1999. In order to analyze this huge amount of data, the collaboration has reconstructed the events on a farm of 55 dual-processor Linux-based PCs at Fermilab. A set of farm tools has been written by the collaboration to interface with the Farm Batch System (FBS and FBSNG) [1] developed by the Fermilab Computing Division, to automate much of the farming, and to allow non-expert farm shifters to submit and monitor jobs through a web-based interface. Special care has been taken to produce a robust system which facilitates easy recovery from errors. The code has provisions for extensive monitoring of the data on a spill-by-spill basis, as is required by the need to minimize potential systematic errors. About 36 million plots of various parameters produced from the farm analysis can be accessed through a data management system. The entire data set was farmed in eleven months, or about the same time as was taken to acquire the data. We describe the architecture of the farm, our experience in operating it, and show some results from the farm analysis.

16.
The online Data Quality Monitoring (DQM) tool plays an important role in the data recording process of HEP experiments. The BESIII DQM collects data from the online data flow, reconstructs them with offline reconstruction software and automatically analyzes the reconstructed data with user-defined algorithms. The DQM software is a scalable distributed system. The monitored results are gathered and displayed in various formats, which provides the shifter with current run information that can be used to identify problems quickly. This paper gives an overview of the DQM system at BESIII.

17.
This article introduces the design and performance of the data acquisition system used in an omnidirectional gamma-ray positioning system, along with a new method used in this system to obtain the position of radiation sources in a large field. This data acquisition system has various built-in interfaces collecting, in real time, information from the radiation detector, the video camera and the GPS positioning module. Experiments show that the data acquisition system is capable of carrying out the proposed quantitative analysis to derive the position of radioactive sources, which also satisfies the requirements of high stability and reliability.

18.
Providing efficient access to more than 300 TB of experiment data is the responsibility of the BaBar Databases Group. Unlike generic tools, the Event Browser presents users with an abstraction of the BaBar data model. Multithreaded CORBA servers perform database operations using small transactions in an effort to avoid lock contention issues and provide adequate response times. The GUI client is implemented in Java and can be easily deployed throughout the community in the form of a web applet. The browser allows users to examine collections of related physics events and identify associations between the collections and the physical files in which they reside, helping administrators distribute data to other sites worldwide. This paper discusses the various aspects of the Event Browser, including requirements, design challenges and key features of the current implementation.

19.

20.
The future GSI Online-Offline-Object-Oriented analysis framework Go4, based on ROOT [CERN, R. Brun et al.], provides a mechanism to monitor and control an analysis at any time. This is achieved by running the GUI and the analysis in different tasks. To control these tasks by one non-blocking GUI, the Go4TaskHandler package was developed. It offers asynchronous inter-task communication via independent channels for commands, data, and status information. Each channel is processed by a dedicated thread and has a buffer queue as interface to the working thread. The threads are controlled by the Go4ThreadManager package, based on the ROOT TThread package. In parallel to the GUI actions, the analysis tasks can display objects like histograms in the GUI. A test GUI was implemented using the Qt widget library (Trolltech Inc.), and a Qt-to-ROOT interface has been developed. The Go4 packages may be utilized for any ROOT application that requires control of independent data processing or monitoring tasks from a non-blocking GUI.
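The independent command/status channels serviced by dedicated threads can be sketched as follows (a hypothetical Python analogue; Go4 itself is C++ built on ROOT's TThread):

```python
import queue
import threading

# Sketch of Go4TaskHandler-style inter-task channels: commands and status
# travel over independent queues, each serviced asynchronously, so the
# GUI side never blocks on the worker. Names and protocol are invented.
class TaskChannels:
    def __init__(self):
        self.commands = queue.Queue()   # GUI -> worker channel
        self.status = queue.Queue()     # worker -> GUI channel
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        # Worker thread: consume commands, report status back asynchronously.
        while True:
            cmd = self.commands.get()
            if cmd == "stop":
                self.status.put("stopped")
                break
            self.status.put(f"done:{cmd}")

ch = TaskChannels()
ch.commands.put("fill-histogram")   # GUI side returns immediately
ch.commands.put("stop")
first = ch.status.get(timeout=5)    # asynchronous replies from the worker
last = ch.status.get(timeout=5)
```

Because each channel has its own buffer queue, the GUI thread only ever enqueues and dequeues; it never waits on the analysis itself.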

