Similar Articles
10 similar articles found (search time: 15 ms)
1.
The future GSI Online-Offline-Object-Oriented analysis framework Go4, based on ROOT [CERN, R. Brun et al.], provides a mechanism to monitor and control an analysis at any time. This is achieved by running the GUI and the analysis in different tasks. To control these tasks from one non-blocking GUI, the Go4TaskHandler package was developed. It offers asynchronous inter-task communication via independent channels for commands, data, and status information. Each channel is processed by a dedicated thread and has a buffer queue as the interface to the working thread. The threads are controlled by the Go4ThreadManager package, based on the ROOT TThread package. In parallel to the GUI actions, the analysis tasks can display objects such as histograms in the GUI. A test GUI was implemented using the Qt widget library (Trolltech Inc.), and a Qt-to-ROOT interface has been developed. The Go4 packages may be used by any ROOT application that needs to control independent data processing or monitoring tasks from a non-blocking GUI.
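The channel-plus-queue design described above can be illustrated with a minimal sketch. This is not the actual Go4TaskHandler API; it uses std::thread rather than ROOT's TThread, and the CommandChannel class and its methods are hypothetical, showing only how a dedicated thread draining a buffer queue keeps the GUI side non-blocking:

```cpp
// Minimal sketch (not the actual Go4 API): one channel = one dedicated
// thread draining a thread-safe buffer queue, so the GUI thread never blocks.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class CommandChannel {
public:
    // Called from the GUI thread: enqueue and return immediately.
    void Post(const std::string& cmd) {
        {
            std::lock_guard<std::mutex> lock(fMutex);
            fQueue.push(cmd);
        }
        fCond.notify_one();
    }
    // Runs in the channel's dedicated worker thread.
    void Serve() {
        for (;;) {
            std::unique_lock<std::mutex> lock(fMutex);
            fCond.wait(lock, [this] { return !fQueue.empty(); });
            std::string cmd = fQueue.front();
            fQueue.pop();
            lock.unlock();
            if (cmd == "quit") return;
            std::cout << "executing: " << cmd << '\n';  // analysis-side work
        }
    }
private:
    std::queue<std::string> fQueue;  // buffer queue between GUI and worker
    std::mutex fMutex;
    std::condition_variable fCond;
};

int main() {
    CommandChannel channel;
    std::thread worker(&CommandChannel::Serve, &channel);
    channel.Post("start");   // the GUI stays responsive: Post() never blocks
    channel.Post("quit");
    worker.join();
}
```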

2.
The BaBar experiment collected around 20 TB of data during its first 6 months of running. Now, after 18 months, the data size exceeds 300 TB, and according to prognosis this is a small fraction of the data expected in the next few months. In order to keep up with the data, significant effort was put into tuning the database system. This led to great performance improvements, as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone, and further growth beyond 600 nodes is expected soon. In such an environment, many complex operations are executed simultaneously on hundreds of machines, putting a huge load on the data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded the database servers: data servers as well as lock servers. The paper describes details of the design and implementation of the two servers recently introduced in the BaBar system: the conditions OID server and the Clustering Server. The first experience of using these servers is discussed, and a discussion of a Collection Server for data analysis, currently being designed, is included.
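The offloading idea behind a conditions OID server can be sketched conceptually. The class below is hypothetical and contains no actual CORBA or Objectivity code; it only illustrates how a single shared server with a cache turns hundreds of per-node database lookups at startup into one:

```cpp
// Conceptual sketch only (no real CORBA/database API): a central server
// resolves condition names to object IDs once and caches them, so hundreds
// of reconstruction nodes no longer hit the database at startup.
#include <map>
#include <mutex>
#include <string>

using OID = unsigned long long;  // stand-in for a database object identifier

class ConditionsOidServer {
public:
    OID Lookup(const std::string& condition) {
        std::lock_guard<std::mutex> lock(fMutex);
        auto it = fCache.find(condition);
        if (it != fCache.end()) return it->second;  // served from cache
        OID oid = QueryDatabase(condition);  // one DB hit, shared by all clients
        fCache.emplace(condition, oid);
        return oid;
    }
private:
    OID QueryDatabase(const std::string&) { return ++fNext; }  // placeholder
    std::map<std::string, OID> fCache;
    std::mutex fMutex;
    OID fNext = 0;
};
```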

3.
After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3, so within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to the infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and from direct measurements of throughput are reviewed, and methods for achieving the ambitious requirements are discussed.
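A back-of-envelope calculation makes the bandwidth projection concrete. The 200 TB/year export volume below is an illustrative assumption, not a figure from the abstract:

```latex
% Sustained rate needed to export an assumed V = 200 TB over one year:
R = \frac{200 \times 10^{12}\,\text{B} \times 8\,\text{bit/B}}
         {3.15 \times 10^{7}\,\text{s}}
  \approx 51\,\text{Mbit/s}
```

At that volume the export alone needs roughly 51 Mbps sustained; with data doubling each year, two more doublings push the requirement past the 155 Mbps OC3 capacity, consistent with the projection above.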

4.
Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless, there is a need to communicate. The initial problem definition and analysis suggested three subsystems that the Trigger/DAQ-DCS Communication (DDC) project should support, with the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems such as data flow, level-1 and high-level triggers. The DDC uses the various Online components as the interface point on the Trigger/DAQ side, with the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online Software Process; the DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test beam during summer 2001.
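The three DDC subsystems can be summarized as a single interface. The sketch below is a hypothetical C++ rendering, not the actual DDC or PVSS II API; the method names and the Alarm struct are assumptions:

```cpp
// Hypothetical interface sketch (not the actual DDC API): the three DDC
// subsystems named in the abstract, expressed as one abstract C++ interface.
#include <string>
#include <vector>

struct Alarm {
    std::string source;   // DCS element raising the alarm
    std::string message;
    int severity;
};

class DdcInterface {
public:
    virtual ~DdcInterface() = default;
    // 1. bidirectional data exchange between Trigger/DAQ and DCS
    virtual void PublishToDcs(const std::string& name, double value) = 0;
    virtual double ReadFromDcs(const std::string& name) = 0;
    // 2. alarm messages from DCS to Trigger/DAQ
    virtual std::vector<Alarm> PollAlarms() = 0;
    // 3. commands issued to DCS from Trigger/DAQ
    virtual bool SendCommand(const std::string& target,
                             const std::string& command) = 0;
};
```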

5.
One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test-beam setups during its evolutionary development towards the final ATLAS online system, and feedback is returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration that approaches the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. This paper presents a brief overview of the online system structure and its components, the large-scale integration tests, and their results.
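The run control state transitions exercised in these tests can be illustrated with a toy controller. The state names and the transition table below are assumptions, not the actual ATLAS run control states:

```cpp
// Illustrative sketch (state names are assumptions, not the actual ATLAS
// run control): a controller that validates state transitions and would
// fan them out to child controllers in the hierarchy.
#include <set>
#include <utility>

enum class RunState { Initial, Configured, Running, Paused };

class RunController {
public:
    bool Transition(RunState target) {
        static const std::set<std::pair<RunState, RunState>> allowed = {
            {RunState::Initial,    RunState::Configured},
            {RunState::Configured, RunState::Running},
            {RunState::Running,    RunState::Paused},
            {RunState::Paused,     RunState::Running},
            {RunState::Running,    RunState::Configured},  // stop
            {RunState::Configured, RunState::Initial},     // unconfigure
        };
        if (!allowed.count({fState, target})) return false;
        // In a real hierarchy the command would be propagated to all
        // child controllers here before the local state is committed.
        fState = target;
        return true;
    }
    RunState State() const { return fState; }
private:
    RunState fState = RunState::Initial;
};
```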

6.
The Fermilab CKM (E921) experiment studies a rare kaon decay which has a very small branching ratio and can be very hard to separate from background processes. A trigger and DAQ system is required to collect all the information necessary for background rejection and to maintain high reliability at a high beam rate. The unique challenges have emphasized the following guiding concepts: (1) Collecting background is as important as collecting good events. (2) A DAQ "event" should not be just a "snapshot" of the detector; it should be a short history record of the detector around the candidate event. The hit history provides information to understand temporary detector blindness, which is extremely important to the CKM experiment. (3) The main purpose of the trigger system should not be "knocking down the trigger rate" or "throwing out garbage events". Instead, it should classify the events and select appropriate data-collecting strategies among various predefined ones for the given types of events. The following methodologies are employed in the architecture to fulfill the experiment requirements without confronting unnecessary technical difficulties: (1) Continuous digitization near the detector elements is utilized to preserve data quality. (2) The concept of minimum synchronization is adopted to eliminate the need for time-matching signal paths. (3) A global level-1 trigger performs coincidence and veto functions using digital timing information to avoid problems due to signal degradation in long cables. (4) The DAQ logic allows chronicle records around the interesting events to be collected with different levels of detail of ADC information, so that very low energy particles in the veto systems can be best detected. (5) A re-programmable hardware trigger (L2.5) and a software trigger (L3) sitting in the DAQ stream are planned to perform data selection functions based on full detector data, with adjustability.
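Guiding concept (3), classifying events and selecting a predefined collection strategy rather than merely accepting or rejecting, might look like the following. The event classes and strategy fields are illustrative assumptions, not the CKM trigger tables:

```cpp
// Hypothetical sketch of guiding concept (3): the trigger classifies each
// event and picks a predefined readout strategy, rather than simply
// accepting or rejecting. Class names and parameters are assumptions.
#include <cstdio>

enum class EventClass { GoldenCandidate, VetoActivity, Background, Noise };

struct ReadoutStrategy {
    int historyWindowNs;  // how much detector history to record
    bool fullAdcDetail;   // keep full waveform vs. summary ADC data
};

ReadoutStrategy SelectStrategy(EventClass cls) {
    switch (cls) {
        case EventClass::GoldenCandidate: return {2000, true};  // full detail
        case EventClass::VetoActivity:    return {2000, true};  // low-energy vetoes matter
        case EventClass::Background:      return {500,  false}; // background is still recorded
        case EventClass::Noise:           return {0,    false};
    }
    return {0, false};
}

int main() {
    ReadoutStrategy s = SelectStrategy(EventClass::GoldenCandidate);
    std::printf("window=%d ns, full ADC=%d\n", s.historyWindowNs, s.fullAdcDetail);
}
```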

7.
With increasing physics event rates and numbers of electronic channels, traditional readout schemes face the challenge of improving readout speed under the limited bandwidth of the crate backplane. In this paper, a high-speed data readout method based on Ethernet is presented that makes each readout module capable of transmitting data to the DAQ. The features of explicitly parallel data transmission and a distributed network architecture give the readout system the advantage of adapting to the varying requirements of particle physics experiments. Furthermore, to guarantee readout performance and flexibility, a standalone embedded CPU system is used for network protocol stack processing. To receive the customized data formats and protocols of the front-end electronics, a field-programmable gate array (FPGA) is used for logic reconfiguration. To optimize the interface and improve the data throughput between the CPU and the FPGA, a sophisticated method based on SRAM is presented in this paper. To evaluate this high-speed readout method, a simplified readout module was designed and implemented. Test results show that this module can support up to 70 Mbps of data throughput from the readout module to the DAQ.
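The abstract does not detail its SRAM-based CPU-FPGA method, but a common realization of such an interface is a ring buffer in shared SRAM with the FPGA as producer and the CPU as consumer. The sketch below is that assumption, not the paper's design:

```cpp
// One common realization of a CPU/FPGA interface over shared SRAM (an
// assumption; the abstract does not specify its method): the FPGA advances
// a write pointer, the CPU drains data up to it and forwards it over Ethernet.
#include <cstddef>
#include <cstdint>

constexpr std::size_t kBufWords = 4096;  // SRAM buffer size, illustrative

struct SramRing {
    volatile uint32_t data[kBufWords];  // memory-mapped SRAM region
    volatile uint32_t writeIdx;         // owned by the FPGA logic
    volatile uint32_t readIdx;          // owned by the CPU
};

// CPU side: copy all words the FPGA has produced since the last call.
std::size_t DrainToDaq(SramRing& ring, uint32_t* out, std::size_t maxWords) {
    std::size_t n = 0;
    uint32_t r = ring.readIdx;
    const uint32_t w = ring.writeIdx;   // snapshot of the producer position
    while (r != w && n < maxWords) {
        out[n++] = ring.data[r];
        r = (r + 1) % kBufWords;
    }
    ring.readIdx = r;                   // hand the freed space back to the FPGA
    return n;                           // caller sends 'out' to the DAQ
}
```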

8.
A track reconstruction program, CATS, based on a cellular automaton has been developed for the vertex detector system of the HERA-B experiment at DESY. The segment model of the cellular automaton used for tracking can be regarded as a local discrete form of the Denby-Peterson neural net. Since 1999, CATS has been used to reconstruct data collected in HERA-B. Results on the tracking performance, the accuracy of the estimates, and the computing time are presented.
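The segment model can be illustrated with a simplified counter-evolution loop. The compatibility test below is a toy slope cut, not the CATS criterion, and a real implementation would index segments by layer instead of the quadratic scan used here:

```cpp
// Simplified sketch of segment-based cellular-automaton tracking: each
// segment carries a counter; on every pass a segment's counter becomes
// 1 + the max counter of compatible segments in the previous layer, until
// nothing changes. Long chains of high counters are the track candidates.
#include <algorithm>
#include <cmath>
#include <vector>

struct Segment {
    int layer;        // detector layer of the segment's first hit
    double slope;     // toy track parameter
    int counter = 1;  // cellular-automaton state
};

bool Compatible(const Segment& a, const Segment& b) {
    return std::abs(a.slope - b.slope) < 0.01;  // toy continuation cut
}

void Evolve(std::vector<Segment>& segs) {
    bool changed = true;
    while (changed) {  // iterate the automaton to a fixed point
        changed = false;
        for (auto& s : segs) {
            int best = 0;
            for (const auto& p : segs)
                if (p.layer == s.layer - 1 && Compatible(p, s))
                    best = std::max(best, p.counter);
            if (best + 1 > s.counter) { s.counter = best + 1; changed = true; }
        }
    }
}
```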

9.
Front-end readout electronics have been developed for silicon strip detectors at our institute. In this system, an Application Specific Integrated Circuit (ASIC), ATHED, is used to realize multi-channel energy and time measurements. The slow control of the ASIC chips is achieved through a parallel port, and the timing control signals of the ASIC chips are implemented with a CPLD. Data acquisition is carried out with a PXI-DAQ card. The software has a user-friendly GUI developed with LabWindows/CVI under the Windows XP operating system. Test results show that the energy resolution is about 1.14% for alpha particles at 5.48 MeV and that the maximum channel crosstalk of the system is 4.60%. The performance of the system is very reliable and is suitable for nuclear physics experiments.
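For scale, the relative resolution translates into an absolute peak width as follows (assuming the quoted 1.14% is a relative FWHM at the alpha peak; the abstract does not state the convention):

```latex
\Delta E_{\mathrm{FWHM}} \approx 0.0114 \times 5.48\,\text{MeV}
                         \approx 62\,\text{keV}
```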

10.
The China JinPing underground Laboratory (CJPL) is the deepest underground laboratory currently operating in the world. In such a deep underground laboratory, the cosmic ray flux is a very important and necessary parameter for rare-event experiments. A plastic scintillator telescope system has been set up to measure the cosmic ray flux. The performance of the telescope system was studied using cosmic rays in the ground-level laboratory near the CJPL. Based on the underground experimental data taken from November 2010 to December 2011 in the CJPL, with an effective live time of 171 days, the cosmic ray muon flux in the CJPL is measured to be (2.0±0.4)×10⁻¹⁰/(cm²·s). This ultra-low cosmic ray background guarantees an ideal environment for dark matter experiments at the CJPL.
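To give the flux a tangible scale (the 1 m² effective area below is an assumption for illustration; the abstract does not state the telescope acceptance), the expected muon count over the quoted live time is:

```latex
N \approx 2.0\times10^{-10}\,\text{cm}^{-2}\,\text{s}^{-1}
   \times 10^{4}\,\text{cm}^{2}
   \times (171 \times 86400)\,\text{s}
 \approx 30 \text{ muons}
```

Only a few dozen muons in half a year through a square meter: this is what makes the site attractive for rare-event searches.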

