Similar Articles
20 similar articles found
1.
Athena, the framework for ATLAS' offline software, is based on the Gaudi framework from LHCb [1]. The processing model of Gaudi is essentially that of a batch-oriented system: a user prepares a file detailing the configuration of which Algorithms are to be applied to the input data of a job and the parameter values that control the behavior of each Algorithm instance. The framework then reads that file once at the beginning of a job and runs to completion with no further interaction with the user. We have enhanced the processing model to include an interactive mode in which a user can control the event loop of a running job and modify the Algorithms and parameters on the fly. We changed only a very small number of Gaudi classes to provide access to parameters from an embedded Python interpreter. No change was made to the Gaudi programming model, i.e., developers need not change anything to make use of this added interface. We present details of the design and implementation of the interactive Python interface for Athena.
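As a rough illustration of the interactive mode described in this abstract, the sketch below shows an event-loop manager that can be driven from a Python prompt: run a few events, change an algorithm parameter, and continue. All class and method names here are hypothetical illustrations, not the actual Gaudi/Athena API.

# Minimal sketch of interactive event-loop control from an embedded Python
# interpreter. Names are illustrative assumptions, not the real Gaudi/Athena API.

class Algorithm:
    """A configurable processing step with named parameters (properties)."""
    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)

    def execute(self, event):
        # A real algorithm would reconstruct or filter the event here.
        print(f"{self.name} processing event {event} with {self.properties}")

class EventLoopManager:
    """Lets an interactive user run a few events, tweak parameters, and resume."""
    def __init__(self, algorithms, events):
        self.algorithms = algorithms
        self.events = iter(events)

    def run(self, n_events):
        for _ in range(n_events):
            event = next(self.events, None)
            if event is None:
                return False            # input exhausted
            for alg in self.algorithms:
                alg.execute(event)
        return True

    def set_property(self, alg_name, key, value):
        # Interactive parameter change between calls to run().
        for alg in self.algorithms:
            if alg.name == alg_name:
                alg.properties[key] = value

# Interactive session: run 2 events, retune a cut, continue with 3 more.
mgr = EventLoopManager([Algorithm("TrackFinder", pt_cut=1.0)], events=range(100))
mgr.run(2)
mgr.set_property("TrackFinder", "pt_cut", 0.5)
mgr.run(3)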

2.
This paper outlines the design and prototyping of the ATLAS High Level Trigger (HLT), which is a combined effort of the Data Collection HLT and PESA (Physics and Event Selection Architecture) subgroups within the ATLAS TDAQ collaboration. Two important issues, already outlined in the ATLAS HLT, DAQ and DCS Technical Proposal [1], will be highlighted: the treatment of the LVL2 trigger and Event Filter as aspects of a general HLT, with a view to easier migration of algorithms between the two levels; and the unification of the selective data collection for LVL2 and event building.

3.
Reconstruction and subsequent particle identification are a challenge in a complex, high-luminosity environment such as that expected in the ATLAS detector at the LHC. The ATLAS software has adopted the object-oriented paradigm and has recently migrated many of its software components developed earlier in procedural programming languages. The new software, which emphasizes the separation between algorithms and data objects, has been successfully integrated into the broader ATLAS framework. We present a status report of the reconstruction software, summarizing the experience gained in the migration of several software components. We examine some of the components of the calorimeter software design, including the simulation of real-time detector effects and the online environment, and the strategies deployed for the identification of particles.
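The separation between algorithms and data objects mentioned above can be illustrated with a small sketch: data objects are plain containers with no processing logic, while algorithms are stateless transformations between them. This is an illustrative example only, not ATLAS code; the class names are invented for the sketch.

# Illustrative sketch of algorithm/data-object separation (not ATLAS code).

from dataclasses import dataclass
from typing import List

@dataclass
class CaloCell:          # data object: only holds values
    eta: float
    phi: float
    energy: float

@dataclass
class Cluster:           # data object produced downstream
    eta: float
    phi: float
    energy: float

class ClusterBuilder:    # algorithm: transforms one container into another
    def __init__(self, threshold: float):
        self.threshold = threshold

    def execute(self, cells: List[CaloCell]) -> List[Cluster]:
        seeds = [c for c in cells if c.energy > self.threshold]
        return [Cluster(c.eta, c.phi, c.energy) for c in seeds]

cells = [CaloCell(0.1, 0.2, 5.0), CaloCell(0.1, 0.3, 0.4)]
clusters = ClusterBuilder(threshold=1.0).execute(cells)
print(clusters)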

4.
R. Kwee, Chinese Physics C, 2010, 34(9): 1360-1363
One of the first measurements to be made at the LHC by ATLAS concerns the properties of inelastic collisions, namely the central charged-particle density and transverse-momentum distributions. Current predictions of these distributions have large uncertainties in the LHC energy range. We describe the ATLAS minimum-bias triggers, designed to select all kinds of inelastic interactions, and the performance of the track reconstruction software, which was adapted to soft-particle track reconstruction. The precision with which the minimum-bias distributions can be measured with early data is presented, and the uncertainties on the inelastic distributions due to trigger bias are discussed.

5.
The configuration-fixed, deformation-constrained relativistic mean-field approach with time-odd components has been applied to investigate the ground-state properties of 33Mg with the effective interaction PK1. The ground state of 33Mg is found to be prolate deformed, with β2 = 0.23, the odd neutron in the 1/2[330] orbital, and an energy of -251.85 MeV, which is close to the measured value of -252.06 MeV. A magnetic moment of -0.9134 μN is obtained with the effective electromagnetic current, which well reproduces the measured value of -0.7456 μN ...

6.
The main goal of the HyperCP (E871) experiment at Fermilab is to search for CP violation in Ξ and Λ decays at the ~10^-4 level. This level of precision dictates a data sample of over a billion events. The experiment collected about 231 billion raw events on about 30,000 5-GB tapes in ten months of running in 1997 and 1999. In order to analyze this huge amount of data, the collaboration has reconstructed the events on a farm of 55 dual-processor Linux-based PCs at Fermilab. A set of farm tools has been written by the collaboration to interface with the Farm Batch System (FBS and FBSNG) [1] developed by the Fermilab Computing Division, to automate much of the farming and to allow non-expert farm shifters to submit and monitor jobs through a web-based interface. Special care has been taken to produce a robust system which facilitates easy recovery from errors. The code has provisions for extensive monitoring of the data on a spill-by-spill basis, as required by the need to minimize potential systematic errors. About 36 million plots of various parameters produced from the farm analysis can be accessed through a data management system. The entire data set was farmed in eleven months, about the same time as was taken to acquire the data. We describe the architecture of the farm and our experience in operating it, and show some results from the farm analysis.
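To make the farm-tool idea concrete, the sketch below shows the kind of thin wrapper such tools provide: submit one job per raw-data tape and keep per-spill bookkeeping so that failed spills can be resubmitted individually. The fbs_submit command, file layout and helper names are assumptions for illustration, not the real FBS/FBSNG interface.

# Hypothetical sketch of a farm-tool wrapper: per-tape submission and
# per-spill status bookkeeping. Not the actual HyperCP/FBS code.

import subprocess, json, pathlib

def submit_tape(tape_id: str, workdir: pathlib.Path) -> str:
    """Submit one reconstruction job for a raw-data tape; return a job id."""
    jobdir = workdir / tape_id
    jobdir.mkdir(parents=True, exist_ok=True)
    # Placeholder for the real batch-system submission command.
    subprocess.run(["echo", f"fbs_submit reconstruct --tape {tape_id}"],
                   capture_output=True, text=True, check=True)
    job_id = f"job-{tape_id}"
    (jobdir / "status.json").write_text(json.dumps({"job": job_id, "spills": {}}))
    return job_id

def record_spill(workdir: pathlib.Path, tape_id: str, spill: int, ok: bool):
    """Per-spill bookkeeping, so failed spills can be recovered individually."""
    status_file = workdir / tape_id / "status.json"
    status = json.loads(status_file.read_text())
    status["spills"][str(spill)] = "ok" if ok else "failed"
    status_file.write_text(json.dumps(status, indent=2))

work = pathlib.Path("/tmp/farm-demo")
submit_tape("TAPE0001", work)
record_spill(work, "TAPE0001", spill=1, ok=True)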

7.
One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ Technical Proposal. Regular integration tests ensure its smooth operation in test-beam setups during its evolutionary development towards the final ATLAS online system; feedback is received and fed back into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run-control state transitions in various configurations of the run-control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. This paper presents a brief overview of the online system structure, its components, and the large-scale integration tests and their results.
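The run-control state transitions referred to above can be pictured as a small hierarchical state machine, where a command issued to the root controller is propagated down to its children before the root changes state. The state and command names below are common DAQ conventions assumed for the sketch, not the actual ATLAS Online Software states.

# Minimal sketch of a hierarchical run-control state machine (illustrative only).

ALLOWED = {
    "INITIAL":    {"configure": "CONFIGURED"},
    "CONFIGURED": {"start": "RUNNING", "unconfigure": "INITIAL"},
    "RUNNING":    {"stop": "CONFIGURED"},
}

class RunController:
    def __init__(self, name, children=()):
        self.name = name
        self.state = "INITIAL"
        self.children = list(children)   # hierarchy of controllers

    def command(self, cmd):
        nxt = ALLOWED[self.state].get(cmd)
        if nxt is None:
            raise RuntimeError(f"{self.name}: '{cmd}' not allowed in {self.state}")
        for child in self.children:      # propagate down the hierarchy first
            child.command(cmd)
        self.state = nxt
        print(f"{self.name}: -> {self.state}")

root = RunController("root", children=[RunController("subfarm-1"),
                                       RunController("subfarm-2")])
root.command("configure")
root.command("start")
root.command("stop")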

8.
CMS physicists need to seamlessly access their experimental data and results, independent of location and storage medium, in order to focus on the exploration for new physics signals rather than the complexities of worldwide data management. In order to achieve this goal, CMS has adopted a tiered worldwide computing model which will incorporate emerging Grid technology. CMS has started to use Grid tools for data processing, replication and migration. Important Grid components are expected to be delivered by the Data Grid projects. For these projects, CMS has created a set of long-term requirements. These requirements are presented and discussed.

9.
In this update on an object-oriented (OO) track reconstruction model, which was presented at CHEP'97, CHEP'98 and CHEP 2000, we describe new developments since the beginning of the year 2000. The OO model for the Kalman filtering method has been designed for high-energy physics experiments at high-luminosity hadron colliders. It was originally coded in the C++ programming language for the CMS experiment at the future Large Hadron Collider (LHC) at CERN, and has since been successfully implemented into three different OO computing environments (including the level-2 trigger and offline software systems) of ATLAS (another major experiment at the LHC). For the level-2 trigger software environment, we selectively present some of the latest performance results (e.g. the B-physics event selection for the ATLAS level-2 trigger, the robustness study results, etc.). For the offline environment, we present a new 3-D space-point package which provides the essential offline input. A major development after CHEP 2000 is the implementation of the OO model into the new OO software framework "Athena" of the ATLAS experiment. The new modularization of this OO package makes the model more flexible and more easily implemented into different software environments. It also provides the potential to handle more complicated, realistic situations (e.g. to include the calibration and alignment corrections, etc.). Some general interface issues of the algorithms to different framework environments (e.g. the design of the common track class) have been investigated using this OO package.
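For readers unfamiliar with the technique, the sketch below shows a minimal one-dimensional Kalman-filter track fit (position and slope state, straight-line propagation between detection planes). It illustrates the filtering method only and is not the CMS/ATLAS implementation described in the abstract.

# Minimal 1-D Kalman-filter track fit: illustrative, not experiment code.

import numpy as np

def kalman_track_fit(z_planes, measurements, meas_sigma=0.1):
    """Fit y(z) = y0 + slope*z from noisy hits at the given z planes."""
    x = np.array([measurements[0], 0.0])          # initial state: [y, slope]
    P = np.diag([1.0, 1.0])                       # loose initial covariance
    H = np.array([[1.0, 0.0]])                    # we measure y only
    R = np.array([[meas_sigma ** 2]])
    for i in range(1, len(z_planes)):
        dz = z_planes[i] - z_planes[i - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])     # straight-line propagation
        x = F @ x                                  # predict state
        P = F @ P @ F.T                            # predict covariance
        y_res = measurements[i] - (H @ x)[0]       # measurement residual
        S = H @ P @ H.T + R                        # residual covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K[:, 0] * y_res                    # update state
        P = (np.eye(2) - K @ H) @ P                # update covariance
    return x, P

z = [0.0, 1.0, 2.0, 3.0]
hits = [0.02, 0.51, 0.98, 1.53]                    # roughly y = 0.5 * z
state, cov = kalman_track_fit(z, hits)
print("fitted [y, slope] at last plane:", state)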

10.
The CDF collaboration at the Fermilab Tevatron analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. During the collider run starting this year, the experiment expects to record 1 Petabyte of data and associated data samples. The Data Handling (DH) system has online and offline components. The DH offline component provides access to the stored data, to stored reconstruction output, to stored Monte-Carlo data samples, and to user-owned data samples. It serves more than 450 physicists of the collaboration. The extra requirements on the offline component of the Data Handling system are simplicity and convenience for users. More than 50 million events of the CDF Run II data have already been processed using this system.

11.
After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data have been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data have been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3, so within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to the infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and from direct measurements of throughput are reviewed, and methods for achieving the ambitious requirements are discussed.
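A back-of-the-envelope check makes the saturation claim concrete. Assuming, as rough round numbers taken from the figures above, a 165 TB yearly volume that doubles each year with one third exported, the sustained export rate crosses the 155 Mbps OC3 capacity within about four years:

# Rough arithmetic only; the 165 TB/year starting point and the "one third
# exported, doubling yearly" growth are assumptions based on the abstract.

OC3_MBPS = 155.0
SECONDS_PER_YEAR = 3.156e7

volume_tb = 165.0                # assumed current yearly data volume (TB)
for year in range(6):
    exported_tb = volume_tb / 3.0
    rate_mbps = exported_tb * 1e12 * 8 / SECONDS_PER_YEAR / 1e6
    status = "saturated" if rate_mbps >= OC3_MBPS else "ok"
    print(f"year {year}: export {exported_tb:6.0f} TB -> {rate_mbps:6.1f} Mbps ({status})")
    volume_tb *= 2               # data collection doubles each year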

12.
The liquid viscosity of immiscible Al-In alloys was measured using an oscillating-cup viscometer. It is found that the viscosity of Al-In melts changes abruptly at the critical temperature of liquid-liquid phase separation during the cooling process. The experimental data above the temperature of phase separation are fitted to the Arrhenius equation. The fitted results show that the temperature dependence of the viscosity obeys the Arrhenius relationship.
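For reference, the Arrhenius form conventionally used for such viscosity fits is written below; the abstract does not give its exact parametrization, so this is the standard textbook form.

% Standard Arrhenius form for the temperature dependence of liquid viscosity:
% eta_0 is a prefactor, E_a the activation energy, R the gas constant, T the temperature.
\begin{equation}
  \eta(T) = \eta_0 \exp\!\left(\frac{E_a}{R\,T}\right)
\end{equation}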

13.
Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale, high-throughput distributed computing exercises: the ALICE data challenges. The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions and to perform an early integration of the overall ALICE computing infrastructure. This paper reports on the third ALICE Data Challenge (ADC III), performed at CERN from January to March 2001. The data used during ADC III were simulated physics raw data of the ALICE TPC, produced with the ALICE simulation program AliRoot. The data acquisition was based on the ALICE online framework, the ALICE Data Acquisition Test Environment (DATE) system. After event building, the data were formatted with the ROOT I/O package and a data catalogue based on MySQL was established. The mass storage system used during ADC III was CASTOR. Different software tools were used to monitor the performance. DATE demonstrated a performance of more than 500 MByte/s, and an aggregate data throughput of 85 MByte/s was sustained in CASTOR over several days. The total collected data amount to 100 TBytes in 100,000 files.

14.
J. H. Field, Chinese Physics C, 2017, 41(10): 103001
The indirect estimation of the Higgs boson mass from electroweak radiative corrections within the Standard Model is compared with the directly measured value obtained by the ATLAS and CMS collaborations at the CERN LHC collider. Treating the direct measurement of m_H as input, the Standard Model indirect estimate of the top-quark mass is also obtained and compared with its directly measured value. A model-independent analysis finds an indirect value of m_H of ≃70 GeV, below the directly measured value of 125.7±0.4 GeV, and an indirect value of m_t = 177.3±1.0 GeV, above the directly measured value of 173.21±0.87 GeV. A goodness-of-fit test of the Standard Model using all Z-pole observables and m_W has a χ² probability of ≃2%. The reason why probability values about a factor of ten larger than this, and indirect estimates of m_H about 30 GeV higher, have been obtained in recent global fits to the same data is recalled.

15.
A constrained high-order statistical algorithm is proposed to blindly deconvolve measured spectral data and simultaneously estimate the response function of the instrument. In this algorithm, no prior knowledge is necessary except a proper length of the unit-impulse response. This length can easily be set to the width of the narrowest spectral line by inspecting the measured data. The feasibility of this method has been demonstrated experimentally on measured Raman and absorption spectral data.

16.
In order to cope with the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously being increased, and data distribution tools have been designed in order to reach a transfer rate of ~100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-Root [1] format and later in Objectivity [2] format. GRID tools will be used for remote job submission.

17.
An all-transistor active-inductor shunt-peaking structure has been used in a prototype of an 8 Gbps high-speed VCSEL driver designed for the optical link of the ATLAS liquid argon calorimeter upgrade. The VCSEL driver is fabricated in a commercial 0.25 μm Silicon-on-Sapphire (SOS) CMOS process for radiation tolerance. The all-transistor active-inductor shunt-peaking is used to overcome the bandwidth limitation of the CMOS process. The peaking structure has the same peaking effect as a passive one, but occupies a small area, does not need linear resistors, and can overcome process variation by adjusting the peaking strength via an external control. The design has been taped out, and the prototype has been validated by preliminary electrical test results and bit-error-ratio test results. With peaking, the driver achieves the 8 Gbps data rate obtained in simulation. We present the all-transistor active-inductor shunt-peaking structure, simulation and test results in this paper.

18.
Position-sensitive thin-gap gas detectors have been developed in the laboratory, based on the ATLAS Thin Gap Chamber. The signal collection structure has been redesigned while retaining the other configurations, in order to preserve the good timing performance of the detector. The position resolution was measured using cosmic muons for two versions of the detector and found to be 409 μm and 233 μm, respectively. This paper presents the structure of these two detector prototypes, together with the position resolution measurement method and results.

19.
The ZEUS experiment has migrated its reconstruction and analysis farms to a PC-based environment. More than one year of experience has been acquired with the successful operation of an analysis farm designed for several hundred users. Specially designed software has been used to provide fast and reliable access to large amounts of data (30 TB in total). After the ongoing upgrade of the HERA luminosity, higher requirements will arise in terms of data storage capacity and throughput rate. The need for a bigger disk cache has led to the consideration of solutions based on commodity technology, and PC-based file servers are being tested as a cost-effective storage system. In this article we present the hardware and software solutions deployed and discuss their performance, scalability and maintenance issues.

20.
The object-oriented (OO) approach is the key technology for developing a software system in the LHC/ATLAS experiment. We developed an OO simulation framework based on the Geant4 general-purpose simulation toolkit. Because of the complexity of simulation in ATLAS, we paid particular attention to scalability in its design. Although the first target of this framework is the implementation of the ATLAS full detector simulation program, it contains no experiment-specific code, so it can be used for the development of any simulation package, not only for HEP experiments but also for various other research domains. In this paper we discuss our approach to the design and implementation of the framework.
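One common way a framework stays free of experiment-specific code, as claimed above, is through an abstract interface plus a plugin registry: the framework core only knows the interface, and each experiment registers its own implementation. The sketch below illustrates that generic pattern; it is not the ATLAS framework code, and all names are invented for the example.

# Illustrative plugin-registry sketch of an experiment-agnostic framework core.

from abc import ABC, abstractmethod

class DetectorConstruction(ABC):
    """Framework-side interface; the core never imports concrete detectors."""
    @abstractmethod
    def build(self) -> str: ...

_REGISTRY = {}

def register(name):
    def deco(cls):
        _REGISTRY[name] = cls
        return cls
    return deco

def run_simulation(detector_name: str, n_events: int):
    geometry = _REGISTRY[detector_name]().build()
    for i in range(n_events):
        print(f"simulating event {i} in {geometry}")

# Experiment-specific plugin, kept outside the framework core.
@register("toy-calorimeter")
class ToyCalorimeter(DetectorConstruction):
    def build(self) -> str:
        return "toy calorimeter geometry"

run_simulation("toy-calorimeter", n_events=2)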
