Similar Documents
20 similar documents found
1.
The true coincidence summing (TCS) effect on the full-energy-peak (FEP) efficiency calibration of an HPGe detector has been studied as a function of sample-to-detector distance using multi-gamma sources. An analytical method has been used to calculate coincidence correction factors for 152Eu, 133Ba, 134Cs and 60Co for point and extended source geometries at close sample-to-detector distances. The peak and total efficiencies required for this method have been obtained with the MCNP code using the optimized detector geometry. The correction factors have also been obtained experimentally, and the analytical and experimental values have been found to agree within 1–5%. The method has been applied to obtain the activities of the radionuclides (106Ru, 125Sb, 134Cs and 144Ce) present in a fission product sample.
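The abstract gives no formulas, so as an illustration only: for an idealized two-step cascade γ1→γ2 with branching ratio 1, no angular correlation and no internal conversion, the summing-out corrections reduce to C1 = 1/(1 − εt(E2)) and C2 = 1/(1 − εt(E1)), where εt is the total efficiency at the partner line's energy. The sketch below uses hypothetical efficiency values (not from the paper) to show how total efficiencies obtained from a code such as MCNP would feed such a correction.

```python
# Minimal sketch (not the paper's actual algorithm): true-coincidence-summing
# correction for an idealized two-step cascade g1 -> g2 emitted promptly,
# with branching ratio 1, no angular correlation and no internal conversion.

def summing_out_corrections(eff_total_1, eff_total_2):
    """Correction factors that multiply the apparent full-energy-peak count
    rates of gamma-1 and gamma-2 to recover the summing-free rates."""
    c1 = 1.0 / (1.0 - eff_total_2)  # gamma-1 peak loses counts when gamma-2 also deposits energy
    c2 = 1.0 / (1.0 - eff_total_1)  # and vice versa
    return c1, c2

# Hypothetical total efficiencies at a close sample-to-detector distance
c1, c2 = summing_out_corrections(eff_total_1=0.12, eff_total_2=0.09)
print(f"correction factors: C1 = {c1:.3f}, C2 = {c2:.3f}")
```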

2.
For the evaluation of coincidence summing effects for volume sources, an effective total efficiency (ETE) is used instead of the common total efficiency (TE). In this paper ETE is computed by the Monte Carlo method. The differences between ETE and TE are analyzed and their origin is discussed. Measured values of the coincidence summing correction factors for a standard solution containing 152Eu in a one-liter Marinelli beaker are compared with computed values obtained from appropriate values of ETE. It is shown that the procedure for the evaluation of the coincidence effects is reliable. As a consequence, it can be concluded that 152Eu volume sources can be successfully used for efficiency calibration even in the case of high-efficiency detectors and close source-to-detector distances.
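As a hedged illustration of why an effective total efficiency is needed (the notation and toy geometry below are my own, not the paper's): summing on the gamma-1 peak is governed by the total efficiency of gamma-2 averaged over the source volume with a weight proportional to the gamma-1 peak efficiency at each emission point, which generally differs from the plain volume-averaged total efficiency.

```python
# Sketch of the idea behind an "effective" total efficiency for a volume source
# (illustrative point-kernel toy model, not the paper's Monte Carlo code):
#   ETE(E2 | E1) = sum_i ep(E1, z_i) * et(E2, z_i) / sum_i ep(E1, z_i)
# versus the plain volume-averaged total efficiency TE(E2) = mean_i et(E2, z_i).
import random

def sample_point():
    # uniform axial position in a 5 cm tall cylindrical source above the detector
    return random.uniform(0.0, 5.0)

def ep1(z):   # illustrative peak efficiency of gamma-1 at height z (cm)
    return 0.05 / (1.0 + z) ** 2

def et2(z):   # illustrative total efficiency of gamma-2 at height z (cm)
    return 0.20 / (1.0 + z) ** 2

def ete_and_te(n=100_000):
    num = den = te = 0.0
    for _ in range(n):
        z = sample_point()
        w = ep1(z)            # weight: where the detected gamma-1 events come from
        num += w * et2(z)
        den += w
        te += et2(z)
    return num / den, te / n

ete, te = ete_and_te()
print(f"ETE = {ete:.4f}  vs  plain TE = {te:.4f}")
```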

3.
A new cascade summing correction method has been developed and implemented in Genie 2000 V3.2, released in 2009, with the algorithm extended to include true coincidence summing effects from low-energy gamma-rays and K X-rays from electron capture and internal conversion, as well as the 511 keV positron annihilation photons. To validate the accuracy and precision of the extended correction method, measurements of calibrated sources containing cascading nuclides, made on various types of ISOCS/LabSOCS-characterized high-purity germanium detectors, have been analyzed. The true coincidence summing correction factors of the extended method have been validated by comparison with results from the Monte Carlo code MCNP-CP. In addition, a comparison between the measured and the known activities of the cascading nuclides was performed, showing that the method is effective and accurate.

4.
A GEANT4-based Monte Carlo simulation has been successfully utilised to calculate peak efficiency characterisations and cascade summing (true coincidence summing) corrections in two source geometries commonly used for environmental monitoring. The cascade summing corrections are compared with values generated using an existing (validated) system and found to be in excellent agreement for all radionuclides simulated. The calculated correction factors and peak efficiencies were also tested by analysing well-defined sources used in the operation of the International Monitoring System, which undertakes radionuclide monitoring for verification of the Comprehensive Nuclear-Test-Ban Treaty. All measured radionuclide abundances matched the values previously determined using proprietary software. Using GEANT4 in this way, cascade summing corrections can now be extended to complex detector models and source matrices, such as Compton suppression systems.

5.
Journal of Radioanalytical and Nuclear Chemistry - True coincidence summing (TCS) correction factors were determined for 152Eu volume sources used for γ-ray efficiency calibration as the ratio...

6.
7.
There was an emerging need at the Nuclear Analysis and Radiography Department of the Centre for Energy Research, Hungarian Academy of Sciences, Budapest, to perform low-level radioactivity measurements of various samples from in-beam activation and from environmental studies. Important aspects of reusing the low-background chamber called DÖME, which had been unused for some years, were the development of an easily reusable radon-tight sample container and the setup of a measurement system capable of counting extended samples in close-in geometry. As a result of our efforts, a special sample container made of HDPE (high-density polyethylene) was developed, and it was demonstrated that the probability of a radon loss larger than 2 % of its radioactive decay constant is <95 %. Due to the lack of reference samples containing the same radionuclides as the unknown sample, the absolute method of measuring the activity concentration of nuclides in the sample had to be applied, which required the reliable determination of the full-energy peak efficiency. A method called efficiency transfer, combined with the correction for true coincidence summing effects, has been shown to provide appropriate results and was applied.
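A minimal sketch of the efficiency-transfer principle mentioned above, under strong simplifying assumptions: a bare geometric solid angle stands in for the effective solid angle, which in practice also includes attenuation in the source, container and detector dead layers. All numbers are hypothetical.

```python
# Illustrative sketch of efficiency transfer (simplified):
#   eps_target(E) = eps_reference(E) * Omega_eff_target(E) / Omega_eff_reference(E)
import math

def solid_angle_point_source(distance_cm, detector_radius_cm):
    """Geometric solid angle of a disc seen from an on-axis point source --
    a crude stand-in for the effective solid angle used in efficiency transfer."""
    return 2.0 * math.pi * (1.0 - distance_cm / math.hypot(distance_cm, detector_radius_cm))

# Hypothetical numbers: reference calibration at 10 cm, unknown sample at 2 cm
eps_ref = 0.010                       # measured FEP efficiency at 10 cm (assumed)
omega_ref = solid_angle_point_source(10.0, 3.0)
omega_tgt = solid_angle_point_source(2.0, 3.0)
eps_target = eps_ref * omega_tgt / omega_ref
print(f"transferred efficiency at 2 cm ~ {eps_target:.4f}")
```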

8.
For the correction of losses due to true coincidence summing and edge effects, a simple method is developed based on the ratio of a reference single γ-ray energy to that of the cascade energies at near and far geometries. The correction factors for several radioactive sources with simple and complex decay schemes are experimentally determined for three types of germanium detectors. It is shown that coincidence summing can be a complex effect that depends on the individual detector, the counting geometry and the decay scheme of the radionuclide concerned.
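One possible reading of this ratio method, stated with caution since the abstract omits the details: a summing-free reference line and the cascade lines are measured at a far position (where summing is negligible) and at the near position of interest, and the change in their count-rate ratio gives the correction factor, assuming the shape of the efficiency curve is the same at both positions.

```python
# Sketch of one interpretation of the ratio method (not the authors' exact
# formulation). Assumes the reference line is free of summing at both positions
# and that the efficiency-curve shape does not change between near and far.

def summing_correction(n_cascade_near, n_ref_near, n_cascade_far, n_ref_far):
    ratio_near = n_cascade_near / n_ref_near
    ratio_far = n_cascade_far / n_ref_far
    return ratio_far / ratio_near   # factor multiplying the near-geometry cascade peak

# Hypothetical count rates (counts per second)
print(summing_correction(n_cascade_near=41.0, n_ref_near=100.0,
                         n_cascade_far=0.45, n_ref_far=1.00))   # ~ 1.10
```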

9.
The gamma lines at 609.3 and 1120.3 keV are two of the most intense γ emissions of 214Bi, but they suffer serious true coincidence summing (TCS) effects due to the complex decay scheme with multiple cascading transitions. TCS effects distort the measured count rates and hence give erroneous results. A simple and easy experimental method for determining the TCS corrections of the 214Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Height efficiency and self-attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply, with neither an additional standard source nor simulation skills required.
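The abstract does not spell out the self-attenuation treatment; as a generic illustration only, a flat sample of thickness t and linear attenuation coefficient μ transmits on average (1 − e^(−μt))/(μt) of its photons towards the detector, and the correction relative to a calibration standard of the same shape is the ratio of the two factors. The numbers below are hypothetical.

```python
# Illustrative self-attenuation factor for a flat (slab-like) sample counted on
# top of the detector, ignoring geometry details -- not the paper's derivation.
import math

def slab_attenuation_factor(mu_cm_inv, thickness_cm):
    """Thickness-averaged transmission (1 - exp(-mu*t)) / (mu*t)."""
    x = mu_cm_inv * thickness_cm
    return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

# Hypothetical linear attenuation coefficients at 609 keV
f_sample = slab_attenuation_factor(mu_cm_inv=0.12, thickness_cm=4.0)
f_standard = slab_attenuation_factor(mu_cm_inv=0.09, thickness_cm=4.0)
print(f"relative self-attenuation correction ~ {f_standard / f_sample:.3f}")
```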

10.
We have shown that it is sufficiently accurate to use MCNP peak-to-total calibration results to correct for cascade summing effects in a gamma spectrum. It is also sufficient to use only approximate detector characterization data together with an empirical peak-to-total curve to obtain good cascade summing correction results. The intrinsic P/T curve of detectors with the same efficiency is very similar, and it may be considered a common characteristic of the whole family of detectors with a given efficiency.

11.
When radionuclides decay by cascading photons, the accuracy of the measured nuclide activity may be affected by true coincidence summing effects. The effects can be quantified by Monte Carlo simulations that can handle correlated γ- and X-ray emissions from a radionuclide. Analysis techniques are also available commercially to correct for the effects due to cascading γ-rays. The MCNP-CP code was used to compute the effects in high-purity germanium detectors for several commonly used nuclides and geometries, and the results were compared to measurements and to an analysis technique. Excellent agreement was obtained between the true coincidence summing corrections predicted by MCNP-CP and the analysis technique. In addition, the X-ray true coincidence summing effects were evaluated.

12.
With SLOWPOKE and MNS reactors, which have reproducible neutron fluxes, the standardization of multielement NAA can be reduced to measuring activation constants once for all elements and then determining relative detection efficiencies for new detectors and counting geometries. In this work, a method has been developed for the parameterization of the efficiency of germanium detectors. The gamma-ray detection efficiency was measured as a function of energy and distance for three detectors. The variation with distance was found to follow a modified EID law, within 1%, for point sources 1 mm to 250 mm from the detector. A model including coincidence summing corrections was developed to calculate efficiency for NAA samples; it requires 16 measured parameters. Tests showed that the calculated relative detection efficiencies are accurate to better than 3% for close counting geometries and sample volumes up to a few millilitres. Areas of possible improvement to the accuracy of the method are suggested.
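The abstract does not define the modified EID law; a common effective-interaction-depth form, shown here purely as an assumption, replaces the detector by a virtual point a depth d0 behind its face so that the efficiency falls as 1/(d + d0)². The data and fit below are illustrative.

```python
# Sketch of an effective-interaction-depth (EID) type distance law (one common
# form; the "modified" law of the abstract may differ in detail):
#   eff(d) = k / (d + d0)**2
# with k and d0 fitted per energy from efficiencies measured at a few distances.
import numpy as np
from scipy.optimize import curve_fit

def eid_law(d_mm, k, d0_mm):
    return k / (d_mm + d0_mm) ** 2

# Hypothetical measured efficiencies of one gamma line at several distances (mm)
d = np.array([10.0, 50.0, 100.0, 150.0, 250.0])
eff = np.array([2.10e-2, 5.25e-3, 1.99e-3, 1.04e-3, 4.3e-4])
(k, d0), _ = curve_fit(eid_law, d, eff, p0=[30.0, 20.0])
print(f"fitted effective interaction depth d0 ~ {d0:.1f} mm")
```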

13.
A new algorithm for efficiency calibration has been implemented. It is based on orthonormal polynomial approximation and is capable of handling polynomials of degree up to 12 and up to 20 experimental points. Additional utilities have been provided to control the calibration procedure, including restriction of the maximal polynomial degree, application of corrections for coincidence summing, management of the experimental data set (rejecting/restoring data from the calculation), graphical representation of the resulting curve, and so on.
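A minimal sketch of this kind of calibration fit, assuming the usual log-efficiency versus log-energy parameterization and using a Chebyshev (orthogonal) basis as a stand-in for the orthonormal polynomials of the abstract; the calibration points are hypothetical.

```python
# Minimal sketch of a polynomial efficiency calibration in log-log space
# (illustrative; the actual implementation supports degree control up to 12,
# point rejection/restoration, summing corrections and plotting).
import numpy as np

# Hypothetical calibration points: energy (keV) and full-energy-peak efficiency
energy = np.array([122.0, 245.0, 344.0, 662.0, 779.0, 964.0, 1112.0, 1408.0])
eff = np.array([0.065, 0.040, 0.031, 0.018, 0.0155, 0.0130, 0.0115, 0.0093])

degree = 3
x = np.log(energy)
# An orthogonal (Chebyshev) basis gives a better-conditioned fit than raw powers
coeffs = np.polynomial.chebyshev.chebfit(x, np.log(eff), degree)

def efficiency(e_kev):
    return np.exp(np.polynomial.chebyshev.chebval(np.log(e_kev), coeffs))

print(f"interpolated efficiency at 1000 keV ~ {efficiency(1000.0):.4f}")
```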

14.
Correction equations for the coincidence-summing effect on the efficiencies of an HPGe detector, based on the decay scheme, were developed by considering summing up to triple coincidences. The correction equations, which do not depend on the type of Ge detector, are very useful for the efficiency calibration of a Ge detector in the energy region from 60 to 400 keV using the 75Se radionuclide, even at very short source-to-detector distances.

15.
The efficiency calibration of laboratory-based gamma spectrometry systems typically involves the purchase or construction of calibration samples that are supposed to represent the geometries of the unknown samples to be measured. For complete and correct calibrations, these sample containers must span the operational range of the system, which at times can include difficult configurations of size, density, matrix, and source distribution. The efficiency calibration of a system depends not only on the detector but also on the radiation attenuation factors in the detector–source configuration, and it is therefore invalid unless all parameters of the sample assay condition are identical to the calibration condition. An alternative to source-based calibrations is to mathematically model the efficiency response of a given detector–sample configuration. In this approach, the measurement system is calibrated using physically accurate models whose parameters can generally be measured easily. Using modeled efficiencies, systems can be quickly adapted to changing sample containers and detector configurations. This paper explores the advantages of using mathematically computed efficiencies in place of traditional source-based measured efficiencies for laboratory samples, focusing specifically on the possibility of sample optimization for a given detector, uncertainty estimation, and cascade summing corrections.

16.
An original focus on univariate calibration as an experimental process of quantitative analysis is presented. A novel classification system is introduced against the background of the present situation concerning the nomenclature of calibration methods. Namely, it has been shown that four methods well known in analytical chemistry – the conventional method, the internal standard method, the indirect method and the dilution method – can each be carried out in either the interpolative or the extrapolative mode. It is then shown that the basic procedures of all these methods can be modified with different approaches, such as the matrix-matched technique, spiking the sample with a reactant, bracketing calibration, and others. For the first time (as compared with monographs dealing with univariate calibration), it is reviewed how the methods are mixed and integrated with one another, thereby creating new calibration strategies of extended capability in terms of enhanced resistance to interference and non-linear effects – the main sources of systematic calibration errors. As an additional novelty, rationally possible combinations of the calibration methods – not hitherto reported in the literature – have been predicted. Finally, some general rules relating to calibration are formulated and the main calibration problems that still need to be solved are set out.
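To make the interpolative/extrapolative distinction concrete, a small sketch with invented signals: the conventional (external-standard) method reads the unknown off a fitted line by interpolation, whereas standard-addition-style spiking of the sample itself extrapolates the fitted line to zero signal, which cancels proportional matrix effects.

```python
# Sketch contrasting the two calibration modes (all signals are hypothetical).
import numpy as np

# Interpolative (external-standard) calibration
c_std = np.array([0.0, 1.0, 2.0, 4.0])        # standard concentrations (mg/L)
y_std = np.array([0.01, 0.21, 0.41, 0.80])    # instrument response
slope, intercept = np.polyfit(c_std, y_std, 1)
y_sample = 0.33
print("interpolative result:", (y_sample - intercept) / slope)

# Extrapolative (standard-addition) calibration on the same sample
c_added = np.array([0.0, 1.0, 2.0, 3.0])      # added concentrations (mg/L)
y_added = np.array([0.33, 0.51, 0.69, 0.87])  # response of sample + addition
slope_a, intercept_a = np.polyfit(c_added, y_added, 1)
print("extrapolative result:", intercept_a / slope_a)  # magnitude of x-intercept
```

The two results differ here because the invented data include a multiplicative matrix effect, which is exactly the situation the extrapolative mode is meant to handle.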

17.
The sensitivity of n-type gamma-X detectors to low-energy X- and γ-rays calls for coincidence corrections in the efficiency calibration that do not apply to the calibration of p-type detectors. Corrections were calculated for the effect of cascade coincidences between γ-rays, X-rays, annihilation radiation and bremsstrahlung for 15 radionuclides frequently used for efficiency calibration. Experimental results are presented for a γ-X detector with 37% relative efficiency at distances from 0.9 to 17.5 cm. After coincidence correction, smooth efficiency curves were found over the energy range 12 to 2750 keV, even for the position closest to the detector.

18.
Direct solid sampling techniques in AAS have several advantages over wet digestion methods, such as small sample size requirements and simple calibration procedures. However, some disadvantages also exist, such as sample inhomogeneity and the matrix sensitivity of the calibration. The calibration is commonly carried out by varying the sample mass and evaluating the peak intensity versus the absolute analyte amount. It is shown here that this procedure must be considered doubtful when matrix effects are expected. In the case of zinc determination in geological samples, it has been shown that calibration functions obtained with different reference materials differ significantly from each other. As an alternative, a three-dimensional calibration technique can be applied that evaluates the peak intensity versus both analyte content and sample weight. The resulting calibration planes are expected to be hyperbolically curved. This three-dimensional calibration has been applied to the determination of Zn in geological samples and compared with classical solid sampling AAS calibration procedures.
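As an illustration of the three-dimensional idea (the functional form and all numbers below are my own assumptions, chosen only to reproduce the hyperbolic curvature mentioned above), the peak intensity can be fitted as a surface over analyte content and sample mass and then inverted for an unknown sample of known mass.

```python
# Sketch of a three-dimensional calibration (illustrative model, not the
# paper's exact function): I(c, m) = a * c * m / (1 + b * c * m)
import numpy as np
from scipy.optimize import curve_fit, brentq

def intensity(cm, a, b):
    c, m = cm
    return a * c * m / (1.0 + b * c * m)

# Hypothetical reference measurements: content (ug/g), mass (mg) -> intensity
c_ref = np.array([50.0, 50.0, 120.0, 120.0, 300.0, 300.0])
m_ref = np.array([0.5, 2.0, 0.5, 2.0, 0.5, 2.0])
i_ref = np.array([0.024, 0.090, 0.056, 0.194, 0.130, 0.375])
(a, b), _ = curve_fit(intensity, (c_ref, m_ref), i_ref, p0=[1e-3, 1e-3])

# Unknown sample: 1.2 mg weighed in, measured intensity 0.15 -> solve for content
i_meas, m = 0.15, 1.2
content = brentq(lambda c: intensity((c, m), a, b) - i_meas, 1.0, 1000.0)
print(f"estimated Zn content ~ {content:.0f} ug/g")
```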

19.
SHAMAN is an expert system for qualitative and quantitative radionuclide identification in gamma spectrometry. SHAMAN requires as input the calibrations, peak search, and fitting results from reliable spectral analysis software, such as SAMPO. It uses a comprehensive reference library with 2600 radionuclides and 80,000 gamma lines, as well as a rule base consisting of sixty inference rules. Identification results are presented both via an interactive graphical interface and in the form of configurable text reports. An organization has been established for monitoring the recent Comprehensive Test Ban Treaty; for radionuclide monitoring, 80 stations will be set up around the world. Air-filter gamma spectra will be collected from these stations on a daily basis, and they will need to be analyzed reliably with minimum turnaround time. SHAMAN is currently being evaluated within the prototype monitoring system as an automated radionuclide identifier, in parallel with existing radionuclide identification software. In air-filter monitoring, very low concentrations of radionuclides are measured from bulky sources in close geometry and with long counting times. In this case, true coincidence summing and self-absorption become important factors. SHAMAN is able to take these complicated phenomena into account, and the results it produces have been found to be very reliable and accurate.

20.
Analytica Chimica Acta, 2004, 509(2): 217–227
In near-infrared (NIR) measurements, some physical features of the sample can be responsible for effects such as light scattering, which lead to systematic variations unrelated to the studied responses. These errors can disturb the robustness and reliability of multivariate calibration models. Several mathematical treatments are usually applied to remove systematic noise from the data, the most common being derivation, standard normal variate (SNV) and multiplicative scatter correction (MSC). New mathematical treatments, such as orthogonal signal correction (OSC) and direct orthogonal signal correction (DOSC), have been developed to minimize the variability unrelated to the response in spectral data. In this work, these two new pre-processing methods were applied to a set of roasted coffee NIR spectra. A separate calibration model was developed to quantify the ash content and the lipids in roasted coffee samples by PLS regression. The results provided by these correction methods were compared with those obtained from the original data and from the data corrected by derivation, SNV and MSC. For both responses, the OSC and DOSC treatments gave PLS calibration models with improved prediction ability (4.9 and 3.3% RMSEP with corrected data versus 7.1 and 8.3% RMSEP with the original data, respectively).
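A hedged sketch of the common pre-processing plus PLS workflow this abstract compares, using SNV only (OSC/DOSC are not available in scikit-learn and are omitted); the spectra and reference values are synthetic placeholders, so the printed RMSEP has no relation to the paper's figures.

```python
# Synthetic SNV + PLS example (illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

rng = np.random.default_rng(0)
y = rng.uniform(3.0, 6.0, size=60)                 # e.g. ash content (%), synthetic
wl = np.linspace(1100, 2500, 300)                  # wavelengths (nm)
band = np.exp(-((wl - 1900) / 120) ** 2)           # synthetic absorption band
X = y[:, None] * band + rng.normal(0, 0.02, (60, 300))
X *= rng.uniform(0.8, 1.2, size=(60, 1))           # multiplicative "scatter"

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), y, random_state=0)
model = PLSRegression(n_components=3).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((model.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP = {rmsep:.3f} (% ash, synthetic data)")
```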
