Similar Documents
20 similar documents found.
1.
After defining the term standardization, the necessity of standardized planning, execution, and evaluation in quantitative analysis is illustrated. Of the working steps (sampling, preliminary treatment of the sample, determination, and evaluation), only the last two can be standardized universally, because they can be examined and checked independently of the actual object of inquiry. These steps are designated the analytical procedure in the narrow sense. The quantitative analytical message is obtained through a chain running from the signal S, to the information I, to the absolute amount of the determined substance, called the working quantity A. The transitions are described by a signal function I = f(S) and an analytical function A = F(I). An important requirement for a standard procedure is the realization of a linear analytical function within a defined working range. Classifying standard working ranges as "complete" powers-of-ten ranges for amounts of substance in the SI unit mole leads to new general designations for all kinds of analytical procedures. By fixing a general scheme for carrying out measurements and evaluating results, an identical set of characteristic data can be obtained for critical examination and objective comparison of procedures. The basis of the standard evaluation is the method of least squares, applied to a set of 24 blocks of data arranged in 6 standardized groups of 4 blocks each. Details of standard measurements and standard evaluation are presented in Paper II.
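The chain from signal to working quantity and the least-squares fit of a linear analytical function can be sketched as follows. This is a minimal illustration with invented calibration data, not the paper's standardized 24-block evaluation scheme.

```python
import numpy as np

# Hypothetical calibration data: signal S (e.g. an instrument reading)
# versus known working quantity A (mol). Values are illustrative only.
S = np.array([0.10, 0.21, 0.39, 0.58, 0.81])
A = np.array([1e-6, 2e-6, 4e-6, 6e-6, 8e-6])

# Fit the linear analytical function A = F(I) = b0 + b1*I by least
# squares, here taking the information I to be the raw signal S.
b1, b0 = np.polyfit(S, A, 1)

def analytical_function(signal):
    """Return the estimated working quantity for a measured signal."""
    return b0 + b1 * signal

estimate = analytical_function(0.50)
```

For a signal of 0.50 the estimate falls, as expected, between the amounts belonging to the bracketing calibration signals.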

2.
This paper describes the range of analytical problems to be solved, the equipment, organizational structure, and personnel of the center, the methods for ensuring the quality of analyses, and the external relations.

3.
4.
For spectrochemical analyses, calibration is a basic operation generally necessary for quantitative determinations. It yields the calibration function and, by inversion, the analytical function, which is not directly accessible. In the following, a scheme is developed to find, for different methods of analysis, the respective suitable calibration function: it should give as small an expected standard deviation as possible for the estimated content, it should be linear if possible, and it should be applicable over as large a range of content as possible. The condition of Part I of this paper is a constant standard deviation of the measured quantity in this range. The scheme is examined with five analytical examples: for trace analyses (ICP-OES and AAS), the simple relationship between the intensity I and the concentration c is linear anyway and yields a constant absolute standard deviation independent of the trace content. For analyses over a large concentration range, a double-logarithmic transformation leads to linearity and a constant relative standard deviation (laser OES). If physical matrix effects exist, for example in surface analysis (ISS), the 'binary-ratio method' is recommended; in this case, the relative standard deviation is proportional to (1 - c). For XRF of main and additional components, the non-linear regression of the equation of Beattie and Brissey is to be carried out with √I values.
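The calibration-then-inversion step can be sketched in a few lines, using hypothetical intensity/concentration data for the linear trace-analysis case:

```python
import numpy as np

# Illustrative trace-analysis data: measured intensity I at known
# concentrations c (arbitrary units); values are invented.
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
I = np.array([12.1, 24.3, 47.8, 96.2, 191.5])

# Calibration function I = a + b*c, fitted on the measured quantity I
# (where the standard deviation is assumed constant, per Part I).
b, a = np.polyfit(c, I, 1)

# Analytical function obtained by inversion: c = (I - a) / b.
def invert(intensity):
    return (intensity - a) / b

c_est = invert(48.0)
```

An intensity of 48.0 inverts to a concentration close to 2.0, consistent with the calibration data.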

5.
The ‘characteristic function’ with two empirically determined parameters α and β is proposed as a general-purpose function to describe the variation of precision (in terms of standard deviation σ), or uncertainty, with analyte concentration c (here denoting any compositional quantity), for specific analytical methods applied to a defined type of test material. In this study it is applied to examples of analytical data collected under ‘instrumental’ conditions for estimating precision. The function fitted the data well, with no systematic lack of fit. The study therefore extends the range of applications of this function.
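One common two-parameter form of such a characteristic function is σ(c) = √(α² + β²c²); the paper's exact parameterization may differ, so treat this as an assumed form for illustration:

```python
import math

def characteristic_sd(c, alpha, beta):
    """Standard deviation as a function of concentration c,
    assuming the form sigma(c) = sqrt(alpha^2 + (beta*c)^2)."""
    return math.sqrt(alpha**2 + (beta * c)**2)

# At c = 0 the function reduces to alpha (detection-limit regime);
# at high c it approaches beta*c (constant relative standard deviation).
sd_low = characteristic_sd(0.0, 0.05, 0.02)
sd_high = characteristic_sd(100.0, 0.05, 0.02)
```

The two limits show why a single function can cover both the near-zero regime (constant absolute σ) and the high-concentration regime (constant relative σ).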

6.
An approach to assessing the permissible ranges for results of replicate determinations using uncertainty calculation is discussed. The approach is based on the known distribution of the normalized range ("range/standard deviation"), which is equivalent to the distribution of the range for normalized replicate results having a mean of 0 and a standard deviation of 1. It is shown that the permissible ranges can be assessed using tabulated percentiles of this distribution and calculated values of the standard uncertainty of the determination (analysis). When the standard uncertainty is calculated before the analytical method is validated, the permissible ranges can be predicted. As an example, the range is predicted for a new pH-metric method for acid number determination without titration in petroleum oils (basic, white, and transformer). The results of the prediction are in good conformity with the experimental data. Received: 23 May 1999 · Accepted: 14 November 1999
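The tabulated percentiles of the normalized range distribution can be approximated by simulation. A sketch for duplicate determinations, with an assumed standard uncertainty (the value 0.15 is invented):

```python
import random

random.seed(0)

def range_percentile(n, p, trials=20000):
    """Percentile p of the range of n standard-normal replicates
    (mean 0, sd 1), estimated by Monte Carlo simulation."""
    ranges = []
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        ranges.append(max(xs) - min(xs))
    ranges.sort()
    return ranges[int(p * trials)]

u = 0.15                  # assumed standard uncertainty of the method
q95 = range_percentile(2, 0.95)
permissible = q95 * u     # permissible range for duplicate results
```

For n = 2 the simulated 95th percentile lands near the analytical value 1.96·√2 ≈ 2.77, so scaling by the standard uncertainty gives the permissible range directly.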

7.
The theoretical and laboratory teaching of analytical chemistry are unified and the subject's content is organized into modules. Since the core task of quantitative analytical chemistry is the analysis of "quantity", the subject is divided around this core into two major blocks: condition control and quantitative calculation. Condition control, the prerequisite for quantitative analysis, is divided into fundamental theory and practical operating technique. Obtaining results by quantitative calculation, the ultimate aim of analytical chemistry, is divided into result calculation and result evaluation. After this modular division, the knowledge structure of analytical chemistry is clearly organized, with vertical layering and horizontal analogy. This helps students develop the ability to organize and summarize knowledge and to analyze and solve problems, so that they not only master the content but also strengthen their capacity for independent study and, ultimately, lifelong learning.

8.
A study of cathode materials in glow-discharge cathodic-sputtering/transient-atomization atomic absorption spectrometry. Zhang Bicheng (Department of Chemistry, Hubei University, Wuhan 430062). Keywords: glow discharge, cathodic sputtering, cathode materials. Glow-discharge cathodic-sputtering/transient-atomization atomic absorption spectrometry (TACSGD/AAS) is a high-sensitivity trace-analysis technique that has emerged in recent years [1]. It is ...

9.
The measurement of temperature is a useful analytical probe for industrial processes as well as applied research. However, temperature ranges for most monitoring equipment are not designed to deal with the broad temperature ranges common in electrothermal atomization atomic absorption spectroscopy. A simple circuit that enhances the temperature ranging of a commercial recording optical pyrometer, specifically, an Ircon Series 1100 optical pyrometer, is presented. This circuit used with the included “pseudo-code” can measure an increasing or decreasing temperature over the range 1000–3000°C.

10.
This review summarizes recent progress in the development and application of potentiometric sensors with limits of detection (LODs) in the range 10⁻⁸ to 10⁻¹¹ M. These LODs relate to total sample concentrations and are defined according to a definition unique to potentiometric sensors; LODs calculated according to traditional protocols (three times the standard deviation of the noise) yield values that are two orders of magnitude lower. We are targeting this article at analytical chemists who are non-specialists in the development of such sensors, so that this technology may be adopted by a growing number of research groups to solve real-world analytical problems. We discuss the unique response features of potentiometric sensors and compare them to other analytical techniques, emphasizing that the choice of method must depend on the problem of interest. We discuss recent directions in sensor design and development and present a list of 23 sensors with low LODs, with references. We give recent examples where potentiometric sensors have been used to solve trace-level analytical problems, including the speciation of lead and copper ions in drinking water, the measurement of free copper in sea water, and the uptake of cadmium ions by plant roots as a function of their speciation.
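For contrast, the traditional 3σ LOD mentioned above can be computed as follows, with illustrative blank readings and an assumed calibration sensitivity:

```python
import statistics

# Hypothetical blank signals and calibration sensitivity; both the
# readings and the sensitivity value are invented for illustration.
blank_signals = [0.021, 0.018, 0.024, 0.020, 0.019, 0.023]
sensitivity = 1.5e6   # signal units per mol/L (assumed)

# Traditional protocol: LOD = 3 * (sd of blank) / sensitivity.
s_blank = statistics.stdev(blank_signals)
lod_traditional = 3 * s_blank / sensitivity
```

With these numbers the traditional LOD comes out in the low nanomolar range; the review's point is that this protocol and the potentiometric-specific definition can disagree by about two orders of magnitude.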

11.
The study described was designed to evaluate the use of carefully selected wavelength ranges to improve the accuracy of results when multiwavelength absorption data are used to quantify components in mixtures. Mixtures of polynuclear aromatic hydrocarbons were used as test samples. It is shown that in some cases narrow wavelength ranges can yield substantially improved results relative to broad wavelength ranges. Such improvement is most likely when the component of interest is present in low concentration relative to other components with similar absorptivities and when the component of interest has fine structure in a limited region of the spectrum. Results for components with fine structure throughout broad regions of the spectrum are not likely to be improved by selection of a narrow range that emphasizes just one of the regions of fine structure. However, careful selection of such a region seldom degrades results.
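The effect of restricting the wavelength window can be illustrated with classical least squares on synthetic spectra. The absorptivities below are invented, and the data are noise-free, so both windows recover the composition exactly; the accuracy differences the study reports arise once measurement noise is present.

```python
import numpy as np

# Hypothetical absorptivity spectra (rows = 10 "wavelengths",
# columns = 2 components); component 2 has fine structure near index 2.
K = np.array([
    [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1],
    [0.2, 0.4, 0.9, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2],
]).T

true_c = np.array([0.1, 1.0])
A = K @ true_c   # noise-free mixture spectrum

# Classical least squares over the full range versus a narrow window
# (indices 1-4) emphasizing component 2's fine structure.
c_full, *_ = np.linalg.lstsq(K, A, rcond=None)
c_narrow, *_ = np.linalg.lstsq(K[1:5], A[1:5], rcond=None)
```

With noise added, the two windows would differ in conditioning, which is where careful range selection pays off.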

12.
Characterization of liquid phase flow-through detection systems as used in column liquid chromatography and flow injection analysis is discussed. Linear range, selectivity, peak broadening and detection limit are the most important characteristics. Peak broadening is treated with the aid of the concepts of systems analysis. The total peak broadening effect is given as the sum of contributions from connecting tubes or reactors, measuring volume and time constants in electronics and transducers. The influence of noise and signal frequency content on the precision of analytical results is treated qualitatively. The detection limit of a flow-through detection system is defined, taking these effects into account qualitatively. These characteristics are related to the performance of the whole analytical system with regard to concentration detection limit, absolute detection limit and maximum sample frequency.
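The summing of broadening contributions is conventionally done as variances of independent dispersion sources; a sketch with illustrative numbers:

```python
import math

# Hypothetical variance contributions (s^2) to peak broadening in a
# flow-through detection system; the values are illustrative only.
var_tubing = 0.40        # connecting tubes and reactors
var_cell = 0.25          # measuring (detector) volume
var_electronics = 0.10   # time constants of electronics/transducers

# Independent contributions add as variances, not standard deviations.
sigma_total = math.sqrt(var_tubing + var_cell + var_electronics)
```

Note that the largest single variance dominates: halving a minor contribution barely changes the total peak width.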

13.
Selectivity and specificity are performance characteristics of analytical methods that are frequently used in the analytical literature. In general, the terms are applied verbally, and a quantification of selectivity and specificity is rarely given. Exceptions are methods such as chromatography and ISE sensing, which use individual quantities such as selectivity coefficients, indices, and other parameters to characterize analytical procedures and systems. Here a proposal is made to characterize selectivity and specificity quantitatively by relative values in the range 0 to 1, expressing a certain degree of selectivity and specificity. Examples show that the derived quantities characterize analytical methods and problems in a plausible way.

14.
Several curve fitting methods, including linear, quadratic, and rational least squares fits and linear, cubic spline, and Stineman interpolations, are evaluated for their ability to fit highly nonlinear atomic absorption analytical calibration curves. In addition, the number of standards required to effectively calibrate a dynamic range covering 3 orders of magnitude of concentration is determined. Finally, the concentration ranges providing minimum relative concentration precision (RCP) are identified and the slopes of the calibration curves in these ranges are noted. Concentration ranges of minimum RCP also provide minimum curve fitting errors and generally have log-log slopes of approximately 0.5, indicating that linearity alone is not a sufficient criterion for choosing suitable calibration and curve fitting concentration ranges.
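The relative concentration precision along a nonlinear calibration curve can be sketched as follows, using an assumed saturating response model rather than the paper's data. For this model the minimum-RCP region indeed sits where the log-log slope is near 0.5:

```python
import numpy as np

# Hypothetical AA calibration: absorbance A vs. concentration c over
# 3 orders of magnitude, with an assumed saturating response model.
c = np.logspace(-1, 2, 50)
A = 1.5 * c / (1.0 + 0.05 * c)

# Log-log slope d(log A)/d(log c) along the curve.
loglog_slope = np.gradient(np.log10(A), np.log10(c))

# Relative concentration precision for constant absorbance noise s_A:
# RCP = (dc/dA) * s_A / c.
s_A = 0.005
dA_dc = np.gradient(A, c)
rcp = s_A / (dA_dc * c)

best = c[np.argmin(rcp)]                    # concentration of minimum RCP
slope_at_best = loglog_slope[np.argmin(rcp)]
```

For this model the analytic minimum of the RCP lies at c = 20, exactly where the log-log slope crosses 0.5, matching the abstract's observation.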

15.
Selectivity and specificity are performance characteristics of analytical methods that are frequently used in the analytical literature. In general, the terms are applied verbally, and a quantification of selectivity and specificity is rarely given. Exceptions are methods such as chromatography and ISE sensing, which use individual quantities such as selectivity coefficients, indices, and other parameters to characterize analytical procedures and systems. Here a proposal is made to characterize selectivity and specificity quantitatively by relative values in the range 0 to 1, expressing a certain degree of selectivity and specificity. Examples show that the derived quantities characterize analytical methods and problems in a plausible way.

16.
Analytical techniques for the determination of polychlorinated dibenzo-p-dioxins (PCDD), polychlorinated dibenzofurans (PCDF) and dioxin-like PCBs (DLPCB) are reviewed. The focus of the review is on recent advances in methodology and analytical procedures. The paper also reviews toxicology, the development of toxic equivalent factors (TEF) and the determination of toxic equivalent quantity (TEQ) values. Sources, occurrence and temporal trends of PCDD/PCDF are summarized to provide examples of levels and concentration ranges for the methods and techniques reviewed.
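The TEQ computation itself is a weighted sum, TEQ = Σᵢ cᵢ·TEFᵢ over the measured congeners. A sketch with illustrative values (the concentrations and TEFs below are invented for the example and are not an authoritative TEF table):

```python
# Hypothetical congener concentrations (pg/g) paired with assumed
# TEF weights; illustrative only, not an official TEF scheme.
congeners = {
    "2,3,7,8-TCDD":    (3.0, 1.0),
    "1,2,3,7,8-PeCDD": (5.0, 1.0),
    "2,3,7,8-TCDF":    (10.0, 0.1),
}

# TEQ: each congener's concentration weighted by its TEF, then summed.
teq = sum(conc * tef for conc, tef in congeners.values())
```

The weighting means a congener at ten times the concentration can still contribute less to the TEQ than a more toxic one, which is why congener-specific determination matters.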

17.
Supervised pattern recognition in food analysis
Data analysis has become a fundamental task in analytical chemistry due to the great quantity of analytical information provided by modern analytical instruments. Supervised pattern recognition aims to establish a classification model based on experimental data in order to assign unknown samples to a previously defined sample class based on their pattern of measured features. The bases of the supervised pattern recognition techniques most used in food analysis are reviewed, with special emphasis on the practical requirements of the measured data and discussion of common misconceptions and errors that might arise. Applications of supervised pattern recognition in the field of food chemistry appearing in the literature over the last two years are also reviewed.
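A minimal supervised pattern recognition example is a k-nearest-neighbour classifier; the toy two-feature data below are invented stand-ins for measured food descriptors:

```python
import math

# Toy training set: (feature vector, class) pairs, e.g. two measured
# chemical descriptors per food sample. Purely illustrative data.
train = [
    ((0.10, 0.20), "olive"), ((0.20, 0.10), "olive"), ((0.15, 0.15), "olive"),
    ((0.80, 0.90), "seed"),  ((0.90, 0.80), "seed"),  ((0.85, 0.95), "seed"),
]

def knn_classify(x, k=3):
    """Assign x to the majority class among its k nearest neighbours."""
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

label = knn_classify((0.2, 0.2))
```

An unknown sample near the first cluster is assigned to that class; in practice the abstract's caveats (data requirements, validation) decide whether such an assignment is trustworthy.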

18.
In order to perform high-accuracy analytical measurements, most analytical techniques require some form of calibration using standards of the same quantity as that being measured. The highest-accuracy calibration standards are those prepared by mass (gravimetrically) as opposed to by volume (volumetrically). The use of gravimetrically prepared standards to calibrate analytical techniques that rely on fixed-volume injections can cause systematic errors, even when the analytical technique does not suffer from a chemical matrix interference. The origin of these errors is explained and demonstrated experimentally for the analysis of sulphate in synthetic seawater samples and the measurement of the anionic content of particulate matter following extraction with water and wetting agents, where average measurement biases of +2.7% and -3.2%, respectively, were observed. Proposals are offered for methods to overcome this 'physical matrix effect'.

19.
The extension of the multivariate curve resolution theory presented is based on the minimum assumptions of non-negativity of sensor responses and non-negativity of quantities of components, as given by Lawton and Sylvestre. The theory is explicitly given for three components, while the algorithms developed may potentially be extended to the more general n-component case. The analytical solution to the problem of defining the physically permitted regions for the sensor responses (“spectra”) or quantity profiles of pure components is given. Implementations of the algorithms developed are used for finding the permitted ranges of the pure component spectra from mixture spectra. Extensions of the minimal assumption theory are suggested.
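A minimal sketch of curve resolution under the two non-negativity assumptions, using alternating least squares with clipping. This is an illustrative NMF-style scheme on simulated data, not the authors' analytical solution for the permitted regions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated mixture data D = C @ S with 3 components: non-negative
# concentration profiles C (20 x 3) and spectra S (3 x 15).
C_true = np.abs(rng.random((20, 3)))
S_true = np.abs(rng.random((3, 15)))
D = C_true @ S_true

# Alternating least squares, enforcing non-negativity by clipping
# each factor after its unconstrained least-squares update.
C = np.abs(rng.random((20, 3)))
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

The fitted C and S reproduce D closely but are not unique: any pair inside the permitted regions fits equally well, which is exactly the rotational ambiguity the paper's analytical solution delimits.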

20.
This paper is a review of work on the characterization of coal liquids and petroleum residues and asphaltenes over several decades in which various mass spectrometric methods have been investigated. The limitations of mass spectrometric methods require exploration in order to understand what the different analytical methods can reveal about environmental pollution by these kinds of samples and, perhaps more importantly, what they cannot reveal. The application of mass spectrometry to environmental problems generally requires the detection and determination of the concentration of specific pollutants released into the environment by accident or design. The release of crude petroleum or coal liquids into the environment can be detected and tracked during biodegradation processes through specific chemicals such as alkanes or polyaromatic hydrocarbons (PAHs). However, petroleum asphaltenes are polydisperse materials of unknown mass range and chemical structures and, therefore, there are no individual chemicals to detect. It is necessary to determine methods of detection and the ranges of mass of such materials. This can only be achieved through fractionation to reduce the polydispersity of the initial sample. Comparison of mass spectrometric results with results from an independent analytical method such as size-exclusion chromatography with a suitable eluent is advisable to confirm that all the sample has been detected and mass discrimination effects avoided. Copyright © 2010 John Wiley & Sons, Ltd.
