Similar Literature
3 similar documents found (search time: 0 ms)
1.
The Nanoparticle Surface Area Monitor (NSAM, TSI model 3550 and Aerotrak 9000) is an instrument designed to measure the airborne surface area concentration that would deposit in the alveolar or tracheobronchial region of the lung. It was found that the instrument can only be used reliably for nanoparticles in the size range between 20 and 100 nm. The upper size limit can be extended to 400 nm, where the minimum in the deposition curves occurs. While the fraction below 20 nm usually contributes only negligibly to the total surface area and is therefore not critical, a preseparator is needed to remove all particles above 400 nm in cases where the size distribution extends into the larger size range. Besides limitations in the particle size range, potential implications of extreme concentrations up to the coagulation limit, particle material (density and composition), and particle morphology are discussed. While concentration does not seem to pose any major constraints, the effect of different agglomerate shapes still has to be investigated further. Particle material has no noticeable impact on either particle charging in the NSAM or the deposition curves within the aforementioned size range, but particle hygroscopicity can cause the lung deposition curves to change significantly, a behavior that currently cannot be mimicked with the instrument. Besides limitations, possible extensions are also discussed. It was found that the particle deposition curves of a reference worker for alveolar, tracheobronchial, total, and nasal deposition follow the same tendencies in the 20–400 nm size range and that their ratios are almost constant. This also appears to hold for different individuals and under different breathing conditions. By means of appropriate calibration factors, the NSAM can therefore deliver the lung-deposited surface area concentrations for all of these regions from a single measurement.
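The single-measurement idea in this abstract can be sketched as follows: if the regional deposition ratios are approximately constant over 20–400 nm, one calibrated alveolar reading can be scaled to the other regions. The ratio values below are illustrative placeholders, not calibrated instrument data.

```python
# Hypothetical sketch of the calibration-factor approach described above.
# An NSAM alveolar-mode reading is scaled to other lung regions using
# (assumed) near-constant regional deposition ratios for the 20-400 nm range.
# All numeric ratios here are placeholders, NOT values from the paper.

REGION_RATIOS = {              # region : (region / alveolar) deposition ratio
    "alveolar": 1.0,           # reference region measured by the instrument
    "tracheobronchial": 0.2,   # placeholder value
    "total": 1.5,              # placeholder value
    "nasal": 0.3,              # placeholder value
}

def deposited_surface_area(alveolar_reading_um2_per_cm3: float) -> dict:
    """Scale a single alveolar-mode reading (um^2/cm^3) to all regions."""
    return {region: alveolar_reading_um2_per_cm3 * ratio
            for region, ratio in REGION_RATIOS.items()}

print(deposited_surface_area(100.0))
```

In practice the ratios would come from the reference-worker deposition curves, so a single calibrated instrument reading suffices for all regions.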

2.
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, hence undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain such usefulness experienced in practice and postulates that the viewers' knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers' knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure contains an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I of the paper provided a theoretical discourse on the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge on those processes. The real-world data about visualization provides practical evidence for evaluating the usability and intuitiveness of the candidate measures.
The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure.

3.
Information theory can be used to analyze the cost–benefit of visualization processes. However, the current measure of benefit contains an unbounded term that is neither easy to estimate nor intuitive to interpret. In this work, we propose to revise the existing cost–benefit measure by replacing the unbounded term with a bounded one. We examine a number of bounded measures, including the Jensen–Shannon divergence, its square root, and a new divergence measure formulated as part of this work. We describe the rationale for proposing the new divergence measure. In this first part of the paper, we focus on a conceptual analysis of the mathematical properties of these candidate measures. We use visualization to support the multi-criteria comparison, narrowing the search down to several options with better mathematical properties. The theoretical discourse and conceptual evaluation in this part provide the basis for the further data-driven evaluation based on synthetic and experimental case studies reported in the second part of the paper.
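The boundedness issue motivating this abstract can be illustrated numerically: the Kullback–Leibler divergence (the unbounded term in the existing measure) grows without limit as one distribution places vanishing probability where the other does not, whereas the Jensen–Shannon divergence stays below ln 2. A minimal sketch, using natural logarithms:

```python
# Sketch contrasting an unbounded divergence (Kullback-Leibler) with a
# bounded one (Jensen-Shannon), the kind of replacement this paper studies.
import math

def kl(p, q):
    """Kullback-Leibler divergence in nats; blows up as q_i -> 0 with p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence in nats; bounded above by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.999, 0.001]
q = [0.001, 0.999]
print(kl(p, q))             # already large, and unbounded as the tail shrinks
print(jsd(p, q))            # stays below ln(2) ~= 0.693
print(math.sqrt(jsd(p, q))) # the square-root variant, also bounded (and a metric)
```

For these nearly disjoint distributions the KL divergence is already close to 7 nats and grows without bound as the tail probabilities shrink, while the JSD remains capped, which is what makes bounded measures easier to interpret numerically.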


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号