Similar Articles
20 similar articles found (search time: 31 ms)
1.
Probe testing following wafer fabrication can produce extremely large amounts of data, which is often used to inspect a final product and determine whether it meets specifications. This data can be further utilized to study the effects of the wafer fabrication process on wafer quality or yield; relationships among the parameters may provide valuable process information that can improve future production. This paper compares several methods of using probe test data to determine the cause of low-yield wafers. The methods discussed include two classes of traditional multivariate statistical methods: clustering and principal component methods, and regression-based methods. These traditional methods are compared to a classification and regression tree (CART) method, and the results for each method are presented. CART adequately fits the data and provides a "recipe" for avoiding low-yield wafers; because CART is distribution-free, it makes no assumptions about the distributional properties of the data. CART is strongly recommended for analyzing wafer probe data.
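The core of the CART approach described here can be illustrated with a minimal sketch of a single Gini-impurity split: find the probe-test parameter and threshold that best separate low-yield wafers. The probe parameters and yield labels below are invented for illustration, not taken from the paper's data.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(rows, labels):
    """rows: list of feature vectors; labels: 1 = low-yield wafer.
    Returns (feature_index, threshold, weighted_impurity) of the best split."""
    n = len(rows)
    best = (None, None, float("inf"))
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [labels[i] for i in range(n) if rows[i][j] <= t]
            right = [labels[i] for i in range(n) if rows[i][j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy probe data: [leakage_current, threshold_voltage]; label 1 = low yield.
wafers = [[0.1, 0.7], [0.2, 0.71], [0.9, 0.69], [0.95, 0.72], [0.15, 0.7]]
low_yield = [0, 0, 1, 1, 0]
feature, threshold, impurity = best_split(wafers, low_yield)
```

A full CART builds a tree by applying this split search recursively to each resulting partition; the chosen thresholds along a path form the "recipe" the abstract mentions.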

2.
Several statistical methods of image reconstruction are described and objectively compared through the use of receiver-operating-characteristic (ROC) analysis based on a specified detection task performed by a human observer. The simulated imaging system is a multiple-pinhole coded-aperture system for dynamic cardiac imaging, and the objects represent cross sections of the left ventricle at end systole. The task is detection of a protrusion representing an akinetic wall segment. Thirteen different reconstruction algorithms are considered. Human observers perform the specified task on this set of reconstructions, and the results are analyzed through the use of ROC analysis. The results show that the methods that utilize the largest amount of (accurate) prior information tend to perform the best.
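The standard figure of merit in such ROC comparisons is the area under the ROC curve (AUC). A minimal sketch, computing AUC by the rank-sum (probability of correct pairwise ordering) formula; the observer ratings below are hypothetical, not from the study.

```python
def roc_auc(scores_signal, scores_noise):
    """AUC = P(rating of a signal-present case > rating of a signal-absent
    case), counting ties as 1/2."""
    wins = 0.0
    for s in scores_signal:
        for n in scores_noise:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(scores_signal) * len(scores_noise))

# Hypothetical observer ratings for images with and without the protrusion.
present = [0.9, 0.8, 0.7, 0.6]
absent = [0.65, 0.5, 0.4, 0.3]
auc = roc_auc(present, absent)
```

An AUC of 0.5 means chance performance; 1.0 means the observer separates the two classes perfectly, so reconstruction methods can be ranked by this single number.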

3.
Continuous speech recognition by statistical methods
Statistical methods useful in the automatic recognition of continuous speech are described. They concern modeling of the speaker and of the acoustic processor, extraction of the models' statistical parameters, and the hypothesis-search procedures and likelihood computations of linguistic decoding. Experimental results are presented that indicate the power of the methods.
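A minimal sketch of the kind of likelihood-based hypothesis search used in such decoders: Viterbi decoding of the most probable hidden-state sequence under a toy HMM. The states, symbols, and probabilities below are invented for illustration, not taken from the paper.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, best_state_path) for the observation sequence."""
    # V[t][s] = (prob of best path ending in state s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, cur = V[-1], {}
        for s in states:
            p, path = max(
                (prev[r][0] * trans_p[r][s], prev[r][1]) for r in states
            )
            cur[s] = (p * emit_p[s][o], path + [s])
        V.append(cur)
    return max(V[-1].values())

states = ("sil", "speech")
start_p = {"sil": 0.6, "speech": 0.4}
trans_p = {"sil": {"sil": 0.7, "speech": 0.3},
           "speech": {"sil": 0.2, "speech": 0.8}}
emit_p = {"sil": {"low": 0.9, "high": 0.1},
          "speech": {"low": 0.2, "high": 0.8}}

prob, path = viterbi(["low", "high", "high"], states, start_p, trans_p, emit_p)
```

Real recognizers apply the same recursion over phone- or word-level models with pruned search spaces, but the dynamic-programming structure is unchanged.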

4.
This paper describes a design of an electrostatic discharge (ESD) protection device that minimizes its area Ap while maintaining the breakdown voltage VESD. Hypothesis tests on measured data were performed to find the most severe applied surge condition and to select control factors for the design of experiments (DOE). Technology CAD (TCAD) was also used to estimate VESD. An optimum device structure, employing a salicide block, was found using statistical methods and TCAD.

5.
Pritchard, J.A.S. Electronics Letters, 1985, 21(25): 1183-1185
A novel approach to the uncommitted, real-time demodulation of signals using statistical classifiers is described. Preliminary results are presented together with a brief discussion of current work.

6.
Leung, K.H.; Spence, R. Electronics Letters, 1974, 10(17): 360-362
The letter reports the results of a preliminary feasibility study of an efficient method of statistical analysis for linear nonreciprocal circuits.

7.
This paper studies a CDMA channel estimation method based on first-order statistics. Expressions for the channel estimate and its variance are derived, and the estimator's performance and peak-to-average power ratio are analyzed. The algorithm places no special requirements on the channel, is computationally simple, and incurs no loss of information rate. Simulation results verify its effectiveness.

8.
The objective of the current study is to develop an automatic tool to identify microbiological data types using computer-vision and statistical modeling techniques. Bacteriophage (phage) typing methods are used to identify and extract representative profiles of bacterial types within species such as Staphylococcus aureus. Current systems rely on the subjective reading of profiles by a human expert; this process is time-consuming and prone to errors, especially as technology enables an increase in the number of phages used for typing. The statistical methodology presented in this work provides an automated, objective, and robust analysis of visual data, along with the ability to cope with increasing data volumes.

9.
Automatic recognition of digital modulation signals using statistical parameters
Automatic modulation recognition has a wide range of applications and is especially important for military software-radio reconnaissance receivers. This paper proposes a new method for identifying signal types based on statistical parameters; under the condition SNR ≥ 10 dB, it correctly recognizes five signal classes: 2ASK, M-AM, QAM, PSK, and FSK. Simulation experiments show that the computation is simple and the recognition performance is good.

10.
An improved Monte Carlo method of integration is used to evaluate the radiation fields of a large linear antenna and a large aperture antenna. When compared with the conventional numerical methods of integration, the improved Monte Carlo method saves a considerable amount of effort and time, yet it still gives reasonably accurate results.
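The contrast between plain and improved Monte Carlo integration can be sketched with a simple variance-reduction scheme, antithetic variates, on a toy integral: exp(x) over [0, 1], whose exact value is e - 1. This is a generic illustration of the idea, not the specific improvement used in the paper.

```python
import math
import random

def mc_plain(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b]."""
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

def mc_antithetic(f, a, b, n, rng):
    """Antithetic-variate estimate: pair each draw u with 1 - u, so the
    two samples are negatively correlated and noise partially cancels."""
    total = 0.0
    pairs = n // 2
    for _ in range(pairs):
        u = rng.random()
        total += f(a + (b - a) * u) + f(a + (b - a) * (1.0 - u))
    return (b - a) * total / (2 * pairs)

plain = mc_plain(math.exp, 0.0, 1.0, 10000, random.Random(0))
anti = mc_antithetic(math.exp, 0.0, 1.0, 10000, random.Random(0))
```

For a monotone integrand such as exp, the antithetic estimate has markedly lower variance at the same sample budget, which is the kind of effort saving the abstract refers to.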

11.
A multiresolution statistical method for identifying clinically normal tissue in digitized mammograms is used to construct an algorithm for separating normal regions from potentially abnormal regions; that is, small regions that may contain isolated calcifications. This is the initial phase of the development of a general method for the automatic recognition of normal mammograms. The first step is to decompose the image with a wavelet expansion that yields a sum of independent images, each containing different levels of image detail. When calcifications are present, there is strong empirical evidence that only some of the image components are necessary for the purpose of detecting a deviation from normal. The underlying statistic for each of the selected expansion components can be modeled with a simple parametric probability distribution function. This function serves as an instrument for the development of a statistical test that allows for the recognition of normal tissue regions. The distribution function depends on only one parameter, and this parameter itself has an underlying statistical distribution. The values of this parameter define a summary statistic that can be used to set detection error rates. Once the summary statistic is determined, spatial filters that are matched to resolution are applied independently to each selected expansion image. Regions of the image that correlate with the normal statistical model are discarded and regions in disagreement (suspicious areas) are flagged. These results are combined to produce a detection output image consisting only of suspicious areas. This type of detection output is amenable to further processing that may ultimately lead to a fully automated algorithm for the identification of normal mammograms.
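The wavelet decomposition step can be sketched with the simplest case, a one-level Haar analysis step that splits a signal into a coarse approximation and a detail component; a full multiresolution expansion applies several such levels (and, for images, does so along both axes). The Haar filter is used here only as a stand-in for whatever wavelet the method employs.

```python
def haar_step(signal):
    """One Haar analysis step; len(signal) must be even.
    Returns (approximation, detail) at half the resolution."""
    approx = [(a + b) / 2 ** 0.5 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 ** 0.5 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# A flat stretch produces zero detail; a jump shows up in the detail band,
# which is why fine-scale components highlight isolated calcifications.
approx, detail = haar_step([4.0, 4.0, 6.0, 2.0])
```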

12.
The use of three estimation methods was investigated for mapping forest volume over a complex Mediterranean region (Tuscany, central Italy). The first two methods were based on the processing of satellite images, specifically a summer Landsat Thematic Mapper scene, from which information about forest volume was extracted through a nonparametric approach [k-nearest neighbor (k-NN)] and by means of locally calibrated regressions. The third method, kriging, instead used only the spatial autocorrelation of tree volume, relying on geostatistical principles. The experiments performed demonstrated that, at the original sampling density, the three methods produced nearly equivalent accuracies. This was no longer the case when the sampling density was reduced to various levels: whereas this reduction only marginally affected the performance of the two remote-sensing-based methods, it dramatically degraded that of kriging. Additionally, the investigation showed that per-pixel estimates of error variance could also be obtained by the k-NN and locally calibrated regression procedures, in analogy with the same property of kriging. These estimated error variances were used to optimally integrate the outputs of the methods based on remotely sensed data and spatial autocorrelation. In all cases, the integrated estimate outperformed the single procedures. These results are relevant to the development of an operational strategy for mapping forest attributes in complex Mediterranean areas.
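The k-NN estimator used in such mapping can be sketched as follows: each target pixel receives the distance-weighted mean volume of its k spectrally nearest reference plots. The spectral features, volumes, and weighting scheme below are invented for illustration, not the paper's calibration.

```python
def knn_estimate(target, plots, volumes, k=3):
    """plots: spectral feature vectors of field plots with known volumes.
    Returns the inverse-distance-weighted mean volume of the k nearest."""
    dists = [
        (sum((a - b) ** 2 for a, b in zip(target, p)) ** 0.5, v)
        for p, v in zip(plots, volumes)
    ]
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    # Small epsilon avoids division by zero for an exact spectral match.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

# Toy reference plots: two spectral bands per plot, volume in m^3/ha.
plots = [[0.1, 0.2], [0.12, 0.22], [0.5, 0.6], [0.52, 0.58]]
volumes = [100.0, 110.0, 300.0, 320.0]
est = knn_estimate([0.11, 0.21], plots, volumes, k=2)
```

A per-pixel error variance, as used for the integration step in the abstract, can be obtained analogously from the spread of the k neighbors' volumes around this estimate.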

13.
A new correlation data analysis method is presented, based on complementing the classical probability distribution with a quantum state and its Schmidt decomposition. It is shown that the mathematical methods of quantum mechanics allow the development of new, effective tools for the analysis of statistical dependences and relationships. The formalism presented is a natural approach for the analysis of both classical and quantum correlations. Algorithms for calculating partial and multiple correlation coefficients using Schmidt numbers were studied, and numerical estimates of these correlation coefficients were calculated for different probability distributions and quantum states.

14.
Objective: To reflect hospital medical quality and efficiency, and to strengthen hospital management, through single-disease quality analysis. Methods: The spectrum of first discharge diagnoses (ICD-9 subcategory classification) for a full year was retrieved from the medical-record front-page database, and three medical indicators (case counts, costs, and average length of stay) were extracted for the top 20 diseases. Results: For the top 20 diseases, case counts and medical costs increased while the average length of stay decreased. Conclusion: Single-disease quality directly affects hospital medical efficiency and quality.

15.
For the recognition of patterns as members of certain classes, it is assumed that the probability distributions of certain characteristic features are known. For a decision based on the maximum likelihood criterion, bounds on the statistical error are derived. In the case of normally distributed and statistically independent features these bounds are evaluated. Conditions are given under which the error tends to zero as the number of characteristic features goes to infinity.
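The vanishing-error behaviour can be illustrated in the simplest Gaussian case (this worked example is ours, not the paper's): for two equally likely classes whose d features are independent N(+mu, 1) versus N(-mu, 1), the maximum-likelihood rule decides by the sign of the feature sum, and its error probability Phi(-mu*sqrt(d)) tends to zero as d grows.

```python
import math

def ml_error(mu, d):
    """Exact error probability of the ML rule for d i.i.d. Gaussian
    features with means +/-mu and unit variance: Phi(-mu*sqrt(d))."""
    z = mu * math.sqrt(d)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Error shrinks monotonically as more independent features are used.
errors = [ml_error(0.3, d) for d in (1, 4, 16, 64)]
```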

16.
Targeted prostate biopsy using statistical image analysis
In this paper, a method for maximizing the probability of prostate cancer detection via biopsy is presented, combining image analysis and optimization techniques. The method consists of three major steps. First, a statistical atlas of the spatial distribution of prostate cancer is constructed from histological images obtained from radical prostatectomy specimens. Second, a probabilistic optimization framework is employed to optimize the biopsy strategy so that the probability of cancer detection is maximized under needle placement uncertainties. Finally, the optimized biopsy strategy generated in the atlas space is mapped to a specific patient space using an automated segmentation and elastic registration method. Cross-validation experiments showed that the predictive power of the optimized biopsy strategy for cancer detection reached the 94%-96% level for 6-7 biopsy cores, significantly better than standard random-systematic biopsy protocols, thereby encouraging further investigation of optimized biopsy strategies in prospective clinical studies.
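One simple way to sketch the optimization step is greedy core selection over a binary atlas: choose, one core at a time, the site that detects the most still-undetected atlas cases. The 0/1 atlas below is invented (atlas[case][site] = 1 if a core at that site would hit cancer in that case), and the paper's actual framework also models needle placement uncertainty, which this sketch omits.

```python
def greedy_cores(atlas, n_cores):
    """Greedily pick core sites maximizing the fraction of atlas cases
    detected by at least one core. Returns (chosen_sites, detection_rate)."""
    n_sites = len(atlas[0])
    chosen, undetected = [], set(range(len(atlas)))
    for _ in range(n_cores):
        best = max(
            (s for s in range(n_sites) if s not in chosen),
            key=lambda s: sum(atlas[c][s] for c in undetected),
        )
        chosen.append(best)
        undetected = {c for c in undetected if atlas[c][best] == 0}
    detection_rate = 1.0 - len(undetected) / len(atlas)
    return chosen, detection_rate

atlas = [
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
]
cores, rate = greedy_cores(atlas, 2)
```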

17.
The statistical analysis of space-time point processes
A space-time point process is a stochastic process having as realizations points with random coordinates in both space and time. We define a general class of space-time point processes which we term analytic. These are point processes that have only finite numbers of points in finite time intervals, absolutely continuous joint-occurrence distributions, and for which points do not occur with certainty in finite time intervals. Analytic point processes possess an intensity determined by the past of the point process. As a class, analytic point processes remain closed under randomization by a parameter. The problem we consider is that of estimating a random parameter of an observed space-time point process. This parameter may be drawn from a function space and can, therefore, model a random variable, random process, or random field that influences the space-time point process. Feedback interactions between the point process and the randomizing parameter are included. The conditional probability measure of the parameter given the observed space-time point process is a sufficient statistic for forming estimates satisfying a wide variety of performance criteria. A general representation for this conditional measure is developed, and applications to filtering, smoothing, prediction, and hypothesis testing are given.

18.
The use of continuation methods in the computer-aided analysis of electronic circuits is surveyed. Such methods are especially suitable when one desires to compute the solutions to a family of circuit analysis problems as a function of a continuous parameter. Applications of the concept to the location of multiple solutions to nonlinear equations, the computation of input-output characteristics for nonlinear networks, large-change sensitivity analysis, and the computation of multivariable Nyquist plots are reviewed.
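The basic idea can be sketched on a scalar example: sweep a source value lam and track the operating point of the toy circuit equation f(x, lam) = x^3 + x - lam = 0, using the solution at the previous parameter value as the Newton starting point for the next. The equation is invented for illustration; real applications trace whole networks of nonlinear equations the same way.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(lam_values):
    """Trace the solution curve of x**3 + x - lam = 0 as lam varies,
    warm-starting Newton from the previous solution (the corrector step)."""
    curve = []
    x = 0.0  # known solution at lam = 0
    for lam in lam_values:
        x = newton(lambda v: v**3 + v - lam, lambda v: 3 * v**2 + 1, x)
        curve.append((lam, x))
    return curve

curve = continuation([0.0, 0.5, 1.0, 1.5, 2.0])
```

The warm start is what makes the family of problems cheap to solve: each Newton run begins close to its root, which is exactly the setting the survey describes for input-output characteristics and sensitivity sweeps.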

19.
The authors focus on the use of neural networks to approximate continuous decision functions. In this context, the parameters to be estimated are the synaptic weights of the network. The number of such parameters and the quantity of data (information) available for training greatly influence the quality of the solution obtained; a previous study analysed the influence and interaction of these two features. In order to reach the network architecture that best fits the training data, two original pruning techniques are proposed. The evolution of the neural network's performance (training and test rates) as the number of pruned synaptic weights increases is shown experimentally. Two kinds of synaptic weights become apparent: irrelevant synaptic weights, which can be removed from the model, and relevant synaptic weights, which cannot. In the test problem, it is possible to reduce the size of the network by up to 42%, and a 4% improvement in generalisation performance is observed.

20.
To evaluate university foreign-language classroom teaching more objectively, a teaching-quality evaluation model was built using principal component analysis and factor analysis from multivariate statistics. A composite evaluation function was constructed with each principal factor's variance contribution rate as its weight, and teachers' classroom teaching quality was comprehensively scored and ranked. The method was validated on teaching-quality evaluation data for eight foreign-language teachers at a university and compared with their original ranking; the results show that the multivariate statistical approach evaluates foreign-language classroom teaching more objectively and reasonably.
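The composite-evaluation step described above can be sketched as a weighted sum of factor scores, with weights proportional to each factor's variance contribution rate. The factor scores, rates, and teacher names below are invented for illustration; the factor scores themselves would come from the preceding factor analysis.

```python
def composite_scores(factor_scores, contribution_rates):
    """factor_scores: {teacher: [score on factor 1, factor 2, ...]}.
    Weights are the variance contribution rates, normalized to sum to 1."""
    total = sum(contribution_rates)
    weights = [r / total for r in contribution_rates]
    return {
        t: sum(w * s for w, s in zip(weights, scores))
        for t, scores in factor_scores.items()
    }

rates = [0.55, 0.25, 0.10]  # variance explained by the three main factors
scores = {
    "Teacher A": [1.2, -0.3, 0.5],
    "Teacher B": [0.4, 1.1, -0.2],
    "Teacher C": [-0.9, 0.2, 0.8],
}
composite = composite_scores(scores, rates)
ranking = sorted(composite, key=composite.get, reverse=True)
```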


Copyright©北京勤云科技发展有限公司  京ICP备09084417号