Similar Articles
18 similar articles found (search time: 125 ms)
1.
Application of correspondence analysis to malignant tumor mortality data   Total citations: 6 (self-citations: 2, others: 4)
Correspondence analysis was applied to 1996 data on the major malignant tumors and their regional distribution in Fujian Province. Using the factor loading coefficients, the sample points and the variable points are plotted on a single factor loading map, so that the cancer-spectrum characteristics of the regions can be read off directly. Correspondence analysis has clear advantages for studying the relationships between samples and variables.
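Since the abstract describes the method only verbally, here is a minimal sketch, under standard definitions, of the computation behind a correspondence-analysis factor map: the SVD of the standardized residuals of a contingency table yields row (region) and column (tumor-site) coordinates that can be plotted on one map. All counts below are invented placeholders, not the Fujian data.

```python
import numpy as np

# Toy contingency table: rows = regions, columns = tumor sites
# (illustrative numbers only).
N = np.array([[120.0,  80.0,  40.0],
              [ 60.0, 100.0,  30.0],
              [ 30.0,  40.0,  90.0]])

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Standardized residuals: S = D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates: rows F = D_r^{-1/2} U Sigma,
#                        columns G = D_c^{-1/2} V Sigma
F = U * sv / np.sqrt(r)[:, None]
G = Vt.T * sv / np.sqrt(c)[:, None]

# Plotting F[:, :2] and G[:, :2] on one map gives the joint display
# ("factor loading plot") described in the abstract.
print("row coordinates:\n", F[:, :2])
print("column coordinates:\n", G[:, :2])
```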

2.
Correspondence analysis was applied to mortality data on seven malignant tumors in 67 counties and cities of Fujian Province, in order to identify the relationships between tumors at different sites and geographic regions, and to reveal the regional distribution of high mortality for each tumor site.

3.
Fast building energy calculation with the BIN method: a symmetric-frequency correction approach   Total citations: 2 (self-citations: 0, others: 0)
A faster statistical algorithm is proposed for BIN-method energy calculation. The BIN method expresses the load as a linear function of temperature, so the superposition principle applies: the energy of the constant term is simply the constant b multiplied by the total frequency, and the energy of the proportional term is the energy under a symmetric frequency distribution multiplied by a correction coefficient. Actual temperature frequencies, symmetric-distribution frequencies, and correction coefficients are tabulated for different temperature ranges. Instead of evaluating the load bin by bin, multiplying by each bin's frequency, and summing, the algorithm needs only a few simple operations, which greatly speeds up the energy calculation.
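A minimal sketch of the shortcut the abstract describes, assuming the load model q(T) = aT + b and hourly temperature-bin frequencies; the bins, the coefficients, and the way the correction coefficient is obtained here are invented for illustration (the paper tabulates the actual coefficients).

```python
import numpy as np

# Invented BIN data: bin-center temperatures (deg C) and hourly frequencies.
T = np.array([-5.0, 0.0, 5.0, 10.0, 15.0, 20.0])
n = np.array([ 40., 120., 300., 500., 300., 140.])   # actual frequencies

a, b = -1.8, 45.0          # load model q(T) = a*T + b (illustrative)

# Baseline: bin-by-bin evaluation, the slow path the abstract avoids.
E_direct = np.sum((a * T + b) * n)

# Shortcut: the constant term contributes b * total hours; the proportional
# term equals its value under a symmetric frequency distribution times a
# correction coefficient k (here k is back-computed purely to make the toy
# example self-consistent; the paper tabulates k per temperature range).
n_sym = np.full_like(n, n.sum() / n.size)            # symmetric stand-in
E_prop_sym = np.sum(a * T * n_sym)
k = np.sum(a * T * n) / E_prop_sym                   # correction coefficient
E_fast = b * n.sum() + k * E_prop_sym

print(E_direct, E_fast)   # identical by construction
```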

4.
Matrix theory is used to analyze the synergy effect of linear irreversible processes. By analogy with vector spaces, an inner product and a synergy coefficient are defined on the space of thermodynamic fluxes; the magnitude of the synergy coefficient reflects the degree of synergy between two irreversible processes. A synergy matrix and a synergy-coefficient matrix are derived from the matrix of phenomenological coefficients. For irreversible heat conduction, the quadratic form associated with the synergy matrix is the dissipation function. For an isolated system, the time derivative of this quadratic form is proved to be negative, so the form can serve as a Lyapunov function of the system.
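The abstract does not spell out its definitions, so the sketch below assumes the natural choices: the inner product on the flux space is the one induced by a symmetric phenomenological matrix L (Onsager reciprocity), and the synergy coefficient is the resulting cosine between two fluxes. All numbers are illustrative.

```python
import numpy as np

# Symmetric, positive-definite phenomenological matrix (illustrative).
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def inner(J1, J2, L=L):
    """Assumed inner product on the flux space, induced by L."""
    return J1 @ L @ J2

def synergy(J1, J2, L=L):
    """Assumed synergy coefficient: cosine of the L-induced angle."""
    return inner(J1, J2) / np.sqrt(inner(J1, J1) * inner(J2, J2))

J1 = np.array([1.0, 0.2])
J2 = np.array([0.3, 1.0])
print("synergy coefficient:", synergy(J1, J2))

# The quadratic form J^T L J plays the role of the dissipation function;
# positive definiteness of L makes it nonnegative, consistent with the
# Lyapunov-function argument in the abstract.
print("dissipation:", inner(J1, J1), "eigvals of L:", np.linalg.eigvalsh(L))
```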

5.
A short-term load forecasting model incorporating meteorological factors   Total citations: 1 (self-citations: 0, others: 1)
Short-term load forecasting predicts the load of each period from one day to several days ahead and is an important part of power-system load forecasting. To remedy the shortcomings of traditional neural-network models for this task, the strategy for analyzing and organizing data from multiple angles is improved: days close in date from different years are selected as similar days, missing data are filled in with a least-squares support vector machine, and a clustering algorithm forecasts the short-term load of the similar days; a grey relational algorithm then builds the meteorological factors into the forecasting model. A case study shows that building a mathematical model suited to the load data, analyzing and forecasting the load with it, and correcting the model with meteorological factors yields more accurate load forecasts.
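As a hedged illustration of the similar-day step only (not the full LS-SVM and clustering pipeline), the sketch below ranks historical days by a weighted, normalized distance on weather features; the features and weights are invented.

```python
import numpy as np

# Invented weather features per historical day: [max temp, min temp, humidity].
history = np.array([[31.0, 24.0, 0.70],
                    [28.0, 21.0, 0.55],
                    [30.5, 23.5, 0.68],
                    [22.0, 15.0, 0.40]])
target = np.array([30.0, 23.0, 0.65])     # forecast day's weather

w = np.array([0.5, 0.3, 0.2])             # assumed feature weights

# Normalize each feature to [0, 1] so the weighted distance is scale-free.
lo, hi = history.min(axis=0), history.max(axis=0)
H = (history - lo) / (hi - lo)
t = (target - lo) / (hi - lo)

dist = np.sqrt(((H - t) ** 2 * w).sum(axis=1))
order = np.argsort(dist)
print("similar days, most similar first:", order)
```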

6.
For the Benzécri correspondence analysis method popular abroad, a variable-type data matrix is used to show that the method substantially alters the characteristics of the data matrix and fails to achieve the purpose of correspondence analysis, so it cannot solve the problem. The factor dual-information plot is used instead. By comparison, the factor dual-information plot gives an excellent graphical display of the relationships in the data matrix between variables, between samples, and between samples and variables, achieving the goal of correspondence analysis directly and simply; it is better suited to correspondence analysis of variable-type data matrices.

7.
This paper studies the tensor-response regression model and the least-squares estimation of its coefficient tensor. To improve the estimation accuracy, the coefficient tensor is first given a CP decomposition and a Tucker decomposition, yielding two new tensor-response regression models. The two models not only capture the spatial structure inside the tensor data but also greatly reduce the number of parameters to be estimated. Parameter-estimation algorithms for the two models are then given. Finally, Monte Carlo experiments show that both improved models estimate the coefficient tensor significantly more accurately, and the Tucker-based model is the most accurate.
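The claim that CP and Tucker structure greatly reduce the number of parameters can be made concrete with a quick count; the dimensions and ranks below are arbitrary examples, not those of the paper's experiments.

```python
import math

def n_full(dims):
    """Parameters in an unstructured coefficient tensor."""
    return math.prod(dims)

def n_cp(dims, R):
    """CP decomposition: R rank-one components, one factor per mode."""
    return R * sum(dims)

def n_tucker(dims, ranks):
    """Tucker decomposition: core tensor plus one factor matrix per mode."""
    return math.prod(ranks) + sum(d * r for d, r in zip(dims, ranks))

dims = (30, 30, 30)               # e.g. a 30x30x30 coefficient tensor
print(n_full(dims))               # 27000
print(n_cp(dims, R=5))            # 450
print(n_tucker(dims, (5, 5, 5)))  # 575
```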

8.
On the secondary-flow term coefficient in overbank flow   Total citations: 1 (self-citations: 0, others: 0)
Based on the SKM method, a secondary-flow term coefficient is introduced and a two-dimensional analytical solution is given for the lateral distribution of depth-averaged velocity in overbank flow. The SERC-FCF series of experiments is simulated, and the computed results agree well with the measurements. On this basis, the influence of the cross-section geometry of compound channels on the secondary-flow term coefficient is studied further and the causes of the various effects are analyzed. The results show that the magnitude of the coefficient depends on the cross-section geometry, while its sign depends on the direction of the secondary flow, which provides a reference for selecting the coefficient.
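For reference, the depth-averaged momentum balance solved by the SKM method is usually written as follows (standard notation from the SKM literature, not copied from this paper); Γ on the right-hand side is the secondary-flow term whose coefficient the paper studies.

```latex
% H: water depth, S_0: bed slope, f: friction factor, s: side slope,
% \lambda: dimensionless eddy viscosity, U_d: depth-averaged velocity.
\rho g H S_0
- \frac{f}{8}\,\rho\,U_d^{2}\sqrt{1+\frac{1}{s^{2}}}
+ \frac{\partial}{\partial y}\!\left[\rho \lambda H^{2}
      \left(\frac{f}{8}\right)^{\!1/2} U_d \frac{\partial U_d}{\partial y}\right]
= \Gamma ,
\qquad
\Gamma = \frac{\partial}{\partial y}\bigl[H(\rho \overline{U}\,\overline{V})_d\bigr]
```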

9.
Based on structural equation modeling (SEM) theory, first-order and second-order confirmatory factor analyses of the measurement indicators of urban modernization are carried out on 2008 statistical data, yielding a structural equation model. Weights are then constructed from the indicators' factor loadings and path coefficients to build a composite evaluation model of the level of urban modernization. A relative evaluation of the modernization of major Chinese cities (municipalities and principal provincial capitals) gives satisfactory results.
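A hedged sketch of the weighting step only: factor loadings (or path coefficients) are normalized into weights and cities are scored by a weighted sum. The loadings and indicator values are invented.

```python
import numpy as np

# Invented standardized loadings of 4 indicators on their factor.
loadings = np.array([0.82, 0.74, 0.66, 0.58])
weights = loadings / loadings.sum()        # normalize to sum to 1

# Invented standardized indicator values for 3 cities (rows).
X = np.array([[ 1.2,  0.8,  0.5,  0.9],
              [ 0.3, -0.1,  0.6,  0.2],
              [-0.8, -0.5, -0.9, -0.4]])

scores = X @ weights                       # composite modernization score
print(np.argsort(-scores))                 # ranking, best first
```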

10.
Starting from a known series and using results on the sine integral and the Clausen function, the integral/term-splitting method yields binomial-coefficient series whose denominators contain one squared factor, or a squared factor multiplied by one, two, or three linear factors. The sums of these binomial-coefficient series are given in functional form, and identities are also given for numerical binomial-coefficient series whose denominators contain an odd squared factor.
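For orientation, one classical member of this family of series, with denominator equal to n squared times the central binomial coefficient, is the well-known identity below; the paper generalizes sums of this shape to denominators with additional linear factors.

```latex
\sum_{n=1}^{\infty} \frac{1}{n^{2}\binom{2n}{n}} = \frac{\pi^{2}}{18}
```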

11.
Production planning problems play a vital role in the supply chain management area, by which decision makers can determine the production loading plan—consisting of the quantity of production and the workforce level at each production plant—to fulfil market demand. This paper addresses the production planning problem with additional constraints, such as production plant preference selection. To deal with the uncertain demand data, a stochastic programming approach is proposed to determine optimal medium-term production loading plans under an uncertain environment. A set of data from a multinational lingerie company in Hong Kong is used to demonstrate the robustness and effectiveness of the proposed model. An analysis of the probability distribution of economic demand assumptions is performed. The impact of unit shortage costs on the total cost is also analysed.
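A minimal scenario-expansion sketch of the kind of two-stage stochastic program described here: choose a production quantity before demand is known and pay a unit shortage cost in each demand scenario. All costs, capacities, and scenarios are invented, and the paper's full model additionally covers workforce levels and plant-preference constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage model (all numbers invented).
c_prod, c_short, cap = 2.0, 6.0, 150.0
demand = np.array([ 80.0, 110.0, 140.0])   # demand scenarios
prob   = np.array([ 0.3,   0.5,   0.2])    # scenario probabilities

# Variables: [x, s1, s2, s3]; minimize c_prod*x + sum_k prob_k*c_short*s_k.
c = np.concatenate(([c_prod], prob * c_short))

# Shortage definition s_k >= d_k - x  <=>  -x - s_k <= -d_k.
A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])
b_ub = -demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, cap)] + [(0, None)] * 3)
print("optimal production:", res.x[0], "expected cost:", res.fun)
```

With these numbers the optimum produces for the middle scenario (110 units), since covering the last 30 units of the high-demand scenario would cost more than the expected shortage penalty it avoids, which is exactly the shortage-cost trade-off the paper analyses.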

12.
Correspondence analysis, a data analytic technique used to study two‐way cross‐classifications, is applied to social relational data. Such data are frequently termed “sociometric” or “network” data. The method allows one to model forms of relational data and types of empirical relationships not easily analyzed using either standard social network methods or common scaling or clustering techniques. In particular, correspondence analysis allows one to model:

—two‐mode networks (rows and columns of a sociomatrix refer to different objects)

—valued relations (e.g. counts, ratings, or frequencies).

In general, the technique provides scale values for row and column units, visual presentation of relationships among rows and columns, and criteria for assessing "dimensionality" or graphical complexity of the data and goodness‐of‐fit to particular models. Correspondence analysis has recently been the subject of research by Goodman, Haberman, and Gilula, who have termed their approach to the problem "canonical analysis" to reflect its similarity to canonical correlation analysis of continuous multivariate data. This generalization links the technique to more standard categorical data analysis models, and provides a much‐needed statistical justification.

We review both correspondence and canonical analysis, and present these ideas by analyzing relational data on the 1980 monetary donations from corporations to nonprofit organizations in the Minneapolis-St. Paul metropolitan area. We also show how these techniques are related to dyadic independence models, first introduced by Holland, Leinhardt, Fienberg, and Wasserman in the early 1980s. The highlight of this paper is the relationship between correspondence and canonical analysis, and these dyadic independence models, which are designed specifically for relational data. The paper concludes with a discussion of this relationship, and some data analyses that illustrate the fact that correspondence analysis models can be used as approximate dyadic independence models.

13.
To interpret the biplot, it is necessary to know which points—usually variables—are the ones that are important contributors to the solution, especially when there are many variables involved. This information can be calculated separately as part of the biplot's numerical results, but this means that a table has to be consulted along with the graphical display. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic information directly into the display itself, showing visually the important contributors and thus facilitating the biplot interpretation and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis, and various forms of discriminant analysis, and, in fact, to any method based on dimension reduction through the singular value decomposition. In the contribution biplot, one set of points, usually the rows of a data matrix, optimally represents the spatial positions of the cases or sample units, according to an appropriate distance measure. The other set of points, usually the columns of the data matrix, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that often only one common scale for the row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, the contribution biplot also solves the problem in correspondence analysis and log-ratio analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution. This article has supplementary materials online.
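A hedged sketch of the proposed scaling for the correspondence-analysis case: rows are shown in principal coordinates, while columns are shown in standard coordinates shrunk by the square root of their masses, so squared column coordinates are exactly the contributions to each axis. The table is invented.

```python
import numpy as np

# Toy table; rows = cases, columns = variables (invented counts).
N = np.array([[55.0, 12.0,  3.0],
              [20.0, 30.0, 10.0],
              [ 5.0,  8.0, 57.0]])

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

F = U * sv / np.sqrt(r)[:, None]     # rows: principal coordinates
G = Vt.T                             # columns: contribution coordinates
                                     # (= sqrt(mass) * standard coordinates)

# Squared contribution coordinates sum to 1 per axis, so large entries
# flag the variables that actually drive each dimension.
print("column contributions to axis 1:", G[:, 0] ** 2)
print("sum:", (G[:, 0] ** 2).sum())  # 1.0
```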

14.
Multiple taxicab correspondence analysis   Total citations: 1 (self-citations: 0, others: 0)
We compare the statistical analysis of multidimensional contingency tables by multiple correspondence analysis (MCA) and multiple taxicab correspondence analysis (MTCA). We show in this paper: First, MTCA and MCA can produce different results. Second, taxicab correspondence analysis of a Burt table is equivalent to centroid correspondence analysis of the indicator matrix. Third, along the first principal axis, the projected response patterns in MTCA will be clustered, and the number of cluster points is less than or equal to one plus the number of variables. Fourth, visual maps produced by MTCA seem to be clearer and more readable in the presence of rarely occurring categories of the variables than the graphical displays produced by MCA. Two well-known data sets are analyzed.

15.
We propose a new procedure for sparse factor analysis (FA) such that each variable loads only one common factor. Thus, the loading matrix has a single nonzero element in each row and zeros elsewhere. Such a loading matrix is the sparsest possible for a given number of variables and common factors. For this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all model parts of FA (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices and their least squares function is minimized through specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using QR decomposition in order to efficiently estimate factor correlations. A simulation study shows that the proposed procedure can exactly identify the true sparsest models. Real data examples demonstrate the usefulness of the variable clustering performed by SSFA.

16.
Malignant-tumor incidence and mortality data and pollutant data for 2003-2012 were collected for three regions: the whole country, urban areas, and rural areas. Grey relational analysis was used to compute the comprehensive relational grade between each region and each pollutant, and the latency of pollutant-induced malignant-tumor death was analyzed quantitatively. The results show: 1) ammonia-nitrogen emissions and sulfur dioxide have the greatest influence on malignant-tumor incidence and mortality in all three regions; 2) the relational grade between pollutants and tumor incidence does not depend on region, whereas the grade between pollutants and tumor mortality is markedly higher in urban areas than in rural areas, and markedly higher for men than for women; 3) the latencies of ammonia-nitrogen-induced and sulfur-dioxide-induced tumor death are 2 years and 1 year, respectively.
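A minimal sketch of the grey relational grade (with the customary resolution coefficient ρ = 0.5) together with a lag search of the kind that yields the latency estimates reported above; both annual series are invented.

```python
import numpy as np

def grey_grade(ref, cmp_, rho=0.5):
    """Grey relational grade between a reference and a comparison series."""
    ref = ref / ref.mean()            # mean normalization (one common choice)
    cmp_ = cmp_ / cmp_.mean()
    d = np.abs(ref - cmp_)
    xi = (d.min() + rho * d.max()) / (d + rho * d.max())
    return xi.mean()

# Invented annual series, 2003-2012: pollutant emissions and tumor mortality.
pollutant = np.array([4.0, 4.3, 4.8, 5.1, 5.6, 5.9, 6.3, 6.6, 7.0, 7.2])
mortality = np.array([1.1, 1.2, 1.2, 1.3, 1.5, 1.6, 1.7, 1.9, 2.0, 2.1])

# Latency estimate: shift the pollutant series by each candidate lag and
# keep the lag with the highest relational grade.
for lag in range(4):
    g = grey_grade(mortality[lag:], pollutant[:len(pollutant) - lag])
    print(f"lag {lag} years: grade {g:.3f}")
```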

17.
In this paper mathematical methods for fuzzy stochastic analysis in engineering applications are presented. Fuzzy stochastic analysis maps uncertain input data in the form of fuzzy random variables onto fuzzy random result variables. The operator of the mapping can be any desired deterministic algorithm, e.g. the dynamic analysis of structures. Two different approaches for processing the fuzzy random input data are discussed. For these purposes two types of fuzzy probability distribution functions for describing fuzzy random variables are introduced. On the basis of these two types of fuzzy probability distribution functions two appropriate algorithms for fuzzy stochastic analysis are developed. Both algorithms are demonstrated and compared by way of an example.
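A hedged sketch of one common way to process fuzzy random input, combining α-level discretization of a fuzzy parameter with Monte Carlo sampling; the triangular fuzzy mean, the distribution, and the mapping are all invented stand-ins, and evaluating only the interval endpoints is valid here only because the toy mapping is monotone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fuzzy random input: X ~ Normal(mu, 1) where mu is a triangular fuzzy
# number (4, 5, 6). At membership level alpha its interval is:
def mu_interval(alpha, lo=4.0, mode=5.0, hi=6.0):
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

def mapping(x):
    """Stand-in for the deterministic analysis (e.g. a structural model)."""
    return x ** 2 + 1.0

# For each alpha level, bound a statistic of the fuzzy random result by
# propagating the endpoints of the fuzzy parameter's interval.
for alpha in (0.0, 0.5, 1.0):
    lo, hi = mu_interval(alpha)
    stats = []
    for mu in (lo, hi):
        x = rng.normal(mu, 1.0, size=20000)
        stats.append(mapping(x).mean())
    print(f"alpha={alpha}: E[g(X)] in [{min(stats):.2f}, {max(stats):.2f}]")
```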

18.
A variable selection method using global score estimation is proposed, which is applicable as a selection criterion in any multivariate method without external variables such as principal component analysis, factor analysis and correspondence analysis. This method selects a subset of variables by which we approximate the original global scores as much as possible in the context of least squares, where the global scores, e.g. principal component scores, factor scores and individual scores, are computed based on the selected variables. Global scores are usually orthogonal. Therefore, the estimated global scores should be restricted to being mutually orthogonal. According to how to satisfy that restriction, we propose three computational steps to estimate the scores. Example data is analyzed to demonstrate the performance and usefulness of the proposed method, in which the proposed algorithm is evaluated and the results obtained using four cost-saving selection procedures are compared. This example shows that combining these steps and procedures yields more accurate results quickly.
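A simplified stand-in for the proposed procedure (not its three estimation steps): greedy backward elimination that keeps the variable subset whose recomputed principal-component scores, after an orthogonal Procrustes rotation to respect the orthogonality restriction, best reproduce the original global scores in least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)   # a nearly redundant variable

def pc_scores(X, k=2):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k]

F = pc_scores(X)                       # target global scores

def loss(subset):
    """Residual after orthogonally rotating subset scores onto F."""
    G = pc_scores(X[:, subset])
    U, _, Vt = np.linalg.svd(G.T @ F)  # orthogonal Procrustes rotation
    return np.sum((G @ (U @ Vt) - F) ** 2)

# Greedy backward elimination: drop the variable whose removal hurts least.
subset = list(range(X.shape[1]))
while len(subset) > 3:
    cand = min(subset, key=lambda j: loss([v for v in subset if v != j]))
    subset.remove(cand)
print("selected variables:", subset, "loss:", round(loss(subset), 3))
```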
