Similar Literature
1.
A hybrid of FCM and PCM can overcome the drawbacks each algorithm has when used alone and markedly improves clustering quality, but for samples whose features are not distinct this hybrid model still clusters poorly. To overcome this weakness, this paper introduces a Mercer kernel and proposes a new kernel-based hybrid c-means clustering model (KIPCM); the kernel function makes data points that are inseparable in the original space separable in the kernel space. Numerical experiments give reasonable cluster centers and a high rate of correct classification, confirming the feasibility and effectiveness of the proposed algorithm.
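As a hedged illustration of the kernel trick mentioned above (not necessarily the exact KIPCM formulation), the squared distance between a sample and a center in the kernel-induced feature space can be expressed entirely through the kernel function, and with a Gaussian kernel it is bounded:

```latex
% Kernel-induced squared distance, written purely in terms of the kernel K
\|\Phi(x_j) - \Phi(v_i)\|^2 = K(x_j, x_j) - 2K(x_j, v_i) + K(v_i, v_i)
% For a Gaussian kernel K(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2) we have K(x, x) = 1, hence
\|\Phi(x_j) - \Phi(v_i)\|^2 = 2\,(1 - K(x_j, v_i))
```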

2.
Adaptive Constrained Fuzzy C-Means Clustering Algorithm
To address two problems shared by the classical c-means and fuzzy C-means clustering algorithms, namely excessive dependence on the initial cluster centers and the need to know the true number of clusters in advance, a new algorithm is proposed on the basis of fuzzy C-means: the adaptive constrained fuzzy C-means (ACFCM) clustering algorithm. It adds a penalty term to the fuzzy C-means objective function so that both problems are resolved, and simulation experiments confirm the feasibility and effectiveness of the new algorithm.
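A minimal sketch of what "adding a penalty term to the FCM objective" can look like; this is a generic regularized form with an unspecified penalty P and strength gamma, not the specific ACFCM penalty:

```latex
% Standard FCM objective plus a generic penalty term (gamma controls its strength);
% the concrete penalty used by ACFCM is not reproduced here.
J_m(U, V) = \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{\,m}\,\|x_j - v_i\|^2 \;+\; \gamma\, P(U, V),
\qquad \text{s.t. } \sum_{i=1}^{c} u_{ij} = 1,\; u_{ij} \in [0, 1]
```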

3.
A Robust Clustering Method
This paper discusses a new clustering method, attribute-means clustering. Theoretical analysis shows that attribute-means clustering is more robust than fuzzy c-means clustering, and numerical experiments demonstrate the method's effectiveness.

4.
Because evaluation indicators differ in importance and contain redundancy, an effective and simple algorithm for computing attribute significance and attribute reduction is proposed based on the rough-set discernibility matrix; the sample information is reduced and the weight of each remaining indicator is computed. To limit the information loss that may arise when continuous attribute values are discretized, the fuzzy C-means clustering algorithm is used for the discretization. Finally, an air-combat effectiveness evaluation model based on rough sets and fuzzy C-means clustering is established, and a worked example verifies its feasibility and effectiveness.

5.
To overcome two drawbacks of the traditional k-means clustering algorithm, namely that the number of clusters must be known in advance and that the initial cluster centers are hard to determine, a bilevel programming model over the cluster centers and the number of clusters k is established. Particle swarm optimization is used to determine the cluster centers, and the optimal number of clusters k is found by repeatedly updating a criterion function during the iterations. Based on this model, an improved k-means clustering algorithm is proposed and applied to the analysis of ice-ridge surface morphology. The results show that the clustering not only gives clear boundaries between adjacent classes but also reflects well the influence of geographic location and growth environment on ice-ridge formation.
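A minimal sketch of the outer "search over k with a criterion function" idea described above. Ordinary k-means and the silhouette score are used here as stand-ins for the paper's PSO-based center optimization and its particular criterion function (both stand-ins are assumptions made for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(X, k_min=2, k_max=10, random_state=0):
    """Scan candidate cluster counts and keep the one with the best criterion value."""
    best_k, best_score, best_model = None, -np.inf, None
    for k in range(k_min, k_max + 1):
        model = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
        score = silhouette_score(X, model.labels_)   # criterion function (stand-in)
        if score > best_score:
            best_k, best_score, best_model = k, score, model
    return best_k, best_model

# Example usage on synthetic data with three well-separated groups
X = np.vstack([np.random.randn(50, 2) + offset for offset in ([0, 0], [5, 5], [0, 5])])
k, model = choose_k(X)
print(k, model.cluster_centers_.round(2))
```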

6.
In applications of fuzzy C-means (FCM) clustering, the choice of the fuzzy weighting exponent currently lacks a theoretical basis and an effective evaluation method. A subset-measure-based method for computing the fuzzy weighting exponent is therefore proposed. A cluster validity function is first defined according to subset-measure theory; during clustering this function is evaluated in an iterative loop to measure the validity of the clustering result, and its value is fed back to adjust the fuzzy weighting exponent m until m converges to a stable solution, which is taken as the optimal fuzzy weighting exponent. Theoretical analysis and experiments show that the algorithm is effective and offers a new way to study the fuzzy weighting exponent m.

7.
To address the fuzzy C-means algorithm's sensitivity to the initial cluster centers and its poor noise resistance, an algorithm is proposed that optimizes the initial cluster centers with an improved quantum genetic algorithm. The improved double-chain-encoded quantum genetic algorithm strengthens global search and alleviates the slow iteration and tendency to fall into local extrema of traditional FCM. Spatial neighborhood information is also introduced: a fitness function built from a weighted membership matrix improves robustness to noise. Experimental results show that the algorithm achieves good segmentation quality and strong noise resistance.
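The paper builds its fitness function from a weighted membership matrix carrying spatial neighborhood information; the exact construction is not reproduced here, but the sketch below shows one common way to blend each pixel's fuzzy membership with the average membership of its neighborhood (the window size and blending weight alpha are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_weighted_membership(u, img_shape, window=3, alpha=0.5):
    """Blend each pixel's membership with its neighborhood average to suppress noise.

    u         : (c, n_pixels) fuzzy memberships from FCM
    img_shape : (H, W) of the segmented image
    alpha     : weight given to the neighborhood term
    """
    u_spatial = np.empty_like(u)
    for i in range(u.shape[0]):
        m = u[i].reshape(img_shape)
        m_neigh = uniform_filter(m, size=window)          # local average membership
        u_spatial[i] = ((1 - alpha) * m + alpha * m_neigh).ravel()
    return u_spatial / u_spatial.sum(axis=0, keepdims=True)   # re-normalize per pixel

# Tiny usage example: 2 clusters over a 4x4 "image"
u = np.random.dirichlet(np.ones(2), size=16).T            # (2, 16), columns sum to 1
u_smoothed = spatially_weighted_membership(u, (4, 4))
```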

8.
Soil is a continuum with many attributes, and fuzzy cluster analysis is the method of choice for classifying it. However, the two existing approaches in fuzzy cluster analysis, dynamic clustering based on fuzzy equivalence relations and the fuzzy c-means method, each have advantages and disadvantages, so using either one alone is bound to fall short. An integrated algorithm is therefore proposed that combines their strengths and avoids their weaknesses: dynamic clustering based on fuzzy equivalence relations, together with analysis of variance, determines the number of clusters and the initial cluster centers, and fuzzy c-means then produces the final classification. Applied to soil classification in the Songhua River basin, the algorithm yields classifications that agree well with reality.

9.
A Method for Selecting the Optimal Fuzzy Weighting Exponent m in the FCM Clustering Algorithm
The fuzzy c-means (FCM) clustering algorithm obtains a fuzzy partition of a data set by minimizing an objective function. The fuzzy weighting exponent m strongly affects the classification performance of FCM, yet a value of m must be supplied whenever FCM is invoked, so research on selecting an optimal m is of real interest. Based on fuzzy decision making, this paper gives a method for selecting m, and experimental results show that the method is effective.
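For reference, a minimal numpy sketch of the textbook FCM iteration makes explicit where the fuzzy weighting exponent m enters: it weights the center update and shapes the membership update. This is the standard algorithm only, not the paper's fuzzy-decision procedure for choosing m:

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: alternate center and membership updates until convergence.

    X : (n, d) data, c : number of clusters, m > 1 : fuzzy weighting exponent.
    Returns centers V of shape (c, d) and memberships U of shape (c, n).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)                 # each point's memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m                                   # m enters both updates below
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # fuzzy-weighted centers
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)   # squared distances (c, n)
        inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0, keepdims=True)  # standard membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return V, U

# The same data clustered with different m gives softer or crisper partitions
X = np.vstack([np.random.randn(60, 2) + offset for offset in ([0, 0], [5, 5])])
for m in (1.5, 2.0, 3.0):
    V, _ = fcm(X, c=2, m=m)
    print(m, V.round(2))
```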

10.
To remedy the fuzzy C-means algorithm's sensitivity to initial values and its tendency to fall into local extrema, a clustering segmentation algorithm based on hybrid bacterial chemotaxis is proposed: particle swarm optimization is added on top of the simple bacterial chemotaxis algorithm. The new algorithm uses the result of this two-stage optimization (particle swarm followed by bacterial chemotaxis) as the initial values for fuzzy C-means, and an elitism strategy is introduced to further improve efficiency. Experimental results show that the new algorithm converges quickly and achieves good image segmentation quality.

11.
Because within-cluster and between-cluster distances differ in numerical magnitude, the two kinds of distance cannot be fused directly, which complicates the design of FCM clustering models. This paper first reviews classical and improved FCM clustering models, builds a model relating the traces of the within-cluster and between-cluster distances, and analyzes the shortcomings of existing FCM models from two angles: the inconsistent behavior and the differing magnitudes of the two distances. It then replaces the traditional Euclidean distance with a Gaussian kernel distance to represent within-cluster and between-cluster distances and, guided by the idea of minimizing the difference between within-cluster compactness and between-cluster separation, designs a method for balancing the two distances, yielding an improved FCM objective function and algorithm. Finally, a worked example demonstrates the method's effectiveness and superiority.
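A small sketch of the Gaussian-kernel distance that the abstract says replaces the Euclidean distance. Unlike the Euclidean distance it is bounded in [0, 2), which is one reason the magnitude gap between within-cluster and between-cluster terms becomes easier to balance; the paper's actual balancing scheme is not reproduced here:

```python
import numpy as np

def gaussian_kernel_distance(x, v, sigma=1.0):
    """Squared distance in the feature space induced by a Gaussian kernel:
    ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v)), bounded in [0, 2)."""
    k = np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))
    return 2.0 * (1.0 - k)

# Comparison with the unbounded squared Euclidean distance
x, v = np.array([0.0, 0.0]), np.array([10.0, 10.0])
print(np.sum((x - v) ** 2), gaussian_kernel_distance(x, v))   # 200.0 vs. roughly 2.0
```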

12.
To reduce pharmaceutical logistics distribution costs and improve distribution efficiency, this paper studies the distribution of drugs purchased under the national centralized volume-based procurement scheme and builds a bi-objective model for multi-center location and routing optimization of pharmaceutical logistics. Combining the respective strengths of the fuzzy C-means clustering algorithm (FCMA), simulated annealing, and tabu search, an FCM-TS-SA hybrid algorithm is designed and then validated, compared, and analyzed on a real case.

13.
When the fuzzy C-means algorithm is used for image segmentation it is sensitive to initial values and prone to local extrema. A fuzzy c-means image segmentation algorithm based on a hybrid simplex method is therefore proposed. It exploits the low computational cost and fast search of the Nelder-Mead simplex algorithm together with the strong adaptivity and good global search ability of particle swarm optimization, feeds the result of the hybrid simplex optimization to fuzzy C-means as its input, and applies the whole scheme to image segmentation. Experiments show that the algorithm improves segmentation quality while also running faster.
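The hybrid Nelder-Mead plus particle-swarm procedure itself is not reproduced here; the sketch below only illustrates the final step of refining candidate cluster centers with scipy's Nelder-Mead optimizer before handing them to FCM as its initial values, using a simple hard compactness objective as an assumed stand-in:

```python
import numpy as np
from scipy.optimize import minimize

def compactness(flat_centers, X, c):
    """Sum of squared distances from each point to its nearest center."""
    V = flat_centers.reshape(c, X.shape[1])
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

X = np.vstack([np.random.randn(60, 2) + offset for offset in ([0, 0], [6, 0], [3, 5])])
c = 3
v0 = X[np.random.choice(len(X), c, replace=False)].ravel()   # random initial centers
res = minimize(compactness, v0, args=(X, c), method='Nelder-Mead')
init_centers = res.x.reshape(c, 2)   # hand these to FCM as its initial cluster centers
print(init_centers.round(2))
```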

14.
In this paper, we investigate the problem of determining the number of clusters in k-modes-based categorical data clustering. We propose a new categorical data clustering algorithm with automatic selection of k. The new algorithm extends the k-modes clustering algorithm by introducing a penalty term to the objective function to make more clusters compete for objects. In the new objective function, we employ a regularization parameter to control the number of clusters in a clustering process. Instead of finding k directly, we choose a suitable value of the regularization parameter such that the corresponding clustering result is the most stable one among all the generated clustering results. Experimental results on synthetic and real data sets demonstrate the effectiveness of the proposed algorithm.
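A hedged guess at the shape of such a penalized objective, only to make the role of the regularization parameter concrete; the paper's exact penalty term and stability criterion are not reproduced here:

```python
import numpy as np

def penalized_kmodes_objective(X, labels, modes, gamma):
    """Categorical mismatch cost plus a penalty on the number of clusters actually used;
    gamma plays the role of the regularization parameter that makes clusters compete."""
    mismatches = sum(int(np.sum(X[labels == k] != modes[k])) for k in range(len(modes)))
    return mismatches + gamma * len(np.unique(labels))

# Tiny integer-coded categorical example with three candidate modes
X = np.array([[0, 1], [0, 1], [2, 1], [2, 0]])
modes = np.array([[0, 1], [2, 1], [2, 0]])
labels = np.array([0, 0, 1, 2])
print(penalized_kmodes_objective(X, labels, modes, gamma=1.5))
```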

15.
Clustering algorithms divide up a dataset into a set of classes/clusters, where similar data objects are assigned to the same cluster. When the boundary between clusters is ill defined, which yields situations where the same data object belongs to more than one class, the notion of fuzzy clustering becomes relevant. In this setting, each datum belongs to a given class with some membership grade between 0 and 1. The most prominent fuzzy clustering algorithm is the fuzzy c-means introduced by Bezdek (Pattern recognition with fuzzy objective function algorithms, 1981), a fuzzification of the k-means or ISODATA algorithm. On the other hand, several research issues have been raised regarding both the objective function to be minimized and the optimization constraints, which help to identify proper cluster shape (Jain et al., ACM Computing Survey 31(3):264–323, 1999). This paper addresses the issue of clustering by evaluating the distance of fuzzy sets in a feature space. In particular, the fuzzy clustering optimization problem is reformulated when the distance is instead given in terms of a divergence distance, which builds a bridge to the notion of probabilistic distance. This leads to a modified fuzzy clustering, which implicitly involves the variance–covariance of input terms. The optimal solution of the underlying optimization problem is determined, and its existence and uniqueness are demonstrated. The performance of the algorithm is assessed through two numerical applications: the former involves clustering of Gaussian membership functions and the latter tackles the well-known Iris dataset. Comparisons with standard fuzzy c-means (FCM) are evaluated and discussed.

16.
In this paper, we propose a new kernel-based fuzzy clustering algorithm which tries to find the best clustering results using optimal parameters of each kernel in each cluster. It is known that data with nonlinear relationships can be separated using one of the kernel-based fuzzy clustering methods. Two common fuzzy clustering approaches are: clustering with a single kernel and clustering with multiple kernels. While clustering with a single kernel doesn’t work well with “multiple-density” clusters, multiple kernel-based fuzzy clustering tries to find an optimal linear weighted combination of kernels with initial fixed (not necessarily the best) parameters. Our algorithm is an extension of the single kernel-based fuzzy c-means and the multiple kernel-based fuzzy clustering algorithms. In this algorithm, there is no need to give “good” parameters of each kernel and no need to give an initial “good” number of kernels. Every cluster will be characterized by a Gaussian kernel with optimal parameters. In order to show its effective clustering performance, we have compared it to other similar clustering algorithms using different databases and different clustering validity measures.

17.
A local geometrical properties application to fuzzy clustering
Possibilistic clustering is seen increasingly as a suitable means to resolve the limitations resulting from the constraints imposed in the fuzzy C-means algorithm. Studying the metric derived from the covariance matrix, we obtain a membership function and an objective function, whether the Mahalanobis distance or the Euclidean distance is used. Applying the theoretical results using the Euclidean distance, we obtain a new algorithm called fuzzy-minimals, which detects the possible prototypes of the groups of a sample. We illustrate the new algorithm with several examples.
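As background for the covariance-derived metric mentioned above, a small sketch of the squared Mahalanobis distance, which reduces to the squared Euclidean distance when the covariance matrix is the identity; this is context only, not the fuzzy-minimals algorithm itself:

```python
import numpy as np

def mahalanobis_sq(x, v, cov):
    """Squared Mahalanobis distance of x from center v under covariance matrix cov."""
    diff = x - v
    return float(diff @ np.linalg.inv(cov) @ diff)

# With an identity covariance this equals the squared Euclidean distance
x, v = np.array([1.0, 2.0]), np.array([0.0, 0.0])
print(mahalanobis_sq(x, v, np.eye(2)), np.sum((x - v) ** 2))
```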

18.
Many clustering techniques are available for extracting meaningful information from real-world data, but the quality of the clusters obtained and the running time of the techniques on real-world data both matter greatly. This work takes the view that fuzzy clustering is well suited to finding meaningful information and appropriate groups in real-world datasets. In fuzzy clustering, the objective function governs both the clusters obtained and the computational parts of the algorithm, so researchers aim to minimize an objective function that typically involves several computational parts: calculation of the cluster prototypes, the membership degrees of objects, and the update and stopping rules. This paper introduces new fuzzy objective functions with effective fuzzy parameters that help reduce running time and yield strongly meaningful information, or clusters, in real-world datasets. It further introduces a new way of deriving the memberships and centres by minimizing the proposed objective functions, and experimental results are given to illustrate the effectiveness of the proposed methods.

19.
Application of honey-bee mating optimization algorithm on clustering
Cluster analysis is an attractive data mining technique used in many fields. One popular class of data clustering algorithms is the center-based clustering algorithm, and K-means is a popular clustering method owing to its simplicity and high speed when clustering large datasets. However, K-means has two shortcomings: it depends on the initial state and converges to local optima, and global solutions of large problems cannot be found with a reasonable amount of computational effort. Many clustering studies have sought to overcome the local-optima problem. Over the last decade, modeling the behavior of social insects, such as ants and bees, for the purpose of search and problem solving has been the context of the emerging area of swarm intelligence. Honey-bees are among the most closely studied social insects. Honey-bee mating may also be considered a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of marriage in real honey-bees, and honey-bee behavior has been used to model agent-based systems. In this paper, we propose an application of honey-bee mating optimization to clustering (HBMK-means). We compared HBMK-means with other heuristic clustering algorithms, such as GA, SA, TS, and ACO, by implementing them on several well-known datasets. Our findings show that the proposed algorithm works better than the best of these algorithms.
