Similar Documents
1.
张璐  孔令臣  陈黄岳 《计算数学》2019,41(3):320-334
With the arrival of the big-data era, massive and structurally complex data have appeared in many fields, for instance with variables of differing dimensions and differing scales. In practice the relationships among variables are often uncertain, and the classical Pearson correlation coefficient only captures the linear relationship between two variables of the same dimension, so it cannot fully characterize the dependence between variables. The distance correlation coefficient proposed by Székely et al. in 2007, by contrast, can describe nonlinear relationships between variables of different dimensions. To uncover the information shared among variables, this paper proposes a maximum distance correlation method for clustering variables based on the distance correlation coefficient; the method is ultrametric and space-contracting. To exploit the distance correlation coefficient more fully, the method is further refined into a whole-class distance correlation method: when measuring the similarity between two classes, all variables within each class are merged into a single whole, and the distance correlation between the two wholes, which may have different dimensions, is computed. Finally, the whole-class distance correlation method is applied to several real problems, confirming the effectiveness of the algorithm.
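As a minimal illustration of the key quantity above, the sketch below computes the sample distance correlation of Székely et al. between two samples of different dimensions; the variable names and toy data are illustrative and do not reproduce the clustering procedure of the paper.

import numpy as np

def _centered_dists(A):
    # pairwise Euclidean distances, then double centering
    D = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(-1))
    return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()

def distance_correlation(X, Y):
    # biased (V-statistic) sample distance correlation; X and Y may have different dimensions
    A, B = _centered_dists(X), _centered_dists(Y)
    dcov2_xy = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(dcov2_xy / denom)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                               # a 3-dimensional variable
Y = np.hstack([X[:, :1] ** 2, rng.normal(size=(200, 1))])   # nonlinearly related, 2-dimensional
print(distance_correlation(X, Y))                           # clearly positive despite near-zero Pearson correlation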

2.
One of the most promising approaches for clustering is based on methods of mathematical programming. In this paper we propose new optimization methods based on DC (Difference of Convex functions) programming for hierarchical clustering. A bilevel hierarchical clustering model is considered with different optimization formulations. They are all nonconvex, nonsmooth optimization problems, for which we investigate attractive DC optimization algorithms called DCA. Numerical results on some artificial and real-world databases are reported. The results demonstrate that the proposed algorithms are more efficient than related existing methods.
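DCA itself is easy to state: split the objective into a difference of convex functions g - h, linearize h at the current iterate, and minimize the resulting convex surrogate. The toy below runs this iteration on f(x) = x^4 - 4x^2 with g(x) = x^4 and h(x) = 4x^2; it only illustrates the mechanics of DCA, not the paper's bilevel hierarchical clustering formulations.

import numpy as np

def dca_toy(x, iters=30):
    # minimize f(x) = g(x) - h(x) with g(x) = x**4 (convex) and h(x) = 4*x**2 (convex)
    for _ in range(iters):
        y = 8.0 * x              # gradient of h at the current iterate
        x = np.cbrt(y / 4.0)     # argmin_x g(x) - y*x, i.e. solve 4*x**3 = y
    return x

print(dca_toy(1.0))              # converges to sqrt(2), a critical point of f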

3.
Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work, we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplementary materials available online.
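The convex clustering objective the article solves is easy to write down: a quadratic fit term plus a sum of norms of centroid differences. The sketch below evaluates a slightly smoothed version of that objective and shrinks the centroids by plain gradient descent; gamma, the smoothing constant, and the toy data are illustrative, and the loop merely stands in for the ADMM/AMA splitting methods of the paper.

import numpy as np

def convex_clustering_objective(U, X, gamma, eps=1e-8):
    # 0.5 * sum_i ||x_i - u_i||^2 + gamma * sum_{i<j} ||u_i - u_j|| (norm smoothed by eps)
    fit = 0.5 * ((X - U) ** 2).sum()
    i, j = np.triu_indices(len(X), k=1)
    fusion = np.sqrt(((U[i] - U[j]) ** 2).sum(-1) + eps).sum()
    return fit + gamma * fusion

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
U, gamma, lr = X.copy(), 0.5, 0.02
i, j = np.triu_indices(len(X), k=1)
for _ in range(800):                      # naive gradient descent on the smoothed objective
    diff = U[i] - U[j]
    norm = np.sqrt((diff ** 2).sum(-1, keepdims=True) + 1e-8)
    grad = U - X
    np.add.at(grad, i, gamma * diff / norm)
    np.add.at(grad, j, -gamma * diff / norm)
    U -= lr * grad
print(convex_clustering_objective(U, X, gamma))   # centroids have been pulled toward one another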

4.
This article establishes a general formulation for Bayesian model-based clustering, in which subset labels are exchangeable, and items are also exchangeable, possibly up to covariate effects. The notational framework is rich enough to encompass a variety of existing procedures, including some recently discussed methods involving stochastic search or hierarchical clustering, but more importantly allows the formulation of clustering procedures that are optimal with respect to a specified loss function. Our focus is on loss functions based on pairwise coincidences, that is, whether pairs of items are clustered into the same subset or not.

Optimization of the posterior expected loss function can be formulated as a binary integer programming problem, which can be readily solved by standard software when clustering a modest number of items, but quickly becomes impractical as problem scale increases. To combat this, a new heuristic item-swapping algorithm is introduced. This performs well in our numerical experiments on both simulated and real data examples. The article includes a comparison of the statistical performance of the (approximate) optimal clustering with earlier methods that are model-based but ad hoc in their detailed definition.
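For concreteness, the sketch below evaluates a pairwise-coincidence (Binder-type) expected loss from a matrix of posterior co-clustering probabilities and runs a greedy item-swapping pass; the probability matrix pi, the unit loss weights, and the cluster count are made-up stand-ins, not the article's formulation or its tuned heuristic.

import numpy as np

def expected_pairwise_loss(labels, pi, a=1.0, b=1.0):
    # cost a for separating a pair that is likely co-clustered, b for joining an unlikely one
    same = (labels[:, None] == labels[None, :]).astype(float)
    i, j = np.triu_indices(len(labels), k=1)
    return (a * pi[i, j] * (1 - same[i, j]) + b * (1 - pi[i, j]) * same[i, j]).sum()

def greedy_swap(labels, pi, n_clusters):
    labels = labels.copy()
    for _ in range(20):
        changed = False
        for idx in range(len(labels)):
            # try moving item idx to every cluster and keep the best move
            losses = []
            for c in range(n_clusters):
                trial = labels.copy()
                trial[idx] = c
                losses.append(expected_pairwise_loss(trial, pi))
            best = int(np.argmin(losses))
            if best != labels[idx]:
                labels[idx], changed = best, True
        if not changed:
            break
    return labels

rng = np.random.default_rng(2)
pi = rng.uniform(size=(12, 12))
pi = (pi + pi.T) / 2                       # symmetric stand-in for posterior co-clustering probabilities
print(greedy_swap(rng.integers(0, 3, 12), pi, 3))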

5.
Complex data sets are often unmanageable unless they can be subdivided and simplified in an intelligent manner. Clustering is a technique that is used in data mining and scientific analysis for partitioning a data set into groups of similar or nearby items. Hierarchical clustering is an important and well-studied clustering method involving both top-down and bottom-up subdivisions of data. In this article we address the parallel complexity of hierarchical clustering. We describe known sequential algorithms for top-down and bottom-up hierarchical clustering. The top-down algorithm can be parallelized, and when there are n points to be clustered, we provide an O(log n)-time, n²-processor CREW PRAM algorithm that computes the same output as its corresponding sequential algorithm. We define a natural decision problem based on bottom-up hierarchical clustering, and add this HIERARCHICAL CLUSTERING PROBLEM (HCP) to the slowly growing list of CC-complete problems, thereby showing that HCP is one of the computationally most difficult problems in the COMPARATOR CIRCUIT VALUE PROBLEM class. This class contains a variety of interesting problems, and now for the first time a problem from data mining as well. By proving that HCP is CC-complete, we have demonstrated that HCP is very unlikely to have an NC algorithm. This result is in sharp contrast to the NC algorithm which we give for the top-down sequential approach, and the result surprisingly shows that the parallel complexities of the top-down and bottom-up approaches are different, unless CC equals NC. In addition, we provide a compendium of all known CC-complete problems. © 2008 Wiley Periodicals, Inc. Complexity, 2008
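The sequential bottom-up algorithm the abstract refers to is ordinary agglomeration; the sketch below is a naive single-linkage version of it (cubic time, nothing like the CREW PRAM construction or the CC-completeness argument), included only to fix ideas.

import numpy as np

def single_linkage_merges(X):
    # naive O(n^3) bottom-up agglomeration: repeatedly merge the two closest clusters,
    # with cluster-to-cluster distance taken as the minimum point-to-point distance
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    clusters = [[i] for i in range(len(X))]
    merges = []
    while len(clusters) > 1:
        best = (np.inf, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].min()
                if d < best[0]:
                    best = (d, (a, b))
        d, (a, b) = best
        merges.append((clusters[a], clusters[b], d))
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [clusters[a] + clusters[b]]
    return merges

X = np.array([[0.0, 0], [0.1, 0], [5, 5], [5.1, 5]])
for left, right, dist in single_linkage_merges(X):
    print(left, right, round(dist, 2))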

6.
In recent years, hierarchical model-based clustering has provided promising results in a variety of applications. However, its use with large datasets has been hindered by a time and memory complexity that are at least quadratic in the number of observations. To overcome this difficulty, this article proposes to start the hierarchical agglomeration from an efficient classification of the data in many classes rather than from the usual set of singleton clusters. This initial partition is derived from a subgraph of the minimum spanning tree associated with the data. To this end, we develop graphical tools that assess the presence of clusters in the data and uncover observations difficult to classify. We use this approach to analyze two large, real datasets: a multiband MRI image of the human brain and data on global precipitation climatology. We use the real datasets to discuss ways of integrating the spatial information in the clustering analysis. We focus on two-stage methods, in which a second stage of processing using established methods is applied to the output from the algorithm presented in this article, viewed as a first stage.
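A minimal sketch of the initialization idea, assuming scipy is available: build the minimum spanning tree of the pairwise distances, cut the k-1 heaviest edges, and take the connected components as the initial classes from which agglomeration would proceed. The value of k and the toy data are illustrative; the article's graphical diagnostics and two-stage processing are not reproduced.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_initial_partition(X, k):
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    edges = np.argwhere(mst > 0)
    weights = mst[mst > 0]
    # remove the k-1 heaviest MST edges, splitting the tree into k components
    for idx in np.argsort(weights)[::-1][:k - 1]:
        mst[edges[idx][0], edges[idx][1]] = 0
    _, labels = connected_components(mst, directed=False)
    return labels

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.2, (15, 2)) for c in (0, 3, 6)])
print(mst_initial_partition(X, 3))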

7.
A Structured Family of Clustering and Tree Construction Methods
A cluster A is an Apresjan cluster if every pair of objects within A is more similar than either is to any object outside A. The criterion is intuitive and compelling, but often too restrictive for applications in classification. We therefore explore extensions of Apresjan clustering to a family of related hierarchical clustering methods. The extensions are shown to be closely connected with the well-known single and average linkage tree constructions. A dual family of methods for classification by splits is also presented. Splits are partitions of the set of objects into two disjoint blocks and are widely used in domains such as phylogenetics. Both the cluster and split methods give rise to progressively refined tree representations. We exploit dualities and connections between the various methods, giving polynomial time construction algorithms for most of the constructions and NP-hardness results for the rest.
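The Apresjan criterion itself is a one-line check on a dissimilarity matrix: every within-cluster dissimilarity must be smaller than every dissimilarity to the outside. A small sketch with an illustrative matrix and subsets:

import numpy as np

def is_apresjan_cluster(D, A):
    # D is a symmetric dissimilarity matrix; A is a list of object indices
    A = list(A)
    out = [i for i in range(len(D)) if i not in A]
    if len(A) < 2 or not out:
        return True
    within = D[np.ix_(A, A)][~np.eye(len(A), dtype=bool)]
    across = D[np.ix_(A, out)]
    return within.max() < across.min()

D = np.array([[0.0, 1, 5, 6],
              [1, 0.0, 5, 6],
              [5, 5, 0.0, 2],
              [6, 6, 2, 0.0]])
print(is_apresjan_cluster(D, [0, 1]))   # True
print(is_apresjan_cluster(D, [0, 2]))   # False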

8.
Convex clustering, a convex relaxation of k-means clustering and hierarchical clustering, has drawn recent attention since it nicely addresses the instability issue of traditional nonconvex clustering methods. Although its computational and statistical properties have been recently studied, the performance of convex clustering has not yet been investigated in the high-dimensional clustering scenario, where the data contains a large number of features and many of them carry no information about the clustering structure. In this article, we demonstrate that the performance of convex clustering can be distorted when uninformative features are included in the clustering. To overcome this, we introduce a new clustering method, referred to as Sparse Convex Clustering, to simultaneously cluster observations and conduct feature selection. The key idea is to formulate convex clustering in a form of regularization, with an adaptive group-lasso penalty term on cluster centers. To optimally balance the trade-off between the cluster fitting and sparsity, a tuning criterion based on clustering stability is developed. Theoretically, we obtain a finite sample error bound for our estimator and further establish its variable selection consistency. The effectiveness of the proposed method is examined through a variety of numerical experiments and a real data application. Supplementary material for this article is available online.
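As a rough illustration of the formulation described above, the sketch below evaluates a convex-clustering objective augmented with a group-lasso penalty over the columns of the centroid matrix, so that uninformative features can be shrunk to zero across all centroids; the weights, tuning parameters, and data are illustrative assumptions, and the paper's adaptive weighting and stability-based tuning are not reproduced.

import numpy as np

def sparse_convex_clustering_objective(A, X, gamma1, gamma2, feature_weights):
    # fit + fusion terms of convex clustering plus a group lasso over centroid columns
    fit = 0.5 * ((X - A) ** 2).sum()
    i, j = np.triu_indices(len(X), k=1)
    fusion = np.sqrt(((A[i] - A[j]) ** 2).sum(-1)).sum()
    sparsity = (feature_weights * np.sqrt((A ** 2).sum(0))).sum()
    return fit + gamma1 * fusion + gamma2 * sparsity

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 5))
A = X.copy()
print(sparse_convex_clustering_objective(A, X, gamma1=0.5, gamma2=1.0,
                                          feature_weights=np.ones(5)))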

9.
A BIRCH clustering algorithm based on weighted similarity
BIRCH is an integrated hierarchical clustering method. It overcomes two difficulties faced by agglomerative hierarchical clustering: scalability and the inability to undo the work of a previous step. Building on the multi-stage clustering idea of BIRCH, and combining a weight-based Euclidean distance measure with the partition-based K-means algorithm, this paper proposes a BIRCH clustering method based on weighted similarity and applies it to the analysis of meteorological time-series data.
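A minimal sketch of the weighted-similarity idea, assuming the BIRCH CF-tree pre-clustering stage has already produced the points to refine: a weighted Euclidean distance drives a few K-means-style assignment and update steps. The feature weights and toy data are illustrative.

import numpy as np

def weighted_kmeans(X, k, w, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # weighted squared Euclidean distance from every point to every center
        d2 = (w * (X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c) else centers[c]
                            for c in range(k)])
    return labels, centers

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (20, 3)), rng.normal(2, 0.3, (20, 3))])
w = np.array([1.0, 1.0, 0.1])          # down-weight the third feature
labels, centers = weighted_kmeans(X, 2, w)
print(labels)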

10.
Hierarchical hesitant fuzzy K-means clustering algorithm
Because of the limitations of, and hesitation in, one's knowledge, the membership degree of an element in a given set may take several different values, a situation that conventional fuzzy sets cannot represent. Hesitant fuzzy sets are a powerful tool for treating this case. The present paper investigates a clustering technique for hesitant fuzzy sets based on the K-means clustering algorithm, which takes the results of hierarchical clustering as the initial clusters. Finally, two examples demonstrate the validity of our algorithm.
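For illustration, the sketch below implements one common hesitant fuzzy distance (sort both membership lists, extend the shorter one, average the absolute differences) and a nearest-center assignment step, the basic building block of such a K-means pass; the extension rule, sample data, and centers are illustrative assumptions, not necessarily the paper's definitions.

import numpy as np

def hesitant_distance(h1, h2):
    # each argument is a hesitant fuzzy element: a list of possible membership degrees
    h1, h2 = sorted(h1), sorted(h2)
    l = max(len(h1), len(h2))
    h1 = h1 + [max(h1)] * (l - len(h1))      # optimistic extension of the shorter element
    h2 = h2 + [max(h2)] * (l - len(h2))
    return np.mean([abs(a - b) for a, b in zip(h1, h2)])

objects = [[0.1, 0.2], [0.15, 0.3], [0.8, 0.9], [0.7, 0.85, 0.9]]
centers = [[0.2], [0.85]]                    # e.g. taken from a hierarchical pre-clustering
labels = [int(np.argmin([hesitant_distance(o, c) for c in centers])) for o in objects]
print(labels)                                # [0, 0, 1, 1]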
