Software-defined networking (SDN) is a network architecture that separates the control plane from the data plane, with the control logic implemented in a controller. Because a controller's capacity is limited, a single controller cannot serve all switches in a large-scale SDN network, and multiple controllers are needed to handle all data flows. Since the latency between a controller and its switches significantly affects the forwarding of new data flows, a rational placement of controllers can effectively improve the performance of the entire network. By partitioning the network into multiple subdomains on the basis of spectral clustering, a method that adds a balanced-deployment objective function to k-means is given, and a balanced multi-controller placement algorithm for SDN networks with latency and capacity constraints is proposed. A penalty function is introduced into the algorithm to prevent isolated nodes from appearing. Simulations show that this algorithm partitions the network in a balanced way, keeps the latency between controllers and switches small, and balances the load among controllers.
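The core modification the abstract describes, adding a balance term to the k-means assignment step, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the penalty form `alpha * cluster_size` and all names are assumptions, and the paper additionally handles latency and capacity constraints and spectral pre-partitioning, which are omitted here.

```python
import numpy as np

def balanced_kmeans(points, k, alpha=1.0, iters=20, seed=0):
    """k-means with a load-balance penalty: when a point chooses a
    cluster, it pays its squared distance to the center plus a term
    proportional to that cluster's current size, which discourages
    oversized partitions (illustrative sketch; the penalty form is
    an assumption, not the paper's exact objective)."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        sizes = np.bincount(labels, minlength=k)          # current loads
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2 + alpha * sizes[None, :], axis=1)
        for j in range(k):                                 # update step
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated blobs the penalty is small relative to the inter-blob distance, so the final partition follows the blobs while the size term only breaks ties between near-equidistant clusters.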
We propose a novel segmentation-and-grouping framework for road map inference from sparsely sampled GPS traces. First, we extend Density-Based Spatial Clustering of Applications with Noise (DBSCAN) with an orientation constraint to partition the entire point set of the traces into point clusters representing road segments. Second, we propose an adaptive k-means algorithm, in which the value of k is determined by an angle threshold, to reconstruct nearly straight line segments. Third, the line segments are grouped into 'strokes' according to the 'good continuity' principle of Gestalt law to recover the road map. Experimental results demonstrate that our algorithm is robust to noise and sampling rates. In comparison with previous work, our method has advantages in inferring road maps from sparsely sampled GPS traces.
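The orientation-constrained DBSCAN extension mentioned in the first step can be sketched as a modified neighborhood query: a point only counts as a neighbor if it is both spatially close and heading in a similar direction. This is a minimal sketch under assumed inputs (per-point GPS headings in degrees); the function name, parameters, and exact predicate are illustrative, not the paper's definition.

```python
import math

def oriented_neighbors(points, headings, i, eps, max_angle_deg):
    """Neighborhood query for a DBSCAN variant with an orientation
    constraint: point j is a neighbor of point i only if it lies
    within distance eps AND its heading differs from i's by at most
    max_angle_deg (angles compared on a 360-degree circle)."""
    out = []
    for j, ((x, y), h) in enumerate(zip(points, headings)):
        if j == i:
            continue
        if math.hypot(x - points[i][0], y - points[i][1]) > eps:
            continue
        diff = abs(h - headings[i]) % 360.0
        diff = min(diff, 360.0 - diff)   # smallest angular difference
        if diff <= max_angle_deg:
            out.append(j)
    return out
```

Plugging such a query into standard DBSCAN keeps traces from perpendicular roads at an intersection from being merged into one cluster, which is the point of the orientation constraint.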
In recent years, microarray technology has been widely applied in biological and clinical studies for the simultaneous monitoring of gene expression in thousands of genes. Gene clustering analysis is useful for discovering groups of correlated genes that are potentially co-regulated or associated with the disease or conditions under investigation. Many clustering methods, including k-means, fuzzy c-means, and hierarchical clustering, have been widely used in the literature. Yet no comprehensive comparative study has been performed to evaluate the effectiveness of these methods, especially in the yeast Saccharomyces cerevisiae. In this paper, these three gene clustering methods are compared. Classification accuracy and CPU time are employed to measure the performance of the algorithms. Our results show that hierarchical clustering outperforms k-means and fuzzy c-means clustering. The analysis provides deep insight into the complicated gene clustering problem of expression profiles and serves as a practical guideline for routine microarray cluster analysis of gene expression.
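One common way to compute the "classification accuracy" of a clustering against known gene classes is majority-class mapping: assign each cluster the most frequent true class among its members, then score the fraction of genes matched. A minimal sketch of that metric follows; this is one standard definition, and the paper's exact measure may differ.

```python
from collections import Counter

def cluster_accuracy(labels, truth):
    """Map each cluster to its majority true class, then return the
    fraction of samples whose true class matches their cluster's
    majority class (a common accuracy measure for clusterings)."""
    correct = 0
    for c in set(labels):
        members = [t for lab, t in zip(labels, truth) if lab == c]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(labels)
```

Because cluster IDs are arbitrary, this mapping step is what makes an unsupervised partition comparable against labeled classes.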
In k-means clustering we are given a set of n data points in d-dimensional space ℝ^d and an integer k, and the problem is to determine a set of k points in ℝ^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the very high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance.
We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+ε)-approximation algorithm. We present an example showing that any approach based on performing a fixed number of swaps achieves an approximation factor of at least (9−ε) in all sufficiently high dimensions. Thus, our approximation factor is almost tight for algorithms based on performing a fixed number of swaps. To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.
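The single-swap version of the heuristic described above can be sketched briefly: repeatedly try replacing one current center with one data point, and keep any swap that lowers the k-means cost. This is a minimal illustrative variant (the analyzed algorithm allows a fixed number p of simultaneous swaps, and the function names here are assumptions).

```python
import itertools

def kmeans_cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def single_swap_local_search(points, centers):
    """Swap heuristic, single-swap variant: try replacing each current
    center with each data point; accept any swap that strictly lowers
    the cost, and repeat until no improving swap exists."""
    centers = list(centers)
    improved = True
    while improved:
        improved = False
        for i, cand in itertools.product(range(len(centers)), points):
            trial = centers[:i] + [cand] + centers[i + 1:]
            if kmeans_cost(points, trial) < kmeans_cost(points, centers) - 1e-12:
                centers = trial
                improved = True
    return centers
```

Starting from a poor initialization (both centers in one cluster of points), a single accepted swap moves a center to the other cluster, which is exactly the kind of local optimum escape that plain Lloyd's algorithm cannot perform.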