22 results found; the first 10 are listed below.
1.
Exploiting the fact that image variance reflects target edge information well, a variance-based K-means clustering algorithm for infrared target detection is proposed. The infrared image is first preprocessed with morphological methods, a variance image is computed from it using a corresponding template, and the K-means clustering algorithm is then applied to the variance image to separate the target class from the background class. Experiments show that the target information extracted by this algorithm achieves the highest Rand index, indicating that the algorithm can effectively extract target information from infrared images and thus accomplish target detection.
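A minimal sketch of the two stages described above, assuming a grayscale image stored as a nested list; the 3×3 variance template and k = 2 are illustrative assumptions, and the morphological preprocessing is omitted:

```python
def variance_image(img, r=1):
    """Local variance over a (2r+1)x(2r+1) window (the 'template')."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1))]
            m = sum(vals) / len(vals)
            out[i][j] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out

def kmeans_1d(values, k=2, iters=20):
    """Plain 1-D k-means on the variance values; label 0 = lowest-variance
    (background) cluster. Centres are seeded at evenly spaced quantiles."""
    vs = sorted(values)
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda c: (v - centers[c]) ** 2)].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
        centers.sort()
    return [min(range(k), key=lambda c: (v - centers[c]) ** 2) for v in values]
```

High-variance pixels (edges of a hot target against a flat background) end up in the non-zero cluster, which is the detection output.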
2.
Grouping objects based on their similarities is a common and important task in machine learning applications. Many clustering methods have been developed; among them, k-means based methods have been broadly used, and several extensions such as k-means++ and kernel k-means have been developed to improve the original k-means method. K-means is a linear clustering method; that is, it divides the objects into linearly separable groups, while kernel k-means is a non-linear technique: it projects the elements to a higher-dimensional feature space using a kernel function and then groups them. Different kernel functions may perform very differently on the same data set, so choosing the right kernel for an application can be challenging. In our previous work, we introduced a weighted majority voting method for clustering based on normalized mutual information (NMI). NMI is a supervised criterion: true labels for a training set are required to compute it. In this study, we extend that work on aggregating clustering results to develop an unsupervised weighting function for settings where no training set is available. The proposed weighting function is based on the Silhouette index, an unsupervised criterion that requires no training set. This makes the new method more practical for clustering, where labeled data are typically unavailable.
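A minimal sketch of the idea, assuming Euclidean distance and a pairwise-vote consensus (the paper's actual aggregation scheme may differ); giving clusterings with non-positive silhouette zero weight is an illustrative choice:

```python
from itertools import combinations

def silhouette(points, labels):
    """Mean silhouette index of one clustering; points are tuples."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:
            scores.append(0.0)
            continue
        a = sum(dist(p, q) for q in own) / len(own)           # cohesion
        b = min(sum(dist(p, q) for q in clusters[m]) / len(clusters[m])
                for m in clusters if m != l)                   # separation
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def consensus_pairs(points, clusterings):
    """Weight each candidate clustering by its silhouette; a pair of points
    is 'together' in the consensus if the weighted vote exceeds half the
    total weight."""
    weights = [max(silhouette(points, ls), 0.0) for ls in clusterings]
    total = sum(weights)
    together = set()
    for i, j in combinations(range(len(points)), 2):
        vote = sum(w for w, ls in zip(weights, clusterings) if ls[i] == ls[j])
        if vote > total / 2:
            together.add((i, j))
    return together
```

Because the silhouette index needs no ground-truth labels, the weights can be computed on the data being clustered, unlike the NMI-based weights of the earlier supervised scheme.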
3.
The traditional K-means clustering algorithm, when applied to machinery fault detection, depends on a strongly subjective choice of K and easily converges to a local rather than a global optimum, which lowers detection accuracy. An improved K-means-based method for intelligent machinery fault detection is therefore proposed, combining K-means with particle swarm optimization: during the iterations, the offspring individuals of the particle swarm are refined by K-means clustering to obtain local optima, and these refined individuals then continue to take part in the iterations. This speeds up convergence, avoids being trapped in local optima, and yields accurate fault signal features. Experimental results show that the method effectively improves the accuracy of machinery fault detection and achieves satisfactory results.
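A minimal sketch of such a hybrid, where each particle encodes a full set of centroids, one Lloyd pass refines every particle per iteration, and a standard PSO velocity update follows; the constants (inertia 0.7, acceleration 1.5, particle count) are illustrative assumptions, not values from the paper:

```python
import random

def sse(points, centers):
    """Sum of squared distances to the nearest centre (PSO fitness)."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
               for p in points)

def lloyd_step(points, centers):
    """One k-means assignment/update pass used to refine a particle."""
    k = len(centers)
    groups = [[] for _ in range(k)]
    for p in points:
        groups[min(range(k), key=lambda c: sum((a - b) ** 2
                   for a, b in zip(p, centers[c])))].append(p)
    return [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)]

def pso_kmeans(points, k, n_particles=5, iters=30, seed=1):
    random.seed(seed)
    dim = len(points[0])
    parts = [[random.choice(points) for _ in range(k)]
             for _ in range(n_particles)]
    vels = [[(0.0,) * dim for _ in range(k)] for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    gbest = min(parts, key=lambda c: sse(points, c))
    for _ in range(iters):
        for i in range(n_particles):
            parts[i] = lloyd_step(points, parts[i])   # k-means refinement
            if sse(points, parts[i]) < sse(points, pbest[i]):
                pbest[i] = parts[i][:]
            # standard PSO velocity/position update, per centre coordinate
            new_c, new_v = [], []
            for c, v, pb, gb in zip(parts[i], vels[i], pbest[i], gbest):
                vv = tuple(0.7 * vj + 1.5 * random.random() * (pbj - cj)
                           + 1.5 * random.random() * (gbj - cj)
                           for vj, cj, pbj, gbj in zip(v, c, pb, gb))
                new_v.append(vv)
                new_c.append(tuple(cj + vj for cj, vj in zip(c, vv)))
            parts[i], vels[i] = new_c, new_v
            if sse(points, parts[i]) < sse(points, pbest[i]):
                pbest[i] = parts[i][:]
        gbest = min(pbest, key=lambda c: sse(points, c))
    return gbest
```

The swarm explores many starting configurations in parallel while each Lloyd pass pulls every particle to a nearby local optimum, which is the combination the abstract credits with faster convergence and escape from local optima.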
4.
To address information redundancy and uncertainty among the indicators in multi-attribute decision making, an indicator screening method is proposed that combines an improved k-means clustering with a rough set algorithm. First, the spatial distribution density of the samples is defined to realize a k-means algorithm with optimized initial cluster centers, which is used to discretize the continuous indicators; rough-set relative reduction is then applied to delete redundant indicators that carry duplicated information. A case study on constructing a green economy indicator system verifies the rationality and effectiveness of the method.
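A minimal sketch of density-guided seeding in the spirit described above, assuming Euclidean distance; the fixed density radius and the "keep centres at least one radius apart" rule are illustrative assumptions, not the paper's exact definition:

```python
def density_init(points, k, radius=2.0):
    """Pick k initial centres: the highest local-density point first, then
    the densest remaining points at least `radius` away from every centre
    already chosen. May return fewer than k centres on degenerate data."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    density = [sum(1 for q in points if d(p, q) < radius) for p in points]
    order = sorted(range(len(points)), key=lambda i: -density[i])
    centers = []
    for i in order:
        if all(d(points[i], c) >= radius for c in centers):
            centers.append(points[i])
        if len(centers) == k:
            break
    return centers
```

Seeding from dense, mutually distant points makes the subsequent k-means discretization far less sensitive to a random start.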
5.
Different USA-origin cannabis samples were analyzed by GC-FID to quantify all detectable cannabinoids and terpenoids prior to clustering. Chromatographic analysis confirmed the presence of seven cannabinoids and sixteen terpenoids at variable levels. Among the tested cannabinoids, Δ9-tetrahydrocannabinol (Δ9-THC) and cannabinol (CBN) were present in the largest amounts, at 1.2–8.0 wt% and 0.22–1.1 wt%, respectively. Fenchol was the most abundant terpenoid, ranging from 0.03 to 1.0 wt%. The measured chemical profiles were used to cluster 23 USA states and to group the plant samples using different unsupervised multivariate statistical tools. Clustering of plant samples and states was sensitive to the selected cannabinoids/terpenoids. Principal component analysis (PCA) indicated the importance of Δ9-THC, CBN, CBG, CBC, THCV, Δ8-THC, CBL, and fenchol for sample clustering. Δ9-THC was significant for separating California-origin samples, while CBN and fenchol were dominant in separating Oregon-origin samples from the rest. A second PCA was performed on the cannabinoids after excluding Δ9-THC (because of its high variability within the same plant) and CBN (a degradation byproduct of THC). The results indicated that CBL and Δ8-THC were necessary to separate Nevada and Washington samples, while CBC was necessary to isolate Oregon and Illinois samples. PCA based on terpenoid content confirmed the significance of caryophyllene, guaiol, limonene, linalool, and fenchol for clustering. Fenchol played a major role in clustering plant samples originating from Washington and Nevada. The k-means method was more flexible than PCA and generated three classes; samples from Oregon and California were clearly separated from the rest, which was attributed to their unique chemical profiles. Both PCA and k-means thus serve as useful and quick guides for clustering cannabis by chemical profile, so that less effort, time, and material is consumed and milder operational conditions suffice.
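The PCA step used above reduces each sample's measured chemical profile to a score on the dominant principal component before grouping; a sketch via power iteration on the covariance matrix, with toy feature rows standing in for the cannabinoid/terpenoid measurements:

```python
def pca_first_component(data, iters=200):
    """First principal component by power iteration on the covariance
    matrix. `data` is a list of equal-length feature rows (e.g. one row of
    cannabinoid/terpenoid levels per plant sample). Returns the component
    direction and each row's score (projection) on it."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]   # centre
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                 # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(x[i][j] * v[j] for j in range(d)) for i in range(n)]
    return v, scores
```

Samples with similar scores on the leading components group together, which is the basis for the state-by-state separations reported above; k-means can then be run on the scores instead of the raw profile.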
6.
The main factors influencing the clustering quality of the k-means algorithm are the selection of the initial cluster centers and the distance measure between sample points. The traditional k-means algorithm uses Euclidean distance to measure the distance between sample points, which differentiates the attributes of sample points poorly and is prone to local optima. To address this, this paper proposes an improved k-means algorithm based on evidence distance. First, the attribute values of sample points are modelled as the basic probability assignment (BPA) of the sample points. Then the traditional Euclidean distance is replaced by the evidence distance for measuring the distance between sample points, and finally k-means clustering is carried out on UCI data. Experimental comparisons are made with the traditional k-means algorithm, the k-means algorithm based on the aggregation distance parameter, and the Gaussian mixture model. The experimental results show that the proposed evidence-distance-based k-means algorithm clusters better and also converges better.
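The metric swap can be sketched as a Lloyd iteration that takes the distance function as a parameter; the Jousselme-style evidence distance over BPAs is more involved, so a Manhattan distance stands in here purely for illustration, and the spread-index seeding is an assumption:

```python
def kmeans_custom(points, k, dist, iters=20):
    """Lloyd's iterations with a caller-supplied distance `dist(p, q)`;
    centres are still updated as coordinate-wise means. Assumes k >= 2;
    centres are seeded at evenly spread indices of the input list."""
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels, centers
```

Replacing `dist` with an evidence distance over BPAs (rather than this Manhattan stand-in) yields the assignment rule the paper describes, with the update step unchanged.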
7.
Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work, we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplementary materials available online.
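The convex relaxation referred to above minimizes a sum-of-norms objective; a sketch of that objective (here with Euclidean norms and unit weights as illustrative choices — the article covers other norms and weightings) makes the shrinkage trade-off explicit:

```python
from itertools import combinations

def convex_clustering_objective(X, U, gamma, w=None):
    """Sum-of-norms clustering objective:
        0.5 * sum_i ||x_i - u_i||^2 + gamma * sum_{i<j} w_ij ||u_i - u_j||
    X are data points, U their candidate centroids. As gamma -> 0 each
    u_i sits on x_i; large gamma fuses centroids, merging clusters."""
    def norm(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    fit = 0.5 * sum(norm(x, u) ** 2 for x, u in zip(X, U))
    pen = sum((w[(i, j)] if w else 1.0) * norm(U[i], U[j])
              for i, j in combinations(range(len(U)), 2))
    return fit + gamma * pen
```

Because this objective is convex in U, any minimizer found by ADMM or AMA is the unique global one — the property that distinguishes convex clustering from plain k-means.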
8.
Traditional information retrieval systems return a ranked list of results to a user's query. This list is often long, and the user cannot explore all the results retrieved. It is also ineffective for a highly ambiguous language such as Arabic: the modern writing style of Arabic omits the diacritical marking, without which Arabic words become ambiguous. For a search query, the user has to skim each document to infer whether a word carries the meaning they are after, which is time-consuming. Clustering the retrieved documents is expected to collate them into clear and meaningful groups. In this paper, we use an enhanced k-means clustering algorithm, which yields a faster clustering time than the regular k-means. The algorithm uses the distances calculated in previous iterations to minimize the number of distance calculations. We propose a system to cluster Arabic search results using the enhanced k-means algorithm, labeling each cluster with its most frequent word. This system helps Arabic web users identify each cluster's topic and go directly to the required cluster. Experimentally, the enhanced k-means algorithm reduced the execution time by 60% for the stemmed dataset and 47% for the non-stemmed dataset compared to the regular k-means, while slightly improving the purity.
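A minimal sketch of the caching idea described above, in the spirit of the enhanced k-means the paper uses: each point remembers its distance to its assigned centre, and a full nearest-centre search is redone only when that distance grows after the centres move. The seeding rule is an illustrative assumption:

```python
def enhanced_kmeans(points, k, iters=20):
    """k-means that caches each point's squared distance to its assigned
    centre; if that distance has not grown after the centres move, the old
    assignment is kept and distances to the other centres are skipped.
    Assumes k >= 2; centres seeded at evenly spread indices."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    cached = [-1.0] * len(points)          # forces a full first pass
    for _ in range(iters):
        for i, p in enumerate(points):
            dcur = d2(p, centers[labels[i]])
            if dcur > cached[i]:           # centre moved away: full search
                labels[i] = min(range(k), key=lambda c: d2(p, centers[c]))
                dcur = d2(p, centers[labels[i]])
            cached[i] = dcur               # otherwise keep old assignment
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels
```

Once clusters stabilize, most points hit the cached branch each iteration, which is where the reported 47–60% time reduction comes from.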
9.
The k-means problem is one of the classic problems of computer science and combinatorial optimization, and k-means clustering, as the most valued and most easily understood clustering method, is popular in data mining. The k-means problem can be stated as follows: given an observation set of n points, each a d-dimensional real vector, partition the n points into k (≤ n) sets so as to minimize the sum of squared distances from every point to the center of its cluster, where a cluster's center is the mean of all observation points in that cluster. The k-means problem is NP-hard in theory, but efficient heuristic algorithms exist and are widely applied in practical settings such as market segmentation, machine vision, geostatistics, astronomy, and agriculture. As the k-means problems encountered in practice grow more complex and data volumes larger, further study is still needed. This survey lists classical algorithms for the k-means problem and its many variants and generalizations, and summarizes several open questions in k-means research.
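The objective in the problem statement above can be written down directly; a small sketch, with a partition given as lists of points:

```python
def kmeans_cost(partition):
    """Objective of the k-means problem: over all clusters, the sum of
    squared distances from each point to its cluster's mean."""
    total = 0.0
    for cluster in partition:
        mean = tuple(sum(col) / len(cluster) for col in zip(*cluster))
        total += sum(sum((a - m) ** 2 for a, m in zip(p, mean))
                     for p in cluster)
    return total
```

Finding the partition minimizing this quantity is what is NP-hard; the heuristics the survey lists only guarantee local optima of it.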
10.
Since it was first posed, the k-means problem has attracted wide attention in combinatorial optimization and computer science, and it is one of the classic NP-hard problems. Given an observation set of N d-dimensional real vectors, the goal is to partition the N points into k (≤ N) sets so as to minimize the sum of squared distances from the points to their corresponding cluster centers, where a cluster's center is the mean of all observation points in that cluster. As a heuristic for the k-means problem, the k-means algorithm is popular in practice for its excellent convergence speed. It can be described as follows: given an initial grouping, alternate an assignment step (assign each observation point to its nearest mean) and an update step (compute the new mean of each cluster) until convergence to some solution. The algorithm is usually considered to converge almost linearly, but its drawbacks are also clear: it cannot guarantee a global optimum, and the quality of its result depends heavily on the choice of initial solution. Scholars have therefore proposed various initialization methods to improve the quality of the k-means algorithm. This survey selects and lists initialization methods for the k-means algorithm for the reader's reference.
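Among the best-known initialization methods such a survey covers is k-means++ seeding; a minimal sketch:

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding: the first centre is chosen uniformly at random,
    and each subsequent centre is drawn with probability proportional to
    its squared distance to the nearest centre chosen so far."""
    rng = random.Random(seed)
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [rng.choice(points)]
    while len(centers) < k:
        dists = [min(d2(p, c) for c in centers) for p in points]
        r = rng.random() * sum(dists)
        acc = 0.0
        for p, w in zip(points, dists):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Spreading the seeds by squared distance makes it very likely that each true cluster receives a centre, which is exactly the weakness of uniform random initialization that these methods target.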