Similar literature (20 results)
1.
In this paper, we propose an Instance Transfer Boosting (ITB) framework for object tracking. The framework transfers prior knowledge from the first frame and frame t-2, treated as source instances, to frame t-1, treated as the target instance. These instances are used to train the online classifier employed in tracking-by-detection for frame t. By grounding the tracking task in the current frame on knowledge transferred from the first frame and the previous two frames, the method yields a more robust tracker for distinguishing the object from the background. Experimental results on several public video sequences demonstrate the promising performance of the proposed framework in both tracking accuracy and stability.
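As an illustration of the instance-transfer idea behind such a framework, the sketch below shows a generic TrAdaBoost-style weight update in Python: source instances (first frame, frame t-2) that the weak learner keeps misclassifying are down-weighted, while misclassified target instances (frame t-1) are up-weighted. The weak learner, the number of rounds and the weighting scheme are generic choices, not the paper's exact ITB formulation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def transfer_boost(X_src, y_src, X_tgt, y_tgt, n_rounds=10):
    """TrAdaBoost-style instance transfer: source samples that keep being
    misclassified are down-weighted, target samples are re-weighted as in
    AdaBoost.  Illustrative sketch only, not the paper's ITB."""
    n_src, n_tgt = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(n_src + n_tgt) / (n_src + n_tgt)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = (h.predict(X) != y).astype(float)
        # weighted error measured on the target (frame t-1) instances only
        err = np.sum(w[n_src:] * miss[n_src:]) / np.sum(w[n_src:])
        err = np.clip(err, 1e-6, 0.499)
        beta_t = err / (1.0 - err)
        w[:n_src] *= beta_src ** miss[:n_src]      # shrink weights of bad source samples
        w[n_src:] *= beta_t ** (-miss[n_src:])     # boost weights of hard target samples
        w /= w.sum()
        learners.append(h)
        betas.append(beta_t)
    return learners, betas
```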

2.
《Physica A》2006,371(2):795-813
It has recently become clear that the classical random graph model used to represent real-world complex networks does not capture their main properties. Since then, various attempts have been made to provide accurate models. We study here a model that meets the following challenges: it produces graphs with the three main desired properties (clustering, degree distribution, average distance), it is based on real-world observations, and it is simple enough for its main properties to be proved. The model consists of sampling a random bipartite graph with a prescribed degree distribution. Indeed, we show that any complex network may be viewed as a bipartite graph with specific characteristics, and that its main properties may be viewed as consequences of this underlying structure. We also propose a growing model based on this observation.
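A minimal sketch of the described pipeline in Python with networkx: sample a random bipartite graph with prescribed degree sequences, project it onto one side, and inspect the clustering and degree sequence of the projection. The degree sequences below are invented for illustration.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Prescribed degree sequences for the two sides: "bottom" nodes play the role
# of the network's vertices, "top" nodes the role of groups.  The sums must
# match; these particular values are just an example.
top_degrees = [4, 4, 2, 2]                 # 12 half-edges
bottom_degrees = [3, 2, 2, 2, 1, 1, 1]     # 12 half-edges

B = bipartite.configuration_model(top_degrees, bottom_degrees, seed=42)
top_nodes = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
B = nx.Graph(B)                            # collapse parallel edges
bottom_nodes = set(B) - top_nodes

# One-mode projection onto the bottom nodes = the modelled complex network
G = bipartite.projected_graph(B, bottom_nodes)
print("average clustering:", nx.average_clustering(G))
print("degree sequence:", sorted(d for _, d in G.degree()))
```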

3.
In current keypoint-based image matching methods, not all keypoints can be reliably matched because of noise, illumination changes, and image distortion. In this paper, we propose a novel method based on Delaunay triangulation to detect and remove possible mismatches. Given keypoints in two images that have already been matched by a detector, the proposed method removes mismatched point pairs in four steps: first, triangulate the keypoints in the reference image, producing a graph whose edges connect the keypoints; second, draw a graph in the test image by connecting the corresponding points in the same way as in the reference image; third, detect abnormal edges in the test image using specific constraints; fourth, detect and remove mismatched point pairs based on the abnormal edges. The experimental results show that the method detects mismatches accurately and improves the robustness of current matching algorithms.
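The first two steps and a simplified abnormal-edge test can be sketched with scipy; the length-ratio criterion below is a stand-in for the paper's constraints, which the abstract does not spell out.

```python
import numpy as np
from scipy.spatial import Delaunay

def suspicious_edges(ref_pts, test_pts, ratio_tol=2.0):
    """ref_pts[i] and test_pts[i] are a matched keypoint pair (N x 2 arrays).
    Triangulate the reference points, copy the edge set to the test image and
    flag edges whose length ratio deviates strongly from the median
    (a simple stand-in for the paper's abnormal-edge constraints)."""
    tri = Delaunay(ref_pts)
    edges = set()
    for simplex in tri.simplices:               # each simplex is a triangle (i, j, k)
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((simplex[a], simplex[b]))
            edges.add((i, j))
    edges = np.array(sorted(edges))
    ref_len = np.linalg.norm(ref_pts[edges[:, 0]] - ref_pts[edges[:, 1]], axis=1)
    test_len = np.linalg.norm(test_pts[edges[:, 0]] - test_pts[edges[:, 1]], axis=1)
    ratio = test_len / np.maximum(ref_len, 1e-9)
    med = np.median(ratio)
    abnormal = (ratio > ratio_tol * med) | (ratio < med / ratio_tol)
    return edges[abnormal]                      # candidate edges pointing at mismatches
```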

4.
Construction of graph-based approximations for multi-dimensional data point clouds is widely used in a variety of areas. Notable examples of applications of such approximators are cellular trajectory inference in single-cell data analysis, analysis of clinical trajectories from synchronic datasets, and skeletonization of images. Several methods have been proposed to construct such approximating graphs, some based on computation of minimum spanning trees and some based on principal graphs generalizing principal curves. In this article we propose a methodology to compare and benchmark these two graph-based data approximation approaches, as well as to define their hyperparameters. The main idea is to avoid comparing graphs directly, but instead first to induce a clustering of the data point cloud from the graph approximation and then to use well-established methods to compare and score the data cloud partitioning induced by the graphs. In particular, mutual information-based approaches prove to be useful in this context. The induced clustering is based on decomposing a graph into non-branching segments and then clustering the data point cloud by the nearest segment. Such a method allows efficient comparison of graph-based data approximations of arbitrary topology and complexity. The method is implemented in Python using the standard scikit-learn library, which provides high speed and efficiency. As a demonstration of the methodology we analyse and compare graph-based data approximation methods using synthetic as well as real-life single-cell datasets.
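A condensed sketch of the scoring idea using scikit-learn, with the simplification that points are assigned to the segment of their nearest graph node (the decomposition into non-branching segments is assumed to be done upstream):

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, pairwise_distances_argmin

def induced_labels(points, node_positions, node_segment_id):
    """Assign each data point to the segment of its nearest graph node.
    node_positions is (k, d); node_segment_id maps node index -> segment id."""
    nearest = pairwise_distances_argmin(points, node_positions)
    return np.asarray(node_segment_id)[nearest]

def compare_approximations(points, graph_a, graph_b):
    """graph_* = (node_positions, node_segment_id).  Returns the adjusted
    mutual information between the two induced data partitions
    (1.0 = identical partitions)."""
    labels_a = induced_labels(points, *graph_a)
    labels_b = induced_labels(points, *graph_b)
    return adjusted_mutual_info_score(labels_a, labels_b)
```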

5.
Graph clustering is an essential part of many methods, and its accuracy therefore has a significant effect on many applications. In addition, the exponential growth of real-world graphs such as social networks, biological networks and electrical circuits demands clustering algorithms with nearly-linear time and space complexity. In this paper we propose Personalized PageRank Clustering (PPC), which employs the inherent cluster-exploring property of random walks to reveal the clusters of a given graph. We combine random walks and modularity to reveal the clusters of a graph precisely and efficiently. PPC is a top-down algorithm, so it can reveal the inherent clusters of a graph more accurately than other nearly-linear approaches, which are mainly bottom-up. It also produces a hierarchy of clusters that is useful in many applications. PPC has linear time and space complexity and outperforms most of the available clustering algorithms on many datasets. Furthermore, its top-down approach makes it a flexible solution for clustering problems with different requirements.
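The random-walk exploration step can be illustrated with networkx's built-in personalized PageRank; the sketch below scores nodes from a seed and takes a sweep cut by conductance, which conveys the idea but omits PPC's modularity guidance and hierarchical top-down machinery.

```python
import networkx as nx

def ppr_sweep_cluster(G, seed, alpha=0.85):
    """Rank nodes by personalized PageRank from `seed`, then sweep over the
    ranking and return the prefix with the lowest conductance."""
    ppr = nx.pagerank(G, alpha=alpha, personalization={seed: 1.0})
    # sweep in order of probability normalised by degree
    order = sorted(G.nodes, key=lambda v: ppr[v] / max(G.degree(v), 1), reverse=True)
    best_set, best_cond = None, float("inf")
    current = set()
    for v in order[:-1]:                 # keep at least one node outside the cut
        current.add(v)
        cond = nx.conductance(G, current)
        if cond < best_cond:
            best_cond, best_set = cond, set(current)
    return best_set, best_cond

# Example: the Zachary karate-club graph, seeded at node 0
G = nx.karate_club_graph()
cluster, cond = ppr_sweep_cluster(G, seed=0)
print(sorted(cluster), cond)
```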

6.
The matching problem plays a basic role in combinatorial optimization and in statistical mechanics. In its stochastic variants, optimization decisions have to be taken given only some probabilistic information about the instance. While the deterministic case can be solved in polynomial time, stochastic variants are worst-case intractable. We propose an efficient method for solving stochastic matching problems that combines features of the survey propagation equations and of the cavity method. We test it on random bipartite graphs, for which we analyze the phase diagram and compare the results with exact bounds. Our approach is shown numerically to be effective over the full range of parameters and to outperform state-of-the-art methods. Finally, we discuss how the method can be generalized to other problems of optimization under uncertainty.
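For orientation, the deterministic baseline mentioned above (solvable in polynomial time) takes only a few lines with scipy; the survey-propagation/cavity message passing itself is beyond this sketch, and the random instance below is purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 8
# Random bipartite instance: cost[i, j] = cost of matching left node i to right node j
cost = rng.exponential(scale=1.0, size=(n, n))

# Exact deterministic optimum (Hungarian algorithm) - the reference point for
# stochastic variants, where costs are only known in distribution.
rows, cols = linear_sum_assignment(cost)
print("optimal matching:", list(zip(rows, cols)))
print("optimal cost:", cost[rows, cols].sum())
```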

7.
Most existing methods for detecting community overlap cannot balance efficiency and accuracy on large and densely overlapping networks. To quickly identify overlapping communities in such networks, we propose a new method that uses belief propagation and conflict (PCB) to occupy communities. We first identify triangles with maximal clustering coefficients as seed nodes and sow a new type of belief in the seed nodes. The beliefs then explore their territory by occupying nodes with high assent ability. The beliefs propagate their strength along the graph to consolidate their territory, and conflict with each other when they encounter the same node simultaneously. Finally, node membership is judged from the belief vectors. The time complexity of PCB is nearly linear and its space complexity is linear. The algorithm was tested in extensive experiments on three real-world social networks and three computer-generated artificial graphs. The experimental results show that PCB is very fast and highly reliable, and that it compares favourably with three recently proposed overlapping community detection algorithms.
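The seeding step can be sketched with networkx: enumerate triangles and rank them by the local clustering coefficients of their vertices. The belief propagation and conflict phases are not reproduced, and the ranking criterion is a plausible reading of the abstract rather than the paper's exact rule.

```python
import networkx as nx
from itertools import combinations

def triangle_seeds(G, n_seeds=5):
    """Return up to n_seeds triangles ranked by the summed local clustering
    coefficient of their vertices (the seeding idea, simplified)."""
    cc = nx.clustering(G)
    triangles = set()
    for u in G:
        for v, w in combinations(G[u], 2):
            if G.has_edge(v, w):
                triangles.add(tuple(sorted((u, v, w))))
    ranked = sorted(triangles, key=lambda t: sum(cc[x] for x in t), reverse=True)
    return ranked[:n_seeds]

G = nx.karate_club_graph()
for t in triangle_seeds(G, n_seeds=3):
    print(t)
```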

8.
江滔  马泳  黄珺  王贺松  樊凡 《应用光学》2022,43(5):921-928+1014
Structure from motion (SfM) is a reconstruction algorithm that recovers camera poses and the three-dimensional structure of the target by computing the matching relations between images. This paper proposes an incremental structure-from-motion algorithm based on a weighted view-connection graph. First, a weighted connection graph is built from the stereo-matching quality of image pairs, quantifying the pairwise matching relations between images. Second, based on the edge weights of the weighted connection graph, a degree-aware search selects the best initial seed pair. Finally, the candidate set for the next best image is constructed according to the connectivity of the already reconstructed vertices, and a next-best-image evaluation algorithm based on vertex degree and feature-point distribution is designed. Experimental results on public datasets show that the proposed algorithm outperforms existing state-of-the-art structure-from-motion algorithms in reconstruction quality, camera calibration rate, and the number of generated point-cloud points; compared with the baseline algorithms, the average reconstruction time on different datasets is reduced by at least 19%, and the point-cloud generation rate is improved by at least 21%.
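A minimal sketch of the first step in Python with networkx: build a weighted view-connection graph from pairwise match quality and pick a degree-aware initial seed pair. The match scores and the scoring formula are placeholders, not the paper's criterion.

```python
import networkx as nx

# match_quality[(i, j)] = e.g. number of verified inlier matches between images i and j.
# These numbers are made up for illustration.
match_quality = {(0, 1): 180, (0, 2): 95, (1, 2): 160, (2, 3): 140, (1, 3): 40}

G = nx.Graph()
for (i, j), q in match_quality.items():
    G.add_edge(i, j, weight=q)

def seed_pair_score(u, v):
    """Degree-aware score: prefer well-matched pairs whose images are also
    well connected to the rest of the view graph (placeholder formula)."""
    return G[u][v]["weight"] * min(G.degree(u), G.degree(v))

seed = max(G.edges, key=lambda e: seed_pair_score(*e))
print("initial seed pair:", seed)
```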

9.
The problem of extracting meaningful data through graph analysis spans a range of different fields, such as social networks, knowledge graphs, citation networks, the World Wide Web, and so on. As increasing amounts of structured data become available, the importance of being able to effectively mine and learn from such data continues to grow. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data, in which only a very small part of the data is labeled as the training set. We first construct a similarity matrix by calculating the similarity of the original features between all adjacent node pairs, and then generate a set of feature extractors that use the similarity matrix to perform multi-scale feature propagation on graphs. The output of multi-scale feature propagation is finally aggregated using the mean-pooling operation. Our method aims to improve the model's representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows the competitive performance of our method compared to a variety of popular architectures.
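A numpy sketch of the core propagation idea, assuming fixed (non-learned) extractors: weight edges by the cosine similarity of the endpoints' original features, propagate at several scales, and mean-pool the scales. The number of scales and the self-loop handling are arbitrary choices here.

```python
import numpy as np

def multiscale_propagate(A, X, scales=(1, 2, 3)):
    """A: (n, n) 0/1 adjacency, X: (n, d) node features.
    Weight each edge by the cosine similarity of its endpoints' original
    features, row-normalise, propagate at several powers, mean-pool."""
    norm = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    cos = (X / norm) @ (X / norm).T          # cosine similarity of all pairs
    S = A * np.maximum(cos, 0.0)             # keep similarity on edges only
    np.fill_diagonal(S, 1.0)                 # self-loop so a node keeps its own signal
    S = S / S.sum(axis=1, keepdims=True)     # row-stochastic propagation matrix
    outputs = []
    H = X
    for k in range(1, max(scales) + 1):
        H = S @ H                            # one more hop of propagation
        if k in scales:
            outputs.append(H)
    return np.mean(outputs, axis=0)          # mean-pooling over scales

# Tiny example: 4 nodes on a path, 3-dimensional random features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(4, 3))
print(multiscale_propagate(A, X).shape)      # -> (4, 3)
```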

10.
11.
We propose a metric for the vulnerability of labeled graphs that has the following two properties: (1) when the labeled graph is considered as an unlabeled one, the metric reduces to the corresponding metric for an unlabeled graph; and (2) the metric has the same value for differently labeled fully connected graphs, reflecting the notion that any arbitrarily labeled fully connected topology is as vulnerable as any other. A vulnerability analysis of two real-world networks, the power grid of the European Union and an autonomous system network, has been performed. The networks have been treated as graphs with node labels. The analysis consists of calculating characteristic path lengths between labels of nodes and determining the largest connected cluster size under two node and edge attack strategies. The results are more informative about the networks' vulnerability than those obtained when the networks are modeled with unlabeled graphs.
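The two measurements used in the analysis can be sketched with networkx: largest connected cluster size under a degree-targeted attack, and shortest-path lengths grouped by the labels of the endpoints. The attack strategy shown is one of several possibilities, and the node attribute name "label" is an assumption.

```python
import networkx as nx
from itertools import combinations
from statistics import mean

def attack_by_degree(G, steps=5):
    """Remove the highest-degree node `steps` times, reporting the relative
    size of the largest connected cluster after each removal."""
    H = G.copy()
    n0 = H.number_of_nodes()
    sizes = []
    for _ in range(steps):
        target = max(H.degree, key=lambda nd: nd[1])[0]
        H.remove_node(target)
        sizes.append(len(max(nx.connected_components(H), key=len)) / n0)
    return sizes

def label_path_lengths(G, label="label"):
    """Mean shortest-path length between every pair of node labels."""
    spl = dict(nx.all_pairs_shortest_path_length(G))
    out = {}
    for u, v in combinations(G.nodes, 2):
        if v in spl[u]:
            key = tuple(sorted((G.nodes[u][label], G.nodes[v][label])))
            out.setdefault(key, []).append(spl[u][v])
    return {k: mean(d) for k, d in out.items()}
```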

12.
张百达  吴俊杰  唐玉华  周静 《中国物理 B》2011,20(11):118903-118903
Many real-world networks are found to be scale-free. However, graph partitioning, a key enabling technique for parallel computing, performs poorly when scale-free graphs are provided. The reason is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the contrast with traditional bottom-up partitioning. A new multilevel partitioning algorithm, the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation on seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches.
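A toy illustration of top-down, betweenness-driven partitioning: repeatedly delete the highest edge-betweenness edge until the graph splits in two (Girvan-Newman style). This conveys the paradigm only; the paper's multilevel coarsening and refinement are not reproduced.

```python
import networkx as nx

def betweenness_bisect(G):
    """Top-down bisection: delete the highest edge-betweenness edge until the
    graph falls apart into two components."""
    H = G.copy()
    while nx.number_connected_components(H) < 2:
        eb = nx.edge_betweenness_centrality(H)
        H.remove_edge(*max(eb, key=eb.get))
    return [set(c) for c in nx.connected_components(H)]

G = nx.barbell_graph(6, 2)          # two dense cliques joined by a short path
part_a, part_b = betweenness_bisect(G)
print(sorted(part_a), sorted(part_b))
```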

13.
Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
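For a single node pair whose edge appears at rate λ and disappears at rate ω (a two-state continuous-time Markov process), the equilibrium edge probability and the relaxation towards it follow directly; the symbols here are chosen for illustration rather than taken from the paper.

```latex
\frac{dp(t)}{dt} = \lambda\bigl(1 - p(t)\bigr) - \omega\,p(t)
\quad\Longrightarrow\quad
p_{\mathrm{eq}} = \frac{\lambda}{\lambda+\omega},
\qquad
p(t) = p_{\mathrm{eq}} + \bigl(p(0) - p_{\mathrm{eq}}\bigr)\,e^{-(\lambda+\omega)\,t}.
```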

14.
Point pattern matching is an essential step in many image processing applications. This letter investigates spectral approaches to point pattern matching and presents a spectral feature matching algorithm based on kernel partial least squares (KPLS). Given the feature points of two images, we define position similarity matrices for the reference and sensed images, and extract pattern vectors from the matrices using KPLS, which indicate the geometric distribution and the inner relationships of the feature points. Feature point matching is then done using the bipartite graph matching method. Experiments conducted on both synthetic and real-world data demonstrate the robustness and invariance of the algorithm.
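The final assignment step can be sketched with scipy, assuming the KPLS pattern vectors have already been extracted; linear_sum_assignment stands in for whichever bipartite matcher the paper uses.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def match_by_pattern_vectors(P_ref, P_sen):
    """P_ref: (n, k) pattern vectors of the reference points,
    P_sen: (n, k) pattern vectors of the sensed points.
    Returns index pairs (i, j) meaning reference point i <-> sensed point j."""
    cost = cdist(P_ref, P_sen)                    # distance between pattern vectors
    rows, cols = linear_sum_assignment(cost)      # minimum-cost bipartite matching
    return list(zip(rows.tolist(), cols.tolist()))
```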

15.
There is a wealth of information in real-world social networks. In addition to topological information, the vertices or edges of a social network often carry attributes, and many vertices overlap, belonging to several communities simultaneously. It is challenging to fully utilize this additional attribute information to detect overlapping communities. In this paper, we first propose an overlapping community detection algorithm based on an augmented attribute graph. An improved weight-adjustment strategy for attributes is embedded in the algorithm to help detect overlapping communities more accurately. Second, we enhance the algorithm to automatically determine the number of communities via a node-density-based fuzzy k-medoids process. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed algorithms can effectively detect overlapping communities with fewer parameters than the baseline methods.
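One common way to build such an augmented attribute graph is sketched below with networkx: each attribute value becomes an extra node linked to the vertices carrying it, with a tunable weight balancing topology against attributes. The weight-adjustment strategy and the fuzzy k-medoids stage are not reproduced.

```python
import networkx as nx

def augment_with_attributes(G, attrs, attr_weight=0.5):
    """G: undirected graph; attrs: dict node -> iterable of attribute values.
    Returns a new graph in which each attribute value is an extra node connected
    (with weight `attr_weight`) to every vertex that has it; original edges
    keep weight 1.0."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(G.edges, weight=1.0)
    for v, values in attrs.items():
        for a in values:
            attr_node = ("attr", a)                # namespaced so it cannot clash
            H.add_edge(v, attr_node, weight=attr_weight)
    return H

G = nx.karate_club_graph()
attrs = {v: [G.nodes[v]["club"]] for v in G}       # use the built-in 'club' label
H = augment_with_attributes(G, attrs)
print(H.number_of_nodes(), H.number_of_edges())
```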

16.
Community structure is an important feature of many real-world networks. Many methods and algorithms for identifying communities have been proposed and have attracted great attention in recent years. In this paper, we present a new approach for discovering community structure in networks. The novelty is that the algorithm uses the strength of ties to sort nodes into communities. More specifically, we use the weak-ties hypothesis to determine to which community a node belongs. The advantages of this method are its simplicity, accuracy, and low computational cost. We demonstrate the effectiveness and efficiency of our algorithm both on real-world networks and on benchmark graphs. We also show that the distribution of link strength can give a general view of the basic structural information of graphs.
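A sketch using neighbourhood overlap as the tie-strength proxy (the classic operationalisation of the weak-ties idea): each node adopts the label of the neighbour to which it is most strongly tied. The strength definition and the single-pass assignment are simplifications, not the paper's procedure.

```python
import networkx as nx

def tie_strength(G, u, v):
    """Neighbourhood overlap: shared neighbours / union of neighbours (excluding u, v)."""
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def strongest_tie_labels(G):
    """Seed each node with its own label, then let every node adopt the label
    of the neighbour it is most strongly tied to (one pass, illustrative)."""
    labels = {v: v for v in G}
    for v in G:
        best = max(G[v], key=lambda u: tie_strength(G, v, u), default=None)
        if best is not None:
            labels[v] = labels[best]
    return labels

G = nx.karate_club_graph()
print(strongest_tie_labels(G))
```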

17.
Temporal graphs     
Vassilis Kostakos 《Physica A》2009,388(6):1007-1023
We introduce the idea of temporal graphs, a representation that encodes temporal data into graphs while fully retaining the temporal information of the original data. This representation lets us explore the dynamic temporal properties of data by using existing graph algorithms (such as shortest-path), with no need for data-driven simulations. We also present a number of metrics that can be used to study and explore temporal graphs. Finally, we use temporal graphs to analyse real-world data and present the results of our analysis.
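One encoding consistent with this description (though not necessarily the paper's exact construction) is a time-expanded graph: one copy of each node per timestamp, waiting edges between consecutive copies of the same node, and contact edges at the timestamps where they occur. Ordinary shortest-path queries then answer temporal reachability questions.

```python
import networkx as nx

# contacts: (u, v, t) meaning u and v interact at time t (toy data)
contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 4), ("c", "d", 5)]
times = sorted({t for _, _, t in contacts})
nodes = sorted({x for u, v, _ in contacts for x in (u, v)})

T = nx.DiGraph()
# waiting edges: (v, t_i) -> (v, t_{i+1})
for v in nodes:
    for t_prev, t_next in zip(times, times[1:]):
        T.add_edge((v, t_prev), (v, t_next))
# contact edges at their timestamps (both directions for an undirected contact)
for u, v, t in contacts:
    T.add_edge((u, t), (v, t))
    T.add_edge((v, t), (u, t))

# earliest path from a at time 1 to d at time 5: an ordinary shortest-path query
print(nx.shortest_path(T, ("a", 1), ("d", 5)))
```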

18.
Jongkwang Kim 《Physica A》2008,387(11):2637-2652
Many papers published in recent years show that real-world graphs G(n,m) (n nodes, m edges) are more or less “complex” in the sense that different topological features deviate from those of random graphs. Here we narrow the definition of graph complexity and argue that a complex graph contains many different subgraphs. We present different measures that quantify this complexity, for instance C1e, the relative number of non-isomorphic one-edge-deleted subgraphs (i.e. deck size). However, because these subgraph measures are computationally demanding, we also study simpler complexity measures focussing on slightly different aspects of graph complexity. We consider heuristically defined “product measures”, the products of two quantities which are zero in the extreme cases of a path and a clique, and “entropy measures” quantifying the diversity of different topological features. The previously defined network/graph complexity measures Medium Articulation and Offdiagonal complexity (OdC) belong to these two classes. We study the OdC measures in some detail and compare them with our new measures. For all measures, the most complex graph has an intermediate number of edges, between the edge numbers of the minimum connected graph (m_min = n - 1, a tree) and the maximum connected graph (m_max = n(n-1)/2, the complete graph). Interestingly, for some measures this number scales exactly with the geometric mean of the extremes: m* = sqrt(m_min · m_max). All graph complexity measures are characterized with the help of different example graphs. For all measures the corresponding time complexity is given. Finally, we discuss the complexity of 33 real-world graphs of different biological, social and economic systems using the six computationally simplest measures (including OdC). The complexities of the real graphs are compared with the average complexities of two different random graph versions: completely random graphs (just fixed n, m) and rewired graphs with fixed node degrees.
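For small graphs, C1e can be computed by brute force: delete each edge in turn and count the pairwise non-isomorphic results, normalised by the number of edges. The cost of the isomorphism tests is exactly why the simpler product and entropy measures are attractive.

```python
import networkx as nx

def c1e(G):
    """Relative number of non-isomorphic one-edge-deleted subgraphs (deck size
    divided by the number of edges).  Brute force; small graphs only."""
    deck = []
    for e in G.edges:
        H = G.copy()
        H.remove_edge(*e)
        deck.append(H)
    # count isomorphism classes within the deck
    classes = []
    for H in deck:
        if not any(nx.is_isomorphic(H, C) for C in classes):
            classes.append(H)
    return len(classes) / G.number_of_edges()

print(c1e(nx.path_graph(6)))      # 3 distinct deletion outcomes / 5 edges = 0.6
print(c1e(nx.petersen_graph()))   # edge-transitive: every deletion isomorphic, 1/15
```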

19.
Lenwood S. Heath  Nidhi Parikh 《Physica A》2011,390(23-24):4577-4587
Most real-world networks exhibit a high clustering coefficient, the probability that two neighbors of a node are also neighbors of each other. We propose two algorithms, Conf and Throw, that take triangle and single-edge degree sequences as input and generate a random graph with a target clustering coefficient. We analyze them theoretically for the case of a regular graph. Conf generates a random graph with the input degree sequence and the clustering coefficient anticipated from the input. Experimental results match the anticipated clustering coefficient quite well, except for highly dense graphs, for which the experimental clustering coefficient is higher than the anticipated value. For Throw, the degree sequence and the clustering coefficient of the generated graph vary from the input. However, it maintains the expected degree distribution, and the clustering coefficient of the generated graph can also be predicted using analytical results. Experiments show that, for Throw, the results match the analytical predictions quite well. Typically, only information about the degree distribution is available. We therefore also propose an algorithm, Deg, that takes a degree sequence and a clustering coefficient as input and generates a graph with those properties. Experiments show results for Deg that are quite similar to those for Conf.
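networkx ships a closely related generator, random_clustered_graph, which also takes a joint (single-edge degree, triangle degree) sequence; the sketch below generates a graph that way and checks the realised clustering, as a reference point rather than an implementation of Conf or Throw. The joint degree sequence is an arbitrary valid example.

```python
import networkx as nx

# joint degree sequence: (independent-edge degree, triangle degree) per node;
# the edge degrees must sum to an even number and the triangle degrees to a
# multiple of 3 (these particular values are just an example)
joint_degrees = [(2, 1), (2, 1), (2, 1), (2, 2), (1, 2), (1, 2), (1, 1), (1, 2)]

G = nx.random_clustered_graph(joint_degrees, seed=7)
G = nx.Graph(G)                          # discard parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

print("transitivity:", nx.transitivity(G))
print("average clustering:", nx.average_clustering(G))
print("degree sequence:", sorted(d for _, d in G.degree()))
```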

20.
吴果林  顾长贵  邱路  杨会杰 《中国物理 B》2017,26(12):128901-128901
Projection is a widely used method in bipartite networks. However, each projection has a specific application scenario and differs in the form of mapping used for bipartite networks. In this paper, inspired by network-based information exchange dynamics, we propose a unified framework for projection. Subsequently, an information exchange rate projection based on the nature of the community structures of a network (named IERCP) is designed to detect community structures of bipartite networks. Results on synthetic and real-world networks show that the IERCP algorithm outperforms the other projection methods. This suggests that IERCP may extract more of the information hidden in bipartite networks and minimize information loss.
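For reference, the standard weighted projection that such methods are compared against is available directly in networkx; the sketch below projects a small toy bipartite graph onto one node set, counting shared neighbours as weights. The information-exchange-rate weighting of IERCP itself is not implemented here.

```python
import networkx as nx
from networkx.algorithms import bipartite

# toy bipartite network: people (0-3) connected to events ('a'-'c')
B = nx.Graph()
people, events = [0, 1, 2, 3], ["a", "b", "c"]
B.add_nodes_from(people, bipartite=0)
B.add_nodes_from(events, bipartite=1)
B.add_edges_from([(0, "a"), (1, "a"), (1, "b"), (2, "b"), (2, "c"), (3, "c")])

# weighted one-mode projection onto the people: weight = number of shared events
P = bipartite.weighted_projected_graph(B, people)
print(list(P.edges(data=True)))   # e.g. (0, 1, {'weight': 1}), (1, 2, {'weight': 1}), ...
```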
