Similar Articles (8 results)
1.
宋宇翔  胡伟 《电视技术》2013,37(13):42-44,52
Locally linear embedding (LLE) is an effective nonlinear dimensionality reduction method that preserves, after reduction, the same topological relationships the data had in the original space. However, it has seen limited use in dimensionality reduction, visualization, and data classification. To address this, a new and effective method for dimensionality reduction and classification is proposed: graph embedding based on the maximum margin criterion. The method first builds a nearest-neighbor graph to aggregate the nearest-neighbor samples of each data point, while maximizing the margin between classes so that samples of different classes remain well separated, thereby improving classification. Finally, the effectiveness of the method is verified on the ORL and Yale face databases.
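The maximum margin criterion at the heart of this method seeks a projection that maximizes between-class scatter while minimizing within-class scatter, i.e., tr(W^T (S_b - S_w) W). A minimal NumPy sketch of that criterion (omitting the nearest-neighbor graph weighting described in the abstract; all names are illustrative) might look like:

```python
import numpy as np

def mmc_projection(X, y, n_components=2):
    """Projection maximizing the maximum margin criterion tr(W^T (Sb - Sw) W).

    Minimal sketch (omits the nearest-neighbor graph weighting of the paper):
    X is (n_samples, n_features), y holds integer class labels.
    """
    y = np.asarray(y)
    mean_all = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T            # between-class scatter
        Sw += (Xc - mean_c).T @ (Xc - mean_c)    # within-class scatter
    eigvals, eigvecs = np.linalg.eigh(Sb - Sw)   # symmetric, so eigh is safe
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ W, W
```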

2.
Recent advances in unsupervised domain adaptation mainly focus on learning shared representations by aligning global statistics, e.g., with the Maximum Mean Discrepancy (MMD), which matches the mean statistics across domains. The lack of class information, however, may lead to partial alignment (or even misalignment) and poor generalization performance. For robust domain alignment, we argue that the similarities across different features in the source domain should be consistent with those in the target domain. Based on this assumption, we propose a new domain discrepancy metric, Self-similarity Consistency (SSC), which enforces that the pairwise relationships between different features remain consistent across domains. Gram matrix matching and Correlation Alignment are shown to be special cases, and sub-optimal measures, of the proposed SSC. Furthermore, we mitigate the side effects of partial alignment and misalignment by incorporating the discriminative information of the deep representations. Specifically, a simple yet effective feature norm constraint is exploited to enlarge the discrepancy between inter-class samples. This relaxes the requirement of strict alignment during adaptation, thereby improving adaptation performance significantly. Extensive experiments on visual domain adaptation tasks demonstrate the effectiveness of the proposed SSC metric and feature discrimination approach.
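As a rough illustration of the SSC idea, the sketch below computes a channel-wise self-similarity matrix for a batch of deep features from each domain and penalizes the difference between the two matrices. This is a hedged reconstruction from the abstract only; the paper's exact normalization, weighting, and combination with the feature norm constraint may differ.

```python
import torch
import torch.nn.functional as F

def self_similarity(feats):
    """Pairwise similarity between feature dimensions of a batch (Gram-style)."""
    f = feats - feats.mean(dim=0, keepdim=True)   # center each dimension
    f = F.normalize(f, dim=0)                     # unit-norm each dimension
    return f.t() @ f                              # (dim, dim) self-similarity matrix

def ssc_loss(source_feats, target_feats):
    """Penalize the discrepancy between source and target self-similarity
    matrices, per the SSC idea (a sketch; the paper's exact form may differ)."""
    return (self_similarity(source_feats) - self_similarity(target_feats)).pow(2).mean()
```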

3.
Existing unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) often employ clustering to assign pseudo labels to unlabeled target-domain samples. However, it is difficult to assign accurate pseudo labels to unlabeled samples during clustering. To solve this problem, we propose a novel mutual tri-training network, termed MTNet, for UDA person re-ID. MTNet avoids noisy labels and enhances the complementarity of multiple branches by collaboratively training three different branch networks. Specifically, high-confidence pseudo labels are used to update each network branch according to the joint decisions of the other two branches. Moreover, inspired by self-paced learning, we employ a sample filtering scheme that feeds unlabeled samples into the network from easy to hard, so as to avoid being trapped in a local optimum. Extensive experiments show that the proposed method achieves competitive performance compared with state-of-the-art person re-ID methods.
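A simplified sketch of the branch-update rule described above: a branch is trained on an unlabeled sample only when the other two branches agree on its class and both are confident. The threshold and the exact confidence rule here are assumptions for illustration.

```python
import torch

def pseudo_labels_for_branch(logits_a, logits_b, conf_thresh=0.8):
    """Pseudo labels for one branch from the joint decision of the other two.

    Simplified rule: keep an unlabeled sample only when branches A and B
    predict the same class and both exceed a confidence threshold
    (the threshold value is an assumption for illustration).
    """
    prob_a, pred_a = torch.softmax(logits_a, dim=1).max(dim=1)
    prob_b, pred_b = torch.softmax(logits_b, dim=1).max(dim=1)
    keep = (pred_a == pred_b) & (prob_a > conf_thresh) & (prob_b > conf_thresh)
    return pred_a[keep], keep    # high-confidence labels and the sample mask
```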

4.
张玉华 《光电子.激光》2009,20(10):1361-1364
A feature extraction algorithm named 2DDM, based on the discrete cosine transform (DCT) and the two-dimensional maximum margin criterion (2DMMC), is proposed. It is proven that 2DMMC can be applied directly in the DCT domain, and that classification with a Euclidean distance measure yields exactly the same results as in the spatial domain. The 2DMMC method can therefore be applied directly to DCT-compressed JPEG images. Experimental results on the ORL and Yale face databases show that, in the spatial domain, 2DMMC achieves a higher recognition rate than 2DPCA and 2DLDA, that 2DDM in turn outperforms 2DMMC, and that 2DDM is less time-consuming than 2DMMC.
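A rough sketch of the pipeline implied by the abstract: take the 2-D DCT of each image and compute the 2DMMC image-scatter difference G_b - G_w directly on the DCT coefficients, whose leading eigenvectors give the projection axes. This is an illustrative reconstruction, not the paper's exact algorithm.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(img):
    """2-D type-II DCT with orthonormal scaling, as used on JPEG-style data."""
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_2dmmc_axes(images, labels):
    """Projection axes from the 2DMMC image-scatter difference Gb - Gw,
    computed directly on DCT coefficients (illustrative sketch only)."""
    labels = np.asarray(labels)
    X = np.array([dct2(im) for im in images])       # (n, h, w) DCT coefficients
    mean_all = X.mean(axis=0)
    d = X.shape[2]
    Gb = np.zeros((d, d))
    Gw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mean_c = Xc.mean(axis=0)
        Gb += len(Xc) * (mean_c - mean_all).T @ (mean_c - mean_all)   # between-class
        Gw += sum((im - mean_c).T @ (im - mean_c) for im in Xc)       # within-class
    eigvals, eigvecs = np.linalg.eigh(Gb - Gw)
    return eigvecs[:, np.argsort(eigvals)[::-1]]    # columns sorted by eigenvalue
```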

5.
In this paper, a manifold learning based method named local maximal margin discriminant embedding (LMMDE) is developed for feature extraction. Like other manifold learning based approaches, the proposed LMMDE preserves locality. In addition, LMMDE takes into account the intra-class compactness and inter-class separability of samples lying on each manifold. More concretely, for each data point, it pulls neighboring data points with the same class label as close as possible, while simultaneously pushing neighboring data points with different class labels as far away as possible, under the constraint of locality preservation. Compared with most up-to-date manifold learning based methods, this strategy benefits pattern classification in two ways: on the one hand, the local structure of each manifold is still kept in the embedding space; on the other hand, the discriminant information in each manifold can be exploited. Experimental results on the ORL, Yale and FERET face databases show the effectiveness of the proposed method.
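The pull/push behavior described above can be encoded as two neighborhood graphs, one over same-class neighbors (to be pulled together) and one over different-class neighbors (to be pushed apart). The sketch below builds such graphs with simple 0/1 weights; the paper's actual weighting scheme and the subsequent eigen-decomposition are omitted.

```python
import numpy as np

def lmmde_graphs(X, y, k=5):
    """Within-class and between-class k-NN graphs for an LMMDE-style embedding.

    Illustrative 0/1 weights: W_w marks same-class neighbors (pull together),
    W_b marks different-class neighbors (push apart).
    """
    y = np.asarray(y)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)               # exclude self-neighbors
    W_w = np.zeros((n, n))
    W_b = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:
            if y[i] == y[j]:
                W_w[i, j] = W_w[j, i] = 1.0
            else:
                W_b[i, j] = W_b[j, i] = 1.0
    return W_w, W_b
```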

6.
Convolutional neural networks (CNN) have achieved outstanding face recognition (FR) performance with increasingly large-scale face datasets. As face datasets grow, noisy data inevitably increases, which makes data cleaning difficult. In this paper, the probability that a sample is noisy is determined from the cosine similarity (cos θ) between the normalized class center and the face feature vector in margin-based loss functions. Based on this finding, we propose a two-step learning method integrated into the loss function. The newly proposed directional margin loss function combines the noise probability with the label as supervision information. Experiments show that our method tolerates noisy data and achieves high FR accuracy even when the training dataset contains more than 30% noise. Our approach also achieves 79.33% on MegaFace Challenge 1 using a noisy training dataset.
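As an illustration of the cos θ based noise estimate, the sketch below computes the cosine between each normalized feature and its labeled class-center weight vector and maps it to a noise probability in [0, 1]. The mapping is an assumption for illustration; the paper's exact formula and its integration into the directional margin loss may differ.

```python
import torch
import torch.nn.functional as F

def noise_probability(features, weight, labels):
    """Map cos(theta) between a face feature and its labeled class center
    to a noise probability in [0, 1] (illustrative mapping, not the paper's
    exact formula).

    features: (N, d) deep features; weight: (C, d) class-center weights of the
    margin-based loss; labels: (N,) integer class labels.
    """
    cos = F.linear(F.normalize(features), F.normalize(weight))   # (N, C) cosines
    cos_y = cos.gather(1, labels.view(-1, 1)).squeeze(1)         # cosine to own class
    return (1.0 - cos_y) / 2.0        # small cosine -> more likely a noisy sample
```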

7.
A new machine learning methodology, called successive subspace learning (SSL), is introduced in this work. SSL contains four key ingredients: (1) successive near-to-far neighborhood expansion; (2) unsupervised dimension reduction via subspace approximation; (3) supervised dimension reduction via label-assisted regression (LAG); and (4) feature concatenation and decision making. An image-based object classification method, called PixelHop, is proposed to illustrate the SSL design. Experimental results show that the PixelHop method outperforms a classic CNN model of similar model complexity on three benchmark datasets (MNIST, Fashion-MNIST and CIFAR-10). Although SSL and deep learning (DL) share some high-level concepts, they are fundamentally different in model formulation, training process, and training complexity. An extensive comparison of SSL and DL is given to provide further insight into the potential of SSL.
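A loose sketch of the first two SSL ingredients (neighborhood expansion and subspace approximation): collect local patches around each pixel and reduce them with PCA. PixelHop itself uses the Saab transform and the LAG unit, which are not reproduced here; this is only a structural illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def pixelhop_like_features(images, patch=3, n_components=8):
    """One 'hop': gather local neighborhoods and reduce them by PCA.

    A loose sketch of SSL's neighborhood expansion + subspace approximation;
    PixelHop itself uses the Saab transform rather than plain PCA.
    images: array of shape (n, h, w) holding grayscale images.
    """
    n = images.shape[0]
    # (n, h-patch+1, w-patch+1, patch, patch) view of all local neighborhoods
    windows = np.lib.stride_tricks.sliding_window_view(images, (patch, patch), axis=(1, 2))
    flat = windows.reshape(n, -1, patch * patch)             # (n, positions, patch*patch)
    pca = PCA(n_components=n_components).fit(flat.reshape(-1, patch * patch))
    reduced = pca.transform(flat.reshape(-1, patch * patch))
    return reduced.reshape(n, -1)                            # concatenated per-image features

# Downstream, the per-image features can be fed to any classical classifier,
# e.g. sklearn's RandomForestClassifier, to complete the decision stage.
```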

8.
With the continuous development of deep learning, neural networks have made great progress in license plate recognition (LPR). Nevertheless, there is still room to improve recognition performance on the low-resolution and relatively blurry images found in remote surveillance scenarios. When it is difficult to improve the recognition algorithm itself, we turn to super-resolution (SR) to improve the quality of license plate images and thereby provide clearer input for the subsequent recognition stage. In this paper, we propose an automatic super-resolution license plate recognition (SRLPR) network which consists of four parts: license plate detection, character detection, single-character super-resolution, and recognition. In the training stage, the LP detection model is first trained on its own, and its detection results are then used to train the three subsequent modules in turn. At test time, the network automatically outputs the LP number for each input image. We also collect an applicable and challenging LPR dataset, called SRLP, gathered from real remote traffic surveillance. The experimental results demonstrate that our method produces SR images of better overall quality and achieves higher recognition accuracy than state-of-the-art methods. The SRLP dataset and the code for training and testing the SRLPR network are available at https://pan.baidu.com/s/1vnhRa-c-dBj6jlfBZV5w4g.
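The four-stage inference flow described above can be sketched as a simple composition of the four models; all callables here (lp_detector, char_detector, sr_model, recognizer) are hypothetical placeholders, not the released SRLPR code.

```python
def recognize_plate(image, lp_detector, char_detector, sr_model, recognizer):
    """Structural sketch of the SRLPR-style inference flow.

    All four callables are hypothetical placeholders supplied by the user:
    lp_detector crops the plate, char_detector crops single characters,
    sr_model super-resolves one character image, recognizer returns its label.
    """
    plate = lp_detector(image)                        # 1. license plate detection
    chars = char_detector(plate)                      # 2. character detection
    chars_sr = [sr_model(c) for c in chars]           # 3. single-character super-resolution
    return "".join(recognizer(c) for c in chars_sr)   # 4. recognition of the full LP number
```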
