Full-text access type | Articles
Paid full text | 271
Free | 38
Free (domestic) | 2
Subject classification | Articles
Chemistry | 8
Mechanics | 2
General | 2
Mathematics | 41
Physics | 13
Radio | 245
Publication year | Articles
2024 | 1
2023 | 1
2022 | 13
2021 | 15
2020 | 18
2019 | 6
2018 | 6
2017 | 14
2016 | 12
2015 | 5
2014 | 14
2013 | 19
2012 | 9
2011 | 17
2010 | 13
2009 | 12
2008 | 12
2007 | 22
2006 | 14
2005 | 17
2004 | 13
2003 | 12
2002 | 15
2001 | 13
2000 | 4
1999 | 3
1998 | 3
1997 | 3
1996 | 2
1994 | 2
1993 | 1
311 results in total (search time: 31 ms)
311.
In the field of face anti-spoofing (FAS), two vital issues are how to extract representative features that distinguish real from spoof faces, and how to train the corresponding deep networks. In this paper, we propose a simple but effective end-to-end FAS model based on an innovative texture extractor and a depth auxiliary supervision mechanism. In the feature extraction stage, we first design residual gradient convolutions based on redesigned gradient operators, which extract fine-grained texture features. Texture features are extracted at multiple scales by reasonably dividing the texture differences between live and spoof faces into three levels. We then construct a multiscale residual gradient attention (MRGA) module to obtain representative texture features from the multi-level texture features. By combining the proposed MRGA feature extractor with an existing vision transformer (ViT), we build MRGA-ViT, which generates the related semantics and produces the final classification results. In the training stage, we also propose a local depth auxiliary supervision based on a novel adjacent depth loss, which, unlike the traditional depth loss, adequately exploits the correlation information of adjacent pixels. The proposed MRGA-ViT model achieves competitive generalization and stability: the ACER (%) values of intra-dataset testing on the OULU-NPU database are 1.8, 2.6, 1.6 ± 1.2, and 1.9 ± 2.7 on the four protocols, respectively; the AUC (%) of cross-type testing reaches 99.45 ± 0.57; and the ACER (%) values of cross-dataset testing are 28.1 and 36.7, respectively. Experimental results show that the proposed model is competitive with other state-of-the-art works in generalization and stability.
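The abstract names two concrete components: residual gradient convolutions built from gradient operators, and an adjacent depth loss that exploits neighbouring-pixel correlation. Below is a minimal PyTorch sketch of one plausible reading of each; the operator design, the names `ResidualGradientConv` and `adjacent_depth_loss`, and the equal loss weighting are all assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch only; the paper's exact operators and loss are not
# given in the abstract, so kernel choices and weightings are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualGradientConv(nn.Module):
    """Fixed gradient (Sobel-style) filters plus a learnable residual path.

    Assumption: a 'residual gradient convolution' combines hand-designed
    gradient operators with a learned 1x1 residual branch.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One fixed (gx, gy) kernel pair per input channel (depthwise).
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(in_ch, 1, 1, 1)
        self.register_buffer("kernel", kernel)   # (2*in_ch, 1, 3, 3)
        self.in_ch = in_ch
        self.proj = nn.Conv2d(2 * in_ch, out_ch, kernel_size=1)
        self.res = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        # Depthwise gradient responses, then project and add a residual path.
        g = F.conv2d(x, self.kernel, padding=1, groups=self.in_ch)
        return self.proj(g) + self.res(x)

def adjacent_depth_loss(pred, target):
    """L1 depth loss plus L1 terms on adjacent-pixel differences.

    Assumption: 'adjacent depth loss' is read as penalising mismatch of
    horizontal/vertical neighbour differences in the pseudo-depth maps.
    pred, target: (B, 1, H, W) tensors.
    """
    base = F.l1_loss(pred, target)
    dx = F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                   target[..., :, 1:] - target[..., :, :-1])
    dy = F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                   target[..., 1:, :] - target[..., :-1, :])
    return base + dx + dy

if __name__ == "__main__":
    x = torch.randn(2, 3, 32, 32)
    layer = ResidualGradientConv(3, 8)
    print(layer(x).shape)                         # torch.Size([2, 8, 32, 32])
    d_pred, d_true = torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32)
    print(adjacent_depth_loss(d_pred, d_true).item())
```

Sobel kernels stand in for the paper's redesigned gradient operators, and the neighbour-difference terms stand in for the adjacent-pixel correlation; the MRGA attention module and the ViT classification head described in the abstract are omitted here.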