Full-text access type
Paid full text | 15253 articles |
Free | 2178 articles |
Free (domestic) | 899 articles |
Subject classification
Chemistry | 870 articles |
Crystallography | 28 articles |
Mechanics | 751 articles |
General | 173 articles |
Mathematics | 775 articles |
Physics | 4895 articles |
Radio | 10838 articles |
Publication year
2024 | 70 articles |
2023 | 210 articles |
2022 | 396 articles |
2021 | 565 articles |
2020 | 461 articles |
2019 | 333 articles |
2018 | 321 articles |
2017 | 551 articles |
2016 | 625 articles |
2015 | 784 articles |
2014 | 1213 articles |
2013 | 1091 articles |
2012 | 1115 articles |
2011 | 1074 articles |
2010 | 834 articles |
2009 | 885 articles |
2008 | 1088 articles |
2007 | 1081 articles |
2006 | 873 articles |
2005 | 757 articles |
2004 | 667 articles |
2003 | 615 articles |
2002 | 460 articles |
2001 | 360 articles |
2000 | 327 articles |
1999 | 271 articles |
1998 | 206 articles |
1997 | 216 articles |
1996 | 162 articles |
1995 | 154 articles |
1994 | 100 articles |
1993 | 98 articles |
1992 | 80 articles |
1991 | 65 articles |
1990 | 41 articles |
1989 | 42 articles |
1988 | 28 articles |
1987 | 22 articles |
1986 | 18 articles |
1985 | 18 articles |
1984 | 7 articles |
1983 | 8 articles |
1982 | 3 articles |
1981 | 12 articles |
1980 | 3 articles |
1979 | 3 articles |
1977 | 2 articles |
1976 | 5 articles |
1974 | 2 articles |
1959 | 6 articles |
Sort order: 10000 results found, search took 0 ms
161.
Multi-focus image fusion combines multiple images of the same scene, each focused on a different region, into a single fully focused image. However, accurately retaining the focused pixels in the fusion result remains a major challenge. This study proposes a multi-focus image fusion algorithm based on Hessian matrix decomposition and salient-difference focus detection, which effectively retains the sharp pixels in the focused regions of the source images. First, each source image was decomposed using a Hessian matrix to obtain a feature map containing its structural information. A focus-difference analysis scheme based on an improved sum of modified Laplacian was designed to effectively determine the focus information at corresponding positions of the structural feature map and the source image. For decision-map optimization, an adaptive multiscale consistency-verification algorithm was designed to account for variability in image size, helping the final fused image retain the focus information of the source images. Experimental results showed that our method performed better than several state-of-the-art methods in both subjective and quantitative evaluation. Similar articles
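The sum of modified Laplacian (SML) focus measure mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard SML, not the paper's improved variant; the window radius and edge padding are assumptions.

```python
import numpy as np

def modified_laplacian(img):
    """Per-pixel modified Laplacian:
    |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|."""
    p = np.pad(img.astype(float), 1, mode="edge")
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def sum_modified_laplacian(img, radius=2):
    """Sum the modified Laplacian over a (2*radius+1)^2 window per pixel.
    Higher values indicate sharper (in-focus) regions."""
    ml = modified_laplacian(img)
    p = np.pad(ml, radius, mode="edge")
    h, w = ml.shape
    out = np.zeros_like(ml)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += p[radius + dy : radius + dy + h, radius + dx : radius + dx + w]
    return out
```

A focus decision map can then be formed by comparing the SML of the two source images pixel-wise and picking the sharper one.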
162.
163.
Gulnaz Ahmed, Meng Joo Er, Mian Muhammad Sadiq Fareed, Shahid Zikria, Saqib Mahmood, Jiao He, Muhammad Asad, Syeda Fizzah Jilani, Muhammad Aslam 《Molecules (Basel, Switzerland)》2022,27(20)
Alzheimer’s Disease (AD) is a neurological brain disorder that causes dementia and neurological dysfunction, affecting memory, behavior, and cognition. Deep Learning (DL), a branch of Artificial Intelligence (AI), has paved the way for new AD detection and automation methods. A DL model’s prediction accuracy depends on the size of the dataset, and it degrades when the classes are imbalanced. This study uses a deep Convolutional Neural Network (CNN) to develop a reliable and efficient method for identifying Alzheimer’s disease from MRI. We offer a new CNN architecture for diagnosing Alzheimer’s disease with a modest number of parameters, making it well suited to training on a smaller dataset. The proposed model correctly separates the early stages of Alzheimer’s disease and displays class-activation patterns on the brain as a heat map. The proposed Detection of Alzheimer’s Disease Network (DAD-Net) is developed from scratch to correctly classify the phases of Alzheimer’s disease while reducing parameter count and computation cost. Because the Kaggle MRI image dataset suffers from severe class imbalance, we used a synthetic oversampling technique to balance the images across classes. Precision, recall, F1-score, Area Under the Curve (AUC), and loss are used to compare the proposed DAD-Net against DEMENET and a CNN model. DAD-Net achieved 99.22% accuracy, 99.91% AUC, 99.19% F1-score, 99.30% precision, and 99.14% recall. According to the simulation results, the presented DAD-Net outperforms the other state-of-the-art models on all evaluation metrics. Similar articles
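The class-rebalancing step described above can be illustrated with plain random oversampling. This is a minimal NumPy stand-in; the abstract does not name the exact synthetic-oversampling variant used, so the interpolation step of SMOTE-style methods is omitted here.

```python
import numpy as np

def oversample_to_balance(X, y, rng=None):
    """Randomly duplicate minority-class samples until every class
    has as many samples as the largest class."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, count in zip(classes, counts):
        if count < target:
            idx = np.flatnonzero(y == cls)
            extra = rng.choice(idx, size=target - count, replace=True)
            X_parts.append(X[extra])
            y_parts.append(y[extra])
    return np.concatenate(X_parts), np.concatenate(y_parts)
```

Applied before training, this makes each class contribute equally to the loss, at the cost of repeating minority-class samples.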
164.
Two novel sulfonium-salt photoacid generators were successfully synthesized, and their structures were confirmed by 1H NMR and MS analysis. Their basic physical properties and their decomposition and acid-generation behavior in acetonitrile solution under 405 nm and 365 nm light were studied, and the decomposition and acid-generation quantum yields were calculated. The results show that both compounds have high thermal-decomposition temperatures and good solubility in common organic solvents. Under a 405 nm light source, the decomposition quantum yields of 4-(9′-phenylanthracenyl)phenyl sulfonium trifluoromethanesulfonate (PAGS1) and 4-(4′-N,N-diethyl-1′-styryl)phenyl sulfonium trifluoromethanesulfonate (PAGS2) were 10% and 15%, and the acid-generation quantum yields were 8.1% and 13%, respectively. Under a 365 nm light source, however, both the decomposition and acid-generation quantum yields were very low, indicating that the two photoacid generators are more sensitive to 405 nm light and are suitable for use as photoacid generators under 405 nm light sources. Similar articles
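The quantum yields above follow the standard definition Φ = (molecules converted) / (photons absorbed), with photon energy E = hc/λ. A small sketch; the power, exposure time, and amount values used below are hypothetical, not taken from the paper.

```python
# Physical constants (CODATA exact values).
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def photon_energy(wavelength_nm):
    """Energy of one photon at the given wavelength, in joules: E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9)

def quantum_yield(mol_converted, absorbed_power_w, exposure_s, wavelength_nm):
    """Fraction of absorbed photons that lead to conversion
    (decomposition or acid generation)."""
    photons_absorbed = absorbed_power_w * exposure_s / photon_energy(wavelength_nm)
    return mol_converted * N_A / photons_absorbed
```

For example, 1e-7 mol decomposed after absorbing 10 mW of 405 nm light for 300 s gives a quantum yield of roughly 1%.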
165.
Image steganography, which typically hides a small image (the hidden or secret image) inside a large image (the carrier) so that attackers cannot detect its presence, has become a hot topic in image security. Recent deep-learning techniques have pushed image steganography to a new stage. To improve steganographic performance, this paper proposes a novel scheme that uses the Transformer for feature extraction. In addition, an image encryption algorithm based on recursive permutation is proposed to further enhance the security of the secret images. We conduct extensive experiments to demonstrate the effectiveness of the proposed scheme, and we show that the Transformer is superior to the compared state-of-the-art deep-learning models at feature extraction for steganography. The proposed image encryption algorithm also has good security properties, which further strengthens the overall steganography scheme. Similar articles
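The recursive-permutation idea can be sketched as repeated application of a key-derived pixel permutation. This is an illustrative guess at the generic technique; the paper's exact scheme is not specified in the abstract, and the round count and key derivation here are assumptions.

```python
import numpy as np

def permute_image(flat, key_seed, rounds=3):
    """Scramble pixel order by applying a key-derived permutation
    `rounds` times to a flattened image."""
    rng = np.random.default_rng(key_seed)
    perm = rng.permutation(flat.size)
    out = flat.copy()
    for _ in range(rounds):
        out = out[perm]
    return out, perm

def unpermute_image(scrambled, perm, rounds=3):
    """Invert by applying the inverse permutation the same number of times."""
    inv = np.argsort(perm)  # perm[inv] == arange(n), so inv undoes perm
    out = scrambled.copy()
    for _ in range(rounds):
        out = out[inv]
    return out
```

Because a permutation only reorders values, the histogram of the secret image is preserved; real schemes typically combine permutation with value diffusion.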
166.
Rafael Rojas-Hernández, Juan Luis Díaz-de-León-Santiago, Grettel Barceló-Alonso, Jorge Bautista-López, Valentin Trujillo-Mora, Julio César Salgado-Ramírez 《Entropy (Basel, Switzerland)》2022,24(7)
This paper introduces a new method for compressing digital images using the Difference Transform, applied to medical imaging. The Difference Transform decorrelates the image data, which improves the encoding step and yields a file smaller than the original. The proposed method proves competitive with, and in many cases better than, formats commonly used for medical images such as TIFF or PNG. In addition, the Difference Transform can replace other transforms such as the Cosine or Wavelet transforms. Similar articles
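The decorrelation step of a difference-style transform can be sketched as a first-order pixel difference with an exact inverse. This is a generic illustration of the idea, under the assumption of a simple row-wise predictor; the paper's actual Difference Transform is not detailed in the abstract.

```python
import numpy as np

def difference_transform(row):
    """Replace each pixel with its difference from the previous pixel
    (the first pixel is kept as-is). Differences cluster near zero for
    smooth image rows, which helps the subsequent entropy coder."""
    row = row.astype(np.int16)  # widen so negative differences fit
    out = row.copy()
    out[1:] = row[1:] - row[:-1]
    return out

def inverse_difference_transform(diff):
    """Recover the original row losslessly by cumulative summation."""
    return np.cumsum(diff, dtype=np.int16)
```

Since the transform is lossless and invertible, it can sit in front of any entropy coder without changing the decoded image.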
167.
168.
169.
170.