181.
Bayesian network (BN) structure learning based on dynamic programming can obtain globally optimal solutions. However, when the sample does not fully capture the information of the true structure, especially when the sample size is small, the learned structure is inaccurate. This paper therefore examines the planning procedure of dynamic programming, restricts it with edge and path constraints, and proposes a doubly constrained dynamic programming BN structure learning algorithm for small-sample conditions. The algorithm uses the double constraints to limit the planning process of dynamic programming and thereby reduce the planning space, and then to limit the selection of the optimal parent sets so that the learned structure conforms to prior knowledge. Finally, the method with integrated prior knowledge is simulated and compared against the method without it. The simulation results verify the effectiveness of the proposed method and show that integrating prior knowledge can significantly improve the efficiency and accuracy of BN structure learning.
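Below is a minimal Python sketch, not the paper's algorithm, of the underlying technique the abstract builds on: exact BN structure learning by dynamic programming over variable subsets, with a simple illustration of how edge constraints (required or forbidden parent-child edges) can restrict the choice of optimal parent sets. The decomposable scoring function `local_score` (e.g. BIC or BDeu) and the handling of path constraints are left out as assumptions.

```python
# Illustrative sketch only: exact score-based BN structure learning by
# dynamic programming over subsets, with hypothetical edge constraints.
from itertools import combinations

def best_parents(v, candidates, local_score, required, forbidden):
    """Highest-scoring parent set for v drawn from `candidates`, honouring
    required and forbidden parent->child edge constraints."""
    must_have = {p for (p, c) in required if c == v}
    best, best_set = float("-inf"), frozenset()
    for k in range(len(candidates) + 1):
        for ps in combinations(sorted(candidates), k):
            ps = frozenset(ps)
            if not must_have <= ps:               # a required edge into v is missing
                continue
            if any((p, v) in forbidden for p in ps):
                continue                          # would create a forbidden edge
            s = local_score(v, ps)                # user-supplied decomposable score
            if s > best:
                best, best_set = s, ps
    return best, best_set

def exact_dp_structure(variables, local_score, required=(), forbidden=()):
    """For every subset S, record the best choice of last variable (sink)
    whose parents are drawn from the rest of S, then peel sinks off to
    rebuild the optimal DAG."""
    variables = tuple(variables)
    required, forbidden = set(required), set(forbidden)
    score, choice = {frozenset(): 0.0}, {}
    for k in range(1, len(variables) + 1):        # subsets in order of size
        for S in map(frozenset, combinations(variables, k)):
            best = float("-inf")
            for sink in S:
                s_par, ps = best_parents(sink, S - {sink}, local_score,
                                         required, forbidden)
                total = score[S - {sink}] + s_par
                if total > best:
                    best, choice[S] = total, (sink, ps)
            score[S] = best
    edges, S = [], frozenset(variables)
    while S:                                      # reconstruct the optimal structure
        sink, ps = choice[S]
        edges.extend((p, sink) for p in ps)
        S = S - {sink}
    return edges
```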
182.
With the continuous improvement of people's health awareness and ongoing scientific progress, consumers have higher requirements for the quality of drinks. Compared with high-sugar concentrated juice, consumers are more willing to accept healthy, original not-from-concentrate (NFC) juice and packaged drinking water. At the same time, drinking category detection can be used for vending machine self-checkout. However, current drinking category detection systems rely on specialized equipment that requires professional operation, or on signals that are not widely deployed, such as radar. This paper introduces a novel drinking category detection method based on wireless signals and an artificial neural network (ANN). Unlike past work, our design relies on WiFi signals, which are widely used in daily life. The intuition is that when wireless signals propagate through the detected target, they arrive at the receiver through multiple paths, and different drinking categories produce distinct multipath propagation, which can be leveraged to detect the drinking category. We capture the WiFi signals affected by the detected drink using wireless devices; then, we compute channel state information (CSI), perform noise removal and feature extraction, and apply an ANN for drinking category detection. Results demonstrate that our design achieves high accuracy in detecting the drinking category.
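As an illustration of the kind of pipeline the abstract describes, the following Python sketch extracts simple statistical features from CSI amplitudes and trains a small ANN classifier. The feature set, smoothing window, and the three drink categories are assumptions made for demonstration, not the authors' design.

```python
# Illustrative sketch only: CSI amplitude features + a small MLP classifier.
# `csi` is assumed to be a (packets x subcarriers) complex matrix per sample.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def csi_features(csi):
    """De-noise amplitudes with a moving average, then summarize each
    subcarrier with simple statistics used as ANN input features."""
    amp = np.abs(csi)                              # amplitude per subcarrier
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, amp)
    return np.concatenate([smoothed.mean(0), smoothed.std(0),
                           smoothed.max(0) - smoothed.min(0)])

# toy example with random data standing in for real CSI captures
rng = np.random.default_rng(0)
X = np.stack([csi_features(rng.standard_normal((200, 30)) +
                           1j * rng.standard_normal((200, 30)))
              for _ in range(120)])
y = rng.integers(0, 3, size=120)                   # 3 hypothetical drink categories
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```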
183.
Object detection is challenging in large-scale images captured by unmanned aerial vehicles (UAVs), especially when detecting small objects with significant scale variation. Most solutions fuse features of different scales by building multi-scale feature pyramids so that both detail and semantic information are abundant. Although feature fusion benefits object detection, it still lacks the long-range dependency information necessary for detecting small objects with significant scale variation. We propose a simple yet effective scale enhancement pyramid network (SEPNet) to address these problems. SEPNet consists of a context enhancement module (CEM) and a feature alignment module (FAM). Technically, the CEM combines multi-scale atrous convolution and multi-branch grouped convolution to model global relationships. Additionally, it enhances object feature representation, preventing features with lost spatial information from flowing into the feature pyramid network (FPN). The FAM adaptively learns pixel offsets to preserve feature consistency; it adjusts the locations of sampling points in the convolutional kernel, effectively alleviating the information conflict caused by fusing adjacent features. Results indicate that SEPNet achieves an AP score of 18.9% on VisDrone, 7.1% higher than that of the state-of-the-art detector RetinaNet, and an AP score of 81.5% on PASCAL VOC.
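The following PyTorch sketch illustrates a context-enhancement-style block in the spirit of the CEM described above: parallel atrous (dilated) convolutions and a grouped-convolution branch, fused by a 1x1 convolution with a residual connection. The channel count, dilation rates, and group number are assumptions, not the paper's configuration.

```python
# Illustrative sketch only: a CEM-like block with atrous and grouped branches.
import torch
import torch.nn as nn

class ContextEnhancement(nn.Module):
    def __init__(self, channels=256, dilations=(1, 3, 5), groups=8):
        super().__init__()
        # one 3x3 atrous branch per dilation rate to enlarge the receptive field
        self.atrous = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        # grouped convolution branch for cheap cross-channel context
        self.grouped = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        branches = [branch(x) for branch in self.atrous] + [self.grouped(x)]
        out = self.fuse(torch.cat(branches, dim=1))
        return self.act(out + x)        # residual connection keeps original detail

feat = torch.randn(1, 256, 64, 64)
print(ContextEnhancement()(feat).shape)  # torch.Size([1, 256, 64, 64])
```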
184.
In recent years, video stabilization has improved significantly for simple scenes but remains less effective in complex scenes. In this study, we built an unsupervised video stabilization model. To improve the distribution of key points across the full frame, a DNN-based key-point detector was introduced to generate rich key points and to optimize the key points and optical flow in the largest untextured region. Furthermore, for complex scenes with moving foreground targets, we used a foreground and background separation approach to obtain the unstable motion trajectories, which were then smoothed. For the generated frames, adaptive cropping was applied to completely remove black edges while retaining the maximum detail of the original frames. Results on public benchmarks showed that this method produces less visual distortion than current state-of-the-art video stabilization methods, while retaining more detail of the original stable frames and completely removing black edges. It also outperformed current stabilization models in both quantitative metrics and running speed.
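For orientation, the sketch below shows the classical trajectory-smoothing idea that stabilization pipelines build on: track key points with optical flow, accumulate per-frame motion, smooth the trajectory, and warp each frame by the correction. It uses OpenCV and is not the unsupervised DNN model described in the abstract.

```python
# Illustrative sketch only: classical key-point tracking + trajectory smoothing.
import cv2
import numpy as np

def stabilize(frames, radius=15):
    transforms = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray
    traj = np.cumsum(transforms, axis=0)           # raw camera trajectory
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.vstack([np.convolve(np.pad(traj[:, i], radius, mode="edge"),
                                    kernel, mode="valid") for i in range(3)]).T
    out, (h, w) = [frames[0]], frames[0].shape[:2]
    for (dx, dy, da), frame in zip(np.array(transforms) + smooth - traj, frames[1:]):
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]], dtype=np.float32)
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```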
185.
Constructing the structure of protein signaling networks with Bayesian network technology is a key issue in bioinformatics. Primitive structure learning algorithms for Bayesian networks take no account of the causal relationships between variables, which are, however, important in the application to protein signaling networks. In addition, as a combinatorial optimization problem with a large search space, structure learning has unsurprisingly high computational complexity. Therefore, in this paper, the causal direction between every pair of variables is calculated first and stored in a graph matrix as one constraint for structure learning. A continuous optimization problem is then constructed, using the fitting losses of the corresponding structural equations as the objective and the directed acyclic prior as another constraint. Finally, a pruning procedure is developed to keep the result of the continuous optimization sparse. Experiments show that the proposed method improves the learned Bayesian network structure compared with existing methods on both artificial and real data, while the computational burden is also reduced significantly.
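The sketch below is in the spirit of what the abstract describes: a least-squares fitting loss over linear structural equations plus the smooth acyclicity term h(W) = tr(e^{W∘W}) − d known from NOTEARS-style methods, with the causal-direction prior shown as a simple mask. The names and the masking scheme are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: fitting loss, acyclicity term, and a direction prior.
import numpy as np
from scipy.linalg import expm

def fitting_loss(W, X):
    """Squared reconstruction error of the linear SEM X ~ X @ W."""
    residual = X - X @ W
    return 0.5 * (residual ** 2).sum() / X.shape[0]

def acyclicity(W):
    """h(W) = tr(e^{W*W}) - d; zero exactly when W encodes a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def apply_direction_prior(W, allowed_mask):
    """Zero out edges whose causal direction contradicts the prior graph matrix."""
    return W * allowed_mask

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
W = rng.standard_normal((4, 4)) * 0.1
mask = np.triu(np.ones((4, 4)), k=1)   # hypothetical prior: only upper-triangular edges
W = apply_direction_prior(W, mask)
print(fitting_loss(W, X), acyclicity(W))
```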
186.
DBTRU was proposed by Thang and Binh in 2015. As a variant of NTRU, the integer polynomial ring is replaced by two binary truncated polynomial rings GF(2)[x]/(x^n + 1). DBTRU has some advantages over NTRU in terms of security and performance. In this paper, we propose a polynomial-time linear algebra attack against the DBTRU cryptosystem that can break DBTRU for all recommended parameter choices. The paper shows that the plaintext can be recovered in less than 1 s via the linear algebra attack on a single PC.
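As background only, the sketch below implements the generic building block that linear algebra attacks of this kind rest on: solving a binary linear system by Gaussian elimination over GF(2). It is not the DBTRU attack itself.

```python
# Illustrative sketch only: Gaussian elimination over GF(2), not the DBTRU attack.
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2); return one solution or None if inconsistent."""
    A = np.array(A, dtype=np.uint8) & 1
    b = np.array(b, dtype=np.uint8) & 1
    m, n = A.shape
    aug = np.hstack([A, b.reshape(-1, 1)])
    pivot_cols, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if aug[r, col]), None)
        if pivot is None:
            continue
        aug[[row, pivot]] = aug[[pivot, row]]      # swap pivot row into place
        for r in range(m):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]                 # eliminate column entry mod 2
        pivot_cols.append(col)
        row += 1
    if any(aug[r, :n].sum() == 0 and aug[r, n] for r in range(row, m)):
        return None                                # inconsistent system
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivot_cols):
        x[col] = aug[r, n]
    return x

print(solve_gf2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 0, 1]))  # -> [1 0 0]
```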
187.
Entropy is a measure of uncertainty or randomness and is the foundation of almost all cryptographic systems. True random number generators (TRNGs) and physical unclonable functions (PUFs) are the silicon primitives used to harvest dynamic and static entropy, respectively, to generate random bit streams. In this survey paper, we present a systematic and comprehensive review of state-of-the-art methods for harvesting entropy from silicon-based devices, covering the implementations, applications, and security of the designs. Furthermore, we summarize the trends in entropy source design and point out the current hot spots in entropy harvesting.
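To make the entropy notion concrete, the short sketch below estimates the Shannon entropy and min-entropy per bit of a raw bit stream, as one might do when sanity-checking a TRNG output. It is illustrative only; rigorous evaluation follows standards such as NIST SP 800-90B.

```python
# Illustrative sketch only: per-bit Shannon and min-entropy of a bit stream.
import numpy as np

def entropy_per_bit(bits):
    bits = np.asarray(bits, dtype=np.uint8)
    p1 = bits.mean()
    p = np.array([1 - p1, p1])
    p = p[p > 0]
    shannon = -(p * np.log2(p)).sum()   # Shannon entropy of a single bit
    min_ent = -np.log2(p.max())         # min-entropy (worst-case guessing)
    return shannon, min_ent

rng = np.random.default_rng(1)
print(entropy_per_bit(rng.integers(0, 2, 10000)))  # close to (1.0, 1.0) for fair bits
```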
188.
Point cloud data are extensively used in applications such as autonomous driving and augmented reality, since they provide detailed and realistic depictions of 3D scenes or objects. However, 3D point clouds generally occupy a large amount of storage space, which is a heavy burden for efficient communication, and such sparse, disordered, non-uniform, high-dimensional data are difficult to compress efficiently. This work therefore proposes a novel deep-learning framework for point cloud geometry compression based on an autoencoder architecture. Specifically, a multi-layer residual module is designed on a sparse convolution-based autoencoder that progressively down-samples the input point cloud and reconstructs it hierarchically. This effectively constrains the accuracy of the sampling process at the encoder side, preserving feature information while significantly decreasing the data volume. Compared with the state-of-the-art geometry-based point cloud compression (G-PCC) schemes, our approach obtains a BD-Rate gain of more than 70–90% on an object point cloud dataset and achieves better point cloud reconstruction quality. Additionally, compared with the state-of-the-art PCGCv2, we achieve an average BD-Rate gain of about 10%.
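The sketch below is a heavily simplified, dense stand-in for the idea: an autoencoder that progressively down-samples a voxelized occupancy grid and reconstructs it hierarchically. The actual work uses sparse convolutions and residual modules; the layer sizes here are assumptions.

```python
# Illustrative sketch only: a dense voxel autoencoder, not the sparse-conv model.
import torch
import torch.nn as nn

class VoxelAutoencoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(           # two stride-2 stages: 1/4 resolution
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(           # mirror: upsample back to full size
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, grid):
        latent = self.encoder(grid)              # compressed geometry representation
        return self.decoder(latent)              # occupancy logits per voxel

grid = (torch.rand(1, 1, 64, 64, 64) > 0.95).float()  # toy sparse occupancy grid
logits = VoxelAutoencoder()(grid)
loss = nn.functional.binary_cross_entropy_with_logits(logits, grid)
print(logits.shape, float(loss))
```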
189.
Multi-focus image fusion integrates multiple images of the same scene, each focused on a different region, to produce a fully focused image. However, accurately retaining the focused pixels in the fusion result remains a major challenge. This study proposes a multi-focus image fusion algorithm based on Hessian matrix decomposition and salient-difference focus detection, which can effectively retain the sharp pixels in the focused region of a source image. First, the source image is decomposed using a Hessian matrix to obtain a feature map containing its structural information. A focus difference analysis scheme based on an improved sum of a modified Laplacian is then designed to determine the focus information at corresponding positions of the structural feature map and the source image. In the decision-map optimization, considering the variability of image sizes, an adaptive multiscale consistency verification algorithm is designed, which helps the final fused image retain the focus information of the source images. Experimental results show that our method performs better than several state-of-the-art methods in both subjective and quantitative evaluation.
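The sketch below illustrates only the classical focus-measure step referred to in the abstract: a sum of modified Laplacian (SML) computed per pixel and a pixel-wise decision map that keeps the sharper source. The Hessian-based decomposition, the improved SML, and the adaptive multiscale consistency verification are not reproduced here.

```python
# Illustrative sketch only: the classical SML focus measure and a decision map.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sum_modified_laplacian(img, window=9):
    """Modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|,
    summed over a local window."""
    img = img.astype(np.float64)
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=np.float64)
    ky = kx.T
    ml = np.abs(convolve(img, kx)) + np.abs(convolve(img, ky))
    return uniform_filter(ml, size=window)

def fuse(img_a, img_b):
    """Keep, at each pixel, the source image with the larger focus measure."""
    decision = sum_modified_laplacian(img_a) >= sum_modified_laplacian(img_b)
    return np.where(decision, img_a, img_b)

# toy usage with random images standing in for two partially focused shots
rng = np.random.default_rng(0)
a, b = rng.random((128, 128)), rng.random((128, 128))
print(fuse(a, b).shape)
```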
190.
In this work, novel selective recognition materials, namely magnetic molecularly imprinted polymers (MMIPs), were prepared. The recognition materials were used as pretreatment materials for magnetic molecularly imprinted solid-phase extraction (MSPE) to achieve efficient adsorption, selective recognition, and rapid magnetic separation of methotrexate (MTX) in patients' plasma. The method was combined with high-performance liquid chromatography with ultraviolet detection (HPLC–UV) to achieve accurate and rapid determination of the plasma MTX concentration, providing a new method for clinical detection and monitoring of MTX levels. The MMIPs for the selective adsorption of MTX were prepared by the sol–gel method and characterized by transmission electron microscopy, Fourier-transform infrared spectrometry, X-ray diffractometry, and X-ray photoelectron spectrometry. The MTX adsorption properties of the MMIPs were evaluated in static, dynamic, and selective adsorption experiments, and on this basis the extraction conditions were optimized systematically. The adsorption capacity of the MMIPs for MTX was 39.56 mg g−1, the imprinting factor was 9.40, and the adsorption equilibrium time was 60 min. The optimal extraction conditions were as follows: 100 mg of MMIP, a loading time of 120 min, 8:2 (v/v) water–methanol as the leachate, 4:1 (v/v) methanol–acetic acid as the eluent, and an elution time of 60 min. The MTX response was linear in the range of 0.00005–0.25 mg mL−1, and the detection limit was 12.51 ng mL−1. The accuracy of the MSPE–HPLC–UV method for MTX detection was excellent, and the results were consistent with those of a drug concentration analyzer.