41.
Artificial Intelligence Empowers Contemporary Chemistry Research   Cited by: 1 (self-citations: 0, citations by others: 1)
朱博阳  吴睿龙  于曦 《化学学报》2020,78(12):1366-1382
Artificial intelligence (AI), represented by machine learning, is playing an increasingly important role in contemporary scientific research. Unlike traditional computer programs, machine-learning AI repeatedly analyzes large amounts of data and optimizes its own model — a process of "learning" — to uncover relationships among objective phenomena hidden in the data, build new models with better predictive and decision-making power, and make sound judgments. The characteristics of chemical research play directly to these strengths. Chemists routinely face extremely complex material systems and experimental processes that are difficult to analyze and predict precisely from physical and chemical principles alone. AI can mine the correlations in the massive experimental data generated by chemical research, help chemists make reasonable analyses and predictions, and greatly accelerate chemical R&D. This article introduces contemporary AI methods and the basic principles of applying them to chemical problems, and uses concrete case studies to show how AI can assist in solving different chemical R&D problems and which machine-learning algorithms are suited to each. Efforts to apply AI in the chemical sciences are in a period of vigorous growth, and AI has already demonstrated its powerful support for chemical research; we hope this article helps more chemists in China understand and use this powerful tool.
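As a minimal sketch (not code from the review) of the data-driven workflow described above, the snippet below fits a regression model to hypothetical experimental descriptors and uses cross-validation to check its predictive power; all data and descriptor names are placeholders.

```python
# Sketch: fit an ML model to (fake) chemistry experiment data and predict
# an outcome such as reaction yield. Descriptors and targets are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 6))   # hypothetical descriptors: T, pH, concentrations, ...
y = X @ rng.random(6) + 0.1 * rng.standard_normal(200)   # e.g. measured yield

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)
print("predicted yield for new conditions:", model.predict(X[:1]))
```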
42.
Shadows are a fundamental feature of remote sensing images and can cause loss of, or interference with, the target data. Because of the complicated background information involved, shadow detection and removal has become a hotspot of current research. This paper proposes AP ShadowNet, a model that combines an Atmospheric Transport Model (ATM) with the Poisson equation for unsupervised shadow detection and removal in remote sensing images. The network consists of an ATM-based preprocessing network, A Net, and a Poisson-equation-based network, P Net. First, the ATM generates a correspondence mapping between shadowed and unshaded areas. The brightened image then undergoes adversarial identification in P Net. Finally, the reconstructed image is optimized for color consistency and edge transitions via the Poisson equation. Most current neural-network shadow removal models are heavily data-driven; the proposed model frees unsupervised shadow detection and removal from the data-source restrictions imposed by the remote sensing images themselves. Verifying shadow removal with our model yields satisfying results from both qualitative and quantitative angles. Qualitatively, our results excel in tone consistency and in removing fine shadow detail. Quantitatively, we adopt two no-reference evaluation indicators: gradient-based structural similarity (NRSS) and the Natural Image Quality Evaluator (NIQE). Taking into account further factors such as inference speed and memory occupation, the model stands out among current algorithms.
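To illustrate the Poisson-equation step the abstract credits with color consistency and smooth edge transitions, here is a hedged sketch using OpenCV's gradient-domain (Poisson) cloning on synthetic stand-in images; this is not the authors' P Net, only the same class of operation.

```python
# Sketch: Poisson (gradient-domain) blending of a brightened shadow region
# back into its frame. All images here are synthetic stand-ins.
import cv2
import numpy as np

background = np.full((256, 256, 3), 180, np.uint8)   # fake original frame
brightened = np.full((256, 256, 3), 150, np.uint8)   # fake de-shadowed patch
mask = np.zeros((256, 256), np.uint8)
cv2.circle(mask, (128, 128), 60, 255, -1)            # hypothetical shadow mask

# Solving the Poisson equation over the masked region makes the pasted pixels
# adopt gradients consistent with the surrounding background, giving the
# color consistency and soft edge transition the abstract describes.
result = cv2.seamlessClone(brightened, background, mask, (128, 128),
                           cv2.NORMAL_CLONE)
print(result.shape)
```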
43.
Quantum Machine Learning (QML) has not yet clearly and extensively demonstrated its advantages over classical machine learning. So far there are only specific cases where quantum-inspired techniques have achieved small incremental advantages, along with a few promising experimental cases in hybrid quantum computing over a mid-term horizon (setting aside achievements purely associated with optimization using quantum-classical algorithms). Current quantum computers are noisy and offer few qubits for testing, making it difficult to demonstrate the current and potential quantum advantage of QML methods. This study shows that better classical encoding and better performance of quantum classifiers can be achieved by applying Linear Discriminant Analysis (LDA) during the data preprocessing step. With the LDA technique, the Variational Quantum Algorithm (VQA) gains balanced accuracy and outperforms baseline classical classifiers.
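A minimal sketch of the preprocessing idea: LDA compresses the classical features down to as many dimensions as the circuit has qubits, so each sample can be angle-encoded as rotation parameters. The quantum step is only indicated in a comment; the paper's VQA circuit is not reproduced here.

```python
# Sketch: LDA-based dimensionality reduction as classical preprocessing for
# a quantum classifier. Dataset and qubit count are illustrative choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import MinMaxScaler

X, y = load_breast_cancer(return_X_y=True)
n_qubits = 1   # LDA yields at most n_classes - 1 components; 2 classes -> 1
z = LinearDiscriminantAnalysis(n_components=n_qubits).fit_transform(X, y)

# Rescale each LDA component into [0, pi] so it can serve as a rotation angle
# (e.g. for an RY gate) in an angle-encoding circuit of the VQA.
angles = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(z)
print(angles[:3])
```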
44.
In recent years, searching for and retrieving relevant images from large databases has become an emerging challenge for researchers. Hashing methods, which map raw data into short binary codes, have attracted increasing attention. Most existing hashing approaches map samples to a binary vector via a single linear projection, which restricts their flexibility and leads to optimization problems. To tackle this issue, we introduce a CNN-based hashing method that uses multiple nonlinear projections to produce short binary codes. Further, an end-to-end hashing system is built on a convolutional neural network. We also design a loss function that maintains the similarity between images and minimizes the quantization error by encouraging a uniform distribution of the hash bits, illustrating the effectiveness and significance of the proposed technique. Extensive experiments on various datasets demonstrate the superiority of the proposed method over state-of-the-art deep hashing methods.
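A hedged sketch of a deep-hashing loss in the spirit the abstract describes (not the authors' exact formulation): a similarity-preserving term, a quantization term pushing real-valued outputs toward ±1, and a bit-balance term encouraging each bit to be used with equal frequency.

```python
# Sketch of a deep-hashing training loss; weights and terms are illustrative.
import torch

def hashing_loss(h, sim, alpha=0.1, beta=0.1):
    # h:   (batch, bits) real-valued network outputs in [-1, 1] (e.g. tanh)
    # sim: (batch, batch) 1.0 if two images are similar, 0.0 otherwise
    inner = h @ h.t() / h.size(1)                      # scaled inner products
    similarity_term = ((inner - (2 * sim - 1)) ** 2).mean()
    quantization_term = ((h.abs() - 1) ** 2).mean()    # drive codes toward +/-1
    balance_term = (h.mean(dim=0) ** 2).mean()         # balance each bit
    return similarity_term + alpha * quantization_term + beta * balance_term

h = torch.tanh(torch.randn(8, 48))                     # fake 48-bit codes
sim = (torch.rand(8, 8) > 0.5).float()                 # fake similarity matrix
print(hashing_loss(h, sim))
```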
45.
Despite the increasing applications, demands, and capabilities of drones, in practice they have only limited autonomy for accomplishing complex missions, resulting in slow and vulnerable operations and difficulty adapting to dynamic environments. To lessen these weaknesses, we present a computational framework for deducing the original intent of drone swarms by monitoring their movements. We focus on interference, a phenomenon that drones do not initially anticipate but that complicates operations because of its significant impact on performance and its challenging nature. We infer interference from predictability, first applying various machine learning methods, including deep learning, and then computing entropy for comparison against interference. Our framework begins by building a set of computational models, called double transition models, from the drone movements, and then reveals reward distributions using inverse reinforcement learning. These reward distributions are used to compute the entropy and interference across a variety of drone scenarios specified by combining multiple combat strategies and command styles. Our analysis confirmed that drone scenarios experienced more interference, higher performance, and higher entropy as they became more heterogeneous. However, the direction of interference (positive vs. negative) depended more on the combination of combat strategy and command style than on homogeneity.
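Illustrative only: computing Shannon entropy from a reward distribution, as the abstract's pipeline does after inverse reinforcement learning recovers rewards from observed drone movements. The distributions below are invented to show the homogeneous-vs-heterogeneous contrast.

```python
# Sketch: entropy of (hypothetical) reward/behavior distributions.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                          # normalize to probabilities
    return float(-(p * np.log2(p + eps)).sum())

homogeneous = [0.90, 0.05, 0.03, 0.02]       # one dominant behavior: low entropy
heterogeneous = [0.30, 0.25, 0.25, 0.20]     # mixed behaviors: high entropy
print(entropy(homogeneous), entropy(heterogeneous))
```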
46.
Constructing the structure of protein signaling networks with Bayesian network technology is a key issue in bioinformatics. Primitive structure learning algorithms for Bayesian networks take no account of the causal relationships between variables, which are important in the application to protein signaling networks. In addition, as a combinatorial optimization problem with a large search space, structure learning carries high computational complexity. Therefore, in this paper, the causal direction between each pair of variables is calculated first and stored in a graph matrix as one constraint on structure learning. A continuous optimization problem is then constructed, taking the fitting losses of the corresponding structure equations as the objective and a directed-acyclicity prior as a further constraint. Finally, a pruning procedure keeps the solution of the continuous optimization problem sparse. Experiments show that the proposed method improves the learned Bayesian network structure compared with existing methods on both artificial and real data, while also significantly reducing the computational burden.
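A hedged sketch of the two continuous-optimization ingredients the abstract names: a least-squares fitting loss for the structure equations, and a differentiable acyclicity function of the NOTEARS form h(W) = tr(exp(W∘W)) − d, which is zero exactly when the weighted adjacency matrix W describes a DAG. This is not the authors' code; their constraints and solver may differ.

```python
# Sketch: fitting loss + acyclicity constraint for continuous DAG learning.
import numpy as np
from scipy.linalg import expm

def fitting_loss(W, X):
    # Linear structure equations X ~ X @ W; W[i, j] is the weight of edge i->j.
    return 0.5 * np.sum((X - X @ W) ** 2) / X.shape[0]

def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d     # 0 iff the weighted graph is acyclic

X = np.random.randn(100, 5)              # fake observations of 5 variables
W = np.zeros((5, 5)); W[0, 1] = 0.8; W[1, 2] = 0.5   # acyclic example
print(fitting_loss(W, X), acyclicity(W))  # acyclicity ~ 0 here
```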
47.
In recent years, video stabilization has improved significantly for simple scenes but remains less effective for complex ones. In this study, we built an unsupervised video stabilization model. To obtain an accurate distribution of key points over the full frame, a DNN-based key-point detector was introduced to generate rich key points and to optimize the key points and the optical flow in the largest untextured region. Furthermore, for complex scenes with moving foreground targets, we used a foreground-background separation approach to obtain the unstable motion trajectories, which were then smoothed. For the generated frames, adaptive cropping was conducted to remove the black borders completely while maintaining the maximum detail of the original frame. Results on public benchmarks showed that this method produces less visual distortion than current state-of-the-art video stabilization methods, while retaining more detail in the stabilized frames and completely removing black borders. It also outperformed current stabilization models in both quantitative metrics and running speed.
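Illustrative only: the trajectory-smoothing step. A noisy camera path (hypothetical data) is smoothed with a moving average; the per-frame warp a stabilizer applies is the difference between the smoothed and raw paths. The authors' smoothing scheme may differ.

```python
# Sketch: smooth a fake per-frame camera trajectory with a moving average.
import numpy as np

def smooth(trajectory, radius=15):
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, radius, mode="edge")   # keep endpoints stable
    return np.convolve(padded, kernel, mode="valid")

raw_x = np.cumsum(np.random.randn(300))    # fake per-frame x-translation path
smooth_x = smooth(raw_x)
correction = smooth_x - raw_x              # translation to warp each frame by
print(correction[:5])
```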
48.
In this paper, to overcome the slow processing speed of the rule-based visible and NIR (near-infrared) image synthesis method, we present a fast image fusion method using DenseFuse, a CNN (convolutional neural network)-based image synthesis method. The proposed method applies a raster scan algorithm to build visible and NIR datasets for effective learning and presents a dataset classification method based on luminance and variance. Additionally, we present a method for synthesizing the feature maps in the fusion layer and compare it with feature-map synthesis in other fusion layers. The proposed method learns the superior image quality of the rule-based synthesis method and produces a clear synthesized image with better visibility than other existing learning-based methods. Compared with the rule-based synthesis method used as the target, the proposed method reduces the processing time by a factor of three or more.
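As a minimal sketch, here are two common fusion-layer strategies for combining encoder feature maps from the visible and NIR branches, in the spirit of DenseFuse-style fusion; the paper's own fusion rule may differ from both.

```python
# Sketch: element-wise addition vs. l1-norm weighted fusion of feature maps.
import torch

def fuse_add(f_vis, f_nir):
    return (f_vis + f_nir) / 2             # simple element-wise averaging

def fuse_l1(f_vis, f_nir, eps=1e-8):
    # Weight each source by the l1-norm (activity level) of its features.
    a_vis = f_vis.abs().sum(dim=1, keepdim=True)
    a_nir = f_nir.abs().sum(dim=1, keepdim=True)
    w_vis = a_vis / (a_vis + a_nir + eps)
    return w_vis * f_vis + (1 - w_vis) * f_nir

f_vis = torch.rand(1, 64, 32, 32)           # fake encoder features, visible
f_nir = torch.rand(1, 64, 32, 32)           # fake encoder features, NIR
print(fuse_add(f_vis, f_nir).shape, fuse_l1(f_vis, f_nir).shape)
```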
49.
This paper compares model development strategies based on different performance metrics. The study was conducted in the area of credit risk modeling using diverse metrics, including the general-purpose Area Under the ROC Curve (AUC), the problem-dedicated Expected Maximum Profit (EMP), and the novel case-tailored Calculated Profit (CP). The metrics were used to optimize competing credit risk scoring models based on two predictive algorithms widely used in the financial industry: Logistic Regression and extreme gradient boosting (XGBoost). A dataset provided by the American agency Fannie Mae was utilized to conduct the study. In addition to the baseline study, the paper also includes a stability analysis. In each case examined, it was the proposed CP metric that allowed us to achieve the most profitable loan portfolio.
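Below is a hedged, heavily simplified profit-style evaluation (not the paper's exact CP definition): loans the model scores below a default-risk cutoff are accepted, interest is earned on repaid loans, and principal (scaled by loss given default) is lost on defaults. All rates, cutoffs, and data are invented for illustration.

```python
# Sketch: score a loan portfolio by simulated profit rather than AUC.
import numpy as np

def portfolio_profit(p_default, defaulted, amount,
                     rate=0.06, lgd=0.8, cutoff=0.2):
    accepted = p_default < cutoff
    gains = amount * rate * (accepted & ~defaulted)   # interest on good loans
    losses = amount * lgd * (accepted & defaulted)    # losses on defaults
    return float(gains.sum() - losses.sum())

rng = np.random.default_rng(1)
p = rng.random(1000)                      # model-predicted default probabilities
d = rng.random(1000) < p                  # simulated default outcomes
a = rng.uniform(5e3, 5e4, 1000)           # loan amounts
print(portfolio_profit(p, d, a))
```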
50.
We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes have to be aggregated into a final centralized model. We consider both simple averaging of the models and more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman-divergence or Lipschitz-continuous losses, that demonstrate an improved 1/K dependence on the number of nodes. These "per node" bounds are expressed in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful for describing the generalization properties inherent in having communication or privacy constraints at each node.
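To make the shape of such bounds concrete, here is a hedged sketch following the standard single-node information-theoretic bound of Xu and Raginsky, with the network-setting improvement written only schematically at the level the abstract suggests; the paper's exact constants, conditions, and aggregation rules may differ.

```latex
% Single-node bound for a sigma-sub-Gaussian loss and n samples:
% the generalization gap is controlled by I(S; W).
\left| \mathbb{E}\,\mathrm{gen}(S, W) \right|
  \le \sqrt{\frac{2\sigma^{2}\, I(S; W)}{n}}

% Network setting (schematic): K nodes with independent datasets S_k and
% local weights W_k aggregated into a central model; the "per node" bounds
% improve the dependence on K roughly as
\left| \mathbb{E}\,\mathrm{gen} \right|
  \le \frac{1}{K} \sum_{k=1}^{K} \sqrt{\frac{2\sigma^{2}\, I(S_k; W_k)}{n}}
```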