Article Search
By access type: subscription full text (3046), free (342), free within China (125).
By subject: Chemistry (831), Crystallography (2), Mechanics (184), Interdisciplinary (138), Mathematics (1240), Physics (1118).
By publication year: 2024 (5), 2023 (123), 2022 (267), 2021 (321), 2020 (256), 2019 (191), 2018 (143), 2017 (146), 2016 (177), 2015 (90), 2014 (189), 2013 (218), 2012 (136), 2011 (143), 2010 (130), 2009 (139), 2008 (133), 2007 (139), 2006 (97), 2005 (79), 2004 (50), 2003 (44), 2002 (28), 2001 (21), 2000 (28), 1999 (19), 1998 (21), 1997 (28), 1996 (24), 1995 (22), 1994 (7), 1993 (10), 1992 (11), 1991 (8), 1990 (9), 1989 (6), 1988 (8), 1987 (3), 1986 (9), 1985 (4), 1984 (6), 1983 (5), 1980 (3), 1979 (3), 1978 (2), 1977 (2), 1971 (1), 1969 (1), 1967 (1), 1959 (5).
Sort order: 3513 results found (search time: 15 ms).
1.
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer approaches the optimal performance limits dictated by indirect rate-distortion theory, which are achievable only with vector quantizers and complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, we demonstrate that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers that set their quantization rule in light of the task.
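The end-to-end idea can be illustrated with a small sketch, assuming PyTorch and entirely illustrative module names (the paper's actual architecture and training details are not reproduced): a learned analog combiner feeds scalar quantizers, approximated during training by a smooth soft-to-hard staircase, and a small digital network recovers the task vector from the quantized samples.

```python
# Minimal sketch (assumption: PyTorch; names such as TaskBasedQuantizer are illustrative).
import torch
import torch.nn as nn

class SoftScalarQuantizer(nn.Module):
    """Differentiable surrogate for a uniform scalar ADC with `levels` output levels."""
    def __init__(self, levels=4, spread=2.0):
        super().__init__()
        # Quantization thresholds spaced uniformly in [-spread, spread].
        self.register_buffer("thresholds", torch.linspace(-spread, spread, levels - 1))

    def forward(self, x, temperature=10.0):
        # A sum of shifted tanh's approximates a staircase; hard quantization in the limit.
        return torch.tanh(temperature * (x.unsqueeze(-1) - self.thresholds)).sum(-1)

class TaskBasedQuantizer(nn.Module):
    def __init__(self, n_in, n_adc, task_dim, levels=4):
        super().__init__()
        self.analog = nn.Linear(n_in, n_adc, bias=False)   # pre-quantization analog combining
        self.quantizer = SoftScalarQuantizer(levels)
        self.digital = nn.Sequential(                      # task recovery from quantized samples
            nn.Linear(n_adc, 64), nn.ReLU(), nn.Linear(64, task_dim))

    def forward(self, y):
        return self.digital(self.quantizer(self.analog(y)))

# Training step (sketch): minimize MSE between recovered and true task vectors,
# e.g. MIMO channel coefficients in a channel-estimation task.
model = TaskBasedQuantizer(n_in=16, n_adc=8, task_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
y, s = torch.randn(32, 16), torch.randn(32, 4)             # stand-in observations / task vectors
loss = nn.functional.mse_loss(model(y), s)
opt.zero_grad(); loss.backward(); opt.step()
```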
2.
Malware detection is a coevolutionary arms race in which attackers and defenders constantly seek advantage. This arms race is asymmetric: detection is harder and more expensive than evasion, and white hats must be conservative to avoid false positives when searching for malicious behaviour. Most of the time, black hats need only make incremental changes to evade detection. On occasion, white hats make a disruptive move and find a new technique that forces black hats to work harder; examples include system calls, signatures and machine learning. We seek to redress this imbalance. We present a method, called Hothouse, that combines simulation and search to accelerate the white hat’s ability to counter the black hat’s incremental moves, thereby forcing black hats to perform disruptive moves more often. To realise Hothouse, we evolve EEE, an entropy-based polymorphic packer for Windows executables. Playing the role of a black hat, EEE uses evolutionary computation to disrupt the creation of malware signatures. We enter EEE into the detection arms race with VirusTotal, the most prominent cloud service for running anti-virus tools on software. During our 6-month study, we continually improved EEE in response to VirusTotal, eventually learning a packer whose packed malware sees its median detection rate fall from an initial 51.8% to 19.6%. We report both how well VirusTotal learns to detect EEE-packed binaries and how well VirusTotal forgets in order to reduce false positives; its tools learn and forget fast, in about three days. We also show where VirusTotal focuses its detection efforts by analysing EEE’s variants.
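As a rough illustration of the black-hat side of such an arms race, the sketch below runs a generic evolutionary loop that mutates abstract packing parameters to minimise a detection score; the fitness function is a stub standing in for querying an anti-virus oracle, and none of the parameter names come from EEE itself.

```python
# Illustrative sketch only (not the authors' EEE implementation).
import random

def detect_rate(params):
    # Placeholder fitness: pretend fraction of AV engines that flag the packed sample.
    rng = random.Random(hash(tuple(sorted(params.items()))))
    return rng.random()

def mutate(params):
    child = dict(params)
    key = random.choice(list(child))
    child[key] = max(0.0, min(1.0, child[key] + random.gauss(0, 0.1)))
    return child

def evolve(generations=50, pop_size=20):
    # Parameters are abstract packing knobs (e.g. how much entropy padding to add).
    population = [{"entropy_padding": random.random(),
                   "key_schedule": random.random(),
                   "section_shuffle": random.random()} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=detect_rate)        # lower detection = fitter
        elite = scored[: pop_size // 4]                     # elitist selection
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return min(population, key=detect_rate)

best = evolve()
print("least-detected parameter set:", best)
```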
3.
Prediction of the drag reduction effect produced by pulsating pipe flows is examined using machine learning. First, a large set of flow field data is obtained experimentally by measuring turbulent pipe flows with various pulsation patterns. In total, more than 7000 waveforms are applied, yielding a maximum drag reduction rate of 38.6% and a maximum energy saving rate of 31.4%. The results indicate that the pulsating flow effect can be characterized by the pulsation period and the pressure gradients during acceleration and deceleration. Subsequently, two machine learning models are tested to predict the drag reduction rate. The results confirm that the model developed to predict the time variation of the flow velocity and differential pressure with respect to the pump voltage can accurately capture the nonlinearity of the pressure gradients. Therefore, using this model, the drag reduction effect can be estimated with high accuracy.
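A minimal sketch of the prediction step, assuming scikit-learn and synthetic data in place of the measured waveforms (the feature set and the random-forest regressor are illustrative choices, not the authors' model):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.5, 10.0, n),    # pulsation period [s]
    rng.uniform(0.0, 5.0, n),     # mean pressure gradient during acceleration
    rng.uniform(-5.0, 0.0, n),    # mean pressure gradient during deceleration
])
# Synthetic target standing in for the measured drag reduction rate [%].
y = 20 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```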
4.
To improve the stability and accuracy of near-infrared (NIR) spectroscopic detection of total volatile basic nitrogen (TVB-N) in fresh mutton during storage (at 4, 8 and 20 °C), the selection of characteristic wavelengths and of the prediction model are the key steps. NIR spectra in the 680–2600 nm band were collected from 121 fresh mutton samples. Partial least squares (PLS) prediction models were built after preprocessing the spectra with scatter-correction methods such as multiplicative scatter correction (MSC) and standard normal variate (SNV), smoothing methods such as Savitzky-Golay smoothing (SGS) and moving-average smoothing (MAS), and scaling methods such as normalization, centering and autoscaling. Comparison showed that SGS-preprocessed spectra gave the best models. Five outlier spectra were removed using Monte Carlo sampling (MCS) and the Mahalanobis distance (MD). The sample-set partitioning based on joint x–y distances (SPXY) algorithm assigned 75% of the samples (87) to the calibration set and the remaining 29 to the validation set. Competitive adaptive reweighted sampling (CARS), uninformative variable elimination (UVE), improved UVE (IUVE) and the successive projections algorithm (SPA) selected 14, 713, 144 and 15 characteristic wavelengths, respectively. Prediction models built on the full spectrum and on the wavelengths selected by the four methods showed that the CARS-selected wavelengths outperformed those selected by UVE, IUVE and SPA, indicating that CARS effectively reduces the number of input variables while improving model performance. Compared with UVE, the improved IUVE method selected fewer wavelengths and yielded better models. PLS, support vector machine (SVM) and least-squares SVM (LS-SVM) models were then built on the selected wavelengths. The SVM models gave the best calibration results, with the CARS-SVM model reaching a calibration determination coefficient (R2C) of 0.9391 and a root-mean-square error of calibration (RMSEC) of 1.4267; the best validation results came from the LS-SVM models, with the IUVE-LS-SVM model reaching a validation determination coefficient (R2V) of 0.8568 and a root-mean-square error of validation (RMSEV) of 1.8862. The simplified, optimized TVB-N prediction models built on characteristic NIR wavelengths provide technical support for rapid, non-destructive detection of TVB-N in fresh mutton during storage.
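The general workflow (smoothing followed by a PLS calibration/validation split) can be sketched as follows, assuming scikit-learn and SciPy and using synthetic spectra in place of the 121 mutton samples; the wavelength-selection step (CARS/UVE/IUVE/SPA) is omitted:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
spectra = rng.normal(size=(121, 960))            # stand-in spectra (e.g. a 680-2600 nm grid)
tvbn = rng.uniform(5, 25, size=121)              # stand-in TVB-N values [mg/100 g]

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)  # SGS preprocessing
X_cal, X_val, y_cal, y_val = train_test_split(smoothed, tvbn, test_size=29, random_state=0)

pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
pred = pls.predict(X_val).ravel()
print("R2_V:", r2_score(y_val, pred),
      "RMSEV:", mean_squared_error(y_val, pred) ** 0.5)
```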
5.
Scientifically evaluating the research and innovation ability of university students is important for raising the overall level of research in China. Machine learning models can predict students' research ability effectively, and this work proposes a GA-XGBoost model for that prediction task. The model is built on the XGBoost algorithm and exploits the global search capability of a genetic algorithm to automatically find the optimal XGBoost hyperparameters, avoiding the inaccuracy of manual, experience-based tuning; an elitist selection strategy ensures that each generation retains the best evolutionary result. Analysis shows that the GA-XGBoost model predicts students' research ability with high accuracy; compared with logistic regression, random forest and SVM models, GA-XGBoost achieves the highest prediction accuracy.
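A minimal sketch of GA-style hyperparameter search with elitist selection over XGBoost, assuming the xgboost and scikit-learn packages and a synthetic dataset in place of the student-research data (population sizes, ranges and mutation rules are illustrative):

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=400, n_features=12, random_state=0)

def random_genome():
    return {"n_estimators": random.randint(50, 300),
            "max_depth": random.randint(2, 8),
            "learning_rate": random.uniform(0.01, 0.3),
            "subsample": random.uniform(0.6, 1.0)}

def fitness(g):
    # Cross-validated accuracy as the GA fitness of a hyperparameter set.
    return cross_val_score(XGBClassifier(**g, eval_metric="logloss"), X, y, cv=3).mean()

def mutate(g):
    child = dict(g)
    child["max_depth"] = max(2, min(8, child["max_depth"] + random.choice([-1, 1])))
    child["learning_rate"] = float(np.clip(child["learning_rate"] * random.uniform(0.8, 1.25),
                                           0.01, 0.3))
    return child

population = [random_genome() for _ in range(8)]
for generation in range(5):
    population.sort(key=fitness, reverse=True)
    elite = population[:2]                                  # elitist selection
    population = elite + [mutate(random.choice(elite)) for _ in range(6)]

best = max(population, key=fitness)
print("best hyperparameters:", best)
```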
6.
The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete, especially for older plants, which were mostly planned in 2D. Current environment models therefore cannot be generated directly from existing data, and a holistic approach for building such a factory model in a highly automated fashion is largely missing. Major steps in generating an environment model of a production plant include data collection, data pre-processing, object identification and pose estimation. In this work, we elaborate on a methodical modelling approach that starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We examine the impact of the uncertainty information estimated by the Bayesian segmentation framework on the accuracy of the generated environment model. The data collection and point cloud segmentation steps, as well as the resulting model accuracy, are evaluated on a real-world data set collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the frequentist baseline and allows us to considerably increase the accuracy of model placement in a simulation scene.
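The uncertainty idea can be illustrated with Monte-Carlo dropout as an approximate Bayesian segmenter, assuming PyTorch; the toy per-point MLP below stands in for the authors' point-cloud network, and predictive entropy is used as the per-point uncertainty measure:

```python
import torch
import torch.nn as nn

class PointMLP(nn.Module):
    """Toy per-point classifier; a real pipeline would use a point-cloud backbone."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Dropout(0.5),
                                 nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.5),
                                 nn.Linear(64, n_classes))

    def forward(self, xyz):
        return self.net(xyz)

@torch.no_grad()
def mc_dropout_predict(model, xyz, samples=30):
    model.train()                       # keep dropout active at inference time
    probs = torch.stack([model(xyz).softmax(-1) for _ in range(samples)])
    mean = probs.mean(0)                                        # predictive distribution
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(-1)      # per-point uncertainty
    return mean.argmax(-1), entropy

points = torch.rand(1024, 3)            # stand-in scan of the assembly line
labels, uncertainty = mc_dropout_predict(PointMLP(), points)
# Downstream, high-entropy points could be excluded before fitting object poses.
print("uncertain points:", (uncertainty > 1.0).sum().item())
```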
7.
Fractional-order calculus concerns the differentiation and integration of non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics is justified for quantifying the variability of big data stemming from the generation of complex systems. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: “is there a more optimal way to optimize?”. Third, an optimal randomness case study for a stochastic configuration network (SCN) machine-learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented, with concluding remarks.
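For readers unfamiliar with fractional-order operators, the sketch below numerically evaluates a Grünwald–Letnikov fractional derivative and checks it against the known closed form for the half-derivative of t^2; the function, order and grid are arbitrary choices for the example (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.special import gamma

def gl_fractional_derivative(f_vals, alpha, h):
    """Approximate D^alpha f on a uniform grid with step h (Grünwald-Letnikov scheme)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                     # w_k = (-1)^k * binom(alpha, k), via recurrence
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return np.array([np.dot(w[:i + 1], f_vals[i::-1]) for i in range(n)]) / h**alpha

t = np.linspace(0.0, 2.0, 201)
approx = gl_fractional_derivative(t**2, alpha=0.5, h=t[1] - t[0])
exact = gamma(3) / gamma(2.5) * t**1.5        # closed form of the half-derivative of t^2
print("max abs error:", np.abs(approx - exact).max())
```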
8.
With the continuing expansion of electric power metering services, a power-metering knowledge graph composed of business information, technical knowledge, industry standards and their interrelations is urgently needed to provide more comprehensive and effective support for grid decision-making and development. Named entity recognition (NER) is the foundation of knowledge graph construction. Targeting the needs of the power-metering domain and the characteristics of Chinese word segmentation, a joint-learning-based Chinese NER method for electric power metering is proposed. The method couples a CNN-BLSTM-CRF model with a dictionary-enhanced word-segmentation model so that the two share entity categories and confidence scores, and it replaces their sequential execution with parallel computation, reducing the accumulation of recognition errors. Results show that, without manually engineered features, the method significantly outperforms previous approaches in precision, recall and F-score.
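A simplified sketch of the sequence-labelling core, assuming PyTorch; the CNN character features, the CRF decoding layer and the joint word-segmentation model are omitted, so this is only a skeleton of the tagging component:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)    # per-token entity-tag scores

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.emit(h)

# Toy usage: BIO-style tags over a batch of character sequences.
model = BiLSTMTagger(vocab_size=3000, n_tags=7)
tokens = torch.randint(1, 3000, (4, 20))             # 4 sentences, 20 characters each
tags = torch.randint(0, 7, (4, 20))
loss = nn.CrossEntropyLoss()(model(tokens).reshape(-1, 7), tags.reshape(-1))
loss.backward()
```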
9.
In the past decade, big data has become increasingly prevalent in a large number of applications. As a result, datasets suffering from noise and redundancy have made feature selection necessary across multiple domains. However, a common concern in feature selection is that different approaches can give very different results when applied to similar datasets. Aggregating the results of different selection methods helps to resolve this concern and to control the diversity of the selected feature subsets. In this work, we implemented a general framework for the ensemble of multiple feature selection methods. Based on diversified datasets generated from the original set of observations, we aggregated the importance scores produced by multiple feature selection techniques using two methods: the Within Aggregation Method (WAM), which aggregates importance scores within a single feature selection method, and the Between Aggregation Method (BAM), which aggregates importance scores across multiple feature selection methods. We applied the proposed framework to 13 real datasets with diverse performances and characteristics. The experimental evaluation showed that WAM provides an effective tool for determining the best feature selection method for a given dataset. WAM also showed greater stability than BAM in identifying important features, while the computational demands of the two methods were comparable. These results suggest that by applying both WAM and BAM, practitioners can gain a deeper understanding of the feature selection process.
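A generic between-method aggregation can be sketched as follows, assuming scikit-learn and SciPy; the exact WAM/BAM definitions are the paper's, and this rank-averaging over three common selectors on synthetic data is only illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from scipy.stats import rankdata

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

scores = {
    "anova_f": f_classif(X, y)[0],
    "mutual_info": mutual_info_classif(X, y, random_state=0),
    "rf_importance": RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
}

# Rank within each method (higher score -> higher rank), then average ranks across methods.
ranks = np.vstack([rankdata(s) for s in scores.values()])
aggregated = ranks.mean(axis=0)
top5 = np.argsort(aggregated)[::-1][:5]
print("top-5 features by aggregated rank:", top5)
```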
10.
Based on a commercially popular injection molding machine, a fuzzy single-neuron PID controller is designed for precise control of the injection speed. First, a self-learning single-neuron PID controller is designed and then optimized with a fuzzy algorithm; second, a nonlinear model of the injection speed is constructed on the basis of the existing linear model; finally, the fuzzy neuron PID controller is simulated in MATLAB on the constructed model. Simulation results show that the controller responds quickly, exhibits no overshoot, and offers high control accuracy and stability.
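A single-neuron adaptive PID of this kind can be sketched in a few lines; the toy first-order plant below stands in for the machine's nonlinear injection-speed model, and the fuzzy tuning layer is not reproduced (all gains are illustrative):

```python
import numpy as np

def simulate(steps=200, setpoint=1.0, K=0.4, eta=(0.2, 0.3, 0.1)):
    w = np.array([0.3, 0.4, 0.3])             # neuron weights for P-, I-, D-like inputs
    e_hist = [0.0, 0.0, 0.0]                  # e(k), e(k-1), e(k-2)
    y, u = 0.0, 0.0
    outputs = []
    for _ in range(steps):
        e = setpoint - y
        e_hist = [e, e_hist[0], e_hist[1]]
        x = np.array([e_hist[0] - e_hist[1],                    # proportional-like term
                      e_hist[0],                                # integral-like term
                      e_hist[0] - 2 * e_hist[1] + e_hist[2]])   # derivative-like term
        wn = w / np.abs(w).sum()                                # weight normalization
        u = u + K * wn.dot(x)                                   # incremental control law
        w = w + np.array(eta) * e * u * x                       # Hebbian-style weight update
        y = 0.9 * y + 0.1 * u                                   # toy first-order plant
        outputs.append(y)
    return outputs

response = simulate()
print("final output:", round(response[-1], 3))
```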