331.
This paper presents the design and implementation of a self-developed mobile learning system. The system uses J2ME to build the server and mobile-phone client programs and is deployed on mobile communication devices, meeting the need for mobile learning anytime, anywhere.
332.
Probabilistic Decision Graphs (PDGs) are a class of graphical models that can naturally encode some context-specific independencies that cannot always be efficiently captured by other popular models, such as Bayesian Networks. Furthermore, inference can be carried out efficiently over a PDG, in time linear in the size of the model. The problem of learning PDGs from data has been studied in the literature, but only for the case of complete data. We propose an algorithm for learning PDGs in the presence of missing data. The proposed method is based on the Expectation-Maximisation principle and estimates both the structure and the parameters of the model. We test our proposal on artificially generated data with different rates of missing cells and on real incomplete data, and we compare the PDG models learnt by our approach to the commonly used Bayesian Network (BN) model. The results indicate that the PDG model is less sensitive to the rate of missing data than the BN model. Moreover, although the BN models usually attain a higher likelihood, the PDGs come close while remaining comparable in size, which makes the learnt PDGs preferable for probabilistic inference purposes.
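As a hedged illustration of the Expectation-Maximisation principle the abstract invokes (not the authors' PDG learner, which also estimates model structure), here is a minimal sketch assuming a two-component Gaussian mixture in which the component label plays the role of the missing data:

```python
import numpy as np

# Illustrative EM loop only: E-step fills in the missing (latent) labels
# with posterior responsibilities; M-step re-estimates parameters from
# the resulting expected sufficient statistics.
def em_gmm(x, n_iter=50):
    mu = np.array([x.min(), x.max()])      # initial means
    sigma = np.array([x.std(), x.std()])   # initial std devs
    pi = np.array([0.5, 0.5])              # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: closed-form parameter updates
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1, 200)])
print(em_gmm(x))
```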
333.
A due-date assignment problem with learning effect and deteriorating jobs
In this paper we consider a single-machine scheduling problem with the effects of learning and deterioration. In this model, job processing times are defined as functions of their starting times and positions in the sequence. The problem is to determine an optimal combination of the due-date and the schedule so as to minimize the sum of earliness, tardiness and due-date cost. We show that the problem remains polynomially solvable under the proposed model.
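The abstract does not state the processing-time function; one common formulation in this literature, assumed here purely for concreteness, combines a time-based deterioration term with a position-based learning term:

```latex
% Assumed, representative model -- not necessarily the paper's exact form.
% Job j in position r, started at time t, has actual processing time
p_{j}^{A}(t,r) = p_{j}\,(a + b\,t)\,r^{\alpha},
% with normal time p_j, deterioration rate b > 0, learning index \alpha < 0.
% The due date d and sequence \pi are chosen jointly to minimise
\min_{d,\pi}\ \sum_{j=1}^{n}\bigl(\beta E_{j} + \gamma T_{j} + \delta d\bigr),
\qquad E_{j}=\max(0,\,d-C_{j}),\quad T_{j}=\max(0,\,C_{j}-d).
```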
334.
Possible dependencies of serial learning data on physiological parameters such as spiking thresholds, arousal level, and decay rate of potentials are considered in a rigorous learning model. The influence of these parameters on the inverted U in learning, skewing of the bowed curve, primacy vs. recency, associational span, distribution of remote associations, and growth of associations is studied. A smooth variation of parameters leads from phenomena characteristic of normal subjects to abnormal phenomena, which can be interpreted in terms of increased response interference and consequently poor attention in the presence of overarousal. The study involves a type of biological many-body problem, including dynamical time-reversals due to macroscopically nonlocal interactions. Supported in part by the A. P. Sloan Foundation (71609), the NSF (GP-13778), and the ONR (N00014-67-A-0204-00-0051); supported in part by the ONR 4102 (02).
335.
A new algorithm for adapting the learning rate of the gradient descent method is presented, based on a second-order Taylor expansion of the error energy function with respect to the learning rate, evaluated at points chosen by an "award-punish" strategy. A detailed derivation of the algorithm as applied to RBF networks is given. Simulation studies show that this algorithm can increase the rate of convergence and improve the performance of the gradient descent method.
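A minimal sketch of the underlying idea, under stated assumptions: approximate the error E as a function of the learning rate η by a second-order Taylor expansion along the descent direction and step to its minimiser η* = -E'(0)/E''(0). The derivatives are estimated by finite differences here, and the paper's "award-punish" choice of expansion points is not reproduced:

```python
import numpy as np

def taylor_learning_rate(loss, w, grad, h=1e-3):
    """Pick eta by minimising the quadratic Taylor model of E(eta)."""
    d = -grad                        # descent direction
    e = lambda eta: loss(w + eta * d)
    e0, ep, em = e(0.0), e(h), e(-h)
    d1 = (ep - em) / (2 * h)         # finite-difference E'(0)
    d2 = (ep - 2 * e0 + em) / h**2   # finite-difference E''(0)
    if d2 <= 0:                      # quadratic model not convex: fall back
        return h
    return -d1 / d2

# Toy quadratic example: loss(w) = ||w||^2, gradient 2w.
loss = lambda w: float(np.dot(w, w))
w = np.array([3.0, -2.0])
grad = 2 * w
eta = taylor_learning_rate(loss, w, grad)
print(eta, w - eta * grad)           # for a quadratic, one step hits the minimum
```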
336.
In this paper, we consider unregularized online learning algorithms in a Reproducing Kernel Hilbert Space (RKHS). First, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general α-activating loss (see Definition 1 below). Our results extend and refine the results in [30] for the least-squares loss and the recent result [3] for loss functions with a Lipschitz-continuous gradient. Moreover, we establish a very general condition on the step sizes which guarantees the convergence of the last iterate of such algorithms. Second, we establish, for the first time, the convergence of the unregularized pairwise learning algorithm with a general loss function and derive explicit rates under the assumption of polynomially decaying step sizes. Concrete examples are used to illustrate our main results. The main techniques are tools from convex analysis, refined inequalities of Gaussian averages [5], and an induction approach.
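A minimal sketch of one unregularized online step in an RKHS, assuming a Gaussian kernel and the least-squares loss (the paper treats general α-activating losses); the update is f_{t+1} = f_t - η_t · l'(f_t(x_t), y_t) · K(x_t, ·) with polynomially decaying step sizes:

```python
import numpy as np

def gauss_k(x, z, s=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * s ** 2))

def online_kernel_ls(stream, theta=0.6):
    xs, coefs = [], []                      # support points / coefficients of f_t
    for t, (x, y) in enumerate(stream, start=1):
        f_x = sum(c * gauss_k(xi, x) for xi, c in zip(xs, coefs))
        eta = 1.0 / t ** theta              # polynomially decaying step size
        # least-squares loss l(f, y) = (f - y)^2 / 2 has l'(f, y) = f - y
        xs.append(x)
        coefs.append(-eta * (f_x - y))
    return xs, coefs

rng = np.random.default_rng(0)
data = [rng.uniform(-1, 1, 2) for _ in range(100)]
stream = [(x, np.sin(3 * x[0])) for x in data]   # toy regression stream
xs, coefs = online_kernel_ls(stream)
x_test = np.array([0.2, -0.4])                   # evaluate the learnt function
print(sum(c * gauss_k(xi, x_test) for xi, c in zip(xs, coefs)))
```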
337.
In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information-theoretical approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information compression, which differs from observations of End-to-End (E2E) learning. Additionally, CL can inherit information about targets and gradually specialise extracted features layer-by-layer. We evaluate this effect by proposing an information transition ratio, I(T;Y)/I(X;T), and show that it can serve as a useful heuristic in setting the depth of a neural network that achieves satisfactory classification accuracy.
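A sketch of the proposed ratio I(T;Y)/I(X;T), using a simple histogram (binning) mutual-information estimator on scalar activations; binning is one of several estimators used in information-plane analyses and is an assumption here, not necessarily the paper's choice:

```python
import numpy as np

def mutual_info(a, b, bins=10):
    """Histogram estimate of I(A;B) in bits for two scalar samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / np.outer(pa, pb)[nz])))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)                            # input X
t = np.tanh(2 * x) + 0.1 * rng.normal(size=5000)     # a toy representation T
y = (x > 0).astype(float)                            # target Y
ratio = mutual_info(t, y) / mutual_info(x, t)        # I(T;Y) / I(X;T)
print(ratio)  # larger values: T retains target- rather than input-information
```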
338.
Support vector machine methods and fuzzy systems
This paper gives an overview of support vector machines (SVMs), a new machine learning method that has attracted wide attention in recent years and combines a solid theoretical foundation with excellent application results. It then analyses the relationship between the SVM method and fuzzy systems, and offers views on the mutual promotion and development of these two approaches.
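A minimal usage sketch of the SVM method the survey introduces, using scikit-learn's SVC on synthetic data (illustrative only, unrelated to the survey's own material):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary classification task, then an RBF-kernel SVM.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # held-out accuracy
```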
339.
陈卓  江辉  周杨 《电子与信息学报》2024,46(3):1119-1127
Federated Learning (FL) trains a model through local learning on terminals together with continuous exchange of model parameters between the terminals and the server, which effectively addresses the data-leakage and privacy risks of centralized machine learning models. However, multiple malicious terminals participating in federated learning can mount adversarial attacks during local learning simply by injecting small input perturbations, causing the global model to output incorrect results. This paper proposes an effective federated defence strategy, SelectiveFL. The strategy first establishes a selective federated defence framework; then, building on adversarial training at the terminals to extract attack characteristics, the server selectively aggregates the uploaded local model updates according to those characteristics, finally yielding multiple adaptive defence models. The proposed defence method is evaluated on several representative benchmark datasets. Experimental results show that, compared with existing work, it improves model accuracy by 2% to 11%.
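A schematic only, in the spirit of the selective aggregation the abstract describes; the attack-characteristic score below (`attack_scores`) is a hypothetical stand-in for what the paper extracts via adversarial training at the terminals, and this is not the authors' SelectiveFL algorithm:

```python
import numpy as np

def selective_aggregate(updates, attack_scores, threshold=0.5):
    """Average only the client updates whose attack score is below threshold."""
    kept = [u for u, s in zip(updates, attack_scores) if s < threshold]
    if not kept:
        raise ValueError("no trusted updates to aggregate")
    return np.mean(kept, axis=0)

# Five hypothetical client updates; two flagged as adversarial (high score).
updates = [np.random.default_rng(i).normal(size=4) for i in range(5)]
scores = [0.1, 0.9, 0.2, 0.8, 0.3]   # hypothetical attack-characteristic scores
print(selective_aggregate(updates, scores))
```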
340.
Under additive white Gaussian noise, for cubic polynomial phase signals (CPS), the dictionaries learnt by classical dictionary learning algorithms such as K-means Singular Value Decomposition (K-SVD), the Recursive Least Squares Dictionary Learning Algorithm (RLS-DLA) and K-means Singular Value Decomposition Denoising (K-SVDD) cannot effectively remove the noise from the signal through sparse decomposition. This paper therefore proposes a dictionary learning algorithm for CPS denoising. The algorithm first learns a dictionary with RLS-DLA; it then modifies the dictionary-update step of that algorithm using nonlinear least squares (NLLS); finally, the reconstructed signal is obtained from the sparse representation of the signal over the trained dictionary. Compared with other dictionary learning algorithms, the proposed algorithm achieves a markedly higher signal-to-noise ratio (SNR) and a markedly lower mean squared error (MSE), showing a clear denoising effect. Experimental results show that, with the dictionary obtained by this algorithm, sparse decomposition yields an average SNR that is 9.55 dB, 13.94 dB and 9.76 dB higher than K-SVD, RLS-DLA and K-SVDD, respectively.
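A generic dictionary-learning denoising sketch of the kind the abstract compares against, using scikit-learn's batch learner and OMP sparse coding (not the paper's NLLS-modified RLS-DLA); the cubic-phase test signal is synthetic:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Build a noisy cubic polynomial phase signal (illustrative parameters).
rng = np.random.default_rng(0)
n, patch = 1024, 32
t = np.arange(n)
phase = 1e-8 * t**3 + 1e-5 * t**2 + 0.05 * t
clean = np.cos(2 * np.pi * phase)
noisy = clean + 0.3 * rng.normal(size=n)

# Learn a dictionary on non-overlapping patches, then reconstruct each
# patch from its sparse code; sparsity is what suppresses the noise.
patches = noisy.reshape(-1, patch)
dl = DictionaryLearning(n_components=16, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3, random_state=0)
codes = dl.fit_transform(patches)
denoised = (codes @ dl.components_).reshape(-1)
print(np.mean((denoised - clean) ** 2),   # denoised MSE ...
      np.mean((noisy - clean) ** 2))      # ... vs. raw noisy MSE
```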