1.
According to the new method of preparing core-shell nanospheres developed by our group, two kinds of core-shell nanospheres with poly(ε-caprolactone) (PCL) as the core and crosslinked poly(2-hydroxypropyl methacrylate) (PHPMA) or poly(vinyl acetate) (PVAc) as the shell were successfully prepared under similar conditions, using the two monomers 2-hydroxypropyl methacrylate (HPMA) and vinyl acetate (VAc). After the PCL cores of the two kinds of nanospheres were degraded by lipase, the corresponding crosslinked poly(methacrylic acid) hollow spheres and crosslinked poly(vinyl alcohol) hollow spheres were obtained. The results indicate that the new method we proposed for preparing core-shell polymeric nanospheres via in-situ polymerization can be generalized to some extent: it is suitable for many systems, provided that the monomer used is soluble in water while its corresponding polymer is not. Translated from Chemical Journal of Chinese Universities, 2006, 27(9): 1762–1766 [译自: 高等学校化学学报]
2.
We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes have to be aggregated into a final centralized model. We consider both simple averaging of the models and more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that demonstrate an improved 1/K dependence on the number of nodes. These "per node" bounds are in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node.
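The simple-averaging aggregation the abstract analyzes can be illustrated with a toy distributed linear-regression problem. This is a minimal sketch, not the paper's setup; all names, sizes, and step sizes below are invented for illustration.

```python
import random

def train_local(data, lr=0.1, steps=200):
    """Fit w for y = w*x by gradient descent on one node's dataset."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

random.seed(0)
TRUE_W = 3.0
K = 10  # number of nodes, each with its own independent dataset
datasets = [
    [(x, TRUE_W * x + random.gauss(0, 0.5))
     for x in (random.uniform(-1, 1) for _ in range(20))]
    for _ in range(K)
]

# Each node trains on its own data; the K models are then aggregated
# into a final centralized model by one-shot averaging.
local_weights = [train_local(d) for d in datasets]
w_avg = sum(local_weights) / K
```

Averaging the K independently trained models reduces the variance of the final estimate relative to any single node's model, which is the intuition behind the improved 1/K dependence in the bounds.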
3.
于雄香  沈良忠  尚学群  刘文斌 《电子学报》2015,43(10):2076-2081
Boolean networks are an important model for studying gene regulatory networks, and inferring the regulatory relations between genes from time-series data is the basis for studying network dynamics and intervention strategies. Existing inference work focuses mainly on the regulatory relations between genes, while the way the Boolean functions between regulator genes and target genes act has received little attention. Since gene regulatory networks are critical networks poised between order and disorder, this paper studies how three generalization methods, the majority rule, a skewness-based method, and a mutual-information-based method, affect the steady-state distribution distance and the sensitivity error of critical Boolean networks. The results show that reasonable generalization clearly improves the predicted network on both the steady-state distribution distance and the sensitivity error, and that among the three methods the mutual-information-based one performs best overall.
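The majority-rule generalization the abstract evaluates can be sketched for a single partially specified Boolean function. The truth table below is invented for illustration (the skew-based and mutual-information-based variants are not shown, and the paper's exact procedure may differ):

```python
def majority_generalize(partial):
    """Fill unspecified truth-table entries with the majority output
    of the specified ones (ties default to 0 here)."""
    known = [v for v in partial.values() if v is not None]
    fill = 1 if sum(known) * 2 > len(known) else 0
    return {k: (v if v is not None else fill) for k, v in partial.items()}

# Truth table of a 2-input regulatory rule inferred from time-series data;
# None marks input states never observed in the data.
partial = {(0, 0): 0, (0, 1): 1, (1, 0): None, (1, 1): 1}
full = majority_generalize(partial)
print(full[(1, 0)])  # unseen state filled by majority vote -> 1
```

The generalization step matters because simulated steady-state distributions and sensitivities depend on how the network behaves in states the training data never visited.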
4.
吕品  于文兵  汪鑫  计春雷  周曦民 《电子学报》2019,47(10):2228-2234
Detecting toxic comments is important for preventing the negative effects that social media platforms can have on users, and it is an active area of natural language processing. To address the unstable accuracy of single-classifier toxic-comment detectors and the low accuracy of boosting ensembles, a stacked generalization method over heterogeneous classifiers is proposed. The method uses a deep recurrent neural network to turn the multi-label toxic-comment classification problem into a binary classification problem, which stabilizes model accuracy, and it exploits the differences in model structure and classification bias between the individual classifiers, a GRU (Gated Recurrent Unit) and NB-SVM (Naïve Bayes-Support Vector Machine), during stacking, which improves accuracy. Comparative experiments on the Wikipedia toxic-comment dataset show that the proposed method outperforms boosting ensembles, indicating that stacked generalization over heterogeneous classifiers is a feasible and effective approach to toxic-comment detection.
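Stacked generalization itself can be shown in miniature. This is not the paper's GRU/NB-SVM system; it is a toy sketch with two deliberately different base classifiers and a perceptron meta-learner, on invented two-feature data standing in for comment features:

```python
import random

random.seed(1)

# Toy two-feature data; the label is 1 when either feature exceeds 0.5.
def make_data(n):
    data = []
    for _ in range(n):
        x = (random.random(), random.random())
        y = 1 if x[0] > 0.5 or x[1] > 0.5 else 0
        data.append((x, y))
    return data

# Two heterogeneous level-0 classifiers, each with a different bias.
def clf_a(x):  # looks only at feature 0
    return 1 if x[0] > 0.5 else 0

def clf_b(x):  # looks only at feature 1
    return 1 if x[1] > 0.5 else 0

# Level-1 meta-learner: a perceptron trained on the base classifiers'
# predictions over a held-out split (the stacked-generalization step).
held = make_data(200)
w = [0.0, 0.0, 0.0]  # weights for [clf_a output, clf_b output, bias]
for _ in range(20):
    for x, y in held:
        z = (clf_a(x), clf_b(x), 1.0)
        pred = 1 if sum(wi * zi for wi, zi in zip(w, z)) > 0 else 0
        for i in range(3):
            w[i] += 0.1 * (y - pred) * z[i]

def stacked(x):
    z = (clf_a(x), clf_b(x), 1.0)
    return 1 if sum(wi * zi for wi, zi in zip(w, z)) > 0 else 0

test = make_data(500)
acc_a = sum(clf_a(x) == y for x, y in test) / len(test)
acc_stack = sum(stacked(x) == y for x, y in test) / len(test)
```

Because the base classifiers err on different inputs, the meta-learner can combine them into a rule that is more accurate than either one alone, which is the effect the paper relies on when stacking GRU and NB-SVM.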
5.
A Comparison of Methods for Estimating the Generalization Ability of Support Vector Machines
The support vector machine is a machine learning algorithm whose chief advantage over other learning algorithms is that it is based on the structural risk minimization principle and can therefore guarantee generalization ability. Estimating generalization ability is an important problem in machine learning and underpins adaptive tuning, parameter selection, model selection, and related methods. This paper compares in detail several currently influential methods for estimating generalization ability, points out their ranges of applicability and their strengths and weaknesses, and, drawing on the principles behind each method, discusses possible directions for the development of generalization-ability estimation.
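For reference, the structural risk minimization principle mentioned above rests on Vapnik's VC bound. In a standard form from the general literature (stated here for context, not taken from the paper), with probability at least \(1-\eta\), for a function class of VC dimension \(h\) and \(l\) training samples:

```latex
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2l}{h} + 1\right) - \ln\frac{\eta}{4}}{l}}
```

Structural risk minimization then chooses the function class (and hence \(h\)) that minimizes the right-hand side, rather than minimizing the empirical risk \(R_{\mathrm{emp}}\) alone.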
6.
A New Improved Form of the Uncertainty Relation Satisfied by Overfitting in BP Networks
By analogy with the general uncertainty relation in information transmission, the multiple correlation coefficient R, characterizing the complexity of the problem, and the number of hidden nodes h, representing the structure of the network, are introduced to reveal the uncertainty relation between learning ability and generalization ability that holds when overfitting appears in a BP network. Numerical overfitting experiments simulating 12 functions of different types and complexities narrow the range of the overfitting parameter p in the relation to 1×10⁻⁵ to 5×10⁻⁴. A method is also given for detecting the onset of overfitting while training a BP network on a given sample set.
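The abstract does not spell out its detection method, but the generic symptom of overfitting, validation error rising while training error keeps falling, can be checked mechanically. This sketch is a common heuristic, not the paper's uncertainty-relation criterion:

```python
def overfit_epoch(train_err, val_err, patience=2):
    """Return the first epoch at which validation error has risen for
    `patience` consecutive epochs while training error kept falling,
    or None if no such point exists."""
    rises = 0
    for t in range(1, len(val_err)):
        if val_err[t] > val_err[t - 1] and train_err[t] < train_err[t - 1]:
            rises += 1
            if rises >= patience:
                return t - patience + 1
        else:
            rises = 0
    return None

# Invented error curves for illustration.
train = [0.9, 0.5, 0.3, 0.2, 0.15, 0.12, 0.10]
val   = [0.95, 0.6, 0.4, 0.35, 0.38, 0.41, 0.45]
print(overfit_epoch(train, val))  # first epoch flagged as overfitting -> 4
```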
7.
Fuzzy Ideals in Fuzzy Semigroups
This paper first introduces the notion of a fuzzy ideal of a fuzzy semigroup, then discusses some of its algebraic properties, generalizing several earlier results.
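For context, the definitions usually given in the fuzzy-algebra literature (the paper's own definitions for fuzzy semigroups may differ in detail): a fuzzy subset of a semigroup \(S\) is a map \(\mu: S \to [0,1]\), and

```latex
\mu \text{ is a fuzzy left ideal } \iff \mu(xy) \ge \mu(y) \quad \forall\, x, y \in S, \qquad
\mu \text{ is a fuzzy right ideal } \iff \mu(xy) \ge \mu(x) \quad \forall\, x, y \in S,
```

with \(\mu\) a fuzzy ideal when both inequalities hold.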
8.
The problem is the classification of the ideals of free differential algebras, or of the associated quotient algebras, the q-algebras: finitely generated, unital C-algebras with homogeneous relations and a q-differential structure. This family of algebras includes the quantum groups, or at least those based on simple (super) Lie or Kac–Moody algebras. Their classification would encompass the so far incomplete classification of quantized (super) Kac–Moody algebras and of the (super) Kac–Moody algebras themselves. These can be defined as singular limits of q-algebras, and it is evident that dealing with the q-algebras in their full generality is more rational than examining each singular limit separately. This is not just because quantization unifies algebras and superalgebras, but also because the points q = 1 and q = -1 are the most singular points in parameter space. In this Letter, one of the two major hurdles in this classification program has been overcome. Fix a set of integers n_1, ..., n_k, and consider the space of polynomials homogeneous of degree n_1 in the generator e_1, and so on. Assume that there are no constants among the polynomials of lower degree in any one of the generators; in this case all constants in the space have been classified. The task that remains, the more formidable one, is to remove the stipulation that there are no constants of lower degree.
9.
The main purpose of this paper is to use the properties of Gauss sums, primitive characters, and the mean value of Dirichlet L-functions to study the hybrid mean value of the error term E(n, l, c, q) and the hyper-Kloosterman sums K(h, n+1, q), as well as the asymptotic behavior of the mean square value ∑_{c=1}^{p} E²(n, 1, c, p), and to give two interesting mean value formulae.
10.
Meta-learning, or "learning to learn", refers to techniques that infer an inductive bias from data corresponding to multiple related tasks, with the goal of improving sample efficiency on new, previously unobserved tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered: those that use separate within-task training and test sets, like model-agnostic meta-learning (MAML), and those that use joint within-task training and test sets, like Reptile. Extending existing work on conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding dataset, to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual-task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
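The MAML-style split between within-task adaptation and meta-update can be sketched on scalar quadratic tasks, where the inner-loop derivative can be written by hand. This is a toy illustration only; the task distribution, step sizes, and batch size are invented, not taken from the paper:

```python
import random

random.seed(0)

def loss_grad(theta, opt):
    """d/dtheta of the per-task loss (theta - opt)**2."""
    return 2 * (theta - opt)

alpha, beta = 0.1, 0.05   # inner (within-task) and outer (meta) step sizes
tasks = [random.gauss(5.0, 1.0) for _ in range(50)]  # per-task optima

theta = 0.0  # meta-parameter shared across tasks
for _ in range(300):
    batch = random.sample(tasks, 5)
    meta_grad = 0.0
    for opt in batch:
        # Inner adaptation: one gradient step on the task's training split.
        adapted = theta - alpha * loss_grad(theta, opt)
        # Outer gradient: differentiate the post-adaptation (test) loss
        # through the inner step; for this quadratic the Jacobian of the
        # inner step is the scalar (1 - 2*alpha).
        meta_grad += loss_grad(adapted, opt) * (1 - 2 * alpha)
    theta -= beta * meta_grad / len(batch)

# theta converges toward the mean of the task optima, the initialization
# from which one inner step adapts best on average.
```

The separate "training" and "test" roles of the inner and outer losses here correspond to the separate within-task training and test sets of the first class of algorithms in the abstract.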