Sort by: 1,297 results found (search time: 9 ms)
61.
Neural network ensemble techniques can effectively improve the prediction accuracy and generalization ability of neural networks, and have become a research hotspot in machine learning and neural computation. We use Bagging together with different neural network algorithms to generate ensemble members, extract ensemble factors from them with partial least squares (PLS) regression, and then combine the factors with a Bayesian regularized neural network, thereby building a prediction model for the Shanghai Composite Index. A case study on the index's opening and closing prices shows that the method achieves high prediction accuracy and good stability.
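The Bagging step above can be sketched generically. This is a minimal illustration, not the paper's model: a toy offset learner stands in for the neural-network base models, and a plain average stands in for the PLS/Bayesian-regularization combiner; all names are hypothetical.

```python
import random

def bagging_ensemble_predict(train, targets, fit, predict, n_models=5, seed=0):
    """Train n_models base learners on bootstrap resamples and average their predictions."""
    rng = random.Random(seed)
    n = len(train)
    models = []
    for _ in range(n_models):
        # bootstrap: sample n indices with replacement
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(fit([train[i] for i in idx], [targets[i] for i in idx]))
    def ensemble(x):
        preds = [predict(m, x) for m in models]
        return sum(preds) / len(preds)  # simple average in place of the PLS/Bayesian combiner
    return ensemble

# toy base learner: fit a constant offset y = x + m
fit_offset = lambda xs, ys: sum(y - x for x, y in zip(xs, ys)) / len(xs)
predict_offset = lambda m, x: x + m
f = bagging_ensemble_predict([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0],
                             fit_offset, predict_offset)
```

Because every training pair here has offset exactly 1.0, any bootstrap sample recovers the same model, so the ensemble simply adds 1.0 to its input.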
62.
Marco Bee, Giuseppe Espa, Diego Giuliani, Flavio Santi. Journal of Computational and Graphical Statistics, 2017, 26(3): 695–708
In this article, we use the cross-entropy method for noisy optimization for fitting generalized linear multilevel models through maximum likelihood. We propose specifications of the instrumental distributions for positive and bounded parameters that improve the computational performance. We also introduce a new stopping criterion, which has the advantage of being problem-independent. In a second step we find, by means of extensive Monte Carlo experiments, the most suitable values of the input parameters of the algorithm. Finally, we compare the method to the benchmark estimation technique based on numerical integration. The cross-entropy approach turns out to be preferable from both the statistical and the computational point of view. In the last part of the article, the method is used to model the probability of firm exits in the healthcare industry in Italy. Supplemental materials are available online.
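The core cross-entropy iteration for noisy optimization can be sketched in one dimension. This is a bare-bones illustration with a Gaussian instrumental distribution; the sample size, elite fraction, iteration count, and toy objective are illustrative choices, not the paper's settings.

```python
import random

def cross_entropy_maximize(objective, mu=0.0, sigma=5.0, n_samples=200,
                           elite_frac=0.1, iters=40, seed=0):
    """Cross-entropy method: sample candidates from N(mu, sigma), refit the
    instrumental distribution to the elite fraction, and repeat."""
    rng = random.Random(seed)
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        xs.sort(key=objective, reverse=True)     # rank by (noisy) objective value
        elite = xs[:n_elite]
        mu = sum(elite) / n_elite                # refit mean to the elite samples
        sigma = (sum((x - mu) ** 2 for x in elite) / n_elite) ** 0.5 + 1e-6
    return mu

# noisy objective with maximum near x = 3
noise = random.Random(1)
noisy = lambda x: -(x - 3.0) ** 2 + noise.gauss(0, 0.01)
x_hat = cross_entropy_maximize(noisy)
```

The averaging over elite samples is what makes the method tolerant of evaluation noise: individual rankings may be perturbed, but the refitted mean drifts toward the true optimum.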
63.
The topic of clustering has been widely studied in the field of Data Analysis, where it is defined as an unsupervised process of grouping objects together based on notions of similarity. Clustering in the field of Multi-Criteria Decision Aid (MCDA) has seen a few adaptations of methods from Data Analysis, most of them, however, using concepts native to that field, such as similarity and distance measures. Since in MCDA we model the preferences of a decision maker over a set of decision alternatives, we have more diverse ways of comparing alternatives than in Data Analysis, and the alternatives may accordingly be arranged into different potential structures. In this paper we formally define the problem of clustering in MCDA using notions native to this field alone, and highlight the different structures we may try to uncover through this process. Following this, we propose a method for finding these structures. As in any clustering problem, finding the optimal result exactly is impractical, so we propose a stochastic heuristic approach, which we validate through tests on a large set of artificially generated benchmarks.
64.
In DEA, there are typically two schemes for measuring the efficiency of DMUs: radial and non-radial. Radial models assume proportional change of inputs/outputs, and any remaining slacks are usually not accounted for directly in the inefficiency measure. Non-radial models, on the other hand, deal with the slacks of each input/output individually and independently, and integrate them into an efficiency measure called the slacks-based measure (SBM). In this paper, we point out shortcomings of the SBM and propose four variants of the SBM model. The original SBM model evaluates the efficiency of a DMU with reference to the furthest frontier point within a range. This results in the hardest score for the DMU under evaluation, and the projection may go to a remote point on the efficient frontier that is inappropriate as a reference. In an effort to overcome this shortcoming, we first investigate the frontier (facet) structure of the production possibility set. We then propose Variation I, which evaluates each DMU by the nearest point on the same frontier as the SBM found. However, other potential facets exist for evaluating DMUs, so we propose Variation II, which evaluates each DMU from all facets. We then employ clustering methods to classify DMUs into several groups and apply Variation II within each cluster. This Variation III gives more reasonable efficiency scores with less effort. Lastly, we propose a random search method (Variation IV) for reducing the burden of enumerating facets. Its results are approximate but practical in use.
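The SBM ratio itself is easy to write down: the score divides the average proportional input reduction by the average proportional output expansion implied by the slacks. The sketch below only evaluates the measure for given slacks; finding the optimal slacks requires solving a linear program, which is omitted here.

```python
def sbm_score(x, y, slack_minus, slack_plus):
    """Slacks-based measure for one DMU:
    rho = (1 - mean(s_i^- / x_i)) / (1 + mean(s_r^+ / y_r)),
    where s^- are input slacks and s^+ are output slacks."""
    m, s = len(x), len(y)
    num = 1 - sum(sm / xi for sm, xi in zip(slack_minus, x)) / m
    den = 1 + sum(sp / yr for sp, yr in zip(slack_plus, y)) / s
    return num / den

# toy DMU: two inputs, one output; one unit of slack on the first input
rho = sbm_score(x=[4.0, 6.0], y=[2.0], slack_minus=[1.0, 0.0], slack_plus=[0.0])
```

With slack only on the first input, the score is 1 - (1/4)/2 = 0.875; a DMU with zero slacks scores exactly 1, i.e. SBM-efficient.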
65.
66.
The segmentation of customers on multiple bases is a pervasive problem in marketing research. For example, segmentation service providers partition customers using a variety of demographic and psychographic characteristics, as well as an array of consumption attributes such as brand loyalty, switching behavior, and product/service satisfaction. Unfortunately, the partitions obtained from multiple bases are often not in good agreement with one another, making effective segmentation a difficult managerial task. Therefore, the construction of segments using multiple independent bases often results in a need to establish a partition that represents an amalgamation or consensus of the individual partitions. In this paper, we compare three methods for finding a consensus partition. The first two methods are deterministic, do not use a statistical model in the development of the consensus partition, and are representative of methods used in commercial settings, whereas the third method is based on finite mixture modeling. In a large-scale simulation experiment the finite mixture model yielded better average recovery of holdout (validation) partitions than its non-model-based competitors. This result calls for important changes in the current practice of segmentation service providers that group customers for a variety of managerial goals related to the design and marketing of products and services.
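A sketch of one simple deterministic (non-model-based) consensus approach of the kind such comparisons include: build a co-association matrix (how often each pair of customers lands in the same segment across bases) and greedily merge strongly co-associated pairs. The threshold and toy partitions are illustrative, and this greedy merge is order-dependent; it is not the paper's finite mixture method.

```python
from itertools import combinations

def co_association(partitions):
    """Fraction of base partitions in which each pair of objects shares a cluster."""
    n = len(partitions[0])
    return {(i, j): sum(p[i] == p[j] for p in partitions) / len(partitions)
            for i, j in combinations(range(n), 2)}

def consensus_partition(partitions, threshold=0.5):
    """Greedy single-link merge: join objects whose co-association exceeds threshold."""
    n = len(partitions[0])
    labels = list(range(n))
    for (i, j), w in co_association(partitions).items():
        if w > threshold:
            old, new = labels[j], labels[i]
            labels = [new if l == old else l for l in labels]
    return labels

# three base partitions of four customers (labels are arbitrary cluster ids)
parts = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
labels_out = consensus_partition(parts)
```

Here customers 0 and 1 co-occur in 2 of 3 bases and 2 and 3 in all 3, so the consensus groups them as {0, 1} and {2, 3} despite the disagreement in the third base.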
67.
Educational data-mining tasks such as personalized question recommendation, question-difficulty prediction, and learner modeling rely on student response data and on knowledge-point annotations of the questions; at present, these annotations are produced manually. Automatically labeling question knowledge points with machine learning methods is therefore an urgent need. For automatic annotation over massive question banks, this paper proposes an ensemble-learning method for tagging questions with multiple knowledge points. First, the knowledge-point annotation problem is formally defined, and a knowledge graph of knowledge points is built from textbook tables of contents and domain knowledge to serve as the label set. Second, multiple support vector machines are trained as base classifiers under an ensemble-learning scheme, the best-performing base classifiers are selected and combined, and a multi-knowledge-point annotation model is constructed. Finally, using high-school mathematics questions from an online education platform's database as the experimental dataset, the proposed method predicts the knowledge points each question tests and achieves good results.
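Multi-knowledge-point tagging is naturally framed as one-vs-rest multi-label classification: one binary classifier per knowledge point. The sketch below shows that scheme with a toy keyword classifier standing in for the SVM base classifiers; the ensemble-selection step is omitted, and all data and names are illustrative.

```python
def train_one_vs_rest(docs, label_sets, labels, fit_binary):
    """One binary classifier per knowledge point: each learns 'has this label or not'."""
    return {lab: fit_binary(docs, [lab in s for s in label_sets]) for lab in labels}

def predict_labels(models, doc, predict_binary):
    """A question receives every knowledge point whose classifier fires."""
    return {lab for lab, m in models.items() if predict_binary(m, doc)}

# toy stand-in for an SVM: keep tokens seen only in positive examples
def fit_keyword(docs, ys):
    pos = set().union(*[set(d.split()) for d, y in zip(docs, ys) if y])
    neg = set().union(*[set(d.split()) for d, y in zip(docs, ys) if not y])
    return pos - neg

def predict_keyword(model, doc):
    return bool(model & set(doc.split()))

docs = ["solve quadratic equation", "triangle angle sum", "quadratic roots triangle area"]
tags = [{"algebra"}, {"geometry"}, {"algebra", "geometry"}]
models = train_one_vs_rest(docs, tags, ["algebra", "geometry"], fit_keyword)
predicted = sorted(predict_labels(models, "quadratic formula", predict_keyword))
```

The third training question carries both tags, which is exactly the multi-label case that a single multi-class classifier could not represent.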
68.
To address the randomness and uncertainty of short-term traffic flow, a combined forecasting model based on wavelet analysis and ensemble learning is proposed. First, the Mallat algorithm is applied to the mean travel-time series of the raw traffic-flow data for multiscale wavelet decomposition, and the component at each scale is reconstructed individually (single-branch reconstruction). Second, for each reconstructed single-branch series, an extreme gradient boosting model (extreme gradient boosting, XGBoost) is…
69.
Operations Research Letters, 2021, 49(5): 787–789
We study fair center based clustering problems. In an influential paper, Chierichetti, Kumar, Lattanzi and Vassilvitskii (NIPS 2017) consider the problem of finding a good clustering, say of women and men, such that every cluster contains an equal number of women and men. They were able to obtain a constant factor approximation for this problem for most center based k-clustering objectives such as k-median, k-means, and k-center. Despite considerable interest in extending this problem for multiple protected attributes (e.g. women and men, with or without citizenship), so far constant factor approximations for these problems have remained elusive except in special cases. We settle this question in the affirmative by giving the first constant factor approximation for a wide range of center based k-clustering objectives.
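The fairness constraint itself is easy to state in code. The sketch below checks the equal-representation requirement of the single-attribute setting (every cluster contains an equal number of points from each protected group); the group labels and clusters are illustrative, and this is a constraint checker, not the approximation algorithm.

```python
from collections import Counter

def is_balanced(clusters, group):
    """True iff each cluster contains an equal number of points from every
    protected group (the fairness constraint for one protected attribute)."""
    all_groups = set(group)
    for members in clusters:
        counts = Counter(group[i] for i in members)
        # every group must appear, and all group counts must be equal
        if len(counts) < len(all_groups) or len(set(counts.values())) > 1:
            return False
    return True

group = ["w", "m", "w", "m", "w", "m"]        # protected attribute per point
balanced = [[0, 1, 2, 3], [4, 5]]             # 2 women + 2 men, then 1 + 1
skewed = [[0, 2, 4], [1, 3, 5]]               # all women vs. all men
```

Extending this check to multiple protected attributes (e.g. gender and citizenship jointly) is exactly where, per the abstract, constant factor approximations had remained elusive.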
70.
Minghui Jiang. Discrete Applied Mathematics, 2007, 155(17): 2355–2361
Given a set S of n points in R^3, we wish to decide whether S has a subset of size at least k with Euclidean diameter at most r. It is unknown whether this decision problem is NP-hard. The two closely related optimization problems, (i) finding a largest subset of diameter at most r, and (ii) finding a subset of size at least k with the smallest diameter, were recently considered by Afshani and Chan. For maximizing the size, they presented several polynomial-time algorithms with constant approximation factors. For minimizing the diameter, they presented a polynomial-time approximation scheme. In this paper, we present improved approximation algorithms for both optimization problems. For maximizing the size, we present two algorithms: the first improves the approximation factor to 2.5 and the running time by an O(n) factor; the second improves the approximation factor to 2 and the running time by an O(n^2) factor. For minimizing the diameter, we improve the running time of the PTAS from O(n log n + 2^O(1/ε^3) n) to O(n log n + 2^O(1/(ε^1.5 log ε)) n).
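A simple baseline (not the paper's algorithm) illustrates the size-maximization problem: any subset of diameter at most r lies inside the ball of radius r around each of its own points, so the largest ball-restricted set is at least as large as the optimum, at the cost of a relaxed diameter bound of 2r.

```python
from itertools import combinations

def dist(p, q):
    """Euclidean distance in R^3 (or any dimension)."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def largest_ball_subset(points, r):
    """For each point p, collect all points within distance r of p and keep the
    largest such set. Its size is >= the optimum for diameter <= r, while its
    own diameter is <= 2r by the triangle inequality."""
    best = []
    for p in points:
        s = [q for q in points if dist(p, q) <= r]
        if len(s) > len(best):
            best = s
    return best

def diameter(points):
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
s = largest_ball_subset(pts, 1.0)
```

The factor-2.5 and factor-2 algorithms of the abstract tighten exactly this gap between the ball radius and the true diameter constraint.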