Sort order: 2,954 results found; search took 8 ms
171.
In data envelopment analysis (DEA), common-weight models are a standard approach to evaluating and ranking the efficiency of decision-making units (DMUs). Unlike traditional DEA models, a common-weight model evaluates all DMUs with a single shared set of input-output weights, which typically yields results that are both more discriminating and more objective. Taking into account each DMU's satisfaction with its rank position, this paper proposes two new classes of common-weight models, one maximizing the minimum satisfaction and one maximizing the average satisfaction. First, the stochastic multicriteria acceptability analysis (SMAA) method is used to compute the acceptability of each DMU for every rank position; next, an inverse weight-space analysis yields the common weight vectors that maximize the minimum and the average satisfaction, respectively; finally, the resulting common weights are used to compute each DMU's efficiency score and the corresponding ranking. A numerical example demonstrates the feasibility of the proposed SMAA-based common-weight models for DMU efficiency evaluation and ranking.
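The last stage of the procedure, scoring and ranking every DMU under one shared weight vector, is easy to sketch. The data and weights below are hypothetical, and the SMAA-based search that actually produces the common weights is not reproduced here:

```python
import numpy as np

def common_weight_efficiency(inputs, outputs, w_in, w_out):
    """Efficiency of every DMU under one shared (common) weight vector:
    weighted output sum divided by weighted input sum."""
    inputs = np.asarray(inputs, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    return (outputs @ w_out) / (inputs @ w_in)

# Hypothetical data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0]])
Y = np.array([[4.0], [5.0], [3.0], [6.0]])
w_in = np.array([0.5, 0.5])   # common input weights (assumed, not optimized here)
w_out = np.array([1.0])       # common output weight

eff = common_weight_efficiency(X, Y, w_in, w_out)
ranking = np.argsort(-eff)    # DMU indices, most efficient first
```

Whatever weight vector the SMAA analysis returns would simply replace `w_in` and `w_out` here.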
172.
173.
Marlos Viana, Methodology and Computing in Applied Probability, 2007, 9(2): 325-341
This paper describes some of the basic applications of the algebraic theory of canonical decomposition to the analysis of data. The notions of structured data and symmetry studies are discussed and applied to demonstrate their role in well-known principles of analysis of variance and their applicability in more general experimental settings.
174.
The paper is devoted to statistical nonparametric estimation of multivariate distribution density. The influence of data pre-clustering on the estimation accuracy of multimodal density is analyzed by means of the Monte Carlo method. It is shown that soft clustering is more advantageous than hard clustering: while a moderate increase in the number of clusters also increases the calculation time, it considerably reduces the estimation error.
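A one-dimensional toy version of the comparison can be sketched as follows. The Gaussian responsibility model (with scale `tau`) and the per-cluster Silverman-style bandwidth are assumptions made for brevity, not the estimator studied in the paper:

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def preclustered_kde(x_eval, data, centers, soft=True, tau=1.0):
    """1D kernel density estimate after pre-clustering: each cluster
    contributes its own component with its own bandwidth.  soft=True
    weights every point by a Gaussian responsibility; soft=False
    assigns each point wholly to its nearest center."""
    x_eval = np.asarray(x_eval, float)
    data = np.asarray(data, float)
    centers = np.asarray(centers, float)
    d2 = (data[:, None] - centers[None, :])**2
    if soft:
        r = np.exp(-d2 / (2.0 * tau**2))
        r /= r.sum(axis=1, keepdims=True)            # responsibilities sum to 1
    else:
        r = np.zeros_like(d2)
        r[np.arange(len(data)), d2.argmin(axis=1)] = 1.0
    dens = np.zeros_like(x_eval)
    for k in range(len(centers)):
        w = r[:, k]
        mu = (w @ data) / w.sum()
        sd = np.sqrt((w @ (data - mu)**2) / w.sum())
        h = max(1.06 * sd * w.sum()**(-0.2), 1e-2)   # per-cluster bandwidth
        dens += gauss((x_eval[:, None] - data[None, :]) / h) @ w / (h * len(data))
    return dens

data = np.array([-2.1, -1.9, -2.0, 1.9, 2.0, 2.1])  # two clear modes
x = np.linspace(-8.0, 8.0, 1601)
soft_d = preclustered_kde(x, data, centers=[-2.0, 2.0], soft=True)
hard_d = preclustered_kde(x, data, centers=[-2.0, 2.0], soft=False)
```

Because the responsibilities sum to one over clusters, both variants remain proper density estimates; the two differ only in how border points are shared between components.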
175.
In this paper, an adaptive finite element analysis is presented based on error estimation, adaptive mesh refinement, and data transfer for enriched plasticity continua in the modelling of strain localization. Since classical continuum models suffer from pathological mesh dependence under strain softening, the governing equations are regularized by adding rotational degrees of freedom to the conventional ones. An adaptive strategy based on element elongation is applied to compute the distribution of required element sizes from the estimated error distribution. Once a new mesh is generated, state variables and history-dependent variables are mapped from the old finite element mesh to the new one: the values of internal variables available at the Gauss points are first projected to the nodes of the old mesh, those nodal values are then transferred to the nodes of the new mesh, and finally the values at the Gauss points of the new elements are determined from the nodal values of the new mesh. The efficiency of the proposed model and computational algorithms is demonstrated by several numerical examples.
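The three-step history transfer described above can be sketched for the simplest possible case: 1D linear elements with a single Gauss point per element. The lumped nodal projection (averaging adjacent element values) is an assumption chosen for brevity; the paper's actual setting is a regularized multi-dimensional continuum:

```python
import numpy as np

def transfer_gauss_data(old_nodes, old_gauss_vals, new_nodes):
    """Three-step history transfer for 1D linear elements with one
    Gauss point per element (taken at the element midpoint):
      1. project Gauss-point values to the nodes of the old mesh,
      2. interpolate these nodal values at the nodes of the new mesh,
      3. evaluate at the Gauss points of the new mesh."""
    old_nodes = np.asarray(old_nodes, float)
    new_nodes = np.asarray(new_nodes, float)
    vals = np.asarray(old_gauss_vals, float)
    # Step 1: lumped projection -- an interior node takes the average of
    # its two adjacent element values, a boundary node the nearest one.
    nodal_old = np.empty_like(old_nodes)
    nodal_old[1:-1] = 0.5 * (vals[:-1] + vals[1:])
    nodal_old[0], nodal_old[-1] = vals[0], vals[-1]
    # Step 2: old nodal field -> new nodes (piecewise-linear interpolation).
    nodal_new = np.interp(new_nodes, old_nodes, nodal_old)
    # Step 3: evaluate the new nodal field at the new Gauss points.
    gp_new = 0.5 * (new_nodes[:-1] + new_nodes[1:])
    return np.interp(gp_new, new_nodes, nodal_new)

# A constant field must survive the transfer exactly.
const = transfer_gauss_data([0.0, 0.5, 1.0], [3.0, 3.0], np.linspace(0.0, 1.0, 5))
# A linear field is reproduced exactly away from the boundary.
old = np.linspace(0.0, 1.0, 11)
lin = transfer_gauss_data(old, 2.0 * 0.5 * (old[:-1] + old[1:]), [0.3, 0.5, 0.7])
```

Constant and (interior) linear fields passing through unchanged is the usual sanity check for such transfer operators.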
176.
This paper describes the k-means range algorithm, a combination of the partitional k-means clustering algorithm with a well-known spatial data structure, the range tree, which supports fast range searches. It offers a real-time solution for distributed interactive decision aids in e-commerce: the consumer can model his preferences along multiple dimensions, search for product information, and then cluster the retrieved products to support purchase decisions. The paper also discusses the implications and advantages of this approach for on-line shopping environments and consumer decision aids in traditional and mobile e-commerce applications.
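The retrieve-then-cluster pipeline can be sketched with a 1D stand-in: a binary search over one sorted attribute plays the role of the range tree, and a plain Lloyd k-means clusters only the retrieved products. The catalogue and the price range are hypothetical:

```python
import bisect
import numpy as np

def range_search(sorted_vals, lo, hi):
    """Binary-search range query on one sorted attribute -- a 1D
    stand-in for the range tree of the paper, kept simple on purpose."""
    i = bisect.bisect_left(sorted_vals, lo)
    j = bisect.bisect_right(sorted_vals, hi)
    return i, j

def kmeans(points, k, iters=20):
    """Plain Lloyd k-means with a deterministic spread-out initialization."""
    pts = np.asarray(points, float)
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pts[labels == c].mean(axis=0)
    return labels, centers

# Hypothetical catalogue of (price, rating) pairs, sorted by price.
catalogue = np.array([[10, 4.0], [12, 3.5], [30, 4.8], [33, 4.9],
                      [70, 2.0], [75, 2.2], [78, 2.1]])
prices = catalogue[:, 0].tolist()
i, j = range_search(prices, 25, 80)            # the consumer's price range
labels, centers = kmeans(catalogue[i:j], k=2)  # cluster only the retrieved products
```

Only the products inside the queried range are clustered, which is the point of combining the two structures.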
177.
A general framework is developed to treat inverse problems with parameters that are random fields. It involves a sampling method that exploits the sensitivity derivatives of the control variable with respect to the random parameters. As the sensitivity derivatives are computed only at the mean values of the relevant parameters, the related extra cost of the present method is a fraction of the total cost of the Monte Carlo method. The effectiveness of the method is demonstrated on an example problem governed by the Burgers equation with random viscosity. It is specifically shown that this method is two orders of magnitude more efficient than the conventional Monte Carlo method. In other words, for a given number of samples, the present method yields two orders of magnitude higher accuracy than its conventional counterpart.
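The core idea, evaluating the model and its sensitivity derivative once at the mean parameter and then sampling a cheap first-order surrogate, can be sketched on a scalar toy problem. The model `q(nu) = 1/nu` is an assumption standing in for the Burgers solve:

```python
import numpy as np

def taylor_mc(q, dq, theta_mean, sigma, n=100_000, seed=1):
    """Monte Carlo through a first-order surrogate: the model and its
    sensitivity derivative are evaluated once, at the mean parameter,
    and every sample then costs only a multiply-add."""
    rng = np.random.default_rng(seed)
    q0, g0 = q(theta_mean), dq(theta_mean)    # the single "expensive" solve
    theta = rng.normal(theta_mean, sigma, n)  # random parameter samples
    samples = q0 + g0 * (theta - theta_mean)  # cheap surrogate evaluations
    return samples.mean(), samples.std()

# Toy stand-in for the Burgers solve (an assumption made here): a scalar
# output that decays with the random viscosity nu, q(nu) = 1 / nu.
q = lambda nu: 1.0 / nu
dq = lambda nu: -1.0 / nu**2

mean_est, std_est = taylor_mc(q, dq, theta_mean=0.1, sigma=0.005)
```

For this linearized toy model the statistics are known in closed form (mean 10, standard deviation 0.5), which makes the surrogate easy to verify; a real run would replace `q` and `dq` with the flow solver and its sensitivity equations.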
178.
Fast detection of string differences is a prerequisite for string clustering problems, for example the identification of duplicate information in the data-cleansing stage of the data mining process. The relevant algorithms allow large-scale clustering techniques to be applied to create clusters of similar strings. The vast majority of comparisons in such cases are between very dissimilar strings, so methods that perform better at detecting large differences are preferable. This paper presents approaches that meet this requirement, based on a reformulation of the underlying shortest-path problem; it is believed that such methods can lead to a family of new algorithms. As an example, an upper-bound algorithm is presented which produces promising results.
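The role of an upper bound is easy to illustrate: a cheap bound that already exceeds a dissimilarity threshold lets very different strings be rejected without running the full dynamic program. The linear-time bound below is a deliberately simple stand-in, not the shortest-path-based algorithm of the paper:

```python
def edit_distance_upper_bound(a, b):
    """Cheap O(n) upper bound on the Levenshtein distance: substitute at
    mismatching aligned positions, then insert the unmatched tail of the
    longer string.  (A deliberately simple illustrative bound.)"""
    if len(a) > len(b):
        a, b = b, a
    mismatches = sum(x != y for x, y in zip(a, b))
    return mismatches + (len(b) - len(a))

def levenshtein(a, b):
    """Exact dynamic-programming edit distance, for comparison."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]
```

The bound is valid because the edit script it counts (substitutions plus tail insertions) is one feasible path in the underlying shortest-path formulation.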
179.
In this paper, various ensemble learning methods from machine learning and statistics are considered and applied to the customer choice modeling problem. Ensemble learning usually improves the prediction quality of flexible models such as decision trees. We give experimental results for two real-life marketing datasets using decision trees, ensemble versions of decision trees, and the logistic regression model, the standard approach for this problem. The ensemble models are found to improve upon individual decision trees and to outperform logistic regression.
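The basic ensemble mechanism, bootstrap aggregation with majority voting, can be sketched with decision stumps standing in for the paper's decision trees; the two-class dataset below is hypothetical:

```python
import numpy as np

def stump_fit(X, y):
    """Fit a decision stump: the best single-feature threshold split."""
    best = (0, 0.0, 1, -1.0)                   # feature, threshold, sign, accuracy
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                pred = (s * (X[:, f] - t) > 0).astype(int)
                acc = float((pred == y).mean())
                if acc > best[3]:
                    best = (f, t, s, acc)
    return best[:3]

def stump_predict(model, X):
    f, t, s = model
    return (s * (X[:, f] - t) > 0).astype(int)

def bagged_stumps(X, y, n_models=25, seed=0):
    """Bootstrap aggregation: fit one stump per bootstrap resample and
    predict by majority vote over the ensemble."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))  # bootstrap resample
        models.append(stump_fit(X[idx], y[idx]))
    def predict(Xq):
        votes = np.mean([stump_predict(m, Xq) for m in models], axis=0)
        return (votes > 0.5).astype(int)
    return predict

# Hypothetical two-class data (a stand-in for the choice datasets).
X = np.array([[0.0], [1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1])
predict = bagged_stumps(X, y)
```

Averaging over resampled fits is exactly what stabilizes flexible, high-variance base learners such as trees.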
180.
Applicants for credit have to provide information for the risk assessment process. In the current conditions of a saturated consumer lending market, and hence falling “take” rates, can such information be used to assess the probability of a customer accepting the offer?