101.
《Applied Mathematical Modelling》2014,38(15-16):3890-3896
Data envelopment analysis (DEA) is a linear programming technique used to measure the relative efficiency of decision-making units (DMUs). Liu et al. (2008) [13] used common weights analysis (CWA) to generate a common set of weights (CSW) via linear programming, classified the DMUs as CWA-efficient or CWA-inefficient, and ranked them using CWA-ranking rules. The aim of this study is to show that the criteria used by Liu et al. are not theoretically strong enough to discriminate among CWA-efficient DMUs with equal efficiency. Moreover, there is no guarantee that their proposed model selects a single solution from among the alternative optimal solutions, even though the optimal solution is treated as if it were unique. This study shows that the proposal by Liu et al. is not correct in general; the claims made against the theorem proposed by Liu et al. are fully supported by two counterexamples.
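For context, the generic DEA multiplier LP from which common-weight approaches depart (the standard CCR model, not Liu et al.'s CWA formulation) evaluates DMU o with inputs x_{io} and outputs y_{ro} as

\[
\max_{u,v}\ \sum_{r} u_r y_{ro}
\quad\text{s.t.}\quad
\sum_{i} v_i x_{io} = 1,\qquad
\sum_{r} u_r y_{rj} - \sum_{i} v_i x_{ij} \le 0 \ \ \forall j,\qquad
u_r,\, v_i \ge 0 .
\]

A common set of weights (CSW) instead fixes a single (u, v) for all DMUs, which is where the uniqueness issues discussed above arise.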
102.
So far, only full frontier nonparametric methods, particularly data envelopment analysis (DEA), have been applied in the nonparametric literature to search for economies of scope and scale. However, these methods have drawbacks that can lead to biased results. This paper proposes a methodology based on more robust partial frontier nonparametric methods to look for scope and scale economies. The methodology makes it possible to assess the robustness of these economies and, in particular, the influence that extreme data or outliers may have on them. The effect of imposing convexity on the firms' production set was also investigated. The methodology was applied to the water utilities that operated in Portugal between 2002 and 2008. There is evidence of economies of vertical integration and economies of scale in drinking water supply utilities and in water and wastewater utilities operating mainly in the retail segment. Economies of scale were found in water and wastewater utilities operating exclusively in the wholesale segment, and in some of these utilities diseconomies of scope were also found. The proposed methodology also allowed us to conclude that the presence of some smaller utilities lowers the minimum optimal scale.
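As a rough illustration of the partial frontier idea, here is a minimal Monte Carlo sketch of an input-oriented order-m efficiency score in Python (the function name and the toy data are our own; the paper's actual estimator and orientation may differ):

```python
import numpy as np

def order_m_input_eff(x0, y0, X, Y, m=25, draws=500, seed=0):
    """Monte Carlo sketch of an input-oriented order-m efficiency score
    (partial frontier in the spirit of Cazals, Florens and Simar, 2002):
    benchmark (x0, y0) against the expected best of m firms drawn at
    random from those producing at least y0, rather than the full frontier."""
    rng = np.random.default_rng(seed)
    dominating = X[(Y >= y0).all(axis=1)]   # firms with output >= y0
    if len(dominating) == 0:
        return np.nan
    scores = np.empty(draws)
    for d in range(draws):
        idx = rng.integers(0, len(dominating), size=m)
        # radial input contraction needed to match each drawn firm
        theta = (dominating[idx] / x0).max(axis=1)
        scores[d] = theta.min()             # best of the m draws
    return scores.mean()                    # values > 1 are possible (super-efficiency)

# toy data: 50 firms, 2 inputs, 1 output (hypothetical)
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, size=(50, 2))
Y = X.sum(axis=1, keepdims=True) * rng.uniform(0.5, 1.0, (50, 1))
print(order_m_input_eff(X[0], Y[0], X, Y))
```

Because the benchmark is the expected best of only m random peers, a single extreme observation no longer pins down the frontier, which is what makes the resulting scope and scale estimates more robust to outliers.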
103.
The variable returns to scale data envelopment analysis (DEA) model is developed under the maintained hypothesis of convexity in input–output space. This hypothesis is not consistent with standard microeconomic production theory, which posits an S-shaped production frontier, i.e. a technology that obeys the Regular Ultra Passum Law. Consequently, measures of technical efficiency that assume convexity are biased downward. In this paper, we provide a more general DEA model that allows the S-shape.
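For reference, the input-oriented variable returns to scale (BCC) envelopment LP, whose convexity assumption (the constraint \(\sum_j \lambda_j = 1\) over nonnegative intensities, which envelops the data in a convex hull) is what the proposed model relaxes:

\[
\min_{\theta,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j} \lambda_j x_{ij} \le \theta x_{io}\ \ (i = 1,\dots,m),\qquad
\sum_{j} \lambda_j y_{rj} \ge y_{ro}\ \ (r = 1,\dots,s),\qquad
\sum_{j} \lambda_j = 1,\qquad \lambda_j \ge 0 .
\]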
104.
The identification of different dynamics in sequential data has become an everyday need in scientific fields such as marketing, bioinformatics, finance, and the social sciences. In contrast to cross-sectional or static data, such observations (also known as stream data, temporal data, longitudinal data, or repeated measures) are more challenging because data dependency must be incorporated into the clustering process. This research focuses on clustering categorical sequences. The method proposed here combines model-based and heuristic clustering. In the first step, the categorical sequences are transformed by an extension of the hidden Markov model into a probabilistic space in which a symmetric Kullback–Leibler distance can operate. In the second step, the sequences are clustered by hierarchical clustering on the matrix of distances. The paper illustrates the potential of this hybrid approach using a synthetic data set as well as the well-known Microsoft dataset of website users' browsing patterns and a survey on job career dynamics.
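A minimal sketch of the second (heuristic) step, assuming the HMM step has already mapped each sequence to a discrete probability vector (the Dirichlet draws below are placeholders for those vectors):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two discrete distributions."""
    p = np.asarray(p) + eps
    q = np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# reps[i] stands in for the probabilistic representation of sequence i
reps = np.random.default_rng(0).dirichlet(np.ones(5), size=20)
n = len(reps)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = sym_kl(reps[i], reps[j])

# hierarchical clustering on the condensed pairwise-distance matrix
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

Symmetrizing the KL divergence matters here because hierarchical clustering expects a proper distance matrix, while plain KL is asymmetric.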
105.
In this paper we consider aggregate Malmquist productivity index measures that allow inputs to be reallocated within the group (in the output orientation). This merges the single-period aggregation results with input reallocation of Nesterenko and Zelenyuk (2007) with the aggregate Malmquist productivity index results of Zelenyuk (2006), yielding aggregate Malmquist productivity indexes that are justified by economic theory, consistent with previous aggregation results, and that maintain decompositions analogous to those of the original measures. Such measures are of direct relevance to firms or countries that have merged (making input reallocation possible), allowing them to measure potential productivity gains and how these have been realised (or not) over time.
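For context, the standard firm-level output-oriented Malmquist index that the aggregate measures build on, where \(D_o^t\) denotes the period-t output distance function:

\[
M_o = \left[
\frac{D_o^{t}(x^{t+1}, y^{t+1})}{D_o^{t}(x^{t}, y^{t})}
\cdot
\frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^{t}, y^{t})}
\right]^{1/2},
\]

which decomposes into an efficiency-change term, \(D_o^{t+1}(x^{t+1},y^{t+1}) / D_o^{t}(x^{t},y^{t})\), times a geometric-mean technical-change term; the aggregate indexes discussed above preserve an analogous decomposition at the group level.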
106.
Analytical processing on multi-dimensional data is performed over a data warehouse, which is generally presented in the form of cuboids. The central theme of the data warehouse is represented by a fact table, which is built from the related dimension tables. The cuboid that corresponds to the fact table is called the base cuboid. All possible cuboids can be generated from the base cuboid by successive roll-up operations, and together they form a lattice, as sketched below. Some dimensions may have a concept hierarchy, i.e. multiple granularities of data, which means a dimension can be represented in more than one abstract form. Typically, neither all the cuboids nor every level of the concept hierarchies is required for a specific business process. The cuboids reside in different layers of the memory hierarchy, such as cache memory, primary memory, and secondary memory. This research dynamically finds the most cost-effective path through the lattice of cuboids, based on the concept hierarchies, to minimize query access time, using knowledge of where the cuboids are located among the memory elements.
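A small sketch of the lattice: with d dimensions, the 2^d possible group-by subsets generated by successive roll-ups form the cuboid lattice (the dimension names here are hypothetical):

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Enumerate all cuboids (group-by subsets) derivable from the base
    cuboid by successive roll-ups; the full set forms the lattice."""
    dims = list(dimensions)
    for r in range(len(dims), -1, -1):
        for subset in combinations(dims, r):
            yield subset  # full tuple = base cuboid, () = apex cuboid

for cuboid in cuboid_lattice(["time", "item", "location"]):
    print(cuboid)  # 2**3 = 8 cuboids
```

Concept hierarchies multiply this further (e.g. time at day, month, or year granularity), which is why choosing a cost-effective path through the lattice, rather than materializing everything, matters for query access time.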
107.
This paper presents a parameter adaptive harmony search algorithm (PAHS) for solving optimization problems. In the literature, the two important parameters of the harmony search algorithm, the Harmony Memory Consideration Rate (HMCR) and the Pitch Adjusting Rate (PAR), have either been kept constant or only PAR has been changed dynamically while HMCR remained fixed; in the proposed PAHS, both parameters are allowed to change dynamically during improvisation, in order to better reach the global optimum. Four cases of linear and exponential changes are explored. The proposed algorithm is evaluated on 15 standard benchmark functions with various characteristics, and its performance is compared with three existing harmony search algorithms. Experimental results reveal that the proposed algorithm outperforms the existing approaches on the 15 benchmark functions. The effects of scalability, noise, and harmony memory size are also investigated for the four HS approaches. The proposed algorithm is further employed for data clustering on five real-life datasets selected from the UCI Machine Learning Repository; the results show that, for data clustering, it achieves better results than the other algorithms.
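A minimal sketch of the dynamic-parameter idea, with hypothetical bounds (the paper explores four combinations of linear and exponential variation for HMCR and PAR; the exact schedules used there may differ):

```python
import math

def linear_schedule(v_min, v_max, t, t_max):
    """Linearly vary a parameter from v_min to v_max over t_max improvisations."""
    return v_min + (v_max - v_min) * t / t_max

def exponential_schedule(v_min, v_max, t, t_max):
    """Exponentially vary a parameter from v_min to v_max."""
    return v_min * math.exp(math.log(v_max / v_min) * t / t_max)

# hypothetical bounds for illustration only
for t in range(0, 1001, 250):
    hmcr = linear_schedule(0.70, 0.99, t, 1000)
    par = exponential_schedule(0.10, 0.99, t, 1000)
    print(f"t={t:4d}  HMCR={hmcr:.3f}  PAR={par:.3f}")
```

The intuition is the usual exploration-to-exploitation trade-off: low HMCR early encourages exploring new harmonies, while values rising toward 1 late in the run exploit the harmony memory.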
108.
In DEA, we have two measures of technical efficiency with different characteristics: radial and non-radial. In this paper we combine them into a composite model called the "epsilon-based measure (EBM)." For this purpose we introduce two parameters that connect the radial and non-radial models. These two parameters are obtained from a newly defined affinity index between inputs or outputs, together with a principal component analysis of the affinity matrix. EBM thus takes into account the diversity of the input/output data and their relative importance when measuring technical efficiency.
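A loose sketch, under stated assumptions, of how the two EBM parameters might be derived from a principal component analysis of an affinity matrix; we substitute a plain correlation matrix of log-inputs for the paper's affinity index, so this is illustrative only:

```python
import numpy as np

def ebm_parameters(X):
    """Hypothetical sketch: derive weights w and epsilon from the principal
    eigenpair of a correlation-style affinity matrix of the inputs.
    The paper's affinity index is defined differently; plain Pearson
    correlation of log-inputs (X must be positive) is a stand-in here."""
    m = X.shape[1]
    A = np.corrcoef(np.log(X), rowvar=False)      # stand-in affinity matrix
    eigvals, eigvecs = np.linalg.eigh(A)
    rho, v = eigvals[-1], np.abs(eigvecs[:, -1])  # principal eigenpair
    w = v / v.sum()                               # relative importance weights
    eps = (m - rho) / (m - 1) if m > 1 else 0.0   # 0 -> radial, 1 -> non-radial
    return w, eps

X = np.random.default_rng(0).uniform(1, 10, size=(30, 3))
print(ebm_parameters(X))
```

The design intuition is that when inputs move together (high affinity, rho near m), a radial contraction suffices (epsilon near 0); when they are diverse, the non-radial slack terms get more weight.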
109.
The validity of many efficiency measurement methods relies upon the assumption that variables such as input quantities and output mixes are independent of (or uncorrelated with) technical efficiency; however, few studies have attempted to test these assumptions. In a recent paper, Wilson (2003) investigates a number of independence tests and finds that they have poor size properties and low power in moderate sample sizes. In this study we discuss the implications of these assumptions in three situations: (i) bootstrapping non-parametric efficiency models; (ii) estimating stochastic frontier models; and (iii) obtaining aggregate measures of industry efficiency. We propose a semi-parametric Hausman-type asymptotic test for linear independence (uncorrelatedness), and use a Monte Carlo experiment to show that it has good size and power properties in finite samples. We also describe how the test can be generalized to detect higher-order dependencies, such as heteroscedasticity, so that it can be used to test for (full) independence when the efficiency distribution has a finite number of moments. Finally, an empirical illustration is provided using data on US electric power generation.
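A minimal sketch of the kind of Monte Carlo size/power check described, using a simple correlation test as a stand-in for the paper's Hausman-type statistic (the data-generating processes below are hypothetical):

```python
import numpy as np
from scipy import stats

def mc_rejection_rate(test_pval_fn, dgp_fn, n=200, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo rejection rate of a test: under an H0 data-generating
    process this estimates its size; under an alternative, its power."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x, u = dgp_fn(rng, n)
        rejections += test_pval_fn(x, u) < alpha
    return rejections / reps

def dgp_h0(rng, n):
    """Hypothetical H0: input levels drawn independently of inefficiency."""
    return rng.lognormal(size=n), rng.exponential(size=n)

def corr_pval(x, u):
    """Stand-in test: p-value of the sample correlation (not the paper's statistic)."""
    return stats.pearsonr(x, u)[1]

print(mc_rejection_rate(corr_pval, dgp_h0))  # should be close to alpha = 0.05
```

A test has good size if this rate stays near alpha under H0, and good power if it rises quickly when the DGP makes u depend on x.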
110.
Analytical chemistry often involves a large amount of experimental data, and the reliability and accuracy of the experimental results depend on whether the original data are properly recorded and calculated. Starting from the importance of significant figures, this paper analyzes problems related to significant figures that are frequently encountered in teaching and offers solutions, aiming to help students learn and master the concept of significant figures, the rounding of numerical values, the rules of arithmetic operations, and data processing.
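A small sketch of rounding to a fixed number of significant figures (Python's built-in round applies round-half-to-even, which is consistent with the rounding convention commonly taught in analytical chemistry):

```python
import math

def round_sig(x, sig=3):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

print(round_sig(0.0123456))  # 0.0123
print(round_sig(98765.0))    # 98800.0
print(round_sig(2.5, 1))     # 2.0 (round half to even)
```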