Full-text access type
Paid full text | 4059 articles |
Free | 222 articles |
Domestic free | 56 articles |
Subject classification
Chemistry | 423 articles |
Crystallography | 3 articles |
Mechanics | 41 articles |
General | 13 articles |
Mathematics | 818 articles |
Physics | 599 articles |
Radio engineering | 2440 articles |
Publication year
2024 | 10 articles |
2023 | 45 articles |
2022 | 59 articles |
2021 | 121 articles |
2020 | 77 articles |
2019 | 66 articles |
2018 | 82 articles |
2017 | 126 articles |
2016 | 146 articles |
2015 | 137 articles |
2014 | 305 articles |
2013 | 324 articles |
2012 | 259 articles |
2011 | 231 articles |
2010 | 179 articles |
2009 | 234 articles |
2008 | 184 articles |
2007 | 220 articles |
2006 | 208 articles |
2005 | 197 articles |
2004 | 184 articles |
2003 | 132 articles |
2002 | 133 articles |
2001 | 108 articles |
2000 | 60 articles |
1999 | 70 articles |
1998 | 62 articles |
1997 | 82 articles |
1996 | 66 articles |
1995 | 37 articles |
1994 | 26 articles |
1993 | 29 articles |
1992 | 20 articles |
1991 | 23 articles |
1990 | 17 articles |
1989 | 11 articles |
1988 | 13 articles |
1987 | 7 articles |
1986 | 9 articles |
1985 | 8 articles |
1984 | 2 articles |
1983 | 3 articles |
1982 | 4 articles |
1981 | 2 articles |
1980 | 6 articles |
1979 | 3 articles |
1978 | 2 articles |
1977 | 2 articles |
1973 | 2 articles |
1966 | 1 article |
Sort order: 4337 results found (search time: 0 ms)
121.
Paola Ravelojaona 《European Journal of Operational Research》2019,272(2):780-791
This paper presents non-linear CES (Constant Elasticity of Substitution)–CET (Constant Elasticity of Transformation) Directional Distance Functions. These measures inherit the structure of the standard Directional Distance Functions and that of the CES–CET technology, and they allow non-parametric estimation of efficiency scores through linear programming. Besides, the CES–CET technology gives the opportunity to explore the α-returns-to-scale assumption for the new distance functions. The duality theory is investigated through pseudo profit, cost and revenue functions; the dual standpoint provides non-linear adjusted prices that can arise in non-linear pricing practices. An application is proposed as an illustrative example of the primal CES–CET Directional Distance Functions.
122.
José Manuel Cordero, Edurne Alonso-Morán, Roberto Nuño-Solinis, Juan F. Orueta, Regina Sauto Arce 《European Journal of Operational Research》2015
This paper uses a fully nonparametric approach to estimate efficiency measures for primary care units, incorporating the effect of (exogenous) environmental factors. This methodology allows us to account for different types of variables (continuous and discrete) describing the main characteristics of the patients served by those providers. In addition, we use an extension of this nonparametric approach to deal with the presence of undesirable outputs in the data, represented by the rates of hospitalization for ambulatory care sensitive conditions (ACSC). The empirical results show that all the exogenous variables considered have a significant and negative effect on the efficiency estimates.
123.
Nicky Rogge, Richard Simper, Marijn Verschelde, Maximilian Hall 《European Journal of Operational Research》2015
This paper fills a noticeable gap in the current economics and penology literature by proposing new performance-enhancing policies based on an efficiency analysis of a sample of male prisons in England and Wales over the period 2009/10. In addition, we advance the empirical literature by integrating the managerialism of four strategic functions of prisons: employment and accommodation, capacity utilization, quality of life in prison, and the rehabilitation and re-offending of prisoners. By estimating multiple models focussing on these different areas, we find that some prisons are more efficient than other establishments. In terms of policy, it is therefore necessary not only to consider an overall performance metric for individual prisons, as currently produced annually by the UK Ministry of Justice, but also to look into the administration and managerialism of their main functions from both a business and a public-policy perspective. Indeed, it is further necessary to view prisons together and not as single entities, so as to obtain a best-practice frontier for the different operations that management undertakes in English and Welsh prisons.
124.
Therese Biedl, Angèle M. Hamel, Alejandro López-Ortiz 《Discrete Applied Mathematics》2010,158(15):1579-1586
We consider the problem of sorting a permutation using a network of data structures, as introduced by Knuth and Tarjan. In general, the model as considered previously was restricted to networks that are directed acyclic graphs (DAGs) of stacks and/or queues. In this paper we study which are the smallest general graphs that can sort an arbitrary permutation, and how efficient they are. We show that certain two-node graphs can sort in time Θ(n²) and that no simpler graph can sort all permutations. We then show that certain three-node graphs sort in time Ω(n^(3/2)), and that there exist graphs of k nodes which can sort in time Θ(n log_k n), which is optimal.
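The simplest instance of this model, Knuth's single-stack sort, can be sketched in a few lines. The function name and the boolean interface below are our own for illustration; a permutation of 1..n is sortable by one stack exactly when it avoids the pattern 231:

```python
def stack_sort(perm):
    """Greedy one-stack sort (Knuth): push each input element, and pop to
    the output whenever the stack top is the next value needed.
    Returns True iff the permutation of 1..n is fully sorted this way
    (equivalently, iff it avoids the pattern 231)."""
    stack, output, need = [], [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == need:
            output.append(stack.pop())
            need += 1
    # Drain whatever the stack can still release in order.
    while stack and stack[-1] == need:
        output.append(stack.pop())
        need += 1
    return output == sorted(perm)
```

For example, [3, 1, 2] is one-stack sortable, while [2, 3, 1] (which is itself the pattern 231) is not; the general graphs studied in the paper extend this single node to arbitrary networks.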
125.
The recycling of urban solid wastes is a critical point for “closing the supply chains” of many products, mainly when their value cannot be completely recovered after use. In addition to environmental aspects, the process of recycling involves technical, economic, social and political challenges for public management. For most urban solid wastes, end-of-life management depends on selective collection to start the recycling process; for this reason, efficient selective collection has become a mainstream tool in the Brazilian National Solid Waste Policy. In this paper, we study models that can support the location planning of sorting centers in a medium-sized Brazilian city that has been discussing waste-management policies over the past few years. The main goal of this work is to provide an optimal location-planning design for recycling urban solid wastes that falls within the financial budget agreed between the municipal government and the National Bank for Economic and Social Development. Moreover, facility planning involves deciding on the best sites for locating sorting centers over the four-year period, as well as finding ways to meet the demand for collecting recyclable materials, given that economic factors, consumer behavior and environmental awareness are inherently uncertain future outcomes. To deal with these issues, we propose a deterministic version of the classical capacitated facility location problem, together with a two-stage recourse formulation and risk-averse models that reduce the variability of the second-stage costs. Numerical results suggest that it is possible to improve the current selective collection, as well as to hedge against data uncertainty, by using stochastic and risk-averse optimization models.
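For intuition, the deterministic core of such a model, here simplified to an uncapacitated facility-location problem, can be solved by brute force on a toy instance. The data and function below are illustrative assumptions only; the paper's actual models are capacitated, multi-period and stochastic, and require MIP or stochastic-programming solvers:

```python
from itertools import combinations

def best_plan(open_cost, serve_cost):
    """Brute-force uncapacitated facility location.
    open_cost[i]  : fixed cost of opening sorting center i
    serve_cost[j] : list of costs of serving district j from each center
    Each district is served by its cheapest open center.
    Returns (minimum total cost, tuple of opened centers)."""
    m = len(open_cost)
    best = (float("inf"), ())
    for k in range(1, m + 1):
        for subset in combinations(range(m), k):
            cost = sum(open_cost[i] for i in subset)
            cost += sum(min(row[i] for i in subset) for row in serve_cost)
            best = min(best, (cost, subset))
    return best
```

On a hypothetical instance with three candidate centers and four districts, `best_plan([10, 12, 8], [[2, 9, 7], [8, 3, 6], [7, 4, 2], [5, 8, 3]])` returns `(26, (2,))`: opening only the third center is cheapest.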
126.
《Applied Mathematical Modelling》2014,38(15-16):3890-3896
Data envelopment analysis (DEA) is a linear programming technique that is used to measure the relative efficiency of decision-making units (DMUs). Liu et al. (2008) [13] used the common weights analysis (CWA) methodology to generate a common set of weights (CSW) using linear programming. They classified the DMUs as CWA-efficient and CWA-inefficient and ranked them using CWA-ranking rules. The aim of this study is to show that the criteria used by Liu et al. are not theoretically strong enough to discriminate among CWA-efficient DMUs with equal efficiency. Moreover, there is no guarantee that their proposed model can select one optimal solution from among the alternatives, yet the optimal solution is treated as if it were unique. This study shows that the proposal by Liu et al. is not correct in general; the claims against the theorem proposed by Liu et al. are fully supported by two counterexamples.
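To fix ideas: once a common set of weights has been chosen (Liu et al. obtain it from a linear program; here the weights are simply given), scoring every DMU reduces to one ratio computation. This is a minimal sketch with made-up data, not the CWA model itself:

```python
def cwa_efficiency(inputs, outputs, v, u):
    """Efficiency of each DMU under a single common set of weights:
    score = (u · outputs) / (v · inputs).
    inputs[d], outputs[d] : input/output vectors of DMU d
    v, u                  : common input and output weights (given)."""
    scores = []
    for x, y in zip(inputs, outputs):
        num = sum(ui * yi for ui, yi in zip(u, y))
        den = sum(vi * xi for vi, xi in zip(v, x))
        scores.append(num / den)
    return scores
```

With two hypothetical single-input, single-output DMUs, `cwa_efficiency([[2], [4]], [[4], [4]], [1], [1])` gives `[2.0, 1.0]`; under one common weight set the ranking is immediate, which is precisely why ties among CWA-efficient DMUs (the issue this paper raises) are problematic.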
127.
The Taguchi method is the usual strategy in robust design: it involves conducting experiments using orthogonal arrays and estimating the combination of factor levels that optimizes a given performance measure, typically a signal-to-noise ratio. The problem is more complex in the case of multiple responses, since the combinations of factor levels that optimize the different responses usually differ. In this paper, an Artificial Neural Network, trained with the experimental results, is used to estimate the responses for all factor-level combinations. Data Envelopment Analysis (DEA) is then used, first to select the efficient (i.e. non-dominated) factor-level combinations and then to choose among them the one that leads to the most robust quality-loss penalization. Mean square deviations of the quality characteristics are used as DEA inputs. Among the advantages of the proposed approach over the traditional Taguchi method are the non-parametric, non-linear way of estimating quality-loss measures for unobserved factor combinations and the non-parametric character of the performance evaluation of all the factor combinations. The proposed approach is applied to a number of case studies from the literature and compared with existing approaches.
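The selection step above amounts, in spirit, to a non-dominated (Pareto) filter over the estimated response vectors. A minimal sketch, assuming smaller responses (e.g. mean square deviations) are better; the function name and data are illustrative, not the paper's DEA formulation:

```python
def non_dominated(points):
    """Keep the response vectors not dominated by any other point,
    where 'q dominates p' means q is no worse in every coordinate
    and differs from p (smaller is better)."""
    keep = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            keep.append(p)
    return keep
```

For instance, among the vectors (1, 2), (2, 1), (2, 2) and (3, 3), only the first two survive the filter; DEA then discriminates further among these efficient combinations.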
128.
The production possibility set (PPS) is the intersection of several halfspaces, each of which corresponds to one strong or weak defining hyperplane (facet). This research proposes a method for finding the weak defining hyperplanes of the PPS of the BCC model. We state and prove some properties of our method, and numerical examples are provided for illustration.
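A minimal sketch of the halfspace view of the PPS: a point belongs to the set iff it satisfies every inequality a·z ≤ b, and the constraints it meets with equality identify the defining hyperplanes it lies on. The explicit halfspace list below is an assumption for illustration; the paper's method for computing the weak facets of the BCC model is not reproduced here:

```python
def in_pps(point, halfspaces, tol=1e-9):
    """Membership test for a set given as an intersection of halfspaces.
    halfspaces: list of (a, b) pairs encoding a · z <= b.
    Returns (is_member, list of halfspaces the point satisfies with
    equality, i.e. the defining hyperplanes it lies on)."""
    tight = []
    for a, b in halfspaces:
        val = sum(ai * zi for ai, zi in zip(a, point))
        if val > b + tol:
            return False, []          # violates this halfspace
        if abs(val - b) <= tol:
            tight.append((a, b))      # lies on this hyperplane
    return True, tight
```

With the toy set {z : z1 ≤ 1, z2 ≤ 1}, the point (1, 0.5) is a member lying on the hyperplane z1 = 1, while (2, 0) is outside.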
129.
Ali Emrouznejad, Abdel Latef Anouze, Emmanuel Thanassoulis 《European Journal of Operational Research》2010
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision-making units such as firms or public-sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs/outputs. The subsequent literature has taken various approaches to enable DEA to deal with negative data.
130.
Within the data envelopment analysis context, problems of discrimination between efficient and inefficient decision-making units often arise, particularly if there is a relatively large number of variables with respect to observations. This paper applies Monte Carlo simulation to generalize and compare two discrimination-improving methods: principal component analysis applied to data envelopment analysis (PCA–DEA) and variable reduction based on partial covariance (VR). Performance criteria are based on the percentage of observations incorrectly classified: efficient decision-making units mistakenly defined as inefficient and inefficient units defined as efficient. A trade-off was observed, with both methods improving discrimination by reducing the probability of the latter error at the expense of a small increase in the probability of the former error. A comparison of the methodologies demonstrates that PCA–DEA is a more powerful tool than VR, with consistently more accurate results. PCA–DEA is applied to all basic DEA models, and guidelines for its application are presented in order to minimize misclassification; the approach proves particularly useful when analyzing relatively small datasets, as it removes the need for additional preference information.
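As a reminder of the PCA half of PCA–DEA: for two variables, the first principal component has a closed form from the 2×2 covariance matrix (its angle is ½·atan2(2·cov, var_x − var_y)). A self-contained sketch for the two-variable case only; real PCA–DEA applies PCA to the full input/output matrices before running DEA:

```python
import math

def first_pc(data):
    """First principal-component direction (unit vector) of 2-D data,
    via the closed-form leading-eigenvector angle of the 2x2
    covariance matrix: theta = 0.5 * atan2(2*s_xy, s_xx - s_yy)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)
```

For points lying on the line y = x, the returned direction is the diagonal (√2/2, √2/2), i.e. one component captures all the variance, which is exactly the dimensionality reduction PCA–DEA exploits on correlated inputs or outputs.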