91.
Building on a survey of the latest research reported in recent domestic and international literature and patents, this paper briefly describes the data storage mechanism of next-generation high-density recordable optical discs and the requirement that the laser operating wavelength match the maximum absorption wavelength of the organic thin film. It then reviews organic materials with significant absorption in the blue (violet) band (350-400 nm) and discusses their potential as data storage media for next-generation high-density recordable optical discs. Finally, the key research issues and future prospects for using organic materials as data storage media for such discs are briefly discussed.
92.
An integrated online spectral processing method based on wavelet transform and Gaussian fitting   Total citations: 1, self-citations: 0, citations by others: 1
Li CP  Han JQ  Huang QB  Mu N  Zhu DZ  Guo CT  Cao BQ  Zhang L 《光谱学与光谱分析》2011,31(11):3050-3054
Miniaturized, mobile, on-site online detection is a new direction in the development of analytical instruments. To address the bottleneck problems encountered in complex working environments, namely strong noise, overlapping peaks, and irregular peak shapes, which seriously degrade the qualitative and quantitative accuracy of such instruments, an integrated online spectral processing method combining wavelet transform and Gaussian fitting is proposed. Spectra of two typical compounds, toluene and perfluorotributylamine, were processed with a self-developed instrument and compared against the algorithms commonly used in laboratory analytical instruments. The results show that the integrated method effectively handles strong noise, overlapping peaks, and irregular peak shapes, improves qualitative and quantitative accuracy, and also achieves data compression, meeting the requirements for online real-time detection. For the characteristic peak of toluene, the integrated method improved the average signal-to-noise ratio (SNR) by a factor of 1.3 and reduced the peak-position error ΔM by a factor of 3.6 compared with the moving-average smoothing method; for the perfluorotributylamine spectrum, the data compression ratio was 197:1.
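The paper's own algorithm is not reproduced here, but the two building blocks it combines are standard. Below is a minimal sketch, assuming PyWavelets and SciPy, of wavelet soft-threshold denoising followed by Gaussian peak fitting on a synthetic single-peak spectrum; the function names and parameters are illustrative, not the authors' implementation.

```python
# Sketch of the two building blocks: wavelet denoising, then Gaussian fitting.
# Synthetic single-peak data; not the authors' actual pipeline.
import numpy as np
import pywt
from scipy.optimize import curve_fit

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def gaussian(x, amp, mu, sigma, baseline):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + baseline

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 1024)
clean = gaussian(x, amp=5.0, mu=42.0, sigma=2.5, baseline=0.3)
noisy = clean + rng.normal(scale=0.4, size=x.size)

denoised = wavelet_denoise(noisy)
p0 = [denoised.max(), x[np.argmax(denoised)], 1.0, 0.0]     # rough initial guess
popt, _ = curve_fit(gaussian, x, denoised, p0=p0)
print("fitted peak position:", popt[1])                      # close to 42
```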
93.
Employing a Mach-Zehnder interferometer (MZI), this paper presents a simulation demonstration of an all-optical scheme for data format conversion between non-return-to-zero (NRZ) and return-to-zero (RZ). Data format conversion between NRZ and RZ at 120 Gb/s is simulated for the first time using an MZI. In addition, NRZ-to-RZ conversion using a single SOA within the MZI is proposed for the first time.
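The conversion in the paper is performed optically inside the MZI; as a purely illustrative, electrical-domain sketch of how the two formats relate, gating an NRZ waveform with a pulse clock at the bit rate yields the corresponding RZ waveform. All names and parameters below are hypothetical and do not model the optical device.

```python
# Toy illustration of the NRZ -> RZ relationship that the paper implements
# optically with an MZI.  Not a model of the optical components.
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
samples_per_bit = 16
duty_cycle = 0.5                     # RZ pulses occupy half the bit slot

nrz = np.repeat(bits, samples_per_bit).astype(float)

# Pulse clock: high for the first half of every bit slot.
t = np.arange(bits.size * samples_per_bit)
clock = ((t % samples_per_bit) < duty_cycle * samples_per_bit).astype(float)

rz = nrz * clock                     # NRZ -> RZ: each '1' becomes a short pulse

# RZ -> NRZ (conceptually): stretch each detected pulse over the full bit slot.
recovered_bits = rz.reshape(bits.size, samples_per_bit).max(axis=1)
assert np.array_equal(recovered_bits.astype(int), bits)
```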
94.
Mehmet Ozger 《Physica A》2011,390(6):981-989
Fluctuations in the significant wave height can be quantified using scaling statistics. In this paper, the scaling properties of the significant wave height were explored using a large data set of hourly series from 25 monitoring stations located off the west coast of the US. Detrended fluctuation analysis (DFA) was used to investigate the scaling properties of the series; DFA is a robust technique for detecting long-range correlations in nonstationary time series. The significant wave height data were analyzed at scales from hourly to monthly. A common scaling behavior was observed for all stations, and a breakpoint in the scaling region around 4-5 days was apparent; spectral analysis confirms this result. The breakpoint divides the scaling region into two distinct parts: the first, at finer scales (up to 4 days), exhibits Brown noise characteristics, while the second shows 1/f noise behavior at coarser scales (5 days to 1 month). First-order and second-order DFA (DFA1 and DFA2) were used to check the effect of seasonality; no differences were found between the DFA1 and DFA2 results, indicating that trends in the wave height time series have no effect. The resulting scaling coefficients range from 0.696 to 0.890, indicating that the wave height exhibits long-term persistence. There were no coherent spatial variations in the scaling coefficients.
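As a rough illustration of the technique (not the authors' code or data), a minimal first-order DFA can be written as follows; the slope of log F(n) versus log n is the scaling exponent discussed above.

```python
# Minimal DFA1 sketch (first-order detrending) applied to synthetic data.
import numpy as np

def dfa(series, scales, order=1):
    """Return the fluctuation function F(n) for the given window sizes."""
    profile = np.cumsum(series - np.mean(series))        # integrated series
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            x = np.arange(n)
            trend = np.polyval(np.polyfit(x, segment, order), x)
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.asarray(fluctuations)

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=20000))                # Brown-noise test signal
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(series, scales, order=1)

# Scaling exponent = slope of log F(n) vs log n; ~1.5 for Brown noise,
# ~0.5 for white noise, 0.5-1.0 indicates long-term persistence.
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("DFA scaling exponent:", round(alpha, 2))
```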
95.
《Applied Mathematical Modelling》2014,38(15-16):3890-3896
Data envelopment analysis (DEA) is a linear programming technique used to measure the relative efficiency of decision-making units (DMUs). Liu et al. (2008) [13] used common weights analysis (CWA) methodology to generate a common set of weights (CSW) by linear programming; they classified the DMUs as CWA-efficient or CWA-inefficient and ranked them using CWA ranking rules. The aim of this study is to show that the criteria used by Liu et al. are not theoretically strong enough to discriminate among CWA-efficient DMUs with equal efficiency. Moreover, there is no guarantee that their proposed model selects a single solution from among the alternative optima, even though the solution obtained is treated as the unique optimal solution. This study shows that the proposal by Liu et al. is not correct in general; the claims made against the theorem proposed by Liu et al. are fully supported by two counterexamples.
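For readers unfamiliar with DEA, the sketch below shows the standard input-oriented CCR efficiency LP solved per DMU with SciPy on toy data. It only illustrates what "relative efficiency via linear programming" means here; it is not Liu et al.'s common-weights model, and the data are invented.

```python
# Standard input-oriented CCR DEA efficiency score via linear programming.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k. X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_rk  (outputs at least those of DMU k)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                  # theta* in (0, 1]

# Toy data: 5 DMUs, 2 inputs, 1 output.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(len(X))])
```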
96.
So far, only full-frontier nonparametric methods, particularly data envelopment analysis (DEA), have been applied in the nonparametric literature to search for economies of scope and scale. However, these methods have drawbacks that can lead to biased results. This paper proposes a methodology based on more robust partial-frontier nonparametric methods to look for scope and scale economies. With this methodology it is possible to assess the robustness of these economies and, in particular, the influence that extreme data or outliers might have on them. The effect of imposing convexity on the production set of firms was also investigated. The methodology was applied to the water utilities that operated in Portugal between 2002 and 2008. There is evidence of economies of vertical integration and economies of scale in drinking water supply utilities and in water and wastewater utilities operating mainly in the retail segment. Economies of scale were found in water and wastewater utilities operating exclusively in the wholesale segment, and in some of these utilities diseconomies of scope were also found. The proposed methodology also shows that the presence of some smaller utilities lowers the minimum optimal scales.
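A minimal Monte Carlo sketch of the input-oriented order-m efficiency estimator on which partial-frontier methods are built is given below, using synthetic single-input, single-output data. It is a sketch under those assumptions, not the authors' full scope- and scale-economies procedure.

```python
# Monte Carlo sketch of the input-oriented order-m efficiency estimator that
# partial-frontier methods are built on (toy data).
import numpy as np

def order_m_efficiency(X, Y, x0, y0, m=25, n_draws=2000, rng=None):
    """Expected input efficiency of (x0, y0) against m randomly drawn peers
    producing at least y0."""
    if rng is None:
        rng = np.random.default_rng(0)
    dominating = np.all(Y >= y0, axis=1)           # peers with output >= y0
    peers = X[dominating]
    if len(peers) == 0:
        return np.nan
    thetas = []
    for _ in range(n_draws):
        idx = rng.integers(0, len(peers), size=m)  # draw m peers with replacement
        ratios = np.max(peers[idx] / x0, axis=1)   # radial input distance per peer
        thetas.append(ratios.min())                # best peer in this sample
    return float(np.mean(thetas))                  # can exceed 1 (super-efficiency)

rng = np.random.default_rng(2)
Y = rng.uniform(1, 10, size=(200, 1))              # single output
X = Y * rng.uniform(1.0, 2.0, size=(200, 1))       # single input with inefficiency
print(order_m_efficiency(X, Y, x0=X[0], y0=Y[0], m=25, rng=rng))
```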
97.
The variable returns to scale data envelopment analysis (DEA) model is developed with a maintained hypothesis of convexity in input-output space. This hypothesis is not consistent with standard microeconomic production theory, which posits an S-shape for the production frontier, i.e., for production technologies that obey the Regular Ultra Passum Law. Consequently, measures of technical efficiency assuming convexity are biased downward. In this paper, we provide a more general DEA model that allows the S-shape.
98.
The identification of different dynamics in sequential data has become an everyday need in scientific fields such as marketing, bioinformatics, finance, or the social sciences. In contrast to cross-sectional or static data, this type of observation (also known as stream data, temporal data, longitudinal data or repeated measures) is more challenging, as one has to incorporate data dependency in the clustering process. In this research we focus on clustering categorical sequences. The method proposed here combines model-based and heuristic clustering. In the first step, the categorical sequences are transformed by an extension of the hidden Markov model into a probabilistic space, where a symmetric Kullback-Leibler distance can operate. In the second step, the sequences are clustered by applying hierarchical clustering to the matrix of distances. This paper illustrates the enormous potential of this type of hybrid approach using a synthetic data set as well as the well-known Microsoft dataset of website users' search patterns and a survey on job career dynamics.
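As a simplified stand-in for the two-step idea (a plain first-order Markov transition matrix replaces the paper's hidden-Markov extension), the sketch below maps each categorical sequence to a probabilistic representation, computes symmetric Kullback-Leibler distances, and clusters hierarchically with SciPy on synthetic sequences.

```python
# Simplified sketch: sequence -> transition matrix -> symmetric KL distances
# -> hierarchical clustering.  Stand-in for the paper's hidden-Markov step.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

N_STATES = 3  # categories 0, 1, 2

def transition_matrix(seq, n_states=N_STATES, alpha=0.5):
    """Row-stochastic transition matrix with additive smoothing."""
    counts = np.full((n_states, n_states), alpha)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sym_kl(p, q):
    """Symmetric KL divergence between two transition matrices."""
    return 0.5 * np.sum(p * np.log(p / q) + q * np.log(q / p))

# Synthetic data: two regimes with different transition behaviour.
rng = np.random.default_rng(3)
def simulate(P, length=200):
    s, seq = 0, []
    for _ in range(length):
        s = rng.choice(N_STATES, p=P[s])
        seq.append(s)
    return seq

P_a = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])  # sticky
P_b = np.full((N_STATES, N_STATES), 1 / 3)                            # random
sequences = [simulate(P_a) for _ in range(10)] + [simulate(P_b) for _ in range(10)]

models = [transition_matrix(s) for s in sequences]
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = sym_kl(models[i], models[j])

labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)   # the two regimes should fall into two clusters
```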
99.
In this paper we consider aggregate Malmquist productivity index measures that allow inputs to be reallocated within the group (in the output orientation). This merges the single-period aggregation results allowing input reallocation of Nesterenko and Zelenyuk (2007) with the aggregate Malmquist productivity index results of Zelenyuk (2006) to determine aggregate Malmquist productivity indexes that are justified by economic theory, are consistent with previous aggregation results, and maintain decompositions analogous to those of the original measures. Such measures are of direct relevance to firms or countries that have merged (making input reallocation possible), allowing them to measure potential productivity gains and how these have been realised (or not) over time.
100.
Analytical processing on multi-dimensional data is performed over a data warehouse, which is, in general, presented in the form of cuboids. The central theme of the data warehouse is represented in the form of a fact table, which is built from the related dimension tables. The cuboid that corresponds to the fact table is called the base cuboid. All possible combinations of cuboids can be generated from the base cuboid using successive roll-up operations, and together they form a lattice structure. Some of the dimensions may have a concept hierarchy, i.e., multiple granularities of data, which means a dimension can be represented in more than one abstract form. Typically, neither all the cuboids nor all levels of the concept hierarchy are required for a specific business process. The cuboids reside in different layers of the memory hierarchy, such as cache memory, primary memory, and secondary memory. This work dynamically finds the most cost-effective path through the lattice of cuboids, based on the concept hierarchy, to minimize query access time; knowledge of the location of the cuboids in the different memory elements is used for this purpose.
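As a sketch of the lattice idea (with hypothetical dimensions, memory tiers, and costs, not the paper's cost model), the code below enumerates the cuboids of a small base cuboid, assigns each an access cost from the tier it resides in, and uses Dijkstra's algorithm to find the cheapest roll-up path from the base cuboid to the cuboid answering a query.

```python
# Lattice-of-cuboids sketch: cuboids are subsets of the base cuboid's
# dimensions, roll-ups drop one dimension at a time, and the cheapest roll-up
# path to the target cuboid is found with Dijkstra.  Tier costs and placement
# are hypothetical placeholders.
import heapq
from itertools import combinations

DIMENSIONS = ("time", "item", "location")           # base cuboid dimensions
TIER_COST = {"cache": 1, "primary": 10, "secondary": 100}

PLACEMENT = {                                        # hypothetical placement
    frozenset(DIMENSIONS): "secondary",              # base cuboid
    frozenset({"time", "item"}): "primary",
    frozenset({"time", "location"}): "secondary",
    frozenset({"item", "location"}): "secondary",
    frozenset({"time"}): "cache",
    frozenset({"item"}): "primary",
    frozenset({"location"}): "secondary",
    frozenset(): "cache",                            # apex cuboid (all rolled up)
}

def lattice_edges():
    """Roll-up edges: each cuboid links to the cuboids with one dimension dropped."""
    edges = {}
    for k in range(len(DIMENSIONS), -1, -1):
        for cuboid in map(frozenset, combinations(DIMENSIONS, k)):
            children = [cuboid - {d} for d in cuboid]
            edges[cuboid] = [(c, TIER_COST[PLACEMENT[c]]) for c in children]
    return edges

def cheapest_path(target):
    """Dijkstra from the base cuboid down the roll-up lattice to `target`."""
    edges = lattice_edges()
    start = frozenset(DIMENSIONS)
    heap = [(TIER_COST[PLACEMENT[start]], 0, [start])]   # (cost, tiebreak, path)
    visited, tiebreak = set(), 1
    while heap:
        cost, _, path = heapq.heappop(heap)
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return cost, path
        for child, step in edges[node]:
            heapq.heappush(heap, (cost + step, tiebreak, path + [child]))
            tiebreak += 1
    return None

cost, path = cheapest_path(frozenset({"time"}))
print(cost, [sorted(c) or ["apex"] for c in path])
```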