Full text (subscription): 1,555 articles
Free: 48 articles
Free (domestic): 29 articles
Chemistry: 333 articles
Mechanics: 44 articles
Interdisciplinary: 10 articles
Mathematics: 788 articles
Physics: 457 articles
2023: 18 articles
2022: 19 articles
2021: 29 articles
2020: 20 articles
2019: 29 articles
2018: 33 articles
2017: 47 articles
2016: 45 articles
2015: 49 articles
2014: 103 articles
2013: 163 articles
2012: 97 articles
2011: 103 articles
2010: 98 articles
2009: 137 articles
2008: 85 articles
2007: 108 articles
2006: 72 articles
2005: 53 articles
2004: 43 articles
2003: 29 articles
2002: 30 articles
2001: 26 articles
2000: 13 articles
1999: 10 articles
1998: 16 articles
1997: 33 articles
1996: 21 articles
1995: 16 articles
1994: 8 articles
1993: 6 articles
1992: 10 articles
1991: 11 articles
1990: 6 articles
1989: 5 articles
1988: 4 articles
1987: 5 articles
1986: 6 articles
1985: 6 articles
1983: 2 articles
1982: 3 articles
1981: 1 article
1980: 2 articles
1979: 3 articles
1978: 2 articles
1977: 1 article
1974: 1 article
1973: 1 article
1969: 1 article
1966: 1 article
Sort order:   1,632 results in total (search time: 20 ms)
1.
The aim of this paper is to present a new classification and regression algorithm based on artificial intelligence. The main feature of this algorithm, called Code2Vect, is the nature of the data it handles: qualitative or quantitative, continuous or discrete. In contrast to other artificial-intelligence techniques built on "Big Data," this approach can work with a reduced amount of data, within the so-called "Smart Data" paradigm. Moreover, the main purpose of the algorithm is to represent high-dimensional data and, more specifically, to group and visualize those data according to a given target. For that purpose, the data are projected into a vector space equipped with an appropriate metric that groups data according to their affinity with respect to a given output of interest. Another application of the algorithm is prediction: as with common data-mining techniques such as regression trees, an output is inferred from a given input, here taking into account the nature of the data described above. To illustrate its potential, two applications are addressed, one concerning the representation of high-dimensional categorical data and another demonstrating the prediction capabilities of the algorithm.
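The abstract gives no implementation details, so the following is only a minimal sketch of the general idea behind such a metric-equipped projection; the function name, squared-distance loss, and gradient-descent training are illustrative assumptions, not the authors' method. A linear map W is learned so that distances between projected points track differences in the target output.

```python
# Minimal sketch (not the Code2Vect implementation): learn a linear map W so that
# distances between projected points reflect differences in the output of interest.
import numpy as np

def fit_projection(X, y, dim=2, lr=1e-3, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = rng.normal(scale=0.1, size=(p, dim))
    for _ in range(epochs):
        Z = X @ W                                       # projected points
        grad = np.zeros_like(W)
        for i in range(n):
            for j in range(i + 1, n):
                diff = Z[i] - Z[j]
                d = np.linalg.norm(diff) + 1e-12
                target = abs(y[i] - y[j])               # desired distance ~ output gap
                grad += 2 * (d - target) * np.outer(X[i] - X[j], diff / d)
        W -= lr * grad / (n * (n - 1) / 2)
    return W

# Usage idea: after fitting, samples with similar outputs lie close together in Z = X @ W,
# which supports both visualization and nearest-neighbour-style prediction.
```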
2.
This article presents a correction method for better estimating and predicting pollution governed by Burgers' equations. The originality of the method lies in introducing an error function into the system's state equations to model uncertainty in the model. The initial conditions and diffusion coefficients appearing in the pollution and concentration equations, as well as those in the model-error equations, are estimated by solving a data assimilation problem. The efficiency of the correction method is compared with that of the traditional method without an error function. Three test cases are presented to compare the performance of the proposed methods: in the first two the reference is the analytical solution, and the last is formulated as a "twin experiment." The numerical results confirm the important role of the model-error equation in improving the prediction capability of the system, in terms of both accuracy and speed of convergence.
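As a rough illustration of the forward model being corrected, the sketch below steps a 1D viscous Burgers' equation with an additive error field e. The discretization (periodic boundaries, centered differences, explicit Euler) is an assumption for illustration and is not taken from the paper; in the correction method, e and the other unknowns would be adjusted by data assimilation.

```python
# Minimal sketch (assumed discretization): one time step of
# u_t + u u_x = nu u_xx + e(x, t), with e a model-error field.
import numpy as np

def burgers_step(u, e, nu, dx, dt):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # centered u_x (periodic)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2    # centered u_xx (periodic)
    return u + dt * (-u * ux + nu * uxx + e)                  # explicit Euler update

# In the correction method, the error field e, the initial condition u0 and the
# diffusion coefficient nu are estimated together by fitting the model to observations.
```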
3.
4.
The National Natural Science Foundation of China (NSFC) is a key government agency that supports basic research and researchers to create knowledge and meet major national needs; a rigorous and objective merit-based peer-review mechanism is central to funding the most promising research proposals. This invited comment reviews recent measures taken at the Department of Chemical Science in 2019 to improve the academic evaluation environment, including grouped panel committee meetings, standardized panel committee meeting procedures, and refinement of the review process, all aimed at improving project review at the panel committee level.
5.
6.
The ever-increasing interest of consumers in the safety, authenticity and quality of food commodities has drawn attention to the analytical techniques used to analyze these commodities. In recent years, rapid and reliable sensor, spectroscopic and chromatographic techniques have emerged that, together with multivariate and multiway chemometrics, have improved the whole control process by reducing analysis time and providing more informative results. In this progression towards more and better information, the combination (fusion) of the outputs of different instrumental techniques has emerged as a means of increasing the reliability of classification or prediction of foodstuff specifications compared with using a single analytical technique. Although promising results have been obtained in food and beverage authentication and quality assessment, combining data from several techniques is not straightforward and represents an important challenge for chemometricians. This review provides a general overview of the data fusion strategies that have been used in food and beverage authentication and quality assessment.
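For orientation, the simplest strategy covered by such reviews is low-level fusion, in which the raw variables of each instrument are scaled and concatenated before a single model is built. The helper below is a hypothetical illustration; the review itself prescribes no single implementation.

```python
# Minimal sketch (illustrative only): low-level fusion of two instrumental data blocks.
import numpy as np
from sklearn.preprocessing import StandardScaler

def low_level_fusion(X_block1, X_block2):
    # Autoscale each block so that neither technique dominates the fused matrix,
    # then concatenate along the variable axis (samples x all variables).
    b1 = StandardScaler().fit_transform(X_block1)
    b2 = StandardScaler().fit_transform(X_block2)
    return np.hstack([b1, b2])
```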
7.
Missing values can arise for different reasons, and depending on their origin they should be interpreted and handled in different ways. In this research, four imputation methods were compared with respect to their effects on the normality and variance of the data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. In addition, different strategies for controlling the familywise error rate or false discovery rate, and how they interact with the different missing-value imputation strategies, were evaluated. Missing values were found to affect the normality and variance of the data, and k-means nearest-neighbour imputation was the best method tested for restoring them. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as little as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area," and a strategy is therefore proposed that balances the optimal imputation strategy (k-means nearest neighbour) with the best approximation for positioning real zeros.
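A minimal sketch of this kind of pipeline is shown below, using scikit-learn's KNNImputer as a stand-in for the nearest-neighbour imputation and a Bonferroni-corrected two-sample t-test for familywise error control; the function and its defaults are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (illustrative): nearest-neighbour imputation followed by a
# Bonferroni-corrected per-variable significance screen.
from scipy import stats
from sklearn.impute import KNNImputer

def impute_and_test(X, groups, alpha=0.05, k=5):
    # X: samples x variables array with NaN for missing values;
    # groups: array of 0/1 group labels, one per sample.
    X_imp = KNNImputer(n_neighbors=k).fit_transform(X)    # neighbour-based imputation
    g0, g1 = X_imp[groups == 0], X_imp[groups == 1]
    pvals = stats.ttest_ind(g0, g1, axis=0).pvalue        # per-variable t-test
    significant = pvals < alpha / X.shape[1]              # Bonferroni FWER control
    return X_imp, significant
```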
8.
Leccinum rugosiceps is an edible mushroom belonging to the genus Leccinum of the Boletaceae. Its fruiting bodies are richer in nutrients than many vegetables and fruits. A support vector machine model was established to discriminate L. rugosiceps from different regions based on rapid, low-cost ultraviolet and infrared spectroscopy. Mid-level data fusion was performed with the support vector machine; compared with a single spectroscopic technique, mid-level fusion provided higher accuracy by selecting the most significant variance from the data matrices using partial least squares discriminant analysis. The classification accuracies for samples in the calibration and test sets were 85.00 and 94.74%, respectively, higher than with separate ultraviolet or infrared measurements. This approach has applications in the authentication and quality assessment of L. rugosiceps.
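One way such a mid-level fusion could be wired up is sketched below: PLS latent variables are extracted from each spectral block and the concatenated scores are fed to an SVM. The scikit-learn calls, number of latent variables, RBF kernel and dummy-coding of classes are assumptions for illustration, not the published workflow.

```python
# Minimal sketch (assumed workflow): mid-level fusion of UV and IR spectra via
# PLS-DA score extraction per block, followed by SVM classification.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC

def mid_level_fusion_svm(X_uv, X_ir, y, n_lv=5):
    classes = np.unique(y)
    Y = (y.reshape(-1, 1) == classes).astype(float)       # dummy-coded PLS-DA targets
    scores_uv = PLSRegression(n_components=n_lv).fit(X_uv, Y).transform(X_uv)
    scores_ir = PLSRegression(n_components=n_lv).fit(X_ir, Y).transform(X_ir)
    Z = np.hstack([scores_uv, scores_ir])                  # fused latent variables
    return SVC(kernel="rbf").fit(Z, y)                     # classifier on the fused scores
```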
9.
Applications of traditional data envelopment analysis (DEA) models require crisp input and output data, whereas real-world problems often involve imprecise or ambiguous data. In this paper, the problem of uncertainty in the equality constraints is analyzed and, using the equivalent form of the CCR model, a suitable robust DEA model is derived to evaluate the efficiency of decision-making units (DMUs) under uncertainty in both the input and output spaces. A new model based on the robust optimization approach is proposed; with it, the efficiency of DMUs in the presence of uncertainty can be evaluated in fewer steps than with other models. In addition, using the proposed robust DEA model and the envelopment form of the CCR model, two linear robust super-efficiency models for complete ranking of DMUs are proposed. Two case studies from different contexts are used as numerical examples to compare the proposed model with other approaches; the examples also illustrate various possible applications of the new models.
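For reference, the deterministic input-oriented CCR multiplier model that such robust formulations build on can be solved as a small linear program. The sketch below covers the standard CCR model only, not the robust variant proposed in the paper.

```python
# Minimal sketch: standard input-oriented CCR (multiplier form) efficiency of one DMU.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    """X: inputs (n_dmus x m), Y: outputs (n_dmus x s); returns the CCR efficiency of DMU j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j], np.zeros(m)])                    # maximize u . y_j
    A_ub = np.hstack([Y, -X])                                   # u . y_k - v . x_k <= 0 for all DMUs k
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j]]).reshape(1, -1)   # normalization v . x_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                             # efficiency score in (0, 1]
```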
10.
Donoho's article "50 Years of Data Science" is a well-thought-out account of a newly developed discipline called "data science." In this article, we examine his explanations and suggestions about data science, follow up on some of the issues he raises, and share our experiences in developing a data science curriculum and teaching the related courses.