151.
Summary We have constructed two phage display libraries expressing N-terminal pIII fusions in M13, composed of 37 and 43 random amino acid domains, respectively. The D38 library expresses 37 random amino acids with a central alanine residue, and the DC43 library contains 43 random amino acids with a central cysteine flanked by two glycine residues, giving the displayed peptide the potential to form disulfide loops of various sizes. We demonstrate that the majority of random sequences in both libraries are compatible with phage viability in pentavalent display. The M13 phage display vector itself has been engineered to contain a factor Xa protease cleavage site, providing an alternative to acid elution during affinity selection. An in-frame amber mutation has been inserted between the pIII cloning sites to allow efficient selection against nonrecombinant phage in the library. These libraries have been panned against mAb 7E11-C5, which recognizes the prostate-specific membrane antigen (PSM). Isolated phage display a consensus sequence that is homologous to a region of the PSM molecule.
152.
ABSTRACT

This study focuses on enhancing the performance of a solar still unit using solar energy supplied through a cylindrical parabolic collector and solar panels. A 300 W solar panel is used to heat the saline water via heating elements outside the solar still unit. The solar panels are cooled during the hot hours of the day; reducing their temperature increases panel efficiency and, in turn, the efficiency of the solar still unit. The maximum amount of freshwater produced in the experiments was 2.132 kg/day. The experiments were modelled using artificial neural networks (ANNs), and the simulation results show a strong correlation between the experimental data and the neural network model. The paper compares the experimental data with the results of both mathematical modelling and the ANNs, and concludes that the ANN predictions are more accurate than the simplified first-principles model presented.
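A minimal sketch of the kind of ANN surrogate described above. The input features (irradiance, ambient and water temperatures) and the synthetic data are illustrative stand-ins for the measured quantities, not the paper's actual variables or model:

```python
# Illustrative sketch: ANN surrogate for a solar-still experiment.
# Inputs and target below are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
irradiance = rng.uniform(200, 1000, n)   # W/m^2
t_ambient = rng.uniform(20, 45, n)       # deg C
t_water = rng.uniform(30, 70, n)         # deg C
X = np.column_stack([irradiance, t_ambient, t_water])
# Synthetic yield: grows with irradiance and the water-ambient gap.
y = 1e-3 * irradiance + 0.01 * (t_water - t_ambient) + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```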
153.
Although artificial neural networks (ANNs) have been widely used in forecasting time series, determining the best model remains a much-studied problem. Various approaches have been proposed in recent years for selecting the best ANN forecasting model. One of these is a model selection strategy based on the weighted information criterion (WIC). WIC is calculated as a weighted sum of several selection criteria, each of which measures the forecasting accuracy of an ANN model in a different way. In earlier work the weights of the different selection criteria were determined heuristically; in this study they are instead calculated by optimization in order to obtain a more consistent criterion. Four real time series are analyzed to show the efficiency of the improved WIC. When the weights are determined by optimization, the improved WIC clearly produces better results.
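A sketch of the WIC idea: a weighted sum of per-model accuracy criteria, with the weights chosen by optimization rather than fixed heuristically. The criteria values and the "consistency" objective below are assumptions for illustration, not the paper's exact formulation:

```python
# WIC sketch: criteria values and objective are made up.
import numpy as np
from scipy.optimize import minimize

# Rows: candidate ANN models; columns: selection criteria
# (e.g. RMSE-like, MAE-like, AIC-like scores), scaled to [0, 1].
criteria = np.array([
    [0.30, 0.25, 0.40],
    [0.20, 0.35, 0.30],
    [0.25, 0.20, 0.35],
])

def wic(weights, crit):
    return crit @ weights  # one WIC value per model

def objective(weights):
    # Assumed surrogate for "consistency": make the best model
    # stand out by pushing the smallest WIC below the mean.
    scores = wic(weights, criteria)
    return scores.min() - scores.mean()

n_crit = criteria.shape[1]
res = minimize(
    objective,
    x0=np.full(n_crit, 1.0 / n_crit),
    bounds=[(0, 1)] * n_crit,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print("optimized weights:", res.x)
print("WIC per model:", wic(res.x, criteria))
print("selected model:", int(np.argmin(wic(res.x, criteria))))
```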
154.
In this article, we aim to analyze the limitations of learning in automata-based systems by introducing the L+ algorithm to replicate quasi-perfect learning, i.e., a situation in which the learner can get the correct answer to any of its queries. This extreme assumption allows the generalization of any limitations of the learning algorithm to less sophisticated learning systems. We analyze the conditions under which L+ infers the correct automaton and when it fails to do so. In the context of the repeated prisoners' dilemma, we exemplify how L+ may fail to learn the correct automaton. We prove that a sufficient condition for the L+ algorithm to learn the correct automaton is to use a large number of look-ahead steps. Finally, we show empirically, in the product differentiation problem, that the computational time of the L+ algorithm is polynomial in the number of states but exponential in the number of agents.
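An illustrative sketch of the idea behind query-based automaton inference with look-ahead (not the paper's L+ algorithm): the learner distinguishes opponent states by their answers to all continuations up to a fixed look-ahead depth. The opponent here is an assumed three-state machine that defects only after two consecutive defections; with zero look-ahead two of its states look identical, while one look-ahead step separates all three:

```python
# Query-based automaton inference with look-ahead (illustrative).
from itertools import product

ACTIONS = ("C", "D")

# Opponent Moore machine: cooperates until it has seen two
# consecutive defections, then defects; any C resets it.
OUT = {"s0": "C", "s1": "C", "s2": "D"}
NEXT = {("s0", "C"): "s0", ("s0", "D"): "s1",
        ("s1", "C"): "s0", ("s1", "D"): "s2",
        ("s2", "C"): "s0", ("s2", "D"): "s2"}

def run(history, start="s0"):
    """Opponent state reached after the learner plays `history`."""
    state = start
    for a in history:
        state = NEXT[(state, a)]
    return state

def signature(state, depth):
    """Opponent's answers to every query of length <= depth."""
    sig = []
    for n in range(depth + 1):
        for future in product(ACTIONS, repeat=n):
            sig.append(OUT[run(future, start=state)])
    return tuple(sig)

def count_inferred_states(max_history, depth):
    """Number of distinct states the learner can tell apart."""
    return len({signature(run(h), depth)
                for n in range(max_history + 1)
                for h in product(ACTIONS, repeat=n)})

print(count_inferred_states(3, 0))  # 2: s0 and s1 both answer "C"
print(count_inferred_states(3, 1))  # 3: one look-ahead step separates them
```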
155.
Properties of the logical OR approximation operators of precision and grade   Cited: 1 (self-citations: 0, other citations: 1)
The aim of this paper is to investigate the combination of precision and grade and to explore a new extended rough set model. Starting from the logical OR operation on precision and grade, the logical OR rough set model of precision and grade is defined. In this model, the logical OR approximation operators of precision and grade are studied by means of conversion formulas between variable-precision approximation and graded approximation, and properties such as the power (iterated) action of these operators are obtained. The logical OR rough set model of precision and grade unifies the variable precision rough set model, the graded rough set model, and the classical rough set model, and the corresponding power-action properties of the approximation operators are obtained in each of these models.
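A minimal sketch under one common reading of the definitions (the paper's exact formulas are not reproduced here): the variable-precision condition keeps an equivalence class [x] when |[x] ∩ X| / |[x]| ≥ β, the graded condition keeps it when |[x]| − |[x] ∩ X| ≤ k, and the logical-OR lower approximation keeps a class when either condition holds:

```python
# Logical-OR lower approximation sketch; definitions are assumptions.
def or_lower_approximation(classes, X, beta, k):
    """classes: list of equivalence classes (sets); X: target set."""
    lower = set()
    for cls in classes:
        inter = len(cls & X)
        precision_ok = inter / len(cls) >= beta  # variable precision
        grade_ok = len(cls) - inter <= k         # graded condition
        if precision_ok or grade_ok:             # logical OR
            lower |= cls
    return lower

# Toy universe partitioned into three equivalence classes.
classes = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9}]
X = {1, 2, 3, 5, 7}
print(or_lower_approximation(classes, X, beta=0.7, k=1))  # {1, ..., 6}
```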
156.
The Taguchi method is the usual strategy in robust design; it involves conducting experiments using orthogonal arrays and estimating the combination of factor levels that optimizes a given performance measure, typically a signal-to-noise ratio. The problem is more complex in the case of multiple responses, since the combinations of factor levels that optimize the different responses usually differ. In this paper, an Artificial Neural Network, trained with the experimental results, is used to estimate the responses for all factor-level combinations. Data Envelopment Analysis (DEA) is then used, first to select the efficient (i.e. non-dominated) factor-level combinations and then to choose among them the one that leads to the most robust quality-loss penalization, with the mean square deviations of the quality characteristics serving as DEA inputs. Among the advantages of the proposed approach over the traditional Taguchi method are its non-parametric, non-linear way of estimating quality-loss measures for unobserved factor combinations and the non-parametric character of the performance evaluation of all the factor combinations. The approach is applied to a number of case studies from the literature and compared with existing approaches.
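A sketch of the selection step: filter factor-level combinations to the non-dominated set using their mean square deviations (smaller is better on every response), then pick the one with the lowest total deviation. A full DEA model would solve a linear program per combination; this simple dominance filter and tie-break are a simplified stand-in with made-up numbers:

```python
# Non-dominated (efficient) filtering over MSD values (illustrative).
import numpy as np

# Rows: factor-level combinations; columns: MSD of each response.
msd = np.array([
    [1.2, 0.8],
    [0.9, 1.0],
    [1.5, 1.4],   # dominated by rows 0 and 1
    [0.7, 1.3],
])

def non_dominated(points):
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

efficient = non_dominated(msd)
print("efficient combinations:", efficient)        # [0, 1, 3]
best = min(efficient, key=lambda i: msd[i].sum())  # simple aggregate tie-break
print("chosen combination:", best)                 # 1
```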
157.
Deep learning is a recent technology that has shown excellent capabilities in recognition and identification tasks. This study applies these techniques to open cranial vault remodeling surgeries performed to correct craniosynostosis. The objective was to recognize surgical tools automatically in real time and to estimate the surgical phase from those predictions. For this purpose, we implemented, trained, and tested three algorithms based on previously proposed Convolutional Neural Network architectures (VGG16, MobileNetV2, and InceptionV3) and one new architecture with fewer parameters (CranioNet). A novel 3D Slicer module was developed specifically to run these networks and recognize surgical tools in real time via video streaming. The training and test data were acquired during a surgical simulation using a 3D-printed, patient-based realistic phantom of an infant's head. The results showed that CranioNet has the lowest tool-recognition accuracy (93.4%), while the highest accuracy is achieved by the MobileNetV2 model (99.6%), followed by VGG16 and InceptionV3 (98.8% and 97.2%, respectively). Regarding phase detection, InceptionV3 and VGG16 obtained the best results (94.5% and 94.4%), whereas MobileNetV2 and CranioNet performed worse (91.1% and 89.8%). These results prove the feasibility of applying deep learning architectures for real-time tool detection and phase estimation in craniosynostosis surgeries.
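A sketch of one of the compared setups: a MobileNetV2 backbone with a small classification head for tool recognition. The class count, input size, and head layers are placeholder assumptions; the paper's exact hyperparameters and the CranioNet architecture are not reproduced here:

```python
# Transfer-learning sketch for surgical-tool classification.
import tensorflow as tf

NUM_TOOLS = 5  # assumed number of tool classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_TOOLS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with real frames
```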
158.
Abstract

The concept of a statistical strategy is introduced and used to develop a structured graphical user interface for guiding data analysis. The interface visually represents statistical strategies that are designed by expert data analysts to guide novices. The representation is an abstraction of the expert's concept of the essence of a data analysis. We argue that an environment that visually guides and structures data analysis will improve data analysis productivity, accuracy, accessibility, and satisfaction in comparison with an environment without such aids, especially for novice data analysts. Our concepts are based on notions from cognitive science and can be empirically evaluated. The interface consists of two interacting windows, the guidemap and the workmap, each containing a graph with nodes and edges. The guidemap graph represents the statistical strategy for a specific statistical task (such as describing data): its nodes represent potential data analysis actions that can be taken by the system, and its edges represent potential actions that can be taken by the analyst. The guidemap graph exists prior to the data analysis session, having been created by an expert. The workmap graph represents the complete history of all steps taken by the data analyst and is constructed during the session as a result of the analyst's actions. Workmap nodes represent data sets, data models, or data analysis procedures that have been created or used by the analyst; workmap edges represent the chronological sequence of the analyst's actions. One workmap node is highlighted to indicate which statistical object is the focus of the strategy. We illustrate our concepts with ViSta, the Visual Statistics system that we have developed.
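A minimal sketch of the workmap idea as a data structure: a directed graph whose nodes are data sets, models, or procedures and whose edges record the chronological sequence of actions, with one highlighted focus node. The node kinds and API are assumptions for illustration, not ViSta's implementation:

```python
# Workmap sketch: a history graph of the analysis session.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Workmap:
    nodes: dict = field(default_factory=dict)   # name -> kind
    edges: list = field(default_factory=list)   # (from, to), in order
    focus: Optional[str] = None                 # highlighted node

    def add_step(self, parent, name, kind):
        self.nodes[name] = kind
        if parent is not None:
            self.edges.append((parent, name))
        self.focus = name  # newest object becomes the strategy focus

wm = Workmap()
wm.add_step(None, "raw data", "dataset")
wm.add_step("raw data", "summary stats", "procedure")
wm.add_step("raw data", "regression model", "model")
print(wm.focus)   # 'regression model'
print(wm.edges)   # [('raw data', 'summary stats'), ('raw data', 'regression model')]
```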
159.
Chun-Xia Yang, Rui Wang, Sen Hu. Physics Letters A, 2013, 377(34-36): 2041-2046
We constructed an agent-based stock market model that concisely describes investors' heterogeneity and adaptability by introducing price sensitivity and feedback time. Under different parameters, the peaked, fat-tailed return distribution is reproduced, and the obtained statistics coincide with empirical results: the center peak exponents range from −0.787 to −0.661, and the tail exponents range from −4.29 to −2.37. In addition, long-term correlation in volatility is examined by the DFA1 method; the obtained exponent α is 0.803, which also coincides with the exponent of 0.78 found in real markets.
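A sketch of first-order detrended fluctuation analysis (DFA1), the method named above: integrate the demeaned series, linearly detrend it in windows of size n, compute the fluctuation function F(n), and estimate the exponent α as the log-log slope. The toy input data is illustrative, not the paper's volatility series:

```python
# DFA1 sketch: estimate the long-range correlation exponent alpha.
import numpy as np

def dfa1(series, window_sizes):
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())        # integrated (profile) signal
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        f2 = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)   # order-1 (linear) trend
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # Slope of log F(n) vs log n is the DFA exponent alpha.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(0)
volatility = np.abs(rng.normal(size=4000))       # uncorrelated toy series
print(dfa1(volatility, [16, 32, 64, 128, 256]))  # ~0.5 for white noise
```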
160.
Relative Radiometric Normalization (RRN) is often required in remote sensing image analysis, particularly in land cover change detection. The normalization process minimizes the radiometric differences between two images that are caused by inequalities in the acquisition conditions rather than by changes in surface reflectance. A wide range of RRN methods have been developed to fit linear models. This paper proposes an automated RRN method that fits a non-linear model based on an Artificial Neural Network (ANN) and unchanged pixels. The proposed method includes the following stages: (1) automatic detection of unchanged pixels based on a new idea that combines the Change Vector Analysis (CVA) method, Principal Component Analysis (PCA) transformation, and the K-means clustering technique; (2) evaluation of different perceptron neural network architectures to find the best one for this specific task; and (3) use of that network to normalize the subject image. The method has been implemented on two images taken by the TM sensor. Experimental results confirm the effectiveness of the presented technique in automatically detecting unchanged pixels and minimizing imaging-condition effects (i.e., the atmosphere and other influential parameters).
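A sketch of the pipeline's core idea on made-up data: flag candidate unchanged pixels by clustering the change-vector magnitude into two groups (a simplified stand-in for the paper's CVA + PCA + K-means stage), then train an MLP on those pixels to map the subject image's radiometry onto the reference image:

```python
# Unchanged-pixel detection + non-linear RRN sketch (illustrative data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 3
reference = rng.uniform(0, 255, (n_pixels, n_bands))
# Subject image: a non-linear distortion of the reference, plus a
# block of genuinely changed pixels.
subject = 0.8 * reference + 4e-4 * reference**2 + rng.normal(0, 2, reference.shape)
subject[:500] += rng.uniform(40, 80, (500, n_bands))   # changed area

magnitude = np.linalg.norm(subject - reference, axis=1)  # CVA magnitude
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    magnitude.reshape(-1, 1)
)
# The cluster with the smaller mean magnitude is taken as "unchanged".
unchanged = labels == np.argmin(
    [magnitude[labels == c].mean() for c in (0, 1)]
)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(subject[unchanged], reference[unchanged])
normalized = mlp.predict(subject)                 # normalized subject image
print("unchanged pixels found:", unchanged.sum())
```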