21.
To extract the quantitative features of trace pollutant components from noisy atmospheric infrared spectra and then build calibration models, this paper proposes a wavelength selection method based on statistical theory. For the target component, the method statistically estimates the noise intensity at each wavelength position of the spectrum and, on that basis, formulates an objective function for selecting the optimal wavelength subset. The objective function incorporates both the noise of the wavelength subset and its length, so that the model error is minimized while unbounded growth of the model size is prevented. To evaluate the method's performance, wavelength selection was carried out for three gases using measured spectra containing background noise, and calibration models were built for each gas with neural networks. The experimental results agree with the optimization of the wavelength subsets: the selected subsets contain less than 2% of the total number of spectral wavelength points, and the noise in the spectra is clearly suppressed in the calibration models. These results demonstrate the effectiveness of the proposed wavelength selection method.
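A minimal sketch of the kind of selection described above, assuming a per-wavelength noise estimate and a relevance score for the target component (both hypothetical here) and an objective that trades the subset's accumulated noise against its length; the greedy strategy, the penalty weight alpha, and the subset-size cap are illustrative assumptions, not the paper's exact formulation, and the neural-network calibration step is omitted.

```python
import numpy as np

def subset_cost(noise_var, subset, alpha=0.01):
    # Hypothetical objective term: accumulated noise over the subset plus a
    # penalty proportional to subset length, to keep the model from growing.
    return noise_var[subset].sum() + alpha * len(subset)

def greedy_wavelength_selection(noise_var, signal_score, max_len=20, alpha=0.01):
    # Greedy forward selection: scan wavelengths in order of decreasing
    # relevance and keep each one only if it improves the overall trade-off.
    order = np.argsort(-signal_score)
    subset, best = [], -np.inf
    for w in order:
        trial = subset + [int(w)]
        if len(trial) > max_len:
            break
        score = signal_score[trial].sum() - subset_cost(noise_var, np.array(trial), alpha)
        if score > best:
            subset, best = trial, score
    return np.array(subset)

# toy usage with 1000 wavelength points
rng = np.random.default_rng(0)
noise = rng.uniform(0.01, 0.2, size=1000)     # estimated noise per wavelength
signal = rng.uniform(0.0, 1.0, size=1000)     # relevance to the target component
picked = greedy_wavelength_selection(noise, signal)
print(f"{len(picked)} of 1000 wavelengths selected ({100 * len(picked) / 1000:.1f}%)")
```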
22.
Forecasting the number of warranty claims is vitally important for manufacturers and warranty providers in preparing fiscal plans. In the existing literature, a number of techniques such as log-linear Poisson models, Kalman filters, time series models, and artificial neural network models have been developed. Nevertheless, these approaches have two weaknesses: (1) they do not consider the fact that warranty claims reported in recent months may be more important in forecasting future warranty claims than those reported in earlier months, and (2) they are built on repair rates (i.e., the total number of claims divided by the total number of products in service), which can cause information loss through such an arithmetic-mean operation. To overcome these two weaknesses, this paper introduces two different approaches to forecasting warranty claims: the first is a weighted support vector regression (SVR) model and the second is a weighted SVR-based time series model. These two approaches apply to two scenarios: when only claim-rate data are available and when the original claim data are available. Two case studies are conducted to validate the two modelling approaches. On the basis of model evaluation over six-months-ahead forecasting, the results show that the proposed models exhibit superior performance compared with multilayer perceptrons, radial basis function networks, and ordinary support vector regression models.
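As a rough sketch of the weighting idea only (not the authors' exact model), scikit-learn's SVR accepts per-sample weights, so recent months can be emphasized with an exponentially decaying weight when regressing the claim series on the month index; the decay rate, kernel settings, and toy claim counts below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# toy monthly warranty claim counts, indexed by month
claims = np.array([120, 135, 128, 150, 160, 155, 170, 180, 175, 190, 200, 210], dtype=float)
X = np.arange(1, len(claims) + 1).reshape(-1, 1)

# exponentially decaying sample weights: recent months matter more
decay = 0.9
weights = decay ** (len(claims) - np.arange(1, len(claims) + 1))

model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X, claims, sample_weight=weights)

# six-months-ahead forecast
future = np.arange(len(claims) + 1, len(claims) + 7).reshape(-1, 1)
print(model.predict(future))
```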
23.
The Taguchi method is the usual strategy in robust design and involves conducting experiments using orthogonal arrays and estimating the combination of factor levels that optimizes a given performance measure, typically a signal-to-noise ratio. The problem is more complex in the case of multiple responses, since the combinations of factor levels that optimize the different responses usually differ. In this paper, an Artificial Neural Network, trained with the experimental results, is used to estimate the responses for all factor-level combinations. After that, Data Envelopment Analysis (DEA) is used first to select the efficient (i.e. non-dominated) factor-level combinations and then to choose among them the one that leads to the most robust quality-loss penalization. Mean square deviations of the quality characteristics are used as DEA inputs. Among the advantages of the proposed approach over the traditional Taguchi method are the non-parametric, non-linear way of estimating quality-loss measures for unobserved factor combinations and the non-parametric character of the performance evaluation of all the factor combinations. The proposed approach is applied to a number of case studies from the literature and compared with existing approaches.
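A small sketch of the intermediate selection step, assuming the mean square deviations (MSDs) of each quality characteristic have already been estimated for every factor-level combination: a plain Pareto filter keeps the non-dominated combinations. The full DEA efficiency model and the ANN response estimator are not reproduced here, and the toy MSD values are invented for illustration.

```python
import numpy as np

def nondominated(msd):
    # msd: one row per factor-level combination, one column per quality
    # characteristic; a lower mean square deviation is better.
    keep = []
    for i in range(msd.shape[0]):
        dominated = any(
            np.all(msd[j] <= msd[i]) and np.any(msd[j] < msd[i])
            for j in range(msd.shape[0]) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# toy MSDs for five combinations and two responses
msd = np.array([[1.0, 2.0],
                [0.8, 2.5],
                [1.2, 1.5],
                [0.9, 1.4],
                [1.5, 3.0]])
print(nondominated(msd))   # indices of the efficient combinations
```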
24.
How, in the face of both intrinsic and extrinsic volatility, can unconventional computing fabrics store information over arbitrarily long periods? Here, we argue that the predictable structure of many realistic environments, both natural and artificial, can be used to maintain useful categorical boundaries even when the computational fabric itself is inherently volatile and the inputs and outputs are partially stochastic. As a concrete example, we consider the storage of binary classifications in connectionist networks, although the underlying principles should be applicable to other unconventional computing paradigms. Specifically, we demonstrate that an unsupervised, activity dependent plasticity rule, AHAH (Anti-Hebbian-And-Hebbian), allows binary classifications to remain stable even when the underlying synaptic weights are subject to random noise. When embedded in environments composed of separable features, the weight vector is restricted by the AHAH rule to local attractors representing stable partitions of the input space, allowing unsupervised recovery of stored binary classifications following random perturbations that leave the system in the same basin of attraction. We conclude that the stability of long-term memories may depend not so much on the reliability of the underlying substrate, but rather on the reproducible structure of the environment itself, suggesting a new paradigm for reliable computing with unreliable components.
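The sketch below illustrates the flavor of the argument rather than the paper's exact rule: a toy two-cluster environment, a noisy weight vector, and an unsupervised update whose Hebbian term reinforces the current classification while an anti-Hebbian decay keeps the outputs bounded, so the hyperplane settles into an attractor between the clusters despite per-step weight noise. The specific update, the noise level, and the data are all assumptions made for illustration; the AHAH rule as defined by the authors is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# a toy "environment": two separable feature clusters in 2-D
n = 200
x_pos = rng.normal(loc=[+2.0, +2.0], scale=0.5, size=(n, 2))
x_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(n, 2))
data = np.vstack([x_pos, x_neg])

w = 0.1 * rng.normal(size=2)          # decision hyperplane through the origin
lr, noise_sigma = 0.01, 0.02          # learning rate and intrinsic weight noise

for epoch in range(50):
    for x in rng.permutation(data):
        y = w @ x
        # Hebbian sign term reinforces the side x already falls on; the -y
        # term is an anti-Hebbian decay that bounds the weights, so the
        # boundary settles between the clusters (a stable partition).
        w += lr * x * (np.sign(y) - y)
        # intrinsic volatility: random perturbation of the stored weights
        w += rng.normal(scale=noise_sigma, size=2)

sides = np.sign(data @ w)
print("mean side, cluster 1:", sides[:n].mean(), " cluster 2:", sides[n:].mean())
```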
25.
An accurate phase-height mapping algorithm based on phase shifting and a neural network is proposed to improve the performance of a structured light system with digital fringe projection. As phase-height mapping is nonlinear, it is difficult to find the best camera model for the system. In order to achieve high accuracy, a trained three-layer back-propagation neural network is employed to obtain the complicated transformation. The phase error caused by the non-sinusoidal nature of the fringe image is analyzed. During the phase calculation process, a pre-calibrated phase-error look-up table is used to reduce the phase error. The detailed procedures of the sample data collection are described. By training the network, the relationship between the image coordinates and the 3D coordinates of the object can be obtained. Experimental results demonstrate that the proposed method is not sensitive to the non-sinusoidal nature of the fringe image and that it can recover complex free-form objects with high accuracy.
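For reference, a minimal sketch of the phase-calculation stage under common assumptions: the standard four-step phase-shifting formula recovers the wrapped phase, and a pre-calibrated look-up table (left empty here, and binned by wrapped phase purely for illustration) is subtracted to suppress the error introduced by the non-sinusoidal fringes. The neural-network phase-to-height mapping itself is not shown.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    # Standard four-step phase shifting (shifts 0, pi/2, pi, 3*pi/2):
    # phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
    return np.arctan2(i4 - i2, i1 - i3)

def correct_phase(phi, error_lut, lut_bins):
    # Subtract a pre-calibrated phase-error entry; binning the table by
    # wrapped phase is an assumption made here for illustration only.
    idx = np.clip(np.digitize(phi, lut_bins) - 1, 0, len(error_lut) - 1)
    return phi - error_lut[idx]

# toy fringe signals: sinusoid plus a small third harmonic (the non-sinusoidal part)
phase = np.linspace(-3.0, 3.0, 512)
imgs = [np.cos(phase + k * np.pi / 2) + 0.05 * np.cos(3 * (phase + k * np.pi / 2))
        for k in range(4)]

phi = wrapped_phase(*imgs)
lut_bins = np.linspace(-np.pi, np.pi, 257)
error_lut = np.zeros(256)             # would be filled from calibration measurements
phi_corrected = correct_phase(phi, error_lut, lut_bins)
print(np.max(np.abs(phi - phase)))    # residual phase error caused by the third harmonic
```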
26.
For decades, organizational researchers have employed standard statistical methods to uncover relationships among variables and constructs. However, in complex organization systems, the prevalence of non-linearity and outliers is to be expected. Under such circumstances, the use of standard statistical methods becomes unreliable and, correspondingly, results in degraded predictions of the relationships within the organizational systems. We describe the use of neural network analyses to model team effectiveness so as to provide more accurate predictions for managers.
27.
Simulation optimization aims at determining the best values of input parameters when the analytical objective function and constraints are not explicitly known in terms of the design variables and their values can only be estimated by complicated analysis or time-consuming simulation. In this paper, a hybrid genetic algorithm-neural network strategy (GA-NN) is proposed for this kind of optimization problem. The good approximation performance of neural networks (NN) and the effective and robust evolutionary searching ability of genetic algorithms (GA) are combined in a hybrid sense: NNs are employed to predict the objective value, and the GA is adopted to search for optimal designs based on the predicted fitness values. Numerical simulation results and comparisons based on a well-known pressure vessel design problem demonstrate the feasibility and effectiveness of the framework, with much better results than some existing results in the literature.
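A compact sketch of the surrogate-assisted loop under stated assumptions: a cheap analytic function stands in for the expensive simulation, a scikit-learn MLP is fitted to a modest set of simulation runs, and a bare-bones GA then searches using the surrogate's prediction as fitness. The toy objective, population sizes, and operators are all illustrative; in practice the GA's candidate designs would be re-evaluated with the real simulation, and the pressure vessel problem from the paper is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# stand-in for the expensive simulation (unknown to the optimizer in the real setting)
def simulate(x):
    return np.sum((x - 0.3) ** 2, axis=1) + 0.05 * np.sin(10 * x).sum(axis=1)

# 1. build the NN surrogate from a modest number of simulation runs
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = simulate(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# 2. simple GA that uses the surrogate's prediction as the fitness (lower is better)
pop = rng.uniform(0, 1, size=(60, 3))
for gen in range(100):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:30]]              # truncation selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(30)], parents[rng.integers(30)]
        child = np.where(rng.random(3) < 0.5, a, b)      # uniform crossover
        child += rng.normal(scale=0.05, size=3)          # Gaussian mutation
        children.append(np.clip(child, 0, 1))
    pop = np.vstack([parents, children])

best = pop[np.argmin(surrogate.predict(pop))]
print("best design (surrogate):", best, " true objective:", simulate(best[None, :])[0])
```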
28.
Hopfield neural networks and affine scaling interior point methods are combined in a hybrid approach for solving linear optimization problems. The Hopfield networks perform the early stages of the optimization procedures, providing enhanced feasible starting points for both primal and dual affine scaling interior point methods, thus facilitating the steps towards optimality. The hybrid approach is applied to a set of real world linear programming problems. The results show the potential of the integrated approach, indicating that the combination of neural networks and affine scaling interior point methods can be a good alternative to obtain solutions for large-scale optimization problems.
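As a sketch of the interior-point half of the hybrid only (the Hopfield stage, which would supply the enhanced feasible starting point, is not reproduced), the primal affine scaling iteration for min c'x subject to Ax = b, x > 0 can be written as below; the toy LP, the step fraction, and the iteration count are assumptions.

```python
import numpy as np

def affine_scaling(A, b, c, x0, alpha=0.6, iters=50):
    # Primal affine scaling for  min c'x  s.t.  Ax = b, x > 0, starting from
    # a strictly interior feasible point x0 (here supplied directly; in the
    # hybrid approach above it would come from the Hopfield network stage).
    x = x0.astype(float)
    for _ in range(iters):
        X = np.diag(x)
        y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)   # dual estimate
        r = c - A.T @ y                                       # reduced costs
        d = -X @ X @ r                                        # primal direction
        if np.all(d >= -1e-12):                               # optimal (or unbounded)
            break
        step = alpha * np.min(-x[d < 0] / d[d < 0])           # stay interior
        x = x + step * d
    return x

# toy LP: min -x1 - 2*x2  s.t.  x1 + x2 + s = 4, x >= 0  (optimum near (0, 4, 0))
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-1.0, -2.0, 0.0])
x0 = np.array([1.0, 1.0, 2.0])        # strictly interior and feasible
print(affine_scaling(A, b, c, x0))
```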
29.
Accurate detection of CH4 gas is critical for preventing gas explosions in coal mines and ensuring safe production. Current systems based on tunable diode laser absorption spectroscopy (TDLAS) suffer from large concentration measurement errors caused by temperature variations. This work investigates a TDLAS-based CH4 detection system together with a temperature compensation method, analyzes the influence of temperature on the CH4 absorption lines, and removes the effect of ambient temperature on CH4 detection through an algorithmic compensation model. Based on the principles of TDLAS and related theory, the transmitter unit, absorption cell, signal receiving unit, and data processing unit were designed, and a TDLAS-based CH4 concentration detection system was built. The concentration of 0.04% CH4 was measured experimentally at ambient temperatures from 10 to 50 ℃, and the effect of temperature on the intensity and half-width of the CH4 absorption line at 1.653 μm was analyzed. To eliminate the influence of temperature on CH4 detection and improve the compensation, a particle swarm optimization (PSO) algorithm was used to optimize the weights and thresholds of a BP neural network (BPNN), and a PSO-BP temperature compensation model for CH4 was established, overcoming the slow convergence of the BP network and its tendency to fall into local optima. The results show that: (1) the CH4 concentration measured by TDLAS decreases as the ambient temperature rises, with relative errors of 4.25%-12.13% over the experimental temperature range, and the relationship between the measured CH4 concentration and temperature can be described by a cubic polynomial; (2) the absorption intensity and half-width of CH4 decrease monotonically with increasing temperature, and the relative rate of change of the line intensity with temperature is larger than that of the half-width, so the line intensity is more sensitive to temperature changes; (3) the mean absolute error (MAE) of the test samples is 12.88% for the BP network and 1.81% for the PSO-BP model, the mean absolute percentage error (MAPE) is 2.3% and 0.3%, the root-mean-square error (RMSE) is 15.96% and 2.69%, and the correlation coefficient R2 is 0.9806 and 0.9996, respectively. With the PSO-BP temperature compensation model, most of the compensated results fall within an error band of ±1.0%, and the evaluation metrics MAE, MAPE, RMSE, and R2 all improve substantially, providing a useful reference for accurate CH4 detection in mines with TDLAS.
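To illustrate the compensation idea only, the sketch below fits a tiny one-hidden-layer network mapping (temperature, measured concentration) to the reference concentration, with a bare-bones particle swarm doing all of the fitting. In the paper, PSO supplies the initial weights and thresholds for subsequent BP training; collapsing the two stages, along with the synthetic drifting data and all hyperparameters, is a simplification made here.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy calibration data: measured 0.04% CH4 readings drift down as temperature rises
T = np.linspace(10, 50, 60)                          # deg C
true_c = np.full_like(T, 0.04)                       # reference concentration, %
measured = 0.04 * (1 - 0.002 * (T - 10)) + rng.normal(scale=2e-4, size=T.size)
X = np.column_stack([T / 50.0, measured / 0.05])     # normalized network inputs

def net(params, X, hidden=6):
    # tiny one-hidden-layer network; params is a flat vector of weights
    n_in = X.shape[1]
    W1 = params[: n_in * hidden].reshape(n_in, hidden)
    b1 = params[n_in * hidden : n_in * hidden + hidden]
    W2 = params[n_in * hidden + hidden : -1]
    b2 = params[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(params):
    return np.mean((net(params, X) - true_c) ** 2)

# minimal PSO over the network parameters (in the paper, PSO initializes a BP network;
# here, as a simplification, it performs all of the fitting)
dim = 2 * 6 + 6 + 6 + 1
pos = rng.normal(scale=0.5, size=(40, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("compensated RMSE (% concentration):", np.sqrt(mse(gbest)))
```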
30.
Artificial Neural Networks (ANNs) offer an alternative way to tackle complex problems. They can learn from examples and, once trained, can perform predictions and generalizations at high speed. They are particularly useful for behavior or system identification. Given these advantages, in the present paper an ANN is used to predict natural convection heat transfer and fluid flow from a column of cold horizontal circular cylinders having uniform surface temperature. The governing equations are solved in a few specified cases by the finite volume method to generate the database for training the ANN over Rayleigh numbers from 10^5 to 10^8 and cylinder spacings of 0.5, 1.0, and 1.5 diameters. A Multi-Layer Perceptron (MLP) network is then used to capture the behavior of the flow and temperature fields and to generalize this behavior to predict the flow and temperature fields for other Rayleigh numbers. Different training algorithms are compared, and the resilient back-propagation algorithm is found to give the fastest training. To validate the accuracy of the trained network, a comparison is performed between the ANN and the available CFD results. It is observed that the ANN can determine the cold plume and thermal field more efficiently, in less computational time. Based on the generalized results from the ANN, new correlations are developed to estimate natural convection from a column of cold horizontal cylinders relative to a single horizontal cylinder.
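A compact sketch of the surrogate idea, assuming a synthetic stand-in for the CFD database (the real training targets would be the finite-volume solutions): an MLP maps log10(Ra) and cylinder spacing to a Nusselt-like output and then generalizes to Rayleigh numbers not in the training set. Note that scikit-learn's MLP trains with Adam or L-BFGS rather than the resilient back-propagation reported as fastest in the paper, and the data-generating formula below is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# synthetic stand-in for the CFD database: inputs are log10(Ra) and spacing s/D,
# output is a made-up Nusselt-like quantity (real values would come from CFD)
log_ra = rng.uniform(5, 8, size=300)
spacing = rng.choice([0.5, 1.0, 1.5], size=300)
nu = 0.5 * 10 ** (0.25 * log_ra) * (1 + 0.1 * spacing) + rng.normal(scale=0.5, size=300)

X = np.column_stack([log_ra, spacing])
scaler = StandardScaler().fit(X)

mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
mlp.fit(scaler.transform(X), nu)

# generalize to Rayleigh numbers and spacings not seen during training
query = np.array([[6.5, 1.0], [7.2, 0.5]])
print(mlp.predict(scaler.transform(query)))
```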