Search results 11–20 of 486:
11.
Necessary conditions for a given point x_0 to be a locally weak solution to the Pareto minimization problem of a vector-valued function F = (f_1, ..., f_m), F : X → R^m, X ⊂ R^n, are presented. As noted in Ref. 1, the classical necessary condition −conv{Df_i(x_0) | i = 1, ..., m} ∩ T*(X, x_0) ≠ ∅ need not hold when the contingent cone T is used. We have proven, however, that a properly adjusted approximate version of this classical condition always holds. Strangely enough, the approximation for m > 2 must be weaker than for m = 2. The authors would like to thank the anonymous referee for the suggestions which led to an improved presentation of the paper.
12.
The emerging technology in net-zero buildings and smart grids is driving research from centralized operation decisions on a single building toward decentralized decisions on a group of buildings, termed a building cluster, which shares energy resources locally and globally. However, current research has focused on accurately simulating single-building energy usage, which limits its application to building clusters because scenarios such as energy sharing and competition cannot be modeled and studied. We hypothesize that studying the energy usage of a group of buildings, rather than a single building, will result in a cost-effective building system that is in turn resilient to power disruption. To this end, this paper develops a decision model based on a building cluster simulator in which each building is modeled by energy consumption, storage, and generation sub-modules. Assuming each building is interested in minimizing its energy cost, a bi-level operation decision framework based on a memetic algorithm is proposed to study the tradeoff in energy usage among the group of buildings. Two additional metrics, measuring the comfort level and the degree of dependence on the power grid, are introduced for the analysis. The experimental results demonstrate that the proposed framework is capable of deriving the Pareto solutions for the building cluster in a decentralized manner. The Pareto solutions not only enable multi-dimensional tradeoff analysis, but also provide valuable insight for determining pricing mechanisms and power grid capacity.
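At the core of the tradeoff analysis described above is the extraction of non-dominated (Pareto) solutions from a set of candidate operation schedules. A minimal sketch of such a filter, assuming minimization in every objective; the candidate scores below are hypothetical (energy cost, grid dependence) pairs, not values from the paper:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.

    A point p is dominated if some other point q is no worse than p in
    all objectives and differs from p in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (energy cost, grid dependence) scores for four schedules;
# (3.0, 4.0) is dominated by (2.0, 3.0) and is filtered out.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(candidates)
```

The remaining points form the set over which pricing and grid-capacity tradeoffs can then be compared.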
13.
We develop a methodology for the estimation of extreme loss event probability and the value at risk, which takes into account both the magnitudes and the intensity of the extreme losses. Specifically, the extreme loss magnitudes are modeled with a generalized Pareto distribution, whereas their intensity is captured by an autoregressive conditional duration model, a type of self-exciting point process. This allows for an explicit interaction between the magnitude of past losses and the intensity of future extreme losses. The intensity is further used in the estimation of extreme loss event probability. The method is illustrated and backtested on 10 assets and compared with established and baseline methods. The results show that our method outperforms the baseline methods, competes with an established method, and provides additional insight into and interpretation of the prediction of extreme loss event probability. Copyright © 2012 John Wiley & Sons, Ltd.
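The peaks-over-threshold step underlying this approach can be sketched with standard-library code. This covers only the GPD-magnitude half of the method (the ACD intensity model is omitted), substitutes a method-of-moments fit for maximum likelihood, and the simulated losses and 95th-percentile threshold are assumptions for illustration:

```python
import random
import statistics

random.seed(0)
# Hypothetical heavy-tailed losses standing in for observed asset losses.
losses = [random.paretovariate(3.0) for _ in range(5000)]

u = sorted(losses)[int(0.95 * len(losses))]   # threshold: 95th percentile
excesses = [x - u for x in losses if x > u]

# Method-of-moments fit of the generalized Pareto distribution to the
# excesses (a simple stand-in for the MLE), using mean^2/var = 1 - 2*xi.
m = statistics.mean(excesses)
v = statistics.variance(excesses)
xi = 0.5 * (1.0 - m * m / v)
beta = 0.5 * m * (1.0 + m * m / v)

def gpd_sf(y, xi, beta):
    """GPD survival function P(Y > y) for an excess Y over the threshold."""
    return (1.0 + xi * y / beta) ** (-1.0 / xi)

# Unconditional extreme-loss probability P(L > x) for a level x above u.
p_u = len(excesses) / len(losses)
x = u + 1.0
p_tail = p_u * gpd_sf(x - u, xi, beta)
```

In the paper's full method, the constant exceedance rate `p_u` would be replaced by the time-varying intensity from the duration model.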
14.
《Optimization》2012,61(3):199-203
The concept of monotone semicontinuity is introduced. It is shown that every monotonically semicontinuous function with values in a space equipped with an arbitrary preference relation attains its extrema on compact sets.
15.
In this paper we present a new approach, based on the Nearest Interval Approximation Operator, for dealing with a multiobjective programming problem with fuzzy-valued objective functions.
16.
We provide an asymptotic analysis of multi-objective sequential stochastic assignment problems (MOSSAP). In MOSSAP, a fixed number of tasks arrive sequentially, with an n-dimensional value vector revealed upon arrival. Each task is assigned to one of a group of known workers immediately upon arrival, with the reward given by an n-dimensional product-form vector. The objective is to maximize each component of the expected reward vector. We provide expressions for the asymptotic expected reward per task for each component of the reward vector and compare the convergence rates for three classes of Pareto optimal policies.
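The product-form reward in sequential assignment can be illustrated with a small single-objective simulation. The worker values and episode count below are hypothetical, and the quantile rule is a simple heuristic stand-in for an optimal breakpoint policy, used only to show that threshold policies beat random assignment:

```python
import random

random.seed(42)
WORKERS = [0.2, 0.5, 0.9]   # assumed worker success rates

def episode(policy):
    """Assign one sequentially arriving task per worker; reward = w * t."""
    available = sorted(WORKERS)
    total = 0.0
    for _ in range(len(WORKERS)):
        t = random.random()          # task value revealed on arrival
        k = policy(t, available)
        total += available.pop(k) * t
    return total

def threshold_policy(t, available):
    # Send a task in the i-th quantile interval to the i-th ranked worker.
    m = len(available)
    return min(int(t * m), m - 1)

def random_policy(t, available):
    return random.randrange(len(available))

N = 4000
r_threshold = sum(episode(threshold_policy) for _ in range(N)) / N
r_random = sum(episode(random_policy) for _ in range(N)) / N
```

Averaged over many episodes, the threshold policy earns a higher per-episode reward than random assignment, which is the effect the asymptotic analysis quantifies per reward component.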
17.
How to choose an optimal threshold is a key problem in the generalized Pareto distribution (GPD) model. This paper obtains the exact threshold by testing for the GPD, and shows that the GPD model allows the actuary to easily estimate high quantiles and the probable maximum loss from medical insurance claims data.
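The paper's test-based threshold choice is its own contribution, but a common diagnostic for the same problem is the mean-excess plot: above an adequate GPD threshold, the empirical mean excess is roughly linear in u. A sketch on simulated data (the claim distribution and candidate percentiles are assumptions, not the paper's data):

```python
import random
import statistics

random.seed(7)
# Hypothetical medical-insurance claim sizes with a Pareto-type tail.
claims = [random.paretovariate(2.5) for _ in range(3000)]

def mean_excess(data, u, min_points=30):
    """Average exceedance over u, or None if too few points remain."""
    exc = [x - u for x in data if x > u]
    return statistics.mean(exc) if len(exc) >= min_points else None

# Scan candidate thresholds at the 80th/85th/90th/95th percentiles and
# inspect where the mean-excess values begin to grow linearly in u.
cuts = statistics.quantiles(claims, n=100)       # 99 percentile cut points
candidates = [cuts[q] for q in (79, 84, 89, 94)]
me = [(u, mean_excess(claims, u)) for u in candidates]
```

In practice one plots `me` and chooses the lowest threshold beyond which the points follow a straight line.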
18.
Using a multi-objective evolutionary algorithm (MOEA) and enhanced surrogate approximations, the present study demonstrates the numerical analysis and optimization of staggered-dimple channels. Two surrogates, the response surface approximation (RSA) model and the Kriging (KRG) model, are applied in light of the surrogate fidelity of the approximate analysis. An enhanced Pareto-optimal front is obtained by performing local resampling of the Pareto-optimal front, which provides relatively more accurate Pareto-optimal solutions in the design space for each surrogate model. Three dimensionless design variables are selected, which are related to geometric parameters, namely, the channel height, dimple print diameter, dimple spacing, and dimple depth. Two objective functions are selected, related to the heat transfer and the pressure loss, respectively. The objective-function values are numerically evaluated through Reynolds-averaged Navier–Stokes analysis at the design points, which are selected through the Latin hypercube sampling method. Using these numerical simulations, the two surrogates, viz. the RSA and KRG models, are constructed for each objective function, and a hybrid MOEA is applied to obtain the Pareto-optimal front. For this particular implementation of surrogate models, it is observed that the Pareto-optimal predictions of the RSA model are better than those of the KRG model, whereas the KRG model predicts equally well in the off-Pareto region (the region away from the Pareto-optimal solutions), which is not the case with the RSA model. The local resampling of the Pareto-optimal front increases the fidelity of the approximate solutions near the Pareto-optimal region. The ratios of the channel height to the dimple print diameter and of the dimple print diameter to the dimple pitch are found to be more sensitive along the Pareto-optimal front than the ratio of the dimple depth to the print diameter.
A decrease in the ratio of the channel height to the dimple diameter and an increase in the ratio of the dimple print diameter to the pitch lead to greater heat transfer at the expense of pressure loss, whereas the ratio of the dimple depth to the print diameter is rather insensitive along the Pareto-optimal solutions. Pareto-optimal solutions at higher values of the Nusselt number are associated with higher values of the pressure loss due to the increased recirculation, mixing of fluid, and vorticity generation. Copyright © 2010 John Wiley & Sons, Ltd.
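The RSA surrogate used in such studies is a polynomial response surface fitted by least squares to the sampled objective values. A one-variable sketch, with hypothetical sample data standing in for the CFD evaluations at the design points:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the 3x3 normal equations."""
    n = len(xs)
    s = [sum(x ** k for x in xs) for k in range(5)]            # power sums of x
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[n, s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = list(t)
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back-substitution yields the coefficients (a, b, c).
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

# Hypothetical objective samples generated from y = 1 + 2x + 3x^2.
coef = fit_quadratic([0, 1, 2, 3, 4], [1, 6, 17, 34, 57])
```

The full study fits such a surface in three design variables per objective; the MOEA then searches the cheap surrogate instead of rerunning the flow solver.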
19.
20.
This paper provides simulation comparisons among the performance of 11 possible prediction intervals for the geometric mean of a Pareto distribution with parameters (α, B). Six different procedures were used to obtain these intervals, namely: true interval, pivotal interval, maximum likelihood estimation interval, central limit theorem interval, variance stabilizing interval, and a mixture of the above intervals. Some of these intervals are valid if the observed sample size n is large, others are valid if both n and the future sample size m are large. Some of these intervals require knowledge of α or B, while others do not. The simulation validation and efficiency study shows that intervals depending on the MLEs are the best. The second-best intervals are those obtained through pivotal methods or a variance stabilizing transformation. The third group of intervals is that which depends on the central limit theorem when λ is known. There are two intervals which proved to be unacceptable under any criterion.
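For the MLE-based intervals that the study found best, a rough standard-library sketch is possible. The parameter values, sample sizes, and the normal approximation for the log geometric mean are all assumptions made for illustration, not the paper's exact constructions:

```python
import math
import random
import statistics

random.seed(3)
alpha_true, B = 2.0, 1.0          # assumed Pareto shape and scale
n, m = 500, 200                   # observed and future sample sizes
sample = [B * random.paretovariate(alpha_true) for _ in range(n)]

# Maximum likelihood estimates for the Pareto(alpha, B) parameters:
# B_hat is the sample minimum; alpha_hat comes from the log-likelihood.
B_hat = min(sample)
alpha_hat = n / sum(math.log(x / B_hat) for x in sample)

# Large-sample sketch: log(geometric mean) of a future sample of size m is
# approximately normal with mean log(B) + 1/alpha and standard deviation
# (1/alpha) * sqrt(1/m + 1/n), the 1/n term covering estimation error.
center = math.log(B_hat) + 1.0 / alpha_hat
half = 1.96 / alpha_hat * math.sqrt(1.0 / m + 1.0 / n)
lo, hi = math.exp(center - half), math.exp(center + half)

# A directly simulated future geometric mean for comparison.
future = [B * random.paretovariate(alpha_true) for _ in range(m)]
gmean = math.exp(statistics.fmean(math.log(x) for x in future))
```

This uses the fact that log(X/B) is exponential with rate α when X is Pareto(α, B), so the log geometric mean is an average of exponentials.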