121.
In reliability and life-testing experiments, the researcher is often interested in the effects of extreme or varying stress factors such as temperature, voltage and load on the lifetimes of experimental units. A step-stress test, a special class of accelerated life-test, allows the experimenter to increase the stress levels at fixed times during the experiment in order to obtain information on the parameters of the life distributions more quickly than under normal operating conditions. In this paper, we consider the simple step-stress model from the exponential distribution when there is a time constraint on the duration of the experiment. We derive the maximum likelihood estimators (MLEs) of the parameters assuming a cumulative exposure model with exponentially distributed lifetimes. The exact distributions of the MLEs are obtained through the use of conditional moment generating functions. We also derive confidence intervals for the parameters using these exact distributions, the asymptotic distributions of the MLEs and parametric bootstrap methods, and assess their performance through a Monte Carlo simulation study. Finally, we present two examples to illustrate all the methods of inference discussed here.
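As a rough illustration of the parametric bootstrap intervals mentioned above, the sketch below simulates a simple step-stress experiment under the cumulative exposure model with exponential means theta1 and theta2, stress-change time tau and termination time T, computes the usual MLEs, and forms percentile intervals. The sample size, parameter values and the percentile construction are illustrative assumptions, not the paper's exact procedure (which also develops exact and asymptotic intervals).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_step_stress(n, theta1, theta2, tau, T, rng):
    """Cumulative-exposure simple step-stress data: exponential mean theta1 before
    tau, theta2 after tau, with the experiment terminated (Type-I censored) at T."""
    u = rng.exponential(theta1, size=n)                            # latent lifetime at stress level 1
    t = np.where(u <= tau, u, tau + (u - tau) * theta2 / theta1)   # cumulative-exposure change of scale
    return np.minimum(t, T), t <= T                                # observed time, failure indicator

def mles(times, failed, tau):
    """MLEs theta_k = (total time on test at stress k) / (number of failures at stress k)."""
    n1 = np.sum(failed & (times <= tau))
    n2 = np.sum(failed & (times > tau))
    tt1 = np.sum(np.minimum(times, tau))
    tt2 = np.sum(np.maximum(times - tau, 0.0))
    return tt1 / n1, tt2 / n2                                      # assumes n1 > 0 and n2 > 0

tau, T, n = 8.0, 20.0, 40
times, failed = simulate_step_stress(n, theta1=12.0, theta2=4.0, tau=tau, T=T, rng=rng)
th1, th2 = mles(times, failed, tau)

# parametric bootstrap: resimulate from the fitted model, refit, take percentiles
boot = np.array([mles(*simulate_step_stress(n, th1, th2, tau, T, rng), tau=tau)
                 for _ in range(2000)])
(lo1, lo2), (hi1, hi2) = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"theta1 MLE {th1:.2f}, 95% bootstrap CI ({lo1:.2f}, {hi1:.2f})")
print(f"theta2 MLE {th2:.2f}, 95% bootstrap CI ({lo2:.2f}, {hi2:.2f})")
```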
122.
The problem of imputing missing observations under the linear regression model is considered. It is assumed that observations are missing at random and that all observations on the auxiliary (independent) variables are available. Estimates of the regression parameters based on singly and multiply imputed values are given. Jackknife as well as bootstrap estimates of the variance of the singly imputed estimator of the regression parameters are given and shown to be consistent. The asymptotic distributions of the imputed estimators are also derived in order to obtain interval estimates of the parameters of interest. These interval estimates are then compared with the interval estimates obtained from multiple imputation. It is shown that singly imputed estimators perform at least as well as multiply imputed estimators. A new nonparametric multiply imputed estimator is proposed and shown to perform as well as a multiply imputed estimator under normality. The singly imputed estimator, however, still remains at least as good as a multiply imputed estimator.
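A minimal sketch of the single-imputation idea and a bootstrap estimate of the variance of the resulting regression estimator. The toy data-generating model, the missingness rate and the deterministic regression imputation are assumptions made purely for illustration; the paper's jackknife variance estimator and the multiple-imputation comparisons are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: y = 2 + 3x + noise, with y missing at random for about 30% of cases
n = 200
x = rng.normal(size=n)
y = 2 + 3 * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)

def singly_imputed_slope(x, y_obs):
    """Complete-case fit, deterministic regression imputation of missing y, refit."""
    obs = ~np.isnan(y_obs)
    coef = np.polyfit(x[obs], y_obs[obs], 1)               # complete-case OLS fit
    y_imp = np.where(obs, y_obs, np.polyval(coef, x))      # single imputation by the fitted line
    return np.polyfit(x, y_imp, 1)[0]                      # slope from the imputed data set

beta_hat = singly_imputed_slope(x, y_obs)

# bootstrap variance of the singly imputed estimator: resample (x, y_obs) pairs and re-impute
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = singly_imputed_slope(x[idx], y_obs[idx])

print(f"slope estimate {beta_hat:.3f}, bootstrap SE {boot.std(ddof=1):.3f}")
```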
123.
We consider inverse regression models with convolution-type operators that act by convolution in d ≥ 1 dimensions, and prove a pointwise central limit theorem for spectral regularisation estimators which can be applied to construct pointwise confidence regions. Here, we cope with the unknown bias of such estimators by undersmoothing. Moreover, we prove consistency of the residual bootstrap in this setting and demonstrate the feasibility of the bootstrap confidence bands at moderate sample sizes in a simulation study.
124.
Confidence interval procedures used in low-dimensional settings are often inappropriate for high-dimensional applications. When many parameters are estimated, marginal confidence intervals associated with the most significant estimates have very low coverage rates: they are too small and centered at biased estimates. The problem of forming confidence intervals in high-dimensional settings has previously been studied through the lens of selection adjustment. In that framework, the goal is to control the proportion of noncovering intervals formed for selected parameters. In this article, we approach the problem by considering the relationship between rank and coverage probability. Marginal confidence intervals have very low coverage rates for the most significant parameters and high rates for parameters with less significant estimates. Many selection-adjusted intervals show the same behavior despite controlling the coverage rate within a selected set. This relationship between rank and coverage rate means that the parameters most likely to be pursued further in follow-up or replication studies are the least likely to be covered by the constructed intervals. In this article, we propose rank conditional coverage (RCC) as a new coverage criterion for confidence intervals in multiple testing/covering problems. The RCC is the expected coverage rate of an interval given the significance ranking of the associated estimator. We also propose two methods that use bootstrapping to construct confidence intervals that control the RCC. Because these methods make use of additional information captured by the ranks of the parameter estimates, they often produce smaller intervals than marginal or selection-adjusted methods. These methods are implemented in R (R Core Team, 2017) in the package rcc, available on CRAN at https://cran.r-project.org/web/packages/rcc/index.html. Supplementary material for this article is available online.
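The sketch below is a small simulation illustrating the rank conditional coverage criterion: it estimates the coverage of ordinary marginal intervals conditional on the significance rank of the estimate, showing the low coverage of the top-ranked intervals described above. The Gaussian setup, the ranking by |estimate| and the interval form are illustrative assumptions; the paper's bootstrap constructions (and the rcc R package) are not reimplemented here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, reps, z = 50, 5000, 1.96
theta = rng.normal(0, 1, size=m)                      # fixed true effects for the toy example

cover_by_rank = np.zeros((reps, m))
for r in range(reps):
    est = theta + rng.normal(0, 1, size=m)            # estimates with (known) standard error 1
    order = np.argsort(-np.abs(est))                  # rank parameters by apparent significance
    covered = (est - z <= theta) & (theta <= est + z) # marginal 95% intervals
    cover_by_rank[r] = covered[order]                 # record coverage in rank order

rcc = cover_by_rank.mean(axis=0)                      # Monte Carlo estimate of the RCC by rank
print("coverage of the most significant interval:", round(rcc[0], 3))
print("coverage of a mid-ranked interval:        ", round(rcc[m // 2], 3))
```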
125.
Dynamic life tables arise as an alternative to the standard (static) life table, with the aim of incorporating the evolution of mortality over time. The parametric model introduced by Lee and Carter in 1992 for projecting mortality rates in the US is one of the most prominent and has been widely used since then. Different versions of the model have been developed, but all of them, together with other parametric models, treat the observed mortality rates as independent observations. This is a difficult hypothesis to justify when looking at the graph of the residuals obtained with any of these methods.

Methods of adjustment and prediction based on geostatistical techniques, which exploit the dependence structure existing among the residuals, are an alternative to classical methods. Dynamic life tables can be considered as two-way tables on a grid equally spaced in either the vertical (age) or horizontal (year) direction, and the data can be decomposed into a deterministic large-scale variation (trend) plus a stochastic small-scale variation (residuals).

Our contribution consists of applying geostatistical techniques for estimating the dependence structure of the mortality data and for prediction purposes, also including the influence of the year of birth (cohort). We compare the performance of this new approach with different versions of the Lee-Carter model. Additionally, we obtain bootstrap confidence intervals for the predicted q_xt resulting from applying both methodologies, and we study their influence on the predictions of e_65t and a_65t.
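For reference, the classical Lee-Carter decomposition that the paper uses as a benchmark can be fitted by a rank-1 singular value decomposition of the row-centred matrix of log death rates, log m_xt ≈ a_x + b_x k_t. The sketch below shows that fit on synthetic data; the geostatistical modelling of the residual dependence, the cohort effect and the bootstrap intervals for q_xt are beyond this illustration.

```python
import numpy as np

def lee_carter_fit(log_mxt):
    """Lee-Carter decomposition log m_xt ≈ a_x + b_x k_t via a rank-1 SVD,
    with the usual normalisation sum(b) = 1 (and sum(k) approximately 0)."""
    a = log_mxt.mean(axis=1)                                   # age effect: row means
    U, s, Vt = np.linalg.svd(log_mxt - a[:, None], full_matrices=False)
    scale = U[:, 0].sum()
    b = U[:, 0] / scale                                        # age sensitivity to the period index
    k = s[0] * Vt[0] * scale                                   # period (calendar-year) index
    return a, b, k

# synthetic log death rates (ages x years), just to exercise the fit
rng = np.random.default_rng(3)
ages, years = 10, 30
log_mxt = (np.linspace(-8, -2, ages)[:, None]
           + 0.1 * np.outer(np.linspace(1, 3, ages), np.linspace(2, -2, years))
           + rng.normal(0, 0.02, (ages, years)))
a, b, k = lee_carter_fit(log_mxt)
print("sum(b) =", round(b.sum(), 3), " mean(k) =", round(k.mean(), 3))
```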
126.
This article presents a novel heuristic for constrained optimization of computationally expensive random simulation models. One output is selected as the objective to be minimized, while other outputs must satisfy given threshold values. Moreover, the simulation inputs must be integer and satisfy linear or nonlinear constraints. The heuristic combines (i) sequentialized experimental designs to specify the simulation input combinations, (ii) Kriging (or Gaussian process or spatial correlation modeling) to analyze the global simulation input/output data resulting from these designs, and (iii) integer nonlinear programming to estimate the optimal solution from the Kriging metamodels. The heuristic is applied to an (s, S) inventory system and a call-center simulation, and compared with the popular commercial heuristic OptQuest embedded in Arena versions 11 and 12. In these two applications the novel heuristic outperforms OptQuest in terms of the number of simulated input combinations and the quality of the estimated optimum.
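A compact sketch of the three ingredients listed above, namely a sequential design, a Kriging metamodel and a search over an integer-constrained candidate set, applied to a toy noisy simulation. The toy objective and constraint functions, the fixed Gaussian correlation parameters, and the brute-force enumeration used in place of integer nonlinear programming are all assumptions for illustration; this is not the paper's heuristic and not OptQuest.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(s, S, n=200):
    """Toy random simulation with two outputs for integer inputs (s, S):
    a cost objective and a service-level output to be constrained."""
    cost = 0.5 * (S - s) + 0.02 * (s - 20) ** 2 + rng.normal(0, 1, n).mean()
    service = 1 - np.exp(-s / 15) + rng.normal(0, 0.01, n).mean()
    return cost, service

def krige_predict(X, y, Xnew, ell=5.0, nugget=1e-4):
    """Kriging-style predictor with a fixed Gaussian correlation function."""
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2(X, X) / ell ** 2) + nugget * np.eye(len(X))
    w = np.linalg.solve(K, y - y.mean())
    return y.mean() + np.exp(-0.5 * d2(Xnew, X) / ell ** 2) @ w

# integer candidate points (s, S) satisfying the linear constraint s < S
cand = np.array([(s, S) for s in range(5, 41) for S in range(s + 1, 61)], dtype=float)

# small initial design, then sequentially simulate the most promising integer point
X = cand[rng.choice(len(cand), size=8, replace=False)]
out = np.array([simulate(*x) for x in X])
for _ in range(20):
    cost_hat = krige_predict(X, out[:, 0], cand)            # metamodel of the objective
    serv_hat = krige_predict(X, out[:, 1], cand)            # metamodel of the constrained output
    feasible = serv_hat >= 0.90                             # predicted feasibility w.r.t. the threshold
    if not feasible.any():
        feasible[:] = True                                  # fall back if nothing looks feasible yet
    nxt = cand[feasible][np.argmin(cost_hat[feasible])]     # stand-in for the integer NLP step
    X = np.vstack([X, nxt])
    out = np.vstack([out, simulate(*nxt)])

best = np.argmin(np.where(out[:, 1] >= 0.90, out[:, 0], np.inf))
print("estimated optimum (s, S):", X[best], " cost:", round(out[best, 0], 2))
```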
127.
We study the threshold-θ bootstrap percolation model on the homogeneous tree with degree b+1, where 2 ≤ θ ≤ b, and initial density p. It is known that there exists a nontrivial critical value for p, which we call p_f, such that a) for p > p_f, the final bootstrapped configuration is fully occupied for almost every initial configuration, and b) if p < p_f, then for almost every initial configuration the final bootstrapped configuration has density of occupied vertices less than 1. In this paper, we establish the existence of a distinct critical value for p, p_c, such that 0 < p_c < p_f, with the following properties: 1) if p ≤ p_c, then for almost every initial configuration there is no infinite cluster of occupied vertices in the final bootstrapped configuration; 2) if p > p_c, then for almost every initial configuration there are infinite clusters of occupied vertices in the final bootstrapped configuration. Moreover, we show that 3) for p < p_c, the distribution of the occupied cluster size in the final bootstrapped configuration has an exponential tail; 4) at p = p_c, the expected occupied cluster size in the final bootstrapped configuration is infinite; 5) the probability of percolation of occupied vertices in the final bootstrapped configuration is continuous on [0, p_f] and analytic on (p_c, p_f), admitting an analytic continuation from the right at p_c and, only in the case θ = b, also from the left at p_f. L.R.G. Fontes was partially supported by the Brazilian CNPq through grants 475833/2003-1, 307978/2004-4 and 484351/2006-0, and by FAPESP through grant 04/07276-2. R.H. Schonmann was partially supported by the American NSF through grant DMS-0300672.
128.
Let a low density p of sites on the lattice Z² be occupied, remove a proportion q of them, and call the remaining sites empty. Then update this configuration in discrete time by iteration of the following synchronous rule: an empty site becomes occupied by contact with at least two occupied nearest neighbors, while occupied and removed sites never change their states. If q/p² is large, most sites remain unoccupied forever, while if q/p² is small, this dynamics eventually makes most sites occupied. This demonstrates how sensitive the usual bootstrap percolation rule (the q = 0 case) is to the pollution of space.
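A small sketch of the polluted bootstrap percolation dynamics described above on a finite periodic grid, read here as: each site is independently removed with probability q, occupied with probability p, and empty otherwise. The grid size, the periodic boundary and the particular values of p and q are illustrative assumptions chosen only to show the contrast between small and large q/p².

```python
import numpy as np

rng = np.random.default_rng(5)

def polluted_bootstrap(L, p, q, rng, max_sweeps=10_000):
    """Polluted bootstrap percolation on an L x L periodic grid.
    States: 1 = occupied, 0 = empty, -1 = removed. Synchronous rule: an empty
    site becomes occupied once it has at least two occupied nearest neighbours;
    occupied and removed sites never change. Returns the final occupied density."""
    u = rng.random((L, L))
    state = np.where(u < q, -1, np.where(u < q + p, 1, 0))   # removed w.p. q, occupied w.p. p
    for _ in range(max_sweeps):
        occ = (state == 1).astype(np.int8)
        nbrs = (np.roll(occ, 1, 0) + np.roll(occ, -1, 0)
                + np.roll(occ, 1, 1) + np.roll(occ, -1, 1))
        grow = (state == 0) & (nbrs >= 2)
        if not grow.any():
            break                                            # configuration has stabilised
        state[grow] = 1
    return (state == 1).mean()

p = 0.1
for q in (0.01 * p ** 2, 50 * p ** 2):                       # q/p^2 small vs. large
    dens = polluted_bootstrap(L=512, p=p, q=q, rng=rng)
    print(f"p = {p}, q = {q:.4f}: final occupied density {dens:.3f}")
```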
129.
The SLEX Model of a Non-Stationary Random Process
We propose a new model for non-stationary random processes to represent time series with a time-varying spectral structure. Our SLEX model can be considered as a discrete time-dependent Cramér spectral representation. It is based on the so-called Smooth Localized complex EXponential (SLEX) basis functions, which are orthogonal and localized in both the time and frequency domains. Our model delivers a finite-sample representation of a SLEX process whose SLEX spectrum is piecewise constant over time segments. In addition, we embed it into a sequence of models with a limit spectrum, an evolutionary spectrum that varies smoothly over time. Hence, we develop the SLEX model in parallel to the Dahlhaus (1997, Ann. Statist., 25, 1–37) model of local stationarity, and we show that the two models are asymptotically mean square equivalent. Moreover, to define both the growing complexity of our model sequence and the regularity of the SLEX spectrum, we use a wavelet expansion of the spectrum over time. Finally, we develop theory on how to estimate the spectral quantities, and we briefly discuss how to form inference based on resampling (bootstrapping), made possible by the special structure of the SLEX model, which allows for simple synthesis of non-stationary processes.
130.
This paper considers different bootstrap procedures for investigating the estimation of the fractional parameter d in a particular case of long-memory processes, i.e. for ARFIMA models with d in (0.0, 0.5). We propose two bootstrap techniques to deal with semiparametric estimation methods for d. One approach consists of the local bootstrap method in the frequency domain, initially suggested for the ARMA case by Paparoditis and Politis (1999), and the other consists of bootstrapping the residuals of the frequency-domain regression equation. Through Monte Carlo simulation, these alternative bootstrap methods are compared, based on the mean and the mean square error of the estimators, with the well-known parametric and nonparametric bootstrap techniques for time series models.
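As a concrete illustration of the second approach, bootstrapping the residuals of the frequency-domain regression, the sketch below estimates d by the familiar log-periodogram (GPH-type) regression on a simulated ARFIMA(0, d, 0) series and resamples the regression residuals. The series length, the bandwidth m = sqrt(n), the truncated moving-average simulator and the bootstrap standard error are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(6)

def gph_fit(x, m):
    """Log-periodogram regression: the slope of log I(lambda_j) on
    -2*log(2*sin(lambda_j/2)) over the first m Fourier frequencies estimates d."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -2 * np.log(2 * np.sin(lam / 2))
    y = np.log(I)
    d_hat, c = np.polyfit(X, y, 1)
    resid = y - (c + d_hat * X)
    return d_hat, c, X, resid

def arfima0d0(n, d, rng, trunc=500):
    """Approximate ARFIMA(0, d, 0) noise via a truncated MA(inf) representation."""
    psi = np.cumprod(np.r_[1.0, (d + np.arange(trunc)) / (1 + np.arange(trunc))])
    eps = rng.normal(size=n + trunc)
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

x = arfima0d0(n=1024, d=0.3, rng=rng)
m = int(len(x) ** 0.5)                        # common bandwidth choice m = sqrt(n)
d_hat, c, X, resid = gph_fit(x, m)

# residual bootstrap in the frequency-domain regression equation
B = 1000
d_boot = np.empty(B)
for b in range(B):
    y_star = c + d_hat * X + rng.choice(resid, size=m, replace=True)
    d_boot[b] = np.polyfit(X, y_star, 1)[0]
print(f"d_hat = {d_hat:.3f}, bootstrap SE = {d_boot.std(ddof=1):.3f}")
```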