91.
Many procedures have been proposed to compute nonparametric maximum likelihood estimators (NPMLEs) of survival functions under stochastic ordering constraints. However, each of them is only applicable to a specific type of stochastic ordering constraint and censoring, and is often hard to implement. In this article, we describe a general and flexible method based on geometric programming for computing the NPMLEs from right- or interval-censored data. To this end, we show that the monotonicity properties of the likelihood function and the stochastic ordering constraints considered in the literature allow us to reformulate the estimation problem as a geometric program (GP), a special type of mathematical optimization problem, which can be transformed to a convex optimization problem, and then solved globally and efficiently. We apply this GP-based method to real data examples to illustrate its generality in handling different types of ordering constraints and censoring. We also conduct simulation studies to examine its numerical performance for various sample sizes.
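The key fact the abstract relies on is that a GP becomes convex after a log change of variables. The toy program below (not the NPMLE formulation itself; problem and solver are illustrative assumptions) sketches that transformation: minimize x + y subject to x·y ≥ 4 becomes minimize log(e^u + e^v) subject to u + v ≥ log 4 under u = log x, v = log y.

```python
import numpy as np
from scipy.optimize import minimize

# Toy geometric program: minimize x + y subject to x*y >= 4, x, y > 0.
# Substituting u = log x, v = log y yields the convex problem
#   minimize log(exp(u) + exp(v))  s.t.  u + v >= log 4,
# which a generic solver handles globally. This is only a sketch of the
# GP-to-convex reduction, not the article's NPMLE program.

def objective(w):
    return np.logaddexp(w[0], w[1])  # log(e^u + e^v), convex in (u, v)

cons = [{"type": "ineq", "fun": lambda w: w[0] + w[1] - np.log(4.0)}]
res = minimize(objective, x0=[0.0, 0.0], constraints=cons)
x_opt, y_opt = np.exp(res.x)
print(x_opt, y_opt)  # optimum is x = y = 2, objective value 4
```

By symmetry the optimum sits at u = v = log 2, i.e. x = y = 2; solvers such as CVXPY (`problem.solve(gp=True)`) automate exactly this reduction.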
92.
We describe and contrast several different bootstrap procedures for penalized spline smoothers. The bootstrap methods considered are variations on existing methods, developed under two different probabilistic frameworks. Under the first framework, penalized spline regression is considered as an estimation technique to find an unknown smooth function. The smooth function is represented in a high-dimensional spline basis, with spline coefficients estimated in a penalized form. Under the second framework, the unknown function is treated as a realization of a set of random spline coefficients, which are then predicted in a linear mixed model. We describe how bootstrap methods can be implemented under both frameworks, and we show theoretically and through simulations and examples that bootstrapping provides valid inference in both cases. We compare the inference obtained under both frameworks, and conclude that the latter generally produces better results than the former. The bootstrap ideas are extended to hypothesis testing, where parametric components in a model are tested against nonparametric alternatives.

Datasets and computer code are available in the online supplements.
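Under the first framework above, a residual bootstrap for a penalized spline smoother can be sketched as follows; the truncated-power basis, ridge-type penalty, and smoothing parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Residual bootstrap for a penalized spline smoother (first framework):
# fit once, resample residuals, refit, and read off pointwise bands.
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Truncated-power spline basis; penalize only the knot coefficients.
knots = np.linspace(0.1, 0.9, 9)
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) for k in knots])
lam = 1e-3
P = np.diag([0.0, 0.0] + [1.0] * len(knots))

def fit(yy):
    beta = np.linalg.solve(B.T @ B + lam * P, B.T @ yy)
    return B @ beta

fhat = fit(y)
resid = y - fhat

# Resample residuals around the fitted curve and refit 200 times.
boot = np.stack([fit(fhat + rng.choice(resid, n, replace=True))
                 for _ in range(200)])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
```

The mixed-model framework would instead resample at the level of the random spline coefficients; the refit-and-collect loop is the same.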
93.
A new kernel-type estimator of the conditional density is proposed. It is based on an efficient quantile transformation of the data. The proposed estimator, which is based on the copula representation, turns out to have a remarkable product form. Its large-sample properties are considered and comparisons in terms of bias and variance are made with competitors based on nonparametric regression. A comparative simulation study is also provided.
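The product form in question is f(y|x) = f_Y(y) · c(F_X(x), F_Y(y)), with c the copula density. A naive sketch of that decomposition is below; the bandwidths and the lack of boundary correction on the unit square are simplifying assumptions, not the efficient transformation the abstract refers to.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Sketch of the copula product form f(y|x) = f_Y(y) * c(F_X(x), F_Y(y)):
# move the data to the quantile scale with empirical CDFs, estimate the
# copula density c by a 2D kernel, multiply by a marginal KDE of Y.
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

def ecdf(sample, pts):
    srt = np.sort(sample)
    return np.searchsorted(srt, pts, side="right") / (len(sample) + 1)

u, v = ecdf(x, x), ecdf(y, y)            # pseudo-observations in (0, 1)^2
copula_kde = gaussian_kde(np.vstack([u, v]))
fy = gaussian_kde(y)                     # marginal density of Y

def cond_density(y0, x0):
    u0 = ecdf(x, np.atleast_1d(x0))
    v0 = ecdf(y, np.atleast_1d(y0))
    return float(fy(y0)[0] * copula_kde(np.vstack([u0, v0]))[0])

d = cond_density(0.0, 0.0)
print(d)
```

A production version would use boundary-corrected kernels on the copula scale; the unbounded Gaussian kernel here leaks mass outside the unit square.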
94.
This article presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool. It has been shown that, for estimating functions whose derivatives are of bounded variation, trend filtering achieves the minimax optimal error rate, while other popular methods like smoothing splines and kernels do not. Standing in the way of a more widespread practical adoption, however, is a lack of scalable and numerically stable algorithms for fitting trend filtering estimates. This article presents a highly efficient, specialized alternating direction method of multipliers (ADMM) routine for trend filtering. Our algorithm is competitive with the specialized interior point methods that are currently in use, and yet is far more numerically robust. Furthermore, the proposed ADMM implementation is very simple, and, importantly, it is flexible enough to extend to many interesting related problems, such as sparse trend filtering and isotonic trend filtering. Software for our method is freely available, in both the C and R languages.
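A dense plain-numpy sketch of ADMM for linear (order-1) trend filtering is shown below; it uses the textbook splitting on the penalty term and is not the authors' optimized C/R implementation (which exploits banded structure in the β-update).

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def trend_filter_admm(y, lam, rho=1.0, n_iter=500):
    """ADMM for: minimize 0.5*||y - beta||^2 + lam*||D beta||_1,
    where D is the second-difference operator (piecewise-linear fits)."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + rho * D.T @ D        # beta-update system matrix
    alpha = np.zeros(n - 2)              # split variable for D @ beta
    u = np.zeros(n - 2)                  # scaled dual variable
    for _ in range(n_iter):
        beta = np.linalg.solve(A, y + rho * D.T @ (alpha - u))
        alpha = soft_threshold(D @ beta + u, lam / rho)
        u = u + D @ beta - alpha
    return beta

# Noisy piecewise-linear signal: the fit should beat the raw data.
x = np.linspace(0, 1, 100)
truth = np.where(x < 0.5, 2 * x, 2 - 2 * x)
y = truth + rng.normal(scale=0.1, size=100)
beta = trend_filter_admm(y, lam=1.0)
mse = float(np.mean((beta - truth) ** 2))
print(mse)
```

Sparse or isotonic variants, as the abstract notes, only change the α-update (an extra soft-threshold, or a projection onto the monotone cone).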
95.
Let (X, Y), X ∈ R^p, Y ∈ R^1, have the regression function r(x) = E(Y | X = x). We consider the kernel nonparametric estimate r_n(x) of r(x) and obtain a sequence of distribution functions approximating the distribution of the maximal deviation with power rate. It is shown that the distribution of the maximal deviation tends to the double exponential law (which is the conventional form of such theorems) at a logarithmic rate, and that this rate cannot be improved.
96.
Among recent measures for risk management, value at risk (VaR) has been criticized because it is not coherent, and expected shortfall (ES) has been criticized because it is not robust to outliers. Recently, [Math. Oper. Res., 38, 393–417 (2013)] proposed a risk measure called median shortfall (MS), which is distributionally robust and easy to implement. In this paper, we propose a more general risk measure called quantile shortfall (QS), which includes MS as a special case. QS measures the conditional quantile loss of the tail risk and inherits the merits of MS. We construct an estimator of QS and establish the asymptotic normality of the estimator. Our simulations show that the newly proposed measure compares favorably in robustness with other widely used measures such as ES and VaR.
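The three measures are easy to contrast on a sample. The sketch below follows the standard definitions (MS at level p is the median of losses beyond VaR_p, i.e. the quantile at (1 + p)/2; QS would replace that tail median with another tail quantile); the heavy-tailed toy sample is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# VaR, expected shortfall, and median shortfall on a loss sample.
losses = rng.standard_t(df=3, size=100_000)   # heavy-tailed losses
p = 0.95

var_p = np.quantile(losses, p)                # value at risk
es_p = losses[losses >= var_p].mean()         # mean of the tail beyond VaR
ms_p = np.quantile(losses, (1 + p) / 2)       # median shortfall = VaR_{(1+p)/2}

print(var_p, ms_p, es_p)
```

On a right-skewed tail like this one, VaR_p ≤ MS_p ≤ ES_p: the tail mean (ES) is dragged up by outliers that the tail median (MS) ignores, which is exactly the robustness contrast the abstract draws.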
97.
Optimal Global Rates of Convergence of M-Estimates for Multivariate Nonparametric Regression. Shi Peide (施沛德) (Institute of Systems Science, th...
98.
Many nonparametric tests admit improvement by identifying a functional on a set of probability measures P, of which the test statistic is an estimator. We call such a functional a gauge for the problem if it induces the partition of P into null and alternative and enjoys certain invariance properties. Two nonparametric testing problems are explored here: a dependency problem and an equidistribution problem. In each, a dual smoothing problem is posed and optimally solved in the estimation framework, and the corresponding testing procedure gives a consistency-rate improvement over the original test.
99.
In this paper, we present a comparative simulation study of the standard empirical distribution function estimator versus a new class of nonparametric estimators of a distribution function F, called the iterated function system (IFS) estimator. The target distribution function F is assumed to have compact support. The IFS estimator of a distribution function F is the fixed point of a contractive operator T defined in terms of a vector of parameters p and a family of affine maps, both of which may depend on the sample (X1, X2, …, Xn). The problem then consists in finding a vector p such that the fixed point of T is "sufficiently near" to F. It turns out that this is a quadratic constrained optimization problem, which we propose to solve by penalization techniques. Analytical results prove that IFS estimators of F are asymptotically equivalent to the empirical distribution function (EDF) estimator. We study the relative efficiency of the IFS estimators with respect to the empirical distribution function for small samples via a Monte Carlo approach. For well-behaved distribution functions F and for a particular family of so-called wavelet maps, the IFS estimators can be dramatically better than the empirical distribution function in the presence of missing data, i.e. when data can only be observed on subsets of the support of F.
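The benchmark side of this comparison is straightforward to reproduce. The sketch below computes the EDF and its sup-distance from the true CDF for a uniform sample (an illustrative choice); the IFS fixed-point construction itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# EDF baseline: for a sample on [0, 1] with true CDF F(t) = t, compute
# the empirical distribution function and its sup-distance from F
# (a Kolmogorov-Smirnov-type statistic).
n = 200
sample = rng.uniform(size=n)
grid = np.linspace(0, 1, 1001)

def edf(sample, t):
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

sup_dist = float(np.max(np.abs(edf(sample, grid) - grid)))
print(sup_dist)  # O(n^{-1/2}) by the Dvoretzky-Kiefer-Wolfowitz bound
```

The asymptotic-equivalence result in the abstract says an IFS estimator attains the same n^{-1/2} behavior; its advantage shows up at small n and under missing data.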
100.
Recently Haezendonck–Goovaerts (H–G) risk measure has received much attention in actuarial science. Nonparametric inference has been studied by Ahn and Shyamalkumar (2014) and Peng et al. (2015) when the risk measure is defined at a fixed level. In risk management, the level is usually set to be quite near one by regulators. Therefore, especially when the sample size is not large enough, it is useful to treat the level as a function of the sample size, which diverges to one as the sample size goes to infinity. In this paper, we extend the results in Peng et al. (2015) from a fixed level to an intermediate level. Although the proposed maximum empirical likelihood estimator for the H–G risk measure has a different limit for a fixed level and an intermediate level, the proposed empirical likelihood method indeed gives a unified interval estimation for both cases. A simulation study is conducted to examine the finite sample performance of the proposed method.