1542 results found (search time: 15 ms)
1.
This paper shows that the most commonly used minimum divergence estimators are MLEs under suitably generalized bootstrapped sampling schemes. Bahadur optimality of the associated tests of fit under such sampling is also considered.
2.
A method is proposed for extracting the empirical Green's function (EGF) from the noise cross-correlation function (NCF) of ocean ambient noise by subband binary-weighted accumulation. First, each snapshot's NCF is divided into multiple subbands in the frequency domain. Within each subband, the weighting coefficient of each snapshot's NCF is set to 0 or 1 according to the criterion that the SNR of the EGF extracted after accumulation should increase. The weighted accumulation results of the subbands are spectrally whitened, spliced together in the frequency domain, and inverse-Fourier-transformed to obtain the time-domain EGF. The subband binary-weighted accumulation method thus performs automatic frequency selection for each snapshot's NCF before accumulation. Compared with the existing raw-accumulation method and the time-domain binary-weighted accumulation algorithm, it effectively improves the SNR of EGFs extracted from ocean ambient noise over a wide frequency band. Sea-trial data verify the effectiveness and superiority of the method.
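The per-snapshot, per-subband accept/reject loop described in this abstract can be sketched as follows. The subband layout, the whitening step, and `snr_proxy` (a simple peak-to-mean magnitude ratio) are illustrative assumptions standing in for the paper's actual SNR criterion, not the authors' exact procedure:

```python
import numpy as np

def snr_proxy(band):
    """Hypothetical SNR surrogate: peak-to-mean magnitude ratio of a
    subband spectrum (stands in for the paper's actual EGF SNR criterion)."""
    mag = np.abs(band)
    m = mag.mean()
    return mag.max() / m if m > 0 else 0.0

def accumulate_egf(ncf_snapshots, n_subbands=8):
    """Subband binary-weighted accumulation of per-snapshot NCFs.

    ncf_snapshots: (n_snapshots, n_samples) array of time-domain NCFs.
    Returns a time-domain EGF estimate.
    """
    n_snap, n_samp = ncf_snapshots.shape
    spectra = np.fft.rfft(ncf_snapshots, axis=1)          # frequency domain
    n_bins = spectra.shape[1]
    edges = np.linspace(0, n_bins, n_subbands + 1, dtype=int)

    acc = np.zeros(n_bins, dtype=complex)                 # running sum
    for k in range(n_snap):
        for b in range(n_subbands):
            lo, hi = edges[b], edges[b + 1]
            trial = acc[lo:hi] + spectra[k, lo:hi]
            # binary weight: include this snapshot's subband only if the
            # accumulated SNR does not decrease
            if snr_proxy(trial) >= snr_proxy(acc[lo:hi]):
                acc[lo:hi] = trial

    # spectral whitening, then splice the subbands and inverse-transform
    whitened = acc / np.maximum(np.abs(acc), 1e-12)
    return np.fft.irfft(whitened, n=n_samp)
```

The binary weighting amounts to frequency-selective stacking: a snapshot contributes to a subband only when it helps there, which is the automatic frequency selection the abstract emphasizes.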
3.
于洋  侯文 《经济数学》2020,37(3):221-226
We discuss the large-sample properties of generalized linear models whose response follows a single-parameter exponential family inflated at zero. The model parameters are estimated by maximum likelihood, and a set of regularity conditions is given. Under the proposed regularity conditions, the consistency and asymptotic normality of the maximum likelihood estimators of the model parameters are proved.
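As an illustration of maximum likelihood fitting for a zero-inflated exponential-family response, here is a minimal sketch using the zero-inflated Poisson GLM (one member of the family the abstract covers) on hypothetical simulated data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_nll(params, X, y):
    """Negative log-likelihood of a zero-inflated Poisson GLM.
    params = [logit of inflation prob pi, regression coefficients beta];
    the Poisson mean is lambda_i = exp(x_i' beta)."""
    pi = expit(params[0])
    lam = np.exp(X @ params[1:])
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * np.exp(-lam)),   # zero from either component
        np.log1p(-pi) + log_pois,               # positive count: Poisson part
    )
    return -ll.sum()

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, pi_true = np.array([0.5, 0.8]), 0.3
lam = np.exp(X @ beta_true)
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam))

fit = minimize(zip_nll, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat = fit.x[1:]
```

The consistency result the paper proves shows up empirically here: as n grows, `beta_hat` approaches `beta_true` and `expit(fit.x[0])` approaches the inflation probability.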
4.
For the camera pose estimation problem, which must often be solved in real time, we propose an accelerated version of the classical, widely used orthogonal iteration algorithm. The key idea is to restructure each iteration so that computation repeated across iterations is factored out; performing this repeated computation once before the iteration begins greatly reduces the per-iteration workload, lowering the per-iteration complexity from O(n) to O(1). More iterations can therefore be run in the same time, yielding higher accuracy. Comparative experiments show that the accelerated algorithm is both more accurate and faster. The experiments further motivate initializing with the robust perspective-n-point (RPnP) solution and then refining with the accelerated orthogonal iteration; when the number of control points is small, this combination attains accuracy close to the maximum likelihood estimate at the fastest speed.
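The acceleration idea (hoisting the point-dependent sums out of the iteration loop) can be illustrated generically. This is not the authors' actual orthogonal iteration update, just a sketch of the restructuring that lowers per-iteration cost from O(n) to O(1), assuming the per-point contribution is linear in the iterated quantity:

```python
import numpy as np

def iterate_naive(A_list, t0, steps):
    """Naive scheme: each iteration recomputes a sum over all n per-point
    matrices, so one step costs O(n)."""
    t = t0
    for _ in range(steps):
        S = sum(A @ t for A in A_list)      # O(n) work every iteration
        t = S / np.linalg.norm(S)
    return t

def iterate_hoisted(A_list, t0, steps):
    """Accelerated scheme in the spirit of the abstract: the point-dependent
    part is loop-invariant, so the matrices are folded into one before the
    loop and each iteration costs O(1) in the number of points."""
    A_bar = sum(A_list)                      # precomputed once, O(n)
    t = t0
    for _ in range(steps):
        S = A_bar @ t                        # O(1) per iteration
        t = S / np.linalg.norm(S)
    return t
```

By linearity the two schemes produce identical iterates, so the speedup is free of any accuracy loss, which is why the accelerated algorithm can afford more iterations in the same time budget.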
5.
We consider parametric optimization problems from an algebraic viewpoint. The idea is to find all of the critical points of an objective function, thereby determining a global optimum. For generic parameters (data) in the objective function, the number of critical points remains constant; this number is known as the algebraic degree of an optimization problem. In this article, we go further by considering the inverse problem of finding parameters of the objective function so that it gives rise to critical points exhibiting a special structure, for example a critical point in the singular locus, with some symmetry, or satisfying some other algebraic property. Our main result is a theorem describing such parameters.
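A tiny worked instance of the "constant number of critical points for generic data" phenomenon, using Euclidean distance minimization over the unit circle (whose algebraic degree, here the ED degree, is 2). The objective and data values are illustrative, not from the paper:

```python
import numpy as np

# Critical points of f(x,y) = (x-u)^2 + (y-v)^2 on the circle x^2 + y^2 = 1.
# The Lagrange conditions give x*(1 - t) = u and y*(1 - t) = v (t the
# multiplier); combined with the constraint, (1 - t)^2 = u^2 + v^2, a
# degree-2 polynomial in t.  Generic data (u, v) therefore yields exactly
# 2 critical points: the algebraic degree of the problem.
u, v = 0.6, 0.7                                  # illustrative generic data
coeffs = [1.0, -2.0, 1.0 - (u**2 + v**2)]        # t^2 - 2t + (1 - u^2 - v^2)
ts = np.roots(coeffs)
critical_points = [(u / (1 - t), v / (1 - t)) for t in ts]
```

The two critical points are the nearest and farthest points of the circle from (u, v); non-generic data (here u = v = 0) would degenerate the count, which is the kind of special structure the inverse problem in the abstract targets.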
6.
This work deals with log‐symmetric regression models, which are particularly useful when the response variable is continuous, strictly positive, and asymmetrically distributed, with the possibility of modeling atypical observations by means of robust estimation. In these regression models, the distribution of the random errors is a member of the log‐symmetric family, which is composed of the log‐contaminated‐normal, log‐hyperbolic, log‐normal, log‐power‐exponential, log‐slash and log‐Student‐t distributions, among others. One way to select the best family member in log‐symmetric regression models is to use information criteria. In this paper, we formulate log‐symmetric regression models and conduct a Monte Carlo simulation study to investigate the accuracy of popular information criteria, such as Akaike, Bayesian, and Hannan‐Quinn, and their respective corrected versions, for choosing adequate log‐symmetric regression models. As a business application, a movie data set assembled by the authors is analyzed to compare and obtain the best possible log‐symmetric regression model for box offices. The results provide relevant information on model selection criteria in log‐symmetric regressions and for the movie industry. Economic implications of our study are discussed after the numerical illustrations.
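A minimal sketch of the information-criteria comparison: two log-symmetric candidates (log-normal and log-Student-t) fitted by ML on the log scale, where the Jacobian term common to all log-symmetric members cancels in comparisons, scored by AIC, BIC, and Hannan-Quinn. The data-generating choice is hypothetical and this is not the paper's simulation design:

```python
import numpy as np
from scipy import stats

def info_criteria(loglik, k, n):
    """AIC, BIC and Hannan-Quinn for a fitted model with k parameters."""
    return {
        "AIC": 2 * k - 2 * loglik,
        "BIC": k * np.log(n) - 2 * loglik,
        "HQ":  2 * k * np.log(np.log(n)) - 2 * loglik,
    }

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # strictly positive response
z = np.log(y)                                      # log scale: symmetric errors
n = z.size

# candidate log-symmetric members, fitted on the log scale by ML;
# the Jacobian -sum(log y) is shared, so it does not affect the ranking
mu, sigma = stats.norm.fit(z)
ll_norm = stats.norm.logpdf(z, mu, sigma).sum()
df, loc, scale = stats.t.fit(z)
ll_t = stats.t.logpdf(z, df, loc, scale).sum()

crit_norm = info_criteria(ll_norm, k=2, n=n)       # log-normal candidate
crit_t = info_criteria(ll_t, k=3, n=n)             # log-Student-t candidate
```

Since the data are generated from a log-normal, criteria with a meaningful complexity penalty (BIC in particular) should favor the two-parameter log-normal over the three-parameter log-Student-t, mirroring the accuracy question the paper's Monte Carlo study investigates.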
7.
Point estimators for the parameters of the component lifetime distribution in coherent systems are developed under the assumption of independent and identically Weibull-distributed component lifetimes. We study both complete and incomplete information under continuous monitoring of the essential component lifetimes. First, we prove that the maximum likelihood estimator (MLE) under complete information based on progressively Type‐II censored system lifetimes uniquely exists, and we present two approaches to compute the estimates. Furthermore, we consider an ad hoc estimator, a max‐probability plan estimator, and the MLE for the parameters under incomplete information. To compute the MLEs, we consider a direct maximization of the likelihood and an EM‐algorithm‐type approach, respectively. In all cases, we illustrate the results by simulations of the five‐component bridge system and the 10‐component parallel system, respectively.
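For the simplest ingredient of the setting above, the complete-information i.i.d. case with no censoring, Weibull MLE for component lifetimes can be sketched directly. The paper's progressively Type-II censored, system-level likelihood is considerably more involved than this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shape_true, scale_true = 1.8, 5.0
# complete information: the component lifetimes themselves are observed
lifetimes = scale_true * rng.weibull(shape_true, size=1000)

# Weibull MLE with the location fixed at 0, as usual for lifetime data
shape_hat, _, scale_hat = stats.weibull_min.fit(lifetimes, floc=0)
```

Under censored system-lifetime data the likelihood no longer factors over components, which is why the paper needs uniqueness proofs and dedicated computational schemes (direct maximization and an EM-type algorithm).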
8.
We investigate cosmological dark energy models in which the accelerated expansion of the universe is driven by a field in an anisotropic universe. Constraints on the parameters are obtained by maximum likelihood analysis using observational data from 194 Type Ia supernovae (SNIa) and the most recent joint light-curve analysis (JLA) sample. In particular, we reconstruct the dark energy equation-of-state parameter w(z) and the deceleration parameter q(z). We find that the best-fit dynamical w(z) obtained from the 194-SNIa dataset does not cross the phantom divide line w(z) = -1 and remains above and close to w(z) ≈ -0.92 over the whole redshift range 0 ≤ z ≤ 1.75, showing no evidence for phantom behavior. Applying the anisotropy effect to the ΛCDM model, the joint analysis indicates Ω_σ0 = 0.0163 ± 0.03 with the 194 SNIa, Ω_σ0 = -0.0032 ± 0.032 with the 238-SN SiFTO sample of JLA, and Ω_σ0 = 0.011 ± 0.0117 with the 1048-SN SALT2 sample of Pantheon, at the 1σ confidence level. The analysis shows that including anisotropy leads to better-fit parameters in all models with the JLA SNe datasets. Furthermore, we use two statistical tests, the usual χ²_min/dof and the p-test, to compare the two dark energy models with the ΛCDM model. Finally, we show that the presence of anisotropy in the mentioned models is confirmed by the SNIa dataset.
9.
We propose a penalized likelihood method to fit the linear discriminant analysis model when the predictor is matrix valued. We simultaneously estimate the means and the precision matrix, which we assume has a Kronecker product decomposition. Our penalties encourage pairs of response category mean matrix estimators to have equal entries and also encourage zeros in the precision matrix estimator. To compute our estimators, we use a blockwise coordinate descent algorithm. To update the optimization variables corresponding to response category mean matrices, we use an alternating minimization algorithm that takes advantage of the Kronecker structure of the precision matrix. We show that our method can outperform relevant competitors in classification, even when our modeling assumptions are violated. We analyze three real datasets to demonstrate our method’s applicability. Supplementary materials, including an R package implementing our method, are available online.
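The classification rule implied by a Kronecker-structured precision can be sketched as follows. Here the row precision U, column precision V, and class mean matrices are assumed known, whereas the paper estimates them by penalized likelihood with blockwise coordinate descent:

```python
import numpy as np

def matrix_lda_score(X, M, U, V):
    """Quadratic part of the matrix-normal log-density with Kronecker
    precision U (rows) x V (columns): -1/2 * tr(U (X-M) V (X-M)^T).
    With U, V shared across classes, score differences are linear in X,
    giving a linear discriminant rule."""
    D = X - M
    return -0.5 * np.trace(U @ D @ V @ D.T)

def classify(X, means, U, V):
    """Assign X to the class whose mean matrix gives the largest score
    (equal class priors assumed)."""
    scores = [matrix_lda_score(X, M, U, V) for M in means]
    return int(np.argmax(scores))
```

The Kronecker structure is what makes this cheap: the full vectorized precision would be (rc) x (rc) for an r x c predictor, while U and V are only r x r and c x c, which is also what the paper's alternating mean updates exploit.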
10.
Auxiliary population information is often available in finite population inference problems, and the empirical likelihood (EL) approach has been shown to be flexible and useful for such problems. The present paper concerns EL when interest centers on inference for the mean of the baseline distribution under two-sample density ratio models (DRMs). Although the dual EL is a convenient technical tool, since it has the same maximum point and maximum likelihood value as the DRM-based EL, it cannot conveniently incorporate such auxiliary information into the likelihood and may lose efficiency. By contrast, the classical EL approach of Qin and Lawless [21] does not have this problem and incorporates auxiliary information seamlessly. Based on the EL using auxiliary information and the dual EL method, we construct both point and interval estimates and make a careful comparison. Although the point-estimation efficiency gain of the former is not noticeable, we find that the two approaches can perform differently in interval estimation. In terms of coverage accuracy, the two intervals are comparable for non-skewed or moderately skewed populations, while the EL interval using auxiliary information can be much superior for severely skewed populations.
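The mechanics of classical (Qin and Lawless type) EL weights under a known auxiliary moment can be sketched with a single scalar constraint. The data are hypothetical, and since the auxiliary variable here is independent of x, no real efficiency gain is expected; the sketch only shows how the auxiliary information reshapes the weights:

```python
import numpy as np
from scipy.optimize import brentq

def el_weights(h):
    """EL probability weights p_i = 1 / (n (1 + lam * h_i)) maximizing
    prod p_i subject to sum p_i = 1 and sum p_i h_i = 0, where h_i are
    values of a known mean-zero auxiliary function at the sample points."""
    n = len(h)

    def score(lam):                       # first-order condition in lam
        return np.sum(h / (1.0 + lam * h))

    # lam must keep every 1 + lam*h_i positive; score runs from +inf to
    # -inf over this interval, so a sign-changing bracket exists
    lo = (-1.0 / h.max()) + 1e-8
    hi = (-1.0 / h.min()) - 1e-8
    lam = brentq(score, lo, hi)
    return 1.0 / (n * (1.0 + lam * h))

rng = np.random.default_rng(3)
x = rng.exponential(2.0, size=400)        # variable of interest
aux = rng.normal(size=400)                # hypothetical auxiliary variable
h = aux - 0.0                             # known population mean E[aux] = 0
p = el_weights(h)
mean_el = np.sum(p * x)                   # EL estimate of E[X] using aux info
```

The identity 1/(1 + lam*h) = 1 - lam*h/(1 + lam*h) shows the weights sum to exactly one once the constraint holds, which is why the same weights can be reused to estimate any functional of the baseline distribution.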
Copyright©北京勤云科技发展有限公司  京ICP备09084417号