191.
In applied sciences, generalized linear mixed models have become one of the preferred tools for analyzing a variety of longitudinal and clustered data. Due to software limitations, analyses are often restricted to the setting in which the random-effects terms follow a multivariate normal distribution. However, this assumption may be unrealistic, obscuring important features of among-unit variation. This work describes a widely applicable semiparametric Bayesian approach that relaxes the normality assumption by using a novel mixture-of-multivariate-Polya-trees prior to define a flexible nonparametric model for the random-effects distribution. The nonparametric prior is centered on the commonly used parametric normal family. We allow this parametric family to hold only approximately, thereby providing a robust alternative for modeling. We discuss and implement practical procedures for addressing the computational challenges that arise under this approach, and we illustrate the methodology on real-life examples.

Supplemental materials for this paper are available online.
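The Polya tree construction underlying this approach can be sketched compactly. The following is a minimal illustration of a single draw from the simplest univariate Polya tree prior, centered at the uniform distribution on [0, 1]; the paper's model (a mixture of multivariate Polya trees centered on the normal family) is substantially richer, and the depth and concentration values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def polya_tree_density(depth=8, c=1.0):
    # One draw from the canonical univariate Polya tree prior on [0, 1]:
    # at level j each interval splits in half, the left-branch mass is
    # Beta(c*j**2, c*j**2), and multiplying branch probabilities down to
    # `depth` gives masses for 2**depth equal-width bins.
    mass = np.ones(1)
    for j in range(1, depth + 1):
        left = rng.beta(c * j**2, c * j**2, size=mass.size)
        mass = np.column_stack([mass * left, mass * (1 - left)]).ravel()
    return mass * 2**depth  # convert bin mass to density height

density = polya_tree_density()
print(density.sum() / 2**8)  # total mass, ~1.0 by construction
```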
192.
In this article we propose a modification to the output from Metropolis-within-Gibbs samplers that can lead to substantial reductions in the variance over standard estimates. The idea is simple: at each time step of the algorithm, introduce an extra sample into the estimate that is negatively correlated with the current sample, the rationale being that this provides a two-sample numerical approximation to a Rao–Blackwellized estimate. As the conditional sampling distribution at each step has already been constructed, the generation of the antithetic sample often requires negligible computational effort. Our method is implementable whenever one subvector of the state can be sampled from its full conditional and the corresponding distribution function may be inverted, or the full conditional has a symmetric density. We demonstrate our approach in the context of logistic regression and hierarchical Poisson models. The data and computer code used in this article are available online.
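As a rough illustration of the symmetric-density case, the sketch below runs a toy Gibbs sampler for a bivariate normal and pairs each conditional draw with its reflection about the conditional mean; the target, correlation, and functional being estimated are made up for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9                      # target: standard bivariate normal, corr rho
n_iter, burn = 20_000, 1_000
x1, x2 = 0.0, 0.0
plain, anti = [], []

for i in range(n_iter):
    # Full conditional of x1 is N(rho*x2, 1 - rho^2): symmetric, so the
    # antithetic draw is the reflection about the conditional mean.
    mu, sd = rho * x2, np.sqrt(1 - rho**2)
    x1 = mu + sd * rng.normal()
    x1_anti = 2 * mu - x1      # negatively correlated twin, essentially free
    if i >= burn:
        plain.append(x1**2)
        anti.append(0.5 * (x1**2 + x1_anti**2))  # two-sample Rao-Blackwell approx.
    x2 = rho * x1 + sd * rng.normal()            # ordinary update of x2

print("plain estimate of E[x1^2]:     ", np.mean(plain))  # true value is 1
print("antithetic estimate of E[x1^2]:", np.mean(anti))
```

Because the reflected draw reuses the conditional mean and standard deviation already computed for the Gibbs update, the extra cost per iteration is negligible, which is the point of the method.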
193.
We describe a serial algorithm called feature-inclusion stochastic search, or FINCS, that uses online estimates of edge-inclusion probabilities to guide Bayesian model determination in Gaussian graphical models. FINCS is compared to MCMC, to Metropolis-based search methods, and to the popular lasso; it is found to be superior along a variety of dimensions, leading to better sets of discovered models, greater speed and stability, and reasonable estimates of edge-inclusion probabilities. We illustrate FINCS on an example involving mutual-fund data, where we compare the model-averaged predictive performance of models discovered with FINCS to those discovered by competing methods.
194.
In recent years, parallel processing has become widely available to researchers. It can be applied in an obvious way in the context of Monte Carlo simulation, but techniques for “parallelizing” Markov chain Monte Carlo (MCMC) algorithms are not so obvious, apart from the natural approach of generating multiple chains in parallel. Although generation of parallel chains is generally the easiest approach, in cases where burn-in is a serious problem, it is often desirable to use parallelization to speed up generation of a single chain. This article briefly discusses some existing methods for parallelization of MCMC algorithms, and proposes a new “pre-fetching” algorithm to parallelize generation of a single chain.
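A minimal sketch of the pre-fetching idea (not necessarily the article's exact algorithm): the next h accept/reject decisions of a Metropolis chain form a binary tree, so all candidate proposals can be generated up front and the expensive target evaluations dispatched in parallel. The toy target and tuning values below are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def log_target(x):
    # Stand-in for an expensive posterior evaluation (here: standard normal).
    return -0.5 * x * x

def prefetch_steps(x0, h, rng, step=1.0):
    # Take h Metropolis steps at once: generate the binary tree of all
    # 2**h - 1 candidate proposals first, then evaluate them in parallel.
    state, proposal, paths = {"": x0}, {}, [""]
    for _ in range(h):
        nxt = []
        for p in paths:
            proposal[p] = state[p] + step * rng.normal()
            state[p + "1"] = proposal[p]   # subtree if this proposal is accepted
            state[p + "0"] = state[p]      # subtree if it is rejected
            nxt += [p + "1", p + "0"]
        paths = nxt
    keys = list(proposal)
    with ThreadPoolExecutor() as pool:     # processes or cluster nodes in real use
        lt = dict(zip(keys, pool.map(log_target, (proposal[k] for k in keys))))
    # Walk the tree sequentially, spending only the pre-computed values.
    p, lp_cur, out = "", log_target(x0), []
    for _ in range(h):
        if np.log(rng.uniform()) < lt[p] - lp_cur:
            lp_cur, p = lt[p], p + "1"
        else:
            p += "0"
        out.append(state[p])
    return out

rng = np.random.default_rng(0)
chain, x = [], 0.0
for _ in range(500):
    block = prefetch_steps(x, h=3, rng=rng)
    chain += block
    x = block[-1]
```

The trade-off is that only h of the 2**h - 1 prefetched evaluations end up on the realized path, so the scheme pays off when evaluating the target dominates the cost of everything else.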
195.
A novel method is proposed to compute the Bayes estimate under a logistic Gaussian process prior for density estimation. The method gains speed by drawing samples from the posterior of a finite-dimensional surrogate prior, which is obtained by imputation of the underlying Gaussian process. We establish that the imputation yields quite accurate computation, and simulation studies show that accuracy and high speed can be combined. This fact, along with the known flexibility of logistic Gaussian priors for modeling smoothness and recent results on their large support, makes these priors and the resulting density estimate very attractive.
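The logistic Gaussian process construction itself is easy to sketch: exponentiate a Gaussian process path and normalize it to integrate to one. The snippet below draws one density from such a prior on a grid; the kernel and its hyperparameters are arbitrary choices for illustration, and the paper's contribution (fast posterior computation via imputation) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)

# Squared-exponential covariance for the latent GP (illustrative choices).
ell, tau = 0.1, 1.0
K = tau**2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / ell) ** 2)
K += 1e-8 * np.eye(grid.size)          # jitter for numerical stability

# One draw from the logistic Gaussian process prior on densities:
# exponentiate a GP path, then normalize so it integrates to one.
g = rng.multivariate_normal(np.zeros(grid.size), K)
f = np.exp(g)
f /= np.trapz(f, grid)
print(np.trapz(f, grid))               # ~1 by construction
```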
196.
This article describes posterior simulation methods for mixture models whose mixing distribution has a Normalized Random Measure prior. The methods use slice sampling ideas and introduce no truncation error. The approach can be easily applied to both homogeneous and nonhomogeneous Normalized Random Measures and allows the updating of the parameters of the random measure. The methods are illustrated on data examples using both Dirichlet and Normalized Generalized Gamma process priors. In particular, the methods are shown to be computationally competitive with previously developed samplers for Dirichlet process mixture models. Matlab code to implement these methods is available as supplemental material.
197.
Classical optimal design criteria suffer from two major flaws when applied to nonlinear problems. First, they are based on linearizing the model around a point estimate of the unknown parameter and therefore depend on the uncertain value of that parameter. Second, classical design methods are unavailable in ill-posed estimation situations, where previous data lack the information needed to properly construct the design criteria. Bayesian optimal design can, in principle, solve these problems. However, Bayesian design methods are not widely applied, mainly because standard implementations for efficient and robust routine use are not available. In this article, we give a concrete recipe for implementing Bayesian optimal design, based on the concept of simulation-based design introduced by Müller, Sansó, and De Iorio (2004). We further develop a predictive variance criterion and introduce an importance weighting mechanism for efficient computation of the variances. The simulation-based approach allows one to start the model-based optimization of experiments at an early stage of the parameter estimation process, in situations where the classical design criteria are not available. We demonstrate that the approach can significantly reduce the number of experiments needed to obtain a desired level of accuracy in the parameter estimates. A computer code package that implements the approach in a simple case is provided as supplemental material (available online).
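To make the predictive variance criterion concrete, here is a hypothetical grid-search version for a one-parameter nonlinear model: a single prior sample is re-weighted by the likelihood of each simulated dataset (the importance-weighting idea), and the design minimizing the expected posterior variance is selected. The model, prior, and noise level are invented for the example; the paper's simulation-based algorithm optimizes over designs by MCMC rather than a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 2.0, 2000)   # prior sample, reused for every design

def weights(y, x, sd=0.05):
    # Importance weights: prior draws re-weighted by the likelihood of y
    # under the toy model y = exp(-theta * x) + Gaussian noise.
    w = np.exp(-0.5 * ((y - np.exp(-theta * x)) / sd) ** 2)
    return w / w.sum()

def expected_posterior_variance(x, n_rep=200, sd=0.05):
    # Preposterior criterion: simulate data at design x, compute the
    # importance-weighted posterior variance of theta, and average.
    out = []
    for _ in range(n_rep):
        th0 = rng.choice(theta)                    # "true" parameter
        y = np.exp(-th0 * x) + sd * rng.normal()   # simulated observation
        w = weights(y, x, sd)
        m = np.sum(w * theta)
        out.append(np.sum(w * (theta - m) ** 2))
    return np.mean(out)

designs = np.linspace(0.1, 3.0, 15)
scores = [expected_posterior_variance(x) for x in designs]
print("best single-observation design:", designs[int(np.argmin(scores))])
```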
198.
Piecewise affine inverse problems form a general class of nonlinear inverse problems. In particular, inverse problems obeying certain variational structures, such as Fermat's principle in travel time tomography, are of this type. In a piecewise affine inverse problem, a parameter is to be reconstructed when its mapping through a piecewise affine operator is observed, possibly with errors. A piecewise affine operator is defined by partitioning the parameter space and assigning a specific affine operator to each part. A Bayesian approach with a Gaussian random field prior on the parameter space is used. Both problems with a discrete finite partition and a continuous partition of the parameter space are considered.

The main result is that the posterior distribution is decomposed into a mixture of truncated Gaussian distributions, and the expression for the mixing distribution is partially analytically tractable. The general framework has, to the authors' knowledge, not previously been published, although the result for the finite partition is generally known.

Inverse problems are currently of great interest in many fields. The Bayesian approach is popular and most often highly computer-intensive. The posterior distribution is frequently concentrated close to high-dimensional nonlinear spaces, resulting in slow mixing for generic sampling algorithms. Inverse problems are, however, often highly structured, and to develop efficient sampling algorithms for a problem at hand, that structure must be exploited.

The decomposition of the posterior distribution derived in the current work can be used to develop specialized sampling algorithms, and the article contains examples of such algorithms. The proposed algorithms are also applicable to problems with exact observations, a case in which generic sampling algorithms tend to fail.
199.
Credit risk measurement and management are important and current issues in the modern finance world, from both the theoretical and practical perspectives. There are two major schools of thought for credit risk analysis, namely the structural models based on the asset value model originally proposed by Merton, and the intensity-based reduced form models. One of the popular credit risk models used in practice is the Binomial Expansion Technique (BET) introduced by Moody's. However, its one-period static nature and the independence assumption for credit entities' defaults are two shortcomings for the use of BET in practical situations. Davis and Lo provided elegant ways to ease these two shortcomings with their default-infection and dynamic continuous-time intensity-based approaches. This paper first proposes a discrete-time dynamic extension to the BET in order to incorporate the time-dependent and time-varying behaviour of default probabilities for measuring the risk of a credit risky portfolio. In reality, the 'true' default probabilities are unobservable to credit analysts and traders. Here, the uncertainties of the 'true' default probabilities are incorporated in the context of a dynamic Bayesian paradigm. Numerical studies of the proposed model are provided.
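For reference, the static one-period BET that the paper extends reduces a correlated portfolio to D independent, identical exposures (the diversity score), making the number of defaults binomial. A minimal sketch, with invented numbers:

```python
from math import comb

def bet_default_distribution(D, p):
    # Static one-period BET: the portfolio behaves like D independent,
    # identical exposures, so the number of defaults is Binomial(D, p).
    return [comb(D, k) * p**k * (1 - p)**(D - k) for k in range(D + 1)]

# Illustrative inputs (not from the paper): diversity score 10,
# one-period default probability 2%.
probs = bet_default_distribution(10, 0.02)
expected_defaults = sum(k * pk for k, pk in enumerate(probs))
print(expected_defaults)   # equals D * p = 0.2
```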
200.
This paper applies a dynamic linear model to study China's narrow money demand, using the Bayesian Gibbs sampling method to estimate the model's parameters and variances. We obtain the latent money-demand trend and the money gap, analyze China's money supply, and reach several useful conclusions, which are of positive significance for the central bank's conduct of monetary management and for maintaining stable economic development.
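As a rough sketch of the trend-and-gap decomposition, the snippet below runs a Kalman filter for a local-level dynamic linear model on simulated data and takes the gap as the observed series minus the filtered trend. The paper instead estimates the parameters and variances by Gibbs sampling on the actual money-demand data; the variances and series here are invented.

```python
import numpy as np

def local_level_filter(y, var_obs, var_state, m0=0.0, C0=1e6):
    # Kalman filter for the local-level DLM
    #   y_t = mu_t + v_t,  mu_t = mu_{t-1} + w_t,
    # returning the filtered estimate of the latent trend mu_t.
    m, C, trend = m0, C0, []
    for yt in y:
        R = C + var_state          # predictive variance of mu_t
        Q = R + var_obs            # predictive variance of y_t
        K = R / Q                  # Kalman gain
        m = m + K * (yt - m)       # filtered mean
        C = (1 - K) * R            # filtered variance
        trend.append(m)
    return np.array(trend)

# Simulated stand-in for the money-demand series (illustrative only).
rng = np.random.default_rng(0)
mu = np.cumsum(rng.normal(0.0, 0.1, 120))
y = mu + rng.normal(0.0, 0.5, 120)
gap = y - local_level_filter(y, var_obs=0.25, var_state=0.01)  # "money gap"
```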