981.
Abstract

Maximum pseudo-likelihood estimation has hitherto been viewed as a practical but flawed alternative to maximum likelihood estimation: necessary because the maximum likelihood estimator is too hard to compute, but flawed because of its inefficiency when the spatial interactions are strong. We demonstrate that a single Newton-Raphson step starting from the maximum pseudo-likelihood estimator produces an estimator that is close to the maximum likelihood estimator in terms of its actual value, attained likelihood, and efficiency, even in the presence of strong interactions. This hybrid technique greatly increases the practical applicability of pseudo-likelihood-based estimation. Additionally, in the case of spatial point processes, we propose a proper maximum pseudo-likelihood estimator that differs from the conventional one and clearly outperforms it when the spatial interactions are strong.
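A minimal sketch of the hybrid step, assuming score and Hessian functions of the full log-likelihood are available (all names here are illustrative; for the models treated in the article these quantities would themselves require Monte Carlo approximation):

```python
import numpy as np

def one_step_newton(theta_mple, score, hessian):
    """One Newton-Raphson step on the full log-likelihood,
    started from the maximum pseudo-likelihood estimate."""
    g = score(theta_mple)           # gradient of the log-likelihood
    H = hessian(theta_mple)         # Hessian of the log-likelihood
    return theta_mple - np.linalg.solve(H, g)

# Toy illustration: Gaussian log-likelihood in the mean, crude start.
x = np.random.default_rng(0).normal(1.5, 1.0, size=200)
score = lambda m: np.array([np.sum(x - m)])
hessian = lambda m: np.array([[-float(len(x))]])
print(one_step_newton(np.array([0.0]), score, hessian))  # ~ sample mean
```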
982.
Abstract

We consider Markov mixture models for multiple longitudinal binary sequences. Prior uncertainty in the mixing distribution is characterized by a Dirichlet process centered on a matrix beta measure. We use this setting to evaluate and compare the performance of three competing algorithms that arise more generally in Dirichlet process mixture calculations: sequential imputations, Gibbs sampling, and a predictive recursion, for which an extension of the sequential calculations is introduced. This facilitates the estimation of quantities related to the clustering structure that are not available in the original formulation. A numerical comparison is carried out in three examples. Our findings suggest that the sequential imputations method is most useful for relatively small problems, and that the predictive recursion can be an efficient preliminary tool for more reliable, but computationally intensive, Gibbs sampling implementations.
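A minimal sketch of the predictive recursion in its basic density-estimation form, with a Gaussian kernel, a fixed grid for the mixing density, and weights w_i = 1/(i + 1) (all illustrative choices; the clustering extension described above is not shown):

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, grid, bandwidth=1.0):
    """One pass through the data; returns an estimate of the mixing
    density f(theta) on `grid` for the kernel N(theta, bandwidth^2)."""
    f = np.full(len(grid), 1.0 / (grid[-1] - grid[0]))   # uniform start
    for i, xi in enumerate(x):
        w = 1.0 / (i + 2)                    # w_i = 1/(i+1), i = 1, 2, ...
        k = norm.pdf(xi, loc=grid, scale=bandwidth)
        m = np.trapz(k * f, grid)            # predictive density of xi
        f = (1 - w) * f + w * k * f / m      # recursion update
    return f

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
f_hat = predictive_recursion(rng.permutation(data), np.linspace(-8, 8, 400))
```

The estimate depends on the ordering of the data, which is why the recursion is usually run over one or more random permutations.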
983.
The article is concerned with the use of Markov chain Monte Carlo methods for posterior sampling in Bayesian nonparametric mixture models. In particular, we consider the problem of slice sampling mixture models for a large class of mixing measures generalizing the celebrated Dirichlet process. Such a class of measures, known in the literature as σ-stable Poisson-Kingman models, includes as special cases most of the discrete priors currently known in Bayesian nonparametrics, for example, the two-parameter Poisson-Dirichlet process and the normalized generalized gamma process. The proposed approach is illustrated on some simulated data examples. This article has online supplementary material.
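For concreteness, a minimal sketch of stick-breaking simulation of the weights of the two-parameter Poisson-Dirichlet (Pitman-Yor) process, one member of the class of mixing measures considered here (the truncation level is an illustrative assumption):

```python
import numpy as np

def pitman_yor_weights(sigma, theta, n_atoms, rng):
    """Stick-breaking weights of PY(sigma, theta):
    V_j ~ Beta(1 - sigma, theta + j*sigma), w_j = V_j * prod_{l<j}(1 - V_l)."""
    j = np.arange(1, n_atoms + 1)
    v = rng.beta(1.0 - sigma, theta + j * sigma)
    return v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])

rng = np.random.default_rng(2)
w = pitman_yor_weights(sigma=0.5, theta=1.0, n_atoms=1000, rng=rng)
print(w[:5], w.sum())   # weights decay; the sum approaches 1 as n_atoms grows
```

Setting sigma = 0 recovers the Dirichlet process with concentration theta.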
984.
This article describes a multistage Markov chain Monte Carlo (MSMCMC) procedure for estimating the count |T|, where T denotes the set of all a × b contingency tables with specified row and column sums and m total entries. On each stage s = 1, …, r, Hastings–Metropolis (HM) sampling generates states with equilibrium distribution π_s(x) ∝ exp(−β_s H(x)), with inverse-temperature schedule β_1 = 0 < β_2 < ⋯ < β_r < β_{r+1} = ∞; nonnegative penalty function H, with global minima H(x) = 0 only for x ∈ T; and superset T′ ⊇ T, which facilitates sampling by relaxing column sums while maintaining row sums. Two kernels are employed for nominating states. For small β_s, one admits moderate-to-large penalty changes. For large β_s, the other allows only small changes. Neither kernel admits local minima, thus avoiding an impediment to convergence. Preliminary sampling determines when to switch from one kernel to the other to minimize relative error. Cycling through stages in the order r to 1, rather than 1 to r, speeds convergence for large β_s. A comparison of estimates for examples whose dimensions range from 15 to 24, with exact counts based on Barvinok's algorithm and estimates based on sequential importance sampling (SIS), favors these alternatives. However, the comparison strongly favors MSMCMC for an example with 64 dimensions. It estimated the count with 3.3354 × 10⁻³ relative error, whereas the exact counting method was unable to produce a result in more than 182 CPU (computational) days of execution, and SIS would have required at least 42 times as much CPU time to generate an estimate with the same relative error. This latter comparison confirms the known limitations of exact counting methods and of SIS for larger-dimensional problems and suggests that MSMCMC may be a suitable alternative. Proofs not given in the article appear in the Appendix in the online supplemental materials.
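A minimal sketch of one such stage, with an illustrative penalty (total column-sum discrepancy) and a single symmetric proposal that moves one unit within a row; the article's two kernels and the stage-switching rule are not reproduced here:

```python
import numpy as np

def penalty(x, col_targets):
    # H(x): total deviation of column sums from their targets;
    # H(x) == 0 exactly on the target set of tables
    return np.abs(x.sum(axis=0) - col_targets).sum()

def hm_stage(x, col_targets, beta, n_steps, rng):
    """Metropolis sampling at inverse temperature beta; row sums are
    preserved by construction, column sums only through the penalty."""
    h = penalty(x, col_targets)
    for _ in range(n_steps):
        i = rng.integers(x.shape[0])
        j, k = rng.choice(x.shape[1], size=2, replace=False)
        if x[i, j] == 0:
            continue                         # invalid move counts as a reject
        y = x.copy()
        y[i, j] -= 1; y[i, k] += 1           # keeps row sum i fixed
        h_new = penalty(y, col_targets)
        if h_new <= h or rng.random() < np.exp(-beta * (h_new - h)):
            x, h = y, h_new
    return x

# start from any nonnegative table with the correct row sums
```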
985.
Implementations of the Monte Carlo EM Algorithm
The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm where the expectation in the E-step is computed numerically through Monte Carlo simulations. The most flexible and generally applicable approach to obtaining a Monte Carlo sample in each iteration of an MCEM algorithm is through Markov chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis–Hastings samplers. Although MCMC estimation presents a tractable solution to problems where the E-step is not available in closed form, two issues arise when implementing this MCEM routine: (1) how do we minimize the computational cost in obtaining an MCMC sample? and (2) how do we choose the Monte Carlo sample size? We address the first question through an application of importance sampling whereby samples drawn during previous EM iterations are recycled rather than running an MCMC sampler at each MCEM iteration. The second question is addressed through an application of regenerative simulation. We obtain approximately independent and identically distributed samples by subsampling the generated MCMC sample during different renewal periods. Standard central limit theorems may thus be used to gauge Monte Carlo error. In particular, we apply an automated rule for increasing the Monte Carlo sample size when the Monte Carlo error overwhelms the EM estimate at any given iteration. We illustrate our MCEM algorithm through analyses of two datasets fit by generalized linear mixed models. As a part of these applications, we demonstrate the improvement in computational cost and efficiency of our routine over alternative MCEM strategies.
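A minimal sketch of the recycling idea, assuming the conditional density of the latent data can be evaluated up to normalizing constants (which cancel under self-normalized weights); a sample drawn under a previous parameter value is importance-reweighted instead of regenerated. The function names are hypothetical:

```python
import numpy as np

def recycled_e_step(z_old, logf, theta_old, theta_new, log_complete):
    """Approximate Q(theta_new) = E[complete-data log-likelihood | y, theta_new]
    by importance-reweighting an MCMC sample z_old drawn under theta_old.

    logf(z, theta):        log density of latent z given data and theta
    log_complete(z, theta): complete-data log-likelihood at theta
    """
    logw = logf(z_old, theta_new) - logf(z_old, theta_old)
    w = np.exp(logw - logw.max())
    w /= w.sum()                       # self-normalized importance weights
    return np.sum(w * log_complete(z_old, theta_new))
```

In practice the weights degenerate once theta_new drifts far from theta_old, at which point a fresh MCMC sample is drawn.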
986.
This article proposes a method for approximating integrated likelihoods in finite mixture models. We formulate the model in terms of the unobserved group memberships, z, and make them the variables of integration. The integral is then evaluated using importance sampling over the z. We propose an adaptive importance sampling function which is itself a mixture, with two types of component distributions, one concentrated and one diffuse. The more concentrated type of component serves the usual purpose of an importance sampling function, sampling mostly group assignments of high posterior probability. The less concentrated type of component allows for the importance sampling function to explore the space in a controlled way to find other, unvisited assignments with high posterior probability. Components are added adaptively, one at a time, to cover areas of high posterior probability not well covered by the current importance sampling function. The method is called incremental mixture importance sampling (IMIS).

IMIS is easy to implement and to monitor for convergence. It scales easily for higher dimensional mixture distributions when a conjugate prior is specified for the mixture parameters. The simulated values on which the estimate is based are independent, which allows for straightforward estimation of standard errors. The self-monitoring aspects of the method make it easier to adjust tuning parameters in the course of estimation than standard Markov chain Monte Carlo algorithms. With only small modifications to the code, one can use the method for a wide variety of mixture distributions of different dimensions. The method performed well in simulations and in mixture problems in astronomy and medical research.
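A minimal sketch of the incremental loop in generic form, assuming an equal-weight Gaussian mixture proposal, a diffuse starting component, and new components centered at the current highest-weight draw (all illustrative simplifications of the method described above):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def imis(log_post, d, n0=5000, n_add=1000, n_comps=10, scale=0.5, rng=None):
    """Incremental mixture importance sampling sketch for a d-dim target."""
    rng = rng or np.random.default_rng()
    comps = [mvn(mean=np.zeros(d), cov=25.0 * np.eye(d))]   # diffuse start
    xs = comps[0].rvs(size=n0, random_state=rng).reshape(n0, d)
    for _ in range(n_comps):
        q = np.mean([c.pdf(xs) for c in comps], axis=0)     # mixture proposal
        logw = np.array([log_post(x) for x in xs]) - np.log(np.maximum(q, 1e-300))
        center = xs[np.argmax(logw)]            # high mass, poorly covered
        comps.append(mvn(mean=center, cov=scale * np.eye(d)))
        new = comps[-1].rvs(size=n_add, random_state=rng).reshape(n_add, d)
        xs = np.vstack([xs, new])
    q = np.mean([c.pdf(xs) for c in comps], axis=0)
    logw = np.array([log_post(x) for x in xs]) - np.log(np.maximum(q, 1e-300))
    w = np.exp(logw - logw.max()); w /= w.sum()
    return xs, w                                # weighted posterior sample
```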
987.
Markov chain Monte Carlo (MCMC) methods have been used in many fields (physics, chemistry, biology, and computer science) for simulation, inference, and optimization. In many applications, Markov chains are simulated for sampling from target probabilities π(X) defined on graphs G. The graph vertices represent elements of the system, the edges represent spatial relationships, while X is a vector of variables on the vertices which often take discrete values called labels or colors. Designing efficient Markov chains is a challenging task when the variables are strongly coupled. Because of this, methods such as the single-site Gibbs sampler often experience suboptimal performance. A well-celebrated algorithm, the Swendsen–Wang (SW) method, can address the coupling problem. It clusters the vertices as connected components after turning off some edges probabilistically, and changes the color of one cluster as a whole. It is known to mix rapidly under certain conditions. Unfortunately, the SW method has limited applicability and slows down in the presence of "external fields," for example, likelihoods in Bayesian inference. In this article, we present a general cluster algorithm that extends the SW algorithm to general Bayesian inference on graphs. We focus on image analysis problems where the graph sizes are on the order of 10³–10⁶ with small connectivity. The edge probabilities for clustering are computed using discriminative probabilities from data. We design versions of the algorithm to work on multigrid and multilevel graphs, and present applications to two typical problems in image analysis, namely image segmentation and motion analysis. In our experiments, the algorithm is at least two orders of magnitude faster (in CPU time) than the single-site Gibbs sampler.
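For reference, a minimal sketch of the classical Swendsen–Wang sweep for a q-state Potts model, which the article generalizes; the generalized algorithm would replace the constant bond probability with data-driven discriminative probabilities and add an acceptance step for the external field:

```python
import numpy as np

def sw_sweep(labels, beta, q, rng):
    """One Swendsen-Wang sweep on an LxL q-state Potts model: open bonds
    between equal-label neighbors w.p. 1 - exp(-beta), then assign each
    bond-cluster a fresh uniform label (no external field)."""
    L = labels.shape[0]
    parent = np.arange(L * L)                 # union-find over pixels
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a
    p = 1.0 - np.exp(-beta)
    for i in range(L):
        for j in range(L):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < L and nj < L and labels[i, j] == labels[ni, nj]:
                    if rng.random() < p:      # open the bond
                        ra, rb = find(i * L + j), find(ni * L + nj)
                        if ra != rb:
                            parent[rb] = ra
    roots = np.array([find(a) for a in range(L * L)])
    new_label = {r: rng.integers(q) for r in np.unique(roots)}
    return np.array([new_label[r] for r in roots]).reshape(L, L)

rng = np.random.default_rng(3)
state = sw_sweep(rng.integers(3, size=(32, 32)), beta=1.0, q=3, rng=rng)
```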
988.
This article proposes a four-pronged approach to efficient Bayesian estimation and prediction for complex Bayesian hierarchical Gaussian models for spatial and spatiotemporal data. The method involves reparameterizing the covariance structure of the model, reformulating the means structure, marginalizing the joint posterior distribution, and applying a simplex-based slice sampling algorithm. The approach permits fusion of point-source data and areal data measured at different resolutions and accommodates nonspatial correlation and variance heterogeneity as well as spatial and/or temporal correlation. The method produces Markov chain Monte Carlo samplers with low autocorrelation in the output, so that fewer iterations are needed for Bayesian inference than would be the case with other sampling algorithms. Supplemental materials are available online.
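The simplex-based slice sampler here is tailored to the marginalized posterior; as a generic point of reference, a minimal sketch of a standard univariate slice sampler with stepping out and shrinkage (Neal 2003), the family of samplers such schemes build on. It assumes a light-tailed log-density so the step-out loops terminate:

```python
import numpy as np

def slice_sample(logf, x0, w=1.0, n=1000, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = rng or np.random.default_rng()
    xs, x = [], x0
    for _ in range(n):
        logy = logf(x) + np.log(rng.random())   # auxiliary slice level
        u = rng.random()
        lo, hi = x - w * u, x + w * (1.0 - u)   # randomly placed interval
        while logf(lo) > logy: lo -= w          # step out to the left
        while logf(hi) > logy: hi += w          # step out to the right
        while True:                             # sample, shrinking on reject
            x_new = rng.uniform(lo, hi)
            if logf(x_new) > logy:
                x = x_new
                break
            if x_new < x: lo = x_new
            else:         hi = x_new
        xs.append(x)
    return np.array(xs)

# e.g. draws = slice_sample(lambda t: -0.5 * t * t, x0=0.0, n=2000)
```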
989.
We describe a method for generating independent samples from univariate density functions using adaptive rejection sampling without the log-concavity requirement. The method makes use of the fact that many functions can be expressed as a sum of concave and convex functions. Using a concave-convex decomposition, we bound the log-density by separately bounding the concave and convex parts using piecewise linear functions. The upper bound can then be used as the proposal distribution in rejection sampling. We demonstrate the applicability of the concave-convex approach on a number of standard distributions and describe an application to the efficient construction of sequential Monte Carlo proposal distributions for inference over genealogical trees. Computer code for the proposed algorithms is available online.
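A minimal sketch of the envelope construction, assuming the log-density is supplied as a concave part plus a convex part on a bounded interval: tangents bound the concave part from above and chords bound the convex part (function names are illustrative, and the full method would also sample from the resulting piecewise-exponential proposal):

```python
import numpy as np

def concave_convex_envelope(h_cave, dh_cave, h_vex, knots):
    """Piecewise-linear upper bound on h(x) = h_cave(x) + h_vex(x).
    On each segment: the tangent at the midpoint bounds the concave part,
    and the chord over the segment bounds the convex part."""
    segments = []
    for a, b in zip(knots[:-1], knots[1:]):
        m = 0.5 * (a + b)
        s1 = dh_cave(m)                            # tangent slope
        i1 = h_cave(m) - s1 * m                    # tangent intercept
        s2 = (h_vex(b) - h_vex(a)) / (b - a)       # chord slope
        i2 = h_vex(a) - s2 * a                     # chord intercept
        segments.append((a, b, s1 + s2, i1 + i2))  # (left, right, slope, icpt)
    return segments

# e.g. log N(0,1) split as concave part -x^2/2 plus convex part 0:
segs = concave_convex_envelope(lambda x: -0.5 * x * x, lambda x: -x,
                               lambda x: 0.0, np.linspace(-4, 4, 17))
```

Exponentiating each linear piece yields a piecewise-exponential proposal for the rejection step, and segments can be refined adaptively where rejections occur.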
990.
In this work we derive sampling theorems associated with infinite Sturm-Liouville difference problems. Among them, we obtain the analogue, for difference problems, of the result obtained by Zayed for the continuous case on the half-line [0, +∞). We also obtain a sampling theorem when the operator associated with the problem has a Hilbert-Schmidt resolvent operator. In particular, sampling theorems associated with Green's functions are included.
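For orientation, the classical Whittaker–Shannon–Kotelnikov theorem is the prototype that such sampling theorems generalize: a function band-limited to [−σ, σ] is reconstructed from its uniformly spaced samples,

```latex
f(t) \;=\; \sum_{n=-\infty}^{\infty}
  f\!\left(\frac{n\pi}{\sigma}\right)\,
  \frac{\sin(\sigma t - n\pi)}{\sigma t - n\pi},
\qquad t \in \mathbb{R},
```

whereas the theorems above replace the band-limiting assumption with expansions attached to Sturm-Liouville difference problems.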