Similar Articles
1.
Regression models with a large number of predictors arise in diverse fields of the social and natural sciences. For proper interpretation, we often wish to identify a smaller subset of the variables that carries the strongest information. With such a large pool of candidate predictors, optimizing a variable-selection criterion such as AIC, \(C_{P}\) or BIC over all possible subsets is a computationally cumbersome search in practice. In this paper, we present two efficient optimization algorithms via a Markov chain Monte Carlo (MCMC) approach for finding the globally optimal subset. Simulated examples as well as one real data set show that our proposed MCMC algorithms find better solutions than other popular search methods in terms of minimizing a given criterion.
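A minimal sketch of how such a stochastic subset search can look, assuming a linear model scored by BIC and a flip-one-indicator Metropolis proposal with a temperature parameter; the function names, proposal and temperature are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def bic(X, y, subset):
    """BIC of an OLS fit using the columns in `subset` (plus an intercept)."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

def metropolis_subset_search(X, y, n_iter=5000, temp=1.0, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = rng.integers(0, 2, size=p).astype(bool)        # inclusion indicators
    score = bic(X, y, np.flatnonzero(gamma))
    best_gamma, best_score = gamma.copy(), score
    for _ in range(n_iter):
        j = rng.integers(p)                                 # propose flipping one variable
        cand = gamma.copy(); cand[j] = ~cand[j]
        cand_score = bic(X, y, np.flatnonzero(cand))
        # Metropolis-type acceptance on the criterion: always accept improvements,
        # sometimes accept worse subsets to escape local optima.
        if np.log(rng.random()) < (score - cand_score) / temp:
            gamma, score = cand, cand_score
            if score < best_score:
                best_gamma, best_score = gamma.copy(), score
    return np.flatnonzero(best_gamma), best_score
```

A call such as `metropolis_subset_search(X, y)` returns the best subset visited and its BIC; running several chains from different seeds is a common safeguard against getting stuck.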

2.
In recent years, the Hamiltonian Monte Carlo (HMC) algorithm has been found to work more efficiently than other popular Markov chain Monte Carlo (MCMC) methods (such as random walk Metropolis–Hastings) in generating samples from a high-dimensional probability distribution. HMC has proven more efficient in terms of mixing rates and effective sample size than previous MCMC techniques, but may still not be sufficiently fast for particularly large problems. The use of GPUs promises to push HMC even further, greatly increasing the utility of the algorithm. By expressing the computationally intensive portions of HMC (the evaluations of the probability kernel and its gradient) in terms of linear or element-wise operations, HMC can be made highly amenable to the use of graphics processing units (GPUs). A multinomial regression example demonstrates the promise of GPU-based HMC sampling. By using GPU-based memory objects to perform the entire HMC simulation, most of the latency penalties associated with transferring data from main to GPU memory can be avoided. Thus, the proposed computational framework may appear conceptually very simple, but it has the potential to be applied to a wide class of hierarchical models relying on HMC sampling. Models whose posterior density and corresponding gradients can be reduced to linear or element-wise operations are amenable to significant speed-ups through the use of GPUs. Analyses of datasets that were previously intractable for fully Bayesian approaches due to prohibitively high computational cost are now feasible using the proposed framework.
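A minimal, CPU-only sketch of one HMC step with a vectorised leapfrog integrator. The point of the abstract is that when the log-density and its gradient reduce to linear and element-wise array operations, as in the toy Gaussian target below, the same code maps naturally onto GPU array libraries; that swap (for example to CuPy) is an assumption here, not something this sketch performs.

```python
import numpy as np

def hmc_step(x, logp, grad_logp, eps=0.1, n_leapfrog=20, rng=np.random):
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(x_new)             # initial half step in momentum
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                          # full step in position
        p_new += eps * grad_logp(x_new)               # full step in momentum
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(x_new)             # final half step in momentum
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return (x_new, True) if np.log(rng.uniform()) < log_accept else (x, False)

# Example target: standard multivariate normal, all linear / element-wise operations.
logp = lambda x: -0.5 * x @ x
grad_logp = lambda x: -x
x = np.zeros(10)
for _ in range(1000):
    x, _ = hmc_step(x, logp, grad_logp)
```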

3.
In count data regression there can be several problems that prevent the use of the standard Poisson log-linear model: overdispersion caused by unobserved heterogeneity or correlation, an excess of zeros, non-linear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero-inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to data from a car insurance company illustrate the potential and, to some extent, the limitations of our approach. Copyright © 2006 John Wiley & Sons, Ltd.
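A minimal sketch of one ingredient named in the abstract, the zero-inflated Poisson likelihood, with Poisson mean exp(eta) driven by a linear predictor and a structural-zero probability pi. The semiparametric additive predictors and the MCMC sampler of the paper are not reproduced; the function name and parameterisation are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, eta, logit_pi):
    """Log-likelihood of a zero-inflated Poisson model.
    y: integer counts (array); eta: linear predictor (log mean);
    logit_pi: logit of the probability of a structural zero."""
    mu = np.exp(eta)
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    log_pois = y * eta - mu - gammaln(y + 1)           # Poisson log-pmf
    ll_zero = np.log(pi + (1 - pi) * np.exp(-mu))      # zeros: structural or Poisson
    ll_pos = np.log(1 - pi) + log_pois                 # positive counts: Poisson only
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

y = np.array([0, 0, 3, 1, 0, 7])
print(zip_loglik(y, eta=np.zeros(6), logit_pi=-1.0))
```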

4.
Markov chain Monte Carlo (MCMC) algorithms play an important role in statistical inference problems dealing with intractable probability distributions. Recently, many MCMC algorithms such as Hamiltonian Monte Carlo (HMC) and Riemannian manifold HMC have been proposed to provide distant proposals with high acceptance rates. These algorithms, however, tend to be computationally intensive, which can limit their usefulness, especially for big-data problems, because of repeated evaluations of functions and statistical quantities that depend on the data. This issue arises in many statistical computing problems. In this paper, we propose a novel strategy that exploits smoothness (regularity) in parameter space to improve the computational efficiency of MCMC algorithms. When functions or statistical quantities need to be evaluated at a point in parameter space, interpolation from precomputed or previously computed values is used. More specifically, we focus on HMC algorithms that use geometric information for faster exploration of probability distributions. Our proposed method precomputes the required geometric information on a set of grids before running the sampling algorithm and, at each iteration of HMC, approximates the geometric information at the sampler's current location from the precomputed values at nearby grid points. A sparse grid interpolation method is used for high-dimensional problems. Tests on computational examples illustrate the advantages of our method.
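A minimal sketch of the precompute-then-interpolate idea for a 2-D target: the gradient of the log-density is tabulated on a regular grid once, and the sampler then queries a cheap interpolant instead of the exact gradient at every leapfrog step. The grid bounds, resolution and the use of scipy's RegularGridInterpolator are illustrative choices, not the paper's sparse-grid construction.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def exact_grad_logp(x):                        # stand-in for an expensive gradient
    return -x / np.array([1.0, 4.0])           # gradient of a N(0, diag(1, 4)) log-density

# Precompute the gradient on a regular grid (done once, before sampling).
g1 = np.linspace(-6.0, 6.0, 61)
g2 = np.linspace(-12.0, 12.0, 61)
G1, G2 = np.meshgrid(g1, g2, indexing="ij")
pts = np.stack([G1, G2], axis=-1)                              # (61, 61, 2) grid locations
grad_table = np.apply_along_axis(exact_grad_logp, -1, pts)     # (61, 61, 2) gradient values

# One linear interpolant per gradient component; extrapolate outside the grid.
interps = [RegularGridInterpolator((g1, g2), grad_table[..., k],
                                   bounds_error=False, fill_value=None)
           for k in range(2)]

def cheap_grad_logp(x):
    """Approximate gradient to be used inside the HMC leapfrog integrator."""
    return np.array([f(x[None, :])[0] for f in interps])

x = np.array([1.0, 2.0])
print(exact_grad_logp(x), cheap_grad_logp(x))  # the two should be close
```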

5.
The performance of Markov chain Monte Carlo (MCMC) algorithms such as the Metropolis–Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice of scaling matrix in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms to improve convergence and decrease the computational cost of adaptive MCMC methods in cases when the precision (inverse covariance) matrix of the target density can be well approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two algorithms allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate the performance of the methods in two examples. Supplementary materials for this article are available online.

6.
This article presents a method for generating samples from an unnormalized posterior distribution f(·) using Markov chain Monte Carlo (MCMC) in which the evaluation of f(·) is very difficult or computationally demanding. Commonly, a less computationally demanding, perhaps local, approximation to f(·) is available, say \(f^{*}_{x}(\cdot)\). An algorithm is proposed to generate an MCMC that uses such an approximation to calculate acceptance probabilities at each step of a modified Metropolis–Hastings algorithm. Once a proposal is accepted using the approximation, f(·) is calculated with full precision, ensuring convergence to the desired distribution. We give sufficient conditions for the algorithm to converge to f(·) and give both theoretical and practical justifications for its usage. Typical applications are in inverse problems using physical data models where computing time is dominated by complex model simulation. We outline Bayesian inference and computing for inverse problems. A stylized example is given of recovering resistor values in a network from electrical measurements made at the boundary. Although this inverse problem has appeared in studies of underground reservoirs, it has primarily been chosen for pedagogical value because model simulation has precisely the same computational structure as a finite element method solution of the complete electrode model used in conductivity imaging, or "electrical impedance tomography." This example shows a dramatic decrease in CPU time, compared to a standard Metropolis–Hastings algorithm.
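A minimal sketch of the two-stage ("delayed acceptance") idea: a cheap approximation logp_approx screens proposals first, and the expensive logp is evaluated only for proposals that survive the first stage, with a second-stage correction so the chain still targets the exact density. The function names and the Gaussian random-walk proposal are illustrative assumptions.

```python
import numpy as np

def da_metropolis(logp, logp_approx, x0, n_iter=1000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, lp, lpa = x0, logp(x0), logp_approx(x0)
    samples, n_exact = [], 0
    for _ in range(n_iter):
        y = x + step * rng.standard_normal(x.shape)          # symmetric proposal
        lpa_y = logp_approx(y)
        # Stage 1: accept/reject using only the cheap approximation.
        if np.log(rng.random()) < lpa_y - lpa:
            n_exact += 1
            lp_y = logp(y)                                    # expensive exact evaluation
            # Stage 2: correct with the exact density so the chain targets logp.
            if np.log(rng.random()) < (lp_y - lp) - (lpa_y - lpa):
                x, lp, lpa = y, lp_y, lpa_y
        samples.append(x)
    return np.array(samples), n_exact

# Toy usage: an exact Gaussian log-density and a slightly mis-scaled approximation.
samples, n_exact = da_metropolis(lambda x: -0.5 * np.sum(x ** 2),
                                 lambda x: -0.5 * np.sum(x ** 2) / 1.1,
                                 np.zeros(3), n_iter=2000)
```

The count `n_exact` shows how many expensive evaluations were actually needed, which is where the CPU-time savings reported in the abstract come from.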

7.
Fitting hierarchical Bayesian models to spatially correlated datasets using Markov chain Monte Carlo (MCMC) techniques is computationally expensive. Complicated covariance structures of the underlying spatial processes, together with a high-dimensional parameter space, mean that the number of calculations required grows cubically with the number of spatial locations at each MCMC iteration. This necessitates efficient model parameterizations that hasten convergence and improve the mixing of the associated algorithms. We consider partially centered parameterizations (PCPs), which lie on a continuum between what are known as the centered (CP) and noncentered (NCP) parameterizations. By introducing a weight matrix we remove the conditional posterior correlation between the fixed and the random effects, and hence construct a PCP that achieves immediate convergence for a three-stage model based on multiple Gaussian processes with known covariance parameters. When the covariance parameters are unknown we dynamically update the parameterization within the sampler. The PCP outperforms both the CP and the NCP and leads to a fully automated algorithm, as demonstrated in two simulation examples. The effectiveness of the spatially varying PCP is illustrated with a practical dataset of nitrogen dioxide concentration levels. Supplemental materials consisting of appendices, datasets, and computer code to reproduce the results are available online.

8.
ARMA model selection based on Markov chain Monte Carlo (MCMC)
Criterion functions such as AIC and SIC are commonly used for ARMA model selection. However, when the model order is high, it is infeasible to compute and compare the criterion value of every candidate model. This paper proposes a stochastic model-generation method based on Markov chain Monte Carlo that produces candidate models minimizing the criterion value. Practical applications show that the method works well for ARMA model selection problems with a very large number of candidate models.
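A minimal sketch (not the paper's sampler) of a Metropolis-type random walk over ARMA orders (p, q) scored by AIC. It assumes statsmodels is available for fitting; the maximum orders, neighbourhood proposal and temperature are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def aic_of(y, p, q):
    try:
        return ARIMA(y, order=(p, 0, q)).fit().aic
    except Exception:                                  # non-invertible / failed fits
        return np.inf

def search_arma_orders(y, p_max=8, q_max=8, n_iter=200, temp=2.0, seed=0):
    rng = np.random.default_rng(seed)
    p, q = rng.integers(0, p_max + 1), rng.integers(0, q_max + 1)
    score = aic_of(y, p, q)
    best = (int(p), int(q), score)
    for _ in range(n_iter):
        dp, dq = rng.integers(-1, 2, size=2)           # propose a neighbouring order
        p2, q2 = np.clip(p + dp, 0, p_max), np.clip(q + dq, 0, q_max)
        s2 = aic_of(y, p2, q2)
        if np.log(rng.random()) < (score - s2) / temp: # Metropolis-type acceptance on AIC
            p, q, score = p2, q2, s2
            if score < best[2]:
                best = (int(p), int(q), score)
    return best
```

Only a few hundred of the (p_max+1)(q_max+1) candidate models are ever fitted, which is the point of replacing an exhaustive criterion search with a stochastic one.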

9.
Multicriteria optimization with a multiobjective golden section line search
This work presents an algorithm for multiobjective optimization that is structured as follows: (i) a descent direction is calculated within the cone of descent and feasible directions, and (ii) a multiobjective line search is conducted along that direction, with a new multiobjective golden section segment-partitioning scheme that directly finds line-constrained efficient points that dominate the current one. This multiobjective line search procedure exploits the structure of the line-constrained efficient set, giving a faster compression rate of the search segment than the single-objective golden section line search. The proposed multiobjective optimization algorithm converges to points that satisfy the Kuhn-Tucker first-order necessary conditions for efficiency (the Pareto-critical points). Numerical results on two antenna design problems support the conclusion that the proposed method can robustly solve difficult nonlinear multiobjective problems defined in terms of computationally expensive black-box objective functions.
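For reference, a minimal sketch of the classical single-objective golden section line search that the paper's multiobjective partitioning scheme generalises; the dominance tests and the multiobjective segment-partitioning rule themselves are not reproduced, and the scalar test function is an illustrative assumption.

```python
import numpy as np

def golden_section(phi, a, b, tol=1e-6):
    """Minimise a unimodal scalar function phi on the interval [a, b]."""
    inv_phi = (np.sqrt(5) - 1) / 2                 # reciprocal golden ratio, about 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = phi(c), phi(d)
    while b - a > tol:
        if fc < fd:                                # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = phi(c)
        else:                                      # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = phi(d)
    return 0.5 * (a + b)

t_star = golden_section(lambda t: (t - 0.3) ** 2, 0.0, 1.0)   # approximately 0.3
```

Each iteration shrinks the bracketing segment by the factor 0.618 while reusing one previous function evaluation; the abstract's claim is that the multiobjective variant compresses the segment even faster.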

10.
A new algorithm to predict partial sheet cavity behavior on hydrofoils is proposed. The proposed algorithm models unsteady partial cavitation using the Boundary Element Method (BEM). In the proposed method the spatial iterative scheme, which conventional algorithms require to obtain the cavity length at each time step, is removed by means of a new approach that determines the instantaneous cavity length directly. The performance of the new algorithm for various unsteady cavitating flows with different reduced frequencies, cavitation numbers, hydrofoil geometries and inflow conditions is investigated. Comparison of the results obtained with the proposed method against those of conventional algorithms indicates that the present algorithm works well with sufficient accuracy. Moreover, it is shown that the proposed method is computationally more efficient than the conventional one for unsteady sheet cavitation analysis on hydrofoils.

11.
An efficient method for solving linear goal programming problems
This note proposes a solution algorithm for linear goal programming problems. The proposed method simplifies traditional solution methods and is computationally efficient.
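The note's own algorithm is not described in the abstract; as background, a minimal sketch of how a weighted linear goal programme is commonly reduced to an ordinary LP, with nonnegative under- and over-achievement variables for each goal. The example goals, weights and the use of scipy.optimize.linprog are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],          # goal 1:  x1 +  x2  ~ 10
              [2.0, 1.0]])         # goal 2: 2x1 +  x2  ~ 12
g = np.array([10.0, 12.0])
w_minus = np.array([1.0, 2.0])     # penalties for under-achievement d^-
w_plus = np.array([1.0, 1.0])      # penalties for over-achievement  d^+

m, n = A.shape
# Decision vector: [x (n), d_minus (m), d_plus (m)]; minimise the weighted deviations.
c = np.concatenate([np.zeros(n), w_minus, w_plus])
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])           # A x + d^- - d^+ = g
bounds = [(None, None)] * n + [(0, None)] * (2 * m)    # x free, deviations nonnegative

res = linprog(c, A_eq=A_eq, b_eq=g, bounds=bounds, method="highs")
x_opt, deviations = res.x[:n], res.x[n:]
```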

12.
We develop an efficient computational algorithm that produces efficient Markov chain Monte Carlo (MCMC) transition matrices. The first level of efficiency is measured in terms of the number of operations needed to produce the resulting matrix. The second level of efficiency is evaluated in terms of the asymptotic variance of the resulting MCMC estimators. Results are first given for transition matrices in finite state spaces and then extended to transition kernels in more general settings.

13.
Markov chain Monte Carlo (MCMC) algorithms offer a very general approach for sampling from arbitrary distributions. However, designing and tuning MCMC algorithms for each new distribution can be challenging and time consuming. It is particularly difficult to create an efficient sampler when there is strong dependence among the variables in a multivariate distribution. We describe a two-pronged approach for constructing efficient, automated MCMC algorithms: (1) we propose the “factor slice sampler,” a generalization of the univariate slice sampler where we treat the selection of a coordinate basis (factors) as an additional tuning parameter, and (2) we develop an approach for automatically selecting tuning parameters to construct an efficient factor slice sampler. In addition to automating the factor slice sampler, our tuning approach also applies to the standard univariate slice samplers. We demonstrate the efficiency and general applicability of our automated MCMC algorithm with a number of illustrative examples. This article has online supplementary materials.
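A minimal sketch of the univariate "stepping-out and shrinkage" slice sampler that the factor slice sampler builds on; the coordinate-basis (factor) selection and automated tuning described in the abstract are not shown, and the interval width w and step limit m are illustrative tuning parameters.

```python
import numpy as np

def slice_sample_1d(logp, x, w=1.0, m=50, seed=None):
    """One slice-sampling update of a scalar x under log-density logp."""
    rng = np.random.default_rng(seed)
    log_y = logp(x) + np.log(rng.random())          # slice level under the density
    # Stepping out: grow an interval [L, R] until it brackets the slice.
    L = x - w * rng.random()
    R = L + w
    j = int(m * rng.random()); k = m - 1 - j
    while j > 0 and logp(L) > log_y:
        L -= w; j -= 1
    while k > 0 and logp(R) > log_y:
        R += w; k -= 1
    # Shrinkage: sample uniformly from [L, R], shrinking towards x on rejection.
    while True:
        x_new = L + (R - L) * rng.random()
        if logp(x_new) > log_y:
            return x_new
        if x_new < x:
            L = x_new
        else:
            R = x_new
```

Applying this update coordinate by coordinate gives the standard univariate slice sampler; the paper's contribution is to choose better coordinate directions (factors) and tune w automatically.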

14.
The periodic Riccati equation that results from periodic state-space models plays an important role in many fields of mathematics, science, and engineering. In most applications, it is essential that the solution of the Riccati equation be obtained in the shortest possible time. A computationally effective doubling algorithm that solves the discrete periodic Riccati equation is proposed in this paper. Moreover, the memory requirements and calculation burden of a sequential implementation of the proposed algorithm are established and compared with those of sequential implementations of classical algorithms. The basic conclusion of this comparison is that the calculation time required to solve the periodic Riccati equation using classical algorithms is in general much greater than that required by the proposed algorithm. Finally, the numerical behavior of the proposed algorithm is tested through simulation examples. It is established that the proposed algorithm is fast, computationally efficient, and numerically stable, and possesses very good parallelism efficiency.

15.
A cell-exclusion method for finding all solutions of global optimization problems
张讲社  王卫  陈白丽 《计算数学》1995,17(4):443-455
…where \(X^0\) is a box whose edges are parallel to the coordinate axes (such boxes are called cells below), and the length of the longest edge of \(X^0\) is called its mesh width, denoted \(W(X^0)\). Throughout, \(X \subseteq X^0\) denotes an arbitrary cell, \(f(X)=\{f(x): x\in X\}\) denotes the range of \(f\) on \(X\), the global minimum of \(f\) on \(X^0\) is denoted \(f^*\), and the set of all global minimizers of \(f\) on \(X^0\) is denoted \(M\). It is always assumed that \(M\) consists of finitely many isolated points, all contained in the interior of \(X^0\). Hence, for every \(x^* \in M\), we have …
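A minimal one-dimensional sketch of the general branch-and-exclude idea behind such methods: the initial cell is repeatedly bisected, and a cell is discarded once a lower bound on f over the cell exceeds the best value found so far, so that every surviving small cell may contain a global minimizer. The Lipschitz-constant lower bound used here, and the example function, are illustrative assumptions; the paper's own exclusion test based on range enclosures f(X) is not reproduced.

```python
import numpy as np

def cell_exclusion_1d(f, a, b, lip, tol=1e-4):
    """Return the small surviving cells that may contain global minimisers of f on [a, b].
    `lip` must be a valid Lipschitz constant of f on [a, b]."""
    cells, survivors = [(a, b)], []
    f_best = min(f(a), f(b))
    while cells:
        lo, hi = cells.pop()
        mid, rad = 0.5 * (lo + hi), 0.5 * (hi - lo)
        f_mid = f(mid)
        f_best = min(f_best, f_mid)
        if f_mid - lip * rad > f_best:     # cell cannot contain a global minimiser: exclude
            continue
        if rad < tol:                      # cell is small enough: keep it as a candidate
            survivors.append((lo, hi))
        else:                              # otherwise bisect and examine both halves
            cells += [(lo, mid), (mid, hi)]
    return survivors, f_best

cells, f_best = cell_exclusion_1d(lambda x: np.sin(3 * x) + 0.1 * x ** 2,
                                  -4.0, 4.0, lip=4.0)
```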

16.
《Mathematical Modelling》1986,7(2-3):469-482
A finite element algorithm is described which implements the Galerkin approximation to the Navier-Stokes equations and incorporates five predominant features. Although none of these five features is unique to this algorithm, their orchestration, as described in this paper, results in an algorithm which is not only easy to implement but also stable, accurate, and robust, as well as computationally efficient. The zero-stress natural boundary condition is implemented, which permits calculation of the outflow velocity distribution. A nine-node, Lagrangian, isoparametric, quadrilateral element is used to represent the velocity, while the pressure uses a four-node, Lagrangian, superparametric element coincident with the velocity element. The easily implemented, computationally efficient frontal solution technique is used to assemble the element coefficient matrices, impose the boundary conditions, and solve the resulting linear system of equations. An implicit backward Euler time integration rule provides a very stable solution method for time-dependent problems. A Picard scheme with a relatively large radius of convergence is used for iteration of the nonlinear equations at each time step. Results are given for the calculation of two-dimensional, steady-state and time-dependent convection-dominated flows of viscous incompressible fluids.

17.
A finite element algorithm is described which implements the Galerkin approximation to the Navier-Stokes equations and incorporates five predominant features. Although none of these five features is unique to this algorithm, their orchestration, as described in this paper, results in an algorithm which is not only easy to implement but also stable, accurate, and robust, as well as computationally efficient. The zero-stress natural boundary condition is implemented, which permits calculation of the outflow velocity distribution. A nine-node, Lagrangian, isoparametric, quadrilateral element is used to represent the velocity, while the pressure uses a four-node, Lagrangian, superparametric element coincident with the velocity element. The easily implemented, computationally efficient frontal solution technique is used to assemble the element coefficient matrices, impose the boundary conditions, and solve the resulting linear system of equations. An implicit backward Euler time integration rule provides a very stable solution method for time-dependent problems. A Picard scheme with a relatively large radius of convergence is used for iteration of the non-linear equations at each time step. Results are given from the calculation of two-dimensional, steady-state and time-dependent convection-dominated flows of viscous incompressible fluids.

18.
We consider bicriteria scheduling on identical parallel machines in a nontraditional context: jobs belong to two disjoint sets, and each set has a different criterion to be minimized. The jobs are all available at time zero and have to be scheduled (non-preemptively) on m parallel machines. The goal is to generate the set of all non-dominated solutions, so the decision maker can evaluate the tradeoffs and choose the schedule to be implemented. We consider the case where, for one of the two sets, the criterion to be minimized is makespan, while for the other the total completion time needs to be minimized. Given that the problem is NP-hard, we propose an iterative SPT–LPT–SPT heuristic and a bicriteria genetic algorithm for the problem. Both approaches are designed to exploit the problem structure and generate a set of non-dominated solutions. In the genetic algorithm we use a special encoding scheme and also a unique strategy – based on the properties of a non-dominated solution – to ensure that all parts of the non-dominated front are explored. The heuristic and the genetic algorithm are compared with a time-indexed integer programming formulation on small and large instances. Results indicate that both the heuristic and the genetic algorithm provide high solution quality and are computationally efficient. The heuristics proposed also have the potential to be generalized to the problem of interfering job sets involving other bicriteria pairs.
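A minimal sketch of the two list-scheduling building blocks named in the heuristic: SPT order (good for total completion time) and LPT order (good for makespan), each assigned greedily to the currently least-loaded of m machines. The interleaving of the SPT-LPT-SPT heuristic itself and the handling of the two interfering job sets are not reproduced; the job data are illustrative.

```python
import heapq

def list_schedule(jobs, m, longest_first=False):
    """Greedy list scheduling on m identical machines.
    Returns (makespan, total completion time) of the resulting schedule."""
    order = sorted(jobs, reverse=longest_first)        # SPT if False, LPT if True
    machines = [0.0] * m                               # current load of each machine
    heapq.heapify(machines)
    total_completion = 0.0
    for p in order:
        load = heapq.heappop(machines)                 # assign to the least-loaded machine
        load += p
        total_completion += load                       # the job completes at the new load
        heapq.heappush(machines, load)
    return max(machines), total_completion

jobs = [3, 7, 2, 8, 5, 1, 4]
print("SPT:", list_schedule(jobs, m=2))                       # favours total completion time
print("LPT:", list_schedule(jobs, m=2, longest_first=True))   # favours makespan
```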

19.

In this paper, we consider the problem of computing different types of finite-time survival probabilities for a Markov-modulated risk model and a Markov-modulated risk model with reinsurance, both with varying premium rates. We use the multinomial approximation scheme to derive an efficient recursive algorithm to compute finite-time survival probabilities and finite-time draw-down survival probabilities. Numerical results show that, compared with MCMC approximation, discretization approximation and diffusion approximation methods, the proposed scheme produces accurate results in all the considered cases with better computational efficiency.


20.
This article deals with a computationally efficient serial distributed arithmetic algorithm producing a device-efficient field programmable gate array (FPGA) implementation. The proposed algorithm takes half the computation time of conventional symmetric algorithms. The algorithm is analyzed for a direct-conversion transmitter of an ionospheric radar. Matching and buffering criteria are used to reduce the arithmetic workload. The algorithm can be extended to applications with similar characteristics, particularly for System On Chip (SOC) techniques. © 2005 Wiley Periodicals, Inc. Complexity 11: 24–29, 2005
