Similar Articles
20 similar articles found.
1.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.
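The integration problem the abstract describes can be made concrete with a toy example. Below is a minimal sketch of a marginal likelihood for a random-intercept logistic model computed by numerical quadrature, using Gauss-Hermite nodes as a simple stand-in for the spherical–radial rule; the one-covariate model and data layout are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

def marginal_loglik(beta, sigma, y, x, n_nodes=20):
    """Marginal log-likelihood of a random-intercept logistic model.

    Integrates the joint density of data and random effect b ~ N(0, sigma^2)
    over b with Gauss-Hermite quadrature -- a simple stand-in for the
    spherical-radial rule discussed in the abstract.
    """
    nodes, weights = hermegauss(n_nodes)        # integrates against exp(-b^2/2)
    weights = weights / np.sqrt(2 * np.pi)      # normalize to the N(0,1) density
    ll = 0.0
    for yi, xi in zip(y, x):                    # one cluster per (yi, xi)
        eta = beta * xi[None, :] + sigma * nodes[:, None]   # nodes x obs
        p = 1.0 / (1.0 + np.exp(-eta))
        joint = np.prod(np.where(yi == 1, p, 1 - p), axis=1)
        ll += np.log(joint @ weights)           # quadrature over the random effect
    return ll

# toy data: 3 clusters of 4 binary observations each (hypothetical)
rng = np.random.default_rng(1)
x = [rng.standard_normal(4) for _ in range(3)]
y = [(rng.random(4) < 0.5).astype(int) for _ in range(3)]
print(marginal_loglik(beta=0.5, sigma=1.0, y=y, x=x))
```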

2.
Variational approximations have the potential to scale Bayesian computations to large datasets and highly parameterized models. Gaussian approximations are popular, but can be computationally burdensome when an unrestricted covariance matrix is employed and the dimension of the model parameter is high. To circumvent this problem, we consider a factor covariance structure as a parsimonious representation. General stochastic gradient ascent methods are described for efficient implementation, with gradient estimates obtained using the so-called “reparameterization trick.” The end result is a flexible and efficient approach to high-dimensional Gaussian variational approximation. We illustrate using robust P-spline regression and logistic regression models. For the latter, we consider eight real datasets, including datasets with many more covariates than observations, and another with mixed effects. In all cases, our variational method provides fast and accurate estimates. Supplementary material for this article is available online.
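As a rough illustration of the factor structure and the reparameterization trick, here is a minimal sketch: a draw from a variational Gaussian with covariance B Bᵀ + diag(d²) written as a deterministic function of standard normal noise, so gradients with respect to (mu, B, d) can pass through the sample. Dimensions and values below are hypothetical.

```python
import numpy as np

def sample_theta(mu, B, d, eps_z, eps_d):
    """Draw theta from the factor-covariance Gaussian q(theta).

    Covariance is B B^T + diag(d^2) with B a (p x k) loading matrix, so a
    draw costs O(pk) rather than O(p^2). Because the draw is a deterministic
    function of (eps_z, eps_d), gradients w.r.t. (mu, B, d) flow through
    the sample -- the 'reparameterization trick'.
    """
    return mu + B @ eps_z + d * eps_d

rng = np.random.default_rng(0)
p, k = 1000, 5                                # hypothetical dimensions
mu = np.zeros(p)
B = 0.1 * rng.standard_normal((p, k))         # factor loadings
d = np.ones(p)                                # idiosyncratic scales
theta = sample_theta(mu, B, d, rng.standard_normal(k), rng.standard_normal(p))
```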

3.
An open queueing network model in heavy traffic is developed. Such models are mathematical models of computer networks in heavy traffic. Laws of the iterated logarithm for the virtual waiting time of a customer in open queueing networks and homogeneous computer networks are proved.
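The abstract does not reproduce the statement of the theorems, but laws of the iterated logarithm of this kind typically take the following shape for a virtual waiting time process V(t), with drift v̄ and variance coefficient σ determined by the network primitives (the notation here is assumed, not the paper's):

\[
\limsup_{t\to\infty} \frac{V(t) - \bar{v}\,t}{\sigma\sqrt{2t\log\log t}} = 1,
\qquad
\liminf_{t\to\infty} \frac{V(t) - \bar{v}\,t}{\sigma\sqrt{2t\log\log t}} = -1
\quad \text{a.s.}
\]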

4.
In this paper, we develop a mathematical model to find the parameters of a prosthetic damper that yield a trajectory for the prosthetic knee joint similar to that of the sound limb. Two popular search methods, namely grid search and optimization, are used to determine the damper's parameters. The proposed model is validated through simulation using data from able-bodied individuals. We utilize the ground reaction force of the sound limb to determine the values of the damper parameters of a prosthetic knee joint for maximum symmetry. Symmetry between knee moments is also improved in the stance period with the optimized parameters. Finally, optimization-based search is observed to be more computationally efficient than the grid-based search method. The present study provides a virtual means of setting a prosthetic damper's parameters based on user needs. In the future, the present method can be used to adjust the damping of a microprocessor prosthetic knee joint for a symmetrical gait pattern.
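A toy sketch of the two search strategies the paper compares, on a hypothetical damper model and objective; the dynamics, the parameters (c, k), and the reference trajectory below are stand-ins, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

def asymmetry(params, t, knee_ref):
    """Hypothetical objective: mismatch between a simulated prosthetic knee
    trajectory and the sound-limb reference, for damper parameters (c, k)
    in a toy damped-oscillation model."""
    c, k = params
    knee_sim = knee_ref * np.exp(-c * t) * np.cos(k * t)  # stand-in dynamics
    return np.mean((knee_sim - knee_ref) ** 2)

t = np.linspace(0, 1, 101)
knee_ref = np.sin(2 * np.pi * t)

# grid search: exhaustive, 2500 objective evaluations
grid = [(c, k) for c in np.linspace(0.1, 5, 50) for k in np.linspace(0.1, 5, 50)]
best_grid = min(grid, key=lambda p: asymmetry(p, t, knee_ref))

# optimizer: far fewer evaluations, as the abstract reports
best_opt = minimize(asymmetry, x0=[1.0, 1.0], args=(t, knee_ref),
                    method="Nelder-Mead").x
print(best_grid, best_opt)
```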

5.
It is important to consider the decision making unit (DMU)'s or decision maker's preferences over the potential adjustments of various inputs and outputs when data envelopment analysis (DEA) is employed. On the basis of the so-called Russell measure, this paper develops weighted non-radial CCR models by specifying a proper set of ‘preference weights’ that reflect the relative degree of desirability of adjustments to current input or output levels. These input or output adjustments can be either less than or greater than one; that is, the approach enables certain inputs actually to be increased, or certain outputs actually to be decreased. It is shown that the preference structure prescribes fixed weights (virtual multiplier bounds) or regions that invalidate some virtual multipliers, and hence it generates preferred (efficient) input and output targets for each DMU. In addition to providing the preferred target, the approach gives a scalar efficiency score for each DMU to secure comparability. It is also shown how specific cases of our approach handle non-controllable factors in DEA and measure allocative and technical efficiency. Finally, the methodology is applied to the industrial performance of 14 open coastal cities and four special economic zones in China in 1991. As applied here, the DEA/preference structure model refines the original DEA model's results and eliminates apparently efficient DMUs.
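One hedged reading of such a weighted non-radial model, in standard DEA notation (a generic form, not necessarily the paper's exact formulation): for DMU o with inputs x_{io}, outputs y_{ro}, and preference weights w_i^-, w_r^+,

\[
\min_{\theta,\varphi,\lambda}\;
\sum_{i=1}^{m} w_i^{-}\,\theta_i \;-\; \sum_{r=1}^{s} w_r^{+}\,\varphi_r
\quad\text{s.t.}\quad
\sum_{j} \lambda_j x_{ij} \le \theta_i\, x_{io},\quad
\sum_{j} \lambda_j y_{rj} \ge \varphi_r\, y_{ro},\quad
\lambda_j \ge 0,
\]

where the adjustment factors θ_i and φ_r are not forced to lie below or above one, which is what allows certain inputs to increase or certain outputs to decrease, as the abstract describes.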

6.
A model of an open queueing network in heavy traffic is developed. Such models are mathematical models of computer networks in heavy traffic. A limit theorem is presented for the virtual waiting time of a customer in heavy traffic in open queueing networks. Finally, we present an application of the theorem: a reliability model from computer network practice.

7.
One of the challenges with emulating the response of a multivariate function to its inputs is the quantity of data that must be assimilated, which is the product of the number of model evaluations and the number of outputs. This article shows how even large calculations can be made tractable. It is already appreciated that gains can be made when the emulator residual covariance function is treated as separable in the model inputs and model outputs. Here, an additional simplification of the structure of the regressors in the emulator mean function allows very substantial further gains. The result is that it is now possible to rapidly emulate, on a desktop computer, models with hundreds of evaluations and hundreds of outputs. This is demonstrated through a count of the floating-point operations required, and in an illustration. Even larger sets of outputs are possible if they have additional structure, for example spatio-temporal.
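The separability gain can be seen in a few lines. A minimal sketch of the key linear-algebra step, assuming a residual covariance that is the Kronecker product of an input covariance C_in (n x n) and an output covariance C_out (q x q); the function names and layout are illustrative.

```python
import numpy as np

def kron_solve(C_in, C_out, R):
    """Solve (C_in kron C_out) vec(V) = vec(R), with R a (q x n) matrix,
    via the identity (B kron A) vec(X) = vec(A X B^T). Cost is roughly
    O(n^3 + q^3 + n q (n + q)) instead of the O(n^3 q^3) of forming and
    solving the full Kronecker system."""
    # V = C_out^{-1} R C_in^{-T}; two small solves replace one huge one
    return np.linalg.solve(C_out, np.linalg.solve(C_in, R.T).T)
```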

8.
A mathematical model of portfolio optimization is usually quantified with mean-risk models, which offer a lucid form of two criteria with possible trade-off analysis. In the classical Markowitz model the risk is measured by the variance, resulting in a quadratic programming model. Following Sharpe’s work on linear approximation to the mean-variance model, many attempts have been made to linearize the portfolio optimization problem. Several alternative risk measures have been introduced that are computationally attractive because, for discrete random variables, they lead to linear programming (LP) problems. Typical LP-computable risk measures, like the mean absolute deviation (MAD) or Gini’s mean absolute difference (GMD), are symmetric with respect to below-mean and over-mean performance. This paper shows how these measures can be further combined to extend their modeling capabilities with respect to below-mean downside risk aversion. The relations of below-mean downside stochastic dominance are formally introduced and the corresponding techniques to enhance risk measures are derived. The resulting mean-risk models generate solutions that are efficient with respect to second-degree stochastic dominance, while at the same time preserving the simplicity and LP computability of the original models. The models are tested on real-life historical data. The research was supported by grant PBZ-KBN-016/P03/99 from the State Committee for Scientific Research.
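For concreteness, the standard LP formulation of the MAD model for discrete scenarios (a textbook form, with notation assumed): with scenario returns r_{jt}, probabilities p_t, portfolio weights x_j, mean return \(\mu(x) = \sum_t p_t \sum_j r_{jt} x_j\), and required return \(\mu_0\), the absolute deviation is linearized with auxiliary variables d_t:

\[
\min_{x,\,d}\ \sum_{t=1}^{T} p_t\, d_t
\quad\text{s.t.}\quad
d_t \ge \mu(x) - \sum_j r_{jt} x_j,\quad
d_t \ge \sum_j r_{jt} x_j - \mu(x),\quad
\sum_j x_j = 1,\quad \mu(x) \ge \mu_0,\quad x_j,\, d_t \ge 0 .
\]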

9.
The recent accelerated growth in computing power has popularized experimentation with dynamic computer models in various physical and engineering applications. Despite the extensive statistical research in computer experiments, most of the focus has been on theoretical and algorithmic innovations for the design and analysis of computer models with scalar responses. In this article, we propose a computationally efficient statistical emulator for a large-scale dynamic computer simulator (i.e., a simulator that gives time-series outputs). The main idea is to first find a good local neighborhood for every input location, and then emulate the simulator output via a singular value decomposition (SVD) based Gaussian process (GP) model. We develop a new design criterion for sequentially finding this local neighborhood set of training points. Several test functions and a real-life application are used to demonstrate the performance of the proposed approach over a naive method that chooses the local neighborhood set using the Euclidean distance among design points. The supplementary material, which contains proofs of the theoretical results, detailed algorithms, additional simulation results, and R code, is available online.
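The SVD step of such an emulator is easy to sketch. Below is a minimal, hedged version of the output compression: the simulator's time-series outputs are reduced to a few basis coefficients, to each of which a GP would then be fitted. The GP fitting and the local-neighborhood design criterion, which are the paper's actual contributions, are not shown.

```python
import numpy as np

def svd_emulator_basis(Y, frac=0.99):
    """Compress time-series outputs Y (n_runs x n_times) with an SVD:
    keep the leading right singular vectors explaining `frac` of the
    variation. An independent GP would then be fitted to each column of
    the coefficient matrix (not shown)."""
    U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), frac) + 1
    basis = Vt[:k]               # k temporal basis functions
    coeffs = U[:, :k] * s[:k]    # (n_runs x k) coefficients to emulate
    return basis, coeffs
```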

10.
The demand for computational efficiency and reduced cost presents a major challenge for the development of more applicable and practical approaches in the field of uncertainty model updating. In this article, a computationally efficient approach to stochastic model updating, combining the Stochastic Response Surface Method (SRSM) with Monte Carlo inverse error propagation, is developed on the basis of a surrogate model. This stochastic surrogate model is determined using the Hermite polynomial chaos expansion and a regression-based efficient collocation method. The paper addresses the critical issues of the effectiveness and efficiency of the presented method. The method is efficient because a large number of computationally demanding full model simulations are no longer essential; instead, the updating of parameter mean values and variances is carried out on the stochastic surrogate model, expressed as an explicit mathematical expression. A three degree-of-freedom numerical model and a double-hat structure formed by a number of bolted joints are employed to illustrate the implementation of the method. Using the Monte Carlo-based method as the benchmark, the effectiveness and efficiency of the proposed method are verified.
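A minimal sketch of the regression-based collocation step, in one dimension with probabilists' Hermite polynomials; the paper's setting is multivariate, so this stand-in only shows the mechanics.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(xi, y, degree=3):
    """Regression-based collocation for a 1-D Hermite polynomial chaos
    surrogate: least-squares fit of He_0..He_degree(xi) to model outputs
    y, so later updating runs on this cheap explicit expression instead
    of the full model."""
    Psi = hermevander(xi, degree)                 # collocation design matrix
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coef

def eval_pce(coef, xi):
    """Evaluate the fitted surrogate at new standard-normal inputs xi."""
    return hermevander(xi, len(coef) - 1) @ coef
```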

11.
Chemotaxis refers to mechanisms by which cellular motion occurs in response to an external stimulus, usually a chemical one. The chemotaxis phenomenon plays an important role in bacteria/cell aggregation and pattern formation mechanisms, as well as in tumor growth. A common property of all chemotaxis systems is their ability to model a concentration phenomenon that mathematically results in rapid growth of solutions in small neighborhoods of concentration points/curves. The solutions may blow up or may exhibit a very singular, spiky behavior. There is consequently a need for accurate and computationally efficient numerical methods for chemotaxis models. In this work, we develop and study novel high-order hybrid finite-volume finite-difference schemes for the Patlak-Keller-Segel chemotaxis system and related models. We demonstrate the high accuracy, stability, and computational efficiency of the proposed schemes in a number of numerical examples.
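The abstract does not write the system out; the classical (minimal) form of the Patlak-Keller-Segel model, with cell density u, chemoattractant concentration c, and chemotactic sensitivity χ > 0, is

\[
u_t = \nabla\cdot\big(\nabla u - \chi\, u\, \nabla c\big), \qquad
c_t = \Delta c - c + u .
\]

The advective term χ u ∇c is what drives the concentration and possible blow-up behavior mentioned above.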

12.
A model is developed mathematically to represent sound propagation in a three-dimensional ocean. The complete development is based on characteristics of the physical environment, mathematical theory, and computational accuracy. While the two-dimensional underwater acoustic wave propagation problem is not yet solved completely for range-dependent environments, three-dimensional environmental effects, such as fronts and eddies, often cannot be neglected. To predict underwater sound propagation, one usually deals with the solution of the Helmholtz (reduced wave) equation. This elliptic equation, along with a set of boundary conditions including a wall condition at the maximum range, forms a well-posed, pure boundary-value problem. An existing approach to solving this three-dimensional range-dependent problem economically is by means of a two-dimensional parabolic partial differential equation. This parabolic approximation approach, within the limits of its mathematical and acoustical approximations, offers efficient solutions to a class of long-range propagation problems. The parabolic wave equation is much easier to solve than the elliptic equation; one major saving is the removal of the wall boundary condition at the maximum range. The application of the two-dimensional parabolic wave equation to a number of realistic problems has been successful.

We discuss the extension of the parabolic equation approach to three-dimensional problems. This paper begins with general considerations of the three-dimensional elliptic wave equation and shows how to transform this equation into parabolic equations that are easier to solve. The development focuses on wide-angle three-dimensional underwater acoustic propagation and accommodates, as a special case, previous developments by other authors. Throughout the development, the physical properties, mathematical validity, and computational accuracy are the primary factors considered. We describe how parabolic wave equations are derived and how wide-angle propagation is taken into consideration. Then, a discussion of the limitations and the advantages of the parabolic equation approximation is highlighted. These provide the background for the mathematical formulation of three-dimensional underwater acoustic wave propagation models.

Modelling the mathematical solution to three-dimensional underwater acoustic wave propagation involves difficulties both in describing the theoretical acoustics and in performing the large-scale computations. We have used the mathematical and physical properties of the problem to simplify it considerably. These simplifications allow us to introduce a three-dimensional mathematical model for underwater acoustic propagation predictions. Our wide-angle three-dimensional parabolic equation model is theoretically justifiable and computationally accurate, and offers a variety of capabilities for handling a class of long-range propagation problems in acoustical environments with three-dimensional variations.
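For reference, the standard narrow-angle parabolic equation in two dimensions, obtained from the Helmholtz equation \(\Delta p + k_0^2 n^2 p = 0\) by writing \(p \approx \psi(r,z)\, e^{i k_0 r}/\sqrt{r}\) and neglecting backscatter; this is a textbook form, shown only as background for the wide-angle three-dimensional extensions the paper develops:

\[
\frac{\partial \psi}{\partial r}
= \frac{i}{2 k_0}\,\frac{\partial^2 \psi}{\partial z^2}
+ \frac{i k_0}{2}\,\big(n^2 - 1\big)\,\psi ,
\]

a one-way evolution equation in range r, which is why the wall boundary condition at the maximum range disappears.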

13.
Data envelopment analysis is a mathematical programming technique for identifying efficient frontiers for peer decision making units with multiple inputs and multiple outputs. These performance factors (inputs and outputs) are classified into two groups: desirable and undesirable. Obviously, undesirable factors in the production process should be reduced to improve performance. In the current paper, we present a data envelopment analysis (DEA) model that can be used to improve relative performance by increasing undesirable inputs and decreasing undesirable outputs.

14.
A heuristic method for PERT analysis is presented, designed in such a way as to keep computational and informational needs to a minimum and to be easy to implement. The procedure is computationally efficient and quite accurate, even though the only information required on activity time distributions is in the form of simple discrete approximations. A numerical study is described which provides guidelines on the number of values to use and the computer time required.
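To see what the discrete approximations buy, here is a brute-force reference implementation: each activity time is a small discrete distribution, and the exact project-duration distribution comes from enumerating all joint outcomes. The paper's heuristic avoids exactly this exponential enumeration; the network below is hypothetical.

```python
import numpy as np
from itertools import product

def discrete_pert(paths, supports, probs):
    """Toy discrete-approximation PERT: enumerate every combination of
    activity durations, take the longest path for each, and accumulate
    probabilities into the project-duration distribution. Exponential in
    the number of activities, so usable only for tiny networks."""
    dist = {}
    for combo in product(*(range(len(s)) for s in supports)):
        w = np.prod([probs[a][i] for a, i in enumerate(combo)])
        dur = max(sum(supports[a][combo[a]] for a in path) for path in paths)
        dist[dur] = dist.get(dur, 0.0) + w
    return sorted(dist.items())

# hypothetical 3-activity network: activities 0,1 in series, 2 in parallel
paths = [[0, 1], [2]]
supports = [[2, 4], [1, 3], [5, 6]]           # possible durations
probs = [[0.5, 0.5], [0.6, 0.4], [0.5, 0.5]]  # their probabilities
print(discrete_pert(paths, supports, probs))
```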

15.
An underlying assumption in DEA is that the weights, coupled with the ratio scales of the inputs and outputs, imply linear value functions. In this paper, we present a general modeling approach to deal with outputs and/or inputs that are characterized by nonlinear value functions. To this end, we represent the nonlinear virtual outputs and/or inputs in a piecewise linear fashion. We give the CCR model that can assess the efficiency of units in the presence of nonlinear virtual inputs and outputs. Further, we extend the models with the assurance region approach to deal with concave output and convex input value functions. In effect, our formulations amount to a transformation of the original data set into an augmented data set where standard DEA models can then be applied, thus remaining within the grounds of standard DEA methodology. To underline the usefulness of this development, we revisit previous work by one of the authors on the assessment of the human development index in the light of DEA.
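A minimal sketch of the piecewise-linear representation, assuming a concave output value function with known breakpoints (names and layout are illustrative, not the paper's formulation): each output value is split into per-segment amounts, and the augmented segment columns can then enter a standard DEA model with per-segment weights.

```python
import numpy as np

def piecewise_segments(y, breakpoints):
    """Split nonnegative output values y into per-segment amounts, so a
    concave value function v(y) becomes linear in the segment variables:
    v(y) = sum_k slope_k * seg_k. The returned columns form the augmented
    data set on which a standard DEA model can run."""
    edges = np.concatenate(([0.0], np.asarray(breakpoints, float)))
    widths = np.diff(np.concatenate((edges, [np.inf])))   # segment capacities
    seg = np.clip(y[:, None] - edges[None, :], 0.0, widths[None, :])
    return seg  # shape (n_units, n_segments); rows sum back to y

print(piecewise_segments(np.array([0.5, 1.5, 4.0]), breakpoints=[1.0, 3.0]))
```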

16.
Computational Geometry, 2000, 15(1-3): 41-49
Polygonal models are widely used in CAD and computer graphics. Since a polygonal surface usually has no intrinsic parameterization, it is very difficult to map textures onto it with low distortion. In this paper, we present an efficient texture mapping algorithm for polygonal models. For each region to be mapped, the algorithm first constructs a B-spline patch of similar shape that surrounds the model. The mapped region is then projected onto the constructed B-spline patch to obtain a parameterization. By interactively controlling the B-spline patch, the user can conveniently decorate the surface of the model to meet his or her requirements. Both local and global texture mapping are discussed. The experimental results demonstrate that the algorithm has great potential for applications in computer animation and virtual reality systems.
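A crude sketch of the projection step, assuming the patch is given as three scipy RectBivariateSpline surfaces mapping (u, v) to x, y, z. The paper's construction of the surrounding patch and its exact projection are not shown; nearest-sample search stands in for the true projection.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def project_to_patch(points, patch_x, patch_y, patch_z, n=64):
    """Assign (u, v) texture coordinates to mesh vertices by sampling the
    surrounding spline patch on an n x n grid and taking the parameters
    of the nearest sample -- a crude stand-in for exact projection."""
    u = v = np.linspace(0.0, 1.0, n)
    grid = np.stack([np.asarray(s(u, v)) for s in (patch_x, patch_y, patch_z)],
                    axis=-1)                      # (n, n, 3) patch samples
    flat = grid.reshape(-1, 3)
    d2 = ((points[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    idx = np.argmin(d2, axis=1)                   # nearest sample per vertex
    return np.stack([u[idx // n], v[idx % n]], axis=1)  # per-vertex (u, v)
```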

17.
Datasets in the fields of climate and environment are often very large and irregularly spaced. To model such datasets, the widely used Gaussian process models in spatial statistics face tremendous challenges due to the prohibitive computational burden. Various approximation methods have been introduced to reduce the computational cost. However, most of them rely on unrealistic assumptions for the underlying process and retaining statistical efficiency remains an issue. We develop a new approximation scheme for maximum likelihood estimation. We show how the composite likelihood method can be adapted to provide different types of hierarchical low rank approximations that are both computationally and statistically efficient. The improvement of the proposed method is explored theoretically; the performance is investigated by numerical and simulation studies; and the practicality is illustrated through applying our methods to two million measurements of soil moisture in the area of the Mississippi River basin, which facilitates a better understanding of the climate variability. Supplementary material for this article is available online.
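As a simplified stand-in for the paper's hierarchical low-rank constructions, here is the basic block composite likelihood idea for a zero-mean Gaussian process; the covariance function interface and the blocking scheme are assumptions for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def block_composite_loglik(y, coords, cov_fn, blocks):
    """Block composite log-likelihood: sum exact Gaussian log-densities
    over small blocks of sites instead of evaluating one huge joint
    density, trading some statistical efficiency for an O(sum b^3)
    rather than O(n^3) cost."""
    ll = 0.0
    for idx in blocks:
        # pairwise covariance matrix within the block
        C = cov_fn(coords[idx][:, None, :], coords[idx][None, :, :])
        ll += multivariate_normal(mean=np.zeros(len(idx)), cov=C).logpdf(y[idx])
    return ll

# example covariance: isotropic exponential with unit range/variance
exp_cov = lambda a, b: np.exp(-np.linalg.norm(a - b, axis=-1))
```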

18.
The valuation of American options is an optimal stopping time problem that typically leads to a free boundary problem. We introduce here the randomization of the exercisability of the option. This method considerably simplifies the problem by transforming the free boundary problem into an evolution equation. This evolution equation can be transformed in a way that decomposes the value of the randomized option into a European option and the present value of continuously paid benefits. This yields a new binomial approximation for American options. We prove that the method is accurate, and numerical results illustrate that it is computationally efficient.
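For orientation, a standard CRR binomial pricer for an American put, i.e. the classical baseline that the paper's randomization-based binomial approximation is designed to improve on; this is not the paper's method.

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, n=500):
    """Standard Cox-Ross-Rubinstein binomial pricer for an American put:
    backward induction with an early-exercise check at every node."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)               # payoffs at maturity
    for _ in range(n):
        S = S[:-1] * d                       # stock prices one level back
        V = np.maximum(K - S, disc * (p * V[:-1] + (1 - p) * V[1:]))
    return V[0]

print(american_put_binomial(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```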

19.
In count data regression there can be several problems that prevent the use of the standard Poisson log-linear model: overdispersion caused by unobserved heterogeneity or correlation, an excess of zeros, nonlinear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero-inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to data from a car insurance portfolio illustrate the potential and, to some extent, the limitations of our approach.
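One of the building blocks mentioned, the zero-inflated Poisson likelihood, is easy to state. A minimal sketch follows; in the paper, mu and pi would be driven by structured additive predictors and estimated by MCMC, none of which is shown here.

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, mu, pi):
    """Log-likelihood of a zero-inflated Poisson model: with probability
    pi an observation is a structural zero, otherwise Poisson(mu).
    mu and pi may be arrays matching y (per-observation predictors)."""
    pois = np.exp(-mu) * mu**y / np.exp(gammaln(y + 1))   # Poisson pmf
    lik = np.where(y == 0, pi + (1 - pi) * pois, (1 - pi) * pois)
    return np.log(lik).sum()

y = np.array([0, 0, 3, 1, 0, 7])
print(zip_loglik(y, mu=2.0, pi=0.3))
```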

20.
The Dirichlet process and its extension, the Pitman–Yor process, are stochastic processes that take probability distributions as a parameter. These processes can be stacked up to form a hierarchical nonparametric Bayesian model. In this article, we present efficient methods for the use of these processes in this hierarchical context, and apply them to latent variable models for text analytics. In particular, we propose a general framework for designing these Bayesian models, which are called topic models in the computer science community. We then propose a specific nonparametric Bayesian topic model for modelling text from social media. We focus on tweets (posts on Twitter) in this article due to their ease of access. We find that our nonparametric model performs better than existing parametric models in both goodness of fit and real-world applications.
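The Pitman–Yor process is often described through its Chinese-restaurant seating scheme; here is a minimal sampler sketch of that scheme (flat, not the hierarchical stacked version the paper uses for topic models).

```python
import numpy as np

def pitman_yor_crp(n, alpha=1.0, d=0.5, seed=0):
    """Sample a partition of n items from the Pitman-Yor process via its
    Chinese-restaurant seating scheme: item i joins existing table k with
    probability proportional to (n_k - d), or opens a new table with
    probability proportional to (alpha + d * K). Requires 0 <= d < 1 and
    alpha > -d; d = 0 recovers the Dirichlet process."""
    rng = np.random.default_rng(seed)
    tables = []                     # customer counts per table
    seats = []
    for _ in range(n):
        w = np.array([c - d for c in tables] + [alpha + d * len(tables)])
        k = rng.choice(len(w), p=w / w.sum())
        if k == len(tables):
            tables.append(1)        # new table
        else:
            tables[k] += 1
        seats.append(k)
    return seats

print(pitman_yor_crp(20))
```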
