Similar Literature
20 similar documents found.
1.
This paper is devoted to the study of a binary homogeneous mixture of an isotropic micropolar linear elastic solid with an incompressible micropolar viscous fluid. Assuming that the internal energy of the solid and the dissipation energy are positive definite forms, some uniqueness and continuous data dependence results are presented. Also, some estimates describing the time behaviour of the solution are established, provided the above two energies are positive definite forms. These estimates are used to prove that, for the linearized equations, in the absence of body loads and for null boundary conditions, the solution is asymptotically stable. A uniqueness result under mild assumptions on the constitutive constants is then given using the Lagrange-Brun identities method. These mathematical results show that the micropolar solid-fluid mixture model is in agreement with physical expectations.

2.
We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow "moments" that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples: specification of a multivariate prior distribution in a constrained-parameter family, and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation; it combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen; it uses computational ideas and importance-sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood; and it has some similarities to the maximum likelihood methods of Wei and Tanner.
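To make the idea concrete, the following is a minimal one-parameter sketch in the spirit of this abstract: a moment equation is solved by Newton-Raphson, with both the simulated moment and its derivative estimated from simulation draws using the score identity d/da E_a[T(X)] = Cov_a(T(X), d/da log f(X; a)). The Gamma example, sample sizes, and tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

# "Observed" data: Gamma with shape 3 and rate 1; we pretend the shape is unknown.
data = rng.gamma(shape=3.0, scale=1.0, size=2000)
target = np.mean(np.log(data))              # observed moment: mean of log X

def mc_moment_and_derivative(a, n_sim=20000):
    """Monte Carlo estimates of E_a[log X] and d/da E_a[log X] for a Gamma(a, 1).

    The derivative uses the identity d/da E_a[T(X)] = Cov_a(T(X), score(X)),
    where the score is d/da log f(X; a) = log X - digamma(a)."""
    sims = rng.gamma(shape=a, scale=1.0, size=n_sim)
    t = np.log(sims)
    score = t - digamma(a)
    return t.mean(), np.cov(t, score)[0, 1]

a = 1.0                                     # starting value for the shape parameter
for _ in range(25):                         # Newton-Raphson on the moment equation
    moment, deriv = mc_moment_and_derivative(a)
    step = (moment - target) / deriv
    a -= step
    if abs(step) < 1e-4:
        break

print("estimated shape:", a)                # should be close to 3
```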

3.
We study the free energy fluctuations for a mixture of directed polymers, which was first introduced by Borodin et al. (2015) to investigate the limiting distribution of a stationary Kardar-Parisi-Zhang (KPZ) equation. The main results consist of both the law of large numbers and the asymptotic fluctuation of the free energy as the model size tends to infinity. In particular, we find the explicit values (which may depend on model parameters) of the normalizing constants in the fluctuation. This shows that such a mixture model is in the KPZ universality class.

4.
In this article, we introduce a likelihood-based estimation method for the stochastic volatility in mean (SVM) model with scale mixtures of normal (SMN) distributions. Our estimation method is based on the fact that the powerful hidden Markov model (HMM) machinery can be applied in order to evaluate an arbitrarily accurate approximation of the likelihood of an SVM model with SMN distributions. Likelihood-based estimation of the parameters of stochastic volatility models in general, and SVM models with SMN distributions in particular, is usually regarded as challenging because the likelihood is a high-dimensional multiple integral. However, the HMM approximation, which is very easy to implement, makes numerical maximization of the likelihood feasible and leads to simple formulae for forecast distributions, for computing appropriately defined residuals, and for decoding, that is, estimating the volatility of the process.
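The HMM approximation works by discretizing the continuous volatility process into a fine grid of states and then evaluating the likelihood of the resulting HMM exactly with the forward algorithm. Below is a minimal sketch of that idea for a basic Gaussian SV model; the SVM-with-SMN case adds a volatility-in-mean term and heavier-tailed errors, and the grid size, range, and parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def sv_hmm_loglik(y, mu, phi, sigma_eta, m=100, b=4.0):
    """Approximate log-likelihood of a basic SV model
        h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,   y_t | h_t ~ N(0, exp(h_t)),
    obtained by discretizing h_t into m intervals covering mu +/- b stationary
    standard deviations and running the scaled HMM forward algorithm."""
    sd = sigma_eta / np.sqrt(1.0 - phi**2)              # stationary sd of h_t
    edges = np.linspace(mu - b*sd, mu + b*sd, m + 1)    # interval boundaries
    mids = 0.5*(edges[:-1] + edges[1:])                 # interval midpoints = HMM states

    # Transition matrix: P(h_t lands in interval j | h_{t-1} = mids[i])
    cm = mu + phi*(mids - mu)                           # conditional means, one per state
    Gamma = norm.cdf(edges[None, 1:], cm[:, None], sigma_eta) \
          - norm.cdf(edges[None, :-1], cm[:, None], sigma_eta)
    Gamma /= Gamma.sum(axis=1, keepdims=True)

    # Stationary distribution of h_t as the initial state distribution
    delta = norm.cdf(edges[1:], mu, sd) - norm.cdf(edges[:-1], mu, sd)
    delta /= delta.sum()

    alpha, loglik = delta, 0.0
    for t, yt in enumerate(y):
        p_obs = norm.pdf(yt, 0.0, np.exp(mids/2.0))     # observation densities per state
        alpha = (alpha @ Gamma if t > 0 else alpha) * p_obs
        c = alpha.sum()                                  # scale to avoid underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Illustrative call on simulated returns (parameter values are assumptions)
rng = np.random.default_rng(0)
h, returns = 0.0, []
for _ in range(500):
    h = 0.95*h + 0.2*rng.normal()
    returns.append(np.exp(h/2.0)*rng.normal())
print(sv_hmm_loglik(np.array(returns), mu=0.0, phi=0.95, sigma_eta=0.2))
```

In practice one would pass the negative of this log-likelihood to a numerical optimizer (for example scipy.optimize.minimize) to estimate mu, phi, and sigma_eta.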

5.
Linear stochastic programming provides a flexible toolbox for analyzing real-life decision situations, but it can become computationally cumbersome when recourse decisions are involved. The latter are usually modeled as decision rules, i.e., functions of the uncertain problem data. It has recently been argued that stochastic programs can quite generally be made tractable by restricting the space of decision rules to those that exhibit a linear data dependence. In this paper, we propose an efficient method to estimate the approximation error introduced by this rather drastic means of complexity reduction: we apply the linear decision rule restriction not only to the primal but also to a dual version of the stochastic program. By employing techniques that are commonly used in modern robust optimization, we show that both arising approximate problems are equivalent to tractable linear or semidefinite programs of moderate sizes. The gap between their optimal values estimates the loss of optimality incurred by the linear decision rule approximation. Our method remains applicable if the stochastic program has random recourse and multiple decision stages. It also extends to cases involving ambiguous probability distributions.

6.
The nonlinear filtering problem of estimating the state of a linear stochastic system from noisy observations is solved for a broad class of probability distributions of the initial state. It is shown that the conditional density of the present state, given the past observations, is a mixture of Gaussian distributions, and is parametrically determined by two sets of sufficient statistics which satisfy stochastic differential equations. This result leads to a generalization of the Kalman-Bucy filter to a structure with a conditional mean vector and additional sufficient statistics that obey nonlinear equations and determine a generalized (random) Kalman gain. The theory is used to solve explicitly a control problem with quadratic running and terminal costs and bounded controls.
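The mixture-of-Gaussians structure can be illustrated with a discrete-time analogue: when the initial state has a Gaussian-mixture prior and the dynamics and observations are linear-Gaussian, the filtering density remains a Gaussian mixture, so one can run one Kalman filter per component and reweight the components by their predictive likelihoods. The scalar model and parameter values below are illustrative assumptions, not the continuous-time construction of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Scalar linear system: x_{t+1} = a*x_t + w_t,  y_t = x_t + v_t
a, q, r = 0.9, 0.5, 1.0

# Gaussian-mixture prior on the initial state (two well-separated components)
weights = np.array([0.5, 0.5])
means   = np.array([-5.0, 5.0])
varis   = np.array([1.0, 1.0])

# Simulate observations from a trajectory started near the second component
x, ys = 5.0, []
for _ in range(50):
    ys.append(x + rng.normal(0.0, np.sqrt(r)))
    x = a*x + rng.normal(0.0, np.sqrt(q))

for t, y in enumerate(ys):
    if t > 0:                                            # time update for every component
        means = a*means
        varis = a*a*varis + q
    pred_lik = norm.pdf(y, means, np.sqrt(varis + r))    # p(y_t | component)
    gain  = varis / (varis + r)                          # standard scalar Kalman update
    means = means + gain*(y - means)
    varis = (1.0 - gain)*varis
    weights = weights * pred_lik                         # reweight by predictive likelihood
    weights /= weights.sum()

print("component weights:", weights)                     # should concentrate on the correct mode
print("conditional mean:", np.sum(weights*means))
```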

7.
A sample from a mixture of two symmetric distributions is observed, where the two component distributions differ only by a shift. Estimates of the mean locations and concentrations (mixing probabilities) of both components are constructed by the method of estimating equations. We obtain conditions for the asymptotic normality of these estimates and determine the greatest lower bounds for the coefficients of dispersion of the estimates.

8.
A method for constructing priors is proposed that allows the off-diagonal elements of the concentration matrix of Gaussian data to be zero. The priors have the property that the marginal prior distribution of the number of nonzero off-diagonal elements of the concentration matrix (referred to below as model size) can be specified flexibly. The priors have normalizing constants for each model size, rather than for each model, giving a tractable number of normalizing constants that need to be estimated. The article shows how to estimate the normalizing constants using Markov chain Monte Carlo simulation and supersedes the method of Wong et al. (2003) [24] because it is more accurate and more general. The method is applied to two examples. The first is a mixture of constrained Wisharts. The second is from Wong et al. (2003) [24] and decomposes the concentration matrix into a function of partial correlations and conditional variances using a mixture distribution on the matrix of partial correlations. The approach detects structural zeros in the concentration matrix and estimates the covariance matrix parsimoniously if the concentration matrix is sparse.

9.
Viewing the classical Bernstein polynomials as sampling operators, we study a generalization by allowing the sampling operation to take place at scattered sites. We utilize both stochastic and deterministic approaches. On the stochastic side, we consider the sampling sites as random variables that obey some naturally derived probabilistic distributions, and obtain Chebyshev type estimates. On the deterministic side, we incorporate the theory of uniform distribution of point sets (within the framework of Weyl's criterion) and the discrepancy method. We establish convergence results and error estimates under practical assumptions on the distribution of the sampling sites.
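As a toy illustration of the sampling-operator viewpoint, the sketch below compares the classical Bernstein polynomial, which samples f at the equispaced nodes k/n, with a variant that samples f at randomly perturbed (scattered) sites. The perturbation scheme and test function are assumptions made purely for illustration, not the operators or site distributions analyzed in the paper.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
f = lambda t: np.sin(2*np.pi*t)                # test function on [0, 1]
n = 60
k = np.arange(n + 1)

def bernstein_like(x, nodes):
    """Bernstein-type operator: sum_k f(nodes[k]) * C(n, k) x^k (1-x)^(n-k)."""
    return binom.pmf(k, n, x) @ f(nodes)

nodes_eq   = k / n                                                               # classical nodes
nodes_rand = np.clip(nodes_eq + rng.uniform(-0.5/n, 0.5/n, n + 1), 0.0, 1.0)     # scattered sites

xs = np.linspace(0.0, 1.0, 201)
err_eq   = max(abs(bernstein_like(x, nodes_eq)   - f(x)) for x in xs)
err_rand = max(abs(bernstein_like(x, nodes_rand) - f(x)) for x in xs)
print("classical nodes:", err_eq, " scattered sites:", err_rand)
```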

10.
Predictive recursion (PR) is a fast stochastic algorithm for nonparametric estimation of mixing distributions in mixture models. It is known that the PR estimates of both the mixing and mixture densities are consistent under fairly mild conditions, but currently very little is known about the rate of convergence. Here I first investigate asymptotic convergence properties of the PR estimate under model misspecification in the special case of finite mixtures with known support. Tools from stochastic approximation theory are used to prove that the PR estimates converge, to the best Kullback-Leibler approximation, at a nearly root-n rate. When the support is unknown, PR can be used to construct an objective function which, when optimized, yields an estimate of the support. I apply the known-support results to derive a rate of convergence for this modified PR estimate in the unknown support case, which compares favorably to known optimal rates.
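For intuition, here is a minimal sketch of the PR update in the known-support finite-mixture case: each observation produces a weighted multiplicative update of the current mixing weights. The normal kernel, weight-sequence exponent, and data-generating mixture are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Data from a two-component normal location mixture: 0.3*N(-2, 1) + 0.7*N(2, 1)
n = 5000
x = np.where(rng.random(n) < 0.3, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

support = np.array([-2.0, 0.0, 2.0])             # assumed known support of the mixing distribution
f = np.full(len(support), 1.0/len(support))      # initial guess for the mixing weights

for i, xi in enumerate(x, start=1):
    w = (i + 1.0)**(-0.67)                       # weight sequence w_i ~ i^(-gamma), gamma in (1/2, 1]
    k = norm.pdf(xi, support, 1.0)               # kernel k(x_i | u) at each support point
    m = f @ k                                    # current mixture density at x_i
    f = (1.0 - w)*f + w*(k*f)/m                  # predictive recursion update

print(dict(zip(support, np.round(f, 3))))        # weights should approach roughly (0.3, 0, 0.7)
```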

11.
In this study, we derive stochastic models of population dynamics and devise a new method of estimating the models. The models allow growth and harvest to be nonlinear functions of stochastic processes and the error terms to be nonlinear and heteroskedastic. Ordinary least-squares estimates would be biased and inefficient and generalized least-squares estimates cannot be calculated. Therefore, we implement nonlinear maximum likelihood methods to find unbiased and efficient estimates of parameters. The method is applied to the population dynamics of kangaroos in South Australia. Aerial survey data of kangaroo numbers are combined with harvest, effort and rainfall data to estimate the growth and harvest functions and the variances of the stochastic processes which drive the system. Results suggest that growth and harvest should be modeled as functions of stochastic processes and that observations on kangaroo numbers are critical for estimating population dynamics. The results also indicate that the estimation method works well and is a viable alternative to ARIMA and GARCH models, particularly for small data sets.

12.
The problem of clustering a group of observations according to some objective function (e.g., K-means clustering, variable selection) or a density (e.g., the posterior from a Dirichlet process mixture model prior) can be cast in the framework of Monte Carlo sampling for cluster indicators. We propose a new method called the evolutionary Monte Carlo clustering (EMCC) algorithm, in which three new "crossover moves," based on swapping and reshuffling subcluster intersections, are proposed. We apply the EMCC algorithm to several clustering problems including Bernoulli clustering, biological sequence motif clustering, BIC-based variable selection, and mixture of normals clustering. We compare EMCC's performance, both as a sampler and as a stochastic optimizer, with Gibbs sampling, "split-merge" Metropolis-Hastings algorithms, K-means clustering, and the MCLUST algorithm.

13.
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, whereas SAMCMC can; for nondegenerate ERGMs, SAMCMC works as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials.

14.
The article presents a method for approximating the amplitude distribution of images from synthetic aperture radar (SAR), using finite mixtures of distributions. The method involves a stochastic expectation maximization algorithm and a method of logarithmic cumulants for estimating the parameters of the components in the mixture. The distributions for the components are taken from a special dictionary containing distributions typical for SAR. Experiments with real high-resolution SAR images exhibit highly accurate results from the viewpoint of both visual analysis and quantitative characteristics (the correlation coefficient and the Kolmogorov-Smirnov distance).

15.
The success behind effective project management lies in estimating the time for individual activities. In many cases, these activity times are non-deterministic. In such situations, the conventional method (the project evaluation and review technique, PERT) obtains three time estimates, which are then used to calculate the expected time. In practice, it is often difficult to obtain three accurate time estimates. A recent paper suggests using just two time estimates and an approximation based on the normal distribution to obtain the expected time and variance for an activity. In this paper, we propose an alternative method that uses only two pieces of information: the most-likely time and either the optimistic or the pessimistic time. We use a lognormal approximation, and experimental results show that our method is not only better than the normal approximation but also better than the conventional method when the underlying activity distributions are moderately or heavily right skewed.
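One concrete way to turn two estimates into a lognormal activity-time model is sketched below: the most-likely time is treated as the mode of the lognormal and the pessimistic time as a high quantile (here the 95th percentile), which pins down both lognormal parameters in closed form. The choice of quantile and the example numbers are assumptions made for illustration; the paper's own calibration may differ.

```python
import numpy as np
from scipy.stats import norm

def lognormal_from_two_estimates(most_likely, pessimistic, quantile=0.95):
    """Fit a lognormal activity time from two estimates, assuming the most-likely
    time is the mode exp(mu - sigma^2) and the pessimistic time is the chosen
    upper quantile exp(mu + z*sigma). Returns (expected time, variance)."""
    z = norm.ppf(quantile)
    c = np.log(pessimistic / most_likely)
    sigma = (-z + np.sqrt(z*z + 4.0*c)) / 2.0      # positive root of sigma^2 + z*sigma - c = 0
    mu = np.log(most_likely) + sigma**2
    mean = np.exp(mu + sigma**2/2.0)
    var  = (np.exp(sigma**2) - 1.0) * np.exp(2.0*mu + sigma**2)
    return mean, var

# Example: most-likely duration 10 days, pessimistic duration 20 days
print(lognormal_from_two_estimates(10.0, 20.0))
```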

16.
This paper studies the limit distributions of the discretization error of irregular sampling approximations of stochastic integrals. The irregular sampling approximation was first presented in Hayashi et al. [3] and is more general than the sampling approximation in Lindberg and Rootzén [10]. As applications, we derive the asymptotic distributions of the hedging error and of the Euler scheme for a stochastic differential equation, respectively.

17.
In this paper we derive estimates of the sample sizes required to solve a multistage stochastic programming problem to a given accuracy by the (conditional sampling) sample average approximation method. The presented analysis is self-contained and is based on a relatively elementary one-dimensional Cramér large deviations theorem.

18.
We consider the problem of optimally allocating the seats on a single flight leg to the demands from multiple fare classes that arrive sequentially. It is well-known that the optimal policy for this problem is characterized by a set of protection levels. In this paper, we develop a new stochastic approximation method to compute the optimal protection levels under the assumption that the demand distributions are not known and we only have access to the samples from the demand distributions. The novel aspect of our method is that it works with the nonsmooth version of the problem where the capacity can only be allocated in integer quantities. We show that the sequence of protection levels generated by our method converges to a set of optimal protection levels with probability one. We discuss applications to the case where the demand information is censored by the seat availability. Computational experiments indicate that our method is especially advantageous when the total expected demand exceeds the capacity by a significant margin and we do not have good a priori estimates of the optimal protection levels.
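For the simplest two-fare-class case, a stochastic-approximation scheme in this spirit can be sketched as follows: the optimal protection level y for the high fare satisfies Littlewood's condition P(D_high > y) = p_low / p_high, and a Robbins-Monro update driven only by demand samples converges to it. The two-class setting, continuous (rather than integer) update, and all numbers below are simplifying assumptions; the paper works directly with integer allocations and multiple fare classes.

```python
import numpy as np

rng = np.random.default_rng(5)

p_high, p_low = 250.0, 100.0          # fares of the two classes
capacity = 100
ratio = p_low / p_high                # Littlewood: P(D_high > y*) = p_low / p_high

# True (unknown to the algorithm) high-fare demand: Poisson with mean 60
sample_high_demand = lambda: rng.poisson(60)

y = capacity / 2.0                    # initial protection level for the high fare
for k in range(1, 20001):
    d = sample_high_demand()          # one demand sample per iteration
    step = 50.0 / k                   # Robbins-Monro step sizes (sum infinite, sum of squares finite)
    y += step * ((d > y) - ratio)     # move up when demand exceeds the protection level too often
    y = min(max(y, 0.0), capacity)    # project back onto [0, capacity]

print("estimated protection level:", round(y, 1))
# For comparison, the Littlewood level is the (1 - ratio) quantile of the demand distribution:
print("quantile benchmark:", np.quantile(rng.poisson(60, 200000), 1.0 - ratio))
```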

19.
Utilizing the well-known aggregation technique, we propose a smoothing sample average approximation (SAA) method for a stochastic linear complementarity problem, where the underlying functions are represented by expectations of stochastic functions. The method is proved to be convergent and the preliminary numerical results are reported.
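A minimal sketch of the SAA idea for a stochastic linear complementarity problem is given below: the expectations defining the problem are replaced by sample averages, and the complementarity conditions are handled with a smoothed Fischer-Burmeister reformulation solved by a standard root finder. The smoothing choice, random problem data, and sample size are illustrative assumptions and do not reproduce the aggregation technique of the paper.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(4)
n, N = 3, 5000                          # problem dimension, Monte Carlo sample size

# Stochastic LCP: find z >= 0 with E[M(w)]z + E[q(w)] >= 0 and z'(E[M(w)]z + E[q(w)]) = 0
def sample_Mq():
    A = rng.normal(size=(n, n))
    M = A @ A.T + n*np.eye(n)           # random positive definite matrix
    q = rng.normal(size=n) - 1.0
    return M, q

draws = [sample_Mq() for _ in range(N)]
M_bar = sum(M for M, _ in draws) / N    # sample average approximation of E[M(w)]
q_bar = sum(q for _, q in draws) / N    # sample average approximation of E[q(w)]

def smoothed_fb(z, mu=1e-8):
    """Smoothed Fischer-Burmeister equations phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2),
    applied componentwise with a = z and b = M_bar z + q_bar; roots approximate LCP solutions."""
    w = M_bar @ z + q_bar
    return z + w - np.sqrt(z*z + w*w + 2.0*mu*mu)

sol = root(smoothed_fb, x0=np.ones(n))
z = sol.x
print("z          :", np.round(z, 4))
print("M_bar z + q:", np.round(M_bar @ z + q_bar, 4))   # should be nonnegative and complementary to z
```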

20.
The usual Bayes-Stein shrinkages of maximum likelihood estimates towards a common value may be refined by taking fuller account of the locations of the individual observations. Under a Bayesian formulation, the types of shrinkages depend critically upon the nature of the common distribution assumed for the parameters at the second stage of the prior model. In the present paper this distribution is estimated empirically from the data, permitting the data to determine the nature of the shrinkages. For example, when the observations are located in two or more clearly distinct groups, the maximum likelihood estimates are, roughly speaking, constrained towards common values within each group. The method also detects outliers; an extreme observation will either be regarded as an outlier and not substantially adjusted towards the other observations, or it will be rejected as an outlier, in which case a more radical adjustment takes place. The method is appropriate for a wide range of sampling distributions and may also be viewed as an alternative to standard multiple comparisons, cluster analysis, and nonparametric kernel methods.
