Similar Articles (20 results)
1.
In this paper, we consider the problem of estimating the location and scale parameters of the skew normal distribution introduced by Azzalini. For this distribution, the classic maximum likelihood estimators (MLEs) do not take explicit forms. We approximate the likelihood equations and derive explicit estimators of the parameters. The bias and variance of the estimators are investigated, and Monte Carlo simulation studies show that the estimators are as efficient as the classic MLEs. We demonstrate that the probability coverages of the pivotal quantities (for location and scale parameters) based on asymptotic normality are unsatisfactory, especially when the sample size is small. The use of unconditional simulated percentage points of these quantities is suggested. Finally, a numerical example is used to illustrate the proposed inference methods.
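
To make the comparison in this abstract concrete, below is a minimal numerical-MLE sketch for the location and scale of a skew normal with a known shape parameter; it is only a baseline, not the explicit approximate estimators derived in the paper, and the shape value, sample size, and true parameter values are illustrative assumptions.

```python
# Minimal sketch: numerical MLE for location/scale of a skew normal with a
# known shape parameter. This is a baseline only, not the paper's explicit
# approximate estimators; all numerical values below are assumptions.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
a_true, loc_true, scale_true = 5.0, 2.0, 1.5      # assumed shape, location, scale
x = stats.skewnorm.rvs(a_true, loc=loc_true, scale=scale_true, size=50, random_state=rng)

def neg_loglik(theta):
    # negative log-likelihood in (location, log-scale); shape treated as known
    loc, log_scale = theta
    return -np.sum(stats.skewnorm.logpdf(x, a_true, loc=loc, scale=np.exp(log_scale)))

res = optimize.minimize(neg_loglik, x0=[np.mean(x), np.log(np.std(x))], method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(f"numerical MLEs: location={loc_hat:.3f}, scale={scale_hat:.3f}")
```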

2.
Monte Carlo sampling-based estimators of optimality gaps for stochastic programs are known to be biased. When bias is a prominent factor, estimates of optimality gaps tend to be large on average even for high-quality solutions. This diminishes our ability to recognize high-quality solutions. In this paper, we present a method for reducing the bias of the optimality gap estimators for two-stage stochastic linear programs with recourse via a probability metrics approach, motivated by stability results in stochastic programming. We apply this method to the Averaged Two-Replication Procedure (A2RP) by partitioning the observations in an effort to reduce bias, which can be done in polynomial time in sample size. We call the resulting procedure the Averaged Two-Replication Procedure with Bias Reduction (A2RP-B). We provide conditions under which A2RP-B produces strongly consistent point estimators and an asymptotically valid confidence interval. We illustrate the effectiveness of our approach analytically on a newsvendor problem and test the small-sample behavior of A2RP-B on a number of two-stage stochastic linear programs from the literature. Our computational results indicate that the procedure effectively reduces bias. We also observe variance reduction in certain circumstances.
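
As a rough illustration of why such bias reduction matters, the sketch below computes the basic sampling-based optimality-gap estimate for a newsvendor problem; averaged over many samples it stays positive even for a reasonable candidate solution. The cost and price, demand distribution, candidate order quantity, and sample sizes are assumptions, and the A2RP-B partitioning step itself is not reproduced.

```python
# Sketch of a sampling-based optimality-gap estimate for a newsvendor problem:
#   gap_n(x_hat) = (1/n) sum f(x_hat, D_i) - min_x (1/n) sum f(x, D_i).
# The estimate is nonnegative by construction and biased upward on average.
# Cost/price, demand law, and the candidate solution are assumed for illustration.
import numpy as np

c, p = 5.0, 9.0                                   # unit cost and selling price (assumed)
rng = np.random.default_rng(0)

def cost(x, d):
    return c * x - p * np.minimum(x, d)           # negative profit, to be minimized

x_hat = 60.0                                      # candidate order quantity (assumed)
n = 200
gaps = []
for _ in range(1000):                             # repeat to see the average (biased) gap
    d = rng.exponential(scale=50.0, size=n)       # demand sample
    # SAA minimizer is the critical-ratio order statistic; np.quantile is a close stand-in
    x_star_n = np.quantile(d, (p - c) / p)
    gaps.append(cost(x_hat, d).mean() - cost(x_star_n, d).mean())
print("average gap estimate:", np.mean(gaps))
```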

3.
This article discusses a new technique for calculating maximum likelihood estimators (MLEs) of probability measures when it is assumed the measures are constrained to a compact, convex set. Measures in such sets can be represented as mixtures of simple, known extreme measures, and so the problem of maximizing the likelihood in the constrained measures becomes one of maximizing in an unconstrained mixing measure. Such convex constraints arise in many modeling situations, such as empirical likelihood and estimation under stochastic ordering constraints. This article describes the mixture representation technique for these two situations and presents a data analysis of an experiment in cancer genetics, where a partial stochastic ordering is assumed but the data are incomplete.

4.
In reliability and life-testing experiments, the researcher is often interested in the effects of extreme or varying stress factors such as temperature, voltage and load on the lifetimes of experimental units. A step-stress test, which is a special class of accelerated life-tests, allows the experimenter to increase the stress levels at fixed times during the experiment in order to obtain information on the parameters of the life distributions more quickly than under normal operating conditions. In this paper, we consider the simple step-stress model from the exponential distribution when there is a time constraint on the duration of the experiment. We derive the maximum likelihood estimators (MLEs) of the parameters assuming a cumulative exposure model with lifetimes being exponentially distributed. The exact distributions of the MLEs of the parameters are obtained through the use of conditional moment generating functions. We also derive confidence intervals for the parameters using these exact distributions, asymptotic distributions of the MLEs and the parametric bootstrap methods, and assess their performance through a Monte Carlo simulation study. Finally, we present two examples to illustrate all the methods of inference discussed here.
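
For orientation, here is a minimal sketch of the closed-form MLEs in the complete-sample (unconstrained-duration) version of this model; the paper's time-constrained setting and the exact conditional distributions of the MLEs are not reproduced, and all numerical values are assumptions.

```python
# Sketch of the closed-form MLEs for a simple step-stress test with exponential
# lifetimes under the cumulative exposure model, COMPLETE-sample case only
# (no time constraint). All numerical values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
theta1, theta2, tau, n = 10.0, 4.0, 8.0, 40      # mean lives, stress-change time, #units

# For exponential lifetimes the cumulative exposure model amounts to hazard
# 1/theta1 before tau and 1/theta2 after tau (memorylessness).
t1 = rng.exponential(theta1, n)
t = np.where(t1 <= tau, t1, tau + rng.exponential(theta2, n))

n1 = np.sum(t <= tau)                            # failures at the first stress level
theta1_hat = (t[t <= tau].sum() + (n - n1) * tau) / n1
theta2_hat = (t[t > tau] - tau).sum() / (n - n1)
print(theta1_hat, theta2_hat)                    # MLEs exist only when 0 < n1 < n
```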

5.
Gradient-based simulation optimization under probability constraints
We study optimization problems subject to possible fatal failures. The probability of failure should not exceed a given confidence level. The distribution of the failure event is assumed unknown, but it can be generated via simulation or observation of historical data. Gradient-based simulation-optimization methods face the difficulty of estimating the gradient of the probability constraint when the distribution is unknown. In this work we provide two single-path estimators with bias, a convolution method and a finite-difference method, and we provide a full analysis of convergence of the Arrow-Hurwicz algorithm, which we use as our solver for optimization. The convergence results are used to tune the parameters of the numerical algorithms in order to achieve the best convergence rates, and numerical results are included via an example of an application in finance.

6.
This paper discusses inference for ordered parameters of multinomial distributions. We first show that the asymptotic distributions of their maximum likelihood estimators (MLEs) are not always normal and that the bootstrap distribution estimators of the MLEs can be inconsistent. Then a class of weighted sum estimators (WSEs) of the ordered parameters is proposed. Properties of the WSEs are studied, including their asymptotic normality. Based on those results, large sample inferences for smooth functions of the ordered parameters can be made. In particular, confidence intervals for the maximum cell probabilities are constructed. Simulation results indicate that this interval estimation performs much better than the bootstrap approaches in the literature. Finally, the above results for ordered parameters of multinomial distributions are extended to more general distribution models. This work was supported by National Natural Science Foundation of China (Grant No. 10371126).

7.
For general step-stress experiments with arbitrary baseline distributions, wherein the stress levels change immediately after having observed pre-specified numbers of observations under each stress level, a sequential order statistics model is proposed and associated inferential issues are discussed. Maximum likelihood estimators (MLEs) of the mean lifetimes at different stress levels are derived, and some useful properties of the MLEs are established. Joint MLEs are also derived when an additional location parameter is introduced into the model, and estimation under order restriction of the parameters at different stress levels is finally discussed.

8.
In this paper we define semi-stable probability measures (laws) on a real separable Hilbert space and identify them as limit laws. We characterize them in terms of their Lévy-Khinchine measure and the exponent 0 < p ≤ 2. Finally we prove that every semi-stable probability measure of exponent p has finite absolute moments of order 0 ≤ α < p.

9.
This paper derives the optimal investment strategy for the continuous-time mean-VaR model. Based on this optimal solution, we compare the roles that probability and quantile play as risk measures in managing risk. Our analysis shows that, from a risk-management perspective, controlling the probability of a loss is more meaningful than controlling the level of the loss, and that the higher the chosen VaR confidence level, the better the regulatory effect.

10.
Kim, Jisoo; Jun, Chi-Hyuck. Queueing Systems, 2002, 42(3): 221-237
We consider a discrete-time queueing system with a single deterministic server, heterogeneous Markovian arrivals and finite capacity. Most existing techniques model the queueing system using a direct bivariate Markov chain which requires a state space that grows rapidly as the number of customer types increases. In this paper, we define renewal cycles in terms of the input process and model the system occupancy level on each renewal cycle using a one-dimensional Markov chain. We derive the exact joint steady-state probability distribution of both states of input and system occupancy with a considerably reduced state space, which leads to the efficient calculation of overall/individual performance measures such as loss probability and average delay.

11.
We investigate stationarity and stability of half-spaces as isoperimetric sets for product probability measures, considering the cases of coordinate and non-coordinate half-spaces. Moreover, we present several examples to which our results can be applied, with a particular emphasis on the logistic measure.

12.
In this paper we focus on the sequential k-out-of-n model with covariates. We assume that the lifetime distribution given the covariates belongs to the exponential family, and consider a log-linear model for the scale parameter of the exponential distribution. The maximum likelihood estimators (MLEs) of the model parameters under order restrictions are derived, some properties of the MLEs are discussed, and an algorithm for computing the MLEs is given together with simulation results.

13.
Estimating the probabilities by which different events might occur is usually a delicate task, subject to many sources of inaccuracy. Moreover, these probabilities can change over time, leading to a very difficult evaluation of the risk induced by any particular decision. Given a set of probability measures and a set of nominal risk measures, we define in this paper the concept of a robust risk measure as the worst of the nominal risks taken over the set of plausible probability measures. We study how some properties of this new object can be related to those of our nominal risk measures, such as convexity or coherence. We introduce a robust version of the Conditional Value-at-Risk (CVaR) and of entropy-based risk measures. We show how to compute and optimize the Robust CVaR using convex duality methods and illustrate its behavior using data from the New York Stock Exchange and from the NASDAQ between 2005 and 2010.
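
A brute-force sketch of the worst-case idea (not the convex-duality method used in the paper): compute the discrete CVaR in Rockafellar-Uryasev form for each candidate probability vector in a finite ambiguity set and take the maximum. The loss scenarios and the ambiguity set below are illustrative assumptions.

```python
# Worst-case CVaR of a discrete loss vector over a finite set of candidate
# probability vectors. Not the paper's duality-based method; loss data and
# the ambiguity set are assumed for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

def cvar(losses, probs, alpha=0.95):
    """Rockafellar-Uryasev form: CVaR_a = min_t { t + E[(L - t)^+] / (1 - a) }."""
    obj = lambda t: t + np.dot(probs, np.maximum(losses - t, 0.0)) / (1.0 - alpha)
    res = minimize_scalar(obj, bounds=(losses.min(), losses.max()), method="bounded")
    return res.fun

losses = np.array([-2.0, -1.0, 0.5, 3.0, 8.0])   # scenario losses (assumed)
ambiguity_set = [                                 # candidate probability measures (assumed)
    np.array([0.2, 0.2, 0.2, 0.2, 0.2]),
    np.array([0.1, 0.2, 0.3, 0.2, 0.2]),
    np.array([0.1, 0.1, 0.2, 0.3, 0.3]),
]
robust_cvar = max(cvar(losses, p) for p in ambiguity_set)
print(f"Robust CVaR_0.95 = {robust_cvar:.3f}")
```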

14.
We build a new probability measure on closed space and plane polygons. The key construction is a map, given by Hausmann and Knutson, using the Hopf map on quaternions from the complex Stiefel manifold of 2-frames in n-space to the space of closed n-gons in 3-space of total length 2. Our probability measure on polygon space is defined by pushing forward Haar measure on the Stiefel manifold by this map. A similar construction yields a probability measure on plane polygons that comes from a real Stiefel manifold. The edgelengths of polygons sampled according to our measures obey beta distributions. This makes our polygon measures different from those usually studied, which have Gaussian or fixed edgelengths. One advantage of our measures is that we can explicitly compute expectations and moments for chord lengths and radii of gyration. Another is that direct sampling according to our measures is fast (linear in the number of edges) and easy to code. Some of our methods will be of independent interest in studying other probability measures on polygon spaces. We define an edge set ensemble (ESE) to be the set of polygons created by rearranging a given set of n edges. A key theorem gives a formula for the average over an ESE of the squared lengths of chords skipping k vertices in terms of k, n, and the edgelengths of the ensemble. This allows one to easily compute expected values of squared chord lengths and radii of gyration for any probability measure on polygon space invariant under rearrangements of edges. © 2014 Wiley Periodicals, Inc.
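
A sketch of the direct-sampling idea: draw a Haar-random 2-frame on the complex Stiefel manifold by Gram-Schmidt on complex Gaussian vectors, then map each coordinate pair to an edge in 3-space. Sign and ordering conventions of the Hopf map vary between references, so the coordinate formula below is an assumed convention; closure and total length 2 follow from orthonormality of the frame either way.

```python
# Sample a closed n-gon by pushing a Haar-random complex 2-frame through a
# Hopf-type map. The exact coordinate convention is assumed; the edges sum to
# zero (closure) and have total length 2 because ||a|| = ||b|| = 1, <a, b> = 0.
import numpy as np

def random_closed_polygon(n, rng=np.random.default_rng()):
    a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    a /= np.linalg.norm(a)
    b -= np.vdot(a, b) * a                 # project out the component along a
    b /= np.linalg.norm(b)
    edges = np.column_stack([
        np.abs(a) ** 2 - np.abs(b) ** 2,
        2.0 * (np.conj(a) * b).real,
        2.0 * (np.conj(a) * b).imag,
    ])
    return edges                           # rows are the edge vectors of a closed n-gon

edges = random_closed_polygon(100)
print(np.abs(edges.sum(axis=0)).max())     # ~0: the polygon closes
print(np.linalg.norm(edges, axis=1).sum()) # total length = 2
```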

15.
This paper develops a discrete reliability growth (RG) model for an inverse sampling scheme, e.g., for destructive tests of expensive single-shot operations systems where design changes are made only and immediately after the occurrence of failures. For q_i, the probability of failure at the i-th stage, a specific parametric form is chosen which conforms to the concept of the Duane (1964, IEEE Trans. Aerospace Electron. Systems, 2, 563-566) learning curve in the continuous-time RG setting. A generalized linear model approach is pursued which efficiently handles a certain non-standard situation arising in the study of large-sample properties of the maximum likelihood estimators (MLEs) of the parameters. Alternative closed-form estimators of the model parameters are proposed and compared with the MLEs through asymptotic efficiency as well as small and moderate sample size simulation studies.

16.
In this paper, we deal with parameter estimation of the log-logistic distribution. It is widely known that the maximum likelihood estimators (MLEs) are usually biased in the case of the finite sample size. This motivates a study of obtaining unbiased or nearly unbiased estimators for this distribution. Specifically, we consider a certain ‘corrective’ approach and Efron’s bootstrap resampling method, which both can reduce the biases of the MLEs to the second order of magnitude. As a comparison, the commonly used generalized moments method is also considered for estimating parameters. Monte Carlo simulation studies are conducted to compare the performances of the various estimators under consideration. Finally, two real-data examples are analyzed to illustrate the potential usefulness of the proposed estimators, especially when the sample size is small or moderate.
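
A minimal sketch of the bootstrap bias-correction idea applied to log-logistic MLEs, using scipy's fisk (log-logistic) distribution with the location fixed at zero; the paper's analytical 'corrective' approach is not reproduced, and the parameter values, sample size, and number of bootstrap replications are assumptions.

```python
# Bootstrap bias correction for log-logistic (Fisk) MLEs:
#   bias ~ mean(bootstrap estimates) - MLE, so corrected = 2*MLE - mean(boot).
# All numerical values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape_true, scale_true = 2.5, 1.0
x = stats.fisk.rvs(shape_true, scale=scale_true, size=30, random_state=rng)

def mle(sample):
    c, _, scale = stats.fisk.fit(sample, floc=0)   # location fixed at 0
    return np.array([c, scale])

theta_hat = mle(x)
boot = np.array([mle(rng.choice(x, size=x.size, replace=True)) for _ in range(500)])
theta_bc = 2 * theta_hat - boot.mean(axis=0)       # bias-corrected estimates
print("MLE:", theta_hat, "bias-corrected:", theta_bc)
```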

17.
This paper studies moderate deviation behaviors of the generalized method of moments and generalized empirical likelihood estimators for generalized estimating equations, where the number of equations can be larger than the number of unknown parameters. We consider two cases for the data generating probability measure: the model assumption and local contaminations or deviations from the model assumption. For both cases, we characterize the first-order terms of the moderate deviation error probabilities of these estimators. Our moderate deviation analysis complements the existing literature on local asymptotic analysis and misspecification analysis for estimating equations, and is useful for evaluating the power and robustness properties of statistical tests for estimating equations, which typically involve some estimators for nuisance parameters.

18.
Coherent multiperiod risk adjusted values and Bellman’s principle
Starting with a time-0 coherent risk measure defined for “value processes”, we also define risk measurement processes. Two other constructions of measurement processes are given in terms of sets of test probabilities. These latter constructions are identical and are related to the former construction when the sets fulfill a stability condition also met in multiperiod treatment of ambiguity as in decision-making. We finally deduce risk measurements for the final value of locked-in positions and repeat a warning concerning Tail-Value-at-Risk.

19.
We consider minimum distance estimators where the discrepancy function is defined in terms of a supremum-norm based on a Donsker-class of functions. If the parameter set is contained in a normed linear space we prove a Portmanteau-type theorem. Here, the limit in general is not a probability measure, but an outer measure given by the hitting family of the set of all minimizing points of a certain stochastic process. In case there is exactly one minimizer one obtains traditional weak convergence.

20.
Based on the spectral decomposition of the optimal linear minimum-bias estimator, this paper defines a new class of linear biased estimators for the unknown parameters of a rank-deficient linear model and discusses many of its important properties. By choosing suitable forms of the biasing parameter, many meaningful linear biased estimators are constructed. Finally, a numerical example is given.
