Similar Documents
20 similar documents found.
1.
The Poisson distribution is often a good approximation to the underlying sampling distribution and is central to the study of categorical data. In this paper, we propose a new unified approach to investigating the properties of simultaneous point estimators of Poisson population parameters under general quadratic loss functions. The main emphasis is on shrinkage estimation. We build a series of estimators that can be represented as convex combinations of linear statistics: the maximum likelihood estimator (the benchmark), the restricted estimator, the composite estimator, the preliminary test estimator, the shrinkage estimator, and the positive-rule shrinkage (James-Stein-type) estimator. All of these estimators fit within a general integrated estimation approach, which allows us to unify the investigation and order the estimators with respect to risk. A simulation study with numerical and graphical results is conducted to illustrate the properties of the investigated estimators.
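To make the positive-rule shrinkage idea concrete, here is a minimal sketch that shrinks the Poisson MLEs (the raw counts) toward a pooled, restricted estimate; the distance statistic, the (k − 3)/T weight, and the common-mean restriction are generic textbook choices, not the specific estimators or quadratic-loss weights analysed in the paper.

```python
import numpy as np

def positive_rule_shrinkage(counts):
    """James-Stein-type positive-rule shrinkage for k Poisson means, one
    observed count per population. Shrinks the unrestricted MLE toward the
    restricted (common-mean) estimate; the weight is driven by a
    chi-square-type distance statistic and is clamped at zero."""
    x = np.asarray(counts, dtype=float)
    k = len(x)
    mle = x                               # unrestricted MLE of each mean
    restricted = np.full(k, x.mean())     # restricted estimate: common mean
    T = np.sum((x - x.mean()) ** 2) / max(x.mean(), 1e-12)  # distance statistic
    w = max(0.0, 1.0 - (k - 3) / max(T, 1e-12))             # positive-rule weight
    return restricted + w * (mle - restricted)

print(positive_rule_shrinkage([3, 7, 4, 6, 5, 12, 2, 8]))
```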

2.
Many hard problems in the computational sciences are equivalent to counting the leaves of a decision tree or, more generally, to summing a cost function over its nodes. These problems include calculating the permanent of a matrix, finding the volume of a convex polyhedron, and counting the number of linear extensions of a partially ordered set. Many approximation algorithms exist to estimate such sums. One of the most recent is Stochastic Enumeration (SE), introduced in 2013 by Rubinstein. In 2015, Vaisman and Kroese provided a rigorous analysis of the variance of SE and showed that SE can be extended to a fully polynomial randomized approximation scheme for certain cost functions on random trees. We present an algorithm that incorporates an importance function into SE and provide a theoretical analysis of its efficacy. We also present the results of numerical experiments measuring the variance of the algorithm applied to the problem of counting linear extensions of a poset, and show that introducing importance sampling yields a significant reduction in variance compared with the original version of SE.
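For context, the plain sequential-sampling baseline that SE and its importance-sampling refinement improve upon can be sketched as follows for counting linear extensions: at each step a minimal element is chosen uniformly, and the product of the numbers of available choices is averaged over runs. The poset encoding and sample size below are illustrative.

```python
import random

def estimate_linear_extensions(n, relations, n_samples=20000, seed=0):
    """Unbiased sequential-sampling estimate of the number of linear
    extensions of a poset on elements 0..n-1. `relations` is a list of pairs
    (a, b) meaning a must precede b. Each run builds one extension by picking
    a minimal element uniformly at each step; the product of the numbers of
    admissible choices is an unbiased estimate of the count."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        preds = {v: set() for v in range(n)}
        for a, b in relations:
            preds[b].add(a)
        remaining = set(range(n))
        weight = 1.0
        while remaining:
            minimal = [v for v in remaining if not preds[v]]
            weight *= len(minimal)           # admissible choices at this node
            chosen = rng.choice(minimal)     # uniform proposal (no importance function)
            remaining.remove(chosen)
            for v in remaining:
                preds[v].discard(chosen)
        total += weight
    return total / n_samples

# The 2x2 grid poset {0<1, 0<2, 1<3, 2<3} has exactly 2 linear extensions.
print(estimate_linear_extensions(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```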

3.
Approximate solutions for discrete stochastic optimization problems are often obtained via simulation. It is reasonable to complement these solutions with confidence regions for the argmin-set. We address the question of how a given total number of random draws should be distributed among the set of alternatives. Two goals are considered: minimizing the costs caused by using a statistical estimate of the true argmin, and minimizing the expected size of the confidence sets. We show that an asymptotically optimal sampling strategy in the case of normal errors can be obtained by solving a convex optimization problem. To reduce the computational effort, we propose a regularization that leads to a simple one-step allocation rule.

4.
Estimation under Incomplete Post-stratification
The post-stratification estimator is a method frequently used in sample surveys. When several categorical variables are used to cross-classify the sample into post-strata, one often faces incomplete post-stratification, in which the marginal totals are known but the cell totals are not. This paper gives a systematic treatment of this situation and presents two classical estimators: the raking ratio estimator and the generalized raking ratio estimator.
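A minimal sketch of the raking (iterative proportional fitting) adjustment behind such estimators, assuming a two-way cross-classification with observed sample cell counts and known population marginal totals; the data and the fixed iteration count are illustrative only.

```python
import numpy as np

def raking(sample_counts, row_totals, col_totals, n_iter=50):
    """Iterative proportional fitting: scale the sample cell counts so that
    the adjusted cell totals reproduce the known row and column population
    totals, without requiring the unknown population cell totals."""
    cells = np.asarray(sample_counts, dtype=float).copy()
    for _ in range(n_iter):
        cells *= (row_totals / cells.sum(axis=1))[:, None]   # match row margins
        cells *= (col_totals / cells.sum(axis=0))[None, :]   # match column margins
    return cells

sample = np.array([[30, 20], [25, 25]])                       # observed cell counts
print(raking(sample, row_totals=np.array([600.0, 400.0]),
             col_totals=np.array([550.0, 450.0])))
```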

5.
We develop importance sampling estimators for Monte Carlo pricing of European and path-dependent options in models driven by Lévy processes. Using results from the theory of large deviations for processes with independent increments, we compute an explicit asymptotic approximation for the variance of the pay-off under a time-dependent Esscher-style change of measure. Minimizing this asymptotic variance using convex duality, we then obtain an importance sampling estimator of the option price. We show that our estimator is logarithmically optimal among all importance sampling estimators. Numerical tests in the variance gamma model show consistent variance reduction with a small computational overhead.
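A hedged stand-in for the idea in the simplest possible setting: a European call under a Black-Scholes (Brownian) model, priced by Monte Carlo after an Esscher/exponential tilt of the terminal log-return. The tilt parameter theta is hand-picked here rather than obtained from the paper's asymptotic-variance minimization, and the Lévy/variance-gamma case is not treated.

```python
import numpy as np

def is_call_price(S0, K, r, sigma, T, theta, n=100_000, seed=0):
    """Importance-sampling price of a European call: sample the terminal
    log-return under the Esscher-tilted measure N(mu + theta*s2, s2) and
    reweight by the likelihood ratio exp(theta*mu + theta^2*s2/2 - theta*x)."""
    rng = np.random.default_rng(seed)
    mu, s2 = (r - 0.5 * sigma**2) * T, sigma**2 * T
    x = rng.normal(mu + theta * s2, np.sqrt(s2), size=n)      # tilted samples
    weights = np.exp(theta * mu + 0.5 * theta**2 * s2 - theta * x)
    payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
    return np.exp(-r * T) * np.mean(payoff * weights)

# Out-of-the-money call; theta is chosen so the tilted mean return lands near the strike.
print(is_call_price(S0=100, K=150, r=0.02, sigma=0.2, T=1.0, theta=10.0))
```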

6.
We show that the Glauber dynamics on proper 9‐colourings of the triangular lattice is rapidly mixing, which allows for efficient sampling. Consequently, there is a fully polynomial randomised approximation scheme (FPRAS) for counting proper 9‐colourings of the triangular lattice. Proper colourings correspond to configurations in the zero‐temperature anti‐ferromagnetic Potts model. We show that the spin system consisting of proper 9‐colourings of the triangular lattice has strong spatial mixing. This implies that there is a unique infinite‐volume Gibbs distribution, which is an important property studied in statistical physics. Our results build on previous work by Goldberg, Martin and Paterson, who showed similar results for 10 colours on the triangular lattice. Their work was preceded by Salas and Sokal's 11‐colour result. Both proofs rely on computational assistance, and so does our 9‐colour proof. We have used a randomised heuristic to guide us towards rigorous results. © 2011 Wiley Periodicals, Inc. Random Struct. Alg., 40, 501–533, 2012
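A generic single-site Glauber dynamics sampler for proper q-colourings of an arbitrary finite graph (not the triangular-lattice-specific analysis of the paper); for this simple version to always have an admissible colour at each update, q must exceed the maximum degree.

```python
import random

def glauber_colouring(adj, q, n_steps, seed=0):
    """Single-site Glauber dynamics on proper q-colourings: start from a
    greedy proper colouring, then repeatedly pick a uniform vertex and
    resample its colour uniformly among the colours unused by its neighbours.
    Requires q > max degree so every update has an admissible colour."""
    rng = random.Random(seed)
    colouring = {}
    for v in adj:                                        # greedy initialisation
        used = {colouring[u] for u in adj[v] if u in colouring}
        colouring[v] = min(c for c in range(q) if c not in used)
    vertices = list(adj)
    for _ in range(n_steps):
        v = rng.choice(vertices)
        forbidden = {colouring[u] for u in adj[v]}
        colouring[v] = rng.choice([c for c in range(q) if c not in forbidden])
    return colouring

# A 6-cycle with q = 3 colours (q exceeds the maximum degree of 2).
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(glauber_colouring(adj, q=3, n_steps=5_000))
```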

7.
The generalized median (GM) estimator is a family of robust estimators that balances the competing demands of statistical efficiency and robustness. By choosing a kernel that is efficient for the parameter, the GM estimator gains robustness by computing the median of the kernel evaluated at all possible subsets of the sample. The GM estimator is often computationally infeasible because the number of subsets can be large for even modest sample sizes. Writing the estimator in terms of the quantile function facilitates an approximation based on a sample drawn from the set of all possible subsets. While both sampling with and without replacement are feasible, sampling without replacement is preferred because of the reduction in variance from the sampling fraction. The proposed algorithm uses sequential sampling to compute an approximation within a user-chosen margin of error.
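A minimal sketch of the subset-sampling approximation: draw random subsets, evaluate the kernel on each, and report the sample median. For simplicity the subsets are drawn independently (so repeats are possible), whereas the paper prefers sampling subsets without replacement and chooses the number of subsets sequentially to hit a target margin of error.

```python
import numpy as np

def approx_gm_estimate(data, kernel, subset_size, n_subsets=10_000, seed=0):
    """Approximate generalized median estimator: median of the kernel
    evaluated on randomly drawn subsets of the sample."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    vals = np.empty(n_subsets)
    for i in range(n_subsets):
        idx = rng.choice(len(data), size=subset_size, replace=False)
        vals[i] = kernel(data[idx])
    return np.median(vals)

# Hodges-Lehmann-type location estimate: median of pairwise means.
x = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=200)
print(approx_gm_estimate(x, kernel=np.mean, subset_size=2))
```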

8.
The diameter of a convex set C is the length of the longest segment in C, and the local diameter at a point p is the length of the longest segment in C that contains p. It is easy to see that the local diameter at any point is at least half the diameter of C.

This paper looks at the analogous question in a discrete setting; namely, we look at convex lattice polygons in the plane. The analogue of Euclidean diameter is the lattice diameter, defined as the maximal number of collinear points of a figure. In this setting, the lattice diameter and the local lattice diameter need not be related. However, for figures of a certain size, the local lattice diameter at any point must be at least (n − 2)/2, where n is the lattice diameter of the figure. The exact minimal size for which this result holds is determined, as a special case of an exact combinatorial formula.
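A brute-force illustration of the lattice diameter of a finite set of lattice points (the largest number of its points lying on one line); the direction normalisation and the grid example are for demonstration only, and the method is practical only for small configurations.

```python
from math import gcd
from collections import defaultdict

def lattice_diameter(points):
    """Largest number of collinear points in a finite set of lattice points.
    For each base point, group the other points by a normalised primitive
    direction; the biggest group plus the base point gives a line's size."""
    pts = list(set(points))
    best = 1 if pts else 0
    for i, (x1, y1) in enumerate(pts):
        lines = defaultdict(int)
        for j, (x2, y2) in enumerate(pts):
            if i == j:
                continue
            dx, dy = x2 - x1, y2 - y1
            g = gcd(dx, dy)
            d = (dx // g, dy // g)
            if d < (0, 0):                 # identify opposite directions
                d = (-d[0], -d[1])
            lines[d] += 1
        best = max(best, 1 + max(lines.values(), default=0))
    return best

grid = [(x, y) for x in range(3) for y in range(3)]
print(lattice_diameter(grid))   # 3: no line meets the 3x3 grid in more than 3 points
```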


9.
A wide variety of topics in pure and applied mathematics involve the problem of counting the number of lattice points inside a bounded convex polyhedron, called a polytope for short. Applications range from the very pure (number theory, toric Hilbert functions, Kostant’s partition function in representation theory) to the most applied (cryptography, integer programming, contingency tables). This paper is a survey of this problem and its applications. We review the basic structure theorems about this type of counting problem. Perhaps the most famous special case is the theory of Ehrhart polynomials, introduced in the 1960s by Eugène Ehrhart. These polynomials count the number of lattice points in the different integral dilations of an integral convex polytope. We discuss recent algorithmic solutions to this problem and conclude with a look at what happens when trying to count lattice points in more complicated regions of space.
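A toy illustration of an Ehrhart polynomial, using brute-force counting for the standard 2-simplex, whose Ehrhart polynomial L(t) = (t+1)(t+2)/2 is classical; realistic polytopes in higher dimension require the algorithmic methods surveyed here rather than enumeration.

```python
def dilated_simplex_count(t):
    """Number of lattice points in the t-th dilation of the standard
    2-simplex conv{(0,0), (1,0), (0,1)}: points with x, y >= 0 and x + y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t + 1) if x + y <= t)

# The counts match the Ehrhart polynomial L(t) = (t + 1)(t + 2) / 2.
for t in range(6):
    assert dilated_simplex_count(t) == (t + 1) * (t + 2) // 2
print([dilated_simplex_count(t) for t in range(6)])   # [1, 3, 6, 10, 15, 21]
```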

10.
We consider the simultaneous linear minimax estimation problem in linear models with ellipsoidal constraints imposed on an unknown parameter. Using convex analysis, we derive necessary and sufficient optimality conditions for a matrix to define the linear minimax estimator. For certain regions of the set of characteristics of linear models and constraints, we exploit these optimality conditions and get explicit formulae for linear minimax estimators.

11.
By means of second-order asymptotic approximation, the paper clarifies the relationship between the Fisher information of first-order asymptotically efficient estimators and their decision-theoretic performance. It shows that if the estimators are modified so that they have the same asymptotic bias, the information amount can be connected with the risk based on convex loss functions in such a way that the greater information loss of an estimator implies its greater risk. The information loss of the maximum likelihood estimator is shown to be minimal in a general set-up. A multinomial model is used for illustration.

12.
Xu Shiying, 《数学杂志》 (Journal of Mathematics), 1996, 16(3): 321-328
This paper first points out an error in reference [1] and gives an example showing that the best approximation from a weakly quasi-convex set need not possess generalized strong uniqueness. It then discusses the strong uniqueness of two kinds of best simultaneous approximation and proves that, when the space is uniformly convex and the approximating set is a simultaneous sun, the best simultaneous approximation has generalized strong uniqueness.

13.
In this paper, we consider a reverse convex programming problem constrained by a convex set and a reverse convex set, which is defined by the complement of the interior of a compact convex set X. We propose an inner approximation method to solve the problem in the case where X is not necessarily a polytope. The algorithm utilizes an inner approximation of X by a sequence of polytopes to generate relaxed problems. It is shown that every accumulation point of the sequence of optimal solutions of the relaxed problems is an optimal solution of the original problem.

14.
In this article, a branch-and-bound outer approximation algorithm is presented for globally solving a sum-of-ratios fractional programming problem. To solve this problem, the algorithm instead solves an equivalent problem that involves minimizing an indefinite quadratic function over a nonempty, compact convex set. This problem is globally solved by a branch-and-bound outer approximation approach that can create several closed-form linear inequality cuts per iteration. In contrast to pure outer approximation techniques, the algorithm does not require computing the new vertices that are created as these cuts are added. Computationally, the main work of the algorithm involves solving a sequence of convex programming problems whose feasible regions are identical to one another except for certain linear constraints. As a result, to solve these problems, an optimal solution to one problem can potentially be used to good effect as a starting solution for the next problem.

15.
In productivity and efficiency analysis, the technical efficiency of a production unit is measured through its distance to the efficient frontier of the production set. The most familiar non-parametric methods use Farrell–Debreu, Shephard, or hyperbolic radial measures. These approaches require that inputs and outputs be non-negative, which can be problematic when using financial data. Recently, Chambers et al. (1998) introduced directional distance functions, which can be viewed as additive (rather than multiplicative) measures of efficiency. Directional distance functions are not restricted to non-negative input and output quantities; in addition, the traditional input- and output-oriented measures are nested as special cases of directional distance functions. Consequently, directional distances provide greater flexibility. However, until now, only free disposal hull (FDH) estimators of directional distances (and their conditional and robust extensions) have had known statistical properties (Simar and Vanhems, 2012). This paper develops the statistical properties of directional distance estimators in the case where the production set is assumed convex. We first establish that the directional Data Envelopment Analysis (DEA) estimators share the known properties of the traditional radial DEA estimators. We then use these properties to develop consistent bootstrap procedures for statistical inference about directional distance, estimation of confidence intervals, and bias correction. The methods are illustrated in some empirical examples.
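A small sketch of the directional distance DEA estimator under variable returns to scale, computed by linear programming with scipy; the toy data, the direction vector, and the VRS convention are illustrative, and the bootstrap inference developed in the paper is not shown.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx, gy):
    """DEA estimate of the directional distance of unit o under variable
    returns to scale: maximise beta such that some convex combination of the
    observed units uses at most x_o - beta*gx inputs and produces at least
    y_o + beta*gy outputs. beta = 0 means the unit lies on the estimated frontier."""
    n = X.shape[0]
    # Decision variables: [beta, lambda_1, ..., lambda_n]; maximise beta.
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_in = np.hstack([gx.reshape(-1, 1), X.T])     # sum_j lam_j x_ij + beta*gx_i <= x_oi
    A_out = np.hstack([gy.reshape(-1, 1), -Y.T])   # -sum_j lam_j y_rj + beta*gy_r <= -y_or
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[o], -Y[o]])
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)   # VRS: sum lam_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

X = np.array([[2.0], [4.0], [6.0]])    # one input per unit
Y = np.array([[1.0], [2.0], [3.5]])    # one output per unit
print(directional_distance(X, Y, o=1, gx=np.array([1.0]), gy=np.array([1.0])))  # ~0.15
```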

16.
Recently, we proposed variants as a statistical model for treating ambiguity. If data are extracted from an object by a machine, then the machine might not be able to give a unique safe answer due to ambiguity about the correct interpretation of the object. On the other hand, the machine is often able to produce a finite number of alternative feature sets (of the same object) that contain the desired one. We call these feature sets variants of the object. Data sets that contain variants may be analyzed by means of statistical methods, and all chapters of multivariate analysis can be seen in the light of variants. In this communication, we focus on point estimation in the presence of variants and outliers. Besides robust parameter estimation, this task also requires selecting the regular objects and their valid feature sets (regular variants). We determine the mixed MAP-ML estimator for a model with spurious variants and outliers, as well as estimators based on the integrated likelihood. We also prove asymptotic results which show that the estimators are nearly consistent. The problem of variant selection turns out to be computationally hard; therefore, we also design algorithms for efficient approximation. We finally demonstrate their efficacy with a simulated data set and a real data set from genetics.

17.
This article concerns the computational problem of counting the lattice points inside convex polytopes, when each point must be counted with a weight associated to it. We describe an efficient algorithm for computing the highest degree coefficients of the weighted Ehrhart quasi-polynomial for a rational simple polytope in varying dimension, when the weights of the lattice points are given by a polynomial function h. Our technique is based on a refinement of an algorithm of A. Barvinok in the unweighted case (i.e., h≡1). In contrast to Barvinok’s method, our method is local, obtains an approximation on the level of generating functions, handles the general weighted case, and provides the coefficients in closed form as step polynomials of the dilation. To demonstrate the practicality of our approach, we report on computational experiments which show that even our simple implementation can compete with state-of-the-art software.

18.
We study the existence problem of a zero point of a function defined on a finite set of elements of the integer lattice Z^n of the n-dimensional Euclidean space R^n. It is assumed that the set is integrally convex, which implies that the convex hull of the set can be subdivided into simplices such that every vertex is an element of Z^n and each simplex of the triangulation lies in an n-dimensional cube of size one. With respect to this triangulation we assume that the function satisfies some property that replaces continuity. Under this property and some boundary condition, the function has a zero point. To prove this we use a simplicial algorithm that terminates with a zero point within a finite number of iterations. The standard technique of applying a fixed point theorem to a piecewise linear approximation cannot be applied, because the ‘continuity property’ is too weak to assure that a zero point of the piecewise linear approximation induces a zero point of the function itself. We apply the main existence result to prove the existence of a pure Cournot-Nash equilibrium in a Cournot oligopoly model. We further obtain a discrete analogue of the well-known Borsuk-Ulam theorem and a theorem for the existence of a solution for the discrete nonlinear complementarity problem.

19.
In this paper, we propose an exponential ratio-type estimator of the finite population mean when the auxiliary information is qualitative in nature. Under the simple random sampling without replacement scheme, expressions for the bias and the mean square error of the proposed estimator have been obtained up to first order of approximation. To show that the proposed estimator is more efficient than the existing estimators, we compare their mean square errors. Both theoretically and numerically, we find that the proposed estimator is always more efficient than its competitors, including all the estimators of Abd-Elfattah et al. [1] [A.M. Abd-Elfattah, E.A. El-Sherpieny, S.M. Mohamed, and O.F. Abdou. Improvement in estimating the population mean in simple random sampling using information on auxiliary attribute. Applied Mathematics and Computation, 215 (2010), 4198-4202].
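For orientation, a Bahl-Tuteja-style exponential ratio estimator based on a binary auxiliary attribute is sketched below: ybar * exp((P − p)/(P + p)), with P the known population proportion of the attribute and p the sample proportion. This generic form is only an illustration and is not necessarily the exact estimator proposed in the paper.

```python
import numpy as np

def exp_ratio_estimate(y_sample, phi_sample, P):
    """Exponential ratio-type estimate of the population mean using a binary
    auxiliary attribute: ybar * exp((P - p) / (P + p)), where P is the known
    population proportion of the attribute and p the sample proportion."""
    ybar = np.mean(y_sample)
    p = np.mean(phi_sample)
    return ybar * np.exp((P - p) / (P + p))

rng = np.random.default_rng(0)
phi = rng.binomial(1, 0.4, size=100)            # auxiliary attribute in the sample
y = 10 + 5 * phi + rng.normal(0, 1, size=100)   # study variable correlated with it
print(exp_ratio_estimate(y, phi, P=0.4))
```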

20.
This paper develops a discrete reliability growth (RG) model for an inverse sampling scheme, e.g., for destructive tests of expensive single-shot operations systems where design changes are made only and immediately after the occurrence of failures. For q_i, the probability of failure at the i-th stage, a specific parametric form is chosen which conforms to the concept of the Duane (1964, IEEE Trans. Aerospace Electron. Systems, 2, 563-566) learning curve in the continuous-time RG setting. A generalized linear model approach is pursued which efficiently handles a certain non-standard situation arising in the study of large-sample properties of the maximum likelihood estimators (MLEs) of the parameters. Alternative closed-form estimators of the model parameters are proposed and compared with the MLEs through asymptotic efficiency as well as small and moderate sample size simulation studies.
