Similar Articles
A total of 20 similar articles were found (search time: 15 milliseconds).
1.
Determining the sample size is a key issue in sampling design. The traditional approach, which relies on information about the population variance and survey costs, can produce two undesirable outcomes: a sample that is too small to guarantee the desired estimation precision, or a sample that is too large, wasting survey funds. The real-time data processing and management capabilities of computer-assisted telephone interviewing (CATI) lay the foundation for applying sequential sampling. Using computations on the sample drawn in earlier stages, the additional sample size still required can be specified, so that the final sample size is a close approximation to the truly desired one. Compared with the traditional approach, this better ensures that the pre-specified precision requirement is met at minimum cost.
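A minimal sketch of the sequential idea described above (not the exact CATI procedure): after each interviewing batch, the required total sample size is re-estimated from the variance of the responses collected so far, and interviewing stops once the current sample meets the re-estimated requirement. The normal-approximation formula n ≥ (z·s/e)² and all parameter values below are illustrative assumptions.

```python
import numpy as np

def sequential_sample_size(draw_batch, margin_of_error, z=1.96,
                           batch_size=50, max_n=10_000):
    """Draw batches until n >= (z * s / e)^2, re-estimated after each batch."""
    data = list(draw_batch(batch_size))        # pilot batch
    while len(data) < max_n:
        s = np.std(data, ddof=1)               # variance estimate so far
        n_required = int(np.ceil((z * s / margin_of_error) ** 2))
        if len(data) >= n_required:
            break
        data.extend(draw_batch(min(batch_size, n_required - len(data))))
    return np.mean(data), len(data)

# hypothetical survey variable: mean 100, sd 15; target half-width 1.0
rng = np.random.default_rng(0)
estimate, n = sequential_sample_size(lambda k: rng.normal(100, 15, k),
                                     margin_of_error=1.0)
print(f"estimate {estimate:.2f} from n = {n} interviews")
```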

2.
A smoothing method for solving stochastic linear complementarity problems is proposed. The expected residual minimization reformulation of the problem is considered and approximated by the sample average approximation (SAA). The proposed method sequentially solves a sequence of smoothing problems, each defined with its own sample average approximation. A nonmonotone line search with a variant of the Barzilai–Borwein (BB) gradient direction is used for solving each smoothing problem. The BB search direction is efficient and cheap to compute, making it particularly suitable for a nonmonotone line search procedure. The variable sample size scheme allows the sample size to vary across iterations, and the method tends to use smaller sample sizes far from the solution. The key point of this strategy is a good balance between the variable sample size scheme, the smoothing sequence, and nonmonotonicity. Eventually, the maximal sample size is used and the SAA problem is solved. The numerical results presented indicate that the proposed strategy reduces the overall computational cost.
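The BB direction and nonmonotone line search are generic ingredients; a self-contained sketch on a smooth test function (the smoothing/SAA machinery of the paper is not reproduced here) might look like this, using the BB1 step length and a max-of-recent-values Armijo rule:

```python
import numpy as np

def bb_nonmonotone(f, grad, x, iters=200, memory=10, eta=1e-4, tol=1e-8):
    g = grad(x)
    alpha = 1.0
    history = [f(x)]
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -alpha * g                      # BB-scaled steepest descent
        t, f_ref = 1.0, max(history[-memory:])
        while f(x + t * d) > f_ref + eta * t * (g @ d):
            t *= 0.5                        # backtrack against recent maximum
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0  # BB1 step length
        x, g = x_new, g_new
        history.append(f(x))
    return x

# ill-conditioned quadratic test problem
A = np.diag([1.0, 10.0, 100.0])
x_opt = bb_nonmonotone(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                       np.array([1.0, 1.0, 1.0]))
print(x_opt)
```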

3.

We study asymptotic properties of Bayesian multiple testing procedures and provide sufficient conditions for strong consistency under a general dependence structure. We also consider a novel Bayesian multiple testing procedure and associated error measures that coherently account for the dependence structure present in the model. We advocate posterior versions of FDR and FNR as the appropriate error rates and show that their asymptotic convergence rates are directly associated with the Kullback–Leibler divergence from the true model. The theory holds even when the postulated class of models is misspecified. We illustrate our results in a variable selection problem with autoregressive response variables and compare our procedure with some existing methods through simulation studies. The superior performance of the new procedure compared to the others indicates that proper exploitation of the dependence structure by multiple testing methods is indeed important. Moreover, we obtain encouraging results on a maize dataset, where we select influential marker variables.

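As a concrete illustration of a posterior-FDR error rate (a standard construction, not necessarily the exact procedure above): given posterior probabilities that each null hypothesis is true, reject the hypotheses with the smallest posterior null probabilities while their running average, the posterior FDR, stays below the target level.

```python
import numpy as np

def posterior_fdr_rule(post_null_prob, q=0.05):
    """Reject the k nulls with smallest posterior probability of being true,
    with k the largest prefix whose mean posterior null probability <= q."""
    order = np.argsort(post_null_prob)
    running = np.cumsum(post_null_prob[order]) / np.arange(1, len(order) + 1)
    k = int(np.sum(running <= q))              # posterior FDR of the prefix
    reject = np.zeros(len(post_null_prob), dtype=bool)
    reject[order[:k]] = True
    return reject

probs = np.array([0.01, 0.02, 0.30, 0.04, 0.90])
print(posterior_fdr_rule(probs, q=0.05))       # rejects hypotheses 1, 2 and 4
```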

4.
We present a new algorithm, iterative estimation maximization (IEM), for stochastic linear programs with conditional value-at-risk constraints. IEM iteratively constructs a sequence of linear optimization problems, and solves them sequentially to find the optimal solution. The size of the problem that IEM solves in each iteration is unaffected by the number of random sample points, which makes it extremely efficient for real-world, large-scale problems. We prove the convergence of IEM, and give a lower bound on the number of sample points required to probabilistically bound the solution error. We also present computational performance on large problem instances and a financial portfolio optimization example using an S&P 500 data set.
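The building block of such CVaR-constrained programs is the sample-based CVaR itself; a sketch via the Rockafellar–Uryasev representation (this is standard background, not the IEM algorithm):

```python
import numpy as np

def cvar(losses, beta=0.95):
    """CVaR_beta = min_t t + E[(L - t)^+] / (1 - beta); the minimizer t* is VaR."""
    var = np.quantile(losses, beta)               # t* = beta-quantile of losses
    return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - beta)

rng = np.random.default_rng(1)
portfolio_losses = rng.normal(0.0, 1.0, 100_000)  # hypothetical loss sample
print(f"95% CVaR ~ {cvar(portfolio_losses):.3f}  (normal theory: 2.063)")
```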

5.
A considerable amount of research on ways of testing creative ability in mathematics is now being published, and it takes two forms. On the one hand, there are fundamental researches that try to provide diagnostic test items on mathematical creativity and then use these to find the relationship between this variable and a number of others. On the other hand, there are attempts to create assessment items that try to measure the end-product of a modern discovery-based mathematics curriculum. This paper examines the criteria and form of some of these items.

6.
We consider in this paper the use of Monte Carlo simulation to numerically approximate the asymptotic variance of an estimator of a population parameter. When the variance of an estimator does not exist in finite samples, the variance of its limiting distribution is often used for inference. However, in this case, the numerical approximation of asymptotic variances is less straightforward, unless their analytical derivation is mathematically tractable. The method proposed does not assume the existence of the variance in finite samples. If the finite sample variance does exist, it provides a more efficient approximation than one based on the convergence of finite sample variances. Furthermore, the results obtained are potentially useful in evaluating and comparing different estimation procedures based on their asymptotic variances for various types of distributions. The method is also applicable in surveys where the sample size required to achieve a fixed margin of error is based on the asymptotic variance of the estimator. The proposed method can be routinely applied and alleviates the complex theoretical treatment usually associated with the analytical derivation of the asymptotic variance of an estimator, which is often managed on a case-by-case basis. This is particularly appealing in view of the advance of modern computing technology. The proposed numerical approximation is based on the variances of a certain truncated statistic for two selected sample sizes, using a Richardson-extrapolation-type formulation. The variances of the truncated statistic for the two sample sizes are computed by Monte Carlo simulation, and theory for optimizing the computing resources is also given. The accuracy of the proposed method is numerically demonstrated in a classical errors-in-variables model where analytical results are available for comparison.
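A sketch of the two-sample-size extrapolation idea: assume the expansion n·Var(T_n) ≈ V + c/n, estimate Var(T_n) by Monte Carlo at two sizes n1 < n2, and eliminate the c/n term, since V = (n2·a2 − n1·a1)/(n2 − n1) with a_i = n_i·Var(T_{n_i}). The estimator (a plain sample mean) and the expansion order are illustrative assumptions, not the paper's truncated statistic.

```python
import numpy as np

def mc_variance(statistic, sampler, n, reps=5_000, seed=0):
    rng = np.random.default_rng(seed)
    return np.var([statistic(sampler(n, rng)) for _ in range(reps)], ddof=1)

def asymptotic_variance(statistic, sampler, n1=50, n2=100):
    a1 = n1 * mc_variance(statistic, sampler, n1)
    a2 = n2 * mc_variance(statistic, sampler, n2)
    return (n2 * a2 - n1 * a1) / (n2 - n1)   # Richardson step cancels c/n

sampler = lambda n, rng: rng.exponential(2.0, n)   # mean of Exp(2): n*Var = 4
V = asymptotic_variance(np.mean, sampler)
print(f"extrapolated n*Var ~ {V:.3f}  (true asymptotic variance: 4.0)")
```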

7.
Concerning non-iterative co-simulation, stepwise extrapolation of coupling signals is required to solve an overall system of interconnected subsystems. Each extrapolation is a kind of estimation and is directly associated with an estimation error. The introduced disturbance depends significantly on the macro-step size, i.e. the coupling step size, and influences the entire system behaviour. In addition, for synchronization purposes, sampling of the coupling signals can cause aliasing. Instead of analysing the coupling effects in the time domain, as is commonly practised, we concentrate on a model-based approach to gain more insight into the coupling process. In this work, we consider commonly used polynomial extrapolation techniques and analyse them in the frequency domain. Based on this system-oriented view of the coupling process, a relation between the coupling signals and the macro-step size becomes available. In accordance with the dynamics of the interconnected subsystems, the model-based relation is used to select the most critical parameter, i.e. the macro-step size. Besides a 'rule of thumb' for meaningful step-size selection, a co-simulation benchmark example describing a two-degree-of-freedom (2-DOF) mechanical system is used to demonstrate the advantages of modelling and the efficiency of the proposed method.
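The polynomial schemes in question are simply hold/extrapolation rules applied once per macro step; a sketch of zero- and first-order variants (a zero-order hold behaves roughly like an added dead time of half the macro-step size H, which is what makes H the critical tuning parameter):

```python
import numpy as np

def extrapolate(samples, H, order=1):
    """Predict u(t_k + H) from past coupling samples spaced by macro step H."""
    if order == 0:
        return samples[-1]                       # ZOH: hold last value
    slope = (samples[-1] - samples[-2]) / H      # FOH: linear continuation
    return samples[-1] + slope * H

t = np.arange(0.0, 1.0, 0.1)                     # macro grid, H = 0.1
u = np.sin(2 * np.pi * t)                        # hypothetical coupling signal
print("ZOH:", extrapolate(u, 0.1, 0), "FOH:", extrapolate(u, 0.1, 1),
      "true:", np.sin(2 * np.pi * 1.0))
```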

8.
Based on OLS residuals, this paper applies the bootstrap method to the LM-Error test for spatial error correlation, and studies and compares the size distortion of the bootstrap LM-Error test with that of the asymptotic test, taking into account the number of bootstrap replications, the spatial connectivity structure, and the sample size. Extensive Monte Carlo experiments show that when the model errors do not satisfy the assumption of an independent normal distribution, the asymptotic LM-Error test for spatial error correlation suffers from substantial size distortion, which the bootstrap method can reduce considerably; indeed, whether or not the model errors satisfy the independent normality assumption, the bootstrap method effectively reduces the size distortion of the asymptotic LM-Error test.
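A sketch of the residual-bootstrap correction idea: resample OLS residuals under the null, recompute the LM-Error statistic each time, and use the bootstrap distribution instead of the chi-squared approximation. The weight matrix W below is a hypothetical row-standardised "neighbour to the right" structure; the paper's replication counts and connectivity designs are not reproduced.

```python
import numpy as np

def lm_error_stat(e, W):
    """Standard LM-Error statistic from OLS residuals e and spatial weights W."""
    n = len(e)
    T = np.trace(W.T @ W + W @ W)
    return (e @ W @ e / (e @ e / n)) ** 2 / T

def bootstrap_pvalue(y, X, W, B=499, rng=None):
    rng = rng or np.random.default_rng(0)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    observed = lm_error_stat(e, W)
    exceed = 0
    for _ in range(B):
        y_b = X @ beta + rng.choice(e, size=len(e), replace=True)  # resample residuals
        e_b = y_b - X @ np.linalg.lstsq(X, y_b, rcond=None)[0]
        exceed += lm_error_stat(e_b, W) >= observed
    return (1 + exceed) / (1 + B)

rng = np.random.default_rng(5)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)  # non-normal errors
W = np.diag(np.ones(n - 1), k=1); W[-1, 0] = 1.0
print(f"bootstrap p-value: {bootstrap_pvalue(y, X, W):.3f}")
```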

9.
Recently, interest in combinatorial auctions has extended to include trade in multiple units of heterogeneous items. Combinatorial bidding is complex, and iterative auctions are used to allow bidders to sequentially express their preferences with the aid of auction market information provided in the form of price feedback. There are different competing designs for the provision of item price feedback; however, most of these have not been thoroughly studied for multiple-unit combinatorial auctions. This paper addresses this gap by evaluating several feedback schemes, or algorithms, in the context of multiple-unit auctions. We numerically evaluate these algorithms under different scenarios that vary in bidder package selection strategies and in the degree of competition. We observe that auction outcomes are best when bidders use a naïve bidding strategy and competition is strong. Performance deteriorates significantly when bidders strategically select packages to maximize their profit. Finally, the performance of some algorithms is more sensitive to strategic bidding than that of others.

10.
The quickest path problem has been proposed to cope with flow problems through networks whose arcs are characterized by both travel times and flow-rate constraints. Basically, it consists in finding a path in a network to transmit a given amount of items from a source node to a sink node in as little time as possible, when the transmission time depends on both the traversal times of the arcs and the rates of flow along the arcs. This paper focuses on the solution procedure when the transmission of items must be partitioned into batches with size limits. For this problem we determine how many batches must be made and what their sizes should be.
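For reference, the underlying single-batch quickest-path computation can be sketched as follows: for each distinct arc capacity c, restrict the network to arcs with capacity at least c, find the shortest travel time, and minimise travel time plus σ/c. The batch-partitioning analysis of the paper builds on this primitive and is not reproduced here.

```python
import heapq

def dijkstra(adj, s, t):
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def quickest_path_time(arcs, s, t, sigma):
    """arcs: list of (u, v, travel_time, capacity); sigma: amount to send."""
    best = float("inf")
    for c in sorted({cap for *_, cap in arcs}):
        adj = {}
        for u, v, w, cap in arcs:
            if cap >= c:                     # keep only arcs supporting rate c
                adj.setdefault(u, []).append((v, w))
        best = min(best, dijkstra(adj, s, t) + sigma / c)
    return best

arcs = [("s", "a", 1, 5), ("a", "t", 1, 5), ("s", "t", 10, 20)]
print(quickest_path_time(arcs, "s", "t", sigma=100))   # slower, wider arc wins
```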

11.
In this paper new algorithms for step size prediction in variable step size Adams methods are proposed. It is shown that, when large step size changes are necessary for an efficient integration, the new algorithms provide a prediction that follows the local error estimation more closely than the standard step size prediction. The new predictors can be considered an easily computable alternative to the step size predictors given by Willé [9] in terms of differential equations.
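For context, the elementary controller that such predictors refine scales the step so the predicted local error matches the tolerance; the safety factor and growth bounds below are common practical choices, not values from the paper.

```python
def predict_step(h, err, tol, p, safety=0.9, grow=2.0, shrink=0.1):
    """Standard step-size controller for a method of order p: scale h so the
    local error estimate err is driven towards the tolerance tol."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))
    return h * min(grow, max(shrink, factor))

print(predict_step(h=0.01, err=1e-7, tol=1e-6, p=4))   # grows the step ~1.4x
```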

12.
In linear regression analysis, outliers often have a large influence on the model/variable selection process. The aim of this study is to select the subsets of independent variables that explain the dependent variable in the presence of multicollinearity, outliers, and possible departures from the normality assumption on the error distribution in robust regression analysis. To overcome this combined problem of multicollinearity and outliers, we suggest using a robust selection criterion with Liu and Liu-type M (LM) estimators.

13.
The normal approximation of the confidence level of the standard confidence intervals leaves an error of order O(1/n) (and not only O(n^{-1/2})). We use the first-order term in the error to obtain simple lower bounds for the sample size.
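The zeroth-order calculation being corrected looks like this (a sketch; the paper's O(1/n) correction term is not reproduced): the smallest n making a two-sided normal interval of half-width eps achieve nominal level 1 − alpha.

```python
from scipy.stats import norm

def n_normal_approx(sigma, eps, alpha=0.05):
    """Baseline sample size from the normal approximation: n >= (z * sigma / eps)^2."""
    z = norm.ppf(1 - alpha / 2)
    return (z * sigma / eps) ** 2

print(n_normal_approx(sigma=15, eps=3))        # ~96.04, so take n >= 97
```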

14.
This paper establishes asymptotic lower bounds which specify, in a variety of contexts, how well (in terms of relative rate of convergence) one may select the bandwidth of a kernel density estimator. These results provide important new insights concerning how the bandwidth selection problem should be considered. In particular, it is shown that if the error criterion is Integrated Squared Error (ISE) then, even under very strong assumptions on the underlying density, the relative error of the selected bandwidth cannot be reduced below order n^{-1/10} (as the sample size grows). This very large error indicates that any technique which aims specifically to minimize ISE will be subject to serious practical difficulties arising from sampling fluctuations. Cross-validation exhibits this very slow convergence rate, and does suffer from unacceptably large sampling variation. On the other hand, if the error criterion is Mean Integrated Squared Error (MISE) then the relative error of bandwidth selection can be reduced to order n^{-1/2}, when enough smoothness is assumed. Therefore bandwidth selection techniques which aim to minimize MISE can be much more stable, and less sensitive to small sampling fluctuations, than those which try to minimize ISE. We feel this indicates that performance in minimizing MISE, rather than ISE, should become the benchmark for measuring the performance of bandwidth selection methods. (Research partially supported by National Science Foundation Grants DMS-8701201 and DMS-8902973. Research of the first author was done while on leave from the Australian National University.)
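To make the ISE/MISE contrast concrete, here is a sketch comparing least-squares cross-validation (which targets ISE and inherits its sampling noise) with the normal-reference rule (a MISE-based plug-in); the Gaussian kernel and the grid search are illustrative choices.

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian kernel."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    K2 = np.exp(-0.25 * d ** 2) / np.sqrt(4 * np.pi)   # K convolved with itself
    term1 = K2.sum() / (n ** 2 * h)                    # integral of fhat^2
    term2 = 2 * (K(d).sum() - n * K(0)) / (n * (n - 1) * h)  # leave-one-out part
    return term1 - term2

rng = np.random.default_rng(2)
x = rng.normal(size=300)
grid = np.linspace(0.1, 1.0, 50)
h_lscv = grid[np.argmin([lscv_score(h, x) for h in grid])]
h_nr = 1.06 * x.std() * len(x) ** -0.2                 # normal-reference (MISE-based)
print(f"LSCV h = {h_lscv:.3f}, normal-reference h = {h_nr:.3f}")
```

Rerunning this with different seeds shows the LSCV bandwidth fluctuating far more than the normal-reference one, which is exactly the instability the lower bounds above formalize.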

15.
Applied Mathematical Modelling, 2014, 38(15–16): 4049–4061.
Many products such as fruits, vegetables, pharmaceuticals, and volatile liquids not only deteriorate continuously due to evaporation, obsolescence, spoilage, etc., but also have expiration dates (i.e., a deteriorating item has a maximum lifetime). Although numerous researchers have studied economic order quantity (EOQ) models for deteriorating items, few have taken the maximum lifetime of a deteriorating item into consideration. In addition, a supplier frequently offers her/his retailers a permissible delay in payments in order to stimulate sales and reduce inventory. No interest is charged to a retailer if the purchase amount is paid to the supplier within the credit period; otherwise interest is charged. In this paper, we propose an EOQ model for a retailer when: (1) her/his product deteriorates continuously and has a maximum lifetime, and (2) her/his supplier offers a permissible delay in payments. We then characterize the retailer's optimal replenishment cycle time. Furthermore, we discuss a special case for non-deteriorating items. Finally, we run several numerical examples to illustrate the problem and provide some managerial insights.
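A generic sketch of how such a replenishment cycle is optimized numerically (not the paper's model: the maximum-lifetime deterioration rate and the credit-period interest terms are omitted). With constant demand D and exponential deterioration rate theta, the order quantity and holding integral have closed forms, and the cost rate is minimized by a one-dimensional search; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

D, theta, K, c, h = 100.0, 0.1, 50.0, 2.0, 0.5   # demand, decay, setup, unit, holding

def cost_per_time(T):
    # I'(t) = -theta*I - D with I(T) = 0 gives order quantity
    Q = D * (np.exp(theta * T) - 1) / theta
    holding = h * (Q - D * T) / theta            # = h * integral of I(t) over cycle
    return (K + c * Q + holding) / T

res = minimize_scalar(cost_per_time, bounds=(0.01, 5.0), method="bounded")
print(f"optimal cycle T* = {res.x:.3f}, cost rate = {res.fun:.2f}")
```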

16.
This article develops a convex polyhedral cone-based preference modeling framework for decision making with multiple criteria, which extends the classical notion of Pareto optimality and accounts for the relative importance of the criteria. The decision maker's perception of relative importance is quantified by an allowable tradeoff between two objectives, representing the maximum allowable amount of decay of a less important objective per unit of improvement of a more important objective. Two cone-based models of relative importance are developed. In the first model, one criterion is designated as less important while all the others are more important. In the second model, more than one criterion may be classified as less important while all the others are considered more important. A complete algebraic characterization of the models is derived, and the relationship between them and the classical Pareto preference is examined. Their relevance to decision making is discussed.

17.
A clustering method is used to classify corporate financial conditions into five grades: healthy, good, average, early-warning, and crisis. Compared with earlier financial distress warning models that divide the sample firms into only two classes (ST and non-ST), the five-class model is more precise, more reasonable, and closer to practice. Based on indicator correlation and indicator importance, 33 financial indicators are reduced to 9 that adequately reflect a firm's financial condition. A decision tree is then built from the 9 reduced indicators and the five financial grades, yielding a more reasonable indicator system and grading. In growing the tree, the variable-precision weighted mean roughness from rough set theory is used to select test attributes: at each split, the attribute with the smallest variable-precision weighted mean roughness is chosen as the branching node. This improves the decision tree's robustness to noise, lowers its complexity, and effectively improves classification performance. An empirical study shows that applying this approach to financial distress warning improves classification accuracy.
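A sketch of the rough-set quantity behind such a split rule: for a candidate attribute, partition the samples into equivalence classes and measure how roughly each decision grade is approximated at inclusion threshold beta, weighting by grade frequency. The paper's exact weighting and precision settings are not reproduced; this is one common variable-precision formulation.

```python
from collections import defaultdict

def weighted_mean_roughness(attr_values, decisions, beta=0.8):
    """Smaller is better: the attribute partitions the data so that decision
    grades are well approximated at variable precision beta."""
    blocks = defaultdict(list)
    for i, v in enumerate(attr_values):
        blocks[v].append(i)
    total, score = len(decisions), 0.0
    for d in set(decisions):
        D = {i for i, y in enumerate(decisions) if y == d}
        lower = sum(len(b) for b in blocks.values()
                    if len(D.intersection(b)) / len(b) >= beta)
        upper = sum(len(b) for b in blocks.values()
                    if len(D.intersection(b)) / len(b) > 1 - beta)
        rough = 1.0 - (lower / upper if upper else 0.0)
        score += (len(D) / total) * rough      # weight by grade frequency
    return score

attr = ["hi", "hi", "lo", "lo", "lo", "hi"]            # hypothetical indicator
dec  = ["warn", "warn", "ok", "ok", "warn", "warn"]    # hypothetical grades
print(weighted_mean_roughness(attr, dec))
```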

18.
This paper proposes a novel single-loop procedure for time-variant reliability analysis based on a Kriging model. A new strategy is presented to decouple the double-loop Kriging model for time-variant reliability analysis, in which the extreme value response in the double-loop procedure is replaced by the best value among the currently sampled points, avoiding the inner optimization loop. Consequently, the extreme value response surface for time-variant reliability analysis can be established directly through a single-loop Kriging surrogate model. To further improve the accuracy of the proposed Kriging model, two methods are provided to adaptively choose a new sample point for updating the model. One method applies two commonly used learning functions to select the new sample point that resides as close to the extreme value response surface as possible, and the other applies a new learning function to select the new point. Correspondingly, different stopping criteria are also provided. It is worth noting that the proposed single-loop Kriging model for time-variant reliability analysis addresses a single time-variant performance function. To verify the proposed method, it is applied to four examples, two of which involve a random process and two of which do not. Other popular methods for time-variant reliability analysis, including the existing single-loop Kriging model, are also used for a comparative analysis, and their results confirm the effectiveness of the proposed method.
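A sketch of the adaptive-sampling step common to Kriging-based reliability methods: fit a Gaussian process to the sampled performance values and pick the next point by the classical U learning function (smallest |mu|/sigma, i.e. the most doubtful sign), stopping when min U >= 2. The performance function, design domain, and this particular learning function are illustrative assumptions; the paper's own learning functions and stopping rules are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                                   # hypothetical performance function
    return x[:, 0] ** 2 + x[:, 1] - 3.0     # failure when g < 0

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (8, 2))              # initial design
candidates = rng.uniform(-2, 2, (500, 2))
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, g(X))
    mu, sigma = gp.predict(candidates, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)
    if U.min() >= 2.0:                      # common stopping criterion
        break
    X = np.vstack([X, candidates[np.argmin(U)]])   # enrich near the limit state
pf = np.mean(gp.predict(candidates) < 0)    # crude failure-probability estimate
print(f"added {len(X) - 8} points, Pf ~ {pf:.3f}")
```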

19.
This paper provides simulation comparisons of the performance of 11 possible prediction intervals for the geometric mean of a Pareto distribution with parameters (α, β). Six different procedures were used to obtain these intervals, namely: the true interval, the pivotal interval, the maximum likelihood estimation interval, the central limit theorem interval, the variance stabilizing interval, and a mixture of the above intervals. Some of these intervals are valid if the observed sample size n is large; others are valid if both n and the future sample size m are large. Some of these intervals require knowledge of α or β, while others do not. The simulation validation and efficiency study shows that the intervals depending on the MLEs are the best. The second best intervals are those obtained through pivotal methods or the variance stabilizing transformation. The third group comprises the intervals which depend on the central limit theorem when λ is known. Two intervals proved to be unacceptable under any criterion.
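A crude MLE-based sketch in the spirit of one interval family above: with X ~ Pareto(α, β), log(X/β) is Exp(α), so the log geometric mean of a future sample of size m is approximately normal with mean log β + 1/α and variance 1/(mα²). Estimation error in the MLEs is ignored here, which the paper's intervals would account for.

```python
import numpy as np
from scipy.stats import norm

def pareto_gm_interval(x, m, level=0.95):
    """Approximate prediction interval for the geometric mean of a future
    Pareto sample of size m, plugging in MLEs of (alpha, beta)."""
    beta_hat = x.min()                                  # MLE of scale
    alpha_hat = len(x) / np.log(x / beta_hat).sum()     # MLE of shape
    center = np.log(beta_hat) + 1.0 / alpha_hat         # E[log X]
    half = norm.ppf(0.5 + level / 2) / (alpha_hat * np.sqrt(m))
    return np.exp(center - half), np.exp(center + half)

rng = np.random.default_rng(4)
x = 2.0 * (1 + rng.pareto(3.0, size=200))   # Pareto with alpha=3, beta=2
print(pareto_gm_interval(x, m=50))
```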

20.
The classic economic production quantity (EPQ) model assumes a continuous inventory-issuing policy for satisfying product demand and perfect production of all items produced. However, in a real-life vendor–buyer integrated system, a multi-delivery policy is often used in lieu of a continuous issuing policy, and the generation of defective items during a production run is inevitable. This study addresses these issues by incorporating multiple deliveries of the finished batch, the customer's inventory-holding cost, and the manufacturer's quality assurance cost into an EPQ model with imperfect reworking of random defective items. Mathematical modelling and analysis are employed. Convexity of the long-run expected cost function is proved by use of the Hessian matrix, and closed-form solutions for the optimal lot size and optimal number of deliveries are obtained. The results are demonstrated with a numerical example and a discussion of their practical usage.
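For reference, the classic EPQ baseline that this model extends (a sketch; the defect, rework, and multi-delivery cost terms from the paper are not included), with illustrative parameter values:

```python
import math

def epq(D, P, K, h):
    """Classic economic production quantity: demand rate D, production rate P
    (D < P), setup cost K, holding cost h per unit per period."""
    return math.sqrt(2 * K * D / (h * (1 - D / P)))

print(epq(D=4000, P=10000, K=450, h=2.0))   # ~1732 units per production run
```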
