Similar Documents
20 similar documents found (search time: 15 ms)
2.
In this paper, we design an attribute np control chart using multiple deferred state (MDS) sampling under the Weibull distribution, based on a time-truncated life test. The chart is constructed for monitoring variation in the mean life of the product in a manufacturing process. The optimal parameters of MDS sampling and the control limit coefficients are determined so that the in-control average run length (ARL) is as close as possible to the target ARL. The optimal parameters of MDS sampling are the sample size and the number of successive subgroups required for declaring the current state of the process. The out-of-control ARL is taken as the performance measure of the proposed chart and is reported, with the determined optimal parameters, for various shift constants. The out-of-control ARLs of the proposed chart obtained under various distributions are compared with one another, and the performance of the proposed chart is compared with that of the existing control chart designed under single sampling. In addition, the economic design of the proposed chart using a variable sampling interval scheme is discussed, and a sensitivity analysis on expected costs is carried out.
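As a rough illustration of the mechanism, the Python sketch below simulates run lengths for a time-truncated test under a Weibull lifetime, using a simplified deferred-acceptance rule in place of the paper's full MDS design; the chart parameters (n, k1, k2, m), the Weibull settings, and the 20% shift in mean life are illustrative assumptions, not optimal values.

```python
import numpy as np

rng = np.random.default_rng(42)

def fail_prob(t0, shape, scale):
    # P(item fails before truncation time t0) under Weibull(shape, scale)
    return 1.0 - np.exp(-(t0 / scale) ** shape)

def run_length(n, k1, k2, m, p, max_steps=200_000):
    """One run length of a simplified MDS-type np chart: a subgroup with
    D <= k1 failures is conforming; k1 < D <= k2 is tolerated only if the
    previous m subgroups were all conforming; D > k2 signals at once."""
    history = [True] * m
    for t in range(1, max_steps + 1):
        d = rng.binomial(n, p)
        if d > k2 or (d > k1 and not all(history)):
            return t
        history = history[1:] + [d <= k1]
    return max_steps

shape, scale0, t0 = 2.0, 1.0, 0.1
p0 = fail_prob(t0, shape, scale0)        # in-control failure probability
p1 = fail_prob(t0, shape, 0.8 * scale0)  # mean life shifted down by 20%

for label, p in [("in-control ARL", p0), ("out-of-control ARL", p1)]:
    arl = np.mean([run_length(20, 1, 3, 2, p) for _ in range(2000)])
    print(f"{label}: {arl:.1f}")
```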

3.
Percentage sampling is generally adopted when inspecting concrete members. Taking the percentage sampling used in rebound-method testing of concrete strength as an example, this paper analyzes, from the perspective of absolute and relative error limits, why drawing samples this way is unreasonable across different total numbers of members, and proposes determining the sample size by controlling a prescribed error limit for different concrete strength grades and different member totals. Theoretical and case analyses show that stratified sampling, by reasonably stratifying the members under inspection, reduces the population variance and hence the required sample size. The method is also applicable to determining the sample size in other inspection problems.
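The sample-size logic described here can be sketched in a few lines; the strength standard deviations, member counts, and error limit below are hypothetical figures, not values from the paper.

```python
import math

Z = 1.96  # ~95% confidence

def n_simple(sigma, e_abs):
    """n so the estimated mean stays within +/- e_abs: n = (z*sigma/e)^2."""
    return math.ceil((Z * sigma / e_abs) ** 2)

def n_stratified(strata, e_abs):
    """Proportional-allocation size: n = (z/e)^2 * sum(W_h * sigma_h^2),
    with strata given as (N_h, sigma_h) pairs (finite-population
    correction ignored for simplicity)."""
    N = sum(Nh for Nh, _ in strata)
    within = sum((Nh / N) * sh ** 2 for Nh, sh in strata)
    return math.ceil((Z / e_abs) ** 2 * within)

# Hypothetical rebound-test figures: overall strength s.d. 4.5 MPa, but
# only 2.0-2.5 MPa within a single strength grade; target error 1.5 MPa.
print("unstratified n:", n_simple(4.5, 1.5))                          # 35
print("stratified n:  ", n_stratified([(120, 2.0), (80, 2.5)], 1.5))  # 9
```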

4.
The design of attribute sampling inspection plans based on compressed or narrow limits for food safety applications is covered. Artificially compressed limits allow a significant reduction in the number of analytical tests to be carried out while maintaining the risks at predefined levels. The design of optimal sampling plans is discussed for two given points on the operating characteristic curve and especially for the zero acceptance number case. Compressed limit plans matching the attribute plans of the International Commission on Microbiological Specifications for Foods are also given. The case of unknown batch standard deviation is also discussed. Three‐class attribute plans with optimal positions for given microbiological limit M and good manufacturing practices limit m are derived. The proposed plans are illustrated through examples. R software codes to obtain sampling plans are also given. Copyright © 2016 John Wiley & Sons, Ltd.
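A minimal sketch of the known-sigma, zero-acceptance-number case follows; the OC points, risks, and compression amounts delta are illustrative assumptions, not the plans tabulated in the paper. It shows how compressing the limit by delta standard deviations inflates the pseudo-defect probability and shrinks the required sample size, until too much compression makes the producer's risk unattainable.

```python
import numpy as np
from scipy.stats import norm

def pseudo_defect_prob(p, delta):
    """Probability of falling below the compressed limit L + delta*sigma
    when the true fraction below the real limit L is p (sigma known)."""
    return norm.cdf(norm.ppf(p) + delta)

def c0_plan_size(p1, p2, alpha, beta, delta):
    """Smallest n for a zero-acceptance-number plan on the compressed limit
    hitting the OC points (p1, 1-alpha) and (p2, beta); None if impossible."""
    n = int(np.ceil(np.log(beta) / np.log(1 - pseudo_defect_prob(p2, delta))))
    n_max = np.log(1 - alpha) / np.log(1 - pseudo_defect_prob(p1, delta))
    return n if n <= n_max else None

for delta in (0.0, 0.25, 0.5, 0.75):
    n = c0_plan_size(p1=1e-4, p2=1e-2, alpha=0.05, beta=0.10, delta=delta)
    print(f"compression delta = {delta:4.2f}: n = {n}")
```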

5.
The skip-lot sampling program can be used to reduce the amount of inspection on a product with an excellent quality history; skip-lot sampling plans are thus designed to reduce inspection costs, and the skip-lot concept is economically advantageous in the design of sampling plans. Hence, a new system of skip-lot sampling plans, designated the SkSP-V plan, is developed in this paper. The proposed plan requires a return to normal inspection whenever a lot is rejected during sampling inspection, but has a provision for reduced normal inspection upon demonstration of superior product quality. A Markov chain formulation and derivation of performance measures for the new plan are presented. The properties of the SkSP-V plan are studied with the single sampling plan as the reference plan, and the advantages of the new plan are discussed. Finally, cost models are given for the economic design of the SkSP-V plan. Copyright © 2010 John Wiley & Sons, Ltd.
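For intuition, the sketch below computes stationary performance measures for a classic skip-lot scheme of the SkSP-2 type (not the SkSP-V plan itself, whose provisions are more elaborate); the clearance number i and skipping fraction f are illustrative.

```python
import numpy as np

def skip_lot_measures(P, i=2, f=0.5):
    """Stationary behaviour of a basic skip-lot scheme: after i consecutive
    acceptances under normal inspection, only a fraction f of lots is
    inspected; any rejection restores normal inspection.  States 0..i-1
    count consecutive acceptances; state i is the skipping state."""
    T = np.zeros((i + 1, i + 1))
    for j in range(i):
        T[j, j + 1] = P            # lot accepted
        T[j, 0] = 1 - P            # lot rejected: restart the count
    T[i, i] = 1 - f * (1 - P)      # skipping: stay unless an inspected lot fails
    T[i, 0] = f * (1 - P)
    # stationary distribution = left eigenvector of T for eigenvalue 1
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    frac_inspected = pi[:i].sum() + f * pi[i]
    Pa = pi[:i].sum() * P + pi[i] * (1 - f + f * P)   # overall acceptance
    return frac_inspected, Pa

for P in (0.90, 0.95, 0.99):
    fi, Pa = skip_lot_measures(P)
    print(f"lot accept prob {P:.2f}: fraction inspected {fi:.3f}, OC {Pa:.3f}")
```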

6.
During the sampling of particulate mixtures, the samples taken are analyzed for their mass concentration, which generally has nonzero sample-to-sample variance. The bias, variance, and mean squared error (MSE) of a number of variance estimators derived by Geelhoed are studied in this article. Monte Carlo simulation was applied using an observable first-order Markov chain whose transition probabilities served as a model for the sample drawing process. Because the bias and variance of a variance estimator can depend on the specific circumstances under which it is applied, the Monte Carlo simulation was performed for a wide range of practically relevant scenarios. Using smallest mean squared error as the criterion, an adaptation of an estimator based on a first-order Taylor linearization of the sample concentration performs best. An estimator based on the Horvitz–Thompson estimator is not practically applicable because of its potentially high MSE in the cases studied. The results indicate that the Poisson estimator is biased for the variance of the fundamental sampling error (relative bias of up to 428% in absolute value) at low levels of grouping and segregation. The uncertainty of the simulation results was also addressed and found not to affect the conclusions significantly. The potential of another recently described approach for extending the first-order Markov chain considered here to higher levels of grouping and segregation is discussed. Copyright © 2013 John Wiley & Sons, Ltd.
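A toy version of the Markov chain drawing model can be simulated directly; the sketch below (marginal fraction p, lag-1 correlation rho, and all sizes are assumptions) shows how grouping inflates the sample-to-sample variance of the concentration above the uncorrelated binomial value.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_sample(n, p, rho):
    """Draw n particles whose types follow a 2-state Markov chain with
    stationary fraction p of type-1 particles and lag-1 correlation rho
    (rho > 0 mimics grouping/segregation in the mixture)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    p11 = p + rho * (1 - p)        # P(1 -> 1), keeps the chain stationary
    p01 = p * (1 - rho)            # P(0 -> 1)
    for k in range(1, n):
        x[k] = rng.random() < (p11 if x[k - 1] else p01)
    return x

n, p, rho, reps = 200, 0.3, 0.4, 5000
conc = np.array([draw_sample(n, p, rho).mean() for _ in range(reps)])
print("empirical variance of concentration:", conc.var(ddof=1))
print("uncorrelated binomial formula p(1-p)/n:", p * (1 - p) / n)
```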

7.
A resource selection probability function gives the probability that a resource unit (e.g., a plot of land) described by a set of habitat variables X1 to Xp will be used by an animal or group of animals in a certain period of time. The estimation of a resource selection function is usually based on comparing a sample of resource units used by an animal with a sample of the resource units that were available for use, with both samples assumed to be effectively randomly selected from the relevant populations. This paper examines a modified sampling scheme in which the used units are obtained by line transect sampling. A logistic regression type of model is proposed, with estimation by conditional maximum likelihood. A simulation study indicates that the proposed method should be useful in practice.
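A minimal used-versus-available sketch, assuming an exponential selection function and simple random (not line transect) sampling of used units, illustrates the logistic-regression idea; all covariates and coefficients are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical habitat covariates for "available" units, and a true
# exponential selection function used to generate the "used" sample.
n_avail = 2000
X_avail = rng.normal(size=(n_avail, 2))        # e.g. elevation, cover
true_beta = np.array([1.0, -0.5])
w = np.exp(X_avail @ true_beta)                # relative selection weights
used = X_avail[rng.choice(n_avail, size=300, p=w / w.sum())]

# Used-versus-available logistic regression: the slope coefficients
# recover the selection function up to the intercept.
X = np.vstack([used, X_avail])
y = np.r_[np.ones(len(used)), np.zeros(n_avail)]
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
print("estimated coefficients:", fit.coef_.ravel())
print("true coefficients:     ", true_beta)
```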

8.
In this paper, an adaptive method for sampling and reconstructing high-dimensional shift-invariant signals is proposed. First, the integrate-and-fire sampling scheme and an approximate reconstruction algorithm for one-dimensional bandlimited signals are generalized to shift-invariant signals. Then, a high-dimensional shift-invariant signal is reduced to a sequence of one-dimensional shift-invariant signals along trajectories parallel to a coordinate axis, each of which can be approximately reconstructed by the generalized integrate-and-fire sampling scheme. Finally, an approximate reconstruction of the high-dimensional shift-invariant signal is obtained by solving a series of stable linear systems of equations. The main result shows that the final reconstruction error is completely determined by the initial threshold in the integrate-and-fire sampling scheme, which is generally very small. Copyright © 2017 John Wiley & Sons, Ltd.
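The one-dimensional integrate-and-fire scheme that the paper generalizes can be sketched as follows; the test signal and threshold are illustrative.

```python
import numpy as np

def integrate_and_fire(t, x, theta):
    """Integrate-and-fire sampling of a 1-D signal: accumulate the integral
    of x(t) and emit a sample time whenever it crosses +/- theta, then
    reset.  Returns the firing times and the signs of the crossings."""
    times, signs, acc = [], [], 0.0
    for k in range(1, len(t)):
        acc += 0.5 * (x[k] + x[k - 1]) * (t[k] - t[k - 1])  # trapezoid step
        if abs(acc) >= theta:
            times.append(t[k])
            signs.append(np.sign(acc))
            acc = 0.0
    return np.array(times), np.array(signs)

t = np.linspace(0, 4, 4001)
x = np.sinc(t - 2.0)               # a bandlimited-style test signal
times, signs = integrate_and_fire(t, x, theta=0.05)
print(f"{len(times)} firing times; the reconstruction error is governed by theta")
```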

9.
In this paper we discuss the problem of estimating the common mean of a bivariate normal population based on paired data as well as data on one of the marginals. Two double sampling schemes with the second stage sampling being either a simple random sampling (SRS) or a ranked set sampling (RSS) are considered. Two common mean estimators are proposed. It is found that under normality, the proposed RSS common mean estimator is always superior to the proposed SRS common mean estimator and other existing estimators such as the RSS regression estimator proposed by Yu and Lam (1997, Biometrics, 53, 1070–1080). The problem of estimating the mean Reid Vapor Pressure (RVP) of regular gasoline based on field and laboratory data is considered.
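A sketch of the basic RSS estimator under perfect ranking follows, as a stand-in for the paper's double sampling schemes; the distribution imitating RVP measurements is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def rss_mean(draw, k, cycles):
    """Ranked set sampling: per cycle draw k sets of k units, rank each
    set, and measure only the i-th order statistic of the i-th set."""
    obs = []
    for _ in range(cycles):
        for i in range(k):
            obs.append(np.sort(draw(k))[i])
    return np.mean(obs)

draw = lambda n: rng.normal(5.0, 1.0, n)   # stand-in for RVP measurements
k, cycles = 4, 50                          # 200 measured units in total
print(f"RSS mean: {rss_mean(draw, k, cycles):.3f}")
print(f"SRS mean: {np.mean(draw(k * cycles)):.3f}")
# Repeating the experiment many times shows the smaller variance of RSS.
```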

10.
We propose new sequential importance sampling methods for sampling contingency tables with given margins. The proposal for each method is based on asymptotic approximations to the number of tables with fixed margins. These methods generate tables that are very close to the uniform distribution. The tables, along with their importance weights, can be used to approximate the null distribution of test statistics and calculate the total number of tables. We apply the methods to a number of examples and demonstrate an improvement over other methods in a variety of real problems. Supplementary materials are available online.
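The flavor of sequential importance sampling for such tables can be shown on a two-row example; unlike the paper's asymptotically tuned proposal, this sketch draws each first-row cell uniformly from its feasible range, which is already enough to estimate the number of tables.

```python
import numpy as np

rng = np.random.default_rng(3)

def sis_table(row_sums, col_sums):
    """Fill a 2 x n table with the given margins column by column, drawing
    each first-row cell uniformly from its feasible range; the returned
    weight (inverse proposal probability) makes estimates unbiased."""
    r1, r2 = row_sums
    top, weight = [], 1.0
    for j, c in enumerate(col_sums):
        rest = sum(col_sums[j + 1:])        # capacity of later columns
        lo = max(0, c - r2, r1 - rest)
        hi = min(c, r1)
        x = int(rng.integers(lo, hi + 1))
        weight *= hi - lo + 1
        top.append(x)
        r1, r2 = r1 - x, r2 - (c - x)
    bottom = np.array(col_sums) - np.array(top)
    return np.vstack([top, bottom]), weight

draws = [sis_table((5, 7), (3, 4, 2, 3))[1] for _ in range(20000)]
# E[weight] equals the number of tables with these margins (exactly 37 here).
print("estimated number of tables:", np.mean(draws))
```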

11.
You Also Need Monte Carlo Methods: Some Techniques for Improving Application Proficiency
This is the second paper in the series "You Also Need Monte Carlo Methods". It discusses techniques for improving application proficiency, covering the choice of simulation models and methods for increasing computational speed or reducing sampling variance, such as importance sampling, correlated sampling, antithetic sampling, and stratified sampling. Practical issues such as determining the number of samples required in a simulation and assessing the precision of simulation results are also discussed.
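One of the variance reduction techniques mentioned, antithetic sampling, can be demonstrated in a few lines; the integrand is a standard textbook example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
f = np.exp                         # estimate E[f(U)] = e - 1 for U ~ U(0,1)
n = 100_000

u = rng.random(n)
plain = f(u)                       # plain Monte Carlo, n evaluations

v = rng.random(n // 2)
anti = 0.5 * (f(v) + f(1.0 - v))   # antithetic pairs, same n evaluations

print("true value:          ", np.e - 1)
print("plain MC:   mean %.5f  estimator var %.2e"
      % (plain.mean(), plain.var(ddof=1) / n))
print("antithetic: mean %.5f  estimator var %.2e"
      % (anti.mean(), anti.var(ddof=1) / (n // 2)))
```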

12.
When the data have heavy tails or contain outliers, conventional variable selection methods based on penalized least squares or likelihood functions perform poorly. Using Bayesian inference, we study the Bayesian variable selection problem for median linear models. An estimation method is proposed by combining Bayesian model selection theory with a spike-and-slab prior on the regression coefficients, and an efficient posterior Gibbs sampling procedure is given. Extensive numerical simulations and an analysis of the Boston house price data illustrate the effectiveness of the proposed method.
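As a hedged sketch of the spike-and-slab Gibbs idea, the code below runs variable selection for an ordinary Gaussian linear model; this is a simplification, since the paper's median model rests on an asymmetric-Laplace likelihood, which changes the conditional updates. The priors, data, and hyperparameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: 3 of 8 coefficients are truly nonzero.
n, p = 120, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 1.0, 0])
y = X @ beta_true + rng.normal(size=n)

sigma2, tau2, theta = 1.0, 10.0, 0.5   # noise var, slab var, prior incl. prob
beta = np.zeros(p)
incl = np.zeros(p)
n_iter, burn = 3000, 500

for it in range(n_iter):
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]   # residual excluding x_j
        s = X[:, j] @ X[:, j]
        v = 1.0 / (s / sigma2 + 1.0 / tau2)    # slab posterior variance
        m = v * (X[:, j] @ r) / sigma2         # slab posterior mean
        # Bayes factor of "x_j in" vs "x_j out", slab integrated out
        log_bf = 0.5 * (np.log(v / tau2) + m * m / v)
        p_in = 1.0 / (1.0 + (1 - theta) / theta * np.exp(-log_bf))
        beta[j] = m + np.sqrt(v) * rng.standard_normal() if rng.random() < p_in else 0.0
        if it >= burn:
            incl[j] += beta[j] != 0.0

print("posterior inclusion probabilities:", np.round(incl / (n_iter - burn), 2))
```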

13.
Signal processing problems arising in the study of the linearly viscoelastic behavior of polymers and composites are considered. It is shown that a large share of the data conversions involve integral transforms whose kernels depend on the ratio or product of their arguments, applied to monotonic long-time-interval and wide-frequency-band functions (signals). A unified method for carrying out these integral transforms is developed by combining a logarithmic transformation of the signal time scale with digital filtering. For integral transforms leading to ill-conditioned inverse problems, a regularization method is proposed based on choosing a sampling rate that ensures an acceptable error variance of the output signal. The specific features of the functional filters used to perform the functional (integral) transforms are discussed. Examples of performing the Heaviside-Carson sine transform and of the inherently ill-conditioned problem of inverting the integral transform to determine the relaxation spectrum are presented using digital functional filters.

14.
This paper deals with a class of dynamic games used for modelling oligopolistic competition in discrete time with random disturbances that can be described as an event tree with exogenously given probabilities. The concepts of S-adapted information structure and S-adapted equilibrium are reviewed, and a characterization of the equilibrium as the solution of a variational inequality (VI) is proposed. Conditions for existence and uniqueness of the equilibrium are provided. To deal with the large dimension of the VI, an approximation method based on random sampling of scenarios in the event tree is proposed. A proof of convergence is provided, and the results are illustrated numerically on two dynamic oligopoly models.

15.
In this note, we discuss a class of so-called generalized sampling functions. These functions are defined as the inverse Fourier transform of a family of piecewise constant functions that are either square integrable or Lebesgue integrable on the real line, and are in fact generalizations of the classic sinc function. Two approaches to constructing the generalized sampling functions are reviewed. Their properties, such as cardinality, orthogonality, and decay, are discussed, as is their interaction with the Hilbert transformer.
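A small numerical check of the construction, assuming the band [a, b] as the support of the piecewise constant function in the frequency domain; [-pi, pi] recovers numpy's normalized sinc.

```python
import numpy as np

def gen_sampling(t, a, b):
    """Inverse Fourier transform of the indicator of the band [a, b]:
    f(t) = (exp(i*b*t) - exp(i*a*t)) / (2*pi*i*t); the band [-pi, pi]
    gives back the classic normalized sinc."""
    t = np.asarray(t, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (np.exp(1j * b * t) - np.exp(1j * a * t)) / (2j * np.pi * t)
    out[t == 0] = (b - a) / (2 * np.pi)   # continuous limit at t = 0
    return out

t = np.linspace(-5.0, 5.0, 101)
print(np.allclose(gen_sampling(t, -np.pi, np.pi).real, np.sinc(t)))  # True
```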

16.
We propose a sequential importance sampling strategy to estimate subgraph frequencies and detect network motifs. The method is developed by sampling subgraphs sequentially node by node using a carefully chosen proposal distribution. Viewing the subgraphs as rooted trees, we propose a recursive formula that approximates the number of subgraphs containing a particular node or set of nodes. The proposal used to sample nodes is proportional to this estimated number of subgraphs. The method generates subgraphs from a distribution close to uniform, and performs better than competing methods. We apply the method to four real-world networks and demonstrate outstanding performance in practical examples. Supplemental materials for the article are available online.
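The node-by-node growth with importance weights can be illustrated by the classical sequential estimator for counting simple paths, a simpler cousin of the subgraph proposal in the article; the toy graph is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def sis_paths(adj, length, trials=20000):
    """Estimate the number of simple paths with `length` edges: grow a path
    node by node, choosing uniformly among unvisited neighbours; the
    product of the numbers of choices is an unbiased weight."""
    nodes = list(adj)
    total = 0.0
    for _ in range(trials):
        v = nodes[rng.integers(len(nodes))]
        weight, path = float(len(nodes)), {v}
        for _ in range(length):
            choices = [u for u in adj[v] if u not in path]
            if not choices:
                weight = 0.0
                break
            weight *= len(choices)
            v = choices[rng.integers(len(choices))]
            path.add(v)
        total += weight
    return total / trials / 2.0   # each undirected path is grown twice

# 4-cycle with one chord: a-b, b-c, c-d, d-a, a-c (true answer: 8)
adj = {"a": ["b", "d", "c"], "b": ["a", "c"],
       "c": ["b", "d", "a"], "d": ["c", "a"]}
print("estimated number of 2-edge simple paths:", sis_paths(adj, 2))
```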

17.
Generalized linear mixed models (GLMMs) are used in situations where a number of characteristics (covariates) affect a nonnormal response variable and the responses are correlated due to the existence of clusters or groups. For example, responses in biological applications may be correlated due to common genetic or environmental factors. The clustering or grouping is addressed by introducing cluster effects into the model; the associated parameters are often treated as random effects parameters. In many applications, the magnitudes of the variance components corresponding to one or more sets of random effects parameters are of interest, especially the point null hypothesis that one or more of the variance components is zero. A Bayesian approach to testing the hypothesis is to use Bayes factors comparing the models with and without the random effects in question; this work reviews a number of approaches for estimating the Bayes factor. We perform a comparative study of the different approaches to computing Bayes factors for GLMMs by applying them to two datasets. The first example applies a probit regression model with a single variance component to data from a natural selection study on turtles. The second example uses a disease mapping model from epidemiology, a Poisson regression model with two variance components. Bridge sampling and a recent improvement known as warp bridge sampling, importance sampling, and Chib's marginal likelihood calculation are all found to be effective. The relative advantages of the different approaches are discussed.
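Among the reviewed approaches, plain importance sampling with the prior as proposal is the easiest to sketch; the toy normal-mean models below stand in for the GLMM variance-component comparison, and all data are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.normal(0.4, 1.0, size=30)            # synthetic data

# M0: y ~ N(0, 1).  M1: mu ~ N(0, 1), y | mu ~ N(mu, 1).
log_m0 = norm.logpdf(y, 0.0, 1.0).sum()

# Monte Carlo estimate of the M1 marginal likelihood: average the
# likelihood over draws from the prior (importance sampling with the
# prior as proposal; adequate here, though bridge sampling is far more
# stable for the variance-component models in the article).
mu_draws = rng.normal(0.0, 1.0, size=100_000)
loglik = norm.logpdf(y[:, None], mu_draws[None, :], 1.0).sum(axis=0)
log_m1 = np.logaddexp.reduce(loglik) - np.log(len(mu_draws))

print("log Bayes factor (M1 vs M0):", log_m1 - log_m0)
```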

18.
This article focuses on improving estimation for Markov chain Monte Carlo simulation. The proposed methodology is based upon the use of importance link functions. With the help of appropriate importance sampling weights, effective estimates of functionals are developed. The method is most easily applied to irreducible Markov chains, where application is typically immediate. An important conceptual point is the applicability of the method to reducible Markov chains through the use of many-to-many importance link functions. Applications discussed include estimation of marginal genotypic probabilities for pedigree data, estimation for models with and without influential observations, and importance sampling for a target distribution with thick tails.
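The basic reweighting idea behind importance link functions can be sketched as follows (the article's many-to-many link functions are more general); the chain, target, and shifted distribution are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def rwm(logpdf, n, step=1.0):
    """Random-walk Metropolis draws from the density exp(logpdf)."""
    x, lp, out = 0.0, None, np.empty(n)
    lp = logpdf(x)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = logpdf(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

draws = rwm(lambda x: norm.logpdf(x), 50_000)   # chain targeting f = N(0, 1)

# Reuse the same chain to estimate E_g[X] for a shifted target
# g = N(0.5, 1) via self-normalized importance weights w = g/f.
logw = norm.logpdf(draws, 0.5, 1.0) - norm.logpdf(draws, 0.0, 1.0)
w = np.exp(logw - logw.max())
print("reweighted estimate of E_g[X]:", np.sum(w * draws) / np.sum(w))  # ~0.5
```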

19.
Sampling from probability density functions (pdfs) has become more and more important in many areas of applied science and has therefore received great attention. Many of the sampling procedures proposed allow only approximate or asymptotic sampling, and very few methods allow exact sampling. Direct sampling of standard pdfs is feasible, but sampling of much more complicated pdfs is often required. Rejection sampling allows exact sampling from univariate pdfs, but has the major drawback of requiring a case-by-case construction of a comparison function, often a tedious chore whose outcome dramatically affects the efficiency of the sampling procedure. In this paper, we restrict ourselves to a pdf that is proportional to a product of standard distributions. For this setting, we show that an automated selection of both the comparison function and the upper bound is possible, and that the choice can be made so as to optimize the sampling efficiency over a range of potential solutions. Finally, the method is illustrated on a few examples.
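A sketch of the automated choice for a product of two standard densities, assuming the product normal-pdf-times-exponential-pdf as the target: one factor serves as the proposal, and the maximum of the other factor gives the comparison bound.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(4)

# Target proportional to phi(x) * exp(-x) on x >= 0: a standard normal pdf
# times a standard exponential pdf.  The exponential factor is the
# proposal; the bound on the remaining factor is found automatically.
M = norm.pdf(0.0)                  # max of the normal factor

def sample(n):
    out = np.empty(0)
    while len(out) < n:
        x = rng.exponential(size=n)             # proposal draws
        keep = rng.random(n) < norm.pdf(x) / M  # exact accept/reject
        out = np.concatenate([out, x[keep]])
    return out[:n]

s = sample(50_000)
Z, _ = quad(lambda x: norm.pdf(x) * np.exp(-x), 0, np.inf)
m, _ = quad(lambda x: x * norm.pdf(x) * np.exp(-x) / Z, 0, np.inf)
print(f"sample mean {s.mean():.4f}  vs exact mean {m:.4f}")
```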

20.
To predict or control the response of a complicated numerical model that involves a large number of input variables but is mainly affected by only a subset of them, it is necessary to screen the active variables. This paper proposes a new space-filling sampling strategy for screening the parameters based on Morris's elementary effects method. The starting points of the sampling trajectories are selected using the maximin principle of the Latin hypercube sampling method, and the remaining points of the trajectories are determined by a one-factor-at-a-time design. Unlike other sampling strategies, which determine the sequence of factors randomly in the one-factor-at-a-time design, the proposed method formulates the sequence of factors by a deterministic algorithm that sequentially maximizes the Euclidean distance among sampling trajectories. A new efficient algorithm is proposed to transform the distance-maximization problem into a coordinate-sorting problem, which saves considerable computational cost. After the elementary effects are computed from the sampling points, a detailed criterion is presented for selecting the active factors. Two mathematical examples and an engineering problem are used to validate the proposed sampling method, demonstrating its advantages in computational efficiency, space-filling performance, and screening efficiency.
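The trajectory construction and the elementary effects statistics can be sketched as below; for brevity the factor sequence is randomized here rather than chosen by the paper's distance-maximizing algorithm, and the test function is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def morris_trajectory(k, levels=4):
    """One Morris trajectory: a random base point on a grid in [0, 1]^k
    plus k one-factor-at-a-time steps of size delta, one per factor."""
    delta = levels / (2.0 * (levels - 1))
    # only the lower half of the grid keeps base + delta inside [0, 1]
    base = rng.integers(0, levels // 2, size=k) / (levels - 1)
    order = rng.permutation(k)   # random here; the paper picks this order
                                 # to maximize inter-trajectory distances
    pts = [base.copy()]
    for i in order:
        nxt = pts[-1].copy()
        nxt[i] += delta
        pts.append(nxt)
    return np.array(pts), order, delta

def elementary_effects(f, k, r=50):
    ee = [[] for _ in range(k)]
    for _ in range(r):
        pts, order, delta = morris_trajectory(k)
        y = f(pts)
        for step, i in enumerate(order):
            ee[i].append((y[step + 1] - y[step]) / delta)
    ee = np.array(ee)
    return np.abs(ee).mean(axis=1), ee.std(axis=1)   # mu*, sigma

# Test function: x0 linear, x1 nonlinear, x2 inert.
f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] ** 2
mu_star, sigma = elementary_effects(f, k=3)
print("mu*:  ", np.round(mu_star, 2))   # large for active factors
print("sigma:", np.round(sigma, 2))     # large for nonlinear factors
```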
