Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, we design an attribute np control chart using multiple deferred state (MDS) sampling under the Weibull distribution, based on a time-truncated life test. The chart is constructed for monitoring variation in the mean life of the product in a manufacturing process. The optimal parameters of MDS sampling and the control limit coefficients are determined so that the in-control average run length (ARL) is as close as possible to the target ARL. The optimal parameters of MDS sampling are the sample size and the number of successive subgroups required for declaring the current state of the process. The out-of-control ARL is used as the performance measure of the proposed chart and is reported, with the determined optimal parameters, for various shift constants. The out-of-control ARLs of the proposed chart obtained under various distributions are compared with each other, and the performance of the proposed chart is compared with that of the existing control chart designed under single sampling. In addition, the economic design of the proposed chart using a variable sampling interval scheme is discussed, and a sensitivity analysis of expected costs is also carried out.

2.
In addition to accepting or rejecting a candidate arriving at time r, we may consider purchasing an option at a cost cx to recall the candidate at time r + x, but this privilege may be invoked only once. For large sample sizes, using the best-choice criterion and deducting option costs, the optimal strategy and return are obtained.

3.
We propose to study an EOQ-type inventory model with unreliable supply, with each order containing a random proportion of defective items. Every time an order is received, an acceptance sampling plan is applied to the lot, according to which only a sample is inspected instead of the whole lot. If the sample conforms to the standards, i.e. if the number of imperfect items is below an "acceptance number", no further screening is performed. Otherwise, the lot is subject to 100% screening. We formulate an integer non-linear mathematical program that integrates inventory and quality decisions into a unified profit model, to jointly determine the optimal lot size and the optimal sampling plan, characterized by a sample size and an acceptance number. The optimal decisions are determined so as to achieve a certain average outgoing quality limit (AOQL), which is the highest proportion of defective items in the outgoing material sold to customers. We provide a counter-example demonstrating that the expected profit function, the objective of the mathematical program, is not jointly concave in the lot and sample size. However, we show that for a given sampling plan, the expected profit function is concave in the lot size. A solution procedure is presented to compute the optimal solution. Numerical analysis is provided to gain managerial insights by analyzing the impact of changing various model parameters on the optimal solution. We also show numerically that the optimal profit determined using this model is significantly higher than the optimal profit obtained using the model of Salameh and Jaber (2000) [1], indicating much higher profits when acceptance sampling is used.
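As a companion to the abstract above, here is a minimal sketch of how the AOQL is computed for a single-sampling plan (n, c) with rectifying inspection. This is only the quality-control half of the story, not the paper's integrated inventory-and-quality model; the lot size N and plan parameters in the usage line are illustrative assumptions.

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept lot) = P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def aoq(n, c, p, N):
    """Average outgoing quality under rectifying inspection: an accepted
    lot passes its (N - n) uninspected items with defect rate p; a
    rejected lot is 100%-screened and contributes no defects."""
    return p * accept_prob(n, c, p) * (N - n) / N

def aoql(n, c, N, grid=1000):
    """AOQL = worst-case AOQ over the incoming defect rate p (grid search)."""
    return max(aoq(n, c, i / grid, N) for i in range(grid + 1))

# Illustrative plan, not taken from the paper:
limit = aoql(n=50, c=2, N=1000)
```

The grid search stands in for the analytic maximization; any plan (n, c) can be screened against a target AOQL this way.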

4.
A stratified random sampling plan is one in which the elements of the population are first divided into nonoverlapping groups, and then a simple random sample is selected from each group. In this paper, we focus on determining the optimal sample size of each group. We show that various versions of this problem can be transformed into a particular nonlinear program with a convex objective function, a single linear constraint, and bounded variables. Two branch and bound algorithms are presented for solving the problem. The first algorithm solves the transformed subproblems in the branch and bound tree using a variable pegging procedure. The second algorithm solves the subproblems by performing a search to identify the optimal Lagrange multiplier of the single constraint. We also present linearization and dynamic programming methods that can be used for solving the stratified sampling problem. Computational testing indicates that the pegging branch and bound algorithm is fastest for some classes of problems, and the linearization method is fastest for other classes of problems.
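The continuous relaxation of the allocation problem above has a classical closed form (Neyman allocation: n_h proportional to N_h·σ_h). The sketch below shows only that relaxation, not the paper's branch-and-bound machinery for the integer, bounded version; the stratum data in the comment are illustrative assumptions.

```python
def neyman_allocation(N, sigma, n_total):
    """Continuous (relaxed) variance-minimizing allocation of n_total
    sample units across strata with sizes N and std devs sigma:
    n_h proportional to N_h * sigma_h. Integrality and per-stratum
    bounds (the hard part of the paper) are ignored here."""
    weights = [Nh * sh for Nh, sh in zip(N, sigma)]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# Two equally variable strata, one twice as large -> 1:2 split (illustrative):
alloc = neyman_allocation([100, 200], [1.0, 1.0], 30)
```

Rounding this relaxed solution gives the starting point that exact integer methods then refine.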

5.
In this paper we consider the sampling properties of the bootstrap process, that is, the empirical process obtained from a random sample of size n drawn with replacement from a fixed sample of size n from a continuous distribution. The cumulants of the bootstrap process are given up to order n^{-1}, and their unbiased estimation is discussed. Furthermore, it is shown that the bootstrap process has an asymptotic minimax property for some class of distributions up to order n^{-1/2}.

6.
We discuss in this paper statistical inference for sample average approximations of multistage stochastic programming problems. We show that any random sampling scheme provides a valid statistical lower bound for the optimal (minimum) value of the true problem. However, for such a lower bound to be consistent, one needs to employ the conditional sampling procedure. We also show that fixing a feasible first-stage solution and then solving the sampling approximation of the corresponding (T−1)-stage problem does not give a valid statistical upper bound for the optimal value of the true problem. Supported, in part, by the National Science Foundation under grant DMS-0073770.

7.
The multidimensional assignment problem (MAP) is an NP-hard combinatorial optimization problem occurring in applications such as data association and target tracking. In this paper, we investigate characteristics of the mean optimal solution values for random MAPs with axial constraints. Throughout the study, we consider cost coefficients taken from three different random distributions: uniform, exponential and standard normal. In the cases of uniform and exponential costs, experimental data indicate that the mean optimal value converges to zero as the problem size increases. We give a short proof of this result for the case of exponentially distributed costs when the number of elements in each dimension is restricted to two. In the case of standard normal costs, experimental data indicate that the mean optimal value goes to negative infinity with increasing problem size. Using curve fitting techniques, we develop numerical estimates of the mean optimal value for various problem sizes. The experiments indicate that the numerical estimates are quite accurate in predicting the optimal solution value of a random instance of the MAP.

8.
In this article, we study a model of a single variable sampling plan with Type I censoring. Assume that the quality of an item in a batch is measured by a random variable following a Weibull distribution W(λ, m), with the scale parameter λ and shape parameter m having a gamma–discrete prior distribution, or with σ = 1/λ and m having an inverse gamma–uniform prior distribution. The decision function is based on the Kaplan–Meier estimator. Explicit expressions for the Bayes risk are then derived. In addition, an algorithm is suggested so that an optimal sampling plan can be determined approximately after a finite number of search steps.

9.
We consider a stochastic serial inventory system with a given fixed batch size per stage and linear inventory holding and penalty costs. For this system, echelon stock (R,nQ) policies are known to be optimal. On the basis of new average cost formulas, we obtain newsvendor equations for the optimal reorder levels.

10.
In this paper we consider the component structure of decomposable combinatorial objects, both labeled and unlabeled, from a probabilistic point of view. In both cases we show that when the generating function for the components of a structure is a logarithmic function, then the joint distribution of the normalized order statistics of the component sizes of a random object of size n converges to the Poisson–Dirichlet distribution on the simplex ∇ = {(x₁, x₂, …): ∑ᵢ xᵢ = 1, x₁ ≥ x₂ ≥ ⋯ ≥ 0}. This result complements recent results obtained by Flajolet and Soria on the total number of components in a random combinatorial structure. © 1994 John Wiley & Sons, Inc.

11.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
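A minimal sketch of the iid importance sampling estimator described at the start of the abstract above, together with the simple CLT-based standard error; none of the paper's MCMC/regenerative machinery is reproduced. The normal target and wider normal proposal are illustrative assumptions.

```python
import math
import random

def importance_sampling(h, target_pdf, proposal_pdf, proposal_sampler, n):
    """Estimate E_pi[h(X)] from iid draws of the proposal pi_1, returning
    the estimate and the consistent CLT standard error (iid case only)."""
    vals = []
    for _ in range(n):
        x = proposal_sampler()
        w = target_pdf(x) / proposal_pdf(x)   # importance weight pi/pi_1
        vals.append(h(x) * w)
    est = sum(vals) / n
    var = sum((v - est) ** 2 for v in vals) / (n - 1)
    return est, math.sqrt(var / n)

# Illustrative example: E[X^2] under N(0,1), sampling from the
# heavier-tailed N(0, 2^2) so the weights have finite variance.
def normal_pdf(x, s):
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

random.seed(0)
est, se = importance_sampling(
    h=lambda x: x * x,
    target_pdf=lambda x: normal_pdf(x, 1.0),
    proposal_pdf=lambda x: normal_pdf(x, 2.0),
    proposal_sampler=lambda: random.gauss(0.0, 2.0),
    n=20000,
)
```

Replacing the iid sampler with Markov chain draws keeps `est` consistent, but, as the abstract explains, this naive `se` is then no longer valid.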

12.
Stochastic linear programs can be solved approximately by drawing a subset of all possible random scenarios and solving the problem based on this subset, an approach known as sample average approximation (SAA). The value of the objective function at the optimal solution obtained via SAA provides an estimate of the true optimal objective function value. This estimator is known to be optimistically biased; the expected optimal objective function value for the sampled problem is lower (for minimization problems) than the optimal objective function value for the true problem. We investigate how two alternative sampling methods, antithetic variates (AV) and Latin Hypercube (LH) sampling, affect both the bias and variance, and thus the mean squared error (MSE), of this estimator. For a simple example, we analytically express the reductions in bias and variance obtained by these two alternative sampling methods. For eight test problems from the literature, we computationally investigate the impact of these sampling methods on bias and variance. We find that both sampling methods are effective at reducing mean squared error, with Latin Hypercube sampling outperforming antithetic variates. For our analytic example and the eight test problems we derive or estimate the condition number as defined in Shapiro et al. (Math. Program. 94:1–19, 2002). We find that for ill-conditioned problems, bias plays a larger role in MSE, and AV and LH sampling methods are more likely to reduce bias.
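The optimistic bias described above can be seen in a toy problem (an illustrative assumption, not one of the paper's eight test problems): for min over x of E[(x − Y)²] with Y ~ N(0, 1), the true optimal value is Var(Y) = 1, while the SAA optimum is attained at the sample mean, giving the biased sample variance with expectation (n − 1)/n < 1.

```python
import random

def saa_optimal_value(sample):
    """SAA of min_x E[(x - Y)^2]: the optimum is the sample mean, so the
    SAA optimal value is the (biased) sample variance, which on average
    underestimates the true optimal value Var(Y) -- the optimistic bias."""
    xbar = sum(sample) / len(sample)
    return sum((y - xbar) ** 2 for y in sample) / len(sample)

random.seed(1)
n, reps = 10, 2000
avg = sum(saa_optimal_value([random.gauss(0.0, 1.0) for _ in range(n)])
          for _ in range(reps)) / reps
# average SAA optimal value is close to (n - 1)/n = 0.9, below the true value 1
```

Variance-reduced scenario sampling (AV, LH), as studied in the paper, shrinks both this bias and the spread of the estimator.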

13.
The multidimensional assignment problem (MAP) is an NP-hard combinatorial optimization problem occurring in many applications, such as data association. In this paper, we prove two conjectures made in Ref. 1 on the basis of data from computational experiments on MAPs. We show that the mean optimal objective function cost of random instances of the MAP goes to zero as the problem size increases, when assignment costs are independent exponentially or uniformly distributed random variables. We also prove that the mean optimal value goes to negative infinity when assignment costs are independent normally distributed random variables.

14.
A well-known improvement on the basic Quicksort algorithm is to sample from the subarray at each recursive stage and to use the sample median as the partition element. General sampling strategies, which allow the sample size to vary as a function of subarray size, are analyzed here in terms of the total cost of comparisons required for sorting plus those required for median selection. Both this generalization and this cost measure are new to the analysis of Quicksort. A square-root strategy, which takes a sample of size Θ(√n) for a subarray of size n, is shown to be optimal over a large class of strategies. The square-root strategy has O(n^{1.5}) worst-case cost. The exact optimal strategy for a standard implementation of Quicksort is found computationally for n below 3000. © 1995 John Wiley & Sons, Inc.
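A sketch of the square-root strategy described above: at each recursive stage, partition around the median of a sample of roughly √n elements. The list-based, out-of-place implementation is illustrative only; it ignores the comparison-cost accounting that the paper actually analyzes.

```python
import math
import random

def sample_median_quicksort(a):
    """Quicksort variant using the 'square-root strategy': the pivot is
    the median of a random sample of about sqrt(len(a)) elements."""
    if len(a) <= 1:
        return list(a)
    k = max(1, math.isqrt(len(a)))
    if k % 2 == 0:
        k += 1                       # odd sample size -> a unique median
    sample = random.sample(a, min(k, len(a)))
    pivot = sorted(sample)[len(sample) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return sample_median_quicksort(less) + equal + sample_median_quicksort(greater)
```

A larger sample gives a better pivot but costs more comparisons to select the median; the √n balance between the two costs is the abstract's optimality claim.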

15.
Modern random matrix theory indicates that when the population size p is not negligible with respect to the sample size n, the sample covariance matrices demonstrate significant deviations from the population covariance matrices. In order to recover the characteristics of the population covariance matrices from the observed sample covariance matrices, several recent solutions are proposed when the order of the underlying population spectral distribution is known. In this paper, we deal with the underlying order selection problem and propose a solution based on the cross-validation principle. We prove the consistency of the proposed procedure.

16.
Random sampling is an efficient method to deal with constrained optimization problems in computational geometry. In a first step, one finds the optimal solution subject to a random subset of the constraints; in many cases, the expected number of constraints still violated by that solution is then significantly smaller than the overall number of constraints that remain. This phenomenon can be exploited in several ways, and typically results in simple and asymptotically fast algorithms. Very often the analysis of random sampling in this context boils down to a simple identity (the sampling lemma) which holds in a general framework, yet has not been stated explicitly in the literature. In the more restricted but still general setting of LP-type problems, we prove tail estimates for the sampling lemma, giving Chernoff-type bounds for the number of constraints violated by the solution of a random subset. As an application, we provide the first theoretical analysis of multiple pricing, a heuristic used in the simplex method for linear programming in order to reduce a large problem to few small ones. This follows from our analysis of a reduction scheme for general LP-type problems, which can be considered as a simplification of an algorithm due to Clarkson. The simplified version needs less random resources and allows a Chernoff-type tail estimate. Received June 8, 2000, and in revised form September 10, 2000. Online publication March 26, 2001.

17.
In this paper we develop two efficient discrete stochastic search methods, based on a random walk procedure, for maximizing system reliability subject to imperfect fault coverage, where uncovered component failures cause immediate system failure, even in the presence of adequate redundancy. The first search method uses a sequential sampling procedure with fixed boundaries at each iteration. We show that this search process satisfies local balance equations and that its equilibrium distribution gives most weight to the optimal solution. We also show that the solution visited most often in the first m iterations converges almost surely to the optimal solution. The second search method uses a sequential sampling procedure with increasing boundaries at each iteration. We show that if the increase occurs slower than a certain rate, this search process will converge to the optimal set with probability 1. We consider systems whose reliability cannot be evaluated exactly but must be estimated through Monte Carlo simulation. Copyright © 2008 John Wiley & Sons, Ltd.

18.
Interval allocation has been suggested as a possible formalization, for the PRAM, of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given n nonnegative integers x₁, …, xₙ, to allocate n nonoverlapping subarrays of sizes x₁, …, xₙ from within a base array of O(∑_{j=1}^{n} xⱼ) cells. We show that interval allocation problems of size n can be solved in O((log log n)³) time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of Θ(log n/log log n). We describe an application to the problem of computing the connected components of an undirected graph. Finally we show that there is a nonuniform deterministic algorithm that solves interval allocation problems of size n in O(log log n) time with optimal speedup.

19.
Given a sample of n observations from a density ƒ on ℝ^d, a natural estimator of ƒ(x) is formed by counting the number of points in some region R surrounding x and dividing this count by the d-dimensional volume of R. This paper presents an asymptotically optimal choice for R. The optimal shape turns out to be an ellipsoid, with shape depending on x. An extension of the idea that uses a kernel function to put greater weight on points nearer to x is given. Among nonnegative kernels, the familiar Bartlett–Epanechnikov kernel used with an ellipsoidal region is optimal. When higher-order kernels are used, the optimal region shapes are related to L_p balls for even positive integers p.

20.
Geometric processes and replacement problem
In this paper, we introduce and study the geometric process, a sequence of independent non-negative random variables X₁, X₂, … such that the distribution function of Xₙ is F(a^{n−1}x), where a is a positive constant. If a > 1, it is a decreasing geometric process; if a < 1, it is an increasing geometric process. We then consider the following replacement model: the successive survival times of the system after repair form a decreasing geometric process or a renewal process, while the consecutive repair times of the system constitute an increasing geometric process or a renewal process. Besides the replacement policy based on the working age of the system, a new kind of replacement policy, determined by the number of failures, is considered. The explicit expressions of the long-run average cost per unit time under each replacement policy are then calculated, so that the corresponding optimal replacement policies can be found analytically or numerically.
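The definition above can be simulated directly. The sketch below assumes an exponential underlying distribution F (an illustrative choice, not required by the definition), under which X_k has mean E[X₁]/a^{k−1}: decreasing survival times for a > 1, as in the deteriorating-system model.

```python
import random

def geometric_process(a, n, mean_first=1.0, seed=None):
    """Simulate the first n variables X_1, ..., X_n of a geometric process
    with exponential underlying distribution: X_k ~ F(a^(k-1) x), so
    E[X_k] = mean_first / a^(k-1). For a > 1 the times are stochastically
    decreasing (e.g. survival times after repair); for a < 1, increasing
    (e.g. consecutive repair times)."""
    rng = random.Random(seed)
    # expovariate takes the rate; rate of X_k is a^(k-1) / mean_first
    return [rng.expovariate(a ** k / mean_first) for k in range(n)]

# Illustrative run: a deteriorating system with a = 2
times = geometric_process(2.0, 5, seed=42)
```

Averaging many such runs recovers the means 1, 1/2, 1/4, … used when evaluating the long-run average cost of a replacement policy numerically.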


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号