Similar Literature
20 similar records found (search time: 109 ms)
1.
Latent trait models such as item response theory (IRT) hypothesize a functional relationship between an unobservable, or latent, variable and an observable outcome variable. In educational measurement, a discrete item response is usually the observable outcome variable, and the latent variable is associated with an examinee’s trait level (e.g., skill, proficiency). The link between the two variables is called an item response function. This function, defined by a set of item parameters, models the probability of observing a given item response, conditional on a specific trait level. Typically in a measurement setting, neither the item parameters nor the trait levels are known, and so must be estimated from the pattern of observed item responses. Although a maximum likelihood approach can be taken in estimating these parameters, it usually cannot be employed directly. Instead, a method of marginal maximum likelihood (MML) is utilized, via the expectation-maximization (EM) algorithm. Alternating between an expectation (E) step and a maximization (M) step, the EM algorithm assures that the marginal log likelihood function will not decrease after each EM cycle, and will converge to a local maximum. Interestingly, the negative of this marginal log likelihood function is equal to the relative entropy, or Kullback-Leibler divergence, between the conditional distribution of the latent variables given the observable variables and the joint likelihood of the latent and observable variables. With an unconstrained optimization for the M-step proposed here, the EM algorithm as minimization of Kullback-Leibler divergence admits the convergence results due to Csiszár and Tusnády (Statistics & Decisions, 1:205–237, 1984), a consequence of the binomial likelihood common to latent trait models with dichotomous response variables. 
For this unconstrained optimization, the EM algorithm converges to a global maximum of the marginal log likelihood function, yielding an information bound that permits a fixed point of reference against which models may be tested. A likelihood ratio test between marginal log likelihood functions obtained through constrained and unconstrained M-steps is provided as a means for testing models against this bound. Empirical examples demonstrate the approach.
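As a minimal sketch (not the paper's implementation) of the MML-EM machinery the abstract describes, the following estimates Rasch item difficulties for dichotomous responses, approximating the normal trait prior with a fixed 5-point quadrature. The response data, nodes, and weights are invented for illustration.

```python
import math

# Quadrature nodes and weights roughly approximating a standard normal prior
# over the latent trait (illustrative values, not Gauss-Hermite).
NODES = [-2.0, -1.0, 0.0, 1.0, 2.0]
WEIGHTS = [0.05, 0.25, 0.40, 0.25, 0.05]

def rasch_p(theta, b):
    """P(correct | trait theta, difficulty b) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def marginal_loglik(data, b):
    """Marginal log likelihood, integrating the trait out by quadrature."""
    total = 0.0
    for resp in data:
        m = 0.0
        for w, theta in zip(WEIGHTS, NODES):
            lik = w
            for x, bj in zip(resp, b):
                p = rasch_p(theta, bj)
                lik *= p if x == 1 else 1.0 - p
            m += lik
        total += math.log(m)
    return total

def em_step(data, b):
    # E-step: posterior weight of each quadrature node per response pattern.
    post = []
    for resp in data:
        lik = []
        for w, theta in zip(WEIGHTS, NODES):
            l = w
            for x, bj in zip(resp, b):
                p = rasch_p(theta, bj)
                l *= p if x == 1 else 1.0 - p
            lik.append(l)
        s = sum(lik)
        post.append([l / s for l in lik])
    # M-step: maximize the expected complete-data log likelihood in each
    # difficulty with a few Newton iterations (the objective is concave in b_j).
    new_b = []
    for j, bj in enumerate(b):
        for _ in range(10):
            grad = hess = 0.0
            for i, resp in enumerate(data):
                for q, theta in enumerate(NODES):
                    p = rasch_p(theta, bj)
                    grad += post[i][q] * (p - resp[j])
                    hess += post[i][q] * p * (1.0 - p)
            bj += grad / hess  # Newton ascent step
        new_b.append(bj)
    return new_b

# Invented responses of 6 examinees to 3 items (proportions correct: 5/6, 3/6, 2/6).
data = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (0, 0, 0), (1, 0, 1), (1, 1, 0)]
b = [0.0, 0.0, 0.0]
ll0 = marginal_loglik(data, b)
for _ in range(15):
    b = em_step(data, b)
ll = marginal_loglik(data, b)
```

As the abstract notes, each EM cycle cannot decrease the marginal log likelihood, and the estimated difficulties order inversely to the items' proportions correct.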

2.
In this paper we model a single-base multi-indentured repairable item inventory system. The base has a maximum number of identical on-line machines, and each machine consists of several module types. Our objective is to calculate the steady-state operating characteristics of the system. The usual Markovian approach leads to multidimensional state spaces that are large even for relatively small problems. Solving such a multidimensional system is very difficult because of the huge number of states. Consequently, we propose an approximation technique that allows us to solve large problems relatively quickly. Although the resulting solution is only approximate, a variety of test problems indicates that the algorithm is quite accurate.

3.
Many large universities, community colleges, and some smaller four-year colleges are turning to hybrid or online instruction for remedial and entry-level mathematics courses, often assessed using online exams in a proctored computer lab environment. Faculty face the task of choosing questions from a publisher's test bank with very little, if any, background in test theory and design. We present a new item parameter, item efficiency, calculated from an item response theory analysis of a comprehensive college algebra final examination, and show that this new parameter may be used to identify items better suited for similar comprehensive final assessments. Further, by relating item efficiency to classical test theory item statistics, we propose guidelines that can be used to identify suitable items prior to testing with little or no background in psychometric theory.

4.
In automated test assembly (ATA), 0-1 linear programming (0-1 LP) methods are applied to select questions (items) from an item bank to assemble an optimal test. The objective in this 0-1 LP optimization problem is to assemble a test that measures the ability of candidates as precisely as possible. Item response theory (IRT) is commonly applied to model the relationship between candidates' responses and their ability level. Parameters that describe the characteristics of each item, such as its difficulty and the extent to which it differentiates between more and less able test takers (discrimination), are estimated when the IRT model is applied. Unfortunately, since all parameters in IRT models have to be estimated, they carry estimation uncertainty. Other parameters in the test assembly model, such as average response times, are estimated with uncertainty as well. General 0-1 LP methods do not take this uncertainty into account and therefore overestimate the predicted level of measurement precision. In this paper, alternative robust optimization methods are applied. It is demonstrated how the Bertsimas and Sim method can be applied to account for this uncertainty in ATA. The impact of applying this method is illustrated in two numerical examples. Implications are discussed, and some directions for future research are presented.
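A hedged toy illustration of the Bertsimas-Sim idea in this setting: pick a fixed number of items maximizing estimated information, while protecting against up to Γ of the information estimates dropping by their full uncertainty. The item numbers below are invented, and brute-force enumeration stands in for the 0-1 LP solver.

```python
from itertools import combinations

info = [0.9, 0.8, 0.75, 0.7, 0.6, 0.5]    # nominal information per item (invented)
dev  = [0.5, 0.05, 0.05, 0.05, 0.3, 0.05]  # estimation uncertainty per item (invented)
K, GAMMA = 3, 1  # test length; number of estimates allowed to be off

def robust_value(sel):
    """Nominal information minus the worst-case loss when up to GAMMA of the
    selected items' estimates drop by their full uncertainty (Bertsimas-Sim
    protection for a cardinality-constrained selection)."""
    nominal = sum(info[j] for j in sel)
    worst = sum(sorted((dev[j] for j in sel), reverse=True)[:GAMMA])
    return nominal - worst

# Nominal 0-1 optimization ignores uncertainty; the robust one accounts for it.
nominal_pick = max(combinations(range(len(info)), K),
                   key=lambda s: sum(info[j] for j in s))
robust_pick = max(combinations(range(len(info)), K), key=robust_value)
```

Here the nominal solution grabs item 0 for its high estimate, while the robust solution avoids it because its large uncertainty makes the guaranteed precision lower.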

5.
Bus rollover warning has advanced considerably through research efforts over the past decades, but existing studies have not accounted for dynamic road bank. To fill this gap, this paper presents a real-time rollover trend prediction that indicates bus rollover risk with road bank estimation. The prediction algorithm consists of a dynamic roll stability analysis, based on a suspended roll plane model, and a real-time warning velocity calculation. An estimator for dynamic road bank and vehicle sideslip angle, based on the dynamic simplex algorithm (DSA), is designed to take into account the influence of road bank on rollover trend. By comparing the maximum stable lateral acceleration with the attainable lateral acceleration given the tire/road friction limit, a warning velocity is determined using the measurement of lateral acceleration and the estimate of instantaneous vehicle turning radius. The proposed rollover trend prediction algorithm is evaluated using TruckSim software. Simulation results show that the proposed warning velocity can represent the vehicle's potential to resist rollover and gives appropriate prediction of vehicle rollover crashes in typical scenarios.

6.
One-dimensional bin-packing problems require the assignment of a collection of items to bins with the goal of optimizing some criterion related to the number of bins used or the ‘weights’ of the items assigned to the bins. In many instances, the number of bins is fixed and the goal is to assign the items such that the sums of the item weights for each bin are approximately equal. Among the possible applications of one-dimensional bin-packing in the field of psychology are the assignment of subjects to treatments and the allocation of students to groups. An especially important application in the psychometric literature pertains to splitting of a set of test items to create distinct subtests, each containing the same number of items, such that the maximum sum of item weights across all bins is minimized. In this context, the weights typically correspond to item statistics derived from difficulty and discrimination indices. We present a mixed zero-one integer linear programming (MZOILP) formulation of this one-dimensional minimax bin-packing problem and develop an approximate procedure for its solution that is based on the simulated annealing algorithm. In two comparisons that focused on 34 practically-sized test problems (up to 6000 items and 300 bins), the simulated annealing heuristic generally provided better solutions than were obtained when using a commercial mathematical programming software package to solve the MZOILP formulation directly.
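A small simulated-annealing sketch of this minimax task, under the stated assumptions: split items with given weights into bins of equal cardinality (as in subtest splitting) so that the maximum bin-weight sum is minimized. The weights, cooling schedule, and iteration budget are invented for illustration, not taken from the paper.

```python
import math
import random

def minimax_split(weights, n_bins, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    assert n % n_bins == 0  # equal-size bins, as in subtest splitting
    assign = [i % n_bins for i in range(n)]

    def max_load(a):
        loads = [0.0] * n_bins
        for i, b in enumerate(a):
            loads[b] += weights[i]
        return max(loads)

    cur = max_load(assign)
    best, best_assign = cur, assign[:]
    temp = sum(weights) / n
    for _ in range(iters):
        # Neighbor move: swap the bin labels of two items in different bins,
        # which preserves the equal bin sizes.
        i, j = rng.randrange(n), rng.randrange(n)
        if assign[i] == assign[j]:
            continue
        assign[i], assign[j] = assign[j], assign[i]
        new = max_load(assign)
        if new <= cur or rng.random() < math.exp(-(new - cur) / max(temp, 1e-9)):
            cur = new
            if new < best:
                best, best_assign = new, assign[:]
        else:
            assign[i], assign[j] = assign[j], assign[i]  # reject: undo the swap
        temp *= 0.9995  # geometric cooling
    return best_assign, best

weights = [7, 9, 3, 8, 2, 5, 6, 4, 1, 9, 5, 7]
assign, best = minimax_split(weights, 3)
```

The swap neighborhood keeps every bin at the same size throughout, so the cardinality constraint never needs explicit repair.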

7.
This paper presents an alternative approach, based on a genetic algorithm, to a new variant of the unbalanced assignment problem that deals with an additional constraint on the maximum number of jobs that can be assigned to some agent(s). In this approach, the genetic algorithm is improved with newly proposed initialization, crossover, and mutation operators so that it can optimally assign all jobs to agents. Computational results, with comparative performance of the algorithm, are reported for four test problems.

8.
A new heuristic approach is put forward for tackling container loading problems where the cargo involved has varying degrees of load bearing strength. In such cases the placement rules must ensure that the weight resting on an item remains below the maximum it can withstand without suffering crushing damage. The construction heuristic proposed is embedded in a search algorithm which seeks to optimise the parameter settings of the procedure. Limiting the time required to produce a good solution and the amount of technical expertise needed by the user are key considerations. The approach is evaluated in a series of tests against benchmarks from the literature. The results demonstrate that it outperforms other approaches which have been suggested for this type of problem and that it also performs well on problems where load bearing strength is not an issue. Potentially useful extensions of the work are discussed.

9.
In this paper we present an exact method for computing the Weibull renewal function and its derivative for application in maintenance optimization. The computational method provides a solid extension to previous work, in which an approximation to the renewal function was used in a Bayesian approach to determine optimal replacement times. In the maintenance scenario, under the assumption that an item is replaced by a new one upon failure, the underlying process between planned replacement times is a renewal process. The Bayesian approach takes into account failure and survival information at each planned replacement stage to update the optimal time until the next planned replacement. To keep the approach simple to carry out in practice, we limit the decision process to a one-step optimization within the sequential decision problem. We make the Weibull assumption for the lifetime distribution of an item and calculate the renewal function and its derivative accurately. A root-finding method is adapted to the maintenance optimization problem, making use of the availability of the derivative of the renewal function. Furthermore, we develop the maximum likelihood estimate version of the Bayesian approach and illustrate it with simulated examples. The maintenance algorithm retains the adaptive concept of the Bayesian methodology but reduces the computational burden. Copyright © 2014 John Wiley & Sons, Ltd.
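For context, a hedged numerical sketch (not the paper's exact method): the renewal function solves M(t) = F(t) + ∫₀ᵗ M(t−u) dF(u), which can be approximated on a grid by a simple first-order discretization. The Weibull shape, scale, and step size below are illustrative assumptions.

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull lifetime CDF F(t)."""
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

def renewal_function(t_max, shape, scale, h=0.005):
    """Return M[i] ~= M(i * h), the expected number of renewals by time i*h,
    via a first-order discretization of the renewal equation."""
    n = int(round(t_max / h))
    F = [weibull_cdf(i * h, shape, scale) for i in range(n + 1)]
    M = [0.0] * (n + 1)
    for i in range(1, n + 1):
        acc = F[i]
        # Approximate the convolution using the probability mass of F on each
        # grid cell, F[j] - F[j-1].
        for j in range(1, i + 1):
            acc += M[i - j] * (F[j] - F[j - 1])
        M[i] = acc
    return M

# Sanity check: with shape = 1 the Weibull reduces to the exponential, whose
# renewal function is exactly M(t) = t / scale.
M = renewal_function(1.0, shape=1.0, scale=1.0)
approx_derivative = (M[-1] - M[-2]) / 0.005  # finite difference at the default h
```

The finite-difference derivative is the crude analogue of the derivative the paper computes exactly for use in root finding.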

10.
Three-staged patterns are often used to solve the 2D cutting stock problem of rectangular items. In such patterns the plate is divided into items in three stages: vertical cuts divide the plate into segments; horizontal cuts then divide the segments into strips; and finally vertical cuts divide the strips into items. An algorithm for unconstrained three-staged patterns is presented, where a set of rectangular item types are packed into the plate so as to maximize the pattern value, and there is no constraint on the frequencies of each item type. It can be used jointly with the linear programming approach to solve the cutting stock problem. The algorithm solves three large knapsack problems to obtain the optimal pattern: one for the item layout on the widest strip, one for the strip layout on the longest segment, and the third for the segment layout on the plate. The computational results indicate that the algorithm is efficient.
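Each of the three subproblems has the same shape; a hedged sketch of one of them follows: choose item counts along a strip of length L to maximize total value, which is an unbounded knapsack (frequencies are unconstrained) solvable by dynamic programming. The lengths and values are invented for illustration.

```python
def strip_layout(L, lengths, values):
    """Unbounded knapsack by dynamic programming: best[l] is the maximum
    value achievable on a strip of length l."""
    best = [0] * (L + 1)
    for l in range(1, L + 1):
        for ln, v in zip(lengths, values):
            if ln <= l and best[l - ln] + v > best[l]:
                best[l] = best[l - ln] + v
    return best[L]

# Illustrative data: three item types laid out on a strip of length 10.
value = strip_layout(10, lengths=[3, 4, 5], values=[4, 6, 7])
```

The strip solutions become the "item" values for the segment knapsack, and the segment solutions likewise feed the plate knapsack, which is how the three stages nest.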

11.
Sequential heuristic for the two-dimensional bin-packing problem
A heuristic approach for the two-dimensional bin-packing problem is proposed. The algorithm is based on a sequential heuristic procedure that generates one pattern at a time, each producing some of the items, and repeats until all items are produced. Both guillotine and non-guillotine patterns can be used. Each pattern is obtained by calling a pattern-generation procedure whose objective is to maximize the pattern value. The item values are adjusted after the generation of each pattern using a value correction formula. The algorithm is compared with five published algorithms, using 50 groups of benchmark instances. The results indicate that the algorithm is the most efficient in improving solution quality.

12.
Shelf space is one of the most important resources of a retail firm. This paper formulates a model and proposes an approach similar to the algorithm used for solving a knapsack problem. Subject to given constraints, the proposed heuristic allocates shelf space item by item in descending order of each item's sales profit per unit of display area or length. Through the use of simulations, the objective-value performance and computational efficiency of this method are evaluated. Three options are also proposed for improving the heuristic. Compared to an optimal method, the improved heuristic is shown to be a very efficient algorithm that allocates shelf space at near-optimal levels.
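The greedy, knapsack-like allocation described above can be sketched as follows. This is a minimal illustration under invented data, not the paper's model: items are ranked by profit per unit of display length and facings are assigned until the shelf or per-item bounds are exhausted.

```python
def allocate_shelf(items, shelf_len):
    """items: list of (name, profit_per_facing, facing_len, max_facings).
    Greedily fill the shelf in descending order of profit density.
    Returns (dict name -> facings, unused shelf length)."""
    order = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    alloc, remaining = {}, shelf_len
    for name, profit, length, max_f in order:
        f = min(max_f, int(remaining // length))  # as many facings as fit
        if f > 0:
            alloc[name] = f
            remaining -= f * length
    return alloc, remaining

# Invented data: (name, profit per facing, facing length, max facings).
items = [("A", 10.0, 2.0, 3), ("B", 9.0, 1.5, 4), ("C", 4.0, 1.0, 5)]
alloc, left = allocate_shelf(items, 12.0)
```

Here "B" wins the density ranking (9.0/1.5 = 6 per unit length) and is shelved first, even though "A" has the higher per-facing profit.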

13.
In order to evaluate the effectiveness of curricular or instructional innovations, researchers often attempt to measure change in students' conceptual understanding of the target subject matter. The measurement of change is therefore a critical endeavor. Often this is accomplished through pre-post testing using an assessment such as a concept inventory, with aggregate test scores compared from pre-test to post-test in order to characterize gains. These comparisons of raw or normalized scores are most often made under the assumptions of Classical Test Theory (CTT). This study argues that measuring change at the item level (rather than the person level) on the Force and Motion Conceptual Evaluation (FMCE) can provide more detailed insight into the observed change in students' Newtonian thinking, and that such an approach is better warranted under the assumptions of Item Response Theory (IRT). In comparing item-level measures of change under CTT and IRT measurement models, it was found that the inferences drawn from each analysis are similar, but those derived from IRT modeling stand on a stronger statistical foundation. Further, the IRT approach leads to analyzing common item groupings, which provide additional information about change at the item and topic level.

14.
This paper investigates the two-dimensional strip packing problem for the case in which items must be arranged to form a physically stable packing satisfying a predefined item unloading order from the top of the strip. The packing stability analysis is based on conditions for the static equilibrium of rigid bodies, differing from other strategies, which are based on area and percentage of support. We consider an integer linear programming model for the strip packing problem with the order constraint and a cutting-plane algorithm to handle stability, leading to a branch-and-cut approach. We also present two heuristics: the first is based on a stack-building algorithm, and the second is a slight modification of the branch-and-cut approach. The computational experiments show that the branch-and-cut model can handle small and medium-sized instances, whereas the heuristics quickly find near-optimal solutions for several instances. Combining the heuristics with the branch-and-cut algorithm, many instances are solved to near optimality in a few seconds.

15.
A heuristic algorithm for the one-dimensional cutting stock problem with usable leftovers (residual lengths) is presented. The algorithm consists of two procedures. The first is a linear programming procedure that fulfills the major portion of the item demand. The second is a sequential heuristic procedure that fulfills the remaining portion. The algorithm can balance the cost of the consumed bars, the profit from leftovers, and the profit from reducing the use of shorter stocks. The computational results show that the algorithm performs better than a recently published algorithm.

16.
Staircase structured linear programs arise naturally in the study of engineering economic systems. One general approach to solving such LPs is the technique of nested decomposition of the primal or dual problem. The research described in this paper proposes a revised decomposition algorithm that incorporates knowledge of the structure of the staircase basis in forming the decomposed linear programs. Column proposals from the revised subproblems are shown to achieve maximum penetration against the master problem basis. The proposed algorithm resorts to the regular Dantzig-Wolfe subproblem to test for optimality. The algorithm is shown to be finite and is compared to the Abrahamson-Wittrock algorithm. Computational results indicate substantial improvement over the Dantzig-Wolfe algorithm in most cases. A numerical example of the algorithm is provided in the appendix. This research was supported by National Science Foundation grant ECS-8106455 to Cornell University.

17.
A mixture approach to clustering is an important technique in cluster analysis. A mixture of multivariate multinomial distributions is usually used to analyze categorical data with a latent class model. Parameter estimation is an important step for a mixture distribution. Described here are four approaches to estimating the parameters of a mixture of multivariate multinomial distributions. The first is an extended maximum likelihood (ML) method. The second is based on the well-known expectation-maximization (EM) algorithm. The third is the classification maximum likelihood (CML) algorithm. In this paper, we propose a new approach using the so-called fuzzy class model, creating the fuzzy classification maximum likelihood (FCML) approach for categorical data. The accuracy, robustness, and effectiveness of these four algorithms for estimating the parameters of multivariate binomial mixtures are compared using real empirical data and samples drawn from multivariate binomial mixtures of two classes. The results show that the proposed FCML algorithm offers better accuracy, robustness, and effectiveness, and overall is superior to the ML, EM, and CML algorithms. We therefore recommend FCML as another good tool for estimating the parameters of multivariate multinomial mixture models.

18.
Maximum likelihood methods are important for system modeling and parameter estimation. Based on the maximum likelihood principle, this paper derives a recursive maximum likelihood least squares identification algorithm for systems with autoregressive moving average noise. In the derivation, we prove that maximizing the likelihood function is equivalent to minimizing the least squares cost function. The proposed algorithm differs from the corresponding generalized extended least squares algorithm. Simulation tests show that the proposed algorithm achieves higher estimation accuracy than the recursive generalized extended least squares algorithm.
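For orientation, here is a hedged sketch of plain recursive least squares (RLS) for a two-parameter model y_t = a·y_{t-1} + b·u_t + e_t. The paper's algorithm additionally handles the ARMA noise via the maximum likelihood principle; this minimal version only illustrates the recursive update, on simulated data with invented parameters.

```python
import random

def rls(phis, ys, lam=1.0):
    """phis: list of 2-element regressor vectors; ys: observations;
    lam: forgetting factor (1.0 = ordinary RLS). Returns [a, b]."""
    theta = [0.0, 0.0]
    # P plays the role of the (scaled) parameter covariance, started large.
    P = [[1e6, 0.0], [0.0, 1e6]]
    for phi, y in zip(phis, ys):
        # Gain k = P*phi / (lam + phi' * P * phi)
        Pp = [P[0][0] * phi[0] + P[0][1] * phi[1],
              P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pp[0] + phi[1] * Pp[1]
        k = [Pp[0] / denom, Pp[1] / denom]
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])  # prediction error
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # Covariance update: P <- (P - k * (P*phi)') / lam
        P = [[(P[0][0] - k[0] * Pp[0]) / lam, (P[0][1] - k[0] * Pp[1]) / lam],
             [(P[1][0] - k[1] * Pp[0]) / lam, (P[1][1] - k[1] * Pp[1]) / lam]]
    return theta

# Simulate y_t = 0.5*y_{t-1} + 2.0*u_t + e_t with small white noise.
rng = random.Random(1)
a_true, b_true = 0.5, 2.0
y, phis, ys = 0.0, [], []
for _ in range(500):
    u = rng.uniform(-1, 1)
    phis.append([y, u])           # regressor [y_{t-1}, u_t]
    y = a_true * y + b_true * u + rng.gauss(0, 0.05)
    ys.append(y)
a_hat, b_hat = rls(phis, ys)
```

With white noise, as here, plain RLS is already consistent; the ML-based recursion in the abstract is aimed at the colored-noise case where this simple version would be biased.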

19.
A finite mixture model has been used to fit data from heterogeneous populations in many applications. The Expectation Maximization (EM) algorithm is the most popular method for estimating the parameters of a finite mixture model; a Bayesian approach is another. However, the EM algorithm often converges to a local maximum and is sensitive to the choice of starting points, while in the Bayesian approach the Markov chain Monte Carlo (MCMC) sampler sometimes converges to a local mode and has difficulty moving to another. Hence, in this paper we propose a new method that addresses this limitation of the EM algorithm, so that EM can estimate the parameters in the global maximum region, and we develop a more effective Bayesian approach, so that the MCMC chain moves from one mode to another more easily in the mixture model. Our approach combines simulated annealing (SA) with adaptive rejection Metropolis sampling (ARMS). Although SA is a well-known approach for detecting distinct modes, its limitation is the difficulty of choosing sequences of proper proposal distributions for a target distribution. Since ARMS uses a piecewise linear envelope function as a proposal distribution, we incorporate ARMS into SA so that we can start from a more suitable proposal distribution and detect separate modes. As a result, we can detect the global maximum region and estimate the parameters for it. We refer to this approach as ARMS annealing. By combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, we obtain two approaches: an EM-ARMS annealing algorithm and a Bayesian-ARMS annealing approach. Using simulation, we compare them with the traditional EM algorithm alone and the Bayesian approach alone, showing that the two proposed approaches are comparable to each other and perform better than either baseline.
Both approaches detect the global maximum region well and estimate the parameters in this region. We demonstrate their advantage using a mixture of two Poisson regression models, applied to survey data on the number of charitable donations.

20.
This article presents a Markov chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. The algorithm allows us to sample solely from standard densities with no additional tuning. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix.

Prior determination of explanatory variables and random effects is not a prerequisite, because the final structure is chosen in a data-driven manner in the course of the modeling procedure. To illustrate the method, we give two bank data examples.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号