Similar Literature — 20 results retrieved
1.
Analysis of means (ANOM), which, like a Shewhart control chart, displays individual mean effects graphically, is an attractive alternative mean-testing procedure to the analysis of variance (ANOVA). The procedure is primarily used to analyze experimental data from designs with only fixed effects. The recently introduced ANOM procedure based on the q-distribution (the ANOMQ procedure) generalizes the ANOM approach to random-effects models. This article shows that applying the ANOM and ANOMQ procedures to advanced designs, such as hierarchically nested and split-plot designs with fixed, random, and mixed effects, enhances the data-visualization aspect of graphical testing. Data from two real-world experiments are used to illustrate the proposed procedure; these experiments also demonstrate the ANOM procedures' visualization advantage over ANOVA from the practitioner's point of view.
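(Illustrative note, not taken from the cited article.) In the basic fixed-effects, balanced one-way case, the ANOM chart plots each treatment mean against decision limits centred at the grand mean; a commonly used form of the limits is

\[
\bar{y}_{..} \;\pm\; h_{\alpha;\,k,\,\nu}\,\sqrt{MS_E}\,\sqrt{\frac{k-1}{k\,n}} ,
\]

where k is the number of treatment means, n the number of observations per treatment, MS_E the error mean square with ν degrees of freedom, and h the ANOM critical value. Means falling outside the limits are flagged as differing from the grand mean; the exact critical values and limit widths change for nested, split-plot, and random-effects designs such as those treated in the article.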

2.
This paper presents an effective procedure that finds lower bounds for the travelling salesman problem based on the 1-tree, using a learning-based Lagrangian relaxation technique. The procedure can dynamically alter its step size depending on its previous iterations. Along with having the capability of expansion and contraction, the procedure performs a learning process in which the Lagrange multipliers are influenced by a weighted cost function of their neighbouring nodes. By analogy with the simulated annealing paradigm, the learning process here is equivalent to escaping local optimality by exploiting the structure of the problem. Computational experiments on Euclidean benchmarks from the TSPLIB library show that the procedure is very effective.
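The learning-based multiplier update is specific to the paper and is not reproduced here. As a point of reference, the sketch below shows the classical 1-tree Lagrangian bound (Held–Karp style) with a plain geometric-step subgradient update; the function names, the symmetric cost-matrix input, and the step-size schedule are illustrative assumptions.

```python
import math

def min_one_tree(cost, pi):
    """Minimum 1-tree w.r.t. penalized costs c[i][j] + pi[i] + pi[j].
    `cost` is a full symmetric n-by-n matrix (list of lists)."""
    n = len(cost)
    c = [[cost[i][j] + pi[i] + pi[j] for j in range(n)] for i in range(n)]
    in_tree = [False] * n
    in_tree[0] = True                      # node 0 is handled separately
    in_tree[1] = True                      # Prim's MST on nodes 1..n-1, seeded at node 1
    best = [math.inf] * n
    parent = [-1] * n
    for j in range(2, n):
        best[j], parent[j] = c[1][j], 1
    degree = [0] * n
    total = 0.0
    for _ in range(n - 2):
        v = min((j for j in range(2, n) if not in_tree[j]), key=lambda j: best[j])
        in_tree[v] = True
        total += best[v]
        degree[v] += 1
        degree[parent[v]] += 1
        for j in range(2, n):
            if not in_tree[j] and c[v][j] < best[j]:
                best[j], parent[j] = c[v][j], v
    # attach node 0 with its two cheapest incident edges
    for j in sorted(range(1, n), key=lambda j: c[0][j])[:2]:
        total += c[0][j]
        degree[0] += 1
        degree[j] += 1
    return total, degree

def one_tree_lower_bound(cost, iters=200, step0=2.0):
    """Basic subgradient ascent on the 1-tree Lagrangian dual."""
    n = len(cost)
    pi = [0.0] * n
    best_bound = -math.inf
    for k in range(iters):
        tree_cost, deg = min_one_tree(cost, pi)
        bound = tree_cost - 2.0 * sum(pi)       # Lagrangian lower bound
        best_bound = max(best_bound, bound)
        g = [deg[i] - 2 for i in range(n)]      # subgradient: degree violations
        if sum(x * x for x in g) == 0:          # the 1-tree is a tour: bound is tight
            break
        t = step0 * (0.95 ** k)                 # simple geometric step size
        pi = [pi[i] + t * g[i] for i in range(n)]
    return best_bound
```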

3.
In this paper, a procedure is presented that allows the optimal reconstruction of images from blurred, noisy data. The procedure relies on a general Bayesian approach that makes proper use of all the available information. Special attention is devoted to the informative content of the edges; thus, a preprocessing phase is included with the aim of estimating the jump sizes in the gray level. The optimization phase follows; existence and uniqueness of the solution are established. The procedure is tested against both simple simulated data and real data.

4.
One of the most important steps in applying data envelopment analysis (DEA) is the choice of input and output variables. In this paper, we develop a formal procedure for a “stepwise” approach to variable selection that involves sequentially maximizing (or minimizing) the average change in the efficiencies as variables are added to or dropped from the analysis. After developing the stepwise procedure, applications from classic DEA studies are presented and the new managerial insights gained from the stepwise procedure are discussed. We discuss how this easy-to-understand and intuitively sound method yields useful managerial results and assists in identifying DEA models that include the variables with the largest impact on the DEA results.
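A hedged sketch of the forward-stepwise idea is shown below. The helper `dea_efficiencies(inputs, outputs)` is hypothetical (it stands in for any DEA solver returning one efficiency score per decision-making unit), and `data` is assumed to be a pandas-style DataFrame; neither comes from the cited paper.

```python
def stepwise_variable_selection(data, candidates, core_inputs, core_outputs,
                                dea_efficiencies):
    """Forward stepwise selection: at each step add the candidate variable
    whose inclusion changes the average efficiency the most.
    `candidates` is a list of (column_name, "input" | "output") pairs;
    `dea_efficiencies` is a hypothetical user-supplied DEA solver."""
    selected_in, selected_out = list(core_inputs), list(core_outputs)
    base = dea_efficiencies(data[selected_in], data[selected_out])
    remaining, history = list(candidates), []
    while remaining:
        best_var, best_delta, best_scores = None, -1.0, None
        for var, role in remaining:
            ins = selected_in + [var] if role == "input" else selected_in
            outs = selected_out + [var] if role == "output" else selected_out
            scores = dea_efficiencies(data[ins], data[outs])
            delta = abs(scores.mean() - base.mean())
            if delta > best_delta:
                best_var, best_delta, best_scores = (var, role), delta, scores
        history.append((best_var, best_delta))      # variable added and its impact
        remaining.remove(best_var)
        var, role = best_var
        (selected_in if role == "input" else selected_out).append(var)
        base = best_scores
    return history
```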

5.
In this work, the NP-hard maximum clique problem on graphs is considered. Starting from basic greedy heuristics, modifications and improvements are proposed and combined into a two-phase heuristic procedure. In the first phase, an improved greedy procedure is applied starting from each node of the graph; on the basis of the results of this phase, a reduced subset of nodes is selected and an adaptive greedy algorithm is repeatedly restarted to build cliques around these nodes. In each restart, the selection of nodes is biased by the maximal clique generated in the previous execution. Computational results are reported on the DIMACS benchmark suite. Remarkably, the two-phase procedure successfully solves the difficult Brockington–Culberson instances and is generally competitive with much more complex state-of-the-art heuristics.
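For orientation, a minimal version of the phase-one idea (one greedy clique grown from every node) is sketched below; the paper's adaptive, biased restarts are not reproduced, and the adjacency-set input format is an assumption.

```python
import random

def greedy_clique(adj, start, rng=random):
    """Grow a maximal clique around `start`, always adding the candidate with
    the most neighbours inside the remaining candidate set.
    `adj` maps each node to the set of its neighbours (no self-loops)."""
    clique = {start}
    candidates = set(adj[start])
    while candidates:
        best = max(candidates,
                   key=lambda v: (len(adj[v] & candidates), rng.random()))
        clique.add(best)
        candidates &= adj[best]          # keep only nodes adjacent to the whole clique
    return clique

def multi_start_clique(adj):
    """Phase-one style multistart: one greedy run per node, keep the largest."""
    best = set()
    for v in adj:
        c = greedy_clique(adj, v)
        if len(c) > len(best):
            best = c
    return best
```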

6.
The objective of this article is to present a step-by-step problem-solving procedure for shape optimization. The procedure is carried out to design an airfoil in the presence of compressible, viscous flow using a control-theory approach based on measure theory. An optimal shape design (OSD) problem governed by the full Navier-Stokes equations is given. Then, a weak variational form is derived from the linearized governing equations. Because the measure-theory (MT) approach works on a fixed geometry rather than a moving one, a suitable bijective transformation is introduced. Finally, an approximating linear programming (LP) problem of the original shape optimization problem is obtained by means of the MT approach, which is not iterative and does not need any initial guess to proceed. Illustrative examples are provided to demonstrate the efficiency of the proposed procedure.

7.
The generalized information criterion (GIC) proposed by Rao and Wu [A strongly consistent procedure for model selection in a regression problem, Biometrika 76 (1989) 369-374] is a generalization of Akaike's information criterion (AIC) and the Bayesian information criterion (BIC). In this paper, we extend the GIC to select linear mixed-effects models that are widely applied in analyzing longitudinal data. The procedure for selecting fixed effects and random effects based on the extended GIC is provided. The asymptotic behavior of the extended GIC method for selecting fixed effects is studied. We prove that, under mild conditions, the selection procedure is asymptotically loss efficient regardless of the existence of a true model and consistent if a true model exists. A simulation study is carried out to empirically evaluate the performance of the extended GIC procedure. The results from the simulation show that if the signal-to-noise ratio is moderate or high, the percentages of choosing the correct fixed effects by the GIC procedure are close to one for finite samples, while the procedure performs relatively poorly when it is used to select random effects.
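(Illustrative note, notation assumed rather than taken from the cited papers.) This family of criteria can be written in the generic penalized-likelihood form

\[
\mathrm{GIC}_{\lambda_n}(M) \;=\; -2\,\log L\!\left(\hat{\theta}_M\right) \;+\; \lambda_n\, d_M ,
\]

where d_M is the number of free parameters of candidate model M and λ_n is the penalty coefficient: λ_n = 2 recovers AIC and λ_n = log n recovers BIC, while consistency-type results typically require λ_n to grow with the sample size n (with λ_n/n → 0).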

8.
9.
We give a simple proof of the existence of a path on the “border of water and rocks,” based on a combinatorial induction procedure, and we present an algorithm for computing an L1 shortest path in a “Fjord Scenery.” The proposed algorithm is a version of the Dijkstra technique adapted to a rectangular map with a square network. A few pre-processing modifications of the algorithm, following from the combinatorial procedure, are included. The validity of this approach is shown by numerical calculations for an example.
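A minimal sketch of the underlying idea (Dijkstra on a rectangular grid with unit, i.e. L1, step costs and blocked "rock" cells) is given below; the paper's pre-processing modifications are omitted, and all names are illustrative.

```python
import heapq

def grid_shortest_path(passable, start, goal):
    """Dijkstra on a rectangular grid with 4-neighbour moves of unit (L1) cost.
    `passable[r][c]` is True for water cells, False for rocks;
    `start` and `goal` are (row, col) pairs."""
    rows, cols = len(passable), len(passable[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                                  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and passable[nr][nc]:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None                                       # goal unreachable
```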

10.
A method for simulating a stationary Gaussian process on a fine rectangular grid in [0, 1]^d ⊂ ℝ^d is described. It is assumed that the process is stationary with respect to translations of ℝ^d, but the method does not require the process to be isotropic. As with some other approaches to this simulation problem, our procedure uses discrete Fourier methods and exploits the efficiency of the fast Fourier transform. However, the introduction of a novel feature leads to a procedure that is exact in principle when it can be applied. It is established that sufficient conditions for it to be possible to apply the procedure are (1) the covariance function is summable on ℝ^d, and (2) a certain spectral density on the d-dimensional torus, which is determined by the covariance function on ℝ^d, is strictly positive. The procedure can cope with more than 50,000 grid points in many cases, even on a relatively modest computer. An approximate procedure is also proposed to cover cases where it is not feasible to apply the procedure in its exact form.
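The sketch below illustrates the FFT-based (circulant-embedding) idea in one dimension only; the cited method treats the general d-dimensional case and its exactness conditions differ in detail. The function and parameter names are assumptions, and `cov(k)` is a user-supplied covariance at integer lag k.

```python
import numpy as np

def circulant_embedding_1d(cov, n, rng=None):
    """Simulate a zero-mean stationary Gaussian sequence on n grid points,
    exactly when the circulant embedding is nonnegative definite."""
    rng = np.random.default_rng() if rng is None else rng
    m = 1
    while m < 2 * (n - 1):
        m *= 2                                     # embedding length (power of two)
    lags = np.minimum(np.arange(m), m - np.arange(m))
    row = np.array([cov(k) for k in lags], dtype=float)   # first row of circulant
    lam = np.fft.fft(row).real                     # circulant eigenvalues
    if lam.min() < -1e-10:
        raise ValueError("embedding not nonnegative definite; enlarge m")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    field = np.fft.fft(np.sqrt(lam / m) * z)       # complex field with the target covariance
    return field.real[:n]                          # field.imag[:n] is a second sample
```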

11.
An iterative procedure for the synthesis of discrete minimum-amplitude and minimum-time controls is presented. The algorithm is based on some new relations obtained by extending well-known results on the minimum-energy control problem as given in Refs. 1–3. This approach yields a set of implicit algebraic equations from which the desired optimal control sequence is determined by the iteration procedure referred to above. The algorithm has the advantage that convergence to the optimal solution can be guaranteed. Simplicity of the recursion formulas and insensitivity to numerical errors make the procedure well suited for on-line or off-line computations. This work was done at the Institut für Regelungstechnik, Technische Universität Berlin, West Berlin, Germany. The author is indebted to Professor G. Schneider for many stimulating discussions and criticisms during the course of this research.

12.
The numerical differentiation of data divides naturally into two distinct problems:
  1. the differentiation of exact data, and
  2. the differentiation of non-exact (experimental) data.
In this paper, we examine the latter. Because methods developed for exact data are based on abstract formalisms which are independent of the structure within the data, they prove, except for the regularization procedure of Cullum, to be unsatisfactory for non-exact data. We therefore adopt the point of view that satisfactory methods for non-exact data must take the structure within the data into account in some natural way, and use the concepts of regression and spectrum analysis as a basis for the development of such methods. The regression procedure is used when either the structure within the non-exact data is known on independent grounds, or the assumptions which underlie the spectrum analysis procedure [viz., stationarity of the (detrended) data] do not apply; in this latter case, the data could be modelled using splines. The spectrum analysis procedure is used when the structure within the non-exact data (or a suitable transformation of it, where the transformation can be differentiated exactly) behaves as if it were generated by a stationary stochastic process. By proving that the regularization procedure of Cullum is equivalent to a certain spectrum analysis procedure, we derive a fast Fourier transform implementation for regularization in which an acceptable value of the regularization parameter is estimated directly from a time-series formulation based on this equivalence. Compared with the regularization procedure, which involves O(n³) operations (where n is the number of data points), the fast Fourier transform implementation involves only O(n log n).
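A minimal sketch of spectral differentiation of noisy data is given below: the derivative is taken in the Fourier domain and high frequencies are damped by a hard low-pass window, which plays the role of the regularization parameter. This is only a stand-in for the data-driven parameter estimate described in the abstract, and it assumes equally spaced, approximately periodic (detrended) data; all names are illustrative.

```python
import numpy as np

def fft_derivative(y, dx, keep_fraction=0.2):
    """Differentiate noisy, equally spaced data via the FFT, keeping only the
    lowest `keep_fraction` of the frequency range (crude regularization)."""
    n = len(y)
    Y = np.fft.fft(y)
    freq = np.fft.fftfreq(n, d=dx)            # cycles per unit length
    omega = 2j * np.pi * freq                 # differentiation operator in Fourier space
    cutoff = keep_fraction * np.abs(freq).max()
    lowpass = np.abs(freq) <= cutoff          # hard spectral smoothing window
    return np.fft.ifft(omega * Y * lowpass).real
```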

13.
On the basis of a statistical approach presupposing direct utilization of experimental data produced in modelling operations, a procedure is proposed for computational estimates on composite materials in which allowance is made for the initial defects, the variable properties of the material, and the conditions of use (physical medium). Experimental data on Plexiglas shells which confirm the practical applicability of the procedure are given. Questions relating to the design of tests on composite shells are discussed. Translated from Mekhanika Polimerov, No. 4, pp. 743–745, July–August, 1972.

14.
We consider the spectral theory and inverse scattering problem for discrete Schrödinger operators on the hexagonal lattice. We give a procedure for reconstructing finitely supported potentials from the scattering matrices for all energies. The same procedure is applicable for the inverse scattering problem on the triangle lattice.

15.
Product line selection and pricing under a share-of-surplus choice model
Product line selection and pricing decisions are critical to the profitability of many firms, particularly in today’s competitive business environment in which providers of goods and services are offering a broad array of products to satisfy customer needs. We address the problem of selecting a set of products to offer, and their prices, when customers select among the offered products according to a share-of-surplus choice model. A customer’s surplus is defined as the difference between his utility (willingness to pay) and the price of the product. Under the share-of-surplus model, the fraction of a customer segment that selects a product is defined as the ratio of the segment’s surplus from this particular product to the segment’s total surplus across all offered products with positive surplus for that segment. We develop a heuristic procedure for this non-concave, mixed-integer optimization problem. The procedure utilizes simulated annealing to handle the binary product-selection variables, and a steepest-ascent-style procedure that relies on certain structural properties of the objective function to handle the non-concave, continuous portion of the problem involving the prices. We also develop a variant of our procedure to handle uncertainty in customer utilities. In computational studies, our basic procedures perform extremely well, producing solutions whose objective values are within about 5% of those obtained via enumerative methods. Our procedure for handling uncertain utilities also performs well, producing solutions with expected profit values that are roughly 10% higher than the corresponding expected profits from solutions obtained under the assumption of deterministic utilities.
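The share-of-surplus rule described in the abstract can be written down directly; the short sketch below computes the segment shares for a given price vector (function and variable names are illustrative, not from the paper).

```python
def share_of_surplus(utilities, prices):
    """Fraction of a customer segment choosing each offered product.
    `utilities[j]` is the segment's willingness to pay for product j and
    `prices[j]` its price; only products with positive surplus attract demand."""
    surplus = [max(u - p, 0.0) for u, p in zip(utilities, prices)]
    total = sum(surplus)
    if total == 0.0:                 # no product offers positive surplus: no purchase
        return [0.0] * len(surplus)
    return [s / total for s in surplus]

# example: one segment, two products
shares = share_of_surplus(utilities=[10.0, 7.0], prices=[6.0, 6.5])
# surplus = [4.0, 0.5]  ->  shares = [4/4.5, 0.5/4.5] ≈ [0.889, 0.111]
```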

16.
Stein's two-stage procedure produces a t-test which can realize a prescribed power against a given alternative, regardless of the unknown variance of the underlying normal distribution. This is achieved by determining the size of a second sample on the basis of a variance estimate derived from the first sample. In this paper we introduce a nonparametric competitor of this classical procedure by replacing the t-test with a rank test. For rank tests, the most precise information available consists of asymptotic expansions for their power to order n⁻¹, where n is the sample size. Using results on combinations of rank tests for sub-samples, we obtain the same level of precision for the two-stage case. In this way we can determine the size of the additional sample to the natural order and, moreover, compare the nonparametric and the classical procedure in terms of the expected additional number of observations required.
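(Illustrative note, notation assumed.) In its classical form the two-stage rule takes n₀ first-stage observations, computes the sample variance S₀², and then fixes the total sample size from S₀² and the required precision, for example

\[
N \;=\; \max\!\left\{\, n_0,\;\left\lceil \frac{S_0^{2}\, t^{2}}{d^{2}} \right\rceil \right\},
\qquad n_1 \;=\; N - n_0 ,
\]

where t is a Student-t quantile with n₀−1 degrees of freedom fixed by the prescribed level (or level and power) and d is the prescribed precision (half-width or detectable shift); the n₁ second-stage observations are then drawn and the final t-statistic still uses S₀. The nonparametric version in the paper replaces the t-test by a rank test and determines n₁ from the power expansions mentioned above.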

17.
In this paper we address the problem of routing school buses in a rural area. We approach this problem with a node-routing model with multiple objectives that arise from conflicting viewpoints. From the point of view of cost, it is desirable to minimise the number of buses used to transport students from their homes to school and back. From the point of view of service, it is desirable to minimise the time that a given student spends en route. The current literature deals primarily with single-objective problems, and models with multiple objectives typically employ a weighted function to combine the objectives into a single one. We develop a solution procedure that considers each objective separately and searches for a set of efficient solutions instead of a single optimum. Our solution procedure is based on constructing, improving and then combining solutions within the framework of the evolutionary approach known as scatter search. Experimental testing with real data is used to assess the merit of our proposed procedure.

18.
We introduce a novel global optimization method called Continuous GRASP (C-GRASP) which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus making it a well-suited approach for solving global optimization problems. We illustrate the effectiveness of the procedure on a set of standard test problems as well as two hard global optimization problems.

19.
The problem of finding interior eigenvalues of a large nonsymmetric matrix is examined. A procedure for extracting approximate eigenpairs from a subspace is discussed. It is related to the Rayleigh–Ritz procedure, but is designed for finding interior eigenvalues. Harmonic Ritz values and other approximate eigenvalues are generated. This procedure can be applied to the Arnoldi method, to preconditioning methods, and to other methods for nonsymmetric eigenvalue problems that use the Rayleigh–Ritz procedure. The subject of estimating the boundary of the entire spectrum is briefly discussed, and the importance of preconditioning for interior eigenvalue problems is mentioned. © 1998 John Wiley & Sons, Ltd.
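(Illustrative note: a common textbook formulation, not necessarily the exact one used in the article.) Given a subspace with basis V and a shift σ in the interior of the spectrum, harmonic Ritz pairs (θ, x = Vy) are typically obtained from the projected generalized eigenproblem

\[
V^{*}(A-\sigma I)^{*}(A-\sigma I)\,V\,y \;=\; (\theta-\sigma)\,V^{*}(A-\sigma I)^{*}\,V\,y ,
\]

which enforces that the residual Ax − θx is orthogonal to (A − σI)·range(V); this bias toward the shifted-and-inverted operator is what makes the approximations reliable for eigenvalues near σ, where ordinary Rayleigh–Ritz values can be spurious.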

20.
This paper presents a biased random-key genetic algorithm for the resource constrained project scheduling problem. The chromosome representation of the problem is based on random keys. Active schedules are constructed using a priority-rule heuristic in which the priorities of the activities are defined by the genetic algorithm. A forward-backward improvement procedure is applied to all solutions. The chromosomes supplied by the genetic algorithm are adjusted to reflect the solutions obtained by the improvement procedure. The heuristic is tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
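A generic biased random-key GA skeleton is sketched below. The problem-specific parts of the paper (the priority-rule schedule-generation decoder and the forward-backward improvement) are not reproduced: `decode` is a placeholder that maps a key vector in [0,1]ⁿ to a fitness value, and all parameter names and defaults are illustrative assumptions.

```python
import random

def brkga(decode, n_keys, pop_size=100, elite_frac=0.2, mutant_frac=0.1,
          elite_bias=0.7, generations=200, rng=random):
    """Biased random-key GA: elites survive, offspring come from biased uniform
    crossover between an elite and a non-elite parent, mutants are fresh keys."""
    n_elite = max(1, int(elite_frac * pop_size))
    n_mutant = max(1, int(mutant_frac * pop_size))

    def random_keys():
        return [rng.random() for _ in range(n_keys)]

    pop = [random_keys() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=decode)               # lower fitness is better
        elite, rest = ranked[:n_elite], ranked[n_elite:]
        offspring = []
        while len(offspring) < pop_size - n_elite - n_mutant:
            e, o = rng.choice(elite), rng.choice(rest)
            # each key is inherited from the elite parent with prob. elite_bias
            child = [e[i] if rng.random() < elite_bias else o[i]
                     for i in range(n_keys)]
            offspring.append(child)
        pop = elite + offspring + [random_keys() for _ in range(n_mutant)]
    return min(pop, key=decode)

# toy usage: minimise the sum of the keys
best = brkga(decode=sum, n_keys=5, generations=50)
```

For the scheduling application, the keys would be interpreted as activity priorities fed to a serial schedule-generation scheme, and the keys of improved schedules would be written back into the chromosome, as the abstract describes.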
