Similar Documents
 20 similar documents found (search time: 32 ms)
1.
Based on the equilibrium efficient frontier data envelopment analysis (EEFDEA) approach, Fang (J Oper Res Soc 67:412–420, 2015a) developed an equivalent linear programming model to improve and strengthen the EEFDEA approach. Furthermore, Fang (2015a) indicated that his secondary goal approach can achieve a unique equilibrium efficient frontier. However, through a simple counterexample we demonstrate that Fang’s secondary goal approach cannot always achieve uniqueness of the equilibrium efficient frontier. In this paper, we propose an algorithm based on the secondary goal approach to address the problem. The proposed algorithm is proven mathematically to be an effective approach to guaranteeing the uniqueness of the equilibrium efficient frontier.

2.
Recently flexible manufacturing systems (FMSs) have been modelled as closed networks of queues. In this paper we develop an exponentialization approach to the modelling of FMS networks with general processing times. The idea of the approach is to transform the network into an (approximately) equivalent exponential network, where each station has exponential processing times with state-dependent rates. The approach is formulated as a fixed-point problem. Numerical examples have indicated excellent accuracies of the approach. This approach can also be readily adapted to accommodate limited local buffers and dynamic parts routing.

3.
We introduce the weak approach structure for an arbitrary locally convex approach space and generalize the results from [1] about the weak approach structure of a normed space. To this end, we carefully develop the notion of a closed dual unit ball in an abstract setting (as a special kind of absolutely convex subset), because it is this kind of structure on the algebraic dual that induces, in a duality-compatible way, a locally convex approach structure on the original space.

4.
A new approach to certain motion-planning problems in robotics is introduced. This approach is based on the use of a generalized Voronoi diagram, and reduces the search for a collision-free continuous motion to a search for a connected path along the edges of such a diagram. This approach yields an O(n log n) algorithm for planning an obstacle-avoiding motion of a single circular disc amid polygonal obstacles. Later papers will show that extensions of the approach can solve other motion-planning problems, including those of moving a straight line segment or several coordinated discs in the plane amid polygonal obstacles.
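The retraction idea in the abstract above can be illustrated with a much-simplified sketch: for point obstacles (the paper treats polygonal ones) we build the Voronoi diagram with SciPy, keep only diagram edges whose endpoints have clearance at least the disc radius, and run a plain breadth-first search instead of the paper's O(n log n) procedure. All names below are illustrative, not the paper's.

```python
import numpy as np
from collections import deque
from scipy.spatial import Voronoi

def voronoi_path(obstacles, start, goal, radius):
    """Plan a path for a disc of given radius amid *point* obstacles by
    searching along the edges of their Voronoi diagram (an illustrative
    simplification of the generalized-Voronoi approach in the abstract)."""
    vor = Voronoi(obstacles)
    # Clearance of a Voronoi vertex = distance to the nearest obstacle.
    clearance = lambda p: np.min(np.linalg.norm(obstacles - p, axis=1))
    ok = np.array([clearance(v) >= radius for v in vor.vertices])
    # Adjacency over finite ridges whose endpoints both have enough clearance.
    adj = {i: [] for i in range(len(vor.vertices))}
    for a, b in vor.ridge_vertices:
        if a != -1 and b != -1 and ok[a] and ok[b]:
            adj[a].append(b)
            adj[b].append(a)
    def nearest_ok_vertex(p):
        d = np.linalg.norm(vor.vertices - p, axis=1)
        d[~ok] = np.inf
        return int(np.argmin(d))
    s, g = nearest_ok_vertex(start), nearest_ok_vertex(goal)
    prev = {s: None}
    q = deque([s])
    while q:                      # breadth-first search along Voronoi edges
        u = q.popleft()
        if u == g:
            path = []
            while u is not None:
                path.append(vor.vertices[u])
                u = prev[u]
            return path[::-1]
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    return None                   # no collision-free path along the diagram

# Example: a slightly jittered grid of point obstacles, disc of radius 0.5.
rng = np.random.default_rng(0)
grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
obstacles = grid + rng.normal(0.0, 0.01, grid.shape)
path = voronoi_path(obstacles, np.array([0.5, 0.5]), np.array([3.5, 3.5]), 0.5)
```

Every vertex on the returned path keeps clearance at least `radius` by construction, which is exactly the property that makes the Voronoi diagram the right skeleton for disc motion planning.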

5.
The fully subjective, or fully Bayesian approach discussed in this paper provides straightforward means of presenting uncertainty related to future events to decision makers. This approach is described in the context of an inspection maintenance decision problem, and is contrasted with the classical probabilistic approach that assumes the existence of true probabilities and probability distributions which have to be estimated. Key features of the fully subjective approach are discussed, including integration of engineering judgements, uncertainty treatment and type of performance measures to be used. An example related to the problem of identifying “optimal” inspection intervals for an extrusion press is used to illustrate the principles described.

6.
Recently, Eva Tardos developed an algorithm for solving the linear program min {cx : Ax = b, x ≥ 0} whose solution time is polynomial in the size of A, independent of the sizes of c and b. Her algorithm focuses on the dual LP and employs an approximation of the cost coefficients. Here we adopt what may be called a ‘dual approach’ in that it focuses on the primal LP. This dual approach has some significant differences from Tardos’s approach, which make it conceptually simpler.
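To make the standard-form LP in the abstract concrete, here is a small instance solved with a generic solver; this merely illustrates the problem min {cx : Ax = b, x ≥ 0}, and is of course not Tardos's strongly polynomial algorithm (whose running time is independent of c and b).

```python
import numpy as np
from scipy.optimize import linprog

# The LP in standard equality form: min c^T x  s.t.  A x = b, x >= 0.
c = np.array([2.0, 3.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)  # the optimum puts all mass on the zero-cost variable
```

The point of Tardos's result is that for this problem class the arithmetic complexity can be bounded by the size of A alone, something no generic solver call exhibits.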

7.
We introduce a new approach which facilitates the calculation of the covering radius of a binary linear code. It is based on determining the normalized covering radius ϱ. For codes of fixed dimension we give upper and lower bounds on ϱ that are reasonably close. As an application, an explicit formula is given for the covering radius of an arbitrary code of dimension ⩽4. This approach also sheds light on whether or not a code is normal. All codes of dimension ⩽4 are shown to be normal, and an upper bound is given for the norm of an arbitrary code. This approach also leads to an amusing generalization of the Berlekamp-Gale switching game.
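For readers unfamiliar with the quantity involved, the following sketch computes the covering radius of a small binary linear code directly from its definition by brute force; the normalized-radius machinery in the abstract is precisely what makes larger codes tractable, and is not reproduced here.

```python
import itertools

def covering_radius(generator_rows, n):
    """Brute-force covering radius of the binary linear code spanned by
    generator_rows (each an n-bit tuple): the maximum, over all words in
    {0,1}^n, of the Hamming distance to the nearest codeword."""
    k = len(generator_rows)
    codewords = set()
    for coeffs in itertools.product((0, 1), repeat=k):
        w = tuple(sum(c * g[j] for c, g in zip(coeffs, generator_rows)) % 2
                  for j in range(n))
        codewords.add(w)
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return max(min(dist(v, c) for c in codewords)
               for v in itertools.product((0, 1), repeat=n))

# Example: the [3,1] binary repetition code {000, 111} has covering radius 1.
print(covering_radius([(1, 1, 1)], 3))  # -> 1
```

The exponential cost in n of this direct computation is what motivates bounds and explicit formulas of the kind the paper derives for dimension ⩽4.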

8.
Determination of the time evolution of the scattering data for an inverse scattering transform solution of the forced Toda lattice appears to require an overspecification of the boundary condition at the end of the lattice. This appears in the form of an apparent need to specify the values of two functions at the boundary rather than one. We present three different approaches to the resolution of this problem. One approach gives the Maclaurin series (in time) for the scattering data. The second approach gives the scattering data in terms of the solution to a nonlinear, nonlocal partial differential equation. The third approach gives the scattering data in terms of the solution to a linear integral equation. All three approaches reduce to one the number of functions which must be specified to determine a solution. The advantages and limitations of each approach are discussed.

9.
One of the scalability bottlenecks for the large-scale usage of Gaussian processes is the computation of the maximum likelihood estimates of the parameters of the covariance matrix. The classical approach requires a Cholesky factorization of the dense covariance matrix for each optimization iteration. In this work, we present an estimating equations approach for the parameters of zero-mean Gaussian processes. The distinguishing feature of this approach is that no linear system needs to be solved with the covariance matrix. Our approach requires solving an optimization problem for which the main computational expense for the calculation of its objective and gradient is the evaluation of traces of products of the covariance matrix with itself and with its derivatives. For many problems, this is an O(n log n) effort, and it is always no larger than O(n²). We prove that when the covariance matrix has a bounded condition number, our approach has the same convergence rate as does maximum likelihood in that the Godambe information matrix of the resulting estimator is at least as large as a fixed fraction of the Fisher information matrix. We demonstrate the effectiveness of the proposed approach on two synthetic examples, one of which involves more than 1 million data points.
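Traces of matrix products like those in the abstract are commonly evaluated with stochastic (Hutchinson-type) estimators, which need only matrix-vector products; the sketch below shows that standard technique on a toy covariance matrix, and is not a reconstruction of the paper's particular scheme.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=200, seed=None):
    """Stochastic (Hutchinson) estimate of tr(M) using only matrix-vector
    products z -> M z with Rademacher probe vectors: E[z^T M z] = tr(M)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        total += z @ matvec(z)
    return total / num_probes

# Example: estimate tr(K @ dK) without ever forming K @ dK explicitly.
n = 50
K = np.eye(n) + 0.1 * np.ones((n, n))   # toy covariance matrix (assumed form)
dK = np.ones((n, n))                     # toy derivative w.r.t. a parameter
est = hutchinson_trace(lambda z: K @ (dK @ z), n, num_probes=2000, seed=1)
print(est, np.trace(K @ dK))
```

When K admits a fast matrix-vector product (structured or sparse kernels), each probe costs O(n log n) or less, which is where complexity bounds of the kind quoted in the abstract come from.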

10.
Given a polynomial system f, a fundamental question is to determine if f has real roots. Many algorithms involving the use of infinitesimal deformations have been proposed to answer this question. In this article, we transform an approach of Rouillier, Roy, and Safey El Din, which is based on a classical optimization approach of Seidenberg, to develop a homotopy based approach for computing at least one point on each connected component of a real algebraic set. Examples are presented demonstrating the effectiveness of this parallelizable homotopy based approach.

11.
This paper stresses the importance of focusing on the modeling process of computational models for precisely understanding a complex organization and for solving given problems in the organization. Based on this claim, we propose a method of interpretation by implementation (IbI), which explores factors that drastically change simulation results through an investigation of the modeling process of computational models. A careful investigation of the capabilities of the IbI approach, which comprises the three methods of (a) breakdown and representation, (b) assumption or premise modification, and (c) layer change investigation, derives the following conclusions: (1) the IbI approach has the potential of finding underlying factors that determine the characteristics of an organization; (2) the IbI approach can specify points of attention at necessary levels when analyzing an organization; and (3) the IbI approach has such advantages as wide applicability, the effective use of employed models, and KISS principle support.

12.
We study the sensor cover energy problem (SCEP) in wireless communication—a difficult nonconvex problem with nonconvex constraints. A local approach based on DC programming called DCA was proposed by Astorino and Miglionico (Optim Lett 10(2):355–368, 2016) for solving this problem. In the present paper, we propose a global approach to (SCEP) based on the theory of monotonic optimization. By using an appropriate reformulation of (SCEP) we propose an algorithm for finding quickly a local optimal solution along with an efficient algorithm for computing a global optimal solution. Computational experiments are reported which demonstrate the practicability of the approach.

13.
In this paper we present the intermediate approach to investigating asymptotic power and measuring the efficiency of nonparametric goodness-of-fit tests for testing uniformity. Contrary to the classical Pitman approach, the intermediate approach allows the explicit quantitative comparison of powers and calculation of efficiencies. For standard tests, like the Cramér-von Mises test, the intermediate approach gives conclusions consistent with qualitative results obtained using the Pitman approach. For other more complicated cases the Pitman approach does not give the right picture of power behaviour. An example is the data-driven Neyman test we present in this paper. In this case the intermediate approach gives results consistent with finite sample results. Moreover, using this setting, we prove that the data-driven Neyman test is asymptotically the most powerful and efficient under any smooth departures from uniformity. This result shows that, contrary to classical tests being efficient and the most powerful under one particular type of departure from uniformity, the new test is an adaptive one.

14.
In this paper, we develop a new graph colouring strategy. Our heuristic is an example of a so-called polynomially searchable exponential neighbourhood approach. The neighbourhood is that of permutations of the colours of vertices of a subgraph. Our approach provides a solution method for colouring problems with edge weights. Results for initial tests on unweighted K-colouring benchmark problems are presented. Our colour permutation move was found in practice to be too slow to justify its use on these problems. By contrast, our implementation of iterative descent, which incorporates a permutation kickback move, performed extremely well. Moreover, our approach may yet prove valuable for weighted K-colouring. In addition, our approach offers an improved measure of the distance between colourings of a graph.

15.
We explore an approach to possibilistic fuzzy clustering that avoids a severe drawback of the conventional approach, namely that the objective function is truly minimized only if all cluster centers are identical. Our approach is based on the idea that this undesired property can be avoided if we introduce a mutual repulsion of the clusters, so that they are forced away from each other. We develop this approach for the possibilistic fuzzy c-means algorithm and the Gustafson–Kessel algorithm. In our experiments we found that in this way we can combine the partitioning property of the probabilistic fuzzy c-means algorithm with the advantages of a possibilistic approach w.r.t. the interpretation of the membership degrees.

16.
In this paper, we develop an approach that determines the overall best parameter setting in design of experiments. The approach starts with successive orthogonal array experiments and ends with a full factorial experiment. The setup for the next orthogonal-array experiment is obtained from the previous one by either fixing a factor at a given level or by reducing the number of levels considered for all currently non-fixed factors. We illustrate this method using an industrial problem with seven parameters, each with three levels. In previous work, the full factorial of 3⁷ = 2,187 points was evaluated and the best point was found. With the new method, we found the same point using 3% of these evaluations. As a further comparison, we obtained the optimum using a traditional Taguchi approach, and found it corresponded to the 366th of the 2,187 possibilities when sorted by the objective function. We conclude the proposed approach would provide an accurate, fast, and economical tool for optimization using design of experiments.
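The factor-fixing idea in the abstract can be sketched with a deliberately crude stand-in: fix one factor per round at its best level while holding the others at a baseline, then compare the evaluation count against the 3⁷-point full factorial. The paper's method uses orthogonal arrays, not the one-factor-at-a-time sweep shown here.

```python
import itertools

def greedy_level_fixing(objective, num_factors=7, levels=(0, 1, 2)):
    """Fix one factor per round at its best level, holding the others at
    their current values -- a crude illustration of successively shrinking
    the design space instead of enumerating the full factorial."""
    setting = [levels[0]] * num_factors
    evals = 0
    for i in range(num_factors):
        best = min(levels,
                   key=lambda lv: objective(setting[:i] + [lv] + setting[i + 1:]))
        evals += len(levels)      # one evaluation per candidate level
        setting[i] = best
    return setting, evals

# A separable toy objective, minimized at the all-(level 2) setting.
obj = lambda x: sum((xi - 2) ** 2 for xi in x)
best, evals = greedy_level_fixing(obj)
full = min(itertools.product((0, 1, 2), repeat=7), key=obj)
print(best, evals)  # finds the full-factorial optimum in 21 of 2,187 evaluations
```

On a separable objective the greedy sweep is exact; the value of the orthogonal-array version in the paper is that it remains effective when factors interact.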

17.
In this paper, we propose a greedy heuristic for the 2D rectangular packing problem (2DRP) that represents packings using a skyline; the use of this heuristic in a simple tabu search approach outperforms the best existing approach for the 2DRP on benchmark test cases. We then make use of this 2DRP approach as a subroutine in an “iterative doubling” binary search on the height of the packing to solve the 2D rectangular strip packing problem (2DSP). This approach outperforms all existing approaches on standard benchmark test cases for the 2DSP.

18.
As noise is omnipresent, real-world quantities measured by scientists and engineers are commonly obtained in the form of statistical distributions. In turn, perhaps the most compact representation of a given statistical distribution is via the mean-variance approach: the mean manifesting the distribution’s ‘typical’ value, and the variance manifesting the magnitude of the distribution’s fluctuations about its mean. The mean-variance approach is based on an underlying Euclidean-geometry perspective. Very often, real-world quantities of interest are non-negative sizes, and their measurements yield statistical size distributions. In this paper, in the context of size distributions, we present an alternative to the Euclidean-based mean-variance approach: a mean-equality approach that is based on an underlying socioeconomic perspective. We establish two equality indices that score, on a unit-interval scale, the intrinsic ‘egalitarianism’ of size distributions: (i) the poverty equality index, which is particularly sensitive to the existence of very small “poor” sizes; and (ii) the riches equality index, which is particularly sensitive to the existence of very large “rich” sizes. These equality indices, their properties, their computation, their application, and their connections to the mean-variance approach are explored and described comprehensively.
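As a concrete point of reference for unit-interval equality scores of the kind the abstract describes, the sketch below computes the classical Gini-based equality score 1 − G of a size distribution. This is a related socioeconomic measure, not the paper's poverty or riches equality index.

```python
import numpy as np

def gini_equality(sizes):
    """Unit-interval equality score 1 - G, where G is the Gini index of a
    non-negative size distribution: 1.0 for perfectly equal sizes, smaller
    values for more unequal ones."""
    x = np.sort(np.asarray(sizes, dtype=float))
    n = len(x)
    # Gini via the sorted-values formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    i = np.arange(1, n + 1)
    g = 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n
    return 1.0 - g

print(gini_equality([1, 1, 1, 1]))    # -> 1.0 (perfect equality)
print(gini_equality([0, 0, 0, 100]))  # -> 0.25 (the minimum for n = 4)
```

Like the indices in the abstract, this score depends only on relative sizes, so it complements rather than duplicates the mean-variance summary of the distribution.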

19.
Multiple criteria decision making is a well established field encompassing aspects of search for solutions and selection of solutions in the presence of more than one conflicting objectives. In this paper, we discuss an approach aimed towards the latter. The decision maker is presented with a limited number of Pareto optimal outcomes and is required to identify regions of interest for further investigation. The inherent sparsity of the given Pareto optimal outcomes in high dimensional space makes it an arduous task for the decision maker. To address this problem, an existing line of thought in literature is to generate a set of approximated Pareto optimal outcomes using piecewise linear interpolation. We present an approach within this paradigm, but one that delivers a comprehensive linearly interpolated set as opposed to its subset delivered by existing methods. We illustrate the advantage in doing so in comparison to stricter non-dominance conditions imposed in the existing PAreto INTerpolation (PAINT) method. The interpolated set of outcomes delivered by the proposed approach is non-dominated with respect to the given Pareto optimal outcomes, and additionally the interpolated outcomes along uniformly distributed reference directions are presented to the decision maker. The errors in the given interpolations are also estimated in order to further aid decision making by establishing confidence in achieving true Pareto outcomes in their vicinity. The proposed approach for interpolation is computationally less demanding (for higher number of objectives) and also further amenable to parallelization. We illustrate the performance of the approach using six well established tri-objective test problems and two real-life examples. The problems span different types of fronts, such as convex, concave, mixed, and degenerate, highlighting the wide applicability of the approach.

20.
This contribution proposes an approach for solving reliability-based optimization (RBO) problems involving discrete design variables. The proposed approach is based on a decoupling approach and sequential approximations. An application example involving a linear structure under dynamic loading is presented, showing the efficiency of the proposed method. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号