Similar documents
20 similar documents found (search time: 70 ms)
1.
Given A = {a1, …, am} ⊂ R^d whose affine hull is R^d, we study the problems of computing an approximate rounding of the convex hull of A and an approximation to the minimum-volume enclosing ellipsoid of A. In the case of centrally symmetric sets, we first establish that Khachiyan's barycentric coordinate descent (BCD) method is exactly the polar of the deepest cut ellipsoid method using two-sided symmetric cuts. This observation gives further insight into the efficient implementation of the BCD method. We then propose a variant algorithm which computes an approximate rounding of the convex hull of A, and which can also be used to compute an approximation to the minimum-volume enclosing ellipsoid of A. Our algorithm is a modification of the algorithm of Kumar and Yıldırım, which combines Khachiyan's BCD method with a simple initialization scheme to achieve a slightly improved polynomial complexity result, and which returns a small “core set.” We establish that our algorithm computes an approximate solution to the dual optimization formulation of the minimum-volume enclosing ellipsoid problem that satisfies a more complete set of approximate optimality conditions than either of the two previous algorithms. Furthermore, this added benefit is achieved without any increase in the improved asymptotic complexity bound of the algorithm of Kumar and Yıldırım or any increase in the bound on the size of the computed core set. In addition, the “dropping idea” used in our algorithm has the potential of computing smaller core sets in practice. We also discuss several possible variants of this dropping technique.
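As background for the abstract above, Khachiyan's classical BCD iteration for the dual MVEE problem can be sketched in a few lines (a minimal numpy illustration of the underlying method, not the authors' modified algorithm; the function name and stopping rule are our own choices):

```python
import numpy as np

def khachiyan_mvee_weights(A, eps=1e-3, max_iter=10000):
    """Khachiyan's barycentric coordinate-descent (BCD) iteration for the
    dual MVEE problem (textbook sketch, not the paper's variant).
    A is d x m (one point per column); returns dual weights u."""
    d, m = A.shape
    Q = np.vstack([A, np.ones(m)])            # lift points to R^(d+1)
    u = np.full(m, 1.0 / m)                   # uniform initial weights
    for _ in range(max_iter):
        X = Q @ (u[:, None] * Q.T)            # weighted moment matrix
        M = np.einsum('ij,ij->j', Q, np.linalg.solve(X, Q))  # q_j' X^-1 q_j
        j = int(np.argmax(M))
        if M[j] <= (1 + eps) * (d + 1):       # approximate optimality
            break
        step = (M[j] - d - 1) / ((d + 1) * (M[j] - 1))
        u *= 1 - step                         # shift weight toward point j
        u[j] += step
    return u
```

Each iteration moves weight toward the point of largest lifted variance; from the weights one recovers the center c = Au and shape matrix S = A diag(u) Aᵀ − ccᵀ, and at termination every point satisfies (a_j − c)ᵀ S⁻¹ (a_j − c) ≤ (1+ε)(d+1) − 1.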

2.
This paper considers the application of a variable neighborhood search (VNS) algorithm to finite-horizon (H-stage) Markov Decision Processes (MDPs), with the aim of alleviating the “curse of dimensionality” in the search for the global optimum. The main idea behind the VNSMDP algorithm is that, based on the result of the stage just considered, the search for the optimal solution (action) of state x in stage t is conducted systematically in variable neighborhood sets of the current action. Thus, the VNSMDP algorithm searches for the optimum within subsets of the action space rather than over the whole action set. Analyses of the complexity and convergence attributes of the VNSMDP algorithm are also conducted. Theoretical and computational analysis shows that the VNSMDP algorithm succeeds in searching for the global optimum in an efficient way.
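The neighborhood-widening loop the abstract describes can be illustrated with a generic VNS skeleton over a one-dimensional discrete action set (a hypothetical toy, not the paper's VNSMDP; the function name and the index-distance neighborhoods are our assumptions):

```python
import random

def vns_argmin(cost, actions, k_max=3, iters=200, seed=0):
    """Generic variable neighborhood search over a 1-D discrete action
    list (a hypothetical toy, not the paper's VNSMDP).  Neighborhoods
    are index balls of radius k around the incumbent action."""
    rng = random.Random(seed)
    best = rng.randrange(len(actions))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # "shaking": sample a candidate within index-distance k
            lo, hi = max(0, best - k), min(len(actions) - 1, best + k)
            cand = rng.randint(lo, hi)
            # local descent in the smallest neighborhood of the candidate
            for nb in (cand - 1, cand + 1):
                if 0 <= nb < len(actions) and cost(actions[nb]) < cost(actions[cand]):
                    cand = nb
            if cost(actions[cand]) < cost(actions[best]):
                best, k = cand, 1   # improvement: restart from k = 1
            else:
                k += 1              # no improvement: widen the neighborhood
    return actions[best]
```

The key VNS idea is visible in the last branch: only after the smallest neighborhood fails to improve does the search enlarge its radius, so most work stays in small subsets of the action space.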

3.
This paper presents an enumerative approach for a particular sports league scheduling problem known as “Prob026” in CSPLib. Despite its exponential-time complexity, this simple method can solve all instances involving a number T of teams up to 50 in a reasonable amount of time, while the best known tabu search and constraint programming algorithms are limited to T ≤ 40 and the available direct construction methods only solve instances where T/2 is odd. Furthermore, solutions were also found for some T values up to 70. The proposed approach relies on discovering, by observation, interesting properties of solutions of small problem instances and then using these properties in the final algorithm to constrain the search process.

4.
It is known that Polyhedral Feasibility Problems can be solved via interior-point methods whose real number complexity is polynomial in the dimension of the problem and the logarithm of a condition number of the problem instance. A limitation of these results is that they do not apply to ill-posed instances, for which the condition number is infinite. We propose an algorithm for solving polyhedral feasibility problems in homogeneous form that is applicable to all problem instances, and whose real number complexity is polynomial in the dimension of the problem instance and in the logarithm of an “extended condition number” that is always finite.

5.
Given a finite set of points and ε > 0, we propose and analyze an algorithm for the problem of computing a (1+ε)-approximation to the minimum-volume axis-aligned ellipsoid enclosing the point set. We establish that our algorithm is polynomial for fixed ε. In addition, the algorithm returns a small core set, whose size is independent of the number of points m, with the property that the minimum-volume axis-aligned ellipsoid enclosing the core set is a good approximation of the minimum-volume axis-aligned ellipsoid enclosing the full point set. Our computational results indicate that the algorithm exhibits significantly better performance than the theoretical worst-case complexity estimate. This work was supported in part by the National Science Foundation through CAREER Grants CCF-0643593 and DMI-0237415.

6.
We present a complete, decidable logic for reasoning about a notion of completely trustworthy (“conclusive”) evidence and its relations to justifiable (implicit) belief and knowledge, as well as to their explicit justifications. This logic makes use of a number of evidence-related notions such as availability, admissibility, and “goodness” of a piece of evidence, and is based on an innovative modification of the Fitting semantics for Artemov's Justification Logic designed to preempt Gettier-type counterexamples. We combine this with ideas from belief revision and awareness logics to provide an account for explicitly justified (defeasible) knowledge based on conclusive evidence that addresses the problem of (logical) omniscience.

7.
This paper deals with the bi-objective multi-dimensional knapsack problem. We propose the adaptation of the core concept that is effectively used in single-objective multi-dimensional knapsack problems. The main idea of the core concept is based on the “divide and conquer” principle. Namely, instead of solving one problem with n variables we solve several sub-problems with a fraction of n variables (core variables). The quality of the obtained solution can be adjusted according to the size of the core, and there is always a trade-off between the solution time and the quality of the solution. In this study we define the core problem for the multi-objective multi-dimensional knapsack problem. After defining the core we solve the bi-objective integer program that comprises only the core variables using the Multicriteria Branch and Bound algorithm, which can generate the complete Pareto set of small and medium size multi-objective integer programming problems. A small example is used to illustrate the method, while computational and economic issues are also discussed. Computational experiments are also presented using available or appropriately modified benchmarks in order to examine the quality of the Pareto set approximation with respect to the solution time. Extensions to the general multi-objective case as well as to the computation of the exact solution are also mentioned.

8.
Phylogeny reconstruction is too complex a combinatorial problem for an exhaustive search, because the number of possible solutions increases exponentially with the number of taxa involved. In this paper, we adopt the parsimony principle and design a tabu search algorithm for finding a most parsimonious phylogeny tree. A special array structure is employed to represent the topology of trees and to generate the neighboring trees. We test the proposed tabu search algorithm on randomly selected data sets obtained from nuclear ribosomal DNA sequence data. The experiments show that our algorithm explores fewer trees to reach the optimal one than the commonly used program “dnapenny” (branch-and-bound based), while it generates much more accurate results than the default options of the program “dnapars” (heuristic search based). The percentage of the search space needed to find the best solution for our algorithm decreased rapidly as the number of taxa increased. For a 20-taxon phylogeny problem, on average it needs to examine only 3.92 × 10⁻¹⁵% of the search space.
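The tabu search metaheuristic named in the abstract follows a standard pattern, sketched here generically (an illustration of the method only, not the authors' phylogeny code with its array encoding of tree topologies):

```python
from collections import deque

def tabu_search(cost, neighbors, start, tenure=5, iters=100):
    """Generic tabu-search skeleton: repeatedly move to the best
    non-tabu neighbor, keeping recently visited solutions in a short
    tabu list to prevent cycling."""
    best = cur = start
    tabu = deque([start], maxlen=tenure)   # fixed-tenure tabu list
    for _ in range(iters):
        cands = [n for n in neighbors(cur) if n not in tabu]
        if not cands:
            break                      # every move is tabu: stop
        cur = min(cands, key=cost)     # best admissible neighbor
        tabu.append(cur)
        if cost(cur) < cost(best):
            best = cur                 # record new incumbent
    return best
```

Because the search accepts the best admissible move even when it worsens the current solution, it can escape local optima that would trap a pure hill-climber.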

9.
In multivariate statistics under normality, the problems of interest are random covariance matrices (known as Wishart matrices) and “ratios” of Wishart matrices that arise in multivariate analysis of variance (MANOVA) (see [24]). The bimatrix variate beta type IV distribution (also known in the literature as the bimatrix variate generalised beta; a matrix variate generalization of the bivariate beta type I) arises from “ratios” of Wishart matrices. In this paper, we add a further independent Wishart random variate to the “denominator” of one of the ratios; this results in deriving the exact expression for the density function of the bimatrix variate extended beta type IV distribution. The latter leads to the proposal of the bimatrix variate extended F distribution. Some interesting characteristics of these newly introduced bimatrix distributions are explored. Lastly, we focus on the bivariate extended beta type IV distribution (an extension of the bivariate Jones’ beta) with emphasis on P(X1<X2), where X1 is the random stress variate and X2 is the random strength variate.

10.
In this extended study of Proposition VI, and its first corollary, in Book I of Newton's Principia, we clarify both the statements and the demonstrations of these fundamental results. We begin by tracing the evolution of this proposition and its corollary, to see how their texts may have changed from their initial versions. To prepare ourselves for some of the difficulties our study confronts, we then examine certain confusions which arise in two recent commentaries on Proposition VI. We go on to note other confusions, not in any particular commentary, but in Newton's demonstration and, especially, in his statement of the proposition. What, exactly, does Newton mean by a “body [that] revolves … about an immobile center”? By a “just-nascent arc”? By the “sagitta of the arc”? By the “centripetal force”? By “will be as”? We search for the mathematical meanings that Newton has in mind for these fragments of the Proposition VI statement, a search that takes us to earlier sections of the Principia and to discussions of the “method of first and last ratios,” centripetal force, and the second law of motion. The intended meaning of Proposition VI then emerges from the combined meanings of these fragments. Next we turn to the demonstration of Proposition VI, noting first that Newton's own argument could be more persuasive, before we construct a modern, more rigorous proof. This proof, however, is not as simple as one might expect, and the blame for this lies with the “sagitta of the arc,” Newton's measure of deflection in Proposition VI. Replacing the sagitta with a more natural measure of deflection, we obtain what we call Platonic Proposition VI, whose demonstration has a Platonic simplicity. Before ending our study, we examine the fundamental first corollary of Proposition VI. 
In his statement of this Corollary 1, Newton replaces the sagitta of Proposition VI by a not quite equal deflection from the tangent and the area swept out (which represents the time by Proposition I) by a not quite equal area of a triangle. These two approximations create small errors, but are these errors small enough? Do the errors introduced by these approximations tend to zero fast enough to justify these replacements? Newton must believe so, but he leaves this question unasked and unanswered, as have subsequent commentators on this crucial corollary. We end our study by asking and answering this basic question, which then allows us to give Corollary 1 a convincing demonstration.

11.
A product costs the manufacturer c/unit to produce; the retailer sells it at p/unit to the consumers. The retail-market demand volume V varies with p according to a given demand curve D(p). How would or should the “players” (i.e., the manufacturer and the retailer) set their prices? In contrast to many studies that assume a dominant manufacturer implementing the “manufacturer-Stackelberg” (“[mS]”) game, this paper examines how a dominant retailer should operate when his knowledge of c is imperfect. We first derive optimal decisions (some of them counter-intuitive) for the dominant retailer when he is restricted to choosing between [rS] (retailer-Stackelberg) and [mS]. Second, we propose a “reverse quantity discount” scheme that the dominant retailer (i.e., the downstream player) can offer to the manufacturer (note that the standard discount scheme is offered by the upstream player). We show that this discounting scheme is quite effective compared to the considerably more complicated though nevertheless theoretically optimal “menu of contracts.” We also reveal a largely overlooked function of discounting; i.e., discounting enables an “ignorant” but dominant player to usurp the earnings attributable to the knowledge of the dominated player. Finally, we also show that discounting works well when the demand curve is linear, but becomes ineffective when the demand curve is iso-elastic – a result echoing the conclusions of some earlier related works.
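For intuition on the [mS] benchmark the paper contrasts against, the textbook manufacturer-Stackelberg equilibrium under a linear demand curve D(p) = a − b·p has a closed form (a standard double-marginalization sketch with illustrative parameters of our choosing, not the paper's imperfect-information model):

```python
def ms_stackelberg(a, b, c):
    """Manufacturer-Stackelberg ([mS]) equilibrium under the assumed
    linear demand D(p) = a - b*p (textbook sketch).  The manufacturer
    sets the wholesale price w first; the retailer then best-responds
    with the retail price p."""
    w = (a + b * c) / (2 * b)   # manufacturer's optimal wholesale price
    p = (a + b * w) / (2 * b)   # retailer's best response to w
    v = a - b * p               # realized demand volume
    return w, p, (w - c) * v, (p - w) * v

# illustrative parameters (our assumption): a=100, b=1, unit cost c=20
w, p, pi_m, pi_r = ms_stackelberg(a=100, b=1, c=20)   # w=60, p=80
```

The double markup (retail price 80 versus the integrated-channel price 60) is the double-marginalization loss that contract schemes such as quantity discounts aim to recover.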

12.
A two-point set is a subset of the plane which meets every planar line in exactly two points. We discuss the problem “What are the topological symmetries of a two-point set?”. Our main results assert the existence of two-point sets which are rigid and the existence of two-point sets which are invariant under the action of certain autohomeomorphism groups. We pay particular attention to the isometry group of a two-point set, and show that such groups consist only of rotations and that they may be chosen to be any subgroup of S1 having size less than c. We also construct a subgroup of S1 having size c that is contained in the isometry group of a two-point set.

13.
The article presents the basic concepts and reviews the main results of the theory of optimal algorithms and informational complexity. Informational complexity bounds are provided for Lipschitzian multi-criterion problems that construct the approximate Pareto-optimal strategy set under different interpretations of approximation—approximation by the functional and approximation by the argument. The informational complexity is compared for the scalar global optimization problem and the problem of finding the roots of nonlinear equations by global search methods.

14.
We present a full Nesterov–Todd step primal-dual infeasible interior-point algorithm for symmetric optimization based on Darvay’s technique, using Euclidean Jordan algebras. The search directions are obtained by an equivalent algebraic transformation of the centering equation. The algorithm decreases the duality gap and the feasibility residuals at the same rate. During this algorithm we construct strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. Each main iteration of the algorithm consists of a feasibility step and some centering steps. The starting point in the first iteration of the algorithm depends on a positive number ξ and it is strictly feasible for a perturbed pair. The feasibility steps find strictly feasible iterates for the next perturbed pair. By using centering steps for the new perturbed pair, we obtain strictly feasible iterates close to the central path of the new perturbed pair. The algorithm finds an ε-optimal solution or detects infeasibility of the given problem. Moreover, we derive the currently best known iteration bound for infeasible interior-point methods.

15.
Finite decomposition complexity (FDC) is a large scale property of a metric space. It generalizes finite asymptotic dimension and applies to a wide class of groups. To make the property quantitative, a countable ordinal “the complexity” can be defined for a metric space with FDC. In this paper we prove that the subgroup Z≀Z of Thompson's group F belongs to Dω exactly, where ω is the smallest infinite ordinal number, and show that F equipped with the word-metric with respect to the infinite generating set {x0,x1,…,xn,…} does not have finite decomposition complexity.

16.
We consider the following complete optimal stars-clustering-tree problem: Given a complete graph G=(V,E) with a weight on every edge and a collection of subsets of V, we want to find a minimum weight spanning tree T such that each subset of the vertices in the collection induces a complete star in T. One motivation for this problem is to construct a minimum cost (weight) communication tree network for a collection of (not necessarily disjoint) groups of customers such that each group induces a complete star. As a result the network will provide a “group broadcast” property, “group fault tolerance” and “group privacy”. We present another motivation from database systems with replications. For the case where the intersection graph of the subsets is connected we present a structure theorem that describes all feasible solutions. Based on it we provide a polynomial algorithm for finding an optimal solution. For the case where each subset induces a complete star minus at most k leaves we prove that the problem is NP-hard.

17.
We investigate the role of additional information in reducing the computational complexity of the global optimization problem (GOP). Following this approach, we develop GMG — an algorithm for finding the Global Minimum with a Guarantee. The new algorithm breaks up an originally continuous GOP into a discrete (grid) search problem followed by a descent problem. The discrete search identifies the basin of attraction of the global minimum after which the actual location of the minimizer is found upon applying a descent algorithm. The algorithm is first applied to the golf-course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions. We then illustrate the performance of the validated algorithm on a simple realization of the monocular passive ranging (MPR) problem in remote sensing, which consists of identifying the range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem is set as a GOP whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. We solve the GOP using GMG and report on the performance of the algorithm.
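The two-phase structure that GMG exemplifies, a coarse grid scan to locate the basin of attraction of the global minimum followed by local descent, can be sketched in one dimension (our simplified illustration under assumed smoothness, not the authors' implementation; all names and parameters are ours):

```python
import numpy as np

def grid_then_descend(f, grad, lo, hi, grid_n=50, lr=0.01, steps=500):
    """Two-phase global-minimization sketch: a coarse grid scan locates
    the basin of the global minimum, then plain gradient descent
    refines the minimizer within that basin."""
    xs = np.linspace(lo, hi, grid_n)
    x = xs[np.argmin([f(t) for t in xs])]   # phase 1: discrete grid search
    for _ in range(steps):                  # phase 2: local descent
        x = x - lr * grad(x)
    return x
```

On a multimodal function such as f(x) = (x² − 1)² + 0.3x the grid phase picks the deeper of the two wells (near x = −1), and the descent phase then converges to the global minimizer instead of the shallower local minimum near x = +1.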

18.
The aim of this paper is to characterise those compact subsets K of 3-manifolds M that are (stable, not necessarily global) attractors for some flow on M. We will show that it is the topology of M∖K, rather than that of K, that plays the relevant role in this problem. A necessary and sufficient condition for a set K to be an attractor is that it must be an “almost tame” subset of M, in a sense made precise through the equivalent notions of “weakly tame” and “tamely embedded up to shape” defined in the paper. These are complemented by a further equivalent condition, “algebraic tameness”, which has the advantage of being checkable by explicit computation. A final section of the paper is devoted to a partial analysis of the same question when one replaces flows by discrete dynamical systems.

19.
We prove that a small minimal blocking set of PG(2,q) is “very close” to being a linear blocking set over some subfield GF(p^e) < GF(q). This implies that (i) a similar result holds in PG(n,q) for small minimal blocking sets with respect to k-dimensional subspaces (0 ≤ k ≤ n) and (ii) most of the intervals in the interval theorems of Szőnyi and Szőnyi–Weiner are empty.

20.
We study a class of multi-commodity flow problems in geometric domains: For a given planar domain P populated with obstacles (holes) of K ≥ 2 types, compute a set of thick paths from a “source” edge of P to a “sink” edge of P for vehicles of K distinct classes. Each class k of vehicle has a given set, Ok, of obstacles it must avoid and a certain width, wk, of path it requires. The problem is to determine if it is possible to route Nk width-wk paths for class k vehicles from source to sink, with each path avoiding the requisite set Ok of obstacles, and no two paths overlapping. This form of multi-commodity flow in two-dimensional domains arises in computing throughput capacity for multiple classes of aircraft in an airspace impacted by different types of constraints, such as those arising from weather hazards.

We give both algorithmic theory results and experimental results.

We show hardness of many versions of the problem by proving that two simple variants are NP-hard even in the case K=2. If w1=w2=1, then the problem is NP-hard even when O1=∅. If w1=2, w2=3, then the problem is NP-hard even when O1=O2. In contrast, the problem for a single width and a single type of obstacles is polynomially solvable.

We present approximation algorithms for the multi-criteria optimization problems that arise when trying to maximize the number of routable paths. We also give a polynomial-time algorithm for the case in which the number of holes in the input domain is bounded.

Finally, we give experimental results based on an implementation of our methods and experiment with enhanced heuristics for efficient solutions in practice. Our algorithms are being utilized in simulations with NASA's Future Air traffic management Concepts Evaluation Tool (FACET). We report on experimental results based on applying our algorithms to weather-impacted airspaces, comparing heuristic strategies for searching for feasible path orderings and for computing short multi-class routes. Our results show that multi-class routes can feasibly be computed on real weather data instances on the scale required in air traffic management applications.
