Similar Documents
20 similar documents found.
1.
Given an extensive form G, we associate with every choice an atomic sentence and with every information set a set of well-formed formulas (wffs) of propositional calculus. The set of such wffs is denoted by Γ(G). Using the so-called topological semantics for propositional calculus (which differs from the standard one based on truth tables), we show that the extensive form yields a topological model of Γ(G), that is, every wff in Γ(G) is “true in G”. We also show that, within the standard truth-table semantics for propositional calculus, there is a one-to-one and onto correspondence between the set of plays of G and the set of valuations that satisfy all the wffs in Γ(G).
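As a toy illustration of the plays-valuations correspondence claimed here (the two-move game tree, the choice names, and the simplified stand-in for Γ(G) below are our own assumptions, not the paper's construction), one can check the bijection by brute force:

```python
from itertools import product

ATOMS = ["a1", "a2", "b1", "b2"]  # one atomic sentence per choice

def satisfies_gamma(v):
    """Simplified stand-in for Gamma(G): exactly one choice is taken at each
    information set that the play actually reaches."""
    a1, a2, b1, b2 = (v[x] for x in ATOMS)
    root_ok = (a1 != a2)                         # exactly one root choice
    after_a1 = (not a1) or (b1 != b2)            # a1 -> exactly one of b1, b2
    after_a2 = (not a2) or (not b1 and not b2)   # a2 ends the game at once
    return root_ok and after_a1 and after_a2

# all truth-table valuations over the atoms
valuations = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=4)]
models = [v for v in valuations if satisfies_gamma(v)]

# the three plays of the toy tree, each play = the set of choices it uses
plays = {frozenset({"a1", "b1"}), frozenset({"a1", "b2"}), frozenset({"a2"})}

# one-to-one and onto: the true atoms of each model form exactly one play
assert {frozenset(a for a in ATOMS if v[a]) for v in models} == plays
assert len(models) == len(plays)
print(len(models), "models <->", len(plays), "plays")
```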

2.
Similar to the mixed-integer programming library (MIPLIB), we present a library of publicly available test problem instances for three classical types of open pit mining problems: the ultimate pit limit problem and two variants of open pit production scheduling problems. The ultimate pit limit problem determines a set of notional three-dimensional blocks containing ore and/or waste material to extract to maximize value subject to geospatial precedence constraints. Open pit production scheduling problems seek to determine when, if ever, a block is extracted from an open pit mine. A typical objective is to maximize the net present value of the extracted ore; constraints include precedence and upper bounds on operational resource usage. Extensions of this problem can include (i) lower bounds on operational resource usage, (ii) the determination of whether a block is sent to a waste dump, i.e., discarded, or to a processing plant, i.e., to a facility that derives salable mineral from the block, (iii) average grade constraints at the processing plant, and (iv) inventories of extracted but unprocessed material. Although open pit mining problems have appeared in academic literature dating back to the 1960s, no standard representations exist, and there are no commonly available corresponding data sets. We describe some representative open pit mining problems, briefly mention related literature, and provide a library consisting of mathematical models and sets of instances, available on the Internet. We conclude with directions for use of this newly established mining library. The library serves not only as a suggestion of standard expressions of and available data for open pit mining problems, but also as encouragement for the development of increasingly sophisticated algorithms.
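As a concrete illustration of the ultimate pit limit problem (the block values and precedence arcs below are invented toy data; real instances are solved with max-flow or Lerchs-Grossmann methods rather than enumeration), a feasible pit is a precedence-closed set of blocks of maximum total value:

```python
from itertools import combinations

# toy block model: value of each block (negative = waste stripping cost)
value = {0: -2.0, 1: -1.0, 2: -6.0, 3: 8.0, 4: 5.0}
# precedence: to extract block b, all blocks in pred[b] must be extracted too
pred = {3: {0, 1}, 4: {1, 2}}  # e.g., overlying blocks come out first

def is_closure(s):
    """A feasible pit is precedence-closed: it contains its blocks' predecessors."""
    return all(pred.get(b, set()) <= s for b in s)

best_value, best_pit = 0.0, set()  # the empty pit is always feasible
blocks = list(value)
for r in range(1, len(blocks) + 1):
    for subset in combinations(blocks, r):
        s = set(subset)
        if is_closure(s) and sum(value[b] for b in s) > best_value:
            best_value, best_pit = sum(value[b] for b in s), s

print(best_pit, best_value)  # expected: {0, 1, 3} with value 5.0
```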

3.
Consider the problem of testing for existence of an n-node graph G satisfying some condition P, expressed as a Boolean constraint among the n×n Boolean entries of the adjacency matrix M. This problem reduces to satisfiability of P(M). If P is preserved by isomorphism, P(M) is satisfiable iff P(M)∧SB(M) is satisfiable, where SB(M) is a symmetry-breaking predicate: a predicate satisfied by at least one matrix M in each isomorphism class. P(M)∧SB(M) is more constrained than P(M), so backtracking solves it faster than P(M), especially if SB(M) rules out most matrices in each isomorphism class. This method, proposed by Crawford et al., applies not just to graphs but to testing existence of a combinatorial object satisfying any property that respects isomorphism, as long as the property can be compactly specified as a Boolean constraint on the object's binary representation. We present methods for generating symmetry-breaking predicates for several classes of combinatorial objects: acyclic digraphs, permutations, functions, and arbitrary-arity relations (direct products). We define a uniform optimality measure for symmetry-breaking predicates, and evaluate our constraints according to this measure. Results indicate that these constraints are either optimal or near-optimal for their respective classes of objects.
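As a small illustration of what a symmetry-breaking predicate must do for graphs (this exhaustive lex-leader check is our own toy stand-in; the predicates in the paper are compact Boolean constraints, not an explicit minimization over all permutations), keeping only the lexicographically smallest adjacency matrix in each isomorphism class satisfies the defining property of SB(M):

```python
from itertools import permutations

def adjacency_key(adj):
    """Flatten the upper triangle of an adjacency matrix to a bit tuple."""
    n = len(adj)
    return tuple(adj[i][j] for i in range(n) for j in range(i + 1, n))

def is_lex_leader(adj):
    """True iff no vertex relabeling yields a lexicographically smaller
    matrix; satisfied by exactly one matrix per isomorphism class."""
    n = len(adj)
    key = adjacency_key(adj)
    for p in permutations(range(n)):
        relabeled = [[adj[p[i]][p[j]] for j in range(n)] for i in range(n)]
        if adjacency_key(relabeled) < key:
            return False
    return True

# two labelings of the 3-vertex path: only the lex-smallest one passes
leader = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]  # edges 0-2, 1-2
other  = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # edges 0-1, 1-2 (same graph)
print(is_lex_leader(leader), is_lex_leader(other))  # True False
```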

4.
We show how to construct sparse polynomial systems that have non-trivial lower bounds on their numbers of real solutions. These are unmixed systems associated to certain polytopes. For the order polytope of a poset P this lower bound is the sign-imbalance of P and it holds if all maximal chains of P have length of the same parity. This theory also gives lower bounds in the real Schubert calculus through the sagbi degeneration of the Grassmannian to a toric variety, and thus recovers a result of Eremenko and Gabrielov.
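For readers unfamiliar with the lower-bound quantity, here is a brute-force sketch of the sign-imbalance of a poset (the example poset is ours; we use the standard definition as the absolute value of the sum of signs over all linear extensions):

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of 0..n-1, computed from its inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def sign_imbalance(n, relations):
    """|sum of signs over all linear extensions| of a poset on 0..n-1,
    where relations is a set of pairs (a, b) meaning a < b in the poset."""
    total = 0
    for perm in permutations(range(n)):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            total += sign(perm)
    return abs(total)

# a 2-chain 0 < 1 plus an isolated element 2: extensions (0,1,2), (0,2,1),
# (2,0,1) have signs +1, -1, +1, so the sign-imbalance is 1
print(sign_imbalance(3, {(0, 1)}))  # 1
```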

5.
It is shown that the satisfaction of a standard constraint qualification of mathematical programming [5] at a stationary point of a non-convex differentiable non-linear program provides explicit numerical bounds for the set of all Lagrange multipliers associated with the stationary point. Solution of a single linear program gives a sharper bound together with an achievable bound on the 1-norm of the multipliers associated with the inequality constraints. The simplicity of obtaining these bounds contrasts sharply with the intractable NP-complete problem of computing an achievable upper bound on the p-norm of the multipliers associated with the equality constraints for integer p ≥ 1.
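In outline (generic KKT notation reconstructed from the abstract, so the exact program in [5] may differ): at a stationary point x̄ of min f(x) subject to g(x) ≤ 0 and h(x) = 0, the multiplier set is polyhedral, so bounding the 1-norm of the inequality multipliers is a single linear program:

```latex
% Multiplier set at a stationary point \bar{x} (polyhedral, since the
% gradients are fixed numbers once \bar{x} is known):
\[
  M(\bar x) = \Bigl\{ (\lambda, \mu) \;:\;
      \nabla f(\bar x) + \textstyle\sum_i \lambda_i \nabla g_i(\bar x)
      + \sum_j \mu_j \nabla h_j(\bar x) = 0, \;
      \lambda \ge 0, \; \lambda_i \, g_i(\bar x) = 0 \Bigr\}.
\]
% Because \lambda \ge 0, the 1-norm of \lambda is linear, so the achievable
% bound is the value of one LP:
\[
  \max \Bigl\{ \textstyle\sum_i \lambda_i \;:\; (\lambda, \mu) \in M(\bar x) \Bigr\}.
\]
```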

6.
Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is an efficient approach because the optimization problem has a closed-form solution. The effectiveness of the proposed algorithm is investigated through solving three benchmark problems and one from a hydrological application. Results are compared with solutions found by lsqlin, with the quadratically constrained formulation solved using the L-curve, maximum a posteriori estimation (MAP), and the χ² regularization method. The χ² regularization method with quadratic constraints is the most effective method for solving least squares problems with box constraints.
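A minimal numpy sketch of the quadratic-constraint idea (the data and the single-ball relaxation are our own simplifications; the paper's formulation may weight components differently): the box [lo, hi] is replaced by the enclosing ball centered at the box midpoint, and min ‖Ax − b‖² s.t. ‖x − c‖² ≤ r² has a closed-form solution for each Lagrange multiplier λ, which is then located by bisection:

```python
import numpy as np

def box_as_quadratic_ls(A, b, lo, hi, tol=1e-10):
    """Solve min ||Ax - b||^2 s.t. ||x - c||^2 <= r^2, where the ball is a
    quadratic relaxation of the box [lo, hi] centered at c."""
    c = (lo + hi) / 2.0
    r2 = float(np.sum(((hi - lo) / 2.0) ** 2))  # radius^2 of enclosing ball

    def x_of(lam):
        # closed-form minimizer of ||Ax - b||^2 + lam * ||x - c||^2
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                               A.T @ b + lam * c)

    x = x_of(0.0)
    if np.sum((x - c) ** 2) <= r2:      # unconstrained solution is feasible
        return x
    lam_lo, lam_hi = 0.0, 1.0
    while np.sum((x_of(lam_hi) - c) ** 2) > r2:   # bracket the multiplier
        lam_hi *= 2.0
    while lam_hi - lam_lo > tol:                  # bisection on lambda
        lam = 0.5 * (lam_lo + lam_hi)
        if np.sum((x_of(lam) - c) ** 2) > r2:
            lam_lo = lam
        else:
            lam_hi = lam
    return x_of(lam_hi)

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 3)), rng.standard_normal(20)
print(box_as_quadratic_ls(A, b, lo=np.zeros(3), hi=np.ones(3)))
```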

7.
Default logic is one of the most popular and successful formalisms for non-monotonic reasoning. In 2002, Bonatti and Olivetti introduced several sequent calculi for credulous and skeptical reasoning in propositional default logic. In this paper we examine these calculi from a proof-complexity perspective. In particular, we show that the calculus for credulous reasoning obeys almost the same bounds on the proof size as Gentzen's system LK. Hence proving lower bounds for credulous reasoning will be as hard as proving lower bounds for LK. On the other hand, we show an exponential lower bound to the proof size in Bonatti and Olivetti's enhanced calculus for skeptical default reasoning.

8.
Several attempts have been made to enumerate fuzzy switching functions (FSFs) and to develop upper and lower bounds for the number of FSFs of n variables, in an effort to better understand the properties and the complexity of FSFs. Previous upper bounds are 2^(4^n) [9] and 2^(2·3^n − 2^n − 1) [7]. It has also been shown that the exact numbers of FSFs of n variables for n = 0, 1, 2, 3, and 4 are 2, 6, 84, 43 918 and 160 297 985 276, respectively. This paper gives a brief overview of previous approaches to the problem, studies some of the properties of fuzzy switching functions, and gives improved upper and lower bounds for a general n.
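As a sanity check on the n = 1 count quoted above, a small closure computation (sampling functions on a fixed grid is enough to separate one-variable fuzzy switching functions; the grid choice is ours):

```python
from itertools import product

# sample grid; sufficient to distinguish one-variable functions built from
# x, 1-x, constants, min and max
XS = [0.0, 0.25, 0.5, 0.75, 1.0]

# generators: constants 0 and 1, the literal x, and its negation 1 - x
funcs = {tuple(0.0 for x in XS), tuple(1.0 for x in XS),
         tuple(x for x in XS), tuple(1.0 - x for x in XS)}

# close under pointwise min (fuzzy AND) and max (fuzzy OR)
changed = True
while changed:
    changed = False
    for f, g in list(product(funcs, repeat=2)):
        for h in (tuple(map(min, f, g)), tuple(map(max, f, g))):
            if h not in funcs:
                funcs.add(h)
                changed = True

print(len(funcs))  # 6 distinct fuzzy switching functions of one variable
```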

9.
Inventory models of deteriorating items, such as food products and vegetables, normally involve imprecise parameters, such as imprecise inventory costs, fuzzy storage area, and fuzzy budget allocation. In this paper, we provide two defuzzification techniques for two fuzzy inventory models, using (i) the extension principle together with the duality theory of non-linear programming and (ii) interval arithmetic. On the basis of Zadeh’s extension principle, two non-linear programs parameterized by the possibility level α are formulated to calculate the lower and upper bounds of the minimum average cost at each α-level, through which the membership function of the objective function is constructed. In the interval arithmetic technique, the interval objective function is transformed into an equivalent deterministic multi-objective problem defined by the left and right limits of the interval; this formulation corresponds to the possibility level α = 0.5. Finally, the multi-objective problem is solved by a multi-objective genetic algorithm (MOGA). The model is illustrated through a numerical example and solved for different values of the possibility level α through the extension principle, and for α = 0.5 via MOGA. As a particular case, results are also obtained for the inventory model without deterioration. The results from the two methods for α = 0.5 are compared.
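A minimal sketch of the interval-arithmetic step for a classical EOQ-style average cost (the cost formula and the interval parameter values are illustrative assumptions, not the paper's model): since this cost is increasing in each parameter, the α-cut intervals of the fuzzy parameters map directly to lower and upper cost limits:

```python
import math

def eoq_cost(K, D, h):
    """Classical EOQ minimum average cost: sqrt(2 * setup * demand * holding)."""
    return math.sqrt(2.0 * K * D * h)

def cost_interval(K_iv, D_iv, h_iv):
    """Interval extension: the cost is increasing in each parameter, so the
    left/right limits come from the left/right ends of each interval."""
    lo = eoq_cost(K_iv[0], D_iv[0], h_iv[0])
    hi = eoq_cost(K_iv[1], D_iv[1], h_iv[1])
    return lo, hi

def alpha_cut(left, peak, right, alpha):
    """Alpha-cut interval of a triangular fuzzy number (left, peak, right)."""
    return (left + alpha * (peak - left), right - alpha * (right - peak))

for alpha in (0.0, 0.5, 1.0):
    K = alpha_cut(80, 100, 120, alpha)     # fuzzy setup cost (toy numbers)
    D = alpha_cut(900, 1000, 1100, alpha)  # fuzzy demand
    h = alpha_cut(4, 5, 6, alpha)          # fuzzy holding cost
    print(alpha, cost_interval(K, D, h))
```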

10.
Consider a random sample from a statistical model with an unknown, and possibly infinite-dimensional, parameter (e.g., a nonparametric or semiparametric model) and a real-valued functional T of this parameter which is to be estimated. The objective is to develop bounds on the (negative) exponential rate at which consistent estimates converge in probability to T, or, equivalently, lower bounds for the asymptotic effective standard deviation of such estimates; that is, to extend work of R.R. Bahadur from parametric models to more general (semiparametric and nonparametric) models. The approach is to define a finite-dimensional submodel, determine Bahadur's bounds for that finite-dimensional model, and then take the ‘sup’ or ‘inf’ of the bounds over the ways of defining the submodels; this can be construed as a ‘directional approach’, the submodels being in a specified ‘direction’ from a specific model. The results extend to the estimation of vector-valued and infinite-dimensional functionals T, by expressing consistency in terms of a distance or, alternatively, by treating classes of real functionals of T. Several examples are presented.

11.
Given an undirected network with positive edge costs and a natural number p, the Hop-Constrained Minimum Spanning Tree problem (HMST) is the problem of finding a spanning tree with minimum total cost such that each path starting from a specified root node has no more than p hops (edges). In this paper, we develop new formulations for HMST. The formulations are based on Miller-Tucker-Zemlin (MTZ) subtour elimination constraints, MTZ-based liftings in the literature offered for HMST, and a new set of topology-enforcing constraints. We also compare the proposed models with the MTZ-based models in the literature with respect to linear programming relaxation bounds and solution times. The results indicate that the new models give considerably better bounds and solution times than their counterparts in the literature and that the new set of constraints is competitive with liftings to MTZ constraints, some of which are based on well-known, strong liftings of Desrochers and Laporte (1991).
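For orientation, a generic MTZ-style hop-limited formulation fragment (this is the textbook form, not the specific liftings developed in the paper): binary variables x_ij select tree arcs directed away from the root r, and node potentials u_i count hops:

```latex
% Generic MTZ-based hop constraints for a tree rooted at r (illustrative):
\begin{align*}
  &\sum_{i} x_{ij} = 1                 && \forall\, j \neq r
     \quad\text{(one parent per non-root node)} \\
  &u_j \ge u_i + 1 - p\,(1 - x_{ij})   && \forall\, (i,j)
     \quad\text{(potentials increase along tree arcs)} \\
  &u_r = 0, \qquad 1 \le u_j \le p     && \forall\, j \neq r
     \quad\text{(root-to-node paths use at most } p \text{ hops)}
\end{align*}
```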

12.
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most O(ε^(−3)) iterations may have to be performed for finding an iterate which is within ε of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm which only uses one-dimensional global optimization of the cubic model, and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods.
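For intuition about the one-dimensional variant mentioned above, a toy sketch (our own simplification, not the version analyzed in the paper): restricted to a ray x + αd with α ≥ 0, the cubic model's derivative is a scalar quadratic, so its global minimizer is available in closed form:

```python
import numpy as np

def cubic_step_along(g, H, d, sigma):
    """Globally minimize the cubic model along direction d (alpha >= 0):
       m(alpha) = alpha*(g.d) + 0.5*alpha^2*(d.H.d) + (sigma/3)*alpha^3*||d||^3.
    m'(alpha) = c + b*alpha + a*alpha^2 with a > 0, so when c = g.d < 0 the
    positive root of this quadratic is the global minimizer on the ray."""
    c = float(g @ d)
    b = float(d @ H @ d)
    a = sigma * np.linalg.norm(d) ** 3
    assert c < 0, "d must be a descent direction"
    alpha = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return alpha * d

# toy data: one cubic-regularized step along the negative gradient
g = np.array([3.0, -1.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])  # indefinite Hessian
print(cubic_step_along(g, H, -g, sigma=1.0))
```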

13.
This paper introduces a fully general, exact algorithm for nesting irregular shapes. Both the shapes and the material resource can be arbitrary nonconvex polygons. Moreover, the shapes can have holes and the material can have defective areas. Finally, the shapes can be arranged using both translations and arbitrary rotations (as opposed to a finite set of rotation angles, such as 0° and 180°). The insight that has made all this possible is a novel way to relax the constraint that the shapes not overlap. The key idea is to inscribe a few circles in each irregular shape and then relax the non-overlap constraints for the shapes by replacing them with non-overlap constraints for the inscribed circles. These relaxed problems have the form of quadratic programming problems (QPs) and can be solved to optimality to provide valid lower bounds. Valid upper bounds can be found via local search with strict non-overlap constraints. If the shapes overlap in the solution to the relaxed problem, new circles are inscribed in the shapes to prevent this overlapping configuration from recurring, and the QP is then re-solved to obtain improved lower bounds. Convergence to any fixed tolerance is guaranteed in a finite number of iterations. A specialized branch-and-bound algorithm, together with some heuristics, is introduced to find the initial inscribed circles that approximate the shapes. The new approach, called “QP-Nest,” is applied to three problems as proof of concept. The most complicated of these is a problem due to Milenkovic that has four nonconvex polygons with 94, 72, 84, and 74 vertices, respectively. QP-Nest is able to prove global optimality when nesting the first two or the first three of these shapes. When all four shapes are considered, QP-Nest finds the best known solution but cannot prove optimality.
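A minimal sketch of the relaxed feasibility test (toy data and a plain pairwise check, standing in for the QP constraints): each shape is represented by a few inscribed circles, and non-overlap of shapes is relaxed to non-overlap of circles, a smooth quadratic condition on the circle centers:

```python
import numpy as np

def circles_overlap(c1, r1, c2, r2):
    """Relaxed non-overlap test: two inscribed circles must satisfy
    ||c1 - c2||^2 >= (r1 + r2)^2, a quadratic constraint in the centers."""
    return float(np.sum((np.asarray(c1) - np.asarray(c2)) ** 2)) < (r1 + r2) ** 2

def placement_feasible(shapes):
    """shapes: list of lists of (center, radius) inscribed circles, already
    transformed into material coordinates. Feasible for the relaxation iff
    no circle of one shape overlaps a circle of another shape."""
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            for c1, r1 in shapes[i]:
                for c2, r2 in shapes[j]:
                    if circles_overlap(c1, r1, c2, r2):
                        return False
    return True

# two shapes, each approximated by two inscribed circles (toy data)
shape_a = [((0.0, 0.0), 1.0), ((1.5, 0.0), 0.5)]
shape_b = [((3.5, 0.0), 1.0), ((5.0, 0.0), 0.5)]
print(placement_feasible([shape_a, shape_b]))  # True: every distance >= radius sums
```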

14.
Mathematical programs with equilibrium (or complementarity) constraints, MPECs for short, form a difficult class of optimization problems. The feasible set of MPECs is described by standard equality and inequality constraints as well as additional complementarity constraints that are used to model equilibrium conditions in different applications. But these complementarity constraints imply that MPECs violate most of the standard constraint qualifications. Therefore, more specialized algorithms are typically applied to MPECs that take into account the particular structure of the complementarity constraints. One popular class of these specialized algorithms is the relaxation (or regularization) methods. They replace the MPEC by a sequence of nonlinear programs NLP(t) depending on a parameter t, then compute a KKT-point of each NLP(t), and try to get a suitable stationary point of the original MPEC in the limit t→0. For most relaxation methods, one can show that a C-stationary point is obtained in this way; a few others even get M-stationary points, which is a stronger property. So far, however, these results have been obtained under the assumption that one is able to compute exact KKT-points of each NLP(t). But this assumption is not implementable, hence a natural question is: What kind of stationarity do we get if we only compute approximate KKT-points? It turns out that most relaxation methods only get a weakly stationary point under this assumption, while in this paper, we show that the smooth relaxation method by Lin and Fukushima (Ann. Oper. Res. 133:63–84, 2005) still yields a C-stationary point, i.e. the inexact version of this relaxation scheme has the same convergence properties as the exact counterpart.
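Schematically, the relaxation pattern looks as follows (the exact Lin-Fukushima constraints are reconstructed here from standard references, so treat the precise form as an assumption rather than a quotation from the paper):

```latex
% MPEC complementarity constraints:
%   G_i(x) >= 0,  H_i(x) >= 0,  G_i(x) H_i(x) = 0  for all i.
% Smooth relaxation NLP(t), t > 0, in the Lin--Fukushima style:
\begin{align*}
  G_i(x) \, H_i(x) &\le t^2, \\
  \bigl(G_i(x) + t\bigr)\bigl(H_i(x) + t\bigr) &\ge t^2.
\end{align*}
% As t -> 0, the feasible sets of NLP(t) shrink back to the complementarity
% set; the paper shows that even inexact KKT points of NLP(t) still yield
% C-stationary limits for this scheme.
```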

15.
Recently, many results (for a review see Goovaerts et al. (1983)) have been obtained for bounds on stop-loss premiums in the case of incomplete information on the claim distribution. As a consequence, some extremal distributions (depending on the retention limit) have been characterized. The extremal distributions for the stop-loss ordering in the case of fixed values of the retention limit are obtained by means of deep results from the theory of convex analysis. In the present contribution it is shown, by means of some results from the problem of moments, how bounds on integrals with integral constraints can be obtained. We assume only knowledge of the moments μ₀, μ₁, …, μₙ.
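Schematically, the moment-problem formulation behind such bounds reads as follows (generic form with retention limit d; the paper's precise statement may differ):

```latex
% Upper bound on a stop-loss premium with retention d, given the moments
% mu_0, ..., mu_n (generic moment-problem form):
\begin{align*}
  \sup_{F} \quad & \int_0^{\infty} (x - d)_{+} \, \mathrm{d}F(x) \\
  \text{s.t.} \quad & \int_0^{\infty} x^{k} \, \mathrm{d}F(x) = \mu_k,
      \qquad k = 0, 1, \ldots, n.
\end{align*}
% The extremal F attaining the bound is discrete with few atoms, matching
% the extremal distributions characterized in this line of work.
```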

16.
Let C_{n,cn}^{2,k,t} be a random constraint satisfaction problem (CSP) on n binary variables, where c > 0 is a fixed constant and the cn constraints are selected uniformly and independently from all possible k-ary constraints, each of which contains exactly t tuples of values as its restrictions. We establish upper bounds on the tightness threshold for C_{n,cn}^{2,k,t} to have exponential resolution complexity. These upper bounds partly answer the open problems regarding CSP resolution complexity for tightness values between the existing upper and lower bounds [1].

17.
In NIP theories, generically stable Keisler measures can be characterized in several ways. We analyze these various forms of “generic stability” in arbitrary theories. Among other things, we show that the standard definition of generic stability for types coincides with the notion of a frequency interpretation measure. We also give combinatorial examples of types in NSOP theories that are finitely approximated but not generically stable, as well as ϕ-types in simple theories that are definable and finitely satisfiable in a small model, but not finitely approximated. Our proofs demonstrate interesting connections to classical results from Ramsey theory for finite graphs and hypergraphs.

18.
This paper is a study of the car sequencing problem when feature spacing constraints are soft and the colors of vehicles are taken into account. Both pseudo-polynomial algorithms and lower bounds are presented for parts of the problem or families of instances. With this set of lower bounds, we establish the optimality (up to the first non-trivial criterion) of 54% of the best known solutions for the benchmark used for the Roadef Challenge 2005. We also prove that the optimal penalty for a single ratio constraint N/P can be computed in O(P) and that determining the feasibility of a car sequencing instance limited to a pair of simple ratio constraints can be achieved by dynamic programming. Finally, we propose a solving algorithm exploiting these results within a local search approach. To achieve this goal, a new meta-heuristic (star relinking) is introduced, designed for the optimization of an aggregation of criteria when the optimization of each single criterion is a polynomial problem.
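For intuition, a sliding-window evaluation of the penalty of a given sequence under a single N/P ratio constraint (this evaluates one fixed sequence; the paper's O(P) result concerns the optimal penalty, a different computation, and the one-unit-per-violated-window convention below is one common choice among several in the literature):

```python
def ratio_penalty(seq, N, P):
    """Penalty of a 0/1 car sequence under an N/P constraint: each window of
    P consecutive positions may contain at most N cars with the feature;
    each excess unit in each window costs 1 (one common convention)."""
    penalty = 0
    window = sum(seq[:P])
    for start in range(len(seq) - P + 1):
        if start > 0:  # slide the window one position to the right
            window += seq[start + P - 1] - seq[start - 1]
        penalty += max(0, window - N)
    return penalty

# at most 1 car with the feature in any 2 consecutive positions (N/P = 1/2)
print(ratio_penalty([1, 1, 0, 1, 1, 1], N=1, P=2))  # windows 2,1,1,2,2 -> 3
```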

19.
This paper concerns lower bounding techniques for the general α-adic assignment problem. The nonlinear objective function is linearized by the introduction of additional variables and constraints, thus yielding a mixed-integer linear programming formulation of the problem. The concept of many-body interactions is introduced to strengthen this formulation and incorporated in a modified formulation obtained by lifting the original representation to a higher-dimensional space. This process involves two steps: (i) addition of new variables and constraints, and (ii) incorporation of the new variables in the objective function. If this lifting process is repeated β times on an α-adic assignment problem, along with the incorporation of higher-order interactions, it results in the mixed-integer formulation of an equivalent (α + β)-adic assignment problem. The incorporation of many-body interactions in the higher-dimensional formulation improves its degeneracy properties and is also critical to the derivation of decomposition methods for the solution of these large-scale mathematical programs in the higher-dimensional space. It is shown that a lower bound to the optimal solution of the corresponding linear programming relaxation can be obtained by dualizing a subset of constraints in this formulation and solving O(N^(2(α+β−1))) linear assignment problems, whose coefficients depend on the dual values. Moreover, it is proved that the optimal solution to the LP relaxation is obtained if we use the optimal duals for the solution of the linear assignment problems. This concept of many-body interactions can also be applied in designing algorithms for the solution of formulations obtained by lifting general MILPs. We illustrate all these concepts on quadratic assignment problems. With these decomposition bounds, we have found the provably optimal solutions of two previously unsolved QAPs of size 32 and have also improved upon existing lower bounds for other QAPs.
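For reference, the classical linearization step for the quadratic (2-adic) case reads as follows (textbook form; the paper's lifted α-adic formulations generalize this pattern):

```latex
% Quadratic (2-adic) assignment objective, linearized with
% auxiliary variables y_{ijkl} = x_{ij} x_{kl}:
\begin{align*}
  \min \quad & \sum_{i,j,k,l} c_{ijkl} \, y_{ijkl} \\
  \text{s.t.} \quad & y_{ijkl} \le x_{ij}, \qquad y_{ijkl} \le x_{kl}, \qquad
      y_{ijkl} \ge x_{ij} + x_{kl} - 1, \\
  & \sum_i x_{ij} = 1 \;\; \forall j, \qquad \sum_j x_{ij} = 1 \;\; \forall i,
      \qquad x_{ij}, \, y_{ijkl} \in \{0, 1\}.
\end{align*}
```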

20.
The topological Tverberg theorem claims that for any continuous map of the (q−1)(d+1)-simplex σ^((d+1)(q−1)) to R^d there are q disjoint faces of σ^((d+1)(q−1)) such that their images have a non-empty intersection. This has been proved for affine maps and for prime powers q, but not in general. We extend the topological Tverberg theorem in the following way: pairs of vertices are forced to end up in different faces. This leads to the concept of constraint graphs. In Tverberg's theorem with constraints, we provide a list of constraint graphs for the topological Tverberg theorem. The proof is based on connectivity results for chessboard-type complexes. Moreover, Tverberg's theorem with constraints implies new lower bounds for the number of Tverberg partitions. As a consequence, we prove Sierksma's conjecture for d = 2 and q = 3.
