Found 20 similar documents (search time: 31 ms)
1.
This paper presents a computationally effective heuristic method which produces good-quality solutions for large-scale set
covering problems with thousands of constraints and about one million variables. The need to solve such large-scale problems
arises from a crew scheduling problem of mass transit agencies where the number of work shifts required has to be minimized.
This problem may be formulated as a large-scale non-unicost set covering problem whose rows are trips to be performed while
columns stand for round trips. The proposed method is mainly based on Lagrangian relaxation and subgradient optimization.
After the reduction of the number of rows and columns by the logical tests, “greedy” heuristic algorithms provide upper and
lower bounds, which are continuously improved to produce good-quality solutions. Computational results on randomly generated
problems and on real-life crew scheduling problems at the Italian Railways Company show that good-quality solutions can
be obtained at an acceptable computational cost.
This work was supported by the project “Progetto Finalizzato Transporti 2” of the National Research Council of Italy (C.N.R.),
contract No. 94.01436PF74, and by “Ferrovie dello Stato S.p.A.”
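The abstract names greedy heuristics inside a Lagrangian framework but gives no code; the following is a minimal Python sketch of the classical greedy heuristic for non-unicost set covering (the function name and problem data are illustrative, not from the paper):

```python
def greedy_set_cover(universe, subsets, costs):
    """Classical greedy heuristic for non-unicost set covering: repeatedly pick
    the column with the best cost per newly covered row.
    Assumes the columns together cover the whole universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (j for j in range(len(subsets)) if subsets[j] & uncovered),
            key=lambda j: costs[j] / len(subsets[j] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# toy instance: rows = trips to cover, columns = candidate round trips
rows = {1, 2, 3, 4, 5}
cols = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {5}]
costs = [5, 1, 1, 1, 1]
print(greedy_set_cover(rows, cols, costs))
```

In a Lagrangian scheme of the kind the abstract describes, the raw column costs would be replaced by Lagrangian reduced costs and updated by subgradient steps between greedy passes.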
2.
The lexicographically-ordered CSP (“lexicographic CSP” or “LO-CSP” for short) combines a simple representation of preferences
with the feasibility constraints of ordinary CSPs. Preferences are defined by a total ordering across all assignments, such
that a change in assignment to a given variable is more important than any change in assignment to any less important variable.
In this paper, we show how this representation can be extended to handle conditional preferences in two ways. In the first,
for each conditional preference relation, the parents have higher priority than the children in the original lexicographic
ordering. In the second, the relation between parents and children need not correspond to the importance ordering of variables.
In this case, by obviating the “overwhelming advantage” effect with respect to the original variables and values, the representational
capacity is significantly enhanced. For problems of the first type, any of the algorithms originally devised for ordinary
LO-CSPs can also be used when some of the domain orderings are dependent on assignments to “parent” variables. For problems
of the second type, algorithms based on lexical orders can be used if the representation is augmented by variables and constraints
that link preference orders to assignments. In addition, the branch-and-bound algorithm originally devised for ordinary LO-CSPs
can be extended to handle CSPs with conditional domain orderings.
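As a concrete illustration of the unconditional lexicographic ordering that the paper extends, here is a small Python sketch (the comparator, variable names, and data are hypothetical):

```python
def lex_better(a, b, var_order, value_rank):
    """Compare two complete assignments under a lexicographic preference order:
    the most important variable on which they differ decides the comparison."""
    for v in var_order:                      # variables listed most-important first
        ra, rb = value_rank[v][a[v]], value_rank[v][b[v]]
        if ra != rb:
            return ra < rb                   # lower rank = more preferred value
    return False                             # identical assignments: neither is better

# hypothetical LO-CSP: x is more important than y
var_order = ["x", "y"]
value_rank = {"x": {"red": 0, "blue": 1}, "y": {0: 0, 1: 1}}
print(lex_better({"x": "red", "y": 1}, {"x": "blue", "y": 0}, var_order, value_rank))
```

This captures the “overwhelming advantage” effect the abstract mentions: a win on `x` outweighs any outcome on `y`. Conditional preferences would make `value_rank[v]` depend on the assignments to `v`'s parent variables.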
3.
We have constructed an algorithm for the asymptotic approximation of the solutions of inverse singularly perturbed boundary-value
problems of the convection-diffusion type with unknown diffusion coefficient, depending on the coordinates of a quadrangular
curvilinear domain of filtration. The case of sufficient smoothness and consistency of the overdetermination, initial, and
boundary conditions is considered. Unlike the construction of an algorithm for the solution of similar problems in doubly
connected domains, here, in the corresponding relations, there appear corrections taking into account the influence of “lateral
sources of pollution.” With the help of this algorithm, we have carried out a computer experiment, the results of which confirm
the well-known fact of “strong sensitivity” of the model to assignment of the overdetermination condition. In particular,
we have revealed the specific character of influence of this condition on the required diffusion coefficient depending on
the filtration velocity.
4.
This paper describes a heuristic for 0-1 mixed-integer linear programming problems, focusing on “stand-alone” implementation.
Our approach is built around concave “merit functions” measuring solution integrality, and consists of four layers: gradient-based
pivoting, probing pivoting, convexity/intersection cutting, and diving on blocks of variables. The concavity of the merit
function plays an important role in the first and third layers, as well as in connecting the four layers. We present both
the mathematical and software details of a test implementation, along with computational results for several variants.
5.
J. Sakalauskaitė, Lithuanian Mathematical Journal, 2007, 47(3): 266–276
In this paper, we consider branching-time temporal logic CTL with epistemic modalities for knowledge (belief) and with awareness operators. These logics combine the discrete-time linear
temporal logic operators “next” and “until” with the branching temporal logic operator “on all paths”. In addition, the temporal
logic of knowledge (belief) contains an indexed set of unary modal operators “agent i knows” (“agent i believes”), and the language of these logics includes awareness operators. For these logics, we present sequent calculi with
a restricted cut rule. Thus, we obtain proof systems in which proof search becomes decidable. The soundness and completeness
of these calculi are proved.
Published in Lietuvos Matematikos Rinkinys, Vol. 47, No. 3, pp. 328–340, July–September, 2007.
7.
Regenerative events for different queueing models are considered. The aim of this paper is to construct these events for continuous-time
processes if they are given for the corresponding discrete-time model. The construction uses so-called renovative events revealing
the property that the state at time n of the discrete-time model is independent (in an algebraic sense) of the states at epochs not later than n − L (where L is some constant), given some restrictions on the “governing sequence”. Different types of multi-server and
multi-phase queues are considered.
8.
The open pit mine block sequencing problem (OPBS) seeks a discrete-time production schedule that maximizes the net present
value of the orebody extracted from an open-pit mine. This integer program (IP) discretizes the mine’s volume into blocks,
imposes precedence constraints between blocks, and limits resource consumption in each time period. We develop a “sliding
time window heuristic” to solve this IP approximately. The heuristic recursively defines, solves and partially fixes an approximating
model having: (i) fixed variables in early time periods, (ii) an exact submodel defined over a “window” of middle time periods,
and (iii) a relaxed submodel in later time periods. The heuristic produces near-optimal solutions (typically within 2% of
optimality) for model instances that standard optimization software fails to solve. Furthermore, it produces these solutions
quickly, even though our OPBS model enforces standard upper-bounding constraints on resource consumption along with less standard,
but important, lower-bounding constraints.
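A structural sketch, in Python, of the sliding-time-window recursion described above; the toy stand-in solver and all names are assumptions, not taken from the paper:

```python
def sliding_time_window(T, W, solve_window):
    """Skeleton of a sliding-time-window heuristic: decisions in periods before
    the window are fixed, the window itself is solved 'exactly', and later
    periods are relaxed; the window then slides forward."""
    fixed = {}
    t = 0
    while t < T:
        window = list(range(t, min(t + W, T)))
        relaxed = list(range(min(t + W, T), T))
        sol = solve_window(fixed, window, relaxed)
        for p in window:          # commit (partially fix) only the window decisions;
            fixed[p] = sol[p]     # relaxed-period decisions are re-solved in later passes
        t += W
    return fixed

def toy_solver(fixed, window, relaxed):
    # stand-in for an exact IP over `window` plus an LP relaxation over `relaxed`
    return {p: f"extract block set {p}" for p in window + relaxed}

plan = sliding_time_window(T=6, W=2, solve_window=toy_solver)
```

The point of the skeleton is the three-layer structure: fixed prefix, exact window, relaxed suffix. In the paper's setting, `solve_window` would be the IP/LP solve over block-extraction variables with precedence and resource constraints.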
9.
This paper presents a formulation and solution algorithm for a composite dynamic user-equilibrium assignment problem with
multi-user classes, in order to assess the impacts of Advanced Traveler Information Systems (ATIS) in general networks with
queues. Suppose that users equipped with ATIS will receive complete information and hence be able to choose the best departure
times and routes in a deterministic manner, while users not equipped with ATIS will have incomplete information and hence
may make decisions on departure times and routes in a stochastic manner. This paper proposes a discrete-time, finite-dimensional
variational inequality formulation that involves two criteria regarding the route and departure time choice behaviors, i.e.,
the deterministic dynamic user equilibrium and the nested logit-based stochastic dynamic user equilibrium. The formulation
is then converted to an equivalent “zero-extreme value” minimization problem. A heuristic algorithm based on route/time-swapping
process is proposed, which iteratively adjusts the route and departure time choices to approach an extreme point of
the minimization problem. A numerical example is used to demonstrate the effectiveness of the proposed approach for assessing
the ATIS impacts such as changes in individual travel costs, departure times, route inflows, queuing peaks and total network
travel cost.
This revised version was published online in July 2006 with corrections to the Cover Date.
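The nested logit model used above builds on plain multinomial logit choice probabilities; a minimal sketch of that building block (the dispersion parameter and route costs are illustrative):

```python
import math

def logit_probs(costs, theta):
    """Multinomial logit: each alternative's choice probability decays
    exponentially with its travel cost, scaled by the dispersion parameter theta."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# three hypothetical routes with increasing travel cost
print(logit_probs([10.0, 12.0, 15.0], theta=0.5))
```

A nested logit, as in the paper, would apply this form hierarchically (e.g. departure-time nests, then routes within a nest) with separate dispersion parameters per level.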
10.
Maria Battarra, 4OR: A Quarterly Journal of Operations Research, 2011, 9(4): 421–424
This is a summary of the author's PhD thesis, supervised by Daniele Vigo and defended on 30 March 2010 at the Università di
Bologna. The thesis is written in English and is available from the author upon request. Several rich routing problems pertaining
to the transportation area have been studied. “Simple” algorithms, both exact and heuristic, have been proposed to solve them,
producing high-quality solutions and transferable methods.
11.
Nikola Kompa, Acta Analytica, 2005, 20(1): 16–28
The basic idea of conversational contextualism is that knowledge attributions are context sensitive in that a given knowledge
attribution may be true if made in one context but false if made in another, owing to differences in the attributors’ conversational
contexts. Moreover, the context sensitivity involved is traced back to the context sensitivity of the word “know,” which,
in turn, is commonly modelled on the case either of genuine indexicals such as “I” or “here” or of comparative adjectives
such as “tall” or “rich.” But contextualism faces various problems. I argue that in order to solve these problems we need
to look for another account of the context sensitivity involved in knowledge attributions and I sketch an alternative proposal.
12.
The paper develops the mathematics of the heavy traffic approach to the control and optimal control problem for multiplexing
systems, where there are many mutually independent sources which feed into a single channel via a multiplexer (or of networks
composed of such subsystems). Due to the widely varying bit rates over all sources, control over admission, bandwidth, etc.,
is needed to assure good performance. Optimal control and heavy traffic analysis have been shown to yield systems with greatly
improved performance. Indeed, the heavy traffic approach covers many cases of great current interest, and provides a useful
and practical approach to problems of analysis and control arising in modern high speed telecommunications. Past works on
the heavy traffic approach to the multiplexing problem concentrated on the uncontrolled system or on the use of the heavy
traffic limit control problem for applications, and did not provide details of the proofs. This is done in the current paper.
The basic control problem for the physical system is hard, and the heavy traffic approach provides much simplification. Owing
to the presence of the control, as well as to the fact that the cost function of main interest is “ergodic”, the problem cannot
be fully treated with “classical” methods of heavy traffic analysis for queueing networks. A basic result is that the optimal
average costs per unit time for the physical problem converge to the optimal cost per unit time for the limit stationary process
as the number of sources and the time interval go to infinity. This convergence holds in both the mean and pathwise senses.
Furthermore, a “nice” nearly optimal control for the limit system provides nearly optimal values for the physical system,
under heavy traffic, in both a mean and pathwise sense.
This revised version was published online in June 2006 with corrections to the Cover Date.
13.
Lagrangian heuristics for scheduling new product development projects in the pharmaceutical industry
To stay ahead of their competition, pharmaceutical firms must make effective use of their new product development (NPD) capabilities
by efficiently allocating their analytical, clinical-testing, and manufacturing resources across various drug development projects.
The resulting project scheduling problems involve coordinating hundreds of testing and manufacturing activities over a period
of several quarters. Most conventional integer programming approaches are computationally impractical for problems of this
size, while priority rule-driven heuristics seldom provide consistent solution quality. We propose a Lagrangian decomposition
(LD) heuristic that exploits the special structure of these problems. Some resources (typically manpower) are shared across
all on-going projects while others (typically equipment) are specific to individual project categories. Our objective function
is a weighted discounted cost expressed in terms of activity completion times. The LD heuristics were subjected to a comprehensive
experimental study based on typical operational instances. While the conventional “Reward–Risk” priority rule heuristic generates
duality gaps of 47–58%, the best LD heuristic achieves duality gaps of 10–20%. The LD heuristics also yield makespan
reductions of over 30% relative to the Reward–Risk priority rule.
14.
In this paper, we propose a new greedy-like heuristic method, which is primarily intended for the general MDKP, but proves
itself effective also for the 0-1 MDKP. Our heuristic differs from the existing greedy-like heuristics in two aspects. First,
existing heuristics rely on each item’s aggregate consumption of resources to make item selection decisions, whereas our heuristic
uses the effective capacity, defined as the maximum number of copies of an item that can be accepted if the entire knapsack were to be used for that
item alone, as the criterion to make item selection decisions. Second, other methods increment the value of each decision
variable only by one unit, whereas our heuristic adds decision variables to the solution in batches and consequently improves
computational efficiency significantly for large-scale problems. We demonstrate that the new heuristic significantly improves
computational efficiency of the existing methods and generates robust and near-optimal solutions. The new heuristic proves
especially efficient for high-dimensional knapsack problems with small-to-moderate numbers of decision variables, which are
usually considered “hard” MDKPs and for which no computationally efficient heuristic has been available.
Supported in part by NSF grant DMI 9812994.
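A minimal Python sketch of the two ideas the abstract highlights, ranking items by effective capacity and adding copies in batches; the scoring rule (value × effective capacity) and the toy data are assumptions, not taken from the paper:

```python
def effective_capacity_greedy(values, weights, capacities):
    """Greedy heuristic for the general (integer) MDKP: select items by a
    score based on effective capacity, and add whole batches of copies at once."""
    n, m = len(values), len(capacities)
    remaining = list(capacities)
    x = [0] * n

    def eff_cap(j):
        # max copies of item j that fit if the remaining knapsack were used
        # for j alone (assumes every item consumes at least one resource)
        return min(remaining[i] // weights[j][i] for i in range(m) if weights[j][i] > 0)

    while True:
        candidates = [j for j in range(n) if eff_cap(j) > 0]
        if not candidates:
            break
        j = max(candidates, key=lambda k: values[k] * eff_cap(k))
        batch = eff_cap(j)                    # add a whole batch, not one unit at a time
        x[j] += batch
        for i in range(m):
            remaining[i] -= batch * weights[j][i]
    return x

# two items, two resources: weights[j][i] = consumption of resource i by item j
print(effective_capacity_greedy([10, 7], [[3, 2], [2, 2]], [10, 8]))
```

Committing the full effective capacity of the chosen item in one step is what makes the method batch-wise; a gentler variant could add only a fraction of the batch per iteration.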
15.
This paper proposes a new and efficient method for “Escherization”, that is, for generating a tile which is close to a given
shape and whose copies cover the plane without gaps or overlaps except at their boundaries. In this method, the Escherization
problem is reduced to a maximum eigenvalue problem, which can be solved easily, whereas the existing method requires time-consuming
heuristic search. Furthermore, we show that the optimal shape corresponds to the orthogonal projection of the vector representing
the given shape to the “space of tilable shapes”.
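The paper's specific matrix is not given here, but the standard power-iteration method for maximum eigenvalue problems of this kind can be sketched as follows (the example matrix is illustrative):

```python
def power_iteration(A, iters=200):
    """Power iteration: approximate the largest-magnitude eigenvalue and a
    corresponding eigenvector of a square matrix A (given as a list of lists)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0   # rescale to avoid overflow
        v = [x / norm for x in w]
        lam = norm                             # converges to |dominant eigenvalue|
    return lam, v

# symmetric 2x2 example with eigenvalues 3 and 1
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(lam, v)
```

For symmetric positive matrices, as arise from quadratic similarity measures, the iteration converges to the dominant eigenpair whenever the starting vector has a component along it.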
16.
In this paper, we present a novel graph-theoretical approach for representing a wide variety of sequence analysis problems
within a single model. The model allows incorporation of the operations “insertion”, “deletion”, and “substitution”, and various
parameters such as relative distances and weights. Conceptually, we refer to the problem as the minimum weight common mutated sequence (MWCMS) problem. The MWCMS model has many applications, including the multiple sequence alignment problem, phylogenetic analysis, the DNA sequencing
problem, and the sequence comparison problem, which encompass a core set of very difficult problems in computational biology.
Thus the model presented in this paper lays out a mathematical modeling framework that allows one to investigate theoretical
and computational issues, and to forge new advances for these distinct, but related problems.
Through the introduction of supernodes and the multi-layer supergraph, we proved that MWCMS is NP-complete. Furthermore, it was shown that a conflict graph derived from the multi-layer supergraph has the property that a
solution to the associated node-packing problem of the conflict graph corresponds to a solution of the MWCMS problem. In this
case, we proved that when the number of input sequences is a constant, MWCMS is polynomial-time solvable. We also demonstrated
that some well-known combinatorial problems can be viewed as special cases of the MWCMS problem. In particular, we presented
theoretical results implied by the MWCMS theory for the minimum weight supersequence problem, the minimum weight superstring
problem, and the longest common subsequence problem.
Two integer programming formulations were presented and a simple yet elegant decomposition heuristic was introduced. The integer
programming instances have proven to be computationally intensive. Consequently, research involving simultaneous column and
row generation and parallel computing will be explored. The heuristic algorithm, introduced herein for multiple sequence alignment,
overcomes the order-dependent drawbacks of many of the existing algorithms, and is capable of returning good sequence alignments
within reasonable computational time. It is able to return the optimal alignment for multiple sequences of length less than
1500 base pairs within 30 minutes. Its algorithmic decomposition nature lends itself naturally for parallel distributed computing,
and we continue to explore its flexibility and scalability in a massive parallel environment.
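Of the special cases listed, the longest common subsequence problem has a classical dynamic program; a minimal sketch for reference (not the MWCMS formulation itself):

```python
def lcs(s, t):
    """Standard O(mn) dynamic program for the length of the longest common
    subsequence, one of the well-known special cases the MWCMS framework subsumes."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("GATTACA", "GCATGCU"))
```

In MWCMS terms, this corresponds to the two-sequence case with insertions and deletions allowed and substitutions forbidden, which is why it is polynomial-time solvable.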
17.
S. A. Nazarov, Journal of Mathematical Sciences, 1996, 80(5): 1989–2034
The asymptotic series for solutions of the mixed boundary-value problem for the Poisson equation in a domain, which is a junction
of singularly degenerating domains, are constructed. In this paper, which is the first part of the publication, the three-dimensional
problem (“wheel hub with spokes”) and the analogous two-dimensional problems are considered. The methods of matched and compound
asymptotic expansions are used. It is shown that a special self-adjoint extension of the operator of the limit problem in
the “hub” supplied by the straight-line segments (“limits of spokes”) can be chosen as an asymptotic model of the problem
in question; the extension parameters are to be some integral characteristics of the boundary-layer problems. Bibliography:
39 titles.
Translated from Trudy Seminara imeni I. G. Petrovskogo. No. 18, pp. 3–78, 1995.
18.
A. V. Zabrodin, Theoretical and Mathematical Physics, 2000, 125(2): 1455–1475
We construct local M-operators for an integrable discrete-time version of the classical Heisenberg magnet by convoluting the
twisted quantum trigonometric 4×4 R-matrix with certain vectors in its “quantum” space. Components of the vectors are identified
with τ-functions of the model. Hirota's bilinear formalism is extensively used. The construction generalizes the known representation
of M-operators in continuous-time models in terms of Lax operators and the classical r-matrix.
This paper was written at the request of the Editorial Board.
Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 125, No. 2, pp. 179–204, November, 2000.
19.
Provability interpretations of modal logic
Robert M. Solovay, Israel Journal of Mathematics, 1976, 25(3–4): 287–304
We consider interpretations of modal logic in Peano arithmetic (P) determined by an assignment of a sentence v* of P to each propositional variable v. We put (⊥)* = “0 = 1” and (χ → ψ)* = “χ* → ψ*”, and let (□ψ)* be a formalization of “ψ* is a theorem of P”. We say that a modal formula χ is valid if χ* is a theorem of P in each such interpretation. We provide an axiomatization of the class of valid formulae and prove that this class is recursive.
20.
Flemming Topsøe, Journal of Global Optimization, 2009, 43(4): 553–564
Inspired by previous work on information-theoretical optimization problems, the basics of an axiomatic theory of certain special
two-person zero-sum games are developed. One of the players, “Observer”, is imagined to have a “mind”; the other, “Nature”, is not.
These ideas lead to asymmetric modeling, as the two players are treated quite differently. Basic concavity and convexity
results, as well as a general minimax theorem, are derived from the axioms.