Similar Documents
20 similar documents found.
1.
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, the dual of flow constraints, the dual of multi-commodity flow constraints, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonally dominant matrices, and the nonlinear knapsack problem's constraint. All these problems, except for the latter in integers, have polynomial-time algorithms which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems. In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization. MSC classification: 90C30, 68Q25
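The generalized allocation problem named above is the simplest setting in which the proximity-scaling idea applies; its unscaled building block is the classical marginal-gain greedy algorithm for maximizing a separable concave objective under a budget. The following is a minimal sketch of that exact greedy baseline only (illustrative names, not the author's code); proximity-scaling accelerates it by first allocating in large increments and then refining near the coarse solution.

```python
import heapq
import math

def greedy_allocation(marginal_gain, n, budget):
    """Maximize sum_i f_i(x_i) subject to sum_i x_i = budget, x_i >= 0 integer,
    where every f_i is concave and is accessed only through its marginal gains
    marginal_gain(i, k) = f_i(k) - f_i(k - 1).  The greedy rule (always give
    the next unit to the activity with the largest marginal gain) is exact
    for concave f_i."""
    x = [0] * n
    # Max-heap (via negated gains) holding each activity's next marginal gain.
    heap = [(-marginal_gain(i, 1), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (-marginal_gain(i, x[i] + 1), i))
    return x

# Toy instance: f_i(k) = w_i * sqrt(k), a concave return function.
w = [3.0, 1.0, 2.0]
mg = lambda i, k: w[i] * (math.sqrt(k) - math.sqrt(k - 1))
print(greedy_allocation(mg, n=3, budget=10))  # an allocation of 10 units
```

Run as written, the loop takes one iteration per budget unit; the scaled variants discussed in the abstract reduce the dependence on the budget to a logarithmic one.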

2.
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, the dual of flow constraints, the dual of multi-commodity flow constraints, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonally dominant matrices, and the nonlinear knapsack problem's constraint. All these problems, except for the latter in integers, have polynomial-time algorithms which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems. In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization. An earlier version of this paper appeared in 4OR, 3:3, 171–216, 2005.

3.
The purpose of this paper is to introduce and study a new class of combinatorial optimization problems in which the objective function is the algebraic sum of a bottleneck cost function (Min-Max) and a linear cost function (Min-Sum). General algorithms for solving such problems are described and general complexity results are derived. A number of examples of application involving matchings, paths and cutsets, matroid bases, and matroid intersection problems are examined, and the general complexity results are specialized to each of them. The interest of these problems stems in particular from their strong relation to other important and difficult combinatorial problems, such as weighted edge coloring of a graph, optimum weighted covering with matroid bases, optimum weighted partitioning with matroid intersections, etc. Another important area of application of the algorithms given in the paper is bicriterion analysis involving a Min-Max criterion and a Min-Sum one.
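For path problems, the general scheme described above specializes to a threshold procedure: enumerate the candidate bottleneck values, keep only the edges whose bottleneck cost does not exceed the current threshold, and solve an ordinary min-sum shortest-path problem on the restriction. A minimal sketch under the assumption of nonnegative linear costs (the instance and names are illustrative):

```python
import heapq

def shortest_path_cost(n, edges, s, t):
    """Dijkstra on an undirected graph given as (u, v, cost) triples;
    returns the min-sum cost of an s-t path, or None if t is unreachable."""
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    dist = [float("inf")] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, c in adj[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(pq, (dist[v], v))
    return None if dist[t] == float("inf") else dist[t]

def minmax_plus_minsum_path(n, edges, s, t):
    """Minimize  max_e b(e) + sum_e c(e)  over s-t paths by trying every
    candidate bottleneck threshold and solving a plain min-sum subproblem.
    edges: list of (u, v, bottleneck_b, linear_c)."""
    best = None
    for thr in sorted({b for _, _, b, _ in edges}):
        allowed = [(u, v, c) for u, v, b, c in edges if b <= thr]
        sp = shortest_path_cost(n, allowed, s, t)
        if sp is not None:
            val = thr + sp
            best = val if best is None else min(best, val)
    return best

# Tiny example: 4 nodes; objective = max edge weight b on the path + sum of c.
edges = [(0, 1, 5, 1.0), (1, 3, 5, 1.0), (0, 2, 1, 4.0), (2, 3, 1, 4.0)]
print(minmax_plus_minsum_path(4, edges, 0, 3))  # -> 7.0 (path 0-1-3: 5 + 2)
```

The same outer loop applies to matchings, matroid bases, and the other ground structures mentioned in the abstract; only the inner min-sum solver changes.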

4.
In this paper, we deal with the determination of the entire set of Pareto solutions of location problems involving Q general criteria. These criteria include median, center, or centdian objective functions as particular instances. We characterize the set of Pareto solutions of all these multicriteria problems for any polyhedral gauge. An efficient algorithm is developed for the planar case and its complexity is established. Extensions to the nonconvex case are also considered. The proposed approach is more general than previously published approaches to multicriteria location problems. The research of the third and fourth authors was partially supported by Grants BFM2001-2378, BFM2001-4028, BFM2004-0909, and HA2003-0121.

5.
We consider the intrinsic complexity of selected algorithmic problems of classical elimination theory in algebraic geometry. The inputs and outputs of these problems are given by finite sets of polynomials which we represent alternatively in dense form or by straight-line programs. We begin with an overview of the known upper bounds for the sequential and parallel time complexity of these problems and then show that in the most important cases these bounds are tight. Our lower bound results include both the relative and the absolute viewpoint of complexity theory. On one side we give reductions of fundamental questions of elimination theory to NP- and #P-complete problems, and on the other side we show that some of these questions may have outputs of exponential size. In this way we confirm the intrinsically exponential character of algorithmic problems in elimination theory, whatever the type of data structure may be.

6.
The Stem algorithm and Steuer's filtering procedure were offered to 142 subjects to assist them in solving six car-buying problems of varying complexity. Subjects could switch between the algorithms and some other program options almost at will. We show that the algorithms are evaluated differently and that objective measures of the solution process reveal differences between the algorithms that support the subjective evaluations. If the objective task complexity is noticed (at least along one dimension: the number of objective functions), we find that the choice of procedure varies with it, as well as with subjective complexity evaluations. In general, Stem is favored by a minority of subjects, mostly for smaller-sized problems.

7.
We investigate the complexity of finding solutions to infinite recursive constraint satisfaction problems. We show that, in general, the problem of finding a solution to an infinite recursive constraint satisfaction problem is equivalent to the problem of finding an infinite path through a recursive tree. We also identify natural classes of infinite recursive constraint satisfaction problems for which finding a solution is equivalent to finding an infinite path through a finitely branching recursive tree or through a recursive binary tree. There are a large number of results in the literature on the complexity of the problem of finding an infinite path through a recursive tree. Our main result allows us to transfer such results automatically, yielding equivalent results about the complexity of finding a solution to a recursive constraint satisfaction problem.
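The infinite recursive setting of the paper cannot be executed directly, but the correspondence it relies on, namely that solutions are exactly the infinite paths through the tree of consistent partial assignments, is the same one that drives an ordinary finite backtracking solver. A minimal finite analogue, purely for illustration:

```python
def solve_csp(domains, constraints):
    """Finite analogue of the solution-as-path correspondence: a solution of
    the CSP is a root-to-leaf path in the tree of consistent partial
    assignments.  domains: one iterable per variable; constraints: functions
    taking a partial assignment (tuple) and returning False only when that
    partial assignment already violates them."""
    n = len(domains)

    def extend(partial):
        if len(partial) == n:                 # a complete path = a solution
            return partial
        for value in domains[len(partial)]:   # branch over the node's children
            child = partial + (value,)
            if all(ok(child) for ok in constraints):
                found = extend(child)
                if found is not None:
                    return found
        return None                           # dead branch: backtrack

    return extend(())

# Toy instance: three variables over {0, 1, 2}, all pairwise different.
all_diff = lambda p: len(set(p)) == len(p)
print(solve_csp([range(3)] * 3, [all_diff]))  # -> (0, 1, 2)
```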

8.
9.
Proofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search, genetic algorithms and simulated annealing have become popular and efficient tools for solving hard combinatorial optimization problems. We review the various meta-heuristics that have been specifically developed to solve lot sizing problems, discussing their main components such as representation, evaluation, neighborhood definition and genetic operators. Further, we briefly review other solution approaches, such as dynamic programming, cutting planes, Dantzig–Wolfe decomposition, Lagrangian relaxation and dedicated heuristics. This allows us to compare these techniques. Understanding their respective advantages and disadvantages gives insight into how we can integrate elements from several solution approaches into more powerful hybrid algorithms. Finally, we discuss general guidelines for computational experiments and illustrate these with several examples.
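As a concrete reference point for the exact approaches mentioned (dynamic programming in particular), the single-item uncapacitated case can be solved by the classical Wagner-Whitin dynamic program. A minimal O(T^2) sketch with illustrative parameter names, assuming a constant setup cost and a constant per-unit, per-period holding cost:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-item uncapacitated lot sizing by the classical Wagner-Whitin
    dynamic program.  demand[t]: demand of period t+1; setup_cost: fixed cost
    of placing an order; holding_cost: cost of carrying one unit one period.
    Returns the minimum total cost."""
    T = len(demand)
    INF = float("inf")
    best = [0.0] + [INF] * T              # best[t]: optimal cost of periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):         # last order is placed in period j
            carry = sum(holding_cost * (i - j) * demand[i - 1]
                        for i in range(j, t + 1))
            best[t] = min(best[t], best[j - 1] + setup_cost + carry)
    return best[T]

# Toy instance: 4 periods.
print(wagner_whitin([20, 50, 10, 40], setup_cost=100.0, holding_cost=1.0))
# -> 270.0 (order in periods 1 and 4)
```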

10.
We present several complexity results related to generation and counting of all circuits of an independence system. Our motivation to study these problems is their relevance in the solution of resource constrained scheduling problems, where an independence system arises as the subsets of jobs that may be scheduled simultaneously. We are interested in the circuits of this system, the so-called minimal forbidden sets, which are minimal subsets of jobs that must not be scheduled simultaneously. As a consequence of the complexity results for general independence systems, we obtain several complexity results in the context of resource constrained scheduling. On that account, we propose and analyze a simple backtracking algorithm that generates all minimal forbidden sets for such problems. The performance of this algorithm, in comparison to a previously suggested divide-and-conquer approach, is evaluated empirically using instances from the project scheduling library PSPLIB. Acknowledgement: We appreciate the input of two anonymous referees. Particularly the deep remarks of one of them greatly improved our understanding of several issues; he also suggested the simplified Example 1. We thank Marc Pfetsch and Alexander Grigoriev for several enlightening discussions. Marc Pfetsch also pointed us to the paper [15]. Parts of this work were done while the authors were PhD students at the Technische Universität Berlin, Germany, where Frederik Stork was supported by DFG grant Mo 446/3-3, and Marc Uetz was supported by bmb+f grant 03-MO7TU1-3 and GIF grant I 246-304.02/97.
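For a single renewable resource (and ignoring the precedence constraints of the full project scheduling setting), minimality has a particularly simple form: a job set is a minimal forbidden set exactly when its total demand exceeds the capacity but dropping any one job makes it fit. The backtracking enumeration below is a stripped-down illustration in that spirit, not the authors' algorithm:

```python
def minimal_forbidden_sets(demands, capacity):
    """Enumerate all minimal forbidden sets for a single renewable resource:
    job subsets whose total demand exceeds the capacity while every proper
    subset fits.  Simple depth-first backtracking over job indices."""
    n = len(demands)
    result = []

    def extend(start, chosen, total):
        for j in range(start, n):
            new_total = total + demands[j]
            if new_total > capacity:
                # chosen + [j] is forbidden; it is minimal iff dropping any
                # single job makes it fit again.
                cand = chosen + [j]
                if all(new_total - demands[i] <= capacity for i in cand):
                    result.append(cand)
                # Supersets of a forbidden set are never minimal: do not extend.
            else:
                extend(j + 1, chosen + [j], new_total)

    extend(0, [], 0)
    return result

# Toy instance: capacity 5, four jobs.
print(minimal_forbidden_sets([3, 3, 2, 4], capacity=5))
# -> [[0, 1], [0, 3], [1, 3], [2, 3]]
```

Pruning as soon as the capacity is exceeded is safe precisely because a proper superset of a forbidden set can never be minimal.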

11.
We investigate optimal algorithms for optimizing and approximating a general high-dimensional smooth and sparse function from the perspective of information-based complexity. Our algorithms and analyses reveal several interesting characteristics of these tasks. In particular, somewhat surprisingly, we show that the optimal sample complexity for optimization or high-precision approximation is independent of the ambient dimension. In addition, we show that the benefit of randomization could be substantial for these problems.

12.
The problems of synchronization and pinning control for general time-delay complex dynamical networks are investigated. In this paper, less conservative criteria for both continuous-time and discrete-time complex dynamical networks with time delay are obtained. Pinning control strategies are designed, respectively, to synchronize these complex dynamical networks. Moreover, the controller design problems are converted into optimization problems over a series of linear matrix inequalities, which reduces the computational complexity. Finally, numerical simulations verify the effectiveness of our methodology.

13.
We define new complexity classes in the Blum–Shub–Smale theory of computation over the reals, in the spirit of the polynomial hierarchy, with the help of infinitesimal and generic quantifiers. Basic topological properties of semialgebraic sets like boundedness, closedness, compactness, as well as the continuity of semialgebraic functions are shown to be complete in these new classes. All attempts to classify the complexity of these problems in terms of the previously studied complexity classes have failed. We also obtain completeness results in the Turing model for the corresponding discrete problems. In this setting, it turns out that infinitesimal and generic quantifiers can be eliminated, so that the relevant complexity classes can be described in terms of the usual quantifiers only.

14.
15.
Single- and multiple-ratio unconstrained hyperbolic 0-1 programming problems are considered. We prove that checking whether these problems have a unique solution is NP-hard. Furthermore, we show that finding the global maximizer of problems with a unique solution remains NP-hard. We also discuss the complexity of local search and the approximability of multiple-ratio problems.
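For orientation, the hardness results above concern questions such as uniqueness checking; the basic single-ratio maximization itself, under the common assumption that the denominator is positive for every 0-1 vector, is routinely handled by Dinkelbach's parametric scheme, in which the inner problem separates across variables. A minimal sketch under that assumption (illustrative names, not the paper's construction):

```python
def maximize_single_ratio(a0, a, b0, b, tol=1e-12, max_iter=100):
    """Dinkelbach's parametric scheme for
        max_{x in {0,1}^n} (a0 + sum a_i x_i) / (b0 + sum b_i x_i),
    assuming the denominator is positive for every 0-1 vector
    (e.g. b0 > 0 and all b_i >= 0).  For a fixed parameter lam the inner
    problem max N(x) - lam*D(x) separates: set x_i = 1 iff a_i - lam*b_i > 0."""
    n = len(a)
    lam = a0 / b0                         # ratio of the all-zero solution
    x = [0] * n
    for _ in range(max_iter):
        x = [1 if a[i] - lam * b[i] > 0 else 0 for i in range(n)]
        num = a0 + sum(a[i] * x[i] for i in range(n))
        den = b0 + sum(b[i] * x[i] for i in range(n))
        if abs(num - lam * den) <= tol:   # F(lam) ~ 0: lam is the optimal ratio
            return lam, x
        lam = num / den
    return lam, x

# Toy instance.
print(maximize_single_ratio(a0=1.0, a=[4.0, -2.0, 3.0],
                            b0=2.0, b=[1.0, 1.0, 5.0]))
# -> optimal ratio 5/3 with x = [1, 0, 0]
```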

16.
Journal of Complexity, 1996, 12(3): 239–253
We study the complexity of some computational problems when certain stability guarantees are required. After providing suitable settings for this kind of investigation, we are able to give theoretical support to longstanding viewpoints in numerical analysis. More precisely, we focus on a set of significant examples and prove that: (1) the fastest known polynomial evaluation algorithms are not stable, (2) LUP decomposition cannot be computed stably in polylogarithmic parallel time, and (3) reductions among computational problems do not in general constitute a practical tool for solving numerical problems.

17.
Min-max and min-max regret criteria are commonly used to define robust solutions. After motivating the use of these criteria, we present general results. Then, we survey complexity results for the min-max and min-max regret versions of some combinatorial optimization problems: shortest path, spanning tree, assignment, min cut, min s-t cut, and knapsack. Since most of these problems are NP-hard, we also investigate their approximability. Furthermore, we present algorithms to solve these problems to optimality.
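On a small discrete-scenario instance, both criteria can be evaluated by brute force, which also makes the difference between them concrete. A minimal sketch on a toy selection problem (choose exactly p items; the scenario costs are made up for illustration):

```python
from itertools import combinations

def robust_selection(costs, p):
    """costs[s][i]: cost of item i under scenario s.  Choose exactly p items.
    Returns the min-max solution and the min-max regret solution by brute
    force (only sensible for tiny instances)."""
    scenarios = range(len(costs))
    items = range(len(costs[0]))
    sols = list(combinations(items, p))
    cost = lambda sol, s: sum(costs[s][i] for i in sol)
    # Best achievable cost in each scenario, needed to compute regrets.
    best_in_s = {s: min(cost(sol, s) for sol in sols) for s in scenarios}
    min_max = min(sols, key=lambda sol: max(cost(sol, s) for s in scenarios))
    min_max_regret = min(sols, key=lambda sol: max(cost(sol, s) - best_in_s[s]
                                                   for s in scenarios))
    return min_max, min_max_regret

# Two scenarios, four items, pick two.
costs = [[1, 4, 3, 6],
         [5, 2, 3, 1]]
print(robust_selection(costs, p=2))  # -> ((0, 1), (0, 3))
```

On this instance the two criteria already select different item sets, illustrating that they need not agree.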

18.
This paper investigates the complexity of the min-max and min-max regret assignment problems in both the discrete-scenario and interval-data cases. We show that these problems are strongly NP-hard for an unbounded number of scenarios. We also show that the interval-data min-max regret assignment problem is strongly NP-hard.

19.
We study the complexity of the problem of deciding the existence of a spanning subgraph of a given graph, and of that of finding a maximum (weight) such subgraph. We establish some general relations between these problems, and we use these relations to obtain new NP-completeness results for maximum (weight) spanning subgraph problems from analogous results for existence problems and from results in extremal graph theory. On the positive side, we provide a decomposition method for the maximum (weight) spanning chordal subgraph problem that can be used, e.g., to obtain a linear (or O(n log n)) time algorithm for such problems in graphs with vertex degree bounded by 3.

20.
Facility location problems have been investigated in the Operations Research literature from a variety of algorithmic perspectives, including those of approximation algorithms, heuristics, and linear programming. We introduce the study of these problems from the point of view of parameterized algorithms and complexity. Some applications of algorithms for these problems in the processing of semistructured documents and in computational biology are also described.
