Similar Documents
20 similar documents found; search time: 62 ms.
2.
We initiate the study of a new measure of approximation. This measure compares the performance of an approximation algorithm to the random assignment algorithm. This is a useful measure for optimization problems where the random assignment algorithm is known to give essentially the best possible polynomial time approximation. In this paper, we focus on this measure for the optimization problems Max‐Lin‐2, in which we need to maximize the number of satisfied linear equations in a system of linear equations modulo 2, and Max‐k‐Lin‐2, a special case of the above problem in which each equation has at most k variables. The main techniques we use, in our approximation algorithms and inapproximability results for this measure, are from Fourier analysis and derandomization. © 2004 Wiley Periodicals, Inc. Random Struct. Alg., 2004
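To make the random assignment baseline concrete, here is a minimal Python sketch (the equation encoding is hypothetical): a uniformly random 0/1 assignment satisfies each equation modulo 2 with probability 1/2, so it satisfies half of the equations in expectation, and repeating the draw keeps the best assignment found.

```python
import random

def random_assignment_maxlin2(equations, n_vars, trials=1000):
    """Random-assignment baseline for Max-k-Lin-2. Each equation is a pair
    (vs, rhs) encoding sum(x[i] for i in vs) % 2 == rhs. A uniform random
    assignment satisfies each equation with probability 1/2, so the expected
    number of satisfied equations is len(equations) / 2."""
    best = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n_vars)]
        sat = sum((sum(x[i] for i in vs) % 2) == rhs for vs, rhs in equations)
        best = max(best, sat)
    return best

# Three equations over x0..x2: x0+x1=1, x1+x2=0, x0+x2=1 (mod 2).
eqs = [([0, 1], 1), ([1, 2], 0), ([0, 2], 1)]
print(random_assignment_maxlin2(eqs, n_vars=3))
```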

3.
We return to a classic problem of structural optimization whose solution requires microstructure. It is well‐known that perimeter penalization assures the existence of an optimal design. We are interested in the regime where the perimeter penalization is weak; i.e., in the effect of perimeter as a selection mechanism in structural optimization. To explore this topic in a simple yet challenging example, we focus on a two‐dimensional elastic shape optimization problem involving the optimal removal of material from a rectangular region loaded in shear. We consider the minimization of a weighted sum of volume, perimeter, and compliance (i.e., the work done by the load), focusing on the behavior as the weight ɛ of the perimeter term tends to 0. Our main result concerns the scaling of the optimal value with respect to ɛ. Our analysis combines an upper bound and a lower bound. The upper bound is proved by finding a near‐optimal structure, which resembles a rank‐2 laminate except that the approximate interfaces are replaced by branching constructions. The lower bound, which shows that no other microstructure can be much better, uses arguments based on the Hashin‐Shtrikman variational principle. The regime being considered here is particularly difficult to explore numerically due to the intrinsic nonconvexity of structural optimization and the spatial complexity of the optimal structures. While perimeter has been considered as a selection mechanism in other problems involving microstructure, the example considered here is novel because optimality seems to require the use of two well‐separated length scales. © 2016 Wiley Periodicals, Inc.
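To fix notation, the functional being minimized has the schematic form below; normalizing the volume and compliance weights to 1 is an assumption made here for readability, since only the perimeter weight ɛ matters for the asymptotics.

```latex
% Shape optimization with weak perimeter penalization (weights assumed):
\[
  \min_{\Omega}\; |\Omega| \;+\; \varepsilon\,\operatorname{Per}(\Omega)
  \;+\; \operatorname{Comp}(\Omega), \qquad \varepsilon \to 0,
\]
% where Comp(Omega) is the compliance, i.e. the work done by the shear
% load on the elastic displacement of the remaining structure.
```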

4.
In this paper, we propose an efficient numerical scheme for solving some large‐scale ill‐posed linear inverse problems arising from image restoration. In order to accelerate the computation, two different hidden structures are exploited. First, the coefficient matrix is approximated as the sum of a small number of Kronecker products. This procedure not only introduces one more level of parallelism into the computation but also enables the usage of computationally intensive matrix–matrix multiplications in the subsequent optimization procedure. We then derive the corresponding Tikhonov regularized minimization model and extend the fast iterative shrinkage‐thresholding algorithm (FISTA) to solve the resulting optimization problem. Because the matrices appearing in the Kronecker product approximation are all structured matrices (Toeplitz, Hankel, etc.), we can further exploit their fast matrix–vector multiplication algorithms at each iteration. The proposed algorithm is thus called structured FISTA (sFISTA). In particular, we show that the approximation error introduced by sFISTA is well under control and sFISTA can reach the same image restoration accuracy level as FISTA. Finally, both the theoretical complexity analysis and some numerical results are provided to demonstrate the efficiency of sFISTA.
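The computational gain rests on the standard Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ), with vec the column-major stacking, which replaces one huge matrix–vector product by two small matrix–matrix products. A minimal numerical check follows; generic dense matrices stand in for the Toeplitz/Hankel factors, whose fast multiplications would replace the dense products.

```python
import numpy as np

# (A kron B) @ vec(X) == vec(B @ X @ A.T), with vec = column-major stacking.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
X = rng.standard_normal((5, 4))

lhs = np.kron(A, B) @ X.flatten(order="F")   # naive: builds the 20x20 matrix
rhs = (B @ X @ A.T).flatten(order="F")       # structured: two small products
assert np.allclose(lhs, rhs)
```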

5.
Polynomial time approximation schemes and parameterized complexity
In this paper, we study the relationship between the approximability and the parameterized complexity of NP optimization problems. We introduce a notion of polynomial fixed-parameter tractability and prove that, under a very general constraint, an NP optimization problem has a fully polynomial time approximation scheme if and only if the problem is polynomial fixed-parameter tractable. By enforcing a constraint of planarity on the W-hierarchy studied in parameterized complexity theory, we obtain a class of NP optimization problems, the planar W-hierarchy, and prove that all problems in this class have efficient polynomial time approximation schemes (EPTAS). The planar W-hierarchy seems to contain most of the known EPTAS problems, and is significantly different from the class introduced by Khanna and Motwani in their effort to characterize optimization problems with polynomial time approximation schemes.

6.
Many space mission planning problems may be formulated as hybrid optimal control problems, i.e., problems that include both continuous-valued variables and categorical (binary) variables. There may be thousands to millions of possible solutions; a current practice is to pre-prune the categorical state space to limit the number of possible missions to a number that may be evaluated via total enumeration. Of course, this risks pruning away the optimal solution. The method developed here avoids the need for pre-pruning by incorporating a new solution approach using nested genetic algorithms: an outer-loop genetic algorithm that optimizes the categorical variable sequence and an inner-loop genetic algorithm that can use either a shape-based approximation or a Lambert problem solver to quickly locate near-optimal solutions and return the cost to the outer-loop genetic algorithm. This solution technique is tested on three asteroid tour missions of increasing complexity and is shown to yield near-optimal, and possibly optimal, missions in many fewer evaluations than total enumeration would require.
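A minimal sketch of the nested structure follows; all mission data are toy stand-ins, and the inner loop uses crude random sampling in place of the shape-based approximation or Lambert solver.

```python
import random

BODIES = list(range(8))          # candidate asteroids (hypothetical)

def inner_cost(seq, samples=200):
    """Stand-in for the inner loop: cheaply estimate the best continuous
    trajectory cost for a fixed visit sequence by random sampling."""
    best = float("inf")
    for _ in range(samples):
        t = [random.random() for _ in seq]                   # leg timings
        c = sum(abs(a - b) * (1 + ti) for a, b, ti in zip(seq, seq[1:], t))
        best = min(best, c)
    return best

def outer_ga(pop_size=20, gens=30, k=4):
    """Outer loop: evolve the categorical decision (which bodies, what order)."""
    pop = [random.sample(BODIES, k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=inner_cost)                # inner loop prices each sequence
        elite = pop[: pop_size // 2]
        children = []
        for p in elite:
            child = p[:]
            i, j = random.randrange(k), random.randrange(k)  # swap mutation
            child[i], child[j] = child[j], child[i]
            if random.random() < 0.3:                        # body replacement
                child[i] = random.choice([b for b in BODIES if b not in child])
            children.append(child)
        pop = elite + children
    return min(pop, key=inner_cost)

print(outer_ga())
```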

7.
Recent work in the analysis of randomized approximation algorithms for NP‐hard optimization problems has involved approximating the solution to a problem by the solution of a related subproblem of constant size, where the subproblem is constructed by sampling elements of the original problem uniformly at random. In light of interest in problems with a heterogeneous structure, for which uniform sampling might be expected to yield suboptimal results, we investigate the use of nonuniform sampling probabilities. We develop and analyze an algorithm which uses a novel sampling method to obtain improved bounds for approximating the Max‐Cut of a graph. In particular, we show that by judicious choice of sampling probabilities one can obtain error bounds that are superior to the ones obtained by uniform sampling, both for unweighted and weighted versions of Max‐Cut. Of at least as much interest as the results we derive are the techniques we use. The first technique is a method to compute a compressed approximate decomposition of a matrix as the product of three smaller matrices, each of which has several appealing properties. The second technique is a method to approximate the feasibility or infeasibility of a large linear program by checking the feasibility or infeasibility of a nonuniformly randomly chosen subprogram of the original linear program. We expect that these and related techniques will prove fruitful for the future development of randomized approximation algorithms for problems whose input instances contain heterogeneities. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008
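The following sketch illustrates the sampling idea on Max-Cut: draw a constant-size subproblem with probabilities proportional to weighted degree rather than uniformly, solve it exactly, and rescale. The rescaling and the choice of probabilities here are simplifications; the paper's estimator and its error bounds are substantially more refined.

```python
import itertools
import numpy as np

def maxcut_exact(W):
    """Brute-force Max-Cut of a small weight matrix W."""
    n = len(W)
    best = 0.0
    for mask in itertools.product([0, 1], repeat=n):
        cut = sum(W[i][j] for i in range(n) for j in range(i + 1, n)
                  if mask[i] != mask[j])
        best = max(best, cut)
    return best

def sampled_maxcut_estimate(W, s=8, trials=20, seed=0):
    """Sample s vertices with probability proportional to weighted degree,
    solve the induced subproblem exactly, and average a rescaled value
    (the n^2/s^2 rescaling matches the simple uniform-sampling heuristic)."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    p = W.sum(axis=1) / W.sum()                # nonuniform: degree-weighted
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(trials):
        idx = rng.choice(n, size=s, replace=False, p=p)
        est.append(maxcut_exact(W[np.ix_(idx, idx)]) * (n * n) / (s * s))
    return float(np.mean(est))

# Demo on a random weighted graph:
M = np.triu(np.random.default_rng(1).random((30, 30)), 1)
print(sampled_maxcut_estimate(M + M.T))
```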

8.
We discuss the average case complexity of global optimization problems. By the average complexity, we roughly mean the amount of work needed to solve the problem with the expected error not exceeding a preassigned error demand. The expectation is taken with respect to a probability measure on a class F of objective functions. Since the distribution of the maximum, max_x f(x), is known only for a few nontrivial probability measures, the average case complexity of optimization is still unknown. Although only preliminary results are available, they indicate that on the average, optimization is not as hard as in the worst case setting. In particular, there are instances where global optimization is intractable in the worst case, whereas it is tractable on the average. We stress that the power of the average case approach is proven by exhibiting upper bounds on the average complexity, since the actual complexity is not known even for relatively simple instances of global optimization problems. Thus, we do not know how much easier global optimization becomes when the average case approach is utilized. Research partially supported by the National Science Foundation under Grant CCR-89-0537.

9.
Multiscale or multiphysics problems often involve coupling of partial differential equations posed on domains of different dimensionality. In this work, we consider a simplified model problem of 3d‐1d coupling, and the main objective is to construct algorithms that may utilize standard multilevel algorithms for the 3d domain, which has the dominating computational complexity. Preconditioning for a system of two elliptic problems, posed respectively in a three‐dimensional domain and an embedded one‐dimensional curve and coupled by a trace constraint, is discussed. Investigating numerically the properties of the well‐defined discrete trace operator, we find that negative fractional Sobolev norms are suitable preconditioners for the Schur complement of the system. These norms are employed to construct a robust block diagonal preconditioner for the coupled problem.
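A minimal sketch of how a negative fractional Sobolev norm can be realized discretely, via the spectral definition on a uniform 1d grid; the boundary conditions and scaling are assumptions made for the illustration.

```python
import numpy as np

def fractional_sobolev_operator(n, s=-0.5):
    """Spectrally defined H^s operator on a uniform 1d grid: with A the
    scaled 1d Laplacian and A = V diag(lam) V^T, return V diag((1+lam)^s) V^T.
    For s = -1/2 this realizes a negative-fractional-norm mapping of the kind
    used to precondition the trace-coupled Schur complement (a sketch only;
    Dirichlet conditions and the (I + A)^s convention are assumptions)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    lam, V = np.linalg.eigh(A)
    return V @ np.diag((1.0 + lam) ** s) @ V.T

P = fractional_sobolev_operator(64, s=-0.5)   # apply as x -> P @ x
print(P.shape)
```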

10.

We provide a new hierarchy of semidefinite programming relaxations, called NCTSSOS, to solve large-scale sparse noncommutative polynomial optimization problems. This hierarchy features the exploitation of term sparsity hidden in the input data for eigenvalue and trace optimization problems. NCTSSOS complements the recent work that exploits correlative sparsity for noncommutative optimization problems by Klep et al. (MP, 2021), and is the noncommutative analogue of the TSSOS framework by Wang et al. (SIAMJO 31: 114–141, 2021, SIAMJO 31: 30–58, 2021). We also propose an extension exploiting simultaneously correlative and term sparsity, as done previously in the commutative case (Wang in CS-TSSOS: Correlative and term sparsity for large-scale polynomial optimization, 2020). Under certain conditions, we prove that the optima of the NCTSSOS hierarchy converge to the optimum of the corresponding dense semidefinite programming relaxation. We illustrate the efficiency and scalability of NCTSSOS by solving eigenvalue/trace optimization problems from the literature as well as randomly generated examples involving up to several thousand variables.

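As a toy illustration of the correlative-sparsity idea that NCTSSOS complements: variables co-occurring in a monomial are linked, and the resulting blocks support smaller SDP relaxations. The actual framework additionally uses chordal extensions and an iterative term-sparsity pattern, which this sketch omits.

```python
from collections import defaultdict

def correlative_blocks(terms, n_vars):
    """Sketch of correlative-sparsity detection: each term is a tuple of
    variable indices appearing in one monomial; variables in the same term
    are linked, and connected components give the variable blocks on which
    smaller relaxations can be built."""
    adj = defaultdict(set)
    for term in terms:
        for i in term:
            adj[i].update(v for v in term if v != i)
    seen, blocks = set(), []
    for v in range(n_vars):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                      # depth-first component search
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        blocks.append(sorted(comp))
    return blocks

# f = x0*x1 + x1*x2 + x3*x4 -> two independent blocks {0,1,2} and {3,4}.
print(correlative_blocks([(0, 1), (1, 2), (3, 4)], n_vars=5))
```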

11.
Bin packing problems are at the core of many well-known combinatorial optimization problems and of several practical applications. In this work we introduce a novel variant of an abstract bin packing problem which is subject to a chaining constraint among items. The problem stems from an application of container handling in rail freight terminals, but is also of relevance in other fields, such as project scheduling. The paper provides a structural analysis which establishes the computational complexity of several problem versions and develops (pseudo-)polynomial algorithms for specific subproblems. We further propose and evaluate simple and fast heuristics for optimization versions of the problem.
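As a concrete baseline, here is a first-fit sketch under one hypothetical reading of the chaining constraint (each item must be packed into a bin no earlier than its chain predecessor's bin); the paper's actual constraint and heuristics may differ.

```python
def first_fit_chained(items, capacity):
    """First-fit heuristic with a hypothetical chaining rule: items carry
    (size, chain_id), arrive in chain order, and each must land in a bin
    with index >= its chain predecessor's bin. A sketch only."""
    bins, loads, last_bin = [], [], {}      # last_bin: chain_id -> bin index
    for size, chain in items:
        start = last_bin.get(chain, 0)      # respect the chain's progress
        placed = None
        for b in range(start, len(bins)):
            if loads[b] + size <= capacity:
                placed = b
                break
        if placed is None:                  # open a new bin
            bins.append([])
            loads.append(0.0)
            placed = len(bins) - 1
        bins[placed].append((size, chain))
        loads[placed] += size
        last_bin[chain] = placed
    return bins

print(first_fit_chained([(0.4, "a"), (0.5, "b"), (0.3, "a"), (0.6, "b")], 1.0))
```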

12.
The weighted median problem arises as a subproblem in certain multivariate optimization problems, including L1 approximation. Three algorithms for the weighted median problem are presented and the relationships between them are discussed. We report on computational experience with these algorithms and on their use in the context of multivariate L1 approximation. This work was supported in part by National Science Foundation Grant CCR-8713893 and in part by a grant from The City University of New York PSC-CUNY Research Award program.
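For reference, a sorting-based weighted median, the O(n log n) baseline; selection-based algorithms of the kind discussed in the paper reach linear time.

```python
def weighted_median(points, weights):
    """Weighted (lower) median: the smallest x such that the total weight of
    points <= x reaches half of the overall weight."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    half, acc = sum(weights) / 2.0, 0.0
    for i in order:
        acc += weights[i]
        if acc >= half:
            return points[i]

# Used e.g. as the 1d subproblem inside multivariate L1 approximation:
print(weighted_median([3.0, 1.0, 4.0, 2.0], [1.0, 1.0, 3.0, 1.0]))
```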

13.
The goal of this paper is to find a low‐rank approximation for a given nth‐order tensor. Specifically, we give a computable strategy for calculating the rank of a given tensor, based on approximating the solution to an NP‐hard problem. In this paper, we formulate a sparse optimization problem via l1‐regularization to find a low‐rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaling algorithm of the proximal alternating minimization and study the theoretical convergence of this algorithm. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In the simulation experiments, the performance of our algorithm supports the claim that our method provides an efficient estimate of the number of rank‐one tensor components in a given tensor. Moreover, the algorithm is also applied to surveillance videos for low‐rank approximation.
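A compact sketch in the spirit of the method: alternating least squares with a soft-thresholding step on the component weights, so superfluous rank-one components are driven to zero. The paper's rescaled proximal alternating minimization and its parameter choice differ in detail.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, the proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l1_cp_rank_estimate(T, R=8, tau=0.05, iters=100, seed=0):
    """Fit T ~ sum_r lam_r a_r (x) b_r (x) c_r with an l1 penalty on the
    weights lam; nnz(lam) estimates the rank. The weights live in the
    (unnormalized) columns of A."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R)); B /= np.linalg.norm(B, axis=0)
    C = rng.standard_normal((K, R)); C /= np.linalg.norm(C, axis=0)
    for _ in range(iters):
        # Mode-0 least squares (Khatri-Rao + Hadamard-of-Grams trick),
        # followed by the proximal l1 step on the column norms of A.
        KR = (B[:, None, :] * C[None, :, :]).reshape(-1, R)
        A = T.reshape(I, -1) @ KR @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        norms = np.linalg.norm(A, axis=0)
        lam = soft(norms, tau)
        A = np.where(lam > 0, A * lam / (norms + 1e-12), 0.0)
        # Mode-1 and mode-2 updates, columns renormalized.
        T1 = np.moveaxis(T, 1, 0).reshape(J, -1)
        KR = (A[:, None, :] * C[None, :, :]).reshape(-1, R)
        B = T1 @ KR @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        B /= np.linalg.norm(B, axis=0) + 1e-12
        T2 = np.moveaxis(T, 2, 0).reshape(K, -1)
        KR = (A[:, None, :] * B[None, :, :]).reshape(-1, R)
        C = T2 @ KR @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        C /= np.linalg.norm(C, axis=0) + 1e-12
    return int(np.count_nonzero(np.linalg.norm(A, axis=0) > 1e-8))

# Rank-2 test tensor: the estimate should settle near 2 for suitable tau.
rng = np.random.default_rng(1)
T = sum(np.einsum("i,j,k->ijk", rng.standard_normal(6),
                  rng.standard_normal(5), rng.standard_normal(4))
        for _ in range(2))
print(l1_cp_rank_estimate(T))
```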

14.
This paper addresses the problem of data fragmentation when incorporating imbalanced categorical covariates in nonparametric survival models. The problem arises in an application of demand forecasting where certain categorical covariates are important explanatory factors for the diversity of survival patterns but are severely imbalanced in the sense that a large percentage of data segments defined by these covariates have very small sample sizes. Two general approaches, called the class‐based approach and the fusion‐based approach, are proposed to handle the problem. Both rely on judicious use of a data segment hierarchy defined by the covariates. The class‐based approach allows certain segments in the hierarchy to have their private survival functions and aggregates the others to share a common survival function. The fusion‐based approach allows all survival functions to borrow and share information from all segments based on their positions in the hierarchy. A nonparametric Bayesian estimator with Dirichlet process priors provides the data‐sharing mechanism in the fusion‐based approach. The hyperparameters in the priors are treated as fixed quantities and learned from data by taking advantage of the data segment hierarchy. The proposed methods are motivated and validated by a case study with real‐world data from a software development service operation.
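A minimal illustration of the class-based idea with Kaplan-Meier curves: segments with enough data keep a private survival function, the small ones share a pooled one. The sample-size threshold and the flat pooling rule are simplifications of the paper's hierarchy-guided aggregation.

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier estimator; times and events (1 = failure) are numpy arrays."""
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append((float(t), s))
    return surv

def class_based_survival(segments, min_n=30):
    """Segments with >= min_n observations get private curves; the rest are
    aggregated into one shared curve."""
    out, pooled_t, pooled_e = {}, [], []
    for name, (t, e) in segments.items():
        if len(t) >= min_n:
            out[name] = km_curve(t, e)
        else:
            pooled_t.append(t)
            pooled_e.append(e)
    if pooled_t:
        out["<shared>"] = km_curve(np.concatenate(pooled_t),
                                   np.concatenate(pooled_e))
    return out

rng = np.random.default_rng(0)
segs = {"big": (rng.exponential(2.0, 100), rng.integers(0, 2, 100)),
        "tiny": (rng.exponential(1.0, 5), rng.integers(0, 2, 5))}
print(list(class_based_survival(segs).keys()))   # ['big', '<shared>']
```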

15.
This paper presents the Galerkin approximation of the optimization problem for a system governed by a non‐linear second‐order evolution equation in which a non‐linear operator depends on the derivative of the state of the system. The control acts on a non‐linear equation. After giving some results on the existence of optimal controls, we shall prove the existence of weak and strong condensation points of the set of solutions of the approximate optimization problems. Each of these points is a solution of the initial optimization problem. Finally, we shall give a simple example using the obtained results. Copyright © 2004 John Wiley & Sons, Ltd.

16.
Operations Research Letters, 2014, 42(6–7): 432–437
We approximate as closely as desired the Pareto curve associated with bicriteria polynomial optimization problems. We use three formulations (including the weighted sum approach and the Chebyshev approximation), each of which is viewed as a parametric polynomial optimization problem. With each case is associated a hierarchy of semidefinite relaxations, and from an optimal solution of each relaxation one approximates the Pareto curve by solving an inverse problem (first two cases) or by building a polynomial underestimator (third case).
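A toy illustration of two of the three scalarizations; the objectives and the ideal point below are invented, and local solvers stand in for the paper's semidefinite relaxations of the parametric polynomial problems.

```python
import numpy as np
from scipy.optimize import minimize

# Bicriteria toy problem: minimize (f1, f2) over x in [0, 1].
f1 = lambda x: (x[0] - 0.2) ** 2
f2 = lambda x: (x[0] - 0.9) ** 2
z = np.array([0.0, 0.0])          # ideal point for the Chebyshev formulation

def pareto_points(n=11):
    pts = []
    for w in np.linspace(0.01, 0.99, n):
        # Weighted sum: min w*f1 + (1-w)*f2 (smooth, any local solver works).
        ws = minimize(lambda x: w * f1(x) + (1 - w) * f2(x),
                      x0=[0.5], bounds=[(0, 1)]).x
        # Chebyshev: min max(w*(f1-z1), (1-w)*(f2-z2)) (nonsmooth -> Powell).
        ch = minimize(lambda x: max(w * (f1(x) - z[0]),
                                    (1 - w) * (f2(x) - z[1])),
                      x0=[0.5], bounds=[(0, 1)], method="Powell").x
        pts.append((float(ws[0]), float(ch[0])))
    return pts

print(pareto_points()[:3])
```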

17.
This paper deals with the min-max version of the problem of selecting p items of minimum total weight out of a set of n items, where the item weights are uncertain. The discrete scenario representation of uncertainty is considered. The computational complexity of the problem is explored. A randomized algorithm for the problem is then proposed, which returns an O(ln K)-approximate solution with high probability, where K is the number of scenarios. This is the first approximation algorithm with a worst-case ratio better than K for the class of min-max combinatorial optimization problems with an unbounded scenario set.
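One natural way to realize such a randomized algorithm is LP relaxation plus randomized rounding; the sketch below follows that pattern but is not the paper's exact procedure or analysis.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_select(W, p, rounds=200, seed=0):
    """W[k, i] is the weight of item i under scenario k; pick p items
    minimizing the maximum scenario total. Solve the LP relaxation
    (min t s.t. W x <= t*1, sum x = p, 0 <= x <= 1), then round the
    fractional x randomly and keep the best selection found."""
    K, n = W.shape
    c = np.r_[np.zeros(n), 1.0]                # variables: x_1..x_n, t
    A_ub = np.c_[W, -np.ones(K)]               # W x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(K),
                  A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1), b_eq=[p],
                  bounds=[(0, 1)] * n + [(None, None)])
    x = np.clip(res.x[:n], 0.0, 1.0)
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(rounds):
        pick = rng.choice(n, size=p, replace=False, p=x / x.sum())
        val = W[:, pick].sum(axis=1).max()
        if val < best_val:
            best, best_val = sorted(pick.tolist()), val
    return best, float(best_val)

W = np.random.default_rng(2).uniform(size=(5, 12))   # 5 scenarios, 12 items
print(minmax_select(W, p=4))
```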

18.
We present a comparison of different multigrid approaches for the solution of systems arising from high‐order continuous finite element discretizations of elliptic partial differential equations on complex geometries. We consider the pointwise Jacobi, the Chebyshev‐accelerated Jacobi, and the symmetric successive over‐relaxation smoothers, as well as elementwise block Jacobi smoothing. Three approaches for the multigrid hierarchy are compared: (1) high‐order h‐multigrid, which uses high‐order interpolation and restriction between geometrically coarsened meshes; (2) p‐multigrid, in which the polynomial order is reduced while the mesh remains unchanged, and the interpolation and restriction incorporate the different‐order basis functions; and (3) a first‐order approximation multigrid preconditioner constructed using the nodes of the high‐order discretization. This latter approach is often combined with algebraic multigrid for the low‐order operator and is attractive for high‐order discretizations on unstructured meshes, where geometric coarsening is difficult. Based on a simple performance model, we compare the computational cost of the different approaches. Using scalar test problems in two and three dimensions with constant and varying coefficients, we compare the performance of the different multigrid approaches for polynomial orders up to 16. Overall, both h‐multigrid and p‐multigrid work well; the first‐order approximation is less efficient. For constant coefficients, all smoothers work well. For variable coefficients, Chebyshev and symmetric successive over‐relaxation smoothing outperform Jacobi smoothing. While all of the tested methods converge in a mesh‐independent number of iterations, none of them behaves completely independently of the polynomial order. When multigrid is used as a preconditioner in a Krylov method, the iteration number decreases significantly compared with using multigrid as a solver. Copyright © 2015 John Wiley & Sons, Ltd.
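For orientation, a self-contained two-grid cycle with weighted-Jacobi smoothing on a 1d Poisson problem, the simplest instance of the geometric h-multigrid building block whose high-order variants are compared above.

```python
import numpy as np

def poisson1d(n):
    """Standard 1d Poisson matrix on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(A, b, x, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: weighted-Jacobi smoothing, linear interpolation,
    Galerkin coarse operator, exact coarse solve."""
    D = np.diag(A)
    for _ in range(nu):                                   # pre-smoothing
        x = x + omega * (b - A @ x) / D
    n = len(b)
    nc = (n - 1) // 2
    P = np.zeros((n, nc))                                 # linear interpolation
    for j in range(nc):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    Ac = P.T @ A @ P                                      # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ (b - A @ x))    # coarse correction
    for _ in range(nu):                                   # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

n = 63                           # odd size so the coarse grid nests exactly
A, b, x = poisson1d(n), np.ones(n), np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x)
print(np.linalg.norm(b - A @ x))  # residual drops by orders of magnitude
```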

19.
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems, providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, duals of flow constraints, duals of multi-commodity flow constraints, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonally dominant matrices, and the nonlinear knapsack problem's constraint. All these problems, except for the last in integers, have polynomial time algorithms, which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems. In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization. MSC classification: 90C30, 68Q25
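The unscaled core of the proximity-scaling technique can be illustrated on separable convex resource allocation: always fund the cheapest next unit. The plain greedy loop below runs in O(B log n); scaling reduces the dependence on the budget B to log B, which the sketch does not implement.

```python
import heapq

def greedy_allocate(marginals, B):
    """Exact greedy for min sum_i f_i(x_i) s.t. sum_i x_i = B, x_i integer
    >= 0, each f_i convex. marginals[i](x) = f_i(x+1) - f_i(x) is
    nondecreasing in x, so funding the globally cheapest next unit is
    optimal at every step."""
    n = len(marginals)
    x = [0] * n
    heap = [(marginals[i](0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(B):
        _, i = heapq.heappop(heap)         # cheapest available unit
        x[i] += 1
        heapq.heappush(heap, (marginals[i](x[i]), i))
    return x

# Example: f_i(x) = c_i * x^2, so the marginal cost is c_i * (2x + 1).
costs = [1.0, 2.0, 3.0]
print(greedy_allocate([lambda x, c=c: c * (2 * x + 1) for c in costs], B=6))
```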

20.
The minimum weight design of structures made of fiber reinforced composite materials leads to a class of mixed‐integer optimization problems for which evolutionary algorithms (EA) are well suited. Based on these algorithms, the optimization tool package GEOPS has been developed at TU Dresden. For each design generated by an EA, the structural response has to be evaluated. This is often based on a finite element analysis, which results in a high computational complexity for each single design. Typical runs of an EA require the evaluation of thousands of designs, so an efficient approximation of the structural response can improve performance considerably. To achieve this aim, the constraints on the structural response are approximated by means of a support vector machine (SVM), which is trained on exact structural evaluations of selected design alternatives only. Several ways to enhance the efficiency of such an optimization procedure are presented. As an example of a typical aircraft structure, a stiffened composite panel under compressive and shear loading is considered. The SVM is trained on geometrical and material data. Representing the design space of composite panels by ABD matrices turned out to be a valuable means of obtaining well-trained SVMs. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
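A minimal sketch of the surrogate screening idea: train an SVM on exactly evaluated designs and let the EA filter candidates cheaply, reserving exact analyses for promising ones. The feasibility oracle exact_feasible below is a hypothetical stand-in for the finite element analysis, and the EA loop is reduced to a single screening step.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def exact_feasible(d):
    """Hypothetical placeholder for the expensive FE-based constraint check."""
    return float(d @ d <= 1.0)

# Train the surrogate on a modest set of exactly evaluated designs.
X = rng.uniform(-1.5, 1.5, size=(200, 4))
y = np.array([exact_feasible(d) for d in X])
surrogate = SVC(kernel="rbf", C=10.0).fit(X, y)

# One EA generation: screen candidates with the cheap surrogate first.
candidates = rng.uniform(-1.5, 1.5, size=(1000, 4))
likely_ok = candidates[surrogate.predict(candidates) == 1.0]
print(f"{len(likely_ok)} of 1000 candidates pass the surrogate screen")
```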
