Similar Documents (20 results)
1.
We define new complexity classes in the Blum–Shub–Smale theory of computation over the reals, in the spirit of the polynomial hierarchy, with the help of infinitesimal and generic quantifiers. Basic topological properties of semialgebraic sets like boundedness, closedness, compactness, as well as the continuity of semialgebraic functions are shown to be complete in these new classes. All attempts to classify the complexity of these problems in terms of the previously studied complexity classes have failed. We also obtain completeness results in the Turing model for the corresponding discrete problems. In this setting, it turns out that infinitesimal and generic quantifiers can be eliminated, so that the relevant complexity classes can be described in terms of the usual quantifiers only.

2.
Some Counting Problems for Key Codes
This paper considers several counting problems for key codes and gives explicit expressions for them; these results improve on the corresponding results of [1] and [2].

3.
Rice's Theorem says that every nontrivial semantic property of programs is undecidable. In this spirit we show the following: every nontrivial absolute (gap, relative) counting property of circuits is UP-hard with respect to polynomial-time Turing reductions. For generators [31] we show a perfect analogue of Rice's Theorem.

4.
Graph sandwich problems were introduced by Golumbic et al. (1994) in [12] for DNA physical mapping problems and can be described as follows. Given a property Π of graphs and two sets of edges E1, E2 with E1 ⊆ E2 on a vertex set V, the problem is to find a graph G on V with edge set Es having property Π and such that E1 ⊆ Es ⊆ E2.
In this paper, we exhibit a quasi-linear reduction between the problem of finding an independent set of size k ≥ 2 in a graph and the problem of finding a sandwich homogeneous set of the same size k. Using this reduction, we prove that a number of natural (decision and counting) problems related to sandwich homogeneous sets are hard in general. We then exploit the reduction a little further and show that finding efficient algorithms to compute small sandwich homogeneous sets would imply a substantial improvement for computing triangles in graphs.
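To make the sandwich setting concrete, here is a minimal sketch (our own illustration, not the paper's algorithm) that tests whether a candidate vertex set H is a sandwich homogeneous set. It uses the natural characterization: every vertex outside H must either be completable to full adjacency with H (all pairs allowed by E2) or to full non-adjacency (no pair forced by E1). The function name and input encoding are illustrative assumptions.

```python
def is_sandwich_homogeneous(V, E1, E2, H):
    """Can some sandwich graph Es with E1 <= Es <= E2 make H homogeneous?

    A vertex v outside H can see all of H iff every pair (v, h) is in E2,
    and can see none of H iff no pair (v, h) is in E1; H works iff every
    outside vertex admits at least one of the two options.
    """
    E1 = {frozenset(e) for e in E1}
    E2 = {frozenset(e) for e in E2}
    for v in V - H:
        all_ok = all(frozenset((v, h)) in E2 for h in H)
        none_ok = all(frozenset((v, h)) not in E1 for h in H)
        if not (all_ok or none_ok):
            return False
    return True

# Small example: H = {1,2}; vertex 3 can be made fully adjacent,
# vertex 4 can be made fully non-adjacent.
ok = is_sandwich_homogeneous({1, 2, 3, 4}, {(1, 2)},
                             {(1, 2), (1, 3), (2, 3)}, {1, 2})
```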

5.
We discuss the complexity of a class of highly structured global optimization problems, namely the maximization of separable functions, with each one-dimensional component convex and nondecreasing, over polytopes defined by a 0-1 constraint matrix with at most two variables involved in each constraint. In particular, we prove some inapproximability and approximability results.

6.
Stochastic Global Optimization: Problem Classes and Solution Techniques
There is a lack of a representative set of test problems for comparing global optimization methods. To remedy this, a classification of essentially unconstrained global optimization problems into unimodal, easy, moderately difficult, and difficult problems is proposed. The features driving this classification are the chance of missing the region of attraction of the global minimum, the embeddedness of the global minimum, and the number of minimizers. The classification of some often-used test problems is given, and it turns out that most of them are easy and some even unimodal. The global optimization solution techniques treated are global, local, and adaptive search, and their use for tackling different classes of problems is discussed. The problem of fair comparison of methods is then addressed. Finally, possible components of a general global optimization tool based on the problem classes and solution techniques are presented.
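The "chance to miss the region of attraction" is exactly what multistart local search trades against cost. A minimal illustrative sketch (the two-well test function, the crude numerical-gradient descent, and all parameters are our own assumptions, not from the paper):

```python
import random

def f(x):
    # Two local minima; the global one sits near x = -1.04.
    return (x * x - 1.0) ** 2 + 0.3 * x

def local_descent(x, step=1e-3, iters=20000):
    # Crude gradient descent with a central-difference derivative.
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def multistart(n_starts, seed=0):
    # Each random start converges to the minimum of its own basin;
    # keeping the best start hedges against missing the global basin.
    rng = random.Random(seed)
    candidates = [local_descent(rng.uniform(-2.0, 2.0))
                  for _ in range(n_starts)]
    return min(candidates, key=f)

best = multistart(10)
```

With a single start the right-hand basin (a merely local minimum) is hit roughly half the time; ten starts make missing the global basin very unlikely, which is the classification's point about regions of attraction.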

7.
We present a new generic minimum cross-entropy method, called the semi-iterative MinxEnt, or simply SME, for rare-event probability estimation, counting, and approximation of the optimal solutions of a broad class of NP-hard linear integer and combinatorial optimization problems (COPs). The main idea of our approach is to associate with each original problem an auxiliary single-constrained convex MinxEnt program of a special type, which has a closed-form solution. We prove that the optimal pdf obtained from the solution of such a specially designed MinxEnt program is a zero-variance pdf, provided the “temperature” parameter is set to minus infinity. In addition, we prove that the parametric pdf based on the product of marginals obtained from the optimal zero-variance pdf coincides with the parametric pdf of the standard cross-entropy (CE) method. Thus CE, originally designed at the end of the 1990s as a heuristic for the estimation of rare events and for COPs, has a strong connection with MinxEnt and hence a solid mathematical foundation. The crucial difference between the proposed SME method and its standard CE counterparts lies in their simulation-based versions: in the latter one must always generate (via Monte Carlo) a sequence of tuples comprising the temperature parameter and the parameter vector of the optimal marginal pdfs, while in the former the temperature parameter can be fixed in advance (set to a large negative number) and only a sequence of parameter vectors of the optimal marginal pdfs need be generated (via Monte Carlo). In addition, in contrast to CE, neither the elite sample nor the rarity parameter is needed in SME. As a result, the proposed SME algorithm is simpler, faster, and at least as accurate as standard CE. Motivated by the SME method, we introduce a new updating rule for the parameter vector in the parametric pdf of the CE program.
We show that the CE algorithm based on the new updating rule, called the combined CE (CCE), is at least as fast and accurate as its standard CE and SME counterparts. We also found numerically that the variance minimization (VM)-based algorithms are the most robust ones. Finally, we demonstrate numerically that the proposed algorithms, and in particular the CCE one, allow accurate estimation of counting quantities for problems with up to hundreds of decision variables and hundreds of constraints. This research was supported by the Israel Science Foundation (grant No. 191-565).
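For orientation, here is a toy run of the standard CE mechanics that the paper contrasts with SME — elite samples and smoothed marginal updates, exactly the ingredients SME dispenses with. The objective (onemax over binary vectors), sample sizes, and smoothing constant are our own illustrative choices:

```python
import random

def cross_entropy_onemax(n=10, samples=100, rho=0.1, alpha=0.7,
                         iters=40, seed=0):
    """Standard CE maximizing f(x) = sum(x) over {0,1}^n.

    Each iteration samples from independent Bernoulli marginals p,
    keeps the elite rho-fraction of samples, and moves p toward the
    elite componentwise means (smoothed by alpha).
    """
    rng = random.Random(seed)
    p = [0.5] * n
    best, best_score = None, -1
    n_elite = max(1, int(rho * samples))
    for _ in range(iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(samples)]
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) > best_score:
            best, best_score = pop[0], sum(pop[0])
        elite = pop[:n_elite]
        for i in range(n):
            m = sum(x[i] for x in elite) / n_elite
            p[i] = alpha * m + (1 - alpha) * p[i]  # smoothed update
    return best, best_score

best, score = cross_entropy_onemax()
```

The marginals quickly concentrate on the optimum; SME's point is that the elite fraction rho and the per-iteration tuning can be avoided by fixing the temperature parameter in advance.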

8.
A simple variational approach to the estimation of time series is studied in detail and with mathematical rigor. The functional in question is a complexity-penalized sum of squares. The results include existence and uniqueness of the statistical estimate, as well as continuous and stable dependence on parameters and data. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
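For piecewise-constant estimates, a complexity-penalized sum of squares of this kind (the Potts functional: squared error plus a penalty per segment) can be minimized exactly by dynamic programming over segment boundaries. A minimal sketch of that standard DP, as our own illustration rather than the authors' method:

```python
def potts_estimate(y, gamma):
    """Minimize sum((y_i - est_i)^2) + gamma * (#segments) exactly,
    over piecewise-constant estimates, via DP on right endpoints."""
    n = len(y)
    prefix, prefix2 = [0.0], [0.0]
    for v in y:
        prefix.append(prefix[-1] + v)
        prefix2.append(prefix2[-1] + v * v)

    def err(l, r):
        # squared error of approximating y[l:r] by its mean
        s, s2, m = prefix[r] - prefix[l], prefix2[r] - prefix2[l], r - l
        return s2 - s * s / m

    B = [0.0] + [float("inf")] * n   # B[r] = best penalized cost of y[:r]
    jump = [0] * (n + 1)             # left endpoint of the last segment
    for r in range(1, n + 1):
        for l in range(r):
            c = B[l] + gamma + err(l, r)
            if c < B[r]:
                B[r], jump[r] = c, l

    est, r = [0.0] * n, n            # backtrack and fill segment means
    while r > 0:
        l = jump[r]
        mean = (prefix[r] - prefix[l]) / (r - l)
        for i in range(l, r):
            est[i] = mean
        r = l
    return est

est = potts_estimate([0, 0, 0, 5, 5, 5], 0.5)
```

The double loop gives O(n^2) time; the penalty gamma plays the role of the complexity weight in the penalized functional.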

9.
Consider a smooth manifold with smooth (0, 2)-tensor which changes bilinear type on a hypersurface. We show that there are natural tensors on this hypersurface which control the smooth extension of sectional, Ricci, and scalar curvature. This enables us to adapt the classical characteristic class construction to a large collection of manifolds with such singular pseudo-Riemannian metrics.

10.
This paper investigates two problems related to the determination of critical edges for the minimum cost assignment problem. Given a complete bipartite balanced graph with n vertices on each part and with costs on its edges, k Most Vital Edges Assignment consists of determining a set of k edges whose removal results in the largest increase in the cost of a minimum cost assignment. A dual problem, Min Edge Blocker Assignment, consists of removing a subset of edges of minimum cardinality such that the cost of a minimum cost assignment in the remaining graph is larger than or equal to a specified threshold. We show that k Most Vital Edges Assignment is NP-hard to approximate within a factor c < 2 and Min Edge Blocker Assignment is NP-hard to approximate within a factor 1.36. We also provide an exact algorithm for k Most Vital Edges Assignment that runs in O(n^{k+2}) time. This algorithm can also be used to solve Min Edge Blocker Assignment exactly.
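The problem itself is easy to state in code. The sketch below is a hedged brute force straight from the definition — enumerate k-subsets of edges, block them with a large cost, and re-solve the assignment — not the paper's O(n^{k+2}) algorithm; the cost matrix and k are illustrative:

```python
import itertools

def assignment_cost(c):
    # Minimum-cost perfect matching by brute force over permutations
    # (fine for tiny n; a real solver would use the Hungarian method).
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def most_vital_edges_value(c, k, big=10**6):
    """Max, over all k-subsets of removed edges, of the resulting
    minimum assignment cost (removal simulated by a huge cost)."""
    n = len(c)
    edges = [(i, j) for i in range(n) for j in range(n)]
    worst = float("-inf")
    for removed in itertools.combinations(edges, k):
        cc = [row[:] for row in c]
        for i, j in removed:
            cc[i][j] = big
        worst = max(worst, assignment_cost(cc))
    return worst

# Removing any diagonal edge of this matrix forces two off-diagonal
# picks, raising the optimum from 3 to 21.
C = [[1, 10, 10], [10, 1, 10], [10, 10, 1]]
val = most_vital_edges_value(C, k=1)
```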

11.
12.
In this paper we consider the problem of no-wait cyclic scheduling of identical parts in an m-machine production line in which a robot is responsible for moving each part from one machine to another. The aim is to find the minimum cycle time for the so-called 2-cyclic schedules, in which exactly two parts enter and two parts leave the production line during each cycle. The earlier known polynomial-time algorithms for this problem are applicable only under the additional assumption that the robot travel times satisfy the triangle inequalities. We lift this assumption on robot travel times and present a polynomial-time algorithm with the same time complexity as in the metric case, O(m^5 log m).

13.
In this paper, we investigate how complexity theory can benefit collaboration by applying an agent-based computer simulation approach to a new form of synchronous real-time collaborative engineering design. Fieldwork was conducted with a space mission design team during their actual design sessions to collect data on their group conversations, team interdependencies, and error monitoring and recovery practices. Based on the fieldwork analysis, an agent-based simulator was constructed. The simulation shows how error recovery and monitoring are affected by the number of small-group, or sidebar, conversations and the consequent noise in the room environment. It also shows that it is possible to create a virtual environment with cooperating agents interacting in a dynamic environment. This simulation approach is useful for identifying the best scenarios and eliminating potentially catastrophic combinations of parameters and values where error recovery and workload in collaborative engineering design could be significantly impacted. It is also useful for defining strategies for integrating solutions into organizations.
Narjès Bellamine-Ben Saoud is an Associate Professor at the University of Tunis and a Researcher at the RIADI-GDL Laboratory, Tunisia. After receiving her Computer Science engineering diploma (1993) from the ENSEEIHT of Toulouse, France, she received her PhD (1996), on groupware design applied to the study of cooperation within a space project, from the University of Toulouse I, France. Her main research interests concern the study of complex systems, particularly by modeling and simulating collaborative and socio-technical systems; developing Computer Supported Collaborative Learning in Tunisian primary schools; and software engineering. Her current research projects include modeling and simulation of emergency rescue activities for large-scale accidents, modeling of epidemics and the study of malaria, and simulation of collaboration artifacts.
Gloria Mark is an Associate Professor in the Department of Informatics, University of California, Irvine. Dr. Mark received her Ph.D. in Psychology from Columbia University in 1991. Prior to UCI, she was a Research Scientist at the GMD in Bonn, Germany, a visiting research scientist at the Boeing Company, and a research scientist at the Electronic Data Systems Center for Advanced Research. Dr. Mark's research focuses on the design and evaluation of collaborative systems. Her current projects include studying worklife in the network-centric organization, multi-tasking of information workers, nomad workers, and work in a large-scale medical collaboratory. Dr. Mark is widely published in the fields of CSCW and HCI, is currently the program co-chair for the ACM CSCW'06 conference, and is on the editorial board of Computer Supported Cooperative Work: The Journal of Collaborative Computing and e-Service Qu@rterly.

14.
Let 𝓛 = ⟨F, R, ρ⟩ be a system language. Given a class of 𝓛-systems K and an 𝓛-algebraic system A = ⟨SEN, ⟨N, F⟩⟩, i.e., a functor SEN : Sign → Set, with N a category of natural transformations on SEN and F : F → N a surjective functor preserving all projections, define the collection K_A of A-systems in K as the collection of all members of K of the form 𝔄 = ⟨⟨SEN, ⟨N, F⟩⟩, R^𝔄⟩, for some set of relation systems R^𝔄 on SEN. Taking after work of Czelakowski and Elgueta in the context of the model theory of equality-free first-order logic, several relationships between closure properties of the class K, on the one hand, and local properties of K_A and global properties connecting K_A and K_{A′}, whenever there exists an 𝓛-morphism ⟨F, α⟩ : A → A′, on the other, are investigated. In the main result of the article, it is shown, roughly speaking, that K_A is an algebraic closure system, for every 𝓛-algebraic system A, provided that K is closed under subsystems and reduced products.

15.
This article is an immediate continuation of [1]. Solution of the Lyapunov equation leads to a boundary value problem for the first-order hyperbolic equations in two variables with data on the boundary of the unit square. In general, the problems of this kind are not normally solvable. We prove that the boundary value problems in question possess the Fredholm property under some conditions.

16.
We solve the inverse spectral problem of recovering the singular potential from W_2^{-1}(0,1) of a Sturm-Liouville operator from its spectra on the three intervals [0,1], [0,a], and [a,1] for some a ∈ (0,1). Necessary and sufficient conditions on the spectral data are derived, and uniqueness of the solution is analyzed.

17.
A new algorithm for the computation of eigenvalues of a nonsymmetric matrix pencil is described. It is a generalization of the shifted and inverted Lanczos (or Arnoldi) algorithm, in which several shifts are used in one run. It computes an orthogonal basis and a small Hessenberg pencil. The eigensolution of the Hessenberg pencil gives Ritz approximations to the solution of the original pencil. It is shown how complex shifts can be used to compute a real block Hessenberg pencil for a real matrix pair. Two applications, one coming from an aircraft stability problem and the other from a hydrodynamic bifurcation, have been tested and results are reported. Dedicated to Carl-Erik Fröberg on the occasion of his 75th birthday.
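The core idea behind shift-and-invert methods is easy to illustrate: eigenvalues λ of the pencil (A, B) near a shift σ become dominant eigenvalues 1/(λ − σ) of (A − σB)^{-1}B. A minimal numpy sketch using plain power iteration in place of the paper's Lanczos/Arnoldi machinery (the matrices and shift are our own illustrative choices):

```python
import numpy as np

def shift_invert_eig(A, B, sigma, iters=200, seed=0):
    """Approximate the eigenvalue of A x = lam * B x closest to sigma
    by power iteration on inv(A - sigma * B) @ B."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # One inversion (in practice: one LU factorization) reused every step.
    op = np.linalg.inv(A - sigma * B) @ B
    x = rng.standard_normal(n)
    for _ in range(iters):
        x = op @ x
        x /= np.linalg.norm(x)
    mu = x @ (op @ x)          # Rayleigh quotient for the inverted operator
    return sigma + 1.0 / mu    # map back: mu = 1 / (lam - sigma)

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
B = np.eye(5)
lam = shift_invert_eig(A, B, sigma=3.8)
```

The multi-shift algorithm of the paper amortizes several such factorizations over one Krylov run; power iteration here only recovers the single eigenvalue nearest the shift.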

18.
This paper is concerned with two well-known families of iterative methods for solving the linear and nonlinear complementarity problems. For the linear complementarity problem, we consider the class of matrix splitting methods and establish, under a finiteness assumption on the number of solutions, a necessary and sufficient condition for the convergence of the sequence of iterates produced. A rate of convergence result for this class of methods is also derived under a stability assumption on the limit solution. For the nonlinear complementarity problem, we establish the convergence of the Newton method under the assumption of a pseudo-regular solution which generalizes Robinson's concept of a strongly regular solution. In both instances, the convergence proofs rely on a common sensitivity result of the linear complementarity problem under perturbation. This work was based on research supported by the National Science Foundation under grant ECS-8717968.
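One classical member of the matrix-splitting family is projected Gauss-Seidel for LCP(q, M): find z ≥ 0 with Mz + q ≥ 0 and zᵀ(Mz + q) = 0. A minimal sketch under the convenient (sufficient, and stronger than the paper's finiteness assumption) hypothesis that M is symmetric positive definite; the example data are our own:

```python
def projected_gauss_seidel(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: z >= 0, Mz + q >= 0,
    z'(Mz + q) = 0. Each sweep solves the one-dimensional
    complementarity condition in coordinate i in closed form."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])   # projection onto z_i >= 0
    return z

# LCP with M = [[2,1],[1,2]], q = [-1,-1]; the solution is z = (1/3, 1/3),
# where Mz + q = 0 holds with equality.
z = projected_gauss_seidel([[2.0, 1.0], [1.0, 2.0]], [-1.0, -1.0])
```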

19.
The well-known method of Iterated Defect Correction (IDeC) is based on the following idea: Compute a simple, basic approximation and form its defect w.r.t. the given ODE via a piecewise interpolant. This defect is used to define an auxiliary, neighboring problem whose exact solution is known. Solving the neighboring problem with the basic discretization scheme yields a global error estimate. This can be used to construct an improved approximation, and the procedure can be iterated. The fixed point of such an iterative process corresponds to a certain collocation solution. We present a variety of modifications to this algorithm. Some of these have been proposed only recently, and together they form a family of iterative techniques, each with its particular advantages. These modifications are based on techniques like defect quadrature (IQDeC), defect interpolation (IPDeC), and combinations thereof. We investigate the convergence on locally equidistant and nonequidistant grids and show how superconvergent approximations can be obtained. Numerical examples illustrate our considerations. The application to stiff initial value problems will be discussed in Part II of this paper.

20.
As shown in part I of this paper and references therein, the classical method of Iterated Defect Correction (IDeC) can be modified in several nontrivial ways, extending the flexibility and range of applications of this approach. The essential point is an adequate definition of the defect, resulting in a significantly more robust convergence behavior of the IDeC iteration, in particular, for nonequidistant grids. The present part II is devoted to the efficient high-order integration of stiff initial value problems. By means of model problem investigation and systematic numerical experiments with a set of stiff test problems, our new versions of defect correction are systematically evaluated, and further algorithmic measures are proposed for the stiff case. The performance of the different variants under consideration is compared, and it is shown how strong coupling between non-stiff and stiff components can be successfully handled. AMS subject classification 65L05. Supported by the Austrian Research Fund (FWF) grant P-15030.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号