Similar Articles
20 similar articles found.
1.
DNA strand displacement and fluorescent labeling are methods that have emerged in biological computing in recent years, and both have become common tools in DNA computing because they are simple to operate. The DNA self-assembly algorithm is a novel computational paradigm that uses DNA molecules for data storage and computation. To improve the specificity of the algorithm and the sensitivity of detection, we combine DNA strand displacement and fluorescent labeling and introduce them, for the first time, into the self-assembly model, obtaining a new DNA computing model for solving the 0-1 programming problem. Compared with previous DNA computing models, this model improves the reliability and accuracy of the computation; it can progressively narrow the solution space, reducing computational complexity; and it makes the detection method more flexible and easy to carry over to other self-assembly algorithm models.
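The chemistry itself cannot be shown in code, but the pruning idea the abstract describes can be. Below is a minimal, purely illustrative Python sketch of a toy 0-1 programming instance (the constraints are hypothetical) in which the solution space is narrowed constraint by constraint, analogous to the stepwise elimination of non-solutions in the model:

```python
from itertools import product

# Toy 0-1 programming instance (hypothetical, for illustration only):
# maximize x1 + x2 + x3  subject to  x1 + x2 <= 1  and  x2 + x3 <= 1.
constraints = [lambda x: x[0] + x[1] <= 1,
               lambda x: x[1] + x[2] <= 1]

# Start from the full 2^n solution space and narrow it one constraint
# at a time, mimicking the stepwise pruning described in the abstract.
solutions = list(product((0, 1), repeat=3))
for c in constraints:
    solutions = [x for x in solutions if c(x)]

best = max(solutions, key=sum)
print(best, sum(best))  # (1, 0, 1) with objective value 2
```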

2.
Design of survivable IP-over-optical networks
In recent years, telecommunications networks have seen an important evolution with the advances in optical technologies and the explosive growth of the Internet. Several optical systems allow a very large transport capacity, and data traffic has dramatically increased. Telecommunications networks are now moving towards a model of high-speed routers interconnected by intelligent optical core networks. Moreover, there is a general consensus that the control plane of the optical networks should utilize IP-based protocols for dynamic provisioning and restoration of lightpaths. The interaction of the IP routers with the optical core networks makes it possible to achieve end-to-end connections, and the lightpaths of the optical networks define the topology of the IP network. This new infrastructure has to be sufficiently survivable, so that network services can be restored in the event of a catastrophic failure. In this paper we consider a multilayer survivable network design problem that may be of practical interest for IP-over-optical networks. We give an integer programming formulation for this problem and discuss the associated polytope. We describe some valid inequalities and study when these are facet defining. We discuss separation algorithms for these inequalities and introduce some reduction operations. We develop a Branch-and-Cut algorithm based on these results and present extensive computational results.
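As a small illustration of the survivability requirement itself (not the paper's polyhedral or Branch-and-Cut machinery), the standard-library sketch below checks that a topology stays connected under any single link failure:

```python
from collections import deque

def connected(nodes, edges):
    """Return True if the undirected graph (nodes, edges) is connected."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == len(nodes)

def survives_single_failure(nodes, edges):
    """Survivable in this toy sense: still connected after any one edge fails."""
    return all(connected(nodes, edges[:i] + edges[i + 1:])
               for i in range(len(edges)))

ring = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(survives_single_failure({1, 2, 3, 4}, ring))  # True: a ring is 2-edge-connected
```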

3.
《Journal of Complexity》1998,14(1):102-121
The real-number model of computation is used in computational geometry, in the approach suggested by Blum, Shub, and Smale, and in information-based complexity. In this paper we propose a refinement of this model, the TTE-model of computation. In contrast to the real-number model, which is unrealistic or at least too optimistic, the TTE-model is very realistic; i.e., for TTE-algorithms there are digital computers which behave exactly the same way as predicted by the theoretical model. We start with a detailed discussion of some objections to the real-number model. We introduce the refined model by adding the condition "every partial input or output information of an algorithm is finite" to the assumptions of the IBC-model of computation. First, we explain computability and computational complexity in TTE for the simple case of real functions. Then we apply the refined model to two typical IBC-problems, integration and zero-finding on the space C[0, 1] of continuous real functions. We study computability and computational complexity and compare TTE-results with IBC-results. Finally, we briefly discuss the computation of discontinuous problems. This paper does not present new technical results but should be considered as a fresh impetus to reflect on models of computation for numerical analysis and as an invitation to try out the TTE-model of computation in information-based complexity.

4.
Subtraction-free computational complexity is the version of arithmetic circuit complexity that allows only three operations: addition, multiplication, and division. We use cluster transformations to design efficient subtraction-free algorithms for computing Schur functions and their skew, double, and supersymmetric analogues, thereby generalizing earlier results by P. Koev. We develop such algorithms for computing generating functions of spanning trees, both directed and undirected. A comparison to the lower bound due to M. Jerrum and M. Snir shows that in subtraction-free computations, "division can be exponentially powerful." Finally, we give a simple example where the gap between ordinary and subtraction-free complexity is exponential.

5.
One of the key computational problems in Bayesian networks is computing the maximal posterior probability of a set of variables in the network, given an observation of the values of another set of variables. In its simplest form, this problem is known as the MPE-problem. In this paper, we give an overview of the computational complexity of many problem variants, including enumeration variants, parameterized problems, and approximation strategies to the MPE-problem with and without additional (neither observed nor explained) variables. Many of these complexity results appear elsewhere in the literature; other results have not been published yet. The paper aims to provide a fairly exhaustive overview of both the known and new results.
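To make the MPE definition concrete, here is a brute-force sketch (not one of the algorithms surveyed) on a hypothetical two-variable network A -> B with evidence on B:

```python
# Hypothetical two-node network A -> B with binary variables.
p_a = {0: 0.4, 1: 0.6}                       # P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}

def joint(a, b):
    """Joint probability P(A = a, B = b) via the chain rule."""
    return p_a[a] * p_b_given_a[a][b]

# MPE: most probable assignment of the unobserved variable A
# given the evidence B = 1, found here by exhaustive enumeration.
evidence_b = 1
best_a = max((0, 1), key=lambda a: joint(a, evidence_b))
print(best_a)  # A = 1, since 0.6 * 0.8 > 0.4 * 0.1
```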

6.
The complexity of a computational problem is the order of computational resources which are necessary and sufficient to solve the problem. The algorithm complexity is the cost of a particular algorithm. We say that a problem has polynomial complexity if its computational complexity is a polynomial in the measure of input size. We introduce polynomial-time algorithms based on generating functions for computing the Myerson value in weighted voting games restricted by a tree. Moreover, we apply the new generating-function algorithm to computing the Myerson value in the Council of Ministers of the European Union restricted by a communication structure.
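For intuition about the generating-function technique, the sketch below applies it to the simpler, unrestricted swing counts of the Banzhaf type rather than the tree-restricted Myerson value the paper treats; the voting game is a hypothetical example:

```python
def swing_counts(weights, quota):
    """Swing counts via the generating function prod_{j != i} (1 + x^{w_j});
    coeff[k] counts the coalitions of total weight k not containing i."""
    swings = []
    for i, wi in enumerate(weights):
        coeff = [0] * (sum(weights) + 1)
        coeff[0] = 1
        for j, wj in enumerate(weights):
            if j == i:
                continue
            # multiply the polynomial by (1 + x^{w_j}), in place
            for k in range(len(coeff) - 1 - wj, -1, -1):
                coeff[k + wj] += coeff[k]
        # player i is critical for coalitions of weight quota - w_i .. quota - 1
        swings.append(sum(coeff[max(0, quota - wi):quota]))
    return swings

print(swing_counts([4, 3, 2, 1], quota=6))  # [5, 3, 3, 1]
```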

7.
We study the computational complexity of the Spare Capacity Allocation problem arising in optical networks that use a shared mesh restoration scheme. In this problem we are given a network with edge capacities and point-to-point demands, and the goal is to allocate two edge-disjoint paths for each demand (a working path and a so-called restoration path, which is activated only if the working path fails) so that the capacity constraints are satisfied and the total cost of the used and reserved bandwidth is minimized. We focus on the setting where we deal with a group of demands together, and select their restoration paths simultaneously in order to minimize the total cost. We investigate how the computational complexity of this problem is affected by certain parameters, such as the number of restoration paths to be selected, or the treewidth of the network graph. To analyze the complexity of the problem, we introduce a generalization of the Steiner Forest problem that we call Multicost Steiner Subgraph. We study its parameterized complexity, and identify computationally easy and hard cases by providing hardness proofs as well as efficient (fixed-parameter tractable) algorithms.
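For concreteness, a small sketch of the working/restoration path structure using networkx (assumed available). Extracting two edge-disjoint paths for a single demand is the easy building block; the hardness the paper studies lies in jointly minimizing cost over groups of demands under capacity constraints:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")])

# Two edge-disjoint s-t paths: one can serve as the working path,
# the other is reserved as the restoration path.
paths = list(nx.edge_disjoint_paths(G, "s", "t"))
working, restoration = paths[0], paths[1]
print(working, restoration)
```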

8.
This paper deals with strategic capacity planning of a single-site manufacturing system. We propose a MILP model that includes relevant business aspects and possibilities, some of which are only partially or not at all found in the literature. Specifically, we consider decisions on expansion, reduction and renewal of production capacity, and acquisition of storage capacity. In addition, we model aspects such as (a) maintenance costs and unit variable costs depending, respectively, on age and characteristics of facilities, (b) seasonality of the demand and (c) cash flow management, including taxes and, therefore, depreciation of assets. The model maximises the after-tax cash balance at the end of the planning horizon. We also present a computational experiment with 54 instances to show that the model can be solved for a wide range of sizes in a reasonable computing time using commercial software.
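A heavily simplified sketch of such a capacity-planning MILP in PuLP (assumed available), with hypothetical data and without the renewal, maintenance, tax, and depreciation features the paper models:

```python
import pulp

T = range(4)                      # toy planning horizon
demand = [80, 120, 150, 100]      # hypothetical seasonal demand
cap0, exp_cost, margin = 100, 50, 6   # initial capacity, cost per unit of
                                      # new capacity, profit per unit sold

prob = pulp.LpProblem("capacity_planning", pulp.LpMaximize)
expand = [pulp.LpVariable(f"expand_{t}", lowBound=0) for t in T]
produce = [pulp.LpVariable(f"produce_{t}", lowBound=0) for t in T]

# objective: sales margin minus expansion cost over the horizon
prob += pulp.lpSum(margin * produce[t] - exp_cost * expand[t] for t in T)

for t in T:
    # capacity available in period t = initial capacity + expansions so far
    prob += produce[t] <= cap0 + pulp.lpSum(expand[: t + 1])
    prob += produce[t] <= demand[t]   # cannot sell more than the demand

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([v.value() for v in expand], [v.value() for v in produce])
```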

9.
《Journal of Complexity》1997,13(1):83-107
This paper is devoted to the study of lower bounds on the inherent number of additions and subtractions necessary to solve some natural matrix computational tasks, such as computing the nullspace, some band transformation, and some triangulation of a given m×m matrix. The additive complexities of such tasks are shown to grow asymptotically like that of m×m matrix multiplication. The paper is a continuation of an earlier paper by the authors, and also of [4], where multiplicative complexity has been considered. We also propose a formalization of semialgebraic computational tasks.

10.
Building on existing fast high-precision algorithms for the elementary functions, we study acceleration algorithms for these functions. The existing fast high-precision algorithms compute an elementary function to arbitrary precision via its power-series expansion. The acceleration algorithm, before the power-series expansion is carried out, exploits various properties of the function to reduce its argument, which lowers the computational effort of the series expansion and increases the computation speed. We present the acceleration algorithm and analyze it with respect to both computational error and algorithmic complexity, deriving the optimal acceleration scheme with the smallest error and the lowest complexity. Then, for the trigonometric functions, hyperbolic functions, exponential function, and their inverses, we give the concrete acceleration procedures and computational results over the real numbers.
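The argument-reduction idea translates directly into code. Here is a minimal sketch for the exponential function, using the identity exp(x) = exp(x / 2^k)^(2^k) to shrink the argument before the power-series expansion (the specific choice of k and the term count are illustrative, not the paper's optimal scheme):

```python
import math

def exp_taylor(x, terms=20):
    """Plain Taylor series for exp(x); accurate only for small |x|."""
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        s += term
    return s

def exp_reduced(x, k=8, terms=20):
    """Argument reduction: exp(x) = exp(x / 2**k) ** (2**k).
    The series converges fast on the tiny reduced argument, and the
    result is recovered by k repeated squarings."""
    y = exp_taylor(x / 2**k, terms)
    for _ in range(k):
        y *= y
    return y

print(exp_reduced(10.0), math.exp(10.0))  # both ~22026.4657948
```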

11.
It is now well-recognized that we are witnessing a golden age of innovation with novel materials, with discoveries important for both basic science and device applications, some of which will be treated at this Workshop. In this talk, we discuss the role of computation and simulation in the dramatic advances of the past and those we are witnessing today. We will also describe the growing acceptance and impact of computational materials science as a major component of materials research and its import for the future. In the process, we will demonstrate how the well-recognized goal driving computational physics and computational materials science, namely simulations of ever-increasing complexity on more and more realistic models, has been brought into greater focus by the ready availability of greater computing power to run sophisticated and powerful software codes like our highly precise full-potential linearized augmented plane wave (FLAPW) method, now also running on massively parallel computer platforms. We will then describe some specific advances we are witnessing today, and computation and simulation as a major component of quantum materials design and its import for the future, with the goal of synthesizing materials with desired properties in a controlled way via materials engineering on the atomic scale. The theory continues to develop along with computing power. With the universality and applicability of these methods to essentially all materials and properties, these simulations are starting to fill the increasingly urgent demands of materials scientists and engineers.

12.
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this article, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study, we demonstrate our improvements to numerical stability and computational complexity.
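For a single constant-size population, the expected SFS has a classical closed form, E[xi_i] = theta / i, which makes a useful sanity check; the paper's contribution is the far harder multi-population case. A minimal sketch:

```python
def expected_sfs(n, theta):
    """Expected site frequency spectrum for a sample of size n under the
    standard neutral coalescent with constant population size:
    E[xi_i] = theta / i for i = 1, ..., n - 1 (classical closed form)."""
    return [theta / i for i in range(1, n)]

sfs = expected_sfs(n=10, theta=2.0)
print([round(v, 3) for v in sfs])  # [2.0, 1.0, 0.667, 0.5, ...]
```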

13.
In recent years, the costs of powering datacenters have increased so quickly that datacenter power bills now dwarf the IT hardware bills. Many large infrastructure programs have been developed in the past few years to reduce the energy consumption of datacenters, especially with respect to cooling requirements. Although these methods are effective in lowering the operating costs, they do require large upfront investments. It is therefore not surprising that some datacenters have been unable to utilize the above means and as a result are still struggling with high energy bills. In this work we present a cheap addition or alternative to such investments: we propose the use of intelligent, energy-efficient system allocation mechanisms in place of the packaged system schedulers currently available in modern hardware infrastructure, cutting server power costs by 40%. We pursue both (1) an understanding of the energy costs generated in operation and (2) the use of this information to allocate computing tasks efficiently in a cost-minimizing optimization approach. We were able to underline the energy savings potential of our models compared to current state-of-the-art schedulers. However, since this allocation problem is complex (NP-hard), we investigated various model approximations in a trade-off between computational complexity and allocative efficiency. As part of this investigation, we evaluate how changes in system configurations impact the quality of our results in a full factorial parametric evaluation.
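As a toy stand-in for such allocation mechanisms (the power model, capacities, and task loads below are hypothetical, and this greedy heuristic is not the paper's optimization model), a sketch that assigns each task to the server with the smallest incremental power draw:

```python
# Hypothetical power model: a server draws idle_w watts once switched on,
# plus w_per_unit watts for every unit of load it carries.
servers = [{"cap": 10, "idle_w": 100, "w_per_unit": 20, "load": 0},
           {"cap": 10, "idle_w": 150, "w_per_unit": 12, "load": 0}]
tasks = [4, 3, 3, 2]   # load requirement of each task

def incremental_watts(s, load):
    """Extra power drawn if this task lands on server s."""
    turn_on = s["idle_w"] if s["load"] == 0 else 0
    return turn_on + s["w_per_unit"] * load

for load in sorted(tasks, reverse=True):        # place biggest tasks first
    feasible = [s for s in servers if s["cap"] - s["load"] >= load]
    best = min(feasible, key=lambda s: incremental_watts(s, load))
    best["load"] += load

print([s["load"] for s in servers])  # [10, 2]
```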

14.
Along the lines of our previous approach to devising proximal algorithms for nonsmooth convex optimization by applying Nesterov's fast gradient concept to the Moreau–Yosida regularization of a convex function, we develop three new proximal algorithms for nonsmooth convex optimization. In these algorithms, the errors in computing approximate solutions for the Moreau–Yosida regularization are not fixed beforehand, while the complexity estimates already established are preserved. We report some preliminary computational results to give a first estimate of their performance.
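To make the Moreau–Yosida regularization concrete, a sketch for the simplest nonsmooth convex function f(t) = |t|, whose proximal operator is soft-thresholding and whose envelope is the familiar Huber-like smoothing (an illustration of the underlying objects, not of the paper's three algorithms):

```python
def prox_abs(x, lam):
    """Proximal operator of f(t) = |t|: soft-thresholding,
    argmin_t |t| + (1 / (2 * lam)) * (t - x) ** 2."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_envelope(x, lam):
    """Moreau-Yosida regularization of |.|: a smooth lower approximation
    with the same minimizers, evaluated through the prox point."""
    p = prox_abs(x, lam)
    return abs(p) + (x - p) ** 2 / (2 * lam)

for x in (-2.0, 0.3, 2.0):
    print(x, prox_abs(x, 1.0), round(moreau_envelope(x, 1.0), 3))
```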

15.
Fractal video compression is a relatively new video compression method. Its attraction is due to the high compression ratio and the simple decompression algorithm. But its computational complexity is high, and as a result parallel algorithms on high-performance machines become one way out. In this study we partition the matching search, which occupies the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
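A rough sketch of the same task-partitioning idea, using Python's process pool as a stand-in for the paper's DCOM / .NET Remoting workers; the blocks and the mean-squared-error matching criterion are simplified placeholders:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def best_match(args):
    """One matching task: find the domain block closest to a range block."""
    range_block, domain_blocks = args
    errs = [float(np.mean((range_block - d) ** 2)) for d in domain_blocks]
    return int(np.argmin(errs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    domains = [rng.random((8, 8)) for _ in range(64)]
    ranges = [rng.random((8, 8)) for _ in range(16)]
    # Partition the matching search into independent tasks, one per range
    # block, and farm them out to worker processes.
    with ProcessPoolExecutor() as pool:
        matches = list(pool.map(best_match, ((r, domains) for r in ranges)))
    print(matches)
```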

16.
Statistical environments such as S, R, XLisp-Stat, and others have had a dramatic effect on the way we, statistics practitioners, think about data and statistical methodology. However, the possibilities and challenges introduced by recent technological developments and the general ways we use computing conflict with the computational model of these systems. This article explores some of these challenges and identifies the need to support easy integration of functionality from other domains, and to export statistical methodology to other audiences and applications, both statically and dynamically. Existing systems can be improved in these domains with some already implemented extensions (see Section 5). However, the development of a new environment and computational model that exploits modern tools designed to handle many general aspects of these challenges appears more promising as a long-term approach. We present the architecture for such a new model named Omegahat. It lends itself to entirely new statistical computing paradigms. It is highly extensible at both the user and programmer level, and also encourages the development of new environments for different user groups. The Omegahat interactive language offers a continuity between the different programming tasks and levels via optional type checking and seamless access between the interpreted user language and the implementation language, Java. Parallel and distributed computing, network and database access, interactive graphics, and many other aspects of statistical computing are directly accessible to the user as a consequence of this seamless access. We describe the benefits of using Java as the implementation language for the environment and several innovative features of the user-level language which promise to assist development of software that can be used in many contexts. We also outline how this architecture can be integrated with existing environments such as R and S.

The ideas are drawn from work within the Omega Project for Statistical Computing. The project provides open-source software for researching and developing next generation statistical computing tools.

17.
We consider the problem of routing a number of communication requests in WDM (wavelength division multiplexing) all-optical networks from the standpoint of game theory. If we view each routing request (pair of source-target nodes) as a player, then a strategy consists of a path from the source to the target and a frequency (color). To reflect the restriction that two requests must not use the same frequency on the same edge, conflicting strategies are assigned a prohibitively high cost. Under this formulation, we consider several natural cost functions, each one reflecting a different aspect of restriction in the available bandwidth. For each cost function we examine the problem of the existence of pure Nash equilibria, the complexity of recognizing and computing them, and finally, the problem in which we are given a Nash equilibrium and asked to find a better one, in the sense that the total bandwidth used is less. As it turns out, some of these problems are tractable and others are NP-hard.
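The unilateral-deviation check that defines a pure Nash equilibrium is easy to state in code. Below is a toy sketch (not one of the paper's cost functions) with two requests sharing one edge and two available colors, where conflicting strategies get a prohibitively high cost:

```python
from itertools import product

def is_pure_nash(profile, strategies, cost):
    """profile[i] is player i's strategy; check that no player can lower
    its own cost by a unilateral deviation."""
    for i in range(len(profile)):
        for alt in strategies[i]:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if cost(i, deviated) < cost(i, profile):
                return False
    return True

# Toy instance: two requests routed on the same edge e; a strategy is
# a (path, color) pair and only the color choice matters here.
strategies = [[("e", 0), ("e", 1)], [("e", 0), ("e", 1)]]

def cost(i, profile):
    # Prohibitive cost if two requests use the same color on the same edge.
    clash = len(set(profile)) < len(profile)
    return 10**6 if clash else 1

for p in product(*strategies):
    if is_pure_nash(p, strategies, cost):
        print(p)  # exactly the two color-disjoint profiles
```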

18.
We present some reoptimization techniques for computing (shortest) hyperpath weights in a directed hypergraph. These techniques are exploited to improve the worst-case computational complexity (as well as the practical performance) of an algorithm finding the K shortest hyperpaths in acyclic hypergraphs.

19.
We tackle the problem of computing fair periodical premiums of an equity-linked policy with a maturity guarantee and an embedded surrender option. We consider the policy as a Bermudan-style contingent claim that can be exercised at the premium payment dates. The evaluation framework is based on a discretization of a bivariate model that considers the joint evolution of the equity value with stochastic interest rates. To substantially reduce the computational complexity of the pricing problem, we use the singular points framework, which allows us to compute accurate upper and lower estimates of the policy premiums.
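As a single-factor illustration of Bermudan-style exercise at fixed dates (the paper's model is bivariate with stochastic interest rates and uses the singular points framework, none of which is shown here), a CRR binomial lattice sketch with hypothetical parameters:

```python
import math

def bermudan_put(s0, strike, r, sigma, T, steps, exercise_steps):
    """Bermudan put on a CRR binomial lattice: exercise is allowed only
    at the steps in exercise_steps (stand-ins for premium payment dates)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # terminal payoffs at the last step
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    for step in range(steps - 1, -1, -1):
        # discounted continuation value at each node of this step
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step + 1)]
        if step in exercise_steps:
            values = [max(v, strike - s0 * u**j * d**(step - j))
                      for j, v in enumerate(values)]
    return values[0]

print(bermudan_put(100, 100, 0.03, 0.2, T=1.0, steps=100,
                   exercise_steps={25, 50, 75}))
```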

20.
The paper considers the hybrid flow-shop scheduling problem with multiprocessor tasks. Motivated by the computational complexity of the problem, we propose a memetic algorithm for it. We first describe the implementation details of a genetic algorithm, which is used in the memetic algorithm. We then propose a constraint programming based branch-and-bound algorithm to be employed as the local search engine of the memetic algorithm. Next, we present the new memetic algorithm. Lastly, we explain the computational experiments carried out to evaluate the performance of the three algorithms (genetic algorithm, constraint programming based branch-and-bound algorithm, and memetic algorithm) in terms of both the quality of the solutions produced and efficiency. These results demonstrate that the memetic algorithm produces better-quality solutions and that it is very efficient.
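A generic sketch of the memetic pattern, a genetic algorithm whose offspring are improved by a local-search engine before entering the population. The toy fitness, mutation, and first-improvement swap search below are placeholders for the paper's scheduling components and its constraint programming based branch-and-bound engine:

```python
import random

def memetic(fitness, init, mutate, local_search, pop_size=20, gens=50):
    """Memetic loop: GA selection/mutation plus local search on offspring."""
    pop = [local_search(init(), fitness) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                  # minimization
        parents = pop[:pop_size // 2]
        children = [local_search(mutate(random.choice(parents)), fitness)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

# Toy instance: a permutation "schedule" minimizing total displacement.
n = 8
def fitness(perm): return sum(abs(i - v) for i, v in enumerate(perm))
def init(): return random.sample(range(n), n)
def mutate(p):
    p = p[:]
    i, j = random.sample(range(n), 2)
    p[i], p[j] = p[j], p[i]
    return p
def local_search(p, f):
    # first-improvement swap neighbourhood
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                q = p[:]
                q[i], q[j] = q[j], q[i]
                if f(q) < f(p):
                    p, improved = q, True
    return p

print(fitness(memetic(fitness, init, mutate, local_search)))  # typically 0
```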
