Similar Articles
A total of 20 similar articles were found.
1.
An algorithm for generating graph structures for the ordering of datasets is proposed. It operates with the key concept of “sphere neighborhood” and generates the graph-structured ordering of datasets, provided the elements of such sets can be related by some similarity relation. The algorithm always allows determining the shortest path between two nodes in a constructive way. It is demonstrated with an application to the well-known Word Morph game and, in addition, with the ordering of logfiles for the purpose of error detection. The algorithm can therefore be very useful for dealing with Big Data problems. © 2016 Wiley Periodicals, Inc. Complexity 21: 426–436, 2016
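The abstract pairs a similarity relation with constructive shortest paths. The sketch below is a hedged illustration, not the authors' algorithm: it links words that differ in exactly one letter (a stand-in similarity relation over a toy word list) and recovers a Word Morph path by breadth-first search.

```python
# Minimal sketch: a Word Morph style similarity graph plus BFS shortest path.
from collections import deque

def one_letter_apart(a: str, b: str) -> bool:
    """Similarity relation: equal length and exactly one differing position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def build_graph(words):
    """Adjacency lists of the similarity graph over the word list."""
    return {w: [v for v in words if one_letter_apart(w, v)] for w in words}

def shortest_path(graph, start, goal):
    """Constructive shortest path between two nodes via breadth-first search."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # goal not reachable from start

words = ["cold", "cord", "card", "ward", "warm", "word"]  # toy word list (assumption)
graph = build_graph(words)
print(shortest_path(graph, "cold", "warm"))  # e.g. cold -> cord -> card -> ward -> warm
```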

2.
The minimum spanning tree write policy for the maintenance of the consistency of a distributed database, where replicated data exist, has been proposed in [1]. In this paper, we first present a data placement heuristic algorithm in general networks for minimizing the overall transmission cost for processing the typical demands of queries (by a “simple” process strategy) and updates (by the minimum spanning tree write policy). Several interesting optimality estimation results of this algorithm are shown, while the computational intractability of the complete optimization, with respect to the simple strategy, is shown as well. Secondly, we apply a classical hill-climbing technique to obtain a dynamic database placement algorithm based on an employed optimizer—a collection of distributed query processing algorithms. This is guaranteed to output a “locally optimal” data allocation. The implementation results also show that these two heuristics work well in practice.

3.
The computational complexity of discrete problems concerning the enumeration of solutions is addressed. The concept of an asymptotically efficient algorithm is introduced for the dualization problem, which is formulated as the problem of constructing irreducible coverings of a Boolean matrix. This concept imposes weaker constraints on the number of “redundant” algorithmic steps as compared with the previously introduced concept of an asymptotically optimal algorithm. When the number of rows in a Boolean matrix is no less than the number of columns (in which case asymptotically optimal algorithms for the problem fail to be constructed), algorithms based on the polynomial-time-delay enumeration of “compatible” sets of columns of the matrix are shown to be asymptotically efficient. A similar result is obtained for the problem of searching for maximal conjunctions of a monotone Boolean function defined by a conjunctive normal form.

4.
We study the representability problem for torsion-free arithmetic matroids. After introducing a “strong gcd property” and a new operation called “reduction”, we describe and implement an algorithm to compute all essential representations, up to equivalence. As a consequence, we obtain an upper bound on the number of equivalence classes of representations. In order to rule out equivalent representations, we describe an efficient way to compute a normal form of integer matrices, up to left-multiplication by invertible matrices and change of sign of the columns (we call it the “signed Hermite normal form”). Finally, as an application of our algorithms, we disprove two conjectures about the poset of layers and the independence poset of a toric arrangement.

5.
For solving inverse gravimetry problems, efficient stable parallel algorithms based on iterative gradient methods are proposed. For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a square root method, and a conjugate gradient method with preconditioner are proposed. The algorithms are implemented numerically on a parallel computing system of the Institute of Mathematics and Mechanics (PCS-IMM), NVIDIA graphics processors, and an Intel multi-core CPU with some new computing technologies. The parallel algorithms are incorporated into a system of remote computations entitled “Specialized Web-Portal for Solving Geophysical Problems on Multiprocessor Computers.” Some problems with “quasi-model” and real data are solved.
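Of the solvers listed in the abstract, the preconditioned conjugate gradient method is easiest to illustrate compactly. The following is a minimal serial sketch with a Jacobi preconditioner and a small stand-in system; it is not the authors' parallel implementation for block-tridiagonal geoelectrics matrices, and the test matrix is purely illustrative.

```python
# Minimal sketch: Jacobi-preconditioned conjugate gradient for A x = b, A SPD.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of the diagonal
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD system standing in for a block-tridiagonal geoelectrics matrix.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(pcg(A, b), np.linalg.solve(A, b))   # the two solutions should agree
```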

6.
A Cell-Exclusion Method for Finding All Solutions of Global Optimization Problems
张讲社  王卫  陈白丽 《计算数学》1995,17(4):443-455
where X^0 is a box whose edges are parallel to the coordinate axes (such boxes are simply called cells below), and the maximum edge length of X^0 is called its mesh width, denoted W(X^0). In this paper, X ⊆ X^0 denotes an arbitrary cell, f(X) = {f(x) : x ∈ X} denotes the range of f over X, the global minimum of f over X^0 is denoted f*, and the set of all global minimizers of f over X^0 is denoted M. Throughout, it is assumed that M consists of only finitely many isolated points, all of which lie in the interior of X^0. Hence, for every x* ∈ M, we have …
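As a hedged illustration of the cell-exclusion idea (bisect cells and discard those that provably cannot contain a global minimizer), the sketch below uses a crude Lipschitz-type lower bound with an assumed constant; it is not the exclusion test from the paper, and the test function is an assumption.

```python
# Minimal sketch: cell exclusion with an assumed Lipschitz constant.
import numpy as np

def cell_exclusion(f, lower, upper, lipschitz, tol=0.05):
    """Return midpoints of all cells of width < tol that could not be excluded."""
    cells = [(np.array(lower, float), np.array(upper, float))]
    survivors, best = [], np.inf
    while cells:
        lo, hi = cells.pop()
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))                  # update the incumbent value f*
        radius = 0.5 * np.linalg.norm(hi - lo)
        if f(mid) - lipschitz * radius > best:    # cell cannot contain a global minimizer
            continue
        if np.max(hi - lo) < tol:                 # mesh width W(X) small enough: keep cell
            survivors.append(mid)
            continue
        k = int(np.argmax(hi - lo))               # bisect the cell along its longest edge
        for a, b in ((lo[k], mid[k]), (mid[k], hi[k])):
            nlo, nhi = lo.copy(), hi.copy()
            nlo[k], nhi[k] = a, b
            cells.append((nlo, nhi))
    return survivors

f = lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2    # two global minima, at (1, 0) and (-1, 0)
kept = cell_exclusion(f, [-2, -2], [2, 2], lipschitz=25.0)
print(len(kept), "candidate cells retained around the global minimizers")
```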

7.
In this paper, several parameter-dependent scalarization approaches for solving nonlinear multi-objective optimization problems are discussed. It is shown that they can be considered as special cases of a scalarization problem by Pascoletti and Serafini (or a modification of this problem). Based on these connections, theoretical results as well as a new algorithm for adaptively controlling the choice of the parameters for generating almost equidistant approximations of the efficient set, recently developed for the Pascoletti-Serafini scalarization, can be applied to these problems. For instance, for such well-known scalarizations as the ε-constraint and normal boundary intersection problems, algorithms for adaptively generating high-quality approximations are derived.
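Among the scalarizations named above, the ε-constraint problem is the simplest to state concretely. The sketch below is an assumed toy bi-objective example solved with SciPy, sweeping the constraint level to trace a rough approximation of the efficient set; it does not implement the adaptive parameter control discussed in the abstract.

```python
# Minimal sketch: epsilon-constraint scalarization of a toy bi-objective problem.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2       # first objective (assumed)
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2       # second objective (assumed)

front = []
for eps in np.linspace(0.0, 2.0, 11):              # minimize f1 subject to f2 <= eps
    res = minimize(
        f1,
        x0=np.array([0.5, 0.5]),
        method="SLSQP",
        constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}],
    )
    if res.success:
        front.append((f1(res.x), f2(res.x)))

for p in front:                                     # a rough approximation of the efficient set
    print("f1 = %.3f, f2 = %.3f" % p)
```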

8.
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for the elastic-net penalized Huber loss regression and quantile regression in high dimensional settings. Unlike existing coordinate descent type algorithms, the SNCD updates a regression coefficient and its corresponding subgradient simultaneously in each iteration. It combines the strengths of the coordinate descent and the semismooth Newton algorithm, and effectively solves the computational challenges posed by dimensionality and nonsmoothness. We establish the convergence properties of the algorithm. In addition, we present an adaptive version of the “strong rule” for screening predictors to gain extra efficiency. Through numerical experiments, we demonstrate that the proposed algorithm is very efficient and scalable to ultrahigh dimensions. We illustrate the application via a real data example. Supplementary materials for this article are available online.
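The screening idea is easiest to show in its basic sequential form. The sketch below illustrates strong-rule style screening for a plain lasso rather than the penalized Huber loss, and it is not the paper's adaptive variant; the data, scaling convention, and λ values are assumptions.

```python
# Minimal sketch: sequential strong-rule screening along a lambda path.
import numpy as np

def strong_rule_active_set(X, y, beta_prev, lam_k, lam_prev):
    """Keep predictor j only if its gradient magnitude exceeds 2*lam_k - lam_prev."""
    residual = y - X @ beta_prev                 # residual at the previous solution
    score = np.abs(X.T @ residual) / len(y)      # per-predictor gradient magnitude (1/n loss)
    return np.where(score >= 2.0 * lam_k - lam_prev)[0]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
beta_true = np.zeros(500); beta_true[:3] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)

kept = strong_rule_active_set(X, y, np.zeros(500), lam_k=0.8, lam_prev=1.0)
print(len(kept), "of 500 predictors survive screening")
```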

9.
Subtraction-free computational complexity is the version of arithmetic circuit complexity that allows only three operations: addition, multiplication, and division. We use cluster transformations to design efficient subtraction-free algorithms for computing Schur functions and their skew, double, and supersymmetric analogues, thereby generalizing earlier results by P. Koev. We develop such algorithms for computing generating functions of spanning trees, both directed and undirected. A comparison to the lower bound due to M. Jerrum and M. Snir shows that in subtraction-free computations, “division can be exponentially powerful.” Finally, we give a simple example where the gap between ordinary and subtraction-free complexity is exponential.

10.
In recent years, parallel processing has become widely available to researchers. It can be applied in an obvious way in the context of Monte Carlo simulation, but techniques for “parallelizing” Markov chain Monte Carlo (MCMC) algorithms are not so obvious, apart from the natural approach of generating multiple chains in parallel. Although generation of parallel chains is generally the easiest approach, in cases where burn-in is a serious problem, it is often desirable to use parallelization to speed up generation of a single chain. This article briefly discusses some existing methods for parallelization of MCMC algorithms, and proposes a new “pre-fetching” algorithm to parallelize generation of a single chain.
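The core of pre-fetching is to evaluate, ahead of time, every target-density value that the next few accept/reject decisions could possibly need. The sketch below is an illustrative serial random-walk Metropolis version, not the article's algorithm: it builds the accept/reject tree for a few steps, evaluates all prospective densities (the part a parallel version would distribute), and then replays the decisions. The standard-normal target and all parameters are assumptions.

```python
# Minimal sketch of the pre-fetching idea for random-walk Metropolis.
import numpy as np

def log_target(x):
    """Log density of the target; a standard normal is assumed purely for illustration."""
    return -0.5 * x ** 2

def prefetch_metropolis(x0, n_steps, depth=3, step=0.8, seed=0):
    rng = np.random.default_rng(seed)
    chain, x, lp = [x0], x0, log_target(x0)
    while len(chain) <= n_steps:
        size = 2 ** (depth + 1)                  # heap-indexed accept/reject tree
        state, logp = np.empty(size), np.empty(size)
        state[1], logp[1] = x, lp
        for i in range(1, 2 ** depth):           # propose from every internal node
            state[2 * i] = state[i]              # left child: current proposal rejected
            state[2 * i + 1] = state[i] + step * rng.standard_normal()  # right: accepted
        # The expensive evaluations; a parallel version farms this loop out to workers
        # before any accept/reject decision has been made.
        for i in range(1, 2 ** depth):
            logp[2 * i] = logp[i]
            logp[2 * i + 1] = log_target(state[2 * i + 1])
        node = 1                                  # replay `depth` Metropolis decisions
        for _ in range(depth):
            accept = np.log(rng.uniform()) < logp[2 * node + 1] - logp[node]
            node = 2 * node + 1 if accept else 2 * node
            chain.append(state[node])
        x, lp = state[node], logp[node]
    return np.array(chain[: n_steps + 1])

samples = prefetch_metropolis(0.0, n_steps=5000)
print(samples.mean(), samples.std())              # should be close to 0 and 1 for N(0, 1)
```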

11.
Patch-based denoising algorithms currently provide the optimal techniques to restore an image. These algorithms denoise patches locally in “patch-space”. In contrast, we propose in this paper a simple method that uses the eigenvectors of the Laplacian of the patch-graph to denoise the image. Experiments demonstrate that our denoising algorithm outperforms the denoising gold-standards. We provide an analysis of the algorithm based on recent results on the perturbation of kernel matrices (El Karoui, 2010) [1], [2], and theoretical analyses of patch denoising algorithms (Levin et al., 2012) [3], (Taylor and Meyer, 2012) [4].
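To make the patch-graph Laplacian idea concrete, the sketch below applies it to a toy 1-D signal rather than an image, with a dense eigendecomposition for simplicity; it is a hedged illustration of the general approach, not the paper's method, and the kernel width, patch size, and number of eigenvectors are assumptions.

```python
# Minimal sketch: denoise a signal by projecting it onto low-frequency eigenvectors
# of the Laplacian of its patch graph.
import numpy as np

def laplacian_denoise(signal, patch_radius=3, sigma=1.0, n_eig=20):
    n = len(signal)
    padded = np.pad(signal, patch_radius, mode="reflect")
    patches = np.array([padded[i:i + 2 * patch_radius + 1] for i in range(n)])
    # Gaussian affinities between patches, then the unnormalized graph Laplacian.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    L = np.diag(W.sum(1)) - W
    # Eigenvectors with the smallest eigenvalues vary slowly over the patch graph;
    # projecting the noisy signal onto them suppresses the rough (noise) components.
    _, vecs = np.linalg.eigh(L)
    basis = vecs[:, :n_eig]
    return basis @ (basis.T @ signal)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = laplacian_denoise(noisy)
# Reconstruction error typically drops relative to the noisy input in this toy example.
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```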

12.
In the last two decades, numerous evolutionary algorithms (EAs) have been developed for solving optimization problems. However, only a few works have focused on the question of the termination criteria. Indeed, EAs still need termination criteria prespecified by the user. In this paper, we develop a genetic algorithm (GA) with automatic termination and acceleration elements which allow the search to end without resort to predefined conditions. We call this algorithm “Genetic Algorithm with Automatic Termination and Search Space Rotation”, abbreviated as GATR. This algorithm utilizes the so-called “Gene Matrix” (GM) to equip the search process with a self-check in order to judge how much exploration has been performed, while maintaining the population diversity. The algorithm also implements a mutation operator called “mutagenesis” to achieve more efficient and faster exploration and exploitation processes. Moreover, GATR fully exploits the structure of the GM by calling a novel search space decomposition mechanism combined with a search space rotation procedure. As a result, the search operates strictly within two-dimensional subspaces irrespective of the dimension of the original problem. The computational experiments and comparisons with some state-of-the-art EAs demonstrate the effectiveness of the automatic termination criteria and the space decomposition mechanism of GATR.
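The automatic-termination ingredient is a bookkeeping structure over the search space. The sketch below is an illustration of a Gene Matrix style check only, not the GATR algorithm: each variable's range is split into bins, an entry is marked when some individual visits that bin, and the search may stop once every entry has been marked. The class name, bin count, and the random stand-in population are assumptions.

```python
# Minimal sketch: a Gene Matrix style exploration check used as a termination signal.
import numpy as np

class GeneMatrix:
    def __init__(self, lower, upper, n_bins=10):
        self.lower = np.asarray(lower, float)
        self.upper = np.asarray(upper, float)
        self.n_bins = n_bins
        self.visited = np.zeros((len(lower), n_bins), dtype=bool)

    def update(self, population):
        """Mark, for every individual, which bin of each variable it falls into."""
        for x in population:
            bins = (x - self.lower) / (self.upper - self.lower) * self.n_bins
            bins = np.clip(bins.astype(int), 0, self.n_bins - 1)
            self.visited[np.arange(len(x)), bins] = True

    def exploration_complete(self):
        return bool(self.visited.all())

gm = GeneMatrix(lower=[-5, -5], upper=[5, 5])
rng = np.random.default_rng(2)
generation = 0
while not gm.exploration_complete():
    gm.update(rng.uniform(-5, 5, size=(20, 2)))   # stand-in for a GA population
    generation += 1
print("exploration considered complete after", generation, "generations")
```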

13.
This paper presents two new approximate versions of the alternating direction method of multipliers (ADMM) derived by modifying the original “Lagrangian splitting” convergence analysis of Fortin and Glowinski. They require neither strong convexity of the objective function nor any restrictions on the coupling matrix. The first method uses an absolutely summable error criterion and resembles methods that may readily be derived from earlier work on the relationship between the ADMM and the proximal point method, but without any need for restrictive assumptions to make it practically implementable. It permits both subproblems to be solved inexactly. The second method uses a relative error criterion and the same kind of auxiliary iterate sequence that has recently been proposed to enable relative-error approximate implementation of non-decomposition augmented Lagrangian algorithms. It also allows both subproblems to be solved inexactly, although ruling out “jamming” behavior requires a somewhat complicated implementation. The convergence analyses of the two methods share extensive underlying elements.
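To illustrate the flavor of an absolutely summable error criterion, the sketch below runs ADMM on a toy lasso problem and solves the first subproblem only approximately, with a tolerance sequence whose sum is finite; it is a hedged illustration in the spirit of the abstract's first method, not the paper's algorithm, and the problem data, tolerance schedule, and safeguard cap are assumptions.

```python
# Minimal sketch: ADMM for lasso with an inexact, summable-tolerance x-update.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_admm_lasso(A, b, lam, rho=1.0, n_outer=100):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    H = A.T @ A + rho * np.eye(n)
    step = 1.0 / np.linalg.norm(H, 2)            # gradient step size for the x-subproblem
    for k in range(n_outer):
        eps_k = 1.0 / (k + 1) ** 2               # absolutely summable tolerance sequence
        rhs = A.T @ b + rho * (z - u)
        for _ in range(5000):                    # inexact x-update, with a safeguard cap
            grad = H @ x - rhs
            if np.linalg.norm(grad) <= eps_k:
                break
            x = x - step * grad
        z = soft_threshold(x + u, lam / rho)     # exact z-update (soft thresholding)
        u = u + x - z                            # scaled multiplier update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[:4] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(inexact_admm_lasso(A, b, lam=0.1)[:6], 3))
```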

14.
A modified approach is developed in this study by combining two well-known clustering algorithms, namely the fuzzy c-means algorithm and an entropy-based algorithm. The fuzzy c-means algorithm is one of the most popular algorithms for fuzzy clustering. It can yield compact clusters but may not be able to generate distinct clusters. On the other hand, an entropy-based algorithm can obtain distinct clusters, which may not be compact. However, the clusters need to be both distinct and compact. The present paper therefore proposes a modified clustering approach that combines the above two algorithms. A genetic algorithm is utilized for tuning each of the three clustering algorithms separately. The proposed approach is found to yield both distinct and compact clusters on two data sets.
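The fuzzy c-means half of the combination is standard and easy to sketch. The following is a minimal illustration of the FCM updates only, on assumed synthetic data; the entropy-based part and the GA tuning described in the abstract are not reproduced.

```python
# Minimal sketch: standard fuzzy c-means updates (memberships and centers).
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))         # inverse-distance membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 2))                      # roughly the two true cluster centers
```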

15.
Classes of integer Abaffy–Broyden–Spedicato (ABS) methods have recently been introduced for solving linear systems of Diophantine equations. Each method provides the general integer solution of the system by computing an integer solution and an integer matrix, named the Abaffian, with rows generating the integer null space of the coefficient matrix. The Smith normal form of a general rectangular integer matrix is a diagonal matrix obtained by elementary nonsingular (unimodular) operations. Here, we present a class of algorithms for computing the Smith normal form of an integer matrix. In doing this, we propose new ideas to develop a new class of extended integer ABS algorithms generating an integer basis for the integer null space of the matrix. Since computing the Smith normal form requires solving quadratic Diophantine equations, we present two algorithms for solving such equations. The first algorithm makes use of a special integer basis for the row space of the matrix, and the second one, with the intention of controlling the growth of intermediate results and making use of our stated conjecture, is based on a recently proposed integer ABS algorithm. Finally, we report some numerical results on randomly generated test problems showing a better performance of the second algorithm in controlling the size of the solution. We also report the results obtained by our proposed algorithm for the Smith normal form and compare them with the ones obtained using Maple, observing a more balanced distribution of the intermediate components in our algorithm.

16.
17.
Record values are very popular in probability and mathematical statistics. There are many books and papers concerned with classical record values and record times, i.e., records in sequences of independent identically distributed random variables. In recent times, new types of record values (records in the F^α-scheme, record values in sequences of non-identically distributed random variables, records with confirmations, exceedance record values) have been proposed and examined. The present paper proposes another record scheme (so-called “records with constraint”). Various cases are studied in which these records may be useful. For these record values, we give their joint density functions and discover some of their properties. For particular cases of utmost importance, when the initial random variables are independent and share the same exponential distribution, we obtain fairly simple representations of records with constraints as sums of independent identically distributed random terms.
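The closing representation echoes a classical fact that is easy to check numerically: for i.i.d. unit-exponential observations, successive record values behave like cumulative sums of independent unit-exponential terms. The sketch below verifies this for the second record by simulation; it illustrates the classical scheme only, not the paper's "records with constraint", and the sample sizes are assumptions.

```python
# Minimal sketch: record values of an exponential sequence behave like sums of
# independent exponential terms.
import numpy as np

def record_values(sample):
    """Extract the sequence of upper record values from a 1-D sample."""
    records, current = [], -np.inf
    for x in sample:
        if x > current:
            records.append(x)
            current = x
    return records

rng = np.random.default_rng(5)
second_records = []
for _ in range(5000):
    recs = record_values(rng.exponential(1.0, size=500))
    if len(recs) > 1:                            # the 2nd record exists in almost every run
        second_records.append(recs[1])
# For unit exponentials the n-th record is distributed as a sum of n independent
# unit-exponential terms, so the mean of the 2nd record should be close to 2.
print(np.mean(second_records))
```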

18.
Most parallel efficient global optimization (EGO) algorithms focus only on the parallel architectures for producing multiple updating points, but pay little attention to the balance between the global search (i.e., sampling in different areas of the search space) and local search (i.e., sampling more intensely in one promising area of the search space) of the updating points. In this study, a novel approach is proposed to apply this idea to further accelerate the search of parallel EGO algorithms. In each cycle of the proposed algorithm, all local maxima of the expected improvement (EI) function are identified by a multi-modal optimization algorithm. Then the local EI maxima with values greater than a threshold are selected, and candidates are sampled around these selected EI maxima. The results of numerical experiments show that, although the proposed parallel EGO algorithm needs more evaluations to find the optimum than the standard EGO algorithm, it is able to reduce the number of optimization cycles. Moreover, the proposed parallel EGO algorithm achieves better results in terms of both the number of cycles and the number of evaluations compared to a state-of-the-art parallel EGO algorithm over six test problems.
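The selection step above hinges on the expected improvement criterion. The sketch below shows the standard EI formula and a crude pick-local-maxima-above-a-threshold filter; the posterior mean/standard deviation are faked with fixed arrays (in practice they come from a Gaussian-process model), and the threshold and grid are assumptions rather than the paper's settings.

```python
# Minimal sketch: expected improvement and selection of its local maxima.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Stand-in posterior mean/std on a 1-D grid (would come from a GP surrogate).
xs = np.linspace(0.0, 1.0, 201)
mu = np.sin(6.0 * xs)
sigma = 0.2 + 0.3 * np.abs(np.cos(4.0 * xs))
ei = expected_improvement(mu, sigma, f_best=-0.5)

# Keep only the local maxima of EI whose value exceeds a threshold.
is_local_max = (ei[1:-1] > ei[:-2]) & (ei[1:-1] > ei[2:])
candidates = xs[1:-1][is_local_max & (ei[1:-1] > 0.5 * ei.max())]
print(np.round(candidates, 3))
```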

19.
In this paper, we present homogeneous polynomials in many variables. We show how the hypercube representation of these polynomials (introduced by Beauzamy et al. in [1], and derived from Bombieri's work in Beauzamy et al. [2]) allows us to build interpolation polynomials, that is, polynomials taking prescribed values at prescribed points. We then show that the construction is robust and give quantitative estimates on how the constructed polynomial is perturbed if either the data, the points, or both are perturbed. The theorems, constructions, and algorithms answer questions asked by Dr. Ken Clark, U.S. Army Research Office.

In the final part of the paper, we present the explicit algorithms, implemented on the Connection Machines CM200 and CM5 at the Etablissement Technique Central de l'Armement, Arcueil. The algorithms are efficient, especially when the number of variables is high, and they take full advantage of the massively parallel architecture.


20.
In this paper, we address uncapacitated network design problems characterised by uncertainty in the input data. Network design choices have a determinant impact on the effectiveness of the system. Design decisions are frequently made with a great degree of uncertainty about the conditions under which the system will be required to operate. Instead of finding optimal designs for a given future scenario, designers often search for network configurations that are “good” for a variety of likely future scenarios. This approach is referred to as the “robustness” approach to system design. We present a formal definition of “robustness” for the uncapacitated network design problem, and develop algorithms aimed at finding robust network designs. These algorithms are adaptations of the Benders decomposition methodology that are tailored so they can efficiently identify robust network designs. We tested the proposed algorithms on a set of randomly generated problems. Our computational experiments showed two important properties. First, robust solutions are abundant in uncapacitated network design problems; second, the performance of the proposed algorithms is satisfactory in terms of cost and the number of robust network designs obtained.
