Similar Documents
20 similar documents found (search time: 15 ms)
1.
Algorithms and implementations for computing the sign function of a triangular matrix are fundamental building blocks for computing the sign of arbitrary square real or complex matrices. We present novel recursive and cache-efficient algorithms that are based on Higham's stabilized specialization of Parlett's substitution algorithm for computing the sign of a triangular matrix. We show that the new recursive algorithms are asymptotically optimal in terms of the number of cache misses that they generate. One algorithm that we present performs more arithmetic than the nonrecursive version, but this allows it to benefit from calling highly optimized matrix multiplication routines; the other performs the same number of operations as the nonrecursive version, using custom computational kernels instead. We present implementations of both, as well as a cache-efficient implementation of a block version of Parlett's algorithm. Our experiments demonstrate that the blocked and recursive versions are much faster than the previous algorithms and that the inertia strongly influences their relative performance, as predicted by our analysis.
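A minimal sketch of the underlying Parlett substitution for the sign of a triangular matrix (the unblocked baseline, not the paper's recursive cache-efficient variants, and without Higham's stabilization). It assumes a real upper triangular matrix with distinct, nonzero diagonal entries:

```python
def sign_triangular(T):
    """Parlett substitution for S = sign(T), T real upper triangular with
    distinct nonzero diagonal entries.  Entries are filled superdiagonal by
    superdiagonal from the commutation relation ST = TS:
      (t_ii - t_jj) s_ij = t_ij (s_ii - s_jj) + sum_k (s_ik t_kj - t_ik s_kj).
    Sketch only; the paper's algorithms block and recurse over this scheme."""
    n = len(T)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):                       # sign of each eigenvalue
        S[i][i] = 1.0 if T[i][i] > 0 else -1.0
    for d in range(1, n):                    # d-th superdiagonal
        for i in range(n - d):
            j = i + d
            acc = T[i][j] * (S[i][i] - S[j][j])
            for k in range(i + 1, j):
                acc += S[i][k] * T[k][j] - T[i][k] * S[k][j]
            S[i][j] = acc / (T[i][i] - T[j][j])
    return S
```

The computed S satisfies S² = I and commutes with T, the defining properties of the matrix sign function.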

2.
We analyze randomized broadcast in dynamic networks modeled as edge-Markovian evolving graphs. The most realistic range of edge-Markovian parameters yields sparse and disconnected graphs. We prove that, in this setting, the “push” protocol completes with high probability in optimal logarithmic time. © 2015 Wiley Periodicals, Inc. Random Struct. Alg., 48, 290–312, 2016
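The “push” protocol itself is simple to simulate. The sketch below runs it on a static complete graph (the paper's setting is edge-Markovian dynamic graphs, which would resample the edge set every round); `push_broadcast_rounds` and the graph encoding are illustrative choices:

```python
import random

def push_broadcast_rounds(adj, source, rng):
    """Randomized "push" protocol: in each round, every informed node
    forwards the rumor to one uniformly random neighbor.  Returns the
    number of rounds until all nodes are informed.  Static-graph sketch."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        # each informed node pushes to one random neighbor
        pushes = {rng.choice(adj[v]) for v in informed if adj[v]}
        informed |= pushes
        rounds += 1
    return rounds

# complete graph on 16 nodes: logarithmically many rounds expected
n = 16
adj = [[u for u in range(n) if u != v] for v in range(n)]
r = push_broadcast_rounds(adj, 0, random.Random(7))
```

Since the informed set can at most double per round, at least log₂ n rounds are always needed, matching the logarithmic completion time the abstract refers to.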

3.
We study classic machine sequencing problems in an online setting. Specifically, we look at deterministic and randomized algorithms for the problem of scheduling jobs with release dates on identical parallel machines, to minimize the sum of weighted completion times. Both preemptive and non-preemptive versions of the problem are analyzed. Using linear programming techniques, borrowed from the single machine case, we are able to design a 2.62-competitive deterministic algorithm for the non-preemptive version of the problem, improving upon the 3.28-competitive algorithm of Megow and Schulz. Additionally, we show how to combine randomization techniques with the linear programming approach to obtain randomized algorithms for both versions of the problem with competitive ratio strictly smaller than 2 for any number of machines (but approaching two as the number of machines grows). Our algorithms naturally extend several approaches for single and parallel machine scheduling. We also present a brief computational study, for randomly generated problem instances, which suggests that our algorithms perform very well in practice. A preliminary version of this work appears in the Proceedings of the 11th Conference on Integer Programming and Combinatorial Optimization (IPCO), Berlin, 8–10 June 2005.
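For concreteness, here is a simple online greedy baseline for the non-preemptive setting: whenever a machine becomes free, start the already-released pending job with the largest weight-to-processing-time ratio. This is not the paper's LP-based 2.62-competitive algorithm, just an illustration of the online model (jobs become known only at their release dates); `online_wspt` is a hypothetical name:

```python
import heapq

def online_wspt(jobs, m):
    """Online non-preemptive list scheduling on m identical machines.
    jobs: list of (release, processing, weight).  Returns a list of
    (job, machine, start, end) tuples."""
    order = sorted(enumerate(jobs), key=lambda t: t[1][0])  # by release time
    free = [(0.0, i) for i in range(m)]                     # (free time, machine)
    heapq.heapify(free)
    pending, schedule, nxt = [], [], 0
    while len(schedule) < len(jobs):
        t, mach = heapq.heappop(free)
        while nxt < len(order) and order[nxt][1][0] <= t:   # admit released jobs
            j, (r, p, w) = order[nxt]
            heapq.heappush(pending, (-w / p, j, p))
            nxt += 1
        if not pending:                                     # idle until next release
            heapq.heappush(free, (order[nxt][1][0], mach))
            continue
        _, j, p = heapq.heappop(pending)
        schedule.append((j, mach, t, t + p))
        heapq.heappush(free, (t + p, mach))
    return schedule
```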

4.
This is an experimental computational account of projection algorithms for the linear best approximation problem. We focus on the sequential and simultaneous versions of Dykstra’s algorithm and the Halpern-Lions-Wittmann-Bauschke algorithm for the best approximation problem from a point to the intersection of closed convex sets in the Euclidean space. These algorithms employ different iterative approaches to reach the same goal but no mathematical connection has yet been found between their algorithmic schemes. We compare these algorithms on linear best approximation test problems that we generate so that the solution will be known a priori and enable us to assess the relative computational merits of these algorithms. For the simultaneous versions we present a new component-averaging variant that substantially accelerates their initial behavior for sparse systems.
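The sequential Dykstra scheme can be sketched in a few lines. Unlike plain alternating projections, the per-set correction terms make the limit the nearest point of the intersection, which is the best approximation problem described above. The two example sets (unit ball and a halfspace) are illustrative choices, not the paper's test problems:

```python
import math

def dykstra(x0, projections, iters=50):
    """Sequential Dykstra: project x0 onto the intersection of closed
    convex sets, given the individual projection maps."""
    x = list(x0)
    p = [[0.0] * len(x0) for _ in projections]   # correction term per set
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj([xj + pj for xj, pj in zip(x, p[i])])
            p[i] = [xj + pj - yj for xj, pj, yj in zip(x, p[i], y)]
            x = y
    return x

def proj_unit_ball(v):
    r = math.sqrt(sum(c * c for c in v))
    return [c / r for c in v] if r > 1 else list(v)

def proj_halfspace(v):                            # {x : x[0] >= 0.5}
    return [max(v[0], 0.5)] + list(v[1:])

# nearest point of {||x|| <= 1} ∩ {x0 >= 0.5} to (2, 2) is (1/√2, 1/√2)
x = dykstra([2.0, 2.0], [proj_unit_ball, proj_halfspace], iters=100)
```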

5.
We study randomized gossip‐based processes in dynamic networks that are motivated by information discovery in large‐scale distributed networks such as peer‐to‐peer and social networks. A well‐studied problem in peer‐to‐peer networks is resource discovery, where the goal for nodes (hosts with IP addresses) is to discover the IP addresses of all other hosts. Also, some of the recent work on self‐stabilization algorithms for P2P/overlay networks proceed via discovery of the complete network. In social networks, nodes (people) discover new nodes through exchanging contacts with their neighbors (friends). In both cases the discovery of new nodes changes the underlying network — new edges are added to the network — and the process continues in the changed network. Rigorously analyzing such dynamic (stochastic) processes in a continuously changing topology remains a challenging problem with obvious applications. This paper studies and analyzes two natural gossip‐based discovery processes. In the push discovery or triangulation process, each node repeatedly chooses two random neighbors and connects them (i.e., “pushes” their mutual information to each other). In the pull discovery process or the two‐hop walk, each node repeatedly requests or “pulls” a random contact from a random neighbor and connects itself to this two‐hop neighbor. Both processes are lightweight in the sense that the amortized work done per node is constant per round, local, and naturally robust due to the inherent randomized nature of gossip. Our main result is an almost‐tight analysis of the time taken for these two randomized processes to converge. We show that in any undirected n‐node graph both processes take rounds to connect every node to all other nodes with high probability, whereas is a lower bound. 
We also study the two-hop walk in directed graphs, and show that it takes time with high probability, and that the worst-case bound is tight for arbitrary directed graphs, whereas Ω(n²) is a lower bound for strongly connected directed graphs. A key technical challenge that we overcome in our work is the analysis of a randomized process that itself results in a constantly changing network leading to complicated dependencies in every round. We discuss implications of our results and their analysis to discovery problems in P2P networks as well as to evolution in social networks. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 48, 565–587, 2016
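The push discovery (triangulation) process is easy to simulate directly from its definition above: each round, every node with at least two neighbors picks two distinct random neighbors and connects them, and the process runs on the graph it is itself changing. A seeded simulation sketch (`push_triangulation` is an illustrative name):

```python
import random

def push_triangulation(adj, rng):
    """Run the push discovery process until the graph is complete.
    adj: list of neighbor sets, mutated in place.  Returns round count."""
    n = len(adj)
    rounds = 0
    while not all(len(adj[v]) == n - 1 for v in range(n)):
        new = []
        for v in range(n):
            if len(adj[v]) >= 2:
                # v "pushes" two random neighbors' addresses to each other
                a, b = rng.sample(sorted(adj[v]), 2)
                new.append((a, b))
        for a, b in new:        # edges appear only after the round ends
            adj[a].add(b)
            adj[b].add(a)
        rounds += 1
    return rounds

adj = [{(v - 1) % 8, (v + 1) % 8} for v in range(8)]   # 8-cycle
r = push_triangulation(adj, random.Random(3))
```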

6.
In this article, we study the effect of numerical integration on the Galerkin meshless method (GMM), applied to approximate solutions of elliptic partial differential equations with essential boundary conditions (EBC). It is well known that it is difficult to impose the EBC on the standard approximation space used in GMM. We have used Nitsche's approach, which was introduced in the context of the finite element method, to impose the EBC. We refer to this approach as the meshless Nitsche's method (MNM). We require that the numerical integration rule satisfy (a) a “discrete Green's identity” on polynomial spaces, and (b) a “conforming condition” involving the additional integration terms introduced by Nitsche's approach. Based on such numerical integration rules, we have obtained a convergence result for MNM with numerical integration, where the shape functions reproduce polynomials of degree k ≥ 1. Though we have presented the analysis for the nonsymmetric MNM, it extends similarly to the symmetric MNM. Numerical results are presented to illustrate the theoretical results and to demonstrate the efficiency of the algorithms. Copyright © 2012 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 30: 265–288, 2014

7.
Constantin Popa, PAMM 2008, 8(1):10823–10824
In this paper we consider three versions of Kovarik's iterative orthogonalization algorithms for approximating the minimal-norm solution of symmetric least squares problems. Although these algorithms converge linearly, in practical applications we observed that too many iterations can dramatically deteriorate an already good approximation. In this respect we analyse the above-mentioned Kovarik-like methods according to the modifications they make to the “machine zero” eigenvalues of the (symmetric) problem matrix. We establish a theoretical, almost optimal formula for the number of iterations necessary to obtain a sufficiently accurate approximation while avoiding this deterioration. Experiments on a collocation discretization of a Fredholm integral equation of the first kind illustrate the efficiency of our considerations. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
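As a flavor of this family of methods, here is the classical Newton–Schulz-type orthogonalization iteration X ← X(3I − XᵀX)/2, a related Kovarik-style scheme shown purely for illustration (the paper's three variants differ in detail). It converges to an orthogonal factor when the singular values of the input lie in (0, √3):

```python
def kovarik_like_orthogonalize(A, iters=30):
    """Newton-Schulz-type iteration X_{k+1} = X_k (3I - X_k^T X_k) / 2.
    Each singular value sigma is mapped to sigma*(3 - sigma^2)/2, which
    drives all singular values in (0, sqrt(3)) to 1, i.e. X becomes
    orthogonal.  Illustrative sketch in pure Python."""
    n = len(A)
    X = [row[:] for row in A]
    for _ in range(iters):
        # G = X^T X
        G = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # M = (3I - G) / 2
        M = [[((3.0 if i == j else 0.0) - G[i][j]) / 2.0 for j in range(n)]
             for i in range(n)]
        X = [[sum(X[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return X
```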

8.
We study and compare natural generalizations of Euclid's algorithm for polynomials with coefficients in a finite field. This leads to gcd algorithms together with their associated continued fraction maps. The gcd algorithms act on triples of polynomials and rely on two-dimensional versions of the Brun, Jacobi–Perron and fully subtractive continued fraction maps, respectively. We first provide a unified framework for these algorithms and their associated continued fraction maps. We then analyse various costs for the gcd algorithms, including the number of iterations and two versions of the bit-complexity, corresponding to two representations of polynomials (the usual and the sparse one). We also study the associated two-dimensional continued fraction maps and prove the invariance and the ergodicity of the Haar measure. We deduce corresponding estimates for the costs of truncated trajectories under the action of these continued fraction maps, obtained thanks to their transfer operators, and we compare the two models (gcd algorithms and their associated continued fraction maps). Proving that the generating functions appear as dominant eigenvalues of the transfer operator allows indeed a fine comparison between the models.
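The baseline that these two-dimensional maps generalize is Euclid's algorithm for polynomials over GF(p). A minimal sketch, with polynomials as coefficient lists (lowest degree first) and p assumed prime:

```python
def poly_divmod(a, b, p):
    """Division with remainder for polynomials over GF(p), p prime.
    Coefficient lists, lowest degree first; b must be nonzero."""
    a = a[:]
    inv = pow(b[-1], p - 2, p)              # inverse of leading coefficient
    q = [0] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = a[-1] * inv % p
        q[shift] = c
        for i, bc in enumerate(b):          # subtract c * x^shift * b
            a[i + shift] = (a[i + shift] - c * bc) % p
        while a and a[-1] == 0:             # drop leading zeros
            a.pop()
    return q, a or [0]

def poly_gcd(a, b, p):
    """Euclid's algorithm over GF(p); returns the monic gcd."""
    while any(b):
        _, r = poly_divmod(a, b, p)
        a, b = b, r
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]
```

For example, gcd(x³ − 1, x² − 1) over GF(7) is x − 1, i.e. the coefficient list [6, 1].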

9.
Khrushchev's formula is the cornerstone of the so-called Khrushchev theory, a body of results which has revolutionized the theory of orthogonal polynomials on the unit circle. This formula can be understood as a factorization of the Schur function for an orthogonal polynomial modification of a measure on the unit circle. No such formula is known in the case of matrix-valued measures. This constitutes the main obstacle to generalize Khrushchev theory to the matrix-valued setting, which we overcome in this paper. It was recently discovered that orthogonal polynomials on the unit circle and their matrix-valued versions play a significant role in the study of quantum walks, the quantum mechanical analogue of random walks. In particular, Schur functions turn out to be the mathematical tool which best codify the return properties of a discrete time quantum system, a topic in which Khrushchev's formula has profound and surprising implications. We will show that this connection between Schur functions and quantum walks is behind a simple proof of Khrushchev's formula via “quantum” diagrammatic techniques for CMV matrices. This does not merely give a quantum meaning to a known mathematical result, since the diagrammatic proof also works for matrix-valued measures. Actually, this path-counting approach is so fruitful that it provides different matrix generalizations of Khrushchev's formula, some of them new even in the case of scalar measures. Furthermore, the path-counting approach allows us to identify the properties of CMV matrices which are responsible for Khrushchev's formula. On the one hand, this helps to formalize and unify the diagrammatic proofs using simple operator theory tools. On the other hand, this is the origin of our main result which extends Khrushchev's formula beyond the CMV case, as a factorization rule for Schur functions related to general unitary operators. © 2016 Wiley Periodicals, Inc.

10.
Let G = (V,E) be an undirected weighted graph on |V| = n vertices and |E| = m edges. A t-spanner of the graph G, for any t ≥ 1, is a subgraph (V, ES), ES ⊆ E, such that the distance between any pair of vertices in the subgraph is at most t times the distance between them in the graph G. Computing a t-spanner of minimum size (number of edges) has been a widely studied and well-motivated problem in computer science. In this paper we present the first linear time randomized algorithm that computes a t-spanner of a given weighted graph. Moreover, the size of the t-spanner computed essentially matches the worst case lower bound implied by a 43-year-old girth lower bound conjecture made independently by Erdős, Bollobás, and Bondy & Simonovits. Our algorithm uses a novel clustering approach that avoids any distance computation altogether. This feature is somewhat surprising since all the previously existing algorithms employ computation of some sort of local or global distance information, which involves growing either breadth first search trees up to Θ(t) levels or full shortest path trees on a large fraction of vertices. The truly local approach of our algorithm also leads to equally simple and efficient algorithms for computing spanners in other important computational environments like distributed, parallel, and external memory. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2007
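To make the t-spanner definition concrete, here is the classical greedy spanner of Althöfer et al., one of the distance-computing predecessors the abstract contrasts itself with (the paper's own algorithm uses clustering and no distance computations). Scan edges by increasing weight and keep an edge only if the current spanner does not already provide a path within the stretch bound:

```python
import heapq

def dijkstra(adj, src, cutoff):
    """Shortest-path distances from src, pruned once distance > cutoff."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def greedy_spanner(n, edges, t):
    """Keep edge (u, v, w) iff the spanner built so far has
    dist(u, v) > t * w.  The result is a t-spanner by construction."""
    adj = [[] for _ in range(n)]
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dijkstra(adj, u, t * w).get(v, float("inf")) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner
```

Checking the stretch bound on every original edge suffices to certify the spanner property for all vertex pairs.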

11.
In algebraic reconstruction of images in computerized tomography we deal with rectangular, large, sparse and ill-conditioned linear systems of equations. In this paper we describe a two-grid algorithm for solving such linear systems, which uses Kaczmarz's projection method as the relaxation step. The correction step is performed with a special “local” aggregation/disaggregation procedure. In this respect, we have to solve a small minimization problem associated to each coarse-grid pixel. The information so obtained is then “distributed” to the neighbouring fine-grid pixels. Some image reconstruction experiments are also presented. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
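The Kaczmarz relaxation used above successively projects the iterate onto the hyperplane of each equation aᵢ·x = bᵢ. A minimal cyclic sketch for dense rows (real tomography systems would exploit sparsity):

```python
def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz iteration for A x = b:
    x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i, cycling over rows."""
    n = len(A[0])
    x = [0.0] * n
    norms = [sum(c * c for c in row) for row in A]
    for _ in range(sweeps):
        for row, bi, nrm in zip(A, b, norms):
            r = (bi - sum(c * xi for c, xi in zip(row, x))) / nrm
            x = [xi + r * c for xi, c in zip(x, row)]
    return x
```

On a consistent system the iterates converge linearly to a solution (starting from zero, to the minimal-norm solution).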

12.
We consider the elastic energy of a hanging drape—a thin elastic sheet, pulled down by the force of gravity, with fine-scale folding at the top that achieves approximately uniform confinement. This example of energy-driven pattern formation in a thin elastic sheet is of particular interest because the length scale of folding varies with height. We focus on how the minimum elastic energy depends on the physical parameters. As the sheet thickness vanishes, the limiting energy is due to the gravitational force and is relatively easy to understand. Our main accomplishment is to identify the “scaling law” of the correction due to positive thickness. We do this by (i) proving an upper bound, by considering the energies of several constructions and taking the best; and (ii) proving an ansatz-free lower bound, which agrees with the upper bound up to a parameter-independent prefactor. The coarsening of folds in hanging drapes has also been considered in the recent physics literature, by using a self-similar construction whose basic cell has been called a “wrinklon.” Our results complement and extend that work by showing that self-similar coarsening achieves the optimal scaling law in a certain parameter regime, and by showing that other constructions (involving lateral spreading of the sheet) do better in other regions of parameter space. Our analysis uses a geometrically linear Föppl-von Kármán model for the elastic energy, and is restricted to the case when Poisson's ratio is 0. © 2016 Wiley Periodicals, Inc.

13.
We give two new versions of the LS category for the set-up of measurable laminations defined by Bermúdez. Both of these versions must be considered as “tangential categories”. The first one, simply called (LS) category, is the direct analogue for measurable laminations of the tangential category of (topological) laminations introduced by Colman Vale and Macías Virgós. For the measurable lamination that underlies any lamination, our measurable tangential category is a lower bound of the tangential category. The second version, called the Λ-category, depends on the choice of a transverse invariant measure Λ. We show that both of these “tangential categories” satisfy appropriate versions of some well known properties of the classical category: the homotopy invariance, existence of a dimensional upper bound, a cohomological lower bound (cup length), and an upper bound given by the critical points of a smooth function. Also, we show possible applications of these invariants to variational problems.

14.
We consider a class of fourth-order nonlinear diffusion equations motivated by Tumblin and Turk's “low-curvature image simplifiers” for image denoising and segmentation. The PDE for the image intensity u is of the form where g(s) = k²/(k² + s²) is a “curvature” threshold and λ denotes a fidelity-matching parameter. We derive a priori bounds for Δu that allow us to prove global regularity of smooth solutions in one space dimension, and a geometric constraint for finite-time singularities from smooth initial data in two space dimensions. This is in sharp contrast to the second-order Perona-Malik equation (an ill-posed problem), on which the original LCIS method is modeled. The estimates also allow us to design a finite difference scheme that satisfies discrete versions of the estimates, in particular, a priori bounds on the smoothness estimator in both one and two space dimensions. We present computational results that show the effectiveness of such algorithms. Our results are connected to recent results for fourth-order lubrication-type equations and the design of positivity-preserving schemes for such equations. This connection also has relevance for other related fourth-order imaging equations. © 2004 Wiley Periodicals, Inc.

15.
This paper extends some adaptive schemes that have been developed for the Random Walk Metropolis algorithm to more general versions of the Metropolis-Hastings (MH) algorithm, particularly to the Metropolis Adjusted Langevin algorithm of Roberts and Tweedie (1996). Our simulations show that the adaptation drastically improves the performance of such MH algorithms. We study the convergence of the algorithm. Our proofs are based on a new approach to the analysis of stochastic approximation algorithms, built on the theory of mixingales.
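A one-dimensional sketch of the kind of scheme involved: MALA proposals with a Robbins–Monro adaptation of the step size toward the classical 0.574 acceptance target. This is an illustrative adaptive MH sampler under stated assumptions, not the paper's specific algorithm (whose adaptation and proofs differ):

```python
import math
import random

def adaptive_mala(log_pi, grad_log_pi, x0, n_samples, rng, target_acc=0.574):
    """Adaptive Metropolis Adjusted Langevin in 1-D.  The proposal is
    y ~ N(x + 0.5*s^2*grad_log_pi(x), s^2); log step size adapts via
    Robbins-Monro toward the target acceptance rate."""
    x, log_sigma = x0, 0.0
    samples, accepts = [], 0
    for k in range(1, n_samples + 1):
        s = math.exp(log_sigma)
        mu = x + 0.5 * s * s * grad_log_pi(x)          # Langevin drift
        y = mu + s * rng.gauss(0.0, 1.0)
        mu_back = y + 0.5 * s * s * grad_log_pi(y)
        # log acceptance ratio: target ratio times proposal density ratio
        la = (log_pi(y) - log_pi(x)
              - (x - mu_back) ** 2 / (2 * s * s)
              + (y - mu) ** 2 / (2 * s * s))
        acc = math.exp(min(la, 0.0))
        if rng.random() < acc:
            x = y
            accepts += 1
        log_sigma += (acc - target_acc) / k ** 0.6     # Robbins-Monro step
        samples.append(x)
    return samples, accepts / n_samples

# sample a standard normal target, starting away from the mode
samples, rate = adaptive_mala(lambda t: -0.5 * t * t, lambda t: -t,
                              3.0, 5000, random.Random(0))
```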

16.
The well-known lower bound on the independence number of a graph due to Caro (Technical Report, Tel-Aviv University, 1979) and Wei (Technical Memorandum, TM 81-11217-9, Bell Laboratories, 1981) can be established as a performance guarantee of two natural and simple greedy algorithms or of a simple randomized algorithm. We study possible generalizations and improvements of these approaches using vertex weights and discuss conditions on so-called potential functions pG: V(G) → ℕ0 defined on the vertex set of a graph G under which suitably modified versions of the greedy algorithms applied to G yield independent sets I satisfying a corresponding lower bound. We provide examples of such potentials, which lead to bounds improving the bound due to Caro and Wei. Furthermore, suitably adapting the randomized algorithm, we give a short proof of Thiele's lower bound on the independence number of a hypergraph (T. Thiele, J Graph Theory 30 (1999), 213–221).
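One of the two simple greedy algorithms behind the Caro–Wei bound is minimum-degree greedy: repeatedly take a vertex of minimum degree and delete its closed neighborhood. It always returns an independent set of size at least Σ_v 1/(d(v)+1). A sketch of the unweighted baseline (the paper's potential-function versions modify the selection rule):

```python
def min_degree_greedy(adj):
    """Greedy independent set guaranteeing the Caro-Wei bound
    |I| >= sum_v 1/(d(v)+1).  adj: dict vertex -> set of neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}   # local copy
    indep = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))   # min degree, ties by id
        indep.append(v)
        dead = adj[v] | {v}                            # closed neighborhood
        for u in dead:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= dead
    return indep
```

On the 5-cycle every vertex has degree 2, so the bound gives 5/3, i.e. an independent set of size at least 2.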

17.
“Classical” First Order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov’s optimal algorithm of smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the domain of the problem should admit a “good proximal setup”. The latter essentially means that (1) the problem domain should satisfy certain geometric conditions of “favorable geometry”, and (2) the practical use of these methods is conditioned by our ability to compute the proximal transformation at moderate cost at each iteration. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with nuclear norm constraints, where the numerical cost of computing the proximal transformation becomes prohibitive at large scale. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the accuracy certificates supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a “good proximal setup” for the primal problem but requires the problem domain to admit a Linear Optimization oracle: the ability to efficiently maximize a linear form on the domain of the primal problem.
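To illustrate the Linear Optimization oracle regime (as opposed to a proximal setup), here is the classical conditional-gradient (Frank–Wolfe) template on a toy problem over the probability simplex, where the LO oracle simply returns the vertex with the smallest gradient coordinate. This is a sketch of the oracle model only, not the paper's dual certificate scheme:

```python
def frank_wolfe_simplex(grad, x0, iters):
    """Conditional-gradient method over the probability simplex.
    Each step needs only an LO oracle (minimize a linear form over the
    domain); no projection or proximal computation is ever performed."""
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        j = min(range(len(x)), key=lambda i: g[i])   # LO oracle: vertex e_j
        step = 2.0 / (k + 2.0)                       # standard FW step size
        x = [(1.0 - step) * xi for xi in x]
        x[j] += step                                 # move toward e_j
    return x

# minimize ||x - c||^2 over the simplex; optimum shifts c up by (1-sum(c))/3
c = [0.2, 0.5, 0.1]
x = frank_wolfe_simplex(lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)],
                        [1.0, 0.0, 0.0], 20000)
```

The standard O(1/k) guarantee makes the objective gap after 20000 iterations provably below 10⁻³ here.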

18.
In a recently published paper “A note on “A novel approach to multi attribute group decision making based on trapezoidal interval type-2 fuzzy soft sets””, Khalil and Hassan pointed out that assertions (3) and (4) of Theorem 3.2 in our previous paper “A novel approach to multi attribute group decision making based on trapezoidal interval type-2 fuzzy soft sets” are not true [2]. Furthermore, they introduced the notions of a generalized trapezoidal interval type-2 fuzzy soft subset and a generalized trapezoidal interval type-2 fuzzy soft equal and used these two notions to correct the flaw in assertions (3) and (4) of Theorem 3.2 in our previous paper. In this paper, we show by a counterexample that Khalil and Hassan's correction is incorrect and provide the modified versions of assertions (3) and (4) of Theorem 3.2, along with a strict proof. In addition, Khalil and Hassan pointed out by a counterexample that assertions (3)–(6) of Theorem 3.5 in our paper are not true and proposed the corrections of those assertions. In this paper, we show that Khalil and Hassan's counterexample and corrections are incorrect and provide a new example to verify the inaccuracies of assertions (3) and (5) of Theorem 3.5 in our paper. Moreover, we offer the modified versions of assertions (3) and (5) of Theorem 3.5 and prove them. Finally, Khalil and Hassan's statement that assertions (4) and (6) of Theorem 3.5 in our previous paper are not true is proven to be incorrect, i.e. assertions (4) and (6) of Theorem 3.5 in our previous paper are true.

19.
A comparison of sequential Delaunay triangulation algorithms
This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer's divide and conquer algorithm, Fortune's sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri and Murota, a new bucketing-based algorithm described in this paper, and Devillers's version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber's convex hull based algorithm.

Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.
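One high-level primitive shared by essentially all of the compared algorithms is the incircle test: a point d lies inside the circumcircle of a counterclockwise triangle abc iff a 3×3 determinant is positive. A sketch in plain floating point (production code, as the compared implementations show, needs exact or adaptive-precision arithmetic for robustness):

```python
def in_circle(a, b, c, d):
    """Incircle determinant: positive iff d is strictly inside the circle
    through a, b, c (given in counterclockwise order), negative if outside,
    zero if cocircular.  Rows are the lifted, d-translated points."""
    m = []
    for px, py in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        m.append([dx, dy, dx * dx + dy * dy])
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```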


20.
We consider the problem of scheduling a sequence of packets over a linear network, where every packet has a source and a target, as well as a release time and a deadline by which it must arrive at its target. The model we consider is bufferless, where packets are not allowed to be buffered in nodes along their paths other than at their source. This model applies to optical networks where opto-electronic conversion is costly, and packets mostly travel through bufferless hops. The offline version of this problem was previously studied in M. Adler et al. (2002) [3]. In this paper we study the online version of the problem, where we are required to schedule the packets without knowledge of future packet arrivals. We use competitive analysis to evaluate the performance of our algorithms. We present the first online algorithms for several versions of the problem. For the problem of throughput maximization, where all packets have uniform weights, we give an algorithm with a logarithmic competitive ratio, and present some lower bounds. For other weight functions, we show algorithms that achieve optimal competitive ratios.
