Similar Literature
20 similar documents retrieved (search time: 109 ms)
1.
We give an overview of numerous algorithms in computational D-module theory, together with the theoretical background and the implementation in the computer algebra system Singular. We discuss new approaches to the computation of Bernstein operators, of the logarithmic annihilator of a polynomial, and of annihilators of rational functions as well as of complex powers of polynomials. We analyze algorithms for local Bernstein–Sato polynomials, as well as algorithms recovering any kind of Bernstein–Sato polynomial from partial knowledge of its roots. We also address a novel way to compute the Bernstein–Sato polynomial of an affine variety algorithmically. All of the carefully selected nontrivial examples we present have been computed with our implementation. Finally, we address applications such as the computation of a zeta-function for certain integrals and the detection of algebraic dependence between pairwise commuting elements.
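For orientation, here is the classical worked instance of the Bernstein–Sato functional equation P(s) f^{s+1} = b(s) f^s, a standard textbook example rather than one of the paper's computed ones: for f = x_1^2 + ... + x_n^2 the Bernstein operator is P = Δ/4, since

```latex
\tfrac{1}{4}\,\Delta f^{s+1}
  = \tfrac{1}{4}\sum_{i=1}^{n}\partial_i\!\left(2(s+1)\,x_i f^{s}\right)
  = (s+1)\left(s+\tfrac{n}{2}\right) f^{s},
\qquad\text{so}\qquad
b_f(s) = (s+1)\left(s+\tfrac{n}{2}\right).
```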

2.
In this paper, we focus on a flexible parametric inference method: the parametric triple I method, obtained by combining Schweizer–Sklar operators with the triple I principles for fuzzy reasoning. Because the Schweizer–Sklar parameter m reflects the interaction between propositions in the reasoning process, the new parameterized triple I algorithms are closer to everyday human reasoning. Properties of the new algorithms such as reductivity, continuity, and approximation are also discussed. It is shown that several existing results are special cases of the new algorithms, and that, owing to the variability of the parameter m, the new algorithms offer excellent flexibility in reasoning processes.
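As a concrete illustration of the ingredients, the sketch below implements the Schweizer–Sklar t-norm family (for parameter m > 0), its residual implication, and the standard discretized triple-I solution of fuzzy modus ponens, B*(y) = sup_x T_m(A*(x), R(A(x), B(y))). This is a minimal sketch under those conventions, with illustrative fuzzy sets; it is not the paper's implementation, and the handling of other ranges of m is omitted.

```python
import numpy as np

def ss_tnorm(a, b, m):
    """Schweizer-Sklar t-norm T_m(a,b) = max(a^m + b^m - 1, 0)^(1/m) for m > 0
    (m -> 0 recovers the product t-norm; other ranges of m need separate cases)."""
    return np.maximum(a ** m + b ** m - 1.0, 0.0) ** (1.0 / m)

def ss_residuum(a, b, m):
    """Residual implication R(a,b) = sup{c : T_m(a,c) <= b}, m > 0."""
    return np.where(a <= b, 1.0, (1.0 - a ** m + b ** m) ** (1.0 / m))

def triple_i_fmp(A, Astar, B, m):
    """Discretized triple-I solution of fuzzy modus ponens:
    B*(y) = sup_x T_m(A*(x), R(A(x), B(y)))."""
    R = ss_residuum(A[:, None], B[None, :], m)   # |X| x |Y| implication table
    return ss_tnorm(Astar[:, None], R, m).max(axis=0)

# illustrative triangular fuzzy sets on a common grid
X = np.linspace(0.0, 1.0, 101)
A     = np.clip(1 - 4 * np.abs(X - 0.3), 0, 1)   # antecedent "about 0.3"
Astar = np.clip(1 - 4 * np.abs(X - 0.35), 0, 1)  # observed premise, slightly shifted
B     = np.clip(1 - 4 * np.abs(X - 0.7), 0, 1)   # consequent "about 0.7"
Bstar = triple_i_fmp(A, Astar, B, m=2.0)         # inferred conclusion on the grid
```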

3.
The Bernstein–Sato polynomial of a hypersurface is an important object with many applications. However, its computation is hard, as a number of open questions and challenges indicate. In this paper we propose a family of algorithms called checkRoot for the optimized check of whether a given rational number is a root of the Bernstein–Sato polynomial and, in the affirmative case, the computation of its multiplicity. These algorithms are used in a new approach to computing the global or local Bernstein–Sato polynomial, and the b-function of a holonomic ideal with respect to a weight vector. They can be applied in numerous situations where a multiple of the Bernstein–Sato polynomial can be established; namely, such a multiple can be obtained by means of embedded resolution, for topologically equivalent singularities, or using the formula of A'Campo and spectral numbers. We also present approaches to the logarithmic comparison problem and the intersection homology D-module. Several applications are presented, as well as solutions to some challenges which were intractable with classical methods. One of the main applications is the computation of a stratification of affine space with the local b-function being constant on each stratum. Notably, the algorithm we propose does not employ primary decomposition. Our results can also be applied to the computation of Bernstein–Sato polynomials for varieties. The examples in the paper have been computed with our implementation of the methods described in Singular:Plural.

4.
We study two widely used algorithms for the Potts model on rectangular subsets of the hypercubic lattice ${\mathbb{Z}^{d}}$, namely heat-bath dynamics and the Swendsen–Wang algorithm, and prove that, under certain circumstances, the mixing of these algorithms is torpid, i.e., slow. In particular, we show that for heat-bath dynamics throughout the region of phase coexistence, and for the Swendsen–Wang algorithm at the transition point, the mixing time in a box of side length L with periodic boundary conditions has upper and lower bounds which are exponential in L^{d-1}. This work provides the first upper bound of this form for the Swendsen–Wang algorithm, and gives lower bounds for both algorithms which significantly improve the previous lower bounds, which were exponential in L/(log L)^2.
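For reference, a minimal sketch of one Swendsen–Wang update for the q-state Potts model on an L×L torus (a simplified two-dimensional instance of the dynamics analyzed above), with the standard bond probability p = 1 - exp(-beta) for equal-spin neighbors and a union-find to label clusters; the coupling convention and parameter values are illustrative assumptions.

```python
import numpy as np

def swendsen_wang_sweep(spins, beta, q, rng):
    """One Swendsen-Wang update for the q-state Potts model on an LxL torus:
    open bonds between equal neighboring spins with probability 1 - exp(-beta),
    then recolor each resulting cluster uniformly at random."""
    L = spins.shape[0]
    parent = np.arange(L * L)

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    p = 1.0 - np.exp(-beta)           # bond probability (coupling J = 1 assumed)
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):      # right/down neighbors, periodic
                u, v = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[u, v] and rng.random() < p:
                    union(x * L + y, u * L + v)

    new_colour = {}                   # independent uniform colour per cluster
    out = np.empty_like(spins)
    for x in range(L):
        for y in range(L):
            r = find(x * L + y)
            if r not in new_colour:
                new_colour[r] = rng.integers(q)
            out[x, y] = new_colour[r]
    return out

rng = np.random.default_rng(0)
spins = rng.integers(3, size=(16, 16))           # q = 3, L = 16 (illustrative)
for _ in range(100):
    spins = swendsen_wang_sweep(spins, beta=1.0, q=3, rng=rng)
```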

5.
The discrete prolate spheroidal sequences (DPSSs) provide an efficient representation for discrete signals that are perfectly time-limited and nearly band-limited. Due to the high computational complexity of projecting onto the DPSS basis, also known as the Slepian basis, this representation is often overlooked in favor of the fast Fourier transform (FFT). We show that there exist fast constructions for computing approximate projections onto the leading Slepian basis elements. The complexity of the resulting algorithms is comparable to that of the FFT, and scales favorably as the quality of the desired approximation is increased. In the process of bounding the complexity of these algorithms, we also establish new nonasymptotic results on the eigenvalue distribution of discrete time-frequency localization operators. We then demonstrate how these algorithms allow us to efficiently compute the solution to certain least-squares problems that arise in signal processing. We also provide simulations comparing these fast, approximate Slepian methods to exact Slepian methods as well as to traditional FFT-based methods.
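The baseline that the paper's fast constructions improve on can be sketched directly: form the leading K Slepian basis vectors with scipy and project by a dense matrix product at O(NK) cost. A minimal sketch; the values of N, NW and K below are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

N, NW, K = 1024, 4.0, 8              # length, time-bandwidth product, basis size
S = dpss(N, NW, Kmax=K)              # K x N array of leading Slepian basis vectors

t = np.arange(N)
x = np.cos(2 * np.pi * 0.002 * t)    # a nearly band-limited test signal
x += 0.1 * np.random.default_rng(0).standard_normal(N)

coeffs = S @ x                       # projection onto the leading Slepian elements
x_hat = S.T @ coeffs                 # reconstruction from K coefficients, O(NK) cost
```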

6.
We define and study the Plancherel–Hecke probability measure on Young diagrams; the Hecke algorithm of Buch–Kresch–Shimozono–Tamvakis–Yong is interpreted as a polynomial-time exact sampling algorithm for this measure. Using the results of Thomas–Yong on jeu de taquin for increasing tableaux, a symmetry property of the Hecke algorithm is proved, in terms of the longest strictly increasing/decreasing subsequences of words. This parallels classical theorems of Schensted and of Knuth on the Schensted and Robinson–Schensted–Knuth algorithms, respectively. We investigate, and conjecture about, the limiting typical shape of the measure, in analogy with the work of Vershik–Kerov, Logan–Shepp and others on the "longest increasing subsequence problem" for permutations. We also include a related extension of work of Aldous–Diaconis on patience sorting. Together, these results provide a new rationale for the study of increasing tableau combinatorics, distinct from the original algebraic-geometric ones concerning K-theoretic Schubert calculus.
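The statistic appearing in the symmetry theorem, the length of the longest strictly increasing subsequence, is computable in O(n log n) by patience sorting; a minimal sketch of that standard technique (not of the Hecke insertion algorithm itself):

```python
from bisect import bisect_left

def longest_strictly_increasing(word):
    """Length of the longest strictly increasing subsequence via patience sorting."""
    piles = []                           # piles[k] = smallest possible tail of an
    for letter in word:                  # increasing subsequence of length k+1
        k = bisect_left(piles, letter)   # leftmost pile whose top is >= letter
        if k == len(piles):
            piles.append(letter)
        else:
            piles[k] = letter
    return len(piles)

assert longest_strictly_increasing([3, 1, 4, 1, 5, 9, 2, 6]) == 4   # e.g. 1,4,5,9
```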

7.
Large classes of self-similar (isospectral) flows can be viewed as continuous analogues of certain matrix eigenvalue algorithms. In particular there exist families of flows associated with the QR, LR, and Cholesky eigenvalue algorithms. This paper uses Lie theory to develop a general theory of self-similar flows which includes the QR, LR, and Cholesky flows as special cases. Also included are new families of flows associated with the SR and HR eigenvalue algorithms. The basic theory produces analogues of unshifted, single-step eigenvalue algorithms, but it is also shown how the theory can be extended to include flows which are continuous analogues of shifted and multiple-step eigenvalue algorithms.
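The discrete dynamical system that the QR flow interpolates is the unshifted, single-step QR iteration A_{k+1} = R_k Q_k = Q_k^T A_k Q_k, which is isospectral since each step is a similarity transform. A minimal sketch with illustrative input:

```python
import numpy as np

def qr_iteration(A, steps=200):
    """Unshifted single-step QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k.
    Each step is a similarity transform, so the spectrum is preserved; for
    symmetric input the iterates converge toward a diagonal matrix."""
    A = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(np.sort(np.diag(qr_iteration(A))))   # approximately the eigenvalues of A
print(np.sort(np.linalg.eigvalsh(A)))      # reference values
```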

8.
9.
We discuss a generalization of the Cohn–Umans method, a potent technique developed for studying the bilinear complexity of matrix multiplication by embedding matrices into an appropriate group algebra. We investigate how the Cohn–Umans method may be used for bilinear operations other than matrix multiplication, with algebras other than group algebras, and we relate it to Strassen's tensor rank approach, the traditional framework for investigating bilinear complexity. To demonstrate the utility of the generalized method, we apply it to find the fastest algorithms for forming structured matrix–vector products, the basic operation underlying iterative algorithms for structured matrices. The structures we study include Toeplitz, Hankel, circulant, symmetric, skew-symmetric, f-circulant, block Toeplitz–Toeplitz block, triangular Toeplitz, Toeplitz-plus-Hankel, and sparse/banded/triangular matrices. Except for the case of skew-symmetric matrices, for which we have only upper bounds, the algorithms derived using the generalized Cohn–Umans method are in all instances the fastest possible in the sense of having minimum bilinear complexity. We also apply this framework to a few other bilinear operations, including matrix–matrix products, commutators, and simultaneous matrix products, and briefly discuss the relation between tensor nuclear norm and numerical stability.
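The simplest instance of these structured matrix–vector products is the circulant case: circulant matrices form the group algebra of Z/nZ, which the DFT diagonalizes, so the product costs O(n log n). A minimal sketch with a dense-matrix check, shown only to illustrate the flavor of the embedding (standard technique, not the paper's derivation):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x.
    Circulants are the group algebra of Z/nZ; the DFT diagonalizes them,
    so the product costs O(n log n) instead of O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([1.0, 2.0, 3.0, 4.0])       # first column of C
x = np.array([1.0, 0.0, -1.0, 2.0])
C = np.array([[c[(i - j) % 4] for j in range(4)] for i in range(4)])  # dense check
assert np.allclose(circulant_matvec(c, x), C @ x)
```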

10.
The Ball basis was introduced for cubic polynomials by Ball, and two different generalizations to polynomials of higher degree m have been called the Said–Ball and the Wang–Ball basis, respectively. In this paper, we analyze some shape-preserving and stability properties of these bases. We prove that the Wang–Ball basis is strictly monotonicity preserving for all m. However, in contrast with the Said–Ball basis, it is not geometrically convexity preserving and is not totally positive for m>3. We prove that the Said–Ball basis is better conditioned than the Wang–Ball basis, and we include a stable conversion between the two generalized Ball bases. The Wang–Ball basis has an evaluation algorithm with linear complexity. We perform an error analysis of the evaluation algorithms of both bases and compare them with other algorithms for polynomial evaluation.

11.
We introduce a new family of O(N log N) best-basis search algorithms for functions of more than one variable. These algorithms search the collection of anisotropic wavelet packet and cosine packet bases and output a minimum-entropy basis for a given function. The algorithms are constructed after treating the model problem of computing best Walsh packet bases. Several intermediate algorithms for conducting mixed isotropic/anisotropic best-basis searches in the function's various coordinate directions are also presented.
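A minimal sketch of the underlying Coifman–Wickerhauser best-basis dynamic program in one dimension, using a Haar packet split as a stand-in for the paper's anisotropic multivariate wavelet/cosine packet transforms; the cost functional and the Node/haar_children helpers are illustrative assumptions, not the paper's code.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Node:
    coeffs: np.ndarray
    level: int

def shannon_cost(c):
    """Unnormalized Shannon entropy cost functional on a coefficient array."""
    p = c.astype(float) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def haar_children(node, max_level=4):
    """Split a node into (smooth, detail) Haar packet children; a stand-in
    for the anisotropic packet transforms of the paper."""
    c = node.coeffs
    if node.level >= max_level or len(c) < 2:
        return None
    s = (c[0::2] + c[1::2]) / np.sqrt(2)
    d = (c[0::2] - c[1::2]) / np.sqrt(2)
    return [Node(s, node.level + 1), Node(d, node.level + 1)]

def best_basis(node):
    """Bottom-up best-basis search: keep a node unless recursively splitting
    it achieves strictly smaller total entropy cost."""
    kids = haar_children(node)
    own = shannon_cost(node.coeffs)
    if kids is None:
        return [node], own
    basis, cost = [], 0.0
    for kid in kids:
        b, c = best_basis(kid)
        basis += b
        cost += c
    return ([node], own) if own <= cost else (basis, cost)

sig = np.sin(np.linspace(0, 8 * np.pi, 64))
nodes, cost = best_basis(Node(sig, 0))     # minimum-entropy packet basis of sig
```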

12.
The conjugate gradient method is a powerful scheme for solving unconstrained optimization problems, especially large-scale ones. However, the convergence rate of the method without restarts is only linear. In this paper, we build on an idea contained in [16] and present a new restart technique for this method. Given an arbitrary descent direction d_t and the gradient g_t, our key idea is to use the BFGS updating formula to provide a symmetric positive definite matrix P_t such that d_t = -P_t g_t, and then to define the conjugate gradient iteration in the transformed space. Two conjugate gradient algorithms are designed based on the new restart technique. Their global convergence is proved under mild assumptions on the objective function. Numerical experiments are also reported, which show that the two algorithms are comparable to the Beale–Powell restart algorithm.
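The building block of this construction is the inverse BFGS update, which preserves symmetric positive definiteness whenever s^T y > 0; the particular choice of the pair (s, y) that makes -P_t g_t reproduce a given descent direction d_t follows the paper and is not reproduced here. A minimal sketch with illustrative values:

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Inverse BFGS update H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,
    rho = 1/(s^T y). If H is SPD and s^T y > 0, then H+ is again SPD, the
    property the restart technique relies on for P_t with d_t = -P_t g_t."""
    rho = 1.0 / float(s @ y)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

H = np.eye(2)
s = np.array([1.0, 0.5])     # step just taken (hypothetical values)
y = np.array([0.8, 0.3])     # corresponding gradient change, s^T y > 0
H = bfgs_inverse_update(H, s, y)
g = np.array([0.2, -0.4])    # current gradient
d = -H @ g                   # direction in the transformed space
assert g @ d < 0             # SPD H guarantees a descent direction
```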

13.
We propose a new class of primal–dual methods for linear optimization (LO). Using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of O(n^{4/(4+ρ)} log(n/ε)) iterations, where ρ∈[0,2] is a parameter used in the system defining the search direction. If ρ=0, our results reproduce the well-known complexity of the standard primal–dual Newton method for LO. At each iteration, our algorithm needs only to solve a system of linear equations. An extension of the algorithms to semidefinite optimization is also presented.

14.
Computational Geometry, 2000, 15(1–3): 51–68
This paper presents the Hierarchical Walk, or H-Walk algorithm, which maintains the distance between two moving convex bodies by exploiting both motion coherence and hierarchical representations. For convex polygons, we prove that H-Walk improves on the classic Lin–Canny and Dobkin–Kirkpatrick algorithms. We have implemented H-Walk for moving convex polyhedra in three dimensions. Experimental results indicate that, unlike previous incremental distance computation algorithms, H-Walk adapts well to variable coherence in the motion and provides consistent performance.

15.
An image consists of many discrete pixels with different greyness levels, which can be quantified by greyness values. The greyness value at a pixel can also be represented by an integral, as the mean of a continuous greyness function over a small pixel region. Based on this idea, discrete images can be produced by numerical integration, and several efficient algorithms have been developed to convert images under transformations. Among these algorithms, the combination of splitting–shooting–integrating methods (CSIM) is the most promising because no solutions of nonlinear equations are required for the inverse transformation. The CSIM was proposed in [6] to handle images and patterns under a cycle of transformations T^{-1}T, where T is a nonlinear transformation. When a pixel region in two dimensions is split into N^2 subpixels, the convergence rate of pixel greyness under CSIM is proven in [8] to be only O(1/N). In [10], convergence rates of O_p(1/N^{1.5}) in probability, and O_p(1/N^2) in probability using a local partition, are established. The CSIM is well suited to binary images and images with few greyness levels due to its simplicity. However, for images with many (e.g., 256) greyness levels, the CSIM still needs considerable CPU time, since a rather large division number N is required. In this paper, a partition technique for numerical integration is proposed to evaluate carefully the overlaps between the transformed subpixel regions and the standard square pixel regions. This technique is employed to evolve the CSIM so that the convergence rate O(1/N^2) of the greyness solutions can be achieved. The new combinations are simple to carry out for image transformations because, again, no solutions of nonlinear equations are involved. Computations for real 256×256 images with 256 greyness levels show that N=4 is good enough for real applications, which clearly demonstrates the validity and effectiveness of the new algorithms.
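A hedged sketch of the splitting–shooting step described above: each source pixel is split into N×N subpixels, each subpixel centre is shot through the transformation T, and its greyness is accumulated into the target pixel where it lands; averaging approximates the greyness integral, and no inverse of T is needed. The transformation and image sizes are illustrative assumptions.

```python
import numpy as np

def split_shoot(img, T, out_shape, N=4):
    """Splitting-shooting: approximate the greyness integral of the transformed
    image by averaging N*N subpixel samples shot through T; no inverse of T is
    ever required, the property that motivates CSIM."""
    H, W = img.shape
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    offs = (np.arange(N) + 0.5) / N                  # subpixel centre offsets
    for i in range(H):
        for j in range(W):
            for du in offs:
                for dv in offs:
                    u, v = T(i + du, j + dv)         # shoot through T
                    ti, tj = int(np.floor(u)), int(np.floor(v))
                    if 0 <= ti < out_shape[0] and 0 <= tj < out_shape[1]:
                        acc[ti, tj] += img[i, j]
                        cnt[ti, tj] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

theta, c = 0.3, 16.0                                 # illustrative T: rotation
T = lambda x, y: (c + np.cos(theta) * (x - c) - np.sin(theta) * (y - c),
                  c + np.sin(theta) * (x - c) + np.cos(theta) * (y - c))
out = split_shoot(np.random.default_rng(1).random((32, 32)), T, (32, 32), N=4)
```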

16.
A subgraph F of a graph G is called a perfectly matchable subgraph if F contains a set of independent edges covering all the vertices in F. The convex hull of the incidence vectors of perfectly matchable subgraphs of G is a 0–1 polytope. We characterize the adjacency of vertices on such polytopes. We also show that when G is bipartite, the separation problem for such polytopes can be solved by maximum-flow algorithms.
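The algorithmic core of the bipartite case is that maximum matchings are computable by flow/augmentation; a minimal augmenting-path sketch (the polytope-separation reduction itself is not reproduced here):

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching by repeated augmenting-path search;
    adj[u] lists the right-vertices adjacent to left-vertex u. O(V*E),
    enough to illustrate that matchability is decided by augmentation/flow."""
    match_r = [-1] * n_right           # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size, match_r

size, match_r = max_bipartite_matching([[0, 1], [0], [1, 2]], 3)
assert size == 3                       # a perfect matching exists here
```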

17.
We develop analysis-based fast and accurate direct algorithms for several biharmonic problems in a unit disk, derived directly from the Green's functions of these problems, and compare the numerical results with those of the "decomposition" algorithms (see Ghosh and Daripa, IMA J. Numer. Anal. 36(2), 824–850 [17]), in which the biharmonic problems are first decomposed into lower-order problems, most often either into two Poisson problems or into two Poisson problems and a homogeneous biharmonic problem. One of the steps in the "decomposition" algorithm discussed in Ghosh and Daripa (IMA J. Numer. Anal. 36(2), 824–850 [17]) for solving certain biharmonic problems uses the "direct" algorithm, without which the problem cannot be solved. Using the classical Green's function approach for these biharmonic problems, their solutions are represented in terms of singular integrals in the complex z-plane (the physical plane) involving the boundary conditions explicitly. Analysis of these singular integrals using the FFT and recursive relations (RR) in Fourier space leads to the fast algorithms, which are called FFTRR-based algorithms. These algorithms need no special treatment of the coordinate singularity at the origin, as is often the case when solving these problems using finite difference methods in polar coordinates. They have other desirable properties, such as ease of implementation and being parallel by construction. Moreover, they have O(log N) complexity per grid point, where N^2 is the total number of grid points, with a very low constant behind this order estimate. Performance of these algorithms is shown on several test problems, and they are applied to solving viscous flow problems at low and moderate Reynolds numbers, with numerical results presented.
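The "decomposition" route the abstract contrasts with rests on the identity Δ²u = Δ(Δu), so a biharmonic problem splits into coupled Poisson problems; a minimal statement of this standard splitting (not the paper's FFTRR construction):

```latex
\Delta^2 u = f \ \text{in } D
\quad\Longleftrightarrow\quad
\begin{cases}
\Delta v = f & \text{in } D,\\
\Delta u = v & \text{in } D,
\end{cases}
```

with the boundary conditions distributed between the two Poisson problems; for data such as u and ∂u/∂n prescribed on ∂D, the two problems couple, which is where the additional homogeneous biharmonic solve mentioned above enters.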

18.
In this article, Swendsen–Wang–Wolff algorithms are extended to simulate spatial point processes with symmetric and stationary interactions. Convergence of these algorithms is considered. Some further generalizations of the algorithms are discussed. The ideas presented in this article can also be useful in handling some large and complicated systems.

19.
In this paper we study the quadratic bottleneck knapsack problem (QBKP) from an algorithmic point of view. QBKP is shown to be NP-hard, and it does not admit polynomial-time ε-approximation algorithms for any ε>0 (unless P=NP). We then provide exact and heuristic algorithms to solve the problem and also identify polynomially solvable special cases. Results of extensive computational experiments are reported, which show that our algorithms can solve QBKP instances of reasonably large size and produce good-quality solutions very quickly. Several variations of QBKP are also discussed.

20.
The 0–1 knapsack problem [1] is a well-known NP-complete problem. Various algorithms in the literature attack it, two of which are of specific interest. One is a pseudo-polynomial algorithm of order O(nK), K being the target of the problem; this algorithm performs unsatisfactorily as the target grows, and its complexity may effectively become exponential in that case. The other scheme is a fully polynomial-time approximation scheme (FPTAS), whose complexity is also polynomial. The present paper suggests a probabilistic heuristic, an evolutionary scheme accompanied by the necessary statistical formulation and its theoretical justification. We identify the parameters responsible for the performance of the evolutionary scheme, which in turn keeps open the option of improving it.
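The O(nK) scheme referred to above is the classic pseudo-polynomial table-filling dynamic program; a minimal feasibility version over weights with target K (function name illustrative), whose cost visibly blows up as K grows:

```python
def hits_target(weights, K):
    """O(nK) pseudo-polynomial DP for the 0-1 knapsack feasibility question:
    can some subset of `weights` sum exactly to the target K?
    reach[k] becomes True once a subset with total k has been found."""
    reach = [False] * (K + 1)
    reach[0] = True
    for w in weights:
        for k in range(K, w - 1, -1):   # descend so each item is used at most once
            if reach[k - w]:
                reach[k] = True
    return reach[K]

assert hits_target([3, 34, 4, 12, 5, 2], 9)        # e.g. 4 + 5
assert not hits_target([3, 34, 4, 12, 5, 2], 30)   # no subset sums to 30
```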
