Similar Articles
A total of 20 similar articles were found.
1.
We prove a more general version of the Grüss inequality by using the recent theory of combined dynamic derivatives on time scales and the more general notions of diamond-α derivative and integral. For the particular case α = 1, one obtains the delta-integral Grüss inequality on time scales of M. Bohner and T. Matthews [5]; for α = 0 a nabla-integral Grüss inequality is derived. If we further restrict ourselves by fixing the time scale to the real (or integer) numbers, then the standard continuous (discrete) inequalities are obtained.
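For orientation, the classical continuous inequality recovered when the time scale is ℝ is the following standard statement (taken from the textbook literature, not quoted from the paper itself): if f and g are integrable on [a, b] with m₁ ≤ f ≤ M₁ and m₂ ≤ g ≤ M₂, then

```latex
\left|\frac{1}{b-a}\int_a^b f(x)g(x)\,dx
      -\frac{1}{b-a}\int_a^b f(x)\,dx\cdot\frac{1}{b-a}\int_a^b g(x)\,dx\right|
\le \frac{1}{4}\,(M_1-m_1)(M_2-m_2),
```

and the constant 1/4 is best possible.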

2.
In this article, we study the Reidemeister torsion and the analytic torsion of the m-dimensional disc, with the Ray and Singer homology basis (Adv Math 7:145–210, 1971). We prove that the Reidemeister torsion coincides with a power of the volume of the disc. We study the additional terms arising in the analytic torsion due to the boundary, using generalizations of the Cheeger–Müller theorem. We use a formula proved by Brüning and Ma (GAFA 16:767–873, 2006) that predicts a new anomaly boundary term besides the known term proportional to the Euler characteristic of the boundary (Lück, J Diff Geom 37:263–322, 1993). Some of our results extend to the case of the cone over a sphere; in particular, we evaluate directly the analytic torsion for a cone over the circle and over the two-sphere. We compare the results obtained in the low-dimensional cases. We also consider a different formula for the boundary term given by Dai and Fang (Asian J Math 4:695–714, 2000), and we compare the results. The results of this work were announced in Hartmann et al. (BUMI 2:529–533, 2009).

3.
In our recent paper (Douglass et al. 2011), we claimed that both the group algebra of a finite Coxeter group W and the Orlik–Solomon algebra of W can be decomposed into a sum of induced one-dimensional representations of centralizers, one for each conjugacy class of elements of W, and gave a uniform proof of this claim for symmetric groups. In this note, we outline an inductive approach to our conjecture. As an application of this method, we prove the inductive version of the conjecture for finite Coxeter groups of rank up to 2.

4.
We present two algorithms to compute m-fold hypergeometric solutions of linear recurrence equations, for the classical shift case and for the q-case, respectively. The first is an m-fold generalization and q-generalization of the algorithm by van Hoeij (Appl Algebra Eng Commun Comput 17:83–115, 2005; J. Pure Appl Algebra 139:109–131, 1998) for recurrence equations. The second is a combination of an improved version of the algorithms by Petkovšek (Discrete Math 180:3–22, 1998; J Symb Comput 14(2–3):243–264, 1992) for recurrence and q-recurrence equations and the m-fold algorithm of Petkovšek and Salvy (ISSAC 1993 Proceedings, pp 27–33, 1993) for recurrence equations. We refer to the classical algorithms as van Hoeij and Petkovšek, respectively. To formulate our ideas, we first introduce an adapted version of an m-fold Newton polygon and its characteristic polynomials for the classical case and the q-case, and prove the relevant properties in this setting. Using the data from the Newton polygon, we are able to present efficient m-fold versions of the van Hoeij and Petkovšek algorithms for the classical shift case and for the q-case, respectively. Furthermore, we show how one can use the Newton polygon and our characteristic polynomials to decide for which m ∈ ℕ an m-fold hypergeometric solution can exist at all. Again using the information obtained from the Newton polygon, the presentation of the q-Petkovšek algorithm can be simplified and streamlined. Finally, we give timings for the ‘classical’ q-Petkovšek, our q-van Hoeij and our modified q-Petkovšek algorithm on some classes of problems, and we present a Maple implementation of the m-fold algorithms for the q-case.
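To fix terminology (this is the usual convention, not a statement from the paper): a term a(n) is called m-fold hypergeometric when a(n+m)/a(n) is a rational function of n (of qⁿ in the q-case). The small sympy sketch below, with a hypothetical example term, only illustrates this definition; it has nothing to do with the Newton-polygon algorithms described above.

```python
# Illustrative check (not from the paper): a term a(n) is m-fold hypergeometric
# when the ratio a(n+m)/a(n) is a rational function of n.
import sympy as sp

n = sp.symbols('n')

def is_m_fold_hypergeometric(term, m):
    """Return True if term(n+m)/term(n) simplifies to a rational function of n."""
    ratio = sp.gammasimp(term.subs(n, n + m) / term)
    return sp.simplify(ratio).is_rational_function(n)

a = sp.gamma(n/2 + 1)                    # hypothetical example term
print(is_m_fold_hypergeometric(a, 1))    # False: a(n+1)/a(n) is not rational in n
print(is_m_fold_hypergeometric(a, 2))    # True:  a(n+2)/a(n) = n/2 + 1
```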

5.
The quickest path problem is related to the classical shortest path problem, but its objective function concerns the transmission time of a given amount of data along a path, which involves both cost and capacity. The K-quickest simple paths problem generalises the latter by looking for a given number K of simple paths in non-decreasing order of transmission time. Two categories of algorithms are known for ranking simple paths according to the transmission time. One is the adaptation of deviation algorithms for ranking shortest simple paths (Pascoal et al. in Comput. Oper. Res. 32(3):509–520, 2005; Rosen et al. in Comput. Oper. Res. 18(6):571–584, 1991); the other is based on ranking shortest simple paths in a sequence of networks with fixed capacity lower bounds (Chen in Inf. Process. Lett. 50:89–92, 1994) and afterwards selecting the K quickest ones. After reviewing the quickest path and the K-quickest simple paths problems, we describe a recent algorithm for ranking quickest simple paths (Pascoal et al. in Ann. Oper. Res. 147(1):5–21, 2006). This is a lazy version of Chen’s algorithm, able to interleave the calculation of new simple paths with the output of each k-quickest simple path. Finally, the described algorithm is compared computationally to its original version, as well as to deviation algorithms.
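To make the objective concrete (this is the transmission-time function commonly used in the quickest path literature, not an excerpt of the paper's algorithms; the toy network and function names below are hypothetical): for a path p and an amount of data σ, the transmission time is the sum of the arc lead times along p plus σ divided by the smallest arc capacity on p.

```python
# Illustration of the quickest-path objective: transmission time of a simple
# path = sum of arc lead times + sigma / minimum arc capacity along the path.
def transmission_time(path_arcs, sigma):
    """path_arcs: list of (lead_time, capacity) pairs along the path."""
    total_lead = sum(l for l, _ in path_arcs)
    bottleneck = min(c for _, c in path_arcs)
    return total_lead + sigma / bottleneck

# Hypothetical toy network: arcs keyed by (tail, head) -> (lead_time, capacity)
arcs = {('s', 'a'): (1, 5), ('a', 't'): (2, 5), ('s', 't'): (10, 100)}

def path_time(nodes, sigma):
    return transmission_time([arcs[e] for e in zip(nodes, nodes[1:])], sigma)

sigma = 400
for p in (['s', 'a', 't'], ['s', 't']):
    print(p, path_time(p, sigma))
# ['s', 'a', 't'] gives 1 + 2 + 400/5 = 83, ['s', 't'] gives 10 + 400/100 = 14,
# so for this amount of data the quickest path is not the shortest one.
```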

6.
Combinatorial Sublinear-Time Fourier Algorithms
We study the problem of estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ k. More explicitly, we investigate how to deterministically identify k of the largest-magnitude frequencies of Â, and estimate their coefficients, in poly(k, log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem (Gilbert et al. in ACM STOC, pp 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with runtime/sampling bounds similar to the current best randomized Fourier method (Gilbert et al. in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in Iwen (Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).
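As a point of reference only, the sketch below is a naive baseline that reads all N samples and runs a full FFT (hence superlinear, which is exactly what the sublinear algorithms above avoid): the best k-term representation is obtained by keeping the k largest-magnitude DFT coefficients. The test signal is hypothetical.

```python
# Naive, non-sublinear baseline: full FFT, then keep the k largest-magnitude terms.
import numpy as np

def best_k_term(signal, k):
    """Return (frequency indices, coefficients) of the k largest-magnitude DFT terms."""
    coeffs = np.fft.fft(signal) / len(signal)   # normalized DFT coefficients
    top = np.argsort(np.abs(coeffs))[-k:]       # indices of the k largest magnitudes
    return top, coeffs[top]

# Hypothetical test signal: three tones plus small noise, length N = 1024
rng = np.random.default_rng(0)
N = 1024
t = np.arange(N)
x = (5 * np.exp(2j * np.pi * 50 * t / N)
     + 3 * np.exp(2j * np.pi * 200 * t / N)
     + 2 * np.exp(2j * np.pi * 417 * t / N)
     + 0.01 * rng.standard_normal(N))
freqs, coefs = best_k_term(x, 3)
print(sorted(freqs))    # expected: [50, 200, 417]
```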

7.
A refinable spline in ℝ^d is a compactly supported refinable function whose support can be decomposed into simplices such that the function is a polynomial on each simplex. The best-known refinable splines in ℝ^d are the box splines. Refinable splines play a key role in many applications, such as numerical computation, approximation theory and computer-aided geometric design. Such functions have been classified in one dimension in Dai et al. (Appl. Comput. Harmon. Anal. 22(3), 374–381, 2007) and Lawton et al. (Comput. Math. 3, 137–145, 1995). In higher dimensions, Sun (J. Approx. Theory 86, 240–252, 1996) characterized those splines when the dilation matrices are of the form A = mI, where m ∈ ℤ and I is the identity matrix. For more general dilation matrices the problem becomes more complex. In this paper we give a complete classification of refinable splines in ℝ^d for arbitrary dilation matrices A ∈ M_d(ℤ).
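A standard one-dimensional example of such a function (not taken from the paper) is the linear B-spline, i.e. the hat function N₂(x) = max(1 − |x − 1|, 0) supported on [0, 2], which is a refinable spline for the dilation A = 2:

```latex
N_2(x) \;=\; \tfrac{1}{2}\,N_2(2x) \;+\; N_2(2x-1) \;+\; \tfrac{1}{2}\,N_2(2x-2).
```

Higher-order cardinal B-splines and, more generally, box splines satisfy analogous refinement equations.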

8.
In Martín et al. (J Funct Anal 252:677–695, 2007) we developed a new method to obtain symmetrization inequalities of Sobolev type for Sobolev functions that vanish at the boundary. In this paper we extend our method to Sobolev functions that do not vanish at the boundary. This paper is in final form and no version of it will be submitted for publication elsewhere.

9.
In this paper, we continue an asymptotic analysis of a stochastic version of the Lotka–Volterra model for predator–prey interactions. While the fluid approximation and large deviations were shown in Klebaner and Liptser (Ann. Appl. Probab. 11, 1263–1291, 2001), here we establish the diffusion approximation and moderate deviations.

10.
We provide an obstacle version of the Geometric Dynamic Programming Principle of Soner and Touzi (J. Eur. Math. Soc. 4:201–236, 2002) for stochastic target problems. This opens the doors to a wide range of applications, particularly in risk control in finance and insurance, in which a controlled stochastic process has to be maintained in a given set on a time interval [0,T]. As an example of application, we show how it can be used to provide a viscosity characterization of the super-hedging cost of American options under portfolio constraints, without appealing to the standard dual formulation from mathematical finance. In particular, we allow for a degenerate volatility, a case which does not seem to have been studied so far in this context.

11.
In this paper we continue the development of the differential calculus started in Aragona et al. (Monatsh. Math. 144:13–29, 2005). Guided by the so-called sharp topology and the interpretation of Colombeau generalized functions as point functions on generalized point sets, we introduce the notion of membranes and extend the definition of integrals, given in Aragona et al. (Monatsh. Math. 144:13–29, 2005), to integrals defined on membranes. We use this to prove a generalized version of the Cauchy formula and to obtain the Goursat Theorem for generalized holomorphic functions. A number of results from classical differential and integral calculus, like the inverse and implicit function theorems and Green’s theorem, are transferred to the generalized setting. Further, we indicate that solution formulas for transport and wave equations with generalized initial data can be obtained as well.

12.
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes, but over the last few years these problems have gone from a mere entertainment to an interesting research area, interesting in two ways, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in Bailey et al. (Am. Math. Mon. 115:383–404, 2008; J. Agron. Crop Sci. 165:121–130, 1990), Morgan (Latin squares and related experimental designs. Wiley, New York, 2008) and Vaughan (Electron. J. Comb. 16, 2009). On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns, with m ≠ n, is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), which is a special case of GSP. Finally, we provide a study of the correlation between backbone variables—variables with the same value in all the solutions of an instance—and the hardness of GSP.
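The sketch below only spells out the constraints of a GSP solution for block regions of m rows and n columns (grid and alphabet size N = m·n); it is a plain validity checker, not the CSP/SAT models benchmarked in the paper, and the 6×6 instance is a hypothetical example.

```python
# Validity checker for a Generalized Sudoku solution with m-row x n-column blocks.
def is_gsp_solution(grid, m, n):
    N = m * n
    symbols = set(range(1, N + 1))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all({grid[r][c] for r in range(N)} == symbols for c in range(N))
    blocks_ok = all(
        {grid[br + r][bc + c] for r in range(m) for c in range(n)} == symbols
        for br in range(0, N, m) for bc in range(0, N, n)
    )
    return rows_ok and cols_ok and blocks_ok

# Hypothetical 6x6 instance with 2x3 blocks
solution = [
    [1, 2, 3, 4, 5, 6],
    [4, 5, 6, 1, 2, 3],
    [2, 3, 1, 5, 6, 4],
    [5, 6, 4, 2, 3, 1],
    [3, 1, 2, 6, 4, 5],
    [6, 4, 5, 3, 1, 2],
]
print(is_gsp_solution(solution, 2, 3))   # True
```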

13.
14.
We consider the problem of the approximation of regular convex bodies in ℝ^d by level surfaces of convex algebraic polynomials. Hammer (in Mathematika 10, 67–71, 1963) verified that any convex body in ℝ^d can be approximated by a level surface of a convex algebraic polynomial. In Jaen J. Approx. 1, 97–109, 2009, and subsequently in J. Approx. Theory 162, 628–637, 2010, a quantitative version of Hammer’s approximation theorem was given by showing that the order of approximation of convex bodies by convex algebraic level surfaces of degree n is 1/n. Moreover, it was also shown that whenever the convex body is not regular (that is, there exists a point on its boundary at which the convex body possesses two distinct supporting hyperplanes), then 1/n is essentially the sharp rate of approximation. This leads to the natural question whether this rate of approximation can be improved further when the convex body is regular. In this paper we shall give an affirmative answer to this question. It turns out that for regular convex bodies a rate of convergence o(1/n) holds. In addition, if the body satisfies the condition of C²-smoothness, the rate of approximation is O(1/n²).

15.
In this paper, we study a variation of the equations of a chemotaxis kinetic model and investigate it in one dimension. In fact, we use fractional diffusion for the chemoattractant in the Othmer–Dunbar–Alt system (Othmer in J Math Biol 26(3):263–298, 1988). This version was exhibited by Calvez (Amer Math Soc, pp 45–62, 2007) for the well-known macroscopic Keller–Segel model in all space dimensions. These two macroscopic and kinetic models are related, as mentioned in Bournaveas, Ann Inst H Poincaré Anal Non Linéaire, 26(5):1871–1895, 2009; Chalub, Math Models Methods Appl Sci, 16(7 suppl):1173–1197, 2006; Chalub, Monatsh Math, 142(1–2):123–141, 2004; Chalub, Port Math (NS), 63(2):227–250, 2006. The model we study here behaves in a similar way to the original model in two dimensions under the spherical symmetry assumption on the initial data, as described in Bournaveas, Ann Inst H Poincaré Anal Non Linéaire, 26(5):1871–1895, 2009. We prove the existence and uniqueness of solutions for this model, as well as a convergence result for a family of numerical schemes. The advantage of this model is that numerical simulations can be done easily, especially to track the blow-up phenomenon.

16.
In [1], we have previously observed that, in a chordal graph G, the maximum number of independent r-cliques (i.e., of vertex-disjoint subgraphs of G, each isomorphic to K_r, with no edges joining any two of the subgraphs) equals the minimum number of cliques of G that meet all the r-cliques of G. When r = 1, this says that chordal graphs have independence number equal to the clique covering number. When r = 2, this is equivalent to a result of Cameron (1989), and it implies a well-known forbidden subgraph characterization of split graphs. In this paper we consider a weighted version of the above min-max optimization problem. Given a chordal graph G, with a nonnegative weight for each r-clique in G, we seek a set of independent r-cliques with maximum total weight. We present two algorithms to solve this problem, based on the principle of complementary slackness. The first one operates on a graph derived from G and is an adaptation of an algorithm of Farber (1982). The second one improves the performance by reducing the number of constraints of the linear programs. Both results produce a min-max relation. When the algorithms are specialized to the situation in which all the r-cliques have the same weight, we simplify the algorithms reported in [1], although these simpler algorithms are not as efficient. As a byproduct, we also obtain new elementary proofs of the above min-max result.
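To make the objective explicit, here is a brute-force sketch of the definitions only (nothing like the complementary-slackness algorithms described above; the graph, weights and function name are hypothetical): on a tiny example, enumerate the r-cliques and search over families of them that are pairwise vertex-disjoint with no edges joining any two.

```python
# Brute-force illustration of "maximum total weight set of independent r-cliques".
from itertools import combinations

def independent_r_cliques_max_weight(vertices, edges, r, weight):
    adj = {frozenset(e) for e in edges}
    def is_clique(S):
        return all(frozenset(p) in adj for p in combinations(S, 2))
    cliques = [S for S in combinations(vertices, r) if is_clique(S)]
    def independent(A, B):      # vertex disjoint and no edge joining A and B
        return not any(a == b or frozenset((a, b)) in adj for a in A for b in B)
    best = 0.0
    for k in range(len(cliques) + 1):
        for family in combinations(cliques, k):
            if all(independent(A, B) for A, B in combinations(family, 2)):
                best = max(best, sum(weight[S] for S in family))
    return best

# Hypothetical tiny chordal example: two triangles joined by the edge (3, 4)
V = [1, 2, 3, 4, 5, 6]
E = [(1, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 6), (3, 4)]
w = {(1, 2, 3): 2.0, (4, 5, 6): 3.0}
print(independent_r_cliques_max_weight(V, E, 3, w))   # 3.0: the triangles are not independent
```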

17.
Given a graph G=(V,E) and a weight function on the edges w:E→ℝ, we consider the polyhedron P(G,w) of negative-weight flows on G, and give a complete characterization of the vertices and extreme directions of P(G,w). Based on this characterization, and using a construction developed in Khachiyan et al. (Discrete Comput. Geom. 39(1–3):174–190, 2008), we show that, unless P=NP, there is no output polynomial-time algorithm to generate all the vertices of a 0/1-polyhedron. This strengthens the NP-hardness result of Khachiyan et al. (Discrete Comput. Geom. 39(1–3):174–190, 2008) for non-0/1-polyhedra, and comes in contrast with the polynomiality of vertex enumeration for 0/1-polytopes (Bussieck and Lübbecke in Comput. Geom., Theory Appl. 11(2):103–109, 1998). As further applications, we show that it is NP-hard to check whether a given integral polyhedron is 0/1, or whether a given polyhedron is half-integral. Finally, we also show that it is NP-hard to approximate the maximum support of a vertex of a polyhedron in ℝ^n within a factor of 12/n.

18.
This paper continues the investigation of the groups RF(G) first introduced in the forthcoming book of Chiswell and Müller, “A Class of Groups Universal for Free ℝ-Tree Actions”, and in the article by Müller and Schlage-Puchta (Abh. Math. Semin. Univ. Hambg. 79:193–227, 2009). We establish a criterion for a family {H_σ} of hyperbolic subgroups H_σ ≤ RF(G) to generate a hyperbolic subgroup isomorphic to the free product of the H_σ (Theorem 1.2), as well as a local-global principle for local incompatibility (Theorem 4.1). In conjunction with the theory of test functions as developed by Müller and Schlage-Puchta (Abh. Math. Semin. Univ. Hambg. 79:193–227, 2009), these results allow us to obtain a necessary and sufficient condition for a free product of real groups to embed as a hyperbolic subgroup in RF(G) for a given group G (Corollary 5.4). As a further application, we show that the centralizers associated with a family of pairwise locally incompatible cyclically reduced functions in RF(G) generate a hyperbolic subgroup isomorphic to the free product of these centralizers (Corollary 5.2).

19.
In this paper we study the main properties of the Cesàro means of bi-continuous semigroups, introduced and studied by Kühnemund (Semigroup Forum 67:205–225, 2003). We also give some applications to Feller semigroups generated by second-order elliptic differential operators with unbounded coefficients in C_b(ℝ^N) and to evolution operators associated with nonautonomous second-order differential operators in C_b(ℝ^N) with time-periodic coefficients.

20.
Consider a maximum-length shift-register sequence generated by a primitive polynomial f over a finite field. The set of its subintervals is a linear code whose dual code is formed by all polynomials divisible by f. Since the minimum weight of dual codes is directly related to the strength of the corresponding orthogonal arrays, we can produce orthogonal arrays by studying divisibility of polynomials. Munemasa (Finite Fields Appl 4(3):252–260, 1998) uses trinomials over 𝔽_2 to construct orthogonal arrays of guaranteed strength 2 (and almost strength 3). That result was extended by Dewar et al. (Des Codes Cryptogr 45:1–17, 2007) to construct orthogonal arrays of guaranteed strength 3 by considering divisibility of trinomials by pentanomials over 𝔽_2. Here we first simplify the requirement in Munemasa’s approach that the characteristic polynomial of the sequence must be primitive: we show that the method applies even to the much broader class of polynomials with no repeated roots. Then we give characterizations of divisibility for binomials and trinomials over 𝔽_3. Some of our results apply to any finite field 𝔽_q with q elements.
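For illustration only, here is a minimal sketch of a maximum-length shift-register sequence over 𝔽_2, using the primitive trinomial x⁴ + x + 1 (a standard example, not one of the polynomials studied in the paper); the orthogonal-array constructions themselves are not reproduced. The characteristic polynomial corresponds to the recurrence s[i+4] = s[i+1] + s[i] over 𝔽_2.

```python
# Maximum-length shift-register (m-)sequence over F_2 for x^4 + x + 1 (primitive),
# generated by the linear recurrence s[i+4] = s[i+1] + s[i] (mod 2).
def m_sequence(initial_state, length):
    s = list(initial_state)                  # nonzero initial fill of length 4
    while len(s) < length:
        s.append((s[-3] + s[-4]) % 2)        # s[i+4] = s[i+1] + s[i] over F_2
    return s

seq = m_sequence([1, 0, 0, 0], 30)
print(seq[:15])                  # one full period of length 2^4 - 1 = 15
print(seq[:15] == seq[15:30])    # True: the sequence repeats with period 15
```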

