Similar Literature
20 similar records found.
1.
Improved algorithms for the multicut and multiflow problems in rooted trees
A. Tamir, TOP 16(1):114–125, 2008
Costa et al. (Oper. Res. Lett. 31:21–27, 2003) presented a quadratic O(min(Kn, n^2)) greedy algorithm to solve the integer multicut and multiflow problems in a rooted tree (n is the number of nodes of the tree, and K is the number of commodities). Their algorithm is a special case of the greedy-type algorithm of Kolen (Location problems on trees and in the rectilinear plane. Ph.D. dissertation, 1982) for weighted covering and packing problems defined by general totally balanced (greedy) matrices. In this communication we improve the complexity bound of Costa et al. and show that for the integer multicut and multiflow problems in a rooted tree the greedy algorithm of Kolen can be implemented in subquadratic O(K + n + min(K, n) log n) time. The improvement is obtained by identifying additional properties of this model which lead to a subquadratic transformation to greedy form, and by using more sophisticated data structures.
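The greedy principle behind the quadratic algorithm can be sketched for the rooted case, where each commodity connects a node to the root: handle commodities deepest-first and saturate the bottleneck edge on each path. This is only an illustrative quadratic-time sketch under assumed data structures (parent pointers, per-edge capacities), not the paper's subquadratic implementation.

```python
def greedy_rooted_multiflow(parent, cap, depth, terminals):
    # Kolen-style greedy sketch: process commodities by decreasing depth of
    # their terminal node, push the bottleneck flow along the path to the
    # root, and decrement residual edge capacities.
    residual = dict(cap)   # residual[v] = remaining capacity of edge (v, parent[v])
    total = 0
    for s in sorted(terminals, key=lambda v: -depth[v]):
        path, v = [], s
        while parent[v] is not None:
            path.append(v)
            v = parent[v]
        f = min(residual[u] for u in path)   # bottleneck on the path s -> root
        for u in path:
            residual[u] -= f
        total += f
    return total
```

On a root 0 with children 1 (capacity 2) and 2 (capacity 1), and node 3 under 1 (capacity 1), commodities at nodes 1, 2, 3 yield a total flow of 3, matching the minimum multicut {edge(1), edge(2)}.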

2.
Abstract

All known robust location and scale estimators with high breakdown point for multivariate samples are very expensive to compute. In practice, this computation has to be carried out using an approximate subsampling procedure. In this article we describe an alternative subsampling scheme, applicable to both the Stahel-Donoho estimator and the minimum volume ellipsoid estimator, with the property that the number of subsamples required can be substantially reduced with respect to the standard subsampling procedures used in both cases. We also discuss some bias and variability properties of the estimator obtained from the proposed subsampling process.

3.
In high-dimensional directional statistics one of the most basic probability distributions is the von Mises-Fisher (vMF) distribution. Maximum likelihood estimation for the vMF distribution turns out to be surprisingly hard because of a difficult transcendental equation that needs to be solved for computing the concentration parameter κ. This paper is a followup to the recent paper of Tanabe et al. (Comput Stat 22(1):145–157, 2007), who exploited inequalities about Bessel function ratios to obtain an interval in which the parameter estimate for κ should lie; their observation lends theoretical validity to the heuristic approximation of Banerjee et al. (JMLR 6:1345–1382, 2005). Tanabe et al. also presented a fixed-point algorithm for computing improved approximations for κ. However, their approximations require (potentially significant) additional computation, and in this short paper we show that, given the same amount of computation as their method, one can achieve more accurate approximations using a truncated Newton method. A more interesting contribution of this paper is a simple algorithm for computing I_s(x), the modified Bessel function of the first kind. Surprisingly, our naïve implementation turns out to be several orders of magnitude faster for large arguments common to high-dimensional data than the standard implementations in well-established software such as Mathematica®, Maple®, and GP/Pari.
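The estimation problem is to solve A_d(κ) = r̄, where A_d(κ) = I_{d/2}(κ)/I_{d/2−1}(κ) and r̄ is the mean resultant length. A minimal sketch of the Newton approach follows; the closed-form start is the Banerjee et al. heuristic, but the series-based Bessel evaluation and the iteration counts are illustrative assumptions, not the authors' exact truncated-Newton scheme.

```python
import math

def bessel_i(nu, x, terms=60):
    # Power series for the modified Bessel function I_nu(x);
    # adequate for the moderate arguments used in this sketch.
    return sum(math.exp((2 * k + nu) * math.log(x / 2.0)
                        - math.lgamma(k + 1) - math.lgamma(k + nu + 1))
               for k in range(terms))

def A(d, kappa):
    # Mean resultant length A_d(kappa) = I_{d/2}(kappa) / I_{d/2 - 1}(kappa).
    return bessel_i(d / 2.0, kappa) / bessel_i(d / 2.0 - 1.0, kappa)

def estimate_kappa(r, d, iters=8):
    # Banerjee-style closed-form start, then Newton steps on A_d(kappa) = r,
    # using A_d'(kappa) = 1 - A_d^2 - ((d - 1) / kappa) * A_d.
    kappa = r * (d - r * r) / (1.0 - r * r)
    for _ in range(iters):
        a = A(d, kappa)
        kappa -= (a - r) / (1.0 - a * a - (d - 1.0) / kappa * a)
    return kappa
```

For example, with d = 5 and a sample whose mean resultant length equals A_5(10), the iteration recovers κ = 10 to high accuracy in a handful of steps.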

4.
We say that a semigroup S is a permutable semigroup if the congruences of S commute with each other, that is, αβ=βα is satisfied for all congruences α and β of S. A semigroup is called a medial semigroup if it satisfies the identity axyb=ayxb. The medial permutable semigroups were examined in Proc. Coll. Math. Soc. János Bolyai, vol. 39, pp. 21–39 (1981), where the medial semigroups of the first, the second and the third kind were characterized, respectively. In Atti Accad. Sci. Torino, I-Cl. Sci. Fis. Mat. Nat. 117:355–368 (1983) a construction was given for medial permutable semigroups of the second [the third] kind. In the present paper we give a construction for medial permutable semigroups of the first kind. We prove that they can be obtained from non-archimedean commutative permutable semigroups (which were characterized in Semigroup Forum 10:55–66, 1975). Research supported by the Hungarian NFSR grants No. T042481 and No. T043034.

5.
Abstract

An improved resampling algorithm for S estimators reduces the number of times the objective function is evaluated and increases the speed of convergence. With this algorithm, S estimates can be computed in less time than least median squares (LMS) for regression and minimum volume ellipsoid (MVE) for location/scatter estimates with the same accuracy. Here accuracy refers to the randomness due to the algorithm. S estimators are also more statistically efficient than the LMS and MVE estimators, that is, they have less variability due to the randomness of the data.

6.
We address two fundamental questions in the representation theory of affine Hecke algebras of classical types. One is an inductive algorithm to compute characters of tempered modules, and the other is the determination of the constants in the formal degrees of discrete series (in the form conjectured by Reeder (J. Reine Angew. Math. 520:37–93, 2000)). The former is completely different from the Lusztig-Shoji algorithm (Shoji in Invent. Math. 74:239–267, 1983; Lusztig in Ann. Math. 131:355–408, 1990), and it is more effective in a number of cases. The main idea in our proof is to introduce a new family of representations which behave like tempered modules, but for which it is easier to analyze the effect of parameter specializations. Our proof also requires a comparison of the C*-theoretic results of Opdam, Delorme, Slooten, Solleveld (J. Inst. Math. Jussieu 3:531–648, 2004; Int. Math. Res. Not., 2008; Adv. Math. 220:1549–1601, 2009; Acta Math. 205:105–187, 2010) and the geometric construction from Kato (Duke Math. J. 148:305–371, 2009; Am. J. Math. 133:518–553, 2011), Ciubotaru and Kato (Adv. Math. 226:1538–1590, 2011).

7.
In the multiple-output regression context, Hallin et al. (Ann Statist 38:635–669, 2010) introduced a powerful data-analytical tool based on regression quantile regions. However, the computation of these regions, which are obtained by considering in all directions an original concept of directional regression quantiles, is a very challenging problem. Paindaveine and Šiman (Comput Stat Data Anal 2011b) described a first elegant solution relying on linear programming techniques. The present paper provides another solution based on the fact that the quantile regions can also be computed from a competing concept of projection regression quantiles, elaborated in Kong and Mizera (Quantile tomography: using quantiles with multivariate data, 2008) and Paindaveine and Šiman (J Multivar Anal 2011a). As a by-product, this alternative solution further provides various characteristics useful for statistical inference. We describe in detail the algorithm solving the parametric programming problem involved, and illustrate the resulting procedure on simulated data. We show through simulations that the Matlab implementation of the algorithm proposed in this paper is faster than that from Paindaveine and Šiman (Comput Stat Data Anal 2011b) in various cases.

8.
Two convex disks K and L in the plane are said to cross each other if the removal of their intersection causes each disk to fall into disjoint components. Almost all major theorems concerning the covering density of a convex disk were proved only for crossing-free coverings. This includes the classical theorem of L. Fejes Tóth (Acta Sci. Math. Szeged 12/A:62–67, 1950) that uses the maximum area hexagon inscribed in the disk to give a significant lower bound for the covering density of the disk. From the early seventies, all attempts of generalizing this theorem were based on the common belief that crossings in a plane covering by congruent convex disks, being counterproductive for producing low density, are always avoidable. Partial success was achieved not long ago, first for “fat” ellipses (A. Heppes in Discrete Comput. Geom. 29:477–481, 2003) and then for “fat” convex disks (G. Fejes Tóth in Discrete Comput. Geom. 34(1):129–141, 2005), where “fat” means of shape sufficiently close to a circle. A recently constructed example will be presented here, showing that, in general, all such attempts must fail. Three perpendiculars drawn from the center of a regular hexagon to its three nonadjacent sides partition the hexagon into three congruent pentagons. Obviously, the plane can be tiled by such pentagons. But a slight modification produces a (non-tiling) pentagon with an unexpected covering property: every thinnest covering of the plane by congruent copies of the modified pentagon must contain crossing pairs. The example has no bearing on the validity of Fejes Tóth’s bound in general, but it shows that any prospective proof must take into consideration the existence of unavoidable crossings.

9.
Abstract

The existence of outliers in a data set and how to deal with them is an important problem in statistics. The minimum volume ellipsoid (MVE) estimator is a robust estimator of location and covariate structure; however its use has been limited because there are few computationally attractive methods. Determining the MVE consists of two parts—finding the subset of points to be used in the estimate and finding the ellipsoid that covers this set. This article addresses the first problem. Our method will also allow us to compute the minimum covariance determinant (MCD) estimator. The proposed method of subset selection is called the effective independence distribution (EID) method, which chooses the subset by minimizing determinants of matrices containing the data. This method is deterministic, yielding reproducible estimates of location and scatter for a given data set. The EID method of finding the MVE is applied to several regression data sets where the true estimate is known. Results show that the EID method, when applied to these data sets, produces the subset of data more quickly than conventional procedures and that there is less than 6% relative error in the estimates. We also give timing results illustrating the feasibility of our method for larger data sets. For the case of 10,000 points in 10 dimensions, the compute time is under 25 minutes.
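A toy reading of the determinant-minimizing idea can be sketched via leverages: removing row i scales det(XᵀX) by (1 − h_i), where h_i is the i-th diagonal entry of the hat matrix, so repeatedly dropping the largest-leverage point minimizes the remaining determinant. This 2-D sketch is an assumption-laden illustration, not the paper's EID procedure; all names are hypothetical.

```python
def leverages(pts):
    # Diagonal of the hat matrix H = X (X'X)^{-1} X' for 2-D rows,
    # using an explicit 2x2 inverse (needs >= 3 points in general position).
    s11 = sum(a * a for a, _ in pts)
    s22 = sum(b * b for _, b in pts)
    s12 = sum(a * b for a, b in pts)
    det = s11 * s22 - s12 * s12
    i11, i22, i12 = s22 / det, s11 / det, -s12 / det
    return [a * (a * i11 + b * i12) + b * (a * i12 + b * i22) for a, b in pts]

def eid_like_subset(pts, h):
    # Deterministic subset selection: drop the largest-leverage point until
    # h points remain -- reproducible for a given data set, like the EID idea.
    pts = list(pts)
    while len(pts) > h:
        lv = leverages(pts)
        pts.pop(lv.index(max(lv)))
    return pts
```

On data with a gross outlier, the outlier carries leverage near 1 and is discarded first, leaving the clean core for the ellipsoid step.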

10.
In this paper, we propose a new smoothing Broyden-like method for solving the nonlinear complementarity problem with a P_0 function. The presented algorithm is based on the smoothing symmetrically perturbed minimum function φ(a, b) = min{a, b} and makes use of the derivative-free line search rule of Li et al. (J Optim Theory Appl 109(1):123–167, 2001). Without requiring any strict complementarity assumption at the P_0-NCP solution, we show that the iteration sequence generated by the suggested algorithm converges globally and superlinearly under suitable conditions. Furthermore, the algorithm has local quadratic convergence under mild assumptions. Some numerical results are also reported in this paper.
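The smoothing idea can be illustrated with the standard CHKS-type perturbation of the minimum function: since min{a, b} = (a + b − |a − b|)/2, replacing |t| by √(t² + 4μ²) yields a smooth function that recovers min as μ → 0. The 1-D continuation solver below is a generic sketch under that assumption; it uses plain Newton steps, not the paper's Broyden-like update or line search.

```python
import math

def phi_smooth(a, b, mu):
    # Smoothed minimum: C-infinity perturbation of phi(a, b) = min{a, b}.
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu * mu))

def solve_ncp_1d(F, dF, x0=1.0):
    # Solve the 1-D NCP (x >= 0, F(x) >= 0, x * F(x) = 0) by Newton steps on
    # phi_smooth(x, F(x), mu) = 0 while driving the smoothing parameter mu -> 0.
    x = x0
    for k in range(1, 13):
        mu = 10.0 ** (-k)
        for _ in range(25):
            fx = F(x)
            root = math.sqrt((x - fx) ** 2 + 4.0 * mu * mu)
            g = 0.5 * (x + fx - root)
            if abs(g) < 1e-14:
                break
            # chain-rule derivative of phi_smooth(x, F(x), mu) in x
            dg = 0.5 * (1.0 + dF(x) - (x - fx) * (1.0 - dF(x)) / root)
            x -= g / dg
    return x
```

For F(x) = x − 2 the complementary solution is x = 2 (F vanishes); for F(x) = x + 1 it is x = 0 (x vanishes), and the continuation recovers both.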

11.
FFTs on the Rotation Group
We discuss an implementation of an efficient algorithm for the numerical computation of Fourier transforms of bandlimited functions defined on the rotation group SO(3). The implementation is freely available on the web. The algorithm described herein uses O(B^4) operations to compute the Fourier coefficients of a function whose Fourier expansion uses only the O(B^3) spherical harmonics of degree at most B. This compares very favorably with the direct O(B^6) algorithm derived from a basic quadrature rule on O(B^3) sample points. The efficient Fourier transform also makes possible the efficient calculation of convolution over SO(3), which has been used as the analytic engine for some new approaches to searching 3D databases (Funkhouser et al., ACM Trans. Graph. 83–105, 2003; Kazhdan et al., Eurographics Symposium in Geometry Processing, pp. 167–175, 2003). Our implementation is based on the “Separation of Variables” technique (see, e.g., Maslen and Rockmore, Proceedings of the DIMACS Workshop on Groups and Computation, pp. 183–237, 1997). In conjunction with techniques developed for the efficient computation of orthogonal polynomial expansions (Driscoll et al., SIAM J. Comput. 26(4):1066–1099, 1997), our fast SO(3) algorithm can be improved to give an algorithm of complexity O(B^3 log^2 B), but at a cost in numerical reliability. Numerical and empirical results are presented establishing the empirical stability of the basic algorithm. Examples of applications are presented as well. First author was supported by an NSF ITR award; second author was supported by NSF Grant 0219717 and the Santa Fe Institute.

12.
Given a graph G=(V,E) and a weight function on the edges w: E → ℝ, we consider the polyhedron P(G,w) of negative-weight flows on G and give a complete characterization of the vertices and extreme directions of P(G,w). Based on this characterization, and using a construction developed in Khachiyan et al. (Discrete Comput. Geom. 39(1–3):174–190, 2008), we show that, unless P=NP, there is no output polynomial-time algorithm to generate all the vertices of a 0/1-polyhedron. This strengthens the NP-hardness result of Khachiyan et al. for non-0/1-polyhedra, and contrasts with the polynomiality of vertex enumeration for 0/1-polytopes (Bussieck and Lübbecke in Comput. Geom., Theory Appl. 11(2):103–109, 1998). As further applications, we show that it is NP-hard to check whether a given integral polyhedron is 0/1, or whether a given polyhedron is half-integral. Finally, we also show that it is NP-hard to approximate the maximum support of a vertex of a polyhedron in ℝ^n within a factor of 12/n.

13.
We introduce an iterative sequence for finding the solution to 0 ∈ T(v), where T: E → E^* is a maximal monotone operator in a smooth and uniformly convex Banach space E. This iterative procedure is a combination of the iterative algorithms proposed by Kohsaka and Takahashi (Abstr. Appl. Anal. 3:239–249, 2004) and Kamimura, Kohsaka, and Takahashi (Set-Valued Anal. 12:417–429, 2004). We prove a strong convergence theorem and a weak convergence theorem under different conditions, respectively, and give an estimate of the convergence rate of the algorithm. An application to minimization problems is given. This work was partially supported by the National Natural Sciences Grant 10671050 and the Heilongjiang Province Natural Sciences Grant A200607. The authors thank the referees for useful comments improving the presentation and Professor K. Kohsaka for pointing out Ref. 7.

14.
We rigorously prove results on spiky patterns for the Gierer–Meinhardt system (Kybernetik (Berlin) 12:30–39, 1972) with a jump discontinuity in the diffusion coefficient of the inhibitor. Using numerical computations in combination with a Turing-type instability analysis, this system has been investigated by Benson, Maini, and Sherratt (Math. Comput. Model. 17:29–34, 1993a; Bull. Math. Biol. 55:365–384, 1993b; IMA J. Math. Appl. Med. Biol. 9:197–213, 1992). Firstly, we show the existence of an interior spike located away from the jump discontinuity, deriving a necessary condition for the position of the spike. In particular, we show that the spike is located in one-and-only-one of the two subintervals created by the jump discontinuity of the inhibitor diffusivity. This localization principle for a spike is a new effect which does not occur for homogeneous diffusion coefficients. Further, we show that this interior spike is stable. Secondly, we establish the existence of a spike whose distance from the jump discontinuity is of the same order as its spatial extent. The existence of such a spike near the jump discontinuity is the second new effect presented in this paper. To derive these new effects in a mathematically rigorous way, we use analytical tools like Liapunov–Schmidt reduction and nonlocal eigenvalue problems which have been developed in our previous work (J. Nonlinear Sci. 11:415–458, 2001). Finally, we confirm our results by numerical computations for the dynamical behavior of the system. We observe a moving spike which converges to a stationary spike located in the interior of one of the subintervals or near the jump discontinuity.

15.
In this paper, we analyze the outer approximation property of the algorithm for generalized semi-infinite programming from Stein and Still (SIAM J. Control Optim. 42:769–788, 2003). A simple bound on the regularization error is found and used to formulate a feasible numerical method for generalized semi-infinite programming with convex lower-level problems. That is, all iterates of the numerical method are feasible points of the original optimization problem. The new method has the same computational cost as the original algorithm from Stein and Still. We also discuss the merits of this approach for the adaptive convexification algorithm, a feasible point method for standard semi-infinite programming from Floudas and Stein (SIAM J. Optim. 18:1187–1208, 2007).

16.
The simplicial algorithm is a kind of branch-and-bound method for computing a globally optimal solution of a convex maximization problem. Its convergence under the ω-subdivision strategy was an open question for some decades until Locatelli and Raber proved it (J Optim Theory Appl 107:69–79, 2000). In this paper, we modify their linear programming relaxation and give a different and simpler proof of the convergence. We also develop a new convergent subdivision strategy, and report numerical results of comparing it with existing strategies.

17.
Let X be a complete local Dirichlet space with a local Poincaré inequality, local volume doubling, and volumes of balls of a fixed radius bounded away from both 0 and ∞. When X is a co-compact covering of a finitely generated group, the large-time behaviors of their heat kernels are comparable. This is an extension of work by Pittet and Saloff-Coste (J Geom Anal 10:713–737, 2000).

18.
Lance Nielsen, Acta Appl. Math. 110(1):409–429, 2010
In this paper we develop a method of forming functions of noncommuting operators (or disentangling) using functions that are not necessarily analytic at the origin in ℂ^n. The method of disentangling follows Feynman’s heuristic rules (Feynman in Phys. Rev. 84:108–128, 1951) in a mathematically rigorous fashion, generalizing the work of Jefferies and Johnson and the present author in (Jefferies and Johnson in Russ. J. Math. Phys. 8:153–181, 2001) and (Jefferies et al. in J. Korean Math. Soc. 38:193–226, 2001). In fact, the work in those papers allows only functions analytic in a polydisk centered at the origin in ℂ^n, while the method introduced in this paper enables functions that are not analytic at the origin to be used. It is shown that the disentangling formalism introduced here reduces to that of the earlier papers under the appropriate assumptions. A basic commutativity theorem is also established.

19.
Given a partition λ and a composition β, the stretched Kostka coefficient is the map n ↦ K_{nλ,nβ} sending each positive integer n to the Kostka coefficient indexed by nλ and nβ. Kirillov and Reshetikhin (J. Soviet Math. 41(2):925–955, 1988) have shown that stretched Kostka coefficients are polynomial functions of n. King, Tollu, and Toumazet have conjectured that these polynomials always have nonnegative coefficients (CRM Proc. Lecture Notes 34:99–112, 2004), and they have given a conjectural expression for their degrees (Séminaire Lotharingien de Combinatoire 54A, 2006). We prove the values conjectured by King, Tollu, and Toumazet for the degrees of stretched Kostka coefficients. Our proof depends upon the polyhedral geometry of Gelfand–Tsetlin polytopes and uses tilings of GT-patterns, a combinatorial structure introduced in De Loera and McAllister (Discret. Comput. Geom. 32(4):459–470, 2004). Research supported by NSF VIGRE Grant No. DMS-0135345 and by NWO Mathematics Cluster DIAMANT.
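For intuition, small Kostka coefficients can be brute-forced straight from the semistandard-tableau definition (K_{λβ} counts fillings of shape λ with content β whose rows weakly increase and columns strictly increase). This naive enumerator is purely illustrative; the paper works with GT-pattern tilings, not tableau enumeration.

```python
from itertools import product

def kostka(shape, content):
    # Count semistandard Young tableaux of the given shape and content:
    # rows weakly increase left to right, columns strictly increase downward.
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    n, m = len(cells), len(content)
    count = 0
    for fill in product(range(1, m + 1), repeat=n):
        if any(fill.count(k + 1) != content[k] for k in range(m)):
            continue  # wrong content
        T = dict(zip(cells, fill))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            count += 1
    return count
```

For λ = (2,1) and β = (1,1,1) this gives K = 2, and stretching by n = 2 gives K_{(4,2),(2,2,2)} = 3 — the first two values of the stretched polynomial in n for this pair.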

20.
Based on the basis theorem of Bruhat–Chevalley (in Algebraic Groups and Their Generalizations: Classical Methods, Proceedings of Symposia in Pure Mathematics, vol. 56 (part 1), pp. 1–26, AMS, Providence, 1994) and the formula for multiplying Schubert classes obtained in (Duan, Invent. Math. 159:407–436, 2005) and programmed in (Duan and Zhao, Int. J. Algebra Comput. 16:1197–1210, 2006), we introduce a new method for computing the Chow rings of flag varieties (resp. the integral cohomology of homogeneous spaces).
