Similar Documents
20 similar documents found (search time: 62 ms).
1.
In this paper a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter β_k is computed as a convex combination of the Polak-Ribière-Polyak and the Dai-Yuan conjugate gradient parameters, i.e. β_k^N = (1−θ_k) β_k^{PRP} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions, and when the iterates jam the directions satisfy the sufficient descent condition. Numerical comparisons with conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms. (N. Andrei is a member of the Academy of Romanian Scientists, Splaiul Independenţei nr. 54, Sector 5, Bucharest, Romania.)
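As an illustration only (not the paper's code), a minimal sketch of how such a hybrid β_k could be evaluated once the gradients g_k, g_{k+1}, the previous direction d_k and a weight θ_k are available; the function name and the clipping of θ_k to [0, 1] are assumptions:

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old, theta):
    """beta^N = (1 - theta) * beta^PRP + theta * beta^DY (convex combination).

    g_new, g_old : gradients g_{k+1}, g_k
    d_old        : previous search direction d_k (needed by the DY formula)
    theta        : weight; in the paper it is chosen so that the conjugacy
                   condition holds, here it is simply an input
    """
    y = g_new - g_old
    beta_prp = (g_new @ y) / (g_old @ g_old)     # Polak-Ribiere-Polyak
    beta_dy = (g_new @ g_new) / (d_old @ y)      # Dai-Yuan
    theta = min(max(theta, 0.0), 1.0)            # keep the combination convex
    return (1.0 - theta) * beta_prp + theta * beta_dy

# New search direction: d_{k+1} = -g_{k+1} + beta * d_k
```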

2.
This article studies a modified BFGS algorithm for solving smooth unconstrained strongly convex minimization problems. The modified BFGS method is based on the new quasi-Newton equation B_{k+1} s_k = y_k^*, where y_k^* = y_k + A_k s_k and A_k is a matrix. Wei, Li and Qi [WLQ] have proven that the average performance of two of those algorithms is better than that of the classical one. In this paper, we prove the global convergence of these algorithms associated with a general line search rule.
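A hedged sketch (not the article's implementation) of the standard BFGS matrix update applied to the modified pair (s_k, y_k^*): by construction the updated matrix satisfies B_{k+1} s_k = y_k^*. The choice of A_k is left to the caller, and A_k = 0 recovers the classical pair:

```python
import numpy as np

def bfgs_update(B, s, y_star):
    """BFGS update of the Hessian approximation B with the modified pair
    (s_k, y_k^*), so that the new matrix satisfies B_{k+1} s_k = y_k^*."""
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)
            + np.outer(y_star, y_star) / (y_star @ s))

# Example modified difference vector: y_star = y + A @ s, with A a user-chosen
# matrix (A = 0 gives back the classical BFGS pair).
```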

3.
It is proved that every convex body C in R^n can be approximated by a sequence C_k of convex bodies whose boundary is the intersection of a level set of a homogeneous polynomial of degree 2k and a hyperplane. The Minkowski functional of C_k is given explicitly. Some further nice properties of the approximants C_k are proved. (Supported in part by BSF and Erwin Schrödinger Auslandsstipendium J0630.)

4.
Géza Tóth, Combinatorica 2000, 20(4):589-596
Let F denote a family of pairwise disjoint convex sets in the plane. F is said to be in convex position if none of its members is contained in the convex hull of the union of the others. For any fixed k ≥ 5, we give a linear upper bound on P_k(n), the maximum size of a family F with the property that any k members of F are in convex position, but no n are.

5.
Arrangements of oriented hyperplanes
An arrangement of n oriented hyperplanes or half-spaces divides E^d into a certain number of convex cells. We study the number c_k of cells which are covered by exactly k half-spaces and derive an upper bound on c_k for given n and d.

6.
Streaming Algorithms for Line Simplification
We study the following variant of the well-known line-simplification problem: we receive a (possibly infinite) sequence of points p_0, p_1, p_2, … in the plane defining a polygonal path, and as we receive the points, we wish to maintain a simplification of the path seen so far. We study this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance), xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance), and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
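The following is only a greedy stand-in for the streaming setting described above, not the paper's algorithm: it keeps at most 2k internal points and, when the budget is exceeded, drops the internal point whose removal causes the smallest local distance error. The class and method names are assumptions:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

class StreamingSimplifier:
    """Keep a path simplification with at most 2k internal points.

    When the budget is exceeded, the internal point whose removal causes the
    smallest local error (distance to the segment joining its neighbours) is
    dropped; this greedy rule is only an illustration of the setting."""

    def __init__(self, k):
        self.budget = 2 * k
        self.points = []

    def add(self, p):
        self.points.append(p)
        # keep the two endpoints and at most `budget` internal points
        while len(self.points) - 2 > self.budget:
            i = min(range(1, len(self.points) - 1),
                    key=lambda j: point_segment_distance(
                        self.points[j], self.points[j - 1], self.points[j + 1]))
            del self.points[i]
```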

7.
Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar–Todd phenomenon does not occur for each of the above mentioned methods. Moreover, we explore the convergence rate of {||x_k||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii). Accepted 15 October 1996

8.
Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work, we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplementary materials available online.
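For illustration, a hedged sketch of the convex clustering objective the abstract refers to (a squared fit term plus a pairwise norm penalty that shrinks centroids together), minimized here with a plain subgradient step rather than the paper's ADMM/AMA splittings. The symmetric nonnegative weight matrix w, the parameter gamma, and the step size are assumptions:

```python
import numpy as np

def convex_clustering_objective(U, X, gamma, w):
    """(1/2) * sum_i ||x_i - u_i||^2 + gamma * sum_{i<j} w[i, j] * ||u_i - u_j||."""
    fit = 0.5 * np.sum((X - U) ** 2)
    n = X.shape[0]
    penalty = sum(w[i, j] * np.linalg.norm(U[i] - U[j])
                  for i in range(n) for j in range(i + 1, n))
    return fit + gamma * penalty

def subgradient_step(U, X, gamma, w, step):
    """One subgradient step on the objective; larger gamma pulls the centroid
    rows of U closer together, which is the shrinkage effect described above."""
    G = U - X
    n = X.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            d = U[i] - U[j]
            nrm = np.linalg.norm(d)
            if nrm > 0:
                G[i] += gamma * w[i, j] * d / nrm
                G[j] -= gamma * w[i, j] * d / nrm
    return U - step * G
```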

9.
An accelerated hybrid conjugate gradient algorithm is the subject of this paper. The parameter β_k is computed as a convex combination of β_k^{HS} (Hestenes and Stiefel, J Res Nat Bur Stand 49:409–436, 1952) and β_k^{DY} (Dai and Yuan, SIAM J Optim 10:177–182, 1999), i.e. β_k^C = (1−θ_k) β_k^{HS} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the best direction we know, i.e. the Newton direction, while the pair (s_k, y_k) satisfies the modified secant condition given by Li et al. (J Comput Appl Math 202:523–539, 2007), B_{k+1} s_k = z_k, where z_k = y_k + (η_k / ||s_k||^2) s_k, η_k = 2(f_k − f_{k+1}) + (g_k + g_{k+1})^T s_k, s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. It is shown that both for uniformly convex functions and for general nonlinear functions the algorithm with strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme modifying the steplength α_k to improve the reduction of the function values along the iterations. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer Algorithms 47:143–156, 2008), in which the pair (s_k, y_k) satisfies the classical secant condition B_{k+1} s_k = y_k, as well as some other conjugate gradient algorithms including Hestenes-Stiefel, Dai-Yuan, Polak-Ribière-Polyak, Liu-Storey, hybrid Dai-Yuan, Gilbert-Nocedal, etc. A set of 75 unconstrained optimization problems with 10 different dimensions is used (Andrei, Adv Model Optim 10:147–161, 2008).
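A small sketch of the modified secant pair, assembled directly from the formulas quoted in the abstract; the function name is an assumption:

```python
import numpy as np

def modified_secant_pair(x_old, x_new, g_old, g_new, f_old, f_new):
    """Return (s_k, z_k) for the modified secant condition B_{k+1} s_k = z_k,
    with z_k = y_k + (eta_k / ||s_k||^2) s_k and
    eta_k = 2 (f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k."""
    s = x_new - x_old
    y = g_new - g_old
    eta = 2.0 * (f_old - f_new) + (g_old + g_new) @ s
    z = y + (eta / (s @ s)) * s
    return s, z
```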

10.
We establish a C^2 a priori estimate for convex hypersurfaces whose principal curvatures κ = (κ_1, …, κ_n) satisfy the Weingarten curvature equation σ_k(κ(X)) = f(X, ν(X)). We also obtain such an estimate for admissible 2-convex hypersurfaces in the case k = 2. Our estimates resolve a longstanding problem in geometric fully nonlinear elliptic equations. © 2015 Wiley Periodicals, Inc.

11.
Eberhard proved that for every sequence (p_k), 3 ≤ k ≤ r, k ≠ 6, of nonnegative integers satisfying Euler's formula Σ_{k≥3} (6−k) p_k = 12, there are infinitely many values p_6 such that there exists a simple convex polyhedron having precisely p_k faces of size k for every k ≥ 3, where p_k = 0 if k > r. In this paper we prove a similar statement when nonnegative integers p_k are given for 3 ≤ k ≤ r, except for k = 5 and k = 7 (but including p_6). We prove that there are infinitely many values p_5, p_7 such that there exists a simple convex polyhedron having precisely p_k faces of size k for every k ≥ 3. We derive an extension to arbitrary closed surfaces, yielding maps of arbitrarily high face-width. Our proof suggests a general method for obtaining results of this kind.

12.
We present an interior-point method for a class of fractional programs with convex constraints. The proposed algorithm converges at a polynomial rate, similarly to the case of a convex problem, even though fractional programs are only pseudo-convex. Here, the rate of convergence is measured in terms of the area of two-dimensional convex sets C_k containing the origin and certain projections of the optimal points, and the area of C_k is reduced by a constant factor c < 1 at each iteration. The factor c depends only on the self-concordance parameter of a barrier function associated with the feasible set. We present an outline of a practical implementation of the proposed method, and we report results of some preliminary numerical experiments.

13.
Some Convergence Properties of Descent Methods
In this paper, we discuss the convergence properties of a class of descent algorithms for minimizing a continuously differentiable function f on R^n without assuming that the sequence {x_k} of iterates is bounded. Under mild conditions, we prove that the limit infimum of the gradient norms ||∇f(x_k)|| is zero and that false convergence does not occur when f is convex. Furthermore, we discuss the convergence rate of {||x_k||} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded.
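As a concrete, deliberately simple member of such a class of descent methods, here is a gradient step with Armijo backtracking; the constants c, rho, the tolerance, and the function name are assumptions, not taken from the paper:

```python
import numpy as np

def armijo_descent(f, grad, x0, alpha0=1.0, c=1e-4, rho=0.5, tol=1e-8, max_iter=1000):
    """Descent iteration x_{k+1} = x_k + alpha_k d_k with d_k = -grad f(x_k)
    and an Armijo backtracking line search (one simple descent method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is small
            break
        d = -g
        alpha = alpha0
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho              # backtrack until sufficient decrease
        x = x + alpha * d
    return x

# x_min = armijo_descent(lambda x: x @ x, lambda x: 2 * x, np.array([3.0, -4.0]))
```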

14.
A construction is given which, for any zonoid Z ⊂ R^d and any unit vectors u_1, …, u_k, produces a zonotope Z′ which is a sum of at most k segments and has the same support values as Z in the directions u_1, …, u_k. Applications to an estimation problem for fiber processes are also given, as well as some related results for convex bodies.

15.
Let k ≥ 4. A finite planar point set X is called a convex k-clustering if it is a disjoint union of k sets X_1, …, X_k of equal sizes such that x_1 x_2 … x_k is a convex k-gon for each choice of x_1 ∈ X_1, …, x_k ∈ X_k. Answering a question of Gil Kalai, we show that for every k ≥ 4 there are two constants c = c(k), c′ = c′(k) such that the following holds. If X is a finite set of points in general position in the plane, then it has a subset X′ of size at most c′ such that X \ X′ can be partitioned into at most c convex k-clusterings. The special case k = 4 was proved earlier by Pór. Our result strengthens the so-called positive fraction Erdős–Szekeres theorem proved by Bárány and Valtr. The proof gives reasonable estimates on c and c′, and it works also in higher dimensions. We also improve the previous constants for the positive fraction Erdős–Szekeres theorem obtained by Pach and Solymosi.

16.
Bambah, Rogers, Woods, and Zassenhaus considered the general problem of covering planar convex bodies C by k translates of a centrally symmetric convex body K of E^2, with the ramification that these translates cover the convex hull C_k of their centres. They proved interesting inequalities for the volumes of C and C_k. In the present paper some analogous results in Euclidean d-space E^d are given. It turns out that, on one hand, extremal configurations for d ≥ 5 are of quite different type than in the planar case. On the other hand, inequalities similar to the planar ones seem to exist in general. Inequalities in both directions for the volume and other quermassintegrals are given.

17.
The k-partitioning problem
The k-partitioning problem is defined as follows: Given a set of items {I_1, I_2, …, I_n} where item I_j is of weight w_j ≥ 0, find a partition S_1, S_2, …, S_m of this set with |S_i| = k such that the maximum weight of all subsets S_i is minimal. k-partitioning is strongly related to the classical multiprocessor scheduling problem of minimizing the makespan on identical machines. This paper provides suitable tools for the construction of algorithms which solve the problem exactly. Several approximation algorithms are presented for this NP-hard problem. The worst-case behavior of the algorithms is analyzed. The best of these algorithms achieves a performance bound of 4/3.
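A simple LPT-style greedy heuristic for the problem as defined above (not the paper's 4/3-approximation algorithm, just an illustration; the function name is an assumption):

```python
def greedy_k_partition(weights, k):
    """Greedy heuristic for k-partitioning: split n = m*k items into m subsets
    of exactly k items each while trying to keep the maximum subset weight small."""
    n = len(weights)
    assert n % k == 0, "number of items must be a multiple of k"
    m = n // k
    subsets = [[] for _ in range(m)]
    loads = [0.0] * m
    # place heaviest items first, always into the lightest subset with room left
    for w in sorted(weights, reverse=True):
        i = min((j for j in range(m) if len(subsets[j]) < k), key=lambda j: loads[j])
        subsets[i].append(w)
        loads[i] += w
    return subsets, max(loads)

# parts, max_weight = greedy_k_partition([8, 7, 6, 5, 4, 3], k=3)  # m = 2 subsets
```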

18.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of β_k^{HS} (Hestenes-Stiefel) and β_k^{DY} (Dai-Yuan), i.e. β_k^C = (1−θ_k) β_k^{HS} + θ_k β_k^{DY}. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation B_{k+1} s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems are used, some of them from the CUTE library.

19.
Let k ≥ 4. A finite planar point set X is called a convex k-clustering if it is a disjoint union of k sets X_1, …, X_k of equal sizes such that x_1 x_2 … x_k is a convex k-gon for each choice of x_1 ∈ X_1, …, x_k ∈ X_k. Answering a question of Gil Kalai, we show that for every k ≥ 4 there are two constants c = c(k), c′ = c′(k) such that the following holds. If X is a finite set of points in general position in the plane, then it has a subset X′ of size at most c′ such that X \ X′ can be partitioned into at most c convex k-clusterings. The special case k = 4 was proved earlier by Pór. Our result strengthens the so-called positive fraction Erdős–Szekeres theorem proved by Bárány and Valtr. The proof gives reasonable estimates on c and c′, and it works also in higher dimensions. We also improve the previous constants for the positive fraction Erdős–Szekeres theorem obtained by Pach and Solymosi.

20.
We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x_k)} converges at least at the sublinear rate of k^{-ε} for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x_k} to the set of stationary points of the optimization problem converge to zero at least sublinearly. Received: October 5, 1999 / Accepted: January 1, 2000 / Published online July 20, 2000
