Similar Documents
20 similar documents found.
1.
Summary. Let $\Delta$ denote the unit disk in the complex plane and let $E$ be a compact, simply connected subset of $\Delta$ whose boundary is assumed to belong to a suitable smoothness class. Let $B$ be the unit ball of the Hardy space. A linear algorithm is constructed for approximating functions in $B$. The algorithm is based on sampling functions at the Fejér points of $E$, and its error, measured in $C(E)$, is governed by the Green capacity of $E$ with respect to $\Delta$. Here $C(E)$ denotes the space of continuous functions on $E$. Moreover, it is shown that the algorithm is asymptotically optimal in the sense of $n$-widths. Received July 7, 1994

2.
BIT Numerical Mathematics - In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on the well-known...

3.
In this paper, we introduce a new concept of approximate optimal stepsize for the gradient method, use it to interpret the Barzilai-Borwein (BB) method, and present an efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. If the objective function $f$ is not close to a quadratic on the line segment between the current iterate $x_k$ and the previous iterate $x_{k-1}$, we construct a conic model to generate the approximate optimal stepsize, provided the conic model is suitable; otherwise, we construct a new quadratic model or one of two other new approximation models to generate it. We analyze the convergence of the proposed method under suitable conditions. Numerical results show that the proposed method is very promising.
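
The BB stepsize at the heart of this line of work is simple to state. Below is a minimal Python sketch of gradient descent with the classical BB1 stepsize, for orientation only; the paper's conic and quadratic approximation models for the approximate optimal stepsize are not reproduced, and all names here are illustrative.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3, tol=1e-8):
    """Gradient descent with the Barzilai-Borwein (BB1) stepsize.

    alpha_k = s^T s / s^T y, with s and y the iterate and gradient
    differences, acts as a scalar quasi-Newton approximation of the
    inverse Hessian along the last step.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                    # fallback stepsize for the first step
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g   # s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else alpha0   # BB1, guarded for safety
        x, g = x_new, g_new
    return x

# Example: minimize 0.5 x^T A x - b^T x for a symmetric positive definite A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(bb_gradient_descent(lambda x: A @ x - b, np.zeros(2)))
print(np.linalg.solve(A, b))          # the two outputs should agree closely
```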

4.
In this paper, a new type of stepsize for the gradient method, the approximate optimal stepsize, is introduced to interpret the Barzilai-Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize is presented for the strictly convex quadratic minimization problem. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate an approximate optimal stepsize. We then truncate it with the two well-known BB stepsizes to improve numerical performance and use the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
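
The truncation step can be illustrated compactly. In this hedged Python sketch the two classical BB stepsizes are computed and a model-generated stepsize is clipped into the interval they bound; `alpha_model` is a placeholder for the paper's stepsize derived from the multi-step quasi-Newton model, which is not reproduced here.

```python
import numpy as np

def bb_stepsizes(s, y):
    """The two classical Barzilai-Borwein stepsizes.

    BB1 = s^T s / s^T y and BB2 = s^T y / y^T y; for a strictly convex
    quadratic both lie between the reciprocals of the extreme Hessian
    eigenvalues, with BB2 <= BB1 by the Cauchy-Schwarz inequality.
    """
    return (s @ s) / (s @ y), (s @ y) / (y @ y)

def truncated_stepsize(alpha_model, s, y):
    """Clip a model-generated stepsize into the interval [BB2, BB1].

    alpha_model stands in for the approximate optimal stepsize of the
    abstract; the clipping is the truncation device described there.
    """
    bb1, bb2 = bb_stepsizes(s, y)
    lo, hi = min(bb1, bb2), max(bb1, bb2)
    return min(max(alpha_model, lo), hi)
```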

5.
In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength $t_k$ by a positive parameter $\theta_k$, in a multiplicative manner, so as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
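
A minimal Python sketch of the idea follows, with a constant multiplicative factor standing in for the paper's rule for choosing $\theta_k$ (which is not reproduced); the Armijo backtracking loop itself is standard.

```python
import numpy as np

def accelerated_backtracking_gd(f, grad, x0, theta=2.0, n_iter=200,
                                c=1e-4, shrink=0.5, tol=1e-8):
    """Gradient descent with Armijo backtracking; the accepted steplength
    is amplified by a multiplicative factor whenever that further reduces
    the function value."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x - t * g) > f(x) - c * t * (g @ g):   # Armijo condition
            t *= shrink
        # Try the amplified step theta * t; keep it only if it does better.
        x = x - theta * t * g if f(x - theta * t * g) < f(x - t * g) \
            else x - t * g
    return x

# Example on a simple strongly convex quadratic with minimizer (-1, 0).
f = lambda x: 0.5 * (x @ x) + x[0]
grad = lambda x: x + np.array([1.0, 0.0])
print(accelerated_backtracking_gd(f, grad, np.ones(2)))
```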

6.
In this paper we consider the interpolation problem for a sufficiently smooth function on the segment [0, 1], whose values are given at the mesh nodes. We construct a cubic spline that is asymptotically optimal as the number of nodes grows, and we estimate the interpolation errors of the constructed spline in the uniform and $L_2$ metrics.
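
As a hedged illustration, the snippet below uses SciPy's generic cubic spline (not the paper's asymptotically optimal construction) and reports the interpolation error on [0, 1] in the uniform and $L_2$ metrics as the node count grows; for a $C^4$ function both errors decay like $O(h^4)$ with mesh size $h$.

```python
import numpy as np
from scipy.interpolate import CubicSpline

f = np.sin                                   # a smooth test function on [0, 1]

for n in (8, 16, 32):
    nodes = np.linspace(0.0, 1.0, n + 1)     # uniform mesh with n subintervals
    spline = CubicSpline(nodes, f(nodes))
    xs = np.linspace(0.0, 1.0, 2001)         # fine grid for error measurement
    err = f(xs) - spline(xs)
    sup_err = np.max(np.abs(err))            # uniform (sup-norm) error
    l2_err = np.sqrt(np.mean(err ** 2))      # L2([0,1]) norm via sampling
    print(f"n={n:3d}  sup={sup_err:.2e}  L2={l2_err:.2e}")
```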

7.
The following theorem is proved. Let $\varphi(t)$ be a nondecreasing, left-continuous function on $[0,+\infty)$ with $\varphi(0)=0$, let $\Phi(t) = \int_0^t \varphi(s)\,ds$, and suppose that $\sup_{t>0} t\varphi(t)/\Phi(t) < \infty$. If $X_1, X_2, \ldots$ is a sequence of random variables such that $$E\left(\Phi\left(\left|\sum_{i=m+1}^{m+n} X_i\right|\right)\right) \le g^{\alpha}(F_{m,n}) \qquad (m \ge 0,\ n \ge 1),$$ where $\alpha > 1$ and $g(F_{m,n})$ is a functional depending on the joint distribution of the $X_i$ and satisfying $$g(F_{m,n}) + g(F_{m+n,k}) \le g(F_{m,n+k}) \qquad (m \ge 0,\ n \ge 1,\ k \ge 1),$$ then the estimate $$E\left(\Phi\left(\max_{1 \le k \le n}\left|\sum_{i=m+1}^{m+k} X_i\right|\right)\right) \le K\, g^{\alpha}(F_{m,n}) \qquad (m \ge 0,\ n \ge 1)$$ holds, where the factor $K$ is finite and does not depend on $m$ and $n$.

8.
Most existing methods for quadratically constrained quadratic optimization actually solve a refined linear or convex relaxation of the original problem. It turns out, however, that such an approach may provide an infeasible solution that cannot be accepted as an approximate optimal solution in any reasonable sense. To overcome these limitations, a new approach is proposed that guarantees a more appropriate approximate optimal solution, one that is also stable under small perturbations of the constraints.

9.
10.
This paper shows that the optimal subgradient algorithm (OSGA)—which uses first-order information to solve convex optimization problems with optimal complexity—can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising from signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.

11.
The implementation of the recently proposed semi-monotonic augmented Lagrangian algorithm for the solution of large convex equality-constrained quadratic programming problems is considered. It is proved that if the auxiliary problems are solved approximately by the conjugate gradient method, then the algorithm finds an approximate solution of the class of problems with uniformly bounded spectrum of the Hessian matrix in O(1) matrix–vector multiplications. If, in addition, the Hessian matrices are either sufficiently sparse or expressible as products of such sparse matrices, then the cost of the solution is proportional to the dimension of the problem. Theoretical results are illustrated by numerical experiments. This research is supported by grants of the Ministry of Education No. S3086102, ET400300415 and MSM 6198910027.
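
A minimal Python sketch of the overall structure, assuming a fixed penalty parameter in place of the paper's semi-monotonic update: an augmented Lagrangian outer loop whose auxiliary problems are solved by a plain conjugate gradient routine.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Plain CG for a symmetric positive definite system A x = b."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def augmented_lagrangian_qp(A, b, B, c, rho=10.0, n_outer=50, tol=1e-8):
    """Minimize 0.5 x^T A x - b^T x subject to B x = c.

    Outer loop: first-order multiplier updates; inner loop: CG on the
    augmented system (A + rho B^T B) x = b - B^T lam + rho B^T c.
    """
    x, lam = np.zeros(A.shape[0]), np.zeros(B.shape[0])
    H = A + rho * (B.T @ B)           # Hessian of the augmented Lagrangian
    for _ in range(n_outer):
        rhs = b - B.T @ lam + rho * (B.T @ c)
        x = conjugate_gradient(H, rhs, x)        # warm-started inner solve
        residual = B @ x - c
        if np.linalg.norm(residual) < tol:
            break
        lam += rho * residual                    # multiplier update
    return x, lam

# Example: minimize 0.5 ||x||^2 subject to x1 + x2 = 1; solution (0.5, 0.5).
x, lam = augmented_lagrangian_qp(np.eye(2), np.zeros(2),
                                 np.array([[1.0, 1.0]]), np.array([1.0]))
print(x, lam)
```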

12.
To better solve quadratically constrained quadratic programming (QCQP) problems, this paper proposes an adaptive linear relaxation technique within a branch-and-bound framework and proves theoretically that this new bounding technique is effective for solving QCQP. The branching operation uses a conditional bisection rule to partition rectangles efficiently, and a reduction technique deletes regions that contain no globally optimal solution, accelerating the convergence of the algorithm. Finally, numerical results show that the proposed algorithm is effective and feasible.
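
A hedged Python sketch of the branch-and-bound skeleton the abstract refers to, with a crude interval-arithmetic bound standing in for the adaptive linear relaxation (which is much tighter); rectangle bisection and bound-based pruning are as described.

```python
import numpy as np
from heapq import heappush, heappop

def interval_lower_bound(Q, c, lo, hi):
    """Crude interval-arithmetic lower bound for x^T Q x + c^T x on a box.

    This loose bound is a stand-in for the paper's adaptive linear
    relaxation; the branch-and-bound skeleton around it is the same.
    """
    n = len(c)
    lb = sum(min(c[i] * lo[i], c[i] * hi[i]) for i in range(n))
    for i in range(n):
        for j in range(n):
            lb += min(Q[i, j] * a * b
                      for a in (lo[i], hi[i]) for b in (lo[j], hi[j]))
    return lb

def branch_and_bound(Q, c, lo, hi, tol=1e-3, max_nodes=20000):
    """Global box-constrained minimization of a possibly indefinite
    quadratic by best-first rectangle bisection with pruning."""
    q = lambda x: x @ Q @ x + c @ x
    center = (lo + hi) / 2.0
    best_val, best_x = q(center), center            # incumbent upper bound
    heap = [(interval_lower_bound(Q, c, lo, hi), tuple(lo), tuple(hi))]
    nodes = 0
    while heap and nodes < max_nodes:
        bound, l, u = heappop(heap)
        if bound >= best_val - tol:                 # certified tol-optimality
            break
        l, u = np.array(l), np.array(u)
        i = int(np.argmax(u - l))                   # bisect the longest edge
        mid = 0.5 * (l[i] + u[i])
        for nl, nu in ((l, np.where(np.arange(len(c)) == i, mid, u)),
                       (np.where(np.arange(len(c)) == i, mid, l), u)):
            x = (nl + nu) / 2.0
            if q(x) < best_val:
                best_val, best_x = q(x), x
            lb = interval_lower_bound(Q, c, nl, nu)
            if lb < best_val - tol:                 # keep only promising boxes
                heappush(heap, (lb, tuple(nl), tuple(nu)))
            nodes += 1
    return best_val, best_x

# Example: min 2*x1*x2 on [-1,1]^2 is -2, attained at (1,-1) and (-1,1).
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
print(branch_and_bound(Q, np.zeros(2),
                       np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```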

14.
《Optimization》2012,61(12):2679-2691
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai-Liao conjugacy condition. With the standard Wolfe line search and a restart strategy, global convergence of the algorithm is established under mild conditions. Applied to 75 benchmark test problems with dimensions from 1000 to 10,000, the algorithm outperforms state-of-the-art algorithms from the literature, requiring less CPU time and fewer iterations on large-scale unconstrained optimization problems.

15.
A three-term conjugate gradient algorithm for large-scale unconstrained optimization using a subspace minimization technique is presented. In this algorithm the search directions are computed by minimizing the quadratic approximation of the objective function over the subspace spanned by the vectors $-g_{k+1}$, $s_k$ and $y_k$. The search direction takes the form $d_{k+1} = -g_{k+1} + a_k s_k + b_k y_k$, where the scalars $a_k$ and $b_k$ are determined by minimizing the affine quadratic approximation of the objective function. The step-lengths are determined by the Wolfe line search conditions. We prove that the search directions are descent directions and satisfy the Dai-Liao conjugacy condition; thus both the descent and the conjugacy conditions are guaranteed for this three-term conjugate gradient algorithm. It is shown that, for uniformly convex functions, the directions generated by the algorithm are bounded above, i.e. the algorithm is convergent. Numerical experiments on a set of 750 unconstrained optimization test problems show that this new algorithm substantially outperforms the Hestenes-Stiefel, Dai-Liao, Dai-Yuan and Polak-Ribière-Polyak conjugate gradient algorithms, as well as the limited-memory quasi-Newton method L-BFGS and the discrete truncated-Newton method TN.
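
A minimal Python sketch of the subspace step, assuming access to the true Hessian $H$ of a quadratic model for transparency; the paper avoids explicit Hessian products through quasi-Newton-style substitutions such as $H s_k \approx y_k$.

```python
import numpy as np

def subspace_direction(g, s, y, H):
    """Search direction d = -g + a*s + b*y, with (a, b) minimizing the
    quadratic model m(d) = g^T d + 0.5 d^T H d over the affine family.

    Setting the partial derivatives in a and b to zero gives a 2x2
    linear system; its solution yields the three-term direction.
    """
    Hs, Hy, Hg = H @ s, H @ y, H @ g
    M = np.array([[s @ Hs, s @ Hy],
                  [y @ Hs, y @ Hy]])
    rhs = np.array([s @ Hg - s @ g,
                    y @ Hg - y @ g])
    a, b = np.linalg.solve(M, rhs)
    return -g + a * s + b * y
```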

16.
Network flow problems with quadratic separable costs appear in a number of important applications, such as approximating input-output matrices in economics, projecting and forecasting traffic matrices in telecommunication networks, and solving nondifferentiable cost flow problems by subgradient algorithms. It is shown that the scaling technique introduced by Edmonds and Karp (1972) for linear cost flows, used to derive a polynomial complexity bound for the out-of-kilter method, extends to quadratic cost flows and leads to a polynomial algorithm for this class of problems. The method may be applied to the solution of singly constrained quadratic programs and thus provides an alternative to the polynomial algorithm suggested by Helgason, Kennington and Lall (1980).

17.
In a recent paper [6] we suggested an algorithm for solving complicated mixed-integer quadratic programs, based on an equivalent formulation that employs a nonsingular transformation of variables. The present paper has two objectives: first, to present an improved version of this algorithm that substantially reduces its computational requirements; second, to report the results of a computational study with the revised algorithm.

18.
In this paper, we propose an active set identification technique which accurately identifies active constraints in a neighborhood of an isolated stationary point without strict complementarity conditions. Based on this identification technique, we propose a conjugate gradient algorithm for large-scale bound-constrained optimization. In the algorithm, the recently developed modified Polak-Ribière-Polyak method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. Under appropriate conditions, we show that the proposed method is globally convergent. Numerical experiments are presented using bound-constrained problems from the CUTEr test problem library.
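
A hedged Python sketch of the two ingredients, with a plain gradient step standing in for the modified Polak-Ribière-Polyak direction on the free variables; the identification rule shown is a simple epsilon-threshold and is only an illustration, not the paper's sharper technique.

```python
import numpy as np

def identify_active_set(x, g, lo, hi, eps=1e-6):
    """Flag a bound as active when x_i sits within eps of it and the
    negative gradient points out of the feasible box."""
    at_lo = (x - lo <= eps) & (g > 0)
    at_hi = (hi - x <= eps) & (g < 0)
    return at_lo | at_hi

def projected_gradient_step(x, g, lo, hi, t=0.1):
    """One illustrative iteration: identified active variables are kept on
    their bounds, the remaining (free) variables move along the negative
    gradient, and the result is projected back onto the box."""
    active = identify_active_set(x, g, lo, hi)
    d = -g.copy()
    d[active] = 0.0                      # freeze identified active variables
    return np.clip(x + t * d, lo, hi)    # projection keeps the iterate feasible
```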

19.
We describe an approximate algorithm for a special 'quadratic semi-assignment problem' arising from 'equipartition' applications, where one wants to cluster $n$ objects with given weights $w_i$ into $p$ classes so as to minimize the variance of the class weights. The algorithm can be viewed both as a list-scheduling method and as a special case of a heuristic procedure, due to Nemhauser and Carlson, for quadratic semi-assignment problems. Our main result is that the relative approximation error is $O(1/n)$ when $p$ and $r = (\max_i w_i)/(\min_i w_i)$ are bounded.
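
A minimal Python sketch of one natural list-scheduling rule for the equipartition problem, assigning weights, largest first, to the currently lightest class; treating this as the rule analyzed in the paper is an assumption, and the data below are illustrative.

```python
import heapq

def equipartition(weights, p):
    """Greedy list scheduling: place each weight, in decreasing order,
    into the class with the smallest current total."""
    heap = [(0.0, k) for k in range(p)]        # (class total, class index)
    heapq.heapify(heap)
    classes = [[] for _ in range(p)]
    for w in sorted(weights, reverse=True):
        total, k = heapq.heappop(heap)
        classes[k].append(w)
        heapq.heappush(heap, (total + w, k))
    return classes

weights = [7, 5, 4, 4, 3, 3, 2, 1, 1]          # illustrative weights, sum 30
for k, cls in enumerate(equipartition(weights, 3)):
    print(k, cls, sum(cls))                    # three classes of total 10 each
```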

20.
The quadratic approximation is a three-dimensional analogue of the two-dimensional Padé approximation. A determinantal expression for the polynomial coefficients of the quadratic approximation is given, and a recursive algorithm for constructing these coefficients is derived. The algorithm builds a table of quadratic approximations analogous to the Padé table of rational approximations.
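
A hedged Python sketch, assuming the formulation $p(x) + q(x)\,f(x) + r(x)\,f(x)^2 = O(x^{3m+2})$ with all three polynomials of degree $m$; the coefficient triple, determined up to scaling, is recovered here as an SVD null vector rather than via the paper's determinantal expression or recursion.

```python
import numpy as np
from math import factorial

def quadratic_approximant(c, m):
    """Coefficients (p, q, r) of degree m with
    p(x) + q(x) f(x) + r(x) f(x)^2 = O(x^(3m+2)),
    given the Maclaurin coefficients c of f."""
    N = 3 * m + 2                               # number of vanishing orders
    f = np.asarray(c[:N], dtype=float)
    f2 = np.convolve(f, f)[:N]                  # truncated series of f^2

    def shift_block(series):
        # Column j is the series shifted down j places (multiplication by x^j).
        T = np.zeros((N, m + 1))
        for j in range(m + 1):
            T[j:, j] = series[:N - j]
        return T

    one = np.zeros(N)
    one[0] = 1.0                                # series of the constant 1
    A = np.hstack([shift_block(one), shift_block(f), shift_block(f2)])
    v = np.linalg.svd(A)[2][-1]                 # null vector of the N x (3m+3) system
    return v[:m + 1], v[m + 1:2 * m + 2], v[2 * m + 2:]

# Example: f = exp, m = 1; the first 3m+2 = 5 orders of the residual vanish.
c = [1.0 / factorial(k) for k in range(10)]
p, q, r = quadratic_approximant(c, 1)
print(p, q, r)
```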
