Similar Literature
20 similar documents were found (search time: 15 ms)
1.
In this work, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or slanting filter rule. Each iteration is decomposed into two independent phases: a feasibility phase, which reduces an infeasibility measure without evaluating the objective function, and an optimality phase, which reduces the objective function value. As the derivatives of the objective function are not available, the optimality step is computed by derivative-free trust-region internal iterations. Any technique for constructing the trust-region models can be used, provided the gradient of the model is a reasonable approximation of the gradient of the objective function at the current point. Under this and other classical assumptions, we prove that the full steps are efficient in the sense that, near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence of the algorithm. Numerical experiments show the effectiveness of the proposed method.
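A minimal sketch of the two-phase structure described above: `restore`, `infeasibility`, and `model_min` are hypothetical problem-specific helpers standing in for the feasibility phase and the derivative-free trust-region model minimization, and the crude acceptance test replaces the paper's flat/slanting filter rules.

```python
def inexact_restoration(x, f, infeasibility, restore, model_min,
                        max_iter=50, delta=1.0):
    """Outer loop of a two-phase inexact-restoration scheme (illustrative only)."""
    for _ in range(max_iter):
        # Feasibility phase: reduce an infeasibility measure, never evaluating f.
        y = restore(x)
        # Optimality phase: derivative-free trust-region step on a model of f.
        d = model_min(y, delta)
        trial = y + d
        # Crude acceptance test; the actual method uses filter rules instead.
        if f(trial) < f(x) or infeasibility(trial) < infeasibility(x):
            x, delta = trial, min(2.0 * delta, 10.0)
        else:
            delta *= 0.5          # reject: shrink the trust region
    return x
```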

2.
We apply a flexible inexact-restoration (IR) algorithm to optimization problems with multiobjective constraints under the weighted-sum scalarization approach. In IR methods, each iteration has two phases: in the first phase one aims to improve feasibility, and in the second phase one minimizes a suitable objective function. We show that the IR framework offers a natural way to exploit the structure of the problem in both phases. Numerical experiments are conducted on portfolio optimization, the Moré–Garbow–Hillstrom collection, and random fourth-degree polynomials, where we show the advantages of exploiting the structure of the problem.
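As a reminder of the scalarization step, a weighted-sum scalarization simply collapses the objective vector into one function that any single-objective solver can handle; a small sketch with illustrative objectives (f1, f2, and the weights are placeholders, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objectives (placeholders): a variance-like term and a return-like term.
def f1(x):
    return x @ x

def f2(x):
    return -np.sum(x)

def weighted_sum(x, w=(0.7, 0.3)):
    """Weighted-sum scalarization of the multiobjective problem."""
    return w[0] * f1(x) + w[1] * f2(x)

x0 = np.full(5, 0.2)
print(minimize(weighted_sum, x0).x)   # a standard single-objective solver now applies
```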

3.
Summary. In this paper, we consider some nonlinear inexact Uzawa methods for iteratively solving linear saddle-point problems. By means of a new technique, we first give an essential improvement on the convergence results of Bramble–Pasciak–Vassilev for a known nonlinear inexact Uzawa algorithm. Then we propose two new algorithms, which can be viewed as combinations of the known nonlinear inexact Uzawa method with the classical steepest descent method and the conjugate gradient method, respectively. The two new algorithms converge under very practical conditions and do not require any a priori estimates of the minimal and maximal eigenvalues of the preconditioned systems involved, including the preconditioned Schur complement. Numerical results for the algorithms applied to the Stokes problem and to a purely linear system of algebraic equations are presented to show their efficiency. Received December 8, 1999 / Revised version received September 8, 2001 / Published online March 8, 2002. * The work of this author was partially supported by a grant from The Institute of Mathematical Sciences, CUHK. ** The work of this author was partially supported by Hong Kong RGC Grants CUHK 4292/00P and CUHK 4244/01P.
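For orientation, the basic inexact Uzawa iteration for the saddle-point system [[A, Bᵀ], [B, 0]][x; y] = [f; g]; `solveA_approx` stands in for the inexact inner solve, and the nonlinear variants in the paper adapt the step length via steepest descent or conjugate gradients rather than using a fixed `alpha` (a simplified sketch, not the authors' algorithm):

```python
import numpy as np

def inexact_uzawa(A, B, f, g, solveA_approx, alpha=0.5, tol=1e-8, max_iter=500):
    """Linear inexact Uzawa sketch for [[A, B^T], [B, 0]] [x; y] = [f; g]."""
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[0])
    for _ in range(max_iter):
        x = x + solveA_approx(f - A @ x - B.T @ y)   # inexact solve on the x-block
        y = y + alpha * (B @ x - g)                   # Richardson step on the Schur complement
        r = np.concatenate([f - A @ x - B.T @ y, B @ x - g])
        if np.linalg.norm(r) < tol:
            break
    return x, y
```

With `solveA_approx = lambda r: np.linalg.solve(A, r)`, the sketch reduces to the exact Uzawa method.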

4.
Parabolic equations with nonlocal boundary conditions have received considerable attention in recent years. In this paper, new high-order algorithms for the linear diffusion-reaction problem are derived. The convergence of the new schemes is studied, and numerical examples are given to show the efficiency of the new methods in solving linear and nonlinear diffusion-reaction equations with these nonclassical conditions.

5.
A new application-oriented notion of a relatively A-maximal monotonicity (RMM) framework is introduced and then applied to the approximation solvability of a general class of inclusion problems, generalizing other existing results on linear convergence, including Rockafellar's theorem (1976) on the linear convergence of the proximal point algorithm in a real Hilbert space setting. The obtained results not only generalize most of the existing investigations, but also reduce smoothly to the case of maximal monotone mappings and the corresponding classical resolvent operators. Furthermore, our proof approach differs significantly from that of Rockafellar's celebrated work, where the Lipschitz continuity of $M^{-1}$, the inverse of $M:X\to 2^X$, at zero is assumed in order to achieve linear convergence of the proximal point algorithm. Note that the relatively A-maximal monotonicity framework appears well suited to generalizing the classical Yosida approximation (which is applied and studied in the literature mostly via the classical resolvent operator), which in turn can be applied to first-order evolution equations as well as evolution inclusions.
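For background on the linear-convergence result being generalized, the classical proximal point iteration applies the resolvent (I + cM)⁻¹ at each step; a sketch for the special case M = ∂f with f convex, where the resolvent is computed by numerically minimizing the proximal subproblem (illustrative only; the paper's relative A-maximal monotone setting replaces this classical resolvent):

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, c=1.0, iters=30):
    """Classical proximal point iteration x_{k+1} = (I + c M)^{-1} x_k for M = df,
    i.e. x_{k+1} = argmin_z f(z) + ||z - x_k||^2 / (2c)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        prox = minimize(lambda z: f(z) + np.sum((z - x) ** 2) / (2 * c), x)
        x = prox.x
    return x

# Example: a convex quadratic; fixed points of the iteration are the zeros of df.
print(proximal_point(lambda z: 0.5 * np.sum((z - 3.0) ** 2), np.zeros(2)))
```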

6.
We study the local convergence of several inexact numerical algorithms closely related to Newton’s method for the solution of a simple eigenpair of the general nonlinear eigenvalue problem $T(\lambda )v=0$. We investigate inverse iteration, Rayleigh quotient iteration, residual inverse iteration, and the single-vector Jacobi–Davidson method, analyzing the impact of the tolerances chosen for the approximate solution of the linear systems arising in these algorithms on the order of the local convergence rates. We show that the inexact algorithms can achieve the same order of convergence as the exact methods if appropriate sequences of tolerances are applied to the inner solves. We discuss the connections and emphasize the differences between the standard inexact Newton’s method and these inexact algorithms. When $T(\lambda )$ is locally symmetric, the use of a nonlinear Rayleigh functional is shown to be fundamental in achieving higher-order convergence rates. The convergence results are illustrated by numerical experiments.

7.
Semidefinite relaxations of certain combinatorial optimization problems lead to approximation algorithms with performance guarantees. For large-scale problems, it may not be computationally feasible to solve the semidefinite relaxations to optimality. In this paper, we investigate the effect on the performance guarantees of an approximate solution to the semidefinite relaxation for MaxCut, Max2Sat, and Max3Sat. We show that it is possible to make simple modifications to the approximate solutions and obtain performance guarantees that depend linearly on the most negative eigenvalue of the approximate solution, the size of the problem, and the duality gap. In every case, we recover the original performance guarantees in the limit as the solution approaches the optimal solution to the semidefinite relaxation.
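The MaxCut guarantee in question comes from Goemans–Williamson random-hyperplane rounding of the SDP solution X ⪰ 0; the sketch below clips negative eigenvalues of an approximate X before factorizing it, loosely in the spirit of the "simple modifications" mentioned above (the exact modification analyzed in the paper may differ):

```python
import numpy as np

def round_maxcut(X, W, trials=100, seed=0):
    """Random-hyperplane rounding of an (approximate) MaxCut SDP solution X,
    given the symmetric weight matrix W with zero diagonal."""
    rng = np.random.default_rng(seed)
    lam, Q = np.linalg.eigh((X + X.T) / 2)
    V = Q * np.sqrt(np.clip(lam, 0.0, None))   # Gram factor: X ~ V V^T (negatives clipped)
    best_val, best_cut = -np.inf, None
    for _ in range(trials):
        r = rng.standard_normal(V.shape[1])
        x = np.sign(V @ r)                     # side of the hyperplane for each vertex
        x[x == 0] = 1
        val = 0.25 * np.sum(W * (1 - np.outer(x, x)))   # weight of the induced cut
        if val > best_val:
            best_val, best_cut = val, x
    return best_cut, best_val
```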

8.
Using our new concept of recurrent functions, we approximate a locally unique solution of a nonlinear equation by an inexact two-step Newton-like algorithm in a Banach space setting. Our semilocal analysis provides tighter error bounds than before and, in many interesting cases, weaker sufficient convergence conditions. Applications, including the solution of a nonlinear Chandrasekhar-type integral equation arising in radiative transfer and a two-point boundary value problem with a Green kernel, are also provided in this study.
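A generic two-step Newton-like iteration of this flavour reuses one Jacobian for a predictor and a corrector substep; the sketch below is the exact-solve version (the paper's inexact variant only solves the linear systems approximately, and its precise scheme may differ):

```python
import numpy as np

def two_step_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Two-step Newton-like method: the Jacobian at x_k is reused for the corrector."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))      # predictor (Newton step)
        x_new = y - np.linalg.solve(Jx, F(y))  # corrector reusing the same Jacobian
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x0^2 + x1 - 3 = 0 and x0 + x1^2 - 5 = 0 (solution (1, 2)).
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
print(two_step_newton(F, J, np.array([1.0, 1.0])))
```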

9.
The paper compares the numerical performance of the LDL decomposition form of the BFGS variable-metric algorithm, the Dennis–Mei dogleg algorithm with the BFGS update, and Davidon's projections with the BFGS update against the straight BFGS update on a number of standard test problems. Numerical results indicate that the standard BFGS algorithm is superior to all of the more complex strategies. This research was supported by the National Science Foundation under Research Grant No. MCS77-07327.
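For reference, the straight BFGS update that the comparison favours, written for the inverse-Hessian approximation H (a textbook formula, not code from the paper):

```python
import numpy as np

def bfgs_update(H, s, y):
    """One straight BFGS update of the inverse-Hessian approximation H,
    given the step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k.
    Assumes the curvature condition y.s > 0 (e.g., enforced by a Wolfe line search)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
```

A quasi-Newton step is then d = -H g followed by a line search; the LDL, dogleg, and projection variants compared above wrap this same secant information in additional machinery.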

10.
In this paper we construct and analyze new non-overlapping domain decomposition preconditioners for the solution of second-order elliptic and parabolic boundary value problems. The preconditioners are developed using uniform preconditioners on the subdomains instead of exact solves. They exhibit the same asymptotic condition number growth as the corresponding preconditioners with exact subdomain solves and are much more efficient computationally. Moreover, this asymptotic condition number growth is bounded independently of jumps in the operator coefficients across subdomain boundaries. We also show that our preconditioners fit into the additive Schwarz framework with appropriately chosen subspace decompositions. Condition numbers associated with the new algorithms are computed numerically in several cases and compared with those of the corresponding algorithms in which exact subdomain solves are used.
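The additive Schwarz framework mentioned at the end amounts to summing (restricted) subdomain solves; a dense one-level sketch, where `subdomains` is a list of index sets and `subsolve` may be an inexact local solver in place of the exact one (an illustration of the framework, not the paper's preconditioner):

```python
import numpy as np

def additive_schwarz_apply(A, subdomains, r, subsolve=None):
    """Apply a one-level additive Schwarz preconditioner: z = sum_i R_i^T A_i^{-1} R_i r."""
    z = np.zeros_like(r)
    for idx in subdomains:
        Ai = A[np.ix_(idx, idx)]          # local subdomain matrix A_i = R_i A R_i^T
        ri = r[idx]                       # restricted residual R_i r
        zi = np.linalg.solve(Ai, ri) if subsolve is None else subsolve(Ai, ri)
        z[idx] += zi                      # prolong and accumulate R_i^T z_i
    return z
```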

11.
1. Introduction. In this paper we analyze the convergence of multiplicative iterative algorithms for the minimization of a differentiable function defined on the positive orthant of R^n. The algorithm is suggested by Eggermont [1], and is related to the EM [2] (Expectation–Maximization) algorithm for positron emission tomography [3] and image reconstruction [4]. We consider the problem min f(x) s.t. x ≥ 0. The multiplicative iterative algorithms have the form … for j = 1, 2, …, n, with $\lambda_k$ determined through a line search. While Iusem [5] established an elegant conv…

12.
We consider solving the unconstrained minimization problem using an iterative method derived from the third-order super-Halley method. Each iteration of the super-Halley method requires the solution of two linear systems of equations. We show a practical implementation using an iterative method to solve the linear systems. This paper introduces an array-of-arrays (jagged) data structure for storing the second and third derivatives of a multivariate function, together with suitable termination criteria for the (inner) iterative method to achieve a cubic rate of convergence. Using a jagged compressed diagonal storage of the Hessian matrices and of the tensor, numerical results show that storing the diagonals is more efficient than the row- or column-oriented approach when an iterative method is used to solve the linear systems of equations.
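To illustrate the array-of-arrays (jagged) idea in its simplest form, the sketch below stores only the lower-triangular rows of a symmetric Hessian and performs a matrix–vector product from that storage; the compressed-diagonal scheme and the third-derivative tensor used in the paper are more elaborate (names here are illustrative):

```python
import numpy as np

def to_jagged_lower(H):
    """Store a symmetric Hessian as a jagged array of its lower-triangular rows."""
    return [np.array(H[i, :i + 1]) for i in range(H.shape[0])]

def jagged_matvec(rows, x):
    """Compute y = H x using only the jagged lower-triangular storage."""
    n = len(rows)
    y = np.zeros(n)
    for i in range(n):
        y[i] += rows[i] @ x[:i + 1]       # row i, columns 0..i
        y[:i] += rows[i][:i] * x[i]       # mirror the strictly lower part (symmetry)
    return y

H = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 2.0], [0.0, 2.0, 5.0]])
rows = to_jagged_lower(H)
print(np.allclose(jagged_matvec(rows, np.array([1.0, 2.0, 3.0])), H @ [1.0, 2.0, 3.0]))
```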

13.
Clusterization is one of the most widespread problems in data analysis, and there are many approaches and methods for its solution. However, the result of clusterization depends strongly on the choice of the feature space, on the object proximity measures, and on the method used to formalize the concepts of object and cluster equivalence. As a result, different solutions can be far apart from each other; they can be degenerate or quite different from the actually existing groupings. In this paper, clusterizations obtained by groups of algorithms are considered; these methods make it possible to construct consistent solutions that best match the actually existing groupings.

14.
The technique of kriging is of fundamental importance in applied sciences such as hydrology, meteorology, soil science, and mining. By using kriging, not only can estimates of the natural phenomena be determined, but the estimation variances also reflect the uncertainty of the estimation process. The sampling points for kriging should be selected to minimize this uncertainty, that is, to minimize the estimation variance of the kriging estimator. In this paper four algorithms for optimal observation network design are compared. Two of the algorithms give global optima, while the others give only suboptimal solutions. The computer time required by the heuristic algorithms, which give only suboptimal solutions, is much less than that of the optimizing procedures. It is also shown on the basis of our experiments that the suboptimal solutions are either optimal or very close to optimal; consequently, on the basis of our simulated examples, the heuristic algorithms are highly recommended for practical applications.
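To make the design objective concrete: for simple kriging the estimation variance at a target s0 is sigma^2(s0) = C(0) - c^T C^{-1} c, where C is the covariance matrix of the sampled sites and c holds their covariances with the target. The sketch below uses an exponential covariance and a greedy point-selection heuristic; both are illustrative stand-ins and not necessarily among the four algorithms compared in the paper.

```python
import numpy as np

def cov(h, sill=1.0, rng_par=2.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng_par)

def kriging_variance(sites, target, sill=1.0, rng_par=2.0):
    """Simple-kriging estimation variance at `target` given sampled `sites` (m x d)."""
    if len(sites) == 0:
        return sill
    D = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    C = cov(D, sill, rng_par)
    c = cov(np.linalg.norm(sites - target, axis=-1), sill, rng_par)
    return sill - c @ np.linalg.solve(C, c)

def greedy_design(candidates, targets, k):
    """Greedily add the observation point that most reduces the worst-case variance."""
    chosen = []
    for _ in range(k):
        pool = [p for p in candidates.tolist() if p not in chosen]
        best = min(pool, key=lambda p: max(
            kriging_variance(np.array(chosen + [p]), t) for t in targets))
        chosen.append(best)
    return np.array(chosen)

grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
print(greedy_design(grid, targets=grid[::6], k=3))
```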

15.
Translated from Chislennye Metody Resheniya Obratnykh Zadach Matematicheskoi Fiziki, pp. 97–105.

16.
This paper considers the problem of minimizing a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state and the parameter are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. Four types of gradient-restoration algorithms are considered, and their relative efficiency (in terms of the number of iterations for convergence) is evaluated. The algorithms considered are as follows: sequential gradient-restoration algorithm, complete restoration (SGRA-CR); sequential gradient-restoration algorithm, incomplete restoration (SGRA-IR); combined gradient-restoration algorithm, no restoration (CGRA-NR); and combined gradient-restoration algorithm, incomplete restoration (CGRA-IR). Evaluation of these algorithms is accomplished through six numerical examples. The results indicate that (i) the inclusion of a restoration phase is necessary for rapid convergence and (ii) while SGRA-CR is the most desirable algorithm if feasibility of the suboptimal solutions is required, rapidity of convergence to the optimal solution can be increased by employing algorithms with incomplete restoration, in particular CGRA-IR. This research was supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-72-2185.

17.
A variety of algorithms have been proposed to solve the bicriterion shortest path problem. This article analyzes and compares the performance of three best-first (label-setting) algorithms that accept heuristic information to improve efficiency: NAMOA, MOA, and Tung & Chew’s algorithm (TC). A set of experiments explores the impact of heuristic information on search efficiency and the relative performance of the algorithms. The analysis reveals that NAMOA is the best option for difficult problems. Its time performance can benefit considerably from heuristic information, though not in all cases. The performance of TC is similar but somewhat worse. However, the time performance of MOA is found to degrade considerably with the use of heuristic information in most cases. Explanations are provided for these phenomena.
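The operation all three label-setting algorithms share is maintaining, per node, a set of mutually nondominated cost vectors (labels); a minimal sketch of the dominance test and label insertion (the data and names are illustrative):

```python
def dominates(a, b):
    """True if cost vector a Pareto-dominates b (<= everywhere, < somewhere)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def insert_label(labels, new):
    """Keep only nondominated labels after trying to add `new`."""
    if any(dominates(old, new) or old == new for old in labels):
        return labels                                   # dominated or duplicate: discard
    return [old for old in labels if not dominates(new, old)] + [new]

# Example with (distance, time) costs at one node:
labels = []
for c in [(4, 9), (6, 5), (5, 8), (7, 7)]:
    labels = insert_label(labels, c)
print(labels)   # [(4, 9), (6, 5), (5, 8)] -- (7, 7) is dominated by (6, 5)
```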

18.
In this paper, sequential gradient-restoration algorithms for optimal control problems are considered, and attention is focused on the restoration phase. It is shown that the Lagrange multipliers associated with the restoration phase not only solve the auxiliary minimization problem of the restoration phase, but are also endowed with a supplementary optimality property: they minimize a special functional, quadratic in the multipliers, subject to the multiplier differential equations and boundary conditions, for given state, control, and parameter. Dedicated to L. Cesari. This work was supported by a grant of the National Science Foundation.

19.
Exact, efficient numerical methods are proposed for solving bilinear optimization problems that arise when various solution variants are compared for preferability using an additive value function, in the case of interval estimates of the degrees of superiority of certain criteria over others and of interval restrictions on the growth of preferences along the criteria ranges.

20.
This paper studies strip-packing problems. It is our aim to optimise the position of a number of rectangular shapes on a base surface in order to minimise wastage of material. As the problem is a complex NP-complete one, a heuristic based on genetic algorithms (GA) is used to solve it. The main problem is the wide variety of genetic algorithms available in the literature, which makes it hard to know which variation is best suited to this type of problem. We conclude that using a cyclic crossover GA with fitness by area and variable mutation works best for this problem.
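Cyclic crossover, the operator this study recommends, copies whole cycles alternately from the two parents, so every gene keeps a position it occupied in one of them; a sketch for permutation-encoded placement orders (the fitness-by-area evaluation and variable mutation are not shown):

```python
def cyclic_crossover(p1, p2):
    """Cyclic crossover (CX) for two permutations of the same items.
    Cycles are copied alternately from p1 and p2, so child[i] always equals
    p1[i] or p2[i]; in strip packing the permutation is the placement order."""
    n = len(p1)
    child = [None] * n
    pos_in_p1 = {v: i for i, v in enumerate(p1)}
    take_from_p1 = True
    for start in range(n):
        if child[start] is not None:
            continue
        i = start
        while child[i] is None:                  # walk one cycle of positions
            child[i] = p1[i] if take_from_p1 else p2[i]
            i = pos_in_p1[p2[i]]
        take_from_p1 = not take_from_p1          # alternate the source parent per cycle
    return child

print(cyclic_crossover([1, 2, 3, 4, 5, 6, 7], [3, 4, 1, 2, 7, 6, 5]))
# -> [1, 4, 3, 2, 5, 6, 7]
```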
