20 similar documents found; search took 15 ms
1.
Inexact Newton methods for the nonlinear complementarity problem (cited 2 times: 0 self-citations, 2 by others)
Jong-Shi Pang, Mathematical Programming, 1986, 36(1): 54-71
An exact Newton method for solving a nonlinear complementarity problem consists of solving a sequence of linear complementarity subproblems. For problems of large size, solving the subproblems exactly can be very expensive. In this paper we study inexact Newton methods for solving the nonlinear complementarity problem, in which the subproblems are solved only up to a certain degree of accuracy. The necessary accuracies needed to preserve the nice features of the exact Newton method are established and analyzed. We also discuss some extensions as well as an application.
This research was based on work supported by the National Science Foundation under grant ECS-8407240.
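As a rough illustration of the idea in this abstract (not Pang's actual scheme or notation), the following numpy sketch linearizes F at each outer iterate and solves the resulting linear complementarity subproblem only approximately, via a fixed number of projected Gauss-Seidel sweeps. All function names, tolerances, and sweep counts are invented for the example.

```python
import numpy as np

def inexact_newton_ncp(F, jac, x0, inner_sweeps=5, tol=1e-8, max_outer=50):
    """Inexact Newton iteration for the NCP: x >= 0, F(x) >= 0, x^T F(x) = 0.
    Each outer step linearizes F and solves the LCP subproblem only
    approximately, using a few projected Gauss-Seidel sweeps."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    for _ in range(max_outer):
        if np.linalg.norm(np.minimum(x, F(x))) < tol:   # min-map residual
            break
        M = jac(x)
        q = F(x) - M @ x                                # data of LCP(q, M)
        y = x.copy()
        for _ in range(inner_sweeps):                   # inexact inner solve
            for i in range(len(y)):
                r = q[i] + M[i] @ y - M[i, i] * y[i]
                y[i] = max(0.0, -r / M[i, i])
        x = y
    return x
```

For an affine F the linearized subproblem coincides with the original LCP, so a handful of inner sweeps per outer step already drives the residual down.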
2.
We consider a class of inexact Newton regularization methods for solving nonlinear inverse problems in Hilbert scales. Under certain conditions we obtain an order-optimal convergence rate result.
3.
《Journal of Computational and Applied Mathematics》1998,93(2):107-122
In this paper we define second-order C-differentiable functions and second-order C-differential operators, describe some of their properties, and propose an inexact generalized Newton method for unconstrained optimization problems in which the objective function is not twice differentiable but is second-order C-differentiable. We prove that the algorithm is linearly or superlinearly convergent, including the case of quadratic convergence, depending on various conditions on the objective function and on the value of the control parameter in the algorithm.
4.
5.
Theoretical Efficiency of an Inexact Newton Method (cited 6 times: 0 self-citations, 6 by others)
We propose a local algorithm for smooth unconstrained optimization problems with n variables. The algorithm is an optimal combination of an exact Newton step with Cholesky factorization and several inexact Newton steps with preconditioned conjugate-gradient subiterations. The preconditioner is taken as the inverse of the Cholesky factorization from the previous exact Newton step. Like the Newton method, which converges with precise Q-order 2, this algorithm also converges with precise Q-order 2. Theoretically, its average number of arithmetic operations per step is much smaller than the corresponding number for the Newton method on medium- and large-scale problems. For instance, when n = 200, the ratio of these two numbers is less than 0.53. Furthermore, the ratio tends to zero approximately at a rate of log 2 / log n as n approaches infinity.
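A minimal numpy sketch of the alternation this abstract describes, assuming a generic smooth convex objective: one exact Newton step via a Cholesky factorization every few iterations, with the remaining steps solved by conjugate gradients preconditioned by the last Cholesky factor. The function names, cycle length, and tolerances are illustrative, not the paper's.

```python
import numpy as np
from numpy.linalg import cholesky, solve

def pcg(H, g, L, tol=1e-10, maxit=100):
    """Preconditioned CG for H d = -g, preconditioner (L L^T)^{-1},
    where L is the Cholesky factor from the last exact Newton step."""
    d = np.zeros_like(g)
    r = -g.copy()
    z = solve(L.T, solve(L, r))       # apply (L L^T)^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Hp = H @ p
        a = rz / (p @ Hp)
        d += a * p
        r -= a * Hp
        if np.linalg.norm(r) < tol:
            break
        z = solve(L.T, solve(L, r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return d

def newton_pcg(grad, hess, x0, cycle=4, iters=20):
    """One exact (Cholesky) Newton step every `cycle` iterations; the
    other steps reuse that factor as the PCG preconditioner."""
    x, L = x0.astype(float).copy(), None
    for k in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break
        H = hess(x)
        if k % cycle == 0 or L is None:
            L = cholesky(H)                    # exact step, keep the factor
            d = solve(L.T, solve(L, -g))
        else:
            d = pcg(H, g, L)                   # inexact step, reuse the factor
        x += d
    return x
```

The point of the design is that between refactorizations the Hessian changes only mildly, so the stale Cholesky factor remains an effective preconditioner and each inexact step costs only a few matrix-vector products.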
6.
I. K. Argyros, Analysis in Theory and Applications, 1997, 13(3): 91-103
In this study, we use inexact Newton methods to find solutions of nonlinear, nondifferentiable operator equations on Banach spaces with a convergence structure. This technique involves the introduction of a generalized norm as an operator from a linear space into a partially ordered Banach space. In this way the metric properties of the examined problem can be analyzed more precisely. Moreover, this approach allows us to derive from the same theorem, on the one hand, semilocal results of Kantorovich type, and on the other hand, global results based on monotonicity considerations. Furthermore, we show that special cases of our results reduce to the corresponding ones already in the literature. Finally, our results are used to solve integral equations that cannot be solved with existing methods.
7.
We present several strong convergence results for the modified Halpern-type proximal point algorithm \({x_{n+1}=\alpha_{n}u+(1-\alpha_{n})J_{\beta_n}x_n+e_{n}}\) (n = 0, 1, . . .; \({u,\,x_0\in H}\) given, and \({J_{\beta_n}=(I+\beta_nA)^{-1}}\) for a maximal monotone operator A) in a real Hilbert space, under new sets of conditions on \({\alpha_n\in(0,1)}\) and \({\beta_n\in(0,\infty)}\). These conditions are weaker than those known to us, and our results extend and improve some recent results such as those of H. K. Xu. We also show how to apply our results to approximate minimizers of convex functionals. In addition, we give convergence rate estimates for a sequence approximating the minimum value of such a functional.
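For intuition, the iteration can be specialized (an assumption for illustration, not from the paper) to a finite-dimensional case where A is a positive definite matrix, so the resolvent \(J_\beta = (I+\beta A)^{-1}\) is a linear solve, the errors e_n are zero, and \(\alpha_n = 1/(n+2)\):

```python
import numpy as np

def halpern_prox(A, u, x0, beta=1.0, n_iter=2000):
    """Sketch of x_{n+1} = a_n u + (1 - a_n) (I + b_n A)^{-1} x_n with
    a_n = 1/(n+2), constant b_n = beta, and zero error terms e_n."""
    n = len(x0)
    J = np.linalg.inv(np.eye(n) + beta * A)   # resolvent of the linear operator A
    x = x0.astype(float).copy()
    for k in range(n_iter):
        a = 1.0 / (k + 2)
        x = a * u + (1 - a) * (J @ x)
    return x
```

Since zer(A) = {0} for an invertible positive definite A, the iterates approach the origin (the projection of u onto the zero set) for any anchor u and starting point x0.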
8.
We introduce the new idea of recurrent functions to provide a semilocal convergence analysis for an inexact Newton-type method, using outer inverses. It turns out that our sufficient convergence conditions are weaker than in earlier studies in many interesting cases (Argyros, 2004 [5] and [6], Argyros, 2007 [7], Dennis, 1971 [14], Deuflhard and Heindl, 1979 [15], Gutiérrez, 1997 [16], Gutiérrez et al., 1995 [17], Häubler, 1986 [18], Huang, 1993 [19], Kantorovich and Akilov, 1982 [20], Nashed and Chen, 1993 [21], Potra, 1982 [22], Potra, 1985 [23]).
9.
This paper addresses the solution of abstract semismooth operator equations in Banach spaces. Two inexact Newton methods are proposed, and their convergence is proved. These methods can be regarded as extensions of existing methods for solving semismooth operator equations in finite-dimensional spaces.
10.
Given a self-concordant barrier function for a convex set, we determine a self-concordant barrier function for its conic hull. As our main result, we derive an "optimal" barrier for the conic hull based on the barrier function for the original set. Important applications of this result include the conic reformulation of a convex problem and the solution of fractional programs by interior-point methods. The problem of minimizing a convex-concave fraction over some convex set can be solved by applying an interior-point method directly to the original nonconvex problem, or by applying it to an equivalent convex reformulation of the original problem. Our main result allows us to analyze the second approach, showing that the rate of convergence is of the same order in both cases.
11.
Abdellah Bnouhachem, Muhammad Aslam Noor, Journal of Mathematical Analysis and Applications, 2006, 324(2): 1195-1212
In this paper, we suggest and analyze a new inexact proximal point method for solving general variational inequalities, which can be considered as an implicit predictor-corrector method. An easily measurable error term with a further relaxed error bound is proposed, and an optimal step length, dependent on the previous points, is obtained by maximizing the profit function. Our results include several known and new techniques for solving variational inequalities and related optimization problems, and can be viewed as an important improvement and refinement of previously known results. Preliminary numerical experiments are included to illustrate the advantage and efficiency of the proposed method.
12.
An Inexact Newton Method Derived from Efficiency Analysis (cited 1 time: 0 self-citations, 1 by others)
We consider solving an unconstrained optimization problem by Newton-PCG-like methods, in which the preconditioned conjugate gradient method is applied to solve the Newton equations. The main question to be investigated is how efficient Newton-PCG-like methods can be from a theoretical point of view. An algorithmic model with several parameters is established. Furthermore, a lower bound on the efficiency measure of the algorithmic model is derived as a function of the parameters. By maximizing this lower-bound function, the parameters are specified and an implementable algorithm is thereby obtained. The efficiency of the implementable algorithm is compared with Newton's method by theoretical analysis and numerical experiments. The results show that this algorithm is competitive.
Mathematics Subject Classification: 90C30, 65K05.
This work was supported by the National Science Foundation of China Grant No. 10371131, and Hong Kong Competitive Earmarked Research Grant CityU 1066/00P from the Hong Kong University Grant Council.
13.
Motivated by the method of Martinez and Qi (Ref. 1), we propose in this paper a globally convergent inexact generalized Newton method for unconstrained optimization problems in which the objective function has a Lipschitz continuous gradient but is not twice differentiable. The method is implementable, globally convergent, and produces monotonically decreasing function values. We prove that it has a locally superlinear, or even quadratic, convergence rate under some mild conditions, which do not assume convexity of the function.
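A hedged sketch of a damped generalized Newton iteration of this flavor: Armijo backtracking enforces the monotone decrease of function values, and `gen_hess` stands for an oracle returning an element of the generalized Hessian. The safeguard constants and all names are invented for the example, not taken from the paper.

```python
import numpy as np

def damped_generalized_newton(f, grad, gen_hess, x0, max_iter=100, tol=1e-10):
    """Generalized Newton with Armijo backtracking for objectives whose
    gradient is Lipschitz but not differentiable everywhere; gen_hess(x)
    returns one element of the generalized Hessian at x."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(gen_hess(x), -g)            # generalized Newton step
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo condition
            t *= 0.5
        x = x + t * d
    return x
```

A typical C^{1,1} but not C^2 test case is f(x) = 0.5 x^T A x - b^T x + 0.5 Σ max(x_i, 0)^2, whose gradient A x - b + max(x, 0) is piecewise smooth.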
14.
Yan Gao, Journal of Applied Mathematics and Computing, 1999, 6(3): 513-522
The Newton method and the quasi-Newton method for solving equations of smooth compositions of semismooth functions are proposed. The Q-superlinear convergence of the Newton method and the Q-linear convergence of the quasi-Newton method are proved. The present methods can be more easily implemented than previous ones for this class of nonsmooth equations.
15.
Xiao Wang, Shuxiong Wang, Hongchao Zhang, Computational Optimization and Applications, 2017, 68(3): 579-618
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated to reduce the stochastic gradient variance. The main feature of the IPSG algorithm is that it allows the proximal subproblems to be solved inexactly while still retaining global convergence with desirable complexity bounds. Different subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived both when the objective function is strongly convex and when it is just generally convex. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
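The variance-reduction mechanism can be sketched for a hypothetical l1-regularized least-squares instance; here the l1 proximal step happens to be available in closed form, whereas the paper's IPSG framework also covers subproblems solved inexactly. All parameter values and names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_svrg(A, b, lam=0.01, eta=0.02, epochs=50, seed=0):
    """Variance-reduced proximal stochastic gradient for
        (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2 + lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x_tilde = np.zeros(d)
    for _ in range(epochs):
        mu = A.T @ (A @ x_tilde - b) / n            # full gradient at the snapshot
        x = x_tilde.copy()
        for _ in range(2 * n):                      # inner stochastic loop
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])           # component gradient at x
            gi_t = A[i] * (A[i] @ x_tilde - b[i])   # ... and at the snapshot
            v = gi - gi_t + mu                      # variance-reduced estimate
            x = soft_threshold(x - eta * v, eta * lam)
        x_tilde = x                                 # refresh the snapshot
    return x_tilde
```

As the iterates approach the snapshot and the snapshot approaches the minimizer, the estimate v loses its variance, which is what permits a constant stepsize.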
16.
We present an inexact multisplitting method for solving linear complementarity problems, based on the inexact splitting method and the multisplitting method. This new method provides a specific realization of the multisplitting method and generalizes many existing matrix splitting methods for linear complementarity problems. Convergence of this new method is proved when the coefficient matrix is an H+-matrix. Two specific iteration forms of the inexact multisplitting method are then presented, in which the inner iterations are implemented either through a matrix splitting method or through a damped Newton method. Convergence properties of both specific forms are analyzed for the cases where the system matrix is either an H+-matrix or a symmetric matrix.
17.
We establish new semilocal convergence results for inexact Newton-type methods for approximating a locally unique solution of a nonlinear equation in a Banach space setting. We show that our sufficient convergence conditions are weaker, and our error-bound estimates tighter, in some cases than in earlier works [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30] and [31]. Special cases and numerical examples are also provided in this study.
18.
In this paper, we consider a generic inexact subgradient algorithm for solving a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise arising in practical applications. Assuming that the computation errors and noise are deterministic and bounded, we study their effect on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, and finite convergence to approximate optimality. We also investigate efficiency estimates of the iterates and apply the inexact subgradient algorithm to the Cobb-Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of the proposed algorithm, especially for large-scale problems.
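A minimal sketch of an inexact projected subgradient iteration with bounded deterministic noise and a diminishing stepsize, on a convex (hence quasi-convex) one-dimensional example; the noise model and all constants are invented for illustration.

```python
import numpy as np

def inexact_subgradient(f, subgrad, project, x0, noise_amp=0.05, steps=5000):
    """Projected subgradient method with a bounded deterministic perturbation
    of the subgradient (the 'computation error') and diminishing stepsize
    t_k = 1/sqrt(k+1); returns the best point seen, as is standard for
    subgradient schemes, since the iteration is not monotone."""
    x = project(x0)
    best_x, best_f = x, f(x)
    for k in range(steps):
        g = subgrad(x) + noise_amp * np.sin(k)   # inexact subgradient
        x = project(x - g / np.sqrt(k + 1.0))    # diminishing stepsize
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

With bounded noise the iterates settle into a neighborhood of the minimizer whose size is governed by the noise level and the stepsize, matching the flavor of the approximate-optimality results the abstract mentions.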
19.
In this paper, some semismooth methods are considered to solve a nonsmooth equation which can arise from a discrete version of the well-known Hamilton-Jacobi-Bellman equation. By using the slant differentiability introduced by Chen, Nashed and Qi in 2000, a semismooth Newton method is proposed. The method is proved to have monotone convergence by suitably choosing the initial iterative point and local superlinear convergence rate. Moreover, an inexact version of the proposed method is introduced, which reduces the cost of computations and still preserves nice convergence properties. Some numerical results are also reported.
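To illustrate the flavor of such a scheme (a simplified sketch, not the paper's method), consider the componentwise equation min(A1 x - b1, A2 x - b2) = 0, a prototypical discrete HJB form with two affine operators. A semismooth Newton step then amounts to selecting, row by row, the operator attaining the minimum and solving the resulting linear system:

```python
import numpy as np

def semismooth_newton_minmap(A1, b1, A2, b2, x0, max_iter=50, tol=1e-10):
    """Semismooth Newton for min(A1 x - b1, A2 x - b2) = 0 (componentwise):
    each step picks, per row, the operator attaining the min and solves
    the assembled linear system with those rows."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r1, r2 = A1 @ x - b1, A2 @ x - b2
        if np.linalg.norm(np.minimum(r1, r2)) < tol:
            break
        pick1 = r1 <= r2                         # rows where A1 attains the min
        A = np.where(pick1[:, None], A1, A2)     # assemble row by row
        b = np.where(pick1, b1, b2)
        x = np.linalg.solve(A, b)
    return x
```

When both operators are affine M-matrices this reduces to classical policy iteration and typically terminates in very few steps.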