Similar Documents
20 similar documents found (search time: 15 ms)
1.
A special class of neural dynamics called Zhang dynamics (ZD), which differs from gradient dynamics (GD), has recently been proposed, generalized, and investigated for solving time-varying problems by following Zhang et al.'s design method. In view of potential digital hardware implementation, discrete-time ZD (DTZD) models are proposed and investigated in this paper for solving nonlinear time-varying equations of the form $f(x,t)=0$. For comparative purposes, the discrete-time GD (DTGD) model and Newton iteration (NI) are also presented for solving such equations. Numerical examples and results demonstrate the efficacy and superiority of the proposed DTZD models over the DTGD model and NI for solving nonlinear time-varying equations.
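The abstract does not reproduce the DTZD update itself. As a hedged sketch only, the code below uses the common Euler discretization of Zhang et al.'s design formula (imposing $\dot{e} = -\gamma e$ on the error $e = f(x,t)$, which gives $x_{k+1} = x_k - \tau(\gamma f(x_k,t_k) + f_t(x_k,t_k))/f_x(x_k,t_k)$) side by side with plain Newton iteration; the sample equation, step size $\tau$, and gain $\gamma$ are assumptions, not values from the paper.

```python
import math

# Hypothetical time-varying equation f(x, t) = x^2 - (2 + sin(t)) = 0,
# with partials f_x = 2x and f_t = -cos(t); none of this is from the paper.
f   = lambda x, t: x**2 - (2.0 + math.sin(t))
f_x = lambda x, t: 2.0 * x
f_t = lambda x, t: -math.cos(t)

tau, gamma = 0.01, 10.0      # assumed sampling step and ZD design gain
x_zd = 1.0                   # ZD iterate, initial guess near the positive root
x_ni = 1.0                   # Newton iterate, same initial guess

for k in range(1000):
    t = k * tau
    # DTZD-style step: Euler discretization of x' = -(gamma*f + f_t)/f_x
    x_zd = x_zd - tau * (gamma * f(x_zd, t) + f_t(x_zd, t)) / f_x(x_zd, t)
    # Newton iteration: ignores the explicit time variation f_t
    x_ni = x_ni - f(x_ni, t) / f_x(x_ni, t)

t_end = 1000 * tau
print("ZD residual:", abs(f(x_zd, t_end)), "| NI residual:", abs(f(x_ni, t_end)))
```

Because the ZD step feeds in the time derivative $f_t$, it tracks the moving root more tightly than a same-rate Newton iteration, which is the kind of comparison the abstract reports.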

2.
In this paper, we derive some results on exponential stabilizability and robustness analysis for discrete-time nonlinear control systems. Using the discrete Gronwall inequality, we also derive an important absolute estimate for the robustness index of the controlled discrete-time nonlinear system.
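For reference, one standard form of the discrete Gronwall inequality (the abstract does not say which variant the paper uses, so this is only the textbook version): if $u_k \le a + \sum_{j=0}^{k-1} b_j u_j$ with $a \ge 0$ and $b_j \ge 0$ for all $j$, then
$$u_k \;\le\; a \prod_{j=0}^{k-1} (1 + b_j) \;\le\; a \exp\Big(\sum_{j=0}^{k-1} b_j\Big).$$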

3.
Bian, Wei; Chen, Xiaojun; Ye, Yinyu. Mathematical Programming (2015) 152(1-2): 301-338
We propose a first-order interior point algorithm for a class of non-Lipschitz and nonconvex minimization problems with box constraints, which arise from applications in...

4.
This paper addresses reachable set bounding for discrete-time switched nonlinear positive systems with mixed time-varying delays and disturbances, a class that contains switched linear positive systems as a special case. By resorting to a new method that does not rely on a common Lyapunov–Krasovskii functional, explicit criteria ensuring that every state trajectory of the system converges exponentially into a prescribed sphere are obtained under average dwell time switching. The results are then extended to more general time-varying systems. Finally, two numerical examples demonstrate the effectiveness of the obtained results.

5.
6.
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems in which the constraint set is the intersection of possibly infinitely many constraint sets, each given as a level set of a convex (but not necessarily differentiable) function. The proposed algorithms are applicable when the whole constraint set of the problem is not known in advance but is instead learned over time through observations; they are also of interest for constrained optimization problems where the constraints are known but their number is large or infinite. We analyze the proposed algorithm both when the objective function is differentiable with Lipschitz gradient and when it is not necessarily differentiable, and we investigate its behavior for both diminishing and non-diminishing stepsize values. Almost sure convergence to an optimal solution is established for diminishing stepsizes. For non-diminishing stepsizes, error bounds are established for the expected distance of the weighted averages of the iterates from the constraint set, as well as for the expected suboptimality of the function values along the weighted averages.
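The updates are not spelled out in the abstract; the sketch below follows a common gradient-step-plus-random-feasibility-step pattern (a Polyak-type subgradient projection onto one randomly observed constraint per iteration) on a made-up instance. The objective, the constraint family, and the stepsizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: minimize f(x) = ||x - p||^2 over the unit ball,
# written as the intersection of infinitely many halfspaces
#   g_a(x) = a.x - 1 <= 0  for all unit vectors a,
# with one randomly sampled constraint observed per iteration.
p = np.array([2.0, 1.0])
x = np.zeros(2)

for k in range(1, 5001):
    alpha = 1.0 / k                     # diminishing stepsize
    v = x - alpha * 2.0 * (x - p)       # gradient step on f
    a = rng.standard_normal(2)
    a /= np.linalg.norm(a)              # random unit normal = random constraint
    viol = max(0.0, a @ v - 1.0)        # g_a(v)_+, zero if already feasible
    x = v - viol * a                    # Polyak-type feasibility step

print("final point:", x, "| norm:", np.linalg.norm(x))  # ~ p/||p|| on the ball
```

With the diminishing stepsize $\alpha_k = 1/k$, the iterates approach the projection of $p$ onto the unit ball, matching the almost sure convergence regime described above.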

7.
On search directions for minimization algorithms   (Total citations: 1; self-citations: 0; citations by others: 1)
Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
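For concreteness, here is a minimal sketch of the coordinate-search scheme the examples target: exact line searches along $e_1, \dots, e_n$ cyclically. The toy objective below is an assumed convex quadratic, on which the scheme happens to converge; Powell's nonconvex three-variable examples are precisely functions on which this same loop approaches a closed path.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_coordinate_search(f, x0, sweeps=50):
    """Exact line search along the coordinate directions e_1, ..., e_n in
    sequence: the scheme whose possible failure Powell's examples show."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(sweeps):
        for i in range(n):
            # one-dimensional exact minimization along the i-th coordinate
            res = minimize_scalar(lambda t: f(np.where(np.arange(n) == i, t, x)))
            x[i] = res.x
    return x

# Assumed toy objective: a convex quadratic in three variables.
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2 + x[0] * x[1] + x[2]**2
print(cyclic_coordinate_search(f, [3.0, 3.0, 3.0]))  # ~ [1.43, -0.86, 0.0]
```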

8.
9.
10.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations between conjugate direction methods for function minimization on the one hand and Gram–Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes, which yields the conjugate Gram–Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions of n variables. Results of numerical experiments with one form of CGS on five standard test functions are presented; these results show that this version of CGS is very effective. The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18. The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms, which served as a good check on results obtained earlier. Special thanks go to Professor M. R. Hestenes, whose constant encouragement and assistance made this paper possible.
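As a reference point, here is a minimal sketch of the classical Hestenes–Stiefel conjugate-gradient routine for the quadratic case only; the CGS framework in the paper is more general, so this is not the paper's full algorithm.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Conjugate-gradient iteration for minimizing (1/2) x^T A x - b^T x,
    with A symmetric positive definite (equivalently, solving A x = b)."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # first search direction
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search step
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # conjugacy coefficient
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```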

11.
We present a simple and unified technique to establish the convergence of various minimization methods. These include the (conceptual) proximal point method, as well as implementable forms such as bundle algorithms, including the classical subgradient relaxation algorithm with divergent series. An important part of Phil Wolfe's research concerned convex minimization. This paper is dedicated to him, on the occasion of his 65th birthday, in appreciation of his creative and pioneering work.
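For concreteness, the (conceptual) proximal point iteration referred to above is
$$x_{k+1} = \arg\min_{x} \Big\{ f(x) + \frac{1}{2\lambda_k}\,\|x - x_k\|^2 \Big\}, \qquad \lambda_k > 0,$$
which bundle methods make implementable by replacing $f$ with a cutting-plane model built from accumulated subgradients.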

12.
13.
Huber et al. (SIAM J Comput 43:1064-1084, 2014) introduced the concept of skew bisubmodularity as a generalization of bisubmodularity in their complexity dichotomy...

14.
This paper is a geometric study of finding general exponential observers for discrete-time nonlinear systems. Using center manifold theory for maps, we derive necessary and sufficient conditions for general exponential observers for Lyapunov stable discrete-time nonlinear systems. As an application of our characterization of general exponential observers, we give a construction procedure for identity exponential observers for discrete-time nonlinear systems.

15.
This paper is a geometric study of the observer design for discrete-time nonlinear systems. First, we obtain necessary and sufficient conditions for local exponential observers for Lyapunov stable discrete-time nonlinear systems. We also show that the definition of local exponential observers can be considerably weakened for neutrally stable discrete-time nonlinear systems. As an application of our local observer design, we consider a class of discrete-time nonlinear systems with an input generator (exosystem) and show that for this class of nonlinear systems, under some stability assumptions, the existence of local exponential observers in the presence of inputs implies and is implied by the existence of local exponential observers in the absence of inputs.
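Neither this abstract nor the previous one states the observer structure. For orientation only, a commonly used candidate for a plant $x_{k+1} = f(x_k)$, $y_k = h(x_k)$ is the Luenberger-type form
$$\hat{x}_{k+1} = f(\hat{x}_k) + K\big(y_k - h(\hat{x}_k)\big),$$
which qualifies as a local exponential observer when the estimation error $\hat{x}_k - x_k$ decays exponentially for all plant and observer states starting close enough to equilibrium; this form is an assumption here, not necessarily the one analyzed in these papers.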

16.
In this paper, we prove a theorem on convergence to a point for descent minimization methods. When the objective function is differentiable, the convergence point is a stationary point. The theorem, however, is applicable also to nondifferentiable functions. The theorem is then applied to prove the convergence of some nongradient algorithms.

17.
Quasi-Newton algorithms minimize a function $F(x)$, $x \in R^n$, searching at any iteration $k$ along the direction $s_k = -H_k g_k$, where $g_k = \nabla F(x_k)$ and $H_k$ approximates in some sense the inverse Hessian of $F(x)$ at $x_k$. When the matrix $H$ is updated according to the formulas in Broyden's family and an exact line search is performed at every iteration, a compact algorithm (free of the Broyden family parameter) can be conceived in terms of the following $n \times n$ matrix: $$H_R = H - \frac{Hgg^T H}{g^T Hg},$$ which can be viewed as an approximating reduced inverse Hessian. In this paper, a new algorithm is proposed which uses at any iteration an $(n-1) \times (n-1)$ matrix $K$ related to $H_R$ by $$H_R = Q \begin{bmatrix} 0 & 0 \\ 0 & K \end{bmatrix} Q^T,$$ where $Q$ is a suitable orthogonal $n \times n$ matrix. The updating formula in terms of the matrix $K$ incorporated in this algorithm is only moderately more complicated than the standard updating formulas for variable-metric methods, but it updates at every iteration a positive definite matrix $K$ instead of the singular matrix $H_R$. Besides being more compact than the algorithms with updating formulas in Broyden's class, a further noticeable feature of the reduced Hessian algorithm is that the downhill condition can be stated in a simple way, so efficient line searches may be implemented.
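As a small numerical illustration of the reduction (just the rank-one formula above, not the paper's full $K$-update; the data are made up):

```python
import numpy as np

# Assumed toy data: a symmetric positive definite H and a gradient g.
H = np.array([[2.0, 0.3], [0.3, 1.0]])
g = np.array([1.0, -2.0])

# Reduced inverse-Hessian approximation: H_R = H - H g g^T H / (g^T H g).
Hg = H @ g
H_R = H - np.outer(Hg, Hg) / (g @ Hg)

print(H_R @ g)                   # ~ [0, 0]: g lies in the null space of H_R
print(np.linalg.eigvalsh(H_R))   # one zero eigenvalue, so H_R is singular
```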

18.
A result of Spencer states that every collection of n sets over a universe of size n admits a coloring of the ground set with discrepancy $O(\sqrt{n})$. A geometric generalization of this result was given by Gluskin (see also Giannopoulos), who showed that every symmetric convex body $K$ with Gaussian measure at least $e^{-\epsilon n}$, for a small enough $\epsilon > 0$, contains a point $y \in K$ a constant fraction of whose coordinates are in $\{-1, +1\}$. This is often called a partial coloring result. While the proofs of both these results were inherently non-algorithmic, recently Bansal (see also Lovett–Meka) gave a polynomial time algorithm for Spencer's setting, and Rothvoß gave a randomized polynomial time algorithm obtaining the same guarantee as the result of Gluskin and Giannopoulos. This paper contains several related results which combine techniques from convex geometry to analyze simple and efficient algorithms for discrepancy minimization. First, we prove another constructive version of the result of Gluskin and Giannopoulos, in which the coloring is attained via the optimization of a linear function. This implies a linear programming based algorithm for combinatorial discrepancy obtaining the same result as Spencer. Our second result suggests a new approach to obtain partial colorings, one which is also valid in the non-symmetric case: every (possibly non-symmetric) convex body $K$ with Gaussian measure at least $e^{-\epsilon n}$, for a small enough $\epsilon > 0$, contains a point $y \in K$ a constant fraction of whose coordinates are in $\{-1, +1\}$. Finally, we give a simple proof that for any $\delta > 0$ there exists a constant $c > 0$ such that, given a body $K$ with Gaussian measure at least $\delta$, a uniformly random $x$ from $\{-1, 1\}^n$ is in $cK$ with constant probability. This gives an algorithmic version of a special case of a result of Banaszczyk.

19.
In this paper, two PVD-type (parallel variable distribution) algorithms are proposed for solving inseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithm uses reduced gradients to construct the PVD directions in parallel, which greatly reduces the amount of computation per iteration and is closer to practical applications for solving large-scale nonlinear programming problems. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.

20.
Optimization (2012) 61(6): 627-639
Abstract: In this article, we consider the concave quadratic programming problem, which is known to be NP-hard. Based on the improved global optimality conditions of Dür, Horst and Locatelli [Necessary and sufficient global optimality conditions for convex maximization revisited, Journal of Mathematical Analysis and Applications, 217 (1998), 637-649] and Hiriart-Urruty and Ledyaev [A note on the characterization of the global maxima of a convex function over a convex set, Journal of Convex Analysis, 3 (1996), 55-61], we develop a new approach for solving concave quadratic programming problems. The main idea of the algorithms is to generate a sequence of local minimizers ending either at a global optimal solution or at an approximate global optimal solution within a finite number of iterations. At each iteration, the algorithms solve a number of linear programming problems with the same constraints as the original problem. We also present convergence properties of the proposed algorithms under some conditions. The efficiency of the algorithms is demonstrated with some numerical examples.
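The abstract does not detail the LP subproblems. As a generic illustration only, the sketch below performs the standard linearization step for minimizing a concave quadratic over a polytope: solve an LP with the same constraints as the original problem using the current gradient, then move to its solution, which cannot increase a concave objective. The problem data are assumptions and the scheme is not necessarily the authors' algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed toy problem: minimize the concave quadratic f(x) = -||x||^2 / 2
# over the polytope {x : A x <= b, x >= 0}; the data below are made up.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])

x = np.array([0.5, 0.5])                 # feasible starting point
for _ in range(20):
    grad = -x                            # gradient of f at x
    # Linearization step: an LP with the same constraints as the original
    # problem; for concave f, some vertex minimizes the linearization.
    res = linprog(grad, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    if grad @ (res.x - x) >= -1e-12:     # no descent direction: stop
        break
    x = res.x
print("local minimizer candidate:", x)
```

By concavity, $f(y) \le f(x) + \nabla f(x)^T (y - x)$, so each accepted LP solution strictly decreases $f$, and the loop stops at a vertex that is a local minimizer in this linearized sense.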
