20 similar documents found (search time: 687 ms)
1.
In multi-objective convex optimization it is necessary to approximate the generally infinite set of nondominated points. We propose a method for approximating the nondominated set of a multi-objective nonlinear programming problem in which the objective functions and the feasible set are convex. This method extends Benson's outer approximation algorithm for multi-objective linear programming problems. We prove that it provides a set of weakly ε-nondominated points. For the case that the objectives and constraints are differentiable, we describe an efficient way to carry out the main step of the algorithm: the construction of a hyperplane separating an exterior point from the feasible set in objective space. We provide examples showing that this cannot always be done in the same way when the objectives or constraints are non-differentiable.
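For the differentiable case, the separating-hyperplane step can be sketched as follows. This is a hedged toy illustration, not the paper's procedure: the feasible set in objective space is assumed to be {y : g(y) <= 0} for a single smooth convex g, and all names and the unit-disk instance are invented for the example.

```python
# Sketch of the separating-hyperplane step in a Benson-type outer
# approximation (illustrative assumptions: feasible set {y : g(y) <= 0}
# for one smooth convex g; interior point with g < 0 is known).

def bisect_boundary(g, interior, exterior, tol=1e-10):
    """Find where the segment [interior, exterior] crosses g = 0."""
    lo, hi = 0.0, 1.0  # g(interior) < 0 <= g(exterior)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        y = [a + mid * (b - a) for a, b in zip(interior, exterior)]
        if g(y) <= 0:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    return [a + t * (b - a) for a, b in zip(interior, exterior)]

def supporting_hyperplane(grad_g, y):
    """Hyperplane {z : n.(z - y) = 0} with outward normal n = grad g(y)."""
    n = grad_g(y)
    offset = sum(ni * yi for ni, yi in zip(n, y))
    return n, offset  # feasible side: n.z <= offset

# Toy instance: unit disk g(y) = y1^2 + y2^2 - 1, exterior point (2, 0).
g = lambda y: y[0] ** 2 + y[1] ** 2 - 1.0
grad_g = lambda y: [2 * y[0], 2 * y[1]]

y_star = bisect_boundary(g, interior=[0.0, 0.0], exterior=[2.0, 0.0])
n, b = supporting_hyperplane(grad_g, y_star)
```

By convexity of g, every feasible z satisfies n·z ≤ b while the exterior point violates it, so the hyperplane can serve as a cut. The abstract's point is precisely that this gradient-based construction has no direct analogue when g is non-differentiable.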
2.
Alireza Nazemi, Elahe Sharifi. Communications in Nonlinear Science & Numerical Simulation, 2013, 18(3): 692-709
In this paper, a neural network model is constructed, on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle, to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem; a neural network model is then constructed for solving the resulting convex program. By employing a Lyapunov function approach, it is shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.
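The convex-reformulation idea can be sketched in a few lines. This is an illustrative stand-in, not the paper's network: the posynomial GP min t + 1/t over t > 0 becomes the convex problem min e^x + e^(-x) under x = log t, and the "neural network" is modeled as the gradient flow x' = -f'(x), discretized by forward Euler.

```python
import math

# Hedged sketch (not the paper's exact dynamics): solve the GP
#   minimize t + 1/t over t > 0
# via the convexifying substitution x = log t, f(x) = e^x + e^{-x},
# using the gradient-flow "network" x' = -f'(x) with forward Euler.

def gp_gradient_flow(x0=2.0, step=0.05, iters=500):
    x = x0
    for _ in range(iters):
        x -= step * (math.exp(x) - math.exp(-x))  # x' = -f'(x)
    return x

x_star = gp_gradient_flow()
t_star = math.exp(x_star)  # back-transform to the GP variable
```

The flow settles at x = 0, i.e. t = 1 with objective value 2, the GP optimum; the Lyapunov argument in the abstract corresponds to f itself decreasing along trajectories.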
3.
The paper considers the solution of linear programming problems with p-order conic constraints, which arise in a certain class of stochastic optimization models with risk objectives or constraints. The proposed approach constructs polyhedral approximations of the p-order cones and then invokes a Benders decomposition scheme that allows the approximating problems to be solved efficiently. A case study of portfolio optimization with p-order conic constraints demonstrates that the developed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
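The polyhedral-approximation idea rests on the dual representation ||x||_p = max{d·x : ||d||_q <= 1} with 1/p + 1/q = 1: sampling finitely many dual directions d_k replaces the cone constraint ||x||_p <= t with linear constraints d_k·x <= t. The sketch below (direction counts and names are illustrative, not the paper's construction) checks how tight such an outer approximation is in 2-D.

```python
import math

# Hedged sketch of polyhedral approximation of a p-order cone in 2-D:
# each sampled dual direction d with ||d||_q = 1 yields the valid linear
# inequality d.x <= t, and the maximum over directions underestimates
# ||x||_p, tightly as the number of directions grows.

def q_normalize(d, q):
    nq = (abs(d[0]) ** q + abs(d[1]) ** q) ** (1.0 / q)
    return (d[0] / nq, d[1] / nq)

def polyhedral_p_norm(x, p, n_dirs=400):
    q = p / (p - 1.0)  # dual exponent, 1/p + 1/q = 1
    best = 0.0
    for k in range(n_dirs):
        a = 2.0 * math.pi * k / n_dirs
        d = q_normalize((math.cos(a), math.sin(a)), q)
        best = max(best, d[0] * x[0] + d[1] * x[1])
    return best  # lower bound on ||x||_p

x = (1.0, 2.0)
approx = polyhedral_p_norm(x, p=3.0)
exact = (abs(x[0]) ** 3 + abs(x[1]) ** 3) ** (1.0 / 3.0)
```

By Hölder's inequality the approximation never exceeds the true p-norm, so the polyhedron is an outer approximation of the cone; the paper's contribution is to exploit the resulting LP structure inside a Benders decomposition.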
4.
Zhang Xiangsun (章祥荪). Acta Mathematicae Applicatae Sinica (English Series), 1996, 12(1): 1-10
ZHANG XIANGSUN (章祥荪) (Institute of Applied Mathematics, the Chinese Academy of Sciences, Beijing 100080, China). Received June 18, 1994. This work i...
5.
Communications in Nonlinear Science & Numerical Simulation, 2014, 19(4): 789-798
In this paper, optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed for the optimization problem. We prove that every optimal solution of the optimization problem is an equilibrium point of the neural network and, conversely, that every equilibrium point satisfying the linear constraints is an optimal solution. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation is given to illustrate the global convergence of the neural network, and applications in business and chemistry demonstrate its effectiveness.
6.
Optimization, 2012, 61(9): 1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, modelled as a differential inclusion, is a generalization of the steepest-descent neural network. It is proved that the set of equilibrium points of the proposed differential inclusion coincides with the set of optimal solutions of the considered optimization problem. Moreover, it is shown that the solution trajectory converges to an element of the optimal solution set and that the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing these theoretical results, an algorithm is designed for solving such problems. Typical examples confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
7.
Xiang Li, Yang Zhang, Hau-San Wong, Zhongfeng Qin. Journal of Computational and Applied Mathematics, 2009, 233(2): 264-278
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean–variance model, the entropy optimization model and the chance-constrained programming model. To solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating simulated annealing, neural networks and fuzzy simulation, where the neural network approximates the expected value and variance of the fuzzy returns and the fuzzy simulation generates the training data for the neural network. Since these models have traditionally been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given on numerical examples, which indicate that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.
8.
9.
Global optimization is a field of mathematical programming dealing with finding global (absolute) minima of multi-dimensional multiextremal functions. Problems of this kind in which the objective function is non-differentiable, satisfies the Lipschitz condition with an unknown Lipschitz constant, and is given as a "black box" are very often encountered in engineering optimization applications. Owing to the presence of multiple local minima and the absence of differentiability, traditional techniques that use gradients and assume a single minimum cannot be applied. These real-life applied problems are attacked here by employing one of the most abstract mathematical objects: space-filling curves. A practical derivative-free deterministic method is proposed that reduces the dimensionality of the problem by using space-filling curves and works simultaneously with all possible estimates of the Lipschitz and Hölder constants. A smart adaptive balancing of the local and global information collected during the search is performed at each iteration. Conditions ensuring convergence of the new method to the global minima are established. Results of numerical experiments on 1000 randomly generated test functions show a clear superiority of the new method over the popular DIRECT method and other competitors.
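The dimensionality-reduction idea can be sketched with a discrete Hilbert curve: a 2-D search becomes a 1-D scan over the curve index. This only illustrates the reduction itself; the actual method drives the resulting 1-D search adaptively with Lipschitz/Hölder estimates rather than scanning uniformly, and all names below are illustrative.

```python
# Illustrative sketch of space-filling-curve dimensionality reduction:
# scan a discrete Hilbert curve so the whole 2-D grid search is indexed
# by a single integer d (the standard d -> (x, y) conversion is used).

def hilbert_d2xy(side, d):
    """Map index d in [0, side*side) to (x, y) on a side x side Hilbert curve."""
    x = y = 0
    t = d
    s = 1
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def minimize_on_curve(f, side=64):
    best_val, best_pt = float("inf"), None
    for d in range(side * side):  # the search is now 1-D in d
        i, j = hilbert_d2xy(side, d)
        p = ((i + 0.5) / side, (j + 0.5) / side)
        v = f(p)
        if v < best_val:
            best_val, best_pt = v, p
    return best_pt, best_val

f = lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2  # toy objective
pt, val = minimize_on_curve(f)
```

The Hilbert curve's Hölder continuity is what makes the 1-D reduction compatible with Lipschitz-based bounds: nearby curve indices map to nearby points in the square.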
10.
Ali Namadchian, Mehdi Ramezani. Numerical Methods for Partial Differential Equations, 2020, 36(3): 637-653
The Fokker–Planck equation is a useful tool for analyzing the transient probability density function of the states of a stochastic differential equation. In this paper, a multilayer perceptron neural network is used to approximate the solution of the Fokker–Planck equation. To allow unconstrained optimization in the network training, a special form of the trial solution is chosen so that it satisfies the initial and boundary conditions. The weights of the neural network are calculated by the Levenberg–Marquardt training algorithm with Bayesian regularization. Three practical examples demonstrate the efficiency of the proposed method.
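The trial-solution trick can be sketched in a Lagaris-style form (an illustrative assumption, not necessarily the exact form used in the paper): the trial density is built so that the initial and boundary conditions hold identically for any network output, leaving the weights free for unconstrained training.

```python
import math

# Hedged sketch of a constrained-by-construction trial solution: for a
# density p(x, t) on [0, 1] with p(0, t) = p(1, t) = 0 and p(x, 0) = p0(x),
#     p_trial(x, t) = p0(x) + t * x * (1 - x) * N(x, t; w)
# satisfies the initial and boundary conditions for ANY network output N,
# so w can be fit by unconstrained optimization. The form and names here
# are illustrative.

def p0(x):  # initial condition, chosen to vanish at x = 0 and x = 1
    return math.sin(math.pi * x)

def net(x, t, w):  # stand-in for a trained multilayer perceptron
    return math.tanh(w[0] * x + w[1] * t + w[2])

def p_trial(x, t, w):
    return p0(x) + t * x * (1.0 - x) * net(x, t, w)

w = [0.3, -0.8, 0.1]  # arbitrary weights: the constraints hold regardless
```

Because the constraints are absorbed into the functional form, training can minimize only the Fokker–Planck residual at collocation points, which is what makes Levenberg–Marquardt applicable directly.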
11.
I. N. da Silva, W. C. Amaral, L. V. R. Arruda. Journal of Optimization Theory and Applications, 2006, 128(3): 563-580
Neural networks consist of highly interconnected, parallel nonlinear processing elements that are extremely effective in computation. This paper presents an architecture of recurrent neural networks that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed explicitly using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points, which represent a solution of the problem considered. The problems that can be treated by the proposed approach include combinatorial optimization problems, dynamic programming problems, and nonlinear optimization problems. Communicated by L. C. W. Dixon
12.
Many engineering optimization problems involve discrete as well as continuous variables, and the presence of nonlinear discrete variables considerably adds to the solution complexity. Very few existing methods can find a globally optimal solution when the objective functions are non-convex and non-differentiable. In this paper, we present a mixed-variable evolutionary programming (MVEP) technique for solving nonlinear optimization problems that contain integer, discrete, zero-one and continuous variables. The MVEP improves the reliability of global search in a mixed-variable space and converges steadily to a good solution. An approach to handling the various kinds of variables and constraints is discussed. Several mixed-variable optimization problems from the literature are tested, demonstrating that the proposed approach is superior to current methods in both solution quality and algorithm robustness.
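A minimal mixed-variable evolutionary programming loop can be sketched as follows. This is a hedged toy version with one continuous and one integer variable, Gaussian and ±1 mutations, and elitist (mu + mu) selection; the MVEP operator set described in the abstract is richer, and the objective here is invented for illustration.

```python
import random

# Illustrative mixed-variable EP sketch: Gaussian mutation for the
# continuous variable, +/-1 mutation for the integer one, elitist
# (mu + mu) selection. Toy objective with optimum at x = 2.3, n = 5.

random.seed(0)

def fitness(x, n):
    return (x - 2.3) ** 2 + (n - 5) ** 2

def mutate(ind):
    x, n = ind
    x = min(5.0, max(0.0, x + random.gauss(0.0, 0.2)))  # continuous part
    if random.random() < 0.3:                           # integer part
        n = min(10, max(0, n + random.choice((-1, 1))))
    return (x, n)

def evolve(pop_size=20, generations=200):
    pop = [(random.uniform(0, 5), random.randint(0, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(ind) for ind in pop]
        # keep the best of parents and offspring (elitism)
        pop = sorted(pop + offspring, key=lambda i: fitness(*i))[:pop_size]
    return pop[0]

best_x, best_n = evolve()
```

Handling each variable type with its own mutation operator, as above, is the core idea that lets one search loop cover integer, discrete, zero-one and continuous variables simultaneously.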
13.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium point, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for such problems.
14.
15.
Optimal design of arch dams including dam–water–foundation rock interaction is achieved using soft computing techniques. First, the linear dynamic behavior of the arch dam–water–foundation rock system subjected to earthquake ground motion is simulated using the finite element method; then, to reduce the computational cost of the optimization process, a wavelet back-propagation neural network (WBPNN) is designed to predict the arch dam response instead of evaluating it directly by a time-consuming finite element analysis (FEA). To enhance the generality of the neural network, a dam grading technique (DGT) is also introduced. To assess the computational efficiency of the proposed methodology for arch dam optimization, an actual arch dam is considered. The optimization is implemented via the simultaneous perturbation stochastic approximation (SPSA) algorithm for various conditions of the interaction problem. Numerical results show the merits of the suggested techniques, and it is found that considering the dam–water–foundation rock interaction plays an important role in the safe design of an arch dam.
16.
ABSTRACT The authors' paper in Dempe et al. [Necessary optimality conditions in pessimistic bilevel programming. Optimization. 2014;63:505–533] was the first to provide detailed optimality conditions for pessimistic bilevel optimization. The results there were based on the concept of the two-level optimal value function introduced and analysed in Dempe et al. [Sensitivity analysis for two-level value functions with applications to bilevel programming. SIAM J. Optim. 2012;22:1309–1343] for the case of optimistic bilevel programs. A basic assumption in both of these papers is that the functions involved are at least continuously differentiable. Motivated by the fact that many real-world applications of optimization involve functions that are non-differentiable at some points of their domain, the main goal of the current paper is to extend the two-level value function approach by deriving new necessary optimality conditions for both the optimistic and pessimistic versions of bilevel programming with non-smooth data.
17.
European Journal of Operational Research, 1996, 93(2): 244-256
We propose and analyse a new class of neural network models for solving linear programming (LP) problems in real time. We introduce a novel energy function that transforms the linear program into a system of nonlinear differential equations. This system can be solved on-line by a simplified low-cost analog neural network containing only a single artificial neuron with adaptive synaptic weights. The network architecture is suitable for currently available CMOS VLSI implementations. An important feature of the proposed architecture is its flexibility and universality. The correctness and performance of the proposed neural network are illustrated by extensive computer simulation experiments.
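The energy-function idea can be sketched with a quadratic-penalty energy and explicit-Euler gradient flow. This is an illustrative stand-in (the single-neuron analog realization described in the abstract is not reproduced here, and the penalty form and constants are assumptions): for a small LP, descending the energy drives the state toward the LP optimum.

```python
# Hedged sketch of an energy function turning an LP into an ODE system:
# for  min x1 + 2*x2  s.t.  x1 + x2 >= 1, x >= 0,  define
#   E(x) = c.x + (K/2) * (constraint violations)^2
# and integrate the gradient flow x' = -grad E(x) with forward Euler.
# The flow settles near the LP optimum (1, 0); a larger penalty weight K
# yields a smaller residual constraint violation.

K = 100.0

def grad_E(x1, x2):
    u = max(0.0, 1.0 - x1 - x2)            # violation of x1 + x2 >= 1
    g1 = 1.0 - K * u - K * max(0.0, -x1)   # d E / d x1
    g2 = 2.0 - K * u - K * max(0.0, -x2)   # d E / d x2
    return g1, g2

def solve(step=0.002, iters=20000):
    x1 = x2 = 0.0
    for _ in range(iters):
        g1, g2 = grad_E(x1, x2)
        x1 -= step * g1
        x2 -= step * g2
    return x1, x2

x1, x2 = solve()
```

The penalty approach makes the equilibrium sit within O(1/K) of the true LP solution; an analog circuit realizes the same descent in continuous time.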
18.
A. R. Nazemi. Communications in Nonlinear Science & Numerical Simulation, 2012, 17(4): 1696-1705
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that either the objective function is convex and all constraint functions are strictly convex, or the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated on several examples.
19.
In this paper, a one-layer recurrent network is proposed for solving non-smooth convex optimization problems subject to linear inequality constraints. Compared with existing neural networks for optimization, the proposed network can solve more general convex optimization problems with linear inequality constraints. Convergence of the state variables of the proposed network to an optimal solution is guaranteed as long as the design parameters in the model are larger than the derived lower bounds.
20.
D. V. Alexeev. Journal of Mathematical Sciences, 2010, 168(1): 5-13
The main result of the work is as follows: in the Chebyshev–Hermite weighted integral metric, any function of sufficiently general form can be approximated by a neural network. The approximating network consists of two layers: the first uses any prescribed sigmoid activation function, and the second uses a linear-threshold function. The Chebyshev–Hermite weight is chosen because it allows one to imitate the distribution of receptors, for example, in the eye of a human or other mammal.
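A classical constructive version of such two-layer approximation can be sketched as follows (a staircase-of-sigmoids argument in the uniform metric on an interval, not Alexeev's weighted-metric proof, and the architecture below uses a linear rather than linear-threshold output layer): differences of steep sigmoids act as approximate indicator functions of mesh cells, so a sigmoid hidden layer plus an output combination reproduces any continuous function up to the mesh width.

```python
import math

# Constructive sketch: approximate f on [a, b] by a staircase built from
# differences of steep sigmoids; sigma(k(x - x_i)) - sigma(k(x - x_{i+1}))
# is close to the indicator of cell [x_i, x_{i+1}] for large k.

def sigmoid(z):
    # numerically safe logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def make_approximant(f, a, b, cells=30, steepness=200.0):
    h = (b - a) / cells
    nodes = [a + i * h for i in range(cells + 1)]
    heights = [f(a + (i + 0.5) * h) for i in range(cells)]  # cell midvalues

    def approx(x):
        return sum(
            c * (sigmoid(steepness * (x - nodes[i]))
                 - sigmoid(steepness * (x - nodes[i + 1])))
            for i, c in enumerate(heights)
        )
    return approx

net = make_approximant(math.sin, 0.0, 3.0)
```

Refining the mesh (more hidden units) and steepening the sigmoids drives the error to zero, which is the mechanism behind two-layer universal approximation results of this kind.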