Similar Documents
20 similar documents found (search time: 15 ms)
1.
Simulated annealing for constrained global optimization
Hide-and-Seek is a powerful yet simple and easily implemented continuous simulated annealing algorithm for finding the maximum of a continuous function over an arbitrary closed, bounded and full-dimensional body. The function may be nondifferentiable and the feasible region may be nonconvex or even disconnected. The algorithm begins with any feasible interior point. In each iteration it generates a candidate successor point by generating a uniformly distributed point along a direction chosen at random from the current iteration point. In contrast to the discrete case, a single step of this algorithm may generate any point in the feasible region as a candidate point. The candidate point is then accepted as the next iteration point according to the Metropolis criterion parametrized by an adaptive cooling schedule. Again in contrast to discrete simulated annealing, the sequence of iteration points converges in probability to a global optimum regardless of how rapidly the temperatures converge to zero. Empirical comparisons with other algorithms suggest competitive performance by Hide-and-Seek. This material is based on work supported by a NATO Collaborative Research Grant, no. 0119/89.
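A minimal sketch of a Hide-and-Seek-style step (random direction, uniform point along the feasible chord, Metropolis acceptance). The box feasible region, test function and the simple non-adaptive cooling schedule below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def hide_and_seek(f, lo, hi, n_iter=2000, seed=0):
    """Maximize f over the box [lo, hi]^d with a Hide-and-Seek-style
    continuous simulated annealing iteration (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi)               # any feasible interior point
    best, fbest = x.copy(), f(x)
    for k in range(1, n_iter + 1):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)            # uniformly distributed direction
        # Feasible chord {x + t d} inside the box: find the t range.
        with np.errstate(divide="ignore"):
            t1 = (lo - x) / d
            t2 = (hi - x) / d
        tmin = np.max(np.minimum(t1, t2))
        tmax = np.min(np.maximum(t1, t2))
        y = x + rng.uniform(tmin, tmax) * d   # uniform point on the chord
        T = 1.0 / (1 + k)                     # simple (non-adaptive) cooling
        if f(y) > f(x) or rng.random() < np.exp((f(y) - f(x)) / T):
            x = y                             # Metropolis acceptance
        if f(x) > fbest:
            best, fbest = x.copy(), f(x)
    return best, fbest
```

Unlike a discrete neighborhood move, each step can reach any point of the feasible region along the sampled chord.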

2.
By analyzing the shortcomings of prediction methods that model high-dimensional data with a single global expression, as well as of piecewise (interval-partitioned) prediction algorithms, an optimal prediction algorithm based on vector-valued rational interpolation is proposed. The similarity between vectors is obtained from the vector-valued rational interpolant and the error bound of each component, which avoids the prediction bias that many other algorithms incur by modeling vectors with a single global expression. In addition, the optimal predicted value for a sample vector is determined from the component error bounds, the vector-valued rational interpolant fitted on the training samples, and an iterative simulation procedure. Examples show that the predictions of this algorithm are more accurate than those of other algorithms, and the influence of the error bound and the iteration step size on performance is analyzed.
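The paper's vector-valued construction is not reproduced here, but the scalar building block of rational interpolation, a Thiele continued-fraction interpolant built from inverse differences, can be sketched as follows (an illustrative assumption, not the authors' exact algorithm):

```python
def thiele_interpolant(xs, fs, x):
    """Evaluate the Thiele continued-fraction rational interpolant
    through the nodes (xs[i], fs[i]) at the point x."""
    n = len(xs)
    # rho[k][i] holds the k-th inverse difference phi_k(x_i), for i >= k.
    rho = [list(fs)] + [[0.0] * n for _ in range(n - 1)]
    for k in range(1, n):
        for i in range(k, n):
            rho[k][i] = (xs[i] - xs[k - 1]) / (rho[k - 1][i] - rho[k - 1][k - 1])
    # Unwind the continued fraction from the innermost term outward.
    val = rho[n - 1][n - 1]
    for k in range(n - 2, -1, -1):
        val = rho[k][k] + (x - xs[k]) / val
    return val
```

For f(x) = 1/(1+x), three nodes already reproduce the function exactly, since it is itself a rational function of low degree.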

3.
An augmented Lagrangian algorithm is used to find local solutions of geometric programming problems with equality constraints (GPE). The algorithm is Newton's method for unconstrained minimization. The complexity of the algorithm is defined by the number of multiplications and divisions. By analyzing the algorithm we obtain results about the influence of each parameter in the GPE problem on the complexity of an iteration. An attempt is made to estimate the number of iterations needed for convergence. In practice, certain hypotheses are tested, such as the influence of the penalty-coefficient update formula, the distance of the starting point from the optimum, and the stopping criterion. For these tests, a random problem generator was constructed, many problems were run, and the results were analyzed by statistical methods. The authors are grateful to Dr. J. Moré of Argonne National Laboratory for his valuable comments. This research was partially funded by the Fund for the Advancement of Research at the Technion and by the Applied Mathematical Sciences Research Program (KC-04-02), Office of Energy Research, US Department of Energy, Contract No. W-31-109-Eng-38.

4.
Recently, Salahi proposed a Mehrotra-type predictor-corrector algorithm for linear programming based on a new adaptive parameter-updating strategy, which allowed a polynomial iteration-complexity bound to be proved without resorting to safeguards. This paper extends that algorithm to semidefinite programming. Using Zhang's symmetrization technique, a polynomial iteration-complexity bound is obtained that is of the same order as that of the corresponding algorithm for linear programming.

5.
We present an improved iterative regularization method for solving linear inverse problems. The algorithm is given in detail, and it is proved that its computational cost per step is nearly the same as that of the Landweber iteration, while the number of iteration steps required is smaller. We also obtain the optimal asymptotic convergence order of the regularized solution by choosing an a posteriori regularization parameter based on Morozov's discrepancy principle. The method is applied to the identification of multi-source dynamic loads on the surface of a plate; numerical simulations of two examples demonstrate its effectiveness and robustness.
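For reference, the baseline the paper improves on, Landweber iteration stopped by Morozov's discrepancy principle, can be sketched as follows (the step size rule and the test setup are illustrative assumptions):

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, omega=None, max_iter=10000):
    """Landweber iteration x <- x + omega * A^T (b - A x), stopped when the
    residual falls below tau * delta (Morozov's discrepancy principle)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures 0 < omega < 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:      # discrepancy reached: stop
            break
        x = x + omega * A.T @ r
    return x
```

The a posteriori stopping index grows as the noise level delta shrinks, which is exactly the regularizing mechanism the discrepancy principle provides.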

6.
We propose an adaptive smoothing algorithm based on Nesterov’s smoothing technique in Nesterov (Math Prog 103(1):127–152, 2005) for solving “fully” nonsmooth composite convex optimization problems. Our method combines both Nesterov’s accelerated proximal gradient scheme and a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has the \(\mathcal {O}\left( \frac{1}{\varepsilon }\right) \)-worst-case iteration-complexity while preserving the same per-iteration complexity as in Nesterov’s method, and allows one to automatically update the smoothness parameter at each iteration. Then, we customize our algorithm to solve four special cases that cover various applications. We also specify our algorithm to solve constrained convex optimization problems and show its convergence guarantee on a primal sequence of iterates. We demonstrate our algorithm through three numerical examples and compare it with other related algorithms.
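A toy illustration of the smoothing-with-homotopy idea: plain gradient descent on a Huber-smoothed absolute value, with the smoothness parameter halved between stages. Nesterov's acceleration and the paper's automatic update rule are deliberately omitted, so this is an assumption-laden sketch, not the authors' algorithm:

```python
def huber_grad(z, mu):
    """Gradient of the Huber smoothing of |z| with smoothness parameter mu."""
    return z / mu if abs(z) <= mu else (1.0 if z > 0 else -1.0)

def smoothed_min_abs(target, x0=0.0, mu0=0.5, stages=6, steps=200):
    """Minimize |x - target| by gradient descent on its Huber smoothing,
    shrinking the smoothness parameter mu between stages (homotopy)."""
    x, mu = x0, mu0
    for _ in range(stages):
        for _ in range(steps):
            # Step size mu matches the 1/mu Lipschitz constant of the gradient.
            x -= mu * huber_grad(x - target, mu)
        mu *= 0.5
    return x
```

The trade-off the paper manages is visible even here: a large mu gives a well-conditioned surrogate but a coarse approximation, while shrinking mu sharpens the approximation at the price of a stiffer gradient.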

7.
The method of linear associative memory (LAM), a notion from the field of artificial neural nets, has been applied recently in nonlinear parameter estimation. In the LAM method, a model response, nonlinear with respect to the parameters, is approximated linearly by a matrix, which maps inversely from a response vector to a parameter vector. This matrix is determined from a set of initial training parameter vectors and their response vectors, and can be updated recursively and adaptively with a newly generated pair of parameter and response vectors. The LAM advantage is that it can yield a good estimate of the true parameters from a given observed response, even if the initial training parameter vectors are far from the true values. In this paper, we present a weighted linear associative memory (WLAM) for nonlinear parameter estimation. WLAM improves LAM by taking into account a weighting oriented by the observed response vector. The basic idea is to weight each pair of parameter and response vectors in the cost function such that, if a response vector is closer to the observed one, then this pair plays a more important role in the cost function. This weighting algorithm significantly improves the accuracy of parameter estimation as compared to a LAM without weighting. In addition, we are able to construct the associative memory matrix recursively while taking the weighting procedure into account, and simultaneously update the ridge parameter of the cost function, further improving the efficiency of the WLAM estimation. These features make WLAM a powerful tool for nonlinear parameter estimation. This work was supported by National Science Foundation, Grants BCS-93-15886 and INT-94-17206. We thank Mr. L. Yobas for fruitful discussions.
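The core WLAM fit can be sketched as a weighted, ridge-regularized least-squares estimate of the inverse map from responses to parameters. The Gaussian proximity kernel and the fixed ridge value below are illustrative assumptions (the paper updates the memory matrix and ridge parameter recursively):

```python
import numpy as np

def wlam_estimate(thetas, ys, y_obs, sigma=1.0, ridge=1e-8):
    """Fit a matrix B mapping response -> parameter by weighted ridge
    least squares, weighting training pairs whose response is close to
    the observed response more heavily, then estimate theta = B y_obs."""
    Y = np.asarray(ys)              # (m, n) training responses, row-wise
    T = np.asarray(thetas)          # (m, p) training parameters, row-wise
    # Proximity weights: pairs near the observed response dominate the fit.
    w = np.exp(-np.sum((Y - y_obs) ** 2, axis=1) / sigma)
    W = np.diag(w)
    # B^T solves (Y^T W Y + ridge I) B^T = Y^T W T (normal equations).
    Bt = np.linalg.solve(Y.T @ W @ Y + ridge * np.eye(Y.shape[1]), Y.T @ W @ T)
    return Bt.T @ y_obs
```

When the true parameter-to-response map happens to be linear, the fitted B approaches its inverse and the estimate is exact up to the ridge perturbation.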

8.
A full-Newton-step interior-point algorithm is proposed for solving horizontal linear complementarity problems with P*(κ) matrices. The advantage of a full-Newton-step algorithm is that no line search is needed in any iteration. With a suitable threshold for the central-path neighborhood and a suitable barrier-parameter update, the full Newton step is shown to be locally quadratically convergent within the neighborhood, and the algorithm is shown to have iteration complexity O(√n log((n+1+κ)μ0/ε)).

9.
Tang Dan. Operations Research Transactions (运筹学学报), 2011, 15(4): 124–128
This paper proposes an algorithm for nonlinear programming problems that embeds simulated annealing into the controlled random search (CRS) algorithm. Because each iteration of simulated annealing balances the two strategies of intensification and diversification, the resulting CRS algorithm is better able to find the global optimum instead of becoming trapped in local optima. The proposed algorithm is applied to two typical function-optimization problems; the results show that it is feasible and effective.

10.
The asymptotic convergence of parameterized variants of Newton’s method for the solution of nonlinear systems of equations is considered. The original system is perturbed by a term involving the variables and a scalar parameter which is driven to zero as the iteration proceeds. The exact local solutions to the perturbed systems then form a differentiable path leading to a solution of the original system, the scalar parameter determining the progress along the path. A path-following algorithm, which involves an inner iteration in which the perturbed systems are approximately solved, is outlined. It is shown that asymptotically, a single linear system is solved per update of the scalar parameter. It turns out that a componentwise Q-superlinear rate may be attained, both in the direct error and in the residuals, under standard assumptions, and that this rate may be made arbitrarily close to quadratic. Numerical experiments illustrate the results and we discuss the relationships that this method shares with interior methods in constrained optimization. Received: September 8, 2000 / Accepted: September 17, 2001 / Published online February 14, 2002

11.
The particle swarm optimization (PSO) technique is a powerful stochastic evolutionary algorithm that can be used to find the global optimum solution in a complex search space. This paper presents a variation on the standard PSO algorithm called the rank-based particle swarm optimizer, or PSOrank, employing cooperative behavior of the particles to significantly improve the performance of the original algorithm. In this method, in order to efficiently control the local search and convergence to the global optimum solution, the γ best particles are taken to contribute to the updating of the position of a candidate particle. The contribution of each particle is proportional to its strength. The strength is a function of three parameters: strivness, immediacy, and the number of contributing particles. All particles are sorted according to their fitness values, and only the γ best particles are selected. The value of γ decreases linearly as the iteration count increases. A time-varying inertia weight that decreases nonlinearly is introduced to further improve performance. PSOrank is tested on a commonly used set of optimization problems and is compared to other variants of the PSO algorithm presented in the literature. As a real application, PSOrank is used for neural network training. The PSOrank strategy outperformed all the methods considered in this investigation for most of the functions. Experimental results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
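A simplified rank-based PSO can be sketched as below: each particle is pulled toward a rank-weighted average of the γ best personal bests instead of a single global best. The linear rank weights are an illustrative stand-in for PSOrank's strength function (strivness, immediacy, number of contributing particles), which is not reproduced:

```python
import numpy as np

def pso_rank(f, dim, n_particles=20, n_best=5, iters=300, seed=0):
    """Minimize f with a simplified rank-based PSO (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration
    for _ in range(iters):
        order = np.argsort(pval)[:n_best]          # indices of the n_best particles
        rw = np.arange(n_best, 0, -1, dtype=float) # weight decreasing with rank
        guide = rw @ pbest[order] / rw.sum()       # rank-weighted attractor
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (guide - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[np.argmin(pval)], pval.min()
```

Replacing the single global best with a cooperative attractor is what distinguishes this family from the standard gbest PSO update.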

12.
The linear least squares problem, min_x ‖Ax − b‖2, is solved by applying a multisplitting (MS) strategy in which the system matrix is decomposed by columns into p blocks. The b and x vectors are partitioned consistently with the matrix decomposition. The global least squares problem is then replaced by a sequence of local least squares problems which can be solved in parallel by MS. In MS the solutions to the local problems are recombined using weighting matrices to pick out the appropriate components of each subproblem solution. A new two-stage algorithm which optimizes the global update each iteration is also given. For this algorithm the updates are obtained by finding the optimal update with respect to the weights of the recombination. For the least squares problem presented, the global update optimization can also be formulated as a least squares problem of dimension p. Theoretical results are presented which prove the convergence of the iterations. Numerical results which detail the iteration behavior relative to subproblem size, convergence criteria and recombination techniques are given. The two-stage MS strategy is shown to be effective for near-separable problems. © 1998 John Wiley & Sons, Ltd.
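A serial sketch of the column-block idea: cyclically solve the local least-squares problem of each column block against the current residual. The parallel multisplitting recombination with weighting matrices is simplified away, so this is an assumption-laden sequential analogue, not the paper's MS algorithm:

```python
import numpy as np

def block_column_ls(A, b, blocks, sweeps=200):
    """Approximate argmin ||Ax - b||_2 by cyclically solving the local
    least-squares problem of each column block (sequential sketch)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                       # current global residual
    for _ in range(sweeps):
        for idx in blocks:
            Aj = A[:, idx]
            # Local problem: best update for this block given the residual.
            dj, *_ = np.linalg.lstsq(Aj, r, rcond=None)
            x[idx] += dj
            r -= Aj @ dj                # keep the residual consistent
    return x
```

Because each local solve can only decrease the global residual, the iterates approach the global least-squares solution; the near-separable case (weak coupling between column blocks) is where convergence is fastest, matching the paper's observation.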

13.

The modulus-based matrix splitting (MMS) algorithm is effective for solving linear complementarity problems (Bai in Numer Linear Algebra Appl 17: 917–933, 2010). This algorithm is parameter dependent, and previous studies mainly focus on giving the convergence interval of the iteration parameter; the specific selection of the optimal parameter has not been systematically studied, owing to the nonlinearity of the algorithm. In this work, we first propose a novel and simple strategy for obtaining the optimal parameter of the MMS algorithm by merely solving two quadratic equations in each iteration. Further, we determine an interval of optimal parameters that is iteration-independent and give a practical choice of optimal parameter that avoids iteration-based computations. Compared with the experimentally optimal parameter, the numerical results from three problems, including the Signorini problem of the Laplacian, show the feasibility, effectiveness and efficiency of the proposed strategy.
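A minimal instance of the modulus-based iteration, with identity splitting matrix Ω = I and unit parameter γ = 1 chosen purely for simplicity (the paper's optimal-parameter selection strategy is not reproduced):

```python
import numpy as np

def mms_lcp(A, q, tol=1e-10, max_iter=500):
    """Solve the LCP  z >= 0, w = A z + q >= 0, z^T w = 0  by the
    modulus-based iteration (I + A) x_new = (I - A)|x| - q,
    recovering z = |x| + x and w = |x| - x  (Omega = I, gamma = 1)."""
    n = len(q)
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.linalg.solve(I + A, (I - A) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return np.abs(x) + x, np.abs(x) - x   # (z, w)
```

The substitution z = |x| + x, w = |x| - x enforces z, w ≥ 0 and componentwise complementarity automatically, which is what turns the LCP into the fixed-point equation solved above.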


14.
A level-value descent algorithm for unconstrained global optimization
Peng Zheng, Zhang Haidong, Wu Donghua. Mathematica Applicata (应用数学), 2007, 20(1): 213–219
This paper studies unconstrained global optimization and develops a new level-value descent method (LDM). A necessary and sufficient condition for attaining the global minimum in a probabilistic sense is established, and the LDM algorithm is proved to converge in probability measure. The algorithm is implemented using importance sampling and Markov chain Monte Carlo simulation, with the sampling density updated automatically by the cross-entropy method. Numerical examples show that LDM has good numerical accuracy and global convergence.
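The cross-entropy density update that LDM relies on can be sketched in its plain Gaussian form. The elite fraction, sample size and test function below are illustrative assumptions; the level-value machinery itself is omitted:

```python
import numpy as np

def cross_entropy_min(f, mu0=5.0, sigma0=5.0, n=100, elite=10, iters=40, seed=0):
    """Minimize a 1-D function by the cross-entropy method: sample from a
    Gaussian, keep the elite samples, refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mu, sigma = mu0, sigma0
    for _ in range(iters):
        xs = rng.normal(mu, sigma, n)
        best = xs[np.argsort([f(x) for x in xs])[:elite]]   # elite samples
        mu, sigma = best.mean(), best.std() + 1e-12         # refit density
    return mu
```

Refitting the sampling density to the elite set is the mechanism that concentrates the search on ever-lower level sets of f.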

15.
Magnetic resonance electrical impedance tomography (MREIT) is a new technique for recovering the conductivity of biological tissue from the induced magnetic flux density. This paper proposes an inversion scheme for recovering the conductivity from one component of the magnetic field based on the nonlinear integral equation method. To apply magnetic fields corresponding to two incoherent injected currents, an alternating iteration scheme is proposed to update the conductivity. For each magnetic field, a regularizing technique on the finite-dimensional space is applied to solve an ill-posed linear system. Compared with the well-developed harmonic Bz method, the advantage of this inversion scheme is its stability, since no differentiation of the noisy magnetic field is required. Numerical implementations are given to show the convergence of the iteration and its validity for noisy input data.

16.
In this paper, a class of optimization problems with equality and inequality constraints is discussed. Firstly, the original problem is transformed to an associated simpler problem with only inequality constraints and a parameter. The latter problem is shown to be equivalent to the original problem if the parameter is large enough (but finite); a feasible descent SQP algorithm for the simplified problem is then presented. At each iteration of the proposed algorithm, a master direction is obtained by solving a quadratic program (which always has a feasible solution). With two corrections of the master direction by two simple explicit formulas, the algorithm generates a feasible descent direction for the simplified problem and a high-order correction direction which avoids the Maratos effect without requiring strict complementarity, then performs a curve search to obtain the next iteration point. Thanks to the new high-order correction technique, global and superlinear convergence is obtained under mild conditions without strict complementarity. Finally, numerical experiments with an efficient implementation are reported.

17.
As an application of an optimization technique, a gradient-projection method is employed to derive an adaptive algorithm for updating the parameters of an inverse which is designed to cancel the effects of actuator uncertainties in a control system. The actuator uncertainty is parametrized by a set of unknown parameters which belong to a parameter region. A desirable inverse is implemented with adaptive estimates of the actuator parameters. Minimizing an estimation error, a gradient algorithm is used to update such parameter estimates. To ensure that the parameter estimates also belong to the parameter region, the adaptive update law is designed with parameter projection. With such an adaptive inverse, desired control system performance can be achieved despite the presence of the actuator uncertainties.
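A toy version of the gradient update with parameter projection: a normalized gradient step on the squared estimation error, followed by projection onto a box-shaped parameter region. The linear-in-parameters model, gain, and box region are illustrative assumptions, not the paper's actuator model:

```python
import numpy as np

def adaptive_inverse_estimate(phis, ys, lo, hi, gain=0.5):
    """Estimate parameters theta of the model y = theta . phi by a
    normalized gradient step on the squared estimation error, projecting
    each estimate back onto the box [lo, hi] (parameter projection)."""
    theta = (lo + hi) / 2.0            # start inside the parameter region
    for phi, y in zip(phis, ys):
        e = y - theta @ phi            # estimation error
        theta = theta + gain * e * phi / (1.0 + phi @ phi)  # gradient step
        theta = np.clip(theta, lo, hi)  # projection onto the region
    return theta
```

Projection onto a convex region containing the true parameters never increases the estimation error, which is why it can be added to the gradient law without destroying convergence.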

18.
In this paper, we present a simulated annealing algorithm for solving multi-objective simulation optimization problems. The algorithm is based on the idea of simulated annealing with constant temperature, and uses a rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge almost surely to an optimal solution. It is applied to a multi-objective inventory problem; the numerical results show that the algorithm converges rapidly.

19.
Recently, Elfving, Hansen, and Nikazad introduced a successful nonstationary block-column iterative method for solving linear systems of equations based on a flagging idea (called BCI-F). Their numerical tests show that the column-action method provides a basis for saving computational work using the flagging technique in the BCI algorithm. However, they did not present a general convergence analysis. In this paper, we give a convergence analysis of BCI-F. Furthermore, we consider a fully flexible version of the block-column iterative method (FBCI), in which the relaxation parameters and weight matrices can be updated in each iteration and the column partitioning of the coefficient matrix is allowed to change in each cycle. We also provide a convergence analysis of the FBCI algorithm under mild conditions.

20.
Analyzing the Performance of Generalized Hill Climbing Algorithms
Generalized hill climbing algorithms provide a framework to describe and analyze metaheuristics for addressing intractable discrete optimization problems. The performance of such algorithms can be assessed asymptotically, either through convergence results or by comparison to other algorithms. This paper presents necessary and sufficient convergence conditions for generalized hill climbing algorithms. These conditions are shown to be equivalent to necessary and sufficient convergence conditions for simulated annealing when the generalized hill climbing algorithm is restricted to simulated annealing. Performance measures are also introduced that permit generalized hill climbing algorithms to be compared using random restart local search. These results identify a solution landscape parameter based on the basins of attraction for local optima that determines whether simulated annealing or random restart local search is more effective in visiting a global optimum. The implications and limitations of these results are discussed.
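The random restart local search baseline discussed above can be sketched on a small 1-D discrete landscape with two basins of attraction; the landscape, neighborhood, and restart count below are illustrative assumptions:

```python
import numpy as np

def hill_climb(f, x, lo, hi):
    """Steepest-ascent hill climbing over the neighborhood {x-1, x+1}."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(nbrs, key=f)
        if f(best) <= f(x):
            return x                  # local optimum reached
        x = best

def random_restart_search(f, lo, hi, restarts=30, seed=0):
    """Random restart local search: run hill climbing from several random
    starts and keep the best local optimum visited."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(lo, hi + 1, restarts)
    return max((hill_climb(f, int(s), lo, hi) for s in starts), key=f)
```

The success probability is governed exactly by the landscape parameter the paper identifies: the fraction of starting points lying in the basin of attraction of the global optimum.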
