Similar Literature
20 similar documents found.
1.
Hit-and-run algorithms are Monte Carlo methods for detecting necessary constraints in convex programming, including semidefinite programming. The best known of these in semidefinite programming are the semidefinite coordinate directions (SCD), semidefinite hypersphere directions (SHD) and semidefinite stand-and-hit (SSH) algorithms. SCD is considered the best on average and hence we use it for comparison. We develop two new hit-and-run algorithms in semidefinite programming that use diagonal directions: the uniform semidefinite diagonal directions (uniform SDD) and the original semidefinite diagonal directions (original SDD) algorithms. We analyze the costs and benefits of this change in comparison with SCD. We also show that both uniform SDD and original SDD generate points that are asymptotically uniform in the interior of the feasible region defined by the constraints.
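The coordinate-direction flavor of hit-and-run can be sketched on a generic convex body given only a membership oracle; here a Euclidean ball stands in for the semidefinite feasible region, and `hit_and_run_cd`, `inside`, and the `bracket` parameter are illustrative names, not the SCD implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_run_cd(x0, inside, n_steps=1000, bracket=10.0):
    """Coordinate-direction hit-and-run sketch: from the current interior
    point pick a random signed coordinate direction, locate the feasible
    chord by bisection against the membership oracle `inside` (the body is
    assumed to fit within distance `bracket`), and jump to a uniformly
    chosen point on that chord."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = np.zeros_like(x)
        d[rng.integers(len(x))] = rng.choice([-1.0, 1.0])

        def boundary(sign):
            # Bisection for the chord endpoint in direction sign * d.
            lo, hi = 0.0, bracket
            while hi - lo > 1e-9:
                mid = 0.5 * (lo + hi)
                if inside(x + sign * mid * d):
                    lo = mid
                else:
                    hi = mid
            return lo

        t_plus, t_minus = boundary(+1.0), -boundary(-1.0)
        x = x + rng.uniform(t_minus, t_plus) * d
        samples.append(x.copy())
    return np.array(samples)

# Example: asymptotically uniform points in the open unit disk.
pts = hit_and_run_cd([0.0, 0.0], lambda p: np.dot(p, p) < 1.0, n_steps=2000)
```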

2.
The particle swarm optimization algorithm associates three vectors with each particle: the inertia, personal influence, and social influence vectors. The personal and social influence vectors are typically multiplied by random diagonal matrices (often referred to as random vectors), changing their lengths and directions. This multiplication, in turn, influences the variation of the particles in the swarm. In this paper we examine several issues caused by multiplying the personal and social influence vectors by such random matrices: (1) uncontrollable changes in the length and direction of these vectors, which in some situations delay convergence or attract particles to locations far from quality solutions; (2) weak direction alternation for vectors aligned closely with coordinate axes, which in some situations prevents the swarm from further improvement; and (3) restriction of particle movement to one orthant, which in some situations causes premature convergence. To overcome these issues, we use randomly generated rotation matrices (rather than random diagonal matrices) in the velocity updating rule of the particle swarm optimizer. This approach makes it possible to control the impact of the random components (i.e. the random matrices) on the direction and length of the personal and social influence vectors separately, and thereby addresses all of the above issues. We propose to use Euclidean rotation matrices because they preserve the length of a vector during rotation, which makes it easier to control the effects of the randomness on direction and length. The rotation angles of the Euclidean matrices are generated randomly from a normal distribution, whose mean and variance we investigate in detail for different algorithms and different numbers of dimensions.
Also, an adaptive approach for the variance of the normal distribution is proposed which is independent of the algorithm and the number of dimensions. The method is incorporated into several particle swarm optimization variants and tested on 18 standard optimization benchmark functions in 10-, 30- and 60-dimensional spaces. Experimental results show that the proposed method can significantly improve the performance of several types of particle swarm optimization algorithms in terms of convergence speed and solution quality.
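A minimal sketch of the rotated velocity update, assuming one simple way to build a length-preserving random rotation (a product of Givens rotations with normally distributed angles); the function names and coefficient values are illustrative, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation(dim, sigma):
    """Small random rotation matrix: a product of Givens rotations whose
    angles are drawn from N(0, sigma^2).  Orthogonal, so it preserves
    vector length, unlike a random diagonal matrix."""
    R = np.eye(dim)
    for i in range(dim):
        for j in range(i + 1, dim):
            theta = rng.normal(0.0, sigma)
            G = np.eye(dim)
            c, s = np.cos(theta), np.sin(theta)
            G[i, i] = c; G[j, j] = c
            G[i, j] = -s; G[j, i] = s
            R = R @ G
    return R

def velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.1):
    """PSO velocity rule with rotated influence terms in place of the
    usual elementwise random diagonal scaling."""
    d = len(x)
    cognitive = c1 * random_rotation(d, sigma) @ (pbest - x)
    social = c2 * random_rotation(d, sigma) @ (gbest - x)
    return w * v + cognitive + social

v_new = velocity_update(np.zeros(3), np.ones(3),
                        np.array([2.0, 1.0, 0.0]), np.array([0.0, 2.0, 1.0]))
```

Because the rotation is orthogonal, the randomness perturbs only the direction of each influence vector, while its length is controlled separately by the fixed coefficients `c1` and `c2`.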

3.
The problem [maximize f(x), subject to x1 + … + xj ≤ bj for j = 1, …, N] is solved by a feasible direction method that takes advantage of its special structure. A direction vector that approximates the vector of Lagrange multipliers is used. In the one-dimensional subproblem the direction vector is bent every time a constraint becomes active. Convergence to a K-T point is proven. McCormick has used a similar method for the problem [maximize f(x), subject to x ≥ 0], with the gradient as direction vector. A computationally implementable algorithm is given, with a finite stepsize procedure and a finite stopping rule. Observations from numerous applications to a recurring banking problem are discussed. Related techniques might be useful in other situations.

4.
The Stability Index Method (SIM) combines stochastic and deterministic algorithms to find global minima of multidimensional functions. The functions may be nonsmooth and may have multiple local minima. The method examines the change of the diameters of the minimizing sets for its stopping criterion. At first, the algorithm samples from the uniform random distribution over the admissible set. Then normal random distributions of decreasing variance are used to focus on probable global minimizers. To test the method, it is applied to seven standard test functions of several variables. The computational results show that the SIM is efficient, reliable and robust. The authors thank the referees for valuable suggestions.
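The two sampling phases can be sketched as follows; this is an illustration of the uniform-then-shrinking-normal idea only, and deliberately omits the actual SIM stopping criterion based on diameters of minimizing sets (`sim_minimize` and all parameters are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def sim_minimize(f, lo, hi, n_uniform=200, n_rounds=20, n_local=50,
                 shrink=0.7, tol=1e-6):
    """Phase 1: uniform exploration of the box [lo, hi].
    Phase 2: normal sampling around the incumbent with geometrically
    decreasing spread, focusing on a probable global minimizer."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, size=(n_uniform, len(lo)))
    best = min(X, key=f)
    sigma = (hi - lo) / 4.0
    for _ in range(n_rounds):
        C = np.clip(rng.normal(best, sigma, size=(n_local, len(lo))), lo, hi)
        cand = min(C, key=f)
        if f(cand) < f(best):
            best = cand
        sigma = sigma * shrink
        if sigma.max() < tol:
            break
    return best

# Example: a standard multimodal test function (Rastrigin, 2D).
rastrigin = lambda x: 10 * len(x) + sum(x**2 - 10 * np.cos(2 * np.pi * x))
xmin = sim_minimize(rastrigin, [-5.12, -5.12], [5.12, 5.12])
```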

5.
The full-information best choice problem with a random number of observations is considered. N i.i.d. random variables with a known continuous distribution are observed sequentially with the object of selecting the largest. Neither recall nor uncertainty of selection is allowed and one choice must be made. In this paper the number N of observations is random with a known distribution. The structure of the stopping set is investigated. A class of distributions of N (which contains in particular the uniform, negative-binomial and Poisson distributions) is determined, for which the so-called “monotone case” occurs. The theoretical solution for the monotone case is considered. In the case where N is geometric the optimal solution is presented and the probability of winning worked out. Finally, the case where N is uniform is examined. A simple asymptotically optimal stopping rule is found and the asymptotic probability of winning is obtained.

6.
An acceptance-rejection algorithm for generating random vectors uniformly distributed over (inside or on the surface of) a complex region inserted in a minimal multidimensional rectangle is considered. For regions having simple forms (simplex, hypersphere, hyperellipsoid) several algorithms are presented as well.
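The basic acceptance-rejection scheme is easy to sketch for the hypersphere case: sample uniformly from the minimal bounding hypercube and keep only the points that land inside the region (function and parameter names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def uniform_in_ball(dim, n, radius=1.0):
    """Acceptance-rejection sampling: draw candidates uniformly from the
    minimal bounding hypercube [-radius, radius]^dim and accept those
    inside the hypersphere of the given radius."""
    out = []
    while len(out) < n:
        x = rng.uniform(-radius, radius, size=dim)
        if np.dot(x, x) <= radius**2:
            out.append(x)
    return np.array(out)

pts = uniform_in_ball(3, 500)
```

The acceptance rate equals the ratio of the ball volume to the cube volume, which shrinks rapidly with dimension; this is why specialized algorithms for simple shapes are worthwhile.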

7.
In the framework of stochastic approximation in separable Hilbert spaces, one can often establish weak convergence of a suitably normalized sequence of random variables to a Gaussian distributed random variable. In connection with a sequence of empirical covariance operators, an estimator of the unknown radius of a ball is described for which the Gaussian limit distribution takes a given value. Further, a stopping rule is proposed leading to asymptotic confidence balls with a fixed radius.

8.
In this paper, two PVD-type algorithms are proposed for solving nonseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithms use reduced gradients to construct the PVD directions in parallel, which greatly reduces the computation per iteration and is closer to practical requirements for solving large-scale nonlinear programming problems. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.

9.
On search directions for minimization algorithms
Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.

10.
In this article accurate approximations and inequalities are derived for the distribution, expected stopping time and variance of the stopping time associated with moving sums of independent and identically distributed continuous random variables. Numerical results for a scan statistic based on a sequence of moving sums are presented for a normal distribution model, for both known and unknown mean and variance. The new R algorithms for the multivariate normal and t distributions established by Genz et al. (2010) provide readily available numerical values of the bounds and approximations.
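The stopping time in question is easy to simulate, which gives a Monte Carlo baseline for the approximations the abstract describes; the threshold, window length and sample sizes below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(4)

def stopping_time_moving_sum(threshold, window, draw, max_n=100_000):
    """First index n at which the moving sum of `window` consecutive
    draws from `draw()` exceeds `threshold` (capped at max_n)."""
    buf = [draw() for _ in range(window)]
    n = window
    while sum(buf) <= threshold and n < max_n:
        buf.pop(0)
        buf.append(draw())
        n += 1
    return n

# Monte Carlo estimate of the expected stopping time for N(0,1) variables,
# window 5, threshold 6.
times = [stopping_time_moving_sum(6.0, 5, rng.standard_normal)
         for _ in range(200)]
est = np.mean(times)
```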

11.
The stochastic approximation problem is to find a root or extremum of a nonlinear function for which only noisy measurements of the function are available. The classical algorithm for this problem is the Robbins-Monro (RM) algorithm, which uses the noisy evaluation of the negative gradient direction as the iterative direction. In order to accelerate the RM algorithm, this paper gives a frame algorithm using adaptive iterative directions. At each iteration, the new algorithm moves either along the noisy evaluation of the negative gradient direction or along some other direction, according to a switching criterion. Two feasible choices of the criterion are proposed, and two corresponding frame algorithms are formed. Different choices of the directions under the same switching criterion within the frame can also form different algorithms. We also propose simultaneous perturbation difference forms of the two frame algorithms. The almost sure convergence of the new algorithms is established in each case. Numerical experiments show that the new algorithms are promising.
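The classical Robbins-Monro baseline that the frame algorithm accelerates can be sketched in a few lines; only the standard RM iteration is shown here, not the paper's adaptive switching rule, and the step-size constant and test function are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def robbins_monro(grad_noisy, x0, a=1.0, n_iter=5000):
    """Robbins-Monro iteration x_{k+1} = x_k - a_k * g(x_k) with the
    classical diminishing steps a_k = a / (k + 1), where g returns a
    noisy evaluation of the gradient."""
    x = float(x0)
    for k in range(n_iter):
        x -= (a / (k + 1)) * grad_noisy(x)
    return x

# Find the minimizer of f(x) = (x - 3)^2 from noisy gradient
# observations g(x) = 2(x - 3) + N(0, 1) noise.
g = lambda x: 2 * (x - 3.0) + rng.standard_normal()
root = robbins_monro(g, x0=0.0)
```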

12.
The paper is devoted to solving the two‐stage problem of stochastic programming with a quantile criterion. It is assumed that the loss function is bilinear in the random parameters and strategies, and that the random vector has a normal distribution. Two algorithms are suggested to solve the problem, and they are compared. The first algorithm is based on the reduction of the original stochastic problem to a mixed integer linear programming problem. The second is based on the reduction of the problem to a sequence of convex programming problems. The performance of both algorithms is illustrated by an example. A modification of both algorithms is suggested to reduce the computing time: the new algorithm uses the solution obtained by the second algorithm as a starting point for the first. Copyright © 2015 John Wiley & Sons, Ltd.

13.
A Manhattan search algorithm to minimize an artificial neural network error function is outlined in this paper. From an existing position in Cartesian coordinates, a search vector moves in orthogonal directions to locate a minimum function value. The search algorithm computes an optimized step length for rapid convergence; this step is taken whenever consecutive searches succeed in reducing the function value, and it identifies a favorable descent direction. The search method is suitable for complex error surfaces where derivative information is difficult to obtain, or where the error surface is nearly flat and the rate of change of the function value is almost negligible; most derivative-based training algorithms face difficulty in such scenarios. Since this algorithm avoids derivative information of the error function, it is an attractive search method when derivative-based algorithms struggle with complex ridges and flat valleys. If the algorithm becomes trapped in a local minimum, the search vector takes steps to escape it by exploring neighborhood descent search directions. The algorithm thus differs from first- and second-order derivative-based training methods. To measure its performance, estimation of an electric energy generation model for the Fiji Islands and the “L-T” letter recognition problem are solved. Bootstrap analysis shows that the algorithm’s predictive and classification abilities are high. The algorithm is reliable when the solution to a problem is unknown, and it therefore identifies benchmark solutions.
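A derivative-free axis-aligned search with adaptive step length, in the spirit of the description above, can be sketched as follows; the growth/shrink factors and the local-minimum escape logic of the actual algorithm are not reproduced, and all names are illustrative:

```python
import numpy as np

def manhattan_search(f, x0, step=0.5, shrink=0.5, grow=2.0,
                     tol=1e-6, max_iter=10_000):
    """Derivative-free coordinate search: probe +/- step along each
    axis and accept improvements; after a successful sweep the step is
    enlarged (the 'optimized step length'), after a failed sweep it is
    shrunk, until it falls below tol."""
    x = np.asarray(x0, float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        step = step * (grow if improved else shrink)
        it += 1
    return x, fx

# Example on a smooth surrogate error surface with minimum at (1, -2).
xopt, fopt = manhattan_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                              [0.0, 0.0])
```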

14.
In this paper random utility maximization based on maximization of correct classification of the choice decisions over a given data set is considered. It is shown that if the disturbance vector in the random utility model is independent and identically distributed, then preference determination based on the most probable alternative reduces to deterministic utility maximization. As a consequence of the above equivalence, the form of the error distribution (normal, Weibull, uniform etc.) plays no role in the determination of the preferred alternative. Parameter estimation under the most probable alternative rule is carried out using two methods. The first is based on the solution of an appropriately defined system of linear inequalities and the second one is based on the function optimization of a newly proposed function, whose optimum is achieved when the number of correctly classified individuals is maximized. The ability to use these algorithms in the framework of pattern recognition and machine learning is pointed out. Simulations and a real case study involving intercity travel behavior are employed to assess the proposed methods.

15.
We propose an extrapolation algorithm for initial value problems in ordinary differential equations. In the algorithm, an appropriately chosen stepsize H is divided into smaller stepsizes by a sequence, and a new stopping rule is proposed. The sequences applied to the algorithm are of Romberg {2,4,8,16,32,...}, Bulirsch {2,4,6,8,16,...} and harmonic {2,4,6,8,10,12,...} types. The proposed algorithm is compared numerically with the algorithm introduced by Stoer. In view of the accuracy of the numerical solutions, the relatively small number of calculations, and the stability and reliability of the algorithm, we found that the algorithm with the Romberg sequence is the best.
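The core mechanism — dividing a macro step H by a sequence of substep counts and extrapolating to h → 0 — can be sketched with the modified midpoint rule and the Romberg sequence; this is a textbook-style illustration of step extrapolation, not the paper's algorithm or its stopping rule:

```python
import numpy as np

def midpoint_integrate(f, t0, y0, H, n):
    """Modified midpoint method over one macro step H using n substeps;
    its error expansion contains only even powers of h = H / n."""
    h = H / n
    y_prev, y = y0, y0 + h * f(t0, y0)
    for k in range(1, n):
        y_prev, y = y, y_prev + 2 * h * f(t0 + k * h, y)
    return 0.5 * (y + y_prev + h * f(t0 + H, y))

def extrapolated_step(f, t0, y0, H, seq=(2, 4, 8, 16, 32)):
    """One extrapolation step: integrate with each substep count from
    the Romberg sequence, then apply in-place Aitken-Neville polynomial
    extrapolation in h^2 towards h = 0."""
    T = [midpoint_integrate(f, t0, y0, H, n) for n in seq]
    h2 = [(H / n) ** 2 for n in seq]
    for k in range(1, len(seq)):
        for i in range(len(seq) - 1, k - 1, -1):
            T[i] = T[i] + (T[i] - T[i - 1]) / (h2[i - k] / h2[i] - 1.0)
    return T[-1]

# y' = y, y(0) = 1 over H = 1: the exact value is e.
approx = extrapolated_step(lambda t, y: y, 0.0, 1.0, 1.0)
```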

16.
For the parameter in the line-search direction of the super-memory gradient algorithm for unconstrained optimization, an assumption is given that determines a new range of values for the parameter and guarantees that the search direction is a sufficient descent direction of the objective function; on this basis, a new class of memory gradient algorithms is proposed. Global convergence of the algorithms is discussed without assuming boundedness of the iterate sequence or Armijo stepsize search, and modified forms of the memory gradient method combining the FR, PR and HS conjugate gradient formulas are given. Numerical experiments show that the new algorithms are more stable and effective than the FR, PR and HS conjugate gradient methods under Armijo line search and than the super-memory gradient method.

17.
For nonconvex optimization problems with a multi-block separable structure, a new class of stochastic Bregman alternating direction methods of multipliers is proposed. Under the cyclic update rule, asymptotic convergence of the algorithm is proved; under the random update rule, almost sure asymptotic convergence is established. Numerical experiments show that the algorithm can effectively train support vector machines with discrete structure.

18.
We propose a new truncated Newton method for large-scale unconstrained optimization, where a conjugate gradient (CG)-based technique is adopted to solve Newton’s equation. In the current iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative curvature direction. A test based on the quadratic model of the objective function is used to select the more promising of the two search directions. Both this selection rule and the CG stopping criterion for approximately solving Newton’s equation rely strongly on conjugacy conditions. An appropriate linesearch technique is adopted for each search direction: a nonmonotone stabilization is used with the approximate Newton step, while an Armijo-type linesearch is used for the negative curvature direction. The proposed algorithm is both globally and superlinearly convergent to stationary points satisfying second-order necessary conditions. We report extensive numerical experience to test our proposal.

19.
Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration these algorithms generate several different quasi-Newton directions, and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is obtained among these trial points by choosing the lowest point in the sense of function reduction. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the result of superlinear convergence follows.

20.
We consider anisotropic second order elliptic boundary value problems in two dimensions, for which the anisotropy is exactly aligned with the coordinate axes. This includes cases where the operator features a singular perturbation in one coordinate direction, whereas its restriction to the other direction remains neatly elliptic. Most prominently, such a situation arises when polar coordinates are introduced. The common multigrid approach to such problems relies on line relaxation in the direction of the singular perturbation combined with semi-coarsening in the other direction. Taking the idea from classical Fourier analysis of multigrid, we employ eigenspace techniques to separate the coordinate directions. Thus, convergence of the multigrid method can be examined by looking at one-dimensional operators only. In a tensor product Galerkin setting, this makes it possible to confirm that the convergence rates of the multigrid V-cycle are bounded independently of the number of grid levels involved. In addition, the estimates reveal that convergence is also robust with respect to a singular perturbation in one coordinate direction. Finally, we supply numerical evidence that the algorithm performs satisfactorily in settings more general than those covered by the proof.
