Similar Documents
20 similar documents retrieved.
1.
We study a modification of the EMS algorithm in which each EMS step is preceded by a nonlinear smoothing step built from S, the smoothing operator of the EMS algorithm. In the context of positive integral equations (à la positron emission tomography), the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original EM algorithm, which suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence of the new algorithm. We also present simulation results for the integral equation of stereology, which suggest that the new algorithm behaves roughly like the EMS algorithm.
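To make the iteration concrete, here is a minimal NumPy sketch of one step, assuming a discretized system matrix A, count data y, and a row-stochastic smoothing matrix S, and taking the nonlinear smoothing step to be the geometric smoothing x ↦ exp(S log x) — a natural choice that the abstract does not spell out, so treat it as an assumption.

```python
import numpy as np

def modified_ems_step(x, y, A, S):
    """One iteration of the modified EMS scheme (a sketch, not the
    authors' code): nonlinear smoothing, then the EM update for the
    positive integral equation y = Ax, then the linear smoothing S."""
    x = np.exp(S @ np.log(x))                        # assumed nonlinear (geometric) smoothing
    x = x * (A.T @ (y / (A @ x))) / A.sum(axis=0)    # EM (Richardson-Lucy) update
    return S @ x                                     # smoothing step of the EMS algorithm
```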

2.
Faugère and Rahmany have presented the invariant F5 algorithm to compute SAGBI-Gröbner bases of ideals of invariant rings. That algorithm has an incremental structure and is based on the matrix version of the F5 algorithm, using the F5 criterion to remove a portion of the useless reductions. Although it is more efficient than the Buchberger-like algorithm, it does not use all the existing criteria (for an incremental structure) to detect superfluous reductions. In this paper, we consider a new algorithm, the invariant G2V algorithm, to compute SAGBI-Gröbner bases of ideals of invariant rings using more criteria. This algorithm has a new structure and is based on the G2V algorithm, a variant of the F5 algorithm for computing Gröbner bases. We have implemented our new algorithm in Maple, and we give an experimental comparison, via some examples, of its performance against the invariant F5 algorithm.

3.
In this paper a linear programming-based optimization algorithm called the Sequential Cutting Plane algorithm is presented. The main features of the algorithm are described, convergence to a Karush–Kuhn–Tucker stationary point is proved, and numerical experience on some well-known test sets is reported. The algorithm is based on an earlier version for convex inequality constrained problems, but here it is extended to general continuously differentiable nonlinear programming problems containing both nonlinear inequality and equality constraints. A comparison with some existing solvers shows that the algorithm is competitive. Thus, this new method based on solving linear programming subproblems is a good alternative for solving nonlinear programming problems efficiently. The algorithm has been used as a subsolver in a mixed integer nonlinear programming algorithm, where the linear problems provide lower bounds on the optimal solutions of the nonlinear programming subproblems in the branch and bound tree for convex, inequality constrained problems.
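The abstract does not give the form of the LP subproblems, but the general shape of an LP-based step can be illustrated: linearize the objective and the inequality constraints at the current iterate and solve the resulting LP over a small box. The sketch below is a generic sequential-linear-programming step under those assumptions, not the authors' Sequential Cutting Plane method.

```python
import numpy as np
from scipy.optimize import linprog

def lp_step(grad_f, g, jac_g, x, radius=1.0):
    """Solve  min grad_f(x)' d  s.t.  g(x) + Jg(x) d <= 0,  |d_i| <= radius,
    and return the trial point x + d (generic sketch of one LP subproblem)."""
    res = linprog(grad_f(x),
                  A_ub=jac_g(x), b_ub=-np.asarray(g(x)),
                  bounds=[(-radius, radius)] * len(x))
    return x + res.x
```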

4.
We consider the classical problem of searching for a heavier coin in a set of n coins, n−1 of which have the same weight. The weighing device is a b-balance, a generalization of the two-arm balance. The minimum numbers of weighings are determined exactly for the worst-case sequential, average-case sequential, worst-case predetermined, and average-case predetermined algorithms. We also investigate the above search model with an additional constraint: each weighing may use only the coins that are still in doubt. We present a worst-case optimal sequential algorithm and an average-case optimal sequential algorithm, each requiring the minimum number of weighings.
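For the classical two-arm balance (the b = 2 special case, under my reading of "b-balance"), the worst-case optimal sequential strategy is the familiar ternary split, needing ⌈log₃ n⌉ weighings; a minimal sketch:

```python
import math

def find_heavy(coins, weigh):
    """Locate the unique heavy coin among `coins` with a two-arm balance.
    `weigh(a, b)` returns +1, -1 or 0 as sum(a) is heavier than, lighter
    than, or equal to sum(b).  Splitting into three near-equal groups
    keeps the heavy coin in a set of size <= ceil(n/3) per weighing."""
    while len(coins) > 1:
        k = math.ceil(len(coins) / 3)
        a, b, rest = coins[:k], coins[k:2 * k], coins[2 * k:]
        r = weigh(a, b)
        coins = a if r > 0 else (b if r < 0 else rest)
    return coins[0]
```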

5.
An algorithm is presented which produces a Delaunay triangulation of n points in the Euclidean plane in expected linear time. The expected execution time is achieved when the data are (not too far from) uniformly distributed. A modification of the algorithm, discussed in the appendix, treats most of the non-uniform distributions. The basis of this algorithm is a geographical partitioning of the plane into boxes by the well-known radix-sort algorithm. This partitioning is also used as the basis for a linear time algorithm for finding the convex hull of n points in the Euclidean plane.
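The first phase is the grid (box) partition; a minimal sketch of that phase is below. With roughly uniform data and about n boxes, each box holds O(1) points in expectation, which is what makes the subsequent triangulation phase expected linear. The grid size and hashing details here are illustrative assumptions.

```python
import math

def bucket_points(pts):
    """Partition n points into an m-by-m grid with m ~ sqrt(n)
    (sketch of the radix-sort-style geographic partition)."""
    n = len(pts)
    m = max(1, math.isqrt(n))
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    boxes = {}
    for x, y in pts:
        i = min(m - 1, int((x - x0) * m / ((x1 - x0) or 1.0)))
        j = min(m - 1, int((y - y0) * m / ((y1 - y0) or 1.0)))
        boxes.setdefault((i, j), []).append((x, y))
    return boxes
```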

6.
This article considers computational aspects of the nonparametric maximum likelihood estimator (NPMLE) for the distribution function of bivariate interval-censored data. The computation of the NPMLE consists of a parameter reduction step and an optimization step. This article focuses on the reduction step and introduces two new reduction algorithms: the Tree algorithm and the HeightMap algorithm. The Tree algorithm is mentioned only briefly. The HeightMap algorithm is discussed in detail and also given in pseudocode. It is a fast and simple algorithm of time complexity O(n^2), an order faster than the best previously known algorithm, by Bogaerts and Lesaffre. We compare the new algorithms to earlier algorithms in a simulation study and demonstrate that the new algorithms are significantly faster. Finally, we discuss how the HeightMap algorithm can be generalized to d-dimensional data with d > 2; such a multivariate version has time complexity O(n^d).

7.
This paper introduces an algorithm for pattern recognition. The algorithm classifies a measured object as belonging to one of N known classes or to none of them. It makes use of fuzzy techniques, and possibility is used instead of probability. The algorithm was conceived with the idea of recognizing fast-moving objects, but it is shown to be more general. The use of fuzzy ISODATA as a front end to the algorithm is shown. The algorithm is shown to accomplish the objective of correct classification or no classification. Values that describe possibility distributions are introduced, and some of their properties are investigated and illustrated. An expected value for a possibility distribution is also investigated. The algorithm proves adaptable to a wide variety of imprecise recognition problems. Test results illustrate the use of the technique embodied in the algorithm and indicate its viability.
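As a toy illustration of the correct-classification-or-no-classification behaviour (my own generic sketch, not the paper's possibility distributions): assign each class a possibility that decays with distance to a class prototype, and refuse to classify when no class is sufficiently possible.

```python
import math

def classify(x, prototypes, threshold=0.5):
    """Return the most possible class for feature vector x, or None
    when even the best class falls below the possibility threshold."""
    poss = {c: math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)))
            for c, p in prototypes.items()}
    best = max(poss, key=poss.get)
    return best if poss[best] >= threshold else None
```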

8.
A descent algorithm for nonsmooth convex optimization
This paper presents a new descent algorithm for minimizing a convex function which is not necessarily differentiable. The algorithm can be implemented and may be considered a modification of the ε-subgradient algorithm and Lemaréchal's descent algorithm. Our algorithm is also closely related to the proximal point algorithm applied to convex minimization problems. A convergence theorem for the algorithm is established under the assumption that the objective function is bounded from below. Limited computational experience with the algorithm is also reported.
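For reference, the proximal point iteration to which the abstract relates the method is, for a convex objective f and step sizes λ_k > 0,

```latex
x_{k+1} = \operatorname*{arg\,min}_{x}\; f(x) + \frac{1}{2\lambda_k}\,\lVert x - x_k\rVert^2 .
```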

9.
Any nonempty string of the form xx is called a repetition. An O(n log n) algorithm is presented to find all repetitions in a string of length n. The algorithm is based on a linear algorithm to find all the new repetitions formed when two strings are concatenated; this linear algorithm is possible because new repetitions of equal length must occur in blocks with consecutive starting positions. The linear algorithm uses a variation of the Knuth-Morris-Pratt algorithm to find all partial occurrences of a pattern within a text string. It is also shown that no algorithm based on comparisons of symbols can improve on O(n log n). Finally, some open problems and applications are suggested.
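As a baseline for contrast with the paper's O(n log n) method, the definition can be checked directly by a naive enumeration of all repetitions xx (a brute-force sketch, obviously not the paper's algorithm):

```python
def squares(s):
    """All repetitions xx in s, returned as (start, |x|) pairs, by brute force."""
    n = len(s)
    return [(i, L) for L in range(1, n // 2 + 1)
            for i in range(n - 2 * L + 1)
            if s[i:i + L] == s[i + L:i + 2 * L]]

print(squares("aabaabaa"))   # e.g. (0, 3) is the repetition "aab" + "aab"
```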

10.
This paper presents a new composite sub-step algorithm for obtaining reliable numerical responses in structural dynamics. The newly developed algorithm is a two-sub-step, second-order accurate, and unconditionally stable implicit algorithm with the same numerical properties as the Bathe algorithm. A detailed analysis of stability and numerical accuracy is presented, showing that the new algorithm's numerical characteristics are identical to those of the Bathe algorithm; hence the new sub-step scheme can be considered an alternative to it. Meanwhile, the new algorithm possesses the following properties: (a) it produces solutions as accurate as the Bathe algorithm's for linear and nonlinear problems; (b) it does not involve any artificial parameters or additional variables, such as Lagrange multipliers; (c) the effective stiffness matrices of the two sub-steps are identical; (d) it is self-starting. Numerical experiments are given to show the superiority of the new algorithm and the Bathe algorithm over the dissipative CH-α algorithm and the non-dissipative trapezoidal rule.
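For context, the standard Bathe scheme whose properties the new algorithm is said to reproduce advances the displacement u over [t, t+Δt] in two equal sub-steps: the trapezoidal rule on the first half step, then a three-point Euler backward formula. These are the classical Bathe relations, quoted for orientation; the new algorithm's own update is not spelled out in the abstract.

```latex
\dot u_{t+\Delta t/2} = \dot u_t + \tfrac{\Delta t}{4}\bigl(\ddot u_t + \ddot u_{t+\Delta t/2}\bigr),
\qquad
u_{t+\Delta t/2} = u_t + \tfrac{\Delta t}{4}\bigl(\dot u_t + \dot u_{t+\Delta t/2}\bigr),
\\[4pt]
\dot u_{t+\Delta t} = \frac{u_t - 4\,u_{t+\Delta t/2} + 3\,u_{t+\Delta t}}{\Delta t},
\qquad
\ddot u_{t+\Delta t} = \frac{\dot u_t - 4\,\dot u_{t+\Delta t/2} + 3\,\dot u_{t+\Delta t}}{\Delta t}.
```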

11.
In order to reduce the computational burden and improve the precision of nonlinear optimization and pollution source identification in the convection–diffusion equation, a new algorithm, the chaos gray-coded genetic algorithm (CGGA), is proposed, in which the initial population is generated by a chaos mapping and new chaos mutation and Hooke–Jeeves evolution operations are used. As the search range shrinks, CGGA is gradually directed toward an optimal result by the excellent individuals obtained by the gray-coded genetic algorithm. Its convergence is analyzed. It is very efficient at maintaining population diversity during the evolution of the gray-coded genetic algorithm, and it overcomes the Hamming-cliff phenomenon present in other encodings. Its efficiency is verified on 20 nonlinear test functions of 1–20 variables in comparison with a standard binary-coded genetic algorithm and an improved genetic algorithm. The position and intensity of a pollution source are well identified by CGGA. Compared with a gray-coded hybrid-accelerated genetic algorithm and pure random search, CGGA converges faster and achieves higher precision.
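Two of the ingredients are standard enough to sketch: Gray encoding (which removes Hamming cliffs, since adjacent integers differ in exactly one bit) and chaotic initialization. The sketch below takes the logistic map x ← 4x(1−x) as the chaos mapping; the abstract does not name the map, so that choice is an assumption.

```python
def gray(b):
    """Binary-reflected Gray code of integer b; adjacent integers
    differ in exactly one bit, which avoids Hamming cliffs."""
    return b ^ (b >> 1)

def chaos_init(pop_size, bits, x=0.123):
    """Generate an initial population of Gray-coded integers by
    thresholding a logistic-map orbit (assumed chaos mapping)."""
    population = []
    for _ in range(pop_size):
        n = 0
        for _ in range(bits):
            x = 4.0 * x * (1.0 - x)
            n = (n << 1) | (x > 0.5)
        population.append(gray(n))
    return population
```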

12.
A rank-one algorithm is presented for unconstrained function minimization. The algorithm is a modified version of Davidon's variance algorithm and incorporates a limited line search. It is shown to be a descent algorithm; for quadratic forms, it exhibits finite convergence in certain cases. Numerical studies indicate that it is considerably superior to both the Davidon–Fletcher–Powell algorithm and the conjugate-gradient algorithm.
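Davidon's variance algorithm belongs to the rank-one quasi-Newton family. For context, the classical symmetric rank-one update of the Hessian approximation B_k, with s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k, is given below; the paper's modified variant differs in details such as the limited line search.

```latex
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k}.
```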

13.
An algorithm is developed for minimizing nonsmooth convex functions. It extends the Elzinga–Moore cutting plane algorithm by forcing the next test point to stay not too far from the previous ones, thus removing the compactness assumption. Our method is to the Elzinga–Moore algorithm what a proximal bundle method is to Kelley's algorithm. Instead of the lower approximations used in proximal bundle methods, the present approach is based on objects that regularize translations of the objective function. We propose some variants and, on some academic test problems, conduct a numerical comparison with the Elzinga–Moore algorithm and two other well-known nonsmooth methods.

14.
This article introduces a new method for computing regression quantile functions. The method applies a finite smoothing algorithm based on smoothing the nondifferentiable quantile regression objective function ρτ. The smoothing can be done for all τ ∈ (0, 1), and convergence is finite for any finite number of τi ∈ (0, 1), i = 1,…,N. Numerical comparison shows that the finite smoothing algorithm outperforms the simplex algorithm in computing speed. Compared with the powerful interior point algorithm, introduced in an earlier article, it is competitive overall, and it is significantly faster when the design matrix in quantile regression has a large number of covariates. Additionally, the new algorithm provides the same accuracy as the simplex algorithm, whereas the interior point algorithm gives only approximate solutions in theory, and rounding may be necessary to improve their accuracy in practice.
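The object being smoothed is the standard quantile regression criterion: the check function ρ_τ and the estimator

```latex
\rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
\qquad
\hat\beta(\tau) = \operatorname*{arg\,min}_{\beta}\; \sum_{i=1}^{n} \rho_\tau\bigl(y_i - x_i^{\top}\beta\bigr),
```

whose kink at u = 0 is what the finite smoothing algorithm rounds off so that smooth optimization steps apply.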

15.
This paper further studies the origin-based (OB) algorithm for solving the combined distribution and assignment (CDA) problem, in which the trip distribution follows a gravity model and the traffic assignment is a user-equilibrium model. Recently, the OB algorithm has been shown to be superior to the Frank–Wolfe (FW) algorithm for the traffic assignment (TA) problem and better than Evans' algorithm for the CDA problem in both computational time and solution accuracy. In this paper, a modified origin–destination (OD) flow update strategy proposed by Huang and Lam [Huang, H.J., Lam, W.H.K., 1992. Modified Evans' algorithms for solving the combined trip distribution and assignment problem. Transportation Research B 26 (4), 325–337] for CDA with Evans' algorithm is adopted to improve the OB algorithm for solving the CDA problem. A convergence proof of the improved OB algorithm is provided, along with some preliminary computational results demonstrating the effect of the modified OD flow update strategy embedded in the OB algorithm.
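For readers outside transportation, the trip distribution side is typically the doubly constrained gravity model, shown below in standard notation for orientation (origin totals O_i, destination totals D_j, travel costs c_ij, balancing factors A_i, B_j); the abstract does not specify the exact deterrence function.

```latex
T_{ij} = A_i O_i \, B_j D_j \, e^{-\beta c_{ij}},
\qquad
\sum_j T_{ij} = O_i, \quad \sum_i T_{ij} = D_j .
```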

16.
In this paper the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well-known stochastic optimization algorithms, namely simulated annealing, the genetic algorithm, and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as the criterion function for the algorithm; in cases where it is not analytically tractable, a Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, at very low computational cost, it achieves improved performance compared with popular MCMC algorithms, such as MCMC model composition, as well as with “vanilla” versions of simulated annealing, the genetic algorithm, and tabu search.
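A minimal sketch of the simulated-annealing component over variable-inclusion vectors, with `log_post` standing for the (possibly Laplace-approximated) log posterior model probability. The cooling schedule and single-flip move set are my assumptions, and the hybrid's genetic and tabu ingredients are omitted.

```python
import math
import random

def anneal_model_search(log_post, p, iters=5000, t0=1.0):
    """Search over 0/1 inclusion vectors of length p by flipping one
    coordinate at a time and accepting with the Metropolis rule."""
    gamma = [random.randint(0, 1) for _ in range(p)]
    cur = log_post(gamma)
    best, best_val = gamma[:], cur
    for k in range(1, iters + 1):
        j = random.randrange(p)
        gamma[j] ^= 1                       # propose flipping variable j
        new = log_post(gamma)
        temp = t0 / math.log(k + 1.0)       # assumed logarithmic cooling
        if new >= cur or random.random() < math.exp((new - cur) / temp):
            cur = new
            if new > best_val:
                best, best_val = gamma[:], new
        else:
            gamma[j] ^= 1                   # reject: undo the flip
    return best, best_val
```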

17.
A framework and an algorithm for using modified Gram-Schmidt for constrained and weighted linear least squares problems are presented. It is shown that a direct implementation of a weighted modified Gram-Schmidt algorithm is unstable for heavily weighted problems, and that in most cases a stable algorithm can be obtained by a simple modification free of extra computational cost; in particular, it is not necessary to perform reorthogonalization. Solving the weighted and constrained linear least squares problem with the presented weighted modified Gram-Schmidt algorithm is shown to be numerically equivalent to an algorithm based on a weighted Householder-like QR factorization applied to a slightly larger problem. This equivalence is used to explain the instability of the weighted modified Gram-Schmidt algorithm. If orthogonality of the columns of Q with respect to a weighted inner product is important, then reorthogonalization can be used; one way of performing such reorthogonalization is described. Computational tests are given to show the main features of the algorithm.
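To fix ideas, here is the direct weighted modified Gram-Schmidt — ordinary MGS in the inner product ⟨u, v⟩ = uᵀWv with W = diag(w) — which is the naive variant the paper shows can be unstable for heavily weighted problems (a sketch of that naive variant, not of the stabilized modification):

```python
import numpy as np

def weighted_mgs(A, w):
    """Factor A = QR with Q'WQ = I for W = diag(w), by modified
    Gram-Schmidt in the weighted inner product (naive version)."""
    V = A.astype(float).copy()
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.sqrt(V[:, k] @ (w * V[:, k]))   # weighted norm
        Q[:, k] = V[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ (w * V[:, j])        # weighted projection
            V[:, j] -= R[k, j] * Q[:, k]
    return Q, R
```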

18.
In Venkaiah [1] an algorithm for solving linear optimization problems, based on the idea of Karmarkar's projective algorithm, is proposed. The essential simplification in the new algorithm is the use of a fixed projection operator. In this way the algorithm requires only O(n^2) operations to obtain a sufficiently exact solution. In this note it is shown that in some special cases the algorithm of Venkaiah yields a feasible solution that is far from the optimal one.

19.
Ideas of a simplicial variable dimension restart algorithm to approximate zero points on R^n, developed by the authors, and of a linear complementarity problem pivoting algorithm are combined into an algorithm for solving the nonlinear complementarity problem with lower and upper bounds. The algorithm can be considered a modification of the 2n-ray zero point finding algorithm on R^n. It appears that for the new algorithm the number of linear programming pivot steps is typically smaller than for the 2n-ray algorithm applied to an equivalent zero point problem; this is because the algorithm exploits the complementarity conditions on the variables. This work is part of the VF-program “Equilibrium and Disequilibrium in Demand and Supply,” which has been approved by the Netherlands Ministry of Education and Sciences.
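The problem being solved is the nonlinear complementarity problem with bounds l ≤ x ≤ u; in its standard componentwise form (stated here for orientation, not quoted from the paper), one seeks x with

```latex
x_i = l_i \;\Rightarrow\; f_i(x) \ge 0, \qquad
l_i < x_i < u_i \;\Rightarrow\; f_i(x) = 0, \qquad
x_i = u_i \;\Rightarrow\; f_i(x) \le 0 .
```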

20.
Componentwise adaptation for high dimensional MCMC
We introduce a new adaptive MCMC algorithm based on the traditional single-component Metropolis-Hastings algorithm and on our earlier adaptive Metropolis algorithm (AM). In the new algorithm the adaptation is performed component by component. The chain is no longer Markovian, but it remains ergodic. The algorithm is demonstrated to work well in varying test cases of up to 1000 dimensions.
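A minimal sketch of the componentwise adaptive idea: a Gaussian random-walk proposal per coordinate whose scale tracks the running variance of that coordinate's own history. The 2.4 scaling and the variance floor eps are conventional choices from the AM literature, not necessarily the authors' exact tuning.

```python
import math
import random

def componentwise_am(log_pi, x0, iters=10000, eps=1e-6):
    """Single-component Metropolis-Hastings where each coordinate's
    proposal std adapts to the empirical variance of its past values."""
    d = len(x0)
    x = list(x0)
    mean = list(x0)
    m2 = [1.0] * d                   # running sums of squared deviations
    lp = log_pi(x)
    chain = []
    for n in range(1, iters + 1):
        for i in range(d):
            scale = 2.4 * math.sqrt(m2[i] / n + eps)
            old = x[i]
            x[i] = old + random.gauss(0.0, scale)
            lp_new = log_pi(x)
            if math.log(random.random()) < lp_new - lp:
                lp = lp_new          # accept the single-component move
            else:
                x[i] = old           # reject and restore
            delta = x[i] - mean[i]   # Welford update of component i's moments
            mean[i] += delta / n
            m2[i] += delta * (x[i] - mean[i])
        chain.append(list(x))
    return chain
```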
