Similar Documents (20 results)
1.
Local and parallel algorithms for the two-dimensional steady incompressible Navier-Stokes equations are studied. The algorithms combine multigrid and domain decomposition and are based on two finite element spaces: a function space on a coarse grid and function spaces on fine grids over subdomains. The local algorithm solves a nonlinear problem on the coarse grid and then a linear problem on the fine grid, discarding the part of the solution near the interior boundary where the error is relatively large. Finally, a parallel algorithm is constructed from the local algorithm via overlapping domain decomposition. An error analysis of the algorithms is carried out, yielding error estimates better than those of the standard finite element method, and numerical experiments are performed; the comparisons confirm the efficiency and soundness of the algorithms.
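To illustrate the coarse-nonlinear / fine-linear structure of such two-grid algorithms, here is a minimal sketch on a 1D semilinear model problem (finite differences instead of finite elements, no domain decomposition); the model equation, grid sizes, and Newton tolerance are our own assumptions, not the paper's setting.

```python
import numpy as np

# Toy two-grid sketch: solve -u'' + u^3 = f on (0,1), u(0)=u(1)=0,
# nonlinearly on a coarse grid, then one linearized solve on a fine grid.
# This only illustrates the coarse-nonlinear / fine-linear idea; it is not
# the paper's finite element or domain decomposition algorithm.

def laplacian(n):
    """Dirichlet finite-difference Laplacian on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    x = np.linspace(h, 1 - h, n)
    return A, x

def solve_nonlinear(n, f, tol=1e-10):
    """Newton's method for A u + u^3 = f(x) on a grid with n interior points."""
    A, x = laplacian(n)
    u = np.zeros(n)
    for _ in range(50):
        F = A @ u + u**3 - f(x)
        J = A + np.diag(3.0 * u**2)
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < tol:
            break
    return u, x

def two_grid(n_coarse, n_fine, f):
    """Coarse nonlinear solve, then one fine linearized solve around it."""
    uc, xc = solve_nonlinear(n_coarse, f)            # nonlinear problem, small grid
    A, xf = laplacian(n_fine)
    uc_f = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                     np.concatenate(([0.0], uc, [0.0])))  # prolongate to fine grid
    # Linearize u^3 about uc_f:  u^3 ~ uc_f^3 + 3 uc_f^2 (u - uc_f)
    J = A + np.diag(3.0 * uc_f**2)
    rhs = f(xf) + 2.0 * uc_f**3
    return np.linalg.solve(J, rhs), xf

if __name__ == "__main__":
    u_exact = lambda x: np.sin(np.pi * x)
    f = lambda x: np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3
    u2g, xf = two_grid(n_coarse=15, n_fine=255, f=f)
    print("two-grid max error:", np.abs(u2g - u_exact(xf)).max())
```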

2.
Probabilistic proximity searching algorithms based on compact partitions
The main bottleneck of the research in metric space searching is the so-called curse of dimensionality, which makes the task of searching some metric spaces intrinsically difficult, whatever algorithm is used. A recent trend to break this bottleneck resorts to probabilistic algorithms, where it has been shown that one can find 99% of the relevant objects at a fraction of the cost of the exact algorithm. These algorithms are welcome in most applications because resorting to metric space searching already involves a fuzziness in the retrieval requirements. In this paper, we push further in this direction by developing probabilistic algorithms on data structures whose exact versions are the best for high dimensions. As a result, we obtain probabilistic algorithms that are better than the previous ones. We give new insights on the problem and propose a novel view based on time-bounded searching. We also propose an experimental framework for probabilistic algorithms that permits comparing them in offline mode.
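As a rough illustration of time-bounded probabilistic searching over a compact-partition index (the clustering scheme, the "promise" ranking, and the budget below are our own assumptions, not the data structures studied in the paper):

```python
import numpy as np

# Sketch of probabilistic, time-bounded proximity search on a compact-partition
# index: clusters are visited in order of a cheap promise value and the search
# stops when a budget of distance computations is exhausted.

rng = np.random.default_rng(0)

def build_index(data, n_clusters=32):
    """Assign every object to its nearest of n_clusters random centers."""
    centers = data[rng.choice(len(data), n_clusters, replace=False)]
    d2c = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    owner = d2c.argmin(axis=1)
    buckets = [np.where(owner == c)[0] for c in range(n_clusters)]
    return centers, buckets

def range_query(q, r, data, centers, buckets, budget):
    """Report objects within distance r of q, spending at most `budget`
    distance computations; clusters with closer centers are searched first."""
    d_centers = np.linalg.norm(centers - q, axis=1)   # promise of each cluster
    result, spent = [], len(centers)                  # center distances count too
    for c in np.argsort(d_centers):                   # most promising first
        for i in buckets[c]:
            if spent >= budget:
                return result, spent                  # time bound reached
            spent += 1
            if np.linalg.norm(data[i] - q) <= r:
                result.append(i)
    return result, spent

if __name__ == "__main__":
    data = rng.normal(size=(5000, 16))                # moderately high dimension
    centers, buckets = build_index(data)
    q, r = rng.normal(size=16), 4.0
    exact = [i for i in range(len(data)) if np.linalg.norm(data[i] - q) <= r]
    approx, spent = range_query(q, r, data, centers, buckets, budget=1500)
    if exact:
        print(f"recall {len(set(approx) & set(exact)) / len(exact):.2f} "
              f"using {spent} of {len(data)} distance computations")
```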

3.
In practical data mining tasks, high-dimensional data has to be analyzed. In most cases it is very informative to map and visualize the hidden structure of a complex data set in a low-dimensional space. In this paper a new class of mapping algorithms is defined. These algorithms combine topology representing networks and different nonlinear mapping algorithms. While the former methods aim to quantize the data and disclose the real structure of the objects, the nonlinear mapping algorithms are able to visualize the quantized data in the low-dimensional vector space. In this paper, techniques based on these methods are gathered and the results of a detailed analysis of them are presented. The primary aim of this analysis is to examine the preservation of distances and neighborhood relations of the objects. Preservation of neighborhood relations is analyzed in both local and global environments. To evaluate the main properties of the examined methods, we show the outcome of the analysis based on both synthetic and real benchmark examples.
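A minimal sketch of the quantize-then-map idea, with plain k-means standing in for the topology representing network and classical MDS standing in for the nonlinear mapping step (both substitutions are our own simplifications):

```python
import numpy as np

# Sketch: quantize the data with k-means, then map the codebook vectors to 2-D
# with classical (Torgerson) MDS; each data point inherits the position of its
# code vector. Stand-in for the combinations analysed in the paper.

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

def classical_mds(D, dim=2):
    """Embed points with pairwise distance matrix D into `dim` dimensions."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J                  # double centering
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

if __name__ == "__main__":
    # Synthetic high-dimensional data with three clusters.
    X = np.vstack([rng.normal(loc=m, size=(200, 10)) for m in (0.0, 3.0, 6.0)])
    C, labels = kmeans(X, k=15)
    D = np.linalg.norm(C[:, None] - C[None], axis=2)
    Y = classical_mds(D, dim=2)                # 2-D layout of the codebook
    X2d = Y[labels]                            # objects inherit code positions
    print("codebook embedding:", Y.shape, " data embedding:", X2d.shape)
```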

4.
Based on the bounded difference condition, a bounded difference stability framework for learning algorithms is proposed. Within the new framework, the threshold selection algorithm of machine learning, regularized learning algorithms in reproducing kernel Hilbert spaces, ranking algorithms, and bagging algorithms are studied, and the bounded difference stability of each of these algorithms is proved. The results assert that all of these algorithms possess bounded difference stability, which lays a theoretical foundation for their application.
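For reference, the bounded difference condition and the concentration inequality behind it (McDiarmid's inequality) can be stated as follows; the notation is generic rather than the paper's.

```latex
% Bounded difference condition: a function f of n independent samples has
% bounded differences with constants c_1,...,c_n if changing any single
% argument changes its value by at most c_i:
\[
\sup_{x_1,\dots,x_n,\;x_i'}
\bigl| f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n) \bigr| \le c_i ,
\qquad 1 \le i \le n .
\]
% McDiarmid's inequality then bounds the deviation of f from its mean:
\[
\Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr)
\le \exp\!\Bigl( - \frac{2 t^{2}}{\sum_{i=1}^{n} c_i^{2}} \Bigr),
\qquad t > 0 .
\]
```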

5.
In the area of broad-band antenna array signal processing, the global minimum of a quadratic equality constrained quadratic cost minimization problem is often required. The problem posed is usually characterized by a large optimization space (around 50–90 tuples), a large number of linear equality constraints, and a few quadratic equality constraints, each having very low rank quadratic constraint matrices. Two main difficulties arise in this class of problem. Firstly, the feasibility region is nonconvex and multiple local minima abound. This makes conventional numerical search techniques unattractive, as they are unable to locate the global optimum consistently (unless a finite search area is specified). Secondly, the large optimization space makes the use of decision-method algorithms for the theory of the reals unattractive, because these algorithms involve solving for the roots of univariate polynomials whose order grows with the square of the size of the optimization space. In this paper we present a new algorithm which exploits the structure of the constraints to reduce the optimization space to a more manageable size. The new algorithm relies on linear-algebra concepts, basic optimization theory, and a multivariate polynomial root-solving tool often used by decision-method algorithms. This research was supported by the Australian Research Council and the Cooperative Research Centre for Broadband Telecommunications and Networking.
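The space-reduction step hinted at here, eliminating the many linear equality constraints so that only a smaller problem with the quadratic structure remains, can be sketched as below; the problem data are synthetic, and the sketch does not address how the remaining nonconvex problem is solved globally.

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: reduce  min x^T Q x  s.t.  A x = b,  x^T P x = 1  to a problem over
# the null space of A. Data are made up; this is only the space-reduction step,
# not the paper's global-optimization algorithm.

rng = np.random.default_rng(2)
n, m = 60, 40                                 # 60 variables, 40 linear constraints

Q = rng.normal(size=(n, n)); Q = Q @ Q.T      # cost matrix (PSD here for simplicity)
A = rng.normal(size=(m, n))                   # linear equality constraints A x = b
b = rng.normal(size=m)
p = rng.normal(size=(n, 2))
P = p @ p.T                                   # rank-2 quadratic constraint matrix

x0 = np.linalg.lstsq(A, b, rcond=None)[0]     # a particular solution of A x = b
Z = null_space(A)                             # every feasible point is x = x0 + Z y
print("original dimension:", n, " reduced dimension:", Z.shape[1])

# Substituting x = x0 + Z y turns both quadratics into quadratics in y:
Q_red = Z.T @ Q @ Z
q_red = 2.0 * Z.T @ Q @ x0
P_red = Z.T @ P @ Z
p_red = 2.0 * Z.T @ P @ x0
c_red = x0 @ P @ x0 - 1.0
# Reduced problem: min y^T Q_red y + q_red^T y + const
#                  s.t. y^T P_red y + p_red^T y + c_red = 0
```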

6.
Piecewise affine inverse problems form a general class of nonlinear inverse problems. In particular inverse problems obeying certain variational structures, such as Fermat's principle in travel time tomography, are of this type. In a piecewise affine inverse problem a parameter is to be reconstructed when its mapping through a piecewise affine operator is observed, possibly with errors. A piecewise affine operator is defined by partitioning the parameter space and assigning a specific affine operator to each part. A Bayesian approach with a Gaussian random field prior on the parameter space is used. Both problems with a discrete finite partition and a continuous partition of the parameter space are considered.

The main result is that the posterior distribution is decomposed into a mixture of truncated Gaussian distributions, and the expression for the mixing distribution is partially analytically tractable. The general framework has, to the authors' knowledge, not previously been published, although the result for the finite partition is generally known.

Inverse problems are currently of great interest in many fields. The Bayesian approach is popular and most often highly computer intensive. The posterior distribution is frequently concentrated close to high-dimensional nonlinear spaces, resulting in slow mixing for generic sampling algorithms. Inverse problems are, however, often highly structured. In order to develop efficient sampling algorithms for a problem at hand, the problem structure must be exploited.

The decomposition of the posterior distribution that is derived in the current work can be used to develop specialized sampling algorithms. The article contains examples of such sampling algorithms. The proposed algorithms are also applicable to problems with exact observations, a case for which generic sampling algorithms tend to fail.
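The mixture-of-truncated-Gaussians decomposition can be made concrete in one dimension. The toy example below (a scalar parameter, a two-piece affine operator, and a standard Gaussian prior, all our own choices and far simpler than the paper's Gaussian random field setting) computes the posterior exactly as a two-component mixture of truncated Gaussians.

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D piecewise affine inverse problem:
#   prior      x ~ N(0, 1)
#   forward    G(x) = a_k x + b_k  on piece k  (piece 0: x < 0, piece 1: x >= 0)
#   data       y = G(x) + e,  e ~ N(0, sigma^2)
# The posterior is a mixture of Gaussians truncated to the two pieces.

pieces = [  # (a_k, b_k, lower, upper) -- illustrative values
    (0.5, -1.0, -np.inf, 0.0),
    (2.0,  1.0,  0.0,  np.inf),
]
sigma = 0.3
y = 1.8          # an observed value

weights, components = [], []
for a, b, lo, hi in pieces:
    # N(x; 0, 1) * N(y; a x + b, sigma^2) is, as a function of x, an
    # unnormalized Gaussian N(x; m, s^2) times the marginal N(y; b, a^2 + sigma^2).
    s2 = 1.0 / (1.0 + a**2 / sigma**2)
    m = s2 * a * (y - b) / sigma**2
    s = np.sqrt(s2)
    mass_on_piece = norm.cdf(hi, m, s) - norm.cdf(lo, m, s)
    weights.append(norm.pdf(y, b, np.sqrt(a**2 + sigma**2)) * mass_on_piece)
    components.append((m, s, lo, hi))

weights = np.array(weights) / np.sum(weights)
for w, (m, s, lo, hi) in zip(weights, components):
    print(f"piece ({lo:5.1f},{hi:5.1f}): weight {w:.3f}, "
          f"truncated N({m:.3f}, {s:.3f}^2)")
```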

7.
We consider two types of orthogonal, oriented, rectangular, two-dimensional packing problems. The first is the strip packing problem, for which four new and improved level-packing algorithms are presented. Two of these algorithms guarantee a packing that may be disentangled by guillotine cuts. These are combined with a two-stage heuristic designed to find a solution to the variable-sized bin packing problem, where the aim is to pack all items into bins so as to minimise the packing area. This heuristic packs the levels of a solution to the strip packing problem into large bins and then attempts to repack the items in those bins into smaller bins in order to reduce wasted space. The results of the algorithms are compared to those of seven level-packing heuristics from the literature by means of a large number of strip-packing benchmark instances. It is found that the new algorithms are an improvement over known level-packing heuristics for the strip packing problem. The advancements made by the new and improved algorithms are limited in terms of utilised space when applied to the variable-sized bin packing problem. However, they do provide results faster than many existing algorithms.
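For background, a classical level-packing heuristic of the kind used for comparison is First-Fit Decreasing Height (FFDH); the sketch below implements only that standard baseline, not the four new algorithms or the two-stage bin-packing heuristic.

```python
# First-Fit Decreasing Height (FFDH), a classical level-packing baseline for
# the strip packing problem; items are (width, height), the strip width is fixed.
# This is only the standard heuristic, not the algorithms proposed in the paper.

def ffdh(items, strip_width):
    """Return (levels, total height); each level is [y, height, used_width, placements]."""
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # decreasing height
    levels = []
    y = 0.0
    for w, h in items:
        placed = False
        for lvl in levels:                       # first level with enough width
            if lvl[2] + w <= strip_width:
                lvl[3].append((lvl[2], (w, h)))
                lvl[2] += w
                placed = True
                break
        if not placed:                           # open a new level on top
            levels.append([y, h, w, [(0.0, (w, h))]])
            y += h                               # item is tallest remaining, so
    return levels, y                             # the level height equals h

if __name__ == "__main__":
    items = [(4, 3), (3, 5), (6, 2), (2, 2), (5, 4), (3, 3), (1, 6)]
    levels, height = ffdh(items, strip_width=10)
    print("strip height used:", height)
    for y0, h, used, placed in levels:
        print(f"level at y={y0}: height {h}, items {[wh for _, wh in placed]}")
```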

8.
Algorithms are developed for computing generalized Racah coefficients for the U(N) groups. The irreducible representations (irreps) of the U(N) groups, as well as their tensor products, are realized as polynomials in complex variables. When tensor product irrep labels as well as a given irrep label are specified, maps are constructed from the irrep space to the tensor product space. The number of linearly independent maps gives the multiplicity. The main theorem of this paper shows that the eigenvalues of generalized Casimir operators are always sufficient to break the multiplicity. Using this theorem algorithms are given for computing the overlap between different sets of eigenvalues of commuting generalized Casimir operators, which are the generalized Racah coefficients. It is also shown that these coefficients are basis independent. Mathematics Subject Classifications (2000) 22E70, 81R05, 81R40.

9.
We consider the problem of scheduling a single machine to minimize total tardiness with sequence-dependent setup times. We present two algorithms for this problem: a problem space-based local search heuristic and a Greedy Randomized Adaptive Search Procedure (GRASP). With respect to GRASP, our main contributions are a new cost function in the construction phase, a new variation of Variable Neighborhood Search in the improvement phase, and Path Relinking using three different search neighborhoods. The problem space-based local search heuristic incorporates local search with respect to both the problem space and the solution space. We compare our algorithms with Simulated Annealing, Genetic Search, Pairwise Interchange, Branch and Bound, and Ant Colony Search on a set of test problems from the literature, showing that the algorithms perform very competitively.
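A minimal GRASP skeleton for this problem, with a deliberately simple greedy randomized construction and a pairwise-interchange local search standing in for the paper's cost function, VNS variant, and path relinking (the instance data are made up):

```python
import random

# Minimal GRASP sketch for single-machine total tardiness with sequence-
# dependent setups. Construction and improvement phases are simple stand-ins.

def total_tardiness(seq, p, d, s):
    t, tard, prev = 0, 0, None
    for j in seq:
        t += (s[prev][j] if prev is not None else 0) + p[j]
        tard += max(0, t - d[j])
        prev = j
    return tard

def construct(jobs, p, d, s, alpha=0.3):
    """Greedy randomized construction with a restricted candidate list."""
    seq, t, prev, remaining = [], 0, None, set(jobs)
    while remaining:
        scored = []
        for j in remaining:
            finish = t + (s[prev][j] if prev is not None else 0) + p[j]
            scored.append((finish + max(0, finish - d[j]), j))
        scored.sort()
        cutoff = scored[0][0] + alpha * (scored[-1][0] - scored[0][0])
        j = random.choice([j for c, j in scored if c <= cutoff])
        seq.append(j); remaining.remove(j)
        t += (s[prev][j] if prev is not None else 0) + p[j]
        prev = j
    return seq

def local_search(seq, p, d, s):
    """Best-improvement pairwise interchange."""
    best, best_cost, improved = seq[:], total_tardiness(seq, p, d, s), True
    while improved:
        improved = False
        for i in range(len(best)):
            for k in range(i + 1, len(best)):
                cand = best[:]
                cand[i], cand[k] = cand[k], cand[i]
                c = total_tardiness(cand, p, d, s)
                if c < best_cost:
                    best, best_cost, improved = cand, c, True
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    n = 12
    jobs = list(range(n))
    p = [random.randint(2, 10) for _ in range(n)]          # processing times
    d = [random.randint(10, 60) for _ in range(n)]         # due dates
    s = [[random.randint(1, 5) for _ in range(n)] for _ in range(n)]  # setups
    best_cost, best_seq = float("inf"), None
    for _ in range(20):                                     # GRASP iterations
        seq, cost = local_search(construct(jobs, p, d, s), p, d, s)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    print("best total tardiness found:", best_cost, "sequence:", best_seq)
```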

10.
The basic contracts traded on energy exchanges are swaps involving the delivery of electricity for fixed-rate payments over a certain period of time. The main objective of this article is to solve the quadratic hedging problem for European options on these swaps, known as electricity swaptions. We consider a general class of Hilbert space valued exponential jump-diffusion models. Since the forward curve is an infinite-dimensional object, but only a finite set of traded contracts are available for hedging, the market is inherently incomplete. We derive the optimization problem for the quadratic hedging problem under the risk neutral measure and state a representation of its solution, which is the starting point for numerical algorithms.

11.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute the decomposed subproblems among parallel processors. It is shown that if the decomposed subproblems are uncoupled from one another, they can be solved independently; otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium- and large-scale problems. Up to six parallel processors connected by Ethernet networks are used to solve four large-scale minimization problems, and the results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.
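The observation that uncoupled subproblems can be solved independently is easy to illustrate. The sketch below is a generic block-decomposition toy (a separable test function and a process pool of our own choosing), not the PVT/PGD synchronization schemes themselves.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import minimize

# Sketch: decompose the variable space into blocks and minimize each block on a
# separate worker. The overall objective is defined as a sum of independent
# per-block Rosenbrock functions, so the blocks are uncoupled and the
# independent solves already give the full solution; coupled problems would
# need the synchronization steps of PVT/PGD-type algorithms.

def rosenbrock(xb):
    """Chained Rosenbrock function on one block of variables (minimum at ones)."""
    return np.sum(100.0 * (xb[1:] - xb[:-1] ** 2) ** 2 + (1.0 - xb[:-1]) ** 2)

def solve_block(args):
    block_id, x0b = args
    res = minimize(rosenbrock, x0b, method="BFGS")
    return block_id, res.x, res.fun

if __name__ == "__main__":
    n, n_blocks = 40, 4
    x0 = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)      # variable subspaces
    tasks = [(b, x0[idx]) for b, idx in enumerate(blocks)]
    x = np.empty(n)
    with ProcessPoolExecutor(max_workers=n_blocks) as pool:
        for block_id, xb, fb in pool.map(solve_block, tasks):
            x[blocks[block_id]] = xb                      # assemble the full point
            print(f"block {block_id}: objective {fb:.2e}")
    print("assembled solution close to all ones:", np.allclose(x, 1.0, atol=1e-2))
```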

12.
Several kinds of new numerical schemes for the stationary Navier-Stokes equations, based on the ideas of Inertial Manifolds and Approximate Inertial Manifolds and called inertial algorithms in this paper, are presented together with their error estimates. All of these algorithms are constructed within a uniform framework, namely by constructing new projections of the Sobolev space in which the true solution is sought. It is shown that the proposed inertial algorithms can greatly improve the convergence rate of the standard Galerkin approximate solution with less computational effort. Numerical examples are also given to verify the results of this paper.

13.
徐永春, 何欣枫, 何震. 《数学学报》 (Acta Mathematica Sinica), 2010, 53(4): 751-758
Relying on properties of the projection mapping, many authors have studied the approximation of solutions of systems of variational inequalities with different mappings in Hilbert spaces, but there has been relatively little work in Banach spaces, mainly because the projection mapping lacks good properties in Banach spaces. In this paper, using the properties of the sunny nonexpansive retraction mapping Q_K, an implicit iterative method is derived. With this method, the main results of [M.A. Noor, K.I. Noor, Projection algorithms for solving a system of general variational inequalities, Nonlinear Analysis, 70 (2009) 2700-2706] are extended from Hilbert spaces to Banach spaces.

14.
Many constrained optimization algorithms use a basis for the null space of the matrix of constraint gradients. Recently, methods have been proposed that enable this null space basis to vary continuously as a function of the iterates in a neighborhood of the solution. This paper reports results from topology showing that, in general, there is no continuous function that generates the null space basis of all full rank rectangular matrices of a fixed size. Thus constrained optimization algorithms cannot assume an everywhere continuous null space basis. We also give some indication of where these discontinuities must occur. We then propose an alternative implementation of a class of constrained optimization algorithms that uses approximations to the reduced Hessian of the Lagrangian but is independent of the choice of null space basis. This approach obviates the need for a continuously varying null space basis. Research supported by NSF grants MCS 81-15475 and DCR-8403483. Research supported by ARO contracts DAAG 29-81-K-0108 and DAAG 29-84-K-0140.
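The role of the null space basis can be seen in the textbook null-space (reduced Hessian) step for equality-constrained minimization; the sketch below is that classical computation, whose dependence on the choice of basis the paper's approach removes.

```python
import numpy as np
from scipy.linalg import null_space

# Textbook null-space step for  min f(x)  s.t.  c(x) = 0  at a current iterate:
# split the step into a range-space part restoring feasibility and a null-space
# part p = Z y obtained from the reduced Hessian Z^T H Z.

def null_space_step(grad, H, A, c_val):
    """One step given gradient, (approximate) Hessian of the Lagrangian H,
    constraint Jacobian A, and constraint values c_val."""
    Z = null_space(A)                                     # basis for null(A)
    p_range = -np.linalg.lstsq(A, c_val, rcond=None)[0]   # restore feasibility
    reduced_H = Z.T @ H @ Z
    reduced_g = Z.T @ (grad + H @ p_range)
    y = np.linalg.solve(reduced_H, -reduced_g)            # reduced Newton step
    return p_range + Z @ y

if __name__ == "__main__":
    # Convex quadratic test problem: min 0.5 x^T H x + g^T x  s.t.  A x = b,
    # for which a single null-space Newton step reaches the KKT point.
    rng = np.random.default_rng(3)
    n, m = 8, 3
    M = rng.normal(size=(n, n)); H = M @ M.T + np.eye(n)
    g = rng.normal(size=n)
    A = rng.normal(size=(m, n)); b = rng.normal(size=m)
    x = np.zeros(n)
    p = null_space_step(H @ x + g, H, A, A @ x - b)
    x_new = x + p
    print("constraint residual after step:", np.linalg.norm(A @ x_new - b))
```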

15.
This paper deals with a new variable metric algorithm for stochastic optimization problems. The essence of the method is as follows: two stochastic quasigradient algorithms work simultaneously, the first in the main space and the second with respect to the matrices that modify the space variables. Almost sure convergence of the algorithm is proved for the case of a convex (possibly nonsmooth) objective function.

16.
Algorithms based on Pythagorean hodographs (PH) in the Euclidean plane and in Minkowski space share common goals, the main one being rationality of offsets of planar domains. However, only separate interpolation techniques based on these curves can be found in the literature. It was recently revealed that rational PH curves in the Euclidean plane and in Minkowski space are very closely related. In this paper, we continue the discussion of the interplay between spatial MPH curves and their associated planar PH curves from the point of view of Hermite interpolation. On the basis of this approach we design a new, simple interpolation algorithm. The main advantage of the unifying method presented lies in the fact that it uses, after only some simple additional computations, an arbitrary algorithm for interpolation using planar PH curves also for interpolation using spatial MPH curves. We present the functionality of our method for G1 Hermite data; however, one could also obtain higher order algorithms.
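The defining property behind rational offsets can be checked directly. The sketch below illustrates the standard planar PH construction with a toy preimage of our own choosing (it is not the paper's MPH Hermite interpolation algorithm): setting x' = u^2 - v^2 and y' = 2uv makes the speed |r'(t)| = u^2 + v^2 a polynomial.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Standard planar Pythagorean-hodograph construction: choose preimage
# polynomials u(t), v(t) and set  x'(t) = u^2 - v^2,  y'(t) = 2 u v.
# Then x'^2 + y'^2 = (u^2 + v^2)^2, so the speed is the polynomial u^2 + v^2.
# (Illustration only; the paper's algorithm concerns MPH curves in Minkowski space.)

u = np.array([1.0, 2.0])        # u(t) = 1 + 2t   (coefficients in increasing degree)
v = np.array([0.5, -1.0])       # v(t) = 0.5 - t

xp = P.polysub(P.polymul(u, u), P.polymul(v, v))     # x'(t) = u^2 - v^2
yp = 2 * P.polymul(u, v)                             # y'(t) = 2uv
sigma = P.polyadd(P.polymul(u, u), P.polymul(v, v))  # claimed speed u^2 + v^2

# Check the Pythagorean condition x'^2 + y'^2 = sigma^2 as polynomials.
lhs = P.polyadd(P.polymul(xp, xp), P.polymul(yp, yp))
rhs = P.polymul(sigma, sigma)
print("Pythagorean condition holds:", np.allclose(lhs, rhs))

# Integrate the hodograph to obtain the PH curve itself (a cubic here).
x = P.polyint(xp)
y = P.polyint(yp)
t = np.linspace(0.0, 1.0, 5)
print("curve samples:\n", np.column_stack([P.polyval(t, x), P.polyval(t, y)]))
```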

17.
The soft-clustered vehicle-routing problem (SoftCluVRP) extends the classical capacitated vehicle-routing problem by one additional constraint: The customers are partitioned into clusters and feasible routes must respect the soft-cluster constraint, that is, all customers of the same cluster must be served by the same vehicle. In this article, we design and analyze different branch-and-price algorithms for the exact solution of the SoftCluVRP. The algorithms differ in the way the column-generation subproblem, a variant of the shortest-path problem with resource constraints (SPPRC), is solved. The standard approach for SPPRCs is based on dynamic-programming labeling algorithms. We show that even with all the recent acceleration techniques (e.g., partial pricing, bidirectional labeling, decremental state space relaxation) available for SPPRC labeling algorithms, the solution of the subproblem remains extremely difficult. The main contribution is the modeling and solution of the subproblem using a branch-and-cut algorithm. The conducted computational experiments prove that branch-and-price equipped with this integer programming-based approach outperforms sophisticated labeling-based algorithms by one order of magnitude. The largest SoftCluVRP instances solved to optimality have more than 400 customers or more than 50 clusters.

18.
This paper summarizes the main results on approximate nonlinear programming algorithms investigated by the author. These algorithms are obtained by combining approximation and nonlinear programming algorithms. They are designed for programs in which the evaluation of the objective functions is very difficult so that only their approximate values can be obtained. Therefore, these algorithms are particularly suitable for stochastic programming problems with recourse. Project supported by the National Natural Science Foundation of China.

19.
The variational iteration method and the homotopy analysis method, as alternative methods, have been widely used to handle linear and nonlinear models. The main property of these methods is their flexibility and ability to solve nonlinear equations accurately and conveniently. This paper deals with the numerical solution of nonlinear fractional differential equations, where the fractional derivatives are considered in the Caputo sense. The main aim is to introduce efficient algorithms of the variational iteration and homotopy analysis methods that can be simply used to deal with nonlinear fractional differential equations. In these algorithms, Legendre polynomials are effectively implemented to achieve better approximation of the nonhomogeneous and nonlinear terms, which facilitates the computational work. The proposed algorithms are capable of reducing the size of calculations, improving the accuracy, and easily overcoming the difficulties arising in calculating complicated integrals. Numerical examples are examined to show the efficiency of the algorithms.
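For orientation, the Caputo fractional derivative referred to here is the standard one below; the statement is given for reference and is not taken from the paper.

```latex
% Caputo fractional derivative of order alpha, with n - 1 < alpha < n, n a positive integer:
\[
{}^{C}\!D^{\alpha} u(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, u^{(n)}(s)\, \mathrm{d}s ,
\qquad n-1 < \alpha < n , \quad n \in \mathbb{N} .
\]
% For 0 < alpha < 1 this reduces to
% (1 / Gamma(1 - alpha)) * \int_0^t (t-s)^{-alpha} u'(s) ds.
```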

20.
Many promising optimization algorithms for solving numerical optimization problems come from population-based metaheuristics. A few of them are based on Swarm-Intelligence Algorithms, which are inspired by the collective behavior of social organisms. One of the most successful of such algorithms is the Differential Ant-Stigmergy Algorithm (DASA), which uses stigmergy, a method of communication in emergent systems where the individual parts (artificial ants) of the system communicate with one another by modifying their local environment (pheromone intensity). The main characteristic of the DASA is its underlying structure (pheromone graph), which uses discrete steps to move through a continuous search space. As a consequence, movement through the search space is somewhat limited and the algorithm's time/space complexity is increased. To overcome this problem, an improved algorithm called the Continuous Differential Ant-Stigmergy Algorithm (CDASA) is proposed and then evaluated on standard benchmark functions. This benchmarking showed that the CDASA performs better than the DASA, especially at lower dimensions, that its time/space complexity is decreased, and that the algorithm code is simplified. As such, the CDASA is more suitable for parallel implementations on General-Purpose Graphic Processing Units. Compared to the Swarm-Intelligence Algorithms presented in this paper, the CDASA is the best-performing algorithm and is competitive with the state-of-the-art algorithms belonging to different metaheuristic approaches.
