Similar Documents
(20 similar documents found)
1.
Summary. An increasingly important problem in exploratory data analysis and visualization is that of scale; more and more data sets are much too large to analyze using traditional techniques, either in terms of the number of variables or the number of records. One approach to addressing this problem is the development and use of multiresolution strategies, where we represent the data at different levels of abstraction or detail through aggregation and summarization. In this paper we present an overview of our recent and current activities in the development of a multiresolution exploratory visualization environment for large-scale multivariate data. We have developed visualization, interaction, and data management techniques for effectively dealing with data sets that contain millions of records and/or hundreds of dimensions, and propose methods for applying similar approaches to extend the system to handle nominal as well as ordinal data.

2.
Summary. Increasing amounts of large climate data require new analysis techniques. The area of data mining investigates new paradigms and methods, including factors like scalability, flexibility, and problem abstraction for large data sets. The field of visual data mining in particular offers valuable methods for analyzing large amounts of data intuitively. In this paper we describe our approach of integrating cluster analysis and visualization methods for the exploration of climate data. We integrated cluster algorithms, appropriate visualization techniques, and sophisticated interaction paradigms into a general framework.

3.
Surface reconstruction from large unorganized data sets is very challenging, especially if the data present undesired holes. This is usually the case when the data come from laser scanner 3D acquisitions or if they represent damaged objects to be restored. An attractive field of research focuses on situations in which these holes are too geometrically and topologically complex to fill using triangulation algorithms. In this work a local approach to surface reconstruction from point-clouds based on positive definite Radial Basis Functions (RBF) is presented that progressively fills the holes by expanding the neighbouring information. The method is based on the algorithm introduced in [7] which has been successfully tested for the smooth multivariate interpolation of large scattered data sets. The local nature of the algorithm allows for real time handling of large amounts of data, since the computation is limited to suitable small areas, thus avoiding the critical efficiency problem involved in RBF multivariate interpolation. Several tests on simulated and real data sets demonstrate the efficiency and the quality of the reconstructions obtained using the proposed algorithm. AMS subject classification 65D17, 65D05, 65Y20
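As a rough illustration of the ingredient this abstract builds on, the sketch below assembles an interpolant from scattered samples using a positive definite, compactly supported (Wendland C2) radial basis function. It is a minimal sketch of plain RBF interpolation, not the local hole-filling algorithm of [7]; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 kernel (1 - r)_+^4 (4 r + 1): compactly supported and
    positive definite in up to three dimensions, so the collocation
    matrix built below is symmetric positive definite."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_fit(centers, values, support=0.5):
    """Solve the interpolation conditions s(x_j) = f_j for
    s(x) = sum_j c_j * phi(||x - x_j|| / support)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    coeffs = np.linalg.solve(wendland_c2(d / support), values)
    def s(x):
        r = np.linalg.norm(np.atleast_2d(x)[:, None, :] - centers[None, :, :], axis=-1)
        return wendland_c2(r / support) @ coeffs
    return s

# Toy usage: scattered samples of a smooth height field z = f(x, y)
pts = np.random.rand(200, 2)
z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])
s = rbf_fit(pts, z, support=0.5)
print(s(np.array([0.5, 0.5])))   # should be close to sin(1.5) * cos(1.5)
```

In a local scheme of the kind described above, such a fit would be computed only on small neighbourhoods around each hole, which keeps the linear systems small and avoids the efficiency and conditioning problems of one global solve.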

4.
The use of the maximum likelihood method for parameter estimation in a probabilistic path choice model used for road traffic networks is described. The method requires observation of mean travel costs in a given network rather than the traffic flows throughout the network, and thus data collection efforts may be reduced. A test network, for which many sets of artificial data were generated, is examined so that the usefulness of the parameter estimation procedure may be assessed. It is shown that the procedure is both convenient and quick, and that accurate estimates of model parameters may be obtained.

5.
6.
The use of object-oriented programming techniques in the development of parallel, finite element analysis software enhances code reuse and increases efficiency during application development. In this paper, an object-oriented programming framework developed by the authors is utilized in the implementation of parallel finite element software for modeling of the resin transfer molding manufacturing process. The motivation for choosing the resin transfer molding finite element application and implementing it with the object-oriented framework is that it was originally developed and parallelized in a functional programming paradigm, thus offering the possibility of direct comparisons. The software development effort and performance results are discussed and analyzed. Mathematics Subject Classifications (2000): 65M60, 65Y05.

7.
A finite element method is used for the computation of entropy solutions to the transonic full potential equation. Physically correct solutions with sharp and correctly placed shocks were obtained. (AMS (MOS) 1980 Mathematics subject classifications: 65N30, 76N15, 35M05, 76H05, 49D10, 35A40, 35L67.)

8.
Summary. A mixed finite element discretization is applied to Richards equation, a nonlinear, possibly degenerate parabolic partial differential equation modeling water flow through a porous medium. The equation is considered in its pressure formulation and includes both the variably and fully saturated flow regimes. Characteristic of such problems is the lack of regularity of the solution. To handle this we use a time-integrated scheme. We analyze the scheme and present error estimates showing its convergence. Mathematics Subject Classification (2000): 65M12, 65M60, 76S05, 35K65. Acknowledgments: We would like to thank Markus Bause for very useful discussions and suggestions.
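For reference, the pressure-head form of Richards equation is commonly written as follows (a standard textbook form, not necessarily the exact formulation analyzed in the paper):

```latex
\partial_t\,\theta(\psi) \;-\; \nabla\cdot\bigl(K(\theta(\psi))\,\nabla(\psi + z)\bigr) \;=\; 0,
```

where ψ is the pressure head, θ(ψ) the water content, K the hydraulic conductivity, and z the vertical coordinate. The equation degenerates where θ'(ψ) vanishes (the fully saturated regime), which is the loss of regularity that motivates the time-integrated scheme mentioned above.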

9.
Abstract

An important goal of visualization technology is to support the exploration and analysis of very large amounts of data. This article describes a set of pixel-oriented visualization techniques that use each pixel of the display to visualize one data value and therefore allow the visualization of the largest amount of data possible. Most of the techniques have been specifically designed for visualizing and querying large databases. The techniques may be divided into query-independent techniques that directly visualize the data (or a certain portion of it) and query-dependent techniques that visualize the data in the context of a specific query. Examples of the class of query-independent techniques are the screen-filling curve and recursive pattern techniques. The screen-filling curve techniques are based on the well-known Morton and Peano–Hilbert curve algorithms, and the recursive pattern technique is based on a generic recursive scheme, which generalizes a wide range of pixel-oriented arrangements for visualizing large data sets. Examples of the class of query-dependent techniques are the snake-spiral and snake-axes techniques, which visualize the distances with respect to a database query and arrange the most relevant data items in the center of the display. In addition to describing the basic ideas of our techniques, we provide example visualizations generated by the various techniques, which demonstrate their usefulness and show some of their advantages and disadvantages.
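As a hedged illustration of the query-independent screen-filling-curve idea, the sketch below lays out one data value per pixel along a Morton (Z-order) curve; the article covers Morton and Peano–Hilbert orderings, but the function names, image size, and rendering choice here are assumptions made for illustration.

```python
import numpy as np

def morton_xy(index, bits):
    """De-interleave a Morton (Z-order) index into (x, y) pixel coordinates."""
    x = y = 0
    for b in range(bits):
        x |= ((index >> (2 * b)) & 1) << b      # even bits give x
        y |= ((index >> (2 * b + 1)) & 1) << b  # odd bits give y
    return x, y

def pixel_layout(values, side=256):
    """Place one data value per pixel, following the Morton curve of a side x side image."""
    img = np.full((side, side), np.nan)
    bits = side.bit_length() - 1                # assumes side is a power of two
    for i, v in enumerate(values[: side * side]):
        x, y = morton_xy(i, bits)
        img[y, x] = v
    return img

# e.g. render with matplotlib: plt.imshow(pixel_layout(np.sort(data)))
img = pixel_layout(np.sort(np.random.rand(256 * 256)))
```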

10.
Summary. In this work we introduce and analyze a space-time discretization for the Primitive Equations of the Ocean. We use a reduced formulation of these equations which only includes the (3D) horizontal velocity and the (2D) surface pressure (cf. [19,20]). We use a semi-implicit backward Euler scheme for the time discretization. The spatial discretization in each time step is carried out through a Penalty Stabilized Method. This allows us to circumvent the use of pairs of spaces satisfying the inf-sup condition, thus aiming at a large saving in degrees of freedom. We prove stability estimates, from which we deduce weak convergence in two steps: first in space, to a semi-discretisation in time, and then in time, to the continuous problem. Finally, we show a numerical test in a real-life application: specifically, we simulate the wind-driven circulation in Lake Leman. Mathematics Subject Classification (2000): 65N30, 76M10, 86A05. Revised version received July 1, 2002. Research partially supported by Spanish Government Research Projects MAR97-1055-C02-02, REN2000-1162-C02-01 MAR, and REN2000-1168-C02-01 MAR.

11.
The goal of this paper is to construct data-independent optimal point sets for interpolation by radial basis functions. The interpolation points are chosen to be uniformly good for all functions from the associated native Hilbert space. To this end we collect various results on the power function, which we use to show that good interpolation points are always uniformly distributed in a certain sense. We also prove convergence of two different greedy algorithms for the construction of near-optimal sets which lead to stable interpolation. Finally, we provide several examples. AMS subject classification: 41A05, 41A63, 41A65, 65D05, 65D15. This work has been done with the support of the Vigoni CRUI-DAAD programme, for the years 2001/2002, between the Universities of Verona and Göttingen.
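A minimal sketch of a power-function-based greedy selection of interpolation points follows: at each step the candidate maximizing the power function is added to the set. The Gaussian kernel, the candidate set, and all names are assumptions for illustration, and the naive full solve per step is kept for clarity rather than efficiency.

```python
import numpy as np

def gauss_kernel(a, b, eps=2.0):
    """Gaussian kernel matrix K[i, j] = exp(-(eps * ||a_i - b_j||)^2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2)

def power_greedy(candidates, n_points, eps=2.0):
    """Greedily pick points by maximizing the power function
    P_X(x)^2 = K(x, x) - k(x)^T K_X^{-1} k(x) over a candidate set."""
    chosen = [0]                                     # arbitrary starting point
    for _ in range(n_points - 1):
        KX = gauss_kernel(candidates[chosen], candidates[chosen], eps)
        kx = gauss_kernel(candidates, candidates[chosen], eps)      # shape (N, m)
        # Squared power function at every candidate (K(x, x) = 1 for the Gaussian)
        p2 = 1.0 - np.einsum('ij,ij->i', kx, np.linalg.solve(KX, kx.T).T)
        p2[chosen] = -np.inf                         # never re-pick a chosen point
        chosen.append(int(np.argmax(p2)))
    return candidates[chosen]

pts = power_greedy(np.random.rand(2000, 2), n_points=30)
```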

12.
A finite element method for Burgers' equation is studied. The method is analyzed using techniques from stabilized finite element methods, and convergence to entropy solutions is proven under certain hypotheses on the artificial viscosity. In particular, we assume that a discrete maximum principle holds. We then construct a nonlinear artificial viscosity that satisfies the assumptions required for convergence and that can be tuned to minimize the artificial viscosity away from local extrema. The theoretical results are illustrated by a numerical example. AMS subject classification (2000): 65M20, 65M12, 35L65, 76M10.
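For orientation, a generic form of the problem is sketched below (notation assumed, not taken from the paper): the inviscid Burgers equation and its discrete counterpart with a nonlinear artificial viscosity ν_h,

```latex
u_t + \left(\tfrac{1}{2}u^2\right)_x = 0,
\qquad
u^h_t + \left(\tfrac{1}{2}(u^h)^2\right)_x
   - \partial_x\!\left(\nu_h(u^h)\,\partial_x u^h\right) = 0,
```

with ν_h ≥ 0 chosen large enough that a discrete maximum principle holds, yet small away from local extrema so that the scheme is not overly diffusive there.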

13.
Introduction. In previous works [1,2], we discussed several issues associated with the standard version of large eddy simulation (LES), such as filtering and averaging. By the standard version, we mean the traditional practice of first constructing a set of field equations of motion for turbulent motion and then discretizing the equations to suit computational simulations. In this paper, we revisit the issue of large eddy simulation with a view towards improving the filtering procedure that is used i…

14.
Summary. We consider the numerical computation of Taylor expansions of invariant manifolds around equilibria of maps and flows. These expansions are obtained by writing the corresponding functional equation at a number of points, setting up a nonlinear system of equations, and solving this system using a simplified Newton method. This approach avoids symbolic or explicit numerical differentiation. The linear algebra issues of solving the resulting Sylvester equations are studied in detail. Mathematics Subject Classification (1991): 65Q05, 65P, 37M, 65P30, 65F20, 15A69. Dedicated to Gerhard Wanner on the occasion of his 60th birthday. Acknowledgments: The authors would like to thank Olavi Nevanlinna for discussions and his suggestion to use complex evaluation points.
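In its simplest form, the functional (invariance) equation for a map x ↦ F(x) with a fixed point at the origin reads, for a manifold parametrization w and linear conjugate dynamics Λ (a generic sketch; the paper's precise setting may differ):

```latex
F\bigl(w(s)\bigr) = w(\Lambda s),
\qquad
w(s) = W s + \sum_{2 \le |\alpha| \le p} w_\alpha\, s^{\alpha},
```

where the columns of W span an invariant subspace of DF(0) and Λ is the restriction of DF(0) to that subspace. Evaluating the equation at a set of points s and solving for the coefficients w_α gives a nonlinear system of the kind described above, whose simplified Newton steps lead to Sylvester-type linear equations.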

15.
Abstract. We consider the isentropic compressible flow through a tiny pore. Our approach is to adapt the recent results by N. Masmoudi on the homogenization of compressible flows through porous media to our situation. The major difference is in the a priori estimates for the pressure field. We derive the appropriate ones, and then Masmoudi's results allow us to conclude the convergence. In this way the compressible Reynolds equation in lubrication theory is rigorously justified. Keywords: Compressible Navier-Stokes equations, Lubrication, Pressure estimates. Mathematics Subject Classification (2000): 35B27, 76M50, 35D05.

16.
A general construction of barycentric coordinates over convex polygons
Barycentric coordinates are unique for triangles, but there are many possible generalizations to convex polygons. In this paper we derive sharp upper and lower bounds on all barycentric coordinates over convex polygons and use them to show that all such coordinates have the same continuous extension to the boundary. We then present a general approach for constructing such coordinates and use it to show that the Wachspress, mean value, and discrete harmonic coordinates all belong to a unifying one-parameter family of smooth three-point coordinates. We show that the only members of this family that are positive, and therefore barycentric, are the Wachspress and mean value ones. However, our general approach allows us to construct several sets of smooth five-point coordinates, which are positive and therefore barycentric. Dedicated to Charles A. Micchelli on his 60th birthday. Mathematics subject classifications (2000): 26C15, 65D05.
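As a concrete instance of such polygon coordinates, the mean value coordinates of a point x strictly inside a convex polygon with vertices v_1, …, v_n are given by the standard formula (quoted here for reference, not taken from the paper):

```latex
\lambda_i(x) \;=\; \frac{w_i}{\sum_{j=1}^{n} w_j},
\qquad
w_i \;=\; \frac{\tan(\alpha_{i-1}/2) + \tan(\alpha_i/2)}{\lVert v_i - x\rVert},
```

where α_i denotes the angle at x in the triangle [x, v_i, v_{i+1}]. The Wachspress and discrete harmonic coordinates arise from the same normalized-weight construction with different weights w_i, which is the pattern the paper unifies into a one-parameter family.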

17.
The aim of the food industry is to supply consumers with safe, high-quality, and wholesome products. Recent developments in the industry, such as the elimination of preservatives, longer production runs, and frequent product changes, require a greater attention to product safety at the design stage. This places increasing demands upon the design of process equipment, and hygienic plant design in particular, to ensure the safety of products. This paper covers work carried out at Unilever Research over the last five years aimed at developing suitable theoretical methods for investigating design and operational aspects of hygienic plant. Such a method requires three things for it to be of practical benefit: good physical insights, modern modelling techniques, and modern computing power. Since a wide range of food-processing operations involve complex product flows, the core element of this approach has been the use of computational fluid dynamics (CFD), particularly the use of large commercial CFD codes. Moreover, the wide range of product rheologies encountered in the food industry has necessitated that these codes be flexible enough to model the physics of both Newtonian and non-Newtonian fluids. The resulting method should provide a valuable new tool to the hygienic-plant designer. The CFD approach has been used to investigate the interactions between product rheology, heat transfer, mass transfer, and equipment geometry. In particular, it is shown that the interaction between product rheology and equipment geometry has a large effect on the resulting flow pattern, with significant consequences for hygienic plant design, namely that plant designed for one product type may be unsuitable for use with others. This is illustrated by examples using dead-end and T-piece geometries in process systems.

18.
Fume and hygiene hoods are widely used to prevent fugitive emissions from charge ports, tap holes, and many other openings in mineral processing and smelting vessels. The highly buoyant nature of the fume, combined with often complex geometries, makes the design of these hoods difficult with traditional engineering tools. However, by combining the traditional engineering approach with computational fluid dynamics (CFD) techniques, a clear understanding of the shortfalls of an existing system can be obtained, and an optimised hood design can be achieved. This paper reports on a combined engineering and CFD analysis of a fume extraction system for a zinc slag fumer charge port. The engineering model revealed that the existing plant components (bag house and fan) were not capable of capturing the required amount of fume, and that the original hood design was flawed. The CFD model was then used to predict the fume capture and emission from the existing hood. CFD model predictions showed that increasing the draft flow rate by an order of magnitude would only give a marginal improvement in fume capture. Using the findings of both models enabled a new fume capture hood to be designed. CFD analysis of the new hood revealed that a significant improvement in fume capture is possible. Construction and installation of the hood has been performed, and a 65% reduction in fume emission was achieved, thus significantly mitigating a long-standing emission problem.

19.
In this paper we consider real-time multiserver (such as machine controllers) and multichannel (such as regions and assembly lines) systems involving maintenance. Our objective is to fit these systems into the framework of queueing models and thus to justify the use of the powerful analytical methods of queueing theory in the analysis of real-time systems. The main difficulty is that real-time systems, by their very nature, do not permit queues. To resolve this contradiction we use a dual approach in which we consider jobs as servers and servers as jobs. We adjust the traditional definition of availability for the real-time systems under consideration and show how to compute the system's availability when both service and maintenance times are exponentially distributed (a birth-and-death process). At this stage we restrict ourselves to a worst case (maximum load regime), which is most typical of high-performance data acquisition and control (production and military) systems.
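As a hedged illustration of the birth-and-death computation mentioned above, the sketch below obtains the stationary distribution of a finite birth-and-death chain and an availability-type quantity. The machine-repair rates and the definition of "available" are placeholders for illustration, not the paper's dual jobs-as-servers model.

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a finite birth-and-death chain.

    birth[k] is the rate k -> k+1 and death[k] the rate k+1 -> k, so the
    detailed-balance recursion gives pi[k+1] = pi[k] * birth[k] / death[k]."""
    pi = np.ones(len(birth) + 1)
    for k in range(len(birth)):
        pi[k + 1] = pi[k] * birth[k] / death[k]
    return pi / pi.sum()

# Illustrative numbers only: state k = number of servers down, N = 3 servers,
# failure rate lam per working server, one repairman with exponential rate mu.
N, lam, mu = 3, 0.1, 1.0
birth = [(N - k) * lam for k in range(N)]   # another server fails
death = [mu for _ in range(N)]              # one repair completes
pi = birth_death_stationary(birth, death)
availability = 1.0 - pi[-1]                 # probability that not all servers are down
print(pi, availability)
```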

20.
Summary. This paper is devoted to the numerical approximation of the solutions of a system of conservation laws arising in the modeling of two-phase flows in pipelines. The PDEs are closed by two highly nonlinear algebraic relations, namely, a pressure law and a hydrodynamic one. The severe nonlinearities encoded in these laws make the classical approximate Riemann solvers virtually intractable at a reasonable cost of evaluation. We propose a strategy for relaxing solely these two nonlinearities. The relaxation system we introduce is of course hyperbolic, but all associated eigenfields are linearly degenerate. Such a property not only makes it trivial to solve the Riemann problem but also enables us to enforce some further stability requirements, in addition to those coming from a Chapman-Enskog analysis. The new method turns out to be fairly simple and robust while achieving desirable positivity properties on the density and the mass fractions. Extensive numerical evidence is provided. Mathematics Subject Classification (1991): 76T10, 76N15, 35L65, 65M06.
