Similar Documents
20 similar documents found (search time: 15 ms)
1.
The deformation in polycrystals is often heterogeneous, e.g. due to grain-size dependent hardening. In a semi-analytical representative volume element (RVE), a log-normally distributed grain size is assumed together with grain-size dependent local plastic behavior. The numerical results are well approximated by a simple analytical expression. The effect of the homogenization comparison stiffness on the transient behavior is explained using a simplified localization equation. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
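The grain-size dependence described above can be sketched numerically. The log-normal grain size distribution comes from the abstract; the Hall-Petch-type strength relation, all parameter values, and the volume weighting are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.lognormal(mean=1.0, sigma=0.4, size=5000)   # grain sizes (micron)

def local_yield_stress(d, sigma0=100.0, k=250.0):
    """Grain-size dependent local strength in MPa; a Hall-Petch-type
    relation and its parameters are assumed here purely for illustration."""
    return sigma0 + k / np.sqrt(d)

sigma_y = local_yield_stress(d)
w = d**3 / np.sum(d**3)                  # volume weights (volume ~ d**3)
sigma_hom = float(np.sum(w * sigma_y))   # homogenized estimate
```

Smaller grains are stronger under this assumed relation, so the volume-weighted homogenized stress always lies between the local extremes.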

2.
A polycrystalline material is investigated under creep conditions within the framework of continuum micromechanics. The geometrical 3D model of the polycrystalline microstructure is a unit cell containing grains of random crystallographic orientation and shape. The planes separating neighboring grains in the unit cell may have non-zero thickness. The resulting geometry defines a special zone in the vicinity of the grain boundaries that possesses a disordered crystalline structure; the mechanical behavior of this zone should allow sliding of adjacent grains. Within the grain interior the crystalline structure is ordered, which prescribes cubic symmetry of the material. An anisotropic material model with orthotropic symmetry is implemented in ABAQUS and used to assign the elastic and creep behavior of both the grain interior and the grain boundary material. An appropriate parameter set allows the transition from orthotropy to cubic symmetry for the grain interior. Material parameters for the grain interior are identified from creep tests on single-crystal copper. Model parameters for the grain boundary are set from physical considerations, and the numerical model is validated against experimental data on grain boundary sliding in polycrystalline copper [2]. As a result of the analysis, a representative number of grains and a grain boundary thickness for the unit cell are recommended. (© 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

3.
In high-dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a popular nonparametric regression technique used to describe the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear functions for local fits and applies an adaptive procedure to select the number and location of breaking points (called knots). The function estimate is generated via a two-step procedure: forward selection and backward elimination. In the first step, a large number of local fits is obtained by selecting a large number of knots via a lack-of-fit criterion; in the second, the least-contributing local fits or knots are removed. In the conventional adaptive spline procedure, knots are selected from the set of all distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid this drawback, the knot points can be restricted to a subset of the data. In this context, a new method for knot selection is proposed, based on a mapping approach such as self-organizing maps. By this method, fewer but more representative data points become eligible to be used as knots for function estimation in the forward step of MARS. The proposed method is applied to many simulated and real datasets, and the results show that it yields a time-efficient forward step for knot selection and model estimation without degrading model accuracy or prediction performance.
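A minimal sketch of the mapping idea: a tiny one-dimensional self-organizing map condenses the data into a few representative prototypes, which could then serve as the reduced knot candidate set for the forward step. The unit count, learning rate, and neighborhood width are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def som_knots(x, n_units=10, n_epochs=20, lr=0.5, radius=2.0):
    """Condense 1-D data onto a small chain of SOM units; the trained
    unit weights serve as a reduced, representative knot candidate set.
    All hyperparameters here are illustrative assumptions."""
    rng = np.random.default_rng(1)
    w = np.linspace(x.min(), x.max(), n_units)       # initial unit weights
    units = np.arange(n_units)
    for epoch in range(n_epochs):
        a = lr * (1.0 - epoch / n_epochs)            # decaying learning rate
        for xi in rng.permutation(x):
            bmu = int(np.argmin(np.abs(w - xi)))     # best-matching unit
            h = np.exp(-(units - bmu) ** 2 / (2 * radius ** 2))
            w += a * h * (xi - w)                    # pull neighborhood to xi
    return np.sort(w)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 0.5, 200)])
knots = som_knots(x)
```

The forward step would then evaluate candidate basis functions only at these few prototypes instead of at every distinct data point.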

4.
The aim of this paper is to study the local and asymptotic behavior of Brownian motion on simply connected nilpotent Lie groups. We carry over a qualitative version of the Erdös-Rényi law of large numbers for Brownian motion to simply connected step-2 nilpotent Lie groups. The method applied yields a proof of qualitative results concerning the modulus of continuity of Brownian motion on simply connected step-3 and step-2 nilpotent Lie groups, respectively, without using the Ventsel-Freidlin theory as in Baldi.

5.
A new methodology for density estimation is proposed. The methodology, which builds on the one developed by Tabak and Vanden-Eijnden, normalizes the data points through the composition of simple maps. The parameters of each map are determined through the maximization of a local quadratic approximation to the log-likelihood. Various candidates for the elementary maps of each step are proposed; criteria for choosing among them include robustness, computational simplicity, and good behavior in high-dimensional settings. A good choice is that of localized radial expansions, which depend on a single parameter: all the complexity of arbitrary, possibly convoluted probability densities can be built through the composition of such simple maps. © 2012 Wiley Periodicals, Inc.
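The composition-of-simple-maps idea can be illustrated in one dimension. The exact parameterization of the localized radial expansion below is an assumption made for illustration; the key property is that each map stays strictly monotone (hence invertible), so densities transform cleanly under composition:

```python
import numpy as np

def radial_expansion(x, x0=0.0, a=0.3, s=1.0):
    """One localized radial expansion in 1-D: push points away from x0
    (a < 0 pulls them in) inside a Gaussian window of width s.  This
    exact parameterization is an illustrative assumption."""
    r = x - x0
    return x + a * r * np.exp(-r**2 / (2.0 * s**2))

grid = np.linspace(-5.0, 5.0, 1001)
y = radial_expansion(grid)
for c in (-2.0, 1.0, 3.0):          # compose several maps; each is monotone,
    y = radial_expansion(y, x0=c)   # so the composition stays invertible
```

For |a| small enough the derivative stays positive everywhere, and arbitrarily convoluted transports are built by stacking many such maps with different centers and signs.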

6.
Grinding is a very complex and highly dynamic material removal process with stochastically distributed grain engagements and strongly varying local contact conditions. For a long time only macroscopic effects were analyzed and predicted by empirical relations. To understand the dynamic behavior, local effects must also be considered. Therefore, the local contact conditions and especially the time-dependent friction coefficient are analyzed. One detected effect is the dependency of the friction coefficient on the normal forces and on their time history, so that a hysteresis loop occurs for increasing and decreasing values. With the force-dependent friction coefficient, local and dynamic effects become physically interpretable. In contrast, the global mean friction coefficient is constant over the entire force range and describes only quasi-stationary effects. (© 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
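The hysteresis effect can be mimicked with a toy model: a friction coefficient that depends on the normal force and on whether the force is currently increasing or decreasing. The functional form and every parameter value here are purely illustrative assumptions, not the identified model from the abstract:

```python
import numpy as np

def friction_coefficient(F, dF, mu0=0.3, c=-0.002, h=0.02):
    """Toy force-dependent friction coefficient with hysteresis: the
    loading branch (dF >= 0) lies below the unloading branch.  The
    functional form and all parameters are illustrative assumptions."""
    return mu0 + c * F + h * np.where(dF >= 0, -1.0, 1.0)

# Triangular normal-force ramp: load 0 -> 50 N, then unload back to 0.
F = np.concatenate([np.linspace(0.0, 50.0, 100), np.linspace(50.0, 0.0, 100)])
dF = np.gradient(F)
mu = friction_coefficient(F, dF)

# Area enclosed by the mu-F loop quantifies the hysteresis.
loop_area = abs(np.sum(0.5 * (mu[1:] + mu[:-1]) * np.diff(F)))
```

A constant global mean coefficient would enclose zero area; the nonzero loop area is what makes the force history visible in the model.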

7.
This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. The conditions to be satisfied at a solution are the usual Jacobian uniqueness conditions. Proofs of convergence rates are given for three kinds of step size rules: (i) the step size rule adopted by Zhang et al. in their convergence analysis of a primal-dual interior point method for linear programs, in which a single step size is used for the primal and dual variables; (ii) the step size rule used in the software package OB1, which uses different step sizes for the primal and dual variables; and (iii) the step size rule used by Yamashita in his globally convergent primal-dual interior point method for general constrained optimization problems, which also uses different step sizes for the primal and dual variables. Conditions on the barrier parameter and on the parameters in the step size rules are given for each case. For these step size rules, local and quadratic convergence of the Newton method and local and superlinear convergence of the quasi-Newton method are proved. A preliminary version of this paper was presented at the conference “Optimization-Models and Algorithms” held at the Institute of Statistical Mathematics, Tokyo, March 1993.
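A common way to compute separate primal and dual step sizes, as in rules (ii) and (iii), is the fraction-to-the-boundary rule. The sketch below illustrates that standard rule; the paper's exact rules may differ, and the vectors and τ = 0.995 are illustrative assumptions:

```python
import numpy as np

def max_step(x, dx, tau=0.995):
    """Fraction-to-the-boundary rule: the largest alpha in (0, 1] with
    x + alpha*dx >= (1 - tau)*x, keeping the iterate strictly positive."""
    neg = dx < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, tau * float(np.min(-x[neg] / dx[neg])))

x  = np.array([1.0, 0.5, 2.0])    # primal variables (must stay > 0)
dx = np.array([-2.0, 0.1, -1.0])  # primal Newton direction
z  = np.array([0.8, 1.5, 0.3])    # dual variables
dz = np.array([0.4, -3.0, -0.1])  # dual Newton direction

alpha_p = max_step(x, dx)         # separate step sizes, as in rule (ii)
alpha_d = max_step(z, dz)
```

Using distinct alpha_p and alpha_d lets one block of variables take a nearly full Newton step even when the other is close to its boundary.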

8.
The family of expectation-maximization (EM) algorithms provides a general approach to fitting flexible models for large and complex data. The expectation (E) step of EM-type algorithms is time-consuming in massive data applications because it requires multiple passes through the full data. We address this problem by proposing an asynchronous and distributed generalization of EM called the distributed EM (DEM). Using DEM, existing EM-type algorithms are easily extended to massive data settings by exploiting the divide-and-conquer technique and widely available computing power, such as grid computing. The DEM algorithm reserves two groups of computing processes, called workers and managers, for performing the E step and the maximization (M) step, respectively. The samples are randomly partitioned into a large number of disjoint subsets and stored on the worker processes. The E step of the DEM algorithm is performed in parallel on all the workers, and every worker communicates its results to the managers at the end of its local E step. The managers perform the M step after they have received results from a γ-fraction of the workers, where γ is a fixed constant in (0, 1]. The sequence of parameter estimates generated by the DEM algorithm retains the attractive properties of EM: convergence of the sequence of parameter estimates to a local mode and a linear global rate of convergence. Across diverse simulations focused on linear mixed-effects models, the DEM algorithm is significantly faster than competing EM-type algorithms while having similar accuracy. The DEM algorithm maintains its superior empirical performance on a movie ratings database consisting of 10 million ratings. Supplementary material for this article is available online.
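The worker/manager split can be sketched in a single process: workers hold disjoint shards and return E-step sufficient statistics, and the manager performs the M step from a γ-fraction of fresh results plus cached (stale) ones. The two-component 1-D Gaussian mixture and all settings are illustrative assumptions; a real DEM run is asynchronous across a cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from a two-component 1-D Gaussian mixture (an assumption).
x = np.concatenate([rng.normal(-2.0, 1.0, 3000), rng.normal(3.0, 1.0, 2000)])
shards = np.array_split(rng.permutation(x), 10)     # worker data subsets

def e_stats(xs, pi, mu, sd):
    """Worker-side E step: sufficient statistics (N_k, sum r*x, sum r*x^2)."""
    dens = pi * np.exp(-0.5 * ((xs[:, None] - mu) / sd) ** 2) / sd
    r = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
    return r.sum(axis=0), r.T @ xs, r.T @ xs**2

pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
cache = [e_stats(s, pi, mu, sd) for s in shards]    # one full E pass to start
gamma = 0.6
for _ in range(30):
    # The manager waits only for a gamma-fraction of workers and reuses
    # stale statistics from the rest (asynchrony is merely simulated here).
    for k in rng.choice(len(shards), int(gamma * len(shards)), replace=False):
        cache[k] = e_stats(shards[k], pi, mu, sd)
    N = sum(c[0] for c in cache)
    S1, S2 = sum(c[1] for c in cache), sum(c[2] for c in cache)
    pi, mu = N / N.sum(), S1 / N                    # manager-side M step
    sd = np.sqrt(np.maximum(S2 / N - (S1 / N) ** 2, 1e-8))
```

Because the M step only needs pooled sufficient statistics, reusing a few stale shard summaries changes the update slightly but not its fixed points.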

9.
Inexact Newton methods are variants of the Newton method in which each step satisfies the linear system only approximately (Ref. 1). The local convergence theory given by the authors of Ref. 1, and most of the results based on it, consider the error terms as arising only from the fact that the linear systems are not solved exactly. The few existing results for the general case (when perturbed linear systems are considered, which in turn are not solved exactly) do not offer explicit formulas in terms of the perturbations and residuals. We extend this local convergence theory to the general case, characterizing the rate of convergence in terms of the perturbations and residuals. The Newton iterations are then analyzed when, at each step, an approximate solution of the linear system is determined by the following Krylov solvers based on backward error minimization properties: GMRES, GMBACK, MINPERT. We obtain results concerning the following topics: monotone properties of the errors in these Newton–Krylov iterates when the initial guess is taken as 0 in the Krylov algorithms; control of the convergence orders of the Newton–Krylov iterations by the magnitude of the backward errors of the approximate steps; and similarities in the asymptotic behavior of GMRES and MINPERT when used in a converging Newton method. At the end of the paper, the theoretical results are verified on numerical examples.
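A minimal sketch of an inexact Newton iteration: the inner solver below is a plain residual-descent method stopped at a relative residual η (the forcing term), standing in for the Krylov solvers discussed in the paper. The 2-by-2 test problem is an illustrative assumption:

```python
import numpy as np

def approx_solve(J, b, eta):
    """Crudely solve J s = b by steepest descent on ||J s - b||^2,
    stopping once the relative residual satisfies the forcing
    condition ||J s - b|| <= eta * ||b||."""
    s = np.zeros_like(b)
    for _ in range(500):
        r = b - J @ s
        if np.linalg.norm(r) <= eta * np.linalg.norm(b):
            break
        g = J.T @ r                            # descent direction
        a = (g @ g) / (g @ (J.T @ (J @ g)))    # exact line search step
        s = s + a * g
    return s

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def Jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([1.0, 2.0])
for _ in range(20):
    x = x + approx_solve(Jac(x), -F(x), eta=0.1)   # inexact Newton step
```

With a constant forcing term η the outer iteration converges only linearly; driving η to zero (or tying it to ||F||) recovers the faster rates analyzed in the paper.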

10.
A new local smoothing procedure is suggested for jump-preserving surface reconstruction from noisy data. In a neighborhood of a given point in the design space, a plane is fitted by local linear kernel smoothing, giving the conventional local linear kernel estimator of the surface at that point. The neighborhood is then divided into two parts by a line passing through the given point and perpendicular to the gradient direction of the fitted plane. In the two parts, two half-planes are fitted, respectively, by local linear kernel smoothing, providing two one-sided estimators of the surface at the given point. Our surface reconstruction procedure then proceeds in two steps. First, the fitted surface is defined by one of the three estimators, i.e., the conventional estimator and the two one-sided estimators, depending on the weighted residual mean squares of the fitted planes. The fitted surface of this step preserves jumps well, but it is a bit noisy compared to the conventional local linear kernel estimator. Second, the estimated surface values at the original design points obtained in the first step are used as new data, and the above procedure is applied to these data in the same way, except that one of the three estimators is selected based on their estimated variances. Theoretical justification and numerical examples show that the fitted surface of the second step preserves jumps well and also removes noise efficiently. Apart from two window widths, the procedure introduces no other parameters. Its surface estimator has an explicit formula. All these features make it convenient to use and simple to compute.
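A one-dimensional analogue of the first step illustrates the estimator selection. On noiseless step data, at least one of the three windows never crosses the jump, so the selected fit is exact everywhere; the uniform kernel and the toy data are illustrative assumptions:

```python
import numpy as np

def fit_line(xs, ys, x0):
    """Unweighted least-squares line through (xs, ys); returns the fitted
    value at x0 and the residual mean square."""
    A = np.column_stack([np.ones_like(xs), xs - x0])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    resid = ys - A @ coef
    return coef[0], float(resid @ resid) / len(ys)

def jump_preserving_smooth(x, y, h):
    """1-D analogue of the procedure's first step with a uniform kernel:
    at each point compare the two-sided fit with left- and right-sided
    fits and keep the one with the smallest residual mean square."""
    out = np.empty_like(y)
    for i, x0 in enumerate(x):
        masks = [np.abs(x - x0) <= h,             # two-sided window
                 (x0 - h <= x) & (x <= x0),       # left-sided window
                 (x0 <= x) & (x <= x0 + h)]       # right-sided window
        fits = [fit_line(x[m], y[m], x0) for m in masks if m.sum() >= 3]
        vals, rms = zip(*fits)
        out[i] = vals[int(np.argmin(rms))]
    return out

x = np.linspace(0.0, 1.0, 101)
y = (x >= 0.5).astype(float)           # noiseless step: jump at x = 0.5
yhat = jump_preserving_smooth(x, y, 0.1)
```

Near the jump the two-sided fit has a large residual mean square, so a one-sided fit wins and the edge is not smeared; away from the jump all three fits agree.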

11.
Based on the stress transport model, a rate-dependent algebraic expression for the Reynolds stress tensor is developed. It is shown that the new model includes normal stress effects and exhibits viscoelastic behavior. Furthermore, it is compatible with recently developed improved models of turbulence. The model is also consistent with the limiting behavior of turbulence in the inertial sublayer and is capable of predicting secondary flows in noncircular ducts. The TEACH code is modified according to the requirements of the rate-dependent model and is used to predict turbulent flow fields in a channel and behind a backward-facing step. The predicted results are compared with the available experimental data and with those obtained from the standard k-ε and algebraic stress models. It is shown that the predictions of the new model are in better agreement with the experimental data.

12.
Many subsurface reservoirs compact or subside due to production-induced pressure changes. Numerical simulation of this compaction process is important for predicting and preventing well failure in deforming hydrocarbon reservoirs. However, development of sophisticated numerical simulators for coupled fluid flow and mechanical deformation modeling requires a considerable manpower investment. This development time can be shortened by loosely coupling pre-existing flow and deformation codes via an interface. Such codes have an additional advantage over fully coupled simulators in that fewer flow and mechanics time steps need to be taken to achieve a desired solution accuracy. Specifically, the length of time before a mechanics step is taken can be adapted to the rate of change in output parameters (pressure or displacement) for the particular application problem being studied. Comparing two adaptive methods (the local error method, a variant of Runge–Kutta–Fehlberg for solving ODEs, and the pore pressure method) to a constant step size scheme illustrates the considerable cost savings of adaptive time stepping for loose coupling. The methods are tested on a simple loosely coupled simulator modeling single-phase flow and linear elastic deformation. For the Terzaghi consolidation problem, the local error method achieves accuracy similar to the constant step size solution with only one third as many mechanics solves. The pore pressure method is an inexpensive adaptive method whose behavior closely follows the physics of the problem. The local error method, while a more general technique and therefore more expensive per time step, is able to achieve excellent solution accuracy overall.
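The accept/reject logic of a local-error-style controller can be sketched with an embedded pair on a scalar ODE. The Euler/Heun pair and the controller constants are illustrative assumptions, not the coupled-simulator implementation:

```python
import numpy as np

def adaptive_integrate(f, y0, t0, t1, h0=0.1, tol=1e-5):
    """Accept/reject step-size control in the spirit of the local error
    method: an embedded Euler/Heun pair estimates the local error and
    the step is rescaled by (tol/err)**(1/2) (Euler has order p = 1).
    The pair and the controller constants are illustrative assumptions."""
    t, y, h, steps = t0, y0, h0, 0
    while t1 - t > 1e-12:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_lo = y + h * k1               # explicit Euler (order 1)
        y_hi = y + h * (k1 + k2) / 2    # Heun (order 2)
        err = abs(y_hi - y_lo)          # local error estimate for Euler
        if err <= tol:                  # accept; advance with the better value
            t, y, steps = t + h, y_hi, steps + 1
        h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y, steps

y, steps = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 2.0)
```

In the loose-coupling setting the "step" being scheduled is the expensive mechanics solve, so growing h whenever the estimated error is small is where the reported savings come from.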

13.
Jaroslaw Chodor, Leon Kukielka, PAMM 2008, 8(1):10715–10716
Grinding is a very complicated process. To increase product quality and minimize the cost of abrasive machining, we should know the physical phenomena that occur during the process. The first step toward solving this problem is an analysis of the machining process with a single abrasive grain. In the paper [1] thermo-mechanical models of this process are presented, but in this work attention is concentrated on chip formation and its separation from the object for different velocities of the abrasive grain. The phenomena in a typical time step were described using a step-by-step incremental procedure with an updated Lagrangian formulation. Then the finite element method (FEM) and the dynamic explicit method (DEM) were used to obtain the solution. The application was developed in the ANSYS system, which makes possible a complex time analysis of the physical phenomena: the states of displacements, strains and stresses. Numerical computations of the strain have been conducted using two methodologies. The first requires the introduction of boundary conditions for displacements in the contact area, determined in a modeling investigation, while the second requires a proper definition of the contact zone, without the necessity of introducing boundary conditions in the contact area. Examples of calculations of the stress intensity in the surface layer zones are presented. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
In the Sparse Point Representation (SPR) method the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
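The interpolatory wavelet coefficients behind an SPR can be sketched for one level in 1-D with the cubic (Deslauriers-Dubuc) prediction stencil (-1, 9, 9, -1)/16: a coefficient is simply the error of predicting an odd-indexed sample from its four coarse neighbours. The test functions below are illustrative assumptions:

```python
import numpy as np

def detail_coefficients(f_fine):
    """Interpolatory wavelet details for one level: predict each interior
    odd-indexed sample from four coarse neighbours with the cubic
    Deslauriers-Dubuc stencil; the prediction error is the detail."""
    c = f_fine[::2]                     # coarse (even-indexed) samples
    pred = (-c[:-3] + 9*c[1:-2] + 9*c[2:-1] - c[3:]) / 16
    return f_fine[3:-2:2] - pred        # details at interior odd points

x = np.linspace(-1.0, 1.0, 65)
d_cubic = detail_coefficients(x**3 - x)   # cubic data: predicted exactly
d_kink  = detail_coefficients(np.abs(x))  # kink at 0: large details nearby
```

An SPR would retain only the points whose coefficient magnitude exceeds a threshold, so the smooth cubic data would be represented almost entirely by the coarse grid while the kink forces local refinement.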

15.
This paper presents DivClusFD, a new divisive hierarchical method for the non-supervised classification of functional data. Data of this type present the peculiarity that differences among clusters may be caused by changes in level as well as in shape. Different clusters may be separated in different subregions, and there may be no single subregion in which all clusters are separated. In each division step, the DivClusFD method explores the functions and their derivatives at several fixed points, seeking the subregion in which the highest number of clusters can be separated. The number of clusters is estimated via the gap statistic. The functions are assigned to the new clusters by combining the k-means algorithm with the use of functional boxplots to identify functions that have been incorrectly classified because of their atypical local behavior. The DivClusFD method provides the number of clusters, the classification of the observed functions into clusters, and guidelines that may be used for interpreting the clusters. A simulation study using synthetic data and tests of the performance of the DivClusFD method on real data sets indicate that the method is able to classify functions accurately.

16.
A new defect-correction method for the stationary Navier–Stokes equations based on local Gauss integration is considered in this paper. In both the defect step and the correction step, a locally stabilized technique based on the Gaussian quadrature rule is used. Moreover, stability and convergence of the presented method are deduced. Finally, we provide some numerical experiments that demonstrate the stability and effectiveness of the presented method. Copyright © 2012 John Wiley & Sons, Ltd.

17.
A variable step size control algorithm for the weak approximation of stochastic differential equations is introduced. The algorithm is based on embedded Runge–Kutta methods, which yield two approximations of different orders with negligible additional computational effort. The difference between these two approximations is used as an estimator for the local error of the less precise approximation. Some numerical results are presented to illustrate the effectiveness of the introduced step size control method.

18.
In this paper, we first present an adaptive nonmonotone term to improve the efficiency of nonmonotone line search, and then suggest an active set identification technique to obtain a more efficient descent direction, improving the local convergence behavior of the algorithm and decreasing the computational cost. By means of the adaptive nonmonotone line search and the active set identification technique, we put forward a globally convergent gradient-based method for solving nonnegative matrix factorization (NMF) within the alternating nonnegative least squares framework, in which we introduce a modified Barzilai-Borwein (BB) step size. The modified BB step size and a larger-step-size strategy are exploited to accelerate convergence. Finally, the results of extensive numerical experiments using both synthetic and image datasets show that the proposed method is efficient in terms of computational speed.
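One half of an alternating NNLS sweep with a BB step can be sketched as projected gradient. The safeguard, inner iteration count, and synthetic problem below are illustrative assumptions, not the paper's exact modified BB rule or its nonmonotone line search:

```python
import numpy as np

def bb_update(X, W, H, n_inner=5):
    """Projected-gradient update of H for fixed W with a safeguarded
    Barzilai-Borwein step; one half of an alternating NNLS sweep.
    The safeguard and inner iteration count are illustrative choices."""
    G = W.T @ (W @ H - X)
    L = max(np.linalg.norm(W.T @ W, 2), 1e-12)   # Lipschitz constant
    step = 1.0 / L
    for _ in range(n_inner):
        H_new = np.maximum(H - step * G, 0.0)    # project onto H >= 0
        G_new = W.T @ (W @ H_new - X)
        s, g = (H_new - H).ravel(), (G_new - G).ravel()
        sg = s @ g
        step = min((s @ s) / sg, 1e3 / L) if sg > 1e-12 else 1.0 / L
        H, G = H_new, G_new
    return H

rng = np.random.default_rng(0)
X = rng.random((20, 3)) @ rng.random((3, 10))    # nonnegative rank-3 data
W, H = rng.random((20, 3)), rng.random((3, 10))
err0 = np.linalg.norm(X - W @ H)
for _ in range(100):
    H = bb_update(X, W, H)                       # update H with W fixed
    W = bb_update(X.T, H.T, W.T).T               # update W via X^T = H^T W^T
err1 = np.linalg.norm(X - W @ H)
```

The BB step adapts to the local curvature of each quadratic subproblem, which is what makes it attractive as a cheap alternative to an exact line search.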

19.
Greedy Randomized Adaptive Search Procedures
Today, a variety of heuristic approaches are available to the operations research practitioner. One methodology that has a strong intuitive appeal, a prominent empirical track record, and is trivial to implement efficiently on parallel processors is GRASP (Greedy Randomized Adaptive Search Procedures). GRASP is an iterative randomized sampling technique in which each iteration provides a solution to the problem at hand. The incumbent solution over all GRASP iterations is kept as the final result. There are two phases within each GRASP iteration: the first intelligently constructs an initial solution via an adaptive randomized greedy function; the second applies a local search procedure to the constructed solution in the hope of finding an improvement. In this paper, we define the various components comprising a GRASP and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology are discussed. The paper concludes with a brief literature review of GRASP implementations and mentions two industrial applications.
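The two GRASP phases can be sketched on a toy 0-1 knapsack instance (the instance, RCL size, and neighborhood are illustrative assumptions): a value/weight-greedy randomized construction followed by an add-and-swap local search, keeping the incumbent across iterations:

```python
import random

values  = [10, 7, 5, 4, 3]     # toy 0-1 knapsack instance (an assumption)
weights = [4, 3, 2, 2, 1]
CAP = 7

def construct(rcl_size=2):
    """Phase 1: greedy randomized construction.  Repeatedly pick at random
    from the restricted candidate list (RCL) of best value/weight items."""
    sol, w = set(), 0
    while True:
        cand = [i for i in range(len(values))
                if i not in sol and w + weights[i] <= CAP]
        if not cand:
            return sol
        cand.sort(key=lambda i: values[i] / weights[i], reverse=True)
        pick = random.choice(cand[:rcl_size])
        sol.add(pick)
        w += weights[pick]

def local_search(sol):
    """Phase 2: improve by adding any fitting item or an improving 1-1 swap."""
    improved = True
    while improved:
        improved = False
        w = sum(weights[i] for i in sol)
        for i in range(len(values)):                    # additions
            if i not in sol and w + weights[i] <= CAP:
                sol.add(i)
                w += weights[i]
                improved = True
        for i in list(sol):                             # improving swaps
            if i not in sol:
                continue
            for j in range(len(values)):
                if (j not in sol and values[j] > values[i]
                        and w - weights[i] + weights[j] <= CAP):
                    sol.remove(i)
                    sol.add(j)
                    w += weights[j] - weights[i]
                    improved = True
                    break
    return sol

random.seed(0)
best, best_val = set(), -1
for _ in range(30):                                     # GRASP iterations
    s = local_search(construct())
    v = sum(values[i] for i in s)
    if v > best_val:                                    # keep the incumbent
        best, best_val = s, v
```

Each iteration is independent, which is why GRASP parallelizes trivially: processors simply run iterations concurrently and the best incumbent wins.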

20.
In this paper the effect of changing step size on the local discretization error of BDF and Adams type methods is considered. According to Shampine and Bogacki, the usual assumption for variable step size multistep methods of order p, namely that the local discretization error changes by a factor of θ^(p+1) as the step size changes by a factor of θ, is incorrect and may lead to unreliability in step size selection algorithms. Here, by using the true expression of the local discretization error for variable step size BDF, Adams and FLC methods, new algorithms for step size control are proposed. It is shown that the new algorithms are more accurate and reliable than those employed in the usual codes. To confirm the advantages of the new algorithms, some numerical experiments based on a modified version of EPISODE are presented.
