Similar Documents
20 similar documents found (search time: 31 ms)
1.
Manfred H. Ulz. PAMM 2014, 14(1):571-572
Hierarchical two-scale methods are computationally very powerful as there is no direct coupling between the macro- and microscale. Such schemes first develop a microscale model under macroscopic constraints; the macroscopic constitutive laws are then found by averaging over the microscale. The heterogeneous multiscale method (HMM) is a general top-down approach for the design of multiscale algorithms. While this method is mainly used for concurrent coupling schemes in the literature, the methodology also applies to hierarchical coupling. This contribution discusses a hierarchical two-scale setting based on the heterogeneous multiscale method for quasi-static problems: the macroscale is treated by continuum mechanics and the finite element method, and the microscale by statistical mechanics and molecular dynamics. Our investigation focuses on an optimised coupling of the solvers on the macro- and microscale, which yields a significant decrease in computational time with no associated loss in accuracy. In particular, the number of time steps used for the molecular dynamics simulation is adjusted at each iteration of the macroscopic solver. A numerical example demonstrates the performance of the model. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

2.
Hyperbolic two-step microscale heat transport equations have attracted attention in the thermal analysis of thin metal films exposed to ultrashort-pulsed lasers. Exploration of temperature-dependent thermal properties is necessary to advance our fundamental understanding of microscale (ultrafast) heat transport. In this article, we develop a finite difference scheme, by obtaining an energy estimate, for solving the hyperbolic two-step model with temperature-dependent thermal properties in a double-layered microscale thin film with nonlinear interfacial conditions irradiated by ultrashort-pulsed lasers. The method is illustrated by investigating the heat transfer in a gold layer on a chromium layer.
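The abstract does not reproduce the scheme itself. As a rough illustration of the finite-difference idea only, here is a minimal explicit (FTCS) step for single-field 1D heat conduction, u_t = α·u_xx; the paper's hyperbolic two-step model additionally couples electron and lattice temperatures and has nonlinear interface conditions. All values (`alpha`, grid and step sizes) are illustrative assumptions.

```python
def ftcs_step(u, alpha, dt, dx):
    """Advance one explicit time step; boundary values are held fixed.
    Stability of the explicit scheme requires r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# toy initial condition: a hot spot in the middle of a cold rod
u = [0.0] * 21
u[10] = 1.0
for _ in range(50):
    u = ftcs_step(u, alpha=1.0, dt=0.0005, dx=0.05)  # r = 0.2, stable
```

The hot spot diffuses symmetrically toward the cold boundaries, which is the qualitative behaviour any consistent scheme must reproduce.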

3.
The simulated annealing (SA) algorithm is a well-established optimization technique which has found applications in many research areas. However, the SA algorithm is limited in its application due to its high computational cost and the difficulty of determining the annealing schedule. This paper demonstrates that the temperature parallel simulated annealing (TPSA) algorithm, a parallel implementation of the SA algorithm, shows great promise in overcoming these limitations when applied to continuous functions. The TPSA algorithm greatly reduces the computational time due to its parallel nature, and avoids the determination of an annealing schedule by fixing the temperatures during the annealing process. The main contributions of this paper are threefold. First, it explains a simple and effective way to determine the temperatures by applying the concept of critical temperature (TC). Second, it presents systematic tests of the TPSA algorithm on various continuous functions, demonstrating performance comparable to well-established sequential SA algorithms. Third, it demonstrates the application of the TPSA algorithm to a difficult practical inverse problem, namely the hyperspectral tomography problem. The results and conclusions presented in this work are expected to be useful for the further development and expanded application of the TPSA algorithm.
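A minimal sketch of the TPSA idea described above: each "process" anneals at a fixed temperature and neighbouring temperatures periodically exchange solutions, in the style of parallel tempering. The test function, temperature ladder, move size and sweep count are illustrative assumptions, not the paper's settings.

```python
import math
import random

def tpsa(f, x0, temps, n_sweeps=200, step=0.5, seed=1):
    """Temperature-parallel SA: one chain per fixed temperature,
    with solution exchanges between adjacent temperatures."""
    rng = random.Random(seed)
    xs = [list(x0) for _ in temps]                 # one solution per chain
    for _ in range(n_sweeps):
        for k, T in enumerate(temps):              # local Metropolis moves
            cand = [xi + rng.uniform(-step, step) for xi in xs[k]]
            d = f(cand) - f(xs[k])
            if d < 0 or rng.random() < math.exp(-d / T):
                xs[k] = cand
        for k in range(len(temps) - 1):            # neighbour exchanges
            d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (f(xs[k]) - f(xs[k + 1]))
            if d >= 0 or rng.random() < math.exp(d):
                xs[k], xs[k + 1] = xs[k + 1], xs[k]
    return min(xs, key=f)

sphere = lambda x: sum(xi * xi for xi in x)        # toy continuous objective
best = tpsa(sphere, [3.0, 3.0], temps=[0.01, 0.1, 1.0])
```

Because the temperatures are fixed, no cooling schedule has to be tuned; the coldest chain exploits while the hottest explores, and the exchanges let good solutions migrate downward.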

4.
Our work extends multi-layered composite sphere models known from the literature to temperature-dependent elastic effects accompanied by curing. In particular, volumetric effective properties are obtained by homogenization over a representative unit cell (micro-RVE) of the heterogeneous microscale for thermo-chemo-mechanical coupling within linear elasticity. To this end, an analytical solution for an n-layered composite sphere model is derived. In a numerical study for a three-phase matrix it is demonstrated that the effective elastic and thermal properties lie within the Voigt and Reuss bounds, whilst for the chemical part of the model an analogous result is obtained for the effective strains. (© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
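The Voigt and Reuss bounds mentioned above are simply the arithmetic (upper) and harmonic (lower) volume averages of the phase moduli. A minimal sketch with assumed, illustrative phase data:

```python
def voigt_reuss_bounds(moduli, fractions):
    """Upper (Voigt) and lower (Reuss) bounds on an effective modulus
    from per-phase moduli and volume fractions summing to one."""
    assert abs(sum(fractions) - 1.0) < 1e-12
    voigt = sum(f * k for f, k in zip(fractions, moduli))        # arithmetic mean
    reuss = 1.0 / sum(f / k for f, k in zip(fractions, moduli))  # harmonic mean
    return reuss, voigt

# illustrative two-phase composite: stiff inclusions in a soft matrix
lo, hi = voigt_reuss_bounds(moduli=[70.0, 3.0], fractions=[0.4, 0.6])
```

Any admissible effective modulus for this phase data must lie in the interval [lo, hi], which is what the paper's numerical study verifies for its homogenized properties.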

5.
In this contribution, a model of concrete deterioration due to the alkali-silica reaction (ASR) is set up at the microscale. Based on three-dimensional micro computed tomography, a finite-element mesh is constructed at the micrometre length scale, and a 3D coupled chemo-thermo-mechanical model of the hardened cement paste (HCP) together with a computational homogenization of damage is addressed. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
The periodic capacitated arc routing problem (PCARP) is a challenging general model with important applications. The PCARP has two hierarchical optimization objectives: a primary objective of minimizing the number of vehicles (Fv) and a secondary objective of minimizing the total cost (Fc). In this paper, we propose an effective two-phase hybrid local search (HLS) algorithm for the PCARP. The first phase makes a particular effort to optimize the primary objective, while the second phase seeks to further optimize both objectives by using the number of vehicles obtained in the first phase as an upper bound to prune the search space. For both phases, combined local search heuristics are devised to ensure an effective exploration of the search space. Experimental results on 63 benchmark instances demonstrate that HLS performs remarkably well in terms of both computational efficiency and solution quality. In particular, HLS discovers 44 improved best known values (new upper bounds) for the total cost objective Fc while attaining all the known optimal values for the number-of-vehicles objective Fv. To our knowledge, this is the first PCARP algorithm to reach such a performance. Key components of HLS are analyzed to better understand their contributions to the overall performance.
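The hierarchical objective structure above (Fv strictly dominates, Fc breaks ties) is exactly a lexicographic comparison, which tuple ordering expresses directly; the solution values below are made-up illustrations:

```python
def better(sol_a, sol_b):
    """sol = (Fv, Fc): fewer vehicles always wins; total cost
    only decides between solutions with the same fleet size."""
    return sol_a < sol_b  # Python tuple comparison is lexicographic

assert better((3, 980.0), (4, 550.0))   # fewer vehicles beats lower cost
assert better((3, 500.0), (3, 980.0))   # same fleet size: lower cost wins
```

This is also why the first-phase fleet size is a valid pruning bound for the second phase: no solution using more vehicles can ever compare better, regardless of its cost.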

7.
Gaussian process models have been widely used in spatial statistics but face tremendous modeling and computational challenges for very large nonstationary spatial datasets. To address these challenges, we develop a Bayesian modeling approach using a nonstationary covariance function constructed from adaptively selected partitions. The partitioned nonstationary class allows one to knit together local covariance parameters into a valid global nonstationary covariance for prediction, where the local covariance parameters are estimated within each partition to reduce computational cost. To further facilitate the computations in local covariance estimation and global prediction, we use the full-scale covariance approximation (FSA) approach for the Bayesian inference of our model. One of our contributions is to model the partitions stochastically by embedding a modified treed partitioning process into the hierarchical model, which leads to automated partitioning and substantial computational benefits. We illustrate the utility of our method with simulation studies and the global Total Ozone Mapping Spectrometer (TOMS) data. Supplementary materials for this article are available online.

8.
A domain decomposition method (DDM) is presented to solve the distributed optimal control problem. The optimal control problem essentially couples an elliptic partial differential equation with respect to the state variable and a variational inequality with respect to the constrained control variable. The proposed algorithm, called the SA-GP algorithm, consists of two iterative stages. In the inner loops, the Schwarz alternating method (SA) is applied to solve for the state and co-state variables, and in the outer loops the gradient projection algorithm (GP) is adopted to obtain the control variable. Convergence of the iterations depends on both the outer and the inner loops, which are coupled and affect each other. In classical iteration algorithms, a given tolerance is reached only after sufficiently many iteration steps, and more iterations lead to huge computational cost; for constrained optimal control problems, most of this cost is spent solving PDEs. In this paper, a prescribed number of inner iterations, independent of the tolerance, is used in the inner loops to substantially reduce the computational cost. The convergence rate of the L2-error of the control variable is derived, together with an analysis of how to choose this inner iteration number. Some numerical experiments are performed to verify the theoretical results.
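The outer GP stage above moves the control along the negative gradient and projects back onto the admissible set. For box constraints the projection is a componentwise clamp, as in this sketch; the bounds, step size and quadratic toy objective are illustrative assumptions, not the paper's problem.

```python
def gp_step(u, grad, step, u_min, u_max):
    """One gradient-projection step for box-constrained controls:
    gradient descent followed by componentwise clamping to [u_min, u_max]."""
    return [min(max(ui - step * gi, u_min), u_max) for ui, gi in zip(u, grad)]

# toy problem: minimise 0.5*(u_i - 2)^2 per component subject to 0 <= u_i <= 1;
# the unconstrained minimiser u_i = 2 is infeasible, so the iterates
# converge to the active upper bound u_i = 1
u = [0.0, 0.5]
for _ in range(100):
    grad = [ui - 2.0 for ui in u]
    u = gp_step(u, grad, step=0.1, u_min=0.0, u_max=1.0)
```

In the full SA-GP algorithm each gradient evaluation requires inner Schwarz solves of the state and co-state PDEs, which is why capping the inner iteration count pays off.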

9.
In some organizational applications, the principle of allocation (PoA) and scale advantage (SA) oppose each other. While the PoA implies that organizations with wide niches get punished, SA holds that large organizations gain an advantage because of scale efficiencies. The opposition occurs because many large organizations also possess wide niches. However, analyzing these theoretical mechanisms implies a possible trade-off between niche width and size: if both PoA and SA are strong, then organizations must be either focused or large to survive, resulting in a dual market structure, as proposed by the theory of resource partitioning. This article develops a computational model to study this trade-off, and investigates the properties of organizational populations with low/high SA and low/high PoA. The model generates the three expected core “corner” solutions: (1) the dominance of large organizations in the strong-SA setting; (2) the proliferation of narrow-niche organizations in the strong-PoA setting; and (3) a bifurcated or dual market structure when both SA and PoA are present. The model also allows us to identify circumstances under which narrow-niche (specialist) or wide-niche (generalist) organizations thrive, to examine the claim that concentrated resource distributions are more likely to generate partitioned or bifurcated populations, and to investigate the consequences of environments composed of ordered versus unordered positions.

10.
This paper presents a planning/budgeting scheme for hierarchical systems. A multi-objective network optimization model for multilayer budget allocation is suggested. The network represents the hierarchical structure of the system, and the budget allocations are the flows in the network. Each component in the system (arc in the network) has lower and upper bounds. The model maximizes the additive utility function of the system, expressed as a weighted summation over the preferences of the system's components at the various levels. The preferences are evaluated using a multigoal approach based on the Analytic Hierarchy Process (AHP). Finally, the model is conceptually compared with other known budgeting procedures and models, such as ZBB, PPBS and cost-benefit analysis.
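AHP derives preference weights from a pairwise comparison matrix. The geometric-mean row approximation below is a common shortcut for the principal eigenvector, shown here as a hedged illustration; the 3x3 comparison matrix is made up, not taken from the paper.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the geometric mean of each row
    of the pairwise comparison matrix, normalised to sum to one."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# illustrative judgments: A is 3x preferred to B and 5x to C; B is 2x to C
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(A)  # normalised weights, ordered A > B > C
```

In the budgeting model, such weights would enter the additive utility function as the coefficients on each component's preference.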

11.
In this paper, we develop two revelation mechanism models of a supply chain consisting of one manufacturer and one retailer under asymmetric information, where the retailer provides store assistance (SA) to reduce the consumer returns rate and increase demand. Under full information, we find that a higher returns rate or returns handling cost increases the SA level if the market scale is sufficiently high. In the demand information asymmetry model, we find that: (i) the low-type retailer (facing a low demand) has no incentive to distort demand information while the high-type retailer may report wrong information; (ii) the manufacturer would like to design a menu of wholesale price-order quantity contracts to induce truthful demand information, and the manufacturer pays an information rent to the high-type retailer if the returns rate or returns handling cost for the retailer is sufficiently low; (iii) asymmetry of information does not change the monotonicity of the unit wholesale price in the retailer’s type, and information asymmetry decreases the retail price but increases the SA level. Unlike in the demand information asymmetry model, under the returns rate information asymmetry model a higher retailer’s returns handling cost expands the effects of information asymmetry on the retail price and the SA level, and using the revelation mechanism decreases the channel profit if the retailer’s returns handling cost is sufficiently high.

12.
Manfred H. Ulz. PAMM 2013, 13(1):175-176
Investigations into atomistic-to-continuum coupling have recently been pursued in the literature. Hierarchical modelling, in which a macroscale treated by continuum mechanics is combined with a microscale governed by statistical mechanics, can be a very fruitful combination. If the microscale is simulated with the help of molecular dynamics, the isostress-isoenthalpic ensemble proposed by Parrinello and Rahman is a beneficial choice. This statistical ensemble is remarkable in that its equations of motion are derived from a Lagrangian. Recently, this Lagrangian was placed in a continuum mechanics setting. This paper investigates the behavior of this continuum-related Lagrangian in a kinetics-driven setting (imposing an external stress) and a kinematics-driven setting (imposing the shape of the molecular dynamics cell) by means of a numerical example. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

13.
The main goal of this paper is to develop accuracy estimates for stochastic programming problems by employing stochastic approximation (SA) type algorithms. To this end we show that while running a Mirror Descent Stochastic Approximation procedure one can compute, with a small additional effort, lower and upper statistical bounds for the optimal objective value. We demonstrate that for a certain class of convex stochastic programs these bounds are comparable in quality with similar bounds computed by the sample average approximation method, while their computational cost is considerably smaller.
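As a rough illustration of plain SA with iterate averaging (the mirror-descent variant and the statistical bounds of the paper are more elaborate), here is a sketch on the toy stochastic program min_x E[(x - ξ)²] with ξ ~ Uniform(0, 1), whose optimum is x* = E[ξ] = 0.5. Step sizes and sample count are illustrative assumptions.

```python
import random

def averaged_sa(n_iter=5000, seed=3):
    """Robbins-Monro style SA on min_x E[(x - xi)^2] with a diminishing
    step size, returning the running average of the iterates."""
    rng = random.Random(seed)
    x, x_bar = 0.0, 0.0
    for t in range(1, n_iter + 1):
        xi = rng.random()
        grad = 2.0 * (x - xi)        # stochastic gradient of (x - xi)^2
        x -= grad / (t + 1)          # diminishing step size ~ 1/t
        x_bar += (x - x_bar) / t     # online average of the iterates
    return x_bar

x_hat = averaged_sa()
```

The averaged iterate is far less noisy than the raw one, which is the basic mechanism behind using SA runs to produce accuracy estimates.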

14.
In this paper, we discuss solving a cell formation (CF) problem under dynamic conditions using traditional metaheuristic methods, namely the genetic algorithm (GA), simulated annealing (SA) and tabu search (TS). Most previous research has been conducted under static conditions. Because CF is an NP-hard problem, solving the model with classical optimization methods requires long computational times. In this research, a nonlinear integer model of CF is first given and then solved by GA, SA and TS. The results are compared with the optimal solution and the efficiency of the proposed algorithms is discussed.
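For contrast with the fixed-temperature TPSA entry above, a minimal single-chain SA skeleton with a classical geometric cooling schedule; the bit-flip neighbourhood and the Hamming-distance toy objective are stand-ins for the paper's cell-formation encoding, not its actual model.

```python
import math
import random

def simulated_annealing(f, x0, T0=10.0, cooling=0.95, sweeps=100, seed=7):
    """Single-chain SA over binary vectors: flip one bit per move,
    accept worse moves with Metropolis probability, cool geometrically."""
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    T = T0
    for _ in range(sweeps):
        for _ in range(20):                       # moves per temperature level
            i = rng.randrange(len(x))
            cand = list(x)
            cand[i] = 1 - cand[i]                 # flip one bit
            d = f(cand) - f(x)
            if d < 0 or rng.random() < math.exp(-d / T):
                x = cand
            if f(x) < f(best):
                best = list(x)
        T *= cooling                              # geometric annealing schedule
    return best

# toy objective: Hamming distance to a target machine-cell assignment vector
target = [1, 0, 1, 1, 0, 0, 1, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
sol = simulated_annealing(f, [0] * 8)
```

In a real CF application the binary vector would encode machine-to-cell assignments and `f` would score intercell movements and cell-load measures.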

15.
This work provides a Markov-modulated stochastic approximation based approach for pricing American put options under a regime-switching geometric Brownian motion market model. The solutions of pricing American options may be characterized by certain threshold values. Here, a class of Markov-modulated stochastic approximation (SA) algorithms is developed to determine the optimal threshold levels. For option pricing in a finite horizon, an SA procedure is carried out for a fixed time T. As T varies, the optimal threshold values obtained via SA trace out a curve, called the threshold frontier. Numerical experiments are reported to demonstrate the effectiveness of the approach. Our approach provides a viable computational tool and has an advantage in terms of reduced computational complexity compared with the variational or quasi-variational inequality methods for optimal stopping. Communicated by C. T. Leondes. This research was supported in part by the National Science Foundation under Grant DMS-0304928, and in part by the National Natural Science Foundation of China under Grant 60574069.

16.
This paper presents a local-search heuristic, based on the simulated annealing (SA) algorithm, for a modified bin-packing problem (MBPP). The objective of the MBPP is to assign items of various sizes to a fixed number of bins, such that the sum-of-squared deviation (across all bins) from the target bin workload is minimized. This problem has a number of practical applications, which include the assignment of computer jobs to processors, the assignment of projects to work teams, and infinite-loading machine scheduling problems. The SA-based heuristic we developed uses a morph-based search procedure when looking for better allocations. In a large computational study we evaluated 12 versions of this new heuristic, as well as two versions of a previously published SA-based heuristic that used a completely random search. The primary performance measure for this evaluation was the mean percent above the best known objective value (MPABKOV). Since the MPABKOV associated with the best version of the random-search SA heuristic was more than 290 times larger than that of the best version of the morph-based SA heuristic, we conclude that the morphing process is a significant enhancement to SA algorithms for these problems.
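The MBPP objective described above is easy to state in code: square each bin's deviation from the common target workload and sum across bins. The item sizes and assignments below are illustrative, not from the paper's test instances.

```python
def mbpp_cost(sizes, assignment, n_bins):
    """Sum of squared deviations of bin workloads from the target
    workload, where the target is the average load per bin."""
    target = sum(sizes) / n_bins
    loads = [0.0] * n_bins
    for s, b in zip(sizes, assignment):
        loads[b] += s
    return sum((load - target) ** 2 for load in loads)

sizes = [4, 3, 3, 2, 2, 1]                                  # total 15, target 5
balanced = mbpp_cost(sizes, [0, 1, 2, 2, 1, 0], n_bins=3)   # loads 5/5/5
skewed = mbpp_cost(sizes, [0, 0, 0, 1, 1, 2], n_bins=3)     # loads 10/4/1
```

A local-search move (morph-based or random) would shift or swap items between bins and accept or reject the move based on the change in this cost.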

17.
In this paper, Ni/Al hybrid open-cell foams are characterized on different hierarchical levels, from the atomic to the microscale, by means of experiments and numerical modeling. This makes it possible to compare the elastic-plastic behavior at different scales in order to attain a deeper understanding of the multiscale properties of Ni/Al hybrid foams. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
In this paper, a flexible job shop scheduling problem with a new feature, overlapping in operations, is discussed. In many flexible job shops, a customer order can comprise more than one unit of each job, where the demand determines the quantity of each finished job ordered by a customer; in these models each job thus has a demand greater than one. This assumption is important and practical for many flexible job shops, such as those in the petrochemical industry. To capture it, we use a new approach, named overlapping in operations, in which successive operations of a job may overlap where their nature permits. The overlapping is limited by structural constraints, such as the dimensions of the box to be packed or the capacity of the container used to move the pieces from one machine to the next. Since this problem is NP-hard, a hierarchical approach using a simulated annealing (SA) algorithm is developed to solve large problem instances. Moreover, a mixed integer linear programming (MILP) formulation is presented. To evaluate the validity of the proposed SA algorithm, the results are compared with the optimal solutions obtained by a traditional optimization technique (the branch and bound method). The computational results validate the efficiency and effectiveness of the proposed algorithm, and show that considering overlapping can improve the makespan and machine utilization measures. The proposed algorithm can be applied easily in real factory conditions and to large problems, and should thus be useful to both practitioners and researchers.

19.
In this contribution, we address the computational treatment of transient diffusion problems with heterogeneous microstructures using first-order homogenization. We treat two different cases: first, when the transient part at the microscale can be neglected due to the vanishingly small size of the representative volume element (RVE), and second, when no steady state is reached at the microscale. The latter is the case when dealing with a relaxed version of the scale separation condition. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

20.
In recent years, there has been growing interest in uncertainty propagation, and a wide variety of uncertainty propagation methods exists in the literature. In this paper, an uncertainty propagation approach is developed by combining high-dimensional model representation (HDMR) with the dimension reduction (DR) method in the stochastic space, representing the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, from lower-order to higher-order component functions. To save computational cost, a dimension-adaptive version of the additive decomposition is proposed to detect the important component functions and reduce the number of terms. The proposed method requires neither the calculation of partial derivatives of the response, as in the commonly used Taylor expansion/perturbation methods, nor the inversion of random matrices, as in the Neumann expansion method. Two numerical examples show the efficiency and accuracy of the proposed method.
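The first-order (additive) term of such an expansion can be sketched as a cut-HDMR approximation: a constant g0 at a reference point plus univariate component functions along each input. The test function and reference point are illustrative assumptions; the paper's dimension-adaptive, DR-based expansion goes beyond this first-order sketch.

```python
def hdmr_first_order(g, ref):
    """Build the first-order cut-HDMR approximation of g around ref:
    g(x) ~ g0 + sum_i [ g(ref with x_i substituted) - g0 ]."""
    g0 = g(ref)
    def approx(x):
        total = g0
        for i, xi in enumerate(x):
            cut = list(ref)
            cut[i] = xi
            total += g(cut) - g0      # univariate component g_i(x_i)
        return total
    return approx

# additively separable toy model: the first-order expansion is exact here
g = lambda x: x[0] ** 2 + 3.0 * x[1] + 1.0
g_hat = hdmr_first_order(g, ref=[0.0, 0.0])
```

For models with interaction effects the first-order expansion is only approximate, which is where the adaptive detection of higher-order component functions comes in.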
