Similar Documents
20 similar documents found (search time: 15 ms)
1.
The purpose of this study is to describe the interfacial interactions, in terms of stress distributions, on short fibers in fiber-matrix unit-cell models. The fiber and matrix are subjected to tensile loading. The study consists of three main parts. First, fiber-matrix cell segments are modeled using a 3D finite-element analysis (FEA) with ANSYS. Three different finite-element geometrical unit-cell models are generated in order to simulate the Cox analytical model: a fiber-matrix combination, a single fiber, and a single matrix element. The second part contains the results of the 3D FE analyses, which are applied to the Cox formulations using a purpose-developed computer program. In the last part, the analytical solutions for the distributions of normal and shear stresses are investigated. The Cox 2D linear-elasticity solutions, together with the finite-element ones, are presented in detail in graphs. The interfacial interactions between the fibers and the matrix are also discussed, considering the relative changes in the distributions of normal and shear stresses. Russian translation published in Mekhanika Kompozitnykh Materialov, Vol. 44, No. 4, pp. 505–520, July–August, 2008.
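As a hedged illustration of the Cox shear-lag expressions referred to above, the sketch below evaluates the classical Cox solution for the axial fiber stress and the interfacial shear stress along a short fiber; all material values and dimensions are illustrative placeholders, not parameters from the study.

```python
import numpy as np

def cox_stresses(x, E_f, G_m, r, R, L, eps):
    """Classical Cox shear-lag solution for a short fiber centred at x = 0:
    axial fiber stress sigma(x) and interfacial shear stress tau(x)."""
    beta = np.sqrt(2.0 * G_m / (E_f * r**2 * np.log(R / r)))
    sigma = E_f * eps * (1.0 - np.cosh(beta * x) / np.cosh(beta * L / 2.0))
    tau = 0.5 * E_f * eps * r * beta * np.sinh(beta * x) / np.cosh(beta * L / 2.0)
    return sigma, tau

# illustrative values: E_f, G_m in MPa; r, R, L in mm; eps = applied strain
x = np.linspace(-0.25, 0.25, 101)
sigma, tau = cox_stresses(x, E_f=70e3, G_m=1.3e3, r=0.007, R=0.07, L=0.5, eps=0.01)
```

The axial stress vanishes at the fiber ends and peaks at mid-length, while the shear stress does the opposite, which is the qualitative normal/shear interaction pattern the abstract discusses.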

2.
The relative merits of different parametric models for making life-expectancy and annuity-value predictions at both pensioner and adult ages are investigated. This study builds on current published research and considers recent model enhancements and the extent to which these enhancements address the deficiencies that have been identified in some of the models. The England & Wales male mortality experience is used to conduct detailed comparisons at pensioner ages, having first established a common basis for comparison across all models. The model comparison is then extended to include the England & Wales female experience and both the male and female USA mortality experiences over a wider age range, encompassing also the working ages.
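The quantities being predicted can be illustrated with a hedged sketch: given a vector of one-year death probabilities (an invented toy schedule below, not the England & Wales or USA data), the curtate life expectancy and a whole-life annuity-due value both follow from cumulative survival probabilities.

```python
import numpy as np

def life_expectancy_and_annuity(q, i=0.04):
    """Curtate life expectancy and whole-life annuity-due value from a
    vector of one-year death probabilities q[0], q[1], ... (ending in 1)."""
    q = np.asarray(q, dtype=float)
    p = np.cumprod(1.0 - q)                  # t-year survival probabilities
    e = float(p.sum())                       # curtate expectation of life
    v = 1.0 / (1.0 + i)                      # discount factor at rate i
    tpx = np.concatenate(([1.0], p[:-1]))    # survival to the start of each year
    a_due = float((v ** np.arange(len(tpx)) * tpx).sum())
    return e, a_due

q_toy = [0.01, 0.02, 0.05, 0.10, 0.20, 0.40, 1.0]   # invented schedule
e_toy, ann_toy = life_expectancy_and_annuity(q_toy, i=0.04)
```

A parametric mortality model's role in such a comparison is to supply the q values; the annuity value is the discount-weighted analogue of the life expectancy.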

3.
Gaussian geostatistical models (GGMs) and Gaussian Markov random fields (GMRFs) are two distinct approaches commonly used in spatial statistics for modeling point-referenced and areal data, respectively. In this paper, the relations between GGMs and GMRFs are explored through approximations of GMRFs by GGMs and of GGMs by GMRFs. Two new metrics of approximation are proposed: (i) the Kullback-Leibler discrepancy of spectral densities and (ii) the chi-squared distance between spectral densities. The distances between the spectral density functions of GGMs and GMRFs measured by these metrics are minimized to obtain the approximations. The proposed methodologies are validated through several empirical studies. We compare the performance of our approach with that of methods based on covariance functions, in terms of the average mean squared prediction error and the computational time. A spatial analysis of a PM2.5 dataset collected in California illustrates the proposed method.
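A minimal sketch of the first proposed metric, under the simplifying assumptions that the spectral densities are one-dimensional, evaluated on a frequency grid, and normalized to integrate to one (the paper's exact formulation may differ):

```python
import numpy as np

def kl_spectral(f1, f2, dw):
    """KL discrepancy between two spectral densities, each normalized to a
    probability density over the frequency grid of spacing dw."""
    p = f1 / (f1.sum() * dw)
    q = f2 / (f2.sum() * dw)
    return float(np.sum(p * np.log(p / q)) * dw)

w = np.linspace(-np.pi, np.pi, 513)
dw = w[1] - w[0]
f_target = 1.0 / (1.0 + w**2)                 # stand-in "GGM" spectral density
# approximate it within a one-parameter family by minimizing the discrepancy
thetas = np.linspace(0.1, 3.0, 60)
best = min(thetas, key=lambda th: kl_spectral(f_target, 1.0 / (1.0 + (w / th)**2), dw))
```

The second metric could be realized analogously as a discretized chi-squared distance, e.g. `np.sum((p - q)**2 / q) * dw`.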

4.
Computer traffic simulation models are valuable tools for the design and deployment of Intelligent Transportation Systems (ITS). Simulations of traffic flow can be used for the analysis and assessment of potential ITS technologies. Using simulations, alternative systems can be tested under identical conditions, so that the effects of oversaturated conditions, spillback, queuing, and overlapping bottlenecks can be measured. The Federal Highway Administration (FHWA) microscopic traffic simulation models, NETSIM, FRESIM, and CORSIM, are regarded as highly comprehensive but somewhat difficult to use. A graphics processor, TRAFVU, has recently been developed for analyzing the output of these microscopic models. TRAFVU was designed to support direct comparison of alternatives, facilitating design and evaluation. Applications of the CORSIM traffic simulation model and the TRAFVU graphics processor to interchange design and to the development of incident-management strategies are presented.

5.
Three mathematical models of different levels of sophistication have been used to study a practical problem of underground heat and fluid flow, associated with the seasonal storage of hot water in an aquifer. A number of scenarios have been examined using the three models. For the basic problem the three models yield similar results, so use of the simplest is preferred. For several variations on the problem, only the more complicated models are adequate to properly address the problem. In general, the choice of an appropriate model is very problem-specific and requires not only experience with modelling methods, but also an understanding of the physics of the problem.

6.
Proceedings - Mathematical Sciences - The interaction of a number of alkali, alkaline earth and transition element cations with solvents like triethyl phosphate, dimethyl sulphoxide and dimethyl...

7.
In this research, novel epidemic models based on fractional calculus are developed by utilizing the Caputo and Atangana-Baleanu (AB) derivatives. These models integrate vaccination effects, additional safety measures, home and hospital isolation, and treatment options. Fractional models are particularly significant as they provide a more comprehensive understanding of epidemic diseases and can account for non-locality and memory effects. Equilibrium points of the model are calculated, including the disease-free and endemic equilibrium points, and the basic reproduction number R0 is computed using the next-generation matrix approach. Results indicate that the epidemic becomes endemic when R0 is greater than unity, and it goes extinct when it is less than unity. The positiveness and boundedness of the solutions of the model are verified. The Routh-Hurwitz technique is utilized to analyze the local stability of the equilibrium points. The Lyapunov function and LaSalle's principle are used to demonstrate the global stability of the equilibrium points. Numerical schemes are proposed, and their validity is established by comparing them to the fourth-order Runge-Kutta (RK4) method. Numerical simulations are performed using the Adams-Bashforth-Moulton predictor-corrector algorithm for the Caputo time-fractional derivative and the Toufik-Atangana numerical technique for the AB time-fractional derivative. The study looks at how the quarantine policy affects different human population groups. On the basis of these findings, a strict quarantine policy voluntarily implemented by an informed population can help reduce the pandemic's spread. Additionally, vaccination efforts become a crucial tool in the fight against diseases. We can greatly lower the number of susceptible people and develop a shield of immunity in the population by guaranteeing common access to vaccinations and boosting vaccination awareness. Moreover, graphical representations of the fractional models are also developed.
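The threshold behaviour at R0 = 1 can be illustrated with the classical integer-order SIR analogue (not the paper's fractional Caputo/AB models) integrated with RK4; for this simple model the next-generation matrix gives R0 = beta/gamma, and the parameter values below are illustrative.

```python
import numpy as np

def simulate_sir(beta, gamma, I0=1e-3, T=200.0, dt=0.05):
    """Classical SIR model integrated with RK4; the next-generation matrix
    for this model gives R0 = beta / gamma."""
    def f(y):
        S, I, R = y
        return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])
    y = np.array([1.0 - I0, I0, 0.0])
    peak = y[1]
    for _ in range(int(T / dt)):
        k1 = f(y); k2 = f(y + dt / 2 * k1)
        k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        peak = max(peak, y[1])
    return peak, y

peak_hi, _ = simulate_sir(beta=2.5, gamma=1.0)   # R0 = 2.5 > 1: outbreak
peak_lo, _ = simulate_sir(beta=0.5, gamma=1.0)   # R0 = 0.5 < 1: dies out
```

When R0 < 1 the infected compartment only decays, so the epidemic never takes off; when R0 > 1 a substantial peak occurs, mirroring the endemic/extinction dichotomy stated in the abstract.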

8.
Two kernel-based approaches to discriminant analysis are considered: the traditional one, where kernels are used to estimate the distribution of the predictor variables within each group, and a direct kernel method, where kernels are used to estimate the a posteriori probabilities directly. For both approaches, the cross-validatory choice of smoothing parameters is based on various loss functions that are directly connected to the separation of the groups. A comparison with parametric models shows the improvement gained by the more flexible kernel approaches.
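The two approaches can be sketched as follows for a single predictor, assuming group labels 0, ..., G-1 and a Gaussian kernel (both assumptions are mine, not taken from the paper):

```python
import numpy as np

def gauss_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def classify_class_kde(x, X, y, h):
    """Approach 1: a kernel density estimate per group, combined with the
    group priors via Bayes' rule (labels assumed to be 0, ..., G-1)."""
    scores = []
    for g in np.unique(y):
        Xg = X[y == g]
        prior = len(Xg) / len(X)
        dens = gauss_kernel((x - Xg) / h).mean() / h
        scores.append(prior * dens)
    return int(np.argmax(scores))

def classify_direct(x, X, y, h):
    """Approach 2: Nadaraya-Watson estimate of P(group = 1 | x) directly."""
    w = gauss_kernel((x - X) / h)
    return int((w * (y == 1)).sum() / w.sum() > 0.5)

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])
y = np.array([0] * 50 + [1] * 50)
```

The smoothing parameter h is what the cross-validatory loss functions mentioned above would choose; here it is simply fixed for illustration.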

9.
The solution of the integrable (n+1)-dimensional KdV system in bilinear form yields a dromion solution that is localized in all directions. The interactions between two dromions are studied both analytically and numerically for three (n+1)-dimensional KdV-type equations (n = 1, 2, 3). The same interactive properties between two dromions (solitons) are revealed for these models. The interactions between two dromions (solitons) may be elastic or inelastic for different forms of solutions.

10.
This paper deals with the problem of choosing the optimum criterion to select the best of a set of nested binary choice models. Special attention is given to procedures derived in a decision-theoretic framework, called model selection criteria (MSC). We propose a new criterion, which we call C2, whose theoretical behaviour is compared with that of the AIC and SBIC criteria. The theoretical study shows that the SBIC is the best criterion whatever the situation we consider, while the AIC and C2 are only adequate in some cases. The Monte Carlo experiment that is carried out corroborates the theoretical results and adds others: finite-sample behaviour and robustness to changes in some aspects of the data-generating process. The classical hypothesis-testing procedures LR and LM are included and compared with the three criteria of the MSC category. The authors wish to thank the Spanish Department of Education for financial support under project BEC 2003-01757.
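The flavour of the comparison can be sketched with the standard AIC and SBIC formulas applied to two nested logit models fitted by Newton-Raphson; the C2 criterion is not reproduced here, and the data below are simulated purely for illustration.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson fit of a logit model (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = (X * (p * (1 - p))[:, None]).T @ X      # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

def loglik(beta, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def aic(ll, k):
    return -2.0 * ll + 2.0 * k

def sbic(ll, k, n):
    return -2.0 * ll + k * np.log(n)

rng = np.random.default_rng(3)
x = rng.normal(size=400)
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * x)))
yy = rng.binomial(1, p_true).astype(float)
X_small = np.ones((400, 1))                       # intercept only
X_full = np.column_stack([np.ones(400), x])       # intercept + regressor
ll_s = loglik(fit_logit(X_small, yy), X_small, yy)
ll_f = loglik(fit_logit(X_full, yy), X_full, yy)
```

SBIC's penalty grows with log n while AIC's is constant in n, which is the source of their diverging behaviour across sample sizes.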

11.
Models of environmental processes must often be constructed without the use of extensive data sets. This can occur because the exercise is preliminary (aimed at guiding future data collection) or because the requisite data are extremely difficult, expensive, or even impossible to obtain. In such cases traditional, statistically based methods for estimating parameters in the model cannot be applied; in fact, parameter estimation cannot be accomplished in a rigorous way at all. We examine the use of a regionalized sensitivity analysis procedure to select appropriate values for parameters in cases where only sparse, imprecise data are available. The utility of the method is examined in the context of equilibrium and dynamic models for describing water-quality and hydrological data in a small catchment in Shenandoah National Park, Virginia. Results demonstrate that (1) models can be “tentatively calibrated” using this procedure; (2) the data most likely to provide a stringent test of the model can be identified; and (3) potential problems with model identifiability can be exposed in a preliminary analysis.
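A hedged sketch of a regionalized sensitivity analysis in the Hornberger-Spear style: sample parameters by Monte Carlo, split runs into behavioural and non-behavioural sets, and compare the marginal parameter distributions with a Kolmogorov-Smirnov statistic. The toy model and the behaviour criterion below are invented for illustration.

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic between marginal samples."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.abs(Fa - Fb).max())

rng = np.random.default_rng(1)
n = 2000
k1 = rng.uniform(0.0, 2.0, n)                  # influential rate parameter
k2 = rng.uniform(0.0, 2.0, n)                  # near-inert parameter
output = np.exp(-k1) + 0.01 * k2               # toy model response
behavioural = (output > 0.3) & (output < 0.7)  # invented "behaviour" criterion
d1 = ks_distance(k1[behavioural], k1[~behavioural])
d2 = ks_distance(k2[behavioural], k2[~behavioural])
```

A large KS distance flags a parameter that the sparse data do constrain (it can be "tentatively calibrated"), while a small one exposes an identifiability problem.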

12.
We consider the problem of scheduling operations in a robotic cell processing a single part type. Each machine in the cell has a one-unit input buffer and a one-unit output buffer. The machines and buffers are served by a single one-gripper robot. The domain considered is free-pickup cells with additive inter-machine travel times. The processing constraints specify the cell to be a flow shop. The objective is to find a cyclic sequence of robot moves that minimizes the long-run average time to produce a part or, equivalently, maximizes the throughput. Bufferless robotic cells have been studied extensively in the literature. However, the few studies of robotic cells with output buffers at each machine have shown that throughput can be improved by such a configuration. We show that there is no throughput advantage in providing machine input buffers in addition to output buffers. The equivalence in throughput between the two models has significant practical implications, since the cost of providing additional buffers at each machine is substantial.

13.
Several approaches to robust canonical correlation analysis are presented and discussed. A first method is based on the definition of canonical correlation analysis as the search for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance-matrix estimates. A simulation study compares the performance of the different estimators under several kinds of sampling schemes. Robustness is studied as well, by means of breakdown plots.

14.
In order to understand the role of crossover in differential evolution, a theoretical analysis and a comparative study of crossover in differential evolution are presented in this paper. Two new crossover methods, namely consecutive binomial crossover and non-consecutive exponential crossover, are designed. The probability distribution and expectation of the crossover length for the binomial and exponential crossover used in this paper are derived. Various differential evolution algorithms with different crossover methods, including mutation-only differential evolution, are comprehensively compared at the system level instead of the parameter level. Based on the theoretical analysis and simulation results, the effect of crossover on the reliability and efficiency of differential evolution algorithms is discussed, and some insights are revealed.
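For the standard (Storn-Price) binomial and exponential crossover variants — not necessarily the consecutive/non-consecutive variants introduced in the paper — the expected crossover lengths are well known and can be checked by simulation:

```python
import numpy as np

def exp_crossover_length(n, CR, rng):
    """Number of mutant components inherited by one trial vector under the
    standard exponential crossover (one component is always inherited)."""
    L = 1
    while L < n and rng.random() < CR:
        L += 1
    return L

def expected_exp_length(n, CR):
    return (1.0 - CR**n) / (1.0 - CR)   # truncated-geometric expectation

def expected_bin_length(n, CR):
    return 1.0 + (n - 1) * CR           # binomial crossover with one forced index

rng = np.random.default_rng(2)
n, CR = 10, 0.7
sim = np.mean([exp_crossover_length(n, CR, rng) for _ in range(200_000)])
```

At equal CR the exponential variant inherits far fewer components on average than the binomial one (about 3.24 versus 7.3 here), which is one reason comparisons "at the parameter level" can mislead.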

15.
A comparative study of Artificial Bee Colony algorithm
The Artificial Bee Colony (ABC) algorithm is one of the most recently introduced swarm-based algorithms. ABC simulates the intelligent foraging behaviour of a honeybee swarm. In this work, ABC is used to optimize a large set of numerical test functions, and the results produced by the ABC algorithm are compared with those obtained by the genetic algorithm, particle swarm optimization, differential evolution, and evolution strategies. Results show that the performance of ABC is better than or similar to that of the other population-based algorithms, with the advantage of employing fewer control parameters.
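A minimal, hedged sketch of the ABC loop (employed, onlooker, and scout phases) on the sphere test function; the fitness-to-probability mapping and all control parameters below are common textbook choices, not necessarily those used in the study.

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=20, limit=50, iters=200, seed=0):
    """Minimal ABC sketch: employed, onlooker, and scout phases with greedy
    selection; assumes f(x) >= 0 so that fitness 1/(1+f) is well defined."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for phase in range(2):               # 0: employed, 1: onlooker
            if phase == 1:
                p_sel = 1.0 / (1.0 + fit)    # roulette-wheel source selection
                p_sel = p_sel / p_sel.sum()
            for i in range(n_food):
                j = i if phase == 0 else rng.choice(n_food, p=p_sel)
                k = rng.integers(n_food)     # random partner (may equal j here)
                d = rng.integers(dim)        # perturb one dimension
                cand = foods[j].copy()
                cand[d] += rng.uniform(-1.0, 1.0) * (foods[j][d] - foods[k][d])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fit[j]:
                    foods[j], fit[j], trials[j] = cand, fc, 0
                else:
                    trials[j] += 1
        worn = trials > limit                # scouts replace exhausted sources
        foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [f(x) for x in foods[worn]]
        trials[worn] = 0
    b = int(np.argmin(fit))
    return foods[b], fit[b]

best_x, best_f = abc_minimize(lambda x: float(np.sum(x**2)), dim=5, bounds=(-5.0, 5.0))
```

Note that only the colony size, the abandonment limit, and the iteration budget appear as control parameters, which is the parsimony the abstract highlights.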

16.
The results of experimental and theoretical investigations into the kinetics of moisture sorption by a neat epoxy resin obtained from RAE Industries (Reapox 520, D523) are reported. The sorption process was realized in atmospheres with a constant relative humidity of 33, 53, 75, 84, and 97% at a temperature of 50°C. The results obtained showed that the diffusion behavior of the epoxy resin did not obey Fick's law under the experimental conditions considered; consequently, a non-Fickian diffusion model was necessary. For this purpose, two-phase moisture sorption models were considered: a model with a time-dependent diffusivity, a two-phase material model, as well as relaxation and convection models of anomalous diffusion. The model parameters were obtained by fitting the experimental sorption data. A comparative analysis of the sorption models was performed, and the specific features of their application were assessed. The two-phase material model and the model with varying diffusivity were found to be the most suitable ones, owing to the good agreement between the calculated results and the experimental data and their rather small number of parameters (three or four), which makes them more flexible and physically better justified than the classical Fick's model with its two parameters. Translated from Mekhanika Kompozitnykh Materialov, Vol. 43, No. 4, pp. 555–570, July–August, 2007.
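The Fickian baseline and one common two-phase (dual-Fickian) generalization can be sketched as follows; the diffusivities, phase weight, and plate thickness are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

def fickian_uptake(t, D, h, terms=50):
    """Relative uptake M(t)/M_inf for Fickian diffusion into a plate of
    thickness h exposed on both faces (truncated series solution)."""
    n = np.arange(terms)
    s = np.exp(-((2 * n + 1)**2) * np.pi**2 * D * t[:, None] / h**2) / (2 * n + 1)**2
    return 1.0 - (8.0 / np.pi**2) * s.sum(axis=1)

def two_phase_uptake(t, D1, D2, w, h):
    """A dual-Fickian 'two-phase' curve: weighted sum of a fast and a slow stage."""
    return w * fickian_uptake(t, D1, h) + (1.0 - w) * fickian_uptake(t, D2, h)

t = np.linspace(0.0, 5e6, 200)                                # time in s
m = two_phase_uptake(t, D1=1e-12, D2=5e-14, w=0.7, h=1e-3)    # D in m^2/s, h in m
```

With two diffusivities and a weight, such a curve has three parameters and can reproduce the delayed second uptake stage that a single-D Fickian fit cannot.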

17.
We compare the performance of seven approximate methods for locating new capacity over time to minimize the total discounted costs of meeting growing demands at several locations. Comparisons are based on results for two industrial planning problems from India, and are given for both discrete-time and continuous-time frameworks. We also discuss strategies for combining different methods into possibly more effective hybrid approaches.

18.
We study, in this paper, some relativistic hadron bag models. We prove the existence of excited-state solutions in the symmetric case and of a ground-state solution in the non-symmetric case for the soliton bag and the bag approximation models by concentration compactness. We show that the energy functionals of the bag approximation model are $\Gamma$-limits of sequences of soliton bag energy functionals for the ground- and excited-state problems. The pre-compactness, up to translation, of the sequence of ground-state solutions associated with the soliton bag energy functionals in the non-symmetric case is obtained by combining the $\Gamma$-convergence theory and the concentration-compactness principle. Finally, we give a rigorous proof of the original derivation of the M.I.T. bag equations via a limit of bag approximation ground-state solutions in the spherical case. The supersymmetry property of the Dirac operator is a key point in many of our arguments.

19.
This study attempts to show how a Kohonen map can be used to improve the temporal stability of the accuracy of a financial failure model. Most models lose a significant part of their ability to generalize when the data used for estimation and prediction purposes are collected over different time periods. As their lifespan is fairly short, this becomes a real problem if a model is still in use when re-estimation appears to be necessary. To overcome this drawback, we introduce a new way of using a Kohonen map as a prediction model. The results of our experiments show that the generalization error achieved with a map remains more stable over time than that achieved with conventional methods used to design failure models (discriminant analysis, logistic regression, Cox's method, and neural networks). They also show that type-I error, the economically costliest error, is the greatest beneficiary of this gain in stability.

20.
This paper analyses the shift in a parameter of a life test model. The analysis depends on the prediction of order statistics in future samples based on order statistics in a series of earlier samples in life tests having a general exponential model. While a series of k samples is being drawn, the model itself undergoes a change. Firstly, a single shift is considered and the effect of this shift on the variance is discussed. A generalisation with s shifts (s ≤ k) in k samples is also taken up, and semi-or-used priors (SOUPS) have been used to obtain predictive distributions. Finally, a shift after i (i ≤ k) stages, from an exponential to a gamma model, is considered; for this case, the effect of the shift on the variance as well as on the Bayesian prediction region (BPR) is analysed, along with a set of tables.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号