Multiscale reliability places priority on shifting the space-time scale, while dual-scale reliability concentrates on time limits. Both can be ranked by applying the principle of least variance, although the prevailing assessment criteria may differ. The elements measuring reliability can, as a rule, be idealized as either non-interactive or interactive. Different formulations of the latter can be adopted to yield weak, strong, and mixed reliability, depending on the application. Variance can also be referenced to an average based on the linear sum, the root mean square, or some other measure; the preference again depends on the physical system under consideration. Different space-time scale ranges can be chosen for the appropriate time span to failure. Up to now, only partial validation has been possible, because the lower-scale data can only be generated theoretically.

A set of R-integrals is defined to account for the evolution effects by way of the root functions from Ideomechanics. The approach calls for a "pulsating mass" model that can connect the physical laws for small and large bodies, including energy dissipation at all scale levels. Non-linearity is no longer an issue when matter is characterized by the multiscaling of space-time, and ordinary functions can also be treated with minor modifications.

The key objective is not to derive new theories, but to explain the underlying physics of existing test data and the reliability of the diverse propositions for predicting the time span to failure. Present and past investigations have remained at the micro-macro (mi-ma) scale range for several decades because of the inability to quantify lower-scale data. To this end, the available mi-ma fatigue crack growth data are used to generate corresponding data at the na-mi and pi-na scale ranges. Reliability variances are computed for the three different scale ranges, covering effects from the atomic to the macroscopic scale.
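The abstract does not spell out the averaging formulas. As a minimal sketch, assuming the "linear sum" average is the arithmetic mean and the alternative is the root mean square (all function names and data below are illustrative, not from the paper), the variance of a set of reliability measurements referenced to either average could be computed as:

```python
import math

def arithmetic_mean(xs):
    # average based on the linear sum of the elements
    return sum(xs) / len(xs)

def rms_mean(xs):
    # average based on the root mean square of the elements
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def variance_about(xs, ref):
    # variance of the elements referenced to a chosen average `ref`
    return sum((x - ref) ** 2 for x in xs) / len(xs)

# hypothetical measurements of a reliability element
data = [1.0, 2.0, 3.0, 4.0]
v_linear = variance_about(data, arithmetic_mean(data))  # variance about the linear-sum mean
v_rms = variance_about(data, rms_mean(data))            # variance about the RMS mean
```

Since the RMS mean is never smaller than the arithmetic mean, the variance referenced to it is never smaller either, which is one way the choice of average shifts the resulting reliability ranking.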
The measured elements include the initial crack or defect length and the crack velocities. Specimens with large initial defects are found to be more reliable, and this trend also holds for each of the na-mi and pi-na scale ranges. Data from large specimens also had smaller reliability variances than data from smaller specimens, making the large specimens more reliable. Variances for the nano- and pico-scale ranges showed much more scatter and were more diversified. Uncertainties and un-reliabilities at the atomic and sub-atomic scales are no doubt related, although their connections remain to be found.

Reliabilities with higher-order precision are also defined for multi-component systems that can involve trillions of elements at the different scale ranges. Such large-scale computations are now within reach with the advent of super-speed computers, especially when reliability, risk, and other factors may have to be considered simultaneously.
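The least-variance ranking of specimen data described above can be sketched as follows; the specimen labels and measurements here are hypothetical illustrations, not data from the paper:

```python
def reliability_variance(samples):
    # sample variance about the arithmetic mean (one possible choice of average)
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

def rank_by_least_variance(datasets):
    # datasets: dict mapping a label to a list of measurements
    # (e.g. fatigue crack growth rates at one scale range);
    # a smaller variance ranks as more reliable under the least-variance principle
    return sorted(datasets, key=lambda k: reliability_variance(datasets[k]))

# hypothetical illustration data
datasets = {
    "small specimen": [2.0, 3.5, 1.0, 4.5],
    "large specimen": [2.9, 3.1, 3.0, 3.0],
}
ranking = rank_by_least_variance(datasets)  # most reliable first
```

Under this sketch the tightly clustered "large specimen" data ranks first, mirroring the reported trend that large-specimen data had the smaller reliability variance.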