Similar documents
20 similar records retrieved
1.
The growth and morphological evolution of molybdenum-oxide microstructures formed in the high-temperature environment of a counter-flow oxy-fuel flame using molybdenum probes are studied. Experiments conducted with various probe retention times reveal the sequence of the morphological changes. The morphological sequence begins with micron-size objects exhibiting a polygonal cubic shape, develops into elongated channels, changes to large structures with a leaf-like shape, and ends in dendritic structures. The probe–flame interaction time is found to be the governing parameter controlling the wide variety of morphological patterns, and their development is attributed to a molecular-level growth mechanism. This study reveals that the structures grow in several consecutive stages: material “evaporation and transportation”, “transformation”, “nucleation”, “initial growth”, “intermediate growth”, and “final growth”. XRD analysis shows that the chemical composition of all structures corresponds to MoO2.

2.
We suggest that the observed large-scale universal roughness of brittle fracture surfaces is due to the fracture propagation being a damage-coalescence process described by a stress-weighted percolation phenomenon in a self-generated quadratic damage gradient. We use the quasistatic 2D fuse model as a paradigm of a mode I fracture model. For this model, which exhibits a correlated percolation process, we measure the correlation-length exponent ν ≈ 1.35 and conjecture it to be equal to that of classical percolation, 4/3. We then show that the roughness exponent in the 2D fuse model is ζ = 2ν/(1 + 2ν) = 8/11. This is in accordance with the numerical value ζ = 0.75. Using the value for 3D percolation, ν = 0.88, we predict the roughness exponent in the 3D fuse model to be ζ = 0.64, in close agreement with the previously published value of 0.62 ± 0.05. We furthermore predict ζ = 4/5 for 3D brittle fractures, based on a recent calculation giving ν = 2. This is in full accordance with the value ζ = 0.80 found experimentally.
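A quick numerical check of the scaling relation quoted above; a minimal sketch assuming only ζ = 2ν/(1 + 2ν):

```python
# Verify zeta = 2*nu/(1 + 2*nu) for the percolation exponents quoted above.
for label, nu in [("2D (nu = 4/3)", 4/3),
                  ("3D fuse (nu = 0.88)", 0.88),
                  ("3D brittle (nu = 2)", 2.0)]:
    zeta = 2*nu / (1 + 2*nu)
    print(f"{label}: zeta = {zeta:.3f}")
# -> 0.727 (= 8/11), 0.638 (~ 0.64), 0.800 (= 4/5)
```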

3.
Stochastic epidemics and rumours on finite random networks
In this paper, we investigate the stochastic spread of epidemics and rumours on networks. We focus on the general stochastic SIR epidemic model and the rumour model recently proposed by Nekovee et al. (2007) [3], on networks with different random structures, taking into account the structure of the underlying network at the level of the degree–degree correlation function. Using embedded Markov chain techniques and ignoring density correlations between neighbouring nodes, we derive a set of equations for the final size of the epidemic/rumour on a homogeneous network that can be solved numerically, and compare the resulting distribution with the solution of the corresponding mean-field deterministic model. The final size distribution is found to switch from unimodal to bimodal form (indicating the possibility of substantial spread of the epidemic/rumour) at a threshold value that is higher than that for the deterministic model. However, the difference between the two thresholds decreases with the network size, n, following an n^(−1/3) behaviour. We then compare results (obtained by Monte Carlo simulation) for the full stochastic model on a homogeneous network, including density correlations at neighbouring nodes, with those for the approximating stochastic model, and show that the latter reproduces the exact simulation results with great accuracy. Finally, further Monte Carlo simulations of the full stochastic model are used to explore the effects of network size and structure on the final size distribution (using homogeneous networks, simple random graphs and Barabási–Albert scale-free networks).
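As an illustration of the kind of Monte Carlo experiment described above, here is a minimal sketch (not the authors' code) of a discrete-generation (Reed–Frost) stochastic SIR run on a homogeneous random network; the graph generator, transmission probability p, and network size are illustrative assumptions:

```python
import random
import networkx as nx

def final_size(G, p=0.15, seed_node=0):
    """Reed-Frost SIR: each infectious node infects each susceptible
    neighbour independently with probability p, then recovers."""
    infected, recovered = {seed_node}, set()
    while infected:
        new = {v for u in infected for v in G.neighbors(u)
               if v not in infected and v not in recovered
               and random.random() < p}
        recovered |= infected
        infected = new
    return len(recovered)

G = nx.random_regular_graph(6, 1000)       # homogeneous network
sizes = [final_size(G) for _ in range(500)]
# A histogram of `sizes` shows the unimodal-to-bimodal behaviour: below
# threshold only small outbreaks occur; above it a second peak of large
# epidemics appears.
```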

4.
Earthquakes (EQs) are large-scale fracture phenomena in the Earth’s heterogeneous crust. Fracture-induced physical fields allow real-time monitoring of damage evolution in materials during mechanical loading. Electromagnetic (EM) emissions in a wide frequency spectrum ranging from kHz to MHz are produced by opening cracks and can be considered precursors of general fracture. We emphasize that the MHz radiation appears earlier than the kHz radiation on both laboratory and geophysical scales. An important challenge in this field of research is to distinguish characteristic epochs in the evolution of precursory EM activity and identify them with the equivalent last stages of the EQ preparation process. Recently, we proposed the following two-stage model: (i) the first epoch, which includes the initial emergent MHz EM emission, is thought to be due to the fracture of the highly heterogeneous system that surrounds a family of large high-strength asperities distributed along the activated fault sustaining the system; (ii) the second epoch, which includes the emergent strong impulsive kHz EM radiation, is due to the fracture of the asperities themselves. A catastrophic EQ of magnitude Mw = 6.3 occurred on 6 April 2009 in central Italy; the majority of the damage occurred in the city of L’Aquila. Clear kHz–MHz EM anomalies had been detected prior to the L’Aquila EQ. Here, we investigate the seismogenic origin of the MHz part of the anomalies. The analysis, in terms of the intermittent dynamics of critical fluctuations, reveals that the candidate EM precursor (i) can be described as analogous to a thermal continuous phase transition and (ii) has anti-persistent behavior. These features suggest that this candidate precursor was triggered by microfractures in the highly disordered system that surrounded the backbone of asperities of the activated fault. A criterion for underlying strong critical behavior is introduced. In this field of research, reproducibility of results is desirable and is best achieved by analyzing a number of precursory MHz EM emissions; we refer to previous studies of precursory MHz EM activities associated with nine significant EQs that occurred in Greece in recent years. We conclude that all the MHz EM precursors studied, including the present one, can be described as analogous to a continuous second-order phase transition having strong criticality and anti-persistent behavior.
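The anti-persistence claim is the kind of property one can probe with detrended fluctuation analysis (DFA). The sketch below is our own illustration of the statistic, not the paper's analysis pipeline, and the test signal is a crude stand-in for an MHz EM time series:

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis. Returns F(s) for each
    window size s; the slope of log F vs log s estimates the exponent h
    (h < 0.5 indicates anti-persistence)."""
    y = np.cumsum(x - np.mean(x))             # integrated profile
    F = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)      # local linear detrend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t))**2)))
        F.append(np.mean(rms))
    return np.array(F)

# Differenced white noise as a toy anti-persistent signal (assumption).
x = np.diff(np.random.randn(4096))
scales = [16, 32, 64, 128, 256]
F = dfa(x, scales)
h = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent h = {h:.2f}  (h < 0.5: anti-persistent)")
```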

5.
In this paper, we investigate the use of a ternary line-coding technique based on Ungerboeck’s trellis-coded method in asynchronous optical CDMA systems. The ternary coding is predicated upon the equal-weight orthogonal (EWO) scheme: each user is assigned two mutually orthogonal signature sequences representing “+1” and “−1”, respectively, and transmits nothing for “0”. The receiver employs a maximum-likelihood soft decoder that selects the path with the minimum Euclidean distance. This trellis ternary coding scheme applies set partitioning with partially overlapping subsets to increase the free Euclidean distance, which considerably improves system performance. Furthermore, because it is a line code, the scheme carries sufficient clock information and thus aids baseband timing extraction (i.e., clock recovery). Numerical results reveal that the proposed trellis ternary coding scheme can significantly reduce the error floor and allow more active users to be accommodated in the network.
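To make the EWO signalling concrete, here is a minimal sketch of symbol-by-symbol minimum-Euclidean-distance detection; the 8-chip signature sequences and noise level are hypothetical, and the paper's actual decoder operates on trellis paths rather than isolated symbols:

```python
import numpy as np

# Two equal-weight, mutually orthogonal signatures (disjoint supports),
# and silence for "0" -- the EWO alphabet. These codes are made up.
S_PLUS  = np.array([1, 0, 1, 0, 1, 0, 1, 0], float)   # sent for "+1"
S_MINUS = np.array([0, 1, 0, 1, 0, 1, 0, 1], float)   # sent for "-1"
ALPHABET = {+1: S_PLUS, -1: S_MINUS, 0: np.zeros(8)}

def detect(r):
    """Return the symbol whose signature is closest to r (Euclidean)."""
    return min(ALPHABET, key=lambda a: np.sum((r - ALPHABET[a])**2))

rng = np.random.default_rng(1)
symbols = rng.choice([-1, 0, 1], size=1000)
received = np.array([ALPHABET[s] for s in symbols]) \
           + 0.4 * rng.standard_normal((1000, 8))
errors = sum(detect(r) != s for r, s in zip(received, symbols))
print(f"symbol error rate: {errors / 1000:.3f}")
# The paper goes further: a trellis with set partitioning increases the
# minimum free Euclidean distance between allowed symbol *sequences*.
```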

6.
Market mill is a complex dependence pattern leading to nonlinear correlations and predictability in the intraday dynamics of stock prices. The present paper puts together previous efforts to build a dynamical model reflecting the market mill asymmetries. We show that certain properties of the conditional dynamics at a single time scale, such as a characteristic shape of the asymmetry-generating component of the conditional probability distribution, result in the “elementary” market mill pattern. This asymmetry-generating component matches the empirical distribution obtained from market data. Multiple-time-scale considerations make the resulting “composite” mill similar to the empirical market mill patterns. The multiscale model also reflects the multi-agent nature of the market. Finally, we describe how variations in the asymmetry patterns of individual stocks can be interpreted as specific deformations of the fundamental market mill asymmetry pattern.
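The “asymmetry-generating component of the conditional probability distribution” can be illustrated with a toy conditional statistic: the mean of the next increment y given the previous increment x. The antisymmetric coupling below is our own stand-in, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                          # past increment
y = 0.1 * np.sign(x) * np.abs(x)**0.5 \
    + rng.standard_normal(n)                        # toy asymmetric coupling

bins = np.linspace(-3, 3, 25)
idx = np.digitize(x, bins)
cond_mean = [y[idx == i].mean()
             for i in range(1, len(bins)) if np.any(idx == i)]
# Plotting cond_mean against the bin centres reveals a nonzero, odd-in-x
# component of p(y|x): the kind of asymmetry behind the "mill" pattern.
```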

7.
A relatively new method has been confirmed that gives experimentalists a first insight into the solid under study by investigating the Cole–Cole diagrams of both the electric modulus M* and the permittivity ε* at different temperatures. For each composition of the investigated hexagonal ferrites, the M* points at different temperatures collapse onto a single semicircular master curve. This indicates that the studied compositions belong to a category of solids in which what we have referred to as an “electric stiffness” is the dominating property; it is the reciprocal of an “electric compliance”, which would be the dominating property if the permittivity ε* points could instead be collected onto a master curve. In the present work, the Cole–Cole diagrams of M* are found to give detailed information that is not obviously displayed in the conductivity representation. Moreover, the experimental data of the hexagonal ferrites BaZn2−xMgxFe16O27 (x = 0.0, 0.4, 0.8, 1.2, 1.6, and 2.0) have been fitted with Dyre’s macroscopic model of ac conductivity. An indirect fitting with the percolation path approximation (PPA) equation of Dyre’s macroscopic model gives quite satisfactory results, especially at relatively low frequencies, whereas fitting with the effective medium approximation (EMA) equation fails for the hexagonal ferrites, in contrast with the limited success found in a previous work on spinel ferrites. This is attributed to the more complex structure of hexagonal ferrites compared with spinel ferrites, which makes the EMA unsuitable.
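The electric-modulus representation used above is simply M* = 1/ε*. A minimal sketch of the conversion, using synthetic Debye-type permittivity data rather than the ferrite measurements:

```python
import numpy as np

omega = np.logspace(2, 8, 400)             # angular frequency (rad/s)
eps_inf, d_eps, tau = 10.0, 90.0, 1e-5     # assumed Debye parameters
eps = eps_inf + d_eps / (1 + 1j * omega * tau)   # complex permittivity

M = 1.0 / eps                              # electric modulus M* = 1/eps*
M1, M2 = M.real, M.imag
# A parametric plot of M2 against M1 (the Cole-Cole diagram of M*) traces
# a semicircular arc; data at different temperatures collapsing onto one
# such arc is the "master curve" described above.
```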

8.
Free Space Optical (FSO) links can be used to set up FSO communication networks or to supplement radio and optical fiber networks. Hence, FSO is a broadband wireless solution for closing the “last mile” connectivity gap throughout metropolitan networks; optical wireless fits well into dense urban areas and is ideally suited for urban applications. This paper gives an overview of free-space laser communications. Different network architectures are described and investigated with regard to reliability, and the use of “optical repeaters” as well as point-to-point and point-to-multipoint solutions for setting up different network architectures is explained. After the networking topologies and technologies have been covered, FSO applications are discussed in Section 2, including terrestrial applications for short and long ranges, and space applications. Terrestrial short-range applications cover links between buildings on a campus or between different buildings of a company, which can be established with low-cost technology; long-range applications require more sophisticated systems, so different techniques regarding emitted optical power, beam divergence, number of beams and tracking are examined. Space applications divide into FSO links through the troposphere, for example up- and downlinks between the Earth and satellites, and FSO links above the troposphere (e.g., optical inter-satellite links). The difference is that links through the troposphere are mainly influenced by weather conditions, similar but not identical to terrestrial FSO links, whereas satellite orbits lie above the atmosphere, so optical inter-satellite links are not influenced by weather. In Section 3 the use of optical wireless for the last mile is investigated in more detail, covering the important design criteria for connecting the user to the “backbone” by FSO techniques, e.g., line of sight, network topology, reliability and availability. The advantages and disadvantages of the different FSO technologies, as well as the backbone technology, are discussed in this respect. Furthermore, last-mile access using FSO is investigated for different environments (e.g., urban, rural, mountain) and climate zones. The availability of an FSO link is mainly determined by the local atmospheric conditions and the distance, and is examined for the last mile; results of various studies complete these investigations. Finally, an example of realizing an FSO network for the last mile is shown, in which the FSO transmitters use light-emitting diodes (LEDs) instead of laser diodes; using LEDs minimizes laser and eye safety problems. Some multimedia applications (such as video conferences and live TV transmissions) illustrate the range of applications for FSO last-mile networks.
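As a back-of-envelope illustration of the availability and design-criteria discussion, the sketch below estimates received power from geometric beam spreading plus Beer–Lambert atmospheric attenuation; every parameter value is an assumption of ours, not data from the paper:

```python
import math

P_tx_dBm   = 10.0    # transmit power (10 mW), assumed
div_mrad   = 2.0     # full beam divergence, assumed
rx_diam_m  = 0.1     # receiver aperture diameter, assumed
alpha_dB_km = 10.0   # atmospheric attenuation (e.g., light fog), assumed
L_km       = 1.0     # link distance

spot_diam = div_mrad * 1e-3 * L_km * 1e3          # beam diameter at Rx (m)
geo_loss_dB = -20 * math.log10(min(1.0, rx_diam_m / spot_diam))
atm_loss_dB = alpha_dB_km * L_km                   # Beer-Lambert term in dB
P_rx_dBm = P_tx_dBm - geo_loss_dB - atm_loss_dB
print(f"received power ~ {P_rx_dBm:.1f} dBm")
# Availability follows from how often local weather pushes alpha_dB_km
# beyond the link margin -- hence the strong dependence on climate zone.
```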

9.
The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to provide an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with the Rayleigh pulse–interface interaction are recorded experimentally. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite-difference wave simulator SWIFD is used to solve the problem for different combinations of contacting materials, and the effect of the acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage to a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the “damage belt” in Kobe, Japan, during the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.

10.
Correlations in the nuclear wave function beyond the mean-field or Hartree-Fock approximation are very important for describing basic properties of nuclear structure. Various approaches to account for such correlations are described and compared to each other. These include the hole-line expansion, the coupled-cluster or “exponential S” approach, the self-consistent evaluation of Green’s functions, variational approaches using correlated basis functions, and recent developments employing quantum Monte Carlo techniques. Details of these correlations are explored, along with their sensitivity to the underlying nucleon-nucleon interaction. Special attention is paid to attempts to investigate these correlations in exclusive nucleon knock-out experiments induced by electron scattering. Another important issue of nuclear structure physics is the role of relativistic effects as contained in phenomenological mean-field models; the sensitivity of various nuclear structure observables to these relativistic features is investigated. The report includes the discussion of nuclear matter as well as finite nuclei.

11.
李瑞涛, 唐刚, 夏辉, 寻之朋, 李嘉翔, 朱磊. 《物理学报》 (Acta Physica Sinica), 2019, 68(5): 050301.
Materials such as graphene have a typical two-dimensional honeycomb structure, and the random fuse model is a highly effective statistical-physics model for studying the fracture of heterogeneous materials. In this paper, the dynamic process of fusing (current-driven burnout) in a two-dimensional honeycomb random fuse network and the properties of the resulting fracture surface are analyzed by numerical simulation, in order to study the fusing dynamics of two-dimensional heterogeneous honeycomb materials and the dynamic scaling of the fracture surface. The simulations show that both the fusing process and the fracture surface of the two-dimensional random honeycomb lattice exhibit clear scaling behavior: the global and local roughness exponents of the fracture surface are α = 0.911 ± 0.005 and α_loc = 0.808 ± 0.003, respectively, and the pronounced difference between the two indicates that the fracture surface exhibits anomalous scaling. Analysis of the extremal heights of the fracture surface shows that the extreme-value statistics of the surface height are well fitted by an Asym2sig-type distribution rather than by any of the three most common extreme-value distributions. This study demonstrates that the random fuse model is equally applicable and effective for simulating the current-driven fusing of heterogeneous materials and for analyzing the scaling of the fused surface.
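The local roughness exponent quoted above is typically extracted from the scaling of the window-averaged interface width. A minimal sketch of that procedure on a toy profile (our own illustration; the paper's honeycomb fuse-network simulations are far more involved):

```python
import numpy as np

def local_width(h, window):
    """RMS width of h within windows of the given size, averaged."""
    return np.mean([np.std(h[i*window:(i+1)*window])
                    for i in range(len(h) // window)])

h = np.cumsum(np.random.randn(8192))   # toy profile (random walk, alpha = 0.5)
windows = [8, 16, 32, 64, 128, 256]
W = [local_width(h, s) for s in windows]
alpha_loc = np.polyfit(np.log(windows), np.log(W), 1)[0]
print(f"local roughness exponent ~ {alpha_loc:.2f}")
# The global exponent alpha comes instead from how the total width scales
# with system size L; alpha != alpha_loc signals anomalous scaling.
```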

12.
The reflection of a CJ detonation from a perforated plate is used to generate high-speed deflagrations downstream in order to investigate the critical conditions that lead to the onset of detonation. Different perforated plates were used to control the turbulence in the downstream deflagration waves. Streak Schlieren photography, ionization probes and pressure transducers are used to monitor the flow field and the transition to detonation. Stoichiometric mixtures of acetylene–oxygen and propane–oxygen were tested at low initial pressures. In some cases, acetylene–oxygen was diluted with 80% argon in order to render the mixture more “stable” (i.e., one with a more regular detonation cell structure). The results show that prior to successful detonation initiation, a deflagration is formed that propagates at about half the CJ detonation velocity of the mixture. This “critical” deflagration (which propagates at a relatively constant velocity for a certain duration prior to the onset of detonation) comprises a leading shock wave followed by an extended turbulent reaction zone. The critical deflagration speed does not depend on the turbulence characteristics of the perforated plate but rather on the energetics of the mixture, like a CJ detonation (i.e., the deflagration front is driven by the expansion of the combustion products); hence, the critical deflagration is identified as a CJ deflagration. The high-intensity turbulence required to sustain its propagation is maintained via chemical instabilities in the reaction zone due to the coupling of pressure fluctuations with the energy release. Therefore, in “unstable” mixtures, critical deflagrations can be supported for long durations, whereas in “stable” mixtures, deflagrations decay as the initial plate-generated turbulence decays. The eventual onset of detonation is postulated to result from the amplification of pressure waves (i.e., turbulence) that leads to the formation of local explosion centers via the SWACER mechanism during the pre-detonation period.

13.
I review the present status of two related models addressing scenarios in which the formation of heavy quarkonium states in high-energy heavy-ion collisions proceeds via “off-diagonal” combinations of a quark and an antiquark. The physical process involved belongs to a general class of quark “recombination”, although technically the recombining quarks here were never previously bound in a quarkonium state. Features of these processes relevant as a signature of color deconfinement are discussed.

14.
Experiments have been performed to study quaternary fission (QF) in the spontaneous fission of 252Cf, on the one hand, and in the neutron-induced fission reactions 233,235U(nth, f), on the other. In this higher-multiplicity fission mode, by definition, four charged products appear in the final state; in other words, as a generalization of the ternary-fission process, not one but two light charged particles (LCPs) accompany the splitting of an actinide nucleus into the customary pair of fission fragments. In the two sets of measurements, which used quite different approaches, the yields of several QF reactions with α-particles and tritons as the LCPs have been determined and the corresponding kinetic-energy distributions of the α-particles measured. The QF process can occur in two basically different ways: i) the simultaneous creation of two LCPs in the act of fission (“true” QF) and ii) the fast sequential decay of a single but particle-unstable LCP in ordinary ternary fission (“pseudo” QF). Experimentally, the two varieties of QF have been distinguished by exploiting the different patterns of angular correlations between the two outgoing LCPs. The experiments described in the present paper are the first to demonstrate that both types of reactions, true and pseudo QF, occur with quite comparable probabilities. As a new result, the kinetic-energy distributions related to the two processes have also been shown to be significantly different. For all QF reactions which could be explored, the yields for 252Cf(sf) were found to be roughly an order of magnitude larger than the yields found in the 233U(nth, f) and 235U(nth, f) reactions. An interesting by-product has been the measurement of the yields of excited LCPs, which allows nuclear temperatures at scission to be deduced by comparison with the respective ground-state yields.
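The final remark refers to the standard Boltzmann-ratio method for extracting a scission temperature, Y*/Y0 = (2J*+1)/(2J0+1)·exp(−E*/T). A minimal sketch with hypothetical numbers (the spins and energy correspond to 7Li; the yield ratio is made up):

```python
import math

E_star = 0.478                       # excitation energy (MeV), 7Li 1st excited
g_ratio = (2*0.5 + 1) / (2*1.5 + 1)  # spin degeneracy ratio (J* = 1/2, J0 = 3/2)
Y_ratio = 0.05                       # measured excited/ground yield (hypothetical)

# Invert Y*/Y0 = g_ratio * exp(-E*/T) for T:
T = -E_star / math.log(Y_ratio / g_ratio)
print(f"apparent temperature T ~ {T:.2f} MeV")
```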

15.
X-ray diffraction microscopy (XDM) is a new form of X-ray imaging that is being practiced at several third-generation synchrotron-radiation X-ray facilities. Nine years have elapsed since the technique was first introduced, and it has made rapid progress in demonstrating high-resolution three-dimensional imaging; it promises few-nanometer resolution with much larger samples than can be imaged in the transmission electron microscope. Both life- and materials-science applications of XDM are intended, and it is expected that the principal limitation to resolution will be radiation damage for life science and the coherent power of available X-ray sources for materials science. In this paper we address the question of the role of radiation damage. We use a statistical analysis based on the so-called “dose fractionation theorem” of Hegerl and Hoppe to calculate the dose needed to make an image of a single life-science sample by XDM at a given resolution. We find that the needed dose scales with the inverse fourth power of the resolution, and we present experimental evidence to support this finding. To determine the maximum tolerable dose, we have assembled a number of data taken from the literature, plus some measurements of our own that cover ranges of resolution not well covered otherwise. The conclusion of this study is that, based on the natural contrast between protein and water and “Rose-criterion” image quality, one should be able to image a frozen-hydrated biological sample using XDM at a resolution of about 10 nm.
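A quick illustration of the inverse-fourth-power dose scaling; the reference dose at 10 nm is a hypothetical placeholder, not a value from the paper:

```python
# If required dose ~ d^-4, halving the resolution d costs a factor of 16.
d_ref_nm, dose_ref_Gy = 10.0, 1e8        # assumed reference point

for d in [20.0, 10.0, 5.0, 2.0]:
    dose = dose_ref_Gy * (d_ref_nm / d) ** 4
    print(f"resolution {d:5.1f} nm -> required dose ~ {dose:.1e} Gy")
```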

16.
This paper presents an error analysis of numerical algorithms for solving the convective continuity equation using flux-corrected transport (FCT) techniques. The nature of numerical errors in Eulerian finite-difference solutions to the continuity equation is analyzed. The properties and intrinsic errors of an “optimal” algorithm are discussed and a flux-corrected form of such an algorithm is demonstrated for a restricted class of problems. This optimal FCT algorithm is applied to a model test problem and the error is monitored for comparison with more generally applicable algorithms. Several improved FCT algorithms are developed and judged against both standard flux-uncorrected transport algorithms and the optimal algorithm. These improved FCT algorithms are found to be four to eight times more accurate than standard non-FCT algorithms, nearly twice as accurate as the original SHASTA FCT algorithm, and approach the accuracy of the optimal algorithm.
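For readers unfamiliar with FCT, here is a minimal one-dimensional sketch in the Boris–Book spirit: a diffusive low-order (upwind) step plus a limited antidiffusive correction taken from a high-order (Lax–Wendroff) flux. It illustrates the technique in general, not any of the specific algorithms analyzed in the paper:

```python
import numpy as np

def fct_step(u, c):                  # c = a*dt/dx (CFL number), 0 < c < 1
    up1 = np.roll(u, -1)
    f_lo = u                                    # upwind flux / a (a > 0)
    f_hi = 0.5*(u + up1) - 0.5*c*(up1 - u)      # Lax-Wendroff flux / a
    A = f_hi - f_lo                             # antidiffusive flux at i+1/2

    utd = u - c*(f_lo - np.roll(f_lo, 1))       # low-order (diffused) update
    d = np.roll(utd, -1) - utd                  # utd_{i+1} - utd_i
    s = np.sign(A)
    # Boris-Book limiter: antidiffusion must not create new extrema in utd
    Ac = s * np.maximum(0.0, np.minimum.reduce(
        [np.abs(A), s*np.roll(d, -1)/c, s*np.roll(d, 1)/c]))
    return utd - c*(Ac - np.roll(Ac, 1))

x = np.linspace(0, 1, 200, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square-wave test profile
for _ in range(400):
    u = fct_step(u, c=0.4)
# The advected square wave stays sharp and ripple-free, unlike a pure
# Lax-Wendroff solution, which would oscillate behind the discontinuities.
```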

17.
We present a first set of improved selective pulses, obtained with a numerical technique similar to the one proposed by Geen and Freeman. The novelty is essentially a robust and efficient “evolution strategy” which consistently leads, in a matter of minutes, to “solutions” better than those published so far. The other two ingredients are a “cost function”, which includes contributions from peak and average radiofrequency power, and some understanding of the peculiar requirements of each type of pulse. For example, good solutions for self-refocusing pulses and “negative phase excitation pulses” (which yield a maximum signal well after the end of the pulse) are found, as might have been predicted, among amplitude-modulated pulses with 270° tip angles. Emphasis is given to the search for solutions with low RF power for selective excitation, saturation, and inversion pulses. Experimental verification of the accuracy and power requirements of the pulses has been performed with a 4.7 T Sisco imager.
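A minimal sketch of the kind of optimizer the abstract describes, a (1+1) evolution strategy with a simple step-size adaptation rule; the cost function below is a stand-in, not the paper's pulse-design cost:

```python
import numpy as np

def cost(p):
    # Hypothetical cost: distance to a target waveform plus a small
    # average-power penalty (the paper's cost also penalizes peak power).
    target = np.sin(np.linspace(0, np.pi, p.size))
    return np.sum((p - target)**2) + 0.01*np.sum(p**2)

rng = np.random.default_rng(0)
p, sigma = np.zeros(64), 0.5        # pulse samples and mutation step size
best = cost(p)
for _ in range(5000):
    trial = p + sigma * rng.standard_normal(p.size)   # mutate
    c = cost(trial)
    if c < best:                    # selection: keep improvements only
        p, best = trial, c
        sigma *= 1.1                # success -> widen the search
    else:
        sigma *= 0.98               # failure -> narrow it
print(f"final cost {best:.4f}")
```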

18.
In interferometric fringe pattern analysis, specular and speckle fringe patterns are the two main divisions. While specular fringe patterns are characterized by high-quality fringes, speckle fringe patterns (which arise from the diffuse scattering of coherent radiation off an optically rough surface) are characterized by noisy fringes. This paper concentrates on this aspect and on Matlab-based filtering methods to improve the quality of speckle fringe patterns by developing the appropriate software. A software package with several functions has been written in Matlab; its objective is to provide a more effective way to post-process speckle interferometric fringes. Further, the newly developed software “Macurv”, which can provide second-order derivative (curvature) fringe information, is presented, and its algorithm and functions are explained.
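A minimal sketch of speckle-fringe filtering of the kind such software performs (our own Python illustration, not the “Macurv” code, which is written in Matlab):

```python
import numpy as np
from scipy import ndimage

# Synthetic noisy correlation-fringe pattern: carrier fringes plus a
# weak curvature term, corrupted by multiplicative speckle-like noise.
y, x = np.mgrid[0:256, 0:256]
phase = 2*np.pi*(x/64.0) + 1e-4*(x - 128)**2
fringes = 0.5 + 0.5*np.cos(phase)
speckle = np.random.rayleigh(0.3, fringes.shape)
noisy = np.clip(fringes * speckle / speckle.mean(), 0, 2)

filtered = ndimage.median_filter(noisy, size=5)        # remove speckle spikes
filtered = ndimage.gaussian_filter(filtered, sigma=2)  # smooth residual noise
# `filtered` shows much cleaner fringes, suitable for subsequent
# differentiation to obtain slope and curvature maps.
```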

19.
A single female professional vocal artist and pedagogue sang examples of “twang” and neutral voice quality, which a panel of experts classified in almost complete agreement with the singer’s intentions. Subglottal pressure was measured as the oral pressure during the occlusion of the syllable /pae/. This pressure tended to be higher in “twang”, and the sound pressure level (SPL) was invariably higher. Voice-source properties and formant frequencies were analyzed by inverse filtering. In “twang”, as compared with neutral, the closed quotient was greater, the pulse amplitude and the fundamental were weaker, and the normalized amplitude tended to be lower, whereas formants 1 and 2 were higher and formants 3 and 5 were lower. The formant differences, which appeared to be the main cause of the SPL differences, were more important than the source differences for the perception of “twanginess”. As resonatory effects occur independently of the voice source, the formant frequencies in “twang” may reflect a vocal strategy that is advantageous from the point of view of vocal hygiene.

20.
The “pre-processing” procedure and the “break-point” analysis developed in a previous work based on the ADO (analytical discrete ordinates) method are used, along with a nascent delta function to describe the polar-angle dependence of an incident beam, to solve the classical albedo problem for radiative transfer in a plane-parallel, multi-layer medium subject to Fresnel boundary and interface conditions. As a result of using a nascent delta function, rather than the Dirac distribution, to model the polar-angle dependence of the incident beam, the computational work is significantly simplified (since a particular solution is not required) in comparison with an approach where both the polar-angle and the azimuthal-angle dependence of the incident beam are formulated in terms of Dirac delta distributions. The numerical results from this approach are found (when a sufficiently small “narrowness” parameter is used to define the nascent delta) to be in complete agreement with already reported high-quality results for a set of challenging multi-layer problems.
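A nascent delta is a narrow normalized bump that approaches the Dirac distribution as its “narrowness” parameter ε → 0. The sketch below (a triangular bump of our own choosing, integrated against a test function) shows the convergence that makes the results insensitive to ε once it is small:

```python
import numpy as np

def nascent_delta(mu, mu0, eps):
    """Triangular bump of unit area, width 2*eps, centred at mu0."""
    out = np.zeros_like(mu)
    m = np.abs(mu - mu0) < eps
    out[m] = (1.0/eps) * (1.0 - np.abs(mu[m] - mu0)/eps)
    return out

f = lambda mu: np.exp(-mu**2)        # test function
mu = np.linspace(0.0, 1.0, 200001)
dmu = mu[1] - mu[0]
for eps in (0.1, 0.01, 0.001):
    val = np.sum(nascent_delta(mu, 0.5, eps) * f(mu)) * dmu
    print(f"eps = {eps:6.3f}: integral = {val:.6f} (target {f(0.5):.6f})")
# As eps shrinks, the quadrature converges to f(0.5), mimicking the
# action of a Dirac delta without requiring a particular solution.
```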
