Similar Documents
20 similar documents found.
1.
What is 'unfreezable water', how unfreezable is it, and how much is there?   Cited by: 1 (self-citations: 0, citations by others: 1)
Wolfe J, Bryant G, Koster KL. Cryo Letters, 2002, 23(3): 157-166
Water that remains unfrozen at temperatures below the equilibrium bulk freezing temperature, in the presence of ice, is sometimes called unfreezable or bound. This paper analyses the phenomenon in terms of quantitative measurements of the hydration interaction among membranes or macromolecules at freezing temperatures. These results are related to analogous measurements in which osmotic stress or mechanical compression is used to equilibrate water of hydration with a bulk phase. The analysis provides formulas to estimate, at a given sub-freezing temperature, the amount of unfrozen water due to equilibrium hydration effects. Even at tens of degrees below freezing, this hydration effect alone can explain an unfrozen water volume that considerably exceeds that of a single 'hydration shell' surrounding the hydrophilic surfaces. The formulas provided give a lower bound to the amount of unfrozen water for two reasons. First, the well-known freezing point depression due to small solutes is, to zeroth order, independent of the membrane or macromolecular hydration effect. Further, the unfrozen solution found between membranes or macromolecules at freezing temperatures has high viscosity and small dimensions. This means that dehydration of such systems, especially at freezing temperatures, takes so long that equilibrium is rarely achieved over normal experimental time scales. So, in many cases, the amount of unfrozen water exceeds that expected at equilibrium, which in turn usually exceeds that calculated for a single hydration shell.
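
For orientation, the kind of equilibrium estimate the abstract refers to can be sketched as follows. The symbols L_v, P_0 and lambda are notation introduced here (the volumetric latent heat of fusion and the amplitude and decay length of the hydration repulsion), and the balance is a schematic version of the argument, not the authors' exact formulas.

% Schematic equilibrium estimate of the unfrozen layer thickness d at a
% temperature Delta T below the bulk freezing point T_f: the suction of
% water in equilibrium with ice must be balanced by the hydration
% repulsion P(d) between the hydrophilic surfaces.
\[
  P(d) \;\approx\; \frac{L_v\,\Delta T}{T_f},
  \qquad
  P(d) = P_0\, e^{-d/\lambda}
  \;\;\Longrightarrow\;\;
  d(\Delta T) \;\approx\; \lambda \,\ln\!\frac{P_0\,T_f}{L_v\,\Delta T}.
\]
% Because d decreases only logarithmically with Delta T, a layer much
% thicker than a single hydration shell can remain unfrozen even tens of
% degrees below T_f, which is the qualitative point made in the abstract.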

2.
It is well known in quantum optics that fluctuations and dissipation inevitably intervene in the dynamics of open quantum systems. Density matrix elements may all decay exponentially and smoothly, but we show that two-party entanglement, a valuable quantum coherence, may nevertheless abruptly decrease to zero in a finite time. This is Entanglement Sudden Death. In this talk we show how entanglement sudden death occurs under either phase or amplitude noise, either quantum or classical in origin. Moreover, we show that when two or more noises are active at the same time, the effect of the combined noises is even more unexpected.
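
As a concrete illustration of the effect described above, the following sketch (not taken from the talk; the initial state, the damping model and the helper names are standard textbook choices assumed here) evolves two qubits under independent amplitude damping and tracks the Wootters concurrence, which reaches zero at a finite time.

# Minimal numerical illustration of entanglement sudden death (ESD):
# two qubits, each coupled to its own amplitude-damping channel,
# starting from a partially entangled pure state.
import numpy as np

def kraus_amplitude_damping(p):
    """Single-qubit amplitude-damping Kraus operators, basis {|0>=ground, |1>=excited}."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    return [K0, K1]

def apply_local_channel(rho, kraus):
    """Apply the same single-qubit channel independently to both qubits."""
    out = np.zeros_like(rho)
    for A in kraus:
        for B in kraus:
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Initial state sqrt(2/3)|11> + sqrt(1/3)|00>: more weight on the doubly
# excited component, the regime in which ESD is known to occur.
psi = np.zeros(4, dtype=complex)
psi[0] = np.sqrt(1 / 3)   # |00>
psi[3] = np.sqrt(2 / 3)   # |11>
rho0 = np.outer(psi, psi.conj())

for gt in np.linspace(0.0, 3.0, 13):           # dimensionless time Gamma*t
    p = 1.0 - np.exp(-gt)                      # decay probability per qubit
    rho = apply_local_channel(rho0, kraus_amplitude_damping(p))
    print(f"Gamma*t = {gt:4.2f}   concurrence = {concurrence(rho):.4f}")
# The printed concurrence hits exactly zero at a finite time (around
# Gamma*t ~ 1.2 for this state) and stays there, even though every
# density-matrix element decays only smoothly.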

3.
4.
Information on available polystyrene calibration spheres is presented regarding the particle diameter, uncertainty in the size, and the width of the size distribution for particles in a size range between 20 and 100 nm. The use of differential mobility analysis for measuring the single primary calibration standard in this size range, the 100 nm NIST Standard Reference Material® 1963, is described along with the key factors in the uncertainty assessment. The issues of differences between international standards and traceability to the NIST Standard are presented. The lack of suitable polystyrene spheres in the 20–40 nm size range is discussed in terms of measurement uncertainty and width of the size distributions. Results on characterizing a new class of molecular particles known as dendrimers are described, and the possibilities of using these as size calibration standards for the size range from 3 to 15 nm are discussed.
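
The sizing principle behind the differential mobility analysis mentioned above can be summarized by the standard electrical-mobility relation; this is a general textbook expression quoted for orientation, not the specific NIST measurement equation.

% Electrical mobility of a particle carrying n elementary charges, the
% quantity a differential mobility analyzer actually classifies:
\[
  Z_p \;=\; \frac{n\,e\,C_c(d_p)}{3\pi\,\mu\,d_p},
\]
% where d_p is the particle diameter, mu the gas viscosity and C_c(d_p)
% the Cunningham slip correction, which becomes increasingly important
% below about 100 nm. Inverting the measured mobility for d_p is what
% ties the size assignment to the uncertainty factors discussed above.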

5.
An obvious criterion to classify theories of modified gravity is to identify their gravitational degrees of freedom and their coupling to the metric and the matter sector. Using this simple idea, we show that any theory which depends on the curvature invariants is equivalent to general relativity in the presence of new fields that are gravitationally coupled to the energy-momentum tensor. We show that they can be shifted into a new energy-momentum tensor. There is no a priori reason to identify these new fields as gravitational degrees of freedom or matter fields. This leads to an equivalence between dark matter particles gravitationally coupled to the standard model fields and modified gravity theories designed to account for the dark matter phenomenon. Due to this ambiguity, it is impossible to differentiate experimentally between these theories, and any attempt at doing so should be classified as a mere interpretation of the same phenomenon.
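
A familiar special case of the equivalence claimed above, sketched here only for illustration (the paper's argument covers arbitrary curvature invariants), is the rewriting of f(R) gravity as general relativity with an extra scalar degree of freedom.

% f(R) gravity recast as GR plus a scalar field (schematic):
\[
  S=\int d^{4}x \,\sqrt{-g}\, f(R)
  \;\;\longleftrightarrow\;\;
  S=\int d^{4}x \,\sqrt{-g}\,\bigl[\,f(\phi)+f'(\phi)\,(R-\phi)\,\bigr].
\]
% Varying with respect to phi gives phi = R whenever f'' is nonzero, so
% the two actions are dynamically equivalent; after a conformal rescaling
% to the Einstein frame the scalar couples to the trace of the
% energy-momentum tensor, i.e. it is exactly the kind of gravitationally
% coupled new field the abstract refers to.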

6.
7.
A brief history is presented, outlining the development of rate theory during the past century. Starting from Arrhenius [Z. Phys. Chem. 4, 226 (1889)], we follow especially the formulation of transition state theory by Wigner [Z. Phys. Chem. Abt. B 19, 203 (1932)] and Eyring [J. Chem. Phys. 3, 107 (1935)]. Transition state theory (TST) made it possible to obtain quick estimates for reaction rates for a broad variety of processes even during the days when sophisticated computers were not available. Arrhenius' suggestion that a transition state exists which is intermediate between reactants and products was central to the development of rate theory. Although Wigner gave an abstract definition of the transition state as a surface of minimal unidirectional flux, it took almost half a century until the transition state was precisely defined by Pechukas [Dynamics of Molecular Collisions B, edited by W. H. Miller (Plenum, New York, 1976)], and even then only in the realm of classical mechanics. Eyring, considered by many to be the father of TST, never resolved the question as to the definition of the activation energy for which Arrhenius became famous. In 1978, Chandler [J. Chem. Phys. 68, 2959 (1978)] finally showed that, especially when considering condensed phases, the activation energy is a free energy: it is the barrier height in the potential of mean force felt by the reacting system. Parallel to the development of rate theory in the chemistry community, Kramers published in 1940 [Physica (Amsterdam) 7, 284 (1940)] a seminal paper on the relation between Einstein's theory of Brownian motion [Einstein, Ann. Phys. 17, 549 (1905)] and rate theory. Kramers' paper provided a solution for the effect of friction on reaction rates but also left us with some challenges. He could not derive a uniform expression for the rate, valid for all values of the friction coefficient, known as the Kramers turnover problem. He also did not establish the connection between his approach and the TST developed by the chemistry community. For many years, Kramers' theory was considered as providing a dynamic correction to the thermodynamic TST. Both of these questions were resolved in the 1980s when Pollak [J. Chem. Phys. 85, 865 (1986)] showed that Kramers' expression in the moderate to strong friction regime could be derived from TST, provided that the bath, which is the source of the friction, is handled at the same level as the system which is observed. This then led to the Mel'nikov-Pollak-Grabert-Hanggi [Mel'nikov and Meshkov, J. Chem. Phys. 85, 1018 (1986); Pollak, Grabert, and Hanggi, ibid. 91, 4073 (1989)] solution of the turnover problem posed by Kramers. Although classical rate theory reached a high level of maturity, its quantum analog leaves the theorist with serious challenges to this very day. As noted by Wigner [Trans. Faraday Soc. 34, 29 (1938)], TST is an inherently classical theory. A definite quantum TST has not been formulated to date, although some very useful approximate quantum rate theories have been invented. The successes and challenges facing quantum rate theory are outlined. An open problem which is being investigated intensively is rate theory away from equilibrium. TST is no longer valid and cannot even serve as a conceptual guide for understanding the critical factors which determine rates away from equilibrium. The nonequilibrium quantum theory is even less well developed than the classical one, and suffers from the fact that even today we do not know how to solve the real time quantum dynamics for systems with "many" degrees of freedom.
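
For reference, the main rate expressions this survey revolves around can be written compactly; these are the standard textbook forms, quoted here for orientation rather than reproduced from the cited papers.

% Arrhenius (empirical), Eyring TST, and Kramers' moderate-to-strong
% friction result:
\[
  k_{\mathrm{Arr}} = A\,e^{-E_a/k_B T},\qquad
  k_{\mathrm{TST}} = \frac{k_B T}{h}\,e^{-\Delta G^{\ddagger}/k_B T},\qquad
  k_{\mathrm{Kramers}} = \Bigl[\sqrt{1+\bigl(\tfrac{\gamma}{2\omega_b}\bigr)^{2}}
      -\tfrac{\gamma}{2\omega_b}\Bigr]\, k_{\mathrm{TST}},
\]
% where gamma is the friction coefficient and omega_b the barrier
% frequency. The bracketed Kramers factor tends to 1 as gamma -> 0 and
% to omega_b/gamma at strong friction, which is the regime where the
% spatial-diffusion-limited rate applies.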

8.
A single female professional vocal artist and pedagogue sang examples of “twang” and neutral voice quality, which a panel of experts classified in almost complete agreement with the singer's intentions. Subglottal pressure was measured as the oral pressure during the occlusion of the syllable /pae/. This pressure tended to be higher in “twang,” whereas the sound pressure level (SPL) was invariably higher. Voice source properties and formant frequencies were analyzed by inverse filtering. In “twang,” as compared with neutral, the closed quotient was greater, the pulse amplitude and the fundamental were weaker, and the normalized amplitude tended to be lower, whereas formants 1 and 2 were higher and 3 and 5 were lower. The formant differences, which appeared to be the main cause of the SPL differences, were more important than the source differences for the perception of “twanginess.” As resonatory effects occur independently of the voice source, the formant frequencies in “twang” may reflect a vocal strategy that is advantageous from the point of view of vocal hygiene.

9.
Processes such as double Drell–Yan and same-sign WW production have contributions from double parton scattering, which are not well-defined because of a δ^(2)(z=0) singularity that is generated by QCD evolution. We study the single and double parton contributions to these processes, and show how to handle the singularity using factorization and operator renormalization. We compute the QCD evolution of double parton distribution functions (dPDFs) due to mixing with single PDFs. The modified evolution of dPDFs at z=0, including generalized dPDFs for the non-forward case, is given in Appendix A. We include a brief discussion of the experimental interpretation of dPDFs and how they can probe flavor, spin and color correlations of partons in hadrons.

10.
Synchrony of discharge of auditory neurons to two-tone stimuli and "synchrony suppression" have been analyzed by examining the implications of the definition of vector strength. Synchrony suppression, defined as the reduction in the vector strength for one component when a second is introduced, occurs by definition when partial ("half-wave") rectification occurs in an otherwise linear system. It does so with the usual shifts (on the abscissa) of empirical vector strength curves, disproving any necessity for compressive or other nonlinearities. Synchrony suppression is sometimes defined, incompatibly, as the shift in dB of a vector strength curve, said to be the magnitude of suppression. That this conception is incorrect is shown by the identification of partial rectification with vector strength reduction and curve shift, but it can also be shown to be a logical fallacy. The vector strength definition was also applied to the complex waveform obtained at the output of an instantaneous amplitude-compressive nonlinearity. The shifts of vector strength growth and decay curves (at their crossover points) necessarily equal those in the linear case for any compressive nonlinearity that compresses equal inputs equally. But such a compressive nonlinearity is not without noticeable effects on vector strengths. If the input levels lie in the range leading to compressed outputs, differences in the relative input levels will be accentuated in the relative output levels in the period histogram. Compression thus contributes to greater differences in the vector strengths, for unequal input levels, than in the linear case. More visible effects on vector strength curves result from waveform distortion, which reduces vector strength saturation and crossover values and causes them to recede at higher input levels.
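
The definitional point can be made concrete with a small simulation. The firing model below (a half-wave-rectified linear drive feeding an inhomogeneous Poisson generator, with illustrative frequencies and rates) is an assumption of this sketch, not the paper's model, but it reproduces the stated behaviour: raising the level of the second tone lowers the vector strength for the first even though nothing in the model is compressive.

# Vector strength of phase-locked spikes to a two-tone stimulus, generated
# from a purely linear drive followed by half-wave rectification.
import numpy as np

rng = np.random.default_rng(0)

def vector_strength(spike_times, freq):
    """Vector strength of spike times with respect to one frequency component."""
    phases = 2.0 * np.pi * freq * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

def simulate_spikes(a1, a2, f1=500.0, f2=620.0, duration=20.0, gain=200.0):
    """Inhomogeneous Poisson spikes driven by the half-wave-rectified two-tone sum."""
    dt = 1e-5
    t = np.arange(0.0, duration, dt)
    drive = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
    rate = gain * np.maximum(drive, 0.0)          # half-wave rectification only
    return t[rng.random(t.size) < rate * dt]      # thin to Poisson spikes

# Raise the level of tone 2 while holding tone 1 fixed: the vector strength
# for tone 1 drops ("synchrony suppression") although the system is linear
# apart from the rectifier.
for a2 in [0.0, 0.5, 1.0, 2.0, 4.0]:
    spikes = simulate_spikes(a1=1.0, a2=a2)
    print(f"a2 = {a2:3.1f}   VS(f1) = {vector_strength(spikes, 500.0):.3f}   "
          f"VS(f2) = {vector_strength(spikes, 620.0):.3f}")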

11.
Jongkwang Kim. Physica A, 2008, 387(11): 2637-2652
Many papers published in recent years show that real-world graphs G(n,m) (n nodes, m edges) are more or less “complex” in the sense that different topological features deviate from those of random graphs. Here we narrow the definition of graph complexity and argue that a complex graph contains many different subgraphs. We present different measures that quantify this complexity, for instance C1e, the relative number of non-isomorphic one-edge-deleted subgraphs (i.e. DECK size). However, because these different subgraph measures are computationally demanding, we also study simpler complexity measures focussing on slightly different aspects of graph complexity. We consider heuristically defined “product measures”, the products of two quantities which are zero in the extreme cases of a path and a clique, and “entropy measures” quantifying the diversity of different topological features. The previously defined network/graph complexity measures Medium Articulation and Offdiagonal complexity (OdC) belong to these two classes. We study OdC measures in some detail and compare them with our new measures. For all measures, the most complex graph has a medium number of edges, between the edge numbers of the minimum connected graph (a tree, m = n-1) and the maximum connected graph (a clique, m = n(n-1)/2). Interestingly, for some measures this number scales with the geometric mean of the extremes, sqrt((n-1)·n(n-1)/2). All graph complexity measures are characterized with the help of different example graphs. For all measures the corresponding time complexity is given. Finally, we discuss the complexity of 33 real-world graphs of different biological, social and economic systems with the six computationally most simple measures (including OdC). The complexities of the real graphs are compared with average complexities of two different random graph versions: complete random graphs (just fixed n, m) and rewired graphs with fixed node degrees.
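
A brute-force sketch of one of the subgraph measures named above, C1e (the relative number of non-isomorphic one-edge-deleted subgraphs), is given below; the pairwise isomorphism test is exactly what makes such measures computationally demanding, and the example graphs are chosen here only for illustration.

# Brute-force computation of C1e: the number of non-isomorphic
# one-edge-deleted subgraphs ("deck" size) divided by the number of edges.
# A direct transcription of the definition, not an efficient method.
import networkx as nx

def c1e(G):
    deck = []
    for u, v in G.edges():
        H = G.copy()
        H.remove_edge(u, v)
        deck.append(H)
    # Count isomorphism classes among the one-edge-deleted subgraphs.
    classes = []
    for H in deck:
        if not any(nx.is_isomorphic(H, C) for C in classes):
            classes.append(H)
    m = G.number_of_edges()
    return len(classes) / m if m else 0.0

# Every edge deletion of a clique gives the same graph, so its C1e is
# minimal; more heterogeneous graphs have larger decks.
print("path  P8 :", c1e(nx.path_graph(8)))
print("clique K8:", c1e(nx.complete_graph(8)))
print("G(8,0.4) :", c1e(nx.gnp_random_graph(8, 0.4, seed=1)))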

12.
We propose to characterize the shapes of flat pebbles in terms of the statistical distribution of curvatures measured along the pebble contour. This is demonstrated for the erosion of clay pebbles in a controlled laboratory apparatus. Photographs at various stages of erosion are analyzed, and compared with two models. We find that the curvature distribution complements the usual measurement of aspect ratio, and connects naturally to erosion processes that are typically faster at protruding regions of high curvature.
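
A minimal sketch of how such a curvature distribution can be extracted from a digitized contour is given below; the ellipse stands in for a photographed pebble outline, and the finite-difference scheme is an assumption of this sketch rather than the authors' image-analysis pipeline.

# Curvature along a closed contour given as (x, y) samples, using the
# standard parametric formula
#   kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2).
import numpy as np

def curvature_along_contour(x, y):
    """Signed curvature at each sample of a digitized (x, y) contour."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
x, y = 2.0 * np.cos(theta), 1.0 * np.sin(theta)   # 2:1 ellipse as a stand-in "pebble"

kappa = curvature_along_contour(x, y)
# Summarize the distribution: erosion in the spirit of the abstract acts
# fastest where curvature is largest, i.e. at the protruding ends.
# (For a genuinely closed contour, periodic differences would remove the
# small end effects of np.gradient.)
print("min curvature :", kappa.min())
print("max curvature :", kappa.max())
print("mean curvature:", kappa.mean())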

13.
The question of the cause of inertial reaction forces and the validity of Mach's principle are investigated. A recent claim that the cause of inertial reaction forces can be attributed to an interaction of the electrical charge of elementary particles with the hypothetical quantum mechanical zero-point fluctuation electromagnetic field is shown to be untenable. It fails to correspond to reality because the coupling of electric charge to the electromagnetic field cannot plausibly be made to mimic the universal coupling of gravity and inertia to the stress-energy-momentum (i.e., matter) tensor. The gravitational explanation of the origin of inertial forces is then briefly laid out, and various important features of it that have been explored over the last half-century are addressed.

14.
We develop a systematic procedure for finding integrable “relativistic” (regular one-parameter) deformations of integrable lattice systems. Our procedure is based on integrable time discretizations and consists of three steps. First, for a given system one finds a local discretization living in the same hierarchy. Second, one considers this discretization as a particular Cauchy problem for a certain 2-dimensional lattice equation, and then looks for another meaningful Cauchy problem, which can be, in turn, interpreted as a new discrete time system. Third, one has to identify the integrable hierarchies to which these new discrete time systems belong. These novel hierarchies are then called “relativistic”, the small time step h playing the role of the inverse speed of light. We apply this procedure to the Toda lattice (and recover the well-known relativistic Toda lattice), as well as to the Volterra lattice and a certain Bogoyavlensky lattice, for which the “relativistic” deformations were not known previously. Received: 1 April 1998 / Accepted: 21 July 1998

15.
16.
We investigate the question of what the spin-magnetic moment is. The magnetic moment of the Dirac electron in the frame along the z-axis is evaluated. This is identified with the spin-magnetic moment of the electron, because there is no z-component of the magnetic moment caused by orbital angular momentum in our frame. The correct value of the spin-magnetic moment and the correct ratio of the spin-magnetic moment to the spin (i.e. g=2) are obtained explicitly. In deriving them, the negative energy solutions of the Dirac equation play essential roles. We find that the transition current from a positive energy state to a negative energy state causes the spin-magnetic moment of the electrons in vacuum. This fact implies that the ratio of the spin-magnetic moment to the spin may change depending on the environment. For example, it may have different values in materials.
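
For orientation, the standard result referred to above can be stated compactly; this is the familiar textbook form, quoted here independently of the paper's vacuum-transition argument.

% The Dirac value of the electron's spin magnetic moment (e > 0):
\[
  \boldsymbol{\mu}_s \;=\; -\,g\,\frac{e}{2m}\,\mathbf{S},
  \qquad g = 2,
  \qquad \mu_z = -\,\frac{e\hbar}{2m} = -\mu_B
  \quad\text{for } S_z = +\tfrac{\hbar}{2},
\]
% i.e. the gyromagnetic ratio is twice the value expected from classical
% orbital motion (g = 1); QED radiative corrections shift g slightly above
% 2, but that lies beyond the tree-level Dirac analysis discussed here.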

17.
The equations of General Relativity are non-linear. This makes their averaging non-trivial. The notion of a mean gravitational field is defined, and it is proven that this field obeys the equations of General Relativity if the unaveraged field does. The workings of the averaging procedure on Maxwell's field and on perfect fluids in curved space-times are also discussed. It is found that Maxwell's equations are still satisfied by the averaged quantities, but that the equation of state for other kinds of matter generally changes upon averaging. In particular, it is proven that the separation between matter and gravitational field is not scale-independent. The same result can be interpreted by introducing a stress-energy tensor for a mean vacuum. Possible applications to cosmology are discussed. Finally, the work presented in this article also suggests that the signature of the metric might be scale-dependent too. Received: 16 October 2003, Published online: 15 March 2004. PACS: 04.20.Cv Fundamental problems and general formalism; 04.40.Nr Einstein-Maxwell spacetimes, spacetimes with fluids, radiation or classical fields; 95.35.+d Dark matter (stellar, interstellar, galactic, and cosmological)

18.
This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called “microscopic theory”, applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. Two fundamental theorems of Hilbert space, the Kochen–Specker–Bell theorem and Gleason’s theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory.
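
The probability rule that Gleason's theorem enforces in this setting can be stated compactly; this is the standard form, quoted for orientation rather than reproduced from the paper.

% Born rule: for a density operator rho on a Hilbert space of dimension
% at least 3, Gleason's theorem says any probability assignment to
% projectors P_a that is additive over orthogonal sets must take the form
\[
  \operatorname{Prob}(a) \;=\; \operatorname{Tr}\!\left(\rho\, P_a\right),
  \qquad \rho \ge 0,\quad \operatorname{Tr}\rho = 1,
\]
% which reduces to |<a|psi>|^2 for a pure state rho = |psi><psi| and a
% rank-one projector P_a = |a><a|.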

19.
20.