Similar Documents
20 similar documents retrieved.
1.
2.
3.
What is 'unfreezable water', how unfreezable is it, and how much is there?
Wolfe J, Bryant G, Koster KL. Cryo Letters, 2002, 23(3): 157-166.
Water that remains unfrozen at temperatures below the equilibrium bulk freezing temperature, in the presence of ice, is sometimes called unfreezable or bound. This paper analyses the phenomenon in terms of quantitative measurements of the hydration interaction among membranes or macromolecules at freezing temperatures. These results are related to analogous measurements in which osmotic stress or mechanical compression is used to equilibrate water of hydration with a bulk phase. The analysis provides formulas to estimate, at a given sub-freezing temperature, the amount of unfrozen water due to equilibrium hydration effects. Even at tens of degrees below freezing, this hydration effect alone can explain an unfrozen water volume that considerably exceeds that of a single 'hydration shell' surrounding the hydrophilic surfaces. The formulas provided give a lower bound to the amount of unfrozen water for two reasons. First, the well-known freezing point depression due to small solutes is, to zeroth order, independent of the membrane or macromolecular hydration effect. Second, the unfrozen solution found between membranes or macromolecules at freezing temperatures has high viscosity and small dimensions. This means that dehydration of such systems, especially at freezing temperatures, takes so long that equilibrium is rarely achieved over normal experimental time scales. So, in many cases, the amount of unfrozen water exceeds that expected at equilibrium, which in turn usually exceeds that calculated for a single hydration shell.
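The first of the two reasons above is the ordinary colligative effect, which can be made concrete. Below is a minimal sketch of that zeroth-order solute term only, assuming a dilute ideal solution and the cryoscopic constant of water (Kf ≈ 1.86 K·kg·mol⁻¹); the paper's hydration-pressure formulas themselves are not reproduced here.

```python
# Colligative freezing-point depression by small solutes (dilute, ideal limit):
#   Delta_Tf = i * Kf * m
# with i the van't Hoff factor, Kf the cryoscopic constant, m the molality.
# This is only the zeroth-order solute term, independent of the hydration effect.

KF_WATER = 1.86  # K*kg/mol, cryoscopic constant of water

def freezing_point_depression(molality, vant_hoff_factor=1.0, kf=KF_WATER):
    """Return the freezing-point depression Delta_Tf in kelvin."""
    return vant_hoff_factor * kf * molality

if __name__ == "__main__":
    print(freezing_point_depression(1.0))       # 1 mol/kg sucrose: ~1.9 K
    print(freezing_point_depression(1.0, 2.0))  # 1 mol/kg NaCl (i ~ 2): ~3.7 K
```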

4.
We discuss the possibility that the recent detection of 511 keV gamma rays from the galactic bulge, as observed by INTEGRAL, is a consequence of low-mass (1-100 MeV) particle dark matter annihilations. We discuss the type of halo profile favored by the observations as well as the size of the annihilation cross section needed to account for the signal. We find that such a scenario is consistent with the observed dark matter relic density and other constraints from astrophysics and particle physics.
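For orientation, the relic-density requirement mentioned here can be estimated with the standard thermal-relic rule of thumb, Omega_chi h^2 ≈ 3e-27 cm^3 s^-1 / <sigma v>. This is only a rough sketch, not the paper's analysis of MeV-scale candidates, and the numbers below are illustrative.

```python
# Rough thermal-relic estimate: Omega_chi * h^2 ~ (3e-27 cm^3/s) / <sigma v>.
# Inverting gives the annihilation cross section that reproduces the observed
# dark matter abundance; illustrative only.

OMEGA_H2_OBS = 0.12  # observed dark matter density parameter times h^2

def required_sigma_v(omega_h2=OMEGA_H2_OBS):
    """Thermally averaged annihilation cross section (cm^3/s) giving omega_h2."""
    return 3.0e-27 / omega_h2

if __name__ == "__main__":
    print(f"<sigma v> ~ {required_sigma_v():.1e} cm^3/s")  # ~2.5e-26 cm^3/s
```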

5.
In this paper, two modified Ricci models are considered as candidates for unified dark matter–dark energy. In model one the energy density is given by $\rho_{\mathrm{MR}}=3M_{\mathrm{pl}}(\alpha H^{2}+\beta\dot{H})$, whereas in model two it is $\rho_{\mathrm{MR}}=3M_{\mathrm{pl}}(\frac{\alpha}{6}R+\gamma\ddot{H}H^{-1})$. We find that both can explain dark matter and dark energy successfully. A constant equation of state of dark energy is obtained in model one, which means it gives the same background evolution as the wCDM model, while model two gives an evolving dark energy equation of state that crosses the phantom divide line in the near past.
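A toy illustration of the kind of background evolution the first model produces. It assumes a flat universe containing only baryons plus the modified Ricci fluid, writes the density with M_pl squared so the dimensions close in natural units (the rendering above has a bare M_pl), and uses illustrative values of alpha and beta rather than the paper's fit.

```python
# Toy background for "model one": rho_MR = 3*Mpl^2*(alpha*H^2 + beta*dH/dt),
# assumed here with Mpl^2, in a flat universe with baryons plus this fluid.
# With x = ln(a) and E^2 = (H/H0)^2, the Friedmann equation rearranges to
#   dE^2/dx = (2/beta) * ((1 - alpha)*E^2 - Omega_b * exp(-3x)).
# alpha = 1, beta ~ 0.11 makes the fluid split into a constant (DE-like) part
# plus an a^-3 (matter-like) part, i.e. a LambdaCDM-like expansion history.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 0.11   # illustrative values, not the paper's best fit
Omega_b = 0.05            # baryon density parameter today

def rhs(x, y):
    (E2,) = y
    return [(2.0 / beta) * ((1.0 - alpha) * E2 - Omega_b * np.exp(-3.0 * x))]

# integrate from today (a = 1, E^2 = 1) back to a = 0.1
sol = solve_ivp(rhs, (0.0, np.log(0.1)), [1.0], dense_output=True, rtol=1e-8)

for a in (1.0, 0.5, 0.2, 0.1):
    x = np.log(a)
    E2 = sol.sol(x)[0]
    rho_mr = E2 - Omega_b * np.exp(-3.0 * x)   # fluid density / (3 Mpl^2 H0^2)
    print(f"a={a:4.2f}  E^2={E2:8.3f}  rho_MR={rho_mr:8.3f}")
```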

6.
It is well known that dark matter dominates the dynamics of galaxies and clusters of galaxies. Its constituents remain a mystery despite an assiduous search for them over the past three decades. Recent results from the satellite-based PAMELA experiment show an excess in the positron fraction at energies between 10 and 100 GeV in the secondary cosmic ray spectrum. Other experiments, namely ATIC, HESS and FERMI, show an excess in the total electron (e+ + e-) spectrum for energies greater than 100 GeV. These excesses in the positron fraction as well as the electron spectrum can arise in local astrophysical processes like pulsars, or can be attributed to the annihilation of the dark matter particles. The latter possibility gives clues to the possible candidates for the dark matter in galaxies and other astrophysical systems. In this article, we give a report of these exciting developments.

7.
The fundamental problem of the occurrence/removal of a finite-time future singularity in the evolution of the universe is addressed for coupled Dark Energy (DE). We demonstrate the existence of a (unstable or local-minimum) de Sitter solution which may cure the Type II or Type IV future singularity for DE coupled with DM, as a result of tuning the initial conditions. In the case of phantom DE, the corresponding coupling may help to resolve the coincidence problem but not the Big Rip (Type I) singularity issue. We show that modified gravity of a special form, or an inhomogeneous DE fluid, may offer a universal scenario to cure the Type I, II, III or IV future singularity of coupled (fluid or scalar) DE evolution.
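The coupled dark sector referred to here is usually phrased as two continuity equations exchanging energy through a term Q. The paper's specific coupling is not given in the abstract; the sketch below assumes the common phenomenological choice Q = delta*H*rho_DM with a constant phantom equation of state, purely to show how the exchange is integrated.

```python
# Coupled dark sector toy: energy exchange Q between DM and DE.
#   d(rho_DM)/dlna = -3*rho_DM + Q/H
#   d(rho_DE)/dlna = -3*(1+w)*rho_DE - Q/H
# Assumed coupling Q = delta*H*rho_DM (a common phenomenological choice, not
# necessarily the paper's), with a constant phantom w < -1.

import numpy as np
from scipy.integrate import solve_ivp

w, delta = -1.1, 0.05          # illustrative values
rho_dm0, rho_de0 = 0.27, 0.68  # in units of the critical density today

def rhs(x, y):
    rho_dm, rho_de = y
    q_over_h = delta * rho_dm
    return [-3.0 * rho_dm + q_over_h,
            -3.0 * (1.0 + w) * rho_de - q_over_h]

# integrate forward in ln(a) from today (x = 0) into the future
sol = solve_ivp(rhs, (0.0, 2.0), [rho_dm0, rho_de0], dense_output=True)

for x in (0.0, 1.0, 2.0):      # x = ln(a)
    rho_dm, rho_de = sol.sol(x)
    print(f"ln a={x:.0f}  rho_DM={rho_dm:.3e}  rho_DE={rho_de:.3e}  "
          f"ratio={rho_dm / rho_de:.3e}")
```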

8.
Simultaneous EEG–fMRI is a powerful tool to study spontaneous and evoked brain activity because of the complementary advantages of the two techniques in terms of temporal and spatial resolution. In recent years, a significant number of scientific works have been published on this subject. However, many technical problems related to the intrinsic incompatibility of the EEG and MRI methods are still not fully solved. Furthermore, simultaneous acquisition of EEG and event-related fMRI requires precise synchronization of all devices involved in the experimental setup. Thus, timing issues must be carefully considered in order to avoid significant methodological errors.

The aim of the present work is to highlight and discuss some of the technical and methodological open issues associated with the combined use of EEG and fMRI. These issues are presented in the context of preliminary data regarding simultaneous acquisition of event-related evoked potentials and BOLD images during a visual odd-ball paradigm.
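As an illustration of why the synchronization matters, the sketch below implements a trigger-locked average-artifact subtraction of the gradient artifact (in the spirit of the widely used AAS approach): a template is built from epochs cut at the scanner's volume triggers and subtracted volume by volume. Function and variable names are illustrative, and the sketch assumes the EEG and trigger clocks are already aligned; a timing error of even a few samples leaves residual artifact.

```python
# Trigger-locked average-artifact subtraction (sketch) for gradient artifacts
# in EEG acquired during fMRI. Assumes one TTL trigger per fMRI volume and a
# common clock for EEG samples and triggers; names are illustrative.

import numpy as np

def subtract_gradient_artifact(eeg, trigger_samples, tr_samples, n_avg=25):
    """eeg: (n_channels, n_samples); trigger_samples: sample index of each
    volume onset; tr_samples: samples per TR. Returns artifact-corrected EEG."""
    corrected = eeg.copy()
    valid = [s for s in trigger_samples if s + tr_samples <= eeg.shape[1]]
    # one epoch per volume: shape (n_volumes, n_channels, tr_samples)
    epochs = np.stack([eeg[:, s:s + tr_samples] for s in valid])
    for i, s in enumerate(valid):
        # template from neighbouring volumes, tracking slow artifact drift
        lo, hi = max(0, i - n_avg // 2), min(len(epochs), i + n_avg // 2 + 1)
        corrected[:, s:s + tr_samples] -= epochs[lo:hi].mean(axis=0)
    return corrected

# toy usage: 10 s of 32-channel EEG at 5 kHz, TR = 2 s
fs, tr = 5000, 2.0
eeg = np.random.randn(32, 10 * fs)
triggers = np.arange(0, 10 * fs, int(tr * fs))
clean = subtract_gradient_artifact(eeg, triggers, int(tr * fs))
print(clean.shape)
```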


9.
We present constraints on the mass of warm dark matter (WDM) particles derived from the Lyman-alpha flux power spectrum of 55 high-resolution HIRES spectra at 2.0 < z < 6.4. From the HIRES spectra we obtain m(WDM) ≳ 1.2 keV (2σ) if the WDM consists of early decoupled thermal relics, and m(WDM) ≳ 5.6 keV (2σ) for sterile neutrinos. Adding the Sloan Digital Sky Survey Lyman-alpha flux power spectrum, we get m(WDM) ≳ 4 keV and m(WDM) ≳ 28 keV (2σ) for thermal relics and sterile neutrinos, respectively. These results improve previous constraints by a factor of 2.

10.
Information on available polystyrene calibration spheres is presented regarding the particle diameter, the uncertainty in the size, and the width of the size distribution for particles in the size range between 20 and 100 nm. The use of differential mobility analysis for measuring the single primary calibration standard in this size range, the 100 nm NIST Standard Reference Material® 1963, is described along with the key factors in the uncertainty assessment. The issues of differences between international standards and traceability to the NIST standard are presented. The lack of suitable polystyrene spheres in the 20–40 nm size range is discussed in terms of measurement uncertainty and the width of the size distributions. Results on characterizing a new class of molecular particles known as dendrimers are described, along with the possibility of using these as size calibration standards in the 3 to 15 nm range.
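The quantity a differential mobility analyzer actually classifies is the electrical mobility of a charged sphere, which can be written down directly. A sketch follows, assuming the classic Cunningham slip-correction coefficients (1.257, 0.4, 1.1) and nominal room-temperature air properties; published coefficient sets and the NIST uncertainty analysis differ in detail.

```python
# Electrical mobility of a singly charged sphere, the quantity a DMA classifies:
#   Z = n*e*Cc(d) / (3*pi*mu*d)
# with the Cunningham slip correction Cc = 1 + Kn*(A1 + A2*exp(-A3/Kn)),
# Kn = 2*lambda/d. Classic coefficients and nominal air properties assumed.

import math

E_CHARGE = 1.602e-19   # C, elementary charge
MU_AIR   = 1.83e-5     # Pa*s, dynamic viscosity of air (~23 C)
MFP_AIR  = 67.3e-9     # m, mean free path of air (~23 C, 101.3 kPa)

def slip_correction(d, mfp=MFP_AIR, a1=1.257, a2=0.4, a3=1.1):
    kn = 2.0 * mfp / d
    return 1.0 + kn * (a1 + a2 * math.exp(-a3 / kn))

def electrical_mobility(d, n_charges=1, mu=MU_AIR):
    """Mobility Z in m^2/(V*s) for a sphere of diameter d (m)."""
    return n_charges * E_CHARGE * slip_correction(d) / (3.0 * math.pi * mu * d)

if __name__ == "__main__":
    for d_nm in (20, 60, 100):   # 100 nm is the nominal SRM 1963 size
        z = electrical_mobility(d_nm * 1e-9)
        print(f"d = {d_nm:3d} nm  Z = {z:.3e} m^2/(V s)")
```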

11.
Recent results from the PAMELA, ATIC, FERMI and HESS experiments have focused attention on the possible existence of high-energy cosmic-ray e+ e- that may originate from dark matter annihilations or decays in the Milky Way. Here we examine the morphology of the associated γ-ray emission after propagation of the electrons generated by both annihilating and decaying dark matter models. We focus on photon energies of 1, 10, and 50 GeV (relevant for the FERMI satellite) and consider different propagation parameters. Our main conclusion is that distinguishing annihilating from decaying dark matter may only be possible if the propagation parameters correspond to the most optimistic diffusion models. In addition, we point to examples where morphology can lead to an erroneous interpretation of the source injection energy.

12.
13.
We construct a gauge theory with gauge group SU(3)×SU(2)×U(1) by spectral cover from F-theory and ask how the Standard Model is extended under minimal assumptions on the Higgs sector. To obtain different numbers of Higgs pairs and matter generations (one and three, respectively), distinguished by R-parity, we choose a universal G-flux obeying SO(10) but slightly breaking the E6 unification relation. This condition forces a distinction between up- and down-type Higgs fields, suppression of proton decay operators up to dimension five, and the existence and dynamics of a singlet related to the μ-parameter.

14.
Physics Letters A, 1998, 247(6): 385-390.
We consider a Brownian particle in a well with a dichotomously fluctuating barrier. We show that, for a linear slope, the time to deterministically slide down the barrier is also the relaxation time for the escape rate after the barrier changes shape.
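A minimal Euler-Maruyama sketch of the setup: an overdamped Brownian particle at the bottom of a well whose linear barrier flips dichotomously (telegraph noise). Parameters are illustrative, not the paper's; the deterministic slide-down time gamma*L^2/E_b (the timescale the abstract identifies with the relaxation time of the escape rate) is printed for reference alongside the simulated mean escape time.

```python
# Overdamped Brownian particle escaping over a linear barrier (height E_b over
# length L) whose slope flips dichotomously at rate nu (telegraph noise).
# Euler-Maruyama sketch with illustrative parameters.

import numpy as np

rng = np.random.default_rng(0)

gamma, kT = 1.0, 0.1          # friction coefficient, thermal energy k_B*T
L, E_b, nu = 1.0, 1.0, 0.5    # barrier length, barrier height, flip rate
dt, n_traj = 1e-3, 500

def first_passage_time():
    x, state, t = 0.0, +1, 0.0     # start at the well bottom; +1 = barrier up
    while x < L:
        if rng.random() < nu * dt: # dichotomous (telegraph) flip
            state = -state
        force = -state * E_b / L   # linear slope: uphill (+1) or downhill (-1)
        x += force / gamma * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.normal()
        x = max(x, 0.0)            # reflecting wall at the well bottom
        t += dt
    return t

times = np.array([first_passage_time() for _ in range(n_traj)])
print("simulated mean escape time:", times.mean())
print("deterministic slide time gamma*L^2/E_b:", gamma * L**2 / E_b)
```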

15.
In this paper we put forward a running coupling scenario for describing the interaction between dark energy and dark matter. The dark sector interaction in our scenario is free of the assumption that the interaction term Q is proportional to the Hubble expansion rate and the energy densities of the dark sectors. We use only a time-variable coupling b(a) (with a the scale factor of the universe) to characterize the interaction Q. We propose the parametrization b(a) = b0*a + be*(1-a), in which the early-time coupling is given by a constant, be, while today the coupling is given by another constant, b0. To investigate the behavior of the running coupling, we employ three dark energy models, namely the cosmological constant model (w = -1), the constant-w model (w = w0), and the time-dependent-w model (w(a) = w0 + w1*(1-a)). We constrain the models with current observational data, including type Ia supernovae, the baryon acoustic oscillation, the cosmic microwave background, the Hubble expansion rate, and X-ray gas mass fraction data. The fitting results indicate that a time-varying vacuum scenario is favored, in which the coupling b(z) crosses the noninteracting line (b = 0) during the cosmological evolution, with the sign changing from negative to positive. The crossing of the noninteracting line happens at around z = 0.2–0.3, and the crossing behavior is favored at about the 1σ confidence level. Our work implies that more attention should be paid to the time-varying vacuum model, and that the phenomenological construction of a sign-changeable or oscillatory interaction between the dark sectors deserves serious consideration.
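The crossing of the noninteracting line follows directly from the parametrization: b(a) = 0 at a* = be/(be - b0), i.e. z* = 1/a* - 1. A small sketch with illustrative values of b0 and be, chosen only so that the crossing lands near z ≈ 0.2–0.3; they are not the paper's best-fit numbers.

```python
# Running coupling b(a) = b0*a + be*(1 - a) and the redshift at which it
# crosses the non-interacting line b = 0. Illustrative b0, be values only.

def b_of_a(a, b0, be):
    return b0 * a + be * (1.0 - a)

def crossing_redshift(b0, be):
    """Scale factor where b(a)=0 is a* = be/(be - b0); return z* = 1/a* - 1."""
    a_star = be / (be - b0)
    return 1.0 / a_star - 1.0

b0, be = 0.01, -0.04   # early-time coupling negative, present-day positive
print("z at b = 0:", crossing_redshift(b0, be))   # 0.25 for these values
for z in (0.0, 0.25, 1.0, 3.0):
    a = 1.0 / (1.0 + z)
    print(f"z={z:4.2f}  b={b_of_a(a, b0, be):+.4f}")
```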

16.
We study particle motion around a black hole (BH) in Hořava-Lifshitz (HL) gravity with the Kehagias-Sfetsos (KS) parameter. First, the innermost stable circular orbit (ISCO) is obtained for massive particles around the BH in HL gravity. We find that the radii of the ISCOs decrease as the KS parameter decreases, meaning that the KS parameter causes the orbits of particles to move inward with respect to the Schwarzschild BH case. Then, the optical properties of a KS BH are studied in detail,...
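The KS metric itself is not given in the abstract, so the sketch below reproduces only the Schwarzschild baseline that the comparison refers to, using the same recipe one would apply to the KS case: circular geodesics have L²(r) = Mr²/(r - 3M), and the ISCO sits at the minimum of L²(r), which is r = 6M in units G = c = 1.

```python
# ISCO of the Schwarzschild baseline (G = c = 1), found numerically: circular
# orbits have L^2(r) = M r^2 / (r - 3M) and the ISCO is where L^2 is minimal.
# Reference value only; the Kehagias-Sfetsos metric is not reproduced here.

from scipy.optimize import minimize_scalar

M = 1.0

def L_squared(r, m=M):
    """Specific angular momentum squared of a circular geodesic at radius r."""
    return m * r**2 / (r - 3.0 * m)

res = minimize_scalar(L_squared, bounds=(3.01 * M, 20.0 * M), method="bounded")
print("numerical r_ISCO =", res.x)   # ~6.0
print("analytic  r_ISCO =", 6.0 * M)
```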

17.
Particle physics has become an interesting testing ground for fundamental questions of quantum mechanics (QM). The massive meson-antimeson systems are especially suitable, as they offer a unique laboratory to test various aspects of particle physics (CP violation, CPT violation, ...) as well as to test the foundations of QM (local realistic theories versus QM, Bell inequalities, decoherence effects, quantum marking and erasure concepts, ...). We focus here on a surprising connection between the violation of a symmetry in particle physics, the CP symmetry (C = charge conjugation, P = parity), and non-locality. This is achieved via Bell inequalities, which discriminate between local realistic theories and QM. Further, we present a decoherence model which can be tested by accelerator experiments at DAΦNE (Italy) and at the KEK-B machine (Japan). We show that there is a simple connection between a decoherence parameter and different measures of entanglement, i.e., entanglement of formation and concurrence. In this way the very basic mathematical and theoretical concepts of entanglement can be confronted directly with experiments. Similar decoherence models can also be tested for entangled photon systems and for single neutrons in an interferometer.
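The Bell-inequality side of the discussion can be made concrete with a short numerical check: for the two-qubit singlet, the CHSH combination reaches the Tsirelson bound 2√2, beyond the local-realistic limit of 2. Ordinary spin-1/2 observables are used below; for neutral meson pairs the analogous construction is written in terms of quasi-spin states.

```python
# CHSH test for the two-qubit singlet: local realism bounds |S| <= 2, while
# quantum mechanics reaches 2*sqrt(2) (Tsirelson bound) at optimal settings.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a, b):
    """Correlation E(a,b) = <psi| A(a) (x) B(b) |psi> for the singlet."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# standard optimal CHSH settings
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S = abs(corr(a1, b1) + corr(a1, b2) + corr(a2, b1) - corr(a2, b2))
print("CHSH |S| =", S, " vs local bound 2 and Tsirelson bound", 2 * np.sqrt(2))
```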

18.
19.
A brief history is presented, outlining the development of rate theory during the past century. Starting from Arrhenius [Z. Phys. Chem. 4, 226 (1889)], we follow especially the formulation of transition state theory by Wigner [Z. Phys. Chem. Abt. B 19, 203 (1932)] and Eyring [J. Chem. Phys. 3, 107 (1935)]. Transition state theory (TST) made it possible to obtain quick estimates for reaction rates for a broad variety of processes even during the days when sophisticated computers were not available. Arrhenius' suggestion that a transition state exists which is intermediate between reactants and products was central to the development of rate theory. Although Wigner gave an abstract definition of the transition state as a surface of minimal unidirectional flux, it took almost half a century until the transition state was precisely defined by Pechukas [Dynamics of Molecular Collisions B, edited by W. H. Miller (Plenum, New York, 1976)], and even then only in the realm of classical mechanics. Eyring, considered by many to be the father of TST, never resolved the question of the definition of the activation energy for which Arrhenius became famous. In 1978, Chandler [J. Chem. Phys. 68, 2959 (1978)] finally showed that, especially when considering condensed phases, the activation energy is a free energy: it is the barrier height in the potential of mean force felt by the reacting system. Parallel to the development of rate theory in the chemistry community, Kramers published in 1940 [Physica (Amsterdam) 7, 284 (1940)] a seminal paper on the relation between Einstein's theory of Brownian motion [Einstein, Ann. Phys. 17, 549 (1905)] and rate theory. Kramers' paper provided a solution for the effect of friction on reaction rates, but it also left us with some challenges. He could not derive a uniform expression for the rate, valid for all values of the friction coefficient, known as the Kramers turnover problem. He also did not establish the connection between his approach and the TST developed by the chemistry community. For many years, Kramers' theory was considered as providing a dynamic correction to the thermodynamic TST. Both of these questions were resolved in the 1980s when Pollak [J. Chem. Phys. 85, 865 (1986)] showed that Kramers' expression in the moderate-to-strong friction regime could be derived from TST, provided that the bath, which is the source of the friction, is handled at the same level as the system which is observed. This then led to the Mel'nikov-Pollak-Grabert-Hanggi [Mel'nikov and Meshkov, J. Chem. Phys. 85, 1018 (1986); Pollak, Grabert, and Hanggi, ibid. 91, 4073 (1989)] solution of the turnover problem posed by Kramers. Although classical rate theory has reached a high level of maturity, its quantum analog leaves the theorist with serious challenges to this very day. As noted by Wigner [Trans. Faraday Soc. 34, 29 (1938)], TST is an inherently classical theory. A definite quantum TST has not been formulated to date, although some very useful approximate quantum rate theories have been invented. The successes and challenges facing quantum rate theory are outlined. An open problem which is being investigated intensively is rate theory away from equilibrium. TST is no longer valid and cannot even serve as a conceptual guide for understanding the critical factors which determine rates away from equilibrium.
The nonequilibrium quantum theory is even less well developed than the classical one, and suffers from the fact that even today we do not know how to solve the real-time quantum dynamics of systems with "many" degrees of freedom.
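Two of the closed-form results in this lineage are easy to state concretely: the Eyring/TST rate k = (k_B T/h) exp(-ΔG‡/RT), and the Kramers spatial-diffusion transmission factor that multiplies it in the moderate-to-strong friction regime. The numbers in the sketch below are illustrative.

```python
# Textbook expressions from the rate-theory lineage sketched above:
# Eyring/TST rate  k_TST = (kB*T/h) * exp(-dG_act / (R*T)),  and the Kramers
# spatial-diffusion transmission factor
#   kappa = (sqrt(gamma^2/4 + wb^2) - gamma/2) / wb,
# which multiplies k_TST at moderate-to-strong friction. Illustrative numbers.

import math

KB, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618  # SI units

def eyring_rate(dG_act_kJ_per_mol, T=298.15):
    """TST (Eyring) rate constant in 1/s for activation free energy dG."""
    return (KB * T / H) * math.exp(-dG_act_kJ_per_mol * 1e3 / (R * T))

def kramers_factor(gamma, omega_b):
    """Kramers transmission factor (<= 1) for friction gamma, barrier freq wb."""
    return (math.sqrt(gamma**2 / 4.0 + omega_b**2) - gamma / 2.0) / omega_b

if __name__ == "__main__":
    k_tst = eyring_rate(60.0)              # 60 kJ/mol barrier at 298 K
    for g_over_wb in (0.1, 1.0, 10.0):
        kappa = kramers_factor(g_over_wb, 1.0)
        print(f"gamma/wb={g_over_wb:5.1f}  kappa={kappa:.3f}  "
              f"k={kappa * k_tst:.3e} 1/s")
```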

20.
It is well known in quantum optics that fluctuations and dissipation inevitably intervene in the dynamics of open quantum systems. Density matrix elements may all decay exponentially and smoothly, but we show that two-party entanglement, a valuable quantum coherence, may nevertheless abruptly decrease to zero in a finite time. This is Entanglement Sudden Death. In this talk we show how entanglement sudden death occurs under either phase or amplitude noise, whether quantum or classical in origin. Moreover, we show that when two or more noises are active at the same time, the effect of the combined noises is even more unexpected.
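A compact numerical illustration of the phenomenon under amplitude noise: a two-qubit state α|00⟩ + β|11⟩ subject to independent amplitude damping of strength p on each qubit loses all of its concurrence at a finite p < 1 whenever |β| > |α|, rather than only asymptotically. The initial state and the scan below are illustrative; the concurrence is evaluated with the Wootters formula.

```python
# Entanglement sudden death under amplitude noise: the state a|00> + b|11>
# under independent amplitude damping of strength p on each qubit loses all
# entanglement at a finite p < 1 when |b| > |a|. Illustrative state and scan.

import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def amplitude_damp_two_qubits(rho, p):
    """Apply local amplitude-damping Kraus operators to each qubit."""
    e0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]])
    e1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
    out = np.zeros_like(rho)
    for ka in (e0, e1):
        for kb in (e0, e1):
            k = np.kron(ka, kb)
            out += k @ rho @ k.conj().T
    return out

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)          # |beta| > |alpha|
psi = np.array([alpha, 0, 0, beta], dtype=complex)    # alpha|00> + beta|11>
rho0 = np.outer(psi, psi.conj())

for p in np.linspace(0.0, 1.0, 11):
    c = concurrence(amplitude_damp_two_qubits(rho0, p))
    print(f"p = {p:.1f}  concurrence = {c:.4f}")
# the concurrence hits zero near p ~ |alpha/beta| ~ 0.71, i.e. before p = 1
```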
