Similar Literature
20 similar documents found (search time: 46 ms)
1.
This article reconsiders the concept of physical reality in quantum theory and the concept of quantum measurement, following Bohr, whose analysis of quantum measurement led him to his concept of a (quantum) “phenomenon,” referring to “the observations obtained under the specified circumstances” in the interaction between quantum objects and measuring instruments. This situation makes the terms “observation” and “measurement,” as conventionally understood, inapplicable. These terms are remnants of classical physics, or of still earlier history from which classical physics inherited them. As defined here, a quantum measurement does not measure any preexisting property of the ultimate constitution of the reality responsible for quantum phenomena. An act of measurement establishes a quantum phenomenon through an interaction between the instrument and the quantum object or, in the present view, the ultimate constitution of the reality responsible for quantum phenomena and, at the time of measurement, also quantum objects. In the view advanced in this article, in contrast to that of Bohr, quantum objects, such as electrons or photons, are assumed to exist only at the time of measurement and not independently, a view that redefines the concept of quantum object as well. This redefinition becomes especially important in high-energy quantum regimes and quantum field theory, and allows this article to define a new concept of quantum field. The article also considers, now following Bohr, quantum measurement as the entanglement between quantum objects and measuring instruments. The argument of the article is grounded in the concept of “reality without realism” (RWR), as underlying quantum measurement thus understood, and in the view of quantum theory, the RWR view, defined by this concept.
The RWR view places a stratum of physical reality thus designated, here the reality ultimately responsible for quantum phenomena, beyond representation or knowledge, or even conception, and defines the corresponding set of interpretations of quantum mechanics or quantum field theory, such as the one assumed in this article, in which, again, not only quantum phenomena but also quantum objects are idealizations defined by measurement. As such, the article also offers a broadly conceived response to J. Bell’s argument “against ‘measurement’”.

2.
This study aimed to investigate consumers’ visual image evaluation of wrist wearables based on Kansei engineering. A total of 8 representative samples were screened from 99 samples using the multidimensional scaling (MDS) method. Five groups of adjectives were identified to allow participants to express their visual impressions of wrist wearable devices through a questionnaire survey and factor analysis. The evaluation of the eight samples against the five groups of adjectives was analyzed using triangle fuzzy theory. The results showed relatively different evaluations of the eight samples in the groups “fashionable and individual” and “rational and decent”, but little distinction in the groups “practical and durable”, “modern and smart” and “convenient and multiple”. Furthermore, wrist wearables with a shape close to a traditional watch dial (round), with a bezel and mechanical buttons (moderate complexity), and with asymmetric forms received higher evaluations. The acceptance of square- and elliptical-shaped wrist wearables was relatively low. Among the square- and rectangular-shaped wrist wearables, the greater the curvature of the chamfer, the higher the acceptance. A clear contrast between the color of the screen and that of the casing was well accepted. The influence of display size on consumer evaluations was relatively small. Similar results were obtained in the evaluation of preferences and willingness to purchase. The results of this study objectively and effectively reflect consumers’ evaluation of, and potential demand for, the visual images of wrist wearables and provide a reference for designers and industry professionals.
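The MDS screening step can be illustrated with a small sketch. This is not the authors' pipeline (they worked from 99 product samples and perceptual similarity data); it is a minimal classical-MDS (principal coordinates) implementation in NumPy, with a toy distance matrix standing in for the dissimilarity data.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS (principal coordinates) on a symmetric dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    L = np.sqrt(np.maximum(w[idx], 0.0))     # clip tiny negative eigenvalues
    return V[:, idx] * L                     # n x k coordinates

# toy demo: 4 points on a line; a 1-D embedding recovers the distances exactly
X = np.array([[0.0], [1.0], [2.0], [4.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
print(np.allclose(np.abs(Y - Y.T), D))
```

Representative samples could then be picked by clustering the embedded coordinates, which is one common way to reduce 99 candidates to a handful.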

3.
4.
Integrated information has recently been suggested as a possible measure identifying a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (a discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed using the empirical “whole minus sum” version of integrated information, in comparison with the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature; this motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to bear considerable similarity: they converge asymptotically as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims to create a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model may also be of interest as a new discrete-state test bench for different formulations of integrated information.
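The sign behavior of the "whole minus sum" measure can be illustrated on a toy system. The sketch below is not the paper's spiking–bursting model; it computes the whole-minus-sum quantity (the "net synergy") for a hypothetical XOR mechanism, where the whole carries strictly more information about the output than the two parts do separately.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A;B) in bits from a dict {(a, b): p} of joint probabilities."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# joint over ((x1, x2), y) with y = x1 XOR x2 and uniform inputs
joint_whole = {((x1, x2), x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
joint_x1, joint_x2 = defaultdict(float), defaultdict(float)
for ((x1, x2), y), p in joint_whole.items():
    joint_x1[(x1, y)] += p   # marginalize out x2
    joint_x2[(x2, y)] += p   # marginalize out x1

whole = mutual_information(joint_whole)
parts = mutual_information(dict(joint_x1)) + mutual_information(dict(joint_x2))
net_synergy = whole - parts
print(net_synergy)  # positive: whole-minus-sum is 1 bit for XOR
```

Negative values arise instead when the parts carry redundant information about the output, which is the sign transition the abstract refers to.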

5.
Objective: To examine changes in postural alignment and kyphosis-correlated factors after 6 months of back extensor strengthening exercise in a group of community-dwelling older adults aged ≥65 years. Methods: We quasi-randomized 29 subjects into an intervention group treated with a back extensor strengthening program and a control group treated with a full-body exercise program. These groups completed 20-30 minutes of exercise directed by a physical therapist one or more times per week and were instructed to exercise at home as well. The participants were assessed prior to and after the intervention using the following criteria: postural alignment in “usual” and “best” posture, physical function, physical performance, self-efficacy, and quality of life. Differences across the two factors (group and period) were compared for each measurement variable. Results: Subjects who adequately completed the exercises were analyzed. A reduced knee flexion angle was noted in the “best” posture of both groups, as were improved physical function and performance, with the exception of one-leg standing time. On verifying the effect sizes in the post-hoc analysis, the body parts showing changes in postural alignment after the intervention differed between groups. Conclusions: Back extensor strengthening exercises improved physical function and performance but did not improve spinal alignment, and the changes due to this intervention were not significantly different from those observed in the full-body exercise group. However, the post-hoc analysis revealed different effect sizes for posture change, possibly indicating that the two groups experienced different changes in postural alignment.

6.
To estimate the amount of evapotranspiration in a river basin, the “short period water balance method” was formulated. Then, by introducing the “complementary relationship method,” the amount of evapotranspiration was estimated seasonally, and with reasonable accuracy, for both small and large areas. Moreover, to accurately estimate river discharge in the low-water season, the “weighted statistical unit hydrograph method” was proposed and a procedure for the calculation of the unit hydrograph was developed. A new model based on the “equivalent roughness method” was also successfully developed for estimating flood runoff from newly reclaimed farmlands. Based on the results of this research, a “composite reservoir model” was formulated to analyze the repeated use of irrigation water over large spatial areas. The application of this model to a number of watershed areas provided useful information on the realities of water demand-supply systems in watersheds predominantly dedicated to paddy fields in Japan.

7.
In this paper, a new parametric compound G family of continuous probability distributions, the Poisson generalized exponential G (PGEG) family, is derived and studied. Relevant mathematical properties are derived. Some new bivariate G families are presented using the theorems of the “Farlie-Gumbel-Morgenstern copula”, “the modified Farlie-Gumbel-Morgenstern copula”, “the Clayton copula”, and “Renyi’s entropy copula”. Many special members are derived, and special attention is devoted to the exponential and the one-parameter Pareto type II models. The maximum likelihood method is used to estimate the model parameters. A graphical simulation is performed to assess the finite-sample behavior of the maximum likelihood estimators. Two real-life data applications illustrate the importance of the new family.
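The maximum likelihood step can be sketched for the simplest member mentioned, the exponential model. The simulation below is an illustrative stand-in for the paper's graphical finite-sample study, not its actual design; the true rate, sample sizes, and replication count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0

def mle_rate(sample):
    """MLE of the exponential rate: the log-likelihood n*log(rate) - rate*sum(x)
    is maximized at rate = n / sum(x), i.e. 1 / sample mean."""
    return 1.0 / np.mean(sample)

# finite-sample behavior: bias and spread of the estimator shrink as n grows
for n in (10, 100, 1000):
    estimates = [mle_rate(rng.exponential(1 / true_rate, size=n))
                 for _ in range(2000)]
    print(n, round(np.mean(estimates), 3), round(np.std(estimates), 3))
```

Plotting the mean and standard deviation of the estimates against n gives exactly the kind of graphical finite-sample assessment the abstract describes.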

8.
Finding the critical factors and possible “Newton’s laws” of financial markets has long been an important issue. However, with the development of information and communication technologies, financial models have become more realistic but also more complex, running against the maxim that “the greatest truths are the simplest.” This paper therefore presents an evolutionary model independent of micro features and attempts to discover the most critical factor. In the model, information is the only critical factor, and the stock price emerges from collective behavior. The statistical properties of the model closely resemble those of the real market. The model also explains the correlations of stocks within an industry, providing a new way to study critical factors and core structures in financial markets.

9.
During the first half of the last century, the mechanism of heterogeneous catalysis was studied with the catalyst kept in a black box, and was discussed largely by conjecture on the basis of information obtained outside that black box. The author initiated a method to study directly the behavior of the working catalyst surface, looking inside the black box by measuring adsorption on the working catalyst surface. In the same period, many of the recent physical tools for studying solid surfaces were developed and applied to the in situ dynamics of working catalyst surfaces. However, even if some chemisorbed species are observed on the working catalyst surface, it does not follow that they are reaction intermediates. The author proposed a new dynamic approach, the “isotope jump method,” to identify the dynamic behavior of each chemisorbed species under reaction conditions: labeled species are replaced during the course of the reaction so that the behavior of each adsorbed species can be followed. Using such techniques, we could not only identify the reaction path and the rate-determining step but also discover new phenomena called “adsorption-assisted processes.” It is accordingly hoped that, by means of newly developed nanotechnologies for studying the behavior of single molecules on solid surfaces, the understanding of heterogeneous catalysis will make remarkable advances on the basis of these in situ dynamic methods. In this review article, emphasis is placed on the fundamental methods of the dynamic approaches.

10.
In this paper, we focus on critical periods in the economy, characterized by unusual and large fluctuations in macroeconomic indicators such as those measuring inflation and unemployment. We analyze U.S. data covering the 70 years from 1948 to 2018. To capture the essence of these fluctuations, we concentrate on the non-Gaussianity of their distributions and investigate how the non-Gaussianity of these variables affects their coupling structure. We distinguish “regular” from “rare” events when calculating the correlation coefficient, emphasizing that the two cases might lead to different responses of the economy. Through the “multifractal random walk” model, one can see that the non-Gaussianity depends on time scales. The non-Gaussianity of unemployment is noticeable only for periods shorter than one year; for longer periods, the fluctuation distribution tends toward Gaussian behavior. In contrast, the non-Gaussianity of inflation fluctuations persists across all time scales. We observe through the “bivariate multifractal random walk” that, despite the inflation features, the non-Gaussianity of the coupled structure is finite for scales of less than one year, drops for periods longer than one year, and becomes small for scales greater than two years. This means that the footprint of monetary policies intentionally influencing the coupled inflation and unemployment variables is observed only for time horizons shorter than two years. Finally, to better understand the effect of rare events, we calculate high moments of the variables’ increments for various q orders and various time scales. The results show that coupling at high moments sharply increases during crises.
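The last step, computing high moments of increments across q orders and time scales, can be sketched as follows. The data here are a synthetic Gaussian random walk, not the U.S. inflation or unemployment series; for such data the moment ratio M4/M2² stays near the Gaussian value of 3 at every scale, whereas heavy-tailed data would exceed it at short scales.

```python
import numpy as np

def increment_moments(x, scales, q_orders):
    """Empirical q-th absolute moments of increments x(t+s) - x(t) per scale s."""
    return {s: {q: np.mean(np.abs(x[s:] - x[:-s]) ** q) for q in q_orders}
            for s in scales}

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200_000))   # synthetic Gaussian walk stand-in
moments = increment_moments(x, scales=(1, 4, 16), q_orders=(2, 4))

# kurtosis-like ratio M4 / M2^2: near 3 at all scales for Gaussian increments,
# well above 3 at short scales for non-Gaussian (heavy-tailed) fluctuations
for s, m in moments.items():
    print(s, round(m[4] / m[2] ** 2, 2))
```

Tracking how this ratio varies with the scale s is one simple way to see scale-dependent non-Gaussianity of the kind the abstract describes.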

11.
12.
The task of reconstructing a system’s state from measurement results, known as the Pauli problem, usually requires repetition of two successive steps: preparation in an initial state to be determined, followed by an accurate measurement of one of several chosen operators to provide the necessary “Pauli data”. We consider a similar yet more general problem of recovering Feynman’s transition (path) amplitudes from the results of at least three consecutive measurements. The three-step histories of a pre- and post-selected quantum system are subjected to a type of interference not available to their two-step counterparts. We show that this interference can be exploited and that, if the intermediate measurement is “fuzzy”, the path amplitudes can be successfully recovered. The simplest case of a two-level system is analysed in detail. The “weak measurement” limit and the usefulness of the path amplitudes are also discussed.

13.
Entropy indicates irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system, through subsampling or coarse-graining, has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined via the “logarithmic likelihood” that a small section of the data (within a window of length m) that “matches” other sections will still “match” them when the window length increases by one. A “match” is defined by a threshold of r times the standard deviation of the entire time series. A problem with the current MSE algorithm is that SampEn calculations at different scales use the same matching threshold, defined from the original time series, even though the data standard deviation actually changes with the subsampling scale. Using a fixed threshold therefore automatically introduces a systematic bias into the results. The purpose of this paper is to present this systematic bias mathematically and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculations.
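The bias described above can be reproduced in a few lines. The sketch below is a straightforward SampEn implementation, not the authors' code: coarse-graining lowers the standard deviation of the series, so a threshold fixed from the original series is effectively looser at higher scales and reports a lower entropy than a threshold rescaled to the coarse-grained SD. The choices m = 2 and r = 0.15·SD are conventional, not taken from the paper.

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy: -log of the ratio of (m+1)-length to m-length
    template matches, with absolute tolerance r (same units as x)."""
    x = np.asarray(x, dtype=float)
    n_templates = len(x) - m  # same template count for both window lengths

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n_templates)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - n_templates) / 2  # exclude self-matches

    return -np.log(matches(m + 1) / matches(m))

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard MSE coarse-graining)."""
    n = len(x) // scale
    return np.asarray(x)[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
x = rng.normal(size=3000)
y = coarse_grain(x, scale=4)               # SD drops by roughly a factor of 2
e_fixed = sampen(y, r=0.15 * np.std(x))    # conventional MSE: original-SD threshold
e_scaled = sampen(y, r=0.15 * np.std(y))   # corrected: threshold tracks the new SD
print(e_fixed, e_scaled)
```

Since SampEn decreases monotonically as r grows, the fixed (larger) threshold systematically underestimates the entropy of the coarse-grained series, which is the bias the paper analyzes.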

14.
The present study investigates the similarity problem associated with the onset of the Mach reflection of Zel’dovich–von Neumann–Döring (ZND) detonations in the near field. The results reveal that the self-similarity in the frozen-limit regime is strictly valid only within a small scale, i.e., of the order of the induction length. The Mach reflection becomes non-self-similar during the transition of the Mach stem from “frozen” to “reactive” through coupling with the reaction zone. The triple-point trajectory first rises above the self-similar result, owing to compressive waves generated by the “hot spot”, and then decays after the establishment of the reactive Mach stem. It is also found that, once this restriction is removed, the frozen limit can be extended to a much larger distance than expected. The obtained results elucidate the physical origin of the onset of Mach reflection with chemical reactions, which has previously been observed in both experiments and numerical simulations.

15.
16.
Objectives: To reveal self-rated changes in health status during stay-at-home orders among older adults and to verify whether a decrease in the frequency of going outdoors during these orders was related to those self-rated changes. Method: A self-completed questionnaire for older adults was provided at two day-service facilities and a nursing station. We operationally defined health status across 4 domains (motor function, oral and swallowing function, depression, and social networks) and designed the questionnaire to determine self-rated changes in health status using factor analysis. After the factor analysis, regression analyses were conducted; the dependent variables were the factor scores (self-rated changes in health status), and the independent variable was the decrease in frequency of going outdoors. Results: Approximately 80% of participants answered that their health status had “worsened” in motor function (75.0%-87.2%). Moreover, more than 70% answered “worsened” for “Feeling energy” and “Getting together and speaking with friends” (72.3% and 75.7%, respectively). Regression analyses demonstrated that, after adjusting for covariates, the decrease in frequency of going outdoors was related to self-rated changes in motor function and friend network. Conclusion: During stay-at-home orders, older adults felt deterioration in their motor function, in feeling energy, and in their friend network; in particular, those who had reduced their frequency of going outdoors felt greater deterioration in their motor function and friend network.

17.
We consider state changes in quantum theory due to “conditional action” and relate these to the discussion of entropy decrease due to interventions of “intelligent beings” and the principles of Szilard and Landauer/Bennett. The mathematical theory of conditional actions is a special case of the theory of “instruments”, which describes changes of state due to general measurements and will therefore be briefly outlined in the present paper. As a detailed example, we consider the imperfect erasure of a qubit that can also be viewed as a conditional action and will be realized by the coupling of a spin to another small spin system in its ground state.

18.
Causal Geometry     
Information geometry has offered a way to formally study the efficacy of scientific models by quantifying the impact of model parameters on predicted effects. However, there has been little formal investigation of causation in this framework, despite causal models being a fundamental part of science and explanation. Here, we introduce causal geometry, which formalizes not only how outcomes are impacted by parameters, but also how the parameters of a model can be intervened upon. To this end, we introduce a geometric version of “effective information”, a known measure of the informativeness of a causal relationship, and show that it is given by the matching between the space of effects and the space of interventions, in the form of their geometric congruence. Given a fixed intervention capability, an effective causal model is therefore one that is well matched to those interventions. This is a consequence of “causal emergence”, wherein macroscopic causal relationships may carry more information than “fundamental” microscopic ones. We thus argue that a coarse-grained model may, paradoxically, be more informative than the microscopic one, especially when it better matches the scale of accessible interventions, as we illustrate on toy examples.
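A common non-geometric formulation of effective information, on which the geometric version builds, is straightforward to compute for a discrete mechanism: intervene with a maximum-entropy (uniform) distribution over input states and measure the resulting cause–effect mutual information. The sketch below uses this standard definition, not the paper's geometric congruence construction; the two example mechanisms are invented for illustration.

```python
import numpy as np

def effective_information(tpm):
    """EI of a causal mechanism given as a row-stochastic transition matrix:
    mutual information between cause and effect under a uniform intervention
    distribution over input states, in bits."""
    p_effect = tpm.mean(axis=0)  # effect distribution under do(uniform)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy
    return h(p_effect) - np.mean([h(row) for row in tpm])

# deterministic one-to-one mechanism over 4 states: EI = log2(4) = 2 bits
perm = np.eye(4)[[1, 2, 3, 0]]
# fully noisy mechanism, every state leads everywhere equally: EI = 0 bits
noise = np.full((4, 4), 0.25)
print(effective_information(perm), effective_information(noise))
```

Causal emergence then corresponds to a coarse-grained transition matrix having higher EI than the microscopic one it summarizes.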

19.
The lack of adequate indicators in digital economy research may leave governments short of data support for decision making. To address this problem, we first establish a digital economy indicator evaluation system by dividing the digital economy into four types, “basic”, “technology”, “integration” and “service”, and selecting 5 indicators for each type. On this basis, the weight of each indicator is calculated with an improved entropy method to identify deficiencies in the development of particular digital economy fields. Drawing on the weighting idea of the Analytic Hierarchy Process, the improved entropy method first compares the difference coefficients of indicators in pairs and maps the comparison results to the scales 1–9; a judgment matrix is then constructed from the information entropy, which mitigates the problem of excessively large differences among indicator weights in the traditional entropy method. The results indicate that the development of the digital economy in Guangdong Province was relatively balanced from 2015 to 2018 and is set to improve, while the development of rural e-commerce in Guangdong Province remains relatively backward, with an obvious digital gap between urban and rural areas. Next, we use principal component analysis and factor analysis to extract two new variables that replace the 20 selected indicators, retaining the original information to the greatest extent and providing convenience for further research. Finally, we provide constructive suggestions on the digital economy of Guangdong Province for 2015 to 2018.
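The traditional entropy weight method that the paper improves upon can be sketched briefly; the improved variant (mapping pairwise difference-coefficient comparisons to 1–9 scales and building an AHP-style judgment matrix) is not reproduced here, and the indicator data below are invented for illustration.

```python
import numpy as np

def entropy_weights(X):
    """Traditional entropy weight method: indicators that vary more across
    samples (lower information entropy) receive larger weights.
    X: samples x indicators, all values positive."""
    P = X / X.sum(axis=0)                            # normalize each column
    n = X.shape[0]
    E = -np.sum(P * np.log(P), axis=0) / np.log(n)   # entropy per indicator, in [0, 1]
    d = 1.0 - E                                      # difference coefficients
    return d / d.sum()                               # weights sum to 1

# toy data: 4 years x 3 indicators; the third varies most, so it gets the top weight
X = np.array([[1.0, 2.0, 1.0],
              [1.1, 2.0, 3.0],
              [0.9, 2.1, 6.0],
              [1.0, 1.9, 9.0]])
w = entropy_weights(X)
print(w.round(3))
```

The weakness the paper targets is visible here: a single highly variable indicator can dominate the weight vector, which is what the 1–9 rescaling of difference coefficients is meant to temper.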

20.
This paper assesses two different theories for explaining consciousness, a phenomenon that is widely considered amenable to scientific investigation despite its puzzling subjective aspects. I focus on Integrated Information Theory (IIT), which identifies consciousness with integrated information (as ϕMax) and holds that even simple systems with interacting parts possess some consciousness. First, I evaluate IIT on its own merits. Second, I compare it to a more traditionally derived theory called Neurobiological Naturalism (NN), which says consciousness is an evolved, emergent feature of complex brains. Comparing these theories is informative because it reveals the strengths and weaknesses of each, thereby suggesting better ways to study consciousness in the future. IIT’s strengths are the reasonable axioms at its core; its strong logic and mathematical formalism; its creative “experience-first” approach to studying consciousness; the way it avoids the mind-body (“hard”) problem; its consistency with evolutionary theory; and its many scientifically testable predictions. The potential weakness of IIT is that it contains stretches of logic-based reasoning that were not checked against hard evidence while the theory was being constructed, whereas scientific arguments require such supporting evidence to keep the reasoning on course. This is less of a concern for the other theory, NN, because it incorporated evidence much earlier in its construction. NN is a less mature theory than IIT, less formalized and quantitative, and less well tested. However, it has identified its own neural correlates of consciousness (NCC) and offers a roadmap through which these NCCs may answer the questions of consciousness using the hypothesize-test-hypothesize-test steps of the scientific method.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号