In this essay we explore analogies between macroscopic patterns, which result from a sequence of phase transitions/instabilities starting from a homogeneous state, and similar phenomena in cosmology, where a sequence of phase transitions in the early universe is believed to have separated the fundamental forces from each other, and also shaped the structure and distribution of matter in the universe. We discuss three distinct aspects of this analogy: (i) Defects and topological charges in macroscopic patterns are analogous to spins and charges of quarks and leptons; (ii) Defects in generic 3+1 stripe patterns carry an energy density that accounts for phenomena that are currently attributed to dark matter; (iii) Space-time patterns of interacting nonlinear waves display behaviors reminiscent of quantum phenomena including inflation, entanglement and dark energy.
Motivated by issues in machine learning and statistical pattern classification, we investigate a class cover problem (CCP) with an associated family of directed graphs—class cover catch digraphs (CCCDs). CCCDs are a special case of catch digraphs. Solving the underlying CCP is equivalent to finding a smallest cardinality dominating set for the associated CCCD, which in turn provides regularization for statistical pattern classification. Some relevant properties of CCCDs are studied and a characterization of a family of CCCDs is given.
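Finding a minimum-cardinality dominating set is NP-hard in general, so a greedy heuristic is the usual illustration of the construction. The sketch below is our own toy (the function name and radius convention are assumptions, not the paper's formulation): each target-class point gets a covering ball reaching toward its nearest non-target point, and the greedy loop repeatedly picks the point that "catches" the most still-uncovered targets.

```python
import numpy as np

def greedy_cccd_cover(targets, others):
    """Greedy approximation of a small dominating set for a CCCD.

    Each target point i gets a ball whose radius is the distance to the
    nearest non-target point; i 'catches' target j if j lies strictly
    inside i's ball.  (Illustrative sketch only; radius and tie-breaking
    conventions vary in the CCCD literature.)
    """
    targets = np.asarray(targets, float)
    others = np.asarray(others, float)
    # radius_i = distance from target i to its nearest non-target point
    radii = np.min(np.linalg.norm(targets[:, None] - others[None, :], axis=2), axis=1)
    # catch[i, j] = True if target j falls inside target i's ball
    dists = np.linalg.norm(targets[:, None] - targets[None, :], axis=2)
    catch = dists < radii[:, None]
    uncovered = np.ones(len(targets), bool)
    cover = []
    while uncovered.any():
        i = int(np.argmax((catch & uncovered).sum(axis=1)))  # catches most uncovered
        cover.append(i)
        uncovered &= ~catch[i]
    return cover
```

The greedy choice gives the classic logarithmic approximation guarantee for dominating sets; the dominating points then serve as the covering-ball prototypes used for classification.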
We have observed a remarkable two-armed spiral in the collapse process of a floating monolayer at the air-water interface by phase contrast microscopy. This demonstrates that the floating monolayer, as a form of soft condensed matter, reorganizes itself through a kind of macroscopic or collective behavior of molecules as it collapses. This pattern formation is caused by the breakdown of a critical dynamical balance between the deformation of the solid domain and the applied surface pressure. The fragility as well as the flexibility of the floating monolayer can be associated with the observed pattern growth. Interesting, periodically arranged collections of molecules are also observed in numerous collapsed regions.
Received: 8 July 1997 / Accepted: 4 November 1997
In this paper, we apply a three-stage DEA model to the Spanish Professional Football League, separating the teams' economic behaviour into three components: operating efficiency (of the offence and the defence), athletic or operating effectiveness, and social effectiveness. The results show that the technical inefficiency of the defence is greater than that of the offence, both being caused by aspects linked to poor management of players' abilities and by the football team's size. Teams showed a favourable evolution of their offensive and defensive efficiency during the 2004/2005 season and, to a lesser extent, in the season before. The point system assigned by the professional football league regulations evaluates the teams' athletic effectiveness, but we found that the teams with the most experience perform athletically in a more effective manner. Their social effectiveness is strongly related to the level of play itself and to factors linked to their PFL ranking: participation in international competitions for the major football teams, or the struggle of minor football teams to stay in the first division.
Stochastic models with varying degrees of complexity are increasingly widespread in the oceanic and atmospheric sciences. One application is data assimilation, i.e., the combination of model output with observations to form the best picture of the system under study. For any given quantity to be estimated, the relative weights of the model and the data will be adjusted according to estimated model and data error statistics, so implementation of any data assimilation scheme will require some assumption about errors, which are considered to be random. For dynamical models, some assumption about the evolution of errors will be needed. Stochastic models are also applied in studies of predictability.
The formal theory of stochastic processes was well developed in the last half of the twentieth century. One consequence of this theory is that methods of simulation of deterministic processes cannot be applied to random processes without some modification. In some cases the rules of ordinary calculus must be modified.
The formal theory was developed in a mathematical formalism that may be unfamiliar to many oceanic and atmospheric scientists. The purpose of this article is to provide an informal introduction to the relevant theory, and to point out those situations in which that theory must be applied in order to model random processes correctly.
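The modification of ordinary calculus mentioned above can be seen numerically in a toy example of our own (not from the article): simulating geometric Brownian motion dX = μX dt + σX dW with the Euler-Maruyama scheme shows that the sample mean of ln X_T drifts at the Itô rate μ − σ²/2, not at the rate μ that a naive application of the deterministic chain rule would predict.

```python
import numpy as np

def euler_maruyama_gbm(mu, sigma, x0, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of geometric Brownian motion
    dX = mu*X dt + sigma*X dW.

    Toy illustration: the sample mean of ln(X_T) drifts at
    mu - sigma**2/2 (the Ito correction), which ordinary calculus
    applied to the log would miss.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        x = x + mu * x * dt + sigma * x * dw        # explicit EM step
    return x
```

With μ = 0.1 and σ = 0.4, the average of ln X_1 over many paths comes out near 0.02, not 0.1, which is exactly the kind of correction the formal theory makes systematic.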
An approach to dealing with missing data, both during the design and during normal operation of a neuro-fuzzy classifier, is presented in this paper. Missing values are processed within a general fuzzy min–max neural network architecture utilising hyperbox fuzzy sets as input-data cluster prototypes. An emphasis is put on ways of quantifying the uncertainty that missing data might have caused. This takes the form of a classification procedure whose primary objective is the reduction of the number of viable alternatives rather than an attempt to produce one winning class without supporting evidence. If required, ways of selecting the most probable class among the viable alternatives found during the primary classification step, based on data frequency information, are also proposed. The reliability of the classification and the completeness of the information are communicated by producing upper and lower classification membership values, similar in essence to the plausibility and belief measures found in the theory of evidence, or the possibility and necessity values found in fuzzy set theory. Similarities and differences between the proposed method and various fuzzy, neuro-fuzzy and probabilistic algorithms are also discussed. A number of simulation results for well-known data sets are provided in order to illustrate the properties and performance of the proposed approach.
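As a rough illustration of the hyperbox mechanism (a simplified sketch with our own function names and a Simpson-style ramp, not the paper's exact formulation), a missing feature can be treated as fully compatible with every hyperbox in that dimension; this yields the optimistic (upper) membership value, while a pessimistic lower bound would instead take the worst case per missing dimension.

```python
import numpy as np

def hyperbox_membership(x, V, W, gamma=1.0):
    """Fuzzy min-max membership of input x in hyperboxes [V, W].

    V, W: (n_boxes, n_dims) lower and upper hyperbox corners.
    NaN entries in x mark missing features; missing dimensions are
    treated as fully compatible, so the result is an *upper*
    (optimistic) membership bound.  Illustrative sketch only.
    """
    x = np.asarray(x, float)
    present = ~np.isnan(x)

    def ramp(z):
        # Simpson-style linear ramp: 1 inside, decaying with slope gamma
        return np.clip(1.0 - gamma * np.maximum(z, 0.0), 0.0, 1.0)

    memb = np.ones((V.shape[0], x.size))       # missing dims stay at 1
    memb[:, present] = np.minimum(
        ramp(x[present] - W[:, present]),       # penalty above upper corner
        ramp(V[:, present] - x[present]),       # penalty below lower corner
    )
    return memb.min(axis=1)                     # min over dimensions
```

Running the same input twice, once with a feature blanked out to NaN, shows how the upper membership can only grow as information is removed, which is the uncertainty-quantification idea the abstract describes.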
This paper re-assesses three independently developed approaches aimed at solving the problem of zero weights or non-zero slacks in Data Envelopment Analysis (DEA): weights-restricted, non-radial and extended-facet DEA models. Weights-restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models which avoid non-zero slacks in the input-output constraints. Finally, extended-facet DEA models recognize that only projections on facets of full dimension correspond to well-defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate that these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has computational advantages, because extended-facet models normally rely on mixed integer programming, which is computationally demanding.
In this paper, an adaptive FE analysis is presented based on error estimation, adaptive mesh refinement and data transfer for enriched plasticity continua in the modelling of strain localization. As classical continuum models suffer from pathological mesh dependence in strain-softening problems, the governing equations are regularized by adding rotational degrees of freedom to the conventional degrees of freedom. An adaptive strategy using element elongation is applied to compute the distribution of required element sizes from the estimated error distribution. Once a new mesh is generated, state variables and history-dependent variables are mapped from the old finite element mesh to the new one. In order to transfer the history-dependent variables from the old mesh to the new one, the values of internal variables available at the Gauss points are first projected to the nodes of the old mesh, then the old nodal values are transferred to the nodes of the new mesh and, finally, the values at the Gauss points of the new elements are determined from the nodal values of the new mesh. Finally, the efficiency of the proposed model and computational algorithms is demonstrated by several numerical examples.
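The three-step transfer of history variables can be sketched in one dimension (an illustrative toy of our own with linear elements, one-point quadrature and simple nodal averaging, not the paper's actual shape-function projection):

```python
import numpy as np

def transfer_history(old_nodes, old_gp_vals, new_nodes):
    """Three-step transfer of a history variable between 1-D meshes.

    1. extrapolate Gauss-point values to the nodes of the old mesh
       (here: interior nodal value = average of adjacent element values),
    2. interpolate the old nodal field at the new node positions,
    3. evaluate at the new elements' Gauss points (element midpoints
       for one-point quadrature).
    Simplified sketch; production codes use a proper L2 / shape-function
    projection in step 1.
    """
    # step 1: Gauss points -> old nodes (one Gauss point per element)
    nodal = np.empty(len(old_nodes))
    nodal[0] = old_gp_vals[0]
    nodal[-1] = old_gp_vals[-1]
    nodal[1:-1] = 0.5 * (old_gp_vals[:-1] + old_gp_vals[1:])
    # step 2: old nodes -> new nodes (piecewise-linear interpolation)
    new_nodal = np.interp(new_nodes, old_nodes, nodal)
    # step 3: new nodes -> new Gauss points (midpoints of new elements)
    return 0.5 * (new_nodal[:-1] + new_nodal[1:])
```

Even this toy shows the key property of the scheme: constant fields transfer exactly, while the two interpolation stages introduce a smoothing error that the error estimator must account for after remeshing.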