Stochastic models with varying degrees of complexity are increasingly widespread in the oceanic and atmospheric sciences. One application is data assimilation, i.e., the combination of model output with observations to form the best picture of the system under study. For any given quantity to be estimated, the relative weights of the model and the data will be adjusted according to estimated model and data error statistics, so implementation of any data assimilation scheme will require some assumption about errors, which are considered to be random. For dynamical models, some assumption about the evolution of errors will be needed. Stochastic models are also applied in studies of predictability.
The formal theory of stochastic processes was well developed in the last half of the twentieth century. One consequence of this theory is that methods of simulation of deterministic processes cannot be applied to random processes without some modification. In some cases the rules of ordinary calculus must be modified.
The formal theory was developed in a mathematical formalism that may be unfamiliar to many oceanic and atmospheric scientists. The purpose of this article is to provide an informal introduction to the relevant theory, and to point out the situations in which that theory must be applied in order to model random processes correctly.
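As one concrete example of the modification alluded to above: under Ito's rules the chain rule of ordinary calculus acquires an extra term, so for geometric Brownian motion the mean of log X grows at rate mu - sigma^2/2, not mu. The following minimal Euler-Maruyama sketch illustrates this; all parameter values are made up for illustration and the key numerical point is that a Wiener increment over a step dt has standard deviation sqrt(dt), not dt.

```python
import math
import random

def mean_log_gbm(mu, sigma, x0, t_end, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of dX = mu*X dt + sigma*X dW,
    returning the sample mean of log X(t_end) over many paths."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
            x += mu * x * dt + sigma * x * dw
        total += math.log(x)
    return total / n_paths

mu, sigma, t_end = 0.05, 0.3, 1.0
mean_log = mean_log_gbm(mu, sigma, 1.0, t_end, 1000, 2000)
ito_drift = (mu - 0.5 * sigma ** 2) * t_end   # Ito calculus prediction
naive_drift = mu * t_end                      # ordinary calculus (wrong)
```

The simulated mean of log X agrees with the Ito prediction and not with the naive one, which is exactly the kind of discrepancy a deterministic simulation recipe would miss.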
Little Higgs models are an interesting extension of the Standard Model at the TeV scale. They provide a simple and attractive mechanism of electroweak symmetry breaking. We review one of the simplest models of this class, the Littlest Higgs model, and its extension with T parity. The model with T parity satisfies precision electroweak constraints without fine-tuning, contains an attractive dark matter candidate, and leads to interesting phenomenology at the Large Hadron Collider (LHC).
Temperature effects on the deposition rate of silicon nitride films were characterized by building a neural network prediction model. The silicon nitride films were deposited using a plasma-enhanced chemical vapor deposition system, and process parameter effects were systematically characterized by a 2^(6-1) fractional factorial experiment. The process parameters were radio frequency power, pressure, temperature, and the SiH4, N2, and NH3 flow rates. The prediction performance of the generalized regression neural network was drastically improved by optimizing multi-valued training factors with a genetic algorithm. Several 3D plots were generated to investigate parameter effects at various temperatures, and the predicted variations were experimentally validated. The temperature effect on the deposition rate was a complex function of all parameters except the N2 flow rate. Larger decreases in the deposition rate with temperature were noticed only at lower SiH4 (or higher NH3) flow rates. Typical effects of the SiH4 or NH3 flow rate were observed only at higher or lower temperatures. A comparison with the refractive index model facilitated a selective choice of either SiH4 or NH3 for process optimization.
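A generalized regression neural network of the kind used above is, at its core, a Gaussian-kernel-weighted average of the training targets (Nadaraya-Watson regression). The sketch below shows that prediction rule; the toy data, the input encoding, and the fixed spread value are all made up, whereas the paper tunes such training factors with a genetic algorithm.

```python
import math

def grnn_predict(x_train, y_train, x_query, spread=0.5):
    """GRNN prediction: weight each training target by a Gaussian
    kernel of its input's distance to the query, then normalize.
    `spread` is the smoothing factor that training would tune."""
    weights = []
    for x in x_train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, x_query))
        weights.append(math.exp(-d2 / (2.0 * spread ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total

# made-up toy data: deposition rate vs. two normalized process settings
x_train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y_train = [10.0, 12.0, 9.0, 11.0]
pred = grnn_predict(x_train, y_train, (0.5, 0.5))
```

With a small spread the prediction collapses onto the nearest training point; with a large spread it approaches the global mean, which is why the spread is the critical quantity to optimize.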
This paper re-assesses three independently developed approaches aimed at solving the problem of zero weights or non-zero slacks in Data Envelopment Analysis (DEA): weight-restricted, non-radial, and extended-facet DEA models. Weight-restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (the DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models that avoid non-zero slacks in the input-output constraints. Finally, extended-facet DEA models recognize that only projections on full-dimensional facets correspond to well-defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate that these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has computational advantages, because extended-facet models normally rely on mixed integer programming, which is computationally demanding.
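In the simplest special case of one input and one output, the radial (CCR, constant returns to scale) DEA linear program collapses to comparing each unit's output/input ratio against the best observed ratio, so efficiency scores can be computed without an LP solver. The sketch below uses hypothetical data; the weight-restricted, non-radial, and extended-facet variants discussed in the abstract require a full linear (or mixed integer) programming formulation.

```python
def ccr_efficiency(inputs, outputs):
    """Single-input, single-output CCR efficiency: each decision-making
    unit's output/input ratio relative to the best ratio in the sample.
    A score of 1.0 means the unit lies on the efficient frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# three hypothetical decision-making units
scores = ccr_efficiency([2.0, 4.0, 5.0], [4.0, 6.0, 10.0])
```

Here the first and third units define the frontier (ratio 2.0) and the second is 75% efficient; zero-weight and slack issues only arise once there are multiple inputs or outputs.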
In this paper, an adaptive FE analysis based on error estimation, adaptive mesh refinement, and data transfer is presented for enriched plasticity continua in the modelling of strain localization. Because classical continuum models suffer from pathological mesh dependence under strain softening, the governing equations are regularized by adding rotational degrees of freedom to the conventional degrees of freedom. An adaptive strategy based on element elongation is applied to compute the distribution of required element sizes from the estimated error distribution. Once a new mesh is generated, state variables and history-dependent variables are mapped from the old finite element mesh to the new one. To transfer the history-dependent variables from the old to the new mesh, the values of internal variables available at the Gauss points are first projected to the nodes of the old mesh, the nodal values are then transferred to the nodes of the new mesh, and finally the values at the Gauss points of the new elements are determined from the nodal values of the new mesh. The efficiency of the proposed model and computational algorithms is demonstrated by several numerical examples.
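The three-step transfer chain described above (Gauss points to old nodes, old nodes to new nodes, new nodes to new Gauss points) can be sketched in one dimension with a single midpoint "Gauss point" per element, nodal projection by simple averaging, and linear interpolation. The setting, the averaging rule, and all names are illustrative, not the paper's implementation.

```python
def transfer_history(old_nodes, old_gauss_vals, new_nodes):
    """Map a history variable between 1D meshes in three steps:
    (1) project Gauss-point values to the old nodes,
    (2) interpolate nodal values onto the new nodes,
    (3) evaluate at the new Gauss points (element midpoints here)."""
    n = len(old_nodes)
    # step 1: Gauss points -> old nodes (average adjacent element values)
    old_nodal = []
    for i in range(n):
        vals = []
        if i > 0:
            vals.append(old_gauss_vals[i - 1])
        if i < n - 1:
            vals.append(old_gauss_vals[i])
        old_nodal.append(sum(vals) / len(vals))
    # step 2: old nodes -> new nodes (piecewise-linear interpolation)
    def interp(x):
        for j in range(n - 1):
            if old_nodes[j] <= x <= old_nodes[j + 1]:
                t = (x - old_nodes[j]) / (old_nodes[j + 1] - old_nodes[j])
                return (1 - t) * old_nodal[j] + t * old_nodal[j + 1]
        return old_nodal[0] if x < old_nodes[0] else old_nodal[-1]
    new_nodal = [interp(x) for x in new_nodes]
    # step 3: new nodes -> new Gauss points
    new_gauss = [0.5 * (new_nodal[j] + new_nodal[j + 1])
                 for j in range(len(new_nodes) - 1)]
    return new_nodal, new_gauss

# one 'Gauss point' per element at the midpoint, values from f(x) = x
new_nodal, new_gauss = transfer_history(
    [0.0, 1.0, 2.0], [0.5, 1.5], [0.0, 0.5, 1.0, 1.5, 2.0])
```

Each projection step smooths the field slightly, which is why such transfer operators must be applied sparingly during adaptive remeshing.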
The major goal of this paper is to examine the hypothesis that stock returns and return volatility are asymmetric, threshold-nonlinear functions of the change in trading volume. A minor goal is to examine whether return spillover effects also display such asymmetry. Employing a double-threshold GARCH model with trading volume as the threshold variable, we find strong evidence supporting this hypothesis in five international market return series. Asymmetric causality tests lend further support to our trading-volume threshold model and conclusions. Specifically, an increase in volume is positively associated, and a decrease in volume negatively associated, with the major price index in four of the five markets. The volatility of each series also displays an asymmetric reaction: four of the five markets display higher volatility following increases in trading volume. Using posterior odds ratios, the proposed threshold model is strongly favored in three of the five markets over a US-news double-threshold GARCH model and a symmetric GARCH model. We also find significant nonlinear asymmetric return spillover effects from the US market.
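The mechanics of a volume-threshold GARCH(1,1) can be sketched as follows: the (omega, alpha, beta) coefficients switch according to the sign of the change in trading volume, so volatility dynamics differ between volume-up and volume-down regimes. The coefficient values are hypothetical and the volume change is replaced by an i.i.d. normal stand-in, since the abstract reports no parameter estimates.

```python
import math
import random

def simulate_threshold_garch(n, params_up, params_dn, seed=1):
    """Simulate returns whose conditional variance h follows
    h_{t+1} = omega + alpha * r_t**2 + beta * h_t, with the
    (omega, alpha, beta) regime chosen by the sign of the
    volume change at each step."""
    rng = random.Random(seed)
    h = 1e-4                                   # conditional variance
    returns, variances = [], []
    for _ in range(n):
        dvol = rng.gauss(0.0, 1.0)             # proxy for volume change
        omega, alpha, beta = params_up if dvol > 0 else params_dn
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
        variances.append(h)
        h = omega + alpha * r ** 2 + beta * h
    return returns, variances

# hypothetical coefficients: higher volatility after volume increases
up = (2e-6, 0.15, 0.80)
dn = (1e-6, 0.05, 0.80)
rets, hs = simulate_threshold_garch(5000, up, dn)
```

Keeping alpha + beta below one in both regimes keeps the simulated variance process stationary, mirroring the usual GARCH stability condition.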
This paper presents a new scheduling method for manufacturing systems based on a Timed Petri Net model and a reactive fast graph search algorithm. Two typical problems are addressed: (1) minimization of the maximum completion time, and (2) minimization of the total tardiness. For problem (1), a new search algorithm is proposed that combines RTA* with a rule-based supervisor. For problem (2), the original Petri Net model is converted to its reverse model and the algorithm developed for problem (1) is applied, regarding the due date as the starting time in the reverse model. Numerical experiments demonstrate the usefulness of the algorithm.
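The reverse-model idea, treating the due date as the starting time in reversed time, can be illustrated on a single machine: scheduling jobs backward from their due dates, latest due date first, is the time-reversed counterpart of forward scheduling from release times. This is a hypothetical minimal sketch, not the paper's Petri-net search algorithm.

```python
def backward_schedule(durations, due_dates):
    """Schedule jobs on one machine backward from their due dates,
    latest due date first: each job finishes at its due date or when
    the machine next becomes busy, whichever is earlier.
    Returns {job index: (start, finish)}."""
    order = sorted(range(len(durations)),
                   key=lambda j: due_dates[j], reverse=True)
    schedule = {}
    free_until = float("inf")   # machine is idle before this time
    for j in order:
        finish = min(due_dates[j], free_until)
        start = finish - durations[j]
        schedule[j] = (start, finish)
        free_until = start
    return schedule

plan = backward_schedule([2, 3, 1], [4, 9, 5])
```

If every computed start time is non-negative, the backward plan meets all due dates with zero tardiness; negative start times signal jobs that cannot finish on time.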
We consider the problem of maintaining a dynamic ordered set of n integers in a universe U under the operations of insertion, deletion, and predecessor queries. The computation model is a unit-cost RAM with a word length of w bits, and the universe size is |U| = 2^w. We present a data structure that uses O(|U|/log|U| + n) space, performs all operations in O(log log |U|) time, and needs O(log log |U| / log log log |U|) structural changes per update operation. The data structure is a simplified version of the van Emde Boas tree that introduces, in its construction and operation, new concepts that keep the information needed for searching along a root-to-leaf path in a more compact and organized way.
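To convey the flavor of the structure being simplified, the sketch below implements a basic recursive van Emde Boas tree supporting insert and predecessor in O(log w) = O(log log |U|) time: each node stores its minimum outside the recursion, splits the remaining keys by their high bits into clusters, and keeps a summary structure over the non-empty cluster indices. Clusters are allocated lazily in a dict; this is an illustrative baseline, not the authors' variant (no deletion, and no O(|U|/log|U| + n) space guarantee).

```python
class VEB:
    """Simplified van Emde Boas tree over the universe {0, ..., 2**w - 1},
    supporting insert and predecessor queries."""

    def __init__(self, w):
        self.w = w                  # key width in bits
        self.min = None             # the minimum is stored only here,
        self.max = None             # never recursed into a cluster
        if w > 1:
            self.lo_bits = w // 2
            self.clusters = {}      # high half of key -> VEB(lo_bits)
            self.summary = None     # VEB over non-empty cluster indices

    def _hi(self, x):
        return x >> self.lo_bits

    def _lo(self, x):
        return x & ((1 << self.lo_bits) - 1)

    def insert(self, x):
        if self.min is None:
            self.min = self.max = x
            return
        if x == self.min or x == self.max:
            return
        if x < self.min:
            self.min, x = x, self.min   # new min; recurse on the old one
        if x > self.max:
            self.max = x
        if self.w == 1:
            return
        h, l = self._hi(x), self._lo(x)
        if h not in self.clusters:
            self.clusters[h] = VEB(self.lo_bits)
            if self.summary is None:
                self.summary = VEB(self.w - self.lo_bits)
            self.summary.insert(h)      # only one recursive call does work
        self.clusters[h].insert(l)

    def predecessor(self, x):
        """Largest stored key strictly smaller than x, or None."""
        if self.min is None or x <= self.min:
            return None
        if x > self.max:
            return self.max
        if self.w == 1:
            return self.min             # here min < x <= max
        h, l = self._hi(x), self._lo(x)
        c = self.clusters.get(h)
        if c is not None and c.min is not None and c.min < l:
            return (h << self.lo_bits) | c.predecessor(l)
        ph = self.summary.predecessor(h) if self.summary else None
        if ph is None:
            return self.min             # min is not stored in any cluster
        return (ph << self.lo_bits) | self.clusters[ph].max

v = VEB(16)
for key in (3, 10, 500, 501, 65535):
    v.insert(key)
```

Storing the minimum outside the recursion is what makes each operation descend into only one sub-structure per level, giving the doubly logarithmic bound; the paper's contribution is organizing the per-path search information more compactly on top of this skeleton.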