Similar Literature
20 similar documents retrieved.
1.
Most atmospheric and oceanic data assimilation (DA) schemes rely on the Best Linear Unbiased Estimator (BLUE), which is sub-optimal if the errors of the assimilated data are non-Gaussian, thus calling for a fully Bayesian data assimilation. This paper contributes to the study of the non-Gaussianity of errors in the observational space. Possible sources of non-Gaussianity include the inherent statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) and the nonlinearity of both the data assimilation models and the observation operators, among others. Deviations from Gaussianity can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background), leading to consistency relationships between the error statistics. From samples of observations and backgrounds, as well as their specified error variances, we evaluate some measures of the innovation non-Gaussianity, such as the skewness, kurtosis and negentropy. Under the assumption of additive errors, and by relating statistical moments of both data errors and innovations, we identify potential sources of the innovation non-Gaussianity. These sources include: (1) univariate error non-Gaussianity, (2) nonlinear correlations between errors, (3) spatio-temporal variability of error variances (heteroscedasticity) and (4) multiplicative noise. Observational and background errors are often assumed independent. This leads to variance-dependent bounds for the skewness and the kurtosis of errors. From innovation statistics, we assess the potential DA impact of some scenarios of non-Gaussian errors. This impact is measured through the mean square difference between the BLUE and the Minimum Variance Unbiased Estimator (MVUE), obtained with univariate observations and background estimates. To accomplish this, we compute maximum entropy probability density functions (pdfs) of the errors, constrained by the statistical moments up to fourth order. These pdfs are then used to compute the Bayesian posterior pdf and the MVUE. This impact is studied for a large range of statistical moments; it is higher for skewed innovations and grows on average with the skewness of the data errors, especially when the skewnesses have the same sign. An application has been performed to the quality-accepted ECMWF innovations of brightness temperatures for a set of High Resolution Infrared Sounder (HIRS) channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20%-60% of the posterior error variance as compared to the BLUE, especially for extreme values of the innovations.
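
As a minimal illustration of the innovation-based diagnostics described above, the sketch below computes the skewness, excess kurtosis and a moment-based negentropy approximation from a sample of innovations; the synthetic gamma-distributed sample and the approximation J ~ skewness^2/12 + kurtosis^2/48 are assumptions made only for illustration, not the paper's exact procedure.

```python
# Sketch: moment-based non-Gaussianity diagnostics for innovations
# (synthetic data; the negentropy formula is the classical moment
#  approximation, not the paper's exact estimator).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

# Hypothetical innovations d = y - H(xb): a skewed sample standing in
# for observation-minus-background statistics.
innovations = rng.gamma(shape=2.0, scale=1.0, size=20000) - 2.0

d = (innovations - innovations.mean()) / innovations.std()

s = skew(d)                      # third standardized moment
k = kurtosis(d, fisher=True)     # excess kurtosis (0 for a Gaussian)

# Moment (Edgeworth-type) approximation of negentropy; exact negentropy
# would require estimating the full pdf.
negentropy = s**2 / 12.0 + k**2 / 48.0

print(f"skewness = {s:.3f}, excess kurtosis = {k:.3f}, "
      f"approx. negentropy = {negentropy:.4f}")
```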

2.
Errors in numerical forecasts arise due to errors in the initial conditions and the discrepancies between the model and nature (and may amplify due to chaos). In a quest to reduce forecast errors, initial conditions for forecast integrations are traditionally chosen to be as close to nature as possible. When such an initial condition (analysis) is used to initialize an imperfect model that is systematically different from nature, the model will drift from a state on or near the attractor of nature to a state near the model’s attractor. Such a drift will induce forecast errors.

To reduce drift-induced errors, a mapping paradigm is proposed where a link (i.e., mapping vector) is established between states of nature and corresponding states on (or near) the model attractor. Observations from near the attractor of nature are moved with the mapping vector to the vicinity of the model attractor. Data assimilation is performed with the mapped observations and the mapped initial conditions are then used to initialize model forecasts to be used in the next assimilation cycle. For practical applications, the mapped initial conditions as well as the forecasts are “remapped” back to be close to nature using the mapping vector with an opposite sign.

The mapping paradigm is demonstrated in a setting where a simple Lorenz model is used to generate “nature” and a modified version is used as an imperfect model. The mapping vector is first estimated as the difference between the climate means of nature and the model. Model-related errors in the Lorenz system with the mapping algorithm are reduced by 67%, leading to improvements in the quality of both the numerical forecasts made with the imperfect model and the analyses produced with the forecasts. Because the mapping vector may be a function of phase-space location, and because a long-term climatology may not be available for nature or for the model, an adaptive approach that can be used with a relatively small amount of data was also introduced and successfully tested.
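
A minimal sketch of the mapping idea, using the Lorenz-63 system as "nature" and a version with a perturbed parameter as the imperfect model; estimating the mapping vector as the difference of long-term means follows the first approach described above, while all numerical values (parameters, integration settings, the sample observation) are illustrative assumptions.

```python
# Sketch: mapping vector between "nature" and an imperfect model,
# estimated as the difference of their climatological means (Lorenz-63).
import numpy as np

def lorenz_step(x, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustrative only)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def climate_mean(rho, n_steps=200000, dt=0.005):
    x = np.array([1.0, 1.0, 20.0])
    total = np.zeros(3)
    for _ in range(n_steps):
        x = lorenz_step(x, dt, rho=rho)
        total += x
    return total / n_steps

rho_nature, rho_model = 28.0, 35.0       # hypothetical nature vs. imperfect model
mean_nature = climate_mean(rho_nature)
mean_model = climate_mean(rho_model)

# Mapping vector: link from states of nature to states near the model attractor.
mapping = mean_model - mean_nature

obs = np.array([-5.0, -6.0, 22.0])       # hypothetical observation near nature's attractor
mapped_obs = obs + mapping               # move the observation toward the model attractor
forecast = lorenz_step(mapped_obs, 0.005, rho=rho_model)
remapped_forecast = forecast - mapping   # "remap" back toward nature with the opposite sign

print("mapping vector:", mapping)
print("remapped forecast:", remapped_forecast)
```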


3.
A variation of the Minority Game has been applied to study the timing of promotional actions at retailers in the fast moving consumer goods market. The underlying hypotheses for this work are that price promotions are more effective when fewer than average competitors do a promotion, and that a promotion strategy can be based on past sales data. The first assumption has been checked by analysing 1467 promotional actions for three products on the Dutch market (ketchup, mayonnaise and curry sauce) over a 120-week period, both on an aggregated level and on retailer chain level.

The second assumption was tested by analysing past sales data with the Minority Game. This revealed that high or low competitor promotional pressure for actual ketchup, mayonnaise, curry sauce and barbecue sauce markets is to some extent predictable up to a forecast of some 10 weeks. Whereas a random guess would be right 50% of the time, a single-agent game can predict the market with a success rate of 56% for a 6–9 week forecast. This number is the same for all four mentioned fast moving consumer markets. For a multi-agent game a larger variability in the success rate is obtained, but predictability can be as high as 65%.

Contrary to expectation, the actual market does the opposite of what game theory would predict. This points to a systematic oscillation in the market. Even though this result is not fully understood, merely observing that this trend is present in the data could lead to exploitable trading benefits. As a check, random history strings were generated from which the statistical variation in the game prediction was studied. This shows that the odds are 1:1,000,000 that the observed pattern in the market is based on coincidence.
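
A sketch of the history-based prediction idea discussed above: a single agent holds a pool of fixed strategies that map the last m binary market outcomes (1 = above-average competitor promotional pressure) to a prediction, keeps score of each strategy on past data, and predicts with the current best scorer. The synthetic outcome series and all parameter values are assumptions used only to keep the example self-contained.

```python
# Sketch: single-agent, Minority-Game-style predictor on a binary series of
# "high/low competitor promotional pressure" (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(1)
m, n_strategies = 3, 8          # history length and number of random strategies

# A strategy is a lookup table: one prediction (0/1) per possible m-bit history.
strategies = rng.integers(0, 2, size=(n_strategies, 2 ** m))
scores = np.zeros(n_strategies)

# Synthetic weekly series with a weak oscillation plus noise, standing in for
# promotional-pressure data.
T = 400
base = (np.arange(T) // 2) % 2
noise = (rng.random(T) < 0.2).astype(int)
series = base ^ noise

def history_index(bits):
    """Encode the last m binary outcomes as an integer index."""
    return int("".join(str(int(b)) for b in bits), 2)

hits = 0
for t in range(m, T - 1):
    idx = history_index(series[t - m:t])
    best = int(np.argmax(scores))            # follow the best-scoring strategy
    prediction = strategies[best, idx]
    hits += int(prediction == series[t])
    # Update every strategy's score against the realized outcome.
    scores += (strategies[:, idx] == series[t]).astype(float)

print(f"success rate: {hits / (T - 1 - m):.2%}")
```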


4.
A variational data assimilation technique applied to the identification of optimal approximations of derivatives near a boundary is discussed in the framework of the one-dimensional wave equation. The simplicity of the equation and of its numerical scheme allows us to discuss in detail both the development of the adjoint model and the assimilation results. It is shown what kind of errors can be corrected by this control and how these errors are corrected. This study is carried out in view of using this control to identify optimal numerical schemes in coastal regions of ocean models.

5.
Some years ago Szabó and Fine proposed a local hidden variable theory for the GHZ experiment based on the assumption that “the detection efficiency is not (only) the effect of random errors in the detector equipment, but it is a more fundamental phenomenon, the manifestation of a predetermined hidden property of the particles”. Szabó and Fine, however, did not provide a general approach to quantum phenomena which avoids nonlocality. Such an approach, based on the same assumption, was instead recently supplied by some of us and called the extended semantic realism (ESR) model. We show here that one can extract from the ESR model several local finite models referring to the specific physical situation considered in the GHZ experiment, and that these models can be converted into the toy models for the GHZ experiment worked out by Szabó and Fine.

6.
Graphical models for statistical inference and data assimilation (total citations: 1; self-citations: 0; citations by others: 1)
In data assimilation for a system which evolves in time, one combines past and current observations with a model of the dynamics of the system, in order to improve the simulation of the system as well as any future predictions about it. From a statistical point of view, this process can be regarded as estimating many random variables which are related both spatially and temporally: given observations of some of these variables, typically corresponding to times past, we require estimates of several others, typically corresponding to future times.

Graphical models have emerged as an effective formalism for assisting in these types of inference tasks, particularly for large numbers of random variables. Graphical models provide a means of representing dependency structure among the variables, and can provide both intuition and efficiency in estimation and other inference computations. We provide an overview and introduction to graphical models, and describe how they can be used to represent statistical dependency and how the resulting structure can be used to organize computation. The relation between statistical inference using graphical models and optimal sequential estimation algorithms such as Kalman filtering is discussed. We then give several additional examples of how graphical models can be applied to climate dynamics, specifically estimation using multi-resolution models of large-scale data sets such as satellite imagery, and learning hidden Markov models to capture rainfall patterns in space and time.
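
To make the link to optimal sequential estimation concrete, here is a minimal Kalman filter for a scalar linear-Gaussian state-space model, i.e. inference on a chain-structured graphical model; the dynamics, noise variances and data are illustrative assumptions, not taken from the paper.

```python
# Sketch: Kalman filtering on a scalar linear-Gaussian chain
# x_t = a x_{t-1} + w_t,  y_t = x_t + v_t  (all values illustrative).
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.9, 0.1, 0.5          # dynamics, process and observation variances
T = 100

# Simulate a truth and noisy observations.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x_true[t] + rng.normal(0.0, np.sqrt(r))

# Kalman filter: alternate predict (follow the chain edge) and update
# (absorb the observation node attached to the current state).
mean, var = 0.0, 1.0
means = np.zeros(T)
for t in range(1, T):
    mean_pred = a * mean
    var_pred = a * a * var + q
    gain = var_pred / (var_pred + r)
    mean = mean_pred + gain * (y[t] - mean_pred)
    var = (1.0 - gain) * var_pred
    means[t] = mean

print("filter RMSE:", np.sqrt(np.mean((means - x_true) ** 2)))
```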


7.
Described here is a path-integral, sampling-based approach for the data assimilation of sequential data and evolutionary models. Since it makes no assumptions of linearity in the dynamics or of Gaussianity in the statistics, it permits consideration of very general estimation problems. The method can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization. Speedup in the Monte Carlo sampling process is essential if the path-integral method is to have any chance of being a viable estimator on moderately large problems. Here a variety of strategies are proposed and compared for their relative ability to improve the sampling efficiency of the resulting estimator. Details useful for its implementation and testing are provided as well. The method is applied to a problem in which standard methods are known to fail, an idealized flow/drifter problem, which has been used as a testbed for assimilation strategies involving Lagrangian data. It is in this kind of context that the method may prove to be a useful assimilation tool in oceanic studies.
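
As a toy stand-in for the sampling-based estimator, the sketch below samples the smoother posterior over an initial condition given a short observation record, using a random-walk Metropolis chain and a mildly nonlinear deterministic map; the model, prior and tuning constants are assumptions chosen only to keep the example self-contained, and the paper's path-integral formulation and speedup strategies are not reproduced here.

```python
# Sketch: random-walk Metropolis sampling of the smoother posterior
# p(x0 | y_1..T) for a deterministic, mildly nonlinear map (illustrative
# stand-in for a sampling-based estimator; all values are assumptions).
import numpy as np

rng = np.random.default_rng(3)

def forward(x0, T):
    """Deterministic dynamics: x_{t+1} = 0.9 x_t + 0.5 sin(x_t)."""
    xs = [x0]
    for _ in range(T - 1):
        xs.append(0.9 * xs[-1] + 0.5 * np.sin(xs[-1]))
    return np.array(xs)

T, obs_var = 8, 0.01
x0_true = 1.5
y = forward(x0_true, T) + rng.normal(0.0, np.sqrt(obs_var), size=T)

def log_post(x0):
    if not -5.0 < x0 < 5.0:                 # flat prior on (-5, 5)
        return -np.inf
    resid = y - forward(x0, T)
    return -0.5 * np.sum(resid ** 2) / obs_var

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0.0, 0.1)         # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5000:])           # discard burn-in
print(f"posterior mean of x0: {samples.mean():.3f} (truth {x0_true})")
```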

8.
The transport of radiation through a medium which is spatially random is studied using diffusion theory and the method of smoothing. Equations are established for the average flux and current in the medium, together with the variance of these quantities. The theory is applied to a plane slab, one side of which is irradiated by a uniform source of radiation. The reflection and transmission factors are calculated and a measure of their fluctuations is obtained. For greater generality, the boundary conditions allow internal reflection of the radiation using the Fresnel coefficient, which is particularly useful for applications to optical tomography, where we believe this problem to have some relevance. The results are illustrated numerically using stochastic models for weak and strong clumping and applied to transmission through adult brain tissue. Stochastic effects are seen to be significant.

9.
Infrared spectra of high-temperature H2O-CO2-CO mixtures are calculated using narrow band models in order to simulate hot jet signatures at long distance. The correlated k-distribution with fictitious gas (CKFG) approach generally gives accurate results in such situations (especially for long atmospheric paths) but leads to long computation times in cases involving mixtures of gases. This time may be reduced if the mixture is treated as a single gas (single-mixture gas assumption, SMG); the lines of the single-mixture gas are then assigned to the fictitious gases. In this study, the accuracy of two narrow band models is evaluated. The first narrow band model considers one single-mixture gas and no fictitious gas (CK-SMG), whereas the second accounts for one single-mixture gas and three fictitious gases (CKFG-SMG). Both narrow band models are compared with reference spectra calculated with a line-by-line (LBL) approach. As expected, the narrow band accuracy is improved by the fictitious gas (FG) assumption, particularly when long atmospheric paths are involved. The SMG assumption may lead to an underestimation of about 10%, depending on the variation of the gas mixture composition ratio. Nevertheless, in most realistic situations the SMG assumption results in negligible errors and may be used for remote sensing of plume signatures.

10.
In this paper, an explicit method of constructing approximations (the triangle entropy method) is developed for nonequilibrium problems. This method enables one to treat any complicated nonlinear functionals that best fit the physics of a problem (such as, for example, rates of processes) as new independent variables.

The method is demonstrated on Boltzmann-type kinetics. New macroscopic variables are introduced (moments of the Boltzmann collision integral, or scattering rates). They are treated as independent variables rather than as infinite moment series. This approach gives a complete account of the rates of scattering processes. Transport equations for the scattering rates are obtained (the second hydrodynamic chain), similar to the usual moment chain (the first hydrodynamic chain). Various examples of the closure of the first, of the second, and of the mixed hydrodynamic chains are considered for the hard sphere model. It is shown, in particular, that the complete account of scattering processes leads to a renormalization of transport coefficients.

The method gives the explicit solution for the closure problem, provides thermodynamic properties of reduced models, and can be applied to any kinetic equation with a thermodynamic Lyapunov function.


11.
For efficient progress, model properties and measurement needs can adapt to oceanic events and interactions as they occur. The combination of models and data via data assimilation can also be adaptive. These adaptive concepts are discussed and exemplified within the context of comprehensive real-time ocean observing and prediction systems. Novel adaptive modeling approaches based on simplified maximum likelihood principles are developed and applied to physical and physical–biogeochemical dynamics. In the regional examples shown, they allow the joint calibration of parameter values and model structures. Adaptable components of the Error Subspace Statistical Estimation (ESSE) system are reviewed and illustrated. Results indicate that error estimates, ensemble sizes, error subspace ranks, covariance tapering parameters and stochastic error models can be calibrated by such quantitative adaptation. New adaptive sampling approaches and schemes are outlined. Illustrations suggest that these adaptive schemes can be used in real time with the potential for most efficient sampling.

12.
This paper describes significant developments in methods for the automatic, quantitative analysis of interferograms. All areas of analysis have been considered: fringe field generation, pre-processing, and phase unwrapping.

A new quasi-heterodyne holographic technique is described in which the image is reconstructed using a single beam. The errors in the reconstructed fringe field are mainly linear in form, and an error compensation scheme is proposed. The final error in the phase measurement using automatic analysis is λ/40.

The process of image smoothing by an averaging filter is considered to reduce the effects of random noise. It is shown that by measuring the signal-to-noise ratio of the fringe field an optimum degree of smoothing may be applied. This is demonstrated on holographic and electronic speckle pattern interferometry (ESPI) data.

Two methods for cosinusoidal fringe image combination are compared, using three or four fields. It is shown that an automatic analysis can be achieved using four phase-stepped images.

A new algorithm to automatically unwrap the phase of complex fringe patterns is described. The fringe field is segmented into small rectangular areas, called tiles. This allows local data to be obtained on fringe consistency and density. A confidence tree can then be formed to produce an optimal solution for the whole field. Results are presented and discussed for both holographic and ESPI data.
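
A sketch of the four phase-stepped combination mentioned above: with frames I_k = A + B cos(phi + k*pi/2), the wrapped phase follows from atan2(I4 - I2, I1 - I3). The synthetic fringe field is an assumption, and a simple row-wise unwrap stands in for the tile-based confidence-tree algorithm of the paper.

```python
# Sketch: wrapped-phase recovery from four pi/2 phase-stepped fringe images,
# followed by a simple row-wise unwrap (synthetic data; the paper's tile-based
# unwrapping algorithm is not reproduced here).
import numpy as np

x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
phase_true = 6.0 * np.pi * (X ** 2 + 0.5 * Y)     # hypothetical object phase
A, B = 1.0, 0.8                                    # background and modulation

frames = [A + B * np.cos(phase_true + k * np.pi / 2.0) for k in range(4)]
I1, I2, I3, I4 = frames

# Four-step formula: I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi).
wrapped = np.arctan2(I4 - I2, I1 - I3)

# Simple unwrap along each row (a stand-in for full 2D unwrapping).
unwrapped = np.unwrap(wrapped, axis=1)

# Compare against the truth, allowing a constant 2*pi offset per row.
err = (unwrapped - phase_true) - (unwrapped - phase_true)[:, :1]
print("max residual after row-wise unwrap:", np.abs(err).max())
```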


13.
蔡俊, 李学彬, 詹国伟, 武鹏飞, 徐春燕, 青春, 吴晓庆. Acta Physica Sinica (物理学报), 2018, 67(1): 014206
From 13 December 2016 to 2 January 2017, 30 coastal radiosonde profiles of temperature, humidity, pressure, wind speed, wind direction and C_n^2 were obtained at the Bohe Marine Meteorological Science Experiment Base in Maoming with a self-developed turbulence radiosonde. Based on the HMNSP99 outer-scale model, an empirical formula for the outer scale of atmospheric optical turbulence at Maoming was fitted from the coastal sounding data. The measured upper-air turbulence profiles were also statistically averaged, and a statistically averaged profile model consistent with the coastal turbulence profiles (the C_n^2 sea model) was fitted on the basis of the Hufnagel-Valley model. Following the Tatarski parameterization of upper-air turbulence, C_n^2 estimated with the Maoming outer-scale formula was compared with the radiosonde-measured C_n^2 and with C_n^2 estimated using other outer-scale models. A statistical analysis shows that the overall correlation coefficients between the measured log10 C_n^2 and the values computed with the newly fitted Maoming formula and with the HMNSP99, Dewan and Coulman et al. outer-scale models are 0.924, 0.848, 0.763 and 0.651, respectively, with good consistency in both trend and magnitude. The errors of all four outer-scale models are small, with overall mean absolute errors and mean relative errors of 0.514 and 2.963%, 0.627 and 3.612%, 0.943 and 5.439%, and 0.766 and 4.417%, respectively; the newly fitted outer-scale model has the smallest errors. This further verifies the reliability and effectiveness of the new coastal outer-scale and C_n^2 profile models. It is also found that the occurrence of upper-air optical turbulence is closely related to wind shear and temperature gradient, which provides support for the atmospheric optical turbulence profile models needed by electro-optical engineering applications in coastal environments.
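
For reference, the sketch below evaluates the standard Hufnagel-Valley C_n^2 parameterization with the commonly quoted HV 5/7 coefficients; the coastal coefficients actually fitted in the paper are not reproduced here, so the numbers should be read as generic assumptions.

```python
# Sketch: the standard Hufnagel-Valley Cn^2(h) parameterization with the
# common HV 5/7 coefficients (generic values, not the coastal fit of the paper).
import numpy as np

def hufnagel_valley(h, v=21.0, A=1.7e-14):
    """Cn^2 [m^-2/3] at altitude h [m] for rms wind v [m/s] and ground value A."""
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

heights = np.array([10.0, 100.0, 1000.0, 5000.0, 10000.0, 20000.0])
for h, cn2 in zip(heights, hufnagel_valley(heights)):
    print(f"h = {h:8.0f} m   Cn^2 = {cn2:.3e}")
```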

14.
The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes at melting Arctic sea ice.
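
To illustrate the kind of bistability and hysteresis reported above, here is a deliberately crude zero-dimensional energy-balance toy with a temperature-dependent (ice-albedo-like) switch; it is not the paper's two-dimensional network model, and every constant is an assumption chosen only to produce two stable branches under a varying atmospheric heat influx.

```python
# Sketch: hysteresis in a zero-dimensional energy-balance toy with a
# temperature-dependent albedo (a conceptual stand-in, not the paper's
# network model; all constants are illustrative).
import numpy as np

S = 300.0                         # shortwave input (W m^-2), illustrative
A_OUT, B_OUT = 200.0, 2.0         # linearized outgoing longwave: A + B*T

def albedo(T):
    return 0.6 if T < 0.0 else 0.3        # "ice" vs. "open water"

def equilibrate(F_atm, T0):
    """Relax dT/dt = [S(1-albedo) + F_atm - (A + B T)] / C to equilibrium."""
    T, C, dt = T0, 10.0, 0.05
    for _ in range(5000):
        T += dt * (S * (1.0 - albedo(T)) + F_atm - (A_OUT + B_OUT * T)) / C
    return T

forcings = np.linspace(-40.0, 80.0, 25)

# Sweep the atmospheric heat influx up, then down, carrying the state along.
T = -20.0
up = []
for F in forcings:
    T = equilibrate(F, T)
    up.append(T)

down = []
for F in forcings[::-1]:
    T = equilibrate(F, T)
    down.append(T)

# The two sweeps disagree inside the bistable window: a hysteresis loop.
for F, Tu, Td in zip(forcings, up, down[::-1]):
    print(f"F = {F:6.1f}  T_up = {Tu:7.2f}  T_down = {Td:7.2f}")
```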

15.
The viewpoint taken in this paper is that data assimilation is fundamentally a statistical problem and that this problem should be cast in a Bayesian framework. In the absence of model error, the correct solution to the data assimilation problem is to find the posterior distribution implied by this Bayesian setting. Methods for dealing with data assimilation should then be judged by their ability to probe this distribution. In this paper we propose a range of techniques for probing the posterior distribution, based around the Langevin equation, and we compare these new techniques with existing methods.

When the underlying dynamics is deterministic, the posterior distribution is on the space of initial conditions leading to a sampling problem over this space. When the underlying dynamics is stochastic the posterior distribution is on the space of continuous time paths. By writing down a density, and conditioning on observations, it is possible to define a range of Markov Chain Monte Carlo (MCMC) methods which sample from the desired posterior distribution, and thereby solve the data assimilation problem. The basic building-blocks for the MCMC methods that we concentrate on in this paper are Langevin equations which are ergodic and whose invariant measures give the desired distribution; in the case of path space sampling these are stochastic partial differential equations (SPDEs).

Two examples are given to show how data assimilation can be formulated in a Bayesian fashion. The first is weather prediction, and the second is Lagrangian data assimilation for oceanic velocity fields. Furthermore, the relationship between the Bayesian approach outlined here and the commonly used Kalman filter based techniques, prevalent in practice, is discussed. Two simple pedagogical examples are studied to illustrate the application of Bayesian sampling to data assimilation concretely. Finally, a range of open mathematical and computational issues arising from the Bayesian approach are outlined.
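
A minimal sketch of the Langevin idea in the deterministic-dynamics case: the posterior over the initial condition of a scalar linear model is sampled with a discretized (unadjusted) Langevin update; the model, prior and step size are assumptions, and the linear-Gaussian setting is chosen so the result can be checked against the analytic posterior.

```python
# Sketch: unadjusted Langevin sampling of p(x0 | y_1..T) for a scalar linear
# model x_t = a x_{t-1}, y_t = x_t + noise (illustrative linear-Gaussian case,
# so the posterior is Gaussian and the sampler can be checked analytically).
import numpy as np

rng = np.random.default_rng(4)
a, obs_var, T = 0.8, 0.1, 10
prior_mean, prior_var = 0.0, 1.0

x0_true = 1.2
coeffs = a ** np.arange(1, T + 1)
y = x0_true * coeffs + rng.normal(0.0, np.sqrt(obs_var), T)

def grad_log_post(x0):
    """Gradient of the log posterior; tractable because the model is linear."""
    grad_prior = -(x0 - prior_mean) / prior_var
    grad_like = np.sum((y - coeffs * x0) * coeffs) / obs_var
    return grad_prior + grad_like

# Discretized Langevin dynamics: x <- x + (eps/2) * grad + sqrt(eps) * N(0, 1).
eps, n_steps = 0.005, 20000
x = 0.0
samples = np.empty(n_steps)
for i in range(n_steps):
    x = x + 0.5 * eps * grad_log_post(x) + np.sqrt(eps) * rng.normal()
    samples[i] = x

# Analytic posterior for comparison.
post_var = 1.0 / (1.0 / prior_var + np.sum(coeffs ** 2) / obs_var)
post_mean = post_var * (prior_mean / prior_var + np.sum(coeffs * y) / obs_var)
print(f"Langevin mean/var: {samples[2000:].mean():.3f} / {samples[2000:].var():.4f}")
print(f"analytic mean/var: {post_mean:.3f} / {post_var:.4f}")
```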


16.
Classical formulations of data assimilation, whether sequential, ensemble-based or variational, are amplitude adjustment methods. Such approaches can perform poorly when forecast locations of weather systems are displaced from their observations. Compensating position errors by adjusting amplitudes can produce unacceptably “distorted” states, adversely affecting analysis, verification and subsequent forecasts.

There are many sources of position error. It is non-trivial to decompose position error into constituent sources, and yet correcting position errors during assimilation can be essential for operationally predicting strong, localized weather events such as tropical cyclones.

In this paper, we propose a method that accounts for both position and amplitude errors. The proposed method assimilates observations in two steps. The first step is field alignment, where the current model state is aligned with observations by adjusting a continuous field of local displacements, subject to certain constraints. The second step is amplitude adjustment, where contemporary assimilation approaches are used. We demonstrate with 1D and 2D examples how applying field alignment produces better analyses with sparse and uncertain observations.
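
A 1D sketch of the two-step idea: the background feature is displaced relative to the observations, so the first step estimates a displacement by scanning a misfit over candidate shifts, and the second step applies a simple amplitude (least-squares) adjustment. The single constant shift and all parameter values are simplifications standing in for the continuous displacement field and the contemporary assimilation step used in the paper.

```python
# Sketch: two-step assimilation in 1D, with (1) field alignment via a scan
# over candidate displacements and (2) a simple amplitude (least-squares)
# adjustment. A single constant shift stands in for a continuous displacement
# field; all values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 100.0, 501)

def bump(center, amplitude=1.0, width=8.0):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

truth = bump(55.0, amplitude=1.2)                 # "nature": feature at x = 55
background = bump(40.0, amplitude=0.9)            # forecast: displaced and too weak

obs_idx = np.arange(0, x.size, 25)                # sparse observation locations
obs = truth[obs_idx] + rng.normal(0.0, 0.02, obs_idx.size)

# Step 1: field alignment -- choose the shift that minimizes the misfit at
# the observation points.
shifts = np.arange(-100, 101)
costs = [np.sum((np.roll(background, s)[obs_idx] - obs) ** 2) for s in shifts]
best_shift = shifts[int(np.argmin(costs))]
aligned = np.roll(background, best_shift)

# Step 2: amplitude adjustment -- least-squares scaling of the aligned field.
scale = np.dot(aligned[obs_idx], obs) / np.dot(aligned[obs_idx], aligned[obs_idx])
analysis = scale * aligned

print("estimated shift (grid points):", best_shift)
print("amplitude scale:", round(scale, 3))
print("RMSE before/after:", np.sqrt(np.mean((background - truth) ** 2)),
      np.sqrt(np.mean((analysis - truth) ** 2)))
```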


17.
Roberto Garra, Federico Polito. Physica A, 2011, 390(21-22): 3704-3709
In this note we highlight the role of fractional linear birth and linear death processes, recently studied in Orsingher et al. (2010) [5] and Orsingher and Polito (2010) [6], in relation to epidemic models with an empirical power-law distribution of the events. Taking inspiration from a formal analogy between the equation for self-consistency of the epidemic type aftershock sequences (ETAS) model and the fractional differential equation describing the mean value of fractional linear growth processes, we show some interesting applications of fractional modelling in studying ab initio epidemic processes without the assumption of any empirical distribution. We also show that, in the framework of fractional modelling, subcritical regimes can be linked to linear fractional death processes and supercritical regimes to linear fractional birth processes. Moreover, we discuss a simple toy model in order to underline the possible application of these stochastic growth models to more general epidemic phenomena such as tumour growth.
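
The cited works report that the mean of the fractional linear birth process takes a Mittag-Leffler form; assuming E[N_nu(t)] = n0 * E_nu(lambda * t^nu), the sketch below evaluates it with a truncated series and compares the fractional case with the classical exponential growth recovered at nu = 1.

```python
# Sketch: mean growth of a (fractional) linear birth process, assuming the
# Mittag-Leffler form E[N_nu(t)] = n0 * E_nu(lambda * t**nu) reported in the
# cited papers; nu = 1 recovers classical exponential growth.
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, nu, n_terms=120):
    """Truncated series E_nu(z) = sum_k z^k / Gamma(nu*k + 1)."""
    k = np.arange(n_terms)
    return np.sum(z ** k / gamma(nu * k + 1.0))

n0, lam = 1.0, 0.8
for t in (0.5, 1.0, 2.0, 4.0):
    classical = n0 * mittag_leffler(lam * t, 1.0)        # equals n0 * exp(lam * t)
    fractional = n0 * mittag_leffler(lam * t ** 0.7, 0.7)
    print(f"t = {t:4.1f}   nu=1: {classical:8.3f}   nu=0.7: {fractional:8.3f}")
```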

18.
Satellite remote sensing has become an important tool for monitoring urban SO2 pollution and for the monitoring and early warning of global volcanic activity. The new PCA (principal component analysis) algorithm effectively reduces noise in the retrieved data and has replaced the previous operational BRD (band residual difference) algorithm for retrieving boundary-layer SO2 total column products. However, there have been few studies evaluating and validating the accuracy of the PCA retrieval product, and long time-series comparisons with the BRD product to assess the applicability of the algorithms are lacking, especially over the key polluted urban regions of China. In this paper, ground-based multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations and simulations from the RAMS-CMAQ multi-scale air quality modelling system are used to assess the accuracy and errors of the PCA and BRD retrieval algorithms. In addition, three situations are considered (clean oceanic regions, key polluted urban regions of China, and high-concentration volcanic eruptions) to compare the differences in the spatio-temporal patterns of the PCA and BRD SO2 columns and their applicability at different SO2 loadings, and the retrieval uncertainties of the two algorithms are analysed and discussed. The results show that over the Beijing-Tianjin-Hebei, Pearl River Delta and Yangtze River Delta regions, the PCA SO2 columns are lower than the BRD columns and the BRD retrievals are closer to the ground-based MAX-DOAS observations; in winter both the BRD and PCA SO2 columns are lower than the RAMS-CMAQ simulations, whereas in July and August the BRD SO2 columns are higher than the RAMS-CMAQ simulations. Over clean oceanic regions where the SO2 column is close to zero, the noise level of the PCA product is lower than that of the BRD product, but the overall bias of the PCA retrievals is larger than that of BRD. For high-concentration volcanic eruptions, the BRD SO2 columns are lower than the PCA columns when the SO2 column exceeds 25 DU, and the difference between the two algorithms increases as the SO2 column increases. This study provides an important reference for applications of the OMI (Ozone Monitoring Instrument) SO2 products, and the analysis of the differences between the retrieval algorithms and the attribution of their uncertainties is also of scientific value for algorithm improvement.
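
As a sketch of the kind of validation statistics used in the comparison above (bias, RMSE and correlation of satellite SO2 columns against ground-based MAX-DOAS columns), the example below uses synthetic placeholder arrays rather than the OMI or MAX-DOAS data themselves.

```python
# Sketch: simple validation statistics for satellite vs. ground-based SO2
# columns (bias, RMSE, correlation); the data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical daily SO2 vertical column densities in Dobson Units.
maxdoas = rng.gamma(shape=2.0, scale=0.4, size=200)            # "ground truth"
sat_brd = maxdoas + rng.normal(0.0, 0.25, maxdoas.size)        # BRD-like product
sat_pca = 0.8 * maxdoas + rng.normal(0.0, 0.15, maxdoas.size)  # PCA-like product

def validate(name, sat, ref):
    bias = np.mean(sat - ref)
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    corr = np.corrcoef(sat, ref)[0, 1]
    print(f"{name}: bias = {bias:+.3f} DU, RMSE = {rmse:.3f} DU, r = {corr:.3f}")

validate("BRD vs MAX-DOAS", sat_brd, maxdoas)
validate("PCA vs MAX-DOAS", sat_pca, maxdoas)
```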

19.
Noncommutative geometry is based on the idea that an associative algebra can be regarded as “an algebra of functions on a noncommutative space”. The major contribution to noncommutative geometry was made by A. Connes, who, in particular, analyzed Yang–Mills theories on noncommutative spaces, using important notions that were introduced in his papers (connection, Chern character, etc.). It was found recently that Yang–Mills theories on noncommutative spaces appear naturally in string/M-theory; the notions and results of noncommutative geometry were applied very successfully to the problems of physics.

In this paper we give a mostly self-contained review of some aspects of M(atrix) theory, of Connes’ noncommutative geometry and of applications of noncommutative geometry to M(atrix) theory. The topics include an introduction to the BFSS and IKKT matrix models, compactifications on noncommutative tori, a review of basic notions of noncommutative geometry with a detailed discussion of noncommutative tori, Morita equivalence and T-duality, an elementary discussion of noncommutative orbifolds, and noncommutative solitons and instantons. The review is primarily intended for physicists who would like to learn some basic techniques of noncommutative geometry and how they can be applied in string theory, and to mathematicians who would like to learn about some new problems arising in theoretical physics.

The second part of the review (Sections 10–12), devoted to solitons and instantons on noncommutative Euclidean space, is almost independent of the first part.


20.
In this paper, relations between the differential cross sections for three-body processes following from the non-relativistic quark model are classified and compared with experimental data. In general, within the experimental errors, they show satisfactory agreement, except for the relations for strangeness-exchange processes, namely the so-called SU(6) relations and the relations following from the assumption of the identity of the quarks constituting mesons and baryons. The relation of these predictions to those of other models, and of this investigation to a previous analysis of quasi-two-body reactions in the quark model, is discussed briefly.
