Similar Documents
20 similar documents found (search time: 609 ms)
1.
张子静  吴龙  宋杰  赵远 《Chinese Physics B》2017, 26(10): 104207
Single-photon detectors possess ultra-high sensitivity but cannot directly respond to signal intensity. Conventional methods adopt sampling gates of fixed width and count the number of triggered gates, which yields a photon counting probability from which the echo signal intensity can be estimated. In this paper, we not only count the number of triggered sampling gates but also record the trigger time positions of the photon counting pulses. The photon counting probability density distribution is obtained from the statistics of a series of trigger time positions, and the minimum variance unbiased estimation (MVUE) method is then used to estimate the echo signal intensity. Compared with conventional methods, this approach improves the estimation accuracy of the echo signal intensity because more detection information is acquired. Finally, a proof-of-principle laboratory system is established; the estimation accuracy of the echo signal intensity is discussed, and a high-accuracy intensity image is acquired under low-light-level conditions.
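The conventional trigger-counting baseline described above can be sketched in a few lines. Assuming Poisson photon arrivals, a gate with mean photon number λ triggers with probability 1 − e^(−λ), so the fixed-gate estimator simply inverts the observed trigger fraction. This is the baseline the paper improves on, not its time-resolved MVUE estimator; the function names are illustrative.

```python
import math, random

def estimate_intensity(triggers, gates):
    """Invert the trigger fraction of fixed-width sampling gates.

    For Poisson arrivals a gate triggers with probability p = 1 - exp(-lam),
    so lam = -ln(1 - k/N)."""
    p = triggers / gates
    if p >= 1.0:
        raise ValueError("all gates triggered: intensity not identifiable")
    return -math.log(1.0 - p)

random.seed(0)
true_lam = 0.5          # mean photons per sampling gate
gates = 200_000
# a gate triggers when at least one photon arrives within it:
# the first arrival of a unit-rate process falls before true_lam
triggers = sum(1 for _ in range(gates) if random.expovariate(1.0) < true_lam)
lam_hat = estimate_intensity(triggers, gates)
```

Note the saturation problem visible in the guard clause: once nearly every gate triggers, the counting statistic carries almost no intensity information, which is one motivation for recording trigger time positions instead.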

2.
As the range and bearing resolution of active sonar improves, the envelope probability density function of ocean reverberation is no longer Rayleigh but a non-Rayleigh distribution whose tail is much heavier than that of the Rayleigh distribution. The probability density functions of non-Rayleigh models, however, are complicated and their parameters are difficult to estimate. By applying constant false alarm rate (CFAR) detection to the non-Rayleigh reverberation, the heavily tailed data values are attenuated to the background mean, converting the non-Rayleigh reverberation into an approximately Rayleigh distribution. Because the interference that most corrupts the background power estimate is removed, and target detection is then performed under the Rayleigh model, the proposed detection method is robust and computationally simple. CFAR detection applied to a set of high-resolution sonar data validates the effectiveness of the method.
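The clipping idea can be sketched with a standard cell-averaging CFAR using split reference windows: cells exceeding a multiple of the local reference mean are flagged and attenuated to that mean. The guard/reference sizes and threshold factor below are illustrative, not taken from the paper.

```python
import statistics

def cfar_clip(samples, guard=2, ref=8, factor=4.0):
    """Cell-averaging CFAR: flag cells exceeding `factor` times the local
    reference mean, and clip flagged cells to that mean (tail suppression)."""
    out = list(samples)
    hits = []
    n = len(samples)
    for i in range(n):
        # leading and trailing reference windows, excluding guard cells
        lo = [samples[j] for j in range(max(0, i - guard - ref), max(0, i - guard))]
        hi = [samples[j] for j in range(min(n, i + guard + 1), min(n, i + guard + 1 + ref))]
        refs = lo + hi
        if not refs:
            continue
        mu = statistics.fmean(refs)
        if samples[i] > factor * mu:
            hits.append(i)
            out[i] = mu          # attenuate the heavy-tail sample to background
    return out, hits

samples = [1.0] * 100
samples[50] = 100.0              # a strong tail sample / echo
clipped, hits = cfar_clip(samples)
```

After clipping, the data can be treated as approximately Rayleigh background for the subsequent detection stage, which is the point the abstract makes.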

3.
This paper aims to estimate an unknown density of data with measurement errors as a linear combination of functions from a dictionary. The main novelty is the proposal and investigation of the corrected sparse density estimator (CSDE). Inspired by the penalization approach, we propose the weighted Elastic-net penalized minimal L2-distance method for sparse coefficient estimation, where the adaptive weights come from sharp concentration inequalities. The optimal weighted tuning parameters are obtained from first-order conditions that hold with high probability. Under local coherence or minimal eigenvalue assumptions, non-asymptotic oracle inequalities are derived, and these theoretical results are transposed to obtain support recovery with high probability. Numerical experiments for discrete and continuous distributions confirm the significant improvement obtained by our procedure compared with other conventional approaches. Finally, an application to a meteorology dataset shows that our method is powerful and superior in detecting multi-modal density shapes compared with conventional approaches.
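The weighted Elastic-net estimator itself is involved; as a hedged illustration of the underlying penalized sparse-coefficient fit, here is a plain ℓ1 (Lasso) version solved by iterative soft-thresholding (ISTA). The dictionary, weights, and tuning below are simplified assumptions, not the paper's procedure.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding operator, the proximal map of the l1 penalty
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, iters=500):
    """Iterative soft-thresholding for min_b ||y - Phi b||^2 / 2 + lam ||b||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(Phi.shape[1])
    for _ in range(iters):
        b = soft(b + Phi.T @ (y - Phi @ b) / L, lam / L)
    return b

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 30)) / 10.0   # toy dictionary, ~unit columns
b_true = np.zeros(30)
b_true[[3, 11]] = [2.0, -1.5]                 # sparse ground-truth coefficients
y = Phi @ b_true + 0.01 * rng.standard_normal(100)
b_hat = ista(Phi, y, lam=0.01)
```

The Elastic-net variant adds a ridge term to the penalty, which in ISTA amounts to a shrink-then-threshold update; the paper additionally weights the penalty per coefficient using concentration inequalities.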

4.
Time delay estimation based on double spectral analysis
张卫平  张合  王伟策  刘强  方向 《Applied Acoustics》2008, 27(3): 222-226
Time delay estimation is a key technique in target localization and tracking systems, widely used in underwater acoustics, radar, and acoustic detection. The basic approaches are the cross-correlation method and the phase-spectrum method. The time delay resolution of cross-correlation is roughly inversely proportional to the signal bandwidth, which makes multi-target delays difficult to estimate; the phase-spectrum method can only estimate a single-target delay and suffers from the phase unwrapping problem. This paper proposes a double spectral analysis method for time delay estimation: a spectral estimate is taken of the cross-power spectrum itself, and the spacing between peaks of this second spectrum gives the delay estimate. The method can estimate both single-target and multi-target delays and requires no phase unwrapping. Simulations verify the feasibility of the double-spectrum time delay estimation method.
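The baseline cross-correlation method that the abstract starts from is easily sketched via the FFT; this implements the standard single-delay estimator, not the paper's double-spectrum extension, and the signal parameters are illustrative.

```python
import numpy as np

def xcorr_delay(x, y, fs):
    """Estimate the delay of y relative to x by peak-picking the
    FFT-computed cross-correlation (the classical baseline method)."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    r = np.fft.irfft(X.conj() * Y, n)   # cross-correlation sequence
    lags = np.arange(n)
    lags[lags > n // 2] -= n            # map FFT bins to signed lags
    return lags[np.argmax(r)] / fs

fs = 1000.0
rng = np.random.default_rng(1)
s = rng.standard_normal(1024)           # broadband probe signal
delay = 37                              # samples
y = np.concatenate([np.zeros(delay), s])[:1024]
tau = xcorr_delay(s, y, fs)
```

As the abstract notes, the width of this correlation peak scales inversely with bandwidth, which is what limits the resolution of closely spaced multi-target delays.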

5.
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics, by approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on the Arrhenius parameters for the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature for estimating the integrals needed in data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
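A toy version of the sampling step can be sketched as a random-walk Metropolis sampler over (ln A, E) with synthetic rate-coefficient data; the paper's actual framework (approximate Bayesian computation on summary statistics, adaptive quadrature, Padé–Legendre surrogates) is not reproduced, and all numbers below are illustrative.

```python
import math, random

R = 8.314  # gas constant, J/(mol*K)

def ln_k(lnA, E, T):
    # Arrhenius law in log form: ln k = ln A - E / (R T)
    return lnA - E / (R * T)

random.seed(2)
temps = [1000.0, 1200.0, 1500.0, 1800.0, 2200.0]
lnA_true, E_true, sigma = 25.0, 6.0e4, 0.05
data = [(T, ln_k(lnA_true, E_true, T) + random.gauss(0.0, sigma)) for T in temps]

def log_post(lnA, E):
    # flat priors, Gaussian likelihood on the observed ln k values
    return -sum((lk - ln_k(lnA, E, T)) ** 2 for T, lk in data) / (2 * sigma ** 2)

# random-walk Metropolis over (ln A, E)
lnA, E = 20.0, 5.0e4
lp = log_post(lnA, E)
chain = []
for _ in range(20000):
    lnA_p, E_p = lnA + random.gauss(0.0, 0.2), E + random.gauss(0.0, 2000.0)
    lp_p = log_post(lnA_p, E_p)
    if math.log(random.random()) < lp_p - lp:   # accept uphill, sometimes downhill
        lnA, E, lp = lnA_p, E_p, lp_p
    chain.append((lnA, E))
post = chain[5000:]                              # discard burn-in
lnA_mean = sum(c[0] for c in post) / len(post)
E_mean = sum(c[1] for c in post) / len(post)
```

The strong posterior correlation between ln A and E visible in such chains is exactly the parameter correlation whose impact on ignition predictions the paper quantifies.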

6.
To evaluate the bearing estimation capability of a single vector hydrophone, a sea trial was conducted in the Yellow Sea. The vector hydrophone was suspended from the stern of the receiving ship. Using the mean acoustic intensity and complex acoustic intensity bearing estimation methods, and adopting the criterion of taking the azimuth with the maximum probability density value as the target bearing estimate, the bearing of a target ship sailing on a constant course at constant speed was estimated, and the bearing estimation errors of the two methods were computed. The results show that, with the hydrophone deployed at a depth of 10 m, for a target ship at 10 kn with a closest approach distance of 0.42 km, the mean intensity method gave a horizontal azimuth estimation error of 18° and a polar angle error of 5°, and could estimate the bearing out to 1.17 km from the target ship; the complex intensity method gave a horizontal azimuth error of 13° and a polar angle error of 8°, and could estimate the bearing out to 2.35 km. Under noise interference from the receiving ship, the complex intensity method estimated bearings more accurately than the mean intensity method and could estimate the bearing of more distant noise sources.
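The mean acoustic intensity bearing estimate reduces to an arctangent of time-averaged pressure–velocity products; a sketch with a simulated plane wave follows, where the noise level and signal parameters are illustrative rather than trial values.

```python
import math, random

def bearing_from_intensity(p, vx, vy):
    """Mean acoustic intensity bearing: theta = atan2(<p*vy>, <p*vx>)."""
    Ix = sum(a * b for a, b in zip(p, vx)) / len(p)
    Iy = sum(a * b for a, b in zip(p, vy)) / len(p)
    return math.degrees(math.atan2(Iy, Ix))

random.seed(3)
theta = math.radians(40.0)                    # true azimuth
n, f, fs = 4000, 100.0, 4000.0
p = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
# for a plane wave the particle velocity components are the pressure
# scaled by the direction cosines, here with additive sensor noise
vx = [s * math.cos(theta) + random.gauss(0.0, 0.1) for s in p]
vy = [s * math.sin(theta) + random.gauss(0.0, 0.1) for s in p]
est = bearing_from_intensity(p, vx, vy)
```

Time-averaging the p·v products suppresses the isotropic noise terms, which is why the intensity-based bearing degrades gracefully as the SNR drops.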

7.
陈羽  孟洲  马树青  包长春 《Acta Acustica》2015, 40(6): 807-815
To achieve high-resolution bearing estimation of targets with a vertical vector hydrophone array, a data fusion method based on optimally weighted subband MUSIC is proposed. The method applies the MUSIC algorithm to each partitioned narrowband signal for bearing estimation, performs optimally weighted least-squares fusion of the multi-element bearing estimates within each subband, and obtains the final bearing estimate through weighted histogram statistics. Simulations and sea-trial data processing show that the proposed algorithm outperforms single-element MUSIC and multi-element complex acoustic intensity fusion in bearing estimation accuracy, probability of correct bearing estimation, multi-target resolution, and suppression of noisy subbands.
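A minimal narrowband MUSIC estimator for a single subband and a uniform linear array can be sketched as follows; the paper's per-subband weighting, least-squares fusion, and histogram steps are omitted, and the array geometry is an illustrative assumption.

```python
import numpy as np

def music_spectrum(X, n_src, angles_deg, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
    X: (sensors, snapshots) complex data; d: element spacing in wavelengths."""
    m = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]        # sample covariance
    w, V = np.linalg.eigh(Rxx)               # eigenvalues in ascending order
    En = V[:, : m - n_src]                   # noise subspace
    P = []
    for a in np.deg2rad(angles_deg):
        sv = np.exp(2j * np.pi * d * np.arange(m) * np.sin(a))
        P.append(1.0 / np.real(sv.conj() @ En @ En.conj().T @ sv))
    return np.array(P)

rng = np.random.default_rng(4)
m, snaps, theta = 8, 200, 20.0               # 8 sensors, one source at 20 deg
sv = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.deg2rad(theta)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(sv, s) + 0.1 * (rng.standard_normal((m, snaps))
                             + 1j * rng.standard_normal((m, snaps)))
grid = np.arange(-90.0, 90.5, 0.5)
doa = grid[int(np.argmax(music_spectrum(X, 1, grid)))]
```

In the paper's scheme, one such estimate per subband and per element is produced and then fused with weights reflecting subband reliability.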

8.
The Bayesian perspective on statistics asserts that it makes sense to speak of a probability of an unknown parameter having a particular value. Given a model for an observed, noise-corrupted signal, we may use Bayesian methods to estimate not only the most probable value for each parameter but also their distributions. We present an implementation of the Bayesian parameter estimation formalism developed by G. L. Bretthorst (1990, J. Magn. Reson. 88, 533) using the Metropolis Monte Carlo sampling algorithm to perform the parameter and error estimation. This allows us to make very few assumptions about the shape of the posterior distribution, and allows the easy introduction of prior knowledge about constraints among the model parameters. We present evidence that the error estimates obtained in this manner are realistic, and that the Monte Carlo approach can be used to accurately estimate coupling constants from antiphase doublets in synthetic and experimental data.

9.
张华  许录平  谢强  罗楠 《Acta Physica Sinica》2011, 60(4): 049701
The integrated pulse profile, flux, and period are three important features of X-ray pulsar radiation signals. Applying these features to X-ray pulsar signal detection, a time-domain detection method for periodic X-ray pulsar radiation signals based on Bayesian estimation is proposed. Taking noise observations in the off-pulse region as prior knowledge, the signal probability density function is derived from the Poisson distribution model of the X-ray pulsar radiation signal; its cumulative distribution function serves as the criterion for detecting weak X-ray pulsar signals and extracting the phase offset. Experiments with simulated data and measured data from the RXTE satellite show that the proposed method outperforms comparable detection methods based on the Gaussian distribution model, and yields the signal phase offset to a given accuracy while detecting the signal. Keywords: pulsar, Bayesian estimation, phase measurement, time-domain detection
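The Poisson-model detection criterion can be sketched as a tail-probability test per phase bin, with the off-pulse region supplying the background rate; the counts and threshold below are illustrative, not from the paper.

```python
import math

def poisson_sf(n, lam):
    """P(N >= n) for N ~ Poisson(lam), summed directly for small n."""
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

def detect_bins(counts, lam_bg, alpha=1e-3):
    """Flag phase bins whose photon counts are implausible under noise alone:
    the Poisson survival function plays the role of the CDF-based criterion."""
    return [i for i, c in enumerate(counts) if poisson_sf(c, lam_bg) < alpha]

# toy folded light curve: background rate ~5 counts/bin, pulse in bins 5-6
counts = [5, 4, 6, 3, 5, 30, 28, 4, 5, 6]
hits = detect_bins(counts, lam_bg=5.0)
```

The index of the flagged bins relative to a template profile is what yields the phase offset the abstract mentions.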

10.
11.
Current multi-passive-sensor data association algorithms ignore the error introduced by position estimation uncertainty when constructing the association cost. To address this, a passive sensor data association algorithm based on position estimation uncertainty is proposed. First, the association cost function is built from the Rényi entropy between the probability density functions of the measurement and the pseudo-measurement, which accurately describes the difference between two similar probability density functions; the effectiveness and superiority of the algorithm are then tested experimentally. The results show that, compared with current classical data association algorithms, the proposed algorithm improves both the correctness rate and the speed of data association, and has higher practical value.
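The association-cost idea can be illustrated numerically: the Rényi divergence between two densities, computed by direct integration for Gaussian measurement and pseudo-measurement pdfs. The order α and the distribution parameters are illustrative assumptions, not the paper's settings.

```python
import math

def renyi_div(p_pdf, q_pdf, alpha, lo, hi, n=20000):
    """Numerical Renyi divergence D_a(p||q) = ln(int p^a q^(1-a) dx) / (a-1)."""
    h = (hi - lo) / n
    acc = 0.0
    for i in range(n):          # midpoint rule
        x = lo + (i + 0.5) * h
        acc += p_pdf(x) ** alpha * q_pdf(x) ** (1 - alpha) * h
    return math.log(acc) / (alpha - 1)

def gauss(mu, sig):
    c = sig * math.sqrt(2 * math.pi)
    return lambda x: math.exp(-(x - mu) ** 2 / (2 * sig ** 2)) / c

# association cost between a measurement pdf and two candidate
# pseudo-measurement pdfs: the nearer candidate should cost less
d_near = renyi_div(gauss(0.0, 1.0), gauss(0.2, 1.0), alpha=2.0, lo=-10, hi=10)
d_far  = renyi_div(gauss(0.0, 1.0), gauss(3.0, 1.0), alpha=2.0, lo=-10, hi=10)
```

For equal-variance Gaussians and α = 2 the divergence reduces to the squared mean gap, so the two values above can be checked against 0.04 and 9.0.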

12.
In financial market risk measurement, Value-at-Risk (VaR) techniques have proven to be a very useful and popular tool. Unfortunately, most VaR estimation models suffer from major drawbacks: the lognormal (Gaussian) modeling of the returns does not take into account the observed fat-tailed distributions, and the non-stationarity of the financial instruments severely limits the efficiency of the VaR predictions. In this paper, we present a new approach to VaR estimation based on ideas from the field of information theory and lossless data compression. More specifically, the technique of context modeling is applied to estimate the VaR by conditioning the probability density function on the present context. Tree-structured vector quantization is applied to partition the multi-dimensional state space of both macroeconomic and microeconomic priors into an increasing but limited number of context classes. Each class can be interpreted as a state of aggregation with its own statistical and dynamic behavior, or as a random walk with its own drift and step size. Results on the US S&P500 index, obtained using several evaluation methods, show the strong potential of this approach and prove that it can be applied successfully for, amongst other useful applications, VaR and volatility prediction. The October 1997 crash is identified in time. Received 2 September 2000 / Received in final form 12 October 2000
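The core of context-conditioned VaR estimation, conditioning the loss quantile on a discrete context class, can be sketched with an empirical (historical) estimator; the paper's tree-structured vector quantization of the state space is replaced here by a given context labeling, and the data are toy values.

```python
def empirical_var(returns, level=0.95):
    """One-sided historical VaR: the loss quantile at the given level."""
    losses = sorted(-r for r in returns)
    idx = min(int(level * len(losses)), len(losses) - 1)
    return losses[idx]

def context_var(returns, contexts, ctx, level=0.95):
    """Condition the VaR estimate on a discrete context class, as in
    context modeling: use only returns observed in that class."""
    sub = [r for r, c in zip(returns, contexts) if c == ctx]
    return empirical_var(sub, level)

# toy data: a calm regime and a volatile regime with distinct step sizes
calm = [0.001 * ((-1) ** i) for i in range(100)]
vol = [0.05 * ((-1) ** i) for i in range(100)]
returns = calm + vol
contexts = ["calm"] * 100 + ["volatile"] * 100
v_calm = context_var(returns, contexts, "calm")
v_vol = context_var(returns, contexts, "volatile")
```

An unconditional VaR over the pooled sample would sit between the two regime values, which is precisely the mis-sizing that conditioning on context avoids.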

13.
To address the poor accuracy of radiated noise power spectral density estimation with a single hydrophone at low signal-to-noise ratio, a method is proposed for estimating the power spectral density of ship-radiated noise at 1 m using vertical array measurements and an estimate of the multipath channel transfer function. The method expresses the channel transfer function as a superposition of near-field array manifold vectors of the multipath arrivals, allowing the transfer function to be estimated quickly; applying it to ship-radiated noise power spectral density estimation yields, in a fairly simple way, the power spectral density of the radiated noise at 1 m from the acoustic center, i.e., the spectral source level. The sources of power spectral density estimation error, including channel estimation error and error caused by ambient noise, are analyzed, providing a theoretical basis for reducing the estimation error. Simulation results show that the method performs well in estimating the radiated-noise power spectral density at 1 m.
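The final correction step, dividing the received power spectral density by the squared magnitude of the estimated channel transfer function to recover the 1-m source spectrum, can be sketched as follows. The channel here is a toy two-path sum, not the paper's array-manifold estimator, and the floor value is an illustrative safeguard.

```python
import numpy as np

fs, n = 8000.0, 8192
f = np.fft.rfftfreq(n, 1 / fs)
rng = np.random.default_rng(5)

# toy multipath channel: direct path plus one surface-reflected path
tau1, tau2, a2 = 0.010, 0.0125, -0.6
H = np.exp(-2j * np.pi * f * tau1) + a2 * np.exp(-2j * np.pi * f * tau2)

src = rng.standard_normal(n)                      # white source signal
recv = np.fft.irfft(np.fft.rfft(src) * H, n)      # propagate through channel

psd_recv = np.abs(np.fft.rfft(recv)) ** 2 / n
# compensate the multipath coloration; floor |H|^2 to avoid dividing
# by near-zero values at deep interference notches
psd_src_hat = psd_recv / np.maximum(np.abs(H) ** 2, 1e-3)
```

The deep notches where |H|² is small are exactly where channel estimation error dominates the source-level estimate, as the abstract's error analysis notes.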

14.
In this paper we propose a new gamut-constrained illuminant estimation framework using an improved category correlation method. First, we obtain a set of feasible illuminants by the original gamut mapping method. Then, the probability of each feasible illuminant being the ground-truth illuminant is calculated according to its ability to map the corrected image onto specific colors using the improved category correlation method. Unlike the original category correlation method, to decrease the effect of image noise and the computational complexity, our improved method uses superpixel segments of the input image instead of the entire pixel set when estimating the probability of each feasible illuminant. The best illuminant estimate is then given by the degree to which each feasible illuminant is consistent with the image data. Experimental results show that our improved method outperforms current state-of-the-art gamut mapping methods.

15.
The objective of this study was to evaluate the performance of different algorithms for diffusion parameter estimation with the intravoxel incoherent motion (IVIM) method in diffusion-weighted magnetic resonance imaging (DW-MRI) data analysis. Traditionally, non-linear least squares analysis by means of the Levenberg–Marquardt (LM) algorithm has been used to estimate the parameters obtained from exponential decay data. In this study, we evaluated the Variable Projection (VarPro) curve-fitting algorithm and the performance of the two non-linear regression methods when single and multiple starting points were used. The analysis was done on simulated data to which different amounts of Gaussian noise had been added, and the two methods were compared using the residual sum of squares and the number of failures in data fitting. We conclude that the VarPro algorithm is superior to the LM algorithm for curve fitting in the IVIM method for DW-MRI data analysis.
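A hedged sketch of the estimation problem being benchmarked: the IVIM biexponential model fitted here by a coarse grid search instead of the LM or VarPro algorithms evaluated in the paper. The b-values and grid ranges are illustrative assumptions.

```python
import numpy as np

def ivim(b, f, D, Dstar):
    """IVIM biexponential signal model S(b)/S0 with perfusion fraction f,
    tissue diffusivity D, and pseudo-diffusion coefficient D*."""
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

def grid_fit(b, s):
    """Coarse grid search minimizing the residual sum of squares --
    a crude stand-in for LM or VarPro curve fitting."""
    best, best_rss = None, np.inf
    for f in np.arange(0.05, 0.5, 0.01):
        for D in np.arange(0.0005, 0.003, 0.0001):
            for Dstar in np.arange(0.005, 0.05, 0.001):
                rss = float(np.sum((s - ivim(b, f, D, Dstar)) ** 2))
                if rss < best_rss:
                    best, best_rss = (f, D, Dstar), rss
    return best

b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800, 1000], dtype=float)
s = ivim(b, 0.2, 0.001, 0.02)           # noise-free synthetic decay
f_hat, D_hat, Dstar_hat = grid_fit(b, s)
```

With noise added, the cost surface develops the shallow valleys that make LM sensitive to its starting point, which is the failure mode the multiple-starting-point comparison in the paper probes.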

16.
Roberto da Silva, Fahad Kalil 《Physica A》2012, 391(5): 2119-2128
Many discussions have enlarged the Bibliometrics literature since Hirsch's proposal of the so-called h-index. Ranking papers according to their citations, this index quantifies a researcher by the largest number h of his or her papers that are cited at least h times each. A closed formula for the h-index distribution applicable across distinct databases is not yet known; obtaining such a distribution requires knowledge of the citation distribution of the authors and its specificities. Instead of dealing with randomly chosen researchers, here we address different groups based on distinct databases. The first group is composed of physicists and biologists, with data extracted from the Institute of Scientific Information (ISI); the second group is composed of computer scientists, with data extracted from the Google Scholar system. In this paper, we obtain a general formula for the h-index probability density function (pdf) for groups of authors by using generalized exponentials in the context of escort probability. Our analysis includes several statistical methods to estimate the necessary parameters, and an exhaustive comparison among the candidate distributions describing how citations are distributed among authors. The h-index pdf should be used to classify groups of researchers from a quantitative point of view, which is of interest for replacing obscure qualitative methods.
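The "generalized exponential" ingredient can be illustrated with a Tsallis q-exponential density, which reduces to the ordinary exponential as q → 1 and develops the heavy power-law tail needed for citation-like data; the parameter values and the numerical normalization below are illustrative, not the paper's fitted model.

```python
import math

def q_exp(x, q, lam):
    """Tsallis q-exponential exp_q(-x/lam): a generalized exponential with
    power-law tail x^(-1/(q-1)) for q > 1."""
    base = 1.0 + (q - 1.0) * x / lam
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

def normalized_pdf(q, lam, hi=2000.0, n=200000):
    """Normalize the q-exponential numerically on [0, hi] (midpoint rule)."""
    h = hi / n
    z = sum(q_exp((i + 0.5) * h, q, lam) * h for i in range(n))
    return lambda x: q_exp(x, q, lam) / z

pdf = normalized_pdf(q=1.4, lam=10.0)
# integrate the normalized pdf back over the same grid: should be ~1
h = 2000.0 / 200000
total = sum(pdf((i + 0.5) * h) * h for i in range(200000))
```

Fitting q and λ to observed h-index samples (e.g., by maximum likelihood) is the step the paper carries out with several statistical methods.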

17.
In disease modeling, a key statistical problem is the estimation of lower and upper tail probabilities of health events from given data sets of small size and limited range. Assuming such constraints, we describe a computational framework for the systematic fusion of observations from multiple sources to compute tail probabilities that could not be obtained otherwise due to a lack of lower or upper tail data. The estimation of multivariate lower and upper tail probabilities from a given small reference data set that lacks complete information about such tail data is addressed in terms of pertussis case count data. Fusion of data from multiple sources in conjunction with the density ratio model is used to give probability estimates that are non-obtainable from the empirical distribution. Based on a density ratio model with variable tilts, we first present a univariate fit and, subsequently, improve it with a multivariate extension. In the multivariate analysis, we selected the best model in terms of the Akaike Information Criterion (AIC). Regional prediction, in Washington state, of the number of pertussis cases is approached by providing joint probabilities using fused data from several relatively small samples following the selected density ratio model. The model is validated by a graphical goodness-of-fit plot comparing the estimated reference distribution obtained from the fused data with that of the empirical distribution obtained from the reference sample only.

18.
Different brain imaging devices are presently available to provide images of human functional cortical activity, based on hemodynamic, metabolic or electromagnetic measurements. However, static images of brain regions activated during particular tasks do not convey the information of how these regions are interconnected. The concept of brain connectivity plays a central role in neuroscience, and different definitions of connectivity, functional and effective, have been adopted in the literature. While functional connectivity is defined as the temporal coherence among the activities of different brain areas, effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally among cortical sites. Structural equation modeling (SEM) is the most used method to estimate effective connectivity in neuroscience, and its typical application is on data related to brain hemodynamic behavior tested by functional magnetic resonance imaging (fMRI), whereas the directed transfer function (DTF) method is a frequency-domain approach based on both a multivariate autoregressive (MVAR) modeling of time series and on the concept of Granger causality.

This study presents advanced methods for the estimation of cortical connectivity by applying SEM and DTF on the cortical signals estimated from high-resolution electroencephalography (EEG) recordings, since these signals exhibit a higher spatial resolution than conventional cerebral electromagnetic measures. To estimate correctly the cortical signals, we used a subject's multicompartment head model (scalp, skull, dura mater, cortex) constructed from individual MRI, a distributed source model and a regularized linear inverse source estimates of cortical current density. Before the application of SEM and DTF methodology to the cortical waveforms estimated from high-resolution EEG data, we performed a simulation study, in which different main factors (signal-to-noise ratio, SNR, and simulated cortical activity duration, LENGTH) were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). The statistical analysis returned that during simulations, both SEM and DTF estimators were able to correctly estimate the imposed connectivity patterns under reasonable operative conditions, that is, when data exhibit an SNR of at least 3 and a LENGTH of at least 75 s of nonconsecutive EEG recordings at 64 Hz of sampling rate.

Hence, effective and functional connectivity patterns of cortical activity can be effectively estimated under general conditions met in any practical EEG recordings, by combining high-resolution EEG techniques and linear inverse estimation with SEM or DTF methods. We conclude that the estimation of cortical connectivity can be performed not only with hemodynamic measurements, but also with EEG signals treated with advanced computational techniques.


19.
In this work, a novel method for detecting low-intensity fast-moving objects with low-cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low-density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability, along with the lack of specific electronics and mechanical devices for canceling the effect of distortion, makes background image identification very difficult. As a result, the identification of targets is performed under low signal-to-noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detection, a preliminary denoising step enabling better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and against traditional drift estimation methods such as Kalman filtering and the running average. The simulation results show that the proposed scheme has superior performance.
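The running-average drift estimator that the authors use as a baseline is easy to sketch; here a 1-D toy sequence stands in for the FPA frames, with a slow additive drift and a bright fast mover, and the target shows up in the background-subtracted residual. All sizes and levels are illustrative.

```python
def running_average_background(frames, alpha=0.05):
    """Exponential running-average drift estimation, the classical baseline:
    B_t = (1 - alpha) * B_{t-1} + alpha * F_t, residual R_t = F_t - B_{t-1}."""
    bg = list(frames[0])
    residuals = []
    for frame in frames:
        res = [f - b for f, b in zip(frame, bg)]       # subtract before update
        bg = [(1 - alpha) * b + alpha * f for f, b in zip(frame, bg)]
        residuals.append(res)
    return residuals

# 1-D "frames": slow thermal drift plus a target moving one pixel per frame
n_pix, n_frames = 64, 40
frames = []
for t in range(n_frames):
    frame = [0.01 * t] * n_pix                          # slow drift
    frame[t % n_pix] += 5.0                             # bright fast mover
    frames.append(frame)
res = running_average_background(frames, alpha=0.05)
hit = max(range(n_pix), key=lambda i: res[-1][i])
```

Because the mover occupies each pixel for only one frame, it barely contaminates the background estimate, while the slow drift is tracked and cancelled; the EMD-based estimator in the paper targets the regimes where this simple filter fails.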

20.
Lévy processes have been widely used to model a large variety of stochastic processes under anomalous diffusion. In this note we show that Lévy processes play an important role in the study of the Generalized Langevin Equation (GLE). The solution to the GLE is proposed using stochastic integration in the sense of convergence in probability. Properties of the solution processes are obtained and numerical methods for stochastic integration are developed and applied to examples. Time series methods are applied to obtain estimation formulas for parameters related to the solution process. A Monte Carlo simulation study shows the estimation of the memory function parameter. We also estimate the stability index parameter when the noise is a Lévy process.
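Simulating the Lévy noise itself can be sketched with the Chambers–Mallows–Stuck method for symmetric α-stable increments (β = 0, α ≠ 1); the GLE solution machinery and the estimation formulas of the paper are not reproduced, and the stability index below is an illustrative choice.

```python
import math, random

def stable_symmetric(alpha, rng):
    """One draw from a symmetric alpha-stable law via the
    Chambers-Mallows-Stuck method (beta = 0, alpha != 1)."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * V) / math.cos(V) ** (1 / alpha)
            * (math.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = random.Random(7)
alpha = 1.5
steps = [stable_symmetric(alpha, rng) for _ in range(50000)]
# Levy flight: cumulative sum of the heavy-tailed increments
path = [0.0]
for s in steps:
    path.append(path[-1] + s)
```

For α = 2 the formula reduces to a Gaussian draw, so the same generator covers the Brownian limit; for α < 2 the occasional very large jumps in `steps` are the hallmark of anomalous diffusion.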
