Similar Literature
20 similar documents found.
1.
In this paper, a multi-space data association algorithm based on the wavelet transform is proposed. In addition to carrying out traditional hard-logic data association in the measurement space, the new algorithm updates the state of the target in the pattern space. This significantly reduces the effect of misassociations caused by a complicated environment. Simulation results show that, in complicated clutter environments, the multi-space data association performs much better than existing data association algorithms such as the nearest-neighbor standard filter (NNSF), probabilistic data association (PDA) and joint probabilistic data association (JPDA). Its computational load is also much lower than that of these existing methods, and it requires no prior information about the environment. In complicated clutter environments, compared with the other data association algorithms, the new method is robust, reliable and stable.
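For reference, the following is a minimal sketch of the measurement-space gating and nearest-neighbor association step on which the NNSF baseline mentioned above relies, assuming a chi-square gate and made-up measurement values; it does not implement the multi-space/wavelet algorithm of the paper.

```python
# Hypothetical illustration of measurement-space gating and nearest-neighbor
# association (the NNSF baseline named in the abstract); all values are toy data.
import numpy as np

def nn_associate(z_hat, S, measurements, gate=9.21):
    """Pick the gated measurement with the smallest Mahalanobis distance.

    z_hat:        predicted measurement, shape (d,)
    S:            innovation covariance, shape (d, d)
    measurements: candidate measurements, shape (m, d)
    gate:         chi-square gate threshold (9.21 ~ 99% for d = 2)
    """
    S_inv = np.linalg.inv(S)
    d2 = np.array([(z - z_hat) @ S_inv @ (z - z_hat) for z in measurements])
    inside = np.where(d2 <= gate)[0]
    if inside.size == 0:
        return None                       # no measurement falls inside the gate
    return inside[np.argmin(d2[inside])]  # index of the nearest-neighbor measurement

# toy example: two candidate measurements around a predicted position
z_hat = np.array([10.0, 5.0])
S = np.array([[2.0, 0.3], [0.3, 1.5]])
zs = np.array([[10.4, 5.2], [14.0, 9.0]])
print(nn_associate(z_hat, S, zs))  # -> 0
```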

2.
We develop a new modeling and solution method for stochastic programming problems that include a joint probabilistic constraint in which the multirow random technology matrix is discretely distributed. We binarize the probability distribution of the random variables in such a way that we can extract a threshold partially defined Boolean function (pdBf) representing the probabilistic constraint. We then construct a tight threshold Boolean minorant for the pdBf. Any separating structure of the tight threshold Boolean minorant defines sufficient conditions for the satisfaction of the probabilistic constraint and takes the form of a system of linear constraints. We use the separating structure to derive three new deterministic formulations for the studied stochastic problem, and we derive a set of strengthening valid inequalities. A crucial feature of the new integer formulations is that the number of integer variables does not depend on the number of scenarios used to represent uncertainty. The computational study, based on instances of the stochastic capital rationing problem, shows that the mixed-integer linear programming formulations are orders of magnitude faster to solve than the mixed-integer nonlinear programming formulation. The method integrating the valid inequalities in a branch-and-bound algorithm has the best performance.
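For intuition, the sketch below checks whether a candidate decision x satisfies a joint probabilistic constraint P(T(ω)x ≥ ξ(ω)) ≥ p over equally likely discrete scenarios, with made-up scenario data; the paper's contribution is the pdBf/Boolean-minorant reformulation, which is not reproduced here.

```python
# Minimal sketch of checking a joint probabilistic constraint
# P( T(omega) x >= xi(omega) ) >= p  over a finite set of equally likely scenarios.
import numpy as np

rng = np.random.default_rng(0)
n_scen, n_rows, n_vars = 200, 3, 4
T = rng.normal(1.0, 0.2, size=(n_scen, n_rows, n_vars))   # random technology matrices
xi = rng.normal(2.0, 0.5, size=(n_scen, n_rows))          # random right-hand sides
p = 0.9

def joint_cc_satisfied(x):
    lhs = T @ x                         # shape (n_scen, n_rows)
    ok = np.all(lhs >= xi, axis=1)      # joint satisfaction, scenario by scenario
    return ok.mean() >= p

x = np.array([1.0, 1.0, 1.0, 1.0])
print(joint_cc_satisfied(x))
```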

3.
Joint models for longitudinal and survival data are routinely used in clinical trials or other studies to assess a treatment effect while accounting for longitudinal measures such as patient-reported outcomes. In the Bayesian framework, the deviance information criterion (DIC) and the logarithm of the pseudo-marginal likelihood (LPML) are two well-known Bayesian criteria for comparing joint models. However, these criteria do not provide separate assessments of each component of the joint model. In this article, we develop a novel decomposition of DIC and LPML to assess the fit of the longitudinal and survival components of the joint model, separately. Based on this decomposition, we then propose new Bayesian model assessment criteria, namely, ΔDIC and ΔLPML, to determine the importance and contribution of the longitudinal (survival) data to the model fit of the survival (longitudinal) data. Moreover, we develop an efficient Monte Carlo method for computing the conditional predictive ordinate statistics in the joint modeling setting. A simulation study is conducted to examine the empirical performance of the proposed criteria and the proposed methodology is further applied to a case study in mesothelioma. Supplementary materials for this article are available online.
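As background, the sketch below computes the standard (undecomposed) DIC from MCMC output; the ΔDIC and ΔLPML criteria proposed in the article are decompositions built on top of such quantities and are not reproduced here.

```python
# Generic DIC computation from MCMC output (not the Delta-DIC decomposition).
# loglik_draws[s] is the total log-likelihood at posterior draw s; loglik_at_mean
# is the log-likelihood evaluated at the posterior mean of the parameters.
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    d_bar = np.mean(-2.0 * np.asarray(loglik_draws))  # posterior mean deviance
    d_hat = -2.0 * loglik_at_mean                      # deviance at the posterior mean
    p_d = d_bar - d_hat                                # effective number of parameters
    return d_bar + p_d                                 # DIC

# toy usage with made-up values
print(dic(loglik_draws=[-120.3, -119.8, -121.1], loglik_at_mean=-118.9))
```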

4.
In this paper, we introduce a Bayesian analysis for mixtures of distributions belonging to the exponential family. As a special case we consider a mixture of normal exponential distributions, including joint modeling of the mean and variance. We also consider joint modeling of the mean and variance heterogeneity. Markov chain Monte Carlo (MCMC) methods are used to obtain the posterior summaries of interest. We also introduce and apply an EM algorithm in which the maximization is carried out with the Fisher scoring algorithm. Finally, we analyze real data sets to illustrate the proposed methodology.
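A plain EM algorithm for a two-component normal mixture is sketched below as a baseline; the article's EM uses Fisher scoring in the M-step and handles more general exponential-family mixtures with joint mean-variance modeling, which this toy version does not.

```python
# Standard EM for a two-component Gaussian mixture (baseline sketch only).
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(n_iter):
        # E-step: responsibilities of component 1
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
print(em_two_normals(data))
```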

5.
This paper first presents an improved method of temporal decomposition for minimising the searching space of scheduling problems. Next, the effects of the temporal decomposition procedure in scheduling problems are analysed. It is theoretically shown that the complexity of a scheduling algorithm using decomposed subsets varies inversely with that of the decomposition procedure. Therefore, the efficiency of the overall scheduling algorithm is strongly related to the decomposability of the set of operations to be processed on each machine. This decomposability is evaluated using a probabilistic approach where the probability distributions of the scheduling parameters are obtained from historical workshop data. The average number of decomposed subsets and the average size of these subsets are estimated. Both theoretical analysis and simulation results have revealed that the decomposition procedure leads to optimal effects when some conditions on scheduling parameters are met.
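As an illustration of the decomposition idea, the sketch below splits the operations of one machine into independently schedulable subsets whenever their assumed [release, due] time windows do not overlap; this is a generic rule for illustration, not the improved decomposition procedure or the probabilistic evaluation described in the abstract.

```python
# Generic temporal decomposition: group operations whose time windows overlap,
# and cut whenever the next operation starts after every pending due date.
def decompose(operations):
    """operations: list of (name, release, due); returns independently schedulable subsets."""
    ops = sorted(operations, key=lambda o: o[1])       # sort by release time
    subsets, current, horizon = [], [], float("-inf")
    for name, release, due in ops:
        if current and release >= horizon:             # window does not overlap current block
            subsets.append(current)
            current, horizon = [], float("-inf")
        current.append((name, release, due))
        horizon = max(horizon, due)
    if current:
        subsets.append(current)
    return subsets

jobs = [("a", 0, 4), ("b", 2, 6), ("c", 7, 10), ("d", 11, 15)]
print(decompose(jobs))   # three independent subsets: {a, b}, {c}, {d}
```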

6.
We present an approach for penalized tensor decomposition (PTD) that estimates smoothly varying latent factors in multiway data. This generalizes existing work on sparse tensor decomposition and penalized matrix decompositions, in a manner parallel to the generalized lasso for regression and smoothing problems. Our approach presents many nontrivial challenges at the intersection of modeling and computation, which are studied in detail. An efficient coordinate-wise optimization algorithm for PTD is presented, and its convergence properties are characterized. The method is applied to simulated data as well as to real data on flu hospitalizations in Texas and motion-capture data from video cameras. These results show that our penalized tensor decomposition can offer major improvements over existing methods for analyzing multiway data that exhibit smooth spatial or temporal features.
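For orientation, the sketch below extracts a single unpenalized rank-1 factor from a 3-way tensor with the higher-order power method; PTD adds smoothness penalties on the factor vectors, which are omitted in this baseline.

```python
# Unpenalized rank-1 factor extraction for a 3-way tensor (higher-order power method).
import numpy as np

def rank1_factors(T, n_iter=100):
    I, J, K = T.shape
    u, v, w = np.ones(I), np.ones(J), np.ones(K)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)   # scale of the rank-1 component
    return lam, u, v, w

rng = np.random.default_rng(2)
a, b, c = rng.normal(size=5), rng.normal(size=4), rng.normal(size=3)
T = np.einsum('i,j,k->ijk', a, b, c) + 0.01 * rng.normal(size=(5, 4, 3))
lam, u, v, w = rank1_factors(T)
print(round(lam, 2))
```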

7.
The bias phenomenon in multiple target tracking has been observed for a long time. This paper is devoted to a study of the bias resulting from miscorrelation in data association. One result of this paper is a necessary condition for miscorrelation to cause bias. Relying on this necessary condition and a model of the data association process, techniques are developed that give general directions for where and how to compensate the bias related to miscorrelation in general tracking algorithms. Case studies on the bias phenomenon in two tracking algorithms, namely global nearest neighbor (GNN) and joint probabilistic data association (JPDA), are presented as an application of the ideas and results of this paper. The outcomes of these examples illustrate and strongly support our results. A discussion of several statistical issues is given at the end of the paper, in which the behavior of the bias in GNN and JPDA is studied.
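As a reference point, the sketch below performs a GNN-style association by solving a one-shot assignment problem over a made-up cost matrix of squared Mahalanobis distances; the paper's bias analysis and compensation techniques are not shown.

```python
# Global nearest neighbor (GNN) association as an assignment problem:
# minimize the summed distances between tracks and measurements.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.4, 5.1, 9.0],    # cost[t, m] = squared Mahalanobis distance
                 [6.2, 0.7, 4.3]])   # between track t and measurement m (toy values)
tracks, meas = linear_sum_assignment(cost)
print(list(zip(tracks, meas)))       # -> [(0, 0), (1, 1)]
```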

8.
This study is concerned with data association for bearings-only multi-target tracking using two stationary observers in a 2-D scenario. Since each target moves at a constant speed, two objective functions, namely distance and slope differences, are proposed, and a multi-objective-ant-colony-optimization-based algorithm is introduced to perform data association by minimizing the two objective functions. Numerical simulations are conducted to evaluate the effectiveness of the proposed algorithm in comparison with the data association results of the joint maximum likelihood (ML) method under different noise levels and track configurations.
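The geometry behind such objective functions can be illustrated by triangulating a target from the bearings measured by the two stationary observers; the sketch below assumes bearings measured from the x-axis and does not include the ant-colony data association.

```python
# Triangulating a target position from the bearings of two stationary observers.
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ranges t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

target = np.array([3.0, 4.0])
p1, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
b1 = np.arctan2(*(target - p1)[::-1])   # true bearing from observer 1
b2 = np.arctan2(*(target - p2)[::-1])   # true bearing from observer 2
print(triangulate(p1, b1, p2, b2))      # ~ [3., 4.]
```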

9.
Adaptive principal component analysis is prohibitively expensive when a large-scale data matrix must be updated frequently. We therefore consider the truncated URV decomposition, which allows faster updates to its approximation of the singular value decomposition while still producing a good enough approximation to recover principal components. Specifically, we propose an efficient algorithm for the truncated URV decomposition when the data matrix is updated by a rank-one matrix. After the algorithm development, the truncated URV decomposition is successfully applied to the template tracking problem in a video sequence proposed by Matthews et al. [IEEE Trans. Pattern Anal. Mach. Intell., 26:810-815, 2004], which requires computation of the principal components of the augmented image matrix at every iteration. From the template tracking experiments, we show that, in adaptive applications, the truncated URV decomposition maintains a good approximation to the principal component subspace more efficiently than other procedures.

10.
Factor clustering methods have been developed in recent years thanks to improvements in computational power. These methods perform a linear transformation of the data and a clustering of the transformed data, optimizing a common criterion. Probabilistic distance (PD) clustering is an iterative, distribution-free, probabilistic clustering method. Factor PD-clustering (FPDC) is based on PD-clustering and involves a linear transformation of the original variables into a reduced number of orthogonal ones, using a criterion common with PD-clustering. This paper demonstrates that the Tucker3 decomposition can be used to accomplish this transformation. Factor PD-clustering alternately applies the Tucker3 decomposition and PD-clustering on the transformed data until convergence is achieved. This method can significantly improve the performance of the PD-clustering algorithm; large data sets can thus be partitioned into clusters with increased stability and robustness of the results. Real and simulated data sets are used to compare FPDC with its main competitors: it performs equally well when clusters are elliptically shaped, but outperforms its competitors for non-Gaussian-shaped clusters or noisy data.

11.
Chance constraints are widely used for modeling solution reliability in optimization problems under uncertainty. Owing to the difficulty of checking the feasibility of the probabilistic constraint and the non-convexity of the feasible region, chance constrained problems are generally solved through approximations. A joint chance constrained problem enforces that several constraints be satisfied simultaneously, and it is more complicated than an individual chance constrained problem. This work investigates a tractable robust optimization approximation framework for solving the joint chance constrained problem. Various robust counterpart optimization formulations are derived based on different types of uncertainty set. To improve the quality of the robust optimization approximation, a two-layer algorithm is proposed: the inner layer optimizes over the size of the uncertainty set, and the outer layer optimizes over the parameter t used in the upper bound on the indicator function. Numerical studies demonstrate that the proposed method can lead to solutions close to the true solution of a joint chance constrained problem.

12.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging due to its computational complexity in seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation–maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared with the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with a quasi-Newton method not only substantially reduces the computational complexity and run time, but also retains comparable estimation efficiency.

13.
We propose the use of a third-order approximation for the representation of probabilistic data in expert systems and compare it to tree-structured representations. The differences are illustrated using the example of a reliability problem. We show that using the third-order representation results in significantly reduced losses as compared to tree structures, with a small increase in computational complexity. We present heuristic and exact techniques to determine the optimal third-order representation and propose a decomposition technique that allows the exact algorithm to be efficiently used for solving large problem instances.

14.
In previous publications, the authors have introduced the notion of stochastic satisfiability modulo theories (SSMT) and the corresponding SiSAT solving algorithm, which provide a symbolic method for the reachability analysis of probabilistic hybrid systems. SSMT extends satisfiability modulo theories (SMT) with randomized (or stochastic), existential, and universal quantification, as known from stochastic propositional satisfiability. In this paper, we extend the SSMT-based procedures to the symbolic analysis of concurrent probabilistic hybrid systems. After formally introducing the computational model, we provide a mechanized translation scheme to encode probabilistic bounded reachability problems of concurrent probabilistic hybrid automata as linearly sized SSMT formulae, which in turn can be solved by the SiSAT tool. We furthermore propose an algorithmic enhancement which tailors SiSAT to probabilistic bounded reachability problems by caching and reusing solutions obtained on bounded reachability problems of smaller depth. An essential part of this article is devoted to a case study from the networked automation systems domain. We explain in detail the formal model in terms of concurrent probabilistic automata, its encoding into the SiSAT modeling language, and finally the automated quantitative analysis.
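To illustrate what a probabilistic bounded reachability query computes, the sketch below evaluates the probability of reaching a target state within k steps of a small made-up discrete-time Markov chain; it does not involve SSMT, SiSAT, or the hybrid and concurrent structure handled in the paper.

```python
# Probabilistic bounded reachability on a toy discrete-time Markov chain.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],    # made-up transition matrix; state 2 is the target
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])   # target made absorbing

def bounded_reach_prob(P, target, start, k):
    prob = np.zeros(P.shape[0]); prob[target] = 1.0   # P(reach within 0 steps)
    for _ in range(k):
        prob = P @ prob                                # one-step backward recursion
        prob[target] = 1.0
    return prob[start]

print(bounded_reach_prob(P, target=2, start=0, k=5))
```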

15.
On a multivariate Pareto distribution
A multivariate distribution possessing arbitrarily parameterized Pareto margins is formulated and studied. The distribution is believed to allow for an adequate modeling of dependent heavy tailed risks with a non-zero probability of simultaneous loss. Numerous links to certain existing probabilistic models, as well as seemingly useful characteristic results are proved. Expressions for, e.g., decumulative distribution functions, densities, (joint) moments and regressions are developed. An application to the classical pricing problem is considered, and some formulas are derived using the recently introduced economic weighted premium calculation principles.
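A simple way to generate dependent vectors with Pareto (Lomax) margins is the classical gamma-frailty construction sketched below; it is shown for intuition only and may differ from the multivariate Pareto distribution studied in the paper.

```python
# Gamma-frailty construction: with Lambda ~ Gamma(alpha, 1) shared across components
# and independent unit exponentials E_i, X_i = sigma_i * E_i / Lambda has survival
# function (1 + x/sigma_i)^(-alpha); the shared Lambda induces positive dependence.
import numpy as np

def mv_pareto_sample(n, alpha, scales, rng=np.random.default_rng(3)):
    lam = rng.gamma(shape=alpha, scale=1.0, size=(n, 1))    # shared frailty
    e = rng.exponential(scale=1.0, size=(n, len(scales)))   # independent exponentials
    return np.asarray(scales) * e / lam

x = mv_pareto_sample(100_000, alpha=3.0, scales=[1.0, 2.0])
print(np.mean(x[:, 0] > 1.0), (1 + 1.0) ** -3.0)   # empirical vs. theoretical tail
```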

16.
Multiresolution signal processing has been employed in image processing and computer vision to achieve performance that cannot be obtained using conventional signal processing techniques at a single resolution level [1,2,5,6]. In this paper, we combine the idea of multiresolution analysis with traditional Kalman filtering and propose a new fusion algorithm based on a single sensor and multiple models for maneuvering target tracking.
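The sketch below shows one predict/update cycle of a standard single-model Kalman filter with toy constant-velocity matrices; the fusion algorithm above combines such filtering with wavelet-based multiresolution analysis, which is not reproduced here.

```python
# One predict/update cycle of a standard Kalman filter (toy constant-velocity model).
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0
F = np.array([[1, dt], [0, 1]])                 # constant-velocity dynamics
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                      # position-only measurement
R = np.array([[0.5]])
x, P = np.array([0.0, 1.0]), np.eye(2)
print(kalman_step(x, P, z=np.array([1.2]), F=F, Q=Q, H=H, R=R))
```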

17.
Probability theory has become the standard framework in the field of mobile robotics because of the inherent uncertainty associated with sensing and acting. In this paper, we show that the theory of belief functions, with its ability to distinguish between different types of uncertainty, can provide significant advantages over probabilistic approaches in the context of robotics. We do so by presenting solutions to the essential problems of simultaneous localization and mapping (SLAM) and planning based on belief functions. For SLAM, we show how the joint belief function over the map and the robot's poses can be factored and efficiently approximated using a Rao-Blackwellized particle filter, resulting in a generalization of the popular probabilistic FastSLAM algorithm. Our SLAM algorithm produces occupancy grid maps in which belief functions explicitly represent additional information about missing and conflicting measurements compared to probabilistic grid maps. Forward and inverse sensor models form the basis of this SLAM algorithm, and we present general evidential models for range sensors such as sonar and laser scanners. Using the generated evidential grid maps, we show how optimal decisions can be made for path planning and active exploration. To demonstrate the effectiveness of our evidential approach, we apply it to two real-world datasets in which a mobile robot has to explore unknown environments and solve different planning problems. Finally, we provide a quantitative evaluation and show that the evidential approach outperforms a probabilistic one both in terms of map quality and navigation performance.
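The evidential fusion step underlying such grid maps can be illustrated with Dempster's rule of combination on the two-element occupancy frame; the mass values below are made up, and the full Rao-Blackwellized SLAM machinery is not shown.

```python
# Dempster's rule of combination for two basic belief assignments over {occ, free}.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

OCC, FREE = frozenset({"occ"}), frozenset({"free"})
THETA = OCC | FREE                              # total ignorance
m_sonar = {OCC: 0.6, THETA: 0.4}                # sensor 1: evidence for "occupied"
m_laser = {FREE: 0.3, THETA: 0.7}               # sensor 2: weak evidence for "free"
print(dempster_combine(m_sonar, m_laser))
```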

18.
§1. Discrete Wavelet Transformation. The idea of multiresolution analysis is that we decompose the signal into different resolution levels using the wavelet transformation; the lower-resolution signal is decomposed into a smoothing signal, while the signal that exists at the higher resolution leve…
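A single level of the Haar discrete wavelet transform, the simplest instance of the decomposition described in this fragment, can be sketched as follows (an even-length signal is assumed).

```python
# Single-level Haar wavelet decomposition: smooth (low-resolution) part + detail part.
import numpy as np

def haar_step(signal):
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    smooth = (even + odd) / np.sqrt(2.0)     # low-resolution approximation
    detail = (even - odd) / np.sqrt(2.0)     # high-resolution detail
    return smooth, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, det = haar_step(x)
print(approx, det)
```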

19.
The aim of the present paper is to formulate probabilistic models for random variables with inconsistent data in order to facilitate accurate reliability assessment. Traditionally, a set of observations of a random variable is available, on the basis of which a distribution is identified. However, as will be illustrated, the data relevant to extreme events do not necessarily follow the same distribution as the rest of the data, yet they carry little weight in the fitted distribution because of their small quantity. Adopting a single probabilistic distribution to describe random variables with such inconsistent data can cause large errors in the reliability assessment, especially for extreme events. A new formulation of probabilistic modeling is proposed here for this type of random variable. The inconsistency within the data set is identified and used to divide the set into divisions; each division is described by its own distribution, and the divisions are finally unified into one framework. The relevant problems in the modeling (e.g., the identification of the boundary between the divisions, the definition of the probability distributions, and the unification of the distributions into one framework) are presented and solved. The realization of the proposed approach in practical numerical analysis is then investigated. Finally, two examples are presented to illustrate the application from different perspectives.
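One simple way to realize "different divisions, different distributions" is a peaks-over-threshold split: fit the bulk with a normal distribution and the exceedances over a threshold with a generalized Pareto distribution. The threshold below is an arbitrary empirical quantile, not the boundary-identification procedure developed in the paper, and the data are synthetic.

```python
# Bulk/tail split: normal distribution for the bulk, generalized Pareto for the tail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0, 1, 5000), rng.pareto(3, 200) + 2.0])  # bulk + heavy tail

threshold = np.quantile(data, 0.95)
bulk, tail = data[data <= threshold], data[data > threshold]

mu, sigma = stats.norm.fit(bulk)                                   # bulk division
xi, loc, beta = stats.genpareto.fit(tail - threshold, floc=0.0)    # tail division
print(f"bulk N({mu:.2f}, {sigma:.2f}), tail GPD(shape={xi:.2f}, scale={beta:.2f})")
```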

20.
Probability constraints play a key role in optimization problems involving uncertainty. These constraints require that an inequality system depending on a random vector be satisfied with a sufficiently high probability. In specific settings, copulæ can be used to model probabilistic constraints with uncertainty on the left-hand side. In this paper, we provide eventual convexity results for the feasible set of decisions under local generalized concavity properties of the constraint mappings and of the copulæ involved. The results cover all Archimedean copulæ. We consider probabilistic constraints in which the decision and the random vector are separated, i.e., left/right-hand-side uncertainty. In order to solve the underlying optimization problem, we propose and analyse the convergence of a regularized supporting hyperplane method: a stabilized variant of generalized Benders decomposition. The algorithm is tested on a large set of instances involving several copulæ, among which the Gaussian copula. A numerical comparison with a (pure) supporting hyperplane algorithm and a general-purpose solver for non-linear optimization is also presented.
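The role of the copula in a joint probabilistic constraint can be illustrated with the Gumbel (Archimedean) copula: if two individual constraints hold with marginal probabilities u1 and u2, the copula gives their joint probability C(u1, u2). The parameter values below are arbitrary, and the supporting hyperplane algorithm itself is not shown.

```python
# Joint probability of two constraints under a Gumbel (Archimedean) copula.
import numpy as np

def gumbel_copula(u1, u2, theta):
    """Gumbel copula C(u1, u2) with dependence parameter theta >= 1."""
    t = (-np.log(u1)) ** theta + (-np.log(u2)) ** theta
    return np.exp(-t ** (1.0 / theta))

u1, u2 = 0.95, 0.92          # marginal satisfaction probabilities of the two constraints
for theta in (1.0, 2.0, 5.0):
    print(theta, gumbel_copula(u1, u2, theta))   # theta = 1 recovers independence u1*u2
```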
