Similar Documents
20 similar documents found in 15 ms
1.
In this paper, we study simple algorithms for three-dimensional tracking of objects in a stereo video sequence, combining optical flow and stereo vision. This method cannot handle the occlusion of moving objects when they disappear behind an obstacle. To improve its performance, we propose the use of adaptive filters and neural networks to predict the expected instantaneous velocities of the objects. In previous work, this system was successfully validated for two-dimensional tracking.
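The velocity-prediction idea can be sketched with a normalized LMS adaptive filter. This is a hedged illustration, not the authors' implementation; the function name, filter order and step size are hypothetical.

```python
import numpy as np

def nlms_predict(history, order=3, mu=0.5, epochs=200):
    """Train a normalized-LMS adaptive filter on a 1-D velocity series
    and return a one-step-ahead prediction of the next velocity."""
    w = np.zeros(order)
    for _ in range(epochs):
        for t in range(order, len(history)):
            x = history[t - order:t][::-1]      # most recent sample first
            e = history[t] - w @ x              # prediction error
            w += mu * e * x / (x @ x + 1e-12)   # normalized LMS update
    return w @ history[-1:-order - 1:-1]        # predict the unseen next sample

# During an occlusion, a tracker could substitute the predicted
# velocity for the missing optical-flow measurement.
v = np.arange(20, dtype=float)                  # a steadily accelerating object
pred = nlms_predict(v)
```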

2.
In this paper a novel approach for recognizing actions in video sequences is presented, where the information obtained from the segmentation and tracking algorithms is used as input data. First, the input data are fuzzified; this makes it possible to manage the uncertainty inherent in the information obtained from low-level and medium-level vision tasks, to unify the information obtained from different vision algorithms into a homogeneous representation, and to aggregate the characteristics of the analyzed scenario and the objects in motion. Another contribution is the representation of actions by means of a finite automaton, whose input symbols are generated by comparing objects and actions; i.e., the main reasoning process is based on the operation of automata capable of managing fuzzy representations of all video data. Experiments on several real traffic video sequences show encouraging results, especially since no training algorithms are required to obtain the predefined actions to be identified.
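Fuzzification of a crisp measurement (e.g. an object's speed) is typically done with membership functions. A minimal sketch using a triangular membership function; the fuzzy set and its breakpoints are illustrative, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a speed of 40 belongs to a hypothetical fuzzy set
# "medium speed" defined as a triangle over [20, 50, 80].
medium = tri(40, 20, 50, 80)   # → 0.666...
```

In the paper's pipeline, such membership degrees over scene and object features would feed the input-symbol generation for the automaton.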

3.
Creation of a modern infrastructure for multimedia data (voice, data, video) transmission along long transport routes is one of the most important problems in designing and building new highways and in operating existing ones. The solution of this problem is especially relevant for countries with a vast territory, such as the Russian Federation. Such a communication infrastructure makes it possible to provide (i) operational control of the technical parameters of a route by means of high-speed data transfer from sensors and data units to the control center; (ii) security monitoring of route sections and strategically important objects using data from video surveillance systems; and (iii) voice communication (IP telephony) and transmission of multimedia information between stationary and mobile objects along long highways, as well as communication with the control center.

4.
5.
Data assimilation refers to the methodology of combining dynamical models and observed data with the objective of improving state estimation. Most data assimilation algorithms are viewed as approximations of the Bayesian posterior (filtering distribution) on the signal given the observations. Some of these approximations are controlled, such as particle filters, which may be refined to produce the true filtering distribution in the large particle number limit, and some are uncontrolled, such as ensemble Kalman filter methods, which do not recover the true filtering distribution in the large ensemble limit. Other data assimilation algorithms, such as cycled 3DVAR methods, may be thought of as controlled estimators of the state in the small observational noise scenario, but are also uncontrolled in general in relation to the true filtering distribution. For particle filters and ensemble Kalman filters it is of practical importance to understand how and why data assimilation methods can be effective when used with a fixed small number of particles, since for many large-scale applications it is not practical to deploy algorithms close to the large particle limit asymptotic. In this paper, the authors address this question for particle filters and, in particular, study their accuracy (in the small noise limit) and ergodicity (for noisy signal and observation) without appealing to the large particle number limit. The authors first review the accuracy and minorization properties of the true filtering distribution, working in the setting of conditional Gaussianity for the dynamics-observation model. They then show that these properties are inherited by optimal particle filters for any fixed number of particles, and use the minorization to establish ergodicity of the filters. For completeness, the authors also prove large particle number consistency results for the optimal particle filters, by writing the update equations for the underlying distributions as recursions.
In addition to studying the optimal particle filter with standard resampling, the authors derive all the above results for (what they term) the Gaussianized optimal particle filter and show that its theoretical properties are favorable compared to those of the standard optimal particle filter.
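The particle-filter recursion of propagation, weighting and resampling can be sketched for a scalar linear-Gaussian model. This is a plain bootstrap filter, not the optimal or Gaussianized filters analyzed in the paper; the model and all parameters are hypothetical.

```python
import numpy as np

def bootstrap_pf(ys, a=0.9, q=0.2, r=0.5, n=500, seed=0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns posterior-mean state estimates."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                  # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, n)        # propagate through the dynamics
        logw = -0.5 * ((y - x) / r) ** 2         # Gaussian observation likelihood
        w = np.exp(logw - logw.max())            # stabilized weights
        w /= w.sum()
        means.append(w @ x)                      # posterior-mean estimate
        x = rng.choice(x, size=n, p=w)           # multinomial resampling
    return np.array(means)
```

The "optimal" filters in the paper replace the blind propagation step with a proposal that conditions on the incoming observation.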

6.
Daniel Watzenig, Gerald Steiner. PAMM, 2007, 7(1): 1151603-1151604
The estimation of cross-sectional material distributions from non-stationary, sparse tomographic measurement data is a demanding class of ill-posed inverse problems. In this context, electrical capacitance tomography (ECT) is a well-established modality that aims at monitoring and controlling dynamic industrial processes arising, e.g., in pneumatic conveying or heterogeneous flow fields. By measuring the capacitances between electrodes arranged around the periphery, the permittivity distribution inside closed objects can be spatially resolved. In this paper, the main focus is on the robust estimation of time-dependent material distributions given uncertain measurements. The underlying inverse problem is formulated in a Bayesian inferential framework: by specifying a prior parameter distribution and characterizing the statistics of the measurement noise, a posterior distribution conditioned on the measured data is obtained. Transitions between different material phases are described by means of a second-order Fourier contour model, implying a geometric regularization. Sequential importance filtering – particle filtering – is applied to solve the non-stationary ECT problem, and a tracking experiment is presented to show the robustness of the Bayesian filtering approach to the non-stationary inverse problem. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

7.
In this paper, we address the problem of complex object tracking using the particle filter framework, which essentially amounts to estimating high-dimensional distributions by a sequential Monte Carlo algorithm. For this purpose, we first exploit dynamic Bayesian networks to determine conditionally independent subspaces of the object’s state space, which allows us to perform the particle filter’s propagations and corrections independently over small spaces. Second, we propose a swapping process that transforms the weighted particle set provided by the update step of the particle filter into a “new particle set” better focused on the high peaks of the posterior distribution. This new methodology, called Swapping-Based Partitioned Sampling, is proved to be mathematically sound and is successfully tested and validated on synthetic video sequences for single and multiple articulated object tracking.

8.
In this paper, a new optimization method named the gravitational search algorithm (GSA) is adopted for designing optimal linear-phase finite impulse response band-pass (BP) and band-stop (BS) digital filters. Various other population-based evolutionary algorithms – the real-coded genetic algorithm, conventional particle swarm optimization, differential evolution (DE) and bee swarm optimization – have also been applied to the same optimal designs for comparison. In GSA, particles are considered as objects whose performances are measured by their masses. All these objects attract each other by gravitational forces, and these forces produce a global movement of all objects towards the objects with heavier masses. GSA guarantees the exploitation step of the algorithm and is apparently free from premature convergence. Extensive simulation results justify the superior optimization capability of GSA over the aforementioned techniques for the solution of these multimodal, non-differentiable, highly non-linear, constrained filter design problems.
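The mass-and-attraction mechanic can be sketched for a toy minimization problem. This is a minimal GSA variant on a 2-D sphere function, not the filter-design objective of the paper; all parameter values are hypothetical.

```python
import numpy as np

def gsa(f, dim=2, n=20, iters=200, g0=2.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm: fitter agents get larger
    masses and pull the rest of the population towards them."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, (n, dim))
    v = np.zeros((n, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        i = int(fit.argmin())
        if fit[i] < best_f:
            best_f, best_x = float(fit[i]), x[i].copy()
        worst, best = fit.max(), fit.min()
        m = (fit - worst) / (best - worst - 1e-12)   # mass in [0,1], best agent = 1
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)          # decaying gravity constant
        acc = np.zeros_like(x)
        for a in range(n):
            for b in range(n):
                if a == b:
                    continue
                d = np.linalg.norm(x[b] - x[a]) + 1e-12
                acc[a] += rng.random() * g * m[b] * (x[b] - x[a]) / d
        v = rng.random((n, dim)) * v + acc           # stochastic inertia
        x = x + v
    return best_x, best_f
```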

9.
We propose techniques based on graphical models for efficiently solving data association problems arising in multiple target tracking with distributed sensor networks. Graphical models provide a powerful framework for representing the statistical dependencies among a collection of random variables, and are widely used in many applications (e.g., computer vision, error-correcting codes). We consider two different types of data association problems, corresponding to whether or not it is known a priori which targets are within the surveillance range of each sensor, and demonstrate how to transform both problems into inference problems on graphical models. With this transformation, both can be solved efficiently by local message-passing algorithms, which solve optimization problems in a distributed manner through the exchange of information among neighboring nodes on the graph. Moreover, a suitably reweighted version of the max–product algorithm yields provably optimal data associations. These approaches scale well with the number of sensors in the network and are well suited to distributed realization. To address trade-offs between performance and communication costs, we propose a communication-sensitive form of message-passing that is capable of achieving near-optimal performance using far less communication. We demonstrate the effectiveness of our approach with experiments on simulated data.
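On a chain-structured graph, max-product message passing reduces to the familiar Viterbi recursion; a small log-domain sketch (the reweighted variant discussed in the paper is more involved, and the potentials here are arbitrary placeholders):

```python
import numpy as np

def chain_map(unary, pair):
    """Max-sum (log-domain max-product) on a chain.
    unary[t, s]: score of state s at step t; pair[s, s2]: shared transition score.
    Returns the MAP state sequence."""
    T, S = unary.shape
    msg = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    msg[0] = unary[0]
    for t in range(1, T):
        # scores[s, s2] = best score ending in s at t-1, then moving to s2
        scores = msg[t - 1][:, None] + pair + unary[t][None, :]
        back[t] = scores.argmax(axis=0)
        msg[t] = scores.max(axis=0)
    seq = [int(msg[-1].argmax())]
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t][seq[-1]]))   # follow back-pointers
    return seq[::-1]
```

On trees the same message-passing idea applies; on graphs with cycles, variants such as the reweighted max–product mentioned in the abstract are needed for optimality guarantees.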

10.
11.
Classic scattering from objects of arbitrary shape must generally be treated by numerical methods. It has proven very difficult to describe scattering from general bounded objects without resorting to frequency-limiting approximations. The starting point of many numerical methods is the Helmholtz integral representation of a given wavefield. From that point several departures are possible for constructing computationally feasible approximate schemes. To date, attempts at direct solutions have been rare.One method (originated by P. Waterman) that attacks the exact numerical solution for a very broad class of problems begins with the Helmholtz integral representations for a point exterior and interior to the target in a partial wave expansion. After truncating the partial wave space, one arrives at a set of matrix equations useful in describing the field. This method is often referred to as the T-matrix method, null-field, or extended integral equation method. It leads to a unique solution of the exterior boundary integral equation by incorporating the interior solution (extinction theorem) as a constraint. In principle, there are no theoretical limitations on frequency, although numerical complications can arise and must be appropriately dealt with for the method to be computationally reliable.For submerged objects the formalism will be outlined for acoustical scattering from targets that are rigid; sound-soft and penetrable; elastic solids; elastic shells; and layered elastic objects. Finally, illustrations of several numerical examples for the above will be presented to emphasize specific response features peculiar to a variety of targets.  相似文献   

12.
Knowledge of particle deposition in turbulent flows is often required in engineering situations. Examples include fouling of turbine blades, plate-out in nuclear reactors and soot deposition. Thus it is important for numerical simulations to be able to predict particle deposition. Particle deposition is often principally determined by the forces acting on the particles in the boundary layer. The particle tracking facility in the CFD code uses the eddy lifetime model to simulate turbulent particle dispersion, no specific boundary layer being modelled. The particle tracking code has been modified to include a boundary layer. The non-dimensional yplus, y+, distance of the particle from the wall is determined and then values for the fluid velocity, fluctuating fluid velocity and eddy lifetime appropriate for a turbulent boundary layer used. Predictions including the boundary layer have been compared against experimental data for particle deposition in turbulent pipe flow. The results giving much better agreement. Many engineering problems also involve heat transfer and hence temperature gradients. Thermophoresis is a phenomena by which small particles experience a force in the opposite direction to the temperature gradient. Thus particles will tend to deposit on cold walls and be repulsed by hot walls. The effect of thermophoresis on the deposition of particles can be significant. The modifications of the particle tracking facility have been extended to include the effect of thermophoresis. A preliminary test case involving the deposition of particles in a heated pipe has been simulated. Comparison with experimental data from an extensive experimental programme undertaken at ISPRA, known as STORM (Simplified Tests on Resuspension Mechanisms), has been made.  相似文献   

13.
The problem of recovering a low-rank matrix from a set of observations corrupted with gross sparse error is known as the robust principal component analysis (RPCA) and has many applications in computer vision, image processing and web data ranking. It has been shown that under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely the robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to gross sparse error, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints such as foreground extraction from surveillance video, shadow and specularity removal from face images and video denoising from heavily corrupted data show that our algorithms are competitive to current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.
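A compact sketch of principal component pursuit solved by an inexact augmented Lagrangian method, one standard solver for this convex problem; the paper's own algorithms and parameter choices may differ.

```python
import numpy as np

def rpca_ialm(D, lam=None, iters=100):
    """Inexact augmented Lagrangian for principal component pursuit:
    min ||L||_* + lam * ||S||_1  subject to  L + S = D."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)     # dual variable initialization
    mu = 1.25 / norm2                             # initial penalty parameter
    S = np.zeros_like(D)
    for _ in range(iters):
        # singular value thresholding for the low-rank term
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # elementwise soft thresholding for the sparse term
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)                  # dual ascent on L + S = D
        mu *= 1.05                                # gentle penalty increase
    return L, S
```

In the surveillance-video application from the abstract, columns of D are video frames, L recovers the static background and S the moving foreground.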

14.
Some features of international environmental problems are considered. A basic problem is to induce countries to adopt a cooperative approach. One of the instruments to induce countries to cooperate is an exchange of concessions in fields of relative strength, such as swapping trade concessions for cooperation on international environmental problems. This instrument is modelled in this paper with tensor games. Both tradeoff and non-tradeoff tensor games are addressed, with emphasis on tradeoff tensor games with linear strict weights. The relationship between the Pareto equilibria of a non-tradeoff tensor game and the Nash equilibria of the associated tradeoff tensor games is studied. Due to structural similarities between tensor games and repeated multiple objective games, some attention is also paid to the latter. Relationships between objects related to Folk theorems for the tradeoff tensor game with completely additive weights and the corresponding objects for its constituent isolated games are studied. Since many international environmental problems have prisoners' dilemma characteristics, it is analyzed how interconnection may enhance cooperation in prisoners' dilemma games.
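The prisoners' dilemma structure mentioned above can be made concrete with the textbook payoff matrix (these numbers are the standard ones, not taken from the paper):

```python
# (row player's payoff, column player's payoff); C = cooperate, D = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_response(opponent_action):
    """Row player's best reply to a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)][0])

# Defection is a best response to both C and D, so (D, D) is the unique
# Nash equilibrium even though (C, C) Pareto-dominates it -- which is why
# interconnecting games (e.g. with trade concessions) can help sustain cooperation.
```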

15.
Mark Sainsbury. Metaphysica, 2013, 14(2): 225-237
Vagueness demands many boundaries. Each is permissible, in that a thinker may without error use it to distinguish objects, though none is mandatory. This is revealed by a thought experiment—scrambled sorites—in which objects from a sorites series are presented in a random order, and subjects are required to make their judgments without access to any previous objects or their judgments concerning them.

16.
The ever-increasing demand in surveillance is to produce highly accurate target and track identification and estimation in real-time, even for dense target scenarios and in regions of high track contention. The use of multiple sensors, through more varied information, has the potential to greatly enhance target identification and state estimation. For multitarget tracking, the processing of multiple scans all at once yields high track identification. However, to achieve this accurate state estimation and track identification, one must solve an NP-hard data association problem of partitioning observations into tracks and false alarms in real-time. The primary objective in this work is to formulate a general class of these data association problems as multidimensional assignment problems to which new, fast, near-optimal, Lagrangian relaxation based algorithms are applicable. The dimension of the formulated assignment problem corresponds to the number of data sets being partitioned with the constraints defining such a partition. The linear objective function is developed from Bayesian estimation and is the negative log posterior or likelihood function, so that the optimal solution yields the maximum a posteriori estimate. After formulating this general class of problems, the equivalence between solving data association problems by these multidimensional assignment problems and by the currently most popular method of multiple hypothesis tracking is established. Track initiation and track maintenance using an N-scan sliding window are then used as illustrations. Since multiple hypothesis tracking also permeates multisensor data fusion, two example classes of problems are formulated as multidimensional assignment problems. This work was partially supported by the Air Force Office of Scientific Research through AFOSR Grant Numbers AFOSR-91-0138 and F49620-93-1-0133 and by the Federal Systems Company of the IBM Corporation in Boulder, CO and Owego, NY.
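For a single scan, data association reduces to a 2-D assignment problem. A brute-force sketch for small instances with an invented cost matrix; real trackers use Lagrangian relaxation or auction algorithms, as the paper describes for the multidimensional case.

```python
from itertools import permutations

import numpy as np

def best_assignment(cost):
    """Exhaustive optimal assignment of n observations to n tracks.
    cost[i, j] ~ negative log likelihood of pairing observation i with track j."""
    n = cost.shape[0]
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best), sum(cost[i, best[i]] for i in range(n))

# A hypothetical 3-observation, 3-track cost matrix.
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
assignment, total = best_assignment(cost)   # observation i -> track assignment[i]
```

The multidimensional problems in the paper chain several such scans together, which is what makes the formulation NP-hard and motivates the relaxation-based algorithms.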

17.
In 1983 an algorithm feed was introduced for the systematic exact evaluation of higher-order partial derivatives of functions of many variables. In 1986 the feed library was extended to permit the automatic differentiation of functions expressed in terms of the derivatives of other functions. Building on this work, the present paper further extends the feed library to permit the automatic differentiation of expressions involving nested matrix and derivative operations. The need to differentiate such expressions arose naturally in the course of designing sequential filters for a class of nonlinear tracking problems.
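The core mechanism behind exact automatic differentiation systems like feed can be illustrated with forward-mode AD via dual numbers, which propagate a value and its derivative through each arithmetic operation. This sketch handles only addition and multiplication and is not the feed library's interface.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    carries (value, derivative) through arithmetic exactly."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx exactly (to machine precision) by seeding dot = 1."""
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
d = derivative(lambda x: x * x + 3 * x, 2.0)   # → 7.0
```

Repeating the construction on already-dual values yields the higher-order partial derivatives the abstract refers to.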

18.
Sampling from a truncated multivariate normal distribution (TMVND) constitutes the core computational module in fitting many statistical and econometric models. We propose two efficient methods, an iterative data augmentation (DA) algorithm and a non-iterative inverse Bayes formulae (IBF) sampler, to simulate the TMVND, and generalize them to multivariate normal distributions with linear inequality constraints. By creating a Bayesian incomplete-data structure, the posterior step of the DA algorithm directly generates random vector draws, as opposed to single-element draws, resulting in an obvious computational advantage and easy coding with common statistical software packages such as S-PLUS, MATLAB and GAUSS. Furthermore, the DA provides a ready structure for implementing a fast EM algorithm to identify the mode of the TMVND, which has many potential applications in statistical inference for constrained parameter problems. In addition, utilizing this mode as an intermediate result, the IBF sampler provides a novel alternative to Gibbs sampling and eliminates the convergence problems, including possible slow convergence, caused by high correlation between the components of a TMVND. The DA algorithm is applied to a linear regression model with constrained parameters and is illustrated with a published data set. Numerical comparisons show that the proposed DA algorithm and IBF sampler are more efficient than the Gibbs sampler and the accept-reject algorithm.
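The univariate building block behind such samplers is drawing from a one-dimensional truncated normal, e.g. via the inverse CDF; this is the kind of draw a Gibbs sweep repeats per coordinate, not the paper's DA or IBF algorithms themselves, and the function name is illustrative.

```python
import random
from statistics import NormalDist

def rtruncnorm(a, b, mu=0.0, sigma=1.0, rng=random):
    """Inverse-CDF sampling from N(mu, sigma^2) truncated to [a, b]."""
    nd = NormalDist(mu, sigma)
    fa, fb = nd.cdf(a), nd.cdf(b)
    u = fa + (fb - fa) * rng.random()   # uniform on the truncated CDF range
    return nd.inv_cdf(u)

random.seed(0)
xs = [rtruncnorm(0.0, 2.0) for _ in range(1000)]   # N(0,1) restricted to [0, 2]
```

The block-wise DA sampler in the abstract avoids exactly this coordinate-at-a-time pattern by drawing whole vectors at once.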

19.
Witold Kosiński. PAMM, 2007, 7(1): 2010005-2010006
In real-life problems both the parameters and the data used in mathematical modelling are vague. Pattern recognition, system modelling, diagnosis, image analysis, fault detection and other fields are ones in which soft computation with imprecise, fuzzy objects plays an important role. In this presentation, recent results in the theory of ordered fuzzy numbers and their normed algebra are briefly reviewed, and possible applications in modelling dynamical systems and mechanics are presented. Known algebraic and evolution equations describing such systems are transformed from the classical framework to their fuzzy versions. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
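An ordered fuzzy number can be represented as an ordered pair of branch functions on [0, 1], with arithmetic acting pointwise; a minimal sketch of addition and negation in this algebra (class and example values are illustrative):

```python
class OFN:
    """Ordered fuzzy number: an ordered pair (f, g) of branch functions
    on [0, 1]. Pointwise arithmetic makes A + (-A) genuinely zero,
    unlike interval-style fuzzy arithmetic, which only widens supports."""
    def __init__(self, f, g):
        self.f, self.g = f, g
    def __add__(self, other):
        return OFN(lambda s: self.f(s) + other.f(s),
                   lambda s: self.g(s) + other.g(s))
    def __neg__(self):
        return OFN(lambda s: -self.f(s), lambda s: -self.g(s))

# A fuzzy "about 2" with support [1, 3]:
a = OFN(lambda s: 1 + s, lambda s: 3 - s)
zero = a + (-a)   # both branches identically 0: the crisp number 0
```

This well-behaved algebra is what allows the classical evolution equations mentioned above to be carried over to fuzzy versions.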

20.
In this paper, we study exponential stability and tracking control problems for uncertain time-delayed systems. First, sufficient conditions for the exponential stability of a class of uncertain time-delayed systems are established by employing Lyapunov functional methods and algebraic matrix inequality techniques. Furthermore, tracking control problems are investigated in which an uncertain linear time-delayed system is used to track a reference system. Sufficient conditions for the solvability of the tracking control problems are obtained for the cases in which the system state is and is not measurable. When the state is measurable, we design an impulsive control law to achieve the tracking performance; when the state information is not directly available from measurement, an impulsive control law based on the measured output is used. Finally, numerical examples are presented to illustrate the effectiveness and usefulness of our results.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号