Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Mixed pixel classification with robust statistics   (Cited in total: 1; self-citations: 0; citations by others: 1)
The authors present a novel method for mixed pixel classification in which the Hough transform and trimmed-means methods are used to classify small sets of pixels. They compare the performance of these methods with the least-squares-error method and show that, in the presence of outliers, the trimmed-means method is far more reliable than the traditional least-squares-error method; even when no outliers are present, its performance is comparable to that of the least-squares-error method. The method is exhaustively tested on simulated data and is also applied to real Landsat TM data for which ground data are available.
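As a minimal, self-contained illustration of why a trimmed mean resists outliers while a plain (least-squares-optimal) mean does not, here is a toy Python sketch; the data values are invented and unrelated to the authors' Landsat experiments:

```python
import statistics

def trimmed_mean(xs, trim=0.2):
    """Discard the lowest and highest `trim` fraction of the sorted
    samples, then average the remainder."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k] if k else xs
    return sum(core) / len(core)

# Ten reflectance-like samples around 10.0, two of them gross outliers.
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 95.0, -80.0]
plain_mean = statistics.mean(data)   # pulled away from 10 by the outliers
robust_mean = trimmed_mean(data)     # essentially unaffected
```

On clean data (no outliers) the trimmed mean simply averages the central samples, which is why its performance stays comparable to the ordinary mean.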

2.
3.
4.
Locally optimised differential methods for computing optical flow have the merit of being faster and more reliable under noise than their globally optimised counterparts. However, they produce sparse flow estimates because they cannot deal with local image regions of little texture, and they are particularly inefficient in regions where the change of the feature-constancy function is highly nonlinear. In this study, we treat several limitations of the local approach in monochrome images. We present a robust H∞ data-fusion-based framework to propagate flow information from high-confidence regions to those suspected of poor-quality estimates. The adopted data-fusion engine is tolerant of the uncertainty and error inherent in the optical flow computation process. A new integrated confidence measure is also presented to predict the accuracy of the recovered flow across the image, enabling the data-fusion engine to act as an intelligent filling-in mechanism, not only where the intensity is problematic but also in other uncertain regions. Results demonstrate the significance of the proposed method.

5.
We show in this note that, under deterministic packet sampling, the tail of the distribution of the original flow size can be obtained by rescaling that of the sampled flow size. To recover the information on the flow size distribution lost through packet sampling, we propose a parametric method based on measurements from different backbone IP networks. This method allows us to recover the complete flow size distribution and has been successfully tested on a real ADSL traffic trace.
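The claimed tail-rescaling property can be checked with a toy simulation; the Pareto flow-size model, sampling rate N = 10, and thresholds below are assumptions for illustration, not the paper's measurements. Under deterministic 1-in-N sampling, P(sampled size > x) ≈ P(original size > N·x):

```python
import random

random.seed(0)
N = 10                                    # deterministic 1-in-N sampling
# Heavy-tailed (Pareto-like) flow sizes, in packets.
flows = [max(1, int(random.paretovariate(1.5))) for _ in range(50_000)]

def sampled_size(s, phase):
    # A flow of s packets, randomly aligned to the sampling grid,
    # contributes roughly s/N sampled packets.
    return sum(1 for p in range(s) if (p + phase) % N == 0)

sampled = [sampled_size(s, random.randrange(N)) for s in flows]

def tail(xs, x):
    """Empirical complementary CDF: fraction of values exceeding x."""
    return sum(1 for v in xs if v > x) / len(xs)

# Tail rescaling: P(sampled size > x) is close to P(original size > N*x).
lhs = tail(sampled, 5)
rhs = tail(flows, N * 5)
```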

6.
Joint compliance can enable successful robot grasping despite uncertainty in target object location. Compliance also enhances manipulator robustness by minimizing contact forces in the event of unintended contacts or impacts. In this paper, we describe the design, fabrication, and evaluation of a novel compliant robotic grasper constructed using polymer-based shape deposition manufacturing. Joints are formed by elastomeric flexures, and actuator and sensor components are embedded in tough rigid polymers. The result is a robot gripper with the functionality of conventional metal prototypes for grasping in unstructured environments, but with robustness properties that let it tolerate large forces from inadvertent contact.

7.
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach requires neither training data nor user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories in terms of a calculated foreground feature-point probability. This probability is able to determine candidate foreground feature points in each frame without user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can detect the foreground object in videos captured by a fast-moving camera, even when the contrast between the foreground and background regions is low. Moreover, the proposed method generalizes to foreground object detection in dynamic backgrounds and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our method builds longer SIFT trajectories, which is shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use achieves excellent foreground object detection results on real-world video data.

8.
This article addresses the problem of constrained regularized image restoration. A consistent approach to its solution is based on minimizing Tikhonov's (1977) regularizing functional subject to a set of constraints. It is demonstrated that the set of regularized solutions resulting from minimization of Tikhonov's functional is closed and convex, and a projection operator onto this set is derived.
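A minimal sketch of the Tikhonov-regularized solution this abstract builds on, for the unconstrained quadratic case where minimizing ‖Ax − b‖² + λ‖x‖² has the closed form x = (AᵀA + λI)⁻¹Aᵀb; the 3x2 system below is invented for illustration:

```python
def solve2(M, v):
    """Solve a 2x2 linear system via Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    """x = (A^T A + lam*I)^(-1) A^T b for a tall matrix A, 2 unknowns."""
    AtA = [[sum(row[i] * row[j] for row in A) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
    return solve2(AtA, Atb)

# A mildly ill-conditioned 3x2 system (invented numbers).
A = [[1.0, 1.0], [1.0, 1.001], [0.0, 0.001]]
b = [2.0, 2.001, 0.0]
x_ls  = tikhonov(A, b, 0.0)   # plain least squares
x_reg = tikhonov(A, b, 0.1)   # regularization damps the solution norm
```

Projecting such solutions onto a constraint set, as the article does, is the additional step beyond this closed form.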

9.
We propose a novel approach for adaptively setting the threshold of the smoothed Teager energy operator (STEO) detector used in extracellular recording of neural signals. To set the adaptive threshold, we derive the relationship between the low-order statistics of the detector's input signal and those of its output signal. This relationship is determined under the assumption that only the background noise component is present at the input. Techniques from robust statistics were used to obtain an unbiased estimate of these low-order statistics of the background noise directly from the neural input signal. In this paper the emphasis is on extracellular neural recordings; however, the proposed method can be applied to other biomedical signals in which spikes are diagnostically important (e.g., ECG, EEG). We validated the efficacy of the proposed method using synthetic neural signals constructed from real neural recordings. Four sets of extracellular recordings from four distinct neural sources were exploited for that purpose: the first dataset was recorded from an adult male monkey using a Utah 10×10 microelectrode array implanted in the prefrontal cortex, the second was obtained from the visual cortex of a rat using a stainless-steel-tipped microelectrode, the third came from recordings in a human medial temporal lobe using an intracranial electrode, and the fourth was extracted from recordings in a macaque parietal cortex using a single tetrode. Simulation results show that our approach is effective and robust, and outperforms state-of-the-art adaptive detection methods in its category (i.e., methods that are efficient and simple, and do not require a priori knowledge of spike waveform shapes).
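A toy sketch of the detector idea: the Teager energy ψ[n] = x[n]² − x[n−1]x[n+1], a short smoothing window, and a robust (median/MAD-based) threshold estimated from the signal itself. The synthetic signal, window length, and threshold constant are assumptions for illustration, not the authors' settings:

```python
import random

def teager(x):
    """Teager energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def smooth(e, w=5):
    """Moving average over a window of w samples (shorter at the edges)."""
    half = w // 2
    return [sum(e[max(0, n - half):n + half + 1]) /
            len(e[max(0, n - half):n + half + 1]) for n in range(len(e))]

def mad_threshold(e, k=8.0):
    """Robust threshold: median + k * (MAD / 0.6745), where MAD is the
    median absolute deviation, an outlier-resistant noise-scale estimate."""
    med = sorted(e)[len(e) // 2]
    mad = sorted(abs(v - med) for v in e)[len(e) // 2]
    return med + k * mad / 0.6745

random.seed(1)
sig = [random.gauss(0, 0.1) for _ in range(400)]   # background noise
for i in (100, 250):                               # inject two "spikes"
    sig[i] += 2.0

energy = smooth(teager(sig))
thr = mad_threshold(energy)
detections = [n for n, v in enumerate(energy) if v > thr]
```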

10.
Robust recovery of subspace structures from noisy data has recently received much attention in visual analysis. To achieve this goal, previous works have developed a number of low-rank-based methods, among which Low-Rank Representation (LRR) is a typical one. As a refined variant, Latent LRR constructs the dictionary using both observed and hidden data to relieve the insufficient-sampling problem. However, these methods fail to exploit the observation that each data point can be represented by only a small subset of atoms in a dictionary. Motivated by this, we present the Sparse Latent Low-rank representation (SLL) method, which explicitly imposes a sparsity constraint on Latent LRR to encourage sparse representations. In this way, each data point is represented by selecting only a few points from the same subspace. The objective function is solved by the linearized Augmented Lagrangian Multiplier method. Favorable experimental results on subspace clustering, salient feature extraction, and outlier detection verify the promising performance of our method.

11.
Multidimensional Systems and Signal Processing - This paper aims to develop a robust anisotropic diffusion filter associated with a robust spatial gradient estimator for simultaneously removing the...  相似文献   

12.
A design method for a suboptimal constrained nonlinear quadratic regulator (CNLQR) via invariant-set switching is presented. The CNLQR combines the merits of control invariant sets and gain scheduling, and solves the constrained nonlinear quadratic regulation problem effectively. The method first calculates the equilibrium surface of the nonlinear system and then obtains, off-line, local LQR control laws and the corresponding control invariant sets for several equilibrium points. These control invariant sets cover the equilibrium surface, so closed-loop stability of the nonlinear system can be guaranteed by switching among the local LQR laws on-line. The algorithm is computationally efficient because the state-feedback control laws and control invariant sets are all computed off-line, greatly reducing the burden of on-line optimization. A simulation example illustrating the method is presented.
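For the scalar, unconstrained case, the local LQR law used at each equilibrium point can be sketched by iterating the discrete-time Riccati equation to a fixed point; the plant numbers below are invented for illustration:

```python
def dlqr_scalar(a, b, q, r, iters=200):
    """Value iteration on the scalar discrete-time Riccati equation
    p <- q + a^2*p - (a*b*p)^2 / (r + b^2*p); returns the LQR gain k
    so that u = -k*x minimizes sum(q*x^2 + r*u^2) for x+ = a*x + b*u."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = dlqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)

# The closed loop x+ = (a - b*k)*x must be a contraction: |a - b*k| < 1.
x = 1.0
for _ in range(30):
    x = (1.2 - 1.0 * k) * x
```

The invariant-set machinery in the abstract adds, on top of such local gains, the guarantee that states stay where each gain remains valid under constraints.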

13.
Independent Component Analysis (ICA) designed for complete bases is used in a variety of applications with great success, despite the often questionable assumption of having N sensors and M sources with N ≥ M. In this article, we assume a source model with more sources than sensors (M > N), only L < N of which are assumed to have a non-Gaussian distribution. We argue that this is a realistic source model for a variety of applications, and prove that for ICA algorithms designed for complete bases (i.e., algorithms assuming N = M) and based on mutual information, the mixture coefficients of the L non-Gaussian sources can be reconstructed in spite of the overcomplete mixture model. Further, it is shown that the reconstructed temporal activity of the non-Gaussian sources is arbitrarily mixed with that of the Gaussian sources. To obtain estimates of the temporal activity of the non-Gaussian sources, we use the correctly reconstructed mixture coefficients in conjunction with linearly constrained minimum-variance spatial filtering. This yields estimates of the non-Gaussian sources that minimize the variance of the interference from the other sources. The approach is applied to the denoising of event-related fields recorded by MEG, and it is shown to outperform ordinary ICA.
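The linearly constrained minimum-variance (LCMV) step admits a compact closed form, w = R⁻¹a / (aᵀR⁻¹a): it keeps unit gain toward the steering (mixture-coefficient) vector a while minimizing the output variance wᵀRw. A two-sensor sketch with an invented covariance:

```python
def lcmv_weights(R, a):
    """LCMV weights w = R^-1 a / (a^T R^-1 a) for a 2x2 covariance R,
    via the explicit 2x2 inverse."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    Rinv = [[R[1][1] / det, -R[0][1] / det],
            [-R[1][0] / det, R[0][0] / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0] * Ra[0] + a[1] * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

R = [[2.0, 0.5], [0.5, 1.0]]   # sensor covariance (invented)
a = [1.0, 1.0]                 # mixture column of the source of interest
w = lcmv_weights(R, a)

# Output variance of the beamformer, to compare against a naive average.
var_lcmv  = sum(w[i] * R[i][j] * w[j] for i in range(2) for j in range(2))
naive = [0.5, 0.5]             # also satisfies unit gain toward a
var_naive = sum(naive[i] * R[i][j] * naive[j]
                for i in range(2) for j in range(2))
```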

14.
The goal of the project described in this paper is to build a prototype of an operational system providing registration to subpixel accuracy of multitemporal Landsat data acquired by either the Landsat-5 or Landsat-7 Thematic Mapper instrument. Integrated within an automated mass-processing system for Landsat data, the input to our registration system consists of scenes that have been geometrically and radiometrically corrected, as well as preprocessed for detection of clouds and cloud shadows. Such preprocessed scenes are then georegistered relative to a database of Landsat chips. This paper describes the entire registration process, including the use of landmark chips, feature extraction performed with an overcomplete wavelet representation, and feature matching using statistically robust techniques. Knowing the approximate longitudes and latitudes, or the UTM coordinates, of the four corners of each incoming scene, a subset of the chips representing landmarks included in the scene is selected to perform the registration. For each selected landmark chip, a corresponding window is extracted from the incoming scene, and each chip-window pair is registered using a robust wavelet feature-matching methodology. Based on the transformations from the chip-window pairs, a global transformation is then computed for the entire scene using a variant of the robust least-median-of-squares estimator. Empirical results of this registration process, which provided subpixel accuracy for several multitemporal scenes from different study areas, are presented and discussed.
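The least-median-of-squares idea behind the global-transformation step can be sketched on a toy line-fitting problem; random pair sampling is the classic LMedS scheme, and the data below are invented:

```python
import random

def lmeds_line(pts, trials=200, rng=random.Random(0)):
    """Fit y = m*x + c minimizing the MEDIAN squared residual, by trying
    candidate lines through randomly sampled point pairs (LMedS)."""
    best, best_med = None, float("inf")
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        res = sorted((y - (m * x + c)) ** 2 for x, y in pts)
        med = res[len(res) // 2]
        if med < best_med:
            best, best_med = (m, c), med
    return best

# 20 points on y = 2x + 1, plus 6 gross outliers (~23% contamination).
pts = [(x, 2 * x + 1) for x in range(20)] + [(x, 40.0) for x in range(3, 9)]
m, c = lmeds_line(pts)
```

Because the median residual ignores up to half the points, the outliers never dominate the fit, which is the property the scene-wide transformation estimate relies on.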

15.
Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse-sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space that is nevertheless made adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled prefiltered image remains a conventional square sample grid and can therefore be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least-squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low-bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality under a tight bit budget.

16.
Measuring the size of the Internet via Monte Carlo sampling requires probing a large portion of the Internet Protocol (IP) address space to obtain an accurate estimate. However, the distribution of information servers on the Internet is highly nonuniform over the IP address space. This allows us to design probing strategies, based on importance sampling, for measuring the prevalence of an information service on the Internet that are significantly more effective than strategies relying on plain Monte Carlo sampling. We present a thorough analysis of our strategies together with accurate estimates of the current size of the Internet Protocol Version 4 (IPv4) Internet as measured by the number of publicly accessible web servers and FTP servers.
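A toy sketch of the importance-sampling idea: when the "servers" are concentrated in a small block of a toy address space (all numbers below are invented), probing mostly inside that block and reweighting each probe by its inverse proposal probability still yields an unbiased count estimate, with far lower variance than uniform probing:

```python
import random

random.seed(2)
SPACE = 100_000                  # toy stand-in for the IPv4 address space
LO, HI = 10_000, 12_000          # a block where servers are dense (assumed)

def has_server(addr):
    # Hypothetical ground truth: servers only in the dense block, 1 in 4.
    return LO <= addr < HI and addr % 4 == 0

true_count = sum(1 for a in range(SPACE) if has_server(a))

def draw():
    """Proposal: probe the dense block 90% of the time; return the
    probed address together with its proposal probability q(a)."""
    in_block = random.random() < 0.9
    a = random.randrange(LO, HI) if in_block else random.randrange(SPACE)
    q = 0.1 / SPACE + (0.9 / (HI - LO) if LO <= a < HI else 0.0)
    return a, q

# Unbiased estimator of the server count: average of indicator / q(a).
n = 5_000
est = sum(has_server(a) / q for a, q in (draw() for _ in range(n))) / n
```

A uniform Monte Carlo probe of the same budget would hit a server only about 25 times here, giving a much noisier estimate.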

17.
When the statistics of the noise are non-Gaussian, analytic expressions for the probability of false alarm in detection systems are rarely available, so Monte Carlo estimation techniques are typically necessary. The author presents an importance-sampling biasing distribution that yields exponential savings over standard Monte Carlo simulation. Two important features of this biasing strategy are that no importance-sampling parameters need to be determined and no additional computations are required for implementation.

18.
The reliability of interconnects and contacts depends on their microstructure. However, a large change in the average grain size does not necessarily affect reliability positively. When grain sizes and feature sizes are comparable, interconnect and via reliability depends much more strongly on the nature of the grain size distribution and on the probability of occurrence of specific microstructural features than on the average grain size. Also, when grain sizes and feature sizes are comparable, different microstructure-specific failure mechanisms can occur, and multimodal failure statistics are often observed. In this case, if failure data are incorrectly fit to a single failure-time distribution, the resulting reliability assessment may be pessimistic or optimistic, but is in any case incorrect. In this regime, accurate reliability assessment requires detailed knowledge of the microstructure of the interconnects and a characterization of the failure of the weakest microstructural features present in the population to be assessed.

19.
We present a new framework for real-time tracking of complex non-rigid objects. The method copes with camera motion, partial occlusions, and target scale variations. The shape of the tracked object is approximated by an ellipse and its appearance by histogram-based features derived from local image properties. We use an efficient search scheme (an Accept-Reject (AR) color-histogram-based method, with the Bhattacharyya kernel as the similarity measure) to find the image region whose histogram is most similar to that of the target. In this paper, we address the problems of scale/shape adaptation and orientation changes of the target. The proposed approach is compared with recent state-of-the-art algorithms, and extensive experiments validate its robustness and effectiveness in tracking scale and orientation changes of the target in real time.
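For normalized histograms p and q, the Bhattacharyya similarity the tracker relies on reduces to Σᵢ √(pᵢqᵢ), which is 1 for identical distributions and smaller for dissimilar ones. A minimal sketch with invented histograms:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def normalize(h):
    s = sum(h)
    return [v / s for v in h]

target  = normalize([4, 8, 6, 2])   # appearance model of the tracked object
similar = normalize([3, 9, 6, 2])   # a candidate region close to the target
other   = normalize([9, 1, 1, 9])   # a dissimilar background region
```

A tracker scores candidate regions by this coefficient and accepts the one closest to the target model.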

20.
Mismatch between through-holes and blind vias in HDI boards can cause open circuits, short circuits, and pattern misregistration in PCB products. By improving the positioning scheme to increase registration accuracy, this paper develops a method for mitigating the through/blind via mismatch problem, and offers observations on its causes and remedies from the perspectives of registration, the positioning system, and board material expansion and shrinkage.

