Similar Documents
20 similar documents retrieved.
1.
Position estimation is an important technique for location-based services. Many services and applications, such as navigation assistance, surveillance of patients and social networking, have been developed based on users' positions. Although GPS plays an important role in positioning systems, its signal strength is extremely weak inside buildings. Thus, other sensing devices are necessary to improve the accuracy of indoor localisation. In the past decade, researchers have developed a series of indoor positioning technologies based on the received signal strength (RSS) of WiFi, ZigBee or Bluetooth devices within a wireless sensor network infrastructure. The distance between devices can be computed from their RSS, but the accuracy is unsatisfactory because radio signal interference is a considerable issue and indoor radio propagation is too complicated to model. Using a location fingerprint to estimate a target position is a feasible strategy because the location fingerprint records the characteristics of the signals, and signal strength is related to spatial position. This type of algorithm estimates the location of a target by matching online measurements with the closest a priori location fingerprints. The matching or classification algorithm is a key factor in the accuracy of location fingerprinting. In this paper, we propose an effective location fingerprinting algorithm based on the general and weighted k-nearest neighbour algorithms to estimate the position of the target node. Grid points are trained at 2 m intervals, and the estimated position error is about 1.8 m. The proposed method thus achieves acceptable accuracy at low computational cost.
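As a concrete illustration of the matching step, here is a minimal weighted k-nearest-neighbour fingerprinting sketch in Python/NumPy. The fingerprint database, access-point count and RSS values are invented for illustration; the paper's actual training grid and weighting scheme may differ.

```python
import numpy as np

def wknn_locate(fingerprints, positions, rss_online, k=3):
    """Estimate a position by weighted k-nearest-neighbour matching
    of an online RSS vector against an offline fingerprint database."""
    # Euclidean distance in signal space between the online
    # measurement and every stored fingerprint
    d = np.linalg.norm(fingerprints - rss_online, axis=1)
    idx = np.argsort(d)[:k]            # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weights
    w /= w.sum()
    return w @ positions[idx]          # weighted average of grid positions

# Toy database: 3 access points, grid points 2 m apart along a corridor
positions = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]])
fingerprints = np.array([[-40, -70, -80],
                         [-50, -60, -75],
                         [-60, -50, -65],
                         [-70, -45, -55]], dtype=float)
est = wknn_locate(fingerprints, positions, np.array([-52.0, -58.0, -72.0]))
```

The estimate falls between the grid points whose fingerprints best match the online measurement, rather than snapping to a single grid point.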

2.
We study the reconstruction of the missing thermal and mechanical data on an inaccessible part of the boundary for two-dimensional linear isotropic thermoelastic materials from overprescribed noisy measurements taken on the remaining accessible boundary part. This inverse problem is solved using the method of fundamental solutions together with the method of particular solutions. The inverse problem is stabilized using several singular value decomposition (SVD)-based regularization methods, namely the Tikhonov regularization method (Tikhonov and Arsenin, Methods for Solving Ill-Posed Problems, Nauka, Moscow, 1986), the damped SVD and the truncated SVD (Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, SIAM, Philadelphia, 1998), whilst the optimal regularization parameter is selected according to the discrepancy principle (Morozov, Sov Math Doklady 7 (1966), 414–417), the generalized cross-validation criterion (Golub et al., Technometrics 22 (1979), 1–35) and Hansen's L-curve method (Hansen and O'Leary, SIAM J Sci Comput 14 (1993), 1487–1503). © 2014 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 31: 168–201, 2015
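The SVD-based regularization idea can be sketched as follows: a generic Tikhonov-via-SVD solver with a discrepancy-principle parameter choice, applied to a synthetic ill-conditioned system. This is not the paper's thermoelastic solver; the Hilbert test matrix and noise level are assumptions for illustration.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution of A x = b via the SVD:
    components with small singular values are damped by s/(s^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

def discrepancy_lam(A, b, noise_norm, lams):
    """Pick the largest lambda whose residual does not exceed the
    (assumed known) noise level: Morozov's discrepancy principle."""
    for lam in sorted(lams, reverse=True):
        if np.linalg.norm(A @ tikhonov_svd(A, b, lam) - b) <= noise_norm:
            return lam
    return min(lams)

# Severely ill-conditioned toy system (Hilbert matrix) with additive noise
rng = np.random.default_rng(0)
n = 20
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
noise = 1e-4 * rng.standard_normal(n)
b = A @ x_true + noise
lam = discrepancy_lam(A, b, np.linalg.norm(noise), np.logspace(-8, 0, 30))
x_reg = tikhonov_svd(A, b, lam)
```

Larger values of lam pull the solution toward zero, while smaller values fit more of the noise; the discrepancy principle balances the two given a noise-level estimate.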

3.
In this paper, parametric regression analyses including both linear and nonlinear regressions are investigated in the case of imprecise and uncertain data, represented by a fuzzy belief function. The parameters in both the linear and nonlinear regression models are estimated using the fuzzy evidential EM algorithm, a straightforward fuzzy version of the evidential EM algorithm. The nonlinear regression model is derived by introducing a kernel function into the proposed linear regression model. An unreliable sensor experiment is designed to evaluate the performance of the proposed linear and nonlinear parametric regression methods, called parametric evidential regression (PEVREG) models. The experimental results demonstrate the high prediction accuracy of the PEVREG models in regressions with crisp inputs and a fuzzy belief function as output.

4.
Electrical impedance tomography (EIT), as an inverse problem, aims to calculate the internal conductivity distribution of an object from current-voltage measurements on its boundary. Many inverse problems are ill-posed, since the measurement data are limited and imperfect. To overcome ill-posedness in EIT, two main types of regularization techniques are widely used. One category comprises projection methods, such as the truncated singular value decomposition (TSVD); the other comprises penalty methods, such as Tikhonov regularization and total variation methods. In both cases, a good regularization parameter should strike a fair balance between the perturbation error and the regularization error in the solution. In this paper, a new method combining the least absolute shrinkage and selection operator (LASSO) and basis pursuit denoising (BPDN) is introduced for EIT. To choose the optimal regularization parameter, we use the L1-curve (Pareto frontier curve), which is analogous to the L-curve used in optimizing L2-norm problems but plots the L1-norm of the solution instead of the L2-norm. The results are compared with the TSVD regularization method, for which the best regularization parameters are selected by observing the Picard condition and minimizing the generalized cross-validation (GCV) function. We show that this method yields a good regularization parameter corresponding to a regularized solution. In situations where little is known about the noise level σ, visualizing the L1-curve is also useful for understanding the trade-off between the norms of the residual and the solution. This method gives us a means to control the sparsity and filtering of the ill-posed EIT problem, and tracing the curve for the optimum solution can reduce the number of iterations threefold in comparison with using LASSO or BPDN separately.
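One common way to solve the LASSO/BPDN subproblem is iterative soft-thresholding (ISTA). The sketch below is a generic illustration on a synthetic sparse-recovery problem, not the authors' EIT implementation; the matrix size, sparsity and penalty weight are assumptions.

```python
import numpy as np

def ista(A, b, lam, n_iter=1000):
    """Iterative soft-thresholding (ISTA) for the LASSO problem
    min_x 0.5*||A x - b||^2 + lam*||x||_1, one simple solver for
    BPDN-type formulations."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)           # gradient of the smooth part
        z = x - g / L
        # soft-thresholding step enforces sparsity
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Underdetermined toy system with a sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

Sweeping lam and plotting the L1-norm of `x_hat` against the residual norm traces exactly the kind of L1-curve (Pareto frontier) the abstract describes.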

5.
In this paper, a noniterative linear least-squares error method developed by Yang and Chen for solving inverse problems is re-examined. For this method, the condition for the existence of a unique solution and the error bound of the resulting inverse solution in the presence of measurement errors are derived. Although the method was shown in the literature to yield the unique inverse solution in a single iteration, two examples demonstrate that for some inverse problems the method is practically inapplicable once unavoidable measurement errors are included. The reason is that the so-called reverse matrix for these problems has a very large 1-norm, magnifying a small measurement error to an extent that is unacceptable for the resulting inverse solution in a practical sense. In other words, the method fails to yield a reasonable solution whenever it is applied to an ill-conditioned inverse problem. In such a case, two approaches are recommended for decreasing the very high condition number: (i) increasing the number of measurements or taking measurements as close as possible to the location at which the to-be-estimated unknown condition is applied, and (ii) using the singular value decomposition (SVD).
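The error-magnification mechanism described above is easy to reproduce numerically. The 2x2 system below is a made-up stand-in for an ill-conditioned "reverse matrix": a perturbation of order 1e-4 in the data produces an order-one change in the solution.

```python
import numpy as np

# Illustration (not from the paper) of how a large condition number
# magnifies measurement error: perturb b slightly and compare solutions.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])           # nearly singular system matrix
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)               # exact data -> x = [1, 1]
b_noisy = b + np.array([0.0, 1e-4])     # tiny measurement error
x_noisy = np.linalg.solve(A, b_noisy)   # solution swings to roughly [0, 2]
cond = np.linalg.cond(A, 1)             # 1-norm condition number, ~4e4
```

The relative change in the solution exceeds the relative change in the data by a factor close to the condition number, which is exactly the failure mode the abstract identifies.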

6.
An inverse forced vibration problem based on the conjugate gradient method (CGM), also known as the iterative regularization method, is examined in this study to estimate the unknown spatially and temporally dependent external forces on cutting tools by utilizing simulated beam displacement measurements. The tool is represented by an Euler–Bernoulli beam. The accuracy of the inverse analysis is examined using simulated exact and inexact displacement measurements. Numerical experiments with different types of external forces, sensor arrangements and measurement errors are performed to test the validity of the present algorithm. Results show that excellent estimates of the external forces can be obtained from arbitrary initial guesses.

7.
This paper presents a new algorithm for predicting indoor suspended particle dispersion based on a v2-f turbulence model. To handle the near-wall turbulence anisotropy properly, which is significant in the dispersion of fine particles, the particle eddy diffusivity is calculated using different formulae in the turbulent core and in the vicinity of walls. The new algorithm is validated against several cases performed in two ventilated rooms with various air distribution patterns. The simulation results reveal that the v2-f nonlinear turbulence model combined with a particle convection equation gives satisfactory agreement with the experimental data. It is generally found that the dynamic equation combined with the v2-f model can properly handle the low Reynolds number (LRN) flows usually encountered in indoor air flow and fine particle dispersion.

8.

Sensor placement and feature selection are critical steps in engineering, modeling, and data science that share a common mathematical theme: the selected measurements should enable solution of an inverse problem. Most real-world systems of interest are nonlinear, yet the majority of available techniques for feature selection and sensor placement rely on assumptions of linearity or simple statistical models. We show that when these assumptions are violated, standard techniques can lead to costly over-sensing without guaranteeing that the desired information can be recovered from the measurements. In order to remedy these problems, we introduce a novel data-driven approach for sensor placement and feature selection for a general type of nonlinear inverse problem based on the information contained in secant vectors between data points. Using the secant-based approach, we develop three efficient greedy algorithms that each provide different types of robust, near-minimal reconstruction guarantees. We demonstrate them on two problems where linear techniques consistently fail: sensor placement to reconstruct a fluid flow formed by a complicated shock–mixing layer interaction and selecting fundamental manifold learning coordinates on a torus.
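A much-simplified greedy variant of the secant idea can be sketched as follows: choose coordinates (point "sensors") so that the shortest projected normalized secant stays as long as possible, ensuring distinct data points remain distinguishable from the selected measurements. The three-point dataset is contrived so that only coordinate 1 separates the points; the paper's actual algorithms and reconstruction guarantees are considerably more sophisticated.

```python
import numpy as np
from itertools import combinations

def greedy_secant_selection(X, k):
    """Greedily pick k coordinates that maximize the worst-case norm of
    the projected, normalized secants between all pairs of data points."""
    n, d = X.shape
    secants = np.array([X[i] - X[j] for i, j in combinations(range(n), 2)])
    secants /= np.linalg.norm(secants, axis=1, keepdims=True)
    selected, remaining = [], list(range(d))
    for _ in range(k):
        best, best_val = None, -1.0
        for c in remaining:
            cols = selected + [c]
            # worst-case separation of any pair under the chosen coordinates
            val = np.min(np.linalg.norm(secants[:, cols], axis=1))
            if val > best_val:
                best, best_val = c, val
        selected.append(best)
        remaining.remove(best)
    return selected

# Coordinates 0 and 2 are constant; only coordinate 1 separates the points
X = np.array([[1.0, 0.0, 5.0],
              [1.0, 1.0, 5.0],
              [1.0, 2.0, 5.0]])
sel = greedy_secant_selection(X, 2)
```

The first coordinate picked is the only informative one, illustrating why secant information, unlike variance or linear correlation, directly certifies that distinct states map to distinct measurements.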


9.
Integrated navigation systems based on gyros and accelerometers are well-established devices for vehicle guidance. The system design is traditionally based on the assumption that the vehicle is a rigid body; however, generalizing such integrated systems to flexible structures is possible. The example of the motion of a simple beam considered here is meant to be a first approach toward obtaining sophisticated motion measurements of a large airplane's wing during flight. The principle of integrated navigation systems consists of combining different measuring methods by exploiting their specific advantages. Gyros and accelerometers are used to obtain reliable signals within a short period of time, while aiding sensors such as radar units and strain gauges are used for their long-term accuracy. The kernel of the integrated system is an extended Kalman filter that estimates the motion state of the structure. Besides the sensor signals, the filter is based on an additional kinematical model of the structure; by means of model reduction, a kinematical model of the beam was developed. Based on simulations, the paper presents this approach, the appropriate sensor set, and first estimated motion results. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
We propose a new hot mudflow prediction model based on Cellular Automata (CA). Using our CA prediction model, we present simulations of the LUSI hot mudflow in the Sidoarjo disaster area. Our CA method for predicting mudflow is based on a fluid dynamic model, because hot mudflow behaves similarly to a fluid. The CA model also takes landscape data into consideration, including features such as dikes and buildings. The Moore neighborhood model is adopted for the CA to account for the relationship between the cell of interest and its surrounding cells, and a Gaussian interpolation is used to approximate the behavior of the hot mudflow over landscape features. We evaluated the prediction accuracy of our CA model by comparing its results with remote sensing satellite data and measurements from the disaster area. Simulations of the LUSI hot mudflow show relatively good prediction accuracy in comparison with conventional models. We therefore conclude that the CA model will be valuable for predicting hot mudflows in future disasters of a similar nature.
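A toy Moore-neighbourhood CA for fluid-like spreading over a terrain with a dike might look like the following. The update rule, rate constant and grid are illustrative assumptions, not the paper's calibrated LUSI model.

```python
import numpy as np

def ca_step(height, terrain, rate=0.25):
    """One update of a toy Moore-neighbourhood CA for fluid spreading:
    each wet cell pushes a fraction of its surface-height excess
    (terrain + fluid) toward lower neighbours."""
    n, m = height.shape
    surface = terrain + height          # surface levels from the previous step
    out = height.copy()
    for i in range(n):
        for j in range(m):
            if height[i, j] <= 0:
                continue
            # Moore neighbourhood: the 8 surrounding cells
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < m:
                        drop = surface[i, j] - surface[a, b]
                        if drop > 0:
                            # never move more fluid than the cell holds
                            flow = min(rate * drop / 8.0, out[i, j])
                            out[i, j] -= flow
                            out[a, b] += flow
    return out

terrain = np.zeros((9, 9))
terrain[:, 6] = 2.0                     # a dike blocking the flow
h = np.zeros((9, 9)); h[4, 2] = 8.0     # mud source
for _ in range(30):
    h = ca_step(h, terrain)
```

Fluid mass is conserved by construction, and the dike column enters the dynamics only through the terrain term, which is how landscape features such as dikes and buildings can be represented.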

11.
An inverse problem utilizing the Levenberg–Marquardt method (LMM) is applied in this study to simultaneously determine the unknown spatially dependent effective thermal conductivity and volumetric heat capacity of a biological tissue from temperature measurements. The accuracy of this inverse problem is examined using simulated exact and inexact temperature measurements in numerical experiments. A statistical analysis is performed to obtain the 99% confidence bounds for the estimated thermal properties. Results show that good estimates of the spatially dependent thermal conductivity and volumetric heat capacity can be obtained with the present algorithm for the test cases considered in this study.
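A minimal Levenberg-Marquardt iteration can be sketched in a few lines: damped Gauss-Newton steps with the damping adapted to whether a step reduces the residual. The two-parameter exponential model below is a hypothetical stand-in for the (conductivity, capacity) estimation, and the damping schedule is the simplest possible choice.

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=50, mu=1e-2):
    """Minimal Levenberg-Marquardt sketch: solve (J^T J + mu I) dp = -J^T r,
    halving mu after accepted steps and doubling it after rejected ones."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jac(p)
        A = J.T @ J + mu * np.eye(len(p))
        dp = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residual(p + dp)) < np.linalg.norm(r):
            p, mu = p + dp, mu * 0.5    # accept step, trust the model more
        else:
            mu *= 2.0                   # reject step, increase damping
    return p

# Hypothetical two-parameter fit standing in for (conductivity, capacity):
# "measurements" y = a * exp(-b * t) at sample times t, noise-free here
t = np.linspace(0.0, 2.0, 15)
a_true, b_true = 3.0, 1.2
y = a_true * np.exp(-b_true * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(res, jac, [1.0, 0.5])
```

Large damping makes the step behave like gradient descent (robust far from the minimum); small damping recovers the fast Gauss-Newton step near the minimum.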

12.
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. 
The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.

13.
The wet spinning process depends strongly on the acid and salt diffusivity coefficients in the fiber; however, these two coefficients are normally functions of the salt concentration and are difficult to measure directly. For this reason, inverse mass transfer techniques must be applied to determine these two concentration-dependent diffusivities simultaneously. An iterative regularization method (IRM) using the conjugate gradient method (CGM) is applied in this study to simultaneously determine the unknown diffusivities of acid and salt for a polymer solution in a wet spinning process from measurements of the concentration components. The accuracy of this inverse mass transfer problem is examined using simulated exact and inexact concentration measurements in numerical experiments. Results show that the diffusivity of acid can be estimated more accurately than that of salt, and that the estimates can be obtained in a very short CPU time on an HP d2000 2.66 GHz personal computer.

14.
This paper presents a new method for the computation of the truncated singular value decomposition (SVD) of an arbitrary matrix. The method can be qualified as deterministic because it does not use randomized schemes. The number of operations required is asymptotically lower than with conventional methods for nonsymmetric matrices and is on a par with the best existing deterministic methods for unstructured symmetric ones. It slightly exceeds the asymptotic computational cost of SVD methods based on randomization; however, the error estimate for such methods is significantly higher than for the presented one. The method is one-pass, that is, each value of the matrix is used just once, and it is readily parallelizable. In the case of a full SVD decomposition, it is exact. In addition, it can be modified for the case when data arrive sequentially rather than being available all at once. Numerical simulations confirm the accuracy of the method.

15.
In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent frictional heat flux at the interface of two semi-spaces, one of which is covered by a strip of coating, during a sliding-contact process from temperature measurements taken within one of the semi-spaces. It is assumed that no prior information is available on the functional form of the unknown heat generation; hence the procedure is classified as function estimation in inverse calculation. Results show that the relative position of the measured and estimated quantities is of crucial importance to the accuracy of the inverse algorithm. The methodology can be applied to predicting heat generation in engineering problems involving sliding-contact elements.

16.
In this paper, we focus on three inverse problems for a coupled temperature-seepage field model in high-dimensional spaces. These inverse problems aim to determine an unknown heat transfer coefficient and a source-sink term in the seepage continuity equation with specified initial-boundary conditions and additional measurements. Several finite difference schemes for the coupled equations are presented and analyzed. Three algorithms for these inverse problems are proposed, and numerical experiments are provided to assess the accuracy and efficiency of the proposed algorithms.
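For a flavour of the finite difference machinery involved, here is a generic explicit scheme for the 1D heat equation alone; the paper's coupled temperature-seepage schemes are substantially more involved, and the grid, diffusivity and time step below are assumptions.

```python
import numpy as np

def heat_explicit(u0, alpha, dx, dt, steps):
    """Generic explicit finite-difference scheme for the 1D heat equation
    u_t = alpha * u_xx with fixed (Dirichlet) boundary values."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is stable only for r <= 1/2"
    u = u0.copy()
    for _ in range(steps):
        # second-difference update on interior nodes; boundaries stay fixed
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)          # exact solution decays as exp(-pi^2 * alpha * t)
u = heat_explicit(u0, alpha=1.0, dx=x[1] - x[0], dt=1e-4, steps=1000)
```

With dx = 0.02 and dt = 1e-4 the mesh ratio is r = 0.25, safely inside the stability limit, and the computed midpoint value tracks the analytic decay factor closely.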

17.
In this paper, we present an interactive visualization and clustering algorithm for real-time multi-attribute digital forensic data such as anomalous network events. In the model, glyphs are defined with multiple network attributes and clustered with a recursive optimization algorithm for dimensionality reduction. The user's visual latency time is incorporated into the recursive process so that the display and the optimization model are updated according to this human factor, maximizing the capacity for real-time computation. An interactive search interface enables the display of similar data points according to the similarity of their attributes. Finally, typical anomalous network events, such as password guessing, are analyzed and visualized. This technology is expected to have an impact on real-time visual data mining for network security, sensor networks and many other multivariate real-time monitoring systems. Our usability study shows decent accuracy for context-independent glyph identification (89.37%) with high precision for anomaly detection (94.36%). The results indicate that, without any context, users tend to classify unknown patterns as possibly harmful. On the other hand, in the dynamic clustering (context-dependent) experiment, clusters of extremely unusual glyphs normally contain fewer packets; in this case, the packet identification accuracy is remarkably high (99.42%).

18.
We consider the general problem of analysing and modelling call centre arrival data. A method is described for analysing such data using singular value decomposition (SVD). We illustrate that the outcome from the SVD can be used for data visualization, detection of anomalies (outliers), and extraction of significant features from noisy data. The SVD can also be employed as a data reduction tool. Its application usually results in a parsimonious representation of the original data without losing much information. We describe how one can use the reduced data for some further, more formal statistical analysis. For example, a short‐term forecasting model for call volumes is developed, which is multiplicative with a time series component that depends on day of the week. We report empirical results from applying the proposed method to some real data collected at a call centre of a large‐scale U.S. financial organization. Some issues about forecasting call volumes are also discussed. Copyright © 2005 John Wiley & Sons, Ltd.
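The data-reduction use of the SVD can be sketched on a synthetic (days x intervals) call-volume matrix: keep the top-k singular components as a parsimonious representation. The daily profile, day-level effects and noise below are invented, not the U.S. call-centre data.

```python
import numpy as np

# Synthetic call-volume matrix: 60 days x 48 half-hour intervals,
# built from one dominant within-day profile scaled by a daily level.
rng = np.random.default_rng(2)
intervals = np.arange(48)
daily_shape = np.exp(-((intervals - 20) ** 2) / 50.0)    # within-day profile
day_levels = rng.uniform(0.5, 1.5, size=60)              # day-to-day volume
X = np.outer(day_levels, daily_shape) + 0.01 * rng.standard_normal((60, 48))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 1                                    # rank-1 reduced representation
X_k = U[:, :k] * s[:k] @ Vt[:k]          # parsimonious approximation of X
rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
```

Because the underlying signal is essentially rank one, the leading singular value dwarfs the rest and the rank-1 reconstruction loses very little information, which is the parsimony the abstract refers to; days whose rows are poorly reconstructed are natural outlier candidates.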

19.
We consider an inverse problem for a one-dimensional integrodifferential hyperbolic system arising from a simplified model of thermoelasticity. This inverse problem aims to identify the displacement u, the temperature η and the memory kernel k simultaneously from weighted measurement data of the temperature. Using the fixed point theorem in suitable Sobolev spaces, global-in-time existence and uniqueness results for this inverse problem are obtained. Moreover, we prove that the solution depends continuously on the noisy data in suitable Sobolev spaces. For this nonlinear inverse problem, our theoretical results guarantee the solvability of the proposed physical model and well-posedness for small measurement times τ, which is quite different from general inverse problems.

20.
Incident detection involves both the collection and analysis of traffic data. In this paper, we take a look at the various traffic flow sensing technologies and discuss the effects that the environment has on each. We provide recommendations on the selection of sensors and propose a mix of wide-area and single-lane sensors to ensure reliable performance. We also touch upon the issue of sensor accuracy and identify the increased use of neural networks and fuzzy logic for incident detection.

Specifically, this paper addresses a novel approach that uses measurements from a single station to detect anomalies in traffic flow. Anomalies are ascertained from deviations from the expected norms of traffic patterns calibrated at each individual station. We use an extension of the McMaster incident detection algorithm as a baseline to detect traffic anomalies; the extensions allow automatic field calibration of the sensor.

The paper discusses the development of a new time-indexed anomaly detection algorithm. We establish norms as a time-dependent function for each station by integrating past "normal" traffic patterns for a given time period. Time indexing includes time of day, day of week, and season. Initial calibration takes place over the prior few weeks; online background calibration then continues to tune and build the global seasonal time index. We end with a discussion of fuzzy-neural implementations.
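A toy version of the time-indexed norm idea: learn a per-slot mean and spread from a few weeks of history, then flag readings that deviate too far from their slot's norm. The slot granularity, threshold and synthetic traffic profile are assumptions for illustration, not the McMaster-based algorithm itself.

```python
import numpy as np

# Per-slot norms (mean and spread of flow for each time-of-day slot)
# learned from synthetic historical data.
rng = np.random.default_rng(3)
slots = 24                                    # hourly time index
days = 28                                     # "prior few weeks" of history
base = 100 + 80 * np.sin(np.pi * np.arange(slots) / slots)  # daily profile
history = base + rng.normal(0, 5, size=(days, slots))

mean = history.mean(axis=0)                   # per-slot norm
std = history.std(axis=0)

def is_anomaly(slot, flow, k=4.0):
    """Flag a flow reading whose deviation from its time-slot norm
    exceeds k standard deviations."""
    return abs(flow - mean[slot]) > k * std[slot]

normal_reading = base[8]                      # consistent with the norm
incident_reading = base[8] - 60               # sudden flow drop
```

Extending the slot index with day-of-week and season, and updating `mean`/`std` continuously as new "normal" days arrive, gives the online background calibration described above.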
