Similar Documents
20 similar documents retrieved.
1.
Kernel canonical correlation analysis (KCCA) is a procedure for assessing the relationship between two sets of random variables when the classical method, canonical correlation analysis (CCA), fails because of the nonlinearity of the data. The KCCA method is mostly used in machine learning, especially for information retrieval and text mining. Because the data are often represented by non-negative numbers, we propose to incorporate the non-negativity restriction directly into the KCCA method. Similar restrictions have been studied for classical CCA under the name restricted canonical correlation analysis (RCCA), so we call the proposed method restricted kernel canonical correlation analysis (RKCCA). We also provide some possible approaches for solving the optimization problem to which our method translates. The motivation for introducing RKCCA is given in Section 2.
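[Illustrative sketch, not from the paper] Plain, unrestricted KCCA can be computed by solving a regularized generalized eigenproblem on the two kernel matrices; the RKCCA proposed above would additionally restrict the dual coefficient vectors to be non-negative, which rules out a plain eigendecomposition and calls for a constrained solver. The Python sketch below shows only the unrestricted step; the kernel choice, the function names and the regularization parameter kappa are assumptions.

    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import cdist

    def rbf_kernel(X, Z, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of X and Z
        return np.exp(-gamma * cdist(X, Z, "sqeuclidean"))

    def center_kernel(K):
        # double-centering, i.e. centering the implicit feature map
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def kcca(X, Y, gamma=1.0, kappa=0.1):
        # unrestricted KCCA: leading canonical correlation and dual vectors, from
        # [0, KxKy; KyKx, 0] v = rho * blkdiag((Kx + kI)^2, (Ky + kI)^2) v
        n = X.shape[0]
        Kx = center_kernel(rbf_kernel(X, X, gamma))
        Ky = center_kernel(rbf_kernel(Y, Y, gamma))
        Z = np.zeros((n, n))
        A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
        Rx = Kx + kappa * np.eye(n)
        Ry = Ky + kappa * np.eye(n)
        B = np.block([[Rx @ Rx, Z], [Z, Ry @ Ry]])
        vals, vecs = eigh(A, B)                    # generalized symmetric eigenproblem
        alpha, beta = vecs[:n, -1], vecs[n:, -1]   # top eigenpair
        return vals[-1], alpha, beta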

2.
3.
4.
In this paper we are concerned with the study of a class of quasilinear elliptic differential inclusions involving the anisotropic $\overrightarrow{p}(\cdot)$-Laplace operator on a bounded open subset of ${\mathbb R}^n$ with smooth boundary. The abstract framework required to study this kind of differential inclusion lies at the interface of three important branches of analysis: nonsmooth analysis, the theory of variable exponent Lebesgue–Sobolev spaces, and the theory of anisotropic Sobolev spaces. Using the concept of a nonsmooth critical point, we are able to prove that our problem admits at least two non-trivial weak solutions.

5.
Adaptive data analysis provides an important tool in extracting hidden physical information from multiscale data that arise from various applications. In this paper, we review two data-driven time-frequency analysis methods that we introduced recently to study trend and instantaneous frequency of nonlinear and nonstationary data. These methods are inspired by the empirical mode decomposition method (EMD) and the recently developed compressed (compressive) sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode functions of the form {a(t) cos(θ(t))}, where a is assumed to be less oscillatory than cos(θ(t)) and θ′ ≥ 0. This problem can be formulated as a nonlinear $l_0$ optimization problem. We have proposed two methods to solve this nonlinear optimization problem. The first one is based on nonlinear basis pursuit and the second one is based on nonlinear matching pursuit. Convergence analysis has been carried out for the nonlinear matching pursuit method. Some numerical experiments are given to demonstrate the effectiveness of the proposed methods.

6.
Principal component analysis (PCA) of an objects × variables data matrix is used for obtaining a low-dimensional biplot configuration, where data are approximated by the inner products of the vectors corresponding to objects and variables. Borg and Groenen (Modern multidimensional scaling. Springer, New York, 1997) have suggested another biplot procedure which uses a technique for approximating data by projections of object vectors on variable vectors. This technique is formulated as constraining the variable vectors in PCA to be of unit length and can be called unit-length vector analysis (UVA). However, an algorithm for UVA has not yet been developed. In this paper, we present such an algorithm, discuss the properties of UVA solutions, and demonstrate the advantage of UVA in biplots for standardized data with homogeneous variances among variables. The advantage of UVA-based biplots is that the projections of object vectors onto variable vectors express the approximation of data in an easy way, while in PCA-based biplots we must consider not only the projections but also the lengths of variable vectors in order to visualize the approximations.
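[Illustrative sketch, not the paper's algorithm] A rank-2 PCA biplot is obtained from the SVD of the standardized data matrix; UVA differs in that the variable vectors are constrained to unit length, so that projections of object points onto them approximate the data directly. The Python sketch below computes ordinary PCA biplot coordinates and merely rescales the variable vectors to unit length for display, which is a crude surrogate for, not a reproduction of, the algorithm developed in the paper.

    import numpy as np

    def pca_biplot_coords(X, r=2):
        # rank-r PCA biplot: object points A and variable vectors B with X ~ A @ B.T
        Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize columns
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        A = U[:, :r] * s[:r]      # object scores
        B = Vt[:r].T              # variable loadings
        return A, B

    rng = np.random.default_rng(0)
    A, B = pca_biplot_coords(rng.normal(size=(50, 6)))
    # UVA-style display constraint: unit-length variable vectors (surrogate only)
    B_unit = B / np.linalg.norm(B, axis=1, keepdims=True)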

7.
Clustering is a popular data analysis and data mining technique. Since the clustering problem is NP-complete, the larger the problem, the harder it is to find the optimal solution and the longer it takes to reach a reasonable result. A popular clustering technique is based on K-means, in which the data are partitioned into K clusters. In this method the number of clusters is predefined, and the technique depends heavily on the initial identification of elements that represent the clusters well. A large body of research in clustering has focused on improving the clustering process so that the clusters do not depend on this initial identification. Another difficulty is the local-minimum problem: although methods such as K-Harmonic means clustering solve the initialization problem, becoming trapped in local minima remains an issue. In this paper we develop a new algorithm that addresses this problem using a tabu search technique, Tabu K-Harmonic means (TabuKHM). Experimental results on the Iris data set and other well-known data sets illustrate the robustness of the TabuKHM clustering algorithm.
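[Illustrative sketch under stated assumptions, not the paper's TabuKHM] The K-Harmonic Means performance function replaces the minimum distance used by K-means with a harmonic average of the distances from each point to all K centers, which is what removes the sensitivity to initialization. The Python sketch below evaluates that objective and wraps it in a toy tabu-flavoured perturbation search with a tabu list of rounded center signatures; the exponent p, the tabu tenure and the step size are assumptions, not values from the paper.

    import numpy as np

    def khm_objective(X, C, p=3.5):
        # K-Harmonic Means performance function: sum_i K / sum_j d_ij^(-p)
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        return np.sum(C.shape[0] / np.sum(d ** (-p), axis=1))

    def tabu_perturbation_search(X, k=3, iters=200, tenure=10, step=0.2, seed=0):
        # toy tabu-flavoured search over center configurations (illustration only)
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), k, replace=False)].copy()
        cur_f = best_f = khm_objective(X, C)
        best_C, tabu = C.copy(), []
        for _ in range(iters):
            cand = C + step * rng.normal(size=C.shape)   # perturb current centers
            sig = tuple(np.round(cand, 1).ravel())       # coarse move signature
            if sig in tabu:
                continue                                 # move is tabu, skip it
            tabu.append(sig)
            if len(tabu) > tenure:
                tabu.pop(0)                              # forget the oldest tabu move
            f = khm_objective(X, cand)
            if f < cur_f:                                # accept improving move
                C, cur_f = cand, f
                if f < best_f:
                    best_C, best_f = cand.copy(), f
        return best_C, best_f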

8.
In this review paper, we present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory, as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities, such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
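[Illustrative sketch, not taken from the review] Of the methods named above, dynamic mode decomposition is the easiest to state compactly: from snapshot pairs (x_k, x_{k+1}) one forms the reduced operator U*YVΣ⁻¹ and reads its eigenvalues and modes as a finite-dimensional approximation of the transfer/Koopman spectrum. The Python sketch below implements standard exact DMD under that convention; the function name and the optional rank truncation r are assumptions.

    import numpy as np

    def exact_dmd(X, Y, r=None):
        # X, Y: (n_states, n_snapshots), with Y[:, k] the time-advanced image of X[:, k]
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        if r is not None:                       # optional rank truncation
            U, s, Vt = U[:, :r], s[:r], Vt[:r]
        P = Vt.conj().T @ np.diag(1.0 / s)
        Atilde = U.conj().T @ Y @ P             # reduced linear operator
        eigvals, W = np.linalg.eig(Atilde)
        modes = Y @ P @ W                       # exact DMD modes
        return eigvals, modes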

9.
We consider an inverse problem for a one-dimensional integrodifferential hyperbolic system, which comes from a simplified model of thermoelasticity. This inverse problem aims to identify the displacement u, the temperature η and the memory kernel k simultaneously from weighted measurement data of the temperature. By using the fixed point theorem in suitable Sobolev spaces, global-in-time existence and uniqueness results for this inverse problem are obtained. Moreover, we prove that the solution to this inverse problem depends continuously on the noisy data in suitable Sobolev spaces. For this nonlinear inverse problem, our theoretical results guarantee the solvability of the proposed physical model and well-posedness for small measurement time τ, which is quite different from general inverse problems.

10.
Evaluation processes are widely used in quality inspection, design, marketing exploitation and other fields of industrial companies. In many of these fields the items, products, designs, etc., are evaluated according to knowledge acquired via the human senses (sight, taste, touch, smell and hearing); in such cases we speak of sensory evaluation. An important problem then arises: the modelling and management of uncertain knowledge in the evaluation process, because the information acquired through human perception always involves uncertainty, vagueness and imprecision. Decision analysis techniques have been utilized in many evaluation processes. This paper therefore proposes the application of linguistic decision analysis to sensory evaluation, based in particular on the linguistic 2-tuple representation model, in order to model and manage consistently the uncertainty and vagueness of the information in this type of problem, and shows its advantages.
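[Illustrative sketch] The linguistic 2-tuple representation model mentioned above (Herrera and Martínez) represents an assessment as a pair (s_i, α), where s_i is a linguistic label and α ∈ [−0.5, 0.5) is a symbolic translation, via the transformations Δ(β) = (s_round(β), β − round(β)) and Δ⁻¹(s_i, α) = i + α. The Python sketch below shows these two transformations; the seven-label term set is a hypothetical example, not one from the paper.

    def delta(beta, labels):
        # value beta in [0, g]  ->  linguistic 2-tuple (s_i, alpha)
        i = int(round(beta))
        alpha = round(beta - i, 4)        # symbolic translation in [-0.5, 0.5)
        return labels[i], alpha

    def delta_inv(label, alpha, labels):
        # linguistic 2-tuple -> its underlying numerical value
        return labels.index(label) + alpha

    # hypothetical 7-label term set for a sensory attribute
    S = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]
    print(delta(4.63, S))                     # ('very_high', -0.37)
    print(delta_inv("very_high", -0.37, S))   # 4.63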

11.
12.
In the present paper, the Cauchy problem for the Laplace equation with nonhomogeneous Neumann data in an infinite "strip" domain is considered. This problem is severely ill-posed, i.e., the solution does not depend continuously on the data. A conditional stability result is given. A new a posteriori Fourier method for solving this problem is proposed. The corresponding error estimate between the exact solution and its regularized approximate solution is also proved. Numerical examples show the effectiveness of the method, and a comparison of the numerical performance of the a posteriori and the a priori Fourier methods is also provided.

13.
This paper deals with the complexity of the decomposition of a digital surface into digital plane segments (DPSs for short). We prove that the decision problem (does there exist a decomposition with fewer than λ DPSs?) is NP-complete, and thus that the optimization problem (finding the minimum number of DPSs) is NP-hard. The proof is based on a polynomial reduction of any instance of the well-known 3-SAT problem to an instance of the digital surface decomposition problem. A geometric model for the 3-SAT problem is proposed.

14.
In this paper, we have first given a numerical procedure for the solution of second order non-linear ordinary differential equations of the type y″ = f(x; y, y′) with given initial conditions. The method is based on a geometrical interpretation of the equation, which suggests a simple geometrical construction of the integral curve. We then translate this geometrical method into a numerical procedure adaptable to desk calculators and digital computers. We have studied the efficacy of this method with the help of an illustrative example with a known exact solution, and have also compared it with the Runge–Kutta method. We have then applied this method to a physical problem, namely, the study of the temperature distribution in a semi-infinite homogeneous solid medium with a temperature-dependent conductivity coefficient.
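[Illustrative sketch, not the paper's geometric method] The comparison method mentioned above, the classical fourth-order Runge–Kutta scheme, handles y″ = f(x; y, y′) by rewriting it as the first-order system y′ = v, v′ = f(x, y, v). The Python sketch below implements that standard scheme and runs it on y″ = −y, whose exact solution sin(x) serves as the kind of known-solution check described in the abstract; the step size and the test problem are illustrative choices, not the paper's example.

    import numpy as np

    def rk4_second_order(f, x0, y0, yp0, h, n_steps):
        # classical RK4 for y'' = f(x, y, y'), as the system y' = v, v' = f(x, y, v)
        def g(x, u):                       # u = (y, v)
            return np.array([u[1], f(x, u[0], u[1])])
        x, u = x0, np.array([y0, yp0], dtype=float)
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            k1 = g(x, u)
            k2 = g(x + h / 2, u + h / 2 * k1)
            k3 = g(x + h / 2, u + h / 2 * k2)
            k4 = g(x + h, u + h * k3)
            u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            x += h
            xs.append(x)
            ys.append(u[0])
        return np.array(xs), np.array(ys)

    # test problem with known exact solution: y'' = -y, y(0) = 0, y'(0) = 1  =>  y = sin(x)
    xs, ys = rk4_second_order(lambda x, y, yp: -y, 0.0, 0.0, 1.0, 0.1, 100)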

15.
This paper is a contribution to our knowledge of Greek geometric analysis. In particular, we investigate the aspect of analysis known as diorism, which treats the conditions, arrangement, and totality of solutions to a given geometric problem, and we claim that diorism must be understood in a broader sense than historians of mathematics have generally admitted. In particular, we show that diorism was a type of mathematical investigation, not only of the limitation of a geometric solution, but also of the total number of solutions and of their arrangement. Because of the logical assumptions made in the analysis, the diorism was necessarily a separate investigation which could only be carried out after the analysis was complete.

16.
In this article we investigate the existence of a solution to a semi-linear, elliptic, partial differential equation with distributional coefficients and data. The problem we consider is a generalization of the Lichnerowicz equation that one encounters in studying the constraint equations in general relativity. Our method for solving this problem consists of solving a net of regularized, semi-linear problems with data obtained by smoothing the original, distributional coefficients. In order to solve these regularized problems, we develop a priori $L^{\infty}$-bounds and sub- and super-solutions to apply a fixed point argument. We then show that the net of solutions obtained through this process satisfies certain decay estimates by determining estimates for the sub- and super-solutions and utilizing classical, a priori elliptic estimates. The estimates for this net of solutions allow us to regard this collection of functions as a solution in a Colombeau-type algebra. We motivate this Colombeau algebra framework by first solving an ill-posed critical exponent problem. To solve this ill-posed problem, we use a collection of smooth, "approximating" problems and then use the resulting sequence of solutions and a compactness argument to obtain a solution to the original problem. This approach is modeled after the more general Colombeau framework that we develop, and it conveys the potential that solutions in these abstract spaces have for obtaining classical solutions to ill-posed non-linear problems with irregular data.

17.
The Weber problem consists of finding a point in ${\mathbb R}^n$ that minimizes the weighted sum of distances from m given points in ${\mathbb R}^n$ that are not collinear. An application that motivated this problem is the optimal location of facilities in the 2-dimensional case. A classical method to solve the Weber problem, proposed by Weiszfeld in 1937, is based on a fixed-point iteration. In this work we generalize the Weber location problem by considering box constraints. We propose a fixed-point iteration with projections onto the constraints and prove its descent properties. It is also proved that the limit of the sequence generated by the method is a feasible point and satisfies the KKT optimality conditions. Numerical experiments are presented to validate the theoretical results.
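[Hedged sketch of the iteration described above] The classical Weiszfeld update replaces the current point by a weighted average of the anchor points with weights w_i / ||x − a_i||; for the box-constrained generalization, each update can be followed by a projection (componentwise clipping) onto the box. The Python sketch below follows that description; the stopping rule, the safeguard against landing exactly on an anchor point, and the function name are assumptions rather than details taken from the paper.

    import numpy as np

    def projected_weiszfeld(points, weights, lower, upper,
                            x0=None, max_iter=1000, tol=1e-8, eps=1e-12):
        # minimize sum_i w_i * ||x - a_i|| subject to lower <= x <= upper
        A = np.asarray(points, dtype=float)
        w = np.asarray(weights, dtype=float)
        x = A.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
        x = np.clip(x, lower, upper)
        for _ in range(max_iter):
            d = np.maximum(np.linalg.norm(A - x, axis=1), eps)     # guard division by zero
            coef = w / d
            x_new = np.clip(coef @ A / coef.sum(), lower, upper)   # Weiszfeld step + projection
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # example: three anchor points in the plane, feasible box [0, 1] x [0, 1]
    x_star = projected_weiszfeld([[0, 0], [2, 0], [1, 3]], [1, 1, 1], [0, 0], [1, 1])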

18.
19.
In the last decade, many parallel processing mechanisms have been developed in information systems to enhance their performance, but I/O throughput rates are still the bottleneck for data processing in such systems. In particular, relational database systems encounter this performance problem when dealing with expensive operations such as the join. To treat a class of two-way join problems in databases, Rotem et al. proposed a linearization method for finding the optimal allocation of relations to a multidisk database such that the expected query cost is minimized. For the multidisk allocation problem with N relations and M disks, their model needs MN + N(N−1)/2 + MN(N−1)/2 zero–one (0–1) variables. This paper proposes a concise method to reformulate the same problem, which requires only MN + N(N−1)/2 such variables. The problem can hence be solved more efficiently by the concise method. The analytical superiority of the concise method, in terms of the number of iterations and execution times, is demonstrated through a computational experiment conducted on a set of generated test examples.
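[Worked example with assumed sizes] The saving from the concise formulation is easy to quantify: the third group of MN(N−1)/2 zero–one variables is dropped, and that group dominates the count as N grows. The short Python check below compares the two counts for an illustrative instance with N = 10 relations and M = 4 disks (these sizes are assumptions, not figures from the paper).

    def variable_counts(N, M):
        # number of 0-1 variables: Rotem et al.'s model vs. the concise reformulation
        rotem = M * N + N * (N - 1) // 2 + M * N * (N - 1) // 2
        concise = M * N + N * (N - 1) // 2
        return rotem, concise

    print(variable_counts(10, 4))   # (265, 85) for N = 10 relations, M = 4 disks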

20.
Based on a mathematical model of laser beams, we present a spectral Galerkin method for solving a Cauchy problem of the Helmholtz equation in a rectangle, where the Cauchy data pairs are given at y = 0 and boundary data are given at x = 0 and x = π. The solution is sought in the interval 0 < y < 1. The spectral Galerkin method is considered as a regularization method. We then perform an analysis of the error bound for this method. For illustration, several numerical experiments are constructed to demonstrate the feasibility and efficiency of the proposed method.

