Similar Articles
20 similar articles found (search time: 15 ms)
1.
In recent years, a great deal of research has focused on sparse representations of signals. In particular, the dictionary learning algorithm K-SVD efficiently learns a redundant dictionary from a set of training signals, and much progress has been made on various aspects of the approach. A related technique is the extreme learning machine (ELM), a single-hidden-layer feed-forward neural network (SLFN) with fast training, good generalization, and universal classification capability. In this paper, we propose an optimization of K-SVD based on a denoising deep extreme learning machine built from autoencoders (DDELM-AE) for sparse representation. In other words, we obtain a new learned representation through the DDELM-AE, and using it as the new “input” makes the conventional K-SVD algorithm perform better. To verify the classification performance of the new method, we conduct extensive experiments on real-world data sets; its performance is comparable to that of deep models (i.e., stacked autoencoders). The experimental results indicate that the proposed method is highly efficient in terms of both speed and accuracy.
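As a rough illustration of the representation-learning step, the following is a minimal single-layer ELM-autoencoder sketch in NumPy; the hidden size, regularization, and tanh activation are assumptions for illustration, not the paper's exact DDELM-AE configuration (which stacks several such layers with denoising). The returned features would then replace the raw signals as the "input" to K-SVD.

```python
import numpy as np

def elm_autoencoder_features(X, n_hidden=200, reg=1e-3, seed=0):
    """One ELM-AE layer: random hidden weights, output weights by ridge regression."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random (untrained) input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    # Solve min ||H @ beta - X||^2 + reg * ||beta||^2 for the output weights
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return X @ beta.T                             # new learned representation
```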

2.
The goal of this paper is to find a low-rank approximation of a given nth-order tensor. Specifically, we give a computable strategy for estimating the rank of a given tensor, based on approximating the solution to an NP-hard problem. We formulate a sparse optimization problem via l1-regularization to find a low-rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaled proximal alternating minimization algorithm and study its theoretical convergence. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In simulation experiments, the performance of our algorithm supports the claim that our method provides an efficient estimate of the number of rank-one components in a given tensor. The algorithm is also applied to the low-rank approximation of surveillance videos.
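The core building block of such an l1-regularized proximal scheme is the soft-thresholding operator; a minimal sketch follows (the surrounding alternating minimization and rescaling are specific to the paper and not reproduced here).

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1: shrinks each entry toward zero,
    # which is what drives small rank-one components to exactly zero
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```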

3.
4.
5.
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and we establish a complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. In the third part, the focus shifts to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient-projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration to the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
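A minimal NumPy sketch of a variance-based sample-size test of the kind described in the first part; the threshold theta and the exact form of the test are patterned on the standard "norm test" and are assumptions for illustration, not necessarily the paper's precise criterion.

```python
import numpy as np

def sample_size_sufficient(per_example_grads, theta=0.5):
    """per_example_grads: |S| x d array of gradients on the current sample S.
    Accept the sample if the estimated gradient variance, scaled by 1/|S|,
    is small relative to the batch gradient norm; otherwise enlarge S."""
    g = per_example_grads.mean(axis=0)
    var = per_example_grads.var(axis=0, ddof=1).sum()   # trace of sample covariance
    return var / len(per_example_grads) <= (theta * np.linalg.norm(g)) ** 2
```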

6.
Consider the set of vectors over a field having non-zero coefficients only in a fixed sparse set, with multiplication defined by convolution, or the set of integers having non-zero digits (in some base b) only in a fixed sparse set. We show the existence of an optimal (or almost-optimal, in the latter case) ‘magic’ multiplier constant that provides a perfect hash function transferring the information from the given sparse coefficients into consecutive digits. In studying the convolution case we also obtain a non-degeneracy result for Schur functions as polynomials in the elementary symmetric functions in positive characteristic.
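The flavor of such a "magic" multiplier can be conveyed by a brute-force search in the base-2 integer setting with digits in {0, 1}, in the spirit of magic-bitboard hashing; the 64-bit width and randomized search below are illustrative assumptions, whereas the paper's construction is explicit.

```python
import itertools
import random

def find_magic(positions, width=64, tries=100_000):
    """Search for M so that the top len(positions) bits of (x*M mod 2**width)
    are distinct over all x supported on the sparse bit positions."""
    s = len(positions)
    shift = width - s
    xs = [sum(1 << p for p in sub)
          for r in range(s + 1)
          for sub in itertools.combinations(positions, r)]
    for _ in range(tries):
        m = random.getrandbits(width) & random.getrandbits(width)  # sparse candidate
        images = {((x * m) % (1 << width)) >> shift for x in xs}
        if len(images) == len(xs):   # perfect hash: all inputs land on distinct digits
            return m
    return None
```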

7.
Shape optimization is a widely used technique in the design phase of a product. Current improvement policies require a product to fulfill a series of conditions from the perspective of mechanical resistance, fatigue, natural frequency, impact resistance, etc. All these conditions are translated into equality or inequality restrictions that must be satisfied during the optimization process used to determine the optimal shape. This article describes a new method for shape optimization that admits any regular shape as a candidate, thereby improving on traditional methods limited to straight profiles or profiles established a priori. Our approach is based on functional techniques: the shape of the object is represented by means of functions belonging to a finite-dimensional function space. To solve this problem, the article proposes an optimization method that uses machine learning techniques for functional data to represent the boundary of the set of feasible functions and to speed up the evaluation of the restrictions in each iteration of the algorithm. The results demonstrate that the functional approach produces better results in the shape optimization process, and that speeding up the algorithm with machine learning techniques does not negatively affect design-process response times.

8.
Reza Akhtar, Discrete Mathematics, 2012, 312(22): 3417–3423
We study the representation number for some special sparse graphs. For graphs with a single edge and for complete binary trees we give an exact formula, and for hypercubes we improve the known lower bound. We also study the prime factorization of the representation number of graphs with one edge.

9.
One of the most effective numerical techniques for solving nonlinear programming problems is the sequential quadratic programming approach. Many large nonlinear programming problems arise naturally in data fitting and when discretization techniques are applied to systems described by ordinary or partial differential equations. Problems of this type are characterized by matrices that are large and sparse. This paper describes a nonlinear programming algorithm that exploits the matrix sparsity produced by these applications. Numerical experience is reported for a collection of trajectory optimization problems with nonlinear equality and inequality constraints. The authors wish to acknowledge the insightful contributions of Dr. William Huffman.
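To indicate where sparsity enters such a method, here is a minimal sketch of one equality-constrained SQP step that solves the sparse KKT system with SciPy; the plain direct solve and the restriction to equality constraints are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sqp_step(H, g, J, c):
    """Solve [H J^T; J 0] [p; lam] = [-g; -c] for step p and multipliers lam.
    H: sparse Lagrangian Hessian, J: sparse constraint Jacobian."""
    n = H.shape[0]
    KKT = sp.bmat([[H, J.T], [J, None]], format="csc")  # None fills the zero block
    sol = spla.spsolve(KKT, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]
```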

10.
Optimization, 2012, 61(7): 1099–1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set-containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem, in the case of interval uncertainty, as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problem as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed-point iterative algorithm and establish the convergence of the algorithm. We finally present some preliminary results of our computational experiments with the method.
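The final non-negatively constrained quadratic program can be attacked by a simple fixed-point map; the projected-gradient form and step-size rule below are assumptions for illustration, and the paper's own iteration and convergence proof should be consulted for details.

```python
import numpy as np

def nonneg_qp(Q, c, iters=1000):
    """min 0.5 u'Qu - c'u subject to u >= 0, via the fixed point
    u = max(0, u - alpha * (Qu - c)) of the projected-gradient map."""
    alpha = 1.0 / np.linalg.norm(Q, 2)   # 1/L step for convex (PSD) Q
    u = np.zeros(len(c))
    for _ in range(iters):
        u = np.maximum(0.0, u - alpha * (Q @ u - c))
    return u
```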

11.
In this paper, we propose a general strategy for rapidly computing sparse Legendre expansions. The resulting methods yield a new class of fast algorithms capable of approximating a given function f : [−1, 1] → ℝ with a near-optimal linear combination of s Legendre polynomials of degree ≤ N in just \((s \log N)^{\mathcal{O}(1)}\) time. When s ≪ N, these algorithms exhibit sublinear runtime complexity in N, as opposed to traditional Ω(N log N)-time methods for computing all of the first N Legendre coefficients of f. Theoretical as well as numerical results demonstrate the effectiveness of the proposed methods.
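For contrast, the dense baseline such sublinear-time methods compete against can be written in a few lines of NumPy: compute all N Legendre coefficients by Gauss–Legendre quadrature and keep the s largest. This is a sketch of the baseline, not the paper's algorithm.

```python
import numpy as np
from numpy.polynomial import legendre

def best_s_term_legendre(f, N, s):
    x, w = legendre.leggauss(N)              # quadrature nodes and weights
    V = legendre.legvander(x, N - 1)         # V[i, k] = P_k(x[i])
    # c_k = (2k+1)/2 * integral of f(t) P_k(t) over [-1, 1], by quadrature
    coeffs = (np.arange(N) + 0.5) * (V.T @ (w * f(x)))
    keep = np.argsort(np.abs(coeffs))[-s:]   # indices of the s largest coefficients
    c = np.zeros(N)
    c[keep] = coeffs[keep]
    return c                                 # evaluate with legendre.legval(t, c)
```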

12.
Kernel extreme learning machine (KELM) increases the robustness of the extreme learning machine (ELM) by turning linearly non-separable data in a low-dimensional space into linearly separable data. However, the internal parameters of ELM are initialized at random, making the algorithm unstable. In this paper, we use the active-operators particle swarm optimization algorithm (APSO) to obtain an optimal set of initial parameters for KELM, thus creating an optimal KELM classifier named APSO-KELM. Experiments on standard genetic datasets show that APSO-KELM has higher classification accuracy than the existing ELM and KELM, and than algorithms combining PSO/APSO with ELM/KELM, such as PSO-KELM, APSO-ELM, and PSO-ELM. Moreover, APSO-KELM has good stability and convergence, and is shown to be a reliable and effective classification algorithm.
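The KELM classifier itself has a simple closed form, which is what makes tuning its few hyperparameters with a swarm optimizer attractive; a minimal sketch with an RBF kernel, where C and gamma stand in for the parameters that (A)PSO would optimize:

```python
import numpy as np

def kelm_train(X, T, C=1.0, gamma=1.0):
    """Kernel ELM: solve (K + I/C) alpha = T for the output weights.
    X: n x d training inputs, T: n x k one-hot targets."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                             # RBF kernel matrix
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, alpha, X_new, gamma=1.0):
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ alpha                  # argmax over columns = class
```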

13.
This paper is concerned with the sparse representation of analytic signals in the Hardy space \(H^2(\mathbb{D})\), where \(\mathbb{D}\) is the open unit disk in the complex plane. In recent years, adaptive Fourier decomposition has attracted considerable attention in the area of signal analysis in \(H^2(\mathbb{D})\). As a continuation of adaptive Fourier decomposition-related studies, this paper proves rapid decay properties of the singular values of the dictionary. These rapid decay properties lay a foundation for applications of compressed sensing based on this dictionary. Through Hardy space decomposition, this program contributes to sparse representations of signals in the most commonly used function spaces, namely, the spaces of square-integrable functions in various contexts. Numerical examples are given in which both compressed sensing and \(\ell^1\)-minimization are used. Copyright © 2013 John Wiley & Sons, Ltd.

14.
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. The computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated as a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of the wavelet reconstruction operators for SPR grids having appropriate structures. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a given accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
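The interpolatory wavelet coefficients at the heart of the SPR are prediction errors of the cubic (four-point) interpolating subdivision scheme; a one-dimensional sketch for interior points of a dyadic grid, with boundary stencils omitted for brevity:

```python
import numpy as np

def cubic_detail_coefficients(f_fine):
    """f_fine: samples on a fine dyadic grid (odd length >= 9). Predict each
    interior odd-indexed value from four even neighbors with the
    (-1, 9, 9, -1)/16 stencil; the prediction errors are the wavelet details."""
    even = f_fine[::2]
    pred = (-even[:-3] + 9 * even[1:-2] + 9 * even[2:-1] - even[3:]) / 16.0
    return f_fine[3:-2:2] - pred   # details at interior odd points
```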

15.
This paper argues that curvelets provide a powerful tool for representing very general linear symmetric systems of hyperbolic differential equations. Curvelets are a recently developed multiscale system [7, 9] in which the elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle width ≈ length². We prove that for a wide class of linear hyperbolic differential equations, the curvelet representation of the solution operator is both optimally sparse and well organized.
  • It is sparse in the sense that the matrix entries decay nearly exponentially fast (i.e., faster than any negative polynomial), and
  • well organized in the sense that the very few nonnegligible entries occur near a few shifted diagonals.
Indeed, we show that the wave group maps each curvelet onto a sum of curvelet-like waveforms whose locations and orientations are obtained by following the different Hamiltonian flows—hence the diagonal shifts in the curvelet representation. A physical interpretation of this result is that curvelets may be viewed as coherent waveforms with enough frequency localization that they behave like waves, but at the same time enough spatial localization that they simultaneously behave like particles. © 2005 Wiley Periodicals, Inc.

16.
The computational complexity of a new class of combinatorial optimization problems that are induced by optimal machine learning procedures in the class of collective piecewise linear classifiers of committee type is studied.

17.
We present here a computational study comparing the performance of leading machine learning techniques to that of recently developed graph-based combinatorial optimization algorithms (SNC and KSNC). The surprising result of this study is that SNC and KSNC consistently show the best or close-to-best performance in terms of their F1-scores, accuracy, and recall. Furthermore, the performance of SNC and KSNC is considerably more robust than that of the other algorithms; the others may perform well on average but tend to vary greatly across data sets. This demonstrates that combinatorial optimization techniques can be competitive with state-of-the-art machine learning techniques. The code developed for SNC and KSNC is publicly available.

18.
An Introduction to Compressed Sensing and Sparse Optimization
This survey introduces the basic concepts, theoretical foundations, and main algorithms of compressed sensing and sparse optimization. Compressed sensing exploits the sparsity of the original signal to recover the complete signal, from far fewer measurements than the number of signal entries, by solving a sparse optimization problem. A small example illustrates this process and, with it, the basic ideas of compressed sensing and sparse optimization. We then briefly introduce the null space property and the restricted isometry property (RIP), which guarantee that l1 convex optimization recovers the sparse signal. Finally, several classical algorithms for sparse optimization are presented.
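A small recovery example of the kind the survey describes can be reproduced in a few lines: draw a random Gaussian measurement matrix, measure a sparse signal, and recover it by l1-minimization posed as a linear program. The dimensions below are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
b = A @ x_true

# min ||x||_1  s.t.  Ax = b, via the standard split x = u - v with u, v >= 0
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x_true)))     # near zero when recovery succeeds
```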

19.
In this paper, a robust visual tracking method is proposed based on local spatial sparse representation. In the proposed approach, the learned target template is sparsely and compactly expressed by dynamically forming local spatial and trivial samples. An adaptive multiple-subspace appearance model is developed to describe the target appearance and to construct the candidate target templates during tracking. An effective selection strategy is then employed to select the optimal sparse solution and locate the target accurately in the next frame. The experimental results demonstrate that our method performs well in complex and noisy visual environments, such as those with heavy occlusions, dramatic illumination changes, and large pose variations. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We consider the convex optimization problem \(\mathbf{P}: \min_{\mathbf{x}}\{f(\mathbf{x}) : \mathbf{x} \in \mathbf{K}\}\), where f is convex and continuously differentiable, and \(\mathbf{K} \subset \mathbb{R}^n\) is a compact convex set with representation \(\{\mathbf{x} \in \mathbb{R}^n : g_j(\mathbf{x}) \geq 0,\ j = 1,\ldots,m\}\) for some continuously differentiable functions \(g_j\). We discuss the case where the \(g_j\) are not all concave (in contrast with convex programming, where they all are). In particular, even if the \(g_j\) are not concave, we consider the log-barrier function \(\phi_\mu\) with parameter \(\mu\) associated with P, usually defined for concave \(g_j\). We then show that any limit point of any sequence \((\mathbf{x}_\mu) \subset \mathbf{K}\) of stationary points of \(\phi_\mu\), as \(\mu \to 0\), is a Karush–Kuhn–Tucker point of problem P and a global minimizer of f on K.
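The log-barrier function in question has the standard form \(\phi_\mu(\mathbf{x}) = f(\mathbf{x}) - \mu \sum_j \log g_j(\mathbf{x})\); a minimal sketch of its construction (the minimization over x and the limit as mu goes to 0 are left to a solver):

```python
import numpy as np

def log_barrier(f, gs, mu):
    """Return phi_mu(x) = f(x) - mu * sum_j log(g_j(x)), defined where all g_j > 0."""
    def phi(x):
        vals = np.array([g(x) for g in gs])
        if np.any(vals <= 0):
            return np.inf          # outside the region where the barrier is defined
        return f(x) - mu * np.log(vals).sum()
    return phi
```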
