91.
Direct-sequence spread-spectrum (DSSS) signals are widely used because of their good concealment and anti-jamming performance, and compressed sensing can effectively reduce the sampling rate required for DSSS signals. When a DSSS signal is sparsely decomposed over a redundant dictionary, however, the measurement matrix and the sparse basis are generally strongly correlated. This paper proposes an Orthogonal Pretreatment (OPT) method that preprocesses the measurement matrix and the sparse basis to reduce the coherence between them, thereby improving the accuracy and stability of information recovery. Simulation results show that the proposed method is effective.
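The abstract does not describe the OPT construction itself. The sketch below is only a generic illustration of the quantities involved: it forms the effective sensing matrix, applies one common QR-based preconditioning as a stand-in preprocessing step, and measures mutual coherence before and after; all matrix sizes and the choice of transform are assumptions, not the paper's method.

```python
# Generic illustration only (not the paper's OPT algorithm): build the effective
# sensing matrix A = Phi @ Psi, apply a QR-based row-orthogonalization as one
# candidate preprocessing transform, and measure column-wise mutual coherence
# before and after. Sizes and matrices are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 64, 256, 512                               # measurements, signal length, dictionary atoms
Phi = rng.standard_normal((M, N)) / np.sqrt(M)       # measurement (observation) matrix
Psi = rng.standard_normal((N, K))                    # redundant dictionary / sparse basis
Psi /= np.linalg.norm(Psi, axis=0)                   # unit-norm atoms

A = Phi @ Psi                                        # y = A @ s for a sparse s
Q, R = np.linalg.qr(A.T)                             # A.T = Q R, Q has orthonormal columns
T = np.linalg.inv(R.T)                               # T @ A = Q.T has orthonormal rows

def mutual_coherence(B):
    """Largest |inner product| between distinct normalized columns of B."""
    Bn = B / np.linalg.norm(B, axis=0)
    G = np.abs(Bn.T @ Bn)
    np.fill_diagonal(G, 0.0)
    return G.max()

print("coherence of A    :", mutual_coherence(A))
print("coherence of T @ A:", mutual_coherence(T @ A))
# Recovery would then use the preprocessed pair (T @ y, T @ A).
```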
92.
Super-resolution image restoration based on non-local sparse coding   (Total citations: 1; self-citations: 0; citations by others: 1)
Super-resolution image restoration methods based on compressed sensing usually adopt a local sparse coding strategy in which each image patch is encoded independently, which easily produces artificial blocking artifacts. To address this problem, this paper proposes a super-resolution image restoration method based on non-local sparse coding. The algorithm exploits the non-local self-similarity prior of the image in both dictionary training and image encoding: the dictionary is trained on an interpolated version of the low-resolution image, and the non-local sparse code of each patch is obtained as a weighted average of the local codes of its similar patches. Simulation experiments show that the proposed algorithm achieves better restoration quality and is robust to noisy images.
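A minimal sketch of the non-local coding step described above, assuming a Gaussian similarity weighting and a fixed neighbor count (neither is specified in the abstract):

```python
# Minimal sketch (assumed details): replace each patch's code by a weighted
# average of the local sparse codes of its most similar patches, with weights
# from patch-domain Gaussian similarity.
import numpy as np

def nonlocal_codes(patches, local_codes, n_neighbors=10, h=10.0):
    """patches: (P, d) vectorized patches; local_codes: (P, K) local sparse codes."""
    P = patches.shape[0]
    out = np.empty_like(local_codes)
    for i in range(P):
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)   # squared distances to patch i
        idx = np.argsort(d2)[:n_neighbors]                  # most similar patches (incl. itself)
        w = np.exp(-d2[idx] / (h ** 2))                     # Gaussian similarity weights
        w /= w.sum()
        out[i] = w @ local_codes[idx]                       # weighted average of their codes
    return out
```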
93.
This paper proposes a method for generalizing an object-oriented, general-purpose database access interface. A data dictionary is extracted from the database system's data catalog, and database access requests are encoded against this data dictionary to generalize the interface parameters. Inside the interface, the generalized access requests are decoded and the query results are mapped to objects, which generalizes access through both the data manipulation language and the data query language. This resolves the deep coupling between traditional object-oriented database access interfaces and the database schema, yielding a standardized, general-purpose database access interface.
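A hypothetical illustration of the encode/decode idea; the dictionary layout, code names, and request shape are invented for the example and are not the paper's interface:

```python
# Hypothetical example: table and column names taken from the database catalog
# form a data dictionary, and an access request is expressed as dictionary codes
# instead of schema-specific names. All names and codes here are invented.
DATA_DICTIONARY = {
    "T01": ("customer", {"C01": "id", "C02": "name", "C03": "city"}),
}

def encode_request(table, columns):
    """Translate schema names in a request into data-dictionary codes."""
    for tcode, (tname, cols) in DATA_DICTIONARY.items():
        if tname == table:
            rev = {v: k for k, v in cols.items()}
            return {"table": tcode, "columns": [rev[c] for c in columns]}
    raise KeyError(table)

def decode_request(req):
    """Inverse mapping used inside the generic access interface."""
    tname, cols = DATA_DICTIONARY[req["table"]]
    return tname, [cols[c] for c in req["columns"]]

encoded = encode_request("customer", ["id", "city"])   # {'table': 'T01', 'columns': ['C01', 'C03']}
print(decode_request(encoded))                          # ('customer', ['id', 'city'])
```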
94.
This work studies Matching Pursuit (MP) sparse decomposition over a Gabor overcomplete dictionary and first performs sparse decomposition of the mixed speech signals. To address the long running time and large memory footprint of the traditional MP algorithm, and to exploit the sparse-decomposition characteristics of speech signals, an MP decomposition based on the Fast Fourier Transform (FFT) is used to narrow the search range for the best atom and speed up the computation. A kurtosis-based adaptive blind source separation algorithm, which adaptively learns the activation function, then separates the speech sources. Simulation experiments show that the separation performance improves over the traditional algorithm, confirming the effectiveness of the method.
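The FFT shortcut rests on a standard identity: the inner products of the residual with every circular shift of an atom are given by one circular cross-correlation, which a single FFT pair computes at once. A small sketch, assuming unit-norm atoms and circular shifts:

```python
# For a time-shift-structured (e.g. Gabor) dictionary, score all shifts of an
# atom against the residual with one FFT-based circular cross-correlation.
import numpy as np

def best_shift_correlation(residual, atom):
    """Return (best circular shift, correlation value) of `atom` against `residual`."""
    n = len(residual)
    corr = np.fft.ifft(np.fft.fft(residual) * np.conj(np.fft.fft(atom, n))).real
    k = int(np.argmax(np.abs(corr)))
    return k, corr[k]
```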
95.
In this paper, we propose a compression-based anomaly detection method for time series and sequence data using a pattern dictionary. The proposed method is capable of learning complex patterns in a training data sequence and uses these learned patterns to detect potentially anomalous patterns in a test data sequence. The proposed pattern dictionary method uses a measure of complexity of the test sequence as an anomaly score that can be used to perform stand-alone anomaly detection. We also show that when combined with a universal source coder, the proposed pattern dictionary yields a powerful atypicality detector that is equally applicable to anomaly detection. The pattern dictionary-based atypicality detector uses an anomaly score defined as the difference between the complexity of the test sequence encoded by the trained pattern dictionary (typical) encoder and by the universal (atypical) encoder, respectively. We consider two complexity measures: the number of parsed phrases in the sequence, and the length of the encoded sequence (codelength). Specializing to a particular type of universal encoder, the Tree-Structured Lempel–Ziv (LZ78), we obtain a novel non-asymptotic upper bound, in terms of the Lambert W function, on the number of distinct phrases resulting from the LZ78 parser. This non-asymptotic bound determines the range of the anomaly score. As a concrete application, we illustrate the pattern dictionary framework for constructing a baseline of health against which anomalous deviations can be detected.
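A minimal sketch of the two complexity measures mentioned above, using a plain LZ78 parser; the trained pattern-dictionary (typical) encoder and the resulting difference score are not reproduced here:

```python
# LZ78-style incremental parsing: phrase count and an approximate codelength
# as sequence-complexity measures (sketch only; not the paper's full detector).
import math
import random

def lz78_parse(seq):
    """Return the list of phrases produced by LZ78 parsing of `seq`."""
    dictionary, phrases, current = {}, [], ""
    for symbol in seq:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate            # keep extending a known phrase
        else:
            dictionary[candidate] = len(dictionary) + 1
            phrases.append(candidate)      # new phrase: emit it and restart
            current = ""
    if current:
        phrases.append(current)            # trailing (possibly repeated) phrase
    return phrases

def lz78_codelength(seq, alphabet_size):
    """Approximate LZ78 codelength in bits: each phrase costs an index plus one symbol."""
    c = len(lz78_parse(seq))
    return sum(math.log2(i) + math.log2(alphabet_size) for i in range(1, c + 1))

random.seed(0)
typical  = "ab" * 100                                             # highly repetitive sequence
atypical = "".join(random.choice("abcdefgh") for _ in range(200)) # random-looking sequence
print(len(lz78_parse(typical)), len(lz78_parse(atypical)))        # far fewer phrases for the repetitive one
print(lz78_codelength(typical, 8), lz78_codelength(atypical, 8))  # and a shorter codelength
```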
96.
Automated intensity estimation of spontaneous Facial Action Units (AUs) defined by the Facial Action Coding System (FACS) is a relatively new and challenging problem. This paper presents a joint supervised dictionary learning (SDL) and regression model for solving this problem. The model is cast as an optimization function consisting of two terms. The first term concerns representing the facial images in a sparse domain using dictionary learning, whereas the second term concerns estimating AU intensities using a linear regression model in that sparse domain. The regression model accounts for disagreement between raters through a constant bias factor in the measured AU intensity values. Furthermore, since facial AU intensity is non-negative (the intensity values lie between 0 and 5), we impose a non-negativity constraint on the estimated intensities by restricting the search space of the dictionary learning and the regression function. Our experimental results on the DISFA and FERA2015 databases show that this approach is very promising for automated measurement of spontaneous facial AUs.
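The abstract does not state the objective explicitly; one plausible form of the two-term optimization it describes, with all notation assumed for illustration, is

\[
\min_{\mathbf{D},\,\mathbf{X},\,\mathbf{w},\,b}\;
\underbrace{\|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_F^2+\lambda\|\mathbf{X}\|_1}_{\text{sparse representation}}
\;+\;\gamma\,
\underbrace{\|\mathbf{z}-\mathbf{X}^{\top}\mathbf{w}-b\mathbf{1}\|_2^2}_{\text{intensity regression with rater bias }b}
\quad\text{s.t.}\quad \mathbf{X}^{\top}\mathbf{w}+b\mathbf{1}\ \geq\ \mathbf{0},
\]

where \(\mathbf{Y}\) stacks the facial image features, \(\mathbf{D}\) is the learned dictionary, \(\mathbf{X}\) the sparse codes, \(\mathbf{z}\) the AU intensity labels in \([0,5]\), and the constraint enforces non-negative estimated intensities.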
97.
Sparse coding has been used successfully for image representation. However, when there is considerable variation between the source and target domains, sparse coding cannot achieve satisfactory results. In this paper, we propose a Projected Transfer Sparse Coding algorithm. To reduce the distribution difference between the two domains, we project the source and target data into a shared low-dimensional space, jointly learning a projection matrix, a shared dictionary, and the sparse codes of the source and target data in that space. Unlike existing methods, the sparse representations are learnt from the projected data, which are invariant to the distribution difference and to irrelevant samples; the representations are therefore robust and improve classification performance. No explicit correspondence across domains is required. The projection matrix, the discriminative sparse representations, and the dictionary are learned in a unified objective function. Our image representation method yields state-of-the-art results.
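The paper's unified objective is not given in the abstract; a sketch of the general shape such a formulation could take, with the notation and the particular distribution-distance term being assumptions, is

\[
\min_{\mathbf{P},\,\mathbf{D},\,\mathbf{X}_s,\,\mathbf{X}_t}\;
\|\mathbf{P}\mathbf{Y}_s-\mathbf{D}\mathbf{X}_s\|_F^2
+\|\mathbf{P}\mathbf{Y}_t-\mathbf{D}\mathbf{X}_t\|_F^2
+\lambda\bigl(\|\mathbf{X}_s\|_1+\|\mathbf{X}_t\|_1\bigr)
+\gamma\,\mathrm{dist}\bigl(\mathbf{P}\mathbf{Y}_s,\mathbf{P}\mathbf{Y}_t\bigr),
\]

where \(\mathbf{P}\) projects the source data \(\mathbf{Y}_s\) and target data \(\mathbf{Y}_t\) into the shared low-dimensional space, \(\mathbf{D}\) is the shared dictionary, and \(\mathrm{dist}(\cdot,\cdot)\) is some measure of the distribution difference (for instance an MMD-type term).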
98.
The current study puts forward a supervised within-class-similar discriminative dictionary learning (SCDDL) algorithm for face recognition. Popular discriminative dictionary learning schemes for recognition tasks typically incorporate a linear classification error term into the objective function or impose discriminative restrictions on the representation coefficients. In the presented SCDDL algorithm, we propose to directly restrict the representation coefficients to be similar within the same class and simultaneously include the linear classification error term in the supervised dictionary learning scheme, deriving a more discriminative dictionary for face recognition. Experimental results on three large, well-known face databases suggest that our approach enhances the Fisher ratio of the representation coefficients compared with several dictionary learning algorithms that incorporate linear classifiers. In addition, the learned discriminative dictionary, the large Fisher ratio of the representation coefficients, and the simultaneously learned classifier improve the recognition rate compared with several state-of-the-art dictionary learning algorithms.
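A sketch of how the within-class similarity restriction and the classification error term could enter a dictionary learning objective; the notation is illustrative, not taken from the paper:

\[
\min_{\mathbf{D},\,\mathbf{X},\,\mathbf{W}}\;
\|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_F^2
+\lambda\|\mathbf{X}\|_1
+\alpha\sum_{i}\bigl\|\mathbf{x}_i-\mathbf{m}_{c(i)}\bigr\|_2^2
+\beta\,\|\mathbf{H}-\mathbf{W}\mathbf{X}\|_F^2,
\]

where \(\mathbf{m}_{c(i)}\) is the mean coefficient vector of the class of sample \(i\) (pulling same-class codes together, which is what raises the Fisher ratio) and \(\|\mathbf{H}-\mathbf{W}\mathbf{X}\|_F^2\) is the linear classification error against the label matrix \(\mathbf{H}\).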
99.
张长伦  余沾  王恒友  何强 《电子学报》2018,46(10):2400-2409
To address the low reconstruction quality and long reconstruction time of traditional compressed sensing reconstruction algorithms, this paper proposes a fast reconstruction algorithm based on separable dictionary training. First, a class of images is selected as the training set and a generalized low-rank matrix factorization model is built for it; next, the model is solved with the alternating direction method of multipliers (ADMM) to train a set of separable dictionaries; finally, the separable dictionaries are used in image reconstruction, so that an image is reconstructed quickly through simple linear operations. Experimental results show that, compared with traditional reconstruction algorithms, the proposed algorithm achieves markedly better reconstruction performance on images of the same class as the training set, still reconstructs other types of images with good quality, and greatly reduces the reconstruction time.
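The abstract does not give the factorization model; the sketch below only illustrates the kind of "simple linear operations" a separable (two-sided) dictionary pair enables at reconstruction time, with all sizes and the noise model assumed:

```python
# Hypothetical illustration: with a pair of trained separable dictionaries A
# (row space) and B (column space), an image is approximated as A @ S @ B.T and
# the coefficients S follow from two small linear solves. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
A = np.linalg.qr(rng.standard_normal((64, 16)))[0]   # left dictionary (64 x 16), orthonormal columns
B = np.linalg.qr(rng.standard_normal((64, 16)))[0]   # right dictionary (64 x 16), orthonormal columns

X = A @ rng.standard_normal((16, 16)) @ B.T          # a synthetic image in the model's range
Y = X + 0.05 * rng.standard_normal(X.shape)          # noisy observation

S = np.linalg.pinv(A) @ Y @ np.linalg.pinv(B).T      # coefficients: two small linear solves
X_hat = A @ S @ B.T                                  # fast linear reconstruction
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```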
100.
代晓婷  龚敬  聂生东 《电子学报》2018,46(6):1445-1453
Lung LDCT (Low-Dose Computed Tomography) images suffer from pronounced noise and streak artifacts, especially in the top and bottom slices. To improve the quality of the whole lung LDCT series, this paper proposes an image denoising method based on a structured joint dictionary. First, using the gray-level characteristics of lung CT images, HRCT (High Resolution Computed Tomography) image patches are classified and trained to obtain four class dictionaries; the information entropy and HOG (Histogram of Oriented Gradient) features of the atoms are then computed to derive the corresponding structured dictionaries, from which the structured joint dictionary is constructed. Next, after non-local means filtering of the lung LDCT images, the structured joint dictionary is used as a global dictionary to sparsely represent and reconstruct the images, yielding the denoised result. To verify the effectiveness of the algorithm, experiments are conducted on both simulated and clinical data, with comparisons against three algorithms: KSVD, AS-LNLM, and BF-MCA. The comparison shows that the proposed algorithm performs better at removing noise and streak artifacts while preserving details, with a particularly clear advantage on the top and bottom slices of the series. The method significantly improves the quality of the whole lung LDCT series.
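An illustrative sketch of one ingredient mentioned above, selecting structure-rich atoms by the information entropy of their gray-level histograms; the actual four-class training, HOG scoring, and joint-dictionary construction are not reproduced here:

```python
# Sketch (assumed details): score each dictionary atom by the Shannon entropy of
# its gray-level histogram and keep the high-entropy, structure-rich atoms when
# assembling a structured dictionary.
import numpy as np

def atom_entropy(atom, bins=16):
    """Shannon entropy of an atom's normalized gray-level histogram."""
    hist, _ = np.histogram(atom, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_structured_atoms(D, keep_ratio=0.5):
    """Keep the keep_ratio fraction of atoms (columns of D) with the highest entropy."""
    scores = np.array([atom_entropy(D[:, j]) for j in range(D.shape[1])])
    k = max(1, int(keep_ratio * D.shape[1]))
    return D[:, np.argsort(scores)[::-1][:k]]
```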