Full-text access type
Paid full text | 198 articles |
Free | 52 articles |
Free (domestic) | 9 articles |
Subject category
Chemistry | 1 article |
General | 8 articles |
Mathematics | 18 articles |
Physics | 31 articles |
Radio engineering | 201 articles |
Publication year
2023 | 2 articles |
2022 | 7 articles |
2021 | 10 articles |
2020 | 6 articles |
2019 | 10 articles |
2018 | 17 articles |
2017 | 13 articles |
2016 | 20 articles |
2015 | 24 articles |
2014 | 33 articles |
2013 | 15 articles |
2012 | 17 articles |
2011 | 15 articles |
2010 | 9 articles |
2009 | 10 articles |
2008 | 7 articles |
2007 | 6 articles |
2006 | 7 articles |
2005 | 5 articles |
2004 | 4 articles |
2003 | 6 articles |
2002 | 2 articles |
2001 | 1 article |
2000 | 3 articles |
1998 | 3 articles |
1997 | 1 article |
1995 | 4 articles |
1993 | 2 articles |
Sort order: 259 results in total; search took 31 ms.
94.
This paper studies a matching pursuit (MP) sparse decomposition algorithm based on an overcomplete Gabor dictionary, first applying sparse decomposition to the mixed speech signal. To address the long running time and large storage requirements of the traditional MP algorithm, and to exploit the sparse-decomposition characteristics of speech signals, an MP sparse decomposition based on the fast Fourier transform (FFT) narrows the search range for the best atom and speeds up execution. A kurtosis-based adaptive blind source separation algorithm then achieves blind source separation of the speech signals by adaptively learning the algorithm's activation function. Simulation experiments show that the separation quality improves over the traditional algorithm, confirming the method's effectiveness.
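The abstract's FFT-accelerated atom search is not reproduced here, but the core greedy loop of matching pursuit can be sketched as follows (a minimal illustration over a generic unit-norm dictionary; all names are hypothetical):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the dictionary atom
    (column) most correlated with the current residual, subtract its
    contribution, and accumulate the coefficient."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # Inner products of the residual with every (unit-norm) atom.
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Tiny demo: two orthonormal atoms recover a 2-sparse signal exactly.
D = np.eye(4)[:, :2]                 # atoms e1, e2
x = 3.0 * D[:, 0] - 2.0 * D[:, 1]
c, r = matching_pursuit(x, D, n_iter=2)
```

With an orthonormal dictionary the decomposition is exact after one pass per active atom; over a redundant Gabor dictionary the loop simply runs until the residual energy falls below a tolerance.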
95.
In this paper, we propose a compression-based anomaly detection method for time series and sequence data using a pattern dictionary. The proposed method learns complex patterns in a training data sequence and uses these learned patterns to detect potentially anomalous patterns in a test data sequence. The proposed pattern dictionary method uses a measure of complexity of the test sequence as an anomaly score that can be used to perform stand-alone anomaly detection. We also show that when combined with a universal source coder, the proposed pattern dictionary yields a powerful atypicality detector that is equally applicable to anomaly detection. The pattern dictionary-based atypicality detector uses an anomaly score defined as the difference between the complexities of the test sequence as encoded by the trained pattern-dictionary (typical) encoder and by the universal (atypical) encoder. We consider two complexity measures: the number of parsed phrases in the sequence, and the length of the encoded sequence (codelength). Specializing to a particular type of universal encoder, the Tree-Structured Lempel–Ziv (LZ78), we obtain a novel non-asymptotic upper bound, in terms of the Lambert W function, on the number of distinct phrases resulting from the LZ78 parser. This non-asymptotic bound determines the range of the anomaly score. As a concrete application, we illustrate the pattern dictionary framework for constructing a baseline of health against which anomalous deviations can be detected.
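As a minimal illustration of the phrase-count complexity measure, the LZ78 incremental parser can be sketched as below (this shows only the universal-encoder side; the trained pattern-dictionary encoder from the paper is not reproduced):

```python
def lz78_phrase_count(seq):
    """Return the number of phrases produced by the LZ78 incremental
    parser: each new phrase is the shortest prefix of the remaining
    input that is not yet in the phrase dictionary."""
    phrases = set()
    count = 0
    current = ""
    for ch in seq:
        current += ch
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    if current:  # a trailing, already-seen phrase still gets encoded
        count += 1
    return count

# A repetitive sequence parses into fewer phrases than a varied one of
# the same length -- the intuition behind the complexity-based score.
low = lz78_phrase_count("abababab")   # 5 phrases
high = lz78_phrase_count("abcdefgh")  # 8 phrases
```

A test sequence whose phrase count (or codelength) is unusually high relative to the typical encoder's is flagged as anomalous.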
96.
Automated intensity estimation of spontaneous Facial Action Units (AUs) defined by the Facial Action Coding System (FACS) is a relatively new and challenging problem. This paper presents a joint supervised dictionary learning (SDL) and regression model for solving this problem. The model is cast as an optimization function consisting of two terms. The first term concerns representing the facial images in a sparse domain using dictionary learning, whereas the second term concerns estimating AU intensities using a linear regression model in the sparse domain. The regression model accounts for disagreement between raters through a constant biasing factor in measuring the AU intensity values. Furthermore, since facial AU intensities are non-negative (values lie between 0 and 5), we impose a non-negative constraint on the estimated intensities by restricting the search space for the dictionary learning and the regression function. Our experimental results on the DISFA and FERA2015 databases show that this approach is very promising for automated measurement of spontaneous facial AUs.
97.
Sparse coding has been used successfully for image representation. However, when there is considerable variation between source and target domains, sparse coding cannot achieve satisfactory results. In this paper, we propose a Projected Transfer Sparse Coding algorithm. To reduce the distribution difference between domains, we project source and target data into a shared low-dimensional space. Meanwhile, we jointly learn a projection matrix, a shared dictionary, and the sparse codes of the source and target data in that space. Unlike existing methods, the sparse representations are learned from the projected data, which are invariant to the distribution difference and to irrelevant samples; the representations are therefore robust and can improve classification performance. No explicit correspondence across domains is required. We learn the projection matrix, the discriminative sparse representations, and the dictionary in a unified objective function. Our image representation method yields state-of-the-art results.
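The paper's unified objective is more involved; a simplified two-step sketch (project, then ISTA sparse coding over the shared dictionary; the matrices `P` and `D` and all names are hypothetical) conveys the idea:

```python
import numpy as np

def project_and_code(X_src, X_tgt, P, D, lam=0.1, n_iter=50):
    """Project source/target samples into a shared low-dimensional space
    with P, then sparse-code the projected data over a shared dictionary
    D using ISTA (iterative soft-thresholding)."""
    Z = P @ np.hstack([X_src, X_tgt])          # shared low-dim space
    A = np.zeros((D.shape[1], Z.shape[1]))     # sparse codes, one column per sample
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # ISTA step size (1 / spectral norm^2)
    for _ in range(n_iter):
        G = D.T @ (D @ A - Z)                  # gradient of the data-fit term
        A = A - step * G
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-threshold
    return A

# Demo: with identity projection and dictionary, ISTA converges to a
# soft-thresholded copy of the data.
D = np.eye(2)
P = np.eye(2)
A = project_and_code(np.array([[2.0], [0.05]]),   # source sample
                     np.array([[-1.0], [0.0]]),   # target sample
                     P, D)
```

In the actual algorithm, `P`, `D`, and the codes are optimized jointly rather than in these two fixed steps.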
98.
The current study puts forward a supervised within-class-similar discriminative dictionary learning (SCDDL) algorithm for face recognition. Popular discriminative dictionary learning schemes for recognition tasks typically incorporate a linear classification error term into the objective function or place discriminative restrictions on the representation coefficients. In the presented SCDDL algorithm, we propose to directly restrict the representation coefficients to be similar within the same class while simultaneously including the linear classification error term in the supervised dictionary learning scheme, deriving a more discriminative dictionary for face recognition. Experimental results on three large, well-known face databases suggest that our approach can enhance the Fisher ratio of the representation coefficients compared with several dictionary learning algorithms that incorporate linear classifiers. In addition, the learned discriminative dictionary, the large Fisher ratio of the representation coefficients, and the simultaneously learned classifier improve the recognition rate compared with some state-of-the-art dictionary learning algorithms.
99.
To address the low reconstruction quality and long reconstruction time of traditional compressed sensing reconstruction algorithms, this paper proposes a fast reconstruction algorithm based on separable dictionary training. First, a class of images is selected as the training set and a generalized low-rank matrix factorization model is built for it. Next, the model is solved with the alternating direction method of multipliers (ADMM) to train a set of separable dictionaries. Finally, the separable dictionaries are used for image reconstruction, achieving fast reconstruction through simple linear operations. Experimental results show that, compared with traditional reconstruction algorithms, the proposed algorithm delivers markedly better reconstruction performance on images of the same class as the training set, maintains good reconstruction quality on other image types, and greatly reduces reconstruction time.
100.
Noise and streak artifacts are prominent in lung LDCT (Low-Dose Computed Tomography) images, especially in the top and bottom slices. To improve the quality of whole-lung LDCT images, this paper proposes a denoising method based on a structural joint dictionary. First, exploiting the gray-level characteristics of lung CT images, HRCT (High Resolution Computed Tomography) image patches are classified and trained to obtain four dictionaries; by computing the information entropy and HOG (Histogram of Oriented Gradients) features of the atoms, the corresponding structural dictionaries are obtained and a structural joint dictionary is constructed. Then, after applying non-local means filtering to the lung LDCT image, the structural joint dictionary is used as a global dictionary to sparsely represent and reconstruct the image, yielding the denoised result. To verify effectiveness, experiments were conducted on both simulated and clinical data and compared against three algorithms: KSVD, AS-LNLM, and BF-MCA. The comparison shows that the proposed algorithm removes noise and streak artifacts while preserving detail, with especially clear advantages on the top and bottom slices of a series. The method can significantly improve the quality of whole-lung LDCT images.
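The sparse-represent-and-reconstruct step at the heart of such patch-based denoisers can be sketched as below (an orthonormal stand-in dictionary replaces the learned structural joint dictionary; all names are hypothetical):

```python
import numpy as np

def sparse_reconstruct(patches, D, n_nonzero=3):
    """Approximate each flattened patch (a column of `patches`) by its
    n_nonzero largest-magnitude coefficients over an orthonormal
    dictionary D, then synthesize the denoised patch."""
    coeffs = D.T @ patches                    # analysis step (D orthonormal)
    # Zero out all but the n_nonzero largest-magnitude coefficients per patch.
    drop = np.argsort(np.abs(coeffs), axis=0)[:-n_nonzero, :]
    np.put_along_axis(coeffs, drop, 0.0, axis=0)
    return D @ coeffs                         # synthesis step

# Demo: keeping the 2 largest coefficients suppresses small (noisy) ones.
D = np.eye(4)
noisy = np.array([[5.0], [0.1], [-0.2], [4.0]])
clean = sparse_reconstruct(noisy, D, n_nonzero=2)
```

In a full pipeline the denoised patches would be averaged back into overlapping image positions, and the sparse coding would use a pursuit algorithm such as OMP over the (non-orthonormal) joint dictionary.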