Full-text access: subscription 214, free 34, free (domestic) 10
By subject: Chemistry 11, Mechanics 2, Interdisciplinary 20, Mathematics 122, Physics 103
By year: 2024: 1, 2023: 7, 2022: 19, 2021: 5, 2020: 8, 2019: 4, 2018: 6, 2017: 1, 2016: 10, 2015: 5, 2014: 15, 2013: 19, 2012: 7, 2011: 10, 2010: 11, 2009: 15, 2008: 16, 2007: 10, 2006: 7, 2005: 11, 2004: 8, 2003: 6, 2002: 9, 2001: 6, 2000: 2, 1999: 1, 1998: 6, 1997: 8, 1996: 3, 1995: 6, 1993: 2, 1992: 2, 1990: 1, 1989: 2, 1988: 2, 1987: 2, 1986: 2, 1985: 2, 1981: 1
Sort order: 258 results found; search time: 15 ms
1.
An audio information hiding method in the wavelet domain   Total citations: 1, self-citations: 0, cited by others: 1
A quantization-based wavelet-domain audio hiding algorithm is proposed for embedding secret speech into a carrier audio signal. To increase the hiding capacity and the security of the secret speech in transmission, the secret speech is first compressed with wavelet-domain coding and spread-spectrum modulated with an m-sequence, producing the bit sequence to be hidden; the encoded and modulated secret speech is then embedded into the wavelet coefficients of the carrier audio by quantization. Recovering the secret speech does not require the original audio. Simulation results show that the perceptual quality of the carrier audio containing the hidden speech is not noticeably degraded and that the extracted secret speech has good perceptual quality; the algorithm is robust against requantization, additive noise, low-pass filtering, and similar attacks.
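As an illustration of the quantization step this abstract describes, the following is a minimal sketch of hiding and blindly extracting a bit sequence in wavelet detail coefficients via quantization index modulation. It assumes the PyWavelets package; the wavelet choice and step size DELTA are illustrative, not the paper's parameters, and the speech-compression and m-sequence spread-spectrum stages are omitted.

```python
import numpy as np
import pywt

DELTA = 0.05            # quantization step; illustrative value, not from the paper

def embed_bits(carrier, bits, wavelet="db4", level=3):
    """Hide a bit sequence in the finest detail coefficients of the carrier
    audio using quantization index modulation (QIM)."""
    coeffs = pywt.wavedec(carrier, wavelet, level=level)
    detail = coeffs[-1].copy()                       # cD1: finest detail band
    for i, b in enumerate(bits[: len(detail)]):
        # snap the coefficient onto the sub-lattice selected by bit b
        detail[i] = DELTA * np.round(detail[i] / DELTA - b / 2.0) + b * DELTA / 2.0
    coeffs[-1] = detail
    return pywt.waverec(coeffs, wavelet)

def extract_bits(stego, n_bits, wavelet="db4", level=3):
    """Blind extraction: the original carrier audio is not needed."""
    detail = pywt.wavedec(stego, wavelet, level=level)[-1]
    bits = []
    for c in detail[:n_bits]:
        d0 = abs(c - DELTA * np.round(c / DELTA))                          # distance to the '0' lattice
        d1 = abs(c - (DELTA * np.round(c / DELTA - 0.5) + DELTA / 2.0))    # distance to the '1' lattice
        bits.append(0 if d0 <= d1 else 1)
    return bits
```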
2.
3.
Extraction of relevant lip features is of continuing interest in the visual speech domain. End-to-end feature extraction can produce good results, but at the cost of results that are difficult for humans to comprehend and relate to. We present a new, lightweight feature extraction approach, motivated by human-centric, glimpse-based psychological research into facial barcodes, and demonstrate that these simple, easy-to-extract 3D geometric features (produced using Gabor-based image patches) can successfully be used for speech recognition with LSTM-based machine learning. The approach extracts low-dimensional lip parameters with a minimum of processing. One key difference between these Gabor-based features and alternatives such as traditional DCT features or the currently fashionable CNN features is that they are human-centric features that can be visualised and analysed by humans, which makes the results easier to explain and visualise. They can also be used for reliable speech recognition, as demonstrated on the Grid corpus: for overlapping speakers, our lightweight system gave a recognition rate of over 82%, which compares well with less explainable features in the literature.
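A hedged sketch of the kind of Gabor-based patch features the abstract describes: a small, human-interpretable descriptor computed from one grayscale lip patch. The filter-bank parameters and the choice of pooling are illustrative assumptions; in a full pipeline, one such vector per video frame would be fed to the LSTM.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd + psi)

def lip_descriptor(patch, orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Low-dimensional descriptor for one grayscale lip patch:
    mean absolute Gabor response per orientation (one value per filter)."""
    feats = []
    for theta in orientations:
        k = gabor_kernel(15, sigma=4.0, theta=theta, lambd=8.0)
        resp = convolve2d(patch, k, mode="same", boundary="symm")
        feats.append(np.abs(resp).mean())
    return np.array(feats)   # one such vector per frame would feed the LSTM
```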
4.
The speech range profile (SRP) is a graphical display of the frequency-intensity interactions occurring during functional speech activity. A few studies have suggested potential clinical applications of the SRP, but they are limited to qualitative case comparisons and vocally healthy participants. The present study aimed to examine the effects of voice disorders on speaking and maximum voice ranges in a group of vocally untrained women, and to examine whether voice limit measures derived from the SRP were as sensitive as those derived from the voice range profile (VRP) in distinguishing dysphonic from healthy voices. Ninety dysphonic women with laryngeal pathologies and 35 women with normal voices, who served as controls, participated in this study. Each subject recorded a VRP for her physiological vocal limits and read aloud the "North Wind and the Sun" passage to record an SRP. All recordings were captured and analyzed with Soundswell's computerized real-time phonetogram Phog 1.0 (Hitech Development AB, Täby, Sweden). The SRPs and VRPs were compared between the two groups of subjects. Univariate analyses showed that individual SRP measures were less sensitive than the corresponding VRP measures in discriminating dysphonic from normal voices. However, stepwise logistic regression analyses revealed that a combination of only two SRP measures was almost as effective as a combination of three VRP measures in predicting the presence of dysphonia (overall prediction accuracy: 93.6% for SRP vs 96.0% for VRP). These results suggest that in a busy clinic where quick voice screening results are desirable, the SRP can be an acceptable alternative to the VRP.
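A minimal sketch of the final analysis step described above: logistic regression predicting dysphonia from a pair of SRP-derived measures. The feature columns, file names, and cross-validation setup are hypothetical and are not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: one row per speaker with two SRP-derived measures (hypothetical columns,
#    e.g. speaking F0 range in semitones and speaking intensity range in dB)
# y: 1 = dysphonic, 0 = vocally healthy
X = np.loadtxt("srp_measures.csv", delimiter=",")   # hypothetical file name
y = np.loadtxt("labels.csv", delimiter=",")         # hypothetical file name

model = LogisticRegression()
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated prediction accuracy: {accuracy:.1%}")
```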
5.
Speaker recognition is an important classification task that can be solved using several approaches. Although building a speaker recognition model on a closed set of speakers under neutral speaking conditions is a well-researched task with solutions that provide excellent performance, the classification accuracy of such models decreases significantly when they are applied to emotional speech or in the presence of interference. Furthermore, deep models may require a large number of parameters, so constrained solutions are desirable for deployment on edge devices in Internet of Things systems for real-time detection. The aim of this paper is to propose a simple, constrained convolutional neural network for speaker recognition and to examine its robustness under emotional speech conditions. We examine three quantization methods for building the constrained network: an 8-bit floating-point format, ternary scalar quantization, and binary scalar quantization. The results are demonstrated on the recently recorded SEAC dataset.
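A small sketch of the ternary and binary scalar weight quantizers mentioned above, written in plain NumPy. The threshold rule follows the common ternary-weight-network heuristic (threshold = 0.7 × mean |w|), which may differ from the paper's exact scheme.

```python
import numpy as np

def binary_quantize(w):
    """Binary scalar quantization: each weight becomes +alpha or -alpha,
    where alpha is the mean absolute weight (preserves overall scale)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def ternary_quantize(w, t=0.7):
    """Ternary scalar quantization: small weights are zeroed, the rest map to
    +alpha or -alpha (threshold = t * mean |w|, a common heuristic)."""
    delta = t * np.abs(w).mean()
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask
```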
6.
A method for locating multiple license plates in complex vehicle images   Total citations: 4, self-citations: 1, cited by others: 3
To address the problem of locating multiple license plates against a complex background, a new localization method is proposed. The method combines edge detection, connected-component analysis, skew correction, and other techniques to overcome the difficulty of localization in complex scenes. It accurately locates license plates in cluttered backgrounds and adapts well to weather, changes in illumination, and translation and rotation of the plates within the image. The method also supplies the rotation angle and character-region location information needed for subsequent character segmentation and recognition.
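A minimal sketch of the edge-detection plus connected-component stage described above, using OpenCV. The Canny thresholds, morphological kernel size, and aspect-ratio test are illustrative assumptions, and the skew-correction step is omitted.

```python
import cv2

def locate_plate_candidates(bgr_image):
    """Return bounding boxes of plate-like regions in a complex scene
    (thresholds and size limits are illustrative, not the paper's values)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                     # character strokes give dense edges
    # close horizontal gaps so the characters of one plate merge into one region
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    boxes = []
    for i in range(1, n):                                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if h > 0 and 2.0 < w / h < 6.0 and area > 400:    # plate-like width/height ratio
            boxes.append((x, y, w, h))
    return boxes    # each box can then be deskewed and passed to character segmentation
```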
7.
8.
9.
10.
We construct faithful actions of quantum permutation groups on connected compact metrizable spaces. This disproves a conjecture of Goswami.
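For context, Wang's quantum permutation group $S_n^+$ is the compact quantum group whose underlying C*-algebra is the universal C*-algebra generated by the entries of an $n \times n$ magic unitary:

$$A_s(n) = C^*\Big(u_{ij},\ 1 \le i,j \le n \;\Big|\; u_{ij} = u_{ij}^{*} = u_{ij}^{2},\ \sum_{k} u_{ik} = \sum_{k} u_{kj} = 1\Big), \qquad \Delta(u_{ij}) = \sum_{k} u_{ik} \otimes u_{kj}.$$

An action on a compact space $X$ is then a coaction $\alpha\colon C(X) \to C(X) \otimes A_s(n)$ satisfying $(\alpha \otimes \mathrm{id})\alpha = (\mathrm{id} \otimes \Delta)\alpha$; this is standard background rather than part of the construction in the abstract.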