121.
The human visual system analyzes complex scenes rapidly. It devotes its limited perceptual resources to the most salient subsets and/or objects of a scene while ignoring its less salient parts. Gaze prediction models try to predict human eye fixations (human gaze) under free-viewing conditions while imitating this attentive mechanism. Previous studies on saliency benchmark datasets have shown that visual attention is affected by the salient objects of a scene and their features, including the identity, location, and visual features of the objects, in addition to the context of the input image. Moreover, human eye fixations often converge on specific parts of the salient objects in a scene. In this paper, we propose a deep gaze prediction model that uses object detection via image segmentation. It uses several deep neural modules to find the identity, location, and visual features of the salient objects in a scene. In addition, we introduce a deep module to capture the prior bias of human eye fixations. To evaluate our model, several challenging saliency benchmark datasets are used in the experiments. We also conduct an ablation study to show the effectiveness of our proposed modules and architecture. Despite having fewer parameters, our model performs comparably to, and on some datasets even better than, state-of-the-art saliency models.
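The learned fixation-prior module is not specified in the abstract; as a rough illustration of the idea it captures, a hand-crafted center-bias prior can be written as a Gaussian over image coordinates that modulates a raw saliency map (the function names, the Gaussian form, and `sigma` are illustrative assumptions, not the paper's module):

```python
import numpy as np

def center_bias_prior(h, w, sigma=0.25):
    """Isotropic Gaussian centered on the image, a common stand-in for
    the center bias of free-viewing eye fixations (sigma is relative
    to the image size)."""
    ys = (np.arange(h) - (h - 1) / 2) / h
    xs = (np.arange(w) - (w - 1) / 2) / w
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    prior = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return prior / prior.sum()

def apply_prior(saliency, prior):
    """Modulate a raw saliency map by the fixation prior, renormalized
    to a probability map over pixels."""
    fused = saliency * prior
    return fused / fused.sum()
```

A learned module would replace the fixed Gaussian with parameters fitted to fixation data, but the multiplicative fusion step is the same shape of computation.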
122.
Neural-network-based methods for fisheye distortion correction are effective and increasingly popular, but training such networks requires a large amount of labeled data. In this paper, we propose an unsupervised fisheye correction network to address this issue. During training, the predicted parameters are used both to correct the strong distortion in the fisheye image and to synthesize the corresponding distortion from the original distortion-free image. The network is thus constrained with a bidirectional loss to obtain more accurate distortion parameters. We compute the two losses at the image level rather than directly minimizing the difference between predicted and ground-truth distortion parameters. Additionally, we leverage the geometric priors that the distortion distribution depends on the geometric regions of a fisheye image and that straight lines should remain straight in the corrected image. The network therefore attends more to these geometric prior regions instead of perceiving the whole image uniformly without any attention mechanism. To produce corrected results with more appealing visual appearance, we introduce a coarse-to-fine inpainting network that fills the hole regions caused by the irreversible mapping defined by the distortion parameters. Each module of the proposed network is differentiable, so the entire framework is end-to-end. Compared with previous supervised methods, our method is more flexible and better suited to practical distortion rectification. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods in correction performance without any labeled distortion parameters.
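As a minimal sketch of the bidirectional constraint, assume a one-parameter polynomial radial model (the paper's actual parameterization is not given in the abstract) and evaluate both branches at the point level: correct synthetically distorted points with the predicted parameter, and re-distort clean points with it:

```python
import numpy as np

def distort_points(pts, k1):
    """Synthesize radial (barrel) distortion with a one-parameter
    polynomial model: r_d = r_u * (1 + k1 * r_u**2).
    pts is an (N, 2) array normalized so the image center is the origin."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1 + k1 * r2)

def undistort_points(pts, k1, iters=20):
    """Invert the model by fixed-point iteration:
    r_u = r_d / (1 + k1 * r_u**2)."""
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = pts / (1 + k1 * r2)
    return und

def bidirectional_loss(pts, k1_pred, k1_true):
    """Point-level surrogate of the bidirectional loss: compare a
    corrected distorted view and a re-distorted clean view."""
    distorted = distort_points(pts, k1_true)
    corr = undistort_points(distorted, k1_pred)   # correction branch
    resyn = distort_points(pts, k1_pred)          # synthesis branch
    return np.mean((corr - pts) ** 2) + np.mean((resyn - distorted) ** 2)
```

The loss vanishes only when the predicted parameter matches the true one, which is exactly the supervision signal the image-level version provides without parameter labels.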
123.
Luo Ying, Ni Jiacheng, Zhang Qun. Journal of Radars (《雷达学报》), 2020, 9(1): 107-122
Accurate acquisition of parameters such as the number, location, and model of targets of interest has long been one of the most important research topics in Synthetic Aperture Radar (SAR). Current SAR information processing is split into two stages, imaging and interpretation, which are studied largely independently. Each has accumulated a large number of increasingly complex algorithms, yet SAR interpretation has not become easier as imaging resolution has improved; in particular, the low recognition rate for key targets remains fundamentally unsolved. To address these problems, this paper takes an integrated view of SAR imaging and interpretation and explores a "data-driven + intelligent learning" approach to improving the information-processing capability of airborne SAR. We first analyze the feasibility of integrated SAR imaging and interpretation based on this approach and the main problems at the current stage. On that basis, we propose a "data-driven + intelligent learning" SAR learning-imaging method, presenting the learning-imaging framework, the network-parameter selection method, the network training method, and preliminary simulation results, and we analyze the key technical problems that remain to be solved.
124.
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks involve a heavy computational load of image or video processing, which puts enormous pressure on the UAV computing platform. To solve this issue, in this work we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA can intelligently perceive the network environment and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to find the best decision by maximizing a reward such as lowest latency or lowest power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) that supplies the prior probabilities for MCTS. The sDNN is trained in a self-supervised manner, with the training data obtained from iTOA itself acting as its own teacher. Compared with game-theoretic and greedy-search-based methods, the proposed iTOA improves service latency by 33% and 60%, respectively.
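The step where DNN-supplied priors guide the tree search can be sketched with the AlphaGo-style PUCT selection rule; the offloading actions and the statistics below are illustrative, not from the paper:

```python
import math

def puct_select(stats, priors, c_puct=1.0):
    """AlphaGo-style action selection: maximize
    Q(a) + c * P(a) * sqrt(total visits + 1) / (1 + N(a)).
    stats maps action -> (visit_count, total_value); priors maps
    action -> prior probability (here, the role of the sDNN)."""
    total_n = sum(n for n, _ in stats.values())
    best, best_score = None, -float("inf")
    for a, (n, w) in stats.items():
        q = w / n if n else 0.0                      # mean reward so far
        u = c_puct * priors[a] * math.sqrt(total_n + 1) / (1 + n)
        if q + u > best_score:
            best, best_score = a, q + u
    return best
```

A high prior lets an under-visited action (e.g. offloading to an edge server) be explored early, which is how the priors speed up convergence of the search.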
125.
Hyperspectral image quality assessment (HIQA) is an indispensable technique in both academic and industrial domains. However, HIQA remains a challenging task, since fine-grained, quality-aware visual details are difficult to capture. Compared with conventional low-level features, mid-level features usually contain more semantic and quality clues and exhibit higher discriminative ability, so we leverage mid-level features for HIQA. More specifically, three-scale superpixel mosaics are generated from the input image after PCA pre-processing, with each superpixel scale corresponding to homogeneous object parts of a different granularity. Subsequently, three mid-level visual features (Fisher vector, combined mean features, and the reconstructed image matrix) as well as deep features of the hyperspectral image are computed from the three-scale superpixel images to constitute multiple kernels. We then integrate these kernels into a single multimodal kernel, which is converted into a feature vector by row-wise stacking. Image quality is finally evaluated with the designed similarity metric. Comprehensive experiments have demonstrated the effectiveness of the proposed HIQA algorithm.
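The multiple-kernel step can be illustrated with a convex combination of per-feature Gram matrices; the RBF kernel and the weights here are placeholder choices, since the abstract does not specify them:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of an RBF kernel over row-vector samples."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Convex combination of per-feature kernels into one multimodal
    kernel. A nonnegative weighted sum of PSD Gram matrices stays PSD,
    so the result is still a valid kernel."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))
```

Each feature modality (e.g. Fisher vectors vs. deep features) contributes its own Gram matrix, and the fused kernel can then be flattened row-wise into a feature vector as the abstract describes.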
126.
In recent years, hyperspectral image super-resolution has attracted many researchers and become a hot topic in computer vision. However, high-resolution hyperspectral images are difficult to obtain because of limitations of the imaging hardware, and many existing super-resolution methods have not achieved good results. In this paper, we propose a hyperspectral image super-resolution method that combines a deep residual convolutional neural network (DRCNN) with spectral unmixing. First, the spatial resolution of the image is enhanced by learning prior knowledge from natural images: the DRCNN reconstructs high-spatial-resolution hyperspectral images by concatenating multiple residual blocks, each containing two convolutional layers. Second, the spectral features of the low-resolution and high-resolution hyperspectral images are linked by spectral unmixing, which yields the endmember matrix and the abundance matrix; the final reconstruction is obtained by multiplying the two. In addition, to improve the visual quality of the reconstructed image, total-variation regularization is imposed on the abundance matrix to strengthen the relationships between neighboring pixels. Experimental results on remote sensing data with ground truth show that the proposed method performs well and preserves both spatial and spectral information without requiring auxiliary images.
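The unmixing-based reconstruction is the linear mixing model: the final image is the product of the endmember matrix and the abundance matrix. A toy sketch (matrix sizes and values invented for illustration):

```python
import numpy as np

def reconstruct(endmembers, abundances):
    """Linear mixing model: each pixel's spectrum is a convex combination
    of endmember spectra. endmembers: (bands, p); abundances: (p, pixels)."""
    return endmembers @ abundances

# toy example: p = 2 endmembers, 3 spectral bands, 2 pixels
E = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])   # each column sums to one (abundance constraint)
X = reconstruct(E, A)
```

The total-variation constraint mentioned in the abstract would act on the columns of `A`, smoothing abundances across neighboring pixels before the final product is taken.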
127.
Compressed sensing (CS) aims to reconstruct the original signal precisely from under-sampled measurements, a typically ill-posed problem. Solving it is challenging and generally requires incorporating suitable priors about the underlying signals. Traditionally, these priors are hand-crafted, and the corresponding approaches are limited in expressive capacity. In this paper, a nonconvex-optimization-inspired multi-scale reconstruction network, abbreviated iPiano-Net, is developed for block-based CS by unfolding the classic iPiano algorithm. In iPiano-Net, a block-wise inertial gradient descent step interleaves with an image-level network-induced proximal mapping to exploit local block and global content information alternately. The network-induced proximal operators are learned adaptively in each module, which efficiently characterizes image priors and improves the modeling capacity of iPiano-Net. The learned image-level priors suppress blocky artifacts and noise/corruption while preserving global information. Unlike existing discriminative CS reconstruction models trained for specific measurement ratios, a single effective model is learned that handles CS reconstruction at several measurement ratios, including unseen ones. Experimental results demonstrate that the proposed approach is substantially superior to previous CS methods in terms of Peak Signal-to-Noise Ratio (PSNR) and visual quality, especially at low measurement ratios, while remaining robust to noise and maintaining comparable execution speed.
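The classic iPiano iteration that the network unfolds is an inertial forward-backward step. A minimal sketch follows, with a plain soft-threshold standing in for the learned proximal module (in iPiano-Net the prox is a network, and the data term involves the CS measurement operator rather than this toy denoising objective):

```python
import numpy as np

def ipiano(grad_f, prox_g, x0, alpha=0.1, beta=0.5, iters=200):
    """iPiano iteration:
    x_{k+1} = prox_g(x_k - alpha * grad_f(x_k) + beta * (x_k - x_{k-1})).
    The beta term is the 'inertial' (heavy-ball) momentum."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        x_next = prox_g(x - alpha * grad_f(x) + beta * (x - x_prev))
        x_prev, x = x, x_next
    return x

def soft_threshold(t):
    """Proximal operator of t * ||.||_1 (shrinkage)."""
    return lambda v: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# toy problem: min_x 0.5 * ||x - b||^2 + t * ||x||_1
b = np.array([2.0, -0.05, 1.0])
t, alpha = 0.1, 0.5
x_hat = ipiano(lambda x: x - b, soft_threshold(alpha * t), np.zeros(3),
               alpha=alpha, beta=0.4)
# the fixed point is the soft-threshold of b at level t
```

Unfolding replaces `prox_g` in each of a fixed number of iterations with a trainable sub-network, so the prior is learned instead of hand-crafted.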
128.
With the prevalence of accessible depth sensors, dynamic skeletons have attracted much attention as a robust modality for action recognition. Convolutional neural networks (CNNs) excel at modeling relations within local receptive fields but are typically inefficient at capturing global relations. In this article, we first view dynamic skeletons as a spatio-temporal graph (STG) and learn the localized correlated features that generate the embedded nodes of the STG by message passing. To better extract global relational information, we propose a novel model called spatial-temporal graph interaction networks (STG-INs), which performs long-range temporal modeling of human body parts. In this model, human body parts are mapped to an interaction space where graph-based reasoning can be implemented efficiently via a graph convolutional network (GCN). After reasoning, the global relation-aware features are distributed back to the embedded nodes of the STG. To evaluate the model, we conduct extensive experiments on three large-scale datasets. The results demonstrate the effectiveness of the proposed model, which achieves state-of-the-art performance.
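Graph-based reasoning in the interaction space can be illustrated with the standard normalized graph-convolution propagation rule (this is the generic GCN layer, not the paper's exact STG-IN architecture):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = relu(D^-1/2 (A + I) D^-1/2 H W),
    i.e. features are averaged over each node's (self-looped) neighborhood
    with symmetric degree normalization, then linearly transformed."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy check on a triangle graph (3-regular once self-loops are added)
A = np.ones((3, 3)) - np.eye(3)
H = np.ones((3, 1))
out = gcn_layer(A, H, np.eye(1))
```

In the STG setting the nodes would be body-part embeddings and `A` the learned interaction graph; stacking such layers propagates information between distant joints.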
129.
Image quality assessment (IQA) attempts to quantify the quality-aware visual attributes perceived by humans. IQA algorithms can be divided into subjective and objective approaches. Subjective IQA relies on human judgment of image quality, with human visual perception as the dominant factor. However, it cannot be widely applied in practice because of its heavy reliance on individual viewers. Motivated by the fact that objective IQA largely depends on image structural information, we propose a structural-cues-based full-reference IPTV IQA algorithm. More specifically, we first design a grid-based object detection module to extract multiple kinds of structural information from both the reference IPTV image (i.e., video frame) and the test one. We then propose a structure-preserving deep neural network to generate a deep representation for each IPTV image. Subsequently, a new distance metric is proposed to measure the similarity between the reference image and the evaluated image; a test IPTV image with a small calculated distance is considered high quality. A comprehensive comparative study with state-of-the-art IQA algorithms has shown that our method is accurate and robust.
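The abstract does not define the distance metric; a common structural baseline is a global SSIM-style similarity turned into a distance, sketched here purely as an illustration of the "small distance = high quality" convention:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global structural similarity between two images with values in
    [0, 1]; a generic stand-in, not the paper's learned metric."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def structural_distance(ref, test):
    """Rank test images by distance to the reference: identical frames
    score 0, degraded frames score higher."""
    return 1.0 - ssim_global(ref, test)
```

In a full-reference IPTV pipeline this comparison would be applied to the deep representations of each frame rather than to raw pixels.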
130.
With modern e-healthcare developments, ambulatory healthcare has become a prominent requirement for the physically or mentally ill, the elderly, and children. One of the major challenges in such applications is timing and precision, and a potential solution is the fog-assisted cloud computing architecture. The activity recognition task is performed with the combined advantages of deep learning and genetic algorithms: video frames captured by vision cameras are processed by a genetic change-detection algorithm, which detects changes in activity between subsequent frames, and the deep learning algorithm then recognizes the activity in each changed frame. This hybrid algorithm runs on top of the fog-assisted cloud framework FogBus, and performance measures including latency, execution time, arbitration time, and jitter are observed. Empirical evaluations of the proposed model on three activity datasets show that the proposed deep genetic algorithm infers human activities more accurately than state-of-the-art algorithms.
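The paper's change detector is genetic; as a stand-in that shows where it sits in the pipeline, here is a simple frame-differencing filter that forwards only changed frames to the (heavier) recognition network:

```python
import numpy as np

def frame_changed(prev, curr, threshold=0.05):
    """Flag a frame as 'changed' when the mean absolute pixel difference
    to the previous frame exceeds a threshold."""
    diff = np.mean(np.abs(curr.astype(float) - prev.astype(float)))
    return float(diff) > threshold

def filter_frames(frames, threshold=0.05):
    """Keep the first frame plus every frame that differs from the last
    kept one; only kept frames go on to activity recognition."""
    kept = [frames[0]]
    for f in frames[1:]:
        if frame_changed(kept[-1], f, threshold):
            kept.append(f)
    return kept
```

Skipping unchanged frames is what lets the fog node meet the latency budget: the expensive deep model runs only on frames where the activity may have changed.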
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号