Article Search
  Subscription full text   1130 articles
  Free   96 articles
  Free (domestic)   61 articles
Chemistry   194 articles
Crystallography   4 articles
Mechanics   44 articles
Multidisciplinary   1 article
Mathematics   29 articles
Physics   288 articles
Radio & Electronics   727 articles
  2024   15 articles
  2023   131 articles
  2022   156 articles
  2021   163 articles
  2020   106 articles
  2019   64 articles
  2018   31 articles
  2017   48 articles
  2016   43 articles
  2015   31 articles
  2014   38 articles
  2013   33 articles
  2012   32 articles
  2011   40 articles
  2010   25 articles
  2009   24 articles
  2008   24 articles
  2007   25 articles
  2006   25 articles
  2005   20 articles
  2004   19 articles
  2003   23 articles
  2002   13 articles
  2001   13 articles
  2000   23 articles
  1999   15 articles
  1998   9 articles
  1997   14 articles
  1996   10 articles
  1995   9 articles
  1994   12 articles
  1993   9 articles
  1992   4 articles
  1991   8 articles
  1990   6 articles
  1989   9 articles
  1988   7 articles
  1987   1 article
  1986   1 article
  1984   1 article
  1983   1 article
  1982   1 article
  1980   1 article
  1978   1 article
  1976   2 articles
  1974   1 article
1,287 results in total.
111.
Low-light image enhancement is a challenging task because image brightness must be increased and image degradation reduced simultaneously. Although existing deep learning-based methods improve the visibility of low-light images, many of them tend to lose details or sacrifice naturalness. To address these issues, we present a multi-stage network for low-light image enhancement, which consists of three sub-networks. More specifically, inspired by Retinex theory and the bilateral grid technique, we first design a reflectance and illumination decomposition network to decompose an image into reflectance and illumination maps efficiently. To increase brightness while preserving edge information, we then devise an attention-guided illumination adjustment network. The reflectance and adjusted illumination maps are fused and refined by adversarial learning to reduce image degradation and improve naturalness. Experiments are conducted on our rebuilt SICE low-light image dataset, which consists of 1380 real paired images, and on the public LOL dataset, which has 500 real paired images and 1000 synthetic paired images. Experimental results show that the proposed method outperforms state-of-the-art methods quantitatively and qualitatively.
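A minimal NumPy sketch of the Retinex idea the decomposition network builds on: an image is modeled as the pixel-wise product of reflectance and illumination, I = R ⊙ L, so brightening the illumination while keeping the reflectance preserves scene content. The Gaussian-blur illumination estimate and gamma adjustment below are classical stand-ins for the paper's learned decomposition and attention-guided adjustment networks, not the actual method.

```python
# Classical Retinex-style sketch (stand-in for the paper's learned networks):
# I = R * L  ->  estimate L, brighten it, recombine with R.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_low_light(img, sigma=15.0, gamma=0.45, eps=1e-4):
    """img: float32 array in [0, 1], shape (H, W) or (H, W, 3)."""
    # Crude illumination estimate: smoothed max over channels (hypothetical choice).
    lum = img.max(axis=-1) if img.ndim == 3 else img
    illum = np.clip(gaussian_filter(lum, sigma), eps, 1.0)
    # Reflectance from the Retinex model I = R * L.
    reflect = img / (illum[..., None] if img.ndim == 3 else illum)
    # Brighten illumination with a gamma curve (the paper instead uses a learned,
    # attention-guided adjustment network).
    illum_adj = illum ** gamma
    out = reflect * (illum_adj[..., None] if img.ndim == 3 else illum_adj)
    return np.clip(out, 0.0, 1.0)

low = np.random.rand(64, 64, 3).astype(np.float32) * 0.2   # toy dark image
print(enhance_low_light(low).mean() > low.mean())           # brightness increased
```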
112.
The performance of computer vision algorithms can degrade severely in the presence of a variety of distortions. While image enhancement algorithms have evolved to optimize image quality as measured by human visual perception, their relevance in maximizing the success of computer vision algorithms operating on the enhanced images has been much less investigated. We consider the problem of image enhancement to combat Gaussian noise and low resolution with respect to the specific application of image retrieval from a dataset. We define the notion of image quality as determined by the success of image retrieval and design a deep convolutional neural network (CNN) to predict this quality. This network is then cascaded with a deep CNN designed for image denoising or super-resolution, allowing the enhancement CNN to be optimized to maximize retrieval performance. This framework allows us to couple enhancement to the retrieval problem. We also consider the problem of adapting image features for robust retrieval in the presence of distortions. We show through experiments on distorted images of the Oxford and Paris buildings datasets that our algorithms yield improved mean average precision compared to enhancement methods that are oblivious to the task of image retrieval.
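A hedged PyTorch sketch of the cascading idea described above: a retrieval-quality predictor CNN is frozen and the enhancement CNN is optimized through it, so its output maximizes the predicted retrieval quality rather than perceptual quality. The tiny module definitions, loss, and learning rate are illustrative placeholders, not the paper's architectures.

```python
import torch
import torch.nn as nn

class Enhancer(nn.Module):                 # placeholder enhancement CNN
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)             # residual denoising/SR-style output

class QualityPredictor(nn.Module):         # placeholder retrieval-quality CNN
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1))
    def forward(self, x):
        return self.net(x)                 # predicted retrieval quality score

enhancer, quality = Enhancer(), QualityPredictor()
quality.requires_grad_(False)              # freeze the (pretrained) quality predictor
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

distorted = torch.rand(4, 3, 64, 64)       # toy batch of distorted images
restored = enhancer(distorted)
loss = -quality(restored).mean()           # maximize predicted retrieval quality
opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice this illustrates is that gradients flow through the frozen quality predictor into the enhancer, coupling enhancement to the retrieval objective.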
113.
Image quality assessment is indispensable in computer vision applications such as image classification and image parsing. With the development of the Internet, image data acquisition has become more convenient. However, image distortion is inevitable due to imperfect image acquisition systems, transmission media, and recording equipment. Traditional image quality assessment algorithms focus only on low-level visual features such as color or texture and cannot encode high-level features effectively. CNN-based methods have shown satisfactory results in image quality assessment, but existing methods suffer from problems such as incomplete feature extraction, partial image-block distortion, and inability to determine scores. In this paper, we therefore propose a novel deep learning framework for image quality assessment. We incorporate both low-level visual features and high-level semantic features to better describe images, and image quality is analyzed in a parallel processing mode. Experiments conducted on the LIVE and TID2008 datasets demonstrate that the proposed model predicts the quality of distorted images well, with both SROCC and PLCC reaching 0.92 or higher.
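Since the abstract reports SROCC and PLCC of 0.92 or higher, here is a small sketch of how these two agreement metrics between predicted and subjective quality scores are typically computed; the score lists are made-up toy numbers, not results from the paper.

```python
# Toy computation of the two correlation metrics quoted in the abstract.
from scipy.stats import spearmanr, pearsonr

predicted_mos = [3.1, 4.2, 1.8, 2.9, 4.8]    # model-predicted quality scores (toy)
subjective_mos = [3.0, 4.5, 1.5, 3.2, 4.9]   # ground-truth mean opinion scores (toy)

srocc, _ = spearmanr(predicted_mos, subjective_mos)  # rank-order consistency
plcc, _ = pearsonr(predicted_mos, subjective_mos)    # linear correlation
print(f"SROCC={srocc:.3f}, PLCC={plcc:.3f}")
```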
114.
Haze is an atmospheric condition, characterized by an opalescent appearance of the air, that reduces visibility. It is caused by high concentrations of airborne pollutants, such as dust, smoke, and other particles that scatter and absorb sunlight. Poor visibility can lead to the failure of many computer vision applications, such as smart transport systems, object detection, and surveillance. One of the major issues in image processing is the restoration of images corrupted by different degradations. Images and videos captured outdoors typically exhibit low contrast, faded colour, and restricted visibility due to suspended atmospheric particles that directly affect image quality, which makes it difficult to identify objects in the captured hazy images or frames. To address this problem, several image dehazing techniques have been developed in the literature, each with its own advantages and limitations, but effective image restoration remains a challenging task. In recent years, learning-based methods (machine learning and deep learning) have greatly reduced the drawbacks of manually designed haze-related features and enabled efficient image restoration with less computational time and cost. The current state-of-the-art methods for haze-free images, mainly from the last decade, are thoroughly examined in this survey. Moreover, this paper systematically summarizes hardware implementations of various real-time haze removal methods. We hope this survey acts as a reference for researchers in this area and provides directions for future improvements based on current achievements.
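Although not spelled out in the abstract, most dehazing methods covered by such surveys build on the standard atmospheric scattering model relating the observed hazy image to the clear scene radiance:

```latex
% Standard atmospheric scattering (Koschmieder) model; background assumption,
% not a formula stated in the abstract.
% I(x): observed hazy image, J(x): haze-free scene radiance, A: global atmospheric
% light, t(x): medium transmission, beta: scattering coefficient, d(x): scene depth.
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
\]
```

Broadly, both prior-based and learning-based dehazing methods amount to estimating t(x) and A and inverting this model to recover J(x).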
115.
Due to light absorption and scattering, captured underwater images usually contain severe color distortion and reduced contrast. To address these problems, we combine the merits of deep learning and conventional image enhancement technology to improve underwater image quality. We first propose a two-branch network to compensate for the globally distorted color and the locally reduced contrast, respectively. Adopting this global–local network greatly eases the learning problem, so that it can be handled with a lightweight network architecture. To cope with the complex and changeable underwater environment, we then design a compressed-histogram equalization to complement the data-driven deep learning, whose parameters are fixed after training. The proposed compression strategy is able to generate vivid results without introducing over-enhancement or extra computational burden. Experiments demonstrate that our method significantly outperforms several state-of-the-art methods both qualitatively and quantitatively.
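The details of the paper's compressed-histogram equalization are not given in the abstract. As a rough illustration of the general idea of limiting over-enhancement, the sketch below clips (compresses) the histogram before building the equalization mapping, similar in spirit to a CLAHE-style clip limit; the `clip_frac` parameter and the redistribution scheme are made-up choices, not the paper's method.

```python
# Rough illustration only: clip (compress) the histogram before equalization so the
# mapping cannot over-stretch dominant intensity bins. `clip_frac` is hypothetical.
import numpy as np

def clipped_hist_equalize(channel, clip_frac=0.02):
    """channel: uint8 array (H, W); returns an equalized uint8 array."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    limit = max(1, int(clip_frac * channel.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // 256   # redistribute clipped mass
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[channel].astype(np.uint8)

img = (np.random.rand(64, 64) * 120).astype(np.uint8)   # toy low-contrast channel
print(clipped_hist_equalize(img).max())                  # contrast stretched toward 255
```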
116.
A simple and efficient method is reported for the synthesis of pyrroles via condensation of a series of tricarbonyl compounds with ammonia, which was generated in situ from decomposition of the deep eutectic solvent choline chloride/urea.
117.
The human visual system analyzes complex scenes rapidly. It devotes its limited perceptual resources to the most salient subsets and objects of a scene while ignoring less salient parts. Gaze prediction models try to predict human eye fixations (human gaze) under free-viewing conditions by imitating this attentive mechanism. Previous studies on saliency benchmark datasets have shown that visual attention is affected by the salient objects of a scene and their features, including the identity, location, and visual features of objects, besides the context of the input image. Moreover, human eye fixations often converge on specific parts of salient objects. In this paper, we propose a deep gaze prediction model that uses object detection via image segmentation. It employs several deep neural modules to find the identity, location, and visual features of the salient objects in a scene. In addition, we introduce a deep module to capture the prior bias of human eye fixations. To evaluate our model, several challenging saliency benchmark datasets are used in the experiments. We also conduct an ablation study to show the effectiveness of the proposed modules and the overall architecture. Despite having fewer parameters, our model achieves performance comparable to, and on some datasets better than, state-of-the-art saliency models.
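In classical saliency work, the "prior bias of human eye fixations" that the paper learns with a dedicated module is often approximated by a simple center bias: under free viewing, fixations cluster near the image center. The hand-crafted sketch below blends such a center-bias map with a saliency map; the Gaussian width and blend weight are made-up, and the paper learns this prior rather than fixing it.

```python
# Hand-crafted center-bias illustration (the paper learns this prior instead).
import numpy as np

def center_bias(h, w, sigma_frac=0.25):
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / (sigma_frac * h)) ** 2 + ((xs - cx) / (sigma_frac * w)) ** 2
    return np.exp(-0.5 * d2)                # Gaussian bump centered in the image

def apply_prior(saliency, weight=0.3):
    """Blend a raw saliency map in [0, 1] with the center prior (toy weight)."""
    prior = center_bias(*saliency.shape)
    blended = (1 - weight) * saliency + weight * prior
    return blended / blended.max()

raw = np.random.rand(48, 64)                # toy saliency map
print(apply_prior(raw).shape)               # (48, 64)
```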
118.
Neural network-based methods for fisheye distortion correction are effective and increasingly popular, although training such networks requires a large amount of labeled data. In this paper, we propose an unsupervised fisheye correction network to address this issue. During training, the predicted parameters are employed both to correct the strong distortion in the fisheye image and to synthesize the corresponding distortion from the original distortion-free image. The network is thus constrained with a bidirectional loss to obtain more accurate distortion parameters. We calculate the two losses at the image level, as opposed to directly minimizing the difference between the predicted and ground-truth distortion parameters. Additionally, we leverage the geometric priors that the distortion distribution depends on the geometric regions of fisheye images and that straight lines should remain straight in the corrected images. The network therefore focuses more on these geometric-prior regions rather than perceiving the whole image equally without any attention mechanism. To generate visually more appealing corrected results, we introduce a coarse-to-fine inpainting network to fill the hole regions caused by the irreversible mapping defined by the distortion parameters. Each module of the proposed network is differentiable, so the entire framework is completely end-to-end. Compared with previous supervised methods, our method is more flexible and better suited to practical distortion rectification. Experimental results demonstrate that our method outperforms state-of-the-art methods in correction performance without using any labeled distortion parameters.
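A hedged PyTorch sketch of the bidirectional, image-level supervision described above: the predicted distortion parameter is used both to rectify the fisheye input (compared against the clean image) and to re-distort the clean image (compared against the fisheye input). The one-parameter polynomial warp and its negated-coefficient "inverse" are toy stand-ins, since the paper's fisheye model is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def radial_warp(img, k):
    """Toy one-parameter radial warp r' = r * (1 + k * r^2); img: (B, C, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).repeat(b, 1, 1, 1)
    r2 = (grid ** 2).sum(dim=-1, keepdim=True)
    grid = grid * (1.0 + k.view(b, 1, 1, 1) * r2)
    return F.grid_sample(img, grid, align_corners=True)

def bidirectional_loss(fisheye, clean, k_pred):
    # Forward: correct the fisheye image (negated k is only a small-distortion
    # approximation of the inverse warp, used here to keep the sketch short).
    rectified = radial_warp(fisheye, -k_pred)
    # Backward: re-distort the clean ground-truth image with the same parameter.
    resynth = radial_warp(clean, k_pred)
    return F.l1_loss(rectified, clean) + F.l1_loss(resynth, fisheye)

fisheye = torch.rand(2, 3, 64, 64)                          # toy data
clean = torch.rand(2, 3, 64, 64)
k_pred = torch.tensor([0.10, 0.15], requires_grad=True)     # e.g. network output
bidirectional_loss(fisheye, clean, k_pred).backward()        # gradients reach k_pred
```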
119.
罗迎, 倪嘉成, 张群. 雷达学报 (Journal of Radars), 2020, 9(1): 107–122.
Accurately acquiring parameters such as the number, location, and type of targets of interest has always been one of the most important research topics in synthetic aperture radar (SAR). At present, SAR information processing is mainly divided into two parts, imaging and interpretation, which are studied relatively independently. Large numbers of increasingly complex algorithms have been developed for SAR imaging and for interpretation, yet interpretation has not become easier as imaging resolution has improved; in particular, the low recognition rate for key targets has not been fundamentally resolved. To address these issues, this paper starts from the perspective of integrated SAR imaging and interpretation and attempts to improve the information-processing capability of airborne SAR using a "data-driven + intelligent learning" approach. We first analyze the feasibility of integrated SAR imaging and interpretation based on the "data-driven + intelligent learning" approach and the main problems at the current stage. On this basis, we propose a "data-driven + intelligent learning" SAR learned-imaging method, present the learned-imaging framework, the network parameter selection method, the network training method, and preliminary simulation results, and analyze the key technical problems that remain to be solved.
120.
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require heavy image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA is able to perceive the network environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to find the best decision by maximizing a reward such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probabilities for MCTS. The sDNN is trained by a self-supervised learning manager, where the training data set is obtained from iTOA itself acting as its own teacher. Compared with game-theory and greedy-search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
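A much-simplified sketch of the decision mechanism described above: a flat Monte Carlo search with UCB1 selection over discrete offloading actions, where each simulated rollout returns a reward equal to negative latency drawn from a toy model. The action set, latency model, and constants are made-up assumptions; the paper's full tree-structured MCTS with an sDNN prior and self-supervised training is considerably richer, and this only illustrates the "simulate trajectories, maximize reward" loop.

```python
# Flat Monte Carlo / UCB1 sketch of reward-driven offloading decisions.
import math
import random

ACTIONS = ["local", "edge", "cloud"]
TOY_LATENCY = {"local": (80, 20), "edge": (40, 30), "cloud": (60, 50)}  # (mean, jitter) ms

def simulate(action):
    mean, jitter = TOY_LATENCY[action]
    latency = max(1.0, random.gauss(mean, jitter))
    return -latency                          # reward: lower latency is better

def choose_action(n_sim=500, c=50.0):
    visits = {a: 0 for a in ACTIONS}
    total = {a: 0.0 for a in ACTIONS}
    for t in range(1, n_sim + 1):
        # UCB1: exploit high mean reward, explore rarely-tried actions.
        def ucb(a):
            if visits[a] == 0:
                return float("inf")
            return total[a] / visits[a] + c * math.sqrt(math.log(t) / visits[a])
        a = max(ACTIONS, key=ucb)
        r = simulate(a)
        visits[a] += 1
        total[a] += r
    return max(ACTIONS, key=lambda a: total[a] / max(visits[a], 1))

print(choose_action())                        # usually "edge" under this toy model
```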