11.
Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable, and robust Internet access. The dominant Internet-access traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-access traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area, so mesh routers are easily over-provisioned with CPU, memory, and disk storage, and extends the individual wireless mesh routers in a WMN with built-in content-caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client access mesh router upon file download, and (2) caching at each mesh router along the route the Internet-access traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results from cooperative web caching in the Internet in designing cache-selection protocols for efficiently locating caches containing data objects in these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs.
In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that, compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and triples the number of transfers that achieve a throughput greater than 1 Mbps.
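The hop-by-hop design above can be sketched as follows. This is a minimal illustrative model, not the Squid-based MeshCache implementation: each router on the path from client to gateway keeps a local dictionary cache, a hit anywhere on the path is served from there, and the response fills the caches it traverses on the way back.

```python
# Hypothetical sketch of hop-by-hop caching along a mesh route.
# route: per-router caches, ordered from the client's access router
# toward the gateway; origin stands in for the Internet-side server.

def fetch(route, origin, key):
    """Return (data, hop index where the object was found)."""
    for i, cache in enumerate(route):
        if key in cache:                  # hit at an intermediate router
            data = cache[key]
            for c in route[:i]:           # fill caches closer to the client
                c[key] = data
            return data, i
    data = origin[key]                    # miss everywhere: fetch via gateway
    for c in route:                       # hop-by-hop fill on the way back
        c[key] = data
    return data, len(route)

route = [dict() for _ in range(3)]
origin = {"obj": b"payload"}
data, hops = fetch(route, origin, "obj")     # cold start: full path traversed
data2, hops2 = fetch(route, origin, "obj")   # repeat: served at the first hop
```

A repeat request never reaches the gateway, which is the effect the paper measures as reduced gateway load.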
12.
Web caching has been the solution of choice to web latency problems. The efficiency of a Web cache is strongly affected by the replacement algorithm used to decide which objects to evict once the cache is saturated. Numerous web cache replacement algorithms have appeared in the literature. Despite their diversity, a large number of them belong to a class known as stack-based algorithms. These algorithms are evaluated mainly via trace-driven simulation. The very few analytical models reported in the literature were targeted at one particular replacement algorithm, namely least recently used (LRU) or least frequently used (LFU), and they provide a formula for the evaluation of the hit ratio only. The main contribution of this paper is an analytical model for the performance evaluation of any stack-based web cache replacement algorithm. The model provides formulae for the prediction of the object hit ratio, the byte hit ratio, and the delay saving ratio. The model is validated against extensive discrete-event trace-driven simulations of three popular stack-based algorithms, LRU, LFU, and SIZE, using NLANR and DEC traces. Results show that the analytical model achieves very good accuracy: the mean error deviation between analytical and simulation results is at most 6% for LRU, 6% for LFU, and 10% for SIZE. Copyright © 2009 John Wiley & Sons, Ltd.
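The two metrics the model predicts, object hit ratio and byte hit ratio, are easy to see in a trace-driven simulation of one of the three algorithms it validates against. A minimal LRU sketch (illustrative; the paper's analytical model itself is not reproduced here):

```python
from collections import OrderedDict

def lru_trace(trace, capacity):
    """Replay a trace of (object_id, size_bytes) through an LRU cache.
    Returns (object hit ratio, byte hit ratio)."""
    cache, used = OrderedDict(), 0
    hits = byte_hits = total_bytes = 0
    for obj, size in trace:
        total_bytes += size
        if obj in cache:
            hits += 1
            byte_hits += size
            cache.move_to_end(obj)                 # refresh recency
        else:
            while used + size > capacity and cache:
                _, s = cache.popitem(last=False)   # evict least recently used
                used -= s
            if size <= capacity:                   # objects larger than the
                cache[obj] = size                  # cache are never admitted
                used += size
    return hits / len(trace), byte_hits / total_bytes
```

Swapping the recency order for a frequency count (LFU) or a size key (SIZE) changes only the eviction rule, which is exactly the "stack-based" family the model covers.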
13.
Network processing in the current Internet operates on entire data packets, which is problematic under network congestion. The newly proposed Internet service named Qualitative Communication shifts the network processing paradigm to an even finer granularity, the chunk level, which renders obsolete many existing networking policies and schemes, especially the caching algorithms and cache replacement policies that have been extensively explored in Web caching, Content Delivery Networks (CDN), and Information-Centric Networks (ICN). This paper outlines the new factors introduced by random linear network coding-based Qualitative Communication and demonstrates the importance and necessity of considering them. A novel metric is proposed that takes these new factors into account. An optimization problem is formulated to maximize the metric value of all retained chunks in the local storage of network nodes under a storage-limit constraint, and a cache replacement scheme that obtains the optimal result in a recursive manner is proposed accordingly. Performance evaluations show that, with the proposed cache replacement algorithm, end-to-end latency is remarkably reduced compared to existing schemes in various network scenarios.
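Maximizing the total metric value of retained chunks under a storage limit is a 0/1 knapsack-style problem, and the "optimal result in a recursive manner" corresponds to its standard recursive/dynamic-programming solution. A sketch under that reading (the paper's actual metric and recursion are not reproduced; the inputs here are illustrative):

```python
def select_chunks(chunks, storage):
    """chunks: list of (metric_value, size); storage: capacity in the same
    size units. Returns (best total metric, indices of chunks to retain),
    via the textbook knapsack recurrence evaluated bottom-up."""
    best = [0] * (storage + 1)
    keep = [[False] * (storage + 1) for _ in chunks]
    for i, (m, s) in enumerate(chunks):
        for cap in range(storage, s - 1, -1):   # reverse scan: 0/1 semantics
            if best[cap - s] + m > best[cap]:
                best[cap] = best[cap - s] + m
                keep[i][cap] = True
    chosen, cap = [], storage                   # backtrack the decisions
    for i in range(len(chunks) - 1, -1, -1):
        if keep[i][cap]:
            chosen.append(i)
            cap -= chunks[i][1]
    return best[storage], chosen[::-1]
```

On eviction, a node would re-run the selection over resident plus arriving chunks and drop whatever falls outside the chosen set.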
14.
In this paper, we consider a cache-enabled device-to-device (D2D) communication network with user mobility and design a mobility-aware coded caching scheme that exploits multicasting opportunities to reduce network traffic. In addition to the static cache memory used to reap coded caching gains, we assign a dynamic cache memory to mobile users so that users who never meet can still exchange contents via relaying. We treat content exchange as an information flow among the dynamic cache memories of mobile users and leverage network coding to reduce network traffic. Specifically, we transform our storage-and-broadcast problem into a network coding problem. By solving the formulated network coding problem, we obtain a dynamic content replacement and broadcast strategy. Numerical results verify that our algorithm significantly outperforms the random and greedy algorithms in terms of the amount of broadcast data, and the standard Ford–Fulkerson algorithm in terms of the successful decoding ratio.
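The Ford–Fulkerson baseline the paper compares against computes a maximum flow by repeatedly pushing along augmenting paths. As background, a minimal Edmonds–Karp variant (BFS-chosen paths) on a capacity graph; this is the generic algorithm, not the paper's coded-caching formulation:

```python
from collections import defaultdict, deque

def max_flow(graph, s, t):
    """graph: dict u -> {v: capacity}. Returns the s-t max-flow value."""
    cap = defaultdict(lambda: defaultdict(int))
    for u in graph:
        for v, c in graph[u].items():
            cap[u][v] += c
    flow = 0
    while True:
        parent = {s: None}                     # BFS for a shortest
        q = deque([s])                         # augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                        # no augmenting path left
        v, aug = t, float("inf")               # bottleneck capacity
        while parent[v] is not None:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t                                  # push flow, add residual arcs
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        flow += aug
```

In the paper's setting, nodes and edges would model users' dynamic cache memories and contact opportunities, with the flow value bounding how much content can be exchanged.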
15.
Liang Rubing, Liu Qiong. 《通信学报》 (Journal on Communications), 2014, 35(3): 23-207
This paper proposes SQPID, a simple query algorithm for mobile terminals under disconnection. The algorithm builds composite semantically related cache items through merge and trim operations, and the merging process involves no indirect-relevance checks, which simplifies the processing of earlier algorithms and speeds up the derivation of approximate query results. Experiments show that the SQPID algorithm better satisfies user requirements in both query response time and accuracy.
16.
Recently, content-centric networking (CCN) has become a hot research topic for the diffusion of contents over the Internet. Most existing work on CCN focuses on improving network resource utilization, so the energy-consumption aspect of CCN is largely ignored. In this paper, we propose a distributed energy-efficient in-network caching scheme for CCN, where each content router needs only locally available information to make caching decisions that consider both caching energy consumption and transport energy consumption. We formulate the in-network caching problem as a non-cooperative game. Through rigorous mathematical analysis, we prove that pure-strategy Nash equilibria exist in the proposed scheme and that it always has a strategy profile implementing the socially optimal configuration, even though the routers are self-interested in nature. Simulation results show that the distributed solution is competitive with the centralized scheme and outperforms other popular caching schemes in CCN. Moreover, it converges quickly when the capacity of content routers varies.
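A distributed scheme of this kind is typically run as best-response dynamics: each router, seeing only local information, repeatedly switches to its cheapest caching choice until no router wants to move, i.e., a pure-strategy Nash equilibrium. A toy sketch (the cost model and names below are illustrative assumptions, not the paper's game):

```python
def best_response_dynamics(routers, contents, demand, cache_cost, miss_cost):
    """Toy caching game: each router caches exactly one content.
    A request for the cached content is free; one held by another router
    costs half a miss (transport energy); otherwise it is fetched from
    the origin at full miss cost. Iterate best responses to a fixed point."""
    choice = {r: contents[0] for r in routers}        # arbitrary start
    def cost(r, c):
        total = cache_cost                            # caching energy
        for item, rate in demand[r].items():
            if item == c:
                continue                              # local hit
            elif item in (choice[o] for o in routers if o != r):
                total += rate * miss_cost / 2         # neighbour hit
            else:
                total += rate * miss_cost             # origin fetch
        return total
    changed = True
    while changed:
        changed = False
        for r in routers:
            best = min(contents, key=lambda c: cost(r, c))
            if cost(r, best) < cost(r, choice[r]):    # strict improvement
                choice[r] = best
                changed = True
    return choice
```

With skewed demands, routers specialize: each ends up caching what its own clients request most, which is the self-interested behaviour the equilibrium analysis covers.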
17.
Aiming at the problem of massive content transmission and the limited wireless backhaul resources of UAVs in UAV-assisted cellular networks, a cooperative caching algorithm for cache-enabled UAVs and users is proposed. By deploying caches on the UAV and on user devices, popular content requested by users is cached and delivered locally, which relieves the UAV's backhaul resources and energy consumption and reduces traffic load and user delay. A joint UAV-and-user caching optimization problem is formulated with the goal of minimizing user content-acquisition delay, and is decomposed into a UAV caching sub-problem and a user caching sub-problem, solved with the alternating direction method of multipliers (ADMM) and a global greedy algorithm, respectively. The two sub-problems are iterated until the optimization converges, realizing cooperative caching between the UAV and users. Simulation results show that the proposed algorithm effectively reduces user content-acquisition delay and improves system performance.
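The intuition behind the decomposition can be shown with a one-shot greedy placement: the most popular contents go where delivery is cheapest (the user cache), the next tier goes to the UAV, and the rest falls back to the backhaul. This is only a sketch of the greedy side; the delay constants and slot counts below are illustrative, and the paper's ADMM step for the UAV sub-problem is not reproduced:

```python
def greedy_cache(popularity, user_slots, uav_slots,
                 d_user=1.0, d_uav=5.0, d_backhaul=20.0):
    """popularity: dict content -> request probability (sums to 1).
    Returns (contents at user, contents at UAV, expected delay)."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    at_user = set(ranked[:user_slots])                    # cheapest tier
    at_uav = set(ranked[user_slots:user_slots + uav_slots])
    delay = 0.0
    for c, p in popularity.items():                       # expected delay
        if c in at_user:
            delay += p * d_user
        elif c in at_uav:
            delay += p * d_uav
        else:
            delay += p * d_backhaul                       # backhaul fetch
    return at_user, at_uav, delay
```

In the alternating scheme, the user placement above would be recomputed after each ADMM update of the UAV cache until the expected delay stops improving.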
18.
This paper studies the filtering effect of TTL-based Web cache hierarchies. Traffic characteristics have a major impact on the performance of TTL-based dynamic Web caching systems. In a cache hierarchy, only missed requests are forwarded to the next level, so the hierarchy filters the traffic level by level and its characteristics change accordingly. Using simulation, the paper examines how TTL-based dynamic Web cache hierarchies filter traffic, focusing on changes in the request inter-arrival model and the object popularity distribution.
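The level-by-level filtering is easy to demonstrate: a TTL cache answers repeat requests until the copy expires, and only the misses form the request stream seen by the next level. A minimal sketch (timestamps and TTL values are illustrative):

```python
def ttl_filter(requests, ttl):
    """requests: list of (time, object). An object fetched at time t
    serves hits until t + ttl; expired or unseen objects miss and are
    forwarded. Returns the forwarded (miss) stream."""
    expiry, forwarded = {}, []
    for t, obj in requests:
        if expiry.get(obj, -1) <= t:      # expired or never fetched
            forwarded.append((t, obj))
            expiry[obj] = t + ttl
    return forwarded

reqs = [(0, "x"), (1, "x"), (3, "x"), (4, "y"), (5, "x")]
level1 = ttl_filter(reqs, ttl=2)          # misses forwarded to level 2
level2 = ttl_filter(level1, ttl=4)        # further thinned at level 2
```

Each level sees a sparser, less bursty stream than the one below it, which is why the inter-arrival model and popularity distribution change through the hierarchy.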
19.
To compare the strengths and weaknesses of the proposed predictive double-buffer model against the cache models of traditional video-on-demand systems, and to verify the model's feasibility and correctness, a simulator suited to both the predictive double-buffer model and traditional cache models was developed in VC (Visual C++), building on an analysis of existing simulators. The paper first analyzes the simulator's system architecture, then describes the implementation of each module in detail and explains the settings of the required parameters.
20.
Wang Jiwei. 《电子科技》 (Electronic Science and Technology), 2019, 32(11): 74-77
To address the low data-processing rate of existing microcontrollers, which hinders high-speed data acquisition and processing, this paper studies and designs a microcontroller-controlled high-speed data acquisition and processing system. For acquisition, a high-speed A/D sampling chip is used. To meet the needs of high-speed data processing and storage, an IDE-interface hard disk on a PC terminal serves as the system's storage device. In addition, to coordinate the acquisition and processing stages, the microcontroller's core control module drives a high-speed dual-port RAM as a buffering queue, enabling lossless high-speed data transfer from the A/D sampling chip to the IDE disk. The resulting system integrates data acquisition and processing and has considerable engineering application value.
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号