141.
The well-developed traditional data management techniques need to be augmented with new approaches in order to continue to be effective in the mobile environment. In this paper, we focus on the challenge of maintaining integrity constraints in the presence of disconnections and expensive communication. Our approach of localization is to reformulate global constraints so as to enhance the autonomy of the mobile hosts in processing transactions. We show how this approach unifies techniques of maintaining replicated data with methods of enforcing polynomial inequalities. We also discuss how localization can be realized in PRO-MOTION, a flexible infrastructure for transaction processing in a mobile environment.
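As a rough illustration of the localization idea, a global numeric constraint of the form x_a + x_b <= C can be split into per-host budgets that each mobile host checks autonomously while disconnected; the demarcation-style split below is an assumed example, not the specific reformulation or the PRO-MOTION infrastructure described in the paper.

```python
# Illustrative sketch of "localization": split a global constraint
#   sum(x_h over all hosts h) <= C
# into per-host budgets so each host can validate updates while disconnected.
# This generic split is assumed for illustration only.

class LocalBudget:
    def __init__(self, allowance):
        self.allowance = allowance   # portion of the global limit C owned locally
        self.used = 0                # amount consumed by committed local transactions

    def try_commit(self, amount):
        """Commit a local update autonomously if the local budget still covers it."""
        if self.used + amount <= self.allowance:
            self.used += amount
            return True
        return False                 # would require renegotiating budgets (communication)

# Global constraint: x_a + x_b <= 100, split 60/40 between two mobile hosts.
host_a, host_b = LocalBudget(60), LocalBudget(40)
assert host_a.try_commit(50)         # commits while disconnected
assert not host_b.try_commit(45)     # exceeds the local budget, so defer or renegotiate
```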
142.
We consider the least‐recently‐used cache replacement rule with a Zipf‐type page request distribution and investigate an asymptotic property of the fault probability with respect to an increase of cache size. We first derive the asymptotics of the fault probability for the independent‐request model and then extend this derivation to a general dependent‐request model, where our result shows that under some weak assumptions the fault probability is asymptotically invariant with regard to dependence in the page request process. In a previous study, a similar result was derived by applying a Poisson embedding technique, where a continuous‐time proof was given through some assumptions based on a continuous‐time modeling. The Poisson embedding, however, is just a technique used for the proof and the problem is essentially on a discrete‐time basis; thus, it is preferable to make assumptions, if any, directly in the discrete‐time setting. We consider a general dependent‐request model and give a direct discrete‐time proof under different assumptions. A key to the proof is that the numbers of requests for respective pages represent conditionally negatively associated random variables. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006
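A minimal simulation sketch of the quantity being studied, the fault probability of an LRU cache under independent Zipf-type requests; the parameter values and the simulation itself are illustrative assumptions and do not reproduce the paper's analytical derivation.

```python
import random
from itertools import accumulate
from collections import OrderedDict

def lru_fault_probability(n_pages=10_000, alpha=1.2, cache_size=500,
                          n_requests=200_000, seed=1):
    """Estimate the LRU fault (miss) probability under independent Zipf-type requests."""
    rng = random.Random(seed)
    # Zipf-type popularity: p_i proportional to 1 / i**alpha.
    cum_weights = list(accumulate(1.0 / (i ** alpha) for i in range(1, n_pages + 1)))
    requests = rng.choices(range(n_pages), cum_weights=cum_weights, k=n_requests)

    cache = OrderedDict()            # keys kept in recency order (most recent last)
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)  # hit: refresh recency
        else:
            faults += 1              # fault: fetch the page and insert it
            cache[page] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least recently used page
    return faults / n_requests

print(lru_fault_probability())
```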
143.
In this paper we discuss the performance of a document distribution model that interconnects Web caches through a satellite channel. In recent years, Web caching has emerged as an important way to reduce client-perceived latency and network resource requirements in the Internet. In addition, satellite distribution is being rapidly deployed to offer Internet services while avoiding highly congested terrestrial links. When Web caches are interconnected through a satellite distribution, the caches end up containing all documents requested by a huge community of clients. With a large community of clients connected to a cache, the probability that a given client is the first to request a document is very small, so more of the requests are hits in the cache. We develop analytical models to study the performance of such a cache-satellite distribution and derive simple expressions for the hit rate of the caches, the bandwidth on the satellite channel, the latency experienced by the clients, and the required capacity of the caches. Additionally, we use trace-driven simulations to validate our model and evaluate the performance of a real cache-satellite distribution.
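The "large community" argument can be illustrated numerically: under an independent-reference model with Zipf-type popularity (an assumption made here for illustration, not the paper's analytical model), the fraction of requests that are the first request for their document, and hence compulsory misses, shrinks as more requests from the client community reach the shared cache.

```python
# Sketch of the "large community" effect: the chance that a request is the *first*
# request for its document falls as the number of requests seen by the shared cache
# grows. Zipf popularity is an illustrative assumption, not the paper's exact model.

def first_request_fraction(n_docs, alpha, n_requests):
    """Expected fraction of requests that are the first request for their document."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_docs + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    # E[#distinct documents requested] / #requests, using P(doc i never requested) = (1-p_i)^R.
    expected_distinct = sum(1.0 - (1.0 - p) ** n_requests for p in probs)
    return expected_distinct / n_requests

for community_requests in (1_000, 10_000, 100_000):
    print(community_requests,
          round(first_request_fraction(100_000, 0.8, community_requests), 3))
```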
144.
Content-Centric Networking (CCN) is a new network architecture whose distinguishing feature is ubiquitous in-network caching; well-planned content cache placement can significantly improve transmission efficiency. The cache replacement policy is an important part of cache management, and replacing cached content sensibly is a key factor in the overall performance of the network. Taking the characteristics of the content itself into account, a Greedy-Dual Hit (GDH) cache replacement scheme based on each node's contribution to the cache hit rate is designed. The scheme jointly considers the request count, transfer cost and caching cost of each content item and defines a new multi-objective value function to assess its caching value; when cache space is insufficient, the item with the smallest value is evicted, maximizing the value of the content cached at the node. Simulation results show that the replacement algorithm improves the node hit rate and reduces the average hop count needed to retrieve content.
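A minimal sketch of the value-based eviction described above; the abstract does not give the exact multi-objective value function, so the linear weighting below is an assumption used only to illustrate the mechanism.

```python
# Sketch of value-based replacement in the spirit of GDH: each cached item is
# scored from its request count, transfer cost and caching cost, and the item
# with the smallest value is evicted when space runs out. The linear weighting
# is an illustrative assumption, not the paper's exact value function.

class ValueCache:
    def __init__(self, capacity, w_req=1.0, w_transfer=1.0, w_store=1.0):
        self.capacity = capacity
        self.w_req, self.w_transfer, self.w_store = w_req, w_transfer, w_store
        self.items = {}              # name -> dict(requests, transfer_cost, cache_cost)

    def _value(self, meta):
        # Higher request count and transfer cost favour keeping the item;
        # higher caching cost counts against it.
        return (self.w_req * meta["requests"]
                + self.w_transfer * meta["transfer_cost"]
                - self.w_store * meta["cache_cost"])

    def insert(self, name, transfer_cost, cache_cost):
        if name in self.items:
            self.items[name]["requests"] += 1
            return
        if len(self.items) >= self.capacity:
            victim = min(self.items, key=lambda n: self._value(self.items[n]))
            del self.items[victim]   # evict the least valuable content
        self.items[name] = {"requests": 1,
                            "transfer_cost": transfer_cost,
                            "cache_cost": cache_cost}

cache = ValueCache(capacity=2)
cache.insert("video/a", transfer_cost=5, cache_cost=1)
cache.insert("video/b", transfer_cost=2, cache_cost=1)
cache.insert("video/c", transfer_cost=8, cache_cost=2)   # evicts the lowest-value item
print(sorted(cache.items))
```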
145.
To overcome the problems of on-path caching schemes in content-centric networking, a coordinated caching scheme based on the node with the maximum betweenness value and the edge nodes was designed. According to the topology characteristics, popular content is identified at the node with the maximum betweenness value and tracked at the edge nodes. The on-path caching location is determined by the popularity and the cache size. Simulation results show that, compared with classical schemes, this scheme improves the cache hit ratio and decreases the average hop count, thus enhancing the efficiency of the cache system.
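A minimal sketch of the topology side of the scheme, locating the node with the maximum betweenness centrality (where popular content would be identified) and the edge nodes (where requests would be tracked); networkx and the tree topology are assumptions for illustration, and the popularity- and cache-size-driven placement rule is only hinted at.

```python
# Sketch: pick the max-betweenness node of a CCN topology as the point where
# popular content is identified, and treat degree-1 nodes as edge nodes where
# requests are tracked. networkx is an assumed dependency.

import networkx as nx

def pick_cache_roles(graph):
    centrality = nx.betweenness_centrality(graph)
    core_node = max(centrality, key=centrality.get)          # node with max betweenness
    edge_nodes = [n for n in graph.nodes if graph.degree[n] == 1]
    return core_node, edge_nodes

g = nx.balanced_tree(2, 3)   # illustrative tree topology; the leaves act as edge nodes
core, edges = pick_cache_roles(g)
print("identify popular content at node", core)
print("track requests at edge nodes", edges)
```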
146.
N., D., Y. 《Ad hoc Networks》, 2010, 8(2): 214-240
The production of cheap CMOS cameras, which are able to capture rich multimedia content, combined with the creation of low-power circuits, gave birth to what is called Wireless Multimedia Sensor Networks (WMSNs). WMSNs introduce several new research challenges, mainly related to mechanisms for delivering application-level Quality-of-Service (e.g., latency minimization). Such issues have almost completely been ignored in traditional WSNs, where the research focused on minimizing energy consumption. Towards achieving this goal, the technique of cooperatively caching multimedia content in sensor nodes can efficiently address the resource constraints, the variable channel capacity and the in-network processing challenges associated with WMSNs. The technological advances in gigabyte-storage flash memories make sensor caching the ideal solution for latency minimization. With caching, however, comes the issue of maintaining the freshness of cached contents. This article proposes a new cache consistency and replacement policy, called NICC, to address the cache consistency issues in a WMSN. The proposed policies recognize and exploit the mediator nodes that lie at the most “central” points in the sensor network, so that they can forward messages with small latency. With the utilization of mediator nodes that lie between the source node and the cache nodes, both push-based and pull-based strategies can be applied in order to minimize the query latency and the communication overhead. Simulation results attest that NICC outperforms the state-of-the-art cache consistency policy for MANETs.
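A minimal sketch of how a mediator node could split consistency maintenance between push and pull; the read-ratio threshold and the bookkeeping below are illustrative assumptions, not NICC's actual protocol.

```python
# Sketch of a mediator-based consistency split: read-heavy items have source
# updates pushed to cache nodes, while other items are validated on demand (pull).
# The threshold and message bookkeeping are illustrative assumptions, not NICC itself.

class CacheNode:
    def __init__(self):
        self.copies = {}             # item -> cached version
    def store(self, item, version):
        self.copies[item] = version

class Mediator:
    def __init__(self, push_threshold=0.5):
        self.push_threshold = push_threshold
        self.stats = {}              # item -> dict(updates, requests)
        self.cache_nodes = []        # cache nodes reachable through this mediator

    def _record(self, item, kind):
        s = self.stats.setdefault(item, {"updates": 0, "requests": 0})
        s[kind] += 1

    def on_source_update(self, item, version):
        self._record(item, "updates")
        s = self.stats[item]
        read_ratio = s["requests"] / max(1, s["updates"] + s["requests"])
        if read_ratio >= self.push_threshold:      # read-heavy item: push the new version
            for node in self.cache_nodes:
                node.store(item, version)

    def on_cache_pull(self, item, cached_version, source_version):
        self._record(item, "requests")
        return cached_version == source_version    # pull: validate freshness on demand

m, n = Mediator(), CacheNode()
m.cache_nodes.append(n)
m.on_cache_pull("clip-7", cached_version=1, source_version=1)
m.on_cache_pull("clip-7", cached_version=1, source_version=1)
m.on_source_update("clip-7", version=2)    # read-heavy item, so the update is pushed
print(n.copies)                            # {'clip-7': 2}
```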
147.
A novel cooperative distribution strategy based on caches in reconfigurable routers is proposed to accelerate streaming media. Using in-network storage, that is, cooperative caching of popular video data at multiple edge router nodes, content is served to users from nearby caches. This greatly reduces the performance requirements of the streaming media server, in particular its bandwidth demand, noticeably reduces the traffic carried over the backbone network, and clearly improves user response latency. In addition, a prototype system was implemented to evaluate the performance of the router-cache-based cooperative streaming distribution strategy; the results show that, compared with existing schemes, this scheme delivers a large improvement in network performance and user experience.
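A minimal sketch of cooperative lookup among edge routers, serving a chunk from the local cache, then from a peer's cache, and only then from the origin server; the hash-based peer selection is an assumption for illustration, not necessarily the cooperation rule used by the prototype.

```python
# Sketch of cooperative caching among edge routers for streaming chunks.
# Hash-based peer selection is an illustrative assumption.

import hashlib

class EdgeRouter:
    def __init__(self, name, peers=None):
        self.name = name
        self.cache = {}                      # chunk_id -> chunk bytes
        self.peers = peers if peers is not None else []

    def _responsible_peer(self, chunk_id):
        digest = int(hashlib.md5(chunk_id.encode()).hexdigest(), 16)
        return self.peers[digest % len(self.peers)] if self.peers else None

    def fetch(self, chunk_id, origin):
        if chunk_id in self.cache:                      # local hit
            return self.cache[chunk_id], "local"
        peer = self._responsible_peer(chunk_id)
        if peer is not None and chunk_id in peer.cache: # cooperative hit at a peer router
            data, source = peer.cache[chunk_id], "peer"
        else:
            data, source = origin[chunk_id], "origin"   # fall back to the origin server
        self.cache[chunk_id] = data                     # keep a nearby copy for later viewers
        return data, source

origin_server = {"movie/seg-01": b"...", "movie/seg-02": b"..."}
r1 = EdgeRouter("r1")
r2 = EdgeRouter("r2", peers=[r1])
print(r1.fetch("movie/seg-01", origin_server)[1])   # origin
print(r2.fetch("movie/seg-01", origin_server)[1])   # peer (already cached at r1)
```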
148.
An Online Fragment-Based Caching Method for Dynamic Web Pages
尤朝  周明辉  林泊  曹东刚  梅宏 《电子学报》2009,37(5):1087-1091
Fragment-based caching can effectively improve the quality of service of dynamic Web pages. However, few existing legacy systems were designed with fragment caching, and applying it to such systems is a major challenge. This paper proposes an online fragment-based caching method for dynamic Web pages that evolves an original system into a fragment-based system while it continues to serve users. The method has three advantages: (1) the original system evolves online, without disrupting the service it provides to users; (2) template maintenance is simplified and the granularity of logic execution is reduced from whole pages to fragments, relieving pressure on the server side; (3) it is independent of the original system and effectively supports changes and upgrades to that system. Finally, the method is implemented and evaluated; the results show that it realizes the system evolution well and improves the system's quality of service.
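A minimal sketch of fragment-level caching for a dynamic page: fragments are cached independently with their own lifetimes, so only stale fragments re-run their logic. The fragment boundaries and TTLs are illustrative assumptions, and the sketch does not cover the paper's online evolution of an existing system.

```python
import time

class FragmentCache:
    """Cache page fragments independently so only stale fragments are regenerated."""
    def __init__(self):
        self.store = {}                      # fragment_id -> (html, expires_at)

    def get(self, fragment_id, ttl, render):
        entry = self.store.get(fragment_id)
        if entry and entry[1] > time.time():
            return entry[0]                  # fragment hit: reuse the cached markup
        html = render()                      # fragment miss: run only this fragment's logic
        self.store[fragment_id] = (html, time.time() + ttl)
        return html

cache = FragmentCache()

def render_page(user):
    # The page template stitches together fragments with different lifetimes.
    header   = cache.get("header",           ttl=3600, render=lambda: "<header>site</header>")
    hot_list = cache.get("hot-items",        ttl=60,   render=lambda: "<ul>...</ul>")
    personal = cache.get(f"greeting:{user}", ttl=5,    render=lambda: f"<p>hi {user}</p>")
    return header + hot_list + personal

print(render_page("alice"))
```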
149.
Recommendation-aware Content Caching (RCC) at the edge enables a significant reduction of the network latency and the backhaul load, thereby invigorating ubiquitous latency-sensitive innovative services. However, the effectiveness of RCC strategies is highly dependent on explicit information about subscribers’ content request patterns, a sophisticated caching placement policy, and personalized recommendation tactics. In this article, we investigate how the potential of Artificial Intelligence (AI) and optimization techniques can be harnessed to address those core issues and facilitate the full implementation of RCC for the upcoming intelligent 6G era. Towards this end, we first elaborate on the hierarchical RCC network architecture. Then, the devised AI- and optimization-empowered paradigm is introduced, in which AI and optimization techniques are leveraged to predict users’ content preferences in real time from their historical behavior data and to determine the cache pushing and recommendation decisions, respectively. Through extensive case studies, we validate the effectiveness of AI-based predictors in estimating users’ content preferences and the superiority of optimized RCC policies over conventional benchmarks. Finally, we shed light on the opportunities and challenges ahead.
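A minimal sketch of the coupling between caching and recommendation: predicted per-user preferences drive both what is pushed to the edge cache and what is recommended. The frequency-based "predictor", the greedy cache fill, and the recommendation boost are placeholders for the AI and optimization components discussed in the article, not their actual algorithms.

```python
# Sketch of recommendation-aware caching: predicted preferences feed both the
# cache pushing decision and the recommendation lists. All components here are
# illustrative placeholders.

def predict_preferences(history):
    """Placeholder preference predictor: frequency of past requests per item."""
    total = sum(history.values()) or 1
    return {item: count / total for item, count in history.items()}

def plan_cache_and_recommendations(user_histories, cache_slots, rec_size, boost=1.5):
    # Aggregate predicted preferences across users to decide what to push to the edge cache.
    popularity, per_user = {}, {}
    for user, history in user_histories.items():
        prefs = predict_preferences(history)
        per_user[user] = prefs
        for item, p in prefs.items():
            popularity[item] = popularity.get(item, 0.0) + p
    cached = set(sorted(popularity, key=popularity.get, reverse=True)[:cache_slots])

    # Recommend items each user likes, slightly boosting those already cached at the edge.
    recommendations = {}
    for user, prefs in per_user.items():
        score = {item: p * (boost if item in cached else 1.0) for item, p in prefs.items()}
        recommendations[user] = sorted(score, key=score.get, reverse=True)[:rec_size]
    return cached, recommendations

histories = {"u1": {"a": 5, "b": 1, "c": 1}, "u2": {"b": 4, "c": 3}}
print(plan_cache_and_recommendations(histories, cache_slots=2, rec_size=2))
```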
150.
Mobile edge caching technology is gaining more and more attention because it can effectively improve the Quality of Experience (QoE) of users and reduce backhaul burden. This paper aims to improve the utility of mobile edge caching technology from the perspective of caching resource management by examining a network composed of one operator, multiple users and Content Providers (CPs). The caching resource management model is constructed on the premise of fully considering the QoE of users and the servicing capability of the Base Station (BS). In order to create the best caching resource allocation scheme, the original problem is transformed into a multi-leader multi-follower Stackelberg game model through the analysis of the system model. The strategy combinations and the utility functions of players are analyzed. The existence and uniqueness of the Nash Equilibrium (NE) solution are also analyzed and proved. The optimal strategy combinations and the best responses are deduced in detail. Simulation results and analysis show that the proposed model and algorithm can achieve the optimal allocation of caching resource and improve the QoE of users.
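A minimal sketch of the leader-follower structure, simplified to a single operator (leader) and several CPs (followers): the operator posts a unit price for caching resource, each CP best-responds with the amount it rents, and the operator searches a price grid for the revenue-maximizing feasible price. The concave CP utility and the grid search are illustrative assumptions, not the paper's multi-leader multi-follower equilibrium analysis.

```python
# Sketch of a simplified Stackelberg interaction for caching resources.
# The CP utility a*sqrt(x) - price*x is an illustrative assumption.

def cp_best_response(a, price):
    """Maximize a*sqrt(x) - price*x over x >= 0; the optimum is x = (a / (2*price))**2."""
    return (a / (2.0 * price)) ** 2

def operator_best_price(cp_values, capacity, price_grid):
    best = None
    for price in price_grid:
        demands = [cp_best_response(a, price) for a in cp_values]
        if sum(demands) > capacity:          # infeasible for the base station's cache capacity
            continue
        revenue = price * sum(demands)
        if best is None or revenue > best[1]:
            best = (price, revenue, demands)
    return best

cp_values = [4.0, 3.0, 2.0]                  # each CP's valuation parameter a (assumed)
price_grid = [0.5 + 0.1 * k for k in range(50)]
price, revenue, demands = operator_best_price(cp_values, capacity=20.0, price_grid=price_grid)
print(f"price={price:.1f} revenue={revenue:.2f} demands={[round(d, 2) for d in demands]}")
```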