  Paid full text   25 papers
  Free   1 paper
Chemistry   1 paper
Mathematics   2 papers
Radio & Electronics   23 papers
  2016   1 paper
  2013   2 papers
  1999   1 paper
  1998   1 paper
  1996   1 paper
  1995   3 papers
  1994   1 paper
  1993   2 papers
  1990   1 paper
  1987   1 paper
  1985   1 paper
  1983   2 papers
  1982   1 paper
  1981   3 papers
  1979   1 paper
  1978   1 paper
  1977   2 papers
  1973   1 paper
Sort order: 26 results found, search time 8 ms
1.
Commercial television was originally standardized mainly for entertainment, sports, and news delivered over the air. While a few upward-compatible changes have been added over the years, the standard has remained essentially unchanged. Moreover, no other standard has had so many products and services built around it while withstanding such a period of rapid technological change. Digital television promises to disturb the current equilibrium in consumer television and in the broader media industries. It is forcing a convergence with the simultaneous expansion of personal computing into the home and the commercialization of research networks. Digitizing television in a cost-effective manner and integrating it with computers, telecommunication networks, and consumer products will produce a large array of new products and services. The paper describes digital television, the factors affecting it, its potential, and some major issues in its evolution.
2.
We present some novel architectures for rearrangeably nonblocking multistage photonic space switches implemented using arrays of Ti:LiNbO₃ directional couplers. Multistage networks, studied mostly in the electronic domain, are obtained by minimizing the number of 2 × 2 elements needed to implement a switch. Unfortunately, straightforward extensions of these networks to the photonic domain show that the switch size is severely limited by the crosstalk in each of the Ti:LiNbO₃ 2 × 2 switching elements. Our networks, on the other hand, have a controllable (including almost zero) amount of crosstalk, low optical path loss, and an asymptotically optimal number of directional-coupler switches for a given switch size. In addition, the switch has a simple control algorithm, and its performance under light loading appears very promising. The switch is easily decomposable into smaller arrays of no more than two types, making it easy to partition the switch into chips. At the cost of a slight increase in crosstalk, the switch can be made single-fault tolerant in terms of its ability to connect any input to any output.
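The coupler-count trade-off above can be made concrete with a standard count for the Beneš network, the classic rearrangeably nonblocking multistage fabric (a sketch; the function names are ours, and the abstract's own networks differ precisely in how crosstalk accumulates across these stages):

```python
import math

def benes_couplers(n):
    """Number of 2x2 couplers in an n x n Benes network (n a power of two).

    A Benes network has 2*log2(n) - 1 stages of n/2 couplers each.
    """
    assert n >= 2 and (n & (n - 1)) == 0, "n must be a power of two"
    stages = 2 * int(math.log2(n)) - 1
    return (n // 2) * stages

def stages_traversed(n):
    """Every path crosses all stages, so worst-case crosstalk in a naive
    photonic realization accumulates over 2*log2(n) - 1 couplers."""
    return 2 * int(math.log2(n)) - 1

for n in (8, 16, 64):
    print(n, benes_couplers(n), stages_traversed(n))
```

The O(log n) growth of `stages_traversed` is what makes per-element crosstalk the limiting factor in a straightforward photonic extension.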
3.
The design, analysis, and implementation of an end-to-end transport protocol capable of high throughput, consistent with the evolving high-speed physical networks based on fiber-optic transmission lines and high-capacity switches, are presented. Unlike current transport protocols, in which changes in control/state information are exchanged between the two communicating entities only when some significant event occurs, this protocol exchanges relevant and full state information periodically and frequently. It is shown that this reduces the complexity of protocol processing by removing many of the procedures required to recover from network inadequacies such as bit errors, packet loss, and out-of-sequence packets, and makes it more amenable to parallel processing. Also, to increase channel utilization over high-speed, long-latency networks and to support datagrams, an efficient implementation of the selective-repeat method of error control is incorporated in the protocol. An implementation using a Motorola 68030-based multiprocessor as a front-end processor is described. The current implementation can comfortably handle 10-15 kpackets/s.
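The periodic full-state exchange combined with selective repeat can be sketched as follows (illustrative names only, assuming a simple windowed bitmap report rather than the paper's actual packet formats):

```python
# Sketch of periodic full-state exchange driving selective repeat.
# The receiver reports its complete window state on a timer; the sender
# derives retransmissions from the report alone, with no per-event acks.

def receiver_state(received, window_base, window_size):
    """Full state report: base of the window plus a bitmap of the window."""
    bitmap = [(window_base + i) in received for i in range(window_size)]
    return window_base, bitmap

def sender_retransmit(state, sent_upto):
    """Given a periodic state report, list sequence numbers to resend."""
    base, bitmap = state
    return [base + i for i, got in enumerate(bitmap)
            if not got and base + i <= sent_upto]

# Example: packets 0..7 sent, 2 and 5 lost in the network.
received = {0, 1, 3, 4, 6, 7}
state = receiver_state(received, window_base=0, window_size=8)
print(sender_retransmit(state, sent_upto=7))  # [2, 5]
```

Because every report carries the full window state, the sender needs no per-packet timers or event-driven recovery paths, which is the source of the reduced processing complexity claimed above.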
4.
Protocol pruning     
A communication system uses a precise set of rules, called a protocol, to define interactions among its entities. With advancing computer, transmission, and switching technology, communication systems are providing sophisticated services demanded by users over a wide area. Protocol standards include a very large number of options to take care of different service possibilities and to please all the people involved in the standards committees. Consequently, protocols have become large and complex, and their design and analysis have become a formidable task. To cope with this problem, a variety of approaches to simplify protocols have been proposed in the published literature, such as protocol projection, homomorphism, selective resolution, and many others. We have recently developed a new technique called protocol pruning. It reduces the complexity of a protocol by pruning it to keep only that part which is required for a specified subset of services. More importantly, it takes polynomial (rather than exponential) time and space in the size of the protocol specification. This makes the algorithm feasible for engineers to use on practical problems involving large and complex protocols. We describe the technique and discuss applications to the synthesis of protocol converters/gateways, protocol conformance testing, and thinning for lightweight, high-performance protocols. The technique could also be useful for protocol implementation, synthesis, validation, and verification.
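A minimal sketch of the pruning idea, assuming the protocol is given as a labeled state machine (the names and data layout are ours, and this omits the paper's full algorithm and analysis): a reachability sweep that keeps only transitions labeled by the allowed service primitives runs in time linear in the specification size, consistent with the polynomial bound claimed above.

```python
from collections import deque

def prune(transitions, start, allowed):
    """Keep only the part of a protocol machine reachable using a subset
    of service primitives.  `transitions` maps state -> [(label, next_state)].
    One pass over the specification: polynomial (here linear) time and space.
    """
    kept, seen, queue = [], {start}, deque([start])
    while queue:
        s = queue.popleft()
        for label, t in transitions.get(s, []):
            if label in allowed:
                kept.append((s, label, t))
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return kept

# Toy machine: drop the "ping" service, keep connection management and data.
fsm = {"idle": [("connect", "open"), ("ping", "idle")],
       "open": [("data", "open"), ("close", "idle")]}
print(prune(fsm, "idle", {"connect", "data", "close"}))
```

The pruned machine is itself a protocol specification, which is what makes the technique composable with converter synthesis and conformance testing.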
5.
Visual thresholds play an important role in the process of incorporating properties of the human visual system in encoding picture signals. They tell us how much the picture signal can be perturbed without the perturbations being visible to human observers. We describe psychovisual experiments to determine the amplitude thresholds at a single edge having a given slope and then present methods to incorporate the visual threshold data directly into the design of quantizers for use in Differential Pulse Code Modulation (DPCM) systems. In the first class of methods, quantizer characteristics are obtained such that the quantization error is kept below the visual threshold as determined by the slope at a picture element and either (a) the number of quantizer levels, or (b) the entropy of the quantized output is minimized. In the second class of methods, different measures of the suprathreshold quantization error are minimized for a fixed number of levels, or for a given constraint on the entropy of the quantized signal. We present empirical relationships between the various distortion measures and the subjective quality of the pictures rated on a five-point impairment scale. We then discuss the structure of the quantizers obtained by the above-mentioned optimization methods, evaluate their performance on real pictures, and compare them with the ones described in the literature.
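Criterion (a), minimizing the number of levels subject to the error staying below threshold, admits a simple greedy construction; the sketch below uses an assumed toy threshold model, not the measured psychovisual data, and the names are ours.

```python
def min_level_quantizer(threshold, lo=0.0, hi=255.0, step=0.5):
    """Greedy placement of representative levels so that the quantization
    error at every input x stays below threshold(x), using as few levels
    as the greedy covering allows (criterion (a) above, sketched)."""
    levels, x = [], lo
    while x <= hi:
        # place the representative as far ahead as the threshold at x allows
        rep = x + threshold(x)
        levels.append(rep)
        # conservatively advance past the inputs this representative covers
        x = rep + threshold(rep) + step
    return levels

# Toy threshold model: visibility threshold grows with edge amplitude
# (masking); the true thresholds come from the psychovisual experiments.
levels = min_level_quantizer(lambda x: 1.0 + 0.05 * x)
print(len(levels), levels[:3])
```

Note the companding effect: because the threshold grows with amplitude, the greedy rule automatically spaces levels more coarsely at large prediction errors.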
6.
This paper describes a technique for varying the quantization characteristics in a Differential Pulse Code Modulation (DPCM) system. A multiplier, based on already transmitted picture elements (pels) which are spatially close to the pel being coded, is used to multiply the prediction error in a DPCM system. This multiplied prediction error is quantized by a fixed quantizer. Coarseness of quantization is thus dependent on the value of the multiplier, which is adjusted on the basis of the sensitivity of human observers to quantization errors. The implementation is particularly useful in those cases where the quantizer outputs are coded using fixed-length code words. Computer simulations on a variety of pictures indicate that a good quality picture can be produced by using a quantizer having between 8 and 10 levels and a two-dimensional predictor. Similar picture quality can be obtained by a simple nonadaptive previous-element DPCM system having a quantizer with 13 to 16 levels.
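The multiplier scheme can be sketched as follows (a toy previous-element predictor and an illustrative multiplier rule, not the paper's). The key constraint the sketch illustrates is that the multiplier depends only on already-reconstructed pels, so the decoder can recompute it without side information.

```python
def dpcm_encode(pels, quantize, multiplier):
    """DPCM with an adaptive multiplier (sketch).  The prediction error is
    scaled before a fixed quantizer; reconstruction divides the scale back
    out, mirroring what the decoder would do."""
    recon, codes = [], []
    prev = 128  # assumed initial predictor value
    for x in pels:
        m = multiplier(recon)          # derived from transmitted pels only
        e = (x - prev) * m             # scaled prediction error
        q = quantize(e)                # fixed quantizer
        codes.append(q)
        prev = prev + q / m            # decoder-matched reconstruction
        recon.append(prev)
    return codes, recon

quantize = lambda e: float(round(e))   # toy uniform fixed quantizer
# Illustrative rule: quantize finely (m = 2) in flat areas, coarsely at edges.
mult = lambda recon: 2.0 if recon and abs(recon[-1] - 128) < 4 else 1.0
codes, recon = dpcm_encode([128, 130, 140, 141], quantize, mult)
print(codes)
```

With fixed-length code words the adaptation costs no extra bits, which is why the abstract singles out that case.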
7.
Interpolative picture coding refers to sending coded information about a few picture elements separated in space and interpolating all the rest of the picture elements. In this paper we consider sending coded information about picture elements separated by as large a distance as possible along a scan line. We study the effects of a few two-dimensional interpolation strategies and evaluate the usefulness of several different error criteria required to judge the faithfulness of the interpolated signal. The error criteria are motivated by our knowledge of pictorial information processing in the human visual system. Based on the picture quality and the entropy of the coded output as the criteria for judging the coding schemes, we find that error measures in which the interpolation error is filtered adaptively and compared to a varying threshold perform the best. The filter is adapted based on the spatial activity of the signal: a high-bandwidth filter for low-activity areas and a low-bandwidth filter for high-activity areas. The variation in threshold is based on the spatial masking of the interpolation error and has a high value in high-activity areas and a low value in low-activity areas. Our computer simulations indicate that, for head-and-shoulders-type pictures, it is possible, without affecting the picture quality, to reduce the entropy of the coded output by as much as 40 percent over that obtainable from a previous-element differential pulse code modulation (DPCM) system.
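The core idea of spacing coded pels as far apart as possible subject to an interpolation-error test can be sketched with a fixed threshold (the paper's error measure adapts both the filter and the threshold to local activity; the fixed-threshold simplification and the names are ours):

```python
def pick_knots(line, max_err):
    """Greedily pick pels along a scan line as far apart as possible so
    that linear interpolation between kept pels stays within max_err."""
    knots, i, n = [0], 0, len(line)
    while i < n - 1:
        j, best = i + 1, i + 1
        while j < n:
            # interpolation error at every pel strictly between i and j
            ok = all(abs(line[k] - (line[i] + (line[j] - line[i]) * (k - i) / (j - i)))
                     <= max_err for k in range(i + 1, j))
            if not ok:
                break
            best = j
            j += 1
        knots.append(best)
        i = best
    return knots

# Flat ramps survive long spans; the step near the middle forces a knot.
print(pick_knots([0, 1, 2, 3, 10, 11, 12], max_err=1.0))  # [0, 3, 4, 6]
```

Only the pels at the returned indices would be coded; the rest are reconstructed by interpolation, which is where the entropy reduction comes from.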
8.
We develop and evaluate motion-compensation schemes for predictive coding of the component color television signal. Algorithms are discussed for estimating the motion of each color component (luminance and chrominance) separately as well as in combination. Techniques for switching the predictors for individual components are proposed and simulated. Simulations show that it is sufficient to estimate the parameters of motion based only on the luminance and use them for motion-based prediction and for switching the predictors for both the luminance and chrominance. Thus, only one motion estimator and prediction switch is needed for the three components of the color signal. The compression capability of motion compensation is scene dependent; in some videoconference-type scenes, the bit rate is reduced by as much as 60 percent compared to conditional-replenishment coding.
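The conclusion that one luminance-only estimator suffices can be illustrated with a toy exhaustive block-matching search (the frame layout and names are ours, not the paper's estimator); the returned vector would then be reused unchanged for both chrominance components:

```python
def best_mv(prev, cur, block, search=1):
    """Exhaustive block-matching motion estimate on the luminance plane.
    `prev` and `cur` are 2-D lists; `block` is (top, left, height, width).
    Returns the (dy, dx) minimizing the sum of absolute differences."""
    by, bx, h, w = block
    def sad(dy, dx):
        return sum(abs(cur[by + y][bx + x] - prev[by + y + dy][bx + x + dx])
                   for y in range(h) for x in range(w))
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    return min(candidates, key=lambda v: sad(*v))

# Toy luminance frames: the scene shifts one pel to the right.
prev = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
cur = [[0] + row[:-1] for row in prev]
mv = best_mv(prev, cur, (1, 1, 2, 2))
print(mv)  # (0, -1): current block matches the previous frame one pel left
```

Reusing `mv` for the chrominance planes (scaled if they are subsampled) is the single-estimator design the simulations support.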
9.
We describe a system for storing pictures in a database and retrieving them from a remote location over a low bit-rate channel. Even with compression techniques, it takes a long time to transmit a picture, making search for a picture of desired attributes time-consuming. Our system improves the human interaction with the picture database by constructing an auxiliary text database containing a list of attributes of each picture, a hierarchical encoder-decoder, and a light pen to select the areas of picture buildup. Picture searching (or browsing) takes place in two stages: in the first stage, knowing the required picture attributes, a user selects a subset of pictures by matching attributes to the text database; in the second stage, these selected pictures are displayed hierarchically so that a low resolution picture is reproduced first and made sharper gradually. A light pen allows the user to give priority to upgrade selected areas or reject a picture. Our techniques of hierarchical coding are simple to implement. Informal tests indicate that it is much easier to browse through a picture database using this system and that the time to retrieve a picture of given attributes decreases considerably compared to sequential picture presentation.
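The hierarchical display can be sketched as a simple averaging pyramid transmitted coarsest level first (our simplification; the paper's hierarchical coder also supports light-pen-selected priority areas):

```python
def pyramid(img):
    """Build a resolution pyramid by 2x2 averaging.  Returned coarsest
    level first, i.e. in transmission order: the low-resolution picture
    arrives first and each further level sharpens it."""
    levels = [img]
    while len(img) > 1:
        img = [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
                for x in range(0, len(img[0]), 2)]
               for y in range(0, len(img), 2)]
        levels.append(img)
    return levels[::-1]

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 4, 4],
       [4, 4, 4, 4]]
for level in pyramid(img):
    print(level)
```

A browsing user can reject a picture after seeing only the first, tiny level, which is what cuts the retrieval time over sequential full-resolution presentation.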
10.
This paper develops a discrete approach to the design of planar curves that minimize cost functions dependent upon their shape. The curves designed by using this approach are piecewise linear with equal length segments and obey various types of endpoint constraints.
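One common discrete parameterization, consistent with the abstract's description though not necessarily the paper's exact formulation, represents the curve by turning angles at equal-length segments; any shape-dependent cost then becomes a function of the angles alone, and endpoint constraints become conditions on the final point.

```python
import math

def polyline(angles, seg_len=1.0, start=(0.0, 0.0)):
    """Equal-length piecewise-linear curve from cumulative turning angles
    (a sketch of one standard discrete curve parameterization)."""
    pts, (x, y), heading = [start], start, 0.0
    for a in angles:
        heading += a
        x, y = x + seg_len * math.cos(heading), y + seg_len * math.sin(heading)
        pts.append((x, y))
    return pts

def bending_cost(angles):
    """A simple shape-dependent cost: discrete bending energy."""
    return sum(a * a for a in angles)

# Three unit segments with a single right-angle turn: endpoint near (1, 2).
angles = [0.0, math.pi / 2, 0.0]
pts = polyline(angles)
print(pts[-1], bending_cost(angles))
```

Minimizing `bending_cost` over `angles` subject to a fixed final point is then a finite-dimensional optimization, which is the sense in which the design problem becomes discrete.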