Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
This paper focuses on K-receiver discrete-time memoryless broadcast channels (DM-BCs) with private messages, where the transmitter wishes to convey K private messages to K receivers. A general inner bound on the capacity region is proposed based on an exhaustive message splitting and a K-level modified Marton's coding. The key idea is to split every message into $\sum_{j=1}^{K}\binom{K}{j} = 2^K - 1$ submessages, each corresponding to a set of users who are assigned to recover them, and then send these submessages via codewords chosen from K-level structured codebooks. To guarantee joint typicality among all transmitted codewords, a sufficient condition on the subcodebooks' sizes is derived through a newly established hierarchical covering lemma, which extends the 2-level multivariate covering lemma to the K-level case with more intricate dependencies. As the numbers of auxiliary random variables and rate conditions both increase exponentially with K, the standard Fourier–Motzkin elimination procedure becomes infeasible when K is large. To tackle this problem, we obtain a closed form of the achievable rate region by exploiting an observation about disjoint unions of the sets that constitute the power set of {1, …, K}. The proposed achievable rate region allows arbitrary input probability mass functions and improves over previously known achievable (closed-form) rate regions for K-receiver (K ≥ 3) BCs.
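As a quick illustration of the splitting step, the sketch below (not from the paper) enumerates the nonempty subsets of {1, …, K} that index the submessages; the function name and the choice of K are illustrative.

```python
from itertools import combinations

def split_indices(K):
    """Enumerate the nonempty subsets of {1, ..., K}: each subset
    indexes one submessage, recovered by exactly the users it contains."""
    users = range(1, K + 1)
    return [set(s) for j in range(1, K + 1) for s in combinations(users, j)]

subsets = split_indices(3)
print(len(subsets))  # 2**3 - 1 = 7 submessages per message
print(subsets)       # [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
```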

2.
Caching is a promising technique for reducing the heavy traffic load and improving user-perceived latency in the Internet of Things (IoT). In this paper, by exploiting edge cache resources and communication opportunities in device-to-device (D2D) networks and broadcast networks, two novel coded caching schemes are proposed that greatly reduce transmission latency in the centralized and decentralized caching settings, respectively. In addition to the multicast gain, both schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by parallel transmission among the server and users. With a newly established lower bound on the transmission delay, we prove that the centralized coded caching scheme is order-optimal, i.e., it achieves the minimum transmission delay up to a constant multiplicative gap. The decentralized coded caching scheme is also order-optimal if each user's cache size is larger than a threshold that approaches zero as the total number of users tends to infinity. Moreover, theoretical analysis shows that, to reduce the transmission delay, the number of users sending signals simultaneously should be chosen according to the users' cache sizes; always letting more users send information in parallel can in fact increase the transmission delay.
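For context, the toy sketch below implements the classical centralized coded caching baseline (Maddah-Ali–Niesen-style placement and XOR multicast delivery) that such schemes build on; it omits the paper's D2D cooperation and parallel transmission, and all parameters are illustrative.

```python
from itertools import combinations

K, N, M = 3, 3, 1          # users, files, per-user cache size (in files)
t = K * M // N             # cache replication factor; here t = 1

# Placement: each file is split into C(K, t) subfiles, indexed by the
# t-subsets of users; user k caches every subfile whose index contains k.
cache = {k: {(n, S) for n in range(N)
             for S in combinations(range(K), t) if k in S}
         for k in range(K)}

# Delivery for demands d: one XOR multicast per (t+1)-subset of users;
# each user in S already caches all terms except the one it wants.
d = [0, 1, 2]
for S in combinations(range(K), t + 1):
    xor_terms = [(d[k], tuple(u for u in S if u != k)) for k in S]
    print(f"multicast to users {S}: XOR of subfiles {xor_terms}")
```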

3.
Emerging wireless technologies are envisioned to support a variety of applications that require simultaneously maintaining low latency and high reliability. Non-orthogonal multiple access techniques are one candidate for grant-free transmission, alleviating the signaling requirements of uplink transmissions. In open-loop transmissions over fading channels, in which the transmitters do not have access to the channel state information, existing approaches are prone to frequent outage events. Such outage events lead to repeated re-transmissions of duplicate information packets, penalizing latency. This paper proposes a multi-access broadcast approach in which each user splits its information stream into several information layers, each adapted to one possible channel state. This approach helps prevent outage events and improves the overall transmission latency. Based on the proposed approach, the average queuing delay of each user is analyzed for different arrival processes at each transmitter. First, for deterministic arrivals, closed-form lower and upper bounds on the average delay are characterized analytically. Secondly, for Poisson arrivals, a closed-form expression for the average delay is derived using the Pollaczek–Khinchine formula. Based on the established bounds, the proposed approach achieves a lower average delay than single-layer outage approaches. Under optimal power allocation among the encoded layers, numerical evaluations demonstrate that the proposed approach significantly reduces the average sum delay compared to traditional outage approaches, especially under high arrival rates.
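The Pollaczek–Khinchine step can be made concrete with a short sketch; the arrival and service parameters below are illustrative, not the paper's.

```python
def pk_mean_delay(lam, es, es2):
    """Mean sojourn time of an M/G/1 queue via Pollaczek-Khinchine:
    W = lam * E[S^2] / (2 * (1 - rho)),  T = E[S] + W,  rho = lam * E[S]."""
    rho = lam * es
    assert rho < 1, "queue must be stable"
    wait = lam * es2 / (2 * (1 - rho))
    return es + wait

# Example: Poisson packet arrivals at rate 0.5/slot, deterministic
# service of 1 slot (E[S] = 1, E[S^2] = 1): T = 1 + 0.5/(2*0.5) = 1.5
print(pk_mean_delay(0.5, 1.0, 1.0))
```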

4.
This paper studies Gallager's exponent for coherent multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems over gamma–gamma turbulence channels. We assume that perfect channel state information (CSI) is known at the receiver, while the transmitter has no CSI and equal power is allocated to all transmit apertures. Using Hadamard's inequality, upper bounds on the random coding exponent, the ergodic capacity, and the expurgated exponent are derived over gamma–gamma fading channels. In the high signal-to-noise ratio (SNR) regime, simpler closed-form upper-bound expressions are presented to provide further insight into the effects of the system parameters. In particular, we find that the effects of small- and large-scale fading are decoupled in the ergodic capacity upper bound in the high-SNR regime. Finally, a detailed analysis of Gallager's exponents for space-time block code (STBC) MIMO systems is presented. Monte Carlo simulation results are provided to verify the tightness of the proposed bounds.
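A minimal Monte Carlo sketch of the gamma–gamma channel model (not the paper's analytical bounds) is shown below; the scintillation parameters alpha and beta and the SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma_samples(alpha, beta, n):
    """Gamma-gamma irradiance as the product of two independent
    unit-mean Gamma variates modeling large- and small-scale
    turbulence; alpha and beta are illustrative scintillation values."""
    return (rng.gamma(alpha, 1.0 / alpha, n) *
            rng.gamma(beta, 1.0 / beta, n))

I = gamma_gamma_samples(alpha=4.0, beta=1.9, n=200_000)
snr = 10.0                              # linear average SNR (= 10 dB)
print(I.mean())                         # ~1.0 by construction
print(np.log2(1 + snr * I).mean())      # Monte Carlo ergodic capacity
```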

5.
Satellite communication is expected to play a vital role in realizing Internet of Remote Things (IoRT) applications. This article considers an intelligent reflecting surface (IRS)-assisted downlink low Earth orbit (LEO) satellite communication network, where the IRS provides additional reflective links to enhance the intended signal power. We aim to maximize the sum rate of all terrestrial users by jointly optimizing the satellite's precoding matrix and the IRS's phase shifts. However, it is difficult to directly acquire the instantaneous channel state information (CSI) and the optimal phase shifts of the IRS due to the high mobility of LEO satellites and the passive nature of the reflective elements. Moreover, most conventional solution algorithms suffer from high computational complexity and are not applicable to such dynamic scenarios. A robust beamforming design based on graph attention networks (RBF-GAT) is proposed to establish a direct mapping from the received pilots and the dynamic network topology to the satellite's and IRS's beamforming, trained offline using an unsupervised learning approach. Simulation results corroborate that the proposed RBF-GAT approach achieves more than 95% of the performance provided by the upper bound with low complexity.
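For reference, a minimal sketch of the sum-rate objective being maximized (not the RBF-GAT network itself) might look as follows; all dimensions and channel draws are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, Nt, M = 4, 8, 32   # users, satellite antennas, IRS elements (illustrative)

G = rng.normal(size=(M, Nt)) + 1j * rng.normal(size=(M, Nt))   # satellite -> IRS
hr = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))    # IRS -> users
hd = rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))  # direct links
W = rng.normal(size=(Nt, K)) + 1j * rng.normal(size=(Nt, K))   # precoder
W /= np.linalg.norm(W)                                         # power constraint
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, M))              # IRS phase shifts

def sum_rate(W, theta, sigma2=1.0):
    """Downlink sum rate: effective channel = direct + reflected path."""
    H = hd + hr @ np.diag(theta) @ G        # K x Nt effective channel
    S = np.abs(H @ W) ** 2                  # S[k, j]: power of stream j at user k
    sig = np.diag(S)
    interf = S.sum(axis=1) - sig
    return np.log2(1 + sig / (interf + sigma2)).sum()

print(sum_rate(W, theta))
```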

6.
7.
The rapid time variations and large channel estimation errors of underwater acoustic (UWA) channels mean that the channel state information (CSI) used by transmitters for adaptive resource allocation quickly becomes outdated and inaccurate, resulting in poor resource allocation efficiency. To address this issue, this paper proposes an optimization approach for imperfect CSI based on a Gauss–Markov model and a per-subcarrier channel temporal correlation (PSCTC) factor. The proposed scheme is applicable to downlink UWA orthogonal frequency division multiple access systems. The proposed PSCTC factors are measured, and their long-term stability is verified using data recorded in real-world sea tests. Simulation and experimental results show that the optimized CSI effectively mitigates the effects of the temporal variability of UWA channels, and that the resource allocation scheme using the optimized CSI achieves a higher effective throughput and a lower bit error rate than both imperfect CSI and CSI predicted by the recursive least-squares (RLS) algorithm.
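A minimal sketch of the first-order Gauss–Markov channel model underlying the approach, with an assumed correlation value standing in for the PSCTC factor:

```python
import numpy as np

def gauss_markov_csi(h0, rho, steps, rng=np.random.default_rng(2)):
    """First-order Gauss-Markov evolution of one subcarrier's channel:
    h[t+1] = rho * h[t] + sqrt(1 - rho**2) * w[t], with w ~ CN(0, 1).
    rho plays the role of a per-subcarrier temporal correlation factor."""
    h = np.empty(steps, dtype=complex)
    h[0] = h0
    for t in range(1, steps):
        w = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
        h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * w
    return h

h = gauss_markov_csi(h0=1 + 0j, rho=0.95, steps=100)
print(np.abs(h[:5]))   # gradual decorrelation from the initial estimate
```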

8.
This paper addresses the optimization of distributed compression in a sensor network with partial cooperation among sensors. The widely known Chief Executive Officer (CEO) problem, in which each sensor has to compress its measurements locally in order to forward them over capacity-limited links to a common receiver, is extended by allowing sensors to communicate with each other. This extension modifies the statistical dependencies among the involved random variables relative to the original CEO problem, so that the well-known outer and inner bounds no longer hold. Three different inter-sensor communication protocols are investigated. The successive broadcast approach allows each sensor to exploit instantaneous side information from all previously transmitting sensors. As this leads to dimensionality problems for larger networks, a sequential point-to-point communication scheme is considered that forwards instantaneous side information to only one successor. Thirdly, a two-phase transmission protocol separates the information exchange between sensors from the communication with the common receiver. Inspired by algorithmic solutions for the original CEO problem, the sensors are optimized in a greedy manner. It turns out that partial communication among sensors improves the performance significantly. In particular, the two-phase transmission can reach the performance of a fully cooperative CEO scenario in which each sensor has access to all measurements and knowledge of all channel conditions. Moreover, exchanging instantaneous side information increases robustness against bad Wyner–Ziv coding strategies, which can lead to significant performance losses in the original CEO problem.
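As a point of reference, the toy sketch below simulates the non-cooperative quantize-and-fuse baseline of the CEO setting (not the paper's greedy-optimized or cooperative schemes); all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sensors, bits = 100_000, 4, 3            # illustrative parameters

x = rng.normal(0, 1, n)                     # remote source
obs = x + rng.normal(0, 0.5, (sensors, n))  # noisy sensor observations

def quantize(y, bits, lim=3.0):
    """Uniform scalar quantizer on [-lim, lim] with 2**bits levels."""
    levels = 2 ** bits
    step = 2 * lim / levels
    idx = np.clip(np.floor((y + lim) / step), 0, levels - 1)
    return -lim + (idx + 0.5) * step

fused = quantize(obs, bits).mean(axis=0)    # fusion center averages
print(np.mean((x - fused) ** 2))            # end-to-end distortion
```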

9.
In complex network environments, there always exist heterogeneous devices with different computational capabilities. In this work, we propose a novel scalable random linear network coding (RLNC) framework based on embedded fields, so as to endow heterogeneous receivers with different decoding capabilities. In this framework, the source linearly combines the original packets over embedded fields based on a precoding matrix and then encodes the precoded packets over GF(2) before transmission to the network. After justifying the arithmetic compatibility over different finite fields in the encoding process, we derive a necessary and sufficient condition for decodability over different fields. Moreover, we theoretically study the construction of a precoding matrix that is optimal in terms of decodability. Numerical analysis in classical wireless broadcast networks illustrates that the proposed scalable RLNC not only guarantees better decoding compatibility across different fields compared with classical RLNC over a single field, but also outperforms Fulcrum RLNC in terms of decoding performance over GF(2). Moreover, we take the sparsity of the received binary coding vectors into consideration and demonstrate that, for a large enough batch size, this sparsity has little effect on the completion delay performance in a wireless broadcast network.
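A minimal sketch of the GF(2) layer of RLNC, assuming random coefficient draws and rank-based decodability (the embedded-field precoding stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def rlnc_encode_gf2(packets, n_coded):
    """Random linear network coding over GF(2): each coded packet is a
    random XOR (mod-2 combination) of the source packets."""
    coeffs = rng.integers(0, 2, (n_coded, len(packets)))
    return coeffs, coeffs @ packets % 2

def gf2_rank(A):
    """Rank over GF(2) by Gaussian elimination; full rank => decodable."""
    A = A.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]           # eliminate via row XOR
        rank += 1
    return rank

packets = rng.integers(0, 2, (8, 64))         # 8 source packets of 64 bits
coeffs, coded = rlnc_encode_gf2(packets, 12)  # a few extra coded packets
print(gf2_rank(coeffs) == len(packets))       # True w.h.p. => decodable
```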

10.
In this work, we consider the zero-delay transmission of bivariate Gaussian sources over a Gaussian broadcast channel with one-bit analog-to-digital converter (ADC) front ends. An outer bound on the conditional distortion region is derived. Focusing on the minimization of the average distortion, two types of methods are proposed to design nonparametric mappings. The first is based on jointly optimizing the encoder and decoder with an iterative algorithm. In the second method, we derive the necessary conditions for the optimal encoder and, using these conditions, design an algorithm based on gradient descent search to develop the encoder numerically. Subsequently, the characteristics of the optimized encoding mapping structure are discussed and, inspired by these, several parametric mappings are proposed. Numerical results show that the proposed parametric mappings outperform the uncoded scheme and previous parametric mappings designed for broadcast channels with infinite-resolution ADC front ends; the nonparametric mappings in turn outperform the parametric mappings. The causes of the performance differences between the two nonparametric mappings are analyzed. The average distortions of the parametric and nonparametric mappings proposed here are close to the bound for the cases with one-bit ADC front ends in low channel signal-to-noise ratio regions.
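A toy Monte Carlo sketch of the uncoded baseline with a one-bit ADC front end (the scheme the proposed mappings are compared against); the noise level is illustrative, and the conditional-mean decoder is estimated from the samples themselves.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(0, 1, n)             # Gaussian source, uncoded transmission
z = x + rng.normal(0, 0.5, n)       # AWGN channel
y = z > 0                           # one-bit ADC front end at the receiver

# MMSE decoder estimated by Monte Carlo: x_hat = E[X | Y]
xhat = np.where(y, x[y].mean(), x[~y].mean())
print(np.mean((x - xhat) ** 2))     # average distortion with a 1-bit front end
```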

11.
Valued at hundreds of billions of Malaysian ringgit, the Bursa Malaysia Financial Services Index comprises several of the strongest-performing financial constituents in Bursa Malaysia's Main Market. Although these constituents mostly reside within the large market capitalization (cap) segment, the causal influence, or intensity, that individual constituents exert on each other's performance during uncertain (or even certain) times is unknown. Thus, the key purpose of this paper is to identify and analyze each constituent's causal intensity from early 2018 (pre-COVID-19) to the end of 2021 (post-COVID-19) using Granger causality and Schreiber transfer entropy. Furthermore, network science is used to measure and visualize the fluctuating causal degree of the source and affected constituents. The results show that both the Granger causality and Schreiber transfer entropy networks detected patterns of increasing causality from pre- to post-COVID-19, but with differing causal intensities. Unexpectedly, both networks showed that the small- and mid-caps had high causal intensity during and after COVID-19. When Bursa Malaysia's sub-sectors are used for further analysis, the Insurance sub-sector rapidly increased in causality as the year progressed, making it one of the index's largest sources of causality. Even after removing large amounts of weak causal intensity, Schreiber transfer entropy was still able to detect more causal sources from the Insurance sub-sector, whilst Granger causal sources declined rapidly post-COVID-19. Using directed temporal networks to visualize temporal causal sources is demonstrated to be a powerful approach that can aid investment decision making.
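A minimal plug-in estimator of Schreiber transfer entropy for discretized return series (history length 1) might look as follows; it sketches the basic quantity only, not the paper's full network-construction pipeline.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in Schreiber transfer entropy TE(X -> Y) for discrete series
    with history length 1:
    sum over (y_next, y, x) of p(y_next, y, x) * log2 of
    p(y_next | y, x) / p(y_next | y)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]
        p_cond_self = pairs_yy[(yn, yp)] / singles_y[yp]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Example: binarized daily returns of two constituents (1 = up, 0 = down)
x = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
y = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
print(transfer_entropy(x, y))
```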

12.
This note is part of my effort to rid quantum mechanics (QM) of nonlocality. Quantum nonlocality is a two-faced Janus: one face is a genuine quantum mechanical nonlocality (defined by the Lüders projection postulate); the other is the nonlocality of the hidden-variables model that was invented by Bell. This paper is devoted to the deconstruction of the latter. The main defect of Bell's model is that it straightforwardly contradicts Heisenberg's uncertainty principle and, more generally, Bohr's complementarity principle. Thus, we do not criticize the derivation or interpretation of the Bell inequality (as has been done by numerous authors); our critique is directed against the model as such. The original Einstein-Podolsky-Rosen (EPR) argument assumed Heisenberg's principle without questioning its validity. Hence, the arguments of EPR and Bell differ crucially, and it is necessary to establish the physical ground of the aforementioned principles. This is the quantum postulate: the existence of an indivisible quantum of action given by the Planck constant. Bell's hidden-variables approach implicitly implies rejection of the quantum postulate, since the latter is the basis of the aforementioned principles.

13.
Psychotherapy involves the modification of a client's worldview to reduce distress and enhance well-being. We take a human dynamical systems approach to modeling this process, using Reflexively Autocatalytic foodset-derived (RAF) networks. RAFs have been used to model the self-organization of adaptive networks associated with the origin and early evolution of biological life, as well as the evolution and development of the kind of cognitive structure necessary for cultural evolution. The RAF approach is applicable in these seemingly disparate cases because it provides a theoretical framework for formally describing the conditions under which systems composed of elements that interact and 'catalyze' the formation of new elements collectively become integrated wholes. In our application, the elements are mental representations, and the whole is a conceptual network. The initial components, referred to as foodset items, are mental representations that are innate or were acquired through social or individual learning (of pre-existing information). The new elements, referred to as foodset-derived items, are mental representations that result from creative thought (resulting in new information). In clinical psychology, a client's distress may be due to, or exacerbated by, one or more beliefs that diminish self-esteem. Such beliefs may be formed and sustained through distorted thinking and the tendency to interpret ambiguous events as confirmation of these beliefs. We view psychotherapy as a creative collaborative process between therapist and client, in which the output is not an artwork or invention but a better-adapted worldview and approach to life on the part of the client. In this paper, we model a hypothetical albeit representative example of the formation and dissolution of such beliefs over the course of a therapist-client interaction using RAF networks. We show how the therapist is able to elicit this worldview from the client and create a conceptualization of the client's concerns. We then formally demonstrate four distinct ways in which the therapist is able to facilitate change in the client's worldview: (1) challenging the client's negative interpretations of events, (2) providing direct evidence that runs contrary to and counteracts the client's distressing beliefs, (3) using self-disclosure to provide examples of strategies one can use to defuse a negative conclusion, and (4) reinforcing the client's attempts to assimilate such strategies into their own ways of thinking. We then discuss the implications of such an approach for expanding our knowledge of the development of mental health concerns and the trajectory of therapeutic change.
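A minimal sketch of the underlying RAF computation, following the standard iterative pruning used in RAF theory; the reaction system below is a hypothetical therapist-client fragment, not the paper's model.

```python
def max_raf(reactions, food):
    """Compute the maximal RAF of a catalytic reaction system:
    repeatedly drop reactions that are uncatalyzed or whose reactants
    are not reachable from the food set. Here elements stand for mental
    representations. reactions: {name: (reactants, products, catalysts)}."""
    rs = dict(reactions)
    while True:
        # Closure: everything producible from the food set using rs.
        avail = set(food)
        changed = True
        while changed:
            changed = False
            for reac, prod, _ in rs.values():
                if set(reac) <= avail and not set(prod) <= avail:
                    avail |= set(prod)
                    changed = True
        # Keep reactions with reachable reactants and a reachable catalyst.
        keep = {name: r for name, r in rs.items()
                if set(r[0]) <= avail and avail & set(r[2])}
        if keep == rs:
            return rs
        rs = keep

reactions = {
    "r1": (["belief_A"], ["reframe_A"], ["therapist_prompt"]),
    "r2": (["reframe_A"], ["new_self_view"], ["reframe_A"]),
}
print(sorted(max_raf(reactions, {"belief_A", "therapist_prompt"})))
```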

14.
A digital signature scheme based on self-image signing is introduced in this paper. The technique is used to verify the integrity and originality of images transmitted over insecure channels. In order to protect users' medical images from being changed or modified, the images must be signed. The proposed approach uses the Discrete Wavelet Transform (DWT) to subdivide an image into four sub-bands, and the Discrete Cosine Transform (DCT) is used to embed a mark from each sub-band into another DWT sub-band according to a predetermined algorithm. To increase security, the marked image is then encrypted using Double Random Phase Encryption before transmission over the communication channel. By verifying the presence of the mark, the authenticity of the sender is established at the receiver. Authorized users' scores should, in theory, always be higher than unauthorized users' scores; if so, a single threshold can be used to distinguish between authorized and unauthorized users by separating the two sets of scores. The results are compared to those obtained using an approach that does not employ the DWT.
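A minimal sketch of the DWT/DCT embedding step under assumptions: the sub-band pairing, embedding strength, and mark derivation are placeholders, and the Double Random Phase Encryption stage is omitted.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

rng = np.random.default_rng(6)
image = rng.random((128, 128))            # stand-in for a medical image
alpha = 0.01                              # embedding strength (assumed)

# One-level DWT: four sub-bands LL, (LH, HL, HH)
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

# Embed a mark derived from LH into the DCT domain of HL (assumed pairing)
mark = np.sign(dctn(LH, norm="ortho"))
coeffs = dctn(HL, norm="ortho") + alpha * mark
HL_marked = idctn(coeffs, norm="ortho")

marked = pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")
print(np.abs(marked - image).max())       # embedding distortion stays small
```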

15.
Entropy-based measures are an important tool for studying human gaze behavior under various conditions. In particular, gaze transition entropy (GTE) is a popular method to quantify the predictability of a visual scanpath as the entropy of transitions between fixations, and it has been shown to correlate with changes in task demand or observer state. Measuring scanpath predictability is thus a promising approach to identifying viewers' cognitive states in behavioral experiments or gaze-based applications. However, GTE does not account for temporal dependencies beyond two consecutive fixations and may thus underestimate the actual predictability of the current fixation given past gaze behavior. Instead, we propose to quantify scanpath predictability by estimating the active information storage (AIS), which can account for dependencies spanning multiple fixations. AIS is calculated as the mutual information between a process's multivariate past state and its next value. It is thus able to measure how much information a sequence of past fixations provides about the next fixation, hence covering a longer temporal horizon. Applying the proposed approach, we were able to distinguish between induced observer states based on estimated AIS, providing first evidence that AIS may be used to infer user states and improve human-machine interaction.
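The GTE baseline that AIS generalizes can be written compactly; below is a plug-in sketch with an illustrative AOI sequence (the AIS estimator additionally conditions on a multivariate past state rather than a single fixation).

```python
from collections import Counter
from math import log2

def gaze_transition_entropy(fixations):
    """Plug-in gaze transition entropy of an AOI fixation sequence:
    H = -sum_i p(i) * sum_j p(j|i) * log2 p(j|i), computed from
    first-order transition counts."""
    trans = Counter(zip(fixations[:-1], fixations[1:]))
    origins = Counter(fixations[:-1])
    n = len(fixations) - 1
    h = 0.0
    for (i, j), c in trans.items():
        p_i = origins[i] / n
        p_j_given_i = c / origins[i]
        h -= p_i * p_j_given_i * log2(p_j_given_i)
    return h

scanpath = ["road", "mirror", "road", "speedo", "road", "mirror", "road"]
print(gaze_transition_entropy(scanpath))
```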

16.
Music has become a common adjunctive treatment for Alzheimer's disease (AD) in recent years. Because AD can be classified into different degrees of dementia according to its severity (mild, moderate, severe), this study investigates whether there are differences in the brain's response to music stimulation in AD patients with different degrees of dementia. Seventeen patients with mild-to-moderate dementia, sixteen patients with severe dementia, and sixteen healthy elderly participants were selected as experimental subjects. Nonlinear characteristics were extracted from 64-channel electroencephalogram (EEG) signals acquired before, during, and after music stimulation. The results showed the following. (1) At the temporal level, at both the whole-brain and sub-brain-area levels, the EEG responses of the mild-to-moderate patients differed statistically from those of the severe patients (p < 0.05). The nonlinear characteristics during the music stimulus, including permutation entropy (PmEn), sample entropy (SampEn), and Lempel–Ziv complexity (LZC), were significantly higher in both mild-to-moderate patients and healthy controls compared to pre-stimulation, while they were significantly lower in severe patients. (2) At the spatial level, the EEG responses of the mild-to-moderate patients and the severe patients showed statistical differences (p < 0.05): as the degree of dementia progressed, fewer pairs of EEG characteristics showed significant differences among brain regions under music stimulation. In this paper, we found that AD patients with different degrees of dementia had different EEG responses to music stimulation. Our study provides a possible explanation for this discrepancy in terms of the pathological progression of AD and music cognitive hierarchy theory. Our study has adjunctive implications for clinical music therapy in AD, potentially allowing for more targeted treatment. Meanwhile, the variations in the brains of Alzheimer's patients in response to music stimulation might serve as a model for investigating the neural mechanisms of music perception.
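Of the extracted features, permutation entropy is the simplest to sketch; the implementation below is a standard plug-in version with illustrative signals, not the study's processing pipeline.

```python
import numpy as np
from collections import Counter
from math import factorial, log2

def permutation_entropy(x, order=3, normalize=True):
    """Permutation entropy (PmEn) of a 1-D signal: Shannon entropy of
    the empirical distribution of ordinal patterns of length `order`."""
    patterns = [tuple(np.argsort(x[i:i + order]))
                for i in range(len(x) - order + 1)]
    counts = Counter(patterns)
    n = len(patterns)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return h / log2(factorial(order)) if normalize else h

rng = np.random.default_rng(7)
noise = rng.normal(size=1000)             # stand-in for one EEG channel
print(permutation_entropy(noise))         # ~1.0: white noise is maximally complex
print(permutation_entropy(np.sin(np.arange(1000) / 5)))  # much lower
```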

17.
Session-based recommendation aims to predict a user's next click based on the user's current and historical sessions, and can be applied to shopping websites and apps. Existing session-based recommendation methods cannot accurately capture the complex transitions between items. In addition, some approaches compress sessions into a fixed representation vector without taking into account the user's interest preferences at the current moment, limiting the accuracy of recommendations. Considering the diversity of items and users' interests, a personalized interest attention graph neural network (PIA-GNN) is proposed for session-based recommendation. This approach utilizes personalized graph convolutional networks (PGNN) to capture complex transitions between items, invoking an interest-aware mechanism to adaptively activate users' interest in different items. In addition, a self-attention layer is used to capture long-term dependencies between items when modeling users' long-term preferences. The cross-entropy loss is used as the objective function to train the model. We conduct extensive experiments on two real-world datasets, and the results show that PIA-GNN outperforms existing personalized session-aware recommendation methods.
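A toy sketch of a self-attention readout over a session's item embeddings (without the learned projection matrices a full model would use):

```python
import numpy as np

def self_attention(E):
    """Scaled dot-product self-attention over a session's item
    embeddings (rows of E), the mechanism used to capture long-term
    dependencies between items; no learned weights in this sketch."""
    d = E.shape[1]
    scores = E @ E.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ E

rng = np.random.default_rng(8)
session = rng.normal(size=(5, 16))     # 5 clicked items, 16-dim embeddings
print(self_attention(session).shape)   # (5, 16) attended representations
```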

18.
A survivable wavelength division multiplexing passive optical network enabling both point-to-point and broadcast services is presented and demonstrated. The architecture provides automatic traffic recovery against both feeder and distribution fiber link failures. In addition, it simplifies the protection design for multiple-service transmission in wavelength division multiplexing passive optical networks.

19.
Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well calibrated, an expert's credible intervals should cover the true (but unknown) values a certain percentage of the time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts' performance. A common approach to assessing experts' performance with these questions is to directly compare the stated coverage percentage with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis-testing framework. We generalize the test to an equivalence-testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that formally testing the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings.
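A minimal sketch of the two testing approaches being compared, using scipy and an assumed equivalence margin; the counts are illustrative.

```python
# Testing an expert's calibration from calibration-question results:
# an exact binomial test of observed coverage, plus a TOST-style
# equivalence test with an assumed +/- 0.10 margin.
from scipy.stats import binomtest

n, k, stated = 20, 14, 0.80     # 20 questions, 14 covered, 80% intervals

# Classical test: is the observed coverage consistent with 80%?
print(binomtest(k, n, stated).pvalue)

# Equivalence test (two one-sided tests) against the margin:
margin = 0.10
p_lower = binomtest(k, n, stated - margin, alternative="greater").pvalue
p_upper = binomtest(k, n, stated + margin, alternative="less").pvalue
print(max(p_lower, p_upper))    # small only if coverage is provably close
```

With only 20 questions the equivalence test essentially never rejects, which illustrates the power problem the abstract identifies.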

20.
The holonomic approach to controlling nitrogen-vacancy (NV) center qubits provides an elegant way of theoretically devising universal quantum gates that operate on qubits via calculable microwave pulses. There is, however, a lack of simulated results from the theory of holonomic control of quantum registers with more than two qubits describing the transition between the dark states. Considering this, we have been experimenting with the IBM Quantum Experience technology to determine the capabilities of simulating holonomic control of NV centers for three qubits, describing an eight-level system that produces a non-Abelian geometric phase. The tunability of the geometric phase via the detuning frequency is demonstrated through the higher fidelity (~85%) of three-qubit off-resonant holonomic gates over on-resonant ones. The transition between the dark states shows the alignment of the gate's dark state with the qubit's initial state; hence, decoherence of the multi-qubit system is well controlled through a π/3 rotation.
