Similar Literature
 A total of 20 similar articles were found (search time: 43 ms)
1.
The hybrid analog/digital multiple-input multiple-output (MIMO) architecture has been proposed to mitigate the challenges of millimeter wave (mmWave) communication. It exploits the large array gain while keeping power consumption reasonable. However, because a hybrid architecture has fewer radio frequency (RF) chains than antenna elements, new methods are required for its channel estimation problem. Leveraging the sparse nature of mmWave channels, compressed sensing (CS)-based channel estimation methods have been proposed. Recently, machine learning (ML)-aided methods have been investigated to improve channel estimation performance. Additionally, the Doppler effect must be considered in high-mobility scenarios, so a time-varying channel model is needed. In this article, we therefore consider time-varying channels in a multi-user mmWave hybrid MIMO system. By designing a Deep Neural Network (DNN) and defining its inputs and outputs, we introduce a novel algorithm called Deep Learning Assisted Angle Estimation (DLA-AE) to improve the estimation of the Angles of Departure/Arrival (AoDs/AoAs) of the channel paths. In addition, we propose Linear Phase Interpolation (LPI) to acquire the path gains at the data transmission instants. Simulation results show that the proposed DLA-AE and LPI methods enhance time-varying channel estimation accuracy with low computational complexity.
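
The LPI step lends itself to a compact illustration. Below is a minimal numpy sketch of linear phase interpolation between two pilot-based path-gain estimates; it is not the authors' implementation, and the function name and the two-pilot setup are illustrative assumptions.

```python
import numpy as np

def lpi_path_gains(g0, g1, num_instants):
    """Hypothetical LPI sketch: interpolate a complex path gain between two
    pilot-based estimates g0 and g1 by linearly interpolating the magnitude
    and the wrapped phase increment separately."""
    t = np.linspace(0.0, 1.0, num_instants)        # normalized time grid
    mag = (1 - t) * np.abs(g0) + t * np.abs(g1)    # linear magnitude
    d_phase = np.angle(g1 * np.conj(g0))           # phase increment in (-pi, pi]
    phase = np.angle(g0) + t * d_phase             # linear phase ramp
    return mag * np.exp(1j * phase)

# Gains of one path at two pilot instants, interpolated over 8 data symbols
print(lpi_path_gains(0.8 * np.exp(1j * 0.2), 0.7 * np.exp(1j * 0.9), 8))
```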

2.
Multicast hybrid precoding reaches a compromise among hardware complexity, transmission performance, and wireless resource efficiency in massive MIMO systems. However, system security becomes extremely challenging in the presence of eavesdroppers. Physical layer security (PLS) is an effective approach to improving transmission and security performance in multicast massive MIMO wiretap systems. In this paper, we consider a transmitter with a massive antenna array that sends a secret signal to many multiple-antenna legitimate users, while eavesdroppers attempt to intercept the information. A fractional problem aiming to maximize the sum secrecy rate is formulated to optimize secure hybrid precoding in the multicast massive MIMO wiretap system. Because this optimization model is an intractable non-convex problem, we equivalently transform the original problem into two subproblems to separately optimize the secure analog precoding and the secure digital precoding. Secure analog precoding is obtained by applying the singular value decomposition (SVD) of the secure channel. Then, employing a semidefinite program (SDP), secure digital precoding with fixed secure analog precoding is obtained to ensure the quality of service (QoS) of legitimate users and to limit the QoS of eavesdroppers. The complexity of the proposed SVD-SDP algorithm scales with the square of the number of transmit antennas, whereas that of the constant modulus precoding algorithm (CMPA) scales with its cube. Simulation results illustrate that the SVD-SDP algorithm achieves a higher sum secrecy rate than CMPA and the SVD-SVD algorithm.
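
As a rough illustration of the SVD stage only (the SDP digital stage is omitted), the sketch below builds a constant-modulus analog precoder from the phases of the dominant right singular vectors of the legitimate channel. This phase-projection step is a common construction we assume for illustration, not a detail taken from the paper.

```python
import numpy as np

def svd_analog_precoder(H, n_rf):
    """Sketch of SVD-based analog precoding: take the n_rf dominant right
    singular vectors of the (Nr x Nt) legitimate channel H and keep only
    their phases, satisfying the constant-modulus constraint of
    phase-shifter networks."""
    _, _, Vh = np.linalg.svd(H)
    V = Vh.conj().T[:, :n_rf]                        # dominant right singular vectors
    nt = H.shape[1]
    return np.exp(1j * np.angle(V)) / np.sqrt(nt)    # unit-modulus entries

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))) / np.sqrt(2)
F_rf = svd_analog_precoder(H, n_rf=4)
print(F_rf.shape, np.allclose(np.abs(F_rf), 1 / np.sqrt(64)))
```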

3.
Large-scale multiple-input multiple-output (MIMO) is a key technology of 5G communication. However, dealing with physical channels is a complex process, and machine learning techniques have not been utilized commercially because of the limited learning capabilities of traditional algorithms. We design a deep learning hybrid precoding scheme based on the attention mechanism. The method comprises two modules: channel modeling and deep-learning encoding. The channel modeling module formalizes the problem, which facilitates the subsequent method design and processing. The model design module introduces the design framework, the details, and the main training process of the model. We utilize an attention layer to extract the characteristics of inter-user interference through the output attention distribution matrix. Then, according to these characteristics, a loss-minimization objective is used to learn the optimal precoder that achieves the maximum achievable rate of the system. Under the same conditions, we compare our proposed method with the traditional unsupervised-learning-based hybrid precoding algorithm, the true-time-delay (TTD)-based phase-correction hybrid precoding algorithm, and a deep-learning-based method. Additionally, we verify the role of the attention mechanism in the model. Extensive simulation results demonstrate the effectiveness of the proposed method and prove that deep learning, with its unique feature extraction and modeling capabilities, can play a driving role in MIMO encoding and processing. This research also provides a good reference for the application of deep learning to MIMO data processing problems.
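
The attention distribution matrix mentioned above can be sketched with plain scaled dot-product attention over per-user channel feature vectors; the self-attention form and the feature dimension are our illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def user_attention(H_feat):
    """Sketch of the attention step: given per-user channel feature vectors
    (num_users x d), compute a scaled dot-product attention distribution
    matrix whose off-diagonal weights reflect inter-user coupling."""
    d = H_feat.shape[1]
    scores = H_feat @ H_feat.T / np.sqrt(d)          # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                # row-wise softmax
    return A, A @ H_feat                             # attention matrix, mixed features

rng = np.random.default_rng(1)
A, mixed = user_attention(rng.standard_normal((4, 16)))
print(np.round(A, 3))   # each row sums to 1
```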

4.
In massive multiple-input multiple-output (MIMO), obtaining accurate channel state information (CSI) after radio frequency (RF) chain reduction is very challenging due to the high channel dimensions. With the fast development of machine learning (ML), it is widely acknowledged that ML is an effective way to deal with channel models that are typically unknown and hard to approximate. In this paper, we use the low-complexity vector approximate message passing (VAMP) algorithm for channel estimation, combined with a deep learning framework for training the soft-threshold shrinkage function. Furthermore, to improve the estimation accuracy for massive MIMO channels, an optimized threshold function is proposed. This function is based on Gaussian mixture (GM) distribution modeling, and the expectation-maximization (EM) algorithm is used to recover the channel information in beamspace. The shrinkage function and the deep neural network are built on top of the VAMP algorithm to form a high-precision channel estimation algorithm. Simulation results validate the effectiveness of the proposed network.
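
For reference, the soft-threshold shrinkage at the heart of such learned VAMP/AMP variants is a one-liner; in the learned variant the threshold lam would be produced by the trained network rather than fixed as in this sketch.

```python
import numpy as np

def soft_threshold(r, lam):
    """Soft-threshold shrinkage used inside VAMP-style iterations: shrink
    complex entries toward zero by lam while preserving their phase."""
    mag = np.abs(r)
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return scale * r

r = np.array([0.05 + 0.02j, 1.0 - 0.5j, -0.3j])
print(soft_threshold(r, lam=0.1))   # small entries are zeroed, large ones shrunk
```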

5.
Sparse Bayesian learning (SBL), and particularly relevance vector machines (RVMs), have drawn much attention for improving the performance of existing machine learning models. The methodology relies on a parameterized prior that enforces weight sparsity, where only a few weights are non-zero. Wideband mmWave massive multiple-input multiple-output (mMIMO) systems with a lens antenna array (LAA) are expected to play a key role in future fifth-generation (5G) wireless systems. To provide the beamforming gain required to overcome path loss, we consider an LAA-based beamspace mMIMO system. However, the spatial-wideband effect causes beam squint, making the beamspace channel path components exhibit a unique frequency-dependent sparse structure and thus nullifying the frequency-domain common-support assumption. In this paper, we first propose a channel estimation (CE) algorithm, namely reduced-antenna-selection progressive support detection (RAS-PSD), for wideband mmWave mMIMO-OFDM systems with an LAA, which accounts for the beam squint effect. Secondly, by exploiting Bayesian learning (BL), a Gaussian-process hyperparameter-optimization-based CE (GP-HOCE) algorithm is proposed for the considered system, where the sparsity of each coefficient is governed by both its own hyperparameter and the hyperparameters of its adjacent neighbors. Simulation results show that the proposed algorithms estimate the wideband beamspace channel more accurately than existing state-of-the-art algorithms in terms of the normalized mean square error (NMSE) of CE for wideband mmWave mMIMO-OFDM systems.
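
To ground the SBL terminology, here is a minimal EM-style SBL iteration for a generic sparse recovery problem y = Phi x + n. It implements the textbook hyperparameter update, not the paper's GP-HOCE coupling between neighboring hyperparameters.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-2, n_iter=50):
    """Minimal EM iteration for SBL: each column of Phi has its own prior
    variance gamma_i; the E-step computes the weight posterior, the M-step
    re-estimates gamma. Small gamma_i prunes coefficient i, which yields
    the sparse solution."""
    n = Phi.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(1.0 / gamma) + Phi.T @ Phi / sigma2)  # posterior covariance
        mu = Sigma @ Phi.T @ y / sigma2                                     # posterior mean
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)                   # EM hyperparameter update
    return mu, gamma

rng = np.random.default_rng(2)
Phi = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[7, 42]] = [1.5, -2.0]
mu, gamma = sbl_em(Phi, Phi @ x_true + 0.05 * rng.standard_normal(40))
print(np.sort(np.argsort(gamma)[-2:]))   # indices with largest prior variance
```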

6.
Cooperative communication enhances the spectrum utilization of wireless systems without resorting to any additional equipment and ensures reliable transmission, so it has increasingly become a research focus in wireless sensor networks (WSNs). Since relay selection is crucial to cooperative communication, this paper proposes two deep reinforcement learning (DRL)-based relay selection schemes for WSNs: the Deep-Q-Network-based Relay Selection Scheme (DQN-RSS) and the Proximal Policy Optimization-based Relay Selection Scheme (PPO-RSS). They are compared with the commonly used Q-learning relay selection scheme (Q-RSS) and a random relay selection scheme. First, the cooperative communication process in WSNs is modeled as a Markov decision process, and the DRL algorithms are trained according to the outage probability and the mutual information (MI). Without instantaneous channel state information (CSI), the best relay is adaptively selected from multiple candidate relays. Since Q-RSS converges slowly in high-dimensional state spaces, the DRL algorithms are employed to handle such spaces while speeding up learning. The experimental results reveal that, under the same conditions, the random relay selection scheme always performs worst. Compared to Q-RSS, the two proposed schemes greatly reduce the number of iterations and accelerate convergence, thereby reducing the computational complexity and overhead incurred by the source node in selecting the best relay. In addition, both schemes achieve a lower outage probability, lower energy consumption, and a larger system capacity; PPO-RSS in particular offers higher reliability and practicality.
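
For intuition, the Q-RSS baseline can be sketched as tabular Q-learning in which the state is a quantized SNR level, the action is a relay index, and the reward is the achievable rate; this state/reward design is our simplifying assumption, and DQN-RSS/PPO-RSS would replace the table with neural networks.

```python
import numpy as np

def q_rss(snr_db, n_states=8, n_episodes=5000, alpha=0.1, gamma_=0.9, eps=0.1):
    """Tabular Q-learning sketch for relay selection (a Q-RSS-style
    baseline) over an i.i.d. block-fading state model."""
    n_relays = snr_db.shape[1]
    Q = np.zeros((n_states, n_relays))
    rng = np.random.default_rng(3)
    state = rng.integers(n_states)
    for _ in range(n_episodes):
        a = rng.integers(n_relays) if rng.random() < eps else int(Q[state].argmax())
        reward = np.log2(1 + 10 ** (snr_db[state, a] / 10))   # rate via relay a
        next_state = rng.integers(n_states)                   # i.i.d. fading sketch
        Q[state, a] += alpha * (reward + gamma_ * Q[next_state].max() - Q[state, a])
        state = next_state
    return Q.argmax(axis=1)   # learned relay choice per SNR level

rng = np.random.default_rng(3)
print(q_rss(rng.uniform(0, 20, size=(8, 4))))
```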

7.
We investigate a bitstream-based adaptive-connected massive multiple-input multiple-output (MIMO) architecture that trades off between the high-power fully-connected and the low-performance sub-connected hybrid precoding architectures. The proposed adaptive-connected architecture, which enables each data stream to be processed independently and in parallel, consists of fewer phase shifters (PSs) and switches than other adaptive-connected architectures. With smaller array groups, it uses fewer PSs and switches, so its power consumption gradually decreases in millimeter wave (mmWave) multiuser MIMO (MU-MIMO) systems. To fully demonstrate the performance of the proposed architecture under practical constraints, we combine the connection-state matrix with the hybrid precoders and combiners to maximize the energy efficiency (EE) of the system equipped with the proposed architecture. We then propose a hybrid precoding and combining (HPC) scheme suitable for multiple users and multiple data streams, which utilizes the SCF algorithm to obtain a constant-modulus analog precoder at convergence. In the digital precoding and combining stage, the digital precoder and combiner are designed to reduce computation by utilizing the singular value decomposition (SVD) of the corresponding equivalent channel. For the mmWave MU-MIMO-OFDM system equipped with the proposed architecture, simulation results demonstrate that, as the total number of data streams increases, the proposed HPC scheme achieves better EE than the traditional fully-connected hybrid architecture using existing schemes.

8.
Computational efficiency is a direction worth considering in mobile edge computing (MEC) systems, yet the computational efficiency of UAV-assisted MEC systems is rarely studied. In this paper, we maximize the computational efficiency of an MEC network by optimizing offloading decisions and UAV flight paths and by reasonably allocating users' charging and offloading time. Deep reinforcement learning is used to optimize the resources of the UAV-assisted MEC system in a complex urban environment, and users' computation-intensive tasks are offloaded to the UAV-mounted MEC server so that the overload of the whole system can be alleviated. We design a framework algorithm that can quickly adapt task-offloading decisions and resource allocation to changing wireless channel conditions in complex urban environments. The mapping from state space to action space for optimal offloading decisions is generated through deep reinforcement learning, and each user's charging time and offloading time are then rationally allocated to maximize the weighted-sum computation rate. Finally, a radio map is used to optimize the UAV trajectory and further improve the overall weighted-sum computation rate of the system. Simulation results show that the proposed DRL+TO framework algorithm can significantly improve the weighted-sum computation rate of the whole MEC system and save time. The MEC system resource optimization scheme proposed in this paper is thus feasible and outperforms other benchmark schemes.

9.
The key principle of physical layer security (PLS) is to permit the secure transmission of confidential data using efficient signal-processing techniques. Deep learning (DL) has emerged as a viable option to address various security concerns and to enhance the performance of conventional PLS techniques in wireless networks. DL is a powerful data exploration technique that can learn the normal and abnormal behavior of 5G-and-beyond wireless networks in an insecure channel paradigm. Moreover, since DL techniques can predict new instances by learning from existing ones, they can predict new attacks, which frequently involve mutations of earlier attacks. Motivated by the benefits of DL and PLS, this survey provides a comprehensive review of how DL-based PLS techniques can be employed to solve various security concerns in 5G and beyond networks. The survey begins with an overview of physical layer threats and security concerns in 5G and beyond networks. Then, we present a detailed analysis of various DL and deep reinforcement learning (DRL) techniques applicable to PLS, together with the specific use-cases of PLS design for each type of technique, including attack detection, physical layer authentication (PLA), and other PLS techniques. We then present an in-depth overview of the key areas of PLS where DL can be used to enhance the security of wireless networks, such as automatic modulation classification (AMC), secure beamforming, and PLA. Performance evaluation metrics for DL-based PLS design are subsequently covered. Finally, we provide readers with insights into various challenges and future research trends in the design of DL-based PLS for terrestrial communications in 5G and beyond networks.

10.
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work, we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering the underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer approaches the optimal performance limits dictated by indirect rate-distortion theory, which are achievable using vector quantizers and require complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, we demonstrate that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task.
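
For context, the scalar ADC building block that the learned analog pre-processing wraps around can be modeled as a uniform mid-rise quantizer. This sketch shows only that fixed mapping, with the dynamic range v_max as an illustrative parameter.

```python
import numpy as np

def scalar_adc(x, n_bits, v_max=1.0):
    """Uniform mid-rise scalar ADC with 2**n_bits levels over [-v_max, v_max].
    In the paper's hybrid system, a learned analog mapping would precede
    this quantizer; here we model only the quantizer itself."""
    n_levels = 2 ** n_bits
    delta = 2 * v_max / n_levels                   # step size
    q = delta * (np.floor(x / delta) + 0.5)        # mid-rise rule
    return np.clip(q, -v_max + delta / 2, v_max - delta / 2)

x = np.linspace(-1.2, 1.2, 7)
print(scalar_adc(x, n_bits=3))   # saturates outside [-1, 1]
```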

11.
6G, the sixth generation, is the latest cellular technology currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields, such as healthcare, transportation, energy, and autonomous cars. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns about artificial intelligence (AI) models have so far been largely ignored by the scientific community. However, security is a vital aspect of AI algorithms because attackers can poison the AI model itself. This paper proposes a method for mitigating adversarial attacks against proposed 6G ML models for millimeter-wave (mmWave) beam prediction using adversarial training. The main idea behind such attacks is to manipulate trained DL models for 6G mmWave beam prediction so that they produce faulty results. We evaluate the performance of the proposed adversarial-training mitigation method for mmWave beam prediction under a fast gradient sign method (FGSM) attack. The results show that the mean square error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.
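
The FGSM attack named above has a standard closed form: perturb the input by eps in the sign of the loss gradient. The sketch below uses a linear stand-in predictor so that the gradient is analytic; a real beam-prediction DNN would obtain it via backpropagation, and all names here are illustrative.

```python
import numpy as np

def fgsm(x, y, W, eps):
    """FGSM sketch against a linear predictor y_hat = W @ x with MSE loss:
    move each input feature by eps in the direction of the loss gradient."""
    grad_x = 2 * W.T @ (W @ x - y)       # d/dx ||Wx - y||^2
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(4)
W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
y = W @ x + 0.1 * rng.standard_normal(8)   # noisy beam targets
x_adv = fgsm(x, y, W, eps=0.05)
print(np.sum((W @ x - y) ** 2), np.sum((W @ x_adv - y) ** 2))  # loss grows under attack
```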

12.
To reduce the backhaul link pressure of wireless networks, edge caching has been regarded as a promising solution. However, with massive and dynamic communication connections, it is challenging to provide an analytical caching solution that achieves the best performance, particularly when the requested contents change and their popularities are unknown. In this paper, we propose a deep Q-network (DQN) method to address caching placement. Considering a content caching network containing multiple cooperating small base stations (SBSs) with unknown content popularity, we need to determine which content to cache and where to cache it. The learning network is therefore designed for dual aims: estimating the content popularities and assigning contents to the proper channels. An elaborate DQN is proposed to decide which contents to cache under the limited storage space of the base stations while considering channel conditions. Specifically, the content requests of users are first collected as one set of training samples for the learning network. Second, the channel state information of the massive links is estimated as the other set of training samples. Then, we train the network with the proposed method, thereby improving the spectral efficiency of the entire system and reducing the bit-error rate. Our major contribution is to devise a caching strategy that enhances performance in massive-connection communications without knowing the content popularity. Numerical studies show that the proposed method achieves an apparent performance gain over random caching in terms of the average spectral efficiency and bit-error rate of the network.

13.
Massive multiple-input multiple-output (MIMO) is a key technology for modern wireless communication systems. In massive MIMO receivers, data detection is a computationally expensive task. In this paper, we explore the performance and computational complexity of matrix-decomposition-based detectors in realistic channel scenarios for different massive MIMO configurations. In addition, data detectors based on decomposition algorithms are compared to approximate-inversion detection (AID) methods. It is shown that the alternating-direction-method-of-multipliers-based infinity-norm (ADMIN) detector is promising in realistic channel environments, with stable performance even when the ratio of base-station (BS) antenna elements to users is small. This paper also studies the performance of several detectors under imperfect channel state information (CSI) and correlated channels. Our work provides valuable insights that help massive MIMO system and very-large-scale integration (VLSI) designers select the appropriate massive MIMO detector for their specifications.
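
As a point of reference for decomposition-based detection, the sketch below implements the classic MMSE detector via a Cholesky factorization of the Gram matrix; it is the generic baseline such studies compare against, not the ADMIN algorithm itself.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def mmse_detect(H, y, sigma2):
    """Decomposition-based MMSE detection sketch: solve
    (H^H H + sigma2 I) x = H^H y with a Cholesky factorization instead of
    an explicit inverse -- the step that approximate-inversion (AID)
    detectors try to avoid."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])   # Gram matrix, Hermitian PD
    return cho_solve(cho_factor(A), H.conj().T @ y)

rng = np.random.default_rng(5)
H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
x = np.sign(rng.standard_normal(8)) + 1j * np.sign(rng.standard_normal(8))  # QPSK-like symbols
y = H @ x + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(np.round(mmse_detect(H, y, sigma2=0.02), 2))
```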

14.
We consider an intelligent reflecting surface (IRS)-assisted wireless powered communication network (WPCN) in which a multi-antenna power beacon (PB) sends a dedicated energy signal to a wireless-powered source. The source first harvests energy and then uses it to send an information signal to a destination, where external interference may also be present. For the considered system model, we formulate an optimization problem whose objective is to maximize the throughput by jointly optimizing the energy harvesting (EH) time and the IRS phase-shift matrices. The problem is high-dimensional and non-convex, so a good-quality solution can be obtained by invoking a state-of-the-art algorithm such as the genetic algorithm (GA). GA generally performs remarkably well, but it incurs high computational complexity. To this end, we propose a deep unsupervised learning (DUL)-based approach in which a neural network (NN) is trained very efficiently, since the time-consuming task of labeling a data set is not required. Numerical examples show that our proposed approach achieves a better performance-complexity trade-off: it is not only several times faster but also provides almost the same or even higher throughput than the GA.

15.
Conventional optimization-based relay selection for multihop networks cannot resolve the conflict between performance and cost. The optimal selection policy is centralized and requires local channel state information (CSI) of all hops, leading to high computational complexity and signaling overhead, while other optimization-based decentralized policies cause non-negligible performance loss. In this paper, we exploit the benefits of reinforcement learning for relay selection in multihop clustered networks, aiming at high performance with limited cost. The multihop relay selection problem is modeled as a Markov decision process (MDP) and solved by a decentralized Q-learning scheme with a rectified update function. Simulation results show that this scheme achieves a near-optimal average end-to-end (E2E) rate. Cost analysis reveals that it also reduces computational complexity and signaling overhead compared with the optimal scheme.

16.
This paper investigates energy-efficient relay precoder design in multiple-input multiple-output cognitive relay networks (MIMO-CRNs). This is a non-convex fractional programming problem, traditionally solved using computationally expensive optimization methods. We propose a deep learning (DL)-based approach to compute an approximate solution: a deep neural network (DNN) is employed and trained on offline-computed optimal solutions. The proposed scheme consists of an offline data generation phase, an offline training phase, and an online deployment phase. Numerical results show that the proposed DNN provides comparable performance at significantly lower computational complexity than the conventional optimization-based algorithm, which makes the proposed approach suitable for real-time implementation.
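
The offline-training/online-deployment pattern can be sketched in a few lines: generate (parameters, optimal solution) pairs with the expensive solver offline, fit a model to them, and deploy the cheap forward pass online. A linear model trained by gradient descent stands in for the paper's DNN here, and all names are illustrative.

```python
import numpy as np

def train_surrogate(X, Y, lr=0.05, n_epochs=500):
    """Offline training sketch: fit a model mapping problem parameters X
    (e.g., channel features) to offline-computed optimal precoder features
    Y by gradient descent on the mean squared error."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_epochs):
        grad = X.T @ (X @ W - Y) / len(X)   # gradient of 0.5 * mean squared error
        W -= lr * grad
    return W

rng = np.random.default_rng(6)
X = rng.standard_normal((200, 10))
W_true = rng.standard_normal((10, 4))
Y = X @ W_true                      # "optimal solutions" from the offline solver
W = train_surrogate(X, Y)
print(np.abs(W - W_true).max())     # online phase: predict with X_new @ W
```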

17.
To fully attain the array gains of massive multiple-input multiple-output (MIMO) and its energy and spectral efficiency, deriving channel state information (CSI) at the base station (BS) is essential. However, CSI estimation in frequency-division duplex (FDD)-based massive MIMO is a challenging task owing to the required pilots, which are proportional to the number of antennas at the BS. The pilot overhead must therefore be mitigated in the downlink channel estimation of the FDD technique. In this paper, we propose a novel compressed sensing (CS) algorithm that takes advantage of the correlation between the received and transmitted signals to estimate the channel with high precision and, moreover, to reduce the computational complexity imposed on the BS. The main idea behind the proposed algorithm is to select a specific number of the largest correlations as a common support in each iteration. Simulation results indicate that the proposed algorithm estimates the downlink channel better than counterpart algorithms in terms of mean square error (MSE) and computational complexity. Meanwhile, the complexity of the proposed method grows only linearly as the number of BS antennas increases.
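
The correlation-and-support idea reads like a generalized-OMP-style loop; the sketch below follows that reading (our assumption, since the abstract does not spell out the recursion): correlate the residual with the sensing matrix, keep the k largest correlations as new support entries, and re-fit by least squares.

```python
import numpy as np

def greedy_cs_estimate(Phi, y, k_per_iter=2, n_iter=4):
    """Greedy CS sketch: per iteration, add the indices of the k largest
    residual correlations to the support, then least-squares re-fit on
    that support and update the residual."""
    support, residual = [], y.copy()
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        corr = np.abs(Phi.conj().T @ residual)             # correlation with residual
        new = [i for i in np.argsort(corr)[::-1] if i not in support][:k_per_iter]
        support.extend(new)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat[:] = 0; x_hat[support] = coef
        residual = y - Phi @ x_hat
    return x_hat

rng = np.random.default_rng(7)
Phi = (rng.standard_normal((32, 128)) + 1j * rng.standard_normal((32, 128))) / 8
h = np.zeros(128, dtype=complex); h[[5, 60, 99]] = [1, -1j, 0.5]
print(np.flatnonzero(np.round(greedy_cs_estimate(Phi, Phi @ h), 2)))
```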

18.
This paper proposes a resource allocation scheme for hybrid multiple access involving both orthogonal multiple access and non-orthogonal multiple access (NOMA) techniques. The proposed scheme employs multi-agent deep reinforcement learning (MA-DRL) to maximize the sum-rate of all users. More specifically, the MA-DRL-based scheme jointly allocates subcarrier and power resources by utilizing deep Q-networks and multi-agent deep deterministic policy gradient networks, and an adaptive learning determiner mechanism is introduced to achieve better sum-rate performance. However, the above deep reinforcement learning cannot quickly optimize its parameters in a new communication model. To better adapt to new environments and make the resource allocation strategy more robust, we propose a transfer learning scheme based on deep reinforcement learning (T-DRL), which allows the subcarrier allocation network and the power allocation network to be transferred collectively or independently. Simulation results show that the proposed MA-DRL-based resource allocation scheme achieves better sum-rate performance, and that the T-DRL-based scheme effectively improves the convergence speed of the deep resource allocation network.

19.
The emergence of ever more computation-intensive applications has imposed higher requirements on the spectrum and energy efficiency (EE) of Internet of Things (IoT) wireless networks. Massive multiple-input multiple-output (MIMO) is utilized to gain spectral efficiency as an important part of wireless systems; however, the hardware power consumption remarkably lowers massive MIMO performance. Reconfigurable intelligent surface (RIS) technology can solve this problem well, since it not only provides higher array gain but also reduces energy consumption and hardware cost. In this article, we study the joint optimization of beamforming, RIS phase shifts, and energy harvesting of IoT devices to maximize the EE of a multiple-input single-output downlink system with multiple IoT devices and an energy harvesting device. Unlike existing works that focus on ergodic capacity with known statistical channel information of the BS-RIS-device link, we suppose that the statistical information of the RIS-device link is known. Mathematically, the joint optimization problem is cast as a challenging non-convex one. To this end, based on successive convex approximation, we split the original problem into two parts and provide two heuristic schemes to tackle them, respectively. An iterative scheme integrating the two heuristic algorithms is then proposed to obtain a feasible solution in polynomial time. Finally, the proposed scheme is verified to be effective through simulations.

20.
In centralized massive multiple-input multiple-output (MIMO) systems, the channel hardening phenomenon can occur, in which the channel behaves almost deterministically as the number of antennas increases. In a cell-free massive MIMO system, however, the channel is less deterministic. In this paper, we propose using instantaneous channel state information (CSI) instead of statistical CSI to obtain the power control coefficients in cell-free massive MIMO. Access points (APs) and user equipment (UE) have sufficient time to obtain instantaneous CSI in a slowly time-varying channel environment. We derive the achievable downlink rate under instantaneous CSI for frequency-division duplex (FDD) cell-free massive MIMO systems and apply the results to the power control coefficients. For FDD systems, quantized channel coefficients are proposed to reduce the feedback overhead. Simulation results show that the spectral efficiency achieved using instantaneous CSI is approximately three times higher than that achieved using statistical CSI.
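
The channel hardening contrast that motivates the paper is easy to visualize: for an i.i.d. Rayleigh channel, the normalized gain ||h||^2 / M concentrates around its mean as the antenna count M grows. This toy numpy check illustrates only that effect, not the paper's cell-free setting.

```python
import numpy as np

def hardening_spread(m_antennas, n_trials=2000, seed=8):
    """Channel hardening sketch: standard deviation of the normalized gain
    ||h||^2 / M for an i.i.d. unit-variance Rayleigh channel of length M.
    The spread shrinks roughly as 1/sqrt(M) as M grows."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((n_trials, m_antennas)) +
         1j * rng.standard_normal((n_trials, m_antennas))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1) / m_antennas
    return gain.std()

for m in (4, 64, 1024):
    print(m, round(hardening_spread(m), 4))
```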
