Similar Documents
16 similar documents found.
1.
The power amplifier is one of the key components of an electronic communication system. Its output signal may be nonlinearly distorted relative to the input, which introduces unwanted interference, so studying the mechanism and taking measures to mitigate it is of real importance. Using measured data from a memoryless nonlinear power amplifier and from a nonlinear power amplifier with memory, mathematical models are built for each; a predistorter placed in front of the amplifier is then used to improve its nonlinear characteristics. The modeling of the predistorter itself is also studied, and the predistorter is constructed with an indirect learning architecture. Simulation results show that the nonlinear amplifier models combined with the predistorter model closely reproduce the measured behavior and effectively suppress out-of-band spectral regrowth.
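A minimal sketch of the kind of model described here, assuming a memory-polynomial form fitted by least squares and the indirect-learning arrangement (a postdistorter trained on the output/input pair, then copied to the predistorter). The nonlinearity order K, memory depth Q, and gain G are illustrative choices, not values from the paper.

```python
import numpy as np

def mp_basis(x, K, Q):
    """Memory-polynomial regressors x(n-q) * |x(n-q)|**(k-1) as matrix columns."""
    N = len(x)
    cols = []
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

def fit_mp(x, y, K=5, Q=2):
    """Least-squares fit of memory-polynomial coefficients mapping x to y."""
    coeffs, *_ = np.linalg.lstsq(mp_basis(x, K, Q), y, rcond=None)
    return coeffs

# Indirect learning (hypothetical variable names): train a postdistorter that maps
# pa_out / G back to pa_in, then reuse its coefficients as the predistorter.
#   c_pd = fit_mp(pa_out / G, pa_in)
#   x_pd = mp_basis(x_new, 5, 2) @ c_pd   # predistorted drive signal
```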

2.
Signal distortion caused by power-amplifier nonlinearity is one of the common problems in the electronics and information industry. Inserting a predistortion module in front of the power amplifier mitigates the distortion effectively and is currently one of the most studied remedies. Starting from this point, the input-output characteristic curves of a memoryless power amplifier and of a power amplifier with memory are studied and represented mathematically with a piecewise-linear model and a memory-polynomial model, respectively; a predistortion model and predistortion function are then constructed, and the complete system with the predistortion module is simulated and its evaluation metrics computed. Finally, a Burg-method power spectral density analysis is carried out for the amplifier with memory, and the ACPR is computed and analyzed.
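As a rough illustration of the ACPR metric mentioned here, the sketch below integrates a PSD estimate over the main and adjacent channels. It uses SciPy's Welch estimator rather than the Burg method of the paper, and the channel bandwidth and offset are parameters the caller must supply.

```python
import numpy as np
from scipy.signal import welch

def acpr_db(y, fs, ch_bw, ch_offset):
    """Adjacent-channel power ratio (dB) from a two-sided Welch PSD estimate."""
    f, pxx = welch(y, fs=fs, nperseg=4096, return_onesided=False)
    order = np.argsort(f)                      # put frequencies in ascending order
    f, pxx = f[order], pxx[order]
    df = f[1] - f[0]

    def band_power(f_lo, f_hi):
        m = (f >= f_lo) & (f < f_hi)
        return pxx[m].sum() * df

    main = band_power(-ch_bw / 2, ch_bw / 2)
    adjacent = band_power(ch_offset - ch_bw / 2, ch_offset + ch_bw / 2)
    return 10 * np.log10(adjacent / main)
```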

3.
For the nonlinear characteristics and predistortion modeling of a memoryless power amplifier, a polynomial model, a polar-coordinate Saleh model, and a model based on orthogonal trigonometric functions are first established and solved in MATLAB; the characteristic-function expression of the memoryless polynomial predistorter and its least-squares solution are then given. For the amplifier with memory, a memory-polynomial model is first built and solved; a corresponding memory-polynomial predistortion model is then established and solved by least squares, and an adaptive predistortion model that jointly accounts for the amplifier characteristics and the amplitude range of the input signal is proposed. Finally, the power spectral densities of the given input signal, the output signal, and the output of the linearized system with predistortion are computed, and the out-of-band distortion parameter ACPR of the channel is calculated and compared. The results show that adding predistortion greatly improves system performance and markedly strengthens linearity.
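The polar Saleh model mentioned here has a standard closed form for the AM/AM and AM/PM conversions; the sketch below uses the classic travelling-wave-tube example parameters from the literature, not coefficients fitted to the competition data.

```python
import numpy as np

def saleh_pa(x, alpha_a=2.1587, beta_a=1.1517, alpha_p=4.0033, beta_p=9.1040):
    """Saleh model: amplitude (AM/AM) and phase (AM/PM) distortion of complex baseband x."""
    r = np.abs(x)
    am = alpha_a * r / (1 + beta_a * r**2)        # AM/AM conversion
    pm = alpha_p * r**2 / (1 + beta_p * r**2)     # AM/PM conversion (radians)
    return am * np.exp(1j * (np.angle(x) + pm))
```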

4.
Power amplifiers in electronic devices are often accompanied by nonlinear distortion effects. To address this, the distortion characteristics of memoryless and memory power amplifiers are studied and the least-squares method is used to construct characteristic fitting functions of several forms. On the basis of the best-performing fitting function, a predistortion model is built under the practical constraints so that the output after predistortion tends toward linearity. Finally, the compensation effect of the predistortion model is checked from the viewpoint of the signal's power spectral density, demonstrating that the model is feasible and accurate.
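One simple form such a least-squares characteristic fit can take is an odd-order polynomial AM/AM curve; the orders below are illustrative, not the form selected in the paper.

```python
import numpy as np

ODD_ORDERS = (1, 3, 5, 7)   # illustrative choice of odd polynomial orders

def fit_amam(x_amp, y_amp, orders=ODD_ORDERS):
    """Least-squares fit |y| = sum_k b_k * |x|**k over the chosen odd orders."""
    Phi = np.column_stack([x_amp**k for k in orders])
    b, *_ = np.linalg.lstsq(Phi, y_amp, rcond=None)
    return b

def eval_amam(b, x_amp, orders=ODD_ORDERS):
    """Evaluate the fitted AM/AM characteristic at amplitudes x_amp."""
    return sum(bk * x_amp**k for bk, k in zip(b, orders))
```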

5.
Broadband mobile communication is changing people's lives. As an essential link in information transmission, power amplification of the signal plays a key role in wireless communication. Problem B of the 10th National Graduate Mathematical Contest in Modeling, "Nonlinear characteristics of power amplifiers and predistortion modeling," uses a mathematical-modeling approach to identify behavioral models of power-amplifier nonlinear distortion and to reduce the distortion through predistortion. The development of communication systems requires techniques that compensate for the inherent nonlinearity of power devices and guarantee both the energy-conversion efficiency and the linearity of the amplifier. As devices evolve and data rates increase, the mathematical models for this problem, namely predistortion algorithms, keep being updated and developed. Introducing this kind of problem into the graduate modeling contest broadens students' thinking, stimulates their interest in applying mathematics, and improves their ability to solve practical engineering problems with mathematical models; at the same time, their active thinking can yield valuable innovations, enrich the existing families of algorithms, and provide new ideas and methods for domestic enterprises working on such problems, which is of great significance.

6.
Data envelopment analysis (DEA) is an effective method for evaluating the relative efficiency of decision-making units (DMUs), and its cross-efficiency evaluation can be used to discriminate among and rank the DMUs. To address the non-uniqueness of cross-efficiency values in the original model, the idea of grey relational analysis is incorporated: an ideal DMU is constructed, the grey relational degree between each DMU and the ideal DMU is defined, and an optimization model that maximizes this grey relational degree is built to compute the optimal weights of the input and output indicators. From these weights, the cross-efficiency values of the DMUs are obtained and a complete ranking is achieved. A numerical example verifies the effectiveness and practicality of the model.
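A minimal sketch of the grey relational degree used for the ranking, assuming the indicator matrix has already been normalized and using the conventional distinguishing coefficient of 0.5; the weight-optimization model itself is not shown.

```python
import numpy as np

def grey_relational_degree(data, ideal, zeta=0.5):
    """Grey relational degree of each DMU (row of data) to the ideal DMU.

    data  : (n_dmu, n_indicator) normalized indicator matrix
    ideal : (n_indicator,) ideal-DMU sequence
    zeta  : distinguishing coefficient, conventionally 0.5
    """
    delta = np.abs(data - ideal)                          # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + zeta * d_max) / (delta + zeta * d_max)  # relational coefficients
    return xi.mean(axis=1)                                # equal-weight relational degree
```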

8.
A Malmquist DEA method based on the parameters of an SV-AJD model is constructed for evaluating the dynamic efficiency of investment portfolios. The model incorporates net-asset-value-related and non-net-asset-value-related indicators that traditional DEA models do not consider, and the eigenvector method is used to weight five input indicators and five output indicators, aggregating them into a single input and a single output to overcome the model's multidimensional distortion. In the empirical study, 172 funds in 13 categories are examined under bull-market and bear-market scenarios...
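If the eigenvector method here refers, as is common, to taking the principal eigenvector of a positive pairwise-comparison matrix (an assumption on our part), the weighting step looks roughly like this:

```python
import numpy as np

def eigenvector_weights(pairwise):
    """Weights from the principal eigenvector of a positive pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Aggregating five indicators into one composite indicator (illustrative names):
#   composite = indicators @ eigenvector_weights(comparison_matrix)
```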

9.
The quantized output-feedback control design problem is studied for a class of uncertain nonlinear systems. Unlike the existing literature, the nonlinear growth of the systems considered depends on unmeasurable states, which makes the observer design methods in the literature inapplicable and the closed-loop performance analysis more complex and difficult. A new high-gain observer is first introduced; a quantized output-feedback controller is then constructed using the set-valued-map and iterative design methods from the literature; finally, by means of the cyclic small-gain theorem and a dynamic quantization strategy, sufficient conditions are obtained that guarantee boundedness of all closed-loop states and that the output eventually becomes arbitrarily small.

10.
The radial efficiency measures of the traditional DEA models (CCR, BCC) are incomplete: they measure input efficiency and output efficiency only separately, and the efficiency measure ignores nonzero input and output slacks. The modified Russell measure removes these drawbacks. Based on the modified Russell measure and using the credibility approach, a fuzzy DEA model is established and solved. A numerical example supports the method.
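For contrast with the modified Russell measure, the classical radial CCR model that the abstract criticizes can be written as a small linear program; the sketch below is the standard input-oriented envelopment form, not the fuzzy credibility model of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR radial efficiency of DMU j0.

    X : (m, n) inputs, Y : (s, n) outputs, one column per DMU.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # decision variables [theta, lambda]
    A_ub = np.block([[-X[:, [j0]], X],               # X lam - theta * x0 <= 0
                     [np.zeros((s, 1)), -Y]])        # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]
```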

11.
Compared with the classical local gradient flow and phase field models, nonlocal models such as the nonlocal Cahn–Hilliard equation equipped with a nonlocal diffusion operator can describe phase-transition phenomena more realistically. In this paper, we construct an accurate and efficient scalar auxiliary variable approach for the nonlocal Cahn–Hilliard equation with a general nonlinear potential. The first contribution is a careful and rigorous proof of unconditional energy stability for the nonlocal Cahn–Hilliard model and its semi-discrete schemes. Second, the nonlocality of the diffusion term makes the stiffness matrix almost full, which entails a huge computational workload and memory requirement. For spatial discretization by the finite difference method, we find that the discretization of the nonlocal operator leads to a block-Toeplitz–Toeplitz-block matrix after applying four transformation operators. Based on this special structure, we present a fast procedure that reduces the computational work and memory requirement. Finally, several numerical simulations are presented to verify the accuracy and efficiency of the proposed schemes.
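The fast procedure relies on the Toeplitz structure; as a simplified one-dimensional illustration of the idea (the paper works with the full two-dimensional block-Toeplitz–Toeplitz-block case), a Toeplitz matrix-vector product can be carried out in O(n log n) by circulant embedding and the FFT. This sketch is not the authors' code.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x via FFT.

    The n x n Toeplitz matrix is embedded in a 2n x 2n circulant, so the product
    costs O(n log n) instead of O(n^2); real-valued kernels are assumed here.
    """
    c, r, x = (np.asarray(v, dtype=float) for v in (c, r, x))
    n = len(c)
    emb = np.concatenate([c, [0.0], r[:0:-1]])          # first column of the circulant
    fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
    return np.fft.ifft(np.fft.fft(emb) * fx)[:n].real

# Sanity check against the dense product:  scipy.linalg.toeplitz(c, r) @ x
```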

12.
This study investigates the potential of Time Lag Recurrent Neural Networks (TLRN) for modeling the daily inflow into Eleviyan reservoir, Iran. TLRN are extended with short-term memory structures that have local recurrent connections, making them an appropriate model for processing temporal (time-varying) information. The daily inflow into Eleviyan reservoir between 2004 and 2007 was considered. To compare the performance of TLRN, a back-propagation neural network was used. The TLRN model with a gamma memory structure and an 8-2-1 architecture (eight input nodes, two hidden nodes, one output node) was found to perform best of the three models used in forecasting daily inflow. A comparison with the back-propagation neural network suggests that neither approach forecasts high inflows well, but both perform well for low inflow values. However, statistical tests suggest that both the TLRN and back-propagation models reproduce basic statistics similar to those of the actual data.
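The gamma memory mentioned here is a short cascade of leaky integrators whose taps feed the network; the sketch below shows only that memory structure, with an illustrative depth and memory parameter, not the trained TLRN itself.

```python
import numpy as np

def gamma_memory(u, K=4, mu=0.5):
    """Gamma short-term memory: K cascaded leaky integrators over the input series u.

    g_0(t) = u(t);  g_k(t) = (1 - mu) * g_k(t-1) + mu * g_{k-1}(t-1)
    Returns a (len(u), K + 1) matrix of memory taps usable as network inputs.
    """
    T = len(u)
    g = np.zeros((T, K + 1))
    g[:, 0] = np.asarray(u, dtype=float)
    for t in range(1, T):
        for k in range(1, K + 1):
            g[t, k] = (1 - mu) * g[t - 1, k] + mu * g[t - 1, k - 1]
    return g
```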

13.
Test problems for the nonlinear Boltzmann and Smoluchowski kinetic equations are used to analyze the efficiency of various versions of weighted importance modeling as applied to the evolution of multiparticle ensembles. For coagulation problems, a considerable saving in computational cost is achieved via approximate importance modeling of the “free path” of the ensemble combined with importance modeling of the index of a pair of interacting particles. A weighted modification of the modeling of the initial velocity distribution was found to be the most efficient for model solutions to the Boltzmann equation. The technique developed can be useful for real-life coagulation and relaxation problems for which the model problems considered give approximate solutions.
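The core of weighted (importance) modeling of a free path is to sample the waiting time from a modified density and carry a likelihood-ratio weight; the toy sketch below illustrates only that generic mechanism for an exponential free path, not the specific schemes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_free_path(lam, lam_mod, size):
    """Sample free-path times with a modified rate and attach importance weights."""
    t = rng.exponential(1.0 / lam_mod, size)                         # modified density
    w = (lam * np.exp(-lam * t)) / (lam_mod * np.exp(-lam_mod * t))  # likelihood ratio
    return t, w

# Weighted estimates stay unbiased: the weighted mean of t recovers 1 / lam.
t, w = weighted_free_path(lam=2.0, lam_mod=1.0, size=100_000)
print(np.mean(w * t), 1 / 2.0)
```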

14.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so it can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag ℓ ≥ 1 autocorrelations of a scalar measurement satisfy ρ_ℓ > ρ_(ℓ+1) > 0, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble first order AR(1) processes with ρ_ℓ = ρ^|ℓ| for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). This efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
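A small sketch of the AR(1) calculation described here, under our reading that the figure of merit is (cost per kept draw) times (asymptotic variance factor of the thinned average); comparing k = 1 with k = 2 under this figure of merit reproduces the stated threshold θ ≤ (1 − ρ)²/(2ρ).

```python
import numpy as np

def best_thinning(rho, theta, k_max=200):
    """Most efficient thinning factor k for AR(1) autocorrelation rho and extra cost theta.

    Cost per kept draw is k + theta; the asymptotic variance factor of the
    thinned average is (1 + rho**k) / (1 - rho**k).  Efficiency is reported
    relative to k = 1 (no thinning).
    """
    k = np.arange(1, k_max + 1)
    var_times_cost = (k + theta) * (1 + rho**k) / (1 - rho**k)
    eff = var_times_cost[0] / var_times_cost       # > 1 means thinning by k helps
    return k[np.argmax(eff)], eff.max()

rho, theta = 0.95, 10.0
print(best_thinning(rho, theta))                   # optimal k and its efficiency gain
print((1 - rho)**2 / (2 * rho))                    # k = 1 is optimal only for theta below this
```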

15.
16.
Florian Beck  Peter Eberhard 《PAMM》2016,16(1):425-426
Abrasive wear is one of the mechanisms that cause the decrease of efficiency of hydraulic machines. The working fluid of a hydraulic machine, e.g., a turbine of a hydroelectric power plant, transports small solid particles of different sizes, and those particles damage the surface of the machine on contact. In contrast to classical approaches in fluid dynamics, here we present an approach where only mesh-free methods are applied. The Smoothed Particle Hydrodynamics (SPH) method, a mesh-free method whose advantages lie in describing transient fluid flows with free surfaces and large motions, is used for modeling the fluid in this study. The loading of the fluid consists of small solid particles of different sizes, and a coupled approach is used to describe it: the Discrete Element Method is used for the larger abrasive particles and a transport equation for the smaller ones. In doing so it is possible to model a fluid loading consisting of small particles of different sizes. The abrasive wear is described with an abrasive wear model that takes into account parameters such as the size and velocity of the abrasive particles and, of course, material parameters of both the target and the particles. On impact of an abrasive particle, the amount of removed material is stored at the boundary, so the material removed over time is identified. In this work, a representative numerical example is presented. The simulations were performed with the code Pasimodo, developed at the Institute of Engineering and Computational Mechanics. The aim of this work is to point out that it is possible to model abrasive wear due to abrasive particles of different sizes with a mesh-free approach. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
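As a purely illustrative stand-in for the wear model described (which is not specified in the abstract), the sketch below accumulates a generic power-law erosion increment per particle impact; the coefficient, velocity exponent, and impact-angle weighting are placeholders, not the model used in the paper.

```python
import numpy as np

def wear_increment(mass, speed, angle, k_mat=1e-6, n_exp=2.3):
    """Illustrative erosion increment per impact: ~ k_mat * m * v**n_exp * f(angle)."""
    f_angle = np.sin(angle) * (1.0 + np.cos(angle))   # simple impact-angle weighting
    return k_mat * mass * speed**n_exp * f_angle

# Accumulate removed material at one boundary element over several impacts:
mass = np.array([1.0e-6, 2.0e-6, 1.5e-6])        # particle masses (kg)
speed = np.array([8.0, 12.0, 10.0])              # impact speeds (m/s)
angle = np.radians([20.0, 45.0, 70.0])           # impact angles
print(wear_increment(mass, speed, angle).sum())
```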
