Similar Documents
20 similar documents found
1.
Dynamic programming is a method for solving optimization problems in multi-stage decision processes. A multi-stage decision process is a process whose particular structure allows it to be divided (usually by time or space) into a number of stages, and a decision must be made at each stage so that the process as a whole achieves the best overall result. Many problems in economics, engineering, industrial production, military affairs, and other fields can be reduced to problems of this kind, which belong to the class of programming problems...
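To make the multi-stage structure concrete, the following is a minimal sketch (an illustrative example, not taken from the article; the stage graph and costs are invented): a staged shortest-path problem solved by the backward Bellman recursion, where each stage contributes one decision.

```python
# Illustrative multi-stage decision process solved by backward dynamic
# programming (Bellman recursion).  The stage graph and the costs below
# are hypothetical and exist only to show the mechanics.

def solve_multistage(transitions, terminal_states):
    """transitions[t] maps (state, next_state) -> cost of that decision at
    stage t.  Returns (optimal cost-to-go per initial state, policy)."""
    T = len(transitions)
    value = {s: 0.0 for s in terminal_states}   # cost-to-go after the last stage
    policy = [{} for _ in range(T)]
    for t in reversed(range(T)):                # sweep the stages backwards
        new_value = {}
        for (s, s_next), cost in transitions[t].items():
            if s_next not in value:
                continue                        # next state cannot reach the end
            candidate = cost + value[s_next]
            if s not in new_value or candidate < new_value[s]:
                new_value[s] = candidate
                policy[t][s] = s_next
        value = new_value
    return value, policy

# Three decision stages: A -> {B1,B2} -> {C1,C2} -> D
transitions = [
    {("A", "B1"): 2, ("A", "B2"): 5},
    {("B1", "C1"): 4, ("B1", "C2"): 1, ("B2", "C1"): 1, ("B2", "C2"): 3},
    {("C1", "D"): 6, ("C2", "D"): 2},
]
value, policy = solve_multistage(transitions, {"D"})
print(value["A"])   # 5: the cheapest route A -> B1 -> C2 -> D
```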

2.
Application of the dynamic programming method to matrix chain multiplication. 翁耀明 (Shanghai Medical Instrumentation College, Shanghai 200093). Although the dynamic programming method is fairly effective for optimization problems in multi-stage decision processes, it can also be used to solve some mathematical problems that appear to have nothing to do with multi-stage decisions...
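As a concrete illustration of the technique this abstract describes (a sketch of the standard textbook recurrence, not the paper's own exposition), the matrix chain multiplication problem is solved by taking the chain length as the stage and the split point as the decision:

```python
# Standard dynamic programming sketch for matrix chain multiplication.
# dims[i-1] x dims[i] is the shape of the i-th matrix (1-indexed).

def matrix_chain_order(dims):
    """Minimum number of scalar multiplications needed to compute the
    product of matrices with shapes dims[0]xdims[1], ..., dims[n-1]xdims[n]."""
    n = len(dims) - 1                          # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: cost of A_i ... A_j
    for length in range(2, n + 1):             # chain length plays the role of the stage
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(                     # the split point k is the decision
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return m[1][n]

# Example: A1 is 10x30, A2 is 30x5, A3 is 5x60 -> (A1 A2) A3 is cheapest.
print(matrix_chain_order([10, 30, 5, 60]))     # 4500
```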

3.
Proving certain inequalities with dynamic programming. 翁耀明, 汤明华 (Shanghai Medical Instrumentation College; Hefei Workers' University). Although dynamic programming is an operations-research method for solving multi-stage decision problems, it can also be used to solve certain mathematical problems that seem entirely unrelated to multi-stage decisions. Some inequalities can be converted into equivalent programming problems, after which one only needs to artificially...

4.
Although dynamic programming is an operations-research method for solving multi-stage decision problems, it can also be used to solve certain mathematical problems that seem entirely unrelated to multi-stage decisions. Some inequalities can be converted into equivalent programming problems; one then only needs to introduce "stages" and "states" artificially into the problem and treat it as a multi-stage decision process. This paper uses dynamic programming to prove some elementary inequalities. Example 1. Let $x_i > 0$ ($i = 1, \ldots, n$) and $\alpha < 0 < \beta$; then...
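The example in this abstract is cut off, but the device it describes can be illustrated with the arithmetic-geometric mean inequality (my own sketch, not the paper's example): to bound the product of positive numbers with a fixed sum, take the index as the stage and the remaining sum as the state.

```latex
% Hypothetical illustration of proving an inequality by dynamic programming.
% Stage: the index k; state: the remaining sum c; decision: the value x_k.
% Define f_k(c) = max { x_1 x_2 \cdots x_k : x_i > 0,\ x_1 + \cdots + x_k = c }.
\[
  f_k(c) = \max_{0 < x_k < c} x_k \, f_{k-1}(c - x_k), \qquad f_1(c) = c .
\]
% Induction on k with this recursion gives f_k(c) = (c/k)^k, attained at x_k = c/k,
% which is the AM--GM inequality
% x_1 x_2 \cdots x_n \le \bigl( (x_1 + \cdots + x_n)/n \bigr)^{n}.
```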

5.
翁耀明 《工科数学》1997,13(2):121-122
Although the dynamic programming method is fairly effective for optimization problems in multi-stage decision processes, it can also be used to solve some mathematical problems that appear unrelated to multi-stage decisions, such as the matrix chain multiplication problem: by introducing "time stages" and "states", the problem can be decomposed into a series of formally similar subproblems.

6.
This paper analyzes a multi-period inventory problem under the discounted criterion based on a multi-unit-demand auction mechanism. Using dynamic programming, it studies, over a finite horizon, the optimal quantity of product to sell in each period, the optimal allocation scheme, and the optimal ordering policy; it proposes a modified multi-demand second-price auction model to implement the optimal allocation, and it discusses the dynamic inventory/allocation problem over an infinite horizon.

7.
Multi-dimensional multi-objective fuzzy optimal selection dynamic programming and its application to resource allocation
The one-dimensional fuzzy optimal selection dynamic programming studied to date is extended to multiple dimensions. When solving multi-dimensional multi-stage problems, a genetic algorithm is combined with fuzzy dynamic programming, which ensures the global optimality of the optimization variables. The weight coefficients are determined by a combined subjective-objective evaluation method, which ensures that the data are reasonable and accurate. The method is applied to a multi-dimensional, multi-stage, multi-objective resource allocation problem.

8.
In general, the stage variable of a dynamic program is one-dimensional. In practice, however, there is a class of multi-stage decision problems with many constraints, which makes the transitions of the state variable relatively complicated, so that they cannot be solved with a one-dimensional dynamic programming model. This paper extends the stage variable in the dynamic programming model to two dimensions and studies such problems by building a two-dimensional dynamic programming model. The model is then applied to solve a practical resource allocation problem.
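For comparison, the classic one-dimensional version of the resource-allocation DP that this paper generalizes can be sketched as follows (hypothetical return table; the paper's two-dimensional stage variable is not reproduced here): stage = activity, state = units of resource still available.

```python
# Sketch of the classic resource-allocation DP with a one-dimensional
# stage variable.  reward[i][x] = return from giving x units to activity i
# (the table below is hypothetical).

def allocate(reward, total):
    """Split `total` indivisible units among the activities to maximize the
    summed return; returns (best value, units assigned to each activity)."""
    n = len(reward)
    value = [0.0] * (total + 1)                 # best return with s units, no activities yet
    choice = [[0] * (total + 1) for _ in range(n)]
    for i in range(n):                          # stage: activity i
        new_value = [0.0] * (total + 1)
        for s in range(total + 1):              # state: units still available
            best, best_x = float("-inf"), 0
            for x in range(min(s, len(reward[i]) - 1) + 1):   # decision
                v = reward[i][x] + value[s - x]
                if v > best:
                    best, best_x = v, x
            new_value[s], choice[i][s] = best, best_x
        value = new_value
    alloc, s = [0] * n, total                   # walk back to recover the allocation
    for i in reversed(range(n)):
        alloc[i] = choice[i][s]
        s -= alloc[i]
    return value[total], alloc

# Hypothetical returns for 3 activities receiving 0..4 units each.
reward = [[0, 3, 5, 6, 6],
          [0, 4, 6, 7, 7],
          [0, 2, 5, 8, 9]]
print(allocate(reward, 4))                      # (12.0, [1, 1, 2])
```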

9.
Because unmanned aerial vehicles used for traffic monitoring may encounter unexpected monitoring targets, their cruise paths must be planned dynamically. First, the concept of a time axis is introduced and the dynamic UAV path planning problem is solved in two phases, an initial optimization phase and a real-time optimization phase at dynamic time points, thereby converting it into a static problem. Next, a multi-objective UAV path optimization model is built whose objectives are to minimize the generalized cruise distance and the number of UAVs used. A feasible-path insertion method for dynamic UAV routing is then proposed and a Pareto-based multi-objective optimization algorithm is designed. Finally, a case study and a sensitivity analysis of the algorithm show that the proposed model and algorithm are feasible and effective.

10.
Existing multi-period portfolio optimization problems are generally solved by dynamic programming, but for more complex optimization problems this method is not applicable. For such complex portfolio optimization problems, this paper first constructs a class of portfolio optimization models with convex cone constraints within a generalized multi-period mean-variance framework. It then proposes an effective linear feedback policy: by computing the return and risk of the portfolio's wealth at each period, the dynamic stochastic control problem is converted into a conventional convex optimization problem. Finally, numerical simulations under both open-loop and closed-loop policy forms verify the effectiveness of the proposed feedback policy. The results show that the proposed open-loop policy dominates existing open-loop policies and can closely reproduce the exact investment policy obtained by dynamic programming.

11.
We derive the maximum decoding radius for interleaved Hermitian (IH) codes if a collaborative decoding scheme is used. A decoding algorithm that achieves this bound, which is based on a division decoding algorithm, is given. Based on the decoding radius for the interleaved codes, we derive a bound on the code rate below which virtual extension of non-interleaved Hermitian codes can improve the decoding capabilities.

12.
An important property of low-density parity-check codes is the existence of highly efficient algorithms for their decoding. Many of the most efficient, recent graph-based algorithms, e.g. message-passing iterative decoding and linear programming decoding, crucially depend on the efficient representation of a code in a graphical model. In order to understand the performance of these algorithms, we argue for the characterization of codes in terms of a so-called fundamental cone in Euclidean space. This cone depends upon a given parity-check matrix of a code, rather than on the code itself. We give a number of properties of this fundamental cone derived from its connection to unramified covers of the graphical models on which the decoding algorithms operate. For the class of cycle codes, these developments naturally lead to a characterization of the fundamental cone as the Newton polyhedron of the Hashimoto edge zeta function of the underlying graph.
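For context, the fundamental cone of a parity-check matrix is usually defined as follows (the standard definition from the pseudocodeword literature, restated here rather than quoted from the paper):

```latex
% Fundamental cone of an m x n parity-check matrix H with rows h_1, ..., h_m.
\[
  \mathcal{K}(H) = \Bigl\{ \omega \in \mathbb{R}^{n} :\;
      \omega_i \ge 0 \text{ for all } i, \quad
      \omega_i \le \sum_{i' \in \operatorname{supp}(h_j) \setminus \{i\}} \omega_{i'}
      \text{ for every } j \text{ and every } i \in \operatorname{supp}(h_j)
  \Bigr\}.
\]
% Pseudocodewords arising from graph covers of the Tanner graph of H lie in this
% cone, which is why it depends on H rather than on the code alone.
```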

13.
We generalize Gabidulin codes to a large family of fields, not necessarily finite, possibly with characteristic zero. We consider a general field extension and any automorphism in the Galois group of the extension. This setting enables one to give several definitions of metrics related to the rank-metric, yet potentially different. We provide sufficient conditions on the given automorphism to ensure that the associated rank metrics are indeed all equal and proper, in coherence with the usual definition from linearized polynomials over finite fields. Under these conditions, we generalize the notion of Gabidulin codes. We also present an algorithm for decoding errors and erasures, whose complexity is given in terms of arithmetic operations. Over infinite fields the notion of code alphabet is essential, and more issues appear than in the finite field case. We first focus on codes over integer rings and study their associated decoding problem. But even if the code alphabet is small, we have to deal with the growth of intermediate values. A classical solution to this problem is to perform the computations modulo a prime ideal. For this, we need to study the reduction of generalized Gabidulin codes modulo an ideal. We show that the codes obtained by reduction are the classical Gabidulin codes over finite fields. As a consequence, under some conditions, decoding generalized Gabidulin codes over integer rings can be reduced to decoding Gabidulin codes over a finite field.
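As background, the usual rank metric over a finite extension L/K, which the paper generalizes, can be stated as follows (a standard definition recalled from memory, not the paper's generalized version, which may differ over infinite fields):

```latex
% Usual rank weight and rank distance for words over an extension field L of K.
\[
  \operatorname{wt}_R(x) = \dim_K \operatorname{span}_K\{x_1, \dots, x_n\},
  \qquad
  d_R(x, y) = \operatorname{wt}_R(x - y), \qquad x, y \in L^{n}.
\]
% Equivalently, expand each coordinate of x in a K-basis of L and take the rank
% of the resulting matrix over K.
```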

14.
Hermitian codes obtained from Hermitian curves are shown to be concatenated generalized Reed-Solomon codes. This interpretation of Hermitian codes is used to investigate their structure. An efficient encoding algorithm is given for Hermitian codes. A new general decoding algorithm is given and applied to Hermitian codes to give a decoding algorithm capable of decoding up to the full error correcting capability of the code. This work is supported by Natural Sciences and Engineering Research Council Grant A7382.

15.
16.
The antiblocking decoding algorithm established in Kroll and Vincenti (2010) [6] is based on the notion of an antiblocking system. It is comparable with the permutation decoding algorithm. Instead of a permutation decoding set, called a PD-set, consisting of automorphisms of the code, it uses an antiblocking system, called an AI-system, consisting of information sets. Just as the permutation decoding algorithm is more efficient the smaller the PD-set, the antiblocking decoding algorithm is more effective the smaller the AI-system. Therefore, it is important for applications to find small AI-systems. As in the case of PD-sets, there is no method that guarantees in general how to construct optimal or nearly optimal AI-systems. In this paper, we first present some general results on the existence and construction of small antiblocking systems using properties of antiblocking systems derived in Kroll and Vincenti (2008) [4]. The crucial point for the construction of antiblocking systems is a lemma in which a recursive procedure is provided. In the second part, we apply these findings to construct small AI-systems for some codes arising from a cap of 20 points in PG(4,3).

17.
Many important signal processing tasks in digital communications are based on integer programming problems whose raw complexity is extremely high. Such problems include the decoding of convolutional codes, channel equalization, multiuser detection, and the joint performance of these tasks. In each of these problems, the high complexity arises from the need to perform simultaneous processing on long sequences of finite-valued symbols in order to optimally detect or decode them. Fortunately, the complexity of these optimization problems can often be greatly reduced through the use of dynamic programming, which efficiently finds optimal [e.g., maximum likelihood (ML) or maximum a posteriori probability (MAP)] decisions in long sequences of symbols. This paper reviews four decades of progress in this area: the Viterbi algorithm for ML decoding of convolutional codes of the 1960s; the ML sequence detectors for channel equalization and the BCJR algorithm for MAP decoding of convolutional codes of the 1970s; the ML and MAP multiuser detectors of the 1980s; and combinations of these through the turbo processing of the 1990s.
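To ground the survey's discussion, here is a compact hard-decision Viterbi sketch (an illustrative example of ML decoding for the memory-2, rate-1/2 convolutional code with octal generators (7, 5); the code choice and the single-error experiment are my own, not taken from the paper):

```python
# Viterbi ML decoding sketch for the rate-1/2 convolutional code with
# octal generators (7, 5), using Hamming distance as the branch metric
# (hard-decision decoding over a binary symmetric channel).

def encode_step(state, u):
    """state = (s1, s2) holds the two most recent input bits."""
    s1, s2 = state
    return (u, s1), (u ^ s1 ^ s2, u ^ s2)        # generators 111 and 101

def viterbi(received):
    """received: list of 2-bit tuples.  Returns the ML input sequence."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # start in state 00
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for u in (0, 1):                     # one branch per input bit
                nxt, out = encode_step(s, u)
                d = (out[0] ^ r[0]) + (out[1] ^ r[1])    # Hamming distance
                if metric[s] + d < new_metric[nxt]:      # keep the survivor
                    new_metric[nxt] = metric[s] + d
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])  # no trellis termination assumed
    return paths[best]

# Round trip: encode a message, flip one channel bit, decode it back.
msg, state, sent = [1, 0, 1, 1, 0, 0], (0, 0), []
for u in msg:
    state, out = encode_step(state, u)
    sent.append(out)
sent[2] = (sent[2][0] ^ 1, sent[2][1])           # inject a single bit error
print(viterbi(sent) == msg)                      # True: the error is corrected
```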

18.
In this paper we study the computation of symmetric systems of bilinear forms over finite fields via symmetric bilinear algorithms. We show that, in general, the symmetric complexity of a system is upper bounded by a constant multiple of the bilinear complexity; we characterize symmetric algorithms in terms of the cosets of a specific cyclic code, and we show that the problem of finding an optimal symmetric algorithm is equivalent to the maximum-likelihood decoding problem for this code.

19.
A decoding algorithm for linear codes. 杜宏 (Institute of Systems Science, Chinese Academy of Sciences, Beijing 100080). Received 25 March 1992. Introduction: the decoding of linear codes has long been an open problem. After J. Justesen et al. gave in [2] a decoding algorithm for algebraic-geometry codes on plane algebraic curves, A. N. Skorobogat...

20.
Iterative decoding and linear programming decoding are guaranteed to converge to the maximum-likelihood codeword when the underlying Tanner graph is cycle-free. Therefore, cycles are usually seen as the culprit behind the suboptimality of low-density parity-check decoding. In this paper, we argue in the context of graph cover pseudocodewords that, for a code that permits a cycle-free Tanner graph, cycles have no effect on error performance as long as they are part of redundant rows. Specifically, we characterize all parity-check matrices that are pseudocodeword-free for this class of codes.
