Similar Literature
 20 similar documents found (search time: 156 ms)
1.
Hu Xinghua, Qin Yanjie. 《计算数学》 (Mathematica Numerica Sinica), 2023, 45(1): 109-129
Building on existing Chebyshev neural networks, this paper proposes a new method for the numerical solution of the fractional-order Bagley-Torvik equation in which the Chebyshev network is optimized by a genetic algorithm. Combining Taylor expansions at multiple points, a general form of the numerical solution is given, and the original problem is recast as an unconstrained minimization problem. Comparison with the results of existing numerical methods demonstrates the feasibility and effectiveness of the method and suggests a new approach to similar problems in fractional differential equations.
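A minimal sketch of the trial-solution idea described above, under several assumptions: the equation is taken as y'' + a·D^{3/2}y + b·y = f(t) with zero initial conditions and an illustrative right-hand side, the fractional derivative is discretized with standard Grünwald-Letnikov weights, and a gradient-based minimizer (BFGS) stands in for the paper's genetic algorithm.

```python
# Illustrative sketch, not the authors' code: fit a Chebyshev expansion
# y(t) ~ sum_k w_k T_k(2t-1) by minimizing the collocation residual of
# y'' + a*D^{3/2} y + b*y = f(t) with y(0) = y'(0) = 0 on [0, 1].
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import minimize

a, b = 1.0, 1.0
f = lambda t: 1.0 + t                      # assumed right-hand side
t = np.linspace(0.0, 1.0, 64)
h = t[1] - t[0]

def gl_frac_deriv(y, alpha):
    """Grunwald-Letnikov approximation of D^alpha y on the uniform grid."""
    k = np.arange(len(y))
    w = np.cumprod(np.r_[1.0, 1.0 - (alpha + 1.0) / k[1:]])   # GL weights
    return np.array([w[:j + 1] @ y[j::-1] for j in k]) / h**alpha

def residual(w):
    x = 2.0 * t - 1.0                      # map [0,1] onto [-1,1]
    y = C.chebval(x, w)
    ypp = 4.0 * C.chebval(x, C.chebder(w, 2))   # chain rule for the map
    r = ypp + a * gl_frac_deriv(y, 1.5) + b * y - f(t)
    ic = y[0]**2 + (2.0 * C.chebval(-1.0, C.chebder(w)))**2
    return np.sum(r**2) + 1e3 * ic         # penalize the initial conditions

w_opt = minimize(residual, np.zeros(8), method="BFGS").x
```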

2.
The algebraic neural network algorithm overcomes the BP network's tendency to fall into local minima and its slow convergence: by selecting suitable activation functions and computing the weights algebraically, a complex nonlinear optimization problem is reduced to solving a simple system of algebraic equations, which improves both accuracy and convergence speed. In an application to predicting spontaneous coal combustion, the data are preprocessed by mean normalization, which removes the excessive disturbance that anomalies in the indicator gases would otherwise cause in the classification results. Experimental results demonstrate the effectiveness and practicality of the algorithm.

3.
Artificial neural networks have developed rapidly in recent years, and their application to the numerical solution of partial differential equations is a topic of active interest. Compared with traditional methods, they are broadly applicable (one model can handle several types of equations), impose mild requirements on mesh generation, and a trained model can evaluate the solution directly at any point of the domain. Based on a convolutional neural network model, this paper optimizes the weight coefficients of traditional finite volume schemes to obtain new schemes that retain high accuracy on coarse grids and are therefore better suited to complex problems. The network model solves the Burgers equation and the level set equation accurately and efficiently, with stable results and high numerical accuracy.
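As an illustration of the weight-optimization idea (not the authors' architecture), the sketch below parameterizes the interface reconstruction of a 1D finite-volume scheme for the Burgers equation with a small convolutional network; in practice the weights would be trained so that coarse-grid updates track a fine-grid reference solution (training loop omitted). All names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class FluxNet(nn.Module):
    """Tiny CNN producing 3 convex interpolation weights per interface."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 3, kernel_size=3, padding=1,
                             padding_mode="circular")

    def forward(self, u):                          # u: (batch, 1, N) cell averages
        w = torch.softmax(self.net(u), dim=1)      # weights sum to 1 per cell
        stencil = torch.stack(
            [torch.roll(u, s, dims=-1) for s in (1, 0, -1)], dim=1
        )                                          # u_{j-1}, u_j, u_{j+1}
        u_face = (w.unsqueeze(2) * stencil).sum(dim=1)
        return 0.5 * u_face ** 2                   # Burgers flux f(u) = u^2/2

def fv_step(u, model, dt, dx):
    """One conservative finite-volume update with the learned flux."""
    f = model(u)
    return u - dt / dx * (f - torch.roll(f, 1, dims=-1))

u0 = torch.sin(2 * torch.pi * torch.linspace(0, 1, 64)).reshape(1, 1, -1)
u1 = fv_step(u0, FluxNet(), dt=1e-3, dx=1.0 / 64)
```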

4.
Using the matrix of differences between the completion times of adjacent jobs proposed by Li Xiaoping et al., the minimum makespan problem for no-wait flow-shop scheduling is mapped to a travelling salesman problem (TSP), a corresponding energy function is constructed, and a stochastic chaotic neural network (SCSA) algorithm is obtained. Experimental results show that this chaotic neural network optimization algorithm outperforms the RAJ and GANRAJ algorithms.

5.
Stability of a class of neural network models
This paper extends a neural network model for solving convex programming problems to general nonconvex nonlinear programming problems. Theoretical analysis shows that, under suitable conditions, the equilibrium points of the proposed neural network model are asymptotically stable and correspond to local optimal solutions of the nonlinear programming problem.

6.
An inverse problem for a class of nondifferentiable quadratic programs
This paper solves an inverse problem of quadratic programming, namely the minimization of an objective that is the sum of a matrix spectral norm and a vector infinity norm. The problem is first reformulated as a convex optimization problem with a variable-separable objective, and a G-ADMM method is proposed for its solution; the subproblems are solved exactly by combining the singular value thresholding algorithm, Moreau-Yosida regularization, and MATLAB's quadprog function. The exact solution of one subproblem turns out to be itself a separable convex problem in several matrix variables, so an alternating direction method suited to multiple matrix blocks is applied; by introducing new variables, every resulting subproblem admits a closed-form solution. Finally, numerical experiments with the proposed G-ADMM method are reported, and the results show that it solves this inverse quadratic programming problem quickly and efficiently.
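For reference, here is a minimal sketch of one building block named in the abstract, singular value thresholding (soft-thresholding of the singular values); the threshold tau would come from the ADMM penalty parameter. This is illustrative, not the paper's code.

```python
import numpy as np

def svt(X, tau):
    """Soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```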

7.
This paper applies, for the first time, a combined neural network to solving the one-dimensional time-fractional diffusion equation. The combined network is a new architecture formed by coupling a radial basis function (RBF) neural network with a power-activation feedforward network. A numerical scheme satisfying the conditions of the time-fractional diffusion equation is first constructed with this network, and an error function is set up so that the original problem becomes the minimization of that error; gradient-descent learning within the network model is then iterated to obtain the optimal weights and parameters, and hence the numerical solution. Numerical examples verify the feasibility, effectiveness, and accuracy of the method, opening a new route to the solution of time-fractional diffusion equations.

8.
An entropy-function method is proposed for mathematical programs with equilibrium constraints (MPEC). The original problem is first rewritten equivalently as a single-level nonsmooth optimization problem; entropy-function approximation then yields a sequential smooth optimization method for the MPEC. Existence of solutions to the entropy-approximation problems and global convergence of the algorithm are proved, and numerical examples demonstrate the algorithm's effectiveness.
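A minimal sketch of the entropy-function (aggregate) smoothing on which such methods rest: max_i g_i(x) is approximated by (1/p)·ln Σ_i exp(p·g_i(x)), which exceeds the true maximum by at most ln(m)/p, so increasing p tightens the approximation. The implementation below is an illustration, stabilized logsumexp-style.

```python
import numpy as np

def entropy_max(g, p):
    """Smooth approximation of max(g), computed stably via logsumexp."""
    g = np.asarray(g, dtype=float)
    m = g.max()
    return m + np.log(np.exp(p * (g - m)).sum()) / p

print(entropy_max([0.3, 1.0, 0.99], p=50))   # close to the true max 1.0
```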

9.
A new class of memory gradient methods and their global convergence
Memory gradient methods for unconstrained optimization are studied. Descent directions are generated from information at the current and previous iterates, giving a new class of unconstrained optimization algorithms whose global convergence is proved under the Wolfe line search. The new algorithms are structurally simple and require no matrix computation or storage, making them suitable for large-scale problems. Numerical experiments show the algorithm is effective.
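A minimal sketch of a memory-gradient iteration of the general form d_k = -g_k + β_k·d_{k-1} with a Wolfe line search; the particular β_k and descent safeguard below are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import line_search

def memory_gradient(f, grad, x, iters=200, tol=1e-8):
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0] or 1e-4   # Wolfe step
        x = x + alpha * d
        g_new = grad(x)
        beta = 0.5 * (g_new @ g_new) / (abs(d @ g_new) + 1e-16)
        d = -g_new + beta * d          # direction from current and past info
        if d @ g_new >= 0:             # safeguard: keep a descent direction
            d = -g_new
        g = g_new
    return x

x_star = memory_gradient(lambda x: x @ x, lambda x: 2.0 * x, np.ones(5))
```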

10.
Under uncertainty in member dimensions, material properties, external loads and other factors, reliability-based design optimization (RBDO) yields designs that balance structural cost against safety. The traditional RBDO formulation is solved as a nested two-level optimization, which is computationally expensive; decoupling methods and single-loop methods, among others, have been proposed to overcome this. This paper applies an RBF neural network surrogate to the RBDO problem: the surrogate is constructed by Latin hypercube sampling, its accuracy is verified with an error metric, and it is adaptively updated until the requirement is met. Comparison with four mainstream RBDO algorithms demonstrates the efficiency and robustness of the proposed algorithm.
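A minimal sketch of the surrogate loop described above, with an assumed toy limit-state function and an illustrative error metric and refinement rule: Latin hypercube sampling builds the design, an RBF model is fitted, and the model is updated where it disagrees most with the true function.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

g = lambda X: X[:, 0]**2 + X[:, 1] - 1.5          # assumed limit-state function

sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(20), [-2, -2], [2, 2])
y = g(X)

for _ in range(5):                                 # adaptive refinement
    model = RBFInterpolator(X, y)                  # RBF surrogate
    X_test = qmc.scale(sampler.random(200), [-2, -2], [2, 2])
    err = np.abs(model(X_test) - g(X_test))
    if err.max() < 1e-3:                           # illustrative error metric
        break
    worst = X_test[np.argsort(err)[-5:]]           # add worst-fit points
    X, y = np.vstack([X, worst]), np.concatenate([y, g(worst)])
```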

11.
Hopfield neural networks and affine scaling interior point methods are combined in a hybrid approach for solving linear optimization problems. The Hopfield networks perform the early stages of the optimization procedures, providing enhanced feasible starting points for both primal and dual affine scaling interior point methods, thus facilitating the steps towards optimality. The hybrid approach is applied to a set of real world linear programming problems. The results show the potential of the integrated approach, indicating that the combination of neural networks and affine scaling interior point methods can be a good alternative to obtain solutions for large-scale optimization problems.

12.
Cao Yang, Dai Hua. 《计算数学》 (Mathematica Numerica Sinica), 2014, 36(4): 381-392
This paper studies numerical methods for nonlinear eigenvalue problems. Based on a quadratic approximation of the matrix-valued function, the nonlinear eigenvalue problem is reduced to a quadratic eigenvalue problem, a successive quadratic approximation method is proposed, and its convergence is analysed. Combining this with the Arnoldi and Jacobi-Davidson methods for quadratic eigenvalue problems yields several quadratic approximation methods for nonlinear eigenvalue problems. Numerical results show that the proposed algorithms are effective.
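A minimal sketch of one successive-quadratic-approximation step under simplifying assumptions (small dense T(λ), derivatives by finite differences): expand T about the current shift σ, solve the resulting quadratic eigenvalue problem by companion linearization, and take the eigenvalue nearest σ as the next shift.

```python
import numpy as np
from scipy.linalg import eig

def qap_step(T, sigma, h=1e-4):
    """One quadratic-approximation step: returns an updated shift."""
    A0 = T(sigma)
    n = A0.shape[0]
    A1 = (T(sigma + h) - T(sigma - h)) / (2 * h)           # ~ T'(sigma)
    A2 = (T(sigma + h) - 2 * A0 + T(sigma - h)) / h**2     # ~ T''(sigma)
    Z, I = np.zeros((n, n)), np.eye(n)
    # companion linearization of A0 + mu*A1 + (mu^2/2)*A2
    L = np.block([[Z, I], [-A0, -A1]])
    M = np.block([[I, Z], [Z, 0.5 * A2]])
    mu = eig(L, M, right=False)
    mu = mu[np.isfinite(mu)]
    return sigma + mu[np.argmin(np.abs(mu))]               # eigenvalue nearest sigma

# usage on a toy nonlinear eigenproblem T(lam) = A - lam*I + 0.1*sin(lam)*B
A = np.diag([1.0, 2.0, 3.0]); B = np.ones((3, 3))
T = lambda lam: A - lam * np.eye(3) + 0.1 * np.sin(lam) * B
lam = 1.0
for _ in range(10):
    lam = qap_step(T, lam)
```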

13.
Neural networks for solving linear inequalities
This paper proposes two Hopfield-Tank-type neural networks for solving systems of linear inequalities. The first network emulates the simultaneous relaxation projection method; the second is based on a quadratic programming method. Whenever the solution set of the linear inequalities is nonempty, both methods produce a solution. Numerical simulations of the two networks are also given.
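A minimal sketch of the simultaneous relaxation projection iteration that the first network emulates: gradient descent on ½·||(Ax - b)_+||², which pushes all violated inequalities of Ax ≤ b toward feasibility at once. The step size and data below are illustrative.

```python
import numpy as np

def relax_project(A, b, x, alpha=0.05, iters=2000):
    for _ in range(iters):
        v = np.maximum(A @ x - b, 0.0)     # violations of Ax <= b
        if not v.any():
            break                          # feasible point found
        x = x - alpha * A.T @ v            # relax all violations at once
    return x

A = np.array([[1.0, 1.0], [-1.0, 2.0]]); b = np.array([1.0, 2.0])
x = relax_project(A, b, np.array([5.0, 5.0]))
```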

14.
A new artificial neural network solution approach is proposed to solve combinatorial optimization problems. The artificial neural network is called the Tabu Machine because it has the same structure as the Boltzmann Machine but uses tabu search to govern its state transition mechanism. Similar to the Boltzmann Machine, the Tabu Machine consists of a set of binary state nodes connected with bidirectional arcs. Ruled by the transition mechanism, the nodes adjust their states in order to search for a global minimum energy state. Two combinatorial optimization problems, the maximum cut problem and the independent set problem, are used as examples to conduct a computational experiment. Without using overly sophisticated tabu search techniques, the Tabu Machine outperforms the Boltzmann Machine in terms of both solution quality and computation time.
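A minimal, assumption-laden sketch of the Tabu Machine idea on a small max-cut instance: the energy is the same quadratic form a Boltzmann Machine would use, but transitions pick the best non-tabu bit flip (with a standard aspiration override), so uphill moves are possible. The tenure, aspiration rule, and instance are illustrative, not the paper's settings.

```python
import numpy as np

def tabu_machine(W, iters=200, tenure=2, seed=0):
    """Binary-state network minimizing a max-cut energy via tabu moves."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    s = rng.integers(0, 2, n)
    tabu = np.zeros(n, dtype=int)                  # step until which a flip is tabu
    energy = lambda s: -float(s @ W @ (1 - s))     # minus the cut weight
    best_s, best_e = s.copy(), energy(s)
    for t in range(iters):
        flips = [np.where(np.arange(n) == i, 1 - s, s) for i in range(n)]
        e = [energy(x) for x in flips]
        order = np.argsort(e)
        # best non-tabu flip; tabu is overridden if it beats the incumbent
        i = next((i for i in order if tabu[i] <= t or e[i] < best_e), order[0])
        s = flips[i]
        tabu[i] = t + tenure
        if e[i] < best_e:
            best_s, best_e = s.copy(), e[i]
    return best_s, -best_e                         # partition and cut value

W = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], float)
print(tabu_machine(W))
```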

15.
A common challenge in regression is that for many problems, the degrees of freedom required for a high-quality solution also allows for overfitting. Regularization is a class of strategies that seek to restrict the range of possible solutions so as to discourage overfitting while still enabling good solutions, and different regularization strategies impose different types of restrictions. In this paper, we present a multilevel regularization strategy that constructs and trains a hierarchy of neural networks, each of which has layers that are wider versions of the previous network's layers. We draw intuition and techniques from the field of Algebraic Multigrid (AMG), traditionally used for solving linear and nonlinear systems of equations, and specifically adapt the Full Approximation Scheme (FAS) for nonlinear systems of equations to the problem of deep learning. Training through V-cycles then encourages the neural networks to build a hierarchical understanding of the problem. We refer to this approach as multilevel-in-width to distinguish from prior multilevel works which hierarchically alter the depth of neural networks. The resulting approach is a highly flexible framework that can be applied to a variety of layer types, which we demonstrate with both fully connected and convolutional layers. We experimentally show with PDE regression problems that our multilevel training approach is an effective regularizer, improving the generalization performance of the neural networks studied.

16.
Wang Guizhen. 《数学杂志》 (Journal of Mathematics), 1998, 18(4): 445-449
This paper presents a block successive over-relaxation combined Newton-multiplier (BSOR-N-M) method for solving a class of programming problems whose constraint functions are block-separable, proves its convergence, and further gives numerical results that agree with the theory.

17.
《Optimization》2012,61(9):1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, which is modelled with a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of the equilibrium points of the proposed differential inclusion is equal to that of the optimal solutions of the considered optimization problem. Moreover, it is shown that the trajectory of the solution converges to an element of the optimal solution set and the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is also designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.

18.
Incremental Gradient Algorithms with Stepsizes Bounded Away from Zero
We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationally important case. In particular, we show that a certain ε-approximate solution can be obtained and establish the linear dependence of ε on the stepsize limit. Incremental gradient methods are particularly well-suited for large neural network training problems where obtaining an approximate solution is typically sufficient and is often preferable to computing an exact solution. Thus, in the context of neural networks, the approach presented here is related to the principle of tolerant training. Our results justify numerous stepsize rules that were derived on the basis of extensive numerical experimentation but for which no theoretical analysis was previously available. In addition, convergence to (exact) stationary points is established when the gradient satisfies a certain growth property.
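A minimal sketch of an incremental gradient method with a constant stepsize bounded away from zero, on an assumed least-squares sum f(x) = Σ_i f_i(x): each inner step uses one component's gradient, as in neural network training, and the iterates settle into a neighbourhood of the minimizer whose size scales with the stepsize.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)

def incremental_gradient(x, alpha=1e-2, epochs=50):
    for _ in range(epochs):
        for i in range(len(b)):                   # one component at a time
            grad_i = (A[i] @ x - b[i]) * A[i]     # grad of (1/2)(a_i.x - b_i)^2
            x = x - alpha * grad_i                # constant stepsize alpha > 0
    return x

x = incremental_gradient(np.zeros(5))
```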

19.
In this paper we propose a nonmonotone approach to recurrent neural networks training for temporal sequence processing applications. This approach allows learning performance to deteriorate in some iterations, nevertheless the network’s performance is improved over time. A self-scaling BFGS is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant and is tested on a set of sequence processing problems. Simulation results show that the proposed algorithm outperforms the BFGS as well as other methods previously applied to these sequences, providing an effective modification that is capable of training recurrent networks of various architectures.
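A minimal sketch of a nonmonotone acceptance rule of the kind such methods use (shown here in the Grippo-Lampariello-Lucidi style, which is an assumption, not the paper's exact adaptive technique): a step is accepted if it improves on the maximum of the last M objective values, so individual iterations may deteriorate while the overall trend improves.

```python
import numpy as np

def nonmonotone_step(f, x, d, g, history, M=10, c=1e-4, shrink=0.5):
    """One backtracking step accepted against the max of recent values."""
    fmax = max(history[-M:])               # reference value, not just f(x)
    alpha = 1.0
    while alpha > 1e-12 and f(x + alpha * d) > fmax + c * alpha * (g @ d):
        alpha *= shrink
    x_new = x + alpha * d
    history.append(f(x_new))
    return x_new

# usage: keep history = [f(x0)] and call repeatedly with a descent direction d
```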

20.
Iterative methods, and especially Krylov subspace methods (KSM), are a very useful numerical tool for solving the large, sparse linear systems that arise in science and engineering modelling. More recently, nested-loop KSM have been proposed that improve the convergence of traditional KSM. In this article, we review the residual cutting (RC) and generalized residual cutting (GRC) methods, which are nested-loop methods for large, sparse linear systems. We also show that GRC is a KSM that is equivalent to Orthomin with variable preconditioning. We use the modified Gram–Schmidt method to derive a stable GRC algorithm. We show that GRC provides a general framework for constructing a class of “hybrid” (nested) KSM based on the choice of inner-loop method. We conduct numerical experiments using nonsymmetric indefinite matrices from a widely used library of sparse matrices that validate the efficiency and the robustness of the proposed methods.
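A minimal sketch of the modified Gram-Schmidt orthogonalization used here to stabilize GRC: each new vector is orthogonalized against the existing basis one column at a time, which is much less sensitive to rounding than classical Gram-Schmidt. Names and dimensions are illustrative.

```python
import numpy as np

def mgs_append(Q, v):
    """Orthonormalize v against the columns of Q (modified Gram-Schmidt)."""
    for j in range(Q.shape[1]):
        v = v - (Q[:, j] @ v) * Q[:, j]    # subtract projections sequentially
    nrm = np.linalg.norm(v)
    return np.hstack([Q, (v / nrm)[:, None]]), nrm

Q = np.linalg.qr(np.random.default_rng(0).normal(size=(6, 2)))[0]
Q, _ = mgs_append(Q, np.random.default_rng(1).normal(size=6))
```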

