20 similar documents found; search took 484 ms
1.
This paper studies the interview-scheduling problem in independent college admissions, which is closely related to a classical problem, the Steiner system problem. We first formalize the problem and then propose three algorithms for it. Notably, our congruence-based construction algorithm attains a very high approximation ratio (stronger than an FPTAS) at low time complexity. For the case where students are split into arts and science tracks, we again formalize the problem and give a corresponding algorithm. We have implemented all of the algorithms described.
2.
This paper studies mathematical programs with vanishing constraints. We propose a smoothing regularization method based on the pseudo-Huber function that smooths only part of the vanishing constraints. For the resulting smoothed problem, we show that the Mangasarian-Fromovitz constraint qualification holds under certain conditions. We also analyze the convergence properties of the method: any accumulation point of a sequence of stationary points of the smoothing regularization problems is a T-stationary point of the original problem, and we give sufficient conditions under which such an accumulation point is an M-stationary or S-stationary point. Preliminary numerical results show that the method is viable.
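The abstract does not reproduce the paper's exact smoothing scheme, but the pseudo-Huber function it builds on is standard. A minimal sketch of how it smooths the nonsmooth absolute value (the function name and the shift by sigma are illustrative choices, not taken from the paper):

```python
import math

def pseudo_huber(t, sigma):
    # phi_sigma(t) = sqrt(t^2 + sigma^2) - sigma: a smooth approximation
    # of |t| that recovers |t| exactly as sigma -> 0
    return math.sqrt(t * t + sigma * sigma) - sigma
```

Unlike |t|, this function is differentiable at t = 0 (its derivative t / sqrt(t^2 + sigma^2) is bounded by 1), which is what makes constraint qualifications tractable for the smoothed problem.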
3.
4.
Part (Ⅰ) of this work [8] developed the theory of minimal polynomial matrices; Part (Ⅱ) applies that theory to linear multivariable systems. In the first section of this part, using the theory from (Ⅰ), we discuss in detail some results on the input problem for linear multivariable systems. In the second section, by duality, we introduce the notions of row n.p.m. and row generating sets, and discuss some results on the output problem for linear multivariable systems. In the third section we discuss how to convert a state-space form into a polynomial-matrix form. In the fourth section we discuss the inverse problem, namely converting a polynomial-matrix form into a state-space form. Some instructive examples are given to illustrate these theories and methods.
5.
6.
7.
8.
"Little cat, little cat, how many eyes? You run and I will chase." This childhood game often flashes before my eyes: when will the chaser catch up? This is clearly the circular-track problem in mathematics. Such problems have long been a stumbling block; encountering one can feel like entering a maze. How do we find the way out? In fact, circular-track problems are nothing to fear: with careful analysis and observation to find the underlying pattern, they can be solved just as smoothly as ordinary motion problems. Below we explore this situation.
9.
10.
1. Introduction. With the strong support of the Henan Education Society, we established the Henan Education Society Professional Committee on Innovation Education, and our research on innovation education has only just begun. Innovation education takes the cultivation of students' creative spirit and creative ability as its basic value orientation. How to cultivate students' innovative awareness, spirit, and ability in senior high school mathematics teaching is a key research topic of our committee. The distinguished American mathematician P. R. Halmos once said that problems are the heart of mathematics; in a sense, then, the essence of mathematics learning is problem solving. Based on the idea of innovation education, senior high school mathematics teaching should unfold through seven stages (posing, analyzing, discussing, solving, applying, extending, and reflecting on problems) to drive the whole process of mathematics learning and cultivate students' innovative awareness, spirit, and ability. This paper focuses on cultivating problem awareness, approaches to problem solving, principles of problem design, and pitfalls in problem solving.
11.
The BP (backpropagation) neural network is currently the most widely used neural network algorithm, but it converges slowly and is prone to getting trapped in local minima. This paper improves the BP algorithm with a chaos genetic algorithm (CGA), which exploits the ergodicity of chaotic motion together with the search mechanism of genetic algorithms. The basic idea is to use the CGA to optimize the initial weights and thresholds of the BP network: chaotic variables are injected into the genetic algorithm to improve its global search ability and convergence speed, and the best solution found by the CGA is then used as the initial weights and thresholds of the BP network. Experiments show that the improved algorithm achieves higher accuracy than the plain BP algorithm.
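A common way to realize the "chaotic variables" mentioned above is the logistic map. The sketch below uses it to seed an initial GA population spread over the search box (the function name, bounds, and the choice mu = 4 are illustrative assumptions, not details from the paper):

```python
import random

def chaos_init_population(pop_size, dim, lo, hi, mu=4.0):
    """Seed a GA population using logistic-map iterates x <- mu*x*(1-x);
    mu = 4 makes the map chaotic and ergodic on (0, 1)."""
    x = random.random()
    while x in (0.0, 0.25, 0.5, 0.75):   # avoid the map's fixed/periodic points
        x = random.random()
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)       # one chaotic iterate in [0, 1]
            individual.append(lo + (hi - lo) * x)
        population.append(individual)
    return population
```

Each chromosome would then encode one candidate set of BP initial weights and thresholds, with the GA's fitness given by the network's training error.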
12.
A rank-one algorithm is presented for unconstrained function minimization. The algorithm is a modified version of Davidon's variance algorithm and incorporates a limited line search. It is shown that the algorithm is a descent algorithm; for quadratic forms, it exhibits finite convergence in certain cases. Numerical studies indicate that it is considerably superior to both the Davidon-Fletcher-Powell algorithm and the conjugate-gradient algorithm.
13.
A descent algorithm for nonsmooth convex optimization 总被引:1,自引:0,他引:1
Masao Fukushima 《Mathematical Programming》1984,30(2):163-175
This paper presents a new descent algorithm for minimizing a convex function which is not necessarily differentiable. The
algorithm can be implemented and may be considered a modification of the ε-subgradient algorithm and Lemarechal's descent
algorithm. Also our algorithm is seen to be closely related to the proximal point algorithm applied to convex minimization
problems. A convergence theorem for the algorithm is established under the assumption that the objective function is bounded
from below. Limited computational experience with the algorithm is also reported. 相似文献
14.
P. P. B. Eggermont 《Applied Mathematics and Optimization》1999,39(1):75-91
We study a modification of the EMS algorithm in which each step of the EMS algorithm is preceded by a nonlinear smoothing
step of the form , where S is the smoothing operator of the EMS algorithm. In the context of positive integral equations (à la positron emission tomography)
the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast
to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original
EM algorithm. This suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence
of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence
of the new algorithm. We also present some simulation results for the integral equation of stereology, which suggests that
the new algorithm behaves roughly like the EMS algorithm.
Accepted 1 April 1997 相似文献
15.
提出了一种凸组合共轭梯度算法,并将其算法应用到ARIMA模型参数估计中.新算法由改进的谱共轭梯度算法与共轭梯度算法作凸组合构造而成,具有下述特性:1)具备共轭性条件;2)自动满足充分下降性.证明了在标准Wolfe线搜索下新算法具备完全收敛性,最后数值实验表明通过调节凸组合参数,新算法更加快速有效,通过具体实例证实了模型的显著拟合效果. 相似文献
16.
A Gaussian kernel approximation algorithm for a feedforward neural network is presented. The approach used by the algorithm, which is based on a constructive learning algorithm, is to create the hidden units directly so that automatic design of the architecture of neural networks can be carried out. The algorithm is defined using the linear summation of input patterns and their randomized input weights. Hidden-layer nodes are defined so as to partition the input space into homogeneous regions, where each region contains patterns belonging to the same class. The largest region is used to define the center of the corresponding Gaussian hidden nodes. The algorithm is tested on three benchmark data sets of different dimensionality and sample sizes to compare the approach presented here with other algorithms. Real medical diagnoses and a biological classification of mushrooms are used to illustrate the performance of the algorithm. These results confirm the effectiveness of the proposed algorithm. 相似文献
17.
目前求解置换流水车间调度问题的智能优化算法都是随机型优化方法,存在的一个问题是解的稳定性较差。针对该问题,本文给出一种确定型智能优化算法——中心引力优化算法的求解方法。为处理基本中心引力优化算法对初始解选择要求高的问题,利用低偏差序列生成初始解,提高初始解质量;利用加速度和位置迭代方程更新解的状态;利用两位置交换排序法进行局部搜索,提高算法的优化性能。采用置换流水车间调度问题标准测试算例进行数值实验,并和基本中心引力优化算法、NEH启发式算法、微粒群优化算法和萤火虫算法进行比较。结果表明该算法不仅具有更好的解的稳定性,而且具有更高的计算精度,为置换流水车间调度问题的求解提供了一种可行有效的方法。 相似文献
18.
Hao Jiang Roberto Barrio Housen LiXiangke Liao Lizhi Cheng Fang Su 《Applied mathematics and computation》2011,217(23):9702-9716
This paper presents a compensated algorithm to accurately evaluate a polynomial expressed in Chebyshev basis of the first and second kind with floating-point coefficients. The principle is to apply error-free transformations to improve the traditional Clenshaw algorithm. The new algorithm is as accurate as the Clenshaw algorithm performed in twice the working precision. Forward error analysis and numerical experiments illustrate the accuracy and properties of the proposed algorithm. 相似文献
19.
The Adjoint Newton Algorithm for Large-Scale Unconstrained Optimization in Meteorology Applications 总被引:1,自引:0,他引:1
A new algorithm is presented for carrying out large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion.The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. 
When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed. 相似文献
20.
This paper presents a new composite sub-steps algorithm for solving reliable numerical responses in structural dynamics. The newly developed algorithm is a two sub-steps, second-order accurate and unconditionally stable implicit algorithm with the same numerical properties as the Bathe algorithm. The detailed analysis of the stability and numerical accuracy is presented for the new algorithm, which shows that its numerical characteristics are identical to those of the Bathe algorithm. Hence, the new sub-steps scheme could be considered as an alternative to the Bathe algorithm. Meanwhile, the new algorithm possesses the following properties: (a) it produces the same accurate solutions as the Bathe algorithm for solving linear and nonlinear problems; (b) it does not involve any artificial parameters and additional variables, such as the Lagrange multipliers; (c) The identical effective stiffness matrices can be obtained inside two sub-steps; (d) it is a self-starting algorithm. Some numerical experiments are given to show the superiority of the new algorithm and the Bathe algorithm over the dissipative CH-α algorithm and the non-dissipative trapezoidal rule. 相似文献