Similar Literature
20 similar records found (search time: 140 ms)
1.
To forecast time series with nonlinear characteristics, a chaotic least-squares support vector machine (LS-SVM) is proposed. The algorithm reconstructs the time series in phase space, using the resulting embedding dimension and time delay as the basis for selecting data samples, and combines the least-squares principle with support vector machines to build a chaotic LS-SVM prediction model. The model was applied to the soil-moisture time series of the Luancheng station. The results show that phase-space reconstruction improves the selection of data samples and, judged by the model evaluation metrics, the chaotic LS-SVM prediction model accurately forecasts time series with nonlinear characteristics, giving it considerable theoretical and practical value.
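The abstract gives no implementation details; as a rough illustration, the phase-space reconstruction step it relies on (time-delay embedding with a chosen embedding dimension and delay) can be sketched in Python. The function name and interface here are hypothetical:

```python
def time_delay_embed(series, dim, delay):
    """Reconstruct a scalar series in phase space via time-delay embedding.

    Each row is a delay vector [x(t), x(t+delay), ..., x(t+(dim-1)*delay)];
    such vectors serve as input samples for a downstream predictor.
    """
    n = len(series) - (dim - 1) * delay
    return [[series[t + k * delay] for k in range(dim)] for t in range(n)]

# Example: embed a short series with embedding dimension 3 and delay 2.
vectors = time_delay_embed([0, 1, 2, 3, 4, 5, 6, 7], dim=3, delay=2)
print(vectors[0])  # [0, 2, 4]
```

In practice the embedding dimension and delay would be estimated from the data (e.g. by false-nearest-neighbour and mutual-information criteria) before the vectors are passed to the LS-SVM.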

2.
MATLAB Implementation of an Improved GM(2,1) Model and Its Application
Motivated by economic forecasting, the principle of the grey GM(2,1) model is introduced via the application of the GM(1,1) model. The GM(2,1) algorithm and its prediction steps are improved with the least-squares method, the prediction is implemented in MATLAB, a simulation is run on Chinese economic growth-rate data, and a mathematical model is fitted to the observed time series.

3.
This paper proposes an estimation method for the time-varying coefficients of a general time-varying autoregressive (TVAR) model: a vector autoregressive time-series model is built for the time-varying coefficients, its coefficient matrices are computed by least squares, and the time-varying coefficients are then forecast, yielding point predictions of the TVAR series. Methods for both point and interval prediction are given.

4.
Since Suykens proposed the least-squares support vector machine (LS-SVM), a new statistical learning method, it has attracted wide attention, and its good predictive performance has been widely exploited. This paper improves LS-SVM with the Group Method of Data Handling (GMDH), a self-organizing data-mining approach, to raise prediction accuracy. GMDH is first used to select effective input variables, which are then fed into an LS-SVM model for time-series prediction, yielding the hybrid GMDH/LS-SVM model GLSSVM. An empirical study on an exchange-rate time series shows that the hybrid model markedly improves prediction accuracy.

5.
To address the inability of the GM(1,1) power model to handle small-sample oscillating sequences containing abrupt-change information, a wavelet-based grey prediction model for small-sample oscillating sequences is proposed. First, a GM(1,1) power model is built on the original data to capture its overall trend; then the wavelet transform is used to extract the useful signal and random noise contained in the residual sequence of the GM(1,1) power model, and a new time-response function is formed by combining them with the GM(1,1) power model; finally, the wavelet basis and decomposition level are chosen to minimize the mean error against the original data, the residual sequence is reconstructed from the wavelets, and the random oscillating sequence is predicted together with the original GM(1,1) power model. In a case study on urban water consumption, the Fourier-based GM(1,1) power oscillating-sequence model and the fractional-order discrete GM(1,1) power model yield mean errors of 3.22% and 5.66%, respectively, whereas the method proposed here achieves 1.11%. The case study shows that the method quickly and effectively solves the GM(1,1) power model's prediction problem for small-sample oscillating sequences with abrupt trends.

6.
Pork output is affected by many factors, so the data fluctuate strongly and are characterized by small samples and poor information. This paper uses a least-squares-based GM(1,1) model to make short-term forecasts of China's pork output over the next few years. First, the GM(1,1) model is introduced; then the least-squares principle is used to damp strongly fluctuating data, reducing randomness and strengthening regularity, and the least-squares-based GM(1,1) model is built; next, a prediction model is fitted to China's pork-output data for 2008-2014; finally, the 2014 data are used to validate the model, whose predictions are closer to the actual values. The forecasts indicate that China's pork output will keep rising over the next three years. The model provides a theoretical basis for related forecasting tasks and supports macro-level regulation of the future pork market, helping maintain market balance and avoid the risk of price fluctuations.
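None of the GM(1,1) abstracts in this list include code; as a hedged sketch, the core least-squares fit of a plain GM(1,1) model (accumulate, form background values, solve the 2x2 normal equations, restore forecasts) might look like this in Python. The function name is illustrative, and this is the standard model, not the papers' improved variants:

```python
import math

def gm11_fit(x0):
    """Fit a grey GM(1,1) model by least squares and return a forecaster.

    x0 : raw positive series. Parameters a (development coefficient) and
    b (grey input) solve x0[k] = -a*z1[k] + b in the least-squares sense.
    Assumes a != 0 (i.e. the series is not exactly constant).
    """
    n = len(x0)
    # 1-AGO (accumulated generating operation) series
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: means of consecutive accumulated terms
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for [a, b] via the 2x2 normal equations (Cramer's rule)
    szz = sum(v * v for v in z)
    sz = sum(z)
    szy = sum(v * w for v, w in zip(z, y))
    sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def predict(k):
        # k = 0 returns x0[0]; k >= 1 is the model's restored/forecast value
        if k == 0:
            return x0[0]
        x1_k = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1_k1 = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        return x1_k - x1_k1

    return predict
```

On a series with near-exponential growth, e.g. `gm11_fit([100, 110, 121, 133.1])`, the restored values track the originals closely, and `predict(k)` for k beyond the sample gives the extrapolated forecast.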

7.
Shanghai's total social dependency ratio is affected by many factors, so the data fluctuate strongly and a plain grey prediction model cannot forecast it accurately; this paper therefore proposes an improved GM(1,1) model based on least squares. The paper first describes how an ordinary GM(1,1) model is built; it then uses the least-squares principle to damp strongly fluctuating data and strengthen their regularity, yielding a new GM(1,1) model; finally, a new prediction model is fitted to Shanghai's total-dependency-ratio data for 2007-2011 and validated against the 2012 data. The validated model can be used to forecast the city's total dependency ratio for the coming years, supporting macro-level planning of its economic development. The results show the method is reasonable and feasible, providing a theoretical basis for related forecasting tasks.

8.
A wavelet-analysis prediction method is applied to stock closing prices, a typical non-stationary financial time series. The Mallat wavelet decomposition algorithm is used to decompose the data; the decomposed data are smoothed and then reconstructed, and the reconstructed series is approximately stationary, giving an approximate signal of the original data. Traditional time-series prediction methods are then applied to the reconstructed data. Comparing the predictions with the actual values, and with the results of traditional prediction methods, shows that the wavelet-analysis approach performs better.
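As a minimal illustration of the decompose-smooth-reconstruct idea described above (using a single-level Haar transform rather than the full multi-level Mallat algorithm), the smoothing step might be sketched as:

```python
import math

def haar_smooth(x):
    """One-level Haar decomposition, detail suppression, and reconstruction.

    Dropping the detail coefficients keeps the low-frequency approximation,
    giving a smoothed (closer-to-stationary) version of the series that a
    conventional time-series model can then be fitted to. Assumes even length.
    """
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    # zero the detail coefficients (the denoising step), then invert the
    # transform: each approximation coefficient spreads back over two samples
    smooth = []
    for a in approx:
        smooth += [a / s, a / s]
    return smooth

print(haar_smooth([1.0, 3.0, 2.0, 4.0]))  # pairwise means: ~[2, 2, 3, 3]
```

A full implementation would decompose over several levels, threshold rather than zero the details, and choose the wavelet basis by the error criterion the abstracts describe.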

9.
Passenger traffic volume is affected by many factors, so the data fluctuate strongly and are characterized by small samples and poor information. An improved least-squares-based GM(1,1) model is used to make short-term forecasts of Shanghai's passenger traffic. First, the ordinary GM(1,1) model is introduced; then the least-squares principle is used to damp strongly fluctuating data, reducing randomness and strengthening regularity, yielding the improved GM(1,1) model; next, a prediction model is fitted to the 2005-2014 data; finally, the 2014 data are used to validate the model's reliability. The results show the method is accurate with small errors, and the improved model's predictions are closer to the actual values. The model provides a theoretical basis for related forecasting tasks and supports Shanghai's macro-level planning of future transport.

10.
To promote and safeguard the employment of university graduates and ensure the quantity and quality of trained talent, it is important to predict graduate numbers accurately in advance. Classical least-squares regression and robust regression are used to analyze graduate numbers in the five provinces of North China, with candidate explanatory variables screened by all-subsets selection and related methods. Models are built with both the classical least-squares and the robust method, and their fits and predictions are compared; the results show that the robust method significantly improves both the fitting and the prediction accuracy of the traditional model. Graduate numbers for the five provinces are then forecast for 2018-2025, providing a reliable quantitative basis for policy-making by the relevant departments.

11.
The aim of the paper is to present a new global optimization method for determining all the optima of the Least Squares Method (LSM) problem for pairwise comparison matrices. Such matrices are used, e.g., in the Analytic Hierarchy Process (AHP). Unlike some other distance-minimizing methods, LSM is usually hard to solve because of its nonlinear and non-convex objective function. It is found that the optimization problem can be reduced to solving a system of polynomial equations, to which the homotopy method, an efficient technique for solving nonlinear systems, is applied. The paper ends with two numerical examples having multiple global and local minima. This research was supported in part by the Hungarian Scientific Research Fund, Grant No. OTKA K 60480.

12.
This paper presents the key steps in modeling internally pressurized fractures in a homogeneous elastic medium. Internal pressure in the cracks transforms the displacement field, which alters the associated stress concentration. The displacements are solved analytically using a Linear Superposition Method (LSM), and the stresses are solved under the assumption of linear elasticity. The method allows the fractures to have any location, geometry, and orientation, and each crack may be pressurized by either equal or individual pressure loads. The solution methodology is explained, and results are generated for several cases. Selected LSM results show excellent matches against independent methods (photo-elastics for multiple-crack problems, and prior analytical solutions for single-crack problems). The grid-less, closed-form LSM solution achieves fast computation times by side-stepping adaptive grid refinement while still reaching high target resolution.

13.
We propose a new algorithm for dynamic lot size models (LSM) in which production and inventory cost functions are only assumed to be piecewise linear. In particular, there are no assumptions of convexity, concavity, or monotonicity. Arbitrary capacities on both production and inventory may occur, and backlogging is allowed. Thus the algorithm addresses most variants of the LSM appearing in the literature. Computational experience shows it to be very effective on NP-hard versions of the problem. For example, 48-period capacitated problems with production costs defined by eight linear segments are solvable in less than 2.5 minutes of VAX 8600 CPU time.
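The paper's algorithm for piecewise-linear costs is not reproduced here; as a point of reference, the classical uncapacitated special case (the Wagner-Whitin dynamic program) that such lot-size models generalize can be sketched as follows. The function name and the simple setup-plus-linear-holding cost structure are simplifications of this sketch:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Classical uncapacitated lot-sizing DP (Wagner-Whitin).

    f[t] = minimum cost of covering demand for periods 0..t-1, where a
    batch produced in period j covers periods j..t-1 and pays one setup
    plus per-unit-per-period holding for carried units. This is the
    simplest special case of the lot-size models the paper generalizes.
    """
    n = len(demand)
    f = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for j in range(t):  # produce in period j to cover periods j..t-1
            hold = sum(holding_cost * (k - j) * demand[k] for k in range(j, t))
            f[t] = min(f[t], f[j] + setup_cost + hold)
    return f[n]
```

With demand `[10, 10]`, setup cost 5, and holding cost 1, producing in each period (cost 10) beats one combined batch (cost 15); raising the setup cost to 20 flips the decision.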

14.
Wagner Muniz, PAMM, 2005, 5(1): 689-690
We consider the inverse inhomogeneous-medium scattering problem in acoustics, where one tries to recover the support of anomalies in a medium by interrogating the region of interest with plane waves at fixed frequency. We discuss the validity of the Linear Sampling Method (LSM) for solving this inverse problem and its connection to an unusual eigenvalue problem. It turns out that the existence of non-trivial solutions of the so-called interior transmission eigenvalue problem results in the failure of the LSM. We then propose a modification of the LSM that avoids these shortcomings and is numerically sound.

15.
The Least-Squares Monte Carlo method (LSM) has become the standard tool for solving real options modeled as optimal switching problems. The method has been shown to deliver accurate valuation results under complex and high-dimensional stochastic processes; however, the accuracy of the underlying decision policy is not guaranteed. For instance, an inappropriate choice of regression functions can lead to noisy estimates of the optimal switching boundaries, or even to continuation/switching regions that are not clearly separated. As an alternative way to estimate these boundaries, we formulate a simulation-based method that starts from an initial guess and then iterates until reaching optimality. The algorithm is applied to a classical mine under a wide variety of dynamics for the commodity price process. The method is first validated under a one-dimensional geometric Brownian motion and then extended to general Markovian processes. We consider two general specifications: a two-factor model with stochastic variance and a rich jump structure, and a four-factor model with stochastic cost-of-carry and stochastic volatility. The method is shown to be robust, stable, and easy to implement, converging to a more profitable strategy than the one obtained with LSM.
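The paper's boundary-iteration method is not given in the abstract; for context, a minimal sketch of the standard Longstaff-Schwartz LSM it benchmarks against, assuming geometric Brownian motion dynamics and a quadratic regression basis (both assumptions of this sketch, not of the paper), might read:

```python
import numpy as np

def lsm_american_put(s0, strike, r, sigma, T, steps, paths, seed=0):
    """Least-Squares Monte Carlo (Longstaff-Schwartz) price of a Bermudan put.

    Continuation values are regressed on a quadratic polynomial of the spot;
    exercise occurs where immediate payoff exceeds estimated continuation.
    """
    rng = np.random.default_rng(seed)
    dt = T / steps
    # simulate geometric Brownian motion paths on the exercise grid
    z = rng.standard_normal((paths, steps))
    growth = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    s = s0 * np.exp(np.cumsum(growth, axis=1))
    payoff = np.maximum(strike - s[:, -1], 0.0)
    for t in range(steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)                  # discount one step back
        itm = strike - s[:, t] > 0                 # regress in-the-money paths only
        if itm.sum() > 3:
            x = s[itm, t]
            coeffs = np.polyfit(x, payoff[itm], 2)  # quadratic basis
            cont = np.polyval(coeffs, x)
            exercise = np.maximum(strike - x, 0.0) > cont
            idx = np.where(itm)[0][exercise]
            payoff[idx] = strike - s[idx, t]       # exercise: replace cashflow
    return float(np.exp(-r * dt) * payoff.mean())
```

The noisy boundary estimates the abstract criticizes come precisely from the regression step above: a poor basis makes `cont` unreliable near the exercise boundary.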

16.
This paper develops a new numerical technique to price an American option written on an underlying asset that follows a bivariate diffusion process. The technique exploits the supermartingale representation of an American option price together with a coarse approximation of its early exercise surface, based on an efficient implementation of the least-squares Monte Carlo algorithm (LSM) of Longstaff and Schwartz (Rev Financ Stud 14:113-147, 2001). Our approach also avoids two main issues associated with LSM, namely its inherent bias and the basis-function selection problem. Extensive numerical results show that our approach yields very accurate prices in a computationally efficient manner. Finally, the flexibility of our method allows its extension to a much larger class of optimal stopping problems than addressed in this paper.

17.
Latent space models (LSM) for network data rely on the basic assumption that each node of the network has an unknown position in a D-dimensional Euclidean latent space: generally, the smaller the distance between two nodes in the latent space, the greater their probability of being connected. In this article, we propose a variational inference approach to estimate the intractable posterior of the LSM. In many cases, different network views on the same set of nodes are available, and it can then be useful to build a model that jointly summarizes the information given by all the views. For this purpose, we introduce the latent space joint model (LSJM), which merges the information given by multiple network views by assuming that the probability of a node being connected with other nodes in each view is explained by a unique latent variable. The model is demonstrated on two datasets: an excerpt of 50 girls from the "Teenage Friends and Lifestyle Study" at three time points, and the Saccharomyces cerevisiae genetic and physical protein-protein interactions. Supplementary materials for this article are available online.

18.
The characteristic dynamic distortions of a lengthy i.i.d. pure-noise time series embedded with slightly-aggregating sparse signals are summarized into a significantly shorter recurrence-time process of a chosen extreme event. We first employ the Kolmogorov-Smirnov statistic to compare the empirical recurrence-time distribution with the null geometric distribution that holds when no signal is present in the original time series. The power of this hypothesis test depends on the degree of aggregation of the sparse signals: from a completely random scattering of singletons to batches of various sizes over the entire temporal span. We demonstrate that the Kolmogorov-Smirnov statistic captures the dynamic distortions due to slightly-aggregating sparse signals better than Tukey's Higher Criticism statistic does, even when the batch size is as small as five. Secondly, after confirming the presence of signals in the pure-noise time series, we apply the hierarchical factor segmentation (HFS) algorithm, again based on the recurrence-time process, to compute focal segments that contain a significantly higher intensity of signals than the rest of the temporal regions. In a computer experiment with a fixed number of signals, the focal segments identified by the HFS algorithm afford a many-fold increase in signal intensity, which also depends critically on the degree of aggregation of the sparse signals. This ratio information can yield better sensitivity, equivalent to a smaller false discovery rate, if the signal-discovery protocol implemented inside the computed focal regions differs from the one used outside them. We also numerically compute the specificity as the total number of signals contained in the computed collection of focal regions, which indicates the inherent difficulty of the sparse signal discovery task.

19.
The growth of wireless communication continues, with demand for more user capacity from new subscribers and from new services such as wireless internet. To meet these expectations, new and improved technology must be developed. One way to increase the capacity of an existing mobile radio network is to exploit the spatial domain efficiently. An antenna array adds spatial selectivity that improves the Carrier-to-Interference ratio (C/I) as well as the Signal-to-Noise Ratio (SNR); an adaptive antenna array can further improve the C/I by suppressing interfering signals and steering a beam towards the user. The suggested scheme combines a beamformer with an interference canceller.

The proposed structure is a circular array of K omni-directional elements that combines fixed beamforming with interference cancelling. The fixed beamformers use a weight matrix to form multiple beams; the interference-cancelling stage suppresses undesired signals leaking into the desired beam.

The desired signal is filtered out by the fixed beamforming structure. Because of the side-lobes, interfering signals will also be present in this beam. Two alternative strategies were chosen to cancel these interferers: use the other beamformer outputs as inputs to an adaptive interference canceller, or regenerate the outputs from the other beamformer outputs and feed the resulting clean signals to adaptive interference cancellers.

Resulting beamformer patterns and interference-cancellation simulation results are presented. Two methods were used to design the beamformer weights, Least Squares (LS) and minimax optimisation; the minimax optimisation uses a semi-infinite linear programming approach. Although the optimisation plays an essential role in the performance of the beamformer, this paper focuses on the application rather than the optimisation methods.

20.
Recent closed-form solutions to the Mutual Information Principle (MIP) are used to reconstitute signals from limited a priori information about them. Most emphasis is given to everywhere-positive signals that must be optimally smoothed using a few measured values obtained with an instrument of known average error. The method is compared with results obtained from other classical methods, such as Least Squares, Lagrange, and Newton.
