Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The change in the scattering data for solutions of su(2) soliton systems that are related by a classical Darboux transformation (CDT) is obtained. It is shown how a CDT creates and erases a soliton.

2.
The classification problem statement of multicriteria decision analysis is to model the classification of alternatives/actions according to the decision maker's preferences. These models are based on outranking relations, utility functions or (linear) discriminant functions. Model parameters can be given explicitly or learnt from a preclassified set of alternatives/actions. In this paper we propose a novel approach, the Continuous Decision (CD) method, to learn the parameters of a discriminant function, and we also introduce its extension, the Continuous Decision Tree (CDT) method, which describes the classification more accurately. The proposed methods result from integrating Machine Learning methods into Decision Analysis. From a Machine Learning point of view, the CDT method can be considered an extension of the C4.5 decision-tree building algorithm that handles only numeric criteria but applies more complex tests in the inner nodes of the tree. For easier interpretation, the decision trees are transformed into rules.
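To make the idea of linear tests in inner nodes concrete, the following is a minimal Python sketch of the inference side of such a tree and of its conversion to rules; the node structure, weights and class labels are hypothetical illustrations, not the authors' CD/CDT learning algorithm (which is omitted here).

```python
import numpy as np

class Node:
    """Inner node: tests a linear combination of numeric criteria.
    Leaf node: stores a class label (weights is None)."""
    def __init__(self, weights=None, threshold=0.0, left=None, right=None, label=None):
        self.weights, self.threshold = weights, threshold
        self.left, self.right, self.label = left, right, label

def classify(node, x):
    # Descend the tree: each inner node applies w.x <= t instead of a
    # single-attribute threshold as in C4.5.
    while node.label is None:
        node = node.left if np.dot(node.weights, x) <= node.threshold else node.right
    return node.label

def to_rules(node, conds=()):
    # Flatten the tree into "IF ... THEN class" rules for easier interpretation.
    if node.label is not None:
        return [(" AND ".join(conds) or "TRUE") + f" -> class {node.label}"]
    cond = f"{node.weights}.x <= {node.threshold}"
    return (to_rules(node.left, conds + (cond,)) +
            to_rules(node.right, conds + (f"NOT[{cond}]",)))

# Hypothetical two-criteria tree with one linear split.
tree = Node(weights=np.array([0.7, -0.3]), threshold=0.1,
            left=Node(label="accept"), right=Node(label="reject"))
print(classify(tree, np.array([0.2, 0.5])), to_rules(tree))
```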

3.
The Celis-Dennis-Tapia (CDT) problem is a subproblem of trust region algorithms for constrained optimization. The CDT subproblem is studied in this paper. It is shown that there exists a KKT point at which the Hessian matrix of the Lagrangian is positive semidefinite if the multipliers at the global solution are not unique. Next, second-order optimality conditions are given for the case where the Hessian matrix of the Lagrangian at the solution has one negative eigenvalue. Furthermore, it is proved that all feasible KKT points whose corresponding Hessian matrices of the Lagrangian have one negative eigenvalue are local optimal solutions of the CDT subproblem.
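For reference, the CDT subproblem and the Lagrangian Hessian referred to above can be stated as follows (a standard formulation; the paper's notation may differ slightly):

\[
\min_{d\in\mathbb{R}^n}\; g^{\top} d + \tfrac{1}{2}\, d^{\top} B d
\quad\text{s.t.}\quad \|d\| \le \Delta,\qquad \|A^{\top} d + c\| \le \xi ,
\]

and, with multipliers \(\lambda,\mu \ge 0\),

\[
L(d,\lambda,\mu) = g^{\top} d + \tfrac{1}{2}\, d^{\top} B d
 + \tfrac{\lambda}{2}\bigl(\|d\|^2 - \Delta^2\bigr)
 + \tfrac{\mu}{2}\bigl(\|A^{\top} d + c\|^2 - \xi^2\bigr),
\qquad
\nabla^2_{dd} L = B + \lambda I + \mu A A^{\top}.
\]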

4.
Cost-effective sampling design is a major concern in some experiments, especially when measuring the characteristic of interest is costly, painful or time consuming. Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research 3, 385-390] as an effective way to estimate the pasture mean. In the current paper, a modification of ranked set sampling called moving extremes ranked set sampling (MERSS) is considered for the best linear unbiased estimators (BLUEs) of the simple linear regression model. The BLUEs for this model under MERSS are derived. The BLUEs under MERSS are shown to be markedly more efficient for normal data than the BLUEs under simple random sampling.

5.
Grapiglia et al. (2013) proved subspace properties for the Celis-Dennis-Tapia (CDT) problem. If a lower-dimensional subspace satisfying these properties is chosen appropriately, then the CDT problem can be solved in that subspace, so that the computational cost is reduced. We show how to find subspaces that satisfy the subspace properties for the CDT problem by using the eigendecomposition of the Hessian matrix of the objective function. The dimensions of the subspaces are investigated. We also apply the subspace techniques to the trust region subproblem and to quadratic optimization with two quadratic constraints.
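The computational gain from such a subspace can be seen from the reduced problem: if \(V\in\mathbb{R}^{n\times k}\) has orthonormal columns spanning a subspace that contains a solution of the CDT problem with data \((g,B,A,c,\Delta,\xi)\), the n-dimensional problem collapses to a k-dimensional one. This is a generic restatement of the idea, not the paper's specific construction:

\[
\min_{z\in\mathbb{R}^k}\; (V^{\top} g)^{\top} z + \tfrac{1}{2}\, z^{\top} (V^{\top} B V) z
\quad\text{s.t.}\quad \|z\| \le \Delta,\qquad \|A^{\top} V z + c\| \le \xi ,
\]

with the full-space solution recovered as \(d^{*} = V z^{*}\) (note \(\|Vz\|=\|z\|\) because the columns of \(V\) are orthonormal).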

6.
The estimation of the Lyapunov spectrum for a chaotic time series is discussed in this study. Three models are compared for estimating the Lyapunov spectrum: the local linear (LL) model, the local polynomial (LP) model and the global radial basis function (RBF) model. The number of neighbors used to train the LL and LP models, and the number of centers used to build the RBF model, are determined by the generalized degrees of freedom for a chaotic time series. The models are applied to three artificial chaotic time series and two real-world time series. The numerical results show that the model-chosen LL model provides more accurate estimates than the other models for clean data sets, while the RBF model is more robust to noise than the other models for noisy data sets.
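As background, the local linear route to the Lyapunov spectrum fits a linear map to each neighborhood of the reconstructed attractor and accumulates the growth rates of the propagated Jacobians via QR decompositions. The Python sketch below is a bare-bones illustration of that classical (Eckmann-Ruelle / Sano-Sawada style) procedure on a hypothetical scalar series; it is not the paper's model-selection scheme.

```python
import numpy as np

def embed(x, dim, tau):
    # Delay embedding of a scalar series into R^dim.
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def lyapunov_spectrum(x, dim=3, tau=1, k=15):
    Y = embed(np.asarray(x, float), dim, tau)
    n = len(Y) - 1
    Q = np.eye(dim)                      # orthonormal frame to propagate
    sums = np.zeros(dim)
    for t in range(n):
        # k nearest neighbours of Y[t], excluding points too close in time.
        d = np.linalg.norm(Y[:n] - Y[t], axis=1)
        d[max(0, t - 5):t + 5] = np.inf
        idx = np.argsort(d)[:k]
        # Local linear (least-squares) map: Y[j+1]-Y[t+1] ~ J (Y[j]-Y[t]).
        dX, dXn = Y[idx] - Y[t], Y[idx + 1] - Y[t + 1]
        J = np.linalg.lstsq(dX, dXn, rcond=None)[0].T
        # Propagate the frame and re-orthonormalise (QR).
        Q, R = np.linalg.qr(J @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n                      # Lyapunov exponents per time step

# Hypothetical test series: the chaotic logistic map (largest exponent ~ ln 2).
x = np.empty(3000); x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print(lyapunov_spectrum(x, dim=2, tau=1))
```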

7.
Normal-distribution-based discriminant methods have been used to classify new entities into different groups based on a discriminant rule constructed from a learning set. In practice, if the groups are not homogeneous, then the mixture discriminant analysis of Hastie and Tibshirani (J R Stat Soc Ser B 58(1):155-176, 1996) is a useful approach, assuming that the distribution of the feature vectors is a mixture of multivariate normals. In this paper a new logistic regression model for heterogeneous group structure of the learning set is proposed, based on penalized multinomial mixture logit models. Simulation studies show this approach to be more effective; the results were compared with the standard mixture discriminant analysis approach using the probability of misclassification as the criterion. The comparison showed a slight reduction in the average probability of misclassification for the penalized multinomial mixture logit model relative to the classical discriminant rules, and better results, with smaller errors, when applied to practical real-data problems.

8.
Based on a study of rough sets and neural networks within data mining theory, an attribute reduction algorithm is used to reduce and extract the main indicator factors influencing real estate prices; a neural network is then learned and trained on the dimension-reduced data, and finally the trained network is validated on test samples. The method improves both the speed of learning/training and the recognition rate, providing a more effective and practical new approach to real estate price prediction.
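The pipeline described above (rough-set attribute reduction followed by neural-network training) can be illustrated with the Python sketch below; the greedy dependency-degree reduct, the synthetic indicators and the network settings are simplified, hypothetical stand-ins, not the paper's algorithm or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dependency(X, y):
    # Rough-set dependency degree gamma(B): fraction of objects whose equivalence
    # class (identical discretised attribute values) maps to a single decision class.
    classes = {}
    for row, d in zip(map(tuple, X), y):
        classes.setdefault(row, set()).add(d)
    consistent = sum(1 for row, d in zip(map(tuple, X), y)
                     if len(classes[tuple(row)]) == 1)
    return consistent / len(y)

def greedy_reduct(X, y):
    # Forward selection: repeatedly add the attribute that raises gamma the most.
    remaining, reduct = list(range(X.shape[1])), []
    full = dependency(X, y)
    while remaining and (not reduct or dependency(X[:, reduct], y) < full):
        best = max(remaining, key=lambda a: dependency(X[:, reduct + [a]], y))
        reduct.append(best); remaining.remove(best)
    return reduct

# Hypothetical discretised housing indicators and binned price levels.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6))
price = 2.0 * X[:, 0] + X[:, 3] + 0.1 * rng.standard_normal(200)
levels = np.digitize(price, [2, 4, 6])          # decision attribute for the reduct
reduct = greedy_reduct(X, levels)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X[:, reduct], price)
print("reduct:", reduct, "R^2:", round(net.score(X[:, reduct], price), 3))
```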

9.
Low-rank modeling has achieved great success in tensor completion. However, the low-rank prior is not sufficient for the recovery of the underlying tensor, especially when the sampling rate (SR) is extremely low. Fortunately, many real-world data exhibit a piecewise smoothness prior along both the spatial and the third modes (e.g., the temporal mode in video data and the spectral mode in hyperspectral data). Motivated by this observation, we propose a novel low-rank tensor completion model using smooth matrix factorization (SMF-LRTC), which exploits the piecewise smoothness prior along all modes of the underlying tensor by introducing smoothness constraints on the factor matrices. An efficient block successive upper-bound minimization (BSUM)-based algorithm is developed to solve the proposed model. The developed algorithm converges to the set of coordinate-wise minimizers under some mild conditions. Extensive experimental results demonstrate the superiority of the proposed method over the compared methods.
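To fix ideas, a generic form of factorization-based tensor completion with smoothness penalties on the factor matrices looks as follows; this is only an illustrative template in the spirit of the abstract, and the exact SMF-LRTC objective, weights and constraints in the paper may differ:

\[
\min_{\mathcal{X},\,\{A_k,B_k\}}\;
\sum_{k=1}^{3} \frac{\alpha_k}{2}\,\bigl\| X_{(k)} - A_k B_k \bigr\|_F^2
+ \lambda_k \,\bigl\| D B_k \bigr\|_F^2
\quad\text{s.t.}\quad
\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}),
\]

where \(X_{(k)}\) is the mode-\(k\) unfolding of \(\mathcal{X}\), \(D\) is a discrete difference operator encoding piecewise smoothness, and \(\Omega\) indexes the observed entries of the data tensor \(\mathcal{T}\).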

10.
In this paper, we study a modification of the Celis-Dennis-Tapia trust-region subproblem obtained by replacing the l2-norm with a polyhedral norm. The polyhedral-norm Celis-Dennis-Tapia (CDT) subproblem can be solved using a standard quadratic programming code. We include computational results comparing the performance of the polyhedral-norm CDT trust-region algorithm with that of existing codes. The numerical results validate the effectiveness of the approach: they show little loss of robustness or speed and suggest that the polyhedral-norm CDT algorithm may be a viable alternative. The topic merits further investigation. The first author was supported in part by the REDI foundation and State of Texas Award, Contract 1059, as Visiting Member of the Center for Research on Parallel Computation, Rice University, Houston, Texas. He thanks Rice University for the congenial scientific atmosphere provided. The second author was supported in part by the National Science Foundation, Cooperative Agreement CCR-88-09615, Air Force Office of Scientific Research Grant 89-0363, and Department of Energy Contract DEFG05-86-ER25017.
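The reduction to a standard QP is easiest to see with the \(\ell_\infty\) norm, one natural polyhedral choice (the paper's particular norm may differ): both constraints become finitely many linear inequalities, leaving a quadratic objective over a polyhedron.

\[
\min_{d}\; g^{\top} d + \tfrac{1}{2}\, d^{\top} B d
\quad\text{s.t.}\quad
-\Delta\,e \le d \le \Delta\,e,\qquad
-\xi\,e \le A^{\top} d + c \le \xi\,e,
\]

where \(e\) denotes the vector of ones, so any quadratic programming code that handles linear inequality constraints applies directly.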

11.
When actually measuring the study variable is irreparably destructive or very expensive, effective sampling design becomes an important research topic. For statistical inference, ranked set sampling is regarded as a more efficient way of collecting data, and extreme ranked set sampling (ERSS) is a modification of ranked set sampling. This paper studies ratio estimation of the population mean under ERSS. Taking the normal distribution as an example, the relative efficiencies of the ratio estimators under simple random sampling and under ERSS are compared. The numerical results show that the ratio estimator under ERSS outperforms the ratio estimator under simple random sampling.
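For context, the classical ratio estimator of the population mean on which such comparisons are built has the form below (written for a generic sample; an ERSS version would typically replace the sample means by their ERSS counterparts):

\[
\hat{\bar{Y}}_{R} = \frac{\bar{y}}{\bar{x}}\,\bar{X},
\]

where \(\bar{y}\) and \(\bar{x}\) are the sample means of the study and auxiliary variables and \(\bar{X}\) is the known population mean of the auxiliary variable.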

12.
This paper provides significant numerical evidence for the out-of-sample forecasting ability of linear Gaussian interest rate models with unobservable underlying factors. We calibrate one-, two- and three-factor linear Gaussian models using the Kalman filter on two different bond yield data sets and compare their out-of-sample forecasting performance. One-step-ahead as well as four-step-ahead out-of-sample forecasts are analyzed based on weekly data. When evaluating the one-step-ahead forecasts, it is shown that a one-factor model may be adequate when only the short-dated or only the long-dated yields are considered, but two- and three-factor models perform significantly better when the entire yield spectrum is considered. Furthermore, the results demonstrate that the predictive ability of multi-factor models remains intact far ahead out-of-sample, with accurate predictions available up to one year after the last calibration for one data set and up to three months after the last calibration for the second, more volatile data set. The experimental data cover two different periods with different yield volatilities, and the stability of model parameters after calibration in both cases is deemed both significant and practically useful. When it comes to four-step-ahead predictions, the quality of forecasts deteriorates for all models, as can be expected, but the advantage of using a multi-factor model over a one-factor model is still significant.
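To illustrate the filtering and forecasting machinery, here is a minimal Python sketch of a one-factor linear Gaussian state-space model with a Kalman filter producing one-step-ahead yield forecasts; the state dynamics, loadings and synthetic data are hypothetical placeholders, not the models or data sets used in the paper.

```python
import numpy as np

# Hypothetical one-factor model: x_{t+1} = phi * x_t + w_t,   w_t ~ N(0, q)
#                                y_t    = a + b * x_t + v_t,  v_t ~ N(0, R)
# y_t is a vector of yields for several maturities observed weekly.
phi, q = 0.98, 0.02 ** 2
a = np.array([0.02, 0.025, 0.03])            # per-maturity intercepts
b = np.array([1.0, 0.8, 0.6])                # per-maturity factor loadings
R = np.diag([1e-6, 1e-6, 1e-6])              # measurement noise covariance

def kalman_one_step(y):
    """Run the filter over observed yields y (T x 3); return one-step forecasts."""
    x, P = 0.0, 1.0                          # vague initial state
    forecasts = []
    for obs in y:
        x_pred, P_pred = phi * x, phi * P * phi + q      # predict step
        forecasts.append(a + b * x_pred)                 # one-step-ahead forecast
        S = np.outer(b, b) * P_pred + R                  # innovation covariance
        K = (P_pred * b) @ np.linalg.inv(S)              # Kalman gain
        innov = obs - (a + b * x_pred)
        x = x_pred + K @ innov                           # update step
        P = P_pred - (K @ b) * P_pred
    return np.array(forecasts)

# Simulate hypothetical weekly yields from the same model and forecast them.
rng = np.random.default_rng(1)
T, x_true, ys = 200, 0.0, []
for _ in range(T):
    x_true = phi * x_true + rng.normal(0, np.sqrt(q))
    ys.append(a + b * x_true + rng.multivariate_normal(np.zeros(3), R))
ys = np.array(ys)
pred = kalman_one_step(ys)
print("RMSE of one-step forecasts:", np.sqrt(np.mean((pred[1:] - ys[1:]) ** 2)))
```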

13.
In multivariate time series analysis, dynamic principal component analysis (DPCA) is an effective method for dimensionality reduction. DPCA is an extension of the original PCA method that can be applied to an autocorrelated dynamic process. In this paper, we apply DPCA to a set of real oil data and use the principal components as covariates in condition-based maintenance (CBM) modeling. The CBM model (Model 1) is then compared with the CBM model that uses raw oil data as the covariates (Model 2). It is shown that the average maintenance cost corresponding to the optimal policy for Model 1 is considerably lower than that for Model 2, and that when the optimal policies are applied to the oil data histories, the policy for Model 1 correctly indicates almost twice as many impending system failures as the policy for Model 2.
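As a reminder of the construction, dynamic PCA is commonly implemented as ordinary PCA applied to a time-lagged (augmented) data matrix, so that autocorrelation is captured by the extra lagged columns. The Python sketch below shows that generic construction on hypothetical oil-analysis measurements; it is not the paper's CBM model.

```python
import numpy as np

def dpca_scores(X, lags=2, n_components=3):
    """PCA on a time-lagged data matrix: row t is [x_t, x_{t-1}, ..., x_{t-lags}]."""
    T, m = X.shape
    aug = np.hstack([X[lags - k: T - k] for k in range(lags + 1)])
    aug = (aug - aug.mean(0)) / aug.std(0)          # standardise each column
    # Principal directions from the SVD of the augmented matrix.
    U, s, Vt = np.linalg.svd(aug, full_matrices=False)
    scores = aug @ Vt[:n_components].T              # dynamic principal components
    explained = (s ** 2)[:n_components] / (s ** 2).sum()
    return scores, explained

# Hypothetical oil-analysis history: three autocorrelated wear indicators.
rng = np.random.default_rng(0)
T = 300
trend = np.cumsum(rng.normal(0, 0.1, size=(T, 1)), axis=0)
X = trend * np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.2, size=(T, 3))
scores, explained = dpca_scores(X, lags=2, n_components=3)
print(scores.shape, "variance explained:", explained.round(3))
```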

14.
ON MAXIMA OF DUAL FUNCTION OF THE CDT SUBPROBLEM (cited by 3: 0 self-citations, 3 by others)
1. Introduction. Consider the following CDT problem P:
\[
\min_{d\in\mathbb{R}^n}\; g^{\top} d + \tfrac{1}{2}\, d^{\top} B d \tag{1.1}
\]
\[
\text{s.t.}\quad \|d\| \le \Delta, \tag{1.2}
\]
\[
\phantom{\text{s.t.}\quad} \|A^{\top} d + c\| \le \xi, \tag{1.3}
\]
where \(g\in\mathbb{R}^n\), \(B\in\mathbb{R}^{n\times n}\), \(A\in\mathbb{R}^{n\times m}\), \(c\in\mathbb{R}^m\), \(\Delta>0\), \(\xi\ge 0\), \(B\) is a symmetric matrix not necessarily positive semidefinite, and throughout this paper the norm \(\|\cdot\|\) denotes the Euclidean norm. For the convenience of the following discussion, let F be the feasible region of the CDT subproblem. Problem (1.1)-(1.3) arises in some trust region algorithms for equality constrained optimization, aiming to overcome the inconsistency between the…
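The dual function whose maxima the title refers to is, in the usual Lagrangian-duality sense (a standard definition, not quoted from the paper):

\[
\theta(\lambda,\mu) \;=\; \min_{d\in\mathbb{R}^n} \Bigl\{\, g^{\top} d + \tfrac{1}{2}\, d^{\top} B d
 + \tfrac{\lambda}{2}\bigl(\|d\|^{2} - \Delta^{2}\bigr)
 + \tfrac{\mu}{2}\bigl(\|A^{\top} d + c\|^{2} - \xi^{2}\bigr) \Bigr\},
 \qquad \lambda,\mu \ge 0,
\]

and the dual problem is \(\max_{\lambda,\mu\ge 0}\theta(\lambda,\mu)\).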

15.
As an extension of the Pawlak rough set model, the decision-theoretic rough set model (DTRS) adopts Bayesian decision theory to compute the required thresholds in probabilistic rough set models. It gives a new semantic interpretation of the positive, boundary and negative regions by using three-way decisions. DTRS has been widely discussed and applied in data mining and decision making. However, one limitation of DTRS is its inability to deal with numerical data directly. In order to overcome this disadvantage and extend the theory of DTRS, this paper proposes a neighborhood-based decision-theoretic rough set model (NDTRS) within the framework of DTRS. Basic concepts of NDTRS are introduced. A positive-region-related attribute reduct and a minimum-cost attribute reduct in the proposed model are defined and analyzed. Experimental results show that our methods can obtain a short reduct. Furthermore, a new neighborhood classifier based on three-way decisions is constructed and compared with other classifiers. Comparison experiments show that the proposed classifier achieves high accuracy and a low misclassification cost.
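For orientation, the standard DTRS three-way decision rules referred to above assign an object x with equivalence (or neighborhood) class [x] to the three regions using two thresholds derived from Bayesian loss functions:

\[
x \in \mathrm{POS}(X) \;\text{ if }\; \Pr(X \mid [x]) \ge \alpha, \qquad
x \in \mathrm{NEG}(X) \;\text{ if }\; \Pr(X \mid [x]) \le \beta, \qquad
x \in \mathrm{BND}(X) \;\text{ otherwise},
\]

with \(0 \le \beta < \alpha \le 1\) computed from the misclassification losses via Bayesian decision theory.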

16.
There is a burgeoning literature on mortality models for joint lives. In this paper, we propose a new model in which time-changed Brownian motion with dependent subordinators describes the mortality of joint lives. We then employ this model to estimate the mortality rates of joint lives in a well-known Canadian insurance data set. Specifically, we first model an individual's death time as the stopping time at which the hazard rate process first reaches or exceeds an exponential random variable, and then introduce dependence through dependent subordinators. Compared with existing mortality models, this model better captures the correlation of death between joint lives and allows more flexibility in the evolution of the hazard rate process. Empirical results show that this model yields highly accurate mortality estimates compared to the baseline non-parametric (Dabrowska) estimator.
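Schematically, the construction described in the abstract can be written as follows for lives i = 1, 2 (the notation, in particular the mapping \(f_i\), is illustrative and may differ from the paper's specification):

\[
\tau_i \;=\; \inf\bigl\{\, t \ge 0 : h^{(i)}_t \ge E_i \,\bigr\},
\qquad
h^{(i)}_t \;=\; f_i\bigl(W_{S^{(i)}_t}\bigr),
\]

where \(E_i\) is an exponential random variable, \(h^{(i)}\) is the hazard rate process driven by a Brownian motion \(W\) time-changed by a subordinator \(S^{(i)}\), and the dependence between the two lives enters through the dependent subordinators \(S^{(1)}, S^{(2)}\).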

17.
The combination of mathematical models and uncertainty measures can be applied in data mining for diverse objectives, with the final aim of supporting decision making. The maximum entropy function is an excellent measure of uncertainty when the information is represented by a mathematical model based on imprecise probabilities. In this paper, we present algorithms to obtain the maximum entropy value when the available information is represented by a new model based on imprecise probabilities: the nonparametric predictive inference model for multinomial data (NPI-M), which yields a type of entropy-linear program. To reduce the complexity of the model, we prove that the NPI-M lower and upper probabilities for any general event can be expressed as a combination of the lower and upper probabilities for the singleton events, and that this model cannot be associated with a closed polyhedral set of probabilities. An algorithm to obtain the maximum entropy probability distribution on the set associated with NPI-M is presented. We also consider a model that uses the closed and convex set of probability distributions generated by the NPI-M singleton probabilities, a closed polyhedral set; we call this model A-NPI-M. A-NPI-M can be seen as an approximation of NPI-M, and this approximation is simpler to use because it is not necessary to consider the set of constraints associated with the exact model.
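The polyhedral (A-NPI-M-style) case reduces to maximising entropy over probability vectors whose singleton probabilities lie in given intervals; a minimal Python sketch of that convex program is below. The interval bounds are hypothetical placeholders, not NPI-M values computed from data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical lower/upper bounds on the singleton probabilities of 4 categories
# (in A-NPI-M these would be derived from the observed multinomial counts).
lower = np.array([0.10, 0.05, 0.20, 0.15])
upper = np.array([0.40, 0.35, 0.55, 0.50])

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))            # minimise negative entropy

res = minimize(
    neg_entropy,
    x0=(lower + upper) / 2 / np.sum((lower + upper) / 2),   # feasible starting point
    bounds=list(zip(lower, upper)),                          # singleton intervals
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    method="SLSQP",
)
print("max-entropy distribution:", res.x.round(4), "entropy:", -res.fun)
```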

18.
In typical robust portfolio selection problems, one mainly finds portfolios with the worst-case return under a given uncertainty set in which asset returns can be realized. Too large an uncertainty set leads to an overly conservative robust portfolio. However, if the given uncertainty set is not large enough, the realized returns of the resulting portfolios will fall outside the uncertainty set when an extreme event such as a market crash or a large shock to asset returns occurs. The goal of this paper is to propose robust portfolio selection models under a so-called "marginal + joint" ellipsoidal uncertainty set and to test the performance of the proposed models. A robust portfolio selection model under a "marginal + joint" ellipsoidal uncertainty set is proposed first. The model has the advantages of models under the separable uncertainty set and the joint ellipsoidal uncertainty set, and relaxes the requirements on the uncertainty set. Then, a further robust portfolio selection model with option protection is presented by incorporating options into the proposed robust portfolio selection model. Convex programming approximations with second-order cone and linear matrix inequality constraints are derived for both models. The proposed robust portfolio selection model with options can hedge risks and generates robust portfolios with a good wealth growth rate when an extreme event occurs. Tests on real data from the Chinese stock market and simulated options confirm the properties of both models. Test results show that (1) under the "marginal + joint" uncertainty set, the wealth growth rate and diversification of the robust portfolios generated by the first proposed robust portfolio model (without options) are better than those generated by Goldfarb and Iyengar's model, and (2) the robust portfolio selection model with options outperforms the robust portfolio selection model without options when an extreme event occurs.
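As background for why ellipsoidal uncertainty sets lead to tractable problems, the worst-case expected return over a single ellipsoid has a closed form that turns the robust problem into a second-order cone program; the expression below is the textbook single-ellipsoid case, not the paper's "marginal + joint" set, which combines several such constraints:

\[
\min_{\mu \in U} \mu^{\top} w
= \hat{\mu}^{\top} w - \kappa \,\bigl\| \Sigma^{1/2} w \bigr\|_2,
\qquad
U = \bigl\{ \mu : (\mu - \hat{\mu})^{\top} \Sigma^{-1} (\mu - \hat{\mu}) \le \kappa^{2} \bigr\},
\]

where \(w\) is the portfolio weight vector, \(\hat{\mu}\) the nominal mean return vector and \(\Sigma\) a positive definite shape matrix.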

19.
For the process of building settlement, a support vector machine (SVM) model is adopted to predict building settlement. Settlement observations from the early construction stage are used as the training sample set to build an on-site dynamic settlement forecasting model. Simulation experiments and practical results show that, compared with a BP neural network prediction model, the model reflects the actual settlement process more accurately and meets the requirements of precision and applicability.
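A minimal Python sketch of this kind of settlement forecasting (support vector regression trained on early observations and used to extrapolate later settlement) is given below; the synthetic settlement curve and the SVR hyperparameters are hypothetical, not the paper's field data or model settings.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical cumulative settlement curve (mm) observed every 5 construction days.
days = np.arange(0, 200, 5.0)
settlement = (30.0 * (1.0 - np.exp(-days / 60.0))
              + np.random.default_rng(0).normal(0, 0.3, days.size))

# Train on the first 60% of observations (early construction stage).
split = int(0.6 * len(days))
model = SVR(kernel="rbf", C=100.0, gamma=0.001, epsilon=0.1)
model.fit(days[:split].reshape(-1, 1), settlement[:split])

# Forecast the remaining settlement and report the error against the later observations.
pred = model.predict(days[split:].reshape(-1, 1))
rmse = np.sqrt(np.mean((pred - settlement[split:]) ** 2))
print(f"forecast RMSE on held-out period: {rmse:.2f} mm")
```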

20.
Discretionary models of data envelopment analysis (DEA) assume that all inputs and outputs can be varied at the discretion of management or other users. In any realistic situation, however, there may exist "exogenously fixed" or non-discretionary factors that are beyond the control of a DMU's management and also need to be considered. This paper discusses and reviews the use of the super-efficiency approach in DEA sensitivity analyses when some inputs are exogenously fixed. A super-efficiency DEA model is obtained when the decision making unit (DMU) under evaluation is excluded from the reference set. In this paper, by means of a modified Banker and Morey (BM) model [R.D. Banker, R. Morey, Efficiency analysis for exogenously fixed inputs and outputs, Operations Research 34 (1986) 513-521], in which the test DMU is excluded from the reference set, we are able to determine what perturbations of the discretionary data can be tolerated before frontier DMUs become non-frontier.
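For concreteness, an input-oriented envelopment model in the Banker-Morey spirit contracts only the discretionary inputs, and its super-efficiency variant simply drops the evaluated DMU (indexed 0) from the reference set. This is a generic statement of that idea; the modified model in the paper may add further restrictions:

\[
\min_{\theta,\lambda}\;\theta
\quad\text{s.t.}\quad
\sum_{j \ne 0} \lambda_j x_{ij} \le \theta\, x_{i0}\;\; (i \in D),\qquad
\sum_{j \ne 0} \lambda_j x_{ij} \le x_{i0}\;\; (i \in ND),\qquad
\sum_{j \ne 0} \lambda_j y_{rj} \ge y_{r0}\;\;(\forall r),\qquad
\lambda_j \ge 0,
\]

where \(D\) and \(ND\) index the discretionary and non-discretionary inputs, respectively; a convexity constraint \(\sum_{j\ne 0}\lambda_j = 1\) is added in the variable-returns-to-scale case.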
