20 similar documents found; search took 15 ms.
1.
Real estate price prediction under asymmetric loss (Total citations: 3; self-citations: 0; citations by others: 3)
Michael Cain Christian Janssen 《Annals of the Institute of Statistical Mathematics》1995,47(3):401-414
This paper deals with the problem of how to adjust a predictive mean in a practical prediction situation where there is asymmetry in the loss function. A standard linear model is considered for predicting the price of real estate, using a normal-gamma conjugate prior for the parameters. The prior of a subject real estate agent is elicited but, for comparison, a diffuse prior is also considered. Three loss functions are used: asymmetric linear, asymmetric quadratic and LINEX, and the parameters of each of these postulated forms are elicited. Theoretical developments for prediction under each loss function in the presence of normal errors are presented, and useful tables of adjustment-factor values are given. Predictions of the dependent price variable for two properties with differing characteristics are made under each loss function and the results compared.
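For reference, the LINEX loss mentioned above has a standard closed-form optimal predictor under normal errors (a classical result due to Zellner; the paper's elicited parameter values are not reproduced here):

```latex
% LINEX loss with asymmetry parameter a \neq 0 and scale b > 0,
% where \Delta = \hat{y} - y is the prediction error.
% If the predictive distribution of y is N(\mu, \sigma^2), the optimal
% point prediction adjusts the mean by half the variance times a:
\[
  L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad
  \hat{y}^{\ast} = \mu - \frac{a\sigma^{2}}{2}.
\]
```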
2.
M. López J.M. Matías J.A. Vilán 《Journal of Computational and Applied Mathematics》2010,234(4):1338-1345
Automated classification of granite slabs is a key aspect of process automation in the granite transformation sector. This classification task is currently performed manually, based on an expert's subjective judgement of texture and colour. We describe a classification method based on machine learning techniques fed with spectral information for the rock, supplied in the form of discrete values captured by a suitably parameterized spectrophotometer. The machine learning techniques applied in our research take a functional perspective, with the spectral function smoothed in accordance with the data supplied by the spectrophotometer. On the basis of the results obtained, we conclude that the proposed method is suitable for automatically classifying ornamental rock.
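As an illustration only, the general pattern described (smooth discrete spectrophotometer readings into a functional form, then classify) might look as follows; the spline smoothing, SVM classifier, wavelength grid and all data here are assumptions, not the paper's actual pipeline:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.svm import SVC

def smooth_spectrum(wavelengths, reflectance, n_eval=50, s=0.01):
    # Smooth the discrete spectrophotometer readings into a functional
    # representation, then resample it on a common wavelength grid.
    spline = UnivariateSpline(wavelengths, reflectance, s=s)
    grid = np.linspace(wavelengths.min(), wavelengths.max(), n_eval)
    return spline(grid)

# Hypothetical data: one spectrum per slab, one class label per slab.
rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                      # nm, visible range
X = np.array([smooth_spectrum(wl, rng.random(31)) for _ in range(100)])
y = rng.integers(0, 3, size=100)                    # three granite classes

clf = SVC(kernel="rbf").fit(X[:80], y[:80])
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```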
3.
《Communications in Nonlinear Science & Numerical Simulation》2014,19(9):3171-3183
In this paper, a new method for nonlinear system identification via an extreme learning machine neural network based Hammerstein model (ELM-Hammerstein) is proposed. The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The identification of the nonlinear system is achieved by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the structure of the ELM-Hammerstein model from input–output data. A generalized ELM algorithm is proposed to estimate the parameters of the ELM-Hammerstein model, in which the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
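For orientation, a Hammerstein model composes a static nonlinearity with linear dynamics; a generic discrete-time form is shown below (the paper's specific ELM parameterization of f is not reproduced):

```latex
% Hammerstein structure: static nonlinearity f (here an ELM network)
% feeding a linear dynamic block with orders n_a and n_b:
\[
  y(t) = \sum_{i=1}^{n_a} a_i\, y(t-i)
       + \sum_{j=1}^{n_b} b_j\, f\!\left(u(t-j)\right) + e(t).
\]
```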
4.
Hoi-Ming Chi Okan K. Ersoy Herbert Moskowitz Jim Ward 《European Journal of Operational Research》2007
Using a supply chain network, we demonstrate the feasibility, viability, and robustness of applying machine learning and genetic algorithms to respectively model, understand, and optimize such data-intensive environments. Deploying these algorithms, which learn from and optimize data, can obviate the need for more complex, expensive, and time-consuming design of experiments (DOE), which usually disrupts system operations. We apply the proposed machine learning algorithms to a simulated vendor-managed replenishment system developed for an actual firm, and compare their behaviour and performance with those obtained via DOE. The results show that the models produced by the proposed algorithms had strong explanatory and predictive power, comparable to that of DOE. The optimal system settings and profit were also similar to those obtained from DOE. The virtues of using machine learning and evolutionary algorithms to model and optimize data-rich environments thus seem promising: they are automatic, involving little human intervention and expertise. We are also exploring how they can be made adaptive, so as to improve parameter estimates as data accumulate and to detect system (and therefore model) changes seamlessly, recursively updating and reoptimizing a modified or new model.
5.
Support vector machine learning algorithm and transduction (Total citations: 1; self-citations: 0; citations by others: 1)
A. Gammermann 《Computational Statistics》2000,15(1):31-39
6.
Miloš Kopa Sebastiano Vitali Tomáš Tichý Radek Hendrych 《Computational Management Science》2017,14(4):559-583
This paper deals with implied volatility (IV) estimation using no-arbitrage techniques. The current market practice is to obtain the IV of liquid options from Black–Scholes-type (hereafter BS) models. Such volatility is subsequently used to price illiquid or even exotic options. The BS model can therefore be related simultaneously to the whole set of IVs given by the maturity/moneyness relation of tradable options, yielding an IV curve or surface (the so-called smile or smirk). Since the moneyness and maturity of the IV often do not match those of the options being valued, some form of estimation and local smoothing is necessary. However, this can create arbitrage opportunities if the no-arbitrage conditions on the state price density (SPD) are ignored. In this paper, using option data on the DAX index, we analyse the behaviour of the IV and SPD with respect to different choices of the bandwidth parameter h, time to maturity and kernel function. A set of bandwidths which violates the no-arbitrage conditions is identified. We document that changing h leads to interesting changes in the moneyness interval over which the conditions are violated. We also repeat the analysis after removing outliers, to show that the violation of the no-arbitrage conditions is not caused by outliers alone. Moreover, we propose a new measure of arbitrage which can be applied either to the SPD curve (arbitrage area measure) or to the SPD surface (arbitrage volume measure). We highlight the impact of h on the proposed measures using the options on a German stock index. Finally, we propose an extension of the IV and SPD estimation to options on a dividend-paying stock.
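The no-arbitrage condition on the SPD referred to here rests on the classical Breeden–Litzenberger relation between the SPD and the call price surface (standard background, not a result of the paper):

```latex
% State price density recovered from the call price surface C(K, \tau)
% at interest rate r; no-arbitrage requires q(K) \ge 0 at every strike K:
\[
  q(K) = e^{r\tau}\, \frac{\partial^{2} C(K,\tau)}{\partial K^{2}} \;\ge\; 0.
\]
```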
7.
8.
The purpose of this article is to review the similarities and differences between financial risk minimization and a class of machine learning methods known as support vector machines, which were developed independently. By recognizing their common features, we can understand them in a unified mathematical framework. On the other hand, by recognizing their differences, we can develop new methods. In particular, employing coherent measures of risk, we develop a generalized criterion for two-class classification. It includes existing criteria, such as margin maximization and the ν-SVM, as special cases. The extension can also be applied to other types of machine learning methods, such as multi-class classification, regression and outlier detection. Although the new criterion is first formulated as a nonconvex optimization, it results in a convex optimization when the nonnegative ℓ1-regularization is employed. Numerical examples demonstrate how the developed methods work for bond rating.
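As background, the ν-SVM criterion cited above can be written in the standard primal form below; its link to coherent risk measures (the ν-SVM objective can be read as minimizing a conditional value-at-risk of the negative margins) is stated here as a hedged paraphrase, not the paper's exact formulation:

```latex
% \nu-SVM primal (Sch\"olkopf et al.); \nu \in (0,1] bounds the
% fraction of margin errors and of support vectors:
\[
  \min_{w,\,b,\,\xi,\,\rho}\;
    \tfrac{1}{2}\lVert w \rVert^{2} - \nu\rho
    + \tfrac{1}{n}\sum_{i=1}^{n}\xi_{i}
  \quad \text{s.t.} \quad
    y_{i}\left(\langle w, x_{i}\rangle + b\right) \ge \rho - \xi_{i},\;
    \xi_{i} \ge 0,\; \rho \ge 0.
\]
```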
9.
Nonlinear dynamical systems, which include models of the Earth’s climate, financial markets and complex ecosystems, often undergo abrupt transitions that lead to radically different behavior. The ability to predict such qualitative and potentially disruptive changes is an important problem with far-reaching implications. Even with robust mathematical models, predicting such critical transitions prior to their occurrence is extremely difficult. In this work, we propose a machine learning method to study the parameter space of a complex system, where the dynamics is coarsely characterized using topological invariants. We show that by using a nearest neighbor algorithm to sample the parameter space in a specific manner, we are able to predict with high accuracy the locations of critical transitions in parameter space.
10.
Richard H. Byrd Gillian M. Chin Jorge Nocedal Yuchen Wu 《Mathematical Programming》2012,134(1):127-155
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and we establish a complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The third part of the paper shifts focus to ℓ1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
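A minimal sketch of the kind of variance-based sample-size test described (the exact statistic, norm and threshold rule used in the paper are assumptions here, and `theta` is a hypothetical tuning parameter):

```python
import numpy as np

def sample_size_sufficient(per_example_grads, theta=0.5):
    # per_example_grads: shape (batch_size, n_params); row i holds the
    # gradient of the i-th sampled example at the current iterate.
    g = per_example_grads.mean(axis=0)            # batch gradient estimate
    var = per_example_grads.var(axis=0, ddof=1)   # per-coordinate variance
    batch_size = per_example_grads.shape[0]
    # Accept the current batch if the estimated variance of the batch
    # gradient is small relative to its squared norm; otherwise the
    # caller should enlarge the sample.
    return var.sum() / batch_size <= theta**2 * float(g @ g)

# Sketch of use inside a training loop:
# if not sample_size_sufficient(grads, theta=0.5):
#     batch_size = int(1.5 * batch_size)   # hypothetical growth factor
```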
11.
The interbank offered rate is the only direct market rate in China’s currency market. Forecasting the volatility of the China Interbank Offered Rate (IBOR) therefore has important theoretical and practical significance for financial asset pricing and financial risk measurement and management. However, IBOR is a dynamic, non-stationary time series whose evolution exhibits strong random fluctuations, which makes its volatility difficult to forecast. This paper offers a hybrid algorithm combining a grey model and an extreme learning machine (ELM) to forecast IBOR volatility. The proposed algorithm consists of three phases. First, the grey model preprocesses the original IBOR time series via the accumulated generating operation (AGO), which weakens the stochastic volatility of the original series. Then, a forecasting model is built by applying ELM to the new series. Finally, the predicted values of the original IBOR series are recovered by the inverse accumulated generating operation (IAGO). The new model is applied to forecasting the Interbank Offered Rate of China. Compared with the forecasts of a BP neural network and the classical ELM, the new model is more effective at forecasting short- and medium-term IBOR volatility.
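A compact sketch of the three-phase pipeline (AGO smoothing, ELM fitting, IAGO recovery), assuming a basic single-hidden-layer ELM and a simple lag embedding; the paper's exact grey model and network configuration are not reproduced:

```python
import numpy as np

def ago(x):
    # Phase 1 -- accumulated generating operation: cumulative sums
    # weaken the stochastic volatility of the raw series.
    return np.cumsum(x)

def iago(x1):
    # Inverse AGO: first differences exactly invert ago().
    return np.diff(x1, prepend=0.0)

def embed(series, lags=4):
    # Lag-embed the AGO series for one-step-ahead forecasting (assumed setup).
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

def elm_fit(X, y, n_hidden=30, seed=0):
    # Phase 2 -- basic ELM: random hidden layer, output weights solved in
    # closed form by least squares (no iterative training).
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Pipeline on a toy series (a synthetic stand-in for IBOR observations):
x0 = np.abs(np.random.default_rng(1).normal(3.0, 0.5, size=200))
x1 = ago(x0)
X, y = embed(x1, lags=4)
W, b, beta = elm_fit(X, y)
x1_hat = elm_predict(X, W, b, beta)   # fitted values on the AGO scale
x0_hat = x1_hat - x1[3:-1]            # Phase 3 -- IAGO step: difference out
                                      # the previous cumulative value
```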
12.
Martin Anthony 《Discrete Applied Mathematics》2008,156(6):883-902
This paper discusses the applications of certain combinatorial and probabilistic techniques to the analysis of machine learning. Probabilistic models of learning initially addressed binary classification (or pattern classification). Subsequently, analysis was extended to regression problems, and to classification problems in which the classification is achieved by using real-valued functions (where the concept of a large margin has proven useful). Another development, important in obtaining more applicable models, has been the derivation of data-dependent bounds. Here, we discuss some of the key probabilistic and combinatorial techniques and results, focusing on those of most relevance to researchers in discrete applied mathematics.
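A representative result of the kind surveyed is the classical VC generalization bound for binary classification; one standard form is shown below (constants vary across presentations, and this is background rather than a theorem quoted from the paper):

```latex
% With probability at least 1 - \delta over a sample of size n, for every
% hypothesis h in a class H of VC dimension d:
\[
  \mathrm{er}(h) \;\le\; \widehat{\mathrm{er}}(h)
  + \sqrt{\frac{d\left(\ln\tfrac{2n}{d} + 1\right) + \ln\tfrac{4}{\delta}}{n}},
\]
% where er(h) is the true error and \widehat{er}(h) the sample error.
```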
13.
Şenyiğit Ercan Atıcı Uğur Şenol Mehmet Burak 《Central European Journal of Operations Research》2022,30(3):941-959
Central European Journal of Operations Research - In this paper, the effects of Occupational Repetitive Actions (OCRA) parameters, learning rate on process times, and machine scheduling were...
14.
H M Soroush 《The Journal of the Operational Research Society》2014,65(7):1017-1036
Scheduling with setup times and learning plays a crucial role in today's manufacturing and service environments, where scheduling decisions are made with respect to multiple performance criteria rather than a single criterion. In this paper, we address a bicriteria single-machine scheduling problem with job-dependent past-sequence-dependent setup times and job-dependent position-based learning effects. The setup time and actual processing time of a job are, respectively, unique functions of the actual processing times of the already processed jobs and of the position of the job in a schedule. The objective is to derive the schedule that minimizes a linear composite function of a pair of performance criteria drawn from the makespan, the total completion time, the total lateness, the total absolute differences in completion times, and the sum of earliness, tardiness, and common due date penalties. We show that the resulting problems cannot be solved in polynomial time; thus, branch-and-bound (B&B) methods are proposed to obtain the optimal schedules. Our computational results demonstrate that the B&B methods can solve instances of various sizes in attractive computation times.
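A typical formalization of the two effects named here, stated for illustration (the paper's exact job-dependent functions are not reproduced):

```latex
% Job j scheduled in position r: position-based learning with
% job-dependent exponent a_j < 0, and past-sequence-dependent setup
% time with job-dependent coefficient b_j >= 0:
\[
  p^{A}_{j,r} = p_{j}\, r^{a_{j}}, \qquad
  s_{[r]} = b_{j} \sum_{i=1}^{r-1} p^{A}_{[i]}, \quad s_{[1]} = 0,
\]
% where p^{A}_{[i]} is the actual processing time of the job in position i.
```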
15.
El-Ghazali Talbi 《Annals of Operations Research》2016,239(1):171-188
We consider the problem of creating fair course timetables in the setting of a university. The central idea is that undesirable arrangements in the course timetable, i.e., violations of soft constraints, should be distributed in a fair way among the stakeholders. We propose and discuss in detail two fair versions of the popular curriculum-based course timetabling (CB-CTT) problem, the MMF-CB-CTT problem and the JFI-CB-CTT problem, which are based on max–min fairness (MMF) and Jain’s fairness index (JFI), respectively. For solving the MMF-CB-CTT problem, we present and experimentally evaluate an optimization algorithm based on simulated annealing. We introduce three different energy difference measures and evaluate their impact on the overall algorithm performance. The proposed algorithm improves the fairness on 20 out of 32 standard instances compared to the known best timetables. The JFI-CB-CTT problem formulation focuses on the trade-off between fairness and the aggregated soft constraint violations. Here, our experimental evaluation shows that the known best solutions to 32 CB-CTT standard instances are quite fair with respect to JFI. Our experiments show that the fairness can often be improved at the cost of only a small increase in the overall amount of penalty.
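Jain's fairness index used in the JFI-CB-CTT formulation has the standard closed form below, with x_i the per-stakeholder quantity being compared (background, not specific to the paper):

```latex
% Jain's fairness index over allocations x_1, ..., x_n: it ranges from
% 1/n (one stakeholder takes everything) to 1 (perfect equality).
\[
  J(x_{1},\dots,x_{n}) =
  \frac{\left(\sum_{i=1}^{n} x_{i}\right)^{2}}{n \sum_{i=1}^{n} x_{i}^{2}}.
\]
```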
16.
In this paper we consider single-machine scheduling problems with sum-of-logarithm-processing-times-based and position-based learning effects; that is, the actual processing time of a job is a function of the sum of the logarithms of the processing times of the jobs already processed and of its position in the sequence. The logarithm function is used to model the phenomenon that learning, as a human activity, is subject to the law of diminishing returns. We show that even with the introduction of the proposed model of job processing times, several single-machine problems remain polynomially solvable.
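One form commonly studied in this strand of the literature, shown as an illustration (the paper's exact model and conditions on the exponents are not reproduced):

```latex
% Actual processing time of job j in position r, with learning
% exponents a_1, a_2 <= 0 and normal processing times p_{[i]} >= 1:
\[
  p^{A}_{j,r} = p_{j}\left(1 + \sum_{i=1}^{r-1} \ln p_{[i]}\right)^{a_{1}}
                r^{\,a_{2}},
\]
% where p_{[i]} is the normal processing time of the job in position i.
```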
17.
In this article, we study an unrelated parallel-machine scheduling problem with setup times and learning effects considered simultaneously. The setup time of each job is proportional to the total length of the jobs already processed; that is, the setup times are past-sequence-dependent. The objective is to minimize the total completion time. We show that the proposed problem admits a polynomial-time solution. We also discuss two special cases of the problem and show that they can be solved optimally by lower-order algorithms.
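The past-sequence-dependent setup times described here are usually written as the job-independent variant of the rule illustrated at entry 14 (standard form in this literature, with b ≥ 0 a normalizing constant):

```latex
\[
  s_{[1]} = 0, \qquad
  s_{[r]} = b \sum_{i=1}^{r-1} p^{A}_{[i]}, \quad r = 2, \dots, n,
\]
% where p^{A}_{[i]} is the actual processing time of the job in position i.
```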
18.
El-Ghazali Talbi 《4OR: A Quarterly Journal of Operations Research》2013,11(2):101-150
In recent years, interest in hybrid metaheuristics has risen considerably in the fields of optimization and machine learning. The best results found for many optimization problems in science and industry are obtained by hybrid optimization algorithms. Combinations of optimization tools such as metaheuristics, mathematical programming, constraint programming and machine learning have yielded very efficient optimization algorithms. Four different types of combination are considered in this paper: (i) combining metaheuristics with complementary metaheuristics; (ii) combining metaheuristics with exact methods from mathematical programming, mostly used in the operations research community; (iii) combining metaheuristics with constraint programming approaches developed in the artificial intelligence community; and (iv) combining metaheuristics with machine learning and data mining techniques.
19.
Redesigning and improving business processes to better serve customer needs has become a priority in service industries as they scramble to become more competitive. This paper describes an approach to process improvement that is being developed collaboratively by applied researchers at US WEST, a major telecommunications company, and the University of Colorado. Motivated by the need to streamline and to add more quantitative power to traditional quality improvement processes, the new approach uses an artificial intelligence (AI) statistical tree growing method that uses customer survey data to identify operations areas where improvements are expected to affect customers most. This AI/statistical method also identifies realistic quantitative targets for improvement and suggests specific strategies (recommended combinations of actions) that are predicted to have high impact. This research, funded in part by the Colorado Advanced Software Institute (CASI) in an effort to stimulate profitable innovations, has resulted in a practical methodology that has been used successfully at US WEST to help set process improvement priorities and to guide resource allocation decisions throughout the company.
20.
J-B Wang 《The Journal of the Operational Research Society》2009,60(4):583-586
The paper deals with single-machine scheduling problems with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the processing time of a job is defined by a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. It is shown that even with the introduction of a time-dependent learning effect and deteriorating jobs into the job processing times, the single-machine makespan minimization problem remains polynomially solvable. For the total completion time minimization problem, however, the classical shortest-processing-time-first and largest-processing-time-first rules cannot give an optimal solution.