Similar Documents
20 similar documents found (search time: 15 ms)
1.
Real estate price prediction under asymmetric loss (cited 3 times: 0 self-citations, 3 by others)
This paper deals with the problem of adjusting a predictive mean in a practical prediction setting where the loss function is asymmetric. A standard linear model is considered for predicting the price of real estate, using a normal-gamma conjugate prior for the parameters. The prior of a practising real estate agent is elicited but, for comparison, a diffuse prior is also considered. Three loss functions are used: asymmetric linear, asymmetric quadratic and LINEX, and the parameters of each of these postulated forms are elicited. Theoretical developments for prediction under each loss function in the presence of normal errors are presented, and useful tables of adjustment-factor values are given. Predictions of the dependent price variable for two properties with differing characteristics are made under each loss function and the results compared.
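Under LINEX loss the adjustment to the predictive mean has a simple closed form when errors are normal: the optimal point prediction is the predictive mean shifted by half the asymmetry parameter times the predictive variance. A minimal sketch (the formula is the standard LINEX result; the numbers are illustrative, not from the paper):

```python
def linex_predictor(mu, sigma2, a):
    """Optimal point prediction under LINEX loss b*(exp(a*d) - a*d - 1),
    where d = prediction - actual, for a normal predictive distribution
    N(mu, sigma2). With a > 0, over-prediction is penalised more heavily,
    so the adjusted prediction sits below the predictive mean."""
    return mu - a * sigma2 / 2.0

# Predictive mean 100 (price in thousands, say), predictive variance 25:
print(linex_predictor(100.0, 25.0, 0.1))  # shifted down by 0.1*25/2 = 1.25
```

With a symmetric loss (a → 0) the shift vanishes and the predictor reverts to the predictive mean, which is why the adjustment-factor tables in the paper depend on the elicited asymmetry parameters.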

2.
Automated classification of granite slabs is a key step in automating processes in the granite transformation sector. This classification is currently performed manually, based on an expert's subjective judgement of texture and colour. We describe a classification method based on machine learning techniques fed with spectral information on the rock, supplied as discrete values captured by a suitably parameterized spectrophotometer. The machine learning techniques applied in our research take a functional perspective, with the spectral function smoothed in accordance with the data supplied by the spectrophotometer. On the basis of the results obtained, we conclude that the proposed method is suitable for automatically classifying ornamental rock.

3.
In this paper, a new method for nonlinear system identification is proposed, based on an extreme learning machine neural network Hammerstein model (ELM-Hammerstein). The ELM-Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. The nonlinear system is identified by determining the structure of the ELM-Hammerstein model and estimating its parameters. The Lipschitz quotient criterion is adopted to determine the model structure from input-output data. A generalized ELM algorithm is proposed to estimate the model parameters, in which the parameters of the linear dynamic part and the output weights of the ELM neural network are estimated simultaneously. The proposed method obtains more accurate identification results with lower computational complexity. Three simulation examples demonstrate its effectiveness.
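The static ELM part is simple enough to sketch: hidden-layer weights are drawn at random and left fixed, so training reduces to a linear least-squares solve for the output weights. A minimal illustration of that static block only (the linear dynamic subsystem and the paper's generalized estimation algorithm are not reproduced; the fitted function is arbitrary):

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=0):
    """Train a basic single-hidden-layer ELM: input weights and biases are
    random and fixed; only the output weights are solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noiseless static nonlinearity y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = X.ravel() ** 2
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print(err < 0.1)
```

Because only `beta` is trained, fitting is a single linear solve, which is the source of the low computational cost the abstract refers to.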

4.
Damage detection methods for structural components have been extensively evaluated in theoretical and experimental studies in recent years. In this context, machine learning algorithms are used to evaluate the health state of structures. This work assesses how the choice of excitation frequency affects damage-detection performance in guided-wave-based structural health monitoring (SHM) systems, an aspect that has barely been investigated, particularly in SHM technologies using machine learning. Machine learning can be applied directly in SHM, including environmental effects (noise, imperfection, statistical tests, etc.), to train a new system and to solve the inverse problem. The piezoelectric effect is used to excite guided waves in the structure and to measure the vibration response of flexible structures. The key outcome of this study is improved efficiency and performance of SHM systems, achieved by optimising the excitation frequency using machine learning. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

5.
Using a supply chain network, we demonstrate the feasibility, viability, and robustness of applying machine learning and genetic algorithms to model, understand, and optimize such data-intensive environments. Deploying these algorithms, which learn from and optimize data, can obviate the need for more complex, expensive, and time-consuming design of experiments (DOE), which usually disrupts system operations. We apply the proposed machine learning algorithms in a simulated vendor-managed replenishment system developed for an actual firm, and compare their behavior and performance with results obtained via DOE. The models resulting from the proposed algorithms had strong explanatory and predictive power, comparable to that of DOE. The optimal system settings and profit were also similar to those obtained from DOE. The virtues of using machine learning and evolutionary algorithms to model and optimize data-rich environments thus seem promising: they are automatic, involving little human intervention and expertise. We are exploring how they can be made adaptive, improving parameter estimates as data accumulate, and how they can seamlessly detect system (and therefore model) changes, recursively updating and reoptimizing a modified or new model.

6.
Support vector machine learning algorithm and transduction (cited 1 time: 0 self-citations, 1 by others)

7.
8.
This paper deals with implied volatility (IV) estimation using no-arbitrage techniques. Current market practice is to obtain the IV of liquid options from Black-Scholes-type (hereafter BS) models; this volatility is subsequently used to price illiquid or even exotic options. The BS model can therefore be related simultaneously to the whole set of IVs given by the maturity/moneyness relation of tradable options, yielding an IV curve or surface (the so-called smile or smirk). Since the moneyness and maturity of quoted IVs often do not match those of the options being valued, some estimation and local smoothing is necessary. However, this can create arbitrage opportunities if no-arbitrage conditions on the state price density (SPD) are ignored. In this paper, using option data on the DAX index, we analyse the behavior of IV and SPD with respect to different choices of the bandwidth parameter h, time to maturity and kernel function. A set of bandwidths which violates the no-arbitrage conditions is identified. We document that changing h leads to interesting changes in the moneyness interval over which the conditions are violated. We repeat the analysis after removing outliers, showing that outliers are not the only cause of the violations. Moreover, we propose a new measure of arbitrage, defined either for the SPD curve (arbitrage area measure) or for the SPD surface (arbitrage volume measure), and highlight the impact of h on these measures for options on a German stock index. Finally, we extend the IV and SPD estimation to the case of options on a dividend-paying stock.
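The local smoothing step can be illustrated with a Nadaraya-Watson estimator, a standard kernel smoother of the kind whose bandwidth h drives the no-arbitrage behaviour studied in the paper. A sketch on a synthetic smile (the quadratic IV shape and the moneyness grid are illustrative assumptions, not DAX data):

```python
import numpy as np

def nw_smooth(x_grid, x_obs, y_obs, h):
    """Nadaraya-Watson estimator with a Gaussian kernel: a locally weighted
    average of observed IVs, with bandwidth h controlling smoothness."""
    K = np.exp(-0.5 * ((x_grid[:, None] - x_obs[None, :]) / h) ** 2)
    return (K @ y_obs) / K.sum(axis=1)

# Illustrative smile: IV rises away from at-the-money (moneyness = 1).
m = np.linspace(0.8, 1.2, 41)
iv = 0.2 + 0.5 * (m - 1.0) ** 2
grid = np.array([0.9, 1.0, 1.1])
print(nw_smooth(grid, m, iv, h=0.02))
```

A small h tracks the quotes closely but amplifies noise (and, as the paper documents, can violate no-arbitrage conditions over some moneyness interval), while a large h oversmooths the smile.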

9.
Recent empirical studies indicate that improvements in product conformance quality exhibit learning-by-doing patterns. We address quality improvement in a competitive duopoly market for partially substitutable products characterized by levels of quality that are not necessarily identical. The products’ quality is described with a hazard rate that can be improved both by accumulating production experience (autonomous learning) and quality improvement efforts (induced learning). Given that defective items are fully reimbursable and the demands exhibit increasing returns to scale, we derive Nash equilibrium pricing and induced learning effort dynamic policies. We show that when the effectiveness of autonomous learning prevails over the effectiveness of efforts in induced learning, equilibrium prices gradually grow over time; the trend is quite the opposite when autonomous learning is less effective than induced learning.

10.
Nonlinear dynamical systems, which include models of the Earth’s climate, financial markets and complex ecosystems, often undergo abrupt transitions that lead to radically different behavior. The ability to predict such qualitative and potentially disruptive changes is an important problem with far-reaching implications. Even with robust mathematical models, predicting such critical transitions prior to their occurrence is extremely difficult. In this work, we propose a machine learning method to study the parameter space of a complex system, where the dynamics is coarsely characterized using topological invariants. We show that by using a nearest neighbor algorithm to sample the parameter space in a specific manner, we are able to predict with high accuracy the locations of critical transitions in parameter space.

11.
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and establish a complexity bound on the total cost of a gradient method. The second part describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs dynamic sampling. The third part turns to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method consisting of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration to the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
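A variance-based criterion of the kind described can be sketched as a "norm test": enlarge the batch when the estimated standard error of the sample-average gradient is large relative to the gradient norm itself. This is an illustrative version, not the paper's exact criterion; the threshold `theta` and the toy gradient batches are assumptions:

```python
import numpy as np

def need_larger_sample(grads, theta=1.0):
    """grads: (n_samples, dim) array of per-sample gradients.
    Returns True when the estimated standard error of the batch-mean
    gradient exceeds theta times the norm of that mean, i.e. when the
    current sample is too small to trust the gradient direction."""
    g_bar = grads.mean(axis=0)
    var_mean = grads.var(axis=0, ddof=1) / len(grads)  # variance of the mean
    return np.sqrt(var_mean.sum()) > theta * np.linalg.norm(g_bar)

# A noisy batch triggers growth; a tightly clustered batch does not.
spread = np.array([[10.0, 0.0], [-8.0, 2.0], [1.0, -6.0], [1.0, 2.0]])
tight = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.0]])
print(need_larger_sample(spread), need_larger_sample(tight))  # True False
```

The appeal of such a test is that it reuses quantities already computed during the batch gradient evaluation, so growing the sample costs little beyond the gradient computation itself.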

12.
The purpose of this article is to review the similarities and differences between financial risk minimization and a class of machine learning methods known as support vector machines, which were developed independently. By recognizing their common features, we can understand them in a unified mathematical framework; by recognizing their differences, we can develop new methods. In particular, employing coherent measures of risk, we develop a generalized criterion for two-class classification. It includes existing criteria, such as margin maximization and the ν-SVM, as special cases. The extension can also be applied to other types of machine learning methods, such as multi-class classification, regression and outlier detection. Although the new criterion is first formulated as a nonconvex optimization, it becomes a convex optimization when the nonnegative ℓ1-regularization is employed. Numerical examples demonstrate how the developed methods work for bond rating.

13.
The interbank offered rate is the only direct market rate in China's money market. Forecasting the volatility of the China Interbank Offered Rate (IBOR) therefore has great theoretical and practical significance for financial asset pricing and for financial risk measurement and management. However, IBOR is a dynamic, non-stationary time series with strong random fluctuations, which makes its volatility difficult to forecast. This paper offers a hybrid algorithm that combines a grey model and an extreme learning machine (ELM) to forecast IBOR volatility. The proposed algorithm has three phases. First, the grey model processes the original IBOR series via the accumulated generating operation (AGO), weakening the stochastic volatility in the original series. Then a forecasting model is built by applying ELM to the new series. Finally, predictions for the original IBOR series are recovered by the inverse accumulated generating operation (IAGO). The new model is applied to forecasting China's interbank offered rate. Compared with the forecasts of BP neural networks and the classical ELM, the new model is more effective at forecasting short- and medium-term IBOR volatility.
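The AGO/IAGO pair at the heart of the grey-model phase is just a cumulative sum and its inverse (first differencing), which is what smooths the raw series before the ELM sees it. A minimal sketch with an illustrative rate series:

```python
import numpy as np

def ago(x):
    """Accumulated generating operation: cumulative sums smooth the
    stochastic fluctuations of the raw series."""
    return np.cumsum(x)

def iago(z):
    """Inverse AGO: first differences map forecasts of the accumulated
    series back to the original scale."""
    return np.diff(z, prepend=0.0)

x = np.array([2.1, 1.9, 2.4, 2.0])       # illustrative rate observations
z = ago(x)                                # monotone, much smoother series
print(np.allclose(iago(z), x))            # round trip recovers the series
```

In the hybrid scheme the ELM is trained on `z` rather than `x`; its forecasts are then passed through `iago` to obtain predictions on the original scale.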

14.
Computational Management Science - In the application of machine learning to real-life decision-making systems, e.g., credit scoring and criminal justice, the prediction outcomes might discriminate...

15.
Accurate loss reserves are an important item in the financial statement of an insurance company and are mostly evaluated by macro-level models applied to aggregate data in run-off triangles. In recent years, a new stream of literature has considered individual claims data and proposed parametric reserving models based on claim history profiles. In this paper, we present a nonparametric and flexible approach for estimating outstanding liabilities using all the covariates associated with the policy and its policyholder, together with all the information received by the insurance company on the individual claims since their reporting dates. We develop a machine learning-based method and explain how to build specific subsets of data on which the machine learning algorithms are trained and assessed. The choice of a nonparametric model raises new issues, since the target variables (claim occurrence and claim severity) are right-censored most of the time. The performance of our approach is evaluated by comparing the predicted reserve estimates with their true values on simulated data. We compare our individual approach with the most widely used aggregate-data method, chain ladder, with respect to the bias and variance of the estimates. We also provide a short real case study based on a Dutch loan insurance portfolio.

16.
This paper discusses the applications of certain combinatorial and probabilistic techniques to the analysis of machine learning. Probabilistic models of learning initially addressed binary classification (or pattern classification). Subsequently, analysis was extended to regression problems, and to classification problems in which the classification is achieved by using real-valued functions (where the concept of a large margin has proven useful). Another development, important in obtaining more applicable models, has been the derivation of data-dependent bounds. Here, we discuss some of the key probabilistic and combinatorial techniques and results, focusing on those of most relevance to researchers in discrete applied mathematics.

17.
Central European Journal of Operations Research - In this paper, the effects of Occupational Repetitive Actions (OCRA) parameters, learning rate on process times, and machine scheduling were...

18.
In this commognitive study, we take a close look at interactive problem-solving by two dyads of middle-school students, one of which participated in research conducted in Montreal, Canada in 1992, while the other, 25 years later, was part of a classroom investigation in Melbourne, Australia. The present study was inspired by the second author's impression of similarity between the two cases. Our analyses, conducted with the help of special constructs (participation profiles, participation structures and roles-in-activity), brought two types of results. First, striking likeness was identified between the two cases in the characteristics of interactions that could be responsible for the production and utilization of learning opportunities. Role conflict likely experienced by the participants emerged as a factor undermining the effectiveness of learning in peer interaction. Second, the confirmation of the similarity, combined with a theoretically supported analysis of mechanisms of interaction, corroborated the claim about the generalizability of findings in commognitive case studies.

19.
Scheduling with setup times and learning plays a crucial role in today's manufacturing and service environments, where scheduling decisions are made with respect to multiple performance criteria rather than a single criterion. In this paper, we address a bicriteria single-machine scheduling problem with job-dependent past-sequence-dependent setup times and job-dependent position-based learning effects. The setup time and actual processing time of a job are, respectively, functions of the actual processing times of the already-processed jobs and of the job's position in the schedule. The objective is to derive the schedule that minimizes a linear composite function of a pair of performance criteria drawn from the makespan, the total completion time, the total lateness, the total absolute differences in completion times, and the sum of earliness, tardiness, and common due-date penalty. We show that the resulting problems cannot be solved in polynomial time; thus, branch-and-bound (B&B) methods are proposed to obtain optimal schedules. Our computational results demonstrate that the B&B can solve instances of various sizes in reasonable times.
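The job-time model can be sketched as follows: each job's actual processing time shrinks with its position in the sequence (position-based learning), and its setup time grows with the work already processed (past-sequence-dependent setup). The learning exponent, setup coefficient, and the simple proportional forms below are illustrative assumptions, not the paper's exact functions:

```python
def schedule_stats(proc_times, order, a=-0.2, setup_frac=0.05):
    """Illustrative single-machine model: at position r (1-based) job j takes
    proc_times[j] * r**a time units (a < 0 gives a learning effect), after a
    setup proportional to the actual processing time already accumulated.
    Returns (makespan, total completion time)."""
    t, total, processed = 0.0, 0.0, 0.0
    for r, j in enumerate(order, start=1):
        setup = setup_frac * processed        # past-sequence-dependent setup
        actual = proc_times[j] * r ** a       # position-based learning effect
        t += setup + actual
        total += t                            # completion time of job j
        processed += actual
    return t, total

jobs = [5.0, 3.0, 8.0]
print(schedule_stats(jobs, order=[1, 0, 2]))  # shortest-processing-time order
```

Because actual times depend on position and setups depend on the realized sequence, the two criteria interact nontrivially across permutations, which is why the paper resorts to branch-and-bound rather than a simple sorting rule.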

20.
Redesigning and improving business processes to better serve customer needs has become a priority in service industries as they scramble to become more competitive. This paper describes an approach to process improvement that is being developed collaboratively by applied researchers at US WEST, a major telecommunications company, and the University of Colorado. Motivated by the need to streamline and to add more quantitative power to traditional quality improvement processes, the new approach uses an artificial intelligence (AI) statistical tree growing method that uses customer survey data to identify operations areas where improvements are expected to affect customers most. This AI/statistical method also identifies realistic quantitative targets for improvement and suggests specific strategies (recommended combinations of actions) that are predicted to have high impact. This research, funded in part by the Colorado Advanced Software Institute (CASI) in an effort to stimulate profitable innovations, has resulted in a practical methodology that has been used successfully at US WEST to help set process improvement priorities and to guide resource allocation decisions throughout the company.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号