Similar Articles
20 similar articles found.
1.
Behavioural scoring models are generally used to estimate the probability that a customer of a financial institution who owns a credit product will default on this product within a fixed time horizon. However, a single customer usually purchases many credit products from an institution, while behavioural scoring models generally treat each of these products independently. To make credit risk management easier and more efficient, it is therefore of interest to develop customer default scoring models, which estimate the probability that a customer of a financial institution will have credit issues with at least one product within a fixed time horizon. In this study, three strategies for developing customer default scoring models are described: one regularly used by financial institutions and two proposed herein. The performance of these strategies is compared using a real data set supplied by a financial institution and a Monte Carlo simulation study.
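The abstract does not spell out the three strategies, but the most direct customer-level construction aggregates product-level probabilities of default (PDs). A minimal sketch in Python, assuming independence across a customer's products (our simplification, not necessarily the paper's):

import numpy as np

def customer_pd(product_pds):
    # P(customer defaults on at least one product), assuming the
    # product-level default events are independent -- a simplification.
    product_pds = np.asarray(product_pds, dtype=float)
    return 1.0 - np.prod(1.0 - product_pds)

# A customer holding three products with individual horizon PDs:
print(customer_pd([0.02, 0.05, 0.01]))   # 1 - 0.98*0.95*0.99 ~ 0.078

A customer-level model could instead be trained directly on a customer-level default label; the paper compares such strategies on real and simulated data.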

2.
蒋紫艳, 赵军. 《运筹与管理》(Operations Research and Management Science), 2015, 24(4): 240-245
The successful sale of a new product depends on two important factors: engineering variables on the production side, such as the product's reliability level, and market-side factors, such as price and the warranty policy. To realize revenue, the manufacturer must carefully weigh the choice of price, product reliability and warranty policy. Accordingly, this paper treats price as an exogenous variable, takes the warranty policy and reliability as decision variables, builds a revenue-maximization model, and analyzes the optimal reliability and warranty strategy. It further examines how the optimal warranty policy and product reliability change as the sensitivity parameters of the different variables vary. Finally, a numerical example illustrates the basic properties of the revenue function. The results show that consumers always infer the product's reliability level from the signal conveyed by its warranty, which offers some guidance for new-product sales.
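As a purely stylized illustration of the optimization the paper sets up (every functional form and constant below is hypothetical, since the abstract does not give the model), one can maximize revenue over warranty length w and reliability r with the price held exogenous:

import numpy as np
from scipy.optimize import minimize

p = 100.0                                               # exogenous price
demand = lambda w, r: 1000.0 * (1.0 + 0.3 * w) * r      # grows with w and r
unit_cost = lambda r: 20.0 + 40.0 * r**2                # reliability is costly
claims = lambda w, r: 15.0 * w * (1.0 - r)              # expected warranty cost/unit

def neg_revenue(x):
    w, r = x
    return -(p - unit_cost(r) - claims(w, r)) * demand(w, r)

res = minimize(neg_revenue, x0=[1.0, 0.8],
               bounds=[(0.0, 3.0), (0.5, 0.99)])
print("warranty w* = %.2f, reliability r* = %.3f" % tuple(res.x))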

3.
In this paper we examine the impact of changes in factors such as tariffs/import cost, exchange rate, and unit savings derived from economies of scale on the product design of four international strategies, which are characterised by two dimensions. The first dimension describes whether the company offers a standardised or a customised product. The second indicates whether the company centralises its production in a single facility in one country or decentralises its production to facilities located in each country. To address this issue, we present a model that combines elements from marketing and manufacturing. For the case where the product has one attribute, we show that when tariffs/import cost decrease, an international enterprise should respond by enhancing the features of its products. Similarly, the product features should be enhanced when the exchange rate increases or the unit savings derived from economies of scale increase. Numerical examples indicate that an international enterprise should change its production configuration from decentralised to centralised in environments of high tariffs/import cost. Furthermore, an international enterprise should change its product policy from customised to standardised when the savings derived from economies of scale are high and the exchange rate increases.

4.
It is well known that mutually orthogonal latin squares, or MOLS, admit a (Kronecker) product construction. We show that, under mild conditions, “triple products” of MOLS can result in a gain of one square. In terms of transversal designs, the technique is to use a construction of Rolf Rees twice: once to obtain a coarse resolution of the blocks after one product, and next to reorganize classes and resolve the blocks of the second product. As consequences, we report a few improvements to the MOLS table and obtain a slight strengthening of the famous theorem of MacNeish.
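The paper's "triple product" refinement is not reproduced here; the classical Kronecker product construction it builds on, which also underlies MacNeish's theorem, can be sketched as follows (function names are ours):

import numpy as np

def cyclic_ls(p, k=1):
    # For prime p, L_k[i, j] = (k*i + j) mod p; k = 1..p-1 give p-1 MOLS.
    i, j = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    return (k * i + j) % p

def product_ls(A, B):
    # Kronecker product of Latin squares: entry ((i1,i2),(j1,j2)) equals
    # A[i1,j1]*n + B[i2,j2], giving a Latin square of order m*n.
    m, n = A.shape[0], B.shape[0]
    return (n * np.kron(A, np.ones((n, n), dtype=int))
            + np.kron(np.ones((m, m), dtype=int), B))

def is_latin(L):
    n, symbols = L.shape[0], set(range(L.shape[0]))
    return all(set(L[i, :]) == symbols and set(L[:, i]) == symbols
               for i in range(n))

def orthogonal(L1, L2):
    # Orthogonal iff superimposing the squares yields all n^2 ordered pairs.
    n = L1.shape[0]
    return len(set(zip(L1.ravel(), L2.ravel()))) == n * n

A1, A2 = cyclic_ls(3, 1), cyclic_ls(3, 2)
B1, B2 = cyclic_ls(5, 1), cyclic_ls(5, 2)
C1, C2 = product_ls(A1, B1), product_ls(A2, B2)
assert is_latin(C1) and is_latin(C2) and orthogonal(C1, C2)
print("2 MOLS of order 15 obtained from MOLS of orders 3 and 5")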

5.
Two-grid finite volume element methods, based on two linear conforming finite element spaces on one coarse grid and one fine grid, are presented and studied for two-dimensional semilinear parabolic problems. With the proposed techniques, solving the nonsymmetric and nonlinear system on the fine space is reduced to solving a symmetric and linear system on the fine space and solving the nonsymmetric and nonlinear system on a much smaller space. Convergence estimates are derived to justify the efficiency of the proposed two-grid algorithms. It is proved that the coarse grid can be much coarser than the fine grid. As a result, solving such a large class of semilinear parabolic problems is not much more difficult than solving a single linearized equation. Finally, a numerical example is presented to validate the usefulness and efficiency of the method.
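A minimal sketch of the two-grid mechanics for one backward-Euler step of a semilinear problem, with 1D finite differences standing in for the paper's finite volume element spaces (the nonlinearity and all sizes are illustrative choices of ours):

import numpy as np

# One backward-Euler step of u_t = u_xx + f(u) on (0,1), u = 0 at the ends.
f  = lambda u: u - u**3
df = lambda u: 1.0 - 3.0 * u**2

def lap(n):
    h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

def newton_solve(u_old, n, dt, iters=30):
    # Fully nonlinear solve of u - dt*(Lap u + f(u)) = u_old.
    A, u = lap(n), u_old.copy()
    for _ in range(iters):
        F = u - dt * (A @ u + f(u)) - u_old
        J = np.eye(n) - dt * A - dt * np.diag(df(u))
        u -= np.linalg.solve(J, F)
    return u

nC, nF, dt = 15, 255, 1e-3
xC = np.linspace(0, 1, nC + 2)[1:-1]
xF = np.linspace(0, 1, nF + 2)[1:-1]

# Step 1: nonlinear (Newton) solve on the coarse grid only.
uC = newton_solve(np.sin(np.pi * xC), nC, dt)

# Step 2: one linearized fine-grid solve, expanding f about the
# interpolated coarse solution uH: f(u) ~ f(uH) + f'(uH)(u - uH).
uH = np.interp(xF, np.r_[0.0, xC, 1.0], np.r_[0.0, uC, 0.0])
A = lap(nF)
M = np.eye(nF) - dt * A - dt * np.diag(df(uH))
u2g = np.linalg.solve(M, np.sin(np.pi * xF) + dt * (f(uH) - df(uH) * uH))

u_ref = newton_solve(np.sin(np.pi * xF), nF, dt)   # full fine-grid Newton
print("two-grid vs fine Newton, max difference:", np.abs(u2g - u_ref).max())

The expensive Newton iteration runs only on the 15-point coarse grid; the 255-point fine grid sees a single symmetric linear solve.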

6.
Credit scoring systems are based on Operational Research and statistical models which seek to identify which previous borrowers did or did not default on loans. This study looks at the question of when borrowers will default, not whether they will default. It suggests that some reliability modelling approaches may be useful in this context and may help identify who will default as well as when they may default.
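The abstract does not name the specific reliability models; one natural candidate is a parametric survival fit with right censoring, where accounts that have not defaulted by the end of the observation window contribute survival rather than density terms. A sketch on simulated data (all names and numbers are ours):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Hypothetical loan data: months to default (event=1) or to the end of the
# observation window without default (event=0, right-censored).
rng = np.random.default_rng(0)
t_true = weibull_min.rvs(1.5, scale=24.0, size=500, random_state=0)
censor = rng.uniform(6.0, 36.0, size=500)
t = np.minimum(t_true, censor)
event = (t_true <= censor).astype(float)

def neg_loglik(params):
    # Defaults contribute log-density, censored accounts log-survival.
    c, scale = params
    ll = (event * weibull_min.logpdf(t, c, scale=scale)
          + (1.0 - event) * weibull_min.logsf(t, c, scale=scale))
    return -ll.sum()

res = minimize(neg_loglik, x0=[1.0, 12.0], method="L-BFGS-B",
               bounds=[(0.05, 10.0), (0.1, 200.0)])
c_hat, scale_hat = res.x
print("shape %.2f, scale %.1f months" % (c_hat, scale_hat))
print("12-month PD: %.3f" % weibull_min.cdf(12.0, c_hat, scale=scale_hat))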

7.
Data-based scorecards, such as those used in credit scoring, age with time and need to be rebuilt or readjusted. Unlike the huge literature on modelling the replacement and maintenance of equipment, there have been hardly any models that deal with this problem for scorecards. This paper identifies an effective way of describing the predictive ability of a scorecard and from this derives a simple model of how its predictive ability develops over time. Using a dynamic programming approach, one can then find when it is optimal to rebuild and when to readjust a scorecard. Failing to readjust or rebuild scorecards as they aged was one of the defects in credit scoring identified in the investigations into the sub-prime mortgage crisis.
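A toy value-iteration version of the rebuild/readjust trade-off (the profit decay, costs and transitions below are our own stylization, not the paper's model):

import numpy as np

# State: scorecard "age"; per-period profit decays as predictive ability
# ages. Actions each period: keep, readjust (cheap partial reset) or
# rebuild (costly full reset).
N = 20
ages = np.arange(N)
profit = 100.0 * 0.97**ages
COST_ADJ, COST_REBUILD, RESET, beta = 15.0, 60.0, 5, 0.95

V = np.zeros(N)
for _ in range(2000):                         # value iteration
    nxt = np.minimum(ages + 1, N - 1)
    adj = np.maximum(ages - RESET, 0)         # age after a readjustment
    keep = profit + beta * V[nxt]
    adjust = profit[adj] - COST_ADJ + beta * V[np.minimum(adj + 1, N - 1)]
    rebuild = np.full(N, profit[0] - COST_REBUILD + beta * V[1])
    V_new = np.max([keep, adjust, rebuild], axis=0)
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new

policy = np.array(["keep", "readjust", "rebuild"])[
    np.argmax([keep, adjust, rebuild], axis=0)]
print(list(zip(ages.tolist(), policy.tolist())))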

8.
Credit scoring is a method of modelling the potential risk of credit applications. Traditionally, logistic regression and discriminant analysis are the most widely used approaches for creating scoring models in the industry. However, these methods have several limitations, such as instability with high-dimensional data and small sample sizes, intensive variable selection effort, and an inability to handle non-linear features efficiently. Most importantly, with these algorithms it is difficult to automate the modelling process, and when population changes occur the static models usually fail to adapt and may need to be rebuilt from scratch. In the last few years, the kernel learning approach has been investigated to solve these problems. However, the existing applications of this type of method (in particular the SVM) in credit scoring have all focused on the batch model and did not address the important problem of how to update the scoring model on-line. This paper presents a novel and practical adaptive scoring system based on an incremental kernel method. With this approach, the scoring model is adjusted by an on-line update procedure that always converges to the optimal solution without information loss or numerical difficulties. Non-linear features in the data are automatically included in the model through a kernel transformation. The approach requires no variable reduction effort and is robust when scoring data with a large number of attributes and highly unbalanced class distributions. Moreover, a new potential kernel function is introduced to further improve the predictive performance of the scoring model, and a kernel attribute ranking technique is used to add transparency to the final model. Experimental studies using real-world data sets demonstrate the effectiveness of the proposed method.
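The paper's incremental algorithm is not given in the abstract; the sketch below shows one standard way to grow a kernel scoring model exactly, point by point, via a Schur-complement update of the inverse kernel matrix (here for kernel ridge regression, with hypothetical data):

import numpy as np

def rbf(X, Z, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the row-sets X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

class OnlineKernelRidge:
    # Kernel ridge model updated exactly as accounts arrive, via a block
    # (Schur complement) inverse update -- no retraining from scratch.
    def __init__(self, lam=1.0, gamma=0.5):
        self.lam, self.gamma = lam, gamma
        self.X, self.y, self.Kinv = None, None, None

    def add(self, x, y):
        x = np.atleast_2d(x)
        if self.X is None:
            self.X, self.y = x, np.array([y], float)
            self.Kinv = np.array([[1.0 / (1.0 + self.lam)]])
            return
        b = rbf(self.X, x, self.gamma)                  # shape (n, 1)
        c = rbf(x, x, self.gamma)[0, 0] + self.lam
        Ab = self.Kinv @ b
        s = c - (b.T @ Ab)[0, 0]                        # Schur complement
        self.Kinv = np.block([[self.Kinv + (Ab @ Ab.T) / s, -Ab / s],
                              [-Ab.T / s, np.array([[1.0 / s]])]])
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)

    def score(self, Xq):
        alpha = self.Kinv @ self.y
        return rbf(np.atleast_2d(Xq), self.X, self.gamma) @ alpha

rng = np.random.default_rng(1)
model = OnlineKernelRidge()
for _ in range(200):                     # accounts arriving one at a time
    x = rng.normal(size=2)
    label = 1.0 if x.sum() + 0.3 * rng.normal() > 0 else 0.0
    model.add(x, label)
print(model.score([[1.0, 1.0], [-1.0, -1.0]]))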

9.
Scoring by usage     
This paper aims to discover whether the predictive accuracy of a new-applicant scoring model for a credit card can be improved by estimating separate scoring models for applicants who are predicted to have high or low usage of the card. Two models are estimated: first, a model to explain the desired usage of a card; second, two further scoring models, one for applicants whose usage is predicted to be high and one for those whose usage is predicted to be low. The desired-usage model is a two-stage Heckman model, which takes into account the fact that the observed usage of accepted applicants is constrained by their credit limit. Thus a model of the determinants of the credit limit, and one of usage, are both estimated using Heckman's ML estimator. We find a large number of variables to be correlated with desired usage. We also find that the two-stage scoring methodology gives only marginal improvements over a single-stage scoring model; that we are able to predict a greater percentage of bad payers among low users than among high users; and a greater percentage of good payers among high users than among low users.
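The paper handles censoring of usage by the credit limit with Heckman's ML estimator; the generic two-step selection sketch below (with a made-up data-generating process) illustrates the same Mills-ratio correction idea: a probit first stage, then OLS with the inverse Mills ratio as an extra regressor.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
X = sm.add_constant(rng.normal(size=(n, 2)))      # usage covariates
Z = sm.add_constant(rng.normal(size=(n, 3)))      # selection covariates
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
select = (Z @ np.array([0.2, 0.8, -0.5, 0.3]) + u[:, 0] > 0)
usage = X @ np.array([1.0, 2.0, -1.0]) + u[:, 1]  # observed only if selected

probit = sm.Probit(select.astype(float), Z).fit(disp=0)   # stage 1
zb = Z @ probit.params
mills = norm.pdf(zb) / norm.cdf(zb)               # inverse Mills ratio
Xs = np.column_stack([X[select], mills[select]])
ols = sm.OLS(usage[select], Xs).fit()             # stage 2
print("second-stage coefficients (last = Mills ratio):", ols.params.round(2))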

10.
We develop an integrated approach for analyzing logistics and marketing decisions within the context of designing an optimal returns system for a retailer servicing two distinct market segments. At the operational level, we show that the optimal refund price is not unique. Moreover, it is such that if both market segments return a purchased product, then neither segment will receive a full money-back refund; and it is such that if one or both segments do not return a purchased product, then a refund premium over the purchase price is possible, but the refund premium will not be enough to offset a customer's total net cost of purchase and return. We also show that any improvement to the returns system that results in increased logistical efficiency or marketing effectiveness will be accompanied by an increase in the selling price of the product. At the strategic level, we show that if the retailer does not coordinate its logistics and marketing efforts to improve the overall returns system, then it will tend to over-invest in one of the functions and under-invest in the other. Finally, we illustrate how our model can be generalized to the case in which a customer's ex post valuation of the product falls along a continuum.

11.
Ciaramella G., Vanzan T. 《Numerical Algorithms》 2022, 91(1): 413-448

Two-level Schwarz domain decomposition methods are very powerful techniques for the efficient numerical solution of partial differential equations (PDEs). A two-level domain decomposition method requires two main components: a one-level preconditioner (or its corresponding smoothing iterative method), based on domain decomposition techniques, and a coarse correction step, which relies on a coarse space. The coarse space must properly represent the error components that the chosen one-level method is not capable of dealing with. In the literature, most works introduce efficient coarse spaces obtained as the span of functions defined on the entire space domain of the considered PDE; the corresponding two-level preconditioners and iterative methods are therefore defined in volume. In this paper, we use the excellent smoothing properties of Schwarz domain decomposition methods to define, for general elliptic problems, a new class of substructured two-level methods, for which both the Schwarz smoothers and the coarse correction steps are defined on the interfaces (except for the application of the smoother, which requires volumetric subdomain solves). This approach has several advantages. On the one hand, the required computational effort is cheaper than that of classical volumetric two-level methods. On the other hand, our approach, like classical multigrid methods, does not require the explicit construction of coarse spaces, and it permits a multilevel extension, which is desirable when the high dimension of the problem or the poor quality of the coarse space prevents efficient numerical solution. Numerical experiments demonstrate the effectiveness of the proposed new numerical framework.
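For contrast with the substructured variant introduced in the paper, here is a compact classical (volumetric) two-level additive Schwarz preconditioner for the 1D Poisson problem, used inside preconditioned CG; all sizes are illustrative and the construction is a textbook baseline, not the paper's method.

import numpy as np

n, nsub, overlap = 199, 8, 4
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Overlapping subdomain index sets.
edges = np.linspace(0, n, nsub + 1).astype(int)
subs = [np.arange(max(edges[k] - overlap, 0), min(edges[k + 1] + overlap, n))
        for k in range(nsub)]

# Coarse space: piecewise-linear "hat" functions on a coarse grid.
nc = nsub - 1
xf = np.linspace(h, 1 - h, n)
xc = np.linspace(0, 1, nc + 2)
R0 = np.maximum(0, 1 - np.abs((xf[None, :] - xc[1:-1, None]) / (xc[1] - xc[0])))
A0 = R0 @ A @ R0.T

def precond(r):
    z = R0.T @ np.linalg.solve(A0, R0 @ r)          # coarse correction
    for idx in subs:                                 # local subdomain solves
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

# Preconditioned conjugate gradients.
b = np.ones(n); x = np.zeros(n)
r = b - A @ x; z = precond(r); p = z.copy()
for it in range(200):
    Ap = A @ p; rz = r @ z
    alpha = rz / (p @ Ap)
    x += alpha * p; r -= alpha * Ap
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    z = precond(r)
    p = z + (r @ z / rz) * p
print("PCG iterations with two-level additive Schwarz:", it + 1)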

12.
王滔, 颜波. 《运筹与管理》(Operations Research and Management Science), 2022, 31(12): 173-178
Recognizing that a manufacturer's self-built online channel has limited audience reach, this paper incorporates the spillover effect by which an e-commerce platform enhances the audience of the manufacturer's self-built channel into an online-channel model consisting of one manufacturer and one e-commerce platform. Optimal decisions are analyzed for the cases where the manufacturer does not join the platform, and where it joins while either keeping or abandoning its self-built channel. The results show that when the manufacturer joins the platform and keeps its self-built channel, achieving the optimal profit depends on consumers' preference for the products of its self-built channel. Comparing decentralized and centralized decision-making, we find that when the manufacturer joins the platform, centralized decision-making yields a higher total profit than decentralized decision-making regardless of whether the self-built channel is kept; however, when the platform's spillover effect is large and consumers' preference for the self-built channel's products is small, the total online-channel profit when the manufacturer stays off the platform exceeds that when it joins. Finally, coordination mechanisms based on the platform's referral fee rate are designed for the different online-channel structures, achieving a Pareto improvement of the online channel.

13.
We study the coarse Baum–Connes conjecture for product spaces and product groups. We show that a product of CAT(0) groups, polycyclic groups and relatively hyperbolic groups that satisfy some assumptions on peripheral subgroups satisfies the coarse Baum–Connes conjecture. For this purpose, we construct and analyze an appropriate compactification and its boundary, the “corona”, of a product of proper metric spaces.

14.
The two-grid method is studied for solving a two-dimensional second-order nonlinear hyperbolic equation using the finite volume element method. The method is based on two different finite element spaces defined on one coarse grid with grid size H and one fine grid with grid size h, respectively. The nonsymmetric and nonlinear iterations are executed only on the coarse grid, and the fine-grid solution is obtained in a single symmetric and linear step. It is proved that the coarse grid can be much coarser than the fine grid. An a priori error estimate in the H¹-norm is proved to be O(h + H³|ln H|) for the two-grid semidiscrete finite volume element method. With these techniques, solving such a large class of second-order nonlinear hyperbolic equations is not much more difficult than solving a single linearized equation. Finally, a numerical example is presented to validate the usefulness and efficiency of the method.
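A small worked consequence of the stated bound, obtained by balancing its two terms (our reading, not an additional result of the paper):

\[
\|u - u_h^H\|_{H^1} = O\!\left(h + H^{3}|\ln H|\right),
\qquad
H \sim h^{1/3} \;\Longrightarrow\; H^{3}|\ln H| \sim \tfrac{1}{3}\, h\, |\ln h|,
\]

so choosing H of order h^{1/3} keeps the overall error essentially at the fine-grid level, while the nonlinear iterations see only O(H^{-2}) unknowns instead of O(h^{-2}) on the two-dimensional domain.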

15.
In a highly competitive environment, a product's commercial success depends ever more on the ability to satisfy consumers' highly diversified preferences. Since a product typically comprises a host of technological attributes, its market value incorporates the individual values of all these attributes. If the willingness-to-pay (WTP) for the individual quality attributes of a product is known, one can infer the overall WTP, or the imputed market price, for the product. The market price listed by the producer has to be equal to or lower than this WTP for the commercial survival of the product. In this paper, we propose a methodology for estimating the value of individual product characteristics, and thus the overall WTP for the product, with DEA. Our methodology is based on a model derived from consumer demand theory on the one hand and recent developments in DEA on the other. The paper also presents a real case study of the mobile phone market, which is characterized by its high speed of innovation. On the theoretical side, we expect our framework to provide a way of combining DEA and consumer demand theory. We also expect the empirical application to shed some light on the process of product differentiation based on consumers' valuations.
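The paper's WTP formulation is a specific DEA model from consumer demand theory that the abstract does not reproduce; as a generic stand-in, an input-oriented CCR efficiency score per handset (price as the single input, quality attributes as outputs, all data hypothetical) can be computed by linear programming:

import numpy as np
from scipy.optimize import linprog

prices = np.array([299.0, 399.0, 199.0, 499.0, 349.0])   # input (made up)
attrs = np.array([[8, 64, 12], [12, 128, 16], [8, 32, 8],
                  [12, 256, 20], [12, 128, 12]], dtype=float)  # outputs (made up)

def ccr_efficiency(k, x, Y):
    # min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
    #                  sum_j lam_j Y_j >= Y_k,  lam >= 0.
    n, m = Y.shape
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam]
    A_ub = np.vstack([np.r_[-x[k], x],
                      np.c_[np.zeros(m), -Y.T]])
    b_ub = np.r_[0.0, -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun

for k in range(len(prices)):
    print("phone %d: efficiency %.3f" % (k, ccr_efficiency(k, prices, attrs)))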

16.
This article addresses and discusses the inaccuracies of finite differencing across the interface of a nested grid. Explicit schemes for the advection and diffusion equations are analyzed on the fine and coarse grids and reformulated at the interface to guarantee that the evolving solution is unaffected by the abrupt change in spatial grid resolution. The associated errors are expressed as a function of the wavelength of the initial field distribution and the ratio between the coarse and fine grid resolutions. It is found that the large-scale features of the coarse grid must supply energy to sustain the small-scale features of the fine grid. To avoid depleting the large-scale motion, a source of energy must be supplied at the interface in the form of a computational diffusive term with a negative viscosity coefficient. On the other hand, not all the energy of the small-scale features of the fine grid has to be transferred to the large-scale motion; some of it needs to be computationally dissipated at the interface.

17.
We consider the cost of estimating an error bound for the computed solution of a system of linear equations, i.e., estimating the norm of a matrix inverse. Under some technical assumptions we show that computing even a coarse error bound for the solution of a triangular system of equations costs at least as much as testing whether the product of two matrices is zero. The complexity of the latter problem is in turn conjectured to be the same as that of matrix multiplication, matrix inversion, etc. Since most error bounds in practical use have much lower complexity, this means they should sometimes exhibit large errors. In particular, it is shown that condition estimators that (1) perform at least one operation on each matrix entry and (2) are asymptotically faster than any zero tester must sometimes over- or underestimate the inverse norm by a factor of at least …, where n is the dimension of the input matrix, k is the bit size, and where either … or … grows faster than any polynomial in n. Our results hold for the RAM model with bit complexity, as well as for computations over rational and algebraic numbers, but not over real or complex numbers. Our results also extend to estimating error bounds or condition numbers for other linear algebra problems such as computing eigenvectors.

18.
Motivated by the increasing importance of large-scale networks, typically modeled by graphs, this paper is concerned with the development of mathematical tools for solving problems associated with the popular graph Laplacian. We exploit its mixed formulation, based on its natural factorization as the product of two operators. The goal is to construct a coarse version of the mixed graph Laplacian operator in order to build a two-level, and by recursion a multilevel, hierarchy of graphs and associated operators. In many practical situations, having a coarse (i.e., reduced-dimension) model that maintains some inherent features of the original large-scale graph and its graph Laplacian offers the potential to develop efficient algorithms for analyzing the network modeled by this graph. One possible application of such a hierarchy is to develop multilevel methods that have the potential to be of optimal complexity. In this paper, we consider general (connected) graphs and function spaces defined on their edges and vertices. These two spaces are related by a discrete gradient operator, 'Grad', and its adjoint, '−Div', referred to as the (negative) discrete divergence. We also consider a coarse graph obtained by aggregation of vertices of the original one. A coarse vertex space is then identified with the subspace of piecewise constant functions over the aggregates. We consider the ℓ²-projection Q_H onto the space of these piecewise constants. Our main result is the construction of a projection π_H from the original edge space onto a properly constructed coarse edge space associated with the edges of the coarse graph. The projections π_H and Q_H commute with the discrete divergence operator; that is, we have Div π_H = Q_H Div. The resulting pair of coarse edge space and coarse vertex space offers the potential to construct two-level, and by recursion multilevel, methods for the mixed formulation of the graph Laplacian, which utilizes the discrete divergence operator. The performance of one two-level method, with overlapping Schwarz smoothing and correction based on the constructed coarse spaces, is illustrated for solving such mixed graph Laplacian systems on a number of graph examples.
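A toy illustration of the mixed structure on a four-vertex graph; the construction of the edge projection π_H is the paper's contribution and is not reproduced here, so only the factorization and the vertex-space projection Q_H are shown:

import numpy as np

# With B the edge-by-vertex incidence matrix, Grad = B, Div = -B^T, and
# L = B^T B is the graph Laplacian.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n_v = 4
B = np.zeros((len(edges), n_v))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0          # rows of the discrete gradient

L = B.T @ B
Adj = np.zeros((n_v, n_v))
for i, j in edges:
    Adj[i, j] = Adj[j, i] = 1.0
assert np.allclose(L, np.diag(Adj.sum(1)) - Adj)   # degree matrix minus adjacency

# Aggregate vertices {0,1} and {2,3}: P maps coarse constants to the fine grid,
# and Q_H is the l2-projection onto piecewise constants over the aggregates.
P = np.eye(2)[np.array([0, 0, 1, 1])]
Q_H = P @ np.linalg.inv(P.T @ P) @ P.T
print(Q_H @ np.array([1.0, 2.0, 3.0, 4.0]))        # aggregate-wise averages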

19.
We present a numerical investigation of bi-disperse particle-laden gravity currents in the lock-exchange configuration. Previous results, based on numerical simulations and laboratory experiments, are used for comparison. Our discussion focuses on explaining how the presence of more than one particle diameter influences the main features of the flow, such as the deposit profile and the evolution of the front location and suspended mass. We develop the complete energy budget equation for bi-disperse flows. A set of two- and three-dimensional direct numerical simulations (DNS), with different initial compositions of coarse and fine particles, are carried out at a Reynolds number of 4000. These simulations show that the energy terms are strongly affected by varying the initial particle fractions. The addition of a small amount of fine particles to a current predominantly composed of coarse particles increases its run-out distance. In particular, it is shown that higher amounts of coarse particles have a damping effect on the current's development. Comparisons show that the two-dimensional simulations do not accurately reproduce the intense turbulence generated in the 3D cases, which results in significant differences in the suspended mass and front position, as well as in the dissipation term due to advective motion.

20.
We introduce a binary regression accounting-based model for bankruptcy prediction of small and medium enterprises (SMEs). The main advantage of the model lies in its predictive performance in identifying defaulted SMEs. Another advantage, especially relevant for banks, is that the relationship between the accounting characteristics of SMEs and the response is not assumed a priori (e.g., linear, quadratic or cubic) and can be determined from the data. The proposed approach uses the quantile function of the generalized extreme value distribution as the link function, together with smooth functions of accounting characteristics, to flexibly model covariate effects. The usual assumptions in scoring models of a symmetric link function and linear or pre-specified covariate-response relationships are therefore relaxed. Out-of-sample and out-of-time validation on Italian data shows that our proposal outperforms the commonly used (logistic) scoring model for different default horizons.
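A bare-bones sketch of a GEV-link binary regression fitted by maximum likelihood; the paper additionally uses smooth covariate functions, which are omitted here, the data below are simulated, and note that scipy's genextreme parameterizes the shape as c = -xi:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# P(default | x) = GEV cdf evaluated at the linear predictor.
rng = np.random.default_rng(0)
n = 3000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = (rng.uniform(size=n)
     < genextreme.cdf(X @ np.array([-1.0, 0.8, -0.5]), c=-0.3)).astype(float)

def neg_loglik(params):
    beta, c = params[:-1], params[-1]
    p = np.clip(genextreme.cdf(X @ beta, c=c), 1e-10, 1.0 - 1e-10)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).sum()

res = minimize(neg_loglik, x0=np.r_[np.zeros(3), -0.1],
               method="Nelder-Mead", options={"maxiter": 5000})
print("beta:", res.x[:-1].round(2), "shape c:", round(res.x[-1], 2))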
