Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study the optimal loan securitization policy of a commercial bank which is mainly engaged in lending activities. For this we propose a stylized dynamic model which contains the main features affecting the securitization decision. In line with reality we assume that there are non-negligible fixed and variable transaction costs associated with each securitization. The fixed transaction costs lead to a formulation of the optimization problem in an impulse control framework. We prove viscosity solution existence and uniqueness for the quasi-variational inequality associated with this impulse control problem. Iterated optimal stopping is used to find a numerical solution of this PDE, and numerical examples are discussed.
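As a rough illustration of why fixed transaction costs push the problem into an impulse control framework, the following toy dynamic program (a discrete-time sketch, not the continuous-time viscosity-solution or iterated optimal stopping machinery of the paper) lets a bank securitize a lump of its loan book each period against a fixed plus a proportional cost; all dynamics and numbers are invented.

```python
import numpy as np

# Toy discrete-time analogue of the securitization decision: each period the bank
# may securitize a lump s of its loan book, paying a fixed cost F plus a proportional
# cost c*s.  The fixed cost makes it optimal to intervene only occasionally, which is
# the discrete flavour of an impulse control.  All dynamics and numbers are invented.
levels = np.linspace(0.0, 10.0, 101)          # grid for the loan-book size
growth, F, c, beta, T = 1.05, 0.4, 0.02, 0.97, 40

V = np.zeros(levels.size)                     # terminal value
for _ in range(T):
    V_new = np.empty_like(V)
    for i, L in enumerate(levels):
        best = beta * np.interp(min(L * growth, levels[-1]), levels, V)    # do nothing
        for s in levels[1:i + 1]:                                          # securitize s <= L
            payout = s - F - c * s
            cont = beta * np.interp(min((L - s) * growth, levels[-1]), levels, V)
            best = max(best, payout + cont)
        V_new[i] = best
    V = V_new
print("value of a loan book of size 5:", np.interp(5.0, levels, V))
```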

2.
We analyze the process of mortgage loan securitization that has been a root cause of the current subprime mortgage crisis (SMC). In particular, we solve an optimal securitization problem for banks that has the cash outflow rate for financing a portfolio of mortgage-backed securities (MBSs) and the bank’s investment in MBSs as controls. In our case, the associated Hamilton–Jacobi–Bellman equation (HJBE) has a smooth solution when the optimal controls are computed via a power utility function. Finally, we analyze this optimization problem and its connections with the SMC.

3.
The purpose of this study is to analyze the securitization of longevity risk with an emphasis on longevity risk modeling and longevity bond premium pricing. Various longevity derivatives have been proposed, and the capital market has experienced one unsuccessful attempt by the European Investment Bank (EIB) in 2004. After carefully analyzing the pros and cons of previous securitizations, we present our proposed longevity bonds, whose payoffs are structured as a series of put option spreads. We utilize a random walk model with drift to fit small variations of mortality improvements and employ extreme value theory to model rare longevity events. Our method is a new approach in longevity risk securitization, which has the advantage of both capturing mortality improvements within sample and extrapolating rare, out-of-sample longevity events. We demonstrate that the risk cubic model developed for pricing catastrophe bonds can be applied to mortality and longevity bond pricing and use the model to calculate risk premiums for longevity bonds.
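As a hedged illustration of the option-spread structure (with invented attachment and exhaustion points, and a generic survival index standing in for the paper's calibrated mortality model), the principal reduction of such a bond can be written as:

```python
import numpy as np

def spread_loss(survival_index, attachment, exhaustion):
    """Fraction of the bond payment lost when the realized survival index exceeds
    the attachment point; the loss accrues linearly and is capped at the exhaustion
    point, which is exactly the payoff profile of an option spread."""
    return np.clip((survival_index - attachment) / (exhaustion - attachment), 0.0, 1.0)

# toy example: realized cohort survival vs. invented trigger levels
realized = np.array([0.80, 0.84, 0.88, 0.92])
print(spread_loss(realized, attachment=0.85, exhaustion=0.95))
```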

4.
Securitization is a financial operation which allows a financial institution to transform financial assets, for instance mortgage assets or lease contracts, into marketable securities. We focus the analysis on the real case of a leasing bank. Once the securitization characteristics, such as size and times of the operation, have been defined, the profit for the financial institution—Italease Bank for the Leasing in our case—depends on how the financial assets to use in the securitization are selected. We show that the selection problem can be modelled as a multidimensional knapsack problem (MDKP). Some formal arguments suggest that there may exist a prevailing constraint in the MDKP. Such an idea is used in the design of some simple heuristics which turn out to be very effective.
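A minimal sketch of a "prevailing constraint"-style greedy heuristic for the multidimensional knapsack selection (the actual heuristics and the Italease data are not reproduced; all numbers below are invented):

```python
def greedy_mdkp(profits, weights, capacities, prevailing=0):
    """Greedy heuristic for the multidimensional knapsack problem.

    profits[i]    -- profit of selecting asset i
    weights[k][i] -- consumption of resource k by asset i
    capacities[k] -- capacity of resource k
    prevailing    -- index of the constraint used to rank assets
    """
    n = len(profits)
    # rank assets by profit per unit of the prevailing resource
    order = sorted(range(n),
                   key=lambda i: profits[i] / max(weights[prevailing][i], 1e-12),
                   reverse=True)
    remaining = list(capacities)
    chosen = []
    for i in order:
        if all(weights[k][i] <= remaining[k] for k in range(len(capacities))):
            chosen.append(i)
            for k in range(len(capacities)):
                remaining[k] -= weights[k][i]
    return chosen, sum(profits[i] for i in chosen)

# toy instance: 5 assets, 2 constraints (e.g. total notional and a maturity bucket)
profits = [10, 7, 6, 9, 3]
weights = [[4, 3, 2, 5, 1],   # prevailing constraint
           [2, 2, 1, 3, 2]]
print(greedy_mdkp(profits, weights, capacities=[8, 5]))
```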

5.
Pricing and risk management for longevity risk have increasingly become major challenges for life insurers and pension funds around the world. Risk transfer to financial markets, with their major capacity for efficient risk pooling, is an area of significant development for a successful longevity product market. The structuring and pricing of longevity risk using modern securitization methods, common in financial markets, have yet to be successfully implemented for longevity risk management. There are many issues that remain unresolved for ensuring the successful development of a longevity risk market. This paper considers the securitization of longevity risk focusing on the structuring and pricing of a longevity bond using techniques developed for the financial markets, particularly for mortgages and credit risk. A model based on Australian mortality data and calibrated to insurance risk linked market data is used to assess the structure and market consistent pricing of a longevity bond. Age dependence in the securitized risks is shown to be a critical factor in structuring and pricing longevity linked securitizations.

6.
Addressing the pricing of catastrophe bonds, a product of catastrophe insurance risk securitization, this paper is the first to take the term structure of China's short-term interest rates into account. On this basis it constructs a Black-Karasinski interest rate binomial tree (the B-K model) and uses it to determine the Chinese short-term risk-free rate. Finally, using Louberge's theoretical catastrophe bond pricing method, it prices a hypothetical typhoon-loss catastrophe bond for China, offering a new approach to pricing securitized catastrophe insurance risk in China.
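For intuition only, the sketch below builds a recombining binomial tree for a lognormal short rate (Black-Karasinski dynamics are lognormal in the short rate) and discounts a fixed bond cash flow along it; a real application calibrates the tree to the observed term structure, which is omitted here, and all numbers are invented.

```python
import math

def lognormal_rate_tree(r0, sigma, dt, steps):
    """Recombining short-rate tree with r0 * exp(sigma*sqrt(dt)*(2j - i)) at node (i, j);
    a B-K/BDT-style lognormal tree without term-structure calibration."""
    return [[r0 * math.exp(sigma * math.sqrt(dt) * (2 * j - i))
             for j in range(i + 1)] for i in range(steps + 1)]

def discount_cashflow(tree, cashflow, dt, p=0.5):
    """Value today of a single cash flow paid at the final step, discounted
    backward through the rate tree with risk-neutral probability p."""
    values = [cashflow] * len(tree[-1])
    for i in range(len(tree) - 2, -1, -1):
        values = [math.exp(-tree[i][j] * dt) * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(i + 1)]
    return values[0]

tree = lognormal_rate_tree(r0=0.03, sigma=0.2, dt=0.5, steps=6)
print(discount_cashflow(tree, cashflow=100.0, dt=0.5))   # price of a 3-year zero, face 100
```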

7.
Decision theory under uncertainty usually considers criteria such as expected, minimum or maximum values. In economic settings, the quantile criterion is commonly used and provides significant advantages. This paper focuses on quantile optimization in decision making for designing irrigation strategies. We developed P2q, a hierarchical decomposition algorithm which belongs to the family of branching methods. It consists in repeatedly creating, evaluating and selecting smaller promising regions. In contrast to common approaches, the main criterion of interest is the α-quantile, where α reflects the decision maker's risk acceptance. Results of an eight-parameter optimization problem are presented. Quantile optimization provided optimal irrigation strategies that differed from those reached with expected value optimization, responding more accurately to the decision maker's preferences.
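A toy illustration of the quantile criterion itself (not the P2q branching algorithm): outcomes of each candidate strategy are simulated from an invented crop response, and the strategy maximizing the α-quantile is retained rather than the one maximizing the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_yield(strategy, n=5000):
    """Hypothetical stochastic yield for a 2-parameter strategy (threshold, dose)."""
    threshold, dose = strategy
    weather = rng.normal(size=n)
    return 10 + 2 * dose - 0.5 * dose**2 + threshold * weather  # made-up response

def best_by_quantile(strategies, alpha=0.1):
    """Pick the strategy with the largest alpha-quantile of simulated outcomes."""
    return max(strategies, key=lambda s: np.quantile(simulate_yield(s), alpha))

candidates = [(t, d) for t in (0.5, 1.0, 2.0) for d in (0.5, 1.0, 1.5, 2.0)]
print("expected-value choice :", max(candidates, key=lambda s: simulate_yield(s).mean()))
print("0.1-quantile choice   :", best_by_quantile(candidates, alpha=0.1))
```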

8.
We consider n noisy measurements of a smooth (unknown) function, which suggest that the graph of the function consists of one convex and one concave section. Due to the noise, the sequence of the second divided differences of the data exhibits more sign changes than expected in the second derivative of the underlying function. We address the problem of smoothing the data so as to minimize the sum of squares of residuals subject to the condition that the sequence of successive second divided differences of the smoothed values changes sign at most once. It is a nonlinear problem, since the position of the sign change is also an unknown of the optimization process. We state a characterization theorem, which shows that the smoothed values can be derived by at most 2n − 2 quadratic programming calculations applied to subranges of the data. Then, we develop an algorithm that solves the problem in about O(n²) computer operations by employing several techniques, including B-splines, the use of active sets, quadratic programming and updating methods. A Fortran program has been written and some of its numerical results are presented. Applications of the smoothing technique may be found in scientific, economic and engineering calculations, when a potential shape for the underlying function is an S-curve. Generally, the smoothing calculation may arise from processes that show initially increasing and then decreasing rates of change.

9.
We consider stochastic control problems with jump-diffusion processes and formulate an algorithm which produces, starting from a given admissible control π, a new control with a better value. If no improvement is possible, then π is optimal. Such an algorithm is well-known for discrete-time Markov Decision Problems under the name Howard’s policy improvement algorithm. The idea can be traced back to Bellman. Here we show with the help of martingale techniques that such an algorithm can also be formulated for stochastic control problems with jump-diffusion processes. As an application we derive some interesting results in financial portfolio optimization.
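For the discrete-time Markov Decision Problem analogue mentioned above, Howard's policy improvement takes a compact form; the sketch below runs it on a small made-up example (the continuous-time jump-diffusion case in the paper requires the martingale arguments instead).

```python
import numpy as np

def policy_iteration(P, r, gamma=0.95):
    """Howard's policy improvement for a finite MDP.
    P[a][s, s'] -- transition probabilities under action a
    r[a][s]     -- expected one-step reward of action a in state s
    """
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # policy improvement: greedy action with respect to the current value
        q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):   # no improvement possible -> optimal
            return policy, v
        policy = new_policy

# toy 2-state, 2-action example
P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.6, 0.4]])]
r = [np.array([1.0, 0.0]), np.array([0.5, 2.0])]
print(policy_iteration(P, r))
```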

10.
One of the scalability bottlenecks for the large-scale usage of Gaussian processes is the computation of the maximum likelihood estimates of the parameters of the covariance matrix. The classical approach requires a Cholesky factorization of the dense covariance matrix for each optimization iteration. In this work, we present an estimating equations approach for the parameters of zero-mean Gaussian processes. The distinguishing feature of this approach is that no linear system needs to be solved with the covariance matrix. Our approach requires solving an optimization problem for which the main computational expense for the calculation of its objective and gradient is the evaluation of traces of products of the covariance matrix with itself and with its derivatives. For many problems, this is an O(n log n) effort, and it is always no larger than O(n²). We prove that when the covariance matrix has a bounded condition number, our approach has the same convergence rate as does maximum likelihood in that the Godambe information matrix of the resulting estimator is at least as large as a fixed fraction of the Fisher information matrix. We demonstrate the effectiveness of the proposed approach on two synthetic examples, one of which involves more than 1 million data points.
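The matrix-free flavour of the approach, in which only products with the covariance matrix are needed, can be illustrated with a Hutchinson-type stochastic trace estimator; this is an illustration of the general idea, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples=100, rng=None):
    """Estimate tr(A) using only products A @ z with Rademacher probe vectors z.
    matvec -- function returning A @ z for a vector z of length n."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        total += z @ matvec(z)                # E[z^T A z] = tr(A)
    return total / n_samples

# example: tr(K @ dK) for a covariance K and an (illustrative) hyperparameter derivative dK,
# never forming the matrix product explicitly
n = 500
x = np.linspace(0, 10, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)       # squared-exponential covariance
dK = K * 0.5 * (x[:, None] - x[None, :])**2           # derivative w.r.t. a kernel hyperparameter
print(hutchinson_trace(lambda z: K @ (dK @ z), n), np.trace(K @ dK))
```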

11.
We consider some inference problems concerning the drift parameters of multi-factor Vasicek models (or multivariate Ornstein–Uhlenbeck processes). For example, in modeling interest rates, the Vasicek model asserts that the term structure of interest rates is not just a single process, but rather a superposition of several analogous processes. This motivates us to develop an improved estimation theory for the drift parameters when homogeneity of several parameters may hold. However, the information regarding the equality of these parameters may be imprecise. In this context, we consider Stein-rule (or shrinkage) estimators that allow us to improve on the performance of the classical maximum likelihood estimator (MLE). Under an asymptotic distributional quadratic risk criterion, their relative dominance is explored and assessed. We illustrate the suggested methods by analyzing interbank interest rates of three European countries. Further, a simulation study illustrates the behavior of the suggested method for observation periods of small and moderate length. Our analytical and simulation results demonstrate that shrinkage estimators (SEs) provide excellent estimation accuracy and outperform the MLE uniformly. An overriding theme of this paper is that the SEs provide powerful extensions of their classical counterparts.
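A minimal sketch of the shrinkage idea behind Stein-rule estimators: per-series maximum likelihood estimates are shrunk toward a pooled value when the (possibly imprecise) homogeneity restriction looks credible. This is the classical positive-part James-Stein form with invented numbers, not the asymptotic distributional risk analysis of the paper.

```python
import numpy as np

def stein_shrinkage(estimates, variances):
    """Shrink unrestricted per-series estimates toward their pooled mean.
    The shrinkage is stronger when the estimates are close to one another
    relative to their sampling variance (i.e. when homogeneity looks credible)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    pooled = np.average(estimates, weights=1.0 / variances)
    k = len(estimates)
    distance = np.sum((estimates - pooled) ** 2 / variances)   # test-statistic-like quantity
    weight = max(0.0, 1.0 - (k - 3) / distance) if distance > 0 else 0.0
    return pooled + weight * (estimates - pooled)

# toy example: MLE drift estimates for four short-rate series with known variances
mle = [0.021, 0.025, 0.019, 0.023]
var = [4e-5, 5e-5, 4e-5, 5e-5]
print(stein_shrinkage(mle, var))
```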

12.
李明昕  唐俊  白云  马行达 《运筹与管理》2019,28(10):117-122
Derivatives trading in energy finance and commodities has gradually become a frontier topic in finance. Research on the pricing of steel-related financial derivatives and on energy finance risk is of great significance for energy asset securitization and financial development. Building on existing option pricing models and the factors that affect the real option price of rebar steel, this paper refines the classical Black-Scholes real option pricing model to obtain a fuzzy B-S real option pricing model for rebar, and combines it with the VaR method to study the pricing mechanism of rebar real options and to quantify steel-related financial risk, so that risk propagation can be controlled in a reasonable way.
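A minimal sketch, not the authors' fuzzy B-S model: the classical Black-Scholes price is evaluated at the endpoints of an interval of volatilities (a crude stand-in for a fuzzy volatility), giving a price band for a hypothetical rebar option that could feed a VaR-style risk assessment; all inputs are invented.

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def black_scholes_call(S, K, r, sigma, T):
    """Classical Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# volatility given as an interval [lo, hi] (a stand-in for the support of a fuzzy number)
S, K, r, T = 3800.0, 3900.0, 0.03, 0.5          # hypothetical rebar figures (CNY/ton)
sigma_lo, sigma_hi = 0.18, 0.28
band = (black_scholes_call(S, K, r, sigma_lo, T),
        black_scholes_call(S, K, r, sigma_hi, T))
print("price band under volatility uncertainty:", band)
```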

13.

The paper considers very general multivariate modifications of the Cramér–Lundberg risk model. The claims can be of different types and can arrive in groups. The group arrival processes have constant intensities. The counting processes of the groups are dependent multivariate compound Poisson processes of Type I. We allow empty groups and show that in that case we can find a stochastically equivalent Cramér–Lundberg model with non-empty groups. The investigated model generalizes the risk model with common shocks, the Poisson risk process of order k, the Poisson negative binomial, the Pólya–Aeppli, and the Pólya–Aeppli of order k, among others, all of them with one or more types of policies. The numerical characteristics, Cramér–Lundberg approximations, and probabilities of ruin are derived. Throughout the paper, we show that the theory of these risk models is intrinsically related to special types of integro-differential equations. The probability solutions to such differential equations provide new insights, typically overlooked from the standard point of view.
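As a hedged illustration of the ruin probabilities mentioned above, the sketch below Monte-Carlo-estimates the finite-horizon ruin probability in the classical single-type Cramér–Lundberg special case; claims of several types arriving in dependent groups, as in the paper, would only change the claim-drawing step. Parameters are illustrative.

```python
import numpy as np

def ruin_probability(u, c, lam, claim_sampler, horizon, n_paths=20000, rng=None):
    """Monte-Carlo estimate of P(ruin before `horizon`) for the classical surplus
    process U(t) = u + c*t - S(t), with Poisson(lam) claim arrivals."""
    rng = rng or np.random.default_rng(1)
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)           # time of the next claim
            if t > horizon:
                break
            total_claims += claim_sampler(rng)
            if u + c * t - total_claims < 0:          # surplus can only drop at claim times
                ruined += 1
                break
    return ruined / n_paths

# exponential claims with mean 1 and a 10% premium loading
print(ruin_probability(u=5.0, c=1.1, lam=1.0,
                       claim_sampler=lambda rng: rng.exponential(1.0),
                       horizon=200.0))
```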

14.

We consider a continuous-time portfolio optimization problem on an infinite time horizon for a factor model, recently treated by Bielecki and Pliska ["Risk-sensitive dynamic asset management", Appl. Math. Optim., 39 (1999) 337-360], where the mean returns of individual securities or asset categories are explicitly affected by economic factors. The factors are assumed to be Gaussian processes. We see new features in constructing optimal strategies for risk-sensitive criteria of the portfolio optimization on an infinite time horizon, which are obtained from the solutions of matrix Riccati equations.
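As a minimal illustration of where such optimal strategies come from, the stationary matrix Riccati equation of a made-up linear-quadratic factor problem (not the paper's exact risk-sensitive equations) can be solved directly with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# hypothetical 2-factor linear-quadratic data (A: factor dynamics, B: control loading)
A = np.array([[-0.5, 0.1], [0.0, -0.3]])
B = np.array([[1.0], [0.5]])
Q = np.eye(2)            # state weight
R = np.array([[1.0]])    # control weight

P = solve_continuous_are(A, B, Q, R)       # solves A'P + PA - P B R^{-1} B'P + Q = 0
gain = np.linalg.solve(R, B.T @ P)         # optimal linear feedback u = -gain @ x
print(P, gain, sep="\n")
```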

15.

We study methods to simulate term structures in order to measure interest rate risk more accurately. We use principal component analysis of term structure innovations to identify risk factors and we model their univariate distribution using GARCH models with Student's t-distributions in order to handle heteroscedasticity and fat tails. We find that the Student's t-copula is most suitable to model co-dependence of these univariate risk factors. We aim to develop a model that provides low ex-ante risk measures, while having accurate representations of the ex-post realized risk. By utilizing a more accurate term structure estimation method, our proposed model is less sensitive to measurement noise compared to traditional models. We perform an out-of-sample test for the U.S. market between 2002 and 2017 by valuing a portfolio consisting of interest rate derivatives. We find that ex-ante Value at Risk measurements can be substantially reduced for all confidence levels above 95%, compared to the traditional models. We find that the realized portfolio tail losses accurately conform to the ex-ante measurement for daily returns, while traditional methods overestimate, or in some cases even underestimate, the risk ex-post. Due to noise inherent in the term structure measurements, we find that all models overestimate the risk for 10-day and quarterly returns, but that our proposed model provides by far the lowest Value at Risk measures.
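The first modelling step, extracting risk factors by principal component analysis of term structure innovations, can be sketched in a few lines (the GARCH-t marginals and the Student's t-copula of the full model are omitted); the yield data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic daily zero-coupon yields for maturities 1..10 years (1000 days x 10 maturities)
maturities = np.arange(1, 11)
level = 0.002 * rng.standard_normal((1000, 1))                    # random level moves
slope = 0.001 * rng.standard_normal((1000, 1)) * (maturities / 10)
yields = 0.02 + 0.003 * maturities / 10 + np.cumsum(0.05 * level + 0.05 * slope, axis=0)

innovations = np.diff(yields, axis=0)                             # daily term-structure changes
innovations -= innovations.mean(axis=0)

# PCA via the covariance matrix: columns of `vecs` are the risk factors
cov = np.cov(innovations, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]
explained = vals[order] / vals.sum()
print("variance explained by first 3 factors:", explained[:3].round(3))
factor_scores = innovations @ vecs[:, order[:3]]                  # univariate factor series
```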


16.
In this paper, we study a Dirichlet optimal control problem associated with a linear elliptic equation the coefficients of which we take as controls in the class of integrable functions. The coefficients may degenerate and, therefore, the problems may exhibit the so-called Lavrentieff phenomenon and non-uniqueness of weak solutions. We consider the solvability of this problem in the class of W-variational solutions. Using a concept of variational convergence of constrained minimization problems in variable spaces, we prove the existence of W-solutions to the optimal control problem and provide the way for their approximation. We emphasize that control problems of this type are important in material and topology optimization as well as in damage or life-cycle optimization.

17.
We develop a general approach to portfolio optimization taking account of estimation risk and stylized facts of empirical finance. This is done within a Bayesian framework. The approximation of the posterior distribution of the unknown model parameters is based on a parallel tempering algorithm. The portfolio optimization is done using the first two moments of the predictive discrete asset return distribution. For illustration purposes we apply our method to empirical stock market data where daily asset log-returns are assumed to follow an orthogonal MGARCH process with t-distributed perturbations. Our results are compared with other portfolios suggested by popular optimization strategies.

18.
G. Scheday  C. Miehe 《PAMM》2002,1(1):189-190
Parameter identification processes concern the determination of parameters in a material model in order to fit experimental data. We provide a distinct, unified algorithmic setting of a generic class of material models and discuss the associated gradient-based optimization problem. Gradient-based optimization algorithms need derivatives of the objective function with respect to the material parameter vector κ. In order to obtain the necessary derivatives, an analytical sensitivity analysis is pointed out for the unified class of algorithmic material models. The quality of the parameter identification is demonstrated for a representative example.
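A minimal sketch of the identification loop, with finite-difference sensitivities standing in for the analytical sensitivity analysis of the paper: the parameter vector κ of an invented material law is fitted to synthetic stress-strain data by gradient-based least squares.

```python
import numpy as np
from scipy.optimize import least_squares

def stress_model(kappa, strain):
    """Toy hardening law sigma = k0*eps + k1*(1 - exp(-k2*eps)); purely illustrative."""
    k0, k1, k2 = kappa
    return k0 * strain + k1 * (1.0 - np.exp(-k2 * strain))

strain = np.linspace(0.0, 0.05, 50)
kappa_true = np.array([200.0, 5.0, 80.0])
observed = stress_model(kappa_true, strain) + 0.05 * np.random.default_rng(0).standard_normal(50)

residuals = lambda kappa: stress_model(kappa, strain) - observed
fit = least_squares(residuals, x0=[100.0, 1.0, 10.0])   # gradient-based, finite-difference Jacobian
print(fit.x)
```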

19.
We investigate the algebra of repeated integrals of semimartingales. We prove that a minimal family of semimartingales generates a quasi-shuffle algebra. In essence, to fulfil the minimality criterion, first, the family must be a minimal generator of the algebra of repeated integrals generated by its elements and by quadratic covariation processes recursively constructed from the elements of the family. Second, recursively constructed quadratic covariation processes may lie in the linear span of previously constructed quadratic covariation processes and of the family, but may not lie in the linear span of repeated integrals of these. We prove that a finite family of independent Lévy processes that have finite moments generates a minimal family. Key to the proof are the Teugels martingales and a strong orthogonalization of them. We conclude that a finite family of independent Lévy processes forms a quasi-shuffle algebra. We discuss important potential applications to constructing efficient numerical methods for the strong approximation of stochastic differential equations driven by Lévy processes.

20.
The problem of the estimation of a regression function by continuous piecewise linear functions is formulated as a nonconvex, nonsmooth optimization problem. Estimates are defined by minimization of the empirical L₂ risk over a class of functions, which are defined as maxima of minima of linear functions. An algorithm for finding continuous piecewise linear functions is presented. We observe that the objective function in the optimization problem is semismooth, quasidifferentiable and piecewise partially separable. The use of these properties allows us to design an efficient algorithm for approximation of subgradients of the objective function and to apply the discrete gradient method for its minimization. We present computational results with some simulated data and compare the new estimator with a number of existing ones.
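A small sketch of the estimator class, maxima of minima of affine functions, fitted by minimizing the empirical L₂ risk with a generic derivative-free optimizer rather than the discrete gradient method of the paper; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def maxmin(params, x, n_groups=2, n_affine=2):
    """Continuous piecewise linear function f(x) = max_j min_i (a_ji * x + b_ji)."""
    split = n_groups * n_affine
    a = params[:split].reshape(n_groups, n_affine)
    b = params[split:].reshape(n_groups, n_affine)
    planes = a[:, :, None] * x[None, None, :] + b[:, :, None]   # (groups, affine, points)
    return planes.min(axis=1).max(axis=0)

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 200)
y = np.abs(x) - 0.5 * np.maximum(x - 1, 0) + 0.1 * rng.standard_normal(x.size)

def empirical_l2_risk(params):
    return np.mean((maxmin(params, x) - y) ** 2)

res = minimize(empirical_l2_risk, x0=rng.standard_normal(8), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
print("empirical L2 risk at the fitted parameters:", res.fun)
```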
