Similar Literature
20 similar articles found.
1.
This paper develops several optimization principles relating the fundamental concepts of Pareto efficiency and competitive equilibria. The starting point for this development is the introduction of a new function describing individual preferences, closely related to willingness-to-pay, termed the benefit function. An important property of the benefit function is that it can be summed across individuals to obtain a meaningful measure of total benefit relative to a given set of utility levels; the optimization principles presented in the paper are based on maximization of this total benefit.

Specifically, it is shown that, under appropriate technical assumptions, a Pareto-efficient allocation X maximizes the total benefit relative to the utility levels it yields. Conversely, if an allocation X yields zero benefit and maximizes the total benefit function, then that allocation is Pareto efficient. The Lagrange multipliers p of the benefit maximization problem serve as prices, and the (X, p) pair satisfies a generalized saddle-point property termed a Lagrange equilibrium. This in turn is equivalent, under appropriate assumptions, to a competitive equilibrium.

There are natural duals to all of the results stated above. The dual optimization principle is based on a surplus function, which is a function of prices. The surplus is the total income generated at prices p, minus the total income required to obtain given utility levels. The dual optimization principle states that prices that are dual (or indirect) Pareto efficient minimize total surplus and render it zero. Conversely, a set of prices that minimizes total surplus and renders it zero is a dual Pareto-efficient set of prices.

The results of the paper can be viewed as augmenting the first and second theorems of welfare economics (and their duals) to provide a family of results that relate the important economic concepts of Pareto efficiency, equilibrium, dual (or indirect) Pareto efficiency, total benefit, Lagrange equilibrium, and total surplus.

The author wishes to thank Charles R. Bowman and Andrew J. Yates for several valuable suggestions and corrections.

2.
Abstract

We investigate the position of the Buchen–Kelly density (Peter W. Buchen and Michael Kelly. The maximum entropy distribution of an asset inferred from option prices. Journal of Financial and Quantitative Analysis, 31(1), 143–159, March 1996.) in the family of entropy maximizing densities from Neri and Schneider (Maximum entropy distributions inferred from option portfolios on an asset. Finance and Stochastics, 16(2), 293–318, April 2012.), which all match European call option prices for a given maturity observed in the market. Using the Legendre transform, which links the entropy function and the cumulant generating function, we show that it is both the unique continuous density in this family and the one with the greatest entropy. We present a fast root-finding algorithm that can be used to calculate the Buchen–Kelly density and give upper boundaries for three different discrepancies that can be used as convergence criteria. Given the call prices, arbitrage-free digital prices at the same strikes can only move within upper and lower boundaries given by left and right call spreads. As the number of call prices increases, these bounds become tighter, and we give two examples where the densities converge to the Buchen–Kelly density in the sense of relative entropy. The method presented here can also be used to interpolate between call option prices, and we compare it to a method proposed by Kahalé (An arbitrage-free interpolation of volatilities. Risk, 17(5), 102–106, May 2004). Orozco Rodriguez and Santosa (Estimation of asset distributions from option prices: Analysis and regularization. SIAM Journal on Financial Mathematics, 3(1), 374–401, 2012.) have produced examples in which the Buchen–Kelly algorithm becomes numerically unstable, and we use these as test cases to show that the algorithm given here remains stable and leads to good results.
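The call-spread bounds on digital prices mentioned in the abstract are easy to illustrate: since the call price is convex and decreasing in the strike, the right call spread bounds the digital price from below and the left call spread bounds it from above. A minimal sketch (the strikes and call prices below are hypothetical, not from the paper):

```python
def digital_bounds(strikes, calls, i):
    """Arbitrage bounds on the (undiscounted) digital price at strikes[i].
    Convexity of the call price in the strike implies:
    right call spread <= -dC/dK at strikes[i] <= left call spread."""
    upper = (calls[i - 1] - calls[i]) / (strikes[i] - strikes[i - 1])  # left spread
    lower = (calls[i] - calls[i + 1]) / (strikes[i + 1] - strikes[i])  # right spread
    return lower, upper

# Hypothetical convex call prices at three strikes
strikes = [90.0, 100.0, 110.0]
calls = [14.0, 8.0, 4.0]
lo, hi = digital_bounds(strikes, calls, 1)  # bounds for the digital at K = 100
```

As more strikes are quoted, adjacent spreads tighten and the admissible digital prices shrink toward a single value, which is the sense in which the bounds become tighter as the number of call prices increases.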

3.
Tan, Yong; Wanke, Peter; Antunes, Jorge; Emrouznejad, Ali. Annals of Operations Research (2021), 306(1-2), 131-171.

Although there is a growing number of research articles investigating performance in the banking industry, research on Chinese banking efficiency has focused on rankings rather than on unveiling the industry's productive structure in light of banking competition. This issue is of utmost importance considering the relevant transformations in the Chinese economy over the last decades. We develop a two-stage network production process (the production and intermediation approaches in banking, respectively) to evaluate the efficiency level of Chinese commercial banks. In the second-stage regression analysis, an integrated Multi-Layer Perceptron/Hidden Markov model is used for the first time to unveil endogeneity among banking competition, contextual variables, and the efficiency levels of the production and intermediation approaches in banking. The competitive condition in the Chinese banking industry is measured by the Panzar–Rosse H-statistic and the Lerner index under Ordinary Least Squares regression. First, findings reveal that productive efficiency appears to be positively impacted by competition and market power. Second, weak credit-risk analysis in older local banks, which focus on the province level, may be what jeopardizes the productive efficiency of the entire banking industry in China. Third, it is found that a perfect banking competition structure at the province level and a reduced market power of local banks are drivers of a sound banking system. Finally, our findings suggest that concentration of credit in a few banks leads to an increase in bank productivity.

4.
The notion of a frame multiresolution analysis (FMRA) is formulated. An FMRA is a natural extension to affine frames of the classical notion of a multiresolution analysis (MRA). The associated theory of FMRAs is more complex than that of MRAs. A basic result of the theory is a characterization of frames of integer translates of a function φ in terms of the discontinuities and zero sets of a computable periodization of the Fourier transform of φ. There are subband coding filter banks associated with each FMRA. Mathematically, these filter banks can be used to construct new frames for finite energy signals. As with MRAs, the FMRA filter banks provide perfect reconstruction of all finite energy signals in any one of the successive approximation subspaces Vj defining the FMRA. In contrast with MRAs, the perfect reconstruction filter bank associated with an FMRA can be narrow band. Because of this feature, in signal processing FMRA filter banks achieve quantization noise reduction simultaneously with reconstruction of a given narrow-band signal.
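For orientation, the perfect-reconstruction property of a two-channel subband filter bank can be checked with the simplest example, the Haar bank (an MRA construction; this is only a sketch of the general mechanism, not the narrow-band FMRA filter banks of the paper):

```python
import math

def haar_analysis(x):
    """Split an even-length signal into lowpass and highpass subbands."""
    s = math.sqrt(2.0)
    low = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    high = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Recombine the subbands; for the Haar bank this reconstructs the input exactly."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(low, high):
        x.extend([(a + d) / s, (a - d) / s])
    return x

signal = [1.0, 3.0, -2.0, 0.5, 4.0, 4.0]
rec = haar_synthesis(*haar_analysis(signal))  # equals signal up to rounding
```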

5.
This paper discusses diffusion models describing the 'smile effect' of implied volatilities for option prices, partly following the new approach of Bruno Dupire. A careful study of this approach in the time-homogeneous case shows that the call option prices, considered as a function of the price x of the underlying security, the remaining time to maturity T − t, and the strike price K, necessarily satisfy a certain functional equation in order to fit into a coherent model. It is shown that for certain examples of empirically observed option prices reported in the literature, this functional equation does not hold. © 2000 John Wiley & Sons, Ltd.
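For context, Dupire's formula with zero interest and dividend rates reads σ²(K, T) = 2 ∂C/∂T / (K² ∂²C/∂K²). A sketch that recovers a flat volatility from Black–Scholes call prices by finite differences (parameter values are hypothetical, for illustration only):

```python
import math

def bs_call(S, K, T, sigma):
    """Black-Scholes call price with zero interest rate and dividend yield."""
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * N(d2)

def dupire_local_vol(price, K, T, dK=0.5, dT=1e-4):
    """Local volatility via Dupire's formula (zero rates), using central
    finite differences for the T- and K-derivatives of the call surface."""
    c_t = (price(K, T + dT) - price(K, T - dT)) / (2.0 * dT)
    c_kk = (price(K + dK, T) - 2.0 * price(K, T) + price(K - dK, T)) / dK**2
    return math.sqrt(2.0 * c_t / (K**2 * c_kk))

surface = lambda K, T: bs_call(100.0, K, T, 0.2)   # flat 20% volatility surface
local_vol = dupire_local_vol(surface, K=100.0, T=1.0)  # recovers roughly 0.2
```

A flat Black–Scholes surface is internally coherent, so the recovered local volatility matches the input; the paper's point is that some empirically observed price surfaces admit no such coherent time-homogeneous model.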

6.

The efficiency of banks plays a critical role in the development of sound financial systems. Data Envelopment Analysis (DEA) has witnessed an increase in popularity for modeling the performance efficiency of banks. Such efficiency depends on the appropriate selection of input and output variables, yet no agreement exists in the literature on the selection of relevant variables. The disagreement has been an ongoing debate among academic experts, and no diagnostic tools exist to identify variable misspecifications. A cognitive analytics management framework is proposed, using three processes to address misspecifications. The cognitive process conducts an extensive review to identify the most common set of variables. The analytics process integrates a random forest method, a simulation method with a DEA measurement feedback, and Shannon entropy to select the best DEA model and its relevant variables. Finally, a management process discusses the managerial insights to manage performance and impacts. Data are collected on 303 top world banks from 49 countries for the period 2013 to 2015. The experimental simulation results identified the best DEA model along with its associated variables, and addressed the misclassification of total deposits. The paper concludes with limitations and future research directions.

7.
This paper investigates the calibration of a model with a time-homogeneous local volatility function to the market prices of perpetual American Call and Put options. The main step is the derivation of a Call–Put duality equality for perpetual American options, similar to the equality which is equivalent to Dupire's formula (Dupire in Risk 7(1):18–20, 1994) in the European case. It turns out that in addition to the simultaneous exchanges between the spot price and the strike and between the interest and dividend rates, which already appear in the European case, one has to modify the local volatility function in the American case. To show this duality equality, we exhibit non-autonomous nonlinear ODEs satisfied by the perpetual Call and Put exercise boundaries as functions of the strike variable. We obtain uniqueness for these ODEs and deduce that the mapping associating the exercise boundary with the local volatility function is one-to-one and onto. Thanks to this Dupire-type duality result, we design a theoretical calibration procedure for the local volatility function from the perpetual Call and Put prices for a fixed spot price x0. Knowledge of the Put (resp. Call) prices for all strikes enables one to recover the local volatility function on the interval (0, x0) (resp. (x0, +∞)). Finally, we prove that equality of the dual volatility functions only holds in the standard Black–Scholes model with constant volatility.

8.
Abstract

We consider the problem faced by an investor who must liquidate a given basket of assets over a finite time horizon. The investor's goal is to maximize the expected utility of the sales revenues over a class of adaptive strategies. We assume that the investor's utility has constant absolute risk aversion (CARA) and that the asset prices are given by a very general continuous-time, multiasset price impact model. Our main result is that (perhaps surprisingly) the investor does no worse if he narrows his search to deterministic strategies. In the case where the asset prices are given by an extension of the nonlinear price impact model of Almgren [(2003) Applied Mathematical Finance, 10, pp. 1–18], we characterize the unique optimal strategy via the solution of a Hamilton equation and the value function via a nonlinear partial differential equation with singular initial condition.
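The sufficiency of deterministic strategies is what makes closed forms possible in special cases. In the classical linear-impact CARA special case (the Almgren–Chriss setting, simpler than the general nonlinear model of the paper), the optimal deterministic inventory path is x(t) = X · sinh(κ(T − t)) / sinh(κT), where the urgency κ depends on risk aversion, volatility, and the impact coefficient. A sketch with hypothetical parameters:

```python
import math

def liquidation_schedule(X, T, kappa, n_steps):
    """Optimal deterministic inventory path for the linear-impact CARA case:
    x(t) = X * sinh(kappa * (T - t)) / sinh(kappa * T).
    Larger kappa front-loads the selling; kappa -> 0 approaches a straight line."""
    ts = [T * k / n_steps for k in range(n_steps + 1)]
    return [X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T) for t in ts]

# Hypothetical: liquidate one million shares over one day with urgency kappa = 2
path = liquidation_schedule(X=1_000_000.0, T=1.0, kappa=2.0, n_steps=10)
```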

9.
Abstract

In this article, we propose an arbitrage-free modelling framework for the joint dynamics of forward variance along with the underlying index, which can be seen as a combination of the two approaches proposed by Bergomi. The main difference between our modelling framework and the Bergomi (2008. Smile dynamics III. Risk, October, 90–96) models is the ability to compute the prices of VIX futures and options by semi-analytic formulas. We can also express the sensitivities of the prices of VIX futures and options with respect to the model parameters, which enables an efficient and easy calibration to VIX futures and options. The calibrated model allows one to Delta-hedge VIX options by trading in VIX futures; the corresponding hedge ratios can be computed analytically.

10.
The article considers a problem of inverse option pricing aimed at identifying a time-dependent volatility function, which is not directly observable, from maturity-dependent option prices. In this situation, an important aspect is the calibration of the antiderivative of the squared volatility. This inverse problem leads to an operator equation with a forward operator of Nemytskii type generated by a monotone function of two variables. In recent literature, an analysis of this forward operator and several numerical case studies have been conducted which revealed certain instability effects. This article supplements these results by studying the nature of these instabilities. In this context, the focus is on whether the problem is well-posed or ill-posed, i.e. whether the inverse operator is continuous in suitable Banach spaces.

As the mentioned instabilities result in strongly oscillating approximate solutions, we finish by presenting a numerically effective algorithm which uses a priori information about the monotonicity of the searched antiderivative to compute a smooth approximate solution.

11.

This paper describes a family of divergences, named herein as the C-divergence family, which is a generalized version of the power divergence family and also includes the density power divergence family as a particular member of this class. We explore the connection of this family with other divergence families and establish several characteristics of the corresponding minimum distance estimator including its asymptotic distribution under both discrete and continuous models; we also explore the use of the C-divergence family in parametric tests of hypothesis. We study the influence function of these minimum distance estimators, in both the first and second order, and indicate the possible limitations of the first-order influence function in this case. We also briefly study the breakdown results of the corresponding estimators. Some simulation results and real data examples demonstrate the small sample efficiency and robustness properties of the estimators.


12.
A discrete time model of a financial market is developed, in which heterogeneous interacting groups of agents allocate their wealth between two risky assets and a riskless asset. In each period each group formulates its demand for the risky assets and the risk-free asset according to myopic mean-variance maximization. The market consists of two types of agents: fundamentalists, who hold an estimate of the fundamental values of the risky assets and whose demand for each asset is a function of the deviation of the current price from the fundamental, and chartists, a group basing their trading decisions on an analysis of past returns. The time evolution of the prices is modelled by assuming the existence of a market maker, who sets excess demand of each asset to zero at the end of each trading period by taking an offsetting long or short position, and who announces the next period prices as functions of the excess demand for each asset and with a view to long-run market stability. The model is reduced to a seven-dimensional nonlinear discrete-time dynamical system, which describes the time evolution of prices and agents' beliefs about expected returns, variances and correlation. The unique steady state of the model is determined and the local asymptotic stability of the equilibrium is analysed, as a function of the key parameters that characterize agents' behaviour. In particular it is shown that when chartists update their expectations sufficiently fast, the stability of the equilibrium is lost through a supercritical Neimark–Hopf bifurcation, and self-sustained price fluctuations along an attracting limit cycle appear in one or both markets. Global analysis is also performed, using numerical techniques, in order to understand the role played by the chartists' behaviour in the transition to a regime characterized by irregular oscillatory motion and coexistence of attractors. It is also shown how changes occurring in one market may affect the price dynamics of the alternative risky asset, as a consequence of the dynamic updating of agents' portfolios.
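A drastically simplified, single-asset, fundamentalist-only toy version of such a market-maker price update illustrates the stabilizing side of the mechanism (all parameters hypothetical; the paper's model is a seven-dimensional system with two risky assets and chartists):

```python
def simulate_prices(p0, fundamental, a, mu, n_steps):
    """Market-maker price adjustment: excess demand comes from fundamentalists
    trading in proportion to the mispricing, and the market maker moves the
    price in the direction of excess demand with adjustment speed mu."""
    prices = [p0]
    for _ in range(n_steps):
        excess_demand = a * (fundamental - prices[-1])
        prices.append(prices[-1] + mu * excess_demand)
    return prices

prices = simulate_prices(p0=120.0, fundamental=100.0, a=1.0, mu=0.5, n_steps=50)
```

With only fundamentalists and mu·a < 2 the price converges monotonically (or with damped oscillations) to the fundamental; in the paper, sufficiently fast-updating chartists destroy this stability through a supercritical Neimark–Hopf bifurcation.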

13.
The independent variables of linear mixed models are subject to measurement errors in practice. In this paper, we present a unified method for estimation in linear mixed models with errors-in-variables, based upon the corrected score function of Nakamura (1990, Biometrika, 77, 127–137). Asymptotic normality of the estimators is obtained. The estimators are shown to be consistent and convergent at the rate n^(-1/2). The performance of the proposed method is studied via simulation and the analysis of a data set on hedonic housing prices.

14.
Quantile regression for robust bank efficiency score estimation
We discuss quantile regression techniques as a robust and easy-to-implement alternative for estimating Farrell technical efficiency scores. The quantile regression approach estimates the production process for benchmark banks located at top conditional quantiles. Monte Carlo simulations reveal that even when data are generated according to the assumptions of the stochastic frontier model (SFA), efficiency estimates obtained from quantile regressions resemble SFA efficiency estimates. We apply the SFA and the quantile regression approach to German bank data for three banking groups (commercial banks, savings banks, and cooperative banks) to estimate efficiency scores based on a simple value-added function and a multiple-input–multiple-output cost function. The results reveal that the efficient (benchmark) banks have production and cost elasticities which differ considerably from elasticities obtained from conditional mean functions and stochastic frontier functions.
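The estimating idea behind quantile regression is minimization of the asymmetric "pinball" (check) loss; in the intercept-only case this reduces to an empirical quantile, which is what makes the top conditional quantiles robust benchmarks. A self-contained sketch on toy data (not the paper's bank application):

```python
def pinball_loss(residual, tau):
    """Check (pinball) loss: weight tau for under-prediction,
    weight (1 - tau) for over-prediction."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def best_fit(data, tau, candidates):
    """Value minimizing the total pinball loss; estimates the tau-quantile."""
    return min(candidates, key=lambda c: sum(pinball_loss(y - c, tau) for y in data))

data = [1.0, 2.0, 3.0, 4.0, 100.0]          # one extreme observation
median_fit = best_fit(data, tau=0.5, candidates=data)  # robust to the outlier
upper_fit = best_fit(data, tau=0.9, candidates=data)   # tracks the top of the sample
```

The tau = 0.5 fit ignores the outlier's magnitude (only its sign matters), which is the robustness property exploited when estimating frontier-like behaviour at high quantiles.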

15.
Dyuzhina, N. A. Mathematical Notes (2019), 106(5-6), 711-719.

It is proved that there exists a function defined in the closed upper half-plane for which the sums of its real shifts are dense in all Hardy spaces H^p for 2 ≤ p < ∞, as well as in the space of functions analytic in the upper half-plane, continuous on its closure, and tending to zero at infinity.

16.
Estimates of bank cost efficiency can be biased if bank heterogeneity is ignored. I compare X-inefficiency derived from a model constraining the cost frontier to be the same for all banks in the U.S. and a model allowing for different frontiers and error terms across Federal Reserve Districts. I find that the data reject the single cost function model; X-inefficiency measures based on the single cost function model are, on average, higher than those based on the separate cost functions model; the distributions of the one-sided error terms are wider for the single cost function model than for the separate cost functions model; and the ranking of Districts by the level of X-inefficiency differs in the two models. The results suggest it is important when studying X-inefficiency to account for differences across the markets in which banks are operating, and that since X-inefficiency is, by construction, a residual, it will be particularly sensitive to omissions in the basic model.

17.

It is well known that it is possible to enhance the approximation properties of a kernel operator by increasing its support size. There is an obvious tradeoff between the higher approximation order of a kernel and the complexity of algorithms that employ it. A question is then asked: how do we compare the efficiency of kernels with comparable support size? We follow Blu and Unser and choose as a measure of the efficiency of the kernels the first leading constant in a certain error expansion. We use time-domain methods to treat the case of globally supported kernels in Lp(R^d), 1 ≤ p ≤ ∞.

18.
Measuring economic efficiency requires complete price information, while resorting to technical efficiency exclusively does not allow one to utilise any price information. In most studies, at least some information on the prices is available from theory or practical knowledge of the industry under evaluation. In this paper we extend the theory of efficiency measurement to accommodate incomplete price information by deriving upper and lower bounds for Farrell's overall economic efficiency. The bounds typically give a better approximation for economic efficiency than technical efficiency measures that use no price data whatsoever. From an operational point of view, we derive new data envelopment analysis (DEA) models for computing these bounds using standard linear programming. The practical application of these estimators is illustrated with an empirical application to large European Union commercial banks.
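In the simplest single-input, single-output constant-returns case, the DEA technical efficiency score reduces to a productivity ratio benchmarked against the best unit, which uses no price data at all (the bank data below are hypothetical):

```python
def crs_technical_efficiency(inputs, outputs):
    """Farrell technical efficiency under constant returns to scale with one
    input and one output: each unit's productivity divided by the best
    observed productivity, so the benchmark unit scores 1."""
    productivities = [y / x for x, y in zip(inputs, outputs)]
    best = max(productivities)
    return [p / best for p in productivities]

# Hypothetical banks: input = operating cost, output = loans
costs = [10.0, 20.0, 15.0]
loans = [50.0, 80.0, 75.0]
scores = crs_technical_efficiency(costs, loans)
```

With multiple inputs and outputs the score instead comes from a linear program, and the paper's contribution is that partial price information adds constraints that squeeze these technical scores between upper and lower bounds on overall economic efficiency.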

19.

Association, or interdependence, of two stock prices is analyzed, and selection criteria for a suitable model are developed in the present paper. The association is generated by stochastic correlation, given by a stochastic differential equation (SDE), creating interdependent Wiener processes. These, in turn, drive the SDEs in the Heston model for the stock prices. To choose from possible stochastic correlation models, two goodness-of-fit procedures are proposed based on the copula of the Wiener increments. One uses the confidence domain for the centered Kendall function, and the other relies on strong and weak tail dependence. The constant correlation model and two different stochastic correlation models, given by Jacobi and hyperbolic tangent transformations of Ornstein–Uhlenbeck (HtanOU) processes, are compared by analyzing daily close prices for Apple and Microsoft stocks. The constant correlation, i.e., Gaussian copula, model is unanimously rejected by the methods, but the other two are acceptable at a 95% confidence level. The analysis also reveals that even for Wiener processes, stochastic correlation can create tail dependence, unlike constant correlation, which results in multivariate normal distributions and hence zero tail dependence. Models with stochastic correlation are therefore suitable for describing more dangerous situations in terms of correlation risk.

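The Kendall-function diagnostics above rest on the empirical Kendall's tau of paired increments; the rank correlation itself is straightforward to compute (the data below are illustrative, not the Apple/Microsoft series):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Empirical Kendall's tau: (concordant - discordant) pairs divided by the
    total number of pairs. Ties count as neither concordant nor discordant."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

tau_mono = kendall_tau([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])  # perfectly concordant
tau_mixed = kendall_tau([1.0, 2.0, 3.0, 4.0], [1.0, 3.0, 2.0, 4.0])  # one discordant pair
```

Because tau depends only on ranks, it is a natural copula-level statistic: it is invariant under the marginal transformations (Jacobi, hyperbolic tangent) that distinguish the competing stochastic correlation models.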

20.

We study Fourier–Bessel series on a q-linear grid, defined as expansions in complete q-orthogonal systems constructed with the third Jackson q-Bessel function, and obtain sufficient conditions for uniform convergence. The convergence results are illustrated with specific examples of expansions in q-Fourier–Bessel series.



Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号