1.
This study describes a practical use of Data Envelopment Analysis–Discriminant Analysis (DEA–DA) for bankruptcy-based performance assessment. DEA–DA is useful for classifying non-default and default firms based upon their financial performance. However, when we apply DEA–DA to a data set on corporate bankruptcy, we usually face three problems. First, there is a sample imbalance problem because the number of default firms is often limited, whereas a large number of non-default firms is easily obtained. Second, there is a computational problem in handling a large data set; a computational strategy is needed to reduce its dimension. Finally, we need to consider data alignment, because default firms may be located within the region occupied by non-default firms. This study discusses the simultaneous occurrence of these three problems from the perspective of Japanese industrial policy on the construction business. To handle them, this study combines DEA–DA with principal component analysis to reduce the computational burden, and then alters the DEA–DA weights to address both the sample imbalance problem and the location problem. This study also discusses a combined use of DEA–DA and rank sum tests to statistically examine hypotheses related to bankruptcy assessment. As an important application, we apply the proposed approach to the Japanese construction industry and discuss why many Japanese construction firms are misclassified.
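The interplay of the three remedies (dimension reduction, reweighting for class imbalance, and classification of default versus non-default firms) can be illustrated outside the DEA–DA formulation itself. The sketch below is not the authors' goal-programming DEA–DA model; it substitutes an ordinary class-weighted linear classifier on principal components, with synthetic financial-ratio data, purely to show how imbalance weighting and PCA compression interact.

```python
# Illustrative sketch only: PCA compression + class-weighted linear classifier
# as a stand-in for the paper's DEA-DA formulation (which is a goal-programming
# model, not reproduced here). All data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic financial ratios: many non-default firms, few default firms (imbalance).
X_nondefault = rng.normal(loc=0.5, scale=1.0, size=(950, 20))
X_default    = rng.normal(loc=-0.5, scale=1.0, size=(50, 20))
X = np.vstack([X_nondefault, X_default])
y = np.r_[np.zeros(950), np.ones(50)]          # 1 = default

# Step 1: reduce the dimension of the large ratio set with PCA.
pca = PCA(n_components=5)
Z = pca.fit_transform(X)

# Step 2: counter the sample-imbalance problem by reweighting the minority class,
# analogous to altering the DEA-DA weights for the default group.
clf = LogisticRegression(class_weight="balanced").fit(Z, y)

print("retained variance:", pca.explained_variance_ratio_.sum().round(3))
print("in-sample default recall:",
      (clf.predict(Z)[y == 1] == 1).mean().round(3))
```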
2.
3.
The method of so-called constrained stochastic simulation is introduced. This method specifies how to efficiently generate time series around some specific event in a normal process. All events that can be expressed by means of a linear condition (constraint) can be dealt with. Two examples are given in the paper: the generation of stochastic time series around local maxima, and the generation of stochastic time series around a combination of a local minimum and maximum with a specified time separation. The constrained time series turn out to be a combination of the original process and several correction terms, which involve the autocorrelation function and its time derivatives. For the application concerning local maxima, it is shown that the presented method is in line with properties of a normal process near a local maximum as found in the literature. The method can, for example, be applied to generate wind gusts in order to assess the extreme loading of wind turbines.
AMS 2000 Subject Classification: Primary 60G15, 60G70, 62G32; Secondary 62P30
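A minimal sketch of the underlying idea, under simplifying assumptions: draw an unconstrained Gaussian series, then add the standard Gaussian-conditioning correction so that a linear constraint holds exactly. Here the "local maximum at t0" event is approximated by a zero central difference at t0 together with a prescribed level; this is a simplification for illustration, not the paper's full construction with derivative processes.

```python
# Minimal sketch of constrained (conditional) Gaussian simulation: draw an
# unconstrained series, then add the Gaussian-conditioning correction so that
# the linear constraint C x = b holds exactly.  The "local maximum at t0"
# event is approximated by a zero central difference at t0 plus a fixed level,
# which is a simplification of the paper's construction.
import numpy as np

rng = np.random.default_rng(1)
n, dt, t0 = 200, 0.1, 100
t = np.arange(n) * dt

# Stationary Gaussian process with squared-exponential autocorrelation.
Sigma = np.exp(-0.5 * (t[:, None] - t[None, :])**2)
L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(n))

# Linear constraints: zero slope at t0 (central difference) and a fixed level there.
C = np.zeros((2, n))
C[0, t0 - 1], C[0, t0 + 1] = -1.0 / (2 * dt), 1.0 / (2 * dt)   # x'(t0) ~ 0
C[1, t0] = 1.0                                                  # x(t0) = peak level
b = np.array([0.0, 2.5])

x = L @ rng.standard_normal(n)                      # unconstrained draw
correction = Sigma @ C.T @ np.linalg.solve(C @ Sigma @ C.T, b - C @ x)
x_constrained = x + correction                      # constrained series

print("slope at t0:", (x_constrained[t0 + 1] - x_constrained[t0 - 1]) / (2 * dt))
print("level at t0:", x_constrained[t0])
```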
4.
We develop a doubly spectral representation of a stationary functional time series and study the properties of its empirical version. The representation decomposes the time series into an integral of uncorrelated frequency components (Cramér representation), each of which is in turn expanded in a Karhunen–Loève series. The construction is based on the spectral density operator, the functional analogue of the spectral density matrix, whose eigenvalues and eigenfunctions at different frequencies provide the building blocks of the representation. By truncating the representation at a finite level, we obtain a harmonic principal component analysis of the time series: an optimal finite-dimensional reduction that captures both the temporal dynamics of the process and the within-curve dynamics. Empirical versions of the decompositions are introduced, and a rigorous analysis of their large-sample behaviour is provided that does not require any prior structural assumptions such as linearity or Gaussianity of the functional time series, but rather hinges on Brillinger-type mixing conditions involving cumulants.
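For concreteness, the doubly spectral expansion described above can be written schematically as follows; this is a paraphrase of the standard Cramér–Karhunen–Loève construction under the stated setting, not a verbatim formula from the paper: $$X_t = \int_{-\pi}^{\pi} e^{\mathrm{i}\omega t}\, dZ_\omega, \qquad dZ_\omega = \sum_{n \ge 1} \langle dZ_\omega, \varphi_n^{\omega}\rangle\, \varphi_n^{\omega}, \qquad \mathcal{F}_\omega\, \varphi_n^{\omega} = \lambda_n^{\omega}\, \varphi_n^{\omega},$$ where $\mathcal{F}_\omega$ denotes the spectral density operator at frequency $\omega$ and $(\lambda_n^{\omega}, \varphi_n^{\omega})$ its eigenvalues and eigenfunctions; truncating the inner sum at a finite level yields the harmonic principal component analysis of the series.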
5.
This study compares data envelopment analysis–discriminant analysis (DEA–DA) with Altman's financial ratio analysis to identify the position of DEA–DA in financial performance analysis. It then applies DEA–DA to examine whether research and development (R&D) expenditure influences the financial performance of the Japanese machinery and electric equipment industries. The investigation identifies that R&D expenditure has a positive impact on the financial performance of the Japanese machinery industry but a negative impact on the Japanese electric equipment industry. The result implies that the influence of R&D expenditure on financial performance (including the avoidance of bankruptcy) depends upon the type of manufacturing industry. A rationale for this discrepancy between the two Japanese manufacturing industries is that the life cycle of electric equipment is shorter than that of machinery products; furthermore, the electric equipment industry faces fiercer competition than the machinery industry. This study suggests that the Japanese electric equipment industry needs R&D expenditure to compete in its global market, but such expenditure is a high-risk, high-return investment. In contrast, the Japanese machinery sector is a technologically mature industry in which R&D expenditure positively influences financial performance; in this sense, the R&D expenditure is a low-risk and necessary investment.
6.
Within the data envelopment analysis context, problems of discrimination between efficient and inefficient decision-making units often arise, particularly if there is a relatively large number of variables relative to observations. This paper applies Monte Carlo simulation to generalize and compare two discrimination-improving methods: principal component analysis applied to data envelopment analysis (PCA–DEA) and variable reduction based on partial covariance (VR). Performance criteria are based on the percentage of observations incorrectly classified: efficient decision-making units mistakenly defined as inefficient, and inefficient units defined as efficient. A trade-off was observed, with both methods improving discrimination by reducing the probability of the latter error at the expense of a small increase in the probability of the former error. A comparison of the methodologies demonstrates that PCA–DEA provides a more powerful tool than VR, with consistently more accurate results. PCA–DEA is applied to all basic DEA models, and guidelines for its application are presented; they minimize misclassification and prove particularly useful when analyzing relatively small datasets, removing the need for additional preference information.
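A rough sketch of the PCA-before-DEA idea, under strong simplifications: the correlated inputs are compressed by PCA and an input-oriented CCR efficiency score is then computed by linear programming for each decision-making unit. The data are synthetic, and shifting the principal components to positive values is a crude device to keep the example short; it is not how the PCA–DEA literature treats sign issues.

```python
# Illustrative sketch: input-oriented CCR DEA efficiency via linear programming,
# with the (many, correlated) inputs first compressed by PCA.  Synthetic data;
# shifting principal components to be positive is a simplification used only to
# keep the example short.
import numpy as np
from scipy.optimize import linprog
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_dmu = 30
X_raw = np.abs(rng.normal(10, 2, size=(n_dmu, 6)))       # 6 inputs
Y = np.abs(rng.normal(5, 1, size=(n_dmu, 2)))             # 2 outputs

pcs = PCA(n_components=2).fit_transform(X_raw)
X = pcs - pcs.min(axis=0) + 1.0                            # shift PCs to be positive

def ccr_efficiency(o):
    """min theta  s.t.  X' lam <= theta * x_o,  Y' lam >= y_o,  lam >= 0."""
    m, s = X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n_dmu)]                        # variables: [theta, lambda]
    A_in  = np.hstack([-X[o][:, None], X.T])               # X'lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])            # -Y'lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n_dmu)
    return res.fun

scores = [round(ccr_efficiency(o), 3) for o in range(n_dmu)]
print("efficient DMUs:", [o for o, s in enumerate(scores) if s >= 0.999])
```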
7.
Newly developed data envelopment analysis techniques permit simultaneous consideration of ‘good’ and ‘bad’ outputs in evaluating efficiency. We use these techniques to determine joint ecological and technical efficiencies of the 437 largest fossil-fueled electricity-generating plants in the United States. Utilizing the EPA’s E-Grid and Clean Air Markets databases and drawing on ecological modernization theory, we evaluate whether innovations in organizational practices and technological solutions help achieve joint technical and environmental performance efficiencies.
8.
G. K. Kanji, International Journal of Mathematical Education in Science & Technology, 2013, 44(2): 155–160
The use of simulation methods for calculating power values in the case of non-normal errors is discussed. One- and two-way layouts are considered for the fixed-effects model. The Erlangian and contaminated normal distributions are taken as examples of non-normal error distributions. The results obtained by these methods are given in Tables 1 and 2, which indicate that, for inference concerning means, the power calculated under normal theory is only slightly affected by the non-normality of the errors.
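A minimal sketch of the simulation idea: estimate the power of the one-way fixed-effects ANOVA F-test by Monte Carlo when the errors follow a contaminated normal distribution. The group means, contamination fraction and sample sizes below are hypothetical choices, not those used in the paper.

```python
# Sketch: Monte Carlo estimate of one-way ANOVA F-test power under contaminated
# normal errors.  Group means, contamination level and sample sizes are
# hypothetical illustration values.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

def contaminated_normal(size, eps=0.1, scale_wide=3.0):
    """Mixture: with prob. eps draw from N(0, scale_wide^2), else from N(0, 1)."""
    wide = rng.random(size) < eps
    return np.where(wide, rng.normal(0, scale_wide, size), rng.normal(0, 1, size))

def power(group_means, n_per_group=20, n_rep=5000, alpha=0.05):
    rejections = 0
    for _ in range(n_rep):
        groups = [mu + contaminated_normal(n_per_group) for mu in group_means]
        if f_oneway(*groups).pvalue < alpha:
            rejections += 1
    return rejections / n_rep

print("power (contaminated normal errors):", power([0.0, 0.5, 1.0]))
```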
9.
In order to describe slow modulations in time and space of stable or slightly unstable spatially periodic stationary solutions of pattern-forming reaction–diffusion systems, so-called phase diffusion equations and Cahn–Hilliard equations can be derived via multiple scaling analysis as formal approximation equations. In the case that these equations degenerate, waiting time phenomena are well known to occur. In this paper, we prove that such waiting time phenomena can also occur approximately in the original reaction–diffusion systems, by proving estimates between the formal approximations and the exact solutions of the original systems.
10.
Linearity and input-separability assumptions are pervasive in most nonprofit cost and performance measurement systems. In this paper, we employ both parametric and nonparametric econometric methodologies to test the linearity and input-separability assumptions using data collected from the Minnesota independent school districts. Our test results reject both assumptions for the Minnesota independent school districts. These findings suggest a need to develop alternative procedures that do not rely on these two pervasive assumptions if the information provided by not-for-profit performance measurement systems is to be relevant for decision making and performance evaluation.
11.
The paper concerns the analysis of the operational complexity of a company's supplier–customer relations. A well-known approach to measuring operational complexity is based upon entropy; however, several variants of this approach exist. In the first part, we discuss various general measures of uncertainty of states, the power entropies in particular. In the second part, we use Shannon entropy as the base framework for two case studies: the first, a supplier–customer system that implements managerial thresholds for processing deviations in product delivery terms; the second, a supplier system for the most important commodity in the brewing industry, malted barley. In both cases, we assume the existence of problem-oriented databases containing detailed records of all product orders, deliveries and forecasts, in quantity and time, as scheduled and realized. The general procedure consists of three basic steps: pre-processing of the data with consistency checks in Java, calculation of histograms and empirical distribution functions, and finally, evaluation of conditional entropy. The last two steps are realized by Mathematica modules. Illustrative results of operational complexity measurement using entropy are provided for both case studies.
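A small sketch of the final step of the procedure above: map delivery-term deviations onto a few handling states via thresholds, form the empirical distribution, and compute its Shannon entropy. The state categories, thresholds and data are invented for illustration; the original case studies used Java pre-processing and Mathematica modules.

```python
# Sketch of the entropy-evaluation step: Shannon entropy of delivery-deviation
# states.  Thresholds, state labels and data are invented illustration values.
import numpy as np

rng = np.random.default_rng(4)

# Deviation (in days) between promised and realized delivery for each order.
deviations = rng.normal(0, 2, size=1000).round()

# Managerial thresholds map raw deviations onto a small set of handling states.
thresholds = [-3, -1, 1, 3]
labels = ["very early", "early", "on time", "late", "very late"]
states = np.digitize(deviations, thresholds)

# Empirical distribution of states and its Shannon entropy (bits).
p = np.bincount(states, minlength=len(labels)) / len(states)
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

for lab, pi in zip(labels, p):
    print(f"{lab:10s} {pi:.3f}")
print("operational complexity (Shannon entropy, bits):", round(float(entropy), 3))
```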
12.
We introduce modified Lagrange–Galerkin (MLG) methods of order one and two with respect to time to integrate convection–diffusion equations. As numerical tests show, the new methods are more efficient than the conventional Lagrange–Galerkin (LG) methods, while maintaining the same order of convergence, when they are used with either $P_1$ or $P_2$ finite elements. The error analysis reveals that: (1) when the problem is diffusion dominated, the convergence of the modified LG methods is of the form $O(h^{m+1} + h^{2} + \Delta t^{q})$, with $q = 1$ or $2$ and $m$ being the degree of the polynomials of the finite elements; (2) when the problem is convection dominated and the time step $\Delta t$ is large enough, the convergence is of the form $O(\frac{h^{m+1}}{\Delta t} + h^{2} + \Delta t^{q})$; (3) as in case (2) but with $\Delta t$ small, the order of convergence is $O(h^{m} + h^{2} + \Delta t^{q})$; (4) when the problem is convection dominated, the convergence is uniform with respect to the diffusion parameter $\nu(x,t)$, so that when $\nu \to 0$ and the forcing term is also equal to zero, the error tends to that of the pure convection problem. Our error analysis shows that the conventional LG methods exhibit the same error behavior as the MLG methods but without the term $h^{2}$. Numerical experiments support these theoretical results.
13.
We consider radial solutions blowing up in infinite time to a parabolic–elliptic system in N-dimensional Euclidean space. The system was introduced to describe the gravitational interaction of particles. In the case where N ≥ 2, positive radial solutions blowing up in finite time can be found. In the present paper, in the case where N ≥ 11, we find positive radial solutions blowing up in infinite time and investigate their blowup speeds, using so-called matched asymptotic expansion techniques and parabolic regularity.
14.
We study the large-time behavior of solutions of the Cauchy problem for the Hamilton–Jacobi equation $u_t + H(x, Du) = 0$ in $\mathbb{R}^n \times (0,\infty)$, where $H(x,p)$ is continuous on $\mathbb{R}^n \times \mathbb{R}^n$ and convex in $p$. We establish a general convergence result for viscosity solutions $u(x,t)$ of the Cauchy problem as $t \to \infty$.
15.
Xiaoming Wang, Numerische Mathematik, 2012, 121(4): 753–779
We investigate the long-time behavior of the following efficient second-order in time scheme for the 2D Navier–Stokes equations in a periodic box: $$\begin{array}{ll}{\frac{3\omega^{n+1} - 4\omega^n + \omega^{n-1}}{2k} + \nabla^\perp(2\psi^n - \psi^{n-1}) \cdot \nabla(2\omega^n - \omega^{n-1}) - \nu\Delta\omega^{n+1} = f^{n+1},} \\ {\quad -{\Delta} {\psi}^{n} = {\omega}^{n}.}\end{array}$$ The scheme is a combination of a second-order in time backward differentiation and a particular explicit Adams–Bashforth treatment of the advection term. Therefore only a linear constant-coefficient Poisson solver is needed at each time step. We prove uniform-in-time bounds on this scheme in $\dot{L}^2$, $\dot{H}^1_{per}$ and $\dot{H}^2_{per}$, provided that the time step is sufficiently small. These time-uniform estimates further lead to the convergence of the long-time statistics (stationary statistical properties) of the scheme to those of the NSE itself as the time step vanishes.
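A pseudo-spectral sketch of one step of the scheme displayed above, under simplifying assumptions: the advection term is evaluated explicitly from the extrapolated fields (2·current − previous), and the implicit part reduces to a constant-coefficient division in Fourier space. Dealiasing is omitted, and the initial vorticity, forcing and first-step start-up are arbitrary choices; this is an illustration, not the paper's code.

```python
# Pseudo-spectral sketch of the second-order BDF/Adams-Bashforth scheme above
# for the 2D vorticity equation in a periodic box.  Only constant-coefficient
# Fourier solves are needed per step.  Dealiasing omitted; data arbitrary.
import numpy as np

N, nu, k_dt = 64, 1e-2, 1e-3
kx = np.fft.fftfreq(N, 1.0 / N)                  # integer wavenumbers on [0, 2*pi)^2
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2_safe = np.where(K2 == 0, 1.0, K2)             # avoid dividing the mean mode by zero

def velocity(w_hat):
    """u = grad^perp psi, with -Laplace psi = w, i.e. psi_hat = w_hat / |k|^2."""
    psi_hat = w_hat / K2_safe
    u = np.real(np.fft.ifft2(-1j * KY * psi_hat))     # -d psi / dy
    v = np.real(np.fft.ifft2( 1j * KX * psi_hat))     #  d psi / dx
    return u, v

def advection_hat(w_hat):
    u, v = velocity(w_hat)
    wx = np.real(np.fft.ifft2(1j * KX * w_hat))
    wy = np.real(np.fft.ifft2(1j * KY * w_hat))
    return np.fft.fft2(u * wx + v * wy)

def step(w_hat, w_hat_old, f_hat):
    """(3w^{n+1}-4w^n+w^{n-1})/(2k) + N(2w^n-w^{n-1}) - nu*Lap(w^{n+1}) = f^{n+1}."""
    nonlinear = advection_hat(2.0 * w_hat - w_hat_old)
    rhs = 4.0 * w_hat - w_hat_old + 2.0 * k_dt * (f_hat - nonlinear)
    return rhs / (3.0 + 2.0 * k_dt * nu * K2)

# Arbitrary smooth initial vorticity, zero forcing, first step started with w^{-1}=w^0.
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
w_hat = w_hat_old = np.fft.fft2(np.sin(X) * np.cos(Y))
f_hat = np.zeros_like(w_hat)
for _ in range(100):
    w_hat, w_hat_old = step(w_hat, w_hat_old, f_hat), w_hat
print("max |vorticity| after 100 steps:", float(np.abs(np.fft.ifft2(w_hat)).max()))
```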
16.
Pricing rules specific to the German telecommunications market limit the incumbent's flexibility, providing a competitive advantage to all other market participants. More specifically, the incumbent is required not to offer products to its end customers at prices below a predetermined level, in order to prevent margin squeezes; in contrast, competitors can freely choose their pricing strategy. In this paper, we propose the imposition of equivalent price barriers on all market participants in order to avoid price margin squeezes and reduce regulatory discrimination at the same time. We tailor a duopoly model to the German context, integrating the regulation of access pricing and price margin squeezes. Under standard parameter assumptions, we demonstrate that no economically significant effects on the value of market participants are observed in the case of market-wide price regulation. We conclude that adjusting the current regulatory framework can enhance competition and increase welfare.
17.
In this article, we consider the two-dimensional nonlinear time–space fractional Schrödinger equation with the spatial operator given by the fractional Laplacian. A second-order fractional backward difference formula in the temporal direction, combined with a Fourier spectral method in the spatial direction, is proposed to solve the model numerically. In the numerical implementation, a fast method based on a globally uniform approximation of the trapezoidal rule for the integral on the real line is applied to decrease the memory requirement and computational cost. By using the generalized discrete Gronwall inequality developed by Dixon and McKee, together with a temporal–spatial error splitting argument, the convergence of the fast time-stepping numerical method is proved in a simple manner without imposing a Courant–Friedrichs–Lewy (CFL) condition. Finally, some numerical results are provided to support the theoretical analysis.
18.
S. Belhaiza, S. Charrad, R. M’Hallah, Journal of Optimization Theory and Applications, 2018, 177(2): 584–602
In this article, we focus on the conflict among the manager, the controller and the board of directors of a company. We model the problem as a three-player polymatrix game. Under a set of assumptions, we identify five potential Nash equilibria and prove that the Nash equilibrium is unique, despite its changing structure. Next, we analyze the influence of the manager's and controller's bonuses and penalties on the Nash equilibria. Finally, we explain how the manager and the controller may decrease or maintain their performance when their bonuses or penalties increase.
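To make the polymatrix structure concrete, the sketch below enumerates pure-strategy Nash equilibria of a generic three-player polymatrix game by exhaustive best-response checks. The 2x2 payoff matrices are random placeholders, not the manager/controller/board payoffs of the paper, and the paper's equilibria need not be pure.

```python
# Sketch: pure-strategy Nash-equilibrium check in a three-player polymatrix game.
# Each player's payoff is the sum of bimatrix payoffs against every other player.
# The payoff matrices are invented placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n_players, n_actions = 3, 2

# A[(i, j)] is player i's payoff matrix in the pairwise game against player j.
A = {(i, j): rng.integers(0, 5, size=(n_actions, n_actions))
     for i in range(n_players) for j in range(n_players) if i != j}

def payoff(player, profile):
    return sum(A[(player, j)][profile[player], profile[j]]
               for j in range(n_players) if j != player)

def is_nash(profile):
    for i in range(n_players):
        current = payoff(i, profile)
        for a in range(n_actions):
            deviated = list(profile)
            deviated[i] = a
            if payoff(i, tuple(deviated)) > current:
                return False
    return True

equilibria = [p for p in itertools.product(range(n_actions), repeat=n_players)
              if is_nash(p)]
print("pure-strategy Nash equilibria:", equilibria)
```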
19.
We discuss the existence of a blow-up solution for a multi-component parabolic–elliptic drift–diffusion model in higher space dimensions. We show the local existence, uniqueness and well-posedness of a solution in weighted \(L^2\) spaces. Moreover, we prove that if the initial data satisfy certain conditions, then the corresponding solution blows up in finite time. This is a system-case analogue of the blow-up results for the chemotactic and drift–diffusion equations proved by Nagai (J Inequal Appl 6:37–55, 2001) and Nagai et al. (Hiroshima J Math 30:463–497, 2000) and for the gravitational interaction of particles by Biler (Colloq Math 68:229–239, 1995) and Biler and Nadzieja (Colloq Math 66:319–334, 1994; Adv Differ Equ 3:177–197, 1998). We generalize the results in Kurokiba and Ogawa (Differ Integral Equ 16:427–452, 2003; Differ Integral Equ 28:441–472, 2015) and Kurokiba (Differ Integral Equ 27(5–6):425–446, 2014) to the multi-component problem and give a sufficient condition for the finite-time blow-up of the solution. The condition is different from the one obtained by Corrias et al. (Milan J Math 72:1–28, 2004).
20.
This paper presents a projection-based stabilization method for double-diffusive convection in Darcy–Brinkman flow. In particular, it is concerned with the convergence analysis of the velocity, temperature and concentration in the time-dependent case. Numerical experiments are presented to verify both the theory and the effectiveness of the method.