Similar Articles
20 similar articles found.
1.
In this paper, we propose a new method called the fractional natural decomposition method (FNDM). We prove new theorems for the FNDM and extend the natural transform method to fractional derivatives. We apply the FNDM to construct analytical and approximate solutions of the nonlinear time-fractional Harry Dym equation and the nonlinear time-fractional Fisher's equation. The fractional derivatives are described in the Caputo sense. The effectiveness of the FNDM is numerically confirmed. Copyright © 2016 John Wiley & Sons, Ltd.
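For orientation, a commonly studied form of the time-fractional Fisher's equation with a Caputo derivative of order 0 < α ≤ 1 is

$$ {}^{C}\!D_t^{\alpha} u = \frac{\partial^2 u}{\partial x^2} + u(1-u), \qquad {}^{C}\!D_t^{\alpha} u(x,t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, \frac{\partial u}{\partial s}(x,s)\, ds , $$

where α = 1 recovers the classical Fisher equation; the exact coefficients used in the paper may differ from this sketch.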

2.
In this paper, the generalized extended tanh-function method is used to construct traveling wave solutions of nonlinear evolution equations. We choose Fisher's equation and the nonlinear Schrödinger equation to illustrate the validity and advantages of the method. Many new and more general traveling wave solutions are obtained. Furthermore, the method can also be applied to other nonlinear equations in physics.
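As a hedged illustration of the general idea (not the paper's exact generalized ansatz), tanh-family methods seek traveling waves u(x,t) = U(ξ) with ξ = k(x − ct) and expand U in powers of tanh, the extended variant also admitting negative powers:

$$ U(\xi) = a_0 + \sum_{i=1}^{N} \left( a_i \tanh^{i}\xi + b_i \tanh^{-i}\xi \right), $$

where N is fixed by balancing the highest-order derivative against the strongest nonlinearity, and the constants a_i, b_i, k, c follow from equating coefficients of the resulting polynomial in tanh ξ.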

3.
4.
The aim of this letter is to apply the Lie group analysis method to Fisher's equation with time-fractional order. We carry out the symmetry analysis and construct explicit solutions of the time-fractional Fisher's (TFF) equation with the Riemann-Liouville (R-L) derivative. The time-fractional Fisher's equation is reduced to a nonlinear ordinary differential equation (ODE) of fractional order, which we then solve using an explicit power series method.
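For reference, the Riemann-Liouville derivative of order α with n − 1 < α < n is

$$ D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}\, \frac{d^{n}}{dt^{n}} \int_0^{t} (t-s)^{\,n-\alpha-1} f(s)\, ds , $$

and a fractional power-series ansatz of the form u(ξ) = Σ_{k≥0} a_k ξ^{kα} is the kind of expansion solved for after the symmetry reduction; the precise reduced variable ξ is specific to the paper.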

5.
The problem of estimating the tail index of a heavy-tailed distribution is important in many applications. We propose a new graphical method that addresses this problem by selecting an appropriate number of upper order statistics, and we investigate its theoretical properties. Several real datasets are analyzed using the new procedure, and a simulation study examines its performance in small, moderate and large samples. The results suggest that the new procedure overcomes many of the shortcomings of some of the most common techniques used to estimate the tail index, such as the Hill and Zipf plots, and that it performs very competitively compared with other adaptive threshold procedures based on the asymptotic mean squared error of the Hill estimator.
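The quantities involved can be anchored with a plain Hill estimator; the snippet below is a minimal sketch (it is not the paper's graphical selection procedure, and the Pareto sample is purely illustrative).

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimate of the tail index alpha from the k largest order statistics."""
    x = np.sort(np.asarray(data, dtype=float))
    if not (0 < k < len(x)):
        raise ValueError("k must satisfy 0 < k < n")
    tail = x[-k:]              # X_(n-k+1), ..., X_(n): the k largest observations
    threshold = x[-k - 1]      # X_(n-k): the threshold order statistic
    gamma_hat = np.mean(np.log(tail) - np.log(threshold))   # extreme value index
    return 1.0 / gamma_hat                                  # tail index alpha = 1 / gamma

# Hill "plot" in tabular form: estimates across several choices of k,
# the choice the paper's graphical method is designed to automate.
rng = np.random.default_rng(0)
sample = rng.pareto(2.5, size=5000) + 1.0   # classical Pareto sample, true alpha = 2.5
for k in (50, 200, 500):
    print(k, hill_estimator(sample, k))
```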

6.
Credit scoring is one of the most widely used applications of quantitative analysis in business. Behavioural scoring is a type of credit scoring performed on existing customers to assist lenders with decisions such as increasing a balance or promoting new products. This paper shows how survival analysis tools from reliability and maintenance modelling, specifically Cox's proportional hazards regression, can be used to build behavioural scoring models, whose performance is compared with that of logistic regression. The advantages of using survival analysis techniques in building scorecards are also illustrated by estimating the expected profit from personal loans, which cannot be done with existing behavioural risk systems.
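A minimal sketch of a Cox proportional-hazards fit in Python using the lifelines package; the recidivism dataset shipped with lifelines stands in for a behavioural-scoring sample (time to the event replaces time to default), since the loan data used in the paper are not public.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Stand-in data: each row is an individual with a follow-up duration and an event flag.
df = load_rossi()

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")   # Cox proportional hazards fit
cph.print_summary()                                    # estimated hazard ratios per covariate

# Ranking individuals by estimated relative risk: the survival-analysis analogue of a score.
risk = cph.predict_partial_hazard(df)
print(risk.head())
```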

7.
In high-dimensional classification problems, one is often interested in finding a few important discriminant directions in order to reduce the dimensionality. Fisher's linear discriminant analysis (LDA) is a commonly used method. Although LDA is guaranteed to find the best directions when each class has a Gaussian density with a common covariance matrix, it can fail if the class densities are more general. Using a likelihood-based interpretation of Fisher's LDA criterion, we develop a general method for finding important discriminant directions without assuming the class densities belong to any particular parametric family. We also show that our method can be easily integrated with projection pursuit density estimation to produce a powerful procedure for (reduced-rank) nonparametric discriminant analysis.
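For reference, Fisher's LDA criterion chooses discriminant directions by maximizing the ratio of between-class to within-class scatter,

$$ \hat{w} = \arg\max_{w} \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w}, $$

whose solutions are the leading eigenvectors of S_W^{-1} S_B. The likelihood-based reinterpretation developed in the paper relaxes the implicit Gaussian, equal-covariance assumption behind this ratio.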

8.
9.
Protein structural alignment is an important problem in computational biology. In this paper, we present the first successes in provably optimal pairwise alignment of protein inter-residue distance matrices, using the popular dali scoring function. We introduce the structural alignment problem formally, which enables us to express a variety of scoring functions used in previous work as special cases in a unified framework. Further, we propose the first mathematical model for computing optimal structural alignments based on dense inter-residue distance matrices, reformulating the problem as a special graph problem and giving a tight integer linear programming model. We then present algorithm engineering techniques to handle the huge integer linear programs arising from real-life distance matrix alignment problems. Applying these techniques, we can compute provably optimal dali alignments for the very first time.

10.
Seymour conjectured that every oriented simple graph contains a vertex whose second neighborhood is at least as large as its first. Seymour's conjecture has been verified in several special cases, most notably for tournaments by Fisher [6]. One extension of the conjecture that has been used by several researchers is to consider vertex-weighted digraphs. In this article we introduce a version of the conjecture for arc-weighted digraphs. We prove the conjecture in the special case of arc-weighted tournaments, strengthening Fisher's theorem. Our proof does not rely on Fisher's result, and thus can be seen as an alternative proof of that theorem.
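A small sketch of the unweighted property being generalized: it checks, for a simple oriented graph, whether some vertex has a second out-neighborhood at least as large as its first. The paper's arc-weighted version replaces these cardinalities by arc weights; the example digraph below is arbitrary.

```python
import networkx as nx

def seymour_vertex(D):
    """Return a vertex of the oriented graph D whose second out-neighborhood is
    at least as large as its first, or None if no such vertex exists."""
    for v in D.nodes:
        first = set(D.successors(v))
        second = {w for u in first for w in D.successors(u)} - first - {v}
        if len(second) >= len(first):
            return v
    return None

# Example: the directed 3-cycle; every vertex has |N+| = |N++| = 1.
D = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
print(seymour_vertex(D))
```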

11.
We formulate the problem of exact inference for Kendall's S and Spearman's D algebraically, using a general recursion formula developed by Smid for the score S with ties in both rankings. Analogous recursion formulas are shown to hold for the score D as well as for a log transform, F, of the score used in Fisher's exact test of independence in contingency tables. A new implementation of Mehta and Patel's network algorithm is then applied to obtain exact significance levels of either S or D for observations from both continuous and discrete distributions. A simple extension yields Fisher's exact test in r × c contingency tables. Observed CPU times for contingency table problems four to six of Mehta and Patel, and problems four and five of Clarkson, Fan, and Joe, are roughly 2/3 of those obtained using Clarkson et al.'s implementation of the network algorithm. It is shown that a hierarchy, F > S > D, holds with respect to the rate of aggregation. An algorithm for rapid lexicographic enumeration of the entries in a frequency table is also given.  相似文献   
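The scores involved have standard counterparts in SciPy, which is enough to illustrate the quantities being computed exactly; note that SciPy's routine covers the 2 × 2 case only, whereas the paper handles general r × c tables via the network algorithm.

```python
from scipy.stats import fisher_exact, kendalltau

# Fisher's exact test on a 2x2 contingency table.
odds_ratio, p_value = fisher_exact([[8, 2], [1, 5]])
print("Fisher exact p-value:", p_value)

# Kendall's correlation between two rankings with ties (exact or asymptotic p-value,
# chosen automatically by SciPy depending on sample size and ties).
tau, p_tau = kendalltau([1, 2, 2, 4, 5], [2, 1, 3, 3, 5])
print("Kendall tau:", tau, "p-value:", p_tau)
```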

12.
Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. Credit-quality data on rejected applicants are usually missing not at random (MNAR). In order to infer credit quality under MNAR, we propose a flexible method for generating the probability of missingness within a model-based bound-and-collapse Bayesian technique. We test the method's performance relative to traditional reject-inference methods using real data. The results show that our method improves the classification power of credit scoring models under MNAR conditions.

13.
The logistic regression framework has long been the most widely used statistical method for assessing customer credit risk. Recently, a more pragmatic approach has been adopted in which the primary goal is credit risk prediction rather than explanation. In this context, several classification techniques, such as support vector machines, have been shown to perform well on credit scoring. While the search for better classifiers is an important research topic, the specific methodology chosen in real-world applications has to deal with the challenges arising from real-world data collected in industry. Such data are often highly unbalanced, part of the information can be missing, and common hypotheses, such as the i.i.d. assumption, can be violated. In this paper we present a case study based on a sample of IBM Italian customers that presents all of these challenges. The main objective is to build and validate robust models able to handle missing information, class imbalance and non-i.i.d. data points. We define a missing data imputation method and propose the use of an ensemble classification technique, subagging, which is particularly suitable for highly unbalanced data such as credit scoring data. Both the imputation and subagging steps are embedded in a customized cross-validation loop that handles dependencies between different credit requests. The methodology has been applied using several classifiers (kernel support vector machines, nearest neighbors, decision trees, AdaBoost) and their subagged versions. The use of subagging improves the performance of the base classifier, and we show that subagged decision trees achieve better performance while keeping the model simple and reasonably interpretable.
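A minimal sketch of subagging with decision trees in scikit-learn: bagging on subsamples drawn without replacement (bootstrap=False). The synthetic unbalanced data and the plain cross-validation are stand-ins; the paper's pipeline additionally imputes missing values and uses a customized CV loop that respects dependence between credit requests.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical unbalanced credit-scoring-like data: roughly 95% "good", 5% "bad".
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

# Subagging = aggregating base learners fitted on subsamples drawn WITHOUT replacement.
subagged_tree = BaggingClassifier(
    DecisionTreeClassifier(max_depth=5),
    n_estimators=50,
    max_samples=0.5,     # each tree sees half of the training data
    bootstrap=False,     # sampling without replacement -> subagging
    random_state=0,
)
print(cross_val_score(subagged_tree, X, y, cv=5, scoring="roc_auc").mean())
```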

14.
15.
In recent years, support vector machines (SVMs) have been successfully applied to a wide range of applications. However, since the classifier is described by a complex mathematical function, it is rather incomprehensible for humans. This opacity prevents SVMs from being used in many real-life applications where both accuracy and comprehensibility are required, such as medical diagnosis and credit risk evaluation. To overcome this limitation, rules can be extracted from the trained SVM that are interpretable by humans and preserve as much of the SVM's accuracy as possible. In this paper, we provide an overview of recently proposed rule extraction techniques for SVMs and introduce two others taken from the artificial neural network domain, namely Trepan and G-REX. The described techniques are compared using publicly available datasets, such as Ripley's synthetic dataset and the multi-class iris dataset. We also look at medical diagnosis and credit scoring, where comprehensibility is a key requirement and even a regulatory recommendation. Our experiments show that the SVM rule extraction techniques lose only a small percentage in performance compared with SVMs and therefore rank at the top of comprehensible classification techniques.
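A simplified pedagogical (black-box) rule-extraction sketch in the same spirit as the techniques surveyed, though it is neither Trepan nor G-REX: an interpretable decision tree is fitted to the SVM's predictions, and its fidelity to the SVM is measured on held-out data.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Opaque model: an RBF-kernel SVM.
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# Pedagogical extraction: train a shallow tree on the SVM's *predicted* labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_tr, svm.predict(X_tr))

print(export_text(surrogate))                                        # human-readable rules
print("fidelity:", (surrogate.predict(X_te) == svm.predict(X_te)).mean())
print("accuracy:", (surrogate.predict(X_te) == y_te).mean())
```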

16.
The effect of the linear operator L used in Adomian's method for solving nonlinear partial differential equations on convergence is studied for Fisher's equation, which describes a balance between linear diffusion and nonlinear reaction. The results show that the convergence of the method is not influenced by the choice of the operator L in the equation to be solved. Furthermore, under some conditions, the results are close to those obtained using other numerical techniques.
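As a hedged sketch of the decomposition for Fisher's equation u_t = u_xx + u(1 − u) with the simplest choice L = ∂/∂t (the paper's point is precisely that other choices of L behave comparably):

$$ u = \sum_{n\ge 0} u_n, \qquad u^2 = \sum_{n\ge 0} A_n, \qquad u_0 = u(x,0), \qquad u_{n+1} = L^{-1}\!\left( \frac{\partial^2 u_n}{\partial x^2} + u_n - A_n \right), \qquad L^{-1}(\cdot) = \int_0^t (\cdot)\, ds, $$

with Adomian polynomials A_0 = u_0^2, A_1 = 2 u_0 u_1, and so on; the partial sums of the series give the approximate solution.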

17.
Markov models are commonly used to model many practical systems, such as telecommunication systems, manufacturing systems and inventory systems. However, higher-order Markov models are rarely used in practice because their huge numbers of states and parameters lead to computational difficulties. In this paper, we propose a higher-order Markov model whose numbers of states and parameters are linear in the order of the model, and we develop efficient estimation methods for the model parameters. We then apply the model and method to solve the generalised newsboy problem. Numerical examples with applications to production planning illustrate the power of the proposed model.
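One widely used linear-parameter formulation of a k-th order chain on m states, given as a hedged sketch of the kind of model meant (the paper's exact parameterization may differ):

$$ \mathbf{x}_{t+1} = \sum_{i=1}^{k} \lambda_i\, Q_i\, \mathbf{x}_{t-i+1}, \qquad \lambda_i \ge 0, \quad \sum_{i=1}^{k} \lambda_i = 1, $$

where x_t is the state-distribution vector and each Q_i is an m × m transition matrix, so the number of parameters grows linearly in the order k rather than exponentially as for a fully specified k-th order chain.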

18.
The Receiver Operating Characteristic (ROC) curve is one of the most widely used visual tools for evaluating the performance of scoring functions with respect to their capacity to discriminate between two populations. The goal of this paper is to propose a statistical learning method for constructing a scoring function with a nearly optimal ROC curve. In this bipartite setup, the target is known to be the regression function up to an increasing transform, and solving the optimization problem boils down to recovering the collection of level sets of the latter, which we interpret here as a continuum of imbricated classification problems. We propose a discretization approach, consisting of building a finite sequence of N classifiers by constrained empirical risk minimization and then constructing a piecewise constant scoring function s_N(x) by overlaying the resulting classifiers. Given the functional nature of the ROC criterion, the accuracy of the ranking induced by s_N(x) can be measured in a variety of ways, depending on the distance chosen for measuring closeness to the optimal curve in the ROC space. By relating the ROC curve of the resulting scoring function to piecewise linear approximants of the optimal ROC curve, we establish the consistency of the method as well as rate bounds that control its generalization ability in sup-norm. Finally, we also highlight the fact that, as a byproduct, the proposed algorithm provides an accurate estimate of the optimal ROC curve.
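A rough illustration of overlaying N classifiers into a piecewise constant score: here each "classifier" is simply a level set of a plug-in estimate of the regression function, which is a simplified stand-in for the paper's constrained empirical risk minimization step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=4000, random_state=0)
eta = LogisticRegression(max_iter=1000).fit(X[:2000], y[:2000]).predict_proba(X[2000:])[:, 1]

# Overlay N thresholded classifiers: s_N(x) counts how many level sets x belongs to,
# giving a piecewise constant scoring function with values in {0, ..., N}.
N = 8
thresholds = np.linspace(0, 1, N + 2)[1:-1]
s_N = sum((eta >= t).astype(int) for t in thresholds)

print("AUC of full score:            ", roc_auc_score(y[2000:], eta))
print("AUC of piecewise-constant s_N:", roc_auc_score(y[2000:], s_N))
```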

19.
In many statistical applications, data are collected over time and are likely to be correlated. In this paper, we investigate how to incorporate this correlation information into local linear regression. Under the assumption that the error process is autoregressive, a new estimation procedure for the nonparametric regression is proposed using the local linear regression method and profile least squares techniques. We further propose a SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedure and to compare it with the existing one. Our empirical studies show that the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology with an analysis of a real data set.
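A minimal working-independence local linear smoother (the naive baseline the paper improves on), run on data with AR(1) errors for illustration; the kernel, bandwidth and data below are arbitrary choices.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0) with a Gaussian kernel and bandwidth h,
    ignoring error correlation (the working-independence baseline)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local linear design
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                  # intercept = fitted value at x0

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 300)
e = np.zeros(300)
for t in range(1, 300):                             # AR(1) errors: the structure the paper exploits
    e[t] = 0.6 * e[t - 1] + 0.1 * rng.standard_normal()
y = np.sin(2 * np.pi * x) + e

print(local_linear(0.5, x, y, h=0.05), "truth:", np.sin(2 * np.pi * 0.5))
```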

20.
The integer least squares problem is an important problem that arises in numerous applications. We propose a real relaxation-based branch-and-bound (RRBB) method for this problem. First, we define a quantity called the distance to integrality, propose it as a measure of the number of nodes in the RRBB enumeration tree, and provide computational evidence that the size of the RRBB tree is proportional to this distance. Since the distance to integrality cannot be known a priori, we prove that the norm of the Moore–Penrose generalized inverse of the coefficient matrix is a key factor in bounding it, and we then propose a preconditioning method that reduces this norm using lattice reduction techniques. We also propose a set of valid box constraints that help accelerate the RRBB method. Our computational results show that the proposed preconditioning significantly reduces the size of the RRBB enumeration tree, that the preconditioning combined with the proposed box constraints can significantly reduce the computational time of RRBB, and that the resulting RRBB method can outperform the Schnorr–Euchner method, a widely used method for solving integer least squares problems, on some types of problem data.
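A naive sketch of the real relaxation and of the conditioning quantity the paper ties to the enumeration-tree size. Rounding the relaxed solution is only a baseline here, not the RRBB enumeration, and the "distance to integrality" computed below is a rough proxy: the paper's definition is relative to the optimal integer solution, which this sketch does not compute.

```python
import numpy as np

def relaxation_and_conditioning(A, y):
    """Solve the real relaxation of min ||y - A z|| over integer z, round it,
    and report a naive integrality gap plus the pseudoinverse norm used for
    preconditioning quality."""
    x_real, *_ = np.linalg.lstsq(A, y, rcond=None)          # unconstrained least squares
    z_rounded = np.round(x_real)                            # naive integer candidate
    gap = np.linalg.norm(x_real - z_rounded)                # rough proxy for distance to integrality
    pinv_norm = np.linalg.norm(np.linalg.pinv(A), 2)        # quantity the preconditioning reduces
    return z_rounded, gap, pinv_norm

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
z_true = rng.integers(-5, 5, size=10)
y = A @ z_true + 0.05 * rng.standard_normal(30)
print(relaxation_and_conditioning(A, y))
```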
