10 similar documents found (search time: 0 ms)
1.
2.
It is well known that if a one-dimensional function is continuously differentiable on [0, 1], then its Fourier-Haar series converges absolutely. On the other hand, if a function of two variables has continuous partial derivatives $f'_x$ and $f'_y$ on $T^2$, then its Fourier series with respect to the multiple Haar system does not necessarily converge absolutely (see [1]). In this paper we state sufficient conditions for the absolute convergence of the Fourier-Haar series of two-dimensional continuously differentiable functions.
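For reference, the normalized Haar system underlying these statements can be written as follows (a sketch using the common conventions; the paper's normalization may differ):

```latex
% One-dimensional Haar system on [0,1]: \chi_1 \equiv 1 and, for
% n = 2^k + m with k \ge 0 and 1 \le m \le 2^k,
\chi_n(x) =
\begin{cases}
  2^{k/2},  & x \in \big[\tfrac{m-1}{2^k},\, \tfrac{m-1/2}{2^k}\big),\\[2pt]
 -2^{k/2},  & x \in \big[\tfrac{m-1/2}{2^k},\, \tfrac{m}{2^k}\big),\\[2pt]
  0,        & \text{otherwise,}
\end{cases}
\qquad
c_n(f) = \int_0^1 f(x)\,\chi_n(x)\,dx .
% Absolute convergence means \sum_n |c_n(f)| < \infty; the multiple
% (two-dimensional) system consists of the products \chi_n(x)\chi_m(y),
% and the condition becomes \sum_{n,m} |c_{n,m}(f)| < \infty.
```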
3.
4.
This paper proposes a fully polynomial-time approximation algorithm for finding the global optimum of linear sum-of-ratios fractional programming problems, proves the algorithm's convergence and computational complexity theoretically, and demonstrates its feasibility with numerical examples.
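This problem class usually takes the following form (a sketch of the standard formulation; the paper's exact model is assumed, not quoted):

```latex
\max_{x \in D} \;\sum_{i=1}^{p} \frac{c_i^{\top} x + d_i}{e_i^{\top} x + f_i},
\qquad
D = \{\, x \in \mathbb{R}^n : A x \le b \,\},
\qquad
e_i^{\top} x + f_i > 0 \ \text{for all } x \in D .
```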
5.
Reconciliation consists in mapping a gene tree T into a species tree S, and explaining the incongruence between the two as evidence for duplication, loss and other events shaping the gene family represented by the leaves of T. When S is unknown, the Species Tree Inference Problem is to infer, from a set of gene trees, a species tree leading to a minimum reconciliation cost. As reconciliation is very sensitive to errors in T, gene tree correction prior to reconciliation is a fundamental task. In this paper, we investigate the complexity of four different combinatorial approaches for deleting misplaced leaves from T. First, we consider two problems (Minimum Leaf Removal and Minimum Species Removal) related to the reconciliation of T with a known species tree S. In the former (respectively, latter) we want to remove the minimum number of leaves (respectively, species) so that T is "MD-consistent" with S. Second, we consider two problems (Minimum Leaf Removal Inference and Minimum Species Removal Inference) related to species tree inference. In the former (respectively, latter) we want to remove the minimum number of leaves (respectively, species) from T so that there exists a species tree S such that T is MD-consistent with S. We prove that Minimum Leaf Removal and Minimum Species Removal are APX-hard, even when each label has at most two occurrences in the input gene tree, and we present fixed-parameter algorithms for the two problems. We prove that Minimum Leaf Removal Inference is not only NP-hard, but also W[2]-hard and inapproximable within a factor depending on n, the number of leaves in the gene tree. Finally, we show that Minimum Species Removal Inference is NP-hard and W[2]-hard when parameterized by the size of the solution, that is, the minimum number of species removed.
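As background for the reconciliation cost these problems minimize, here is a minimal sketch of the classic LCA-based mapping used to detect duplications (an illustration of standard reconciliation, not the paper's algorithms; the tuple tree representation and function names are assumptions):

```python
# Trees are nested binary tuples; leaves are species names.

def leaves(t):
    return {t} if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

def lca_map(species_tree, labels):
    """Smallest species-tree node whose leaf set contains `labels`."""
    node = species_tree
    while not isinstance(node, str):
        left, right = node
        if labels <= leaves(left):
            node = left
        elif labels <= leaves(right):
            node = right
        else:
            break
    return node

def count_duplications(gene_tree, species_tree):
    """A gene-tree node is a duplication if it maps to the same
    species-tree node as one of its children."""
    if isinstance(gene_tree, str):
        return 0
    left, right = gene_tree
    m = lca_map(species_tree, leaves(gene_tree))
    dup = int(m == lca_map(species_tree, leaves(left)) or
              m == lca_map(species_tree, leaves(right)))
    return (dup + count_duplications(left, species_tree)
                + count_duplications(right, species_tree))

S = (("a", "b"), "c")           # species tree ((a,b),c)
G = (("a", "b"), ("a", "c"))    # gene tree ((a,b),(a,c))
print(count_duplications(G, S)) # prints 1: root and its right child both map to the root of S
```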
6.
This study provides operational guidance for building naïve Bayes Bayesian network (BN) models for bankruptcy prediction. First, we suggest a heuristic method that guides the selection of bankruptcy predictors. Based on the correlations and partial correlations among variables, the method aims at eliminating redundant and less relevant variables. A naïve Bayes model is developed using the proposed heuristic method and is found to perform well based on a 10-fold validation analysis. The developed naïve Bayes model consists of eight first-order variables, six of which are continuous. We also provide guidance on building a cascaded model by selecting second-order variables to compensate for missing values of first-order variables. Second, we analyze whether the number of states into which the six continuous variables are discretized has an impact on the model's performance. Our results show that the model performs best when the number of states for discretization is either two or three; starting from four states, performance deteriorates, probably due to over-fitting. Third, we test whether modeling the continuous variables with continuous distributions instead of discretizing them can improve the model's performance. Our findings suggest that it cannot; one possible reason is that the continuous distributions tested in the study do not represent the underlying distributions of the empirical data well. Finally, the results of this study may also apply to business decision-making contexts other than bankruptcy prediction.
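A minimal sketch of the discretization experiment on synthetic data (not the study's bankruptcy dataset or variable set; the data, bin counts, and model choices below are illustrative assumptions):

```python
# Discretize continuous predictors into k states, fit a naive Bayes
# model, and compare 10-fold cross-validated accuracy across k,
# plus a continuous (Gaussian) baseline without discretization.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.naive_bayes import CategoricalNB, GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # six continuous predictors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

for k in (2, 3, 4, 5):               # number of discrete states per variable
    Xd = KBinsDiscretizer(n_bins=k, encode="ordinal",
                          strategy="quantile").fit_transform(X).astype(int)
    acc = cross_val_score(CategoricalNB(min_categories=k), Xd, y, cv=10).mean()
    print(f"{k} states: accuracy = {acc:.3f}")

print("Gaussian:", cross_val_score(GaussianNB(), X, y, cv=10).mean())
```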
7.
Central difference limit for derivatives of ill-posed functions: Best approach in supercomputing era
S.K. Sen, Applied Mathematics and Computation, 2010, 217(7): 3338-3348
A polynomial-time algorithm based on a revised iterative central-difference-limit method is presented for computing the numerical value of the derivative of a given analytic function. Through numerical experiments, we establish that this algorithm performs best among those tested. It can be used to obtain the derivative of violently fluctuating or rapidly oscillating functions to a desired accuracy, subject to the precision of the computer. In practice the time/computational complexity involved is so small that, in the non-mainframe supercomputing era, when an estimated 95% of computing resources goes unutilized and is hence wasted, this complexity is not an issue. For comparison, we have also included Matlab symbolic-cum-numerical computation to obtain the derivatives of the foregoing functions numerically. Matlab programs in both Matlab standard precision and Matlab variable precision are included for the central difference limit along with the symbolic-cum-numerical computation. Readers concerned with computing the derivative of an ill-conditioned function, large or small, can use these programs by copying, pasting, and executing them, and can readily check the quality of the derivative.
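A minimal sketch of the central-difference-limit idea (an assumed reading of the approach, not the paper's exact algorithm): halve the step size repeatedly and keep the estimate at which successive central differences agree most closely, since shrinking h further lets roundoff dominate.

```python
import math

def central_difference_limit(f, x, h0=0.1, levels=45):
    """Estimate f'(x) by halving h and keeping the most stable estimate."""
    h = h0
    prev = (f(x + h) - f(x - h)) / (2 * h)
    best, best_gap = prev, float("inf")
    for _ in range(levels):
        h /= 2
        cur = (f(x + h) - f(x - h)) / (2 * h)
        gap = abs(cur - prev)          # agreement of successive estimates
        if gap < best_gap:             # most stable estimate so far
            best, best_gap = cur, gap
        prev = cur
    return best

# Rapidly oscillating test: f(x) = sin(1000x), so f'(x) = 1000 cos(1000x)
f = lambda x: math.sin(1000 * x)
print(central_difference_limit(f, 0.1))   # approximately 862.32
print(1000 * math.cos(100.0))             # exact value for comparison
```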
8.
The objective of this study was to distinguish, within a population of patients, between those with and without breast cancer. The study was based on the University of Wisconsin's dataset of 569 patients, of whom 212 were subsequently found to have breast cancer. A subset-conjunctive model, which is related to Logical Analysis of Data, is described to distinguish between the two groups of patients based on the results of a non-invasive procedure called Fine Needle Aspiration, which is often used by physicians before deciding on the need for a biopsy. We formulate the problem of inferring subset-conjunctive rules as a 0-1 integer program, show that it is NP-hard, and prove that it admits no polynomial-time constant-ratio approximation algorithm. We examine the performance of a randomized algorithm, and of randomization using LP rounding; in both cases, the expected performance ratio is arbitrarily bad. We use a deterministic greedy algorithm to identify a Pareto-efficient set of subset-conjunctive rules; describe how the rules change with a re-weighting of the type-I and type-II errors, how the best rule changes with the subset size, and how much of a tradeoff is required between the two types of error as one selects a more stringent or more lax classification rule. An important aspect of the analysis is that we find a sequence of closely related efficient rules, which can be readily used in a clinical setting because they are simple and have the same structure as the rules currently used in clinical diagnosis.
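A hedged sketch of a greedy construction of one conjunctive rule under weighted type-I/type-II errors (illustrative only; the paper's subset-conjunctive model, candidate thresholds, and Pareto enumeration are assumptions here):

```python
import numpy as np

def weighted_error(pred, y, w1=1.0, w2=1.0):
    fp = np.sum(pred & (y == 0))      # type-I errors (false positives)
    fn = np.sum(~pred & (y == 1))     # type-II errors (false negatives)
    return w1 * fp + w2 * fn

def greedy_conjunctive_rule(X, y, max_terms=3, w1=1.0, w2=1.0):
    """Greedily add threshold tests X[:, j] >= t to a conjunction."""
    rule, pred = [], np.ones(len(y), dtype=bool)   # empty rule predicts 1
    for _ in range(max_terms):
        best = None
        for j in range(X.shape[1]):
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                cand = pred & (X[:, j] >= t)
                e = weighted_error(cand, y, w1, w2)
                if best is None or e < best[0]:
                    best = (e, j, t, cand)
        e, j, t, cand = best
        if e >= weighted_error(pred, y, w1, w2):
            break                                  # no test improves the rule
        rule.append((j, t))
        pred = cand
    return rule

# Re-running with different (w1, w2) weightings traces out candidate
# points for a Pareto-efficient set of rules.
```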
9.
Exclusion algorithms have been used recently to find all solutions of a system of nonlinear equations or to find the global minimum of a function over a compact domain. These algorithms are based on a minimization condition that can be applied to each cell in the domain. In this paper, we consider Lipschitz functions of order α and give a new minimization condition for the exclusion algorithm. Furthermore, convergence and complexity results are presented for such algorithms.
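A generic exclusion test of this kind, based on the standard Lipschitz bound of order α (a sketch; the paper's new condition is presumably sharper):

```latex
% If |f(x) - f(y)| \le L \|x - y\|^{\alpha}, then on a cell with
% center c and radius r,
\min_{x \in B(c,r)} f(x) \;\ge\; f(c) - L r^{\alpha},
% so the cell can be discarded from the search whenever
f(c) - L r^{\alpha} \;>\; f^{*}_{\mathrm{ub}},
% where f^{*}_{\mathrm{ub}} is the best function value found so far.
```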
10.
We classify all three-nonempty-part sandwich problems as either polynomial-time solvable or NP-complete. This settles the dichotomy into polynomial time and NP-complete for this class of graph partition problems.