Similar Documents
20 similar documents found.
1.
We resolve the computational complexity of determining the treelength of a graph, thereby solving an open problem of Dourisboure and Gavoille, who introduced this parameter and asked for the complexity of recognizing graphs of bounded treelength (Dourisboure and Gavoille, 2007 [6]). While recognizing graphs with treelength 1 is easily seen to be equivalent to recognizing chordal graphs, which can be done in linear time, the computational complexity of recognizing graphs with treelength 2 was unknown until this result. We show that the problem of determining whether a given graph has treelength at most k is NP-complete for every fixed k ≥ 2, and use this result to show that treelength in weighted graphs is hard to approximate within a factor smaller than . Additionally, we show that treelength can be computed in time O(1.7549^n) by giving an exact exponential-time algorithm for the Chordal Sandwich problem and showing how this algorithm can be used to compute the treelength of a graph.
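For reference, treelength is defined from tree decompositions (T, {B_t}) of G as

tl(G) \;=\; \min_{(T,\{B_t\})} \; \max_{t \in T} \; \operatorname{diam}_G(B_t),

where \operatorname{diam}_G(B_t) is the largest distance in G between two vertices of the bag B_t; treelength 1 thus corresponds exactly to the chordal graphs mentioned above.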

2.
We propose a one-norm support vector machine (SVM) formulation as an alternative to the well-known formulation that uses parameter C in order to balance the two inherent objective functions of the problem. Our formulation is motivated by the ε-constraint approach that is used in bicriteria optimization, and we propose expressing the objective of minimizing total empirical error as a constraint with a parametric right-hand side. Using dual variables we show equivalence of this formulation to the one with the trade-off parameter. We propose an algorithm that enumerates the entire efficient frontier by systematically changing the right-hand-side parameter. We discuss the results of a detailed computational analysis that portrays the structure of the efficient frontier as well as the computational burden associated with finding it. Our results indicate that the computational effort for obtaining the efficient frontier grows linearly in problem size, and the benefit in terms of classifier performance is almost always substantial when compared to a single run of the corresponding SVM. In addition, both the run time and accuracy compare favorably to other methods that search part or all of the regularization path of SVM.
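Schematically (the notation here is illustrative, not necessarily the paper's), the ε-constraint form of a one-norm SVM moves the total empirical error into a constraint with a parametric right-hand side,

\min_{w,b,\xi}\ \|w\|_1 \quad \text{subject to} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ \textstyle\sum_i \xi_i \le \varepsilon,

in place of the usual trade-off objective \|w\|_1 + C\sum_i \xi_i; sweeping ε then traces out the efficient frontier between the two criteria.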

3.
We describe explicitly each stage of a numerically stable algorithm for calculating with exponential tension B-splines with non-uniform choice of tension parameters. These splines are piecewise in the kernel of D^2(D^2 − p^2), where D stands for the ordinary derivative, defined on arbitrary meshes, with a different choice of the tension parameter p on each interval. The algorithm provides values of the associated B-splines and their generalized and ordinary derivatives by performing positive linear combinations of positive quantities, described as lower-order exponential tension splines. We show that nothing else but the knot insertion algorithm and good approximation of a few elementary functions is needed to achieve machine accuracy. The underlying theory is that of splines based on Chebyshev canonical systems which are not smooth enough to be ECC-systems. First, by the de Boor algorithm we construct an exponential tension spline of class C^1, and then we use quasi-Oslo type algorithms to evaluate classical non-uniform C^2 tension exponential splines.
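For context, on an interval with tension parameter p the null space of D^2(D^2 − p^2) is

\ker D^2(D^2 - p^2) \;=\; \operatorname{span}\{\,1,\ x,\ e^{px},\ e^{-px}\,\} \;=\; \operatorname{span}\{\,1,\ x,\ \cosh px,\ \sinh px\,\},

which degenerates to the cubic polynomials as p → 0 and pulls the spline toward the piecewise linear interpolant as p → ∞; this is what makes p act as a local stiffness control.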

4.
For sufficiently small C^1 perturbations of (nonautonomous) linear difference equations with a nonuniform exponential trichotomy, we establish the existence of center manifolds with the optimal C^1 regularity. We also consider the case of parameter-dependent perturbations and obtain the C^1 dependence of the center manifolds on the parameter. In addition, we consider arbitrary growth rates: the usual exponential estimates in the notion of exponential trichotomy are replaced by estimates expressed in terms of an arbitrary growth rate ρ. The proof of the regularity, both of the center manifold and of its dependence on the parameter, is based on the fiber contraction principle. The most technical part of the argument concerns the continuity of the fiber contraction, which essentially needs a direct argument.

5.
We derive expressions for the asymptotic approximation of the bias of the least squares estimators in nonlinear regression models with parameters which are subject to nonlinear equality constraints. The approach suggested modifies the normal equations of the estimator, and approximates them up to o_p(N^{-1}), where N is the number of observations. The bias equations so obtained are solved under different assumptions on the constraints and on the model. For functions of the parameters, the invariance of the approximate bias with respect to reparametrisations is demonstrated. Singular models are considered as well, in which case the constraints may serve either to identify the parameters, or eventually to restrict the parameter space.

6.
Multistage Bayesian decision procedures in statistical quality control are known from attribute sampling. In this paper they are introduced in a more general framework occurring in lot control by using the theory of Bayesian sequentially planned decision procedures. We show that under sufficiency and transitivity assumptions and monotonicity properties concerning the distribution and cost set-up, these Bayes procedures have (z, c^-, c^+)-structure, which, on one hand, generalizes results of K.-H. Waldmann and, on the other hand, reduces the computational effort significantly. Finally, examples taken from attribute sampling and life testing for an outgoing lot are presented. The theory presented in this paper is taken from [11].

7.
In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis–Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982 to 2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplementary materials available online.
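One generic way to write such a model (the notation below is a schematic assumption, not necessarily the article's) is

y_{jt} \mid \eta_t \;\sim\; \mathrm{EF}_j(\theta_{jt}), \qquad \theta_{jt} \;=\; g_j^{-1}\!\big(\mu_j + \lambda_j^{\top} \eta_t\big),

so that each series j keeps its own exponential-family distribution EF_j and link g_j, while the shared time-varying latent factors η_t induce dependence across the mixed-scale measurements.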

8.
A PL homotopy algorithm is modified to yield a polynomial-time result on its computational complexity. We prove that the cost of locating all zeros of a polynomial of degree n to an accuracy of ε (measured by the number of evaluations of the polynomial) grows no faster than O(max{n^4, n^3 log^2(n/ε)}). This work is in response to a question raised in a paper by S. Smale as to the efficiency of piecewise linear methods in solving equations. In comparison with the few results reported, the algorithm under discussion is the only one providing correct multiplicities and the only one employing vector labelling. This work is supported in part by the Foundation of Zhongshan University Advanced Research Centre, and in part by the National Natural Science Foundation of China.

9.
This paper develops a novel importance sampling algorithm for estimating the probability of large portfolio losses in the conditional independence framework. We apply exponential tilts to (i) the distribution of the natural sufficient statistics of the systematic risk factors and (ii) conditional default probabilities, given the simulated values of the systematic risk factors, and select parameter values by minimizing the Kullback–Leibler divergence of the resulting parametric family from the ideal (zero-variance) importance density. Optimal parameter values are shown to satisfy intuitive moment-matching conditions, and the asymptotic behaviour of large portfolios is used to approximate the requisite moments. In a sense we generalize the algorithm of Glasserman and Li (2005) so that it can be applied in a wider variety of models. We show how to implement our algorithm in the t copula model and compare its performance there to the algorithm developed by Chan and Kroese (2010). We find that our algorithm requires substantially less computational time (especially for large portfolios) but is slightly less accurate. Our algorithm can also be used to estimate more general risk measures, such as conditional tail expectations, whereas Chan and Kroese (2010) is specifically designed to estimate loss probabilities.
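As a rough illustration of the exponential-tilting idea in the conditional-independence framework, here is a minimal Python sketch in the Glasserman–Li spirit for a one-factor Gaussian-copula portfolio. The function name, the single-factor setup, and the root-finding bracket are illustrative assumptions, not the authors' algorithm (which also tilts the systematic risk factors and covers the t copula).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def tilted_loss_prob(threshold, exposures, pd, rho, n_sims=10_000, seed=0):
    """Estimate P(L > threshold) for L = sum_i exposures[i] * D_i, where the D_i
    are conditionally independent defaults in a one-factor Gaussian copula, by
    exponentially tilting the conditional default probabilities so that the
    conditional mean loss matches the threshold (moment matching).
    Assumes threshold is well below the total exposure."""
    rng = np.random.default_rng(seed)
    c, p = np.asarray(exposures, float), np.asarray(pd, float)
    barrier = norm.ppf(1.0 - p)              # default if rho*Z + sqrt(1-rho^2)*eps > barrier
    estimates = np.empty(n_sims)
    for k in range(n_sims):
        z = rng.standard_normal()            # systematic risk factor
        p_z = norm.cdf((rho * z - barrier) / np.sqrt(1.0 - rho**2))
        # Cumulant generating function of the conditional loss, and its derivative.
        psi  = lambda t: np.sum(np.log1p(p_z * (np.exp(t * c) - 1.0)))
        dpsi = lambda t: np.sum(c * p_z * np.exp(t * c)
                                / (1.0 + p_z * (np.exp(t * c) - 1.0)))
        # Tilt only when the conditional mean loss lies below the threshold.
        theta = brentq(lambda t: dpsi(t) - threshold, 0.0, 50.0) if dpsi(0.0) < threshold else 0.0
        q = p_z * np.exp(theta * c) / (1.0 + p_z * (np.exp(theta * c) - 1.0))
        defaults = rng.random(c.size) < q    # sample defaults under the tilted measure
        loss = c[defaults].sum()
        lr = np.exp(-theta * loss + psi(theta))   # likelihood ratio (change of measure)
        estimates[k] = lr * (loss > threshold)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_sims)
```

For example, `tilted_loss_prob(10.0, np.ones(100), np.full(100, 0.01), rho=0.3)` estimates the probability that more than ten of one hundred unit-exposure obligors default, together with its standard error.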

10.
Tree-structured models have been widely used because they function as interpretable prediction models that offer easy data visualization. A number of tree algorithms have been developed for univariate response data and can be extended to analyze multivariate response data. We propose a tree algorithm by combining the merits of a tree-based model and a mixed-effects model for longitudinal data. We alleviate variable selection bias through residual analysis, which is used to solve problems that exhaustive search approaches suffer from, such as undue preference to split variables with more possible splits, expensive computational cost, and end-cut preference. Most importantly, our tree algorithm discovers trends over time on each of the subspaces from recursive partitioning, while other tree algorithms predict responses. We investigate the performance of our algorithm with both simulation and real data studies. We also develop an R package melt that can be used conveniently and freely. Additional results are provided as online supplementary material.

11.
Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalized posterior distribution over the model parameters, which is rarely available in closed form. For doubly intractable models, estimating the evidence adds another layer of difficulty. Consequently, the selection of the model that best describes an observed network among a collection of exponential random graph models for network analysis is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to unreasonable inferences. This article specifies a method to adjust pseudolikelihoods to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost. Supplementary material for this article is available online.
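For context, the standard (unadjusted) pseudolikelihood of an exponential random graph model replaces the intractable joint likelihood with a product of full conditionals over dyads,

\mathrm{PL}(\theta \mid y) \;=\; \prod_{i<j} p\big(y_{ij} \mid y_{-ij}, \theta\big),

where each factor is logistic in the change statistics. It is cheap to evaluate, but treating it as the true likelihood typically misstates posterior uncertainty, which is the problem the proposed adjustment targets.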

12.
An efficient algorithm for the determination of Bayesian optimal discriminating designs for competing regression models is developed, where the main focus is on models with general distributional assumptions beyond the “classical” case of normally distributed homoscedastic errors. For this purpose, we consider a Bayesian version of the Kullback–Leibler (KL) criterion. Discretizing the prior distribution leads to local KL-optimal discriminating design problems for a large number of competing models. All currently available methods either require a large amount of computation time or fail to calculate the optimal discriminating design, because they can only deal efficiently with a few model comparisons. In this article, we develop a new algorithm for the determination of Bayesian optimal discriminating designs with respect to the Kullback–Leibler criterion. It is demonstrated that the new algorithm is able to calculate the optimal discriminating designs with reasonable accuracy and computational time in situations where all currently available procedures are either slow or fail.
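Schematically (the notation is illustrative, not necessarily the article's), a local KL-optimal design ξ for discriminating a model f_1 (at a fixed θ_1) from a rival f_2 maximizes

\mathrm{KL}_{1,2}(\xi)\;=\;\inf_{\theta_2}\ \int_{\mathcal X} \mathrm{KL}\big(f_1(\cdot\mid x,\theta_1),\, f_2(\cdot\mid x,\theta_2)\big)\,\xi(dx),

and the Bayesian criterion averages such quantities over a prior; discretizing that prior is what produces the large collection of local problems mentioned above.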

13.
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model with normal base measure, Gibbs sampling algorithms based on the Pólya urn scheme are often used to simulate posterior draws in conjugate models (essentially, linear regression models and models for binary outcomes). In the nonconjugate case, some common problems associated with existing simulation algorithms include convergence and mixing difficulties.

This article proposes an algorithm for MDP models with exponential family likelihoods and normal base measures. The algorithm proceeds by making a Laplace approximation to the likelihood function, thereby matching the proposal with that of the Gibbs sampler. The proposal is accepted or rejected via a Metropolis–Hastings step. For conjugate MDP models, the algorithm is identical to the Gibbs sampler. The performance of the technique is investigated using a Poisson regression model with semiparametric random effects. The algorithm performs efficiently and reliably, even in problems where large-sample results do not guarantee the success of the Laplace approximation. This is demonstrated by a simulation study where most of the count data consist of small numbers. The technique is associated with substantial benefits relative to existing methods, both in terms of convergence properties and computational cost.
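To make the construction concrete, here is a minimal Python sketch of one Metropolis–Hastings update for a single scalar Poisson random effect with a normal base measure, using a Laplace approximation of the conditional posterior as an independence proposal. The function name, the scalar setting, and the fixed hyperparameters are illustrative assumptions, not the article's full MDP sampler.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

def laplace_mh_step(b_current, y, offset, mu, tau, rng):
    """One MH update of a scalar random effect b in  y_j ~ Poisson(exp(offset_j + b)),
    b ~ N(mu, tau^2), proposing from a Laplace (normal) approximation to p(b | y)."""
    def neg_log_post(b):
        # Poisson log-likelihood (dropping the constant log y_j!) plus normal prior.
        return np.sum(np.exp(offset + b) - y * (offset + b)) + 0.5 * ((b - mu) / tau) ** 2

    b_hat = minimize_scalar(neg_log_post).x                    # mode of the conditional posterior
    curvature = np.sum(np.exp(offset + b_hat)) + 1.0 / tau**2  # negative second derivative at the mode
    prop_sd = 1.0 / np.sqrt(curvature)

    b_prop = rng.normal(b_hat, prop_sd)                        # independence proposal from the Laplace fit

    def log_post(b):
        return np.sum(poisson.logpmf(y, np.exp(offset + b))) + norm.logpdf(b, mu, tau)

    log_alpha = (log_post(b_prop) - log_post(b_current)
                 + norm.logpdf(b_current, b_hat, prop_sd)
                 - norm.logpdf(b_prop, b_hat, prop_sd))
    return b_prop if np.log(rng.random()) < log_alpha else b_current
```

When the likelihood is conjugate, the Laplace fit coincides with the exact conditional and the acceptance probability is one, which is consistent with the article's remark that the algorithm reduces to the Gibbs sampler in conjugate models.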

14.
This paper concerns the cubic smoothing spline approach to nonparametric regression. After first deriving sharp asymptotic formulas for the eigenvalues of the smoothing matrix, the paper uses these formulas to investigate the efficiency of different selection criteria for choosing the smoothing parameter. Special attention is paid to the generalized maximum likelihood (GML), C_p and extended exponential (EE) criteria and their marginal Bayesian interpretation. It is shown that (a) when the Bayesian model that motivates GML is true, using C_p to estimate the smoothing parameter would result in a loss of efficiency with a factor of 10/3, proving and strengthening a conjecture proposed in Stein (1990); (b) when the data indeed come from the C_p density, using GML would result in a loss of efficiency of ; (c) the loss of efficiency of the EE criterion is at most 1.543 when the data are sampled from its consistent density family. The paper not only studies equally spaced observations (the setting of Stein, 1990), but also investigates general sampling schemes of the design points, and shows that the efficiency results remain the same in both cases. This work is supported in part by NSF grant DMS-0204674 and the Harvard University Clark-Cooke Fund. Mathematics Subject Classification (2000): Primary: 62G08; Secondary: 62G20
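For orientation, with smoothing matrix A(λ) the C_p (unbiased risk) criterion is usually written in its standard textbook form as

C_p(\lambda)\;=\;\tfrac{1}{n}\,\big\|\big(I - A(\lambda)\big)y\big\|^2 \;+\; \tfrac{2\sigma^2}{n}\,\operatorname{tr} A(\lambda),

while GML chooses λ by maximizing the marginal likelihood of the Bayesian model under which the smoothing spline is the posterior mean; the efficiency comparisons above concern the mismatch between these two underlying model assumptions. (This generic form is stated only for context and is not taken from the paper.)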

15.
We describe an O(n^4 h min{log U, n^2 log n}) capacity scaling algorithm for the minimum cost submodular flow problem. Our algorithm modifies and extends the Edmonds–Karp capacity scaling algorithm for minimum cost flow to solve the minimum cost submodular flow problem. The modification entails scaling a relaxation parameter δ. Capacities are relaxed by attaching a complete directed graph with uniform arc capacity δ in each scaling phase. We then modify a feasible submodular flow by relaxing the submodular constraints, so that complementary slackness is satisfied. This creates discrepancies between the boundary of the flow and the base polyhedron of a relaxed submodular function. To reduce these discrepancies, we use a variant of the successive shortest path algorithm that augments flow along minimum cost paths of residual capacity at least δ. The shortest augmenting path subroutine we use is a variant of Dijkstra’s algorithm modified to handle exchange capacity arcs efficiently. The result is a weakly polynomial time algorithm whose running time is better than any existing submodular flow algorithm when U is small and C is big. We also show how to use maximum mean cuts to make the algorithm strongly polynomial. The resulting algorithm is the first capacity scaling algorithm to match the current best strongly polynomial bound for submodular flow. Received: August 6, 1999 / Accepted: July 2001 / Published online: October 2, 2001
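For context, the minimum cost submodular flow problem asks, on a digraph G = (V, A) with arc costs c and capacity bounds l ≤ u, for

\min\ \sum_{a\in A} c(a)\,x(a) \quad\text{s.t.}\quad l(a)\le x(a)\le u(a)\ \ (a\in A), \qquad \partial x \in B(f),

where \partial x is the net out-flow at each vertex and B(f) is the base polyhedron of a submodular function f; ordinary minimum cost flow (circulation) is the special case in which B(f) contains only the zero vector.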

16.
In this paper Bayesian analysis and the Wiener process are used in order to build an algorithm to solve the problem of global optimization. The paper is divided in two main parts. In the first part an already known algorithm is considered: a new (Bayesian) stopping rule is added to it and some results are given, such as an upper bound for the number of iterations under the new stopping rule. In the second part a new algorithm is introduced in which the Bayesian approach is exploited not only in the choice of the Wiener model but also in the estimation of the parameter σ² of the Wiener process, whose value appears to be quite crucial. Some results about this algorithm are also given.
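One standard way to treat the estimation of σ² in a Bayesian fashion (a generic conjugate update, stated for context and not necessarily the scheme used in the paper): the increments ΔW_i = W(t_{i+1}) − W(t_i) of a Wiener process with variance parameter σ² are independent N(0, σ²Δt_i), so an inverse-gamma prior σ² ~ IG(a, b) yields the posterior

\sigma^2 \mid \Delta W_1,\dots,\Delta W_n \;\sim\; \mathrm{IG}\!\Big(a + \tfrac{n}{2},\ \ b + \tfrac{1}{2}\sum_{i=1}^{n} \frac{(\Delta W_i)^2}{\Delta t_i}\Big).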

17.
We establish the robustness of nonuniform exponential dichotomies in Banach spaces, under sufficiently small C^1-parameterized perturbations. Moreover, we show that the stable and unstable subspaces of the exponential dichotomies obtained from the perturbation are also of class C^1 in the parameter, thus yielding optimal smoothness.

18.
An algorithm is proposed for minimizing certain nice C^2 functions f on E^n assuming only a computational knowledge of f and its gradient. It is shown that the algorithm provides global convergence at a rate which is eventually superlinear and possibly quadratic. The algorithm is purely algebraic and does not require the minimization of any functions of one variable. Numerical computation on specific problems with as many as six independent variables has shown that the method compares very favorably with the best of the other known methods. The method is compared with the Fletcher and Powell method for a simple two-dimensional test problem and for a six-dimensional problem arising in control theory. Supported by Air Force grant AF-AFOSR-937-65 and Boeing Scientific Research Laboratories.

19.
Consider the classical nonparametric regression problem y_i = f(t_i) + ε_i, i = 1,...,n, where t_i = i/n and the ε_i are i.i.d. zero-mean normal with variance σ². The aim is to estimate the true function f, which is assumed to belong to the smoothness class described by the Besov space B^s_{p,q}. These are functions belonging to L_p with derivatives up to order s, in the L_p sense. The parameter q controls a further, finer degree of smoothness. In a Bayesian setting, a prior on B^s_{p,q} is chosen following Abramovich, Sapatinas and Silverman (1998). We show that the optimal Bayesian estimator of f is then also a.s. in B^s_{p,q} if the loss function is chosen to be the Besov norm of B^s_{p,q}. Because it is impossible to compute this optimal Bayesian estimator analytically, we propose a stochastic algorithm based on an approximation of the Bayesian risk and simulated annealing. Some simulations are presented to show that the algorithm performs well and that the new estimator is competitive when compared to the more standard posterior mean.
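For orientation, in the wavelet-coefficient parameterization commonly used in this literature (and under the usual regularity conditions on the wavelet basis), the Besov norm admits the equivalent sequence-space form

\|f\|_{B^{s}_{p,q}} \;\asymp\; \Big(\sum_{j\ge j_0}\Big(2^{\,j(s+1/2-1/p)}\,\big\|\theta_{j\cdot}\big\|_{\ell_p}\Big)^{q}\Big)^{1/q},

where θ_{jk} are the wavelet coefficients of f at resolution level j; priors such as the one of Abramovich, Sapatinas and Silverman (1998) are placed directly on these coefficients.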

20.
We present a theoretical result on a path-following algorithm for convex programs. The algorithm employs a nonsmooth Newton subroutine. It starts from a near center of a restricted constraint set, performs a partial nonsmooth Newton step in each iteration, and converges to a point whose cost is within a prescribed accuracy of the optimal cost, in a number of iterations that depends on m, the number of constraints in the problem. Unlike other interior point methods, the analyzed algorithm only requires a first-order Lipschitzian condition and a generalized Hessian similarity condition on the objective and constraint functions. Therefore, our result indicates the theoretical feasibility of applying interior point methods to certain C^1-optimization problems instead of C^2-problems. Since the complexity bound is unchanged compared with similar algorithms for C^2-convex programming, the result shows that the smoothness of functions may not be a factor affecting the complexity of interior point methods. This author's work is supported in part by the National Science Foundation of the USA under grant DDM-8721709. This author's work is supported in part by the Australian Research Council.
