Similar Literature
20 similar records found (search time: 31 ms)
1.
In this paper we examine LF spaces, inductive limits of Fréchet spaces, in two different settings: the category CVS of convergence vector spaces and the category LCS of locally convex topological vector spaces. Special attention is given to permanence properties and retractivity properties in each case. Some interactions between properties of LF spaces in CVS and other properties in LCS are investigated. R. Beattie's research was supported by NSERC grant OGP0005316.

2.
Several criteria, such as CV, C_p, AIC, CAIC, and MAIC, are used for selecting variables in linear regression models. It might be noted that C_p has been proposed as an estimator of the expected standardized prediction error, whereas the target risk function of CV might be regarded as the expected prediction error R_PE. On the other hand, the target risk function of AIC, CAIC, and MAIC is the expected log-predictive likelihood. In this paper, we propose a prediction error criterion, PE, which is an estimator of the expected prediction error R_PE; consequently, it is also a competitor of CV. Results of this study show that PE is an unbiased estimator when the true model is contained in the full model, a property shown without the assumption of normality. In fact, PE is shown to be more faithful to its risk function than CV. The prediction error criterion PE is extended to the multivariate case. Furthermore, using simulations, we examine some peculiarities of all these criteria.
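For concreteness, the sketch below compares two of the criteria named above, Mallows' C_p and k-fold CV, for choosing among candidate predictor subsets in a linear regression. It is not the paper's PE criterion; the synthetic data, subset enumeration, and helper names are illustrative assumptions.

```python
# Minimal sketch: subset selection in linear regression with Mallows' Cp and 5-fold CV.
# The data, subsets, and helper names are illustrative; this is not the paper's PE criterion.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p_full = 100, 6
X = rng.normal(size=(n, p_full))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)   # only x0 and x2 matter

# Estimate sigma^2 from the full model (a standard choice for Cp).
full_fit = LinearRegression().fit(X, y)
rss_full = np.sum((y - full_fit.predict(X)) ** 2)
sigma2_hat = rss_full / (n - p_full - 1)

def mallows_cp(cols):
    """Cp = RSS_p / sigma2_hat - n + 2*(p+1), with p the number of predictors."""
    fit = LinearRegression().fit(X[:, cols], y)
    rss = np.sum((y - fit.predict(X[:, cols])) ** 2)
    return rss / sigma2_hat - n + 2 * (len(cols) + 1)

def cv_mse(cols):
    """5-fold cross-validated mean squared prediction error."""
    scores = cross_val_score(LinearRegression(), X[:, cols], y,
                             scoring="neg_mean_squared_error", cv=5)
    return -scores.mean()

subsets = [list(c) for k in range(1, p_full + 1)
           for c in itertools.combinations(range(p_full), k)]
best_cp = min(subsets, key=mallows_cp)
best_cv = min(subsets, key=cv_mse)
print("Cp choice:", best_cp, "  CV choice:", best_cv)
```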

3.
Cross-validation (CV) is often used to select the regularization parameter in high-dimensional problems. However, when applied to the sparse modeling method Lasso, CV leads to models that are unstable in high dimensions and consequently not suited for reliable interpretation. In this article, we propose a model-free criterion, ESCV, based on a new estimation stability (ES) metric and CV. Our proposed ESCV finds a locally ES-optimal model smaller than the CV choice, so that it fits the data well and also enjoys the estimation stability property. We demonstrate that ESCV is an effective alternative to CV at a similar, easily parallelizable computational cost. In particular, we compare the two approaches with respect to several performance measures when applied to the Lasso on both simulated and real datasets. For the dependent predictors common in practice, our main finding is that ESCV cuts down false positive rates, often by a large margin, while sacrificing little of the true positive rates. ESCV usually outperforms CV in terms of parameter estimation while giving similar performance in terms of prediction. For the two real datasets from neuroscience and cell biology, the models found by ESCV are less than half the size of those chosen by CV, yet preserve CV's predictive performance and corroborate subject knowledge and independent work. We also discuss some regularization parameter alignment issues that come up in both approaches. Supplementary materials are available online.
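The sketch below shows one simple way to combine an estimation-stability measure with CV for the Lasso, in the spirit of the abstract. The ES definition and the final selection rule here are simplified assumptions, not the authors' exact ESCV criterion.

```python
# Minimal sketch of an estimation-stability (ES) metric combined with CV for the Lasso.
# The ES definition and the selection rule are simplified assumptions, not the exact ESCV criterion.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, p = 120, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = [3, -2, 1.5, 1, -1]
y = X @ beta + rng.normal(size=n)

lambdas = np.logspace(-2, 0.5, 30)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# For each lambda, refit the Lasso on each fold's training part and measure how
# much the fitted mean X @ beta_hat varies across folds (the stability metric).
es = np.empty(len(lambdas))
for j, lam in enumerate(lambdas):
    fits = []
    for tr, _ in kf.split(X):
        b = Lasso(alpha=lam, max_iter=10000).fit(X[tr], y[tr]).coef_
        fits.append(X @ b)
    fits = np.array(fits)                      # shape (n_folds, n)
    mean_fit = fits.mean(axis=0)
    denom = np.sum(mean_fit ** 2) + 1e-12
    es[j] = np.mean(np.sum((fits - mean_fit) ** 2, axis=1)) / denom

# CV choice from scikit-learn, then pick the most stable lambda that is
# at least as sparse (i.e., at least as large) as the CV choice.
lam_cv = LassoCV(alphas=lambdas, cv=kf, max_iter=10000).fit(X, y).alpha_
mask = lambdas >= lam_cv
lam_escv = lambdas[mask][np.argmin(es[mask])]
print(f"lambda_CV = {lam_cv:.4f}, lambda_ESCV = {lam_escv:.4f}")
```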

4.
The aim of this paper is to present a thorough reassessment of the Snyman–Fatti (SF) Multi-start Global Minimization Algorithm with Dynamic Search Trajectories, first published twenty years ago. The reassessment is done with reference to a slightly modified version of the original method, the essentials of which are summarized here. Results of the performance of the current code on an extensive set of standard test problems in common use today are presented. This allows for a fair assessment of the performance of the SF algorithm relative to that of the popular Differential Evolution (DE) method, for which test results on the same standard set of test problems are also given. The tests show that the SF algorithm, which requires relatively few parameter settings, is a reliably robust and competitive method compared to DE. The results also indicate that the SF trajectory algorithm is particularly promising for solving minimum potential energy problems to determine the structure of atomic and molecular clusters.
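The SF trajectory method itself is not reproduced here. As a small runnable illustration of the comparison baseline, the sketch below runs SciPy's Differential Evolution implementation on the Rastrigin function, a standard multimodal benchmark; the test function and settings are illustrative choices, not those of the paper.

```python
# Minimal sketch: Differential Evolution (the comparison baseline in the abstract)
# applied to the Rastrigin test function. The benchmark and settings are
# illustrative choices, not the paper's test set or the SF algorithm itself.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5          # 5-dimensional search box
result = differential_evolution(rastrigin, bounds, seed=0, tol=1e-8, maxiter=2000)
print("global minimum estimate:", result.x, "f =", result.fun)
```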

5.
Optimal subset selection among a general family of threshold autoregressive moving-average (TARMA) models is considered. The usual complexity of model/order selection is increased by the need to capture the uncertainty of unknown threshold levels and an unknown delay lag. The Monte Carlo method of Bayesian model averaging provides a possible way to overcome such model uncertainty. Incorporating the idea of Bayesian model averaging, a modified stochastic search variable selection method is adapted to subset selection in TARMA models, by adding latent indicator variables for all potential model lags as part of the proposed Markov chain Monte Carlo sampling scheme. Metropolis–Hastings methods are employed to deal with the well-known difficulty of including moving-average terms in the model, and a novel proposal mechanism is designed for this purpose. Bayesian comparison of two hyper-parameter settings is carried out via a simulation study. The results demonstrate that the modified method has favourable performance under reasonable sample sizes and appropriate settings of the necessary hyper-parameters. Finally, the application to four real datasets illustrates that the proposed method can provide promising and parsimonious models from more than 16 million possible subsets.

6.
We combine the calculus of conormal distributions, in particular the Pull‐Back and Push‐Forward Theorems, with the method of layer potentials to solve the Dirichlet and Neumann problems on half‐spaces. We obtain full asymptotic expansions for the solutions, show that boundary layer potential operators are elements of the full b‐calculus, and give a new proof of the classical jump relations. En route, we improve Siegel and Talvila's growth estimates for the modified layer potentials in the case of polyhomogeneous boundary data. The techniques we use here can be generalised to geometrically more complex settings, for instance the exterior domain of touching domains or domains with fibred cusps. This work is intended to be a first step in a longer program aiming at understanding the method of layer potentials in the setting of certain non‐Lipschitz singularities that can be resolved in the sense of Melrose using manifolds with corners, and at applying a matching asymptotics ansatz to singular perturbations of related problems.

7.
This study presents a continuing investigation of influences on the outcomes achieved by students working in groups of three on tasks related to chance and data. Earlier research described final mathematical outcomes and identified 17 factors influencing three types of short-term outcomes for groups working in an isolated setting. The current report documents the 17 factors for groups working in a classroom setting; 1654 events for all groups in both settings are identified, and each is associated with a factor and a short-term outcome. Consideration is then given to variables that have the potential to influence the factors, the short-term outcomes, and their interaction. The overarching variable is the setting within which the collaboration took place. Within each setting, however, other variables operated: the task carried out, the age/grade of students, gender balance, and collaborative characteristics. The influences of these variables are described within the two settings before consideration is given to the overall influence of the settings.

8.
We show how to uniformly distribute data at random (not to be confused with permutation routing) in two settings that are able to deal with massive data: coarse-grained parallelism and external memory. In contrast to previously known work for parallel setups, our method is able to fulfill the three criteria of uniformity, work-optimality, and balance among the processors simultaneously. To guarantee uniformity we investigate the matrix of communication requests between the processors. We show that its distribution is a generalization of the multivariate hypergeometric distribution, and we give algorithms to sample it efficiently in the two settings.
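The sketch below illustrates the multivariate hypergeometric structure mentioned in the abstract: when items are assigned to uniformly random free slots under fixed target loads, each sender's row of the communication-request matrix, conditioned on the remaining capacities, is a multivariate hypergeometric draw. The sizes are illustrative assumptions, and this is a serial simulation rather than the paper's coarse-grained-parallel or external-memory algorithms.

```python
# Minimal sketch: building the matrix of communication requests for a uniformly
# random redistribution of items among processors with fixed target loads.
# Row i (items processor i sends to each receiver) is drawn from a multivariate
# hypergeometric distribution conditioned on the remaining receiver capacities.
# Sizes are illustrative; this is not the paper's parallel/external-memory code.
import numpy as np

rng = np.random.default_rng(42)
held = np.array([1000, 800, 1200, 600])      # items currently held by each processor
target = np.array([900, 900, 900, 900])      # balanced target load per processor
assert held.sum() == target.sum()

remaining = target.copy()
rows = []
for h in held:
    # This processor's items occupy a uniformly random subset of the free slots;
    # the per-receiver counts are then multivariate hypergeometric.
    row = rng.multivariate_hypergeometric(remaining, h)
    remaining -= row
    rows.append(row)

comm = np.vstack(rows)
print(comm)                                   # rows = senders, columns = receivers
print("column sums equal the target loads:", np.array_equal(comm.sum(axis=0), target))
```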

9.
This article introduces a general method for Bayesian computing in richly parameterized models, structured Markov chain Monte Carlo (SMCMC), that is based on a blocked hybrid of the Gibbs sampling and Metropolis–Hastings algorithms. SMCMC speeds algorithm convergence by using the structure that is present in the problem to suggest an appropriate Metropolis–Hastings candidate distribution. Although the approach is easiest to describe for hierarchical normal linear models, we show that its extension to both nonnormal and nonlinear cases is straightforward. After describing the method in detail, we compare its performance (in terms of run time and autocorrelation in the samples) to other existing methods, including the single-site updating Gibbs sampler available in the popular BUGS software package. Our results suggest significant improvements in convergence for many problems using SMCMC, as well as broad applicability of the method, including previously intractable hierarchical nonlinear model settings.

10.
This work presents some basic principles concerning two general settings for stably approximating orthogonal generalized inverses in Hilbert spaces. In these two settings, perfect convergence and perfect uniform convergence are two important concepts for theoretical and numerical analysis. Our basic principles characterize these two concepts by necessary conditions: stability together with other compensating conditions. These basic principles include, as simple corollaries, some well-known results.
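The paper works with general Hilbert-space operators; as a finite-dimensional illustration only of what a "stable approximation" of an orthogonal (Moore–Penrose) generalized inverse can look like, the sketch below compares the exact pseudoinverse of a nearly rank-deficient matrix with a truncated-SVD approximation. The matrix and truncation level are illustrative assumptions, not constructions from the paper.

```python
# Finite-dimensional illustration (not the paper's Hilbert-space setting):
# a truncated-SVD approximation of the Moore-Penrose inverse is a standard
# stable approximation when tiny singular values make the exact inverse blow up.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))
A[:, 4] = A[:, 0] + 1e-8 * rng.normal(size=8)   # nearly dependent column

def tsvd_pinv(A, tol):
    """Pseudoinverse with singular values below tol * s_max discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return (Vt[keep].T / s[keep]) @ U[:, keep].T

exact = np.linalg.pinv(A)          # keeps the tiny singular value here, so it is huge
stable = tsvd_pinv(A, tol=1e-6)    # discards the tiny singular value
print("norm of exact pinv:     ", np.linalg.norm(exact))
print("norm of truncated pinv: ", np.linalg.norm(stable))
```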

11.
Two families of parameter estimation procedures for the stable laws, based on a variant of the characteristic function, are provided. The methodology, which produces viable computational procedures for the stable laws, is generally applicable to other families of distributions across a variety of settings. Both families of procedures may be described as a modified weighted chi-squared minimization procedure, and both explicitly take account of constraints on the parameter space. Influence functions for, and efficiencies of, the estimators are given. If x1, x2, ..., xn is a random sample from an unknown distribution F, a method for determining the stable law to which F is attracted is developed. Procedures for regression and autoregression with stable error structure are provided. A number of examples are given.
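The sketch below is a much simplified stand-in for the procedures described above: it fits a stable law by matching its characteristic function to the empirical characteristic function with plain (unweighted) least squares rather than the paper's modified weighted chi-squared minimization. The parameterization of the characteristic function, the t-grid, and the starting values are assumptions, and the location estimate may differ from SciPy's parameterization by a shift.

```python
# Minimal sketch: fitting a stable law by least-squares matching of the
# characteristic function to the empirical characteristic function. This is a
# simplified stand-in for the paper's modified weighted chi-squared procedure.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

x = levy_stable.rvs(alpha=1.7, beta=0.3, loc=0.0, scale=1.0, size=2000, random_state=7)

t_grid = np.linspace(0.05, 1.0, 20)
ecf = np.array([np.mean(np.exp(1j * t * x)) for t in t_grid])   # empirical CF

def stable_cf(t, alpha, beta, delta, gamma):
    """Characteristic function of a stable law (one common parameterization, alpha != 1)."""
    return np.exp(1j * delta * t
                  - (gamma * np.abs(t)) ** alpha
                  * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2)))

def loss(params):
    alpha, beta, delta, gamma = params
    return np.sum(np.abs(stable_cf(t_grid, alpha, beta, delta, gamma) - ecf) ** 2)

res = minimize(loss, x0=[1.5, 0.0, 0.0, 1.0],
               bounds=[(1.1, 2.0), (-1.0, 1.0), (-2.0, 2.0), (0.1, 5.0)],
               method="L-BFGS-B")
print("estimated (alpha, beta, delta, gamma):", res.x)
```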

12.
Renaut, Rosemary; Su, Yi. Numerical Algorithms, 1997, 16(3–4): 255–281.
When the standard Chebyshev collocation method is used to solve a third-order differential equation with one Neumann boundary condition and two Dirichlet boundary conditions, the resulting differentiation matrix has spurious positive eigenvalues, and the extreme eigenvalue already reaches O(N^5) for N = 64. Stable time-steps are therefore very small in this case. A matrix operator with better stability properties is obtained by using the modified Chebyshev collocation method introduced by Kosloff and Tal-Ezer [3]. By a correct choice of mapping and implementation of the Neumann boundary condition, the matrix operator has an extreme eigenvalue less than O(N^4). The pseudospectral and modified pseudospectral methods are implemented for the solution of one-dimensional third-order partial differential equations, and the accuracy of the solutions is compared with that of finite difference techniques. The comparison verifies the stability analysis, and the modified method allows larger time-steps. Moreover, to attain the accuracy of the pseudospectral method, the finite difference methods are substantially more expensive. For the small N tested, N ⩽ 16, the modified pseudospectral method cannot compete with the standard approach.
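The sketch below constructs the standard (unmapped) Chebyshev collocation differentiation matrix and reports how the extreme eigenvalue of a third-derivative operator grows with N. The crude interior restriction used here is only an illustration; the paper's boundary-condition treatment and the Kosloff–Tal-Ezer mapping are not reproduced.

```python
# Minimal sketch: standard Chebyshev collocation differentiation matrix and the
# growth of the extreme eigenvalue of a third-derivative operator with N.
# The boundary-condition treatment and the Kosloff-Tal-Ezer mapping of the
# abstract are not reproduced here.
import numpy as np

def cheb(N):
    """First-order Chebyshev differentiation matrix on x_j = cos(j*pi/N)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal = negative row sums
    return D, x

for N in (8, 16, 32, 64):
    D, x = cheb(N)
    D3 = np.linalg.matrix_power(D, 3)                 # third-derivative operator
    lam = np.linalg.eigvals(D3[1:-1, 1:-1])           # crude interior restriction
    print(f"N = {N:3d}   max |eigenvalue| = {np.abs(lam).max():.3e}")
```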

13.
The coefficient of variation (CV) of a population is defined as the ratio of the population standard deviation to the population mean. It is regarded as a measure of stability or uncertainty, and can indicate the relative dispersion of data in the population about the population mean. CV is a dimensionless measure of scatter or dispersion and is readily interpretable, as opposed to other commonly used measures such as the standard deviation, mean absolute deviation, or error factor, which are only interpretable for the lognormal distribution. CV is often estimated by the ratio of the sample standard deviation to the sample mean, called the sample CV. Even for the normal distribution, the exact distribution of the sample CV is difficult to obtain, and hence it is difficult to draw inferences regarding the population CV in the frequentist framework. Different methods of estimating the sample standard deviation and the sample mean result in different shapes of the sampling distribution of the sample CV, from which inferences about the population CV can be made. In this paper we propose a simulation-based Bayesian approach to tackle this problem. A set of real data is used to generate the sampling distribution of the CV under the assumption that the data follow the three-parameter Gamma distribution. A probability interval is then constructed. The method also applies easily to the lognormal and Weibull distributions.
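The sketch below shows the simulation idea in miniature: fit a Gamma model to data, simulate the sampling distribution of the sample CV = s / x̄ under the fitted model, and report a percentile interval. A two-parameter Gamma fit via SciPy (location fixed at zero) stands in for the paper's three-parameter Gamma, and the data are synthetic.

```python
# Minimal sketch of a simulation-based interval for the coefficient of variation
# CV = sigma / mu. A two-parameter Gamma fit stands in for the three-parameter
# Gamma model of the paper, and the data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
data = stats.gamma.rvs(a=4.0, scale=2.5, size=60, random_state=rng)

# Fit a Gamma model to the observed data (location fixed at 0 for simplicity).
a_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)

# Simulate the sampling distribution of the sample CV under the fitted model.
n, B = len(data), 5000
sims = stats.gamma.rvs(a=a_hat, loc=loc_hat, scale=scale_hat,
                       size=(B, n), random_state=rng)
sample_cv = sims.std(axis=1, ddof=1) / sims.mean(axis=1)

lo, hi = np.percentile(sample_cv, [2.5, 97.5])
print(f"observed sample CV: {data.std(ddof=1) / data.mean():.3f}")
print(f"95% simulation interval for the CV: [{lo:.3f}, {hi:.3f}]")
```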

14.
The modified Newton method for multiple roots is organized as an interval method to include simultaneously the distinct roots of a given polynomial P in complex circular interval arithmetic. A condition on the starting disks which ensures convergence is given, and convergence is shown to be quadratic. As a consequence, a simple parallel algorithm to approach all the distinct roots of P is derived from the modified Newton method. The research reported in this paper has been made possible through the support and the sponsorship of the Italian Government through the Ministero per l'Università e la Ricerca Scientifica under Contract MURST 60%, 1990, at the Università di L'Aquila.
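The circular-interval version of the paper is not reproduced here; the sketch below only shows the underlying point iteration x ← x − m f(x)/f'(x), which restores quadratic convergence at a root of multiplicity m. The test polynomial and starting point are illustrative.

```python
# Minimal sketch of the underlying point iteration (not the circular-interval
# version of the paper): the modified Newton step x <- x - m*f(x)/f'(x)
# restores quadratic convergence at a root of multiplicity m.
def modified_newton(f, df, x0, m, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = m * f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# p(z) = (z - 1)^3 (z + 2): z = 1 is a root of multiplicity m = 3.
p  = lambda z: (z - 1) ** 3 * (z + 2)
dp = lambda z: 3 * (z - 1) ** 2 * (z + 2) + (z - 1) ** 3

print(modified_newton(p, dp, x0=1.5 + 0.1j, m=3))   # converges to 1 quadratically
```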

15.
Given a compact smooth manifold M with non-empty boundary and a Morse function, a pseudo-gradient Morse-Smale vector field adapted to the boundary allows one to build a Morse complex whose homology is isomorphic to the (absolute or relative to the boundary) homology of M with integer coefficients. Our approach simplifies other methods which have been discussed in more specific geometric settings.

16.
A modified version of the natural power method (NP) for fast estimation and tracking of the principal eigenvectors of a vector sequence is presented. It is an extension of the natural power method in that it obtains the principal eigenvectors themselves, and not only a tracking of the principal subspace. Compared with power-based methods such as the Oja method, the projection approximation subspace tracking (PAST) method, and the novel information criterion (NIC) method, the modified natural power method (MNP) has the fastest convergence rate and can be easily implemented with only O(np) flops of computation at each iteration, where n is the dimension of the vector sequence and p is the dimension of the principal subspace, i.e., the number of principal eigenvectors. Furthermore, it is guaranteed to be globally and exponentially convergent, in contrast with non-power-based methods such as MALASE and OPERA. Selected from Journal of Fudan University (Natural Science), 2004, 43(3): 275–284.
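As background for the power-based family discussed above, the sketch below runs the classical orthogonal (power) iteration for the leading p eigenvectors of a sample covariance matrix. This is the basic batch update that NP/MNP-style trackers build on, not the MNP algorithm of the paper; sizes and data are illustrative.

```python
# Minimal sketch: classical orthogonal (power) iteration for the leading p
# eigenvectors of a sample covariance matrix. This is not the MNP algorithm;
# it only illustrates the power-based update the abstract refers to.
import numpy as np

rng = np.random.default_rng(5)
n, p, T = 20, 3, 500
A = rng.normal(size=(n, n))
samples = rng.normal(size=(T, n)) @ A.T        # correlated vector sequence
C = samples.T @ samples / T                    # sample covariance, n x n

W = rng.normal(size=(n, p))                    # random initial basis
for _ in range(200):
    W, _ = np.linalg.qr(C @ W)                 # power step + re-orthonormalization

# Compare the spanned subspace with the true leading eigenvectors of C.
eigvals, eigvecs = np.linalg.eigh(C)
U = eigvecs[:, -p:]                            # leading p eigenvectors
overlap = np.linalg.norm(U.T @ W)              # close to sqrt(p) if subspaces agree
print("subspace overlap (max = sqrt(p) = %.3f): %.3f" % (np.sqrt(p), overlap))
```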

17.
We present an ensemble tree-based algorithm for variable selection in high-dimensional datasets, in settings where a time-to-event outcome is observed with error. This work is motivated by self-reported outcomes collected in large-scale epidemiologic studies, such as the Women's Health Initiative. The proposed methods apply equally to imperfect outcomes that arise in other settings, such as data extracted from electronic medical records. To evaluate the performance of our proposed algorithm, we present results from simulation studies, considering both continuous and categorical covariates. We illustrate this approach by discovering single nucleotide polymorphisms that are associated with incident Type 2 diabetes in the Women's Health Initiative. A freely available R package, icRSF, has been developed to implement the proposed methods. Supplementary material for this article is available online.

18.
A gradient flow-based explicit finite element method (L2GF) for reconstructing the 3D density function from a set of 2D electron micrographs has been proposed in recently published papers. The experimental results showed that the proposed method was superior to other classical algorithms, especially for highly noisy data. However, a convergence analysis of the L2GF method has not been conducted. In this paper, we present a complete analysis of the convergence of the L2GF method for the case of a more general form of regularization term, which includes the Tikhonov-type regularizer and the modified (smoothed) total variation regularizer as two special cases. We further prove that the L2-gradient flow method is stable and robust. These results demonstrate that the iterative variational reconstruction method derived from the L2-gradient flow approach is mathematically sound, effective, and has desirable properties. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In a recent paper McCormick and Ritter consider two classes of algorithms, namely methods of conjugate directions and quasi-Newton methods, for the problem of minimizing a function of n variables F(x). They show that the former methods possess an n-step superlinear rate of convergence while the latter are every-step superlinear and therefore inherently superior. In this paper a simple and computationally inexpensive modification of a method of conjugate directions is presented. It is shown that the modified method is a quasi-Newton method and is thus every-step superlinearly convergent. It is also shown that, under certain assumptions on the second derivatives of F, the rate of convergence of the modified method is n-step quadratic. This work was supported by the National Research Council of Canada under Research Grant A8189.
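The paper's modified conjugate-direction method is not reproduced here; the sketch below simply runs SciPy's conjugate-gradient and BFGS (quasi-Newton) minimizers on the Rosenbrock function to give a concrete feel for the two algorithm classes being compared. The test function and starting point are illustrative choices.

```python
# Minimal sketch: the two algorithm classes discussed above, conjugate directions
# and quasi-Newton, illustrated with SciPy's CG and BFGS on the Rosenbrock
# function. This is not the paper's modified conjugate-direction method.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0, -1.2, 1.0])
for method in ("CG", "BFGS"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(f"{method:5s}: iterations = {res.nit:4d}, f = {res.fun:.2e}")
```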

20.
The aim of this work is to study a new finite element (FE) formulation for the approximation of the nonsteady convection equation. Our approximation scheme is based on the Streamline Upwind Petrov–Galerkin (SUPG) method for the space variable, x, and a modified Euler implicit method for the time variable, t. The main interest of this scheme lies in its application to resolving, by a continuous FE method, the complex flow of a viscoelastic fluid obeying an Oldroyd-B differential model; this constituted our main motivation and allows us to treat the constitutive law equation, which expresses the relation between the stress tensor and the velocity gradient and includes a tensorial transport term. To make the analysis of the method clearer, we first study, in this article, this modified method for the advection equation. We point out the stability of this new method, and the error estimate of the approximate solution is discussed. © 2004 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2005.
