Similar Articles
20 similar articles found.
1.
We explore simultaneous modeling of several covariance matrices across groups using the spectral (eigenvalue) decomposition and modified Cholesky decomposition. We introduce several models for covariance matrices under different assumptions about the mean structure. We consider ‘dependence’ matrices, which tend to have many parameters, as constant across groups and/or parsimoniously modeled via a regression formulation. For ‘variances’, we consider both unrestricted across groups and more parsimoniously modeled via log-linear models. In all these models, we explore the propriety of the posterior when improper priors are used on the mean and ‘variance’ parameters (and in some cases, on components of the ‘dependence’ matrices). The models examined include several common Bayesian regression models, whose propriety has not been previously explored, as special cases. We propose a simple approach to weaken the assumption of constant dependence matrices in an automated fashion and describe how to compute Bayes factors to test the hypothesis of constant ‘dependence’ across groups. The models are applied to data from two longitudinal clinical studies.
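As a sketch of the decomposition these models build on (an illustration, not the authors' code), the following Python snippet computes the modified Cholesky factors of a covariance matrix: a unit lower triangular ‘dependence’ matrix T and a vector d of innovation ‘variances’ with T Σ Tᵀ = diag(d).

```python
import numpy as np

def modified_cholesky(sigma):
    """Modified Cholesky decomposition of a covariance matrix:
    returns a unit lower triangular T ('dependence' parameters below
    the diagonal) and a vector d of innovation 'variances' such that
    T @ sigma @ T.T equals diag(d)."""
    L = np.linalg.cholesky(sigma)   # sigma = L @ L.T, L lower triangular
    d = np.diag(L) ** 2             # innovation 'variance' parameters
    L_unit = L / np.diag(L)         # unit lower triangular factor
    T = np.linalg.inv(L_unit)       # 'dependence' matrix
    return T, d

sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
T, d = modified_cholesky(sigma)
# T @ sigma @ T.T recovers diag(d)
```

Because T is unit lower triangular and d is positive, the entries below the diagonal of T and log(d) are unconstrained, which is what makes regression and log-linear parameterizations of them natural.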

2.
Multiple hypothesis testing is concerned with appropriately controlling the rate of false positives, false negatives or both when testing several hypotheses simultaneously. Nowadays, the common approach to testing multiple hypotheses calls for controlling the expected proportion of falsely rejected null hypotheses, referred to as the false discovery rate (FDR), or suitable measures based on the positive false discovery rate (pFDR). In this paper, we consider the problem of determining levels at which both false positives and false negatives can be controlled simultaneously. As our risk function, we use the expected value of the maximum of the proportions of false positives and false negatives, with the expectation taken conditional on the event that at least one hypothesis is rejected and one is accepted, referred to as the hybrid error rate (HER). Based on HER, we then develop an analog of the p-value, termed the h-value, to test the individual hypotheses. The use of the new procedure is illustrated on the well-known public data set of Golub et al. [Molecular classification of cancer: class discovery and class prediction by gene expression monitoring, Science 286 (1999) 531-537], with Affymetrix arrays from patients with acute lymphoblastic leukemia and acute myeloid leukemia.
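For context, the FDR-controlling benchmark the abstract refers to is the Benjamini-Hochberg step-up procedure; a minimal NumPy version is sketched below (the abstract does not specify the h-value procedure itself, so this is the standard comparator, not the paper's method).

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the largest i with p_(i) <= i * alpha / m."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
rejected = benjamini_hochberg(pvals)
```

Note the step-up character: every hypothesis up to the largest passing index is rejected, even if some intermediate ordered p-value exceeds its own threshold.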

3.
The Wiener disorder problem seeks to determine a stopping time which is as close as possible to the (unknown) time of ‘disorder’ at which the drift of an observed Wiener process changes from one value to another. In this paper we present a solution of the Wiener disorder problem when the horizon is finite. The method of proof is based on reducing the initial problem to a parabolic free-boundary problem in which the continuation region is determined by a continuous curved boundary. By means of the change-of-variable formula containing the local time of a diffusion process on curves, we show that the optimal boundary can be characterized as the unique solution of a nonlinear integral equation.

4.
We study the convergence of the false discovery proportion (FDP) of the Benjamini-Hochberg procedure in the Gaussian equi-correlated model, when the correlation ρ_m converges to zero as the hypothesis number m grows to infinity. In this model, the FDP converges to the false discovery rate (FDR) at rate {min(m, 1/ρ_m)}^{1/2}, which differs from the standard convergence rate m^{1/2} that holds under independence.

5.
We present a method that scans a random field for localized clusters while controlling the fraction of false discoveries. We use a kernel density estimator as the test statistic and adjust for the bias in this estimator by a method we introduce in this paper. We also show how to combine information across multiple bandwidths while maintaining false discovery control.
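A plain kernel density estimator of the kind used as the test statistic can be sketched as follows; this omits the paper's bias adjustment and false discovery control, and the sample, grid and bandwidth h = 0.3 are arbitrary choices for illustration.

```python
import numpy as np

def kde_gauss(samples, grid, h):
    """Gaussian kernel density estimate evaluated on a grid of points."""
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
samples = rng.standard_normal(2000)      # toy one-dimensional data
grid = np.linspace(-6.0, 6.0, 601)
dens = kde_gauss(samples, grid, h=0.3)   # scan statistic over the grid
```

Scanning for clusters then amounts to flagging grid points where `dens` exceeds a threshold calibrated for false discovery control, which is the part the paper supplies.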

6.
Consider the heteroscedastic model Y = m(X) + σ(X)ε, where ε and X are independent, Y is subject to right censoring, m(·) is an unknown but smooth location function (e.g. the conditional mean, median or trimmed mean) and σ(·) is an unknown but smooth scale function. In this paper we consider the estimation of m(·) under this model. The estimator we propose is a Nadaraya-Watson type estimator in which the censored observations are replaced by ‘synthetic’ data points estimated under the above model. The estimator offers an alternative to the completely nonparametric estimator of m(·), which cannot be estimated consistently whenever high quantiles of the conditional distribution of Y given X = x are involved. We obtain the asymptotic properties of the proposed estimator of m(x) and study its finite sample behaviour in a simulation study. The method is also applied to a study of quasars in astronomy.
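In the uncensored special case the proposed estimator reduces to the classical Nadaraya-Watson smoother; a minimal Gaussian-kernel version is sketched below (the censoring adjustment via ‘synthetic’ data points is not reproduced here, and the linear test function is purely illustrative).

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, h):
    """Gaussian-kernel Nadaraya-Watson estimate of m at points x_eval:
    a locally weighted average of the responses y."""
    w = np.exp(-0.5 * ((np.asarray(x_eval)[:, None] - x[None, :]) / h) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# Noiseless linear example: m(x) = 2x, estimated at the interior point 0.5.
x = np.linspace(0.0, 1.0, 201)
y = 2.0 * x
m_hat = nadaraya_watson([0.5], x, y, h=0.05)
```

Under censoring, the idea of the paper is to feed this same smoother transformed responses in place of `y`, so that the weighted average remains a consistent estimate of the location function.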

7.
Consider the model Y = m(X) + ε, where m(·) = med(Y|·) is unknown but smooth. It is often assumed that ε and X are independent. In practice, however, this assumption is violated in many cases. In this paper we propose modeling the dependence between ε and X by means of a copula model, i.e. (ε, X) ∼ C_θ(F_ε(·), F_X(·)), where C_θ is a copula function depending on an unknown parameter θ, and F_ε and F_X are the marginals of ε and X. Since many parametric copula families contain the independence copula as a special case, the resulting regression model is more flexible than the ‘classical’ regression model. We estimate the parameter θ via a pseudo-likelihood method and prove the asymptotic normality of the estimator, based on delicate empirical process theory. We also study the estimation of the conditional distribution of Y given X. The procedure is illustrated by means of a simulation study, and the method is applied to data on food expenditures in households.

8.
A proportional reasoning item bank was created from the relevant literature and tested in various forms. Rasch analyses of 303 pupils’ test results were used to calibrate the bank, and data from 84 pupils’ interviews were used to confirm our diagnostic interpretations. A number of sub-tests were scaled, including parallel ‘without models’ and ‘with models’ forms. We provide details of the 13-item ‘without models’ test, which was formed from the ‘richest’ diagnostic items and verified on a further test sample (N = 212, ages 10-13). Two scales were constructed for this test, one that measures children’s ‘ratio attainment’ and one that measures their ‘tendency for additive strategy.’ Other significant errors — ‘incorrect build-up,’ ‘magical doubling/halving,’ ‘constant sum’ and ‘incomplete reasoning’ — were identified. Finally, an empirical hierarchy of pupils’ attainment of proportional reasoning was formed, incorporating the significant errors and the additive scale.

9.
Let X = {X(s)}_{s∈S} be an almost surely continuous stochastic process (S a compact subset of R^d) in the domain of attraction of some max-stable process, with index function constant over S. We study the tail distribution of ∫_S X(s) ds, which turns out to be of Generalized Pareto type with an extra ‘spatial’ parameter (the areal coefficient from Coles and Tawn (1996) [3]). Moreover, we discuss how to estimate the tail probability P(∫_S X(s) ds > x) for some high value x, based on independent and identically distributed copies of X. Along the way we also give an estimator for the areal coefficient. We prove consistency of the proposed estimators. Our methods are applied to the total rainfall in the North Holland area; i.e. X represents in this case the rainfall over the region for which we have observations, and its integral amounts to the total rainfall. The paper has two main purposes: first, to formalize and justify the results of Coles and Tawn (1996) [3]; second, to treat the problem in a non-parametric way, as opposed to their fully parametric methods.

10.
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.

11.
12.
We consider the rendezvous problem faced by two mobile agents, initially placed according to a known distribution on intersections in Manhattan (nodes of the integer lattice Z^2). We assume they can distinguish streets from avenues (the two axes) and move along a common axis in each period (both to an adjacent street or both to an adjacent avenue). However, they have no common notion of North or East (positive directions along axes). How should they move, from node to adjacent node, so as to minimize the expected time required to ‘see’ each other, i.e. to be on a common street or avenue? This is called ‘line-of-sight’ rendezvous. It is equivalent to a rendezvous problem where two rendezvousers attempt to find each other via two means of communication.

13.
This paper is concerned with the problem of testing a generalized multivariate linear hypothesis for the mean in the growth curve model (GMANOVA). Our interest is in the case in which the number of observed points p is relatively large compared to the sample size N. Asymptotic expansions of the non-null distributions of the likelihood ratio criterion, Lawley-Hotelling’s trace criterion and Bartlett-Nanda-Pillai’s trace criterion are derived under the asymptotic framework in which N and p go to infinity together, while p/N → c ∈ (0, 1). It can also be confirmed, theoretically and numerically, that Rothenberg’s condition on the magnitude of the asymptotic powers of the three tests is valid when p is relatively large.

14.
In this paper, we investigate a problem concerning quartets; a quartet is a particular kind of tree on four leaves. Loosely speaking, a set of quartets is said to be ‘definitive’ if it completely encapsulates the structure of some larger tree, and ‘minimal’ if it contains no redundant information. Here, we address the question of how large a minimal definitive quartet set on n leaves can be, showing that the maximum size is at least 2n−8 for all n≥4. This is an enjoyable problem to work on, and we present a pretty construction, which employs symmetry.

15.
In this note we define a subset of V-shaped sequences, ‘V-shaped about T’, which generalizes ‘V-shaped about d’ sequences. We derive a condition under which this subset contains an optimal sequence for a class of single machine sequencing problems. Cost functions from the literature are used to illustrate our results.

16.
The efficient and accurate calculation of sensitivities of the price of financial derivatives with respect to perturbations of the parameters in the underlying model, the so-called ‘Greeks’, remains a great practical challenge in the derivatives industry. This is true regardless of whether methods for partial differential equations or for stochastic differential equations (Monte Carlo techniques) are used. The computation of the ‘Greeks’ is essential to risk management and to the hedging of financial derivatives, and typically requires substantially more computing time than simply pricing the derivatives. Any numerical (Monte Carlo) algorithm for stochastic differential equations produces a time-discretization error and a statistical error in the process of pricing financial derivatives and calculating the associated ‘Greeks’. In this article we show how a posteriori error estimates and adaptive methods for stochastic differential equations can be used to control both of these errors in the context of pricing and hedging financial derivatives. In particular, we derive expansions, with leading-order terms computable in a posteriori form, of the time-discretization errors for the price and the associated ‘Greeks’. These expansions allow the user first to control the time-discretization errors adaptively when calculating the price, sensitivities and hedging parameters with respect to a large number of parameters, and then to ensure that the total errors are, with prescribed probability, within tolerance.
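As a toy illustration of Monte Carlo ‘Greeks’ (not the authors' adaptive a posteriori method), the snippet below prices a European call under geometric Brownian motion and computes its pathwise delta; the terminal value is simulated exactly in one step, so only the statistical error remains, and all parameter values are arbitrary.

```python
import numpy as np

def mc_call_price_and_delta(s0, strike, r, sigma, T, n, seed=1):
    """Monte Carlo price and pathwise delta of a European call under
    geometric Brownian motion.  The terminal value is simulated exactly,
    so there is no time-discretization error here; the statistical
    error decays like n**-0.5."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T)
    price = disc * np.maximum(s_T - strike, 0.0).mean()
    delta = disc * ((s_T > strike) * s_T / s0).mean()  # pathwise derivative
    return price, delta

price, delta = mc_call_price_and_delta(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

For a general SDE one would instead discretize the paths in time, and the two error sources the abstract describes — time-discretization and statistical — would both appear; the paper's contribution is controlling them adaptively.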

17.
In this paper, we present a new algorithm to evaluate the Kauffman bracket polynomial. The algorithm uses cyclic permutations to count the number of states obtained by applying ‘A’- and ‘B’-type smoothings to each crossing of the knot. We show that our algorithm can be implemented easily as a computer program.

18.
This paper introduces Cárnico-ICSPEA2, a metaheuristic co-evolutionary navigator designed by its end-user as an aid for the analysis and multi-objective optimisation of a beef cattle enterprise running on temperate pastures and fodder crops in Chalco, Mexico State, in the central plateau of Mexico. By combining simulation routines and a multi-objective evolutionary algorithm with a deterministic and stochastic framework, the software imitates the evolutionary behaviour of the system of interest, helping the farm manager to ‘navigate’ through his system’s dynamic phase space. The ultimate goal was to enhance the manager’s decision-making process and co-evolutionary skills, through an increased understanding of his system and the discovery of new, improved heuristics. This paper describes the numerical simulation and optimisation resulting from the application of Cárnico-ICSPEA2 to solve a specific multi-objective optimisation problem, along with implications for the management of the system of interest.

19.
We study the problem of cutting a number of pieces of the same length from n rolls of different lengths so that the remaining part of each utilized roll is either sufficiently short or sufficiently long. A piece is ‘sufficiently short’ if it is shorter than a pre-specified threshold value δ_min, so that it can be thrown away, as it cannot be used again for cutting future orders. A piece is ‘sufficiently long’ if it is longer than a pre-specified threshold value δ_max (with δ_max > δ_min), so that it can reasonably be expected to be usable for cutting future orders of almost any length. We show that this problem, faced by a curtaining wholesaler, is solvable in O(n log n) time by analyzing a non-trivial class of allocation problems.
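The per-roll feasibility condition can be made concrete: for a roll of a given length, the admissible numbers of pieces are those whose leftover is below δ_min or above δ_max. A small Python check of this condition (illustrative only, not the O(n log n) allocation algorithm) is:

```python
def feasible_piece_counts(roll_len, piece_len, d_min, d_max):
    """Numbers k of pieces that can be cut from one roll such that the
    leftover roll_len - k * piece_len is either 'sufficiently short'
    (< d_min) or 'sufficiently long' (> d_max)."""
    counts = []
    k = 0
    while k * piece_len <= roll_len:
        leftover = roll_len - k * piece_len
        if leftover < d_min or leftover > d_max:
            counts.append(k)
        k += 1
    return counts

# A roll of length 10, pieces of length 3, thresholds 0.5 and 2:
# cutting 3 pieces leaves 1, which is neither short enough nor long enough.
print(feasible_piece_counts(10.0, 3.0, 0.5, 2.0))  # → [0, 1, 2]
```

The combinatorial difficulty of the full problem is then choosing one feasible count per roll so that the total number of pieces meets the order, which is the allocation problem the authors analyze.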

20.
This paper reports one aspect of a larger study which looked at the strategies used by a selection of grade 6 students to solve six non-routine mathematical problems. The data revealed that the students exhibited many of the behaviours identified in the literature as being associated with novice and expert problem solvers. However, the categories of ‘novice’ and ‘expert’ were not fully adequate to describe the range of behaviours observed and instead three categories that were characteristic of behaviours associated with ‘naïve’, ‘routine’ and ‘sophisticated’ approaches to solving problems were identified. Furthermore, examination of individual cases revealed that each student's problem solving performance was consistent across a range of problems, indicating a particular orientation towards naïve, routine or sophisticated problem solving behaviours. This paper describes common problem solving behaviours and details three individual cases involving naïve, routine and sophisticated problem solvers.
