Similar Articles
20 similar articles found (search time: 78 ms)
1.
This paper studies spectral density estimation based on amplitude modulation, which includes missing data as a special case. A generalized periodogram is introduced and smoothed with a local linear regression smoother to give a consistent estimator of the spectral density. We explore the asymptotic properties of the proposed estimator and its application to time series with periodic missingness. A simple data-driven local bandwidth selection rule is proposed and an algorithm for computing the spectral density estimate is presented. The effectiveness of the proposed method is demonstrated using simulations. The application to outlier detection based on leave-one-out diagnostics is also considered. An illustrative example shows that the proposed diagnostic procedure succeeds in revealing outliers in time series without masking and smearing effects. Supported by Chinese NSF Grants 10001004 and 39930160, and a Fellowship of City University of Hong Kong.
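As a rough illustration of the amplitude-modulation idea, the sketch below (plain NumPy; the function names and normalization are ours, not the paper's) zeroes out missing observations, forms a periodogram, and smooths it with a local linear regression smoother:

```python
import numpy as np

def generalized_periodogram(x, observed):
    """Periodogram of the amplitude-modulated series: missing points are
    set to zero, and the normalization uses the number of observed points."""
    y = np.where(observed, x, 0.0)
    n = y.size
    freqs = np.fft.rfftfreq(n)
    I = np.abs(np.fft.rfft(y)) ** 2 / (2.0 * np.pi * observed.sum())
    return freqs, I

def local_linear_smooth(freqs, I, bandwidth):
    """Smooth the periodogram by local linear regression with a Gaussian kernel."""
    out = np.empty_like(I)
    for j, f in enumerate(freqs):
        w = np.exp(-0.5 * ((freqs - f) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(freqs), freqs - f])
        XtW = X.T * w                       # X^T diag(w)
        beta = np.linalg.solve(XtW @ X, XtW @ I)
        out[j] = beta[0]                    # local intercept = fitted value at f
    return out
```

A data-driven bandwidth, as the paper proposes, would replace the fixed `bandwidth` argument here.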

2.
In model-based analysis for comparative evaluation of strategies for disease treatment and management, the model of the disease is arguably the most critical element. A fundamental challenge in identifying model parameters arises from the limitations of available data, which challenges the ability to uniquely link model parameters to calibration targets. Consequently, the calibration of disease models leads to the discovery of multiple models that are similarly consistent with available data. This phenomenon is known as calibration uncertainty and its effect is transferred to the results of the analysis. Insufficient examination of the breadth of potential model parameters can create a false sense of confidence in the model recommendation, and ultimately cast doubt on the value of the analysis. This paper introduces a systematic approach to the examination of calibration uncertainty and its impact. We begin with a model of the calibration process as a constrained optimization problem and introduce the notion of plausible models which define the uncertainty region for model parameters. We illustrate the approach using a fictitious disease, and explore various methods for interpreting the outputs obtained.

3.
In this paper, we address the problem of planning the patient flow in hospitals subject to scarce medical resources with the objective of maximizing the contribution margin. We assume that we can classify a large enough percentage of elective patients according to their diagnosis-related group (DRG) and clinical pathway. The clinical pathway defines the procedures (such as different types of diagnostic activities and surgery) as well as the sequence in which they have to be applied to the patient. The decision is then on which day each procedure of each patient’s clinical pathway should be done, taking into account the sequence of procedures as well as scarce clinical resources, such that the contribution margin of all patients is maximized. We develop two mixed-integer programs (MIP) for this problem which are embedded in a static and a rolling horizon planning approach. Computational results on real-world data show that employing the MIPs leads to a significant improvement of the contribution margin compared to the contribution margin obtained by employing the planning approach currently practiced. Furthermore, we show that the time between admission and surgery is significantly reduced by applying our models.

4.
We consider a Markov chain in which the states are fuzzy subsets defined on some finite state space. Building on the relationship between set-valued Markov chains and the Dempster-Shafer combination rule, we construct a procedure for finding transition probabilities from one fuzzy state to another. This construction involves Dempster-Shafer type mass functions having fuzzy focal elements. It also involves a measure of the degree to which two fuzzy sets are equal. We also show how to find approximate transition probabilities from a fuzzy state to a crisp state in the original state space.
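A toy illustration of the two ingredients mentioned in the abstract: a degree-of-equality measure for fuzzy sets and approximate fuzzy-to-crisp transition probabilities. The Lukasiewicz implication and the membership-weighted row averaging are our illustrative choices, not necessarily the authors':

```python
import numpy as np

def fuzzy_equality(a, b):
    """Degree to which two fuzzy sets (membership vectors on the same finite
    space) are equal: the minimum over points of the two-sided Lukasiewicz
    implication. One of several possible operators."""
    imp_ab = np.minimum(1.0, 1.0 - a + b)
    imp_ba = np.minimum(1.0, 1.0 - b + a)
    return float(np.min(np.minimum(imp_ab, imp_ba)))

def approx_transition_to_crisp(F, P):
    """Approximate transition probabilities from a fuzzy state F to each
    crisp state: rows of the crisp transition matrix P weighted by the
    normalized membership grades of F."""
    w = F / F.sum()
    return w @ P
```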

5.
Two-sample hypothesis testing for random graphs arises naturally in neuroscience, social networks, and machine learning. In this article, we consider a semiparametric problem of two-sample hypothesis testing for a class of latent position random graphs. We formulate a notion of consistency in this context and propose a valid test for the hypothesis that two finite-dimensional random dot product graphs on a common vertex set have the same generating latent positions or have generating latent positions that are scaled or diagonal transformations of one another. Our test statistic is a function of a spectral decomposition of the adjacency matrix for each graph and our test procedure is consistent across a broad range of alternatives. We apply our test procedure to real biological data: in a test-retest dataset of neural connectome graphs, we are able to distinguish between scans from different subjects; and in the C. elegans connectome, we are able to distinguish between chemical and electrical networks. The latter example is a concrete demonstration that our test can have power even for small sample sizes. We conclude by discussing the relationship between our test procedure and generalized likelihood ratio tests. Supplementary materials for this article are available online.
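The spectral ingredient can be sketched in a few lines: an adjacency spectral embedding followed by a Procrustes-aligned distance between the two embeddings. This statistic is in the spirit of the paper's but the exact form and normalization below are ours:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Scaled top-d eigenvectors of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def procrustes_distance(A1, A2, d):
    """Distance between the two embeddings after optimal orthogonal
    alignment (embeddings are only identified up to rotation)."""
    X = adjacency_spectral_embedding(A1, d)
    Y = adjacency_spectral_embedding(A2, d)
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return float(np.linalg.norm(X @ (U @ Vt) - Y))
```

A large distance, calibrated against its null distribution, would lead to rejecting equality of the latent positions.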

6.
This paper studies sensor calibration in spectral estimation where the true frequencies are located on a continuous domain. We consider a uniform array of sensors that collects measurements whose spectrum is composed of a finite number of frequencies, where each sensor has an unknown calibration parameter. Our goal is to recover the spectrum and the calibration parameters simultaneously from multiple snapshots of the measurements. In the noiseless case with an infinite number of snapshots, we prove uniqueness of this problem up to certain trivial, inevitable ambiguities based on an algebraic method, as long as there are more sensors than frequencies. We then analyze the sensitivity of this algebraic technique with respect to the number of snapshots and noise. We next propose an optimization approach that makes full use of the measurements by minimizing a non-convex objective which is non-negative and continuously differentiable over all calibration parameters and Toeplitz matrices. We prove that, in the case of infinite snapshots and noiseless measurements, the objective vanishes only at solutions equivalent to the true calibration parameters and the measurement covariance matrix. The objective is minimized using Wirtinger gradient descent, which is proven to converge to a critical point. We show empirically that this critical point provides a good approximation of the true calibration parameters and the underlying frequencies.

7.
We derive the spectral decomposition of a covariance matrix for the balanced mixed analysis of variance model. The derivation is based on determining the distinct eigenvalues of a covariance matrix and then obtaining a principal idempotent matrix for each distinct eigenvalue. Examples are given to illustrate the results.
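For the simplest balanced case, a single random factor giving a compound-symmetric covariance, the distinct eigenvalues and their principal idempotents can be written down explicitly; a small numerical check with arbitrarily chosen variance components:

```python
import numpy as np

# Compound-symmetric covariance from a balanced one-way random effects model:
# V = sigma_a2 * ones(n, n) + sigma_e2 * I.
n, sigma_a2, sigma_e2 = 4, 2.0, 1.0
I = np.eye(n)
J = np.ones((n, n)) / n            # averaging matrix: principal idempotent
V = sigma_a2 * np.ones((n, n)) + sigma_e2 * I

# Distinct eigenvalues and their principal idempotents:
lam1 = n * sigma_a2 + sigma_e2     # eigenvalue of J      (multiplicity 1)
lam2 = sigma_e2                    # eigenvalue of I - J  (multiplicity n - 1)
V_spectral = lam1 * J + lam2 * (I - J)
```

The idempotents J and I - J are orthogonal projections summing to the identity, which is what makes the decomposition immediate.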

8.
A. K. Nandakumaran, Hari M. Varma, R. Mohan Vasu. PAMM, 2007, 7(1): 2010017–2010018
We obtain the reconstruction of the refractive index distribution of a body from measurements of the intensity and the normal derivative of the intensity. The Helmholtz equation is inverted, either directly or indirectly through repeated application of the forward operator and its adjoint, to recover the complex refractive index distribution. We do not adopt the usual procedure of phase recovery (normally required for complete knowledge of the field distribution). We derive certain sensitivity relations which are used for easy computation of the Jacobian. Our procedure successfully reconstructs the real and imaginary parts of the complex refractive index from the two data types derived from the complex amplitude at the boundary. Our other interest is the reconstruction of the spectroscopic variations of optical absorption coefficients and visco-elastic properties of a tissue, which is extremely useful in diagnostic medicine. This research is in progress and some results are already available. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
From the literature, it is known that the Least-Squares Spectral Element Method (LSSEM) for the stationary Stokes equations performs poorly with respect to mass conservation but compensates for this lack by a superior conservation of momentum. Furthermore, it is known that the Least-Squares Spectral Collocation Method (LSSCM) leads to superior conservation of mass and momentum for the stationary Stokes equations. In the present paper, we consider mass and momentum conservation of the LSSCM for the time-dependent Stokes and Navier-Stokes equations. We observe that the LSSCM leads to improved conservation of mass (and momentum) for these problems. Furthermore, the LSSCM leads to the well-known time-dependent velocity and pressure profiles. To obtain these results, we use only a few elements, each with high polynomial degree, avoid normal equations for solving the overdetermined linear systems of equations, and introduce the Clenshaw-Curtis quadrature rule for imposing the average pressure to be zero. Furthermore, we combine the transformation of Gordon and Hall (transfinite mapping) with the least-squares spectral collocation scheme to discretize the internal flow problems.

10.
11.
This paper addresses the problem of buying commodities through the futures markets, and deals specifically with a heuristic rule developed for the scenario described as 'purchasing under a deadline'. The rule is based on short-term forecasts produced by Taylor's price-trend model. In a previous study applied to the Chicago Board of Trade (CBOT) corn futures market, the price-trend parameters of the stochastic process generating the daily returns were shown to be nearly stable over time and hence could be estimated using a static procedure. However, the analysis presented in this paper concerning the CBOT soybean futures market strongly suggests that those parameters were unstable, impairing the successful application of the purchasing rule. The authors recommend continuous CUSUM monitoring of the purchasing results and propose a procedure for dynamically calibrating the price-trend and buying parameters. Under this procedure the price-trend parameter estimates are derived from exponentially smoothed sample autocorrelation coefficients of the rescaled daily returns. The procedure was developed and tested using the 1972-87 series of CBOT daily soybean futures closing prices. The results suggest that it improves on the purchasing results derived from the static calibration procedure formerly adopted.
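The CUSUM monitoring idea can be sketched in a few lines; the reference value k, and the use of purchasing-result deviations as the monitored quantity, are illustrative choices rather than the paper's exact scheme:

```python
def cusum_upper(deviations, k=0.5):
    """One-sided upper CUSUM: accumulates deviations above the reference
    value k and resets at zero. A signal is raised when the path exceeds
    a decision threshold h chosen by the monitor."""
    s, path = 0.0, []
    for r in deviations:
        s = max(0.0, s + r - k)
        path.append(s)
    return path
```

When the path crosses the threshold, the price-trend and buying parameters would be recalibrated, in the spirit of the dynamic procedure the authors propose.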

12.
We consider an iterative procedure where at each iteration a constrained quadratic optimization problem is solved. A Goldstein type step length rule is incorporated in order to assure global convergence. We consider a class of minimization problems which are singular at the optimal point and show that locally the superlinear convergence rate is retained for a certain part of the iterations. These results are applied to problems with perturbed data. In the last section global convergence results are proven.

13.
We study smoothers for the multigrid method of the second kind arising from Fredholm integral equations. Our model problems use nonlocal governing operators that enforce local boundary conditions. For discretization, we utilize the Nyström method with the trapezoidal rule. We find the eigenvalues of matrices associated to periodic, antiperiodic, and Dirichlet problems in terms of the nonlocality parameter and mesh size. Knowing the spectrum of the matrices explicitly enables us to analyze the behavior of smoothers. Although spectral analyses exist for finding effective smoothers for 1D elliptic model problems, to the best of our knowledge, a guiding spectral analysis is not available for smoothers of a multigrid of the second kind. We fill this gap in the literature. The Picard iteration has been the default smoother for a multigrid of the second kind, and Jacobi-like methods have not been considered viable options. We propose two strategies. The first focuses on the most oscillatory mode and aims to damp it effectively; for this choice, we show that weighted-Jacobi relaxation is equivalent to the Picard iteration. The second focuses on the set of oscillatory modes and aims to damp them all as quickly as possible, simultaneously. Although the Picard iteration is an effective smoother for the nonlocal model problems under consideration, we show that better smoothers can be found using the second strategy. We also shed some light on the internal mechanism of the Picard iteration and provide an example where the Picard iteration cannot be used as a smoother.

14.
A current challenge for many Bayesian analyses is determining when to terminate high-dimensional Markov chain Monte Carlo simulations. To this end, we propose using an automated sequential stopping procedure that terminates the simulation when the computational uncertainty is small relative to the posterior uncertainty. Further, we show this stopping rule is equivalent to stopping when the effective sample size is sufficiently large. Such a stopping rule has previously been shown to work well in settings with posteriors of moderate dimension. In this article, we illustrate its utility in high-dimensional simulations while overcoming some current computational issues. As examples, we consider two complex Bayesian analyses on spatially and temporally correlated datasets. The first involves a dynamic space-time model on weather station data and the second a spatial variable selection model on fMRI brain imaging data. Our results show the sequential stopping rule is easy to implement, provides uncertainty estimates, and performs well in high-dimensional settings. Supplementary materials for this article are available online.
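In its ESS form, the stopping rule amounts to "extend the chain until the effective sample size exceeds a target". A minimal univariate sketch (the truncation rule for the autocorrelation sum is a simple illustrative choice, not the authors' estimator):

```python
import numpy as np

def effective_sample_size(chain):
    """ESS = n / (1 + 2 * sum of autocorrelations), with the sum
    truncated at the first nonpositive autocorrelation."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    s = 0.0
    for k in range(1, n):
        if rho[k] <= 0:
            break
        s += rho[k]
    return n / (1.0 + 2.0 * s)

def run_until_ess(sampler, min_ess=400, batch=1000, max_draws=100_000):
    """Sequential stopping: extend the simulation in batches until the
    ESS target is met (or a hard cap on total draws is reached)."""
    draws = np.asarray(sampler(batch), dtype=float)
    while draws.size < max_draws and effective_sample_size(draws) < min_ess:
        draws = np.concatenate([draws, sampler(batch)])
    return draws
```

A strongly autocorrelated chain would need many more draws than an independent sampler to reach the same ESS target.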

15.
Recently proposed computationally efficient Markov chain Monte Carlo (MCMC) and Monte Carlo expectation–maximization (EM) methods for estimating covariance parameters from lattice data rely on successive imputations of values on an embedding lattice that is at least two times larger in each dimension. These methods can be considered exact in some sense, but we demonstrate that using such a large number of imputed values leads to slowly converging Markov chains and EM algorithms. We propose instead the use of a discrete spectral approximation to allow for the implementation of these methods on smaller embedding lattices. While our methods are approximate, our examples indicate that the error introduced by this approximation is small compared to the Monte Carlo errors present in long Markov chains or many iterations of Monte Carlo EM algorithms. Our results are demonstrated in simulation studies, as well as in numerical studies that explore both increasing domain and fixed domain asymptotics. We compare the exact methods to our approximate methods on a large satellite dataset, and show that the approximate methods are also faster to compute, especially when the aliased spectral density is modeled directly. Supplementary materials for this article are available online.

16.
Markov models are widely used as a method for describing categorical data that exhibit stationary and nonstationary autocorrelation. However, diagnostic methods are a largely overlooked topic for Markov models. We introduce two types of residuals for this purpose: one for assessing the length of runs between state changes, and the other for assessing the frequency with which the process moves from any given state to the other states. Methods for calculating the sampling distribution of both types of residuals are presented, enabling objective interpretation through graphical summaries. The graphical summaries are formed using a modification of the probability integral transformation that is applicable for discrete data. Residuals from simulated datasets are presented to demonstrate when the model is, and is not, adequate for the data. The two types of residuals are used to highlight inadequacies of a model posed for real data on seabed fauna from the marine environment.

Supplemental materials, including an R package RMC with functions to perform the diagnostic measures on the class of models considered in this article, are available at the journal’s website. The R package is also available at CRAN.

17.
Deterioration of equipment is modeled as a multistate discrete time controlled Markov process. The states are classified according to the degree of deterioration. The problem of design of optimal systems for equipment maintenance and replacement is considered when the decision-maker may take, in each stage, one of many available maintenance actions, classified according to their “stochastic effectiveness”; no action and replacement are included as alternatives. It is assumed that the transition probabilities satisfy two conditions which effectively describe a trend for monotonically increasing expected deterioration and rate of deterioration. Under these assumptions it is proved in the paper that the optimal (cost minimizing) decision system in an infinite horizon is of the control limit rule type, rapidly obtained by policy improvement algorithms. A numerical example is presented for a specific practical application; detailed data are available from the authors on request.

18.
The theory of belief functions is a generalization of probability theory; a belief function is a set function more general than a probability measure but whose values can still be interpreted as degrees of belief. Dempster's rule of combination is a rule for combining two or more belief functions; when the belief functions combined are based on distinct or “independent” sources of evidence, the rule corresponds intuitively to the pooling of evidence. As a special case, the rule yields a rule of conditioning which generalizes the usual rule for conditioning probability measures. The rule of combination was studied extensively, but only in the case of finite sets of possibilities, in the author's monograph A Mathematical Theory of Evidence. The present paper describes the rule for general, possibly infinite, sets of possibilities. We show that the rule preserves the regularity conditions of continuity and condensability, and we investigate the two distinct generalizations of probabilistic independence which the rule suggests.

19.
The volume under a surface (VUS) is an effective measure for evaluating the discriminating power of a diagnostic test with three ordinal diagnostic groups. In this paper, we investigate the difference of two correlated VUSs to compare two treatments for discrimination of three-class classification data. A jackknife empirical likelihood (JEL) procedure is employed to avoid the variance estimation in the existing methods. We prove that the limiting distribution of the empirical log-likelihood ratio statistic follows a \(\chi ^2\) distribution. Extensive numerical studies show that the JEL confidence intervals outperform those based on the normal approximation method. The proposed method is also applied to the Alzheimer’s disease data.

20.
In this paper we present a genetic algorithm for the multi-mode resource-constrained project scheduling problem (MRCPSP), in which multiple execution modes are available for each of the activities of the project. We also introduce the preemptive extension of the problem, which allows activity splitting (P-MRCPSP). To solve the problem, we apply a bi-population genetic algorithm, which makes use of two separate populations and extends the serial schedule generation scheme by introducing a mode improvement procedure. We evaluate the impact of preemption on the quality of the schedule and present detailed comparative computational results for the MRCPSP, which reveal that our procedure is amongst the most competitive algorithms.
