Similar Literature
 Found 20 similar documents (search time: 15 ms)
1.
During surgical interventions, a muscle relaxant drug is frequently administered with the objective of inducing muscle paralysis. The clinical environment and patient-safety issues give rise to a huge variety of situations that must be taken into account, requiring intensive simulation studies. Hence, population models are crucial for research and development in this field.

This work develops a stochastic population model for the level of neuromuscular blockade (NMB, muscle paralysis) induced by atracurium, based on a deterministic individual model already proposed in the literature. To achieve this goal, a joint Lognormal distribution is considered for the patient-dependent parameters. The study is based on clinical data collected during general anaesthesia. The procedure developed enables the construction of a reliable reference bank of parametrized models that reproduces not only the overall features of the NMB but also the inter-individual variability characteristic of physiological signals. This bank constitutes a fundamental tool to support research on identification and control algorithms and is suitable for integration into clinical decision support systems.

2.
We present a new approach for removing nonspecific noise from Drosophila segmentation gene expression profiles. The filtering algorithm used here is an enhanced version of the singular spectrum analysis method, which decomposes a gene profile into the sum of a signal and noise. Because the main issue in extracting signal with singular spectrum analysis lies in identifying the number of eigenvalues needed for signal reconstruction, this paper explores the applicability of the newly proposed method for eigenvalue identification in four different gene expression profiles. Our findings indicate that, for optimal separation of signal and noise, a different number of eigenvalues needs to be chosen for each gene. Copyright © 2016 John Wiley & Sons, Ltd.
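The decomposition described in this abstract can be illustrated with a basic (non-enhanced) singular spectrum analysis pass. The window length, the kept rank, and the synthetic sine-plus-noise profile below are all illustrative assumptions, not the paper's data or its eigenvalue-identification rule:

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Basic singular spectrum analysis: embed, decompose, reconstruct.

    `window` and `rank` (the number of leading eigentriples kept as
    signal) are illustrative parameters; the paper's contribution is
    choosing `rank` per gene, which is not reproduced here.
    """
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep the leading `rank` eigentriples as the signal subspace.
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal (Hankel) averaging maps the matrix back to a series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

# Noisy sine: the rank-2 reconstruction should track the clean signal.
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = ssa_denoise(noisy, window=40, rank=2)
```

The choice of `rank` is exactly the eigenvalue-identification problem the paper addresses: too few eigentriples lose signal, too many readmit noise.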

3.
In recent years, several methods have been proposed to deal with functional data classification problems (e.g., one-dimensional curves or two- or three-dimensional images). One popular general approach is based on the kernel-based method, proposed by Ferraty and Vieu (Comput Stat Data Anal 44:161–173, 2003). The performance of this general method depends heavily on the choice of the semi-metric. Motivated by Fan and Lin (J Am Stat Assoc 93:1007–1021, 1998) and our image data, we propose a new semi-metric, based on wavelet thresholding for classifying functional data. This wavelet-thresholding semi-metric is able to adapt to the smoothness of the data and provides for particularly good classification when data features are localized and/or sparse. We conduct simulation studies to compare our proposed method with several functional classification methods and study the relative performance of the methods for classifying positron emission tomography images.

4.

Privacy-preserving data splitting is a technique that aims to protect data privacy by storing different fragments of data in different locations. In this work we give a new combinatorial formulation of the data splitting problem: data attributes must be split into different fragments in a way that satisfies certain combinatorial properties derived from processing and privacy constraints. Using this formulation, we develop new combinatorial and algebraic techniques to obtain solutions to the data splitting problem. We present an algebraic method that builds an optimal data splitting solution using Gröbner bases. Since this method is not efficient in general, we also develop a greedy algorithm for finding solutions that are not necessarily minimally sized.
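As a sketch of the greedy flavour of solution (not the paper's Gröbner-basis method, and with hypothetical attribute names and privacy constraints), one can assign each attribute to the first fragment in which it completes no forbidden attribute combination:

```python
def greedy_split(attributes, constraints):
    """Greedy fragment assignment (illustrative only): each constraint
    is a set of attributes that must not all end up in one fragment.
    Place each attribute in the first fragment where no constraint
    becomes fully contained; open a new fragment if none fits."""
    fragments = []
    for a in attributes:
        placed = False
        for frag in fragments:
            candidate = frag | {a}
            if not any(c <= candidate for c in constraints):
                frag.add(a)
                placed = True
                break
        if not placed:
            fragments.append({a})
    return fragments

# Hypothetical attributes and privacy constraints.
attrs = ["name", "dob", "zip", "diagnosis", "salary"]
sensitive = [{"name", "diagnosis"}, {"name", "salary"},
             {"dob", "zip", "diagnosis"}]
frags = greedy_split(attrs, sensitive)
```

Here a constraint is violated only if all of its attributes land in the same fragment; the greedy insertion order can yield more fragments than the algebraic optimum, which matches the abstract's "not necessarily minimally sized" caveat.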


5.
In this paper we point out the differences between the most common hazard-based models, such as the proportional hazards and the accelerated failure time models. We focus on the heteroscedasticity-across-individuals problem that these models cannot accommodate, and give motivation and general ideas for more flexible formulations. We describe hybrid and extended models, which include the former models as particular cases but keep enough flexibility to fit data with heteroscedasticity. We show that simple graphical procedures make it easy to verify whether there is heteroscedasticity in the data, whether it can be described through a simple function of the covariates, and whether it is important to take it into account in the final fit. Real datasets are considered. Copyright © 1999 John Wiley & Sons, Ltd.

6.
The interaction between fiscal and monetary policy is analysed by means of a game-theoretic approach. Coordination between these two policies is essential, since decisions taken by one institution may have adverse effects on the other, resulting in welfare loss for society. We derive optimal monetary and fiscal policies under three coordination schemes: each institution independently minimizing its welfare loss, as a Nash equilibrium of a normal-form game; one institution moving first and the other following, in a mechanism known as the Stackelberg solution; and both institutions behaving cooperatively, seeking common goals. In the Brazilian case, a numerical exercise shows that the smallest welfare loss is obtained under a Stackelberg solution with monetary policy as leader and fiscal policy as follower. Under the optimal policy, there is evidence of a strong distaste for inflation on the part of Brazilian society.
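A minimal numerical sketch of the Nash versus Stackelberg comparison, under made-up quadratic loss functions and coefficients (not the paper's Brazilian estimates):

```python
import numpy as np

# Illustrative linear economy: all coefficients are invented.
PI0, C1, C2 = 4.0, 0.8, 0.5             # inflation: pi = PI0 - C1*r + C2*g
Y0, YSTAR, D1, D2 = 1.0, 2.0, 0.3, 0.6  # output: y = Y0 - D1*r + D2*g
ALPHA, BETA = 0.1, 0.2                  # instrument penalties

def loss_m(r, g):   # monetary authority's welfare loss
    return (PI0 - C1 * r + C2 * g) ** 2 + ALPHA * r ** 2

def loss_f(r, g):   # fiscal authority's welfare loss
    return (Y0 - D1 * r + D2 * g - YSTAR) ** 2 + BETA * g ** 2

def br_m(g):   # monetary best response (first-order condition in r)
    return C1 * (PI0 + C2 * g) / (C1 ** 2 + ALPHA)

def br_f(r):   # fiscal best response (first-order condition in g)
    return D2 * (YSTAR - Y0 + D1 * r) / (D2 ** 2 + BETA)

# Nash equilibrium: iterate the best responses to a fixed point.
r, g = 0.0, 0.0
for _ in range(200):
    r, g = br_m(g), br_f(r)

# Stackelberg with monetary leader: minimise the leader's loss along
# the follower's reaction curve (coarse grid search for illustration).
grid = np.linspace(0.0, 10.0, 10001)
r_st = grid[np.argmin([loss_m(x, br_f(x)) for x in grid])]
g_st = br_f(r_st)
```

Because the leader optimises along the follower's reaction curve, which passes through the Nash point, its Stackelberg loss can be no worse than its Nash loss; this is the mechanism behind the ranking the abstract reports.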

7.
Computational Management Science - The great amount of data collected by the Advanced Metering Infrastructure can help electric utilities to detect energy theft, a phenomenon that globally costs...

8.
9.
In this paper a model of general financial equilibrium with policy interventions is introduced, which yields the optimal composition of assets and liabilities in each sector's portfolio, as well as the market price of each instrument. The policy interventions considered are taxes and price ceilings. The variational inequality formulation of the equilibrium conditions is derived and then utilized to establish existence and uniqueness properties of the solution pattern. An algorithm is proposed for computing the solution. Finally, the algorithm is applied to some special utility functions as numerical examples.

10.
The need to adapt Data Envelopment Analysis (DEA) and other frontier models in the context of negative data has been a rather neglected issue in the literature. A recent article in this journal proposed a variation on the directional distance function, a very general distance function that is dual to the profit function, to accommodate the occurrence of negative data. In this contribution, we define and recommend a generalised Farrell proportional distance function that can do the same job and that maintains a proportional interpretation under mild conditions.

11.
Grey model GM (1,1) has been widely used in short-term prediction of energy production and consumption because of its advantages on data sets with small numbers of samples. However, the existing GM (1,1) modelling method can merely forecast the general trend of a time series and fails to identify and predict seasonal fluctuations. In this research, the authors propose a data-grouping-based grey modelling method, DGGM (1,1), to predict quarterly hydropower production in China. First, the method divides an entire quarterly time series into four groups, each containing only data from the same quarter. Then, four GM (1,1) models are established on these sub-series, each capturing the characteristics of its own quarter. Finally, the predictions of the four models are recombined in chronological order into a complete quarterly time series that reflects seasonal differences. The mean absolute percentage errors (MAPEs) on the test set 2011Q1–2015Q4 for the DGGM (1,1), traditional GM (1,1), and SARIMA models are 16.2%, 22.1%, and 22.2%, respectively, indicating that DGGM (1,1) has better adaptability and higher prediction accuracy. China's hydropower production from 2016 to 2020 is predicted to maintain its seasonal growth, with the third and first quarters showing the highest and lowest production, respectively.
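The three steps (group by quarter, fit one GM (1,1) per group, recombine chronologically) can be sketched as follows; the toy quarterly series is invented, and the paper's accuracy evaluation is not reproduced:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classic GM(1,1): fit on series x0, return `steps` out-of-sample forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)    # inverse AGO
    return x0_hat[n:]

def dggm_quarterly(series, steps_per_group=1):
    """DGGM(1,1) grouping step: one GM(1,1) per quarter, forecasts re-interleaved."""
    groups = [series[q::4] for q in range(4)]            # Q1..Q4 sub-series
    per_q = [gm11_forecast(g, steps_per_group) for g in groups]
    # Recombine in chronological order: Q1, Q2, Q3, Q4 of each forecast year.
    out = []
    for s in range(steps_per_group):
        out.extend(per_q[q][s] for q in range(4))
    return out

# Three invented years of quarterly data with a stable seasonal pattern.
quarterly = [10, 20, 15, 8, 11, 22, 16, 9, 12, 24, 18, 10]
next_year = dggm_quarterly(quarterly)
```

Each per-quarter model sees only same-quarter observations, so the seasonal shape survives in the recombined forecast even though a single GM (1,1) on the full series would smooth it away.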

12.
This article presents an empirical study of the housing market using Markov processes. The first phase of the study measures the filtering process in a selected neighbourhood by estimating probabilities of transition from one income group to another over the period 1949–1969, using four-year intervals. The estimated transition probabilities are then used to forecast the occupancy structure for different periods, and the suitability of the Markov process for long-term policy analysis in housing is examined. The final phase examines the steady-state occupancy structure by various income categories of household. The study indicates a fruitful application of Markov processes in long-term housing policy analysis.
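The forecasting and steady-state steps can be sketched with a hypothetical transition matrix (the paper's estimated 1949–1969 probabilities are not reproduced):

```python
import numpy as np

# Hypothetical 3-state income-group transition matrix over one
# four-year interval (rows: from-state, columns: to-state).
P = np.array([
    [0.70, 0.25, 0.05],   # high income
    [0.10, 0.70, 0.20],   # middle income
    [0.05, 0.15, 0.80],   # low income
])

occupancy = np.array([0.5, 0.3, 0.2])   # initial occupancy shares

# Forecast occupancy after k four-year intervals: pi_k = pi_0 @ P^k.
after_two = occupancy @ np.linalg.matrix_power(P, 2)

# Steady state: left eigenvector of P for eigenvalue 1, normalised to sum 1.
vals, vecs = np.linalg.eig(P.T)
stat = np.real(vecs[:, np.argmax(np.real(vals))])
stat = stat / stat.sum()
```

The steady-state vector is what the article's final phase examines: the occupancy structure the neighbourhood drifts toward if the estimated transition probabilities persist.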

13.
For the analysis of time-to-event data with incomplete information beyond right-censoring, many generalizations of distributional and regression-model inference have been proposed. However, martingale approaches have not progressed greatly in this area, whereas for right-censored data they are widely used to study the asymptotic properties of estimators and to derive regression diagnostics. In this paper, focusing on doubly censored data, we discuss a martingale approach to inference for the nonparametric maximum likelihood estimator (NPMLE). We formulate a martingale structure of the NPMLE using a score function of the semiparametric profile likelihood. Finally, an expression for the asymptotic distribution of the NPMLE is derived more conveniently, without the infinite-dimensional matrix expression used in previous research. A further useful point is that a variance-covariance formula for the NPMLE that is computable in large samples is obtained as an empirical version of the limit form presented here.

14.
In this paper we present a new, query-based approach for approximating polygonal chains in the plane. We give several results based on this approach, some of more general interest, and propose a greedy heuristic to speed up the computation. Our algorithms are simple, based on standard geometric operations, and thus suitable for efficient implementation. We also show that the query-based approach can be used to obtain a subquadratic-time exact algorithm under the infinite beam criterion and the Euclidean distance metric if a certain condition on the input path holds. Although restricted to a special case, this is the first subquadratic result for path approximation with the Euclidean distance metric.
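The paper's query-based algorithm is not reproduced here; as a point of comparison for the same task, a standard Douglas–Peucker simplification under the Euclidean metric looks like this:

```python
import math

def douglas_peucker(points, eps):
    """Classic Douglas-Peucker chain simplification (not the paper's
    query-based method): recursively keep the vertex farthest from the
    chord joining the current endpoints while it deviates more than eps."""
    def point_seg_dist(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # Clamp the projection so distance is to the segment, not the line.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    if len(points) < 3:
        return list(points)
    dists = [point_seg_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right   # drop the duplicated split vertex

# An invented chain: nearly flat except for one significant spike.
chain = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03), (4, 0), (5, 2), (6, 0)]
simplified = douglas_peucker(chain, eps=0.1)
```

Douglas–Peucker is worst-case quadratic; the abstract's contribution is precisely a subquadratic exact alternative under the infinite beam criterion for a restricted class of inputs.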

15.
This article presents two case studies concerning the allocation of £billions by a mechanism communicated via spreadsheet models. It argues that technical analytic skills, as well as policy development skills, are a vital component of governance. In the UK, central government uses funding formulae to distribute money to local service providers. One commonly stated goal of such formulae is equity of service provision. However, given the complexity of public services, together with variations in need, delivery style and the exercise of stakeholder judgement as to which needs should be met and how, such formulae frequently obscure the process by which equity has been taken into account. One policy ‘solution’ to managing such tensions is to seek ‘transparency’. With respect to funding formulae, this commonly involves publishing the underlying data and formulae in spreadsheets. This paper extends the argument that such ‘transparency’ requires an audience that understands the policy assumptions (and related conceptualisations), data sources, methodological approaches and interpretation of results. It demonstrates how the search for policy ‘transparency’ is also served by the technical quality assurance goals that the operational research community would recognise as best practice in the development of software generally and spreadsheet models specifically. Illustrative examples of complex formulae acting to subvert equity are drawn from the English Fire and Rescue Service and Police Service allocation formulae. In the former, an increase in the amount of deprivation, as measured by one of six indicators, has the perverse effect of decreasing the financial allocation. In the latter, metropolitan areas such as London are found to gain most from the inclusion of variables measuring sparsity.
The conclusion from these case studies is that technical quality assurance and policy transparency are mutually reinforcing goals, and policy analysts are urged to make greater use of technical analytic skills in software development.

16.
A flexible Bayesian periodic autoregressive model is used for the prediction of quarterly and monthly time series data. As the unknown autoregressive lag order, the occurrence of structural breaks and their respective break dates are common sources of uncertainty, these are treated as random quantities within the Bayesian framework. Since no analytical expressions for the corresponding marginal posterior predictive distributions exist, a Markov chain Monte Carlo approach based on data augmentation is proposed. Its performance is demonstrated in Monte Carlo experiments. Instead of resorting to model selection by choosing a particular candidate model for prediction, a forecasting approach based on Bayesian model averaging is used in order to account for model uncertainty and to improve forecasting accuracy. For model diagnosis, a Bayesian sign test is introduced to compare the predictive accuracy of different forecasting models in terms of statistical significance. In an empirical application using monthly unemployment rates for Germany, the performance of the model-averaging prediction approach is compared to those of model-selected Bayesian and classical (non)periodic time series models.

17.
The objectives of the study reported in this paper are: (1) to evaluate the adequacy of two data mining techniques, decision trees and neural networks, in analysing consumer preference for a fast-food franchise, and (2) to examine the sufficiency of the criteria selected in understanding this preference. We build decision tree and neural network models fitted to data samples collected from 800 respondents in Taiwan to understand the factors that determine their brand preference. Classification rules are generated from these models to differentiate between consumers who prefer the brand and those who do not. The generated rules show that while both model types can achieve predictive accuracy of more than 80% on the training samples and more than 70% on the cross-validation samples, the neural network models compare very favourably to the decision tree model in rule complexity and the number of relevant input attributes.

18.
Data envelopment analysis (DEA) is a non-parametric technique for assessing the performance of a set of homogeneous decision making units (DMUs) with common crisp inputs and outputs. In models drawn from the real world, the data are not always precise; sometimes they are vague or fluctuating, and one of the best approaches to modelling such data is to use fuzzy numbers. Substituting fuzzy numbers for the crisp numbers in DEA transforms the traditional DEA problem into a fuzzy data envelopment analysis (FDEA) problem. Different methods have been suggested to compute the efficiency of DMUs in FDEA models, but most of them have limitations such as computational complexity, exclusion of the decision maker from the decision-making process, applicability to only a specific FDEA model, or reliance on a specific class of fuzzy numbers. In the present paper, to overcome these limitations, a new approach is proposed in which the generalized FDEA problem is transformed into a parametric programme whose parameter selection depends on the decision maker’s preferences. Two numerical examples are used to illustrate the approach and to compare it with some other approaches.
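For reference, the traditional crisp DEA problem that FDEA generalizes can be solved as a linear programme; the three-DMU data set below is invented, and the fuzzy parametric transformation itself is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR envelopment LP for DMU j0 with crisp data.
    X: inputs (m x n), Y: outputs (s x n). Minimise theta subject to
    X @ lam <= theta * x_j0 and Y @ lam >= y_j0, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lam]
    # X @ lam - theta * x0 <= 0   and   -Y @ lam <= -y0
    A_ub = np.block([[-X[:, [j0]], X],
                     [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Three DMUs, one input, one output (invented figures).
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 4.0, 2.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
```

In the FDEA setting discussed by the abstract, the crisp entries of `X` and `Y` become fuzzy numbers, and the model is recast as a parametric programme rather than a single LP of this form.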

19.
This paper is drawn from the use of data envelopment analysis (DEA) in helping a Portuguese bank to manage the performance of its branches. The bank wanted to set targets for the branches on such variables as growth in number of clients, growth in funds deposited and so on. Such variables can take positive and negative values but apart from some exceptions, traditional DEA models have hitherto been restricted to non-negative data. We report on the development of a model to handle unrestricted data in a DEA framework and illustrate the use of this model on data from the bank concerned.

20.
This paper proposes an AHP-based statistical method, AHPo, for designing a comprehensive policy alternative for societal problems that require a multifaceted approach. In the proposed method, criteria relevant to the goal or focus are structured in the same way as in the conventional AHP; however, the two methods differ considerably in their method of quantification. The new method predicts or analyses the impact of the policy alternatives on the overall goal; in other words, it predicts or rationalizes the way people appraise the situation in which an alternative is adopted and implemented. It will serve as a tool for supporting (especially political) decision making.
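The conventional AHP priority step that the proposal builds on can be sketched as follows, with a made-up pairwise comparison matrix on the Saaty scale (the AHPo quantification itself is not reproduced):

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria: entry
# A[i, j] is how strongly criterion i is preferred over criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# Priorities: the principal (largest-eigenvalue) eigenvector, normalised.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
weights = w / w.sum()

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1);
# values near 0 indicate a nearly consistent set of judgements.
n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)
```

The conventional AHP then aggregates such criterion weights over alternatives; AHPo replaces that quantification step with a statistical prediction of overall impact.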

