Similar Literature
20 similar records found.
1.
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional that minimizes infected hepatocyte levels, virion population, and side-effects of the drug(s). The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by a drop in viral load below detection levels) with combination therapy (72%) compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
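A minimal sketch of the virtual-patient sampling step using scipy's quasi-Monte Carlo module; the parameter names, bounds, and placeholder outcome below are illustrative assumptions, not the paper's values.

```python
# Sketch: Latin hypercube sampling of virtual-patient parameters
# (parameter names and bounds are hypothetical, not the paper's).
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=1000)                   # 1000 points in [0,1)^3

# Hypothetical bounds: infected-hepatocyte death rate delta (1/day),
# infection rate beta, virion clearance rate c.
l_bounds = [0.1, 1e-8, 2.0]
u_bounds = [0.9, 1e-6, 8.0]
patients = qmc.scale(unit, l_bounds, u_bounds)  # shape (1000, 3)

# Each row is one patient scenario to simulate under the fixed optimal
# therapy; a rank (Spearman) correlation then tests the monotonic link
# between delta and time-to-undetectable viral load.
fake_response_time = 1.0 / patients[:, 0]       # placeholder outcome
rho, p = spearmanr(patients[:, 0], fake_response_time)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```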

2.
For continuous games on the square [0,2]×[0,2], the derivative of a player's non-pure strategy (a probability distribution function) is called that player's strategy density (a probability density function). A maximum entropy theory for such continuous games is established. It is mainly proved that, when neither player has an optimal pure strategy, the set of optimal strategy densities with maximum entropy is nonempty, compact, and convex. The maximum entropy of optimal strategy densities is studied, and a class of continuous games with maximum entropy is given.
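A rough numerical sketch of the idea: discretize the square, compute the game value by linear programming, and then pick, among optimal mixed strategies, the one with maximum entropy. The payoff kernel and grid size are arbitrary assumptions for illustration only.

```python
# Sketch: maximum-entropy selection among optimal mixed strategies of a
# discretized continuous game on [0,2]x[0,2] (illustrative kernel).
import numpy as np
from scipy.optimize import linprog, minimize

m = 41
x = np.linspace(0.0, 2.0, m)
K = np.sin(np.pi * np.subtract.outer(x, x))   # payoff K(x, y) to player 1

# LP for player 1: max v s.t. K^T p >= v, sum(p) = 1, p >= 0.
c = np.r_[np.zeros(m), -1.0]                  # variables (p, v); minimize -v
A_ub = np.c_[-K.T, np.ones(m)]                # v - (K^T p)_j <= 0 per column
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m),
              A_eq=np.r_[np.ones(m), 0.0].reshape(1, -1), b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
v_star = res.x[-1]

# Among strategies guaranteeing v_star, maximize entropy.
def neg_entropy(p):
    q = np.clip(p, 1e-12, None)
    return np.sum(q * np.log(q))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "ineq", "fun": lambda p: K.T @ p - v_star + 1e-9}]
out = minimize(neg_entropy, np.full(m, 1.0 / m), method="SLSQP",
               constraints=cons, bounds=[(0, 1)] * m)
print("game value:", round(v_star, 6), "max entropy:", -out.fun)
```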

3.
This paper studies estimation in partial functional linear quantile regression, in which the dependent variable is related to both a vector of finite length and a function-valued random variable as predictor variables. The slope function is estimated in the functional principal component basis. The asymptotic distribution of the estimator of the vector of slope parameters is derived, and the global convergence rate of the quantile estimator of the unknown slope function is established under a suitable norm. It is shown that this rate is optimal in a minimax sense under some smoothness assumptions on the covariance kernel of the covariate and the slope function. The convergence rate of the mean squared prediction error for the proposed estimators is also established. Finite sample properties of our procedures are studied through Monte Carlo simulations. A real-data example based on the Berkeley growth data is used to illustrate the proposed methodology.
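A minimal sketch of the estimation pipeline: functional principal component (FPC) scores from an SVD of centered curves, then quantile regression on scalar covariates plus scores. The simulated curves and the choice of statsmodels' QuantReg are assumptions standing in for the paper's estimator.

```python
# Sketch: FPC scores + median regression for a partial functional model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, T, k = 300, 100, 3                      # samples, grid points, FPCs kept
t = np.linspace(0, 1, T)
basis = np.array([np.sin(np.pi * t), np.cos(np.pi * t),
                  np.sin(2 * np.pi * t), t])
X = rng.normal(size=(n, 4)) @ basis        # functional predictor curves
Z = rng.normal(size=(n, 2))                # finite-dimensional predictors
y = Z @ [1.0, -0.5] + X @ np.sin(np.pi * t) / T + rng.standard_t(df=3, size=n)

Xc = X - X.mean(axis=0)                    # FPC basis via SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                     # n x k FPC scores

design = sm.add_constant(np.column_stack([Z, scores]))
fit = sm.QuantReg(y, design).fit(q=0.5)    # tau = 0.5 (median)
print(fit.params)                          # intercept, Z coefs, FPC coefs
```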

4.
Ronny Ramlau, Esther Klann, Wolfgang Ring. PAMM, 2007, 7(1): 1050303-1050305
We present a Mumford-Shah-like approach for the inversion of CT and SPECT (single photon emission computed tomography) data. With this approach we aim at the simultaneous reconstruction and segmentation of activity and density distributions from given tomography data. We assume the functions to be piecewise constant with respect to a set of contours. Shape sensitivity analysis is used to find a descent direction for the cost functional, which leads to an update formula for the contour in a level set framework. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
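Not the paper's joint reconstruction-segmentation, but a related piecewise-constant Mumford-Shah-type step can be sketched with the Chan-Vese level-set model from scikit-image, applied here to an already reconstructed synthetic image (an assumed simplification).

```python
# Sketch: piecewise-constant two-phase segmentation via Chan-Vese
# (a Mumford-Shah-type level-set model; image is synthetic).
import numpy as np
from skimage.segmentation import chan_vese

img = np.zeros((128, 128))                # "activity": two constant regions
img[32:96, 32:96] = 1.0
img += 0.2 * np.random.default_rng(0).normal(size=img.shape)

seg = chan_vese(img, mu=0.25, tol=1e-3)   # boolean segmentation mask
print("foreground fraction:", seg.mean())
```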

5.
Quasi-interpolation has been studied extensively in the literature. However, most existing studies treat only discrete function values (or finite linear combinations of discrete function values). In practical applications, it is often linear functional data (discrete values of the right-hand side of some differential equation) rather than discrete function values that can be sampled (e.g., in remote sensing or seismic exploration). It is therefore more meaningful to study quasi-interpolation for linear functional data. The main result of this paper is such a quasi-interpolation scheme, together with an error estimate. Based on the error estimate, one can find a quasi-interpolant that provides an optimal approximation order with respect to the smoothness of the right-hand side of the differential equation. The scheme can be applied in many situations, such as the numerical solution of differential equations and the construction of Lyapunov functions. Respective examples are presented at the end of the paper.
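For orientation, here is the classical discrete-value setting that the paper generalizes: a Gaussian quasi-interpolant (Maz'ya-type approximate approximation) built directly from sampled function values. Node spacing and shape parameter are arbitrary choices for the sketch.

```python
# Sketch: classical Gaussian quasi-interpolation from discrete values,
# Qf(x) = (hs / (delta * sqrt(pi))) * sum_j f(x_j) exp(-(x - x_j)^2 / delta^2).
import numpy as np

def gaussian_quasi_interp(fvals, nodes, delta, x):
    hs = nodes[1] - nodes[0]                       # uniform node spacing
    w = np.exp(-((x[:, None] - nodes[None, :]) / delta) ** 2)
    return (hs / (delta * np.sqrt(np.pi))) * w @ fvals

nodes = np.linspace(-4, 4, 161)
f = lambda u: np.exp(-u ** 2) * np.cos(3 * u)
x = np.linspace(-2, 2, 401)
Qf = gaussian_quasi_interp(f(nodes), nodes,
                           delta=2 * (nodes[1] - nodes[0]), x=x)
print("max error:", np.abs(Qf - f(x)).max())
```

The paper's scheme replaces the sampled values f(x_j) by sampled linear functional data such as values of f'' on the right-hand side of a differential equation.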

6.
A constrained optimization problem is formulated and solved in order to determine the smallest confidence region for the parameters of the Pareto distribution in a proposed family of sets. The objective function is the area of the region, whereas the constraints are related to the required confidence level. Explicit expressions for the area and confidence level of a given region are first deduced. An efficient procedure based on minimizing the corresponding Lagrangian function is then presented to solve the nonlinear programming problem. The process is valid when some of the smallest and largest observations have been discarded or censored, i.e., both single (right or left) and double censoring are allowed. The optimal Pareto confidence region is derived by simultaneously solving three (four) nonlinear equations in the right (double) censoring case. In most practical situations, Newton’s method with the balanced set as the starting point only needs a few iterations to find the global solution. In general, the reduction in area of the optimal Pareto region with respect to the balanced set is considerable if the sample size, n, is small or moderately large, which is usual in practice. This reduction is sometimes impressive when n is quite small and the censoring degree is fairly high. Two numerical examples regarding component lifetimes and fire claims are included for illustrative and comparative purposes.
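The paper's specific equation systems are not reproduced here; as a one-dimensional analogue of "minimize size subject to a coverage constraint via Lagrangian stationarity and a Newton-type solve", the sketch below finds the shortest 95% interval of an assumed unimodal (gamma) distribution, starting from the balanced (equal-tailed) set.

```python
# Sketch (1-D analogue, not the paper's Pareto equations): the shortest
# 95% interval of a unimodal density solves f(l) = f(u) and
# F(u) - F(l) = 0.95; here via a Newton-type root finder.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import gamma

dist = gamma(a=3.0)                          # illustrative distribution

def conditions(z):
    l, u = z
    return [dist.pdf(l) - dist.pdf(u),       # Lagrangian stationarity
            dist.cdf(u) - dist.cdf(l) - 0.95]  # coverage constraint

l0, u0 = dist.ppf(0.025), dist.ppf(0.975)    # "balanced" starting point
l, u = fsolve(conditions, [l0, u0])
print(f"shortest interval: [{l:.3f}, {u:.3f}], length {u - l:.3f}")
print(f"balanced interval length: {u0 - l0:.3f}")
```

As in the paper, the optimal set is noticeably smaller than the balanced one.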

7.
Ranked-set sampling (RSS) is applicable in practical problems where the variable of interest for an observed item is costly or time-consuming to measure, but a set of items can easily be ranked according to the variable without actual measurement. In this article, M-estimates of location parameters using RSS data are studied. We deal mainly with symmetric location families. The asymptotic properties of M-estimates based on ranked-set samples are established. The properties of unbalanced ranked-set sample M-estimates are employed to develop a methodology for determining optimal ranked-set sampling schemes. The asymptotic relative efficiencies of ranked-set sample M-estimates are investigated, and some simulation studies are reported.
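A minimal sketch of the two ingredients: generating a balanced ranked-set sample and computing a Huber M-estimate of location by iteratively reweighted least squares. The set size, cycle count, and the Cauchy location family are assumptions for illustration.

```python
# Sketch: balanced RSS generation + Huber M-estimate of location (IRLS).
import numpy as np

rng = np.random.default_rng(7)

def ranked_set_sample(k, cycles):
    """For each rank r, draw a set of k units and keep the r-th order
    statistic (true-value ranking stands in for judgment ranking)."""
    out = []
    for _ in range(cycles):
        for r in range(k):
            s = np.sort(rng.standard_cauchy(k) + 10.0)   # location = 10
            out.append(s[r])
    return np.array(out)

def huber_m_estimate(x, c=1.345, tol=1e-8):
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745           # MAD scale
    for _ in range(200):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r)) # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

x = ranked_set_sample(k=4, cycles=50)
print("Huber M-estimate from RSS data:", huber_m_estimate(x))
```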

8.
We consider the effect of sudden, large, randomly occurring density-dependent disasters on the optimal harvest policy and optimal expected return for an exploited population. The population is assumed to grow logistically, with disasters occurring on a time scale very short compared to the natural growth scale. The case of a density-dependent disaster frequency is also treated. Stochastic dynamic programming is used in the optimization. For a set of realistic field data, it is found that random effects typically have a significant effect on both optimal return and optimal effort levels. The effect of density dependence is far more pronounced for the optimal return than for the optimal effort levels.
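A toy version of the dynamic programming step: value iteration for a discounted harvesting problem with logistic growth and a constant-rate multiplicative disaster. All rates and grids below are assumed, not the paper's field data.

```python
# Sketch: value iteration for harvesting under random disasters.
import numpy as np

K, r, lam, crash = 1.0, 1.0, 0.3, 0.5      # capacity, growth, disaster rate/size
dt, disc = 0.1, np.exp(-0.05 * 0.1)        # time step, discount per step
xs = np.linspace(0.0, K, 201)              # population grid
efforts = np.linspace(0.0, 1.5, 31)        # harvest effort grid

V = np.zeros_like(xs)
for _ in range(2000):
    Q = np.empty((efforts.size, xs.size))
    for i, E in enumerate(efforts):
        xn = xs + (r * xs * (1 - xs / K) - E * xs) * dt   # growth - harvest
        xn = np.clip(xn, 0.0, K)
        Vn = np.interp(xn, xs, V)                         # no disaster
        Vd = np.interp(xn * (1 - crash), xs, V)           # after a disaster
        EV = (1 - lam * dt) * Vn + lam * dt * Vd
        Q[i] = E * xs * dt + disc * EV                    # yield + future value
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = efforts[Q.argmax(axis=0)]
print("optimal effort at x = K/2:", policy[xs.size // 2])
```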

9.
We present methods for the estimation of level sets, a level set tree, and a volume function of a multivariate density function. The methods are such that the computation is feasible and estimation is statistically efficient in moderate-dimensional cases (d ≈ 8) and for moderate sample sizes (n ≈ 50,000). We apply kernel estimation together with an adaptive partition of the sample space. We illustrate how level set trees can be applied in cluster analysis and in flow cytometry.
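A low-dimensional sketch of the core idea: estimate the density with a kernel method, threshold it, and read high-density clusters off the connected components of the level set (varying the threshold traces out a level set tree). The 2-D grid approach below is an assumed simplification of the paper's adaptive-partition machinery.

```python
# Sketch: KDE level set and its connected components as clusters (2-D).
import numpy as np
from scipy.stats import gaussian_kde
from scipy.ndimage import label

rng = np.random.default_rng(3)
data = np.vstack([rng.normal([-2, 0], 0.5, (300, 2)),
                  rng.normal([2, 0], 0.5, (300, 2))])

kde = gaussian_kde(data.T)
gx, gy = np.meshgrid(np.linspace(-4, 4, 120), np.linspace(-3, 3, 90))
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

t = 0.5 * dens.max()                       # one level; vary t for the tree
components, ncomp = label(dens >= t)       # connected pieces of the level set
print("clusters at this level:", ncomp)    # expect 2 for the two modes
```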

10.
This paper studies the generalized density of states (GDOS) of near-historical extreme events of a set of independent and identically distributed (i.i.d.) random variables. The generalized density of states is proposed and defined as a probability density function (p.d.f.). For underlying distributions in the domain of attraction of the three well-known extreme value distribution families, we show the approximate form of the mean GDOS. Estimates of the mean GDOS are presented for the case where the underlying distribution is unknown and the sample size is sufficiently large. Simulations have been performed and are found to agree with the theoretical results. Closing-price data of the Dow Jones Industrial Average are used to illustrate the obtained results.
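A standard companion computation for data in the domain of attraction of an extreme value family is a generalized extreme value (GEV) fit to block maxima; the sketch below uses simulated heavy-tailed data rather than the Dow Jones series.

```python
# Sketch: GEV fit to block maxima of simulated heavy-tailed data.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
daily = rng.standard_t(df=4, size=250 * 40)          # heavy-tailed "returns"
block_max = daily.reshape(40, 250).max(axis=1)       # one maximum per block

c, loc, scale = genextreme.fit(block_max)
print(f"shape c = {c:.3f} (c < 0 <=> Frechet domain in scipy's convention)")
print("100-block return level:", genextreme.ppf(1 - 1 / 100, c, loc, scale))
```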

11.
12.
Histogram and kernel estimators are usually regarded as the two main classical data-based nonparametric tools for estimating the underlying density function of a given data set. In this paper we integrate them: we define a histogram-kernel error, based on the integrated squared error between the histogram and a binned kernel density estimator, and then investigate its asymptotic properties. As shown in this paper, the histogram-kernel error depends only on the choice of bin width and on the data, for a given prior kernel density. The asymptotically optimal bin width is derived by minimizing the mean histogram-kernel error. By comparison with Scott’s optimal bin width formula for a histogram, a new method is proposed to construct a data-based histogram without knowledge of the underlying density function. A Monte Carlo study is used to verify the usefulness of the method for different kinds of density functions and sample sizes.
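A single-sample sketch of the criterion: approximate the integrated squared error between the histogram and a kernel density estimate on a fine grid, and scan bin widths for the minimizer (the paper minimizes the mean error; the grid and width range here are assumptions).

```python
# Sketch: bin width minimizing the histogram-vs-KDE integrated squared error.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x = rng.normal(size=500)
grid = np.linspace(-4, 4, 2001)
dg = grid[1] - grid[0]
kde_vals = gaussian_kde(x)(grid)               # kernel reference density

def hist_kernel_ise(h):
    edges = np.arange(-4, 4 + h, h)
    counts, _ = np.histogram(x, bins=edges, density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1,
                  0, counts.size - 1)
    return np.sum((counts[idx] - kde_vals) ** 2) * dg

widths = np.linspace(0.05, 1.0, 60)
best = widths[np.argmin([hist_kernel_ise(h) for h in widths])]
print("ISE-optimal bin width:", round(best, 3),
      "vs Scott:", round(3.5 * x.std() / len(x) ** (1 / 3), 3))
```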

13.
The supervised classification of fuzzy data obtained from a random experiment is discussed. The data generation process is modelled through random fuzzy sets which, from a formal point of view, can be identified with certain function-valued random elements. First, one of the most versatile discriminant approaches in the context of functional data analysis is adapted to the specific case of interest: discriminant analysis based on nonparametric kernel density estimation. In general, this criterion is shown not to be optimal and to require large sample sizes. To avoid these inconveniences, a simpler approach that eludes density estimation by considering conditional probabilities on certain balls is introduced. The approaches are applied to two experiments: one concerning fuzzy perceptions and linguistic labels, and the other concerning flood analysis. The methods are tested against linear discriminant analysis using random K-fold cross-validation.
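A minimal sketch of the first approach, kernel-density-based discriminant analysis; here the fuzzy observations are assumed to be reduced to numeric feature vectors, a simplification of the paper's function-valued treatment.

```python
# Sketch: classify by maximizing prior * class-conditional kernel density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
train = {0: rng.normal(0.0, 1.0, (200, 2)).T,    # class 0 features (2 x n)
         1: rng.normal(1.5, 1.0, (200, 2)).T}    # class 1 features
priors = {0: 0.5, 1: 0.5}
kdes = {c: gaussian_kde(Xc) for c, Xc in train.items()}

def classify(points):
    """Assign each column of `points` to the class maximizing prior * density."""
    scores = np.vstack([priors[c] * kdes[c](points) for c in sorted(kdes)])
    return scores.argmax(axis=0)

test = rng.normal(1.5, 1.0, (50, 2)).T
print("share assigned to class 1:", classify(test).mean())
```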

14.
Carrying out reaction and separation simultaneously in a reactive dividing-wall batch distillation column (batch RDWC), here for ethyl acetate synthesis, provides the possibility of separating both products and increasing the equilibrium reaction conversion. The advantages of this configuration are overcoming the known azeotropic conditions, high ethyl acetate purity, and a shorter batch time compared to simple reactive batch distillation. The corresponding dynamic simulation is carried out by simultaneously solving the model's system of differential and algebraic equations. In this study, the vapour and liquid split ratios are taken as the decision variables in order to maximize the amount of ethyl acetate accumulated during the batch time. The optimization strategy is inspired by response surface methodology: an optimal surface is fitted to the collected data set using differential evolution (DE), the algebraic equation of the fitted surface is then treated as a reduced form of the complex model, and the optimal values are obtained using the DE method.
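A compact sketch of the surrogate-and-optimize loop: fit a quadratic response surface to (vapour split, liquid split, accumulated product) observations, then maximize it with differential evolution. The synthetic data stand in for simulator runs, and least squares replaces the paper's DE-based surface fitting.

```python
# Sketch: quadratic response surface + differential evolution over splits.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
rv, rl = rng.uniform(0.1, 0.9, 40), rng.uniform(0.1, 0.9, 40)
yield_obs = 1 - (rv - 0.6) ** 2 - (rl - 0.4) ** 2 + 0.01 * rng.normal(size=40)

# Surface: y ~ b0 + b1 rv + b2 rl + b3 rv^2 + b4 rl^2 + b5 rv rl.
A = np.column_stack([np.ones_like(rv), rv, rl, rv**2, rl**2, rv * rl])
b, *_ = np.linalg.lstsq(A, yield_obs, rcond=None)

def neg_surface(x):
    v, l = x
    return -(b[0] + b[1]*v + b[2]*l + b[3]*v**2 + b[4]*l**2 + b[5]*v*l)

res = differential_evolution(neg_surface, bounds=[(0.1, 0.9), (0.1, 0.9)])
print("optimal splits (vapour, liquid):", res.x)
```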

15.
In the context of surrogate-based optimization (SBO), most designers still have very little guidance on when to stop and how to use infill measures with target requirements (e.g., a one-stage approach for goal seeking and optimization). The reason: optimum estimates independent of the surrogate and optimization strategy are seldom available. Hence, optimization cycles are typically stopped when resources run out (e.g., number of objective function evaluations or time) or convergence is perceived, and targets are set empirically, which may affect the effectiveness and efficiency of the SBO approach. This work presents an approach for estimating the minimum (target) of the objective function using concepts from extreme order statistics that relies only on the training data (sample) outputs. It is assumed that the sample inputs are randomly distributed, so the outputs can be considered a random variable whose density function has bounded support (a, b), with the minimum a as its lower bound. Specifically, an estimate of the minimum a is obtained by: (i) computing the bounds (using the training data and the moment matching method) of a selected set of analytical density functions (a catalog), and (ii) identifying the density function in the catalog with the best match to the sample output distribution and the corresponding minimum estimate a. The proposed approach makes no assumption about the nature of the objective function and can be used with any surrogate and optimization strategy, even for high-dimensional problems. The effectiveness of the proposed approach was evaluated using a compact catalog of Generalized Beta density functions and well-known analytical optimization test functions (F2, Hartmann 6D, and Griewank 10D), and in the optimization of a field-scale alkali-surfactant-polymer enhanced oil recovery process. The results revealed that: (a) the density function (from the catalog) with the best match to a function's output distribution was the same for both large and reduced samples, (b) the true optimum value was always within a 95% confidence interval of the estimated minimum distribution, and (c) the estimated minimum represents a significant improvement over the present best solution and an excellent approximation of the true optimum value.
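A bare-bones sketch of the idea: fit a bounded density to the sampled objective outputs and read the estimated minimum off its lower support bound. A single maximum-likelihood Beta fit stands in for the paper's moment-matched catalog of Generalized Beta densities; the test function is assumed.

```python
# Sketch: estimate the objective minimum as the lower support bound of a
# fitted bounded density (simplified stand-in for the paper's catalog).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, (200, 2))                        # random sample inputs
outputs = (X ** 2).sum(axis=1) + 0.1 * rng.random(200)  # true minimum near 0

a_shape, b_shape, loc, scale = beta.fit(outputs)  # support is [loc, loc+scale]
print("estimated minimum (lower bound):", loc)
print("best sampled value:", outputs.min())       # estimate should undercut this
```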

16.
Given a density f, we pose the problem of estimating the density functional $\psi_r=\int f^{(r)}f$ for a non-negative even r, making use of kernel methods. This is a well-known problem, but some of its features have remained unexplored. We focus on the problem of bandwidth selection. Whereas all previous studies concentrate on an asymptotically optimal bandwidth, here we study the properties of exact, non-asymptotic ones, and relate them to the former. Our main conclusion is that, despite being asymptotically equivalent, for realistic sample sizes much is lost by using the asymptotically optimal bandwidth. Instead, as a target for data-driven selectors we propose another bandwidth which retains the small-sample performance of the exact one.
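For concreteness, the r = 0 case: the standard kernel estimator of $\psi_0=\int f^2$, $\hat\psi_0 = (n(n-1))^{-1}\sum_{i\neq j} K_h(X_i - X_j)$, with a Gaussian kernel. The bandwidth grid below simply illustrates the sensitivity that motivates the paper's bandwidth study.

```python
# Sketch: kernel estimator of psi_0 = int f^2 at several bandwidths.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 400
X = rng.normal(size=n)                       # true psi_0 = 1 / (2 sqrt(pi))

D = X[:, None] - X[None, :]
for h in (0.1, 0.3, 0.6):
    Kh = norm.pdf(D / h) / h
    np.fill_diagonal(Kh, 0.0)                # exclude i = j terms
    psi_hat = Kh.sum() / (n * (n - 1))
    print(f"h = {h}: psi_hat = {psi_hat:.4f}")
print("true value:", 1 / (2 * np.sqrt(np.pi)))
```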

17.
Methods for nonlinear system identification are often classified, based on the employed model form, into parametric (nonlinear differential or difference equations) and nonparametric (functional expansions). These methods exhibit distinct sets of advantages and disadvantages that have motivated comparative studies and point to potential benefits from combined use. Fundamental to these studies are the mathematical relations between nonlinear differential (or difference, in discrete time) equations (NDE) and Volterra functional expansions (VFE) of the class of nonlinear systems for which both model forms exist, in continuous or discrete time. Considerable work has been done on obtaining the VFEs of a broad class of NDEs, which can be used to make the transition from nonparametric models (obtained from experimental input-output data) to more compact parametric models. This paper presents a methodology by which this transition can be made in discrete time. Specifically, a method is proposed for obtaining a parametric NARMAX (nonlinear auto-regressive moving-average with exogenous input) model from Volterra kernels estimated by use of input-output data.
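To make the parametric target form concrete, here is a polynomial NARX model fitted by least squares from input-output data; this illustrates the model class, not the paper's Volterra-kernel-to-NARMAX conversion, and the "true" system is an assumption.

```python
# Sketch: least-squares fit of a polynomial NARX model from I/O data.
import numpy as np

rng = np.random.default_rng(8)
N = 1000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(1, N):                        # a "true" system to identify
    y[t] = 0.5*y[t-1] + 0.8*u[t-1] + 0.3*u[t-1]**2 - 0.2*y[t-1]*u[t-1]

# Candidate regressors: y[t-1], u[t-1], u[t-1]^2, y[t-1]*u[t-1].
Y, U = y[:-1], u[:-1]
Phi = np.column_stack([Y, U, U ** 2, Y * U])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated coefficients:", np.round(theta, 3))  # ~ [0.5, 0.8, 0.3, -0.2]
```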

18.
The problem of minimizing a quadratic form over the standard simplex is known as the standard quadratic optimization problem (SQO). It is NP-hard and contains the maximum stable set problem in graphs as a special case. In this note, we show that the SQO problem may be reformulated as an (exponentially sized) linear program (LP). This reformulation also suggests a hierarchy of polynomial-time solvable LPs whose optimal values converge finitely to the optimal value of the SQO problem. The hierarchies of LP relaxations from the literature do not share this finite convergence property for SQO, and we review the relevant counterexamples.
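A tiny illustration of the problem class: min x'Qx over the simplex, bounded from above by evaluating on the rational grid points with denominator r (the kind of points that underlie LP-based hierarchies for SQO). The matrix Q and the grid levels are arbitrary choices; this is not the paper's LP reformulation.

```python
# Sketch: SQO grid bounds over the standard simplex.
import numpy as np
from itertools import product

Q = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.2],
              [0.5, 0.2, 1.0]])
n = Q.shape[0]

def grid_bound(r):
    """min of x'Qx over simplex points with entries k/r (an upper bound
    on the SQO optimum that improves as r grows)."""
    best = np.inf
    for k in product(range(r + 1), repeat=n):
        if sum(k) == r:
            x = np.array(k) / r
            best = min(best, x @ Q @ x)
    return best

for r in (1, 2, 4, 8, 16):
    print(f"r = {r:2d}: grid bound = {grid_bound(r):.5f}")
```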

19.
Many warehouses store at least some goods in two areas, a reserve area that is efficient for storage and a forward area that is efficient for order picking. The forward-reserve allocation problem determines the set of Stock-Keeping Units and their space allocations in the forward area to maximize the forward area's benefit by trading off the relevant costs of order picking and internal replenishment. The mathematical model of this decision resembles the classical knapsack problem with the additional complexity that it has a discontinuous nonlinear cost function. A simple greedy heuristic has been proposed in the literature to solve this problem. This paper proposes an alternative branch-and-bound algorithm that can quickly solve the problem to optimality. Heuristic and optimal solutions are numerically compared using problem instances based on real warehouse data. Results suggest that the heuristic solutions are very close to the optimal ones in terms of both the objective value and the forward assignment.
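A minimal sketch of the greedy heuristic the paper benchmarks against: rank SKUs by net benefit per unit of forward space and fill the forward area. All figures are hypothetical.

```python
# Sketch: greedy forward-reserve allocation (knapsack-style heuristic).
import numpy as np

# Per SKU: forward space needed, picking savings if stored forward,
# and replenishment cost incurred by the forward allocation.
space    = np.array([4.0, 2.0, 6.0, 1.0, 3.0])
savings  = np.array([9.0, 5.0, 8.0, 3.0, 7.0])
replen   = np.array([2.0, 1.0, 3.0, 0.5, 1.0])
capacity = 8.0

net = savings - replen
order = np.argsort(-net / space)             # best benefit density first
chosen, used = [], 0.0
for i in order:
    if used + space[i] <= capacity and net[i] > 0:
        chosen.append(int(i))
        used += space[i]

print("SKUs in forward area:", chosen, "| benefit:", net[chosen].sum())
```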

20.
This work studies the effects of sampling variability in Monte Carlo-based methods to estimate very high-dimensional systems. Recent focus in the geosciences has been on representing the atmospheric state using a probability density function, and, for extremely high-dimensional systems, various sample-based Kalman filter techniques have been developed to address the problem of real-time assimilation of system information and observations. As the employed sample sizes are typically several orders of magnitude smaller than the system dimension, such sampling techniques inevitably induce considerable variability into the state estimate, primarily through prior and posterior sample covariance matrices. In this article, we quantify this variability with mean squared error measures for two Monte Carlo-based Kalman filter variants: the ensemble Kalman filter and the ensemble square-root Kalman filter. Expressions of the error measures are derived under weak assumptions and show that sample sizes need to grow proportionally to the square of the system dimension for bounded error growth. To reduce necessary ensemble size requirements and to address rank-deficient sample covariances, covariance-shrinking (tapering) based on the Schur product of the prior sample covariance and a positive definite function is demonstrated to be a simple, computationally feasible, and very effective technique. Rules for obtaining optimal taper functions for both stationary as well as non-stationary covariances are given, and optimal taper lengths are given in terms of the ensemble size and practical range of the forecast covariance. Results are also presented for optimal covariance inflation. The theory is verified and illustrated with extensive simulations.
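A small sketch of the tapering step: element-wise (Schur) multiplication of a noisy, rank-deficient ensemble sample covariance by a compactly supported positive definite correlation, here the standard Gaspari-Cohn (1999) function. Grid size, ensemble size, taper length, and the exponential "truth" covariance are illustrative assumptions.

```python
# Sketch: Schur-product covariance tapering with the Gaspari-Cohn function.
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari-Cohn 5th-order compactly supported taper; zero beyond 2c."""
    z = np.abs(d) / c
    t = np.zeros_like(z, dtype=float)
    m1 = z <= 1
    m2 = (z > 1) & (z <= 2)
    t[m1] = (-0.25*z[m1]**5 + 0.5*z[m1]**4 + 0.625*z[m1]**3
             - (5/3)*z[m1]**2 + 1)
    t[m2] = ((1/12)*z[m2]**5 - 0.5*z[m2]**4 + 0.625*z[m2]**3
             + (5/3)*z[m2]**2 - 5*z[m2] + 4 - (2/3)/z[m2])
    return t

ngrid, nens = 200, 20                         # state dimension >> ensemble size
rng = np.random.default_rng(12)
dist = np.abs(np.subtract.outer(np.arange(ngrid), np.arange(ngrid)))
truth_cov = np.exp(-dist / 10.0)              # exponential "forecast" covariance
L = np.linalg.cholesky(truth_cov + 1e-10 * np.eye(ngrid))
ens = L @ rng.normal(size=(ngrid, nens))      # ensemble of states

P_sample = np.cov(ens)                        # noisy, rank-deficient
P_tapered = P_sample * gaspari_cohn(dist, c=15.0)   # Schur product

err = lambda P: np.linalg.norm(P - truth_cov) / np.linalg.norm(truth_cov)
print(f"relative error: raw {err(P_sample):.3f}, tapered {err(P_tapered):.3f}")
```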

