Similar Documents
20 similar documents found
1.
In this work, we study the existence of solutions of deconvolution problems in the discrete setting. More precisely, we prove the existence of solutions of the discrete multichannel deconvolution problems (DMDP) with convolvers being the characteristic functions of finite sets of positive integers. We also provide the reader with a simple method and a fast algorithm for finding the closed forms of the discrete deconvolvers with minimal supports that constitute exact solutions of the DMDP. Moreover, we show that, unlike the singular value decomposition scheme, the multichannel deconvolution scheme based on the use of these discrete deconvolvers is not very sensitive to small 2-norm perturbations of the data. Finally, we show how to generalize our method to the 2-D version of the DMDP.

2.
Meisters and Peterson gave an equivalent condition under which the multisensor deconvolution problem has a solution when there are two convolvers, each the characteristic function of an interval. In this article we find additional conditions under which the deconvolution problem for multiple characteristic functions is solvable. We extend the result to the space of Gevrey distributions and prove that every linear operator S, from the space of Gevrey functions with compact support onto itself, which commutes with translations can be represented as convolution with a unique Gevrey distribution T of compact support. Finally, using a nonperiodic sampling method, we find explicit formulas for deconvolvers when the convolvers satisfy conditions weaker than the equivalence conditions.

3.
This paper investigates parametric estimation for randomly right-censored data. In parametric estimation, the Kullback-Leibler information is used as a measure of the divergence of the true distribution generating the data relative to a distribution in an assumed parametric model M. When the data are uncensored, the maximum likelihood estimator (MLE) is a consistent estimator of the minimizer of the Kullback-Leibler information, even if the assumed model M does not contain the true distribution. We call this property minimum Kullback-Leibler information consistency (MKLI-consistency). However, the MLE obtained by maximizing the likelihood function based on censored data is not MKLI-consistent. As an alternative to the MLE, Oakes (1986, Biometrics, 42, 177–182) proposed an estimator termed the approximate maximum likelihood estimator (AMLE) for its computational advantage and potential for robustness. We show MKLI-consistency and asymptotic normality of the AMLE under misspecification of the parametric model. In a simulation study, we investigate the mean square errors of these two estimators and of an estimator obtained by treating a jackknife-corrected Kaplan-Meier integral as the log-likelihood. On the basis of the simulation results and the asymptotic results, we compare these estimators. We also derive information criteria for the MLE and the AMLE under censorship, which can be used not only for selecting models but also for selecting estimation procedures.

4.
We develop a duality theory for minimax fractional programming problems in the face of data uncertainty in both the objective and the constraints. Following the framework of robust optimization, we establish strong duality between the robust counterpart of an uncertain minimax convex–concave fractional program, termed the robust minimax fractional program, and the optimistic counterpart of its uncertain conventional dual program, called the optimistic dual. In the case of a robust minimax linear fractional program with scenario uncertainty in the numerator of the objective function, we show that the optimistic dual is a simple linear program when the constraint uncertainty is expressed as bounded intervals. We also show that the dual can be reformulated as a second-order cone programming problem when the constraint uncertainty is given by ellipsoids. In these cases the optimistic dual problems are computationally tractable, and their solutions can be validated in polynomial time. We further show that, for robust minimax linear fractional programs with interval uncertainty, the conventional dual of the robust counterpart and the optimistic dual are equivalent.
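In schematic form (our notation, not the paper's), the two problems paired by robust strong duality are

\[
\inf_{x \in X} \sup_{u \in \mathcal{U}} f(x,u) \quad \text{(robust counterpart)}
\qquad \text{and} \qquad
\sup_{u \in \mathcal{U},\, \lambda \ge 0} \inf_{x \in X} L(x,\lambda,u) \quad \text{(optimistic dual)},
\]

where \(\mathcal{U}\) is the uncertainty set and \(L\) denotes the Lagrangian of the nominal problem; robust strong duality asserts that the two optimal values coincide.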

5.
In this paper, we consider robust optimal solutions for a convex optimization problem in the face of data uncertainty in both the objective and the constraints. Using the properties of subdifferential sum formulae, we first introduce a robust-type subdifferential constraint qualification and then obtain complete characterizations of the robust optimal solutions of this uncertain convex optimization problem. We also investigate Wolfe-type robust duality between the uncertain convex optimization problem and its uncertain dual problem by proving duality between the deterministic robust counterpart of the primal model and the optimistic counterpart of its dual problem. Moreover, we show that our results encompass as special cases some optimization problems considered in the recent literature.

6.
The Kullback-Leibler divergence and the Neyman-Pearson lemma are two fundamental concepts in statistics. Both concern likelihood ratios: the Kullback-Leibler divergence is the expected log-likelihood ratio, and the Neyman-Pearson lemma is about the error rates of likelihood ratio tests. Exploring this connection gives another statistical interpretation of the Kullback-Leibler divergence, in terms of the loss of power of the likelihood ratio test when the wrong distribution is used for one of the hypotheses. In this interpretation, the standard non-negativity property of the Kullback-Leibler divergence is essentially a restatement of the optimality property of likelihood ratios established by the Neyman-Pearson lemma. The asymmetry of the Kullback-Leibler divergence is then reviewed from the viewpoint of information geometry.
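In standard notation (a reminder of the definitions, not a quotation from the article): for distributions \(P\) and \(Q\) with densities \(p\) and \(q\),

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_{P}\!\left[\log \frac{p(X)}{q(X)}\right] \ge 0,
\]

with equality if and only if \(P = Q\), while the Neyman-Pearson lemma states that the test rejecting \(Q\) in favor of \(P\) when \(p(x)/q(x) > k\) is most powerful among all tests of its size, so replacing \(p\) or \(q\) by a wrong density in the ratio can only lose power.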

7.
Image deconvolution problems with a symmetric point-spread function arise in many areas of science and engineering. These problems are often solved by the Richardson-Lucy method, a nonlinear iterative method. We first show a convergence result for the Richardson-Lucy method; the proof sheds light on why the method may converge slowly. Subsequently, we describe an iterative active set method that imposes the same constraints on the computed solution as the Richardson-Lucy method. Computed examples show the latter method to yield better restorations than the Richardson-Lucy method, typically with less computational effort.
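As a concrete reference point, here is a minimal one-dimensional Richardson-Lucy iteration; the function name and the flat starting guess are illustrative choices of ours, not the article's implementation, and the PSF is assumed nonnegative and normalized to sum to one.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(b, psf, iters=50, eps=1e-12):
    """Richardson-Lucy deconvolution for b ~ psf * x (1-D sketch).

    Each step multiplies the current estimate by the back-projected
    ratio b / (psf * x); for a symmetric PSF the adjoint equals
    convolution with the PSF itself. Iterates stay nonnegative.
    """
    x = np.full_like(b, b.mean())        # flat, positive initial guess
    psf_adj = psf[::-1]                  # adjoint kernel (= psf when symmetric)
    for _ in range(iters):
        ratio = b / (fftconvolve(x, psf, mode="same") + eps)
        x = x * fftconvolve(ratio, psf_adj, mode="same")
    return x
```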

8.
In this paper, we present a duality theory for fractional programming problems in the face of data uncertainty via robust optimization. Employing conjugate analysis, we establish robust strong duality between an uncertain fractional programming problem and its uncertain Wolfe dual programming problem by showing strong duality between their deterministic counterparts: the robust counterpart of the primal model and the optimistic counterpart of its dual problem. We show that our results encompass as special cases some programming problems considered in the recent literature. Moreover, we show that robust strong duality always holds for linear fractional programming problems under scenario data uncertainty or constraint-wise interval uncertainty, and that the optimistic counterpart of the dual is computationally tractable.

9.
This paper considers a classical portfolio optimization problem with second-order stochastic dominance (SSD) constraints: the objective is to maximize the expected return, while the SSD constraints measure risk by requiring the portfolio return to dominate, in the second order, a predetermined reference (benchmark) return. Unlike traditional SSD portfolio optimization models, we treat the asset returns as uncertain: their exact probability distribution is unknown but belongs to a given uncertainty set. We build a robust SSD portfolio optimization model and, using robust optimization theory, derive the corresponding robust equivalent problem. Finally, using real data from the S&P 500 stock market, we analyze the optimal portfolio weights under different training-sample sizes and uncertainty sets, as well as the in-sample and out-of-sample effects of the uncertain parameters on the expected return. The results show that investment strategies derived from the most recent historical data achieve higher out-of-sample expected returns and are therefore more informative for future investment. While guaranteeing the optimality of the in-sample solution, the model also attains a high out-of-sample expected return and a high likelihood that the stochastic dominance constraints remain satisfied.
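For orientation (our notation, not necessarily the paper's), a portfolio return \(x^{\top}\xi\) dominates a benchmark return \(Y\) in the second order when

\[
\mathbb{E}\big[(\eta - x^{\top}\xi)_{+}\big] \le \mathbb{E}\big[(\eta - Y)_{+}\big] \qquad \text{for all } \eta \in \mathbb{R},
\]

and the robust model requires this inequality to hold for every return distribution in the uncertainty set.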

10.
We consider a class of estimation problems in which data of a Poisson character are related by a linear model to a target function that satisfies certain physical constraints. The classic example of this situation is the reconstruction problem of positron emission tomography (PET), where the function of interest satisfies positivity constraints. This article examines the impact of such constraints by comparing simple unconstrained reconstruction methods with constrained alternatives based on maximum likelihood (ML) and least squares (LS) formulations. Data from a series of numerical experiments are presented to quantify the significance of constraints. Although these experiments show that constraints are important, the differences between ML- and LS-based implementations of constraints are quite small. Thus, in order to evaluate the impact of constraints, it appears sufficient to focus on comparing constrained versus unconstrained implementations of LS, which simplifies the analysis of constraints considerably. A perturbation analysis technique is proposed to summarize the impact of constraints in terms of a single relative-efficiency measure. The predictions obtained by this analysis are found to be in good agreement with the experimental data.
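In rough terms (our notation, not the article's), the two constrained formulations being compared are, for Poisson data \(b\) linked to the nonnegative target \(x\) by a matrix \(A\),

\[
\hat{x}_{\mathrm{ML}} = \arg\max_{x \ge 0} \sum_i \big( b_i \log (Ax)_i - (Ax)_i \big)
\qquad \text{and} \qquad
\hat{x}_{\mathrm{LS}} = \arg\min_{x \ge 0} \lVert Ax - b \rVert_2^2,
\]

with the unconstrained baselines obtained by dropping \(x \ge 0\).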

11.
We study optimal solutions to an abstract optimization problem for measures, which is a generalization of classical variational problems in information theory and statistical physics. In the classical problems, information and relative entropy are defined using the Kullback-Leibler divergence, and for this reason optimal measures belong to a one-parameter exponential family. Measures within such a family have the property of mutual absolute continuity. Here we show that this property characterizes other families of optimal positive measures whenever a functional representing information has a strictly convex dual. Mutual absolute continuity of optimal probability measures allows us to strictly separate deterministic and non-deterministic Markov transition kernels, which play an important role in theories of decision, estimation, control, communication and computation. We show that deterministic transitions are strictly sub-optimal unless the information resource with a strictly convex dual is unconstrained. As an illustration, we construct an example where, unlike any non-deterministic kernel, every deterministic kernel either has negatively infinite expected utility (unbounded expected error) or communicates infinite information.

12.
In real situations, optimization problems often involve uncertain parameters. Robust optimization is a distribution-free methodology based on worst-case analyses for handling such problems. In this paper, we first focus on a special class of uncertain linear programs (LPs). Applying the duality theory for nonconvex quadratic programs (QPs), we reformulate the robust counterpart as a semidefinite program (SDP) and show the equivalence property under mild assumptions. We also apply the same technique to uncertain second-order cone programs (SOCPs) with “single” (not side-wise) ellipsoidal uncertainty, and derive similar results on the reformulation and the equivalence property. In numerical experiments, we solve test problems to demonstrate the efficiency of our reformulation approach; in particular, we compare our approach with another recent method based on Hildebrand’s Lorentz positivity.

13.
There is a well-recognized need to develop Bayesian computational methodologies that scale well to large data sets. Recent attempts to develop such methodology have often focused on two approaches: variational approximation and advanced importance sampling methods. This note shows how importance sampling can be viewed as a variational approximation, achieving a pleasing conceptual unification of the two points of view. We consider a particle representation of a distribution as defining a certain parametric model and show how the optimal approximation (in the sense of minimizing a Kullback-Leibler divergence) leads to importance-sampling-type rules. This way of looking at importance sampling has the potential to generate new algorithms through the consideration of deterministic choices of particles in particle representations of distributions.
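A minimal sketch of the importance-sampling rule the note arrives at, with hypothetical function names of ours; the claim that KL-optimal particle weights are proportional to \(p/q\) is the standard result the abstract alludes to.

```python
import numpy as np

def snis_weights(log_p, log_q, particles):
    """Self-normalized importance weights for particles drawn from q.

    Treating the weighted particles as a parametric approximation and
    minimizing the Kullback-Leibler divergence to the target p over the
    weights yields the familiar rule w_i proportional to p(x_i)/q(x_i).
    """
    log_w = np.array([log_p(x) - log_q(x) for x in particles])
    log_w -= log_w.max()                 # guard against overflow
    w = np.exp(log_w)
    return w / w.sum()
```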

14.
In this paper we present a robust conjugate duality theory for convex programming problems in the face of data uncertainty within the framework of robust optimization, extending the powerful conjugate duality technique. We first establish robust strong duality between an uncertain primal parameterized convex programming model problem and its uncertain conjugate dual by proving strong duality between the deterministic robust counterpart of the primal model and the optimistic counterpart of its dual problem under a regularity condition. This regularity condition is not only sufficient for robust duality but also necessary for it whenever robust duality holds for every linear perturbation of the objective function of the primal model problem. More importantly, we show that robust strong duality always holds for partially finite convex programming problems under scenario data uncertainty and that the optimistic counterpart of the dual is a tractable finite-dimensional problem. As an application, we also derive a robust conjugate duality theorem for support vector machines, a class of important convex optimization models for classifying two labelled data sets. The support vector machine has emerged as a powerful modelling tool for machine learning problems of data classification that arise in many areas of application in information and computer sciences.

15.
The Cross-Entropy Method for Combinatorial and Continuous Optimization
We present a new and fast method, called the cross-entropy method, for finding the optimal solution of combinatorial and continuous nonconvex optimization problems with convex bounded domains. To find the optimal solution we solve a sequence of simple auxiliary smooth optimization problems based on Kullback-Leibler cross-entropy, importance sampling, Markov chains and the Boltzmann distribution. We use importance sampling as an important ingredient for the adaptive adjustment of the temperature in the Boltzmann distribution, and use the Kullback-Leibler cross-entropy to find the optimal solution. In fact, we use the mode of a unimodal importance sampling distribution, such as the mode of a beta distribution, as an estimate of the optimal solution for continuous optimization, and a Markov chain approach for combinatorial optimization. In the latter case we show almost sure convergence of our algorithm to the optimal solution. Supporting numerical results for both continuous and combinatorial optimization problems are given as well. Our empirical studies suggest that the running time of the cross-entropy method is polynomial in the size of the problem.
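The following is a heavily simplified sketch of the cross-entropy idea for continuous minimization, using a Gaussian sampling distribution refitted to elite samples; the paper's actual scheme (beta distributions, Boltzmann temperatures, Markov chains) is richer, and all names and parameter values below are ours.

```python
import numpy as np

def cross_entropy_minimize(f, mu0, sigma0, n_samples=200, n_elite=20, iters=60):
    """Cross-entropy method, Gaussian variant (sketch).

    Repeatedly sample candidates, keep the n_elite with the lowest f,
    and refit the sampling distribution to them; minimizing the
    cross-entropy to the elite set gives the moment updates below.
    """
    rng = np.random.default_rng(0)
    mu = np.asarray(mu0, dtype=float)
    sigma = np.asarray(sigma0, dtype=float)
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, mu.size))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

# Example: the estimate concentrates near the minimizer (1, -2).
x_hat = cross_entropy_minimize(lambda v: np.sum((v - np.array([1.0, -2.0]))**2),
                               mu0=[0.0, 0.0], sigma0=[5.0, 5.0])
```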

16.
We consider two problems: (1) estimating a normal mean under the general divergence loss introduced in [S. Amari, Differential geometry of curved exponential families — curvatures and information loss, Ann. Statist. 10 (1982) 357-387] and [N. Cressie, T.R.C. Read, Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B 46 (1984) 440-464], and (2) finding a predictive density, under the same loss, of a new observation drawn independently of observations sampled from a normal distribution with the same mean but possibly a different variance. The general divergence loss includes as special cases both the Kullback-Leibler and Bhattacharyya-Hellinger losses. The sample mean, which is a Bayes estimator of the population mean under this loss and the improper uniform prior, is shown to be minimax in any dimension. A counterpart of this result for the predictive density is also proved in any dimension. The admissibility of these rules holds in one dimension, and we conjecture that the result is true in two dimensions as well. However, the general Baranchik [A.J. Baranchik, A family of minimax estimators of the mean of a multivariate normal distribution, Ann. Math. Statist. 41 (1970) 642-645] class of estimators, which includes the James-Stein estimator and the Strawderman [W.E. Strawderman, Proper Bayes minimax estimators of the multivariate normal mean, Ann. Math. Statist. 42 (1971) 385-388] class of estimators, dominates the sample mean in three or higher dimensions for the estimation problem. An analogous class of predictive densities is defined, and any member of this class is shown to dominate the predictive density corresponding to a uniform prior in three or higher dimensions. For the prediction problem, in the special case of Kullback-Leibler loss, our results complement to a certain extent some of the recent important work of Komaki [F. Komaki, A shrinkage predictive distribution for multivariate normal observations, Biometrika 88 (2001) 859-864] and George, Liang and Xu [E.I. George, F. Liang, X. Xu, Improved minimax predictive densities under Kullback-Leibler loss, Ann. Statist. 34 (2006) 78-92], while our approach produces a general class of predictive densities (not necessarily Bayes, but not excluding Bayes predictors) dominating the predictive density under a uniform prior. We also show that various modifications of the James-Stein estimator continue to dominate the sample mean and, by the duality between estimation and predictive density results that we establish, similar results continue to hold for the prediction problem as well.

17.
Motivated by the bandwidth selection problem in local likelihood density estimation and by the problem of assessing a final model chosen by a certain model selection procedure, we consider estimation of the Kullback-Leibler divergence. It is known that the best bandwidth choice for the local likelihood density estimator depends on the distance between the true density and the ‘vehicle’ parametric model. Also, the Kullback-Leibler divergence may be a useful measure by which one judges how far the true density is from a parametric family. We propose two estimators of the Kullback-Leibler divergence, derive their asymptotic distributions, and compare their finite-sample properties. Research of Young Kyung Lee was supported by the Brain Korea 21 Projects in 2004. Byeong U. Park’s research was supported by KOSEF through the Statistical Research Center for Complex Systems at Seoul National University.

18.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality. We show that the knowledge-based SVM, where prior knowledge is in the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting convex optimization problem as a simple quadratic optimization problem with non-negativity constraints using Lagrange duality. We obtain the solution of the converted problem by a fixed-point iterative algorithm and establish the convergence of the algorithm. Finally, we present some preliminary results from our computational experiments with the method.

19.
This paper introduces the concept of the entropic value-at-risk (EVaR), a new coherent risk measure that corresponds to the tightest possible upper bound obtained from the Chernoff inequality for the value-at-risk (VaR) as well as the conditional value-at-risk (CVaR). We show that a broad class of stochastic optimization problems that are computationally intractable with the CVaR is efficiently solvable when the EVaR is incorporated. We also prove that if two distributions have the same EVaR at all confidence levels, then they are identical at all points. The dual representation of the EVaR is closely related to the Kullback-Leibler divergence, also known as relative entropy. Inspired by this dual representation, we define a large class of coherent risk measures, called g-entropic risk measures, which includes both the CVaR and the EVaR.
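For reference, the primal and Kullback-Leibler dual representations of the EVaR as usually stated in the literature (confidence-level conventions vary, so treat the normalization as indicative rather than as the paper's exact notation):

\[
\mathrm{EVaR}_{1-\alpha}(X) = \inf_{z>0} \frac{1}{z} \ln\!\frac{\mathbb{E}\big[e^{zX}\big]}{\alpha}
= \sup\Big\{\, \mathbb{E}_{Q}[X] \;:\; D_{\mathrm{KL}}(Q \,\|\, P) \le -\ln \alpha \,\Big\}.
\]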

20.
By deconvolution we mean the solution of a linear first-kind integral equation with a convolution-type kernel, i.e., a kernel that depends only on the difference between the two independent variables. Deconvolution problems are special cases of linear first-kind Fredholm integral equations, whose treatment requires the use of regularization methods. The corresponding computational problem takes the form of a structured matrix problem with a Toeplitz or block-Toeplitz coefficient matrix. The aim of this paper is to present a tutorial survey of numerical algorithms for the practical treatment of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show how Toeplitz matrix–vector products are computed by means of the FFT, which is useful in iterative methods. We also introduce the Kronecker product and show how it is used in the discretization and solution of 2-D deconvolution problems whose variables separate.
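As an illustration of the FFT-based Toeplitz product mentioned above, here is a minimal sketch (the function name and layout are ours; the paper itself is a survey). The n-by-n Toeplitz matrix is embedded in a 2n-by-2n circulant matrix, which the FFT diagonalizes, so the product costs O(n log n) instead of O(n^2).

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Compute T @ x for the Toeplitz matrix with first column c and
    first row r (r[0] == c[0]) via circulant embedding and the FFT."""
    n = len(c)
    # First column of the circulant embedding: [c, 0, r[n-1], ..., r[1]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    pad = np.concatenate([x, np.zeros(n)])       # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))
    return y[:n].real                            # first n entries give T @ x

# Sanity check against a dense product:
# from scipy.linalg import toeplitz
# c, r, x = np.random.rand(5), np.random.rand(5), np.random.rand(5)
# r[0] = c[0]
# assert np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x))
```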
