Similar Documents
A total of 20 similar documents were found.
1.
We propose and study a new iterative coordinate descent algorithm (QICD) for solving nonconvex penalized quantile regression in high dimension. By permitting different subsets of covariates to be relevant for modeling the response variable at different quantiles, nonconvex penalized quantile regression provides a flexible approach for modeling high-dimensional data with heterogeneity. Although its theory has been investigated recently, its computation remains highly challenging when p is large due to the nonsmoothness of the quantile loss function and the nonconvexity of the penalty function. Existing coordinate descent algorithms for penalized least-squares regression cannot be directly applied. We establish the convergence property of the proposed algorithm under some regularity conditions for a general class of nonconvex penalty functions including popular choices such as SCAD (smoothly clipped absolute deviation) and MCP (minimax concave penalty). Our Monte Carlo study confirms that QICD substantially improves the computational speed in the p ≫ n setting. We illustrate the application by analyzing a microarray dataset.
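As a hedged illustration of the ingredients this abstract names, the sketch below implements the quantile check loss and the SCAD/MCP penalty derivatives used to majorize the nonconvex penalty at each outer iteration. Function names are ours; the actual QICD inner loop (a weighted-median update per coordinate) is not reproduced here.

```python
import numpy as np

def check_loss(u, tau):
    # Quantile check loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def scad_derivative(b, lam, a=3.7):
    # SCAD penalty derivative: lam on [0, lam], then (a*lam - b)_+ / (a - 1)
    b = np.abs(b)
    return lam * (b <= lam) + np.maximum(a * lam - b, 0.0) / (a - 1.0) * (b > lam)

def mcp_derivative(b, lam, gamma=3.0):
    # MCP penalty derivative: (lam - b/gamma)_+ , vanishing beyond gamma*lam
    return np.maximum(lam - np.abs(b) / gamma, 0.0)
```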

2.
Penalized quantile regression (PQR) provides a useful tool for analyzing high-dimensional data with heterogeneity. However, its computation is challenging due to the nonsmoothness and (sometimes) the nonconvexity of the objective function. An iterative coordinate descent algorithm (QICD) was recently proposed to solve PQR with nonconvex penalty. The QICD significantly improves the computational speed but requires a double loop. In this article, we propose an alternative algorithm based on the alternating direction method of multipliers (ADMM). By rewriting PQR in a special ADMM form, we can solve the iterations exactly without using coordinate descent. This results in a new single-loop algorithm, which we refer to as the QPADM algorithm. The QPADM demonstrates favorable performance in both computational speed and statistical accuracy, particularly when the sample size n and/or the number of features p are large. Supplementary material for this article is available online.
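To make the single-loop idea concrete, here is a minimal ADMM sketch for the lasso-penalized case; the splitting, variable names, and step parameter sigma are our assumptions, not the paper's exact QPADM formulation, and the nonconvex penalties the paper targets would replace the soft-thresholding step.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_check(r, tau, t):
    # Proximal operator of the check loss: a shifted soft-thresholding.
    return r - np.clip(r, -t * (1.0 - tau), t * tau)

def qpadm_lasso(X, y, tau, lam, sigma=1.0, iters=500):
    # Splitting: min (1/n) sum rho_tau(z) + lam*||w||_1
    # subject to X beta + z = y and beta = w.
    n, p = X.shape
    beta, w, v = np.zeros(p), np.zeros(p), np.zeros(p)
    z, u = y.copy(), np.zeros(n)
    XtX_I = np.linalg.inv(X.T @ X + np.eye(p))   # cache the inverse (or a factorization)
    for _ in range(iters):
        beta = XtX_I @ (X.T @ (y - z - u) + (w - v))   # closed-form least squares
        z = prox_check(y - X @ beta - u, tau, 1.0 / (n * sigma))
        w = soft_threshold(beta + v, lam / sigma)
        u += X @ beta + z - y          # dual update for X beta + z = y
        v += beta - w                  # dual update for beta = w
    return w
```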

3.
The minimax concave penalty (MCP) has been demonstrated theoretically and practically to be effective in nonconvex penalization for variable selection and parameter estimation. In this paper, we develop an efficient alternating direction method of multipliers (ADMM) with continuation algorithm for solving the MCP-penalized least squares problem in high dimensions. Under some mild conditions, we study the convergence properties and the Karush–Kuhn–Tucker (KKT) optimality conditions of the proposed method. A high-dimensional BIC is developed to select the optimal tuning parameters. Simulations and a real data example are presented to illustrate the efficiency and accuracy of the proposed method.
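The key subproblem in such an ADMM is the proximal step of the MCP, which has a closed-form "firm thresholding" solution. A minimal sketch, assuming unit step size and gamma > 1 (names are ours):

```python
import numpy as np

def mcp_prox(v, lam, gamma):
    # Proximal operator of MCP with unit step (gamma > 1): firm thresholding.
    # For |v| <= gamma*lam, soft-threshold and rescale; beyond that, leave v alone.
    a = np.abs(v)
    shrunk = np.sign(v) * np.maximum(a - lam, 0.0) / (1.0 - 1.0 / gamma)
    return np.where(a <= gamma * lam, shrunk, v)
```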

4.
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for elastic-net penalized Huber loss regression and quantile regression in high-dimensional settings. Unlike existing coordinate descent type algorithms, SNCD updates each regression coefficient and its corresponding subgradient simultaneously in each iteration. It combines the strengths of coordinate descent and the semismooth Newton algorithm, and effectively solves the computational challenges posed by dimensionality and nonsmoothness. We establish the convergence properties of the algorithm. In addition, we present an adaptive version of the “strong rule” for screening predictors to gain extra efficiency. Through numerical experiments, we demonstrate that the proposed algorithm is very efficient and scalable to ultrahigh dimensions. We illustrate the application via a real data example. Supplementary materials for this article are available online.
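As a rough illustration (not the paper's code), two ingredients are the Huber loss and a screening rule. Below is one common parameterization of the Huber loss and the basic sequential strong rule that the paper's adaptive version builds on; the scaling of the loss and the rule's threshold are assumptions on our part.

```python
import numpy as np

def huber_loss(r, gamma):
    # Huber loss: quadratic for |r| <= gamma, linear beyond.
    a = np.abs(r)
    return np.where(a <= gamma, r**2 / (2.0 * gamma), a - gamma / 2.0)

def strong_rule_keep(X, residual, lam_new, lam_old):
    # Basic sequential strong rule: screen out predictor j when
    # |x_j' r| / n < 2*lam_new - lam_old; returns the mask of survivors.
    n = X.shape[0]
    return np.abs(X.T @ residual) / n >= 2.0 * lam_new - lam_old
```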

5.
This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with a proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equations. The asymptotic results are augmented by a simulation study.
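One standard way to solve a SCAD-penalized estimating equation is a local quadratic approximation (LQA) of the penalty, giving ridge-type Newton steps. The sketch below is that generic LQA step under our own notation (U is the estimating function, H its negative Jacobian); the paper's algorithm for longitudinal EV models differs in its details.

```python
import numpy as np

def scad_derivative(b, lam, a=3.7):
    b = np.abs(b)
    return lam * (b <= lam) + np.maximum(a * lam - b, 0.0) / (a - 1.0) * (b > lam)

def lqa_step(U, H, beta, lam, n, eps=1e-8):
    # Ridge-type Newton step: the SCAD penalty is majorized by a quadratic
    # with curvature p'(|beta_j|) / |beta_j| rebuilt at the current iterate.
    d = scad_derivative(beta, lam) / (np.abs(beta) + eps)
    sigma = n * np.diag(d)
    return beta + np.linalg.solve(H(beta) + sigma, U(beta) - sigma @ beta)
```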

6.
While graphical models for continuous data (Gaussian graphical models) and discrete data (Ising models) have been extensively studied, there is little work on graphical models for datasets with both continuous and discrete variables (mixed data), which are common in many scientific applications. We propose a novel graphical model for mixed data, which is simple enough to be suitable for high-dimensional data, yet flexible enough to represent all possible graph structures. We develop a computationally efficient regression-based algorithm for fitting the model by focusing on the conditional log-likelihood of each variable given the rest. The parameters have a natural group structure, and sparsity in the fitted graph is attained by incorporating a group lasso penalty, approximated by a weighted lasso penalty for computational efficiency. We demonstrate the effectiveness of our method through an extensive simulation study and apply it to a music annotation dataset (CAL500), obtaining a sparse and interpretable graphical model relating the continuous features of the audio signal to binary variables such as genre, emotions, and usage associated with particular songs. While we focus on binary discrete variables for the main presentation, we also show that the proposed methodology can be easily extended to general discrete variables.
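The group structure enters through the penalty's proximal step: each group of parameters is either shrunk as a block or zeroed out together. A minimal sketch of that exact block update (the paper itself approximates it by a weighted lasso for speed):

```python
import numpy as np

def group_soft_threshold(v, lam):
    # Proximal operator of lam * ||v||_2: zero the whole block if its
    # norm is below lam, otherwise shrink it radially.
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v
```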

7.
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Unlike existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction always satisfies the sufficient descent condition and stays close to the quasi-Newton direction. With these choices, the boundedness assumption on the spectral parameter required in the method proposed by Andrei is removed. Under some mild conditions, global convergence is established. Numerical experiments demonstrate the efficiency of the algorithm on large-scale benchmark test problems, particularly in comparison with existing state-of-the-art algorithms in the literature.
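A generic spectral conjugate gradient iteration looks like the sketch below: the direction mixes the negative gradient with the previous direction, and a safeguard enforces sufficient descent. The specific parameters here (theta = 1, PRP+ beta) are placeholders; the paper's contribution is precisely a different choice that removes the boundedness assumption.

```python
import numpy as np

def spectral_cg(f, grad, x0, tol=1e-6, max_iter=1000, c=1e-4):
    # Generic spectral CG sketch: d = -theta*g + beta*d_prev with a
    # descent safeguard; not the paper's specific parameter choices.
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)                   # backtracking Armijo line search
        while f(x + t * d) > fx + c * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ parameter
        theta = 1.0                                       # placeholder spectral parameter
        d_new = -theta * g_new + beta * d
        if d_new @ g_new > -1e-10 * (g_new @ g_new):      # enforce sufficient descent
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x
```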

8.
We present a superlinearly convergent exact penalty method for solving constrained nonlinear least squares problems, in which the projected exact penalty Hessian is approximated by a structured secant updating scheme. We give general conditions for the two-step superlinear convergence of the algorithm and prove that the projected structured Broyden–Fletcher–Goldfarb–Shanno (BFGS), Powell-symmetric-Broyden (PSB), and Davidon–Fletcher–Powell (DFP) update formulas satisfy these conditions. We then extend the results to the projected structured convex Broyden family of update formulas. Extensive testing results obtained by an implementation of our algorithms, compared with the results obtained by several other competing algorithms, demonstrate the efficiency and robustness of the proposed approach.
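For orientation, the unstructured BFGS secant update that such projected, structured variants build on is the familiar rank-two formula; a hedged sketch (the paper's updates modify both the matrix being updated and the pair (s, y)):

```python
import numpy as np

def bfgs_update(B, s, y):
    # Rank-two BFGS update of the Hessian approximation B from the
    # step s = x_new - x_old and gradient change y (requires y @ s > 0).
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
```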

9.
We consider the problem of finding the nearest point (by Euclidean distance) in a simplicial cone to a given point, and develop an exterior penalty algorithm for it. Each iteration in the algorithm consists of a single Newton step following a reduction in the value of the penalty parameter. Proofs of convergence of the algorithm are given. Various other versions of exterior penalty algorithms for nearest point problems in nonsimplicial polyhedral cones and for convex quadratic programs, all based on a single descent step following a reduction in the value of the penalty parameter per iteration, are discussed. The performance of these algorithms in large-scale computational experiments is very encouraging. It shows that the number of iterations grows very slowly, if at all, with the dimension of the problem.
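A hedged sketch of the exterior-penalty idea for min ||B l - p||^2 subject to l >= 0 (the simplicial cone is {B l : l >= 0}): a quadratic penalty on the negative part of l gives a piecewise-smooth objective with a cheap Newton step. We use the conventional form in which the violation weight mu grows; the paper's parameterization instead reduces its penalty parameter, which plays the equivalent role.

```python
import numpy as np

def nearest_point_in_cone(B, p, mu=1.0, rho=10.0, outer=25):
    # One (semismooth) Newton step on the penalized objective
    # F(l) = ||B l - p||^2 + mu * sum(min(l, 0)^2) per update of mu.
    lam = np.linalg.solve(B, p)          # B square nonsingular: simplicial cone
    BtB = B.T @ B
    for _ in range(outer):
        grad = 2.0 * B.T @ (B @ lam - p) + 2.0 * mu * np.minimum(lam, 0.0)
        hess = 2.0 * BtB + 2.0 * mu * np.diag((lam < 0).astype(float))
        lam = lam - np.linalg.solve(hess, grad)
        mu *= rho
    return B @ np.maximum(lam, 0.0)      # clip to return a feasible cone point
```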

10.
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x_1, ..., x_N) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N - 1 coordinate blocks or f has at most one minimum in each of N - 2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are also applied to a problem of blind source separation.
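For readers unfamiliar with the method under study, here is a minimal cyclic block coordinate descent loop. The smooth local solver from SciPy is only for illustration; the paper's analysis covers nondifferentiable f, where each block minimization would need a suitable nonsmooth subroutine.

```python
import numpy as np
from scipy.optimize import minimize

def block_coordinate_descent(f, blocks, x0, sweeps=50):
    # Cyclic BCD: minimize f over one coordinate block at a time.
    # `blocks` is a list of index arrays partitioning the coordinates.
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for idx in blocks:
            def f_block(z, idx=idx):
                y = x.copy()
                y[idx] = z
                return f(y)
            x[idx] = minimize(f_block, x[idx]).x
    return x
```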

11.
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high-dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, that is, APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results that do not exist in the existing literature. Thorough numerical results are provided to back up our theory.
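The common skeleton of both algorithms is a path-following shrinkage iteration, shown below for the lasso prox under our own naming; APISTA's coordinate-descent acceleration and the nonconvex penalties it handles are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_path(X, y, lambdas, step=None, iters=200):
    # Path-following ISTA sketch: warm-start each soft-thresholded
    # gradient iteration from the previous (larger) lambda's solution.
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / L, L = ||X||_2^2
    beta = np.zeros(p)
    path = []
    for lam in sorted(lambdas, reverse=True):      # decreasing regularization
        for _ in range(iters):
            grad = X.T @ (X @ beta - y)            # gradient of 0.5*||y - X beta||^2
            beta = soft_threshold(beta - step * grad, step * lam)
        path.append(beta.copy())
    return path
```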

12.
This paper deals with the problem of recovering an unknown low-rank matrix from a sampling of its entries. For its solution, we consider a nonconvex approach based on the minimization of a nonconvex functional that is the sum of a convex fidelity term and a nonconvex, nonsmooth relaxation of the rank function. We show that, by a suitable choice of this nonconvex penalty, it is possible, under mild assumptions, to use the iterative forward–backward splitting method in this matrix setting as well. Specifically, we propose the use of certain parameter-dependent nonconvex penalties that, with a good choice of the parameter value, allow us to solve a convex minimization problem in the backward step, and we exploit this result to prove the convergence of the iterative forward–backward splitting algorithm. Based on the theoretical results, we develop for the solution of the matrix completion problem the efficient iterative improved matrix completion forward–backward algorithm, which exhibits lower computing times and improved recovery performance when compared with the best state-of-the-art algorithms for matrix completion.
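A compact sketch of the forward–backward iteration for matrix completion: a gradient step on the observed-entry fidelity term, then a proximal step on the singular values. For simplicity we use the convex nuclear-norm prox (soft-thresholding of singular values); the paper's point is that its chosen nonconvex penalties still leave this backward step a convex problem.

```python
import numpy as np

def fb_matrix_completion(M, mask, lam, step=1.0, iters=100):
    # M: data matrix (values at unobserved entries are ignored);
    # mask: 0/1 array marking the observed entries.
    X = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (X - M)                     # forward: gradient on observed set
        U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)    # backward: shrink singular values
        X = (U * s) @ Vt
    return X
```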

13.
Feasible Direction Interior-Point Technique for Nonlinear Optimization
We propose a feasible direction approach for the minimization by interior-point algorithms of a smooth function under smooth equality and inequality constraints. It consists of the iterative solution, in the primal and dual variables, of the Karush–Kuhn–Tucker first-order optimality conditions. At each iteration, a descent direction is defined by solving a linear system. In a second stage, the linear system is perturbed so as to deflect the descent direction and obtain a feasible descent direction. A line search is then performed to get a new interior point and ensure global convergence. Based on this approach, first-order, Newton, and quasi-Newton algorithms can be obtained. To introduce the method, we first consider the inequality constrained problem and present a globally convergent basic algorithm. Particular first-order and quasi-Newton versions of this algorithm are also stated. Then, equality constraints are included. This method, which is simple to code, does not require the solution of quadratic programs, and it is neither a penalty method nor a barrier method. Several practical applications and numerical results show that our method is robust and efficient.

14.
We present numerical results of a comparative study of codes for nonlinear and nonconvex mixed-integer optimization. The underlying algorithms are based on sequential quadratic programming (SQP) with stabilization by trust regions, linear outer approximations, and branch-and-bound techniques. The mixed-integer quadratic programming subproblems are solved by a branch-and-cut algorithm. Second-order information is updated by a quasi-Newton update formula (BFGS) applied to the Lagrange function, for continuous as well as integer variables. We do not require that the model functions can be evaluated at fractional values of the integer variables. Thus, partial derivatives with respect to integer variables are replaced by descent directions obtained from function values at neighboring grid points, and the number of simulations or function evaluations, respectively, is our main performance criterion for measuring the efficiency of a code. Numerical results are presented for a set of 100 academic mixed-integer test problems. Since not all of our test examples are convex, we reach the best-known solutions in about 90% of the test runs, and at least feasible solutions in the other cases. The average number of function evaluations of the new mixed-integer SQP code is between 240 and 500, including those needed for one- or two-sided approximations of partial derivatives or descent directions, respectively. In addition, we present numerical results for a set of 55 test problems with some practical background in petroleum engineering.
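The replacement of integer-variable derivatives described above can be sketched very simply: evaluate the model only at integer-feasible neighbors and form grid differences. The name and the two-sided choice are ours; the compared codes also use one-sided variants.

```python
import numpy as np

def integer_difference_direction(f, x, int_idx):
    # Approximate "derivatives" w.r.t. integer variables from function
    # values at the two neighboring grid points; -d then serves as a
    # descent direction in the integer coordinates.
    d = np.zeros(len(x))
    for j in int_idx:
        e = np.zeros(len(x)); e[j] = 1.0
        d[j] = (f(x + e) - f(x - e)) / 2.0   # two-sided grid difference
    return d
```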

15.
Semiparametric models with a diverging number of predictors arise in many contemporary scientific areas. Variable selection for these models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. In this paper, we consider a variable selection procedure that combines basis function approximation with the SCAD penalty. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With appropriate selection of tuning parameters, we establish the consistency and sparseness of this procedure.
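To make "basis function approximation" concrete, here is one simple construction, a truncated power basis; this is an illustrative assumption on our part (the paper may use B-splines), and knot placement is left to the caller. The nonparametric component is then represented by a linear combination of these columns and fit jointly with the parametric part under the SCAD penalty.

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    # One simple basis-function construction for the nonparametric part:
    # polynomial terms plus truncated power terms at the given knots.
    cols = [x**d for d in range(1, degree + 1)]
    cols += [np.maximum(x - k, 0.0)**degree for k in knots]
    return np.column_stack(cols)
```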

16.
Friedman et al. proposed the fused lasso signal approximator (FLSA) to denoise piecewise constant signals by penalizing the ℓ1 differences between adjacent signal points. In this article, we propose a new method, referred to as the fused-MCP, by combining the minimax concave penalty (MCP) with the fusion penalty. The fused-MCP performs better than the FLSA in maintaining the profile of the original signal and preserving the edge structure. We show that, with high probability, the fused-MCP selects the right change-points and has the oracle property, unlike the FLSA. We further show that the fused-MCP achieves the same ℓ2 error rate as the FLSA. We develop algorithms to solve fused-MCP problems, either by transforming them into MCP regression problems or by using an adjusted majorization-minimization algorithm. Simulation and experimental results show the effectiveness of our method. Supplementary material for this article is available online.
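The first of the two algorithmic routes, recasting the fused problem as an MCP regression, rests on a change of variables: with theta = L c for a lower-triangular matrix of ones, the adjacent differences become ordinary coefficients. A minimal sketch under our naming:

```python
import numpy as np

def fusion_to_regression(y):
    # theta = L c with L lower-triangular ones, so c_1 = theta_1 and
    # c_i = theta_i - theta_{i-1} for i >= 2; an MCP solver can then
    # penalize c_2, ..., c_n directly.
    n = len(y)
    L = np.tril(np.ones((n, n)))   # design matrix of the transformed problem
    return L, y                    # solve: min ||y - L c||^2 + sum_{i>=2} MCP(c_i)
```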

17.
This paper points out some fatal errors in the equivalent formulations used in Noor 2011 [Noor MA. Projection iterative methods for solving some systems of general nonconvex variational inequalities. Applied Analysis. 2011;90:777–786] and consequently in Noor 2009 [Noor MA. System of nonconvex variational inequalities. Journal of Advanced Research Optimization. 2009;1:1–10], Noor 2010 [Noor MA, Noor KI. New system of general nonconvex variational inequalities. Applied Mathematics E-Notes. 2010;10:76–85], and Wen 2010 [Wen DJ. Projection methods for a generalized system of nonconvex variational inequalities with different nonlinear operators. Nonlinear Analysis. 2010;73:2292–2297]. Since these equivalent formulations are the main tools used to suggest iterative algorithms and to establish convergence results, the algorithms and results in the aforementioned articles are not valid; this is demonstrated through several examples. To overcome the problems in these papers, we consider a new system of extended regularized nonconvex variational inequalities and establish an existence and uniqueness result for a solution of this system. We suggest and analyse a new projection iterative algorithm to compute the unique solution of the system of extended regularized nonconvex variational inequalities, which is also a fixed point of a nearly uniformly Lipschitzian mapping. Furthermore, the convergence analysis of the proposed iterative algorithm is studied under suitable conditions. As a consequence, we point out how one can derive the correct versions of the algorithms and results presented in the above-mentioned papers.
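For context, the basic projection iteration underlying this family of methods is sketched below in generic form; the paper's algorithm for the extended regularized nonconvex system, involving a nearly uniformly Lipschitzian mapping, is considerably more elaborate.

```python
import numpy as np

def projection_iteration(T, project, x0, rho=0.1, iters=1000):
    # Basic projection method for a variational inequality over a set K:
    # repeat x <- P_K(x - rho * T(x)); a solution is a fixed point.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - rho * T(x))
    return x
```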

18.
In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems. Such problems include numerous models in signal processing and machine learning research. Two extensions of the algorithm are also studied. We provide a unified scheme for these three algorithms. Under the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function.
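As a hedged sketch of the core iteration: each pass soft-thresholds a gradient step using per-coordinate weights rebuilt from the current iterate. The weights shown correspond to the log penalty sum log(|x_i| + eps); the paper's unified scheme covers a broader class, and the names are ours.

```python
import numpy as np

def prox_iteratively_reweighted(grad_f, x0, lam, step, iters=200, eps=1e-3):
    x = x0.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)            # reweighting for sum log(|x|+eps)
        z = x - step * grad_f(x)               # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x
```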

19.
This paper is concerned with a practical algorithm for solving low-rank linear multiplicative programming problems and low-rank linear fractional programming problems. The former is the minimization of a sum of products of two linear functions, while the latter is the minimization of a sum of linear fractional functions, over a polytope. Both of these problems are nonconvex minimization problems with many practical applications. We show that these problems can be solved in an efficient manner by adapting a branch-and-bound algorithm proposed by Androulakis, Maranas, and Floudas for nonconvex problems containing products of two variables. Computational experiments show that this algorithm performs much better than other reported algorithms for this class of problems.
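The Androulakis–Maranas–Floudas branch-and-bound bounds each bilinear product with its McCormick envelope over the current box; a sketch of those four linear cuts, returned as callables g with g(x, y, w) <= 0 on the relaxation of w = x*y:

```python
def mccormick_envelope(xL, xU, yL, yU):
    # McCormick relaxation of w = x*y over [xL, xU] x [yL, yU]:
    # two under-estimators and two over-estimators, each as g <= 0.
    return [
        lambda x, y, w: xL * y + yL * x - xL * yL - w,   # under-estimators
        lambda x, y, w: xU * y + yU * x - xU * yU - w,
        lambda x, y, w: w - xU * y - yL * x + xU * yL,   # over-estimators
        lambda x, y, w: w - xL * y - yU * x + xL * yU,
    ]
```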

20.
This paper extends prior work by the authors on solving unconstrained nonlinear least squares problems using a factorized quasi-Newton technique. To this end, we use a primal-dual interior-point algorithm for nonconvex nonlinear programming. The factorized quasi-Newton technique is now applied to the Hessian of the Lagrangian function for the transformed problem, which is based on a logarithmic barrier formulation. We emphasize the importance of establishing and maintaining symmetric quasi-definiteness of the reduced KKT system. The algorithm then tries to choose a step size that reduces a merit function and to select a penalty parameter that ensures descent directions along the iterative process. Computational results are included for a variety of constrained least squares problems, and preliminary numerical testing indicates that the algorithm is robust and efficient in practice.
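A minimal sketch of the logarithmic barrier transformation at the heart of the interior-point approach, under our own naming; the paper's primal-dual machinery, merit function, and quasi-definite KKT handling are not reproduced.

```python
import numpy as np

def barrier_objective(f, cons, mu):
    # Log-barrier transformation: minimize f(x) - mu * sum_i log(c_i(x))
    # over strictly feasible x (c_i(x) > 0), driving mu -> 0 across iterations.
    def phi(x):
        c = np.array([ci(x) for ci in cons])
        if np.any(c <= 0.0):
            return np.inf          # outside the strictly feasible region
        return f(x) - mu * np.sum(np.log(c))
    return phi
```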
