Similar Documents
20 similar documents found.
1.
In this paper, a new smoothing Newton method is proposed for solving constrained nonlinear equations. We first transform the constrained nonlinear equations to a system of semismooth equations by using the so-called absolute value function of the slack variables, and then present a new smoothing Newton method for solving the semismooth equations by constructing a new smoothing approximation function. This new method is globally and quadratically convergent. It needs to solve only one system of unconstrained equations and to perform one line search at each iteration. Numerical results show that the new algorithm works quite well.
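As an illustration of the general idea only (not the paper's specific reformulation or smoothing function), the hedged NumPy sketch below enforces a nonnegativity constraint by writing x = |z|, smooths the absolute value by sqrt(z² + μ²), and applies a damped Newton iteration to the smoothed system while μ is driven to zero. The toy system, the update rule for μ, and the backtracking line search are illustrative assumptions.

```python
import numpy as np

def F(x):
    # Toy constrained system: F(x) = 0 with x >= 0; the solution is x = (1, 1).
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

def smooth_abs(z, mu):
    # Smooth approximation of |z|: sqrt(z^2 + mu^2) -> |z| as mu -> 0.
    return np.sqrt(z**2 + mu**2)

def G(z, mu):
    # Enforce x >= 0 by writing x = |z| and smoothing the absolute value.
    return F(smooth_abs(z, mu))

def jacobian_fd(fun, z, mu, h=1e-7):
    # Forward-difference Jacobian of the smoothed system.
    n = z.size
    J = np.zeros((n, n))
    g0 = fun(z, mu)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (fun(z + e, mu) - g0) / h
    return J

def smoothing_newton(z, mu=1.0, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        g = G(z, mu)
        if np.linalg.norm(g) < tol and mu < tol:
            break
        J = jacobian_fd(G, z, mu)
        step = np.linalg.solve(J, -g)
        # Simple backtracking line search on the residual norm.
        t = 1.0
        while (np.linalg.norm(G(z + t * step, mu)) > (1 - 1e-4 * t) * np.linalg.norm(g)
               and t > 1e-12):
            t *= 0.5
        z = z + t * step
        mu *= 0.1          # drive the smoothing parameter toward zero
    return smooth_abs(z, mu)

print(smoothing_newton(np.array([2.0, 0.5])))   # approx. [1, 1]
```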

2.
In this paper, we calculate the Edgeworth expansion of a test statistic for independence when some of the parameters are large, and simulate the goodness of fit of its approximation. We also calculate an error bound for the Edgeworth expansion. Some tables of the error bound are given, which show that the derived bound is sufficiently small for practical use.

3.
Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints, and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is an efficient approach because the optimization problem has a closed form solution. The effectiveness of the proposed algorithm is investigated through solving three benchmark problems and one from a hydrological application. Results are compared with solutions found by lsqlin, and the quadratically constrained formulation is solved using the L-curve, maximum a posteriori estimation (MAP), and the χ² regularization method. The χ² regularization method with quadratic constraints is the most effective method for solving least squares problems with box constraints.
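A minimal sketch of the two approaches being compared, assuming NumPy/SciPy: scipy.optimize.lsq_linear stands in for Matlab's lsqlin/BVLS, and the quadratic-constraint formulation is approximated by a Tikhonov problem centred at the box midpoint with a hand-picked weight λ (the abstract's χ², MAP, and L-curve rules for choosing the weight are not reproduced here). The test matrix, bounds, and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

# Mildly ill-conditioned test problem with box constraints 0 <= x <= 1.
A = rng.normal(size=(60, 20)) @ np.diag(np.logspace(0, -6, 20))
x_true = rng.uniform(0.0, 1.0, size=20)
b = A @ x_true + 1e-3 * rng.normal(size=60)
lb, ub = np.zeros(20), np.ones(20)

# Reference solution from SciPy's bounded least squares (analogue of lsqlin/BVLS).
ref = lsq_linear(A, b, bounds=(lb, ub))

# Quadratic-constraint surrogate: penalize the distance from the box midpoint,
# ||x - x0||^2 <= delta, which leads to a Tikhonov problem with a closed form:
#   x(lam) = (A^T A + lam I)^{-1} (A^T b + lam x0)
x0 = 0.5 * (lb + ub)
lam = 1e-2                      # regularization weight (chosen by hand here)
n = A.shape[1]
x_qc = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * x0)
x_qc = np.clip(x_qc, lb, ub)    # project any small violations back onto the box

print("lsq_linear  residual:", np.linalg.norm(A @ ref.x - b))
print("quad-constr residual:", np.linalg.norm(A @ x_qc - b))
```

The appeal of the quadratic-constraint route, as the abstract notes, is that each regularized problem is solved by one linear system, whereas active-set methods such as BVLS iterate over the bound constraints.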

4.
In this paper, we consider a general class of nonlinear mixed discrete programming problems. By introducing continuous variables to replace the discrete variables, the problem is first transformed into an equivalent nonlinear continuous optimization problem subject to original constraints and additional linear and quadratic constraints. Then, an exact penalty function is employed to construct a sequence of unconstrained optimization problems, each of which can be solved effectively by unconstrained optimization techniques, such as conjugate gradient or quasi-Newton methods. It is shown that any local optimal solution of the unconstrained optimization problem is a local optimal solution of the transformed nonlinear constrained continuous optimization problem when the penalty parameter is sufficiently large. Numerical experiments are carried out to test the efficiency of the proposed method.
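The sketch below illustrates the continuous-relaxation-plus-penalty idea on a toy problem with one binary variable, assuming SciPy: y ∈ {0, 1} is replaced by y ∈ [0, 1] together with the quadratic constraint y(1 − y) = 0, which is folded into a simple penalty term and minimized by a quasi-Newton routine (L-BFGS-B) for an increasing sequence of penalty parameters. The penalty used here is a plain penalty on the constraint value, not necessarily the exact penalty function of the paper, and the test problem and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def objective(v):
    # Toy mixed-discrete problem: minimize (x - 0.7)^2 + (y - 0.3)^2
    # with x in [0, 2] continuous and y in {0, 1} discrete.
    x, y = v
    return (x - 0.7)**2 + (y - 0.3)**2

def penalized(v, rho):
    # Relax y to [0, 1] and penalize the quadratic constraint y * (1 - y) = 0,
    # which pushes y back to {0, 1} once the penalty parameter is large enough.
    x, y = v
    return objective(v) + rho * y * (1.0 - y)

v = np.array([1.5, 0.4])                    # relaxed starting point
for rho in (1.0, 10.0, 100.0):              # increasing penalty parameters
    res = minimize(penalized, v, args=(rho,),
                   method="L-BFGS-B", bounds=[(0.0, 2.0), (0.0, 1.0)])
    v = res.x                               # warm start the next subproblem
print(v)   # approx. [0.7, 0.0]: y has snapped back to the discrete set
```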

5.
Numerical analysis of a class of nonlinear duality problems is presented. One side of the duality is to minimize a sum of Euclidean norms subject to linear equality constraints (the constrained MSN problem). The other side is to maximize a linear objective function subject to homogeneous linear equality constraints and quadratic inequalities. Large sparse problems of this form result from the discretization of infinite dimensional duality problems in plastic collapse analysis. The solution method is based on the ℓ1 penalty function approach to the constrained MSN problem. This can be formulated as an unconstrained MSN problem for which the first author has recently published an efficient Newton barrier method, and for which new methods are still being developed. Numerical results are presented for plastic collapse problems with up to 180000 variables, 90000 terms in the sum of norms and 90000 linear constraints. The obtained accuracy is of order 10⁻⁸ measured in feasibility and duality gap.

6.
Exact moment equations for nonlinear Itô processes are derived. Taylor expansion of the drift and diffusion coefficients around the first conditional moment gives a hierarchy of coupled moment equations which can be closed by truncation or a Gaussian assumption. The state transition density is expanded into a Hermite orthogonal series with leading Gaussian term and the Fourier coefficients are expressed in terms of the moments. The resulting approximate likelihood is maximized by using a quasi-Newton algorithm with BFGS secant updates. A simulation study for the CEV stock price model compares several approximate likelihood estimators with the Euler approximation and the exact ML estimator (Feller, in Ann Math 54: 173–182, 1951).
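For concreteness, the hedged sketch below implements only the simplest comparator mentioned in the abstract, the Euler (Gaussian) approximate likelihood for a CEV model dS = μS dt + σS^γ dW, maximized with SciPy's BFGS quasi-Newton routine; the Hermite-series likelihood and the moment-equation closure are not reproduced. The simulated path, parameter values, and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate a CEV path dS = mu*S dt + sigma*S**gamma dW with the Euler scheme.
mu, sigma, gamma = 0.05, 0.2, 0.8
dt, n = 1.0 / 252.0, 2000
S = np.empty(n + 1)
S[0] = 100.0
for k in range(n):
    S[k + 1] = S[k] + mu * S[k] * dt + sigma * S[k]**gamma * np.sqrt(dt) * rng.normal()

def neg_euler_loglik(theta):
    # Euler (Gaussian) transition density: S_{k+1} | S_k is approximately normal
    # with mean S_k + mu*S_k*dt and variance (sigma * S_k**gamma)**2 * dt.
    m, s, g = theta
    mean = S[:-1] + m * S[:-1] * dt
    var = (s * S[:-1]**g)**2 * dt
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (S[1:] - mean)**2 / var)

# Maximize the approximate likelihood with a quasi-Newton (BFGS) iteration.
res = minimize(neg_euler_loglik, x0=np.array([0.1, 0.3, 0.7]), method="BFGS")
print(res.x)    # rough estimates of (mu, sigma, gamma) from a single path
```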

7.
Semiparametric linear transformation models have received much attention due to their high flexibility in modeling survival data. A useful estimating equation procedure was recently proposed by Chen et al. (2002) [21] for linear transformation models to jointly estimate parametric and nonparametric terms. They showed that this procedure can yield a consistent and robust estimator. However, the problem of variable selection for linear transformation models has been less studied, partially because a convenient loss function is not readily available under this context. In this paper, we propose a simple yet powerful approach to achieve both sparse and consistent estimation for linear transformation models. The main idea is to derive a profiled score from the estimating equation of Chen et al. [21], construct a loss function based on the profiled score and its variance, and then minimize the loss subject to a shrinkage penalty. Under regularity conditions, we show that the resulting estimator is consistent for both model estimation and variable selection. Furthermore, the estimated parametric terms are asymptotically normal and can achieve a higher efficiency than that yielded by the estimating equations. For computation, we suggest a one-step approximation algorithm which can take advantage of LARS and build the entire solution path efficiently. Performance of the new procedure is illustrated through numerous simulations and real examples including a microarray data set.

8.
The Levenberg–Marquardt method is a regularized Gauss–Newton method for solving systems of nonlinear equations. If an error bound condition holds it is known that local quadratic convergence to a non-isolated solution can be achieved. This result was extended to constrained Levenberg–Marquardt methods for solving systems of equations subject to convex constraints. This paper presents a local convergence analysis for an inexact version of a constrained Levenberg–Marquardt method. It is shown that the best results known for the unconstrained case also hold for the constrained Levenberg–Marquardt method. Moreover, the influence of the regularization parameter on the level of inexactness and the convergence rate is described. The paper improves and unifies several existing results on the local convergence of Levenberg–Marquardt methods.
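A minimal NumPy sketch of the flavour of method being analyzed, not the paper's inexact algorithm: a Levenberg–Marquardt step with regularization parameter tied to the residual norm, followed by projection onto a simple convex set (here the nonnegative orthant), applied to a system whose solution set is non-isolated. The test problem and the choice μ = ‖F‖² are illustrative assumptions.

```python
import numpy as np

def F(x):
    # Underdetermined system with a non-isolated solution set
    # (the unit circle), restricted to the convex set x >= 0.
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def projected_levenberg_marquardt(x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        f, j = F(x), J(x)
        if np.linalg.norm(f) < tol:
            break
        mu = np.linalg.norm(f)**2            # regularization tied to the residual
        d = np.linalg.solve(j.T @ j + mu * np.eye(x.size), -j.T @ f)
        x = np.maximum(x + d.ravel(), 0.0)   # project the trial point onto x >= 0
    return x

print(projected_levenberg_marquardt(np.array([2.0, 1.0])))
```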

9.
We present a new affine-scaling interior-point trust-region method with a nonmonotone interior backtracking line-search technique for solving the generalized nonlinear complementarity problem (GCP) with linear inequality constraints. Based on the generalized Jacobian of the semismooth system of equations arising from the generalized complementarity problem, the algorithm uses the l2 norm of the semismooth system as a merit function, and the resulting trust-region subproblem is a linearized quadratic model with an ellipsoidal constraint. Trial steps are computed from the generalized Newton equation, and an interior-point backtracking technique ensures that the iterates remain strictly interior, which guarantees global convergence of the algorithm. Under reasonable conditions, it is proved that near an optimal point the trust-region step reduces to a generalized quasi-Newton step, so the algorithm attains a local superlinear convergence rate. The nonmonotone technique helps to overcome highly nonlinear cases and accelerates convergence. Finally, numerical results demonstrate the effectiveness of the algorithm.

10.
We discuss the energy generation expansion planning with environmental constraints, formulated as a nonsmooth convex constrained optimization problem. To solve such problems, methods suitable for constrained nonsmooth optimization need to be employed. We describe a recently developed approach, which applies the usual unconstrained bundle techniques to a dynamically changing "improvement function". Numerical results for the generation expansion planning are reported.

11.
A new algorithm is presented for carrying out large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.

12.
We desire to find a correlation matrix of a given rank that is as close as possible to an input matrix R, subject to the constraint that specified elements of the approximating matrix must be zero. Our optimality criterion is the weighted Frobenius norm of the approximation error, and we use a constrained majorization algorithm to solve the problem. Although many correlation matrix approximation approaches have been proposed, this specific problem, with the rank specification and the constraints, has not been studied until now. We discuss solution feasibility, convergence, and computational effort. We also present several examples.

13.
We show that the solution of a strongly regular generalized equation subject to a scalar perturbation expands in pseudopower series in terms of the perturbation parameter, i.e., the expansion of order k is the solution of generalized equations expanded to order k and thus depends itself on the perturbation parameter. In the polyhedral case, this expansion reduces to a usual Taylor expansion. These results are applied to the problem of regular perturbation in constrained optimization. We show that, if the strong regularity condition is satisfied, the property of quadratic growth holds and, at least locally, the solutions of the optimization problem and of the associated optimality system coincide. If, in addition, the number of inequality constraints is finite, the solution and the Lagrange multiplier can be expanded in Taylor series. If the data are analytic, the solution and the multiplier are analytic functions of the perturbation parameter.

14.
In this article, an optimal control problem subject to a semilinear elliptic equation and mixed control-state constraints is investigated. The problem data depend on certain parameters. Under an assumption of separation of the active sets and a second-order sufficient optimality condition, Bouligand-differentiability (B-differentiability) of the solutions with respect to the parameter is established. Furthermore, an adjoint update strategy is proposed which yields a better approximation of the optimal controls and multipliers than the classical Taylor expansion, with remainder terms vanishing in L∞.

15.
An algorithm is presented that minimizes a nonlinear function in many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. For small stepsizes our method behaves like the generalized reduced gradient algorithm, whereas for large enough stepsizes we obtain the constrained equivalent of Newton's method for unconstrained minimization.
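The steepest-descent-curve idea can be sketched for the simplest case of a linear equality constraint, assuming SciPy: the search path solves the initial-value problem ẋ = −P∇f(x), where P projects onto the tangent space of the constraint, so the flow stays feasible and decreases f monotonically. For nonlinear constraints the paper's differential equations also have to restore feasibility, which this toy sketch omits; the objective and constraint below are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):
    # Objective f(x) = (x1 - 2)^2 + (x2 - 1)^2.
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

A = np.array([[1.0, 1.0]])          # equality constraint A x = 1

def projected_negative_gradient(t, x):
    # Steepest-descent curve on the feasible set: project -grad f onto
    # the tangent space {d : A d = 0} of the constraint.
    g = grad_f(x)
    P = np.eye(2) - A.T @ np.linalg.solve(A @ A.T, A)
    return -(P @ g)

x0 = np.array([0.0, 1.0])           # feasible starting point (A x0 = 1)
sol = solve_ivp(projected_negative_gradient, (0.0, 10.0), x0, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])                 # approx. [1, 0], the constrained minimizer
```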

16.
A well known approach to constrained optimization is via a sequence of unconstrained minimization calculations applied to a penalty function. This paper shows how it is possible to generalize Powell's penalty function to solve constrained problems with both equality and inequality constraints. The resulting methods are equivalent to Hestenes' method of multipliers, and a generalization of this to inequality constraints suggested by Rockafellar. Local duality results (not all of which have appeared before) for these methods are reviewed, with particular emphasis on those of practical importance. It is shown that various strategies for varying control parameters are possible, all of which can be viewed as Newton or Newton-like iterations applied to the dual problem. Practical strategies for guaranteeing convergence are also discussed. A wide selection of numerical evidence is reported, and the algorithms are compared both amongst themselves and with other penalty function methods. The new penalty function is well conditioned, without singularities, and it is not necessary for the control parameters to tend to infinity in order to force convergence. The rate of convergence is rapid and high accuracy is achieved in few unconstrained minimizations; furthermore the computational effort for successive minimizations goes down rapidly. The methods are very easy to program efficiently, using an established quasi-Newton subroutine for unconstrained minimization.
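A hedged sketch of the multiplier-method idea for a single equality constraint, assuming SciPy: the augmented Lagrangian f + λh + (c/2)h² is minimized by an established quasi-Newton subroutine (BFGS), the multiplier is updated by λ ← λ + c·h(x), and the penalty parameter c stays fixed rather than tending to infinity. The toy problem and parameter values are illustrative, and the inequality-constraint generalization discussed in the abstract is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Objective: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
    return x[0]**2 + x[1]**2

def h(x):
    # Equality constraint written as h(x) = 0.
    return x[0] + x[1] - 1.0

def augmented_lagrangian(x, lam, c):
    # Hestenes/Powell multiplier penalty: f + lam*h + (c/2)*h^2.
    return f(x) + lam * h(x) + 0.5 * c * h(x)**2

x, lam, c = np.zeros(2), 0.0, 10.0
for _ in range(10):
    # Unconstrained minimization by an established quasi-Newton routine (BFGS).
    res = minimize(augmented_lagrangian, x, args=(lam, c), method="BFGS")
    x = res.x
    lam = lam + c * h(x)        # first-order multiplier update
    # Note: c stays fixed; it does not have to tend to infinity for convergence.

print(x, lam)                   # approx. [0.5, 0.5] and lam approx. -1
```

For this convex quadratic example the multiplier estimate converges geometrically to the exact value λ* = −1 while the penalty parameter never changes.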

17.
Recent advances in the transformation model have made it possible to use this model for analyzing a variety of censored survival data. For inference on the regression parameters, there are semiparametric procedures based on the normal approximation. However, the accuracy of such procedures can be quite low when the censoring rate is heavy. In this paper, we apply an empirical likelihood ratio method and derive its limiting distribution via U-statistics. We obtain confidence regions for the regression parameters and compare the proposed method with the normal approximation based method in terms of coverage probability. The simulation results demonstrate that the proposed empirical likelihood method overcomes the under-coverage problem substantially and outperforms the normal approximation based method. The proposed method is illustrated with a real data example. Finally, our method can be applied to general U-statistic type estimating equations.

18.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it to the space of positive definite and symmetric matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems – the first a simple stochastic function minimization problem and the second a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF algorithms in both problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
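The core ingredient, the Gaussian smoothed functional gradient estimate, can be sketched in a few lines of NumPy; the version below drives a plain decaying-step stochastic gradient recursion (closer to the G-SF comparator), whereas the paper's QN-SF method additionally maintains a BFGS approximation of the Hessian on a separate timescale and handles constraints. The noisy test function, perturbation size β, and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, -2.0, 0.5])

def noisy_f(x):
    # Only noisy evaluations are available, as in simulation optimization.
    return np.sum((x - a)**2) + 0.1 * rng.normal()

def sf_gradient(x, beta=0.1, n_samples=100):
    # Smoothed functional gradient estimate with Gaussian perturbations:
    # averaging eta * (f(x + beta*eta) - f(x - beta*eta)) / (2*beta) estimates
    # the gradient of a Gaussian-smoothed version of f.
    g = np.zeros_like(x)
    for _ in range(n_samples):
        eta = rng.normal(size=x.size)
        g += eta * (noisy_f(x + beta * eta) - noisy_f(x - beta * eta)) / (2.0 * beta)
    return g / n_samples

x = np.zeros(3)
for k in range(1, 201):
    step = 0.5 / k                      # decaying step size (Robbins-Monro type)
    x = x - step * sf_gradient(x)

print(x)    # close to the minimizer a = [1, -2, 0.5]
```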

19.
A modified version of the truncated-Newton algorithm of Nash ([24], [25], [29]) is presented, differing from it only in the use of an exact Hessian vector product for carrying out the large-scale unconstrained optimization required in variational data assimilation. The exact Hessian vector product is obtained by solving an optimal control problem with distributed parameters (i.e., the system under study occupies a certain spatial and temporal domain and is modeled by partial differential equations). The algorithm is referred to as the adjoint truncated-Newton algorithm. The adjoint truncated-Newton algorithm is based on the first- and second-order adjoint techniques, allowing us to obtain a better approximation to the Newton line search direction for the problem tested here. The adjoint truncated-Newton algorithm is applied here to a limited-area shallow water equations model with model generated data where the initial conditions serve as control variables. We compare the performance of the adjoint truncated-Newton algorithm with that of the original truncated-Newton method [29] and the LBFGS (Limited Memory BFGS) method of Liu and Nocedal [23]. Our numerical tests yield results which are twice as fast as those obtained by the truncated-Newton algorithm and are faster than the LBFGS method both in terms of number of iterations as well as in terms of CPU time.
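The algorithmic comparison in the abstract can be mimicked with SciPy's general-purpose routines: Newton-CG is a truncated-Newton method that accepts an exact Hessian–vector product through its hessp argument, which here is supplied analytically for the Rosenbrock function rather than by a second-order adjoint integration; L-BFGS-B serves as the limited-memory quasi-Newton baseline. This is only an illustration of the interface, not of the data-assimilation experiments.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

x0 = np.full(50, -1.0)

# Truncated Newton (Newton-CG) driven by an *exact* Hessian-vector product,
# supplied here analytically; in variational data assimilation it would come
# from a second-order adjoint integration instead.
tn = minimize(rosen, x0, jac=rosen_der, hessp=rosen_hess_prod, method="Newton-CG")

# Limited-memory quasi-Newton baseline for comparison.
lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")

print("Newton-CG:", tn.nit, "iterations, f =", tn.fun)
print("L-BFGS-B: ", lbfgs.nit, "iterations, f =", lbfgs.fun)
```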

20.
A Chebyshev interval method for nonlinear dynamic systems under uncertainty
This paper proposes a new interval analysis method for the dynamic response of nonlinear systems with uncertain-but-bounded parameters using Chebyshev polynomial series. An interval model can be used to describe nonlinear dynamic systems under uncertainty with low-order Taylor series expansions. However, the Taylor series-based interval method can only suit problems with small uncertainty levels. To account for larger uncertainty levels, this study introduces Chebyshev series expansions into the interval model to develop a new uncertainty method for dynamic nonlinear systems. In contrast to the Taylor series, the Chebyshev series can offer higher numerical accuracy in the approximation of solutions. The Chebyshev inclusion function is developed to control the overestimation in interval computations, based on the truncated Chebyshev series expansion. The Mehler integral is used to calculate the coefficients of the Chebyshev polynomials. With the proposed Chebyshev approximation, the set of ordinary differential equations (ODEs) with interval parameters can be transformed to a new set of ODEs with deterministic parameters, to which many numerical solvers for ODEs can be directly applied. Two numerical examples are used to demonstrate the effectiveness of the proposed method, in particular its ability to effectively control the overestimation as a non-intrusive method.
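A hedged sketch of the Chebyshev inclusion idea for a scalar function of one interval parameter, assuming NumPy: coefficients of a truncated Chebyshev expansion are computed by Gauss–Chebyshev (Mehler-type) quadrature, and since |T_i(t)| ≤ 1 the range is enclosed by c0/2 ± Σ|c_i| (the truncation error is ignored here, and the application to interval-parameter ODEs from the paper is not shown). The test function and expansion degree are illustrative.

```python
import numpy as np

def chebyshev_inclusion(f, lo, hi, degree=8, n_nodes=32):
    # Truncated Chebyshev expansion of f on [lo, hi]:
    #   f(p) ~ c0/2 + sum_i c_i T_i(t),  t = (2p - lo - hi) / (hi - lo) in [-1, 1].
    # Coefficients from Gauss-Chebyshev (Mehler-type) quadrature at Chebyshev nodes.
    theta = (2.0 * np.arange(1, n_nodes + 1) - 1.0) * np.pi / (2.0 * n_nodes)
    t = np.cos(theta)
    p = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)
    fv = f(p)
    c = np.array([2.0 / n_nodes * np.sum(fv * np.cos(i * theta))
                  for i in range(degree + 1)])
    # Since |T_i(t)| <= 1, the truncated series is enclosed by
    # c0/2 +/- sum_i |c_i|  (truncation error ignored in this sketch).
    center = 0.5 * c[0]
    radius = np.sum(np.abs(c[1:]))
    return center - radius, center + radius

f = lambda p: np.sin(p) + 0.3 * p**2          # nonlinear response of a parameter p
lo, hi = 0.0, 2.0

print("Chebyshev enclosure:", chebyshev_inclusion(f, lo, hi))
grid = np.linspace(lo, hi, 10001)
print("sampled true range: ", (f(grid).min(), f(grid).max()))
```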
