Similar Documents (20 results)
1.
In this article, we consider the problem of estimating the eigenvalues and eigenfunctions of the covariance kernel (i.e., the functional principal components) from sparse and irregularly observed longitudinal data. We exploit the smoothness of the eigenfunctions to reduce dimensionality by restricting them to a lower dimensional space of smooth functions. We then approach this problem through a restricted maximum likelihood method. The estimation scheme is based on a Newton–Raphson procedure on the Stiefel manifold, using the fact that the basis coefficient matrix for representing the eigenfunctions has orthonormal columns. We also address the selection of the number of basis functions, as well as the dimension of the covariance kernel, by a second-order approximation to the leave-one-curve-out cross-validation score that is computationally very efficient. The effectiveness of our procedure is demonstrated by simulation studies and an application to a CD4+ count dataset. In the simulation studies, our method performs well on both estimation and model selection. It also outperforms two existing approaches: one based on local polynomial smoothing, and another using an EM algorithm. Supplementary materials, including technical details, the R package fpca, and the data analyzed in this article, are available online.
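The computational device highlighted in this abstract is keeping the basis-coefficient matrix orthonormal while taking Newton–Raphson steps on the Stiefel manifold. The sketch below shows one generic way to stay on that manifold, a QR-based retraction after a step; it is illustrative only (function names and the step construction are assumptions), not the authors' REML update.

```python
import numpy as np

def stiefel_qr_retraction(B, D, step=1.0):
    """Retract B + step*D back onto the Stiefel manifold {B : B^T B = I}.

    B : (p, k) matrix with orthonormal columns (basis coefficients).
    D : (p, k) search direction (e.g., a projected Newton direction).
    Uses the thin QR factorization; fixing the signs of R's diagonal
    keeps the map well defined.
    """
    Q, R = np.linalg.qr(B + step * D)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0              # guard against zero diagonal entries
    return Q * s                 # rescale columns; still orthonormal

# Toy usage: start from a random orthonormal frame and take a small step.
rng = np.random.default_rng(0)
B0, _ = np.linalg.qr(rng.standard_normal((10, 3)))
D = rng.standard_normal((10, 3))
B1 = stiefel_qr_retraction(B0, D, step=0.1)
print(np.allclose(B1.T @ B1, np.eye(3)))   # True: columns remain orthonormal
```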

2.
李慧茹 《经济数学》2002,19(1):85-94
By defining a new *-differential, this paper presents a Newton method for systems of locally Lipschitz nonsmooth equations and studies its global convergence. The method combines local and global convergence results for nonsmooth systems of equations. Finally, we apply this Newton method to systems of smooth compositions of nonsmooth functions and obtain good convergence properties.

3.
A new algorithm is presented for carrying out large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on the first- and second-order adjoint techniques allowing us to obtain the Newton line search direction by integrating a tangent linear equations model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton type algorithm is thus completely eliminated, while the storage problem related to the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton and LBFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work. Thus, the adjoint Newton algorithm must be used with some caution. A possible solution to avoid the current weakness of the adjoint Newton algorithm is proposed.

4.
A class of discrete Newton algorithms for solving systems of nonlinear equations (total citations: 6; self-citations: 0; citations by others: 6)
1. Introduction. Consider the system of nonlinear equations F(x) = 0. Let x_i be the current iterate. To compute the next iterate, Newton's method solves a linear system; if divided differences are used in place of derivatives, the discrete Newton method solves the corresponding system with the divided-difference Jacobian J(x_i; h), whose computation requires n^2 function values. To improve efficiency, Brown's method [1] uses substitution and elimination to reduce the number of function evaluations: an additional inner iteration is used to obtain the next iterate x_{i+1} from x_i. Write x_i = (z_1, ..., z_n)^T and let t = (t_1, ..., t_n)^T denote the variable. The basic idea of Brown's method is as follows. Approximate f_1(x) linearly at x_i, solve for the first component, and substitute it into the second function, obtaining a function of t_2, ..., t_n; when (t_2, ..., t_n)^T = (z_2, ..., z_n)^T, by (1.4), ...
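To make the setting concrete, here is a minimal sketch of the plain discrete Newton method that the introduction refers to, with the Jacobian replaced by forward divided differences. This is the textbook scheme, not Brown's substitution–elimination refinement, and all names are illustrative.

```python
import numpy as np

def discrete_newton(F, x0, h=1e-7, tol=1e-10, max_iter=50):
    """Discrete Newton method for F(x) = 0 with a divided-difference Jacobian.

    Each Jacobian costs n extra evaluations of F (one per coordinate),
    i.e. n^2 scalar function values, which is exactly the cost that
    Brown's method tries to reduce.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - Fx) / h   # forward-difference column
        x = x + np.linalg.solve(J, -Fx)
    return x

# Example: F(x, y) = (x^2 + y^2 - 1, x - y).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
print(discrete_newton(F, [1.0, 0.5]))   # ≈ (0.7071, 0.7071)
```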

5.
The Newton method and the inexact Newton method for solving quasidifferentiable equations via the quasidifferential are investigated. The notion of Q-semismoothness for a quasidifferentiable function is proposed. The superlinear convergence of the Newton method proposed by Zhang and Xia is proved under the Q-semismooth assumption. An inexact Newton method is developed and its linear convergence is shown. Project sponsored by Shanghai Education Committee Grant 04EA01 and by Shanghai Government Grant T0502.

6.
We show that a modified Milstein scheme combined with explicit Newton's method enables us to construct fast-converging sequences of approximate solutions of stochastic differential equations. The fast uniform convergence of our Newton–Milstein scheme follows from Amano's probabilistic second-order error estimate, which had been an open problem since 1991. The Newton–Milstein scheme, which is based on a modified Milstein scheme and the symbolic Newton's method, can be classified as a hybrid numerical and computer-algebraic method, and it may open new possibilities for the study of computer-algebraic methods in stochastic analysis.
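For context, below is a minimal sketch of the standard (unmodified) Milstein step for a scalar SDE dX = a(X) dt + b(X) dW, the building block that the Newton–Milstein scheme modifies; the drift and diffusion functions here are illustrative and this is not the authors' scheme.

```python
import numpy as np

def milstein_path(a, b, db, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate dX = a(X) dt + b(X) dW with the standard Milstein scheme.

    a, b : drift and diffusion coefficients.
    db   : derivative b'(x), used in the Milstein correction term.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = (x[k]
                    + a(x[k]) * dt
                    + b(x[k]) * dW
                    + 0.5 * b(x[k]) * db(x[k]) * (dW**2 - dt))
    return x

# Geometric Brownian motion: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma.
mu, sigma = 0.05, 0.2
path = milstein_path(lambda x: mu * x, lambda x: sigma * x, lambda x: sigma, x0=1.0)
print(path[-1])
```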

7.
The paper is devoted to two systems of nonsmooth equations. One is the system of equations of max-type functions and the other is the system of equations of smooth compositions of max-type functions. The Newton and approximate Newton methods for these two systems are proposed. The Q-superlinear convergence of the Newton methods and the Q-linear convergence of the approximate Newton methods are established. The present methods can be more easily implemented than the previous ones, since they do not require an element of the Clarke generalized Jacobian, of the B-differential, or of the b-differential at each iteration point.

8.
高岩 《运筹学学报》2011,15(2):53-58
This paper studies nonsmooth nonlinear complementarity problems. The problem is first reformulated as a system of nonsmooth equations, which is then solved by a Newton method. At each iteration, this Newton method requires only one element of the B-differential of the underlying function. Finally, the superlinear convergence of the method is proved.
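A hedged sketch of the general idea: reformulate the complementarity problem 0 ≤ x ⟂ F(x) ≥ 0 as the nonsmooth system H(x) = min(x, F(x)) = 0 and take Newton steps with a single element of the B-differential per iteration. The min reformulation and the particular B-differential element chosen below are common textbook choices, not necessarily the ones used in the cited paper.

```python
import numpy as np

def semismooth_newton_ncp(F, JF, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton for the NCP reformulated as H(x) = min(x, F(x)) = 0.

    At each iterate one element V of the B-differential of H is used:
    row i of V is e_i^T if x_i <= F_i(x), and row i of JF(x) otherwise.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        H = np.minimum(x, Fx)
        if np.linalg.norm(H) < tol:
            break
        V = np.where((x <= Fx)[:, None], np.eye(n), JF(x))
        x = x + np.linalg.solve(V, -H)
    return x

# Toy linear complementarity problem: F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
sol = semismooth_newton_ncp(lambda x: M @ x + q, lambda x: M, np.array([1.0, 1.0]))
print(sol)   # ≈ (0.5, 0): x >= 0, F(x) >= 0, x*F(x) = 0
```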

9.
An approximate Newton method for solving systems of semismooth equations (total citations: 1; self-citations: 0; citations by others: 1)
This paper proposes an approximate Newton method for solving systems of semismooth equations and proves the local superlinear convergence of the algorithm. Numerical results show that the algorithm is effective.

10.
In this paper, we propose a new distinctive version of a generalized Newton method for solving nonsmooth equations. The iterative formula is not of the classic Newton type, but an exponential one. Moreover, it uses matrices from the B-differential instead of the generalized Jacobian. We prove local convergence of the method and we present some numerical examples.

11.
This paper studies the properties of a class of B-differentiable functions arising from double obstacle problems and, under certain conditions, proves the global convergence and quadratic convergence of a damped Newton method for solving the corresponding B-differentiable equations. Numerical examples show that the algorithm is effective.
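As a concrete, generic illustration of damping, the sketch below globalizes a Newton step for F(x) = 0 with an Armijo backtracking line search on the merit function ½‖F(x)‖². For simplicity it uses a smooth Jacobian, whereas the paper works with B-differentiable functions from double obstacle problems; all names are illustrative.

```python
import numpy as np

def damped_newton(F, JF, x0, tol=1e-10, max_iter=100, beta=0.5, sigma=1e-4):
    """Damped (line-search) Newton method for F(x) = 0.

    The step length is reduced by the factor beta until the Armijo
    condition on the merit function theta(x) = 0.5*||F(x)||^2 holds.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        theta = 0.5 * Fx @ Fx
        if np.sqrt(2 * theta) < tol:
            break
        d = np.linalg.solve(JF(x), -Fx)
        t = 1.0
        # Backtrack: accept t once the merit function decreases sufficiently.
        while 0.5 * np.sum(F(x + t * d) ** 2) > (1 - 2 * sigma * t) * theta:
            t *= beta
            if t < 1e-12:
                break
        x = x + t * d
    return x

# Example: F(x) = (atan(x1), atan(x2)); the undamped Newton method
# diverges from large starting points, the damped version converges.
F = lambda v: np.arctan(v)
JF = lambda v: np.diag(1.0 / (1.0 + v ** 2))
print(damped_newton(F, JF, np.array([5.0, -8.0])))   # ≈ (0, 0)
```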

12.
In this paper, we consider two versions of a Newton-type method for solving nonlinear equations with nondifferentiable terms, which use as iteration matrices any matrix from the B-differential of the semismooth terms. Local and global convergence theorems for the generalized Newton method and the inexact generalized Newton method are proved. Linear convergence of the algorithms is obtained under very mild assumptions. Superlinear convergence holds under some conditions imposed on both terms of the equation. Some numerical results indicate that both algorithms work quite well in practice.

13.
When solving large complex optimization problems, the user is faced with three major problems. These are (i) the cost in human time in obtaining accurate expressions for the derivatives involved; (ii) the need to store second derivative information; and (iii), of lessening importance, the time taken to solve the problem on the computer. For many problems, a significant part of the latter can be attributed to solving Newton-like equations. In the algorithm described, the equations are solved using a conjugate direction method that only needs the Hessian at the current point when it is multiplied by a trial vector. In this paper, we present a method that finds this product using automatic differentiation while only requiring vector storage. The method takes advantage of any sparsity in the Hessian matrix and computes exact derivatives. It avoids the complexity of symbolic differentiation, the inaccuracy of numerical differentiation, the labor of finding analytic derivatives, and the need for matrix storage. When far from a minimum, an accurate solution to the Newton equations is not justified, so an approximate solution is obtained by using a version of Dembo and Steihaug's truncated Newton algorithm (Ref. 1). This paper was presented at the SIAM National Meeting, Boston, Massachusetts, 1986.
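The central trick described here is forming Hessian-vector products Hv without ever storing H. The sketch below shows the same idea via a central finite difference of gradients; the cited algorithm instead obtains the product exactly with automatic differentiation, and the objective and names below are illustrative.

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v using only gradient calls and vector storage.

    H(x) v ≈ (grad(x + eps*v) - grad(x - eps*v)) / (2*eps)
    The full Hessian is never formed, so memory stays O(n).
    (A finite-difference stand-in for the exact AD product in the paper.)
    """
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

# Example: f(x) = 0.5 x^T A x  =>  gradient A x, Hessian A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
print(hessian_vector_product(grad, x, v))   # ≈ A @ v = [3.5, 4.5]
```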

14.
A Smoothing Newton Method for Semi-Infinite Programming (total citations: 5; self-citations: 0; citations by others: 5)
This paper is concerned with numerical methods for solving a semi-infinite programming problem. We reformulate the equations and nonlinear complementarity conditions of the first-order optimality condition of the problem into a system of semismooth equations. By using a perturbed Fischer–Burmeister function, we develop a smoothing Newton method for solving this system of semismooth equations. An advantage of the proposed method is that at each iteration, only a system of linear equations is solved. We prove that under standard assumptions, the iterate sequence generated by the smoothing Newton method converges superlinearly/quadratically.
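For reference, a perturbed (smoothed) Fischer–Burmeister function is commonly taken as φ_μ(a, b) = a + b − √(a² + b² + 2μ²), which recovers the nonsmooth FB function φ(a, b) = a + b − √(a² + b²) as μ → 0. A minimal sketch follows; the exact perturbation used in the paper may differ.

```python
import numpy as np

def fischer_burmeister(a, b):
    """Nonsmooth FB function: phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt(a**2 + b**2)

def smoothed_fischer_burmeister(a, b, mu):
    """Kanzow-type smoothing; smooth for mu > 0, tends to FB as mu -> 0."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

a, b = 0.3, 0.0
print(fischer_burmeister(a, b))                   # 0.0 (complementary pair)
for mu in (1e-1, 1e-3, 1e-6):
    print(smoothed_fischer_burmeister(a, b, mu))  # -> 0.0 as mu -> 0
```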

15.
On the convergence of the generalized Newton method (total citations: 4; self-citations: 0; citations by others: 4)
Under rather weak conditions, this paper proves the local superlinear convergence of the generalized Newton method for systems of B-differentiable equations, providing a theoretical basis for applying the algorithm directly to nonlinear programming, variational inequality, and nonlinear complementarity problems. Finally, a concrete strategy for putting the generalized Newton method into practice is given. Numerical results show that the algorithm is effective.

16.
Two new schemes for Newton's method (total citations: 7; self-citations: 4; citations by others: 3)
Two new schemes for Newton's iteration, the Simpson–Newton method and the geometric-mean Newton method, are presented; they are shown to converge at least cubically to simple roots and linearly to multiple roots. Numerical experiments are reported and compared with other known Newton-type methods, and the results show the advantages of the new schemes. They enrich the family of root-finding methods for nonlinear equations and have value both in theory and in applications.
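The abstract does not give the iteration formulas. The sketch below shows two mean-based third-order Newton variants of the same flavor, a Simpson-quadrature variant and a geometric-mean variant, as they commonly appear in the literature; treat the exact formulas as assumptions rather than as the paper's schemes.

```python
import numpy as np

def newton_geometric_mean(f, df, x0, tol=1e-12, max_iter=50):
    """Geometric-mean Newton variant (cubic convergence to simple roots):
    y  = x - f(x)/f'(x)
    x+ = x - f(x)/sqrt(f'(x)*f'(y))      (requires f'(x)*f'(y) > 0)
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)
        x = x - fx / np.sqrt(df(x) * df(y))
    return x

def newton_simpson(f, df, x0, tol=1e-12, max_iter=50):
    """Simpson-quadrature Newton variant:
    y  = x - f(x)/f'(x)
    x+ = x - 6*f(x) / (f'(x) + 4*f'((x+y)/2) + f'(y))
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)
        x = x - 6.0 * fx / (df(x) + 4.0 * df(0.5 * (x + y)) + df(y))
    return x

f, df = lambda x: x**3 - 2.0, lambda x: 3.0 * x**2
print(newton_geometric_mean(f, df, 1.5), newton_simpson(f, df, 1.5))  # ≈ 2**(1/3)
```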

17.
In this paper, a new smoothing Newton method is proposed for solving constrained nonlinear equations. We first transform the constrained nonlinear equations to a system of semismooth equations by using the so-called absolute value function of the slack variables, and then present a new smoothing Newton method for solving the semismooth equations by constructing a new smoothing approximation function. This new method is globally and quadratically convergent. It needs to solve only one system of unconstrained equations and to perform one line search at each iteration. Numerical results show that the new algorithm works quite well.
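One common reading of the |·| device mentioned above: a nonnegativity constraint x ≥ 0 can be enforced through a slack variable s via x = |s|, and the nonsmooth |s| can then be replaced by a smooth approximation such as √(s² + μ²). The specific reformulation and smoothing function used in the paper may differ; the sketch below only illustrates the standard smoothing of the absolute value.

```python
import numpy as np

def abs_smoothed(s, mu):
    """Smooth approximation of |s|: sqrt(s^2 + mu^2) -> |s| as mu -> 0."""
    return np.sqrt(s**2 + mu**2)

s = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
for mu in (1e-1, 1e-3, 1e-6):
    # The approximation error is at most mu (attained at s = 0).
    print(np.max(np.abs(abs_smoothed(s, mu) - np.abs(s))))
```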

18.
In this paper, we present a convergence analysis of the inexact Newton method for solving discrete-time algebraic Riccati equations (DAREs) for large and sparse systems. The inexact Newton method requires, at each iteration, the solution of a symmetric Stein matrix equation. These linear matrix equations are solved approximately by the alternating directions implicit (ADI) or Smith's methods. We give some new matrix identities that allow us to derive new theoretical convergence results for the obtained inexact Newton sequences. We show that under some necessary conditions the approximate solutions satisfy some desired properties, such as d-stability. The theoretical results developed in this paper are an extension to the discrete case of the analysis performed by Feitzinger et al. (2009) [8] for the continuous-time algebraic Riccati equations. In the last section, we give some numerical experiments.
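For reference, each inexact Newton step here requires a symmetric Stein (discrete Lyapunov) equation of the form X = A X Aᵀ + Q. A minimal sketch of the basic Smith iteration is below; it converges when the spectral radius of A is below one. The matrix names are generic, and large sparse codes would use ADI or a squared-Smith variant instead of this dense loop.

```python
import numpy as np

def smith_stein(A, Q, tol=1e-12, max_iter=1000):
    """Basic Smith iteration for the Stein equation X = A X A^T + Q.

    X_{j+1} = Q + A X_j A^T; converges linearly when rho(A) < 1.
    """
    X = Q.copy()
    for _ in range(max_iter):
        X_new = Q + A @ X @ A.T
        if np.linalg.norm(X_new - X, ord='fro') < tol:
            return X_new
        X = X_new
    return X

A = np.array([[0.5, 0.1], [0.0, 0.3]])
Q = np.array([[1.0, 0.2], [0.2, 2.0]])
X = smith_stein(A, Q)
print(np.linalg.norm(X - (A @ X @ A.T + Q)))   # residual ≈ 0
```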

19.
This work presents a radial basis collocation method combined with the quasi-Newton iteration method for solving semilinear elliptic partial differential equations. The main result in this study is that there exists an exponential convergence rate in the radial basis collocation discretization and a superlinear convergence rate in the quasi-Newton iteration for the nonlinear partial differential equations. In this work, the numerical error associated with the employed quadrature rule is considered. It is shown that the errors in Sobolev norms for linear elliptic partial differential equations using the radial basis collocation method are bounded by the truncation error of the RBF. The combined errors due to radial basis approximation, quadrature rules, and quasi-Newton and Newton iterations are also presented. This result can be extended to finite element or finite difference methods combined with any of the iteration methods discussed in this work. The numerical example demonstrates a good agreement between numerical results and analytical predictions. The numerical results also show that although the convergence rate of order 1.62 of the quasi-Newton iteration scheme is slightly slower than the rate of order 2 of the Newton iteration scheme, the former is more stable and less sensitive to the initial guess.
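The abstract pairs an RBF collocation discretization with a quasi-Newton iteration. As a generic illustration of the latter ingredient only (not the authors' scheme, and independent of the RBF discretization), here is Broyden's "good" method for a nonlinear system, which achieves superlinear convergence without recomputing the Jacobian; the test problem is illustrative.

```python
import numpy as np

def broyden(F, x0, J0=None, tol=1e-10, max_iter=100):
    """Broyden's (good) quasi-Newton method for F(x) = 0.

    The Jacobian approximation B is updated by a rank-one correction
    instead of being recomputed, which is what gives the method its
    superlinear (rather than quadratic) local convergence.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size) if J0 is None else np.asarray(J0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)   # rank-one Broyden update
        x, Fx = x_new, F_new
    return x

# Contractive fixed-point system: x1 = 0.5*cos(x2), x2 = 0.5*sin(x1).
F = lambda v: np.array([v[0] - 0.5 * np.cos(v[1]), v[1] - 0.5 * np.sin(v[0])])
print(broyden(F, np.array([0.0, 0.0])))   # converges to ≈ (0.486, 0.234)
```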

20.
The truncated Newton algorithm was devised by Dembo and Steihaug (Ref. 1) for solving large sparse unconstrained optimization problems. When far from a minimum, an accurate solution to the Newton equations may not be justified. Dembo's method solves these equations by the conjugate direction method, but truncates the iteration when a required degree of accuracy has been obtained. We present favorable numerical results obtained with the algorithm and compare them with existing codes for large-scale optimization.
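A compact sketch of the truncated Newton idea described above: the Newton system H d = −g is solved only approximately by conjugate gradients, stopping early with a forcing tolerance and guarding against negative curvature, and only Hessian-vector products are needed. This is a generic Dembo–Steihaug-style inner loop, not the code evaluated in the paper; all names are illustrative.

```python
import numpy as np

def truncated_newton_step(grad, hess_vec, x, eta=0.5, max_cg=50):
    """Compute an inexact Newton direction by truncated conjugate gradients.

    grad(x)        : gradient vector g.
    hess_vec(x, v) : Hessian-vector product H(x) @ v (no matrix storage).
    The CG loop stops when ||H d + g|| <= eta * ||g|| (truncation) or when
    negative curvature is detected, in which case the last iterate is used.
    """
    g = grad(x)
    d = np.zeros_like(g)
    r = -g.copy()          # residual of H d = -g at d = 0
    p = r.copy()
    tol = eta * np.linalg.norm(g)
    for _ in range(max_cg):
        Hp = hess_vec(x, p)
        curv = p @ Hp
        if curv <= 0:      # negative curvature: fall back to current d (or -g)
            return d if np.any(d) else -g
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

# Quadratic test: f(x) = 0.5 x^T A x - b^T x, so H d = -g is the full step.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
hvp = lambda x, v: A @ v
x0 = np.zeros(2)
print(x0 + truncated_newton_step(grad, hvp, x0, eta=1e-8))  # ≈ solve(A, b)
```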
