Similar Articles
20 similar articles found (search time: 62 ms).
1.
In this paper, the acoustic estimation of suspended sediment concentration is discussed and two estimation methods are presented. The first is a curve-fitting method: following acoustic backscattering theory, we assume that the fitting factor K1(r) between the concentration M(r) obtained by acoustic observation and the concentration M0(r) obtained by water sampling is a high-order power function of the distance r. Using a least-squares algorithm, we determine the coefficients of this power function by minimizing the difference between M(r) and M0(r) over the whole water profile. The first method places no constraint on the absorption coefficient of sound due to the suspension in the water. The second is a recursive fitting method, in which M0(r) provides the initialization and decision conditions and rational constraints are imposed on some parameters; the recursive process is stable. We analyzed the two methods with a large amount of experimental data. The results show that the estimation error of the first method is less than that of the second, while the latter can estimate not only the suspended sediment concentration but also the absorption coefficient of sound. Good results have been obtained with both methods.
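The curve-fitting step described above reduces to a linear least-squares problem once K1(r) is expanded in powers of r. A minimal numpy sketch of the idea (hypothetical function and variable names, not the authors' code):

```python
import numpy as np

def fit_correction_factor(r, M_acoustic, M_sampled, degree=3):
    """Fit a polynomial fitting factor K1(r) = sum_k a_k * r**k so that
    K1(r) * M_acoustic(r) approximates the sampled concentration M0(r)
    in the least-squares sense over the whole profile.  (Illustrative
    parameterization; the paper's exact form may differ.)"""
    # Design matrix: column k is M_acoustic * r**k, so A @ a = K1(r) * M_acoustic(r)
    A = np.column_stack([M_acoustic * r**k for k in range(degree + 1)])
    coeffs, *_ = np.linalg.lstsq(A, M_sampled, rcond=None)
    return coeffs  # a_0, ..., a_degree

# Synthetic check: if the true factor is K1(r) = 1 + 0.5 r, the fit recovers it.
r = np.linspace(0.1, 5.0, 50)
M_true = 2.0 + np.sin(r)          # "sampled" concentration M0(r)
M_acoustic = M_true / (1.0 + 0.5 * r)   # acoustic estimate before correction
a = fit_correction_factor(r, M_acoustic, M_true, degree=1)
```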

2.
The Lasso is a very well-known penalized regression model, which adds an L1 penalty with parameter λ1 on the coefficients to the squared error loss function. The Fused Lasso extends this model by also putting an L1 penalty with parameter λ2 on the difference of neighboring coefficients, assuming there is a natural ordering. In this article, we develop a path algorithm for solving the Fused Lasso Signal Approximator that computes the solutions for all values of λ1 and λ2. We also present an approximate algorithm that has considerable speed advantages for a moderate trade-off in accuracy. In the Online Supplement for this article, we provide proofs and further details for the methods developed in the article.
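The simplest corner of the (λ1, λ2) solution surface is λ2 = 0, where the Fused Lasso Signal Approximator reduces to coordinate-wise soft thresholding of the data; a minimal sketch of that special case (illustrative only, not the article's path algorithm):

```python
import numpy as np

def soft_threshold(y, lam):
    """Coordinate-wise soft thresholding: the closed-form minimizer of
    0.5 * (b - y)**2 + lam * |b| for each coordinate."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# With lambda2 = 0 (no fusion penalty), the signal approximator is the
# plain Lasso with identity design, solved exactly by soft thresholding.
y = np.array([3.0, -0.5, 1.5, -2.0])
beta = soft_threshold(y, lam=1.0)
```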

3.
In the usual Gaussian white-noise model, we consider the problem of estimating the unknown square-integrable drift function of the standard Brownian motion using the partial sums of its Fourier series expansion generated by an orthonormal basis. Under squared L2 distance loss, this problem is known to be equivalent to estimating the mean of an infinite-dimensional random vector under l2 loss, where the coordinates are independently normally distributed with the unknown Fourier coefficients as means and a common variance. In this modified version of the problem, we show that the Akaike Information Criterion for model selection, followed by least-squares estimation, attains the minimax rate of convergence. An erratum to this article is available.
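In the equivalent Gaussian sequence formulation, AIC-based truncation can be sketched in a few lines (illustrative code with assumed names; with known variance, AIC(k) is, up to a constant, the residual sum of squares of the dropped coordinates over σ² plus 2k):

```python
import numpy as np

def aic_select(y, sigma2=1.0):
    """Select the truncation point k for the partial-sum estimator in the
    Gaussian sequence model: model M_k keeps the first k coordinates
    (mu_hat_i = y_i for i < k, else 0).  Up to an additive constant,
    AIC(k) = sum_{i >= k} y_i**2 / sigma2 + 2k."""
    n = len(y)
    # tail[k] = sum of y_i**2 over the coordinates dropped by model M_k
    tail = np.concatenate([(y**2)[::-1].cumsum()[::-1], [0.0]])
    aic = np.array([tail[k] / sigma2 + 2 * k for k in range(n + 1)])
    k_hat = int(np.argmin(aic))
    mu_hat = np.where(np.arange(n) < k_hat, y, 0.0)
    return k_hat, mu_hat

# Two strong coefficients followed by noise-level ones: AIC keeps k = 2.
y = np.array([5.0, 4.0, 0.1, -0.2, 0.05])
k_hat, mu_hat = aic_select(y)
```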

4.
The CGS (conjugate Gram-Schmidt) algorithms of Hestenes and Stiefel are formulated so as to obtain least-squares solutions of a system of equations g(x) = 0 in n independent variables. Both the linear case g(x) = Ax − h and the nonlinear case are discussed. In the linear case, a least-squares solution is obtained in no more than n steps, and a method of obtaining the least-squares solution of minimum length is given. In the nonlinear case, the CGS algorithm is combined with the Gauss-Newton process to minimize sums of squares of nonlinear functions. Results of numerical experiments with several versions of CGS on test functions indicate that the algorithms are effective. The author wishes to express appreciation and to acknowledge the ideas and help of Professor M. R. Hestenes which made this paper possible.
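The Gauss-Newton half of the combined scheme can be sketched as follows (a generic textbook version with hypothetical names, not the paper's CGS-based implementation):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton iteration for minimizing 0.5 * ||g(x)||**2: at each
    step solve the linearized least-squares problem J(x) dx = -g(x)
    (here via lstsq) and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -g, rcond=None)
        x = x + dx
    return x

# Toy system: g1 = x0**2 + x1 - 2, g2 = x0 - x1, with a root at (1, 1).
res = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
x_star = gauss_newton(res, jac, x0=[2.0, 0.0])
```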

5.
The cumulative degree distributions of transport networks, such as air transportation networks and respiratory neuronal networks, follow power laws. The significance of power laws with respect to other network performance measures, such as throughput and synchronization, remains an open question. Evolving methods for the analysis and design of air transportation networks must be able to address network performance in the face of increasing demands and the need to contain and control local network disturbances, such as congestion. Toward this end, we investigate functional relationships that govern the performance of transport networks; for example, the links between the first nontrivial eigenvalue, λ2, of a network's Laplacian matrix—a quantitative measure of network synchronizability—and other global network parameters. In particular, among networks with a fixed degree distribution and fixed network assortativity (a measure of a network's preference to attach nodes based on a similarity or difference), those with small λ2 are shown to be poor synchronizers, to have much longer shortest paths and to have greater clustering in comparison to those with large λ2. A simulation of a respiratory network adds data to our investigation. This study is a beginning step in developing metrics and design variables for the analysis and active design of air transport networks. © 2008 Wiley Periodicals, Inc. Complexity, 2009
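Computing λ2 from a network's Laplacian is straightforward; a small numpy sketch (illustrative only):

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Return lambda_2, the first nontrivial eigenvalue of the graph
    Laplacian L = D - A, used above as a synchronizability measure."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.sort(np.linalg.eigvalsh(L))
    return eigvals[1]

# Path graph on 3 nodes: Laplacian eigenvalues are 0, 1, 3, so lambda_2 = 1.
A_path = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])
lam2 = algebraic_connectivity(A_path)
```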

6.
Variable selection is an important aspect of high-dimensional statistical modeling, particularly in regression and classification. In the regularization framework, various penalty functions are used to perform variable selection by putting relatively large penalties on small coefficients. The L1 penalty is a popular choice because of its convexity, but it produces biased estimates for the large coefficients. The L0 penalty is attractive for variable selection because it directly penalizes the number of nonzero coefficients. However, the optimization involved is discontinuous and nonconvex, and therefore it is very challenging to implement. Moreover, its solution may not be stable. In this article, we propose a new penalty that combines the L0 and L1 penalties. We implement this new penalty by developing a global optimization algorithm using mixed integer programming (MIP). We compare this combined penalty with several other penalties via simulated examples as well as real applications. The results show that the new penalty outperforms both the L0 and L1 penalties in terms of variable selection while maintaining good prediction accuracy.

7.
Suppose the stationary r-dimensional multivariate time series {yt} is generated by an infinite autoregression. For lead times h≥1, the linear prediction of yt+h based on yt, yt−1,… is considered using an autoregressive model of finite order k fitted to a realization of length T. Assuming that k → ∞ (at some rate) as T → ∞, the consistency and asymptotic normality of the estimated autoregressive coefficients are derived, and an asymptotic approximation to the mean square prediction error based on this autoregressive model fitting approach is obtained. The asymptotic effect of estimating autoregressive parameters is found to inflate the minimum mean square prediction error by a factor of (1 + kr/T).
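A least-squares AR(k) fit and the (1 + kr/T) inflation factor can be sketched as follows for the univariate case r = 1 (illustrative names and simulated data, not the article's setup):

```python
import numpy as np

def fit_ar_ols(y, k):
    """Least-squares fit of a univariate AR(k) model (r = 1 in the
    article's notation): regress y_t on (y_{t-1}, ..., y_{t-k})."""
    T = len(y)
    X = np.column_stack([y[k - j - 1:T - j - 1] for j in range(k)])
    coef, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return coef

def mspe_inflation(k, r, T):
    """Asymptotic inflation of the minimum mean square prediction error
    caused by estimating the k * r autoregressive parameters."""
    return 1.0 + k * r / T

rng = np.random.default_rng(0)
T = 5000
y = np.zeros(T)
for t in range(1, T):                 # simulate AR(1) with phi = 0.6
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
phi_hat = fit_ar_ols(y, k=1)[0]
```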

8.
In this paper we offer a new definition of monogenicity for functions defined on ℝn+1 with values in the Clifford algebra ℝn, following an idea inspired by the recent papers [6], [7]. This new class of monogenic functions contains the polynomials (and, more generally, power series) with coefficients in the Clifford algebra ℝn. We prove a Cauchy integral formula as well as some of its consequences. Finally, we deal with the zeroes of some polynomials and power series.

9.
Summary. Neumann-Neumann algorithms have been well developed for standard finite element discretizations of elliptic problems with discontinuous coefficients. In this paper, an algorithm of this kind is designed and analyzed for a mortar finite element discretization of problems in three dimensions. It is established that its rate of convergence is independent of the discretization parameters and of the jumps of coefficients between subregions. The algorithm is well suited for parallel computations. Mathematics Subject Classification (1991): 65N55, 65N10, 65N30, 65N22. The work was supported in part by the U.S. Department of Energy under contract DE-FG02-92ER25127 and in part by the Polish Science Foundation under grant 2P03A00524. Acknowledgment: The author would like to thank Olof Widlund for many fruitful discussions and valuable remarks and suggestions on how to improve the presentation of our results.

10.
For many numerical problems involving smooth multivariate functions on d-cubes, the so-called Smolyak algorithm (or Boolean method, sparse grid method, etc.) has proved to be very useful. The final form of the algorithm (see equation (12) below) requires functional evaluation as well as the computation of coefficients. The latter can be done in different ways that may have considerable influence on the total cost of the algorithm. In this paper, we try to diminish this influence as far as possible. For example, we present an algorithm for the integration problem that reduces the time for the calculation and exposition of the coefficients in such a way that for increasing dimension, this time is small compared to dn, where n is the number of involved function values.

11.
The parameters in the governing system of partial differential equations of multiple-network poroelasticity models typically vary over several orders of magnitude, making its stable discretization and efficient solution a challenging task. In this paper, we prove the uniform Ladyzhenskaya–Babuška–Brezzi (LBB) condition and design uniformly stable discretizations and parameter-robust preconditioners for flux-based formulations of multiporosity/multipermeability systems. Novel parameter-matrix-dependent norms that provide the key for establishing uniform LBB stability of the continuous problem are introduced. As a result, the stability estimates presented here are uniform not only with respect to the Lamé parameter λ but also to all the other model parameters, such as the permeability coefficients Ki; storage coefficients cpi; network transfer coefficients βij, i,j = 1,…,n; the scale of the networks n; and the time step size τ. Moreover, strongly mass-conservative discretizations that meet the required conditions for parameter-robust LBB stability are suggested and corresponding optimal error estimates proved. The transfer of the canonical (norm-equivalent) operator preconditioners from the continuous to the discrete level lays the foundation for optimal and fully robust iterative solution methods. The theoretical results are confirmed in numerical experiments that are motivated by practical applications.

12.
In this work, we propose a hybrid radial basis functions (RBFs) collocation technique for the numerical solution of fractional advection–diffusion models. In the formulation of hybrid RBFs (HRBFs), there exist a shape parameter (c*) and a weight parameter (ϵ) that control numerical accuracy and stability. For these parameters, an adaptive algorithm is developed and validated. The proposed HRBFs method is tested for numerical solutions of some fractional Black–Scholes and diffusion models. Numerical simulations performed for several benchmark problems verified the accuracy and efficiency of the proposed method. The quantitative analysis is made in terms of the L∞, L2, Lrms, and Lrel error norms, as well as the number of nodes N over the space domain and the time step δt. Numerical convergence in space and time is also studied for the proposed method. The unconditional stability of the proposed HRBFs scheme is established using the von Neumann methodology. It is observed that the HRBFs method greatly circumvents the ill-conditioning problem, a major issue in the Kansa method.

13.
Most banks use the top-down approach to aggregate their risk types when computing total economic capital. Following this approach, marginal distributions for each risk type are first independently estimated and then merged into a joint model using a copula function. Due to lack of reliable data, banks tend to manually select the copula as well as its parameters. In this paper we assess the model risk related to the choice of a specific copula function. The aim is to compute upper and lower bounds on the total economic capital for the aggregate loss distribution of DNB, the largest Norwegian bank, and the key tool for computing these bounds is the Rearrangement Algorithm introduced in Embrechts et al. (J. Bank. Financ. 37(8):2750–2764, 2013). The application of this algorithm to a real situation poses a series of numerical challenges and raises a number of warnings which we illustrate and discuss.
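The core step of the Rearrangement Algorithm is to oppositely order each column of the loss matrix with respect to the sum of the remaining columns, which flattens the row sums; a simplified sketch (omitting the quantile discretization and convergence test of the full algorithm of Embrechts et al.):

```python
import numpy as np

def rearrange(X, iters=50):
    """Simplified rearrangement step: repeatedly sort each column in the
    opposite order of the sum of the other columns.  The bounds on the
    aggregate loss are then read off the rearranged row sums."""
    X = np.array(X, dtype=float)
    n, d = X.shape
    for _ in range(iters):
        for j in range(d):
            rest = X.sum(axis=1) - X[:, j]
            # row with the largest "rest" receives the smallest value of column j
            order = np.argsort(np.argsort(-rest))
            X[:, j] = np.sort(X[:, j])[order]
    return X

# Two comonotone columns: rearrangement makes the row sums constant.
X0 = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
row_sums = rearrange(X0).sum(axis=1)
```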

14.
Peter Benner, Norman Lang, Jens Saak, PAMM 2013, 13(1):481–482
We present a parametric model order reduction (PMOR) method applied to a parameter-dependent generalized state-space system, which describes the evolution of the temperature field on a vertical stand of a machine tool assembly group induced by a moving tool slide. The position of this slide parametrizes the input matrix of the associated system. The main idea is to compute projection matrices Vj, Wj in certain parameter sample points μj and concatenate them to the projection bases V, W, respectively, as described in [1]. Instead of using the iterative rational Krylov algorithm (IRKA) to produce the projection matrices in each parameter sample point as suggested there, here we use the well-known method of balanced truncation (BT). The numerical results show that for the same reduced order r obtained from V, W ∈ ℝn×r, BT produces a parametric reduced order model (ROM) of similar accuracy as IRKA in less time. (© 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
A highly flexible nonparametric regression model for predicting a response y given covariates {xk}, k = 1,…,d, is the projection pursuit regression (PPR) model ŷ = h(x) = β0 + Σj βj fj(αjTx), where the fj are general smooth functions with mean 0 and norm 1, and Σk=1,…,d αkj2 = 1. The standard PPR algorithm of Friedman and Stuetzle (1981) estimates the smooth functions fj using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with Mmax linear combinations, then prunes back to a simpler model of size M ≤ Mmax, where M and Mmax are specified by the user. This article discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines. The direction coefficients αj, the amount of smoothing in each direction, and the number of terms M and Mmax are determined to optimize a single generalized cross-validation measure.

16.
In view of the "round jet initial condition anomaly", discussed in the literature, we investigate the effect of inflow conditions resulting from the use of different nozzle geometries to form the jet. RANS simulations in the framework of OpenFOAM using the k − ε turbulence model are performed. As the standard model coefficient Cε1 = 1.44 is known to overpredict spreading rates for round jets, a value of Cε1 = 1.6 was recommended for this case already in the 1970s. While this works well for jets issuing from long pipes, it does not give satisfactory results for other nozzle geometries. To overcome this deficiency while keeping the k − ε model, we suggest modified coefficients Cε1 based on profiles of mean flow and turbulence at the nozzle exit. We determine optimal values of Cε1 for three different nozzle geometries, and test them at various Reynolds numbers. Good agreement with experimental data is obtained. (© 2014 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
Wavelet-based denoising techniques are well suited to estimate spatially inhomogeneous signals. Waveshrink (Donoho and Johnstone) assumes independent Gaussian errors and equispaced sampling of the signal. Various articles have relaxed some of these assumptions, but a systematic generalization to distributions such as Poisson, binomial, or Bernoulli is missing. We consider a unifying l1-penalized likelihood approach to regularize the maximum likelihood estimation by adding an l1 penalty of the wavelet coefficients. Our approach works for all types of wavelets and for a range of noise distributions. We develop both an algorithm to solve the estimation problem and rules to select the smoothing parameter automatically. In particular, using results from Poisson processes, we give an explicit formula for the universal smoothing parameter to denoise Poisson measurements. Simulations show that the procedure is an improvement over other methods. An astronomy example is given.
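For Gaussian noise, the l1-penalized likelihood estimate in an orthonormal wavelet basis has a closed form: soft thresholding of the detail coefficients. A single-level Haar sketch of that special case (illustrative only; the article's method covers general wavelets and non-Gaussian likelihoods):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def inv_haar_step(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def denoise(x, lam):
    """l1-penalized Gaussian likelihood in the Haar basis: soft-threshold
    the detail coefficients, keep the smooth ones, invert."""
    s, d = haar_step(x)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
    return inv_haar_step(s, d)

x = np.array([4.0, 4.2, 1.0, 0.8])
x_hat = denoise(x, lam=0.05)
```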

18.
In this paper, we deal with l0-norm data fitting and total variation regularization for image compression and denoising. The l0-norm data fitting is used for measuring the number of non-zero wavelet coefficients to be employed to represent an image. The regularization term given by the total variation is to recover image edges. Due to the intensive numerical computation of using the l0-norm, it is usually approximated by other functions such as the l1-norm in many image processing applications. The main goal of this paper is to develop a fast and effective algorithm to solve the l0-norm data fitting and total variation minimization problem. Our idea is to apply an alternating minimization technique to solve this problem, and employ a graph-cuts algorithm to solve the subproblem related to the total variation minimization. Numerical examples in image compression and denoising are given to demonstrate the effectiveness of the proposed algorithm.
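In isolation, the l0 data-fitting term has a closed-form coordinate-wise minimizer, namely hard thresholding; a minimal sketch of that building block (without the total variation term, whose subproblem the paper solves by graph cuts):

```python
import numpy as np

def hard_threshold(y, lam):
    """Coordinate-wise minimizer of (t - y)**2 + lam * 1{t != 0}:
    keep y where y**2 > lam, set it to zero otherwise."""
    return np.where(y**2 > lam, y, 0.0)

# Wavelet coefficients below the threshold are discarded outright.
coeffs = np.array([3.0, -0.4, 0.9, -2.5])
kept = hard_threshold(coeffs, lam=1.0)
```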

19.
This article considers small sample asymptotics for the distribution of the total loss Sn of a credit risk portfolio. For portfolios with a few exceptionally high potential loss values, the distribution of Sn turns out to be bimodal. Direct approximation by Esscher tilting does not capture this feature. An improved recursive algorithm is proposed. The new approach leads to a more accurate small sample approximation that models bimodality in the presence of outliers. The results are illustrated by a simulated example as well as an example of an observed credit risk portfolio.

20.
Let an entire function F(z) of finite genus have infinitely many zeros which are all positive, and take real values for real z. Then it is shown how to give two-sided bounds for all the zeros of F in terms of the coefficients of the power series of F, in fact in terms of the coefficients obtained by Graeffe's algorithm applied to F. A simple numerical illustration is given for a Bessel function.
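Graeffe's root-squaring step and the resulting coefficient-ratio estimates of the zeros can be sketched as follows (a generic polynomial version for simple, well-separated positive zeros; illustrative only, not the paper's treatment of entire functions):

```python
import numpy as np

def graeffe_step(a):
    """One Graeffe root-squaring step: given coefficients a (ascending,
    a[k] multiplies x**k) of p, return the coefficients of a polynomial
    whose zeros are the squares of the zeros of p."""
    n = len(a) - 1
    alt = a * (-1.0) ** np.arange(n + 1)   # coefficients of p(-x)
    prod = np.convolve(a, alt)             # p(x) * p(-x), even in x
    return prod[::2] * (-1.0) ** n         # substitute y = x**2

def root_moduli(a, steps=5):
    """Estimate the moduli of distinct positive zeros from coefficient
    ratios after repeated squaring: after m steps, consecutive ratios
    approximate the zeros raised to the power 2**m."""
    b = np.asarray(a, dtype=float)
    for _ in range(steps):
        b = graeffe_step(b)
    power = 2 ** steps
    ratios = np.abs(b[:-1] / b[1:])
    return np.sort(ratios ** (1.0 / power))

# p(x) = (x - 1)(x - 2)(x - 3) = -6 + 11x - 6x**2 + x**3
a = np.array([-6.0, 11.0, -6.0, 1.0])
zeros = root_moduli(a)
```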


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). ICP license 京ICP备09084417号.