Similar Literature (20 results found)
1.
We study the complexity (minimal cost) of computing an ε-approximation to a fixed point of a contractive function with contraction factor q < 1. This is done for the relative error criterion in Part I and for the absolute error criterion in Part II, which is in progress. The complexity depends strongly on the dimension of the domain of the functions. For the one-dimensional case we develop an optimal fixed point envelope (FPE) algorithm. The cost of the FPE algorithm under the relative error criterion is roughly proportional to c, the cost of one function evaluation, times a factor that grows far more slowly as q → 1 than the cost of simple iteration. Thus, for fixed ε and q close to 1, the cost of the FPE algorithm is much smaller than the cost of the simple iteration algorithm, since the latter is roughly c log(1/ε)/log(1/q). For contractive functions of d variables, with d ≥ log(1/ε)/log(1/q), we show that it is impossible to essentially improve on the efficiency of simple iteration.
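For orientation, here is a minimal sketch of the simple-iteration baseline the abstract compares against, with the iteration count chosen from the standard a priori bound |x_k − x*| ≤ q^k |x_1 − x_0|/(1 − q). It uses the absolute error criterion for simplicity (Part I of the paper treats the relative criterion), and the test function is purely illustrative:

```python
import math

def fixed_point_simple_iteration(f, x0, q, eps):
    """Simple iteration for a contraction f with factor q < 1.

    The a priori bound |x_k - x*| <= q^k / (1 - q) * |x_1 - x_0| yields
    the classic evaluation count of order log(1/eps) / log(1/q)."""
    x = x0
    x_next = f(x)
    gap = abs(x_next - x)
    if gap == 0:
        return x
    # smallest k with q^k / (1 - q) * gap <= eps
    k = math.ceil(math.log(gap / (eps * (1 - q))) / math.log(1 / q))
    for _ in range(max(k, 0)):
        x_next = f(x_next)
    return x_next

# f(x) = cos(x) is contractive near its fixed point (|sin| < 0.85 there)
print(fixed_point_simple_iteration(math.cos, 1.0, 0.85, 1e-8))
```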

2.
Least-squares regularized learning algorithms for regression were well-studied in the literature when the sampling process is independent and the regularization term is the square of the norm in a reproducing kernel Hilbert space (RKHS). Some analysis has also been done for dependent sampling processes or regularizers being the qth power of the function norm (q-penalty) with 0 < q ≤ 2. The purpose of this article is to conduct error analysis of the least-squares regularized regression algorithm when the sampling sequence is weakly dependent, satisfying an exponentially decaying α-mixing condition, and when the regularizer takes the q-penalty with 0 < q ≤ 2. We use a covering number argument and derive learning rates in terms of the α-mixing decay, an approximation condition and the capacity of balls of the RKHS.
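The familiar q = 2 case of this regularizer (the squared RKHS norm) is ordinary kernel ridge regression, whose solution is given in closed form by the representer theorem. A hedged numpy sketch, with a Gaussian kernel as an assumed choice of kernel (not necessarily the paper's) and illustrative data:

```python
import numpy as np

def kernel_ridge_fit(X, y, lam, gamma=1.0):
    """q = 2 regularized least squares in an RKHS:
    minimize (1/n) sum (f(x_i) - y_i)^2 + lam * ||f||_K^2,
    whose representer-theorem solution is alpha = (K + lam*n*I)^{-1} y."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                      # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha, X, gamma

def kernel_ridge_predict(model, Xnew):
    alpha, X, gamma = model
    sq = np.sum((Xnew[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
model = kernel_ridge_fit(X, y, lam=1e-3)
print(kernel_ridge_predict(model, np.array([[0.5]])))   # approx sin(1.5)
```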

3.
An error-correcting code is said to be locally decodable if a randomized algorithm can recover any single bit of a message by reading only a small number of symbols of a possibly corrupted encoding of the message. Katz and Trevisan [12] showed that any such code C : {0,1}^n → Σ^m with a decoding algorithm that makes at most q probes must satisfy m = Ω((n/log|Σ|)^{q/(q−1)}). They assumed that the decoding algorithm is non-adaptive, and left open the question of proving similar bounds for adaptive decoders. We show m = Ω((n/log|Σ|)^{q/(q−1)}) without assuming that the decoder is non-adaptive. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005
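The textbook example of the regime these lower bounds address is the Hadamard code, a 2-probe locally decodable code (it is not the construction studied in the paper). To decode bit x_i, probe two random positions r and r ⊕ e_i and XOR them; linearity over GF(2) gives x_i, and this still succeeds with high probability if a small fraction of positions is corrupted:

```python
import random

def hadamard_encode(x_bits):
    """Encode an n-bit message as all 2^n inner products <x, a> mod 2."""
    n = len(x_bits)
    return [sum(x_bits[j] & ((a >> j) & 1) for j in range(n)) % 2
            for a in range(2 ** n)]

def local_decode(codeword, i, n):
    """Two probes: <x, r> xor <x, r xor e_i> = <x, e_i> = x_i."""
    r = random.randrange(2 ** n)
    return codeword[r] ^ codeword[r ^ (1 << i)]

x = [1, 0, 1, 1]
cw = hadamard_encode(x)
print([local_decode(cw, i, len(x)) for i in range(len(x))])  # [1, 0, 1, 1]
```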

4.
We study approximation of univariate functions defined over the reals. We assume that the rth derivative of a function is bounded in a weighted L_p norm with a weight ψ. Approximation algorithms use the values of a function and its derivatives up to order r−1. The worst case error of an algorithm is defined in a weighted L_q norm with a weight ρ. We study the worst case (information) complexity of the weighted approximation problem, which is equal to the minimal number of function and derivative evaluations needed to obtain error at most ε. We provide necessary and sufficient conditions, in terms of the weights ψ and ρ as well as the parameters r, p, and q, for the weighted approximation problem to have finite complexity. We also provide conditions which guarantee that the complexity of weighted approximation is of the same order as the complexity of the classical approximation problem over a finite interval. Such necessary and sufficient conditions are also provided for a weighted integration problem, since its complexity is equivalent to the complexity of the weighted approximation problem for q = 1.
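To make the setting concrete, here is a hedged reconstruction of the standard weighted-approximation formulation the abstract describes; the exact normalization of the function class may differ from the paper's:

```latex
% function class: r-th derivative bounded in a psi-weighted L_p norm
\[
  F_r = \bigl\{\, f:\mathbb{R}\to\mathbb{R} \;:\; \|f^{(r)}\psi\|_{L_p(\mathbb{R})} \le 1 \,\bigr\},
\]
% worst case error of an algorithm A in the rho-weighted L_q norm,
% and the information complexity at accuracy eps
\[
  e^{\mathrm{wor}}(A) = \sup_{f\in F_r} \|(f - A(f))\,\rho\|_{L_q(\mathbb{R})},
  \qquad
  \mathrm{comp}(\varepsilon) = \min\bigl\{\, n : \exists\, A \text{ using } n
  \text{ evaluations with } e^{\mathrm{wor}}(A)\le\varepsilon \,\bigr\}.
\]
```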

5.
In this paper, we introduce a new combinatorial invariant called the q-binomial moment for q-ary constant weight codes. We derive a lower bound on the q-binomial moments and introduce a new combinatorial structure called generalized (s, t)-designs, which can achieve the lower bounds. Moreover, we employ the q-binomial moments to study the undetected error probability of q-ary constant weight codes; a lower bound on the undetected error probability for q-ary constant weight codes is obtained. This lower bound extends and unifies the related results of Abdel-Ghaffar for q-ary codes and of Xia–Fu–Ling for binary constant weight codes. Finally, some q-ary constant weight codes which achieve the lower bounds are found.
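The q-binomial (Gaussian) coefficients underlying such q-analogues are easy to compute from the product formula; a small sketch for orientation (the paper's q-binomial moment is a derived invariant and is not reproduced here):

```python
def q_int(n, q):
    """q-analogue [n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q ** i for i in range(n))

def q_binomial(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q = [n]!/([k]![n-k]!)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(1, k + 1):
        num *= q_int(n - k + i, q)
        den *= q_int(i, q)
    return num // den            # always exact: the result is an integer

# [4 choose 2]_2 = [4][3]/([2][1]) = 15*7/3 = 35
print(q_binomial(4, 2, 2))       # 35
```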

6.
In this paper, we generalize the algorithm described by Rump and Graillat to compute verified and narrow error bounds such that a slightly perturbed matrix is guaranteed to have an eigenvalue of geometric multiplicity q within the computed error bounds. The corresponding invariant subspace can be obtained directly by our algorithm. Our verification method is based on a border matrix technique. We demonstrate the performance of our algorithm for matrices of dimension up to several hundred, with non-defective and defective eigenvalues.
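A far simpler a posteriori bound conveys the flavor of a verified eigenvalue enclosure: for a symmetric matrix, the residual of an approximate eigenpair bounds the distance from the Rayleigh quotient to a true eigenvalue. A sketch (the paper's border-matrix method is much more general and handles defective eigenvalues of multiplicity q):

```python
import numpy as np

def residual_eigen_bound(A, v):
    """For symmetric A, mu = v^T A v / v^T v and r = A v - mu v satisfy:
    some eigenvalue of A lies within ||r|| / ||v|| of mu
    (standard a posteriori residual bound)."""
    v = np.asarray(v, dtype=float)
    mu = v @ (A @ v) / (v @ v)
    r = A @ v - mu * v
    return mu, np.linalg.norm(r) / np.linalg.norm(v)

A = np.array([[2.0, 1.0], [1.0, 3.0]])      # eigenvalues (5 +- sqrt(5))/2
v = np.array([0.5, 1.0])                     # rough eigenvector guess
mu, bound = residual_eigen_bound(A, v)
print(f"an eigenvalue lies in [{mu - bound:.4f}, {mu + bound:.4f}]")
```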

7.
For function classes with dominant mixed derivative and bounded mixed difference in the metric of L_q (1 < q ≤ 2), quadrature formulas are constructed so that the following properties are achieved simultaneously: the grid is simple, the algorithm is efficient and close to the optimal algorithm for constructing the grid, and the order of the error on the power scale cannot be further improved. The case q = 2 was studied earlier. Translated from Matematicheskie Zametki, Vol. 61, No. 2, pp. 297–301, February, 1997. Translated by N. K. Kulman.

8.
In this paper, we first optimize the structure of the Wei–Xiao–Chen algorithm for the linear complexity of sequences over GF(q) with period N = 2p^n, where p and q are odd primes and q is a primitive root modulo p^2. Second, a union cost is proposed, from which an efficient algorithm for computing the k-error linear complexity of a sequence with period 2p^n over GF(q) is derived, again for p and q odd primes with q a primitive root modulo p^2. Third, we prove the validity of the proposed algorithm: there exists an error sequence e^N, with Hamming weight not greater than k, such that the linear complexity of (s + e)^N reaches the k-error linear complexity c. We also present a numerical example to illustrate the algorithm. Finally, we present the minimum value k for which the k-error linear complexity is strictly less than the linear complexity.
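The quantity being minimized here is the classical linear complexity, computable by the Berlekamp–Massey algorithm. A GF(2) sketch for orientation (the paper's algorithms work over GF(q) and exploit the 2p^n period structure; the k-error version takes the minimum of this value over all sequences within Hamming distance k):

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): length L of the shortest LFSR that
    generates the bit sequence s, i.e. its linear complexity."""
    n = len(s)
    C = [0] * (n + 1); B = [0] * (n + 1)   # connection polynomials
    C[0] = B[0] = 1
    L, m = 0, 1                            # LFSR length, gap since last change
    for N in range(n):
        # discrepancy between s[N] and the current LFSR's prediction
        d = s[N]
        for i in range(1, L + 1):
            d ^= C[i] & s[N - i]
        if d == 0:
            m += 1
            continue
        T = C[:]
        for j in range(n + 1 - m):         # C(x) += x^m * B(x)
            C[j + m] ^= B[j]
        if 2 * L <= N:
            L, B, m = N + 1 - L, T, 1
        else:
            m += 1
    return L

print(linear_complexity_gf2([0, 1, 0, 1]))        # 2 (s_n = s_{n-2})
print(linear_complexity_gf2([0, 0, 1, 1, 0, 1, 1, 1]))
```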

9.
It is shown that any optimal algorithm for computing the product of two degree-n polynomials over the q-element field, where n ≥ q, is based on the Chinese Remainder Theorem, with linear and quadratic polynomials serving as the moduli.
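With linear moduli x − a alone, the CRT approach is exactly evaluation–interpolation. A sketch over a prime field; note the requirement deg(fg) < q, which is precisely what fails in the abstract's regime n ≥ q and forces quadratic moduli into the optimal algorithms:

```python
def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (coefficients low-to-high) at x modulo p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def multiply_via_linear_moduli(f, g, p):
    """Multiply f, g in GF(p)[x] by CRT with linear moduli (x - a):
    evaluate at deg(fg)+1 points, multiply pointwise, interpolate."""
    d = len(f) + len(g) - 1                 # coefficients of the product
    assert d <= p, "not enough linear moduli over GF(p)"
    xs = list(range(d))
    ys = [poly_eval(f, a, p) * poly_eval(g, a, p) % p for a in xs]
    out = [0] * d
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, denom = [1], 1                 # Lagrange basis for node xi
        for j, xj in enumerate(xs):
            if j == i:
                continue
            new = [0] * (len(num) + 1)      # num *= (x - xj)
            for k, c in enumerate(num):
                new[k + 1] = (new[k + 1] + c) % p
                new[k] = (new[k] - xj * c) % p
            num = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p    # Fermat inverse, p prime
        for k, c in enumerate(num):
            out[k] = (out[k] + scale * c) % p
    return out

# (1 + x)(1 + x + x^2) = 1 + 2x + 2x^2 + x^3 over GF(7)
print(multiply_via_linear_moduli([1, 1], [1, 1, 1], 7))   # [1, 2, 2, 1]
```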

10.
We present two algorithms to compute m-fold hypergeometric solutions of linear recurrence equations, for the classical shift case and for the q-case, respectively. The first is an m-fold generalization and q-generalization of the algorithm by van Hoeij (Appl Algebra Eng Commun Comput 17:83–115, 2005; J Pure Appl Algebra 139:109–131, 1998) for recurrence equations. The second is a combination of an improved version of the algorithms by Petkovšek (Discrete Math 180:3–22, 1998; J Symb Comput 14(2–3):243–264, 1992) for recurrence and q-recurrence equations with the m-fold algorithm of Petkovšek and Salvy (ISSAC 1993 Proceedings, pp 27–33, 1993) for recurrence equations. We refer to the classical algorithms as van Hoeij and Petkovšek, respectively. To formulate our ideas, we first introduce an adapted version of an m-fold Newton polygon and its characteristic polynomials for the classical case and the q-case, and prove the relevant properties in this setting. Using the data from the Newton polygon, we are able to present efficient m-fold versions of the van Hoeij and Petkovšek algorithms for the classical shift case and for the q-case, respectively. Furthermore, we show how one can use the Newton polygon and our characteristic polynomials to decide for which m ∈ ℕ an m-fold hypergeometric solution can exist at all. Again using the information obtained from the Newton polygon, the presentation of the q-Petkovšek algorithm can be simplified and streamlined. Finally, we give timings for the 'classical' q-Petkovšek, our q-van Hoeij, and our modified q-Petkovšek algorithms on some classes of problems, and we present a Maple implementation of the m-fold algorithms for the q-case.

11.
Under study is the maximum 2-peripatetic salesmen problem, which consists in finding two edge-disjoint Hamiltonian cycles of maximal total weight in a complete undirected graph. We present a cubic-time approximation algorithm for this problem with performance guarantee 7/9, which is currently the best known bound. We also present a modification of this algorithm for the case when the edge weights lie in a given range [1, q], with performance guarantee (7q + 3)/(9q + 1); this is likewise the best bound known to date, and the modification also has cubic time complexity.

12.
In the present paper, we introduce q-parametric Szász–Mirakjan operators and study the convergence properties of these operators S_{n,q}(f). We obtain inequalities for the weighted approximation error of q-Szász–Mirakjan operators; such inequalities are valid for functions of polynomial growth and are expressed in terms of weighted moduli of continuity. We also discuss a Voronovskaja-type formula for q-Szász–Mirakjan operators.
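For orientation, the classical (q → 1) Szász–Mirakjan operator can be evaluated directly from its series. A truncated-sum sketch; the q-parametric operators of the paper replace factorials, powers, and the exponential by their q-analogues:

```python
import math

def szasz_mirakjan(f, n, x):
    """Classical Szasz-Mirakjan operator
        S_n(f)(x) = exp(-n x) * sum_{k>=0} (n x)^k / k! * f(k / n),
    truncated well past the Poisson mean n*x.  (exp(-n x) underflows
    in double precision for n*x beyond roughly 700.)"""
    terms = int(n * x + 12 * math.sqrt(n * x + 1) + 20)
    s, term = 0.0, math.exp(-n * x)     # term = e^{-nx} (nx)^k / k!
    for k in range(terms):
        s += term * f(k / n)
        term *= n * x / (k + 1)
    return s

# known moment: S_n(t^2)(x) = x^2 + x/n, so at x = 1 this tends to 1
for n in (10, 100, 500):
    print(n, szasz_mirakjan(lambda t: t * t, n, 1.0))
```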

13.
An algorithm for computing polynomial zeros, based on Aberth's method, is presented. The starting approximations are chosen by means of a suitable application of Rouché's theorem. More precisely, an integer q ≥ 1 and a set of annuli A_i, i = 1, ..., q, in the complex plane are determined, together with the number k_i of zeros of the polynomial contained in each annulus A_i. As starting approximations we choose k_i complex numbers lying on a suitable circle contained in the annulus A_i, for i = 1, ..., q. The computation of Newton's correction is performed in such a way that overflow situations are removed. A suitable stop condition, based on a rigorous backward rounding error analysis, guarantees that the computed approximations are the exact zeros of a nearby polynomial. This implies the backward stability of our algorithm. We provide a Fortran 77 implementation of the algorithm which is robust against overflow and allows us to deal with polynomials of any degree, not necessarily monic, whose zeros and coefficients are representable as floating point numbers. In all the tests performed with more than 1000 polynomials having degrees from 10 up to 25,600 and randomly generated coefficients, the Fortran 77 implementation of our algorithm computed approximations to all the zeros within the relative precision allowed by the classical conditioning theorems, with 11.1 iterations on average. In the worst case, the number of iterations needed was at most 17. Comparisons with available public domain software and with the algorithm PA16AD of Harwell are performed and show the effectiveness of our approach. A multiprecision implementation in MATHEMATICA is presented together with the results of the numerical tests performed. Work performed under the support of the ESPRIT BRA project 6846 POSSO (POlynomial System SOlving).
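The core Aberth iteration itself is compact. A sketch with all starting points on a single circle; the paper's contributions — the Rouché-based distribution of starting points over annuli, the overflow-safe Newton correction, and the rigorous stop condition — are not reproduced here:

```python
import cmath

def aberth(coeffs, tol=1e-12, max_iter=100):
    """Aberth-Ehrlich simultaneous iteration for all zeros of
    p(x) = coeffs[0] + coeffs[1] x + ... + coeffs[n] x^n."""
    n = len(coeffs) - 1
    # crude inclusion radius for all zeros (Cauchy bound)
    r = 1 + max(abs(c / coeffs[-1]) for c in coeffs[:-1])
    z = [0.8 * r * cmath.exp(2j * cmath.pi * (k + 0.5) / n) for k in range(n)]
    dcoeffs = [k * c for k, c in enumerate(coeffs)][1:]     # p'(x)

    def horner(cs, x):
        acc = 0
        for c in reversed(cs):
            acc = acc * x + c
        return acc

    for _ in range(max_iter):
        done = True
        for k in range(n):
            w = horner(coeffs, z[k]) / horner(dcoeffs, z[k])  # Newton step
            s = sum(1 / (z[k] - z[j]) for j in range(n) if j != k)
            delta = w / (1 - w * s)                           # Aberth step
            z[k] -= delta
            if abs(delta) > tol:
                done = False
        if done:
            break
    return z

# zeros of x^3 - 6x^2 + 11x - 6 are 1, 2, 3
print(sorted(aberth([-6, 11, -6, 1]), key=lambda v: v.real))
```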

14.
Estimating financial risk is a critical issue for banks and insurance companies. Recently, quantile estimation based on extreme value theory (EVT) has found a successful domain of application in such a context, outperforming other methods. Given a parametric model provided by EVT, a natural approach is maximum likelihood estimation. Although the resulting estimator is asymptotically efficient, often the number of observations available to estimate the parameters of the EVT models is too small to make the large-sample properties trustworthy. In this paper, we study a new estimator of the parameters, the maximum Lq-likelihood estimator (MLqE), introduced by Ferrari and Yang (Estimation of tail probability via the maximum Lq-likelihood method, Technical Report 659, School of Statistics, University of Minnesota, 2007). The MLqE is characterized by a distortion parameter q and extends the traditional log-likelihood maximization procedure. When q → 1, the new estimator approaches the traditional maximum likelihood estimator (MLE), recovering its desirable asymptotic properties; when q ≠ 1 and the sample size is moderate or small, the MLqE successfully trades bias for variance, resulting in an overall gain in accuracy (mean squared error). We show that the MLqE outperforms the standard MLE when estimating tail probabilities and quantiles of the generalized extreme value (GEV) and generalized Pareto (GP) distributions. First, we assess the relative efficiency of the MLqE and the MLE for various sample sizes using Monte Carlo simulations. Second, we analyze the performance of the MLqE for extreme quantile estimation using real-world financial data.
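A hedged sketch of the Lq-likelihood idea: replace log u by the deformed logarithm L_q(u) = (u^(1−q) − 1)/(1 − q), which tends to log u as q → 1, and maximize the deformed log-likelihood numerically. For brevity the model below is exponential (a GP distribution with zero shape) rather than the full GEV/GP families treated in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lq_log(u, q):
    """Deformed logarithm L_q(u) = (u^(1-q) - 1)/(1 - q); -> log(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(u)
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def mlqe_exponential_scale(x, q):
    """Maximum Lq-likelihood estimate of the scale of the exponential model
    f(x; s) = (1/s) exp(-x/s), obtained by maximizing sum_i L_q(f(x_i; s))."""
    def neg_lq_lik(s):
        dens = np.exp(-x / s) / s
        return -np.sum(lq_log(dens, q))
    res = minimize_scalar(neg_lq_lik, bounds=(1e-6, 10 * x.mean()),
                          method='bounded')
    return res.x

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=30)   # small sample, as in the paper
print("MLE  (q = 1):  ", x.mean())        # MLE of the scale is the mean
print("MLqE (q = 0.9):", mlqe_exponential_scale(x, 0.9))
```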

15.
In this paper we develop an efficient Schur complement method for solving the 2D Stokes equation. As a basic algorithm, we apply a decomposition approach with respect to the trace of the pressure; the alternative stream function–vorticity reduction is also discussed. The original problem is reduced to solving an equivalent boundary (interface) equation with a symmetric and positive definite operator in the appropriate trace space. We apply a mixed finite element approximation to the interface operator by iso triangular elements and prove optimal error estimates in the presence of stabilizing bubble functions. The norm equivalences for the corresponding discrete operators are established. We then propose an asymptotically optimal compression technique for the related stiffness matrix (in the absence of bubble functions), providing a sparse factorized approximation to the Schur complement. In this case, the algorithm is shown to have an optimal complexity of the order O(N log^q N), with q = 2 or q = 3 depending on the geometry, where N is the number of degrees of freedom on the interface. In the presence of bubble functions, an explicit arithmetical operation count for our method is also derived. The Schur complement interface equation is resolved by PCG iterations with an optimal preconditioner. Received March 20, 1996 / Revised version received October 28, 1997
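The outer solver named at the end, preconditioned conjugate gradient, in a generic numpy sketch; the Jacobi preconditioner here is a simple stand-in, not the paper's optimal interface preconditioner:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for an SPD matrix A,
    with preconditioner applied as the matrix M_inv."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)              # SPD test system
b = rng.standard_normal(50)
M_inv = np.diag(1.0 / np.diag(A))          # Jacobi preconditioner
x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))           # residual near zero
```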

16.
We develop a factorization method for q-Racah polynomials. It is inspired by the approach to q-Hahn polynomials based on the q-Johnson scheme, but we use neither association scheme theory nor Gel'fand pairs, only manipulation of q-difference operators.

17.
We use elements in the quantum hyperalgebra to define a quantum version of the Désarménien matrix. We prove that our matrix is upper triangular with ones on the diagonal and that, as in the classical case, it gives a quantum straightening algorithm for quantum bideterminants. We use our matrix to give a new proof of the standard basis theorem for the q-Weyl module. We also show that the standard basis for the q-Weyl module and the basis dual to the standard basis for the q-Schur module are related by the quantum Désarménien matrix.

18.
19.
20.
M. Ajtai, Combinatorica, 1988, 8(3):235–247
Let L be the set consisting of the first q positive integers. We prove in this paper that there does not exist a data structure for representing an arbitrary subset A of L which uses poly(|A|) cells of memory (where each cell holds c log q bits of information) and in which the predecessor in A of an arbitrary x ≤ q can be determined by probing only a constant (independent of q) number of cells. Actually our proof gives more: the theorem remains valid if this number is less than log log q; that is, D. E. Willard's algorithm [2] for finding the predecessor in O(log log q) time is optimal up to a constant factor.
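The problem this lower bound addresses, in its simplest comparison-based form: a predecessor query answered by binary search in O(log |A|) probes. Willard's y-fast tries reach O(log log q) in the cell-probe model, which the theorem above shows is optimal up to a constant factor:

```python
import bisect

def predecessor(sorted_a, x):
    """Largest element of A that is <= x, or None if there is none."""
    i = bisect.bisect_right(sorted_a, x)
    return sorted_a[i - 1] if i > 0 else None

A = [3, 7, 19, 42, 77]
print(predecessor(A, 41))   # 19
print(predecessor(A, 2))    # None
```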
