Similar Literature
20 similar documents were retrieved (search time: 906 ms).
1.
Let Δ^n(G) denote the n-th power of the augmentation ideal Δ(G) of the integral group ring ZG. We describe the structure of the n-th augmentation quotient group Q_n(G) = Δ^n(G)/Δ^(n+1)(G) of the dihedral group G = D_{2^t r} (t ≥ 2, r odd), and obtain Q_n(D_{2^t r}) ≅ Z_2^(s(n)), where s(n) = 2n if 1 ≤ n ≤ t, and s(n) = 2t + 1 if n ≥ t + 1.

2.
Let Δ^n(G) denote the n-th power of the augmentation ideal Δ(G) of the integral group ring ZG. We describe the structure of the n-th augmentation quotient group Q_n(G) = Δ^n(G)/Δ^(n+1)(G) of the dihedral group G = D_{2^t r} (t ≥ 2, r odd), and obtain Q_n(D_{2^t r}) ≅ Z_2^(s(n)), where s(n) = 2n if 1 ≤ n ≤ t, and s(n) = 2t + 1 if n ≥ t + 1.

3.
赵红梅  唐国平 《数学进展》2008,37(2):163-170
Let ZG be the integral group ring of a finite group G, Δ^n(G) the n-th power of the augmentation ideal Δ(G), and Q_n(G) = Δ^n(G)/Δ^(n+1)(G) the augmentation quotient group of G. This paper considers the dihedral group D_{2^t k} (k odd) and the symmetric group S_m of degree m, and proves that Q_n(D_{2^t k}) is an elementary 2-group of rank at most 2t + 1 and that Q_n(S_m) ≅ Z_2.

4.
In this paper we establish sharp rates of both L^2 and L^∞ decay of global solutions to the initial value problem for the 2-dimensional incompressible Navier-Stokes equations with initial data U_0(x) ∈ L^1 ∩ L^2:
U_t + U·∇U − ΔU + ∇p = 0,  ∇·U = 0,  U(x,0) = U_0(x),  (1)
where U = U(x,t) = (U_1(x,t), U_2(x,t)) is a real vector valued function, Δ is the 2-dimensional Laplace operator, and ∇ is the gradient operator. We present a simple method for establishing the decay results.

5.
In this paper we introduce the function class B_δ(G//K) = {ψ ∈ L^1(G//K) : |ψ(t)| ≤ Δ^(-1)(t)(1 + t)^(1-δ), δ > 0}. For f ∈ L^p(G//K), 1 ≤ p ≤ ∞, and the maximal operator M_δ f(x) = sup_{ε>0, ψ∈B_δ(G//K)} |ψ_ε * f(x)|, we prove that these operators are of type (H^1_{∞,s}, L^1).

6.
Given a distribution of pebbles on the vertices of a connected graph G, a pebbling move on G consists of taking two pebbles off one vertex and placing one pebble on an adjacent vertex. The pebbling number f(G) is the smallest number m such that for every distribution of m pebbles and every vertex v, a pebble can be moved to v. A graph G is said to have the 2-pebbling property if for any distribution with more than 2f(G) − q pebbles, where q is the number of vertices with at least one pebble, it is possible, using pebbling moves, to get two pebbles to any vertex. Snevily conjectured that G(s,t) has the 2-pebbling property, where G(s,t) is a bipartite graph with partite sets of size s and t (s ≥ t). Similarly, the l-pebbling number f_l(G) is the smallest number m such that for every distribution of m pebbles and every vertex v, l pebbles can be moved to v. Herscovici et al. conjectured that f_l(G) ≤ 1.5n + 8l − 6 for graphs G with diameter 3, where n = |V(G)|. In this paper, we prove that if s ≥ 15 and G(s,t) has minimum degree at least (s+1)/2, then f(G(s,t)) = s + t, G(s,t) has the 2-pebbling property, and f_l(G(s,t)) ≤ s + t + 8(l − 1). In other words, we extend a result due to Czygrinow and Hurlbert, and show that the above conjectures of Snevily and of Herscovici et al. hold for G(s,t) with s ≥ 15 and minimum degree at least (s+1)/2.
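The following brute-force sketch is not from the paper; it only makes the definitions above concrete by computing the pebbling number of a tiny graph, the 4-cycle C_4, whose pebbling number is known to be 4. All names are illustrative.

```python
from itertools import combinations_with_replacement

def reachable(dist, target, adj):
    # True if some sequence of pebbling moves puts a pebble on `target`.
    stack, seen = [tuple(dist)], set()
    while stack:
        d = stack.pop()
        if d[target] >= 1:
            return True
        if d in seen:
            continue
        seen.add(d)
        for v, cnt in enumerate(d):
            if cnt >= 2:
                for u in adj[v]:
                    nd = list(d)
                    nd[v] -= 2          # take two pebbles off v
                    nd[u] += 1          # place one pebble on a neighbour
                    stack.append(tuple(nd))
    return False

def pebbling_number(adj):
    # Smallest m such that every distribution of m pebbles reaches every vertex.
    n, m = len(adj), 1
    while True:
        ok = True
        for placement in combinations_with_replacement(range(n), m):
            dist = [0] * n
            for v in placement:
                dist[v] += 1
            if not all(reachable(dist, t, adj) for t in range(n)):
                ok = False
                break
        if ok:
            return m
        m += 1

# 4-cycle C4; its pebbling number is known to be 4.
adj_c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(pebbling_number(adj_c4))   # -> 4
```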

7.
In this paper we introduce the function class B_δ(G//K) = {φ ∈ L^1(G//K) : |φ(t)| ≤ Δ^(-1)(t)(1 + t)^(1-δ), δ > 0}. For f ∈ L^p(G//K), 1 ≤ p ≤ ∞, and the maximal operator M_δ f(x) = sup_{ε>0, φ∈B_δ(G//K)} |φ_ε * f(x)|, we prove that these operators are of type (H^1_∞, L^1).

8.
An edge coloring of a graph G with colors 1, 2, …, t is called an interval t-coloring if all t colors are used and the colors of the edges incident to each vertex of G are distinct and form an interval of consecutive integers. G is called interval colorable if it has an interval t-coloring for some positive integer t; the set of all interval colorable graphs is denoted by 𝔑. For a graph G ∈ 𝔑, the least and the greatest values of t for which G has an interval t-coloring are denoted by ω(G) and W(G), respectively. We give a graph-contraction method for interval colorings. Using this method, we prove that ω(G) = Δ(G) or Δ(G) + 1 for every bicyclic graph G ∈ 𝔑, and we completely determine the classes of bicyclic graphs with ω(G) = Δ(G) and with ω(G) = Δ(G) + 1.

9.
Let G be a graph on n vertices; G is called k-degenerate if every subgraph of G contains a vertex of degree at most k. Let N(G, F) denote the number of copies of F in G. We study the problem of counting complete subgraphs and complete bipartite subgraphs in k-degenerate graphs, and give upper bounds together with the corresponding extremal graphs; below, C(a, b) denotes a binomial coefficient. First, we prove that N(G, K_t) ≤ (n − k)·C(k, t−1) + C(k, t). Second, for s, t ≥ 1, n ≥ k + 1 and s + t ≤ k, we prove that N(G, K_{s,t}) ≤ C(k, s)·C(n−s, s) − (1/2)·C(k, s)·C(k−s, s) if t = s, and N(G, K_{s,t}) ≤ C(k, s)·C(n−s, t) + C(k, t)·C(n−t, s) − C(k, t)·C(k−t, s) if t ≠ s. In addition, we study the maximum number of edges of G when the maximum matching number and the minimum vertex cover number are prescribed. Let ν(G) and K(G) denote the maximum matching number and the minimum vertex cover number of G, respectively. We prove that if ν(G) ≤ k, K(G) = k + r and n ≥ 2k + 2r^2 + r + 1, then e(G) ≤ C(k+r+1, 2) + (k − r)(n − k − r − 1).

10.
刘瑶 《运筹学学报》2021,25(2):115-126
Given two nonnegative integers s and t, an (s,t)-relaxed strong k-edge-coloring of a graph G is a mapping c: E(G) → [k] such that for every edge e of G, the color c(e) appears at most s times in the 1-neighborhood of e and at most t times in the 2-neighborhood of e. The (s,t)-relaxed strong edge coloring index of G, denoted χ'_(s,t)(G), is the smallest k for which G admits an (s,t)-relaxed strong k-edge-coloring. In the graph G…
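The definition above can be checked mechanically. The sketch below is an illustrative checker, not from the paper; in particular, the neighborhood conventions are assumptions (the 1-neighborhood of an edge e is taken to be the other edges sharing an endpoint with e, and the 2-neighborhood the edges at distance exactly 2 from e).

```python
def edge_neighborhoods(edges, e):
    # 1-neighborhood: edges sharing an endpoint with e;
    # 2-neighborhood: edges adjacent to a 1-neighbor but not to e itself.
    e = frozenset(e)
    others = [frozenset(f) for f in edges if frozenset(f) != e]
    n1 = [f for f in others if e & f]
    n1_set = set(n1)
    n2 = [f for f in others if f not in n1_set and any(f & g for g in n1)]
    return n1, n2

def is_st_relaxed_strong(edges, coloring, s, t):
    # coloring: dict mapping frozenset(edge) -> color in [k]
    for e in edges:
        ce = coloring[frozenset(e)]
        n1, n2 = edge_neighborhoods(edges, e)
        if sum(coloring[f] == ce for f in n1) > s:
            return False
        if sum(coloring[f] == ce for f in n2) > t:
            return False
    return True

# Path 0-1-2-3: edges {0,1} and {2,3} share a color; they are at distance 2,
# so this is a (0,1)-relaxed strong coloring but not a (0,0)-relaxed one.
edges = [(0, 1), (1, 2), (2, 3)]
col = {frozenset((0, 1)): 1, frozenset((1, 2)): 2, frozenset((2, 3)): 1}
print(is_st_relaxed_strong(edges, col, 0, 1))  # True
print(is_st_relaxed_strong(edges, col, 0, 0))  # False
```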

11.
We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem in which indefinite kernel matrices are treated as noisy observations of a true Mercer kernel. Our formulation keeps the problem convex, and relatively large problems can be solved efficiently using the projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several standard data sets.
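As a point of reference only, the sketch below shows the simplest baseline for the indefinite-kernel setting: projecting an indefinite Gram matrix onto the positive semidefinite cone by clipping negative eigenvalues, then training a standard SVM on the result. The paper's actual method instead optimizes a proxy kernel jointly with the support vectors; that joint optimization is not reproduced here, and the synthetic data and the tanh "kernel" below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def clip_to_psd(K):
    # Project a symmetric indefinite kernel matrix onto the PSD cone
    # by zeroing its negative eigenvalues.
    w, V = np.linalg.eigh((K + K.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# An indefinite "kernel": the sigmoid/tanh kernel is not PSD in general.
K = np.tanh(X @ X.T - 0.5)
K_psd = clip_to_psd(K)

clf = SVC(kernel="precomputed").fit(K_psd, y)
print("training accuracy:", clf.score(K_psd, y))
```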

12.
A method of converting nonlinear Volterra equations to systems of ordinary differential equations is compared with a standard technique, the method of moments, for linear Fredholm equations. The method amounts to constructing a Galerkin approximation when the kernel is either finitely decomposable or approximated by a certain Fourier sum. Numerical experiments from recent work by Bownds and Wood serve to compare several standard approximation methods as they apply to smooth kernels. It is shown that, if the original kernel decomposes exactly, then the method produces a numerical solution that is as accurate as the method used to solve the corresponding differential system. If the kernel requires an approximation, the error is greater, but in examples seems to be around 0.5% for a reasonably small number of approximating terms. In any case, the problem of excessive kernel evaluations is circumvented by the conversion to the system of ordinary differential equations.
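A minimal sketch of the generic conversion for a finitely decomposable (degenerate) kernel, assuming the simplest case k(t,s) = a(t)b(s); it illustrates the idea rather than the specific schemes compared by Bownds and Wood. Writing z(t) = ∫_0^t b(s)y(s) ds turns the Volterra equation y(t) = f(t) + ∫_0^t k(t,s)y(s) ds into the ODE z' = b(t)y(t) with y(t) = f(t) + a(t)z(t).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Test problem: k(t,s) = exp(t - s) = exp(t) * exp(-s), f(t) = 1.
a = lambda t: np.exp(t)
b = lambda s: np.exp(-s)
f = lambda t: 1.0

def rhs(t, z):
    y = f(t) + a(t) * z[0]     # recover y(t) from the auxiliary state z(t)
    return [b(t) * y]          # z'(t) = b(t) y(t), z(0) = 0

sol = solve_ivp(rhs, (0.0, 1.0), [0.0], dense_output=True, rtol=1e-10)

t = np.linspace(0.0, 1.0, 5)
y_num = f(t) + a(t) * sol.sol(t)[0]
y_exact = (1.0 + np.exp(2.0 * t)) / 2.0   # exact solution of this test problem
print(np.max(np.abs(y_num - y_exact)))    # error near the solver tolerance
```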

13.
In recent years, kernel based methods have proved to be very successful for many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which results from the fact that kernel methods like support vector machines (SVM) are based on a convex optimization problem. Solving a new learning problem can now often be reduced to the choice of an appropriate kernel function and kernel parameters. However, even the most powerful kernel methods can still fail on quite simple data sets when the feature space induced by the chosen kernel function is not sufficient. In these cases, an explicit feature space transformation or detection of latent variables has proved more successful. Since such explicit feature construction is often not feasible for large data sets, the ultimate goal for efficient kernel learning would be the adaptive creation of new and appropriate kernel functions. It cannot be guaranteed, however, that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, the optimization core of the learning method itself has to be enhanced before it can be used with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the use of appropriate feature spaces and discusses the possible consequences leading to non-convex optimization problems. We show that these new non-convex optimization SVMs are at least as accurate as their quadratic programming counterparts in terms of generalization performance on eight real-world benchmark data sets. They always outperform traditional approaches in terms of the original optimization problem. Additionally, the proposed algorithm is more generic than existing traditional solutions, since it also works for non-positive semidefinite or indefinite kernel functions.

14.
A simplex algorithm based on the kernel matrix of linear programming
This paper discusses the kernel matrix of a linear program and its properties, and explores the possibility of implementing the simplex algorithm on the kernel matrix. It further proposes a two-phase primal-dual simplex method based on the kernel matrix; by iterating through a primal phase and a dual phase, the method converges in finitely many iterations to an optimal solution of the original problem, or proves that the problem is infeasible or unbounded. On the 22 test problems, the computational efficiency of the algorithm is on the whole better than that of the MINOS package, which is based on the traditional simplex method.

15.
《Applied Mathematical Modelling》2014,38(15-16):3822-3833
Smoothed particle hydrodynamics (SPH) is a popular meshfree Lagrangian particle method that uses a kernel function for numerical approximations. The kernel function is closely related to the computational accuracy and stability of the SPH method. In this paper, a new kernel function is proposed, which consists of two cosine functions and is referred to as the double cosine kernel function. The newly proposed double cosine kernel function is sufficiently smooth and is associated with an adjustable support domain. It also has a smaller second-order moment, and therefore can achieve better accuracy in terms of kernel approximation. The SPH method with this double cosine kernel function is applied to simulate a dam-break flow and the water entry of a horizontal circular cylinder. The obtained SPH results agree very well with the experimental results. The double cosine kernel function is also compared with two frequently used SPH kernel functions, the Gaussian and cubic spline kernel functions.
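The selection criteria mentioned above (normalization and the size of the second-order moment) can be checked numerically for any candidate kernel. The sketch below does this in 1D for the standard cubic spline kernel and for a simple cosine-type kernel; the latter is only an illustrative stand-in, not the paper's double cosine kernel, whose exact coefficients are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def cubic_spline(q):
    # standard 1D cubic spline SPH kernel (normalization factor 2/3, h = 1)
    if q < 1.0:
        return (2.0 / 3.0) * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return (2.0 / 3.0) * 0.25 * (2.0 - q)**3
    return 0.0

def cosine_kernel(q):
    # illustrative cosine-type kernel, normalized on [-2, 2]; NOT the paper's
    # double cosine kernel
    return 0.25 * (1.0 + np.cos(np.pi * q / 2.0)) if q < 2.0 else 0.0

def check(W):
    norm, _ = quad(lambda x: W(abs(x)), -2.0, 2.0)          # should be 1
    m2, _ = quad(lambda x: x**2 * W(abs(x)), -2.0, 2.0)     # second moment
    return norm, m2

for name, W in [("cubic spline", cubic_spline), ("cosine", cosine_kernel)]:
    norm, m2 = check(W)
    print(f"{name:12s}  integral = {norm:.6f}  second moment = {m2:.4f}")
```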

16.
This paper discusses a method of computing reproducing kernels by means of Green's functions. In the space W_2^m, a general construction of the reproducing kernel is given by using the summation property of reproducing kernels together with Green's function theory, and the reproducing kernel of the space W_2^3 is computed with this method.
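As a hedged illustration of the Green's-function construction in a simpler, standard setting (the space W_2^1[0,1] with its usual inner product, not the W_2^m and W_2^3 cases treated in the paper): integration by parts shows that the reproducing kernel is the Green's function of a self-adjoint boundary value problem.

```latex
% Reproducing kernel of W_2^1[0,1] via its Green's function (standard example).
\[
  \langle u,v\rangle = \int_0^1 \bigl(u\,v + u'\,v'\bigr)\,dx ,\qquad
  -\,\partial_x^2 K(x,y) + K(x,y) = \delta_y(x),\quad
  \partial_x K(0,y)=\partial_x K(1,y)=0 ,
\]
\[
  K(x,y) = \frac{\cosh\!\bigl(\min(x,y)\bigr)\,\cosh\!\bigl(1-\max(x,y)\bigr)}{\sinh 1},
  \qquad \langle u,\,K(\cdot,y)\rangle = u(y)\ \ \text{for all } u\in W_2^1[0,1].
\]
```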

17.
A reproducing kernel method is proposed to obtain the optimal and approximate solutions of Carleman singular integral equations. We are therefore mostly interested in singular integral equations with a Cauchy-type kernel whose coefficients are real or complex valued functions. The new method and corresponding concepts allow the analysis of the associated discrete singular integral equations and corresponding inverse source problems in appropriate frameworks.

18.
In support vector machine (SVM) predictive modeling, the kernel function maps a nonlinear problem in a low-dimensional feature space to a linear problem in a high-dimensional feature space, and its characteristics strongly influence both learning and prediction. Considering the fitting and generalization properties of two typical kernels, the global kernel (polynomial kernel) and the local kernel (RBF kernel), an SVM method based on a mixed kernel function is adopted for predictive modeling. To evaluate the modeling effect of different kernels and obtain better predictive performance, a genetic algorithm is used to adaptively evolve the parameters of the SVM model, and the approach is applied to the practical problem of equipment cost prediction. Computational results show that the SVM with the mixed kernel achieves better predictive performance than an SVM with a single kernel, so the approach can be promoted as an effective predictive modeling method in equipment management.
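A minimal sketch of a mixed-kernel SVM in the spirit described above, under the assumption that the mixture is a convex combination lam * RBF + (1 - lam) * polynomial; the genetic-algorithm parameter tuning and the equipment-cost data are not reproduced, and the data below are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def mixed_kernel(X, Y, lam=0.7, gamma=0.5, degree=2):
    # convex combination of a local (RBF) and a global (polynomial) kernel
    return lam * rbf_kernel(X, Y, gamma=gamma) + \
           (1.0 - lam) * polynomial_kernel(X, Y, degree=degree)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(80, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=80)

model = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
print("R^2 on training data:", model.score(X, y))
```

In a fuller treatment, lam, gamma, degree and C would be the parameters a genetic algorithm (or any other tuner) searches over.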

19.
This paper investigates the analytical approximate solutions of third order three-point boundary value problems using the reproducing kernel method. The solution obtained by the method takes the form of a convergent series with easily computable components. However, the reproducing kernel method cannot be used directly to solve third order three-point boundary value problems, since there is no method for obtaining a reproducing kernel satisfying the three-point boundary conditions. This paper presents a method for constructing a reproducing kernel satisfying the three-point boundary conditions, so that the reproducing kernel method can be used to solve third order three-point boundary value problems. Results of numerical examples demonstrate that the method is quite accurate and efficient for third order three-point boundary value problems.

20.
This paper focuses on developing fast numerical algorithms for the selection of a kernel that is optimal for a given training data set. The optimal kernel is obtained by minimizing a cost functional over a prescribed set of kernels. The cost functional is defined in terms of a positive semi-definite matrix determined completely by a given kernel and the given sampled input data. Fast computational algorithms are developed by approximating the positive semi-definite matrix by a related circulant matrix, so that the fast Fourier transform can be applied to find the optimal kernel with linear or quasi-linear computational complexity. We establish convergence of the approximation method. Numerical examples are presented to demonstrate the approximation accuracy and computational efficiency of the proposed methods.
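A sketch of the underlying circulant/FFT idea under simplifying assumptions (equally spaced 1D samples and a translation-invariant, periodized kernel); the paper's cost functional and kernel-selection step are not reproduced. For a circulant approximation, the FFT of its first column yields all eigenvalues in O(n log n).

```python
import numpy as np

n = 256
x = np.arange(n) / n
gamma = 40.0
# First column of a symmetric circulant matrix: a periodized Gaussian kernel
k = np.exp(-gamma * np.minimum(x, 1.0 - x) ** 2)

# Eigenvalues of the circulant approximation via FFT (O(n log n))
eig_fft = np.fft.fft(k).real

# Compare with a direct eigendecomposition of the same circulant matrix (O(n^3))
C = np.array([np.roll(k, i) for i in range(n)]).T
eig_direct = np.linalg.eigvalsh((C + C.T) / 2)

print(np.allclose(np.sort(eig_fft), np.sort(eig_direct), atol=1e-8))  # True
```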

