Similar Articles
20 similar articles found (search time: 31 ms)
1.
We discuss the Fredholm and properness properties of second-order quasilinear elliptic operators viewed as mappings from W^{2,p}(ℝ^N) to L^p(ℝ^N) with N < p < ∞. The unboundedness of the domain makes the standard Sobolev embedding theorems inadequate to investigate such issues. Instead, we develop several new tools and methods to obtain fairly simple necessary and sufficient conditions for such operators to be Fredholm with a given index and to be proper on the closed bounded subsets of W^{2,p}(ℝ^N). It is noteworthy that the translation invariance of the domain, well known to be responsible for the lack of compactness in the Sobolev embedding theorems, is taken advantage of to establish results in the opposite direction and is indeed crucial to the proof of the properness criteria. The limitation to second-order and scalar equations chosen in our exposition is relatively unimportant, as none of the arguments involved here relies upon either of these assumptions. Generalizations to higher order equations or to systems are thus clearly possible with a variable amount of extra work. Various applications, notably but not limited to global bifurcation problems, are described elsewhere.

2.
Algorithms are developed, based on topological principles, to evaluate the boundary and “internal structure” of the Minkowski sum of two planar curves. A graph isotopic to the envelope curve is constructed by computing its characteristic points. The edges of this graph are in one-to-one correspondence with a set of monotone envelope segments. A simple formula allows a degree to be assigned to each face defined by the graph, indicating the number of times its points are covered by the Minkowski sum. The boundary can then be identified with the set of edges that separate faces of zero and non-zero degree, and the boundary segments corresponding to these edges can be approximated to any desired geometrical accuracy. For applications that require only the Minkowski sum boundary, the algorithm minimizes geometrical computations on the “internal” envelope edges that do not contribute to the final boundary. In other applications, this internal structure is of interest, and the algorithm provides comprehensive information on the covering degree for different regions within the Minkowski sum. Extensions of the algorithm to the computation of Minkowski sums in ℝ^3, and other forms of geometrical convolution, are briefly discussed.
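For intuition only, and restricted to the convex case that the paper generalizes well beyond: the Minkowski sum of two convex polygons is the convex hull of all pairwise vertex sums. The sketch below uses this fact (the function name and sample polygons are invented for the example); it does not implement the envelope/degree algorithm of the abstract.

```python
# Minimal sketch: Minkowski sum of two *convex* polygons via pairwise vertex
# sums followed by a convex hull.  This illustrates the notion of Minkowski
# sum only; it does not implement the paper's envelope/degree algorithm,
# which handles general planar curves and reports covering degrees.
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum_convex(P, Q):
    """P, Q: (m,2) and (n,2) arrays of convex polygon vertices."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)  # all pairwise sums
    hull = ConvexHull(sums)
    return sums[hull.vertices]          # boundary vertices in CCW order

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
triangle = np.array([[0, 0], [0.5, 0], [0.25, 0.4]], float)
print(minkowski_sum_convex(square, triangle))
```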

3.
Well known results on near-minimax approximation using Chebyshev polynomials of the first kind are here extended to Chebyshev polynomials of the second, third, and fourth kinds. Specifically, polynomial approximations of degree n weighted by (1−x^2)^{1/2}, (1+x)^{1/2}, or (1−x)^{1/2} are obtained as partial sums of weighted expansions in Chebyshev polynomials of the second, third, or fourth kinds, respectively, to a function f continuous on [−1, 1] and constrained to vanish, respectively, at ±1, −1, or +1. In each case a formula for the norm of the resulting projection is determined and shown to be asymptotic to 4π^{−2} log n + A + o(1), and this provides in each case an explicit bound on the relative closeness of a minimax approximation. The constant A that occurs for the second-kind polynomial is markedly smaller, by about 0.27, than that for the third and fourth kinds, while the latter A is identical to that for the first kind, where the projection norm is the classical Lebesgue constant λ_n. The results on the third- and fourth-kind polynomials are shown to relate very closely to previous work of P.V. Galkin and of L. Brutman. Analogous approximations are also obtained by interpolation at zeros of second-, third-, or fourth-kind polynomials of degree n+1, and the norms of the resulting projections are obtained explicitly. These are all observed to be asymptotic to 2π^{−1} log n + B + o(1), and so near-minimax approximations are again obtained. The norms for the first-, third-, and fourth-kind projections appear to be converging to each other. However, for the second-kind projection, we prove that the constant B is smaller by a quantity asymptotic to 2π^{−1} log 2, based on a conjecture about the point of attainment of the supremum defining the projection norm, and we demonstrate that the projection itself is remarkably close to a minimal (weighted) interpolation projection. All four kinds of Chebyshev polynomials possess a weighted minimax property, and, in consequence, all eight approximations discussed here coincide with minimax approximations when the function f is a suitably weighted polynomial of degree n+1.
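As a hedged numerical illustration of interpolation at second-kind Chebyshev zeros (not the paper's projection-norm formulas), the sketch below interpolates a function vanishing at ±1 at the nodes x_k = cos(kπ/(n+2)), the zeros of U_{n+1}, and compares the maximum error with equispaced interpolation; the helper names and the test function are assumptions made for the example.

```python
# Illustrative sketch: degree-n interpolation of f at the n+1 zeros of the
# second-kind Chebyshev polynomial U_{n+1}, compared with equispaced nodes.
# Near-minimax behaviour shows up as a much smaller maximum error on [-1, 1].
import numpy as np

def cheb2_zeros(n):
    k = np.arange(1, n + 2)
    return np.cos(k * np.pi / (n + 2))       # zeros of U_{n+1}

def interp_max_error(f, nodes, n, grid):
    coeffs = np.polyfit(nodes, f(nodes), n)  # n+1 nodes, degree n: exact interpolation
    return np.max(np.abs(f(grid) - np.polyval(coeffs, grid)))

f = lambda x: (1 - x**2) * np.exp(x)         # a test function vanishing at +-1
n = 10
grid = np.linspace(-1, 1, 4001)
print("U-zero nodes :", interp_max_error(f, cheb2_zeros(n), n, grid))
print("equispaced   :", interp_max_error(f, np.linspace(-1, 1, n + 1), n, grid))
```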

4.
The short answer to the question just posed seems to be, “Not much.” Since I have given “the long answer” elsewhere,12 I can summarize it here. Berg could see no point in writing Bromley. What could he write to someone he believed guilty of plagiarism? What could such a letter accomplish? He did, however, write to New York University Press; to all the universities involved, and to the Works’ English publisher (Pickering and Chatto), who said they passed the letter on to Campbell-Kelly (30 June 1990); to a great many professional societies in Australia, England, and the United States; to a great many governmental agencies and some politicians in those countries; to some publications, both academic and popular; to the Pope and several cardinals; and to a miscellany of other individuals. Generally, those in the best position to do something—for example, the three universities involved—did not even answer Berg’s letter. Others often did answer, but their answer was generally that they were in no position to do anything. That was how matters stood when I published my first article on “the Berg Affair”.12 Its publication finally roused those best positioned to answer. Late in 1993, Galler, Bromley, and Campbell-Kelly wrote letters to the editor of Accountability in Research criticizing me for not getting their side of the story before I published Berg’s. Campbell-Kelly threatened the journal’s publisher with a lawsuit if I (or it) did not retract. The three also provided some insight into what their explanation of events might be. Bromley, though listed prominently in ads for the Works, claimed to have had only a small part, merely advising Campbell-Kelly on selection and arrangement of the papers printed in Volumes 2 and 3. Campbell-Kelly confirmed that Bromley took no part in the detailed editing or in the provision of documents. That work was performed by one C.J.D. (“Jim”) Roberts, a “London-based independent scholar” who was “editorial consultant to the Works” (and, apparently, worked directly under Campbell-Kelly). Roberts seems to deserve more public credit than he has so far received. According to Campbell-Kelly, it was Roberts who, making a systematic search for unknown holdings of Babbage, turned up the original of the letter to Quetelet by writing the Royal Library (one “tiny triumph” among many). Campbell-Kelly also claimed that neither he nor Roberts knew of Berg’s prior discovery.

5.
In a rectangular grid, given two sets of nodes, the sources and the sinks, of equal size, the disjoint paths (DP) problem is to connect as many source nodes as possible to sink nodes using a set of “disjoint” paths. (Both edge-disjoint and vertex-disjoint cases are considered in this paper.) Note that in this DP problem, a source node can be connected to any sink node. Although in general the sizes of the source and sink sets do not have to be the same, the algorithms presented in this paper can also find the maximum number of disjoint paths pairing source nodes with sink nodes. We use the network flow approach to solve this DP problem. By exploiting the properties of the network, such as the planarity and regularity of a grid, integral flow, and unit capacity sources/sinks/flow, we can optimally compress the size of the working grid (to be defined) from O(N^2) to O(N^{1.5}) and solve the problem in O(N^{2.5}) time for both the edge-disjoint and vertex-disjoint cases, an improvement over the straightforward approach, which takes O(N^3) time.
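For orientation, the straightforward network-flow formulation referred to above (attach a super-source to the sources and a super-sink to the sinks, give every edge unit capacity, and run a max-flow computation) can be sketched as follows; this is the baseline approach, not the paper's compressed-grid O(N^{2.5}) algorithm, and the grid size and node coordinates are made up for the example.

```python
# Baseline sketch: edge-disjoint paths between two node sets on a grid via
# max flow with unit capacities (the naive formulation, not the paper's
# compressed-grid algorithm).
import networkx as nx

def max_edge_disjoint_paths(rows, cols, sources, sinks):
    G = nx.grid_2d_graph(rows, cols).to_directed()   # each grid edge in both directions
    nx.set_edge_attributes(G, 1, "capacity")         # unit capacities -> edge-disjointness
    for u in sources:
        G.add_edge("s", u, capacity=1)               # one path may start at each source
    for v in sinks:
        G.add_edge(v, "t", capacity=1)               # one path may end at each sink
    value, _ = nx.maximum_flow(G, "s", "t")
    return value

print(max_edge_disjoint_paths(4, 4, [(0, 0), (0, 3)], [(3, 1), (3, 2)]))
```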

6.
Let M_i → X denote a sequence of n-manifolds converging to a compact metric space X in the Gromov-Hausdorff topology, such that the sectional curvature is bounded in absolute value and dim(X) < n. We prove the following stability result: if the fundamental groups of the M_i are torsion groups of uniformly bounded exponents and the second twisted Betti numbers of the M_i vanish, then there is a manifold M and a sequence of diffeomorphisms from M to a subsequence of {M_i} such that the distance functions of the pullback metrics converge to a pseudo-metric in the C^0-norm. Furthermore, M admits a foliation with leaves diffeomorphic to flat manifolds (not necessarily compact) such that a vector is tangent to a leaf if and only if its norm converges to zero with respect to the pullback metrics. These results lead to a few interesting applications. Oblatum 17-I-2002 & 27-II-2002. Published online: 29 April 2002.

7.
Journal of Complexity, 1993, 9(4): 427-446
In this paper we review recent results on nonparametric approaches to the identification of linear dynamic systems, under nonprobabilistic assumptions on measurement uncertainties. Two main categories of problems are considered in the paper: the H_∞ and l_1 settings. The H_∞ setting assumes that the true system is linear time-invariant and the available information is represented by samples of the frequency response of the system, corrupted by an l_∞-norm bounded noise. The aim is to estimate a proper, stable, finite-dimensional model. The estimation error is quantified according to an H_∞ norm, measuring the "distance" of the estimated model from the worst-case system in the class of allowable systems, for the worst-case realization of the measurement error. In the l_1 setting, the aim is to identify the samples of the impulse response of an unknown linear time-invariant system. The available information is given by input/output measurements corrupted by l_∞-bounded noise, and the estimation error is measured according to an l_1 norm, for the worst case with respect to allowable systems and noise. In this paper, the main results available in the literature for both settings are reviewed, with particular attention to (a) evaluation of the diameter of information under various experimental conditions, (b) convergence to zero of the diameter of information (i.e., existence of robustly convergent identification procedures), and (c) computation of optimal and almost-optimal algorithms. Some results are also reported for the l_∞ setting, which is similar to the l_1 setting except that the estimation error is measured by an l_∞ norm.

8.
The problem of scheduling the production and delivery of a supplier to feed the production of F manufacturers is studied. The orders fulfilled by the supplier are delivered to the manufacturers in batches of the same size. The supplier's production line has to be set up whenever it switches from processing an order of one manufacturer to an order of another manufacturer. The objective is to minimize the total setup cost, subject to maintaining continuous production for all manufacturers. The problem is proved to be NP-hard. It is reduced to a single machine scheduling problem with deadlines and jobs belonging to F part types. An O(N log F) algorithm, where N is the number of delivery batches, is presented to find a feasible schedule. A dynamic programming algorithm with O(N^F / F^{F-2}) running time is presented to find an optimal schedule. If F = 2 and setup costs are unit, an O(N) time algorithm is derived.
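As background on the reduction target, the classic earliest-deadline-first feasibility test for single-machine jobs with deadlines is sketched below; it is a textbook check, not the paper's O(N log F) or dynamic-programming algorithm, and the job data are invented.

```python
# Background sketch: earliest-deadline-first (EDF) feasibility test for
# single-machine jobs with deadlines, the kind of subproblem the abstract
# reduces to.  A generic textbook check, not the paper's algorithms.
def edf_feasible(jobs):
    """jobs: list of (processing_time, deadline); no preemption, all released at 0."""
    t = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):   # earliest deadline first
        t += p
        if t > d:
            return False        # some job would finish after its deadline
    return True

print(edf_feasible([(2, 3), (1, 4), (3, 9)]))   # True
print(edf_feasible([(2, 3), (3, 4), (1, 9)]))   # False: second job ends at 5 > 4
```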

9.
We introduce general starvation and consider cyclic networks with general blocking and starvation (GBS). The mechanism of general blocking allows the server to process a limited number of jobs when the buffer downstream is full, and that of general starvation allows the server to perform a limited number of services in anticipation of jobs that are yet to arrive. The two main goals of this paper are to investigate how the throughput of cyclic GBS networks is affected by varying (1) the total number of jobs J, and (2) the buffer allocation k = (k_1, ..., k_m), subject to a fixed total buffer capacity K = k_1 + ... + k_m. In particular, we obtain sufficient conditions for the throughput to be symmetric in J and to be maximized when J = K/2. We also show that the equal buffer allocation is optimal under the two regimes of light or heavy usage. In order to establish these results, we obtain several intermediate structural properties of the throughput, using duality, reversibility, and concavity, which are of independent interest. Research supported in part by NSF Grant No. ECS-8919818.

10.
An algorithm for the construction of Ramsey graphs with a given automorphism group G is presented. To find a graph on n vertices with no clique of k vertices, K_k, and no independent set of l vertices, K̄_l, with k, l ≤ n, and with an automorphism group G, a Boolean formula α based on the G-orbits of k-subsets and l-subsets of vertices is constructed from incidence matrices belonging to G. This Boolean formula is satisfiable if and only if the desired graph exists, and each satisfying assignment to α specifies a set of orbits of pairs of vertices whose union gives the edges of such a graph. Finding these assignments is basically equivalent to the conversion of α from CNF to DNF (conjunctive to disjunctive normal form). Though the latter problem is NP-hard, we present an “efficient” method to do the conversion for the formulas that appear in this particular problem. When G is taken to be the dihedral group D_n for n ≤ 101, this method matches all of the previously known cyclic Ramsey graphs, as reported by F. R. K. Chung and C. M. Grinstead [“A Survey of Bounds for Classical Ramsey Numbers,” Journal of Graph Theory, 7 (1983), 25–38], in dramatically smaller computer time when compared to the time required by an exhaustive search. Five new lower bounds for the classical Ramsey numbers are established: R(4,7) ≥ 47, R(4,8) ≥ 52, R(4,9) ≥ 69, R(5,7) ≥ 76, and R(5,8) ≥ 94. Also, some previously known cyclic graphs are shown to be unique up to isomorphism.
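To make the objects concrete, a brute-force check, entirely different from the orbit-based Boolean-formula method of the abstract, can verify that a small cyclic (circulant) graph contains neither K_k nor an independent set of l vertices; the connection set below is the classical pentagon witnessing R(3,3) > 5.

```python
# Brute-force verification sketch for cyclic Ramsey graphs (not the paper's
# orbit-based CNF->DNF method): a circulant graph on n vertices with
# connection set S has an edge {i, j} iff (i - j) mod n lies in S or -S.
from itertools import combinations

def has_homogeneous_set(n, S, size, independent=False):
    S = {d % n for d in S} | {(-d) % n for d in S}
    def adjacent(i, j):
        return ((i - j) % n) in S
    for sub in combinations(range(n), size):
        edges = [adjacent(i, j) for i, j in combinations(sub, 2)]
        if (not independent and all(edges)) or (independent and not any(edges)):
            return True
    return False

# C5 (the pentagon): no triangle and no independent 3-set, so R(3,3) > 5.
n, S = 5, {1}
print(has_homogeneous_set(n, S, 3))                     # False: no K_3
print(has_homogeneous_set(n, S, 3, independent=True))   # False: no independent 3-set
```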

11.
12.
In this paper I show how the Elements of Euclid was transmitted to Western Europe via the countries of the Middle East. Since Campanus had used not only an Adelard II text for his edition, but also works of his contemporaries and a Greek edition, we shall first turn our attention to the Latin translations from the Arabic of Adelard I and II, Hermann of Carinthia, and Gerard of Cremona. The most important Arabic editions are those of al-Ḥajjāj I and II and Isḥāq-Thābit. In sections 1.1 and 1.2 I point out the differences between these Arabic editions, and in section 1.3 I argue that Adelard I and Hermann of Carinthia go back to an al-Ḥajjāj I text, Adelard II however to an al-Ḥajjāj II text. In section 1.4 I show that there is a great resemblance between a Syriac fragment, al-Ḥajjāj I, and Adelard I. I cannot answer, however, the question whether al-Ḥajjāj borrowed his version from a Syriac one or conversely. In section 1.5 I investigate the Gerard edition. On the whole Gerard follows the Isḥāq-Thābit edition, but he or his Arabic ancestor inserted proofs of al-Ḥajjāj I. According to an-Nayrīzī, the propositions VIII.25 and 26, which are missing in the Greek Euclid and in the al-Ḥajjāj versions, are from Heron. In chapter 2 I pose the question: what can the Arabic-Latin tradition teach us about the Greek edition of Heiberg? Heiberg III.12 has to be removed, since an-Nayrīzī said it was of Heron. Heath has said that all porisms are omitted in the Arabic except the porisms to VI.8, VIII.2 and X.3. This is not true, since in the explicit of some Adelard II MSS 11 porisms are mentioned, all of which are in the text. Moreover, the Syriac fragment, Adelard I, Hermann, and Gerard contain the porism to I.15, which is missing in the Heiberg edition. In section 2.3 I show that of the Heiberg lemmas to X.32, 41 and XIII.18, the first is ascribed to Iohanicius Babiloniensis by Gerard and the second to Iezidi; Gerard inserted the third in the scholia which he added to his edition. So all these lemmas have to be cancelled. In the last section I state that the proofs of X.68-70 diverge totally from Heiberg and al-Ḥajjāj I. I cannot solve, however, the question which proofs are Euclidean. In Appendix I, I give the text of the lemma to X.32, and in Appendix II that of the lemma to X.41.

13.
Summary. Recently Eiermann, Marek, and Niethammer have shown how to apply semiiterative methods to a fixed point system x = Tx + c which is inconsistent or in which the powers of the fixed point operator T have no limit, to obtain iterative methods which converge to some approximate solution to the fixed point system. In view of their results we consider here stipulations on a preconditioning QAx = Qb of the system Ax = b and, separately, on a splitting A = M - N, which lead to fixed point systems such that, with the aid of a semiiterative method, the iterative scheme converges to a weighted Moore-Penrose solution to the system Ax = b. We show in several ways that to obtain a meaningful limit point from a semiiterative method requires fewer restrictions on the splittings or the preconditionings than those which have been required in the classical Picard iterative method (see, e.g., the works of Berman and Plemmons, Berman and Neumann, and Tanabe). We pay special attention to the case when the weighted Moore-Penrose solution which is sought is the minimal norm least squares solution to Ax = b. Research supported by the Deutsche Forschungsgemeinschaft. Partially supported by AFOSR research grant 88-0047.
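As a hedged illustration of the underlying phenomenon, namely that a suitable iteration on an inconsistent system can converge to the minimal norm least squares (Moore-Penrose) solution, the sketch below runs a plain Landweber/Richardson iteration x_{k+1} = x_k + ω Aᵀ(b − A x_k) from x_0 = 0 and compares the limit with pinv(A) b; this is a textbook special case, not the semiiterative framework of the abstract, and the test matrix is invented.

```python
# Textbook special case (not the paper's semiiterative framework): Landweber /
# Richardson iteration on an inconsistent system Ax = b.  Started from x0 = 0
# with 0 < w < 2 / sigma_max(A)^2, it converges to the minimal-norm least
# squares solution, i.e. the Moore-Penrose solution pinv(A) @ b.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3)) @ np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0.0]])  # rank-deficient
b = rng.standard_normal(8)                       # generically inconsistent

w = 1.0 / np.linalg.norm(A, 2) ** 2              # safe step size
x = np.zeros(A.shape[1])
for _ in range(20000):
    x = x + w * A.T @ (b - A @ x)

print("iterate      :", x)
print("pinv solution:", np.linalg.pinv(A) @ b)
```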

14.
A methodology based on the theory of optimal transport is developed to attribute variability in data sets to known and unknown factors and to remove such attributable components of the variability from the data. Denoting by x the quantities of interest and by z the explanatory factors, the procedure transforms x into filtered variables y through a z-dependent map, so that the conditional probability distributions ρ(x|z) are pushed forward into a target distribution μ(y) independent of z. Among all maps and target distributions that achieve this goal, the procedure selects the one that minimally distorts the original data: the barycenter of the ρ(x|z). Connections are found to unsupervised learning and to fundamental problems in statistics such as conditional density estimation and sampling. Particularly simple instances of the methodology are shown to be equivalent to k-means and principal component analysis. An application is shown to a time series of hourly ground temperature data across the United States.
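A minimal sketch of one of the simplest conceivable instances of the filtering idea, assuming the conditional distributions ρ(x|z) differ only by a location shift for a discrete factor z: the z-dependent map then reduces to subtracting the conditional mean and adding back the pooled mean. The data and variable names are invented; this illustrates the idea, not the paper's general barycenter construction.

```python
# Minimal sketch, assuming rho(x|z) differs across z only by a location shift:
# the z-dependent map is then "subtract the conditional mean, add back the
# pooled mean", and the filtered y no longer carries the z signal.  An
# illustration of the filtering idea only, not the full barycenter method.
import numpy as np

rng = np.random.default_rng(1)
z = rng.integers(0, 3, size=1000)                 # discrete explanatory factor
shifts = np.array([0.0, 2.0, -1.5])               # hidden z-dependent effect
x = shifts[z] + rng.standard_normal(1000)         # observed quantity of interest

group_mean = np.array([x[z == g].mean() for g in range(3)])
y = x - group_mean[z] + x.mean()                  # z-dependent affine map

print("means of x by z:", [round(x[z == g].mean(), 2) for g in range(3)])
print("means of y by z:", [round(y[z == g].mean(), 2) for g in range(3)])
```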

15.
The Global Information Technology Report released by the World Economic Forum (WEF) employs the networked readiness index (NRI) to measure the global competitiveness of a country's information and communication technologies (ICT) diffusion. The final NRI overall scores were computed by an arithmetic mean aggregation of the composite pillar scores, which implicitly assumed that all the pillars have constant weights. The Report did not explore the critical pillars and causal relations needed for better decision making. To add value to this Report, the objective of this paper is to propose an innovative approach that uses data mining techniques and partial least squares path modeling to scrutinize the critical pillars within the NRI and to further explore the causal relations amongst them. An empirical analysis based on the latest Report (2009-2010) is carried out. The results show that “business usage,” “business readiness,” and “market environment” are the three root drivers—critical pillars that drive the NRI overall scores; whereas “government readiness,” which is in turn mostly affected by “government usage,” is the foremost enabler of the NRI overall scores. Based on the results, policy makers are advised to allocate limited resources with priority to the three root drivers and the one foremost enabler in order to leapfrog the global competitiveness of national ICT diffusion.

16.
Several improvements are made to an algorithm of Higham and Smith for computing the matrix cosine. The original algorithm scales the matrix by a power of 2 to bring the ∞-norm to 1 or less, evaluates the [8/8] Padé approximant, then uses the double-angle formula cos(2A) = 2cos^2(A) - I to recover the cosine of the original matrix. The first improvement is to phrase truncation error bounds in terms of ‖A^2‖^{1/2} instead of the (no smaller, and potentially much larger) quantity ‖A‖. The second is to choose the degree of the Padé approximant to minimize the computational cost subject to achieving a desired truncation error. A third improvement is to use an absolute, rather than relative, error criterion in the choice of Padé approximant; this allows the use of higher degree approximants without worsening an a priori error bound. Our theory and experiments show that each of these modifications brings a reduction in computational cost. Moreover, because the modifications tend to reduce the number of double-angle steps, they usually result in a more accurate computed cosine in floating point arithmetic. We also derive an algorithm for computing both cos(A) and sin(A), by adapting the ideas developed for the cosine and intertwining the cosine and sine double-angle recurrences. AMS subject classification 65F30. Numerical Analysis Report 461, Manchester Centre for Computational Mathematics, February 2005. Gareth I. Hargreaves: This work was supported by an Engineering and Physical Sciences Research Council Ph.D. Studentship. Nicholas J. Higham: This work was supported by Engineering and Physical Sciences Research Council grant GR/T08739 and by a Royal Society-Wolfson Research Merit Award.
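A bare-bones sketch of the scaling and double-angle strategy described above, with a truncated Taylor series standing in for the [8/8] Padé approximant, so this is a simplified stand-in rather than the Higham-Smith algorithm or its improvements: scale A by 2^{-s}, approximate the cosine of the scaled matrix, then apply cos(2A) = 2cos^2(A) - I s times.

```python
# Sketch of the scaling + double-angle idea for the matrix cosine.  A truncated
# Taylor series replaces the [8/8] Pade approximant used by Higham and Smith,
# so this is a simplified stand-in, not a reproduction of their algorithm.
import numpy as np
from scipy.linalg import cosm                      # reference implementation

def cos_taylor(A, terms=8):
    # cos(A) ~ sum_{j < terms} (-1)^j A^{2j} / (2j)!
    C = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, terms):
        term = -term @ A @ A / ((2 * j - 1) * (2 * j))
        C = C + term
    return C

def cosm_double_angle(A, terms=8):
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, np.inf), 1e-16)))))
    C = cos_taylor(A / 2**s, terms)                # cheap approximation on the scaled matrix
    for _ in range(s):                             # undo the scaling: cos(2X) = 2 cos(X)^2 - I
        C = 2 * C @ C - np.eye(A.shape[0])
    return C

A = np.array([[0.8, 1.3], [-0.4, 0.6]])
print(np.max(np.abs(cosm_double_angle(A) - cosm(A))))   # should be tiny
```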

17.
The rotor-router model is a deterministic analogue of random walk. It can be used to define a deterministic growth model analogous to internal DLA. We show that the set of occupied sites for this model on an infinite regular tree is a perfect ball whenever it can be, provided the initial rotor configuration is acyclic (that is, no two neighboring vertices have rotors pointing to one another). This is proved by defining the rotor-router group of a graph, which we show is isomorphic to the sandpile group. We also address the question of recurrence and transience: we give two rotor configurations on the infinite ternary tree, one for which chips exactly alternate escaping to infinity with returning to the origin, and one for which every chip returns to the origin. Further, we characterize the possible “escape sequences” for the ternary tree, that is, binary words a_1 ⋯ a_n for which there exists a rotor configuration so that the kth chip escapes to infinity if and only if a_k = 1.
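To illustrate the mechanics on a small finite graph (not the infinite trees analyzed in the abstract): in a rotor walk each vertex carries a rotor that cycles through its neighbors, and a chip at a vertex first advances that rotor and then steps along the edge it now points to. The sketch below routes chips from the center of a path graph to its two endpoint sinks; all names are invented for the example.

```python
# Rotor walk sketch on the path 0-1-2-3-4 with sinks at the two endpoints
# (a finite toy, not the infinite ternary tree of the abstract).  Convention:
# when a chip sits at v, the rotor at v first advances to the next neighbor
# in a fixed cyclic order, then the chip steps along that edge.
def rotor_walk(n_vertices=5, source=2, chips=6):
    neighbors = {v: [v - 1, v + 1] for v in range(1, n_vertices - 1)}
    rotor = {v: 0 for v in neighbors}              # index into the neighbor list
    sinks = {0: 0, n_vertices - 1: 0}              # chips absorbed at each end
    for _ in range(chips):
        v = source
        while v in neighbors:                      # walk until a sink is reached
            rotor[v] = (rotor[v] + 1) % len(neighbors[v])
            v = neighbors[v][rotor[v]]
        sinks[v] += 1
    return sinks

print(rotor_walk())   # successive chips alternate between the two sinks
```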

18.
Minimal, rigid foliations by curves on ℂℙ^n
We prove the existence of minimal and rigid singular holomorphic foliations by curves on the projective space ℂℙ^n for every dimension n ≥ 2 and every degree d ≥ 2. Precisely, we construct a foliation ℱ which is induced by a homogeneous vector field of degree d, has a finite singular set, and all of whose regular leaves are dense in the whole of ℂℙ^n. Moreover, ℱ satisfies many additional properties expected from chaotic dynamics and is rigid in the following sense: if ℱ is conjugate to another holomorphic foliation by a homeomorphism sufficiently close to the identity, then these foliations are also conjugate by a projective transformation. Finally, all these properties are persistent for small perturbations of ℱ. This is done by considering pseudo-groups generated on the unit ball 𝔹^n ⊆ ℂ^n by small perturbations of elements in Diff(ℂ^n, 0). Under open conditions on the generators, we prove the existence of many pseudo-flows in their closure (for the C^0-topology) acting transitively on the ball. Dynamical features such as minimality, ergodicity, positive entropy and rigidity may easily be derived from this approach. Finally, some of these pseudo-groups are realized in the transverse dynamics of polynomial vector fields in ℂℙ^n. Received March 7, 2002 / final version received November 26, 2002. Published online February 7, 2003. Most of this work has been carried out during a visit of the first author to IMPA/RJ and a visit of the second author to the University of Lille 1. We would like to thank these institutes for hospitality and express our gratitude to CNPq-Brazil and CNRS-France for the financial support which made these visits possible. We are also indebted to Paulo Sad, Marcel Nicolau and the referee whose comments helped us to improve on the preliminary version. Finally, the second author has partially conducted this research for the Clay Mathematics Institute.

19.
Definitions, theorems and examples are established for a general model of Laurent polynomial spaces and ordered orthogonal Laurent polynomial sequences, ordered with respect to ordered bases and orthogonal with respect to inner products ⟨·,·⟩ = L ∘ ⊙ decomposed into couplings of a transition functional ⊙ with a strong moment functional or, more generally, a sample functional L. Under this formulation, which is shown to subsume those in the existing literature, new fundamental results are produced, including necessary and sufficient conditions for ordered OLPS to be sequences of nth numerators of continued fractions, in contrast to the classical result concerning nth denominators, which is shown to hold only in special cases.

20.
We explore the relation between two general kinds of separation properties. The first kind, which includes the classical separation properties of regularity and normality, has to do with expanding two disjoint closed sets, or dense subsets of each, to disjoint open sets. The second kind has to do with expanding discrete collections of points, or full-cardinality subcollections thereof, to disjoint or discrete collections of open sets. The properties of being collectionwise Hausdorff (cwH), of being strongly cwH, and of being wD(ℵ1) fall into the second category. We study the effect on other separation properties if these properties are assumed to hold hereditarily. In the case of scattered spaces, we show that (a) the hereditarily cwH ones are α-normal and (b) a regular one is hereditarily strongly cwH iff it is hereditarily cwH and hereditarily β-normal. Examples are given in ZFC of (1) hereditarily strongly cwH spaces which fail to be regular, including one that also fails to be α-normal; (2) hereditarily strongly cwH regular spaces which fail to be normal and even, in one case, to be β-normal; (3) hereditarily cwH spaces which fail to be α-normal. We characterize those regular spaces X such that X × (ω+1) is hereditarily strongly cwH and, as a corollary, obtain a consistent example of a locally compact, first countable, hereditarily strongly cwH, non-normal space. The ZFC-independence of several statements involving the hereditarily wD(ℵ1) property is established. In particular, several purely topological statements involving this property are shown to be equivalent to 𝔟 = ω₁.
