Similar Literature
20 similar articles found (search time: 171 ms)
1.
In this article we investigate Pontryagin's maximum principle for the control problem associated with the primitive equations (PEs) of the ocean with periodic inputs. We also derive a second-order sufficient condition for optimality. This work is closely related to Wang (SIAM J. Control Optim. 41(2):583–606, 2002) and He (Acta Math. Sci. Ser. B Engl. Ed. 26(4):729–734, 2006), in which the authors proved similar results for the three-dimensional Navier–Stokes (NS) systems.

2.
The main objective of the paper is to analyze the impact of environmental regulation on the technical efficiencies of Indian cement-producing firms. It derives the technical efficiency (TE) scores of firms in the presence and absence of regulation and brings out the differences in their magnitudes in two scenarios: one in which the firms take initiatives to comply with the set standards by investing additional resources in pollution abatement, and the other in which the firms do not take the necessary initiatives. The paper uses establishment-level data on cement from the Annual Survey of Industries for two years: 2003–2004, the most recent published data, and 1999–2000, when the environmental regulations in India were in the initial phase of implementation. A non-parametric deterministic method, data envelopment analysis (DEA), is used to derive the TE scores of firms. The traditional DEA framework is modified by introducing weak disposability of bad outputs to characterize 'effective environmental regulation', which ensures that reducing pollution is not costless. For both years it is found that the TE scores of firms under the 'effective regulation' scenario are either higher than or equal to those derived under the 'ineffective regulation' scenario, resulting in a higher average TE at the industry level in the 'effective regulation' scenario.
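A DEA technical-efficiency score of the kind described above is obtained by solving one linear program per firm. The following is a minimal input-oriented, constant-returns (CCR) sketch, not the paper's modified weak-disposability model; the data layout and the use of scipy.optimize.linprog are my assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, firm):
    """Input-oriented CCR efficiency of `firm` relative to all firms.

    Solves: min theta  s.t.  X @ lam <= theta * x_firm,
                             Y @ lam >= y_firm,  lam >= 0.
    Returns theta in (0, 1]; theta == 1 means technically efficient.
    """
    X = np.atleast_2d(inputs)    # shape (n_inputs, n_firms)
    Y = np.atleast_2d(outputs)   # shape (n_outputs, n_firms)
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                          # minimise theta
    A_in = np.hstack([-X[:, [firm]], X])                 # X@lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -Y@lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, firm]]
    bounds = [(None, None)] + [(0, None)] * n            # theta free, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]
```

A dominated firm (same output from more input) scores below 1, since a convex combination of its peers can produce its output with a fraction of its input.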

3.
In this paper, we focus on the restoration of images that have incomplete data in either the image domain or the transformed domain or in both. The transform used can be any orthonormal or tight frame transforms such as orthonormal wavelets, tight framelets, the discrete Fourier transform, the Gabor transform, the discrete cosine transform, and the discrete local cosine transform. We propose an iterative algorithm that can restore the incomplete data in both domains simultaneously. We prove the convergence of the algorithm and derive the optimal properties of its limit. The algorithm generalizes, unifies, and simplifies the inpainting algorithm in image domains given in Cai et al. (Appl Comput Harmon Anal 24:131–149, 2008) and the inpainting algorithms in the transformed domains given in Cai et al. (SIAM J Sci Comput 30(3):1205–1227, 2008), Chan et al. (SIAM J Sci Comput 24:1408–1432, 2003; Appl Comput Harmon Anal 17:91–115, 2004). Finally, applications of the new algorithm to super-resolution image reconstruction with different zooms are presented. R. H. Chan’s research was supported in part by HKRGC Grant 400505 and CUHK DAG 2060257. L. Shen’s research was supported by the US National Science Foundation under grant DMS-0712827. Z. Shen’s research was supported in part by Grant R-146-000-060-112 at the National University of Singapore.
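The alternating structure of such algorithms can be sketched with a toy iteration that soft-thresholds Fourier coefficients and then re-imposes the observed pixels. This is only an illustration of the two-domain alternation, not the authors' framelet algorithm; the FFT stands in for the tight-frame transform and the fixed threshold is a hand-picked assumption:

```python
import numpy as np

def inpaint(y, mask, n_iter=200, thresh=0.1):
    """Toy inpainting: alternate soft-thresholding in the Fourier
    domain with data consistency on the observed pixels (mask==True)."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        c = np.fft.fft2(x)                          # analysis step
        mag = np.abs(c)
        shrink = np.maximum(1.0 - thresh / np.maximum(mag, 1e-12), 0.0)
        c = c * shrink                              # complex soft-threshold
        x = np.real(np.fft.ifft2(c))                # synthesis step
        x = np.where(mask, y, x)                    # keep observed data exact
    return x
```

Data consistency is enforced last, so observed pixels are reproduced exactly; only the missing pixels are synthesised from the thresholded transform.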

4.
We study the information transmission through two different models of Gaussian memory channels: an additive Gaussian channel and a lossy bosonic memory channel. We then show that entangled inputs can enhance the transmission rate in such channels. Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 152, No. 2, pp. 390–404, August, 2007.

5.
A classic result asserts that many geometric structures can be constructed optimally by successively inserting their constituent parts in random order. These randomized incremental constructions (RICs) still work with imperfect randomness: the dynamic operations need only be “locally” random. Much attention has been given recently to inputs generated by Markov sources. These are particularly interesting to study in the framework of RICs, because Markov chains provide highly nonlocal randomness, which incapacitates virtually all known RIC technology. We generalize Mulmuley’s theory of Θ-series and prove that Markov incremental constructions with bounded spectral gap are optimal within polylog factors for trapezoidal maps, segment intersections, and convex hulls in any fixed dimension. The main contribution of this work is threefold: (i) extending the theory of abstract configuration spaces to the Markov setting; (ii) proving Clarkson–Shor-type bounds for this new model; (iii) applying the results to classical geometric problems. We hope that this work will pioneer a new approach to randomized analysis in computational geometry. This work was supported in part by NSF grants CCR-0306283, CCF-0634958.

6.
In a planar periodic Lorentz gas, a point particle (electron) moves freely and collides with fixed round obstacles (ions). If a constant force (induced by an electric field) acts on the particle, the latter will accelerate, and its speed will approach infinity (Chernov and Dolgopyat in J Am Math Soc 22:821–858, 2009; Phys Rev Lett 99, paper 030601, 2007). To keep the kinetic energy bounded one can apply a Gaussian thermostat, which forces the particle’s speed to be constant. Then an electric current sets in and one can prove Ohm’s law and the Einstein relation (Chernov and Dolgopyat in Russian Math Surv 64:73–124, 2009; Chernov et al. Comm Math Phys 154:569–601, 1993; Phys Rev Lett 70:2209–2212, 1993). However, the Gaussian thermostat has been criticized as unrealistic, because it acts all the time, even during the free flights between collisions. We propose a new model, where during the free flights the electron accelerates, but at the collisions with ions its total energy is reset to a fixed level; thus our thermostat is restricted to the surface of the scatterers (the ‘walls’). We rederive all physically interesting facts proven for the Gaussian thermostat in Chernov, Dolgopyat (Russian Math Surv 64:73–124, 2009) and Chernov et al. (Comm Math Phys 154:569–601, 1993; Phys Rev Lett 70:2209–2212, 1993), including Ohm’s law and the Einstein relation. In addition, we investigate the superconductivity phenomenon in the infinite horizon case.

7.
The quickest path problem is related to the classical shortest path problem, but its objective function concerns the transmission time of a given amount of data throughout a path, which involves both cost and capacity. The K-quickest simple paths problem generalises the latter, by looking for a given number K of simple paths in non-decreasing order of transmission time. Two categories of algorithms are known for ranking simple paths according to the transmission time. One is the adaptation of deviation algorithms for ranking shortest simple paths (Pascoal et al. in Comput. Oper. Res. 32(3):509–520, 2005; Rosen et al. in Comput. Oper. Res. 18(6):571–584, 1991), and another is based on ranking shortest simple paths in a sequence of networks with fixed capacity lower bounds (Chen in Inf. Process. Lett. 50:89–92, 1994), and afterwards selecting the K quickest ones. After reviewing the quickest path and the K-quickest simple paths problems we describe a recent algorithm for ranking quickest simple paths (Pascoal et al. in Ann. Oper. Res. 147(1):5–21, 2006). This is a lazy version of Chen’s algorithm, able to interchange the calculation of new simple paths and the output of each k-quickest simple path. Finally, the described algorithm is computationally compared to its former version, as well as to deviation algorithms.
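For a path p carrying sigma units of data, the standard transmission time combines both ingredients named above: the sum of arc lead times plus sigma divided by the bottleneck capacity. A brute-force reference implementation that enumerates all simple paths (the graph encoding is my assumption; this is only useful for checking ranking algorithms on tiny networks, not a substitute for the surveyed algorithms):

```python
def transmission_time(edges, sigma):
    """edges: list of (lead_time, capacity) pairs along a path.
    T(p) = sum of lead times + sigma / bottleneck capacity."""
    return sum(l for l, _ in edges) + sigma / min(c for _, c in edges)

def k_quickest_simple_paths(graph, s, t, sigma, K):
    """Rank all simple s-t paths by transmission time (brute force).

    graph: dict mapping node -> list of (next_node, lead_time, capacity).
    Returns up to K (time, path) pairs in non-decreasing time order.
    """
    found = []
    def dfs(node, path, edges):
        if node == t:
            found.append((transmission_time(edges, sigma), path))
            return
        for nxt, lead, cap in graph.get(node, []):
            if nxt not in path:                      # keep the path simple
                dfs(nxt, path + [nxt], edges + [(lead, cap)])
    dfs(s, [s], [])
    found.sort(key=lambda pair: pair[0])
    return found[:K]
```

Note that the ranking depends on sigma: a high-capacity detour with larger lead time overtakes a low-capacity direct arc once the data amount is large enough.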

8.
We consider an operator (variable hysteron) used to describe a nonstationary hysteresis nonlinearity (whose characteristics vary under the action of external forces) according to the Krasnosel’skii–Pokrovskii scheme. Sufficient conditions are established under which the operator is defined for inputs from the class of functions H_1[t_0, T] satisfying the Lipschitz condition on the segment [t_0, T]. Translated from Ukrains’kyi Matematychnyi Zhurnal, Vol. 60, No. 3, pp. 295–309, March, 2008.

9.
Two-filter smoothing is a principled approach for performing optimal smoothing in non-linear non-Gaussian state–space models where the smoothing distributions are computed through the combination of ‘forward’ and ‘backward’ time filters. The ‘forward’ filter is the standard Bayesian filter but the ‘backward’ filter, generally referred to as the backward information filter, is not a probability measure on the space of the hidden Markov process. In cases where the backward information filter can be computed in closed form, this technical point is not important. However, for general state–space models where there is no closed form expression, this prohibits the use of flexible numerical techniques such as Sequential Monte Carlo (SMC) to approximate the two-filter smoothing formula. We propose here a generalised two-filter smoothing formula which only requires approximating probability distributions and applies to any state–space model, removing the need to make restrictive assumptions used in previous approaches to this problem. SMC algorithms are developed to implement this generalised recursion and we illustrate their performance on various problems.
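The 'forward' filter referenced above is the standard SMC (bootstrap) filter. A minimal sketch for a scalar linear-Gaussian state-space model follows; the model, parameters, and function name are illustrative, and the backward information filter and the combination step, which are the paper's contribution, are omitted:

```python
import numpy as np

def bootstrap_filter(ys, n_particles=1000, phi=0.9, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the filtering means E[x_t | y_1:t]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, q / np.sqrt(1.0 - phi**2), n_particles)  # stationary init
    means = []
    for y in ys:
        x = phi * x + rng.normal(0.0, q, n_particles)   # propagate through the prior
        logw = -0.5 * ((y - x) / r) ** 2                # Gaussian likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return np.array(means)
```

Resampling after every step keeps the particle set from degenerating; the filtering means track the hidden state more closely than the raw observations do.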

10.
This work deals with the problems of Continuous Extremal Fuzzy Dynamic System (CEFDS) optimization and briefly discusses the results developed by Sirbiladze (Int J Gen Syst 34(2):107–138, 2005a; 34(2):139–167, 2005b; 34(2):169–198, 2005c; 35(4):435–459, 2006a; 35(5):529–554, 2006b; 36(1):19–58, 2007; New Math Nat Comput 4(1):41–60, 2008a; Mat Zametki 83(3):439–460, 2008b). The basic properties of extended extremal fuzzy measures and Sugeno-type integrals are considered and several variants of their representation are given. Values of extended extremal conditional fuzzy measures are defined as levels of expert knowledge reflections of CEFDS states in the fuzzy time intervals. The notions of extremal fuzzy time moments and intervals are introduced, and the monotone algebraic structures they form, the most important part of the fuzzy instrument for modeling extremal fuzzy dynamic systems, are discussed. A new approach to modeling CEFDS is developed. Applying the results of Sirbiladze (Int J Gen Syst 34(2):107–138, 2005a; 34(2):139–167, 2005b), fuzzy processes with possibilistic uncertainty, whose source is expert knowledge reflections on the states of CEFDS in extremal fuzzy time intervals, are constructed (Sirbiladze in Int J Gen Syst 34(2):169–198, 2005c). The dynamics of CEFDS is described. Questions of the ergodicity of CEFDS are considered. A fuzzy-integral representation of a continuous extremal fuzzy process is given. Based on the fuzzy-integral model, a method and an algorithm are developed for identifying the transition operator of CEFDS. The CEFDS transition operator is restored by means of expert data with possibilistic uncertainty, whose source is expert knowledge reflections on the states of CEFDS in the extremal fuzzy time intervals. The regularization condition for obtaining a quasi-optimal estimator of the transition operator is established by theorems, and the corresponding calculating algorithm is provided.
The results obtained are illustrated by an example in the case of a finite set of CEFDS states.

11.
Does speed provide a ‘model for’ rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the ‘model for’ rate of change? This project investigates the transfer of understandings of rate gained in a motion context to a non-motion context. The participants were 27 students, aged 14–15, at an Australian secondary school. The instructional sequence, utilising JMW, provided rich learning experiences of rate of change in the context of a moving elevator, a context that connects to students’ prior knowledge. The data, taken from pre- and post-tests and student interviews, revealed a wide variation in students’ understanding of rate of change. The variation was mapped onto a hypothetical learning trajectory, interpreted in terms of the ‘emergent models’ theory (Gravemeijer, Math Think Learn 1(2):155–177, 1999), and illustrated by specific examples from the data. The results demonstrate that most students were able to use the ‘model of’ rate of change developed in a vertical motion context as a ‘model for’ rate of change in a horizontal motion context. A smaller majority of students were able to use their, often incomplete, ‘model of’ rate of change as a ‘model for’ reasoning about rate of change in a non-motion context.

12.
We analyze a family of solutions to a multidimensional scalar conservation law, with flux depending explicitly on time and space, regularized with vanishing diffusion and dispersion terms. Under a condition on the balance between the diffusion and dispersion parameters, we prove that the family of solutions is precompact in L^1_loc. Our proof is based on the methodology developed in Sazhenkov (Sibirsk Math Zh 47(2):431–454, 2006), which is in turn based on Panov’s extension (Panov in Mat Sb 185(2):87–106, 1994) of Tartar’s H-measures (Tartar in Proc R Soc Edinb Sect A 115(3–4):193–230, 1990), or Gérard’s micro-local defect measures (Gérard in Commun Partial Differ Equ 16(11):1761–1794, 1991). This is a new approach to diffusion–dispersion limit problems. Previous results were restricted to scalar conservation laws with flux depending only on the state variable.

13.
Personal Excursions: Investigating the Dynamics of Student Engagement
We investigate the dynamics of student engagement as it is manifest in self-directed, self-motivated, relatively long-term, computer-based scientific image processing activities. The raw data for the study are video records of 19 students, grades 7 to 11, who participated in intensive 6-week, extension summer courses. From this raw data we select episodes in which students appear to be highly engaged with the subject matter. We then attend to the fine-grained texture of students’ actions, identifying a core set of phenomena that cut across engagement episodes. Analyzed as a whole, these phenomena suggest that when working in self-directed, self-motivated mode, students pursue proposed activities but sporadically and spontaneously venture into self-initiated activities. Students’ recurring self-initiated activities – which we call personal excursions – are detours from proposed activities, but which align to a greater or lesser extent with the goals of such activities. Because of the deeply personal nature of excursions, they often result in students collecting resources that feed back into both subsequent excursions and framed activities. Having developed an understanding of students’ patterns of self-directed, self-motivated engagement, we then identify four factors that seem to bear most strongly on such patterns: (1) students’ competence (broadly construed); (2) features of the software-based activities, and how such features allowed students to express their competence; (3) the time allotted for students to pursue proposed activities, as well as self-initiated ones; and (4) the flexibility of the computational environment within which the activities were implemented.

14.
Generalized Nash games with shared constraints represent an extension of Nash games in which strategy sets are coupled across players through a shared or common constraint. The equilibrium conditions of such a game can be compactly stated as a quasi-variational inequality (QVI), an extension of the variational inequality (VI). In (Eur. J. Oper. Res. 54(1):81–94, 1991), Harker proved that for any QVI, under certain conditions, a solution to an appropriately defined VI solves the QVI. This is a particularly important result, given that VIs are generally far more tractable than QVIs. However, Facchinei et al. (Oper. Res. Lett. 35(2):159–164, 2007) suggested that the hypotheses of this result are difficult to satisfy in practice for QVIs arising from generalized Nash games with shared constraints. We investigate the applicability of Harker’s result for these games with the aim of formally establishing its reach. Specifically, we show that if Harker’s result is applied in a natural manner, its hypotheses are impossible to satisfy in most settings, thereby supporting the observations of Facchinei et al. But we also show that an indirect application of the result extends the realm of applicability of Harker’s result to all shared-constraint games. In particular, this avenue allows us to recover as a special case of Harker’s result, a result provided by Facchinei et al. (Oper. Res. Lett. 35(2):159–164, 2007), in which it is shown that a suitably defined VI provides a solution to the QVI of a shared-constraint game.
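For reference, the two problem classes differ only in whether the constraint set moves with the candidate point (standard textbook definitions, not the paper's specific notation):

```latex
\mathrm{VI}(K,F):\quad \text{find } x^* \in K \ \text{ such that } \ F(x^*)^{\top}(y - x^*) \ge 0 \quad \forall\, y \in K,
\qquad
\mathrm{QVI}(K(\cdot),F):\quad \text{find } x^* \in K(x^*) \ \text{ such that } \ F(x^*)^{\top}(y - x^*) \ge 0 \quad \forall\, y \in K(x^*).
```

Harker-type results identify a fixed set K for which a VI solution automatically lands in K(x*) and therefore solves the QVI.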

15.
In this article, using Fontaine's (φ,Γ)-module theory, we give a new proof of Coleman's explicit reciprocity law, which generalizes those of Artin–Hasse, Iwasawa, and Wiles, by giving a complete formula for the norm residue symbol on Lubin–Tate groups. The method used here is different from the classical ones and can be used to study the Iwasawa theory of crystalline representations.

16.
In the 18th century, Gottfried Ploucquet developed a new syllogistic logic in which the categorical forms are interpreted as set-theoretical identities, or diversities, between the full extension, or a non-empty part of the extension, of the subject and the predicate. With the help of two operators ‘O’ (for “Omne”) and ‘Q’ (for “Quoddam”), the universal and particular affirmatives (UA, PA) are represented as ‘O(S) – Q(P)’ and ‘Q(S) – Q(P)’, respectively, while the universal and particular negatives (UN, PN) take the form ‘O(S) > O(P)’ and ‘Q(S) > O(P)’, where ‘>’ denotes set-theoretical disjointness. The use of the symmetric operators ‘–’ and ‘>’ gave rise to a new conception of conversion, which in turn led Ploucquet to consider also the unorthodox propositions O(S) – O(P), Q(S) – O(P), O(S) > Q(P), and Q(S) > Q(P). Although Ploucquet’s critique of the traditional theory of opposition turns out to be mistaken, his theory of the “Quantification of the Predicate” is basically sound and involves an interesting “Double Square of Opposition”. My thanks are due to Hanno von Wulfen for helpful discussions and for transforming the Word document into a LaTeX file.
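Under one natural set-theoretic reading (my gloss, not Ploucquet's own formalism): O selects the full extension, Q a non-empty part, ‘–’ asserts identity of the selected parts, and ‘>’ their disjointness. The four orthodox forms can then be checked over finite extensions; the encoding of each form is an assumption:

```python
def UA(S, P):  # O(S) - Q(P): all of S coincides with a non-empty part of P
    return bool(S) and S <= P

def PA(S, P):  # Q(S) - Q(P): some part of S coincides with some part of P
    return bool(S & P)

def UN(S, P):  # O(S) > O(P): the full extensions are disjoint
    return not (S & P)

def PN(S, P):  # Q(S) > O(P): some part of S lies outside all of P
    return bool(S - P)
```

On this reading, UA entails PA, and for non-empty S, PN is the contradictory of UA, which is consistent with the revised opposition relations that motivate the “Double Square”.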

17.
In this paper we consider the general cone programming problem, and propose primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss first-order methods suitable for solving these reformulations, namely, Nesterov’s optimal method (Nesterov in Doklady AN SSSR 269:543–547, 1983; Math Program 103:127–152, 2005), Nesterov’s smooth approximation scheme (Nesterov in Math Program 103:127–152, 2005), and Nemirovski’s prox-method (Nemirovski in SIAM J Opt 15:229–251, 2005), and propose a variant of Nesterov’s optimal method which has outperformed the latter one in our computational experiments. We also derive iteration-complexity bounds for these first-order methods applied to the proposed primal-dual reformulations of the cone programming problem. The performance of these methods is then compared using a set of randomly generated linear programming and semidefinite programming instances. We also compare the approach based on the variant of Nesterov’s optimal method with the low-rank method proposed by Burer and Monteiro (Math Program Ser B 95:329–357, 2003; Math Program 103:427–444, 2005) for solving a set of randomly generated SDP instances.
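For reference, Nesterov's optimal method in its simplest form uses a constant step 1/L for an L-smooth convex objective and a momentum extrapolation; this is a textbook sketch, not the variant proposed in the paper:

```python
import numpy as np

def nesterov_optimal(grad, x0, L, n_iter=500):
    """Nesterov's accelerated gradient method: the objective gap
    decays as O(L / k^2) for convex f with L-Lipschitz gradient."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / L                          # gradient step at extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x
```

The guarantee f(x_k) - f* <= 2L||x0 - x*||^2 / (k+1)^2 is an order of magnitude better than plain gradient descent's O(1/k), which is why these methods dominate first-order cone-programming solvers.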

18.
We construct new compactly supported wavelets and investigate their asymptotic regularity; they appear to be more regular than the Daubechies ones. These new wavelets are associated to Bernstein–Lorentz polynomials (Daubechies–Volkmer’s wavelets) and Hermite–Féjer polynomials (Lemarié–Matzinger’s wavelets) and this property enables us to derive some improved regularity ratio bounds. This revised version was published online in June 2006 with corrections to the Cover Date.

19.
For a set of measured points, we describe a linear-programming model that enables us to find concentric circumscribed and inscribed circles whose annulus encompasses all the points and whose width tends to be minimum in a Chebyshev min–max sense. We illustrate the process using the data of Rorres and Romano (SIAM Rev. 39:745–754, 1997), taken from an ancient Greek stadium in Corinth. The stadium’s racecourse had an unusual circular-arc starting line, and measurements along this arc form the basic data sets of Rorres and Romano. Here we are interested in finding the center and radius of the circle that defined the starting-line arc. We contrast our results with those found in Rorres and Romano.
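The reason a linear program suffices is a change of variables to squared radii: with u = R^2 - |c|^2 and l = r^2 - |c|^2, the containment constraints r^2 <= |p_i - c|^2 <= R^2 become linear, l <= |p_i|^2 - 2 p_i . c <= u, and minimising u - l minimises R^2 - r^2. This is a proxy for the annulus width rather than the paper's exact Chebyshev objective; the use of scipy.optimize.linprog is my assumption:

```python
import numpy as np
from scipy.optimize import linprog

def min_annulus(points):
    """Concentric circles enclosing/inscribing `points` (n x 2),
    minimising R^2 - r^2 via a single linear program.
    Variables z = (cx, cy, u, l) with u = R^2-|c|^2, l = r^2-|c|^2."""
    P = np.asarray(points, dtype=float)
    s = np.sum(P**2, axis=1)                        # |p_i|^2
    n = len(P)
    c_obj = np.array([0.0, 0.0, 1.0, -1.0])         # minimise u - l
    A_upper = np.hstack([-2.0 * P, -np.ones((n, 1)), np.zeros((n, 1))])  # s_i - 2 p_i.c <= u
    A_lower = np.hstack([2.0 * P, np.zeros((n, 1)), np.ones((n, 1))])    # l <= s_i - 2 p_i.c
    res = linprog(c_obj,
                  A_ub=np.vstack([A_upper, A_lower]),
                  b_ub=np.concatenate([-s, s]),
                  bounds=[(None, None)] * 4)        # all four variables are free
    cx, cy, u, l = res.x
    c2 = cx * cx + cy * cy
    return (cx, cy), np.sqrt(max(l + c2, 0.0)), np.sqrt(u + c2)
```

For points lying exactly on one circle the optimal value is zero and the recovered inner and outer radii coincide with the true radius.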

20.
The result provided in this paper helps complete a unified picture of the scaling behavior in heavy-tailed stochastic models for transmission of packet traffic on high-speed communication links. Popular models include infinite source Poisson models, models based on aggregated renewal sequences, and models built from aggregated on–off sources. The versions of these models with finite variance transmission rate share the following pattern: if the sources connect at a fast rate over time the cumulative statistical fluctuations are fractional Brownian motion, if the connection rate is slow the traffic fluctuations are described by a stable Lévy motion, while the limiting fluctuations for the intermediate scaling regime are given by fractional Poisson motion. In this paper, we prove an invariance principle for the normalized cumulative workload of a network with m on–off sources and time rescaled by a factor a. When both the number of sources m and the time scale a tend to infinity with a relative growth given by the so-called ‘intermediate connection rate’ condition, the limit process is the fractional Poisson motion. The proof is based on a coupling between the on–off model and the renewal type model.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号