Similar Documents
20 similar documents found (search time: 31 ms).
1.
While the general notion of ‘metaphor’ may offer a thoughtful analysis of the nature of mathematical thinking, this paper suggests that it is even more important to take into account the particular mental structures available to the individual that have been built from experience that the individual has ‘met-before.’ The notion of ‘met-before’ offers not only a principle to analyse the changing meanings in mathematics and the difficulties faced by the learner—which we illustrate by the problematic case of the minus sign—it can also be used to analyse the met-befores of mathematicians, mathematics educators and those who develop theories of learning to reveal implicit assumptions that support our thinking in some ways and act as impediments in others.

2.
In this paper we report on 10–14 year old children's strategies while solving two versions of ratio and proportion tasks: one ‘with models’ thought to facilitate proportional reasoning and one ‘without’. Rasch methodology was used to develop ‘with’ and ‘without models’ test versions which were given to a linked sample involving 673 children. We examine the pupils’ additive errors, their effect on ratio reasoning and how contingent this is on ‘model’ presentation. First, we provide a single scale on which pupils, item difficulty and additive errors can be located. We then provide a new scale constructed from the error-prone items, which we name the ‘tendency for additive strategy’. The measurement data are supported by qualitative data showing that the presence of ‘models’ can sometimes affect children's strategies, both positively and negatively, but rarely makes a significant measurement difference in this untutored sample.

3.
In a previous paper (Ann. Inst. Fourier 52(2) (2002) 379-417) the second-named author developed a new approach to the abelian p-adic Stark conjecture at s=1 and stated some related conjectures. The aim of the present paper is to develop and apply techniques to numerically investigate one of these—the ‘Weak Refined Combined Conjecture’—in 15 cases.

4.
The efficient and accurate calculation of sensitivities of the price of financial derivatives with respect to perturbations of the parameters in the underlying model, the so-called ‘Greeks’, remains a great practical challenge in the derivative industry. This is true regardless of whether methods for partial differential equations or stochastic differential equations (Monte Carlo techniques) are being used. The computation of the ‘Greeks’ is essential to risk management and to the hedging of financial derivatives and typically requires substantially more computing time than simply pricing the derivatives. Any numerical algorithm (Monte Carlo algorithm) for stochastic differential equations produces a time-discretization error and a statistical error in the process of pricing financial derivatives and calculating the associated ‘Greeks’. In this article we show how a posteriori error estimates and adaptive methods for stochastic differential equations can be used to control both these errors in the context of pricing and hedging of financial derivatives. In particular, we derive expansions, with leading order terms which are computable in a posteriori form, of the time-discretization errors for the price and the associated ‘Greeks’. These expansions allow the user first to control the time-discretization errors adaptively when calculating the price, sensitivities and hedging parameters with respect to a large number of parameters, and then to ensure that the total errors are, with prescribed probability, within tolerance.
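For orientation, a minimal sketch (not from the paper) of the two error sources described above: an Euler-Maruyama time discretization of geometric Brownian motion, a Monte Carlo price for a European call with its statistical error, and a crude finite-difference ‘Greek’ (Delta). All parameter values and helper names are illustrative assumptions.

```python
import numpy as np

def euler_price(s0, strike, r, sigma, T, n_steps, n_paths, rng):
    """Euler-Maruyama simulation of dS = r*S*dt + sigma*S*dW; discounted call payoff."""
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        s += r * s * dt + sigma * s * dw            # time-discretization error enters here
    payoff = np.exp(-r * T) * np.maximum(s - strike, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)   # price, statistical error

rng = np.random.default_rng(0)
price, stat_err = euler_price(100.0, 100.0, 0.05, 0.2, 1.0, n_steps=64, n_paths=50_000, rng=rng)

# Central-difference Delta; reusing the same seed gives common random numbers,
# which reduces the variance of the difference.
h = 0.5
up, _ = euler_price(100.0 + h, 100.0, 0.05, 0.2, 1.0, 64, 50_000, np.random.default_rng(0))
dn, _ = euler_price(100.0 - h, 100.0, 0.05, 0.2, 1.0, 64, 50_000, np.random.default_rng(0))
delta = (up - dn) / (2 * h)
print(f"price {price:.3f} +/- {stat_err:.3f}, Delta ~ {delta:.3f}")
```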

5.
The notion of nonatomicity for set functions plays a key role in classical measure theory and its applications. For classical measures taking values in finite dimensional Banach spaces, it guarantees the connectedness of range. Even just replacing σ-additivity with finite additivity for measures requires some stronger nonatomicity property for the same conclusion to hold. In the present paper, we deal with non-additive functions – called ‘s-outer’ and ‘quasi-triangular’ – defined on rings and taking values in Hausdorff topological spaces. No algebraic structure is required on their target spaces. In this context, we make use of a notion of strong nonatomicity involving just the behavior of functions on ultrafilters of their underlying Boolean domains. This notion is proved to be equivalent to that proposed in earlier contributions concerning Lyapunov-type theorems in additive and non-additive frameworks. Thus, in particular, our analysis allows us to generalize, improve and unify several known results on this topic.

6.
One way to aggregate data is to combine several sets with the same structure, but no overlap in their ranges of values — for instance, aggregating prices before and after a period of hyperinflation. Looking at nonparametric tests on three ‘items’, we compute the relation of the decomposition of the underlying voting profiles of such aggregated sets to those for the original data. We focus on the Basic components, including examples of ‘pure Basic’ sets, computed using Sage. This yields several interesting results about consistency of nonparametric tests with respect to this kind of aggregation, and suggests types of non-uniformity which are not detected by standard tests.

7.
The paper discusses the tension which occurred between the notions of set (with measure) and (trial-) sequence (or—to a certain degree—between nondenumerable and denumerable sets) when used in the foundations of probability theory around 1920. The main mathematical point was the logical need for measures in order to describe general nondiscrete distributions, which had been tentatively introduced before (1919) based on von Mises’s notion of the “Kollektiv.” In the background there was a tension between the standpoints of pure mathematics and “real world probability” (in the words of J.L. Doob) at the time. The discussion and publication in English translation (in Appendix) of two critical letters of November 1919 by the “pure” mathematician Felix Hausdorff to the engineer and applied mathematician Richard von Mises compose about one third of the paper. The article also investigates von Mises’s ill-conceived effort to adopt measures and his misinterpretation of an influential book of Constantin Carathéodory. A short and sketchy look at the subsequent development of the standpoints of the pure and the applied mathematician—here represented by Hausdorff and von Mises—in the probability theory of the 1920s and 1930s concludes the paper.

8.
We show that every nonempty compact and convex space M of probability Radon measures either contains a measure which has ‘small’ local character in M or else M contains a measure of ‘large’ Maharam type. Such a dichotomy is related to several results on Radon measures on compact spaces and to some properties of Banach spaces of continuous functions.

9.
This paper reports one aspect of a larger study which looked at the strategies used by a selection of grade 6 students to solve six non-routine mathematical problems. The data revealed that the students exhibited many of the behaviours identified in the literature as being associated with novice and expert problem solvers. However, the categories of ‘novice’ and ‘expert’ were not fully adequate to describe the range of behaviours observed and instead three categories that were characteristic of behaviours associated with ‘naïve’, ‘routine’ and ‘sophisticated’ approaches to solving problems were identified. Furthermore, examination of individual cases revealed that each student's problem solving performance was consistent across a range of problems, indicating a particular orientation towards naïve, routine or sophisticated problem solving behaviours. This paper describes common problem solving behaviours and details three individual cases involving naïve, routine and sophisticated problem solvers.

10.
We explore simultaneous modeling of several covariance matrices across groups using the spectral (eigenvalue) decomposition and modified Cholesky decomposition. We introduce several models for covariance matrices under different assumptions about the mean structure. We consider ‘dependence’ matrices, which tend to have many parameters, as constant across groups and/or parsimoniously modeled via a regression formulation. For ‘variances’, we consider them both unrestricted across groups and more parsimoniously modeled via log-linear models. In all these models, we explore the propriety of the posterior when improper priors are used on the mean and ‘variance’ parameters (and in some cases, on components of the ‘dependence’ matrices). The models examined include several common Bayesian regression models, whose propriety has not been previously explored, as special cases. We propose a simple approach to weaken the assumption of constant dependence matrices in an automated fashion and describe how to compute Bayes factors to test the hypothesis of constant ‘dependence’ across groups. The models are applied to data from two longitudinal clinical studies.
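As a hedged illustration (ours, not the authors' code) of the decomposition named above: the modified Cholesky factorisation writes a covariance matrix as sigma = T D T', with T unit lower triangular (the ‘dependence’ parameters) and D diagonal (the ‘variances’), and can be read off from the ordinary Cholesky factor. The sample matrix is an assumption.

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (T, D) with sigma = T @ D @ T.T, T unit lower triangular, D diagonal."""
    c = np.linalg.cholesky(sigma)   # ordinary Cholesky: sigma = C @ C.T
    d = np.diag(c) ** 2             # innovation 'variances'
    t = c / np.diag(c)              # scale each column so the diagonal of T is 1
    return t, np.diag(d)

sigma = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])
t, d = modified_cholesky(sigma)
assert np.allclose(t @ d @ t.T, sigma)   # verifies the factorisation
print(np.round(t, 3))                    # 'dependence' parameters
print(np.round(np.diag(d), 3))           # 'variance' parameters
```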

11.
The power of mathematics is discussed as a way of expressing reasoning, aesthetics and insight in symbolic non-verbal communication. The human culture of discovering mathematical ways of thinking, in the enterprise of exploring the nature and evolution of our world through hypotheses, theories and experimental affirmation of the scientific notion of algorithmic and non-algorithmic ‘computation’, is examined and commented upon.

12.
Our purpose in this paper is to report on an observational study to show how students think about the links between the graph of a derived function and the original function from which it was formed. The participants were asked to perform the following task: they were presented with four graphs that represented derived functions and from these graphs they were asked to construct the original functions from which they were formed. The students then had to walk these graphs as if they were displacement-time graphs. Their discussions were recorded on audio tape, their walks were captured using data-logging equipment, and these records were analysed together with their pencil-and-paper notes. From these three sources of data, we were able to construct a picture of the students’ graphical understanding of connections in calculus. The results confirm that at the start of the activity the students demonstrate an algebraic symbolic view of calculus and find it difficult to make connections between the graph of a derived function and the function itself. By being able to ‘walk’ an associated displacement-time graph, we propose that the students are extending their understanding of calculus concepts from symbolic representation to a graphical representation and to what we term a ‘physical feel’.
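A small numerical illustration (ours, not the authors') of the link the students worked with: given sampled values of a derived function f', an antiderivative can be rebuilt by cumulative trapezoidal integration. Here f'(x) = cos(x) is an assumed test case, so the reconstruction should track sin(x) up to the chosen starting value.

```python
import numpy as np

# Sampled 'derived function' f'(x) = cos(x); the original function is sin(x) + C.
x = np.linspace(0.0, 2 * np.pi, 201)
f_prime = np.cos(x)

# Cumulative trapezoidal rule rebuilds the original function up to the constant f(0).
dx = np.diff(x)
increments = 0.5 * (f_prime[:-1] + f_prime[1:]) * dx
f_reconstructed = np.concatenate([[0.0], np.cumsum(increments)])   # choose f(0) = 0

print(np.max(np.abs(f_reconstructed - np.sin(x))))   # small discretization error
```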

13.
We study the structure of Banach spaces X determined by the coincidence of nuclear maps on X with certain operator ideals involving absolutely summing maps and their relatives. With the emphasis mainly on Hilbert-space valued mappings, it is shown that the class of Hilbert-Schmidt spaces arises as a ‘solution set’ of the equation involving nuclear maps and the ideal of operators factoring through Hilbert-Schmidt maps. Among other results of this type, it is also shown that Hilbert spaces can be characterised by the equality of this latter ideal with the ideal of 2-summing maps. We shall also make use of this occasion to give an alternative proof of a famous theorem of Grothendieck using some well-known results from vector measure theory.

14.
We present an approach for the transition from convex risk measures in a certain discrete time setting to their counterparts in continuous time. The aim of this paper is to show that a large class of convex risk measures in continuous time can be obtained as limits of discrete time-consistent convex risk measures. The discrete time risk measures are constructed from properly rescaled (‘tilted’) one-period convex risk measures, using a d-dimensional random walk converging to a Brownian motion. Under suitable conditions (covering many standard one-period risk measures) we obtain convergence of the discrete risk measures to the solution of a BSDE, defining a convex risk measure in continuous time, whose driver can then be viewed as the continuous time analogue of the discrete ‘driver’ characterizing the one-period risk. We derive the limiting drivers for the semi-deviation risk measure, Value at Risk, Average Value at Risk, and the Gini risk measure in closed form.
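For context, a hedged one-period example (not from the paper): an empirical estimate of Average Value at Risk (expected shortfall) of a loss sample at level alpha, one of the one-period risk measures whose limiting drivers are derived above. The loss distribution and level are illustrative assumptions.

```python
import numpy as np

def average_value_at_risk(losses, alpha=0.95):
    """Empirical AVaR: mean of the worst (1 - alpha) fraction of the losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil((1 - alpha) * losses.size))   # number of tail observations
    return losses[-k:].mean()

rng = np.random.default_rng(1)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)   # illustrative loss sample
var_95 = np.quantile(losses, 0.95)
avar_95 = average_value_at_risk(losses, 0.95)
print(f"VaR(95%) ~ {var_95:.3f}, AVaR(95%) ~ {avar_95:.3f}")   # AVaR >= VaR
```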

15.
This paper deals with a batch service queue with multiple vacations. The system consists of a single server and a waiting room of finite capacity. Arrival of customers follows a Markovian arrival process (MAP). The server is unavailable for occasional intervals of time called vacations, and when it is available, customers are served in batches of maximum size ‘b’ with a minimum threshold value ‘a’. We obtain the queue length distributions at various epochs along with some key performance measures. Finally, some numerical results have been presented.

16.
By obtaining several new results on Cook-style two-sorted bounded arithmetic, this paper measures the strengths of the axiom of extensionality and of other weak fundamental set-theoretic axioms in the absence of the axiom of infinity, following the author’s previous work [K. Sato, The strength of extensionality I — weak weak set theories with infinity, Annals of Pure and Applied Logic 157 (2009) 234-268] which measures them in the presence. These investigations provide a uniform framework in which three different kinds of reverse mathematics (Friedman-Simpson’s “orthodox” reverse mathematics, Cook’s bounded reverse mathematics, and large cardinal theory) can be reformulated within one language so that we can compare them more directly.

17.
For locally finite unions of sets with positive reach in R^d, generalized unit normal bundles are introduced in support of a certain set-additive index function. Given an appropriate orientation to the normal bundle, signed curvature measures may be defined by means of associated locally rectifiable currents (with index function as multiplicity) and specially chosen differential forms. In the case of regular sets this is shown to be equivalent to well-known classical concepts via earlier results. The present approach leads to unified methods in proving integral-geometric relations. Some of them are stated in this paper.

18.
The validity of students’ reasoning is central to problem solving. However, equally important are the operating premises from which students reason about problems. These premises are based on students’ interpretations of the problem information. This paper describes various premises that 11- and 12-year-old students derived from the information in a particular problem, and the way in which these premises formed part of their reasoning during a lesson. The teacher’s identification of differences in students’ premises for reasoning in this problem shifted the emphasis in a class discussion from the reconciliation of the various problem solutions and a focus on a sole correct reasoning path, to the identification of the students’ premises and the appropriateness of their various reasoning paths. Problem information that can be interpreted ambiguously creates rich mathematical opportunities because students are required to articulate their assumptions and thereby identify the origin of their reasoning, and to evaluate the assumptions and reasoning of their peers.

19.
The purpose of this paper is to present evidence supporting the conjecture that graphs and gestures may function in different capacities depending on whether they are used to develop an algorithm or whether they extend or apply a previously developed algorithm in a new context. I illustrate these ideas using an example from undergraduate differential equations in which students move through a sequence of Realistic Mathematics Education (RME)-inspired instructional materials to create the Euler method algorithm for approximating solutions to differential equations. The function of graphs and gestures in the creation and subsequent use of the Euler method algorithm is explored. If students’ primary goal was algorithmatizing ‘from scratch’, they used imagery of graphing and gesturing as a tool for reasoning. However, if students’ primary goal was to make predictions in a new context, they used their previously developed Euler algorithm to reason and used graphs and gestures to clarify their ideas.
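For reference, a minimal sketch (ours, with an assumed test equation) of the Euler method algorithm the students constructed: stepping y' = f(t, y) forward with y_{n+1} = y_n + h f(t_n, y_n).

```python
def euler_method(f, t0, y0, h, n_steps):
    """Approximate the solution of y' = f(t, y) with fixed step size h."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # one Euler step: follow the tangent line
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Illustrative test problem y' = y, y(0) = 1, whose exact solution is e^t.
ts, ys = euler_method(lambda t, y: y, 0.0, 1.0, h=0.1, n_steps=10)
print(ys[-1])   # ~ 2.594, compared with e ~ 2.718; the error shrinks as h decreases
```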

20.
This study examines the supply chain demand collaboration between a manufacturer and a retailer. We study how the timing of collaboration facilitates the production decision of the manufacturer when the information exchanged in the collaboration is asymmetric. We investigate two collaboration mechanisms: ‘Too Little’ and ‘Too Late’, depending on the timing of information sharing between the manufacturer and the retailer. Our research results indicate that early collaboration as in the ‘Too Little’ mechanism leads to a stable production schedule, which decreases the need for production adjustment when production cost information becomes available; whereas late collaboration as in the ‘Too Late’ mechanism enhances the flexibility of production adjustment when demand information warrants it. In addition, the asymmetric demand information confounds production decisions all the time; the manufacturer has to provide proper incentives to ensure truthful information sharing in collaboration. Information asymmetry might also reduce the difference in production decisions between the ‘Too Little’ and ‘Too Late’ collaboration mechanisms. Numerical analysis is further conducted to demonstrate the performance implications of the collaboration mechanisms on the supply chain.
