Similar documents
Found 20 similar documents (search time: 15 ms)
1.
On Reform Movement and the Limits of Mathematical Discourse
In this article, I take a critical look at some popular ideas about teaching mathematics, which are forcefully promoted worldwide by the reform movement. The issue in focus is the nature and limits of mathematical discourse. Whereas knowing mathematics is conceptualized as an ability to participate in this discourse, special attention is given to meta-discursive rules that regulate participation and are therefore a central, if only implicit, object of learning. Following the theoretical analysis illustrated with empirical examples, the question arises of how far one may go in renegotiating and relaxing the rules of mathematical discourse before seriously affecting its learnability.

2.

3.
An important aspect of participation in a new academic discourse pertains to the metadiscursive rules which govern that discourse. Researchers have documented the viability of using primary sources in undergraduate mathematics education for scaffolding students’ recognition of those rules. Our research explores the related question of whether the use of primary sources can support students’ learning of metadiscursive rules in a way that goes beyond mere recognition. We present a case study of one student’s “figuring out” of metadiscursive rules in a university Analysis course as a result of her experience with a Primary Source Project, illustrate evidence for three dimensions of “figuring out” (adoption, acceptance, awareness) that emerged from that case study, and discuss the implications of our findings for classroom instruction and future research.

4.
The ANALYZE rulebase for supporting LP analysis
This paper describes how to design rules to support linear programming analysis in three functional categories: postoptimal sensitivity, debugging, and model management. The ANALYZE system is used to illustrate the behavior of the rules with a variety of examples. Postoptimal sensitivity analysis answers not only the paradigm “What if …?” question, but also the more frequently asked “Why …?” question. The latter is static, asking why some solution value is what it is, or why it is not something else. The former is dynamic, asking how the solution changes if some element is changed. Debugging can mean a variety of things; here the focus is on diagnosing an infeasible instance. Model management includes documentation, verification, and validation. Rules are illustrated to provide support in each of these related functions, including some that require reasoning about the linear program's structure. Another model management function is to conduct a periodic review, with one of the goals being to simplify the model, if possible. The last illustration is how to test new rule files, where there is a variety of ways to communicate a result to someone who is not expert in linear programming.
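The ANALYZE rulebase itself is not reproduced here; as a minimal sketch of the dynamic “What if …?” question, the toy code below (the LP instance and function names are invented for illustration) solves a 2-variable linear program by brute-force vertex enumeration, relaxes one right-hand side, and reads off the marginal value of that change:

```python
from itertools import combinations

def solve_lp_2d(c, A, b, eps=1e-9):
    """Maximize c.x subject to A x <= b for a 2-variable LP
    (nonnegativity must be encoded as rows of A, b).
    Brute-force: intersect every pair of constraint boundaries,
    keep feasible vertices, and take the best objective value."""
    best_val, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue  # parallel boundaries: no vertex
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(ai[0] * x[0] + ai[1] * x[1] <= bi + eps
               for ai, bi in zip(A, b)):
            val = c[0] * x[0] + c[1] * x[1]
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Toy LP: maximize 3x + 2y  s.t.  x + y <= 4,  x <= 2,  x, y >= 0.
A = [(1, 1), (1, 0), (-1, 0), (0, -1)]
b = [4, 2, 0, 0]
base, _ = solve_lp_2d((3, 2), A, b)       # optimum 10 at (2, 2)

# "What if" the capacity constraint x + y <= 4 is relaxed to 5?
b_relaxed = [5, 2, 0, 0]
relaxed, _ = solve_lp_2d((3, 2), A, b_relaxed)
shadow = relaxed - base                   # marginal value of one unit of capacity
print(base, relaxed, shadow)
```

A production system answers the static “Why …?” question differently, by reporting which constraints bind at the optimum; here both binding constraints (x + y = 4 and x = 2) are visible directly from the optimal vertex.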

5.
The study investigates the effects that the possibly unrealistic assumption of accurately predicted operation times may have on the relative performance of various job shop dispatching rules, compared with the assumption that such times cannot be accurately predetermined. The experimental design includes factors dealing with the amount of accuracy in the estimated operation times, job dispatching heuristic rules, and shop loading categories. The stochastic operation times represent two different degrees of inaccuracy: one level reflects an estimated ‘normal’ amount of inaccuracy associated with an experienced predictor (shop foreman), while the other level doubles the variance associated with the ‘normal’ predictor's error. These two stochastic levels are compared to a deterministic level where predetermined operation times are absolutely accurate. Five different heuristic rules are evaluated under six different shop loading levels. General conclusions indicate that an assumption of accurately predetermined actual operation times is not likely to weaken the analysis and impact of the research studies that have been performed using such an assumption. However, a specific conclusion indicates that, for at least one shop loading category, researchers should be careful when extending conclusions based on one operation-time assumption to situations involving the other.
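The core comparison can be sketched in a few lines (the job data and the +/- errors are invented): sequence jobs with a dispatching rule using the foreman's *estimated* times, then evaluate the resulting schedule with the *actual* times, and compare against sequencing with perfect information:

```python
def flow_times(sequence, actual):
    """Completion (flow) times when jobs run in the given order on one machine."""
    t, out = 0, {}
    for j in sequence:
        t += actual[j]
        out[j] = t
    return out

# Estimated vs. actual operation times for five jobs (numbers invented).
estimated = {0: 5, 1: 1, 2: 3, 3: 4, 4: 2}
actual    = {0: 6, 1: 1, 2: 2, 3: 5, 4: 2}   # the estimates are somewhat off

# SPT (shortest processing time) dispatching, sequenced by *estimated* times.
spt_by_estimate = sorted(estimated, key=estimated.get)
# The same rule under the deterministic assumption of perfectly known times.
spt_by_actual = sorted(actual, key=actual.get)

mean = lambda d: sum(d.values()) / len(d)
m_est = mean(flow_times(spt_by_estimate, actual))
m_act = mean(flow_times(spt_by_actual, actual))
print(m_est, m_act)
```

In this particular instance the estimation error does not change the job ranking enough to matter, which mirrors the study's general conclusion that the deterministic assumption is usually harmless; a larger error variance can, of course, reorder the sequence and degrade performance.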

6.
7.
We consider the problem of scheduling orders for multiple different product types in an environment with m dedicated machines in parallel. The objective is to minimize the total weighted completion time. Each product type is produced by one and only one of the m dedicated machines; that is, each machine is dedicated to a specific product type. Each order has a weight and may also have a release date. Each order asks for certain amounts of various different product types. The different products for an order can be produced concurrently. Preemptions are not allowed. Even when all orders are available at time 0, the problem has been shown to be strongly NP-hard for any fixed number (≥ 2) of machines. This paper focuses on the design and analysis of efficient heuristics for the case without release dates. Occasionally, however, we extend our results to the case with release dates. The heuristics considered include some that have already been proposed in the literature as well as several new ones. They include various static and dynamic priority rules as well as two more sophisticated LP-based algorithms. We analyze the performance bounds of the priority rules and of the algorithms, and also present an in-depth comparative analysis of the various rules and algorithms. The conclusions from this empirical analysis provide insights into the trade-offs with regard to solution quality, speed, and memory space.
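The LP-based algorithms are beyond a short sketch, but the classical WSPT priority rule for total weighted completion time is easy to illustrate. The sketch below (instance data invented) applies Smith's rule on a single machine, a deliberate simplification of the paper's dedicated-parallel-machines setting:

```python
def total_weighted_completion(sequence, p, w):
    """Sum of w_j * C_j when jobs run in the given order on one machine."""
    t, total = 0, 0
    for j in sequence:
        t += p[j]                # completion time C_j of job j
        total += w[j] * t
    return total

# Invented instance: processing times p and weights w for four jobs.
p = {0: 4, 1: 2, 2: 6, 3: 3}
w = {0: 2, 1: 4, 2: 1, 3: 3}

# WSPT priority rule: sort by p_j / w_j ascending (Smith's rule,
# optimal for one machine without release dates).
wspt = sorted(p, key=lambda j: p[j] / w[j])
fifo = [0, 1, 2, 3]              # arbitrary baseline order

wspt_cost = total_weighted_completion(wspt, p, w)
fifo_cost = total_weighted_completion(fifo, p, w)
print(wspt_cost, fifo_cost)
```

With release dates or multiple dedicated machines WSPT is no longer optimal, which is exactly why the paper turns to performance bounds and LP-based heuristics.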

8.
9.
By employing fundamental results from “geometric” functional analysis and the theory of multifunctions we formulate a general model for (nonsequential) statistical decision theory, which extends Wald's classical model. From central results that hold for the model we derive a general theorem on the existence of admissible nonrandomized Bayes rules. The generality of our model makes it also possible to apply these results to some stochastic optimization problems. In an appendix we deal with the question of sufficiency reduction.

10.
Let us suppose that a certain committee is going to decide, using some fixed voting rules, either to accept or to reject a proposal that affects your interests. From your perception about each voter’s position, you can make an a priori estimation of the probability of the proposal being accepted. Wishing to increase this probability of acceptance before the votes are cast, assume further that you are able to convince (at least) one voter to improve his/her perception in favor of the proposal. The question is: which voters should be persuaded in order to get the highest possible increase in the probability of acceptance? In other words, which are the optimal persuadable voters? To answer this question a measure of “circumstantial power” is considered in this paper, which is useful to identify optimal persuadable voters. Three preorderings in the set of voters, based on the voting rules, are defined and they are used for finding optimal persuadable voters, even in the case that only a qualitative ranking of each voter’s inclination for the proposal has been made.
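The paper's preorderings are not implemented here, but the underlying question can be brute-forced for a small weighted voting game (weights, quota, probabilities, and the persuasion bump are all invented): compute the acceptance probability over all vote profiles, then test which single voter's improved perception raises it most:

```python
from itertools import product

def acceptance_probability(weights, quota, probs):
    """P(proposal accepted) when voter i independently votes 'yes' with
    probability probs[i] and the proposal passes iff the total 'yes'
    weight reaches the quota."""
    total = 0.0
    for votes in product([0, 1], repeat=len(weights)):
        if sum(w for w, v in zip(weights, votes) if v) >= quota:
            pr = 1.0
            for p_i, v in zip(probs, votes):
                pr *= p_i if v else 1 - p_i
            total += pr
    return total

# Invented example: three voters with weights 4, 3, 2 and quota 4.
weights, quota = [4, 3, 2], 4
probs = [0.5, 0.5, 0.5]
base = acceptance_probability(weights, quota, probs)

# Which single voter is most worth persuading (bump their prob by 0.2)?
gains = []
for k in range(len(probs)):
    bumped = probs.copy()
    bumped[k] = min(1.0, bumped[k] + 0.2)
    gains.append(acceptance_probability(weights, quota, bumped) - base)
best = max(range(len(gains)), key=gains.__getitem__)
print(base, gains, best)
```

Because the acceptance probability is multilinear in the individual probabilities, each gain equals the bump times the probability that that voter is pivotal, which is why the heaviest voter is the optimal persuadable voter in this instance.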

11.
Gauss-type quadrature rules with one or two prescribed nodes are well known and are commonly referred to as Gauss–Radau and Gauss–Lobatto quadrature rules, respectively. Efficient algorithms are available for their computation. Szegő quadrature rules are analogs of Gauss quadrature rules for the integration of periodic functions; they integrate exactly trigonometric polynomials of as high degree as possible. Szegő quadrature rules have a free parameter, which can be used to prescribe one node. This paper discusses an analog of Gauss–Lobatto rules, i.e., Szegő quadrature rules with two prescribed nodes. We refer to these rules as Szegő–Lobatto rules. Their properties as well as numerical methods for their computation are discussed.
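Computing Szegő–Lobatto rules on the unit circle is beyond a toy sketch, but the real-line trade-off they generalize is easy to show. The sketch below (pure Python, with the classical hardcoded 3-point nodes and weights) contrasts a Gauss rule with a Lobatto rule whose two endpoint nodes are prescribed: prescribing nodes costs degrees of exactness.

```python
import math

def gauss3(f):
    """Three-point Gauss-Legendre rule on [-1, 1] (exact for degree <= 5)."""
    x = math.sqrt(3.0 / 5.0)
    return (5 * f(-x) + 8 * f(0.0) + 5 * f(x)) / 9.0

def lobatto3(f):
    """Three-point Gauss-Lobatto rule on [-1, 1] with both endpoints
    prescribed as nodes (exact only for degree <= 3)."""
    return (f(-1.0) + 4 * f(0.0) + f(1.0)) / 3.0

quartic = lambda t: t ** 4       # true integral over [-1, 1] is 2/5
print(gauss3(quartic), lobatto3(quartic))
```

Both rules integrate quadratics exactly, but on the quartic the Gauss rule returns 2/5 while the Lobatto rule does not, illustrating the two lost degrees of exactness paid for the two prescribed nodes.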

12.
We are considering the problem of multi-criteria classification. In this problem, a set of “if … then …” decision rules is used as a preference model to classify objects evaluated by a set of criteria and regular attributes. Given a sample of classification examples, called the learning data set, the rules are induced from dominance-based rough approximations of preference-ordered decision classes, according to the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA). The main question to be answered in this paper is how to classify an object using decision rules in situations where it is covered by (i) no rule, (ii) exactly one rule, (iii) several rules. The proposed classification scheme can be applied both to the learning data set (to restore the classification known from the examples) and to the testing data set (to predict the classification of new objects). A hypothetical example from the area of telecommunications is used to illustrate the proposed classification method and to compare it with some previous proposals.
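VC-DRSA rule induction is not reproduced here; the sketch below only illustrates the three-case classification scheme (i)-(iii) with invented rules and records, resolving conflicts among several covering rules by total confidence:

```python
def classify(obj, rules, default="unknown"):
    """Classify obj with 'if condition then class' rules.
    Covered by no rule -> default; exactly one rule -> its class;
    several rules -> the class backed by the highest total confidence."""
    covering = [(cls, conf) for cond, cls, conf in rules if cond(obj)]
    if not covering:
        return default
    if len(covering) == 1:
        return covering[0][0]
    votes = {}
    for cls, conf in covering:
        votes[cls] = votes.get(cls, 0.0) + conf
    return max(votes, key=votes.get)

# Invented rules over a toy telecom-like record {"usage": ..., "debt": ...}.
rules = [
    (lambda o: o["usage"] >= 100, "good", 0.9),
    (lambda o: o["debt"] > 50,   "bad",  0.7),
    (lambda o: o["debt"] > 200,  "bad",  0.95),
]
print(classify({"usage": 150, "debt": 10}, rules))   # one covering rule
print(classify({"usage": 150, "debt": 300}, rules))  # several covering rules
print(classify({"usage": 10,  "debt": 10}, rules))   # no covering rule
```

The confidence-sum tiebreak is only one of several sensible aggregation schemes; the paper's proposal is more refined, taking the dominance structure of the classes into account.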

13.
The percolation phase transition and the mechanism of the emergence of the giant component through the critical scaling window for random graph models have been topics of great interest in many different communities, ranging from statistical physics, combinatorics, and computer science to social networks and probability theory. The last few years have witnessed an explosion of models which couple random aggregation rules, specifying how one adds edges to existing configurations, with choice, wherein one selects from a “limited” set of edges at random to use in the configuration. While an intense study of such models has ensued, understanding the actual emergence of the giant component and the merging dynamics in the critical scaling window has remained impenetrable to rigorous analysis. In this work we take an important step in the analysis of such models by studying one of the standard examples of such processes, namely the Bohman‐Frieze model, and provide the first results on the asymptotic dynamics, through the critical scaling window, that lead to the emergence of the giant component for such models. We identify the scaling window and show that through this window the component sizes, properly rescaled, converge to the standard multiplicative coalescent. Proofs hinge on a careful analysis of certain infinite-type branching processes with types taking values in the space of càdlàg paths, and on stochastic analytic techniques to estimate susceptibility functions of the components all the way through the scaling window where these functions explode. Previous approaches for analyzing random graphs at criticality have relied largely on classical breadth-first search techniques that exploit asymptotic connections with Brownian excursions. For dynamic random graph models evolving via general Markovian rules, such approaches fail, and we develop a quite different set of tools that can potentially be used for the study of critical dynamics for all bounded-size rules.
© 2013 Wiley Periodicals, Inc. Random Struct. Alg., 46, 55–116, 2015
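The critical-window analysis is far beyond a code example, but the Bohman–Frieze rule itself is simple to simulate. The sketch below (function names and parameters invented) grows a graph by the bounded-size rule, accepting the first of two candidate edges only when it joins two isolated vertices, and tracks the largest component with a union-find structure:

```python
import random

class DSU:
    """Union-find with path compression, tracking component sizes at roots."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def bohman_frieze(n, steps, seed=0):
    """Bohman-Frieze rule: at each step draw two candidate edges uniformly;
    accept the first iff it joins two isolated vertices, else the second.
    Returns the largest component size after the given number of steps."""
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(steps):
        u, v = rng.randrange(n), rng.randrange(n)
        e2 = (rng.randrange(n), rng.randrange(n))
        if u != v and dsu.size[dsu.find(u)] == 1 and dsu.size[dsu.find(v)] == 1:
            dsu.union(u, v)
        else:
            dsu.union(*e2)
    return max(dsu.size[dsu.find(i)] for i in range(n))

n = 10_000
print(bohman_frieze(n, n))   # largest component after n edge-steps
```

Sweeping the number of steps past the (delayed) critical point and plotting the largest component against n is a standard way to see the phase transition empirically; the paper's contribution is the rigorous description of what happens inside the scaling window, which no simulation can substitute for.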

14.
Can local rules impose a global order? If yes, when and how? This is a philosophical question that could be asked in many cases. How does local interaction of atoms create crystals (or quasicrystals)? How does one living cell manage to develop into a pine cone whose seeds form spirals (and the number of spirals …)

15.
Penot, Jean-Paul. Positivity, 2002, 6(4): 413-432
It is well known that elementary subdifferentials which are the simplest and the most precise among known subdifferentials do not enjoy good calculus rules, whereas more elaborated subdifferentials do have calculus rules but are not as precise and, in particular, do not preserve order. This paper explores an order preservation property for the subdifferentials of the second category. This property concerns the case in which a distance function is involved. It emphasizes the crucial role played by such functions in nonsmooth analysis. The result enables one to get in a simple, unified way the passage from the properties of subdifferentials for Lipschitzian functions to the same properties for the case of lower semicontinuous functions.

16.
Summary  Differential calculus is proving inefficient to solve problems of mechanics dealing with ropes, unless one satisfies oneself with rough approximations. Ordinary or Euclidean geometry is also proving useless: the author shows that in order to solve such problems as described before, one has to use a special branch of geometry. Distances are no longer measured along rigid straight rules, but along the catenary curves in question. The fundamental characteristics of this hypergeometry, with some of their more striking results, are described in the following study.
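Measuring distance along a catenary has a convenient closed form. For y = a·cosh(x/a) one has y′ = sinh(x/a), so √(1 + y′²) = cosh(x/a) and the arc length between abscissae x1 and x2 is s = a·(sinh(x2/a) − sinh(x1/a)). The sketch below (parameter values invented) checks the formula against direct numerical integration:

```python
import math

def catenary_arclength(a, x1, x2):
    """Distance measured along the catenary y = a*cosh(x/a) between
    abscissae x1 and x2: s = a*(sinh(x2/a) - sinh(x1/a))."""
    return a * (math.sinh(x2 / a) - math.sinh(x1 / a))

def catenary_arclength_numeric(a, x1, x2, n=100_000):
    """Same distance by integrating sqrt(1 + y'^2) = cosh(x/a) numerically
    with the midpoint rule."""
    h = (x2 - x1) / n
    s = 0.0
    for i in range(n):
        x = x1 + (i + 0.5) * h
        s += math.cosh(x / a) * h
    return s

print(catenary_arclength(2.0, 0.0, 3.0),
      catenary_arclength_numeric(2.0, 0.0, 3.0))
```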

17.
Mordukhovich, Boris S.; Shao, Yongheng; Zhu, Qiji. Positivity, 2000, 4(1): 1-39
This paper is concerned with generalized differentiation of set-valued and nonsmooth mappings between Banach spaces. We study the so-called viscosity coderivatives of multifunctions and their limiting behavior under certain geometric assumptions on the spaces in question related to the existence of smooth bump functions of any kind. The main results include various calculus rules for viscosity coderivatives and their topological limits. They are important in applications to variational analysis and optimization.

18.
Concern has been expressed in the United Kingdom regarding the proportion of beds intended for acute care that are occupied by patients who do not require acute care. One solution to this problem that is being investigated by some hospitals is the establishment of an intermediate care facility devoted to non-acute care. A key question faced by hospital planners is how many beds such an intermediate care facility should have. We report on a study consisting of a bed use survey and a stochastic analysis exercise that was conducted at the Whittington Hospital NHS Trust in London in order to address this question. Rather than focus on whether patients in acute beds required acute care throughout their stay in hospital, the study concentrated on identifying periods in patients’ stays when they would have been transferred to an intermediate care facility if one had been available.
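The study's survey-based analysis is not reproduced here; as a back-of-envelope sketch of the sizing question, assume (purely for illustration, with invented figures) that the number of intermediate-care patients present on a given day is Poisson distributed, and size the facility so demand exceeds capacity on at most 5% of days:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def beds_needed(lam, shortfall_prob=0.05):
    """Smallest number of beds c with P(demand > c) <= shortfall_prob,
    for Poisson-distributed daily demand with mean lam."""
    c = 0
    while poisson_cdf(c, lam) < 1.0 - shortfall_prob:
        c += 1
    return c

# Invented figure: a mean of 10 intermediate-care patients present per day.
print(beds_needed(10.0))
```

A real exercise like the one reported would estimate the demand distribution empirically from the identified transfer-eligible periods rather than assume a Poisson form, but the capacity calculation has the same shape.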

19.
20.
Rule acquisition is one of the most important objectives in the analysis of decision systems. Because of the interference of errors, a real-world decision system is generally inconsistent, which can lead to the consequence that some rules extracted from the system are not certain but possible rules. In practice, however, the possible rules with high confidence are also useful in making decisions. With this in mind, we study how to extract from an interval-valued decision system the compact decision rules whose confidences are not less than a pre-specified threshold. Specifically, by properly defining a binary relation on an interval-valued information system, the concept of interval-valued granular rules is presented for the interval-valued decision system. Then, an index is introduced to measure the confidence of an interval-valued granular rule, and an implication relationship is defined between the interval-valued granular rules whose confidences are not less than the threshold. Based on the implication relationship, a confidence-preserved attribute reduction approach is proposed to extract compact decision rules, and a combinatorial optimization-based algorithm is developed to compute all the reducts of an interval-valued decision system. Finally, some numerical experiments are conducted to evaluate the performance of the reduction approach and the gain of using the possible rules in making decisions.
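The interval-valued machinery and the reduct computation are not reproduced here; the sketch below only illustrates the underlying confidence measure for a possible rule (the table, condition, and threshold are all invented): confidence is the fraction of objects covered by the rule's condition that also satisfy its decision, and rules below the threshold are discarded.

```python
def confidence(table, condition, decision):
    """Confidence of the rule 'if condition then decision' in a decision
    table: |objects matching condition and decision| / |objects matching
    condition| (0.0 if the condition covers nothing)."""
    covered = [row for row in table if condition(row)]
    if not covered:
        return 0.0
    return sum(decision(row) for row in covered) / len(covered)

# Invented interval-valued table: attribute "a" holds a [lo, hi] interval.
table = [
    {"a": (1, 3), "d": "accept"},
    {"a": (2, 5), "d": "accept"},
    {"a": (2, 6), "d": "reject"},
    {"a": (7, 9), "d": "reject"},
]
# Condition: the attribute interval overlaps [2, 4]; decision: 'accept'.
overlaps = lambda row: row["a"][0] <= 4 and row["a"][1] >= 2
rule_conf = confidence(table, overlaps, lambda row: row["d"] == "accept")
threshold = 0.6
print(rule_conf, rule_conf >= threshold)   # a possible rule worth keeping
```

Here the rule is not certain (one covered object contradicts it) but its confidence of 2/3 clears the threshold, which is exactly the kind of possible rule the paper argues is still useful for decision making.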


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号