Similar Articles
20 similar articles found (search time: 46 ms)
1.
This article suggests that logic puzzles, such as the well-known Tower of Hanoi puzzle, can be used to introduce computer science concepts to mathematics students of all ages. Mathematics teachers introduce their students to computer science concepts that are enacted spontaneously and subconsciously throughout the solution to the Tower of Hanoi puzzle. These concepts include, but are not limited to, conditionals, iteration, and recursion. Lessons, such as the one proposed in this article, are easily implementable in mathematics classrooms and extracurricular programmes as they are good candidates for ‘drop in’ lessons that do not need to fit into any particular place in the typical curriculum sequence. As an example for readers, the author describes how she used the puzzle in her own Number Sense and Logic course during the federally funded Upward Bound Math/Science summer programme for college-intending low-income high school students. The article explains each computer science term with real-life and mathematical examples, applies each term to the Tower of Hanoi puzzle solution, and describes how students connected the terms to their own solutions of the puzzle. It is timely and important to expose mathematics students to computer science concepts: given the rate at which technology is advancing and our increasing dependence on it in daily life, it has become more important than ever for children to be exposed to computer science. Yet many children are not given adequate opportunity to learn computer science in school. In the United States, for example, most students finish high school without ever taking a computing course. Mathematics lessons, such as the one described in this article, can help to make computer science more accessible to students who may otherwise have had little opportunity to encounter these increasingly important concepts.
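As a concrete illustration of the three concepts the article names, here is a minimal Python sketch (not from the article itself): the base-case test is a conditional, the function calls itself recursively, and the printout iterates over the generated move list.

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from source to target peg."""
    if n == 0:                                   # conditional: base case stops the recursion
        return
    hanoi(n - 1, source, spare, target, moves)   # recursive call: clear the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # recursive call: restack on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
for i, (src, dst) in enumerate(moves, 1):        # iteration over the solution
    print(f"move {i}: {src} -> {dst}")
print(f"total moves: {len(moves)} (= 2**3 - 1)")
```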

2.
This study is an empirical investigation of 11th graders at a German high school (Gymnasium). The students worked for a total of 24 hours in a computer lab, and we investigated their use of quadratic functions with ‘Derive’, and trigonometric functions with ‘Mathplus’ (or ‘Theorist’ for Macintosh). We were particularly interested in the working styles of students while they solved problems, and looked for changes in these styles compared to traditional paper-and-pencil activities. While students worked on the computer, their activities (such as inputs from the keyboard, menu choices or mouse movements) were saved by a special program running in the ‘background’. We are interested in the possibilities of developing a research method based on these ‘computer protocols’. The study should be seen as an exploratory study for developing hypotheses for further empirical investigations.

3.
This paper reports on the formulation of a secondary school timetabling problem as a non-linear goal program, where students freely choose their courses of study from a complete list of subjects rather than from the usual restricted sets of subjects. The problem as formulated is far too large to solve by traditional optimisation methods, so it is broken down into several stages and solved by heuristics, giving timetables at least as good as those built by manual methods. Timetable construction using a desktop computer is reduced from weeks to hours, giving schools the opportunity to construct timetables closer to the time when student choices and teaching staff are more settled.
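The paper's goal program and staged heuristics are not reproduced here, but the following hypothetical Python sketch illustrates the flavour of one heuristic step: courses chosen freely by students are placed one at a time into the period that adds the fewest student clashes. All names and data are invented for illustration.

```python
import itertools

# Toy data (hypothetical): students' freely chosen courses.
choices = {
    'ana':   {'math', 'physics', 'art'},
    'ben':   {'math', 'history'},
    'carla': {'physics', 'history', 'art'},
}
courses = set(itertools.chain.from_iterable(choices.values()))
slots = [0, 1, 2]  # available timetable periods

def clashes(assign):
    """Count student clashes: two chosen courses in the same slot."""
    return sum(
        1
        for chosen in choices.values()
        for a, b in itertools.combinations(sorted(chosen), 2)
        if a in assign and b in assign and assign[a] == assign[b]
    )

# Greedy heuristic: place each course in the slot that adds fewest clashes.
assign = {}
for course in sorted(courses):
    assign[course] = min(slots, key=lambda s: clashes({**assign, course: s}))

print(assign, 'clashes:', clashes(assign))
```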

4.
Andrea Hoffkamp, ZDM, 2011, 43(3): 359–372
Calculus and functional thinking are closely related. Functional thinking includes thinking in variations and functional dependencies, with a strong emphasis on the aspect of change. Calculus is a climax within school mathematics, and education in functional thinking can be seen as propaedeutic to it. Many authors report that functions at school are mainly treated in a static way, by regarding them as pointwise relations. This often leads to the underrepresentation of the aspect of change at school. Moreover, calculus at school is mainly procedure-oriented, and structural understanding is lacking. In this work, two specially designed interactive activities for the teaching and learning of concepts of calculus based on dynamic geometry software are presented. They accentuate the aspect of change and the object aspect of functions using a double-stage visualization. Moreover, the activities allow the discovery and exploration of some concepts of calculus in a qualitative-structural way without knowing or using curve-sketching routines. The activities were used in a qualitative study with 10th grade students aged 15–16 in secondary schools in Berlin, Germany. Some pairs of students were videotaped while working with the activities. After transcribing, the interactions of the students were interpreted and analyzed, focusing on the use of the computer. The results show how the students mobilized their knowledge about functions while working on the given tasks and used the activities to formulate important concepts of calculus in a qualitative way. Some important epistemological obstacles could also be detected.

5.
Dispersion in the output data of a system can be analyzed either as noise fluctuations about a deterministic model or as that noise with added fluctuations due to randomness in the model itself. The latter interpretation finds applications in the identification of inherently random systems, which provide rational models for systems such as biological and economic ones. It is shown that the computational procedure is closely related to traditional least-squares analysis. Both linear and nonlinear models are considered. Results of computer simulations are presented for some simple cases.
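A minimal simulation sketch of the idea, assuming a simple linear model y = (a + δ)x + ε: output dispersion contains both measurement noise ε and model randomness δ, and a least-squares-style computation can separate the two variance components. The model and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inherently random system: y = (a + delta) * x + eps, where delta is
# randomness in the model itself and eps is ordinary output noise.
n, a, sd_delta, sd_eps = 2000, 2.0, 0.5, 0.3
x = rng.uniform(0.5, 3.0, n)
y = (a + rng.normal(0, sd_delta, n)) * x + rng.normal(0, sd_eps, n)

# Ordinary least squares (through the origin) recovers the mean parameter a.
a_hat = np.sum(x * y) / np.sum(x * x)

# Residual variance is sigma_eps^2 + x^2 * sigma_delta^2, so regressing
# squared residuals on x^2 separates the two dispersion sources.
r2 = (y - a_hat * x) ** 2
A = np.column_stack([np.ones(n), x ** 2])
var_eps_hat, var_delta_hat = np.linalg.lstsq(A, r2, rcond=None)[0]
var_eps_hat, var_delta_hat = max(var_eps_hat, 0.0), max(var_delta_hat, 0.0)

print(f"a_hat={a_hat:.3f}  sd_eps={np.sqrt(var_eps_hat):.3f}  "
      f"sd_delta={np.sqrt(var_delta_hat):.3f}")
```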

6.
This paper analyses the value added (VA) of a sample of Portuguese schools using two methodologies: data envelopment analysis (DEA) and the methodology presently used by the UK Department for Children, Schools and Families (DCSF). The VA estimates obtained by the two methods are substantially different. This reflects their different focus: DEA emphasizes best-observed performance, whereas the DCSF method reflects average performance. The main advantage of the DCSF methodology is its simplicity, although it confounds pupil effects with school effects in the estimation of school VA. In contrast, the DEA methodology can differentiate these effects, but its complexity may prevent its use in a systematic way. This paper shows that the two methods provide complementary information regarding the VA of schools, and that their joint use can improve the understanding of the relative effectiveness of schools regarding the progress that pupils make between educational stages.
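The paper's exact DEA specification is not given here; the sketch below implements the standard input-oriented CCR envelopment program with scipy as an assumed stand-in, with one invented input and output per school.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (hypothetical): one input (spending) and one output (exam score)
# per school; the paper's variables and model orientation may differ.
X = np.array([[20.0], [30.0], [40.0], [25.0]])   # inputs,  shape (n, m)
Y = np.array([[50.0], [80.0], [85.0], [45.0]])   # outputs, shape (n, s)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of unit k:
    min theta  s.t.  sum_j lam_j x_j <= theta x_k,
                     sum_j lam_j y_j >= y_k,  lam >= 0."""
    c = np.concatenate([[1.0], np.zeros(n)])          # minimise theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])      # X lam - theta x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # -Y lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(n):
    print(f"school {k}: DEA efficiency = {ccr_efficiency(k):.3f}")
```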

7.
It has been known for many years that an optimal discrete nonlinear filter may be synthesized for systems whose plant dynamics, sensor characteristics and signal statistics are known by applying Bayes' Rule to sequentially update the conditional probability density function from the latest data. However, it was not until 1969 that a digital computer algorithm implementing the theory for a one-state variable one-step predictor appeared in the literature. This delay and the continuing scarcity of multidimensional nonlinear filters result from the overwhelming computational task, which leads to unrealistic data processing times. For many nonlinear filtering problems, analog and digital computers (a hybrid computation) combine to yield a higher data rate than can be obtained by conventional digital methods. This paper describes an implementation of the theory by means of a hybrid computer algorithm for the optimal nonlinear one-step predictor.

The hybrid computer algorithm presented reduces the overall solution time per prediction because:

1) Many large computations of identical form are executed on the analog computer in parallel.

2) The discrete running variable in the digital algorithm may be replaced with a continuous analog computer variable in one or more dimensions leading to increased computational speed and finer resolution of the exponential transformation.

3) The modern analog computer is well suited to generate functions such as the exponential at high speed with modest equipment.

4) The arithmetic, storage, and control functions performed rapidly by the digital computer are utilized without introducing extensive auxiliary calculations.

To illustrate pertinent aspects of the algorithm developed, the scalar cubed sensor problem previously described by Bucy is treated extensively. The hybrid algorithm is described. Problems associated with partitioning of equations between analog and digital computers, machine representations of variables, setting of initial conditions and floating of grid base are discussed. The effects of analog component bandwidths, digital-to-analog and analog-to-digital conversion times, analog computer mode switching times and digital computer I/O data rates on overall processing time are examined. The effect of limited analog computer dynamic range on accuracy is discussed. Results from a simulation of this optimal predictor using MOBSSL, a continuous system simulation language, are given. Timing estimates are presented and compared against similar estimates for the all digital algorithm.

For example, given a four-state variable optimal 1-step predictor utilizing 7 discrete points in each dimension, the hybrid algorithm can be used to generate predictions accurate to 2 decimal places once every 10 seconds. An analog computer complement of 250 integrators and multipliers and a high-speed 3rd generation digital computer such as the CDC 6600 or IBM 360/85 are required. This compares with a lower bound of about 3 seconds per all-digital prediction, which would require 49 CDC 6600's operating in parallel. Analytical and simulation work quantifying errors in one-state-variable filters is presented. Finally, the use of an interactive graphic system for real-time display and for filter evaluation is described.
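The hybrid analog/digital machinery cannot be reproduced in software, but the underlying recursion (sequential Bayes updates of a conditional density on a grid of discrete points, followed by one-step propagation) can be sketched. The Python sketch below assumes a scalar linear plant with a cubic sensor; all model constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scalar model: x_{k+1} = a x_k + w,  z_k = x_k**3 + v.
a, q, r = 0.9, 0.2, 0.1          # dynamics, process var, measurement var
grid = np.linspace(-3, 3, 301)   # discrete support of the conditional density
dx = grid[1] - grid[0]

def gauss(u, var):
    return np.exp(-0.5 * u**2 / var) / np.sqrt(2 * np.pi * var)

# Transition kernel T[i, j] = p(x_{k+1} = grid[i] | x_k = grid[j]).
T = gauss(grid[:, None] - a * grid[None, :], q)

p = gauss(grid, 1.0)             # prior density p(x_0)
x = 0.5                          # true initial state
for k in range(50):
    p = T @ p * dx               # one-step prediction density
    p /= p.sum() * dx
    x = a * x + rng.normal(0, np.sqrt(q))   # true state advances
    pred_mean = grid @ p * dx               # predictive mean, before seeing z
    z = x**3 + rng.normal(0, np.sqrt(r))    # cubic sensor measurement
    p *= gauss(z - grid**3, r)              # Bayes update with the latest data
    p /= p.sum() * dx

print(f"final one-step prediction {pred_mean:.3f} vs true state {x:.3f}")
```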

8.
9.
Models of environmental processes must often be constructed without the use of extensive data sets. This can occur because the exercise is preliminary (aimed at guiding future data collection) or because requisite data are extremely difficult, expensive, or even impossible to obtain. In such cases traditional, statistically based methods for estimating parameters in the model cannot be applied; in fact, parameter estimation cannot be accomplished in a rigorous way at all. We examine the use of a regionalized sensitivity analysis procedure to select appropriate values for parameters in cases where only sparse, imprecise data are available. The utility of the method is examined in the context of equilibrium and dynamic models for describing water quality and hydrological data in a small catchment in Shenandoah National Park, Virginia. Results demonstrate that (1) models can be “tentatively calibrated” using this procedure; (2) the data most likely to provide a stringent test of the model can be identified; and (3) potential problems with model identifiability can be exposed in a preliminary analysis.
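A minimal sketch of a regionalized sensitivity analysis in this spirit: Monte Carlo parameter samples are classified as behavioural or not against a wide target band (standing in for sparse, imprecise data), and a Kolmogorov–Smirnov distance flags the parameters the data actually constrain. The toy model and thresholds are invented.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Hypothetical toy model of catchment stream chemistry: two uncertain
# parameters; "behaviour" = simulated output within loosely known bounds.
def model(k_weathering, k_uptake):
    return 10.0 * k_weathering - 4.0 * k_uptake  # e.g. an alkalinity index

n = 5000
kw = rng.uniform(0.0, 1.0, n)
ku = rng.uniform(0.0, 1.0, n)
out = model(kw, ku)
behav = (out > 2.0) & (out < 5.0)   # wide target band: sparse, imprecise data

# Parameters whose behavioural/non-behavioural distributions separate most
# are the ones the (imprecise) data actually constrain.
for name, p in [('k_weathering', kw), ('k_uptake', ku)]:
    d = ks_2samp(p[behav], p[~behav]).statistic
    print(f"{name}: KS distance = {d:.3f}")
```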

10.
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis-Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis-Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao-Blackwellization, at no extra cost.
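For the normal-mean example, a minimal sketch under assumed values: the prior is a mixture of a point mass at zero and a normal density, two mutually singular components, and with known variance the posterior is again such a mixture, so the full conditional a Gibbs sampler would use can be drawn from directly. This illustrates the setting only, not the article's general algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Data: y_i ~ N(mu, 1).  Prior on mu: p0 * delta_0 + (1 - p0) * N(0, tau^2),
# a mixture of mutually singular distributions (point mass + Lebesgue density).
y = rng.normal(0.3, 1.0, size=20)
n, ybar = len(y), y.mean()
p0, tau2 = 0.5, 4.0

# Marginal likelihoods of ybar under each component (known-variance case).
m0 = norm.pdf(ybar, 0.0, np.sqrt(1.0 / n))          # mu = 0
m1 = norm.pdf(ybar, 0.0, np.sqrt(tau2 + 1.0 / n))   # mu ~ N(0, tau^2)
w0 = p0 * m0 / (p0 * m0 + (1 - p0) * m1)            # posterior P(mu = 0 | y)

# Direct draws from the posterior mixture (the Gibbs full conditional here).
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * n * ybar
draws = np.where(rng.uniform(size=10000) < w0, 0.0,
                 rng.normal(post_mean, np.sqrt(post_var), 10000))

print(f"P(mu = 0 | y) = {w0:.3f}, "
      f"mean of nonzero draws = {draws[draws != 0].mean():.3f}")
```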

11.
An interval-number multiple attribute decision-making method based on the closeness degree of connection numbers
Interval-number multiple attribute decision making is studied using set pair analysis and the theory of connection mathematics. First, the definition and properties of the closeness degree of connection numbers are given. Then, after transforming the interval-number decision matrix into a connection-number decision matrix, a new interval-number multiple attribute decision-making method based on the closeness degree of connection numbers is proposed, following the basic idea of the traditional technique for order preference by similarity to ideal solution (TOPSIS). The method is simple and intuitive, easy to compute, and requires no ranking of interval numbers. Finally, a numerical example demonstrates its effectiveness and practicality.
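The new method itself operates on connection-number matrices, which are not reproduced here; the sketch below shows only the classical crisp TOPSIS backbone that the method follows, with invented data and weights.

```python
import numpy as np

# Hypothetical decision matrix: rows = alternatives, cols = benefit criteria.
X = np.array([[7.0, 9.0, 8.0],
              [8.0, 7.0, 9.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.4, 0.3, 0.3])            # criterion weights

V = w * X / np.linalg.norm(X, axis=0)    # weighted, vector-normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)            # relative closeness degree

print("closeness:", closeness.round(3))
print("ranking (best first):", np.argsort(-closeness))
```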

12.
This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchically structured data are available. Using robust FDH estimates, we show how to decompose the overall inefficiency of a unit (a pupil) into a unit-specific and a higher-level (school) component. Using a sample of entry and exit attainments of 3017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. Finally, the paper uses the traditional MLM in a best-practice framework so that pupil and school efficiencies can be computed.
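A minimal sketch of the (non-robust) FDH estimator for one input and one output, with invented attainment data: a pupil's input-oriented efficiency is the least entry score among observed pupils achieving at least her exit score, divided by her own entry score.

```python
import numpy as np

# Hypothetical attainment data: x = entry score, y = exit score, one per pupil.
x = np.array([10.0, 12.0, 9.0, 14.0, 13.0])
y = np.array([55.0, 70.0, 50.0, 72.0, 58.0])

def fdh_input_efficiency(k):
    """Input-oriented FDH: among observed units dominating k on output
    (y_j >= y_k), take the least input; efficiency = min x_j / x_k."""
    dominating = y >= y[k]          # free disposability of output
    return x[dominating].min() / x[k]

for k in range(len(x)):
    print(f"pupil {k}: FDH efficiency = {fdh_input_efficiency(k):.3f}")
```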

13.
This study presents the results of an extensive Monte Carlo experiment to compare different methods of efficiency analysis. In addition to traditional parametric–stochastic and nonparametric–deterministic methods, recently developed robust nonparametric–stochastic methods are considered. The experimental design comprises a wide variety of situations with different returns-to-scale regimes, substitution elasticities and outlying observations. As the results show, the new robust nonparametric–stochastic methods should not be used without cross-checking by other methods such as stochastic frontier analysis or data envelopment analysis; these latter methods appear quite robust in the experiments.

14.
Market segmentation using conjoint analysis and mixture regression models
Conjoint analysis is an effective method for capturing differences in consumer preferences, and is therefore widely applied in market segmentation research. Traditional methods, however, have certain shortcomings. This study proposes a mixture regression model in which the residual-distribution assumptions differ across components; the model can be estimated efficiently and its coefficients are comparatively reliable, making it an attractive tool for market segmentation analysis. The method is applied in an empirical analysis of a conjoint-analysis case study on notebook computers.
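The study's model, with component-specific residual-distribution assumptions, is richer than what follows; this is a minimal EM sketch for a two-component mixture of linear regressions on simulated data, showing the mechanics of segment-level coefficient estimation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated conjoint-style data: two latent consumer segments with
# different part-worth (slope) vectors.
n = 400
x = rng.uniform(0, 1, n)
seg = rng.uniform(size=n) < 0.5
y = np.where(seg, 1.0 + 3.0 * x, 4.0 - 2.0 * x) + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.0, 1.0], [3.0, 0.0]])    # initial coefficients, 2 segments
sigma, pi = np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(100):                         # EM iterations
    # E-step: responsibilities from normal residual densities.
    dens = np.stack([
        pi[k] * np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2) / sigma[k]
        for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least squares per segment.
    for k in range(2):
        w = resp[k]
        sw = np.sqrt(w)
        beta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi[k] = w.mean()

print("segment coefficients:\n", np.round(beta, 2), "\nshares:", np.round(pi, 2))
```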

15.
In this work we discuss the solution of an initial value problem of parabolic type. The main objective is to propose an alternative method of solution, one not based on finite difference, finite element or spectral methods, by investigating the application of the Adomian decomposition method to the Fokker–Planck equation and some similar equations. The method can be applied successfully to a large class of problems; it needs less work than traditional methods and considerably reduces the volume of calculation. The decomposition procedure of Adomian is obtained easily, without linearizing the problem, and the solution is found in the form of a convergent series with easily computed components. We are concerned here with the application of the decomposition method to the linear and nonlinear Fokker–Planck equation. To give an overview of the methodology, several examples in one- and two-dimensional cases are presented.
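A minimal sympy sketch of the decomposition recursion on an assumed linear Fokker–Planck-type test problem, u_t = (x u)_x + u_xx with u(x, 0) = x: each component is obtained by integrating the spatial operator applied to the previous one, and the partial sums converge to the exact solution x·e^{2t}.

```python
import sympy as sp

x, t = sp.symbols('x t')

# Assumed linear Fokker-Planck-type test problem:
#   u_t = d/dx(x u) + u_xx,  u(x, 0) = x,  exact solution u = x * exp(2 t).
def L(u):
    return sp.diff(x * u, x) + sp.diff(u, x, 2)

u = sp.Integer(0)
component = x                      # u_0 = initial condition
for n in range(8):                 # Adomian recursion: u_{n+1} = int_0^t L(u_n) dt
    u += component
    component = sp.integrate(L(component), (t, 0, t))

exact = x * sp.exp(2 * t)
print("partial sum:", sp.expand(u))
print("matches Taylor series of exact solution:",
      sp.expand(u - exact.series(t, 0, 8).removeO()) == 0)
```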

16.
Formal analysis and computer recognition of 2D color images is an important branch of modern computer geometry. However, in spite of their long development, the present methods are not quite satisfactory and seem to be much worse than the (unknown) algorithms our brain uses to analyze visual information. Almost all existing algorithms omit color and deal with grayscale transformations only. In many cases, however, color information is important and has to be processed. In this paper a fundamentally new method of encoding and analyzing color digital images is proposed. The main idea of this method is that a full-color digital image is encoded by a special two-dimensional surface in three-dimensional space. The surface is then analyzed by methods of differential geometry rather than traditional gradient-based or Hessian-based methods (like SIFT, GLOH, SURF, the Canny operator, and many other well-known algorithms).
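The article's specific color-to-surface encoding is not described in enough detail here to reproduce; the sketch below only illustrates the general idea of surface-based image analysis, treating a scalar field derived from an RGB image as a height surface and computing its Gaussian curvature, a differential-geometric quantity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic illustration only (not the article's encoding): take a scalar
# field from an RGB image (here, luminance), view it as a height surface
# z = f(x, y), and compute Gaussian curvature by finite differences.
img = rng.random((64, 64, 3))
z = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

zy, zx = np.gradient(z)            # first derivatives (axis 0 = y, axis 1 = x)
zxy, zxx = np.gradient(zx)         # second derivatives of zx
zyy, _ = np.gradient(zy)           # second derivative of zy along y

# Gaussian curvature of the Monge patch z = f(x, y).
K = (zxx * zyy - zxy ** 2) / (1 + zx ** 2 + zy ** 2) ** 2
print("Gaussian curvature range:", K.min().round(3), K.max().round(3))
```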

17.
Dialog-controlled rule systems were introduced as a tool to describe the way in which the Wimdas system for knowledge-based analysis of marketing data manages its dialog with the user. In this paper we discuss how dialog-controlled rule systems can be used to specify a formal language aiding a knowledge engineer in maintaining a system's knowledge base. Although this language is finite, it must be defined generically, being too extensive to be enumerated. In contrast to the well-known traditional methods for defining formal languages — using finite automata, regular expressions or grammars — our method can be applied by a user who need not be an expert in theoretical computer science. Research for this paper was supported by the Deutsche Forschungsgemeinschaft.

18.
This paper studies the Galerkin method for abstract variational problems (coercivity is not required). Using the theory of functional analysis, it is proved that if the Galerkin approximation problems have unique solutions, then the variational problem itself has a unique solution that is approximated arbitrarily well by the Galerkin approximate solutions if and only if the Galerkin approximation scheme possesses a certain stability. This result complements the Lax–Milgram theorem and Céa's theorem, and can be applied to variational problems that need not be coercive.
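For a coercive model problem the mechanics of a Galerkin scheme are easy to show; the following sketch (a standard textbook example, not from the paper) approximates -u'' = f with hat functions, a case where stability and convergence are guaranteed by the Lax–Milgram and Céa theorems that the paper's result extends.

```python
import numpy as np

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0, with f = pi^2 sin(pi x),
# whose exact solution is u = sin(pi x).  Galerkin method with hat functions:
# find u_h in V_h such that a(u_h, v) = (f, v) for all v in V_h.
n = 50                      # number of interior nodes
h = 1.0 / (n + 1)
nodes = np.linspace(h, 1 - h, n)

# Stiffness matrix a(phi_i, phi_j) = int phi_i' phi_j' dx for hat functions.
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector (f, phi_i), approximated by the nodal quadrature h * f(x_i).
b = h * np.pi ** 2 * np.sin(np.pi * nodes)

u_h = np.linalg.solve(A, b)
err = np.max(np.abs(u_h - np.sin(np.pi * nodes)))
print(f"max nodal error with n = {n}: {err:.2e}")   # shrinks as n grows
```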

19.
Research suggests that many schools have a differential effectiveness with pupils of different ability. For example, a school may be more effective in raising the performance of pupils of low rather than higher ability, or vice versa. Identifying any differential effectiveness at a school is important, as it can prompt a review of teaching practices that will benefit ability ranges hitherto disadvantaged and thereby improve the overall effectiveness of the school. The most appropriate data for assessing differential effectiveness would be at pupil level or, at least, at ability-range level; such data are not generally available. This paper develops a data envelopment analysis (DEA) based method that can identify the existence, and indicate the direction, of differential effectiveness at a school using data covering the full range of pupil abilities. The method can also identify role-model schools for a school seeking to alter the bias in its differential effectiveness.

20.
Understanding how malignant brain tumors are formed and evolve has direct consequences on the development of efficient methods for their early detection and treatment. Adequate mathematical models for brain tumor growth and invasion can be helpful in clarifying some aspects of the mechanism responsible for the tumor. These mathematical models are typically implemented in computer models, which can be used for computer experimentation to study how changes in inputs, such as growth and diffusion parameters, affect the evolution of the virtual brain tumor. The computer model considered in this article is defined on a three-dimensional (3D) anatomically accurate digital representation of the human brain, which includes white and gray matter, and on a time interval of hundreds of days to realistically simulate the tumor development. Consequently, this computer model is very computationally intensive and only small-size computer experiments can be conducted, corresponding to a small sample of inputs. This article presents a computationally efficient multidimensional kriging method to predict the evolution of the virtual brain tumor at new inputs, conditioned on the virtual brain tumor data available from the small-size computer experiment. The analysis shows that this prediction can be more accurate than a computationally competing model.
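A minimal numpy sketch of the kriging predictor's core, the conditional (posterior) mean of a Gaussian process given a small design: the simulator, kernel and hyperparameters are invented stand-ins, not the article's multidimensional formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in for an expensive simulator (e.g. tumor volume as a function of
# growth and diffusion parameters); the real model runs for hundreds of days.
def simulator(theta):
    return np.sin(3 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

def sq_exp_kernel(A, B, length=0.3):
    """Squared-exponential covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

# Small-size computer experiment: few design points, as in the article.
X = rng.uniform(0, 1, (12, 2))
y = simulator(X)

Xnew = rng.uniform(0, 1, (5, 2))
K = sq_exp_kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for stability
Ks = sq_exp_kernel(Xnew, X)

# Kriging predictor: conditional mean of the Gaussian process at new inputs.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
y_pred = Ks @ alpha

print("prediction vs simulator:\n",
      np.column_stack([y_pred, simulator(Xnew)]).round(3))
```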

