Similar Documents
Found 20 similar documents (search time: 622 ms)
1.
In the workplace, mathematics and statistics are essential for communication and decision-making. Process workers at lower skill classifications are likely to be confronted with statistical charts and warnings about nonconformity. Mathematics, statistics, and technology education in and for the workplace must take account of the cultural diversity that exists within and between workplaces. The design of generic mathematics, and in some cases statistics, curricula rarely reflects actual workplace practice except at a superficial level. One way of overcoming these problems is for mathematics/statistics educators to work in cooperation with industry, particularly at the local level, in a way that encourages and supports lifelong learning yet remains critical of the uses to which mathematics, statistics, and technology are put. This paper outlines some ways to address the challenge of making mathematics, statistics, and technology education take on real meaning within the context of the workplace.

2.
This article begins with some context setting on new views of statistics and statistical education. These views are reflected, in particular, in the introduction of exploratory data analysis (EDA) into the statistics curriculum. Then, a detailed example of EDA learning activity in the middle school is introduced, which makes use of the power of the spreadsheet to mediate students' construction of meanings for statistical conceptions. Through this example, I endeavor to illustrate how an attempt at serious integration of computers in teaching and learning statistics brings about a cascade of changes in curriculum materials, classroom praxis, and students' ways of learning. A theoretical discussion follows that underpins the impact of technological tools on teaching and learning statistics by emphasizing how the computer lends itself to supporting cognitive and sociocultural processes. Subsequently, I present a sample of educational technologies, which represents the sorts of software that have typically been used in statistics instruction: statistical packages (tools), microworlds, tutorials, resources (including Internet resources), and teachers' metatools. Finally, certain implications and recommendations for the use of computers in the statistical educational milieu are suggested.

3.

4.
The provision of quality learning experiences for teachers is critical to mathematics reform agendas aimed at equitable and culturally responsive teaching. In this paper we use an activity theory framework to explore one teacher's learning journey. Drawing on the teacher's self-report of his journey one year after his participation in an intervention designed to support the introduction of mathematical inquiry practices, we examine the factors that supported expansive learning. In seeking to understand our pedagogical stance within the intervention, we gained new insights into the provision of research-based tools to support learning, the provision of space for individual and collective learning, and the provision of a safe learning environment within the programme, the class, and the wider professional community. These factors are important in understanding the transformational changes associated with ambitious pedagogy.

5.
The role of universities in preparing students to use spreadsheet and other technical software in the financial services workplace has been investigated through surveys of university graduates, university academics, and employers. It is found that graduates are less skilled users of software than employers would like, due at least in part to a lack of structured formal training opportunities in the workplace, and a lack of targeted, coherent learning opportunities at university. The widespread and heavy use of software in the workplace means that there is significant potential for productivity gains if universities and employers address these issues.

6.
The NCTM “Curriculum and Evaluation Standards for School Mathematics” (1989) reflect the current movement to introduce probability and statistics into the precollege curriculum. These standards include topics and principles for instruction in probability and statistics that are included in the Quantitative Literacy Project (QLP) curriculum materials. This paper presents results of a survey that explored the success of the QLP materials in terms of student reactions to instruction in probability and statistics. A survey of 917 students taught by teachers trained in QLP workshops assessed how students regarded the instructional materials in general, how well they liked learning topics in probability and statistics, and how well they believed they had learned the content. Results indicated that students have mostly positive attitudes towards learning statistics. However, fewer students felt it was useful to learn about these topics. An increase in positive attitudes with grade level suggests the topics may be received more favorably and, therefore, may more appropriately be used in higher grades.

7.
Regularization Networks and Support Vector Machines   (total citations: 23; self-citations: 0; citations by others: 23)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular, the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case. This revised version was published online in June 2006 with corrections to the Cover Date.
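The connection the abstract draws between regularization and radial basis functions can be sketched numerically: for the square loss, the regularization network minimiser is a kernel expansion whose coefficients solve a linear system. The sketch below uses plain NumPy; the function names, toy data, and hyperparameters `lam` and `gamma` are illustrative assumptions, not from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the Gaussian (radial basis function) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-3, gamma=1.0):
    # Regularization network: minimise sum_i (y_i - f(x_i))^2 + lam * n * ||f||_K^2.
    # The minimiser is f(x) = sum_i c_i k(x, x_i) with (K + lam * n * I) c = y.
    n = len(X)
    return np.linalg.solve(rbf_kernel(X, X, gamma) + lam * n * np.eye(n), y)

def predict(X_train, c, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(20, 1))            # sparse 1-D data
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(20)
c = fit(X, y)
pred = predict(X, c, X)
print(round(float(np.mean((pred - y) ** 2)), 4))     # small training MSE
```

With `gamma` fixed, the same code is a special case of both formulations the paper reviews, which is the point of the RBF example.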

8.
While technology has become an integral part of introductory statistics courses, the programs typically employed are professional packages designed primarily for data analysis rather than for learning. Findings from several studies suggest that use of such software in the introductory statistics classroom may not be very effective in helping students to build intuitions about the fundamental statistical ideas of sampling distribution and inferential statistics. The paper describes an instructional experiment which explored the capabilities of Fathom, one of several recently-developed packages explicitly designed to enhance learning. Findings from the study indicate that use of Fathom led students to the construction of a fairly coherent mental model of sampling distributions and other key concepts related to statistical inference. The insights gained point to a number of critical ingredients that statistics educators should consider when choosing statistical software. They also provide suggestions about how to approach the particularly challenging topic of statistical inference. This revised version was published online in July 2006 with corrections to the Cover Date.
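The central concept here, a sampling distribution built up by drawing many repeated samples, can be illustrated in a few lines of NumPy. This is only a conceptual sketch of the kind of activity such software supports, not Fathom itself; the population, sample size, and number of replications are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
# A skewed population (exponential with mean 2) to make the point vividly
population = rng.exponential(scale=2.0, size=100_000)

# Draw many samples of size 30 and record each sample mean
sample_means = np.array(
    [rng.choice(population, size=30).mean() for _ in range(2000)]
)

print(round(float(sample_means.mean()), 2))   # centred near the population mean, 2
print(float(sample_means.std()) < float(population.std()))  # means vary far less than raw data
```

The simulated distribution of means is roughly bell-shaped even though the population is skewed, which is the intuition about sampling distributions the cited studies found hard to build with analysis-oriented packages.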

9.
Software may be used in university teaching both to enhance student learning of discipline-content knowledge and skills, and to equip students with capabilities that will be useful in their future careers. Although research has indicated that software may be used as an effective way of engaging students and enhancing learning in certain scenarios, relatively little is known about academic practices with regard to the use of software more generally or about the extent to which this software is subsequently used by graduates in the workplace. This article reports on the results of a survey of academics in quantitative and financial disciplines, which is part of a broader study also encompassing recent graduates and employers. Results indicate that a variety of software packages are in widespread use in university programmes in quantitative and financial disciplines. Most surveyed academics believe that the use of software enhances learning and enables students to solve otherwise intractable problems. A majority also rate spreadsheet skills in particular as very important for the employability of graduates. A better understanding of the use of software in university teaching points the way to how curricula can be revised to enhance learning and prepare graduates for professional work.

10.
As part of a broader research objective concerned with identifying the range of employer-defined skill profiles that characterize workplace performance, this paper examines skill contexts for Application of Number, one of six UK-defined Key Skills similar to the Australian-defined Key Competencies. Following the construction of questionnaires grounded in the Analytic Hierarchy Process, applications of the instrument in both the UK and Australia produced a ratio scale of priorities within the Key Skills area. This enabled a specification of the relative balance between classical competencies, e.g. facility with pen-and-paper calculations, and emerging competencies demanded by the effective use of ICT. Relevance to workplace learning, including the transition from school to employment, and related aspects of mathematics education are discussed. Among the research outcomes is that spreadsheets are assuming a pre-eminent position, and that this is an overriding priority for each defined activity and at each job level.

11.
Estimation of probability density functions (PDFs) is a fundamental problem in statistics. This paper proposes an ensemble learning approach to density estimation using Gaussian mixture models (GMMs). Ensemble learning is closely related to model averaging: while the standard model selection method determines the single most suitable GMM, the ensemble approach combines a subset of GMMs in order to improve the precision and stability of the estimated probability density function. The ensemble GMM is investigated theoretically, and numerical experiments were conducted to demonstrate its benefits. These evaluations show promising results for classification and for the approximation of non-Gaussian PDFs.
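A minimal sketch of the model-averaging idea, using scikit-learn's `GaussianMixture` as the base estimator: fit several mixtures with different component counts and average their densities uniformly. The component counts and the uniform weighting are illustrative assumptions; the paper's actual ensemble construction may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Bimodal 1-D sample drawn from two Gaussian components
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 1.0, 300)])[:, None]

# Instead of selecting one "best" GMM, fit several and keep them all
models = [GaussianMixture(n_components=k, random_state=0).fit(data) for k in (1, 2, 3, 4)]

# Evaluate each model's density on a grid and average uniformly
grid = np.linspace(-6.0, 6.0, 601)[:, None]
densities = np.stack([np.exp(m.score_samples(grid)) for m in models])
ensemble_pdf = densities.mean(axis=0)

# Sanity check: the averaged density should still integrate to ~1 (trapezoidal rule)
dx = grid[1, 0] - grid[0, 0]
integral = (ensemble_pdf.sum() - 0.5 * (ensemble_pdf[0] + ensemble_pdf[-1])) * dx
print(round(float(integral), 3))
```

Because each component model integrates to one, any convex combination of them is again a valid density, which is what makes averaging a safe alternative to hard model selection.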

12.
More than 50 years ago, John Tukey called for a reformation of academic statistics. In “The Future of Data Analysis,” he pointed to the existence of an as-yet unrecognized science, whose subject of interest was learning from data, or “data analysis.” Ten to 20 years ago, John Chambers, Jeff Wu, Bill Cleveland, and Leo Breiman independently once again urged academic statistics to expand its boundaries beyond the classical domain of theoretical statistics; Chambers called for more emphasis on data preparation and presentation rather than statistical modeling; and Breiman called for emphasis on prediction rather than inference. Cleveland and Wu even suggested the catchy name “data science” for this envisioned field. A recent and growing phenomenon has been the emergence of “data science” programs at major universities, including UC Berkeley, NYU, MIT, and most prominently, the University of Michigan, which in September 2015 announced a $100M “Data Science Initiative” that aims to hire 35 new faculty. Teaching in these new programs has significant overlap in curricular subject matter with traditional statistics courses; yet many academic statisticians perceive the new programs as “cultural appropriation.” This article reviews some ingredients of the current “data science moment,” including recent commentary about data science in the popular media, and about how/whether data science is really different from statistics. The now-contemplated field of data science amounts to a superset of the fields of statistics and machine learning, which adds some technology for “scaling up” to “big data.” This chosen superset is motivated by commercial rather than intellectual developments. Choosing in this way is likely to miss out on the really important intellectual event of the next 50 years. 
Because all of science itself will soon become data that can be mined, the imminent revolution in data science is not about mere “scaling up,” but instead the emergence of scientific studies of data analysis science-wide. In the future, we will be able to predict how a proposal to change data analysis workflows would impact the validity of data analysis across all of science, even predicting the impacts field-by-field. Drawing on work by Tukey, Cleveland, Chambers, and Breiman, I present a vision of data science based on the activities of people who are “learning from data,” and I describe an academic field dedicated to improving that activity in an evidence-based manner. This new field is a better academic enlargement of statistics and machine learning than today’s data science initiatives, while being able to accommodate the same short-term goals. Based on a presentation at the Tukey Centennial Workshop, Princeton, NJ, September 18, 2015.

13.
Almost every U.S.-based statistician working on problems motivated by atmospheric science is connected to the statistics program at the National Center for Atmospheric Research (NCAR). Through its permanent staff scientists, postdoctoral researchers, visitors, seminars, workshops, and published work, NCAR has made a profound impact on the community of statisticians working in the atmospheric and climate sciences. This past year saw a reorganization of statistics at NCAR. This article looks back at more than 20 years of statistics there.

14.
A short review is given of the necessity for, and evidence of, synaptic facilitation as a mechanism that organises the microcircuitry of the brain as a result of experience. Conditional probability statistics describe this process, and perceptual learning machines illustrate it. A brief review of animal and machine pattern recognition and learning is given. A simple model is described which simulates learning and pattern recognition. It is suggested that certain brain processes and perceptual learning machines are homologous rather than merely analogous.

15.
This paper proposes a variant of the generalized learning vector quantizer (GLVQ) that explicitly optimizes the area under the receiver operating characteristic (ROC) curve for binary classification problems, instead of the classification accuracy, which is frequently inappropriate for classifier evaluation. This is particularly important in the case of overlapping class distributions, when the user has to decide on the trade-off between high true-positive and low false-positive rates. The model keeps the idea of prototype-based learning vector quantization trained by stochastic gradient descent. For this purpose, a GLVQ-based cost function is presented which describes the area under the ROC curve in terms of a sum of local discriminant functions. This cost function reflects the rank statistics underlying ROC analysis, which are incorporated into the design of the prototype-based discriminant function. The resulting learning scheme for the prototype vectors uses structured inputs, i.e. ordered pairs of data vectors from both classes.
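The rank-statistics view of the ROC area that the cost function builds on is the Mann–Whitney identity: AUC equals the fraction of positive/negative pairs the classifier scores in the right order. A small self-contained sketch (the function name and the toy scores are illustrative; ties are ignored for brevity):

```python
import numpy as np

def auc_rank(scores_pos, scores_neg):
    # AUC via the Mann-Whitney U statistic: the probability that a randomly
    # chosen positive example is scored higher than a randomly chosen negative.
    all_scores = np.concatenate([scores_pos, scores_neg])
    ranks = all_scores.argsort().argsort() + 1     # ranks 1..n (no tie handling)
    ranks_pos = ranks[: len(scores_pos)]
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    u = ranks_pos.sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

pos = np.array([0.9, 0.8, 0.4])
neg = np.array([0.5, 0.3, 0.2])
print(auc_rank(pos, neg))  # 8 of the 9 positive/negative pairs are ordered correctly
```

Writing AUC as a sum over ordered pairs like this is what makes it possible to express it as a sum of local, differentiable discriminant terms and descend its gradient, which is the move the paper makes.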

16.
In this paper, various ensemble learning methods from machine learning and statistics are considered and applied to the customer choice modeling problem. Ensemble learning typically improves the prediction quality of flexible models such as decision trees. We give experimental results for two real-life marketing datasets using decision trees, ensemble versions of decision trees, and the logistic regression model, which is a standard approach for this problem. The ensemble models are found to improve upon individual decision trees and to outperform logistic regression.
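The kind of comparison reported here can be sketched with off-the-shelf tools: an ensemble of trees versus a logistic regression baseline. The scikit-learn models and the synthetic dataset below are stand-ins for illustration, not the paper's marketing data or exact model configurations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-choice data as a stand-in for a marketing dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagged trees (an ensemble of flexible models) vs. the standard baseline
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("forest accuracy:", round(forest.score(X_te, y_te), 3))
print("logit accuracy: ", round(logit.score(X_te, y_te), 3))
```

Averaging many high-variance trees reduces variance without much bias, which is the mechanism behind the improvement the paper observes over single trees.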

17.
The analytical stance taken by equity researchers in education, the methodologies employed, and the interpretations that are drawn from data all have an enormous impact on the knowledge that is produced about sources of inequality. In the 1970s and 1980s, a great deal of interest was given to the issue of women's and girls' underachievement in mathematics. This prompted numerous different research projects that investigated the extent and nature of the differences between girls' and boys' achievement and offered reasons why such disparities occurred. This work contributed to a discourse on gender and mathematics that flowed through the media channels and into schools, homes, and the workplace. In this article, I consider some of the scholarship on gender and mathematics, critically examining the findings that were produced and the influence they had. In the process, I propose a fundamental tension in research on equity, as scholars walk a fine and precarious line between lack of concern on the one hand and essentialism on the other. I argue in this article that negotiating that tension may be the most critical role for equity researchers as we move into the future.

18.
19.
The authenticity and quality of China's macroeconomic statistics have long been questioned, drawing criticism both internationally and from many domestic researchers, particularly since the 2008 US financial crisis. This paper introduces, validates, and extends Benford's law as a tool for assessing distributional fit and the randomness quality of macroeconomic statistics, and applies it to an objective, rigorous evaluation of the macroeconomic data published by China's statistical system. The results show that, at the 0.05 significance level, the main economic indicators of four macroeconomic sectors (national accounts, government finance statistics, the financial sector, and the balance of payments) show no evidence of manipulation or artificial revision, and that the quality of the statistical data has improved markedly.
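The kind of first-digit check described here can be sketched as a chi-square goodness-of-fit test against Benford's law. The helper names and the powers-of-2 sample are illustrative, not the paper's data; 15.51 is the 0.05 critical value of the chi-square distribution with 8 degrees of freedom.

```python
import math
from collections import Counter

def benford_expected():
    # Benford's law: P(first digit = d) = log10(1 + 1/d), d = 1..9
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    # Leading digit of a positive integer (sketch; no handling of 0 or decimals)
    return int(str(x)[0])

def chi_square_benford(values):
    # Chi-square statistic of the observed first-digit counts vs. Benford's law
    counts = Counter(first_digit(v) for v in values)
    n = len(values)
    return sum(
        (counts.get(d, 0) - n * p) ** 2 / (n * p)
        for d, p in benford_expected().items()
    )

# Powers of 2 are a classic example of a Benford-conforming sequence
sample = [2 ** k for k in range(1, 200)]
stat = chi_square_benford(sample)
print(round(stat, 2))  # compare against the 0.05 critical value, 15.51
```

A statistic below the critical value means the first-digit distribution is consistent with Benford's law, which is the sense in which the paper finds no evidence of manipulation.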

20.
The Fatih Project in Turkey has introduced software, such as data analysis software, into mathematics teaching. As a result, the need to inquire into the effectiveness of computer-supported learning environments has emerged. This study aims to examine the effect of learning environments supported by dynamic data analysis software on secondary school students’ achievement and attitudes. The research employs a quasi-experimental design with a pre-test, post-test control group. Basic topics related to data analysis were introduced through dynamic statistics software in the experimental group, while the students in the control group were taught with the help of smart boards, course books and exercises. Data were collected with an achievement test, an attitude scale and semi-structured interviews; interviews were conducted with four students from the experimental group in order to obtain more detailed information. The data were analysed both quantitatively and qualitatively. The findings revealed that teaching statistics through statistics software is more effective, in terms of achievement and attitudes, than teaching with the traditional method. In line with this result, it is suggested that computer-supported statistics software be used in statistics teaching.
