Similar Articles
20 similar articles found.
1.
This article begins with some context setting on new views of statistics and statistical education. These views are reflected, in particular, in the introduction of exploratory data analysis (EDA) into the statistics curriculum. A detailed example of an EDA learning activity in the middle school is then introduced, which uses the power of the spreadsheet to mediate students' construction of meanings for statistical conceptions. Through this example, I endeavor to illustrate how an attempt at serious integration of computers in teaching and learning statistics brings about a cascade of changes in curriculum materials, classroom praxis, and students' ways of learning. A theoretical discussion follows that underpins the impact of technological tools on teaching and learning statistics by emphasizing how the computer lends itself to supporting cognitive and sociocultural processes. Subsequently, I present a sample of educational technologies representing the sorts of software that have typically been used in statistics instruction: statistical packages (tools), microworlds, tutorials, resources (including Internet resources), and teachers' metatools. Finally, certain implications and recommendations for the use of computers in the statistics education milieu are suggested.

2.
While technology has become an integral part of introductory statistics courses, the programs typically employed are professional packages designed primarily for data analysis rather than for learning. Findings from several studies suggest that use of such software in the introductory statistics classroom may not be very effective in helping students build intuitions about the fundamental statistical ideas of sampling distributions and inferential statistics. The paper describes an instructional experiment that explored the capabilities of Fathom, one of several recently developed packages explicitly designed to enhance learning. Findings from the study indicate that use of Fathom led students to the construction of a fairly coherent mental model of sampling distributions and other key concepts related to statistical inference. The insights gained point to a number of critical ingredients that statistics educators should consider when choosing statistical software. They also provide suggestions about how to approach the particularly challenging topic of statistical inference. This revised version was published online in July 2006 with corrections to the Cover Date.
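A minimal Python sketch of the idea (an illustration added here, not the study's Fathom material): students repeatedly draw samples from a known, skewed population and examine the resulting distribution of sample means.

    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.exponential(scale=2.0, size=100_000)  # a skewed "population"

    n, reps = 30, 5_000
    sample_means = np.array([rng.choice(population, size=n, replace=True).mean()
                             for _ in range(reps)])

    # The empirical sampling distribution of the mean is roughly normal,
    # centred near the population mean, with spread about sigma / sqrt(n).
    print(population.mean(), sample_means.mean(), sample_means.std())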

3.
4.
This article proposes a penalized likelihood method to jointly estimate multiple precision matrices for use in quadratic discriminant analysis (QDA) and model-based clustering. We use a ridge penalty and a ridge fusion penalty to introduce shrinkage and promote similarity between precision matrix estimates. We use blockwise coordinate descent for optimization, and validation likelihood is used for tuning parameter selection. Our method is applied in QDA and semi-supervised model-based clustering.
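One way to write such a jointly penalized objective (the notation here is illustrative and may differ from the authors' exact formulation): for classes k = 1, ..., K with sample sizes n_k, sample covariance matrices S_k, and precision matrices \(\Omega_k\),

\[ \max_{\Omega_1,\dots,\Omega_K \succ 0}\; \sum_{k=1}^{K} n_k \big[\log\det\Omega_k - \operatorname{tr}(S_k\Omega_k)\big] \;-\; \frac{\lambda_1}{2}\sum_{k=1}^{K}\|\Omega_k\|_F^2 \;-\; \frac{\lambda_2}{2}\sum_{k<k'}\|\Omega_k-\Omega_{k'}\|_F^2 , \]

where \(\lambda_1\) controls the ridge shrinkage and \(\lambda_2\), the ridge fusion penalty, pulls the K estimates toward one another.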

5.
We study a parametric estimation problem. Our aim is to estimate or identify the conditional probability, which is referred to as the system. We suppose that we can select appropriate inputs to the system when gathering the training data. This kind of estimation is called active learning in the context of artificial neural networks. In this paper we suggest new active learning algorithms and evaluate their risk using statistical asymptotic theory. The algorithms can be regarded as a version of experimental design with two-stage sampling. We verify the efficiency of active learning through simple computer simulations.

6.
This paper presents a statistical analysis of the dynamic characteristics and fatigue loads of complex machine-tool transmission systems, treating the fatigue life of transmission components as a probability distribution as well. Building on an investigation of the distribution laws of load and fatigue life, theoretical formulas for the probabilistic design of machine-tool transmission components are proposed. With this new design method, the reliability or fatigue life of transmission components can be predicted.
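For orientation, the classical stress-strength interference formula often used in such probabilistic design (added here as an illustration, not quoted from the paper): if strength S and load L are independent normal random variables, the reliability is

\[ R = P(S > L) = \Phi\!\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}}\right), \]

where \(\Phi\) is the standard normal distribution function.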

7.
Despite the widespread use of significance testing in empirical research, its interpretation and researchers' excessive confidence in its results have been criticized for years. In this article, the logic of statistical testing in the Fisher and Neyman-Pearson approaches is described, some common misinterpretations of basic concepts behind statistical tests are reviewed, and the philosophical and psychological issues that can contribute to these misinterpretations are analyzed. Some frequent criticisms of statistical tests are revisited, with the conclusion that most of them refer not to the tests themselves but to researchers' misuse of the tests. In accordance with Levin (1998a), statistical testing should be transformed into a more intelligent process that helps researchers in their work. Possible ways in which statistical education might contribute to a better understanding and application of statistical inference are suggested.

8.

9.
Stability is a major requirement to draw reliable conclusions when interpreting results from supervised statistical learning. In this article, we present a general framework for assessing and comparing the stability of results, which can be used in real-world statistical learning applications as well as in simulation and benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practical data analysis would be to compare the stability of results generated by different candidate algorithms for a dataset at hand or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package. Supplementary material for this article is available online.
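The article's own analyses ship as an R package, which is not reproduced here; as a hedged illustration of the general idea only, stability can be probed by refitting a learner on resampled versions of the data and measuring how much its predictions vary across refits (Python sketch):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)
    grid = np.linspace(-3, 3, 200).reshape(-1, 1)

    # Refit the (potentially unstable) learner on bootstrap resamples
    # and record its predictions on a fixed evaluation grid.
    preds = []
    for _ in range(50):
        idx = rng.integers(0, len(X), size=len(X))
        model = DecisionTreeRegressor(max_depth=5).fit(X[idx], y[idx])
        preds.append(model.predict(grid))
    preds = np.asarray(preds)

    # Average pointwise spread of predictions across refits: smaller is more stable.
    print(f"mean prediction spread across resamples: {preds.std(axis=0).mean():.3f}")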

10.
11.
12.
13.
We consider a statistical inverse learning (also called inverse regression) problem, in which we observe the image of a function f through a linear operator A at i.i.d. random design points \(X_i\), superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of n but also in the explicit dependence of the constant factor on the noise variance and on the radius of the source condition set.
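In symbols (a standard formulation of such problems; the notation here is illustrative, not quoted from the paper): the observations are \(Y_i = (Af)(X_i) + \varepsilon_i\), \(i = 1, \dots, n\), and a spectral regularization estimator takes the form \(\hat f_\lambda = g_\lambda(\hat T)\,\hat g\), where \(\hat T\) and \(\hat g\) are empirical versions of \(A^{*}A\) and \(A^{*}Y\), and \(g_\lambda\) is a filter function such as Tikhonov regularization, \(g_\lambda(t) = 1/(t + \lambda)\).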

14.
This paper studies the statistical learning theory underlying the support vector ordinal regression machine for ordinal regression problems. First, an ordinal regression machine is derived from the structural risk minimization principle, referred to as the structural risk minimization ordinal regression machine. Second, the relationship between the solutions of the structural risk minimization ordinal regression machine and the support vector ordinal regression machine is established. It is further shown, from the statistical learning perspective, that the support vector ordinal regression machine is a direct implementation of the structural risk minimization principle, and the meaning of the penalty parameter C is given.
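For context, structural risk minimization rests on bounds of the following standard form (added here for orientation, not quoted from the paper): for a hypothesis class of VC dimension h and an i.i.d. sample of size n, with probability at least \(1 - \eta\),

\[ R(f) \le R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\eta}}{n}}. \]

SRM selects, from a nested sequence of hypothesis classes, the class and function minimizing the right-hand side; in support vector machines the penalty parameter C trades the empirical risk term against the capacity term.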

15.
Theoretical foundations of statistical learning theory based on birough (double-rough) samples
This paper introduces the basic content of birough theory; proposes the concepts of the birough empirical risk functional, the birough expected risk functional, and the birough empirical risk minimization principle; and finally proves the key theorem of statistical learning theory based on birough samples and discusses bounds on the rate of uniform convergence of the learning process. This lays the theoretical groundwork for systematically establishing a statistical learning theory based on uncertain samples and for constructing the corresponding support vector machines.
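For comparison, the classical (crisp-sample) key theorem that such papers generalize ties the consistency of empirical risk minimization to uniform (one-sided) convergence of empirical risks to expected risks:

\[ \lim_{n\to\infty} P\Big\{ \sup_{f\in\mathcal{F}} \big(R(f) - R_{\mathrm{emp}}(f)\big) > \varepsilon \Big\} = 0 \quad \text{for every } \varepsilon > 0. \]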

16.
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software designs can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed because of compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context.
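The article's own source code is in its supplemental materials; as a hedged sketch of the kind of data-parallel step that maps well to GPUs, here is a per-observation responsibility computation for a univariate Gaussian mixture written with CuPy (a GPU-backed NumPy-like library, used purely for illustration and not the authors' implementation):

    import cupy as cp  # drop-in, GPU-backed replacement for much of NumPy

    def log_responsibilities(x, means, log_weights, var):
        """Log responsibilities of K Gaussian components for n points,
        computed in parallel on the GPU via array broadcasting."""
        # x: (n,), means: (K,), log_weights: (K,), shared variance var
        log_lik = -0.5 * (x[:, None] - means[None, :]) ** 2 / var \
                  - 0.5 * cp.log(2 * cp.pi * var)
        log_post = log_weights[None, :] + log_lik
        # Normalize each row with a log-sum-exp for numerical stability.
        log_post -= cp.max(log_post, axis=1, keepdims=True)
        log_post -= cp.log(cp.sum(cp.exp(log_post), axis=1, keepdims=True))
        return log_post

    x = cp.random.standard_normal(1_000_000)
    r = log_responsibilities(x, cp.array([-2.0, 0.0, 2.0]),
                             cp.log(cp.array([0.3, 0.4, 0.3])), 1.0)
    print(r.shape)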

17.
18.
19.
More than a decade of research and innovation in using computer-based graphing and simulation environments has encouraged many of us in the research community to believe that important dimensions of calculus-related reasoning can be successfully understood by young learners. This paper attempts to address what kinds of calculus-related insights seem to typify this early form of calculus reasoning. The phrase "qualitative calculus" is introduced to frame the analysis of this "other" calculus, and its learning is the focus of the synthesis. The central claim is that qualitative calculus is a cognitive structure in its own right and that it develops or evolves in ways that seem to fit with important general features of Piaget's analyses of the development of operational thought. In particular, the intensification of rate and two kinds of reversibility between what are called "how much" (amount) and "how fast" (rate) quantities are what interactively, and collectively, characterize and help to define understanding qualitative calculus. Although it shares a family resemblance with traditional expectations of what it might mean to learn calculus, qualitative calculus does not build from ratio- or proportion-based ideas of slope as they are typically associated with defining rate. The paper does close, however, with a discussion of how understanding qualitative calculus can support and link to the rate-related literature on slope, ratio, and proportion. Additionally, curricular connections and implications are discussed throughout to help illustrate and explore the significance of learning qualitative calculus. This revised version was published online in July 2006 with corrections to the Cover Date.

20.
This paper introduces the basic content of fuzzy rough theory; proposes the concepts of the fuzzy rough empirical risk functional, the fuzzy rough expected risk functional, and the fuzzy rough empirical risk minimization principle; and finally proves the key theorem of statistical learning theory based on fuzzy rough samples and establishes bounds on the rate of uniform convergence of the learning process.
