Similar Articles
 10 similar articles found (search time: 140 ms)
1.
2.
Modular programming is a development paradigm that emphasizes self-contained, flexible, and independent pieces of functionality. This practice allows new features to be added seamlessly when desired and unwanted features to be removed, thus simplifying the software's user interface. The recent rise of web-based software applications has presented new challenges for designing an extensible, modular software system. In this article, we outline a framework for designing such a system, with a focus on reproducibility of the results. As a case study, we present a Shiny-based web application called intRo, which allows the user to perform basic data analyses and statistical routines. Finally, we highlight some challenges we encountered, and how to address them, when combining modular programming concepts with reactive programming as used by Shiny. Supplementary material for this article is available online.
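As a rough illustration of the modular approach this abstract describes (a minimal sketch, not the intRo implementation; names such as histogramUI and histogramServer are hypothetical), a Shiny module bundles its UI and server logic behind a namespaced id, so a feature can be added to or removed from an app without touching the rest:

    # Minimal Shiny module sketch: a self-contained histogram feature.
    library(shiny)

    histogramUI <- function(id) {
      ns <- NS(id)  # namespacing keeps the module independent of the rest of the app
      tagList(
        selectInput(ns("var"), "Variable", names(mtcars)),
        plotOutput(ns("plot"))
      )
    }

    histogramServer <- function(id, data) {
      moduleServer(id, function(input, output, session) {
        output$plot <- renderPlot(hist(data[[input$var]], main = input$var))
      })
    }

    ui <- fluidPage(histogramUI("hist1"))
    server <- function(input, output, session) histogramServer("hist1", mtcars)

    # shinyApp(ui, server)  # run interactively; the reactivity stays inside the module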

3.
There are several fundamental problems with statistical software development in the academic community. In addition, the development and dissemination of academic software will become increasingly difficult for a variety of reasons. To solve these problems, a new framework for statistical software development, maintenance, and publishing is proposed: it is based on the paradigm that both academic and commercial software should be cost-effectively created, maintained, and published with marketing principles in mind. The framework has been seamlessly integrated into a highly successful website () that operates as a provider of free web-based statistical software. Finally, it is explained how the R framework provides a platform for the development of a true compendium publishing system.

4.
Interactive graphics provide a very important tool that facilitates exploratory data and model analysis, a crucial step in real-world applied statistics. Only a very limited set of software exists that provides truly interactive graphics for data analysis, partly because it is not easy to implement. Very often specialized software is created to offer graphics for a particular problem, but many fundamental plots are omitted because they are not considered new research. In this paper we discuss a general framework for creating interactive graphics software on a sound foundation, one that offers a consistent user interface, fast prototyping of new plots, and extensibility to support interactive models. We also discuss one implementation of the general framework: iPlots eXtreme, next-generation interactive graphics for the analysis of large data in R. It provides the most fundamental plot types and allows new interactive plots to be created. The implementation raises interactive graphics performance to an entirely new level. We briefly discuss several methods that allowed us to achieve this goal and illustrate the use of advanced programmability features in conjunction with R.
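For readers who want to try interactive, linked graphics from R, the sketch below uses the classic iplots package interface (iPlots eXtreme itself is a separate, faster implementation; the function names here come from iplots and are an assumption about the available API):

    # Two linked interactive plots; selecting cases in one highlights them in the other.
    library(iplots)

    data(cars)
    ihist(cars$speed)             # interactive histogram
    iplot(cars$speed, cars$dist)  # interactive scatterplot
    # Brushing in either window propagates to the other (linked highlighting),
    # the core interaction model that iPlots eXtreme accelerates for large data.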

5.
Statistical software provides essential support for statisticians and others who are analyzing data or doing research on new statistical techniques. Those supported typically regard themselves as “users” of the software, but as soon as they need to express their own ideas computationally, they in fact become “programmers.” Nothing is more important for the success of statistical software than enabling this transition from user to programmer, and on to gradually more ambitious software design. What does the user need? How can the design of statistical software help? This article presents a number of suggestions based on past experience and current research. The evolution of the S system reflects some of these opinions. Work on the Omegahat software provides a promising direction for future systems that reflect similar motivations.
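A small, hypothetical R example (not from the article) of the user-to-programmer transition it describes: commands typed interactively at the prompt become a reusable function once the same idea has to be expressed computationally:

    # Interactive use: one-off commands at the prompt.
    plot(mtcars$wt, mtcars$mpg)
    abline(lm(mpg ~ wt, data = mtcars))

    # Programming: the same idea captured as a reusable function.
    scatter_fit <- function(data, x, y) {
      fit <- lm(reformulate(x, y), data = data)   # builds y ~ x from the column names
      plot(data[[x]], data[[y]], xlab = x, ylab = y)
      abline(fit)
      invisible(fit)
    }

    scatter_fit(mtcars, "wt", "mpg")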

6.
The complexity of linear mixed-effects (LME) models means that traditional diagnostics are rendered less effective. This is due to a breakdown of asymptotic results, boundary issues, and visible patterns in residual plots that are introduced by the model-fitting process. Some of these issues are well known, and adjustments have been proposed. Working with LME models typically requires that the analyst keep track of all the special circumstances that may arise. In this article, we illustrate a simpler but generally applicable approach to diagnosing LME models. We explain how to use new visual inference methods for these purposes. The approach provides a unified framework for diagnosing LME fits and for model selection. We illustrate the use of this approach on several commonly available datasets. A large-scale Amazon Mechanical Turk study was used to validate the methods. R code is provided for the analyses. Supplementary materials for this article are available online.
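A hedged sketch of the visual inference idea (the lineup protocol) applied to an LME fit: the residual plot from the real fit is hidden among residual plots computed from data simulated under the fitted model. This uses lme4 and ggplot2 directly and is not the authors' code, which is available in their supplementary materials:

    library(lme4)
    library(ggplot2)

    fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

    n_panels <- 12
    true_pos <- sample(n_panels, 1)   # panel that holds the real residual plot

    panels <- lapply(seq_len(n_panels), function(i) {
      m <- if (i == true_pos) fit else refit(fit, simulate(fit)[[1]])  # null data from the model
      data.frame(panel = i, fitted = fitted(m), resid = resid(m))
    })

    ggplot(do.call(rbind, panels), aes(fitted, resid)) +
      geom_point(alpha = 0.4) +
      facet_wrap(~ panel) +
      labs(x = "Fitted values", y = "Residuals")
    # If viewers cannot single out panel `true_pos`, the observed residual plot is
    # consistent with the fitted model; if they can, something is wrong with the fit.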

7.
The last few years have seen a significant increase in publicly available software specifically targeted at the analysis of extreme values. This reflects the increase in the use of extreme value methodology by the general statistical community. The software that is available for the analysis of extremes has evolved in essentially independent units, with most forming extensions of larger software environments. An inevitable consequence is that these units are spread about the statistical landscape. Scientists seeking to apply extreme value methods must spend considerable time and effort in determining whether the currently available software can be usefully applied to a given problem. We attempt to simplify this process by reviewing the current state and suggesting future approaches for software development. These suggestions aim to provide a basis for an initiative leading to the successful creation and distribution of a flexible and extensible set of tools for extreme value practitioners and researchers alike. In particular, we propose a collaborative framework in which cooperation between developers is of fundamental importance. AMS 2000 Subject Classification: Primary 62P99.
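As a brief illustration of the kind of analysis this software supports, the sketch below fits a generalized extreme value (GEV) distribution to block maxima with the R package evd; the choice of package and the simulated data are assumptions for illustration, not a recommendation from the paper:

    library(evd)

    set.seed(1)
    block_max <- rgev(100, loc = 10, scale = 2, shape = 0.1)  # e.g. 100 annual maxima

    gev_fit <- fgev(block_max)  # maximum-likelihood GEV fit
    gev_fit                     # prints location, scale, and shape estimates
    # plot(gev_fit)             # P-P, Q-Q, density, and return-level diagnostics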

8.
Network models and methods have proven to be rather successful in many application fields. In this survey, their use in the solution of vehicle scheduling and crew scheduling problems in mass transit settings is reviewed. An attempt is made to encompass, in a unified framework, some of the most relevant models on the subject found in the literature.

9.
Statistical environments such as S, R, XLisp-Stat, and others have had a dramatic effect on the way we, statistics practitioners, think about data and statistical methodology. However, the possibilities and challenges introduced by recent technological developments and the general ways we use computing conflict with the computational model of these systems. This article explores some of these challenges and identifies the need to support easy integration of functionality from other domains, and to export statistical methodology to other audiences and applications, both statically and dynamically. Existing systems can be improved in these domains with some already implemented extensions (see Section 5). However, the development of a new environment and computational model that exploits modern tools designed to handle many general aspects of these challenges appears more promising as a long-term approach. We present the architecture for such a new model named Omegahat. It lends itself to entirely new statistical computing paradigms. It is highly extensible at both the user and programmer level, and also encourages the development of new environments for different user groups. The Omegahat interactive language offers a continuity between the different programming tasks and levels via optional type checking and seamless access between the interpreted user language and the implementation language, Java. Parallel and distributed computing, network and database access, interactive graphics, and many other aspects of statistical computing are directly accessible to the user as a consequence of this seamless access. We describe the benefits of using Java as the implementation language for the environment and several innovative features of the user-level language which promise to assist development of software that can be used in many contexts. We also outline how this architecture can be integrated with existing environments such as R and S.

The ideas are drawn from work within the Omega Project for Statistical Computing. The project provides open-source software for researching and developing next-generation statistical computing tools.
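Omegahat itself is no longer in common use, but the "seamless access between the interpreted user language and the implementation language, Java" that the article describes can be illustrated today with the rJava package (an analogous mechanism in R, not the Omegahat system):

    library(rJava)

    .jinit()                                    # start the embedded JVM
    s <- .jnew("java/lang/String", "Omegahat")  # construct a Java object from R
    .jcall(s, "I", "length")                    # call its length() method, returns 8L
    .jcall(s, "S", "toUpperCase")               # returns "OMEGAHAT"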

10.
This paper describes progress towards developing a platform for rapid prototyping of interactive data visualizations, using R, GGobi, rggobi and RGtk2. GGobi is a software tool for multivariate interactive graphics. At the core of GGobi is a data pipeline that incrementally transforms data through a series of stages into a plot and maps user interaction with the plot back to the data. The GGobi pipeline is extensible and mutable at runtime. The rggobi package, an interface from the R language to GGobi, has been augmented with a low-level interface that supports the customization of interactive data visualizations through the extension and manipulation of the GGobi pipeline. The large size of the GGobi API has motivated the use of the RGtk2 code generation system to create the low-level interface between R and GGobi. The software is demonstrated through an application to interactive network visualization.
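A hedged sketch of driving GGobi from R through the high-level rggobi interface (the low-level, RGtk2-generated pipeline bindings described in the paper go further, but the entry point is the same; the exact calls below are an assumption about the rggobi API):

    library(rggobi)

    g <- ggobi(mtcars)                               # launch GGobi on a data frame
    d <- g[1]                                        # handle to that dataset inside GGobi
    glyph_colour(d) <- ifelse(mtcars$am == 1, 2, 1)  # brush points from R by colour index
    # Brushing, linking, and tours performed in the GGobi windows flow back through
    # the pipeline to the data, which can then be queried again from R.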
