Similar Documents
20 similar documents retrieved
1.
This article argues that the agent‐based computational model permits a distinctive approach to social science for which the term “generative” is suitable. In defending this terminology, features distinguishing the approach from both “inductive” and “deductive” science are given. Then, the following specific contributions to social science are discussed: The agent‐based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent‐based modeling provides a powerful way to address certain enduring—and especially interdisciplinary—questions. It allows one to subject certain core theories—such as neoclassical microeconomics—to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise—or “map up”—to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent‐based (“bottom up”) models. The agent‐based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent‐based modeling offers powerful new forms of hybrid theoretical‐computational work; these are particularly relevant to the study of non‐equilibrium systems. The agent‐based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science—some related to intractability, and some to undecidability proper. Finally, since “emergence” figures prominently in this literature, I take up the connection between agent‐based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible. © 1999 John Wiley & Sons, Inc.
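One claim above, that simple individual rules “map up” to macroscopic regularities, can be illustrated with a minimal hypothetical sketch in the spirit of Schelling-style agent-based models (the function name, parameters, and dynamics are illustrative assumptions, not taken from the article):

```python
import random

def schelling_1d(n=100, threshold=0.5, steps=2000, seed=1):
    """Minimal 1-D Schelling-style sketch: agents of two types occupy a ring;
    an agent unhappy with its neighborhood swaps places with a random agent.
    Returns the final fraction of same-type adjacent pairs, a macroscopic
    regularity that emerges from the purely local rule."""
    rng = random.Random(seed)
    agents = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        like = sum(agents[(i + d) % n] == agents[i] for d in (-1, 1))
        if like / 2 < threshold:          # unhappy: relocate by swapping
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
    return sum(agents[i] == agents[(i + 1) % n] for i in range(n)) / n
```

The macroscopic statistic returned is never programmed directly; it is only an aggregate of the individual rule, which is the sense of “generative” used above.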

2.
Correspondence analysis, a data analytic technique used to study two‐way cross‐classifications, is applied to social relational data. Such data are frequently termed “sociometric” or “network” data. The method allows one to model forms of relational data and types of empirical relationships not easily analyzed using either standard social network methods or common scaling or clustering techniques. In particular, correspondence analysis allows one to model:

—two‐mode networks (rows and columns of a sociomatrix refer to different objects)

—valued relations (e.g. counts, ratings, or frequencies).

In general, the technique provides scale values for row and column units, visual presentation of relationships among rows and columns, and criteria for assessing “dimensionality” or graphical complexity of the data and goodness‐of‐fit to particular models. Correspondence analysis has recently been the subject of research by Goodman, Haberman, and Gilula, who have termed their approach to the problem “canonical analysis” to reflect its similarity to canonical correlation analysis of continuous multivariate data. This generalization links the technique to more standard categorical data analysis models, and provides a much‐needed statistical justification.

We review both correspondence and canonical analysis, and present these ideas by analyzing relational data on the 1980 monetary donations from corporations to nonprofit organizations in the Minneapolis–St. Paul metropolitan area. We also show how these techniques are related to dyadic independence models, first introduced by Holland, Leinhardt, Fienberg, and Wasserman in the early 1980s. The highlight of this paper is the relationship between correspondence and canonical analysis, and these dyadic independence models, which are designed specifically for relational data. The paper concludes with a discussion of this relationship, and some data analyses that illustrate the fact that correspondence analysis models can be used as approximate dyadic independence models.
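As a sketch of the quantity that correspondence analysis decomposes into graphical dimensions, the total inertia (Pearson's chi-square divided by the table total), consider the following; the function name and example tables are hypothetical:

```python
def correspondence_inertia(table):
    """Total inertia (chi-square / n) of a two-way table: the quantity that
    correspondence analysis splits across its display dimensions."""
    n = sum(sum(row) for row in table)
    row_m = [sum(row) / n for row in table]
    col_m = [sum(table[i][j] for i in range(len(table))) / n
             for j in range(len(table[0]))]
    inertia = 0.0
    for i, row in enumerate(table):
        for j, x in enumerate(row):
            p = x / n
            e = row_m[i] * col_m[j]       # expected proportion under independence
            inertia += (p - e) ** 2 / e
    return inertia
```

A table of independent rows and columns has zero inertia; a perfectly associated 2×2 table has inertia 1, the maximum for one dimension.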

3.
More than 50 years ago, John Tukey called for a reformation of academic statistics. In “The Future of Data Analysis,” he pointed to the existence of an as-yet unrecognized science, whose subject of interest was learning from data, or “data analysis.” Ten to 20 years ago, John Chambers, Jeff Wu, Bill Cleveland, and Leo Breiman independently once again urged academic statistics to expand its boundaries beyond the classical domain of theoretical statistics; Chambers called for more emphasis on data preparation and presentation rather than statistical modeling; and Breiman called for emphasis on prediction rather than inference. Cleveland and Wu even suggested the catchy name “data science” for this envisioned field. A recent and growing phenomenon has been the emergence of “data science” programs at major universities, including UC Berkeley, NYU, MIT, and most prominently, the University of Michigan, which in September 2015 announced a $100M “Data Science Initiative” that aims to hire 35 new faculty. Teaching in these new programs has significant overlap in curricular subject matter with traditional statistics courses; yet many academic statisticians perceive the new programs as “cultural appropriation.” This article reviews some ingredients of the current “data science moment,” including recent commentary about data science in the popular media, and about how/whether data science is really different from statistics. The now-contemplated field of data science amounts to a superset of the fields of statistics and machine learning, which adds some technology for “scaling up” to “big data.” This chosen superset is motivated by commercial rather than intellectual developments. Choosing in this way is likely to miss out on the really important intellectual event of the next 50 years. 
Because all of science itself will soon become data that can be mined, the imminent revolution in data science is not about mere “scaling up,” but instead the emergence of scientific studies of data analysis science-wide. In the future, we will be able to predict how a proposal to change data analysis workflows would impact the validity of data analysis across all of science, even predicting the impacts field-by-field. Drawing on work by Tukey, Cleveland, Chambers, and Breiman, I present a vision of data science based on the activities of people who are “learning from data,” and I describe an academic field dedicated to improving that activity in an evidence-based manner. This new field is a better academic enlargement of statistics and machine learning than today’s data science initiatives, while being able to accommodate the same short-term goals. Based on a presentation at the Tukey Centennial Workshop, Princeton, NJ, September 18, 2015.

4.
ABSTRACT

“Contentious politics” has become the main label for a wide range of previously separate fields of research encompassing topics such as collective action, radicalization, armed insurgencies, and terrorism. Over the past two decades, scholars have tried to bring these various strands together into a unified field of study. In so doing, they have developed a methodology to isolate and analyze the common social and cognitive mechanisms underlying several diverse historical phenomena such as “insurgencies,” “revolutions,” “radicalization,” or “terrorism.” A multidisciplinary approach was adopted, open to contributions from diverse fields such as economics, sociology, and psychology. The aim of this paper is to add to the multidisciplinarity of the field of Contentious Politics (CP) and to introduce the instruments of agent-based modeling and network game theory to the study of some fundamental mechanisms analyzed within this literature. In particular, the model presented in this paper describes the dynamics of one process, here defined as “the radicalization of politics,” and its main underlying mechanisms. Their mechanics are analyzed in diverse social contexts differentiated by the values of four parameters: the extent of repression, inequality, social tolerance, and interconnectivity. The model can be used to explain the basic dynamics underlying different phenomena such as the development of radicalization, populism, and popular rebellions. In the final part, different societies characterized by diverse values of the aforementioned four parameters are tested through Python simulations, thereby offering an overview of the different outcomes that the mechanics of our model can produce according to the contexts in which they operate.
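A toy sketch of how four such parameters could drive a contagion-style simulation; this is an illustrative simplification under stated assumptions, not the authors' model, and every name and update rule below is hypothetical:

```python
import random

def radicalization_sim(n=200, repression=0.5, inequality=0.5,
                       tolerance=0.5, interconnectivity=0.1,
                       steps=5000, seed=0):
    """Hypothetical sketch: agents start radicalized with a probability that
    rises with inequality and falls with tolerance; random pairwise contact
    spreads radicalization with probability `interconnectivity`, while
    repression deters adoption. Returns the final radicalized fraction."""
    rng = random.Random(seed)
    radical = [rng.random() < inequality * (1 - tolerance) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if radical[j] and rng.random() < interconnectivity:
            # exposure radicalizes unless repression deters the agent
            radical[i] = rng.random() > repression
    return sum(radical) / n
```

Sweeping the four parameters over a grid of such runs is one cheap way to map contexts to outcomes before committing to a richer model.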

5.
The percolation phase transition and the mechanism of the emergence of the giant component through the critical scaling window for random graph models, has been a topic of great interest in many different communities ranging from statistical physics, combinatorics, computer science, social networks and probability theory. The last few years have witnessed an explosion of models which couple random aggregation rules, that specify how one adds edges to existing configurations, and choice, wherein one selects from a “limited” set of edges at random to use in the configuration. While an intense study of such models has ensued, understanding the actual emergence of the giant component and merging dynamics in the critical scaling window has remained impenetrable to a rigorous analysis. In this work we take an important step in the analysis of such models by studying one of the standard examples of such processes, namely the Bohman‐Frieze model, and provide first results on the asymptotic dynamics, through the critical scaling window, that lead to the emergence of the giant component for such models. We identify the scaling window and show that through this window, the component sizes properly rescaled converge to the standard multiplicative coalescent. Proofs hinge on a careful analysis of certain infinite‐type branching processes with types taking values in the space of cadlag paths, and stochastic analytic techniques to estimate susceptibility functions of the components all the way through the scaling window where these functions explode. Previous approaches for analyzing random graphs at criticality have relied largely on classical breadth‐first search techniques that exploit asymptotic connections with Brownian excursions. For dynamic random graph models evolving via general Markovian rules, such approaches fail and we develop a quite different set of tools that can potentially be used for the study of critical dynamics for all bounded size rules. 
© 2013 Wiley Periodicals, Inc. Random Struct. Alg., 46, 55–116, 2015
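The Bohman–Frieze bounded-size rule itself is straightforward to simulate with a union–find structure; a sketch (parameter names are illustrative) that exposes the delayed emergence of the giant component relative to the Erdős–Rényi process:

```python
import random

def bohman_frieze(n=2000, t=1.0, seed=3):
    """Simulate the Bohman-Frieze rule: of two candidate edges drawn at
    random per step, accept the first iff it joins two isolated vertices,
    otherwise accept the second. Runs t*n/2 steps and returns the largest
    component's fraction of the n vertices."""
    parent, size = list(range(n)), [1] * n

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # union by size
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]

    rng = random.Random(seed)
    for _ in range(int(t * n / 2)):
        a, b, c, d = (rng.randrange(n) for _ in range(4))
        if size[find(a)] == 1 and size[find(b)] == 1:
            union(a, b)               # first edge joins two isolated vertices
        else:
            union(c, d)
    return max(size[find(v)] for v in range(n)) / n
```

Because isolated-vertex edges are preferred, the critical time is pushed past t = 1, which a comparison of subcritical and supercritical runs makes visible even at modest n.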

6.
“Exploratory” and “confirmatory” data analysis can both be viewed as methods for comparing observed data to what would be obtained under an implicit or explicit statistical model. For example, many of Tukey's methods can be interpreted as checks against hypothetical linear models and Poisson distributions. In more complex situations, Bayesian methods can be useful for constructing reference distributions for various plots that are useful in exploratory data analysis. This article proposes an approach to unify exploratory data analysis with more formal statistical methods based on probability models. These ideas are developed in the context of examples from fields including psychology, medicine, and social science.
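A minimal sketch of such a reference-distribution check: compare an observed statistic to replicates drawn under a Poisson model, in the spirit of a posterior predictive check. The plug-in shortcut (sample mean as the rate) and all names are assumptions for illustration, not the article's method:

```python
import math
import random
import statistics

def ppc_pvalue(data, draws=500, seed=7):
    """Compare the observed variance to variances of datasets replicated
    under a Poisson reference model (plug-in rate = sample mean).
    Returns the fraction of replicates at least as dispersed as the data;
    a value near 0 flags overdispersion relative to the reference model."""
    rng = random.Random(seed)
    lam = statistics.mean(data)

    def poisson(l):                   # Knuth's multiplication method
        target, p, k = math.exp(-l), 1.0, 0
        while p > target:
            k += 1
            p *= rng.random()
        return k - 1

    obs = statistics.pvariance(data)
    hits = sum(
        statistics.pvariance([poisson(lam) for _ in data]) >= obs
        for _ in range(draws)
    )
    return hits / draws
```

The same recipe works for any plot statistic: simulate it under the reference model, and judge the observed plot against that band.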

7.
Probabilistic cellular automata form a very large and general class of stochastic processes. These automata exhibit a wide range of complex behavior and are of interest in a number of fields of study, including mathematical physics, percolation theory, computer science, and neurobiology. Very little has been proved about these models, even in simple cases, so it is common to compare the models to mean field models. It is normally assumed that mean field models are essentially trivial. However, we show here that even the mean field models can exhibit surprising behavior. We prove some rigorous results on mean field models, including the existence of a surrogate for the “energy” in certain non‐reversible models. We also briefly discuss some differences that occur between the mean field and lattice models. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006
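A mean-field treatment replaces the lattice with the assumption that neighbors are independent, which reduces the automaton to an iterated map on the density of “on” cells. A hypothetical two-state example (rates and names are illustrative assumptions):

```python
def mean_field_pca(p_up=0.9, p_down=0.1, x0=0.5, steps=100):
    """Mean-field iteration for a hypothetical two-state probabilistic CA:
    a cell turns on with prob p_up if at least one of its two neighbors is
    on, else with prob p_down. Treating neighbors as independent (the
    mean-field assumption), the density x evolves by a one-dimensional map;
    iterate it toward a fixed point and return the final density."""
    x = x0
    for _ in range(steps):
        any_on = 1 - (1 - x) ** 2     # prob at least one neighbor is on
        x = p_up * any_on + p_down * (1 - any_on)
    return x
```

Even this one-dimensional map can have multiple fixed points or cycles for other rate choices, which is one sense in which mean-field models are less trivial than usually assumed.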

8.
We consider several different bidirectional Whitham equations that have recently appeared in the literature. Each of these models combines the full two‐way dispersion relation from the incompressible Euler equations with a canonical shallow water nonlinearity, providing nonlocal model equations that may be expected to exhibit some of the interesting high‐frequency phenomena present in the Euler equations that standard “long‐wave” theories fail to capture. Of particular interest here is the existence and stability of periodic traveling wave solutions in such models. Using numerical bifurcation techniques, we construct global bifurcation diagrams for each system and compare the global structure of branches, together with the possibility of bifurcation branches terminating in a “highest” singular (peaked/cusped) wave. We also numerically approximate the stability spectrum along these bifurcation branches and compare the stability predictions of these models. Our results confirm a number of analytical results concerning the stability of asymptotically small waves in these models and provide new insights into the existence and stability of large amplitude waves.
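For orientation, the contrast between the full dispersion these models retain and the long-wave truncation they avoid can be written in standard water-wave notation (g gravity, h depth, k wavenumber); this display is background, not taken from the paper:

```latex
c(k)^2 = \frac{g\,\tanh(kh)}{k},
\qquad
c(k) \approx \sqrt{gh}\,\Bigl(1 - \tfrac{1}{6}(kh)^2\Bigr)
\quad (kh \ll 1).
```

The left expression is the full two-way Euler phase speed; the right is its small-kh expansion, the level of approximation at which KdV-type long-wave theories operate and high-frequency effects are lost.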

9.
Despite the recent wave of interest in the social and physical sciences regarding “complexity,” relatively little attention has been given to the logical foundation of complexity measurement. With this in mind, a number of fairly simple, “reasonable” axioms for the measurement of network complexity are here presented, and some of the implications of these axioms are considered. It is shown that the only family of graph complexity measures satisfying the “reasonable” axioms is of limited theoretical utility, and hence that those seeking more interesting measures of complexity must be willing to sacrifice at least one intuitively reasonable constraint. Several existing complexity measures are also described, and are differentiated from one another on an axiomatic basis. Finally, some suggestions are offered regarding future efforts at measuring graph complexity.

10.
Based on a market consisting of one monopoly and several customers who are embedded in an economic network, we study how different levels of perception of the network structure affect the two kinds of participants' welfare, and then provide some strategies for the monopoly to mine information about the network structure. This question embodies the “complex structure and its corresponding functions” problem often raised in the field of complexity science. We apply a two‐stage game to solve for the optimal pricing and consumption at different perception levels of the monopoly, and further use simulation analysis to explore the influence patterns. We also discuss how this theoretical model can be applied to a real‐world problem by introducing the statistical exponential random graph model and its estimation method. Further, the main findings have specific policy implications for uncovering network information and demonstrate that it is possible for the policy‐maker to design win–win mechanisms that raise both the monopoly's profit and the customers' overall welfare at the same time. © 2014 Wiley Periodicals, Inc. Complexity 21: 349–362, 2015

11.
The detection of community structures within network data is a type of graph analysis with increasing interest across a broad range of disciplines. In a network, communities represent clusters of nodes that exhibit strong intra-connections or relationships among nodes in the cluster. Current methodology for community detection often involves an algorithmic approach, and commonly partitions a graph into node clusters iteratively until some stopping criterion is met. Other statistical approaches for community detection often require model choices and prior selection in Bayesian analyses, which are difficult without some amount of data inspection and pre-processing. Because communities are often fuzzily-defined human concepts, an alternative approach is to leverage human vision to identify communities. This work presents a tool for community detection in the form of a web application, called gravicom, which facilitates the detection of community structures through visualization and direct user interaction. In the process of detecting communities, the gravicom application can serve as a standalone tool or as a step to potentially initialize (and/or post-process) another community detection algorithm. In this paper we discuss the design of gravicom and demonstrate its use for community detection with several network data sets. An Appendix describes details of the technical formulation of this web application built on the R package Shiny and the JavaScript library D3.

12.
This comment addresses the problems and difficulties that can arise when formal economic models and constructs, such as the Walras equilibrium and microeconomic demand theory, are applied to purely sociological contexts. In particular, this is done by analyzing a recent attempt, suggested by Braun (1993 and 1994), to extend the well‐known Coleman Model by incorporating the embeddedness of social transactions in incomplete social network structures. Pars pro toto, it is proved that Braun's conceptualization contains some weaknesses which imply that fundamental conclusions drawn in his article have to be revised.

13.
The question of what structures of relations between actors emerge in the evolution of social networks is of fundamental sociological interest. The present research proposes that processes of network evolution can be usefully conceptualized in terms of a network of networks, or “metanetwork,” wherein networks that are one link manipulation away from one another are connected. Moreover, the geography of metanetworks has real effects on the course of network evolution. Specifically, both equilibrium and non-equilibrium networks located in more desirable regions of the metanetwork are found to be more probable. These effects of metanetwork geography are illustrated by two dynamic network models: one in which actors pursue access to unique information through “structural holes,” and the other in which actors pursue access to valid information by minimizing path length. Finally, I discuss future directions for modeling network dynamics in terms of metanetworks.
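The metanetwork construction is concrete enough to enumerate for tiny graphs: encode each simple graph on n labeled nodes as a bitmask over possible edges, and link two graphs when they differ by a single edge flip. A hypothetical sketch:

```python
from itertools import combinations

def metanetwork(n=3):
    """Enumerate the 'metanetwork' of all simple graphs on n labeled nodes,
    linking two graphs when one edge flip (add or delete) maps one to the
    other. Graphs are bitmasks over the possible edges; returns the list of
    graph codes and the list of metanetwork links."""
    edges = list(combinations(range(n), 2))
    graphs = list(range(2 ** len(edges)))
    links = [(g, g ^ (1 << i))
             for g in graphs
             for i in range(len(edges))
             if g < g ^ (1 << i)]          # count each flip pair once
    return graphs, links
```

For n = 3 this is the 3-dimensional hypercube: 8 graphs, each adjacent to 3 others, so 12 metanetwork links; dynamics on the metanetwork are then walks on this hypercube.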

14.
Relational event data, which consist of events involving pairs of actors over time, are now commonly available at the finest of temporal resolutions. Existing continuous‐time methods for modeling such data are based on point processes and directly model interaction “contagion,” whereby one interaction increases the propensity of future interactions among actors, often as dictated by some latent variable structure. In this article, we present an alternative approach to using temporal‐relational point process models for continuous‐time event data. We characterize interactions between a pair of actors as either spurious or as resulting from an underlying, persistent connection in a latent social network. We argue that consistent deviations from expected behavior, rather than solely high frequency counts, are crucial for identifying well‐established underlying social relationships. This study explores these latent network structures in two contexts: one comprising college students and another involving barn swallows.

15.
In this paper, for the first time we analyze the structure of the Italian Airport Network (IAN), looking at it as a mathematical graph, and investigate its topological properties. We find that it has very remarkable features, behaving like a scale-free network, since both the degree and the “betweenness centrality” distributions follow a typical power law known in the literature as a Double Pareto Law. From a careful analysis of the data, the Italian Airport Network turns out to have a self-similar structure. In short, it is characterized by a fractal nature, whose typical dimensions can be easily determined from the values of the power-law scaling exponents. Moreover, we show that, according to the period examined, these distributions exhibit a number of interesting features, such as the existence of some “hubs”, i.e., in graph-theory jargon, nodes with a very large number of links, and others most probably associated with geographical constraints. Also, we find that the IAN can be classified as a small-world network, because the average distance between reachable pairs of airports grows at most as the logarithm of the number of airports. The IAN does not show evidence of “communities”, and this result could be the underlying reason behind the smallness of the clustering coefficient, which is related to the probability that two nearest neighbors of a randomly chosen airport are connected.
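Power-law claims of this kind are usually backed by an estimate of the scaling exponent; a continuous maximum-likelihood (Hill-type) sketch for a degree sequence, where the function name and cutoff handling are assumptions rather than the paper's procedure:

```python
import math

def hill_alpha(degrees, k_min=1):
    """Continuous-MLE (Hill-type) estimate of the exponent alpha for a
    sequence assumed to follow p(k) ~ k**(-alpha) above k_min:
    alpha = 1 + n / sum(log(k_i / k_min))."""
    xs = [d for d in degrees if d >= k_min]
    return 1 + len(xs) / sum(math.log(d / k_min) for d in xs)
```

In practice one also checks sensitivity to the choice of k_min, since scale-free behavior typically holds only in the tail.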

16.
The study reported in this paper investigated perceptions concerning connections between mathematics and science held by university/college instructors who participated in the Maryland Collaborative for Teacher Preparation (MCTP), an NSF-funded program aimed at developing special middle-level mathematics and science teachers. Specifically, we asked (a) “What are the perceptions of MCTP instructors about the ‘other’ discipline?” (b) “What are the perceptions of MCTP instructors about the connections between mathematics and science?” and (c) “What are some barriers perceived by MCTP instructors in implementing mathematics and science courses that emphasize connections?” The findings suggest that the benefits of emphasizing mathematics and science connections perceived by MCTP instructors were similar to the benefits reported by school teachers. The barriers reported were also similar. The participation in the project appeared to have encouraged MCTP instructors to grapple with some fundamental questions, like “What should be the nature of mathematics and science connections?” and “What is the nature of mathematics/science in relationship to the other discipline?”

17.
We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson's monks, providing code where possible so that these analyses may be duplicated.
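ERGMs are driven by sufficient statistics of the observed graph; a stdlib-only sketch computing two classic ones, the edge count and the triangle count (a hypothetical helper for illustration, not the statnet API):

```python
from itertools import combinations

def ergm_stats(edges, n):
    """Sufficient statistics for a simple undirected ERGM specification:
    (edge count, triangle count) of a graph on nodes 0..n-1."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    triangles = sum(1 for a, b, c in combinations(range(n), 3)
                    if b in adj[a] and c in adj[a] and c in adj[b])
    return len(edges), triangles
```

Fitting an ERGM then amounts to finding parameters under which simulated graphs reproduce these observed statistics, which is where the computational burden reviewed above arises.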

18.
Udo Kelle, ZDM, 2003, 35(6): 232–246
The disregard of causal inference in the methodological literature about qualitative research is highly problematic, since the category of causality is closely linked to the concept of social action. However, it is also clear that causal analysis is burdened with certain difficulties and methodological challenges in the realm of social research. Some of these problems are discussed in this article using Mackie's concept of “INUS” conditions. It is thereby shown that strategies of causal analysis based on the comparative methods proposed for qualitative research, namely “Analytic Induction” and “Qualitative Comparative Analysis”, have great difficulties in dealing adequately with these problems. They can only be solved if case-comparative methods are combined with explorative research strategies which support the researcher in gaining access to the local knowledge of the research field.

19.
Many existing statistical and machine learning tools for social network analysis focus on a single level of analysis. Methods designed for clustering optimize a global partition of the graph, whereas projection-based approaches (e.g., the latent space model in the statistics literature) represent in rich detail the roles of individuals. Many pertinent questions in sociology and economics, however, span multiple scales of analysis. Further, many questions involve comparisons across disconnected graphs that will, inevitably, be of different sizes, either due to missing data or the inherent heterogeneity in real-world networks. We propose a class of network models that represent network structure on multiple scales and facilitate comparison across graphs with different numbers of individuals. These models differentially invest modeling effort within subgraphs of high density, often termed communities, while maintaining a parsimonious structure between said subgraphs. We show that our model class is projective, highlighting an ongoing discussion in the social network modeling literature on the dependence of inference paradigms on the size of the observed graph. We illustrate the utility of our method using data on household relations from Karnataka, India. Supplementary material for this article is available online.

20.
The emergency service station (ESS) location problem has been widely studied in the literature since the 1970s, with growing interest especially after the 1990s. Various models with different objective functions and constraints have been proposed in the academic literature, and efficient solution techniques have been developed to provide good solutions in reasonable time. However, no study has systematically classified the different problem types and the methodologies used to address them. This paper presents a taxonomic framework for the ESS location problem from an operations research perspective. In this framework, we consider the type of emergency, the objective function, constraints, model assumptions, modeling, and solution techniques. We also analyze a variety of papers in the literature to demonstrate the effectiveness of the taxonomy and to derive insights into possible research directions.
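Many ESS location models in such a taxonomy are covering formulations; a greedy sketch of a maximal-covering variant, where all names and the Euclidean coverage rule are illustrative assumptions:

```python
def greedy_coverage(stations, demands, radius, k):
    """Greedy heuristic for a maximal-covering station location model:
    pick k candidate sites, each time maximizing the number of newly
    covered demand points within Euclidean distance `radius`.
    Returns the chosen site indices and the total demand covered."""
    def covers(s, d):
        return (s[0] - d[0]) ** 2 + (s[1] - d[1]) ** 2 <= radius ** 2

    uncovered, chosen = set(range(len(demands))), []
    for _ in range(k):
        best = max(range(len(stations)),
                   key=lambda i: sum(covers(stations[i], demands[j])
                                     for j in uncovered))
        chosen.append(best)
        uncovered -= {j for j in uncovered if covers(stations[best], demands[j])}
    return chosen, len(demands) - len(uncovered)
```

Exact formulations solve the same trade-off as an integer program; the greedy pass is a common baseline and warm start for them.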
