Calendar
For the fall 2018 semester, the Computational Social Science (CSS) and Computational Sciences and Informatics (CSI) programs have merged their seminar/colloquium series, in which students, faculty, and guest speakers present their latest research. These seminars are free and open to the public. The series takes place on Fridays from 3:00 to 4:30 p.m. in the Center for Social Complexity Suite, located on the third floor of Research Hall.
If you would like to join the seminar mailing list, please email Karen Underwood.
Computational Social Science Research Colloquium / Colloquium in Computational and Data Sciences
Kieran Marray, Laidlaw Scholar
St. Catherine’s College
University of Oxford
FORTEC: Forecasting the Development of Artificial Intelligence up to 2050 Using Agent-Based Modeling
Friday, August 31, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Kieran is a Laidlaw Scholar from St. Catherine's College, University of Oxford. He has been visiting the Center for Social Complexity over the summer to conduct research in complexity economics, supervised by Professor Rob Axtell.
Due to a welcome reception for new and returning CDS students, there will be no colloquium on Friday, September 7. The next one will be held on Friday, September 14; the speaker and topic will be announced later.
Computational Social Science Research Colloquium / Colloquium in Computational and Data Sciences
William Kennedy, PhD, Captain, USN (Ret.)
Research Assistant Professor
Center for Social Complexity
Computational and Data Sciences
College of Science
Characterizing the Reaction of the Population of NYC to a Nuclear WMD
Friday, September 14, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: This talk will review the status, two years in, of our multi-year project to characterize the reaction of the population of a US megacity to a nuclear WMD event. Our approach has been to develop an agent-based model of the New York City area, with agents representing each of its 23 million people, and to establish a baseline of normal behaviors before exploring the population's reactions to small (5-10 kt) nuclear weapon explosions. We have modeled the environment, the agents, and their interactions, but there have been some challenges in the last year. I'll review our status, successes, and challenges, as well as near-term plans.
Computational Social Science Research Colloquium / Colloquium in Computational and Data Sciences
Michael Eagle, Assistant Professor
Department of Computational and Data Sciences
George Mason University
Understanding Behavior in Interactive Environments: Deriving Meaningful Insights from Data
Friday, September 21, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Advanced learning technologies are transforming education as we know it. These systems provide a wealth of data about student behavior. However, extracting meaning from such datasets is a challenge for researchers, and often impossible for instructors. Understanding learner behavior is critical to finding, extracting, and acting on the insights in educational data. It is equally important to have strong evaluative methodologies to explore the effectiveness of new interventions and to pinpoint when, where, and precisely what students are learning.
This talk covers the ways I have combined human modeling with qualitative and quantitative (statistical and machine learning) methods to enable researchers to make sense of behavior and to produce data-driven personalization. I will focus on modeling humans in interactive problem-solving environments, such as intelligent tutoring systems, online courses, and educational video games. Combining results from experimental design, machine learning, and cognitive models yields large improvements to existing learning systems, as well as powerful insights for instructors and researchers into how students behave and learn in interactive environments.
Computational Social Science Research Colloquium /
Colloquium in Computational and Data Sciences
Robert Axtell, Professor
Computational Social Science Program,
Department of Computational and Data Sciences
College of Science
and
Department of Economics
College of Humanities and Social Sciences
George Mason University
Are Cities Agglomerations of People or of Firms? Data and a Model
Friday, September 28, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Business firms are not uniformly distributed over space. In every country there are large swaths of land on which there are very few or no firms, coexisting with relatively small areas on which large numbers of businesses are located: these are the cities. Since the dawn of civilization, cities have husbanded a variety of business activities; indeed, the raison d'être for the growth of villages into towns and then into cities was often the presence of weekly markets and fairs facilitating the exchange of goods. City theorists today tend to see cities as amalgams of people, housing, jobs, transportation, specialized skills, congestion, patents, pollution, and so on, with the role of firms demoted to merely providing jobs and wages. Reciprocally, very little of the conventional theory of the firm is grounded in the fact that most firms are located in space generally, and in cities specifically. Consider the well-known facts that both firm and city sizes are approximately Zipf distributed. Is it merely a coincidence that the same extreme size distribution approximately describes both firms and cities? Is it the case that skew firm sizes create skew city sizes? Or is it the other way round: do skew cities permit skew firms to arise? Or is it something more intertwined and complex, a coevolution of firm and city sizes, some kind of dialectical interplay of people working in companies doing business in cities? If firm sizes were not heavy-tailed but instead followed, say, an exponential distribution, could giant cities still exist? Or if cities were not so varied in size, as they apparently were not in feudal times, would firm sizes be significantly attenuated? In this talk I develop the empirical foundations of this puzzle, one little emphasized in the extant literatures on firms and cities, probably because these are, for the most part, distinct literatures. I then describe a model of individual people (agents) who arrange themselves into both firms and cities in approximate agreement with U.S. data.
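For reference, the "Zipf distributed" sizes mentioned in this abstract follow the rank-size rule. A minimal statement in standard textbook form (included for context; the notation is conventional and not taken from the talk itself), writing s_r for the size of the r-th largest firm or city:

    % Zipf's law (rank-size rule): size is approximately inversely
    % proportional to rank; alpha near 1 is the classic Zipf case
    % reported for both firm and city sizes.
    s_r \propto r^{-\alpha}, \qquad \alpha \approx 1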
Computational Social Science Research Colloquium / Colloquium in Computational and Data Sciences
Gonzalo Castañeda
Visiting Scholar, Interdisciplinary Center for Economic Science
George Mason University / Centro de Investigación y Docencia Económicas (CIDE), México
How do governments determine policy priorities?
Studying development strategies through spillover networks
Friday, October 5, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Determining policy priorities is a challenging task for any government because there may be, for example, multiple objectives to be attained simultaneously, a multidimensional policy space to be explored, inefficiencies in the implementation of public policies, and interdependencies between policy issues. Altogether, these factors generate a complex landscape that governments need to navigate in order to reach their goals. To address this problem, we develop a framework to model the evolution of development indicators as a political economy game on a network. Our approach accounts for the recently documented network of interactions between policy issues, as well as the well-known political economy problem arising from budget assignment. This allows us to infer not only policy priorities but also the effective use of resources in each policy issue. Using development-indicator data from more than 100 countries over 11 years, we show that the country-specific context is a central determinant of the effectiveness of policy priorities. In addition, our model explains well-known aggregate facts about the relationship between corruption and development. Finally, this framework provides a new analytic tool for generating bespoke advice on development strategies.
Computational Social Science Research Colloquium /
Colloquium in Computational and Data Sciences
Maciej Latek, Chief Technology Officer, trovero.io /
PhD in Computational Social Science, 2011
George Mason University
Industrializing multi-agent simulations:
The case of social media marketing, advertising and influence campaigns
Friday, October 12, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: The systems engineering approaches required to transition multi-agent simulations out of science and into decision support share features with AI, machine learning, and application development, but they also present unique challenges. In this talk, I will use trovero as an example to illustrate how some of these challenges can be addressed.
As a platform that helps advertisers and marketers plan and implement campaigns on social media, trovero comprises two components: social network simulations, used for optimization and automation, and network population synthesis, used to preserve people's privacy while maintaining a robust picture of social media communities. The social network simulations forecast campaign outcomes and pick the right campaigns for given KPIs; simulation is the only viable way to forecast campaign outcomes reliably, because big data methods are fundamentally unfit for social network data. Network population synthesis enables working with aggregate data without relying on data-sharing agreements with social media platforms, which have grown ever more reluctant to share user data with third parties after GDPR and the Cambridge Analytica debacle.
I will outline how these two approaches complement one another, what computational and data infrastructure is required to support them and how workflows and interactions with social media platforms are organized.
Computational Social Science Research Colloquium /
Colloquium in Computational and Data Sciences
Ken Kahn, Senior Researcher
Computing Services
University of Oxford
Agent-based Modelling for Everyone
Friday, October 19, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Agent-based models (ABMs) can be made accessible to a wide audience. A wonderful example is the Parable of the Polygons (https://ncase.me/polygons/), based upon Schelling's segregation model. The challenge isn't simply to provide an interactive simulation to the general public, but to convey how the model works and what assumptions underlie it. The speaker has been involved in three efforts not only to make models understandable, but also to enable people without computer programming experience to get a hands-on understanding of the process of modelling. One project attempted to model the 1918 pandemic in a modular fashion so learners could understand and modify the model. Another was the Epidemic Game Maker, which was created for a Royal Society science exhibition. Finally, a generic browser-based system for creating ABMs by composing and customising pre-built "micro-behaviours" will be described. All of these systems will be demonstrated.
Computational Social Science Research Colloquium / Colloquium in Computational and Data Sciences
Brant Horio
Director, Data Science at LMI / CSS PhD Student
The Pedagogy of Zombies: A Case Study of Agent-Based Modeling Competitions for Introducing Complexity, Simulation, and its Real-World Applications
Friday, October 26, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Complexity is pervasive in our daily lives, and while academic programs exist to explore, interpret, experiment with, and apply these concepts to better understand the mechanics of our social world, the field has yet to be widely recognized in the mainstream consciousness. Are there engaging instructional methods and tools that lower the barrier to entry and draw new scholars into the science of complexity? In this Halloween-themed talk, I present a case study of a simulation modeling competition (and its evolution over several years) that provided preprogrammed agent-based models of a zombie apocalypse. Participants were challenged to explore and formalize human agent behaviors that leveraged the environment and interactions with other human agents to hide, evade, and otherwise prevent a grisly human extinction. I will describe the successes and challenges of this experience and a selection of the most creative solutions. I then describe how the competition concept was extended to contemporary challenges that highlighted potential real-world use cases for participants, including combating the Zika virus and fisheries enforcement by the US Coast Guard. I hope this talk will initiate a dialog about how we might continue similar efforts to more easily introduce and propagate the complexity perspective.
Computational Social Science Research Colloquium /
Colloquium in Computational and Data Sciences
J. Brent Williams
Founder and CEO
Euclidian Trust
Improved Entity Resolution as a Foundation for Model Precision
Friday, November 2, 3:00 p.m.
Center for Social Complexity, 3rd Floor Research Hall
All are welcome to attend.
Abstract: Analyzing behavior, identifying and classifying micro-differentiations, and predicting outcomes all rely on a core foundation of reliable and complete data linking. Whether the data describe individuals, families, companies, or markets, acquiring data from orthogonal sources creates significant matching challenges. These challenges are difficult because attempts to eliminate (or minimize) false positives yield an increase in false negatives, and the converse is true as well.
This discussion will focus on the business challenges of matching data and their primary and compounded impacts on subsequent outcome analysis. Drawing on practical experience, the speaker led the development and first commercialization of a novel approach to "referential matching." This approach leads to a more comprehensive unit data model (patient, customer, company, etc.), which enables greater computational resolution and model accuracy through hyper-accurate linking, disambiguation, and detection of obfuscation. The discussion also covers the impact of enumeration strategies, data obfuscation/hashing, and natural changes in unit data models over time.
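To make the false-positive/false-negative tradeoff described above concrete, here is a minimal, hypothetical sketch of threshold-based record matching (illustrative only, using Python's standard library; this is not the speaker's referential-matching approach, and all names and thresholds are invented):

    # Illustrative only: a toy threshold-based matcher showing how tightening
    # the match threshold trades false positives for false negatives.
    # All records below are hypothetical.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Crude string similarity in [0, 1] via Python's stdlib."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    source_a = ["Jonathan Q. Smith", "ACME Corp.", "Maria Garcia"]
    source_b = ["Jon Smith", "ACME Corporation", "Mario Garcia"]  # a different person

    for threshold in (0.6, 0.8, 0.95):
        matches = [(a, b) for a in source_a for b in source_b
                   if similarity(a, b) >= threshold]
        # At 0.6, "Maria Garcia" links to "Mario Garcia" (a false positive);
        # at 0.95, "ACME Corp." no longer links to "ACME Corporation"
        # (a false negative). No single threshold here avoids both errors.
        print(f"threshold={threshold}: {matches}")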