Calendar
During the fall 2018 semester, the Computational Social Science (CSS) and Computational Sciences and Informatics (CSI) programs merged their seminar/colloquium series, in which students, faculty, and guest speakers present their latest research. These seminars are free and open to the public. The series takes place on Fridays from 3:00 to 4:30 p.m. in the Center for Social Complexity Suite, located on the third floor of Research Hall.
If you would like to join the seminar mailing list, please email Karen Underwood.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Sciences and Informatics
Department of Computational and Data Sciences
College of Science
George Mason University
Suchismita Goswami
BNUS, University of Calcutta, 1990
Master of Science, State University of New York, Stony Brook, 2001
Master of Science, George Mason University, 2013
NETWORK NEIGHBORHOOD ANALYSIS FOR DETECTING
ANOMALIES IN TIME SERIES OF GRAPHS
Tuesday, April 2, 2019, 11:00 a.m.
Research Hall, Room 162
All are invited to attend.
Committee
Igor Griva, Chair
Edward Wegman, Dissertation Director
Jeff Solka
Dhafter Marzougui
Terabytes of unstructured electronic data are generated every day from Twitter networks, scientific collaborations, organizational emails, telephone calls, and websites. Excessive communication in such social networks continues to be a major problem. In some cases, such as the Enron e-mails, frequent contact or excessive activity on interconnected networks was linked to fraudulent activities. In a social network, anomalies can occur as a result of abrupt changes in the interactions among a group of individuals. Analyzing such changes is thus important for understanding the behavior of individuals in a subregion of a network. The motivation of this dissertation is to investigate excessive communications, or anomalies, and make inferences about the dynamic subnetworks involved. Here I present three major contributions of this research toward detecting anomalies in dynamic networks obtained from interorganizational emails.
I develop a two-step scan process to detect excessive activities, invoking the maximum log-likelihood ratio as a scan statistic with overlapping and variable window sizes to rank the clusters. The initial step is to determine the structural stability of the time series, perform differencing and de-seasonalizing operations to make the series stationary, and obtain a primary cluster with a Poisson process model. I then construct neighborhood ego subnetworks around the observed primary cluster and obtain a more refined cluster by invoking the graph invariant betweenness as the locality statistic under a binomial model. I demonstrate that the two-step scan statistics algorithm is more scalable for detecting excessive activities in large dynamic social networks.
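The two-step procedure is specific to this work, but the Poisson scan statistic it builds on can be shown compactly. The sketch below is a simplified illustration rather than the author's implementation: it slides overlapping, variable-width windows over a count series and ranks them by the Poisson log-likelihood ratio against a flat baseline (all names and parameter values are hypothetical).

```python
# Illustrative Poisson scan statistic over a 1-D count series (e.g., daily e-mail counts).
# Not the dissertation's two-step algorithm; a minimal sketch of the LLR ranking idea.
import numpy as np

def poisson_llr(c_in, mu_in, c_tot, mu_tot):
    """Kulldorff-style log-likelihood ratio for a candidate window."""
    c_out, mu_out = c_tot - c_in, mu_tot - mu_in
    if c_in <= mu_in:                    # only score windows with an excess of events
        return 0.0
    llr = c_in * np.log(c_in / mu_in)
    if c_out > 0:
        llr += c_out * np.log(c_out / mu_out)
    return llr

def scan(counts, min_w=3, max_w=14):
    counts = np.asarray(counts, dtype=float)
    c_tot = counts.sum()
    mu = np.full_like(counts, c_tot / len(counts))   # flat baseline for illustration only
    candidates = []
    for w in range(min_w, max_w + 1):                # overlapping, variable window sizes
        for start in range(len(counts) - w + 1):
            c_in = counts[start:start + w].sum()
            mu_in = mu[start:start + w].sum()
            candidates.append((poisson_llr(c_in, mu_in, c_tot, mu.sum()), start, w))
    return sorted(candidates, reverse=True)[:5]      # top-ranked candidate clusters

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = rng.poisson(5, 120)
    series[60:67] += 15                              # injected burst of activity
    print(scan(series))
```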
I implement multivariate time series models, for the first time, to detect a group of influential people associated with excessive communications, something that cannot be assessed using scan statistics models. I employ a vector autoregressive (VAR) model on time series of subgraphs, constructed using the graph edit distance, since the nodes or vertices of the subgraphs are interrelated. Anomalies are flagged where residuals from the fitted time series models exceed three times the standard deviation.
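As a rough illustration of the residual-threshold rule, not the graph-edit-distance pipeline itself, a VAR model can be fitted with statsmodels and time points flagged where any residual exceeds three standard deviations; the series below is synthetic and the column names are placeholders.

```python
# Minimal sketch: fit a VAR model to a multivariate series and flag time points
# whose residuals exceed three standard deviations. Synthetic data stand in for
# the dissertation's graph-edit-distance series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, k = 300, 3
data = pd.DataFrame(rng.normal(size=(T, k)).cumsum(axis=0),
                    columns=[f"subgraph_{i}" for i in range(k)])
data.iloc[200] += 8                          # injected shock to illustrate detection

results = VAR(data.diff().dropna()).fit(maxlags=5, ic="aic")
resid = results.resid
threshold = 3 * resid.std()                  # per-series 3-sigma thresholds
anomalies = resid.index[(resid.abs() > threshold).any(axis=1)]
print(anomalies)
```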
Finally, I devise a new method for detecting excessive topic activities from the unstructured text of e-mail contents by combining probabilistic topic modeling and scan statistics algorithms. I first identify the major topics discussed using latent Dirichlet allocation (LDA) modeling, and then apply scan statistics to detect excessive topic activities using the largest log-likelihood ratio in the neighborhood of the primary cluster.
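The first stage of this combination can be sketched with scikit-learn's LDA on a toy corpus; the scan-statistics stage over per-period topic counts is omitted, and the e-mails below are invented placeholders, not Enron data.

```python
# Sketch of the topic-modeling stage only: recovering major topics from raw e-mail
# text with LDA (scikit-learn). The corpus here is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

emails = [
    "quarterly trading report attached please review",
    "meeting rescheduled to friday conference room",
    "gas trading positions exceed approved limits",
    "lunch plans and weekend travel",
]
X = CountVectorizer(stop_words="english").fit_transform(emails)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)          # per-e-mail topic mixture
print(doc_topics.argmax(axis=1))       # dominant topic per e-mail
```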
These processes provide new ways of detecting excessive communications and topic flow through the influential vertices in dynamic networks, and they can be employed in other dynamic social networks to critically investigate excessive activities.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
Gary Keith Bogle
Bachelor of Arts, University of California, Davis, 1990
Master of Arts, University of Illinois at Urbana-Champaign, 1995
Master of Science, Marymount University, 2003
Polity Cycling in Great Zimbabwe via Agent-Based Modeling:
The Effects of Timing and Magnitude of External Factors
Thursday, April 11, 2019, 1:00 p.m.
Research Hall, Room 92
All are invited to attend.
Committee
Claudio Cioffi-Revilla, Chair
William Kennedy
Amy Best
This research explores polity cycling at the site of Great Zimbabwe. It rests on laying out the possibilities that may explain what is seen in the archaeological record by modeling which external factors, operating at specific times and magnitudes, could cause a rapid rise and decline in the polity. This is explored in terms of the attachment that individuals feel toward the small groups of which they are a part, and the change in this attachment in response to their own resources and the history of success the group enjoys in conducting collective action. The model presented in this research is based on the Canonical Theory of politogenesis. It is implemented as an agent-based model, since this type of model excels at generating macro-level behavior from micro-level decisions. The results cover the relationship between environmental inputs and the pattern of growth and decline of groups, the differences in group fealty and resources between successful and unsuccessful groups, the change in the number of groups throughout the simulation, and the relationship between the probability of success in collective action and the success of the groups themselves. The input parameters to the model presented here are the collective action frequency (CAF) and the environmental effect multiplier. The results show that a prehistoric polity can be modeled to demonstrate a sharp rise and fall in community groups and that this rise and fall emerges from individual decision-making. Different sets of input parameters represent different environmental conditions, from stable and predictable, to less stable, to quite unpredictable. Regardless of the environmental variability, the overall value of fealty experienced by community members moves in a similar fashion for all input sets. However, the more stable environment of Set A means the overall feelings of attachment to leadership do not fall as fast as they do in the more variable environments. In all, there is a two-stage process in which members of the community are sorted into the surviving groups. Success in collective action leads to overall group success. The significance of this research is that it provides a basis for understanding that, while the archaeological record is incomplete, what happened at Great Zimbabwe lies within the range of what has happened in other areas. What seems at first glance to be unusual can be explained through expected environmental and social factors that affected prehistoric societies on other continents. Furthermore, this research provides a basis for further quantifying the analysis of prehistoric societies by offering a model that lays out external factors in terms of collective action frequencies and environmental effect multipliers.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
John Bjorn Nelson
Bachelor of Science, University of Maryland, 2007
A Computational Model of Belief System Construction and
Expression with Applications to American Democracy
Friday, April 12, 2019, 1:30 p.m.
Research Hall, Room 162
All are invited to attend.
Committee
Claudio Cioffi-Revilla, Chair
William G. Kennedy
Jennifer N. Victor
This is a dissertation about people and their beliefs. It asks: how do beliefs form? Why do they change? How does the environment affect their construction? What is the relationship between asocial experiences and the social exchange of information about them? And how do beliefs affect social structure? To interrogate these questions, I build an agent-based model with agent-to-nature and agent-to-agent interaction spaces. The payoff distributions associated with each context-action pair in nature are homogeneous; however, agent exposure rates are heterogeneous. The agent-to-agent interactions allow for social information exchange, facilitating the discovery of the best contexts and actions for selection. All agent expressions are sincere. However, to guard against integrating errors, agents sample dynamic stereotypes over overt traits as proxies for the reliability of their experiential counterparts. An expression is more receivable when aligned with social expectations than when it is not. This creates a recursive relationship whereby stereotypes affect beliefs and beliefs affect stereotypes. I implement three stereotyping strategies and six different environments. The three stereotyping strategies (prosocial, informative, and discriminatory) operationalize different assumptions about social information processing. Five of the environments progressively increase inherent structure. The sixth introduces broadcasts, which synchronize contextual salience in social interactions.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Sciences and Informatics
Department of Computational and Data Sciences
College of Science
George Mason University
Thomas P. Boggs
Master of Science, George Mason University, 2002
Master of Science, Virginia Tech, 1994
Bachelor of Arts, Virginia Tech, 1992
Bachelor of Science, Virginia Tech, 1991
Probabilistic Topic Modeling for Hyperspectral Image Classification
Monday, April 22, 2019, 3:00 p.m.
Exploratory Hall, Room 3301
All are invited to attend.
Committee
Jason Kinser, Chair
Igor Griva
Ronald Resmini
Robert Weigel
Probabilistic topic models are a family of mathematical models used primarily to identify latent topics in large collections of text documents. This research adapts the topic modeling approach to the unsupervised classification of hyperspectral images. By treating image pixels as text documents and quantizing the data in each spectral band to develop a spectral feature vocabulary, it is demonstrated that applying latent Dirichlet allocation (LDA) to a hyperspectral image corpus yields learned topics that can produce unsupervised classification results which often match ground truth better than the commonly used k-means algorithm. The topic modeling approach is also shown to extend easily to the classification of image regions by aggregating spectral features over spatial windows. The region-based document models are shown to account for the spectral covariance and heterogeneity of ground-cover classes, resulting in similarity to land-use ground truth that increases monotonically with window size.
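As a schematic of the pixel-as-document idea, not the dissertation's code, each pixel's values can be quantized per band into discrete "words" and the resulting count matrix passed to LDA; the cube size, quantization level, and topic count below are arbitrary assumptions.

```python
# Toy sketch: quantize each spectral band into discrete levels, treat each pixel as a
# "document" of (band, level) words, and fit LDA. Synthetic cube; 10 levels assumed.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
rows, cols, bands, levels, topics = 20, 20, 50, 10, 4
cube = rng.random((rows, cols, bands))

# Quantize band-wise, then one-hot count the (band, level) words per pixel.
q = np.floor(cube * levels).clip(max=levels - 1).astype(int)       # (rows, cols, bands)
pixels = q.reshape(-1, bands)                                       # one row per pixel
word_ids = np.arange(bands) * levels + pixels                       # word index per band
counts = np.zeros((pixels.shape[0], bands * levels), dtype=int)
np.put_along_axis(counts, word_ids, 1, axis=1)

lda = LatentDirichletAllocation(n_components=topics, random_state=0).fit(counts)
labels = lda.transform(counts).argmax(axis=1).reshape(rows, cols)   # unsupervised class map
print(np.unique(labels, return_counts=True))
```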
Multiresolution wavelet decompositions of pixel reflectance spectra are used to develop a novel feature vocabulary that aligns more naturally with material absorption and reflectance features, further improving classification results. The wavelet-based document modeling approach is evaluated against synthetic image data, a small AVIRIS image with 16 ground truth classes, and finally practical-sized, overlapping AVIRIS and Hyperion images to demonstrate the utility of the models. Multiple wavelet bases and numbers of quantization levels are considered, and for the data sets evaluated it is determined that the Haar wavelet with 10 quantization levels yields the best performance while also producing easily interpretable topics. It is demonstrated that by omitting low-level wavelet coefficients, vocabulary size and model inference time can be significantly reduced without loss of accuracy.

The wavelet-based approach is extended by replacing quantization levels with simple thresholds for positive and negative wavelet coefficients, reducing the vocabulary size to two times the number of wavelet coefficients. The thresholded wavelet model provides accuracy comparable to the quantized wavelet model while having significantly shorter inference time and supporting easily interpretable visualization of topics in the wavelet domain. By establishing appropriate model hyperparameters and omitting low-level wavelet coefficients, the thresholded wavelet model provides better unsupervised classification results than the previously developed quantized band models, has shorter model parameter estimation time, and has an average document word count smaller by a factor of 5 and a vocabulary smaller by a factor of 10.
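A minimal sketch of such a thresholded wavelet vocabulary, assuming PyWavelets and a simple sign-based rule in which each retained Haar coefficient contributes one of two words; the example spectrum and the number of omitted levels are placeholders, not the dissertation's settings.

```python
# Sketch of a thresholded wavelet vocabulary: each retained Haar coefficient contributes
# one of two words (positive / negative), so vocabulary size is 2 x n_coefficients.
import numpy as np
import pywt

def wavelet_words(spectrum, wavelet="haar", omit_levels=1):
    coeffs = pywt.wavedec(spectrum, wavelet)          # multiresolution decomposition
    if omit_levels:
        coeffs = coeffs[:-omit_levels]                # drop the finest detail coefficients
    flat = np.concatenate(coeffs)
    # word 2*i for a positive coefficient i, word 2*i + 1 for a negative one
    return [2 * i + (c < 0) for i, c in enumerate(flat)]

spectrum = np.cos(np.linspace(0, 6, 224))             # stand-in for a pixel's reflectance
print(wavelet_words(spectrum)[:10])
```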
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
Matthew Oldham
Bachelor of Economics with Honours, University of Tasmania, 1995
Master of Arts in Interdisciplinary Studies, George Mason University, 2016
The Utilization of Computational Social Science
for the Benefit of Finance
Wednesday, April 24, 2019, 9:00 a.m.
Exploratory Hall, Room 3301
All are invited to attend.
Committee
Robert Axtell, Chair
Andrew Crooks
Edward Lopez
Richard Bookstaber
The ability to identify the mechanisms responsible for the behavioral characteristics of financial markets has remained an elusive pursuit. Further, the precise behavioral characteristics of financial markets remain a point of contention. Some practitioners proclaim that markets are efficient and that the return profile of financial assets follows a Gaussian-distributed random walk, while others suggest that markets are not efficient, with returns tending to be heavily skewed and markets recording extreme outlying events at a greater rate than the efficient school prescribes. A feasible explanation for why financial markets behave as they do is that they are a complex adaptive system (CAS), an approach in which investors and firms are considered heterogeneous interacting agents (HIA), in contrast to the single representative agent approach used in the efficient market (neoclassical economic) paradigm.
Firstly, this dissertation provides an overview of the basis of the efficient market framework (EMF) before presenting the need to pursue alternative methods. The principal alternative discussed is the use of Computational Social Science (CSS) tools to treat financial markets as a CAS. The primary impetus for the approach is that the statistical imprint of a CAS, power-law distributions, is found in asset returns and various other economic variables related to financial markets, including the distributions of shareholders and firm size. Of the various CSS tools, the remainder of the dissertation presents two agent-based models aimed at addressing a variety of research questions, with a common theme of quantifying the effects of agents placing an increased focus on short-term factors, a phenomenon known as “short-termism.”
The first model considers the effects of investors forming an information network with each other in an agent-based artificial stock market. Agents try to improve their investment performance by adjusting their connections, a process that involves cutting ties with agents who provide poor-quality information and connecting to better-performing investors. The crucial elements in the model are the timeframe over which the agents consider their performance, the interval between rewirings of their connections, and their tendency to follow the advice of their connections over other information sources. Varying these elements yields meaningful insights into the dynamics driving the behavior of financial markets, with the presence of even a small proportion of short-term investors being responsible for a material increase in market volatility. A similar result occurred after reducing the interval between adjustments of the investors' information network.
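A hedged sketch of this rewiring rule, assuming a NetworkX graph and placeholder performance scores rather than the dissertation's model: each investor cuts the tie to its worst-performing information source and links to the best performer it is not yet connected to.

```python
# Illustrative rewiring step for an information network (not the dissertation's model):
# drop the worst-performing neighbor, connect to the best non-neighbor.
import networkx as nx
import random

random.seed(0)
G = nx.watts_strogatz_graph(n=50, k=4, p=0.1)              # initial information network
performance = {i: random.random() for i in G.nodes}         # placeholder returns

def rewire(G, performance):
    for agent in list(G.nodes):
        neighbors = list(G.neighbors(agent))
        if not neighbors:
            continue
        worst = min(neighbors, key=performance.get)
        candidates = [n for n in G.nodes if n != agent and not G.has_edge(agent, n)]
        if candidates:
            best = max(candidates, key=performance.get)
            G.remove_edge(agent, worst)                      # cut the poor-quality tie
            G.add_edge(agent, best)                          # connect to a better performer

rewire(G, performance)
print(nx.number_of_edges(G))
```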
An ambitious research agenda underlies the implementation of the second model. The foundation for the model stems from the growing concern that the management of publicly listed firms is becoming preoccupied with the share price of their firm, thereby placing an increased, and non-optimal, focus on short-term earnings. Addressing this issue required expanding the existing agent-based artificial stock market approach to include many firms whose earnings are endogenously influenced by the market. To achieve the required expansion, the model has firms maintain growth expectations, which they adjust after factoring in their most recent performance against those expectations and the movement in their firm's share price. Firms must also allocate their limited resources between growing sales and growing margins. In terms of the investors, the model considers various investment styles, with individual styles and combinations responsible for generating greater volatility in the market and more extreme adjustments by management. Before undertaking the extensions, an extensive set of data relating to the size, growth, and performance of globally listed firms was collected and assessed. Consistent with previous research, the distributions, apart from growth, were heavily skewed. The growth distributions were found to be somewhat consistent with Laplace distributions, which are the existing benchmark for growth distributions.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
E. André L’Huillier M.
Bachelor of Arts, Universidad Adolfo Ibañez, 2010
Master of Arts, Universidad Adolfo Ibañez, 2012
Blockbuster Emergence in Entertainment Platform Markets: Modeling
the History of the Video Game Industry in North America
Tuesday, April 30, 2019, 1:00 p.m.
Exploratory Hall, Room 3302
All are invited to attend.
Committee
Robert Axtell, Chair
Marshall Van Alstyne
William Kennedy
Eduardo López
Entertainment markets are typically dominated by blockbusters, which are characterized by being highly popular and financially successful amid a vast majority of failures. Today, the entertainment industry has shifted to a platform model, where a similar concentration occurs. The expanding and disruptive spread of multi-sided business organization has modified many cultural markets, reshaping the way products are created, delivered, and consumed. Nevertheless, entertainment platforms still depend heavily on the existence of blockbusters. I study the history of the video game industry with particular attention to the life cycle of platforms and blockbuster emergence. After an empirical analysis of the home console market and a literature review of its history, an agent-based model of the video game market is presented. The model aims to represent the complex behavior of the market's heterogeneous actors. The design is based on platform economics, diffusion through social networks, and social influence, with an emphasis on decision making under high uncertainty. Results of the model successfully reproduce the main dynamics of the market in a simple behavioral representation. The simulation experiments indicate that peer influence in a multi-sided organization is sufficient to reproduce the industry's life cycles, its high concentration, and its extreme uncertainty. Furthermore, results of the model display the combined effect of promotion and word of mouth, particularly how mass promotion raises expectations while the tipping force of adoption usually depends on social influence. Although the model is able to reproduce the emergence of blockbusters and market concentration in a completely uncertain market, the rule-based nature of its structure allows for future experiments that consider installed-base factors, quality, or asymmetries in market power.

After the initial results of the base model, a series of extensions is presented to address additional issues of blockbuster formation in entertainment platforms. The extensions focus on the role of market segmentation in quality perception, the effect of uncertainty and consumer perception, and finally an exploration of basic aspects of platform management. Results for the extension on consumer preferences and product features present the complex interaction between sub-groups in the formation of positive expectations and market concentration, where partial diversity of game properties is better than either extreme (i.e., fully heterogeneous or identical). Results of the consumer perception experiments also provide evidence of a non-linear effect on adoption and market behavior; with higher perception, consumers are able to discriminate earlier, without the generation of sufficient hype to form blockbusters or platform participation. Finally, the platform management section addresses matters of time of release, multi-homing, and a price structure prototype. In general, results for these extensions show an important effect of externalities among platform operations (e.g., the mutual hype generated when consumers multi-home or when platforms release at closer dates). Future research on entertainment platforms should consider an empirical approach to describing preference and product heterogeneity, which may further inquire into a critical review of quality in markets with high uncertainty. Finally, the insights of the model are useful for the study of other markets beyond video games or the entertainment business. The insights provided and the model's framework are relevant to any multi-sided system that exhibits dominant herd behavior based on decision uncertainty, such as social media or platforms for collective action.
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
Ross Jeffrey Schuchard
Bachelor of Science, United States Military Academy, 2004
Master of Arts in Interdisciplinary Studies, George Mason University, 2015
Examining Adaptation in Complex Online Social Systems
Tuesday, June 18, 2019, 10:00 a.m.
Research Hall, Showcase
All are invited to attend.
Committee
Andrew Crooks, Chair
Robert Axtell
Arie Croitoru
Anthony Stefanidis
A. Trevor Thrall
Online social systems, comprised of social media services and platforms including social networking (e.g. Facebook, LinkedIn), microblogging (e.g. Twitter, Sina Weibo) and crowdsourcing (e.g. Wikipedia, OpenStreetMap) applications, continue to gain traction among an ever-increasing global user base. The growing reliance upon online social systems to augment an individual’s daily workflow and the resulting interdependence between human and technical systems provide sufficient evidence to classify them as socio-technical systems. These interdependencies are complex in nature and are best defined from a complex adaptive system (CAS) perspective.
It is through a CAS lens that this dissertation examines two types of adaptation in online social systems using an array of Computational Social Science (CSS) tools. In the first type of adaptation, human actors are no longer the sole participants in online social systems, since social bots, or automated software mimicking humans, have emerged as potential threats to stifle or amplify certain online conversation narratives. This section of the dissertation addresses adaptation to these new types of actors by presenting a novel social bot analysis framework designed to determine the pervasiveness and relative importance of social bots within various online conversations. In the second form of adaptation, individual citizens and government entities modify their behaviors in relation to each other through censorship circumvention or detection. This section of the dissertation investigates the rise of digital censorship in online social systems, creating a new agent-based model inspired by the findings from an evaluation of a Turkish digital censorship campaign.
The social bot analysis framework results consistently showed that while users identified as social bots comprised only a small portion of the total accounts in the research corpus, they accounted for a disproportionately large share of the prominent centrality rankings across all observed online conversations. Furthermore, bot classification results obtained from multiple bot detection platforms exhibited minimal overlap, affirming that different bot detection algorithms focus on different types of bots. Finally, the results of the Turkish digital censorship campaign showed marginal effectiveness, as some Turkish citizens circumvented the censorship policies, highlighting an individual decision cycle in which citizens risk punishment to engage in online activities. The recognition of this citizen decision cycle served as the basis for the adaptation-to-digital-censorship model, which used empirical evidence to stylize and template a simulated censorship environment.
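As a schematic of the centrality-ranking step, not the dissertation's framework, one can build a directed conversation graph, rank accounts by in-degree centrality, and measure the share of bot-labeled accounts among the top-ranked nodes; the edge list and bot labels below are invented.

```python
# Illustrative check of bot prominence in a conversation graph: rank accounts by
# in-degree centrality and compute the bot share among the top-ranked accounts.
import networkx as nx

edges = [("bot_1", "user_a"), ("user_a", "bot_1"), ("user_b", "bot_1"),
         ("user_c", "user_a"), ("user_d", "bot_2"), ("user_e", "bot_2")]
bots = {"bot_1", "bot_2"}                      # labels from a bot-detection service

G = nx.DiGraph()
G.add_edges_from(edges)                        # edge u -> v: u retweets/mentions v
centrality = nx.in_degree_centrality(G)
top = sorted(centrality, key=centrality.get, reverse=True)[:3]
bot_share = len([a for a in top if a in bots]) / len(top)
print(top, bot_share)
```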
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Social Science
Department of Computational and Data Sciences
College of Science
George Mason University
Thomas Dietrich Pike
Bachelor of Arts, University of Arizona, 1999
Master of Arts, American Military University, 2009
Master of Science, National Intelligence University, 2010
Standardizing Complexity: Doctrine and Computation for Integrated Campaigning
Friday, June 21, 2019, 1:00 p.m.
Research Hall, Room 92
All are invited to attend.
Committee
Robert Axtell, Chair
Patrick Gillevet
William G. Kennedy
This dissertation examines the integration of complexity theory and computational tools into U.S. foreign policy. It identifies ways to improve the Department of Defense's main analytic framework to ensure a more accurate reflection of complex systems, and it provides a holistic assessment of the integration of computational tools into Joint campaigns. Based on this analysis, this dissertation advocates the incorporation of Agent Based Models (ABMs) as simulations to support both analysis and foreign policy development at all levels of the foreign policy enterprise. To aid this integration, two Mesa-based ABM libraries are provided: (1) Multi-level Mesa, the first Python-based multi-level library to facilitate the integration and evolution of layered adaptive networks; this library goes beyond existing multi-level libraries by providing greater user flexibility and allowing for the integration and adaptation of more complex networks. (2) Distributed Space Mesa, a first attempt at starting a distributed Mesa meta-library; this library provides modest time improvements to spatial Mesa ABMs and critical lessons for the continued development of a suite of distributed Mesa libraries.
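Multi-level Mesa and Distributed Space Mesa have their own interfaces, which are not reproduced here. For orientation only, the sketch below shows a plain Mesa model of the kind those libraries extend, assuming the pre-3.0 Mesa API (Agent, Model, and the RandomActivation scheduler); the agents and their interaction rule are hypothetical.

```python
# A plain Mesa model (pre-3.0 API) of the kind Multi-level Mesa and Distributed Space
# Mesa extend; it is not an example of either library's own interface.
from mesa import Agent, Model
from mesa.time import RandomActivation

class Actor(Agent):
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.resources = 1.0

    def step(self):
        # Toy interaction: pass a small amount of resources to a random other actor.
        other = self.model.random.choice(self.model.schedule.agents)
        if other is not self:
            self.resources -= 0.1
            other.resources += 0.1

class Campaign(Model):
    def __init__(self, n=50):
        super().__init__()
        self.schedule = RandomActivation(self)
        for i in range(n):
            self.schedule.add(Actor(i, self))

    def step(self):
        self.schedule.step()

model = Campaign()
for _ in range(20):
    model.step()
```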
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Sciences and Informatics
Department of Computational and Data Sciences
College of Science
George Mason University
Christine Harvey
Bachelor of Science, Stockton University, 2011
Master of Science, Stockton University, 2013
Modeling, Simulation, and Analysis of the US Organ Transplant System
Tuesday, October 29, 2019, 2:00 p.m.
Exploratory Hall, Room 3301
All are invited to attend.
Committee
Robert Weigel, Dissertation Director
Andrew Crooks, Committee Chair
Hamdi Kavak
James Gentle
Analysis, modeling, and simulation of organ transplantation and donation can enhance the understanding of this complex system and guide strategic policy improvements. Four major research questions are addressed in this work: (1) how can we further enable data-driven research of the transplant system for future scientists?; (2) what demographic factors influence donations and access to transplantation?; (3) how do laws and policies affect organ donations?; and (4) how do certain patient advantages impact the overall system as well as those lacking advantages?
A data pipeline and associated software were developed and published to further enable data-driven research of the transplant system for future scientists. The software simplifies access to and analysis of data from the proprietary Organ Procurement and Transplantation Network (OPTN) Standard Transplant Analysis and Research (STAR) files by converting them to an open-source database format. These files contain data on every organ donor, waitlist registrant, and transplant recipient in the US since 1987. This data pipeline directly facilitated the next phase of research, an analysis of the transplant system using this dataset. The exploratory data analysis scales transplant data to the relative populations to gain a better understanding of the differences between demographic groups, and it reveals important differences across education levels, gender, race, and ethnicity.
Demographic factors influencing organ donation and access to transplants are analyzed through exploratory visualizations and predictive modeling. A visual exploratory analysis examines demographic features of organ donors and highlights differences in intersectional data across the population of donors compared to the relative population described by the US Census. Additionally, a random forest model is used to examine the features of patients on the waitlist for a kidney transplant and to determine whether certain attributes may inadvertently drive the allocation system. This model predicts patient outcomes from the features represented in the model with an accuracy above the zero-rule baseline. The analysis found that patient age, year of listing, body weight, and zip code are important factors in determining a patient's outcome; other demographic factors such as race and gender were not important prediction features.
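A minimal sketch of this kind of comparison, assuming scikit-learn and synthetic placeholder data rather than the OPTN/STAR files: a random forest is trained on a few illustrative features and scored against a zero-rule (majority-class) baseline, with feature importances reported.

```python
# Not the dissertation's pipeline: a random forest vs. a zero-rule baseline on
# synthetic stand-in features (the feature names are illustrative, not the STAR schema).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(18, 80, n),      # age at listing (years)
    rng.integers(1995, 2020, n),  # year of listing
    rng.normal(80, 15, n),        # body weight (kg)
])
y = rng.integers(0, 2, n)         # outcome label (e.g., transplanted vs. not)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)   # zero-rule
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("zero-rule accuracy:", accuracy_score(y_te, baseline.predict(X_te)))
print("random forest accuracy:", accuracy_score(y_te, forest.predict(X_te)))
print("feature importances:", forest.feature_importances_)
```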
State and local laws, policies, and their impact on organ donation are evaluated through a statistical analysis that compares donations after the implementation of a policy to areas without the policy implementation. A database of state and local laws and policies and the years of implementation was developed to compare donations across the country. The results demonstrated that some policies can be correlated with a change in donation, but only for certain demographic subgroups in a population.
Finally, I built discrete event simulation models of a representative patient population to determine the impact of changes to the transplant system that cannot be easily demonstrated in the real world. A transplant process model was developed to determine how increasing living and deceased donation, overall and within racial sub-groups, would impact the number of donors each year. Additionally, an agent-based queuing model was used to understand the impact of allowing patients to register in more than one area. This model provides a valuable tool for examining policy changes, showing the global and local impacts of multiple listing. The analysis found that multiply listed patients have improved access to transplants and are less likely to die while waiting for a transplant.
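As a toy illustration of the multiple-listing effect, not the dissertation's agent-based queuing model, the sketch below runs a crude discrete-time waitlist simulation in which organs arrive at two regions and multiply listed patients can accept an offer from either; all rates and rules are arbitrary assumptions.

```python
# Toy waitlist simulation: multiply listed patients sit on two regional queues and
# therefore tend to be transplanted sooner. Rates and rules are made up for illustration.
import random

random.seed(0)
patients = [{"id": i, "multi": random.random() < 0.2, "day": None} for i in range(200)]
waitlists = {"A": [p for p in patients],                    # everyone lists in region A
             "B": [p for p in patients if p["multi"]]}      # multiply listed also in B

for day in range(365):
    for region, queue in waitlists.items():
        if queue and random.random() < 0.3:                 # an organ offer arrives
            queue[:] = [p for p in queue if p["day"] is None]   # drop already-served patients
            if queue:
                queue.pop(0)["day"] = day                   # first waiting patient is served

def mean_wait(group):
    served = [p["day"] for p in group if p["day"] is not None]
    return sum(served) / len(served) if served else float("inf")

print("multi-listed mean wait:", mean_wait([p for p in patients if p["multi"]]))
print("single-listed mean wait:", mean_wait([p for p in patients if not p["multi"]]))
```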
Notice and Invitation
Oral Defense of Doctoral Dissertation
Doctor of Philosophy in Computational Sciences and Informatics
Department of Computational and Data Sciences
College of Science
George Mason University
Swabir Silayi
Bachelor of Science, Fatih University, 2009
Master of Science, Fatih University, 2011
Master of Science, Old Dominion University, 2013
ELECTRONIC STRUCTURE AND DYNAMICS ANALYSIS OF NOBLE METALS BY A TIGHT-BINDING PARAMETRIZATION
Wednesday, December 04, 2019, 3:00 p.m.
Exploratory Hall, Room 3301
All are invited to attend.
Committee
Dr. Estela Blaisten, Committee Chair
Dr. Dimitrios A. Papaconstantopoulos
Dr. James Glasbrenner
Dr. Eduardo Lopez
Theoretical studies of the properties of materials are important, as they serve to narrow the focus of what are normally time-consuming and costly experimental searches. In modeling these materials, first-principles density functional methods have proven to be quite effective. They have the drawback, however, of being computationally expensive, and to mitigate this, faster approaches such as the tight-binding model have been developed.
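The NRL-TB parametrization itself is beyond a short example, but the underlying tight-binding idea can be illustrated with the textbook one-band, nearest-neighbor chain, whose dispersion is E(k) = eps0 + 2t cos(ka); the sketch below simply evaluates that band numerically with arbitrary parameter values.

```python
# Textbook one-band nearest-neighbor tight-binding chain, E(k) = eps0 + 2*t*cos(k*a).
# This illustrates the tight-binding idea only; it is not the NRL-TB parametrization.
import numpy as np

eps0, t, a = 0.0, -1.0, 1.0                    # on-site energy, hopping, lattice constant
k = np.linspace(-np.pi / a, np.pi / a, 201)    # first Brillouin zone
E = eps0 + 2.0 * t * np.cos(k * a)             # band dispersion

print("bandwidth:", E.max() - E.min())         # equals 4|t| for the 1-D chain
```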
We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to study the electronic and mechanical properties of the noble metals. The tight-binding Hamiltonians are determined from a fit that uses a non-orthogonal basis and reproduces the electronic structure and total energy values of first-principles linearized augmented plane wave calculations. In order to perform molecular dynamics simulations, we developed new TB parameters that work well at smaller interatomic distances. We analyze fcc, bcc, and sc periodic structures, and we demonstrate that the TB parameters are transferable and robust for calculating additional dynamical properties to which they had not been fitted.
To do this, we calculated phonon frequencies and density of states at finite temperature and performed simulations to determine the coefficients of thermal expansion and the atomic mean squared displacement. The energies for vacancy formation were also calculated as were the binding energies for fcc-based, bcc-based and icosahedral clusters of different sizes. The results compared very well with experimental observations and independent first-principles density functional calculations.
Extending from the single-element systems, we develop parameter sets for the Cu-Ag and Ag-Au noble metal binary alloys as well. These parameters were fit to ordered binary structures (B2, L1₀, and L1₂, i.e., AB, A₃B, and AB₃ compositions), with A and B representing the different combinations of Cu, Ag, and Au, in addition to the fcc Cu, Ag, and Au.
As an output of this extension to the binary systems, the following quantities were reproduced in good agreement with available experimental and theoretical values: elastic constants, densities of electronic states, and the total energies of additional crystal structures that were not included in the original first-principles database. We also used this TB parametrization for the alloy systems to successfully perform molecular dynamics simulations and determined the energies of vacancy formation, the temperature dependence of the coefficient of thermal expansion, the mean squared displacement, and phonon spectra. In addition, we show that these TB parameters work for determining binding energies and bond lengths of Cu-Ag fcc-like clusters.