Our mission is to help computational modelers at all levels engage in the establishment and adoption of community standards and good practices for developing and sharing computational models. Model authors can freely publish their model source code in the Computational Model Library alongside narrative documentation and open science metadata, following emerging open science norms that facilitate software citation, reproducibility, interoperability, and reuse. Model authors can also request peer review of their computational models to receive a DOI.
All users of models published in the library must cite model authors when they use and benefit from their code.
Please check out our model publishing tutorial and contact us if you have any questions or concerns about publishing your model(s) in the Computational Model Library.
We also maintain a curated database of over 7,500 publications of agent-based and individual-based models, with detailed metadata on code availability and bibliometric information on the landscape of ABM/IBM publications, that we welcome you to explore.
Displaying 10 of 1203 results
The model is an agent-based artificial stock market where investors connect in a dynamic network. The network is dynamic in the sense that the investors, at specified intervals, decide whether to keep their current advisers (the investors from whom they receive trading advice). The investors also gain information from a private source and share public information about the risky asset. Investors differ in their tendencies to follow the different information sources, in how much history they consider, and in their thresholds for investing.
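The adviser-switching and threshold rules above can be made concrete with a minimal sketch; the class, attribute names, and weighting scheme below are illustrative assumptions, not the published model's code.

```python
# Illustrative sketch of the investor decision rules described above; all
# names and the weighting scheme are assumptions, not the published model.
class Investor:
    def __init__(self, private_weight, public_weight, social_weight,
                 history_length, threshold):
        self.private_weight = private_weight    # tendency to follow private info
        self.public_weight = public_weight      # tendency to follow public info
        self.social_weight = social_weight      # tendency to follow the adviser
        self.history_length = history_length    # amount of history considered
        self.threshold = threshold              # signal needed before investing
        self.history = []                       # remembered combined signals
        self.adviser = None

    def observe(self, private_signal, public_signal, adviser_signal):
        """Blend the three information sources and remember the result."""
        signal = (self.private_weight * private_signal
                  + self.public_weight * public_signal
                  + self.social_weight * adviser_signal)
        self.history = (self.history + [signal])[-self.history_length:]

    def decides_to_invest(self):
        """Invest when the average remembered signal exceeds the threshold."""
        return bool(self.history) and (
            sum(self.history) / len(self.history) > self.threshold)

    def review_adviser(self, candidates, performance):
        """At specified intervals, keep the current adviser only if no
        candidate has performed better."""
        best = max(candidates, key=lambda a: performance[a])
        if self.adviser is None or performance[best] > performance.get(
                self.adviser, float("-inf")):
            self.adviser = best
```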
We develop an IBM that predicts how interactions between elephants, poachers, and law enforcement affect poaching levels within a virtual protected area. The model is theoretical at this stage and is not meant to provide a realistic depiction of poaching, but instead to demonstrate how IBMs can expand upon the existing modelling work done in this field, and to provide a framework for future research. The model could be further developed into a useful management support tool to predict the outcomes of various poaching mitigation strategies at real-world locations. The model was implemented in NetLogo version 6.1.0.
We first compare a scenario in which poachers have prescribed, non-adaptive decision-making and move randomly across the landscape to one in which poachers adaptively respond to their memories of elephant locations and of where other poachers have been caught by law enforcement. We then compare a situation in which ranger effort is distributed unevenly across the protected area to one in which rangers patrol by adaptively following elephant matriarchal herds.
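As a rough illustration of the two poacher movement rules being compared, here is a sketch in Python (the model itself is in NetLogo); the grid representation, memory lists, and scoring are assumptions.

```python
# Sketch contrasting random (non-adaptive) poacher movement with adaptive
# movement toward remembered elephant sightings and away from remembered
# arrest locations. Names and scoring are illustrative assumptions.
import random

NEIGHBOUR_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def random_step(pos, grid_size):
    """Non-adaptive poacher: move to a random neighbouring cell."""
    x, y = pos
    dx, dy = random.choice(NEIGHBOUR_MOVES)
    return (min(max(x + dx, 0), grid_size - 1),
            min(max(y + dy, 0), grid_size - 1))

def adaptive_step(pos, elephant_memory, arrest_memory, grid_size):
    """Adaptive poacher: score neighbouring cells by remembered elephant
    locations (attractive) and remembered arrests (repulsive)."""
    x, y = pos
    neighbours = [(min(max(x + dx, 0), grid_size - 1),
                   min(max(y + dy, 0), grid_size - 1))
                  for dx, dy in NEIGHBOUR_MOVES]

    def score(cell):
        attract = sum(1 for m in elephant_memory if m == cell)
        repel = sum(1 for m in arrest_memory if m == cell)
        return attract - repel

    return max(neighbours, key=score)
```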
In a two-level hierarchical structure (consisting of manager and operator positions), the persons holding these positions each have a certain performance and a perception of one another (personal perception, in this simplified version of the model). The perception of one agent by another is defined as a normally distributed random variable (the distribution parameters are set by the interface controls).
The world of the model is a space of perceptions. Agents implement two strategies: moving closer to agents that perceive them positively and moving away from agents that perceive them negatively (an agent may implement both strategies, only one of them, or neither, in which case it remains stationary). Strategies are applied only to agents within the perception radius (PerRadius).
The manager (Head) forms a team of agents. Group performance (the sum of the subordinates' individual productivities, weighted by their distance from the leader) varies with the agents' positions in space and the values of their individual productivities. In the current version of the model, individual productivities are drawn from a uniform distribution on the interval from 0 to 100. The manager recruits the team (1) from agents within the organizational radius (Op_Radius) and (2) from agents that the manager perceives positively and/or negatively (either rule, both, or neither may be applied; applying neither means no team is formed).
Agents that perceive the manager negatively can leave the manager's group permanently, with a probability given by the variable PrbltyOfDecisn%.
The model allows on-the-fly changes to the radii, to the perception values (across the entire population, or of an individual agent by its neighbors within the perception radius), and to the probability that a subordinate decides to leave the group.
You can also change the set of movement strategies for agents and the rules by which the manager recruits a team, and add a randomness factor to agent movement (Stoch_Motion_Speed; the default is 0, i.e. no random movement).
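A compact sketch of the recruitment rule and the distance-weighted group performance described above, written in Python rather than the model's NetLogo; the inverse-distance weighting and all identifiers are illustrative assumptions.

```python
# Sketch of team recruitment within Op_Radius and distance-weighted group
# performance. The inverse-distance weighting is an assumption; the original
# model defines its own weighting.
import math
import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.productivity = random.uniform(0, 100)   # uniform on [0, 100]
        self.perception = {}                          # other agent -> normal draw

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

def recruit_team(head, agents, op_radius, use_positive=True, use_negative=False):
    """Head recruits agents inside Op_Radius whose perception by the head
    matches the selected rule(s); selecting neither rule yields no team."""
    team = []
    for a in agents:
        if a is head or head.distance_to(a) > op_radius:
            continue
        perceived = head.perception.get(a, 0.0)
        if (use_positive and perceived > 0) or (use_negative and perceived < 0):
            team.append(a)
    return team

def group_performance(head, team):
    """Sum of subordinates' productivities weighted by distance to the head
    (here: inverse distance, an assumption)."""
    return sum(a.productivity / (1.0 + head.distance_to(a)) for a in team)
```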
…
This version adds a Maslowian entropy term to each agent decision, based on Kendrick et al. This rudimentary implementation assumes that agents with lower scores are more likely to make decisions autonomously rather than sociotropically.
This model accompanies a paper looking at the role and limits of values and norms for modeling realistic social agents. Based on the literature, we synthesize a theory of norms and a theory that combines both values and norms. In contrast to previous work, these theories are checked against data on human behavior obtained from a psychological experiment on dividing money: the ultimatum game. We found that agents that act according to a theory combining both values and norms produce behavior quite similar to that of humans. Furthermore, we found that this theory is more realistic than theories solely concerned with norms or theories solely concerned with values. However, to explain the amounts of money people accept in this ultimatum game, we will eventually need an even more realistic theory. We propose that a theory explaining exactly when people choose to use norms instead of values could provide this realism.
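One way to picture the difference between a norms-only responder and one combining values and norms in the ultimatum game is sketched below; this is an illustration with assumed weights and thresholds, not the paper's formalization.

```python
# Illustrative ultimatum-game responder rules; weights and thresholds are
# assumptions, not the formalization used in the accompanying paper.
def accept_norms_only(offer, total, fairness_threshold=0.4):
    """Norm-based responder: accept only offers it deems fair enough."""
    return offer / total >= fairness_threshold

def accept_values_and_norms(offer, total, value_of_money=0.5,
                            value_of_fairness=0.5, threshold=0.4):
    """Responder weighing the monetary gain (a value) against deviation from
    the equal-split norm; accepts when the combined score is high enough."""
    monetary_gain = offer / total                     # normalized payoff
    norm_fit = 1.0 - 2.0 * abs(0.5 - offer / total)   # 1 at an equal split
    score = value_of_money * monetary_gain + value_of_fairness * norm_fit
    return score >= threshold
```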
This model is an extended version of the original MERCURY model (https://www.comses.net/codebases/4347/releases/1.1.0/). It allows for experiments in which empirically informed population sizes of sites are included, which allows the number of tableware traders to be scaled with the population of settlements, and in which hypothesised production centres of four tablewares can be used.
Experiments performed with this population extension and substantive interpretations derived from them are published in:
Hanson, J.W. & T. Brughmans. In press. Settlement scale and economic networks in the Roman Empire, in T. Brughmans & A.I. Wilson (ed.) Simulating Roman Economies. Theories, Methods and Computational Models. Oxford: Oxford University Press.
…
The model aims to estimate household energy consumption and the related greenhouse gas (GHG) emissions reduction, based on the behavior of individual households under different operationalizations of the Theory of Planned Behaviour (TPB).
The original model was developed as a tool to explore households' decisions regarding solar panel investments and the cumulative consequences of these individual choices (i.e., diffusion of PVs, regional emissions savings, monetary savings). We extend the model to explore a methodological question regarding the interpretation of qualitative concepts from social science theories, specifically the Theory of Planned Behaviour, in the formal code of quantitative agent-based models (ABMs). We develop three versions of the model: one TPB-based ABM designed by the authors and two alternatives inspired by the TPB-ABM of Schwarz and Ernst (2009) and the TPB-ABM of Rai and Robinson (2015). The model is implemented in NetLogo.
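For readers unfamiliar with how TPB concepts are typically quantified, here is a generic weighted-sum operationalization; it is not any of the three model versions described above, and all weights and thresholds are assumptions.

```python
# Generic TPB operationalization: intention as a weighted sum of attitude,
# subjective norm, and perceived behavioural control, with adoption above a
# threshold. Weights and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Household:
    attitude: float              # evaluation of the behaviour, in [0, 1]
    subjective_norm: float       # perceived social pressure, in [0, 1]
    perceived_control: float     # perceived ease of acting, in [0, 1]

def intention(h: Household, w_att: float, w_norm: float, w_pbc: float) -> float:
    """Weighted-sum operationalization of TPB intention."""
    return (w_att * h.attitude
            + w_norm * h.subjective_norm
            + w_pbc * h.perceived_control)

def adopts_solar(h: Household, weights=(0.4, 0.3, 0.3), threshold=0.6) -> bool:
    """A household invests in PV when its intention exceeds a threshold."""
    return intention(h, *weights) > threshold
```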
The code shared here accompanies the paper at https://doi.org/10.1371/journal.pone.0208451. It simulates the effects of various economic trade scenarios on the phenomenon of the ‘disappearing middle’ in the Scottish beef and dairy farming industries. The ‘disappearing middle’ is a situation in which an observed decline in the number of medium-sized enterprises coincides with a rise in the number of small and large-scale enterprises.
This model was developed to study the combination of electric vehicles (EVs) and intermittent renewable energy sources. The model represents an EV fleet in a fictional area divided into a residential area, an office area, and a commercial area. The area has renewable energy sources: wind and PV solar panels. The agents can be encouraged to charge their electric vehicles at times of renewable energy surplus by introducing different policy interventions. Other interesting variables in the model are the installed renewable energy sources, the EV fleet composition, and the available charging infrastructure. Where possible, we use empirical data as input for our model. We expand upon previous models by incorporating environmental self-identity and range anxiety as agent variables.
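A minimal sketch of how an agent's charging decision could combine a renewable-surplus signal with range anxiety and environmental self-identity; the function, parameters, and thresholds are assumptions, not the model's actual rules.

```python
# Hypothetical EV charging decision combining range anxiety (a safety buffer
# of stored energy) with responsiveness to renewable surplus driven by
# environmental self-identity. All names and thresholds are assumptions.
def decides_to_charge(state_of_charge, daily_need_kwh, battery_kwh,
                      renewable_surplus, env_self_identity, range_anxiety):
    """Charge if (a) range anxiety demands a buffer above tomorrow's need, or
    (b) there is a renewable surplus and the agent's environmental
    self-identity makes it responsive to the surplus signal."""
    buffer_kwh = range_anxiety * daily_need_kwh        # extra kWh kept in reserve
    if state_of_charge * battery_kwh < daily_need_kwh + buffer_kwh:
        return True                                    # must charge to cover range
    if renewable_surplus and env_self_identity > 0.5:
        return True                                    # charge on green surplus
    return False
```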
The purpose of the simulation was to explore and better understand the process of bridging between an analysis of qualitative data and the specification of a simulation. This may be developed further for more serious purposes later, but at the moment it is merely an illustration.
This exercise was done by Stephanie Dornschneider (School of Politics and International Relations, University College Dublin) and Bruce Edmonds to inform the discussion at the Lorentz workshop on “Integrating Qualitative and Quantitative Data using Social Simulation” at Leiden in April 2019. The qualitative data was collected and analysed by SD. The model specification was developed as the result of discussion by BE & SD. The model was programmed by BE. This is described in a paper submitted to Social Simulation 2019 and (to some extent) in the slides presented at the workshop.