Computational Model Library

Displaying 10 of 414 results for "Therese Lindahl"

An Agent-Based Simulation of Continuous-Time Public Goods Games

Tuong Vu | Published Thursday, May 17, 2018 | Last modified Tuesday, April 02, 2019

To our knowledge, this is the first agent-based simulation of continuous-time PGGs (where participants can change their contributions at any time), which are much harder to realise in both laboratory and simulation environments.

Work related to this simulation has been published in the following journal article:
Vu, Tuong Manh, Wagner, Christian and Siebers, Peer-Olaf (2019) ‘ABOOMS: Overcoming the Hurdles of Continuous-Time Public Goods Games with a Simulation-Based Approach’. Journal of Artificial Societies and Social Simulation 22 (2) 7. http://jasss.soc.surrey.ac.uk/22/2/7.html. doi: 10.18564/jasss.3995
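
As a rough illustration of the continuous-time setting (a minimal sketch, not the authors' ABOOMS framework), the following Python snippet assumes a standard linear public-goods payoff, a Poisson clock for revision events, and a naive imitate-the-average revision rule; all of these are assumptions made for the example.

```python
import random

# Minimal sketch of a continuous-time public goods game (assumed linear payoff,
# Poisson revision clock, imitate-the-average rule; not the ABOOMS implementation).
N, ENDOWMENT, MPCR, HORIZON = 4, 20.0, 0.4, 60.0

def payoff_rate(contribs, i):
    """Instantaneous payoff: kept endowment plus a share of the public good."""
    return (ENDOWMENT - contribs[i]) + MPCR * sum(contribs)

def simulate(seed=0):
    rng = random.Random(seed)
    contribs = [rng.uniform(0, ENDOWMENT) for _ in range(N)]
    t, payoffs = 0.0, [0.0] * N
    while True:
        dt = rng.expovariate(N)                     # time to next revision event
        last = t + dt >= HORIZON
        dt = min(dt, HORIZON - t)
        for i in range(N):                          # accrue payoffs over the interval
            payoffs[i] += payoff_rate(contribs, i) * dt
        if last:
            return payoffs
        t += dt
        i = rng.randrange(N)                        # one agent revises its contribution
        contribs[i] = sum(contribs) / N             # naive imitate-the-average rule

print(simulate())
```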

Abstract:

The model simulates the decisions of residents and a water authority in response to socio-hydrological hazards. Residents of neighborhoods are located in a landscape with topographic complexity and two problems: water scarcity in the peripheral neighborhoods at high altitude, and high risk of flooding in the lowlands at the core of the city. The role of the water authority is to decide where investments in infrastructure should be allocated to reduce the risk of water scarcity and flooding events in the city, and these decisions are made via a multi-objective site selection procedure. This procedure accounts for the interdependencies and feedback between the urban landscape and a policy scenario that defines the importance, or priorities, that the authority places on four criteria.
Neighborhoods respond to the water authority decisions by protesting against the lack of investment and the level of exposure to water scarcity and flooding. Protests thus simulate a form of feedback between local-level outcomes (flooding and water scarcity) and higher-level decision-making. Neighborhoods at high altitude are more likely to be exposed to water scarcity and lack infrastructure, whereas neighborhoods in the lowlands tend to suffer from recurrent flooding. The frequency of flooding is also a function of spatially uniform rainfall events. Likewise, neighborhoods at the periphery of the urban landscape lack infrastructure and suffer from chronic risk of water scarcity.
The model simulates the coupling between the decision-making processes of institutional actors, socio-political processes and infrastructure-related hazards. The documentation describes the NetLogo implementation, the procedures and their scheduling, and the initial conditions of the landscape and the neighborhoods.
This work was supported by the National Science Foundation under Grant No. 1414052, CNH: The Dynamics of Multi-Scalar Adaptation in Megacities (PI Hallie Eakin).
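
As a hedged illustration of the kind of weighted multi-criteria site selection described above, the sketch below scores neighborhoods and picks the most urgent one for investment; the criterion names, weights and scoring function are assumptions for the example, not the model's actual four criteria or procedure.

```python
# Hypothetical weighted multi-criteria site-selection step.
# Criterion names, weights and data are illustrative assumptions.
neighborhoods = [
    {"id": "periphery-1", "scarcity_risk": 0.8, "flood_risk": 0.1, "protests": 3, "infrastructure": 0.2},
    {"id": "lowland-1",   "scarcity_risk": 0.2, "flood_risk": 0.9, "protests": 5, "infrastructure": 0.6},
]

# Policy scenario: priorities the authority places on each criterion.
weights = {"scarcity_risk": 0.3, "flood_risk": 0.3, "protests": 0.2, "infrastructure": 0.2}

def priority_score(n):
    """Higher score -> more urgent to invest; low infrastructure raises urgency."""
    return (weights["scarcity_risk"] * n["scarcity_risk"]
            + weights["flood_risk"] * n["flood_risk"]
            + weights["protests"] * n["protests"] / 10.0
            + weights["infrastructure"] * (1.0 - n["infrastructure"]))

chosen = max(neighborhoods, key=priority_score)
print("Invest in:", chosen["id"])
```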

Substitution of food products will be key to realising widespread adoption of sustainable diets. We present an agent-based model of decision-making and influences on food choice, and apply it to historically observed trends in British whole and skimmed (including semi-skimmed) milk consumption from 1974 to 2005. We aim to give a plausible representation of milk choice substitution and to test different mechanisms of choice consideration. Agents are consumers who perceive information regarding the two milk choices and hold values that inform their position on the health and environmental impact of those choices. Habit, social influence and post-decision evaluation are modelled. Representative survey data on human values and long-running public concerns empirically inform the model. An experiment was run to compare two model variants by how well they reproduce these trends, measured as mean weekly milk consumption per person. The variants differed in how agents became disposed to consider alternative milk choices: one followed a threshold approach, the other was probability based. All other model aspects remained unchanged. An optimisation exercise via an evolutionary algorithm was used to calibrate the model variants independently against the observed data. Following calibration, uncertainty analysis and global variance-based temporal sensitivity analysis were conducted. Both model variants were able to reproduce the general pattern of historical milk consumption; however, the probability-based approach gave a closer fit to the observed data, albeit over a wider range of uncertainty. This responds to, and further highlights, the need for research that compares different models of human decision-making in agent-based and simulation models. This study is the first to present an agent-based model of food choice substitution in the context of British milk consumption. It can serve as a valuable precursor to the modelling of dietary shift and sustainable product substitution towards plant-based alternatives in Britain.
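
The two consideration mechanisms compared above might be sketched as follows; the dissatisfaction measure, the threshold value, and the logit-style choice rule are assumptions for illustration, not the calibrated model.

```python
import math
import random

rng = random.Random(1)

def considers_alternative_threshold(dissatisfaction, threshold=0.5):
    # Threshold variant: agent only considers switching once dissatisfaction
    # with the current choice exceeds a fixed threshold (value is assumed).
    return dissatisfaction > threshold

def considers_alternative_probabilistic(dissatisfaction):
    # Probability-based variant: the chance of considering alternatives
    # grows smoothly with dissatisfaction.
    return rng.random() < dissatisfaction

def choose_milk(utilities, beta=2.0):
    # Logit-style choice between the milk options once alternatives are
    # under consideration (functional form and beta are assumptions).
    weights = {k: math.exp(beta * u) for k, u in utilities.items()}
    r = rng.random() * sum(weights.values())
    for option, w in weights.items():
        r -= w
        if r <= 0:
            return option

d = 0.6  # example dissatisfaction with the current milk choice
if considers_alternative_probabilistic(d):
    print(choose_milk({"whole": 0.3, "skimmed": 0.7}))
```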

The Urban Traffic Simulator is an agent-based model developed in the Unity platform. The model allows the user to simulate several autonomous vehicles (AVs) and tune granular parameters such as vehicle downforce, adherence to speed limits, top speed in mph and mass. The model allows researchers to tune these parameters, run the simulator for a given period and export data from the model for analysis (an example is provided in Jupyter Notebook).

The data the model is currently able to output are the following:

01a ModEco V2.05 – Model Economies – In C++

Garvin Boyle | Published Monday, February 04, 2013 | Last modified Friday, April 14, 2017

Perpetual Motion Machine - A simple economy that operates at both a biophysical and economic level, and is sustainable. The goal: to determine the necessary and sufficient conditions of sustainability, and the attendant necessary trade-offs.

Ant Colony Optimization for infrastructure routing

P W Heijnen Emile Chappin Igor Nikolic | Published Wednesday, March 05, 2014 | Last modified Saturday, March 24, 2018

The model implements a variant of Ant Colony Optimization to explore infrastructure routing through a landscape with forbidden zones, connecting multiple sinks to one source.
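
A minimal Python sketch of the Ant Colony Optimization idea, on a toy graph with a single source and sink; the forbidden zones and multiple sinks of the actual model are omitted, and the graph and parameters are illustrative assumptions.

```python
import random

# Toy Ant Colony Optimization sketch for routing on a small graph.
# Edges map (from, to) -> cost; values are made up for the example.
edges = {("S", "A"): 2.0, ("S", "B"): 4.0, ("A", "T"): 3.0, ("A", "B"): 1.0, ("B", "T"): 1.0}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))

pheromone = {e: 1.0 for e in edges}
rng = random.Random(0)

def walk(start="S", goal="T"):
    """One ant walks from source to sink, biased by pheromone / edge cost."""
    path, node, visited = [], start, {start}
    while node != goal:
        options = [(v, w) for v, w in graph.get(node, []) if v not in visited]
        if not options:
            return None, float("inf")
        weights = [pheromone[(node, v)] / w for v, w in options]
        v, w = rng.choices(options, weights=weights)[0]
        path.append(((node, v), w))
        visited.add(v)
        node = v
    return path, sum(w for _, w in path)

for _ in range(200):                      # main ACO loop
    path, cost = walk()
    if path is None:
        continue
    for e in pheromone:                   # pheromone evaporation
        pheromone[e] *= 0.95
    for e, _ in path:                     # reinforce edges on cheap routes
        pheromone[e] += 1.0 / cost

print(max(pheromone, key=pheromone.get))  # most reinforced edge
```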

Schelling famously proposed an extremely simple but highly illustrative social mechanism to understand how strong ethnic segregation could arise in a world where individuals do not necessarily want it. Schelling’s simple computational model is the starting point for our extensions in which we build upon Wilensky’s original NetLogo implementation of this model. Our two NetLogo models can be best studied while reading our chapter “Agent-based Computational Models” (Flache and de Matos Fernandes, 2021). In the chapter, we propose 10 best practices to elucidate how agent-based models are a unique method for providing and analyzing formally precise, and empirically plausible mechanistic explanations of puzzling social phenomena, such as segregation, in the social world. Our chapter addresses in particular analytical sociologists who are new to ABMs.

In the first model (SegregationExtended), we build on Wilensky’s implementation of Schelling’s model, which is available in the NetLogo library (Wilensky, 1997). We considerably extend this model, in particular by allowing larger neighborhoods and a population with four groups roughly resembling the ethnic composition of a contemporary large U.S. city. Further added features include the possibility of random noise and a number of new outcome measures tuned to highlight macro-level implications of the segregation dynamics for different groups in the agent society.

In SegregationDiscreteChoice, we further modify the model, incorporating three new features: 1) heterogeneous preferences, roughly based on empirical research, which categorize agents as low, medium or highly tolerant within each of the ethnic subgroups of the population; 2) global thresholds (%-similar-wanted) are dropped in favour of a continuous, individual-level, single-peaked preference function for agents’ ideal neighborhood composition; and 3) a discrete choice model according to which agents probabilistically decide whether to move to a vacant spot or stay in their current spot by comparing the attractiveness of both locations based on their individual preference functions.
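
A minimal sketch of how a single-peaked preference function can be combined with a logit-style discrete choice between staying and moving; the Gaussian preference form, the tolerance parameter and beta are assumptions, not the chapter's exact specification.

```python
import math
import random

rng = random.Random(42)

def attractiveness(share_own_group, ideal, tolerance=0.3):
    """Single-peaked preference: attractiveness peaks when the observed share
    of the agent's own group matches its ideal neighborhood composition."""
    return math.exp(-((share_own_group - ideal) ** 2) / (2 * tolerance ** 2))

def move_probability(current_share, vacant_share, ideal, beta=4.0):
    """Logit-style discrete choice between staying put and moving to a vacant spot."""
    u_stay = attractiveness(current_share, ideal)
    u_move = attractiveness(vacant_share, ideal)
    return math.exp(beta * u_move) / (math.exp(beta * u_move) + math.exp(beta * u_stay))

# Example: an agent with ideal own-group share 0.5 living in a 20% own-group
# neighborhood evaluates a vacant spot with a 60% own-group share.
p = move_probability(0.2, 0.6, ideal=0.5)
print("move" if rng.random() < p else "stay", round(p, 2))
```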

Covid-19-Belief-network-Hybrid-Model

Morteza Mahmoudzadeh | Published Sunday, September 05, 2021

Digital social networks facilitate opinion dynamics and idea flow, and also provide reliable data for understanding these dynamics. Public opinion and cooperative behavior are key factors in determining whether a public policy can be successful and effective. In particular, during crises such as the coronavirus pandemic, it is necessary to understand people’s opinions toward a policy and the performance of governance institutions. The problem with purely mathematical explanations of human behavior is that they simplify and bypass some of the essential processes. To tackle this problem, we adopted a data-driven strategy to extract opinion and behavioral patterns from social media content, reflecting the dynamics of society’s average beliefs toward different topics. We extracted important subtopics from social media content and analyzed user sentiment for each subtopic. Subsequently, we structured a Bayesian belief network to demonstrate the macro patterns of the beliefs, opinions, information and emotions that trigger the response toward a prospective policy. We aim to understand the factors, including latent factors, that influence opinion formation in society, with the goal of enhancing the realism of the simulations. To capture the dynamics of opinions in an artificial society, we apply agent-based opinion dynamics modeling. We investigate practical implementation scenarios of this framework for policy analysis during the coronavirus pandemic crisis. The implemented modular modeling approach can be used as a flexible, data-driven policy-making tool to investigate public opinion on social media. The core idea is to put opinion dynamics in the wider context of collective decision-making, data-driven policy modeling and digital democracy. We intend to use data-driven agent-based modeling as a comprehensive analysis tool to understand collective opinion dynamics and decision-making processes on social networks, and to use this knowledge for network-enabled policy modeling and collective intelligence platforms.
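
As a hedged illustration of one step in such a pipeline, the snippet below averages per-topic sentiment scores into initial agent opinions; the topic names, scores and aggregation rule are made-up assumptions, not the authors' data or method.

```python
from statistics import mean

# Hypothetical per-topic sentiment scores extracted from social media posts,
# each in [-1, 1]; topics and values are invented for the example.
post_sentiments = {
    "lockdown": [0.2, -0.5, -0.1, 0.4],
    "vaccination": [0.6, 0.3, -0.2],
}

# Average sentiment per topic, used to seed agents' initial beliefs
# in the opinion-dynamics model.
initial_opinions = {topic: mean(scores) for topic, scores in post_sentiments.items()}
print(initial_opinions)
```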

Criminal organizations operate in complex changing environments. Being flexible and dynamic allows criminal networks not only to exploit new illicit opportunities but also to react to law enforcement attempts at disruption, enhancing the persistence of these networks over time. Most studies investigating network disruption have examined organizational structures before and after the arrests of some actors but have disregarded groups’ adaptation strategies.
MADTOR simulates drug trafficking and dealing activities by organized criminal groups and their reactions to law enforcement attempts at disruption. The simulation relied on information retrieved from a detailed court order against a large-scale Italian drug trafficking organization (DTO) and from the literature.
The results showed that the higher the proportion of members arrested, the greater the challenges for DTOs, with higher rates of disrupted organizations and long-term consequences for surviving DTOs. Second, targeting members performing specific tasks had different impacts on DTO resilience: targeting traffickers resulted in the highest rates of DTO disruption, while targeting actors in charge of more redundant tasks (e.g., retailers) had smaller but significant impacts. Third, the model examined the resistance and resilience of DTOs adopting different strategies in the security/efficiency trade-off. Efficient DTOs were more resilient, outperforming secure DTOs in terms of reactions to a single, equal attempt at disruption. Conversely, secure DTOs were more resistant, displaying higher survival rates than efficient DTOs when considering the differentiated frequency and effectiveness of law enforcement interventions on DTOs having different focuses in the security/efficiency trade-off.
Overall, the model demonstrated that law enforcement interventions are often critical events for DTOs, with high rates of both first intention (i.e., DTOs directly disrupted by the intervention) and second intention (i.e., DTOs terminating their activities due to the unsustainability of the intervention’s short-term consequences) culminating in dismantlement. However, surviving DTOs always displayed a high level of resilience, with effective strategies in place to react to threatening events and to continue drug trafficking and dealing.

Shellmound Mobility

Henrique de Sena Kozlowski | Published Saturday, June 15, 2024

Least Cost Path (LCP) analysis is a recurrent theme in spatial archaeology. Based on a cost-of-movement image, the user can interpret how difficult it is to travel around a landscape. This kind of analysis frequently uses GIS tools to assess different landscapes. This model combines aspects of GIS-based LCP analysis with the capabilities of agent-based modeling, such as the possibility of simulating random behavior when moving. In this model the agent travels around the coastal landscape of Southern Brazil, assessing its path based on the different costs of travelling through the patches. The agents represent shellmound builders (sambaquieiros), who travel mainly by canoe around the lagoons.

How it works
When the simulation starts, the hiker agent moves around the world, a representation of the lagoon landscape of Santa Catarina state in Southern Brazil. The agent’s movement is based on the travel cost of each patch. This travel cost is taken from a cost surface raster created in ArcMap to represent the varying cost of movement across the landscape. Each tick, the agent has a chance to select the best possible patch within its field of view (FOV) that will take it towards its target destination; if it does not select the best possible patch, it randomly chooses one of the patches in its FOV. The simulation stops when the hiker agent reaches the target destination. The elevation raster and the cost surface map are based on a 1 arc-second (30 m) resolution SRTM image, scaled down 5 times. Each patch represents a square of 150 m, with an area of 0.0225 km². The dataset uses a UTM SIRGAS 2000 22S projection. There are four different cost functions available; they change the cost surface used by the hikers to navigate around the world.
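
A minimal sketch of the movement rule described above; the toy cost grid, the radius-1 FOV, and the probability of choosing the cheapest patch are assumptions for illustration, not the model's calibrated values.

```python
import math
import random

rng = random.Random(7)

# Toy cost surface (rows x cols); lower values are cheaper to traverse.
cost = [[1, 3, 5, 2],
        [2, 1, 4, 2],
        [9, 2, 1, 1]]
TARGET = (2, 3)
P_BEST = 0.7          # chance of picking the cheapest patch in the FOV (assumed)

def fov(pos):
    """Neighbouring patches (radius-1 FOV), preferring those closer to the target."""
    r, c = pos
    cells = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)
             and 0 <= r + dr < len(cost) and 0 <= c + dc < len(cost[0])]
    dist = lambda p: math.hypot(p[0] - TARGET[0], p[1] - TARGET[1])
    closer = [p for p in cells if dist(p) < dist(pos)]
    return closer or cells

pos, path = (0, 0), [(0, 0)]
while pos != TARGET:
    options = fov(pos)
    if rng.random() < P_BEST:
        pos = min(options, key=lambda p: cost[p[0]][p[1]])   # best (cheapest) patch
    else:
        pos = rng.choice(options)                            # otherwise a random FOV patch
    path.append(pos)

print(path)
```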

