Computational Model Library

Displaying 10 of 161 results for "Nuno Pinto"

The Non-Deterministic model of affordable housing Negotiations (NoD-Neg) is designed for generating hypotheses about the possible outcomes of negotiating affordable housing obligations in new developments in England. By outcomes we mean the probability that the negotiation fails and/or the different possible agreements.
The model focuses on two negotiations that are key to the provision of affordable housing. The first is between a developer (DEV), who submits a planning application for approval, and the relevant Local Planning Authority (LPA), which is responsible for reviewing the application and enforcing the affordable housing obligations. The second is between the developer and a Registered Social Landlord (RSL), who buys the affordable units from the developer and rents them out; the two parties negotiate the price at which the affordable units are sold to the RSL.
The model runs the two negotiations on the same development project several times, allowing the agents representing stakeholders to apply different negotiation tactics (different agendas and concession-making tactics) and hence explore the different possible outcomes.
The model produces three types of outputs: (i) histograms showing the distribution of the negotiation outcomes in all the simulation runs and the probability of each outcome; (ii) a data file with the exact values shown in the histograms; and (iii) a conversation log detailing the exchange of messages between agents in each simulation run.
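As a rough illustration of this run-and-tally design, the Python sketch below repeats a stand-in negotiation many times under randomly chosen tactics and reports the share of runs ending in each outcome. It is not the NoD-Neg protocol: the tactic set, outcome labels, and random draw are assumptions standing in for the actual agent message exchange.

```python
# Illustrative sketch only: hypothetical tactic and outcome labels,
# not the actual NoD-Neg agents or negotiation protocol.
import random
from collections import Counter

TACTICS = ["concede_early", "concede_late", "hold_firm"]          # assumed tactic set
OUTCOMES = ["failure", "agreement_low_AH", "agreement_high_AH"]   # assumed outcome labels

def negotiate(dev_tactic, lpa_tactic, rsl_tactic):
    """Stand-in for one DEV-LPA and DEV-RSL negotiation on the same project."""
    # The real model exchanges offers and messages; here we just draw an outcome.
    return random.choice(OUTCOMES)

runs = 1000
tally = Counter()
for _ in range(runs):
    tactics = [random.choice(TACTICS) for _ in range(3)]  # DEV, LPA, RSL
    tally[negotiate(*tactics)] += 1

# Analogue of outputs (i)-(ii): probability of each outcome across all runs
for outcome, count in tally.items():
    print(f"{outcome}: {count / runs:.2%}")
```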

Evolution of shedding games

Marco Janssen | Published Sunday, May 16, 2010 | Last modified Saturday, April 27, 2013

This model simulates the evolution of the rules of shedding games through cultural group selection. A number of groups play shedding games and evaluate the consequences for the average length and difficulty of the games.

An agent-based simulation of a game of basketball. The model implements most components of a standard game and additionally allows the user to test for the effects of two separate cognitive biases: the hot-hand effect and a belief in the team’s franchise player.

The present model was created and used for the study titled “Agent-Based Insight into Eco-Choices: Simulating the Fast Fashion Shift.” The model is implemented in the multi-agent programmable environment NetLogo 6.3.0 and is designed to simulate the behavior and decision-making processes of individuals (agents) in a social network. It focuses on how agents interact with their peers, social media, and government campaigns, and specifically on their likelihood of purchasing fast fashion.
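As a hedged sketch of how such peer, social media, and campaign influences might be combined in an agent's purchase decision: the weights, adaptation rate, and class structure below are illustrative assumptions, not the rules of the published NetLogo model.

```python
# Illustrative influence-update sketch with assumed weights; the published
# NetLogo model (6.3.0) defines its own network, attitudes, and decision rules.
import random

W_PEER, W_MEDIA, W_CAMPAIGN = 0.5, 0.3, 0.2  # assumed relative influence weights

class Agent:
    def __init__(self):
        self.p_fast_fashion = random.random()  # likelihood of buying fast fashion
        self.peers = []

    def update(self, media_signal, campaign_signal):
        """Nudge purchase likelihood toward peers, social media, and campaigns."""
        peer_mean = (sum(p.p_fast_fashion for p in self.peers) / len(self.peers)
                     if self.peers else self.p_fast_fashion)
        target = (W_PEER * peer_mean + W_MEDIA * media_signal
                  + W_CAMPAIGN * campaign_signal)
        self.p_fast_fashion += 0.1 * (target - self.p_fast_fashion)  # assumed adaptation rate

# Tiny usage example on a random peer network
agents = [Agent() for _ in range(50)]
for a in agents:
    a.peers = random.sample([x for x in agents if x is not a], 5)
for _ in range(100):
    for a in agents:
        a.update(media_signal=0.8, campaign_signal=0.2)  # assumed external signals
```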

Peer reviewed PPHPC - Predator-Prey for High-Performance Computing

Nuno Fachada | Published Saturday, August 08, 2015 | Last modified Wednesday, November 25, 2015

PPHPC is a conceptual model for studying and evaluating implementation strategies for spatial agent-based models (SABMs). It is a realization of a predator-prey dynamic system and captures important characteristics of SABMs.
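For readers new to spatial ABMs, a generic predator-prey step might look like the Python sketch below. It is for orientation only: the grid size, energy values, and omitted reproduction rule are illustrative assumptions, not the PPHPC specification.

```python
# Generic predator-prey ABM sketch; the PPHPC paper specifies its own
# grid rules, energy parameters, and update order.
import random

SIZE = 20  # assumed toroidal grid size

wolves = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)), "energy": 10}
          for _ in range(40)]
sheep = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)), "energy": 5}
         for _ in range(100)]

def move(agent):
    """Random walk on a toroidal grid."""
    x, y = agent["pos"]
    agent["pos"] = ((x + random.choice([-1, 0, 1])) % SIZE,
                    (y + random.choice([-1, 0, 1])) % SIZE)

def tick():
    """One step: move, graze, predation, starvation (reproduction omitted)."""
    global wolves, sheep
    for a in wolves + sheep:
        move(a)
    for s in sheep:
        s["energy"] += 1              # assumed: grass is always available
    for w in wolves:
        w["energy"] -= 1              # hunting costs energy
    sheep_at = {}
    for s in sheep:
        sheep_at.setdefault(s["pos"], []).append(s)
    for w in wolves:
        prey = sheep_at.get(w["pos"], [])
        if prey:
            prey.pop()["energy"] = -1  # mark the eaten sheep for removal
            w["energy"] += 5           # assumed energy gain from predation
    wolves = [w for w in wolves if w["energy"] > 0]
    sheep = [s for s in sheep if s["energy"] >= 0]

for t in range(50):
    tick()
print(len(wolves), "wolves,", len(sheep), "sheep after 50 ticks")
```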

A multithreaded PPHPC replication in Java

Nuno Fachada | Published Saturday, October 31, 2015 | Last modified Tuesday, January 19, 2016

A multithreaded replication of the PPHPC model in Java for testing different ABM parallelization strategies.

MCR Model

Davide Secchi, Nuno R Barros De Oliveira | Published Friday, July 22, 2016 | Last modified Saturday, January 23, 2021

The aim of the model is to determine when researchers’ assumptions of dependence or independence of cases in multiple case study research affect the results and, hence, the understanding of those cases.

BehaviorSpace tutorial model

Colin Wren | Published Wednesday, March 23, 2016

This model is based on my previous Profiler tutorial model, with an added tutorial on converting it into a model usable with BehaviorSpace and on creating a BehaviorSpace experiment.

This model implements the “Cliff Walking Problem”, a classic reinforcement learning scenario (SUTTON; BARTO, 2018). It is a standard undiscounted, episodic gridworld task with start and goal states and the usual actions causing movement up, down, right, and left. The reward is -1 on all transitions except those into the region marked “The Cliff”; stepping into this region incurs a reward of -100 and sends the agent instantly back to the start (SUTTON; BARTO, 2018).

CliffWalking

The problem is solved in this model using the Q-learning algorithm, implemented with the support of the NetLogo Q-Learning Extension.
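For readers unfamiliar with Q-learning, the following is a minimal Python sketch of the tabular algorithm on the 4x12 cliff-walking grid described above. The learning rate, exploration rate, and episode count are assumptions for illustration, not the values used in the NetLogo model or its extension.

```python
# Tabular Q-learning on the 4x12 cliff-walking gridworld (illustrative sketch).
import random

ROWS, COLS = 4, 12
START, GOAL = (3, 0), (3, 11)
CLIFF = {(3, c) for c in range(1, 11)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.5, 1.0, 0.1  # assumed learning parameters (undiscounted task)

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}

def step(state, action):
    """Apply an action; stepping into the cliff costs -100 and resets to START."""
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), ROWS - 1)
    c = min(max(state[1] + dc, 0), COLS - 1)
    if (r, c) in CLIFF:
        return START, -100
    return (r, c), -1

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(4)
    return max(range(4), key=lambda a: Q[(state, a)])

for episode in range(500):
    state = START
    while state != GOAL:
        action = choose(state)
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(4))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
```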

