Defence of the master's thesis of Justin Noah Kreikemeyer

Konrad-Zuse-Haus, R.001

Enabled by the availability of large amounts of data, parameter optimisation is a central technique for constructing data-driven models. Especially for machine-learning models like neural networks, the gradient descent (GD) procedure enables fast optimisation, delivering a wealth of highly accurate models. Unfortunately, gradient-based optimisation techniques are less accessible to mechanistic models, such as agent-based models (ABMs). While Automatic Differentiation (AD) provides the means to calculate gradients for ABMs, discontinuities introduced by data-driven control flow, such as conditional branches on the ABM's parameters, limit the applicability of GD. However, the model's output function can be smoothed to eliminate these discontinuities. To this end, the method of Smooth Interpretation (SI) by Chaudhuri et al. employs probabilistic program execution to soundly smooth the output. A downside is the overhead of executing many possible control-flow paths and storing their results. Moreover, its application to stochastic models adds execution time for each replication required.
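For illustration (this toy example is not from the thesis), consider an output that branches on a parameter theta: the hard branch makes the output a step function whose gradient is zero almost everywhere, whereas a smoothed variant in the spirit of SI blends both branch results with a Gaussian weight. A minimal C++ sketch, with all names and the choice of Gaussian smoothing made here as assumptions:

    #include <cmath>

    // Hard branch: the output jumps at theta == c, so its derivative with
    // respect to theta is zero almost everywhere and undefined at the jump.
    double hard_output(double theta, double c) {
        return (theta < c) ? 0.0 : 1.0;
    }

    // Smoothed variant: weight the two branch results by the probability
    // that a Gaussian centred at theta (std. dev. sigma) takes each branch.
    // The result reduces to the Gaussian CDF and is differentiable in theta.
    double smooth_output(double theta, double c, double sigma) {
        double p_lt = 0.5 * std::erfc((theta - c) / (sigma * std::sqrt(2.0)));
        return p_lt * 0.0 + (1.0 - p_lt) * 1.0;  // blended branch results
    }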

Guided by the observation that the per-path calculations of SI, the stochastic replications, and the common behaviour of agents in an ABM can largely be executed in parallel, this thesis develops a vectorised smooth simulator for ABMs. A special focus is placed on (parallel) algorithms that restrict the number of control-flow paths that must be tracked, and on the efficient integration of AD into smooth ABMs. To evaluate the concept, a prototype smooth, differentiable simulator exploiting the single-instruction, multiple-data architecture of graphics processing units is developed with Nvidia's CUDA and Thrust APIs; the fidelity of the gradients is assessed and the performance is compared to that of a sequential C++ implementation. The result is a general method for making stochastic ABMs amenable to gradient-based optimisation. Measurements show that parallelisation provides a significant speedup over sequential smoothing in many scenarios, but that it is limited by the inherently sequential operations in SI.
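The prototype itself is not reproduced here; as a rough sketch of the kind of data parallelism described (one GPU thread per agent, reusing the smoothed rule from above, with hypothetical names throughout), a Thrust-based evaluation compiled with nvcc might look as follows:

    #include <cmath>
    #include <cstdio>
    #include <thrust/device_vector.h>
    #include <thrust/transform_reduce.h>
    #include <thrust/functional.h>

    // Hypothetical per-agent rule: a smoothed indicator of whether an
    // agent's state exceeds the parameter theta (not the thesis's model).
    struct SmoothStep {
        double theta, sigma;
        __host__ __device__ double operator()(double state) const {
            // P(state + Gaussian noise >= theta): the blended branch weight
            return 0.5 * erfc((theta - state) / (sigma * sqrt(2.0)));
        }
    };

    int main() {
        // One entry per agent; replications could be concatenated here too.
        thrust::device_vector<double> agent_state(1 << 20, 0.5);
        // Apply the smoothed rule to every agent in parallel on the GPU
        // and aggregate into a single, differentiable summary output.
        double total = thrust::transform_reduce(
            agent_state.begin(), agent_state.end(),
            SmoothStep{0.4, 0.05}, 0.0, thrust::plus<double>());
        std::printf("smoothed population output: %f\n",
                    total / agent_state.size());
        return 0;
    }

Flattening agents, replications, and tracked control-flow paths into one large vector like this is one plausible way to expose the parallelism the abstract mentions to the SIMD hardware.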
