Fair Models for Impartial Policies: Controlling Algorithmic Bias in Transport Behavioural Modelling
The increasing use of new data sources and machine learning models in transport modelling raises concerns about potentially unfair model-based decisions that rely on gender, age, ethnicity, nationality, income, education or other socio-economic and demographic data. We demonstrate the impact of such algorithmic bias and explore best practices to address it using three representative supervised learning models of varying complexity. We also analyse how different kinds of data (survey data vs. big data) may be associated with different levels of bias. The methodology we propose detects a model's bias and implements measures to mitigate it. Specifically, three bias mitigation algorithms are applied, one at each stage of the model development pipeline: before the classifier is trained (pre-processing), while the classifier is trained (in-processing) and after classification (post-processing). Because these debiasing techniques inevitably affect the accuracy with which individual behaviour is predicted, comparing different types of models and algorithms allows us to determine which techniques provide the best balance between bias mitigation and accuracy loss in each case. This approach improves model transparency and provides an objective assessment of model fairness. The results reveal that mode choice models are indeed affected by algorithmic bias, and we show that off-the-shelf mitigation techniques can deliver fairer classification models.
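As an illustration of the three-stage pipeline the abstract describes, the sketch below shows how bias in a binary mode choice classifier could be measured and then mitigated at the pre-processing stage. It is a minimal sketch, not the authors' exact setup: the AIF360 toolkit, the survey file, the column names ("chose_car", "gender") and the choice of Reweighing as the pre-processing algorithm are all assumptions made for illustration.

```python
# Hedged sketch: bias detection plus one pre-processing mitigation step,
# assuming the open-source AIF360 toolkit and hypothetical, fully numeric
# survey columns (not the paper's actual data or algorithm choices).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.read_csv("mode_choice_survey.csv")  # hypothetical mode choice survey
privileged = [{"gender": 1}]                # hypothetical encoding of groups
unprivileged = [{"gender": 0}]

# Wrap the data so the protected attribute is tracked explicitly.
data = BinaryLabelDataset(df=df,
                          label_names=["chose_car"],  # binary mode choice label
                          protected_attribute_names=["gender"],
                          favorable_label=1.0,
                          unfavorable_label=0.0)

# Detect bias: group fairness metrics computed on the labels.
metric = BinaryLabelDatasetMetric(data,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Pre-processing mitigation: Reweighing computes instance weights that make
# the label statistically independent of the protected attribute.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_rw = rw.fit_transform(data)

# Train an ordinary classifier on the reweighted sample; in-processing
# (e.g. AIF360's PrejudiceRemover) and post-processing (e.g. its
# RejectOptionClassification) would hook in at the later pipeline stages.
clf = LogisticRegression(max_iter=1000)
clf.fit(data_rw.features, data_rw.labels.ravel(),
        sample_weight=data_rw.instance_weights)
```

Recomputing the same fairness metrics after mitigation, alongside the classifier's accuracy on held-out data, yields exactly the fairness-versus-accuracy trade-off the abstract discusses.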
María Vega-Gonzalo (author) / Panayotis Christidis (author)
2022
Article (Journal)
Electronic Resource
Metadata by DOAJ is licensed under CC BY-SA 1.0
Analysis of Urban Environmental Policies Assisted by Behavioural Modelling
British Library Conference Proceedings | 1996
Algorithmic Modelling, Parametric Thinking
Wiley | 2011
Algorithmic modelling, parametric thinking
British Library Conference Proceedings | 2011