Workshop reflection & synopsis

In this session, we hoped to have honest conversations about the ways AI practitioners identify, address, and/or mitigate algorithmic discrimination. We learned a lot from the outcomes of the exercises, and from the discourse and shared sense-making throughout.

The objective of this workshop series is to document the barriers AI practitioners face when identifying, addressing, and/or mitigating algorithmic discrimination, and the types of support they need to overcome these barriers.

TL;DR - Here are 4 takeaways that stuck with us:

👉🏼 beyond the direct, visible, and individual harms that we hear most about, there is a range of subtler, more opaque, and more collective harms embedded in normative world views and colonial legacies; these are equally part of the problem and enable the more overt harms

👉🏼 there is a lot of harm that slides past the grey areas of data protection, which rely on the awareness of decision-makers within companies, on the transparency - or lack thereof - of data and model pipelines, and on the outcome demonstrably affecting a single individual

👉🏼 rather than relying on any specific tool, method, framework, or intervention to mitigate harmful outcomes, we want to think of our work as a practice; and as AI practitioners, continue to question how we can acknowledge the presence and disarm the impacts of inequalities in everything we make

👉🏼 not being allowed to collect special category data has its downsides: we cannot confirm how racist/sexist/ableist etc. a model is, because there is no way to segment the groups - “without race, we can not check for racism”. models also pick up proxy variables and affinity groups regardless, which can be difficult to recognize and protect against (see the sketch below)
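
As a minimal sketch of that last point (assuming a hypothetical pandas DataFrame of model decisions with made-up column names), auditing outcomes per group is only possible when the protected attribute is actually recorded, while a proxy such as postcode can carry much of the same signal:

```python
import pandas as pd

# Hypothetical audit table: one row per decision made by a model.
decisions = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A"],       # special category data
    "postcode": ["1011", "1011", "1103", "1103", "1104", "1012"],
    "approved": [1, 1, 0, 0, 1, 1],
})

# With the protected attribute recorded, we can segment outcomes and spot disparities...
print(decisions.groupby("race")["approved"].mean())

# ...without it, the same disparity may still exist but stays invisible to the audit,
# while a proxy like postcode carries much of the same information anyway.
print(decisions.groupby("postcode")["approved"].mean())
```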


Workshop synopsis

Documenting first-hand experiences of algorithmic discrimination & harm

We kicked the workshop off by plotting harms – more than 85 post-its were shared, capturing algorithmic harms that participants had perceived, experienced, or encountered. We then collectively clustered these harms into themes and found that while some were caused by technical elements, like the way data was handled or models were trained, many more were caused by makers’ choices: (un)intentionally embedding their ideologies into their creations, or mirroring historical inequalities.

Act I: Plotting & Clustering Harms

In the exchange that followed, we discussed individual harms (’increased background checks based on geographic location before approving service’) versus collective harms (’not allowing users to opt out of classifications or self identify (e.g. gender, race)’), direct harms (’targeting vulnerable groups with ads crafted to trigger their weakness (e.g. gambling)’) versus indirect harms (’Making the default voice & name for AI assistants feminine (or even human) reinforces servitude’), and intentional harms (’using inaccessible language to demonstrate data rights’) versus those resulting from ignorance (’pronoun misuse’).