Workshop reflection & synopsis

In this session, we set out to have honest conversations about the ways AI practitioners identify, address, and/or mitigate algorithmic discrimination. We learned a lot from the outcomes of the exercises, and from the discourse and shared sense-making throughout.

From this workshop series, we want to document the barriers AI designers face when identifying, addressing, and/or mitigating algorithmic discrimination, and the types of support they need to overcome these barriers.

The objective of this workshop was to understand the experience of designers.

TL;DR - Here are 6 takeaways that stuck with us:

👉🏼 Designers are limited by the perception that their role and discipline are mostly aesthetic.

👉🏼 Designers rarely have enough transparency into the model and data pipeline to recognize and interrogate (potential) harms. To influence discriminatory outcomes, they need support in understanding model features and functioning, the data pipeline, and how these shape the UX. This could be done with tools such as service mapping or model cards, configured in various structures between the design, dev, and other teams.

👉🏼 Designers find themselves operating in an MVP culture with a “move fast & break things” attitude. Unable to overthrow capitalist ideologies, they would be helped by business arguments as leverage, the integration of ‘speed bumps’ through regulation, and ultimately a cultural shift that doesn’t shy away from acknowledging its limitations, responsibilities, and difficult questions.

👉🏼 Discrimination, data rights, and other issues of dissent are handled by legal & compliance. They are treated as incidents or outliers instead of integrated features. The reach of these legal considerations is limited by an anti-liability mindset: digital rights are only granted to users in geographical areas where digital rights laws are enforced. A lack of shared accountability and multi-disciplinary design keeps this limited view of non-discriminatory practice in place.

👉🏼 Post-deployment tools, which fall within outcome-based approaches, are significantly less developed than their process-based counterparts. More channels for listening to the team, users, and the wider community post-deployment can help redistribute the burden of proof from internal team champions to external users and community advocates, so that flaws and harms are spotted earlier, more easily, and more cheaply.

👉🏼 A “design for data rights” approach has a long way to go. Designers have little knowledge of data rights but are very willing to learn. They need local guidance and internal lobbying leverage. Data rights literacy needs to be integrated into the broader literacy required for data work to scale sustainably and responsibly.