Surveys of public attitudes show that people believe it is possible to design ethical AI. However, professional development contexts often offer minimal space for ethical reflection or oversight, creating a significant gap between public expectations and ethics ‘performance’ in practice. This workshop will examine how and where ethical reflection happens on the ground, exploring this gap in expectations and identifying new approaches to more effective ethical performance. Bringing together social scientists, data scientists, designers, activists and consultants to focus on AI/ML in the health context, it will foster critical dialogue and surface the structural, disciplinary, social and epistemological challenges of effective ethics performance on the ground. Where, when and how can meaningful interventions happen?
When expectations are collectively shared they can have a significant influence on development. Expectations are ‘performative’ in that they ‘do’ something or provoke certain social actions, serving as strategies to manage uncertainty and risk in innovation processes (Pollock & Williams, 2016; van Lente, 2012). They are shaped by a range of actors using formal and informal mechanisms, but there is a distinction between
a) generic performativity – when theoretical models, language or approaches are taken up in the real world but don’t make a difference to how things are actually done, and
b) effective performativity – which makes a real difference in practice (MacKenzie, 2008).
Discourses on ethical AI are often merely ‘generically’ performative, operating at a high level of discourse rather than changing practice on the ground. Abstract values, such as fairness, accountability and transparency (FAT), while crucial for guiding discourse among stakeholders at a high level, can be difficult to apply in practice. For example, ethics guidelines such as AI HLEG (2019) tend to conflate uses of AI, suggesting that unfair outcomes relate to ML techniques rather than to the context, purposes and decisions around implementation. Even with explainable AI, civic rights and privacy concerns can persist, and these relate to political accountability and transparency rather than to aspects of development (Eubanks, 2018; Ananny & Crawford, 2018).
Critical studies have highlighted the limits of ‘anti-discrimination’ approaches (e.g. Hoffmann, 2019), which are socio-political responses specific to the US context. Concepts like ‘fairness’ are culturally and technically shaped over time and do not lend themselves to standardisation (Hutchinson & Mitchell, 2019), yet competing metrics to measure fairness in AI continue to be developed (e.g. Bird et al., 2019). Explanation models for transparency may even have a limited effect unless they include other requirements such as justifiability and contestability (Mittelstadt, 2019).
The dominant discourse is that problematic outcomes of AI can be fixed by further technological development, rather than by questioning the motivation for implementation (Powles & Nissenbaum, 2018). ML developers need techniques and methods to move away from solutionism and to better understand, describe and limit the social context in which their systems will be deployed (Selbst et al., 2019). Overall, FAT requires a more pragmatic, holistic and critical approach (Gürses et al., 2019), one that addresses how ethics are implicated in everyday communication processes across the lifecycle of AI/ML technologies, from the education of developers right through to policy, design and implementation.
This workshop focuses on the human aspects of working with AI/ML and therefore explicitly does not address technical solutions. It will use insights from academic, policy and industry research to improve our understanding of where and how FAT and other values can be demanded or enabled, if at all. We focus on the domain of health, a rich context for study given the potential of AI/ML in diagnosis, treatment personalisation and the use of mobile health (mHealth) apps for behavioural support across public and private structures. We will explore how human interactions may affect the performance of FAT and other ethical values in practice, drawing on concepts from the sociology of expectations (van Lente, 2012), dramaturgical communication theory (Goffman, 1956), Winston’s (1998) model of the ‘performance’ of technological competence and applied phronesis (Barry et al., 2017). Participants will interrogate the ‘solutionist’ discourses around FAT (Selbst et al., 2019) and take away an awareness of how context (geographical, social and political) and human subjectivity shape the ethical challenges that arise. Participants will be empowered to critique education, policy and practice and to help develop new approaches to FAT.
AI HLEG. Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, European Commission. 2019.
Ananny, M., Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3). 2018.
Barry, M., Doherty, K., Marcano Bellisario, J., Car, J., Morrison, C., Doherty, G. mHealth for Maternal Mental Health: Everyday wisdom in ethical design. Proceedings of the CHI Conference on Human Factors in Computing Systems. Denver, CO. 2017.
Bird, S., Kiciman, E., Kenthapadi, K., Mitchell, M. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM ’19), 834-835. 2019.
Eubanks, V. Automating Inequality. How High-Tech Tools Profile, Police and Punish the Poor. St Martin’s Press. 2018.
Gillett, J. Acting Stanislavski: A practical guide to Stanislavski’s approach and method. Methuen Drama, Bloomsbury. London. 2014.
Goffman, E. The Presentation of Self in Everyday Life. Penguin. London. 1956.
Gürses, S., Peña Gangadharan, S. and Venkatasubramanian, S. Critiquing and Rethinking Fairness, Accountability and Transparency. 2019. Available at https://www.odbproject.org/2019/07/15/critiquing-and-rethinking-fairness-accountability-and-transparency/
Hoffmann, A.L. Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22(7), 900-915. 2019.
Hutchinson, B. and Mitchell, M. 50 Years of Test (Un)fairness: Lessons for Machine Learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). 2019.
MacKenzie, D. An Engine, Not a Camera: How Financial Models Shape Markets. MIT Press, Cambridge, MA. 2008.
Mittelstadt, B. Principles Alone Cannot Guarantee Ethical AI. Nature Machine Intelligence. November 2019. Available at SSRN: https://ssrn.com/abstract=3391293 or http://dx.doi.org/10.2139/ssrn.3391293
Pollock, N. & Williams, R. How Industry Analysts Shape the Digital Future. Oxford University Press, Oxford. 2016.
Powles, J. & Nissenbaum, H. The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. Medium.com. 2018. [Available at https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53? ]
Selbst, A., boyd, d., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J. Fairness and Abstraction in Sociotechnical Systems. In Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, New York. 2019. https://doi.org/10.1145/3287560.3287598
van Lente, H. Navigating foresight in a sea of expectations: lessons from the sociology of expectations. Technology Analysis & Strategic Management 24. 2012.
Winston, B. Media, Technology and Society, A History: From the Telegraph to the Internet. Routledge, Oxford. 1998.