Theory & Techniques

During the workshop, participants will engage in a variety of activities that draw on theatre workshop practice, dramaturgical theory, philosophy of technology and virtue ethics. Here are some materials and background reading to help you prepare and to familiarise yourself with the theories, practices and ideas that we hope will be useful in further developing our knowledge of Ethics on the Ground.

Forum Theatre

Forum theatre was developed by theatre practitioner and activist Augusto Boal as a type of theatrical game in which a problem is shown in an unresolved form. The audience is invited to suggest and enact solutions. The scenario is then repeated, allowing the audience to offer alternative solutions. The game becomes a kind of contest between the audience and actors trying to bring the play (or ‘oppression’ in Boal’s terms) to a different end. The result is a pooling of knowledge, tactics and experiences. As the audience participates in enacting solutions to break the cycle of oppression, they are also “rehearsing for life.”


Augusto Boal at a workshop in New York 2008 –
By Thehero – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6717170

For Boal’s own writing about theatre techniques, see Games for Actors and Non-Actors (2005).

Theatre of Action

The Stanislavski ‘method’ of acting is often considered to involve seven key steps – techniques developed to help actors build believable characters. Here, participant/actors must answer the following questions to build an improvised response:

  • Who Am I?
  • Where Am I?
  • When Is It?
  • What Do I Want?
  • Why Do I Want It?
  • How Will I Get It?
  • What Do I Need To Overcome?

Read more about Stanislavski’s methods and techniques here (see Gillett, 2014).

Virtue Ethics & Applied Phronesis

In Virtue Ethics, phronesis is the core intellectual virtue through which other scientific, artistic and technical virtues are expressed. Phronesis comes from an intimate familiarity with practice in contextualised settings. It represents knowledge that is context-dependent and particular, rather than abstract and universal. Phronesis involves practical judgment about the right ‘choice’ to make among various possibilities and is ultimately concerned with the appropriate action in relation to things that are good for us.

Applied phronesis seeks information based on experience in context for the benefit of the people being studied and for whom systems are designed. It demands analysing what appear to be the same phenomena or requirements in different contexts and reflecting on the choices and dilemmas arising. Building phronesis into data design processes contributes to deeper ethical reflection, which leads to a more sustainable design ethics.

Elements of applied phronesis that have been developed in mHealth research include:

  1. Cast a wider net for feedback in research: this goes beyond contextual design practice to include multiple participants with distinct perspectives on the same phenomena in different contexts. Analysis shows that this can produce ‘value conversations’ that result in changes to the ‘voice’ and tone of design.
  2. Pay attention to the order of feedback: this relates to which values may become embedded first in design. For example, clinical requirements take precedence in mHealth but can present challenges for vulnerable groups in accessing and using technologies, whereas design that recognises the order of feedback can support intrinsic values like empowerment in simple ways.
  3. Adopt an ethically pluralist approach that expects and acknowledges difference among designers and participants. Small but important differences can arise in how designers, developers, data scientists and participants conceptualise those who use mHealth technologies, e.g. the term ‘client’ (rather than user or patient) may better reflect the communication relationship.
  4. Acknowledge and disclose practitioners’/researchers’ subjective value systems: this ensures that we continue to ‘see no neutral ground’…

For an extended review of its application in the mHealth for mental health context, see:

M. Barry, K. Doherty, J. Marcano Bellisario, J. Car, C. Morrison, G. Doherty (2017) “mHealth for Maternal Mental Health: Everyday wisdom in ethical design”, Proceedings of CHI Conference on Human Factors in Computing Systems 2017. https://dl.acm.org/doi/10.1145/3025453.3025918

The Research

Surveys of public attitudes show that people believe it is possible to design ethical AI. However, the professional development context can offer minimal space for ethical reflection or oversight, creating a significant gap between public expectations and ethics ‘performance’ in practice. This workshop will examine how and where ethical reflection happens on the ground, exploring the gap in expectations and identifying new approaches to more effective ethical performance. Bringing social scientists, data scientists, designers, activists and consultants together to focus on AI/ML in the health context, it will foster critical dialogue and bring to the surface the structural, disciplinary, social and epistemological challenges of effective ethics performance on the ground. Where, when and how can meaningful interventions happen?

When expectations are collectively shared, they can have a significant influence on development. Expectations are ‘performative’ in that they ‘do’ something or provoke certain social actions, acting as strategies to manage uncertainty and risk in innovation processes (Pollock and Williams 2016, van Lente 2012). They are shaped by a range of actors using formal and informal mechanisms, but there is a distinction between
a) generic performativity – when theoretical models, language or approaches are taken up in the real world but don’t make a difference to how things are actually done, and
b) effective performativity – which makes a real difference in practice (MacKenzie, 2008).

Discourses on ethical AI are often merely ‘generically’ performative, operating at a high level of discourse rather than changing practice on the ground. Abstract values (like FAT – fairness, accountability and transparency), while crucial for guiding discourse among stakeholders at a high level, can be difficult to apply in practice. For example, ethics guidelines such as AI HLEG (2019) tend to conflate uses of AI, suggesting that unfair outcomes relate to ML techniques rather than the context, purposes and decisions around implementation. Even with explainable AI, civic rights and privacy concerns can persist, and these relate to political accountability and transparency rather than to aspects of development (Eubanks 2018, Ananny & Crawford, 2018).

Critical studies have highlighted the limits of ‘anti-discrimination’ approaches (e.g. Hoffmann, 2019), which are socio-political responses specific to the US context. Concepts like ‘fairness’ are culturally and technically shaped over time and do not lend themselves to standardisation (Hutchinson & Mitchell, 2019), yet competing metrics to measure fairness in AI continue to be developed (e.g. Bird et al, 2019). Explanation models for transparency may have only a limited effect unless they also include requirements such as justifiability and contestability (Mittelstadt et al, 2019).
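To make the point about competing metrics concrete, here is a minimal, hypothetical Python sketch (not drawn from any of the cited papers; the data and function names are invented for illustration). It computes two widely discussed fairness measures – a demographic parity gap and a true-positive-rate gap – over the same set of predictions, and the two metrics return opposite verdicts.

```python
# Hypothetical illustration: two fairness metrics applied to the same predictions
# can disagree, which is one reason 'fairness' resists a single standardised
# measurement. All data below is invented.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def true_positive_rate_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (one component of equalised odds)."""
    def tpr(g):
        preds_for_actual_positives = [
            p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1
        ]
        return sum(preds_for_actual_positives) / len(preds_for_actual_positives)
    return abs(tpr("A") - tpr("B"))

if __name__ == "__main__":
    # Invented outcomes, model predictions and group membership for eight people.
    y_true = [1, 1, 0, 0, 1, 0, 0, 0]
    y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

    print("demographic parity gap:", demographic_parity_gap(y_pred, group))           # 0.0 -> looks 'fair'
    print("true positive rate gap:", true_positive_rate_gap(y_true, y_pred, group))   # 0.5 -> looks 'unfair'
```

Under these invented numbers the predictions satisfy demographic parity exactly while showing a large true-positive-rate gap; which verdict matters, if either, depends on the context, purposes and decisions around implementation discussed above.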

The dominant discourse is that problematic outcomes of AI can be fixed by further technology developments, rather than by questioning the motivation for its implementation (Powles & Nissenbaum, 2018). ML developers need techniques and methods to move away from solutionism and better understand, describe and limit the social context in which their systems will be deployed (Selbst et al, 2019). Overall, FAT requires a more pragmatic, holistic and critical approach (Gürses et al, 2019), one that addresses how ethics are implicated in everyday communication processes across the lifecycle of AI/ML technologies, from the education of developers right through to policy, design and implementation.

This workshop focuses on the human aspects of working with AI/ML and therefore explicitly does not address technical solutions. It will use insights from academic, policy and industry research to improve our understanding of where and how FAT and other values can be demanded/enabled, if at all. We focus on the domain of health, a rich context for study due to the potential for AI/ML in diagnosis, treatment personalisation and use of mobile health (mHealth) apps for behavioural support across public and private structures. We will explore how human interactions may impact on the performance of FAT and other ethical values in practice, drawing on concepts from the sociology of expectations (van Lente, 2012), dramaturgical communication theory (Goffman, 1956), Winston’s (1998) model of the ‘performance’ of technological competence and applied phronesis (Barry et al, 2017). Participants will interrogate the ‘solutionist’ discourses around FAT (Selbst et al, 2019) and take away an awareness of how context (geographical, social and political) and human subjectivity impact on the ethical challenges that arise. Participants will be empowered to critique education, policy and practice and to help develop new approaches to FAT.

References:

AI HLEG. Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, European Commission. 2019.

Ananny, M., Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society. 20,3. 2018.

Barry, M., Doherty, K., Marcano Bellisario, J., Car, J., Morrison, C., Doherty, G. mHealth for Maternal Mental Health: Everyday wisdom in ethical design. Proceedings of the CHI Conference on Human Factors in Computing Systems. Denver, CO. 2017.

Bird, S., Kiciman, E., Kenthapadi, K., Mitchell, M. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM). 834-835. 2019.

Eubanks, V. Automating Inequality. How High-Tech Tools Profile, Police and Punish the Poor. St Martin’s Press. 2018.

Gillett, J. Acting Stanislavski: A practical guide to Stanislavski’s approach and method. Methuen Drama, Bloomsbury. London. 2014.

Goffman, E. The Presentation of Self in Everyday Life. Penguin. London. 1956.

Gürses, S. Peña Gangadharan, S. and Venkatasubramanian, S. Critiquing and Rethinking Fairness, Accountability and Transparency. Available at https://www.odbproject.org/2019/07/15/critiquing-and-rethinking-fairness-accountability-and-transparency/ 2019.

Hoffmann, A.L. Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication, & Society 22(7), 900-915. 2019.

Hutchinson, B. and Mitchell, M. 50 Years of Test (Un)fairness. Proceedings of the Conference on Fairness, Accountability, and Transparency – FAT*  ’19. 2019.

MacKenzie, D. An Engine, Not a Camera. How Financial Models Shape Markets. Cambridge, MA, MIT Press. 2008.

Mittelstadt, B. Principles Alone Cannot Guarantee Ethical AI. Nature Machine Intelligence. Available at SSRN: https://ssrn.com/abstract=3391293 or http://dx.doi.org/10.2139/ssrn.3391293. November 2019.

Pollock, N. & Williams, R. How Industry Analysts Shape the Digital Future, Oxford University Press. Oxford. 2016.

Powles, J. & Nissenbaum, H. The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence. Medium.com [available at https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53? ] 2018.

Selbst, A., boyd, d., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J. Fairness and Abstraction in Sociotechnical Systems. In Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, New York. 2019. https://doi.org/10.1145/3287560.3287598

Van Lente, H. Navigating foresight in a sea of expectations: lessons from the sociology of expectations. Technology Analysis & Strategic Management 24, 2012.

Winston, B. Media, Technology and Society, A History: From the Telegraph to the Internet. Routledge, Oxford. 1998.