We are excited for the third Tech Ethics Bristol event on “AI Fairness and Explainability”.
This Friday, 11th June, 12.20pm – 1.30pm, the event will feature two online sessions with four speakers who are experts in their fields.
AI and Machine Learning are being adopted at a rapid pace across the world. Without a solid understanding of bias and fairness, and without the ability to explain what the algorithms are doing, it becomes increasingly difficult to manage, audit or govern these systems. The result can be unintended consequences: discrimination, offence, decisions taken unfairly, or algorithms failing outright and not behaving as they were designed to. Depending on the use case, this can translate into real-world harm for the people subject to these decisions, from rejected loan applications to wrongful arrests.
This event will explore theoretical and commercial approaches to identifying and dealing with bias, and to ensuring that algorithms are developed with a robust process in place to account for these factors.
Machine Learning is increasingly pervasive in our day-to-day lives, whether it is suggesting your weekly Spotify mixtape or provisioning medical care. Ensuring ML-based algorithms are fair and unbiased with respect to certain sensitive variables is therefore an essential consideration in the development and deployment of such products. In this talk, we will discuss the different ways in which we can begin to qualitatively and quantitatively understand and measure bias, and discuss some of the ethical considerations around developing such life-affecting technologies.
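The talk itself will cover these measures in depth; as a flavour of what quantitatively measuring bias can look like, here is a minimal sketch of demographic parity, one widely used fairness metric. The data and variable names are invented for illustration and are not taken from the speakers' material:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # P(prediction = 1 | group A)
    rate_b = y_pred[sensitive == 1].mean()  # P(prediction = 1 | group B)
    return abs(rate_a - rate_b)

# Hypothetical binary decisions (e.g. loan approvals) for two groups
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, sensitive))  # 0.2
```

A gap of zero means both groups receive positive decisions at the same rate; in practice this metric is usually examined alongside others, such as equalised odds, since no single number captures fairness on its own.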
We will show how some commonly used bias mitigation techniques that ignore cause-and-effect relationships in the data can actually increase bias in hidden ways. Finally, we will close by discussing some ways forward.
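To make this kind of failure mode concrete, here is a minimal synthetic sketch (our own illustration, not the speakers' example) of one well-known naive mitigation, "fairness through unawareness", going wrong: simply dropping the sensitive attribute achieves nothing when a correlated proxy feature carries the same information through to the decision:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                # sensitive attribute (0 or 1)
proxy = group + rng.normal(0, 0.3, n)        # correlated feature, e.g. postcode
score = 0.5 * proxy + rng.normal(0, 0.3, n)  # model that never sees `group`
approve = score > score.mean()

# Approval rates still differ sharply by group despite "unawareness"
print(approve[group == 0].mean(), approve[group == 1].mean())
```

Because the proxy sits on the causal path from the sensitive attribute to the score, the disparity survives; reasoning about the data-generating process, rather than just the feature list, is what the talk's causal framing addresses.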
Speakers:
Chris Lucas is a Senior Research Engineer at the pioneering HealthTech business Babylon Health. He has a PhD in High Energy Particle Physics and worked with the CMS collaboration on the Large Hadron Collider at CERN before moving into AI & ML Engineering roles. His work at Babylon Health has focussed heavily on fair and unbiased representation learning, causal inference and graphical models, and he has worked on numerous ethics projects in this area.
Sina Salek is a Data Scientist at Axiom Data. Sina originally trained as a theoretical physicist, working on foundational as well as applied problems in quantum information theory at the Universities of Oxford, Bristol and Hong Kong. For most of his career as a physicist, his main focus was generalising causal models and machine learning techniques for future quantum information devices. After leaving academia, he worked at Fujitsu Labs of Europe as an AI Ethics Researcher.
Explainable AI (XAI) is increasingly positioned as a technical solution to a variety of ethical challenges of automated decision making – from identifying data bias to enhancing trust and complying with regulation. In contrast, our case study at an insurance company shows that XAI goes beyond models and data: explanations were generated by a variety of actors within and beyond the technical teams, and different actors held different knowledge and expectations of what needed explaining and why. We argue for the need to widen the horizon of explainable AI from normative principles and technical solutions to social practices that take into account wider organisational and community contexts.
An expanded definition of XAI will allow us to employ participatory approaches that integrate the lived experiences of the people subject to automated decision making, and facilitate richer and more inclusive discussions on what makes AI fair and ethical.
Speakers:
Susan Halford is a social scientist, Professor of Sociology, Co-Director of the Bristol Digital Futures Institute at the University of Bristol and President of the British Sociological Association.
Marisela Gutierrez Lopez is a Senior Research Associate at the Bristol Digital Futures Institute of the University of Bristol.
🔓 12.20pm – CrowdCast room opens
👋 12.30pm – Event starts with a welcome from your meet-up organisers, Karin & Alex. We will give you an overview of Tech Ethics Bristol and our mission
📢 12.35pm – Session 1: “Fairness considerations in Machine Learning”
💚 1.05pm – Session 2: “Ethical AI in Context: Explainability as a Relational Practice”
🗓️ 1.30pm – Round-up, other community notices and next event announcement.
The event will be recorded, and the recording and slides will be available to view shortly afterwards.
Brought to you by:
Collective Intelligence – https://www.collective-intelligence.co.uk/
Sponsored by:
ADLIB recruitment – www.adlib-recruitment.co.uk