As part of the ‘Tech Ethics Bristol’ event series, we caught up with Chris Lucas, Senior Research Engineer at the pioneering HealthTech business Babylon Health.
We welcomed Chris as a guest speaker at our third “Tech Ethics Bristol” event on Friday 11th June, 12.20pm–1.30pm, on the topic “AI Fairness and Explainability”.
Chris has a PhD in High Energy Particle Physics and worked with the CMS collaboration on the Large Hadron Collider at CERN before moving into AI and ML engineering roles. His work at Babylon Health has focussed heavily on fair and unbiased representation learning, causal inference, and graphical models, and he has worked on numerous ethics projects for the business.
Chris Lucas: I’m currently a researcher in the Applied Sciences group at Babylon Health, but originally I come from a Physics background, more specifically experimental particle physics. When I left academia I knew I wanted to find something that was as fundamentally interesting but had a much more direct positive impact on people’s lives. After earning my engineering stripes in a small start-up, I was approached by Babylon and have worked in the research team ever since. We focus on using technology, often more specifically Machine Learning, to make healthcare more accessible and affordable for people around the world.
Chris Lucas: Machine Learning-based technology has seen a huge boom in popularity, hype and funding in the last 10 years. Some people hail ML as a “magic bullet” which can, and should, be used to solve nearly every problem. As we discuss in our talk, this is even the approach that has been taken with fairness and the de-biasing of machine learning models – that it’s a solvable analytical problem if only the right architectures or metrics can be found. Unfortunately, I don’t believe that years of complex cultural and societal biases can be undone just by using a slightly different loss metric. In addition to technical solutions, we need to start collectively questioning exactly what we’re trying to do with such technology and to try to understand the subtleties of who it can ultimately affect, for better or worse.
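For readers unfamiliar with what a fairness “metric” looks like in practice, here is a minimal, hypothetical sketch in Python of one widely used measure, the demographic parity difference. The data and variable names are invented for illustration (this is not Babylon Health’s code); the broader point stands – a single number like this can flag a disparity, but it cannot by itself resolve the societal questions behind it.

```python
# A minimal, hypothetical sketch of one common fairness metric:
# demographic parity difference. All data here is made up.
import numpy as np

rng = np.random.default_rng(0)

# Invented binary model decisions and a binary protected attribute.
y_pred = rng.integers(0, 2, size=1000)   # 1 = positive decision
group = rng.integers(0, 2, size=1000)    # 0/1 = protected group

# Demographic parity compares positive-decision rates across groups.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.3f}")
print(f"positive rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
```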
Chris Lucas: Technologies like ML are often deployed at scale, which means even tiny imperfections can have a significant real-world impact. This is especially true when we consider the sometimes subtle and overlooked sources of bias that models can perpetuate from seemingly inconsequential modelling or dataset-curation decisions. ML fairness is ultimately a societal, philosophical and ethical problem, and as such it’s important not only to raise awareness but also to enable the kinds of conversations that should be happening when creating AI-based technologies.
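To make one such subtle source concrete: a classic example is a proxy variable left in the training data. The sketch below (synthetic data and invented feature names, not Babylon’s pipeline) shows how simply deleting the protected attribute – sometimes called “fairness through unawareness” – fails to remove a disparity when a correlated proxy remains.

```python
# Hypothetical sketch: "fairness through unawareness" fails when a
# correlated proxy remains. Synthetic data, invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

group = rng.integers(0, 2, size=n)             # protected attribute
proxy = group + rng.normal(0.0, 0.3, size=n)   # e.g. a postcode-like feature
other = rng.normal(0.0, 1.0, size=n)           # an unrelated feature

# Historical labels already disadvantage group 1.
y = (rng.random(n) < np.where(group == 1, 0.3, 0.6)).astype(int)

# The protected attribute is deliberately excluded from the inputs...
X = np.column_stack([proxy, other])
y_hat = LogisticRegression().fit(X, y).predict(X)

# ...yet the disparity survives, because the proxy leaks group membership.
print(f"positive rate, group 0: {y_hat[group == 0].mean():.2f}")
print(f"positive rate, group 1: {y_hat[group == 1].mean():.2f}")
```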
The prominent sociologist Ruha Benjamin has a quote which really resonated with me:
“Indifference to social reality on the part of tech designers and adopters can be even more harmful than malicious intent”.
As a tech community I believe it’s important that we’re not only having conversations about the ethics of AI ourselves, but that we seek to include as wide a range of society as possible.
Chris Lucas: As I mentioned before, I decided to jump from academia to industry specifically because I wanted to have a more direct impact on people and society. While it’s easy to criticise and focus on all the ways in which tech and AI can have a negative impact, I definitely don’t want to lose sight of its potential to have a fantastic positive effect, if done right. Many of the problems we face as a society appear to come from increasing inequality and the imbalance of power. In a discussion at the recent ICLR conference, Timnit Gebru asked:
“Every time we create a technology we should ask ourselves: is this technology shifting power, or is it amplifying the current holders of power?”
So for me, I get excited to think of the ways in which tech can enable and empower under-represented members of society, improving access to vital services, education and opportunities for all.
Chris Lucas: As we conclude in our talk, the problem of bias and fairness in ML ultimately isn’t just a technical problem, but part of a much bigger conversation that is now starting to happen. I see the key future developments lying in greater understanding and ethical oversight of ML practices, involving people from a wide variety of backgrounds. The work of groups like the Algorithmic Justice League and the AI Blindspot group in raising awareness and defining guidelines for the ethical development of AI tech is a great example of this, and one I hope becomes the norm in our communities.
More event info here: https://www.meetup.com/tech-ethics-bristol/events/278572516/