Recommendation special: How do we use artificial intelligence in an ethical way?
INVI's Head of Research, Sofie Burgos-Thorsen, is leading the development of the Wild Problem Model. Her work with data is distinguished by a data feminist approach: a critical view of power and entrenched knowledge systems, which she is passionate about challenging and turning upside down.
That's why she always asks the question: What knowledge and experience do we miss out on with the conventional methods and tools we use to make hard decisions? Can we expand our data imagination by including a greater diversity of perspectives - especially from the people who experience the problems up close, and who have relevant experience and knowledge that is often overlooked or ignored?
According to Sofie, this means being curious and open to the possibilities of digital technologies such as artificial intelligence - but without being blind to the biases and challenges that can also be associated with them.
Here you can read about three books that Sofie Burgos-Thorsen has been inspired by when working on how to create an ethical foundation for INVI's Wild Problem Model.
-
Sofie Burgos-Thorsen has worked with digital innovation, participatory design, artificial intelligence and urban planning across research and practice for the past 10 years, with experience from Copenhagen Institute for Futures Studies, Techfestival Copenhagen, and Gehl Architects.
A sociologist by training, she holds a PhD from the Techno-Anthropology Laboratory and MIT and is passionate about bridging data science, design justice, and sociological analysis to create social change in strategy, design and policy.
#1
Atlas of AI by Kate Crawford
Crawford argues that AI is not a neutral technology, but is based on extensive extraction of rare minerals, data collection without consent, and the labor of people in exploitative conditions. She highlights how AI systems reinforce existing inequalities and benefit big tech companies, while ordinary people are monitored and oppressed.
She makes AI tangible. For many people, artificial intelligence is something intangible that resides in a cloud. Crawford brings it down to earth and, drawing on more than a decade of research, reveals what AI is from a new and exciting angle. One of Crawford's latest works is the impressive project "Calculating Empires: A Genealogy of Technology and Power Since 1500", which she created with Vladan Joler and which is available online.
#2
Data Feminism by Catherine D'Ignazio and Lauren Klein
This book also helps inform how we approach AI at INVI. Catherine and Lauren uncover how AI often reproduces inequality and asymmetrical power dynamics in society. But while Crawford focuses mostly on the negative aspects of AI, Catherine and Lauren also explore how we can use technology to create positive change - how technology can also be the answer to these concerns. As they write, data methods and AI are a double-edged sword, and we should join the fight to create transparent, accountable, and democratic uses of AI. This framework, as well as the seven data feminist principles presented in the book, is a great inspiration for the way we work with AI at INVI.
The book is also open access at MIT Press, so everyone can read along.
#3
Design Justice by Sasha Costanza-Chock
In Design Justice, Sasha Costanza-Chock explores how design that involves marginalized groups can break down structural inequality, promote emancipation and create socially sustainable solutions. Costanza-Chock makes a strong case for why it's valuable to involve many diverse voices in developing solutions to society's wicked problems. This is central to what we want to do with our Wild Problem Model - increase inclusion, bring more voices to the table, and create greater diversity in the evidence base that informs decision-making - both because we believe it creates fairness and legitimacy, and because it simply gives us better solutions. If we take Sasha's message seriously, Design Justice also calls for us to involve diverse voices in the way we develop the Wild Problem Model itself. We already do this to some extent, but over the next year or so we hope to open up further and invite researchers, students, and practitioners to offer different perspectives on how we should think about bias and "responsible AI" in relation to our model.
Design Justice is also open access at MIT Press!