Concordia’s Jason Edward Lewis wants ethical artificial intelligence with an Indigenous worldview
The key moment happened two years ago, when a sentence buried in an unpublished book caught the eye of doctoral student Suzanne Kite.
Her supervisor, Jason Edward Lewis, Concordia University Research Chair in Computational Media and the Indigenous Future Imaginary, shared with her a draft of a chapter that discussed algorithmic bias and artificial intelligence (AI).
“In my own research, I had come across this idea in Lakota ontology of rocks as having volition,” says Kite. “And when I read Jason’s book chapter, I realized that the Lakota ontology spoke directly to his work.”
The philosophy recognizes non-human forms of being as possessing legitimate consciousness that exists outside of humanity. The Lakota have also established many formal protocols to recognize the relationship between human and non-human ways of being.
By connecting Lakota views on non-humans to computational systems, Kite led Lewis to reconsider how to approach the problem of bias in AI.
‘The IIF is the engine that is making it all go’
“This is the first time we thought: we need to look at our relationality with AI,” says Lewis.
“Lakota ontology and philosophy provided a window into a language that might make it possible to do so in a new way.”
Since then, Lewis has co-founded the Indigenous Epistemology and AI Working Group: an international hub of Indigenous scholars whose goal is to define an ethical relationship to AI that is informed by Indigenous knowledge and philosophies.
Lewis received grants totalling $130,000 from the Canadian Institute for Advanced Research (CIFAR) and the Social Sciences and Humanities Research Council of Canada (SSHRC) to explore the theory and practice of AI through a series of workshops in Hawai‘i this spring and summer. Lewis and Kite also worked with collaborators to turn those initial conversations into an essay that won a $10,000 prize and will be published this summer in an edited collection by MIT Press.
None of this could have happened without the Initiative for Indigenous Futures (IIF), the SSHRC partnership that Lewis leads. He is the common link between the two grants; he and Kite are the common links between the essay and the workshops.
“IIF is the engine that is making it all go,” Lewis explains.
Seeking a relationship to non-human intelligences
The essay, titled “Making Kin with the Machines,” is co-authored by Lewis, Kite, Noelani Arista (University of Hawai‘i at Mānoa) and Archer Pechawis.
Chosen out of 260 submissions, it is one of 10 essays published in a special edition of the Journal of Design and Science by MIT Press. One reviewer wrote that it might be the only essay in the collection that opens up truly new ways of thinking about AI.
The essay argues that Indigenous knowledge systems are far better at accommodating the non-human than Western philosophies are, because they do not place humanity at the centre of creation. The writers seek a relationship to non-human intelligences, beyond that of mere tools or slaves, as potential partners in a living system of mutual respect.
The essay states that there is currently no consensus on how to approach human relations with AI. Opinions vary widely within the small network of Indigenous scholars, artists, designers, computer programmers and knowledge-holders who consider the topic. Different Indigenous communities approach questions of kinship differently; some disavow kinship with machines entirely.
And that’s where the CIFAR and SSHRC funding comes in.
Developing an Indigenous protocol for artificial intelligence
Lewis, with collaborators Angie Abdilla, Oiwi Parker Jones (University of Oxford) and D. Fox Harrell (MIT), applied to the CIFAR AI and Society workshop funding competition. Lewis and his colleagues were one of four teams awarded support, receiving $80,000 to conduct two workshops on Indigenous protocol and AI. He also received a SSHRC Connection Grant worth nearly $50,000 to facilitate the workshops.
In March, 30 people gathered in Honolulu to answer two questions: first, what is the intersection between Indigenous thought and AI; and second, what is your interest in AI? From there, they organized into small groups to discuss and brainstorm potential areas of research.
In May, a smaller group will reassemble in Honolulu to complete the first draft of a position paper that outlines an Indigenous protocol for AI.
“My hope is that we can produce this document and then find other mechanisms to keep the group going, so we can turn this into design guidelines and think about how we might actually design one of these systems,” Lewis says.
‘This is the essence of Indigenous futurism’
Lewis envisions three stages to this AI project. Stage one: hold the internal conversation with Indigenous thinkers and makers through the workshops.
“It’s important for us to sort this out amongst ourselves in a way that we think is good for us and our communities,” says Lewis.
Stage two: make the group and its work visible to policy makers.
“Lots of decisions are being made now about the appropriate way to structure AI. We need to be in those conversations,” he adds.
“When people say they’re creating design guidelines for ethical AI and they want to have Indigenous people in the room, they just have to search ‘Indigenous AI’ and they’ll get our website with a list of people from all over the world. I hope this means more of us will be involved in those policy conversations.”
Stage three: making.
As a developer and an artist, Lewis is interested in designing a Kānaka Maoli (Native Hawaiian) AI. He is currently helping to support a group of young Kānaka Maoli in Honolulu who are developing a Hawaiian programming language. From there, he hopes to see them develop an operating system, and from that system, he believes they could program something AI-like.
“This is the essence of Indigenous futurism to me. It’s not just about dreaming about the future, it’s about building the infrastructure to get us to the future we want,” says Lewis.
Likewise, Kite uses machine learning to make art. She is concerned about where her tools come from and how much control she has over them.
“For me there’s a natural progression from asking who built my tools, to what they are built out of, and why the builders made the decisions they made. These are the kinds of questions I ask when I am making.”
‘An infinite number of ways to make the world a better place’
Both Lewis and Kite see an urgent need to engage with — and change — the ethical foundations of AI development.
“A whole ethical structure is starting to be built around AI now. If we don’t consider these issues, we will be locked into a single way of thinking about these systems,” says Lewis.
For instance, many programmers who work on AI tend to view technology as neutral ground and believe that error correction will work bias out of the system.
“Part of what I am trying to say is, the solution is not error correcting. There is actually something fundamentally wrong with the way you are constructing these things,” he explains.
“This is where Indigenous epistemology comes in really well. You’re not treating something respectfully because it has a soul, you are treating it respectfully because it’s one nodal point in a number of different relations that you are enmeshed in.”
Kite sees this relational shift as a needed alternative to Western philosophies that make hard distinctions between humans and everything else. It would allow humans to better address a whole interrelated system of pressing social, environmental and technological issues, from AI to climate change.
“What would happen if we took seriously the idea that what we think of as animate or inanimate might not be accurate? The implications are huge. From such a simple starting point, we open up an infinite number of possibilities and ways to make the world a better place.”
Find out more about Indigenous Directions at Concordia.