Concordia’s Fenwick McKelvey and other researchers explore governance at the Montreal AI Symposium

Speakers at the October 11 event argued for an interdisciplinary approach to address regulation challenges
October 23, 2024

The Montreal AI Symposium panel discussion included Fenwick McKelvey, Concordia professor and Applied AI Institute co-director (third from left).

What principles should guide research into new technologies? How do we balance exploration and innovation while minimizing potential risks?

Governments, regulators, industry and artificial intelligence (AI) researchers are all grappling with these questions. But, as Fenwick McKelvey, associate professor of communication studies and co-director of Concordia’s Applied AI Institute, reminded attendees at the 2024 Montreal AI Symposium, they’re not new.

“I think part of our responsibility is to not negate the lessons we've learned,” McKelvey said in his comments at the event’s panel discussion. It brought together researchers from a range of disciplines to explore AI and governance, the symposium’s 2024 theme, through the lens of societal impacts.

“It’s kind of a plea for interdisciplinary research,” McKelvey continued. “Technologies have a social life. When everybody was trying to assess the impacts of ChatGPT in universities, I thought, ‘No, this is exactly what we do: try to make sense of how technologies are taken up in the world.’

“Should AI be used for mass surveillance? Should it be used in a policing context? How do we start assessing risk and developing ways of mitigating it? It’s a whole stack problem that goes from regulators to developers to people training and building AI models.”

Encouraging collaboration and exploring risks

Hosted at the Centre Mont-Royal in downtown Montreal on October 11, the symposium is now in its seventh year. The event seeks to address fundamental advances and applications in AI and features contributions from Montreal-area academics and professionals, including Concordia faculty, researchers and recent graduates.

In addition to the panel discussion, the day-long event consisted of keynote speakers, contributed talks and posters. A cocktail hour encouraged networking and allowed participants to connect with the event’s sponsors, including Google, Meta and ServiceNow.

Co-organizer Michał Drożdżal, a research scientist at the Facebook AI Research lab, says the symposium is designed to bring value to the wider Montreal AI research community.

“Oftentimes, researchers, we’re in a bubble — we only know the people around us. The symposium invites researchers from the Greater Montreal area to come together, discuss, maybe find new collaborations and see what others are thinking.”

He notes that his peers are increasingly recognizing the risks associated with the rapid pace of AI development, making governance a timely focus.

“There’s a growing understanding that we’re building technology that connects to real life. These tools are being used more and more, but users may not understand them well,” he says. “This creates a problem with governance. People are shifting to asking questions around utility: how can we use these tools well?”

Concordia student Peter Veroutis's poster featured his collaborative AI research project.

A venue to explore tough questions

For Drożdżal, the symposium is a venue to kickstart discussions, identify stakeholders and explore tough questions by inviting a range of perspectives.

“We often don’t have the answers in technology, but maybe law or social sciences, other fields have thought about similar problems, and we can work together to advance and make a new technology,” he says.

Drożdżal explains that diversity, equity and inclusion have been integral to symposium planning from the start.

“We have a code of conduct to make it a safe space to engage. It’s built into the culture. I think that’s the idea behind the founding of the symposium. We followed that spirit. The key point now is to start discussing what we can do as researchers to think about those hard problems.”

The diverse and interdisciplinary approach was echoed in the two keynote talks: “Aligning AI and Law for Safe Real-world Deployments,” by Peter Henderson, a legal researcher from Princeton University; and “Large Language Models as Cultural Technologies,” by Alison Gopnik, a psychology professor at the University of California, Berkeley.

They both joined McKelvey on the panel discussion, alongside Joelle Pineau of Meta AI, drawing on their different backgrounds to address the global nature of AI governance issues.

For his part, McKelvey connected the specific challenges facing Canadian regulators to the international context.

“Every Canadian citizen has the right to their personal information, but our data is often used in training large language models without due consideration of privacy rights. We're figuring out what the right relationship is with existing law and artificial intelligence. How does privacy law inform access and data rights, and how does that inform the training of models?

“It’s a real and substantial issue that many privacy commissioners globally are tackling. In one sense, we’re trying to ensure that AI systems are compliant with existing laws, and secondly, trying to make sense of where the right focus is going to be for AI.”

Applying ethics as a research constraint

For Peter Veroutis, BA/BSc 23, the symposium was an opportunity to showcase his Natural Sciences and Engineering Research Council (NSERC)-funded undergraduate research project, “Survival Multiarmed Bandits with Bootstrapping,” via the event’s poster segment.

Co-created with Frédéric Godin, associate professor of mathematics and statistics at Concordia, the project explores the use of AI in sequential decision-making for applications in algorithmic engagement.

Veroutis says the symposium provided professional opportunities, such as advice on potential new directions for his research, as well as a chance to delve into the ethical considerations of his burgeoning career.

“The talks were cool. And meeting experts in the field and receiving their comments on my paper has been a highlight,” he says.

“AI researchers think in terms of optimize, optimize, optimize. We should be more aware of ethics. As scientists, we can make a smart model, but we have to play within the realm of safety. The talks were a reminder that we can actually measure ethics as a constraint and optimize within that.”


Discover emerging research on artificial intelligence technologies at Concordia’s Applied AI Institute.


