In this talk, Florian Grond will discuss his past and ongoing research on immersive audio in sound art and ethnography. He will share insights from collaborative research-creation projects in cultural mediation and assistive technology, as well as from independent art projects.
His previous projects have focused on bridging academia and the arts in sound art and music, on the intersection of art and disability, and on assistive technology. Through these examples, he will demonstrate how immersive audio can be a powerful tool for transporting the audience into first-person experiences. These experiences allow listeners to immerse themselves in diverse environments, their affordances, and their imaginaries as they engage with other people's perspectives across sensory abilities and practices.
He will conclude the presentation with his ongoing SSHRC-funded research projects on neurodiverse phenomenology, atmospheres, and sensory-friendly zones using binaural first-person perspectives, in which he deepens these immersive ethnographic methods together with colleagues from Concordia and McGill.
About the speaker
Florian Grond is an assistant professor in Design, Interaction Design, and Computation Arts at Concordia University. His research focuses on immersive media, assistive technology, and the intersection of design and disability, drawing on his background as an artist and technologist.
Grond has published in sound studies, auditory display, and, more recently, immersive sound recording. His work includes pioneering 6-degrees-of-freedom sound recording systems that contribute to 3D sound recording and immersive sound reproduction.
In collaboration with the literary scholar and writer Dr. Piet Devos, he also introduced binaural recordings for first-person ethnographies. After a postdoctoral appointment at Concordia, where he was involved with the Critical Disability Studies Working Group, he held a research-creation postdoctoral fellowship at CIRMMT (Centre for Interdisciplinary Research in Music Media and Technology), funded by FRQSC.
Grond has also been affiliated with the Shared Reality Lab at McGill and has worked as a creative sound engineer for ZYLIA, a company specializing in microphone arrays. Recently, together with Melissa Park from McGill, he received an SSHRC Insight Development Grant for a project applying immersive sound recording to the study of neurodiverse multisensory experiences. He also received a second Insight Development Grant as PI to explore atmospheres through immersive audio technology with a neurodiversity focus, in collaboration with David Howes, Matthew Unger, and Melissa Park.