2024
Tecks, A., T. Peschlow, and G. Vigliensoni. 2024. Explainability paths for sustained artistic practice. In Proceedings of the Second International Workshop on eXplainable AI for the Arts at the ACM Creativity and Cognition Conference (XAIxArts2024).
Bryan-Kinns, N., C. Ford, S. Zheng, H. Kennedy, A. Chamberlain, M. Lewis, D. Hemment, Z. Li, Q. Wu, L. Xiao, G. Xia, J. Rezwana, M. Clemens, and G. Vigliensoni. 2024. Explainable AI for the Arts 2 (XAIxArts2). In Proceedings of the ACM Creativity and Cognition Conference (C&C ’24).
2023
Vigliensoni, G., and R. Fiebrink. 2023. Steering latent audio models through interactive machine learning. In Proceedings of the 14th International Conference on Computational Creativity (ICCC2023).
Vigliensoni, G., and R. Fiebrink. 2023. Interacting with neural audio synthesis models through interactive machine learning. In The First International Workshop on eXplainable AI for the Arts at the ACM Creativity and Cognition Conference (XAIxArts2023).
Vigliensoni, G., and R. Fiebrink. 2023. Re•col•lec•tions: Sharing sonic memories through interactive machine learning and neural audio synthesis models. In Creative AI track of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
Shimizu, J., I. Olowe, T. Broad, G. Vigliensoni, P. Thattai, and R. Fiebrink. 2023. Interactive machine learning for generative models. In Proceedings of the Machine Learning for Creativity and Design Workshop at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023).
Fujinaga, I., and G. Vigliensoni. 2023. Optical music recognition workflow for medieval music manuscripts. In Proceedings of the 5th International Workshop on Reading Music Systems (WoRMS 2023).
2022
Vigliensoni, G., P. Perry, and R. Fiebrink. 2022. A small-data mindset for generative AI creative work. In Proceedings of the Generative AI and HCI Workshop at the Conference on Human Factors in Computing Systems (CHI 2022). doi: https://doi.org/10.5281/zenodo.7086327.
Vigliensoni, G., L. McCallum, E. Maestre, and R. Fiebrink. 2022. R-VAE: Live latent space drum rhythm generation from minimal-size datasets. Journal of Creative Music Systems 1(1). doi: https://doi.org/10.5920/jcms.902.
2021
Vigliensoni, G., E. de Luca, and I. Fujinaga. 2021. Chapter 6: Repertoire: Neume Notation. In Music Encoding Initiative Guidelines, edited by J. Kepper et al.
2020
Vigliensoni, G., L. McCallum, E. Maestre, and R. Fiebrink. 2020. Generation and visualization of rhythmic latent spaces. In Proceedings of the 2020 Joint Conference on AI Music Creativity. doi: https://doi.org/10.5281/zenodo.4285422.
Vigliensoni, G., L. McCallum, and R. Fiebrink. 2020. Creating latent spaces for modern music genre rhythms using minimal training data. In Proceedings of the International Conference on Computational Creativity (ICCC’20). doi: https://doi.org/10.5281/zenodo.7415792.
Vigliensoni, G., E. Maestre, and R. Fiebrink. 2020. Web-based dynamic visualization of rhythmic latent space. In Proceedings of the Sound, Image and Interaction Design Symposium (SIIDS2020). doi: https://doi.org/10.5281/zenodo.7438305.
Regimbal, J., G. Vigliensoni, C. Hutnik, and I. Fujinaga. 2020. IIIF-based lyric and neume editor for square-notation manuscripts. In Proceedings of the Music Encoding Conference.
2019
Fujinaga, I., and G. Vigliensoni. 2019. The art of teaching computers: The SIMSSA optical music recognition workflow system. In Proceedings of the 27th European Signal Processing Conference.
Vigliensoni, G., A. Daigle, E. Liu, J. Calvo-Zaragoza, J. Regimbal, M. A. Nguyen, N. Baxter, Z. McLennan, and I. Fujinaga. 2019. Overcoming the challenges of optical music recognition of Early Music with machine learning. Digital Humanities Conference 2019.
Vigliensoni, G., A. Daigle, E. Liu, J. Calvo-Zaragoza, J. Regimbal, M. A. Nguyen, N. Baxter, Z. McLennan, and I. Fujinaga. 2019. From image to encoding: Full optical music recognition of Medieval and Renaissance music. Music Encoding Conference 2019.
2018
Vigliensoni, G., J. Calvo-Zaragoza, and I. Fujinaga. 2018. Developing an environment for teaching computers to read music. In Proceedings of the 1st International Workshop on Reading Music Systems.
Castellanos, F., J. Calvo-Zaragoza, G. Vigliensoni, and I. Fujinaga. 2018. Document analysis of music score images with selectional autoencoders. In Proceedings of the 19th International Society for Music Information Retrieval Conference.
Nápoles, N., G. Vigliensoni, and I. Fujinaga. 2018. Encoding matters. In Proceedings of the 5th International Conference on Digital Libraries for Musicology.
Calvo-Zaragoza, J., F. Castellanos, G. Vigliensoni, and I. Fujinaga. 2018. Deep neural networks for document processing of music score images. Applied Sciences, 8(5).
Vigliensoni, G., J. Calvo-Zaragoza, and I. Fujinaga. 2018. An environment for machine pedagogy: Learning how to teach computers to read music. In Proceedings of the Intelligent Music Interfaces for Listening and Creation workshop.
2017
Vigliensoni, G. 2017. Evaluating the performance improvement of a music recommendation model by using user-centric features. PhD dissertation. McGill University.
Vigliensoni, G., and I. Fujinaga. 2017. The music listening histories dataset. In Proceedings of the 18th International Society for Music Information Retrieval Conference. doi: https://doi.org/10.5281/zenodo.1417499.
Vigliensoni, G., D. Romblom, M. P. Verge, and C. Guastavino. 2017. Perceptual evaluation of a virtual acoustic room model. The Journal of the Acoustical Society of America 142(4).
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. One-step detection of background, staff lines, and symbols in medieval music manuscripts with convolutional neural networks. In Proceedings of the 18th International Society for Music Information Retrieval Conference.
Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2017. GRAIL: Database linking music metadata across artist, release, and track. In Proceedings of the 4th International Workshop on Digital Libraries for Musicology.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Music document layout analysis through machine learning and human feedback. In Proceedings of the 12th IAPR International Workshop on Graphics Recognition.
Saleh, Z., K. Zhang, J. Calvo-Zaragoza, G. Vigliensoni, and I. Fujinaga. 2017. Pixel.js: Web-based pixel classification correction platform for ground truth creation. In Proceedings of the 12th IAPR International Workshop on Graphics Recognition.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Pixelwise classification for music document analysis. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools, and Applications.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Pixel-wise binarization of musical documents with convolutional neural networks. In Proceedings of the 15th IAPR Conference on Machine Vision Applications.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. Staff-line detection on greyscale images with pixel classification. In Proceedings of the 8th Iberian Conference on Pattern Recognition and Image Analysis.
Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2017. GRAIL: A general recorded audio identity linker. Late-breaking session at the 18th International Society for Music Information Retrieval Conference.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. A unified approach towards automatic recognition of heterogeneous music documents. In Proceedings of the Music Encoding Conference.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2017. A machine learning framework for the categorization of elements in images of musical documents. In Proceedings of the Third International Conference on Technologies for Music Notation and Representation.
2016
Vigliensoni, G., and I. Fujinaga. 2016. Automatic music recommendation systems: Do demographic, profiling, and contextual features improve their performance? In Proceedings of the 17th International Society for Music Information Retrieval Conference. doi: https://doi.org/10.5281/zenodo.1417073.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2016. Staff-line detection on greyscale images with pixel classification. Late-breaking session at the 17th International Society for Music Information Retrieval Conference.
Calvo-Zaragoza, J., G. Vigliensoni, and I. Fujinaga. 2016. Document analysis for music scores via machine learning. 3rd International Digital Libraries for Musicology workshop.
2015
Fujinaga, I., G. Vigliensoni, and H. Knox. 2015. The making of a computerized harpsichord for analysis and training. International Symposium on Performance Science.
Barone, M., K. Dacosta, G. Vigliensoni, and M. Woolhouse. 2015. GRAIL: A music identity space collection and API. Late-breaking session at the 16th International Society for Music Information Retrieval Conference.
2014
Vigliensoni, G., and I. Fujinaga. 2014. Time-shift normalization and listener profiling in a large dataset of music listening histories. Fourth Annual Seminar on Cognitively Based Music Informatics Research.
Vigliensoni, G., and I. Fujinaga. 2014. Identifying time zones in a large dataset of music listening logs. In Proceedings of the International Workshop on Social Media Retrieval and Analysis.
2013
Vigliensoni, G., J. A. Burgoyne, and I. Fujinaga. 2013. MusicBrainz for the world: The Chilean experience. In Proceedings of the International Society for Music Information Retrieval Conference.
Vigliensoni, G., G. Burlet, and I. Fujinaga. 2013. Optical measure recognition in common music notation. In Proceedings of the International Society for Music Information Retrieval Conference.
2012
Vigliensoni, G., and M. Wanderley. 2012. A quantitative comparison of position trackers for the development of a touch-less musical interface. In Proceedings of the New Interfaces for Musical Expression Conference.
Hankinson, A., J. A. Burgoyne, G. Vigliensoni, A. Porter, J. Thompson, W. Liu, R. Chiu, and I. Fujinaga. 2012. Digital document image retrieval using optical music recognition. In Proceedings of the International Society for Music Information Retrieval Conference.
Hankinson, A., J. A. Burgoyne, G. Vigliensoni, and I. Fujinaga. 2012. Creating a large-scale searchable digital collection from printed music materials. In Proceedings of Advances in Music Information Research.
2011
Vigliensoni, G. 2011. Touch-less gestural control of concatenative sound synthesis. Master's thesis, McGill University.
Vigliensoni, G., J. A. Burgoyne, A. Hankinson, and I. Fujinaga. 2011. Automatic pitch detection in printed square notation. In Proceedings of the International Society for Music Information Retrieval Conference.
Hankinson, A., G. Vigliensoni, J. A. Burgoyne, and I. Fujinaga. 2011. New tools for optical chant recognition. International Association of Music Libraries Conference. Dublin, Ireland.
Burgoyne, J. A., R. Chiu, G. Vigliensoni, A. Hankinson, J. Cumming, and I. Fujinaga. 2011. Creating a fully-searchable edition of the Liber Usualis. Medieval and Renaissance Music Conference. Barcelona, Spain.
2010
Vigliensoni, G., and M. Wanderley. 2010. Soundcatcher: Explorations in audio-looping and time-freezing using an open-air gestural controller. In Proceedings of the International Computer Music Conference.
McKay, C., J. A. Burgoyne, J. Hockman, J. B. L. Smith, G. Vigliensoni, and I. Fujinaga. 2010. Evaluating the performance of lyrical features relative to and in combination with audio, symbolic and cultural features. In Proceedings of the International Society for Music Information Retrieval Conference.
Vigliensoni, G., C. McKay, and I. Fujinaga. 2010. Using jWebMiner 2.0 to improve music classification performance by combining different types of features mined from the web. In Proceedings of the International Society for Music Information Retrieval Conference.