

Shuqi Dai
Computer Science Department
Carnegie Mellon University
Hi, I am a fifth-year Ph.D. student in the Computer Science Department at Carnegie Mellon University, advised by Prof. Roger Dannenberg. My research lies at the intersection of Computer Music, Machine Learning, and Human-Computer Interaction, with projects in automatic music generation, music understanding & analysis, expressive performance control, human-computer interactive music performance, and Chinese music technology. Before joining CMU, I received my B.S. in Computer Science from Peking University in China in July 2018.
I am also a professional Pipa (Chinese traditional instrument) player with 20 years of performance experience, and a mezzo-soprano with seven years of formal training in Western opera singing. During my undergraduate studies, I served in the PKU Chinese Music Institute (CMI), the PKU Musical Club, and the Chorus of PKU Hall.


Deep Music Generation via Music Frameworks
Deep Learning, Hierarchical Music Structure, Controllability
Using Music Frameworks (a hierarchical music structure representation) and new musical features, we combine music domain knowledge with deep learning and factor music generation into sub-problems, which allows simpler models, requires less data, and achieves high musicality.

Automatic Analysis of Hierarchical Music Structure
Music Similarity, Segmentation, Repetition
Introduces new algorithms for identifying a two-level hierarchical music structure based on repetition. The automatically detected repetition structures reveal significant interactions between structure and chord progression, melody, and rhythm, with different levels of the hierarchy interacting in different ways.
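
To give a flavor of repetition-based structure analysis, here is a minimal, generic sketch (not the algorithm from the paper): it builds a self-similarity matrix over a beat-level pitch sequence and reports long off-diagonal runs of matches as candidate repeated sections. The toy melody and the `min_len` threshold are illustrative assumptions.

```python
# Generic illustration of repetition-based structure analysis (not the
# paper's algorithm). Repeated segments show up as runs of matches along
# off-diagonals of a self-similarity matrix.
import numpy as np

def repeated_segments(pitches, min_len=4):
    """Return (start_a, start_b, length) for off-diagonal match runs of at least min_len."""
    n = len(pitches)
    sim = np.equal.outer(pitches, pitches)   # boolean self-similarity matrix
    found = []
    for lag in range(1, n):                  # each off-diagonal corresponds to one lag
        run = 0
        for i in range(n - lag):
            if sim[i, i + lag]:
                run += 1
            else:
                if run >= min_len:
                    found.append((i - run, i - run + lag, run))
                run = 0
        if run >= min_len:                   # run reaching the end of the diagonal
            i = n - lag
            found.append((i - run, i - run + lag, run))
    return found

# Toy melody with an A A B A phrase layout (MIDI pitch numbers).
melody = [60, 62, 64, 65, 60, 62, 64, 65, 67, 69, 67, 65, 60, 62, 64, 65]
print(repeated_segments(melody))   # e.g. (0, 4, 4) and (0, 12, 4) recover the A repeats
```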

Personalized Stylistic Music Generation
Machine Learning, Music Domain Knowledge, Imitation, Repetition Structure
Designed a stylistic music generation system that captures structure, melody, chord progression, and bass styles from one or a few example pieces and imitates those styles in a new piece using statistical machine learning models.

Mobile Orchestra (v1.0 Ringtone)
Personal Hackathon, SuperCollider, JavaScript, Open Sound Control
Developed a mobile web app with SuperCollider that lets people use mobile gestures (speed, range, direction) to control melodies (such as ringtones) in real time as if they were playing musical instruments, adjusting pitch, volume, tempo, accompaniment, special effects, etc., and allowing a group of people to form a ringtone orchestra.
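
The core idea is mapping gesture readings to musical controls sent over Open Sound Control. The sketch below stands in for that pipeline: the real project used a JavaScript web app talking to SuperCollider, while here a Python client (python-osc) plays the role of the phone, and the OSC addresses and mapping functions are made-up placeholders that a SuperCollider `OSCdef` on the receiving end would have to handle.

```python
# Minimal sketch of gesture-to-sound control over OSC (illustration only).
# Assumes python-osc is installed and sclang is listening on its default port.
from pythonosc.udp_client import SimpleUDPClient

SCLANG_PORT = 57120  # default sclang OSC port

def send_gesture(client, speed, tilt, direction):
    """Map raw gesture readings to musical controls and send them as OSC messages."""
    tempo  = 60 + 120 * min(max(speed, 0.0), 1.0)   # faster swipe -> faster tempo
    volume = min(max(abs(tilt), 0.0), 1.0)          # larger tilt -> louder
    bend   = 1.0 if direction >= 0 else -1.0        # swipe direction -> bend up or down
    client.send_message("/ringtone/tempo", tempo)   # placeholder OSC addresses
    client.send_message("/ringtone/volume", volume)
    client.send_message("/ringtone/bend", bend)

if __name__ == "__main__":
    client = SimpleUDPClient("127.0.0.1", SCLANG_PORT)
    send_gesture(client, speed=0.7, tilt=0.3, direction=-1)
```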
Selected Publications
- Shuqi Dai, Zeyu Jin, Celso Gomes, Roger B. Dannenberg, "Controllable Deep Melody Generation via Hierarchical Music Structure Representation". In Proceedings of the 22nd International Society for Music Information Retrieval Conference (ISMIR), Online, 2021. [paper] [video] [poster] [demo]
- Shuqi Dai, Xichu Ma, Ye Wang, Roger B. Dannenberg, "Personalized Popular Music Generation Using Imitation and Structure". arXiv preprint arXiv:2105.04709, 2021. [paper] [demo]
- Shuqi Dai, Huan Zhang, Roger B. Dannenberg, "Automatic Analysis and Influence of Hierarchical Structure on Melody, Rhythm and Harmony in Popular Music". In Proceedings of the 2020 Joint Conference on AI Music Creativity and International Workshop on Music Metacreation (CSMC+MUME), Stockholm, Sweden, Oct 2020. [paper] [video] [code]
- Z. Wang, K. Chen, J. Jiang, Y. Zhang, M. Xu, Shuqi Dai, X. Gu, G. Xia, "POP909: A Pop-song Dataset for Music Arrangement Generation". In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), Montréal, Canada, 2020. [paper]
- Gus G. Xia, Shuqi Dai, "Music Style Transfer Issues: A Position Paper". In Proceedings of the 6th International Workshop on Music Metacreation (MUME), Salamanca, Spain, June 2018. [paper]
- Shuqi Dai, Gus G. Xia, "Computational Models for Common Pipa Techniques". Best student paper, the 5th National Conference on Sound and Music Technology, Oct 2017.