I am currently a Ph.D. student in Electrical Engineering at Drexel University. The overall focus of my work is to develop methods for quantifying the specific attributes we use to express ourselves through music. My work spans a wide range of topics, including audio signal processing, machine learning, and human-computer interaction.
I am also a musician and composer. Throughout my youth and well into adulthood, I both trained and gained performance experience as a drummer, classical percussionist, and marching percussionist. In addition to percussion, I learned to play the piano and studied music composition. I try to find a way to perform and create music every day.
Personal Website: mattprockup.com
Education

Ph.D. in Electrical Engineering, Drexel University (Anticipated Spring 2016)
M.S. in Electrical Engineering, Drexel University (2011)
B.S. in Electrical Engineering, Drexel University (2011)
Minor in Music Theory/Composition, Drexel University (2011)
Modeling Genre with Musical Attributes
Genre provides one of the most convenient groupings of music, but it is often regarded as poorly defined and largely subjective. In this work, we seek to answer whether musical genres can be modeled objectively via a combination of musical attributes, and whether audio features can mimic the behavior of those attributes. This work is done in collaboration with Pandora, and evaluation is performed using Pandora's Music Genome Project® (MGP).
Modeling Rhythmic Attributes in Music
Musical meter and attributes of rhythmic feel such as swing, syncopation, and danceability are crucial in defining musical style. In this work, we propose a number of tempo-invariant audio features for modeling meter and rhythmic feel. This work is done in collaboration with Pandora, and evaluation is performed using Pandora's Music Genome Project® (MGP).
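To give a flavor of what "tempo-invariant" means here, the sketch below shows one common way to build such a descriptor (a hypothetical illustration, not the specific features from the papers): an onset-strength envelope is resampled onto a beat-relative grid, so the resulting pattern describes rhythm per beat rather than per second and is comparable across songs with different tempi.

```python
import numpy as np

def tempo_invariant_rhythm_feature(onset_env, frame_rate, tempo_bpm,
                                   bins_per_beat=16, n_beats=4):
    """Resample an onset-strength envelope onto a beat-relative grid.

    onset_env:  1-D onset-strength envelope (one value per analysis frame).
    frame_rate: envelope frames per second.
    tempo_bpm:  estimated tempo in beats per minute.

    Hypothetical sketch: sampling the envelope every 1/bins_per_beat of a
    beat removes the dependence on absolute tempo.
    """
    frames_per_beat = 60.0 / tempo_bpm * frame_rate
    n_bins = bins_per_beat * n_beats
    # Frame positions of the beat-relative sample grid.
    grid = np.arange(n_bins) * frames_per_beat / bins_per_beat
    pattern = np.interp(grid, np.arange(len(onset_env)), onset_env)
    # Normalize so overall loudness does not dominate the descriptor.
    norm = np.linalg.norm(pattern)
    return pattern / norm if norm > 0 else pattern
```

Two performances of the same pattern at different tempi (e.g., quarter-note pulses at 120 and 240 BPM) map to the same beat-relative descriptor, which is the property that makes rhythm comparison at scale tractable.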
Understanding Expressive Percussion

In this work, we present a system that seeks to classify different expressive articulation techniques independent of the percussion instrument. We also outline a newly recorded dataset that encompasses a vast array of percussion performance expressions on a standard four-piece drum kit.
Orchestral Performance Companion

We have developed a system that guides users through a live orchestral performance in real time using a handheld application (an iPhone app). Using audio features, we align the live performance audio with a previously annotated reference recording. The aligned position is transmitted to users' handheld devices, and pre-annotated information about the piece is displayed in sync. [video] [official PhilOrch page]
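The core of audio-to-score alignment is matching two feature sequences in time. The sketch below shows plain offline dynamic time warping (DTW) over per-frame feature vectors; it is a simplified stand-in, since a real-time system like the one described above would use an online DTW variant with a bounded search window rather than the full cost matrix.

```python
import numpy as np

def dtw_align(ref, live):
    """Align a live feature sequence to a reference with dynamic time warping.

    ref, live: arrays of shape (n_frames, n_features), e.g. chroma vectors.
    Returns the warping path as a list of (ref_index, live_index) pairs.
    Illustrative offline sketch only.
    """
    n, m = len(ref), len(live)
    # Pairwise cosine distances between reference and live frames.
    rn = ref / (np.linalg.norm(ref, axis=1, keepdims=True) + 1e-9)
    ln = live / (np.linalg.norm(live, axis=1, keepdims=True) + 1e-9)
    cost = 1.0 - rn @ ln.T
    # Accumulated cost with the standard step set {(1,1), (1,0), (0,1)}.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # Backtrack from the end to recover the optimal path.
    path = []
    i, j = n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

In the live setting, the most recent point on the path is the estimated score position, which is what gets broadcast to the audience's devices.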
Publications

Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Celma, O., Kim, Y., "Modeling Genre with the Music Genome Project: Comparing Human-Labeled Attributes and Audio Features." International Society for Music Information Retrieval Conference, Malaga, Spain, 2015. [PDF]
Prockup, M., Asman, A., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm Using Tree Ensembles and the Music Genome Project." Machine Learning for Music Discovery Workshop at the 32nd International Conference on Machine Learning, Lille, France, 2015. [PDF]
Prockup, M., Ehmann, A., Gouyon, F., Schmidt, E., Kim, Y., "Modeling Rhythm at Scale with the Music Genome Project." IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, New York, 2015. [PDF]
Prockup, M., Scott, J., Kim, Y., "Representing Musical Patterns via the Rhythmic Style Histogram Feature." Proceedings of the ACM International Conference on Multimedia, Orlando, Florida, 2014. [PDF]
Prockup, M., Schmidt, E., Scott, J., Kim, Y., "Toward Understanding Expressive Percussion Through Content-Based Analysis." Proceedings of the 14th International Society for Music Information Retrieval Conference, Curitiba, Brazil, 2013. [PDF]
Schmidt, E. M., Prockup, M., Scott, J., Dolhansky, B., Morton, B. G., Kim, Y. E., "Analyzing the Perceptual Salience of Audio Features for Musical Emotion Recognition." Computer Music Modeling and Retrieval: Music and Emotions, 2013.
Prockup, M., Grunberg, D., Hrybyk, A., Kim, Y. E., "Orchestral Performance Companion: Using Real-Time Audio to Score Alignment." IEEE MultiMedia, vol. 20, no. 2, pp. 52-60, April-June 2013.
Schmidt, E. M., Prockup, M., Scott, J., Dolhansky, B., Morton, B., Kim, Y. E., "Relating Perceptual and Feature Space Invariances in Music Emotion Recognition." Proceedings of the International Symposium on Computer Music Modeling and Retrieval, London, U.K., 2012. (Best Student Paper) [PDF] [Oral Presentation]
Scott, J., Schmidt, E. M., Prockup, M., Morton, B., Kim, Y. E., "Predicting Time-Varying Musical Emotion Distributions from Multi-Track Audio." Proceedings of the International Symposium on Computer Music Modeling and Retrieval, London, U.K., 2012. [PDF]
Batula, A. M., Morton, B. G., Migneco, R., Prockup, M., Schmidt, E. M., Grunberg, D. K., Kim, Y. E., Fontecchio, A. K., "Music Technology as an Introduction to STEM." Proceedings of the 2012 ASEE Annual Conference, San Antonio, Texas, 2012. [PDF]
Scott, J., Dolhansky, B., Prockup, M., McPherson, A., Kim, Y. E., "New Physical and Digital Interfaces for Music Creation and Expression." Proceedings of the 2012 Music, Mind and Invention Workshop, Ewing, New Jersey, 2012. [PDF]
Prockup, M., Batula, A., Morton, B., Kim, Y. E., "Education Through Music Technology." Proceedings of the 2012 Music, Mind and Invention Workshop, Ewing, New Jersey, 2012.
Scott, J., Prockup, M., Schmidt, E. M., Kim, Y. E., "Automatic Multi-Track Mixing Using Linear Dynamical Systems." Proceedings of the 8th Sound and Music Computing Conference, Padova, Italy, 2011. [PDF]
Kim, Y. E., Batula, A. M., Migneco, R., Richardson, P., Dolhansky, B., Grunberg, D., Morton, B., Prockup, M., Schmidt, E. M., Scott, J., "Teaching STEM Concepts Through Music Technology and DSP." Proceedings of the 14th IEEE Digital Signal Processing Workshop and 6th IEEE Signal Processing Education Workshop, Sedona, Arizona, 2011. [PDF]