I am an associate professor at FusioncompLab with Prof. Morishima at the Faculty of Library, Information and Media Science, University of Tsukuba, Japan. Prior to this, I was a postdoctoral researcher at the Laboratory for Sound and People with Computing, University of Tsukuba, working with Prof. Terasawa and Prof. Hiraga. I obtained my PhD in Computer Science from Keio University, Kanagawa, Japan, in 2013. In 2019–2020, I was a visiting researcher on the SLIDE team at the Laboratoire d’Informatique de Grenoble, CNRS, France, working with Dr. Amer-Yahia on human factors in large-scale data science.
|Apr. 2021||My grant proposal "A crowdsourcing framework realizing cognitive apprenticeship between human and AI" was accepted for JSPS Grant-in-Aid for Scientific Research (B).|
|Mar. 2021||Our paper "The influence of the content and its arrangement of cooking recipes on the burden on the writer and the utility of the reader" won the Research Award at the 26th JSAI SIGAM.|
|May 2020||My grant proposal "A study of worker-centric and platform-centric fair online task assignment" was accepted for JST AIP Challenge.|
|Dec. 2019||Our paper "Aritomo, D., Watanabe, C., Matsubara, M. and Morishima, A.: A Privacy-Preserving Similarity Search Scheme over Encrypted Word Embeddings" won Best Paper Award at iiWAS2019. Congrats, Aritomo-kun!|
|Sep. 2019||I will be staying at CNRS/University of Grenoble Alpes in France as a visiting researcher until March 2020.|
|Sep. 2019||The TOKYO2020 'Make The Beat' project was announced. This project is a joint collaboration with the Tokyo Organising Committee of the Olympic and Paralympic Games, Intel, and the University of Tsukuba.|
|Apr. 2019||Our study on "Skill Improvement through Crowds and AI Interaction" won a JST AIP Network Lab Award.|
I am broadly interested in human computation, cognitive psychology, learning science, assistive technology, and cognitive musicology. The basic principle common to all of these fields is to help people utilize their inherent cognitive abilities (e.g., learning, awareness, and top-down & bottom-up perception). Recently, I have been working on human-factor issues arising in online labor markets, specifically methodologies to enhance workers' learning and their awareness of their own cognitive biases. Prior to those studies, I focused on perceptualization (sonification, visualization, symbolization) to support people's metacognition for skill improvement. I also studied the mental representation of performing and listening to music. Topics are as follows (clicking a topic shows details):
Working online as learning
How can we balance worker-centric and platform-centric demands? We formalize a worker skill improvement model and devise task assignment strategies that consider both task outcomes and worker learning.
Collaborator: CNRS, University of Philippines (JST CREST, AIP Challenge)
Dynamic microtasks for measuring cognitive biases
Cognitive biases have become an increasingly serious issue in social networks and online job markets. Toward better collective intelligence, we explore how to capture workers' cognitive biases and how to inform workers of them.
Collaborator: Osaka Prefecture University (JST CREST, AIP Challenge)
Large scale platform for Kansei data science
We are developing a large-scale platform for Kansei data science. We collect affective responses from crowds and devise task designs that improve the reproducibility of the results.
Collaborator: TOCOG (JST MIRAI, CREST)
Embodied Knowledge of Musicians
We investigate the cognitive and physical abilities of people involved in music, from music lovers to professional performers, for example in score reading, selective listening, instrumental performance, and improvisation. Our studies cover a wide range of topics closely related to the perceptual and motor systems, such as kinematic analysis using biosignals and motion capture, music interpretation analysis using introspective dictation, and the development of HCI technology to support musical activities.
Collaborator: UEC, Kunitachi College of Music, YAMAHA (JST MIRAI)
Cognitive music theory based on tree structure
Since human speech is essentially generated under a context-free grammar, our brains have an intrinsic push-down-stack capability: the words we hear are temporarily stored, allowing us to predict the words that will follow. It is not surprising that we use this capability when listening to music, too. The Generative Theory of Tonal Music (GTTM) is a cognitive music theory that represents the listening process as a tree structure. We formalize the theory further for computational modelling and examine its cognitive reality.
Collaborator: JAIST (JSPS KAKENHI)
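As a loose sketch of the push-down-stack idea above (the grammar, symbols, and function name here are hypothetical toys for illustration, not the GTTM formalization itself):

```python
# Toy illustration: an explicit stack recognizing chord sequences under a
# tiny context-free "phrase grammar" (hypothetical rules, not GTTM).
GRAMMAR = {
    "Phrase": [["Tension", "Resolution"]],
    "Tension": [["T", "S", "D"], ["T", "D"]],  # tonic, subdominant, dominant
    "Resolution": [["T"]],
}

def parses(symbols, start="Phrase"):
    """Return True if the symbol sequence is derivable from the grammar,
    using an explicit stack (depth-first expansion of nonterminals)."""
    agenda = [([start], 0)]  # (stack of pending symbols, position in input)
    while agenda:
        stack, pos = agenda.pop()
        if not stack:
            if pos == len(symbols):
                return True   # stack emptied exactly at the end of input
            continue
        top = stack[-1]
        if top in GRAMMAR:    # nonterminal: expand each alternative rule
            for rule in GRAMMAR[top]:
                agenda.append((stack[:-1] + list(reversed(rule)), pos))
        elif pos < len(symbols) and symbols[pos] == top:
            agenda.append((stack[:-1], pos + 1))  # terminal: match and advance
    return False

print(parses(["T", "S", "D", "T"]))  # True: tension followed by resolution
print(parses(["T", "D", "S"]))       # False: no resolving tonic
```

The stack of pending grammar symbols plays the role of the push-down store: partially heard phrases remain on the stack, which is what licenses the listener's prediction of what must come next.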
Sonification design for awareness support
Utilizing the human ability of auditory scene analysis, we aim to design sonification methods that improve people's awareness. For instance, we have been developing auditory EMG biofeedback for body-movement learning and an interactive sonification tool for seismic data exploration based on auditory gestalt formation. We also seek effective multimedia expressions for visually impaired people by taking advantage of their perceptual characteristics.
Collaborator: AIST (JSPS KAKENHI)
Hearing loss and Music
Our field study has shown that deaf and hard-of-hearing people enjoy music using their residual hearing [Matsubara et al., 2014]. Many of them are active listeners: they often go to karaoke and watch YouTube videos to practice singing. We study their cognitive process of music listening and explore mechanisms to improve their listening ability so they can better enjoy music.
Collaborator: Tsukuba University of Technology, UC San Diego, KTH Royal Institute of Technology (JSPS KAKENHI)
Because of this wide range of topics, I have experience with various approaches, including large-scale psychological experiments, computational modelling, formalization, constructivism, user experience design for practitioners, second-person action research, and autoethnography.
Matsubara, M., Matsuda, Y., Kuzumi, R., Koizumi, M. and Morishima, A.
Collecting and Organizing Citizen Opinions: A Dynamic Microtask Approach and its Evaluation
Matsubara, M., Kobayashi, M. and Morishima, A.
A Learning Effect by Presenting Machine Prediction as a Reference Answer in Self-correction
HMData, pp. 3521–3527, 2018
Matsubara, M., Iguchi, M., Oba, T., Kadone, H., Terasawa, H. and Suzuki, K.
Wearable Auditory Biofeedback Device for Blind and Sighted Individuals
IEEE Multimedia, 2015
Sonification, Metacognition, Assistive Technology
Matsubara, M., Uchide, T. and Morimoto, Y.
Auditory Gestalt Formation for Exploring Dynamic Triggering Earthquakes
CMMR, pp. 983–987, 2019
Sonification, Gestalt Formation
- A Constructive Approach to Interactive Learning Facilitation in Music Cognition. Masaki Matsubara, Keio University, Feb. 2013
Tango Quinteto Ibaraki
Nakata Chiya and SinNomble