Moods and activities in music

The data consist of annotations of music in terms of the moods music may express and the activities that music might fit. The data structures correspond to three kinds of annotation tasks: 1) annotations of nine activities that fit a wide range of moods related to music, 2) nominations of music tracks that best fit a particular mood, together with annotations of the activities that fit them, and 3) annotations of these nominated tracks in terms of moods and activities. Users are anonymised, but their background information (gender, music preferences, age, etc.) is also available. The dataset consists of a relational database whose tables are linked by common ids (tracks, users, activities, moods, genres, expertise, language skill); a minimal illustration of joining these tables is given after the abstract below.

Current approaches to the tagging of music in online databases rely predominantly on music genre and artist name, and music tags are often ambiguous and inexact. Yet arguably the most salient feature of musical experiences is emotion. The few attempts so far undertaken to tag music for mood or emotion lack a scientific foundation in emotion research. The current project proposes to incorporate recent research on music-evoked emotion into the growing number of online musical databases and catalogues, notably the Geneva Emotional Music Scale (GEMS), a rating measure for describing the emotional effects of music recently developed by our group. Specifically, the aim here is to develop the GEMS into an innovative conceptual and technical tool for tagging online musical content for emotion. To this end, three studies are proposed. In Study 1, we will examine whether the GEMS labels and their grouping hold up against a much wider range of musical genres than those originally used for its development. In Study 2, we will use advanced data reduction techniques to select the most recurrent and important labels for describing music-evoked emotion. In Study 3, we will examine the added benefit of the new GEMS compared to conventional approaches to the tagging of music.

The anticipated impact of the findings is threefold. First, the research described here will advance our understanding of the nature and structure of emotions evoked by music; developing a valid model of music-evoked emotion is crucial for meaningful research in the social sciences and the neurosciences. Second, music information organization and retrieval can benefit from a scientifically sound and parsimonious taxonomy for describing the emotional effects of music: searches of online music databases need no longer be confined to genre or artist, but can also incorporate emotion as a key experiential dimension of music. Third, a valid tagging scheme for emotion can assist both researchers and professionals in choosing music to induce specific emotions. For example, psychologists, behavioural economists, and neuroscientists often need to induce emotion in their experiments to understand how behaviour or performance is modulated by emotion. Music is an obvious choice for emotion induction in controlled settings because it is a universal language that lends itself to comparisons across cultures and because it is ethically unproblematic.
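The sketch below is a minimal, illustrative example of resolving the id-based links between tables into a single flat annotation table. The file names and column names (users.csv, tracks.csv, annotations.csv, moods.csv, activities.csv, user_id, track_id, mood_id, activity_id, mood_label, activity_label, gender) are assumptions for illustration only; the actual files and fields shipped with the deposited database may differ.

```python
import pandas as pd

# Hypothetical file and column names -- adjust to the files in the deposited dataset.
users = pd.read_csv("users.csv")              # user_id, gender, age, ...
tracks = pd.read_csv("tracks.csv")            # track_id, title, artist, ...
annotations = pd.read_csv("annotations.csv")  # user_id, track_id, mood_id, activity_id
moods = pd.read_csv("moods.csv")              # mood_id, mood_label
activities = pd.read_csv("activities.csv")    # activity_id, activity_label

# Resolve the common-id links into one flat table of labelled annotations.
flat = (
    annotations
    .merge(users, on="user_id")
    .merge(tracks, on="track_id")
    .merge(moods, on="mood_id")
    .merge(activities, on="activity_id")
)

# Example query: how often each mood label was assigned, split by annotator gender.
print(flat.groupby(["mood_label", "gender"]).size())
```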

Data were collected using a crowdsourcing method executed on the CrowdFlower platform. Participants provided background information and then completed as many rounds of annotation tasks as they wished. Each round contained three sub-tasks: (1) mood and activity tagging, (2) track search and tagging, and (3) tagging of nominated tracks for moods and activities. These were designed to map various moods and activities related to music. A description of the questions and the types of information obtained is given in the accompanying documentation (Questionnaire_Form.docx and Information_sheet.docx).
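As a rough illustration of how the round and sub-task structure might be navigated, the sketch below splits annotation records by sub-task and counts the rounds involved. The column names (task_type, round_id) and the integer coding of the three sub-tasks are assumptions, not documented field names from the deposited files.

```python
import pandas as pd

# Hypothetical columns: each annotation row is assumed to record the round it
# belongs to and which of the three sub-tasks produced it.
annotations = pd.read_csv("annotations.csv")

SUBTASKS = {
    1: "mood and activity tagging",
    2: "track search and tagging",
    3: "track tagging for moods and activities",
}

for task_id, label in SUBTASKS.items():
    subset = annotations[annotations["task_type"] == task_id]
    print(f"Sub-task {task_id} ({label}): {len(subset)} records, "
          f"{subset['round_id'].nunique()} rounds")
```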

Identifier
DOI https://doi.org/10.5255/UKDA-SN-852024
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=294c7e351c8300199ca0c0743d5236488d00e25ba516a1b5504c1c923b32d67e
Provenance
Creator Eerola, T, Durham University, UK; Saari, P, Durham University, UK
Publisher UK Data Service
Publication Year 2015
Funding Reference ESRC
Rights Tuomas Eerola, Durham University, UK; Pasi Saari, Durham University, UK. The Data Collection is available to any user without the requirement for registration for download/access.
OpenAccess true
Representation
Resource Type Numeric
Discipline Fine Arts, Music, Theatre and Media Studies; Humanities; Music; Psychology; Social and Behavioural Sciences
Spatial Coverage Albania; Algeria; Antigua and Barbuda; Argentina; Australia; Austria; Bangladesh; Belgium; Bolivia; Brazil; Bulgaria; Canada; Chile; Colombia; Croatia; Cyprus; Czech Republic; Dominican Republic; Ecuador; Egypt; El Salvador; Estonia; Finland; France; Georgia; Germany (October 1990-); Greece; Hong Kong; Hungary; India; Indonesia; Ireland; Israel; Italy; Jamaica; Jordan; Kazakhstan; Kyrgyzstan; Latvia; Lithuania; Luxembourg; Macao; Former Yugoslav Republic of Macedonia; Madagascar; Malta; Mauritius; Mexico; Moldova; Morocco; Namibia; Nepal; Netherlands; New Zealand; Nicaragua; Nigeria; Norway; Pakistan; Panama; Paraguay; Peru; Philippines; Poland; Portugal; Puerto Rico; Romania; Russia; Saudi Arabia; Serbia; Singapore; Slovakia; Slovenia; South Africa; Spain; Sri Lanka; Sweden; Switzerland; Taiwan; Tanzania; Thailand; Trinidad and Tobago; Turkey; Uganda; Ukraine; United Kingdom; United States; Uruguay; Venezuela; Vietnam