Automated Soundtrack Generation for Fiction Books Backed by Lövheim’s Cube Emotional Model
URI (for links/citations):
https://link.springer.com/chapter/10.1007%2F978-3-030-05594-3_13
https://elib.sfu-kras.ru/handle/2311/129764
Authors:
Kalinin, Alexander
Kolmogorova, Anastasia
Corporate author:
Institute of Philology and Language Communication
Department of Romance Languages and Applied Linguistics
Date:
2019
Journal:
Communications in Computer and Information Science
Journal quartile in Scopus:
Q3
Bibliographic description:
Kalinin, Alexander. Automated Soundtrack Generation for Fiction Books Backed by Lövheim’s Cube Emotional Model [Text] / Alexander Kalinin, Anastasia Kolmogorova // Communications in Computer and Information Science. — 2019. — Vol. 943. — P. 161-168
Abstract:
One of the main tasks of any work of art is to convey the emotion conceived by the author to the recipient. When several modalities are used, a synergistic effect occurs, making it more likely that the target emotional state is reached. Reading involves mostly visual perception; nevertheless, it can be supplemented with an audio modality by means of a soundtrack: specially selected music that corresponds to the emotional state of a text fragment.
As the base model for representing emotional states, we have selected the physiologically motivated Lövheim cube model, which covers eight emotional states instead of the two (positive and negative) usually used in sentiment analysis.
This article describes the concept of selecting music that matches the “mood” of a text extract by mapping text emotion labels to tags in the LastFM API and fetching music data to play, as well as an experimental validation of this approach.
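For illustration only (not part of the record itself): a minimal Python sketch of the mapping-and-fetch step described in the abstract, assuming the eight Lövheim emotion labels are mapped to illustrative Last.fm tags and top tracks are fetched via the public `tag.gettoptracks` method. The tag choices, API key placeholder, and function names are assumptions for this sketch, not the authors' actual mapping.

```python
import requests

# The eight emotional states of Lövheim's cube, mapped to *illustrative*
# Last.fm tags. The actual label-to-tag mapping used by the authors is
# not given in this record.
EMOTION_TO_TAG = {
    "enjoyment/joy": "happy",
    "interest/excitement": "upbeat",
    "surprise": "quirky",
    "fear/terror": "dark ambient",
    "anger/rage": "aggressive",
    "distress/anguish": "melancholic",
    "contempt/disgust": "dissonant",
    "shame/humiliation": "sad",
}

API_ROOT = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder; obtain a key from last.fm/api


def tracks_for_emotion(emotion: str, limit: int = 10) -> list[str]:
    """Fetch top tracks for the Last.fm tag associated with an emotion label."""
    tag = EMOTION_TO_TAG[emotion]
    resp = requests.get(
        API_ROOT,
        params={
            "method": "tag.gettoptracks",
            "tag": tag,
            "limit": limit,
            "api_key": API_KEY,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    tracks = resp.json()["tracks"]["track"]
    return [f'{t["artist"]["name"]} - {t["name"]}' for t in tracks]


if __name__ == "__main__":
    # E.g. a text fragment classified as "fear/terror" would be paired
    # with tracks tagged "dark ambient".
    for title in tracks_for_emotion("fear/terror", limit=5):
        print(title)
```

In such a pipeline, the emotion label for each text fragment would come from an upstream classifier trained on the eight Lövheim categories; the fetched tracks would then be queued as the soundtrack for that fragment.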