The implementation of the paper, ``Speech Emotion Recognition with Fusion of Acoustic- and Linguistic-Feature-Based Decisions'' (pretraining-based part, acoustic features)

APSIPA-SER

This code implements Speech Emotion Recognition (SER) with acoustic features. The network model is a Convolutional Neural Network (CNN) + Bidirectional Long Short-Term Memory (BLSTM) + self-attention.
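The self-attention stage of such a model typically pools the BLSTM's per-frame outputs into a single utterance-level vector before classification. The sketch below is a minimal, hypothetical illustration of additive attention pooling in NumPy; the function name, dimensions, and parameters are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def attention_pool(h, w, b):
    """Additive self-attention pooling over time (illustrative sketch).

    h: (time, dim) array of per-frame BLSTM outputs
    w: (dim,) attention score weights, b: scalar bias
    (w and b are hypothetical learned parameters)
    """
    scores = h @ w + b                    # one score per frame, shape (time,)
    alpha = np.exp(scores - scores.max()) # numerically stable softmax
    alpha /= alpha.sum()                  # attention weights sum to 1
    return alpha @ h                      # weighted sum -> (dim,) utterance vector

rng = np.random.default_rng(0)
h = rng.standard_normal((100, 128))       # e.g. 100 frames, 128-dim features
pooled = attention_pool(h, rng.standard_normal(128), 0.0)
print(pooled.shape)  # (128,)
```

The pooled vector would then feed a small classifier head that outputs emotion-class logits.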

How to use

  1. Edit preprocessing.py and preprocess your files:
     python3 preprocessing.py
  2. Edit hyper_param.yaml
  3. Run main.py:
     python3 main.py

Paper

Ryotaro Nagase, Takahiro Fukumori and Yoichi Yamashita: ``Speech Emotion Recognition with Fusion of Acoustic- and Linguistic-Feature-Based Decisions,'' Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 725--730, 2021.
