| Title: | OS16-3 Facial Expression Synthesis Using Vowel Recognition for Synthesized Speech |
|---|---|
| Publication: | ICAROB2020 |
| Volume: | 25 |
| Pages: | 398-402 |
| ISSN: | 2188-7829 |
| DOI: | 10.5954/ICAROB.2020.OS16-3 |
| Author(s): | Taro Asada, Ruka Adachi, Syuhei Takada, Yasunari Yoshitomi, Masayoshi Tabuse |
| Publication Date: | January 13, 2020 |
| Keywords: | MMDAgent, Speech recognition, Vowel recognition, Speech synthesis |
| Abstract: | Herein, we report on the development of a system for agent facial expression generation that uses vowel recognition when generating synthesized speech. The speech is recognized using the Julius high-performance, two-pass large vocabulary continuous speech recognition decoder software system, after which the agent's facial expression is synthesized using preset parameters that depend on each vowel. The agent was created using MikuMikuDanceAgent (MMDAgent), which is a freeware animation program that allows users to create and animate movies with agents. |
| PDF File: | https://alife-robotics.co.jp/members2020/icarob/data/html/data/OS/OS16/OS16-3.pdf |
| Copyright: | © The authors. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (CC BY-NC 4.0), which permits non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. See https://creativecommons.org/licenses/by-nc/4.0/ for details. |
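
The abstract above describes a pipeline in which Julius recognizes the vowels in the synthesized speech and the MMDAgent character's facial expression is then driven by preset parameters for each vowel. The sketch below illustrates such a vowel-to-parameter mapping in Python; it is not the authors' implementation, and the parameter names, preset values, and timing format are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation): map recognized vowels to
# preset facial-expression parameters, as outlined in the abstract.
# All parameter names and numeric values below are hypothetical.

from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical preset parameters per Japanese vowel, each in the range 0.0-1.0.
VOWEL_PRESETS: Dict[str, Dict[str, float]] = {
    "a": {"mouth_open": 0.9, "mouth_wide": 0.4, "brow_raise": 0.2},
    "i": {"mouth_open": 0.2, "mouth_wide": 0.9, "brow_raise": 0.1},
    "u": {"mouth_open": 0.3, "mouth_wide": 0.1, "brow_raise": 0.0},
    "e": {"mouth_open": 0.5, "mouth_wide": 0.7, "brow_raise": 0.1},
    "o": {"mouth_open": 0.8, "mouth_wide": 0.2, "brow_raise": 0.2},
}


@dataclass
class ExpressionFrame:
    """One facial-expression keyframe derived from a recognized vowel."""
    time_sec: float
    params: Dict[str, float]


def vowels_to_expression_frames(
    recognized: List[Tuple[float, str]],  # (start_time_sec, vowel) pairs from a recognizer
) -> List[ExpressionFrame]:
    """Convert a timed vowel sequence into expression keyframes using the presets."""
    frames: List[ExpressionFrame] = []
    for start_time, vowel in recognized:
        preset = VOWEL_PRESETS.get(vowel)
        if preset is None:
            continue  # skip consonants or unrecognized symbols
        frames.append(ExpressionFrame(time_sec=start_time, params=dict(preset)))
    return frames


if __name__ == "__main__":
    # Example: hypothetical vowel timings for a short synthesized utterance.
    demo = [(0.05, "o"), (0.25, "i"), (0.45, "i"), (0.65, "a")]
    for frame in vowels_to_expression_frames(demo):
        print(f"{frame.time_sec:.2f}s -> {frame.params}")
```

In the actual system, the preset values would correspond to the lip and expression morphs of the MMDAgent character model rather than the generic parameter names used here.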