Authors:
Archit Jain¹; Takumi Kondo²; Haruka Kamachi²; Anna Yokokubo² and Guillaume Lopez²
Affiliations:
¹University Jean Monnet, Saint-Etienne, France; ²Aoyama Gakuin University, Tokyo, Japan
Keyword(s):
Eating Quantification, Chewing, Swallowing, Sound Analysis, Activity Recognition, Free-living Conditions.
Abstract:
Increasing the number of chews per bite during a meal can help reduce obesity. However, it is difficult for a person to keep track of their mastication rate without the help of an automatic mastication counting device. Such devices do exist, but they are bulky, non-portable, and unsuitable for daily use. In our previous work, we proposed an optimization model for classifying three meal-related activities, chewing, swallowing, and speaking, from sound signals collected in free-living conditions with an inexpensive bone conduction microphone. To extract the number of chews per bite, it is necessary to distinguish the swallowing of food from the swallowing of drink. In this paper, we propose a new model that not only classifies speaking, chewing, and swallowing, but also differentiates whether a swallow is of food or drink, with an average accuracy of 96%.
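The four-class task described above (speaking, chewing, swallowing food, swallowing drink) can be sketched as a supervised classification problem over acoustic feature frames. The sketch below is illustrative only: it uses synthetic MFCC-like feature vectors and a random forest classifier as stand-ins, since the paper's actual features and model architecture are not specified in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical label scheme (assumption, not from the paper):
# 0 = speaking, 1 = chewing, 2 = swallowing food, 3 = swallowing drink
CLASSES = ["speaking", "chewing", "swallow_food", "swallow_drink"]

rng = np.random.default_rng(0)

def synthetic_frames(n_per_class=100, n_features=13):
    """Generate toy 13-dim feature vectors (MFCC-like) per activity class.

    Each class is given a distinct mean so the toy problem is separable;
    real bone-conduction audio features would of course overlap far more.
    """
    X, y = [], []
    for label in range(len(CLASSES)):
        X.append(rng.normal(loc=label * 2.0, scale=0.5,
                            size=(n_per_class, n_features)))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

X, y = synthetic_frames()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a generic classifier on the labelled frames and evaluate held-out accuracy.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

On this deliberately easy synthetic data the classifier separates the four classes almost perfectly; the point is only to show the shape of the pipeline (frame features in, one of four activity labels out), not to reproduce the reported 96% result.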