Team Name
Food For Mood
Timeline
Fall 2020 – Spring 2021
Students
- Aashish Giri
- Abhishek Regmi
- Bishal Adhikari
- Ishmeet Kaur
Sponsor
Dr. Chris Conly
Abstract
Food for Mood is a software application for Android devices that uses the device's camera to detect a person's emotion and then suggests the food item that best aligns with their mood, based on a recommendation algorithm.
Background
Face recognition and emotion recognition are key interests of both researchers and large corporations, with applications such as preventing retail crime and finding missing people. Another major area is recommender systems: highly sophisticated models, often built with reinforcement learning algorithms, that display items, ads, products, or companies to users based on their past search keywords or preferences. Google, Amazon, Netflix, and other companies use such systems to filter the content shown to a particular user. In our research, we found a 2014 study from Harvard Medical School showing that people’s mood is linked to the food they eat. Our Food for Mood application aims to strike the right balance between an individual’s emotion and their food preferences, providing maximum food satisfaction without the entire “what should I eat/cook/order today?” phase. The app helps individuals figure out the food item that best aligns with their mood at a given point in time.
Project Requirements
- The system shall be available as an Android application on the Google Play Store.
- The system shall let the user sign up for an account.
- The system shall be compatible with and run on all Android devices, irrespective of screen resolution.
- The system shall open the camera and let the user take a picture to detect their emotion.
- The system shall display the recommended food items within 45 seconds.
- The system shall recommend food items in accordance with the currently detected emotion.
- The emotion recognition and recommendation models shall run synchronously.
- The system shall collect feedback from the customer.
- The system shall use Firebase to store user information.
- The system shall store all collected data securely.
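The emotion-driven recommendation requirement above can be sketched in a few lines. This is a minimal illustrative sketch only: the food categories and the emotion-to-food mapping below are our own placeholder assumptions, not the actual algorithm used by the application.

```python
# Seven emotion categories recognized by the system (FER2013 convention).
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

# Placeholder mapping from detected emotion to candidate food items.
# These pairings are illustrative assumptions, not the team's real data.
FOOD_BY_EMOTION = {
    "Happy": ["sushi", "fruit salad"],
    "Sad": ["chocolate", "mac and cheese"],
    "Angry": ["crunchy snacks", "iced tea"],
    "Fear": ["soup", "herbal tea"],
    "Disgust": ["plain rice", "toast"],
    "Surprise": ["tacos", "smoothie"],
    "Neutral": ["sandwich", "pasta"],
}

def recommend(emotion: str) -> list:
    """Return candidate food items for a detected emotion."""
    if emotion not in FOOD_BY_EMOTION:
        raise ValueError(f"unknown emotion: {emotion}")
    return FOOD_BY_EMOTION[emotion]
```

In the real system this lookup step would sit between the emotion detection model's output and the display layer, with user feedback refining the mapping over time.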
System Overview
The Food for Mood application is separated into five layers: Management, APIs, Design, Model, and Storage. These five layers interact with one another to run the application efficiently. The management layer works with all of the other layers; the API layer is used for restaurant maps; the design layer handles user interaction with the system; the model layer contains the emotion detection model; and the storage layer holds collected customer data. The facial expression recognition model is a Convolutional Neural Network (CNN) trained on the FER2013 dataset, which consists of 48×48 pixel grayscale images of faces. There are seven emotion categories: 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral. The training set contains 28,709 examples and the test set contains 3,589 examples.
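The final step of the model layer, mapping the CNN's seven-way output to a FER2013 emotion label, can be sketched as follows. The softmax scores in the example are made up for illustration; only the label order follows the FER2013 convention given above.

```python
# FER2013 label order: index 0=Angry through index 6=Neutral.
FER2013_LABELS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def predict_label(scores):
    """Map a seven-way softmax output vector to its FER2013 emotion label.

    Takes the argmax over the scores. The example input below is
    illustrative, not real model output.
    """
    if len(scores) != len(FER2013_LABELS):
        raise ValueError("expected one score per FER2013 class")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return FER2013_LABELS[best]

# Example: index 3 has the highest score, so the label is "Happy".
print(predict_label([0.05, 0.01, 0.04, 0.70, 0.08, 0.02, 0.10]))
```

The predicted label is then passed to the recommendation component, which selects food items for that emotion.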
![](https://websites.uta.edu/cseseniordesign/files/2021/05/DataFlow-3-1024x651.png)
Results
Our team achieved a great deal given the lack of prior research on emotion-based food recommendation. Our facial expression recognition model reaches an accuracy above 65.2%, and our recommendation algorithm works well and stores data correctly without error. The main challenge is that emotions and facial expressions are highly subjective and vary from one user to another. The model tries to identify the right emotion as often as possible, but it depends on users’ facial expressions being broadly comparable. Other factors, such as lighting near the camera, the angle of the face, and objects like hats and glasses, also affect the accuracy of the facial emotion recognition model.
Future Work
A substantial amount of future work remains; we list it below in order of urgency and need.
- The first thing we need is a deep-learning neural network recommendation system. Right now, the application uses an algorithm written and thoroughly tested by our team. We could not build a neural network at this point because there is currently no emotion-based food dataset; several cuisine and ingredient datasets exist, but they cannot be used for our system.
- We also need to work on API integration. Although the Google FoodMenu API and Restaurant Menus API could be used to retrieve restaurants and their menus, executing the integration properly requires further research. We did not integrate them at this point because we did not want to harm the user experience.
- We need to retrain our facial expression recognition model periodically to keep improving its accuracy where possible. This would also require a different, larger dataset, but it is an important need since emotions and expressions are highly subjective.
- Adding more features and integrating with other apps: we want to add as many features as possible to improve the user experience. Once the items above are achieved, we could also integrate our services with popular companies/apps such as UberEats or DoorDash to make the overall user experience better.
Project Files
Project Charter (link)
System Requirements Specification (link)
Architectural Design Specification (link)
Detailed Design Specification (link)