Last week our group produced first drafts of how the final graphical user interface (GUI) of our multimodal application should look. We also kicked off a project on GitHub, to which our group has already pushed first backend and database implementations.
The Recipe List
The first thing the user of the cooking aid application will see is a list of recipes stored in the database. The list can be queried using the search box located at the top left of the screen: the user can type a recipe name, an ingredient name, or a meal type into the text box, and the list is then filtered accordingly.
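A rough sketch of how that filtering could work on the backend. This is an illustration only: the `Recipe` fields (`name`, `meal_type`, `ingredients`) are assumptions, not our final database schema, and the real implementation would query the database rather than filter in memory.

```python
# Hypothetical in-memory sketch of the recipe-list filter.
# The Recipe fields are placeholders, not the actual schema.
from dataclasses import dataclass, field


@dataclass
class Recipe:
    name: str
    meal_type: str
    ingredients: list = field(default_factory=list)


def filter_recipes(recipes, query):
    """Keep recipes whose name, meal type, or any ingredient matches the query."""
    q = query.strip().lower()
    if not q:
        return list(recipes)  # empty search box: show everything
    return [
        r for r in recipes
        if q in r.name.lower()
        or q in r.meal_type.lower()
        or any(q in ing.lower() for ing in r.ingredients)
    ]


recipes = [
    Recipe("Spaghetti Bolognese", "dinner", ["pasta", "beef", "tomato"]),
    Recipe("Pancakes", "breakfast", ["flour", "egg", "milk"]),
]
print([r.name for r in filter_recipes(recipes, "egg")])  # matches by ingredient
```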
After a recipe has been selected from the recipe list, the application switches to the Recipe Overview view, where the user finds all the information they need about the selected recipe, including nutritional information and the difficulty of the cooking steps. By clicking/touching the “Start Cooking” button or by simply saying “Start cooking” into the microphone, the user moves on to the Recipe Steps view of the application.
Here the user controls the application solely with speech input. By saying “Forward” or “Back”, they can navigate through the cooking steps.
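The navigation logic behind those two commands could look roughly like this. The speech recognizer itself is assumed to live elsewhere and deliver plain command strings; the class name and method below are illustrative, not part of our actual codebase.

```python
# Minimal sketch of speech-driven step navigation.
# Assumes a separate recognizer delivers command strings like "forward"/"back".
class RecipeStepsView:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # start at the first cooking step

    def handle_command(self, command):
        """Move through the steps and return the step now shown on screen."""
        cmd = command.strip().lower()
        if cmd == "forward" and self.index < len(self.steps) - 1:
            self.index += 1
        elif cmd == "back" and self.index > 0:
            self.index -= 1
        return self.steps[self.index]


view = RecipeStepsView(["Boil water", "Add pasta", "Drain and serve"])
print(view.handle_command("Forward"))  # advances to "Add pasta"
print(view.handle_command("Back"))     # returns to "Boil water"
```

Clamping at both ends means a stray “Back” on the first step or “Forward” on the last step simply keeps the current step on screen instead of crashing.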
The user can also receive additional information from the application, such as nutritional information about the ingredients. For example, by simply saying out loud “Show me nutritional information about two slices of cheese”, the user gets the requested information displayed on the screen. This feature will be accomplished with the help of the Wolfram Alpha API.
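As a sketch of that integration, the snippet below only builds the request URL for the Wolfram|Alpha Full Results API (`api.wolframalpha.com/v2/query`) without actually sending it; `DEMO-APPID` is a placeholder for a real app ID, and the parsing of the response is left out.

```python
# Sketch: construct (but do not send) a Wolfram|Alpha query for a spoken request.
# "DEMO-APPID" is a placeholder; a real app ID is required to call the API.
from urllib.parse import urlencode

API_ENDPOINT = "https://api.wolframalpha.com/v2/query"


def build_query_url(spoken_request, app_id):
    """Turn the recognized utterance into a Full Results API request URL."""
    params = {
        "appid": app_id,
        "input": spoken_request,
        "format": "plaintext",  # ask for text pods we can render in the GUI
    }
    return API_ENDPOINT + "?" + urlencode(params)


url = build_query_url("nutritional information about two slices of cheese",
                      "DEMO-APPID")
print(url)
```

In the application, the recognized utterance would be passed through unchanged, and the plaintext pods of the response would be rendered in the GUI.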