Weekly Progress Report

During last week’s presentation of our prototype, where all the students from our course tried out our application, we received a lot of feedback and improvement ideas, which we implemented for the final presentation on 15th July.

The improvement ideas included many ways to increase the usability of our application: for example, a feature where the cooking guide repeats the speech output of the current cooking step when the user says “repeat” out loud, and a dialog that displays all available voice commands when the user says “help”.
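Since we use Annyang for speech recognition, such global commands boil down to a small command map. A minimal sketch, where the handler logic and the app state (`currentStep`, `availableCommands`) are illustrative stand-ins, not our actual code:

```javascript
// Illustrative app state for the sketch.
var currentStep = { text: 'Chop the onions' };
var availableCommands = ['repeat', 'help', 'forward', 'back', 'previous'];

function onRepeat() {
  // In the app this re-triggers the speech output of the current step.
  return currentStep.text;
}

function onHelp() {
  // In the app this opens the dialog listing all voice commands.
  return availableCommands.join(', ');
}

// Annyang only exists in the browser; the guard keeps the sketch self-contained.
if (typeof annyang !== 'undefined' && annyang) {
  annyang.addCommands({ repeat: onRepeat, help: onHelp });
  annyang.start();
}
```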

We also improved the performance of our Wolfram API interface by implementing a caching feature that reduces the waiting time of each request to the Wolfram servers.
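Conceptually, the cache is just a lookup table keyed by the query string: before contacting the Wolfram servers, we check whether the same request has already been answered. A minimal sketch, with an illustrative stand-in for the actual API call:

```javascript
// Per-query cache for Wolfram requests (sketch).
var wolframCache = {};

// fetchFromWolfram is a stand-in for the real network request.
function cachedWolframQuery(query, fetchFromWolfram) {
  if (Object.prototype.hasOwnProperty.call(wolframCache, query)) {
    return wolframCache[query]; // served from cache, no network round trip
  }
  var result = fetchFromWolfram(query);
  wolframCache[query] = result;
  return result;
}
```

Repeated queries for the same ingredient then skip the round trip to the Wolfram servers entirely.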

The complete change log for our final prototype is listed below:

  • Say “repeat” to repeat the description of the current step
  • Say “help” to display all available voice commands
  • Say “previous” (as an alternative to “back”) to go one step back
  • Implemented a nutrition facts dialog
  • Show the current cooking progress by displaying “Step x / y”
  • Added a link (dialog) on the index page to a list of all voice commands
  • Added a loading sign to indicate the processing and retrieval of Wolfram data (for nutrition facts)
  • Calculate the nutrition facts of the whole recipe via the Wolfram API
  • Added Wolfram server request caching to reduce waiting time / delay
  • Several Wolfram API interface bug fixes

Stay tuned for the release of a short clip introducing our final prototype next week!

Weekly Progress Report

Last week our group implemented a highlight of our project: the retrieval of nutritional information using the Wolfram API.

The implementation took longer than we expected, so unfortunately we had to postpone the last feature on our agenda: speech output as a complementary output modality during the guided cooking steps.

Weekly Progress Report

Last week, we prepared the first release of the prototype of our cooking aid application, which our group is going to present in our course meeting on Wednesday next week.

We fixed several of the code issues in our prototype that were mentioned in our last blog entry. Speech recognition using Annyang and hands-free navigation through the application now work flawlessly. The graphical user interface was also polished.

For the second release of our prototype, we started to implement the retrieval of nutritional information using the Wolfram API, as well as speech output using Google’s Speech Synthesis API. With the newly introduced speech output, the cooking guide reads the cooking instructions aloud, so that users don’t accidentally cut their hands by trying to read the displayed instructions while cutting vegetables.
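A minimal sketch of how such speech output can be produced with the browser’s Speech Synthesis API; the step object shape (`{ number, text }`) is an assumption for illustration:

```javascript
// Build the text that should be read aloud for a cooking step.
function buildStepAnnouncement(step) {
  return 'Step ' + step.number + ': ' + step.text;
}

function speakStep(step) {
  var text = buildStepAnnouncement(step);
  // speechSynthesis only exists in the browser, so guard the call.
  if (typeof window !== 'undefined' && window.speechSynthesis) {
    var utterance = new SpeechSynthesisUtterance(text);
    utterance.lang = 'en-US';
    window.speechSynthesis.speak(utterance);
  }
  return text;
}
```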

Weekly Progress Report

Last week, after the intermediate presentation, which was a big milestone in our project, we made progress on both the content of our application and the application logic.

Content Additions

For our first recipe, we decided that it would be the best option (and also the most legal one) to create original content by cooking something on our own and documenting the steps using notes and a camera. This resulted in a nice-looking meal, which our future users will be happy to try out using the first release of our application prototype.

Application Logic


Our application is now able to filter the recipe list as a search term is typed into the textbox above the list. For example, typing an ingredient name filters the recipe list on the fly.
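The on-the-fly filtering can be sketched as a simple case-insensitive match against recipe names and ingredients; the recipe object shape here is an assumption for illustration:

```javascript
// Filter recipes whose name or ingredients contain the search term (sketch).
function filterRecipes(recipes, term) {
  var needle = term.trim().toLowerCase();
  if (!needle) { return recipes; } // empty search shows the full list
  return recipes.filter(function (recipe) {
    var haystack = [recipe.name].concat(recipe.ingredients).join(' ').toLowerCase();
    return haystack.indexOf(needle) !== -1;
  });
}
```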


Speech Input

Until last week, we had problems integrating our speech recognition framework, Annyang, into the JavaScript MVC framework we use to implement our frontend. Since this problem was resolved during the last week, it is now possible to navigate through the guided cooking steps using the microphone.

The speech recognition framework we are using had an issue where the browser did not save the microphone permission settings. To resolve this, we had to enable HTTPS request handling in our backend. This way, the recognition engine works flawlessly, without constantly interrupting users by asking whether they allow their microphone to be used.

Wolfram API

In order to access the Wolfram API to retrieve nutritional information about the food ingredients, as described in a former blog post, we are currently testing several JavaScript modules that wrap the Wolfram API.

Weekly Progress

This week, we have been working on the implementation of the GUI drafts. Using Twitter Bootstrap, we have prototyped the interface, which is now accessible via the browser.

Overview Page


The overview page lists all the recipes that are in the database. Small icons show the time needed to cook the dish and how many calories each meal has.

Recipe Detail Page


The recipe detail page shows general information about the recipe. It provides the ingredient list, with small icons depicting the individual ingredients, and information on the nutritional facts. Next up, we are planning to integrate the Wolfram Alpha API for information on nutrition.

Recipe Step Page

The recipe step page shows the current step of the recipe and an image illustrating the actions. At the moment, the text and the image are just placeholders. Next up, we will integrate voice commands to slide between steps.


This week we had to talk to A.L.I.C.E, an artificial intelligence on the internet. Here is the conversation:

judge: Hello
ALICE: Hi there!
judge: How are you?
ALICE: My logic and cognitive functions are normal.
judge: Very funny. I mean in an emotional way.
ALICE: I’m glad you find this amusing. I do not understand that allusion.
judge: Do you have feelings?
ALICE: as a robot I lack human emotions.
judge: What are you good for?
ALICE: As Edison said, “What good is a newborn babe?”.
judge: Can you tell me something?
ALICE: Gale said I became mean to him he is just having revenge an eye for an eye.
judge: Tell me about yourself.
ALICE: I am the latest result in artificial intelligence which can reproduce the functions of the human brain with greater speed and accuracy.
judge: Ask me a question.
ALICE: What’s your favorite movie?
judge: Idiocracy. And yours?
ALICE: I’ve never seen it. Try asking me a more specific question.
judge: What’s your favorite meal?
ALICE: I require only electricity.
judge: Yummy!
ALICE: It is delicious.

I began like a normal conversation between human strangers, by saying “Hello”. As I expected, the answer fit perfectly.

To the question “How are you?”, it answered with a joke. The joke avoided the difficult subject of whether an AI can have feelings. It was a clever way of not breaking the illusion of a smart human being.

A point where I definitely noticed that A.L.I.C.E is a computer was that it didn’t understand when I referred back to a subject from an earlier sentence in the conversation (the favorite movie).

It was funny talking to it, but since I already know some AIs which answer questions, I wasn’t surprised or scared. Additionally, it didn’t feel like a normal conversation, because it didn’t ask me any questions by itself. So it was more like an interview.

Progress Report: Design and First Implementations

Last week, our group came up with the first drafts of how the final graphical user interface (GUI) of our multimodal application should look. We also kicked off a project on GitHub, where our group has already pushed the first backend and database implementations.

The Recipe List

The first thing that the user of the cooking aid application will see is a list of recipes stored in the database. The list can be queried using the search box located at the top left of the screen. Inside the textbox, the user can type a recipe name, an ingredient name, or the meal type, and the list is then filtered accordingly.

Recipe Overview

After a recipe has been selected from the recipe list, the application changes to the Recipe Overview view, in which the user obtains all the information needed about the selected recipe, including nutritional information and the difficulty of the cooking steps. By clicking/touching the “Start Cooking” button, or by simply saying “Start cooking” into the microphone, the application changes to the Recipe Steps view.

Recipe Steps

Here the user controls the application solely with speech input. By saying “Forward” or “Back”, the user can navigate through the cooking steps.

The user can also receive additional information from the application regarding the nutritional information of the ingredients. For example, by simply saying out loud “Show me nutritional information about two slices of cheese”, the user gets the requested information displayed on the screen. This feature will be accomplished with the help of the Wolfram Alpha API.
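As a rough sketch, such a spoken request could be turned into a query against the public Wolfram Alpha v2 endpoint like this; the query wording and the app id are assumptions for illustration, not our final implementation:

```javascript
// Build a Wolfram Alpha v2 query URL for a spoken food request (sketch).
function buildWolframQueryUrl(spokenFood, appId) {
  var query = 'nutritional information of ' + spokenFood;
  return 'http://api.wolframalpha.com/v2/query' +
    '?appid=' + encodeURIComponent(appId) +
    '&input=' + encodeURIComponent(query);
}
```

The returned URL can then be fetched by the backend, and the response parsed into the on-screen nutrition dialog.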


Project Idea: Cooking Aid Application



This week, our group came up with the wonderful idea of creating a cooking aid application. The main idea behind the application is to have a guide that helps people cook by providing step-by-step recipes. An important aspect is that the application can be controlled using voice commands. This feature comes in especially handy when the chef is busy chopping up ingredients or has dirty hands. Chefs can benefit from a hands-free experience!


For the web client, we will be using AngularJS, an MVC JavaScript framework, to display the application interface and to handle the communication between our backend and the Google Web Speech API. The Web Speech API converts the voice commands into text so that they can be used in the web client. The JSON server takes care of the data management, providing the raw data of the recipes.