>>>>> Here is a partial example (I have not written enough, but you get the idea of the content. You will have Section 3.1 Overview and then Sections 3.2-3.* depending on how many components/steps are in your proposed system)
Section 3.1: OVERVIEW: I will have the following system components: Label Area Finding, OCR in each Label Area, and Output Results to User. The main addition I am contributing as added value is the Label Area Finding; the OCR will be done using already existing code. Of course, the integration into an app is also important.
example Section 3.2: Label Area Finding: I am going to take a picture of the box and find potential Label Areas, using the following idea that I came up with:
Do Color Blob Detection using the XYZ algorithm (see https://github.com/Itseez/opencv/tree/master/samples/android/color-blob-detection for reference).
Then I am going to select the top 5 colors present based on their area (histogram). I will have to decide how close in color a pixel can be to each of the 5 colors and still be counted in the area for that color; this threshold will be determined experimentally.
For each of the 5 top colors, I will create a Label Area: a sub-image of the original image, ideally smaller than the entire original image, that is the rectangle encompassing all of the pixels of that color.
I am going to pass the top 5 Label Areas for processing to the component in Section 3.3.
After looking at the results for typical input images, I may adjust and choose a smaller or larger number than 5 (this will be a parameter setting in my app called Detect_Number_Label_Areas). A sketch of this whole step is shown below.
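To make the idea concrete, here is a minimal sketch of the Label Area Finding step in Java using the OpenCV Android bindings (the same library as the color-blob-detection sample linked above). Using HSV hue as the notion of "color", the saturation/value floor, and the names LabelAreaFinder and HUE_TOLERANCE are my own illustrative assumptions, not settled design decisions:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class LabelAreaFinder {
        // Named after the Detect_Number_Label_Areas parameter from the proposal.
        static final int DETECT_NUMBER_LABEL_AREAS = 5;
        // Assumed "close enough in color" threshold; to be tuned experimentally.
        static final double HUE_TOLERANCE = 10;

        /** Returns up to DETECT_NUMBER_LABEL_AREAS sub-images, one per dominant color. */
        public static List<Mat> findLabelAreas(Mat bgr) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);

            // 1. Histogram of the hue channel: pixel count per color (area).
            Mat hist = new Mat();
            Imgproc.calcHist(Collections.singletonList(hsv), new MatOfInt(0),
                    new Mat(), hist, new MatOfInt(180), new MatOfFloat(0, 180));

            // 2. Pick the top-N hue bins by area.
            List<Integer> topHues = new ArrayList<>();
            Mat work = hist.clone();
            for (int i = 0; i < DETECT_NUMBER_LABEL_AREAS; i++) {
                Core.MinMaxLocResult mm = Core.minMaxLoc(work);
                int hue = (int) mm.maxLoc.y;   // row index == hue bin
                topHues.add(hue);
                work.put(hue, 0, 0);           // zero the bin so the next pass finds the runner-up
            }

            // 3. For each dominant hue, mask the "close enough" pixels and crop
            //    the bounding rectangle that encompasses all of them.
            List<Mat> labelAreas = new ArrayList<>();
            for (int hue : topHues) {
                Mat mask = new Mat();
                // Saturation/value floor of 50 skips near-gray pixels; this and the
                // clamping (which ignores hue wrap-around at red) are simplifications.
                Core.inRange(hsv,
                        new Scalar(Math.max(0, hue - HUE_TOLERANCE), 50, 50),
                        new Scalar(Math.min(179, hue + HUE_TOLERANCE), 255, 255),
                        mask);
                Mat points = new Mat();
                Core.findNonZero(mask, points);
                if (points.empty()) continue;
                Rect box = Imgproc.boundingRect(new MatOfPoint(points));
                labelAreas.add(new Mat(bgr, box));   // sub-image of the original
            }
            return labelAreas;
        }
    }

The hue tolerance here plays the role of the "close enough in color" threshold discussed above, and would be one of the values settled experimentally.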
example Section 3.3: OCR in a Label Area: For each Label Area found by the previous component, I perform OCR. This is done using the Tesseract OCR Engine (maintained by Google); see http://gaut.am/making-an-ocr-android-app-using-tesseract/, https://github.com/rmtheis/android-ocr, and https://play.google.com/store/apps/details?id=edu.sfsu.cs.orange.ocr&hl=en
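Concretely, using the tess-two wrapper that the android-ocr project above is built on, the OCR call for one Label Area could look roughly like the sketch below. DATA_PATH is illustrative, and eng.traineddata must already have been copied onto the device:

    import android.graphics.Bitmap;
    import com.googlecode.tesseract.android.TessBaseAPI;

    public class LabelOcr {
        // Illustrative path; eng.traineddata must live in DATA_PATH + "tessdata/".
        private static final String DATA_PATH = "/sdcard/MyLabelReader/";

        /** Runs Tesseract on one Label Area and returns the recognized text. */
        public static String recognize(Bitmap labelArea) {
            TessBaseAPI baseApi = new TessBaseAPI();
            baseApi.init(DATA_PATH, "eng");   // language = English
            baseApi.setImage(labelArea);
            String text = baseApi.getUTF8Text();
            baseApi.end();                    // free native resources
            return text;
        }
    }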
example Section 3.4: Reporting Results: I will present the user with both a blown-up text version of the label in black text on a white background and text-to-speech. The first part uses standard Android GUI elements: a TextView contained in a ScrollView. The second part, text-to-speech, will be done using the standard Android TextToSpeech class; see http://developer.android.com/reference/android/speech/tts/TextToSpeech.html
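A rough sketch of this reporting component, assuming the OCR text is handed over in an Intent extra named "label_text" (my placeholder) and building the views in code rather than XML for brevity:

    import android.app.Activity;
    import android.os.Bundle;
    import android.speech.tts.TextToSpeech;
    import android.widget.ScrollView;
    import android.widget.TextView;
    import java.util.Locale;

    public class ResultsActivity extends Activity implements TextToSpeech.OnInitListener {
        private TextToSpeech tts;
        private String labelText;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            labelText = getIntent().getStringExtra("label_text"); // placeholder extra name

            // Blown-up black-on-white text inside a ScrollView.
            TextView tv = new TextView(this);
            tv.setText(labelText);
            tv.setTextSize(32);                // enlarged for readability
            tv.setTextColor(0xFF000000);       // black text ...
            tv.setBackgroundColor(0xFFFFFFFF); // ... on a white background
            ScrollView scroll = new ScrollView(this);
            scroll.addView(tv);
            setContentView(scroll);

            // Speak the same text once the engine reports it is ready.
            tts = new TextToSpeech(this, this);
        }

        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
                tts.speak(labelText, TextToSpeech.QUEUE_FLUSH, null);
            }
        }

        @Override
        protected void onDestroy() {
            if (tts != null) tts.shutdown();
            super.onDestroy();
        }
    }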