Week 3: 10th June – 16th June

Data Collection

After understanding how TensorFlow works, Jian Xian began to extract images for training. To obtain acceptable results, approximately 500 labelled images were required. To collect them, Jian Xian took a video of the smartphone from multiple angles and under varying light intensities. To make the model more reliable, the video also contains segments where the object is partially obscured. In addition, the camera’s field of view includes other objects which the model also needs to identify, which increases the robustness of the model.

The frames were then extracted using ffmpeg, a powerful multimedia tool, on Ubuntu. Subsequently, suitable frames which were clear were chosen and labelled using the LabelImg tool. A screenshot illustrating the labelling process is displayed below, in which the presence of an oppoR9 and a cardslot was identified. This was then repeated for several hundred more images.
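For reference, frame extraction with ffmpeg can be done with a one-line command along these lines (the file names here are hypothetical, and the fps filter controls how many frames per second are kept):

    ffmpeg -i phone_video.mp4 -vf fps=2 frame_%04d.jpg

Sampling at a low rate such as 2 frames per second avoids producing hundreds of near-identical consecutive frames while still covering the whole video.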

Linking Image Buttons

After struggling to make images look correct on image buttons, Wei Bin encountered another problem: specifying what pressing a button should actually do.

Although it sounds simple, there was no clear way to make the button redirect to another page upon being tapped, so he had to go back to googling. He found two methods of detecting a click (tap): setting up an OnClickListener in code, or giving the button an android:onClick attribute in the layout XML. He chose android:onClick as he found it more elegant and it needed less code.
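A minimal sketch of the two approaches, assuming a button with the id btnNext and a handler method named onNextPressed (both hypothetical names used for illustration):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;

    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            // Approach 1: set an OnClickListener in code. (Use this OR the
            // android:onClick attribute below, not both on the same button.)
            Button btnNext = (Button) findViewById(R.id.btnNext);
            btnNext.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // handle the tap here
                }
            });
        }

        // Approach 2: in the layout XML, declare
        //   <Button android:id="@+id/btnNext" android:onClick="onNextPressed" ... />
        // and Android calls this method when the button is tapped. It must be
        // public, return void, and take a single View parameter.
        public void onNextPressed(View view) {
            // handle the tap here
        }
    }

The android:onClick route keeps the wiring in the layout file, which is why it reads as less code in the Activity itself.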

After that, he found out how to switch activities at the push of a button. Having learnt that Android launches activities using something known as an “Intent”, a quick search allowed him to write the code needed to switch between screens, so that tapping the button brings up another screen. With functioning buttons and multiple screens, the majority of the front page was completed.
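A minimal sketch of the Intent-based switch, assuming the second screen is implemented by a class named SecondActivity (a hypothetical name):

    import android.content.Intent;
    import android.view.View;

    // Inside the Activity hosting the button:
    public void onNextPressed(View view) {
        // An explicit Intent names the target Activity; startActivity()
        // asks Android to launch that screen on top of the current one.
        Intent intent = new Intent(this, SecondActivity.class);
        startActivity(intent);
    }

Note that any Activity launched this way must also be declared in the app’s AndroidManifest.xml.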
