Training the Model
With ample data collected, it was time to train the model. We selected the Single Shot Detector (SSD) model, as it offers the best accuracy-to-latency trade-off among the options we considered. The model comes pre-trained on the COCO dataset, which gives it a foundation for recognising basic features such as curves and edges, making it possible for developers like us to fine-tune it for our own purpose.
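To make "pre-trained foundation" a bit more concrete, here is a rough sketch (the checkpoint path and SSD variant are assumptions, since we have not listed our exact download) of peeking into a COCO-pretrained checkpoint from the TensorFlow detection model zoo. Fine-tuning simply restores these existing weights instead of starting from random values.

```python
import tensorflow as tf

# Sketch only: the path assumes the ssd_mobilenet_v2_coco download from the
# TensorFlow detection model zoo; the actual checkpoint used may differ.
ckpt = "ssd_mobilenet_v2_coco_2018_03_29/model.ckpt"
reader = tf.train.NewCheckpointReader(ckpt)

# Every variable listed here already holds weights learnt on COCO
# (low-level filters for edges, curves, etc.); fine-tuning starts from them.
for name, shape in sorted(reader.get_variable_to_shape_map().items())[:10]:
    print(name, shape)
```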
The model was trained on Jian Xian’s desktop GPU at home; however, he had to tweak the training size and batch size because of hardware limitations, chiefly GPU memory, which restricted how fast the model could be trained. After some trial and error, he arrived at a configuration that ran sufficiently fast on his GPU. After running for about 7 hours, the model reached step 27,818 with the loss fluctuating between 0.8 and 1.5, down from an initial loss of approximately 7. He stopped the training at this point, as continuing could cause the model to over-fit the data and hurt its accuracy. The image below shows the checkpoints recorded by the program during training. With this, Jian Xian produced a crude model for us to test in our app.
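We did not record the exact configuration here, but the kind of change involved looks roughly like the sketch below, which uses the TensorFlow Object Detection API's config utilities to lower the batch size until training fits in GPU memory and to point fine-tuning at the COCO-pretrained checkpoint. The file names and numbers are placeholders, not our actual values.

```python
from object_detection.utils import config_util

# Placeholder paths and values -- illustrative only.
configs = config_util.get_configs_from_pipeline_file("pipeline.config")

train_config = configs["train_config"]
train_config.batch_size = 8  # lowered step by step until it fit in GPU memory
train_config.fine_tune_checkpoint = (
    "ssd_mobilenet_v2_coco_2018_03_29/model.ckpt")  # COCO-pretrained weights

# Write the edited config back out for the training script to pick up.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "training/")
```

The checkpoints seen in the image are the files the training run periodically writes out; the loss values quoted above come from the same training logs.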
Testing Out the New Machine Learning Model
The machine learning model was finally ready after we requested some changes to it. To understand how to properly incorporate it into an app, Wei Bin first downloaded the official TensorFlow demo app from GitHub. We loaded in our model in place of theirs, but started to hit multiple bugs along the way. The settings we initially used to export our ML model differed from theirs and made our file four times the size of their model. We changed the settings used to export our ML model and got one step closer (as evidenced by a different error message). After that we faced an insufficient-memory-buffer problem and had to dig into parts of the code to find the cause. After a long round of googling (this problem can arise for multiple reasons, and there are multiple ways to define the memory buffer), we changed one of the settings (quantized = false) to allocate a 4x larger memory buffer, which gave us the correct memory size. With that out of the way, all that was left was to work out how to merge the two apps together.
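For context, the two fixes above are related: a quantized model stores each value in 1 byte while a float model uses 4 bytes, so an un-quantized export is roughly four times larger, and the demo app likewise has to allocate a 4x larger input buffer (hence the quantized = false setting). A rough sketch of the float TensorFlow Lite export is below; the tensor names follow the Object Detection API's export_tflite_ssd_graph output, and the paths and input size are assumptions, not our exact files.

```python
import tensorflow as tf

# Sketch (TF 1.x): convert the frozen SSD graph produced by
# export_tflite_ssd_graph.py into a float (non-quantized) .tflite model.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the detection post-processing op is a custom op

with open("detect.tflite", "wb") as f:
    f.write(converter.convert())
```

The demo app's quantized flag then has to match this export: with a float model, each input value occupies 4 bytes, which is exactly the larger memory buffer the error was complaining about.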
Designing and Tweaking Object Assets on Autodesk Maya
With the switch from Unity to Autodesk Maya, it was time to learn the mechanics of the new software. Although Autodesk Maya and Autodesk Fusion 360 are made by the same company, and we have some experience with the latter from a 3D printing workshop we attended in Year 1 Semester 1, their similarities pretty much end there. The two serve very different purposes: Fusion 360 is mainly for designing prototypes (essentially a watered-down version of 3DS Max), while Maya is used for 3D modelling and animation. Even though Choy Boy had some prior experience with animation (using software such as Adobe After Effects), he had some difficulty navigating the different controls.
We also purchased a 3D model of the smartphone we’re working on. The model initially looked like this (lighting on the centre-right of the phone):
Although it looked quite similar to the physical smartphone we had, there was a problem: the screen was too white. Choy Boy therefore had to change the material of the screen using Hypershade:
Here, a black colour was used and the diffuse property was reduced drastically (to 0.136 in this picture). Thus, the phone now looked more accurate:
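For anyone repeating this in Maya, the same Hypershade tweak can also be scripted. Below is a minimal sketch, assuming the screen uses a Lambert-style material; the node name is made up for illustration.

```python
import maya.cmds as cmds

# Hypothetical material node name -- use whatever material is actually
# assigned to the phone's screen in the Hypershade.
screen_mat = "phoneScreen_lambert"

cmds.setAttr(screen_mat + ".color", 0, 0, 0, type="double3")  # black base colour
cmds.setAttr(screen_mat + ".diffuse", 0.136)                  # drastically lower diffuse
```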
As the model did not come with the card tray, the pin to remove the card tray, or a NanoSIM, he had to model those from scratch. Modelling them in Fusion 360 would have been much easier; however, the final choice was to use Maya, as models designed in the latter program can be used for animations straight away.
The final models looked like this: