Week 8: 16th July – 22nd July

After attaining our minimum viable product (the seamless integration of step 1 of the procedure), we decided to prioritize finishing the entire application. First, we created a counter that loops through the numbers 0 to 4. Then, after some trial and error, we managed to get the code to remove any existing 3D objects. We then pieced these two together: when the trigger for the next step is activated, the counter increments by 1, representing the user's progress through the 5 steps. When the counter changes, an update listener removes the current 3D objects and overlays the 3D object corresponding to the counter, allowing us to transition from step to step using a trigger. A rough sketch of this logic is shown below.
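This is roughly the shape of the counter-and-listener logic, assuming a Sceneform scene graph; the class and variable names here are illustrative, not our exact code:

```kotlin
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.Node
import com.google.ar.sceneform.rendering.ModelRenderable

// Illustrative sketch: one counter for the 5 steps, and a listener that
// clears the old 3D object and overlays the one for the new step.
class StepController(
    private val stepAnchorNode: AnchorNode,               // node the models hang off
    private val stepRenderables: List<ModelRenderable>    // one model per step, index 0..4
) {
    var currentStep = 0
        private set

    // Called by the trigger (currently a button press) to advance one step.
    fun advance() {
        if (currentStep >= stepRenderables.size - 1) return
        currentStep++
        onStepChanged()
    }

    // The "update listener": remove existing 3D objects, then attach the new one.
    private fun onStepChanged() {
        stepAnchorNode.children.toList().forEach { stepAnchorNode.removeChild(it) }
        Node().apply {
            renderable = stepRenderables[currentStep]
            setParent(stepAnchorNode)
        }
    }
}
```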

 

Currently, the trigger is a button press, but we have decided to implement an automatic step tracker so that the transitions are smoother and the experience is more hands-free. To achieve this, we would have to make the machine learning model much more reliable and also develop an algorithm that tracks the steps based on the visual input. As of now, the algorithm is ready and the machine learning model is still in the midst of training. The sketch below outlines the tracking idea.
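One way to picture the step-tracking algorithm, building on the StepController sketch above: only advance the counter once the detector has reported the object that marks the next step for several consecutive frames. The labels, confidence threshold, and frame count below are placeholder assumptions, not our trained model's actual outputs:

```kotlin
// Hedged sketch of an automatic step tracker: debounce the detector's output
// so a single noisy frame cannot advance the step counter.
class AutoStepTracker(private val controller: StepController) {
    // Hypothetical labels marking the completion of each of the 5 steps.
    private val stepCompletionLabels = listOf(
        "tray_partially_ejected", "tray_ejected", "sim_flipped",
        "sim_in_tray", "tray_reinserted"
    )
    private var consecutiveHits = 0

    // Called once per camera frame with (label, score) pairs from the detector.
    fun onDetections(detections: List<Pair<String, Float>>) {
        val expected = stepCompletionLabels.getOrNull(controller.currentStep) ?: return
        val seen = detections.any { (label, score) -> label == expected && score > 0.7f }
        consecutiveHits = if (seen) consecutiveHits + 1 else 0
        if (consecutiveHits >= 10) {      // require ~10 agreeing frames before advancing
            consecutiveHits = 0
            controller.advance()
        }
    }
}
```

The debounce matters because object detectors flicker from frame to frame; requiring sustained agreement keeps a hands-free trigger from skipping steps.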

Week 7: 8th July – 15th July

A Glimmer of Hope for the App's Development

After hitting brick wall after brick wall while trying to integrate our machine learning model with ARCore, we have finally caught a glimpse of how the integration could work: we managed to get TensorFlow to run alongside ARCore and to display the models at the position that TensorFlow defines (a rough sketch of the placement step follows). Although we have now reached our Minimum Viable Product (MVP), much more can still be done to bring the remaining features together and make the app experience as seamless as possible.
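The placement step looks roughly like this, assuming Sceneform and a detection box already mapped into screen pixels (the detector normally reports coordinates relative to its input tensor, so they must be scaled to the view first); the function and parameter names are illustrative:

```kotlin
import android.graphics.RectF
import com.google.ar.core.Frame
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.ArSceneView
import com.google.ar.sceneform.Node
import com.google.ar.sceneform.rendering.ModelRenderable

// Sketch: hit-test the centre of the detection box against ARCore's
// understanding of the scene and anchor the renderable at the first hit.
fun placeAtDetection(
    frame: Frame,
    boxPx: RectF,                  // detection box, already in screen pixels
    sceneView: ArSceneView,
    renderable: ModelRenderable
) {
    val hit = frame.hitTest(boxPx.centerX(), boxPx.centerY()).firstOrNull() ?: return
    val anchorNode = AnchorNode(hit.createAnchor()).apply {
        setParent(sceneView.scene)
    }
    Node().apply {
        this.renderable = renderable
        setParent(anchorNode)
    }
}
```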

Week 6: 1st July – 7th July

Completion of All Essential Animations

After some time fiddling around, we have finally finished making the essential animations using Autodesk Maya:

1) Pin is inserted; the card tray is ejected (but not completely)

2) Card tray is ejected completely; the pin is removed

3) NanoSIM is flipped around

4) NanoSIM is inserted into the card tray

5) Card tray is inserted back into the smartphone

 

Brick Wall in App Integration

Although we have finally managed to get our animations working in Android Studio, and even to play on their own without the need for an additional button, we have hit another brick wall in trying to integrate the machine learning model with ARCore. Currently, ARCore does not allow its camera to be shared with TensorFlow machine learning models, so we have to think of new ways to capture images from ARCore and pass them to the machine learning model in a format it can accept. We looked for existing examples of integrating machine learning with AR; however, most of them use TensorFlow image classification rather than object detection, and classification does not provide the tracking that we believe would be useful for AR. Despite the lack of relevant case studies, we found a hint on how to capture images from ARCore and pass them to the TensorFlow model (sketched below). However, more will need to be done to properly integrate our object detection model.
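The capture path we found relies on ARCore's Frame.acquireCameraImage(), which exposes the current frame as a CPU-side YUV image. The conversion helper below is one common (if slow) way to turn that into a Bitmap for the model; it assumes unpadded rows and interleaved chroma planes, which hold for ARCore's default 640×480 CPU image on most devices:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import android.media.Image
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import java.io.ByteArrayOutputStream

// Grab the CPU image of the current ARCore frame and convert it for TensorFlow.
fun captureForDetector(frame: Frame): Bitmap? {
    return try {
        val image = frame.acquireCameraImage()   // YUV_420_888 Image
        try {
            yuvToBitmap(image)
        } finally {
            image.close()                        // must be released back to ARCore
        }
    } catch (e: NotYetAvailableException) {
        null                                     // no CPU image ready this frame
    }
}

// Simplified YUV_420_888 -> NV21 -> JPEG -> Bitmap conversion.
private fun yuvToBitmap(image: Image): Bitmap {
    val width = image.width
    val height = image.height
    val nv21 = ByteArray(width * height * 3 / 2)
    image.planes[0].buffer.get(nv21, 0, width * height)  // copy the Y plane
    // NV21 stores interleaved V/U pairs after the Y data.
    val u = image.planes[1].buffer
    val v = image.planes[2].buffer
    val uvStride = image.planes[1].pixelStride
    var pos = width * height
    var i = 0
    while (pos < nv21.size) {
        nv21[pos++] = v.get(i)
        nv21[pos++] = u.get(i)
        i += uvStride
    }
    val out = ByteArrayOutputStream()
    YuvImage(nv21, ImageFormat.NV21, width, height, null)
        .compressToJpeg(Rect(0, 0, width, height), 90, out)
    return BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size())
}
```

Round-tripping through a JPEG is not fast, but it is the simplest way to get an RGB Bitmap the detector can consume; a production version would convert on the GPU or feed the YUV planes to the model directly.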