Development
16 May 2019 (update by Chris):
-Chris and Aravinth:
Now we only need to figure out how to generate a sequence of 1s and 0s such that every pattern of 0s and 1s in an array of, say, a 15×15 matrix is guaranteed to be different. Later, we'll transform it into blanks and dots, so the robot will know its position since the dot patterns are all unique (a quick uniqueness check is sketched at the end of this entry).
-Code for image processing (in Python; comments start with "#"):
from PIL import Image
im = Image.open("C:\\blablablabla")  # this is the file directory
# N.B. the file must be a jpg, not a pdf. Also, don't forget to put the file name at the end of the directory: after accessing the C or D drive and the folder that contains the jpg you want to process, type in the name of the jpg file itself, including the ".jpg" at the end.
print(list(im.getdata()))
-After that, Prof Kiah also added some other lines of code while testing the above on some other random images.
-So now we can already process images (i.e. the dots)! All we need to figure out now is how to generate the dots that we want to process in the first place!
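As a first sanity check for the uniqueness requirement above, here is a minimal sketch (not our final code) that tests whether every window of a candidate binary map is distinct; the 15×15 map size and 4×4 window size are illustrative assumptions:
import numpy as np

def all_windows_unique(grid, k):
    # Return True if every k-by-k window of the binary grid is distinct.
    seen = set()
    rows, cols = grid.shape
    for r in range(rows - k + 1):
        for c in range(cols - k + 1):
            window = tuple(grid[r:r + k, c:c + k].flatten())
            if window in seen:
                return False  # two positions would look identical to the robot
            seen.add(window)
    return True

# Example: keep drawing random 15x15 maps until one has all-unique 4x4 windows.
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(15, 15))
while not all_windows_unique(grid, 4):
    grid = rng.integers(0, 2, size=(15, 15))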
20-22 May 2019 (update by Chris):
-Chris and Aravinth:
-C: Figured out how to convert an image (any general image) from CMYK (or any other general color palette) to RGB
-C: Figured out how to convert the RGB pixels into a matrix
So the matrix is a 15 by 45 matrix, the reason being that each element of our 15 by 15 grid has 3 components (R, G, B, i.e. how red, green, or blue-ish the pixel is). Thus, the 45 columns are grouped in threes, and hence we have a 15 by 15 matrix with 3 components per element.
-A: Figured out how to map between the pixels and binary numbers.
So, we can already interchange our matrix (which consists of the pixels' information, i.e. the RGB of each pixel in the image) with the binary numbers (the 1s and 0s, or the soon-to-be dots and blanks). A sketch of both conversions is at the end of this entry.
-A: Made the overarching code (the "main()" function call)
So now the code already integrates the following: capturing the image with the camera (still in pseudocode form), converting the image into RGB and then into a matrix (using Chris's code), mapping the matrix into 1s and 0s and comparing them to the dots (using Aravinth's code), and finally having the robot work out its position by comparing the image to the overall map (still in pseudocode).
-Met with Prof Kiah: 1. Figure out how to print out dots and make them all different, and 2. Figure out how to factor in imperfect orientation in the real world (for example, what if the robot moves just 0.05 mm beyond what it is supposed to detect? Then it will capture the dots in a different way. This needs to be fixed and accounted for. Also, the initial placement of the robot has to be very precise.)
-Meeting in 2 weeks' time with other groups.
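For reference, a minimal sketch of the conversions described in this entry (the filename, grid size, and black/white threshold are placeholder assumptions, not our actual values):
from PIL import Image
import numpy as np

im = Image.open("map.jpg").convert("RGB")  # force CMYK (or any other palette) into RGB; "map.jpg" is a placeholder
im = im.resize((15, 15))  # assuming a 15x15 dot grid

pixels = np.array(im)  # shape (15, 15, 3): one (R, G, B) triple per cell
flat = pixels.reshape(15, 45)  # the 15-by-45 layout, columns grouped in threes

# Map pixels to binary: dark pixels (dots) become 1, light pixels (blanks) become 0.
binary = (pixels.mean(axis=2) < 128).astype(int)  # 128 is an assumed brightness threshold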
29 May 2019 (update by Chris):
– YF Back
– Upload our source code to GitHub
– Explained the previous meetings and progress to Yi Fong
http://mathworld.wolfram.com/IrreduciblePolynomial.html
-Aravinth proposes to use a contour plot to solve the previous problem (the one where we have to account for errors in the reading of dots by the robot)
-Chris proposes to use a counter
-Aravinth's plot: compute the frequency (percentage of similarity between the map and the robot's actual reading) at each candidate position, plot it as a contour plot, and take the position with the closest or highest rate of similarity.
-Chris's counter: compare the map with the robot's reading and raise a counter by 1 for each positive match; then take the highest counter (meaning we take the position with the highest rate of similarity).
– Yi Fong:
https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
"Statistical Mean Squared" (mean squared error) method; a sketch is at the end of this entry
https://docs.opencv.org/3.1.0/da/df5/tutorial_py_sift_intro.html
OpenMV (official documentation/tutorial):
http://docs.openmv.io/openmvcam/tutorial/index.html
- Related Q&A
- Positioning of Robot using fiducial markers
- use the lens_corr() method to correct lens distortion
- Similar projects
- OpenMV H7 white line following (Youtube)
- Line Following (forum page)
- LiPo battery connected to robot kit base
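For reference, a minimal sketch of the mean squared error comparison from the pyimagesearch link above (not Yi Fong's exact code):
import numpy as np

def mse(image_a, image_b):
    # Mean squared error between two images of the same shape: 0 means identical, larger means more different.
    a = image_a.astype("float64")
    b = image_b.astype("float64")
    return ((a - b) ** 2).mean()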
31 May 2019 (update by Chris):
-C: Made code for checking and comparing the 4×4 captured image (matrix) with the 15×15 map. It also accounts for the fact that the scan/reading/captured image might not be perfect in real life (rotations or small shifts in the robot's movement may happen!). So, Chris developed the code for the "counter" method (a sketch of the idea follows this entry). The "contour plot" method by Aravinth and the mean squared method by Yi Fong are to be developed starting on Monday.
-A: Uploaded the source code from his laptop online
-YF: To check whether the code runs smoothly
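A minimal sketch of the "counter" idea (not Chris's exact code): slide the captured 4×4 window over every position, and every 90° rotation, of the 15×15 map, raise a counter for each matching cell, and return the row/column of the best match.
import numpy as np

def locate(window, full_map):
    # Return (row, col, rotation in degrees) of the best match of the window in the map.
    best_count, best_pos = -1, None
    k = window.shape[0]
    rows, cols = full_map.shape
    for rot in range(4):  # allow for imperfect orientation: try 0, 90, 180, 270 degrees
        w = np.rot90(window, rot)
        for r in range(rows - k + 1):
            for c in range(cols - k + 1):
                counter = int((full_map[r:r + k, c:c + k] == w).sum())  # +1 per matching cell
                if counter > best_count:
                    best_count, best_pos = counter, (r, c, rot * 90)
    return best_pos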
1 June 2019 (update by Yi Fong):
– Note: https://openmv.io/collections/shields/products/motor-shield (example)
– Purchases for next stage: Motor and accompanying parts and wheels
– Might want to just play with the microchip as it is easy to work with; this will involve major code rewriting, e.g. for the data type of the image
– Information on data types: https://docs.openmv.io/library/index.html (this doesn't make sense on its own; see the next point.)
– If you can, download the OpenMV IDE on the computer into which you want to plug the MicroPython board (conceptually this is exactly like Arduino).
– Inside, you can find a list of example codes.
3 June 2019 (update by Yi Fong):
-Connecting board with motor and wheels
-YF to implement/re-implement PIL on MicroPython so that the MicroPython board also has the PIL library
-Explanation of the problem of translating Python into MicroPython on the microchip / what we need to do eventually:
https://openmv.io/pages/download
(so, since we write our code in Python, we need to change it because the microchip "speaks" the MicroPython language)
-Still need to understand how to instruct the camera to take a picture (i.e. what lines of code, in Python, will instruct the camera to snap a picture)
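For the record, the usual OpenMV MicroPython way to snap a picture looks roughly like this (adapted from the style of the IDE example scripts; the pixel format and frame size here are assumptions):
import sensor

sensor.reset()  # initialise the camera sensor
sensor.set_pixformat(sensor.RGB565)  # colour images
sensor.set_framesize(sensor.QVGA)  # 320x240
sensor.skip_frames(time=2000)  # let the sensor settle
img = sensor.snapshot()  # snap one picture
img.lens_corr(1.8)  # optional: the lens_corr() method noted on 29 May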
3 June 2019 (update by Chris):
-Finished the comparison code
-Now we can compare the 4 by 4 matrix taken by the camera with the 15 by 15 matrix of our full map
-Fixed some bugs in the code and also made it return 2 numbers: the row and column position detected by the robot
4 June 2019 (update by Chris):
-Made the 3D Hardware plans
-Code is finally combined into one master code (one long script with all of the functions)
-Made slides for presentation
-Chris to sieve out which functions from the PIL and numpy libraries need to be installed on the MicroPython board
-Aravinth to make the software for the robot
-Yi Fong to speak with Tony for hardware purchase
5 June 2019 (update by Chris):
-Chris: Completed slides for the presentation, and coded the program to convert binary into RGB pixel data, and RGB pixel data into a real image. Therefore, from the 1s and 0s, an image of dots and blanks can be generated (a sketch follows this entry).
-Aravinth: Started on the robot movement codes.
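A minimal sketch of the binary-to-image direction described above (cell size, dot radius, and filename are illustrative assumptions):
from PIL import Image, ImageDraw
import numpy as np

def dots_image(binary, cell=40):
    # Draw a white image with a black dot for every 1 in the binary matrix.
    rows, cols = binary.shape
    im = Image.new("RGB", (cols * cell, rows * cell), "white")
    draw = ImageDraw.Draw(im)
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] == 1:
                x, y = c * cell + cell // 2, r * cell + cell // 2
                radius = cell // 4
                draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill="black")
    return im

dots_image(np.random.default_rng(0).integers(0, 2, (15, 15))).save("dots.jpg")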
10 June 2019 (update by Chris):
-To the Making & Tinkering Lab: tinkered with the wheels, wiring, and motor, and connected them to the main board
-Safety Introduction Training (Course), 2pm-4pm
-Met with Prof. Kiah; discussed some mistakes in generating the 1s and 0s
-Change of proof of concept
14 June 2019 (update by Chris):
-Discussed the change of proof of concept
-Tested image capturing of the printed dots
-Found code for image processing; Chris's counter method is replaced with a similar but more advanced method called the integral image method (taken from a Stack Overflow page)
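A minimal sketch of the integral image (summed-area table) idea behind that method, not the exact Stack Overflow code:
import numpy as np

def integral_image(a):
    # Summed-area table with a zero first row/column: ii[i, j] = sum of a[:i, :j].
    ii = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = a.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_sum(ii, r, c, h, w):
    # Sum of the h-by-w window whose top-left corner is (r, c), computed in O(1) per window.
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]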
17 June 2019 (updated by Chris):
-Image localization needs refining
-Rotation needs changing
-Change the symbols for each of 0, 1, and -1
-Add each window to the sheet music (music will play from that point), i.e. we divide the music into segments corresponding to how many unique windows we have
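A minimal sketch of that segment mapping (the window count and piece length below are made-up numbers):
def playback_start(window_index, n_windows, piece_seconds):
    # Start time (in seconds) of the music segment corresponding to a detected window.
    return window_index * piece_seconds / n_windows

start = playback_start(37, 144, 180)  # e.g. window 37 of 144 windows in a 180-second piece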
19 June 2019 (update by Chris):
-Chris: Made sliding mechanisms for the sheet stand
-Yi Fong: Checked Prof. Kiah’s lines of code
-Aravinth: Code for the image localization
20 June 2019 (update by Chris):
-Chris: Sketched the y-direction slider on paper, designed it in SolidWorks for 3D modelling, and 3D printed the y-direction grip parts
21 June 2019 (update by Chris):
-Chris: Same as the above (20 June); but this time for the x-direction grip parts. (Finished with the xy-direction).
25 June 2019 (update by Chris):
-Tested buttons and hardware (the design of the product itself)
-Figured out the “assignments” and tasks by Prof. Kiah
26 June 2019 (update by Chris):
-Meet with Dr. Ho
-3D print the casing of the camera+battery+button
1 July 2019 (update by Chris):
-WiFi shield (needs to connect to the internet in order for the camera to redirect us to a specific time in a YouTube video after it has processed the image and knows where we are in the sheet music; a URL sketch follows this entry)
-Meet with Prof. Kiah
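A minimal sketch of the redirect step mentioned above: YouTube accepts a start time through the t query parameter, so after localization the code only needs to build a link like the one below (the video ID and time are placeholders):
def youtube_url_at(video_id, seconds):
    # Build a YouTube link that starts playback at the given number of seconds.
    return "https://www.youtube.com/watch?v={}&t={}s".format(video_id, int(seconds))

url = youtube_url_at("VIDEO_ID", 46)  # e.g. jump to 46 seconds into the video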
5 July 2019 (update by Chris):
-Tested wiring of the battery (hardware); we soldered the battery together with the battery's charger as well as the OpenMV camera. It was successful, and the battery could power the camera for about 5 minutes
-Tested switches; there will be 2 switches in the final product: one switch is for turning the camera on and off, and the other instructs the camera to snap a picture while it is on
-Measured the resistance of each electrical component (resistance of the ESP32 board, OpenMV board, wires, switch (while on and off), etc.)
-Calculated the resistor that needs to be attached to the ESP32 board in the final product. The OpenMV runs on a 3.7 V LiPo battery, while the ESP32 runs on only 3 V; since the final product connects the two boards in parallel (i.e. both boards are given the same voltage), some of the voltage given to the ESP32 board must be "shared" with an extra resistor, which should be around 1000 Ohm.
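A rough sketch of that calculation using Ohm's law only (the ESP32 current draw below is an assumed figure for illustration, not a measured one):
v_drop = 3.7 - 3.0  # volts that the extra series resistor must drop (3.7 V battery, 3 V for the ESP32)
i_esp32 = 0.7e-3  # amps, assumed current draw (illustration only; use the measured draw in practice)
r_needed = v_drop / i_esp32  # Ohm's law: R = V / I, which gives 1000 Ohm with these numbers
print(r_needed)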