Week 11 Lab Trial 2: Testing our new solution and IT WORKS!
We got a working prototype running on an actual titration reaction: it detected the colour change at the equivalence point and automatically measured the volume of titrant dispensed. Our programme interface can also calculate the unknown concentration of the titrand, given information from the user about the reagents used, the concentration of the titrant, and the mole ratio of titrand to titrant. Way to go!
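The concentration calculation itself reduces to standard titration stoichiometry. The sketch below is illustrative of the arithmetic our interface performs, not our actual code; the function name and parameters are our own for this example.

```python
def titrand_concentration(c_titrant, v_titrant, v_titrand, mole_ratio):
    """Estimate the unknown titrand concentration (mol/dm^3).

    c_titrant  : titrant concentration (mol/dm^3)
    v_titrant  : titre volume delivered at the equivalence point (cm^3)
    v_titrand  : volume of titrand in the flask (cm^3)
    mole_ratio : moles of titrand reacting per mole of titrant
                 (e.g. 0.5 for H2SO4 titrated against NaOH)
    """
    # moles of titrant delivered at the equivalence point
    # (volume units cancel in the ratio, so cm^3 is fine throughout)
    n_titrant = c_titrant * v_titrant
    # scale by the stoichiometric ratio, then divide by the titrand volume
    return mole_ratio * n_titrant / v_titrand
```

For example, 20.0 cm³ of 0.10 mol/dm³ NaOH neutralising 25.0 cm³ of H₂SO₄ (ratio 0.5) gives a titrand concentration of 0.04 mol/dm³.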
Post Recess Week: Implementing our wild idea
Finally our actuator has arrived!
A linear actuator works by converting rotational motion into linear motion. In other words, if we know how much to rotate, we can move the camera mounted on the actuator by exactly the distance we want!
Through rigorous testing of different angle values and the corresponding distances travelled, we were able to fine-tune the rotation angle needed to move the camera in 1 cm increments. This is especially convenient, as we use the linear distance travelled by the camera to calculate the volume dispensed by the burette.
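The resulting mapping between rotation and travel is just a linear scaling. A minimal sketch, assuming (for illustration only, this is not our measured calibration) that one full rotation moves the carriage 1 cm:

```python
# Placeholder calibration constant: in practice this is fitted from
# repeated measurements of angle rotated vs distance travelled.
DEGREES_PER_CM = 360.0

def angle_for_distance(distance_cm):
    """Rotation (degrees) needed to move the camera `distance_cm`."""
    return distance_cm * DEGREES_PER_CM

def distance_for_angle(angle_deg):
    """Inverse mapping: linear travel (cm) for a given rotation."""
    return angle_deg / DEGREES_PER_CM
```

With the constant fitted once, both directions of the conversion come for free.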
With distance tracking complete, we promptly started working on detecting the start and end points of the actuator's linear motion. Recall that in our previous prototype we used brightly coloured circular fishing floats as a proxy for the meniscus level. Continuing from this idea, we created a circular region of interest using OpenCV in Python to track when red enters the frame (in this case we chose a red float). To detect the start and end points precisely, our code only signals the actuator to start/stop measuring once the contour drawn around the red float is perfectly encapsulated by the circular region of interest.
We also realised that the actuator was long and thin, making it prone to tipping over, so we began engineering a mount to stabilise it. The first version of the mount was an aluminium cage that could not fit the full set-up and did not provide enough support to prevent tipping. It also left too large a gap between the actuator and the burette, which would affect the accuracy of our measurements.
Our initial set-up
In the final version of the mount, the actuator was anchored to a heavy wooden board using aluminium frames. We added an additional backbone frame to ensure greater stability to arrive at our improved set-up.
Our improved set-up! Much stronger now 😉
At last, the final piece of the puzzle! To attach our Arducam to the actuator, we designed an attachment piece to hold them together using 3D printing. This process similarly went through multiple iterations (See the “Learning Experiences” section of our blog).
+1 for problem-solving!
The final prototype is pictured below:
Here’s a video of our prototype in action!!
Recess week part 2: The Solution to A Big Problem (Volume Detection)
Coming back from the lab, we theorised that the error could be overcome by calibrating the camera on a chessboard grid to reduce the parallax error it perceives. With some experimentation we got the calibration code working, but the adjustments had limited success. In hindsight, this is likely because camera calibration estimates the camera's intrinsic parameters to correct for lens distortion and perceived tilt, but it cannot correct for external sources of error such as the tilt of our set-up itself.
Reflecting further on our previous failures, we conducted more tests to pin down the cause. Ultimately, we found that the camera's distance from the burette, necessary to capture its full range, contributed significantly to parallax error. By positioning the camera closer and capturing only the 0.00 to 25.00 cm³ range, we achieved higher accuracy with an error margin of ±0.10 cm³, albeit over a reduced measurement range.
This realisation meant that the accuracy error we were facing would be difficult to reconcile, as it is caused by a limitation of the set-up itself. We cannot have it both ways: our current approach sacrifices either the range of values that can be detected or the accuracy of the values obtained.
Realising that the camera needed to be nearer the burette, we had an epiphany. What if we designed a camera set-up akin to a flag-raising ceremony, with the camera mounted on a separate linear actuator that moves along the burette and uses the sharp colour contrast of our fishing float to detect its start and end points? This way, we can bring the camera much closer to the burette and use the distance travelled between the start and end points to calculate the titre volume!
Therefore, we changed gears and decided to use an actuator instead to detect the volume of titrant used.
The figure drawn below shows the idea, which can be seen in our final prototype. The camera moves down from the top of the actuator until it detects the floater, then marks this position as the start point.
After the titration is complete, the camera moves down to find the floater again, measures the distance travelled, and converts that distance to the volume of titrant used.
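The distance-to-volume conversion is a simple proportion, since a burette's bore is uniform along its graduated length. A minimal sketch, assuming the graduations of a 50 cm³ burette span a known length (the 56 cm figure below is a placeholder, not our measurement):

```python
# Placeholder geometry: measure the actual graduated length once
# and substitute it here.
GRADUATED_LENGTH_CM = 56.0
BURETTE_CAPACITY_CM3 = 50.0

def titre_volume(distance_travelled_cm):
    """Convert the camera's linear travel between the start and end
    detections of the float into titrant volume dispensed (cm^3)."""
    return distance_travelled_cm * BURETTE_CAPACITY_CM3 / GRADUATED_LENGTH_CM
```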

Now, with this idea conceived, we implemented a mini-version on a small actuator to see if the linear motion could be programmed precisely, and it works!
All that is left now is to order the necessary logistics and get our hands dirty building our new prototype!
Recess week Part 1: Lab Trial 1 – Our First Setback
The progress of the prototype was going on quite smoothly – the computer vision seemed to work well and measure volumes accurately.
However, this had all been done within the confines of the MnT lab, which isn’t a Chemistry lab. As such, we decided to pilot our first prototype on an actual chemical reaction setting.
This was the first time that we had actually planned and fully carried out a Chemistry experiment without any guidance from anyone. Fortunately, after many lab lessons in school, we knew what to do and this lab trial went smoothly (on the Chem side).
Here’s a video!
We were met with a harsh reality: our volume detection software was off by as much as 1.0 cubic centimetre, which is disastrous for titrations, a method often employed to detect minuscule deviations in concentration in pharmaceutical products.
August – October: Hands-on work for Volume Detection
To track volume changes in the burette, we used an external camera to monitor the water level and detect any shifts. Ideally, the program would read the burette’s meniscus and return the volume of titrant used based on an image at the moment of a sharp colour change. However, limitations in resolution make it difficult to detect the meniscus accurately, as it lacks strong contrast and blends with the background. Furthermore, unlike human eyes, the program cannot easily identify the burette’s start (0 cm³) and end (50 cm³) points, which are necessary for calculating the meniscus position via a proportional method.
To address these challenges, we added a distinct, buoyant floater to represent the meniscus and used coloured tape to mark the 0 cm³ and 50 cm³ points. We employed HSV (Hue, Saturation, Value) analysis on these taped areas to determine the y-coordinates for start and end points, providing more robust colour detection despite background interference.
(Re: the floater, we ended up using fishing floats thanks to their bright colours!)
For motion tracking, we programmed the system to monitor volume changes using OpenCV in Python. More specifically, our programme tracks the motion of the float as it moves down while the burette dispenses. The principle of our code is as follows:
First, to minimise background noise, we narrowed the region of interest to just the burette itself by implementing a user-drawn boundary system that allows a rectangular contour of the burette (0 to 50 cm³) to be drawn in a point-and-click manner. This also lets us determine the y-coordinates of the 0 cm³ and 50 cm³ marks, which are simply the top and bottom of the rectangular boundary respectively.
Second, to detect the floater's movement between the start and end points, we use a background subtractor algorithm provided by OpenCV that keeps a running average of the camera frames captured. This effectively filters out background fluctuations while letting us focus on the constantly moving object (a.k.a. our float).
Third, we added some blurring with threshold filtering to further limit the effect of random motion being detected.
All these elements come together to ensure that our programme tracks only the motion of the float, allowing us to use the y-coordinate of the top of the tracking box around the float as the meniscus level. We can then apply the following formula to get the volume dispensed: (y-coordinate of meniscus level − y-coordinate of 0 cm³) / (y-coordinate of 50 cm³ − y-coordinate of 0 cm³) × 50 cm³.
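The proportional formula above can be sketched directly in Python (the function name and parameter names here are our own for illustration):

```python
def volume_from_pixels(y_meniscus, y_zero, y_full, capacity_cm3=50.0):
    """Proportional conversion from the float's pixel position to volume:
    (y_meniscus - y_zero) / (y_full - y_zero) * capacity.

    y_zero and y_full are the y-coordinates of the 0 cm^3 and 50 cm^3
    marks, taken from the top and bottom of the user-drawn boundary.
    """
    return (y_meniscus - y_zero) / (y_full - y_zero) * capacity_cm3
```

Because only the ratio of pixel distances matters, the result is independent of the camera's resolution, though not, as we discovered, of its tilt.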
However, this approach consistently overestimated volume by about 1 to 2 cm³, likely due to a slight tilt in the camera set-up. To overcome this, we tried calculating the Euclidean distance using Pythagoras' theorem instead of just y-coordinate values, and found that this improved accuracy somewhat. We are hopeful that our prototype will work in an actual lab setting, and we shall test it out during recess week!
July Week 4: Light Detection and Light Pollution!
If we want to accurately detect colour in a wide variety of environments, we can’t leave the set-up exposed!
To ensure even lighting for accurate colour detection, we 3D-printed a white box and fitted a ring light to it.
Our amazing box with an open top design to fit our ringlight!
July Week 3: Colour Detection Code
In an ideal scenario, a colour detection algorithm works by comparing the colour of each frame to the next. A difference in colour indicates a change. However, real-world application introduces several challenges.
First, defining the “initial colour” of a frame is complex since real images contain multiple colours, rather than a single homogeneous hue. To address this, we take the average RGB value over a specified region of interest to represent the initial colour.
The next challenge is defining a true “colour change,” as fluctuations can occur due to stir bar motion and lighting variations. We handle these with a two-pronged approach:
- Tolerance Range: Fluctuations are managed by allowing colour to vary within a set range, which we visualise as a “sphere” around the initial value.
- Stability Check: Because fluctuations can sometimes briefly exceed this tolerance range without indicating a true change, we set a second variable to track the duration spent outside the range. If the colour remains outside the tolerance range long enough, we classify it as a real change rather than a fluctuation.
Further real-life adjustments are needed due to inherent delays in the chemical reaction process. Colour changes do not happen instantaneously; reaction time and delays in titrant addition mean we need to pace our drop rate and introduce small delays in the detection process. While minor, these delays prevent the program—reading data in fractions of a second—from detecting misleading changes too quickly. However, the duration of the delay must be adjusted precisely to strike a balance between accuracy and efficiency of operation.
In terms of colour representation, each colour can be expressed in the RGB space, where each channel (Red, Green, Blue) ranges from 0 to 255. Together, these RGB values form a 3D vector in colour space. Using this vector representation allows us to compute the difference between two colours by measuring the vector norm. A true colour change at the equivalence point occurs when this difference exceeds a predefined threshold and remains stable, avoiding false alarms from gradual shifts that may appear as reactions progress.
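The tolerance-sphere and stability-check ideas above can be sketched together in a small class. This is an illustrative reconstruction; the class name, the tolerance radius, and the frame count are assumed values for the example, not our tuned parameters.

```python
import math

class ColourChangeDetector:
    """Detect a true colour change as a sustained excursion outside
    a tolerance sphere in RGB space."""

    def __init__(self, initial_rgb, tolerance=30.0, stable_frames=15):
        self.initial = initial_rgb          # average RGB before titration
        self.tolerance = tolerance          # sphere radius in RGB space
        self.stable_frames = stable_frames  # frames required outside sphere
        self.frames_outside = 0

    def update(self, rgb):
        """Feed one frame's average RGB; True once a change is confirmed."""
        dist = math.dist(self.initial, rgb)  # vector norm between colours
        if dist > self.tolerance:
            self.frames_outside += 1         # outside the tolerance sphere
        else:
            self.frames_outside = 0          # fluctuation settled back
        return self.frames_outside >= self.stable_frames
```

A brief spike outside the sphere resets as soon as the colour returns, so only a sustained change, such as the one at the equivalence point, trips the detector.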
July Week 2: Making a wooden box to mount and stabilise our stepper motor
In order to raise the stepper motor to an appropriate height to attach to the burette stopcock, we needed a mount heavy enough to prevent the set-up from tipping over due to the torque of the spinning motor.
To maintain contact between the burette tip and the rotating stepper motor, we used CAD to design a preliminary titration hand to hold the two together.
Our Sketch
Our first design
However, our first design did not grip the burette tip firmly and easily fell off when we tested it with the rotor. To fine-tune the design, we removed the tolerance around the circular bulb in the middle for a snugger fit and opted for a pokeball design, in which the entire tip of the burette is covered, rather than the existing four-rod design.
Our Final Design!
We also eventually decided on a solid wood block with a stepper motor and its mount screwed into it.

July week 1: START OF BUILDING
For over a month of careful deliberation, our group tried exceptionally hard to make a Stirling engine work: a cooling machine that would use mechanical work to remove heat from an enclosed space.
However, after more details were discussed, we decided against moving forward with this idea, ultimately choosing to go forward with an automatic titration machine instead.
It was difficult to leave our original idea behind and move forward with an entirely new idea with completely different working principles, but what we’ve learned from the experience – resilience and teamwork and problem-solving skills – is what is helping us to progress rapidly with the current idea.
Initially, we aimed to use a burette attached to a camera a fixed distance away to measure the volume of titrant dispensed. However, after discussing with our very helpful MnT lab personnel, we slowly opened up to the idea of using a laser tracker instead to track the volume of titrant dispensed.
As students who have had more experience taking exams and following lab instructions, we’re coming to truly understand the importance of experimentation and troubleshooting in understanding the project better and uncovering new angles at which to address our problem statements.
Moving on to the more bread-and-butter component of our idea: the code for colour detection. We’ve been progressing smoothly with colour detection, making rapid strides since this Tuesday, when we first went down to the lab to experiment with our new idea. From no code at all, our group has now written something that flags and selects colour changes, which users can then judge to be an endpoint or just a simple colour change. We’re using Python with the OpenCV library, and learning to integrate a library none of us have used before is new territory, but we are learning! See the video below to understand the start of our colour detection code.
Another interesting part of our colour detection code: we’re treating the RGB axes as a three-dimensional space, then considering a sphere of a specific radius (specified in our code) around the initial colour. Anything detected within this sphere is deemed close enough to be treated as the same colour.
3D printing, an essential part of any MnT project, is starting to see its use in our project, where we use a 3D printed joint to attach the stopcock of the burette to the stepper motor.
In other news, our logistics are slowly arriving. Slowly but surely, our project is beginning to take shape, and we’re excited to share with you how it goes, stay tuned!
June: Our Stirling engine idea
We initially were trying very hard to create an idea that combined each of our group’s interests – environmental sustainability, computer vision, and heavy mechanical engineering. After many weeks of brainstorming, we finally chose to explore the idea of a Stirling Engine in greater detail. The idea was to make a ‘reverse’ Stirling engine that could cool air by channeling mechanical work into the removal of heat.
What we had to consider was very in-depth: to make an effective Stirling engine, we had to take into account the pressure of the system, its ideal temperature, and the ideal adhesives, metals, and other materials. Skills normally associated with MnT, like 3D printing and circuit boards, would not have found their place in this project.
However, as time progressed, we faced more and more doubt about whether we could make this work. The materials would have taken too long to arrive, there would have been minimal coding, and this project did not seem feasible.
A blueprint of our Stirling engine