Raspberry Pi

Motivation

To make our setup more mobile, we wanted an onboard controller to move the microscope in space, acquire images, and perform our image processing. In principle, this would remove the need for a bulky laptop to be wired up to all the peripherals, and allow our setup to be more or less self-sufficient. It would also spare the end-user from installing and configuring software, since everything required could come preloaded on the controller.

Choices

We were choosing between the NVIDIA Jetson Nano and the Raspberry Pi 3 Model B. The Jetson Nano has an onboard GPU, is said to be optimised for running machine-learning models, and has found widespread use in deploying computer vision projects. It also comes with onboard 4K video support.

However, we eventually went with the Raspberry Pi, as our machine vision problem was not particularly computationally intensive. If we were to use a CNN, the model would be fairly small, taking in images at less than 1 Hz in order to count cells, as opposed to a more complex problem such as object detection, classification, or motion tracking. If we did not use a CNN and opted for traditional processing techniques, the computing power of the Jetson Nano would more or less go to waste.

Hence, we went with the Raspberry Pi to run our image-processing model. It controls a generic webcam, which is capable of 2K images (we just use Full HD at 1920×1080, because the 2K images are a lot noisier for little extra benefit). It is also responsible for controlling the motion of our microscope setup.
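For a sense of how little code the acquisition side needs, here is a minimal capture sketch using OpenCV; the device index and output filename are illustrative, and it assumes the webcam enumerates as video device 0 on the Pi.

```python
import cv2

# Open the webcam (assumed to be video device 0) and request Full HD.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()
if ok:
    # In our pipeline this frame would go to the cell-counting step;
    # here we just save it to disk.
    cv2.imwrite("frame.png", frame)
cap.release()
```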

We also bought a touchscreen for the Raspberry Pi, since a keyboard and mouse would take up a lot of space in its operating (lab) environment; for development, we just use a wireless keyboard and mouse.

For development purposes, we kept a git branch for testing: we would push code to that branch, then pull it onto our Raspberry Pi. We could also SSH (Secure SHell) into the Raspberry Pi over the local network and execute code on it from our own devices.
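Scripted from a development machine, that loop looks roughly like the sketch below; the hostname, repository path, branch name, and entry script are all hypothetical placeholders.

```python
import subprocess

# Pull the latest test-branch code onto the Pi and run it, all over SSH.
# Hostname, paths, and branch name below are placeholders for illustration.
remote_cmd = "cd ~/microscope && git pull origin testing && python3 main.py"
subprocess.run(["ssh", "pi@raspberrypi.local", remote_cmd], check=True)
```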

Notes from Setup

The setup of the Raspberry Pi was uneventful, with minimal configuration required. Interestingly, although its Wi-Fi card supports 5 GHz IEEE 802.11 b/g/n/ac, it was unable to connect to the “NTU_SECURE” network. We haven’t really looked into why, but it was not a major issue.

Our Raspberry Pi runs Raspbian, a Debian-based distribution that ships with a graphical desktop environment. Our SD card came with Raspbian preinstalled, so there was no need to flash the card with an image.

We encountered some difficulty installing the needed libraries, mostly because Raspbian is based on Debian, which is not known for its bleeding-edge packages. We had no issues installing TensorFlow Lite, which we would need to run machine-learning models, but we did run into trouble installing the graphical libraries (PyQt) needed by Printrun, the software interface that we used for the RAMPS board with Marlin firmware.
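As a sanity check that TensorFlow Lite was working, an inference sketch along these lines can be run with the tflite_runtime package; the model filename here is a hypothetical placeholder for whatever cell-counting model gets exported.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# "cell_counter.tflite" is a placeholder for an exported model file.
interpreter = Interpreter(model_path="cell_counter.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a dummy frame shaped to the model's expected input.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```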

We sidestepped this issue by downloading the Printrun source code, installing its dependencies, and running it directly from source.
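Running from source also means Printrun’s printcore module can be imported directly, which is handy for scripting stage motion without the GUI. A minimal sketch, assuming the RAMPS board shows up as a USB serial device (the port and moves below are illustrative):

```python
import time
from printrun.printcore import printcore

# "/dev/ttyUSB0" is a placeholder; the RAMPS board may enumerate differently.
p = printcore("/dev/ttyUSB0", 115200)
while not p.online:
    time.sleep(0.1)  # wait for the Marlin firmware handshake

p.send_now("G28 X Y")         # home the X and Y axes
p.send_now("G0 X10 Y5 F300")  # move the stage at a gentle feed rate
p.disconnect()
```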

All in all, configuring and setting up the Raspberry Pi was mostly uneventful, as our team has experience working with Linux environments. 
