Blob Detection

Considerations and Justifications for Choice

We decided to experiment with different methods to explore alternatives not commonly mentioned in the literature on automated cell counting. As we are using yeast cells for our samples, their simple, near-circular shape makes them well suited to blob detection, since each cell can be identified as an enclosed area.

Convex, (Nearly) Circular Shaped Yeast Cells

V3.1: Blob Detection with SimpleBlobDetector

Identified Issues with Previous Prototype (Critique):

Deep Learning

  • Training deep learning models is difficult: it demands substantial development time for hyperparameter tuning and debugging to optimise performance. We even had trouble getting our model to converge despite everything we tried. 
  • After more consideration, we realised that our problem might not be that complex. Yeast cells have simple features that can be extracted and identified without deep learning. Hence, there is potential to explore simpler baseline methods that use traditional image processing techniques. 
  • Deep learning models are suited to learning features of datasets to make predictions. They excel at handling uncertainty, such as determining whether an object in the image is a cell at all. Modelling after real-world use cases, our samples will not be intentionally contaminated, as most biological laboratories already have stringent procedures in place to reduce the possibility of cell culture contamination. On this note, deep learning is likely overkill for our use case, since there are essentially no false positives to be handled.
  • Using a multiclass classification approach involves cropping the microscope images. Some cells will inadvertently be missed or detected twice if their bodies are split across multiple crops, so an issue of undercounting or overcounting persists.
  • If we still wanted to use a neural network while avoiding the cropping/edge-effect issue, this would involve density estimation (estimating the number of cells per unit area and summing over the whole image), similar to approaches in the literature; a sketch of the counting step follows this list.
  • Nevertheless, the problem is not complex enough to justify further development time on a machine learning model. 
  • Furthermore, since we are counting cells rather than identifying finer features of cell structure (e.g. finding the shape of nuclei, classifying cell types), a deep learning model may not provide a real-world benefit over traditional image processing techniques. 
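
For context, the density-estimation approach mentioned above reduces counting to integrating a predicted per-pixel density map. A minimal sketch of the counting step alone (the model producing the map is hypothetical; we did not build one):

import numpy as np
 
def count_from_density(density_map: np.ndarray) -> float:
    """Estimate the total cell count by summing a per-pixel density map."""
    return float(density_map.sum())
 
# hypothetical usage, assuming some model yields a density map for an input image:
# density_map = model.predict(image)
# print(count_from_density(density_map))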

Modifications Made (Redesign):

  • Move away from deep learning to a less computationally intensive method
  • Detect cells as blobs of enclosed area by identifying cell contours in the image

The SimpleBlobDetector class is built into the OpenCV library and detects blobs in the image. It is suitable for our use case as cells are enclosed and detectable as blobs. 

First, the source image is converted to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive), with a distance of thresholdStep between neighbouring thresholds [1].
Connected components are extracted from every binary image by findContours, and their centers are calculated [1].
Next, centers from the several binary images are grouped by their coordinates [1]. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter [1].
From the groups, the final centers of blobs and their radii are estimated and returned as the locations and sizes of keypoints [1].

import cv2
import numpy as np
import matplotlib.pyplot as plt
 
img = cv2.imread('/content/gdrive/MyDrive/cy2003_cellcounting/training_data_12jun/cropped1contrast.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert to grayscale for detection
erosion = cv2.erode(gray, np.ones((2, 2), np.uint8), iterations=1) # light erosion to smooth cell boundaries
 
# Set up the SimpleBlobDetector with default parameters.
params = cv2.SimpleBlobDetector_Params()
 
# Change thresholds
params.minThreshold = 0
params.maxThreshold = 256
 
# Filter by Area.
params.filterByArea = True
params.minArea = 30
params.maxArea = 10000
 
# Filter by Color (black=0)
params.filterByColor = True
params.blobColor = 0
 
# Filter by Circularity
params.filterByCircularity = False
#params.minCircularity = 0.5
#params.maxCircularity = 1
 
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.5
params.maxConvexity = 1
 
# Filter by InertiaRatio
params.filterByInertia = True
params.minInertiaRatio = 0
params.maxInertiaRatio = 1
 
# Distance Between Blobs
params.minDistBetweenBlobs = 0
 
# Do detecting
detector = cv2.SimpleBlobDetector_create(params)
 
# find key points for blob detection
keypoints = detector.detect(erosion)
 
# draw the detected keypoints as circles sized to each blob on the original image
 
blobs = cv2.drawKeypoints(img, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(blobs)

Performance with Prediction Dataset

The microscope image was cropped to 50 × 50 pixels so that it was easier to zoom in and see what was identified as a blob. 

Blob Detection Performance on Prediction Test Set

While performance could be improved by tuning parameters such as circularity, area, and convexity, the blob detector would not generalise well, as its parameters are customised to one specific cell geometry.


V3.2: Otsu’s Thresholding and Connected Components Detection

Identified Issues with Previous Prototype (Critique):

  • For SimpleBlobDetector, a lot of fine-tuning of the circularity, area, and convexity parameters is required to fit the expected blobs and obtain optimal performance. While this is acceptable when all cells share similar colours, shapes, and sizes, for cell counting across a variety of cells the blob detector cannot generalise well, as its parameters are customised to one specific cell geometry.
  • The built-in class does not let us customise the preprocessing or swap in a different thresholding algorithm; we are limited by its fixed parameters.

Modifications Made (Redesign):

  • Shifted focus to more thorough image preprocessing to clean the image and make features more distinct.
  • Avoid cropping the microscope images so that cells are not cut off and missed during detection and counting.
  • Change from using the built-in class to building our own function with alternative preprocessing and thresholding steps. Specifically, Otsu’s thresholding, a more powerful algorithm, is used here instead.

Otsu’s thresholding assumes that the pixel intensities form two distinct peaks. It then seeks the threshold that splits the image into two classes, 0 and 1, representing background and foreground (this assumes the image can be meaningfully split into two classes, which in our case it can). More specifically, it maximises the interclass variance w0·w1·(μ0 − μ1)², where w0, w1 are the class weights and μ0, μ1 the class means at a given threshold, thus acting as a dynamic threshold.

In the following plot, we compute the weighted sum of the intraclass variances for every possible threshold, which in effect simulates the process of Otsu’s thresholding. Ignoring the endpoints, we can see that there is a clear local minimum at 159 (minimising the intraclass variance is equivalent to maximising the interclass variance). For reference, the histogram of image values is also included below. The sharp discontinuities appear because at some thresholds one of the classes contains no pixels at all, and the variance is not meaningful for an empty set. It is easy to justify ignoring the endpoints and looking only at the interior extremum: intuitively, the endpoints just represent the overall variance and cannot meaningfully separate two classes.
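
A minimal sketch of this simulation (with a synthetic bimodal array standing in for the greyscale image):

import numpy as np
 
# synthetic bimodal pixel values standing in for a real greyscale image
rng = np.random.default_rng(0)
gray = np.concatenate([rng.normal(80, 10, 5000), rng.normal(190, 15, 5000)])
gray = np.clip(gray, 0, 255).astype(np.uint8)
 
def intraclass_variance(gray, t):
    """Weighted sum of the two class variances at threshold t."""
    below, above = gray[gray < t], gray[gray >= t]
    if below.size == 0 or above.size == 0:
        return np.nan # variance is not meaningful for an empty class
    w0, w1 = below.size / gray.size, above.size / gray.size
    return w0 * below.var() + w1 * above.var()
 
# Otsu's threshold is the interior minimum of the intraclass variance
variances = [intraclass_variance(gray, t) for t in range(1, 256)]
best_t = int(np.nanargmin(variances)) + 1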

 

This leads to an image like the following (shown here in colour; in our pipeline we convert it to greyscale):

The following section presents the code for our contour detector, which applies Otsu’s thresholding.


 

import cv2
import numpy as np
 
def drawBasicGrid(img, pxstep, colour):
    """
    adds horizontal and vertical lines on image input to mimic hemocytometer gridlines
    :param      img: 3d matrix of image
    :param      pxstep: pixel distance between gridlines
    :param      colour: colour of lines in RGB
    """
    x = pxstep 
    y = pxstep 
    #Draw all x lines
    while x < img.shape[1]:
        cv2.line(img, (x, 0), (x, img.shape[0]), color=colour, thickness=5)
        x += pxstep 
 
    # Draw all y lines
    while y < img.shape[0]:
        cv2.line(img, (0, y), (img.shape[1], y), color=colour,thickness=5)
        y += pxstep 
 
 
def contourdetector(image_path, mm_distance = 592, max_area_cells = 1500):
  """
  preprocesses image, performs Otsu's thresholding and connected components detection to derive cell count by hemocytometer method
  excludes cells on outermost bottom and right grid lines
  :param      image_path:  The image file path to jpg
  :param      mm_distance: number of pixels per mm
  :param      max_area_cells: maximum enclosed area that contour detector will accept for it to count as a cell
  returns an integer as the cell count of the image and gridded image with detected contours
  """
  image = cv2.imread(image_path)
  image = cv2.bitwise_not(image) 
  height, width, channels = image.shape
 
  y=200
  x=500
 
  image = image[y:y+mm_distance, x:x+mm_distance] # use numpy slicing to execute the crop
  kernel = np.ones((2,2),np.uint8)
  sure_bg = cv2.erode(image,kernel,iterations = 1) # apply erosion filter
 
  sure_bg = cv2.copyMakeBorder(sure_bg, 0, 5, 0, 5, cv2.BORDER_CONSTANT, value=(52, 52, 52)) # add bottom and right outermost gridline to ignore cells on these lines
 
  gray = cv2.cvtColor(sure_bg, cv2.COLOR_BGR2GRAY) # convert to grayscale for Otsu's thresholding
  ret, thresh = cv2.threshold(
      gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU) # apply otsu threshold
  kernel2 = np.ones((1,1),np.uint8)
  thresh = cv2.erode(thresh,kernel2,iterations = 1) # apply a gentler erosion to smooth the cell walls
 
  cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  cnts = [cnts[i] for i in range(len(cnts)) if hierarchy[0][i][2] == -1] # count contours with no child contours only
 
  white_dots = [] # used contours as blob detection is more suited for detecting black or grey blobs
 
  for c in cnts:
      area = cv2.contourArea(c)
      if max_area_cells > area :
          cv2.drawContours(image, [c], -1, (36, 255, 12), 2)
          white_dots.append(c)
 
  drawBasicGrid(image, 148, (52, 52, 52)) # draw vertical and horizontal lines
  image = cv2.copyMakeBorder(image, 7, 5, 7, 5, cv2.BORDER_CONSTANT, value=(52, 52, 52)) # draw border lines
 
  return len(white_dots), image # returns cell count and image with drawn contours
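
A quick usage sketch (the file path is a placeholder, and the imports mirror the earlier snippets):

import cv2
import matplotlib.pyplot as plt
 
count, annotated = contourdetector('sample_hemocytometer.jpg') # hypothetical input path
print('Cell count:', count)
plt.imshow(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB)) # OpenCV images are BGR; convert for matplotlib
plt.show()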

Instead of excessively restricting the function’s parameters to define the area, shape, and colour of the expected cell, Otsu’s thresholding binarises the image and separates it into foreground and background, with the detected cells in the foreground. This helps the approach generalise even to cells with anomalous area, shape, or colour.

Subsequently, other image processing functions such as dilation and erosion are applied. Erosion and dilation are morphological processing operations to shrink and expand foreground respectively [2][3].

Firstly, images are represented as binary matrices with values of either 0 or 1. The structuring element, also known as the kernel, is another matrix consisting of only 0’s and 1’s, of an arbitrary shape and size. Pixels with a value of 1 define the neighbourhood, and the centre pixel of the structuring element, called the origin, identifies the pixel of interest, i.e. the pixel being processed.

 
 Erosion with Different Kernel Sizes [3]

Erosion shrinks the size of objects in the foreground and dilates the background, smoothing object boundaries and removing small anomalies. The greater the kernel size (always odd), the greater the extent of erosion, as pictured. A local minimum is computed over the area of the given kernel [4]. A pixel in the original image remains 1 only if every pixel of value 1 in the kernel lands on a 1 in its neighbourhood; otherwise, the pixel is eroded to 0.

 Dilation with Different Kernel Sizes [3]

Dilation expands the size of foreground objects, smoothing object boundaries, closing holes, and filling broken areas [3]. The greater the kernel size (always odd), the greater the extent of dilation, as pictured. The maximal pixel value overlapped by the kernel is computed, and the image pixel at the anchor position (the kernel’s centre) is replaced with that maximal value [4]. A pixel in the original image is converted to 1 as long as at least one of its neighbourhood pixels matches a pixel of value 1 in the kernel. Hence, many pixels of value 0 are converted to 1, ‘blowing up’ the object.
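
As a toy illustration of both operations (a 5 × 5 binary matrix, not one of our images):

import cv2
import numpy as np
 
# toy 5x5 binary image: a 3x3 foreground square in the centre
img = np.zeros((5, 5), np.uint8)
img[1:4, 1:4] = 1
 
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(img, kernel)   # the square shrinks to its single centre pixel
dilated = cv2.dilate(img, kernel) # the square grows to cover the whole matrix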

After all our preprocessing steps, we use the findContours function from OpenCV to achieve our cell counting.

Validating Performance with a Synthetic Dataset

 V4.1 Otsu’s Thresholding Performance on V1 Synthetic Dataset
 V4.1 Otsu’s Thresholding Performance on another image from the V1 Synthetic Dataset

We designed a program to generate synthetic images in which template cell images are inserted into a background. This contour detection solution was tested on images generated by version 1 of the synthetic dataset generator. More details about the development of the synthetic dataset generation can be found here. Performance was good, but the synthetic dataset is not an accurate representation of webcam images, where cells can be unfocused and lighting can be uneven. We sought to improve the performance of V4.1 in these areas.

—————————————————————————

V3.3: Repeated Otsu’s Thresholding and Connected Component Detection

Identified Issues with Previous Prototype (Critique):

  • Misses very small and faded cells, as the threshold is dominated by the larger and more obvious cells

Modifications Made (Redesign):

  • Do repeated rounds of Otsu’s thresholding, colouring the cells detected in previous rounds with the background RGB values
  • Update the minimum and maximum areas used for contour finding to avoid double counting

Essentially, detected cells are first coloured in, and another round of thresholding is run on the resulting image so that smaller cells can be detected. This method works: small cells were detected noticeably better in the second round of Otsu’s thresholding.

 Detected Cells Filled with Grey Paint to Blend into Background for Second Otsu’s Thresholding

 

 Improved Performance with Repeated Otsu’s Thresholding (green circles: second round of Otsu’s thresholding; yellow circles: first round)

import matplotlib.pyplot as plt
import cv2
import datetime
import numpy as np
 
def drawBasicGrid(img, pxstep, colour):
    """
    adds horizontal and vertical lines on image input to mimic hemocytometer gridlines
    :param      img: 3d matrix of image
    :param      pxstep: pixel distance between gridlines
    :param      colour: colour of lines in RGB
    """
    x = pxstep 
    y = pxstep 
    #Draw all x lines
    while x < img.shape[1]:
        cv2.line(img, (x, 0), (x, img.shape[0]), color=colour, thickness=5)
        x += pxstep 
 
    # Draw all y lines
    while y < img.shape[0]:
        cv2.line(img, (0, y), (img.shape[1], y), color=colour,thickness=5)
        y += pxstep 
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = colour, thickness = 5) # bottom 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = colour, thickness = 5) # right
    cv2.line(img, (0,img.shape[0]), (0,0), color = colour, thickness = 5) # left
    cv2.line(img, (0,0), (img.shape[1],0), color = colour, thickness = 5) # top
 
def drawBottomRightLines(img):
    # add bottom and right outermost gridline to ignore cells on these lines
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = (52,52,52), thickness = 5) 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = (52,52,52), thickness = 5)
 
def thresholdingPreprocessing(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert to grayscale for Otsu's thresholding
    ret, thresh = cv2.threshold(
      gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU) # apply otsu threshold
    kernel = np.ones((1,1),np.uint8)
    thresh = cv2.erode(thresh,kernel,iterations = 1) # apply erosion again to smooth cells so the cell wall is smooth
    return thresh
 
 
def contourdetector(image_path, mm_distance = 592, max_area_cells = 1500):
  """
  preprocesses image, performs Otsu's thresholding and connected components detection to derive cell count by hemocytometer method
  excludes cells on outermost bottom and right grid lines
  :param      image_path:  The image file path to jpg
  :param      mm_distance: number of pixels per mm
  :param      max_area_cells: maximum enclosed area that contour detector will accept for it to count as a cell
  returns an integer as the cell count of the image and gridded image with detected contours
  """
  image = cv2.imread(image_path)
  image = cv2.bitwise_not(image) 
 
  y=200
  x=500
 
  image = image[y:y+mm_distance, x:x+mm_distance] # use numpy slicing to execute the crop
  kernel = np.ones((2,2),np.uint8)
  sure_bg = cv2.erode(image,kernel,iterations = 1) # apply erosion filter
 
  ############### Contour Detection for Large Cells ###############
 
  drawBottomRightLines(sure_bg)
 
  thresh = thresholdingPreprocessing(sure_bg)
 
  cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  cnts = [cnts[i] for i in range(len(cnts)) if hierarchy[0][i][2] == -1] # count contours with no child contours only
 
  ############### Contour Detection for Small Cells ###############
 
  # exclude large cells
  exclude_large_cells = image.copy()
  for c in cnts:
    area = cv2.contourArea(c)
    if 1500 > area > 90:
        cv2.drawContours(exclude_large_cells, [c], -1, (110, 110, 110), cv2.FILLED) # isolate small cells by removing large cells
        cv2.drawContours(exclude_large_cells, [c], -1, (110,110,110), 6) # draw border
 
  # contour detection
  thresh_small = thresholdingPreprocessing(exclude_large_cells)
 
  cnt_small, hierarchy_small = cv2.findContours(thresh_small, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  cnt_small = [cnt_small[i] for i in range(len(cnt_small)) if hierarchy_small[0][i][2] == -1] 
 
  ############### Contour Drawing for All Cells #######################
  white_dots = [] # used contours as blob detection is more suited for detecting black or grey blobs
 
  for c in cnt_small: 
    area = cv2.contourArea(c)
    if 90 > area > 0:
        cv2.drawContours(image, [c], -1, (36, 255, 12), 2)
        white_dots.append(c)
 
  for c in cnts:
      area = cv2.contourArea(c)
      if max_area_cells > area > 90 :
          cv2.drawContours(image, [c], -1, (36, 255, 12), 2)
          white_dots.append(c)
 
  drawBasicGrid(image, 148, (52, 52, 52)) # draw vertical and horizontal lines
 
  output_path = "app/static/capture/{}.jpg".format(datetime.datetime.now(), "%Y%m%d-%H%M%S")
 
  cv2.imwrite(output_path,image)
 
  return len(white_dots), output_path # returns cell count and image with drawn contours

 

Validating Performance with a Synthetic Dataset

We tested this solution on the improved version of the synthetic dataset, which now mimics unfocused cells by applying blur filters. Performance was again good, similar to V4.1 Otsu’s thresholding, even though real webcam images remained harder; this likely means that our synthetic dataset is still not representative enough of real webcam images.

—————————————————————————

V3.4: Static Thresholding and Edge Detection

 

Identified Issues with Previous Prototype (Critique):

  • Did not work well with unfocused images
  • Thresholding performance is not very consistent when the same cells are imaged with minor lighting differences, resulting in variation in cell count

Modifications Made (Redesign):

  • Added a sharpening kernel
  • Added unsharp masking, which subtracts a blurred copy of the image from the original to emphasise edges and sharpen the image
  • Added a look-up table to remap intensity values
  • Changed from Otsu’s thresholding to static thresholding, using a fixed threshold value of around 60

Static thresholding compares every pixel to a fixed threshold value. If the pixel value is lower than the threshold, it is set to 0; otherwise, it is set to a maximum value. Several threshold types are available; we used binary_inv, which darkens the cells, lightens the background, and makes the cells more obvious for the subsequent contour detection. A threshold value of 55-60 was determined to be most suitable after much tweaking.
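
A minimal sketch of the core call (on a toy greyscale patch rather than a real image):

import cv2
import numpy as np
 
gray = np.array([[30, 100], [59, 200]], np.uint8) # toy greyscale patch
# THRESH_BINARY_INV: pixels above the threshold (60) become 0, the rest become 255
ret, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
# thresh is now [[255, 0], [255, 0]]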

 V4.2 Repeated Otsu’s Thresholding Performance on Same Image Sample

 

 V4.3 Repeated Static Thresholding Performance on Same Image


import matplotlib.pyplot as plt
import cv2
import datetime
import numpy as np
 
def drawBasicGrid(img, pxstep, colour):
    """
    adds horizontal and vertical lines on image input to mimic hemocytometer gridlines
    :param      img: 3d matrix of image
    :param      pxstep: pixel distance between gridlines
    :param      colour: colour of lines in RGB
    """
    x = pxstep 
    y = pxstep 
    #Draw all x lines
    while x < img.shape[1]:
        cv2.line(img, (x, 0), (x, img.shape[0]), color=colour, thickness=5)
        x += pxstep 
 
    # Draw all y lines
    while y < img.shape[0]:
        cv2.line(img, (0, y), (img.shape[1], y), color=colour,thickness=5)
        y += pxstep 
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = colour, thickness = 5) # bottom 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = colour, thickness = 5) # right
    cv2.line(img, (0,img.shape[0]), (0,0), color = colour, thickness = 5) # left
    cv2.line(img, (0,0), (img.shape[1],0), color = colour, thickness = 5) # top
 
def drawBottomRightLines(img):
    # add bottom and right outermost gridline to ignore cells on these lines
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = (52,52,52), thickness = 5) 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = (52,52,52), thickness = 5)
 
def lookup_curve(img, i = 0):
	"""
	sharpens the image, remaps intensities through a look-up table, then thresholds it
	:param      img: 3d matrix of image
	:param      i:   static threshold value; 0 falls back to Otsu's thresholding
	"""
	kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]]) # sharpening kernel
	img = cv2.filter2D(img, -1, kernel)
	thresh = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 
	lut_in = np.array([0, 93, 126, 255]) 
	lut_out = np.array([255, 73, 0, 0])
 
        # look up table to remap pixel values
	lut_8u = np.interp(np.arange(0,256), lut_in, lut_out).astype(np.uint8)
 
	thresh = cv2.LUT(thresh, lut_8u)
	if i == 0:
		ret, thresh = cv2.threshold(thresh, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
	else:
		ret, thresh = cv2.threshold(thresh, i, 255, cv2.THRESH_BINARY_INV)
	kernel = np.ones((1,1),np.uint8)
	thresh = cv2.erode(thresh, kernel,iterations = 1) # apply erosion again to smooth cells so the cell wall is smooth
 
	thresh = cv2.bitwise_not(thresh)
	return thresh
 
def unsharp_mask(image, kernel_size=(3, 3), sigma=1.0, amount=0.5, threshold=0):
	"""Return a sharpened version of the image, using an unsharp mask."""
	blurred = cv2.GaussianBlur(image, kernel_size, sigma)
	sharpened = float(amount + 1) * image - float(amount) * blurred
	sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
	sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
	sharpened = sharpened.round().astype(np.uint8)
	if threshold > 0:
		# cast to a signed type before subtracting to avoid uint8 wrap-around
		low_contrast_mask = np.absolute(image.astype(np.int16) - blurred.astype(np.int16)) < threshold
		np.copyto(sharpened, image, where=low_contrast_mask)
	return sharpened
 
def static_contour(image_path, mm_distance = 592, max_area_cells = 1500, debug = False, static = 0):
	"""
	preprocesses image, performs static thresholding and connected components detection to derive cell count by hemocytometer method
	excludes cells on outermost bottom and right grid lines
	:param      image_path:  The image file path to jpg
	:param      mm_distance: number of pixels per mm
	:param      max_area_cells: maximum enclosed area that contour detector will accept for it to count as a cell
	returns an integer as the cell count of the image and gridded image with detected contours
	"""
	image = cv2.imread(image_path)
	image = cv2.bitwise_not(image) 
	original = image.copy()
	h, w, _ = image.shape
 
	y=int(image.shape[0]/2 - mm_distance/2)
	x=int(image.shape[1]/2 - mm_distance/2)
 
	image = image[y:y+mm_distance, x:x+mm_distance] # use numpy slicing to execute the crop
	kernel = np.ones((2,2),np.uint8)
	sure_bg = cv2.erode(image,kernel,iterations = 1) # apply erosion filter
 
 
 
	############### Contour Detection for Large Cells ###############
 
	drawBottomRightLines(sure_bg)
 
	thresh = lookup_curve(sure_bg, static)	
 
	cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
	#print(cnts)
	cnts = [cnts[i] for i in range(len(cnts)) if hierarchy[0][i][2] == -1] # count contours with no child contours only
 
	############### Contour Detection for Small Cells ###############
 
	# exclude large cells
	exclude_large_cells = image.copy()
	for c in cnts:
		area = cv2.contourArea(c)
		if 1500 > area > 90:
			cv2.drawContours(exclude_large_cells, [c], -1, (110, 110, 110), cv2.FILLED) # isolate small cells by removing large cells
			cv2.drawContours(exclude_large_cells, [c], -1, (110,110,110), 6) # draw border
 
	# contour detection
	thresh_small = lookup_curve(exclude_large_cells, static)
 
	cnt_small, hierarchy_small = cv2.findContours(thresh_small, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
	#print(cnt_small)
	cnt_small = [cnt_small[i] for i in range(len(cnt_small)) if hierarchy_small[0][i][2] == -1] 
 
	############### Contour Drawing for All Cells #######################
	white_dots = [] # used contours as blob detection is more suited for detecting black or grey blobs
 
	for c in cnt_small: 
		area = cv2.contourArea(c)
		if 90 > area > 0:
			cv2.drawContours(image, [c], -1, (36, 255, 12), 2)
			white_dots.append(c)
 
	for c in cnts:
		area = cv2.contourArea(c)
		if max_area_cells > area > 90 :
			cv2.drawContours(image, [c], -1, (200, 255, 12), 2)
			white_dots.append(c)
 
	drawBasicGrid(image, 148, (52, 52, 52)) # draw vertical and horizontal lines
 
	#output_path = "app/static/capture/{}.jpg".format(datetime.datetime.now(), "%Y%m%d-%H%M%S")
 
	if debug:
		f, axarr = plt.subplots(nrows=2,ncols=2)
		plt.sca(axarr[0, 0]); 
		plt.imshow(original, cmap = "binary"); plt.title('Original')
		plt.sca(axarr[0, 1]); 
		plt.imshow(sure_bg, cmap = "binary"); plt.title('Image Sharpen and Erosion')
		plt.sca(axarr[1 ,0]); 
		plt.imshow(thresh, cmap = "binary"); plt.title('Thresholding')
		plt.sca(axarr[1, 1]); 
		plt.imshow(image, cmap = "binary"); plt.title(f'Detected Contours with Cell Count:{len(white_dots)}')
		#plt.savefig('/content/gdrive/MyDrive/CY2003_MnT/blobs_n_contours/preprocessed_contours_without_sharpening_webcam.png',bbox_inches = 'tight')
		fig = plt.gcf()       
		fig.set_size_inches(8,6)
		fig.set_dpi(150)
		plt.show()
 
	return len(white_dots), image # returns cell count and image with drawn contours

Validating Performance with a Synthetic Dataset

 V4.3 Static Thresholding Performance on V2 of Synthetic Dataset
 V4.3 Static Thresholding Performance on V2 of Synthetic Dataset
 V4.3 Static Thresholding Performance on V2 of Synthetic Dataset

The accuracy for V4.3 static thresholding, when tested on version 2 of the synthetic dataset generated, was good, with cells at the corners also being picked up.

—————————————————————————

V3.5: Adaptive Gaussian Thresholding and Connected Component Detection

 

Identified Issues with Previous Prototype (Critique):

  • For webcam images, static thresholding sometimes gives false positives
  • Double layering occurs, as static thresholding picks up the filled-in cells again in the second round of thresholding

Modifications Made (Redesign):

  • Removed repeated thresholding and stuck to one round
  • Replaced static thresholding with adaptive Gaussian thresholding
  • Removed the look-up table remapping of pixel intensity values

 Issue of Double Layered Cells across Two Rounds of Static Thresholding (V4.3)

 

 V4.4 Adaptive Gaussian Thresholding Performance on Same Webcam Image

While static thresholding performed well on the synthetic dataset, adaptive thresholding proved more robust for webcam images, which are subject to uneven lighting and unfocused cells. It is also able to segment cell clusters reasonably well instead of counting each cluster as one large cell.

Adaptive thresholding, specifically adaptive Gaussian thresholding, obtains its threshold value as the weighted sum of neighbourhood values, where the weights are a Gaussian window [6]. The threshold for each pixel is the weighted average of the pixels in a defined neighbourhood. The size of the region around the pixel to consider is defined by the blockSize parameter. A constant C can also be defined, which is subtracted from the mean or weighted sum of the neighbourhood pixels for adjustment [7]. In our case, we used a blockSize of 11 and a constant C of 2, as this yielded the best results after some tweaking.

Thresholding is done by comparing each original pixel value with its local average: pixels with a relatively high value are considered foreground, and pixels with a relatively low value are considered background [7]. Using a local average handles uneven lighting, since the method becomes independent of illumination differences across the image: a different, more appropriate threshold value is obtained for each region of the image, as opposed to the single global threshold value used in static thresholding [5][7].
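
To make the contrast concrete, a minimal side-by-side sketch (the file path is a placeholder; parameter values as in our pipeline):

import cv2
 
gray = cv2.imread('sample.jpg', cv2.IMREAD_GRAYSCALE) # hypothetical input path
 
# global: one fixed threshold for the entire image
ret, global_th = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
 
# adaptive: per-pixel threshold from a Gaussian-weighted 11x11 neighbourhood, offset by C = 2
adaptive_th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY, 11, 2)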

import cv2
import datetime
import numpy as np
 
 
def drawBasicGrid(img, pxstep, colour):
    """
    adds horizontal and vertical lines on image input to mimic hemocytometer gridlines
    :param      img: 3d matrix of image
    :param      pxstep: pixel distance between gridlines
    :param      colour: colour of lines in RGB
    """
    x = pxstep 
    y = pxstep 
    #Draw all x lines
    while x < img.shape[1]:
        cv2.line(img, (x, 0), (x, img.shape[0]), color=colour, thickness=5)
        x += pxstep 
 
    # Draw all y lines
    while y < img.shape[0]:
        cv2.line(img, (0, y), (img.shape[1], y), color=colour,thickness=5)
        y += pxstep 
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = colour, thickness = 5) # bottom 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = colour, thickness = 5) # right
    cv2.line(img, (0,img.shape[0]), (0,0), color = colour, thickness = 5) # left
    cv2.line(img, (0,0), (img.shape[1],0), color = colour, thickness = 5) # top
 
def drawBottomRightLines(img):
    # add bottom and right outermost gridline to ignore cells on these lines
    cv2.line(img, (0,img.shape[0]), (img.shape[1],img.shape[0]), color = (52,52,52), thickness = 5) 
    cv2.line(img, (img.shape[1],img.shape[0]), (img.shape[1],0), color = (52,52,52), thickness = 5)
 
def unsharp_mask(image, kernel_size=(3, 3), sigma=1.0, amount=0.5, threshold=0):
	"""Return a sharpened version of the image, using an unsharp mask."""
	blurred = cv2.GaussianBlur(image, kernel_size, sigma)
	sharpened = float(amount + 1) * image - float(amount) * blurred
	sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
	sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
	sharpened = sharpened.round().astype(np.uint8)
	if threshold > 0:
		# cast to a signed type before subtracting to avoid uint8 wrap-around
		low_contrast_mask = np.absolute(image.astype(np.int16) - blurred.astype(np.int16)) < threshold
		np.copyto(sharpened, image, where=low_contrast_mask)
	return sharpened
 
def adaptive_contourdetector(image_path, mm_distance = 592, max_area_cells = 800):
  """
  preprocesses image, performs adaptive Gaussian thresholding and connected components detection to derive cell count by hemocytometer method
  excludes cells on outermost bottom and right grid lines
  :param      image_path:  The image file path to jpg
  :param      mm_distance: number of pixels per mm
  :param      max_area_cells: maximum enclosed area that contour detector will accept for it to count as a cell
  returns an integer as the cell count of the image and gridded image with detected contours
  """
  image = cv2.imread(image_path)
  image = unsharp_mask(image)
  image = cv2.bitwise_not(image) 
 
  y=200
  x=500
 
  image = image[y:y+mm_distance, x:x+mm_distance] # use numpy slicing to execute the crop
  kernel = np.ones((2,2),np.uint8)
  sure_bg = cv2.erode(image,kernel,iterations = 1) # apply erosion filter
 
  drawBottomRightLines(sure_bg)
 
  gray = cv2.cvtColor(sure_bg, cv2.COLOR_BGR2GRAY) # convert to grayscale for thresholding
  thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,\
            cv2.THRESH_BINARY,11,2) # adaptive thresholding
  kernel2 = np.ones((2,2),np.uint8)
  thresh = cv2.dilate(thresh,kernel2,iterations = 1) # apply dilation to avoid disconnected contours
 
  cnts, hierarchy = cv2.findContours(255-thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  cnts = [cnts[i] for i in range(len(cnts)) if hierarchy[0][i][2] == -1] # count contours with no child contours only
 
  white_dots = [] 
 
  for c in cnts:
      area = cv2.contourArea(c)
      if max_area_cells > area > 0:
          cv2.drawContours(image, [c], -1, (255, 255, 12), 2)
          white_dots.append(c)
 
 
  drawBasicGrid(image, 148, (52, 52, 52)) # draw vertical and horizontal lines
 
  output_path = "app/static/capture/{}.jpg".format(datetime.datetime.now(), "%Y%m%d-%H%M%S")
 
  cv2.imwrite(output_path,image)
 
  return len(white_dots), output_path # returns cell count and image with drawn contours

Validating Performance with a Synthetic Dataset

 V4.4 Adaptive Thresholding Performance with V2 Synthetic Dataset
 V4.4 Adaptive Thresholding Performance with V2 Synthetic Dataset
 V4.4 Adaptive Thresholding Performance with V2 Synthetic Dataset

Overcounting occurs for adaptive thresholding on the synthetic dataset, which points to a very sensitive thresholding method. However, we decided to stick with adaptive thresholding, as it still performs best on webcam images and is the most robust to uneven lighting among the methods we tried.


References

[1] “OpenCV: cv::SimpleBlobDetector Class Reference,” Opencv.org. [Online]. Available: https://docs.opencv.org/4.5.2/d0/d7a/classcv_1_1SimpleBlobDetector.html. [Accessed: 11-Jul-2021].

[2] “Morphological Image Processing,” Auckland.ac.nz. [Online]. Available: https://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/ImageProcessing-html/topic4.htm. [Accessed: 27-Jul-2021].

[3] B. Girod and G. Wetzstein, “Morphological Image Processing 1,” Stanford.edu, 2013. [Online]. Available: https://web.stanford.edu/class/ee368/Handouts/Lectures/2016_Autumn/7-Morphological_16x9.pdf. [Accessed: 27-Jul-2021].

[4] “OpenCV: Eroding and Dilating,” Opencv.org. [Online]. Available: https://docs.opencv.org/3.4/db/df6/tutorial_erosion_dilatation.html. [Accessed: 27-Jul-2021].

[5] “How does an adaptive Gaussian threshold filter work?,” Stackoverflow.com. [Online]. Available: https://stackoverflow.com/a/54054190. [Accessed: 28-Jul-2021].

[6] C. A. Montes, “Practical Computer Vision: Theory & Applications,” Bcamath.org, 2015. [Online]. Available: http://www.bcamath.org/documentos_public/courses/course_day2.pdf. [Accessed: 28-Jul-2021].

[7] “OpenCV: Image Thresholding,” Opencv.org. [Online]. Available: https://docs.opencv.org/4.5.2/d7/d4d/tutorial_py_thresholding.html. [Accessed: 28-Jul-2021].
