The Algorithms subsystem is responsible for deciding every move the robot makes as it traverses the unknown arena and for plotting the arena with 100% accuracy. It is also responsible for calculating the fastest path from the start point to the goal point once the exploration phase is complete. The algorithms are written in a mixture of Java and Kotlin.
The subsystem comprises an arena display user interface that visualises the arena state as the robot traverses the arena, and a virtual robot that is kept in sync with the actual robot's position.
Wi-Fi connection to the Raspberry Pi subsystem
Similar references: Android#Bluetooth connection to the Raspberry Pi subsystem
Communication with the other subsystems is achieved via a Wi-Fi connection to the Raspberry Pi subsystem, using a Wi-Fi socket managed by a static Wi-Fi Controller class. A connection may be initiated via the user interface; the controller then attempts to establish the connection by connecting the socket to the target address and port of the RPi.
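A minimal sketch of how such a controller could look is shown below, assuming a plain blocking `java.net.Socket`; the object name, address, port and timeout values are placeholders rather than the actual implementation.

```kotlin
import java.net.InetSocketAddress
import java.net.Socket

// Illustrative sketch only; names and constants are assumptions.
object WifiController {
    private const val RPI_HOST = "192.168.1.1"   // hypothetical RPi address
    private const val RPI_PORT = 5000            // hypothetical port
    private const val TIMEOUT_MS = 5000

    var socket: Socket? = null
        private set

    // Attempt to connect the socket to the RPi's address and port.
    fun connect(): Boolean = try {
        socket = Socket().apply {
            connect(InetSocketAddress(RPI_HOST, RPI_PORT), TIMEOUT_MS)
        }
        true
    } catch (e: Exception) {
        socket = null
        false
    }

    fun disconnect() {
        socket?.close()
        socket = null
    }
}
```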
Receiving messages from various subsystems
The controller manages a read thread that, whenever a message is received via the socket, triggers a callback on whichever class object is currently active. The callback is implemented using Kotlin higher-order functions, and message parsing is handled by the receiving class objects.
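The sketch below illustrates the idea, assuming newline-delimited messages; the class and property names are illustrative and the actual framing may differ.

```kotlin
import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.Socket
import kotlin.concurrent.thread

// Sketch: a read thread that forwards each received line to whichever
// listener is currently registered. Line-based framing is an assumption.
class ReadThread(private val socket: Socket) {
    // The active class object assigns its own handler here (higher-order function).
    var onMessage: ((String) -> Unit)? = null

    fun start() = thread(isDaemon = true) {
        val reader = BufferedReader(InputStreamReader(socket.getInputStream()))
        while (true) {
            val line = reader.readLine() ?: break   // null = connection closed
            onMessage?.invoke(line)                  // parsing is left to the listener
        }
    }
}
```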
Sending messages to various subsystems
Messages may be sent to any of the other subsystems via the Wi-Fi Controller. Each message carries a prefix denoting its destination subsystem.
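A hedged example of what sending could look like; the prefix characters used here are made up for illustration and are not the project's actual protocol.

```kotlin
import java.io.PrintWriter
import java.net.Socket

// Sketch only: the target labels below are placeholders for the real prefixes.
fun sendMessage(socket: Socket, target: Char, payload: String) {
    // e.g. target = 'A' (Arduino), 'D' (Android), 'R' (RPi) -- assumed labels
    val writer = PrintWriter(socket.getOutputStream(), true)
    writer.println("$target$payload")
}
```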
Exploration
The arena traversal process is as follows:
1. Algorithm starts at the start point set via the Android subsystem (see Android#Interactive display).
2. Algorithm issues a move command based on the current arena state and the virtual robot's position (see Wall hugging approach).
3. Algorithm waits for a response from the Arduino subsystem with sensor data (see Arduino#Obstacle location detection).
4. Algorithm moves the virtual robot according to the Arduino response (see Synchronising movement with the Arduino subsystem).
5. Algorithm plots obstacles using the sensor data relative to the virtual robot's position (see Plotting of obstacles, or lack thereof).
6. Repeat from step 2 until the virtual robot is back at the start zone or a 'TERMINATE' command is received from the Android subsystem.
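Compressed into code, the loop could look like the sketch below; every lambda is a placeholder for behaviour described elsewhere on this page, not an actual function in the codebase.

```kotlin
// Compressed sketch of the traversal loop; all lambdas are placeholders.
fun explorationLoop(
    issueMove: () -> Unit,                 // step 2: wall-hugging decision + send command
    awaitSensorData: () -> List<Int>,      // step 3: blocking wait for the Arduino response
    moveVirtualRobot: () -> Unit,          // step 4: mirror the move on the virtual robot
    plotObstacles: (List<Int>) -> Unit,    // step 5: update the arena state
    shouldStop: () -> Boolean              // step 6: back at start zone or 'TERMINATE' received
) {
    while (!shouldStop()) {
        issueMove()
        val sensorData = awaitSensorData()
        moveVirtualRobot()
        plotObstacles(sensorData)
    }
}
```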
Wall hugging approach
A left-wall-hugging approach is used to navigate the arena. As the robot's long-range sensor is capable of detecting all obstacles in the middle of the arena (see Sensors), wall hugging was chosen because it guarantees that the robot enters the goal zone and returns to the start zone after one round of hugging.
The algorithm for left-wall hugging is as follows:
- If previous move is not left-turn and left is not obstructed*, turn left.
- If front is not obstructed, move forward.
- Turn right.
* Not obstructed = all three grids to the left of the robot are free of obstacles.
If any statement evaluates to true, the corresponding move or turn command is sent to the Arduino subsystem and the evaluation stops. Once a response is received from the Arduino and the arena state is updated (see Plotting of obstacles), the algorithm restarts from the first statement for the next move (see Synchronising with the Arduino subsystem).
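The decision itself is small enough to sketch directly; `leftClear` and `frontClear` stand in for checks against the arena state and are not actual names from the codebase.

```kotlin
// Sketch of the left-wall-hugging decision. `leftClear` means all three grids
// to the robot's left are free of obstacles, `frontClear` likewise for the
// front; both are derived from the arena state (placeholders here).
enum class Move { TURN_LEFT, FORWARD, TURN_RIGHT }

fun nextMove(previousMove: Move?, leftClear: Boolean, frontClear: Boolean): Move = when {
    previousMove != Move.TURN_LEFT && leftClear -> Move.TURN_LEFT
    frontClear -> Move.FORWARD
    else -> Move.TURN_RIGHT
}
```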
Example
Plotting of obstacles, or lack thereof
For an introduction on how grid distances are calculated, see Arduino#Obstacle location detection.
The coordinates of potential obstacles are calculated using the sensors' coordinates and the sensor data received from the Arduino subsystem. A score is then applied to the grid corresponding to the calculated coordinates. Grids with a score above 0 indicate the presence of an obstacle, while grids with a score of zero or below indicate the absence of one.
Sensor coordinates
The coordinates of each sensor are calculated using the virtual robot’s current position and heading. For example, using the following information:
- Robot position: (5, 3)
- Robot heading: 90 [facing right, if (1, 1) is at the bottom left of the arena]
sensor 3's position works out to (6, 4), as the sensor sits at the front-left of the robot when viewed from the top down. Sensor coordinates for the other sensors are derived similarly.
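One way to express the calculation is shown below, following the heading convention in the example (0 = up, 90 = right, with (1, 1) at the bottom left); the per-sensor offsets and all names are assumptions for illustration.

```kotlin
// Sketch of the sensor coordinate calculation.
data class Pos(val x: Int, val y: Int)

fun sensorPosition(robot: Pos, heading: Int, forwardOffset: Int, leftOffset: Int): Pos {
    // Unit vector of the robot's forward direction for each heading.
    val (fx, fy) = when (heading) {
        0 -> 0 to 1      // facing up
        90 -> 1 to 0     // facing right
        180 -> 0 to -1   // facing down
        else -> -1 to 0  // 270, facing left
    }
    val (lx, ly) = -fy to fx  // left = forward rotated 90 degrees anticlockwise
    return Pos(
        robot.x + forwardOffset * fx + leftOffset * lx,
        robot.y + forwardOffset * fy + leftOffset * ly
    )
}

fun main() {
    // Sensor 3 is assumed to sit at the front-left corner (forward +1, left +1).
    println(sensorPosition(Pos(5, 3), 90, forwardOffset = 1, leftOffset = 1))  // Pos(x=6, y=4)
}
```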
Potential obstacle coordinates calculation
Using the following information:
- Robot position: (5, 3)
- Robot heading: 90
- Sensor data: 1, 2, 6, 6, 6, 6 (sensors 1, 2, 3, 4, 5, 6)
Sensors 1 to 5
For sensors 1 to 5 (short-range, front and left of the robot), the potential obstacle coordinates are calculated by offsetting one axis of each sensor's coordinates by 1. The offset axis depends on the sensor number and the robot heading, for example:
- the X-axis for sensor 3 is offset by 1 as the potential obstacle should be in front of the robot, giving the obstacle coordinates as (7, 4)
- the Y-axis for sensor 4 is offset by 1 as the potential obstacle should be on the left, giving the obstacle coordinates as (6, 5)
The offset is always 1 regardless of the sensor data received; the sensor data is instead used to determine the score applied to each calculated grid. This minimises the chance of mis-plots, as any obstacle to the front or left of the robot will eventually be plotted at minimal range thanks to the left-wall-hugging approach.
Sensor 6
The coordinates are calculated similarly, except that the offset equals the sensor reading, capped at 5. Sensor readings above 5 instead affect the score applied to the calculated grid.
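A hedged sketch of the offset rule for both sensor types follows; the sensor positions and facing directions in the usage example are inferred from the example above, and all names are placeholders.

```kotlin
// Sketch of the potential-obstacle offset for short- and long-range sensors.
data class Grid(val x: Int, val y: Int)

fun potentialObstacle(sensor: Grid, facingX: Int, facingY: Int, longRange: Boolean, reading: Int): Grid {
    val offset = if (longRange) minOf(reading, 5) else 1  // sensors 1-5 always offset by 1
    return Grid(sensor.x + facingX * offset, sensor.y + facingY * offset)
}

fun main() {
    // Sensor 3 at (6, 4) facing the robot's front (+x at heading 90):
    println(potentialObstacle(Grid(6, 4), facingX = 1, facingY = 0, longRange = false, reading = 6))  // Grid(x=7, y=4)
    // Sensor 4 at (6, 4) facing the robot's left (+y at heading 90):
    println(potentialObstacle(Grid(6, 4), facingX = 0, facingY = 1, longRange = false, reading = 6))  // Grid(x=6, y=5)
}
```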
Grid score
A grid score of -1000 is applied to the grids touched by the virtual robot (its 3-by-3 footprint), as these grids are guaranteed to be obstacle-free, provided there is no de-sync between the virtual robot's and the actual robot's positions.
For sensors 1 to 5, a grid score of +3 is applied if the corresponding sensor reading is 1, and -3 if the reading is above 1.
For sensor 6, a grid score of +1 is applied if the sensor reading is 5 or below, and -1 if the reading is above 5.
The weightage for the short-range sensors is higher as they are more reliable at close proximity. A grid's score may be modified multiple times throughout the exploration. Should a grid's score rise above 0 at any point, an obstacle is plotted; likewise, if the score falls to 0 or below, any obstacle on the grid is cleared.
The grid score system allows more reliable plotting of obstacles: every reading is taken into account rather than only the latest one, and readings are weighted by their reliability (short-range vs long-range).
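The bookkeeping could be sketched as below; the class name, the bounds handling and the choice to overwrite (rather than add to) the footprint score are assumptions.

```kotlin
// Sketch of the grid score bookkeeping; thresholds follow the rules above.
class ScoreMap(private val rows: Int, private val cols: Int) {
    private val score = Array(rows) { IntArray(cols) }

    fun addScore(x: Int, y: Int, delta: Int) {
        score[y][x] += delta
    }

    // Grids touched by the robot's 3-by-3 footprint are guaranteed free.
    fun markRobotFootprint(centreX: Int, centreY: Int) {
        for (dy in -1..1) for (dx in -1..1) {
            val x = centreX + dx
            val y = centreY + dy
            if (x in 0 until cols && y in 0 until rows) score[y][x] = -1000
        }
    }

    // Score above 0 -> obstacle plotted; 0 or below -> no obstacle.
    fun hasObstacle(x: Int, y: Int): Boolean = score[y][x] > 0
}

// Score deltas for a single reading, per the rules above.
fun shortRangeDelta(reading: Int): Int = if (reading == 1) 3 else -3
fun longRangeDelta(reading: Int): Int = if (reading <= 5) 1 else -1
```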
Multi-grid movement
Short dashes (multi-grid movements) are performed when they do not affect the arena state at all, i.e. the area up ahead has already been explored. While this may be seen as a gamble, since it prevents certain grids from being double-checked for mis-plots, it significantly reduces the exploration time.
The process is as follows and repeats until a break occurs:
1. Move count = 1 (minimally 1 move).
2. Move the virtual robot forward temporarily (without showing it on the display).
3. If the left-centre grid has an obstacle, break (only for image recognition).
4. If any grid in the robot's 3-by-3 footprint is unexplored, break.
5. If the left is unobstructed, break.
6. If the front is obstructed, break.
7. If any grid on the left is unexplored, break.
8. If any grid on the centre-right is unexplored up to 5 grids away, break.
9. Move count += 1, repeat from step 2.
The move count returned at the end of the process determines the number of moves to make in one go. A move count of 1 means no multi-grid movement is performed and the robot moves as per normal.
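The loop could be sketched as follows; each lambda stands in for a check against the arena state at the temporarily advanced position, and the safety cap is an assumption rather than part of the original logic.

```kotlin
// Sketch of the move-count loop; all lambdas are placeholders.
fun multiGridMoveCount(
    maxSteps: Int,                                       // safety cap (assumption)
    leftCentreHasObstacle: (steps: Int) -> Boolean,      // image recognition only
    footprintUnexplored: (steps: Int) -> Boolean,
    leftUnobstructed: (steps: Int) -> Boolean,
    frontObstructed: (steps: Int) -> Boolean,
    leftUnexplored: (steps: Int) -> Boolean,
    centreRightUnexploredWithin5: (steps: Int) -> Boolean
): Int {
    var moveCount = 1                                    // minimally 1 move
    var steps = 1                                        // virtual robot moved forward temporarily
    while (steps < maxSteps) {
        if (leftCentreHasObstacle(steps)) break
        if (footprintUnexplored(steps)) break
        if (leftUnobstructed(steps)) break
        if (frontObstructed(steps)) break
        if (leftUnexplored(steps)) break
        if (centreRightUnexploredWithin5(steps)) break
        moveCount++
        steps++
    }
    return moveCount
}
```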
Example
In the above example, 8 grids passed all the checks listed above, bringing the total number of moves to 9. The robot will move continuously until it stops at the position illustrated in the ‘AFTER’ figure.
Synchronising movement with other subsystems
Arduino
See Arduino#Synchronisation with the Algorithms subsystem.
Synchronisation acts as a fail-safe in the presence of mis-plots. It minimises the chance of a de-sync between the two subsystems in those scenarios.
Android
The arena state, in the form of MDF strings, and the robot position are sent to the Android subsystem after every move to synchronise the arena displays of the two subsystems. The state of each grid is stored in a 2D array, which is parsed according to the MDF descriptor format specification to generate the MDF strings.
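As a rough illustration, the conversion could look like the sketch below, assuming the common two-part descriptor (Part 1: explored bits framed by '11' padding; Part 2: obstacle bits of explored cells padded to a byte boundary); the actual cell ordering and padding should be checked against the MDF specification.

```kotlin
// Hedged sketch of MDF string generation; verify against the MDF spec.
fun toHex(bits: String): String =
    bits.chunked(4).joinToString("") { it.toInt(2).toString(16).uppercase() }

fun mdfStrings(explored: Array<BooleanArray>, obstacle: Array<BooleanArray>): Pair<String, String> {
    val part1 = buildString {
        append("11")
        for (row in explored) for (cell in row) append(if (cell) '1' else '0')
        append("11")
    }
    val part2Bits = buildString {
        for (y in explored.indices) for (x in explored[y].indices) {
            if (explored[y][x]) append(if (obstacle[y][x]) '1' else '0')
        }
    }
    val part2 = part2Bits.padEnd((part2Bits.length + 7) / 8 * 8, '0')
    return toHex(part1) to toHex(part2)
}
```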
Image recognition
An image is taken whenever an obstacle is detected to the immediate left (any of the three grids) of the robot.
Reverse hugging
Reverse hugging is a mechanism used for image recognition. It kicks in when an obstacle is detected on the immediate right of the robot while the robot is facing north or south (again, with (1, 1) at the bottom left of the arena); this position is referred to as the 'Reverse Start Point'. The robot turns 180 degrees and left-wall hugs until it is back at the 'Reverse Start Point', after which the exploration continues as per normal.
This allows the camera, which faces the left of the robot, to capture any images planted on such obstacles, since obstacles in the middle of the arena would not be hugged normally. Additional obstacles detected to the immediate right during a reverse hug are ignored to prevent an endless recursion of reverse hugs.
The mechanism has a significant impact on the exploration time, as grids may be explored multiple times, but it almost guarantees that every image in the arena is captured.
‘Scanned’ flag
A ‘scanned’ flag is applied to an obstacle whenever it appears on any of the three grids to the immediate left of the robot. Reverse hugging will not kick in if the obstacle to the right is ‘scanned’ as the obstacle has already been hugged.
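Put together with the reverse-hugging conditions above, the trigger could be sketched as below; the `Heading` enum, the obstacle lookup and the recursion guard are illustrative names rather than the actual code.

```kotlin
// Sketch of the reverse-hug trigger.
enum class Heading { NORTH, SOUTH, EAST, WEST }

data class Obstacle(val x: Int, val y: Int, var scanned: Boolean = false)

fun shouldReverseHug(heading: Heading, obstacleOnRight: Obstacle?, inReverseHug: Boolean): Boolean {
    if (inReverseHug) return false                               // no recursive reverse hugs
    if (heading != Heading.NORTH && heading != Heading.SOUTH) return false
    val obstacle = obstacleOnRight ?: return false               // nothing on the immediate right
    return !obstacle.scanned                                     // already hugged -> skip
}
```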
Image coordinates & string
Every time an image is taken, the coordinates of the grid at the robot's left-centre and the robot's heading are recorded and appended to a list (list A). At the end of exploration, a list of image IDs (list B) is received from the Image Recognition Server along with a list of image bound positions (list C, see Raspberry Pi#imgReg.py). With a one-to-one mapping between the three lists (A, B, C), indices corresponding to an image ID of -1 (meaning no image was captured) are removed from all three lists.
Each coordinate remaining in list A is then offset by the corresponding image bound position in list C to obtain an image coordinate:
- If the bound position is CENTRE, no offset is applied.
- If the bound position is LEFT, the coordinate is offset towards the left-rear of the robot, as the image was at the rear end of the robot when captured.
- If the bound position is RIGHT, the coordinate is offset towards the left-front of the robot, as the image was at the front end of the robot when captured.
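A sketch of the offset step, reusing the same heading convention as in the sensor coordinate example; the enum and function names are placeholders.

```kotlin
// Sketch of the image-coordinate offset.
enum class BoundPosition { LEFT, CENTRE, RIGHT }

data class Coord(val x: Int, val y: Int)

fun imageCoordinate(leftCentre: Coord, heading: Int, bound: BoundPosition): Coord {
    // Forward unit vector for the recorded heading (0 = up, 90 = right).
    val (fx, fy) = when (heading) {
        0 -> 0 to 1
        90 -> 1 to 0
        180 -> 0 to -1
        else -> -1 to 0
    }
    return when (bound) {
        BoundPosition.CENTRE -> leftCentre                                    // no offset
        BoundPosition.LEFT -> Coord(leftCentre.x - fx, leftCentre.y - fy)     // towards the left-rear
        BoundPosition.RIGHT -> Coord(leftCentre.x + fx, leftCentre.y + fy)    // towards the left-front
    }
}
```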
Example
Once the process is complete, a final image string containing the image IDs and coordinates is generated and sent to the Android subsystem for display.
Fastest path
The fastest path is calculated using multiple A-star searches. The calculation process is as follows:
1. An A-star search is performed to find a path from the start point to one possible entrance of the waypoint.
2. The robot heading at the end of that path is obtained.
3. An A-star search is performed from the waypoint to one possible entrance of the goal point, using the heading obtained as the starting heading.
4. Step 3 is repeated for every other possible entrance to the goal point.
5. The path from step 1 is merged with each path obtained in steps 3-4 to form complete paths.
6. Steps 1-5 are repeated for every other possible entrance to the waypoint.
7. Each complete path is iterated through, and a penalty is applied to every turn in the path.
8. The path with the lowest cost is selected.
The process is more complex than performing a few direct searches, but it is guaranteed to return the fastest path regardless of how awkward the route is (e.g. the waypoint is in a very odd spot).
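The selection around the individual A-star searches could be sketched as below; the search itself is passed in as a placeholder lambda, grid positions are simplified to plain integers, and the turn penalty value is an assumption.

```kotlin
// High-level sketch of the multi-search selection described above.
data class PathResult(val moves: List<Char>, val endHeading: Int)

fun fastestPath(
    start: Int,
    startHeading: Int,
    waypointEntrances: List<Int>,
    goalEntrances: List<Int>,
    turnPenalty: Int,                                            // assumed cost per turn
    aStar: (from: Int, to: Int, heading: Int) -> PathResult?     // placeholder search
): List<Char>? {
    var best: List<Char>? = null
    var bestCost = Int.MAX_VALUE
    for (wp in waypointEntrances) {
        val toWaypoint = aStar(start, wp, startHeading) ?: continue
        for (goal in goalEntrances) {
            val toGoal = aStar(wp, goal, toWaypoint.endHeading) ?: continue
            val full = toWaypoint.moves + toGoal.moves
            // Cost = number of moves plus a penalty for every turn ('L' or 'R').
            val cost = full.size + turnPenalty * full.count { it == 'L' || it == 'R' }
            if (cost < bestCost) {
                bestCost = cost
                best = full
            }
        }
    }
    return best
}
```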