
Tutorial 3: Getting Competition Ready

Fatemeh Pahlevan Aghababa edited this page Sep 1, 2020 · 34 revisions

Content Summary

In this tutorial we cover:

  • Detecting heated victims
  • Improvements to the existing sample code to extend its functionality
  • Ideas for extending your program further

Seminar Video

The seminar video can be found below (click on the picture).

Changes from the seminar video for this tutorial

Some of the content and visuals may vary from the latest release, since the June seminars were based on Release 3.

Some of the major changes you may encounter are as follows:

  • Different robot size
  • Change in recognising the start tile to receive an exit bonus

Fields

Tutorial 3 Field

The field/environment which accompanies the seminar video (and is shown to the right) can be found here. This world will NOT work with the latest release (only with release 3), but it is here for reference if anyone would like to have a go at it.

How to Change Fields

We also have a short video showing how to change fields here.

Progressing your code further

After working through this tutorial, there are many ways of improving your program. We suggest some different approaches to investigate and think about.

'Manual' Optimisation

  • Optimise speed/turning/motion to explore as much of the maze as possible
  • Track along inner floating walls (victims on floating walls score higher!)
  • Speed up around empty areas to maximise the area covered
  • Navigate towards heated victims in the same way as non-heated ones
  • Use the GPS values to record the location of the exit and help navigate home
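The last bullet can be sketched in a few lines. This is a minimal illustration, not part of the sample code: the `update` helper is ours, and it assumes the Webots GPS returns an (x, y, z) triple with x and z in the horizontal plane.

```python
import math

# Minimal sketch: treat the first GPS reading as "home" (the exit tile),
# then report how far away the robot currently is. Feed it gps.getValues()
# each timestep; the name and axis choice are illustrative assumptions.
home = None

def update(gps_values):
    global home
    if home is None:
        home = gps_values  # first reading = start/exit tile
    # Euclidean distance in the horizontal plane (x and z)
    return math.hypot(gps_values[0] - home[0], gps_values[2] - home[2])
```

Steering so that this distance shrinks is a crude but effective way to head back towards the exit at the end of a run.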

Advanced Sensing

  • Computer Vision for victim type (detecting letters). To do this, look at using OpenCV with Python. There are many different approaches, such as this approach. We have also provided some basic step-by-step examples for you here.
  • Use the gyroscope for more robust navigation. The gyroscope is another sensor you can expose; it can be used to identify the direction of heading. Investigate how this could aid navigation.
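One simple way to use the gyroscope is to integrate its yaw rate over time to estimate heading. The sketch below is an illustration only: which axis of `gyro.getValues()` carries the yaw rate depends on how the sensor is mounted, so verify that on your robot.

```python
import math

# Illustrative sketch: accumulate the gyroscope's yaw rate (in rad/s)
# each timestep to estimate heading relative to the start orientation.
heading = 0.0  # radians

def update_heading(yaw_rate, time_step_ms):
    """Integrate angular velocity over one timestep of time_step_ms milliseconds."""
    global heading
    heading += yaw_rate * (time_step_ms / 1000.0)
    heading %= 2 * math.pi  # wrap into [0, 2*pi)
    return heading
```

A heading estimate like this lets you make 90-degree turns by angle rather than by timing, which is much more robust on swamps where wheel speeds vary.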

Mapping & Search Algorithms

  • Use the GPS to record where you have been, and encourage searching new locations
  • Use the GPS to generate a map – a 2D array – which contains the locations of the swamps/traps/walls you detect

Once you can generate maps, you can look at search algorithms, including:

  • Breadth/depth-first search algorithms
  • A* search algorithm
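To show the idea, here is a sketch of breadth-first search over a 2D occupancy grid. The grid encoding (0 for free floor, 1 for a wall/hole/trap) and the (row, col) cell coordinates are our assumptions; adapt them to however you build your map from GPS readings.

```python
from collections import deque

def bfs(grid, start, goal):
    """Shortest path on a grid of 0 (free) / 1 (blocked) cells.

    Returns a list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also acts as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk back through came_from to rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

A* works the same way but pops cells in order of path cost plus a distance-to-goal heuristic, which usually explores far fewer cells.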

Remember, for help/advice, or if you want to share ideas, head to Discord.

Exemplar Code

The full uninterrupted program used in the seminar video can be found here.

Below we step through the full example code developed step by step, explaining what is going on. This is an up-to-date version of the code discussed in the seminar video. It also builds on the code developed in tutorial 2. The example code can be found here.

Try to build this up by yourself first, referring back to this if you need to.


Stepping through the code

In this tutorial, we step through and explain the changes from the code in tutorial 2.

The first change is the addition of two heat/temperature sensors on either side of the robot. This is done in the same way as other sensors.

# Declare heat/temperature sensor
left_heat_sensor = robot.getLightSensor("left_heat_sensor")
right_heat_sensor = robot.getLightSensor("right_heat_sensor")

left_heat_sensor.enable(timeStep)
right_heat_sensor.enable(timeStep)

Similar to the function stopAtVisualVictim, a new function stopAtHeatedVictim is introduced. This function is quite straightforward, with an if statement checking whether either temperature sensor is reading above a threshold of 37. Note that the attenuation is quite high, so you must be quite near the victim to detect this temperature increase.

# Get visible victims using either the right or left temperature sensor
def stopAtHeatedVictim():
    global messageSent, victimDetectedGlobal
    #print(left_heat_sensor.getValue(),right_heat_sensor.getValue())
    
    if left_heat_sensor.getValue() > 37 or right_heat_sensor.getValue() > 37:
        stop()
        sendVictimMessage('T')
        
        print("Found heated victim!!")
        victimDetectedGlobal = True
    else:
        messageSent = False

In the following function, we navigate towards a visual victim we can see - for example, one that is off to one side of the screen. To do this, we find where the victim is in the camera frame, work out whether it is towards the right or the left, and then turn the appropriate way to move towards it.

# Steer the robot towards the victim
def turnToVictim(victim):
    position_on_image = victim[1]

    width = camera.getWidth()
    center = width / 2

    victim_x_position = position_on_image[0]
    dx = center - victim_x_position

    if dx < 0:
        turn_right_to_victim()
    else:
        turn_left_to_victim()

In some cases, we can see multiple victims, so we need to choose which to go to by identifying the closest and heading there first. We do this with a simple linear search over the list.

# Return the victim that is closest to you
def getClosestVictim(victims):
    shortestDistance = 999
    closestVictim = []

    for victim in victims:
        dist = victim[0]
        if dist < shortestDistance:
            shortestDistance = dist
            closestVictim = victim

    return closestVictim

Both of the above functions, getClosestVictim and turnToVictim, are now used in the stopAtVisualVictim function introduced in tutorial 2. At the moment, the distance to each victim is only estimated using the distance sensor pointing straight ahead, so finding the "closest victim" is not strictly necessary. However, the concept becomes important if you choose to implement smarter algorithms, for example estimating the distance to a victim by the size it occupies on the screen.

# Stop at a victim once it is detected
def stopAtVisualVictim():
    global messageSent, victimDetectedGlobal
    #get all the victims the camera can see
    victims = getVisibleVictims()

    foundVictim = False

    if len(victims) != 0:
        closest_victim = getClosestVictim(victims)
        turnToVictim(closest_victim)

    #if we are near a victim, stop and send a message to the supervisor
    for victim in victims:
        if nearObject(victim[0]) and not foundVictim and not victimDetectedGlobal:
            stop()
            sendVictimMessage('H') # <- Put detected victim type here
            print("Found visual victim!!")
            foundVictim = True

            victimDetectedGlobal = True

    if not foundVictim:
        messageSent = False

A second method is introduced to identify hole traps and swamps. Previously, a single RGB value was used to identify holes and swamps. In more complex simulations, or on real robots, you would opt to use a range as a condition rather than an exact equality. This function takes the raw RGB reading of the colour sensor and converts it into HSV space, which makes it easier to set a range as your sensor measurement threshold (N.B.: both numpy and opencv are required to run this function).

# Avoid tiles using the HSV decomposition of the colour camera instead of a single value.
# Requires numpy and opencv (pip install numpy opencv-python)
import numpy as np
import cv2

def avoidTilesHSV():
    global duration, startTime
    colour = colour_camera.getImage()

    # The Webots camera returns a 4-channel (BGRA) byte buffer
    img = np.array(np.frombuffer(colour, np.uint8).reshape((colour_camera.getHeight(), colour_camera.getWidth(), 4)))
    img[:,:,2] = np.zeros([img.shape[0], img.shape[1]])
    # Drop the alpha channel before converting - cvtColor expects 3 channels here
    hsv = cv2.cvtColor(img[:,:,:3], cv2.COLOR_RGB2HSV)[0][0]

    # Change the range at which the robot detects the swamps and holes
    #       SWAMP                           HOLE
    if (hsv[0] > 40 and hsv[0] < 45) or (hsv[2] < 30):
        move_backwards()
        startTime = robot.getTime()
        duration = 2
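When tuning these ranges, it helps to see which HSV values a given floor colour maps to. The helper below is an illustration of ours (not part of the sample code) that mimics OpenCV's 8-bit HSV scale (H in 0-179, S and V in 0-255) using only the standard library; integer truncation may differ from cv2 by one unit of hue.

```python
import colorsys

def rgb_to_opencv_hsv(r, g, b):
    """Convert 8-bit RGB to OpenCV-style HSV (H: 0-179, S/V: 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(h * 179), int(s * 255), int(v * 255)
```

Sample the swamp and hole colours from a screenshot, run them through a conversion like this, and pick thresholds with some margin on either side.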

Additional movement functions are created to steer the robot to visual victims.

# Setting the speed to steer right towards the victim
def turn_right_to_victim():
    #set left wheel speed
    speeds[0] = 1 * max_velocity
    #set right wheel speed
    speeds[1] = 0.8 * max_velocity

# Setting the speed to steer left towards the victim
def turn_left_to_victim():
    #set left wheel speed
    speeds[0] = 0.8 * max_velocity
    #set right wheel speed
    speeds[1] = 1 * max_velocity

In the main while loop there is only one addition, which is the line stopAtHeatedVictim() to identify heated victims.

if usecamera:
    stopAtVisualVictim()
stopAtHeatedVictim() 

Exiting the maze

Although this was not included in the sample code, you can gain an exit bonus if you "exit" the maze by stopping and sending a message to the game controller: sendMessage(0,0,'E'). The method of finding the start tile must be developed by each team.
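One possible approach (a sketch of ours, not the official method) is to remember the GPS position of the start tile at kickoff and check when the robot comes back within a small radius of it. The 0.06 m radius, the helper name, and the (x, y, z) axis layout are assumptions; tune the radius to the tile size in the current release.

```python
import math

HOME_RADIUS = 0.06  # metres - assumed, adjust for your world

def on_start_tile(home, current, radius=HOME_RADIUS):
    """True if the current GPS reading is within `radius` of the recorded home position."""
    dx = current[0] - home[0]
    dz = current[2] - home[2]
    return math.hypot(dx, dz) < radius
```

In the main loop, once you have finished searching, a check like on_start_tile(home, gps.getValues()) could trigger the stop() call followed by sendMessage(0, 0, 'E').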
